
Apache Spark Streaming Tutorial: A Step-by-Step Guide to Real-Time Data Processing

By Utkarsh Singh

Updated on Feb 11, 2025 | 17 min read

Apache Spark Streaming processes real-time data from sources like Kafka, Flume, and cloud storage using micro-batches for efficient, fault-tolerant processing.

Unlike traditional batch systems, which process data at long, fixed intervals, Spark Streaming handles data incrementally in small batches, delivering lower latency and better scalability. Its tight integration with the broader Spark ecosystem makes it a strong choice for large-scale, real-time processing compared with frameworks like Flink or Kafka Streams.

This Apache Spark Streaming tutorial covers architecture, key concepts, and applications—an essential skill for data engineers and big data professionals.

Apache Spark Streaming Tutorial: Step-by-Step Guide

Apache Spark Streaming is a powerful real-time data processing engine built on Apache Spark’s distributed computing framework. It processes continuous data streams using micro-batching, breaking incoming data into small batches for near real-time computation. 

It works by ingesting real-time data streams from sources like Kafka, Flume, or IoT sensors, processing them in micro-batches using Spark’s Resilient Distributed Dataset (RDD) model, and applying transformations (map, filter, reduce) to extract insights.

The processed data is then stored in HDFS, NoSQL databases, or indexed in ElasticSearch for further analysis, enabling low-latency, fault-tolerant, and scalable real-time analytics.

Now, let’s dive into the Apache Spark Streaming tutorial and learn what Spark Streaming is, its key components, and how to set up and run a streaming application step by step:

Step 1: Prerequisites

Before running a Spark Streaming application, ensure you have the following:

  • Apache Spark (Latest Version, 3.x+) installed on your system.
  • Java 8+ or Python 3.x for running Spark applications.
  • Kafka, Flume, or Socket Streams as data sources.
  • Hadoop or a cloud-based data lake (AWS S3, Azure Blob Storage, or Google Cloud Storage) for storing intermediate results.

When using cloud-based data lakes, you benefit from scalable storage and seamless integration with cloud-native services, reducing infrastructure overhead.

In contrast, local Hadoop-based setups may require more manual configuration and scaling to handle large data volumes efficiently.
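As a quick sanity check before wiring up any sources, you can confirm the installation from Python. This minimal sketch (the app name is arbitrary) simply starts a local session, prints the Spark version, and stops:

from pyspark.sql import SparkSession

# Start a throwaway local session and verify the installed Spark version
spark = SparkSession.builder.master("local[*]").appName("EnvCheck").getOrCreate()
print(spark.version)  # expect 3.x or newer
spark.stop()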

For a deeper dive into real-time data processing and stream analytics, explore upGrad’s software programming courses. The expert-led curriculum reinforces knowledge of key programming languages, frameworks, and development tools.

Also Read: Apache Spark Architecture: Everything You Need to Know

Step 2: Create a Spark Streaming Context

The Spark Streaming Context is the entry point for any Spark Streaming application. It defines the streaming job and manages data flow.

Example (Python – PySpark):

from pyspark.sql import SparkSession
from pyspark.streaming import StreamingContext

# Initialize Spark Session
spark = SparkSession.builder.appName("SparkStreamingApp").getOrCreate()

# Define Streaming Context with 5-second batch intervals
ssc = StreamingContext(spark.sparkContext, 5)

# Enable checkpointing for fault tolerance
ssc.checkpoint("hdfs://namenode:8020/spark_checkpoint")

# Print the StreamingContext object to confirm it was created
print(ssc)

Explanation:

1. Import Libraries:

  • SparkSession: Manages the Spark application.
  • StreamingContext: Enables real-time data processing.

2. Initialize Spark Session:

  • Creates a SparkSession named "SparkStreamingApp".
  • getOrCreate() ensures an existing session is reused if available.

3. Define Streaming Context:

  • Uses sparkContext to set up streaming context.
  • The batch interval is set to 5 seconds, meaning Spark Streaming will collect and process data in batches every 5 seconds.

4. Enable Checkpointing for Fault Tolerance:

  • The ssc.checkpoint() method is crucial for ensuring fault tolerance in your streaming application. 
  • It saves the state of the computation to a reliable storage (e.g., HDFS). In case of a failure, this checkpoint helps recover the application to its last successful state.

5. Verify Streaming Context:

  • Printing ssc confirms that the streaming context object has been created.
  • The context becomes active only once ssc.start() is called later in the application.

Expected Output: This code initializes Spark Streaming but doesn’t produce direct output. The console logs will confirm the setup:

INFO StreamingContext: StreamingContext started with batch interval 5s
INFO SparkSession: Initialized SparkSession: SparkStreamingApp

If you print ssc, you will see a reference to the StreamingContext object, confirming that it has been created (it becomes active only after it is started).

Instead of processing each event individually, Spark Streaming groups incoming data into small batches at fixed intervals. In this case, it processes logs every 5 seconds:

  • T = 0-5s: Collect logs and process in batch 1.
  • T = 5-10s: Collect new logs and process in batch 2.
  • T = 10-15s: Repeat, ensuring continuous real-time analysis.

Micro-batching trades a small amount of latency for throughput, which can make it more efficient than event-at-a-time systems like Apache Flink for high-throughput applications.
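To see micro-batching in action without Kafka, the following minimal DStream sketch reads text from a local socket (for example, one opened with nc -lk 9999) and prints word counts once per 5-second batch:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "MicroBatchDemo")  # two threads: one receiver, one processor
ssc = StreamingContext(sc, 5)  # 5-second micro-batches

# Every 5 seconds, the text received in that interval becomes one batch of word counts
lines = ssc.socketTextStream("localhost", 9999)
counts = lines.flatMap(lambda line: line.split()) \
              .map(lambda word: (word, 1)) \
              .reduceByKey(lambda a, b: a + b)
counts.pprint()  # prints one result block per batch

ssc.start()
ssc.awaitTermination()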

Also Read: Apache Kafka Architecture: Comprehensive Guide For Beginners

Step 3: Define Data Input Sources

Spark Streaming supports multiple real-time data sources, including:

  • Kafka: For high-throughput real-time event processing (e.g., Twitter-scale analytics pipelines combine Kafka and Spark to process massive tweet volumes for trend analysis and sentiment detection).
  • Flume: Used for ingesting large volumes of event-driven log data (e.g., application logs in cloud environments).
  • Socket Streams: For real-time text data (e.g., live server logs or IoT telemetry).
  • Cloud Storage & Databases: Streaming from AWS S3, Google Cloud, or NoSQL databases.

Example (Reading from Kafka in Python – Using Structured Streaming API):

from pyspark.sql import SparkSession

# Initialize Spark Session
spark = SparkSession.builder.appName("SparkStreamingApp").getOrCreate()

# Read data from Kafka using Structured Streaming API
kafkaStream = spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "localhost:9092") \
    .option("subscribe", "events") \
    .load() 

Explanation:

1. Structured Streaming:

  • Instead of using the deprecated KafkaUtils.createDirectStream(), you should now use Structured Streaming to process data from Kafka in a more reliable and scalable manner.
  • spark.readStream is the new approach to streaming data, providing a higher-level abstraction for handling real-time data streams.

2. Create the Kafka Source Stream:

  • spark.readStream.format("kafka") specifies that you're reading from Kafka as a stream.
  • .option("kafka.bootstrap.servers", "localhost:9092") sets the Kafka broker address (e.g., localhost:9092) where Kafka is running.
  • .option("subscribe", "events") defines the Kafka topic ("events") that Spark will consume data from.
  • .load() triggers the loading of the stream from Kafka.

Note: This approach eliminates the need for KafkaUtils.createDirectStream(), simplifying the code and ensuring better integration with Spark's built-in structured streaming capabilities.

Expected Output:

This code does not produce immediate output but starts consuming messages from Kafka. If successful, the console will show logs indicating that the Kafka source has been initialized and the stream has started.

Once Kafka starts sending messages, Spark will begin processing real-time data from the "events" topic. 
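For quick debugging before attaching a real sink, you can echo the stream to the console sink. A minimal sketch, assuming the kafkaStream DataFrame defined above:

# Cast the binary key/value fields to strings and print each micro-batch to the console
debug_query = kafkaStream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)") \
    .writeStream \
    .format("console") \
    .outputMode("append") \
    .start()

debug_query.awaitTermination()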

Also Read: Kafka vs RabbitMQ: What Are the Biggest Differences and Which Should You Learn

Step 4: Apply Transformations

Transformations are operations that modify the incoming data. Spark Streaming offers a variety of transformations, including map(), filter(), reduceByKey(), and window(). 

These operations allow you to transform, filter, and aggregate data as it streams in real time.

  • map(): Applies a function to each data element.
  • filter(): Filters data based on a condition.
  • window(): Aggregates data over a fixed time window (e.g., compute moving averages).
  • updateStateByKey(): A stateful transformation that allows maintaining state across batches, useful for tasks like running totals or counting unique items.
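As an illustration of window(), here is a minimal sketch that counts events per one-minute window, using the kafkaStream DataFrame from Step 3 and the timestamp column the Kafka source attaches to every record:

from pyspark.sql.functions import window, col

# Count records per 1-minute event window; the 10-minute watermark bounds
# how long Spark keeps state while waiting for late data
windowed_counts = kafkaStream \
    .withWatermark("timestamp", "10 minutes") \
    .groupBy(window(col("timestamp"), "1 minute")) \
    .count()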

Example (Filtering error logs in real-time):

from pyspark.sql.functions import col

# Convert Kafka messages to STRING and filter for "ERROR" logs
error_logs = kafkaStream.selectExpr("CAST(value AS STRING) as value") \
    .filter(col("value").contains("ERROR"))

Explanation:

1. Convert Kafka Messages: Kafka messages are typically in binary format (e.g., bytes). To filter based on the content of the messages, you first need to convert the value field from binary to string using CAST(value AS STRING).

2. Filter Streaming Data:

  • This line processes the real-time Kafka stream and filters messages containing the word "ERROR".
  • col("value").contains("ERROR") checks each message in the Kafka stream for the keyword "ERROR".
  • Only logs that contain the word "ERROR" will be retained for further processing.

3. Stateful Transformations (optional note): If you want to keep track of state across different batches (e.g., running totals or counts), you can use updateStateByKey(). This stateful transformation is useful for tracking ongoing calculations in streaming data; a short sketch follows this list.

4. Streaming Data Processing:

  • This is a real-time filtering operation, meaning only error logs from Kafka’s "events" topic will be processed.
  • The resulting error_logs stream can be saved, displayed, or sent to monitoring tools for alerting.

Expected Output:

The stream will process Kafka messages in real time, and only logs that contain the word "ERROR" will be retained. If successful, the console will show logs like:

INFO StreamingQuery: Processing 100 records...

Once the filtering is applied, the error_logs stream will be ready for further transformations or actions (like writing to a file or a database).
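For the stateful case mentioned in point 3 above, a minimal DStream sketch with updateStateByKey() might keep a running ERROR total across batches. It assumes a hypothetical lines DStream of raw log text (e.g., from a socket source) and requires the checkpointing enabled in Step 2:

# Running total of ERROR lines across all batches
def update_count(new_values, running_count):
    return sum(new_values) + (running_count or 0)

error_totals = lines.filter(lambda log: "ERROR" in log) \
                    .map(lambda log: ("ERROR", 1)) \
                    .updateStateByKey(update_count)
error_totals.pprint()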

Also Read: Apache Kafka Architecture: Comprehensive Guide For Beginners

Step 5: Define Output Operations

Once data is processed, it must be stored or sent to a sink. Common output destinations (sinks) include:

  • HDFS or Cloud Storage (AWS S3, Google Cloud Storage) for long-term storage.
  • NoSQL databases (Cassandra, MongoDB) for real-time dashboards.
  • Elasticsearch for indexing and full-text search (e.g., log analysis).

Example (Saving results to HDFS with append mode):

error_logs.writeStream \
    .format("text") \
    .option("path", "hdfs://namenode:8020/logs/errors") \
    .option("checkpointLocation", "hdfs://namenode:8020/spark_checkpoint/errors") \
    .outputMode("append") \
    .start()

Explanation:

1. Saves Filtered Logs to HDFS:

  • This line writes the filtered error logs to a Hadoop Distributed File System (HDFS) location.
  • The directory path "hdfs://namenode:8020/logs/errors" specifies where the files will be stored.

2. Handling Incremental Writes:

  • The file sink writes incrementally: outputMode("append") ensures that logs are added as new files rather than overwriting existing output.
  • This guarantees that each batch of error logs is appended without deleting or replacing previous data.

3. How Data is Stored:

  • Spark writes multiple text files within the directory, one set per processed micro-batch.
  • Each batch generates a new file containing only error logs from that interval.

4. Checkpointing: checkpointLocation is specified to store the state of the stream in case of failures, which is important for fault tolerance and recovery.

5. File Naming Convention:

The output files are written as part files within the output directory, named like:

/logs/errors/part-00000
/logs/errors/part-00001

Each file contains error logs captured within that batch.

Example (Saving Results to Cassandra or MongoDB): In addition to saving data to HDFS, Spark can also write to NoSQL databases like Cassandra or MongoDB, provided the corresponding Spark connector packages are on the classpath.

  • Saving to Cassandra:
error_logs.writeStream \
    .format("org.apache.spark.sql.cassandra") \
    .option("keyspace", "logs") \
    .option("table", "error_logs") \
    .outputMode("append") \
    .start()
  • Saving to MongoDB:
error_logs.writeStream \
    .format("mongodb") \
    .option("uri", "mongodb://localhost:27017/logs.error_logs") \
    .outputMode("append") \
    .start()

Expected Output in HDFS: 

After running this, you can check the saved error logs in HDFS using:

hdfs dfs -ls /logs/errors

Example output:

Found 2 items
-rw-r--r--   1 spark spark   2048 2025-02-05 12:31 /logs/errors/part-00000
-rw-r--r--   1 spark spark   3072 2025-02-05 12:36 /logs/errors/part-00001

To view the stored logs:

hdfs dfs -cat /logs/errors/part-00000

Example content:

ERROR 2025-02-05 12:30:45 [Server] Connection timeout occurred.
ERROR 2025-02-05 12:30:47 [Database] Query execution failed.

This allows long-term storage of error logs for analysis, debugging, and compliance tracking. 

Also Read: Top 20 HDFS Commands You Should Know About

Step 6: Start and Monitor the Application

After defining the context, sources, transformations, and outputs, start the Spark Streaming job:

ssc.start()  # Start execution  
ssc.awaitTermination()  # Keep application running

Explanation:

1. Start Streaming Execution:

  • ssc.start() begins the Spark Streaming job, activating all defined streaming computations.
  • Once started, Spark continuously ingests, processes, and outputs data based on the transformations applied (e.g., filtering logs, writing to HDFS).

2. Keep the Application Running:

  • ssc.awaitTermination() keeps the streaming job alive until it is manually stopped.
  • Without this, the program would execute once and exit immediately.

3. Real-Time Processing:

  • Spark continuously processes new data from Kafka, Flume, or other sources based on the batch interval set in StreamingContext.
  • The application will keep running indefinitely unless manually stopped or terminated due to an error.

4. Graceful Shutdown

  • To ensure a clean shutdown, you should use ssc.stop(stopSparkContext=True, stopGraceFully=True), which will stop the Spark Streaming context gracefully and preserve the processed data. This method helps prevent any data loss during termination.
  • Implementation in a try-except block:
try:
    ssc.start()  # Start the streaming job
    ssc.awaitTermination()  # Keep the application running
except KeyboardInterrupt:
    print("Streaming job stopped manually.")
finally:
    ssc.stop(stopSparkContext=True, stopGraceFully=True)  # Gracefully stop the job

Monitoring Spark Streaming Jobs:

  • Spark UI: You can monitor the performance and progress of your Spark Streaming job via the Spark UI. Navigate to the 'Streaming' tab to see detailed information about the batch processing, including how long each batch took, the number of records processed, and the status of the current batch.
  • Prometheus/Grafana: If you have Prometheus and Grafana set up, you can use them to track metrics like throughput, latency, and resource usage for Spark Streaming jobs.
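If you are running Structured Streaming queries (as in Steps 3-5), you can also poll the query handle programmatically. A minimal sketch, assuming query is the handle returned by writeStream.start():

import time

# Poll the streaming query every 10 seconds and print basic batch metrics
while query.isActive:
    progress = query.lastProgress  # None until the first batch completes
    if progress:
        print(progress["batchId"], progress["numInputRows"], progress.get("inputRowsPerSecond"))
    time.sleep(10)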

Expected Output (Console Logs):

Once executed, Spark will start processing real-time streaming data, and the console will show logs like:

INFO StreamingContext: StreamingContext started with batch interval 5s
INFO KafkaUtils: Connected to Kafka topic: events
INFO StreamingJob: Processing new batch of streaming data

If new data arrives in the Kafka topic, Spark processes it in 5-second micro-batches and writes results to HDFS or another specified sink.

To manually stop Spark Streaming:

ssc.stop(stopSparkContext=True, stopGraceFully=True)

Or in the terminal:

Ctrl + C  # Stop execution

This ensures the streaming application shuts down gracefully while preserving processed data. 

Monitoring tools like Spark UI, Prometheus, or Grafana can help track performance and debug issues.

Also Read: Apache Spark Dataframes: Features, RDD & Comparison

How Apache Spark Streaming Works: Architecture and Components

Spark Streaming follows a structured pipeline to process real-time data efficiently. Below are its key components:

Input Sources in Spark Streaming

These are the data ingestion points where Spark Streaming collects real-time data. Examples include:

  • Kafka (Event Processing): Used for handling high-throughput data streams like user interactions in web applications.
  • Flume (Log Streaming): Commonly used for ingesting system logs from distributed infrastructure.
  • Amazon Kinesis (Cloud Streaming): Streams real-time data for fraud detection and AI-driven applications.

Spark Streaming Engine

The Spark Streaming Engine converts raw streaming data into Discretized Streams (DStreams), which are then processed using Spark’s distributed computing model. Key features include:

  • Fault Tolerance: If a node fails, Spark recovers lost data from the last checkpoint.
  • Scalability: Automatically scales across a distributed cluster for large workloads.
  • Micro-Batching: Processes data in fixed intervals (e.g., every 5 seconds).
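Checkpoint-based recovery can be wired up with StreamingContext.getOrCreate(), which rebuilds the context from the checkpoint directory after a failure. A minimal sketch (the paths and app name are placeholders):

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

CHECKPOINT_DIR = "hdfs://namenode:8020/spark_checkpoint"

def create_context():
    # Called only when no checkpoint exists: build the full pipeline here
    sc = SparkContext(appName="RecoverableStream")
    ssc = StreamingContext(sc, 5)  # 5-second micro-batches
    ssc.checkpoint(CHECKPOINT_DIR)
    # ... define sources, transformations, and outputs ...
    return ssc

# Restore from the checkpoint if one exists; otherwise create a fresh context
ssc = StreamingContext.getOrCreate(CHECKPOINT_DIR, create_context)
ssc.start()
ssc.awaitTermination()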

Sinks in Spark Streaming

Sinks are output destinations where processed data is stored. Common sinks include:

  • HDFS, AWS S3, Azure Blob Storage – For persisting processed data.
  • Elasticsearch – For real-time log analysis and search-based applications.
  • PostgreSQL, MySQL, and NoSQL stores (Cassandra, MongoDB) – For storing structured real-time data.

To learn advanced SQL techniques and elevate your data analysis capabilities, enroll in upGrad’s free course on SQL Functions and Formulas.

Complete Apache Spark Streaming Code (Kafka Integration)

This Spark Streaming application reads real-time data from Kafka, processes it by filtering error logs, and writes the results to HDFS for further analysis.

Before running this code, ensure:

  • Apache Spark 3.x+ is installed.
  • Apache Kafka is set up with a topic named "logs".
  • Hadoop HDFS is running for storing processed logs.

Here’s the complete Spark Streaming Code (PySpark):

# Import necessary libraries
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Step 1: Initialize Spark Session
# (submit with the Kafka connector, e.g.:
#  spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<spark_version> app.py)
spark = SparkSession.builder \
    .appName("SparkStreamingKafka") \
    .config("spark.sql.streaming.checkpointLocation", "/tmp/checkpoints") \
    .getOrCreate()

# Step 2: Define Kafka Data Source (Topic: "logs") using Structured Streaming
kafka_stream = spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "localhost:9092") \
    .option("subscribe", "logs") \
    .load()

# Step 3: Extract log messages from the Kafka records (value arrives as bytes)
parsed_logs = kafka_stream.selectExpr("CAST(value AS STRING) as value")

# Step 4: Filter only error logs from the stream
error_logs = parsed_logs.filter(col("value").contains("ERROR"))

# Step 5: Save filtered error logs to HDFS in 5-second micro-batches
query = error_logs.writeStream \
    .format("text") \
    .option("path", "hdfs://namenode:8020/logs/errors") \
    .outputMode("append") \
    .trigger(processingTime="5 seconds") \
    .start()

# Step 6: Keep the streaming query running
query.awaitTermination()

Explanation:

1. Initialize Spark Session: Creates a SparkSession with a checkpoint directory for fault tolerance.

2. Connect to Kafka Topic: spark.readStream reads real-time log messages from Kafka’s "logs" topic.

3. Extract Log Messages: Kafka delivers each record’s value as bytes, so it is cast to a string before processing.

4. Filter Error Logs: Identifies logs containing the keyword "ERROR".

5. Write to HDFS: Saves processed logs to HDFS in append mode, triggering a micro-batch every 5 seconds, for further analysis or visualization.

6. Start the Streaming Job: start() begins the real-time pipeline, and awaitTermination() keeps it processing new logs.

Note: Earlier versions of this pipeline used the DStream-based KafkaUtils.createDirectStream(), which is deprecated and removed in Spark 3.x; Structured Streaming with spark.readStream() is the recommended replacement for more efficient and flexible stream processing.

Expected Output (HDFS Output File):

After running the script, you will find filtered error logs in HDFS:

hdfs dfs -ls /logs/errors

Sample log output in HDFS:

ERROR 2025-02-05 12:30:45 [Server] Connection timeout occurred.
ERROR 2025-02-05 12:30:47 [Database] Query execution failed.
ERROR 2025-02-05 12:30:52 [Auth] Unauthorized access detected.

This real-world Apache Spark Streaming tutorial showcases how big data processing pipelines handle real-time log analysis, making it a must-have skill for data engineers and AI professionals. 

Also Read: How to Become a Big Data Engineer: An Ultimate Guide

Now that you’ve learned how to set up and run Spark Streaming, it's important to understand why it's preferred over traditional batch processing. Let’s explore its key benefits and the challenges of real-world implementation.

Why Choose Apache Spark Streaming: Key Benefits and Challenges

As businesses increasingly rely on real-time analytics, the need for a scalable, fault-tolerant, and high-performance streaming framework has become critical. 

Traditional batch processing systems, such as Hadoop MapReduce, cannot process data in real time, making them unsuitable for latency-sensitive applications.

Apache Spark Streaming bridges this gap by providing a distributed, micro-batch processing engine that enables real-time data ingestion, transformation, and analysis at scale. 

Below are some key benefits of using Spark Streaming:

1. Real-Time Processing Capabilities

  • Processes continuous data streams in near real-time with low latency.
  • Ideal for financial transactions, social media monitoring, and IoT event processing.

2. Seamless Integration with the Spark Ecosystem

  • Works alongside Spark SQL, MLlib (Machine Learning), and GraphX for advanced analytics.
  • Supports structured streaming, making it easy to integrate with existing batch workflows.

3. Scalability and Fault Tolerance

  • Can scale horizontally across hundreds or thousands of nodes for high-throughput workloads.
  • Uses RDD lineage and checkpointing for automatic fault recovery.

4. Wide Range of Input and Output Sources

  • Reads from Kafka, Flume, Kinesis, HDFS, S3, and NoSQL databases.
  • Outputs to HDFS, Cassandra, Elasticsearch, JDBC databases, and real-time dashboards.

5. Open Source and Strong Community Support

  • Backed by a large open-source community with frequent updates.
  • Supported by major cloud providers (AWS, Azure, GCP) for enterprise-level streaming applications.

Despite its advantages, deploying Spark Streaming in production presents certain challenges. Below is a comparison of key challenges and their solutions:

  • Scalability issues: High-throughput streaming applications may face performance bottlenecks when handling large volumes of data. Solution: Scale horizontally by adding more Spark worker nodes, tune batch intervals for better throughput, and optimize executor memory allocation.
  • Fault tolerance and recovery: If a node crashes, streaming jobs may lose data or require manual intervention to restart. Solution: Enable checkpointing and write-ahead logs (WAL) to ensure data persistence, and use Kafka offsets to resume processing without data loss.
  • Performance optimization: Inefficient use of resources may lead to high processing latency. Solution: Optimize parallelism with an appropriate number of partitions, and use stateful streaming aggregations with windowing for efficient computation.
  • Handling data inconsistencies: Streaming data often contains duplicates, missing values, or schema variations. Solution: Implement idempotent writes, use deduplication techniques such as watermarking (see the sketch below), and validate schemas before ingestion.
  • Integration with data sources and sinks: Connecting Spark Streaming with multiple external systems can be complex. Solution: Use Structured Streaming APIs for better compatibility with JDBC, NoSQL stores, message queues, and cloud storage.
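For the deduplication point above, a minimal Structured Streaming sketch (assuming a hypothetical streaming DataFrame events with event_id and event_time columns):

# Drop duplicate events seen within a 10-minute watermark; including the
# event-time column in dropDuplicates lets Spark expire old state
deduped = events.withWatermark("event_time", "10 minutes") \
                .dropDuplicates(["event_id", "event_time"])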

By addressing these challenges proactively, organizations can build robust, real-time analytics pipelines that drive data-driven decision-making.

Also Read: Future of Data Analytics in India: Trends & Career Options 2025

While Spark Streaming offers scalability, real-time processing, and fault tolerance, how does it apply in real-world scenarios? Let’s look at how industries leverage Spark Streaming for real-time analytics.

Applications and Use Cases of Apache Spark Streaming

Apache Spark Streaming is widely used in real-world applications where low-latency, high-throughput data processing is critical. It enables businesses to analyze, detect patterns, and respond to events in real-time, making it a powerful tool for fraud detection, monitoring, and predictive analytics.

Below are some of the most impactful use cases of Apache Spark Streaming across industries:

1. Real-Time Log Monitoring

  • Continuously processes server logs, application logs, and network activity to detect anomalies.
  • Used in cybersecurity for threat detection and intrusion prevention.
  • Companies like Netflix and LinkedIn use Spark Streaming to monitor system health and prevent failures.

2. Fraud Detection

  • Analyzes banking transactions in real-time to detect suspicious activities.
  • Financial institutions use machine learning models with Spark Streaming to flag anomalous credit card transactions.
  • Helps reduce identity theft, money laundering, and online payment fraud.

Also Read: Fraud Detection in Machine Learning: What You Need To Know

3. Social Media Analytics

  • Processes real-time data from platforms like Twitter, Facebook, and Instagram; for example, Twitter’s Firehose API can feed Kafka to stream massive volumes of posts into Spark for real-time analytics and trend detection.
  • Identifies trending topics, customer sentiment, and brand perception.
  • Used in digital marketing to track campaign performance in real-time.

Also Read: How to Make Social Media Marketing Strategy in Just 8 Steps

4. IoT Data Processing

  • Ingests sensor data from IoT devices in smart cities, healthcare, and manufacturing.
  • Enables predictive maintenance in industries by detecting failures before they happen.
  • Used in self-driving cars to process sensor inputs and make instant decisions.

Also Read: Top 50 IoT Projects For all Levels in 2025 [With Source Code]

5. Stock Market Analysis

  • Processes high-frequency trading data to identify market trends.
  • Hedge funds and investment firms use Spark Streaming to predict stock movements and execute trades in milliseconds.
  • Helps in risk assessment and portfolio management by analyzing real-time market fluctuations.

Its ability to ingest, process, and analyze massive streams of data with low latency is shaping the future of AI-driven decision-making. 

Also Read: Top 10 Apache Spark Use Cases Across Industries and Their Impact in 2025

With Spark Streaming transforming industries like finance, IoT, and social media analytics, mastering it is a valuable skill. upGrad’s courses provide hands-on training to help you build industry-ready expertise in real-time data processing.

How Can upGrad Help You Learn Apache Spark Streaming?

upGrad, South Asia’s leading Higher EdTech platform, equips 10M+ learners with big data and real-time analytics skills. Their courses offer hands-on training in stream processing, Kafka integration, and distributed computing.

You'll master fault tolerance, scalability, and performance tuning, gaining industry-ready expertise to build real-time data pipelines.

You can also get personalized career counseling with upGrad to guide your career path, or visit your nearest upGrad center and start hands-on training today!


Frequently Asked Questions

1. How does Apache Spark Streaming handle late-arriving data in real-time applications?

2. What is the difference between Spark Streaming and Structured Streaming?

3. How does Spark Streaming ensure exactly-once processing in distributed environments?

4. Can Spark Streaming handle event-time processing instead of processing-time?

5. How does Apache Spark Streaming handle backpressure when data volume spikes?

6. What optimizations can improve Spark Streaming performance on large-scale data?

7. How does checkpointing help in Spark Streaming fault recovery?

8. Can Spark Streaming process unstructured data like images and videos?

9. What’s the impact of micro-batch size on Spark Streaming latency?

10. How does Spark Streaming compare to Apache Flink for real-time data processing?

11. Can Apache Spark Streaming integrate with cloud-native architectures?
