Apache Spark Streaming Tutorial: A Step-by-Step Guide to Real-Time Data Processing
Updated on Feb 11, 2025 | 17 min read | 8.4k views
Apache Spark Streaming processes real-time data from sources like Kafka, Flume, and cloud storage using micro-batches for efficient, fault-tolerant processing.
Unlike traditional batch systems, which process data at scheduled intervals, Spark Streaming handles data incrementally for lower latency and higher scalability. Compared with frameworks like Flink or Kafka Streams, its main strength is tight integration with the broader Spark ecosystem for large-scale, real-time data processing.
This Apache Spark Streaming tutorial covers architecture, key concepts, and applications—an essential skill for data engineers and big data professionals.
Apache Spark Streaming is a powerful real-time data processing engine built on Apache Spark’s distributed computing framework. It processes continuous data streams using micro-batching, breaking incoming data into small batches for near real-time computation.
It works by ingesting real-time data streams from sources like Kafka, Flume, or IoT sensors, processing them in micro-batches using Spark’s Resilient Distributed Dataset (RDD) model, and applying transformations (map, filter, reduce) to extract insights.
The processed data is then stored in HDFS, NoSQL databases, or indexed in ElasticSearch for further analysis, enabling low-latency, fault-tolerant, and scalable real-time analytics.
Now, let’s dive into the Apache Spark Streaming tutorial and learn what Spark Streaming is, its key components, and how to set up and run a streaming application step by step:
Step 1: Prerequisites
Before running a Spark Streaming application, ensure you have the following: a working Apache Spark installation with PySpark, a streaming data source such as Kafka or Flume, and a storage layer for the processed output such as HDFS or a cloud-based data lake.
When using cloud-based data lakes, you benefit from scalable storage and seamless integration with cloud-native services, reducing infrastructure overhead.
In contrast, local Hadoop-based setups may require more manual configuration and scaling to handle large data volumes efficiently.
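If you plan to use the Kafka examples later in this tutorial, the Spark–Kafka connector package must be available when the application is launched. A minimal sketch of the submit command; the script name spark_streaming_app.py is a placeholder, and the package coordinate must match your Spark and Scala versions:
spark-submit \
  --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0 \
  spark_streaming_app.py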
Also Read: Apache Spark Architecture: Everything You Need to Know
Step 2: Create a Spark Streaming Context
The Spark Streaming Context is the entry point for any Spark Streaming application. It defines the streaming job and manages data flow.
Example (Python – PySpark):
from pyspark.sql import SparkSession
from pyspark.streaming import StreamingContext
# Initialize Spark Session
spark = SparkSession.builder.appName("SparkStreamingApp").getOrCreate()
# Define Streaming Context with 5-second batch intervals
ssc = StreamingContext(spark.sparkContext, 5)
# Enable checkpointing for fault tolerance
ssc.checkpoint("hdfs://namenode:8020/spark_checkpoint")
# Print the StreamingContext object to confirm it was created
# (the Scala/Java API also exposes getState(); PySpark does not)
print(ssc)
Explanation:
1. Import Libraries: SparkSession creates the underlying Spark application, and StreamingContext drives the streaming job.
2. Initialize Spark Session: getOrCreate() returns an existing session or builds a new one named "SparkStreamingApp".
3. Define Streaming Context: the second argument (5) sets the micro-batch interval to 5 seconds.
4. Enable Checkpointing for Fault Tolerance: checkpoint() stores streaming state in HDFS so the job can recover after a failure.
5. Verify Streaming Context: printing the context confirms it was created; nothing is processed until ssc.start() is called.
Expected Output: This code only defines the StreamingContext and doesn’t produce direct output; nothing runs until ssc.start() is called in Step 6. Once the job is started, the console logs will confirm the setup:
INFO StreamingContext: StreamingContext started with batch interval 5s
INFO SparkSession: Initialized SparkSession: SparkStreamingApp
If you print ssc, you will see a reference to the context object, similar to:
<pyspark.streaming.context.StreamingContext object at 0x7f9c...>
Instead of processing each event individually, Spark Streaming groups incoming data into small batches at fixed intervals. In this case, it processes data every 5 seconds.
Micro-batching balances latency and throughput: compared with record-at-a-time engines such as Apache Flink, it trades a little extra latency for higher throughput, which suits high-throughput applications that can tolerate second-level delays.
Also Read: Apache Kafka Architecture: Comprehensive Guide For Beginners
Step 3: Define Data Input Sources
Spark Streaming supports multiple real-time data sources, including Apache Kafka, Flume, Amazon Kinesis, TCP sockets, files arriving in HDFS or cloud storage, and IoT sensor feeds.
Example (Reading from Kafka in Python – Using Structured Streaming API):
from pyspark.sql import SparkSession
# Initialize Spark Session
spark = SparkSession.builder.appName("SparkStreamingApp").getOrCreate()
# Read data from Kafka using Structured Streaming API
kafkaStream = spark.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", "localhost:9092") \
.option("subscribe", "events") \
.load()
Explanation:
1. Structured Streaming: spark.readStream creates a streaming DataFrame, so the Kafka feed can be processed with the same DataFrame and SQL operations used for batch data.
2. Create a Direct Kafka Stream: the kafka format with the kafka.bootstrap.servers and subscribe options connects directly to the broker at localhost:9092 and subscribes to the "events" topic.
Note: This approach eliminates the need for KafkaUtils.createDirectStream(), simplifying the code and ensuring better integration with Spark's built-in structured streaming capabilities.
Expected Output:
This code only defines the streaming source; Spark does not consume any messages until an output query is started (see Step 5). Once a query is running, the driver logs will show the Kafka source being created and subscribed to the events topic.
Once Kafka starts sending messages, Spark will begin processing real-time data from the "events" topic.
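To sanity-check the source during development, you can write the decoded stream to the built-in console sink; a minimal sketch, assuming the kafkaStream DataFrame defined above and a reachable Kafka broker:
# Print each micro-batch to the driver console (for debugging only)
debug_query = kafkaStream.selectExpr("CAST(value AS STRING) as value") \
    .writeStream \
    .format("console") \
    .outputMode("append") \
    .start()
# Stop the debug query with debug_query.stop() when finished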
Also Read: Kafka vs RabbitMQ: What Are the Biggest Differences and Which Should You Learn
Step 4: Apply Transformations
Transformations are operations that modify the incoming data. Spark Streaming offers a variety of transformations, including map(), filter(), reduceByKey(), and window().
These operations allow you to transform, filter, and aggregate data as it streams in real time.
Example (Filtering error logs in real-time):
from pyspark.sql.functions import col
# Convert Kafka messages to STRING and filter for "ERROR" logs
error_logs = kafkaStream.selectExpr("CAST(value AS STRING) as value") \
.filter(col("value").contains("ERROR"))
Explanation:
1. Convert Kafka Messages: Kafka messages are typically in binary format (e.g., bytes). To filter based on the content of the messages, you first need to convert the value field from binary to string using CAST(value AS STRING).
2. Filter Streaming Data: filter(col("value").contains("ERROR")) keeps only the rows whose message text contains the word "ERROR".
3. Stateful Transformations (optional note): if you want to keep track of state across batches (e.g., running totals or counts), the DStream API offers updateStateByKey(); in Structured Streaming, the equivalent is a stateful aggregation with watermarking, as sketched after the expected output below.
4. Streaming Data Processing: these transformations are lazy; nothing is executed until an output sink is started in Step 5.
Expected Output:
The stream will process Kafka messages in real time, and only logs that contain the word "ERROR" will be retained. If successful, the console will show logs like:
INFO StreamingQuery: Processing 100 records...
Once the filtering is applied, the error_logs stream will be ready for further transformations or actions (like writing to a file or a database).
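As a sketch of the stateful aggregation mentioned in the explanation above: the Kafka source also exposes a timestamp column, which can be used to count error logs per one-minute event-time window. The window length and watermark below are illustrative values:
from pyspark.sql.functions import col, window
# Count ERROR logs per 1-minute window, bounding state with a 2-minute watermark
# (assumes the kafkaStream DataFrame from Step 3)
error_counts = kafkaStream.selectExpr("CAST(value AS STRING) as value", "timestamp") \
    .filter(col("value").contains("ERROR")) \
    .withWatermark("timestamp", "2 minutes") \
    .groupBy(window(col("timestamp"), "1 minute")) \
    .count()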
Also Read: Apache Kafka Architecture: Comprehensive Guide For Beginners
Step 5: Define Output Operations
Once data is processed, it must be stored or sent to a sink. Common output destinations (sinks) include HDFS and cloud object storage, NoSQL databases such as Cassandra and MongoDB, search engines like Elasticsearch, Kafka topics, and real-time dashboards.
Example (Saving results to HDFS with append mode):
error_logs.writeStream \
.format("text") \
.option("path", "hdfs://namenode:8020/logs/errors") \
.option("checkpointLocation", "hdfs://namenode:8020/spark_checkpoint/errors") \
.outputMode("append") \
.start()
Explanation:
1. Saves Filtered Logs to HDFS: writeStream with the text format writes each micro-batch of error_logs into the /logs/errors directory on HDFS.
2. Handling Incremental Writes: outputMode("append") writes only the new rows produced by each trigger, so previously written files are never rewritten.
3. How Data is Stored: the file sink stores plain text files, one log record per line, accumulating data batch after batch.
4. Checkpointing: checkpointLocation is specified to store the state of the stream in case of failures, which is important for fault tolerance and recovery.
5. File Naming Convention: Spark names the output files automatically; the file sink writes one or more part files per micro-batch inside the /logs/errors directory, and each file contains the error logs captured within that batch.
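For later batch analysis, the stored logs can be read back with the regular DataFrame reader; a minimal sketch, assuming the same HDFS path:
# Read the persisted error logs back as a static DataFrame
stored_errors = spark.read.text("hdfs://namenode:8020/logs/errors")
stored_errors.show(truncate=False)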
Example (Saving Results to Cassandra or MongoDB): In addition to saving data to HDFS, Spark can also write to NoSQL databases like Cassandra or MongoDB. Both examples require the corresponding Spark connector package on the classpath, and the exact format name and option keys depend on the connector version; as with any streaming sink, a checkpointLocation should also be configured.
error_logs.writeStream \
.format("org.apache.spark.sql.cassandra") \
.option("keyspace", "logs") \
.option("table", "error_logs") \
.outputMode("append") \
.start()
error_logs.writeStream \
.format("mongodb") \
.option("uri", "mongodb://localhost:27017/logs.error_logs") \
.outputMode("append") \
.start()
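If your connector version does not provide a native streaming sink, foreachBatch is a common workaround: each micro-batch is handed to your function as a regular DataFrame, which you then write with the batch API. A minimal sketch for Cassandra, assuming the DataStax connector and the same keyspace and table as above:
def write_batch_to_cassandra(batch_df, batch_id):
    # Write one micro-batch using the batch DataFrame writer
    batch_df.write \
        .format("org.apache.spark.sql.cassandra") \
        .mode("append") \
        .options(keyspace="logs", table="error_logs") \
        .save()

error_logs.writeStream \
    .foreachBatch(write_batch_to_cassandra) \
    .option("checkpointLocation", "hdfs://namenode:8020/spark_checkpoint/errors_cassandra") \
    .start()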
Expected Output in HDFS:
After running this, you can check the saved error logs in HDFS using:
hdfs dfs -ls /logs/errors
Example output:
Found 2 items
-rw-r--r-- 1 spark spark 2048 2025-02-05 12:31 /logs/errors/part-00000
-rw-r--r-- 1 spark spark 3072 2025-02-05 12:36 /logs/errors/part-00001
To view the stored logs:
hdfs dfs -cat /logs/errors/part-00000
Example content:
ERROR 2025-02-05 12:30:45 [Server] Connection timeout occurred.
ERROR 2025-02-05 12:30:47 [Database] Query execution failed.
This allows long-term storage of error logs for analysis, debugging, and compliance tracking.
Also Read: Top 20 HDFS Commands You Should Know About
Step 6: Start and Monitor the Application
After defining the context, sources, transformations, and outputs, start the Spark Streaming job:
ssc.start() # Start execution
ssc.awaitTermination() # Keep application running
Explanation:
1. Start Streaming Execution:
2. Keep the Application Running:
3. Real-Time Processing:
4. Graceful Shutdown:
try:
    ssc.start()             # Start the streaming job
    ssc.awaitTermination()  # Keep the application running
except KeyboardInterrupt:
    print("Streaming job stopped manually.")
finally:
    ssc.stop(stopSparkContext=True, stopGraceFully=True)  # Gracefully stop the job
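ssc.start() and ssc.awaitTermination() apply to the DStream API. If you followed the Structured Streaming examples in Steps 3–5 (spark.readStream / writeStream), you start and block on the query handle instead; a minimal sketch, assuming the error_logs DataFrame and HDFS paths used earlier:
query = error_logs.writeStream \
    .format("text") \
    .option("path", "hdfs://namenode:8020/logs/errors") \
    .option("checkpointLocation", "hdfs://namenode:8020/spark_checkpoint/errors") \
    .outputMode("append") \
    .start()
query.awaitTermination()  # Block until the query stops or fails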
Monitoring Spark Streaming Jobs: use the Spark UI (the Streaming or Structured Streaming tab) to track input rates, batch processing times, and scheduling delay; Spark metrics can also be exported to Prometheus and visualized in Grafana.
Expected Output (Console Logs):
Once executed, Spark will start processing real-time streaming data, and the console will show logs like:
INFO StreamingContext: StreamingContext started with batch interval 5s
INFO KafkaUtils: Connected to Kafka topic: events
INFO StreamingJob: Processing new batch of streaming data
If new data arrives in the Kafka topic, Spark processes it in 5-second micro-batches and writes results to HDFS or another specified sink.
To manually stop Spark Streaming:
ssc.stop(stopSparkContext=True, stopGraceFully=True)
Or in the terminal:
Ctrl + C # Stop execution
This ensures the streaming application shuts down gracefully while preserving processed data.
Monitoring tools like Spark UI, Prometheus, or Grafana can help track performance and debug issues.
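For Structured Streaming queries, you can also poll the query handle programmatically; a small sketch, assuming query is the handle returned by writeStream.start():
# Current status of the query (e.g., waiting for data or processing a batch)
print(query.status)
# Metrics for the most recent micro-batch: input rows, durations, sink details
print(query.lastProgress)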
Also Read: Apache Spark Dataframes: Features, RDD & Comparison
Spark Streaming follows a structured pipeline to process real-time data efficiently. Below are its key components:
Streaming sources: these are the data ingestion points where Spark Streaming collects real-time data. Examples include Apache Kafka, Flume, Amazon Kinesis, TCP sockets, and files arriving in HDFS or cloud storage.
Streaming engine: the Spark Streaming Engine converts raw streaming data into Discretized Streams (DStreams), which are then processed using Spark’s distributed computing model. Key features include micro-batch execution, fault tolerance through checkpointing and write-ahead logs, and integration with Spark SQL and MLlib.
Sinks: sinks are output destinations where processed data is stored. Common sinks include HDFS, NoSQL databases such as Cassandra and MongoDB, Elasticsearch, Kafka topics, and real-time dashboards.
To learn advanced SQL techniques and elevate your data analysis capabilities, enroll in upGrad’s free course on SQL Functions and Formulas.
This Spark Streaming application reads real-time data from Kafka, processes it by filtering error logs, and writes the results to HDFS for further analysis.
Before running this code, ensure: Kafka is running on localhost:9092 with a topic named "logs", HDFS is reachable at hdfs://namenode:8020, and the Spark–Kafka integration package matching your Spark version is available on the classpath.
Here’s the complete Spark Streaming Code (PySpark):
# Import necessary libraries
from pyspark.sql import SparkSession
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
# Step 1: Initialize Spark Session
spark = SparkSession.builder \
.appName("SparkStreamingKafka") \
.config("spark.sql.streaming.checkpointLocation", "/tmp/checkpoints") \
.getOrCreate()
# Step 2: Create Spark Streaming Context with batch interval of 5 seconds
ssc = StreamingContext(spark.sparkContext, 5)
# Step 3: Define Kafka Data Source (Topic: "logs")
kafka_stream = KafkaUtils.createDirectStream(ssc, ["logs"], {"metadata.broker.list": "localhost:9092"})
# Step 4: Extract log messages from Kafka stream (assuming logs are JSON-formatted strings)
parsed_logs = kafka_stream.map(lambda x: x[1]) # Extract log message from Kafka tuple
# Step 5: Filter only error logs from the stream
error_logs = parsed_logs.filter(lambda log: "ERROR" in log)
# Step 6: Save filtered error logs to HDFS for further analysis
error_logs.saveAsTextFiles("hdfs://namenode:8020/logs/errors")
# Step 7: Start the Spark Streaming job
ssc.start()
ssc.awaitTermination() # Keeps the streaming process running
Explanation:
1. Initialize Spark Session: Creates a SparkSession with a checkpoint directory for fault tolerance.
2. Set up Streaming Context: Defines a batch interval of 5 seconds, meaning Spark processes data every 5 seconds.
3. Connect to Kafka Topic: Reads real-time log messages from Kafka’s "logs" topic.
4. Extract Log Messages: Kafka produces messages as (key, value) tuples, and we extract only the log text.
5. Filter Error Logs: Identifies logs containing the keyword "ERROR".
6. Write to HDFS: Saves processed logs to HDFS for further analysis or visualization.
7. Start the Streaming Job: Begins the real-time data pipeline and continuously processes new logs.
In Spark 3.x, the pyspark.streaming.kafka module (and with it KafkaUtils.createDirectStream()) was removed, so the DStream-based script above runs only on Spark 2.x. On newer versions, use Structured Streaming with Kafka via spark.readStream for more efficient and flexible stream processing, as sketched below.
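A minimal Structured Streaming sketch of the same pipeline, assuming the same broker (localhost:9092), topic ("logs"), and HDFS output path; the checkpoint location is illustrative:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("SparkStreamingKafka").getOrCreate()

# Read the "logs" topic as a streaming DataFrame and decode the message value
logs = spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "localhost:9092") \
    .option("subscribe", "logs") \
    .load() \
    .selectExpr("CAST(value AS STRING) as value")

# Keep only error logs and append them to HDFS as text files
query = logs.filter(col("value").contains("ERROR")) \
    .writeStream \
    .format("text") \
    .option("path", "hdfs://namenode:8020/logs/errors") \
    .option("checkpointLocation", "/tmp/checkpoints") \
    .outputMode("append") \
    .start()

query.awaitTermination()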
Expected Output (HDFS Output File):
After running the script, you will find filtered error logs in HDFS:
hdfs dfs -ls /logs/errors
Sample log output in HDFS:
ERROR 2025-02-05 12:30:45 [Server] Connection timeout occurred.
ERROR 2025-02-05 12:30:47 [Database] Query execution failed.
ERROR 2025-02-05 12:30:52 [Auth] Unauthorized access detected.
This real-world Apache Spark Streaming tutorial showcases how big data processing pipelines handle real-time log analysis, making it a must-have skill for data engineers and AI professionals.
Also Read: How to Become a Big Data Engineer: An Ultimate Guide
Now that you’ve learned how to set up and run Spark Streaming, it's important to understand why it's preferred over traditional batch processing. Let’s explore its key benefits and the challenges of real-world implementation.
As businesses increasingly rely on real-time analytics, the need for a scalable, fault-tolerant, and high-performance streaming framework has become critical.
Traditional batch processing systems, such as Hadoop MapReduce, cannot process data as it arrives, making them unsuitable for low-latency, real-time applications.
Apache Spark Streaming bridges this gap by providing a distributed, micro-batch processing engine that enables real-time data ingestion, transformation, and analysis at scale.
Below are some key benefits of using Spark Streaming:
1. Real-Time Processing Capabilities: micro-batching delivers near real-time insights with second-level latency instead of waiting for scheduled batch jobs.
2. Seamless Integration with the Spark Ecosystem: streaming jobs can reuse Spark SQL, DataFrames, and MLlib, so the same code and skills serve both batch and streaming workloads.
3. Scalability and Fault Tolerance: work is distributed across worker nodes, and checkpointing with write-ahead logs allows recovery from failures without data loss.
4. Wide Range of Input and Output Sources: connectors cover Kafka, Flume, Kinesis, HDFS, NoSQL stores, and cloud storage, among others.
5. Open Source and Strong Community Support: as part of the Apache Spark project, it benefits from active development, extensive documentation, and a large user community.
Despite its advantages, deploying Spark Streaming in production presents certain challenges. Below is a comparison of key challenges and their solutions:
Challenge: Scalability Issues – High-throughput streaming applications may face performance bottlenecks when handling large volumes of data.
Solution: Use horizontal scaling by adding more Spark worker nodes and tuning batch intervals for better throughput. Optimize executor memory allocation.
Challenge: Fault Tolerance and Recovery – If a node crashes, streaming jobs may lose data or require manual intervention to restart.
Solution: Enable checkpointing and write-ahead logs (WAL) to ensure data persistence. Use Kafka offsets to resume processing without data loss.
Challenge: Performance Optimization – Inefficient use of resources may lead to high processing latency.
Solution: Optimize parallelism using an appropriate number of partitions. Use stateful streaming aggregations with windowing for efficient computation.
Challenge: Handling Data Inconsistencies – Streaming data often contains duplicates, missing values, or schema variations.
Solution: Implement idempotent writes, use deduplication techniques such as watermarking (see the sketch below), and validate schemas before ingestion.
Challenge: Integration with Data Sources and Sinks – Connecting Spark Streaming with multiple external systems can be complex.
Solution: Use Structured Streaming APIs for better compatibility with JDBC, NoSQL, message queues, and cloud storage.
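As a sketch of the watermark-based deduplication mentioned above (events, event_id, and event_time are placeholder names for your own streaming DataFrame and columns):
# Drop duplicate events seen within a 10-minute watermark window;
# 'events' is any streaming DataFrame with an identifier and an event-time column
deduped = events.withWatermark("event_time", "10 minutes") \
    .dropDuplicates(["event_id", "event_time"])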
By addressing these challenges proactively, organizations can build robust, real-time analytics pipelines that drive data-driven decision-making.
Also Read: Future of Data Analytics in India: Trends & Career Options 2025
While Spark Streaming offers scalability, real-time processing, and fault tolerance, how does it apply in real-world scenarios? Let’s look at how industries leverage Spark Streaming for real-time analytics.
Apache Spark Streaming is widely used in real-world applications where low-latency, high-throughput data processing is critical. It enables businesses to analyze, detect patterns, and respond to events in real-time, making it a powerful tool for fraud detection, monitoring, and predictive analytics.
Below are some of the most impactful use cases of Apache Spark Streaming across industries:
1. Real-Time Log Monitoring: streaming server and application logs lets teams detect errors, anomalies, and outages within seconds rather than in the next batch run.
2. Fraud Detection: banks and payment platforms score transactions as they occur, flagging suspicious patterns in real time.
Also Read: Fraud Detection in Machine Learning: What You Need To Know
3. Social Media Analytics: brands track trending topics, hashtags, and sentiment on live social feeds to respond to events as they unfold.
Also Read: How to Make Social Media Marketing Strategy in Just 8 Steps
4. IoT Data Processing: sensor streams from connected devices are aggregated and analyzed continuously for monitoring, alerting, and predictive maintenance.
Also Read: Top 50 IoT Projects For all Levels in 2025 [With Source Code]
5. Stock Market Analysis: trading systems process live market data feeds to compute indicators and surface price movements with minimal delay.
Spark Streaming's ability to ingest, process, and analyze massive streams of data with low latency is shaping the future of AI-driven decision-making.
Also Read: Top 10 Apache Spark Use Cases Across Industries and Their Impact in 2025
With Spark Streaming transforming industries like finance, IoT, and social media analytics, mastering it is a valuable skill. upGrad’s courses provide hands-on training to help you build industry-ready expertise in real-time data processing.
upGrad, South Asia’s leading Higher EdTech platform, equips 10M+ learners with big data and real-time analytics skills. Their courses offer hands-on training in stream processing, Kafka integration, and distributed computing.
You'll master fault tolerance, scalability, and performance tuning, gaining industry-ready expertise to build real-time data pipelines.
Here are some relevant courses you can check out:
You can also get personalized career counseling with upGrad to guide your career path, or visit your nearest upGrad center and start hands-on training today!