Apache Spark Streaming Tutorial: A Step-by-Step Guide to Real-Time Data Processing
Updated on Feb 11, 2025 | 17 min read
Apache Spark Streaming processes real-time data from sources like Kafka, Flume, and cloud storage using micro-batches for efficient, fault-tolerant processing.
Unlike traditional batch systems, which process accumulated data at scheduled intervals, Spark Streaming handles data incrementally in small micro-batches, giving lower latency and better scalability. Compared with frameworks like Flink or Kafka Streams, its main strength is tight integration with the broader Spark ecosystem for large-scale, real-time data processing.
This Apache Spark Streaming tutorial covers architecture, key concepts, and applications—an essential skill for data engineers and big data professionals.
Apache Spark Streaming Tutorial: Step-by-Step Guide
Apache Spark Streaming is a powerful real-time data processing engine built on Apache Spark’s distributed computing framework. It processes continuous data streams using micro-batching, breaking incoming data into small batches for near real-time computation.
It works by ingesting real-time data streams from sources like Kafka, Flume, or IoT sensors, processing them in micro-batches using Spark’s Resilient Distributed Dataset (RDD) model, and applying transformations (map, filter, reduce) to extract insights.
The processed data is then stored in HDFS, NoSQL databases, or indexed in ElasticSearch for further analysis, enabling low-latency, fault-tolerant, and scalable real-time analytics.
Now, let’s dive into the Apache Spark Streaming tutorial and learn what Spark Streaming is, its key components, and how to set up and run a streaming application step by step:
Step 1: Prerequisites
Before running a Spark Streaming application, ensure you have the following:
- Apache Spark (Latest Version, 3.x+) installed on your system.
- Java 8+ or Python 3.x for running Spark applications.
- Kafka, Flume, or Socket Streams as data sources.
- Hadoop or a cloud-based data lake (AWS S3, Azure Blob Storage, or Google Cloud Storage) for storing intermediate results.
When using cloud-based data lakes, you benefit from scalable storage and seamless integration with cloud-native services, reducing infrastructure overhead.
In contrast, local Hadoop-based setups may require more manual configuration and scaling to handle large data volumes efficiently.
Also Read: Apache Spark Architecture: Everything You Need to Know
Step 2: Create a Spark Streaming Context
The Spark Streaming Context is the entry point for any Spark Streaming application. It defines the streaming job and manages data flow.
Example (Python – PySpark):
from pyspark.sql import SparkSession
from pyspark.streaming import StreamingContext
# Initialize Spark Session
spark = SparkSession.builder.appName("SparkStreamingApp").getOrCreate()
# Define Streaming Context with 5-second batch intervals
ssc = StreamingContext(spark.sparkContext, 5)
# Enable checkpointing for fault tolerance
ssc.checkpoint("hdfs://namenode:8020/spark_checkpoint")
# Print the StreamingContext object as a basic sanity check
print(ssc)
# StreamingContext.getActive() returns the running context once started (None before)
print(StreamingContext.getActive())
Explanation:
1. Import Libraries:
- SparkSession: Manages the Spark application.
- StreamingContext: Enables real-time data processing.
2. Initialize Spark Session:
- Creates a SparkSession named "SparkStreamingApp".
- getOrCreate() ensures an existing session is reused if available.
3. Define Streaming Context:
- Uses sparkContext to set up streaming context.
- The batch interval is set to 5 seconds, meaning Spark Streaming will collect and process data in batches every 5 seconds.
4. Enable Checkpointing for Fault Tolerance:
- The ssc.checkpoint() method is crucial for ensuring fault tolerance in your streaming application.
- It saves the state of the computation to a reliable storage (e.g., HDFS). In case of a failure, this checkpoint helps recover the application to its last successful state.
5. Verify the Streaming Context:
- Printing ssc confirms the context object was created; StreamingContext.getActive() returns the running context once the job has started (and None before that).
- The equivalent ssc.getState() check exists in the Scala/Java API but is not exposed in PySpark.
Expected Output: This code only sets up the streaming context and doesn’t produce direct output yet. Once the job is started (Step 6), console logs similar to the following confirm the setup:
INFO StreamingContext: StreamingContext started with batch interval 5s
INFO SparkSession: Initialized SparkSession: SparkStreamingApp
Printing ssc before the job starts simply shows a StreamingContext object reference, and StreamingContext.getActive() returns None until ssc.start() is called.
Instead of processing each event individually, Spark Streaming groups incoming data into small batches at fixed intervals. In this case, it processes logs every 5 seconds:
- T = 0-5s: Collect logs and process in batch 1.
- T = 5-10s: Collect new logs and process in batch 2.
- T = 10-15s: Repeat, ensuring continuous real-time analysis.
Micro-batching trades a small amount of latency for higher throughput, which can make it a better fit than event-at-a-time engines like Apache Flink for throughput-heavy workloads.
Also Read: Apache Kafka Architecture: Comprehensive Guide For Beginners
Step 3: Define Data Input Sources
Spark Streaming supports multiple real-time data sources, including:
- Kafka: For high-throughput real-time event processing (e.g., Twitter's real-time analytics pipeline uses Kafka and Spark to process millions of tweets per second, enabling trend analysis and sentiment detection).
- Flume: Used for ingesting large volumes of event-driven log data (e.g., application logs in cloud environments).
- Socket Streams: For real-time text data (e.g., live server logs or IoT telemetry).
- Cloud Storage & Databases: Streaming from AWS S3, Google Cloud, or NoSQL databases.
Example (Reading from Kafka in Python – Using Structured Streaming API):
from pyspark.sql import SparkSession
# Initialize Spark Session
spark = SparkSession.builder.appName("SparkStreamingApp").getOrCreate()
# Read data from Kafka using Structured Streaming API
kafkaStream = spark.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", "localhost:9092") \
.option("subscribe", "events") \
.load()
Explanation:
1. Structured Streaming:
- Instead of the legacy KafkaUtils.createDirectStream() DStream API (removed from PySpark in Spark 3.0), use Structured Streaming to process data from Kafka in a more reliable and scalable manner.
- spark.readStream is the new approach to streaming data, providing a higher-level abstraction for handling real-time data streams.
2. Create a Direct Kafka Stream:
- spark.readStream.format("kafka") specifies that you're reading from Kafka as a stream.
- .option("kafka.bootstrap.servers", "localhost:9092") sets the Kafka broker address (e.g., localhost:9092) where Kafka is running.
- .option("subscribe", "events") defines the Kafka topic ("events") that Spark will consume data from.
- .load() triggers the loading of the stream from Kafka.
Note: This approach eliminates the need for KafkaUtils.createDirectStream(), simplifying the code and ensuring better integration with Spark's built-in Structured Streaming capabilities. Running it requires the spark-sql-kafka-0-10 connector package (matching your Spark and Scala versions) on the classpath.
Expected Output:
This code defines the streaming source but does not produce output by itself; Spark begins consuming messages from Kafka once a streaming query is started (see Step 5). When the connection succeeds, the console shows logs like:
INFO KafkaUtils: Connected to Kafka topic: events
INFO Kafka Stream started successfully
Once Kafka starts sending messages, Spark will begin processing real-time data from the "events" topic.
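As a quick sanity check before building the rest of the pipeline, you can attach a console sink and confirm that messages are arriving. This is a minimal debugging sketch, assuming kafkaStream is defined as above and the topic is already receiving data:
# Debugging aid: print each micro-batch of decoded Kafka values to the console
debug_query = kafkaStream.selectExpr("CAST(value AS STRING) AS value") \
    .writeStream \
    .format("console") \
    .outputMode("append") \
    .start()

debug_query.awaitTermination(30)  # let it run briefly, then inspect the console output
debug_query.stop()
Stop the debug query before starting the main pipeline so it does not compete with it for resources.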
Also Read: Kafka vs RabbitMQ: What Are the Biggest Differences and Which Should You Learn
Step 4: Apply Transformations
Transformations are operations that modify the incoming data. Spark Streaming offers a variety of transformations, including map(), filter(), reduceByKey(), and window().
These operations allow you to transform, filter, and aggregate data as it streams in real time.
- map(): Applies a function to each data element.
- filter(): Filters data based on a condition.
- window(): Aggregates data over a fixed time window (e.g., compute moving averages).
- updateStateByKey(): A stateful transformation that allows maintaining state across batches, useful for tasks like running totals or counting unique items.
Example (Filtering error logs in real-time):
from pyspark.sql.functions import col
# Convert Kafka messages to STRING and filter for "ERROR" logs
error_logs = kafkaStream.selectExpr("CAST(value AS STRING) as value") \
.filter(col("value").contains("ERROR"))
Explanation:
1. Convert Kafka Messages: Kafka messages are typically in binary format (e.g., bytes). To filter based on the content of the messages, you first need to convert the value field from binary to string using CAST(value AS STRING).
2. Filter Streaming Data:
- This line processes the real-time Kafka stream and filters messages containing the word "ERROR".
- col("value").contains("ERROR") checks each message in the Kafka stream for the keyword "ERROR".
- Only logs that contain the word "ERROR" will be retained for further processing.
3. Stateful Transformations (optional note): If you want to keep track of state across different batches (e.g., running totals or counts), you can use updateStateByKey(). This is a useful stateful transformation to track ongoing calculations in streaming data.
4. Streaming Data Processing:
- This is a real-time filtering operation, meaning only error logs from Kafka’s "events" topic will be processed.
- The resulting error_logs stream can be saved, displayed, or sent to monitoring tools for alerting.
Expected Output:
The stream will process Kafka messages in real time, and only logs that contain the word "ERROR" will be retained. If successful, the console will show logs like:
INFO StreamingQuery: Processing 100 records...
Once the filtering is applied, the error_logs stream will be ready for further transformations or actions (like writing to a file or a database).
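Beyond simple filtering, windowed aggregations are a common next step. Below is a minimal sketch, not part of the main pipeline, that counts ERROR logs per one-minute event-time window. It relies on the timestamp column that the Kafka source provides and uses a watermark to bound the state Spark keeps for late data:
from pyspark.sql.functions import col, window

# Count ERROR logs per 1-minute window; the 2-minute watermark defines how long
# Spark waits for late events before a window can be finalized (illustrative values)
windowed_error_counts = kafkaStream \
    .selectExpr("CAST(value AS STRING) AS value", "timestamp") \
    .filter(col("value").contains("ERROR")) \
    .withWatermark("timestamp", "2 minutes") \
    .groupBy(window(col("timestamp"), "1 minute")) \
    .count()
Such an aggregate is typically written with outputMode("update"), or with outputMode("append") if a window should only be emitted after its watermark has passed.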
Step 5: Define Output Operations
Once data is processed, it must be stored or sent to a sink. Common output destinations (sinks) include:
- HDFS or Cloud Storage (AWS S3, Google Cloud Storage) for long-term storage.
- NoSQL databases (Cassandra, MongoDB) for real-time dashboards.
- Elasticsearch for indexing and full-text search (e.g., log analysis).
Example (Saving results to HDFS with append mode):
error_logs.writeStream \
.format("text") \
.option("path", "hdfs://namenode:8020/logs/errors") \
.option("checkpointLocation", "hdfs://namenode:8020/spark_checkpoint/errors") \
.outputMode("append") \
.start()
Explanation:
1. Saves Filtered Logs to HDFS:
- This line writes the filtered error logs to a Hadoop Distributed File System (HDFS) location.
- The directory path "hdfs://namenode:8020/logs/errors" specifies where the files will be stored.
2. Handling Incremental Writes:
- The file sink works in append mode: outputMode("append") ensures that each micro-batch's results are written as new files rather than modifying or replacing what is already there.
- This configuration guarantees that each batch of error logs is added to the output directory without deleting or overwriting previous data.
3. How Data is Stored:
- Spark Streaming creates multiple text files within the directory based on the batch intervals defined in the StreamingContext.
- Each batch generates a new file containing only error logs from that interval.
4. Checkpointing: checkpointLocation is specified to store the state of the stream in case of failures, which is important for fault tolerance and recovery.
5. File Naming Convention:
The output files are automatically named like:
/logs/errors-00000
/logs/errors-00001
Each file contains error logs captured within that batch.
Example (Saving Results to Cassandra or MongoDB): In addition to saving data to HDFS, Spark can also write to NoSQL databases like Cassandra or MongoDB. The examples below assume the Spark Cassandra Connector and the MongoDB Spark Connector packages are available on the classpath.
- Saving to Cassandra:
error_logs.writeStream \
.format("org.apache.spark.sql.cassandra") \
.option("keyspace", "logs") \
.option("table", "error_logs") \
.outputMode("append") \
.start()
- Saving to MongoDB:
error_logs.writeStream \
.format("mongodb") \
.option("uri", "mongodb://localhost:27017/logs.error_logs") \
.outputMode("append") \
.start()
Expected Output in HDFS:
After running this, you can check the saved error logs in HDFS using:
hdfs dfs -ls /logs/errors
Example output:
Found 2 items
-rw-r--r-- 1 spark spark 2048 2025-02-05 12:31 /logs/errors-00000
-rw-r--r-- 1 spark spark 3072 2025-02-05 12:36 /logs/errors-00001
To view the stored logs:
hdfs dfs -cat /logs/errors-00000
Example content:
ERROR 2025-02-05 12:30:45 [Server] Connection timeout occurred.
ERROR 2025-02-05 12:30:47 [Database] Query execution failed.
This allows long-term storage of error logs for analysis, debugging, and compliance tracking.
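Because the sink writes plain text files, the stored logs can later be read back as an ordinary batch DataFrame for ad-hoc analysis. A small sketch, reusing the same path as above:
# Read the persisted error logs back as a batch DataFrame (single column named "value")
stored_errors = spark.read.text("hdfs://namenode:8020/logs/errors")
stored_errors.show(10, truncate=False)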
Also Read: Top 20 HDFS Commands You Should Know About
Step 6: Start and Monitor the Application
After defining the context, sources, transformations, and outputs, start the Spark Streaming job:
ssc.start() # Start execution
ssc.awaitTermination() # Keep application running
Explanation:
1. Start Streaming Execution:
- ssc.start() begins the Spark Streaming job, activating all defined streaming computations.
- Once started, Spark continuously ingests, processes, and outputs data based on the transformations applied (e.g., filtering logs, writing to HDFS).
2. Keep the Application Running:
- ssc.awaitTermination() keeps the streaming job alive until it is manually stopped.
- Without this, the program would execute once and exit immediately.
3. Real-Time Processing:
- Spark continuously processes new data from Kafka, Flume, or other sources based on the batch interval set in StreamingContext.
- The application will keep running indefinitely unless manually stopped or terminated due to an error.
4. Graceful Shutdown:
- To ensure a clean shutdown, you should use ssc.stop(stopSparkContext=True, stopGraceFully=True), which will stop the Spark Streaming context gracefully and preserve the processed data. This method helps prevent any data loss during termination.
- Implementation in a try-except block:
try:
    ssc.start()  # Start the streaming job
    ssc.awaitTermination()  # Keep the application running
except KeyboardInterrupt:
    print("Streaming job stopped manually.")
finally:
    ssc.stop(stopSparkContext=True, stopGraceFully=True)  # Gracefully stop the job
Monitoring Spark Streaming Jobs:
- Spark UI: You can monitor the performance and progress of your Spark Streaming job via the Spark UI. Navigate to the 'Streaming' tab to see detailed information about the batch processing, including how long each batch took, the number of records processed, and the status of the current batch.
- Prometheus/Grafana: If you have Prometheus and Grafana set up, you can use them to track metrics like throughput, latency, and resource usage for Spark Streaming jobs.
Expected Output (Console Logs):
Once executed, Spark will start processing real-time streaming data, and the console will show logs like:
INFO StreamingContext: StreamingContext started with batch interval 5s
INFO KafkaUtils: Connected to Kafka topic: events
INFO StreamingJob: Processing new batch of streaming data
If new data arrives in the Kafka topic, Spark processes it in 5-second micro-batches and writes results to HDFS or another specified sink.
To manually stop Spark Streaming:
ssc.stop(stopSparkContext=True, stopGraceFully=True)
Or in the terminal:
Ctrl + C # Stop execution
This ensures the streaming application shuts down gracefully while preserving processed data.
Monitoring tools like Spark UI, Prometheus, or Grafana can help track performance and debug issues.
Also Read: Apache Spark Dataframes: Features, RDD & Comparison
How Apache Spark Streaming Works: Architecture and Components
Spark Streaming follows a structured pipeline to process real-time data efficiently. Below are its key components:
Input Sources in Spark Streaming
These are the data ingestion points where Spark Streaming collects real-time data. Examples include:
- Kafka (Event Processing): Used for handling high-throughput data streams like user interactions in web applications.
- Flume (Log Streaming): Commonly used for ingesting system logs from distributed infrastructure.
- Amazon Kinesis (Cloud Streaming): Streams real-time data for fraud detection and AI-driven applications.
Spark Streaming Engine
The Spark Streaming Engine converts raw streaming data into Discretized Streams (DStreams), which are then processed using Spark’s distributed computing model; a minimal DStream sketch follows the feature list below. Key features include:
- Fault Tolerance: If a node fails, Spark recovers lost data from the last checkpoint.
- Scalability: Automatically scales across a distributed cluster for large workloads.
- Micro-Batching: Processes data in fixed intervals (e.g., every 5 seconds).
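To make the DStream abstraction concrete, here is a small self-contained sketch, independent of the Kafka pipeline above. It reads lines from a local socket (for example, one opened with nc -lk 9999) and counts words in 5-second micro-batches:
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Each 5-second micro-batch of socket lines becomes one RDD inside the DStream
sc = SparkContext("local[2]", "DStreamWordCount")
ssc = StreamingContext(sc, 5)

lines = ssc.socketTextStream("localhost", 9999)
word_counts = lines.flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)
word_counts.pprint()  # print a sample of each batch's results

ssc.start()
ssc.awaitTermination()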
Sinks in Spark Streaming
Sinks are output destinations where processed data is stored. Common sinks include:
- HDFS, AWS S3, Azure Blob Storage – For persisting processed data.
- Elasticsearch – For real-time log analysis and search-based applications.
- PostgreSQL, MySQL, NoSQL (Cassandra, MongoDB) – For storing structured real-time data.
To learn advanced SQL techniques and elevate your data analysis capabilities, enroll in upGrad’s free course on SQL Functions and Formulas.
Complete Apache Spark Streaming Code (Kafka Integration)
This Spark Streaming application reads real-time data from Kafka, processes it by filtering error logs, and writes the results to HDFS for further analysis.
Before running this code, ensure:
- Apache Spark 3.x+ is installed.
- Apache Kafka is set up with a topic named "logs".
- Hadoop HDFS is running for storing processed logs.
Here’s the complete Spark Streaming code (PySpark, using the legacy DStream API):
# Import necessary libraries
from pyspark.sql import SparkSession
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
# Step 1: Initialize Spark Session
spark = SparkSession.builder \
.appName("SparkStreamingKafka") \
.config("spark.sql.streaming.checkpointLocation", "/tmp/checkpoints") \
.getOrCreate()
# Step 2: Create Spark Streaming Context with batch interval of 5 seconds
ssc = StreamingContext(spark.sparkContext, 5)
# Step 3: Define Kafka Data Source (Topic: "logs")
kafka_stream = KafkaUtils.createDirectStream(ssc, ["logs"], {"metadata.broker.list": "localhost:9092"})
# Step 4: Extract log messages from Kafka stream (assuming logs are JSON-formatted strings)
parsed_logs = kafka_stream.map(lambda x: x[1]) # Extract log message from Kafka tuple
# Step 5: Filter only error logs from the stream
error_logs = parsed_logs.filter(lambda log: "ERROR" in log)
# Step 6: Save filtered error logs to HDFS for further analysis
error_logs.saveAsTextFiles("hdfs://namenode:8020/logs/errors")
# Step 7: Start the Spark Streaming job
ssc.start()
ssc.awaitTermination() # Keeps the streaming process running
Explanation:
1. Initialize Spark Session: Creates a SparkSession with a checkpoint directory for fault tolerance.
2. Set up Streaming Context: Defines a batch interval of 5 seconds, meaning Spark processes data every 5 seconds.
3. Connect to Kafka Topic: Reads real-time log messages from Kafka’s "logs" topic.
4. Extract Log Messages: Kafka produces messages as (key, value) tuples, and we extract only the log text.
5. Filter Error Logs: Identifies logs containing the keyword "ERROR".
6. Write to HDFS: Saves processed logs to HDFS for further analysis or visualization.
7. Start the Streaming Job: Begins the real-time data pipeline and continuously processes new logs.
Note that the pyspark.streaming.kafka module used here was removed in Spark 3.0, so this legacy example requires Spark 2.x. On Spark 3.x, use Structured Streaming with Kafka via spark.readStream(), as in Steps 3 to 5; an equivalent sketch appears after the expected output below.
Expected Output (HDFS Output File):
After running the script, you will find filtered error logs in HDFS:
hdfs dfs -ls /logs/errors
Sample log output in HDFS:
ERROR 2025-02-05 12:30:45 [Server] Connection timeout occurred.
ERROR 2025-02-05 12:30:47 [Database] Query execution failed.
ERROR 2025-02-05 12:30:52 [Auth] Unauthorized access detected.
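For Spark 3.x, a roughly equivalent end-to-end pipeline using Structured Streaming might look like the following sketch (same assumed topic, broker, and HDFS paths as above):
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("SparkStructuredStreamingKafka").getOrCreate()

# Read from the "logs" topic, decode the payload, and keep only ERROR lines
error_logs = spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "localhost:9092") \
    .option("subscribe", "logs") \
    .load() \
    .selectExpr("CAST(value AS STRING) AS value") \
    .filter(col("value").contains("ERROR"))

# Append each micro-batch of error logs to HDFS as text files
query = error_logs.writeStream \
    .format("text") \
    .option("path", "hdfs://namenode:8020/logs/errors") \
    .option("checkpointLocation", "hdfs://namenode:8020/spark_checkpoint/errors") \
    .outputMode("append") \
    .start()

query.awaitTermination()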
This real-world Apache Spark Streaming tutorial showcases how big data processing pipelines handle real-time log analysis, making it a must-have skill for data engineers and AI professionals.
Also Read: How to Become a Big Data Engineer: An Ultimate Guide
Now that you’ve learned how to set up and run Spark Streaming, it's important to understand why it's preferred over traditional batch processing. Let’s explore its key benefits and the challenges of real-world implementation.
Why Choose Apache Spark Streaming: Key Benefits and Challenges
As businesses increasingly rely on real-time analytics, the need for a scalable, fault-tolerant, and high-performance streaming framework has become critical.
Traditional batch processing systems, such as Hadoop MapReduce, cannot process data as it arrives, making them unsuitable for latency-sensitive, real-time use cases.
Apache Spark Streaming bridges this gap by providing a distributed, micro-batch processing engine that enables real-time data ingestion, transformation, and analysis at scale.
Below are some key benefits of using Spark Streaming:
1. Real-Time Processing Capabilities
- Processes continuous data streams in near real-time with low latency.
- Ideal for financial transactions, social media monitoring, and IoT event processing.
2. Seamless Integration with the Spark Ecosystem
- Works alongside Spark SQL, MLlib (Machine Learning), and GraphX for advanced analytics.
- Supports structured streaming, making it easy to integrate with existing batch workflows.
3. Scalability and Fault Tolerance
- Can scale horizontally across hundreds or thousands of nodes for high-throughput workloads.
- Uses RDD lineage and checkpointing for automatic fault recovery.
4. Wide Range of Input and Output Sources
- Reads from Kafka, Flume, Kinesis, HDFS, S3, and NoSQL databases.
- Outputs to HDFS, Cassandra, Elasticsearch, JDBC databases, and real-time dashboards.
5. Open Source and Strong Community Support
- Backed by a large open-source community with frequent updates.
- Supported by major cloud providers (AWS, Azure, GCP) for enterprise-level streaming applications.
Despite its advantages, deploying Spark Streaming in production presents certain challenges. Below is a comparison of key challenges and their solutions:
| Challenge | Solution |
| --- | --- |
| Scalability Issues: High-throughput streaming applications may face performance bottlenecks when handling large volumes of data. | Use horizontal scaling by adding more Spark worker nodes and tune batch intervals for better throughput. Optimize executor memory allocation. |
| Fault Tolerance and Recovery: If a node crashes, streaming jobs may lose data or require manual intervention to restart. | Enable checkpointing and write-ahead logs (WAL) to ensure data persistence. Use Kafka offsets to resume processing without data loss. |
| Performance Optimization: Inefficient use of resources may lead to high processing latency. | Optimize parallelism with an appropriate number of partitions. Use stateful streaming aggregations with windowing for efficient computation. |
| Handling Data Inconsistencies: Streaming data often contains duplicates, missing values, or schema variations. | Implement idempotent writes, use deduplication techniques such as watermarking (see the sketch below), and validate the schema before ingestion. |
| Integration with Data Sources and Sinks: Connecting Spark Streaming with multiple external systems can be complex. | Use Structured Streaming APIs for better compatibility with JDBC, NoSQL, message queues, and cloud storage. |
By addressing these challenges proactively, organizations can build robust, real-time analytics pipelines that drive data-driven decision-making.
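As an illustration of the watermark-based deduplication mentioned in the table, here is a minimal Structured Streaming sketch (column choices are illustrative; it assumes the kafkaStream DataFrame from the earlier steps):
# Drop duplicate events by payload and event time; the 10-minute watermark bounds
# how much state Spark keeps for deduplication (illustrative values)
deduplicated = kafkaStream \
    .selectExpr("CAST(value AS STRING) AS value", "timestamp") \
    .withWatermark("timestamp", "10 minutes") \
    .dropDuplicates(["value", "timestamp"])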
Also Read: Future of Data Analytics in India: Trends & Career Options 2025
While Spark Streaming offers scalability, real-time processing, and fault tolerance, how does it apply in real-world scenarios? Let’s look at how industries leverage Spark Streaming for real-time analytics.
Applications and Use Cases of Apache Spark Streaming
Apache Spark Streaming is widely used in real-world applications where low-latency, high-throughput data processing is critical. It enables businesses to analyze, detect patterns, and respond to events in real-time, making it a powerful tool for fraud detection, monitoring, and predictive analytics.
Below are some of the most impactful use cases of Apache Spark Streaming across industries:
1. Real-Time Log Monitoring
- Continuously processes server logs, application logs, and network activity to detect anomalies.
- Used in cybersecurity for threat detection and intrusion prevention.
- Companies like Netflix and LinkedIn use Spark Streaming to monitor system health and prevent failures.
2. Fraud Detection
- Analyzes banking transactions in real-time to detect suspicious activities.
- Financial institutions use machine learning models with Spark Streaming to flag anomalous credit card transactions.
- Helps reduce identity theft, money laundering, and online payment fraud.
Also Read: Fraud Detection in Machine Learning: What You Need To Know
3. Social Media Analytics
- Processes real-time data from platforms like Twitter, Facebook, and Instagram; for example, Twitter's Firehose API can feed Kafka, which streams large volumes of tweets into Spark for real-time analytics and trend detection.
- Identifies trending topics, customer sentiment, and brand perception.
- Used in digital marketing to track campaign performance in real-time.
Also Read: How to Make Social Media Marketing Strategy in Just 8 Steps
4. IoT Data Processing
- Ingests sensor data from IoT devices in smart cities, healthcare, and manufacturing.
- Enables predictive maintenance in industries by detecting failures before they happen.
- Used in self-driving cars to process sensor inputs and make instant decisions.
Also Read: Top 50 IoT Projects For all Levels in 2025 [With Source Code]
5. Stock Market Analysis
- Processes high-frequency trading data to identify market trends.
- Hedge funds and investment firms use Spark Streaming to predict stock movements and execute trades in milliseconds.
- Helps in risk assessment and portfolio management by analyzing real-time market fluctuations.
Its ability to ingest, process, and analyze massive streams of data with low latency is shaping the future of AI-driven decision-making.
Also Read: Top 10 Apache Spark Use Cases Across Industries and Their Impact in 2025
With Spark Streaming transforming industries like finance, IoT, and social media analytics, mastering it is a valuable skill. upGrad’s courses provide hands-on training to help you build industry-ready expertise in real-time data processing.
How Can upGrad Help You Learn Apache Spark Streaming?
upGrad, South Asia’s leading Higher EdTech platform, equips 10M+ learners with big data and real-time analytics skills. Their courses offer hands-on training in stream processing, Kafka integration, and distributed computing.
You'll master fault tolerance, scalability, and performance tuning, gaining industry-ready expertise to build real-time data pipelines.
Here are some relevant courses you can check out:
- Introduction to Data Analysis using Excel
- Analyzing Patterns in Data and Storytelling
- Executive Diploma in Data Science & AI
- Post Graduate Certificate in Machine Learning and Deep Learning (Executive)
- Post Graduate Certificate in Machine Learning & NLP (Executive)
You can also get personalized career counseling with upGrad to guide your career path, or visit your nearest upGrad center and start hands-on training today!
Frequently Asked Questions
1. How does Apache Spark Streaming handle late-arriving data in real-time applications?
2. What is the difference between Spark Streaming and Structured Streaming?
3. How does Spark Streaming ensure exactly-once processing in distributed environments?
4. Can Spark Streaming handle event-time processing instead of processing-time?
5. How does Apache Spark Streaming handle backpressure when data volume spikes?
6. What optimizations can improve Spark Streaming performance on large-scale data?
7. How does checkpointing help in Spark Streaming fault recovery?
8. Can Spark Streaming process unstructured data like images and videos?
9. What’s the impact of micro-batch size on Spark Streaming latency?
10. How does Spark Streaming compare to Apache Flink for real-time data processing?
11. Can Apache Spark Streaming integrate with cloud-native architectures?