Apache Spark Architecture: Everything You Need to Know in 2024
What is Apache Spark?
Apache Spark is an open-source cluster computing framework intended for real-time data processing. Fast computation is the need of the hour, and Apache Spark is one of the most efficient and swift frameworks designed to achieve it.
The principal feature of Apache Spark is its in-built cluster computing, which increases the processing speed of an application. Apart from this, it also offers an interface for programming entire clusters, with implicit data parallelism and fault tolerance. This provides great independence, as you do not need any special directives, operators, or functions that would otherwise be required for parallel execution.
Spark Application – The user program that runs on Spark and performs its own computations to produce a result.
Apache SparkContext – The core part of the architecture, used to create services and carry out jobs.
Task – A single unit of work; each stage is made up of tasks that run on the executors.
Apache Spark Shell – In simple words, an interactive application that makes it easy to process data sets of all sizes.
Stage – When a job is split into smaller sets of tasks, those sets are called stages.
Job – A set of computations that are run in parallel (see the sketch after this list).
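To make these terms concrete, here is a minimal sketch in Scala; the application name and the input values are illustrative only. Calling an action such as collect triggers a job, the shuffle introduced by reduceByKey splits that job into stages, and each stage runs as parallel tasks on the executors.

```scala
import org.apache.spark.sql.SparkSession

object JobStageTaskDemo {
  def main(args: Array[String]): Unit = {
    // The SparkSession wraps the SparkContext described above.
    val spark = SparkSession.builder()
      .appName("job-stage-task-demo")
      .master("local[*]")                 // run locally, one thread per core
      .getOrCreate()
    val sc = spark.sparkContext

    // Transformations are lazy: nothing executes yet.
    val words  = sc.parallelize(Seq("spark", "hadoop", "spark", "flink"))
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _)

    // The action triggers a job; the shuffle from reduceByKey splits it
    // into stages, and each stage runs as parallel tasks (one per partition).
    counts.collect().foreach(println)

    spark.stop()
  }
}
```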
Apache Spark is principally based on two concepts, viz. Resilient Distributed Datasets (RDD) and the Directed Acyclic Graph (DAG). Casting light on RDDs first: an RDD is a collection of data items split into partitions and stored on the worker nodes. Two kinds of RDDs are supported: Hadoop datasets and parallelized collections. The former are created from files in HDFS, whereas the latter are built from Scala collections. Jumping to the DAG: it is a sequence of computations performed on data, arranged as a graph with no cycles. This simplifies processing by eliminating the redundant execution of operations, and it is one of the main reasons Apache Spark is preferred over Hadoop. Learn more about Apache Spark vs Hadoop Mapreduce.
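As a quick illustration of the two RDD flavours, the sketch below builds one RDD from a file and one from a Scala collection. It assumes the Spark shell, where sc is already defined, and the HDFS path is hypothetical.

```scala
// Hadoop dataset: an RDD backed by a file in HDFS (path is hypothetical)
val fromHdfs = sc.textFile("hdfs://namenode:9000/data/logs.txt")

// Parallelized collection: an RDD built from an in-memory Scala collection
val fromCollection = sc.parallelize(1 to 1000)
```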
Before delving deeper, let us go through the architecture. Apache Spark has a well-defined architecture in which the layers and components are loosely coupled, with plenty of libraries and extensions that do the job with ease. Chiefly, it is based on two main concepts, viz. RDD and DAG. To understand the architecture, you need a sound knowledge of components such as the Spark ecosystem and its basic data structure, the RDD.
Apache Spark, a well-known cluster computing platform, was developed with the goal of speeding up data processing applications. This popular open-source framework uses in-memory cluster computing to boost application performance.
Here are some features of the Spark architecture:
Strong Caching: A simple programming layer provides powerful caching and disk persistence capabilities (see the caching sketch after this list).
Real-Time: It enables minimal latency and real-time computation due to its in-memory processing.
Deployment: It may be deployed via Mesos, via Hadoop through YARN, or with Spark’s own cluster manager.
Polyglot: Spark provides APIs in Python, R, Scala, and Java, and Spark code may be written in any of these four languages. Additionally, Spark offers command-line shells for Scala and Python.
Speed: For processing massive volumes of data, Spark is up to 100 times faster than MapReduce. It can also partition the data into manageable chunks.
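The caching feature mentioned above can be illustrated with a short sketch, run in the Spark shell where sc is pre-defined; the input path is hypothetical.

```scala
val logs   = sc.textFile("hdfs:///data/events.log")  // hypothetical input
val errors = logs.filter(_.contains("ERROR"))

errors.cache()   // ask Spark to keep this RDD in memory
errors.count()   // first action computes the RDD and populates the cache
errors.count()   // later actions reuse the cached data instead of re-reading the file
```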
The layered architecture of Apache Spark is clearly defined and built around two fundamental abstractions:
1. Resilient Distributed Datasets (RDD)
The RDD is the essential abstraction for computing data. It serves as an interface for immutable, distributed data and allows the data to be recomputed in the case of a failure. RDDs support two kinds of operations: transformations, which produce new RDDs, and actions, which return results to the driver.
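A minimal sketch of the two kinds of operations, again assuming the Spark shell where sc is available:

```scala
val nums    = sc.parallelize(1 to 10)      // create an RDD
val doubled = nums.map(_ * 2)              // transformation: returns a new RDD, evaluated lazily
val evens   = doubled.filter(_ % 4 == 0)   // another transformation
val total   = evens.reduce(_ + _)          // action: triggers the computation and returns a value
```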
2. Directed Acyclic Graph (DAG)
Stage-oriented scheduling is implemented by the DAG scheduling layer of the Apache Spark architecture. For each job, the driver transforms the program into a DAG, which is a set of nodes connected by directed edges with no cycles.
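You can inspect the lineage that the DAG scheduler turns into stages with toDebugString; the snippet below is a small sketch run in the Spark shell.

```scala
val counts = sc.parallelize(Seq("a", "b", "a"))
  .map(w => (w, 1))
  .reduceByKey(_ + _)   // introduces a shuffle, hence a stage boundary

// Prints the RDD lineage that the scheduler converts into a DAG of stages
println(counts.toDebugString)
```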
The execution mode determines where the resources described above are physically located when the application runs. There are three execution modes available for selection:
1. Cluster Mode
Cluster mode is the most popular way to execute Spark applications. As soon as the cluster manager receives the pre-compiled JAR, Python script, or R script, it launches the driver process on a worker node inside the cluster, together with the executor processes. This means that all processes related to the Spark application are under the control of the cluster manager.
2. Client Mode
The only difference between client mode and cluster mode is that in client mode the Spark driver stays on the client machine that submitted the application. So the executor processes are maintained by the cluster manager, while the Spark driver process is maintained by the client machine. These client machines are commonly called edge nodes or gateway nodes.
3. Local Mode
In local mode, the complete Spark application runs on a single machine, and parallelism comes from threads on that same machine. This simple setup makes it easy to experiment, develop locally, and test applications. However, running production applications in this manner is not advised.
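The sketch below shows how local mode is selected in code; cluster and client mode are usually chosen when the packaged application is submitted, so the spark-submit lines are shown as comments and assume a YARN cluster and a hypothetical app.jar.

```scala
import org.apache.spark.sql.SparkSession

// Local mode: the driver and the executors run as threads in a single JVM.
val spark = SparkSession.builder()
  .appName("execution-mode-demo")
  .master("local[*]")          // one worker thread per CPU core
  .getOrCreate()

// Cluster mode and client mode are typically selected at submission time:
//   spark-submit --master yarn --deploy-mode cluster app.jar
//   spark-submit --master yarn --deploy-mode client  app.jar
```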
Spark is a platform unified into a whole for a couple of purposes – handling raw data reliably and processing it in an integrated way. Moving further, Spark code is quite easy to use and even easier to write, and it hides the complexities of storage, parallel programming, and much more.
Unquestionably, Spark ships without its own distributed storage and cluster management, even though it is well known as a distributed processing engine. Its two main parts are the compute engine and the Core APIs, yet it has a lot more to offer – GraphX, Spark Streaming, MLlib, and Spark SQL. The value of these components is well known. Processing algorithms, continuous processing of data, and the like rely solely on the Spark Core APIs.
A good deal of organizations need to work with massive data. The core component that coordinates the various workers is known as the driver, and the workers it coordinates are known as executors. Any Spark application is a combination of a driver and executors. Read more about the top Spark applications and uses.
Spark can cater to three kinds of workloads.
To truly get the gist of the concept, keep in mind that the Spark ecosystem has various components – Spark SQL, Spark Streaming, MLlib (Machine Learning Library), SparkR, and many others.
When learning about Spark SQL, note that to make the most of it, you execute queries on Spark data, including data that already comes from outer sources, while tuning for maximum efficiency in storage capacity, time, or cost.
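For example, a query over data loaded from an external source might look like the sketch below; the JSON file name and its columns are hypothetical, and spark is the SparkSession pre-defined in the Spark shell.

```scala
val people = spark.read.json("people.json")        // hypothetical external source
people.createOrReplaceTempView("people")

// Run a SQL query over the registered view
spark.sql("SELECT name, age FROM people WHERE age > 30").show()
```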
After this, Spark Streaming allows developers to carry out both batch processing and data streaming in the same application, so everything can be managed easily.
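As a sketch of the streaming side, the Structured Streaming word count below reads lines from a TCP socket and updates the counts in micro-batches; the host and port are illustrative, and spark is the shell's SparkSession.

```scala
import spark.implicits._

val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// Split the incoming lines into words and count them across the stream
val words  = lines.as[String].flatMap(_.split(" "))
val counts = words.groupBy("value").count()

val query = counts.writeStream
  .outputMode("complete")   // emit the full updated table each micro-batch
  .format("console")
  .start()

query.awaitTermination()
```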
Furthermore, the graph component, GraphX, allows data from ample sources to be used with great flexibility and resilience, making graphs easy to construct and transform.
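A small GraphX sketch, assuming the Spark shell where sc is defined; the vertex and edge attributes are purely illustrative.

```scala
import org.apache.spark.graphx._

// Vertices carry a name, edges carry a relationship label
val vertices = sc.parallelize(Seq((1L, "Alice"), (2L, "Bob")))
val edges    = sc.parallelize(Seq(Edge(1L, 2L, "follows")))

val graph = Graph(vertices, edges)
println(s"vertices = ${graph.numVertices}, edges = ${graph.numEdges}")
```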
Next comes SparkR, which makes it possible to use Apache Spark from R. It benefits from a distributed data frame implementation that supports operations on large data sets, and it also supports distributed machine learning through the machine learning libraries.
Finally, the Spark Core component, one of the most pivotal components of the Spark ecosystem, provides support for programming and supervision. The complete Spark ecosystem is built on top of this core execution engine, with APIs in several languages, viz. Scala, Python, etc.
What’s more, Spark is built on Scala. Needless to mention, Scala is the programming language that acts as the base of Spark, and Spark offers interfaces in both Scala and Python. Programs written in these languages can be run on Spark, and it is worth noting that code written in Scala and Python looks very similar. Read more about the role of Apache Spark in Big Data.
Spark also supports the two very common programming languages – R and Java.
Now that you have learned how the Spark ecosystem works, it is time to explore Apache Spark further through online learning programs. Get in touch with us to know more about our eLearning programs on Apache Spark.
If you are interested in learning more about Big Data, check out our Advanced Certificate Programme in Big Data from IIIT Bangalore.
Check our other Software Engineering Courses at upGrad.