What is Hive in Hadoop? History and Its Components
Updated on Nov 30, 2022 | 7 min read | 6k views
Apache Hive is an open-source data warehousing system built on top of Hadoop. Hive is used for querying and analyzing massive datasets stored within Hadoop, and it can process both structured and semi-structured data.
Through this article, let’s talk in detail about Hive in Hadoop, its history, its importance, Hive architecture, some key features, a few limitations, and more!
Apache Hive is simply data warehouse software built using Hadoop as the base. Before Apache Hive, Big Data engineers had to write complex map-reduce jobs to perform querying tasks. With Hive, on the other hand, that burden shrinks drastically, as engineers only need to know SQL.
Hive uses a language known as HiveQL (similar to SQL), making it easier for engineers who have a working knowledge of SQL. Hive automatically translates HiveQL queries into map-reduce jobs that Hadoop can execute.
In doing so, Apache Hive introduces a layer of abstraction over the workings of Hadoop and allows data experts to deal with complex datasets without learning the Java programming language. Apache Hive runs on your workstation and converts SQL queries into map-reduce jobs to be executed on the Hadoop cluster. Hive organizes all of your data into tables, thereby providing a structure to the data present in HDFS.
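To make this concrete, here is a minimal HiveQL sketch of that workflow; the table name, columns, and HDFS path are hypothetical, not taken from this article:

    -- Define a partitioned table over data in HDFS (all names are illustrative).
    CREATE TABLE page_views (
      user_id BIGINT,
      url STRING,
      view_time TIMESTAMP
    )
    PARTITIONED BY (dt STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE;

    -- Load a file from HDFS into one partition (the path is an assumption).
    LOAD DATA INPATH '/data/page_views/2022-11-30.csv'
    INTO TABLE page_views PARTITION (dt = '2022-11-30');

    -- Hive compiles this aggregation into map-reduce jobs behind the scenes.
    SELECT url, COUNT(*) AS views
    FROM page_views
    WHERE dt = '2022-11-30'
    GROUP BY url;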
The Data Infrastructure Team at Facebook introduced Apache Hive. It is one of the technologies Facebook has used proactively for numerous internal purposes. Over the years, Apache Hive has run thousands of jobs on the cluster for hundreds of users across a range of applications.
The Hive-Hadoop cluster at Facebook stores more than 3PB of raw data and loads around 15TB of new data daily. From there, Apache Hive grew even more in its use cases, and today, it is used by giants like IBM, Yahoo, Amazon, FINRA, Netflix, and more.
Before coming up with Apache Hive, Facebook struggled with many challenges, like the ever-increasing volume of data to analyze and the inconsistency within that large dataset. These challenges made it difficult for Facebook to handle its data-intensive tasks seamlessly. Traditional RDBMS-based structures were not enough to handle the ever-increasing pressure.
Facebook first introduced map-reduce to overcome these challenges but then simplified it further by offering Apache Hive, which works on HiveQL.
Eventually, Apache Hive emerged as the much-needed saviour and helped Facebook overcome these challenges. Using Apache Hive, Facebook was able to open up ad-hoc analysis and reporting to analysts through a familiar SQL-like interface, and to keep scaling that analysis as its data volumes grew.
All in all, Apache Hive helped developers save a lot of time that would otherwise go into writing complex map-reduce jobs. Hive brings simplicity to the summarization, analysis, querying, and exploration of data.
Because it relies only on SQL-style querying, Apache Hive is a fast, scalable, and highly extensible framework. If you understand basic querying using SQL, you’ll be able to work with Apache Hive in no time! It also offers file access to different data stores like HBase and HDFS.
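As a sketch of that cross-store access, Hive can map a table onto HBase through its HBase storage handler; the table and column names here are hypothetical:

    -- Expose an existing HBase table to HiveQL queries.
    CREATE EXTERNAL TABLE hbase_users (
      user_id STRING,
      name STRING
    )
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:name")
    TBLPROPERTIES ("hbase.table.name" = "users");

    -- Query HBase-resident data with ordinary HiveQL.
    SELECT name FROM hbase_users LIMIT 10;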
Now that you understand the importance and emergence of Apache Hive, let’s look at the major components of Apache Hive. The architecture of Apache Hive includes:
The Metastore is used for storing metadata for each of the tables. The metadata generally consists of the schema and the location of the data. The Metastore also holds the partition metadata, which helps engineers keep track of the different datasets that have been distributed over the cluster. The data stored here is kept in a traditional RDBMS format.
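A quick way to see what the Metastore records is to ask Hive directly; this sketch reuses the hypothetical page_views table from earlier:

    -- Show the schema, HDFS location, and other metadata for a table.
    DESCRIBE FORMATTED page_views;

    -- List the partition metadata tracked by the Metastore.
    SHOW PARTITIONS page_views;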
The driver in Apache Hive acts like a controller responsible for receiving HiveQL statements. It starts the execution of these statements by creating sessions, and it monitors and manages the life cycle of the execution and its progress along the way. The driver also holds the important metadata generated while a HiveQL statement executes, and it acts as a collection point for the data obtained after the map-reduce operation.
The compiler is used for compiling HiveQL queries. It converts a user’s query into an execution plan containing all the tasks that need to be performed, along with the steps and procedures map-reduce must follow to produce the required output. Concretely, the compiler first converts the query into an AST (Abstract Syntax Tree) to check for compile-time errors and compatibility issues; if no issues are encountered, the AST is then transformed into a Directed Acyclic Graph (DAG) of stages.
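You can inspect the plan the compiler produces using the EXPLAIN statement; here is a minimal sketch against the hypothetical table from earlier:

    -- Print the stage DAG Hive generates for this query.
    EXPLAIN
    SELECT url, COUNT(*) AS views
    FROM page_views
    GROUP BY url;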
The optimizer performs the transformations on the execution plan required to reach the optimized DAG. It does so by aggregating transformations together, like converting a chain of individual joins into a single join, to enhance performance. In addition, the optimizer can split tasks by applying a transformation to the data before the reduce operation is performed, again to improve overall performance.
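For instance, when successive joins share the same join key, Hive can evaluate them in one map-reduce job rather than several; the users and subscriptions tables below are hypothetical:

    -- Both joins use user_id, so the optimizer can merge them
    -- into a single map-reduce job instead of running two.
    SELECT v.url, u.country, s.plan
    FROM page_views v
    JOIN users u ON v.user_id = u.user_id
    JOIN subscriptions s ON v.user_id = s.user_id;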
Once Apache Hive has performed the compilation and optimization tasks, the executor carries out the final execution. It takes care of pipelining the tasks and seeing them through to completion.
The command-line interface (CLI) provides the external user with an interface for interacting with the different features of Apache Hive; the CLI is what makes up the UI of Hive for end-users. The Thrift server, on the other hand, allows external clients to interact with Hive over a network, using protocols such as JDBC or ODBC.
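As a sketch, an external client can reach Hive’s Thrift endpoint (HiveServer2) through the Beeline CLI over JDBC; the host name below is an assumption, and 10000 is the conventional default port:

    -- Connect from a shell with Beeline (host name is hypothetical):
    --   beeline -u jdbc:hive2://hive-server:10000/default
    -- Once connected, ordinary HiveQL flows through the Thrift server:
    SHOW TABLES;
    SELECT COUNT(*) FROM page_views WHERE dt = '2022-11-30';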
As mentioned earlier, Apache Hive brought about a much-needed change in the way engineers work on data jobs. Java is no longer the go-to language, and developers can work just by using SQL. Apart from that, Hive has several other essential features as well, such as support for partitioning and bucketing of tables, user-defined functions (UDFs) for custom logic, and a variety of storage formats, including plain text, ORC, and Parquet.
Since the world of Data Science is relatively new and ever-evolving, even the best tools available in the market have some limitations, and resolving those limitations is what will give us the next best tools. Here are a few limitations of Apache Hive to keep in mind: it is not designed for online transaction processing or low-latency queries, since HiveQL statements compile down to batch jobs, and row-level updates and deletes are restricted compared to a traditional RDBMS.
Apache Hive brought about drastic improvements in the way data engineers work on large datasets. Moreover, by eliminating the need for the Java programming language, Apache Hive brought a familiar comfort to data engineers. Today, you can work smoothly with Apache Hive if you have fundamental knowledge of SQL querying.
As we mentioned earlier, Data Science is a dynamic and ever-evolving field. We’re sure that the coming years will bring forth new tools and frameworks to simplify things even further. If you are a data enthusiast looking to learn all the tools of the trade of Data Science, now is the best time to get hands-on with Big Data tools like Hive.
At upGrad, we have mentored and guided students from all over the world and helped people from different backgrounds establish a firm foothold in the Data Science industry. Our expert teachers, industry partnerships, placement assistance, and robust alumni network ensure that you’re never alone in this journey. So check out our Executive PG Program in Data Science, and get yourself enrolled in the one that’s right for you – we’ll take care of the rest!