Understanding the Concept of Hierarchical Clustering in Data Analysis: Functions, Types & Steps
Updated on Apr 03, 2023 | 7 min read | 5.6k views
Clustering refers to grouping similar data points into groups, or clusters, in data analysis. These clusters help data analysts organise similar data points into one group while differentiating them from dissimilar data.
Hierarchical clustering of data is one of the methods used to group data into a tree of clusters. It is one of the most popular and useful approaches to data grouping. If you want to be a part of the growing field of data science and data analysis, hierarchical clustering is one of the most important things to learn.
This article will help you understand the nature of hierarchical clustering, its function, types and advantages.
As the name suggests, hierarchical clustering groups different data into clusters in a hierarchical or tree format. Every data point is treated as a separate cluster in this method. Hierarchical cluster analysis is very popular amongst data scientists and data analysts as it summarises the data into a manageable hierarchy of clusters that is easier to analyse.
A hierarchical clustering algorithm takes multiple data points and merges the two that are closest into a cluster. It repeats this step until all the data points have merged into one cluster. The process can also be inverted, dividing one single merged cluster into smaller clusters and, ultimately, into individual data points.
The hierarchical method of clustering can be visually represented as a dendrogram, a tree-like diagram. A dendrogram can be cut at any level of the clustering process, once the desired number of clusters has been formed, which also makes analysing the data easier.
The process of hierarchical clustering is quite simple to understand. A hierarchical clustering algorithm treats every available data point as its own cluster. Then, it identifies the two clusters that are most similar and merges them into one. It keeps repeating this step until all the data points have merged into one large cluster, or the process can be stopped once the required number of clusters is available for analysis.
The progress and output of a hierarchical clustering process can be visualised as a dendrogram that can help you identify the relationship between different clusters and how similar or different they are in nature.
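To make this merge-then-cut workflow concrete, here is a minimal sketch using SciPy's 'scipy.cluster.hierarchy' module. The four toy points are an assumption made up purely for illustration.
Python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Four made-up points: two tight pairs, far apart from each other
points = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])

# Build the full merge tree; each row of Z records one merge step
Z = linkage(points, method='ward')

# 'Cut' the tree once the desired number of clusters (here 2) is reached
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)  # e.g. [1 1 2 2]: the two tight pairs become two clusters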
A hierarchical clustering algorithm can be used in two different ways. Here are the characteristics of two types of hierarchical clustering that you can use.
The agglomerative method is the more widely used way of hierarchically clustering data. In this method, the algorithm is presented with multiple data points, each of which is treated as a cluster of its own. The algorithm then merges the closest pairs of clusters, step by step, based on how similar they are to each other, repeating until the required number of clusters is reached.
The divisive method of hierarchical clustering is the reverse of the agglomerative method. In this method, the algorithm is presented with a single large cluster containing numerous data points, which it splits step by step based on their dissimilarity. This results in multiple smaller clusters with different properties. The divisive method is not used often in practice.
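To see the agglomerative mechanics step by step, here is a deliberately simple sketch, not how production libraries implement it, assuming Euclidean distance, single linkage, and a made-up one-dimensional dataset. It repeatedly merges the two closest clusters until only one remains, printing each merge.
Python
import numpy as np

# Made-up 1-D points for illustration
points = np.array([[0.0], [0.3], [4.0], [4.2], [9.0]])
clusters = [[i] for i in range(len(points))]  # start: one cluster per point

def single_link(a, b):
    # Cluster distance = distance between the two closest members
    return min(np.linalg.norm(points[i] - points[j]) for i in a for j in b)

while len(clusters) > 1:
    # Find the pair of clusters with the smallest distance
    i, j = min(
        ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
        key=lambda p: single_link(clusters[p[0]], clusters[p[1]]),
    )
    print('merging', clusters[i], 'and', clusters[j])
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [clusters[i] + clusters[j]]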
Learn data science online from the world's top universities. Earn Executive PG Programs, Advanced Certificate Programs, or Masters Programs to fast-track your career.
As mentioned before, there are three main steps in the hierarchical clustering of data: treat every data point as its own cluster, merge the two most similar clusters, and repeat until the desired number of clusters remains.
However, it is also very important to remember how to identify similar points in hierarchical clustering. If you study a dendrogram produced by an algorithm, you can easily identify the central point of each cluster. The clusters that have the least distance between them in the dendrogram are the most similar, which is why hierarchical clustering is also referred to as a distance-based algorithm. The table of pairwise distances between all clusters is called a proximity matrix.
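As an illustration of a proximity matrix, the pairwise distances between observations can be computed with SciPy. The three points and the Euclidean metric below are assumptions chosen for the example.
Python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Made-up observations
data = np.array([[1, 1], [2, 1], [6, 5]])

# squareform expands the condensed pairwise distances into an n x n
# matrix whose (i, j) entry is the distance between observations i and j
D = squareform(pdist(data, metric='euclidean'))
print(np.round(D, 2))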
You also have to choose the correct distance measure while using hierarchical clustering. For example, a dataset describing the same group of people will produce different dendrograms depending on whether the distances are computed over their gender or their educational background.
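The short sketch below illustrates this point with made-up data: the same three points yield different nearest neighbours under Euclidean and city-block distance, so the first merge of a hierarchical clustering would differ.
Python
import numpy as np
from scipy.spatial.distance import pdist

# Made-up points chosen so that the metric changes the nearest pair
data = np.array([[0, 0], [3, 4], [6, 0]])

# Pairwise distances in pdist order: (0,1), (0,2), (1,2)
print(pdist(data, metric='euclidean'))  # [5. 6. 5.]
print(pdist(data, metric='cityblock'))  # [7. 6. 7.]
# Under Euclidean distance, point 0 is closest to point 1 (5 < 6);
# under city-block distance it is closest to point 2 (6 < 7)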
Now that you have a clear understanding of hierarchical clustering, let us look at how to perform hierarchical clustering in Python. Here is what it would look like using Python's 'scikit-learn' library.
Let us suppose that there are two variables (x and y) in a dataset with six observations:
| Observation | x | y |
| --- | --- | --- |
| 1 | 1 | 1 |
| 2 | 2 | 1 |
| 3 | 4 | 3 |
| 4 | 5 | 4 |
| 5 | 6 | 5 |
| 6 | 7 | 5 |
As a scatter plot, this is how these observations will be visualised:
Python
import numpy as np
import matplotlib.pyplot as plt
# Define the dataset
X = np.array([[1, 1], [2, 1], [4, 3], [5, 4], [6, 5], [7, 5]])
# Plot the data
plt.scatter(X[:,0], X[:,1])
plt.show()
There are two clusters of observations in this plot: one with lower values of x and y, and the other with higher values of x and y.
You can use 'scikit-learn' to perform hierarchical clustering on this dataset.
Check out our free data science courses to get an edge over the competition.
Out of the two main methods of hierarchical clustering discussed before, we will use the agglomerative method with the 'ward' linkage. The 'ward' method minimises the increase in within-cluster variance at each merge, and therefore tends to produce compact clusters of similar size.
Python
from sklearn.cluster import AgglomerativeClustering
# Perform hierarchical clustering
clustering = AgglomerativeClustering(n_clusters=2, linkage='ward').fit(X)
The 'n_clusters' parameter was used here to specify that we want two clusters.
We can use different colours for each cluster when we plot them:
Python
# Plot the clusters
colors = np.array(['r', 'b'])
plt.scatter(X[:, 0], X[:, 1], c=colors[clustering.labels_])
plt.show()
The two clusters in the data have been correctly identified by the clustering algorithm. You can also see which label the clustering algorithm has assigned to each observation:
Python
print(clustering.labels_)
Output:
[0 0 1 1 1 1]
The last four observations were assigned to cluster 1, while the first two were assigned to cluster 0.
If you want to visualise the hierarchical structure of these clusters, you can generate a dendrogram to do so:
Python
from scipy.cluster.hierarchy import dendrogram, linkage
# Compute the linkage matrix
Z = linkage(X, 'ward')
# Plot the dendrogram
dendrogram(Z)
plt.show()
The dendrogram can help us visualise the hierarchy of merged clusters.
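If you want to inspect that hierarchy numerically rather than graphically, you can also print the linkage matrix computed above; this follows SciPy's standard linkage format.
Python
# Each row of Z records one merge step:
# [cluster_i, cluster_j, merge distance, size of the new cluster]
# Indices 0-5 are the original observations; indices of 6 and above
# refer to clusters created by earlier merges
print(Z)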
Data clustering is a very important part of data science and data analysis. If you want to learn different clustering methods, then upGrad can help you kickstart your learning journey! With the aid of master classes, industry sessions, mentorship sessions, Python Programming Bootcamp, and live learning sessions, upGrad’s Master of Science in Data Science is a course designed for professionals to gain an edge over competitors.
Offered under the guidance of the University of Arizona, this course boosts your data science career with a cutting-edge curriculum, immersive learning experience with industry experts and job opportunities.