What is Decision Tree in Data Mining? Types, Real World Examples & Applications
Updated on Jul 05, 2024 | 15 min read | 19.1k views
In its raw form, data requires efficient processing to transform into valuable information. Predicting outcomes hinges on uncovering patterns, anomalies, or correlations within the data, a process known as “knowledge discovery in databases.”
The term “data mining” emerged in the 1990s, integrating principles from statistics, artificial intelligence, and machine learning. As someone deeply entrenched in this field, I’ve witnessed how automated data mining revolutionized analysis, accelerating the process significantly. With data mining, users can uncover insights and extract valuable knowledge from vast datasets more swiftly and effectively than ever before. It’s truly remarkable how technology has transformed the landscape of data analysis, making it more accessible and efficient for professionals across various industries.
Data mining can also be described as the process of identifying hidden patterns in information, which must then be categorized before the data can be converted into something useful. That useful data can be fed into a data warehouse, data mining algorithms, or data analysis for decision making.
A decision tree in data mining is a data mining technique that builds a model for classifying data. The model is built in the form of a tree structure and hence belongs to the supervised form of learning. Besides classification models, decision trees are used to build regression models that predict class labels or values, aiding the decision-making process. A decision tree can handle both numerical and categorical data, like gender, age, etc.
The structure of a decision tree consists of a root node, branches, and leaf nodes. Each internal node represents a test on an attribute, each branch represents an outcome of that test, and each leaf node represents a class label.
1. A decision tree works under the supervised learning approach for both discrete and continuous variables. The dataset is split into subsets on the basis of its most significant attribute; the algorithm identifies that attribute and carries out the split.
2. The structure of the decision tree starts from the root node, which is the most significant predictor node. Splitting proceeds through the decision nodes, the sub-nodes of the tree, and nodes that do not split any further are termed leaf or terminal nodes.
3. The dataset is divided into homogeneous, non-overlapping regions following a top-down approach. The top layer holds all observations in a single place, which then splits into branches. The process is termed a "greedy approach" because it optimizes only the current node rather than looking ahead to future nodes.
4. The tree keeps growing until a stopping criterion is reached.
5. A fully grown decision tree picks up a lot of noise and outliers. A method called "tree pruning" is applied to remove these noisy, outlier-driven branches, which increases the accuracy of the model.
6. The accuracy of a model is checked on a test set consisting of test tuples and their class labels: accuracy is the percentage of test-set tuples the model classifies correctly. A short pruning-and-accuracy sketch follows this list.
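Points 5 and 6 can be seen in practice with scikit-learn's cost-complexity pruning. This is a minimal sketch on a bundled toy dataset, not the loan example used later, and the ccp_alpha value is purely illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# a bundled toy dataset stands in for any labelled training data
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

unpruned = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
# ccp_alpha > 0 enables cost-complexity pruning; 0.01 is an illustrative value
pruned = DecisionTreeClassifier(ccp_alpha=0.01, random_state=42).fit(X_train, y_train)

# pruning trades a little training fit for better generalization on the test set
print(unpruned.score(X_test, y_test))
print(pruned.score(X_test, y_test))
```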
Figure 1: An example of an unpruned and a pruned tree
Decision trees lead to the development of models for classification and regression based on a tree-like structure. The data is broken down into smaller subsets. The result of a decision tree is a tree with decision nodes and leaf nodes. Two types of decision trees are explained below:
Classification involves building models that describe important class labels. These models are applied in areas such as machine learning and pattern recognition; decision-tree classification models drive applications like fraud detection and medical diagnosis. The two-step process of a classification model consists of a learning step, where the model is built from training data, and a classification step, where the model assigns class labels to new data.
Figure 2: Example of a classification model.
Regression models are used for the regression analysis of data, i.e. the prediction of numerical attributes. These are also called continuous values. Therefore, instead of predicting the class labels, the regression model predicts the continuous values.
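For contrast with the classification case, here is a minimal regression-tree sketch in scikit-learn on synthetic data; the dataset and depth are assumptions for illustration only:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(200, 1))
y = np.sin(X).ravel()  # a continuous target rather than a class label

reg = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(reg.predict([[2.5]]))  # the tree predicts a continuous value
```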
List of Algorithms Used
A decision tree algorithm known as ID3 was developed in the early 1980s by the machine learning researcher J. Ross Quinlan, who later succeeded it with algorithms such as C4.5. Both algorithms apply the greedy approach: no backtracking is used, and the trees are constructed in a top-down, recursive, divide-and-conquer manner. The algorithms use a training dataset with class labels, which gets divided into smaller subsets as the tree is constructed.
The computational cost of growing the tree is O(n × |D| × log |D|), where n is the number of attributes in the training dataset D and |D| is the number of tuples.
Figure 3: A discrete value splitting
The lists of algorithms used in a decision tree are:
ID3
While forming the decision tree, the whole dataset S is taken as the root node. The algorithm then iterates over every attribute, splitting the data into fragments, and at each step considers only attributes that have not already been used for earlier splits. Splitting data in the ID3 algorithm is time-consuming, and it is not an ideal algorithm because it tends to overfit the data.
C4.5
C4.5 is an advanced form of the ID3 algorithm in which the data are classified as samples. Unlike ID3, it can handle both continuous and discrete values efficiently, and a pruning method is included that removes the unwanted branches.
CART
The CART algorithm can perform both classification and regression tasks. Unlike ID3 and C4.5, it creates decision points by considering the Gini index, applying a greedy splitting method that aims to minimize a cost function. In classification tasks, the Gini index serves as the cost function, indicating the purity of the leaf nodes; in regression tasks, the sum of squared errors is used to find the best prediction.
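The Gini index for a node is 1 − Σᵢ pᵢ², where pᵢ is the fraction of tuples belonging to class i; a score of 0 means a pure leaf. A quick sketch (the function name is illustrative):

```python
def gini(counts):
    """Gini impurity of a node, given the class counts at that node."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

print(gini([50, 50]))   # 0.5 -> maximally impure two-class node
print(gini([100, 0]))   # 0.0 -> pure node, an ideal leaf
```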
CHAID
As the name suggests, CHAID stands for Chi-square Automatic Interaction Detector, a procedure that can deal with any type of variable: nominal, ordinal, or continuous. Regression trees use the F-test, while the Chi-square test is used in the classification model.
MARS
It stands for Multivariate Adaptive Regression Splines. The algorithm is implemented specifically in regression tasks where the data is mostly non-linear.
Greedy Recursive Binary Splitting
Splitting here is binary, producing two branches at each step. Candidate splits of the tuples are scored by calculating a split cost function; the lowest-cost split is selected, and the process is carried out recursively on the resulting subsets. A minimal sketch of the idea follows.
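This sketch scores every candidate (feature, threshold) binary split with a Gini-based cost and keeps the cheapest; function names and the tiny dataset are illustrative assumptions. A full tree builder would recurse on each branch until a stop criterion is reached:

```python
from collections import Counter

def gini_of(labels):
    """Gini impurity of a group of class labels."""
    total = len(labels)
    return 1.0 - sum((c / total) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    """Score every (feature, threshold) binary split; keep the cheapest."""
    best = None
    for feature in range(len(rows[0])):
        for threshold in {row[feature] for row in rows}:
            left = [l for row, l in zip(rows, labels) if row[feature] < threshold]
            right = [l for row, l in zip(rows, labels) if row[feature] >= threshold]
            if not left or not right:
                continue
            # split cost = size-weighted Gini impurity of the two branches
            cost = (len(left) * gini_of(left) + len(right) * gini_of(right)) / len(rows)
            if best is None or cost < best[0]:
                best = (cost, feature, threshold)
    return best

# finds the threshold that separates the two classes perfectly (cost 0.0)
print(best_split([[2.0], [3.0], [10.0], [11.0]], ["no", "no", "yes", "yes"]))
```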
Overall, decision trees play a crucial role in data mining by facilitating classification, prediction, visualization, feature selection, and interpretability in the analysis of large datasets.
Worked example: predict loan eligibility from the given data.
Step 1: Loading the data
The null values can either be dropped or filled in with substitute values. The original dataset's shape was (614, 13); after dropping the null values, the new dataset's shape is (480, 13).
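A minimal loading sketch with pandas, assuming a local CSV copy of the dataset (the file name is an assumption):

```python
import pandas as pd

df = pd.read_csv("loan_data.csv")  # file name is an assumption
print(df.shape)                    # (614, 13)
df = df.dropna()                   # drop rows containing null values
print(df.shape)                    # (480, 13)
```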
Step 2: A look at the dataset.
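A quick peek at the cleaned frame from Step 1:

```python
print(df.head())   # first few rows of the cleaned dataset
print(df.dtypes)   # a mix of categorical and numeric columns
```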
Step 3: Splitting the data into training and test sets.
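A typical split with scikit-learn; the target column name Loan_Status is an assumption about this dataset, and get_dummies one-hot encodes the categorical columns so the tree can consume them:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

X = pd.get_dummies(df.drop(columns=["Loan_Status"]))  # features, one-hot encoded
y = df["Loan_Status"]                                 # target label (assumed name)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```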
Step 4: Build the model and fit it on the training set.
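A minimal sketch using scikit-learn's DecisionTreeClassifier, assuming the split from Step 3; the criterion can be switched between "gini" and "entropy", matching Figures 5 and 6 below:

```python
from sklearn.tree import DecisionTreeClassifier

# criterion may be "gini" or "entropy"
model = DecisionTreeClassifier(criterion="entropy", random_state=42)
model.fit(X_train, y_train)
```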
Before visualization, a few calculations need to be made.
Calculation 1: Calculate the entropy of the total dataset. Summing the column figures below gives p = 332, n = 148 (p + n = 480), so Entropy(S) ≈ 0.89.
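All the entropy figures that follow come from the standard two-class formula, Entropy = −(p/(p+n))·log2(p/(p+n)) − (n/(p+n))·log2(n/(p+n)). A small helper to reproduce them (the function name is illustrative):

```python
import math

def entropy(p, n):
    """Entropy of a group containing p positive and n negative tuples."""
    if p == 0 or n == 0:
        return 0.0  # a pure group carries no uncertainty
    fp, fn = p / (p + n), n / (p + n)
    return -fp * math.log2(fp) - fn * math.log2(fn)

print(round(entropy(332, 148), 2))  # entropy of the full dataset: 0.89
print(round(entropy(278, 116), 2))  # Entropy(G=Male) below: 0.87
```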
Calculation 2: Find the entropy and gain for every column.
1) Gender Column
p = 278, n = 116, p + n = 394
Entropy(G=Male) = 0.87
p = 54 , n = 32 , p+n = 86
Entropy(G=Female) = 0.95
Gain = 0.01
2) Married Column
This split takes the whole dataset with Married status "Yes":
p = 227 , n = 84 , p+n = 311
E(Married = Yes) = 0.84
This split takes the whole dataset with Married status "No":
p = 105 , n = 64 , p+n = 169
E(Married = No) = 0.957
Gain = 0.01
3) Education Column
p = 271, n = 112, p + n = 383
E(Education = Graduate) = 0.87
p = 61 , n = 36 , p+n = 97
E(Education = Not Graduate) = 0.95
Gain = 0.01
4) Self-Employed Column
p = 43 , n = 23 , p+n = 66
E(Self-Employed=Yes) = 0.93
p = 289 , n = 125 , p+n = 414
E(Self-Employed=No) = 0.88
Gain = 0.01
5) Credit Score Column
p = 325, n = 85, p + n = 410
E(Credit Score = 1) = 0.73
p = 63 , n = 7 , p+n = 70
E(Credit Score = 0) = 0.46
Gain = 0.2
Comparing all the gain values, Credit Score has the highest gain. Hence, it will be used as the root node.
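The gain figures follow from subtracting the weighted entropy of a column's branches from the entropy of the whole dataset. The Credit Score value can be checked with the entropy helper above (a sketch using the figures from this section):

```python
def gain(parent_p, parent_n, splits):
    """Information gain: parent entropy minus the weighted branch entropies."""
    total = parent_p + parent_n
    weighted = sum((p + n) / total * entropy(p, n) for p, n in splits)
    return entropy(parent_p, parent_n) - weighted

# Credit Score branches: (p, n) for Credit Score = 1 and Credit Score = 0
print(round(gain(332, 148, [(325, 85), (63, 7)]), 2))  # 0.2
```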
Step 5: Visualize the Decision Tree
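A sketch using scikit-learn's plot_tree, assuming the fitted model from Step 4 (the class names are assumptions about the label encoding in this dataset):

```python
import matplotlib.pyplot as plt
from sklearn.tree import plot_tree

plt.figure(figsize=(20, 10))
plot_tree(model, feature_names=list(X.columns), class_names=["N", "Y"], filled=True)
plt.show()
```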
Figure 5: Decision tree with criterion Gini
Figure 6: Decision tree with criterion entropy
Step 6: Check the score of the model
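Scoring on the held-out test set, assuming the model and split from the earlier steps:

```python
from sklearn.metrics import accuracy_score

y_pred = model.predict(X_test)
print(accuracy_score(y_test, y_pred))  # fraction of test tuples classified correctly
```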
The model scores almost 80% accuracy.
Decision trees are mostly used by information experts to carry out analytical investigations, and they are used extensively in business to analyze or predict difficulties. Their flexibility allows them to be applied across many different areas:
Decision trees allow predicting whether a patient is suffering from a particular disease based on conditions such as age, weight, sex, etc. Other predictions include deciding the effect of a medicine, considering factors like composition and period of manufacture.
Decision trees help in predicting whether a person is eligible for a loan, considering their financial status, salary, family members, etc. They can also identify credit card fraud, loan defaults, and so on.
Shortlisting a student based on their merit score, attendance, etc. can also be decided with the help of decision trees.
If you are interested in gaining hands-on experience in data mining and getting trained by experts in the field, you can check out upGrad's Executive PG Program in Data Science. The course is directed at any age group within 21-45 years, with minimum eligibility of 50% or equivalent passing marks in graduation. Any working professional can join this Executive PG Program, certified by IIIT Bangalore.
Understanding a decision tree in data mining is pivotal for mid-career professionals seeking to enhance their analytical skills. Decision trees serve as powerful tools for classification and prediction tasks, offering a clear and interpretable framework for data analysis. By exploring the various types of decision trees and real-world examples, professionals can gain valuable insights into their applications across diverse industries. Armed with this knowledge, individuals can leverage decision trees to make informed decisions and drive business outcomes. Moving forward, continued learning and practical application of decision tree techniques will further empower professionals to excel in the dynamic field of data mining.