
What is Decision Tree in Data Mining? Types, Real World Examples & Applications

By Rohit Sharma

Updated on Jul 05, 2024 | 15 min read | 19.1k views


Introduction to Data Mining

In its raw form, data requires efficient processing to transform into valuable information. Predicting outcomes hinges on uncovering patterns, anomalies, or correlations within the data, a process known as “knowledge discovery in databases.” 

The term “data mining” emerged in the 1990s, integrating principles from statistics, artificial intelligence, and machine learning. As someone deeply entrenched in this field, I’ve witnessed how automated data mining revolutionized analysis, accelerating the process significantly. With data mining, users can uncover insights and extract valuable knowledge from vast datasets more swiftly and effectively than ever before. It’s truly remarkable how technology has transformed the landscape of data analysis, making it more accessible and efficient for professionals across various industries.

Data mining can also be described as the process of identifying hidden patterns in data, which must be categorized before they become useful information. That information can in turn feed a data warehouse, further mining algorithms, or data analysis for decision-making.


Decision Tree in Data Mining

A decision tree is a data mining technique that builds a model for classifying data. The model takes the form of a tree structure and therefore belongs to the supervised family of learning methods. Besides classification models, decision trees are used to build regression models for predicting class labels or numeric values, aiding the decision-making process. A decision tree can handle both numerical and categorical data, such as age and gender.

Structure of a decision tree

The structure of a decision tree consists of a root node, branches, and leaf nodes. Internal nodes represent tests on an attribute, branches represent the outcomes of those tests, and leaf nodes represent class labels.

Working of a decision tree

1. A decision tree works under the supervised learning approach, for both discrete and continuous variables. The dataset is split into subsets on the basis of its most significant attribute; identifying that attribute and performing the split is done by the algorithm.

2. The structure of the decision tree starts from the root node, the most significant predictor. Splitting proceeds from the decision nodes, which are the sub-nodes of the tree. Nodes that do not split further are termed leaf or terminal nodes.

3. The dataset is divided into homogeneous, non-overlapping regions following a top-down approach: the top layer holds all observations in a single place, which then split into branches. The process is termed a “greedy approach” because it focuses only on the current node rather than on future nodes.

4. The tree keeps growing until a stopping criterion is reached.

5. A fully grown tree tends to fit the noise and outliers in the data. To remove the branches that reflect them, a method called “tree pruning” is applied, which increases the accuracy of the model.

6. The accuracy of a model is checked on a test set consisting of test tuples and their class labels. A model’s accuracy is the percentage of test-set tuples that it classifies correctly.
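
To make the workflow above concrete, here is a minimal sketch using scikit-learn; the dataset (iris) and hyperparameter values are illustrative assumptions, not from the article:

# A minimal sketch of the workflow above, using scikit-learn's
# DecisionTreeClassifier on the built-in iris dataset (illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# max_depth acts as a simple stopping criterion (point 4);
# ccp_alpha > 0 enables cost-complexity pruning (point 5).
clf = DecisionTreeClassifier(criterion="entropy", max_depth=3, ccp_alpha=0.01)
clf.fit(X_train, y_train)

# Accuracy on held-out test tuples (point 6).
print("Test accuracy:", clf.score(X_test, y_test))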

Figure 1: An example of an unpruned and a pruned tree


 


 

Types of Decision Tree

Decision trees lead to the development of classification and regression models based on a tree-like structure: the data is broken down into smaller and smaller subsets, and the result is a tree with decision nodes and leaf nodes. The two types of decision trees are explained below:

1. Classification

Classification involves building models that describe important class labels; such models are applied in machine learning and pattern recognition. Classification decision trees in machine learning drive applications such as fraud detection and medical diagnosis. The two-step process of a classification model comprises:

  • Learning: a classification model is built from the training data.
  • Classification: the model’s accuracy is checked, and the model is then used to classify new data. Class labels take discrete values such as “yes” or “no”.

Figure 2: Example of a classification model.


2. Regression

Regression models are used for regression analysis of data, i.e. predicting numerical attributes, also called continuous values. Instead of predicting class labels, a regression model predicts continuous values.
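
As a short illustrative sketch (synthetic data; parameter values assumed), a regression tree fits a continuous target:

# A sketch of a regression tree predicting continuous values (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(200, 1))             # one numeric feature
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)  # noisy continuous target

reg = DecisionTreeRegressor(max_depth=4)
reg.fit(X, y)

# The model predicts a continuous value, not a class label.
print(reg.predict([[2.5]]))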

List of Algorithms Used

A decision tree algorithm known as “ID3” was developed around 1980 by the machine learning researcher J. Ross Quinlan, who later succeeded it with algorithms such as C4.5. Both algorithms apply the greedy approach: C4.5 uses no backtracking, and trees are constructed in a top-down, recursive, divide-and-conquer manner. The algorithm takes a training dataset with class labels, which is divided into smaller subsets as the tree is constructed.

  • Three parameters are selected initially: the attribute list, the attribute selection method, and the data partition. The attribute list describes the attributes of the training set.
  • The attribute selection method specifies how to choose the attribute that best discriminates among the tuples.
  • The resulting tree structure depends on the attribute selection method.
  • The construction of the tree starts with a single node.
  • Tuples are split when a node’s tuples carry different class labels, which leads to branch formation in the tree.
  • The splitting method determines which attribute should be selected for the data partition; branches are grown from a node according to the outcomes of the test.
  • Splitting and partitioning are carried out recursively, ultimately producing a decision tree for the training-dataset tuples.
  • Tree formation continues until the remaining tuples can no longer be partitioned.
  • The computational complexity of the algorithm is O(n * |D| * log |D|), where n is the number of attributes and |D| is the number of tuples in the training dataset D.


Figure 3: Splitting on a discrete-valued attribute

The algorithms used in decision trees are listed below:

ID3

The whole dataset S is treated as the root node while the decision tree is formed. The algorithm then iterates over every attribute, splitting the data into fragments, and at each step considers only attributes that have not already been used. Splitting data in ID3 is time-consuming, and it is not an ideal algorithm, as it tends to overfit the data.

C4.5

It is an advanced form of the ID3 algorithm in which the data are classified as samples. Unlike ID3, it can handle both continuous and discrete values efficiently, and a pruning method is included to remove unwanted branches.

CART

The algorithm can perform both classification and regression tasks. Unlike ID3 and C4.5, decision points are created by considering the Gini index. A greedy algorithm is applied to the splitting, aiming to reduce a cost function: in classification tasks the Gini index serves as the cost function, indicating the purity of leaf nodes, while in regression tasks the sum of squared errors is used to find the best prediction.
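
A small sketch of the Gini index CART uses to measure node impurity (the example labels are made up):

# Gini impurity: 1 - sum(p_k^2). 0 means a perfectly pure node.
from collections import Counter

def gini_index(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

print(gini_index(["yes", "yes", "no", "no"]))  # 0.5: maximally impure for 2 classes
print(gini_index(["yes", "yes", "yes"]))       # 0.0: a pure leaf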

CHAID

As the name suggests, it stands for Chi-square Automatic Interaction Detector, a procedure that can deal with any type of variable: nominal, ordinal, or continuous. Regression trees use the F-test, while classification trees use the Chi-square test.

MARS

It stands for multivariate adaptive regression splines. The algorithm is implemented especially in regression tasks, where the data is mostly non-linear.

Greedy Recursive Binary Splitting

A binary split produces exactly two branches. Candidate splits of the tuples are evaluated with a split cost function, the lowest-cost split is selected, and the process is carried out recursively on the resulting subsets.
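
An illustrative sketch of greedy binary splitting on a single numeric attribute: every threshold is tried, and the split with the lowest weighted Gini cost wins (the feature values and labels here are made up):

from collections import Counter

def gini(labels):
    n = len(labels)
    return 0.0 if n == 0 else 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_binary_split(values, labels):
    best_threshold, best_cost = None, float("inf")
    for threshold in sorted(set(values)):
        left = [lab for v, lab in zip(values, labels) if v <= threshold]
        right = [lab for v, lab in zip(values, labels) if v > threshold]
        # Weighted Gini cost of this candidate split.
        cost = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if cost < best_cost:
            best_threshold, best_cost = threshold, cost
    return best_threshold, best_cost

print(best_binary_split([1, 2, 3, 10, 11, 12], ["no", "no", "no", "yes", "yes", "yes"]))
# (3, 0.0): splitting at 3 separates the two classes perfectly.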

Functions of Decision Tree in Data Mining  

  • Classification: Decision trees serve as powerful tools for classification tasks in data mining. They classify data points into distinct categories based on predetermined criteria. 
  • Prediction: Decision trees can predict outcomes by analyzing input variables and identifying the most likely outcome based on historical data patterns. 
  • Visualization: Decision trees offer a visual representation of the decision-making process, making it easier for users to interpret and understand the underlying logic. 
  • Feature Selection: Decision trees assist in identifying the most relevant features or variables that contribute to the classification or prediction process. 
  • Interpretability: Decision trees provide transparent and interpretable models, allowing users to understand the rationale behind each decision made by the algorithm. 

Overall, decision trees play a crucial role in data mining by facilitating classification, prediction, visualization, feature selection, and interpretability in the analysis of large datasets.
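
As a quick illustration of the feature-selection and interpretability points, a fitted scikit-learn tree exposes impurity-based feature importances (the dataset and depth below are illustrative assumptions):

# A sketch of feature selection via a fitted tree's importances.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Higher importance means the feature drove more (and purer) splits.
for name, imp in zip(data.feature_names, clf.feature_importances_):
    print(f"{name}: {imp:.2f}")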

Decision Tree with Real World Example

The task: predict loan eligibility from the given data.

Step 1: Load the data

Null values can either be dropped or filled in with substitute values. The original dataset’s shape was (614, 13); after dropping the rows with null values, the new shape is (480, 13).
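
A sketch of this step, assuming the loan data lives in a CSV file called loan_data.csv (the file name is an assumption; the shapes match those quoted above):

# Step 1 sketch: load the loan data and drop rows with null values.
import pandas as pd

df = pd.read_csv("loan_data.csv")
print(df.shape)   # (614, 13) before cleaning

df = df.dropna()  # alternatively, fill nulls with df.fillna(...)
print(df.shape)   # (480, 13) after dropping null rows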

Step 2: Take a look at the dataset.

Step 3: Split the data into training and test sets.
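
A sketch of the split, assuming the target column is named Loan_Status and an 80/20 split (both are assumptions; the article does not specify them):

# Step 3 sketch: hold out a test set for later evaluation.
from sklearn.model_selection import train_test_split

X = df.drop(columns=["Loan_Status"])  # "Loan_Status" as the target is assumed
y = df["Loan_Status"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1
)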

Step 4: Build the model and fit it to the training set
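
A sketch of this step (the choice of criterion is an assumption):

# Step 4 sketch: build a decision tree and fit it to the training set.
from sklearn.tree import DecisionTreeClassifier

model = DecisionTreeClassifier(criterion="entropy")  # "gini" is also available
model.fit(X_train, y_train)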

Before visualizing the tree, some calculations need to be made.

Calculation 1: Calculate the entropy of the total dataset.
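
A worked version of this calculation, using the class counts implied by the per-column figures below (p = 332 eligible and n = 148 ineligible out of 480 rows):

E(S) = -(332/480) * log2(332/480) - (148/480) * log2(148/480) ≈ 0.89

This matches the gains reported below, e.g. Gain(Credit Score) = 0.89 - 0.69 = 0.2.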

Calculation 2: Find the entropy and gain for every column.

  1. Gender column
  • Condition 1: the subset of the dataset containing all males; then,

p = 278, n = 116, p+n = 394

Entropy(G=Male) = 0.87

  • Condition 2: the subset of the dataset containing all females; then,

p = 54, n = 32, p+n = 86

Entropy(G=Female) = 0.95

  • Average information in the Gender column ≈ 0.88, so Gain ≈ 0.01
  2. Married column
  • Condition 1: Married = Yes(1)

This split takes the subset of the dataset with married status “yes”:

p = 227, n = 84, p+n = 311

E(Married = Yes) = 0.84

  • Condition 2: Married = No(0)

This split takes the subset of the dataset with married status “no”:

p = 105, n = 64, p+n = 169

E(Married = No) = 0.957

  • Average information in the Married column ≈ 0.88, so Gain ≈ 0.01
  3. Education column
  • Condition 1: Education = Graduate(1)

p = 271, n = 112, p+n = 383

E(Education = Graduate) = 0.87

  • Condition 2: Education = Not Graduate(0)

p = 61, n = 36, p+n = 97

E(Education = Not Graduate) = 0.95

  • Average information in the Education column = 0.886

Gain = 0.01

  4. Self-Employed column

  • Condition 1: Self-Employed = Yes(1)

p = 43, n = 23, p+n = 66

E(Self-Employed=Yes) = 0.93

  • Condition 2: Self-Employed = No(0)

p = 289, n = 125, p+n = 414

E(Self-Employed=No) = 0.88

  • Average information in the Self-Employed column = 0.886

Gain = 0.01

  5. Credit Score column: the column takes the values 0 and 1.
  • Condition 1: Credit Score = 1

p = 325, n = 85, p+n = 410

E(Credit Score = 1) = 0.73

  • Condition 2: Credit Score = 0

p = 63, n = 7, p+n = 70

E(Credit Score = 0) = 0.46

  • Average information in the Credit Score column = 0.69

Gain = 0.2

Comparing all the gain values: Credit Score has the highest gain (0.2), so it becomes the root node.

Step 5: Visualize the Decision Tree
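
A sketch of the visualization step, using scikit-learn’s plot_tree (the figure size and class names are assumptions):

# Step 5 sketch: render the fitted tree with matplotlib.
import matplotlib.pyplot as plt
from sklearn import tree

plt.figure(figsize=(12, 8))
tree.plot_tree(
    model,
    feature_names=list(X.columns),
    class_names=["No", "Yes"],  # assumed label names
    filled=True,
)
plt.show()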

Figure 5: Decision tree with criterion Gini


Figure 6: Decision tree with criterion entropy


Step 6: Check the score of the model
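
A sketch of the final check:

# Step 6 sketch: accuracy on the held-out test set.
print("Test accuracy:", model.score(X_test, y_test))  # ~0.80 per the article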

The model scores roughly 80% accuracy.

List of Applications

Decision trees are mostly used by information experts to carry out analytical investigations. They are also used extensively in business to analyze or predict problems. The flexibility of decision trees allows them to be used in many different areas:

1. Healthcare

Decision trees allow predicting whether a patient is suffering from a particular disease based on conditions such as age, weight, and sex. Other predictions include judging the effect of a medicine by considering factors like its composition and period of manufacture.

2. Banking sectors

Decision trees help predict whether a person is eligible for a loan by considering financial status, salary, family members, and so on. They can also identify credit card fraud, loan defaults, etc.

3. Educational Sectors

Whether to shortlist a student based on merit score, attendance, etc. can be decided with the help of decision trees.

List of Advantages

  • The interpretable results of a decision tree model can be presented to senior management and stakeholders.
  • Building a decision tree model requires no preprocessing of the data, i.e. no normalization, scaling, and so on.
  • Both numerical and categorical data can be handled by a decision tree, which makes it more broadly applicable than many other algorithms.
  • Missing values in the data do not affect the process of building a decision tree, making it a flexible algorithm.

What Next? 

If you are interested in gaining hands-on experience in data mining and getting trained by experts in the field, you can check out upGrad’s Executive PG Program in Data Science. The course is aimed at anyone aged 21 to 45 with minimum eligibility of 50% or equivalent passing marks in graduation. Any working professional can join this executive PG program, certified by IIIT Bangalore.

Conclusion:

Understanding a decision tree in data mining is pivotal for mid-career professionals seeking to enhance their analytical skills. Decision trees serve as powerful tools for classification and prediction tasks, offering a clear and interpretable framework for data analysis. By exploring the various types of decision trees and real-world examples, professionals can gain valuable insights into their applications across diverse industries. Armed with this knowledge, individuals can leverage decision trees to make informed decisions and drive business outcomes. Moving forward, continued learning and practical application of decision tree techniques will further empower professionals to excel in the dynamic field of data mining. 

Frequently Asked Questions (FAQs)

1. What is a Decision Tree in Data Mining?

2. What are some of the important nodes used in Decision Trees?

3. What are the advantages of using Decision Trees?
