Decision Tree Example: A Comprehensive Guide to Understanding and Implementing Decision Trees

By Pavan Vadapalli

Updated on Feb 04, 2025 | 24 min read

A decision tree is a machine learning algorithm that makes decisions by splitting input data based on its attributes. Its simplicity and flexibility make it a popular choice among various machine learning algorithms.

A decision tree has a hierarchical structure that represents the decision-making process using nodes and branches. It splits data at each node to reach the final decision at the terminal state.

If you're looking to understand how a decision tree works, a decision tree example in this blog will guide you through it. You'll learn how to build a decision tree and explore its practical applications. Dive in!

What is a Decision Tree? Definition and Key Components

A decision tree is a supervised learning algorithm used for both classification and regression tasks. The algorithm makes decisions based on the features (attributes) of the input data, with the goal of predicting an outcome.

The decision tree is hierarchical in structure and consists of nodes and branches. Here’s a breakdown of the key components of a decision tree.

  • Root Node

The root is the topmost node in the decision tree that represents the entire dataset. The root node is split into two or more branches based on a decision rule.

  • Internal Nodes

Internal nodes mark the decision point where a dataset is divided based on a feature value. Each internal node represents a feature in the dataset and shows how the data is split at that level.

  • Branches

Branches are edges that connect nodes and represent the outcome of a decision rule. Each branch represents the outcome of a decision on the feature values.

  • Leaf Nodes

The leaf nodes are the terminal nodes that provide the final output of the decision-making process. In classification, they represent the predicted class label, while in regression, they represent a predicted value.

Learn how to use popular machine learning algorithms like decision trees for classification and regression tasks. Enroll in upGrad’s Online Artificial Intelligence & Machine Learning Programs and prepare yourself for a career in machine learning.

Now that you understand the components of a decision tree, let’s see how these elements come together in a practical analysis example.

Decision Tree Analysis Example

To understand a decision tree better, here’s a real-world example of a decision tree used to predict loan approval based on applicant features. You’ll use features such as credit score, income, and employment status to make the decisions.

Here’s the decision tree analysis example for predicting loan approval.

Dataset Overview

The dataset used for predicting loan approval would be based on the following features.

  • Credit Score: It is a numeric score that represents the applicant's creditworthiness. In India, a CIBIL score (300-900) is used.
  • Income: The applicant's annual income in INR (Indian Rupees).
  • Employment Status: Whether the applicant is employed full-time or part-time.
  • Loan Approval: The target variable. It is usually yes or no and represents whether the loan was approved or not.

Building the Decision Tree

To create a decision tree, follow these essential steps to ensure accurate model building and effective decision-making.

1. Data Preprocessing

Clean the dataset to handle any missing or inconsistent values. Convert categorical values such as Employment Status into numerical values for the algorithm to understand (e.g., Full-Time = 1, Part-Time = 2, Unemployed = 0).
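
As a quick illustration of this preprocessing step, here is a minimal sketch using pandas. The column names, category codes, and applicant values are hypothetical and simply mirror the mapping described above.

import pandas as pd

# Hypothetical applicant data (illustrative values only)
df = pd.DataFrame({
    "credit_score": [720, 580, 650, 410],
    "income": [800000, 300000, 550000, 250000],
    "employment_status": ["Full-Time", "Part-Time", "Full-Time", "Unemployed"],
})

# Encode the categorical feature with the numeric codes used in this example
status_map = {"Unemployed": 0, "Full-Time": 1, "Part-Time": 2}
df["employment_status"] = df["employment_status"].map(status_map)

# Handle any remaining missing numeric values with a simple default (here, the column median)
df = df.fillna(df.median(numeric_only=True))
print(df)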

2. Choose the Split Criteria

The decision tree uses criteria like Gini impurity or information gain (entropy) to choose the best feature at each node. These metrics ensure that each split produces subsets that are as pure (class-homogeneous) as possible.

3. Build the Tree

Starting at the root node, the tree will decide the best feature to split on. For instance, the root node might first ask: Is credit score > 600?

  • If the answer is Yes, it may next check: Is Income > 5,00,000 INR?
  • If the answer is No, it could check: Is Employment Status = Full-Time?
  • The leaf nodes will represent the final decision (Loan Approval: Yes or No).

Here’s the representation of the decision tree.

                          [Root Node]
                    Is Credit Score > 600?
                     /                   \
                  Yes                     No
                   |                       |
          [Internal Node]           [Internal Node]
      Is Income > 5,00,000?   Is Employment Status = Full-Time?
          /         \                /           \
        Yes          No            Yes            No
         |            |             |              |
    [Leaf Node]  [Leaf Node]   [Leaf Node]    [Leaf Node]
        Yes          No            Yes            No

Interpreting the Decision Tree Example Results

For the given example, let’s understand how the decision tree gives the result.

1. Root Node (Credit Score > 600):

If the credit score is above 600, the applicant is more likely to get approved. The next step is to check income:

  • If income > 5,00,000 INR, the loan is approved (Leaf Node: Yes).
  • If income ≤ 5,00,000 INR, the loan is more likely to be rejected (Leaf Node: No).

2. If Credit Score ≤ 600:

If the credit score is less than or equal to 600, you have to check the Employment Status:

  • If employment is full-time, the loan may still be approved despite the lower credit score (Leaf Node: Yes).
  • If not employed full-time (part-time or unemployed), the loan is likely to be rejected (Leaf Node: No).
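
To make this concrete, here is a minimal, self-contained sketch of how such a loan-approval tree could be trained with scikit-learn. The training rows below are hypothetical, so the learned splits illustrate the idea rather than reproduce the hand-drawn tree exactly.

from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicants: [credit_score, annual_income_in_inr, employment_status]
# employment_status: 1 = Full-Time, 0 = Part-Time/Unemployed
X = [
    [720, 800000, 1], [650, 300000, 1], [580, 450000, 1],
    [540, 600000, 0], [700, 550000, 0], [610, 200000, 0],
]
y = ["Yes", "No", "Yes", "No", "Yes", "No"]  # loan approval labels

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

# Print the learned rules as text to compare with the tree sketched above
print(export_text(clf, feature_names=["credit_score", "income", "employment_status"]))

# Predict for a new applicant: credit score 640, income 6,00,000 INR, full-time
print(clf.predict([[640, 600000, 1]]))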

Also Read: How to Create Perfect Decision Tree | Decision Tree Algorithm [With Examples]

Now that you’ve explored a decision tree analysis example, let’s check out the mechanism behind the working of the decision tree. 

Understanding the Working Mechanism of Decision Trees 

A decision tree is a recursive algorithm that splits the dataset into smaller subsets until a terminal condition is reached. Each node in the decision tree represents a decision based on an attribute.

Here’s a step-by-step breakdown of how a decision tree is constructed.

Step 1: Start with the root node containing the entire dataset (denoted as S)

The entire dataset is considered as S, where each instance in the dataset has features and a target label. The root node represents this dataset S.

The goal is to split this dataset in such a way that it leads to clearer and more accurate predictions.

Step 2: Select the most relevant attribute in the dataset using an Attribute Selection Measure (ASM)

The decision tree algorithm must decide which attribute to use to split the dataset. This is done using an Attribute Selection Measure (ASM) such as Information Gain, Gini Impurity, or Chi-square.

The objective is to select the attribute that best divides the data based on the chosen measure. This ensures that a single class dominates the resulting subsets.

Step 3: Split the dataset (S) into subsets based on the chosen attribute's possible values

After selecting the best feature, the dataset S is divided into subsets based on the possible values of that attribute. For example, if the selected feature is income, the data can be split into subsets for "income > 50,000" and "income ≤ 50,000".

The objective of this step is to break down the data into smaller and more manageable pieces. The subsets will be analyzed in the subsequent steps.

Step 4: Create a decision tree node that represents the selected attribute

A node is created for each attribute selected in the previous step. Each of these nodes represents a test on the attribute that satisfies a certain condition. 

For instance, Is income > 50,000? could be a node in the tree.

Each node represents a decision point where the dataset is split into two or more branches. They represent the different values (or ranges) of the chosen feature. 

For instance, "Income > 50,000" or "Income ≤ 50,000".

Step 5: Recursively apply the process to the subsets from step 3, continuing until no further classification is possible, at which point the final node becomes a leaf.

The process continues recursively for each subset created in the previous step. For each subset, the best attribute is selected again, and the dataset is split further. 

The recursion process stops when any of the following conditions is met:

  • All instances in a subset belong to the same class
  • There are no remaining features to split on
  • The remaining dataset cannot be split meaningfully

When the recursion stops, a leaf node is created to represent the outcome or classification for that subset. 

A final tree is now created, which contains a series of internal nodes (representing decisions) and leaf nodes (representing final classifications or predictions).
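
The recursion described in these steps can be captured in a short, simplified skeleton. The sketch below is for illustration only; it uses information gain (covered in the next section) as the attribute selection measure and handles only categorical attributes.

from collections import Counter
import math

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def best_attribute(rows, labels, attributes):
    # Step 2: pick the attribute whose split yields the largest entropy reduction (information gain)
    def gain(attr):
        subsets = {}
        for row, label in zip(rows, labels):
            subsets.setdefault(row[attr], []).append(label)
        weighted = sum(len(s) / len(labels) * entropy(s) for s in subsets.values())
        return entropy(labels) - weighted
    return max(attributes, key=gain)

def build_tree(rows, labels, attributes):
    # Stopping conditions: all instances share one class, or no attributes remain
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]  # leaf node with the majority class
    attr = best_attribute(rows, labels, attributes)
    tree = {attr: {}}
    remaining = [a for a in attributes if a != attr]
    for value in set(row[attr] for row in rows):  # Step 3: split on each value of the attribute
        subset_rows = [r for r in rows if r[attr] == value]
        subset_labels = [l for r, l in zip(rows, labels) if r[attr] == value]
        tree[attr][value] = build_tree(subset_rows, subset_labels, remaining)  # Step 5: recurse
    return tree

# Tiny hypothetical dataset: each row is a dict of categorical attribute values
rows = [
    {"credit": "high", "employed": "yes"}, {"credit": "high", "employed": "no"},
    {"credit": "low", "employed": "yes"},  {"credit": "low", "employed": "no"},
]
labels = ["Yes", "Yes", "Yes", "No"]
print(build_tree(rows, labels, ["credit", "employed"]))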

Now that you’ve explored the steps to create a decision tree, let’s understand how you select the attributes for splitting datasets.

Criteria for Selecting Attributes

To split the data at each node, different criteria are used to measure how well a feature divides the data. Information Gain and Gini Index are the two most common methods used for this purpose.

Let’s explore these two methods in detail.

1. Information Gain

Information Gain measures how much information a feature provides about the target variable. The feature with the highest Information Gain is considered the best feature to split the data.

Here’s how you calculate Information Gain.

$$\text{Gain}(S, A) = \text{Entropy}(S) - \sum_{v \in \text{Values}(A)} \frac{|S_v|}{|S|}\,\text{Entropy}(S_v)$$

where $S_v$ is the subset of $S$ for which feature $A$ has value $v$, and $|S_v| / |S|$ is the proportion of instances that fall into the subset $S_v$.

Entropy is calculated using the following formula.

$$\text{Entropy}(S) = -\sum_{x} p(x)\,\log_2 p(x)$$

Where p(x) is the probability of class x in the dataset.

Here’s a sample code snippet of how Information Gain is calculated for a particular feature.

import numpy as np
# Example dataset: [Feature1, Feature2] and the corresponding target labels
data = [
    [1, 'A'], [1, 'A'], [2, 'B'], [2, 'A'], [3, 'B'], [3, 'B'],
    [4, 'A'], [4, 'B'], [5, 'A'], [5, 'A']
]
# Labels (Target)
labels = ['Approved', 'Approved', 'Denied', 'Approved', 'Denied', 'Denied', 'Approved', 'Denied', 'Approved', 'Approved']

# Function to calculate entropy
def entropy(data, labels):
    # Identify unique labels (the possible outcomes)
    unique_labels = set(labels)
    
    # Initialize entropy value
    entropy_value = 0
    
    # Iterate over each unique label (outcome)
    for label in unique_labels:
        # Calculate the probability (p) of the label in the dataset
        p = labels.count(label) / len(labels)
        
        # Add the entropy contribution from this label to the total entropy value
        entropy_value -= p * np.log2(p)
    
    # Return the total entropy value, which quantifies the uncertainty of the dataset
    return entropy_value

# Function to calculate Information Gain
def information_gain(data, labels, feature_index):
    # Calculate the entropy of the full dataset before any split
    initial_entropy = entropy(data, labels)

    # Create subsets based on the feature (i.e., the feature at the given index)
    feature_values = [row[feature_index] for row in data]  # Extract the values of the feature at feature_index
    subsets = {}  # Dictionary to store subsets for each feature value
    for val in set(feature_values):  # For each unique value of the feature
        # Create a subset of labels corresponding to that feature value
        subsets[val] = [labels[i] for i in range(len(data)) if feature_values[i] == val]
    
    # Calculate the weighted entropy after the split
    weighted_entropy = 0
    for subset in subsets.values():
        # For each subset, calculate the entropy and weight it by the fraction of the total data it represents
        weighted_entropy += (len(subset) / len(data)) * entropy(data, subset)

    # Information Gain is the reduction in entropy from the original dataset to the weighted entropy after the split
    return initial_entropy - weighted_entropy

# Calculate information gain for the feature at index 0 (Feature1)
print("Information Gain for Feature1:", information_gain(data, labels, 0))

Explanation:

  • Entropy is calculated for the whole dataset.
  • The program then splits the dataset based on Feature1 values (e.g., 1, 2, 3, 4, 5).
  • For each subset, the program computes the entropy and then the information gain.

The information_gain function calculates the split in the following way.

  • The function first calculates the entropy of the original dataset using the entropy function. This gives the uncertainty (or disorder) of the labels before any split is made.
  • The function then focuses on a specific feature (identified by feature_index) to create subsets of the data.
  • Each unique value in this feature (e.g., 1, 2, 3, 4, 5) creates a subset of corresponding labels. For example, if Feature1 = 1, the corresponding labels could be ['Approved', 'Approved'].
  • After creating the subsets, the function calculates the entropy for each subset using the entropy function.
  • Then, it calculates the weighted average of these entropies based on the proportion of the dataset each subset represents.
  • Information gain is computed as the difference between the initial entropy (before the split) and the weighted entropy (after the split).

Output:

Information Gain for Feature1: 0.571

2. Gini Index

The Gini Index measures the impurity of a dataset, and the algorithm chooses the split that minimizes it. If a subset contains instances of only a single class, its Gini Index is zero. It is the default criterion in decision tree algorithms like CART (Classification and Regression Trees).

The Gini Index is calculated using the following formula.

$$\text{Gini}(S) = 1 - \sum_{x} p(x)^2$$

Where p(x) is the probability of class x in the dataset.

Similarly, Gini Gain is calculated using the following formula.

$$\text{Gini\_Gain}(S, A) = \text{Gini}(S) - \sum_{v \in \text{Values}(A)} \frac{|S_v|}{|S|}\,\text{Gini}(S_v)$$

where $S_v$ is the subset of $S$ for which feature $A$ has value $v$, and $|S_v| / |S|$ is the proportion of instances that fall into the subset $S_v$.

Here’s a sample code snippet to compute the Gini Index for a feature.

# Reuses the 'data' and 'labels' lists defined in the Information Gain example above
# Function to calculate Gini Index
def gini_index(data, labels):
    unique_labels = set(labels)
    gini_value = 1
    for label in unique_labels:
        p = labels.count(label) / len(labels)
        gini_value -= p ** 2
    return gini_value

# Function to calculate Gini Gain
def gini_gain(data, labels, feature_index):
    # Calculate the Gini index of the full dataset
    initial_gini = gini_index(data, labels)

    # Create subsets based on the feature
    feature_values = [row[feature_index] for row in data]
    subsets = {}
    for val in set(feature_values):
        subsets[val] = [labels[i] for i in range(len(data)) if feature_values[i] == val]

    # Calculate the weighted Gini index after the split
    weighted_gini = 0
    for subset in subsets.values():
        weighted_gini += (len(subset) / len(data)) * gini_index(data, subset)

    # Calculate the Gini gain
    return initial_gini - weighted_gini

# Calculate Gini gain for the feature at index 0 (Feature1)
print("Gini Gain for Feature1:", gini_gain(data, labels, 0))

Explanation:

  • The Gini Index is calculated for the whole dataset.
  • The dataset is then split based on Feature1 values.
  • For each subset, the program computes the Gini Index and then the Gini Gain.

Output:

Gini Gain for Feature1: 0.28

Also Read: Gini Index Formula: A Complete Guide for Decision Trees and Machine Learning

Now that you’ve explored the different methods to select attribute criteria, let’s see how you can increase the efficiency of the decision tree using a technique called pruning.

What is Pruning in Decision Trees? Core Concepts

Pruning is used to reduce the size of the decision tree by removing unnecessary branches. The technique simplifies the model by removing unnecessary complexity and focusing on critical features.

You can implement pruning in Scikit-learn using parameters like min_samples_split or ccp_alpha for cost complexity pruning.

Example: Pruning helps streamline decision-making in customer segmentation tasks by removing unnecessary branches in a decision tree. This results in a more efficient model that focuses on the most important features for segmenting customers, such as purchasing behavior or demographics. 

There are two main types of pruning: pre-pruning (early stopping) and post-pruning (backward pruning). Let’s explore the differences between these two types.

| Criteria | Pre-Pruning | Post-Pruning |
| --- | --- | --- |
| Application | During tree construction. | After the tree has grown completely. |
| Overfitting | Reduces overfitting by stopping early. | Reduces overfitting by simplifying a complex tree. |
| Underfitting | May lead to underfitting if the tree is kept too simple. | Prevents underfitting by allowing the tree to grow first. |
| Computation | Faster to compute, as the tree stays simple. | Slower, as the full tree must be constructed first. |
| Effectiveness | May fail to capture important patterns if the tree is too shallow. | Removes complexity without losing accuracy. |
| Flexibility | Less flexible, as stopping criteria have to be predefined. | More flexible, as pruning can be based on performance. |
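
As an illustration, here is a minimal scikit-learn sketch of both styles on the iris dataset (the same dataset used in the implementation section below). Treat the parameter values as arbitrary examples: max_depth and min_samples_split act as pre-pruning (early-stopping) controls, while ccp_alpha applies cost-complexity post-pruning.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Pre-pruning: stop growth early with max_depth / min_samples_split
pre_pruned = DecisionTreeClassifier(max_depth=3, min_samples_split=5, random_state=42)
pre_pruned.fit(X_train, y_train)

# Post-pruning: grow the full tree, then simplify it with the cost-complexity parameter ccp_alpha
post_pruned = DecisionTreeClassifier(ccp_alpha=0.01, random_state=42)
post_pruned.fit(X_train, y_train)

print("Pre-pruned test accuracy:", pre_pruned.score(X_test, y_test))
print("Post-pruned test accuracy:", post_pruned.score(X_test, y_test))
print("Leaves (pre vs post):", pre_pruned.get_n_leaves(), post_pruned.get_n_leaves())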

Now that you have explored the two different pruning methods of reducing tree complexity, let’s check out the steps to implement a decision tree using Python.

Implementing a Decision Tree in Python: Key Steps

You can use Python libraries like scikit-learn to develop a model that can classify or predict data based on decision-making rules. 

Here’s an overview of the steps involved in implementing a decision tree in Python.

Step 1: Import the Required Libraries

In the first step, import the libraries needed for loading the dataset, splitting the data, building the decision tree, and evaluating the performance of the model. You’ll use libraries such as pandas, scikit-learn, seaborn, and Matplotlib for this process.

Code snippet:

# Import necessary libraries
import pandas as pd  # For handling data in DataFrame format (organizing, manipulating data)
from sklearn.model_selection import train_test_split  # For splitting data into training and testing sets
from sklearn.tree import DecisionTreeClassifier  # For building a decision tree model for classification tasks
from sklearn.metrics import accuracy_score, confusion_matrix  # For evaluating the performance of the model
import seaborn as sns  # For visualization, especially used for plotting confusion matrices
import matplotlib.pyplot as plt  # For creating plots and visualizations

Step 2: Load the Dataset

Once the libraries have been imported, you need a sample dataset for the process. Here, you’ll use the iris dataset, which is readily available through scikit-learn. The dataset is stored in a pandas DataFrame, from which you can easily access the columns and rows of the table.

The dataset contains features like sepal width, sepal length, petal length, and petal width, and a target variable that classifies the species of the iris plant.

Code snippet:

# Load dataset (example using the iris dataset)
from sklearn.datasets import load_iris  # Import the function to load the iris dataset
data = load_iris()  # Load the iris dataset into the variable 'data'

# Convert the dataset into a pandas DataFrame for easier manipulation and analysis
df = pd.DataFrame(data.data, columns=data.feature_names)  # Create a DataFrame using the feature data from the dataset, with column names from 'feature_names'
# Add the target labels (species of iris) as a new column to the DataFrame
df['target'] = data.target  # The target variable contains the species labels (setosa, versicolor, virginica)

Step 3: Split the Data into Training and Testing Sets

In this step, you’ll separate the features (X) and target labels (y), then split the data into training and testing sets. About 80% of the data will be used for training, and 20% will be used for testing.

Code snippet:

# Split the data into training and testing sets (80% train, 20% test)
X = df.drop('target', axis=1)  # Drop the 'target' column to get the features (X contains the input data for the model)
y = df['target']  # Assign the 'target' column to y (y contains the output labels or the target for the model)

# Split the data into training and testing sets using an 80-20 split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)  # Split the data: 80% for training, 20% for testing
# random_state=42 ensures reproducibility of the split

Step 4: Train the Decision Tree Classifier on the Training Data

For the training process, you need to create an instance of the DecisionTreeClassifier by importing it from the sklearn library. It will learn patterns in the training data (X_train, y_train). The fit() method is called for model training.

Unlike neural networks, a decision tree is not trained with optimization methods such as Gradient Descent or Backpropagation; instead, fit() builds the tree by recursively selecting the split that best separates the training data according to the chosen criterion.

The criterion specifies the function used to measure the quality of a split. It can be set to 'gini' or 'entropy'.

Gini tends to be computationally faster and is often preferred for large datasets, while entropy can sometimes yield purer final nodes at a higher computational cost.

Code snippet:

# Initialize the decision tree classifier and train the model
dt_classifier = DecisionTreeClassifier(random_state=42, criterion='gini')  # Create an instance of the DecisionTreeClassifier, with a fixed random_state for reproducibility
dt_classifier.fit(X_train, y_train)  # Train the decision tree classifier on the training data (X_train and y_train)

Step 5: Make Predictions on the Test Data

In this step, you have to use the trained decision tree model to predict the class labels for the test dataset. Once the model is trained, the predict() method is applied to the test features (X_test), and it outputs the predicted class labels based on the patterns learned from the training data.

Here is why this step is crucial.

  • The predict() method generates the predicted labels (y_pred) for each observation in the test set (X_test).
  • It allows you to compare these predictions against the actual labels (y_test) in the test dataset.
  • The accuracy and performance of the model can be evaluated after this step, as it shows how well the model generalizes to unseen data.

Code snippet:

# Make predictions using the trained decision tree model
y_pred = dt_classifier.predict(X_test)  # Use the trained decision tree model to predict the labels for the test data (X_test)

Step 6: Compare the Actual Values with the Predicted Results

Next, compare the actual values of the test set with the values predicted by the model, side by side.

You’ll have to print the actual labels (y_test) and the predicted labels (y_pred) to see how well the model performed.

Code snippet:

# Compare actual values with predicted results
print("Actual values:", y_test.values)  # Print the actual labels (true values) from the test data (y_test)
print("Predicted values:", y_pred)  # Print the predicted labels generated by the decision tree model (y_pred)

Step 7: Evaluate the Model Using Confusion Matrix and Accuracy

Once you have the actual and predicted values, you can calculate the accuracy of your model and build a confusion matrix using simple library functions within sklearn.

The accuracy score is calculated by comparing the actual and predicted values of the test set. The confusion matrix shows the correct and incorrect predictions for each class in a classification problem.

Code snippet:

# Evaluate the model performance
accuracy = accuracy_score(y_test, y_pred)  # Calculate the accuracy of the model by comparing actual values (y_test) and predicted values (y_pred)
print("Accuracy:", accuracy)  # Print the calculated accuracy of the model

# Compute the confusion matrix
conf_matrix = confusion_matrix(y_test, y_pred)  # Generate the confusion matrix to evaluate the performance of the model

# Display confusion matrix using seaborn heatmap
plt.figure(figsize=(6, 5))  # Set the figure size for the plot
sns.heatmap(conf_matrix, annot=True, fmt="d", cmap="Blues", xticklabels=data.target_names, yticklabels=data.target_names)  # Visualize the confusion matrix with annotations
plt.xlabel("Predicted")  # Label the x-axis as "Predicted"
plt.ylabel("Actual")  # Label the y-axis as "Actual"
plt.title("Confusion Matrix")  # Set the title of the plot
plt.show()  # Display the plot

Explanation:

  • accuracy_score() calculates the accuracy of the model by comparing y_test (true labels) and y_pred (predicted labels).
  • confusion_matrix() generates the confusion matrix for the model’s performance.
  • The program uses Seaborn to visualize the confusion matrix as a heatmap, making it easier to understand the classification performance.

Output:

Accuracy: 1.0  # The model predicted all the test cases correctly
                Predicted
                Setosa  Versicolor  Virginica
    Actual
    Setosa         10           0          0
    Versicolor      0           9          0
    Virginica       0           0         11

With a functional understanding of how to implement a decision tree, let’s evaluate its benefits and challenges for real-world applications.

What Are the Advantages and Disadvantages of Decision Trees?

Decision trees are a widely used machine learning algorithm due to their simplicity and interpretability. However, they also have issues like overfitting and instability.

Here are the advantages and disadvantages of a decision tree.

1. Advantages

  • Easy to Interpret

The tree structure visually represents how decisions are made, making it easy for humans to interpret. 

Example: A decision tree can show that if "Credit Score > 600" and "Income > 60K", the outcome is "Loan Approved."

  • Can Handle Numerical and Categorical Data

Decision trees can handle both numerical (e.g., age, income) and categorical (e.g., gender, occupation) data.

Example: A decision tree can be used to classify data where one feature is continuous (income), and another is categorical (loan approval status).

  • Requires Minimal Data Preprocessing

Decision trees do not require feature scaling or normalization, and many implementations can split directly on categorical variables, unlike models such as SVMs or linear regression.

Example: Decision trees can split data using categorical features, such as splitting on "Yes" vs "No" for a binary feature.

  • Non-parametric

Decision trees are a good choice when you have little knowledge about the data or when the data doesn't follow a standard distribution.

Example: Decision trees can handle data from complex, real-world systems where the relationships are nonlinear and complex.

  • Handle Missing Data

Some decision tree algorithms can handle missing data naturally. They can decide which feature is the best split based on the available data.

Example: A decision tree can still classify a person based on available features, even if some data points, such as age or salary, are missing.

Also Read: Data Preprocessing in Machine Learning: 7 Key Steps to Follow, Strategies, & Applications

2. Disadvantages: 

  • Vulnerable to Overfitting

Decision trees may overfit if they are allowed to grow too deep. An overfitted tree captures noise and leads to poor generalization to unseen data.

Example: A decision tree that has too many branches may perform very well on the training data but poorly on the test data.

Also Read: What is Overfitting & Underfitting In Machine Learning? [Everything You Need to Learn]

  • Unstable

A small change in the dataset can result in a completely different tree structure, which makes decision trees less robust.

Example: If a decision tree is trained on slightly different data, the final structure may change drastically.

  • Bias Toward Features with More Categories

When a particular feature has many unique values, it can influence the decision-making process, leading to improper splits.

Example: A feature like "zipcode" with many unique values can lead to a complex tree and bias the decision-making process.

  • Poor Performance on Non-linear Data

Decision trees may fail when the relationship between input features and the target variable is non-linear and complex. 

Example: A decision tree may fail to predict house prices in a region with highly variable factors like proximity to the city, even if other variables (e.g., number of rooms) are known.

  • Greedy Algorithm

Decision trees use a greedy approach to choose the best split at each step based on a certain criterion. This approach may not always lead to the globally optimal solution. 

Example: A decision tree may make improper splits in the early stages, which may prevent it from reaching the correct decision.

Now that you've explored the advantages and limitations of the decision tree algorithm, let's take a look at why it can be a great choice for certain tasks.

Why Choose a Decision Tree? 5 Factors to Consider

Decision trees are a popular choice in machine learning due to their interpretability, simplicity, and flexibility. Here are five factors that make the decision tree algorithm a popular choice.

  • Flexibility in Handling Diverse Data Types

Decision trees can work with both categorical and numerical data, which gives them an edge over other algorithms that need data to be transformed. This makes them suitable for a wide range of industries, including finance, healthcare, and marketing.

  • Automatic Feature Selection

Decision trees automatically select the most important features for making decisions as they build the tree. This is useful for high-dimensional data, where manually selecting features can be time-consuming.

  • Works Well For Large Datasets

Decision trees can efficiently handle large datasets and work well even when the data size is high. This is particularly useful for industries like e-commerce, where large volumes of customer data have to be processed quickly.

  • Can Handle Missing Data

Decision trees can use techniques like surrogate splits to make decisions even when some feature values are missing for certain data points. This makes them an ideal choice in scenarios where missing values are common, such as sensor data.

  • Handle Non-linear Relationships

The algorithm can handle complex, non-linear relationships between input features and the target variable. Decision trees can be used to capture complex combinations of behavior patterns to predict customer churn.

Also Read: Decision Trees in Machine Learning: Functions, Classification, Pros & Cons

Additionally, decision trees serve as the building blocks for ensemble methods such as Random Forests and Gradient Boosting, which enhance model performance by combining multiple trees for improved accuracy and reduced variance.
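
As a quick, hedged sketch of this idea, the snippet below wraps many decision trees into a random forest on the iris dataset; the hyperparameter values are arbitrary examples rather than tuned settings.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# A random forest averages many decision trees trained on bootstrap samples,
# which typically reduces the variance of a single deep tree
forest = RandomForestClassifier(n_estimators=100, random_state=42)
scores = cross_val_score(forest, X, y, cv=5)
print("Mean cross-validated accuracy:", round(scores.mean(), 3))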

Now that you know why decision tree algorithms are beneficial for certain cases, let’s explore some of their applications in the real world.

Real-World Applications of Decision Trees

Decision trees are powerful models used across various industries to handle issues like risk assessment, disease prediction, and fraud detection.

Here are some of the common applications of the decision tree algorithm.

  • Fraud detection

Detect fraudulent transactions or activities in banking and finance by analyzing historical data patterns and identifying suspicious behavior.

Example: In fraud detection, precision and recall are often prioritized to minimize false positives and negatives.

  • Personalized marketing   

Build individual marketing strategies for customers by predicting their preferences, purchasing history, and response to different marketing campaigns.

Example: In personalized marketing, accuracy and customer segmentation are used to deliver targeted campaigns.

  • Disease prediction

Predict diseases by analyzing patient data, such as symptoms, medical history, and lifestyle factors.

Example: Sensitivity and specificity are critical in correctly identifying patients at risk with minimal misdiagnosis.

  • Predictive maintenance

Predict when equipment or machinery will fail based on historical performance data, and schedule maintenance accordingly.

Example: Metrics like the mean time between failures (MTBF) are used to predict and prevent equipment failures before they occur.

Let's explore the broader applications of decision trees by highlighting the industries where they are used and their specific use cases.

| Industry | Specific Use Cases |
| --- | --- |
| Insurance | Risk assessment, claim prediction, customer segmentation |
| Retail & E-commerce | Personalized marketing, sales forecasting, customer segmentation |
| Energy | Equipment failure prediction, energy consumption forecasting, resource allocation |
| Transportation | Route optimization, autonomous vehicle decision-making, traffic prediction |
| Entertainment | Content personalization, movie recommendation systems, audience analysis |
| Telecommunications | Customer service optimization, churn prediction, service usage prediction |

Learn how machine learning concepts, such as deep learning and neural networks, can solve real-world problems. Join the free course on Fundamentals of Deep Learning and Neural Networks.

Now that you’ve explored the applications of the decision tree algorithm in industries, let's explore how you can learn and implement this concept for machine learning applications.

How Can upGrad Help You Ace Your Career in Machine Learning?

Decision trees are crucial to machine learning, as they help in making data-driven predictions and insights. For instance, machine learning has helped the finance industry increase fraud detection rates by up to 50%. This demonstrates the vast potential for growth in various sectors.

If you’re looking to build a career in this field, mastering machine learning concepts is essential. upGrad offers courses that strengthen your foundational knowledge and provide practical experience tailored to industry needs.

Here are some courses offered by upGrad in machine learning.

Do you need help deciding which courses can help you excel in machine learning? Contact upGrad for personalized counseling and valuable insights. For more details, you can also visit your nearest upGrad offline center.


Reference Link:
https://bankautomationnews.com/allposts/ai/the-power-of-machine-learning-in-transaction-monitoring/

Frequently Asked Questions (FAQs)

1. What is meant by a decision tree?

2. What is a specific decision tree example?

3. Is the decision tree supervised or unsupervised?

4. What are the three types of decision trees?

5. What is splitting in a decision tree?

6. When to use a decision tree?

7. What is overfitting in a decision tree?

8. Which is the best decision tree algorithm?

9. How to use accuracy in a decision-tree?

10. How to create a decision tree?

11. What is preprocessing for a decision tree?

Pavan Vadapalli

971 articles published
