
MK Gurucharan

3+ articles published

Critical Analyst / Storytelling Expert / Narrative Designer

Domain:

upGrad

About

Gurucharan M K, Undergraduate Biomedical Engineering Student | Aspiring AI engineer | Deep Learning and Machine Learning Enthusiast

Published

Most Popular

Gini Index for Decision Trees: Mechanism, Perfect & Imperfect Split With Examples
Views: 71191

As you start learning about supervised learning, it's important to get acquainted with decision trees. Decision trees are akin to simplified diagrams that assist in solving various types of problems by making sequential decisions. One key metric used to make decision trees efficient is the Gini Index: a criterion that guides the tree on how to optimally partition the data it is given. In this article, I'll explain the Gini Index in plain terms. We'll talk about perfect and imperfect splits using examples you can relate to, and by the end you'll see how decision trees can help solve real problems, making it easier for you to use them in your own work. Let's get started!

What is the Gini Index?

The Gini Index is a way of quantifying how messy or clean a dataset is, especially when we use decision trees to classify it. It ranges from 0 (cleanest, all data points have the same label) to 1 (messiest, data points are split evenly among all labels).

Think of a dataset that shows how much money people make. A high Gini Index for this data means there is a huge difference between the rich and the poor, while a low Gini Index means the income is more evenly distributed.

When we build decision trees, we use the Gini Index to find the best feature to split the data on at each node. The best feature is the one that reduces the Gini Index the most, meaning it creates the purest child nodes. This way, we can grow a tree that distinguishes different labels based on the features.

What Does a Decision Tree Do?

A decision tree is a machine learning algorithm used for both classification and regression tasks. It resembles a tree-like structure with branches and leaves: each branch represents a decision based on a specific feature of the data, and each leaf represents a predicted outcome.

Data points navigate through the tree according to their feature values, traversing down branches determined by split conditions that are chosen using the Gini Index as the selection criterion. Ultimately, a data point reaches a leaf and receives the prediction assigned to that leaf. Decision trees are popular for their interpretability and simplicity, allowing easy visualization of the decision-making process. The Gini Index plays a crucial role in building an effective tree by guiding the selection of optimal splitting features: by minimizing the Gini Index at each node, the tree progressively separates data points belonging to different classes, leading to accurate predictions at the terminal leaves.

Here's a breakdown of how to build a decision tree using the Gini Index:

1. Calculate the Gini Index of the entire dataset. This represents the initial level of impurity before any splitting.
2. Consider each feature and its threshold values. For each combination, calculate the Gini Index of the two resulting child nodes after splitting the data on that feature and threshold.
3. Choose the feature and threshold combination that leads to the smallest Gini Index for the child nodes. This indicates the most significant decrease in impurity, resulting in a more homogeneous separation of data points.
4. Repeat the process recursively on each child node.
5. Use the same approach to select the next split feature and threshold, further minimizing the Gini Index and separating data points based on their class labels.
6. Continue splitting until a stopping criterion is met. This could be reaching a pre-defined tree depth, a minimum data size per node, or a sufficiently low Gini Index at all terminal leaves.

By iteratively using the Gini Index to guide feature selection and data partitioning, decision trees can learn complex relationships within the data and make accurate predictions for unseen instances.

Flow of a Decision Tree

Here is the flow of a decision tree built with the Gini Index:

Training: The decision tree is built by applying a splitting algorithm to the training data. The algorithm chooses the feature and threshold value that best minimizes the Gini Index within the resulting child nodes. This process is repeated recursively on each subgroup until a stopping criterion is reached, such as a minimum data size or a maximum tree depth.

Prediction: A new data point traverses the tree based on its own feature values, navigating down branches determined by the splitting conditions. Finally, it reaches a leaf and receives the prediction assigned to that leaf.

Ensembles: Decision trees can be combined into ensembles like random forests or boosted trees to improve accuracy and reduce overfitting. This involves building multiple trees from different subsets of the data and aggregating their predictions, leading to a more robust model.

Calculation

The Gini Index, or Gini Impurity, is calculated by subtracting the sum of the squared probabilities of each class from one. It favours larger partitions and is very simple to implement. In simple terms, it is the probability that a randomly selected data point would be classified incorrectly. The Gini Index varies between 0 and 1, where 0 represents a pure classification and 1 denotes a random distribution of elements among the classes; a value of 0.5 indicates that elements are distributed equally between two classes. Mathematically, the Gini Index is represented by:

Gini = 1 − Σᵢ p(i)²

The Gini Index works on categorical variables and gives results in terms of "success" or "failure", and hence performs only binary splits. It is less computationally intensive than its counterpart, Information Gain. From the Gini Index, another parameter named Gini Gain is calculated, whose value the decision tree maximizes with each iteration to arrive at the best CART (Classification and Regression Tree).

Let us understand the calculation of the Gini Index with a simple example. We have a total of 10 data points belonging to two classes, the reds and the blues, marked on an X-Y plane whose axes are marked in increments of 100. From this example, we shall calculate the Gini Index and the Gini Gain.

For a decision tree, we need to split the dataset into two branches. Consider the 5 reds and 5 blues marked on the X-Y plane. Suppose we make a binary split at X=200; then we have a perfect split: two branches, one with the 5 reds (left branch) and the other with the 5 blues (right branch).

But what will be the outcome if we make the split at X=250? We are left with two branches, the left branch consisting of 5 reds and 1 blue, while the right branch consists of 4 blues. This is referred to as an imperfect split.
In training the decision tree model, we can use the Gini Index to quantify how imperfect a split is.

Basic Mechanism

To calculate the Gini Impurity, let us first understand its basic mechanism:

First, we randomly pick a data point from the dataset.
Then, we classify it randomly according to the class distribution in the dataset. In our dataset, a chosen data point is red with probability 5/10 and blue with probability 5/10, as there are five data points of each colour.

The Gini Impurity is then:

G = Σᵢ p(i) ∗ (1 − p(i))

where the sum runs over the C classes and p(i) is the probability of picking a data point of class i. (This is equivalent to the formula above, Gini = 1 − Σᵢ p(i)², since the probabilities sum to 1.) In our solved example, C = 2 and p(1) = p(2) = 0.5, so the Gini Index can be calculated as:

G = p(1) ∗ (1 − p(1)) + p(2) ∗ (1 − p(2))
  = 0.5 ∗ (1 − 0.5) + 0.5 ∗ (1 − 0.5)
  = 0.5

That is, the probability of classifying a random data point incorrectly is exactly 50%. Now, let us calculate the Gini Impurity for both the perfect and the imperfect split performed earlier.

Perfect Split

The left branch has only reds, so its Gini Impurity is:

G(left) = 1 ∗ (1 − 1) + 0 ∗ (1 − 0) = 0

The right branch has only blues, so its Gini Impurity is likewise:

G(right) = 1 ∗ (1 − 1) + 0 ∗ (1 − 0) = 0

Both branches of our perfect split have an impurity of 0, so the split is indeed perfect. A Gini Impurity of 0 is the lowest and best possible impurity for any dataset.

Imperfect Split

In this case, the left branch has 5 reds and 1 blue. Its Gini Impurity is:

G(left) = 1/6 ∗ (1 − 1/6) + 5/6 ∗ (1 − 5/6) = 0.278

The right branch has all blues, so as calculated above its Gini Impurity is:

G(right) = 1 ∗ (1 − 1) + 0 ∗ (1 − 0) = 0

Now that we have the Gini Impurities of the imperfect split, we can evaluate its quality by weighting each branch's impurity by the fraction of data points it holds: the left branch has 6 of the 10 points (weight 0.6) and the right branch has 4 (weight 0.4):

(0.6 ∗ 0.278) + (0.4 ∗ 0) = 0.167

Having calculated the weighted impurity of the split, we can now compute another parameter, the Gini Gain, and analyse its application in decision trees. The amount of impurity removed by this split is found by subtracting the value above from the Gini Impurity of the entire dataset (0.5):

0.5 − 0.167 = 0.333

This value is called the Gini Gain. In simple terms: higher Gini Gain = better split. Hence, in a decision tree algorithm, the best split is obtained by maximizing the Gini Gain, which is calculated in this manner at each iteration.

After calculating the Gini Gain for each attribute in the dataset, scikit-learn's sklearn.tree.DecisionTreeClassifier picks the attribute with the largest Gini Gain for the root node. When a branch with a Gini of 0 is encountered, it becomes a leaf node; branches with a Gini above 0 need further splitting, and these nodes are grown recursively until all of them are classified.
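To make these numbers concrete, here is a minimal Python sketch that reproduces the worked example above; the helper names gini_impurity and gini_gain are my own, for illustration, not from any library.

```python
def gini_impurity(counts):
    """Gini impurity G = sum_i p(i) * (1 - p(i)) for the class counts in a node."""
    total = sum(counts)
    probs = [c / total for c in counts]
    return sum(p * (1 - p) for p in probs)

def gini_gain(parent_counts, left_counts, right_counts):
    """Impurity removed by a split: parent impurity minus weighted child impurity."""
    n = sum(parent_counts)
    weighted = (sum(left_counts) / n) * gini_impurity(left_counts) \
             + (sum(right_counts) / n) * gini_impurity(right_counts)
    return gini_impurity(parent_counts) - weighted

# The 10-point example: 5 reds and 5 blues, imperfect split at X=250
print(gini_impurity([5, 5]))               # 0.5   (whole dataset)
print(gini_impurity([5, 1]))               # ~0.278 (left branch: 5 reds, 1 blue)
print(gini_gain([5, 5], [5, 1], [0, 4]))   # ~0.333 (the Gini Gain)
```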
Relevance of Entropy

Entropy, a key concept in decision trees, measures the uncertainty or randomness within a dataset. It quantifies the degree to which a subset of data contains examples belonging to different classes, and it plays a crucial role in the tree's decision-making. By choosing features that minimize entropy within splits, we obtain purer branches and, ultimately, a more accurate decision tree.

While both the Gini Index and entropy are used in decision trees to assess data purity, they calculate impurity in slightly different ways. The Gini Index, like entropy, evaluates the likelihood that a randomly selected data point would be misclassified. Entropy, however, gives a more detailed measure of the disorder or variability of the system, offering a slightly different perspective on data purity and impurity-reduction strategies.

Gini Index: Compares the proportion of each class within a data subset before and after the split, favoring features that maximize the difference.

Entropy: Compares the overall uncertainty of the original data to the combined uncertainty of the resulting subsets, preferring features that lead to the largest decrease in overall entropy.

Both the Gini Index and entropy have their advantages and disadvantages, and the choice depends on the specific data and task. Generally, the Gini Index works well for binary classification, while entropy may be better suited to problems with multiple classes.

Difference between Gini Index and Entropy

| Factor | Gini Index | Entropy |
| --- | --- | --- |
| Definition | Measures the probability of misclassification. | Measures the amount of information (or uncertainty) in a dataset. |
| Formula | Gini = 1 − Σᵢ pᵢ² | Entropy = −Σᵢ pᵢ log₂(pᵢ) |
| Range | 0 to 0.5 for binary classification. | 0 to 1 for binary classification. |
| Impurity | Lower values indicate purer nodes. | Lower values indicate purer nodes. |
| Calculation complexity | Generally simpler to compute. | Generally more complex to compute. |
| Splitting criterion | Prefers to maximize the probability of a single class. | Prefers splits that create the most uniform class distribution. |
| Use in algorithms | Commonly used in the CART (Classification and Regression Tree) algorithm. | Commonly used in the ID3 (Iterative Dichotomiser 3) and C4.5 algorithms. |
| Sensitivity to data distribution | Less sensitive to changes in class distribution. | More sensitive to changes in class distribution. |
| Interpretation | How often a randomly chosen element would be incorrectly classified. | The average amount of information required to identify the class of an element. |
| Bias towards purity | Slightly biased towards larger classes. | More balanced; less biased towards larger or smaller classes. |
| Behavior at pure nodes | At a pure node (one class), Gini = 0. | At a pure node (one class), Entropy = 0. |
| Mathematical nature | Quadratic measure. | Logarithmic measure. |
| Robustness to outliers | More robust, due to its quadratic nature. | Less robust, due to the logarithmic calculation. |
| Preferred when | Simplicity and speed are crucial. | A more nuanced measure of information gain is needed. |
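The difference in shape and range between the two measures is easy to see numerically. Below is a small sketch for a binary node; again, the helper functions are illustrative, not library calls.

```python
import math

def gini(p):
    """Gini impurity of a binary node with positive-class probability p."""
    return 1 - (p ** 2 + (1 - p) ** 2)

def entropy(p):
    """Shannon entropy (bits) of a binary node with positive-class probability p."""
    if p in (0.0, 1.0):  # a pure node has zero uncertainty
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

for p in (0.0, 0.1, 0.3, 0.5):
    print(f"p={p:.1f}  gini={gini(p):.3f}  entropy={entropy(p):.3f}")
# Both peak at p=0.5 and vanish at pure nodes, but Gini tops out
# at 0.5 while entropy reaches 1.0, matching the ranges in the table.
```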
Gini Index vs Information Gain

Both the Gini Index and Information Gain are impurity measures used in decision trees to choose the best feature for splitting the data at each node. However, they calculate impurity in slightly different ways and have their own strengths and weaknesses.

Gini Index:

Focuses on class proportions: Compares the proportion of each class within a data subset before and after the split, favoring features that maximize the difference. This makes it sensitive to class imbalance, potentially favoring splits that isolate minority classes even if they don't significantly improve overall clarity.

Simple and computationally efficient: Easier to calculate than Information Gain, making decision trees faster to build.

Works well for binary classification: Emphasizes maximizing the gap between classes, making it effective when dealing with two distinct outcomes.

Information Gain:

Measures entropy change: Compares the total entropy of the original data to the combined entropy of the resulting subsets after the split, preferring features that lead to the largest decrease in overall uncertainty. This is more nuanced and can handle multiple classes effectively.

Less sensitive to class imbalance: Doesn't solely focus on isolating minority classes, but accounts for the overall reduction in uncertainty even if the split proportions are uneven.

More computationally expensive: Calculating entropy involves logarithms, making it slightly slower than the Gini Index for tree construction.

Can be better for multi-class problems: Provides a more comprehensive picture of class distribution changes, potentially leading to better results with multiple outcomes.

Here's a table summarizing the key differences:

| Feature | Gini Index | Information Gain |
| --- | --- | --- |
| Focus | Class proportions | Entropy change |
| Strengths | Simple, efficient, good for binary classification | More nuanced, handles imbalance, good for multiple classes |
| Weaknesses | Sensitive to class imbalance, less informative for multiple classes | More computationally expensive |

Use in Machine Learning

There are various algorithms designed for different purposes in the world of machine learning, and the challenge lies in identifying which algorithm best suits a given dataset. The decision tree algorithm shows convincing results here too. Decision trees somewhat mimic the way humans make judgments, so a problem that involves human-style step-by-step questioning is likely to suit them well, and their underlying concept is easy to understand thanks to the tree-like structure.
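To experiment with both split criteria in practice, scikit-learn's DecisionTreeClassifier exposes the choice through its criterion parameter. A minimal sketch follows; the toy dataset is generated purely for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Toy dataset, made up for illustration only
X, y = make_classification(n_samples=200, n_features=4, random_state=42)

for criterion in ("gini", "entropy"):
    clf = DecisionTreeClassifier(criterion=criterion, max_depth=3, random_state=42)
    clf.fit(X, y)
    print(criterion, clf.score(X, y))
```

On most datasets the two criteria produce similar trees; the differences tend to show up on imbalanced or multi-class data, as discussed above.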
Conclusion

An alternative to the Gini Index is Information Entropy, which is used to determine which attribute gives us the maximum information about a class. It is based on the concept of entropy, the degree of impurity or uncertainty, and it aims to decrease the level of entropy from the root nodes to the leaf nodes of the decision tree.

In this way, the Gini Index is used by the CART algorithm to optimise decision trees and create decision points for classification trees.

If you're interested to learn more about machine learning, check out IIIT-B & upGrad's PG Diploma in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.

by MK Gurucharan


24 Jun 2024

Basic CNN Architecture: Explaining 5 Layers of Convolutional Neural Network
Views: 270742

Introduction

In the last few years, the IT industry has seen huge demand for one particular skill set: Deep Learning. Deep Learning is a subset of Machine Learning consisting of algorithms inspired by the functioning of the human brain, and the structures it builds are called Neural Networks. It teaches the computer to do what comes naturally to humans. Check out our free data science courses to get an edge over the competition.

In deep learning, there are several types of models, such as Artificial Neural Networks (ANN), Autoencoders, Recurrent Neural Networks (RNN) and Reinforcement Learning. But one particular model has contributed a great deal to computer vision and image analysis: the Convolutional Neural Network (CNN), or ConvNet.

CNNs are very useful because they minimise human effort by detecting features automatically. For example, given apples and mangoes, a CNN would detect the distinct features of each class on its own. You can also consider doing our Python Bootcamp course from upGrad to upskill your career.

CNNs are a class of Deep Neural Networks that can recognize and classify particular features from images and are widely used for analyzing visual data. Their applications range from image and video recognition, image classification and medical image analysis to computer vision and natural language processing. CNNs offer high accuracy, which makes them especially useful for image recognition, a capability with a wide range of uses across industries such as medical image analysis, phones, security and recommendation systems.

The term "convolution" in CNN denotes the mathematical operation of convolution, a special kind of linear operation wherein two functions are multiplied to produce a third function that expresses how the shape of one function is modified by the other. In simple terms, two images, each representable as a matrix, are multiplied to give an output that is used to extract features from the image.

Learn Machine Learning online from the World's top Universities – Masters, Executive Post Graduate Programs, and Advanced Certificate Program in ML & AI to fast-track your career.

Basics of CNN Architecture

Convolutional Neural Networks (CNNs) are deep learning models that extract features from images using convolutional layers, followed by pooling and fully connected layers, for tasks like image classification. They excel at capturing spatial hierarchies and patterns, making them ideal for analyzing visual data.

There are two main parts to a CNN architecture:

A convolution tool that separates and identifies the various features of the image for analysis, in a process called Feature Extraction. The feature-extraction network consists of many pairs of convolutional and pooling layers.

A fully connected layer that takes the output of the convolution process and predicts the class of the image based on the features extracted in the previous stages.

The feature-extraction part of the CNN aims to reduce the number of features in the dataset by creating new features that summarise the original set. A CNN architecture is built from three types of layers: convolutional layers, pooling layers, and fully-connected (FC) layers. When these layers are stacked, a CNN architecture is formed. In addition to these three layers, there are two more important components, the dropout layer and the activation function, which are defined below.
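Before walking through the layers one by one, here is a minimal NumPy sketch of the convolution operation just described: a small filter slides over an image matrix and a dot product is taken at each position. The filter values and the helper function are illustrative assumptions, not taken from the article.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image; take the dot product at each position."""
    h, w = kernel.shape
    out_h = image.shape[0] - h + 1
    out_w = image.shape[1] - w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 "image"
edge_filter = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])            # a simple vertical-edge detector
feature_map = conv2d(image, edge_filter)
print(feature_map.shape)  # (3, 3): a 3x3 filter over a 5x5 input gives a 3x3 feature map
```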
1. Convolutional Layer

This is the first layer, used to extract the various features from the input images. In this layer, the mathematical operation of convolution is performed between the input image and a filter of a particular size, MxM. By sliding the filter over the input image, the dot product is taken between the filter and the patch of the input image underneath it (of size MxM).

The output is termed the feature map, which gives us information about the image such as corners and edges. Later, this feature map is fed to further layers to learn additional features of the input image. The convolution layer passes its result to the next layer after applying the convolution operation to the input. A key benefit of convolutional layers is that they keep the spatial relationship between pixels intact.

2. Pooling Layer

In most cases, a convolutional layer is followed by a pooling layer. The primary aim of this layer is to shrink the convolved feature map and thereby reduce computational cost. It does this by reducing the connections between layers, and it operates on each feature map independently. Depending upon the method used, there are several types of pooling operations; each summarises the features generated by a convolution layer.

In Max Pooling, the largest element is taken from each region of the feature map. Average Pooling calculates the average of the elements in a predefined-size image section. In Sum Pooling, the total sum of the elements in the predefined section is computed.

The pooling layer usually serves as a bridge between the convolutional layer and the FC layer. It generalises the features extracted by the convolution layer and helps the network recognise features independently, while also reducing the network's computations.

3. Fully Connected Layer

The Fully Connected (FC) layer consists of weights and biases along with the neurons, and is used to connect the neurons between two different layers. These layers are usually placed before the output layer and form the last few layers of a CNN architecture.

Here, the output of the previous layers is flattened and fed to the FC layer. The flattened vector then passes through a few more FC layers, where the usual mathematical operations take place; this is where the classification process begins. Two or more fully connected layers are used because they perform better than a single one. These layers reduce the need for human supervision.

4. Dropout

Usually, when all the features are connected to the FC layer, the model can overfit the training dataset. Overfitting occurs when a model fits the training data so well that its performance suffers on new data.

To overcome this problem, a dropout layer is used, wherein a few neurons are dropped from the neural network during training, reducing the effective size of the model. On passing a dropout of 0.3, 30% of the nodes are dropped out of the neural network at random.
Dropout improves the performance of a machine learning model by making the network simpler during training, which prevents overfitting.

5. Activation Functions

Finally, one of the most important parameters of a CNN model is the activation function. Activation functions are used to learn and approximate any kind of continuous and complex relationship between variables of the network. In simple words, the activation function decides which information should fire forward through the network and which should not, and it adds non-linearity to the network.

Several activation functions are commonly used, such as ReLU, Softmax, tanh and Sigmoid, and each has a specific usage. For a binary classification CNN model, the sigmoid (or softmax) function is preferred, while for multi-class classification, softmax is generally used. In simple terms, activation functions in a CNN model determine whether a neuron should be activated or not: they use mathematical operations to decide whether the input is important for the prediction.

Importance of ReLU in CNN

ReLU (Rectified Linear Unit) is a popular activation function used in Convolutional Neural Networks. It introduces non-linearity by outputting the input directly if it is positive and zero otherwise, helping models learn complex patterns.

LeNet-5 CNN Architecture

In 1998, the LeNet-5 architecture was introduced in a research paper titled "Gradient-Based Learning Applied to Document Recognition" by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. It is one of the earliest and most basic CNN architectures, consisting of 7 layers.

The first layer takes an input image with dimensions of 32×32 and convolves it with 6 filters of size 5×5, resulting in dimensions of 28x28x6. The second layer is a pooling operation with a filter size of 2×2 and a stride of 2, so the resulting dimensions are 14x14x6. Similarly, the third layer is a convolution with 16 filters of size 5×5, followed by a fourth pooling layer, again with a 2×2 filter and a stride of 2, reducing the dimensions to 5x5x16.

Once the dimensions are reduced, the fifth layer is a fully connected convolutional layer with 120 filters, each of size 5×5; each of its 120 units is connected to all 400 (5x5x16) units from the previous layer. The sixth layer is another fully connected layer with 84 units, and the final seventh layer is a softmax output layer with 'n' possible classes, depending on the number of classes in the dataset.
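The original post shows the implementation as code snapshots (images) that are not reproduced here. Below is a minimal sketch of the LeNet-5 model described above, using the Keras Sequential API; the tanh activations and average pooling follow the classic paper, and the 10-class output is an assumption for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    # Conv/pool pair 1: 32x32x1 input -> 28x28x6 -> 14x14x6
    layers.Conv2D(6, kernel_size=(5, 5), activation='tanh', input_shape=(32, 32, 1)),
    layers.AveragePooling2D(pool_size=(2, 2), strides=2),
    # Conv/pool pair 2: -> 10x10x16 -> 5x5x16
    layers.Conv2D(16, kernel_size=(5, 5), activation='tanh'),
    layers.AveragePooling2D(pool_size=(2, 2), strides=2),
    # Flatten the 5x5x16 = 400 units and run the fully connected layers
    layers.Flatten(),
    layers.Dense(120, activation='tanh'),
    layers.Dense(84, activation='tanh'),
    # Softmax output over n classes (10 assumed here, e.g. for digits)
    layers.Dense(10, activation='softmax'),
])
model.summary()
```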
In Keras (with the TensorFlow framework), the most commonly used model type is the Sequential model. It is the easiest way to build a CNN in Keras, as it lets us construct the model layer by layer, with the add() function (or a list of layers, as above) used to append each layer.

As explained earlier, the LeNet-5 architecture has two convolution-and-pooling pairs followed by a Flatten layer, which usually serves as the connection between the convolutional layers and the Dense layers. Dense layers are the ones most commonly used for the output. The final activation is Softmax, which assigns each class a probability, with all probabilities summing to 1; the model makes its prediction by choosing the class with the highest probability. The model's layer-by-layer summary can be displayed with model.summary().

Conclusion

In this article, we covered the basic CNN structure, its architecture, and the various layers that make up a CNN model. We also walked through a basic CNN architecture example, the famous, traditional LeNet-5 model, along with its Python program, and saw how such networks reduce dependence on human effort by learning features on their own. Distinct layers in a CNN transform the input to the output using differentiable functions.

If you're interested to learn more about machine learning courses, check out IIIT-B & upGrad's Executive PG Programme in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.

by MK Gurucharan


21 Jun 2024

Transfer Learning in Deep Learning [Comprehensive Guide]
Views: 5735

Introduction

What is Deep Learning? It is a branch of Machine Learning that uses a simulation of the human brain known as a neural network. These neural networks are made up of neurons, which are modelled on the fundamental unit of the human brain.

Neurons make up a neural network model, and this field of study is named deep learning; the end result of training a neural network is called a deep learning model. Deep learning mostly uses unstructured data, from which the model extracts features on its own through repeated training. When a model designed for one particular dataset is made available as the starting point for developing another model, with a different dataset and different features, that practice is known as Transfer Learning. In simple terms, Transfer Learning is a popular method in which a model developed for one task is reused as the starting point for a model on another task.

Transfer Learning

Humans have used transfer learning since time immemorial. Though the field is relatively new to machine learning, we apply it inherently in almost every situation: we try to use the knowledge gained from past experience whenever we face a new problem or task, and this is the basis of transfer learning. For instance, if we know how to ride a bicycle and are asked to ride a motorbike for the first time, our bicycle experience carries over, from steering the handlebars to balancing the bike. This simple concept forms the base of Transfer Learning.

To see the basic notion in a machine learning setting, suppose a model M1 is successfully trained to perform task A. If the dataset for a related task B is too small for a new model to train efficiently, or would cause it to overfit, we can use part of M1 as the base on which to build a model for task B.

Why Transfer Learning?

According to Andrew Ng, one of the pioneers in promoting Artificial Intelligence, "Transfer Learning will be the next driver of ML success". He said this in a talk at the Conference on Neural Information Processing Systems (NIPS 2016). There is no doubt that ML's success in today's industry is primarily due to supervised learning; going forward, with growing amounts of unsupervised and unlabeled data, transfer learning will be a heavily utilized technique in industry.

Nowadays, people prefer to start from a pre-trained model that has already been trained on a large variety of images, such as an ImageNet model, rather than building a whole Convolutional Neural Network from scratch. Transfer learning has several benefits, but the main advantages are saving training time, better neural network performance, and not needing a lot of data.
Methods of Transfer Learning

Generally, there are two ways of applying transfer learning: developing a model from scratch, or using a pre-trained model.

In the first case, we build a model architecture suited to the training data, and carefully study, with several statistical measures, how well the model extracts weights and patterns from the data. After a few rounds of training, depending on the results, some changes may be needed to achieve optimal performance. We can then save this model and use it as a starting point for building another model for a similar task.

The second case, using pre-trained models, is what is most commonly meant by Transfer Learning. Here, we look for pre-trained models that research institutions and organizations periodically release for general use. These models are available for download on the internet along with their weights and can be used to build models for similar datasets.

Enrol for the Machine Learning Course from the World's top Universities. Earn Masters, Executive PGP, or Advanced Certificate Programs to fast-track your career.

Transfer Learning Implementation – VGG16 Model

Let us go through an application of Transfer Learning using a pre-trained model called VGG16. VGG16 is a Convolutional Neural Network model released by researchers at the University of Oxford in 2014, and it was one of the top-performing models in the ILSVRC (ImageNet) competition that year. It is still acknowledged as one of the best vision model architectures. It has 16 weight layers, 13 convolutional layers and 3 fully connected layers, followed by a softmax output, and approximately 138 million parameters.

Image source: https://towardsdatascience.com/understand-the-architecture-of-cnn-90a25e244c7

Step 1: Import the VGG16 model provided by the keras library in the TensorFlow framework.

Step 2: Assign the model to a variable "vgg" and download the ImageNet weights by passing them as an argument to the model.

Step 3: Pre-trained models such as VGG16 and ResNet have already been trained on many thousands of images to classify many classes, so we do not need to train their layers again. Hence, we mark all the layers of the VGG16 model as not trainable.

Step 4: Having frozen all the layers and removed the final classification layers of the pre-trained VGG16 model, we need to add a classification layer on top in order to train it on our dataset. Hence, we flatten the output and introduce a final Dense layer with softmax as the activation function, here with a binary class prediction as the example.

Step 5: Finally, we print the summary of our model to visualize the layers of the pre-trained VGG16 model together with the two layers we added on top via Transfer Learning.

From the summary, we can see there are close to 14.76M total parameters, of which only about 50,000, belonging to the last two layers, are available for training because of the freezing done in Step 3. The remaining 14.71M parameters are referred to as non-trainable parameters.
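The original post presents Steps 1-5 as code snapshots (images) that are not reproduced here. The following is a minimal sketch of those steps with tf.keras; the 224x224 RGB input size and the 2-class softmax head are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Steps 1-2: import VGG16 and download the ImageNet weights,
# dropping the original 1000-class classification head.
vgg = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Step 3: freeze every pre-trained layer so its weights are not updated.
for layer in vgg.layers:
    layer.trainable = False

# Step 4: flatten the frozen features and add a softmax head
# (2 classes here, as in the article's binary example).
x = layers.Flatten()(vgg.output)
outputs = layers.Dense(2, activation='softmax')(x)
model = models.Model(inputs=vgg.input, outputs=outputs)

# Step 5: inspect the trainable vs non-trainable parameter counts.
model.summary()

# As described next in the article, the model is then compiled and fit, e.g.:
# model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=10)
```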
Once these steps are performed, we can train the model like a regular Convolutional Neural Network: we compile it with hyperparameters such as the optimizer and loss function, and after compiling, we begin training using the fit function for a set number of epochs. In this way, we can use transfer learning to train on any dataset, starting from one of the many pre-trained models available online and adding a few layers on top according to the number of classes in our training data.

Challenges in Transfer Learning

Transfer learning brings numerous benefits, but it also comes with its own set of challenges, and understanding and addressing them is essential for successful implementation. Some of the common challenges in transfer learning are:

Domain Shift: Transfer learning assumes that the source and target domains are related, but in practice there may be a significant difference between them. This domain shift can impact the effectiveness of the transferred knowledge. Addressing it requires careful consideration of data distributions and feature representations.

Task Selection: Choosing appropriate source and target tasks is crucial. While some tasks share similarities, others may be vastly different. Selecting tasks with sufficient overlap in features and objectives increases the likelihood of successful transfer.

Negative Transfer: Negative transfer occurs when the knowledge transferred from the source domain hinders performance in the target domain. It can happen if the source task is too dissimilar to the target task or if irrelevant information is transferred. Negative transfer can be mitigated by careful model selection and fine-tuning techniques.

Data Availability: Transfer learning relies on the availability of labeled data in the source domain. However, in certain scenarios, labeled data may be scarce or expensive to obtain. This limitation poses a challenge, particularly when the target domain also has a limited amount of labeled data.

Overcoming these challenges requires a deep understanding of the underlying principles and techniques of transfer learning, and researchers and practitioners continually develop innovative approaches to tackle them and improve its effectiveness.

Transfer Learning in Natural Language Processing (NLP)

Transfer learning has significantly impacted various fields, including Natural Language Processing (NLP). By leveraging pre-trained language models, it has revolutionized the way NLP tasks are approached. Here are a few key aspects of transfer learning in NLP:

Pre-trained Language Models: Language models like BERT (Bidirectional Encoder Representations from Transformers), GPT (Generative Pre-trained Transformer), and RoBERTa (Robustly Optimized BERT Approach) have achieved remarkable success on NLP tasks. These models are pre-trained on vast amounts of text data, enabling them to capture rich semantic and contextual information. They can then be fine-tuned on specific downstream tasks, such as sentiment analysis, named entity recognition, or machine translation.

Transfer Learning Architectures: In NLP, transfer learning typically involves pre-training and fine-tuning. During pre-training, a language model is trained on a large corpus of unlabeled text using unsupervised learning techniques.
This step helps the model learn general language representations. In the fine-tuning stage, the pre-trained model is further trained on task-specific labeled data to adapt it to specific NLP tasks.

Application Areas: Transfer learning has been successfully applied to various NLP tasks, including sentiment analysis, text classification, question answering, machine translation, and text generation. By leveraging pre-trained models, practitioners can achieve state-of-the-art results with less labeled data and fewer computational resources.

Future Directions: Transfer learning in NLP is an active area of research, with ongoing efforts focused on improving model architectures, training procedures, and domain adaptation techniques. Exploring transfer learning for low-resource languages and addressing challenges specific to NLP tasks remain exciting areas for further investigation.

By incorporating transfer learning techniques into NLP, researchers and practitioners have unlocked new possibilities and achieved breakthroughs in natural language understanding and generation.

Conclusion

In this article, we have gone through a basic understanding of Transfer Learning, its applications, and an implementation using a sample pre-trained VGG16 model from the keras library. It has also been found that training only the new layers on top of the frozen pre-trained weights has the biggest effect on convergence: reusing already-learned features makes convergence faster. Transfer Learning has many applications in model building today; notably, AI for healthcare relies on such pre-trained models because of the large model sizes involved. Although Transfer Learning may still be in its early stages, in the coming years it will be one of the most used methods for training on large datasets with greater efficiency and accuracy.

If you're interested to learn more about machine learning, check out IIIT-B & upGrad's PG Diploma in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.

by MK Gurucharan


18 Jun 2023
