Regularization in Machine Learning: How to Avoid Overfitting?

By Kechit Goyal

Updated on Jul 03, 2023 | 9 min read | 6.1k views

What Is Regularization In Machine Learning?

What does regularization mean in the context of machine learning? In machine learning, regularization refers to shrinking the coefficient estimates towards zero so that the model does not overfit the training data.

A machine learning model assigns a ‘coefficient’ to each input parameter, and it generally gives more importance to a parameter whose coefficient is larger. Regularization in machine learning therefore works on these coefficients: it reduces their magnitudes so that the model generalizes better.

Machine learning involves equipping computers to perform specific tasks without explicit instructions. So, the systems are programmed to learn and improve from experience automatically. Data scientists typically use regularization in machine learning to tune their models in the training process. Let us understand this concept in detail. 

Understanding Overfitting And Underfitting

Overfitting occurs when an ML model performs exceptionally well on the training data but poorly on the test data. Low bias combined with high variance increases the risk of overfitting.

Causes Of Overfitting

  • The training data contain noise (junk values) because they have not been cleaned.
  • The model has very high variance.
  • The training dataset is too small.

Underfitting describes a model that has learned the patterns in the training data so poorly that it cannot generalize adequately to new data. High bias combined with low variance leads to underfitting.

Causes Of Underfitting

  • The training data contain noise (junk values) because they have not been cleaned.
  • The model has very high bias.
  • The training dataset is too small.

Regularization Dodges Overfitting

Regularization in machine learning helps you avoid overfitting your training model. Overfitting happens when your model also captures the arbitrary noise in your training dataset. Data points that do not reflect the true properties of your data make your model ‘noisy.’ This noise may make your model more flexible, but it leads to poor accuracy on new data.

Consider a classroom of 10 students with an equal number of girls and boys. The overall class average in the annual examination is 70. The average score of the female students is 60, and that of the male students is 80. Based on these past scores, we want to predict the students’ future scores. Predictions can be made in the following ways (a short code sketch after the list shows the same trade-off numerically):

  • Under Fit: The entire class will score 70 marks
  • Optimum Fit: This could be a simplistic model that predicts the score of girls as 60 and boys as 80 (same as last time)
  • Over Fit: This model may use an unrelated attribute, say the roll number, to predict that the students will score precisely the same marks as last year
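
Below is a minimal sketch of this trade-off, assuming scikit-learn and NumPy and using a synthetic dataset: a degree-1 polynomial underfits, a moderate degree fits reasonably, and a very high degree overfits, with training error falling while test error rises.

```python
# Minimal sketch: under-, reasonable and over-fitting with polynomial regression.
# The dataset is synthetic; degrees 1, 4 and 15 are illustrative choices.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 1, 60)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=60)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree, label in [(1, "under-fit"), (4, "reasonable"), (15, "over-fit")]:
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{label:>10}: train MSE={train_mse:.3f}, test MSE={test_mse:.3f}")
```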

Regularization is a form of regression that adds a penalty term to the error function. This additional term keeps the coefficients from taking extreme values, thus stabilizing an excessively fluctuating function.

Any machine learning expert would strive to make their models accurate and error-free. And the key to achieving this goal lies in mastering the trade-off between bias and variance. Read on to get a clear picture of what this means. 

Balancing Bias and Variance

The expected test error can be minimized by finding a method that accomplishes the right ‘bias-variance’ balance. In other words, your chosen statistical learning method should optimize the model by simultaneously realizing low variance and low bias. A model with high variance is overfitted, and high bias results in an underfitted model.  

Cross-validation offers another means of avoiding overfitting. It checks whether your model is picking up the correct patterns from the data set, and estimates the error over your test set. So, this method basically validates the stability of your model. Moreover, it decides the parameters that work best for your particular model.
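
For illustration, here is a minimal sketch, assuming scikit-learn and its built-in diabetes dataset, that uses 5-fold cross-validation to estimate the test error of a ridge model for a few candidate penalty strengths:

```python
# Minimal sketch: estimating test error with k-fold cross-validation and using it
# to compare a few candidate regularization strengths (alpha plays the role of λ).
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

for alpha in [0.01, 0.1, 1.0, 10.0]:
    scores = cross_val_score(Ridge(alpha=alpha), X, y,
                             cv=5, scoring="neg_mean_squared_error")
    print(f"alpha={alpha}: mean CV MSE = {-scores.mean():.1f}")
```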

Increasing the Model’s Interpretability

The objective is not only to achieve low error on the training set but also to predict the correct target values for the test data set. So, we require a ‘tuned’ function that keeps the model’s complexity in check.

Explaining Regularization in Machine Learning

Regularization is a form of constrained regression that works by shrinking the coefficient estimates towards zero. In this way, it limits the capacity of models to learn from the noise. 

Let’s look at this linear regression equation:

Y = β0 + β1X1 + β2X2 + … + βpXp

Here, the β values denote the coefficient estimates for the different predictors X, and Y is the learned relation (the predicted response).

Since this learned relation will not fit the data perfectly, we define an error function over the estimates. Because we want to minimize this error, it is also called a loss function. Here’s what this loss function, the Residual Sum of Squares (RSS), looks like:
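
RSS = Σi (yi − β0 − β1xi1 − β2xi2 − … − βpxip)²

Here, the sum runs over all n training observations, and the term inside the parentheses is the difference between the observed response yi and the value predicted by the linear model.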

Therefore, data scientists use regularization to adjust the prediction function. Regularization techniques are also known as shrinkage methods or weight decay. Let us understand some of them in detail. 

Elastic Net Regression

Alongside the two popular regularization techniques in machine learning (ridge and lasso, discussed below), elastic net regression is a third technique that performs both variable selection and regularization (a short code sketch follows the list below):

  • The approach improves the regularization of statistical models by combining the lasso and ridge approaches. An elastic net linear regression model is regularized using the penalties from both the lasso and ridge procedures.
  • The elastic net method combines regularization with variable selection.
  • The elastic net technique is best suited when the number of dimensions (features) exceeds the number of samples.
  • The main functions of the elastic net technique are grouping correlated variables and variable selection.
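
As a concrete illustration, here is a minimal sketch assuming scikit-learn and its diabetes dataset; alpha sets the overall penalty strength and l1_ratio mixes the L1 and L2 penalties:

```python
# Minimal sketch: elastic net regression, which blends the lasso (L1) and
# ridge (L2) penalties; l1_ratio=0.5 gives an even mix of the two.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import ElasticNet

X, y = load_diabetes(return_X_y=True)

model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
# The L1 part of the penalty can zero out some coefficients (variable selection).
print("non-zero coefficients:", int((model.coef_ != 0).sum()), "of", X.shape[1])
```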

Ridge Regularization

In Ridge Regression, the loss function is modified by adding a shrinkage penalty proportional to the sum of the squared values of β. The value of λ decides how heavily the model is penalized.

The Ridge penalty is based on the L2 norm of the coefficient estimates. This regularization technique comes to your rescue when the independent variables in your data are highly correlated.
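
Here is a minimal sketch of ridge regression, assuming scikit-learn and its diabetes dataset (the alpha argument plays the role of λ):

```python
# Minimal sketch: ridge regression shrinks coefficients relative to plain
# least squares without setting any of them exactly to zero.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression, Ridge

X, y = load_diabetes(return_X_y=True)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

print("largest |coefficient| (OLS):  ", round(float(abs(ols.coef_).max()), 1))
print("largest |coefficient| (Ridge):", round(float(abs(ridge.coef_).max()), 1))
```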

Lasso Regularization

In the Lasso technique, a penalty equal to the sum of the absolute values of β (the modulus of β) is added to the error function. It is multiplied by the parameter λ, which controls the strength of the penalty. Larger coefficients incur larger penalties, and some coefficients can be shrunk all the way to zero.

The Lasso penalty is based on the L1 norm of the coefficient estimates. This method is particularly beneficial when there are a small number of observations and a large number of features.
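
Here is a minimal sketch of the lasso, again assuming scikit-learn and its diabetes dataset; note how several coefficients end up exactly zero:

```python
# Minimal sketch: the lasso's L1 penalty can drive some coefficient estimates
# to exactly zero, which effectively performs feature selection.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso

X, y = load_diabetes(return_X_y=True)

lasso = Lasso(alpha=1.0).fit(X, y)
print("coefficients:", lasso.coef_.round(1))
print("features dropped (coefficient == 0):", int((lasso.coef_ == 0).sum()))
```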

To simplify the above approaches, consider a constant s that corresponds to each value of λ. In L2 regularization, we solve a problem in which the sum of the squares of the coefficients must be less than or equal to s, whereas in L1 regularization, the sum of the moduli (absolute values) of the coefficients must be less than or equal to s.

Read: Machine Learning vs Neural Networks

Both of the methods mentioned above seek to ensure that the regression model does not lean on unnecessary attributes. For this reason, Ridge Regression and Lasso can also be expressed as constrained optimization problems.

Difference Between Ridge Regression And Lasso Regression

  • Ridge regression retains all of the model’s features and is mostly used to lessen overfitting. By making the coefficients smaller, it reduces the model’s complexity.
  • Lasso regression helps reduce overfitting and also performs feature selection, since it can shrink some coefficients to exactly zero.

Regularization In Deep Learning

Regularization is a set of approaches that prevent overfitting in neural networks and hence increase the accuracy of a Deep Learning model when it faces brand-new data from the problem domain.

Now that we know how regularization reduces overfitting, let us explore a few alternative ways of applying regularization in deep learning (a short code sketch follows the list). The methods are:

  • L2 and L1 regularization
  • Dropout
  • Data augmentation
  • Early stopping
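
As an illustration of the L2 penalty, dropout, and early stopping items above, here is a minimal sketch assuming TensorFlow/Keras; the training data (X_train, y_train) and the input size of 20 features are placeholders, not defined by the article:

```python
# Minimal sketch: an L2 weight penalty, a dropout layer and early stopping
# combined in one small Keras model (TensorFlow assumed to be installed).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),  # 20 input features (placeholder size)
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    tf.keras.layers.Dropout(0.5),  # randomly drops half the units during training
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Stop training once the validation loss stops improving for 5 epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                              restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.2, epochs=100, callbacks=[early_stop])
```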

RSS and Predictors of Constraint Functions

With the help of the earlier explanations, the constraint regions for Ridge Regression and Lasso can be written as β1² + β2² ≤ s and |β1| + |β2| ≤ s, respectively. β1² + β2² ≤ s forms a circle, and RSS is minimized over all points that lie within it. For the Lasso, RSS is minimized over all points lying within the diamond given by |β1| + |β2| ≤ s.

Ridge Regression shrinks the coefficient estimates for the least essential predictor variables but doesn’t eliminate them. Hence, the final model may contain all the predictors because of non-zero estimates. On the other hand, Lasso can force some coefficients to be exactly zero, especially when λ is large. 

Read: Python Libraries for Machine Learning

How Regularization Achieves a Balance

There is some variance associated with a standard least squares model. Regularization techniques reduce the model’s variance without significantly increasing its squared bias. The value of the tuning parameter, λ, orchestrates this balance without eliminating the data’s critical properties. When λ is zero, the penalty has no effect, which corresponds to ordinary least squares regression.

As the value of λ rises, the variance goes down. But this helps only up to a certain point, after which the growing bias starts to hurt the model. Therefore, selecting the value of this shrinkage factor is one of the most critical steps in regularization.
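
To see this effect, here is a minimal sketch, assuming scikit-learn and its diabetes dataset, that prints how the ridge coefficients shrink as the penalty strength grows; in scikit-learn the alpha argument plays the role of λ:

```python
# Minimal sketch: larger values of the tuning parameter shrink the coefficients
# more strongly (alpha=0 would reduce ridge regression to ordinary least squares).
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True)

for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:
    ridge = Ridge(alpha=alpha).fit(X, y)
    print(f"alpha={alpha:>6}: sum of |coefficients| = {abs(ridge.coef_).sum():.1f}")
```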

Conclusion

In this article, we learned about regularization in machine learning and its advantages and explored methods like ridge regression and lasso. Finally, we understood how regularization techniques help improve the accuracy of regression models. If you are just getting started in regularization, these resources will clarify your basics and encourage you to take that first step! 

If you’re interested to learn more about machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.
