
What is the EM Algorithm in Machine Learning? [Explained with Examples]

By Pavan Vadapalli

Updated on Sep 23, 2022


The EM algorithm, or Expectation-Maximization algorithm, is an iterative method for estimating the parameters of latent variable models, proposed by Arthur Dempster, Nan Laird, and Donald Rubin in 1977.

In machine learning applications, some relevant variables in a data set may go unobserved during learning. The Expectation-Maximization (EM) algorithm estimates these latent variables from the observed data. To see why this matters, start with the main problem the EM algorithm addresses.

In statistical modeling, the most common such problem is estimating the joint probability distribution of a data set.

How Does the EM Algorithm Estimate Densities?

Probability density estimation is the construction of an estimate based on observed data. It involves selecting a probability distribution function, along with the parameters of that function, that best explains the joint probability of the observed data.

  • The first step in density estimation is to create a histogram of the observations in the random sample.
  • The number of bins determines how many bars appear in the plot and how smoothly the density is represented, as sketched below.
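
A quick way to see the effect of the bin count is to plot the same sample with different numbers of bins. The following is a minimal sketch using numpy and matplotlib; the sample data here is illustrative:

import numpy as np
import matplotlib.pyplot as plt

# Illustrative sample: 1,000 draws from a normal distribution
sample = np.random.normal(loc=50, scale=5, size=1000)

# The bin count controls how many bars appear and how smoothly
# the density is approximated
for bins in (10, 50):
    plt.hist(sample, bins=bins, density=True, alpha=0.5, label=f"{bins} bins")
plt.legend()
plt.show()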

Density estimation requires selecting a probability distribution function and the parameters of that distribution that explain the joint probability distribution of the sample. The two main problems in density estimation are:

  1. Choosing the probability distribution function
  2. Choosing the parameters of that function

A common technique for solving this problem is Maximum Likelihood Estimation, often simply called “maximum likelihood”.

A latent variable model comprises observable variables and unobservable variables. Observed variables are those that can be measured, whereas unobserved (latent/hidden) variables are inferred from the observed ones.

As Dempster, Laird, and Rubin showed, the EM algorithm can be used to determine local maximum likelihood estimates (MLE) or maximum a posteriori (MAP) estimates of the parameters of a statistical model with latent variables (unobservable variables that must be inferred from observable ones). It can predict these values or fill in missing or incomplete data, provided that the general form of the probability distribution associated with the latent variables is known.


To put it simply, the general principle behind the EM algorithm in machine learning is to use the observable instances of the data to estimate the values of the unobservable (latent) instances, repeating the process until the values converge.

The algorithm is a powerful tool in machine learning and underlies many unsupervised algorithms, including the k-means clustering algorithm and other EM variants.
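
To see the connection, k-means can be viewed as a hard-assignment version of EM: the assignment step plays the role of the E-step, and the centroid update plays the role of the M-step. A minimal sketch for one-dimensional data (the function name and setup are illustrative, not from any library):

import numpy as np

def kmeans_1d(x, k, n_iter=100):
    # Initialize centroids from k distinct random data points
    centroids = x[np.random.choice(len(x), k, replace=False)]
    for _ in range(n_iter):
        # "E-step": hard-assign each point to its nearest centroid
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        # "M-step": recompute each centroid as the mean of its assigned points
        # (assumes no cluster ends up empty; a real implementation would handle that)
        centroids = np.array([x[labels == j].mean() for j in range(k)])
    return centroids, labels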

Join the Machine Learning Course online from the World’s top Universities – Masters, Executive Post Graduate Programs, and Advanced Certificate Program in ML & AI to fast-track your career.

The Expectation-Maximization Algorithm

Let’s explore the mechanism of the Expectation-Maximization algorithm in Machine Learning:


  • Step 1: We start with a set of missing or incomplete data and a set of initial parameters, assuming that the observed data and the initial parameter values are generated from a specific model.
  • Step 2: Based on the observable values in the available data, we predict or estimate the values in the unobservable instances or the missing data. This is known as the Expectation step (E-step).
  • Step 3: Using the data generated in the E-step, we update the parameters and complete the data set. This is known as the Maximization step (M-step), which updates the hypothesis.

Steps 2 and 3 are repeated until convergence; if the values have not converged, we repeat the E-step and M-step.
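
In symbols, with parameters θ, observed data X, and latent variables Z, the two steps can be written as follows (standard EM notation, independent of any particular model):

\text{E-step:}\quad Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{Z \mid X, \theta^{(t)}}\big[\log p(X, Z \mid \theta)\big]

\text{M-step:}\quad \theta^{(t+1)} = \arg\max_{\theta} \; Q(\theta \mid \theta^{(t)})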

What Is Maximum Likelihood Estimation?

In statistics, maximum likelihood estimation is a method for estimating the parameters of a probability distribution. It works by maximizing a likelihood function so that the observed data is most probable under the assumed statistical model.

However, maximum likelihood has a significant limitation: it assumes the data is complete and fully observed. It does not require that the model have access to all the data; rather, it assumes that all variables relevant to the model are already present. In reality, some relevant variables may remain hidden, leading to inconsistencies. These hidden variables are called latent variables.
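
For fully observed data from a single Gaussian, for instance, maximum likelihood has a closed-form solution: the sample mean and the (biased) sample standard deviation. A minimal sketch with illustrative data:

import numpy as np

# Illustrative fully observed sample
x = np.random.normal(loc=10, scale=2, size=500)

# Closed-form maximum likelihood estimates for a single Gaussian
mu_hat = x.mean()      # MLE of the mean
sigma_hat = x.std()    # MLE of the standard deviation (ddof=0, the biased estimator)
print(mu_hat, sigma_hat)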

The Relevance of the EM Algorithm

In the presence of latent variables, the traditional maximum likelihood estimator no longer works as expected. The Expectation-Maximization (EM) algorithm lets us find appropriate model parameters even when latent variables are present.


Advantages and Disadvantages of the EM Algorithm

Advantages of the EM Algorithm
1. Every iteration of the EM algorithm results in a guaranteed increase in likelihood.
2. The Expectation step and Maximization step are fairly easy, and the solution for the latter often exists in closed form.
3. The algorithm takes both forward and backward probabilities into account, in contrast with numerical optimization, which considers only the forward probabilities.

Disadvantages of the EM Algorithm
1. Convergence of the EM algorithm is very slow.
2. The algorithm converges only to a local optimum.

Applications of the EM Algorithm 

The EM algorithm and latent variable models have plenty of real-world applications in machine learning.

  1. It is used in unsupervised data clustering and psychometric analysis.
  2. It is also used to estimate the density of Gaussian mixture models.
  3. The EM algorithm finds extensive use in estimating the parameters of Hidden Markov Models (HMMs) and other mixture models.
  4. The EM algorithm is widely used in natural language processing (NLP), computer vision, and quantitative genetics.
  5. Other important applications of the EM algorithm include image reconstruction in medicine and structural engineering.

Let us understand the EM algorithm using a Gaussian Mixture Model.

EM Algorithm For Gaussian Mixture Model

To estimate the parameters of a Gaussian Mixture Model, we need some observed variables generated by two separate processes whose probability distributions are known. However, the data points of the two processes are combined, and we do not know which distribution each point belongs to.

We aim to estimate the parameters of these distributions using the maximum likelihood estimation of the EM algorithm, as explained above.
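
To make the setup concrete, such data can be simulated by sampling from two Gaussians and shuffling the points together; the parameters below are illustrative:

import numpy as np

# Two separate processes with known distribution families
# but parameters that are unknown to the estimator
process_a = np.random.normal(loc=0.0, scale=1.0, size=300)
process_b = np.random.normal(loc=5.0, scale=1.5, size=700)

# Combine and shuffle: we no longer know which point came from which process
x = np.concatenate([process_a, process_b])
np.random.shuffle(x)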

Here is the code we will use:

import numpy as np
from numpy.random import rand

# Given a function that computes the density of a
# Gaussian at point x_i given mu, sigma: G(x_i, mu, sigma); and
# another function that computes the log-likelihood: L(x, mu, sigma, pi)

def estimate_gmm(x, K, tol=0.001, max_iter=100):
    ''' Estimate GMM parameters.
        :param x: list of observed real-valued variables
        :param K: integer for number of Gaussians
        :param tol: tolerated change in log-likelihood
        :return: mu, sigma, pi parameters
    '''
    # 0. Initialize theta = (mu, sigma, pi)
    N = len(x)
    mu = [rand() for _ in range(K)]
    sigma = [rand() for _ in range(K)]
    pi = [1.0 / K] * K        # mixing weights must sum to 1
    curr_L = np.inf

    for j in range(max_iter):
        prev_L = curr_L

        # 1. E-step: responsibility r[(i, k)] = p(z_i = k | x_i, theta^(t-1))
        r = {}
        for i in range(N):
            parts = [pi[k] * G(x[i], mu[k], sigma[k]) for k in range(K)]
            total = sum(parts)
            for k in range(K):
                r[(i, k)] = parts[k] / total

        # 2. M-step: update mu, sigma, pi from the responsibilities
        rk = [sum(r[(i, k)] for i in range(N)) for k in range(K)]
        for k in range(K):
            pi[k] = rk[k] / N
            mu[k] = sum(r[(i, k)] * x[i] for i in range(N)) / rk[k]
            # weighted variance; the square root gives the standard deviation
            var = sum(r[(i, k)] * (x[i] - mu[k]) ** 2 for i in range(N)) / rk[k]
            sigma[k] = np.sqrt(var)

        # 3. Check exit condition: stop once the log-likelihood has converged
        curr_L = L(x, mu, sigma, pi)
        if abs(prev_L - curr_L) < tol:
            break

    return mu, sigma, pi

In the E-step, we use Bayes’ theorem to compute the expected assignments (responsibilities) of the data points, given the parameters from the previous iteration of the algorithm. In the M-step, we treat these expected values of the latent variables as fixed and re-estimate the parameters using maximum likelihood: the standard weighted mean and standard deviation formulas give the parameters of the Gaussian mixture model.
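
To actually run the sketch above, the helper functions G and L that the code assumes are given could be implemented as follows; these definitions follow the comments at the top of the code and are not from any library:

import numpy as np

def G(x_i, mu, sigma):
    # Density of a univariate Gaussian at point x_i
    return np.exp(-0.5 * ((x_i - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def L(x, mu, sigma, pi):
    # Log-likelihood of the data under the current mixture parameters
    K = len(pi)
    return sum(np.log(sum(pi[k] * G(x_i, mu[k], sigma[k]) for k in range(K)))
               for x_i in x)

# Using the mixed sample x generated earlier, with K = 2 components
mu, sigma, pi = estimate_gmm(x, K=2)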

Conclusion

This brings us to the end of the article. For more information on Machine Learning concepts, get in touch with the top faculty of IIIT Bangalore and Liverpool John Moores University through upGrad’s Master of Science in Machine Learning & AI program.

It is an 18-month course that offers 450+ hours of learning content, 12+ industry projects, 10 Capstone project options, and 10+ coding assignments. You also enjoy personalised mentorship from industry experts and career guidance counselling through live sessions. The next batch begins on Feb 28, 2021!

