
Recurrent Neural Networks: Introduction, Problems, LSTMs Explained

By Pavan Vadapalli

Updated on Nov 24, 2022 | 5 min read | 5.6k views

Introduction

In a traditional feed-forward neural network, information flows in only one direction: from the input layer, through the hidden layers, and finally to the output layer.

Here, the output of each layer depends only on the layer immediately before it, so the network carries no memory of earlier inputs as it moves forward. For example, consider a simple neural network and feed it the word “layer” as input.

The network processes the word one character at a time. By the time it reaches the character “e”, it no longer has any memory of the previous characters “l”, “a” and “y”. This is why a feed-forward neural network cannot use the earlier characters to predict the next one.

Now, this is where a recurrent neural network comes to the rescue. It is able to remember all the previous characters because it possesses a memory of its own. As the name suggests, the flow of information recurs in a loop in the case of a recurrent neural network.

At every time step, it receives two inputs: the current input and a state carrying the information gathered from previous steps. This kind of neural network therefore does well on tasks like predicting the next character and, more generally, on sequential data such as text, speech, audio, and time series.

Taking the above example of the word “layer”, suppose the network is trying to predict the fifth character. The recurrent hidden block applies a recurrence formula at every time step, combining the current input with the previous state. So at time step t, if the current input is “y”, the previous state is the hidden state produced after reading “a” (which already summarises “l” and “a”), and the formula is applied to both to produce the next state.

The formula is represented as:

h_t = f(h_{t-1}, x_t)

where h_t is the new state, h_{t-1} is the previous state and x_t is the current input. Each input corresponds to a time step, and the same weight matrices and the same function are shared across every time step of the RNN.

So, taking the activation function f as tanh and introducing weight matrices W_hh (applied to the previous state) and W_xh (applied to the current input), we get:

h_t = tanh(W_hh · h_{t-1} + W_xh · x_t)

The output at each time step is then:

y_t = W_hy · h_t
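
To make the recurrence concrete, below is a minimal NumPy sketch of a single forward pass over the characters of “layer”. The vocabulary, dimensions and random initialisation are illustrative assumptions, and the weights are untrained, so the final “prediction” is meaningless; the point is only that the same matrices W_hh, W_xh and W_hy are reused at every time step.

```python
import numpy as np

# A minimal vanilla-RNN forward pass over a character sequence (illustrative
# sketch only; sizes and initialisation are arbitrary assumptions).
rng = np.random.default_rng(0)

vocab = list("layer")                  # toy vocabulary: the characters of "layer"
char_to_idx = {c: i for i, c in enumerate(vocab)}

input_size = len(vocab)                # one-hot input size
hidden_size = 8                        # chosen arbitrarily for the sketch
output_size = len(vocab)

# Shared weights, reused at every time step.
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
W_hy = rng.standard_normal((output_size, hidden_size)) * 0.1

def one_hot(c):
    v = np.zeros(input_size)
    v[char_to_idx[c]] = 1.0
    return v

h = np.zeros(hidden_size)              # initial state h_0
for c in "laye":                       # feed "l", "a", "y", "e" one step at a time
    x = one_hot(c)
    h = np.tanh(W_hh @ h + W_xh @ x)   # h_t = tanh(W_hh·h_{t-1} + W_xh·x_t)
    y = W_hy @ h                       # y_t = W_hy·h_t (logits over the vocabulary)

probs = np.exp(y) / np.exp(y).sum()    # softmax over the final logits
print("predicted next character:", vocab[int(np.argmax(probs))])
```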

A deeper recurrent neural network simply has more than one hidden layer. Each hidden layer gets its own weights and biases (and, if desired, its own activation function), so the layers learn different representations and behave differently. If every hidden layer shared the same weights and biases, they would all compute the same thing and the extra depth would add nothing. A minimal sketch of such a stacked RNN follows.
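
Here is a rough sketch of one step of a two-layer (stacked) RNN under the same assumptions as above: each layer keeps its own weight matrices, and the hidden state of layer 1 serves as the input to layer 2.

```python
import numpy as np

# Illustrative two-layer (stacked) RNN step; all sizes and the random
# initialisation are assumptions made only for this sketch.
rng = np.random.default_rng(1)
input_size, hidden1, hidden2 = 5, 8, 8

# Each layer has its own, independent weights.
W_xh1 = rng.standard_normal((hidden1, input_size)) * 0.1
W_hh1 = rng.standard_normal((hidden1, hidden1)) * 0.1
W_xh2 = rng.standard_normal((hidden2, hidden1)) * 0.1
W_hh2 = rng.standard_normal((hidden2, hidden2)) * 0.1

def step(x, h1, h2):
    # Layer 1 reads the external input x_t.
    h1_new = np.tanh(W_hh1 @ h1 + W_xh1 @ x)
    # Layer 2 reads layer 1's fresh hidden state as its "input".
    h2_new = np.tanh(W_hh2 @ h2 + W_xh2 @ h1_new)
    return h1_new, h2_new

h1, h2 = np.zeros(hidden1), np.zeros(hidden2)
x = np.zeros(input_size)
x[0] = 1.0                      # a dummy one-hot input for one time step
h1, h2 = step(x, h1, h2)
print(h2.shape)                 # (8,) — the top layer's state for this step
```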

Problems

Two problems commonly occur while training recurrent neural networks: vanishing gradients and exploding gradients. A gradient is simply a measure of how much the output of a function changes with a change in its input. The higher the gradient, the faster the recurrent neural network learns, and vice versa.

Vanishing gradients occur when the gradient values become so small that the RNN takes extremely long to learn or stops learning altogether. This is the harder problem to tackle, but it can be mitigated by using LSTMs, GRUs, or the ReLU activation function.

Exploding gradients occur when the gradients grow uncontrollably large during backpropagation through time, effectively giving some weights far too much importance. This problem is easier to tackle than vanishing gradients: the gradients can be clipped to a maximum norm, RMSprop can be used to adapt the learning rate, or backpropagation can be truncated after a suitable number of time steps. A rough numerical illustration follows.
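
The toy sketch below is not the exact mechanics of backpropagation through time, but it shows the core effect: the gradient that flows back through many time steps is scaled by a product of per-step factors, so factors consistently below 1 make it vanish and factors consistently above 1 make it explode. The specific factor values are assumptions chosen only to make the effect visible.

```python
# Toy illustration of why long products of per-step factors vanish or explode.

def backprop_scale(per_step_factor, num_steps):
    """Rough size of a gradient after flowing back through num_steps steps."""
    grad = 1.0
    for _ in range(num_steps):
        grad *= per_step_factor
    return grad

for factor in (0.5, 1.0, 1.5):
    print(f"factor={factor}: gradient scale after 50 steps = "
          f"{backprop_scale(factor, 50):.3e}")

# factor=0.5 -> ~8.9e-16 (vanishing: long-range dependencies barely update)
# factor=1.0 -> 1.0      (stable)
# factor=1.5 -> ~6.4e+08 (exploding: updates blow up unless clipped/truncated)
```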

LSTMs

Recurrent neural networks by default tend to have only a short-term memory; LSTMs are the exception. An LSTM cell adds a module with three gates (a minimal code sketch follows the list):

Forget gate: This gate decides how much of the past information should be remembered and how much should be omitted. 

Input gate: This gate decides how much of the present input is to be added to the current state. 

Output gate: This gate decides how much of the current state will be passed on to the output.
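
The sketch below implements one LSTM step in NumPy using the standard gate equations. The sizes, the zero biases and the random initialisation are assumptions made only for illustration; in practice these weights would be learned.

```python
import numpy as np

# Minimal single-step LSTM cell in NumPy, following the standard gate
# equations; sizes and initialisation are illustrative assumptions.
rng = np.random.default_rng(2)
input_size, hidden_size = 5, 8
concat = hidden_size + input_size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix and bias per gate, plus one for the candidate cell state.
W_f, b_f = rng.standard_normal((hidden_size, concat)) * 0.1, np.zeros(hidden_size)
W_i, b_i = rng.standard_normal((hidden_size, concat)) * 0.1, np.zeros(hidden_size)
W_c, b_c = rng.standard_normal((hidden_size, concat)) * 0.1, np.zeros(hidden_size)
W_o, b_o = rng.standard_normal((hidden_size, concat)) * 0.1, np.zeros(hidden_size)

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([h_prev, x_t])      # [h_{t-1}, x_t]
    f = sigmoid(W_f @ z + b_f)             # forget gate: keep vs. drop old memory
    i = sigmoid(W_i @ z + b_i)             # input gate: how much new info to add
    c_tilde = np.tanh(W_c @ z + b_c)       # candidate cell content
    c = f * c_prev + i * c_tilde           # updated cell (long-term) state
    o = sigmoid(W_o @ z + b_o)             # output gate: how much state to expose
    h = o * np.tanh(c)                     # new hidden (short-term) state
    return h, c

h = np.zeros(hidden_size)
c = np.zeros(hidden_size)
x = np.zeros(input_size)
x[0] = 1.0                                 # dummy one-hot input
h, c = lstm_step(x, h, c)
print(h.shape, c.shape)                    # (8,) (8,)
```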

Conclusion

This modified version of the RNN is thus able to retain information over much longer time spans without suffering as badly from vanishing gradients. LSTMs are helpful for classifying or predicting sequences where the duration of the time lags between important events is unknown. RNNs in general have always helped in modelling sequential data, with the added advantage that they can process inputs and outputs of varying lengths.

If you’re interested in learning more about machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI, which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects, and job assistance with top firms.

Learn ML Course from the World’s top Universities. Earn Masters, Executive PGP, or Advanced Certificate Programs to fast-track your career.

