
An Intuition Behind Sentiment Analysis: How To Do Sentiment Analysis From Scratch?

By Pavan Vadapalli

Updated on Sep 26, 2022 | 8 min read | 5.3k views

Introduction

Text is one of the most important means by which human beings perceive information. Much of the intelligence humans acquire comes from learning and comprehending the meaning of the texts and sentences around them.

After a certain age, humans develop an intrinsic reflex to infer the meaning of any word or text without conscious effort. For machines, this task is completely different. To assimilate the meaning of texts and sentences, machines rely on the fundamentals of Natural Language Processing (NLP).

Deep learning for natural language processing is pattern recognition applied to words, sentences, and paragraphs, in much the same way that computer vision is pattern recognition applied to the pixels of an image.

None of these deep learning models truly understands text in a human sense; rather, these models map the statistical structure of written language, which is sufficient to solve many simple textual tasks. Sentiment analysis is one such task: for example, classifying strings or movie reviews as positive or negative.

Such models also have large-scale applications in industry. For example, a goods and services company may want to count the positive and negative reviews it has received for a particular product in order to gather customer feedback, manage the product life cycle, and improve its sales figures.

Join the Artificial Intelligence Course online from the World’s top Universities – Masters, Executive Post Graduate Programs, and Advanced Certificate Program in ML & AI to fast-track your career.

Preprocessing

The task of sentiment analysis can be broken down into a simple supervised machine learning problem, where we have an input X that goes into a predictor function to produce a prediction Ŷ. We then compare our prediction with the true value Y; this gives us the cost, which we use to update the parameters (theta) of our text processing model.
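In symbols, this loop can be sketched as follows (a notational summary only; the generic hypothesis function h, the cost function, and the learning rate α are placeholders not named in the article):

```latex
\hat{Y} = h_\theta(X), \qquad
J(\theta) = \operatorname{cost}(\hat{Y}, Y), \qquad
\theta \leftarrow \theta - \alpha \, \nabla_\theta J(\theta)
```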

To tackle the task of extracting sentiment from a previously unseen stream of text, the first step is to gather a labeled dataset with separate positive and negative examples. These labels can be: good review or bad review, sarcastic remark or non-sarcastic remark, etc.

The next step is to create a vector of dimension V, where V corresponds to the vocabulary size of the text corpus. This vocabulary vector contains every unique word present in our dataset (no word is repeated) and acts as a lexicon that the machine can refer to. Now we preprocess the vocabulary vector to remove redundancies. The following steps are performed:

  1. Eliminating URLs and other non-trivial information (which does not help determine the meaning of a sentence)
  2. Tokenizing the string into words: suppose we have the string “I love machine learning”; by tokenizing we simply break the sentence into single words and store them in a list as [I, love, machine, learning]
  3. Removing stop words like “and”, “am”, “or”, “I”, etc.
  4. Stemming: we transform each word to its stem form. Words like “tune”, “tuning” and “tuned” have semantically the same meaning, so reducing them to their stem form “tun” reduces the vocabulary size
  5. Converting all words to lowercase

To summarize the preprocessing step, let’s take a look at an example: say we have a positive string “I am loving the new product at upGrad.com”. The final preprocessed string is obtained by removing the URL, tokenizing the sentence into a list of single words, removing the stop words “I, am, the, at”, then stemming “loving” to “lov” and “product” to “produ”, and finally converting everything to lowercase, which results in the list [lov, new, produ].
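A minimal sketch of this preprocessing pipeline in Python is shown below. It assumes the NLTK library for tokenization, stop words, and Porter stemming; the article does not name a specific toolkit, and the exact stem forms a real stemmer produces may differ slightly from the hand-worked example above.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

# One-time downloads of the tokenizer model and stop-word list
nltk.download("punkt")
nltk.download("stopwords")

def preprocess(text):
    # 1. Eliminate URLs and other non-trivial tokens
    text = re.sub(r"https?://\S+|www\.\S+|\S+\.com\S*", "", text)
    # 5. Convert to lowercase (done early so stop-word matching works)
    text = text.lower()
    # 2. Tokenize the string into words
    tokens = nltk.word_tokenize(text)
    # 3. Remove stop words ("and", "am", "or", "i", ...) and punctuation
    stop_words = set(stopwords.words("english"))
    tokens = [t for t in tokens if t.isalpha() and t not in stop_words]
    # 4. Stemming: reduce each word to its stem form
    stemmer = PorterStemmer()
    return [stemmer.stem(t) for t in tokens]

print(preprocess("I am loving the new product at upGrad.com"))
# With the Porter stemmer this prints something like ['love', 'new', 'product']
```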

Feature Extraction

After the corpus is preprocessed, the next step is to extract features from the list of sentences. Like all other neural networks, deep-learning models don’t take raw text as input: they only work with numeric tensors. The preprocessed list of words therefore needs to be converted into numerical values. This can be done in the following way. Assume we are given a collection of positive and negative strings such as the following (treat this as the dataset):

Positive strings:
  • I am happy because I am learning NLP
  • I am happy

Negative strings:
  • I am sad, I am not learning NLP
  • I am sad

Now, to convert each of these strings into a numerical vector of dimension 3, we create a dictionary that maps each word, together with the class it appeared in (positive or negative), to the number of times that word appeared in that class.

| Vocabulary | Positive frequency | Negative frequency |
|---|---|---|
| I | 3 | 3 |
| am | 3 | 3 |
| happy | 2 | 0 |
| because | 1 | 0 |
| learning | 1 | 1 |
| NLP | 1 | 1 |
| sad | 0 | 2 |
| not | 0 | 1 |
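A hedged sketch of building this (word, class) → frequency dictionary in plain Python, using the four example strings above. For clarity it counts raw words so the counts match the table; in practice the preprocessed tokens from the previous step would be used, and the variable and function names here are illustrative, not from the article.

```python
from collections import defaultdict

positive_strings = ["I am happy because I am learning NLP", "I am happy"]
negative_strings = ["I am sad, I am not learning NLP", "I am sad"]

def build_freqs(pos_strings, neg_strings):
    # Map (word, class) -> count, where class 1 is positive and class 0 is negative
    freqs = defaultdict(int)
    for label, strings in ((1, pos_strings), (0, neg_strings)):
        for s in strings:
            for word in s.replace(",", "").split():
                freqs[(word, label)] += 1
    return freqs

freqs = build_freqs(positive_strings, negative_strings)
print(freqs[("happy", 1)], freqs[("sad", 0)])  # 2 2
```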

After generating the aforementioned dictionary, we look at each string individually and sum the positive and negative frequency numbers of the words that appear in the string, leaving out the words that do not appear in it. Let’s take the string “I am sad, I am not learning NLP” and generate its vector of dimension 3.

“I am sad, I am not learning NLP”

| Vocabulary | Positive frequency | Negative frequency |
|---|---|---|
| I | 3 | 3 |
| am | 3 | 3 |
| happy | 2 | 0 |
| because | 1 | 0 |
| learning | 1 | 1 |
| NLP | 1 | 1 |
| sad | 0 | 2 |
| not | 0 | 1 |
| | Sum = 8 | Sum = 11 |

We see that for the string “I am sad, I am not learning NLP”, only two vocabulary words, “happy” and “because”, do not appear in the string. To extract features and create the said vector, we sum the positive and negative frequency columns separately, leaving out the frequency numbers of the words that are not present in the string (in this case “happy” and “because”). We obtain a sum of 8 for the positive frequencies and 11 for the negative frequencies.

Hence, the string “I am sad, I am not learning NLP” can be represented as a vector X = [1, 8, 11], which makes sense as the string is semantically in a negative context. The number “1” present in index 0 is the bias unit which will remain “1” for all forthcoming strings and the numbers “8”,“11” represent the sum of positive and negative frequencies respectively.

In a similar manner, all the strings in the dataset can be converted to a vector of dimension 3 comfortably. 
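Continuing the same sketch, a small helper can turn any string into the 3-dimensional vector [bias, positive sum, negative sum] described above (it reuses the freqs dictionary built earlier; the names are again illustrative):

```python
def extract_features(text, freqs):
    # Unique words in the string; words absent from the vocabulary contribute 0
    words = set(text.replace(",", "").split())
    pos_sum = sum(freqs.get((w, 1), 0) for w in words)
    neg_sum = sum(freqs.get((w, 0), 0) for w in words)
    return [1, pos_sum, neg_sum]  # bias unit, positive frequency sum, negative frequency sum

print(extract_features("I am sad, I am not learning NLP", freqs))  # [1, 8, 11]
```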

Read more: Sentiment Analysis Using Python: A Hands-on Guide

Applying Logistic Regression

Feature extraction makes it easy to capture the essence of a sentence, but machines still need a crisper way to flag an unseen string as positive or negative. This is where logistic regression comes into play: it makes use of the sigmoid function, which outputs a probability between 0 and 1 for each vectorised string.

Figure 1: Graph of the sigmoid function

Figure 1 shows that whenever the dot product of theta and X is negative, the sigmoid output falls below 0.5 and the prediction function classifies the string as negative; whenever the dot product is positive, the output rises above 0.5 and the string is classified as positive.
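A minimal, hedged sketch of this classification step with NumPy, continuing the earlier snippets (it reuses extract_features and freqs). The theta values below are made up purely for illustration; in practice theta would be learned by minimising the cost, as outlined in the preprocessing section.

```python
import numpy as np

def sigmoid(z):
    # Squashes the dot product theta·x into a probability between 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

def predict(text, freqs, theta):
    x = np.array(extract_features(text, freqs), dtype=float)
    p = sigmoid(np.dot(theta, x))
    return ("positive" if p >= 0.5 else "negative", p)

# Illustrative (untrained) parameters: bias, weight on positive sum, weight on negative sum
theta = np.array([0.0, 0.5, -0.5])
print(predict("I am sad, I am not learning NLP", freqs, theta))
# theta·x = 0 + 0.5*8 - 0.5*11 = -1.5 < 0, so the sigmoid output is below 0.5
# and the string is flagged as negative
```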

Also Read: Top 4 Data Analytics Project Ideas: Beginner to Expert Level 

What Next?

Sentiment Analysis is an essential topic in machine learning. It has numerous applications in multiple fields. If you want to learn more about this topic, then you can head to our blog and find many new resources.

On the other hand, if you want a comprehensive and structured learning experience and are interested in learning more about machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI, which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects, and job assistance with top firms.

