Basic Fundamentals of Statistics for Data Science

If you’re an aspiring Data Scientist, being familiar with the core concepts of Statistics is essential. You don’t need a Ph.D. in Statistics to excel at Data Science, but you need to know enough to perhaps describe a couple of basic algorithms at a dinner party. Going forward, we’ll walk you through some of the prerequisites in Statistics for Data Science.

If you’ve just entered the world of Data Science, you might have come across people stating “Maths” as a prerequisite to Data Science. In all honesty, it’s not Maths, per se, but Statistics that’s the major prerequisite for Data Science.

These days, libraries like TensorFlow hide almost all the complex mathematics away from the user. That’s convenient, but it’s still worthwhile to have a basic understanding of the underlying principles on which these things work. Having a general understanding of basic statistics for Data Science can help you utilise these libraries better.

This article will help arm you with some theorems, concepts, and equations that will not only help your cause as a Data Scientist but will also make you sound like you aced the course on Advanced Statistical Computing big time.

Statistical Distributions

This is probably one of the most important things you need to know while arming yourself with prerequisite Statistics for Data Science.

  • Poisson Distribution

    The Poisson distribution is one of the most essential tools in statistics. It’s used to calculate the number of events likely to occur in a time interval. For instance, how many phone calls are likely to occur in any particular period of time.

    The Poisson probability of seeing exactly k events is P(X = k) = (λ^k · e^−λ) / k!. The funny-looking symbol in this equation (λ) is known as lambda. It represents the average number of events occurring per time interval.

    Another good example where the Poisson distribution finds use is calculating loss in manufacturing. Suppose a machine produces sheets of metal and averages 2 flaws per yard. Using the Poisson distribution, we can calculate the probability that exactly two flaws will occur in a given yard (see the sketch after this list).

  • Binomial Distribution


    If you’ve ever encountered basic Statistics, you might have come across Binomial Distribution.

    Let’s say you had an experiment of flipping an unbiased coin thrice.

    Can you tell the probability of the coin showing heads on all three flips?

    First, from basic combinatorics, we know there are eight equally likely outcomes when flipping a coin three times. Only one of those outcomes is all heads, so the probability is 1/8. Now, if we plot the probabilities of getting 0, 1, 2, or 3 heads, that plot gives us the binomial distribution for this problem (see the sketch after this list). When graphed, you’ll notice it looks very similar to a typical normal distribution curve, and in theory the two are closely related. The difference is that the Binomial Distribution is for discrete values (a limited number of coin flips), while the Normal Distribution takes care of continuous values.

    There are a number of distributions other than the ones we talked about above. If you’re an interested soul and want to arm yourself better with the Statistics needed for Data Science, we suggest you read up on the following distributions as well:

  • Geometric Distribution
  • Hypergeometric Distribution
  • Discrete Uniform Distribution
  • Negative Binomial Distribution
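
To make the Poisson and Binomial examples above concrete, here is a minimal Python sketch (assuming the scipy library is available; the numbers simply mirror the examples in this section):

```python
from scipy.stats import binom, poisson

# Poisson: an average of 2 flaws per yard of metal sheet (lambda = 2).
# Probability of seeing exactly 2 flaws in a given yard.
p_two_flaws = poisson.pmf(k=2, mu=2)
print(f"P(exactly 2 flaws in a yard) = {p_two_flaws:.4f}")  # ~0.2707

# Binomial: 3 flips of a fair coin (n = 3, p = 0.5).
# Probability of getting heads on all three flips.
p_three_heads = binom.pmf(k=3, n=3, p=0.5)
print(f"P(3 heads in 3 flips) = {p_three_heads:.4f}")  # 0.1250 = 1/8
```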

Some Theorems and Algorithms

When we talk about Statistics for Data Science, we just can’t ignore the basic theorems and algorithms that form the foundation of many libraries you’ll be working with as a Data Scientist. There are a number of classification algorithms, clustering algorithms, neural network algorithms, decision trees, and so on. In this section, we’ll talk about a few basic theorems that you should know – understanding them will also help you grasp more complex theorems with ease.

Bayes Theorem

This is one of the most common theorems you’ll come across if you’ve had any formal education in Computer Science. Numerous books over the years have discussed Bayes Theorem and its concepts in elaborate detail.

Bayes Theorem greatly simplifies complex concepts. It explains a lot of statistical facts using a few simple variables and rests on the concept of “conditional probability” (e.g., the probability of B given that A has already occurred). In its simplest form it states that P(A|B) = P(B|A) · P(A) / P(B). The most appreciable thing about it is that you can estimate the probability of a hypothesis using just the given data points.

Bayes can help you estimate the probability of someone having cancer just by knowing their age. It can also tell you whether an email is spam based on the words it contains. In essence, this theorem is used to reduce uncertainty.

Fun fact: in WWII, Bayes Theorem helped predict the locations of U-boats and work out the configuration of the Enigma machine used to encrypt German messages. Even in modern Data Science, Bayes finds extensive application in many algorithms.
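
To see the theorem in action on the spam example mentioned above, here is a toy Python sketch; every probability in it is an assumed value, chosen only to demonstrate the calculation:

```python
# Bayes Theorem: P(spam | word) = P(word | spam) * P(spam) / P(word)

p_spam = 0.20              # assumed prior: 20% of all email is spam
p_word_given_spam = 0.60   # assumed: the word appears in 60% of spam emails
p_word_given_ham = 0.05    # assumed: the word appears in 5% of legitimate emails

# Law of total probability: overall chance of seeing the word in any email.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior: probability the email is spam, given that it contains the word.
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(f"P(spam | word) = {p_spam_given_word:.2f}")  # 0.75
```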


K-Nearest Neighbor Algorithm


This is a very easy algorithm both in terms of understanding and implementation. So much so that it’s often referred to as a “lazy” algorithm. Its simplicity lies in the fact that it’s based more on logical deduction than on any concept of statistics, per se. In layman’s terms, this algorithm looks at the data points closest to a given point and lets them decide the outcome.

K-NN uses the concept of Euclidean Distance. It looks at a specified number of nearest neighbours around a query point; that number is represented by “k”. There are many approaches to deciding how large the value of ‘k’ should be, as this is a user-decided value.

This concept is great for feature clustering, basic market segmentation, and seeking out outliers from a group of data entries. Most modern machine-learning libraries let you apply the K-NN algorithm in just a couple of lines of code.
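
For example, with scikit-learn (assuming it is installed), fitting a K-NN classifier and making a prediction really does come down to a couple of lines; the toy points below are made up for illustration:

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy data: six 2-D points belonging to two well-separated groups (labels 0 and 1).
X = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
y = [0, 0, 0, 1, 1, 1]

# The two essential lines: fit with k = 3 neighbours, then classify new points.
model = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(model.predict([[2, 2], [9, 9]]))  # -> [0 1]
```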

Bagging (Bootstrap aggregating)

Bagging essentially refers to creating more than one model of the same algorithm – for example, a decision tree. Each model is trained on a different sample of the data (called a bootstrap sample, drawn with replacement).

Therefore, each decision tree is built from different sample data – this reduces the risk of overfitting to any single sample. Grouping decision trees like this helps reduce the total error, as the overall variance decreases with each new tree added. A bag of such decision trees, with some extra randomness in how each tree picks its features, is known as a random forest.
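
Here is a minimal sketch of that idea (assuming scikit-learn and NumPy are available): several decision trees are trained on different bootstrap samples of the same dataset, and their predictions are combined by majority vote:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(42)

# Train 10 trees, each on its own bootstrap sample (drawn with replacement).
trees = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))  # indices of one bootstrap sample
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

# Aggregate: every tree votes, and the majority class wins for each sample.
votes = np.array([tree.predict(X) for tree in trees])  # shape: (n_trees, n_samples)
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), axis=0, arr=votes)
print("Bagged accuracy on the training data:", (majority == y).mean())
```

In practice, scikit-learn’s BaggingClassifier and RandomForestClassifier wrap this whole loop up for you.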


ROC Curve Analysis


The term ROC stands for Receiver Operating Characteristic. The ROC analysis curve finds extensive use in Data Science. It shows how well a test is likely to perform by plotting its sensitivity (true positive rate) against its fall-out (false positive rate) across different thresholds. ROC analysis is extremely important when determining the viability of any model.

How does it work?

Your machine learning model might give you some inaccurate predictions. Some of them occur because a particular value should’ve been ‘true’ but is instead predicted ‘false’, or vice versa.

What is the probability of you being correct then?

Using the ROC curve, you can see how accurate your predictions are. By plotting the two score distributions – one for the actual positives and one for the actual negatives – you can also figure out where to put your threshold value. The threshold is where you decide whether the binary classification is positive or negative – true or false.

As the two distributions move closer together and overlap, the area under the ROC curve shrinks towards 0.5, which means your model is doing little better than random guessing. The greater the area, the greater the accuracy of your model. This is one of the first tests run when evaluating any model, as it helps detect problems early by showing whether or not the model separates the classes well.

A real-life example of ROC curves: in Biochemistry, they are used to depict, in a graphical way, the trade-off between clinical sensitivity and specificity at every possible cut-off for a particular test or a combination of tests. The area under the ROC curve also gives a fair idea of the benefit of using the tests in question, which is why ROC curves find extensive use for choosing an appropriate cut-off. Ideally, the best cut-off is the one that combines the lowest false positive rate with the highest true positive rate.
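
In practice you rarely draw the curve by hand; a minimal sketch with scikit-learn (assuming it is installed, with made-up labels and scores) shows both the curve’s points and the area under it:

```python
from sklearn.metrics import roc_auc_score, roc_curve

# Made-up ground-truth labels (1 = positive) and model scores (predicted probabilities).
y_true = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.75, 0.6, 0.5]

# The curve: false positive rate vs. true positive rate at each candidate threshold.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC =", roc_auc_score(y_true, y_score))

# Each threshold is a candidate cut-off; pick the one that best trades TPR against FPR.
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  TPR={t:.2f}  FPR={f:.2f}")
```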


In Conclusion…

The above list of topics is by no means a comprehensive list of everything you need to know in Statistics. It is just meant to give you a flavour of what you might encounter in your Data Science journey, and how you can be prepared for it.

All in all, this article introduces you to some of the core concepts of Statistics for Data Science. A solid grasp of the concepts explained here will help you pick up the other concepts easily.

Let us know if we missed an important topic in the comments below!
