
Thulasiram Gunipati

14+ articles published

Words Crafter / Idea Explorer / Insightful Critic

Domain:

upGrad

Current role in the industry:

Senior Solution Architect (Data & Analytics) at Louis Dreyfus Company

Data Science Manager at Eyeota

Educational Qualification:

Executive Program in Algorithmic Trading (with Distinction) from QuantInsti

Post Graduate Diploma in Data Analysis from the International Institute of Information Technology Bangalore

Expertise:

Data Science Management

Machine Learning at Scale

Computer Vision

Natural Language Processing

Interpretable Machine Learning

Tools & Technologies:

Generative Adversarial Networks (GANs)

Natural Language Processing (NLP)

Artificial Intelligence (AI)

R Programming Language

Predictive Modeling

Certifications:

Machine Learning System Design from Educative

Dimensionality Reduction using an Autoencoder in Python from Coursera

Automatic Machine Learning with H2O AutoML and Python from Coursera

Predict Future Product Prices Using Facebook Prophet from Coursera

#AppsflyerKnowledgeChampion from AppsFlyer

Graph Databases from Neo4j

Duolingo Spanish Fluency: Beginner (Estimated) from Duolingo

More than 30 MOOCs from Coursera

edX and Udacity from United Latino Students Association

Quality Assurance from CTI, Bangalore

edX Honor Code Certificate for Justice from edX

edX Honor Code Certificate for The Science of Everyday Thinking from edX

About

Thulasiram is a veteran with 20 years of experience in production planning, supply chain management, quality assurance, information technology, and training. Trained in data analysis at IIIT Bangalore and upGrad, he is passionate about education and operations and keen on applying data analytics to improve operational efficiency and effectiveness. He presently works as a Program Associate for Data Analysis at upGrad.

Published

Most Popular

40+ Machine Learning Interview Questions & Answers – Linear Regression
Blogs
42,898 views

40+ Machine Learning Interview Questions & Answers – Linear Regression

Machine Learning Interviews can vary according to the types or categories, for instance, a few recruiters ask many Linear Regression interview questions. When going for the role of Machine Learning Engineer interview, they can specialize in categories like Coding, Research, Case Study, Project Management, Presentation, System Design, and Statistics. We will focus on the most common types of categories and how to prepare for them.  Getting your desired job as a machine learning engineer may need you to pass a machine learning interview. The categories included in these interviews are frequently coding, machine learning concepts, screening, and system design. Different facets of your expertise and knowledge in the topic are assessed in each category. In this article, we’ll examine the most typical machine learning interview questions and offer helpful preparation advice for each of them. It is a common practice to test data science aspirants on commonly used machine learning algorithms in interviews. These conventional algorithms being linear regression, logistic regression, clustering, decision trees etc. Data scientists are expected to possess an in-depth knowledge of these algorithms. We consulted hiring managers and data scientists from various organisations to know about the typical ML questions which they ask in an interview. Based on their extensive feedback a set of question and answers were prepared to help aspiring data scientists in their conversations. Linear Regression interview questions are the most common in Machine Learning interviews. Q&As on these algorithms will be provided in a series of four blog posts. Each blog post will cover the following topic:- Linear Regression Logistic Regression Clustering Decision Trees and Questions which pertain to all algorithms Let’s get started with linear regression! 1. What is linear regression? In simple terms, linear regression is a method of finding the best straight line fitting to the given data, i.e. finding the best linear relationship between the independent and dependent variables. In technical terms, linear regression is a machine learning algorithm that finds the best linear-fit relationship on any given data, between independent and dependent variables. It is mostly done by the Sum of Squared Residuals Method. 2. State the assumptions in a linear regression model. There are three main assumptions in a linear regression model: The assumption about the form of the model: It is assumed that there is a linear relationship between the dependent and independent variables. It is known as the ‘linearity assumption’. Assumptions about the residuals: Normality assumption: It is assumed that the error terms, ε(i), are normally distributed. Zero mean assumption: It is assumed that the residuals have a mean value of zero. Constant variance assumption: It is assumed that the residual terms have the same (but unknown) variance, σ2 This assumption is also known as the assumption of homogeneity or homoscedasticity. Independent error assumption: It is assumed that the residual terms are independent of each other, i.e. their pair-wise covariance is zero. Assumptions about the estimators: The independent variables are measured without error. The independent variables are linearly independent of each other, i.e. there is no multicollinearity in the data. Explanation: This is self-explanatory. If the residuals are not normally distributed, their randomness is lost, which implies that the model is not able to explain the relation in the data. 
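To make questions 1 and 2 concrete, here is a minimal sketch, assuming scikit-learn and NumPy are available and using synthetic data, that fits an ordinary least-squares line and confirms that the residuals average out to roughly zero:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: y = 3x + 5 plus Gaussian noise (illustrative only)
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))
y = 3 * X.ravel() + 5 + rng.normal(0, 1.5, size=200)

# Fit the best straight line by minimising the sum of squared residuals
model = LinearRegression().fit(X, y)
residuals = y - model.predict(X)

print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("mean of residuals (should be ~0):", residuals.mean())
```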
Also, the mean of the residuals should be zero. Y(i)i= β0+ β1x(i) + ε(i) This is the assumed linear model, where ε is the residual term. E(Y) = E(β0+ β1x(i) + ε(i))         = E(β0+ β1x(i) + ε(i)) If the expectation(mean) of residuals, E(ε(i)), is zero, the expectations of the target variable and the model become the same, which is one of the targets of the model. The residuals (also known as error terms) should be independent. This means that there is no correlation between the residuals and the predicted values, or among the residuals themselves. If some correlation is present, it implies that there is some relation that the regression model is not able to identify. If the independent variables are not linearly independent of each other, the uniqueness of the least squares solution (or normal equation solution) is lost. Join the Artificial Intelligence Course online from the World’s top Universities – Masters, Executive Post Graduate Programs, and Advanced Certificate Program in ML & AI to fast-track your career. 3. What is feature engineering? How do you apply it in the process of modelling? Feature engineering is the process of transforming raw data into features that better represent the underlying problem to the predictive models resulting in improved model accuracy on unseen data. In layman terms, feature engineering means the development of new features that may help you understand and model the problem in a better way. Feature engineering is of two kinds — business driven and data-driven. Business-driven feature engineering revolves around the inclusion of features from a business point of view. The job here is to transform the business variables into features of the problem. In the case of data-driven feature engineering, the features you add do not have any significant physical interpretation, but they help the model in the prediction of the target variable. FYI: Free nlp course! To apply feature engineering, one must be fully acquainted with the dataset. This involves knowing what the given data is, what it signifies, what the raw features are, etc. You must also have a crystal clear idea of the problem, such as what factors affect the target variable, what the physical interpretation of the variable is, etc. 5 Breakthrough Applications of Machine Learning 4. What is the use of regularisation? Explain L1 and L2 regularisations. Regularisation is a technique that is used to tackle the problem of overfitting of the model. When a very complex model is implemented on the training data, it overfits. At times, the simple model might not be able to generalise the data and the complex model overfits. To address this problem, regularisation is used. Regularisation is nothing but adding the coefficient terms (betas) to the cost function so that the terms are penalised and are small in magnitude. This essentially helps in capturing the trends in the data and at the same time prevents overfitting by not letting the model become too complex. L1 or LASSO regularisation: Here, the absolute values of the coefficients are added to the cost function. This can be seen in the following equation; the highlighted part corresponds to the L1 or LASSO regularisation. This regularisation technique gives sparse results, which lead to feature selection as well. L2 or Ridge regularisation: Here, the squares of the coefficients are added to the cost function. This can be seen in the following equation, where the highlighted part corresponds to the L2 or Ridge regularisation. 5. 
How to choose the value of the parameter learning rate (α)? Selecting the value of learning rate is a tricky business. If the value is too small, the gradient descent algorithm takes ages to converge to the optimal solution. On the other hand, if the value of the learning rate is high, the gradient descent will overshoot the optimal solution and most likely never converge to the optimal solution. To overcome this problem, you can try different values of alpha over a range of values and plot the cost vs the number of iterations. Then, based on the graphs, the value corresponding to the graph showing the rapid decrease can be chosen. The aforementioned graph is an ideal cost vs the number of iterations curve. Note that the cost initially decreases as the number of iterations increases, but after certain iterations, the gradient descent converges and the cost does not decrease anymore. If you see that the cost is increasing with the number of iterations, your learning rate parameter is high and it needs to be decreased. Best Machine Learning and AI Courses Online Master of Science in Machine Learning & AI from LJMU Executive Post Graduate Programme in Machine Learning & AI from IIITB Advanced Certificate Programme in Machine Learning & NLP from IIITB Advanced Certificate Programme in Machine Learning & Deep Learning from IIITB Executive Post Graduate Program in Data Science & Machine Learning from University of Maryland To Explore all our courses, visit our page below. Machine Learning Courses 6. How to choose the value of the regularisation parameter (λ)? Selecting the regularisation parameter is a tricky business. If the value of λ is too high, it will lead to extremely small values of the regression coefficient β, which will lead to the model underfitting (high bias – low variance). On the other hand, if the value of λ is 0 (very small), the model will tend to overfit the training data (low bias – high variance). There is no proper way to select the value of λ. What you can do is have a sub-sample of data and run the algorithm multiple times on different sets. Here, the person has to decide how much variance can be tolerated. Once the user is satisfied with the variance, that value of λ can be chosen for the full dataset. One thing to be noted is that the value of λ selected here was optimal for that subset, not for the entire training data. 7. Can we use linear regression for time series analysis? One can use linear regression for time series analysis, but the results are not promising. So, it is generally not advisable to do so. The reasons behind this are — Time series data is mostly used for the prediction of the future, but linear regression seldom gives good results for future prediction as it is not meant for extrapolation. Mostly, time series data have a pattern, such as during peak hours, festive seasons, etc., which would most likely be treated as outliers in the linear regression analysis. 8. What value is the sum of the residuals of a linear regression close to? Justify. Ans The sum of the residuals of a linear regression is 0. Linear regression works on the assumption that the errors (residuals) are normally distributed with a mean of 0, i.e. Y = βT X + ε Here, Y is the target or dependent variable, β is the vector of the regression coefficient, X is the feature matrix containing all the features as the columns, ε is the residual term such that ε ~ N(0,σ2). So, the sum of all the residuals is the expected value of the residuals times the total number of data points. 
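Here is a small illustrative sketch of the learning-rate discussion in question 5: batch gradient descent for simple linear regression with alpha as the knob, run on synthetic data, with a closed-form least-squares solve used only as a sanity check. The helper name and the numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 5, 300)
y = 2.0 + 1.5 * x + rng.normal(0, 0.5, 300)

def gradient_descent(x, y, alpha=0.05, n_iters=2000):
    """Minimise J(theta0, theta1) = (1/2m) * sum((theta0 + theta1*x - y)^2)."""
    theta0, theta1 = 0.0, 0.0
    costs = []
    for _ in range(n_iters):
        error = (theta0 + theta1 * x) - y
        # Update both parameters using the gradient computed from the same error
        theta0 -= alpha * error.mean()
        theta1 -= alpha * (error * x).mean()
        costs.append((error ** 2).mean() / 2)
    return theta0, theta1, costs

t0, t1, costs = gradient_descent(x, y, alpha=0.05)
print("gradient descent estimates:", t0, t1)
print("cost at start / end:", costs[0], costs[-1])  # should decrease, then flatten

# Closed-form check: solve the same least-squares problem directly
A = np.column_stack([np.ones_like(x), x])
print("lstsq solution:            ", np.linalg.lstsq(A, y, rcond=None)[0])
```

If you raise alpha well past the stable range, the printed costs grow instead of shrinking, which is the overshooting behaviour described above.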
Since the expectation of residuals is 0, the sum of all the residual terms is zero. Note: N(μ, σ²) is the standard notation for a normal distribution having mean μ and variance σ². 9. How does multicollinearity affect the linear regression? Ans Multicollinearity occurs when some of the independent variables are highly correlated (positively or negatively) with each other. This multicollinearity causes a problem as it is against the basic assumption of linear regression. The presence of multicollinearity does not affect the predictive capability of the model. So, if you just want predictions, the presence of multicollinearity does not affect your output. However, if you want to draw some insights from the model and apply them in, let’s say, some business model, it may cause problems. One of the major problems caused by multicollinearity is that it leads to incorrect interpretations and provides wrong insights. The coefficients of linear regression suggest the mean change in the target value if a feature is changed by one unit. So, if multicollinearity exists, this does not hold true as changing one feature will lead to changes in the correlated variable and consequent changes in the target variable. This leads to wrong insights and can produce hazardous results for a business. A highly effective way of dealing with multicollinearity is the use of VIF (Variance Inflation Factor). The higher the value of VIF for a feature, the more linearly correlated that feature is. Simply remove the feature with a very high VIF value and re-train the model on the remaining dataset. In-demand Machine Learning Skills Artificial Intelligence Courses Tableau Courses NLP Courses Deep Learning Courses 10. What is the normal form (equation) of linear regression? When should it be preferred to the gradient descent method? The normal equation for linear regression is β = (XᵀX)⁻¹XᵀY. Here, Y = βᵀX is the model for the linear regression, Y is the target or dependent variable, β is the vector of the regression coefficients, which is arrived at using the normal equation, and X is the feature matrix containing all the features as the columns. Note here that the first column in the X matrix consists of all 1s. This is to incorporate the offset value for the regression line. Comparison between gradient descent and the normal equation: Gradient descent needs hyper-parameter tuning for alpha (the learning rate); the normal equation has no such need. Gradient descent is an iterative process; the normal equation is non-iterative. Gradient descent has O(kn²) time complexity; the normal equation has O(n³) time complexity due to the evaluation of XᵀX. Gradient descent is preferred when n is extremely large; the normal equation becomes quite slow for large values of n. Here, ‘k’ is the maximum number of iterations for gradient descent, and ‘n’ is the total number of data points in the training set. Clearly, if we have large training data, the normal equation is not preferred. For small values of ‘n’, the normal equation is faster than gradient descent. What is Machine Learning and Why it matters 11. You run your regression on different subsets of your data, and in each subset, the beta value for a certain variable varies wildly. What could be the issue here? This case implies that the dataset is heterogeneous. So, to overcome this problem, the dataset should be clustered into different subsets, and then separate models should be built for each cluster. Another way to deal with this problem is to use non-parametric models, such as decision trees, which can deal with heterogeneous data quite efficiently. 12.
Your linear regression doesn’t run and communicates that there is an infinite number of best estimates for the regression coefficients. What could be wrong? This condition arises when there is a perfect correlation (positive or negative) between some variables. In this case, there is no unique value for the coefficients, and hence, the given condition arises. 13. What do you mean by adjusted R2? How is it different from R2? Adjusted R2, just like R2, is a representative of the number of points lying around the regression line. That is, it shows how well the model is fitting the training data. The formula for adjusted R2  is — Here, n is the number of data points, and k is the number of features. One drawback of R2 is that it will always increase with the addition of a new feature, whether the new feature is useful or not. The adjusted R2 overcomes this drawback. The value of the adjusted R2 increases only if the newly added feature plays a significant role in the model. 14. How do you interpret the residual vs fitted value curve? The residual vs fitted value plot is used to see whether the predicted values and residuals have a correlation or not. If the residuals are distributed normally, with a mean around the fitted value and a constant variance, our model is working fine; otherwise, there is some issue with the model. The most common problem that can be found when training the model over a large range of a dataset is heteroscedasticity(this is explained in the answer below). The presence of heteroscedasticity can be easily seen by plotting the residual vs fitted value curve. 15. What is heteroscedasticity? What are the consequences, and how can you overcome it? A random variable is said to be heteroscedastic when different subpopulations have different variabilities (standard deviation). The existence of heteroscedasticity gives rise to certain problems in the regression analysis as the assumption says that error terms are uncorrelated and, hence, the variance is constant. The presence of heteroscedasticity can often be seen in the form of a cone-like scatter plot for residual vs fitted values. One of the basic assumptions of linear regression is that heteroscedasticity is not present in the data. Due to the violation of assumptions, the Ordinary Least Squares (OLS) estimators are not the Best Linear Unbiased Estimators (BLUE). Hence, they do not give the least variance than other Linear Unbiased Estimators (LUEs). There is no fixed procedure to overcome heteroscedasticity. However, there are some ways that may lead to a reduction of heteroscedasticity. They are — Logarithmising the data: A series that is increasing exponentially often results in increased variability. This can be overcome using the log transformation. Using weighted linear regression: Here, the OLS method is applied to the weighted values of X and Y. One way is to attach weights directly related to the magnitude of the dependent variable. How does Unsupervised Machine Learning Work? 16. What is VIF? How do you calculate it? Variance Inflation Factor (VIF) is used to check the presence of multicollinearity in a dataset. It is calculated as—  Here, VIFj  is the value of VIF for the jth variable, Rj2 is the R2 value of the model when that variable is regressed against all the other independent variables. If the value of VIF is high for a variable, it implies that the R2  value of the corresponding model is high, i.e. other independent variables are able to explain that variable. 
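The VIF computation described in question 16 can be sketched with statsmodels; the column names and data below are invented purely for illustration:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Illustrative data: x3 is almost a linear combination of x1 and x2,
# so the VIF values for the correlated columns should come out high.
rng = np.random.default_rng(1)
df = pd.DataFrame({"x1": rng.normal(size=500), "x2": rng.normal(size=500)})
df["x3"] = 0.7 * df["x1"] + 0.3 * df["x2"] + rng.normal(scale=0.05, size=500)

X = add_constant(df)  # VIF is conventionally computed with an intercept present
for i, col in enumerate(X.columns):
    if col == "const":
        continue
    print(col, round(variance_inflation_factor(X.values, i), 2))
```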
In simple terms, the variable is linearly dependent on some other variables. 17. How do you know that linear regression is suitable for any given data? To see if linear regression is suitable for any given data, a scatter plot can be used. If the relationship looks linear, we can go for a linear model. But if it is not the case, we have to apply some transformations to make the relationship linear. Plotting the scatter plots is easy in case of simple or univariate linear regression. But in case of multivariate linear regression, two-dimensional pairwise scatter plots, rotating plots, and dynamic graphs can be plotted. 18. How is hypothesis testing used in linear regression? Hypothesis testing can be carried out in linear regression for the following purposes: To check whether a predictor is significant for the prediction of the target variable. Two common methods for this are — By the use of p-values: If the p-value of a variable is greater than a certain limit (usually 0.05), the variable is insignificant in the prediction of the target variable. By checking the values of the regression coefficient: If the value of regression coefficient corresponding to a predictor is zero, that variable is insignificant in the prediction of the target variable and has no linear relationship with it. To check whether the calculated regression coefficients are good estimators of the actual coefficients.   19. Explain gradient descent with respect to linear regression. Gradient descent is an optimisation algorithm. In linear regression, it is used to optimise the cost function and find the values of the βs (estimators) corresponding to the optimised value of the cost function. Gradient descent works like a ball rolling down a graph (ignoring the inertia). The ball moves along the direction of the greatest gradient and comes to rest at the flat surface (minima). Mathematically, the aim of gradient descent for linear regression is to find the solution of ArgMin J(Θ0,Θ1), where J(Θ0,Θ1) is the cost function of the linear regression. It is given by —   Here, h is the linear hypothesis model, h=Θ0 + Θ1x, y is the true output, and m is the number of the data points in the training set. Gradient Descent starts with a random solution, and then based on the direction of the gradient, the solution is updated to the new value where the cost function has a lower value. The update is: Repeat until convergence 20. How do you interpret a linear regression model? A linear regression model is quite easy to interpret. The model is of the following form: The significance of this model lies in the fact that one can easily interpret and understand the marginal changes and their consequences. For example, if the value of x0 increases by 1 unit, keeping other variables constant, the total increase in the value of y will be βi. Mathematically, the intercept term (β0) is the response when all the predictor terms are set to zero or not considered. These 6 Machine Learning Techniques are Improving Healthcare 21. What is robust regression? A regression model should be robust in nature. This means that with changes in a few observations, the model should not change drastically. Also, it should not be much affected by the outliers. A regression model with OLS (Ordinary Least Squares) is quite sensitive to the outliers. To overcome this problem, we can use the WLS (Weighted Least Squares) method to determine the estimators of the regression coefficients. 
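For the p-value check mentioned in question 18, here is a minimal statsmodels sketch on synthetic data, using the conventional 0.05 cut-off from the text:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
x1 = rng.normal(size=n)          # genuinely related to y
x2 = rng.normal(size=n)          # pure noise, should come out insignificant
y = 4.0 + 2.5 * x1 + rng.normal(scale=1.0, size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
result = sm.OLS(y, X).fit()

# p-values for [intercept, x1, x2]; x2's should exceed 0.05 most of the time
print(result.pvalues)
print(result.summary())
```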
Here, less weights are given to the outliers or high leverage points in the fitting, making these points less impactful. 22. Which graphs are suggested to be observed before model fitting? Before fitting the model, one must be well aware of the data, such as what the trends, distribution, skewness, etc. in the variables are. Graphs such as histograms, box plots, and dot plots can be used to observe the distribution of the variables. Apart from this, one must also analyse what the relationship between dependent and independent variables is. This can be done by scatter plots (in case of univariate problems), rotating plots, dynamic plots, etc. 23. What is the generalized linear model? The generalized linear model is the derivative of the ordinary linear regression model. GLM is more flexible in terms of residuals and can be used where linear regression does not seem appropriate. GLM allows the distribution of residuals to be other than a normal distribution. It generalizes the linear regression by allowing the linear model to link to the target variable using the linking function. Model estimation is done using the method of maximum likelihood estimation. 24. Explain the bias-variance trade-off. Bias refers to the difference between the values predicted by the model and the real values. It is an error. One of the goals of an ML algorithm is to have a low bias. Variance refers to the sensitivity of the model to small fluctuations in the training dataset. Another goal of an ML algorithm is to have low variance. For a dataset that is not exactly linear, it is not possible to have both bias and variance low at the same time. A straight line model will have low variance but high bias, whereas a high-degree polynomial will have low bias but high variance. There is no escaping the relationship between bias and variance in machine learning. Decreasing the bias increases the variance. Decreasing the variance increases the bias. So, there is a trade-off between the two; the ML specialist has to decide, based on the assigned problem, how much bias and variance can be tolerated. Based on this, the final model is built. 25. How can learning curves help create a better model? Learning curves give the indication of the presence of overfitting or underfitting. In a learning curve, the training error and cross-validating error are plotted against the number of training data points. A typical learning curve looks like this: If the training error and true error (cross-validating error) converge to the same value and the corresponding value of the error is high, it indicates that the model is underfitting and is suffering from high bias. 26. Recognize the differences between machine learning’s regression and classification. Classification vs. Regression in Machine Learning: Objective: Classification: Focuses on predicting the category or class labels of new data points. Regression: Aims to predict a continuous quantity or numeric value for new data. Output: Classification: Outputs discrete values representing class labels (e.g., spam or not spam). Regression: Outputs continuous values, such as predicting house prices or stock prices. Use Cases: Classification: Commonly used in tasks like image recognition, sentiment analysis, or spam filtering. Regression: Applied in scenarios like predicting sales, temperature, or any numeric outcome. Algorithms: Classification: Algorithms include Decision Trees, Support Vector Machines, and Neural Networks. 
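The learning-curve idea from question 25 can be sketched with scikit-learn's learning_curve helper; the regression task below is synthetic and only meant to show the shape of the output:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import learning_curve

X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)

train_sizes, train_scores, val_scores = learning_curve(
    LinearRegression(), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5, scoring="neg_mean_squared_error",
)

# If both curves converge to a similarly poor score, suspect underfitting (high bias);
# a persistent gap between them suggests overfitting (high variance).
for size, tr, va in zip(train_sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"n={size:4.0f}  train MSE={-tr:8.1f}  cv MSE={-va:8.1f}")
```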
Regression: Algorithms encompass Linear Regression, Decision Trees, and Random Forests. Evaluation: Classification: Evaluated using metrics like accuracy, precision, and recall. Regression: Assessed using metrics like Mean Squared Error (MSE) or Mean Absolute Error (MAE). 27. What is Confusion Matrix? It is one of the most common and interesting machine-learning interview questions. Here is its simple answer. Definition: A Confusion Matrix is a table used in classification to evaluate the performance of a machine learning model. It clearly summarizes the model’s predictions versus the actual outcomes. Components: True Positives (TP): Instances correctly predicted as positive. True Negatives (TN): Instances correctly predicted as negative. False Positives (FP): Instances incorrectly predicted as positive. False Negatives (FN): Instances incorrectly predicted as negative. Purpose: It provides a deeper understanding of a model’s effectiveness by breaking down correct and incorrect predictions. Metrics: Derived metrics include accuracy, precision, recall, and F1-score, offering a nuanced assessment of model performance. 28.  Explain Logistic Regression Purpose: Logistic Regression is a statistical method used for binary classification problems, predicting the probability of an instance belonging to a particular class. Output: It produces probabilities using the logistic function, ensuring values between 0 and 1. Algorithm: Utilizes the logistic function (sigmoid) to model the relationship between the independent variables and the dependent binary outcome. Decision Boundary: Establishes a decision boundary, classifying instances based on the calculated probabilities. Application: Widely applied in predicting outcomes like whether an email is spam or not, disease diagnosis, and credit risk assessment. Linear Relationship: Assumes a linear relationship between input features and the log odds of the predicted outcome. 29. Why are Validation and Test Datasets Needed? This is a must-know topic in machine learning interview preparation. Importance of Validation and Test Datasets: Training Dataset: Purpose: Used for training machine learning models by exposing them to labeled examples. Validation Dataset: Purpose: Essential for tuning model hyperparameters and preventing overfitting. Test Dataset: Purpose: Provides an unbiased evaluation of a model’s performance on new, unseen data. Generalization Check: Validation: Ensures the model generalizes well beyond the training set. Test: Verifies the model’s generalization to entirely new, unseen data. Model Selection: Validation: Guides the selection of the best-performing model during training. Test: Confirms the chosen model’s effectiveness on independent data, validating its real-world applicability. Avoiding Overfitting: Validation: Guards against overfitting by fine-tuning the model based on its performance on a separate dataset. Test: Provides a final checkpoint to confirm the model’s robustness and suitability for deployment. 30. What is Dimensionality Reduction? Definition: Purpose: Dimensionality Reduction is a technique in machine learning aimed at reducing the number of input features or variables in a dataset while preserving essential information. Curse of Dimensionality: Issue: Mitigates the “curse of dimensionality,” where high-dimensional data can lead to increased computational complexity and overfitting. Techniques: Principal Component Analysis (PCA): A linear technique that transforms data into a lower-dimensional space. 
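Tying questions 27 and 28 together, here is a minimal scikit-learn sketch that fits a logistic regression on synthetic data and prints its confusion matrix:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Rows = actual class, columns = predicted class: [[TN, FP], [FN, TP]]
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))  # precision, recall, F1 per class
```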
t-Distributed Stochastic Neighbor Embedding (t-SNE): Non-linear method suitable for visualizing high-dimensional data in lower-dimensional space. Benefits: Computational Efficiency: Reduces computational load and memory requirements. Enhanced Model Performance: Addresses multicollinearity and improves model generalization. Applications: Image Processing: Simplifies image features. Text Mining: Condenses text data dimensions. Feature Engineering: Aids in feature selection and simplifies model interpretation. 31. What is the meaning of Parametric and Non-parametric Models? Parametric Models: Definition: Parametric models assume a specific functional form for the underlying data distribution. Characteristics: They have a fixed number of parameters that remain constant regardless of the size of the dataset. Examples: Linear Regression, Logistic Regression. Non-parametric Models: Definition: Non-parametric models make no assumptions about the underlying data distribution. Characteristics: They adapt and grow in complexity with the dataset size. Examples: k-nearest Neighbors (KNN), Decision Trees, and Support Vector Machines (SVM). Flexibility: Parametric: Constrained by assumed distribution, limiting flexibility. Non-parametric: Highly flexible, suitable for diverse data patterns. Data Size Impact: Parametric: Stable with a fixed set of parameters, less affected by data size. Non-parametric: Adaptability makes them more suitable for varying dataset sizes. Assumptions: Parametric: Requires assumptions about data distribution. Non-parametric: Free from distribution assumptions, providing more flexibility for various datasets. 32. What is Cross-validation in Machine Learning? You can expect this question in a typical machine learning interview. The answer is explained below. Definition: Purpose: Cross-validation is a resampling technique used to assess a machine learning model’s performance by dividing the dataset into subsets for training and evaluation. K-Fold Cross-validation: Procedure: Divide the dataset into K folds, using K-1 folds for training and the remaining one for validation in each iteration. Benefits: Reduced Bias: Provides a more robust estimate of model performance, reducing bias introduced by a single train-test split. Stratified Cross-validation: Application: Ensures that each fold maintains the proportion of classes present in the original dataset, which is particularly useful for imbalanced datasets. Leave-One-Out Cross-validation (LOOCV): Special Case: When K equals the number of instances in the dataset, a single-fold validation is created. Model Selection: Use: Aids in selecting the best-performing model and helps prevent overfitting or underfitting. 33. What is Entropy in Machine Learning? Definition: Information Measure: Entropy is a measure of uncertainty or disorder in a set of data, often used in the context of decision trees and information theory. Information Gain: Concept: In decision tree algorithms, entropy is used to calculate information gain, representing the reduction in uncertainty achieved by splitting a dataset based on a particular feature. Calculation: Formula: Entropy is mathematically expressed as the negative sum of the probabilities of each class multiplied by the logarithm of the probability. Low Entropy: Interpretation: Low entropy indicates high certainty or homogeneity in a dataset. Decision Trees: Role: Entropy guides decision tree splits, favoring features that maximize information gain, leading to more accurate and efficient tree structures. 
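A quick sketch of the k-fold and stratified k-fold cross-validation described in question 32, assuming scikit-learn and an imbalanced synthetic dataset:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
model = LogisticRegression(max_iter=1000)

# Plain 5-fold CV: folds may not preserve the 90/10 class ratio
print(cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0)))

# Stratified 5-fold CV: each fold keeps roughly the original class proportions
print(cross_val_score(model, X, y, cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0)))
```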
Entropy Reduction: Objective: Minimizing entropy through optimal feature selection contributes to improved decision-making and model performance. 34. What is Epoch in Machine Learning? Definition: Temporal Unit: An epoch refers to one complete pass through the entire training dataset by a machine learning model during training. Training Iteration: Purpose: Models learn from the entire dataset in each epoch, adjusting weights and biases to minimize the loss function. Batch Processing: Subdivisions: In deep learning, epochs are composed of smaller batches, allowing for more efficient updates of model parameters. Convergence Check: Monitoring: Researchers often monitor training performance over multiple epochs to assess convergence and prevent overfitting. Hyperparameter: Tuning: The number of epochs is a hyperparameter that requires tuning to optimize model performance without unnecessary computational costs. Early Stopping: Strategy: Training may be halted early if further epochs don’t significantly improve performance, preventing prolonged computation without substantial gains. 35. What are Type I and Type II Errors? Type I Error (False Positive): Definition: Type I error occurs when a null hypothesis is incorrectly rejected, indicating a false positive result. Significance: Often denoted by the symbol α, it represents the level of significance or the probability of making such an error. Type II Error (False Negative): Definition: Type II error happens when a false null hypothesis is not rejected, leading to a false negative outcome. Power: Represented by the symbol β, it is correlated with the statistical power of a test, indicating the probability of accepting a false null hypothesis. Trade-off: Balancing Act: In hypothesis testing, there is a trade-off between Type I and Type II errors; reducing one typically increases the other. Critical in Hypothesis Testing: Importance: Understanding and minimizing Type I and Type II errors are crucial in designing robust statistical tests and ensuring the validity of results. 36. How is a Random Forest different from a Gradient Boosting Machine (GBM)? Ensemble Learning: Random Forest: It is an ensemble learning method that builds multiple decision trees and merges their predictions through averaging or voting. GBM: Gradient Boosting Machine is another ensemble method that constructs decision trees sequentially, with each tree correcting the errors of the previous ones. Tree Construction: Random Forest: Trees are constructed independently, and the final prediction is an aggregation of individual tree predictions. GBM: Trees are built sequentially, focusing on reducing the errors of the previous models. Training Process: Random Forest: Training is parallelized as trees are constructed independently. GBM: Training is sequential, with each tree attempting to improve upon the errors of the ensemble. Overfitting: Random Forest: Less prone to overfitting due to the averaging effect of multiple trees. GBM: More sensitive to overfitting, especially if the number of trees is not properly tuned. Handling Outliers: Random Forest: Robust to outliers as individual trees might be affected, but the ensemble is less likely to be. GBM: Sensitive to outliers, as subsequent trees may attempt to correct errors introduced by outliers in earlier trees. 37. Differentiate between Sigmoid and Softmax Functions. This is one of the popular machine learning coding interview questions. I have explained the differences between the two functions in a simple manner. Read below. 
Purpose: Sigmoid: Primarily used for binary classification, providing independent probabilities for each class. Softmax: Applied in multi-class classification, offering a probability distribution over multiple classes. Output Range: Sigmoid: Outputs individual probabilities between 0 and 1, suitable for binary decisions. Softmax: Generates a normalized probability distribution across classes, ensuring the sum equals 1. Application: Sigmoid: Common in binary classification neural networks. Softmax: Ideal for neural networks handling multiple mutually exclusive classes. Independence: Sigmoid: Assumes instances can belong to multiple classes. Softmax: Assumes instances belong to a single exclusive class. Activation Function: Sigmoid: Used in the output layer for binary classification. Softmax: Employed in the output layer for multi-class classification. Decision Boundary: Sigmoid: Binary decisions based on a threshold (e.g., 0.5). Softmax: Assigns instances to the class with the highest probability. 38. What are the Two Main Types of Filtering in Machine Learning? Two Main Types of Filtering in Machine Learning: Temporal Filtering: Purpose: Focuses on analyzing and processing data over time. Application: Commonly used in time-series analysis and forecasting tasks. Examples: Moving averages exponential smoothing. Frequency Filtering: Purpose: Concentrates on the frequency components within data. Application: Applied in signal processing, image processing, and feature extraction. Examples: Fourier Transform, wavelet analysis. 39. What is Ensemble Learning? Definition: Ensemble Learning involves combining predictions from multiple machine learning models to enhance overall performance and accuracy. Key Components: Base Models: Ensemble methods utilize diverse base models, such as decision trees or neural networks. Voting or Weighting: Combining predictions through voting (majority) or assigning weights based on model performance. Advantages: Improved Accuracy: Ensemble methods often outperform individual models, capturing a more comprehensive understanding of complex patterns. Robustness: They are less prone to overfitting and generalizing well to diverse datasets. Types of Ensemble Learning: Bagging (Bootstrap Aggregating): Parallel training of multiple models on bootstrapped subsets. Boosting: Sequential training where models focus on correcting errors of predecessors. 40. What is the difference between the Standard scalar and the MinMax Scaler? Scaling Method: Standard Scaler: Utilizes z-score normalization, transforming data to have a mean of 0 and a standard deviation of 1. MinMax Scaler: Scales data to a specific range, usually between 0 and 1, maintaining the relative distances between values. Effect on Outliers: Standard Scaler: Sensitive to outliers, as it considers the mean and standard deviation. MinMax Scaler: Less sensitive to outliers, as it focuses on the range of values. Output Range: Standard Scaler: May produce values outside the 0 to 1 range. MinMax Scaler: Constricts values to the specified range. Use Cases: Standard Scaler: Suitable when the distribution of features is approximately Gaussian. MinMax Scaler: Effective when features have varying scales, and a specific range is desired. 41. How does tree splitting take place? Feature Selection: Decision Point: Identify the feature that best splits the dataset based on certain criteria, commonly using measures like Gini impurity or information gain. 
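Coming back to question 40, here is a small sketch contrasting StandardScaler and MinMaxScaler on the same toy column; the outlier value is contrived to show their differing sensitivity:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# One feature with an outlier at 500 to show the differing behaviour
x = np.array([[1.0], [2.0], [3.0], [4.0], [500.0]])

std = StandardScaler().fit_transform(x)  # mean 0, unit variance; values can fall outside [0, 1]
mm = MinMaxScaler().fit_transform(x)     # squeezed into [0, 1]; the outlier pins the range

print("StandardScaler:", std.ravel().round(2))
print("MinMaxScaler:  ", mm.ravel().round(3))
```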
Splitting Criteria: Threshold Determination: Establish a threshold value for the selected feature that optimally divides the data into subsets. Categorical Features: For categorical features, split based on distinct categories. Evaluation: Criterion Evaluation: Assess the effectiveness of the split using the chosen impurity measure. Best Split: Choose the split that minimizes impurity or maximizes information gain. Recursive Process: Repeat: Continue recursively splitting each subset until a stopping condition is met, such as a predefined tree depth or a minimum number of samples per leaf. 42. What is the F1-score, and How Is It Used? Calculation: Precision and Recall: The F1-score is the harmonic mean of precision and recall, combining both metrics into a single value. Formula: F1 = 2 * (Precision * Recall) / (Precision + Recall). Balanced Metric: Harmonizes Precision and Recall: This is particularly useful when there is an uneven class distribution, ensuring a balanced evaluation of a classifier’s performance. Application: Binary Classification: Commonly applied in scenarios where there are two classes (positive and negative). Imbalanced Datasets: Suitable for assessing models on datasets where one class significantly outnumbers the other. 43. What is Overfitting, and how can it be avoided? Definition: Issue: Overfitting occurs when a model learns the training data too well, capturing noise and patterns that don’t generalize to new, unseen data. Causes: Complex Models: Overly complex models, such as deep neural networks, are prone to overfitting. Small Datasets: Limited training data increases the likelihood of the model memorizing noise. Avoidance Strategies: Regularization: Introduce penalties for complex model structures to discourage overfitting. Cross-Validation: Evaluate model performance on multiple subsets of the data to ensure generalization. Feature Selection: Choose relevant features and avoid unnecessary complexity. Data Augmentation: Increase dataset size through transformations to expose the model to diverse examples. 44. What is the Hypothesis in Machine Learning? Definition: Assumption: In machine learning, a hypothesis is an assumption or conjecture about the relationship between input features and the target variable. Representation: Function Form: Often represented as a mathematical function that maps input features to the predicted output. Training Process: Adjustment: During training, the model iteratively adjusts its hypothesis based on the error between predicted and actual outcomes. Example: Linear Regression: In linear regression, the hypothesis might be a linear equation expressing the relationship between input features and the target variable. 45. What is the Variance Inflation Factor? Definition: Multicollinearity Measure: VIF is a statistical measure that quantifies the extent to which the variance of an estimated regression coefficient increases when predictors are highly correlated. Calculation: Formula: VIF is calculated for each predictor in a regression model as the ratio of the variance of the model with all predictors to the variance of a model with only that predictor. Interpretation: High VIF: Values exceeding 10 indicate significant multicollinearity, suggesting that predictors may be too correlated. Impact: Effects: High VIF values can lead to unstable and less reliable coefficient estimates in regression models. 
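For question 42, a tiny sketch that computes the F1-score both from the formula above and with scikit-learn, on made-up labels:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]

p = precision_score(y_true, y_pred)   # TP / (TP + FP) = 2 / 3
r = recall_score(y_true, y_pred)      # TP / (TP + FN) = 2 / 4
print("manual F1 :", 2 * p * r / (p + r))
print("sklearn F1:", f1_score(y_true, y_pred))
```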
Machine Learning Interviews and How to Ace Them Machine Learning Interviews can vary according to the types or categories, for instance a few recruiters ask many Linear Regression interview questions. When going for the role of Machine Learning Engineer interview, they can specialise in categories like Coding, Research, Case Study, Project Management, Presentation, System Design, and Statistics. We will focus on the most common types of categories and how to prepare for them.  1. Coding  Coding and programming are significant components of a machine learning interview and are frequently used to screen applicants. To do well in these interviews, you need to have solid programming abilities. Coding interviews typically run 45 to 60 minutes and are made up of only two questions. The interviewer poses the topic and anticipates that the applicant would address it in the least amount of time possible. How to prepare – You can prepare for these interviews by having a good understanding of the data structures, complexities of time and space, management skills, and the ability to understand and resolve a problem. upGrad has a great software engineering course that can help you enhance your coding skills and ace that interview.  In machine learning interviews, coding and programming abilities are essential and frequently utilized to evaluate candidates. You’ll be given coding issues to effectively solve in a constrained amount of time throughout these interviews. Strong programming skills, data structure expertise, an understanding of time and space complexities, and problem-solving talents are necessary to succeed in these interviews. Consider enrolling in a software engineering course, such as the one provided by upGrad, to prepare for coding interviews. It can help you improve your coding abilities and get ready for the coding problems that will come up during the interview.  During these interviews, your knowledge of machine learning principles will be carefully assessed. Questions may encompass subjects like convolutional layers, recurrent neural networks, generative adversarial networks, and speech recognition, depending on the employment needs. 2. Machine Learning  Your understanding of machine learning will be evaluated through interviews. Convolutional layers, recurrent neural networks, generative adversary networks, speech recognition, and other topics may be covered depending on the employment needs. How to prepare – To be able to ace this interview, you must ensure that you have a thorough understanding of the job roles and responsibilities. This will help you identify the specifications of ML that you must study. However, if you do not come across any specifications, you must deeply understand the basics. An in-depth course in ML that upGrad provides can help you with that. You can also study the latest articles on ML and AI to understand their latest trends and you can incorporate them on a regular basis.  3. Screening This interview is somewhat informal and typically one of the initial points of the interview. A prospective employer often handles it. This interview’s major goal is to provide the applicant with a sense of the business, the role, and the duties. In a more informal atmosphere, the candidate is also questioned about their past to determine whether their area of interest matches the position. How to prepare – This is a very non-technical part of the interview. All this required is your honesty and the basics of your specialization in Machine Learning.  
In the initial stage of the interview process, the screening interview is frequently casual. Its main objective is to give the applicant an overview of the organization, the position, and the duties. To determine whether a candidate is a good fit for the role, questions about their experience and hobbies may be asked. Being truthful about your history and showcasing your general and machine learning-specific knowledge are important aspects of screening interview preparation. 4. System Design Such interviews test a person’s capacity to create a fully scalable solution from beginning to finish. The majority of engineers are so preoccupied with an issue that they frequently overlook the wider picture. A system design interview calls for an understanding of numerous elements that combine to produce a solution. These elements include the front-end layout, the load balancer, the cache, and more. An effective and scalable end-to-end system is easier to develop when these issues are well understood. How to prepare – Understand the concepts and components of the system design project. Use real-life examples to explain the structure to your interviewer for a better understanding of the project.  Interviews for system design assess a candidate’s capacity to create a fully scalable solution from scratch. It involves knowledge of numerous elements that contribute to a scalable end-to-end system, including front-end layout, load balancing, caching, and more. Learn the terms and elements of system design projects to perform well in a system design interview. To help the interviewer better comprehend your approach, use examples from real-world situations while describing the structure you propose. If there is a significant gap between the converging values of the training and cross-validation errors, i.e. the cross-validating error is significantly higher than the training error, it suggests that the model is overfitting the training data and is suffering from a high variance. Popular AI and ML Blogs & Free Courses IoT: History, Present & Future Machine Learning Tutorial: Learn ML What is Algorithm? Simple & Easy Robotics Engineer Salary in India : All Roles A Day in the Life of a Machine Learning Engineer: What do they do? What is IoT (Internet of Things) Permutation vs Combination: Difference between Permutation and Combination Top 7 Trends in Artificial Intelligence & Machine Learning Machine Learning with R: Everything You Need to Know AI & ML Free Courses Introduction to NLP Fundamentals of Deep Learning of Neural Networks Linear Regression: Step by Step Guide Artificial Intelligence in the Real World Introduction to Tableau Case Study using Python, SQL and Tableau If there is a significant gap between the converging values of the training and cross-validating errors, i.e. the cross-validating error is significantly higher than the training error, it suggests that the model is overfitting the training data and is suffering from a high variance. Machine Learning Engineers: Myths vs. Realities That’s the end of the first section of this series. Stick around for the next part of the series which consist of questions based on Logistic Regression. Feel free to post your comments. Co-authored by – Ojas Agarwal You can check our Executive PG Programme in Machine Learning & AI, which provides practical hands-on workshops, one-to-one industry mentor, 12 case studies and assignments, IIIT-B Alumni status, and more. 

by Thulasiram Gunipati


10 Sep 2023

What on Earth is Simpson’s Paradox? How Does it Affect Data?
Blogs
6,422 views

What on Earth is Simpson’s Paradox? How Does it Affect Data?

Simpson’s paradox is a phenomenon in probability and statistics, in which a trend appears in different groups of data, but disappears or reverses when these groups are combined. You need to be very careful while calculating averages or pooling data from different sectors. It is always better to check whether the pooled data tell the same story or a different one from that of the non-aggregated data. If the story is different, then there is a high probability of Simpson’s paradox. A lurking variable must be affecting the direction of the explanatory and target variables. Learn data science courses online from the World’s top Universities. Earn Executive PG Programs, Advanced Certificate Programs, or Masters Programs to fast-track your career. Historical Background Simpson’s Paradox was discovered in the early twentieth century, with contributions from various statisticians and scholars. In 1951, Edward H. Simpson, a British statistician, found one of the earliest prominent examples. However, the paradox itself had been observed in various forms even before Simpson’s work. Simpsons Paradox refers to a phenomenon in which an apparent trend or relationship in aggregated data reverses or disappears when the data is disaggregated into subgroups. If not fully understood and accounted for, this surprising discovery might lead to incorrect findings. Consider the famous Simpson’s Paradox example to gain a better understanding of the dilemma it presents. Assume two departments, A and B, at a university and the goal is to compare their respective acceptance rates of male and female candidates. On a surface analysis of the aggregated data, it appears that Department A has a higher admittance rate for both males and females than Department B; however, when we break down the data by gender, we see that while Department A has a higher admittance rate for both genders, Department B actually has a lower rate for each gender combined. This trend reversal at the subgroup level is an example of Simpson’s Paradox. Real-world Applications Simpson’s Paradox has far-reaching implications and has been observed in various domains, including social sciences, healthcare, education, economics, and sports. Understanding this Simpson’s paradox in data science is crucial for avoiding misinterpretation of data and making accurate decisions. In the field of healthcare, Simpson’s Paradox has been encountered in studies evaluating the effectiveness of treatments. For instance, a drug may show positive effects overall but fail to demonstrate efficacy when the data is analyzed based on different patient characteristics or disease severity levels. This highlights the importance of considering subgroup analyses to gain a comprehensive understanding of treatment outcomes. In economics, Simpson’s Paradox can occur when analyzing income inequality across different regions or demographic groups. Aggregated data may suggest a decreasing income gap, but disaggregating the data could reveal that inequality actually worsens within each subgroup. This emphasizes the need to examine data from various perspectives to avoid overlooking underlying patterns. Preventive Measures To circumvent Simpson’s Paradox and guarantee precise study and analysis, researchers and investigators ought to take several preventive steps. First and foremost, it is essential to perform a subgroup analysis. By closely observing the data at the subgroup level, subtleties in the underlying connections can be exposed. 
This allows for a more astute understanding of the data and helps uncover potential confounding variables or interaction effects that can contribute to the paradox. Additionally, the sample size must be taken into account. Adequate sample sizes within subgroups are essential to obtain dependable and statistically substantial outcomes. Insufficient sample sizes can cause illogical determinations and exacerbate the odds of experiencing Simpson’s Paradox. Contextual data is another significant factor to bear in mind. Understanding the exact setting in which the data was collected can help recognize conceivable predispositions and confounding factors. This data can then be incorporated into the analysis to offer a more exact elucidation of the discoveries. Lastly, by utilizing progressed factual techniques, such as multidimensional analysis and causal modeling, can give assistance to untangle the real connections between variables. These techniques permit distinguishing and controlling confounding factors, offering a stronger analysis. By executing these preventive measures, researchers and analysts can minimize the danger of experiencing Simpson’s Paradox and enhance the accuracy and dependability of their discoveries. It is essential to approach data investigation with alertness and to consider the potential effect of subgroup results to guarantee logical choices in view of exact perceptions of the data. Let us understand Simpson’s paradox with the help of an another example: In 1973, a court case was registered against the University of California, Berkeley. The reason behind the case was gender bias during graduate admissions. Here, we will generate synthetic data to explain what really happened. Let’s assume the combined data for admissions in all departments is as follows Gender Applicants Admitted Admission Percentage Men 2,691 1,392 52% Women 1,835 789 43% If you observe the data carefully, you’ll see that 52% of the males were given admission, while only 43% of the women were admitted to the university. Clearly, the admissions favoured the men, and the women were not given their due. However, the case is not so simple as it appears from this information alone. Let’s now assume that there are two different categories of departments — ‘Hard’ (hard to get into) and ‘Easy’. Our learners also read: Learn Python Online for Free Let’s divide the combined data into these categories and see what happens Department Applied Admitted Admission Percentage Men Women Men Women Men Women Hard 780 1,266 200 336 26% 27% Easy 1,911 569 1,192 453 62% 80% Do you see any gender bias here? In the ‘Easy’ department, 62% of the men and 80% of the women got admission. Likewise, in the ‘Hard’ department, 26% of the men and 27% of the women got admission. Is there any bias here? Yes, there is. But, interestingly, the bias is not in favour of the men; it favours the women!!! If you combine this data, then an altogether different story emerges. A bias favouring the men becomes apparent. In statistics, this phenomenon is known as ‘Simpson’s paradox.’ But why does this paradox occur? Top Essential Data Science Skills to Learn SL. 
Top Essential Data Science Skills to Learn
SL. No | Top Data Science Skills to Learn
1 | Data Analysis Certifications | Inferential Statistics Certifications
2 | Hypothesis Testing Certifications | Logistic Regression Certifications
3 | Linear Regression Certifications | Linear Algebra for Analysis Certifications

Simpson’s paradox occurs when the effect of the explanatory variable on the target variable changes direction once you account for a lurking explanatory variable. In the above example, the lurking variable is the ‘department.’ In the ‘Easy’ department, far more men applied than women, while in the ‘Hard’ department, more women than men applied, so a larger share of the women’s applications went to the department that rejects most applicants. When this data is combined, it shows a visible bias towards male admissions, which is really non-existent at the department level. Now suppose you were a statistician for the Indian government and inspected a fighter plane that returned from the 1962 war with China. Inspecting the bullet holes on the aircraft’s surface, what would you recommend? Would you recommend strengthening the areas hit by bullets? The following is an excerpt from a StackExchange answer: “During World War II, Abraham Wald was a statistician for the U.S. government. He looked at the bombers that returned from missions and analysed the pattern of the bullet ‘wounds’ on the planes. He recommended that the Navy reinforce areas where the planes had no damage. Read our popular Data Science Articles Data Science Career Path: A Comprehensive Career Guide Data Science Career Growth: The Future of Work is here Why is Data Science Important? 8 Ways Data Science Brings Value to the Business Relevance of Data Science for Managers The Ultimate Data Science Cheat Sheet Every Data Scientists Should Have Top 6 Reasons Why You Should Become a Data Scientist A Day in the Life of Data Scientist: What do they do? Myth Busted: Data Science doesn’t need Coding Business Intelligence vs Data Science: What are the differences? upGrad’s Exclusive Data Science Webinar for you – Watch our Webinar on How to Build Digital & Data Mindset? https://cdn.upgrad.com/blog/webinar-on-building-digital-and-data-mindset.mp4   Why? We have selective effects at work. This sample suggests that damage inflicted on the observed areas could be withstood. Either the plane was never hit in the untouched areas — an unlikely proposition — or strikes to those parts were lethal. We care about the planes that went down, not just those that returned. Those that fell likely suffered an attack in a place that was untouched on those that survived.” In statistics, things are not as they appear on the surface. You need to be skeptical and look beyond the obvious during analyses. Maybe it’s time to read ‘Think Like a Freak’ or ‘How to Think Like Sherlock’. Let us know if you already have and what your thoughts are on the same!

by Thulasiram Gunipati

Calendar icon

14 Jun 2023

A Brilliant Future Scope of Machine Learning
Blogs
Views Icon

10503

A Brilliant Future Scope of Machine Learning

Machine learning is a constant form of silent evolution. We thought computers were the big deal that would allow us to work more efficiently; soon, machine learning was introduced to the picture, changing the discourse of our lives forever. The reshaping of the world started with teaching computers to do things for us, and now it has reached the stage where even that simple step is eliminated. It is no longer imperative for us to teach computers how to execute complex tasks like text translation or image recognition: instead, we build systems that let them learn to do it themselves. It’s as close to magic as the muggle community will ever reach! The exceptionally powerful form of machine learning being used today goes by the name “deep learning”. It builds complex mathematical structures called neural networks on vast quantities of data. Loosely modelled on how the human brain functions, neural networks were first proposed as early as the 1940s, but it is only in the past decade or so that computers have become powerful enough to make real use of them.

What exactly is Machine Learning? In general terms, machine learning is an application of Artificial Intelligence. Take the example of shopping online: have you ever been in a situation where the app or website started recommending products that are in some way associated with or similar to a purchase you made? If yes, then you have seen machine learning in action. Even the “bought together” combination of products is a byproduct of machine learning. This is how companies target their audience and divide people into various categories to serve them better and tailor the shopping experience to their browsing behaviour. Machine learning is, at its core, about making predictions from experience. It enables machines to make data-driven decisions, which is more efficient than explicitly programming them to carry out specific tasks. These algorithms are designed so that exposure to new data helps organisations learn and improve their strategies. The Future of Jobs

What is the future of Machine Learning? Improved cognitive services With the help of machine learning services like SDKs and APIs, developers are able to build and hone intelligent capabilities in their applications. This empowers machines to apply what they learn from the data they come across and carry out an array of tasks like image recognition, speech detection, and understanding of speech and dialect. Alexa is already talking to us, and our phones are already listening to our conversations: how else do you think the machine “wakes up” to run a Google search on 9/11 conspiracies for you? Those improved cognitive skills are something we could not have imagined a decade ago, yet here we are. The way machines engage humans is being constantly refined to serve and understand us better. We already spend so much time in front of screens that our mobiles have become an extension of us, and through cognitive learning, that is becoming literally true. Your machine learns all about you and then alters your results accordingly. No two people’s Google search results are the same: why? Cognitive learning.

The Rise of Quantum Computing “Quantum computing” sounds like something straight out of a science fiction movie, no? But it has become a genuine phenomenon.
Satya Nadella, the chief executive of Microsoft Corp., calls it one of the three technologies that will reshape our world. Quantum algorithms have the potential to transform and innovate the field of machine learning. They could process data at a much faster pace and accelerate the ability to draw insights and synthesize information. Heavy-duty computation will finally be done in a jiffy, saving a great deal of time and resources. The increased performance of machines will open many doorways and take this evolution to the next level. Something as basic as two numbers, 0 and 1, changed the way of the world; imagine what could be achieved if we ventured into a whole new realm of computing and physics. Join the AI & ML course online from the World’s top Universities – Masters, Executive Post Graduate Programs, and Advanced Certificate Program in ML & AI to fast-track your career.

Rise of Robots With machine learning on the rise, it is only natural that the medium gets a face: robots! The sophistication of machine learning is not a ‘small wonder’, if you know what I mean. Multi-agent learning, robot vision, and self-supervised learning will all be advanced through robotisation. Drones have already become commonplace and have even begun to replace human delivery workers. With the rapid speed at which technology is moving forward, even the sky is not the limit. Our childhood fantasies of living in the era of the Jetsons may soon become reality. The smallest of tasks will be automated, and human beings will have to do far less for themselves, because a bot will follow you like a shadow at all times.

Career Opportunities in the field? Now that you are aware of the reach of machine learning and how it can single-handedly change the course of the world, how can you become a part of it? Here are some job options that you can potentially think of opting for: Machine Learning Engineer – They are sophisticated programmers who develop the systems and machines that learn and apply knowledge without specific direction. Deep Learning Engineer – Similar to computer scientists, they specialise in using deep learning platforms to develop tasks related to artificial intelligence. Their main goal is to mimic and emulate brain functions. Data Scientist – Someone who extracts meaning from data and analyses and interprets it; the role requires statistical methods and tools alike. Computer Vision Engineer – They are software developers who create vision algorithms for recognising patterns in images. Machine learning is already changing the course of the world and will continue to do so in the coming decade. Let’s prepare eagerly for what the future holds. Let’s hope that machines do not get the bright idea of taking over the world, because not all of us are Arnold Schwarzenegger. Fingers crossed!

Importance of Machine Learning Machine learning is important as it gives a new perspective on customer trends and business patterns. Machine learning is being used by various big companies today, such as Facebook, Uber, Ola, Google, etc. It also drives business results, such as money- and time-saving ideas, and helps automate tasks that would otherwise be performed manually by an individual.

Use cases of machine learning Healthcare The technology of machine learning is highly useful in the field of healthcare. Natural language processing helps give accurate insights, and other machine learning applications include analysing CT scans, X-rays, ultrasounds, etc.
Healthcare is part of machine learning’s future scope because the technology helps redefine age-old processes. Banking and Finance Machine learning uses statistical patterns to make accurate predictions. The technology is also helpful in document analysis, fraud detection, KYC processing, high-frequency trading, etc. This is the future scope of machine learning that is reshaping the banking sector. Image recognition Another scope of machine learning is image recognition. It is used to detect and classify images across the internet; social networking sites such as Facebook use it to suggest tags for people in photos. Top Machine Learning and AI Courses Online Master of Science in Machine Learning & AI from LJMU Executive Post Graduate Programme in Machine Learning & AI from IIITB Advanced Certificate Programme in Machine Learning & NLP from IIITB Advanced Certificate Programme in Machine Learning & Deep Learning from IIITB Executive Post Graduate Program in Data Science & Machine Learning from University of Maryland To Explore all our certification courses on AI & ML, kindly visit our page below. Machine Learning Certification If you’re interested to learn more about machine learning, check out IIIT-B & upGrad’s Executive PG Programme in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms. Trending Machine Learning Skills AI Courses Tableau Certification Natural Language Processing Deep Learning AI Popular AI and ML Blogs & Free Courses IoT: History, Present & Future Machine Learning Tutorial: Learn ML What is Algorithm? Simple & Easy Robotics Engineer Salary in India : All Roles A Day in the Life of a Machine Learning Engineer: What do they do? What is IoT (Internet of Things) Permutation vs Combination: Difference between Permutation and Combination Top 7 Trends in Artificial Intelligence & Machine Learning Machine Learning with R: Everything You Need to Know AI & ML Free Courses Introduction to NLP Fundamentals of Deep Learning of Neural Networks Linear Regression: Step by Step Guide Artificial Intelligence in the Real World Introduction to Tableau Case Study using Python, SQL and Tableau

by Thulasiram Gunipati

Calendar icon

25 Mar 2023

33 Machine Learning Interview Questions & Answers – Logistic Regression
Blogs
Views Icon

23490

33 Machine Learning Interview Questions & Answers – Logistic Regression

Welcome to the second part of the series of commonly asked interview questions based on machine learning algorithms. We hope that the previous section on Linear Regression was helpful to you. Machine learning is a growing field: its demand is increasing, and the market is expected to grow very rapidly in the coming years. The demand for machine learning is high because of its vast applicability; there is practically no limit to its applications. With growing technology, the uses of machine learning are almost everywhere, from a simple switch to giant technologies. It is considered one of the highest-paying careers today. The average salary of a machine learning engineer is 7.5 LPA, and salaries range from 3.5 LPA to 21.0 LPA; they can be even higher, depending on experience, skill set, and upskilling history. (Source)

Logistic regression is a machine learning classification algorithm. It is a statistical analysis method used to predict a binary outcome. It predicts a dependent variable by analysing the relationship between one or more independent variables. It is about fitting an S-shaped curve to the data, as opposed to linear regression, which is about fitting a straight line to the data. Logistic regression is widely applicable and can be used to predict outcomes such as whether a political candidate will win or not, or whether a patient will have a heart attack or not. This is how to explain logistic regression in an interview. Let’s find the answers to questions on logistic regression:

1. What is a logistic function? What is the range of values of a logistic function?

f(z) = 1/(1 + e^(-z))

The values of a logistic function range from 0 to 1, while the values of z vary from -infinity to +infinity.

2. Why is logistic regression very popular?

Logistic regression is popular because it converts logits (log odds), which can range from -infinity to +infinity, into a range between 0 and 1. As the logistic function outputs the probability of occurrence of an event, it can be applied to many real-life scenarios. It is for this reason that the logistic regression model is very popular. It is one of the most commonly asked logistic regression questions. Logistic regression is also a form of predictive analysis, just like the other regressions, and is used to describe the relationship between variables. There are many real-life examples of logistic regression, such as predicting the probability of a heart attack or the probability of a transaction being fraudulent. In-demand Machine Learning Skills Artificial Intelligence Courses Tableau Courses NLP Courses Deep Learning Courses

3. What is the formula for the logistic regression function?

f(z) = 1/(1 + e^-(α + β1X1 + β2X2 + … + βkXk))

The Difference between Data Science, Machine Learning and Big Data!

4. How can the probability of a logistic regression model be expressed as conditional probability?

P(Discrete value of Target variable | X1, X2, X3, …, Xk)

It is the probability of the target variable taking up a discrete value (either 0 or 1 in the case of binary classification problems) when the values of the independent variables are given. For example, the probability that an employee will attrite (target variable) given his attributes, such as age, salary, KRAs, etc.
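As a quick illustration of the first few answers, here is a small sketch, assuming nothing beyond NumPy, of the logistic (sigmoid) function and how a linear combination of hypothetical coefficients and feature values is squashed into a probability between 0 and 1:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real-valued z to the (0, 1) interval."""
    return 1.0 / (1.0 + np.exp(-z))

# As z goes from -infinity to +infinity, the output stays between 0 and 1
for z in [-10, -1, 0, 1, 10]:
    print(f"z = {z:>3}: sigmoid(z) = {sigmoid(z):.4f}")

# With a fitted model, z is the linear combination alpha + beta1*x1 + ... + betak*xk
alpha, betas = -1.5, np.array([0.8, 2.0])   # hypothetical coefficients
x = np.array([1.2, 0.5])                    # hypothetical feature values
print("P(y = 1 | x) =", sigmoid(alpha + betas @ x))
```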
5. What are the odds?

These types of logistic regression questions and answers are asked during the interview to understand the candidate’s grasp of the basics. The odds are the ratio of the probability of an event occurring to the probability of the event not occurring. For example, let’s assume that the probability of winning a lottery is 0.01. Then, the probability of not winning is 1 - 0.01 = 0.99. The odds of winning the lottery = (Probability of winning)/(Probability of not winning) = 0.01/0.99. The odds of winning the lottery are 1 to 99, and the odds of not winning the lottery are 99 to 1.

6. What are the outputs of the logistic model and the logistic function?

The logistic model outputs the logits, i.e. log odds, and the logistic function outputs the probabilities. Logistic model = α + β1X1 + β2X2 + … + βkXk; the output of this is the logits. Logistic function = f(z) = 1/(1 + e^-(α + β1X1 + β2X2 + … + βkXk)); the output, in this case, is the probabilities. Best Machine Learning and AI Courses Online Master of Science in Machine Learning & AI from LJMU Executive Post Graduate Programme in Machine Learning & AI from IIITB Advanced Certificate Programme in Machine Learning & NLP from IIITB Advanced Certificate Programme in Machine Learning & Deep Learning from IIITB Executive Post Graduate Program in Data Science & Machine Learning from University of Maryland To Explore all our courses, visit our page below. Machine Learning Courses

7. How to interpret the results of a logistic regression model? Or, what are the meanings of alpha and beta in a logistic regression model?

Alpha is the baseline in a logistic regression model. It is the log odds for an instance when all the attributes (X1, X2, …, Xk) are zero. In practical scenarios, the probability of all the attributes being zero is very low. In another interpretation, alpha is the log odds for an instance when none of the attributes is taken into consideration. Beta is the amount by which the log odds change for a unit change in a particular attribute, keeping all other attributes fixed or unchanged (control variables). In other words, the beta associated with a predictor X represents the expected change in log odds per unit change in X, whereas alpha is a constant. Join the Artificial Intelligence Course online from the World’s top Universities – Masters, Executive Post Graduate Programs, and Advanced Certificate Program in ML & AI to fast-track your career.

8. What is the odds ratio?

The odds ratio is the ratio of the odds between two groups; it is computed when the odds of more than one group are involved. For example, let’s assume that we are trying to ascertain the effectiveness of a medicine. We administered this medicine to the ‘intervention’ group and a placebo to the ‘control’ group. Odds ratio (OR) = (odds of the intervention group)/(odds of the control group). Interpretation (with the ratio defined this way, for a desirable outcome such as recovery): if the odds ratio = 1, there is no difference between the intervention group and the control group; if the odds ratio is greater than 1, the outcome is more likely in the intervention group, i.e. the intervention appears better than the control; if the odds ratio is less than 1, the outcome is less likely in the intervention group than in the control group. 5 Breakthrough Applications of Machine Learning

9. What is the formula for calculating the odds ratio?

For two groups X1 and X0 (two covariate profiles), the odds ratio is OR = exp(Σi βi(X1i − X0i)), where X1i stands for the value of attribute ‘i’ in group X1, X0i stands for the value of attribute ‘i’ in group X0, and βi stands for the corresponding coefficient of the logistic regression model. Note that the baseline (the intercept) cancels out and is not included in this formula.
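The odds-ratio formula in Q9 can be checked numerically. The sketch below uses hypothetical coefficients and two made-up covariate profiles to show that the ratio of odds between the groups equals the exponential of the coefficient-weighted difference, with the intercept cancelling out:

```python
import numpy as np

# Hypothetical fitted coefficients of a logistic regression model
alpha = -2.0
beta = np.array([0.7, 1.2, -0.4])

# Two groups (covariate profiles) to compare, e.g. intervention vs control
x1 = np.array([1.0, 3.0, 2.0])   # group X1
x0 = np.array([0.0, 3.0, 2.0])   # group X0 (differs only in the first attribute)

log_odds_1 = alpha + beta @ x1
log_odds_0 = alpha + beta @ x0

# Odds ratio = exp(sum_i beta_i * (x1_i - x0_i)); the intercept alpha cancels out
odds_ratio = np.exp(log_odds_1 - log_odds_0)
print(odds_ratio)                  # equals exp(beta[0]) here, roughly 2.01
print(np.exp(beta @ (x1 - x0)))    # same value, computed directly from the formula
```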
In-demand Machine Learning Skills Artificial Intelligence Courses Tableau Courses NLP Courses Deep Learning Courses

10. Why can’t linear regression be used in place of logistic regression for binary classification?

The reasons why linear regression cannot be used in the case of binary classification are as follows: Distribution of error terms: the distribution of the data in the case of linear and logistic regression is different. Linear regression assumes that the error terms are normally distributed; in the case of binary classification, this assumption does not hold true. Model output: in linear regression, the output is continuous. In the case of binary classification, an output of a continuous value does not make sense. For binary classification problems, linear regression may predict values that go beyond 0 and 1. If we want the output in the form of probabilities, which can be mapped to two different classes, then the range should be restricted to 0 and 1. As the logistic regression model can output probabilities via the logistic/sigmoid function, it is preferred over linear regression. Variance of residual errors: linear regression assumes that the variance of the random errors is constant. This assumption is also violated in the case of logistic regression. This can be asked in alternative ways, such as, “Logistic regression error values are normally distributed: state whether this is true or false”, or “Select the wrong statement about logistic regression.” Linear regression is a model used to estimate the relationship between two variables, one dependent and one independent, using a straight line. It is helpful for predicting the value of one variable based on the other, and the prediction done using linear regression provides a scientific and accurate basis for the study. FYI: Free Deep Learning Course!

11. Is the decision boundary linear or nonlinear in the case of a logistic regression model?

The decision boundary is the line that separates the target variables into different classes. A decision boundary can be either linear or nonlinear. In the case of a logistic regression model, the decision boundary is a straight line: the model is linear in its inputs, z = α + β1X1 + β2X2 + … + βkXk, and the boundary is the set of points where z equals the chosen cutoff. Logistic regression is therefore only suitable in cases where a straight line is able to separate the different classes. If a straight line cannot do it, then nonlinear algorithms should be used to achieve better results. The importance of decision boundaries is high: the decision boundary is the surface that separates the data points belonging to different class labels, and it is not limited to the data points that are already provided; the model can make predictions for any new combination of inputs as well.

12. What is the likelihood function?

The likelihood function is the joint probability of observing the data. For example, let’s assume that a coin is tossed 100 times and we want to know the probability of getting 60 heads from the tosses. This example follows the binomial distribution formula. p = probability of heads from a single coin toss; n = 100 (the number of coin tosses); x = 60 (the number of heads – success); n − x = 40 (the number of tails). The likelihood function is the probability that the number of heads is 60 in a trial of 100 coin tosses, where the probability of heads in each coin toss is p. Here the coin toss result follows a binomial distribution, so this can be written as Pr(X = 60 | n = 100, p) = c × p^60 × (1 − p)^40, where c is a constant (the binomial coefficient C(100, 60)) and p is the unknown parameter. The likelihood function gives the probability of observing the results as a function of the unknown parameters.
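Here is a small sketch of the coin-toss likelihood from Q12: it evaluates the binomial likelihood of observing 60 heads in 100 tosses for a few candidate values of p (the numbers are just the ones used in the example above):

```python
from math import comb

def binomial_likelihood(p, n=100, x=60):
    """Likelihood of observing x heads in n tosses when P(heads) = p."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# The likelihood is a function of the unknown parameter p, given the observed data
for p in [0.4, 0.5, 0.6, 0.7]:
    print(f"p = {p}: likelihood = {binomial_likelihood(p):.4g}")
# p = 0.6 gives the largest likelihood, which previews the idea behind the MLE
```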
13. What is the Maximum Likelihood Estimator (MLE)?

The MLE chooses the set of unknown parameters (the estimator) that maximises the likelihood function. The method for finding the MLE is to use calculus: set the derivative of the (log-)likelihood function with respect to the unknown parameter to zero and solve; this gives the MLE. For a binomial model this is easy, but for a logistic model the calculations are complex, so computer programs are used to derive the MLE for logistic models. (Here’s another approach to answering the question.) MLE is a statistical approach to estimating the parameters of a mathematical model. MLE and ordinary least squares estimation give the same results for linear regression if the dependent variable is assumed to be normally distributed. MLE does not assume anything about the independent variables. The point in the parameter space that maximises the likelihood function is known as the maximum likelihood estimate. This method has gained popularity for statistical inference owing to its intuitive and flexible features. Maximum likelihood estimators have some interesting properties, such as consistency, functional equivariance, efficiency, and second-order efficiency. These features allow better scope for reliable outputs. The maximum likelihood estimator is useful for getting unbiased output in the case of large data sets as well. Along with this, it facilitates a consistent yet flexible approach, making it suitable for a broad range of applications.

14. What are the different methods of MLE and when is each method preferred?

In the case of logistic regression, there are two approaches to MLE: the conditional and unconditional methods. Conditional and unconditional methods are algorithms that use different likelihood functions. The unconditional formula employs the joint probability of positives (for example, churn) and negatives (for example, non-churn). The conditional formula is the ratio of the probability of the observed data to the probability of all possible configurations. The unconditional method is preferred if the number of parameters is low compared to the number of instances. If the number of parameters is high compared to the number of instances, then conditional MLE is to be preferred. Statisticians suggest using conditional MLE when in doubt, since conditional MLE will always provide unbiased results. These 6 Machine Learning Techniques are Improving Healthcare

15. What are the advantages and disadvantages of conditional and unconditional methods of MLE?

Conditional methods do not estimate unwanted (nuisance) parameters, whereas unconditional methods estimate the values of unwanted parameters as well. Unconditional formulas can be developed directly with joint probabilities; this cannot be done with conditional probability. If the number of parameters is high relative to the number of instances, then the unconditional method will give biased results, while conditional results will be unbiased in such cases.
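As Q13 notes, software finds the MLE for logistic models numerically. The following sketch, using synthetic data and scipy.optimize, minimises the negative log-likelihood to recover hypothetical true coefficients; it is an illustration of the idea, not the exact routine used by any particular statistics package:

```python
import numpy as np
from scipy.optimize import minimize

# Small synthetic dataset: one feature x, binary outcome y
rng = np.random.default_rng(0)
x = rng.normal(size=200)
true_alpha, true_beta = -0.5, 1.5
y = (rng.random(200) < 1 / (1 + np.exp(-(true_alpha + true_beta * x)))).astype(float)

def neg_log_likelihood(params):
    alpha, beta = params
    p = 1 / (1 + np.exp(-(alpha + beta * x)))
    eps = 1e-12                               # guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Maximising the likelihood is the same as minimising the negative log-likelihood
result = minimize(neg_log_likelihood, x0=np.zeros(2))
print("MLE estimates (alpha, beta):", result.x)   # should be close to (-0.5, 1.5)
```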
16. What is the output of a standard MLE program?

The output of a standard MLE program is as follows: Maximised likelihood value: this is the numerical value obtained by replacing the unknown parameter values in the likelihood function with the MLE parameter estimates. Estimated variance-covariance matrix: the diagonal of this matrix consists of the estimated variances of the ML estimates, and the off-diagonal consists of the covariances of the pairs of ML estimates.

17. Why can’t we use Mean Square Error (MSE) as a cost function for logistic regression?

In logistic regression, we use the sigmoid function to perform a non-linear transformation and obtain the probabilities. Squaring this non-linear transformation leads to a non-convex loss with local minima, so finding the global minimum using gradient descent is not guaranteed. For this reason, MSE is not suitable for logistic regression, and cross-entropy (log loss) is used as the cost function instead. In the cost function for logistic regression, confident wrong predictions are penalised heavily, while confident right predictions are rewarded less. By optimising this cost function, convergence is achieved.

18. Why is accuracy not a good measure for classification problems?

Accuracy is not a good measure for classification problems because it gives equal importance to both false positives and false negatives. However, this may not be the case in most business problems. For example, in the case of cancer prediction, declaring a cancer as benign is more serious than wrongly informing a patient that he is suffering from cancer. Accuracy gives equal importance to both cases and cannot differentiate between them. It is important to explain what accuracy is before answering this question: accuracy, as the name signifies, is freedom from error; it is the condition or quality of being true, correct, and defect-free. It is not a good measure for classification problems in the case of imbalanced data.

19. What is the importance of a baseline in a classification problem?

Most classification problems deal with imbalanced datasets. Examples include telecom churn, employee attrition, cancer prediction, fraud detection, online advertisement targeting, and so on. In all these problems, the number of positive classes is very low when compared to the negative classes. In some cases, it is common for the positive class to be less than 1% of the total sample. In such cases, an accuracy of 99% may sound very good but, in reality, it may not be. Here, the negatives are 99%, and hence the baseline accuracy is also 99%: if the algorithm predicts every instance as negative, it still achieves 99% accuracy, even though every positive is predicted wrongly, which can be very costly for the business. So, the baseline is very important, and the algorithm needs to be evaluated relative to the baseline. A baseline is the most broken-down, or simplest, possible prediction, and it is useful for understanding the reliability of any trained model.
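A tiny sketch of the point made in Q18 and Q19: on a dataset with 99% negatives, a "model" that predicts the majority class for everything already achieves 99% accuracy while catching none of the positives (the class proportions here are made up for illustration):

```python
import numpy as np

# Imbalanced labels: 1% positives, 99% negatives (e.g. churn, fraud)
y_true = np.array([1] * 10 + [0] * 990)

# A "model" that blindly predicts the majority class for every instance
y_pred = np.zeros_like(y_true)

accuracy = (y_true == y_pred).mean()
recall_on_positives = y_pred[y_true == 1].mean()   # fraction of positives caught

print(f"Accuracy: {accuracy:.2%}")             # 99%, no better than the baseline
print(f"Recall:   {recall_on_positives:.2%}")  # 0%: every positive is missed
```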
20. What are false positives and false negatives?

False positives are those cases in which negatives are wrongly predicted as positives, for example, predicting that a customer will churn when, in fact, he is not churning. False negatives are those cases in which positives are wrongly predicted as negatives, for example, predicting that a customer will not churn when, in fact, he churns. Any test has a chance of producing either false positives or false negatives, so professionals need to be extra cautious while working with the data to account for such scenarios.

21. What are the true positive rate (TPR), true negative rate (TNR), false-positive rate (FPR), and false-negative rate (FNR)?

TPR refers to the ratio of positives correctly predicted out of all the actual positive labels; in simple words, it is the frequency of correctly predicted positive labels. True positives are the values that are actually positive and predicted positive. TPR = TP/(TP + FN). TNR refers to the ratio of negatives correctly predicted out of all the actual negative labels; it is the frequency of correctly predicted negative labels. True negatives are the values that are actually negative and predicted negative. TNR = TN/(TN + FP). FPR refers to the ratio of negatives incorrectly predicted as positives out of all the actual negative labels. False positives are the values that are actually negative but predicted positive. FPR = FP/(TN + FP). FNR refers to the ratio of positives incorrectly predicted as negatives out of all the actual positive labels. False negatives are the values that are actually positive but predicted negative. FNR = FN/(TP + FN).

22. What are precision and recall?

Precision is the proportion of true positives out of the predicted positives; to put it another way, it is the accuracy of the positive predictions. It is also known as the ‘positive predictive value’. Precision = TP/(TP + FP). Recall is the same as the true positive rate (TPR). It is important to examine both precision and recall while evaluating a model’s effectiveness: precision is the fraction of relevant instances among the retrieved instances, and recall is the fraction of relevant instances that were retrieved. How does Unsupervised Machine Learning Work?

23. What is the F-measure?

It is the harmonic mean of precision and recall. In some cases, there will be a trade-off between precision and recall; in such cases, the F-measure will drop. It will be high only when both the precision and the recall are high. Depending on the business case at hand and the goal of the data analysis, an appropriate metric should be selected. F-measure = 2 × (Precision × Recall) / (Precision + Recall). The F-score or F-measure is commonly used for the evaluation of information retrieval systems such as search engines. It combines precision and recall, is defined as their harmonic mean, and measures the accuracy of a test.

24. What is accuracy?

It is the number of correct predictions out of all predictions made. Accuracy = (TP + TN)/(TP + TN + FP + FN), i.e. correct predictions divided by the total number of predictions.

25. What are sensitivity and specificity?

Specificity is the same as the true negative rate, or equivalently 1 − the false-positive rate. Specificity = TN/(TN + FP). Sensitivity is the true positive rate. Sensitivity = TP/(TP + FN).
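The metrics in Q21 to Q25 follow directly from the confusion matrix. The sketch below computes them by hand on a made-up set of labels and cross-checks the precision, recall, and F1 values against scikit-learn:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

# For binary 0/1 labels, ravel() returns the counts in the order tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

tpr = tp / (tp + fn)               # sensitivity / recall
tnr = tn / (tn + fp)               # specificity
fpr = fp / (fp + tn)
fnr = fn / (fn + tp)
precision = tp / (tp + fp)
f1 = 2 * precision * tpr / (precision + tpr)

print("by hand:", tpr, tnr, fpr, fnr, precision, f1)
print("sklearn:", precision_score(y_true, y_pred),
      recall_score(y_true, y_pred), f1_score(y_true, y_pred))
```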
26. How to choose a cutoff point in the case of a logistic regression model?

The cutoff point depends on the business objective. For example, let’s consider loan defaults. If the business objective is to reduce losses, then the specificity needs to be high. If the aim is to increase profits, then it is an entirely different matter: it may not be the case that profits increase by refusing loans to all predicted default cases; it may instead be that the business has to disburse loans to default cases that are slightly less risky in order to increase profits. In such a case, a different cutoff point, which maximises profit, will be required. In most instances, businesses operate under many constraints, and the cutoff point that satisfies the business objective will not be the same with and without those limitations. The cutoff point needs to be selected considering all these factors. As a rule of thumb, choose a cutoff value that is equivalent to the proportion of positives in the dataset. What is Machine Learning and Why it matters

27. How does logistic regression handle categorical variables?

The inputs to a logistic regression model need to be numeric; the algorithm cannot handle categorical variables directly. So, they need to be converted into a format that is suitable for the algorithm to process. Each level of a categorical variable is assigned a unique numeric indicator known as a dummy variable. These dummy variables are handled by the logistic regression model like any other numeric value.

28. What is a cumulative response curve (CRV)?

In order to convey the results of an analysis to management, a ‘cumulative response curve’ is used, which is more intuitive than the ROC curve; a ROC curve is very difficult to understand for someone outside the field of data science. A CRV consists of the true positive rate, or the percentage of positives correctly classified, on the Y-axis and the percentage of the population targeted on the X-axis. It is important to note that the population is ranked by the model in descending order (of either the probabilities or the expected values). If the model is good, then by targeting the top portion of the ranked list, a high percentage of the positives will be captured. As with the ROC curve, there is a diagonal line that represents random performance. Let’s understand this random performance with an example: assuming that 50% of the list is targeted, it is expected that 50% of the positives will be captured. This expectation is captured by the diagonal line, similar to the ROC curve.

29. What are lift curves?

The lift is the improvement in model performance (increase in true positive rate) compared to random performance. Random performance means that if 50% of the instances are targeted, then it is expected that 50% of the positives will be detected. Lift is measured in comparison to this random performance; if a model’s performance is better than random, its lift will be greater than 1. In a lift curve, the lift is plotted on the Y-axis and the percentage of the population (sorted in descending order) on the X-axis. At a given percentage of the target population, a model with a higher lift is preferred.

30. Which algorithm is better at handling outliers: logistic regression or SVM?

Logistic regression will find a linear boundary, if one exists, and will shift that linear boundary to accommodate outliers. SVM is insensitive to individual samples, so there will not be a major shift in the linear boundary to accommodate an outlier. SVM also comes with inbuilt complexity controls, which take care of overfitting; this is not true in the case of logistic regression.
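Tying together Q26 and Q27, here is a hedged sketch with a tiny made-up dataset: categorical inputs are converted to dummy variables with pandas before fitting, and a business-driven cutoff (rather than the default 0.5) is applied to the predicted probabilities. The column names and the 0.3 threshold are purely illustrative:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Tiny illustrative dataset with one categorical and one numeric feature
data = pd.DataFrame({
    "segment": ["retail", "corporate", "retail", "sme", "corporate", "sme"],
    "income":  [30, 80, 25, 55, 90, 40],
    "default": [1, 0, 1, 0, 0, 1],
})

# Categorical levels are converted to numeric dummy variables before fitting
X = pd.get_dummies(data[["segment", "income"]], drop_first=True)
y = data["default"]

model = LogisticRegression().fit(X, y)
probabilities = model.predict_proba(X)[:, 1]

# The default 0.5 cutoff is not sacred: pick the threshold that serves the
# business objective (here, a stricter 0.3 cutoff to flag more risky cases)
cutoff = 0.3
predictions = (probabilities >= cutoff).astype(int)
print(list(zip(probabilities.round(2), predictions)))
```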
31. How will you deal with the multiclass classification problem using logistic regression?

The most common method of dealing with multiclass classification using logistic regression is the one-vs-all approach. Under this approach, a number of models equal to the number of classes are trained. The models work in a specific way: for example, the first model classifies a data point depending on whether it belongs to class 1 or some other class; the second model classifies the data point into class 2 or some other class. This way, each data point can be checked over all the classes.

32. Explain the use of ROC curves and the AUC of a ROC curve.

A ROC (Receiver Operating Characteristic) curve illustrates the performance of a binary classification model. It is basically a TPR versus FPR (true positive rate versus false-positive rate) curve for all the threshold values ranging from 0 to 1. In a ROC curve, each point in the ROC space is associated with a different confusion matrix. A diagonal line from the bottom-left to the top-right of the ROC graph represents random guessing. The Area Under the Curve (AUC) signifies how good the classifier model is: if the value of AUC is high (near 1), then the model is working satisfactorily, whereas if the value is low (around 0.5), then the model is not working properly and is just guessing randomly.

33. How can you use the concept of ROC in a multiclass classification?

The concept of ROC curves can easily be used for multiclass classification by using the one-vs-all approach. For example, let’s say that we have three classes, ‘a’, ‘b’, and ‘c’. Then, the first class comprises class ‘a’ (the true class) and the second class comprises both class ‘b’ and class ‘c’ together (the false class). Thus, the ROC curve is plotted. Similarly, for all three classes, we plot three ROC curves and perform our analysis of AUC.

Popular AI and ML Blogs & Free Courses IoT: History, Present & Future Machine Learning Tutorial: Learn ML What is Algorithm? Simple & Easy Robotics Engineer Salary in India : All Roles A Day in the Life of a Machine Learning Engineer: What do they do? What is IoT (Internet of Things) Permutation vs Combination: Difference between Permutation and Combination Top 7 Trends in Artificial Intelligence & Machine Learning Machine Learning with R: Everything You Need to Know AI & ML Free Courses Introduction to NLP Fundamentals of Deep Learning of Neural Networks Linear Regression: Step by Step Guide Artificial Intelligence in the Real World Introduction to Tableau Case Study using Python, SQL and Tableau We have so far covered the two most basic ML algorithms, Linear and Logistic Regression, and we hope that you have found these resources helpful. Learn ML Course from the World’s top Universities. Earn Masters, Executive PGP, or Advanced Certificate Programs to fast-track your career. Machine Learning Engineers: Myths vs. Realities The next part of this series is based on another very important ML algorithm, Clustering. Feel free to post your doubts and questions in the comment section below. Co-authored by – Ojas Agarwal

by Thulasiram Gunipati

Calendar icon

13 Sep 2022

Data Analyst vs Data Scientist – Spot the Difference
Blogs
Views Icon

5714

Data Analyst vs Data Scientist – Spot the Difference

With Data Science jobs on the rise, there’s a question that often lurks in the minds of aspirants – what’s the difference between a Data Scientist and a Data Analyst? Are the two the same? Such questions have been a source of great confusion among youngsters who wish to make a successful career in Data Science. Today, we’re here to put these questions to rest and clarify the entire matter for you! Before diving deep into the job profile of a Data Scientist and that of a Data Analyst, let’s first understand the core difference between the two job roles. Data Scientist Job Role – Data Scientists are expert professionals equipped with a combination of coding, mathematical, statistical, analytical, and ML skills. Even during a Data Science interview, most of the questions are in and around these concepts. They explore and examine large datasets gathered from multiple sources, then clean, organize, and process them to facilitate ease of interpretation. While they can perform the analysis tasks of an analyst, they also have to work with advanced ML algorithms, predictive models, and programming and statistical tools to make sense of data and develop new processes for data modeling. A Data Scientist can also be labeled as a Data Researcher or a Data Developer, depending upon the skill set and job demand. Data Analyst Job Role – As the name suggests, Data Analysts are primarily involved with day-to-day data collection and analysis tasks. They must sift through data to identify meaningful insights. They look at business problems and try to find the answers to a specific set of questions from a given set of data. Furthermore, Data Analysts create visual representations of data in the form of graphs, charts, etc., for the ease of understanding of every stakeholder involved in the business process. A Data Analyst can also be labelled as a Data Architect, Data Administrator, or Analytics Engineer, depending upon the skill set and job demand. Gathering from this description of the two job profiles, it is clear that a Data Scientist mainly deals with finding meaning in incoherence (unstructured/semi-structured datasets), whereas a Data Analyst has to find answers to questions based on the findings of a Data Scientist. However, sometimes the job roles do overlap, thereby giving rise to a grey area. And while Data Analysts and Data Scientists share some similarities, there are certain pivotal differences between the two roles. Data Scientist and Data Analyst – A Comparison 1. Responsibilities A minute ago, we talked about the primary job responsibilities of a Data Scientist and a Data Analyst in a nutshell. Now, we’ll talk about their respective job responsibilities in detail. Data Scientist: Create & define programs for data collection, modelling, analysis, and reporting. Perform data cleansing and processing operations to mine valuable insights from data. Develop custom data models and ML algorithms to suit company/customer needs. To mine and analyze data from company databases to foster optimization and improvement of business operations (product development, marketing techniques, and customer satisfaction). To use the right data visualization and predictive modelling tools to boost revenue generation and marketing strategies, enhance customer experiences, etc. To develop new ML methods and analytical models. To correlate different datasets, determine the validity of new data sources and data collection methods.
To coordinate and communicate with both IT and business management teams to implement data models and monitor the outcomes. To identify new business opportunities and determine how the findings can be used to enhance business strategies and outcomes. To create sophisticated tools/processes to monitor and analyze the performance of data models accurately. To develop A/B testing frameworks to test model functioning and quality. To take on the role of a visionary who can unlock new possibilities from data. Data Analyst: To analyze and mine business data to identify correlations and discover valuable patterns from disparate data points. To work with customer-centric algorithm models and personalize them to fit individual customer requirements. To create and deploy custom models to uncover answers to business matters such as marketing strategies and their performance, customer taste, and preference patterns, etc. To map and trace data from multiple systems to solve specific business problems. To write SQL queries to extract data from the data warehouse and to identify the answers to complex business issues. To apply statistical analysis methods to conduct consumer data research and analytics. To coordinate with Data Scientists and Data Engineers to gather new data from multiple sources. To design and develop data visualization reports, dashboards, etc., to help the business management team to make better business decisions. To perform routine analysis tasks as well as quantitative analysis as and when required to support day-to-day business functioning and decision making. Checkout: Data Analyst Salary in India Explore our Popular Data Science Courses Executive Post Graduate Programme in Data Science from IIITB Professional Certificate Program in Data Science for Business Decision Making Master of Science in Data Science from University of Arizona Advanced Certificate Programme in Data Science from IIITB Professional Certificate Program in Data Science and Business Analytics from University of Maryland Data Science Courses 2. Skills The role of a Data Scientist is highly specialized and versatile. Hence, Data Scientists mostly have advanced degrees such as a Master’s or PhD. According to KDnuggets, nearly 88% of Data Scientists have a master’s degree, and at least 46 % of them hold a PhD. Let’s take a look at the role requirements of a Data Scientist: A minimum of a Master’s degree in Statistics/Mathematics/Computer Science. Better if you have a PhD. Proficiency in programming languages like R, Python, Java, SQL, to name a few. In-depth knowledge of ML techniques, including clustering, decision trees, artificial neural networks, etc. In-depth knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests, etc.). Experience in working with statistical and data mining techniques (linear regression, random forest, trees, text mining, social network analysis, etc.). Experience in working with as well as creating data architectures. Experience in manipulating data sets and developing statistical models. Experience in using web services such as S3, Spark, Redshift, DigitalOcean, etc. Experience in analyzing data from third-party providers like Google Analytics, AdWords, Facebook Insights, Site Catalyst, Coremetrics, etc. Experience in working with distributed data/computing tools like Map/Reduce, Hadoop, Spark, Hive, MySQL, etc. Experience in data visualization using tools like ggplot, Tableau, Periscope, Business Objects, D3, etc. 
For the job role of a Data Analyst, the minimum requirement is an undergraduate STEM (science, technology, engineering, or math) degree. Having advanced degrees is excellent, but it is not a necessity. If you have strong Math, Science, Programming, Database, Predictive Analytics, and Data Modeling skills, you’re good to go. Here’s a list of all the essential requirements for a Data Analyst: Undergraduate degree in Mathematics/Statistics/Business with a focus on analytics. Proficiency in programming languages like R, Python, Java, SQL, to name a few. A solid combination of analytical skills, intellectual curiosity, and business acumen. In-depth knowledge of data mining techniques and emerging technologies including MapReduce, Spark, ML, Deep Learning, artificial neural networks, etc. Experience in working with the agile methodology. Experience in working with Microsoft Excel and Office. Strong communication skills (both verbal and written). Ability to manage and handle multiple priorities simultaneously.

Top Data Science Skills to Learn to upskill
SL. No | Top Data Science Skills to Learn
1 | Data Analysis Online Courses | Inferential Statistics Online Courses
2 | Hypothesis Testing Online Courses | Logistic Regression Online Courses
3 | Linear Regression Courses | Linear Algebra for Analysis Online Courses

upGrad’s Exclusive Data Science Webinar for you – How upGrad helps for your Data Science Career? https://cdn.upgrad.com/blog/alumni-talk-on-ds.mp4

3. Salary According to a PwC study report, by 2020, there will be around 2.7 million job openings for Data Scientists and Data Analysts. It further states that the applicants for these job roles must be “T-shaped”, as in, they must possess not only technical and analytical skills but also soft skills, including communication, teamwork, and creativity. Since it is difficult to find talent with the right skill set, and the demand for Data Scientists and Analysts exceeds the supply by a large margin, these roles promise handsome salary packages. However, because the job of a Data Scientist is much more demanding than that of a Data Analyst, the salary of Data Analysts is naturally lower than that of Data Scientists. Glassdoor maintains that the average annual salary of Data Scientists is Rs. 10,00,000, whereas that of a Data Analyst is Rs. 4,82,041. Learn data science courses from the World’s top Universities. Earn Executive PG Programs, Advanced Certificate Programs, or Masters Programs to fast-track your career. Concluding thoughts Considering all the points mentioned above, the job titles of Data Scientist and Data Analyst seem deceptively similar owing to the few similarities in skill sets and job responsibilities. For instance, if you have a STEM background with a flair for programming, analytics, and statistics, you are well suited for a career in Data Science. However, the subtle differences between the two give rise to a significant disparity in salary levels. Read our popular Data Science Articles Data Science Career Path: A Comprehensive Career Guide Data Science Career Growth: The Future of Work is here Why is Data Science Important? 8 Ways Data Science Brings Value to the Business Relevance of Data Science for Managers The Ultimate Data Science Cheat Sheet Every Data Scientists Should Have Top 6 Reasons Why You Should Become a Data Scientist A Day in the Life of Data Scientist: What do they do? Myth Busted: Data Science doesn’t need Coding Business Intelligence vs Data Science: What are the differences?
If you still cannot make a choice, let’s make it simpler for you: if you are great with numbers but still have a long way to go to perfect your coding and data modelling skills, you would do better to start your career as a Data Analyst. Gradually, you can upskill and then become a Data Scientist. This way, the job of a Data Analyst can become a stepping stone to becoming a Data Scientist. All in all, both options are emerging and highly lucrative career choices, so you’ll have a promising career in Data Science no matter which you choose.

by Thulasiram Gunipati

Calendar icon

08 Jul 2019

Applications of Data Science and Machine Learning in NETFLIX
Blogs
Views Icon

6670

Applications of Data Science and Machine Learning in NETFLIX

Industries are using data science in exciting and creative ways. Data science is turning up in unexpected places and improving the efficiency of various sectors. It is powering up human decision-making and impacting the top and bottom lines of businesses like never before. Industries are delighting millions of customers by powering up their applications with data science and machine learning. Top Machine Learning and AI Courses Online Master of Science in Machine Learning & AI from LJMU Executive Post Graduate Programme in Machine Learning & AI from IIITB Advanced Certificate Programme in Machine Learning & NLP from IIITB Advanced Certificate Programme in Machine Learning & Deep Learning from IIITB Executive Post Graduate Program in Data Science & Machine Learning from University of Maryland To Explore all our certification courses on AI & ML, kindly visit our page below. Machine Learning Certification This blog series aims to talk about interesting applications of data science and machine learning in various companies, spotlighting one company in each post. It will cover how companies like Google, Apple, LinkedIn, Uber, Instagram, Twitter, Instacart, Netflix, Washington Post, Quora, Pinterest, Amazon, Medium, Microsoft, etc. are leveraging data science and machine learning to power their businesses. So, let us start this series with ‘Netflix’. Trending Machine Learning Skills AI Courses Tableau Certification Natural Language Processing Deep Learning AI Enrol for the Machine Learning Course from the World’s top Universities. Earn Masters, Executive PGP, or Advanced Certificate Programs to fast-track your career.

NETFLIX It is well known that Netflix uses recommendation systems for suggesting movies or shows to its customers. Apart from movie recommendations, there are many other lesser-known areas in which Netflix is using data science and machine learning: Deciding personalised artwork for movies and shows. Suggesting the best frames from a show to the editors for creative work. Improving the Quality of Service (QoS) of streaming by deciding on video encoding, advancements in client-side and server-side algorithms, caching of videos, etc. Optimizing different stages of production. Experimenting with various algorithms using A/B testing and drawing causal inferences, and reducing the time taken for experimentation using techniques such as interleaving. A Sample Road-map for Building Your Data Warehouse

Personalised Artwork Every movie recommended by Netflix comes with associated artwork, and the artwork that comes along with a movie suggestion is not the same for everyone. Like the movie recommendation itself, the artwork related to a show is personalised: all members do not see a single ‘best’ artwork. A portfolio of artwork is created for a specific title, and depending on the taste and preferences of the viewer, a machine learning algorithm chooses the artwork that maximises the chances of the title being viewed. A portfolio of artwork created for the title ‘Stranger Things’: personalisation at work. Top row – artwork suggested for a viewer who likes the actress Uma Thurman. Bottom row – artwork suggestion for a viewer who likes the actor John Travolta. Artwork personalisation is not always straightforward; there are several challenges. Firstly, only a single image can be chosen for a title, whereas many movies can be recommended at a time. Secondly, the artwork suggestion should work in association with the movie recommendation engine.
It typically sits on top of the movie recommendation. Thirdly, personalised artwork recommendation should take into account the image suggestions made for other movies; otherwise, there will be no variation and diversity in the artwork suggestions, which becomes monotonous. Fourthly, should the same artwork or a different one be displayed between sessions? Showing a different image every time can confuse the viewer and also leads to the attribution problem: determining which artwork led the viewer to watch the show. Artwork personalisation leads to significant improvements in content discovery by viewers. It is the first instance of personalising not only the recommendation itself but also how the recommendation is presented to members. Netflix is still actively researching and perfecting this nascent technique. An Overview of Association Rule Mining and its Applications

Art of Image Discovery A single hour of ‘Stranger Things’ consists of 86,000 static video frames, and a single season (10 episodes) consists of, on average, 9 million total frames. Netflix adds content regularly to cater to its global customers. In such a situation, it is not possible to manually hunt for the ‘right’ artwork for the ‘right’ person; it is next to impossible for human editors to search for the best frames that bring out the unique elements of a show. To tackle this challenge at scale, Netflix built a suite of tools to surface the best frames, the ones that truly capture the spirit of the show. Pipeline to automatically capture the best frames for a show: Frame annotations are used to capture the objective signals that are used for image ranking. To obtain frame annotations, a video is divided into multiple small chunks. These chunks are processed in parallel using a framework known as ‘Archer’; this parallel processing helps Netflix capture frame annotations at scale. Each chunk is handled by a machine vision algorithm to obtain frame characteristics. For example, some of the properties of a frame that are captured are colour, brightness, contrast, etc. Another category of features, which tell what is happening in a frame and are captured during frame annotation, includes face detection, motion estimation, object detection, etc. Netflix also identified a set of properties drawn from the core principles of photography, cinematography, and visual aesthetic design, such as the rule of thirds, which are also captured during frame annotation. The next step after frame annotation is to rank the images. Some factors considered for ranking are the actors, the diversity of the images, content maturity, etc. Netflix uses deep learning techniques to cluster the images of the actors in a show, prioritise the main characters, and de-prioritise the secondary characters. Frames with violence or nudity are given a very low score. Using this ranking method, the best frames for a show are surfaced. This way, the artwork and editorial teams have a set of high-quality images to work with instead of dealing with millions of frames for a particular episode.
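Netflix's actual Archer pipeline is not public code, so the following is only a rough sketch of the kind of per-frame properties mentioned above (brightness and contrast), computed with OpenCV on a hypothetical local video file; a production system would also run face, object, and motion models and fan the chunks out to parallel workers:

```python
import cv2

def frame_annotations(video_path, sample_every=100):
    """Compute simple per-frame properties (brightness, contrast) as lightweight
    annotations that could feed an artwork-frame ranking step."""
    capture = cv2.VideoCapture(video_path)
    annotations = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            annotations.append({
                "frame": index,
                "brightness": float(gray.mean()),   # average pixel intensity
                "contrast": float(gray.std()),      # spread of pixel intensities
            })
        index += 1
    capture.release()
    return annotations

# "episode_01.mp4" is a hypothetical path used purely for illustration
for item in frame_annotations("episode_01.mp4")[:5]:
    print(item)
```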
Data Science in Production

Netflix is spending around eight billion dollars this year on original content, created for millions of viewers across the globe in more than 20 languages. It should not surprise us that Netflix uses data science in every step of content production. Producing content typically consists of pre-production, production and post-production stages: planning and budgeting happen in pre-production, principal photography is part of production, and steps like editing and sound mixing belong to post-production, while adding subtitles and removing technical glitches fall under localisation and quality control. Let us see how data science helps optimise each of these stages.

As mentioned earlier, budgeting is part of pre-production, and many decisions need to be taken before production starts, for example the choice of shooting location. Data science is used extensively to analyse the cost implications of a specific location, and decisions are made by delicately balancing the creative vision against the budget: costs are minimised without compromising the vision for the content.

Production involves shooting thousands of shots over many months. Production has an objective, but it must be carried out under specific constraints: an actor may be available for only one week, a location only on particular days, the crew’s working hours may be capped at 8 hours per day, a scene may require a day or a night shoot, and the team may have to move between locations during the shoot. Preparing a shooting schedule under all these constraints can be a nightmare for the director. Mathematical optimisation techniques, with an objective function and these constraints, are used to produce a rough shooting schedule, which is then refined further with manual adjustments.

Post-production can take as much time as production, if not more. Data visualisation techniques are used to spot bottlenecks in post-production and to track trends and project them into the future; this forecasting helps estimate the workload of the various teams so they can be staffed appropriately.

In localisation, shows are dubbed from one language into another. Which shows to dub is decided based on data analysis: the kinds of dubbed content that proved popular in the past are prioritised.

Quality control checks for issues such as the syncing of audio with video and of subtitles with sound, both before and after encoding (the process of compressing videos into different bitrates for streaming on different devices). Netflix accumulated historical data from manual quality-control checks: the errors that occurred in the past, the video formats in which they were found, the partners from whom the content was obtained, the genre of the content and so on. Yes, Netflix even saw a pattern of errors by genre. Using this data, a machine learning model was built that predicts whether an asset will ‘pass’ or ‘fail’ the quality checks; if the model predicts ‘fail’, the asset goes through a round of manual quality checks.
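Going by the description above, the quality-control model is essentially a binary classifier trained on historical check results. The sketch below illustrates that idea with scikit-learn; the column names (video_format, partner, genre), the toy data and the random-forest choice are assumptions made for this example, not details Netflix has published.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical historical QC data: one row per asset that went through
# manual quality control, with the outcome recorded as pass/fail.
history = pd.DataFrame({
    "video_format": ["mp4", "mov", "mp4", "mxf", "mov", "mp4"],
    "partner":      ["p1",  "p2",  "p1",  "p3",  "p2",  "p3"],
    "genre":        ["drama", "action", "comedy", "drama", "action", "comedy"],
    "failed_qc":    [0, 1, 0, 1, 0, 0],   # 1 = asset failed quality control
})

features = ["video_format", "partner", "genre"]
X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["failed_qc"], test_size=0.33, random_state=42
)

# One-hot encode the categorical features, then fit a simple classifier.
model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), features)]
    )),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=42)),
])
model.fit(X_train, y_train)

# Assets predicted to fail are routed to a round of manual checks.
predicted_fail = model.predict(X_test) == 1
```

The routing logic at the end mirrors the article: any asset the model predicts will fail is sent for a round of manual quality checks, while the rest can pass straight through.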
Streaming Quality of Experience and A/B Testing

Data science is used extensively to ensure the quality of the streaming experience. Network connectivity quality is predicted so that streaming quality can be maintained, and Netflix actively predicts which shows are likely to be streamed in a particular location so that the content can be cached on a nearby server. This caching and storing of content is done when internet traffic is low, which ensures content is streamed without buffering and customer satisfaction is maximised.

A/B testing is used extensively whenever a change is made to an existing algorithm or a new algorithm is proposed. Newer techniques such as interleaving and repeated measures are used to speed up the A/B testing process with far fewer samples.

To conclude, these are some of the ways Netflix uses data analysis to engage and awe its customers. If you are interested in diving deeper into how this marvellous company uses data science, visit their research blog; it is a treasure trove of articles waiting to be explored.

In the upcoming blog in this series, let us see how Instacart is leveraging data science and machine learning. Now that you have read this blog, do share your feedback on the article and suggest which company you would like to see covered in future posts.

Learn data science courses from the world’s top universities. Earn Executive PG Programs, Advanced Certificate Programs, or Masters Programs to fast-track your career.

by Thulasiram Gunipati

21 Aug 2018
