Machine Learning Interview Cheat Sheet




This cheat sheet covers several areas of data mining and machine learning: predictive modelling, regression and classification algorithms for supervised learning (prediction), and metrics for evaluating model performance.

CS 229 - Machine Learning

By Afshine Amidi and Shervine Amidi

Classification metrics

In the context of binary classification, here are the main metrics to track in order to assess the performance of the model.

Confusion matrix The confusion matrix is used to get a more complete picture when assessing the performance of a model. It is defined as follows:

|                 | Predicted class: +                 | Predicted class: -                   |
|-----------------|------------------------------------|--------------------------------------|
| Actual class: + | TP (True Positives)                | FN (False Negatives), Type II error  |
| Actual class: - | FP (False Positives), Type I error | TN (True Negatives)                  |
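As an illustration, here is a minimal sketch of how these four counts can be extracted from true and predicted labels, assuming scikit-learn is available (the labels below are made up):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical binary labels: 1 = positive class, 0 = negative class
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary labels [0, 1], scikit-learn returns the matrix as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}  FN={fn}  FP={fp}  TN={tn}")
```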



Main metrics The following metrics are commonly used to assess the performance of classification models:

| Metric | Formula | Interpretation |
|--------|---------|----------------|
| Accuracy | $\displaystyle\frac{\textrm{TP}+\textrm{TN}}{\textrm{TP}+\textrm{TN}+\textrm{FP}+\textrm{FN}}$ | Overall performance of the model |
| Precision | $\displaystyle\frac{\textrm{TP}}{\textrm{TP}+\textrm{FP}}$ | How accurate the positive predictions are |
| Recall (Sensitivity) | $\displaystyle\frac{\textrm{TP}}{\textrm{TP}+\textrm{FN}}$ | Coverage of actual positive samples |
| Specificity | $\displaystyle\frac{\textrm{TN}}{\textrm{TN}+\textrm{FP}}$ | Coverage of actual negative samples |
| F1 score | $\displaystyle\frac{2\textrm{TP}}{2\textrm{TP}+\textrm{FP}+\textrm{FN}}$ | Hybrid metric useful for unbalanced classes |
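A minimal sketch of these formulas in plain Python, using hypothetical confusion-matrix counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Main binary-classification metrics computed from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),
        "recall":      tp / (tp + fn),        # sensitivity
        "specificity": tn / (tn + fp),
        "f1":          2 * tp / (2 * tp + fp + fn),
    }

# Hypothetical counts
print(classification_metrics(tp=40, tn=45, fp=5, fn=10))
```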

ROC The receiver operating characteristic curve, also noted ROC, is the plot of TPR versus FPR obtained by varying the decision threshold. These metrics are summed up in the table below:

| Metric | Formula | Equivalent |
|--------|---------|------------|
| True Positive Rate (TPR) | $\displaystyle\frac{\textrm{TP}}{\textrm{TP}+\textrm{FN}}$ | Recall, sensitivity |
| False Positive Rate (FPR) | $\displaystyle\frac{\textrm{FP}}{\textrm{TN}+\textrm{FP}}$ | 1 - specificity |


AUC The area under the receiver operating characteristic curve, also noted AUC or AUROC, is the area below the ROC curve.
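A minimal sketch of how the ROC curve and the AUC can be obtained with scikit-learn, assuming the model outputs a score or probability for the positive class (the labels and scores below are made up):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical true labels and predicted probabilities for the positive class
y_true  = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.65, 0.3])

# roc_curve sweeps the decision threshold and returns the FPR/TPR pairs of the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)
# AUC is the area under that curve
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```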



Regression metrics

Basic metrics Given a regression model $f$, the following metrics are commonly used to assess the performance of the model:

| Total sum of squares | Explained sum of squares | Residual sum of squares |
|----------------------|--------------------------|-------------------------|
| $\displaystyle\textrm{SS}_{\textrm{tot}}=\sum_{i=1}^m(y_i-\overline{y})^2$ | $\displaystyle\textrm{SS}_{\textrm{reg}}=\sum_{i=1}^m(f(x_i)-\overline{y})^2$ | $\displaystyle\textrm{SS}_{\textrm{res}}=\sum_{i=1}^m(y_i-f(x_i))^2$ |


Coefficient of determination The coefficient of determination, often noted $R^2$ or $r^2$, provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:

\[\boxed{R^2=1-\frac{\textrm{SS}_{\textrm{res}}}{\textrm{SS}_{\textrm{tot}}}}\]
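A minimal sketch of this computation in plain Python with NumPy (the observations and predictions below are made up):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination from the sums of squares defined above."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    return 1 - ss_res / ss_tot

# Hypothetical observations and model predictions
print(r_squared([3.0, 5.0, 7.0, 9.0], [2.8, 5.3, 6.9, 9.2]))
```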

Main metrics The following metrics are commonly used to assess the performance of regression models by taking into account the number of variables $n$ that they use:

| Mallow's Cp | AIC | BIC | Adjusted $R^2$ |
|-------------|-----|-----|----------------|
| $\displaystyle\frac{\textrm{SS}_{\textrm{res}}+2(n+1)\widehat{\sigma}^2}{m}$ | $\displaystyle 2\Big[(n+2)-\log(L)\Big]$ | $\displaystyle\log(m)(n+2)-2\log(L)$ | $\displaystyle 1-\frac{(1-R^2)(m-1)}{m-n-1}$ |

where $L$ is the likelihood and $\widehat{\sigma}^2$ is an estimate of the variance associated with each response.
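For example, a minimal sketch of the adjusted $R^2$ formula, where `m` is the number of observations and `n` the number of variables (the values below are hypothetical):

```python
def adjusted_r_squared(r2, m, n):
    """Adjusted R^2: penalizes R^2 for the number of variables n used by the model."""
    return 1 - (1 - r2) * (m - 1) / (m - n - 1)

# Hypothetical values: R^2 = 0.85 on m = 100 observations with n = 5 variables
print(adjusted_r_squared(0.85, m=100, n=5))
```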



Model selection

Vocabulary When selecting a model, we distinguish 3 different parts of the data that we have as follows:

| Training set | Validation set | Testing set |
|--------------|----------------|-------------|
| Model is trained; usually 80% of the dataset | Model is assessed; usually 20% of the dataset; also called hold-out or development set | Model gives predictions; unseen data |

Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set.
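A minimal sketch of such a split with scikit-learn, assuming a feature matrix `X` and label vector `y` (generated randomly here for illustration); the 80/20 proportions follow the table above:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical dataset: 100 observations, 4 features
rng = np.random.RandomState(0)
X = rng.rand(100, 4)
y = rng.randint(0, 2, size=100)

# Hold out 20% of the data as the unseen test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# Carve a validation (hold-out / development) set out of the remaining training data
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)
print(len(X_train), len(X_val), len(X_test))   # 60 20 20
```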


Cross-validation Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:

| k-fold | Leave-p-out |
|--------|-------------|
| Training on $k-1$ folds and assessment on the remaining one; generally $k=5$ or $k=10$ | Training on $n-p$ observations and assessment on the $p$ remaining ones; the case $p=1$ is called leave-one-out |

The most commonly used method is $k$-fold cross-validation: the training data is split into $k$ folds, the model is trained on $k-1$ folds and validated on the remaining one, and this is repeated $k$ times. The error is then averaged over the $k$ folds and is called the cross-validation error.
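A minimal sketch of $k$-fold cross-validation with scikit-learn (the dataset and the choice of logistic regression are purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: 100 observations, 4 features
rng = np.random.RandomState(0)
X = rng.rand(100, 4)
y = rng.randint(0, 2, size=100)

# 5-fold cross-validation: train on 4 folds, validate on the 5th, repeated 5 times
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print(f"cross-validation accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```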


Regularization The regularization procedure aims at preventing the model from overfitting the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:

| LASSO | Ridge | Elastic Net |
|-------|-------|-------------|
| Shrinks coefficients to 0; good for variable selection | Makes coefficients smaller | Tradeoff between variable selection and small coefficients |
| $\ldots+\lambda||\theta||_1$ | $\ldots+\lambda||\theta||_2^2$ | $\ldots+\lambda\Big[(1-\alpha)||\theta||_1+\alpha||\theta||_2^2\Big]$ |
| $\lambda\in\mathbb{R}$ | $\lambda\in\mathbb{R}$ | $\lambda\in\mathbb{R}$, $\alpha\in[0,1]$ |
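A minimal sketch of the three penalties with scikit-learn; note that scikit-learn's `alpha` plays the role of $\lambda$ above, and `l1_ratio` controls the mix between the L1 and L2 terms (the data is synthetic):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge, ElasticNet

# Synthetic regression data: 100 observations, 10 features
rng = np.random.RandomState(0)
X = rng.rand(100, 10)
y = X @ rng.rand(10) + 0.1 * rng.randn(100)

lasso = Lasso(alpha=0.1).fit(X, y)                     # L1 penalty: drives coefficients to 0
ridge = Ridge(alpha=0.1).fit(X, y)                     # L2 penalty: makes coefficients smaller
enet  = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)  # mix of L1 and L2 penalties

print("non-zero LASSO coefficients:", int(np.sum(lasso.coef_ != 0)))
```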

Diagnostics

Bias The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.


Variance The variance of a model is the variability of the model prediction for given data points.



Bias/variance tradeoff The simpler the model, the higher the bias, and the more complex the model, the higher the variance.


|  | Underfitting | Just right | Overfitting |
|--|--------------|------------|-------------|
| Symptoms | High training error; training error close to test error; high bias | Training error slightly lower than test error | Very low training error; training error much lower than test error; high variance |
| Possible remedies | Complexify model; add more features; train longer |  | Perform regularization; get more data |
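As a rough illustration of these regimes, one can compare training and test error while increasing model complexity; the sketch below uses polynomial degree as the complexity knob on synthetic data (all choices are illustrative):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic noisy data from a smooth underlying function
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 1, 80)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + 0.3 * rng.randn(80)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for degree in (1, 4, 15):  # roughly: underfitting, just right, overfitting
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```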

Error analysis Error analysis is analyzing the root cause of the difference in performance between the current and the perfect models.



Ablative analysis Ablative analysis is analyzing the root cause of the difference in performance between the current and the baseline models.