Hyperparameter Tuning
Hyperparameter types:
K in K-NN
Regularization constant, kernel type, and constants in SVMs
Number of layers, number of units per layer, and regularization in neural networks
The trade-off between under-fitting and over-fitting is determined by the complexity of the model
and the amount of training data. The optimal hyperparameters help to avoid under-fitting
(training and test error are both high) and over-fitting (training error is low but test error
is high).
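To make this trade-off concrete, here is a minimal sketch (assuming scikit-learn is available; the synthetic dataset and the particular values of K are illustrative) that varies K in K-NN and prints training versus test accuracy. Very small K tends to over-fit (low training error, higher test error), while very large K tends to under-fit (both errors high).

```python
# Sketch: how K in K-NN trades off under-fitting and over-fitting.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for k in (1, 5, 25, 100, 400):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(f"K={k:4d}  train acc={knn.score(X_train, y_train):.3f}  "
          f"test acc={knn.score(X_test, y_test):.3f}")
```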
Introduction
Workflow: One of the core tasks of developing an ML model is to evaluate its
performance. There are multiple stages in developing an ML model for use in software
applications.
Figure 1: Workflow
Evaluation: Model evaluation and ongoing evaluation may use different metrics. For
example, model evaluation may use Accuracy or AUROC, while ongoing evaluation
may use customer lifetime value. Also, the distribution of the data might change
between the historical data and live data. One way to detect distribution drift is to
monitor the validation metric on live data: a drop in the metric suggests that the live
data no longer follows the distribution of the historical data.
Hyperparameters: Model parameters are learned from data, while hyperparameters are
tuned to get the best fit. Searching for the best hyperparameters can be tedious, hence
search algorithms like grid search and random search are used.
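The distinction can be seen in a short sketch (scikit-learn is assumed; ridge regression is just one convenient example): the regularization constant is a hyperparameter set before fitting, while the coefficients are model parameters learned from the data.

```python
# Sketch: hyperparameter (set before fitting) vs. model parameters (learned from data).
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=200, n_features=5, noise=1.0, random_state=0)

ridge = Ridge(alpha=1.0)   # alpha: hyperparameter, chosen by the practitioner or a search
ridge.fit(X, y)            # fitting learns the model parameters from the data
print(ridge.coef_, ridge.intercept_)  # learned model parameters
```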
Model Evaluation
Evaluation Metrics: These are tied to ML tasks. There are different metrics
for supervised algorithms (classification and regression) and unsupervised
algorithms. For example, the performance of binary classification
is measured using Accuracy, AUROC, Log-loss, and KS.
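As a sketch of how these metrics can be computed in practice (assuming scikit-learn and SciPy; the logistic-regression model and synthetic data are placeholders):

```python
# Sketch: binary-classification metrics on a held-out test set.
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score, log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]   # predicted probability of the positive class

print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("AUROC   :", roc_auc_score(y_test, proba))
print("Log-loss:", log_loss(y_test, proba))
# KS: maximum separation between the score distributions of the two classes
print("KS      :", ks_2samp(proba[y_test == 1], proba[y_test == 0]).statistic)
```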
Hyperparameter Tuning
Hyperparameters: Vanilla linear regression does not have any hyperparameters.
Variants of linear regression (ridge and lasso) have the regularization strength as a
hyperparameter. A decision tree has the maximum depth and the minimum number of
observations per leaf as hyperparameters.
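A brief sketch showing how these hyperparameters appear as constructor arguments in scikit-learn (an assumed library; the specific values are arbitrary):

```python
# Sketch: the hyperparameters named above, expressed as scikit-learn arguments.
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.tree import DecisionTreeClassifier

plain = LinearRegression()                    # vanilla linear regression: nothing to tune
ridge = Ridge(alpha=1.0)                      # alpha: L2 regularization strength
lasso = Lasso(alpha=0.1)                      # alpha: L1 regularization strength
tree = DecisionTreeClassifier(max_depth=5,    # maximum depth of the tree
                              min_samples_leaf=10)  # minimum observations in a leaf
```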
Hyperparameter Search: Grid search picks out a grid of hyperparameter values and
evaluates all of them. Guesswork is necessary to specify the min and max values for
each hyperparameter. Random search evaluates a random sample of points from the
grid; it is more efficient than grid search. Smart hyperparameter tuning picks a few
hyperparameter settings, evaluates the validation metric, adjusts the
hyperparameters, and re-evaluates the validation metric. Examples of smart hyper-
parameter tuning tools are Spearmint (hyperparameter optimization using Gaussian processes) and
Hyperopt (hyperparameter optimization using tree-structured Parzen estimators).
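As a hedged illustration of grid search versus random search (using scikit-learn's GridSearchCV and RandomizedSearchCV; the SVM model, parameter ranges, and iteration budget are assumptions for the example):

```python
# Sketch: grid search evaluates every point on a fixed grid; random search
# evaluates a random sample of points drawn from the same ranges.
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)

# Grid search: every combination of the listed values (min/max chosen by guesswork).
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10, 100], "kernel": ["linear", "rbf"]}, cv=5)
grid.fit(X, y)
print("grid search best:", grid.best_params_, grid.best_score_)

# Random search: a fixed budget of randomly sampled settings from the same ranges.
rand = RandomizedSearchCV(SVC(), {"C": loguniform(0.1, 100), "kernel": ["linear", "rbf"]},
                          n_iter=10, cv=5, random_state=0)
rand.fit(X, y)
print("random search best:", rand.best_params_, rand.best_score_)
```

With the same evaluation budget, random search is not restricted to the pre-specified grid points, which is one reason it tends to be more efficient than grid search.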