Hyperparameter Tuning in Python

One of the easiest ways to squeeze the last bit of performance out of machine learning or deep learning models is to pick the right hyperparameters. In this article I will show you some of the best ways to do hyperparameter tuning that are available today (in 2021).

What is the difference between parameters and hyperparameters?

  1. Model parameters: These are the parameters that the model learns from the given dataset, for example the weights of a deep neural network.
  2. Model hyperparameters: These are the parameters that the model cannot learn from the data. They are set before training and control how the model parameters are estimated, for example the learning rate of a deep neural network.

Why is hyperparameter tuning important?

Hyperparameter tuning is used to find the combination of hyperparameters that lets the algorithm reach the best model performance. Choosing the proper hyperparameter combination is often the only way to extract the full value from a model.

How to Choose Hyperparameters?

Picking the right hyperparameter combination is not a straightforward feat. It can be done in two ways.
1. Manual hyperparameter tuning: setting (and experimenting with) various combinations of hyperparameters by hand. This is a repetitive process and is not feasible when many hyperparameters have to be tried.
2. Automated hyperparameter tuning: using an algorithm that automates and optimises the search procedure and determines the optimal hyperparameters.

Methods of Hyperparameter Tuning

  1. Random Search: We define a grid of possible hyperparameter values. Each iteration tries a random combination from the grid, records the resulting performance, and at the end the best-performing combination of hyperparameters is returned (a toy sketch follows this list).
  2. Grid Search: We define a grid of possible hyperparameter values. The search tries every combination in the grid in a fixed order, evaluates the model for each configuration, and finally returns the model with the optimal hyperparameters.
  3. Bayesian Optimization: Tuning is framed as an optimisation problem: by adjusting the hyperparameters we want to minimise the loss function of our model. Bayesian optimization helps find the minimum in a small number of steps. It uses an acquisition function that directs sampling towards regions where an improvement over the current best observation is likely.
  4. Tree-structured Parzen Estimators (TPE): The idea behind TPE is similar to Bayesian optimization. Instead of modelling p(y|x) directly, TPE models P(x|y) and P(y), where y is the quantity being minimised (e.g. validation loss) and x the hyperparameter values. One major limitation of tree-structured Parzen estimators is that they do not model interactions between hyperparameters. Even so, TPE has been validated in many domains and works very well in practice.
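To make the idea concrete, here is a toy sketch of random search in plain Python; the grid values are arbitrary and the evaluate function is a placeholder standing in for training and validating a real model.

```python
# A toy illustration of random search over a hyperparameter grid.
# The "evaluate" function is a stand-in for training and validating a real model.
import random

grid = {
    "learning_rate": [0.1, 0.01, 0.001],
    "batch_size": [16, 32, 64],
}

def evaluate(params):
    # Placeholder for "train the model and return a validation score".
    return random.random()

best_params, best_score = None, float("-inf")
for _ in range(10):  # 10 random trials
    params = {name: random.choice(values) for name, values in grid.items()}
    score = evaluate(params)
    if score > best_score:
        best_params, best_score = params, score

print(best_params, best_score)
```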

Hyperparameter tuning algorithms

  1. Hyperband: Hyperband is a variant of random search, but with an exploration strategy for deciding how much budget to allocate to each configuration (a hedged sketch follows this list). For more information, please see this research article.
  2. Population-Based Training (PBT): This method is a hybrid of the two most widely used search techniques, random search and manual tuning, applied to neural network models. PBT starts by training several neural networks in parallel with random hyperparameters. These networks are not completely independent: information from the rest of the population is used to refine the hyperparameters and to decide which hyperparameter values to try next. For more information on PBT, please review this post.
  3. Bayesian Optimization and HyperBand (BOHB): BOHB combines the Hyperband algorithm with Bayesian optimization.
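As a hedged illustration of the Hyperband idea, the sketch below uses the HyperbandPruner from the Optuna library (covered later in this article); the training loop and score are placeholders, and the point is only to show a trial reporting intermediate results and being stopped early when it looks unpromising.

```python
# A hedged sketch of Hyperband-style early stopping via Optuna's HyperbandPruner.
# The training loop and score are placeholders, not a real model.
import optuna

def objective(trial):
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    score = 0.0
    for epoch in range(20):
        score += lr                       # stand-in for one epoch of training/validation
        trial.report(score, step=epoch)   # report an intermediate result
        if trial.should_prune():          # the pruner stops unpromising trials early
            raise optuna.TrialPruned()
    return score

study = optuna.create_study(direction="maximize",
                            pruner=optuna.pruners.HyperbandPruner())
study.optimize(objective, n_trials=30)
print(study.best_params)
```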

Important Tools & Libraries for Hyperparameter Tuning

Some of the best Hyperparameter Optimization libraries are:

  1. Scikit-learn (grid search, random search)
  2. Hyperopt
  3. Scikit-Optimize
  4. Optuna
  5. Ray.tune

Scikit-learn

Scikit-learn has implementations for grid search and random search and is a good place to start if you are building models with sklearn. For both methods, scikit-learn trains and evaluates a model in a k-fold cross-validation setting over various parameter choices and returns the best model.

Specifically:

  • Random search: RandomizedSearchCV runs the search over a given number of random parameter combinations
  • Grid search: GridSearchCV runs the search over all parameter sets in the grid
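Here is a minimal sketch of random search with scikit-learn; the estimator, parameter ranges and dataset are illustrative choices only.

```python
# A minimal sketch of random search with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

# Illustrative search space for a random forest.
param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
    "min_samples_split": [2, 5, 10],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=10,      # number of random combinations to try
    cv=5,           # 5-fold cross-validation
    random_state=0,
)
search.fit(X, y)

print(search.best_params_, search.best_score_)
```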

Tuning models with scikit-learn is a good start, but there are better options out there, and they often include a random search strategy anyway.

Hyperopt

One of the most common packages for hyperparameter tuning is Hyperopt. Hyperopt lets the user describe a search space in which they expect the best values to lie, which allows the algorithms in Hyperopt to search more efficiently.
Hyperopt currently implements three algorithms:
1. Random Search
2. Tree of Parzen Estimators (TPE)
3. Adaptive TPE
To use Hyperopt, you first describe:
1. the objective function to minimise
2. the search space
3. the database (trials object) in which to store all the point evaluations of the search
4. the search algorithm to use
This guide teaches you how to structure the code and use Hyperopt to get the best hyperparameters.
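Putting those four pieces together, here is a minimal Hyperopt sketch; the objective function and search space are illustrative stand-ins for a real training loop.

```python
# A minimal Hyperopt sketch: objective, search space, trials database, search algorithm.
from hyperopt import fmin, tpe, hp, Trials

def objective(params):
    # In practice this would train a model and return a validation loss.
    x = params["x"]
    return (x - 3) ** 2

space = {"x": hp.uniform("x", -10, 10)}   # search space

trials = Trials()                          # stores every evaluated point
best = fmin(
    fn=objective,        # objective function to minimise
    space=space,         # search space
    algo=tpe.suggest,    # search algorithm (TPE)
    max_evals=50,
    trials=trials,
)
print(best)
```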

Scikit-optimize

Scikit-optimize uses a sequential model-based optimisation algorithm to find good solutions for hyperparameter search problems in less time.
Scikit-optimize offers many features beyond hyperparameter optimisation, for example:
1. storing and loading optimisation results,
2. convergence plots,
3. comparing surrogate models
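Here is a minimal scikit-optimize sketch using gp_minimize; the quadratic objective is an illustrative stand-in for a validation loss.

```python
# A minimal scikit-optimize sketch using Gaussian-process-based optimisation.
from skopt import gp_minimize
from skopt.space import Real

def objective(params):
    x, = params
    return (x - 3) ** 2   # stand-in for a validation loss

result = gp_minimize(
    objective,
    dimensions=[Real(-10.0, 10.0, name="x")],  # search space
    n_calls=30,                                 # number of evaluations
    random_state=0,
)
print(result.x, result.fun)
```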

Optuna

Optuna uses the historical record of trials to decide which region of the search space is promising, and thus finds the optimal parameters in a minimal amount of time. It also has a pruning mechanism that automatically stops unpromising trials in the early stages of training. Some of the most important features of Optuna are (a minimal sketch follows this list):

  1. Lightweight, scalable and platform-agnostic architecture
  2. Pythonic search spaces
  3. Efficient optimization algorithms
  4. Easy parallelization
  5. Quick visualisation
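Here is a minimal Optuna sketch; the conditional, Pythonic search space and the placeholder scores are illustrative only.

```python
# A minimal Optuna sketch with a Pythonic (conditional) search space.
import optuna

def objective(trial):
    # Hyperparameters are sampled with plain Python code inside the objective,
    # so conditional search spaces are natural.
    classifier = trial.suggest_categorical("classifier", ["svm", "random_forest"])
    if classifier == "svm":
        c = trial.suggest_float("C", 1e-3, 1e3, log=True)
        score = 1.0 / (1.0 + abs(c - 1.0))      # stand-in for cross-validation accuracy
    else:
        depth = trial.suggest_int("max_depth", 2, 32)
        score = 1.0 / (1.0 + abs(depth - 8))    # stand-in for cross-validation accuracy
    return score

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```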

Ray Tune

Ray Tune is a popular choice for hyperparameter tuning and experimentation at any scale. Ray uses the power of distributed computing to speed up hyperparameter optimization and implements several state-of-the-art search algorithms at scale. The main features of Ray Tune are (a minimal sketch follows this list):

  1. Distributed asynchronous optimization out of the box, which makes it quick to scale.
  2. Support for SOTA algorithms such as ASHA, BOHB and Population Based Training.
  3. Integration with MLflow and TensorBoard.
  4. Support for a wide range of frameworks, such as sklearn, XGBoost, TensorFlow and PyTorch.
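Here is a minimal Ray Tune sketch using the tune.run API that was current at the time of writing (newer Ray versions favour tune.Tuner); the trainable function and search space are illustrative.

```python
# A minimal Ray Tune sketch (older tune.run API).
from ray import tune

def trainable(config):
    # In practice this would train a model and report a validation metric.
    score = (config["x"] - 3) ** 2
    tune.report(loss=score)

analysis = tune.run(
    trainable,
    config={"x": tune.uniform(-10.0, 10.0)},  # search space
    num_samples=20,                           # number of trials
    metric="loss",
    mode="min",
)
print(analysis.best_config)
```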

Keras Tuner

Keras Tuner is a library that helps you pick the right set of hyperparameters for your TensorFlow models. When you build a model for hyperparameter tuning, you describe the hyperparameter search space in addition to the model architecture.

The model you set up for hyperparameter tuning is called a hypermodel. There are two approaches to defining a hypermodel:

  1. By using a model-builder function
  2. By subclassing the HyperModel class of the Keras Tuner API

You can also use two predefined HyperModel classes, HyperXception and HyperResNet, for computer vision applications.
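Here is a minimal Keras Tuner sketch using a model-builder function; the architecture, search ranges and directory names are illustrative, and the search call is commented out because it needs real training data.

```python
# A minimal Keras Tuner sketch with a model-builder function.
import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        # The number of units is part of the search space.
        keras.layers.Dense(hp.Int("units", min_value=32, max_value=256, step=32),
                           activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    # The learning rate is also part of the search space.
    lr = hp.Choice("learning_rate", values=[1e-2, 1e-3, 1e-4])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=10,
                        directory="tuning_dir", project_name="demo")
# tuner.search(x_train, y_train, epochs=5, validation_split=0.2)
# best_model = tuner.get_best_models(1)[0]
```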

Hyperparameter Tuning Resources and Examples

Random forest

XGBoost

LightGBM

Catboost

Keras

Pytorch

About the Author: Arpit Bhushan Sharma (B.Tech, 2016–2020) Electrical & Electronics Engineering, Dr APJ Abdul Kalam Technical University, Lucknow | Patent Analyst — Lakshmikumaran & Sridharan Attorney | Microsoft Student Partner (Beta)| Student Member R10 IEEE | Student Member PELS/PES | E-mail: bhushansharmaarpit@gmail.com

If you really like this article, please do me the honour of commenting and sharing it (with citation) to keep me motivated.
