Introduction to Machine Learning: Hyperparameter Tuning - Basic Techniques (PowerPoint presentation)


  1. Introduction to Machine Learning: Hyperparameter Tuning - Basic Techniques (compstat-lmu.github.io/lecture_i2ml)

  2. GRID SEARCH: A simple technique that is still quite popular; it tries all HP combinations on a multi-dimensional discretized grid. For each hyperparameter, a finite set of candidates is predefined; then we simply search all possible combinations in arbitrary order. [Figure: grid search over 10x10 points; Hyperparameter 1 and Hyperparameter 2 each range from -10 to 10, with points shaded by test accuracy from 0.5 to 0.8.] © Introduction to Machine Learning – 1 / 6

  3. GRID SEARCH: Advantages: very easy to implement; all parameter types possible; parallelizing the computation is trivial. Disadvantages: scales badly (combinatorial explosion); inefficient (searches large irrelevant areas); arbitrary (which values / which discretization?).
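The exhaustive enumeration described on the grid-search slides can be sketched in a few lines of Python. The `objective` function and the candidate grid below are made up for illustration; in practice the objective would be a cross-validated score:

```python
import itertools

def grid_search(objective, grid):
    """Evaluate every combination of the predefined candidate values."""
    best_score, best_params = float("-inf"), None
    names = list(grid)
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = objective(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy objective standing in for a CV score: peaks at lr=0.1, depth=5.
def objective(p):
    return -abs(p["lr"] - 0.1) - 0.01 * abs(p["depth"] - 5)

grid = {"lr": [0.001, 0.01, 0.1, 1.0], "depth": [3, 5, 7]}
best, score = grid_search(objective, grid)
print(best)  # {'lr': 0.1, 'depth': 5}
```

Note how the cost is the product of the candidate-set sizes (here 4 x 3 = 12 evaluations), which is the combinatorial explosion the slide warns about; since every evaluation is independent, they could also run in parallel.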

  4. RANDOM SEARCH: A small variation of grid search: sample uniformly from the region of interest. [Figure: random search over 100 points; Hyperparameter 1 and Hyperparameter 2 each range from -10 to 10, with points shaded by test accuracy from 0.5 to 0.8.]

  5. RANDOM SEARCH: Advantages: like grid search, it is very easy to implement, all parameter types are possible, and parallelization is trivial. Anytime algorithm: we can stop the search whenever our computation budget is exhausted, or continue until we reach our performance goal. No discretization: each individual parameter is tried with a different value every time. Disadvantages: see the next slide.

  6. RANDOM SEARCH: Disadvantages: inefficient (many evaluations in areas with a low likelihood of improvement); scales badly (high-dimensional hyperparameter spaces need many samples to cover).
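A minimal sketch of the uniform sampling described on these slides; the ranges and objective are illustrative stand-ins for a real tuning problem. The anytime property falls out of the structure: the loop can be cut off after any iteration and still return the best configuration seen so far.

```python
import random

def random_search(objective, space, n_iter=100, seed=0):
    """Uniformly sample each hyperparameter from its (low, high) range."""
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(n_iter):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Illustrative 2-D problem matching the plot's axes: best near the origin.
space = {"hp1": (-10.0, 10.0), "hp2": (-10.0, 10.0)}

def objective(p):
    return 0.8 - 0.002 * (p["hp1"] ** 2 + p["hp2"] ** 2)

best, score = random_search(objective, space, n_iter=100)
```

Unlike grid search, no discretization is chosen up front: every sampled configuration uses fresh values for every parameter, so each hyperparameter is probed at up to `n_iter` distinct values rather than at a handful of grid points.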

  7. TUNING EXAMPLE: Tuning a random forest with random search and 5-fold CV on the sonar data set for AUC:

     Parameter      Type     Min  Max
     num.trees      integer    3  500
     mtry           integer    5   50
     min.node.size  integer   10  100

     [Figure: maximal AUC (roughly 0.91 to 0.93) as a function of up to 150 iterations of random search.]
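The slide's experiment could be sketched as follows. The `surrogate_auc` function below is a made-up stand-in for the real objective (the 5-fold cross-validated AUC of a random forest on the sonar data), used only so the sketch runs on its own; the integer ranges are the ones from the table, and `trace` reproduces the kind of running-maximum curve plotted on the slide.

```python
import random

def tune_random_forest_stub(n_iter=150, seed=1):
    """Random search over the slide's integer HP ranges, recording the
    running maximum of the score after each iteration."""
    rng = random.Random(seed)
    space = {"num.trees": (3, 500), "mtry": (5, 50), "min.node.size": (10, 100)}

    def surrogate_auc(p):
        # Hypothetical stand-in for the 5-fold CV AUC on the sonar data.
        return 0.93 - 0.02 * abs(p["mtry"] - 8) / 45 - 0.01 * p["min.node.size"] / 100

    best = float("-inf")
    trace = []  # running maximum, as in the "Maximal AUC" plot
    for _ in range(n_iter):
        params = {k: rng.randint(lo, hi) for k, (lo, hi) in space.items()}
        best = max(best, surrogate_auc(params))
        trace.append(best)
    return trace

trace = tune_random_forest_stub()
```

Because only the best-so-far value is recorded, the curve is non-decreasing and flattens as improvements become rarer, which matches the shape of the plotted example.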
