
Machine Learning Basics: Lecture slides for Chapter 5 of Deep Learning (PowerPoint PPT Presentation)



  1. Machine Learning Basics: Lecture slides for Chapter 5 of Deep Learning, www.deeplearningbook.org, Ian Goodfellow, 2016-09-26

  2. Linear Regression: a linear regression example (y versus x1) and the corresponding optimization of w (MSE (train) versus w1). Figure 5.1 (Goodfellow 2016)
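
The slide shows a linear model fit by minimizing the training MSE. Below is a minimal sketch of that procedure on synthetic data, using the closed-form least-squares solution; the data, seed, and variable names are illustrative assumptions, not taken from the figure.

```python
import numpy as np

# Synthetic 1-D regression problem (illustrative, not the figure's data).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 1.5 * x + 0.3 + rng.normal(scale=0.2, size=100)

# Design matrix with a bias column, so w = [slope, intercept].
X = np.column_stack([x, np.ones_like(x)])

# Minimize MSE(w) = ||Xw - y||^2 / n; lstsq gives the least-squares solution.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

mse_train = np.mean((X @ w - y) ** 2)
print("w =", w, " train MSE =", mse_train)
```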

  3. Underfitting and Overfitting in Polynomial Estimation: three polynomial fits to the same data, plotted against x, ranging from underfitting to overfitting. Figure 5.2 (Goodfellow 2016)
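
The effect on the slide can be reproduced on synthetic data by fitting polynomials of increasing degree to the same small training set; the degrees, sample sizes, and ground-truth function below are illustrative choices, not the ones behind the figure.

```python
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: np.sin(3 * x)               # illustrative ground-truth function
x_train = rng.uniform(-1.0, 1.0, size=12)
x_test = rng.uniform(-1.0, 1.0, size=200)
y_train = true_f(x_train) + rng.normal(scale=0.1, size=x_train.shape)
y_test = true_f(x_test) + rng.normal(scale=0.1, size=x_test.shape)

# Degree 1 underfits, degree 4 is roughly appropriate, degree 9 overfits 12 points.
for degree in (1, 4, 9):
    coeffs = np.polyfit(x_train, y_train, deg=degree)   # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}  test MSE {test_mse:.4f}")
```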

  4. Generalization and Capacity: training error and generalization error as a function of capacity, with the underfitting zone, the overfitting zone, the generalization gap, and the optimal capacity marked. Figure 5.3 (Goodfellow 2016)
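
The shape of this curve can be approximated numerically by sweeping capacity (taken here to be polynomial degree, an assumption for illustration) and tracking the train/test gap on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(1)
true_f = lambda x: np.sin(3 * x)
x_train = rng.uniform(-1, 1, size=30)
x_test = rng.uniform(-1, 1, size=1000)
y_train = true_f(x_train) + rng.normal(scale=0.1, size=x_train.shape)
y_test = true_f(x_test) + rng.normal(scale=0.1, size=x_test.shape)

for degree in range(1, 16):                          # capacity = polynomial degree
    c = np.polyfit(x_train, y_train, deg=degree)
    train_err = np.mean((np.polyval(c, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(c, x_test) - y_test) ** 2)
    # Training error keeps falling with capacity; the gap grows past the optimum.
    print(f"capacity {degree:2d}: train {train_err:.4f}  test {test_err:.4f}  "
          f"gap {test_err - train_err:.4f}")
```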

  5. Training Set Size: train and test error (MSE) for a quadratic model and for models of optimal capacity, together with the Bayes error, as the number of training examples grows; a second panel shows the optimal capacity (polynomial degree) versus the number of training examples. Figure 5.4 (Goodfellow 2016)
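
A sketch of a learning curve in the same spirit: as the training set grows, the test error of a fixed quadratic model plateaus above the Bayes error, while a higher-capacity model approaches it. The ground-truth function, noise level, and degrees below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
true_f = lambda x: np.sin(3 * x)                # illustrative non-quadratic ground truth
noise = 0.3                                     # Bayes MSE is noise**2 = 0.09

x_test = rng.uniform(-1, 1, size=5000)
y_test = true_f(x_test) + rng.normal(scale=noise, size=x_test.shape)

for n in (20, 100, 1000, 10000):                # growing training set size
    x_tr = rng.uniform(-1, 1, size=n)
    y_tr = true_f(x_tr) + rng.normal(scale=noise, size=n)
    for deg, name in ((2, "quadratic"), (9, "higher capacity")):
        c = np.polyfit(x_tr, y_tr, deg=deg)
        test_mse = np.mean((np.polyval(c, x_test) - y_test) ** 2)
        print(f"n={n:5d}  {name:15s}  test MSE {test_mse:.3f}  (Bayes ~ {noise**2:.2f})")
```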

  6. Weight Decay: three fits of y(x) showing underfitting (excessive λ), an appropriate amount of weight decay (medium λ), and overfitting (λ → 0). Figure 5.5 (Goodfellow 2016)
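
Weight decay adds a penalty λ‖w‖² to the training objective. A minimal ridge-regression sketch on polynomial features, assuming illustrative λ values and synthetic data (and, for simplicity, penalizing all coefficients including the bias):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=20)
y = np.sin(3 * x) + rng.normal(scale=0.1, size=x.shape)

degree = 9
X = np.vander(x, N=degree + 1)                  # polynomial feature matrix

# Excessive, medium, and zero weight decay, mirroring the three panels.
for lam in (1e2, 1e-3, 0.0):
    # Closed-form ridge solution: minimize ||Xw - y||^2 + lam * ||w||^2.
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    train_mse = np.mean((X @ w - y) ** 2)
    print(f"lambda={lam:g}: train MSE {train_mse:.4f}  ||w|| = {np.linalg.norm(w):.2f}")
```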

  7. Bias and Variance: bias, variance, and generalization error as a function of capacity, with the underfitting zone, the overfitting zone, and the optimal capacity marked. Figure 5.6 (Goodfellow 2016)
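
Bias and variance of an estimator can be estimated empirically by refitting it on many freshly sampled training sets and examining its predictions at a fixed input. A Monte Carlo sketch; the estimator (polynomial fit), evaluation point, and trial counts are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
true_f = lambda x: np.sin(3 * x)
x0 = 0.5                                        # fixed input at which to study the estimator
trials, n = 500, 20

for degree in (1, 3, 9):                        # low, medium, high capacity
    preds = np.empty(trials)
    for t in range(trials):
        x = rng.uniform(-1, 1, size=n)          # fresh training set each trial
        y = true_f(x) + rng.normal(scale=0.1, size=n)
        preds[t] = np.polyval(np.polyfit(x, y, deg=degree), x0)
    bias2 = (preds.mean() - true_f(x0)) ** 2    # squared bias at x0
    var = preds.var()                           # variance at x0
    print(f"degree {degree}: bias^2 {bias2:.4f}  variance {var:.4f}")
```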

  8. Decision Trees: a decision tree whose nodes are identified by binary strings (0, 1, 00, 01, 010, ...) together with the axis-aligned partition of the input space it induces. Figure 5.7 (Goodfellow 2016)
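
A decision tree partitions the input space with axis-aligned threshold tests. The sketch below learns only a depth-one tree (a decision stump) by exhaustive search, the simplest case of the greedy procedure; the toy dataset and the misclassification-count criterion are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy 2-D dataset: label is 1 when the first feature exceeds 0.3 (illustrative).
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] > 0.3).astype(int)

def fit_stump(X, y):
    """Greedy depth-1 tree: pick the (feature, threshold) with fewest misclassifications."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            pred = (X[:, j] > thr).astype(int)
            errors = min(np.sum(pred != y), np.sum((1 - pred) != y))  # allow flipped leaves
            if best is None or errors < best[0]:
                flip = np.sum(pred != y) > np.sum((1 - pred) != y)
                best = (errors, j, thr, flip)
    return best[1:]

feature, threshold, flip = fit_stump(X, y)
pred = (X[:, feature] > threshold).astype(int)
if flip:
    pred = 1 - pred
print(f"split on x{feature + 1} > {threshold:.3f}, train accuracy {np.mean(pred == y):.3f}")
```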

  9. Principal Components Analysis: data plotted in the original coordinates (x1, x2) and in the principal-component coordinates (z1, z2). Figure 5.8 (Goodfellow 2016)
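
PCA rotates centered data into the basis of directions of maximum variance. A sketch using the SVD on synthetic correlated Gaussian data; the covariance matrix and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
# Correlated 2-D Gaussian data (illustrative), like the left panel of the figure.
cov = np.array([[3.0, 2.0],
                [2.0, 2.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=500)

Xc = X - X.mean(axis=0)                    # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt.T                              # coordinates z in the principal-component basis

# In the new coordinates the features are decorrelated and ordered by variance.
print("variance along z1, z2:", Z.var(axis=0))
print("covariance of z:\n", np.cov(Z, rowvar=False).round(3))
```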

  10. Curse of Dimensionality: as the number of relevant dimensions of the data increases, the number of configurations of interest grows exponentially. Figure 5.9 (Goodfellow 2016)
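
One way to see the effect numerically: if each input axis is divided into a fixed number of bins, the number of distinct cells (configurations a model may need data for) grows exponentially with the number of dimensions. A small sketch with an assumed bin count:

```python
bins_per_axis = 10                         # assumed resolution along each axis
for d in (1, 2, 3, 10, 100):
    cells = bins_per_axis ** d             # distinct cells a model may need data for
    print(f"{d:3d} dimensions -> {cells:.3e} cells")
```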

  11. Nearest Neighbor Figure 5.10 (Goodfellow 2016)
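
A 1-nearest-neighbor predictor copies the output of the closest training example. A minimal numpy sketch on a toy 2-D classification problem; the dataset and the use of Euclidean distance are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
X_train = rng.uniform(-1, 1, size=(100, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)   # toy labels

def predict_1nn(X_train, y_train, X_query):
    """Label each query point with the label of its nearest training point."""
    # Pairwise squared Euclidean distances, shape (n_query, n_train).
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[np.argmin(d2, axis=1)]

X_test = rng.uniform(-1, 1, size=(200, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
acc = np.mean(predict_1nn(X_train, y_train, X_test) == y_test)
print("1-NN test accuracy:", acc)
```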

  12. Manifold Learning: two-dimensional data concentrated near a one-dimensional manifold. Figure 5.11 (Goodfellow 2016)

  13. Uniformly Sampled Images Figure 5.12 (Goodfellow 2016)

  14. QMUL Dataset Figure 5.13 (Goodfellow 2016)
