

  1. CS480/680 Machine Learning Lecture 3: May 13, 2019. Linear Regression. Readings: [RN] Sec. 18.6.1, [HTF] Sec. 2.3.1, [D] Sec. 7.6, [B] Sec. 3.1, [M] Sec. 1.4.5. University of Waterloo, CS480/680 Spring 2019, Pascal Poupart

  2. Linear model for regression
     • Simple form of regression
     • Picture:

  3. Problem
     • Data: {(x^(1), y^(1)), (x^(2), y^(2)), ..., (x^(N), y^(N))}
       – x = <x_1, x_2, ..., x_d>: input vector
       – y: target (continuous value)
     • Problem: find a hypothesis h that maps x to y
       – Assume that h is linear: h(x; w) = w_0 + w_1 x_1 + ... + w_d x_d = w^T x̄
     • Objective: minimize some loss function
       – Euclidean loss: L_2(w) = (1/2) Σ_{n=1}^N (h(x̄^(n); w) − y^(n))^2
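
     These definitions translate directly to NumPy; a minimal sketch (the names h and euclidean_loss are illustrative, not from the slides; X_bar stacks the augmented inputs x̄^(n) = (1, x_1, ..., x_d) as rows):

         import numpy as np

         def h(x_bar, w):
             # Linear hypothesis h(x; w) = w^T x_bar
             return x_bar @ w

         def euclidean_loss(X_bar, y, w):
             # L_2(w) = (1/2) * sum_n (h(x_bar_n; w) - y_n)^2
             residuals = X_bar @ w - y       # one residual per training instance
             return 0.5 * np.sum(residuals ** 2)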

  4. Optimization
     • Find the best w that minimizes the Euclidean loss:
       w* = argmin_w (1/2) Σ_{n=1}^N (y^(n) − w^T x̄^(n))^2
     • Convex optimization problem ⟹ unique optimum (global)
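
     Convexity means even plain gradient descent reaches the global optimum; a sketch, with an assumed step size and iteration count that are not from the slides:

         import numpy as np

         def gradient_descent(X_bar, y, lr=0.01, steps=5000):
             # Gradient of the Euclidean loss: sum_n (w^T x_n - y_n) x_n = X^T (X w - y)
             w = np.zeros(X_bar.shape[1])
             for _ in range(steps):
                 w -= lr * (X_bar.T @ (X_bar @ w - y))
             return w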

  5. Solution
     • Let x̄ = (1, x_1, ..., x_d)^T, then minimize (1/2) Σ_{n=1}^N (y^(n) − w^T x̄^(n))^2
     • Find w* by setting the derivative to 0:
       ∂L_2/∂w_j = Σ_{n=1}^N (y^(n) − w^T x̄^(n)) (−x̄_j^(n)) = 0   ∀j
       ⟹ Σ_{n=1}^N (y^(n) − w^T x̄^(n)) x̄^(n) = 0
     • This is a linear system in w, therefore we rewrite it as A w = b,
       where A = Σ_{n=1}^N x̄^(n) (x̄^(n))^T and b = Σ_{n=1}^N y^(n) x̄^(n)
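
     In matrix form the sums collapse to products, so the system can be assembled and solved in a few lines; a sketch under the notation above:

         import numpy as np

         def least_squares(X_bar, y):
             A = X_bar.T @ X_bar           # A = sum_n x_bar_n x_bar_n^T
             b = X_bar.T @ y               # b = sum_n y_n x_bar_n
             return np.linalg.solve(A, b)  # solve A w = b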

  6. Solution
     • If the training instances span ℝ^(d+1) then A is invertible: w = A^(−1) b
     • In practice it is faster to solve the linear system A w = b directly instead of inverting A:
       – Gaussian elimination
       – Conjugate gradient
       – Iterative methods
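
     To make the contrast concrete, a sketch on a small placeholder system (the matrix values are arbitrary, not lecture data); conjugate gradient applies because A = Σ x̄ x̄^T is symmetric positive definite whenever it is invertible:

         import numpy as np
         from scipy.sparse.linalg import cg

         A = np.array([[2.0, 1.0],
                       [1.0, 1.0]])        # placeholder standing in for sum_n x_bar_n x_bar_n^T
         b = np.array([3.0, 2.0])

         w_direct = np.linalg.solve(A, b)  # Gaussian elimination (LU): preferred
         w_inverse = np.linalg.inv(A) @ b  # explicit inverse: slower, numerically worse
         w_cg, info = cg(A, b)             # iterative conjugate gradient; info == 0 on success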

  7. Picture

  8. Regularization
     • The least squares solution may not be stable
       – i.e., a slight perturbation of the input may cause a dramatic change in the output
       – a form of overfitting

  9. Example 1
     • Training data: x̄^(1) = (1, 0)^T, x̄^(2) = (1, ε)^T, y^(1) = 1, y^(2) = 1
     • A =
     • A^(−1) =     b =
     • w =
     (filled in during lecture; see the numerical check below)
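
     A numerical check of this example, assuming the data as reconstructed above with ε = 10^(−4) (any small positive ε works):

         import numpy as np

         eps = 1e-4
         X_bar = np.array([[1.0, 0.0],
                           [1.0, eps]])    # rows are x_bar^(1), x_bar^(2)
         y = np.array([1.0, 1.0])

         A = X_bar.T @ X_bar               # [[2, eps], [eps, eps**2]]: nearly singular
         b = X_bar.T @ y                   # [2, eps]
         w = np.linalg.solve(A, b)
         print(w)                          # [1, 0]: the constant function h(x) = 1 fits both points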

  10. Example 2
     • Training data: x̄^(1) = (1, 0)^T, x̄^(2) = (1, ε)^T, y^(1) = 1 + ε, y^(2) = 1
     • A =
     • A^(−1) =     b =
     • w =
     (same inputs as Example 1 with y^(1) perturbed by ε; see the numerical check below)
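
     Repeating the check with the perturbed target shows the instability from slide 8: an ε-sized change in one label moves w_1 from 0 all the way to −1:

         import numpy as np

         eps = 1e-4
         X_bar = np.array([[1.0, 0.0],
                           [1.0, eps]])
         y = np.array([1.0 + eps, 1.0])    # only y^(1) changed, by eps

         w = np.linalg.solve(X_bar.T @ X_bar, X_bar.T @ y)
         print(w)                          # approximately [1 + eps, -1]: a dramatic change in w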

  11. Picture

  12. Regularization
     • Idea: favor smaller weights w via a penalty term
     • Tikhonov regularization: add ‖w‖^2
     • Ridge regression:
       w* = argmin_w (1/2) Σ_{n=1}^N (y^(n) − w^T x̄^(n))^2 + (λ/2) ‖w‖^2
       where λ is a weight to adjust the importance of the penalty
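
     The penalty only adds λ to the diagonal of the linear system (next slide); a sketch, here penalizing every weight including the bias w_0 for simplicity:

         import numpy as np

         def ridge(X_bar, y, lam):
             d = X_bar.shape[1]
             A = X_bar.T @ X_bar
             b = X_bar.T @ y
             return np.linalg.solve(lam * np.eye(d) + A, b)  # (lam*I + A) w = b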

  13. Regularization
     • Solution: (λI + A) w = b
     • Notes
       – Without regularization: the eigenvalues of the linear system may be arbitrarily close to 0, and the inverse may have arbitrarily large eigenvalues.
       – With Tikhonov regularization, the eigenvalues of the linear system are ≥ λ and therefore bounded away from 0. Similarly, the eigenvalues of the inverse are bounded above by 1/λ.
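
     These bounds are easy to verify numerically on the near-degenerate data from Example 1 (λ = 0.1 is an arbitrary illustrative value):

         import numpy as np

         lam = 0.1
         X_bar = np.array([[1.0, 0.0],
                           [1.0, 1e-4]])
         M = lam * np.eye(2) + X_bar.T @ X_bar   # lam*I + A, with A positive semidefinite
         eigs = np.linalg.eigvalsh(M)            # eigenvalues of a symmetric matrix
         print(eigs.min() >= lam)                # True: bounded away from 0 by lam
         print((1.0 / eigs).max() <= 1.0 / lam)  # True: inverse eigenvalues at most 1/lam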

  14. Regularized Examples
     • Example 1     • Example 2
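
     Rerunning both examples with ridge regression shows the stabilizing effect the pictures illustrate: with the same ε-perturbation of y^(1), the two solutions now stay close (λ = 0.1 again an arbitrary choice):

         import numpy as np

         def ridge(X_bar, y, lam):
             d = X_bar.shape[1]
             return np.linalg.solve(lam * np.eye(d) + X_bar.T @ X_bar, X_bar.T @ y)

         eps, lam = 1e-4, 0.1
         X_bar = np.array([[1.0, 0.0],
                           [1.0, eps]])
         print(ridge(X_bar, np.array([1.0, 1.0]), lam))        # Example 1: w ~ [0.952, 0.00005]
         print(ridge(X_bar, np.array([1.0 + eps, 1.0]), lam))  # Example 2: nearly the same w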
