

  1. Data Mining Techniques CS 6220 - Section 3 - Fall 2016 Lecture 2: Regression Jan-Willem van de Meent (credit: Yijun Zhao, Marc Toussaint, Bishop)

  2. Administrativa
     Instructor: Jan-Willem van de Meent, Email: j.vandemeent@northeastern.edu, Phone: +1 617 373-7696, Office Hours: 478 WVH, Wed 1.30pm - 2.30pm
     Teaching Assistants: Yuan Zhong, E-mail: yzhong@ccs.neu.edu, Office Hours: WVH 462, Wed 3pm - 5pm; Kamlendra Kumar, E-mail: kumark@zimbra.ccs.neu.edu, Office Hours: WVH 462, Fri 3pm - 5pm

  3. Administrativa Course Website http://www.ccs.neu.edu/course/cs6220f16/sec3/ Piazza https://piazza.com/northeastern/fall2016/cs622003/home Project Guidelines (Vote next week) http://www.ccs.neu.edu/course/cs6220f16/sec3/project/

  4. Question What would you like to get out of this course?

  5. Linear Regression

  6. Regression Examples Features x ⇒ Continuous Value y • {age, major, gender, race} ⇒ GPA • {income, credit score, profession} ⇒ Loan Amount • {college, major, GPA} ⇒ Future Income

  7. Example: Boston Housing Data UC Irvine Machine Learning Repository (good source for project datasets) https://archive.ics.uci.edu/ml/datasets/Housing

  8. Example: Boston Housing Data
     1. CRIM: per capita crime rate by town
     2. ZN: proportion of residential land zoned for lots over 25,000 sq.ft.
     3. INDUS: proportion of non-retail business acres per town
     4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
     5. NOX: nitric oxides concentration (parts per 10 million)
     6. RM: average number of rooms per dwelling
     7. AGE: proportion of owner-occupied units built prior to 1940
     8. DIS: weighted distances to five Boston employment centres
     9. RAD: index of accessibility to radial highways
     10. TAX: full-value property-tax rate per $10,000
     11. PTRATIO: pupil-teacher ratio by town
     12. B: 1000(Bk - 0.63)^2 where Bk is the proportion of African Americans by town
     13. LSTAT: % lower status of the population
     14. MEDV: median value of owner-occupied homes in $1000's

  9. Example: Boston Housing Data CRIM : per capita crime rate by town

  10. Example: Boston Housing Data CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)

  11. Example: Boston Housing Data MEDV : Median value of owner-occupied homes in $1000's

  12. Example: Boston Housing Data N data points, D features

  13. Regression: Problem Setup Given N observations {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)}, learn a function y_i = f(x_i) ∀ i = 1, 2, ..., N, and for a new input x* predict y* = f(x*)

  14. Linear Regression Assume f is a linear combination of D features, y = w_0 + w_1 x_1 + ... + w_D x_D = w^T x, where x = (1, x_1, ..., x_D)^T and w = (w_0, w_1, ..., w_D)^T. For N points we write y = Xw, with X the matrix of stacked inputs. Learning task: Estimate w

  15. Linear Regression

  16. Error Measure Mean Squared Error (MSE): E(w) = (1/N) Σ_{n=1}^{N} (w^T x_n − y_n)² = (1/N) ‖Xw − y‖², where X is the matrix with rows x_1^T, x_2^T, ..., x_N^T and y = (y_1, y_2, ..., y_N)^T
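
To make the error measure concrete, here is a minimal NumPy sketch; the toy data below are invented for the example:

    import numpy as np

    def mse(w, X, y):
        """Mean squared error E(w) = (1/N) * ||Xw - y||^2."""
        residual = X @ w - y
        return (residual @ residual) / len(y)

    # Toy example: 3 points, 2 features (including the constant x_0 = 1).
    X = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, 5.0]])
    y = np.array([5.0, 7.0, 11.0])
    w = np.array([1.0, 2.0])   # candidate weights
    print(mse(w, X, y))        # 0.0: this toy data is exactly linear in w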

  17. Minimizing the Error E(w) = (1/N) ‖Xw − y‖². Setting the gradient to zero: ∇E(w) = (2/N) X^T (Xw − y) = 0 ⇒ X^T X w = X^T y ⇒ w = X†y, where X† = (X^T X)^{-1} X^T is the 'pseudo-inverse' of X

  18. Minimizing the Error (same derivation as the previous slide; the matrix identities used are collected in the Matrix Cookbook, on the course website)

  19. Ordinary Least Squares Construct the matrix X and the vector y from the dataset {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} (each x includes x_0 = 1), stacking rows X = [x_1^T; x_2^T; ...; x_N^T] and y = (y_1, y_2, ..., y_N)^T. Compute X† = (X^T X)^{-1} X^T. Return w = X†y
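
A minimal sketch of this OLS recipe in NumPy on synthetic data; solving the normal equations with np.linalg.solve is used in place of explicitly forming the pseudo-inverse, which is numerically preferable but otherwise equivalent:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: N points, D features, plus a column of ones for x_0 = 1.
    N, D = 100, 3
    X = np.hstack([np.ones((N, 1)), rng.normal(size=(N, D))])
    true_w = np.array([4.0, 2.0, -1.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=N)

    # Normal equations X^T X w = X^T y, i.e. w = (X^T X)^{-1} X^T y.
    w = np.linalg.solve(X.T @ X, X.T @ y)
    print(w)   # close to true_w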

  20. Gradient Descent (figure: contours of E(w) over the weights w_0 and w_1)

  21. Least Mean Squares (a.k.a. gradient descent)
     Initialize the weights w(0) for time t = 0
     for t = 0, 1, 2, ... do
       Compute the gradient g_t = ∇E(w(t))
       Set the direction to move, v_t = −g_t
       Update w(t+1) = w(t) + η v_t
     Iterate until it is time to stop
     Return the final weights w
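
A sketch of the gradient-descent loop above for the squared error; the fixed step size η and the iteration count are arbitrary choices for the example:

    import numpy as np

    def gradient_descent(X, y, eta=0.1, n_iters=1000):
        """Minimize E(w) = (1/N) ||Xw - y||^2 by repeatedly stepping against the gradient."""
        N, D = X.shape
        w = np.zeros(D)                          # w(0)
        for _ in range(n_iters):
            g = (2.0 / N) * X.T @ (X @ w - y)    # gradient g_t at w(t)
            w = w - eta * g                      # w(t+1) = w(t) + eta * v_t with v_t = -g_t
        return w

    rng = np.random.default_rng(0)
    X = np.hstack([np.ones((50, 1)), rng.normal(size=(50, 2))])
    y = X @ np.array([1.0, 2.0, -3.0]) + 0.05 * rng.normal(size=50)
    print(gradient_descent(X, y))   # approaches the least-squares solution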

  22. Question When would you want to use OLS, when LMS?

  23. Computational Complexity Least Mean Squares (LMS) vs. Ordinary Least Squares (OLS)

  24. Computational Complexity Least Mean Squares (LMS): O(ND) per gradient step. Ordinary Least Squares (OLS): forming and solving the normal equations costs roughly O(ND² + D³). OLS is expensive when D is large

  25. Effect of step size

  26. Choosing Stepsize Set the step size proportional to ∇f(x)? Small gradient, small step? Large gradient, large step?

  27. Choosing Stepsize Set the step size proportional to ∇f(x)? Small gradient, small step? Large gradient, large step? Two commonly used techniques: 1. Stepsize adaptation 2. Line search

  28. Stepsize Adaptation
     Input: initial x ∈ R^n, functions f(x) and ∇f(x), initial stepsize α, tolerance θ
     Output: x
     1: repeat
     2:   y ← x − α ∇f(x) / |∇f(x)|
     3:   if f(y) ≤ f(x) then [step is accepted]
     4:     x ← y
     5:     α ← 1.2 α   // increase stepsize
     6:   else [step is rejected]
     7:     α ← 0.5 α   // decrease stepsize
     8:   end if
     9: until |y − x| < θ [perhaps for 10 iterations in sequence]
     (1.2 and 0.5 are "magic numbers")
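
A sketch of this stepsize-adaptation loop in Python, applied to a simple quadratic; the termination test is simplified to a single check rather than "10 iterations in sequence":

    import numpy as np

    def adaptive_gradient_descent(f, grad, x, alpha=1.0, theta=1e-6, max_iters=10_000):
        """Gradient descent with multiplicative stepsize adaptation."""
        for _ in range(max_iters):
            g = grad(x)
            y = x - alpha * g / np.linalg.norm(g)   # proposed step of length alpha
            if f(y) <= f(x):                        # step accepted
                if np.linalg.norm(y - x) < theta:   # step length below tolerance: done
                    return y
                x = y
                alpha *= 1.2                        # increase stepsize
            else:                                   # step rejected
                alpha *= 0.5                        # decrease stepsize
        return x

    # Example: minimize f(x) = ||x||^2 from an arbitrary starting point.
    f = lambda x: float(x @ x)
    grad = lambda x: 2.0 * x
    print(adaptive_gradient_descent(f, grad, np.array([3.0, -4.0])))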

  29. Second Order Methods Compute Hessian matrix of second derivatives

  30. Second Order Methods • Broyden-Fletcher-Goldfarb-Shanno (BFGS) method:
     Input: initial x ∈ R^n, functions f(x), ∇f(x), tolerance θ
     Output: x
     1: initialize H^{-1} = I_n
     2: repeat
     3:   compute Δ = −H^{-1} ∇f(x)
     4:   perform a line search min_α f(x + αΔ)
     5:   Δ ← αΔ
     6:   y ← ∇f(x + Δ) − ∇f(x)
     7:   x ← x + Δ
     8:   update H^{-1} ← (I − Δy^T / (y^T Δ)) H^{-1} (I − yΔ^T / (y^T Δ)) + ΔΔ^T / (y^T Δ)
     9: until ‖Δ‖_∞ < θ
     Memory-limited version: L-BFGS
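
In practice one rarely hand-codes BFGS; here is a sketch using SciPy's limited-memory implementation (L-BFGS) to minimize the linear-regression error, assuming SciPy is available and using synthetic data:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 4))])
    y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.1 * rng.normal(size=200)

    def loss(w):
        r = X @ w - y
        return (r @ r) / len(y)          # E(w) = (1/N) ||Xw - y||^2

    def grad(w):
        return (2.0 / len(y)) * X.T @ (X @ w - y)

    result = minimize(loss, x0=np.zeros(X.shape[1]), jac=grad, method="L-BFGS-B")
    print(result.x)                      # close to the OLS solution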

  31. Stochastic Gradient Descent What if N is really large? Batch gradient descent (evaluates all data) Minibatch gradient descent (evaluates subset) Converges under Robbins-Monro conditions
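
A sketch of minibatch stochastic gradient descent for the same squared-error objective; the batch size and the decaying step-size schedule are arbitrary choices in the spirit of the Robbins-Monro conditions:

    import numpy as np

    def minibatch_sgd(X, y, batch_size=32, n_epochs=50, eta0=0.1):
        """Minibatch SGD on E(w) = (1/N) ||Xw - y||^2 with a decaying step size."""
        rng = np.random.default_rng(0)
        N, D = X.shape
        w = np.zeros(D)
        t = 0
        for _ in range(n_epochs):
            for idx in np.array_split(rng.permutation(N), max(1, N // batch_size)):
                g = (2.0 / len(idx)) * X[idx].T @ (X[idx] @ w - y[idx])  # minibatch gradient
                w -= eta0 / (1.0 + 0.01 * t) * g                         # decaying step size
                t += 1
        return w

    rng = np.random.default_rng(1)
    X = np.hstack([np.ones((500, 1)), rng.normal(size=(500, 2))])
    y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=500)
    print(minibatch_sgd(X, y))   # approaches the least-squares solution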

  32. Probabilistic Interpretation

  33. Normal Distribution (figure: normal, right-skewed, left-skewed, and random distributions)

  34. Normal Distribution y ∼ N(μ, σ²) ⇒ Density: N(y | μ, σ²) = (1 / √(2πσ²)) exp(−(y − μ)² / (2σ²))

  35. Central Limit Theorem (figure: distribution of the mean of N = 1, 2, 10 i.i.d. samples) If y_1, …, y_n are 1. independent identically distributed (i.i.d.) and 2. have finite variance 0 < σ_y² < ∞, then the distribution of their sample mean approaches a normal distribution as n grows
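
A quick simulation of the effect shown in the figure: means of N i.i.d. Uniform(0, 1) draws concentrate around 0.5 with standard deviation shrinking like 1/√(12N), and their distribution looks increasingly Gaussian:

    import numpy as np

    rng = np.random.default_rng(0)
    for N in (1, 2, 10):
        # 100,000 sample means, each computed from N i.i.d. Uniform(0, 1) draws.
        means = rng.uniform(0.0, 1.0, size=(100_000, N)).mean(axis=1)
        print(f"N={N:2d}  mean={means.mean():.3f}  empirical std={means.std():.3f}  "
              f"predicted std={np.sqrt(1 / (12 * N)):.3f}")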

  36. Multivariate Normal Density: N(y | μ, Σ) = (2π)^{-D/2} |Σ|^{-1/2} exp(−(1/2) (y − μ)^T Σ^{-1} (y − μ))

  37. Regression: Probabilistic Interpretation Model the observations as y_n = w^T x_n + ε_n with Gaussian noise ε_n ∼ N(0, σ²)

  38. Regression: Probabilistic Interpretation Equivalently, each observation has likelihood p(y_n | x_n, w, σ²) = N(y_n | w^T x_n, σ²)

  39. Regression: Probabilistic Interpretation Joint probability of N independent data points: p(y | X, w, σ²) = ∏_{n=1}^{N} N(y_n | w^T x_n, σ²)

  40. Regression: Probabilistic Interpretation Log joint probability of N independent data points: log p(y | X, w, σ²) = Σ_{n=1}^{N} log N(y_n | w^T x_n, σ²)

  41. Regression: Probabilistic Interpretation Log joint probability of N independent data points: log p(y | X, w, σ²) = −(N/2) log(2πσ²) − (1/(2σ²)) Σ_{n=1}^{N} (y_n − w^T x_n)²

  42. Regression: Probabilistic Interpretation Log joint probability of N independent data points: as a function of w, maximizing the log probability amounts to minimizing the sum of squared errors Σ_{n=1}^{N} (y_n − w^T x_n)²

  43. Regression: Probabilistic Interpretation Log joint probability of N independent data points: Maximum Likelihood. The maximum likelihood estimate of w therefore coincides with the least-squares solution w = (X^T X)^{-1} X^T y
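
A small numerical check of this point: minimizing the Gaussian negative log likelihood over w (with σ held fixed) recovers the ordinary least squares solution; the data are synthetic and SciPy's optimizer is used for convenience:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = np.hstack([np.ones((100, 1)), rng.normal(size=(100, 2))])
    y = X @ np.array([1.0, 2.0, -1.0]) + 0.3 * rng.normal(size=100)
    sigma = 0.3   # noise standard deviation, assumed known here

    def neg_log_likelihood(w):
        r = y - X @ w
        return 0.5 * len(y) * np.log(2 * np.pi * sigma**2) + (r @ r) / (2 * sigma**2)

    def grad(w):
        return -(X.T @ (y - X @ w)) / sigma**2

    w_ml = minimize(neg_log_likelihood, x0=np.zeros(3), jac=grad).x
    w_ols = np.linalg.solve(X.T @ X, X.T @ y)
    print(np.allclose(w_ml, w_ols, atol=1e-4))   # True: same minimizer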

  44. Basis function regression Linear regression: y = w_0 + w_1 x_1 + ... + w_D x_D = w^T x. Basis function regression: y = Σ_j w_j φ_j(x) = w^T φ(x) for fixed basis functions φ_j. Polynomial regression (one input): φ_j(x) = x^j, so y = w_0 + w_1 x + w_2 x² + ... + w_M x^M
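
A sketch of polynomial (basis function) regression in NumPy: expand a one-dimensional input into the basis (1, x, x², ..., x^M) and reuse least squares; the noisy-sine data and the choice M = 3 are just for illustration:

    import numpy as np

    def polynomial_features(x, M):
        """Map a 1-D input to the basis (1, x, x^2, ..., x^M)."""
        return np.vander(x, N=M + 1, increasing=True)

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, size=20)
    t = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=20)   # noisy sine curve

    M = 3
    Phi = polynomial_features(x, M)
    w = np.linalg.solve(Phi.T @ Phi, Phi.T @ t)   # ordinary least squares in the expanded basis
    print(w)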

  45. Polynomial Regression (figure: polynomial fits of order M = 0, 1, 3, 9 to the same data)

  46. Polynomial Regression Underfit (figure: the M = 0 and M = 1 fits underfit the data)

  47. Polynomial Regression Overfit (figure: the M = 9 fit overfits the data)

  48. Regularization L2 regularization (ridge regression) minimizes: E(w) = (1/N) ‖Xw − y‖² + λ ‖w‖², where λ ≥ 0 and ‖w‖² = w^T w. L1 regularization (LASSO) minimizes: E(w) = (1/N) ‖Xw − y‖² + λ |w|_1, where λ ≥ 0 and |w|_1 = Σ_{i=1}^{D} |w_i|

  49. Regularization

  50. Regularization L2: closed form solution w = (X^T X + λI)^{-1} X^T y. L1: no closed form solution; use quadratic programming: minimize ‖Xw − y‖² s.t. ‖w‖_1 ≤ s
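
A sketch of the closed-form ridge (L2) solution; LASSO is not shown since it needs an iterative solver such as quadratic programming or coordinate descent. The synthetic data and λ = 0.1 are arbitrary:

    import numpy as np

    def ridge_regression(X, y, lam):
        """Closed-form L2-regularized solution w = (X^T X + lam * I)^{-1} X^T y."""
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    rng = np.random.default_rng(0)
    X = np.hstack([np.ones((50, 1)), rng.normal(size=(50, 5))])
    y = X @ np.array([1.0, 0.0, 2.0, 0.0, -1.0, 0.5]) + 0.1 * rng.normal(size=50)
    print(ridge_regression(X, y, lam=0.1))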

  51. Review: Bias-Variance Trade-off Maximum likelihood estimator. Bias-variance decomposition (expected value over possible data sets): expected squared error = bias² + variance + irreducible noise

  52. Bias-Variance Trade-off Often: low bias ⇒ high variance, low variance ⇒ high bias. Trade-off: choose the model complexity that balances the two

  53. K-fold Cross-Validation 1. Divide dataset into K “folds” 2. Train on all except k -th fold 3. Test on k -th fold 4. Minimize test error w.r.t. λ

  54. K-fold Cross-Validation • Choices for K: 5, 10, N (leave-one-out) • Cost of computation: K × (number of λ values)
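
A sketch of K-fold cross-validation for choosing λ in ridge regression, following the four steps of the K-fold procedure above; K = 5 and the λ grid are arbitrary choices, and ridge_regression is the closed-form solver sketched earlier:

    import numpy as np

    def ridge_regression(X, y, lam):
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    def cv_error(X, y, lam, K=5, seed=0):
        """Average held-out MSE over K folds for a given lambda."""
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(y)), K)   # divide dataset into K folds
        errors = []
        for k in range(K):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(K) if j != k])
            w = ridge_regression(X[train], y[train], lam)    # train on all except fold k
            r = X[test] @ w - y[test]                        # test on fold k
            errors.append((r @ r) / len(test))
        return np.mean(errors)

    rng = np.random.default_rng(1)
    X = np.hstack([np.ones((100, 1)), rng.normal(size=(100, 5))])
    y = X @ np.array([1.0, 2.0, 0.0, -1.0, 0.0, 0.5]) + 0.5 * rng.normal(size=100)
    best = min([0.01, 0.1, 1.0, 10.0], key=lambda lam: cv_error(X, y, lam))
    print("best lambda:", best)                              # minimize test error w.r.t. lambda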

  55. Learning Curve

  56. Learning Curve

  57. Loss Functions
