
Chapter 5: Least Squares



  1. Chapter 5: Least Squares

  2. Inconsistent Systems. In regression (and many other applications) we have a system of equations we'd like to solve: $X\beta = y$. However, this system does not have an exact solution (i.e., all of our data points don't lie exactly on a flat surface). The best we can do is consider an equation with error and try to minimize that error: $X\hat{\beta} = y + \epsilon$, that is, $X\hat{\beta} = \hat{y}$. Here $\hat{y}$ is the vector of predicted values, $\hat{\beta}$ is the vector of parameter estimates, $X$ is the design matrix, and $\epsilon = \hat{y} - y$ is the vector of residuals.
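
A minimal numeric sketch of such an inconsistent system (assuming NumPy; the data values are made up purely for illustration):

```python
import numpy as np

# Three data points that do not lie exactly on one line, so X beta = y
# has no exact solution (hypothetical data, for illustration only).
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])        # design matrix with an intercept column
y = np.array([0.0, 1.0, 3.0])

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares estimate
y_hat = X @ beta_hat              # vector of predicted values
eps = y_hat - y                   # residuals, eps = y_hat - y as defined above
print(beta_hat, eps)              # eps is nonzero: no exact solution exists
```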

  3. The Normal Equations. Since we can't solve $X\beta = y$ exactly, we want to solve $X\hat{\beta} = \hat{y}$, where $\epsilon^T \epsilon = (\hat{y} - y)^T (\hat{y} - y)$ is minimized. (Remember, $\epsilon^T \epsilon$ is just the sum of squared errors.) Then $\hat{\beta}$ is called a least-squares solution.
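
To see that $\epsilon^T \epsilon$ is the quantity being minimized, here is a self-contained sketch (same made-up data as above) comparing the least-squares solution against an arbitrarily perturbed candidate:

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([0.0, 1.0, 3.0])

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
sse = (X @ beta_hat - y) @ (X @ beta_hat - y)    # eps^T eps for the LS solution

beta_other = beta_hat + np.array([0.1, -0.1])    # an arbitrary other candidate
sse_other = (X @ beta_other - y) @ (X @ beta_other - y)
assert sse_other >= sse   # no other beta achieves a smaller sum of squares
```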

  4. The Normal Equations (continued). The set of least-squares solutions is precisely the set of solutions to the Normal Equations, $X^T X \hat{\beta} = X^T y$. There is a unique solution if and only if $X$ has full rank, i.e., the variables (the columns of $X$) are linearly independent. #NoPerfectMulticollinearity
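
A quick way to check the full-rank condition numerically (a sketch, assuming NumPy; `X_bad` is a made-up example of perfect multicollinearity):

```python
import numpy as np

X_ok = np.array([[1.0, 0.0],
                 [1.0, 1.0],
                 [1.0, 2.0]])
# Append a column that is an exact multiple of another column:
# perfect multicollinearity, so the rank drops below the column count.
X_bad = np.column_stack([X_ok, 2.0 * X_ok[:, 1]])

print(np.linalg.matrix_rank(X_ok))   # 2 == number of columns: unique solution
print(np.linalg.matrix_rank(X_bad))  # 2 < 3 columns: no unique solution
```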

  5. The Normal Equations: solving for $\hat{\beta}$. Start from $X^T X \hat{\beta} = X^T y$. When $X$ has full rank, $X^T X$ is invertible, so we can multiply both sides by the inverse matrix: $\hat{\beta} = (X^T X)^{-1} X^T y$. And then, by definition, our predicted values are $\hat{y} = X \hat{\beta} = X (X^T X)^{-1} X^T y$.
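
A sketch of computing $\hat{\beta}$ from the Normal Equations (assuming NumPy and the same made-up data as above). Note that solving the linear system is preferred over forming $(X^T X)^{-1}$ explicitly, and QR-based solvers such as `np.linalg.lstsq` are more numerically stable still:

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([0.0, 1.0, 3.0])

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # solves (X^T X) beta_hat = X^T y
y_hat = X @ beta_hat                          # predictions X (X^T X)^{-1} X^T y
```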


  6. The Intercept. Remember that we generally have an intercept built into our model: $y = \beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p$. This means our design matrix $X$ has a built-in column of ones:

$$\underbrace{\begin{pmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1p} \\ 1 & x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & x_{n2} & \cdots & x_{np} \end{pmatrix}}_{X} \underbrace{\begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{pmatrix}}_{\beta} = \underbrace{\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}}_{y}$$
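
A sketch of building such a design matrix by prepending a column of ones (assuming NumPy; `x_raw` is hypothetical raw predictor data):

```python
import numpy as np

x_raw = np.array([[0.0], [1.0], [2.0]])  # n = 3 observations of p = 1 predictor
X = np.column_stack([np.ones(len(x_raw)), x_raw])  # first column is all ones
# X[:, 0] multiplies beta_0, so beta_0 acts as the intercept.
```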
