

  1. Linear Algebra Primer
     Note: the slides are based on CS131 (Juan Carlos et al.) and EE263 (by Stephen Boyd et al.) at Stanford. Reorganized, revised, and typed by Hao Su.

  2. Outline
     ◮ Vectors and Matrices
     ◮ Basic matrix operations
     ◮ Determinants, norms, trace
     ◮ Special matrices
     ◮ Transformation Matrices
     ◮ Homogeneous matrices
     ◮ Translation
     ◮ Matrix inverse
     ◮ Matrix rank

  3. Outline
     ◮ Vectors and Matrices
     ◮ Basic matrix operations
     ◮ Determinants, norms, trace
     ◮ Special matrices
     ◮ Transformation Matrices
     ◮ Homogeneous matrices
     ◮ Translation
     ◮ Matrix inverse
     ◮ Matrix rank

  4. Vector
     ◮ A column vector $v \in \mathbb{R}^{n \times 1}$, where
       $v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$
     ◮ A row vector $v^T \in \mathbb{R}^{1 \times n}$, where
       $v^T = \begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix}$
       ($T$ denotes the transpose operation)

  5. Vector
     ◮ We'll default to column vectors in this class:
       $v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$
     ◮ You'll want to keep track of the orientation of your vectors when programming in Python (see the NumPy sketch below)
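A minimal NumPy sketch, added here for illustration (the array values are arbitrary), of why orientation matters: a plain 1-D array has no orientation, while explicit column and row vectors do.

    import numpy as np

    v = np.array([1.0, 2.0, 3.0])    # 1-D array, shape (3,): no orientation
    col = v.reshape(-1, 1)           # column vector, shape (3, 1)
    row = v.reshape(1, -1)           # row vector, shape (1, 3)

    print(col.shape, row.shape)      # (3, 1) (1, 3)
    print(row @ col)                 # [[14.]], a 1x1 inner product
    print(col @ row)                 # a 3x3 outer product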

  6. Vectors have two main uses
     ◮ Vectors can represent an offset in 2D or 3D space
     ◮ Points are just vectors from the origin
     ◮ Data (pixels, gradients at an image keypoint, etc.) can also be treated as a vector
     ◮ Such vectors do not have a geometric interpretation, but calculations like "distance" still have meaning

  7. Matrix
     ◮ A matrix $A \in \mathbb{R}^{m \times n}$ is an array of numbers with size $m$ by $n$, i.e., $m$ rows and $n$ columns:
       $A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ \vdots & & & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \dots & a_{mn} \end{bmatrix}$
     ◮ If $m = n$, we say that $A$ is square.

  8. Images
     ◮ Python represents an image as a matrix of pixel brightness values
     ◮ Note that the upper-left corner is $(y, x) = (0, 0)$

  9. Color Images
     ◮ Grayscale images have one number per pixel, and are stored as an $m \times n$ matrix
     ◮ Color images have 3 numbers per pixel, the red, green, and blue brightnesses (RGB), and are stored as an $m \times n \times 3$ array (the corresponding NumPy shapes are sketched below)
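A small NumPy sketch (illustrative; the arrays are synthetic rather than loaded from an image file) of the grayscale and color layouts above.

    import numpy as np

    h, w = 4, 6                                  # image height (rows) and width (columns)
    gray = np.zeros((h, w), dtype=np.uint8)      # grayscale: one brightness per pixel
    rgb = np.zeros((h, w, 3), dtype=np.uint8)    # color: red, green, blue per pixel

    gray[0, 0] = 255                             # indexing is (y, x): upper-left pixel
    rgb[0, 0] = [255, 0, 0]                      # upper-left pixel set to pure red

    print(gray.shape, rgb.shape)                 # (4, 6) (4, 6, 3)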

  10. Basic Matrix Operations
     We will discuss:
     ◮ Addition
     ◮ Scaling
     ◮ Dot product
     ◮ Multiplication
     ◮ Transpose
     ◮ Inverse / pseudo-inverse
     ◮ Determinant / trace

  11. Matrix Operations
     ◮ Addition:
       $\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} a+1 & b+2 \\ c+3 & d+4 \end{bmatrix}$
     ◮ Can only add a matrix with matching dimensions, or a scalar:
       $\begin{bmatrix} a & b \\ c & d \end{bmatrix} + 7 = \begin{bmatrix} a+7 & b+7 \\ c+7 & d+7 \end{bmatrix}$
     ◮ Scaling:
       $\begin{bmatrix} a & b \\ c & d \end{bmatrix} \times 3 = \begin{bmatrix} 3a & 3b \\ 3c & 3d \end{bmatrix}$
     (NumPy versions of these operations are sketched below.)
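A short NumPy sketch of the addition and scaling rules above; the matrices are arbitrary examples, not taken from the slides.

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])
    B = np.array([[10., 20.],
                  [30., 40.]])

    print(A + B)    # elementwise addition (shapes must match)
    print(A + 7)    # adding a scalar adds it to every entry
    print(A * 3)    # scaling multiplies every entry by 3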

  12. Vectors
     ◮ Norm: $\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}$
     ◮ More formally, a norm is any function $f: \mathbb{R}^n \to \mathbb{R}$ that satisfies 4 properties:
       ◮ Non-negativity: for all $x \in \mathbb{R}^n$, $f(x) \ge 0$
       ◮ Definiteness: $f(x) = 0$ if and only if $x = 0$
       ◮ Homogeneity: for all $x \in \mathbb{R}^n$, $t \in \mathbb{R}$, $f(tx) = |t| \, f(x)$
       ◮ Triangle inequality: for all $x, y \in \mathbb{R}^n$, $f(x + y) \le f(x) + f(y)$

  13. Vector Operations
     ◮ Example norms:
       $\|x\|_1 = \sum_{i=1}^{n} |x_i|$,  $\|x\|_\infty = \max_i |x_i|$
     ◮ General $\ell_p$ norms:
       $\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$
     (A NumPy sketch of these norms follows.)
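A brief NumPy check, for illustration, that the $\ell_1$, $\ell_2$, $\ell_\infty$, and general $\ell_p$ formulas agree with np.linalg.norm; the vector is arbitrary.

    import numpy as np

    x = np.array([3.0, -4.0, 12.0])

    print(np.linalg.norm(x, 1), np.abs(x).sum())            # l1 norm: 19.0
    print(np.linalg.norm(x, 2), np.sqrt((x**2).sum()))      # l2 norm: 13.0
    print(np.linalg.norm(x, np.inf), np.abs(x).max())       # l-infinity norm: 12.0

    p = 3
    print(np.linalg.norm(x, p), (np.abs(x)**p).sum()**(1/p))  # general lp norm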

  14. Vector Operations
     ◮ Inner product (dot product) of vectors
     ◮ Multiply corresponding entries of two vectors and add up the result
     ◮ $x \cdot y$ is also $|x| \, |y| \cos(\text{the angle between } x \text{ and } y)$
       $x^T y = \begin{bmatrix} x_1 & \dots & x_n \end{bmatrix} \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} = \sum_{i=1}^{n} x_i y_i$ (a scalar)
     (A NumPy sketch follows.)
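A small NumPy sketch (illustrative; the vectors are arbitrary) computing the dot product and recovering the angle between two vectors from the cosine relation above.

    import numpy as np

    x = np.array([1.0, 0.0])
    y = np.array([1.0, 1.0])

    dot = x @ y                                              # sum of x_i * y_i
    cos_theta = dot / (np.linalg.norm(x) * np.linalg.norm(y))

    print(dot)                                 # 1.0
    print(np.degrees(np.arccos(cos_theta)))    # 45.0, the angle between x and y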

  15. Vector Operations
     ◮ Inner product (dot product) of vectors
     ◮ If $B$ is a unit vector, then $A \cdot B$ gives the length of the component of $A$ that lies in the direction of $B$

  16. Matrix Operations
     ◮ The product of two matrices $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times p}$ is $C = AB \in \mathbb{R}^{m \times p}$, with
       $C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$
     ◮ Writing $A$ in terms of its rows $a_1^T, \dots, a_m^T$ and $B$ in terms of its columns $b_1, \dots, b_p$:
       $C = AB = \begin{bmatrix} a_1^T b_1 & a_1^T b_2 & \cdots & a_1^T b_p \\ a_2^T b_1 & a_2^T b_2 & \cdots & a_2^T b_p \\ \vdots & \vdots & \ddots & \vdots \\ a_m^T b_1 & a_m^T b_2 & \cdots & a_m^T b_p \end{bmatrix}$
     (See the NumPy sketch below.)
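A NumPy sketch, added here for illustration, that rebuilds the matrix product entry by entry from $C_{ij} = \sum_k A_{ik} B_{kj}$ and compares it to the built-in product; the matrices are arbitrary.

    import numpy as np

    A = np.arange(6, dtype=float).reshape(2, 3)    # a 2x3 matrix
    B = np.arange(12, dtype=float).reshape(3, 4)   # a 3x4 matrix

    C = A @ B                                      # the 2x4 matrix product

    # C_ij = sum over k of A_ik * B_kj
    C_manual = np.array([[sum(A[i, k] * B[k, j] for k in range(A.shape[1]))
                          for j in range(B.shape[1])]
                         for i in range(A.shape[0])])

    print(np.allclose(C, C_manual))                # True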

  17. Matrix Operations
     Multiplication example: each entry of the matrix product is made by taking the dot product of the corresponding row in the left matrix with the corresponding column in the right one.

  18. Matrix Operations
     ◮ The product of two matrices
     ◮ Matrix multiplication is associative: $(AB)C = A(BC)$
     ◮ Matrix multiplication is distributive: $A(B + C) = AB + AC$
     ◮ Matrix multiplication is, in general, not commutative; that is, it can be the case that $AB \ne BA$. (For example, if $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times q}$, the matrix product $BA$ does not even exist if $m$ and $q$ are not equal!)
     (A NumPy sketch follows.)
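A quick NumPy demonstration (with arbitrary random matrices) of associativity, distributivity, and the lack of commutativity.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((2, 2))
    B = rng.standard_normal((2, 2))
    C = rng.standard_normal((2, 2))

    print(np.allclose((A @ B) @ C, A @ (B @ C)))    # True: associative
    print(np.allclose(A @ (B + C), A @ B + A @ C))  # True: distributive
    print(np.allclose(A @ B, B @ A))                # False (in general): not commutative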

  19. Matrix Operations
     ◮ Powers
     ◮ By convention, we can refer to the matrix product $AA$ as $A^2$, $AAA$ as $A^3$, etc.
     ◮ Obviously only square matrices can be multiplied that way

  20. Matrix Operations
     ◮ Transpose: flip the matrix, so row 1 becomes column 1
       $\begin{bmatrix} 0 & 2 & 4 \\ 1 & 3 & 5 \end{bmatrix}^T = \begin{bmatrix} 0 & 1 \\ 2 & 3 \\ 4 & 5 \end{bmatrix}$
     ◮ A useful identity: $(ABC)^T = C^T B^T A^T$

  21. Matrix Operations
     ◮ Determinant
     ◮ $\det(A)$ returns a scalar
     ◮ Represents the area (or volume) of the parallelogram described by the vectors in the rows of the matrix
     ◮ For $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, $\det(A) = ad - bc$
     ◮ Properties:
       $\det(AB) = \det(A)\det(B)$
       $\det(AB) = \det(BA)$
       $\det(A^{-1}) = \dfrac{1}{\det(A)}$
       $\det(A^T) = \det(A)$
       $\det(A) = 0 \iff A$ is singular
     (See the NumPy sketch below.)
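A NumPy sketch (illustrative; the matrices are arbitrary) of the 2x2 determinant formula and two of the listed properties.

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])
    B = np.array([[0., 1.],
                  [5., 2.]])

    print(np.linalg.det(A), 1*4 - 2*3)            # ad - bc = -2
    print(np.isclose(np.linalg.det(A @ B),
                     np.linalg.det(A) * np.linalg.det(B)))   # True
    print(np.isclose(np.linalg.det(np.linalg.inv(A)),
                     1 / np.linalg.det(A)))                  # True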

  22. Matrix Operations
     ◮ Trace
     ◮ $\operatorname{tr}(A)$ = sum of the diagonal elements
       $\operatorname{tr}\!\left( \begin{bmatrix} 1 & 3 \\ 5 & 7 \end{bmatrix} \right) = 1 + 7 = 8$
     ◮ Invariant to a lot of transformations, so it's used sometimes in proofs. (Rarely used in this class, though)
     ◮ Properties:
       $\operatorname{tr}(AB) = \operatorname{tr}(BA)$
       $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)$
       $\operatorname{tr}(ABC) = \operatorname{tr}(BCA) = \operatorname{tr}(CAB)$
     (A NumPy sketch follows.)
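A quick NumPy check, for illustration, of the trace and its cyclic-invariance property; the matrices are arbitrary.

    import numpy as np

    A = np.array([[1., 3.],
                  [5., 7.]])
    B = np.array([[2., 0.],
                  [1., 4.]])
    C = np.array([[0., 1.],
                  [1., 1.]])

    print(np.trace(A))                                           # 1 + 7 = 8
    print(np.isclose(np.trace(A @ B), np.trace(B @ A)))          # True
    print(np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B)))  # True: cyclic invariance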

  23. Matrix Operations
     ◮ Vector norms:
       $\|x\|_1 = \sum_{i=1}^{n} |x_i|$,  $\|x\|_\infty = \max_i |x_i|$,
       $\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}$,  $\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$
     ◮ Matrix norms: norms can also be defined for matrices, such as the Frobenius norm
       $\|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} A_{ij}^2} = \sqrt{\operatorname{tr}(A^T A)}$
     (See the NumPy sketch below.)
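A NumPy sketch (illustrative; the matrix is arbitrary) verifying that the Frobenius norm equals $\sqrt{\operatorname{tr}(A^T A)}$.

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])

    fro = np.linalg.norm(A, 'fro')           # Frobenius norm
    via_trace = np.sqrt(np.trace(A.T @ A))   # sqrt(tr(A^T A))
    via_sum = np.sqrt((A**2).sum())          # sqrt of the sum of squared entries

    print(fro, via_trace, via_sum)           # all approximately 5.4772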

  24. Special Matrices
     ◮ Identity matrix $I$:
       $I_{3 \times 3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
     ◮ Diagonal matrix:
       $\begin{bmatrix} 3 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & 2.5 \end{bmatrix}$

  25. Special Matrices
     ◮ Symmetric matrix: $A^T = A$, e.g.
       $\begin{bmatrix} 1 & 2 & 5 \\ 2 & 1 & 7 \\ 5 & 7 & 1 \end{bmatrix}$
     ◮ Skew-symmetric matrix: $A^T = -A$, e.g.
       $\begin{bmatrix} 0 & -2 & -5 \\ 2 & 0 & -7 \\ 5 & 7 & 0 \end{bmatrix}$

  26. Outline
     ◮ Vectors and Matrices
     ◮ Basic matrix operations
     ◮ Determinants, norms, trace
     ◮ Special matrices
     ◮ Transformation Matrices
     ◮ Homogeneous matrices
     ◮ Translation
     ◮ Matrix inverse
     ◮ Matrix rank

  27. Transformation
     ◮ Matrices can be used to transform vectors in useful ways, through multiplication: $x' = Ax$
     ◮ Simplest is scaling:
       $\begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} s_x x \\ s_y y \end{bmatrix}$
     (Verify by yourself that the matrix multiplication works out this way.)

  28. Rotation (2D case)
     Counter-clockwise rotation by an angle $\theta$:
       $x' = x \cos\theta - y \sin\theta$
       $y' = x \sin\theta + y \cos\theta$
       $\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$,  i.e. $P' = RP$
     (A NumPy sketch follows.)
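A small NumPy sketch (illustrative; the angle is arbitrary) rotating a point 90 degrees counter-clockwise with the matrix above.

    import numpy as np

    theta = np.pi / 2                                # 90 degrees counter-clockwise
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    p = np.array([1.0, 0.0])                         # a point on the x-axis
    print(R @ p)                                     # approximately [0, 1]: rotated onto the y-axis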

  29. Transformation Matrices
     ◮ Multiple transformation matrices can be used to transform a point: $p' = R_2 R_1 S p$

  30. Transformation Matrices
     ◮ Multiple transformation matrices can be used to transform a point: $p' = R_2 R_1 S p$
     ◮ The effect of this is to apply their transformations one after the other, from right to left

  31. Transformation Matrices
     ◮ Multiple transformation matrices can be used to transform a point: $p' = R_2 R_1 S p$
     ◮ The effect of this is to apply their transformations one after the other, from right to left
     ◮ In the example above, the result is $R_2 (R_1 (S p))$

  32. Transformation Matrices
     ◮ Multiple transformation matrices can be used to transform a point: $p' = R_2 R_1 S p$
     ◮ The effect of this is to apply their transformations one after the other, from right to left
     ◮ In the example above, the result is $R_2 (R_1 (S p))$
     ◮ The result is exactly the same if we multiply the matrices first, to form a single transformation matrix: $p' = (R_2 R_1 S) p$ (verified in the NumPy sketch below)
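A NumPy sketch (the particular scale factors and rotation angles are arbitrary illustrations) confirming that multiplying the matrices first gives the same result as applying them one at a time.

    import numpy as np

    def rot(theta):
        # 2D counter-clockwise rotation matrix
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    S = np.diag([2.0, 0.5])       # scale x by 2, y by 0.5
    R1 = rot(np.pi / 6)           # rotate by 30 degrees
    R2 = rot(np.pi / 4)           # rotate by 45 degrees
    p = np.array([1.0, 1.0])

    step_by_step = R2 @ (R1 @ (S @ p))   # apply S, then R1, then R2
    composed = (R2 @ R1 @ S) @ p         # multiply the matrices first

    print(np.allclose(step_by_step, composed))   # True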

  33. Homogeneous System
     ◮ In general, a matrix multiplication lets us linearly combine the components of a vector:
       $\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} ax + by \\ cx + dy \end{bmatrix}$
     ◮ This is sufficient for scale, rotate, and skew transformations
     ◮ But notice, we cannot add a constant! :(

  34. Homogeneous System
     ◮ The (somewhat hacky) solution? Stick a "1" at the end of every vector:
       $\begin{bmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} ax + by + c \\ dx + ey + f \\ 1 \end{bmatrix}$
     ◮ Now we can rotate, scale, and skew like before, AND translate (note how the multiplication works out, above)
     ◮ This is called "homogeneous coordinates"

  35. Homogeneous System
     ◮ In homogeneous coordinates, the multiplication works out so the rightmost column of the matrix is a vector that gets added:
       $\begin{bmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} ax + by + c \\ dx + ey + f \\ 1 \end{bmatrix}$
     ◮ Generally, a homogeneous transformation matrix will have a bottom row of $[0\ 0\ 1]$, so that the result has a "1" at the bottom, too (a NumPy sketch follows)
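A NumPy sketch (illustrative; the rotation angle and translation are arbitrary) of a 3x3 homogeneous transform that rotates a 2D point and then translates it.

    import numpy as np

    theta = np.pi / 2
    tx, ty = 5.0, 2.0

    # 2x2 rotation block, a translation column, and the standard [0 0 1] bottom row
    T = np.array([[np.cos(theta), -np.sin(theta), tx],
                  [np.sin(theta),  np.cos(theta), ty],
                  [0.0,            0.0,           1.0]])

    p = np.array([1.0, 0.0, 1.0])    # the 2D point (1, 0) with a "1" appended
    print(T @ p)                     # approximately [5, 3, 1]: rotated to (0, 1), then translated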
