Matrix Calculations: Kernels & Images, Matrix Multiplication

A. Kissinger (and H. Geuvers)
Institute for Computing and Information Sciences – Intelligent Systems
Radboud University Nijmegen
Version: spring 2016
Outline

• Matrix multiplication
• Matrix inverse
• Kernel and image
From last time

• Vector spaces V, W, ... are special kinds of sets whose elements are called vectors.
• Vectors can be added together, or multiplied by a real number: for v, w ∈ V and a ∈ R,
      v + w ∈ V        a · v ∈ V
• The simplest examples are:
      R^n := { (a_1, ..., a_n) | a_i ∈ R }
• Linear maps are special kinds of functions which satisfy two properties:
      f(v + w) = f(v) + f(w)        f(a · v) = a · f(v)
From last time

• Whereas there exist LOTS of functions between the sets V and W ...
• ... there actually aren't that many linear maps:

Theorem. For every linear map f : R^n → R^m, there exists an m × n matrix A where:
      f(v) = A · v
(where "·" is the matrix multiplication of A and a vector v)

• More generally, every linear map f : V → W is representable as a matrix, but you have to fix a basis for V and W first:
      { v_1, ..., v_n } ⊆ V        { w_1, ..., w_m } ⊆ W
• ... whereas in R^n there is an obvious choice:
      { (1, 0, ..., 0), (0, 1, ..., 0), ..., (0, ..., 0, 1) } ⊆ R^n
Matrix-vector multiplication

For a matrix A and a vector v, w := A · v is the vector whose i-th entry is the dot product of the i-th row of A with v:

$$\begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \cdot \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} a_{11}v_1 + \ldots + a_{1n}v_n \\ \vdots \\ a_{m1}v_1 + \ldots + a_{mn}v_n \end{pmatrix}$$

i.e. $w_i := a_{i1}v_1 + \ldots + a_{in}v_n = \sum_{j=1}^{n} a_{ij} v_j$.
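To make the formula concrete, here is a minimal sketch in plain Python (the helper name `mat_vec` and the example matrix are made up for illustration):

```python
def mat_vec(A, v):
    """Multiply an m x n matrix A (given as a list of rows) with a vector v of length n."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# A 2 x 3 matrix applied to a vector in R^3:
A = [[1, 2, 0],
     [0, 1, 3]]
v = [1, 1, 2]
print(mat_vec(A, v))  # [3, 7]: each entry is a row of A dotted with v
```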
Example: systems of equations

A system of linear equations is precisely a matrix equation A · x = b:

$$\begin{matrix} a_{11}x_1 + \cdots + a_{1n}x_n = b_1 \\ \vdots \\ a_{m1}x_1 + \cdots + a_{mn}x_n = b_m \end{matrix} \quad\Longrightarrow\quad \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}$$

and a homogeneous system is A · x = 0:

$$\begin{matrix} a_{11}x_1 + \cdots + a_{1n}x_n = 0 \\ \vdots \\ a_{m1}x_1 + \cdots + a_{mn}x_n = 0 \end{matrix} \quad\Longrightarrow\quad \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}$$
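As a concrete instance, the system x_1 + 2x_2 = 5, 3x_1 + 4x_2 = 6 is exactly A · x = b with A = ((1, 2), (3, 4)) and b = (5, 6). A sketch of solving it, assuming numpy is available:

```python
import numpy as np

# The system  x1 + 2*x2 = 5,  3*x1 + 4*x2 = 6  written as A.x = b:
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

x = np.linalg.solve(A, b)  # the unique solution of A.x = b
print(x)                   # [-4.   4.5]
print(A @ x)               # recovers b: [5. 6.]
```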
Matrix multiplication

• Consider linear maps g, f represented by matrices A, B:
      g(v) = A · v        f(w) = B · w
• Can we find a matrix C that represents their composition?
      g(f(v)) = C · v
• Let's try:
      g(f(v)) = g(B · v) = A · (B · v)  =(∗)  (A · B) · v
  (where step (∗) is currently 'wishful thinking')
• Great! Let C := A · B.
• But we don't know what "·" means for two matrices yet...
Matrix multiplication

• Solution: generalise from A · v.
• A vector is a matrix with one column:
      The number in the i-th row and the first column of A · v is the dot product of the i-th row of A with the first column of v.
• So, for matrices A, B:
      The number in the i-th row and the j-th column of A · B is the dot product of the i-th row of A with the j-th column of B.
Matrix multiplication

For A an m × n matrix and B an n × p matrix, A · B = C is an m × p matrix:

$$\begin{pmatrix} \vdots & & \vdots \\ a_{i1} & \cdots & a_{in} \\ \vdots & & \vdots \end{pmatrix} \cdot \begin{pmatrix} \cdots & b_{1j} & \cdots \\ & \vdots & \\ \cdots & b_{nj} & \cdots \end{pmatrix} = \begin{pmatrix} & \vdots & \\ \cdots & c_{ij} & \cdots \\ & \vdots & \end{pmatrix}$$

$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$$
Special case: vectors

For A an m × n matrix and B an n × 1 matrix (a vector), A · b = c is an m × 1 matrix:

$$\begin{pmatrix} \vdots & & \vdots \\ a_{i1} & \cdots & a_{in} \\ \vdots & & \vdots \end{pmatrix} \cdot \begin{pmatrix} b_{11} \\ \vdots \\ b_{n1} \end{pmatrix} = \begin{pmatrix} \vdots \\ c_{i1} \\ \vdots \end{pmatrix}$$

$$c_{i1} = \sum_{k=1}^{n} a_{ik} b_{k1}$$
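The entry formula c_ij = Σ_k a_ik b_kj translates directly into code. A minimal sketch in plain Python (the helper name `mat_mul` is made up); taking p = 1 recovers the matrix-vector case above:

```python
def mat_mul(A, B):
    """Product of an m x n matrix A with an n x p matrix B (both given as lists of rows)."""
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2],
     [3, 4]]
B = [[5],          # an n x 1 matrix, i.e. a column vector
     [6]]
print(mat_mul(A, B))  # [[17], [39]]
```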
Matrix composition

Theorem. Matrix composition is associative: (A · B) · C = A · (B · C)

Proof. Let X := A · B. This is a matrix with entries:

$$x_{ip} = \sum_{k} a_{ik} b_{kp}$$

Then, the matrix entries of X · C are:

$$\sum_{p} x_{ip} c_{pj} = \sum_{p} \Big( \sum_{k} a_{ik} b_{kp} \Big) c_{pj} = \sum_{k,p} a_{ik} b_{kp} c_{pj}$$

(because sums can always be pulled outside, and combined)
Associativity of matrix composition

Proof (cont'd). Now, let Y := B · C. This has matrix entries:

$$y_{kj} = \sum_{p} b_{kp} c_{pj}$$

Then, the matrix entries of A · Y are:

$$\sum_{k} a_{ik} y_{kj} = \sum_{k} a_{ik} \Big( \sum_{p} b_{kp} c_{pj} \Big) = \sum_{k,p} a_{ik} b_{kp} c_{pj}$$

... which is the same as before! So:
      (A · B) · C = X · C = A · Y = A · (B · C)

So we can drop those pesky parentheses:
      A · B · C := (A · B) · C = A · (B · C)
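The identity can also be sanity-checked numerically on concrete matrices; a small sketch, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(3, 4))
C = rng.integers(-5, 5, size=(4, 2))

# Integer matrices, so the comparison is exact:
print(np.array_equal((A @ B) @ C, A @ (B @ C)))  # True
```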
Matrix product and composition

Corollary. The composition of linear maps is given by the matrix product.

Proof. Let g(w) = A · w and f(v) = B · v. Then:
      g(f(v)) = g(B · v) = A · B · v        □

No wishful thinking necessary!
Example 1

Consider the following two linear maps, and their associated matrices:

      f : R^3 → R^2                                g : R^2 → R^2
      f(x_1, x_2, x_3) = (x_1 − x_2, x_2 + x_3)     g(y_1, y_2) = (2y_1 − y_2, 3y_2)

$$M_f = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & 1 \end{pmatrix} \qquad M_g = \begin{pmatrix} 2 & -1 \\ 0 & 3 \end{pmatrix}$$

We can compute the composition directly:

      (g ∘ f)(x_1, x_2, x_3) = g(f(x_1, x_2, x_3)) = g(x_1 − x_2, x_2 + x_3)
                             = (2(x_1 − x_2) − (x_2 + x_3), 3(x_2 + x_3))
                             = (2x_1 − 3x_2 − x_3, 3x_2 + 3x_3)

So:

$$M_{g \circ f} = \begin{pmatrix} 2 & -3 & -1 \\ 0 & 3 & 3 \end{pmatrix}$$

... which is just the product of the matrices: M_{g∘f} = M_g · M_f
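The claim M_{g∘f} = M_g · M_f can be verified mechanically; a short sketch, assuming numpy is available:

```python
import numpy as np

M_f = np.array([[1, -1, 0],
                [0,  1, 1]])
M_g = np.array([[2, -1],
                [0,  3]])

print(M_g @ M_f)
# [[ 2 -3 -1]
#  [ 0  3  3]]   <- the matrix M_{g o f} computed by hand above

x = np.array([1, 2, 3])    # any test vector in R^3
print(M_g @ (M_f @ x))     # [-7 15]
print((M_g @ M_f) @ x)     # [-7 15], the same
```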
Note: matrix composition is not commutative

In general, A · B ≠ B · A.

For instance, take:

$$A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad B = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$

Then:

$$A \cdot B = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \cdot \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = \begin{pmatrix} 1 \cdot 0 + 0 \cdot (-1) & 1 \cdot 1 + 0 \cdot 0 \\ 0 \cdot 0 + (-1) \cdot (-1) & 0 \cdot 1 + (-1) \cdot 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$

$$B \cdot A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 0 \cdot 1 + 1 \cdot 0 & 0 \cdot 0 + 1 \cdot (-1) \\ (-1) \cdot 1 + 0 \cdot 0 & (-1) \cdot 0 + 0 \cdot (-1) \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix}$$
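A quick numerical re-check of the two products, assuming numpy is available:

```python
import numpy as np

A = np.array([[1,  0],
              [0, -1]])
B = np.array([[ 0, 1],
              [-1, 0]])

print(A @ B)  # [[0 1]
              #  [1 0]]
print(B @ A)  # [[ 0 -1]
              #  [-1  0]]  -> different, so A.B != B.A
```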
But it is...

... associative, as we've already seen:
      A · B · C := (A · B) · C = A · (B · C)

It also has a unit, given by the identity matrix I:
      A · I = I · A = A

where:

$$I := \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$
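And a quick check of the unit law (numpy's `np.eye` builds the identity matrix):

```python
import numpy as np

A = np.array([[2, -1],
              [0,  3]])
I = np.eye(2, dtype=int)  # the 2 x 2 identity matrix

print(np.array_equal(A @ I, A))  # True
print(np.array_equal(I @ A, A))  # True
```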
Example: political swingers, part I

• We take an extremely crude view on politics and distinguish only left and right wing political supporters.
• We study changes in political views, per year.
• Suppose we observe, for each year:
  • 80% of lefties remain lefties and 20% become righties;
  • 90% of righties remain righties, and 10% become lefties.

Questions ...
• Start with a population L = 100, R = 150, and compute the number of lefties and righties after one year;
• similarly, after 2 years, and 3 years, ...
• Find a convenient way to represent these computations (see the sketch below).
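One convenient representation (anticipating the last question) is a transition matrix P acting on the population vector (L, R); the following sketch, with numpy assumed available, also answers the first two questions:

```python
import numpy as np

# One year of change: new_L = 0.8*L + 0.1*R,  new_R = 0.2*L + 0.9*R.
P = np.array([[0.8, 0.1],
              [0.2, 0.9]])

pop = np.array([100.0, 150.0])   # (lefties, righties) at the start
for year in range(1, 4):
    pop = P @ pop
    print(year, pop)
# year 1: L = 95,     R = 155
# year 2: L = 91.5,   R = 158.5
# year 3: L = 89.05,  R = 160.95
```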