Definitions

Definition. A matrix is a set of N real or complex numbers organized in m rows and n columns, with N = mn:

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & a_{ij} & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \equiv [a_{ij}], \quad i = 1, \ldots, m, \; j = 1, \ldots, n

A matrix is always written as a boldface capital letter, as in A. To indicate matrix dimensions we use the following symbols:

A_{m \times n}, \qquad A \in \mathbb{F}^{m \times n}

where \mathbb{F} = \mathbb{R} for real elements and \mathbb{F} = \mathbb{C} for complex elements.

Transpose matrix

Given a matrix A_{m \times n}, we define the transpose matrix as the matrix obtained by exchanging rows and columns:

A^T_{n \times m} = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}

The following property holds:

(A^T)^T = A

Square matrix

A matrix is said to be square when m = n. A square n \times n matrix is upper triangular when a_{ij} = 0, \forall i > j:

A_{n \times n} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}

If a square matrix is upper triangular, its transpose is lower triangular, and vice versa:

A^T_{n \times n} = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{12} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{nn} \end{bmatrix}

Symmetric matrix

A real square matrix is said to be symmetric if A = A^T, or A - A^T = O. In a real symmetric matrix there are at most n(n+1)/2 independent elements.
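The transpose and symmetry definitions above are easy to check numerically. The following is a minimal sketch using NumPy (not part of the original notes; the matrices are arbitrary illustrative values):

```python
import numpy as np

# Illustrative sketch: transpose and symmetry checks.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # A is 2x3, so A.T is 3x2

assert A.T.shape == (3, 2)
assert np.array_equal(A.T.T, A)          # (A^T)^T = A

S = np.array([[2.0, 7.0],
              [7.0, 5.0]])               # symmetric: S = S^T
assert np.array_equal(S, S.T)

# An n x n symmetric matrix has n(n+1)/2 independent elements:
# the upper triangle (including the diagonal) determines the rest.
n = S.shape[0]
print(n * (n + 1) // 2)                  # -> 3 independent entries for n = 2
```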
Conjugate and adjoint matrix

If a matrix K has complex elements k_{ij} = a_{ij} + j b_{ij} (where j = \sqrt{-1}), its conjugate is \bar{K}, with elements \bar{k}_{ij} = a_{ij} - j b_{ij}. Given a complex matrix K, the adjoint matrix K^* is defined as the conjugate transpose:

K^* = (\bar{K})^T = \overline{K^T}

A complex matrix is called self-adjoint or Hermitian when K = K^*. Some textbooks indicate this matrix as K^\dagger or K^H.

Diagonal matrix

A square matrix is diagonal if a_{ij} = 0 for i \neq j:

A_{n \times n} = \mathrm{diag}(a_i) = \begin{bmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{bmatrix}

A diagonal matrix is always symmetric.

Skew-symmetric matrix

A square matrix is skew-symmetric or antisymmetric if

A + A^T = O \quad \rightarrow \quad A = -A^T

Given the constraints of the above relation, a generic antisymmetric matrix has the following structure:

A_{n \times n} = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ -a_{12} & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ -a_{1n} & -a_{2n} & \cdots & 0 \end{bmatrix}

In a skew-symmetric matrix there are at most n(n-1)/2 nonzero independent elements. We will see in the following some important properties of the antisymmetric 3 \times 3 matrices.

Block matrix

It is possible to represent a matrix by blocks, as

A = \begin{bmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & A_{ij} & \vdots \\ A_{m1} & \cdots & A_{mn} \end{bmatrix}

where the blocks A_{ij} have suitable dimensions. Given the following matrices

A_1 = \begin{bmatrix} A_{11} & \cdots & A_{1n} \\ O & A_{ij} & \vdots \\ O & O & A_{mn} \end{bmatrix}, \quad A_2 = \begin{bmatrix} A_{11} & O & O \\ \vdots & A_{ij} & O \\ A_{m1} & \cdots & A_{mn} \end{bmatrix}, \quad A_3 = \begin{bmatrix} A_{11} & O & O \\ O & A_{ij} & O \\ O & O & A_{mn} \end{bmatrix}

A_1 is upper block triangular, A_2 is lower block triangular, and A_3 is block diagonal.
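The special structures above (Hermitian, skew-symmetric, block) can all be verified or assembled numerically. A minimal NumPy sketch, not part of the original notes, with arbitrary example values:

```python
import numpy as np

# Hermitian check: K equals its conjugate transpose K*.
K = np.array([[1 + 0j, 3 - 1j],
              [3 + 1j, 4 + 0j]])
print(np.array_equal(K, K.conj().T))     # True: K = K*

# Skew-symmetric check: zero diagonal, W = -W^T,
# n(n-1)/2 = 3 independent off-diagonal entries for n = 3.
W = np.array([[ 0.0,  1.5, -2.0],
              [-1.5,  0.0,  0.7],
              [ 2.0, -0.7,  0.0]])
print(np.allclose(W + W.T, 0.0))         # True: W is skew-symmetric

# Block-diagonal assembly with np.block (O denotes a zero block).
A11 = np.eye(2)
A22 = np.array([[2.0, 1.0], [0.0, 3.0]])
O = np.zeros((2, 2))
A3 = np.block([[A11, O], [O, A22]])      # 4x4 block-diagonal matrix
print(A3.shape)                          # (4, 4)
```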
Matrix algebra

Matrices are elements of an algebra, i.e., a vector space together with a product operator. The main operations of this algebra are: product by a scalar, sum, and matrix product.

Product by a scalar

\alpha A = \alpha \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} = \begin{bmatrix} \alpha a_{11} & \alpha a_{12} & \cdots & \alpha a_{1n} \\ \alpha a_{21} & \alpha a_{22} & \cdots & \alpha a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha a_{m1} & \alpha a_{m2} & \cdots & \alpha a_{mn} \end{bmatrix}

Sum

A + B = \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12} & \cdots & a_{1n} + b_{1n} \\ a_{21} + b_{21} & a_{22} + b_{22} & \cdots & a_{2n} + b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & a_{m2} + b_{m2} & \cdots & a_{mn} + b_{mn} \end{bmatrix}

Sum properties:

A + O = A
A + B = B + A
(A + B) + C = A + (B + C)
(A + B)^T = A^T + B^T

The null (neutral, zero) element O takes the name of null matrix. The subtraction (difference) operation is defined using the scalar \alpha = -1:

A - B = A + (-1) B

Matrix product

The operation is performed using the well-known rule "rows by columns": the generic element c_{ij} of the matrix product C_{m \times p} = A_{m \times n} \cdot B_{n \times p} is

c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}

The bilinearity of the matrix product is guaranteed, since it is immediate to verify that, given a generic scalar \alpha, the following identity holds:

\alpha (A \cdot B) = (\alpha A) \cdot B = A \cdot (\alpha B)
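The "rows by columns" rule translates directly into a triple loop. The sketch below (not from the original notes) implements it naively and compares the result with NumPy's built-in product; it also checks the bilinearity identity above:

```python
import numpy as np

def matmul_rows_by_columns(A, B):
    """Explicit 'rows by columns' product: c_ij = sum_k a_ik * b_kj.
    Illustrative only; in practice use A @ B."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must agree"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])    # 3x2
B = np.array([[7.0, 8.0, 9.0], [0.0, 1.0, 2.0]])      # 2x3
print(np.allclose(matmul_rows_by_columns(A, B), A @ B))   # True

alpha = 2.5                                           # bilinearity check
print(np.allclose(alpha * (A @ B), (alpha * A) @ B))  # True
```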
Product properties:

A \cdot B \cdot C = (A \cdot B) \cdot C = A \cdot (B \cdot C)
A \cdot (B + C) = A \cdot B + A \cdot C
(A + B) \cdot C = A \cdot C + B \cdot C
(A \cdot B)^T = B^T \cdot A^T

In general:
• the matrix product is non-commutative: A \cdot B \neq B \cdot A, apart from particular cases;
• A \cdot B = A \cdot C does not imply B = C, apart from particular cases;
• A \cdot B = O does not imply A = O or B = O, apart from particular cases.

Identity matrix

A neutral element with respect to the product exists and is called the identity matrix, written I_n or simply I when no ambiguity arises; given a rectangular matrix A_{m \times n} the following identities hold:

A_{m \times n} = I_m A_{m \times n} = A_{m \times n} I_n

I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}

Idempotent matrix

Given a square matrix A \in \mathbb{R}^{n \times n}, the k-th power is

A^k = \prod_{\ell=1}^{k} A

A matrix is said to be idempotent if

A^2 = A \quad \Rightarrow \quad A^n = A

Trace

The trace of a square matrix A_{n \times n} is the sum of its diagonal elements:

\mathrm{tr}(A) = \sum_{k=1}^{n} a_{kk}

The matrix trace satisfies the following properties:

\mathrm{tr}(\alpha A + \beta B) = \alpha \, \mathrm{tr}(A) + \beta \, \mathrm{tr}(B)
\mathrm{tr}(AB) = \mathrm{tr}(BA)
\mathrm{tr}(A) = \mathrm{tr}(A^T)
\mathrm{tr}(A) = \mathrm{tr}(T^{-1} A T) \quad for nonsingular T (see below)
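The product and trace properties above lend themselves to quick numerical spot-checks. A minimal sketch (not from the original notes; random test matrices, so the nonsingularity of T holds almost surely):

```python
import numpy as np

# Spot-checks of product and trace properties on random matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))          # almost surely nonsingular

print(np.allclose((A @ B).T, B.T @ A.T))             # (AB)^T = B^T A^T
print(np.allclose(A @ B, B @ A))                     # generally False
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # tr(AB) = tr(BA)
print(np.isclose(np.trace(A),
                 np.trace(np.linalg.inv(T) @ A @ T)))  # tr(T^-1 A T) = tr(A)

P = np.array([[1.0, 0.0], [0.0, 0.0]])   # a projector, hence idempotent
print(np.allclose(P @ P, P))             # A^2 = A
```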
Minor

A minor of order p of a matrix A_{m \times n} is the determinant D_p of a square submatrix obtained by selecting any p rows and p columns of A_{m \times n}. (The formal definition of the determinant is given below.) There are as many minors as there are possible choices of p rows out of m and p columns out of n. Given a matrix A_{m \times n}, the principal minors of order k are the determinants D_k, with k = 1, \ldots, \min\{m, n\}, obtained by selecting the first k rows and k columns of A_{m \times n}.

Minor and cofactor

Given A \in \mathbb{R}^{n \times n}, we indicate with A_{(ij)} \in \mathbb{R}^{(n-1) \times (n-1)} the matrix obtained by removing the i-th row and the j-th column of A. We define the minor D_{rc} of a generic element a_{rc} of a square matrix A_{n \times n} as the determinant of the matrix obtained by removing the r-th row and the c-th column, i.e., D_{rc} = \det A_{(rc)}. We define the cofactor of an element a_{rc} of a square matrix A_{n \times n} as the product

A_{rc} = (-1)^{r+c} D_{rc}

Determinant

Once the cofactor is defined, the determinant of a square matrix A can be defined "by row", i.e., choosing a generic row i:

\det(A) = \sum_{k=1}^{n} a_{ik} (-1)^{i+k} \det(A_{(ik)}) = \sum_{k=1}^{n} a_{ik} A_{ik}

or, choosing a generic column j, we have the definition "by column":

\det(A) = \sum_{k=1}^{n} a_{kj} (-1)^{k+j} \det(A_{(kj)}) = \sum_{k=1}^{n} a_{kj} A_{kj}

Since these definitions are recursive and assume the computation of determinants of smaller-order minors, it is necessary to define the determinant of a 1 \times 1 matrix (a scalar), which is simply \det(a_{ij}) = a_{ij}. (A small computational sketch of this recursive expansion is given after the list of properties below.)

Properties of the determinant:
• \det(A \cdot B) = \det(A) \det(B)
• \det(A^T) = \det(A)
• \det(k A) = k^n \det(A)
• if one makes a number s of exchanges between rows or columns of A, obtaining a new matrix A_s, then \det(A_s) = (-1)^s \det(A)
• if A has two equal or proportional rows/columns, then \det(A) = 0
• if A has a row or a column that is a linear combination of other rows or columns, then \det(A) = 0
• if A is upper or lower triangular, then \det(A) = \prod_{i=1}^{n} a_{ii}
• if A is block triangular, with p blocks A_{ii} on the diagonal, then \det(A) = \prod_{i=1}^{p} \det A_{ii}
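The recursive "by row" definition maps directly to code. Below is a minimal sketch (not from the original notes) of cofactor expansion along the first row, compared against NumPy's determinant; its cost grows factorially, so it is purely illustrative:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row:
    det(A) = sum_k a_{1k} * (-1)^(1+k) * det(A_(1k)).
    With 0-based indexing the sign reduces to (-1)^k."""
    n = A.shape[0]
    if n == 1:                           # base case: det of a 1x1 matrix
        return A[0, 0]
    total = 0.0
    for k in range(n):
        # A_(1k): remove row 1 and column k (0-based row 0, column k)
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)
        total += A[0, k] * (-1) ** k * det_cofactor(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 4.0, 1.0]])
print(np.isclose(det_cofactor(A), np.linalg.det(A)))    # True
print(np.isclose(det_cofactor(A.T), det_cofactor(A)))   # det(A^T) = det(A)
```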