Math 221: Linear Algebra
§8-2. Orthogonal Diagonalization
Le Chen¹
Emory University, 2020 Fall (last updated on 08/27/2020)
Creative Commons License (CC BY-NC-SA)
¹ Slides are adapted from those by Karen Seyffarth from the University of Calgary.
Orthogonal Matrices

Definition
An n × n matrix A is orthogonal if its inverse is equal to its transpose, i.e., $A^{-1} = A^T$.

Example
$$\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} \qquad\text{and}\qquad \frac{1}{7}\begin{bmatrix} 2 & 6 & -3 \\ 3 & 2 & 6 \\ -6 & 3 & 2 \end{bmatrix}$$
are orthogonal matrices (verify).
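The defining property $A^T A = I$ is easy to spot-check numerically. The following is a minimal sketch using NumPy (assumed available), with the two example matrices above.

```python
import numpy as np

# The two example matrices from above.
A1 = (1 / np.sqrt(2)) * np.array([[1.0, 1.0],
                                  [-1.0, 1.0]])
A2 = (1 / 7) * np.array([[2.0, 6.0, -3.0],
                         [3.0, 2.0, 6.0],
                         [-6.0, 3.0, 2.0]])

for A in (A1, A2):
    # A is orthogonal exactly when A^T A is the identity matrix.
    print(np.allclose(A.T @ A, np.eye(A.shape[0])))  # True
```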
Theorem
The following are equivalent for an n × n matrix A.
1. A is orthogonal.
2. The rows of A are orthonormal.
3. The columns of A are orthonormal.

The proof that (1) holds if and only if (3) holds is given in what follows. The proof that (1) holds if and only if (2) holds is analogous.
Proof that (1) is equivalent to (3).
Let $A = \begin{bmatrix} \vec{x}_1 & \vec{x}_2 & \cdots & \vec{x}_n \end{bmatrix}$ where $\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n \in \mathbb{R}^n$.
◮ A is orthogonal if and only if $A^T = A^{-1}$.
◮ $A^T = A^{-1}$ if and only if $A^T A = I$.
◮ $A^T A = I$ if and only if
$$\begin{bmatrix} \vec{x}_1^T \\ \vec{x}_2^T \\ \vdots \\ \vec{x}_n^T \end{bmatrix} \begin{bmatrix} \vec{x}_1 & \vec{x}_2 & \cdots & \vec{x}_n \end{bmatrix} = I.$$
◮ This holds if and only if $\vec{x}_j \cdot \vec{x}_j = 1$ for all $j$, $1 \le j \le n$, and $\vec{x}_i \cdot \vec{x}_j = 0$ for all $i$ and $j$, $1 \le i \ne j \le n$, i.e., the columns of A are orthonormal. □
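The key computation in this proof, that the (i, j) entry of $A^T A$ is the dot product of the i-th and j-th columns of A, can be illustrated numerically. A short sketch, using a randomly generated matrix purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # any square matrix, not necessarily orthogonal

G = A.T @ A  # Gram matrix of the columns of A
for i in range(4):
    for j in range(4):
        # Entry (i, j) of A^T A equals the dot product of columns i and j.
        assert np.isclose(G[i, j], A[:, i] @ A[:, j])
```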
If an n × n matrix has orthogonal rows (columns), then normalizing the rows (columns) results in an orthogonal matrix.

Example
$$A = \begin{bmatrix} 2 & 1 & -2 \\ -2 & 1 & 2 \\ 1 & 0 & 8 \end{bmatrix}$$
has orthogonal columns, but its rows are not orthogonal (verify). Normalizing the columns of A gives us the matrix
$$A' = \begin{bmatrix} 2/3 & 1/\sqrt{2} & -1/(3\sqrt{2}) \\ -2/3 & 1/\sqrt{2} & 1/(3\sqrt{2}) \\ 1/3 & 0 & 4/(3\sqrt{2}) \end{bmatrix},$$
which has orthonormal columns. Therefore, A′ is an orthogonal matrix.
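Normalizing each column by its Euclidean length can be done mechanically; the sketch below rebuilds A′ from A and confirms the claims in this example (NumPy assumed).

```python
import numpy as np

A = np.array([[2.0, 1.0, -2.0],
              [-2.0, 1.0, 2.0],
              [1.0, 0.0, 8.0]])

# Divide each column of A by its length.
A_prime = A / np.linalg.norm(A, axis=0)

print(np.allclose(A.T @ A, np.diag(np.diag(A.T @ A))))  # True: columns of A are orthogonal
print(np.allclose(A @ A.T, np.diag(np.diag(A @ A.T))))  # False: rows of A are not orthogonal
print(np.allclose(A_prime.T @ A_prime, np.eye(3)))      # True: A' is orthogonal
```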
Orthogonal Matrices: Products and Inverses

Example
Suppose A and B are orthogonal matrices.
1. Since $(AB)(B^T A^T) = A(BB^T)A^T = AA^T = I$ and AB is square, $B^T A^T = (AB)^T$ is the inverse of AB, so AB is invertible, and $(AB)^{-1} = (AB)^T$. Therefore, AB is orthogonal.
2. $A^{-1} = A^T$ is also orthogonal, since $(A^{-1})^{-1} = A = (A^T)^T = (A^{-1})^T$.

Summary
If A and B are orthogonal matrices, then AB is orthogonal and $A^{-1}$ is orthogonal.
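Both facts in the summary are easy to test numerically. The sketch below obtains two orthogonal matrices as Q factors of QR factorizations of random matrices (one convenient way to generate test cases, not part of the example above) and checks the product and the inverse.

```python
import numpy as np

rng = np.random.default_rng(1)
# The Q factor of a QR factorization of a (nonsingular) square matrix is orthogonal.
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))
B, _ = np.linalg.qr(rng.standard_normal((3, 3)))

AB = A @ B
A_inv = np.linalg.inv(A)

I = np.eye(3)
print(np.allclose(AB.T @ AB, I))        # True: AB is orthogonal
print(np.allclose(A_inv.T @ A_inv, I))  # True: A^{-1} is orthogonal
print(np.allclose(A_inv, A.T))          # True: the inverse equals the transpose
```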
Orthogonal Diagonalization

Definition
An n × n matrix A is orthogonally diagonalizable if there exists an orthogonal matrix P so that $P^{-1}AP = P^T AP$ is diagonal.

Theorem (Principal Axis Theorem)
Let A be an n × n matrix. The following conditions are equivalent.
1. A has an orthonormal set of n eigenvectors.
2. A is orthogonally diagonalizable.
3. A is symmetric.

Proof (partial proof).
(1) implies (2). Suppose $\{\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n\}$ is an orthonormal set of n eigenvectors of A. Then $\{\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n\}$ is a basis of $\mathbb{R}^n$, and hence
$$P = \begin{bmatrix} \vec{x}_1 & \vec{x}_2 & \cdots & \vec{x}_n \end{bmatrix}$$
is an orthogonal matrix such that $P^{-1}AP = P^T AP$ is a diagonal matrix. Therefore A is orthogonally diagonalizable.
Proof (partial proof continued).
(2) implies (1). Conversely, suppose that A is orthogonally diagonalizable. Then there exists an orthogonal matrix P such that $P^T AP$ is a diagonal matrix. If P has columns $\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n$, then $B = \{\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n\}$ is a set of n orthonormal vectors in $\mathbb{R}^n$. Since B is orthogonal, B is independent; furthermore, since $|B| = n = \dim(\mathbb{R}^n)$, B spans $\mathbb{R}^n$ and is therefore a basis of $\mathbb{R}^n$. Let $P^T AP = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) = D$. Then $AP = PD$, so
$$A \begin{bmatrix} \vec{x}_1 & \vec{x}_2 & \cdots & \vec{x}_n \end{bmatrix} = \begin{bmatrix} \vec{x}_1 & \vec{x}_2 & \cdots & \vec{x}_n \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix},$$
i.e.,
$$\begin{bmatrix} A\vec{x}_1 & A\vec{x}_2 & \cdots & A\vec{x}_n \end{bmatrix} = \begin{bmatrix} \lambda_1\vec{x}_1 & \lambda_2\vec{x}_2 & \cdots & \lambda_n\vec{x}_n \end{bmatrix}.$$
Thus $A\vec{x}_i = \lambda_i\vec{x}_i$ for each i, $1 \le i \le n$, implying that B consists of eigenvectors of A. Therefore, A has an orthonormal set of n eigenvectors.
Partial Proof (continued).
(2) implies (3). Suppose A is orthogonally diagonalizable, that D is a diagonal matrix, and that P is an orthogonal matrix so that $P^{-1}AP = D$. Then $P^{-1}AP = P^T AP$, so $A = PDP^T$. Taking transposes of both sides of the equation:
$$\begin{aligned} A^T &= (PDP^T)^T \\ &= (P^T)^T D^T P^T \\ &= P D^T P^T && \text{(since } (P^T)^T = P\text{)} \\ &= P D P^T && \text{(since } D^T = D\text{)} \\ &= A. \end{aligned}$$
Since $A^T = A$, A is symmetric.

We omit the proof that (3) implies (2).
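Although the proof that (3) implies (2) is omitted, the statement is easy to illustrate numerically: for a symmetric matrix, numpy.linalg.eigh returns orthonormal eigenvectors, so its eigenvector matrix P is orthogonal and $P^T AP$ is diagonal. A minimal sketch, with a small symmetric matrix chosen only for illustration; the final check also confirms the (2)-implies-(3) direction, that $PDP^T$ is symmetric.

```python
import numpy as np

# A small symmetric matrix (illustrative choice).
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

# eigh is intended for symmetric matrices; the columns of P are orthonormal eigenvectors.
eigenvalues, P = np.linalg.eigh(A)
D = np.diag(eigenvalues)

print(np.allclose(P.T @ P, np.eye(3)))   # True: P is orthogonal
print(np.allclose(P.T @ A @ P, D))       # True: P^T A P is diagonal
print(np.allclose(P @ D @ P.T, (P @ D @ P.T).T))  # True: P D P^T is symmetric (and equals A)
```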