Singular Value Decomposition


  1. Singular Value Decomposition. Presented by Matthew Motoki

  2. What is a singular value decomposition (SVD)? A singular value decomposition is a factorization of the form $A = U \Sigma V^T$, where
     • $A$ is an $m \times n$ matrix,
     • $U$ is an $m \times m$ orthogonal matrix ($UU^T = U^T U = I$),
     • $\Sigma$ is a real rectangular diagonal matrix with ordered diagonal entries $\sigma_1 \ge \cdots \ge \sigma_r > \sigma_{r+1} = \cdots = \sigma_{\min\{m,n\}} = 0$,
     • $V$ is an $n \times n$ orthogonal matrix.
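
The definition can be checked numerically. The following is a minimal sketch, not from the slides, using NumPy's `numpy.linalg.svd`; the $2 \times 3$ matrix is an arbitrary illustrative assumption.

```python
import numpy as np

# Arbitrary 2 x 3 example matrix (an illustrative assumption).
A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

U, s, Vt = np.linalg.svd(A)                # s holds sigma_1 >= sigma_2 >= 0
Sigma = np.zeros(A.shape)                  # rectangular diagonal Sigma
Sigma[:len(s), :len(s)] = np.diag(s)

assert np.allclose(A, U @ Sigma @ Vt)      # A = U Sigma V^T
assert np.allclose(U @ U.T, np.eye(2))     # U U^T = I
assert np.allclose(Vt.T @ Vt, np.eye(3))   # V^T V = I
```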

  3. Rectangular Diagonal Matrices. Write $D = \operatorname{diag}(\sigma_1, \ldots, \sigma_r, 0, \ldots, 0)$ for the square diagonal block holding the singular values. For $m < n$, zero columns pad on the right ($D$ is $m \times m$):
$$\Sigma = \begin{pmatrix} D & 0_{m \times (n-m)} \end{pmatrix}.$$
For $m > n$, zero rows pad at the bottom ($D$ is $n \times n$):
$$\Sigma = \begin{pmatrix} D \\ 0_{(m-n) \times n} \end{pmatrix}.$$
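
A hedged sketch of the two shapes above; the helper name `rectangular_diag` and the sample values are our assumptions.

```python
import numpy as np

def rectangular_diag(singular_values, m, n):
    """Embed the singular values in an m x n rectangular diagonal matrix."""
    Sigma = np.zeros((m, n))
    r = len(singular_values)
    Sigma[:r, :r] = np.diag(singular_values)
    return Sigma

print(rectangular_diag([3.0, 2.0], 2, 4))  # m < n: zero columns pad the right
print(rectangular_diag([3.0, 2.0], 4, 2))  # m > n: zero rows pad the bottom
```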

  4. Applications of SVDs. SVDs are used in a wide range of disciplines, including signal processing, statistics, linear algebra, and computer science. Respective examples include:
     • noise reduction,
     • solving a linear least squares problem,
     • determining the rank, range, and null space of $A$ (and $A^T$), and computing the pseudoinverse of a matrix (see the sketch below),
     • data compression / matrix approximation.
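
To make one of these applications concrete, here is a hedged sketch of the pseudoinverse via the SVD; the name `pinv_via_svd` and the tolerance `tol` are our assumptions, and zero singular values are simply dropped.

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    """Moore-Penrose pseudoinverse A^+ = V Sigma^+ U^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol, 1.0 / s, 0.0)   # invert only nonzero sigmas
    return Vt.T @ np.diag(s_inv) @ U.T

# A^+ b solves the linear least squares problem min ||Ax - b||.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 0.0, 1.0])
x = pinv_via_svd(A) @ b
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```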

  5. Computing an SVD. An SVD can be obtained by performing eigenvalue decompositions of $AA^T$ and $A^T A$. Denoting
$$\Sigma = \begin{pmatrix} \Sigma_{r \times r} & 0_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{pmatrix},$$
we have
$$AA^T = U \Sigma V^T (U \Sigma V^T)^T = U \Sigma \underbrace{V^T V}_{I}\, \Sigma^T U^T = U \begin{pmatrix} (\Sigma_{r \times r})^2 & 0_{r \times (m-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (m-r)} \end{pmatrix} U^T.$$
Similarly,
$$A^T A = V \begin{pmatrix} (\Sigma_{r \times r})^2 & 0_{r \times (n-r)} \\ 0_{(n-r) \times r} & 0_{(n-r) \times (n-r)} \end{pmatrix} V^T.$$
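
A hedged numeric sketch of this route: the singular values and $V$ come from the eigendecomposition of $A^T A$. To keep the signs of $U$ and $V$ consistent, each $u_i$ is recovered as $A v_i / \sigma_i$ rather than from a separate decomposition of $AA^T$; that detail is our addition, not from the slide.

```python
import numpy as np

A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

# Eigendecomposition of the symmetric matrix A^T A.
evals, V = np.linalg.eigh(A.T @ A)
order = np.argsort(evals)[::-1]            # sort eigenvalues descending
evals, V = evals[order], V[:, order]
s = np.sqrt(np.clip(evals, 0.0, None))     # sigma_i = sqrt(lambda_i)

r = int(np.sum(s > 1e-12))                 # numerical rank
U = (A @ V[:, :r]) / s[:r]                 # u_i = A v_i / sigma_i

# Thin reconstruction: A = U Sigma_r V_r^T.
assert np.allclose(A, U @ np.diag(s[:r]) @ V[:, :r].T)
```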

  6. Jacobi Eigenvalue Algorithm. This is an iterative method for calculating the eigenvalues and eigenvectors of a real symmetric matrix $S$. The idea is to multiply on the left and right by Givens rotation matrices $G$, with the intent of killing off non-diagonal entries:
$$S' = G^T S G,$$
with
$$G = G(i, j, \theta) = \begin{pmatrix}
1 & \cdots & 0 & \cdots & 0 & \cdots & 0 \\
\vdots & \ddots & \vdots & & \vdots & & \vdots \\
0 & \cdots & c & \cdots & s & \cdots & 0 \\
\vdots & & \vdots & \ddots & \vdots & & \vdots \\
0 & \cdots & -s & \cdots & c & \cdots & 0 \\
\vdots & & \vdots & & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & \cdots & 0 & \cdots & 1
\end{pmatrix},$$
wherein $s = \sin\theta$ and $c = \cos\theta$ sit in rows and columns $i$ and $j$ (this sign convention matches the update formulas on the following slide).
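
A minimal sketch of the rotation matrix above, assuming 0-based indices; `givens` is our helper name.

```python
import numpy as np

def givens(n, i, j, theta):
    """n x n Givens rotation G(i, j, theta) acting in the (i, j) plane."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = s, -s    # sign convention matching the slide's formulas
    return G

G = givens(4, 1, 3, 0.3)
assert np.allclose(G.T @ G, np.eye(4))   # rotations are orthogonal
```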

  7. Jacobi Eigenvalue Algorithm cont. The entries of $S'$ are (for $k, l \neq i, j$):
$$\begin{aligned}
S'_{ii} &= c^2 S_{ii} - 2sc\, S_{ij} + s^2 S_{jj} \\
S'_{jj} &= s^2 S_{ii} + 2sc\, S_{ij} + c^2 S_{jj} \\
S'_{ij} = S'_{ji} &= (c^2 - s^2) S_{ij} + sc\,(S_{ii} - S_{jj}) \\
S'_{ik} = S'_{ki} &= c S_{ik} - s S_{jk} \\
S'_{jk} = S'_{kj} &= s S_{ik} + c S_{jk} \\
S'_{kl} &= S_{kl}
\end{aligned}$$
It can be shown that killing the $ij$-th and $ji$-th entries strictly decreases the sum of squares of the off-diagonal entries of $S'$ (by $2S_{ij}^2$), so the iteration converges. Setting $S'_{ij} = 0$, we get $\cos(2\theta)\, S_{ij} + \tfrac{1}{2}\sin(2\theta)(S_{ii} - S_{jj}) = 0$, or
$$\tan(2\theta) = \frac{2 S_{ij}}{S_{jj} - S_{ii}}.$$
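
The angle formula can be checked numerically. Below is a hedged verification (the $3 \times 3$ matrix $S$ and the pivot $(i, j)$ are arbitrary assumptions) that the chosen $\theta$ drives $S'_{ij}$ to zero.

```python
import numpy as np

S = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 1.0]])
i, j = 0, 1                                     # target entry to kill

# theta from tan(2 theta) = 2 S_ij / (S_jj - S_ii).
theta = 0.5 * np.arctan2(2.0 * S[i, j], S[j, j] - S[i, i])
c, s = np.cos(theta), np.sin(theta)
G = np.eye(3)
G[i, i] = G[j, j] = c
G[i, j], G[j, i] = s, -s

S_prime = G.T @ S @ G
assert abs(S_prime[i, j]) < 1e-12               # the (i, j) entry is killed
```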

  8. Jacobi Eigenvalue Algorithm cont. Iterating this process until a given tolerance is met, e.g. $|S_{ij}| < 10^{-10}$ for all $i \neq j$, one gets
$$D = \cdots G_2^T G_1^T\, S\, G_1 G_2 \cdots.$$
Comparing this to our $AA^T$ eigenvalue decomposition,
$$\begin{pmatrix} (\Sigma_{r \times r})^2 & 0_{r \times (m-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (m-r)} \end{pmatrix} = U^T A A^T U,$$
it follows that $U = G_1 G_2 \cdots$. Similarly for $V$.
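
Putting the pieces together, here is a hedged sketch of the full classical Jacobi iteration applied to $AA^T$ to recover the singular values of $A$; the function name and the pivot strategy (largest off-diagonal entry) are our choices.

```python
import numpy as np

def jacobi_eigh(S, tol=1e-10):
    """Classical Jacobi iteration on symmetric S; returns (eigenvalues, Q)."""
    S = S.copy().astype(float)
    n = S.shape[0]
    Q = np.eye(n)                               # accumulates G1 G2 ...
    while True:
        i, j = 0, 1                             # find largest off-diagonal
        for a in range(n):
            for b in range(a + 1, n):
                if abs(S[a, b]) > abs(S[i, j]):
                    i, j = a, b
        if abs(S[i, j]) <= tol:
            break
        theta = 0.5 * np.arctan2(2.0 * S[i, j], S[j, j] - S[i, i])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(n)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = s, -s
        S = G.T @ S @ G                         # kills the (i, j) entry
        Q = Q @ G
    return np.diag(S), Q

A = np.array([[3.0, 1.0, 1.0], [-1.0, 3.0, 1.0]])
evals, U = jacobi_eigh(A @ A.T)
print(np.sqrt(np.sort(evals)[::-1]))            # singular values of A
```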

  9. Image Compression: Low Rank Approximation Property. The $m \times n$ matrix $A$ can be written as a sum of rank-one matrices,
$$A = \sum_{i=1}^{r} \sigma_i u_i v_i^T,$$
where $r$ is the rank of $A$ (the number of nonzero diagonal entries of $\Sigma$), and $u_i$ and $v_i$ are the columns of $U$ and $V$ respectively. For $p < r$, the rank-$p$ approximation of $A$ is defined as
$$A \approx A_p = \sum_{i=1}^{p} \sigma_i u_i v_i^T.$$
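
A hedged sketch of the rank-$p$ approximation above; the random matrix stands in for image data, which is an illustrative assumption. The final assertion is the Eckart-Young fact (not stated on the slide) that the truncated SVD's spectral-norm error equals $\sigma_{p+1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 40))              # stand-in for image data

U, s, Vt = np.linalg.svd(A, full_matrices=False)

p = 10
A_p = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(p))
# Equivalently: A_p = U[:, :p] @ np.diag(s[:p]) @ Vt[:p, :]

err = np.linalg.norm(A - A_p, 2)               # spectral norm of the residual
assert np.isclose(err, s[p])                   # equals sigma_{p+1}
```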
