

  1. Expander and Derandomization

  2. Many derandomization results are based on the assumption that certain random/hard objects exist. Some unconditional derandomization can be achieved using explicit constructions of pseudorandom objects.

  3. Synopsis
     1. Basic Linear Algebra
     2. Random Walk
     3. Expander Graph
     4. Explicit Construction of Expander Graph
     5. Reingold's Theorem

  4. Basic Linear Algebra

  5. Three Views

  All boldface lower case letters denote column vectors.

  Matrix = linear transformation $f : \mathbb{Q}^n \to \mathbb{Q}^m$:
  1. $f(\mathbf{u} + \mathbf{v}) = f(\mathbf{u}) + f(\mathbf{v})$ and $f(c\mathbf{u}) = c f(\mathbf{u})$;
  2. the matrix $M_f$ corresponding to $f$ has $f(\mathbf{e}_j)$ as its $j$-th column (see the sketch after this slide).

  Interpretation of $\mathbf{v} = A\mathbf{u}$:
  1. Dynamic view: $\mathbf{u}$ is transformed to $\mathbf{v}$; movement within one basis.
  2. Static view: $\mathbf{u}$ in the column basis is the same as $\mathbf{v}$ in the standard basis; movement of the basis.

  Equation, geometry (row picture), algebra (column picture):
  ◮ linear equation, hyperplane, linear combination.
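  A minimal numerical sketch (numpy, not part of the original slides; the map f below is an invented example) of the rule that $M_f$ has $f(\mathbf{e}_j)$ as its $j$-th column:

```python
import numpy as np

# An invented linear map f : Q^3 -> Q^2, given as a plain function.
def f(u):
    x, y, z = u
    return np.array([2 * x + y, y - 3 * z])

# M_f has f(e_j) as its j-th column.
M = np.column_stack([f(e) for e in np.eye(3)])

u = np.array([1.0, 2.0, 3.0])
assert np.allclose(M @ u, f(u))  # applying the matrix agrees with applying the map
```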

  6. Suppose $M$ is a matrix, $\mathbf{c}_1, \dots, \mathbf{c}_n$ are column vectors, and $\mathbf{r}_1, \dots, \mathbf{r}_n$ are row vectors. Then

  $$M(\mathbf{c}_1, \dots, \mathbf{c}_n) = (M\mathbf{c}_1, \dots, M\mathbf{c}_n), \tag{1}$$

  $$(\mathbf{c}_1, \dots, \mathbf{c}_n) \begin{pmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \vdots \\ \mathbf{r}_n \end{pmatrix} = \mathbf{c}_1 \mathbf{r}_1 + \mathbf{c}_2 \mathbf{r}_2 + \dots + \mathbf{c}_n \mathbf{r}_n. \tag{2}$$
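  Both identities are easy to check numerically; a small sketch in numpy (not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 3))   # columns c_1, c_2, c_3
R = rng.standard_normal((3, 5))   # rows r_1, r_2, r_3

# (1): M applied to a block of columns acts column by column.
assert np.allclose(M @ C, np.column_stack([M @ c for c in C.T]))

# (2): a matrix product is the sum of column-times-row outer products.
assert np.allclose(C @ R, sum(np.outer(C[:, i], R[i]) for i in range(3)))
```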

  7. Inner Product, Projection, Orthogonality

  1. The inner product $\mathbf{u}^\dagger \mathbf{v}$ measures the degree of colinearity of $\mathbf{u}$ and $\mathbf{v}$.
  ◮ $\frac{\mathbf{u}}{\|\mathbf{u}\|}$ is the normalization of $\mathbf{u}$, where $\|\mathbf{u}\| = \sqrt{\mathbf{u}^\dagger \mathbf{u}}$ is the length of $\mathbf{u}$.
  ◮ $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if $\mathbf{u}^\dagger \mathbf{v} = 0$.
  ◮ $\frac{\mathbf{u}^\dagger \mathbf{v}}{\|\mathbf{u}\|} \cdot \frac{\mathbf{u}}{\|\mathbf{u}\|}$ is the projection of $\mathbf{v}$ onto $\mathbf{u}$; the projection matrix is $P = \frac{\mathbf{u} \mathbf{u}^\dagger}{\mathbf{u}^\dagger \mathbf{u}}$.
  ◮ Suppose $\mathbf{u}_1, \dots, \mathbf{u}_m$ are linearly independent and $A = (\mathbf{u}_1, \dots, \mathbf{u}_m)$. The projection of $\mathbf{v}$ onto the subspace spanned by $\mathbf{u}_1, \dots, \mathbf{u}_m$ is $P\mathbf{v}$, where the projection matrix $P$ is $A(A^\dagger A)^{-1} A^\dagger$. If $\mathbf{u}_1, \dots, \mathbf{u}_m$ are orthonormal, then $A^\dagger A = I_m$ and $P = \mathbf{u}_1 \mathbf{u}_1^\dagger + \dots + \mathbf{u}_m \mathbf{u}_m^\dagger$. (A sketch follows this slide.)
  2. Basis, orthonormal basis, orthogonal matrix.
  3. $Q^{-1} = Q^\dagger$ for every orthogonal matrix $Q$.
  ◮ Gram-Schmidt orthogonalization: $A = QR$.

  Cauchy-Schwarz Inequality. $\cos\theta = \frac{\mathbf{u}^\dagger \mathbf{v}}{\|\mathbf{u}\| \|\mathbf{v}\|} \le 1$.
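  A numerical sketch of projection onto a subspace (numpy, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))   # columns u_1, u_2: generically independent
v = rng.standard_normal(5)

P = A @ np.linalg.inv(A.T @ A) @ A.T      # P = A (A†A)^{-1} A†
assert np.allclose(P @ P, P)              # projecting twice changes nothing
assert np.allclose(A.T @ (v - P @ v), 0)  # the residual is orthogonal to the subspace

# With an orthonormal basis (Gram-Schmidt / QR), P is a sum of u_i u_i†.
Q, _ = np.linalg.qr(A)
assert np.allclose(P, sum(np.outer(Q[:, i], Q[:, i]) for i in range(2)))
```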

  8. Fixpoints for Linear Transformation

  We look for fixpoints of a linear transformation $A : \mathbb{R}^n \to \mathbb{R}^n$, that is, vectors satisfying $A\mathbf{v} = \lambda \mathbf{v}$.

  If there are $n$ linearly independent fixpoints $\mathbf{v}_1, \dots, \mathbf{v}_n$, then every $\mathbf{v} \in \mathbb{R}^n$ is some linear combination $c_1 \mathbf{v}_1 + \dots + c_n \mathbf{v}_n$. By linearity,

  $$A\mathbf{v} = c_1 A\mathbf{v}_1 + \dots + c_n A\mathbf{v}_n = c_1 \lambda_1 \mathbf{v}_1 + \dots + c_n \lambda_n \mathbf{v}_n.$$

  If we think of $\mathbf{v}_1, \dots, \mathbf{v}_n$ as a basis, the effect of the transformation $A$ is to stretch the coordinates in the directions of the axes.
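  A sketch of this stretching view in numpy (an invented 2x2 example, not from the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # symmetric, eigenvalues 3 and 1

lam, V = np.linalg.eigh(A)        # columns of V: the fixed directions v_1, v_2
v = np.array([0.7, -0.4])

c = np.linalg.solve(V, v)         # coordinates of v in the eigenbasis
assert np.allclose(A @ v, V @ (lam * c))  # A stretches each coordinate by lambda_i
```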

  9. Eigenvalue, Eigenvector, Eigenmatrix

  If $A - \lambda I$ is singular, an eigenvector $\mathbf{x}$ satisfies $\mathbf{x} \neq \mathbf{0}$ and $A\mathbf{x} = \lambda \mathbf{x}$; $\lambda$ is the eigenvalue.
  1. $S = (\mathbf{x}_1, \dots, \mathbf{x}_n)$ is the eigenmatrix. By definition $AS = S\Lambda$.
  2. If $\lambda_1, \dots, \lambda_n$ are all different, then $\mathbf{x}_1, \dots, \mathbf{x}_n$ are linearly independent.
  3. If $\mathbf{x}_1, \dots, \mathbf{x}_n$ are linearly independent, then $A = S \Lambda S^{-1}$.

  Proof of 2. Suppose $c_1 \mathbf{x}_1 + \dots + c_n \mathbf{x}_n = \mathbf{0}$. Applying $A$ gives $c_1 \lambda_1 \mathbf{x}_1 + \dots + c_n \lambda_n \mathbf{x}_n = \mathbf{0}$. Subtracting $\lambda_n$ times the first equation, $c_1 (\lambda_1 - \lambda_n) \mathbf{x}_1 + \dots + c_{n-1} (\lambda_{n-1} - \lambda_n) \mathbf{x}_{n-1} = \mathbf{0}$. Repeating this, we eventually get $c_1 (\lambda_1 - \lambda_2) \cdots (\lambda_1 - \lambda_n) \mathbf{x}_1 = \mathbf{0}$. Thus $c_1 = 0$. Similarly $c_2 = \dots = c_n = 0$.

  ◮ We shall write the spectrum $\lambda_1, \lambda_2, \dots, \lambda_n$ such that $|\lambda_1| \ge |\lambda_2| \ge \dots \ge |\lambda_n|$.
  ◮ $\rho(A) = |\lambda_1|$ is called the spectral radius.
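  A numpy sketch (not from the slides) of $AS = S\Lambda$ and $A = S \Lambda S^{-1}$:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])        # eigenvalues 5 and 2, hence diagonalizable

lam, S = np.linalg.eig(A)         # S is the eigenmatrix
Lam = np.diag(lam)
assert np.allclose(A @ S, S @ Lam)                  # AS = S Lambda
assert np.allclose(A, S @ Lam @ np.linalg.inv(S))   # A = S Lambda S^{-1}

rho = np.max(np.abs(lam))         # spectral radius rho(A) = |lambda_1|
print(rho)                        # approximately 5.0
```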

  10. Similarity Transformation

  Similarity transformation = change of basis.
  1. $A$ is similar to $B$ if $A = M B M^{-1}$ for some invertible $M$.
  2. $\mathbf{v}$ is an eigenvector of $A$ iff $M^{-1} \mathbf{v}$ is an eigenvector of $B$.

  $A$ and $B$ describe the same transformation using different bases.
  1. The basis of $B$ consists of the column vectors of $M$.
  2. A vector $\mathbf{x}$ in the basis of $A$ is transformed into the vector $M^{-1} \mathbf{x}$ in the basis of $B$, that is $\mathbf{x} = M (M^{-1} \mathbf{x})$.
  3. $B$ then transforms $M^{-1} \mathbf{x}$ into some $\mathbf{y}$ in the basis of $B$.
  4. In the basis of $A$, the vector $A\mathbf{x}$ is $M\mathbf{y}$.

  Fact. Similar matrices have the same eigenvalues.
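  A numerical check of the Fact (numpy, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
M = rng.standard_normal((3, 3))            # invertible with probability 1
A = M @ B @ np.linalg.inv(M)               # A is similar to B

# Similar matrices have the same eigenvalues.
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.linalg.eigvals(B)))

# Eigenvectors transform by M: if Bw = lam*w, then A(Mw) = lam*(Mw).
lam, W = np.linalg.eig(B)
assert np.allclose(A @ (M @ W[:, 0]), lam[0] * (M @ W[:, 0]))
```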

  11. Triangularization

  Diagonalization is a special case of similarity transformation. In diagonalization, $Q$ provides an orthogonal basis.

  Question. Is every matrix similar to a diagonal matrix?

  Schur's Lemma. For every matrix $A$ there is a unitary matrix $U$ such that $T = U^{-1} A U$ is triangular. The eigenvalues of $A$ appear on the diagonal of $T$.
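  The Schur form is available in scipy; a sketch (assuming scipy is installed, not part of the slides):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

T, U = schur(A, output='complex')          # A = U T U†, U unitary, T triangular
assert np.allclose(A, U @ T @ U.conj().T)
assert np.allclose(np.tril(T, -1), 0)      # strictly lower part vanishes

# The eigenvalues of A appear on the diagonal of T.
assert np.allclose(np.sort_complex(np.diag(T)),
                   np.sort_complex(np.linalg.eigvals(A)))
```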

  12. Diagonalization

  What are the matrices that are similar to diagonal matrices?

  A matrix $N$ is normal if $N N^\dagger = N^\dagger N$.

  Theorem. A matrix $N$ is normal iff its Schur form $T = U^{-1} N U$ is diagonal iff $N$ has a complete set of orthonormal eigenvectors.

  Proof. If $N$ is normal, then $T$ is normal, and a triangular normal matrix must be diagonal (compare the diagonal entries of $T T^\dagger$ and $T^\dagger T$ row by row). Conversely, if $T$ is diagonal, it is the eigenvalue matrix of $N$, and $NU = UT$ says that the column vectors of $U$ are precisely the eigenvectors.
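  A sketch with a normal matrix that is not symmetric (numpy, not from the slides); since its eigenvalues are distinct, the eigenvectors returned by `eig` are automatically orthonormal:

```python
import numpy as np

# The cyclic shift is normal (it commutes with its transpose) but not symmetric.
N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
assert np.allclose(N @ N.T, N.T @ N)

lam, U = np.linalg.eig(N)                  # eigenvalues: the cube roots of unity
assert np.allclose(U.conj().T @ U, np.eye(3))           # U is unitary
assert np.allclose(N, U @ np.diag(lam) @ U.conj().T)    # N = U Lambda U†
```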

  13. Hermitian Matrix and Symmetric Matrix

  Real matrix vs. complex matrix:
  ◮ length: $\|\mathbf{x}\| = \sqrt{\sum_{i\in[n]} x_i^2}$ vs. $\|\mathbf{x}\| = \sqrt{\sum_{i\in[n]} |x_i|^2}$
  ◮ $A^\dagger$: transpose vs. conjugate transpose
  ◮ inner product: $\mathbf{x}^\dagger \mathbf{y} = \sum_{i\in[n]} x_i y_i$ vs. $\mathbf{x}^\dagger \mathbf{y} = \sum_{i\in[n]} \overline{x_i} y_i$
  ◮ orthogonality: $\mathbf{x}^\dagger \mathbf{y} = 0$ in both cases
  ◮ symmetric vs. Hermitian: $A^\dagger = A$
  ◮ diagonalization: $A = Q \Lambda Q^\dagger$ vs. $A = U \Lambda U^\dagger$
  ◮ orthogonal $Q^\dagger Q = I$ vs. unitary $U^\dagger U = I$

  Fact. If $A^\dagger = A$, then $\mathbf{x}^\dagger A \mathbf{x} = (\mathbf{x}^\dagger A \mathbf{x})^\dagger$ is real for all complex $\mathbf{x}$.
  Fact. If $A^\dagger = A$, the eigenvalues are real, since $\mathbf{v}^\dagger A \mathbf{v} = \lambda \mathbf{v}^\dagger \mathbf{v} = \lambda \|\mathbf{v}\|^2$.
  Fact. If $A^\dagger = A$, the eigenvectors of different eigenvalues are orthogonal.
  Fact. $\|U\mathbf{x}\|^2 = \|\mathbf{x}\|^2$ and $\|Q\mathbf{x}\|^2 = \|\mathbf{x}\|^2$.
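  A numpy sketch of the facts (not from the slides):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B + B.conj().T                          # Hermitian: A† = A

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
assert abs((x.conj() @ A @ x).imag) < 1e-10   # x†Ax is real

lam, U = np.linalg.eigh(A)                  # eigh is for Hermitian matrices
assert lam.dtype == np.float64              # the eigenvalues are real
assert np.allclose(np.linalg.norm(U @ x), np.linalg.norm(x))  # ||Ux|| = ||x||
```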

  14. Spectral Theorem

  Theorem. Every Hermitian matrix $A$ can be diagonalized by a unitary matrix $U$. Every symmetric matrix $A$ can be diagonalized by an orthogonal matrix $Q$:
  $$U^\dagger A U = \Lambda, \qquad Q^\dagger A Q = \Lambda.$$
  The eigenvalues are in $\Lambda$; the orthonormal eigenvectors are in $U$, respectively $Q$.

  Corollary. Every Hermitian matrix $A$ has a spectral decomposition: by (1) and (2),
  $$A = U \Lambda U^\dagger = \sum_{i\in[n]} \lambda_i \mathbf{u}_i \mathbf{u}_i^\dagger.$$
  Notice that, by (2), $I = U U^\dagger = \sum_{i\in[n]} \mathbf{u}_i \mathbf{u}_i^\dagger$.
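  A numerical sketch of the spectral decomposition (numpy, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                           # symmetric

lam, Q = np.linalg.eigh(A)
# A is a weighted sum of rank-one projections u_i u_i†.
assert np.allclose(A, sum(lam[i] * np.outer(Q[:, i], Q[:, i]) for i in range(4)))
# The same projections resolve the identity.
assert np.allclose(np.eye(4), sum(np.outer(Q[:, i], Q[:, i]) for i in range(4)))
```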

  15. Positive Definite Matrix

  Symmetric matrices with positive eigenvalues are at the center of many applications.

  A symmetric matrix $A$ is positive definite if $\mathbf{x}^\dagger A \mathbf{x} > 0$ for all $\mathbf{x} \neq \mathbf{0}$.

  Theorem. Suppose $A$ is symmetric. The following are equivalent (a sketch follows this slide).
  1. $\mathbf{x}^\dagger A \mathbf{x} > 0$ for all $\mathbf{x} \neq \mathbf{0}$.
  2. $\lambda_i > 0$ for all the eigenvalues $\lambda_i$.
  3. $A = R^\dagger R$ for some matrix $R$ with independent columns.

  If we replace $>$ by $\ge$, we get the positive semidefinite matrices.
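  A numpy sketch of the equivalences (not from the slides):

```python
import numpy as np

rng = np.random.default_rng(6)
R = rng.standard_normal((5, 3))             # independent columns, generically
A = R.T @ R                                 # condition 3: A = R†R

assert np.all(np.linalg.eigvalsh(A) > 0)    # condition 2: positive eigenvalues

x = rng.standard_normal(3)
assert x @ A @ x > 0                        # condition 1, for this particular x

L = np.linalg.cholesky(A)                   # another R†R factorization, with R = L†
assert np.allclose(A, L @ L.T)
```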

  16. Singular Value Decomposition

  Consider an $m \times n$ matrix $A$. Both $A A^\dagger$ and $A^\dagger A$ are symmetric.
  1. $A A^\dagger$ is positive semidefinite, since $\mathbf{x}^\dagger A A^\dagger \mathbf{x} = \|A^\dagger \mathbf{x}\|^2 \ge 0$.
  2. $A A^\dagger = U \Sigma' U^\dagger$, where $U$ consists of the orthonormal eigenvectors $\mathbf{u}_1, \dots, \mathbf{u}_m$ and $\Sigma'$ is the diagonal matrix made up from the eigenvalues $\sigma_1^2 \ge \dots \ge \sigma_r^2$.
  3. Similarly $A^\dagger A = V \Sigma'' V^\dagger$.
  4. $A A^\dagger \mathbf{u}_i = \sigma_i^2 \mathbf{u}_i$ implies that $(\sigma_i^2, A^\dagger \mathbf{u}_i)$ is an eigenpair for $A^\dagger A$. So set $\mathbf{v}_i = \frac{A^\dagger \mathbf{u}_i}{\|A^\dagger \mathbf{u}_i\|}$.
  5. $\mathbf{u}_i^\dagger A A^\dagger \mathbf{u}_i = \sigma_i^2 \mathbf{u}_i^\dagger \mathbf{u}_i = \sigma_i^2$. So $\|A^\dagger \mathbf{u}_i\| = \sigma_i$.
  6. $A \mathbf{v}_i = \frac{A A^\dagger \mathbf{u}_i}{\|A^\dagger \mathbf{u}_i\|} = \frac{\sigma_i^2}{\sigma_i} \mathbf{u}_i = \sigma_i \mathbf{u}_i$.

  Hence $AV = U\Sigma$, or $A = U \Sigma V^\dagger$. Notice that $\Sigma$ is an $m \times n$ matrix.
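  The construction can be compared against numpy's SVD (a sketch, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 3))             # m = 5, n = 3

U, s, Vh = np.linalg.svd(A)                 # A = U Sigma V†
Sigma = np.zeros((5, 3))                    # Sigma is m x n
Sigma[:3, :3] = np.diag(s)
assert np.allclose(A, U @ Sigma @ Vh)

# The squared singular values are the eigenvalues of A†A (and of AA†).
assert np.allclose(np.sort(s**2), np.sort(np.linalg.eigvalsh(A.T @ A)))
```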

  17. Singular Value Decomposition

  We call
  1. $\sigma_1, \dots, \sigma_r$ the singular values of $A$, and
  2. $U \Sigma V^\dagger$ the singular value decomposition, or SVD, of $A$.

  Lemma. If $A$ is normal, then $\sigma_i = |\lambda_i|$ for all $i \in [n]$.

  Proof. Since $A$ is normal, $A = U \Lambda U^\dagger$ by diagonalization. Now $A^\dagger A = A A^\dagger = U \overline{\Lambda} \Lambda U^\dagger$. So the spectrum of $A^\dagger A$ and $A A^\dagger$ is $|\lambda_1|^2, \dots, |\lambda_n|^2$.
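  A check of the lemma on a normal, non-symmetric matrix (numpy, not from the slides):

```python
import numpy as np

theta = 0.7
N = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation: normal

s = np.linalg.svd(N, compute_uv=False)
lam = np.linalg.eigvals(N)                 # e^{i theta} and e^{-i theta}
assert np.allclose(np.sort(s), np.sort(np.abs(lam)))   # sigma_i = |lambda_i| = 1
```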

  18. Rayleigh Quotient

  Suppose $A$ is an $n \times n$ Hermitian matrix with eigenpairs $(\lambda_1, \mathbf{v}_1), \dots, (\lambda_n, \mathbf{v}_n)$. The Rayleigh quotient of $A$ and a nonzero $\mathbf{x}$ is defined as follows:

  $$R(A, \mathbf{x}) = \frac{\mathbf{x}^\dagger A \mathbf{x}}{\mathbf{x}^\dagger \mathbf{x}} = \frac{\sum_{i\in[n]} \lambda_i |\mathbf{v}_i^\dagger \mathbf{x}|^2}{\sum_{i\in[n]} |\mathbf{v}_i^\dagger \mathbf{x}|^2}. \tag{3}$$

  It is clear from (3) that
  ◮ if $\lambda_1 \ge \dots \ge \lambda_n$, then $\lambda_i = \max_{\mathbf{x} \perp \mathbf{v}_1, \dots, \mathbf{v}_{i-1}} R(A, \mathbf{x})$, and
  ◮ if $|\lambda_1| \ge \dots \ge |\lambda_n|$, then $|\lambda_i| = \max_{\mathbf{x} \perp \mathbf{v}_1, \dots, \mathbf{v}_{i-1}} |R(A, \mathbf{x})|$.

  One can use the Rayleigh quotient to derive lower bounds for $\lambda_i$.
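  A numpy sketch of the Rayleigh quotient (not from the slides; note that `eigh` returns the eigenvalues in ascending order, while the slides order them descending):

```python
import numpy as np

rng = np.random.default_rng(8)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                           # symmetric

lam, V = np.linalg.eigh(A)                  # ascending: lam[-1] is the largest

def rayleigh(x):
    return (x @ A @ x) / (x @ x)

x = rng.standard_normal(4)
assert rayleigh(x) <= lam[-1] + 1e-12       # any x lower-bounds lambda_max
assert np.isclose(rayleigh(V[:, -1]), lam[-1])  # maximized at the top eigenvector

# Restricting to x orthogonal to v_1 recovers the second eigenvalue as a maximum.
assert np.isclose(rayleigh(V[:, -2]), lam[-2])
```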
