Freeness and the Transpose: Matrices Just Wanna Be Free
Jamie Mingo (Queen's University), with Mihai Popa
COSy, June 26, 2014
gue random matrices

◮ $\Omega = M_N(\mathbb{C})^{s.a.} \simeq \mathbb{R}^{N^2}$, $dX$ is Lebesgue measure on $\mathbb{R}^{N^2}$, and $dP = C \exp(-N \operatorname{Tr}(X^2)/2)\, dX$ is a probability measure on $\Omega$ ($C$ is a normalizing constant, $\operatorname{Tr}(I_N) = N$)
◮ $X : \Omega \to M_N(\mathbb{C})$, $X(\omega) = \omega$, the Gaussian Unitary Ensemble, is a matrix-valued random variable on the probability space $(\Omega, P)$
◮ if $X = \frac{1}{\sqrt{N}}(x_{ij})$, then $E(x_{ij}) = 0$, $E(|x_{ij}|^2) = 1$, and $\{x_{ij}\}_{i \leq j}$ are independent complex Gaussian random variables (real on the diagonal)
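The normalization above can be sketched in code. This is a minimal illustration, not from the talk; the helper name `sample_gue` is mine. It builds a Hermitian matrix whose entries satisfy $E(|x_{ij}|^2) = 1$ before the $1/\sqrt{N}$ scaling, so that $\operatorname{tr}(X^2) \approx 1$ for large $N$.

```python
import numpy as np

def sample_gue(n, rng):
    """Sample an n x n GUE matrix X = (1/sqrt(n)) (x_ij) with
    E(x_ij) = 0, E(|x_ij|^2) = 1, and real entries on the diagonal."""
    # complex Gaussian entries with E|g|^2 = 1 (real and imaginary parts of variance 1/2)
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    x = (g + g.conj().T) / np.sqrt(2)   # Hermitian; off-diagonal entries have E|x_ij|^2 = 1
    return x / np.sqrt(n)

rng = np.random.default_rng(0)
X = sample_gue(200, rng)
```

With this scaling the normalized trace of $X^2$ concentrates near 1, which is the variance normalization used throughout the slides.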
Wigner's semi-circle law (1955)

[figures: a $5 \times 5$ gue sampled 10,000 times; a $100 \times 100$ gue sampled once; a $4000 \times 4000$ gue sampled once]

This is the same distribution as $S + S^*$ on $\ell^2(\mathbb{N})$ with respect to the vector state $\omega_{\xi_0}$, where $\xi_0 = (1, 0, 0, \dots)$ and $S$ is the unilateral shift.
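A quick sanity check on the semi-circle law, not part of the original slides: its even moments are the Catalan numbers (which also count non-crossing pairings). The sketch below integrates the density $\frac{1}{2\pi}\sqrt{4 - t^2}$ numerically by a Riemann sum.

```python
import numpy as np
from math import comb

# even moments of the standard semicircle law on [-2, 2];
# the (2k)-th moment should equal the k-th Catalan number
t = np.linspace(-2.0, 2.0, 400001)
dt = t[1] - t[0]
density = np.sqrt(np.maximum(4.0 - t**2, 0.0)) / (2.0 * np.pi)
moments = [float(np.sum(t**(2 * k) * density) * dt) for k in range(1, 4)]
catalan = [comb(2 * k, k) // (k + 1) for k in range(1, 4)]
```

The computed moments come out as approximately $1, 2, 5$, matching $C_1, C_2, C_3$.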
Wishart matrices and the Marchenko-Pastur law

◮ $G$ is an $M \times N$ random matrix $G = (g_{ij})_{ij}$ with $\{g_{ij}\}_{ij}$ independent complex Gaussian random variables with mean 0 and (complex) variance 1, i.e. $E(|g_{ij}|^2) = 1$; $W = \frac{1}{N} G^* G$ is a Wishart random matrix
◮ with $c = \lim_{N \to \infty} \frac{M}{N} > 0$, $a = (1 - \sqrt{c})^2$, and $b = (1 + \sqrt{c})^2$, the limiting eigenvalue distribution is
$$d\mu_c = (1 - c)\,\delta_0 + \frac{\sqrt{(b - t)(t - a)}}{2\pi t}\, dt$$

[figure: an $M = 50$, $N = 100$ Wishart matrix sampled 3,000 times; the curve shows the limiting eigenvalue distribution as $M, N \to \infty$ with $M/N \to 1/2$]
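The figure's setup can be reproduced in a few lines. This is my own sketch with $M = 50$, $N = 100$ (so $c = 1/2$): since $W = \frac{1}{N} G^* G$ has rank $M$, exactly $N - M$ eigenvalues vanish, matching the $(1 - c)\,\delta_0$ atom, and the normalized trace concentrates near $c$.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 50, 100          # c = M/N = 1/2
# complex Gaussian entries with E|g_ij|^2 = 1
G = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
W = G.conj().T @ G / N  # the N x N Wishart matrix W = (1/N) G* G
eigs = np.linalg.eigvalsh(W)
# rank(W) = M, so N - M eigenvalues vanish: the (1 - c) delta_0 atom
n_zero = int(np.sum(eigs < 1e-8))
```

The nonzero eigenvalues cluster in $[a, b] \approx [0.086, 2.91]$ for this $c$.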
Eigenvalue distributions and the transpose

◮ let $X_N$ be the $N \times N$ GUE (dotted curves show limit distributions)

[figures: $X_{1000} + X_{1000}^2$; $X_{100} + (X_{100}^2)^t$; $X_{1000} + (X_{1000}^2)^t$]

◮ the GOE is the same idea as the GUE except we use real symmetric matrices
◮ if we let $Y_N$ be the $N \times N$ GOE then $Y_N + (Y_N^2)^t = Y_N + Y_N^2$; so we would not get different pictures
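The last bullet is a purely algebraic fact, easy to check numerically (this sketch is mine, not from the talk): if $Y$ is real symmetric then $Y^2$ is symmetric, so transposing it does nothing, whereas for a complex Hermitian $X$ the matrix $X^2$ is Hermitian but generally not symmetric.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((n, n))
Y = (A + A.T) / np.sqrt(2 * n)            # real symmetric, GOE-style
# Y^2 is again symmetric, so the transpose does nothing:
goe_same = np.allclose((Y @ Y).T, Y @ Y)

B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = (B + B.conj().T) / np.sqrt(4 * n)     # complex Hermitian, GUE-style
# X^2 is Hermitian but not symmetric, so (X^2)^t genuinely differs:
gue_differs = not np.allclose((X @ X).T, X @ X)
```

This is why the transpose produces new pictures only in the complex (GUE) case.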
Haar unitaries

◮ let $U_N$ be an $N \times N$ Haar distributed random unitary matrix

[figure: $U_{10} + U_{10}^*$ sampled 100 times; the arcsine law]
[figure: $U_{10} + U_{10}^* + (U_{10} + U_{10}^*)^t$ sampled 100 times; Kesten's law on $F_2$]
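A standard way to sample a Haar unitary (not spelled out on the slide; the phase-correction recipe is the usual QR-based one) is sketched below. The eigenvalues of $U + U^*$ are $2\cos\theta_k$, so they always lie in $[-2, 2]$, where the arcsine law lives.

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-distributed unitary via QR of a complex Ginibre matrix,
    with the diagonal phase fix so the distribution is exactly Haar."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(3)
U = haar_unitary(10, rng)
H = U + U.conj().T                # eigenvalues are 2 cos(theta_k)
eigs = np.linalg.eigvalsh(H)      # all in [-2, 2]; arcsine-distributed in the limit
```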
tensor and free independence

Tensor version
◮ $\mathcal{A}$, $\mathcal{B}$ unital $C^*$-algebras, $\varphi_1 \in S(\mathcal{A})$, $\varphi_2 \in S(\mathcal{B})$ states
◮ $A_1 = \mathcal{A} \otimes 1 \subset \mathcal{A} \otimes \mathcal{B}$ and $A_2 = 1 \otimes \mathcal{B} \subset \mathcal{A} \otimes \mathcal{B}$ are tensor independent with respect to $\varphi = \varphi_1 \otimes \varphi_2$
◮ if $x \in A_1$ and $y \in A_2$, then $x$ and $y$ are tensor independent, so
$$\varphi(x^{m_1} y^{n_1} \cdots x^{m_k} y^{n_k}) = \varphi(x^{m_1 + \cdots + m_k})\, \varphi(y^{n_1 + \cdots + n_k})$$

Free version
◮ $A_1 = \mathcal{A} *_{\mathbb{C}} 1 \subset \mathcal{A} *_{\mathbb{C}} \mathcal{B}$ and $A_2 = 1 *_{\mathbb{C}} \mathcal{B} \subset \mathcal{A} *_{\mathbb{C}} \mathcal{B}$ are freely independent with respect to $\varphi = \varphi_1 *_{\mathbb{C}} \varphi_2$
◮ if $x \in A_1$ and $y \in A_2$ then
$$\varphi(x^{m_1} y^{n_1} x^{m_2} y^{n_2}) = \varphi(x^{m_1+m_2})\varphi(y^{n_1})\varphi(y^{n_2}) + \varphi(x^{m_1})\varphi(x^{m_2})\varphi(y^{n_1+n_2}) - \varphi(x^{m_1})\varphi(x^{m_2})\varphi(y^{n_1})\varphi(y^{n_2})$$
◮ if $a_1, \dots, a_n \in A_1 \cup A_2$ are alternating, i.e. $a_i \in A_{j_i}$ with $j_1 \neq j_2 \neq \cdots \neq j_n$, and centered, i.e. $\varphi(a_i) = 0$, then the product $a_1 \cdots a_n$ is centered, i.e. $\varphi(a_1 \cdots a_n) = 0$
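The difference between the two rules shows up already in the moment $\varphi(xyxy)$ for centered $x, y$ of variance 1: freeness gives $0$, while tensor independence (with commuting variables) would give $\varphi(x^2)\varphi(y^2) = 1$. Independent GUE matrices are only asymptotically free (a fact used later in the talk), but they make a good numerical illustration; the sketch below is mine.

```python
import numpy as np

def gue(n, rng):
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (g + g.conj().T) / np.sqrt(2 * n)   # normalized so tr(X^2) -> 1

rng = np.random.default_rng(4)
n = 400
X, Y = gue(n, rng), gue(n, rng)
tr = lambda A: float(np.trace(A).real) / n    # normalized trace

mixed = tr(X @ Y @ X @ Y)          # freeness predicts 0 for centered x, y
tensor = tr(X @ X) * tr(Y @ Y)     # tensor independence would predict phi(xyxy) = 1
```

At $n = 400$ the mixed moment is already close to 0 while the tensor prediction stays near 1.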
the method of moments (and cumulants)

◮ how do you prove the central limit theorem, i.e. that a certain limit distribution is Gaussian?
◮ show $E(e^{itX_n}) \to E(e^{itX})$ as $n \to \infty$, where $X$ is Gaussian
◮ take a logarithm, expand as a power series, and check convergence term by term; use $\log E(e^{itX}) = \frac{(it)^2}{2!}$ for a standard Gaussian $X$
◮ the $R$-transform is the free version of $\log E(e^{itX})$: $G(R(z) + 1/z) = z$ where $G(z) = E((z - X)^{-1})$
◮ for the semi-circle law $R(z) = z$, i.e. all free cumulants vanish except the variance, which is 1
◮ for Marchenko-Pastur, $R(z) = c/(1 - z)$, i.e. all free cumulants are equal to $c$
◮ $X$ and $Y$ are free if and only if mixed free cumulants vanish (the analogous statement holds for tensor independence and classical cumulants; this is why cumulants were first used 100 years ago)
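The free cumulant characterizations above can be checked directly with the moment-cumulant formula $m_n = \sum_{\pi \in NC(n)} \prod_{V \in \pi} \kappa_{|V|}$, summing over non-crossing partitions. A small brute-force sketch (my own, with hypothetical helper names) recovers the Catalan moments from $\kappa_2 = 1$ and the Marchenko-Pastur moments from $\kappa_n \equiv c$:

```python
from itertools import combinations
from math import prod

def set_partitions(elems):
    """Enumerate all partitions of a list (blocks as lists)."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        yield [[first]] + part                      # `first` alone in a new block
        for k in range(len(part)):                  # or joined to an existing block
            yield part[:k] + [[first] + part[k]] + part[k + 1:]

def is_noncrossing(part):
    # a crossing is a < b < c < d with a, c in one block and b, d in another
    for b1, b2 in combinations(part, 2):
        for a, c in combinations(sorted(b1), 2):
            for b, d in combinations(sorted(b2), 2):
                if a < b < c < d or b < a < d < c:
                    return False
    return True

def free_moment(n, kappa):
    """m_n = sum over non-crossing partitions of products of free cumulants."""
    return sum(prod(kappa(len(block)) for block in part)
               for part in set_partitions(list(range(1, n + 1)))
               if is_noncrossing(part))

semicircle = lambda k: 1 if k == 2 else 0   # R(z) = z: only kappa_2 = 1
mp = lambda k, c=0.5: c                     # R(z) = c/(1 - z): all kappa_n = c
```

With only $\kappa_2 = 1$, the even moments come out as the Catalan numbers; with $\kappa_n \equiv c$ one gets $m_3 = c + 3c^2 + c^3$, the third Marchenko-Pastur moment.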
unitarily invariant ensembles

◮ an $N \times N$ random matrix $X = (x_{ij})_{ij}$ is unitarily invariant if for every $N \times N$ unitary matrix $U$ we have
$$E(x_{i_1 j_1} x_{i_2 j_2} \cdots x_{i_m j_m}) = E(y_{i_1 j_1} y_{i_2 j_2} \cdots y_{i_m j_m})$$
for all $i_1, \dots, i_m$ and $j_1, \dots, j_m$, where $Y = UXU^{-1} = (y_{ij})_{ij}$
◮ if for all $k$, $\lim_{N \to \infty} E(\operatorname{tr}(X_N^k))$ exists, then we say $\{X_N\}_N$ has a limit distribution
◮ thm (M. & Popa): if $\{X_N\}_N$ has a limit distribution and is unitarily invariant, then $X$ and $X^t$ are asymptotically free
◮ GUE, Wishart, and Haar distributed unitary ensembles are all unitarily invariant, so our theorem applies
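One deterministic consequence worth seeing in code (my sketch, not from the slides): conjugating any Hermitian matrix by a unitary leaves every trace moment, and hence the whole eigenvalue distribution, exactly unchanged.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = (g + g.conj().T) / 2                      # any Hermitian matrix
z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
q, r = np.linalg.qr(z)
U = q * (np.diag(r) / np.abs(np.diag(r)))     # a Haar unitary
Y = U @ X @ U.conj().T                        # conjugated copy, Y = U X U^{-1}
# every trace moment is unchanged, hence so is the eigenvalue distribution
moments_match = all(
    np.isclose(np.trace(np.linalg.matrix_power(X, k)).real,
               np.trace(np.linalg.matrix_power(Y, k)).real)
    for k in range(1, 6))
```

Unitary invariance of an *ensemble* is the distributional version of this pointwise fact.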
(Block) Wishart Random Matrices: $M_{d_1}(\mathbb{C}) \otimes M_{d_2}(\mathbb{C})$

◮ suppose $G_1, \dots, G_{d_1}$ are $d_2 \times p$ random matrices, where $G_i = (g^{(i)}_{jk})_{jk}$ and the $g^{(i)}_{jk}$ are complex Gaussian random variables with mean 0 and (complex) variance 1, i.e. $E(|g^{(i)}_{jk}|^2) = 1$; moreover, suppose the random variables $\{g^{(i)}_{jk}\}_{i,j,k}$ are independent
◮ then
$$W = \frac{1}{p} \begin{pmatrix} G_1 \\ \vdots \\ G_{d_1} \end{pmatrix} \begin{pmatrix} G_1^* & \cdots & G_{d_1}^* \end{pmatrix} = \Big( \frac{1}{p}\, G_i G_j^* \Big)_{ij}$$
is a $d_1 d_2 \times d_1 d_2$ Wishart matrix; we write $W = (W_{ij})_{ij}$ as a $d_1 \times d_1$ block matrix with each entry the $d_2 \times d_2$ matrix $\frac{1}{p} G_i G_j^*$
Partial Transposes

◮ $G_i$ is a $d_2 \times p$ matrix
◮ $W_{ij} = \frac{1}{p} G_i G_j^*$, a $d_2 \times d_2$ matrix
◮ $W = (W_{ij})_{ij}$ is a $d_1 \times d_1$ block matrix with entries $W_{ij}$
◮ $W^T = (W_{ji}^T)_{ij}$ is the "full" transpose
◮ ${}^\Gamma W = (W_{ji})_{ij}$ is the "left" partial transpose
◮ $W^\Gamma = (W_{ij}^T)_{ij}$ is the "right" partial transpose
◮ we assume that $\frac{p}{d_1 d_2} \to \alpha$ with $0 < \alpha < \infty$
◮ the eigenvalue distributions of $W$ and $W^T$ converge to Marchenko-Pastur with parameter $\alpha$
◮ the eigenvalue distributions of ${}^\Gamma W$ and $W^\Gamma$ converge to a shifted semi-circular law with mean 1 and variance $1/\alpha$ (Aubrun)
◮ $W$ and $W^T$ are asymptotically free (M. and Popa)
◮ what about ${}^\Gamma W$ and $W^\Gamma$?
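The three transposes can be implemented as axis swaps on the tensor factors of $M_{d_1}(\mathbb{C}) \otimes M_{d_2}(\mathbb{C})$. The sketch below is my own (the helper `partial_transpose` is hypothetical); it also checks the algebraic identity that applying the left and then the right partial transpose gives the full transpose, since $({}^\Gamma W)^\Gamma = (W_{ji}^T)_{ij} = W^T$.

```python
import numpy as np

rng = np.random.default_rng(6)
d1, d2, p = 3, 4, 20
G = [(rng.standard_normal((d2, p)) + 1j * rng.standard_normal((d2, p))) / np.sqrt(2)
     for _ in range(d1)]
# the block Wishart matrix W = ( (1/p) G_i G_j^* )_{ij}
W = np.block([[Gi @ Gj.conj().T / p for Gj in G] for Gi in G])

def partial_transpose(A, d1, d2, left=False):
    """Transpose one tensor factor of M_{d1} (x) M_{d2}:
    left=True swaps the blocks (giving (W_ji)_{ij}),
    left=False transposes each block (giving (W_ij^T)_{ij})."""
    T = A.reshape(d1, d2, d1, d2)
    T = T.transpose(2, 1, 0, 3) if left else T.transpose(0, 3, 2, 1)
    return T.reshape(d1 * d2, d1 * d2)
```

Note that $W^\Gamma$ stays Hermitian (its blocks satisfy $(W_{ij}^T)^* = W_{ji}^T$), so it makes sense to speak of its eigenvalue distribution, even though positivity is lost.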
Semi-circle and Marchenko-Pastur Distributions

Suppose $\frac{d_1}{\sqrt{p}} \to \frac{1}{\alpha_1}$ and $\frac{d_2}{\sqrt{p}} \to \frac{1}{\alpha_2}$, and let $\alpha = \alpha_1 \alpha_2$ (so $c = 1/\alpha$).

◮ limit eigenvalue distribution of $W$ (Marchenko-Pastur):
$$\lim E(\operatorname{tr}(W^n)) = \sum_{\sigma \in NC(n)} \Big(\frac{1}{\alpha}\Big)^{\#(\gamma\sigma^{-1}) - 1} = \sum_{\sigma \in NC(n)} \Big(\frac{1}{\alpha}\Big)^{\#(\sigma) - 1}$$
(here $\#(\sigma)$ is the number of blocks of $\sigma$, $\gamma = (1, \dots, n)$, and $\gamma\sigma^{-1}$ is the "other" Kreweras complement)
◮ limit eigenvalue distribution of $W^\Gamma$ (semi-circle):
$$\lim E(\operatorname{tr}((W^\Gamma)^n)) = \sum_{\sigma \in NC_{1,2}(n)} \Big(\frac{1}{\alpha}\Big)^{\#(\gamma\sigma^{-1}) - 1}$$
where $NC_{1,2}(n)$ is the set of non-crossing partitions with only blocks of size 1 and 2 (c.f. Fukuda and Śniady (2013) and Banica and Nechita (2013))
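The $W^\Gamma$ formula can be sanity-checked by brute force. For non-crossing $\sigma$ one has $\#(\gamma\sigma^{-1}) - 1 = n - \#(\sigma)$, which for $\sigma \in NC_{1,2}(n)$ counts the pair blocks; so the sum should reproduce the moments of a semi-circular element with mean 1 and variance $1/\alpha$. This enumeration is my own illustration (helper names hypothetical), shown here at $\alpha = 2$:

```python
from itertools import combinations
from fractions import Fraction

def pairings_and_singletons(elems):
    """All partitions of `elems` into blocks of size 1 and 2."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in pairings_and_singletons(rest):
        yield [(first,)] + part
    for j in range(len(rest)):
        for part in pairings_and_singletons(rest[:j] + rest[j + 1:]):
            yield [(first, rest[j])] + part

def noncrossing(part):
    pairs = [b for b in part if len(b) == 2]
    return not any(a < c < b < d or c < a < d < b
                   for (a, b), (c, d) in combinations(pairs, 2))

def wgamma_moment(n, alpha):
    # exponent n - #(sigma) equals #(gamma sigma^{-1}) - 1 for non-crossing sigma,
    # i.e. the number of pair blocks of sigma
    return sum(Fraction(1, alpha) ** (n - len(p))
               for p in pairings_and_singletons(list(range(1, n + 1)))
               if noncrossing(p))
```

At $\alpha = 2$ this gives $1, \tfrac32, \tfrac52, \tfrac92$ for $n = 1, \dots, 4$, agreeing with the binomial expansion of $E((1 + s)^n)$ for a centered semi-circular $s$ of variance $1/2$.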
main theorem

◮ thm: the matrices $\{W,\ {}^\Gamma W,\ W^\Gamma,\ W^T\}$ form an asymptotically free family
◮ let $(\epsilon, \eta) \in \{-1, 1\}^2 = \mathbb{Z}_2^2$ and set
$$W^{(\epsilon,\eta)} = \begin{cases} W & (\epsilon,\eta) = (1,1) \\ {}^\Gamma W & (\epsilon,\eta) = (-1,1) \\ W^\Gamma & (\epsilon,\eta) = (1,-1) \\ W^T & (\epsilon,\eta) = (-1,-1) \end{cases}$$
◮ for $(\epsilon_1, \eta_1), \dots, (\epsilon_n, \eta_n) \in \mathbb{Z}_2^2$,
$$E\big(\operatorname{Tr}(W^{(\epsilon_1,\eta_1)} \cdots W^{(\epsilon_n,\eta_n)})\big) = \sum_{\sigma \in S_n} \Big(\frac{d_1}{\sqrt{p}}\Big)^{f_\epsilon(\sigma)} \Big(\frac{d_2}{\sqrt{p}}\Big)^{f_\eta(\sigma)} p^{\#(\sigma) + \frac{1}{2}(f_\epsilon(\sigma) + f_\eta(\sigma)) - n}$$
where $f_\epsilon(\sigma) = \#(\epsilon\delta\gamma^{-1}\delta\gamma\delta\epsilon \vee \sigma\delta\sigma^{-1})$ ("$\vee$" means the sup of partitions and $\#$ means the number of blocks or cycles)
Computing Moments via Permutations, I

◮ $[d_1] = \{1, 2, \dots, d_1\}$
◮ given $i_1, \dots, i_n \in [d_1]$, we think of this $n$-tuple as a function $i : [n] \to [d_1]$
◮ $\ker(i) \in \mathcal{P}(n)$ is the partition of $[n]$ such that $i$ is constant on the blocks of $\ker(i)$ and assumes different values on different blocks
◮ if $\sigma \in S_n$, we also think of the cycles of $\sigma$ as a partition and write $\sigma \leq \ker(i)$ to mean that $i$ is constant on the cycles of $\sigma$
◮ given $\sigma \in S_n$, we extend $\sigma$ to a permutation on $[\pm n] = \{-n, \dots, -1, 1, \dots, n\}$ by setting $\sigma(-k) = -k$ for $k > 0$
◮ $\gamma = (1, 2, \dots, n)$, $\delta(k) = -k$
◮ $\delta\gamma^{-1}\delta\gamma\delta = (1, -n)(2, -1) \cdots (n, -(n-1))$
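The last identity is easy to verify mechanically for small $n$. In this sketch (mine, with permutations represented as dicts on $[\pm n]$) the word $\delta\gamma^{-1}\delta\gamma\delta$ is composed right to left and compared against the product of transpositions $(1, -n)(2, -1) \cdots (n, -(n-1))$:

```python
def compose(*perms):
    """Compose permutation dicts, rightmost applied first."""
    def image(k):
        for p in reversed(perms):
            k = p[k]
        return k
    return {k: image(k) for k in perms[0]}

n = 5
support = [k for k in range((-n), n + 1) if k != 0]
gamma = {k: (k % n) + 1 if k > 0 else k for k in support}  # (1,2,...,n), fixing -k
delta = {k: -k for k in support}
gamma_inv = {v: k for k, v in gamma.items()}

word = compose(delta, gamma_inv, delta, gamma, delta)      # delta gamma^{-1} delta gamma delta
expected = {}
for k in range(1, n + 1):
    m = -n if k == 1 else -(k - 1)
    expected[k], expected[m] = m, k                        # (1,-n)(2,-1)...(n,-(n-1))
```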
Computing Moments via Permutations, II

◮ $\delta\gamma^{-1}\delta\gamma\delta = (1, -n)(2, -1) \cdots (n, -(n-1))$
◮ if $A_k = (a^{(k)}_{ij})_{ij}$, then
$$\operatorname{Tr}(A_1 \cdots A_n) = \sum_{i_1, \dots, i_n = 1}^{N} a^{(1)}_{i_1 i_2} a^{(2)}_{i_2 i_3} \cdots a^{(n)}_{i_n i_1} = \sum_{\substack{i_{\pm 1}, \dots, i_{\pm n} \\ \delta\gamma^{-1}\delta\gamma\delta \,\leq\, \ker(i)}} a^{(1)}_{i_1 i_{-1}} \cdots a^{(n)}_{i_n i_{-n}}$$
◮ hence
$$\operatorname{Tr}\big(W^{(\epsilon_1,\eta_1)} \cdots W^{(\epsilon_n,\eta_n)}\big) = \sum_{i_1, \dots, i_n} \big(W^{(\epsilon_1,\eta_1)}\big)_{i_1 i_2} \cdots \big(W^{(\epsilon_n,\eta_n)}\big)_{i_n i_1} = \sum_{i_{\pm 1}, \dots, i_{\pm n}} \big(W^{(\epsilon_1,\eta_1)}\big)_{i_1 i_{-1}} \cdots \big(W^{(\epsilon_n,\eta_n)}\big)_{i_n i_{-n}} = \sum_{j_{\pm 1}, \dots, j_{\pm n}} W^{(\eta_1)}_{j_1 j_{-1}} \cdots W^{(\eta_n)}_{j_n j_{-n}}$$
where $\delta\gamma^{-1}\delta\gamma\delta \leq \ker(i)$, $\epsilon\delta\gamma^{-1}\delta\gamma\delta\epsilon \leq \ker(j)$, and $j = i \circ \epsilon$
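The re-indexed trace formula can be checked by brute force for small $n$ and $N$: sum over all functions $i : [\pm n] \to [N]$ satisfying the kernel constraint $\delta\gamma^{-1}\delta\gamma\delta \leq \ker(i)$ (i.e. $i$ constant on each transposition's orbit), and compare with the ordinary trace. This enumeration is my own illustration, for $n = N = 3$:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
n, N = 3, 3
A = [rng.standard_normal((N, N)) for _ in range(n)]

# cycles of delta gamma^{-1} delta gamma delta = (1,-n)(2,-1)...(n,-(n-1))
cycles = [(1, -n)] + [(k, -(k - 1)) for k in range(2, n + 1)]
support = [k for k in range(-n, n + 1) if k != 0]

total = 0.0
for vals in product(range(N), repeat=2 * n):
    i = dict(zip(support, vals))
    if all(i[a] == i[b] for a, b in cycles):   # delta gamma^{-1} delta gamma delta <= ker(i)
        term = 1.0
        for k in range(1, n + 1):
            term *= A[k - 1][i[k], i[-k]]      # a^(k) indexed by (i_k, i_{-k})
        total += term
```

Substituting the constraints $i_{-1} = i_2$, $i_{-2} = i_3$, $i_{-3} = i_1$ collapses the sum back to $\operatorname{Tr}(A_1 A_2 A_3)$, which is what the test verifies.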