Polynomials in Free Variables
Roland Speicher
Universität des Saarlandes, Saarbrücken
joint work with Serban Belinschi, Tobias Mai, and Piotr Śniady
Goal: Calculation of the Distribution or Brown Measure of Polynomials in Free Variables
Tools:
• Linearization
• Subordination
• Hermitization
We want to understand the distribution of polynomials in free variables. What we understand quite well is: sums of free selfadjoint variables. So we should reduce: arbitrary polynomial → sum of selfadjoint variables. This can be done at the expense of passing to an operator-valued frame.
Let $B \subset A$. A linear map $E : A \to B$ is a conditional expectation if
$$E[b] = b \quad \forall b \in B \qquad \text{and} \qquad E[b_1 a b_2] = b_1 E[a] b_2 \quad \forall a \in A,\ \forall b_1, b_2 \in B.$$
An operator-valued probability space consists of $B \subset A$ and a conditional expectation $E : A \to B$.
Consider an operator-valued probability space $E : A \to B$. Random variables $x_i \in A$ ($i \in I$) are free with respect to $E$ (or free with amalgamation over $B$) if
$$E[a_1 \cdots a_n] = 0$$
whenever the $a_i \in B\langle x_{j(i)} \rangle$ are polynomials in some $x_{j(i)}$ with coefficients from $B$, $E[a_i] = 0$ for all $i$, and $j(1) \neq j(2) \neq \cdots \neq j(n)$.
Consider an operator-valued probability space $E : A \to B$. For a random variable $x \in A$, we define the operator-valued Cauchy transform
$$G(b) := E[(b - x)^{-1}] \qquad (b \in B).$$
For $x = x^*$, this is well-defined and a nice analytic map on the operator-valued upper half-plane
$$\mathbb{H}^+(B) := \{ b \in B \mid (b - b^*)/(2i) > 0 \}.$$
Theorem (Belinschi, Mai, Speicher 2013): Let $x$ and $y$ be selfadjoint operator-valued random variables free over $B$. Then there exists a Fréchet analytic map $\omega : \mathbb{H}^+(B) \to \mathbb{H}^+(B)$ such that
$$G_{x+y}(b) = G_x(\omega(b)) \qquad \text{for all } b \in \mathbb{H}^+(B).$$
Moreover, if $b \in \mathbb{H}^+(B)$, then $\omega(b)$ is the unique fixed point of the map
$$f_b : \mathbb{H}^+(B) \to \mathbb{H}^+(B), \qquad f_b(w) = h_y(h_x(w) + b) + b,$$
and $\omega(b) = \lim_{n \to \infty} f_b^{\circ n}(w)$ for any $w \in \mathbb{H}^+(B)$.
Here $\mathbb{H}^+(B) := \{ b \in B \mid (b - b^*)/(2i) > 0 \}$ and $h(b) := G(b)^{-1} - b$.
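To make the fixed-point iteration concrete, here is a minimal numerical sketch of the scalar case $B = \mathbb{C}$, computing $G_{x+y}$ for two free standard semicirculars. The function names and the comparison against the known variance-2 semicircle are our own additions, not from the talk.

```python
# Scalar (B = C) subordination: G_{x+y}(z) = G_x(omega(z)), with omega the
# fixed point of f_z(w) = h_y(h_x(w) + z) + z, where h(b) = 1/G(b) - b.
import numpy as np

def g_semicircle(z):
    """Cauchy transform of the standard semicircle law, branch with G(z) ~ 1/z."""
    w = np.sqrt(z * z - 4)
    if (z.imag * w.imag) < 0:   # pick the square-root branch matching Im(z)
        w = -w
    return (z - w) / 2

def subordination_G(gx, gy, z, iters=200):
    """Compute G_{x+y}(z) by iterating f_z, as in the theorem above."""
    hx = lambda b: 1 / gx(b) - b
    hy = lambda b: 1 / gy(b) - b
    w = z                        # any starting point in the upper half-plane
    for _ in range(iters):
        w = hy(hx(w) + z) + z
    return gx(w)

# x + y for free standard semicirculars is semicircular with variance 2,
# with Cauchy transform (z - sqrt(z^2 - 8))/4 -- compare:
z = 0.5 + 0.05j
print(subordination_G(g_semicircle, g_semicircle, z))
w = np.sqrt(z * z - 8)
w = -w if (z.imag * w.imag) < 0 else w
print((z - w) / 4)
```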
The Linearization Philosophy: In order to understand polynomials in non-commuting variables, it suffices to understand matrices of linear polynomials in those variables.
• Voiculescu 1987: motivation
• Haagerup, Thorbjørnsen 2005: largest eigenvalue
• Anderson 2012: the selfadjoint version (based on the Schur complement)
Consider a polynomial $p$ in non-commuting variables $x$ and $y$. A linearization of $p$ is an $N \times N$ matrix (with $N \in \mathbb{N}$) of the form
$$\hat p = \begin{pmatrix} 0 & u \\ v & Q \end{pmatrix},$$
where
• $u$, $v$, $Q$ are matrices of the following sizes: $u$ is $1 \times (N-1)$; $v$ is $(N-1) \times 1$; and $Q$ is $(N-1) \times (N-1)$
• each entry of $u$, $v$, $Q$ is a polynomial in $x$ and $y$, each of degree $\leq 1$
• $Q$ is invertible and we have $p = -u Q^{-1} v$
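For instance (a minimal example added here for illustration, not from the slides), $p = xy$ has the $2 \times 2$ linearization
$$\hat p = \begin{pmatrix} 0 & x \\ y & -1 \end{pmatrix}, \qquad u = x,\quad v = y,\quad Q = -1, \qquad -uQ^{-1}v = -x(-1)^{-1}y = xy = p.$$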
Consider a linearization of $p$:
$$\hat p = \begin{pmatrix} 0 & u \\ v & Q \end{pmatrix} \quad \text{with } p = -uQ^{-1}v, \qquad \text{and} \qquad b = \begin{pmatrix} z & 0 \\ 0 & 0 \end{pmatrix} \quad (z \in \mathbb{C}).$$
Then we have
$$(b - \hat p)^{-1} = \begin{pmatrix} 1 & 0 \\ -Q^{-1}v & 1 \end{pmatrix} \begin{pmatrix} (z-p)^{-1} & 0 \\ 0 & -Q^{-1} \end{pmatrix} \begin{pmatrix} 1 & -uQ^{-1} \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} (z-p)^{-1} & * \\ * & * \end{pmatrix}$$
and thus
$$G_{\hat p}(b) = \mathrm{id} \otimes \varphi\big( (b - \hat p)^{-1} \big) = \begin{pmatrix} \varphi\big((z-p)^{-1}\big) & \varphi(*) \\ \varphi(*) & \varphi(*) \end{pmatrix}.$$
Note: $\hat p$ is a sum of operator-valued free variables!
Theorem (Anderson 2012): One has:
• for each $p$ there exists a linearization $\hat p$ (with an explicit algorithm for finding it)
• if $p$ is selfadjoint, then this $\hat p$ is also selfadjoint
Conclusion: The combination of linearization and operator-valued subordination allows us to deal with the case of selfadjoint polynomials.
Input: $p(x,y)$, $G_x(z)$, $G_y(z)$
↓ Linearize $p(x,y)$ to $\hat p = \hat x + \hat y$
↓ Get $G_{\hat x}(b)$ out of $G_x(z)$ and $G_{\hat y}(b)$ out of $G_y(z)$
↓ Get $\omega(b)$ as the fixed point of the iteration
$$w \mapsto G_{\hat y}\big( b + G_{\hat x}(w)^{-1} - w \big)^{-1} - \big( G_{\hat x}(w)^{-1} - w \big)$$
↓ $G_{\hat p}(b) = G_{\hat x}(\omega(b))$
↓ Recover $G_p(z)$ as one entry of $G_{\hat p}(b)$
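Here is a rough numerical sketch (ours, with crude iteration parameters) of this pipeline for the special case that $x$ and $y$ are both standard semicircular, using the linearization of $p = xy + yx + x^2$ from the next slide. For a semicircular variable, $G_{\hat x}(b)$ for $\hat x = a_0 + a_1 \otimes x$ can be obtained from the matrix Dyson equation $G = (b - a_0 - a_1 G a_1)^{-1}$; that equation is standard, but it is an input we add here, not part of the diagram above.

```python
import numpy as np

def G_semi(b, a0, a1, iters=500):
    """E[(b - a0 - a1 s)^{-1}] for s standard semicircular, via a damped
    fixed-point iteration of the matrix Dyson equation."""
    G = -1j * np.eye(b.shape[0])
    for _ in range(iters):
        G = 0.5 * G + 0.5 * np.linalg.inv(b - a0 - a1 @ G @ a1)
    return G

def omega(G_hx, G_hy, b, iters=200):
    """Fixed point of w -> G_hy(b + h_x(w))^{-1} - h_x(w),
    with h_x(w) = G_hx(w)^{-1} - w, as in the diagram."""
    w = b.copy()
    for _ in range(iters):
        hx = np.linalg.inv(G_hx(w)) - w
        w = np.linalg.inv(G_hy(b + hx)) - hx
    return w

# linearization of p = xy + yx + x^2 (see next slide):
# hat p = a0 + a1 (x) + a2 (y); constants are absorbed into hat x
a0 = np.array([[0, 0, 0], [0, 0, -1], [0, -1, 0]], dtype=complex)
a1 = np.array([[0, 1, 0.5], [1, 0, 0], [0.5, 0, 0]], dtype=complex)
a2 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)

G_hx = lambda b: G_semi(b, a0, a1)
G_hy = lambda b: G_semi(b, np.zeros((3, 3)), a2)

# evaluate at b = diag(z, i*eps, i*eps); the (1,1) entry then approximates
# G_p(z), and -Im G_p(t + i*eta)/pi approximates the density of p at t
b = np.diag([1.0 + 0.05j, 0.05j, 0.05j])
Gp = G_hx(omega(G_hx, G_hy, b))
print("density of p near 1.0 ~", -Gp[0, 0].imag / np.pi)
```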
Example: $p(x,y) = xy + yx + x^2$ has linearization
$$\hat p = \begin{pmatrix} 0 & x & y + \frac{x}{2} \\ x & 0 & -1 \\ y + \frac{x}{2} & -1 & 0 \end{pmatrix}.$$
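A quick numerical sanity check (our own, not from the slides): with $u = (x,\ y + x/2)$, $Q = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix}$, $v = u^T$, one should get $p = -uQ^{-1}v$ for any concrete matrices substituted for $x$ and $y$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = rng.standard_normal((n, n)); x = x + x.T   # arbitrary symmetric test matrices
y = rng.standard_normal((n, n)); y = y + y.T

u = np.hstack([x, y + x / 2])                  # 1 x 2 block row
Q = np.block([[np.zeros((n, n)), -np.eye(n)],
              [-np.eye(n),       np.zeros((n, n))]])
v = np.vstack([x, y + x / 2])                  # 2 x 1 block column

p = x @ y + y @ x + x @ x
print(np.allclose(-u @ np.linalg.inv(Q) @ v, p))   # True
```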
[Figure: eigenvalue histogram of $P(X,Y) = XY + YX + X^2$ for independent $X$, $Y$ ($X$ Wigner, $Y$ Wishart), compared with the distribution of $p(x,y) = xy + yx + x^2$ for free $x$, $y$ ($x$ semicircular, $y$ Marchenko–Pastur).]
Example: $p(x_1, x_2, x_3) = x_1 x_2 x_1 + x_2 x_3 x_2 + x_3 x_1 x_3$ has linearization
$$\hat p = \begin{pmatrix}
0 & 0 & x_1 & 0 & x_2 & 0 & x_3 \\
0 & x_2 & -1 & 0 & 0 & 0 & 0 \\
x_1 & -1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & x_3 & -1 & 0 & 0 \\
x_2 & 0 & 0 & -1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & x_1 & -1 \\
x_3 & 0 & 0 & 0 & 0 & -1 & 0
\end{pmatrix}.$$
[Figure: eigenvalue histogram of $P(X_1,X_2,X_3) = X_1X_2X_1 + X_2X_3X_2 + X_3X_1X_3$ for independent $X_1, X_2, X_3$ ($X_1, X_2$ Wigner, $X_3$ Wishart), compared with the distribution of $p(x_1,x_2,x_3) = x_1x_2x_1 + x_2x_3x_2 + x_3x_1x_3$ for free $x_1, x_2, x_3$ ($x_1, x_2$ semicircular, $x_3$ Marchenko–Pastur).]
What about non-selfadjoint polynomials? For a measure $\mu$ on $\mathbb{C}$, its Cauchy transform
$$G_\mu(\lambda) = \int_{\mathbb{C}} \frac{1}{\lambda - z}\, d\mu(z)$$
is well-defined everywhere outside a set of $\mathbb{R}^2$-Lebesgue measure zero; however, it is analytic only outside the support of $\mu$. The measure $\mu$ can be extracted from its Cauchy transform by the formula (understood in the distributional sense)
$$\mu = \frac{1}{\pi} \frac{\partial}{\partial \bar\lambda} G_\mu(\lambda).$$
A better approach is by regularization:
$$G_{\epsilon,\mu}(\lambda) = \int_{\mathbb{C}} \frac{\bar\lambda - \bar z}{\epsilon^2 + |\lambda - z|^2}\, d\mu(z)$$
is well-defined for every $\lambda \in \mathbb{C}$. By subharmonicity arguments,
$$\mu_\epsilon = \frac{1}{\pi} \frac{\partial}{\partial \bar\lambda} G_{\epsilon,\mu}(\lambda)$$
is a positive measure on the complex plane. One has
$$\lim_{\epsilon \to 0} \mu_\epsilon = \mu \qquad \text{(weak convergence).}$$
This can be copied for general (not necessarily normal) operators $x$ in a tracial non-commutative probability space $(A, \varphi)$. Put
$$G_{\epsilon,x}(\lambda) := \varphi\Big( (\lambda - x)^* \big( (\lambda - x)(\lambda - x)^* + \epsilon^2 \big)^{-1} \Big).$$
Then
$$\mu_{\epsilon,x} = \frac{1}{\pi} \frac{\partial}{\partial \bar\lambda} G_{\epsilon,x}(\lambda)$$
is a positive measure on the complex plane, which converges weakly for $\epsilon \to 0$:
$$\mu_x := \lim_{\epsilon \to 0} \mu_{\epsilon,x} \qquad \text{Brown measure of } x.$$
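For intuition, here is a small numerical sketch (our own) with the normalized trace of a single matrix playing the role of $\varphi$. For a finite matrix the Brown measure is just the eigenvalue counting measure, so this only illustrates the regularization and the $\partial/\partial\bar\lambda$ mechanics; the names `G_eps` and `brown_density` are ours.

```python
import numpy as np

def G_eps(x, lam, eps):
    """Regularized Cauchy transform tr[(lam-x)* ((lam-x)(lam-x)* + eps^2)^{-1}]."""
    N = x.shape[0]
    A = lam * np.eye(N) - x
    M = np.linalg.inv(A @ A.conj().T + eps**2 * np.eye(N))
    return np.trace(A.conj().T @ M) / N

def brown_density(x, lam, eps=0.05, h=1e-4):
    """(1/pi) d/d(bar lam) of G_eps, via d/d(bar lam) = (d/dRe + i d/dIm)/2."""
    dRe = (G_eps(x, lam + h, eps) - G_eps(x, lam - h, eps)) / (2 * h)
    dIm = (G_eps(x, lam + 1j * h, eps) - G_eps(x, lam - 1j * h, eps)) / (2 * h)
    return ((dRe + 1j * dIm) / (2 * np.pi)).real

# Ginibre matrix: eigenvalues fill the unit disk with density 1/pi ~ 0.318
rng = np.random.default_rng(0)
N = 300
x = rng.standard_normal((N, N)) / np.sqrt(N)
print(brown_density(x, 0.2 + 0.1j))   # roughly 1/pi inside the disk
```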
Hermitization Method: For given $x$ we need to calculate
$$G_{\epsilon,x}(\lambda) = \varphi\Big( (\lambda - x)^* \big( (\lambda - x)(\lambda - x)^* + \epsilon^2 \big)^{-1} \Big).$$
Let
$$X = \begin{pmatrix} 0 & x \\ x^* & 0 \end{pmatrix} \in M_2(A); \qquad \text{note: } X = X^*.$$
Consider $X$ in the $M_2(\mathbb{C})$-valued probability space with respect to $E = \mathrm{id} \otimes \varphi : M_2(A) \to M_2(\mathbb{C})$ given by
$$E\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{pmatrix} \varphi(a_{11}) & \varphi(a_{12}) \\ \varphi(a_{21}) & \varphi(a_{22}) \end{pmatrix}.$$
For the argument
$$\Lambda_\epsilon = \begin{pmatrix} i\epsilon & \lambda \\ \bar\lambda & i\epsilon \end{pmatrix} \in M_2(\mathbb{C}) \qquad \text{and} \qquad X = \begin{pmatrix} 0 & x \\ x^* & 0 \end{pmatrix},$$
consider now the $M_2(\mathbb{C})$-valued Cauchy transform of $X$:
$$G_X(\Lambda_\epsilon) = E\big[ (\Lambda_\epsilon - X)^{-1} \big] = \begin{pmatrix} g_{\epsilon,\lambda,11} & g_{\epsilon,\lambda,12} \\ g_{\epsilon,\lambda,21} & g_{\epsilon,\lambda,22} \end{pmatrix}.$$
One can easily check that
$$(\Lambda_\epsilon - X)^{-1} = \begin{pmatrix} -i\epsilon \big( (\lambda - x)(\lambda - x)^* + \epsilon^2 \big)^{-1} & (\lambda - x) \big( (\lambda - x)^*(\lambda - x) + \epsilon^2 \big)^{-1} \\ (\lambda - x)^* \big( (\lambda - x)(\lambda - x)^* + \epsilon^2 \big)^{-1} & -i\epsilon \big( (\lambda - x)^*(\lambda - x) + \epsilon^2 \big)^{-1} \end{pmatrix},$$
and thus $g_{\epsilon,\lambda,21} = G_{\epsilon,x}(\lambda)$.
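This identity is easy to check numerically for a concrete matrix $x$, with the normalized trace in place of $\varphi$ and $\mathrm{id} \otimes \mathrm{tr}$ in place of $E$ (a sanity-check script of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
N, lam, eps = 50, 0.3 + 0.2j, 0.1
x = rng.standard_normal((N, N)) / np.sqrt(N)

I = np.eye(N)
A = lam * I - x

# direct formula for G_{eps,x}(lambda)
G_direct = np.trace(A.conj().T @ np.linalg.inv(A @ A.conj().T + eps**2 * I)) / N

# via the 2x2 hermitization: (2,1) entry of E[(Lambda_eps - X)^{-1}]
Lam = np.block([[1j * eps * I, lam * I], [np.conj(lam) * I, 1j * eps * I]])
X   = np.block([[np.zeros((N, N)), x], [x.conj().T, np.zeros((N, N))]])
Ginv = np.linalg.inv(Lam - X)
g21 = np.trace(Ginv[N:, :N]) / N
print(np.allclose(G_direct, g21))   # True
```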
So for a general polynomial we should:
1. hermitize
2. linearize
3. subordinate
But: do (1) and (2) fit together?
Consider $p = xy$ with $x = x^*$, $y = y^*$. For this we have to calculate the operator-valued Cauchy transform of
$$P = \begin{pmatrix} 0 & xy \\ yx & 0 \end{pmatrix}.$$
Linearization means we should split this into sums of matrices in $x$ and matrices in $y$. Write
$$P = \begin{pmatrix} 0 & xy \\ yx & 0 \end{pmatrix} = \begin{pmatrix} x & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & y \\ y & 0 \end{pmatrix} \begin{pmatrix} x & 0 \\ 0 & 1 \end{pmatrix} = XYX.$$
$P = XYX$ is now a selfadjoint polynomial in the selfadjoint variables
$$X = \begin{pmatrix} x & 0 \\ 0 & 1 \end{pmatrix} \qquad \text{and} \qquad Y = \begin{pmatrix} 0 & y \\ y & 0 \end{pmatrix}.$$
$XYX$ has linearization
$$\begin{pmatrix} 0 & 0 & X \\ 0 & Y & -1 \\ X & -1 & 0 \end{pmatrix}.$$
Thus
$$P = \begin{pmatrix} 0 & xy \\ yx & 0 \end{pmatrix}$$
has linearization
$$\begin{pmatrix}
0 & 0 & 0 & 0 & x & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & y & -1 & 0 \\
0 & 0 & y & 0 & 0 & -1 \\
x & 0 & -1 & 0 & 0 & 0 \\
0 & 1 & 0 & -1 & 0 & 0
\end{pmatrix}
=
\begin{pmatrix}
0 & 0 & 0 & 0 & x & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & -1 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 \\
x & 0 & -1 & 0 & 0 & 0 \\
0 & 1 & 0 & -1 & 0 & 0
\end{pmatrix}
+
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & y & 0 & 0 \\
0 & 0 & y & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix},$$
and we can now calculate the operator-valued Cauchy transform of this via subordination.
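Again a sanity check of ours: the block linearization of $P = XYX$ recovers $P$ via the Schur complement $p = -uQ^{-1}v$, with $u = (0, X)$, $Q = \begin{pmatrix} Y & -1 \\ -1 & 0 \end{pmatrix}$, $v = (0, X)^T$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
x = rng.standard_normal((n, n)); x = x + x.T
y = rng.standard_normal((n, n)); y = y + y.T
I, O = np.eye(n), np.zeros((n, n))

X = np.block([[x, O], [O, I]])
Y = np.block([[O, y], [y, O]])
I2, O2 = np.eye(2 * n), np.zeros((2 * n, 2 * n))

u = np.hstack([O2, X])
Q = np.block([[Y, -I2], [-I2, O2]])
v = np.vstack([O2, X])

P = np.block([[O, x @ y], [y @ x, O]])
print(np.allclose(-u @ np.linalg.inv(Q) @ v, P))   # True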
Does the eigenvalue distribution of a polynomial in independent random matrices converge to the Brown measure of the corresponding polynomial in free variables?
Conjecture: Consider $m$ independent selfadjoint Gaussian (or, more generally, Wigner) random matrices $X_N^{(1)}, \ldots, X_N^{(m)}$ and put
$$A_N := p\big(X_N^{(1)}, \ldots, X_N^{(m)}\big), \qquad x := p(s_1, \ldots, s_m).$$
We conjecture that the eigenvalue distribution $\mu_{A_N}$ of the random matrices $A_N$ converges to the Brown measure $\mu_x$ of the limit operator $x$.
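As a quick Monte Carlo illustration of the conjecture (our own script, using the polynomial from the next figure), one can scatter the complex eigenvalues of the random-matrix model; if the conjecture holds, for large $N$ the cloud should fill out the support of the corresponding Brown measure.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N = 500

def wigner(n):
    """Real Wigner matrix, normalized so the spectrum converges to [-2, 2]."""
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2 * n)

X, Y, Z = wigner(N), wigner(N), wigner(N)
A = X @ Y @ Z - 2 * Y @ Z @ X + Z @ X @ Y
ev = np.linalg.eigvals(A)

plt.scatter(ev.real, ev.imag, s=2)
plt.gca().set_aspect("equal")
plt.title("eigenvalues of $XYZ - 2YZX + ZXY$ for Wigner matrices, N = 500")
plt.show()
```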
[Figure: Brown measure of $xyz - 2yzx + zxy$ with $x, y, z$ free semicirculars.]
[Figure: Brown measure of $x + iy$ with $x, y$ free Poisson elements.]