Random matrices and Gaussian multiplicative chaos
Nick Simm, Mathematics Institute, University of Warwick
Joint work with Gaultier Lambert and Dmitry Ostrovsky
Optimal Point Configurations and Orthogonal Polynomials, April 2017, CIEM
Research supported by Leverhulme fellowship ECF-2014-309
The Circular Unitary Ensemble
◮ Let $U_N$ be an $N \times N$ random matrix chosen uniformly from the unitary group.
◮ The joint distribution of the points is an example of a (1d) 'Coulomb gas':
$$P(\theta_1, \ldots, \theta_N) \propto \prod_{j < k} |e^{i\theta_j} - e^{i\theta_k}|^2$$
◮ The eigenvalues $e^{i\theta_1}, \ldots, e^{i\theta_N}$ form a determinantal point process with kernel
$$K_N(\theta, \phi) = \sum_{j=0}^{N-1} p_j(\theta) \overline{p_j(\phi)} = \frac{\sin(N(\theta - \phi)/2)}{\sin((\theta - \phi)/2)}$$
where $p_j(\theta) = e^{ij\theta}$.
◮ Problem: limit theorems for $P_N(\theta) = \det(U_N - e^{i\theta})$ as $N \to \infty$.
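The slides contain no code, but the ensemble is easy to simulate: a Haar-distributed (CUE) unitary can be sampled by QR-factorizing a complex Ginibre matrix and correcting the columns of $Q$ by the phases of the diagonal of $R$, a standard recipe not taken from the slides; the matrix size below is an illustrative choice. A minimal sketch:

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample an n x n Haar-distributed unitary matrix (CUE).

    QR-decompose a complex Ginibre matrix, then rescale the columns of Q
    by the phases of R's diagonal so the law is exactly Haar measure.
    """
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(0)
U = haar_unitary(50, rng)
eigs = np.linalg.eigvals(U)
# Every eigenvalue e^{i theta_j} lies on the unit circle (up to rounding).
print(np.max(np.abs(np.abs(eigs) - 1.0)))
```

The column-phase correction matters: the raw $Q$ returned by a QR routine is not Haar-distributed.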
Characteristic polynomials
Characteristic polynomials of (large) random matrices:
◮ A good model of the Riemann zeta function $\zeta(s)$ high up on the critical line $s = 1/2 + it$ (Keating and Snaith '00).
◮ An interesting example of a log-correlated Gaussian field. E.g. how to compute
$$M_N^* = \max_{\theta \in [0, 2\pi]} \log|\det(U_N - e^{i\theta} I)| \equiv \max_{\theta \in [0, 2\pi]} \log|P_N(\theta)|$$
◮ Using these ideas, it has been conjectured and partially proved that as $N \to \infty$
$$M_N^* = \log(N) - \tfrac{3}{4} \log(\log(N)) + (G_1 + G_2)/2 + o(1)$$
where $G_1, G_2$ are independent standard Gumbel variables. (Fyodorov and Keating '12, Arguin, Belius, Bourgade '15, Paquette and Zeitouni '16, Chhaibi, Madaule and Najnudel '16)
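A Monte Carlo sketch of $M_N^*$, under the assumption that a grid maximum over $\theta$ approximates the true maximum; the value of $N$, the seed and the grid size are illustrative choices, none of them from the slides:

```python
import numpy as np

# Sample one CUE matrix (QR of a complex Ginibre with the usual phase fix)
# and maximize log|P_N(theta)| = sum_j log|e^{i theta_j} - e^{i theta}|
# over a grid of theta values.
rng = np.random.default_rng(1)
N = 200
z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
q, r = np.linalg.qr(z)
d = np.diag(r)
lam = np.linalg.eigvals(q * (d / np.abs(d)))  # eigenvalues e^{i theta_j}

thetas = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
log_abs = np.array([np.sum(np.log(np.abs(lam - np.exp(1j * t)))) for t in thetas])
M_star = log_abs.max()
# The conjectured leading behaviour is log N - (3/4) log log N + O(1).
print(M_star, np.log(N) - 0.75 * np.log(np.log(N)))
```

One sample only illustrates the order of magnitude; the Gumbel fluctuations in the conjecture would require many independent samples to see.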
The logarithm
Theorem (Hughes, Keating and O'Connell '01). Let $\{Z_j\}_{j=1}^{\infty}$ be i.i.d. standard complex Gaussian random variables. Then
$$V_N(\theta) := \log|P_N(\theta)| \xrightarrow{d} V(\theta) := \frac{1}{2} \sum_{k=1}^{\infty} \frac{Z_k}{\sqrt{k}} e^{ik\theta} + \mathrm{c.c.}$$
Key properties of $V(\theta)$:
◮ $V$ is Gaussian and mean zero: $E(V(\theta)) = 0$.
◮ Logarithmic correlations:
$$E(V(\theta) V(\phi)) = \frac{1}{2} \mathrm{Re}\left( \sum_{k=1}^{\infty} \frac{e^{ik(\theta - \phi)}}{k} \right) = -\frac{1}{2} \log|e^{i\theta} - e^{i\phi}|$$
◮ What about $\theta = \phi$? This implies $\mathrm{Var}(V(\theta)) = \infty$.
Conclusion: the limit $V(\theta)$ is a distribution-valued object.
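The logarithmic covariance is the classical Fourier identity $\sum_{k \ge 1} \cos(k\delta)/k = -\log(2\sin(\delta/2))$ for $0 < \delta < 2\pi$, and can be checked numerically; the truncation point and the test angle below are arbitrary choices:

```python
import numpy as np

# Deterministic check of the covariance identity
#   (1/2) sum_{k>=1} cos(k*delta)/k = -(1/2) log|e^{i*delta} - 1|
#                                   = -(1/2) log(2 sin(delta/2)),  0 < delta < 2*pi.
delta = 1.0
k = np.arange(1, 200001)
partial_sum = 0.5 * np.sum(np.cos(k * delta) / k)
closed_form = -0.5 * np.log(2.0 * np.sin(delta / 2.0))
print(partial_sum, closed_form)  # the two values agree to within ~1e-4
```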
The exponential of the logarithm
Naively we might suppose that $|P_N(\theta)| = e^{V_N(\theta)}$ converges to $e^{V(\theta)}$? How to define the exponential of a distribution? Consider measures formally defined by
$$\mu^{(\gamma)}(D) = \int_D e^{\gamma V(\theta) - \frac{\gamma^2}{2} \mathrm{Var}(V(\theta))} \, d\theta$$
The measure $\mu^{(\gamma)}$ is defined by a renormalization procedure $V_\epsilon = V * \phi_\epsilon$. It was shown by Kahane '85 that
◮ $\mu_\epsilon^{(\gamma)}$ converges as $\epsilon \to 0$ to a non-trivial limit if and only if $\gamma < 2$.
◮ This limit does not depend on (Kahane's) cut-off procedure.
This limit defines the measure $\mu^{(\gamma)}$, which is called Gaussian multiplicative chaos (GMC).
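The renormalization can be sketched by truncating the Fourier series of $V$ at a finite number of modes $K$, which plays the role of Kahane's mollification $V * \phi_\epsilon$; the values of $K$, $\gamma$, the seed and the grid below are illustrative choices, not from the slides:

```python
import numpy as np

# Truncate V(theta) = (1/2) sum_k Z_k e^{ik theta}/sqrt(k) + c.c. at K modes
# and form the renormalized density exp(gamma*V_K - (gamma^2/2) Var V_K).
# Each weight has expectation exactly 1; subtracting (gamma^2/2) Var V_K is
# what keeps the density from degenerating as Var V_K -> infinity with K.
rng = np.random.default_rng(2)
K, gamma = 2000, 0.5          # gamma well inside the subcritical range
thetas = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
k = np.arange(1, K + 1)
Z = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
V_K = np.real(np.exp(1j * np.outer(thetas, k)) @ (Z / np.sqrt(k)))
var_K = 0.5 * np.sum(1.0 / k)  # Var V_K(theta) = (1/2) H_K, independent of theta
density = np.exp(gamma * V_K - 0.5 * gamma**2 * var_K)
print(density.mean())          # fluctuates around 1 as K grows
```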
Properties of measures µ ( γ )
◮ Power law spectrum (multifractality): for $0 \le q\gamma^2 < 2$ we have
$$E(\mu^{(\gamma)}[0, r]^q) = C_q r^{\xi(q)}, \qquad \xi(q) = \left(1 + \frac{\gamma^2}{2}\right) q - \frac{\gamma^2}{2} q^2$$
◮ In two dimensions, $V$ is essentially the Gaussian free field, a fundamental object of mathematical physics.
◮ In that context, $e^{\gamma V}$ is used in Liouville quantum gravity to construct a uniform random metric on the sphere (see work and recent surveys of e.g. Berestycki, Duplantier, Rhodes, Sheffield, Vargas, ...).
◮ The distribution of $\mu^{(\gamma)}$ near $\gamma = \gamma_c$ is believed to be closely related to the statistics of $\max_{|z|=1} |P_N(z)|$.
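As a quick sanity check on the exponent formula: $\xi(1) = 1$ for every $\gamma$ (so interval masses scale linearly, as a measure should), and $\xi$ is strictly concave, which is the signature of multifractality. A small script, with illustrative values of $\gamma$ and $q$:

```python
import numpy as np

def xi(q, gamma):
    """Multifractal exponent xi(q) = (1 + gamma^2/2) q - (gamma^2/2) q^2."""
    return (1.0 + gamma**2 / 2.0) * q - (gamma**2 / 2.0) * q**2

gamma = 1.0
print(xi(0.0, gamma), xi(1.0, gamma))  # -> 0.0 1.0, for every gamma
q = np.linspace(0.0, 1.5, 7)
print(np.diff(xi(q, gamma), n=2))      # all negative: xi is strictly concave
```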
The L 2 -phase
The range $0 \le \gamma < \sqrt{2}$ is called the $L^2$-phase. This is because
$$E(\mu^{(\gamma)}(D)^2) = \int_{D \times D} |e^{i\theta} - e^{i\phi}|^{-\gamma^2/2} \, d\theta \, d\phi < \infty$$
if and only if $0 \le \gamma < \sqrt{2}$.

Theorem (Webb '15). Consider
$$\mu_N^{(\gamma)}(D) = \frac{\int_D |P_N(\theta)|^\gamma \, d\theta}{E \int_D |P_N(\theta)|^\gamma \, d\theta}$$
Then for any $\gamma < \sqrt{2}$ we have
$$\mu_N^{(\gamma)} \xrightarrow{d} \mu^{(\gamma)}, \qquad N \to \infty,$$
where $\mu^{(\gamma)}$ is the same measure constructed from Kahane's theory.
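The normalized measure can be sketched by Monte Carlo: sample several CUE matrices, evaluate $|P_N(\theta)|^\gamma$ on a grid, and normalize by the empirical mean of the total mass as a stand-in for the exact expectation $E\int_D |P_N(\theta)|^\gamma \, d\theta$; the values of $N$, $\gamma$, the seed and the grid are illustrative choices:

```python
import numpy as np

# Empirical version of Webb's normalized measure mu_N^{(gamma)} on the
# whole circle: total mass of |P_N(theta)|^gamma d(theta), divided by the
# Monte Carlo average of that mass over independent CUE samples.
rng = np.random.default_rng(3)
N, gamma, n_samples = 100, 1.0, 20
thetas = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
dtheta = thetas[1] - thetas[0]

def cue_eigenvalues(n):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return np.linalg.eigvals(q * (d / np.abs(d)))

masses = []
for _ in range(n_samples):
    lam = cue_eigenvalues(N)
    # log|P_N(theta)| on the grid, summed over eigenvalues
    logp = np.sum(np.log(np.abs(lam[None, :] - np.exp(1j * thetas)[:, None])), axis=1)
    masses.append(np.sum(np.exp(gamma * logp)) * dtheta)
masses = np.array(masses)
normalized = masses / masses.mean()
print(normalized.mean())  # -> 1.0 by construction; the spread shows the fluctuations
```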
Counting statistics in the CUE
Instead of $V_N(\theta)$, we consider counting statistics
$$X_N(\theta) = \sum_{j=1}^{N} \chi_{J(\theta)}(N^\alpha \theta_j), \qquad J(\theta) = [\theta - 1, \theta + 1]$$
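The statistic is easy to evaluate on a sample: count the eigenangles $\theta_j$ with $N^\alpha \theta_j \in [\theta - 1, \theta + 1]$. The values of $N$, $\alpha$ and $\theta$ below are illustrative choices (an $\alpha \in (0,1)$ gives a mesoscopic window), not from the slides:

```python
import numpy as np

# Counting statistic X_N(theta) for one CUE sample: number of eigenangles
# theta_j whose rescaled position N^alpha * theta_j lands in [theta-1, theta+1].
rng = np.random.default_rng(4)
N, alpha, theta = 200, 0.5, 0.0
z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
q, r = np.linalg.qr(z)
d = np.diag(r)
angles = np.angle(np.linalg.eigvals(q * (d / np.abs(d))))  # eigenangles theta_j
X = int(np.sum(np.abs(N**alpha * angles - theta) <= 1.0))
# The window has length 2/N^alpha and the mean density is N/(2*pi),
# so roughly N^{1-alpha}/pi points fall in it on average.
print(X)
```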