Calculating Lyapunov exponents for random products of positive matrices
Mark Pollicott, Warwick University
7 July 2020
Greetings from Kenilworth

This is an 1830 painting of the ruin of Kenilworth Castle by J.M.W. Turner. Of course, this is 190 years out of date: the second picture is a modern photograph of the castle.
Overview

We want to discuss the (top) Lyapunov exponent for random products of matrices: three ways (not) to estimate the Lyapunov exponent, and a fourth way to compute it in the particular case that the matrices are positive.

Why positive matrices? Because the method works. The new ingredient is the improved estimate on the error in the approximation.

Why is it interesting? There are other, different approaches (e.g., Bandtlow-Slipantshuk, Bahsoun, Braviera-Duarte, Galatolo-Monge-Nisoli, ...), but I like this approach because it uses some interesting underlying mathematics, and

"Tudo vale a pena quando a alma não é pequena" ("Everything is worthwhile if the soul is not small") - Fernando Pessoa
A single matrix: basic linear algebra

Let A be a single k × k matrix (k ≥ 2) with entries in ℝ. Some comforting concepts:
- the eigenvalues λ_1, ..., λ_k of the matrix A;
- the determinant det A = ∏_i λ_i ∈ ℝ;
- the spectral radius spr(A) = max_i |λ_i|.

"Computational mathematics is mainly based on two ideas: Taylor series, and linear algebra" - Nick Trefethen

We are going to use both.
Spectral radius formula

The spectral radius also comes from:

Theorem (The spectral radius formula)
spr(A) = lim_{n → +∞} ‖A^n‖^{1/n},

where we can define the norm of a matrix A = (a_ij) by ‖A‖ = max_{i,j} |a_ij|. (The specific choice of norm isn't so important.)

Surprisingly, there doesn't seem to be any reference to this result before Gelfand's paper in 1941. [N.B. After the lecture Michael Benedicks pointed out that there was a similar result by Beurling a few years earlier than that of Gelfand.]

[Photo: I.M. Gelfand]
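As a quick numerical illustration (a sketch, not from the slides), one can watch ‖A^n‖^{1/n} converge to the spectral radius; the example matrix is an assumption chosen for the sketch, and the norm is the max-entry norm defined above.

```python
import numpy as np

# Sketch of Gelfand's formula: spr(A) = lim_n ||A^n||^(1/n),
# with the max-entry norm ||A|| = max_ij |a_ij| and an assumed 2x2 example.
A = np.array([[2.0, 1.0], [1.0, 1.0]])

def gelfand_estimate(A, n):
    """Compute ||A^n||^(1/n) using the max-entry norm."""
    An = np.linalg.matrix_power(A, n)
    return np.max(np.abs(An)) ** (1.0 / n)

# The spectral radius itself, for comparison: here (3 + sqrt(5))/2.
spr = max(abs(np.linalg.eigvals(A)))
# gelfand_estimate(A, n) approaches spr as n grows.
```

The convergence is only O(1/n) in the exponent, which already foreshadows why the analogous "Plan A" for Lyapunov exponents is slow.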
What about two matrices?

Let us begin with two k × k matrices A_1, A_2 (k ≥ 2) with entries in ℝ. Consider weights 0 < p_1, p_2 < 1 (p_1 + p_2 = 1). We can consider:
- the 2^n possible products A_{i_1} A_{i_2} ··· A_{i_n} (i_1, ..., i_n ∈ {1, 2});
- their norms ‖A_{i_1} A_{i_2} ··· A_{i_n}‖; and
- the weights p_{i_1} p_{i_2} ··· p_{i_n} (i_1, ..., i_n ∈ {1, 2}).

We can define the (top) Lyapunov exponent by

λ := lim_{n → +∞} (1/n) Σ_{i_1, ..., i_n} (p_{i_1} ··· p_{i_n}) log ‖A_{i_1} ··· A_{i_n}‖.

[Photo: A. Lyapunov]

Trivial case: if A = A_1 = A_2 then λ = log spr(A).
What about more matrices?

Everything generalizes in the obvious way to more matrices. Consider k × k matrices A_1, ..., A_d with entries in ℝ (d ≥ 2, k ≥ 2). Consider weights 0 < p_1, ..., p_d < 1 (p_1 + ··· + p_d = 1). We can consider:
- the d^n possible products A_{i_1} A_{i_2} ··· A_{i_n} (i_1, ..., i_n ∈ {1, ..., d});
- their norms ‖A_{i_1} A_{i_2} ··· A_{i_n}‖; and
- the weights p_{i_1} p_{i_2} ··· p_{i_n} (i_1, ..., i_n ∈ {1, ..., d}).

We can define the (top) Lyapunov exponent by

λ := lim_{n → +∞} (1/n) Σ_{i_1, ..., i_n} (p_{i_1} ··· p_{i_n}) log ‖A_{i_1} ··· A_{i_n}‖.

However, henceforth I will usually restrict to the case of two 2 × 2 matrices (mainly to avoid the problem of mixing up k and d) and take p_1 = p_2 = 1/2 (mainly to cut down on notation).
How easy is it to compute λ?

Question: Given matrices A_1, A_2 and 0 < p_1, p_2 < 1, how easy is it to compute λ?

Sir John Kingman (1973): "Pride of place among the unsolved problems of subadditive ergodic theory must go to calculation of the value λ ... and indeed this usually seems to be a problem of some depth."

Yuval Peres (1992): "We turn now to the excruciating problem of the subject: Devise reasonably general and effective algorithms for explicit calculation (or at least approximation) of Lyapunov exponents."

At least we have the definition to work from...
Plan A: compute λ using the definition

We can try this out with a simple example. Consider

A_1 = (2 1; 1 1) and A_2 = (3 1; 2 1), with p_1 = p_2 = 1/2.

Then working from the definition:
- (1/1) · (1/2) Σ_i log ‖A_i‖ = 1.77767...
- (1/2) · (1/4) Σ_{i,j} log ‖A_i A_j‖ = 1.45723...
- (1/3) · (1/8) Σ_{i,j,k} log ‖A_i A_j A_k‖ = 1.35236...
- (1/4) · (1/16) Σ_{i,j,k,l} log ‖A_i A_j A_k A_l‖ = 1.30008...
- (1/5) · (1/32) Σ_{i,j,k,l,m} log ‖A_i A_j A_k A_l A_m‖ = 1.26872...

Unfortunately, this doesn't converge particularly quickly (typically O(1/n)).

"In general, things either work out or they don't, and if they don't, you figure out something else, a plan B. There's nothing wrong with plan B" - Dick Van Dyke
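Plan A can be sketched in a few lines (an illustration, not the author's code): enumerate all 2^n products and average. The printed values are reproduced here with the sum-of-entries norm; since all matrix norms are equivalent, the choice of norm only affects the finite-n values, not the limit.

```python
import itertools
import numpy as np

A = [np.array([[2.0, 1.0], [1.0, 1.0]]),   # A_1
     np.array([[3.0, 1.0], [2.0, 1.0]])]   # A_2

def plan_a(n):
    """(1/n) * sum over all 2^n words of (1/2^n) * log ||A_{i1} ... A_{in}||,
    using the sum-of-entries norm (any equivalent norm gives the same limit)."""
    total = 0.0
    for word in itertools.product(range(2), repeat=n):
        P = np.eye(2)
        for i in word:
            P = P @ A[i]          # build the product for this word
        total += np.log(P.sum())  # equal weights p_1 = p_2 = 1/2
    return total / (n * 2 ** n)
```

The cost grows like 2^n, so even ignoring the O(1/n) convergence this brute-force approach is quickly infeasible.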
A pointwise approach to Lyapunov exponents

We first need to introduce the following notation. Let Σ = {1, 2}^ℕ; then we can write x = (x_n)_{n=1}^∞ ∈ Σ. Let μ = (p_1, p_2)^ℕ be the usual Bernoulli measure on Σ.

Theorem (Furstenberg-Kesten, Sir John Kingman)
For μ-almost all x ∈ Σ,
(1/n) log ‖A_{x_1} ··· A_{x_n}‖ → λ, as n → +∞.

This is a (subadditive) ergodic theorem.

[Photos: H. Furstenberg (2020 Abel Prize winner), Sir John Kingman, H. Kesten]
Plan B: compute λ using random products

Let us try a little experiment. We can (again) let A_1 = (2 1; 1 1) and A_2 = (3 1; 2 1), with p_1 = p_2 = 1/2.

We can compute (for example, 15) values of (1/1000) log ‖A_{x_1} ··· A_{x_1000}‖ for products of "random" choices of 1000 matrices:

1.14649...  1.14777...  1.14924...  1.15448...  1.15181...
1.14341...  1.14569...  1.15094...  1.14975...  1.14683...
1.15213...  1.14924...  1.13802...  1.15244...  1.14983...

This suggests the value of the Lyapunov exponent to a couple of decimal places, but it is not clear how to get a rigorous estimate, so let us turn to a third approach.
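Plan B can be sketched as a Monte Carlo simulation (again an illustration, not the author's code; the seed and the renormalisation scheme are implementation choices). Renormalising the running product at each step avoids floating-point overflow, and the accumulated logs telescope back to log ‖A_{x_1} ··· A_{x_n}‖.

```python
import numpy as np

A = [np.array([[2.0, 1.0], [1.0, 1.0]]),   # A_1
     np.array([[3.0, 1.0], [2.0, 1.0]])]   # A_2

def plan_b(n=1000, seed=0):
    """One sample of (1/n) log ||A_{x_1} ... A_{x_n}|| for a random word x,
    with p_1 = p_2 = 1/2 and the sum-of-entries norm."""
    rng = np.random.default_rng(seed)
    P = np.eye(2)
    log_norm = 0.0
    for _ in range(n):
        P = P @ A[rng.integers(2)]   # pick A_1 or A_2 with probability 1/2
        s = P.sum()                  # norm of the running product
        log_norm += np.log(s)        # logs telescope to the full product's norm
        P /= s                       # renormalise to keep entries O(1)
    return log_norm / n

# Repeated calls with different seeds scatter around λ ≈ 1.14..., as on the slide.
```

The scatter of the samples gives a heuristic error bar, but no rigorous bound: the fluctuations are only controlled in probability, not surely.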
Positive matrices

Henceforth we will restrict to positive matrices. We say that the matrices A_1, A_2 are positive if all the entries are strictly larger than zero. When k = 2 we can write

A_1 = (a^(1)_11 a^(1)_12; a^(1)_21 a^(1)_22) and A_2 = (a^(2)_11 a^(2)_12; a^(2)_21 a^(2)_22),

and then we require a^(1)_ij > 0 and a^(2)_ij > 0 for 1 ≤ i, j ≤ 2.

Let A_i : ℝ² → ℝ² (i = 1, 2) be the usual linear action of the matrix A_i. We can consider the positive quadrant ℝ²_+ = {(x, y) ∈ ℝ² : x, y ≥ 0} ⊂ ℝ². Clearly positivity implies A_i(ℝ²_+) ⊂ ℝ²_+ for i = 1, 2.
Restriction to a simplex

We can consider the "restriction" to the simplex Δ = {(x, y) ∈ ℝ²_+ : x + y = 1}. Let Â_i : Δ → Δ (i = 1, 2) be the projective action defined by

(x, y) ↦ A_i(x, y) / ‖A_i(x, y)‖_1.

Since A_1, A_2 are positive we have Â_i(Δ) ⊂ int(Δ).

We can just take the first coordinate of Â_i(x, y) = (ξ, η) to get a map T_i : [0, 1] → [0, 1] defined by T_i : x ↦ ξ.
The contractions of [0, 1]

Of course we can explicitly write expressions for T_i in terms of the entries of A_i (i = 1, 2). More precisely, to the positive matrices

A_1 = (a^(1)_11 a^(1)_12; a^(1)_21 a^(1)_22) and A_2 = (a^(2)_11 a^(2)_12; a^(2)_21 a^(2)_22)

we associate the maps T_1 : [0, 1] → [0, 1] and T_2 : [0, 1] → [0, 1], which by a little calculation are Möbius maps given by

T_i(x) = ((a^(i)_11 − a^(i)_12) x + a^(i)_12) / ((a^(i)_11 + a^(i)_21 − a^(i)_12 − a^(i)_22) x + a^(i)_12 + a^(i)_22)   (i = 1, 2).

The positivity of the matrices ensures that T_1 and T_2 are both (eventually) contractions of the interval.
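A quick sketch (using the same assumed example matrices as before) confirms that the Möbius formula for T_i agrees with the projective action, i.e. with normalising A_i(x, 1−x) by the sum of its coordinates:

```python
import numpy as np

A = [np.array([[2.0, 1.0], [1.0, 1.0]]),   # A_1
     np.array([[3.0, 1.0], [2.0, 1.0]])]   # A_2

def T(i, x):
    """The Möbius map on [0,1] from the slide's explicit formula."""
    a = A[i]
    num = (a[0, 0] - a[0, 1]) * x + a[0, 1]
    den = (a[0, 0] + a[1, 0] - a[0, 1] - a[1, 1]) * x + a[0, 1] + a[1, 1]
    return num / den

def T_via_projection(i, x):
    """First coordinate of A[i](x, 1-x), renormalised back onto the simplex."""
    v = A[i] @ np.array([x, 1.0 - x])
    return v[0] / v.sum()
```

Positivity also shows directly that T_i([0, 1]) lies strictly inside (0, 1): T_i(0) = a_12/(a_12 + a_22) and T_i(1) = a_11/(a_11 + a_21) are both in (0, 1).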