Matrix-correlated random variables: A statistical physics and signal processing duet

Florian Angeletti
Work in collaboration with Hugo Touchette, Patrice Abry and Eric Bertin.
13 January 2015

Outline: Introduction · Duality · Statistical properties · Limit laws for the sum · Large deviation
Presentation
- Thesis: "Sums and extremes in statistical physics and signal processing"
  Advisors: Eric Bertin and Patrice Abry, Physics Laboratory of ENS Lyon
- Postdoc: NITheP, Stellenbosch, South Africa, working with Hugo Touchette on large deviation theory
- Themes:
  - Application of statistical physics to signal processing
  - Extreme statistics
  - Random vectors with matrix representation
  - Large deviation functions
Out-of-equilibrium statistical physics
- At equilibrium:
  - Microcanonical ensemble: p(x_1, ..., x_n) = const
  - Canonical ensemble: p(x_1, ..., x_n) ∝ e^{-β H(x_1, ..., x_n)}
- Out of equilibrium:
  - Constant flow of heat or particles
  - Dynamic description
  - Stationary distribution?
ASEP
A simple and iconic out-of-equilibrium system:
- Asymmetric: particles only move from left to right
- Exclusion: one particle per site
- Creation rate α, destruction rate β
Matrix-correlated random variables
How do we describe the stationary solution?
Matrix product ansatz (Derrida et al. 1993):
p(x_1, ..., x_n) = ⟨W| R(x_1) ... R(x_n) |V⟩ / ⟨W| (R(0) + R(1))^n |V⟩
- Matrix R(x)
- Long-range correlation
- Similar solution for 1D diffusion-reaction systems
- Formal similarity with DMRG
Objectives
Mathematical model: p(x_1, ..., x_n) ≈ R(x_1) ... R(x_n)
Study the statistical properties of these models:
- Hidden Markov model representation
- Large deviation functions
- Limit distributions for the sums
- Limit distributions for the extremes
- Signal processing applications
- How topology induces correlation
Then go back to physical models.
Matrix representation
p(x_1, ..., x_n) = L(R(x_1) ... R(x_n)) / L(E^n)
- L: linear form, L(M) = tr(A^T M), with A a d × d positive matrix
- R(x): d × d positive matrix function
- Structure matrix: E = ∫ R(x) dx
- Probability density function matrix P: R_{i,j}(x) = E_{i,j} P_{i,j}(x)
- d > 1: non-commutativity ⟹ correlation
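The definition above can be evaluated directly with NumPy. The following is a minimal sketch on a hypothetical 2 × 2 example (the rates `mu`, the structure matrix `E` and the matrix `A` are illustrative choices, not values from the talk), with exponential component densities P_{i,j}(x) = μ_{i,j} e^{-μ_{i,j} x}:

```python
import numpy as np

# Hypothetical 2x2 example (illustrative, not from the talk):
# P_ij(x) = mu_ij * exp(-mu_ij * x) for x >= 0, so R_ij(x) = E_ij * P_ij(x).
mu = np.array([[1.0, 2.0], [3.0, 0.5]])   # rates of the component densities
E = np.array([[0.6, 0.4], [0.2, 0.8]])    # structure matrix E = integral of R(x)
A = np.ones((2, 2))                        # positive matrix defining L

def R(x):
    # Density-weighted structure matrix R_ij(x) = E_ij * P_ij(x)
    return E * mu * np.exp(-mu * x)

def L(M):
    # Linear form L(M) = tr(A^T M)
    return np.trace(A.T @ M)

def joint_density(xs):
    # p(x_1, ..., x_n) = L(R(x_1) ... R(x_n)) / L(E^n)
    M = np.eye(2)
    for x in xs:
        M = M @ R(x)
    return L(M) / L(np.linalg.matrix_power(E, len(xs)))
```

Since each P_{i,j} integrates to 1, we get ∫ R(x) dx = E elementwise, so the density is normalized by construction; for n = 1 the marginal integrates to 1.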
Correlation
Product structure: p(x_1, ..., x_n) = L(R(x_1) ... R(x_n)) / L(E^n)
Moment matrix: Q(q) = ∫ x^q R(x) dx
⟨X_k^p⟩ = L(E^{k-1} Q(p) E^{n-k}) / L(E^n)
⟨X_k X_l⟩ = L(E^{k-1} Q(1) E^{l-k-1} Q(1) E^{n-l}) / L(E^n)
⟨X_k X_l X_m⟩ = L(E^{k-1} Q(1) E^{l-k-1} Q(1) E^{m-l-1} Q(1) E^{n-m}) / L(E^n)
...
Stationarity
Translation invariance: p(X_{k_1} = x_1, ..., X_{k_l} = x_l) = p(X_{c+k_1} = x_1, ..., X_{c+k_l} = x_l)
Sufficient condition: [A^T, E] = A^T E − E A^T = 0, i.e. ∀M, L(M E) = L(E M)
Then:
p(X_k = x) = L(R(x) E^{n-1}) / L(E^n)
p(X_k = x, X_l = y) = L(R(x) E^{l-k-1} R(y) E^{n-|l-k|-1}) / L(E^n)
p(X_k = x, X_l = y, X_m = z) = L(R(x) E^{l-k-1} R(y) E^{m-l-1} R(z) E^{n-|m-k|-1}) / L(E^n)
Numerical generation
p(x_1, ..., x_n) = L(R(x_1) ... R(x_n)) / L(E^n)
How do we generate a random vector X for a given triple (A, E, P)?
Expand the matrix product:
p(x_1, ..., x_n) = (1 / L(E^n)) Σ_{Γ ∈ {1,...,d}^{n+1}} A_{Γ_1, Γ_{n+1}} E_{Γ_1, Γ_2} P_{Γ_1, Γ_2}(x_1) ... E_{Γ_n, Γ_{n+1}} P_{Γ_n, Γ_{n+1}}(x_n)
p(x_1, ..., x_n) = Σ_Γ P(Γ) P(X | Γ)
Γ: hidden Markov chain
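The expansion above suggests a two-step sampler: draw the hidden chain Γ (with P(Γ) ∝ A_{Γ_1,Γ_{n+1}} Π_k E_{Γ_k,Γ_{k+1}}), then draw each X_k from P_{Γ_k,Γ_{k+1}}. Because A couples Γ_1 and Γ_{n+1}, one can first sample the endpoint pair, then propagate forward with backward weights h_k. A minimal sketch on the same hypothetical 2 × 2 exponential example as before (`mu`, `E`, `A` are illustrative, not from the talk):

```python
import numpy as np

# Hypothetical 2x2 exponential example (illustrative, not from the talk).
mu = np.array([[1.0, 2.0], [3.0, 0.5]])
E = np.array([[0.6, 0.4], [0.2, 0.8]])
A = np.ones((2, 2))
d = 2

def sample_vector(n, rng):
    # Endpoint pair: P(G_1 = a, G_{n+1} = b) proportional to A[a, b] * (E^n)[a, b]
    En = np.linalg.matrix_power(E, n)
    w = A * En
    a = rng.choice(d, p=w.sum(axis=1) / w.sum())
    # Backward weights h_k(i) = sum_b (E^{n+1-k})[i, b] * A[a, b]
    h = [None] * (n + 2)
    h[n + 1] = A[a, :].copy()
    for k in range(n, 0, -1):
        h[k] = E @ h[k + 1]
    gamma = [a]
    x = []
    for k in range(1, n + 1):
        i = gamma[-1]
        # Non-homogeneous transition: P(G_{k+1} = j | G_k = i) = E[i, j] h_{k+1}(j) / h_k(i)
        p = E[i, :] * h[k + 1] / h[k][i]
        p = p / p.sum()
        j = rng.choice(d, p=p)
        gamma.append(j)
        # X_k | Gamma drawn from the component density P_{G_k, G_{k+1}}
        x.append(rng.exponential(1.0 / mu[i, j]))
    return np.array(x), np.array(gamma)
```

The backward recursion h_k = E h_{k+1} guarantees that the transition probabilities at each step sum to one, so the sampler reproduces P(Γ) exactly rather than approximately.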
Hidden Markov model
- Markov chain Γ
- Observable X = X_1, ..., X_n
- X_k | Γ_k is distributed according to the pdf p(X_k | Γ_k)
Hidden Markov chain representation
Hidden Markov chain Γ with conditional pdf (X | Γ):
p(Γ) = A_{Γ_1, Γ_{n+1}} Π_k E_{Γ_k, Γ_{k+1}} / L(E^n)
p(X_k = x | Γ) = P_{Γ_k, Γ_{k+1}}(x)
E non-stochastic ⟹ non-homogeneous Markov chain
Specific non-homogeneous hidden Markov model: Hidden Markov model ⟷ Matrix representation
Stationary time series design
Generation of random vectors with prescribed correlation and marginal distribution:
- Matrix representation: choice of (A, E, P)
- Hidden Markov model: numerical generation
- Higher-order dependency structure: correlation of squares
[Figure: realization with prescribed marginal, correlation and square correlation]
Dual representation
Matrix representation ⟷ Hidden Markov model
- Matrix representation: algebraic properties, statistical properties
- Hidden Markov model: 2-layer model (correlated layer + independent layer), efficient synthesis computation
Correlation and Jordan decomposition
⟨X_k X_l⟩ = L(E^{k-1} Q(1) E^{l-k-1} Q(1) E^{n-l}) / L(E^n)
The dependency structure of X depends on the behavior of E^n.
- λ_k: eigenvalues of E ordered by real part, ℜ(λ_1) > ℜ(λ_2) > ... > ℜ(λ_r)
- J_{k,l}: Jordan block associated with eigenvalue λ_k
E = B^{-1} diag(J_{1,1}, ..., J_{k,l}, ...) B, where each J_{k,l} is bidiagonal, with λ_k on the diagonal and 1 on the superdiagonal.
Dependency structure
Case 1: Short-range correlation. λ_2 exists:
⟨X_k X_l⟩ − ⟨X_k⟩⟨X_l⟩ ≈ Σ_{m>1} α_m (λ_m / λ_1)^{|k−l|}
Case 2: Constant correlation. More than one block J_{1,k}: constant correlation term.
Case 3: Long-range correlation. J_{1,k} with size p > 1:
⟨X_k X_l⟩ − ⟨X_k⟩⟨X_l⟩ ≈ P(k/n, (k−l)/n, l/n), with P ∈ ℝ[X, Y, Z]
[Figures: Corr(1, k) decaying with k (case 1); correlation as a function of (k, l) (case 3)]
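Case 1 can be checked numerically from the moment formulas: for an irreducible E the covariance decays geometrically at rate λ_2/λ_1. A minimal sketch on a hypothetical 2-state example (the matrices `E` and `m` are illustrative; A = I is chosen so that [A^T, E] = 0 and the chain is stationary, with L(M) = tr(M)):

```python
import numpy as np

# Hypothetical 2-state example (illustrative, not from the talk).
E = np.array([[0.7, 0.3], [0.4, 0.6]])   # irreducible structure matrix
m = np.array([[1.0, 2.0], [3.0, 4.0]])   # conditional means of the P_ij
Q = E * m                                 # moment matrix Q(1) = integral of x R(x) dx

def L(M):
    # A = I, so L(M) = tr(A^T M) = tr(M)
    return np.trace(M)

def mp(k):
    return np.linalg.matrix_power(E, k)

def cov(k, l, n):
    # <X_k X_l> - <X_k><X_l> from the matrix moment formulas
    Z = L(mp(n))
    mk = L(mp(k - 1) @ Q @ mp(n - k)) / Z
    ml = L(mp(l - 1) @ Q @ mp(n - l)) / Z
    mkl = L(mp(k - 1) @ Q @ mp(l - k - 1) @ Q @ mp(n - l)) / Z
    return mkl - mk * ml

n = 200
# Successive covariance ratios estimate the geometric decay rate lambda_2 / lambda_1
ratio = cov(50, 56, n) / cov(50, 55, n)
lam = np.sort(np.linalg.eigvals(E).real)[::-1]
```

Far from the boundaries (k, n − l large) the ratio matches λ_2/λ_1 to machine precision, since the sub-dominant Jordan contributions are exponentially small.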
Short-range correlation: ergodic chain
- E irreducible ⟺ Γ ergodic
- Irreducible matrix E ⟺ G(E) strongly connected
⟹ Short-range correlation
Constant correlation: identity E
Disconnected components: E = I (identity matrix)
The chain Γ is trapped inside its starting state.
Constant correlation:
⟨X_k X_l⟩ − ⟨X_k⟩⟨X_l⟩ = L(Q(1)^2)/L(E) − (L(Q(1))/L(E))^2
Long-range correlation: linear irreducible E
Irreversible transitions: E bidiagonal, with 1 on the diagonal and ε on the superdiagonal:
E =
[ 1  ε        ]
[    1  ε     ]
[       ⋱  ⋱ ]
[          1  ]
- The chain Γ can only stay in its current state or jump to the next.
- All chains with non-zero probability and the same starting and ending points are equiprobable.
Polynomial correlation:
⟨X_k X_l⟩ ≈ Σ_{r+s+t = d−1} c_{r,s,t} (k/n)^r ((l−k)/n)^s (1 − l/n)^t
General shape of E
Block upper-triangular structure:
E =
[ I_1  *   ...  T_{k,l} ]
[  0   ⋱         *     ]
[  0   0   ...   I_r    ]
- Irreducible blocks I_k
- Irreversible transitions T_{k,l}
- Correlation: mixture of short-range, constant and long-range correlations
[Figure: transition graph with irreducible components]
Summary
- Short-range correlation ⟹ strongly connected component of size s > 1
- More than one weakly connected component ⟹ constant correlation
- Polynomial correlation ⟹ more than one strongly connected component
Necessary but not sufficient conditions.
Random vector sum
S(X) = (1/n) Σ_{i=1}^n X_i
Correlated random variables:
- Law of large numbers?
- Central limit theorem?
- Large deviations?
Two paths:
- Hidden Markov chain representation
- Matrix representation
Hidden Markov path
S(X | Γ): sum of sums of i.i.d. random variables:
S(X | Γ) = Σ_{i,j} Σ_{k=1}^{n ν_{i,j}} (X_k | i, j) ≡ S(X | ν)
ν_{i,j}: fraction of (i → j)-transitions:
ν_{i,j} = card{k : Γ_k = i, Γ_{k+1} = j} / n
Standard convergence theorems apply (law of large numbers, central limit theorem).
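The blockwise decomposition can be made concrete: given a realization of Γ, group the X_k by transition type (i, j); each group is a sum of i.i.d. variables and the total sum is recovered exactly. A minimal sketch on a hypothetical 2-state chain with exponential conditional laws (`T` and `mu` are illustrative, not from the talk):

```python
import numpy as np

# Hypothetical 2-state chain with exponential conditional laws (illustration only).
rng = np.random.default_rng(0)
T = np.array([[0.6, 0.4], [0.2, 0.8]])   # stochastic transition matrix for Gamma
mu = np.array([[1.0, 2.0], [3.0, 0.5]])  # rate of X_k given (Gamma_k, Gamma_{k+1})

n = 1000
gamma = [0]
for _ in range(n):
    gamma.append(rng.choice(2, p=T[gamma[-1]]))
x = np.array([rng.exponential(1.0 / mu[gamma[k], gamma[k + 1]]) for k in range(n)])

# nu[i, j] = fraction of (i -> j) transitions; block_sums[i, j] = sum of the X_k
# emitted on (i -> j) transitions, i.e. a sum of n * nu[i, j] i.i.d. variables.
nu = np.zeros((2, 2))
block_sums = np.zeros((2, 2))
for k in range(n):
    i, j = gamma[k], gamma[k + 1]
    nu[i, j] += 1.0 / n
    block_sums[i, j] += x[k]

S = x.mean()  # S(X) = (1/n) * sum of all blocks: the per-type sums add up exactly
```

Conditionally on ν, each block obeys the standard law of large numbers and central limit theorem, which is the route the slide takes to the limit laws for S(X).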