An introduction to random interlacements
Artem Sapozhnikov, University of Leipzig, 23-25 May 2016
Example: Random walk on the 3-dimensional torus
◮ $(X_k)_{k \ge 0}$, a nearest-neighbor random walk on $\mathbb{Z}^3_n = (\mathbb{Z}/n\mathbb{Z})^3$,
◮ $V^u_n = \mathbb{Z}^3_n \setminus \{X_0, \dots, X_{un^3}\}$, $u > 0$,
◮ red = the largest connected component of $V^u_n$,
◮ blue = the second largest connected component of $V^u_n$.
Simulation by D. Windisch
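The simulated picture can be reproduced in a few lines. The sketch below (my own illustration, not the original simulation code; the function name and parameters are hypothetical) runs the torus walk for $un^3$ steps and returns the sizes of the connected components of the vacant set:

```python
import random
from collections import deque

def vacant_set_components(n, u, seed=0):
    """Simulate a nearest-neighbor random walk on the torus (Z/nZ)^3 for
    u*n^3 steps and return the sizes of the connected components of the
    vacant set V^u_n, sorted in decreasing order."""
    rng = random.Random(seed)
    steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    x = (rng.randrange(n), rng.randrange(n), rng.randrange(n))  # X_0 uniform
    visited = {x}
    for _ in range(int(u * n**3)):
        d = rng.choice(steps)
        x = ((x[0]+d[0]) % n, (x[1]+d[1]) % n, (x[2]+d[2]) % n)
        visited.add(x)
    vacant = {(i,j,k) for i in range(n) for j in range(n) for k in range(n)} - visited
    # Breadth-first search over vacant sites to find connected components.
    sizes, seen = [], set()
    for start in vacant:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            v = queue.popleft()
            size += 1
            for d in steps:
                w = ((v[0]+d[0]) % n, (v[1]+d[1]) % n, (v[2]+d[2]) % n)
                if w in vacant and w not in seen:
                    seen.add(w)
                    queue.append(w)
        sizes.append(size)
    return sorted(sizes, reverse=True)

sizes = vacant_set_components(n=12, u=1.0, seed=1)
print(sizes[:2])  # largest ("red") and second-largest ("blue") components
```

For visual output one would additionally color the two largest components, as in Windisch's simulation; only the component sizes are computed here.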
Questions: $V^u_n = \mathbb{Z}^3_n \setminus \{X_0, \dots, X_{un^3}\}$, $u > 0$.
◮ Structural phase transition: $\exists\, u_c \in (0, \infty)$ such that
  ◮ if $u < u_c$, then $\lim_{n\to\infty} \mathbb{P}\big[|\mathcal{C}_{\max}(V^u_n)| > cn^3\big] = 1$,
  ◮ if $u > u_c$, then $\lim_{n\to\infty} \mathbb{P}\big[|\mathcal{C}_{\max}(V^u_n)| \ll n^3\big] = 1$.
◮ The second largest connected component of $V^u_n$ is small:
  $\lim_{n\to\infty} \mathbb{P}\big[|\mathcal{C}^{(2)}(V^u_n)| \ll n^3\big] = 1$.
◮ Geometric properties of $\mathcal{C}_{\max}(V^u_n)$, for $u < u_c$.
First results: Let $d \ge 3$, $u > 0$, $X_0 \sim U(\mathbb{Z}^d_n)$, and $V^u_n = \mathbb{Z}^d_n \setminus \{X_0, \dots, X_{un^d}\}$. Then:
◮ There exist $c_1 = c_1(d) > 0$, $c_2 = c_2(d) > 0$ such that
  $e^{-c_1 u} \le \liminf_n \mathbb{P}[0 \in V^u_n] \le \limsup_n \mathbb{P}[0 \in V^u_n] \le e^{-c_2 u}$.
◮ If $d$ is large enough and $u$ is small enough, then there exists $c = c(d,u) > 0$ such that
  $\lim_{n\to\infty} \mathbb{P}\big[|\mathcal{C}_{\max}(V^u_n)| > cn^d\big] = 1$.
Benjamini-Sznitman (JEMS, '08)
Main obstruction: the complexity of self-intersections in the random walk trace.
Random interlacements as a local limit: Let $X[a,b] = \{X_a, X_{a+1}, \dots, X_b\}$.
1. Random interlacements at level $u > 0$, $\mathcal{I}^u$, is a random subgraph of $\mathbb{Z}^d$ such that for each finite $K \subset \mathbb{Z}^d$ and each $K' \subseteq K$,
   $\lim_{n\to\infty} \mathbb{P}\big[X[0, un^d] \cap K = K'\big] = \mathbb{P}\big[\mathcal{I}^u \cap K = K'\big]$.
   Sznitman (AM, '10), Windisch (ECP, '11)
2. For any $u > 0$, $\varepsilon \in (0,1)$, $\delta \in (0,1)$, and $\lambda$, there exists a coupling of the random walk $(X_k)_{k \ge 0}$ and the random interlacements $\mathcal{I}^{u(1-\varepsilon)}$, $\mathcal{I}^{u(1+\varepsilon)}$, so that
   $\mathbb{P}\big[\mathcal{I}^{u(1-\varepsilon)} \cap [0, N^\delta]^d \subseteq X[0, uN^d] \cap [0, N^\delta]^d \subseteq \mathcal{I}^{u(1+\varepsilon)}\big] \ge 1 - N^{-\lambda}$.
   Teixeira-Windisch (CPAM, '11)
Some results using the coupling:
1. For $d \ge 3$, there exist $u_1 \le u_2$ such that
  ◮ if $u > u_2$, then for some $\lambda = \lambda(u)$,
    $\lim_{n\to\infty} \mathbb{P}\big[|\mathcal{C}_{\max}(V^u_n)| \le \log^\lambda n\big] = 1$,
  ◮ if $u < u_1$, then there exists $c = c(u)$ such that
    $\lim_{n\to\infty} \mathbb{P}\big[|\mathcal{C}_{\max}(V^u_n)| \ge cn^d\big] = 1$.
2. For $d \ge 5$, there exists $u_3 > 0$ such that for all $u < u_3$,
  ◮ there exists $\eta(u) > 0$ such that for all $\varepsilon > 0$,
    $\lim_{n\to\infty} \mathbb{P}\Big[\Big| \frac{|\mathcal{C}_{\max}(V^u_n)|}{n^d} - \eta(u) \Big| > \varepsilon\Big] = 0$,
  ◮ there exists $\lambda(u)$ such that
    $\lim_{n\to\infty} \mathbb{P}\big[|\mathcal{C}^{(2)}(V^u_n)| \le \log^\lambda n\big] = 1$.
Teixeira-Windisch (CPAM, '11)
Understanding the local picture, I
Basic facts about random walk:
1. Random walk on $\mathbb{Z}^d$ is transient iff $d \ge 3$. Pólya (MA, '21)
   $\Rightarrow$ Random walk on $\mathbb{Z}^d_n$, $d \ge 3$, is locally transient.
2. Lazy random walk on $\mathbb{Z}^d_n$ is rapidly mixing:
   $\sum_{y \in \mathbb{Z}^d_n} \Big| \mathbb{P}[X_k = y] - \frac{1}{n^d} \Big| \le C \exp\Big(-\frac{ck}{n^2}\Big)$,
   see, e.g., Saloff-Coste '97 or Levin-Peres-Wilmer '09.
   $\Rightarrow$ The mixing time is of order $n^2$.
3. The exit time from a ball of radius $r$ is of order $r^2$, and the probability to visit the center before leaving the ball is $\sim c_d\big(\|x\|^{2-d} - r^{2-d}\big)$, see, e.g., Lawler '91.
   $\Rightarrow$ Random walk on $\mathbb{Z}^d_n$ started far away from 0 will need about $n^d$ steps to hit 0.
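Fact 3 is easy to check numerically. A quick Monte Carlo sketch for $d = 3$ (my own illustration; the function name and parameters are not from the lectures):

```python
import random

def mean_exit_time(r, trials=2000, seed=0):
    """Monte Carlo estimate of the expected exit time of a simple random
    walk on Z^3 from the Euclidean ball of radius r, started at 0."""
    rng = random.Random(seed)
    steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    total = 0
    for _ in range(trials):
        x, t = (0, 0, 0), 0
        while x[0]**2 + x[1]**2 + x[2]**2 <= r*r:  # still inside the ball
            d = rng.choice(steps)
            x = (x[0]+d[0], x[1]+d[1], x[2]+d[2])
            t += 1
        total += t
    return total / trials

# Exit times should scale roughly like r^2: doubling r roughly quadruples them.
print(mean_exit_time(4), mean_exit_time(8))
```

The $r^2$ scaling reflects that $\|X_t\|^2 - t$ is a martingale for the simple random walk.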
Understanding the local picture, II
$K$ finite subset of $\mathbb{Z}^d_n$. How does the random walk (re)visit $K$?
1. If $X_0 \sim U(\mathbb{Z}^d_n)$, then $X_0$ is likely to be far from $K$
   $\Rightarrow$ the average time to visit $K$ is of order $n^d$,
2. after visiting $K$, it takes about $n^2$ steps to move far from $K$,
3. after $n^2$ steps, the random walk "forgets" where it started.
Thus,
1. local transience $\Rightarrow$ returns to $K$ occur via rare excursions,
2. rapid mixing $\Rightarrow$ excursions are almost i.i.d., entrance points $\sim \bar e_K(\cdot)$,
3. $K$ is hit within $n^2$ steps with prob. $\sim n^{2-d}\,\mathrm{cap}(K)$
   $\Rightarrow$ the number of excursions up to time $un^d$ is almost $\mathrm{Poi}(u\,\mathrm{cap}(K))$.
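The excursion decomposition can be made concrete. The sketch below (my own illustration; the thresholds `r` and `R` and the function name are arbitrary choices, not from the lectures) counts excursions of the torus walk: a new excursion begins each time the walk enters the small ball $B(0,r)$ after last having been outside the larger ball $B(0,R)$.

```python
import random

def count_excursions(n, u, r, R, seed=0):
    """Count excursions of a nearest-neighbor walk on the torus (Z/nZ)^3:
    an excursion starts when the walk enters B(0,r) after last being
    outside B(0,R)."""
    rng = random.Random(seed)
    steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    def dist2(x):
        # squared torus distance from x to 0
        return sum(min(c, n - c)**2 for c in x)
    x = (rng.randrange(n), rng.randrange(n), rng.randrange(n))
    outside, count = dist2(x) > R*R, 0
    for _ in range(int(u * n**3)):
        d = rng.choice(steps)
        x = tuple((x[i] + d[i]) % n for i in range(3))
        if dist2(x) > R*R:
            outside = True          # moved far away: next visit is a new excursion
        elif dist2(x) <= r*r and outside:
            count += 1              # entered B(0,r) coming from far away
            outside = False
    return count

print(count_excursions(n=20, u=2.0, r=2, R=5, seed=1))
```

By the heuristic above, for $n$ large this count should be approximately Poisson with mean proportional to $u$.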
Understanding the local picture, III
◮ The above heuristics suggest that, as $n \to \infty$, all the "almost"s can be neglected.
◮ Doing so, we define the random interlacements:
1. For each finite $K$, define $N \sim \mathrm{Poi}(u\,\mathrm{cap}(K))$.
2. Take $X^{(1)}_0, \dots, X^{(N)}_0$ i.i.d., with law $\propto \mathbb{P}_\cdot[X_n \notin K \text{ for } n \ge 1]$.
3. Consider independent simple random walks $(X^{(i)}_k)_{k \ge 0}$.
4. Define $\mathcal{I}^u_K$ as the set of vertices in $K$ visited by the walks.
5. Consistency: for $K_1 \subset K_2$, $\mathcal{I}^u_{K_1} \stackrel{d}{=} \mathcal{I}^u_{K_2} \cap K_1$
   $\Rightarrow$ there exists $\mathcal{I}^u$ such that $\mathcal{I}^u_K \stackrel{d}{=} \mathcal{I}^u \cap K$.
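As a worked consequence of this construction (a standard computation, spelled out here for concreteness): $\mathcal{I}^u_K = \emptyset$ exactly when no walk is started, i.e. when $N = 0$, so

```latex
\mathbb{P}[\mathcal{I}^u \cap K = \emptyset]
  = \mathbb{P}[N = 0]
  = e^{-u\,\mathrm{cap}(K)},
\qquad \text{in particular} \qquad
\mathbb{P}[0 \notin \mathcal{I}^u] = e^{-u/\mathrm{g}(0)},
```

using $\mathrm{cap}(\{0\}) = 1/g(0)$. This exponential decay in $u$ matches the bounds $e^{-c_1 u} \le \mathbb{P}[0 \in V^u_n] \le e^{-c_2 u}$ from the "First results" slide.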
Preliminaries on random walks:
◮ $\mathbb{Z}^d$, $d \ge 3$, $y \sim x$ if $\|y - x\| = 1$,
◮ $(X_n)_{n \ge 0}$ nearest-neighbor random walk on $\mathbb{Z}^d$,
◮ Green function: $G(x,y) = \sum_{n=0}^\infty \mathbb{P}_x[X_n = y]$,
◮ $G(x,y) = G(y,x)$,
◮ $G(x,y) = G(0, y-x) =: g(y-x)$,
◮ $g$ is harmonic on $\mathbb{Z}^d \setminus \{0\}$:
  $g(x) = \frac{1}{2d} \sum_{y \sim x} g(y) + \delta_0(x)$, $x \in \mathbb{Z}^d$,
◮ $g(x) < \infty$ for some/all $x \in \mathbb{Z}^d$ iff $d \ge 3$ (transience),
◮ for any $d \ge 3$ there exist $c_1 = c_1(d)$ and $c_2 = c_2(d)$ such that
  $c_1 (1 + \|x\|)^{2-d} \le g(x) \le c_2 (1 + \|x\|)^{2-d}$, $x \in \mathbb{Z}^d$.
Green function and hitting probabilities:
◮ For $K \subset \mathbb{Z}^d$, define
  ◮ the first entrance time to $K$: $H_K = \inf\{n \ge 0 : X_n \in K\}$,
  ◮ the first hitting time of $K$: $\widetilde H_K = \inf\{n \ge 1 : X_n \in K\}$.
◮ $g(0) = \frac{1}{\mathbb{P}_0[\widetilde H_0 = \infty]}$.
  (Proof: mean of a geometric random variable.)
◮ For all finite $K \subset \mathbb{Z}^d$, $d \ge 3$, and all $x \in \mathbb{Z}^d$,
  $\mathbb{P}_x[H_K < \infty] = \sum_{y \in K} G(x,y)\, \mathbb{P}_y[\widetilde H_K = \infty]$.
  (Proof: last exit time decomposition.)
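The identity $g(0) = 1/\mathbb{P}_0[\widetilde H_0 = \infty]$ can be checked by simulation in $d = 3$. A rough sketch (my own; truncating the walk at a finite horizon biases the escape probability slightly upward):

```python
import random

def escape_probability(trials=4000, horizon=1000, seed=0):
    """Estimate P_0[no return to 0] for a simple random walk on Z^3,
    declaring 'escape' if the walk does not return within `horizon` steps."""
    rng = random.Random(seed)
    steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    escapes = 0
    for _ in range(trials):
        x = (0, 0, 0)
        for _ in range(horizon):
            d = rng.choice(steps)
            x = (x[0]+d[0], x[1]+d[1], x[2]+d[2])
            if x == (0, 0, 0):
                break               # returned to the origin
        else:
            escapes += 1            # no return within the horizon
    return escapes / trials

p = escape_probability()
print(p, 1/p)  # g(0) = 1/p; for Z^3 the true values are roughly 0.66 and 1.52
```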
Equilibrium measure and capacity:
◮ $K$ finite subset of $\mathbb{Z}^d$,
◮ equilibrium measure of $K$:
  $e_K(x) = \mathbb{P}_x[\widetilde H_K = \infty]$ if $x \in K$, and $e_K(x) = 0$ if $x \notin K$,
◮ capacity of $K$: $\mathrm{cap}(K) = \sum_x e_K(x)$,
◮ capacity measures "hittability" of sets by the random walk:
  $\min_{y \in K} G(x,y)\, \mathrm{cap}(K) \le \mathbb{P}_x[H_K < \infty] \le \max_{y \in K} G(x,y)\, \mathrm{cap}(K)$,
◮ capacity of a ball $B(n) = \{x \in \mathbb{Z}^d : \|x\| \le n\}$:
  $C_1 n^{d-2} \le \mathrm{cap}(B(n)) \le C_2 n^{d-2}$.
Properties of capacity:
◮ Monotonicity: $\mathrm{cap}(K_1) \le \mathrm{cap}(K_2)$ for $K_1 \subseteq K_2$:
  $\mathrm{cap}(K_1) = \sum_{x \in K_1} \mathbb{P}_x[\widetilde H_{K_1} = \infty]
   = \sum_{x \in K_1} \sum_{y \in K_2} \mathbb{P}_y[H_{K_1} < \infty,\, X_{H_{K_1}} = x]\, \mathbb{P}_y[\widetilde H_{K_2} = \infty]$
  $= \sum_{y \in K_2} \mathbb{P}_y[H_{K_1} < \infty]\, e_{K_2}(y) \le \sum_{y \in K_2} e_{K_2}(y) = \mathrm{cap}(K_2)$.
◮ Subadditivity: for $K_1, K_2 \subset \mathbb{Z}^d$,
  $\mathrm{cap}(K_1 \cup K_2) \le \mathrm{cap}(K_1) + \mathrm{cap}(K_2)$:
  $\mathrm{cap}(K_1 \cup K_2) = \sum_{x \in K_1 \cup K_2} \mathbb{P}_x[\widetilde H_{K_1 \cup K_2} = \infty]
   \le \sum_{x \in K_1} \mathbb{P}_x[\widetilde H_{K_1} = \infty] + \sum_{x \in K_2} \mathbb{P}_x[\widetilde H_{K_2} = \infty]
   = \mathrm{cap}(K_1) + \mathrm{cap}(K_2)$.
◮ $\mathrm{cap}(\{x\}) = \frac{1}{g(0)}$, $\quad \mathrm{cap}(\{x,y\}) = \frac{2}{g(0) + g(y-x)}$.
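The two-point formula can be sanity-checked by Monte Carlo in $d = 3$, estimating $\mathbb{P}_x[\widetilde H_K = \infty]$ for each $x \in K$ with a truncated walk (my own sketch, not from the lectures). In $\mathbb{Z}^3$, $g(0) \approx 1.516$, and harmonicity at 0 gives $g(e_1) = g(0) - 1$, so $\mathrm{cap}(\{0, e_1\}) = 2/(2g(0)-1) \approx 0.98$.

```python
import random

def escape_from_set(K, start, trials=3000, horizon=600, seed=0):
    """Estimate P_start[no return to the set K] for a simple random walk
    on Z^3, truncated at `horizon` steps (a slight upward bias)."""
    rng = random.Random(seed)
    steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    Kset = set(K)
    escapes = 0
    for _ in range(trials):
        x = start
        for _ in range(horizon):
            d = rng.choice(steps)
            x = (x[0]+d[0], x[1]+d[1], x[2]+d[2])
            if x in Kset:
                break               # returned to K
        else:
            escapes += 1
    return escapes / trials

# cap(K) = sum over x in K of P_x[no return to K]
K = [(0,0,0), (1,0,0)]
cap_estimate = sum(escape_from_set(K, x, seed=i) for i, x in enumerate(K))
print(cap_estimate)  # should be close to 2/(2*g(0) - 1), about 0.98
```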