Introduction Main Theorem Proof Discussion

Exact solutions
◮ G_n = Z_n solved by Ising (1925)
◮ G_n = K_n solved by Husimi (1953) and Temperley (1954)
◮ G_n = Z_n^2 solved by Peierls (1936), Kramers & Wannier (1941), Onsager (1944), Yang (1952), Kasteleyn (1963), Fisher (1966)
◮ Smirnov (2006) proved critical interfaces between +/− components have a conformally invariant limit
  ◮ SLE(3) – Schramm–Löwner Evolution
◮ Kasteleyn and Fisher (K-F) related the Ising partition function to perfect matchings
  ◮ Elegant solution on planar graphs in terms of Pfaffians
  ◮ Only tractable on graphs of bounded (small) genus
  ◮ Method not tractable for G_n = Z_n^d with d ≥ 3
Computational Complexity
◮ PARTITION:
  ◮ Input: finite graph G = (V, E), and parameters β, h
  ◮ Output: Ising partition function
◮ CORRELATION:
  ◮ Input: finite graph G = (V, E), a pair u, v ∈ V, and parameters β, h
  ◮ Output: Ising two-point correlation function
◮ SUSCEPTIBILITY:
  ◮ Input: finite graph G = (V, E), and parameters β, h
  ◮ Output: Ising susceptibility

Proposition (Jerrum-Sinclair 1993; Sinclair-Srivastava 2014)
PARTITION, SUSCEPTIBILITY and CORRELATION are #P-hard.
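To make the object concrete: the partition function that PARTITION asks for can be evaluated by brute force in time exponential in n, consistent with exact evaluation being #P-hard. A minimal sketch, assuming the convention Z = Σ_σ exp(β Σ_{uv∈E} σ_u σ_v + h Σ_v σ_v); the function name is illustrative:

```python
import itertools
import math

def ising_partition_function(n, edges, beta, h):
    """Brute-force Z: sum over all 2^n spin assignments sigma in {-1,+1}^V
    of exp(beta * sum_{uv in E} sigma_u sigma_v + h * sum_v sigma_v)."""
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=n):
        energy = beta * sum(spins[u] * spins[v] for u, v in edges)
        energy += h * sum(spins)
        Z += math.exp(energy)
    return Z

# Single edge at h = 0: Z = 2 e^beta + 2 e^{-beta} = 4 cosh(beta)
print(ising_partition_function(2, [(0, 1)], 0.5, 0.0))
```

The loop over 2^n configurations is exactly the obstruction: no polynomial-time exact algorithm is expected, which motivates the approximation schemes later in the talk.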
Markov-chain Monte Carlo
◮ Construct a transition matrix P on Ω which:
  ◮ Is ergodic
  ◮ Has stationary distribution π(·)
◮ Generate random samples with (approximate) distribution π
◮ Estimate π-expectations using sample means

  d(t) := max_{x∈Ω} ‖P^t(x, ·) − π(·)‖ ≤ C α^t,  for some α ∈ (0, 1)

◮ Mixing time quantifies the rate of convergence
  t_mix(δ) := min{t : d(t) ≤ δ}
◮ How does t_mix depend on the size of Ω?
  ◮ For Ising the size of a problem instance is n = |V|
  ◮ If t_mix = O(poly(n)) we have rapid mixing
  ◮ |Ω| = 2^n, so rapid mixing implies only logarithmically many states need be visited to reach approximate stationarity
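For a finite chain these definitions can be computed exactly. A small sketch on a toy two-state chain, taking ‖·‖ to be total-variation distance:

```python
import numpy as np

def d(P, pi, t):
    """d(t) = max_x || P^t(x, .) - pi ||_TV for a finite transition matrix P."""
    Pt = np.linalg.matrix_power(P, t)
    return max(0.5 * np.abs(row - pi).sum() for row in Pt)

def t_mix(P, pi, delta):
    """Smallest t with d(t) <= delta."""
    t = 1
    while d(P, pi, t) > delta:
        t += 1
    return t

# Toy two-state chain: second eigenvalue 0.8, stationary distribution (1/2, 1/2)
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])
print(t_mix(P, pi, 0.25))  # here d(t) = 0.5 * 0.8^t, so t_mix(0.25) = 4
```

For this chain the geometric bound holds with C = 0.5 and α = 0.8, illustrating why d(t) decays exponentially once the chain is ergodic.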
Markov chains for the Ising model
◮ Glauber process (arbitrary field) 1963
  ◮ Lubetzky & Sly (2012): rapidly mixing on boxes in Z^2 at h = 0 iff β ≤ β_c
  ◮ Levin, Luczak & Peres (2010): precise asymptotics on K_n for h = 0
  ◮ Not used by computational physicists
◮ Swendsen-Wang process (zero field) 1987
  ◮ Simulates coupling of Ising and Fortuin-Kasteleyn models
  ◮ Long, Nachmias, Ding & Peres (2014): precise asymptotics on K_n
  ◮ Ullrich (2014): rapidly mixing on boxes in Z^2 at all β ≠ β_c
  ◮ Empirically fast; state of the art from 1987 until recently
◮ Jerrum-Sinclair process (positive field) 1993
  ◮ Simulates high-temperature graphs for h > 0
  ◮ Proved rapidly mixing on all graphs at all temperatures for all h > 0
  ◮ No empirical results; not used by computational physicists
◮ Prokofiev-Svistunov worm process (zero field) 2001
  ◮ No rigorous results currently known
  ◮ Empirically, best method known for susceptibility (Deng, G., Sokal)
  ◮ Widely used by computational physicists
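For orientation, the Glauber update listed first resamples one spin at a time from its conditional Ising distribution. A minimal heat-bath sketch; the graph, parameter values, and helper names are illustrative choices, not taken from the slides:

```python
import math
import random

def glauber_step(spins, adj, beta, h, rng):
    """One heat-bath Glauber update: choose a uniformly random site and
    resample its spin from the Ising conditional given its neighbours."""
    v = rng.randrange(len(spins))
    local_field = beta * sum(spins[u] for u in adj[v]) + h
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * local_field))  # P(sigma_v = +1 | rest)
    spins[v] = 1 if rng.random() < p_plus else -1

# 4-cycle at zero field
adj = {0: (1, 3), 1: (0, 2), 2: (1, 3), 3: (0, 2)}
spins = [1, -1, 1, -1]
rng = random.Random(0)
for _ in range(1000):
    glauber_step(spins, adj, 1.0, 0.0, rng)
```

Note the arbitrary-field case is handled by the h term in the local field, which is precisely what the cluster methods below give up.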
Mixing time bound for PS process

Theorem (Collevecchio, G., Hyndman, Tokarev 2014+)
For any temperature, the mixing time of the PS process on a graph G = (V, E) satisfies
  t_mix(δ) = O(Δ(G) m^2 n^5)
with n = |V|, m = |E| and Δ(G) the maximum degree.

Only Markov chain for the Ising model currently known to be rapidly mixing at the critical point for boxes in Z^d
High-temperature expansions and the PS measure
◮ Let C_k = {A ⊆ E : (V, A) has k odd vertices}
◮ Let C_W = {A ⊆ E : W is the set of odd vertices in (V, A)}
◮ PS measure defined on the configuration space C_0 ∪ C_2:

  π(A) ∝ x^{|A|} · { n,  A ∈ C_0;   2,  A ∈ C_2 }

◮ If x = tanh β then:
  ◮ Ising susceptibility χ = 1/π(C_0)
  ◮ Ising two-point correlation function E(σ_u σ_v) = n π(C_uv) / [2 π(C_0)]
◮ PS measure is the stationary distribution of the PS process
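On a graph small enough to enumerate, the identity χ = 1/π(C_0) can be checked directly against the spin definition χ = (1/n) Σ_{u,v} E(σ_u σ_v). A brute-force sketch; the triangle graph and β = 0.5 are arbitrary test choices:

```python
import itertools
import math

def chi_spin(n, edges, beta):
    """chi = (1/n) sum_{u,v} E(sigma_u sigma_v), by brute force over spins."""
    Z = total = 0.0
    for spins in itertools.product((-1, 1), repeat=n):
        w = math.exp(beta * sum(spins[u] * spins[v] for u, v in edges))
        Z += w
        total += w * sum(spins) ** 2  # (sum_v sigma_v)^2 = sum_{u,v} sigma_u sigma_v
    return total / (Z * n)

def chi_worm(n, edges, beta):
    """chi = 1 / pi(C_0), with pi(A) prop. to x^|A| times n on C_0 and 2 on C_2."""
    x = math.tanh(beta)
    w0 = w2 = 0.0
    for mask in range(1 << len(edges)):
        deg = [0] * n
        size = 0
        for i, (u, v) in enumerate(edges):
            if mask >> i & 1:
                deg[u] += 1
                deg[v] += 1
                size += 1
        odd = sum(d % 2 for d in deg)
        if odd == 0:
            w0 += n * x ** size
        elif odd == 2:
            w2 += 2 * x ** size
    return (w0 + w2) / w0  # = 1 / pi(C_0)

triangle = [(0, 1), (1, 2), (0, 2)]
print(chi_spin(3, triangle, 0.5), chi_worm(3, triangle, 0.5))
```

The two numbers agree: the relative weights n on C_0 and 2 on C_2 are exactly what make the configuration-space probability π(C_0) invert to the susceptibility.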
Fully-polynomial randomized approximation schemes

Definition
An fpras for an Ising property f is a randomized algorithm such that for given G, T, and ξ, η ∈ (0, 1) the output Y satisfies
  P[(1 − ξ) f ≤ Y ≤ (1 + ξ) f] ≥ 1 − η
and the running time is bounded by a polynomial in n, ξ^{−1}, η^{−1}.

Combine rapid mixing of the PS process with the general fpras construction of Jerrum-Sinclair (1993):
◮ Let A ⊆ C_0 ∪ C_2 with π(A) ≥ 1/S(n) for some polynomial S(n)
◮ The following defines an fpras for π(A):
  ◮ Let R(G) be our upper bound for t_mix(δ) with δ = ξ/[8 S(n)]
  ◮ Let Y = 1_A
  ◮ Run the PS process T = R(G) time steps and record Y_T
  ◮ Independently generate 72 ξ^{−2} S(n) such samples and take the sample mean
  ◮ Repeat 6 lg⌈1/η⌉ + 1 such experiments and take the median
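The last two steps are the standard mean-then-median amplification. A sketch using the slide's constants (with rounding placed where convenient, and the PS-process sample replaced by a stand-in biased coin, since the point here is only the estimator's shape):

```python
import math
import random
import statistics

def fpras_estimate(sample_once, xi, eta, S_n, rng):
    """Average 72 * xi^{-2} * S(n) one-bit samples, then take the median of
    independent repetitions to push the failure probability below eta."""
    n_samples = math.ceil(72 * S_n / xi ** 2)
    n_experiments = 6 * math.ceil(math.log2(1 / eta)) + 1
    means = [
        sum(sample_once(rng) for _ in range(n_samples)) / n_samples
        for _ in range(n_experiments)
    ]
    return statistics.median(means)

# Stand-in for "run the PS process R(G) steps and report 1_A": coin with p = 0.3
rng = random.Random(1)
estimate = fpras_estimate(lambda r: 1 if r.random() < 0.3 else 0, 0.2, 0.1, 2, rng)
print(estimate)
```

The sample mean gives a constant-probability (1 ± ξ)-approximation; the median over O(log 1/η) independent experiments boosts the confidence to 1 − η, which is where the polynomial dependence on η^{−1} (in fact log η^{−1}) comes from.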
Prokofiev-Svistunov process

PS proposals:
◮ If A ∈ C_0:
  ◮ Pick uniformly random u ∈ V
  ◮ Pick uniformly random v ∼ u
  ◮ Propose A → A △ uv
◮ If A ∈ C_2:
  ◮ Pick random odd u ∈ V
  ◮ Pick uniformly random v ∼ u
  ◮ Propose A → A △ uv

Metropolize proposals with respect to the PS measure π(·)
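A minimal sketch of one Metropolized step. The acceptance rule used below — delete always, add with probability x — comes from working through the π-ratio and proposal-ratio for these moves, where the degree factors and the n-versus-2 weights cancel and leave min(1, x^{±1}); that cancellation is a working assumption of this sketch and worth re-deriving before relying on it:

```python
import random

def odd_vertices(A, adj):
    """Vertices of odd degree in the spanning subgraph (V, A)."""
    deg = {w: 0 for w in adj}
    for e in A:
        for w in e:
            deg[w] += 1
    return [w for w in sorted(adj) if deg[w] % 2 == 1]

def ps_step(A, adj, x, rng):
    """One Metropolized PS (worm) step on an edge set A in C_0 union C_2."""
    odd = odd_vertices(A, adj)
    u = rng.choice(odd) if odd else rng.choice(sorted(adj))  # C_2 vs C_0 case
    v = rng.choice(sorted(adj[u]))                           # uniform v ~ u
    e = frozenset((u, v))
    if e in A:                  # deletion: acceptance ratio x^{-1} >= 1
        return A - {e}
    if rng.random() < x:        # addition: accept with probability x
        return A | {e}
    return A

# Triangle with x = 0.5; toggling one edge flips exactly two vertex parities,
# so the chain stays inside C_0 union C_2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
rng = random.Random(0)
A = frozenset()
for _ in range(2000):
    A = ps_step(A, adj, 0.5, rng)
```

Each step touches one edge, so a single transition is O(Δ(G)) work; the difficulty, addressed next, is bounding how many such steps are needed.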
Proof of rapid mixing
◮ We use the path method
◮ Consider the transition graph G = (V, E) of the PS process
  ◮ V = C_0 ∪ C_2
  ◮ E = {AA′ : P(A, A′) > 0}
◮ Specify paths γ_{I,F} in G between pairs of states I, F
◮ If no transition is used too often, the process is rapidly mixing
◮ In the most general case, one specifies paths between each pair I, F ∈ C_0 ∪ C_2
◮ For the PS process, it is convenient to only specify paths from C_2 to C_0
[Figure: a path in the transition graph from a state I ∈ C_2 to a state F ∈ C_0]
Lemma (Jerrum-Sinclair-Vigoda 2004)
Consider a Markov chain with state space Ω and stationary distribution π. Let S ⊂ Ω, and specify paths Γ = {γ_{I,F} : I ∈ S, F ∈ S^c}. Then

  t_mix(δ) ≤ [2 + 4 ϕ(Γ) (π(S) π(S^c) + π(S^c)/π(S))] · log(1/[δ min_{A∈Ω} π(A)])

where

  ϕ(Γ) := max_{(I,F)∈S×S^c} |γ_{I,F}| · max_{AA′∈E} [1/(π(A) P(A, A′))] Σ_{(I,F)∈S×S^c : γ_{I,F} ∋ AA′} π(I) π(F)

◮ We choose S = C_2. Elementary to show:

  2mx/n ≤ π(C_2)/π(C_0) ≤ n − 1,   and   π(A) ≥ (x/8)^m for all A

◮ Therefore:

  t_mix(δ) ≤ [m log(8/x) − log δ] · [3 + 2n/(mx)] · ϕ(Γ)
Choice of Canonical Paths
◮ To transition from I to F:
  ◮ Flip each e ∈ I △ F
◮ If (I, F) ∈ C_2 × C_0 then I △ F ∈ C_2
◮ I △ F = A_0 ∪ (∪_{i≥1} A_i)
  ◮ A_0 is a path
  ◮ A_i are disjoint cycles
◮ γ_{I,F} defined by:
  ◮ Traverse A_0
  ◮ ...then A_1
  ◮ ...then A_2
  ◮ ...
[Figure: the symmetric difference I △ F decomposed into the path A_0 and cycles A_1, A_2, ..., traversed in order to move from I to F]
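The decomposition of I △ F can be computed by greedy trail-peeling: walk from one odd vertex until stuck (necessarily at the other odd vertex), then peel closed trails from what remains. A sketch — it produces trails rather than strictly simple paths and cycles, which already suffices for ordering the edge flips:

```python
from collections import defaultdict

def decompose(edges):
    """Split an edge set with exactly 0 or 2 odd-degree vertices into an
    open trail A_0 (empty if all degrees are even) plus closed trails A_i."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    odd = sorted(w for w in adj if len(adj[w]) % 2 == 1)

    def peel(start):
        trail, u = [], start
        while adj[u]:             # walk greedily, consuming each edge once
            v = min(adj[u])
            adj[u].discard(v)
            adj[v].discard(u)
            trail.append((u, v))
            u = v
        return trail

    A0 = peel(odd[0]) if odd else []
    cycles = []
    for w in sorted(adj):
        while adj[w]:
            cycles.append(peel(w))  # leftover degrees are even, so trails close
    return A0, cycles

# Path 0-1-2 plus a disjoint triangle 3-4-5
A0, cycles = decompose([(0, 1), (1, 2), (3, 4), (4, 5), (3, 5)])
print(len(A0), [len(c) for c in cycles])
```

A parity argument guarantees the open walk can only get stuck at the other odd vertex, so A_0 always connects the two odd vertices of I △ F, matching the picture on the slide.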