Perfect sampling using dynamic programming
Séminaire Équipe CALIN
Christelle Rovetta, Équipe AMIB(io), Laboratoire LIX - Inria Saclay - LRI - RNALand, April
What is Athena doing?
State space
(figure: small example state spaces with transition probabilities 0.1, 0.2, 0.7 and 0.2, 0.8)
Markov chain
(figure: empirical visit frequencies 0.10, 0.20, 0.60, 0.05, 0.05 converging towards the stationary values 0.12, 0.16, 0.63, 0.03, 0.04)
◮ Arithmetic mean π̂
◮ Stationary distribution π (solution of πP = π)
Sample the stationary distribution
◮ State space: |S| < ∞
◮ Ergodic Markov chain (X_n)_{n∈Z} on S
◮ Stationary distribution π (figure: the stationary distribution from the previous slide)
◮ Sample a random object according to π
◮ Very large S
Markov Chain Monte Carlo (MCMC)
Markov chain convergence theorem: for every initial distribution, X_n ∼ π when n → ∞.
Simulate the Markov chain:
◮ (U_n)_{n∈Z} an i.i.d. sequence of random variables
◮ X_0 = x ∈ S, X_{n+1} = update(X_n, U_{n+1})
◮ How to detect the stopping criterion?
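As a sketch, the deterministic update X_{n+1} = update(X_n, U_{n+1}) can be implemented by inverse transform on each row of the transition matrix; the matrix below is illustrative, not the chain from the slides:

```python
import random

# Transition matrix of a small ergodic chain on S = {0, 1, 2}
# (illustrative values, not from the talk).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]

def update(x, u):
    """Deterministic update: map state x and a uniform u in [0,1) to the next state."""
    acc = 0.0
    for y, p in enumerate(P[x]):
        acc += p
        if u < acc:
            return y
    return len(P) - 1

def simulate(x0, n, rng=random.Random(0)):
    """Run the chain n steps forward from x0."""
    x = x0
    for _ in range(n):
        x = update(x, rng.random())
    return x
```

Running `simulate` for large n gives a state approximately distributed as π, but nothing in the forward run tells us when n is large enough; that is exactly the stopping-criterion problem of the slide.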
Perfect sampling algorithm
◮ Perfect sampling algorithm [Propp - Wilson, 1996]
◮ Produces y ∼ π
◮ Stopping criterion automatically detected
◮ Uses coupling from the past: run the chain from times −1, −2, ..., −n towards 0, reusing the same randomness U_{−n}, ..., U_{−1}
◮ Starts from all states, complexity at least in O(|S|)
◮ Find strategies (monotone chains, envelopes, ...)
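Coupling from the past can be sketched on a toy chain; the transition matrix and `update` are illustrative assumptions, only the doubling scheme and the reuse of past randomness are the Propp - Wilson algorithm:

```python
import random

# Illustrative ergodic chain on {0, 1, 2} (not the chain from the talk).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]
S = range(len(P))

def update(x, u):
    """Inverse-transform update on row P[x]."""
    acc = 0.0
    for y, p in enumerate(P[x]):
        acc += p
        if u < acc:
            return y
    return len(P) - 1

def cftp(rng=random.Random(1)):
    """Propp-Wilson: start from ALL states at time -n, doubling n until
    every trajectory reaches the same state at time 0; that common state
    is an exact sample from pi."""
    us = []                           # us[k] drives the step from time -(k+1) to -k
    n = 1
    while True:
        while len(us) < n:
            us.append(rng.random())   # extend the SAME randomness further into the past
        xs = set(S)
        for k in reversed(range(n)):  # simulate from time -n up to time 0
            xs = {update(x, us[k]) for x in xs}
        if len(xs) == 1:
            return xs.pop()
        n *= 2
```

Note that the set of trajectories can only shrink, and that reusing the old uniforms when n doubles is essential: redrawing them would bias the output.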
Queueing networks ◮ Introduced by Erlang in 1917 to describe the Copenhagen telephone exchange ◮ Queues are everywhere in computing systems ◮ Analyze various kinds of system performance (average waiting time, expected number of customers waiting, ...) ◮ Usually modeled by an ergodic Markov chain ◮ Computer simulation
Closed queueing network
Customers are not allowed to leave the network.
(figure: 5-queue network with routing probabilities 0.05, 0.5, 0.7, 0.9, 0.5, 0.7, 0.1, 0.05, 0.5, 0.2, 0.5)
◮ K = 5 queues, M = 4 customers
◮ State: x = (2, 0, 1, 1, 0)
◮ Sample π
Product form
◮ Gordon-Newell networks (figure: the 5-queue network from the previous slide)
Gordon-Newell theorem:
π_x = (1 / G(K, M)) ∏_{k∈Q} ρ_k^{x_k}, with G(K, M) = Σ_{x∈S} ∏_{k∈Q} ρ_k^{x_k}.
◮ G(K, M): normalization constant (partition function)
◮ Compute G(K, M) in O(KM) by dynamic programming [Buzen '73]
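Buzen's convolution can be sketched as below; this classical recursion G(k, m) = G(k−1, m) + ρ_k G(k, m−1) assumes infinite-capacity queues, the loads ρ_k being given:

```python
def buzen_G(rho, M):
    """Buzen's convolution algorithm: partition function G(K, M) of a
    Gordon-Newell network with loads rho = (rho_1, ..., rho_K) and M
    customers, in O(K*M) time (infinite-capacity queues)."""
    G = [1.0] + [0.0] * M             # column G(0, m): 1 if m == 0 else 0
    for r in rho:                     # add one queue at a time
        for m in range(1, M + 1):
            G[m] += r * G[m - 1]      # G(k, m) = G(k-1, m) + rho_k * G(k, m-1)
    return G[M]
```

For example, with three queues of load 1 and two customers, G(3, 2) counts the 6 states of the network, since every product term equals 1.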
Introduction
Perfect Sampling For Closed Queueing Networks
Generic diagrams
Application 1: A Boltzmann Sampler
Application 2: RNA folding kinetics
Conclusion
Closed queueing network (monoclass)
◮ K queues ·/M/1 (exponential service rates)
◮ Finite capacity C_k in each queue
◮ Blocking policy: repetitive service, random destination
◮ Strongly connected network
Example (figure: 5-queue network with routing probabilities)
◮ K = 5 queues, M = 3 customers, capacity C = (2, 1, 3, 1, 2)
◮ x = (1, 0, 1, 1, 0)
State space
◮ K queues, M customers, capacity C = (C_1, ..., C_K)
◮ State space: S = { x ∈ N^K | Σ_{k=1}^K x_k = M, ∀k 0 ≤ x_k ≤ C_k }
◮ Number of states (M ≫ K): |S| ≤ binom(M+K−1, K−1) = binom(M+K−1, M), in O(M^{K−1})
Transition
◮ Transition on a state:
t_{i,j}(x) = x − e_i + e_j if x_i > 0 and x_j < C_j,
t_{i,j}(x) = x otherwise (x_i = 0 or x_j = C_j),
where e_i ∈ {0, 1}^K with e_i(k) = 1 if i = k, 0 otherwise.
◮ Transition on a set of states: t(S) := ∪_{x∈S} t(x)
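The transition t_{i,j} and its lift to a set of states can be written directly (a minimal sketch; queue indices are 1-based as on the slides):

```python
def t(i, j, x, C):
    """Transition t_{i,j}: move one customer from queue i to queue j,
    blocked (state unchanged) if queue i is empty or queue j is full."""
    if x[i - 1] > 0 and x[j - 1] < C[j - 1]:
        y = list(x)
        y[i - 1] -= 1
        y[j - 1] += 1
        return tuple(y)
    return x

def t_set(i, j, states, C):
    """Transition lifted to a set of states: t(S) = union of t(x), x in S."""
    return {t(i, j, x, C) for x in states}
```

Since two distinct states can have the same image (one moves, the other is blocked), |t(S)| can be strictly smaller than |S|, which is what makes coupling possible.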
Markov chain modeling
◮ (U_n)_{n∈Z} := (i_n, j_n)_{n∈Z} an i.i.d. sequence of random variables
◮ System described by an ergodic Markov chain: X_0 ∈ S, X_{n+1} = t_{U_{n+1}}(X_n)
◮ Unique stationary distribution π, which is unknown
◮ GOAL: sample from π with the perfect sampling algorithm
Perfect sampling algorithm
Perfect Sampling with States (PSS)
1. n ← 1
2. t ← t_{U_{−1}}
3. While |t(S)| ≠ 1
4.   n ← 2n
5.   t ← t_{U_{−1}} ∘ ... ∘ t_{U_{−n}}
6. Return t(S)
(figure: timeline −n, ..., −8, −4, −2, −1, 0)
◮ PROBLEM: |S| in O(M^{K−1})
◮ Find a strategy!
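PSS can be sketched by enumerating S explicitly, which is exactly the O(|S|) bottleneck the talk addresses; the uniform choice of the pair (i, j) in `routes` is a simplifying assumption, a real network draws events according to its routing probabilities:

```python
import itertools
import random

def t(i, j, x, C):
    """Move one customer from queue i to queue j; blocked if impossible."""
    if x[i - 1] > 0 and x[j - 1] < C[j - 1]:
        y = list(x)
        y[i - 1] -= 1
        y[j - 1] += 1
        return tuple(y)
    return x

def state_space(C, M):
    """All states with M customers and capacities C (size O(M^(K-1)))."""
    return {x for x in itertools.product(*(range(c + 1) for c in C))
            if sum(x) == M}

def pss(C, M, routes, rng=random.Random(0)):
    """Perfect Sampling with States: iterate from ALL states at time -n,
    doubling n until t(S) is a single state."""
    S = state_space(C, M)
    us, n = [], 1
    while True:
        while len(us) < n:
            us.append(rng.choice(routes))   # event U_{-(len(us)+1)}, reused across doublings
        xs = S
        for k in reversed(range(n)):        # apply t_{U_-n}, ..., t_{U_-1}
            i, j = us[k]
            xs = {t(i, j, x, C) for x in xs}
        if len(xs) == 1:
            return xs.pop()
        n *= 2
```

Even on this toy scale the cost per step is proportional to |S|, which motivates replacing the explicit set by a diagram.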
A new strategy
Known strategies are difficult to adapt:
◮ Fixed number of customers (Σ_{k=1}^K x_k = M)
◮ No lattice structure
More structured representation of the state space:
◮ Reduce complexity from O(M^{K−1}) to O(KM^2)
◮ Represent states as paths in a graph
◮ Realize transitions directly on the graph
Diagram
◮ 5 queues, 3 customers, capacity C = (2, 1, 3, 1, 2)
◮ States: x = (0, 0, 2, 0, 1) and y = (1, 0, 1, 1, 0)
◮ Σ_{k=1}^5 x_k = 3, ∀k 0 ≤ x_k ≤ C_k
◮ Diagram (figure: each state drawn as a path on nodes (k, m), queues on the horizontal axis, customers on the vertical axis; x follows (0,0), (1,0), (2,0), (3,2), (4,2), (5,3) and y follows (0,0), (1,1), (2,1), (3,2), (4,3), (5,3))
Diagram
◮ A diagram is a graph that encodes a set of states
◮ A diagram is complete if it encodes all the states
◮ Number of arcs in a diagram: O(KM^2)
Example: K = 5 queues (5 columns of arcs), M = 3 customers (4 rows of nodes), C = (2, 1, 3, 1, 2), slopes between 0 and 3
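Counting the states encoded by the complete diagram illustrates the path representation: a standard dynamic program over the nodes (k, m), with O(KM^2) arcs (a sketch, not code from the talk):

```python
def count_paths(C, M):
    """Number of states encoded by the complete diagram: paths from node
    (0, 0) to (K, M), where the arc entering column k with slope s places
    s customers (0 <= s <= C_k) in queue k."""
    ways = [1] + [0] * M                  # ways[m] = number of paths reaching (k, m)
    for cap in C:                         # process one column of arcs at a time
        nxt = [0] * (M + 1)
        for m in range(M + 1):
            for s in range(min(cap, m) + 1):
                nxt[m] += ways[m - s]     # slope-s arc into node (k, m)
        ways = nxt
    return ways[M]
```

With C = (2, 1, 3, 1, 2) and M = 3, this counts all admissible states of the running example while touching only O(KM^2) arcs instead of enumerating O(M^(K−1)) states.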
States to diagram: function φ
S = { (0, 0, 2, 0, 1), (1, 0, 1, 1, 0) }  →  diagram φ(S) (figure: union of the two paths)

Diagram to states: function ψ
ψ(D) = { (0, 0, 2, 0, 1), (1, 0, 1, 1, 0), (0, 0, 2, 1, 0), (1, 0, 1, 0, 1) } (all paths of D: ψ(φ(S)) may be strictly larger than S)
Transformation function
◮ Galois connection between sets of states and diagrams (figure: S → D via φ, D → ψ(D) via ψ, with S ⊆ ψ(φ(S)))
Transition on a diagram
◮ Transition on a diagram: T_{i,j} = φ ∘ t_{i,j} ∘ ψ
◮ Good properties for perfect sampling:
◮ Preserves inclusion: S ⊆ ψ(D) ⇒ t_{i,j}(S) ⊆ ψ(T_{i,j}(D))
◮ Preserves coupling: |ψ(D)| = 1 ⇒ |ψ(T_{i,j}(D))| = 1
◮ Efficient algorithm to compute transitions T_{i,j} in O(KM^2)
Transition on a set of states
◮ Parameters: K = 5 queues, M = 3 customers, capacity C = (2, 1, 3, 1, 2)
◮ S = { (0, 1, 1, 0, 0), (0, 1, 1, 1, 0), (0, 1, 0, 0, 2), (1, 0, 1, 1, 0) } ⊆ S
◮ Transition t_{4,2}(S)?
Transition t_{4,2} on S

x       t_{4,2}(x)   case
01100   01100        blocked: x_4 = 0
01110   01110        blocked: x_2 = C_2
01002   01002        blocked: x_4 = 0 and x_2 = C_2
10110   11100        moved: x_4 > 0 and x_2 < C_2

◮ t_{4,2}(S) = { (0, 1, 1, 0, 0), (0, 1, 1, 1, 0), (0, 1, 0, 0, 2), (1, 1, 1, 0, 0) }
Compute T_{4,2}(D) on D
(figure: T_{4,2}(D) is computed by decomposing D into layers Stay, Full, Transit and Transit', shifting the arcs concerned by columns 4 and 2 by −1 and +1)
◮ Complexity in O(KM^2), compared with O(M^{K−1}) for the transition on a set of states
Perfect sampling algorithm

Perfect Sampling with States (PSS)
1. n ← 1
2. t ← t_{U_{−1}}
3. While |t(S)| ≠ 1
4.   n ← 2n
5.   t ← t_{U_{−1}} ∘ ... ∘ t_{U_{−n}}
6. Return t(S)

Perfect Sampling with Diagrams (PSD)
1. n ← 1
2. T ← T_{U_{−1}}
3. While |ψ(T(D))| ≠ 1
4.   n ← 2n
5.   T ← T_{U_{−1}} ∘ ... ∘ T_{U_{−n}}
6. Return ψ(T(D))