
Stochastic Processes MATH5835, P. Del Moral, UNSW School of Mathematics & Statistics - PowerPoint presentation (Lecture Notes 3)



  1. Stochastic Processes MATH5835, P. Del Moral, UNSW School of Mathematics & Statistics. Lecture Notes 3. Consultations (RC 5112): Wednesday 3.30 pm - 4.30 pm & Thursday 3.30 pm - 4.30 pm


  3. Citations of the day - David Hilbert (1862-1943): "The art of doing mathematics consists in finding that special case which contains all the germs of generality." - David Hilbert (CSIRO Atherton, QLD)

  5. Google PageRank algorithm. Stanford University patent [Larry Page ⊕ Sergey Brin], 1996.
     - Counts the number and quality of page links → an importance index.
     - Hypothesis: important sites receive more links from others.

  6. Google PageRank - some information. Using the web-spider bot Googlebot:
     - d ≃ 25 × 10^9 web pages (March 2014).
     - d_i outgoing links from each website i ∈ {1, ..., d}.
     - How to use this data?
     - A ranking stochastic model?

  8. Google PageRank - stochastic model 1/4. A stochastic (sparse) matrix on {1, ..., d}:

     P(i, j) = 1/d_i  if j is one of the d_i outgoing links of page i,
     P(i, j) = 0      if d_i = 0 (a.k.a. a dangling node).

     A Markov chain model?

  10. Google PageRank - stochastic model 2/4. More regular Markov transitions:

      M(i, j) = ε P(i, j) + (1 − ε) μ(j)

      with
      - damping factor ε ∈ ]0, 1[ (restart rate);
      - μ(i) = 1/d, uniform on {1, ..., d}.

      WHY? → M(i, j) ≥ (1 − ε) μ(j).

      Consequences for 2 independent surfers (X_n, X'_n) started at distinct sites, with laws p_n(i) = P(X_n = i) and p'_n(i) = P(X'_n = i):

      p_n(i) − p'_n(i) = ???   and   P(X_n ≠ X'_n) = ???   ⊕ Lecture slides 2!
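The construction above can be sketched numerically. This is a minimal illustration on a made-up 4-page web graph (the graph, and the standard fix of replacing a dangling node's row of P by μ so that M is stochastic, are assumptions for the demo, not taken from the slides):

```python
import numpy as np

d = 4
eps = 0.85  # damping factor epsilon in ]0, 1[
links = {0: [1, 2], 1: [2], 2: [0, 1, 3], 3: []}  # page 3 is a dangling node

mu = np.full(d, 1.0 / d)          # uniform restart law mu(j) = 1/d
P = np.zeros((d, d))
for i, out in links.items():
    if out:
        P[i, list(out)] = 1.0 / len(out)  # P(i, j) = 1/d_i on outgoing links
    else:
        P[i] = mu                         # dangling node: jump uniformly (assumed fix)

M = eps * P + (1 - eps) * mu      # M(i, j) = eps * P(i, j) + (1 - eps) * mu(j)
```

By construction every row of M sums to 1 and every entry satisfies the minorisation M(i, j) ≥ (1 − ε) μ(j) highlighted on the slide.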

  15. Google PageRank - stochastic model 3/4. A surfer X_n starting at X_0 = i:

      p_0(j) = P(X_0 = j) = 1_i(j)  ⇔  p_0 := (0, ..., 0, 1, 0, ..., 0)  (1 in the i-th entry)

      ⇓ [forgetting the initial condition]

      P(X_n = j) = P(X_n = j | X_0 = i) = (p_0 M^n)(j) = M^n(i, j) → p_∞(j)  as n ↑ ∞

  16. Google PageRank - stochastic model 4/4. More general situations (i.e. for any p_0):

      p_n = p_0 M^n,  i.e.  p_n(j) = Σ_k p_0(k) M^n(k, j) → p_∞(j)  as n ↑ ∞

      ⇓

      Fixed point equation (invariant/stationary law):  p_∞ = p_∞ M

      → Wolfram MathWorld
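The fixed-point relation p_∞ = p_∞ M can be checked by power iteration, p_{n+1} = p_n M. A sketch on a hypothetical 3-state stochastic matrix (illustrative values, not from the slides):

```python
import numpy as np

# A made-up 3-state stochastic matrix with all entries positive,
# so p_n = p_0 M^n converges geometrically to the stationary law.
M = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

p = np.array([1.0, 0.0, 0.0])  # start at state 0, i.e. p_0 = 1_0
for _ in range(200):
    p = p @ M                  # power iteration p_{n+1} = p_n M

# p is now (numerically) the fixed point: p = p M, and p is a probability law.
```

After enough iterations the iterate no longer moves, which is exactly the invariance equation p_∞ = p_∞ M.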

  17. Google PageRank - ranking.
      - Rate of convergence to equilibrium (admitted):

        ‖p_n − p_∞‖_tv := (1/2) Σ_{i=1}^{d} |p_n(i) − p_∞(i)| ≤ ... ??

      - How to rank sites using the surfer exploration? ⊕ Lecture notes ⊕ next slide
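The rate left "admitted" here follows from the minorisation M(i, j) ≥ (1 − ε) μ(j): a classical coupling/contraction argument gives ‖p_n − p_∞‖_tv ≤ ε^n. The sketch below checks this bound numerically on a made-up 3-state example (P, ε are illustrative assumptions):

```python
import numpy as np

eps = 0.85
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])
mu = np.full(3, 1.0 / 3)
M = eps * P + (1 - eps) * mu      # regularised transitions, as on the slides

def tv(p, q):
    # total variation distance: (1/2) * sum_i |p(i) - q(i)|
    return 0.5 * np.abs(p - q).sum()

p_inf = mu @ np.linalg.matrix_power(M, 500)   # stationary law, numerically

p = np.array([1.0, 0.0, 0.0])
gap = []
for n in range(1, 31):
    p = p @ M
    gap.append(tv(p, p_inf))
    assert gap[-1] <= eps**n + 1e-10          # the admitted geometric bound
```

The total variation gap decreases at least geometrically at rate ε per iteration, matching the convergence plot on the next slide.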

  19. Google PageRank, ε = .85. [Figure: ‖p_n − p_∞‖_tv (vertical axis, 0 to 0.8) against the number of iterations (0 to 50).]

  20. From Monte Carlo to Los Alamos - an introduction to simulation.
      - 3 simple ways to sample elementary random variables.
      - The Metropolis-Hastings model (≃ 1960 [Metropolis, Rosenbluth (×2), Teller (×2), cf. lecture notes]): among the top-10 algorithms of the 20th century.
      - In the 21st century ...

  21. The inverse method. [Figure: a density p(x) with samples X_1, ..., X_n; the distribution function F(x), between 0 and 1, with uniform samples U_1, ..., U_n.]

      Formula:

      F(x) = P(X ≤ x) = ∫_{−∞}^{x} P(X ∈ dy)  ⇒  X := F^{−1}(U)

      Examples: Exp(λ), discrete, binomial, multinomial, ...
      → Wolfram MathWorld ⊕ Section 4.1, pp. 51-53
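For the Exp(λ) example, F(x) = 1 − e^{−λx} inverts in closed form as F^{−1}(u) = −log(1 − u)/λ. A quick sketch (the rate λ = 2 is an arbitrary choice for the demo):

```python
import numpy as np

# Inverse method for X ~ Exp(lam): F(x) = 1 - exp(-lam * x), so
# X := F^{-1}(U) = -log(1 - U) / lam with U ~ Unif[0, 1].
rng = np.random.default_rng(0)
lam = 2.0
U = rng.random(100_000)
X = -np.log(1.0 - U) / lam   # 1 - U lies in (0, 1], avoiding log(0)

# Sanity check: E[X] = 1/lam and Var[X] = 1/lam^2 for the exponential law.
```

The same recipe covers the discrete cases listed on the slide: for a discrete law, F^{−1}(u) picks the first state whose cumulative probability exceeds u.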

  24. The change of variables. Some formulae (U_i independent ∼ Unif[0, 1]):

      Uniform on [a_1, b_1] × [a_2, b_2]:  (X_1, X_2) = (a_1 + (b_1 − a_1) U_1, a_2 + (b_2 − a_2) U_2) ??

      and

      Y_1 := √(−2 log U_1) cos(2π U_2),  Y_2 := √(−2 log U_1) sin(2π U_2) ??

      Uniform on the unit circle ??
      → Lecture notes, Section 4.2, pp. 54-55
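The second pair of formulae is the Box-Muller transform: it maps two independent uniforms to two independent standard Gaussians. A quick numerical check:

```python
import numpy as np

# Box-Muller: with U1, U2 independent Unif[0,1],
#   Y1 = sqrt(-2 log U1) cos(2 pi U2),  Y2 = sqrt(-2 log U1) sin(2 pi U2)
# are independent N(0, 1) random variables.
rng = np.random.default_rng(1)
n = 100_000
U1 = 1.0 - rng.random(n)        # values in (0, 1], avoids log(0)
U2 = rng.random(n)

R = np.sqrt(-2.0 * np.log(U1))  # radius part of the transform
Y1 = R * np.cos(2.0 * np.pi * U2)
Y2 = R * np.sin(2.0 * np.pi * U2)
```

Empirically Y1 and Y2 have mean ≈ 0, standard deviation ≈ 1, and near-zero correlation, as the change-of-variables computation in Section 4.2 predicts.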

  28. Rejection technique. Some formulae (U_i independent ∼ Unif[0, 1]):

      Uniform on [a_1, b_1] × [a_2, b_2]:  (X_1, X_2) = (a_1 + (b_1 − a_1) U_1, a_2 + (b_2 − a_2) U_2) ??

      and

      Y_1 := √(−2 log U_1) cos(2π U_2),  Y_2 := √(−2 log U_1) sin(2π U_2) ??

      Uniform on the unit circle ??
      → Wolfram MathWorld ⊕ Section 4.2, pp. 54-55
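One rejection answer to the unit-circle question: draw points uniformly on the square [−1, 1]² (via the first formula) and keep only those inside the unit disk; the acceptance rate is the area ratio π/4. A sketch (reading "unit circle" as the unit disk is an assumption):

```python
import numpy as np

# Rejection sampling: propose (X1, X2) uniform on [-1, 1]^2,
# accept when X1^2 + X2^2 <= 1; accepted points are uniform on the unit disk.
rng = np.random.default_rng(2)
pts = 2.0 * rng.random((200_000, 2)) - 1.0   # uniform proposals on the square
inside = (pts ** 2).sum(axis=1) <= 1.0       # acceptance test
disk = pts[inside]
rate = inside.mean()                          # acceptance probability = pi / 4
```

Normalising the accepted points to unit length would then give samples uniform on the circle itself, if that reading is intended.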

  32. Boltzmann-Gibbs measures:

      π(dx) := (1/Z_β) e^{−β V(x)} λ(dx)

      Some examples (see also Section 6.4):
      - Ising / Sherrington-Kirkpatrick model: x ∈ {−1, +1}^{{1,...,L}²} with λ(x) = 2^{−L²} and

        V(x) = h Σ_{i ∈ E} x(i) − J Σ_{i ∼ j} θ_{i,j} x(i) x(j)

      - Traveling salesman with m cities e_i: x ∈ G_m with λ(x) = 1/m! and

        V(x) = Σ_{p=1}^{m} d(e_{x(p)}, e_{x(p+1)})

      - Black-box problems: inputs X → numerical code F → outputs Y = F(X), with

        e^{−β V(x)} ≃ 1_{F^{−1}(A)}(x)  ⇒  π = Law(X | X ∈ F^{−1}(A))
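To make the Ising example concrete, a small sketch evaluating the energy V(x) and the unnormalised weight e^{−βV(x)} on an L×L grid, taking θ_{i,j} ≡ 1 and i ∼ j as nearest neighbours (the parameter values are illustrative assumptions, not from the slides):

```python
import numpy as np

# Ising energy V(x) = h * sum_i x(i) - J * sum_{i~j} x(i) x(j) on an L x L
# grid, with theta_{i,j} = 1 and i ~ j the nearest-neighbour relation.
L, h, J, beta = 4, 0.1, 1.0, 0.5   # hypothetical parameters

def V(x):
    # horizontal + vertical nearest-neighbour pair interactions
    pairs = (x[:, :-1] * x[:, 1:]).sum() + (x[:-1, :] * x[1:, :]).sum()
    return h * x.sum() - J * pairs

rng = np.random.default_rng(3)
x = rng.choice([-1, 1], size=(L, L))  # a spin configuration in {-1, +1}^{L x L}
weight = np.exp(-beta * V(x))         # unnormalised Boltzmann-Gibbs weight
```

Summing such weights over all 2^{L²} configurations would give Z_β, which is exactly what becomes intractable for large L and motivates the sampler on the next slide.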

  36. The Metropolis-Hastings sampler. A Markov chain X_{n−1} ⇝ X_n with 2 steps:
      - Propose a transition X_{n−1} = x ⇝ y with some probability density P(x, dy).
      - Accept X_n = y, or reject and keep X_n = x, with acceptance probability

        a(x, y) = min(1, [π(dy) P(y, dx)] / [π(dx) P(x, dy)])

      ⇓  π M = π
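A minimal sketch of these two steps on a finite state space, targeting a Boltzmann-Gibbs law π(i) ∝ e^{−βV_i}, with a symmetric nearest-neighbour proposal so that the acceptance ratio reduces to π(y)/π(x); the 5 states and their energies are made up for the demo:

```python
import numpy as np

# Metropolis-Hastings on the cycle {0, ..., 4}: propose a neighbour y of x
# (symmetric proposal), accept with probability min(1, pi(y) / pi(x)).
rng = np.random.default_rng(4)
V = np.array([0.0, 1.0, 0.5, 2.0, 1.5])   # hypothetical energies
beta = 1.0
w = np.exp(-beta * V)
pi = w / w.sum()                          # target Boltzmann-Gibbs law

N = 200_000
steps = rng.choice([-1, 1], size=N)       # pre-drawn symmetric proposals
us = rng.random(N)                        # pre-drawn uniforms for accept/reject
x = 0
counts = np.zeros(5)
for n in range(N):
    y = (x + steps[n]) % 5                # step 1: propose a neighbour
    if us[n] < min(1.0, w[y] / w[x]):     # step 2: accept with prob pi(y)/pi(x)
        x = y
    counts[x] += 1

empirical = counts / counts.sum()         # occupation law, approximates pi
```

The empirical occupation frequencies converge to π, which is the practical meaning of the invariance π M = π on the slide.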
