random perturbations of dynamical systems

French Complex Systems Summer School "Theory and Practice", August 2007

Random perturbations of dynamical systems
Barbara Gentz, University of Bielefeld
http://www.math.uni-bielefeld.de/~gentz


1. Energy-balance model

For two stable climate regimes to coexist, γ(T) should have three roots, the middle root corresponding to an unstable state. Following [Benzi, Parisi, Sutera & Vulpiani 1983], we model γ(T) by the cubic polynomial

  γ(T) = β (1 − T/T_1)(1 − T/T_2)(1 − T/T_3)

where
⊲ T_1 = 278.6 K and T_3 = 288.6 K are the representative temperatures of the two stable climate regimes
⊲ T_2 = 283.3 K represents an intermediate, unstable regime
⊲ β determines the relaxation time τ of the system in the "temperate climate" state, taken to be 8 years, by

  1/τ = (curvature at T_3) ≃ −(E_0/c) γ′(T_3)

2. Energy-balance model

Introduce
⊲ slow time t = ωs
⊲ "dimensionless temperature" x = (T − T_2)/ΔT with ΔT = (T_3 − T_1)/2 = 5 K

Rescaled equation of motion

  ε dx/dt = −x (x − X_1)(x − X_3)(1 + K cos t) + A cos t

with X_1 = (T_1 − T_2)/ΔT ≃ −0.94 and X_3 = (T_3 − T_2)/ΔT ≃ 1.06

Adiabatic parameter  ε = ωτ · 2(T_3 − T_2)/ΔT ≃ 1.16 × 10⁻³

Effective driving amplitude  A = K T_1 T_2 T_3 / (β (ΔT)³) ≃ 0.12
(according to the value E_0/c = 8.77 × 10⁻³/4000 K s⁻¹ given in [Benzi, Parisi, Sutera & Vulpiani 1983])

3. Energy-balance model

For simplicity, replace X_1 by −1, X_3 by 1, and neglect the modulation term K cos t. This yields the equation

  ε dx/dt = x − x³ + A cos t

The right-hand side derives from a double-well potential, and therefore has two stable equilibria and one unstable equilibrium, for all A < A_c = 2/(3√3) ≃ 0.38

Overdamped particle in a periodically forced double-well potential
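The critical amplitude can be checked numerically: A_c is the value at which the two equilibria on one side of the barrier merge in a saddle–node bifurcation. The following sketch (not part of the lectures; the root-counting tolerance is an implementation choice) counts the real equilibria of x − x³ + A = 0 just below and just above A_c:

```python
import numpy as np

A_c = 2.0 / (3.0 * np.sqrt(3.0))             # critical amplitude, ~0.3849

def n_real_equilibria(A):
    """Number of real roots of x - x^3 + A = 0 (equilibria at frozen forcing)."""
    roots = np.roots([-1.0, 0.0, 1.0, A])    # coefficients of -x^3 + x + A
    return int(np.sum(np.abs(roots.imag) < 1e-9))

print(round(A_c, 4))                         # 0.3849
print(n_real_equilibria(0.95 * A_c))         # below A_c: two wells and a saddle
print(n_real_equilibria(1.05 * A_c))         # above A_c: a single equilibrium
```

Below A_c the forcing merely tilts the double well back and forth; it never removes the barrier, which is why the forcing alone is subthreshold.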

4. Energy-balance model

Overdamped particle in a periodically forced double-well potential

In our simple climate model, the two potential wells represent Ice Age and temperate climate. The periodic forcing is subthreshold and thus not sufficient to allow for transitions between the stable equilibria.

Model too simple? The slow variations of insolation can only explain the rather drastic changes between climate regimes if some powerful feedbacks are involved, for example a mutual enhancement of ice cover and the Earth's albedo.

5. Energy-balance model

New idea in [Benzi, Sutera & Vulpiani 1981] and [Nicolis & Nicolis 1981]: Incorporate the effect of short-timescale atmospheric fluctuations by adding a noise term, as suggested by [Hasselmann 1976]. This yields the SDE

  ẋ_t = (1/ε)(x_t − x_t³ + A cos t) + σ̃(ε) Ẇ_t

(considered on the slow timescale, σ̃ = σ/√ε)

For adequate parameter values, typical solutions are likely to cross the potential barrier twice per period, producing the observed sharp transitions between climate regimes. This is a manifestation of stochastic resonance (SR). Whether SR is indeed the right explanation for the appearance of Ice Ages is controversial, and hard to decide.
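An Euler–Maruyama simulation of this SDE illustrates the mechanism. The parameters below are illustrative only (in particular ε is taken much larger than on the slides, to keep the run short); with subthreshold forcing A < A_c and moderate noise, the path typically hops across the barrier twice per forcing period:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (eps larger than on the slides, for a quick demo)
eps, sigma, A = 0.01, 0.3, 0.24
dt = 1e-3
T = 4.0 * np.pi                     # two forcing periods on the slow time scale
n = int(T / dt)

x = np.empty(n + 1)
x[0] = 1.0                          # start in the right-hand well
sq = np.sqrt(dt)
for k in range(n):
    t = k * dt
    drift = (x[k] - x[k]**3 + A * np.cos(t)) / eps
    x[k + 1] = x[k] + drift * dt + (sigma / np.sqrt(eps)) * sq * rng.standard_normal()

# Count well-to-well transitions: sign changes of x sampled away from the barrier
signs = np.sign(x[np.abs(x) > 0.5])
transitions = int(np.count_nonzero(np.diff(signs)))
print(transitions)                  # roughly two barrier crossings per period
```

The step size must resolve the fast relaxation time of order ε, which is why dt ≪ ε here.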

6. Sample paths

[Figure: four simulated sample paths]
⊲ A = 0.00, σ = 0.30, ε = 0.001
⊲ A = 0.10, σ = 0.27, ε = 0.001
⊲ A = 0.24, σ = 0.20, ε = 0.001
⊲ A = 0.35, σ = 0.20, ε = 0.001

7. Example II: Dansgaard–Oeschger events

GISP2 climate record for the second half of the last glacial
[Rahmstorf, Timing of abrupt climate change: A precise clock, Geophys. Res. Lett. 30 (2003)]

⊲ Abrupt, large-amplitude shifts in global climate during the last glacial
⊲ Cold stadials; warm Dansgaard–Oeschger interstadials
⊲ Rapid warming; slower return to cold stadial
⊲ 1470-year cycle?
⊲ Occasionally a cycle is skipped

8. Interspike times for Dansgaard–Oeschger events

Histogram of "waiting times" between transitions
[from: Alley, Anandakrishnan & Jung, Stochastic resonance in the North Atlantic, Paleoceanography 16 (2001)]

9. Sample paths

[Figure: the four simulated sample paths again]
⊲ A = 0.00, σ = 0.30, ε = 0.001
⊲ A = 0.10, σ = 0.27, ε = 0.001
⊲ A = 0.24, σ = 0.20, ε = 0.001
⊲ A = 0.35, σ = 0.20, ε = 0.001

10. Stochastic resonance

What is stochastic resonance (SR)?
SR = mechanism to amplify weak signals in the presence of noise

Requirements
⊲ (background) noise
⊲ weak input
⊲ characteristic barrier or threshold (nonlinear system)

Examples
⊲ periodic occurrence of ice ages (?)
⊲ Dansgaard–Oeschger events (?)
⊲ bidirectional ring lasers
⊲ visual and auditory perception
⊲ receptor cells in crayfish
⊲ ...

11. Stochastic resonance: The paradigm model

Overdamped motion of a Brownian particle ...

  dx_s = [−x_s³ + x_s + A cos(εs)] ds + σ dW_s

where the drift is −(∂/∂x) V(x_s, εs)

... in a periodically modulated double-well potential

  V(x, t) = ¼ x⁴ − ½ x² − A cos(t) x,  with A < A_c

12. SR: Different parameter regimes

Synchronisation I
⊲ Matching time scales: 2π/ε = T_forcing = 2 T_Kramers ≍ e^{2H/σ²}
⊲ Quasistatic approach: Transitions twice per period likely
  (physics literature; [Freidlin '00], [Imkeller et al, since '02])
⊲ Requires exponentially long forcing periods

Synchronisation II
⊲ Intermediate forcing periods T_relax ≪ T_forcing ≪ T_Kramers and close-to-critical forcing amplitude A ≈ A_c
⊲ Transitions twice per period with high probability
⊲ Subtle dynamical effects: Effective barrier heights [Berglund & G '02]

SR outside synchronisation regimes
⊲ Only occasional transitions
⊲ But transition times localised within forcing periods

Unified description / understanding of the transition between regimes?

13. Example III: North-Atlantic thermohaline circulation

⊲ "Realistic" models (GCMs, EMICs): Numerical analysis
⊲ Simple conceptual models: Analytical results
⊲ In particular: Box models

14. North-Atlantic THC: Stommel's box model ('61)

Two boxes: low latitudes (10°N–35°N) with T_1, S_1, and high latitudes (35°N–75°N) with T_2, S_2

T_i: temperatures, S_i: salinities, F: freshwater flux, Q(Δρ): mass exchange
Δρ = α_S ΔS − α_T ΔT,  ΔT = T_1 − T_2,  ΔS = S_1 − S_2

  d/ds ΔT = −(1/τ_r)(ΔT − θ) − Q(Δρ) ΔT
  d/ds ΔS = (S_0/H) F − Q(Δρ) ΔS

Model for Q [Cessi '94]:

  Q(Δρ) = 1/τ_d + (q/V)(Δρ)²

15. Stommel's box model as a slow–fast system

Separation of time scales: τ_r ≪ τ_d
Rescaling: x = ΔT/θ, y = (α_S/α_T)(ΔS/θ), s = τ_d t

  ε ẋ = −(x − 1) − εx [1 + η²(x − y)²]
  ẏ = µ − y [1 + η²(x − y)²]

with ε = τ_r/τ_d ≪ 1

Slow manifold (ε ẋ = 0):  x = x*(y) = 1 + O(ε)

Reduced equation on the slow manifold:

  ẏ = µ − y [1 + η²(1 − y)² + O(ε)]

1 or 2 stable equilibria, depending on the freshwater flux µ (and η)
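The bistability of the reduced equation is easy to probe numerically. The sketch below (an illustration, with an arbitrarily chosen η = 3; the slides do not fix a value) counts the equilibria of ẏ = µ − y[1 + η²(1 − y)²] for several values of the freshwater flux µ:

```python
import numpy as np

def n_equilibria(mu, eta=3.0):
    """Real roots of mu = y [1 + eta^2 (1 - y)^2], i.e. equilibria of the
    reduced equation on the slow manifold (to leading order in eps)."""
    e2 = eta**2
    # y (1 + e2 (1 - y)^2) - mu  =  e2 y^3 - 2 e2 y^2 + (1 + e2) y - mu
    roots = np.roots([e2, -2.0 * e2, 1.0 + e2, -mu])
    return int(np.sum(np.abs(roots.imag) < 1e-9))

for mu in (0.5, 1.3, 2.0):
    print(mu, n_equilibria(mu))   # 1, 3, 1 equilibria for eta = 3
```

For intermediate µ the cubic µ = y[1 + η²(1 − y)²] has three solutions (two stable, one unstable), which is the bistability exploited in the noisy versions of the model below.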

16. Stommel's box model with Ornstein–Uhlenbeck noise

  dx_t = (1/ε) [−(x_t − 1) − εx_t Q(x_t − y_t)] dt + dξ¹_t
  dξ¹_t = −(γ_1/ε) ξ¹_t dt + (σ/√ε) dW¹_t
  dy_t = [µ − y_t Q(x_t − y_t)] dt + dξ²_t
  dξ²_t = −γ_2 ξ²_t dt + σ′ dW²_t

⊲ Variance of x_t − 1 ≃ σ²/(2(1 + γ_1))
⊲ Reduced system for (y_t, ξ²_t) is bistable (for a suitable choice of µ)

How to choose µ, i.e., how to model the freshwater flux?

17. Modelling the freshwater flux

  d/ds ΔT = −(1/τ_r)(ΔT − θ) − Q(Δρ) ΔT
  d/ds ΔS = (S_0/H) F(s) − Q(Δρ) ΔS

⊲ Feedback: F or Ḟ depending on ΔT and ΔS
  ⇒ relaxation oscillations, excitability
⊲ External periodic forcing
  ⇒ stochastic resonance, hysteresis
⊲ Internal periodic forcing of the ocean–atmosphere system
  ⇒ stochastic resonance, hysteresis

18. Case I: Feedback (with Gaussian white noise)

  dx_t = (1/ε) [−(x_t − 1) − εx_t Q(x_t − y_t)] dt + (σ/√ε) dW⁰_t
  dy_t = [µ_t − y_t Q(x_t − y_t)] dt + σ_1 dW¹_t
  dµ_t = ε̃ h(x_t, y_t, µ_t) dt + √ε̃ σ_2 dW²_t   (slow change in the freshwater flux)

Reduced equation (after the time change t ↦ ε̃ t)

  dy_t = (1/ε̃) [µ_t − y_t Q(1 − y_t)] dt + (σ_1/√ε̃) dW¹_t
  dµ_t = h(1, y_t, µ_t) dt + σ_2 dW²_t

[Figure: phase portraits in the (µ, y)-plane along the curve µ = yQ(1 − y), with the sign of h indicated: relaxation oscillations; excitability]

19. Case II: Periodic forcing

Assume a periodic freshwater flux µ(t) (centred w.r.t. the bifurcation diagram)

Theorem [Berglund & G '02]
⊲ Small amplitude, small noise: Transitions unlikely during one cycle
  (However: Concentration of transition times within each period)
⊲ Large amplitude, small noise: Hysteresis cycles
  Area = static area + O(ε^{2/3}) (as in the deterministic case)
⊲ Large noise: Stochastic resonance / noise-induced synchronisation
  Area = static area − O(σ^{4/3}) (reduced due to noise)

20. General slow–fast systems

Stommel's box model with noise

  dx_t = (1/ε) [−(x_t − 1) − εx_t Q(x_t − y_t)] dt + dξ¹_t
  dξ¹_t = −(γ_1/ε) ξ¹_t dt + (σ/√ε) dW¹_t
  dy_t = [µ − y_t Q(x_t − y_t)] dt + dξ²_t
  dξ²_t = −γ_2 ξ²_t dt + σ′ dW²_t

is a special case of a randomly perturbed slow–fast system

  dx_t = (1/ε) f(x_t, y_t) dt + (σ/√ε) F(x_t, y_t) dW_t   (fast variables ∈ R^n)
  dy_t = g(x_t, y_t) dt + σ′ G(x_t, y_t) dW_t             (slow variables ∈ R^m)

21. General slow–fast systems

For deterministic slow–fast systems

  ε ẋ = f(x, y)   (fast variables ∈ R^n)
  ẏ = g(x, y)     (slow variables ∈ R^m)

geometric singular perturbation theory makes it possible to study the reduced dynamics on a slow or centre manifold (under suitable assumptions)

Our goals:
⊲ An analogue for the case of random perturbations
⊲ The effect of random perturbations near bifurcation points of the deterministic system

We will focus on simple cases, in particular slowly driven systems

22. References for PART I

References from the text:
⊲ R. Z. Khasminskii, A limit theorem for solutions of differential equations with random right-hand side, Teor. Veroyatnost. i Primenen. 11 (1966), pp. 390–406
⊲ Y. Kifer, Averaging and climate models, in Stochastic climate models (Chorin, 1999), Progr. Probab. 49, pp. 171–188, Birkhäuser, Basel (2001)
⊲ Y. Kifer, Stochastic versions of Anosov's and Neistadt's theorems on averaging, Stoch. Dyn. 1 (2001), pp. 1–21
⊲ Y. Kifer, L² diffusion approximation for slow motion in averaging, Stoch. Dyn. 3 (2003), pp. 213–246
⊲ V. Bakhtin and Y. Kifer, Diffusion approximation for slow motion in fully coupled averaging, Probab. Theory Related Fields 129 (2004), pp. 157–181
⊲ W. Just, K. Gelfert, N. Baba, A. Riegert, and H. Kantz, Elimination of fast chaotic degrees of freedom: on the accuracy of the Born approximation, J. Statist. Phys. 112 (2003), pp. 277–292
⊲ R. Benzi, G. Parisi, A. Sutera, and A. Vulpiani, A theory of stochastic resonance in climatic change, SIAM J. Appl. Math. 43 (1983), pp. 565–578
⊲ R. Benzi, A. Sutera, and A. Vulpiani, The mechanism of stochastic resonance, J. Phys. A 14 (1981), pp. L453–L457
⊲ C. Nicolis and G. Nicolis, Stochastic aspects of climatic transitions—additive fluctuations, Tellus 33 (1981), pp. 225–234
⊲ K. Hasselmann, Stochastic climate models. Part I. Theory, Tellus 28 (1976), pp. 473–485

23. References for PART I (continued)

⊲ S. Rahmstorf, Timing of abrupt climate change: A precise clock, Geophysical Research Letters 30 (2003), pp. 17-1–17-4
⊲ R. B. Alley, S. Anandakrishnan, and P. Jung, Stochastic resonance in the North Atlantic, Paleoceanography 16 (2001), pp. 190–198
⊲ M. I. Freidlin, Quasi-deterministic approximation, metastability and stochastic resonance, Physica D 137 (2000), pp. 333–352
⊲ S. Herrmann and P. Imkeller, Barrier crossings characterize stochastic resonance, Stoch. Dyn. 2 (2002), pp. 413–436
⊲ P. Imkeller and I. Pavlyukevich, Model reduction and stochastic resonance, Stoch. Dyn. 2 (2002), pp. 463–506
⊲ N. Berglund and B. Gentz, A sample-paths approach to noise-induced synchronization: Stochastic resonance in a double-well potential, Ann. Appl. Probab. 12 (2002), pp. 1419–1470
⊲ N. Berglund and B. Gentz, Beyond the Fokker–Planck equation: Pathwise control of noisy bistable systems, J. Phys. A 35 (2002), pp. 2057–2091
⊲ N. Berglund and B. Gentz, Metastability in simple climate models: Pathwise analysis of slowly driven Langevin equations, Stoch. Dyn. 2 (2002), pp. 327–356
⊲ S. Rahmstorf, Ocean circulation and climate during the past 120,000 years, Nature 419 (2002), pp. 207–214
⊲ H. Stommel, Thermohaline convection with two stable regimes of flow, Tellus 13 (1961), pp. 224–230
⊲ P. Cessi, A simple box model of stochastically forced thermohaline flow, J. Phys. Oceanogr. 24 (1994), pp. 1911–1920
⊲ N. Berglund and B. Gentz, The effect of additive noise on dynamical hysteresis, Nonlinearity 15 (2002), pp. 605–632

24. Additional reading:
⊲ F. Moss and K. Wiesenfeld, The benefits of background noise, Scientific American 273 (1995), pp. 50–53
⊲ K. Wiesenfeld and F. Moss, Stochastic resonance and the benefits of noise: from ice ages to crayfish and SQUIDs, Nature 373 (1995), pp. 33–36
⊲ K. Wiesenfeld and F. Jaramillo, Minireview of stochastic resonance, Chaos 8 (1998), pp. 539–548

Data, figures and photographs:
⊲ http://www.ncdc.noaa.gov/paleo/slides
⊲ http://www.museum.state.il.us/exhibits/ice ages
⊲ http://arcss.colorado.edu/data/gisp grip (ice-core data)
⊲ http://www.ncdc.noaa.gov/paleo/icecore/greenland/greenland.html (ice-core data)

And last but not least:
⊲ http://www.phdcomics.com/comics.php

25. I'm inviting you now to follow me on a journey into probability theory. In case you're bored, I recommend ...

26. PART II

Review
⊲ Brownian motion
⊲ Stopping times
⊲ Stochastic integration (Itô integrals)
⊲ Stochastic differential equations
⊲ Diffusion processes and Fokker–Planck equation

27. Stochastic processes

A stochastic process is a collection {X_t(ω)}_{t≥0} of random variables ω ↦ X_t(ω), indexed by time.

ω denotes the dependence on chance. More precisely: ω denotes the realisation of chance / randomness / noise.

View a stochastic process as a random function of time: t ↦ X_t(ω) (for fixed ω). We call t ↦ X_t(ω) a sample path.

28. Brownian motion

Physics literature: Gaussian white noise Ẇ_t(ω) is a stationary Gaussian stochastic process with autocorrelation function

  C(s) := E(Ẇ_t Ẇ_{t+s}) = δ(s)

⊲ E denotes expectation (weighted average over all realisations of the noise)
⊲ δ(s) denotes the Dirac delta function
⊲ Ẇ_t is completely uncorrelated

Brownian motion (BM):  W_t = ∫_0^t Ẇ_s ds
(in the sense that Gaussian white noise is the generalised mean-square derivative of Brownian motion)

29. Sample-path view on Brownian motion (in the spirit of this course)

BM can be constructed as a scaling limit of a symmetric random walk

  W_t(ω) = lim_{n→∞} (1/√n) Σ_{i=1}^{⌊nt⌋} X_i(ω)

⊲ X_i(ω) are independent, identically distributed (i.i.d.) random variables (r.v.'s)
⊲ E X_i = 0, Var(X_i) = 1

Special case: Nearest-neighbour random walk (X_i = ±1 with probability 1/2)

The limit is to be understood as convergence in distribution.
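The scaling limit is easy to visualise numerically. The sketch below (parameters chosen for speed, not part of the lectures) builds rescaled nearest-neighbour walks and checks that the empirical variance at time t is close to t, as it must be for Brownian motion:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 5_000                        # walk steps per path, number of paths
steps = rng.choice([-1.0, 1.0], size=(m, n))   # X_i = +-1 with probability 1/2
paths = np.cumsum(steps, axis=1) / np.sqrt(n)  # W_{k/n} ~ n^{-1/2} sum of X_i

W_half = paths[:, n // 2 - 1]            # approximate W_{1/2}
W_one = paths[:, -1]                     # approximate W_1
print(np.mean(W_one), np.var(W_one), np.var(W_half))   # ~ 0, ~ 1, ~ 0.5
```

By the central limit theorem each marginal is approximately Gaussian, and Var(W_t) ≈ t; the convergence of the whole path is the content of Donsker's invariance principle.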

30. Definition of Brownian motion

A one-dimensional standard Brownian motion (or Wiener process) is a stochastic process {W_t}_{t≥0} satisfying
1. W_0 = 0
2. Independent increments: W_t − W_s is independent of {W_u}_{0≤u≤s} (for all t > s ≥ 0)
3. Gaussian increments: W_t − W_s ∼ N(0, t − s) (for all t > s ≥ 0)

That is: W_t − W_s has (probability) density

  x ↦ (1/√(2π(t − s))) e^{−x²/2(t−s)}

(the famous bell-shaped curve!)

31. Properties of Brownian motion

⊲ Continuity of sample paths
  We may assume that the sample paths t ↦ W_t(ω) of BM are continuous for almost all ω. (Kolmogorov's continuity theorem)
⊲ Non-differentiability of sample paths
  The sample paths are nowhere differentiable for almost all ω.
⊲ Markov property
  BM is a Markov process:

    P(W_{t+s} ∈ A | W_u, u ≤ t) = P(W_{t+s} ∈ A | W_t)

⊲ Gaussian transition probabilities

    P(W_{t+s} ∈ A | W_t = x) = P^{t,x}(W_{t+s} ∈ A) = ∫_A (1/√(2πs)) e^{−(y−x)²/2s} dy

⊲ Fokker–Planck equation (FPE)
  The transition densities p(t, x) satisfy the FPE / forward Kolmogorov equation

    ∂p/∂t = (1/2) Σ_{i=1}^d ∂²p/∂x_i² = (1/2) Δp   (in the d-dim. case)

32. Properties of Brownian motion

⊲ Gaussian process
  {W_t}_{t≥0} is a Gaussian process (i.e., all its finite-dimensional marginals are Gaussian random variables) with
  – mean zero
  – Cov(W_t, W_s) := E(W_t W_s) = t ∧ s
  Conversely, any mean-zero Gaussian process with this covariance structure is a standard Brownian motion.
⊲ Scaling property
  {c W_{t/c²}}_{t≥0} is a standard Brownian motion (for any c > 0)

A k-dimensional standard Brownian motion is a vector W_t = (W_t^(1), ..., W_t^(k)) of k independent one-dimensional standard Brownian motions

33. Stopping times

A random variable τ: Ω → [0, ∞] is called a stopping time (with respect to the BM {W_t}_t) if {τ ≤ t} = {ω ∈ Ω: τ(ω) ≤ t} can be decided from the knowledge of W_s for s ≤ t alone. (No need to "look into the future".)

Formally, we require {τ ≤ t} ∈ F_t = σ{W_s, 0 ≤ s ≤ t} for all t > 0.

Example: First-exit time from a set

  τ_A = inf{t > 0: W_t ∉ A} ∈ [0, ∞]

Note: The time τ̃_A = sup{t > 0: W_t ∈ A} ∈ [0, ∞] of the last visit to A is in general not a stopping time.

34. André's reflection principle

Consider a Brownian motion {W_t}_t starting in −b < 0. (Shift the whole sample path vertically by −b.)

First-passage time at level x = 0:  τ_0 = inf{t > 0: W_t ≥ 0}

  P^{0,−b}{τ_0 < t} = P^{0,−b}{τ_0 < t, W_t ≥ 0} + P^{0,−b}{τ_0 < t, W_t < 0}

Now, for τ_0 < t, the increment W_t − W_{τ_0} depends (by the strong Markov property) only on W_{τ_0} but not on the rest of the past of the sample path. We can restart W_t at time τ_0 in W_{τ_0} = 0. By symmetry of the distribution of the Brownian sample path starting in 0 at time τ_0,

  ... = 2 P^{0,−b}{τ_0 < t, W_t ≥ 0} = 2 P^{0,−b}{W_t ≥ 0} = 2 ∫_b^∞ (1/√(2πt)) e^{−y²/2t} dy

This depends only on the endpoint at time t!
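A small Monte Carlo experiment confirms the identity P^{0,−b}{τ_0 < t} = 2 P^{0,−b}{W_t ≥ 0}. This is a sketch with illustrative parameters; note that discrete-time monitoring slightly underestimates the hitting probability, so the tolerance is loose:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
b, t, dt, m = 1.0, 1.0, 1e-3, 20_000     # start at -b; horizon t; m paths
n = int(t / dt)

x = np.full(m, -b)
hit = np.zeros(m, dtype=bool)
for _ in range(n):
    x += sqrt(dt) * rng.standard_normal(m)
    hit |= x >= 0.0                      # has the path reached level 0 yet?
p_hit = float(hit.mean())                # Monte Carlo estimate of P{tau_0 < t}

Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF
p_exact = 2.0 * (1.0 - Phi(b / sqrt(t)))           # reflection principle
print(p_hit, p_exact)
```

For b = t = 1 the exact value is 2(1 − Φ(1)) ≈ 0.317, and the simulation reproduces it up to discretisation and sampling error.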

35. Stochastic integrals (Itô integrals)

Goal: Give a meaning to stochastic differential equations (SDEs)

  ẋ_t = f(x_t, t) + F(x_t, t) Ẇ_t

Consider the discrete-time version

  x_{t_{k+1}} − x_{t_k} = f(x_{t_k}, t_k) Δt_k + F(x_{t_k}, t_k) ΔW_k,  k ∈ {0, ..., K − 1}

with a partition 0 = t_0 < t_1 < ··· < t_K = T
⊲ Δt_k = t_{k+1} − t_k
⊲ Gaussian increments ΔW_k = W_{t_{k+1}} − W_{t_k}

Observe that

  Σ_{k=0}^{K−1} f(x_{t_k}, t_k) Δt_k → ∫_0^T f(x_s, s) ds

as the partition is chosen finer and finer

36. Stochastic integrals (Itô integrals)

This suggests to interpret the SDE as an integral equation

  x_t = x_0 + ∫_0^t f(x_s, s) ds + ∫_0^t F(x_s, s) dW_s

provided the second integral can be defined as

  ∫_0^t F(x_s, s) dW_s = lim_{Δt_k → 0} Σ_{k=0}^{K−1} F(x_{t_k}, t_k) ΔW_k

in some suitable sense.

Thus we want to define (stochastic) integrals of the type

  ∫_0^t h(s, ω) dW_s(ω)

for suitable integrands h(s, ω)

37. A heuristic approach to stochastic integrals

Assume for the moment: s ↦ h(s, ω) is continuous and of bounded variation for (almost) all ω.

Were the paths of the Brownian motion s ↦ W_s(ω) also of finite variation, we could apply integration by parts:

  ∫_0^t h(s, ω) dW_s(ω) = h(t) W_t(ω) − h(0) W_0(ω) − ∫_0^t W_s(ω) h(ds, ω)
                        = h(t) W_t(ω) − ∫_0^t W_s(ω) h(ds, ω)

The integral on the right-hand side is defined as a Stieltjes integral for each fixed ω. We could use this equation to define ∫_0^t h(s, ω) dW_s(ω) ω-wise.

Unfortunately, the paths of BM are almost surely not of finite variation, and we cannot expect s ↦ h(s, ω) = F(x_s(ω), s) to be of finite variation either. Thus the class of possible integrands would not be large enough for our purpose!

38. Elementary functions

Let F_t = σ{W_s, s ≤ t} be the σ-algebra generated by the Brownian motion up to time t. We think of F_t as the past of the BM up to time t.

We start by defining the stochastic integral for a class of particularly simple functions: h: [0, T] × Ω → R is called elementary if there exists a partition 0 = t_0 < t_1 < ··· < t_K = T such that
⊲ h(t, ω) = Σ_{k=0}^{K−1} h_k(ω) 1_{(t_k, t_{k+1}]}(t)
⊲ ω ↦ h_k(ω) is F_{t_k}-measurable for all k

For such elementary integrands h, define

  ∫_0^T h(s, ω) dW_s(ω) = Σ_{k=0}^{K−1} h_k(ω) [W_{t_{k+1}}(ω) − W_{t_k}(ω)]

39. Stochastic integrals: L²-theory

To extend this definition, we use the following isometry.

Itô isometry: Let h be elementary with h_k ∈ L²(Ω) for all k. Then

  E[(∫_0^t h(s) dW_s)²] = ∫_0^t E{h(s)²} ds

Importance of the Itô isometry: The map h ↦ ∫_0^T h(s) dW_s, which maps (elementary) h to its stochastic integral, is an isometry between L²([0, T] × Ω) and L²(Ω)
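The isometry can be checked by simulation. As an illustrative (non-elementary, but approximable) integrand take h(s) = W_s on [0, 1]; then both sides equal ∫_0^1 s ds = 1/2. The sketch below uses left-point Riemann sums, which is exactly how the Itô integral is built from elementary functions:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, t = 20_000, 500, 1.0
dt = t / n

W = np.zeros(m)        # Brownian paths
I = np.zeros(m)        # running left-point sums for the integral of W dW
rhs = 0.0              # running approximation of  int E{h(s)^2} ds
for _ in range(n):
    dW = np.sqrt(dt) * rng.standard_normal(m)
    I += W * dW                    # integrand evaluated at the LEFT endpoint
    rhs += float(np.mean(W**2)) * dt   # empirical E{W_s^2} ds
    W += dW

print(np.mean(I**2), rhs)          # both sides approximate 1/2
```

Evaluating the integrand at the left endpoint is essential: it keeps the increments independent of the integrand, which is what makes the isometry (and the martingale property below) work.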

40. Stochastic integrals: L²-theory

Class of possible integrands h: [0, T] × Ω → R:
⊲ (t, ω) ↦ h(t, ω) jointly measurable
⊲ ω ↦ h(t, ω) F_t-measurable for any fixed t (not looking into the future!)
⊲ ∫_0^T E{h(t)²} dt < ∞

Such h can be approximated by elementary functions e^(n):

  ∫_0^T E{(h(s) − e^(n)(s))²} ds → 0  as n → ∞

By the Itô isometry,

  ∫_0^t h(s) dW_s = L²-lim_{n→∞} ∫_0^t e^(n)(s) dW_s

is well-defined (its value does not depend on the choice of the sequence of elementary functions)

41. Stratonovich integral

By our definition of elementary functions, h is approximated by (random) step functions, where the value of such a step function at all times t ∈ [t_k^(n), t_{k+1}^(n)] is F_{t_k^(n)}-measurable.

If h is a bounded function and continuous in t for (almost) all ω, the elementary functions e^(n) can be chosen by setting e^(n)(t) = h(t_k^(n)) for all t ∈ [t_k^(n), t_{k+1}^(n)].

If we were to choose e^(n)(t) = h(t*) on [t_k^(n), t_{k+1}^(n)] for some different t* ∈ [t_k^(n), t_{k+1}^(n)], the definition of the stochastic integral would yield a different value. For instance, choosing t* as the midpoint of the interval would yield the so-called Stratonovich integral.

42. Properties of the Itô integral

For [a, b] ⊂ [0, T], define

  ∫_a^b h(s) dW_s = ∫_0^T 1_{[a,b]}(s) h(s) dW_s

⊲ Splitting
  ∫_s^t h(v) dW_v = ∫_s^u h(v) dW_v + ∫_u^t h(v) dW_v  for 0 ≤ s ≤ u ≤ t ≤ T
⊲ Linearity
  ∫_0^t (c h_1(s) + h_2(s)) dW_s = c ∫_0^t h_1(s) dW_s + ∫_0^t h_2(s) dW_s
⊲ Expectation
  E[∫_0^t h(s) dW_s] = 0
⊲ Covariance / Itô isometry
  E[(∫_0^t h_1(s) dW_s)(∫_0^t h_2(s) dW_s)] = ∫_0^t E{h_1(s) h_2(s)} ds

43. Itô integrals as stochastic processes

Consider X_t = ∫_0^t h(s) dW_s as a function of t
⊲ X_t is F_t-measurable (not looking into the future)
⊲ X_t is an F_t-martingale: E{X_t | F_s} = X_s for 0 ≤ s ≤ t ≤ T
⊲ We may assume that t ↦ X_t(ω) is continuous for almost all ω

44. Extending the definition

The definition of the Itô integral can be extended to integrands h satisfying the same measurability assumptions as before but a weaker integrability assumption. It is sufficient to assume that

  P[∫_0^t h(s, ω)² ds < ∞ for all t ≥ 0] = 1

The stochastic integral is then defined as the limit in probability of integrals of elementary functions.

Keep in mind that for such h, those of the above properties of the stochastic integral which involve expectations may fail.

45. Examples

(a) Calculate ∫_0^t W_s dW_s directly from the definition by approximating W_s by elementary functions. (Homework!)
Note that the result

  ∫_0^t W_s dW_s = ½ W_t² − ½ t

contains an unexpected term −t/2, which shows that Itô integrals cannot be calculated like ordinary integrals. (The stochastic integral is a martingale, and the Itô correction −t is the quadratic variation of W_t, which makes W_t² − t a martingale.)

Below we will state Itô's formula, which replaces the chain rule for Riemann integrals and is useful for calculating Itô integrals.

(b) Case of deterministic integrands (h not depending on ω): ∫_0^t h(s) dW_s is Gaussian with mean zero and variance ∫_0^t h(s)² ds
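Example (a) can be verified on a single simulated path, and the comparison also makes the Itô/Stratonovich distinction concrete. Left-endpoint sums converge to ½W_t² − t/2, while midpoint-type sums give ½W_t² (for this discretisation the midpoint sum even telescopes exactly). A sketch with an illustrative step count:

```python
import numpy as np

rng = np.random.default_rng(4)
n, t = 100_000, 1.0
dt = t / n

dW = np.sqrt(dt) * rng.standard_normal(n)
W = np.concatenate([[0.0], np.cumsum(dW)])   # one Brownian path on [0, 1]

ito = float(np.sum(W[:-1] * dW))                    # left-endpoint (Ito) sums
strat = float(np.sum(0.5 * (W[:-1] + W[1:]) * dW))  # midpoint-type sums

print(ito, 0.5 * W[-1]**2 - 0.5 * t)   # agree up to O(dt^{1/2})
print(strat, 0.5 * W[-1]**2)           # agree: this sum telescopes exactly
```

The difference strat − ito = ½ Σ (ΔW_k)² is the accumulated quadratic variation, which converges to t/2: precisely the Itô correction.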

46. Itô's formula

Assume
⊲ h and f satisfy the standard measurability assumptions
⊲ P[∫_0^t h(s, ω)² ds < ∞ for all t ≥ 0] = 1
⊲ P[∫_0^t |f(s, ω)| ds < ∞ for all t ≥ 0] = 1

Itô process

  X_t = X_0 + ∫_0^t f(s) ds + ∫_0^t h(s) dW_s

Let g: R × [0, T] → R be continuous with continuous partial derivatives

  g_t = (∂/∂t) g(x, t),  g_x = (∂/∂x) g(x, t),  g_xx = (∂²/∂x²) g(x, t)

47. Itô's formula

Then Y_t = g(X_t, t) is again an Itô process, given by

  Y_t = g(X_0, 0) + ∫_0^t [g_t(X_s, s) + g_x(X_s, s) f(s) + ½ g_xx(X_s, s) h(s)²] ds + ∫_0^t g_x(X_s, s) h(s) dW_s

Using the shorthand dX_t = f dt + h dW_t, Itô's formula can be written as

  dY_t = g_t dt + g_x dX_t + ½ g_xx (dX_t)²

where (dX_t)² is calculated according to the scheme

  (dt)² = (dt)(dW_t) = (dW_t)(dt) = 0,  (dW_t)² = dt

48. Examples

(a) Using Itô's formula, we can calculate ∫_0^t s dW_s: Set g(x, t) = t·x and Y_t = g(W_t, t). Then dY_t = W_t dt + t dW_t + ½·0 dt, and therefore

  ∫_0^t s dW_s = Y_t − Y_0 − ∫_0^t W_s ds = t W_t − ∫_0^t W_s ds

Note that this is an integration-by-parts formula. Similarly, by setting g(x, t) = h(t)·x, the integration-by-parts formula from Slide 51 can be established for suitable h.

(b) Choosing g(x, t) = x² and Y_t = g(W_t, t), Itô's formula gives a much easier way to calculate ∫_0^t W_s dW_s. (Homework!)

(c) Let X_t = W_t − t/2. Use Itô's formula to show that Y_t = e^{X_t} satisfies

  dY_t = Y_t dW_t

Y_t is called the Doléans exponential of W_t.
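Example (c) can be checked numerically: an Euler–Maruyama scheme for dY_t = Y_t dW_t (a sketch with illustrative step sizes, not part of the lectures) should track exp(W_t − t/2) pathwise when driven by the same Brownian increments, and the sample mean of Y_t should stay near 1, reflecting the martingale property E[Y_t] = Y_0 = 1:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, t = 20_000, 1_000, 1.0       # paths, time steps, final time
dt = t / n

Y = np.ones(m)                     # Euler-Maruyama solution of dY = Y dW
W = np.zeros(m)                    # the driving Brownian paths
for _ in range(n):
    dW = np.sqrt(dt) * rng.standard_normal(m)
    Y += Y * dW
    W += dW

exact = np.exp(W - 0.5 * t)        # Doleans exponential e^{W_t - t/2}
print(np.mean(np.abs(Y - exact)))  # small pathwise discretisation error
print(np.mean(Y))                  # close to 1: Y_t is a martingale
```

Note that the naive guess e^{W_t} would NOT solve dY = Y dW; the drift correction −t/2 is exactly the Itô term ½g_xx(dX)².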

49. The multidimensional case

Extension to R^n is easy:
⊲ W_t = (W_t^(1), ..., W_t^(k)) a k-dimensional standard BM
⊲ h(s, ω) = (h_ij(s, ω))_{i≤n, j≤k} a matrix-valued function, taking values in the set of (n × k)-matrices
⊲ Assume each h_ij allows for stochastic integration in R

Define the i-th component of the n-dim. stochastic integral by

  Σ_{j=1}^k ∫_0^t h_ij(s) dW_s^(j)

The properties of stochastic integrals mentioned above carry over in the natural way. In particular, the covariance of stochastic integrals can be calculated as

  E[(∫_0^t f(s) dW_s)(∫_0^t g(s) dW_s)^T] = ∫_0^t E{f(s) g(s)^T} ds

50. Itô's formula: The multidimensional case

As the multidimensional integral can be defined componentwise, it is sufficient to consider Y_t = g(X_t, t) for multidimensional X_t and one-dimensional Y_t.
⊲ f: [0, ∞) × Ω → R^n
⊲ h: [0, ∞) × Ω → R^{n×k}
⊲ g: R^n × [0, T] → R
⊲ Assumptions as before ...

Let dX_t = f(t) dt + h(t) dW_t and Y_t = g(X_t, t). Then

  dY_t = g_t(X_t, t) dt + Σ_{i=1}^n g_{x_i}(X_t, t) dX_t^(i) + ½ Σ_{i,j=1}^n g_{x_i x_j}(X_t, t) (dX_t^(i))(dX_t^(j))

using the scheme

  (dt)² = (dt)(dW_t^(µ)) = (dW_t^(µ))(dt) = 0  and  (dW_t^(µ))(dW_t^(ν)) = δ_{µν} dt

51. Application of the multidimensional version of Itô's formula

Integration-by-parts formula: Let dX_t^(i) = f_i dt + h_i dW_t for i = 1, 2. The multidimensional version of Itô's formula shows

  X_t^(1) X_t^(2) = X_0^(1) X_0^(2) + ∫_0^t X_s^(1) dX_s^(2) + ∫_0^t X_s^(2) dX_s^(1) + ∫_0^t h_1(s) h_2(s) ds

52. Stochastic differential equations

Goal: Give a meaning to SDEs of the form

  dx_t = f(x_t, t) dt + F(x_t, t) dW_t

{x_t}_{t∈[0,T]} is called a strong solution with initial condition x_0 if
⊲ For all t: x_t is σ{W_s; s ≤ t}-measurable (depends only on the past of the BM up to time t)
⊲ Integrability condition:
  P[∫_0^T ‖f(x_s, s)‖ ds < ∞] = 1,  P[∫_0^T ‖F(x_s, s)‖² ds < ∞] = 1
⊲ For all t:
  x_t = x_0 + ∫_0^t f(x_s, s) ds + ∫_0^t F(x_s, s) dW_s  holds for almost all ω

If the initial condition x_0 is random, we assume that it does not depend on the BM!

53. Existence and uniqueness

Assume
⊲ Lipschitz condition (a local Lipschitz condition suffices)
  ‖f(x, t) − f(y, t)‖ + ‖F(x, t) − F(y, t)‖ ≤ K ‖x − y‖
⊲ Bounded-growth condition
  ‖f(x, t)‖ + ‖F(x, t)‖ ≤ K (1 + ‖x‖)
  (can be relaxed, e.g. to x f(x, t) + F(x, t)² ≤ K²(1 + x²) in the one-dim. case)

Then the SDE has a (pathwise) unique, almost surely continuous solution x_t.

Uniqueness means: For any two almost surely continuous solutions x_t and y_t,

  P[sup_{0≤t≤T} ‖x_t − y_t‖ > 0] = 0

54. Existence and uniqueness: Remarks

⊲ As in the deterministic case: Uniqueness requires only the Lipschitz condition
⊲ As in the deterministic case: The bounded-growth condition excludes explosions of the solution
⊲ The conditions can be relaxed in many ways
⊲ Proof by a stochastic version of Picard–Lindelöf iterations
⊲ The solution x_t satisfies the strong Markov property, meaning that we can restart the process not only at fixed times s in x_s but even at any stopping time τ in x_τ

55. Example: Linear SDEs

⊲ We frequently approximate solutions of SDEs locally by linearising
⊲ Linear SDEs can be solved easily

One-dimensional linear SDE

  dx_t = [a(t) x_t + b(t)] dt + F(t) dW_t

admits the strong solution

  x_t = x_0 e^{α(t,t_0)} + ∫_{t_0}^t e^{α(t,s)} b(s) ds + ∫_{t_0}^t e^{α(t,s)} F(s) dW_s

where α(t, s) = ∫_s^t a(u) du

(Use Itô's formula to solve the equation! Hint: y_t = e^{−α(t,t_0)} x_t)

56. Example: Linear SDEs

⊲ If the initial condition x_0 is either deterministic or Gaussian, then

  x_t = x_0 e^{α(t,t_0)} + ∫_{t_0}^t e^{α(t,s)} b(s) ds + ∫_{t_0}^t e^{α(t,s)} F(s) dW_s

is a Gaussian process

⊲ For arbitrary initial conditions (independent of the BM):

  E{x_t} = E{x_0} e^{α(t,t_0)} + ∫_{t_0}^t b(s) e^{α(t,s)} ds
  Var{x_t} = Var{x_0} e^{2α(t,t_0)} + ∫_{t_0}^t F(s)² e^{2α(t,s)} ds

If a(t) ≤ −a_0 < 0, the effect of the initial condition is suppressed exponentially fast in t

57. Example: Ornstein–Uhlenbeck process

Consider the particular case a(t) ≡ −γ, b(t) ≡ 0, F(t) ≡ 1, leading to the SDE

  dx_t = −γ x_t dt + dW_t

Its solution

  x_t = x_0 e^{−γ(t−t_0)} + ∫_{t_0}^t e^{−γ(t−s)} dW_s

is known as the Ornstein–Uhlenbeck process, modelling the velocity of a Brownian particle. In this context, −γ x_t is the damping or frictional force.

As soon as t ≫ 1/(2γ), x_t relaxes quickly towards its equilibrium distribution, which is Gaussian with mean zero and variance

  lim_{t→∞} Var{x_t} = lim_{t→∞} ∫_{t_0}^t e^{−2γ(t−s)} ds = lim_{t→∞} (1/(2γ))(1 − e^{−2γ(t−t_0)}) = 1/(2γ)
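The relaxation to equilibrium is easy to see in a simulation. The sketch below (Euler–Maruyama with illustrative parameters) starts all paths far from 0 and checks that after many multiples of the relaxation time 1/γ the empirical mean and variance match the stationary N(0, 1/(2γ)) distribution:

```python
import numpy as np

rng = np.random.default_rng(6)
gamma, dt, t_end, m = 1.0, 0.01, 10.0, 20_000
n = int(t_end / dt)

x = np.full(m, 2.0)                 # start all paths far from equilibrium
for _ in range(n):
    x += -gamma * x * dt + np.sqrt(dt) * rng.standard_normal(m)

print(np.mean(x), np.var(x))        # ~ 0 and ~ 1/(2 gamma) = 0.5
```

The initial condition is forgotten exponentially fast, in agreement with the e^{−γ(t−t_0)} prefactor in the exact solution.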

58. Diffusion processes and Fokker–Planck equation

Diffusion process

  dx_t = f(x_t, t) dt + F(x_t, t) dW_t

The solution x_t is an (inhomogeneous) Markov process, and the densities of the transition probabilities satisfy Kolmogorov's forward or Fokker–Planck equation

  (∂/∂t) ρ(y, t) = L ρ(y, t)

⊲ Lφ = −Σ_{i=1}^n (∂/∂y_i)[f_i(y, t) φ] + ½ Σ_{i,j=1}^n (∂²/∂y_i∂y_j)[d_ij(y, t) φ]
⊲ d_ij(x, t) are the matrix elements of D(x, t) := F(x, t) F(x, t)^T
⊲ ρ: (y, t) ↦ p(y, t | x, s) is the (time-dependent) density of the transition probability, when starting in x at time s

Note: If x_t admits an invariant density ρ_0, then Lρ_0 = 0

59. Gradient systems and Fokker–Planck equation

Consider an (autonomous) SDE of the form

  dx_t = −∇U(x_t) dt + σ dW_t

Then

  L = ΔU + ∇U·∇ + (σ²/2) Δ

If the potential grows sufficiently quickly at infinity, the stochastic process admits an invariant density

  ρ_0(x) = (1/N) e^{−2U(x)/σ²}

(Homework: Compute L and verify that Lρ_0 = 0.)

For the Ornstein–Uhlenbeck process, U(x) is quadratic, and thus the invariant density is indeed Gaussian.
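A numerical sanity check of the homework, for a concrete one-dimensional choice of U (the double-well used later in Part III; the grid and σ are illustrative): in 1D, Lρ = d/dx[U′ρ + (σ²/2)ρ′], so it suffices to verify that the probability flux U′ρ_0 + (σ²/2)ρ_0′ vanishes for ρ_0 ∝ e^{−2U/σ²}:

```python
import numpy as np

sigma = 0.7
x = np.linspace(-2.5, 2.5, 2001)
dx = x[1] - x[0]

U = 0.25 * x**4 - 0.5 * x**2              # a double-well potential
rho = np.exp(-2.0 * U / sigma**2)         # candidate invariant density
rho /= rho.sum() * dx                     # normalise numerically

dU = np.gradient(U, x)                    # U'(x)
drho = np.gradient(rho, x)                # rho'(x)
flux = dU * rho + 0.5 * sigma**2 * drho   # probability flux; L rho = d(flux)/dx

print(np.max(np.abs(flux)))               # ~ 0 up to finite-difference error
```

Analytically the flux vanishes identically, since ρ_0′ = −(2U′/σ²)ρ_0; the finite-difference residual shrinks as O(h²) with the grid spacing.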

60. References for PART II

The covered material is pretty standard, and you can choose your favourite textbook. Standard references are for instance
⊲ R. Durrett, Brownian motion and martingales in analysis, Wadsworth (1984)
⊲ I. Karatzas and S. E. Shreve, Brownian motion and stochastic calculus, Springer (1991)
⊲ Ph. E. Protter, Stochastic integration and differential equations, Springer (2003)
⊲ B. K. Øksendal, Stochastic differential equations, Springer (2000)

For those who can read French, I'd also like to recommend the lecture notes by Jean-François Le Gall, available at
⊲ http://www.dma.ens.fr/~legall

  61. PART III

⊲ The paradigm
⊲ The overdamped motion of a Brownian particle in a potential
⊲ Time scales
⊲ Metastability
⊲ Slowly driven systems

  62. The motion of a particle in a double-well potential

Two-parameter family of ODEs

    d x_s / d s = μ x_s − x_s³ + λ

describes the overdamped motion of a particle in the potential

    U(x) = −(1/2) μ x² + (1/4) x⁴ − λ x

⊲ μ³ > (27/4) λ²: Two wells, one saddle
⊲ μ³ < (27/4) λ²: One well
⊲ μ³ = (27/4) λ² and λ ≠ 0: Saddle–node bifurcation between the saddle and one of the wells
⊲ (x, λ, μ) = (0, 0, 0): Pitchfork bifurcation point

Notation: x*_± for (the positions of) the well bottoms and x*_0 for the saddle

  63. The motion of a Brownian particle in a double-well potential

For a Brownian particle:

    d x_s = ( μ x_s − x_s³ + λ ) d s + σ d W_s

⊲ x_s has an invariant density p_0(x) = (1/N) e^{−2U(x)/σ²}
⊲ For small σ, p_0(x) is strongly concentrated near the minima of the potential
⊲ If U(x) has two wells of different depths, the invariant density favours the deeper well

The invariant density does not contain all the information needed to describe the motion!
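A quick numerical illustration of the last two points, with assumed parameters μ = 1, λ = 0.1, σ = 0.3 (for λ > 0 the right-hand well is the deeper one); these values are illustrative, not from the lecture.

```python
import numpy as np

mu, lam, sigma = 1.0, 0.1, 0.3           # illustrative parameter choices
x = np.linspace(-3.0, 3.0, 20001)
dx = x[1] - x[0]
U = -0.5 * mu * x**2 + 0.25 * x**4 - lam * x
w = np.exp(-2.0 * U / sigma**2)
p0 = w / (w.sum() * dx)                  # normalised invariant density (Riemann sum)
mass_right = p0[x > 0].sum() * dx        # probability mass in the deeper well
print(mass_right)                        # close to 1 for small sigma
```

Even the modest depth difference 2λ ≈ 0.2 shifts almost all of the invariant mass into the deeper well, because the depths enter the density through the exponent 2U/σ².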

  64. Time scales

Assume: U is a double-well potential and x_0 is concentrated at the bottom x*_+ of the right-hand well

How long does it take until we may safely assume that x_t is well described by the invariant distribution?

⊲ If the noise is sufficiently weak, paths are likely to stay in the right-hand well for a long time
⊲ x_t will first approach a Gaussian in a time of order T_relax = 1/c, where c = curvature at the bottom x*_+ of the well
⊲ With overwhelming probability, paths will remain inside the same well for all times significantly shorter than Kramers' time T_Kramers = e^{2H/σ²}, where H = U(x*_0) − U(x*_+) = barrier height
⊲ Only on longer time scales will the density of x_t approach the bimodal stationary density p_0
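For the symmetric case μ = 1, λ = 0, i.e. U(x) = x⁴/4 − x²/2, both time scales are explicit. A small sketch tabulating them for a few noise intensities (the σ values are arbitrary illustrations):

```python
import math

def timescales(sigma, mu=1.0):
    """T_relax and Kramers' time for U(x) = x**4/4 - mu*x**2/2 (lambda = 0)."""
    curvature = 2.0 * mu       # U''(x) = 3x**2 - mu, evaluated at x*_+ = sqrt(mu)
    H = mu**2 / 4.0            # barrier height U(x*_0) - U(x*_+) = mu^2/4
    return 1.0 / curvature, math.exp(2.0 * H / sigma**2)

for sigma in (0.5, 0.3, 0.2):
    T_relax, T_kramers = timescales(sigma)
    print(sigma, T_relax, T_kramers)
```

T_relax stays fixed at 1/2 while T_Kramers explodes from about 7 at σ = 0.5 to several hundred thousand at σ = 0.2, which is the separation of scales the slide describes.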

  65. Time scales

Dynamics is thus very different on the different time scales

⊲ t ≪ T_relax
⊲ T_relax ≪ t ≪ T_Kramers
⊲ t ≫ T_Kramers

The method of choice to study the SDE depends on the time scale we are interested in

Hierarchical description
⊲ On a coarse-grained level, the dynamics is described by a two-state Markovian jump process, with transition rates e^{−2H_±/σ²}
⊲ The dynamics between transitions (inside a well) can be approximated by ignoring the other well: approximate the local dynamics of the deviation x_t − x*_± by the linearisation (an OU process)

    d y_s = −ω²_± y_s d s + σ d W_s
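The coarse-grained level can be sketched as a two-state jump process with exponential sojourn times. In the sketch below the rates are taken as exactly e^{−2H_±/σ²}, i.e. with prefactor 1; that normalisation (and the parameter values in the usage line) is an assumption for illustration.

```python
import numpy as np

def two_state_path(H_plus, H_minus, sigma, t_max, seed=0):
    """Jump process between the wells x*_+ (state +1) and x*_- (state -1).
    The escape rate from each well is exp(-2 H / sigma**2), prefactor set to 1."""
    rng = np.random.default_rng(seed)
    rate = {+1: np.exp(-2.0 * H_plus / sigma**2),
            -1: np.exp(-2.0 * H_minus / sigma**2)}
    t, state, jump_times = 0.0, +1, []
    while t < t_max:
        t += rng.exponential(1.0 / rate[state])   # sojourn in the current well
        state = -state
        jump_times.append(t)
    return jump_times

# symmetric wells: mean sojourn time = e^{2H/sigma^2}, Kramers' time ~ 7.39
jumps = two_state_path(H_plus=0.25, H_minus=0.25, sigma=0.5, t_max=1.0e4)
print(len(jumps))
```

The number of observed transitions is roughly t_max divided by the Kramers time, which is the sense in which the jump process reproduces the slow level of the hierarchy.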

  66. Metastability

The fact that the double-well structure of the potential is not visible on time scales shorter than T_Kramers is a manifestation of metastability: the distribution concentrated near x*_+ seems to be invariant

The relevant time scales for metastability are related to the small eigenvalues of the generator of the diffusion

  67. Slowly driven systems

Let us now turn to situations in which the potential U(x) = U(x, εs) depends slowly on time:

    d x_s = −(∂U/∂x)(x_s, εs) d s + σ d W_s

In slow time t = εs

    d x_t = −(1/ε) (∂U/∂x)(x_t, t) d t + (σ/√ε) d W_t

(d t = ε d s, d W_t = √ε d W_s, as W_{εs} and √ε W_s have the same distribution)

Note that the probability density of x_t still obeys a Fokker–Planck equation, but in general there will be no stationary solution

  68. Slowly driven systems

⊲ The depths H_± = H_±(t) of the wells may now depend on time, and may even vanish if one of the bifurcation curves is crossed
⊲ The "instantaneous" Kramers timescales e^{2H_±(t)/σ²} are no longer fixed
⊲ If the forcing timescale 1/ε, on which the potential changes shape, is longer than the maximal Kramers time of the system, one can expect the dynamics to be a slow modulation of the dynamics for the frozen potential
⊲ Otherwise, the interplay between the timescales of modulation and of noise-induced transitions becomes nontrivial
⊲ ε introduces an additional timescale via the forcing speed: T_forcing = 1/ε

  69. Slowly driven systems

Questions
⊲ How long do sample paths remain concentrated near stable equilibrium branches, that is, near the bottoms of slowly moving potential wells?
⊲ How fast do sample paths depart from unstable equilibrium branches, that is, from slowly moving saddles?
⊲ What happens near bifurcation points, when the number of equilibrium branches changes?
⊲ What can be said about the dynamics far from equilibrium branches?

  70. PART IV

Diffusion exit from a domain
⊲ Large deviations for Brownian motion
⊲ Large deviations for diffusion processes
⊲ Diffusion exit from a domain
⊲ Relation to PDEs
⊲ The concept of a quasipotential
⊲ Asymptotic behaviour of first-exit times and locations (small-noise asymptotics)
⊲ Refined results for gradient systems
⊲ Refined results for non-gradient systems: Passage through an unstable periodic orbit
⊲ Cycling

  71. Introduction: Small random perturbations

Consider a small random perturbation

    d x^ε_t = b(x^ε_t) d t + √ε g(x^ε_t) d W_t,   x^ε_0 = x_0

of the ODE ẋ_t = b(x_t) (with the same initial condition)

We expect x^ε_t ≈ x_t for small ε

This depends on
⊲ the deterministic dynamics
⊲ the noise intensity ε
⊲ the time scale

  72. Introduction: Small random perturbations

Indeed, for b Lipschitz continuous and g = Id,

    ‖x^ε_t − x_t‖ ≤ L ∫_0^t ‖x^ε_s − x_s‖ d s + √ε ‖W_t‖

Gronwall's lemma shows

    sup_{0≤s≤t} ‖x^ε_s − x_s‖ ≤ √ε sup_{0≤s≤t} ‖W_s‖ e^{Lt}

It remains to estimate sup_{0≤s≤t} ‖W_s‖

⊲ d = 1: Use the reflection principle

    P{ sup_{0≤s≤t} |W_s| ≥ r } ≤ 2 P{ sup_{0≤s≤t} W_s ≥ r } ≤ 4 P{ W_t ≥ r } ≤ 2 e^{−r²/2t}

⊲ d > 1: Reduce to d = 1 using independence

    P{ sup_{0≤s≤t} ‖W_s‖ ≥ r } ≤ 2d e^{−r²/2dt}
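The d = 1 tail bound can be checked by simulation. A sketch with arbitrary illustrative choices t = 1, r = 1.5, 1000 time steps and 20000 paths:

```python
import numpy as np

rng = np.random.default_rng(1)
t, r, n_steps, n_paths = 1.0, 1.5, 1000, 20000
dW = rng.normal(0.0, np.sqrt(t / n_steps), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)                       # discretised Brownian paths
p_emp = np.mean(np.abs(W).max(axis=1) >= r)     # empirical P{ sup |W_s| >= r }
bound = 2.0 * np.exp(-r**2 / (2.0 * t))
print(p_emp, bound)
```

The empirical probability sits well below the bound; the bound is crude (it combines the reflection principle with a Gaussian tail estimate) but has the correct exponential decay in r²/t.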

  73. Introduction: Small random perturbations

For Γ ⊂ C = C([0, T], R^d) with Γ ⊂ B((x_s)_s, δ)^c,

    P{ x^ε ∈ Γ } ≤ P{ sup_{0≤s≤t} ‖x^ε_s − x_s‖ ≥ δ } ≤ P{ sup_{0≤s≤t} ‖W_s‖ ≥ (δ/√ε) e^{−Lt} }

and

    P{ x^ε ∈ Γ } ≤ 2d exp( −δ² e^{−2Lt} / (2εdt) ) → 0 as ε → 0

⊲ The event { x^ε ∈ Γ } is atypical: its occurrence is a large deviation
⊲ Question: Rate of convergence as a function of Γ?
⊲ An exact rate is generally not possible, but an exponential rate can be found

Aim: Find a functional I : C → [0, ∞] s.t.

    P{ ‖x^ε − ϕ‖_∞ < δ } ≈ e^{−I(ϕ)/ε}  for ε → 0

⊲ Provides a local description

  74. Large deviations for Brownian motion: The endpoint

Special case: Scaled Brownian motion, d = 1

    d W^ε_t = √ε d W_t   ⇒   W^ε_t = √ε W_t

⊲ Consider the endpoint instead of the whole path

    P{ W^ε_t ∈ A } = ∫_A (1/√(2πεt)) exp(−x²/2εt) d x

⊲ Use the Laplace method to evaluate the integral

    ε log P{ W^ε_t ∈ A } ∼ −(1/2t) inf_{x∈A} x²  as ε → 0

Caution
⊲ If A = {a} consists of a single point: l.h.s. = −∞ < r.h.s. ∈ (−∞, 0]
⊲ The limit does not necessarily exist
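The convergence of ε log P can be watched numerically for A = [1, ∞) and t = 1. Exact Gaussian tails underflow in floating point for small ε, so the sketch below substitutes the leading Mills-ratio asymptotics log P{N(0,1) ≥ a} ≈ −a²/2 − log(a√(2π)); that substitution is an assumption of the sketch, valid for large a.

```python
import math

def log_tail(a):
    """Leading-order asymptotics of log P{ N(0,1) >= a } for large a."""
    return -0.5 * a * a - math.log(a * math.sqrt(2.0 * math.pi))

t = 1.0
for eps in (1e-2, 1e-4, 1e-8):
    a = 1.0 / math.sqrt(eps * t)     # P{ W^eps_t >= 1 } = P{ N(0,1) >= a }
    print(eps, eps * log_tail(a))
```

The printed values approach −(1/2t) inf_{x≥1} x² = −1/2 as ε → 0; the slow convergence comes from the logarithmic prefactor, which is subexponential and so invisible at the large-deviation scale.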

  75. Large deviations for Brownian motion: The endpoint

Remedy: Use interior and closure ⇒ Large deviation principle

    −(1/2t) inf_{x∈A°} x² ≤ lim inf_{ε→0} ε log P{ W^ε_t ∈ A }
                          ≤ lim sup_{ε→0} ε log P{ W^ε_t ∈ A } ≤ −(1/2t) inf_{x∈Ā} x²

  76. Large deviations for Brownian motion: Schilder's theorem

Schilder's Theorem (1966)
Scaled BM satisfies a (full) large deviation principle (LDP) with good rate function

    I(ϕ) = I_{[0,T],0}(ϕ) = (1/2) ∫_{[0,T]} ‖ϕ̇_s‖² d s = (1/2) ‖ϕ‖²_{H¹}   if ϕ ∈ H¹, ϕ_0 = 0
                          = +∞                                               otherwise

⊲ I : C_0 := { ϕ ∈ C : ϕ_0 = 0 } → [0, ∞] is lower semi-continuous
⊲ Good rate function: I has compact level sets
⊲ Upper and lower large-deviation bounds:

    −inf_{Γ°} I ≤ lim inf_{ε→0} ε log P{ W^ε ∈ Γ } ≤ lim sup_{ε→0} ε log P{ W^ε ∈ Γ } ≤ −inf_{Γ̄} I

⊲ Infinite-dimensional version of the Laplace method
⊲ W^ε ∉ H¹ ⇒ I(W^ε) = +∞ (almost surely)
⊲ I(0) = 0 reflects W^ε → 0 (ε → 0)

  77. Large deviations for Brownian motion: Examples

Example I: The endpoint again . . . (d = 1)  Γ = { ϕ ∈ C_0 : ϕ_t ∈ A }

    inf_Γ I = inf_{x∈A} (1/2) ∫_0^t | (d/ds)(xs/t) |² d s = inf_{x∈A} x²/2t

inf_Γ I = cost to force BM to be in A at time t

    ⇒ P{ W^ε_t ∈ A } ∼ exp( −inf_{x∈A} x²/2tε )

Note: The typical spreading of W^ε_t is √(εt)

Example II: BM leaving a small ball  Γ = { ϕ ∈ C_0 : ‖ϕ‖_∞ ≥ δ }

    inf_Γ I = inf_{0≤t≤T} inf_{ϕ∈C_0 : ‖ϕ_t‖=δ} I(ϕ) = inf_{0≤t≤T} δ²/2t = δ²/2T

inf_Γ I = cost to force BM to leave B(0, δ) before time T

    ⇒ P{ ∃ t ≤ T : ‖W^ε_t‖ ≥ δ } ∼ exp( −δ²/2Tε )
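Example I's infimum can be seen in a discretised version of the rate functional: among paths from 0 to x, the straight line ϕ_s = xs/t minimises I and gives x²/2t. A sketch; the grid size, endpoint and perturbation shape are arbitrary illustrative choices.

```python
import numpy as np

def rate(phi, T):
    """Discretised Schilder rate functional I(phi) = (1/2) int_0^T |phi'(s)|^2 ds."""
    ds = T / (len(phi) - 1)
    return 0.5 * np.sum(np.diff(phi)**2) / ds

T, x_end, n = 1.0, 2.0, 200
s = np.linspace(0.0, T, n + 1)
straight = x_end * s / T                 # the straight line phi_s = x s / T
I_min = rate(straight, T)
print(I_min)                             # x_end**2 / (2 T) = 2.0

# a perturbation vanishing at both endpoints can only increase the rate
bump = 0.3 * np.sin(np.pi * s / T)
print(rate(straight + bump, T))          # strictly larger than I_min
```

The cross term between the straight line and any perturbation with fixed endpoints vanishes, so the straight line is the global minimiser, which is exactly why the endpoint LDP rate is x²/2t.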

  78. Large deviations for Brownian motion: Examples

Example III: BM staying in a cone (similar . . . Homework!)

  79. Large deviations for Brownian motion: Lower bound

To show: Lower bound for open sets

    lim inf_{ε→0} ε log P{ W^ε ∈ G } ≥ −inf_G I   for all open G ⊂ C_0

Lemma (local variant of the lower bound)

    lim inf_{ε→0} ε log P{ W^ε ∈ B(ϕ, δ) } ≥ −I(ϕ)

for all ϕ ∈ C_0 s.t. I(ϕ) < ∞ and all δ > 0

⊲ Lemma ⇒ lower bound
⊲ Rewrite (with W̃_t = W_t − ϕ_t/√ε)

    P{ W^ε ∈ B(ϕ, δ) } = P{ ‖W^ε − ϕ‖_∞ < δ } = P{ W̃ ∈ B(0, δ/√ε) }

⊲ Proof of the Lemma: via the Cameron–Martin–Girsanov formula, which allows one to transform away the drift

  80. Cameron–Martin–Girsanov formula (special case, d = 1)

    {W_t}_t P-BM   ⇒   {W̃_t}_t Q-BM

where

    W̃_t = W_t − ∫_0^t h(s) d s,   h ∈ L²,

    dQ/dP |_{F_t} = exp( ∫_0^t h(s) d W_s − (1/2) ∫_0^t h(s)² d s )
