Neural Fields, Finite-Dimensional Approximation, Large Deviations, and SDE Continuation

Christian Kuehn, Vienna University of Technology
Outline

Part 1: Neural Fields (joint work with Martin Riedler, Linz/Vienna)
1. Neural Fields: Amari type
2. Galerkin Approximation
3. Large Deviation Principle(s)

Part 2: SDE Continuation
1. Numerical Continuation
2. Extension to SODEs
3. Calculating Kramers' Law
4. Extension to SPDEs
Neural Fields

Amari-type neural field model:
$$ dU_t(x) = \Big[ -\alpha U_t(x) + \int_{\mathcal{B}} w(x,y)\, f(U_t(y))\, dy \Big]\, dt + \varepsilon\, dW_t(x). $$

Ingredients:
◮ $\mathcal{B} \subset \mathbb{R}^d$ bounded closed domain; Hilbert space $X = L^2(\mathcal{B})$.
◮ $(x,t) \in \mathcal{B} \times [0,T]$, $U_t(x) \in \mathbb{R}$, $\alpha > 0$, $0 < \varepsilon \ll 1$.
◮ $w : \mathcal{B} \times \mathcal{B} \to \mathbb{R}$ kernel, modelling neural connectivity.
◮ $f : \mathbb{R} \to (0, +\infty)$ gain function, modelling neural input.
◮ $Q : X \to X$ trace-class, non-negative, symmetric operator with eigenvalues $\lambda_i^2 \in \mathbb{R}$ and eigenfunctions $v_i$.
◮ $W_t(x) := \sum_{i=1}^{\infty} \lambda_i\, \beta_t^i\, v_i(x)$, where the $\beta_t^i$ are iid Brownian motions.
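Once a basis is fixed, the series defining $W_t$ can be sampled directly. The sketch below is a minimal illustration only: the domain $\mathcal{B} = [0,\pi]$, the cosine eigenbasis $v_i$, and the decay $\lambda_i = 1/i^2$ (which makes $Q$ trace-class, since $\sum_i \lambda_i^2 < \infty$) are assumptions, not choices made on the slides.

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample the truncated Q-Wiener process W_t(x) = sum_i lambda_i beta_t^i v_i(x)
# at a fixed time t. Assumed for illustration: B = [0, pi], orthonormal
# cosine eigenfunctions v_i, eigenvalues lambda_i^2 = 1/i^4 (trace-class).
N, t = 32, 1.0
x = np.linspace(0.0, np.pi, 200)
v = np.array([np.sqrt(2.0 / np.pi) * np.cos(i * x) for i in range(1, N + 1)])
lam = np.array([1.0 / i**2 for i in range(1, N + 1)])
beta_t = np.sqrt(t) * rng.standard_normal(N)   # beta_t^i ~ N(0, t), iid
W_t = (lam * beta_t) @ v                       # one truncated sample of W_t(x)
```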
Existence and Regularity

Assumptions:
◮ $Kg(x) := \int_{\mathcal{B}} w(x,y)\, g(y)\, dy$ defines a compact self-adjoint operator on $L^2(\mathcal{B})$.
◮ $F(g)(x) := f(g(x))$ is a Lipschitz continuous Nemytskii operator on $L^2(\mathcal{B})$.

Neural field as an evolution equation:
$$ dU_t = [-\alpha U_t + K F(U_t)]\, dt + \varepsilon\, dW_t. $$

(Da Prato/Zabczyk 92) ⇒ mild solution $U \in C([0,T], L^2(\mathcal{B}))$:
$$ U_t = e^{-\alpha t} U_0 + \int_0^t e^{-\alpha(t-s)} K F(U_s)\, ds + \varepsilon \int_0^t e^{-\alpha(t-s)}\, dW_s. $$

Lemma (K./Riedler, 2013)
If the $v_i$ are Lipschitz with constants $L_i$ and, for some $\rho \in (0,1)$,
$$ \sup_{x \in \mathcal{B}} \sum_{i=1}^{\infty} \lambda_i^2\, v_i(x)^2 < \infty, \qquad \sup_{x \in \mathcal{B}} \sum_{i=1}^{\infty} \lambda_i^2\, L_i^{2\rho}\, |v_i(x)|^{2(1-\rho)} < \infty, $$
then $U \in C([0,T], C(\mathcal{B}))$.
Galerkin Approximation

Spectral representation of the solution:
$$ U_t(x) = \sum_{i=1}^{\infty} u_t^i\, v_i(x). $$

Take the $L^2$-inner product with $v_i$ in the neural field model:
$$ d\langle U_t, v_i \rangle = \big[ -\alpha \langle U_t, v_i \rangle + \langle K F(U_t), v_i \rangle \big]\, dt + \varepsilon\, \langle dW_t, v_i \rangle, $$
$$ \Rightarrow \quad du_t^i = \big[ -\alpha u_t^i + (KF)^i(u_t^1, u_t^2, \dots) \big]\, dt + \varepsilon \lambda_i\, d\beta_t^i, $$
where
$$ (KF)^i(u_t^1, u_t^2, \dots) := \int_{\mathcal{B}} \int_{\mathcal{B}} f\Big( \sum_{j=1}^{\infty} u_t^j v_j(x) \Big)\, w(x,y)\, v_i(y)\, dy\, dx. $$
Truncating after $N$ modes yields the Galerkin approximation $U_t^N = \sum_{i=1}^N u_t^i\, v_i$.
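A minimal numerical sketch of this truncated system, integrated with Euler-Maruyama. Every concrete ingredient here is an assumption for illustration ($\mathcal{B} = [0,\pi]$, cosine eigenbasis, $\lambda_i = 1/i^2$, a sigmoidal gain $f$, a difference-of-Gaussians kernel $w$); the slides keep all of these abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: grid quadrature on B = [0, pi], cosine eigenbasis v_i,
# lambda_i = 1/i^2, sigmoidal f, "Mexican hat" kernel w.
M, Nmax = 200, 32
x = np.linspace(0.0, np.pi, M)
dx = x[1] - x[0]
v = np.array([np.sqrt(2.0 / np.pi) * np.cos(i * x) for i in range(1, Nmax + 1)])
lam = np.array([1.0 / i**2 for i in range(1, Nmax + 1)])
X, Y = np.meshgrid(x, x, indexing="ij")
W = np.exp(-((X - Y) ** 2)) - 0.5 * np.exp(-0.5 * (X - Y) ** 2)  # w(x, y)
f = lambda u: 1.0 / (1.0 + np.exp(-u))                           # gain
alpha, eps, dt, steps = 1.0, 0.1, 1e-3, 2000

def KF(u, N):
    """(KF)^i = int_B int_B f(sum_j u^j v_j(x)) w(x,y) v_i(y) dy dx."""
    U = v[:N].T @ u                                # field U(x) on the grid
    inner = (f(U)[:, None] * W).sum(axis=0) * dx   # int_B f(U(x)) w(x,y) dx
    return (v[:N] * inner[None, :]).sum(axis=1) * dx

def galerkin_path(N, dB):
    """Euler-Maruyama for du^i = [-alpha u^i + (KF)^i] dt + eps lam_i dbeta^i."""
    u = np.zeros(N)
    for k in range(dB.shape[0]):
        u += (-alpha * u + KF(u, N)) * dt + eps * lam[:N] * dB[k, :N]
    return u

dB = np.sqrt(dt) * rng.standard_normal((steps, Nmax))  # Brownian increments
u_N = galerkin_path(16, dB)                            # coefficients at t = T
```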
Approximation Accuracy

Theorem (K./Riedler, 2013)
For all $T > 0$,
$$ \lim_{N \to \infty} \sup_{t \in [0,T]} \| U_t - U_t^N \|_{L^2(\mathcal{B})} = 0 \quad a.s. $$
If, in addition, the conditions of the regularity lemma hold, $U_0 \in C(\mathcal{B})$, and $\lim_{N \to \infty} \| U_0 - P_N U_0 \|_{C(\mathcal{B})} = 0$, then
$$ \lim_{N \to \infty} \sup_{t \in [0,T]} \| U_t - U_t^N \|_{C(\mathcal{B})} = 0 \quad a.s. $$

Proof: lengthy calculation using a technique by Blömker/Jentzen (SINUM 2013).
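A rough numerical counterpart, reusing `galerkin_path`, `v`, `dx`, and the shared increments `dB` from the previous sketch: since the exact solution is unavailable, consecutive truncation levels driven by the same Brownian motion serve as a proxy for the almost-sure convergence above.

```python
import numpy as np

# Compare two truncation levels driven by identical noise increments dB;
# the L^2(B) distance between them shrinks as the levels increase.
u8, u16 = galerkin_path(8, dB), galerkin_path(16, dB)
U8, U16 = v[:8].T @ u8, v[:16].T @ u16          # fields at the final time
err = np.sqrt(np.sum((U16 - U8) ** 2) * dx)     # L^2(B) distance
print("||U^16 - U^8||_{L^2(B)} at t = T:", err)
```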
Large Deviations Principle (LDP)

Example: stochastic ordinary differential equation
$$ du_t = g(u_t)\, dt + \varepsilon\, G(u_t)\, d\beta_t, $$
where
◮ $u_t \in \mathbb{R}^N$, $g : \mathbb{R}^N \to \mathbb{R}^N$, $G : \mathbb{R}^N \to \mathbb{R}^{N \times k}$,
◮ $\beta_t = (\beta_t^1, \dots, \beta_t^k)^T$ is a vector of $k$ iid Brownian motions,
◮ $u_0 \in \mathcal{D} \subset \mathbb{R}^N$.

Goal: estimate the first-exit time
$$ \tau_{\mathcal{D}}^{\varepsilon} := \inf\{ t > 0 : u_t = u_t^{\varepsilon} \notin \mathcal{D} \}. $$
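First-exit times can be estimated brute-force by Monte Carlo. The sketch below uses an assumed one-dimensional example, $g(u) = u - u^3$ (a double-well drift), $G = 1$, start at the left minimum $u_0 = -1$, and exit domain $\mathcal{D} = (-\infty, 0)$; none of these choices come from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo estimate of E[tau] for du = (u - u^3) dt + eps dbeta,
# u0 = -1, exit when the path reaches the saddle at u = 0.
eps, dt, n_paths = 0.6, 1e-3, 100
taus = []
for _ in range(n_paths):
    u, t = -1.0, 0.0
    while u < 0.0:                              # stop once u leaves D
        u += (u - u**3) * dt + eps * np.sqrt(dt) * rng.standard_normal()
        t += dt
    taus.append(t)
print("Monte Carlo estimate of E[tau]:", np.mean(taus))
```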
An Abstract Theorem

◮ $X := C_0([0,T], \mathbb{R}^N) = \{ \phi \in C([0,T], \mathbb{R}^N) : \phi(0) = u_0 \}$.
◮ $H_1^N := \{ \phi : [0,T] \to \mathbb{R}^N : \phi \text{ absolutely continuous}, \phi' \in L^2, \phi(0) = 0 \}$.
◮ Diffusion matrix $D(u) := G(u) G(u)^T \in \mathbb{R}^{N \times N}$, assumed positive definite.

Theorem (Freidlin, Wentzell)
The SODE satisfies an LDP:
$$ -\inf_{\Gamma^o} I \;\le\; \liminf_{\varepsilon \to 0} \varepsilon^2 \ln \mathbb{P}\big( (u_t^{\varepsilon})_{t \in [0,T]} \in \Gamma \big) \;\le\; \limsup_{\varepsilon \to 0} \varepsilon^2 \ln \mathbb{P}\big( (u_t^{\varepsilon})_{t \in [0,T]} \in \Gamma \big) \;\le\; -\inf_{\bar{\Gamma}} I $$
for any measurable set of paths $\Gamma \subset X$, with rate function
$$ I(\phi) = \begin{cases} \frac{1}{2} \int_0^T (\phi_t' - g(\phi_t))^T D(\phi_t)^{-1} (\phi_t' - g(\phi_t))\, dt, & \phi \in u_0 + H_1^N, \\ +\infty, & \text{otherwise}. \end{cases} $$
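The rate function is straightforward to evaluate on a time-discretized path. A minimal sketch, assuming the scalar case $N = 1$ (so $D(u) = G(u)^2$) and the same double-well drift as in the exit-time example; the straight-line test path is likewise only an illustration.

```python
import numpy as np

def rate_function(phi, t, g, Dinv):
    """I(phi) = 1/2 int_0^T (phi' - g(phi)) D(phi)^{-1} (phi' - g(phi)) dt."""
    dphi = np.diff(phi) / np.diff(t)            # forward-difference phi'
    resid = dphi - g(phi[:-1])
    return 0.5 * np.sum(resid * Dinv(phi[:-1]) * resid * np.diff(t))

# Cost of forcing the double-well example from u = -1 to the saddle at u = 0:
T = 5.0
t = np.linspace(0.0, T, 501)
phi = -1.0 + t / T                              # straight-line test path
print(rate_function(phi, t, g=lambda u: u - u**3, Dinv=lambda u: 1.0))
```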
Arrhenius-Eyring-Kramers Formula

◮ Gradient structure and additive noise:
$$ du_t = -\nabla V(u_t)\, dt + \varepsilon\, \mathrm{Id}\, d\beta_t. $$
◮ $V$ has precisely two local minima $u_{\pm}^*$ and a single saddle point $u_s^*$.
◮ The Hessian $\nabla^2 V(u_s^*)$ at the saddle has eigenvalues
$$ \rho_1(u_s^*) < 0 < \rho_2(u_s^*) < \cdots < \rho_N(u_s^*). $$

Theorem (Kramers' Formula)
The mean first-passage time from $u_-^*$ to $u_+^*$ obeys
$$ \mathbb{E}[\tau_{u_-^* \to u_+^*}] \sim \frac{2\pi}{|\rho_1(u_s^*)|} \sqrt{\frac{|\det \nabla^2 V(u_s^*)|}{\det \nabla^2 V(u_-^*)}}\; e^{2(V(u_s^*) - V(u_-^*))/\varepsilon^2}. $$
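A sketch evaluating this formula for the assumed one-dimensional double well $V(u) = u^4/4 - u^2/2$, so that $-\nabla V(u) = u - u^3$ matches the Monte Carlo exit-time example above; in one dimension the determinants reduce to second derivatives.

```python
import numpy as np

def kramers_time(eps, V, d2V, u_min, u_saddle):
    """2 pi / |rho_1| * sqrt(|det H_s| / det H_min) * exp(2 dV / eps^2)."""
    rho1 = d2V(u_saddle)                        # the negative eigenvalue
    pref = (2 * np.pi / abs(rho1)) * np.sqrt(abs(d2V(u_saddle)) / d2V(u_min))
    return pref * np.exp(2 * (V(u_saddle) - V(u_min)) / eps**2)

V = lambda u: u**4 / 4 - u**2 / 2               # minima at +/-1, saddle at 0
d2V = lambda u: 3 * u**2 - 1
print(kramers_time(eps=0.6, V=V, d2V=d2V, u_min=-1.0, u_saddle=0.0))
```

For $\varepsilon = 0.6$ this evaluates to roughly $18$, which can be compared directly against the Monte Carlo first-exit estimate from the earlier sketch.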
Back to Neural Fields: Kramers' Formula and LDP

Observations (K./Riedler, 2013)
◮ From [Laing/Troy03, Enulescu/Bestehorn07]: for $\varepsilon = 0$ the neural field has an energy structure. Let $g := f^{-1}$ and $P(x,t) = f(U(x,t))$; then
$$ \partial_t P(x,t) = -\frac{1}{g'(P(x,t))}\, \nabla E[P(x,t)]. $$

But there are problems for $\varepsilon > 0$:
◮ The change of variables introduces multiplicative noise.
◮ The factor $1/g'(P(x,t))$ depends on space and time.
◮ The noise is only trace-class.

Nevertheless:
◮ An LDP follows from the evolution-equation formulation [Da Prato/Zabczyk 92].
◮ The LDP can be approximated using the Galerkin method.
Part 2: SDE Continuation

Motivation: consider the general differential equation
$$ \frac{\partial u}{\partial t} = F(u; \lambda), $$
where $\lambda \in \mathbb{R}^p$ are parameters.
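Numerical continuation, the first topic of Part 2, tracks branches of steady states $F(u;\lambda) = 0$ as $\lambda$ varies. The sketch below is a naive (natural-parameter) version on a hypothetical scalar example, the fold normal form $F(u,\lambda) = \lambda + u - u^3$: past the fold at $\lambda = -2/(3\sqrt{3})$ the corrector jumps branches, which is precisely the failure that pseudo-arclength continuation is designed to handle.

```python
import numpy as np
from scipy.optimize import fsolve

# Natural-parameter continuation: step lambda, reuse the previous solution
# as the predictor, and correct with a Newton-type solve.
F = lambda u, lam: lam + u - u**3               # hypothetical fold normal form

branch = []
u = 1.0                                         # start on the upper branch
for lam in np.linspace(0.0, -0.4, 41):
    u = fsolve(F, u, args=(lam,))[0]            # corrector
    branch.append((lam, u))
```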