Linear-Threshold Modeling of Brain Network Dynamics - Erfan Nozari (PowerPoint presentation)


  1. Linear-Threshold Modeling of Brain Network Dynamics
     Erfan Nozari, University of California, San Diego
     Department of Mechanical and Aerospace Engineering / Department of Cognitive Science
     http://carmenere.ucsd.edu/erfan
     Joint work with: Prof. Jorge Cortés
     2019 American Control Conference, July 9, 2019

  2. Overview
     • The brain as a networked dynamical system
     • New: rapid advancements in neuro-technologies
     • Critical applications in:
       - Deep brain stimulation (DBS)
       - Transcranial magnetic stimulation (TMS)
       - Brain-machine/computer interfaces (BMI/BCI)
       - Optogenetics
       - ...
     [Osborn et al., Sci Robot, 2018] [Chen et al., Science, 2018]

  3. Outline
     1. Derivation
     2. Analysis

  4. Outline
     1. Derivation
     2. Analysis

  5. Starting Point: Biophysical Spiking Models
     • Conductance-based (a.k.a. Hodgkin-Huxley) models: Neuron ≡ RC circuit
     • Input = current, output = voltage
     • Nonlinear (active) & time-varying resistors ⇒ excitable behavior (spiking)
     • output ≈ $\sum_{t_s} \delta(t - t_s)$ (a train of spikes at times $t_s$)
     Data from [Henze et al., CRCNS, 2009]; [Image: Behrang Amini, Wikimedia.org]
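To make the RC-circuit analogy concrete, here is a minimal sketch (all parameters are illustrative assumptions, not from the talk) of the passive membrane equation that conductance-based models build on; the full Hodgkin-Huxley model replaces the fixed leak resistor with nonlinear, time-varying conductances that produce spikes:

```python
import numpy as np

# Passive RC membrane: C dv/dt = -v/R + I_in. Parameters are illustrative;
# Hodgkin-Huxley adds voltage-gated (nonlinear, time-varying) conductances.
C, R = 1.0, 10.0                 # membrane capacitance and leak resistance
dt, T = 0.1, 100.0               # Euler step and horizon (ms)
t = np.arange(0.0, T, dt)
I_in = 0.5 * (t > 20.0)          # step input current at t = 20 ms
v = np.zeros_like(t)
for k in range(len(t) - 1):
    v[k + 1] = v[k] + (dt / C) * (-v[k] / R + I_in[k])
# v rises exponentially toward R * I_in = 5.0 with time constant R*C = 10 ms
```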

  6. Mean-Field Approximation: Rate Dynamics
     • Information often appears to be encoded mostly in the firing rate (# spikes/s)
     • $x_i(t)$ = firing rate of neuron $i$
     • Simplifying assumptions:
       1.  Poisson spiking
       2a. For constant input $I_{in,i}$: $x_i = \sigma(I_{in,i})$
       2b. For time-varying input $I_{in,i}(t)$: $\tau \dot x_i(t) = -x_i(t) + \sigma(I_{in,i}(t))$
       3.  Slowly varying inputs $I_{in,i}(t)$ (timescale $\gg \tau$)
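A minimal sketch of assumptions 2a/2b (the logistic form of $\sigma$ and all constants are my assumptions; the slide leaves $\sigma$ generic):

```python
import numpy as np

def sigma(I, x_max=100.0, gain=0.1):
    # A generic sigmoidal rate function; the talk does not fix its form.
    return x_max / (1.0 + np.exp(-gain * I))

tau, dt = 10.0, 0.1
I_in = 20.0                              # constant input (assumption 2a)
x = 0.0
for _ in range(5000):
    x += (dt / tau) * (-x + sigma(I_in)) # tau dx/dt = -x + sigma(I_in)  (2b)
# x converges to sigma(20.0), the steady-state rate predicted by 2a
```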

  7. Network Dynamics
     • Node = population of neurons; state = average firing rate
     • Network dynamics (mean-field approximation):
       $\tau \dot x(t) = -x(t) + \sigma\big(W x(t) + p(t)\big)$
     • Each column of $W$ has a fixed sign (excitatory $+$ or inhibitory $-$):
       $W = \begin{bmatrix} + & + & \cdots & - \\ + & + & \cdots & - \\ \vdots & \vdots & \ddots & \vdots \\ + & + & \cdots & - \end{bmatrix}$
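A sketch of the mean-field network simulation (random weights, the network size, and the logistic $\sigma$ are assumptions; the sign pattern mimics the excitatory/inhibitory column structure of $W$ above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = np.abs(rng.normal(0.0, 0.5, size=(n, n)))  # excitatory columns (+)
W[:, -1] *= -1.0                               # one inhibitory column (-)
p = rng.normal(size=n)                         # external input
tau, dt = 10.0, 0.1
x = np.zeros(n)
for _ in range(5000):
    # tau dx/dt = -x + sigma(W x + p), with a logistic sigma
    x += (dt / tau) * (-x + 1.0 / (1.0 + np.exp(-(W @ x + p))))
# x approximates the populations' steady firing rates (if the net is stable)
```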

  8. Approximating the Sigmoidal Nonlinearity
     Two popular approximations:
     • Kuramoto: cubic approximation in $x_i$, linearization in $\{W_{ij}\}$, change to polar coordinates:
       $\dot\theta_i = \omega_i + \sum_j K_{ij} \sin(\theta_j - \theta_i)$
       → For weakly coupled oscillators; explicit phase dynamics, $n/2$ states, smooth
     • Linear-Threshold: piecewise-linearization of $\sigma(\cdot)$ into the clip function $[\,\cdot\,]_0^{m_i}$:
       $\tau_i \dot x_i = -x_i + \big[\sum_j W_{ij} x_j + p_i\big]_0^{m_i}$
       → For arbitrary dynamics; implicit phase and amplitude (oscillations), switched-affine
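The linear-threshold branch replaces $\sigma$ with the clip function $[z]_0^m$; a minimal sketch (the weights, bounds, and inputs below are assumed for illustration):

```python
import numpy as np

def lt(z, m):
    # Piecewise-linearization of the sigmoid: [z]_0^m = min(max(z, 0), m)
    return np.clip(z, 0.0, m)

# One Euler step of tau_i dx_i/dt = -x_i + [sum_j W_ij x_j + p_i]_0^{m_i}
W = np.array([[0.2, -0.4], [0.3, 0.1]])
m = np.array([1.0, 1.0])
p = np.array([0.5, 0.2])
tau, dt = 1.0, 0.01
x = np.array([0.1, 0.9])
x = x + (dt / tau) * (-x + lt(W @ x + p, m))
```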

  9. Outline
     1. Derivation
     2. Analysis

  10. Linear-Threshold Networks as Switched-Affine Systems
      $\tau_i \dot x_i = -x_i + \big[\underbrace{\textstyle\sum_j W_{ij} x_j + p_i}_{I_{in,i}}\big]_0^{m_i}$
      • Solutions exist in the classical sense ($C^1$) and are unique
      • State space: $[0, m] = [0, m_1] \times [0, m_2] \times \cdots \times [0, m_n]$
      • The dynamics of each node $i$ can be in 3 modes ⇒ $3^n$ switching regions:
        $\tau_i \dot x_i = -x_i$ if $I_{in,i} \le 0$
        $\tau_i \dot x_i = -x_i + I_{in,i}$ if $0 \le I_{in,i} \le m_i$
        $\tau_i \dot x_i = -x_i + m_i$ if $m_i \le I_{in,i}$
      • Switched-affine representation:
        $\tau \dot x = (-I + \Sigma^\ell_{\sigma(x)} W)\, x + \Sigma^\ell_{\sigma(x)} p + \Sigma^s_{\sigma(x)} m, \qquad \sigma(x) \in \{0, \ell, s\}^n$
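A sketch of the switched-affine form: mode() returns $\sigma(x) \in \{0, \ell, s\}^n$ and rhs() builds the corresponding affine vector field (the helper names are mine; this is equivalent to clipping node-by-node):

```python
import numpy as np

def mode(x, W, p, m):
    # sigma(x): '0' = below threshold, 'l' = linear, 's' = saturated
    I_in = W @ x + p
    return np.where(I_in <= 0, "0", np.where(I_in >= m, "s", "l"))

def rhs(x, W, p, m, tau):
    # tau dx/dt = (-I + S_l W) x + S_l p + S_s m, with S_l, S_s 0/1 diagonal
    I_in = W @ x + p
    S_l = np.diag(((I_in > 0) & (I_in < m)).astype(float))
    S_s = np.diag((I_in >= m).astype(float))
    n = len(x)
    return ((-np.eye(n) + S_l @ W) @ x + S_l @ p + S_s @ m) / tau
```

Row by row this reproduces the three modes: a '0' row gives $-x_i$, an 'l' row gives $-x_i + I_{in,i}$, and an 's' row gives $-x_i + m_i$.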

  11. Complex & Nonlinear Dynamics
      • Wide range of complex behavior, including:
        1. Monostability
        2. Multistability
        3. Limit cycles
        4. Chaos

  12. Equilibria and Global Stability
      Some definitions:
      • $W \in \mathcal{H}$ if all its principal submatrices are Hurwitz
      • $W \in \mathcal{P}$ if all its principal minors are positive
      • $W \in \mathcal{L}$ if there exists $P = P^T > 0$ such that for all $\sigma \in \{0,1\}^n$:
        $(-I + W^T \mathrm{diag}(\sigma)) P + P (-I + \mathrm{diag}(\sigma) W) < 0$
      Conditions:
      • $I - W \in \mathcal{P}$: sufficient for EUE (existence & uniqueness of equilibria); necessary for GES
      • $-I + W \in \mathcal{H}$: necessary for GES (conjectured: also sufficient)
      • $\rho(|W|) < 1$: sufficient for GES [Feng & Hadeler, 1996]
      • $W \in \mathcal{L}$: sufficient for GES [Pavlov et al., 2005]
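A sketch of how these conditions can be checked numerically for a small $W$ (function names are mine; the brute-force principal-minor test is exponential in $n$, so it is only practical for small networks):

```python
import numpy as np
from itertools import combinations

def in_P(M):
    # M is in class P iff all of its principal minors are positive
    n = M.shape[0]
    return all(np.linalg.det(M[np.ix_(s, s)]) > 0
               for k in range(1, n + 1) for s in combinations(range(n), k))

def rho_abs(W):
    # rho(|W|): spectral radius of the elementwise absolute value of W
    return np.max(np.abs(np.linalg.eigvals(np.abs(W))))

W = np.array([[0.2, -0.5], [0.4, 0.1]])
print(rho_abs(W) < 1)        # sufficient for GES [Feng & Hadeler, 1996]
print(in_P(np.eye(2) - W))   # the I - W in P condition for EUE
```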

  13. Implications for the Brain: Need for Stabilization
      • The stronger or larger a network, the more unstable it becomes
        [plot: instability vs. network size/strength for random networks, with linear fit]
      • But brain networks are large and become stronger with learning (without losing stability!)
      ⇒ Need for stabilization mechanisms:
        - via structure $W$ → homeostasis (re-normalizing rows of $W$)
        - via input $p(t)$ → ?
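One way to read the homeostasis bullet as an algorithm: rescale each row of $W$ so its absolute row sum stays below 1, which forces $\rho(|W|) \le \|W\|_\infty < 1$ and hence GES by the condition on the previous slide. A sketch (the target value and the per-row scaling rule are my assumptions, not the talk's mechanism):

```python
import numpy as np

def homeostasis(W, target=0.9):
    # Rescale each row so that sum_j |W_ij| <= target < 1. Then
    # rho(|W|) <= max absolute row sum <= target, sufficient for GES.
    row_sums = np.abs(W).sum(axis=1)
    scale = np.minimum(1.0, target / np.maximum(row_sums, 1e-12))
    return W * scale[:, None]

rng = np.random.default_rng(1)
W = homeostasis(rng.normal(0.0, 1.0, size=(8, 8)))
assert np.max(np.abs(np.linalg.eigvals(np.abs(W)))) < 1.0
```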

  14. Selective Stabilization via Inhibitory Control
      • Input decomposition: $p(t) = B u(t) + \tilde p$, with $B \le 0$ (inhibitory) and $u$ coming from higher-order areas
      • Stabilization can/should be selective:
        $x = \begin{pmatrix} x^0 \\ x^1 \end{pmatrix}$, with $x^0$ to be stabilized and $x^1$ arbitrary (active)
      Theorem (Inhibitory Stabilization): Assume $u(t) \equiv \bar u$ or $u(t) = K x(t)$. If $\dim(u) \ge \dim(x^0)$, there exists $u(t)$ such that $x(t) \to x^* = (0, x^{1*})$ GES if and only if the $x^1$ sub-dynamics is internally GES.
      ⇒ The stability of $x^1$ is the sole determiner of the stabilizability of $x$
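A simulation sketch of the constant-input case $u(t) \equiv \bar u$: with $B \le 0$ acting only on $x^0$ and $\bar u$ chosen above the worst-case excitation over $[0, m]$, the $x^0$ nodes' net input stays negative, so they decay as $\tau \dot x^0 = -x^0$ while $x^1$ evolves freely. All parameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n0, n1 = 2, 3                                 # x0 to stabilize, x1 active
n = n0 + n1
W = rng.uniform(0.0, 0.3, size=(n, n))        # nonnegative for simplicity
m = np.ones(n)
p_tilde = 0.3 * np.ones(n)
B = np.vstack([-np.eye(n0), np.zeros((n1, n0))])  # B <= 0, hits x0 only
u_bar = W[:n0] @ m + p_tilde[:n0] + 0.1       # exceeds worst-case excitation
p = B @ u_bar + p_tilde
tau, dt = 1.0, 0.01
x = 0.5 * np.ones(n)
for _ in range(3000):
    x += (dt / tau) * (-x + np.clip(W @ x + p, 0.0, m))
# x[:n0] has decayed to ~0; x[n0:] has settled per its own sub-dynamics
```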

  15. Extensions to Hierarchical Structures
      • Layer dynamics: $\tau_i \dot x_i(t) = -x_i(t) + [W_{i,i} x_i(t) + p_i(t)]_0^m$
      1. Selective activity/stabilization:
         $x_i = \begin{pmatrix} x_i^0 \\ x_i^1 \end{pmatrix}$ ($x_i^0$ to be stabilized, $x_i^1$ arbitrary/active),
         $W_{i,j} = \begin{bmatrix} W_{i,j}^{00} & W_{i,j}^{01} \\ W_{i,j}^{10} & W_{i,j}^{11} \end{bmatrix}$
      2. Chain topology (information-processing pathways):
         $p_i(t) = B_i u_i(t) + W_{i,i-1} x_{i-1}(t) + W_{i,i+1} x_{i+1}(t) + c_i$
      3. Timescale separation: $\tau_1 > \tau_2 > \cdots > \tau_i > \cdots > \tau_N$ (sensory input enters at the bottom layer $N$)
      A simulation sketch of such a chain follows below.
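A minimal rollout of a 3-layer chain with timescale separation (all weights, inputs, and sizes are illustrative assumptions; inhibitory control is omitted):

```python
import numpy as np

# 3-layer chain: tau_1 > tau_2 > tau_3, nearest-neighbor coupling only.
rng = np.random.default_rng(3)
n, N = 3, 3                                    # nodes per layer, # layers
tau = np.array([4.0, 2.0, 1.0])                # slower at the top
W_self = [0.2 * rng.random((n, n)) for _ in range(N)]
W_down = [0.2 * rng.random((n, n)) for _ in range(N)]  # from layer i-1
W_up = [0.2 * rng.random((n, n)) for _ in range(N)]    # from layer i+1
c = [0.2 * np.ones(n) for _ in range(N)]
x = [np.zeros(n) for _ in range(N)]
dt, m = 0.01, 1.0
for _ in range(5000):
    x_old = [xi.copy() for xi in x]
    for i in range(N):
        p = c[i].copy()
        if i > 0:
            p += W_down[i] @ x_old[i - 1]
        if i < N - 1:
            p += W_up[i] @ x_old[i + 1]
        x[i] += (dt / tau[i]) * (-x[i]
                                 + np.clip(W_self[i] @ x_old[i] + p, 0.0, m))
```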

  16. Extensions to Hierarchical Structures - cont'd
      Theorem (Hierarchical Stabilization & Tracking): Assume $\dim(u_i) \ge \dim(x_i^0)$ for all $i$. There exists $u_i(t) = K_i x_i(t) + \bar u_i(t)$, $\forall i$, such that
        $x_i^0(t) \to 0$ (Inhibitory Stabilization), $\forall i$
        $x_i^1(t) \to x_i^{1*}\big(W_{i,i-1}^{11} x_{i-1}^1(t) + c_i^1\big)$ (Tracking), $\forall i$
      as $\tau_i / \tau_{i-1} \to 0$, $\forall i$, if
        $\tau_i \dot x_i^1(t) = -x_i^1(t) + \big[W_{i,i}^{11} x_i^1(t) + W_{i,i+1}^{11}\, x_{i+1}^{1*}\big(W_{i+1,i}^{11} x_i^1(t) + c_{i+1}^1\big) + c_i^1\big]_0^{m_i^1}$
      is GES for all $c_{i+1}^1$ and $c_i^1$

  17. Extensions to Hierarchical Structures - cont'd
      1. Equilibrium maps
         Lemma (Piecewise-Affine Equilibrium Maps): The equilibrium of layer $i$ is given by
           $x_i^*(x_{i-1}) = F_{i,\lambda}\, x_{i-1} + f_{i,\lambda}, \qquad \forall x_{i-1} \in \Psi_{i,\lambda},\ \lambda \in \Lambda_i$
         where $\{F_{i,\lambda}, f_{i,\lambda}, \Psi_{i,\lambda}, \Lambda_i\}$ have recursive expressions
      2. Multi-layer GES
         Theorem (Global Exponential Stability): Let $\bar F_i \triangleq \max_{\lambda \in \Lambda_i} |F_{i,\lambda}|$. If
           $\rho\big(|W_{i,i}^{11}| + |W_{i,i+1}^{11}|\, \bar F_{i+1}\, |W_{i+1,i}^{11}|\big) < 1$
         then $x_i^1(t)$ is GES for all $c_{i+1}^1$ and $c_i^1$.
      3. Timescale separation: a ratio $\tau_{i-1}/\tau_i$ of about $1.5$ is often enough in practice
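A sketch of the multi-layer GES test as a function (names and matrix shapes are assumptions; $\bar F_{i+1}$ is taken as the elementwise maximum of $|F_{i+1,\lambda}|$ over $\lambda$):

```python
import numpy as np

def layer_ges(W_ii, W_up, W_down, F_list):
    # Check rho(|W_ii^11| + |W_{i,i+1}^11| Fbar_{i+1} |W_{i+1,i}^11|) < 1,
    # where Fbar_{i+1} = elementwise max over lambda of |F_{i+1,lambda}|.
    F_bar = np.max(np.abs(np.stack(F_list)), axis=0)
    M = np.abs(W_ii) + np.abs(W_up) @ F_bar @ np.abs(W_down)
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0

ok = layer_ges(np.diag([0.3, 0.2]),
               0.1 * np.ones((2, 2)), 0.1 * np.ones((2, 2)),
               [np.eye(2), 0.5 * np.eye(2)])
print(ok)  # True for these illustrative matrices
```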

  18. Application: Goal-Driven Selective Attention in Rodents
      1. Data: [Rodgers & DeWeese, Neuron, 2014] (two task rules, R1 and R2, alternating over time)
      2. Defining nodes (clustering neurons): PFC, A1, S1, S2
      3. Computing $x(t)$
      4. Defining edges (brain physiology)
      5. Finding edge weights: $\min_\theta d(x_{data}, x_{model})$, with $\theta = [W_{i,j}, b_{i,j}, c_i, \tau_i, x_i(0)]$
      6. Verifying the theoretical conditions:
         • $\tau_1 = 4.70 \gg \tau_2 = 2.33 \gg \tau_3 = 1.07$
         • Under R1: $\rho\big(|W_{2,2}^{11}| + |W_{2,3}^{11}|\, \bar F_3\, |W_{3,2}^{11}|\big) = 0.42 < 1$
         • Under R2: $\rho\big(|W_{2,2}^{11}| + |W_{2,3}^{11}|\, \bar F_3\, |W_{3,2}^{11}|\big) = 0.13 < 1$
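A sketch of fitting step 5: simulate the model for a candidate $\theta$ and minimize a least-squares distance to the recorded rates. The parameterization below keeps only $W$, $\tau$, and $x(0)$, and the optimizer and distance are assumptions, since the talk does not specify them:

```python
import numpy as np
from scipy.optimize import minimize

def simulate(theta, n, n_steps, dt=0.1, m=1.0):
    # Unpack theta = [vec(W), tau, x(0)] and roll out the LT dynamics.
    W = theta[:n * n].reshape(n, n)
    tau = np.abs(theta[n * n:n * n + n]) + 1e-3   # keep timescales positive
    x = theta[n * n + n:].copy()
    traj = np.empty((n_steps, n))
    for k in range(n_steps):
        traj[k] = x
        x = x + (dt / tau) * (-x + np.clip(W @ x, 0.0, m))
    return traj

def loss(theta, x_data, n):
    # d(x_data, x_model) as a sum of squared errors (an assumed choice)
    return np.sum((simulate(theta, n, len(x_data)) - x_data) ** 2)

# Given recorded rates x_data (n_steps x n) and an initial guess theta0:
# res = minimize(loss, theta0, args=(x_data, n), method="Nelder-Mead")
```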

  19. Beyond Equilibrium Attractors: Neural Oscillations
      • Attractor dynamics: dynamics that settle to a stable pattern (manifold)
        + Facilitate analysis
        - Miss transients (unless $x(0)$ is close to the attractor)
      • Common forms:
        1. Equilibrium attractors
           - Isolated equilibria, as in the stabilization/tracking results above
           - Continuum of equilibria (line, ring, plane, ...)
        2. Oscillatory attractors
           - Limit cycles (regular)
           - Chaotic oscillations (irregular/noise-like)

  20. Structural Characterization of Oscillations
      • Network of Wilson-Cowan oscillators (excitatory state $x_{i,1}$, inhibitory state $x_{i,2}$, inter-oscillator coupling $A_{ij}$):
        $\tau_i \dot x_{i,1} = -x_{i,1} + \big[a_i x_{i,1} - b_i x_{i,2} + p_{i,1} + \sum_j A_{ij} x_{j,1}\big]_0^{m_{i,1}}$
        $\tau_i \dot x_{i,2} = -x_{i,2} + \big[c_i x_{i,1} - d_i x_{i,2} + p_{i,2}\big]_0^{m_{i,2}}$
      • Lack of stable equilibria (LoSE) as a proxy for oscillations
      Theorem (Lack of Stable Equilibria): For each oscillator $i$, LoSE holds iff
        $d_i + 2 < a_i$
        $(a_i - 1)(d_i + 1) < b_i c_i$
        $(a_i - 1)\, m_{i,1} < b_i m_{i,2}$
        $\underline{p}_{i,\ell} < p_{i,\ell} < \bar p_{i,\ell}, \quad \ell = 1, 2$
      and, if so, for the full network, LoSE holds iff
        $\exists\, i : p_{i,1} + \sum_j A_{ij} m_{j,1} < \bar p_{i,1}$
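A sketch of the single-oscillator LoSE test (the input bounds $\underline{p}, \bar p$ are omitted because the slide does not give their closed forms; the parameter values are illustrative):

```python
def lose_single(a, b, c, d, m1, m2):
    # Three of the four LoSE conditions from the theorem; the remaining
    # condition p_low < p < p_high needs the bounds' closed-form expressions.
    return (d + 2 < a) and ((a - 1) * (d + 1) < b * c) and ((a - 1) * m1 < b * m2)

print(lose_single(a=3.0, b=4.0, c=3.0, d=0.5, m1=1.0, m2=1.0))  # True
```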

  21. Summary
      In this talk:
      • Derivation: from biophysical (Hodgkin-Huxley) spiking models, through mean-field rate dynamics, to linear-threshold network models
      • Analysis: equilibria and global stability, selective inhibitory stabilization, hierarchical structures, an application to selective attention in rodents, and structural conditions for oscillations
