Two or three things I know about mean field methods in neuroscience


  1. Two or three things I know about mean field methods in neuroscience. Olivier Faugeras, NeuroMathComp Laboratory (INRIA Sophia / ENS Paris / UNSA LJAD). CIRM Workshop on Mean-field methods and multiscale analysis of neuronal populations.

  2. Joint work with ◮ Javier Baladron ◮ Bruno Cessac ◮ Diego Fasoli ◮ Jonathan Touboul

  3-7. The neuronal activity in V1: from Ecker et al., Science 2010
     ◮ Recording neurons in V1 shows that their activity is HIGHLY decorrelated, for synthetic and natural images, as opposed to the current consensus.
     ◮ Is this a network effect?
     ◮ Is this related to the stochastic nature of neuronal computation?

  8. Spin glasses
     ◮ N spins x_i in interaction in a potential U (keeps spin values bounded).
     ◮ Weights of interaction: J_ij. Assume J_ij ≠ J_ji.
     ◮ Weights are i.i.d. N(0, 1).
     ◮ Single spin dynamics (see the sketch after this slide):
         ẋ_i = −∇U(x_i) + (β/√N) ∑_j J_ij x_j + ξ_i,   Law of x_0 = μ_0^⊗N
     ◮ Limit, when N → ∞, of the dynamics?
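
Below is a minimal numerical sketch of this finite-N system, using an Euler-Maruyama scheme with an assumed quartic confining potential U(x) = x⁴/4 and illustrative values of N, β and the step size (none of these choices come from the talk):

    import numpy as np

    # Euler-Maruyama simulation of the N-spin system
    #   dx_i = [-U'(x_i) + (beta / sqrt(N)) * sum_j J_ij x_j] dt + dB_i
    # Assumptions (not from the slides): U(x) = x^4 / 4, x_0 = 0, dt = 1e-3.
    rng = np.random.default_rng(0)
    N, beta, dt, T = 500, 1.0, 1e-3, 5.0
    J = rng.standard_normal((N, N))      # i.i.d. N(0, 1); in general J_ij != J_ji
    x = np.zeros(N)                      # initial condition concentrated at 0

    def grad_U(x):
        return x**3                      # U(x) = x^4 / 4 keeps the spins bounded

    for _ in range(int(T / dt)):
        drift = -grad_U(x) + (beta / np.sqrt(N)) * J @ x
        x += drift * dt + np.sqrt(dt) * rng.standard_normal(N)

    print("empirical mean and variance:", x.mean(), x.var())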

  9. Which limit?
     ◮ Let P^N_β(J) be the law of the solution to the N-spin equations.
     ◮ If we anneal it by taking the expectation over the weights, Q^N_β = 𝔼[P^N_β(J)], we can obtain two theorems.
     Theorem (Ben Arous-Guionnet). The law of the empirical measure μ̂_N = (1/N) ∑_{i=1}^N δ_{x_i} under Q^N_β converges to δ_Q.
     Theorem (Ben Arous-Guionnet). Q is the law of the solution to the following nonlinear stochastic differential equation:
         dx_t = −∇U(x_t) dt + dB_t
         dB_t = dW_t + β² (∫_0^t f(t,s) dB_s) dt
         Law of x_0 = μ_0

  10. Which limit?
     W is a Q-Brownian motion, the function f is given by
         f(t,s) = 𝔼[ G^Q_s G^Q_t exp(−β² ∫_0^t (G^Q_u)² du) ] / 𝔼[ exp(−β² ∫_0^t (G^Q_u)² du) ],
     and G^Q is a centered Gaussian process, independent of Q, and with the same covariance:
         𝔼[G^Q_s G^Q_t] = ∫ x_s x_t dQ(x)
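
The effective noise B is non-Markovian because of the memory term. Below is a small sketch of an Euler scheme for dB_t = dW_t + β² (∫_0^t f(t,s) dB_s) dt; since the true kernel f of the theorem depends on the unknown law Q, f is taken here as an arbitrary user-supplied function, so the snippet only illustrates how the memory term is discretized:

    import numpy as np

    # Euler scheme for the effective noise of the Ben Arous-Guionnet equation:
    #   dB_t = dW_t + beta^2 * (int_0^t f(t, s) dB_s) dt,  B_0 = 0.
    # The true kernel f depends on the unknown law Q; here f is any user-supplied
    # function of (t, s), so this only shows how the memory integral is handled.
    def simulate_effective_noise(f, beta=1.0, T=1.0, n=1000, rng=None):
        rng = rng or np.random.default_rng()
        dt = T / n
        t = np.linspace(0.0, T, n + 1)
        dW = np.sqrt(dt) * rng.standard_normal(n)
        dB = np.zeros(n)
        for k in range(n):
            memory = sum(f(t[k], t[j]) * dB[j] for j in range(k))   # int_0^{t_k} f(t_k, s) dB_s
            dB[k] = dW[k] + beta**2 * memory * dt
        return t, np.concatenate(([0.0], np.cumsum(dB)))

    # Purely illustrative kernel (not the f of the theorem):
    t, B = simulate_effective_noise(lambda t, s: np.exp(-(t - s)), beta=0.5)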

  11. The results of Sompolinsky and Zippelius
     ◮ S.-Z. studied the same spin-glass equation, up to minor details.
     Theorem (Sompolinsky-Zippelius). The annealed mean-field equation in the thermodynamic limit is
         dx_t = −∇U(x_t) dt + Φ^x_t dt,   Law of x_0 = μ_0,
     where Φ^x_t is a Gaussian process with zero mean and whose autocorrelation C writes
         C(t,s) ≡ 𝔼[Φ^x_s Φ^x_t] = δ(t−s) + β² ∫ x_s x_t dQ(x) = δ(t−s) + β² 𝔼[G^Q_s G^Q_t]

  12. Are these two results the same?
     Proposition (Faugeras). If the function f in the Ben Arous-Guionnet theorem is continuous on [0,T]², the stochastic differential equation
         dB_t = dW_t + β² (∫_0^t f(t,s) dB_s) dt,   B_0 = 0,
     has a unique solution, defined for all t ∈ [0,T] by
         dB_t = dW_t + (∫_0^t Γ(t,s) dW_s) dt,
     where
         Γ(t,s) = ∑_{i=1}^∞ g_i(t,s),   g_1 = β² f,   g_{n+1}(t,s) = β² ∫_s^t f(t,τ) g_n(τ,s) dτ,   n ≥ 1.
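
The series defining Γ can be approximated on a time grid by iterating the kernel recursion and truncating after a fixed number of terms. The sketch below assumes a user-supplied kernel f (the actual f of the theorem is not computed here):

    import numpy as np

    # Approximate Gamma(t, s) = sum_i g_i(t, s) on a uniform grid, with
    #   g_1 = beta^2 * f,   g_{n+1}(t, s) = beta^2 * int_s^t f(t, tau) g_n(tau, s) dtau.
    # f is any user-supplied kernel; the series is truncated after n_terms terms.
    def resolvent_kernel(f, beta=1.0, T=1.0, n=200, n_terms=30):
        t = np.linspace(0.0, T, n + 1)
        dt = T / n
        F = f(t[:, None], t[None, :])            # F[i, j] = f(t_i, t_j)
        lower = np.tril(np.ones_like(F))         # keep only s <= tau <= t in the integrals
        g = beta**2 * F                          # g_1 on the grid
        Gamma = g.copy()
        for _ in range(n_terms - 1):
            # g_{n+1}(t_i, s_j) ~ beta^2 * sum_k f(t_i, t_k) g_n(t_k, s_j) dt over s_j <= t_k <= t_i
            g = beta**2 * ((F * lower) @ (g * lower)) * dt
            Gamma += g
        return t, Gamma * lower                  # Gamma(t, s) is only used for s <= t

    # Purely illustrative kernel:
    t, Gamma = resolvent_kernel(lambda t, s: np.exp(-np.abs(t - s)), beta=0.8)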

  13. Are these two results the same?
     ◮ Rewrite the Ben Arous-Guionnet mean-field equation as
         dx_t = −∇U(x_t) dt + Ψ^x_t dt,   Law of x_0 = μ_0,
       where
         Ψ^x_t = dW_t/dt + ∫_0^t Γ(t,u) dW_u.
     ◮ The process dW_t/dt is interpreted as "Gaussian white noise".
     ◮ Ψ^x is a Gaussian process with zero mean and autocorrelation
         𝔼[Ψ^x_t Ψ^x_s] = δ(t−s) + ∫_0^{t∧s} Γ(t,u) Γ(s,u) du.
     ◮ Since in general
         ∫_0^{t∧s} Γ(t,u) Γ(s,u) du ≠ β² 𝔼[G^Q_s G^Q_t],
       the two results may be contradictory!

  14. From spin glasses to firing rate neurons
     ◮ In 1988, Sompolinsky, Crisanti and Sommers generalized the spin glass equations to firing rate neurons:
         ẋ_i = −∇U(x_i) + (β/√N) ∑_j J_ij S(x_j) + ξ_i,   Law of x_0 = μ_0^⊗N
     ◮ S is the "usual" sigmoid function.
     ◮ They proposed the following mean-field equation:
         dx_t = −∇U(x_t) dt + Φ^x_t dt + I dt,   Law of x_0 = μ_0,
       where Φ^x_t is a zero-mean Gaussian process whose autocorrelation C writes
         C(t,s) ≡ 𝔼[Φ^x_s Φ^x_t] = δ(t−s) + β² ∫ S(x_s) S(x_t) dQ(x) = δ(t−s) + β² 𝔼[S(G^Q_s) S(G^Q_t)]

  15. From spin glasses to firing rate neurons
     ◮ In 2009, Faugeras, Touboul and Cessac generalized the S.-C.-S. equation to the case of several populations.
     ◮ The weights are i.i.d. and J_ij ≈ N(J̄_αβ/N_β, σ_αβ/√N_β) (see the sampling sketch after this slide).
     ◮ They proposed an annealed mean-field equation inspired by that of S.-C.-S. and proved that the equation has a unique solution in finite time.
     ◮ The solution is Gaussian, but non-Markov.
     ◮ The mean satisfies a first-order differential equation.
     ◮ The covariance function satisfies an integral equation.
     ◮ Both equations are nonlinearly coupled.
     ◮ Studying the solution turned out to be a formidable task (see part of Geoffroy Hermann's PhD thesis).
     ◮ From the discussion on spin glasses, one may wonder whether this equation is the "correct" one.
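
For concreteness, here is one way to sample such block-structured weights for a network with several populations. The population sizes and the J̄_αβ, σ_αβ tables are illustrative, and the second parameter of N(·,·) is read here as a standard deviation:

    import numpy as np

    # Sampling multi-population synaptic weights: neuron j in population beta projects
    # onto neuron i in population alpha with mean Jbar[alpha, beta] / N_beta and
    # standard deviation sigma[alpha, beta] / sqrt(N_beta) (read off the slide's N(.,.)).
    # Population sizes and the Jbar / sigma tables below are illustrative.
    def sample_weights(sizes, Jbar, sigma, rng=None):
        rng = rng or np.random.default_rng()
        pop = np.repeat(np.arange(len(sizes)), sizes)      # population label of each neuron
        n = pop.size
        J = np.empty((n, n))
        for a in range(len(sizes)):
            for b in range(len(sizes)):
                rows, cols = pop == a, pop == b
                Nb = sizes[b]
                J[np.ix_(rows, cols)] = rng.normal(
                    Jbar[a, b] / Nb, sigma[a, b] / np.sqrt(Nb),
                    size=(rows.sum(), cols.sum()))
        return J

    J = sample_weights(sizes=[400, 100],
                       Jbar=np.array([[1.0, -2.0], [1.5, -0.5]]),
                       sigma=np.array([[0.5, 0.5], [0.5, 0.5]]))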

  16-17. From spin glasses to firing rate neurons
     The mean process is coupled through the variance C(t,t):
         dμ(t)/dt = −μ(t)/τ + J̄ ∫_ℝ S(√C(t,t) x + μ(t)) Dx + I(t)
     (Dx denotes the standard Gaussian measure.)
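
The Gaussian integral in the mean equation can be evaluated with Gauss-Hermite quadrature. The sketch below assumes S = tanh, holds the variance C fixed, and uses illustrative values of τ, J̄ and I; it only shows how one Euler step of the mean equation can be computed:

    import numpy as np

    # Evaluate int_R S(sqrt(C) x + mu) Dx (Dx = standard Gaussian measure) with
    # Gauss-Hermite quadrature, and take one Euler step of
    #   d mu / dt = -mu / tau + Jbar * int_R S(sqrt(C(t,t)) x + mu(t)) Dx + I(t).
    # S = tanh, and tau, Jbar, I, C are illustrative choices (C is held fixed here,
    # whereas in the model it evolves together with mu).
    nodes, weights = np.polynomial.hermite_e.hermegauss(40)   # probabilists' Hermite
    weights = weights / np.sqrt(2.0 * np.pi)                  # weights now sum to 1

    def gaussian_mean_of_S(mu, C, S=np.tanh):
        return np.sum(weights * S(np.sqrt(C) * nodes + mu))

    def step_mean(mu, C, dt, tau=1.0, Jbar=1.0, I=0.0, S=np.tanh):
        return mu + dt * (-mu / tau + Jbar * gaussian_mean_of_S(mu, C, S) + I)

    mu, C, dt = 0.0, 0.5, 1e-2
    for _ in range(500):
        mu = step_mean(mu, C, dt, I=0.2)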

  18-20. From spin glasses to firing rate neurons
     The mean is coupled to the non-Markovian covariance:
         C(t,s) = e^{−(t+s)/τ} [ C(0,0) + (σ²_ext/2)(e^{2(t∧s)/τ} − 1) + J̄ ∫_0^t ∫_0^s e^{(u+v)/τ} Δ(u,v) du dv ],
     where
         Δ(t,s) = 𝔼[S(x_t) S(x_s)].
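
Given a table of Δ on a time grid, the covariance formula can be evaluated numerically. The sketch below performs only that inner step (in the full mean-field problem Δ itself depends on the unknown pair (μ, C), so this would sit inside a fixed-point iteration); the values of τ, σ_ext and J̄ are illustrative:

    import numpy as np

    # Evaluate the covariance formula above on a uniform grid, given the initial
    # variance C(0,0) and a table Delta[u, v] approximating E[S(x_u) S(x_v)].
    def covariance_from_delta(Delta, C00, tau=1.0, sigma_ext=0.3, Jbar=1.0, T=1.0):
        n = Delta.shape[0] - 1
        dt = T / n
        t = np.linspace(0.0, T, n + 1)
        E = np.exp(t / tau)
        # int_0^t int_0^s e^{(u+v)/tau} Delta(u, v) du dv, via cumulative sums
        inner = np.cumsum(np.cumsum(E[:, None] * E[None, :] * Delta, axis=0), axis=1) * dt**2
        ts = np.minimum.outer(t, t)                      # t ^ s
        return np.exp(-np.add.outer(t, t) / tau) * (
            C00
            + 0.5 * sigma_ext**2 * (np.exp(2.0 * ts / tau) - 1.0)
            + Jbar * inner)

    # Illustrative call with a constant Delta (as if S(x) were frozen):
    n = 200
    C = covariance_from_delta(np.full((n + 1, n + 1), 0.25), C00=0.1)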

  21. Questions, open problems
     ◮ The neuron models are deceptively simple: can we do better?
     ◮ The synaptic connection models are deceptively simple: can we improve them?
     ◮ The completely connected graph model is too restrictive: can we develop a theory for different graphs?
     ◮ The i.i.d. model for the synaptic weights is not compatible with biological evidence: can we include correlations?

  22-24. Networks of continuous spiking neurons
     ◮ Hodgkin-Huxley model or one of its 2D reductions.
     ◮ Chemical and electric noisy synapses (conductance-based models).
     ◮ Synaptic weights are dynamically changing over time.

  25. FitzHugh-Nagumo model
     Stochastic differential equation (SDE):
         dV_t = ( V_t − V_t³/3 − w_t + I_ext(t) ) dt + σ_ext dW_t
         dw_t = a ( b V_t − w_t ) dt
     Takes into account external current noise.
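
A minimal Euler-Maruyama integration of this stochastic FitzHugh-Nagumo model; the values of a, b, σ_ext and the constant external current are illustrative:

    import numpy as np

    # Euler-Maruyama integration of the stochastic FitzHugh-Nagumo model above.
    # Parameter values are illustrative, not taken from the talk.
    rng = np.random.default_rng(2)
    a, b, sigma_ext = 0.08, 0.8, 0.3
    I_ext = lambda t: 0.5                 # constant external current
    dt, T = 1e-3, 100.0
    V, w = -1.0, -0.5

    V_trace = []
    for k in range(int(T / dt)):
        t = k * dt
        dV = (V - V**3 / 3.0 - w + I_ext(t)) * dt + sigma_ext * np.sqrt(dt) * rng.standard_normal()
        dw = a * (b * V - w) * dt
        V, w = V + dV, w + dw
        V_trace.append(V)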

  26. Synapses
     Synaptic current from the j-th to the i-th neuron:
         I^syn_ij = g_ij(t) (V^i − V^i_rev)
     Chemical conductance:
         g_ij(t) = J_ij(t) y^j(t)
     The function y denotes the fraction of open channels:
         ẏ^j(t) = a_r S(V^j)(1 − y^j(t)) − a_d y^j(t)
     The function S:
         S(V^j) = T_max / (1 + e^{−λ(V^j − V_T)})

  27. Synapses
     Taking noise into account:
         dy^j_t = ( a_r S(V^j_t)(1 − y^j_t) − a_d y^j_t ) dt + σ(V^j, y^j) dW^{j,y}_t
     Keeping y^j between 0 and 1:
         σ(V^j, y^j) = √( a_r S(V^j)(1 − y^j) + a_d y^j ) χ(y^j)
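
A sketch of one noisy Euler-Maruyama update of the gating variable y. The cutoff function χ is not specified on the slide, so a smooth bump vanishing at y = 0 and y = 1 is assumed here, and all rate and sigmoid parameters are illustrative:

    import numpy as np

    # One Euler-Maruyama step of
    #   dy = [a_r S(V)(1 - y) - a_d y] dt + sigma(V, y) dW,
    #   sigma(V, y) = sqrt(a_r S(V)(1 - y) + a_d y) * chi(y).
    # chi is an assumed cutoff (the slide does not define it); parameters are illustrative.
    a_r, a_d, T_max, lam, V_T = 1.1, 0.19, 1.0, 0.2, 2.0

    def S(V):
        return T_max / (1.0 + np.exp(-lam * (V - V_T)))

    def chi(y):
        return np.clip(4.0 * y * (1.0 - y), 0.0, 1.0)       # vanishes at y = 0 and y = 1

    def sigma_y(V, y):
        return np.sqrt(a_r * S(V) * (1.0 - y) + a_d * y) * chi(y)

    def step_y(y, V, dt, rng):
        drift = a_r * S(V) * (1.0 - y) - a_d * y
        noise = sigma_y(V, y) * np.sqrt(dt) * rng.standard_normal(np.shape(y))
        return np.clip(y + drift * dt + noise, 0.0, 1.0)    # clipping as a numerical safeguard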

  28-29. Synaptic weights
     The synaptic weights are affected by dynamical random variations:
         J_ij(t) = J̄/N + (σ_J/N) ξ^i(t) = J̄/N + (σ_J/N) dB^i_t/dt
     Advantage: simplicity.
     Disadvantage: an increase of the noise level increases the probability that the sign of J_ij(t) is different from that of J̄. It can be fixed (technical).
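
In an Euler scheme this weight model requires only one Brownian increment per postsynaptic neuron; a small sketch with illustrative values of J̄ and σ_J:

    import numpy as np

    # Sampling the fluctuating synaptic weights over one Euler step of size dt:
    #   J_ij(t) dt = (Jbar / N) dt + (sigma_J / N) dB^i_t,
    # with a single Brownian motion per postsynaptic neuron i (as on the slide),
    # so J_ij(t) dt does not depend on j.  Values are illustrative.
    rng = np.random.default_rng(3)
    N, Jbar, sigma_J, dt = 200, 1.0, 0.5, 1e-3
    dB = np.sqrt(dt) * rng.standard_normal(N)       # dB^i_t, one per neuron i
    J_dt = Jbar / N * dt + (sigma_J / N) * dB       # entry i is J_ij(t) dt, for every j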

  30. Putting everything together
     Each neuron is represented by a state vector of dimension 3:
         dV^i_t = ( V^i_t − (V^i_t)³/3 − w^i_t + I(t) ) dt
                  + ( (1/N) ∑_j J̄ (V^i_t − V_rev) y^j_t ) dt
                  + ( (1/N) ∑_j σ_J (V^i_t − V_rev) y^j_t ) dB^i_t
                  + σ_ext dW^i_t
         dw^i_t = a ( b V^i_t − w^i_t ) dt
         dy^i_t = ( a_r S(V^i_t)(1 − y^i_t) − a_d y^i_t ) dt + σ(V^i_t, y^i_t) dW^{i,y}_t
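
Putting the pieces together, a compact Euler-Maruyama sketch of the full network follows. It uses the sigmoid of slide 26, the same assumed cutoff χ on y as above, the signs exactly as written on the slide, and illustrative parameter values throughout:

    import numpy as np

    # Euler-Maruyama simulation of the full network: per neuron, a FitzHugh-Nagumo
    # pair (V, w) plus a synaptic gating variable y, coupled through the mean of y.
    # All parameter values are illustrative; y is clipped to [0, 1] as a safeguard.
    rng = np.random.default_rng(4)
    N = 200
    a, b = 0.08, 0.8
    Jbar, sigma_J, V_rev = 1.0, 0.2, 1.0
    a_r, a_d, T_max, lam, V_T = 1.1, 0.19, 1.0, 0.2, 2.0
    sigma_ext, I = 0.3, 0.4
    dt, T = 1e-3, 50.0

    def S(V):
        return T_max / (1.0 + np.exp(-lam * (V - V_T)))

    def chi(y):
        return np.clip(4.0 * y * (1.0 - y), 0.0, 1.0)       # assumed cutoff, as above

    V = -1.0 + 0.1 * rng.standard_normal(N)
    w = np.zeros(N)
    y = 0.1 * np.ones(N)

    for _ in range(int(T / dt)):
        y_mean = y.mean()                                   # (1/N) * sum_j y^j_t
        dB = np.sqrt(dt) * rng.standard_normal(N)           # weight noise dB^i_t
        dW = np.sqrt(dt) * rng.standard_normal(N)           # external noise dW^i_t
        dWy = np.sqrt(dt) * rng.standard_normal(N)          # synaptic noise dW^{i,y}_t

        dV = (V - V**3 / 3.0 - w + I) * dt \
             + Jbar * (V - V_rev) * y_mean * dt \
             + sigma_J * (V - V_rev) * y_mean * dB \
             + sigma_ext * dW
        dw = a * (b * V - w) * dt
        dy = (a_r * S(V) * (1.0 - y) - a_d * y) * dt \
             + np.sqrt(a_r * S(V) * (1.0 - y) + a_d * y) * chi(y) * dWy

        V, w, y = V + dV, w + dw, np.clip(y + dy, 0.0, 1.0)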

  31. Putting everything together
     The full dynamics of neuron i can be described compactly by
         dX^i_t = f(t, X^i_t) dt + g(t, X^i_t) (dW^i_t + dW^{i,y}_t)
                  + (1/N) ∑_j b(X^i_t, X^j_t) dt + (1/N) ∑_j β(X^i_t, X^j_t) dB^{i,j}_t
     This very general equation applies to all continuous spiking neuron models.
