Theoretical neuroscience: From single neuron to network dynamics

  1. Theoretical neuroscience: From single neuron to network dynamics Nicolas Brunel

  2. Outline • Single neuron stochastic dynamics • Network dynamics • Learning and memory

  3. Single neurons in vivo seem highly stochastic

  4. Single neuron stochastic dynamics: the LIF model
  • LIF neuron with deterministic + white-noise inputs:
    τ_m dV/dt = −V + µ(t) + σ(t) √τ_m η(t)
    Spikes are emitted when V = V_t; the neuron is then reset to V_r.
  • P(V, t) is described by the Fokker-Planck equation
    τ_m ∂P(V,t)/∂t = (σ²(t)/2) ∂²P(V,t)/∂V² + ∂/∂V [(V − µ(t)) P(V,t)]
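A minimal simulation sketch of this stochastic LIF model (Euler-Maruyama integration; the parameter values below are illustrative assumptions, not taken from the slides):

```python
import numpy as np

def simulate_lif(mu, sigma, tau_m=0.02, V_t=1.0, V_r=0.0,
                 dt=1e-4, T=10.0, seed=0):
    """Euler-Maruyama integration of tau_m dV/dt = -V + mu + sigma*sqrt(tau_m)*eta(t).

    Spikes when V >= V_t, then reset to V_r. Returns spike times (units arbitrary).
    """
    rng = np.random.default_rng(seed)
    V, spikes = V_r, []
    for step in range(int(T / dt)):
        # white noise eta(t): its integral over a step of size dt scales as sqrt(dt)
        noise = sigma * np.sqrt(tau_m * dt) * rng.standard_normal()
        V += (dt * (-V + mu) + noise) / tau_m
        if V >= V_t:
            spikes.append(step * dt)
            V = V_r
    return np.array(spikes)

spikes = simulate_lif(mu=0.8, sigma=0.5)
print("mean rate ~", len(spikes) / 10.0, "spikes/s")
```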

  5. Single neuron stochastic dynamics: the LIF model
  • P(V, t) is described by the Fokker-Planck equation
    τ_m ∂P(V,t)/∂t = (σ²(t)/2) ∂²P(V,t)/∂V² + ∂/∂V [(V − µ(t)) P(V,t)]
  • Boundary conditions:
    – At threshold V_t: absorbing b.c., and the probability flux at V_t equals the firing probability ν(t):
      P(V_t, t) = 0,   ∂P/∂V (V_t, t) = −2 ν(t) τ_m / σ²(t)
    – At the reset potential V_r: what comes out at V_t must come back at V_r:
      P(V_r⁺, t) = P(V_r⁻, t),   ∂P/∂V (V_r⁺, t) − ∂P/∂V (V_r⁻, t) = −2 ν(t) τ_m / σ²(t)
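A rough finite-difference sketch of integrating this Fokker-Planck equation, with the absorbing boundary at V_t and reinjection of the emitted flux at V_r (grid, time step, and parameters are illustrative assumptions):

```python
import numpy as np

# Illustrative parameters (not from the slides)
tau_m, mu, sigma = 0.02, 0.8, 0.5
V_t, V_r = 1.0, 0.0
V_min, nV = -2.0, 200
dV = (V_t - V_min) / nV
V = V_min + dV * np.arange(nV + 1)          # V[-1] is the threshold V_t
i_reset = int(round((V_r - V_min) / dV))    # grid index of the reset potential

P = np.exp(-(V - V_r) ** 2 / 0.02)          # arbitrary initial density
P[-1] = 0.0
P /= P.sum() * dV

dt = 0.2 * tau_m * dV ** 2 / sigma ** 2     # explicit scheme: keep dt small for stability
for _ in range(50000):
    drift = (V - mu) * P
    dPdt = (sigma ** 2 / 2) * np.gradient(np.gradient(P, dV), dV) + np.gradient(drift, dV)
    P = P + dt * dPdt / tau_m
    P[-1] = 0.0                             # absorbing boundary at threshold
    # flux through threshold = instantaneous firing rate nu(t)
    nu = -(sigma ** 2 / (2 * tau_m)) * (P[-1] - P[-2]) / dV
    P[i_reset] += nu * dt / dV              # reinject the emitted probability at V_r
    P[0] = 0.0                              # lower boundary placed far below threshold

print("stationary firing rate ~", nu, "spikes/s")
```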

  6. LIF model: stationary inputs µ(t) = µ_0, σ(t) = σ_0
  P_0(V) = (2 ν_0 τ_m / σ) exp(−(V − µ_0)²/σ²) ∫_{(V−µ_0)/σ}^{(V_t−µ_0)/σ} exp(u²) Θ(u − (V_r−µ_0)/σ) du
  1/(ν_0 τ_m) = √π ∫_{(V_r−µ_0)/σ}^{(V_t−µ_0)/σ} exp(u²) [1 + erf(u)] du
  CV² = 2π (ν_0 τ_m)² ∫_{(V_r−µ_0)/σ}^{(V_t−µ_0)/σ} e^{x²} dx ∫_{−∞}^{x} e^{y²} (1 + erf y)² dy
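A small numerical check of these stationary formulas (using scipy quadrature; the parameter values are the same illustrative assumptions as in the simulation sketch above):

```python
import numpy as np
from scipy import integrate
from scipy.special import erfcx   # erfcx(x) = exp(x^2)*erfc(x), numerically stable

# Illustrative parameters (not from the slides)
tau_m, mu0, sigma0, V_t, V_r = 0.02, 0.8, 0.5, 1.0, 0.0
yt = (V_t - mu0) / sigma0   # rescaled threshold
yr = (V_r - mu0) / sigma0   # rescaled reset

# 1/(nu_0 tau_m) = sqrt(pi) * int_{yr}^{yt} exp(u^2)(1+erf(u)) du, with exp(u^2)(1+erf(u)) = erfcx(-u)
inner, _ = integrate.quad(lambda u: np.sqrt(np.pi) * erfcx(-u), yr, yt)
nu0 = 1.0 / (tau_m * inner)

# CV^2 = 2*pi*(nu0*tau_m)^2 * int_{yr}^{yt} dx e^{x^2} int_{-inf}^{x} dy e^{y^2}(1+erf y)^2,
# with the inner integrand rewritten as erfcx(-y)^2 * exp(-y^2) for stability
def outer_integrand(x):
    inner_val, _ = integrate.quad(lambda y: erfcx(-y)**2 * np.exp(-y**2), -np.inf, x)
    return np.exp(x**2) * inner_val

outer, _ = integrate.quad(outer_integrand, yr, yt)
cv2 = 2 * np.pi * (nu0 * tau_m)**2 * outer

print(f"nu_0 ~ {nu0:.1f} spikes/s,  CV ~ {np.sqrt(cv2):.2f}")
```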

  7. Time-dependent inputs • Given an arbitrary time-dependent input (µ(t), σ(t)), what is the instantaneous firing rate ν(t)?

  8. Computing the linear firing rate response
  • Strategy:
    – start with small time-dependent perturbations around the means, µ(t) = µ_0 + εµ_1(t), σ(t) = σ_0 + εσ_1(t)
    – linearize the FP equation and obtain the linear response of P = P_0 + εP_1(t) and ν = ν_0 + εν_1(t) (solution of an inhomogeneous 2nd-order ODE):
      ν_1(t) = ∫^t [R_µ(t − t′) µ_1(t′) + R_σ(t − t′) σ_1(t′)] dt′
      ν̃_1(ω) = R_µ(ω) µ̃_1(ω) + R_σ(ω) σ̃_1(ω)
    – R_µ and R_σ can be computed explicitly in terms of confluent hypergeometric functions
    – go to higher orders in ε ...
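As an illustration, ν_1(t) can also be estimated directly by simulating a large population of independent LIF neurons driven by µ(t) = µ_0 + εµ_1 sin(ωt) and reading off the modulation of the population rate; a rough Monte Carlo sketch (all parameter values are illustrative assumptions):

```python
import numpy as np

# Illustrative parameters (not from the slides)
tau_m, V_t, V_r = 0.02, 1.0, 0.0
mu0, sigma0 = 0.8, 0.5
eps, f_mod = 0.05, 20.0            # small sinusoidal modulation of mu at 20 Hz
dt, T, n_neurons = 1e-4, 2.0, 5000 # T is an integer number of modulation periods

rng = np.random.default_rng(1)
t = np.arange(0, T, dt)
mu_t = mu0 + eps * np.sin(2 * np.pi * f_mod * t)

V = np.full(n_neurons, V_r)
rate = np.zeros(len(t))            # instantaneous population rate nu(t)
for i, mu in enumerate(mu_t):
    noise = sigma0 * np.sqrt(tau_m * dt) * rng.standard_normal(n_neurons)
    V += (dt * (-V + mu) + noise) / tau_m
    spiked = V >= V_t
    V[spiked] = V_r
    rate[i] = spiked.mean() / dt   # spikes per neuron per unit time

# Fourier component of nu(t) at the modulation frequency ~ eps * |R_mu(omega)|
# (the initial transient is ignored for simplicity in this sketch)
nu1 = 2 * np.abs(np.mean(rate * np.exp(-2j * np.pi * f_mod * t)))
print("estimated |R_mu| at", f_mod, "Hz:", nu1 / eps)
```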

  9. LIF model: linear rate response R_µ(ω) (changes in µ)
  • High-frequency behavior: R_µ(ω) ∼ ν_0 / (σ_0 √(2iωτ_m))
  • Translates into a √t initial response for step currents.

  10. More realistic models (and their high-ω behavior)
  • Colored noise inputs:
    τ_m dV/dt = −V + µ(t) + σ(t) W
    τ_s dW/dt = −W + √τ_m η(t)
    High-ω behavior: R_µ(ω) ∼ √(τ_s/τ_m)
  • More realistic spike generation:
    τ_m dV/dt = −V + F(V) + µ(t) + σ(t) √τ_m η(t)
    A spike is emitted when V → ∞; the neuron is then reset at V_r.
    – EIF: F(V) = Δ_T exp((V − V_T)/Δ_T),  R_µ(ω) ∼ 1/ω
    – QIF: F(V) ∼ V²,  R_µ(ω) ∼ 1/ω²
    – PIF: F(V) ∼ V^α,  R_µ(ω) ∼ 1/ω^(α/(α−1))
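A minimal sketch of the EIF variant, with a finite numerical cutoff standing in for V → ∞ (parameter values are illustrative assumptions):

```python
import numpy as np

def simulate_eif(mu, sigma, tau_m=0.02, V_T=1.0, Delta_T=0.1,
                 V_r=0.0, V_cut=3.0, dt=1e-4, T=10.0, seed=0):
    """EIF: tau_m dV/dt = -V + Delta_T*exp((V-V_T)/Delta_T) + mu + sigma*sqrt(tau_m)*eta(t).

    A spike is registered when V crosses V_cut (stand-in for V -> infinity),
    then V is reset to V_r. Returns spike times.
    """
    rng = np.random.default_rng(seed)
    V, spikes = V_r, []
    for step in range(int(T / dt)):
        F = Delta_T * np.exp((V - V_T) / Delta_T)   # exponential spike-generation current
        noise = sigma * np.sqrt(tau_m * dt) * rng.standard_normal()
        V += (dt * (-V + F + mu) + noise) / tau_m
        if V >= V_cut:
            spikes.append(step * dt)
            V = V_r
    return np.array(spikes)

print("EIF rate ~", len(simulate_eif(mu=0.7, sigma=0.5)) / 10.0, "spikes/s")
```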

  11. Conclusions
  • In simple spiking neuron models, the response of the instantaneous firing rate can be much faster than the response of the membrane potential;
  • EIF model: fits pyramidal-cell data well and allows a quantitative understanding of the factors controlling the speed of the firing-rate response;
  • The cut-off frequency of real neurons is very high (∼200 Hz or higher) ⇒ allows a very fast population response to time-dependent inputs;
  • The EIF can be mapped to both LNP and Wilson-Cowan-type firing-rate models, with a time constant that depends on intrinsic parameters of the cell and on the instantaneous rate itself.

  12. Local networks in cerebral cortex
  • Size: ∼1 cubic millimeter
  • Total number of cells: ∼100,000
  • Types of cells:
    – pyramidal cells - excitatory (80%)
    – interneurons - inhibitory (20%)
  • Connection probability: ∼10%
  • Synapses/cell: ∼10,000 (total ∼10⁹ synapses/mm³)
  • Each synapse has a small effect: depolarization/hyperpolarization ∼1-10% of threshold.
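These rounded figures are mutually consistent, as a quick back-of-the-envelope check shows (assuming most inputs are local and each connected pair is counted once):

```python
n_cells = 100_000          # cells per mm^3
syn_per_cell = 10_000      # synapses received per cell
total_syn = n_cells * syn_per_cell
p_connect = syn_per_cell / n_cells   # one synapse per connected pair, all partners local
print(f"total synapses/mm^3 ~ {total_syn:.0e}, connection probability ~ {p_connect:.0%}")
# -> total synapses/mm^3 ~ 1e+09, connection probability ~ 10%
```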

  13. Randomly connected network of LIFs
  • N neurons. Each neuron receives K < N randomly chosen connections from other neurons. Couplings between connected neurons have strength J (J < 0: inhibitory).
  • Neurons = leaky integrate-and-fire:
    τ_m dV_i(t)/dt = −V_i + I_i
    Threshold V_t, reset V_r.
  • Total input of a neuron i at time t:
    I_i(t) = µ_ext + J Σ_j c_ij Σ_k S(t − t_j^k) + σ_ext √τ_m η_i(t)
    where S(t) describes the time course of the PSCs, t_j^k is the time of the k-th spike of neuron j, and the c_ij are chosen randomly such that Σ_j c_ij = K for all i.
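A compact simulation sketch of such a sparse, inhibition-dominated LIF network, using delta-function PSCs instead of an explicit S(t) and illustrative parameter values (loosely in the spirit of the model, not the exact simulation behind the slides):

```python
import numpy as np

# Illustrative parameters (not from the slides)
N, K = 1000, 100                 # neurons, inputs per neuron
J = -0.02                        # inhibitory coupling (fraction of threshold per spike)
tau_m, V_t, V_r = 0.02, 1.0, 0.0
mu_ext, sigma_ext = 1.2, 0.2     # suprathreshold external drive, balanced by recurrent inhibition
dt, T = 1e-4, 2.0
rng = np.random.default_rng(0)

# c_ij: each neuron i receives exactly K randomly chosen presynaptic partners
pre = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])

V = rng.uniform(V_r, V_t, size=N)
spike_counts = np.zeros(N)
spiked = np.zeros(N, dtype=bool)
for step in range(int(T / dt)):
    noise = sigma_ext * np.sqrt(tau_m * dt) * rng.standard_normal(N)
    # delta-PSC: each presynaptic spike from the previous step causes a jump J
    # (one time step plays the role of the transmission delay D)
    V += (dt * (-V + mu_ext) + noise) / tau_m + J * spiked[pre].sum(axis=1)
    spiked = V >= V_t
    V[spiked] = V_r
    spike_counts += spiked

rates = spike_counts / T
print(f"mean rate ~ {rates.mean():.1f} spikes/s, "
      f"CV of rates across neurons ~ {rates.std() / rates.mean():.2f}")
```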

  14. Analytical description of the irregular state
  • If neurons are firing approximately as Poisson processes, and the connection probability is small (K/N ≪ 1), then the recurrent inputs to a neuron can be approximated as
    I_i(t) = µ_ext + J K τ ν(t − D) + √(σ²_ext + J² K ν(t − D) τ) √τ η_i(t)
    where the η_i(t) are uncorrelated white noises.
  • We can use again the Fokker-Planck formalism,
    τ ∂P/∂t = (σ²(t)/2) ∂²P/∂V² + ∂/∂V [(V − µ(t)) P],
    where
    – µ(t) = average input (external − recurrent inhibition):
      µ(t) = µ_ext + J K τ ν(t − D)
    – σ(t) = 'intrinsic' noise due to recurrent interactions:
      σ²(t) = σ²_ext + J² K ν(t − D) τ
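In code form, this diffusion approximation is simply a mapping from the (delayed) population rate to an effective mean and standard deviation of the input; a sketch with hypothetical parameter names:

```python
import numpy as np

def effective_input(nu, mu_ext, sigma_ext, J, K, tau):
    """Diffusion approximation of the recurrent input to one neuron.

    nu: presynaptic population rate (delayed by D in the full model).
    Returns (mu, sigma) of the equivalent white-noise input.
    """
    mu = mu_ext + J * K * tau * nu               # mean drive: external + recurrent (J < 0: inhibition)
    sigma2 = sigma_ext**2 + J**2 * K * tau * nu  # variance: external noise + recurrent shot noise
    return mu, np.sqrt(sigma2)

# example with the illustrative parameters of the network sketch above
print(effective_input(nu=10.0, mu_ext=1.2, sigma_ext=0.2, J=-0.02, K=100, tau=0.02))
```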

  15. Asynchronous state, linear stability analysis
  1. Asynchronous state (constant instantaneous firing rate):
     1/(ν_0 τ_m) = √π ∫_{(V_r−µ_0)/σ_0}^{(V_t−µ_0)/σ_0} exp(u²) [1 + erf(u)] du
     µ_0 = µ_ext + K J ν_0 τ_m
     σ_0² = σ²_ext + K J² ν_0 τ_m
  2. Linear stability analysis:
     P(V, t) = P_0(V) + εP_1(V, λ) exp(λt)
     ν(t) = ν_0 + εν_1(λ) exp(λt)
     ... ⇒ obtain eigenvalues λ
  3. Instabilities of the asynchronous state occur when Re(λ) = 0;
  4. Weakly non-linear analysis: behavior beyond the bifurcation point;
  5. Finite-size effects.
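A sketch of solving the self-consistency condition in step 1 numerically, reusing the stationary-rate integral of slide 6 (the parameter values and the root-finding bracket are illustrative assumptions):

```python
import numpy as np
from scipy import integrate, optimize
from scipy.special import erfcx

# Illustrative parameters (not from the slides)
tau_m, V_t, V_r = 0.02, 1.0, 0.0
mu_ext, sigma_ext, J, K = 1.2, 0.2, -0.02, 100

def rate_for_input(mu0, sigma0):
    """Stationary LIF rate for constant input (mu0, sigma0) -- the slide 6 formula."""
    yr, yt = (V_r - mu0) / sigma0, (V_t - mu0) / sigma0
    integral, _ = integrate.quad(lambda u: np.sqrt(np.pi) * erfcx(-u), yr, yt)
    return 1.0 / (tau_m * integral)

def self_consistency(nu0):
    """Zero when the output rate equals the rate assumed in the recurrent input."""
    mu0 = mu_ext + K * J * nu0 * tau_m
    sigma0 = np.sqrt(sigma_ext**2 + K * J**2 * nu0 * tau_m)
    return rate_for_input(mu0, sigma0) - nu0

# bracket assumed to contain the fixed point for these parameters
nu0 = optimize.brentq(self_consistency, 0.1, 100.0)
print(f"self-consistent asynchronous rate nu_0 ~ {nu0:.1f} spikes/s")
```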

  16. Randomly connected E-I networks [slide figure: schematic of coupled excitatory and inhibitory populations]

  17. Conclusions - network dynamics
  • Network dynamics can be studied analytically using the Fokker-Planck formalism;
  • Inhibition-dominated networks settle into highly irregular states that can be either asynchronous or synchronous;
  • Such irregular states reproduce some of the main experimentally observed features of spontaneous activity in cortex in vivo:
    – highly irregular firing of single cells at low rates;
    – a broad distribution of firing rates (close to lognormal);
    – weak correlations between cells;
  • Synchronous irregular oscillations are similar to the fast oscillations observed in the cerebellum, hippocampus, and cerebral cortex;
  • LFP spectra from all these structures can be fitted quantitatively by the model;
  • Irregularity persists in randomly connected networks in the absence of noise;
  • Irregular dynamics can be truly chaotic (positive Lyapunov exponents) or 'stably chaotic' (negative Lyapunov exponents).

  18. Synaptic plasticity, learning and memory

  19. Synaptic plasticity and network dynamics: future challenges
  • So far, most studies of learning and memory in networks have focused on networks with fixed connectivity (typically Hebbian, assumed to be the result of learning);
  • With Hebbian connectivity matrices, networks become multistable, with one background state and a multiplicity of 'selective' attractors representing stored memories (see the sketch below);
  • Challenges:
    – devise 'learning rules' (i.e. dynamical equations for synapses) consistent with known data;
    – insert such rules into networks, and study how inputs with prescribed statistics shape the network's attractor landscape;
    – study the maximal storage capacity of the network, with different types of attractors;
    – find learning rules that are able to reach the maximal capacity.
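A toy illustration of that multistability: a binary Hopfield-style network with a Hebbian connectivity matrix, in which stored patterns become fixed-point attractors. This is a standard textbook construction used purely for illustration, not the specific spiking model discussed in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                               # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N)) # random +/-1 memory patterns

# Hebbian connectivity matrix (no self-coupling)
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(cue, n_steps=20):
    """Synchronous sign updates until the state settles into an attractor."""
    s = cue.copy()
    for _ in range(n_steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# start from a degraded version of pattern 0 (20% of the bits flipped)
cue = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)
recalled = recall(cue)
overlaps = patterns @ recalled / N          # overlap with each stored memory
print("overlaps with stored patterns:", np.round(overlaps, 2))
# the overlap with pattern 0 should be close to 1: the network fell into that attractor
```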
