Modeling the Visual System
Dr. James A. Bednar
jbednar@inf.ed.ac.uk
http://homepages.inf.ed.ac.uk/jbednar

Sample network to model (CMVC figure 3.1a)
Tangential section with a small subset of neurons labeled. Where do we begin?

Dense connectivity (CMVC figure 3.1b,e)
Brainbow mouse cortex (Livet et al. 2007); electron microscopy of rat cortex (Briggman & Denk 2006).
Remember that the actual network is far denser than in the previous slides, with many opportunities for contact between neurons and neurites.

Modeling approaches
Compartmental neuron model versus integrate-and-fire / firing-rate model of the network.
One approach: model single cells extremely well. Our approach: many, many simple single-cell models.
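To make that contrast concrete, here is a minimal sketch (with illustrative constants, not values from the lecture) of two points on the spectrum of single-cell models: a leaky integrate-and-fire unit that tracks a membrane voltage over time, versus a firing-rate unit that reduces a cell to a rectified weighted sum. A compartmental model would sit at the far detailed end, with many coupled voltage equations per neuron.

```python
import numpy as np

# Leaky integrate-and-fire: one of the simplest single-cell spiking models.
# All constants here are illustrative, not taken from the lecture.
def lif(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for t, I in enumerate(input_current):
        v += dt * (-v + I) / tau          # leaky integration of the input
        if v >= v_thresh:                 # threshold crossing -> spike
            spikes.append(t * dt)
            v = v_reset
    return spikes

# Firing-rate unit: even simpler, a rectified weighted sum of its inputs.
def firing_rate_unit(weights, inputs):
    return max(0.0, float(np.dot(weights, inputs)))

print(len(lif(np.full(500, 1.5))))                     # spike count for constant drive
print(firing_rate_unit(np.array([0.5, -0.2, 0.8]),
                       np.array([1.0, 0.3, 0.6])))
```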
Levels of explanation
There are many ways to explain the electrophysiological properties (the behavior) of V1 neurons:
1. Phenomenological: mathematical fit to behavior; a good model iff there is a good fit to adults
2. Mechanistic: good if a good type 1 model and also consistent with circuits or other mechanisms in adults
3. Developmental: good if a good type 2 model and explains how it comes about, consistent with known data
4. Normative: good if a good type 1, 2, or 3 model and explains why the behavior is useful or appropriate

Adult retina and LGN cell models (Rodieck 1965)
• Standard model of adult RGC or LGN cell activity: Difference-of-Gaussians (DoG) weight matrix
• Firing rate: dot product of weight and input matrices
• Can be tuned for a quantitative match to firing rate
• Can add a temporal component (transient + sustained)

Effect of DoG
ON and OFF DoG filters over a range of center (c) and surround (s) sizes (c0.5/s1.5 up to c90/s30), applied to the original image.
Each DoG, if convolved with the image, performs edge detection at a certain size scale (spatial frequency band).

Adult V1 cell model: Gabor (adult cat; Daugman 1988)
Standard model of adult V1 simple-cell spatial preferences: Gabor (Gaussian times sine grating) (Daugman 1980).
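A minimal NumPy sketch of these two standard receptive-field models: a Difference-of-Gaussians kernel for the RGC/LGN model and a Gabor kernel for the V1 simple-cell model, with the firing rate computed as the dot product of the weight matrix and an input patch. All parameter values (grid size, widths, frequency) are illustrative assumptions, not those used in the lecture figures.

```python
import numpy as np

def gaussian2d(size, sigma):
    """Normalized 2D Gaussian of width sigma on a size x size grid."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

def dog_kernel(size, sigma_c, sigma_s):
    """ON-center Difference of Gaussians: narrow excitatory center minus
    broad inhibitory surround (swap the signs for an OFF cell)."""
    return gaussian2d(size, sigma_c) - gaussian2d(size, sigma_s)

def gabor_kernel(size, sigma, freq, theta, phase=0.0):
    """Gabor: Gaussian envelope times a sine grating of given spatial
    frequency, orientation theta, and phase."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr + phase)

# Firing rate as a dot product of the weight matrix and an input patch,
# rectified so that rates cannot go negative (an illustrative choice).
def firing_rate(weights, patch):
    return max(0.0, float(np.dot(weights.ravel(), patch.ravel())))

rng = np.random.default_rng(0)
patch = rng.random((21, 21))                    # stand-in for a small image patch
rgc = dog_kernel(21, sigma_c=1.0, sigma_s=3.0)  # illustrative center/surround sizes
v1 = gabor_kernel(21, sigma=4.0, freq=0.1, theta=np.pi / 4)
print(firing_rate(rgc, patch), firing_rate(v1, patch))
```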
Adult V1 cell model: CGE (Geisler & Albrecht 1997)
• Gabor model fits spatial preferences
• Simple response function: dot product
• To match observations: need to add numerous nonlinearities
• Examples: CGE model (Geisler & Albrecht 1997); LN model

Adult V1 cell model: Energy
• Spatiotemporal energy: standard model of the complex direction-selective cell (Adelson & Bergen 1985)
• Combines inputs from a quadrature pair (two simple-cell motion models out of phase)
• Achieves phase invariance and direction selectivity

Macaque and model V1 RFs
Receptive fields from macaque (Ringach 2002) compared with SSC (Rehn & Sommer 2007), SparseNet (Olshausen & Field 1996), and ICA (van Hateren et al. 1998) models.
Reproducing the full range of RFs requires special sparseness constraints (SSC).

V1 RFs as a sparse basis set
Basis vectors (k = 1 to 392) with weights w, the weighted reconstruction, and the original image; reconstruction quality c rises from 0.39 to 0.98 as more basis vectors are added.
One way to think about these cells: basis vectors (here from Olshausen & Field 1996) supporting reconstruction of the inputs, in a generative model.
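A minimal spatial sketch of the energy model, reusing a Gabor kernel as above: the responses of a quadrature pair (two Gabors 90 degrees out of phase) are squared and summed, which makes the combined response nearly insensitive to stimulus phase. This spatial version omits the temporal dimension needed for direction selectivity in the full spatiotemporal model of Adelson & Bergen; all parameter values are illustrative assumptions.

```python
import numpy as np

def gabor(size, sigma, freq, theta, phase):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr + phase)

def energy(patch, size=21, sigma=4.0, freq=0.1, theta=0.0):
    """Complex-cell energy: squared responses of a quadrature pair
    (two Gabors 90 degrees out of phase), summed."""
    even = gabor(size, sigma, freq, theta, phase=0.0)
    odd = gabor(size, sigma, freq, theta, phase=np.pi / 2)
    r_even = np.dot(even.ravel(), patch.ravel())
    r_odd = np.dot(odd.ravel(), patch.ravel())
    return r_even**2 + r_odd**2

# A grating of matching frequency gives nearly the same energy regardless
# of its spatial phase (phase invariance).
size, freq = 21, 0.1
ax = np.arange(size) - (size - 1) / 2.0
xx, _ = np.meshgrid(ax, ax)
for stim_phase in (0.0, np.pi / 3, np.pi / 2):
    grating = np.cos(2 * np.pi * freq * xx + stim_phase)
    print(round(energy(grating), 2))
```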
Retina/LGN development models
• Retinal wave generation (e.g. Feller et al. 1997; Godfrey & Swindale 2007; Hennig et al. 2009)
• RGC development based on retinal waves (e.g. Eglen & Willshaw 2002)
• Retinogeniculate pathway based on retinal waves (e.g. Eglen 1999; Haith 1998)
Because of the wealth of data from the retina, such models can now become quite detailed.

Our focus: Cortical map models (CMVC figure 3.3)
Basic architecture: an input surface mapped to the cortical surface (V1), plus some form of lateral interaction.

Kohonen SOM: Feedforward
Popular, computationally tractable map model (Kohonen 1982). Feedforward activity of unit (i, j):

\eta_{ij} = \| \vec{V} - \vec{W}_{ij} \|    (1)

(the distance between input vector \vec{V} and weight vector \vec{W}_{ij}).
Not particularly biologically plausible, but easy to compute, widely implemented, and has some nice properties.
Note: the activation function is not typically a dot product; the CMVC book is confusing about that.

Kohonen SOM: Lateral
Abstract model of lateral interactions:
• Pick the winner (r, s)
• Assign it activity \eta_{max}
• Assume that the activity of unit (i, j) can be described by a neighborhood function, such as a Gaussian:

h_{rs,ij} = \eta_{max} \exp\left( -\frac{(r-i)^2 + (s-j)^2}{\sigma_h^2} \right)    (2)

Models lateral interactions that depend only on distance from the winning unit.
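A minimal NumPy sketch of equations (1) and (2), assuming a small grid of units with randomly initialized weight vectors: the winner is the unit whose weight vector is closest to the input, and the Gaussian neighborhood function assigns each unit an activity that falls off with grid distance from that winner. Grid size, \eta_{max}, and \sigma_h are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = 10                           # 10x10 sheet of SOM units (illustrative)
dim = 2                             # dimensionality of the input vector
W = rng.random((grid, grid, dim))   # weight vector W_ij for each unit

def winner_and_neighborhood(V, W, eta_max=1.0, sigma_h=2.0):
    """Eq. (1): eta_ij = ||V - W_ij||; the winner (r, s) minimizes it.
    Eq. (2): Gaussian neighborhood centered on the winner."""
    eta = np.linalg.norm(W - V, axis=2)                 # distances, eq. (1)
    r, s = np.unravel_index(np.argmin(eta), eta.shape)  # winning unit
    i, j = np.meshgrid(np.arange(W.shape[0]), np.arange(W.shape[1]), indexing="ij")
    h = eta_max * np.exp(-((r - i)**2 + (s - j)**2) / sigma_h**2)   # eq. (2)
    return (r, s), h

V = rng.random(dim)
(r, s), h = winner_and_neighborhood(V, W)
print((r, s), h.max(), h.min())
```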
Kohonen SOM: Learning
Inspired by the basic Hebbian rule (Hebb 1949):

w' = w + \alpha \eta \chi    (3)

where the weight increases in proportion to the product of the input and output activities.
In the SOM, the weight vector is instead shifted toward the input vector based on the Euclidean difference:

w'_{k,ij} = w_{k,ij} + \alpha (\chi_k - w_{k,ij}) h_{rs,ij}    (4)

Hebb-like, but depending on distance from the winning unit.

SOM example: Input (CMVC figure 3.4)
• The SOM will be trained with unoriented Gaussian activity patterns
• Random (x, y) positions anywhere on the retina
• 576-dimensional input, but the x and y locations are the only source of variance

SOM: Weight vector self-org (CMVC figure 3.5)
Weight vectors of a center neuron and an edge neuron at iterations 0, 1000, 5000, and 40,000: initially a combination of input patterns, eventually settling to an exemplar.

SOM: Retinotopy self-org (CMVC figure 3.6a-b)
Iteration 0 (Initial): initially bunched, since all weight vectors average to zero. Iteration 1000 (Unfolding): unfolds as neurons differentiate.
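Continuing the sketch above, the SOM update of equation (4) shifts every unit's weight vector toward the input, scaled by the learning rate \alpha and by that unit's neighborhood value h_{rs,ij}. The toy inputs here are uniform random 2-D points rather than the 576-dimensional Gaussian activity patterns of the example, so this only illustrates the update rule, not the retinotopy figures; the constants are illustrative and are held fixed rather than annealed.

```python
import numpy as np

rng = np.random.default_rng(1)
grid, dim = 10, 2
W = rng.random((grid, grid, dim))
i, j = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")

alpha, eta_max, sigma_h = 0.1, 1.0, 2.0   # illustrative constants
for t in range(5000):
    V = rng.random(dim)                               # toy 2-D input point
    eta = np.linalg.norm(W - V, axis=2)               # eq. (1)
    r, s = np.unravel_index(np.argmin(eta), eta.shape)
    h = eta_max * np.exp(-((r - i)**2 + (s - j)**2) / sigma_h**2)  # eq. (2)
    W += alpha * h[..., None] * (V - W)               # eq. (4)

# After training, neighboring units should have neighboring weight vectors:
# the map has unfolded to cover the unit square.
print(W[0, 0], W[0, -1], W[-1, 0], W[-1, -1])
```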
SOM: Retinotopy self-org (CMVC figure 3.6c-d)
Iteration 5000 (Expanding) and iteration 40,000 (Final): the map expands to cover the usable portion of the input space.

Magnification of dense input areas (CMVC figure 3.7)
Maps trained on a single Gaussian distribution versus two long Gaussians: the density of units receiving input from a particular region depends on the input pattern statistics.

Principal components of data (CMVC figure 3.8)
(a) Linear distribution and (b) nonlinear distribution, showing the density P(X) with principal components PC1 and PC2.
PCA: a linear approximation, good for linear data.

Nonlinear distributions: principal curves, folding (CMVC figure 3.9)
Principal curve and folded curve fitted to the same nonlinear distribution.
Generalization of the idea of PCA to pick best-fit curve(s); multiple curves are possible.
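For a linear distribution like panel (a), the principal components can be computed directly from the data covariance. A minimal sketch, assuming a synthetic elongated 2-D Gaussian cloud rather than the distributions in the figures:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "linear" 2-D distribution: elongated along a diagonal axis.
n = 1000
t = rng.normal(0.0, 2.0, n)                  # large variance along the main axis
noise = rng.normal(0.0, 0.3, n)              # small variance across it
X = np.column_stack([t + noise, t - noise])

Xc = X - X.mean(axis=0)                      # center the data
cov = Xc.T @ Xc / (n - 1)                    # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
print("PC1:", eigvecs[:, order[0]], "variance:", eigvals[order[0]])
print("PC2:", eigvecs[:, order[1]], "variance:", eigvals[order[1]])
# For a nonlinear (curved) distribution, PC1 would only give the best-fit
# straight line; a principal curve or an SOM can follow the curvature.
```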
Three-dimensional model of ocular dominance (CMVC figure 3.10)
Representing the third dimension by folding; visualization of ocular dominance.
Feature maps: discrete approximations to principal surfaces?

Role of density of input sheet
• Gaussian inputs are nearly band-limited (since the Fourier transform of a Gaussian is also a Gaussian)
• The density of input sampling is unimportant, provided it is greater than 2X the highest frequency in the input (Nyquist theorem); a small numerical check of this argument appears at the end of this section

Role of density of SOM sheet
The SOM sheet acts as a discrete approximation to a two-dimensional surface. How many units the SOM needs depends on how nonlinear the input distribution is: a smoothly varying input distribution requires fewer units to represent its shape. This is only loosely related to the input density, which limits how quickly the input varies across space, but only for wideband stimuli.

Other relevant models
ICA: Independent Component Analysis yields realistic RFs (Olshausen & Field 1996); it can also be applied to maps (Hyvärinen & Hoyer 2001).
InfoMax: Information maximization can lead to RFs (Linsker 1986b,c) and basic maps (Kozloski et al. 2007; Linsker 1986a).
Elastic net: Achieving good coverage and continuity leads to realistic feature maps (Carreira-Perpiñán et al. 2005; Goodhill & Cimponeriu 2000).
This course focuses on mechanistic circuit models, not normative models (ICA, InfoMax, PCA, principal surfaces) or feature-space models (elastic net), both of which are hard to relate directly to the underlying biological systems.
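The band-limit argument for the input sheet can be checked numerically: the Fourier transform of a Gaussian is a Gaussian, so nearly all of its power lies below a modest cutoff frequency, and sampling at more than twice that cutoff loses essentially nothing. A small sketch with an arbitrary width; the 99.9% power threshold is an illustrative choice, not a value from the lecture.

```python
import numpy as np

sigma = 3.0                       # width of the input Gaussian, in sample units
x = np.arange(-256, 256)
g = np.exp(-x**2 / (2 * sigma**2))

spectrum = np.abs(np.fft.rfft(g))**2
freqs = np.fft.rfftfreq(len(x))   # cycles per sample
cum = np.cumsum(spectrum) / spectrum.sum()

# Effective highest frequency: where 99.9% of the power is contained.
f_cut = freqs[np.searchsorted(cum, 0.999)]
print("99.9% of the power lies below", f_cut, "cycles/sample")
print("=> a sampling interval up to", 0.5 / f_cut, "samples suffices (Nyquist)")
```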