CSE/NB 528 Lecture 10: Recurrent Networks (Chapter 7)
  1. CSE/NB 528 Lecture 10: Recurrent Networks (Chapter 7)
     Lecture figures are from Dayan & Abbott's book: http://people.brandeis.edu/~abbott/book/index.html
     What's on our smörgåsbord today?
     - Computation in linear recurrent networks (eigenvalue analysis)
     - Nonlinear recurrent networks (eigenvalue analysis)
     Covered in: Chapter 7 of Dayan & Abbott

  2. Linear Recurrent Networks
     $\tau \frac{d\mathbf{v}}{dt} = -\mathbf{v} + W\mathbf{u} + M\mathbf{v}$
     ($\mathbf{v}$: output; $-\mathbf{v}$: decay; $W\mathbf{u}$: input; $M\mathbf{v}$: feedback)
     What can a linear recurrent network do? On-board analysis based on the eigenvectors of the recurrent weight matrix M; a numpy sketch follows.
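
A minimal numpy sketch of that on-board eigenvector analysis (the weight matrix and input below are made up for illustration): for a symmetric M with eigenvalues $\lambda_i$ and eigenvectors $\mathbf{e}_i$, each mode of the steady state of $\tau \frac{d\mathbf{v}}{dt} = -\mathbf{v} + \mathbf{h} + M\mathbf{v}$ (with $\mathbf{h} = W\mathbf{u}$) is the input projection amplified by $1/(1 - \lambda_i)$.

```python
import numpy as np

# Hypothetical symmetric weight matrix, rescaled so all eigenvalues
# have magnitude <= 0.9 (the network is unstable if any eigenvalue >= 1).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
S = (A + A.T) / 2
M = 0.9 * S / np.abs(np.linalg.eigvalsh(S)).max()
h = rng.standard_normal(5)                 # effective input h = W u

lam, E = np.linalg.eigh(M)                 # eigenvectors in columns of E
v_inf = E @ ((E.T @ h) / (1 - lam))        # each mode amplified by 1/(1 - lam_i)

# agrees with directly solving (I - M) v = h for the steady state
assert np.allclose(v_inf, np.linalg.solve(np.eye(5) - M, h))
print(v_inf)
```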

  3. Example of a Linear Recurrent Network
     Each neuron codes for a preferred angle θ between -180° and +180°. The recurrent connections are a cosine function of the relative angle:
     $M(\theta, \theta') \propto \cos(\theta - \theta')$
     Excitation nearby, inhibition further away. Is M symmetric, i.e. is $M(\theta, \theta') = M(\theta', \theta)$? (Yes, since cosine is an even function.)
     For this M, all eigenvalues are 0 except $\lambda_1 = 0.9$, i.e. an amplification of $1/(1 - \lambda_1) = 10$ for that mode. (See Section 7.4 in Dayan & Abbott.)
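
A quick numerical check of that eigenvalue claim; the discretization and the normalization constant are assumptions chosen so the nonzero eigenvalue comes out to 0.9:

```python
import numpy as np

# Discretize M(theta, theta') = c * cos(theta - theta') over N neurons,
# with c = 0.9 / (N/2) so the largest eigenvalue is 0.9.
N = 100
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
M = (0.9 / (N / 2)) * np.cos(theta[:, None] - theta[None, :])

lam = np.sort(np.linalg.eigvalsh(M))
print(np.round(lam[-3:], 3))   # [0. 0.9 0.9]: cos/sin modes; all others 0
```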

  4. Example: Amplification in a Linear Recurrent Network
     [Figure: noisy cosine input and amplified output, plotted against each neuron's preferred angle]
     Example: Memory for Maintaining Eye Position
     Input: bursts of spikes from brainstem oculomotor neurons. Output: memory of eye position in the medial vestibular nucleus.
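
A sketch of the amplification demo under assumed parameters (input amplitude, noise level, time constant, and step size are all made up): integrate the network with a noisy cosine input and compare input to steady-state output.

```python
import numpy as np

N, tau, dt = 100, 10.0, 0.1              # tau and dt in ms (assumptions)
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
M = (0.9 / (N / 2)) * np.cos(theta[:, None] - theta[None, :])

rng = np.random.default_rng(1)
h = np.cos(theta) + 0.3 * rng.standard_normal(N)   # noisy cosine input

v = np.zeros(N)
for _ in range(int(200 / dt)):           # integrate ~20 tau, to steady state
    v += (dt / tau) * (-v + h + M @ v)

# The cosine component of the input is amplified ~10x; the noise is not.
print(h.max(), v.max())
```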

  5. Nonlinear Recurrent Networks
     $\tau \frac{d\mathbf{v}}{dt} = -\mathbf{v} + F(W\mathbf{u} + M\mathbf{v})$
     ($\mathbf{v}$: output; $-\mathbf{v}$: decay; $W\mathbf{u}$: input; $M\mathbf{v}$: feedback. It is convenient to write $W\mathbf{u} = \mathbf{h}$.)
     Amplification in a Nonlinear Recurrent Network
     [Figure: input and amplified output tuning curves]
     Here F is the rectification nonlinearity: F(x) = x if x > 0, and 0 otherwise. The recurrent gain is $\lambda_1 = 1.9$, which would make a linear network blow up, but this network is stable due to rectification.
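
A sketch of this rectified network; the input tuning and integration parameters are assumptions. With $\lambda_1 = 1.9$ a linear network would diverge, but the rectification keeps the activity bounded and sharply tuned.

```python
import numpy as np

N, tau, dt = 100, 10.0, 0.1
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
M = (1.9 / (N / 2)) * np.cos(theta[:, None] - theta[None, :])
h = 0.1 * (1 + np.cos(theta))            # weakly tuned input (assumption)

v = np.zeros(N)
for _ in range(int(500 / dt)):
    v += (dt / tau) * (-v + np.maximum(0, h + M @ v))   # F = rectification

print(v.min(), v.max())   # activity stays finite, a bump around 0 degrees
```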

  6. Selective "Attention" in a Nonlinear Recurrent Network
     [Figure: an input with two peaks; the output contains only the stronger peak]
     The network performs "winner-takes-all" input selection.
     Gain Modulation in a Nonlinear Recurrent Network
     [Figure: inputs and outputs at different constant input levels]
     Changing the constant level of the input multiplies the output tuning curve: a multiplicative gain change.
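
A sketch of the winner-takes-all behavior under assumed bump positions and amplitudes: present two peaks of slightly different strength and let the recurrent dynamics select the larger one.

```python
import numpy as np

N, tau, dt = 100, 10.0, 0.1
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
M = (1.9 / (N / 2)) * np.cos(theta[:, None] - theta[None, :])

# Two Gaussian bumps at -90 and +90 degrees; the right one is 10% stronger.
h = 0.1 * (1.0 * np.exp(-0.5 * ((theta + np.pi / 2) / 0.3) ** 2) +
           1.1 * np.exp(-0.5 * ((theta - np.pi / 2) / 0.3) ** 2))

v = np.zeros(N)
for _ in range(int(1000 / dt)):
    v += (dt / tau) * (-v + np.maximum(0, h + M @ v))

print(theta[np.argmax(v)])   # ~ +pi/2: only the stronger bump survives
```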

  7. Gain Modulation in Parietal Cortex Neurons
     [Figure: responses of an Area 7a neuron at two gaze directions (Gaze 1, Gaze 2); an example of a gain-modulated tuning curve]
     Short-Term Memory Storage in a Nonlinear Recurrent Network
     [Figure: local input + background drives an output bump; after the input is turned off, the bump remains]
     The network maintains a memory of previous activity when the input is turned off, similar to "short-term" or "working" memory in prefrontal cortex. The memory is maintained by recurrent activity.
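
A sketch of the memory demo; the background level, input shape, and switch-off time are assumptions, and how robustly the bump persists depends on these parameters. The idea is that after the tuned input is removed, the uniform background plus recurrent feedback keeps the bump alive.

```python
import numpy as np

N, tau, dt = 100, 10.0, 0.1
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
M = (1.9 / (N / 2)) * np.cos(theta[:, None] - theta[None, :])

h_on = 0.1 + 0.1 * np.exp(-0.5 * (theta / 0.3) ** 2)  # background + local input
v = np.zeros(N)
for step in range(int(2000 / dt)):
    h = h_on if step * dt < 1000 else 0.1              # input off after 1 s
    v += (dt / tau) * (-v + np.maximum(0, h + M @ v))

print(theta[np.argmax(v)], v.max())   # bump persists on background alone
```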

  8. What about Non-Symmetric Recurrent Networks?
     Example: a network of excitatory (E) and inhibitory (I) neurons. The connections can't be symmetric: why? (Because an E neuron's outgoing weights are positive while an I neuron's are negative, $M_{IE} > 0 > M_{EI}$, so $M_{IE} \neq M_{EI}$.)
     $\tau_E \frac{dv_E}{dt} = -v_E + M_{EE} v_E + M_{EI} v_I - \gamma_E$
     $\tau_I \frac{dv_I}{dt} = -v_I + M_{II} v_I + M_{IE} v_E - \gamma_I$
     A simple two-neuron network representing interacting populations: one excitatory neuron and one inhibitory neuron.
     Stability Analysis of Nonlinear Recurrent Networks
     General case: $\frac{d\mathbf{v}}{dt} = f(\mathbf{v})$
     Suppose $\mathbf{v}_\infty$ is a fixed point, i.e. $f(\mathbf{v}_\infty) = 0$. Near $\mathbf{v}_\infty$, write $\mathbf{v}(t) = \mathbf{v}_\infty + \boldsymbol{\epsilon}(t)$, so $\frac{d\mathbf{v}}{dt} = \frac{d\boldsymbol{\epsilon}}{dt}$.
     Taylor expansion: $f(\mathbf{v}(t)) \approx f(\mathbf{v}_\infty) + \frac{\partial f}{\partial \mathbf{v}}\Big|_{\mathbf{v}_\infty} \boldsymbol{\epsilon}(t)$, i.e. $\frac{d\boldsymbol{\epsilon}}{dt} = J\,\boldsymbol{\epsilon}(t)$, where $J = \frac{\partial f}{\partial \mathbf{v}}$ is the "Jacobian matrix".
     Derive the solution for $\mathbf{v}(t)$ from the eigen-analysis of J: the eigenvalues of J determine the stability of the network (see Mathematical Appendix A.3 in the textbook). A numerical sketch of this recipe follows.
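
A numerical version of the linearization recipe, as a sketch: it uses the specific E-I system from the next slide as f (with $\tau_I$ = 30 ms and the rectification omitted for simplicity), builds J by central differences at the fixed point, and reads stability off J's eigenvalues.

```python
import numpy as np

tau_E, tau_I = 10.0, 30.0

def f(v):
    vE, vI = v
    return np.array([(-vE + 1.25 * vE - vI + 10) / tau_E,
                     (-vI + vE - 10) / tau_I])

v_fp = np.array([80 / 3, 50 / 3])   # fixed point of this linear system
assert np.allclose(f(v_fp), 0)

# Jacobian by central differences (exact here, since f is linear)
eps = 1e-6
J = np.column_stack([(f(v_fp + eps * e) - f(v_fp - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
print(np.linalg.eigvals(J))   # complex pair with negative real part: stable
```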

  9. Example: Non-Symmetric Recurrent Networks
     A specific network of excitatory (E) and inhibitory (I) neurons:
     $\tau_E \frac{dv_E}{dt} = -v_E + M_{EE} v_E + M_{EI} v_I - \gamma_E$, with $\tau_E = 10$ ms, $M_{EE} = 1.25$, $M_{EI} = -1$, $\gamma_E = -10$
     $\tau_I \frac{dv_I}{dt} = -v_I + M_{II} v_I + M_{IE} v_E - \gamma_I$, with $M_{II} = 0$, $M_{IE} = 1$, $\gamma_I = 10$
     $\tau_I$ is the parameter we will vary to study the network.
     Linear Stability Analysis
     Take derivatives of the right-hand sides with respect to both $v_E$ and $v_I$. The matrix of derivatives (the "Jacobian matrix") is:
     $J = \begin{pmatrix} (M_{EE} - 1)/\tau_E & M_{EI}/\tau_E \\ M_{IE}/\tau_I & (M_{II} - 1)/\tau_I \end{pmatrix}$
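
The same Jacobian in code, using the slide's parameter values (tau values in ms) and leaving $\tau_I$ free:

```python
import numpy as np

M_EE, M_EI, M_IE, M_II = 1.25, -1.0, 1.0, 0.0
tau_E = 10.0

def jacobian(tau_I):
    return np.array([[(M_EE - 1) / tau_E, M_EI / tau_E],
                     [M_IE / tau_I,       (M_II - 1) / tau_I]])

for tau_I in (30.0, 50.0):
    print(tau_I, np.linalg.eigvals(jacobian(tau_I)))
    # real parts: negative at tau_I = 30 ms, positive at 50 ms
```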

  10. Compute the Eigenvalues
      Jacobian matrix:
      $J = \begin{pmatrix} (M_{EE} - 1)/\tau_E & M_{EI}/\tau_E \\ M_{IE}/\tau_I & (M_{II} - 1)/\tau_I \end{pmatrix}$
      Its two eigenvalues, obtained by solving $\det(J - \lambda I) = 0$:
      $\lambda = \frac{1}{2}\left[ \frac{M_{EE} - 1}{\tau_E} + \frac{M_{II} - 1}{\tau_I} \pm \sqrt{\left( \frac{M_{EE} - 1}{\tau_E} - \frac{M_{II} - 1}{\tau_I} \right)^2 + \frac{4 M_{EI} M_{IE}}{\tau_E \tau_I}} \right]$
      Different dynamics result depending on the real and imaginary parts of $\lambda$ (see pages 410-412 of the Appendix in the text).
      Phase Plane and Eigenvalue Analysis ($\tau_I$ = 30 ms)
      [Figure: phase plane of $v_I$ vs. $v_E$ with nullclines and a trajectory; 50 ms scale bar]
      $10 \frac{dv_E}{dt} = -v_E + 1.25 v_E - v_I + 10$
      $\tau_I \frac{dv_I}{dt} = -v_I + v_E - 10$
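
Evaluating the closed-form eigenvalues for the two $\tau_I$ values used on the following slides, and checking them against a numerical eigensolver:

```python
import numpy as np

M_EE, M_EI, M_IE, M_II = 1.25, -1.0, 1.0, 0.0
tau_E = 10.0

for tau_I in (30.0, 50.0):
    a, d = (M_EE - 1) / tau_E, (M_II - 1) / tau_I
    disc = (a - d) ** 2 + 4 * M_EI * M_IE / (tau_E * tau_I)
    lam = (a + d + np.array([1, -1]) * np.sqrt(disc + 0j)) / 2

    J = np.array([[a, M_EI / tau_E], [M_IE / tau_I, d]])
    assert np.allclose(sorted(lam, key=np.imag),
                       sorted(np.linalg.eigvals(J), key=np.imag))
    print(tau_I, lam)   # Re < 0 at 30 ms (damped), Re > 0 at 50 ms (unstable)
```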

  11. Damped Oscillations in the Network
      $\tau_I$ = 30 ms: the eigenvalues have negative real parts, so the oscillation is damped and the trajectory spirals into a stable fixed point.
      [Figure: damped oscillations of $v_E$ and $v_I$; phase-plane trajectory converging to the stable fixed point]
      Unstable Behavior and Limit Cycle
      $\tau_I$ = 50 ms: the eigenvalues have positive real parts, so the fixed point is unstable and the activity settles onto a limit cycle.
      [Figure: growing oscillations converging to a closed orbit (limit cycle) in the phase plane]
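
A sketch reproducing both regimes by direct integration, using the rectified model equations; the initial conditions and run length are assumptions.

```python
import numpy as np

def simulate(tau_I, T=1000.0, dt=0.1):
    """Integrate the E-I equations with rectification; return v_E(t)."""
    vE, vI = 30.0, 20.0            # start near the fixed point (assumption)
    trace = []
    for _ in range(int(T / dt)):
        vE += (dt / 10.0)  * (-vE + max(0.0, 1.25 * vE - vI + 10))
        vI += (dt / tau_I) * (-vI + max(0.0, vE - 10))
        trace.append(vE)
    return np.array(trace)

for tau_I in (30.0, 50.0):
    vE = simulate(tau_I)
    late = vE[-2000:]              # last 200 ms of the run
    # oscillation amplitude: ~0 if damped (30 ms), large on the limit cycle (50 ms)
    print(tau_I, late.max() - late.min())
```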

  12. Oscillatory Activity in Real Networks
      [Figure: activity in the rabbit (or wabbit) olfactory bulb during three sniffs; see Chapter 7 in the textbook for details]
      That's all folks! Things to do:
      - Start reading Chapter 8 in Dayan & Abbott
      - Homework #3 assigned today
      - Start working on the final project
