1. Introduction

• The goal of neuromorphic engineering is to design and implement microelectronic systems that emulate the structure and function of the brain.
• Address-event representation (AER) is a communication protocol originally proposed as a means to communicate sparse neural events between neuromorphic chips.
• Previous work has shown that AER can also be used to construct large-scale networks with arbitrary, configurable synaptic connectivity.
• Here, we further extend the functionality of AER to implement arbitrary, configurable synaptic plasticity in the address domain.

2. Address-Event Representation (AER)

[Figure: a sender chip and a receiver chip connected by a shared data bus with REQ/ACK handshake lines; an encoder maps spiking cells 0-3 to addresses on the bus, and a decoder routes each address to the corresponding receiver cell (Mahowald, 1994; Lazzaro et al., 1993).]

• The AER communication protocol emulates massive connectivity between cells by time-multiplexing many connections on the same data bus.
• For a one-to-one connection topology, the required number of wires is reduced from N to ~log2(N).
• Each spike is represented by:
  ◦ Its location: explicitly encoded as an address.
  ◦ The time at which it occurs: implicitly encoded.
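To make the encoding concrete, here is a minimal Python sketch of AER time-multiplexing; the function and variable names are ours, not part of the original system:

    import math

    N = 4                                # number of sender cells
    ADDR_BITS = math.ceil(math.log2(N))  # bus width grows as ~log2(N) wires

    def send_spikes(spike_times):
        """spike_times: list of (time, cell_index) pairs from the sender array."""
        bus = []
        for t, cell in sorted(spike_times):
            address = cell               # encoder: spiking cell -> address
            bus.append((t, address))     # time-multiplexed onto the shared bus
        return bus

    # Three spikes from cells 3, 0, 2: location is explicit in the address,
    # timing is implicit in when each event appears on the bus.
    events = send_spikes([(0.0, 3), (0.1, 0), (0.2, 2)])
    print(f"{ADDR_BITS}-bit bus events:", events)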

3. Learning on Silicon

• Adaptive hardware systems commonly employ learning circuitry embedded into the individual cells.
• Executing learning rules locally requires inputs and outputs of the algorithm to be local in both space and time.
• Implementing learning circuits locally increases the size of repeating units.
• This approach can be effective for small systems, but it is not efficient when the number of cells increases.

[Figure: a fully connected two-layer network with inputs x1...x4, outputs y1 and y2, and weights w11...w42, illustrating the per-cell learning circuitry that must be replicated at every synapse.]

4. Address Domain Learning

• By performing learning in the address domain, we can:
  ◦ Move learning circuits to the periphery.
  ◦ Create scalable adaptive systems.
  ◦ Maintain the small size of our analog cells.
  ◦ Construct arbitrarily complex and reconfigurable learning rules.
• Because any measure of cellular activity can be made globally available using AER, many adaptive algorithms based on incremental outer-product computations can be implemented in the address domain (see the sketch below).
• By implementing learning circuits on the periphery, we reduce restrictions of locality on constituents of the learning rule.
• Spike timing-based learning rules are particularly well-suited for implementation in the address domain.
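As an illustration of the incremental outer-product idea, here is a hypothetical Python sketch of an event-driven Hebbian update performed entirely at the periphery; the sizes and learning rate are assumed example values:

    import numpy as np

    eta = 0.01                 # assumed learning rate
    w = np.zeros((2, 4))       # 2 outputs (y), 4 inputs (x); assumed sizes

    def on_coincident_events(pre_addr, post_addr):
        # A peripheral circuit observing the address bus updates one entry
        # of the weight matrix: an incremental rank-1 (outer-product) step.
        w[post_addr, pre_addr] += eta

    on_coincident_events(pre_addr=3, post_addr=1)
    print(w)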

5. Enhanced AER

• In its original formulation, AER implements a one-to-one connection topology.
• To create more complex neural circuits, convergent and divergent connections are required.
• The connectivity of AER systems can be enhanced by routing address-events to multiple receiver locations via a look-up table (Andreou et al., 1997; Deiss et al., 1999; Boahen, 2000; Higgins & Koch, 1999).
• Continuous-valued synaptic weights can be obtained by manipulating event transmission (Goldberg et al., 2001):

  W = n × p × q

  where W is the synaptic weight, n is the number of spikes sent, p is the probability of transmission, and q is the amplitude of the postsynaptic response.
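A minimal Python sketch of this weight decomposition (names and example values are ours):

    import random

    def deliver(n, p, q):
        """Send n events, each transmitted with probability p and causing a
        postsynaptic response of amplitude q; expected total is W = n*p*q."""
        return sum(q for _ in range(n) if random.random() < p)

    n, p, q = 8, 0.5, 0.25                     # assumed example values
    print("target W =", n * p * q)             # 1.0
    print("one stochastic trial =", deliver(n, p, q))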

6. Enhanced AER: Example

[Figure: a sender chip and an integrate-and-fire receiver array connected through a look-up table; each LUT row holds a sender address, synapse index, receiver address, weight polarity (POL), and weight magnitude, and an event generator (EG) drives the receiver's decoder.]

• A two-layer neural network is mapped to the AER framework by means of a look-up table (LUT).
• The event generator (EG) sends as many events as are specified in the weight magnitude field of the LUT (see the sketch below).
• The integrate-and-fire array transceiver (IFAT) spatially and temporally integrates events.
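A hypothetical Python sketch of the LUT-based event expansion; the table contents here are invented for illustration, not taken from the original figure:

    # Each sender address indexes a list of (receiver, polarity, magnitude)
    # entries; the event generator re-emits `magnitude` events per entry.
    LUT = {
        0: [(1, +1, 3), (2, -1, 1)],   # assumed example rows
        1: [(0, +1, 2)],
    }

    def expand(sender_addr):
        out = []
        for receiver, polarity, magnitude in LUT[sender_addr]:
            out += [(receiver, polarity)] * magnitude
        return out

    # One event from sender 0 fans out to four receiver events,
    # which the integrate-and-fire array then integrates.
    print(expand(0))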

7. Architecture

[Figure: block diagram of the IFAT system. A RAM look-up table and a microcontroller (MCU) on a PC board communicate with the IFAT chip over REQ/ACK handshake lines (INREQ/INACK, OUTREQ/OUTACK); on-chip row/column select (RSEL), address-match, event-scanning, and weight-polarity (POL) logic route events into and out of the integrate-and-fire array via the AIN/AOUT address lines.]

8. Implementation

[Figure: photographs of the IFAT system. Board level: RAM, IFAT chip, and MCU. Chip level: row and column decoding plus row and column scanning/encoding circuitry surrounding the integrate-and-fire array, with a close-up of a single IF cell.]

9. Spike Timing-Dependent Plasticity

• In spike timing-dependent plasticity (STDP), changes in synaptic strength depend on the time between each pair of presynaptic and postsynaptic events.
• The most recent inputs to a postsynaptic cell make larger contributions to its membrane potential than past inputs, due to passive leakage currents.
• Postsynaptic events immediately following incoming presynaptic spikes are considered to be causal and induce weight increments.
• Presynaptic inputs that arrive shortly after a postsynaptic spike are considered to be anti-causal and induce weight decrements.

[Figure: experimentally measured STDP modification curve, from (Bi & Poo, 1998).]

10. Address Domain STDP: Event Queues

• To implement our STDP synaptic modification rule in the address domain, we augmented our AER architecture with two event queues, one for presynaptic events and one for postsynaptic events.
• When an event occurs, its address is entered into the appropriate queue along with an associated value ϕ initialized to τ+ or τ−. This value is decremented over time:

  ϕ_pre(t − t_pre) = τ+ − (t − t_pre)   if t − t_pre ≤ τ+,   0 otherwise
  ϕ_post(t − t_post) = τ− − (t − t_post)   if t − t_post ≤ τ−,   0 otherwise

[Figure: example contents of the presynaptic and postsynaptic queues, each entry pairing an address with its decaying ϕ value, alongside the corresponding spike trains x1, x2, y1, y2 over time.]
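A minimal Python sketch of the two queues and the decaying ϕ value; the window lengths are assumed example values:

    from collections import deque

    tau_plus, tau_minus = 2.0, 3.0        # assumed window lengths

    pre_queue, post_queue = deque(), deque()

    def record(queue, addr, t):
        # On each event, enqueue its address and time; here phi is computed
        # on demand from the elapsed time rather than stored and decremented.
        queue.append({"addr": addr, "t": t})

    def phi(tau, dt):
        """Value of a queue entry dt time units after its event."""
        return tau - dt if 0 <= dt <= tau else 0.0

    record(pre_queue, addr=2, t=0.0)
    print(phi(tau_plus, 0.5))             # 1.5: still inside the window
    print(phi(tau_plus, 2.5))             # 0.0: expired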

11. Address Domain STDP: Weight Updates

• Weight update procedure:
  ◦ For each postsynaptic event, we iterate backwards through the presynaptic queue to find the causal spikes and increment the appropriate weights in the LUT.
  ◦ For each presynaptic event, we iterate backwards through the postsynaptic queue to find the anti-causal spikes and decrement the appropriate weights in the LUT.
• The magnitude of each weight update is specified by the values stored in the queue:

  Δw = +η · ϕ_pre(t_post − t_pre)    if −τ+ ≤ t_pre − t_post ≤ 0
  Δw = −η · ϕ_post(t_pre − t_post)   if 0 ≤ t_pre − t_post ≤ τ−
  Δw = 0                             otherwise

[Figure: the two queues during an update, showing a postsynaptic event scanning back through recent presynaptic addresses (window τ+) and a presynaptic event scanning back through recent postsynaptic addresses (window τ−).]
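A self-contained Python sketch of this update procedure; queue contents, window lengths, and the learning rate are assumed example values:

    tau_plus, tau_minus, eta = 2.0, 3.0, 0.1

    # Queues are kept in time order, so reversed() yields newest first.
    pre_queue  = [{"addr": 0, "t": 4.1}, {"addr": 1, "t": 4.8}]
    post_queue = [{"addr": 0, "t": 3.5}]

    w = {}   # (pre_addr, post_addr) -> weight, mirroring the LUT

    def on_post_event(post_addr, t):
        # Causal pairs: walk the presynaptic queue backwards and increment.
        for e in reversed(pre_queue):
            dt = t - e["t"]
            if dt > tau_plus:
                break                  # older events are outside the window
            key = (e["addr"], post_addr)
            w[key] = w.get(key, 0.0) + eta * (tau_plus - dt)

    def on_pre_event(pre_addr, t):
        # Anti-causal pairs: walk the postsynaptic queue backwards and decrement.
        for e in reversed(post_queue):
            dt = t - e["t"]
            if dt > tau_minus:
                break
            key = (pre_addr, e["addr"])
            w[key] = w.get(key, 0.0) - eta * (tau_minus - dt)

    on_post_event(post_addr=0, t=5.0)
    print(w)    # both presynaptic inputs were causal for this event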

12. Address Domain STDP: Details

[Figure: the resulting synaptic modification curve Δw as a function of t_pre − t_post, positive over the presynaptic-before-postsynaptic window τ+ and negative over the postsynaptic-before-presynaptic window τ−.]

• For stable learning, the area under the synaptic modification curve in the anti-causal regime must be greater than that in the causal regime. This ensures convergence of the synaptic strengths (Song et al., 2000).
• In our implementation of STDP, this constraint is met by setting τ− > τ+ (a quick check follows below).
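With the linear ϕ windows, each lobe of the modification curve is a triangle of area η·τ²/2, so τ− > τ+ suffices; a quick Python check under assumed parameter values:

    eta, tau_plus, tau_minus = 0.1, 2.0, 3.0      # assumed parameters

    area_causal      = eta * tau_plus  ** 2 / 2   # potentiation lobe: 0.2
    area_anti_causal = eta * tau_minus ** 2 / 2   # depression lobe:   0.45

    assert area_anti_causal > area_causal         # net depression -> stability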

13. Experiment: Grouping Correlated Inputs

[Figure: a single output neuron y receiving all 20 input-layer neurons; x1...x17 are driven by the uncorrelated group and x18...x20 by the correlated group.]

• Each of the 20 neurons in the input layer is driven by an externally supplied, randomly generated list of events.
• Our randomly generated list of events simulates two groups of neurons, one correlated and one uncorrelated. The uncorrelated group drives input-layer cells x1...x17, and the correlated group drives input-layer cells x18...x20.
• Although each neuron in the input layer has the same average firing rate, neurons x18...x20 fire synchronous spikes more often than any other combination of neurons (a sketch of such a stimulus follows below).
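A hypothetical Python sketch of generating such a stimulus; the rates and correlation strength are assumptions, not the values used in the experiment:

    import random

    N_INPUTS, CORRELATED = 20, (17, 18, 19)   # x18..x20 as 0-based indices

    def generate_events(n_steps, rate=0.05, p_sync=0.02):
        events = []
        for t in range(n_steps):
            if random.random() < p_sync:      # shared source: synchronous spikes
                events += [(t, ch) for ch in CORRELATED]
            for ch in range(N_INPUTS):
                # Correlated channels fire at a reduced background rate so
                # that every channel keeps the same average firing rate.
                r = rate - p_sync if ch in CORRELATED else rate
                if random.random() < r:
                    events.append((t, ch))
        return events

    events = generate_events(1000)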

14. Experimental Results

[Figure: final synaptic weight distributions across the 20 inputs, for a single trial and averaged over 20 trials.]

• STDP has been shown to be effective at detecting correlations between groups of inputs (Song et al., 2000). We demonstrate that this can be accomplished in hardware in the address domain.
• Given a random starting distribution of synaptic weights for a set of presynaptic inputs, a neuron using STDP should maximize the weights of correlated inputs and minimize the weights of uncorrelated inputs.
• Our results illustrate this principle when all synaptic weights are initialized to a uniform value and the network is allowed to process 200,000 input events.

15. Conclusion

• The address domain provides an efficient representation in which to implement synaptic plasticity based on the relative timing of events.
  ◦ Learning circuitry can be moved to the periphery.
  ◦ The constituents of learning rules need not be constrained in space or time.
• We have implemented an address domain learning system using a hybrid analog/digital architecture.
• Our experimental results illustrate an application of this approach using a temporally asymmetric Hebbian learning rule.
