Learning on Silicon: Overview

Gert Cauwenberghs
Johns Hopkins University
gert@jhu.edu

520.776 Learning on Silicon
http://bach.ece.jhu.edu/gert/courses/776
Learning on Silicon: Overview

• Adaptive Microsystems
  – Mixed-signal parallel VLSI
  – Kernel machines
• Learning Architecture
  – Adaptation, learning and generalization
  – Outer-product incremental learning
• Technology
  – Memory and adaptation
    • Dynamic analog memory
    • Floating-gate memory
  – Technology directions
    • Silicon on Sapphire
• System Examples
Massively Parallel Distributed VLSI Computation

• Neuromorphic
  – distributed representation
  – local memory and adaptation
  – sensory interface
  – physical computation
  – internally analog, externally digital
• Scalable
  – throughput scales linearly with silicon area
• Ultra Low-Power
  – factor 100 to 10,000 less energy than CPU or DSP

Example: VLSI analog-to-digital vector quantizer (Cauwenberghs and Pedroni, 1997)
Learning on Silicon

Adaptation:
  – necessary for robust performance under variable and unpredictable conditions
  – also compensates for imprecisions in the computation
  – avoids ad-hoc programming, tuning, and manual parameter adjustment

Learning:
  – generalization of output to previously unknown, although similar, stimuli
  – system identification to extract relevant environmental parameters

[Block diagrams: a system with inputs, outputs and parameters {p_i}, adapted to minimize an error ε(p) against a reference; in the learning case the error is formed against a model.]
Adaptive Elements

Adaptation:
  – Autozeroing (high-pass filtering): outputs
  – Offset correction: outputs (e.g. image non-uniformity correction)
  – Equalization / deconvolution: inputs, outputs (e.g. source separation; adaptive beamforming)

Learning:
  – Unsupervised learning: inputs, outputs (e.g. Adaptive Resonance; LVQ; Kohonen)
  – Supervised learning: inputs, outputs, targets (e.g. Least Mean Squares; Backprop)
  – Reinforcement learning: reward / punishment
Example: Learning Vector Quantization (LVQ)

Distance calculation:
  d(a, α^i) = Σ_j δ(a_j, α^i_j) = Σ_j |a_j − α^i_j|

Winner-take-all selection:
  k = argmin_i d(a, α^i)

Training:
  α^k_j ← (1 − λ) α^k_j + λ a_j

[Architecture: an array of templates α^i (i = 1 … n) over inputs a_1 … a_m, computing the distances d(a, α^i) in parallel, followed by a winner-take-all (WTA) selecting the winning template k.]
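A minimal software sketch of the LVQ loop on this slide, assuming an L1 (Manhattan) distance for δ and a scalar learning rate λ; the function names and array shapes are illustrative, not from the original slides.

```python
import numpy as np

def lvq_step(templates, a, lam=0.05):
    """One LVQ iteration: distance calculation, winner-take-all, training.

    templates : (n, m) array of template vectors alpha^i
    a         : (m,) input vector
    lam       : learning rate lambda
    """
    # Distance calculation: d(a, alpha^i) = sum_j |a_j - alpha^i_j|
    d = np.sum(np.abs(templates - a), axis=1)
    # Winner-take-all selection: k = argmin_i d(a, alpha^i)
    k = int(np.argmin(d))
    # Training: move only the winning template toward the input
    templates[k] = (1.0 - lam) * templates[k] + lam * a
    return k, templates

# Example usage
rng = np.random.default_rng(0)
templates = rng.uniform(size=(4, 8))   # n = 4 templates, m = 8 inputs
a = rng.uniform(size=8)
k, templates = lvq_step(templates, a)
```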
Incremental Outer-Product Learning in Neural Nets

Multi-layer perceptron:
  x_i = f(Σ_j p_ij x_j)

Outer-product learning update:
  Δp_ij = η x_j · e_i

  – Hebbian (Hebb, 1949):  e_i = x_i
  – LMS rule (Widrow-Hoff, 1960):  e_i = f'_i · (x_i^target − x_i)
  – Backpropagation (Werbos, Rumelhart, LeCun):  e_j = f'_j · Σ_i p_ij e_i
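As a rough numerical illustration of the outer-product update Δp_ij = η x_j e_i, the sketch below applies the LMS error term to a single-layer network; the sigmoid nonlinearity and variable names are assumptions made for the example, not taken from the slides.

```python
import numpy as np

def f(u):
    return 1.0 / (1.0 + np.exp(-u))      # assumed sigmoid activation

def f_prime(x):
    return x * (1.0 - x)                  # sigmoid derivative expressed in the output x

def lms_outer_product_update(p, x_in, x_target, eta=0.1):
    """One outer-product (LMS) weight update Delta p_ij = eta * x_j * e_i.

    p        : (n_out, n_in) weight matrix p_ij
    x_in     : (n_in,) input activities x_j
    x_target : (n_out,) target outputs
    """
    x_out = f(p @ x_in)                        # x_i = f(sum_j p_ij x_j)
    e = f_prime(x_out) * (x_target - x_out)    # LMS error: e_i = f'_i (x_i^target - x_i)
    p += eta * np.outer(e, x_in)               # outer-product update
    return p, x_out

# Example usage
rng = np.random.default_rng(1)
p = rng.normal(scale=0.1, size=(3, 5))
p, y = lms_outer_product_update(p, rng.uniform(size=5), np.array([0.0, 1.0, 0.5]))
```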
Technology

Incremental adaptation:
  – Continuous-time:  C dV_stored/dt = I_adapt
  – Discrete-time:    C ΔV_stored = Q_adapt

Storage:
  – Volatile capacitive storage (incremental refresh)
  – Non-volatile storage (floating gate)

Precision:
  – Only the polarity of the increments is critical (not the amplitude).
  – Adaptation compensates for inaccuracies in the analog implementation of the system.
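A quick worked example of the discrete-time relation C ΔV_stored = Q_adapt; the component values (1 pF capacitor, 1 nA adaptation current gated for 20 µs) are placeholder assumptions, not figures from the slides.

```python
C = 1e-12        # storage capacitance [F] (assumed)
I_adapt = 1e-9   # adaptation current [A] (assumed)
dt = 20e-6       # enable time per update [s] (assumed)

Q_adapt = I_adapt * dt        # charge delivered per update
dV = Q_adapt / C              # C * dV_stored = Q_adapt
print(f"{dV * 1e3:.1f} mV per update")   # 20.0 mV; only the polarity of dV must be exact
```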
Floating-Gate Non-Volatile Memory and Adaptation
Paul Hasler, Chris Diorio, Carver Mead, …

• Hot electron injection
  – 'Hot' electrons are injected from the drain onto the floating gate of M1.
  – Injection current is proportional to drain current and exponential in the floating-gate to drain voltage (~5 V).
• Tunneling
  – Electrons tunnel through the thin gate oxide from the floating gate onto a high-voltage (~30 V) n-well.
  – The tunneling voltage decreases with decreasing gate oxide thickness.
• Source degeneration
  – Short-channel M2 improves stability of closed-loop adaptation (Vd open-circuit).
  – M2 is not required if adaptation is regulated (Vd driven).
• Current scaling
  – In subthreshold, Iout is exponential both in the floating-gate charge and in the control voltage Vg.
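A simplified behavioral model of the two floating-gate update mechanisms described above (hot-electron injection and tunneling). The exponential forms follow the qualitative dependences stated on the slide, but the coefficients and exact functional details are illustrative assumptions, not device data.

```python
import numpy as np

def injection_current(I_drain, V_fd, beta=1e-12, V_inj=0.25):
    """Hot-electron injection onto the floating gate: proportional to the
    drain current and exponential in the floating-gate to drain voltage
    (beta and V_inj are assumed fitting constants)."""
    return beta * I_drain * np.exp(V_fd / V_inj)

def tunneling_current(V_ox, I_tun0=1e-6, V0=700.0):
    """Fowler-Nordheim-style tunneling of electrons off the floating gate
    through the thin oxide, a steep function of the oxide voltage
    (simplified model; I_tun0 and V0 are assumed)."""
    return I_tun0 * np.exp(-V0 / V_ox)

# Both mechanisms are steep exponentials in their controlling voltage;
# injection adds electrons to the floating gate, tunneling removes them.
print(injection_current(I_drain=1e-9, V_fd=5.0))   # ~0.5 pA with these constants
print(tunneling_current(V_ox=30.0))                # ~0.07 fA with these constants
```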
Dynamic Analog Memory Using Quantization and Refresh

Autonomous active refresh using A/D/A quantization:
  – Allows for an excursion margin around discrete quantization levels, provided the rate of refresh is sufficiently fast.
  – Supports a digital format for external access.
  – Trades analog depth for storage stability.

[Diagram: stored parameter p_i held on a capacitor, quantized by an A/D and written back through a D/A under control of the WR signal.]
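A minimal behavioral sketch of A/D/A refresh, assuming an n-bit uniform quantizer over a fixed voltage range; the resolution, range, and drift values are placeholders chosen for the example.

```python
def adda_refresh(v_stored, n_bits=8, v_min=0.0, v_max=5.0):
    """Quantize the stored voltage with an n-bit A/D and write back the D/A level.

    As long as the drift between refresh cycles stays within the excursion
    margin (half an LSB here), the stored value snaps back to the same
    discrete level on every cycle."""
    lsb = (v_max - v_min) / (2 ** n_bits)
    code = round((v_stored - v_min) / lsb)             # A/D conversion
    code = max(0, min(2 ** n_bits - 1, code))
    return v_min + code * lsb                          # D/A write-back

# Example: a value drifting by a fraction of an LSB is restored on each refresh
v = 2.304
for _ in range(3):
    v += 0.004          # leakage-induced drift between refresh cycles (assumed)
    v = adda_refresh(v)
```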
Binary Quantization and Partial Incremental Refresh

Problems with standard refresh schemes:
  – Systematic offsets in the A/D/A loop
  – Switch charge injection (clock feedthrough) during refresh
  – Random errors in the A/D/A quantization

Binary quantization:
  – Avoids errors due to analog refresh
  – Uses a charge pump with precisely controlled polarity of increments

Partial incremental refresh:
  – Partial increments avoid catastrophic loss of information in the presence of random errors and noise in the quantization
  – Robustness to noise and errors increases with smaller increment amplitudes
Binary Quantization and Partial Incremental Refresh

Refresh update:
  p_i(k+1) = p_i(k) − δ · Q(p_i(k))

  – Resolution (spacing of discrete levels):  Δ
  – Increment size:  δ
  – Worst-case drift rate |dp/dt|:  r
  – Period of refresh cycle:  T
  – Stability condition:  r T < δ << Δ

[Plot: binary quantizer output Q(p_i) = ±1 switching at the discrete levels p_d^1 … p_d^4, with the stored value held within ±δ of its level.]
Functional Diagram of Partial Incremental Refresh

• Similar in function and structure to the technique of delta-sigma modulation
• Supports an efficient and robust analog VLSI implementation, using a binary controlled charge pump

[Diagram: stored value p_i(k) passes through the binary quantizer Q (subject to noise); the resulting ±δ increment is accumulated (z^-1 loop) back onto p_i together with the drift term.]
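A behavioral sketch of the refresh rule and functional diagram in the two slides above: the stored value drifts between refresh cycles, and each cycle applies an increment of fixed size δ whose polarity comes from a binary quantizer that occasionally errs. The discrete levels, drift rate, and error probability are illustrative assumptions.

```python
import numpy as np

def partial_incremental_refresh(p0, levels, delta=1e-3, drift=1e-4, T=1.0,
                                n_cycles=1000, err_prob=0.01, seed=0):
    """Simulate p(k+1) = p(k) - delta * Q(p(k)) under constant drift.

    levels   : discrete target levels p_d
    delta    : increment size (must satisfy r*T < delta << Delta)
    drift    : worst-case drift rate r
    T        : refresh period
    err_prob : probability that the binary quantizer returns the wrong polarity
    """
    rng = np.random.default_rng(seed)
    p = p0
    for _ in range(n_cycles):
        p += drift * T                                   # drift between refresh cycles
        nearest = levels[np.argmin(np.abs(levels - p))]  # nearest discrete level p_d
        q = 1.0 if p >= nearest else -1.0                # binary quantizer Q(p) = +/-1
        if rng.random() < err_prob:
            q = -q                                       # occasional quantization error
        p -= delta * q                                   # partial refresh increment
    return p

levels = np.arange(0.0, 1.0, 0.1)          # resolution Delta = 0.1 (assumed)
print(partial_incremental_refresh(p0=0.3002, levels=levels))
# stays within a small fraction of Delta around the 0.3 level despite drift and errors
```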
Analog VLSI Implementation Architectures

• An increment/decrement (I/D) device is provided for every memory cell, serving refresh increments locally.
• The binary quantizer Q is more elaborate to implement, and one instance can be time-multiplexed among several memory cells.

[Diagrams: per-cell I/D charge pump with EN and INCR/DECR controls driving the storage capacitor C; the quantizer Q(p_i(k)) either dedicated per cell or shared among cells through a SEL multiplexer.]
Charge Pump Implementation of the I/D Device

Binary controlled polarity of increment/decrement
  – INCR/DECR controls the polarity of the current

Accurate amplitude over a wide dynamic range of increments
  – EN controls the duration of the current
  – Vb_INCR and Vb_DECR control the amplitude of the subthreshold current
  – No clock feedthrough charge injection (gates at constant potentials)

[Schematic: PMOS MP (biased by Vb_INCR) and NMOS MN (biased by Vb_DECR) gated by EN and INCR/DECR, sourcing or sinking current into the storage node p_i.]
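A simple behavioral model of the charge pump increment, assuming an ideal subthreshold exponential I = I0 · exp(Vb / (n · UT)); I0, n, and the other values are placeholder assumptions, not extracted device parameters.

```python
import numpy as np

def increment_amplitude(v_b, t_en, C=1e-12, I0=1e-15, n=1.5, UT=0.0258):
    """Voltage increment delivered by the charge pump onto storage capacitor C.

    v_b  : gate bias controlling the subthreshold current amplitude [V]
    t_en : duration for which EN enables the current [s]
    The increment is exponential in v_b and linear in t_en, so the bias and
    the enable pulse together span a wide dynamic range of increment sizes.
    """
    I = I0 * np.exp(v_b / (n * UT))     # subthreshold current (assumed ideal)
    return I * t_en / C

# Example: sweeping the gate bias spans several decades of increment size
for v_b in (0.1, 0.3, 0.5):
    print(v_b, increment_amplitude(v_b, t_en=1e-3))
```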
Dynamic Memory and Incremental Adaptation

[Figure: (a) charge pump memory cell with a 1 pF storage capacitor, enable inputs EN_p / EN_n, polarity control POL, and bias voltages V_bp / V_bn setting the adaptation current I_adapt that delivers ΔQ_adapt onto V_stored; (b) measured voltage increment and decrement ΔV_stored versus gate voltage V_bp / V_bn (0 to 0.6 V), for pulse durations Δt of 0, 23 µs, 1 ms and 40 ms, spanning roughly 10 µV to 1 V.]
A/D/A Quantizer for Digital Write and Read Access

Integrated bit-serial (MSB-first) D/A and successive-approximation A/D converter:
  – Partial refresh: Q(.) from the LSB of an (n+1)-bit A/D conversion
  – Digital read access: n-bit A/D conversion
  – Digital write access: n-bit D/A; WR; Q(.) from the comparator

[Diagram: shared A/D/A block converting the stored parameter p_i, with digital port D and write strobe WR producing the refresh decision Q(p_i(k)).]
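A software sketch of the MSB-first successive-approximation conversion and of taking the binary refresh decision Q(.) from the extra LSB of an (n+1)-bit conversion, as on the slide above; the uniform D/A model, voltage range, and sign convention are assumptions.

```python
def sar_adc(v, n_bits, v_ref=5.0):
    """Bit-serial, MSB-first successive-approximation A/D conversion:
    one comparator decision per bit against the trial D/A level."""
    code = 0
    for b in range(n_bits - 1, -1, -1):
        trial = code | (1 << b)
        if v >= trial * v_ref / (1 << n_bits):   # comparator vs. D/A output
            code = trial
    return code

def refresh_decision(v, n_bits=8, v_ref=5.0):
    """Binary refresh decision Q(.) from the LSB of an (n+1)-bit conversion.
    The extra LSB tells which half of the n-bit quantization interval the
    stored value sits in; the mapping to increment/decrement polarity shown
    here is one plausible convention, assumed for illustration."""
    lsb = sar_adc(v, n_bits + 1, v_ref) & 1
    return -1 if lsb else +1

print(sar_adc(2.31, 8), refresh_decision(2.31))
```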
Dynamic Analog Memory Retention

  – 10^9 cycles mean time between failure
  – 8-bit effective resolution
  – 20 µV increments/decrements
  – 200 µm × 32 µm in 2 µm CMOS

[Figure: measured probability P(LSB = "1") and distribution (%/mV) of the stored capacitor voltage versus input voltage (2.29 V to 2.33 V), showing the transitions between adjacent digital codes 01111110 through 10000001.]
Silicon on Sapphire

Peregrine UTSi process
  – Higher integration density
  – Drastically reduced bulk leakage
    • Improved analog memory retention
  – Transparent substrate
    • Adaptive optics applications
The Credit Assignment Problem
or How to Learn from Delayed Rewards

External, discontinuous reinforcement signal r(t).

Adaptive critics:
  – Heuristic Dynamic Programming (Werbos, 1977)
  – Reinforcement Learning (Sutton and Barto, 1983)
  – TD(λ) (Sutton, 1988)
  – Q-Learning (Watkins, 1989)

[Diagram: a system with parameters {p_i} mapping inputs to outputs; an adaptive critic turns the external reward r(t) into an internal evaluation r*(t) used to train the system.]
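For concreteness, a minimal tabular Q-learning update, one of the adaptive-critic methods listed above; the state/action encoding and parameter values are generic placeholders.

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).
    The temporal-difference error plays the role of the internal evaluation
    r*(t) produced by the adaptive critic from the delayed reward r(t)."""
    td_error = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += alpha * td_error
    return Q, td_error

# Example usage with a toy 4-state, 2-action table
Q = np.zeros((4, 2))
Q, delta = q_learning_step(Q, s=0, a=1, r=1.0, s_next=2)
```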
Reinforcement Learning Classifier for Binary Control

[Circuit schematics and waveforms: classifier cell with row/column selects (SEL_hor, SEL_vert), update and lock controls (UPD, LOCK, HYST), bias voltages (V_bp, V_bn, V_αp, V_αn), and cell signals q_k, e_k, y_k; the binary control output y = ±1 drives the plant state (x_1(t), x_2(t)) with input u(t) and output y(t).]
Adaptive Optical Wavefront Correction
with Marc Cohen, Tim Edwards and Mikhail Vorontsov

[Illustration: anatomy of the human eye (cornea, iris, lens, zonule fibers, retina, optic nerve).]