Why Spiking Neural Networks Are Efficient: A Theorem

Michael Beer (1), Julio Urenda (2), Olga Kosheleva (2), and Vladik Kreinovich (2)

(1) Leibniz University Hannover, 30167 Hannover, Germany
    beer@irz.uni-hannover.de
(2) University of Texas at El Paso, 500 W. University, El Paso, Texas 79968, USA
    jcurenda@utep.edu, olgak@utep.edu, vladik@utep.edu

1. Why Spiking Neural Networks (NN)

• At this moment, artificial neural networks are the most successful – and the most promising – direction in AI.
• Artificial neural networks are largely patterned after the way actual biological neural networks work.
• This patterning makes perfect sense:
  – after all, our brains are the result of billions of years of evolution,
  – so it is reasonable to conclude that many features of biological neural networks are close to optimal:
  – inefficient features would have been filtered out in this long evolutionary process.
• However, there is an important difference between current artificial NN and biological NN.

2. Why Spiking NN (cont-d)

• In hardware-implemented artificial NN, each value is represented by the intensity of a signal.
• In contrast, in biological neural networks, each value is represented by the frequency of instantaneous spikes.
• Simulating many other features of biological neural networks has led to many successes.
• So, a natural idea is to also try to emulate the spiking character of biological neural networks.
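To make the "value as spike frequency" point concrete, here is a minimal rate-coding sketch in Python. The Poisson spike model, the [0, 1] value range, and the rate scale are our illustrative assumptions, not the model from the slides:

```python
import numpy as np

def rate_encode(value, t_window=1.0, max_rate=100.0, rng=None):
    """Encode a value in [0, 1] as spike times within a time window:
    the value sets the firing rate; spikes follow a Poisson process."""
    rng = np.random.default_rng() if rng is None else rng
    rate = value * max_rate                  # value -> spike frequency (Hz)
    n_spikes = rng.poisson(rate * t_window)  # expected count = rate * window
    return np.sort(rng.uniform(0.0, t_window, n_spikes))

def rate_decode(spike_times, t_window=1.0, max_rate=100.0):
    """Recover the encoded value from the observed spike frequency."""
    return len(spike_times) / (t_window * max_rate)

spikes = rate_encode(0.7, rng=np.random.default_rng(0))
print(rate_decode(spikes))   # close to 0.7, up to Poisson noise
```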

3. Spiking Neural Networks Are Indeed Efficient

• Interestingly, adding spiking to artificial neural networks has indeed led to many successful applications.
• They were especially successful in processing temporal (and even spatio-temporal) signals.
• A biological explanation of the success of spiking neural networks makes perfect sense.
• However, it would be nice to supplement it with a clear mathematical explanation.
• Such an explanation is especially important since:
  – in spite of all the billions of years of evolution,
  – we humans are not perfect as biological beings:
  – we need medicines, surgeries, and other artificial techniques to survive, and
  – our brains often make mistakes.

4. Looking for Basic Functions

• In general, to represent a signal x(t) means to approximate it as a linear combination of some basic functions.
• For example, it is reasonable to represent a periodic signal as a linear combination of sines and cosines.
• Often, it makes sense to represent the observed values as a linear combination of:
  – functions t, t², etc., representing the trend, and
  – sines and cosines that describe the periodic part of the signal.
• We can also take into account that the amplitudes of the periodic components can change with time.
• So, we end up with terms of the type t · sin(ω · t).
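As an illustration of representing a signal as a linear combination of basic functions, here is a least-squares sketch over the basis {1, t, t², sin(ω·t), cos(ω·t), t·sin(ω·t)}; the specific signal, the frequency ω, and the basis choice are our assumptions:

```python
import numpy as np

omega = 2.0                                    # assumed known frequency
t = np.linspace(0.0, 10.0, 200)
# Illustrative signal: trend + periodic part whose amplitude grows with t.
x = 0.5 * t + 3.0 * np.sin(omega * t) + 0.2 * t * np.sin(omega * t)

# Basic functions: trend terms, sines/cosines, and t * sin(omega * t).
basis = np.column_stack([
    np.ones_like(t), t, t**2,
    np.sin(omega * t), np.cos(omega * t),
    t * np.sin(omega * t),
])

# Coefficients of the best linear combination, by least squares.
coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)
print(np.round(coeffs, 3))   # ~ [0, 0.5, 0, 3, 0, 0.2]
```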

5. Looking for Basic Functions (cont-d)

• For radioactivity, the observed signal is:
  – a linear combination of functions exp(−k · t)
  – that represent the decay of different isotopes.
• So, in precise terms, selecting a representation means selecting an appropriate family of basic functions.
• In general, elements b(t) of a family can be described as b(t) = B(c_1, ..., c_n, t) corresponding to different tuples c = (c_1, ..., c_n).
• Sometimes, there is only one parameter, as in sines and cosines.
• In control, typical are functions exp(−k · t) · sin(ω · t), with two parameters k and ω, etc.
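In code, a family B(c_1, ..., c_n, t) is simply a function of its parameters and of t; here is a sketch of the two families mentioned above (the function names are ours):

```python
import numpy as np

def decay(k, t):
    """One-parameter family B(k, t) = exp(-k * t): decay of one isotope."""
    return np.exp(-k * t)

def damped_oscillation(k, omega, t):
    """Two-parameter family B(k, omega, t) = exp(-k * t) * sin(omega * t),
    typical in control."""
    return np.exp(-k * t) * np.sin(omega * t)

t = np.linspace(0.0, 5.0, 100)
# Different parameter tuples c = (c_1, ..., c_n) pick different members.
b1 = decay(0.3, t)
b2 = damped_oscillation(0.3, 2.0, t)
signal = 2.0 * b1 - 0.5 * b2   # a linear combination of family members
```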

6. Dependence on Parameters Is Continuous

• We want the dependence B(c_1, ..., c_n, t) to be computable.
• It is known that all computable functions are, in some reasonable sense, continuous.
• Indeed, in real life, we can only determine the values of all physical quantities c_i with some accuracy.
• Measurements are never 100% accurate, and computations always involve some rounding.
• For any given accuracy, we can provide the value with this accuracy.
• Thus, the approximate values of c_i are the only thing that a B(c_1, ..., c_n, t)-computing algorithm can use.
• This algorithm can ask for more and more accurate values of c_i.

7. Dependence Is Continuous (cont-d)

• However, at some point, the algorithm must produce the result.
• At this point, we only know approximate values of c_i.
• So, we only know the interval of possible values of c_i.
• And for all the values of c_i from this interval:
  – the result of the algorithm provides, with the given accuracy,
  – the approximation to the desired value B(c_1, ..., c_n, t).
• This is exactly what continuity is about!
• One has to be careful here, since real-life processes may actually be discontinuous.
• Sudden collapses, explosions, and fractures do happen.
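This argument can be imitated in code: an algorithm that sees a parameter only through intervals of shrinking width can still produce an ε-accurate result, precisely because the dependence is continuous. A sketch for the one-parameter family exp(−k · t); the oracle interface and the interval-halving strategy are our illustrative choices:

```python
import numpy as np

def eval_decay_to_accuracy(ask_k, t, eps):
    """Compute exp(-k * t) to accuracy eps, knowing k only through
    ask_k(delta), which returns an interval [k_lo, k_hi] of width <= delta.
    Since exp(-k * t) is decreasing in k (for t >= 0), the interval
    [exp(-k_hi * t), exp(-k_lo * t)] always encloses the true value."""
    delta = 1.0
    while True:
        k_lo, k_hi = ask_k(delta)              # ask for a more accurate k
        lo, hi = np.exp(-k_hi * t), np.exp(-k_lo * t)
        if hi - lo <= eps:                     # enclosure is tight enough
            return 0.5 * (lo + hi)             # -> eps-accurate result
        delta /= 2.0                           # otherwise refine and retry

true_k = 0.7
ask_k = lambda delta: (true_k - delta / 2, true_k + delta / 2)
print(eval_decay_to_accuracy(ask_k, t=1.0, eps=1e-6))   # ~ exp(-0.7) ~ 0.4966
```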

8. Dependence Is Continuous (cont-d)

• For example, we want to make sure that:
  – a step function which is equal to 0 for t < 0 and to 1 for t ≥ 0 is close to
  – an "almost" step function which is equal to 0 for t < 0, to t/ε for t ∈ (0, ε), and to 1 for t ≥ ε.
• In such situations:
  – we cannot exactly describe the value at moment t,
  – since the moment t is also measured approximately.
• What we can describe is its values at a moment close to t.
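A quick numeric check (our illustration) of why the naive "values are close at each t" notion fails here: no matter how small ε is, the step and almost-step functions differ by about 1 near t = 0, even though their graphs nearly coincide:

```python
import numpy as np

def step(t):
    return np.where(t < 0, 0.0, 1.0)

def almost_step(t, eps):
    # 0 for t < 0, t / eps on (0, eps), 1 for t >= eps
    return np.clip(t / eps, 0.0, 1.0)

eps = 1e-3
t = np.linspace(-1.0, 1.0, 200001)   # grid fine enough to resolve the ramp
# The largest pointwise gap stays ~1 however small eps is (just past t = 0,
# step already equals 1 while the ramp is still near 0) ...
print(np.max(np.abs(step(t) - almost_step(t, eps))))   # ~1.0
# ... even though the two graphs are visually indistinguishable.
```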

9. Dependence Is Continuous (cont-d)

• In other words, we can say that two functions a_1(t) and a_2(t) are ε-close if:
  – for each t_1, there are ε-close values t_21 and t_22 such that a_1(t_1) is ε-close to a convex combination of a_2(t_21) and a_2(t_22);
  – for each t_2, there are ε-close values t_11 and t_12 such that a_2(t_2) is ε-close to a convex combination of a_1(t_11) and a_1(t_12).
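A brute-force sketch of this definition on a finite grid (the grid and the test functions are our choices; taking the min-to-max range of nearby values is equivalent to allowing convex combinations of two well-chosen points):

```python
import numpy as np

def half_close(a1, a2, ts, eps):
    """One direction of eps-closeness on a grid ts: for each t_1 there must
    be eps-close points t_21, t_22 with a_1(t_1) eps-close to some convex
    combination of a_2(t_21) and a_2(t_22)."""
    for t1 in ts:
        near = ts[np.abs(ts - t1) <= eps]   # candidate points t_21, t_22
        vals = a2(near)
        # Convex combinations of the two extreme points fill [min, max].
        if not (vals.min() - eps <= a1(t1) <= vals.max() + eps):
            return False
    return True

def eps_close(a1, a2, ts, eps):
    return half_close(a1, a2, ts, eps) and half_close(a2, a1, ts, eps)

step = lambda t: np.where(t < 0, 0.0, 1.0)
almost_step = lambda t: np.clip(t / 1e-3, 0.0, 1.0)

ts = np.linspace(-0.1, 0.1, 2001)                    # grid step 1e-4
print(eps_close(step, almost_step, ts, eps=2e-3))    # True: the graphs are
                                                     # close in this sense
```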

10. Additional Requirement

• We consider linear combinations of basic functions.
• So, it does not make sense to have two basic functions that differ only by a constant factor.
• If b_2(t) = C · b_1(t), then there is no need to consider the function b_2(t) at all:
• in each linear combination, we can replace b_2(t) with C · b_1(t).
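A small sketch of how one might enforce this requirement when sampling basic functions into a design matrix (the rank test and the function names are our illustration):

```python
import numpy as np

def drop_proportional(columns, tol=1e-9):
    """Keep only sampled basic functions that are not constant multiples of
    an already-kept one (a pair of nonzero columns is redundant iff the
    rank of the pair is 1 rather than 2)."""
    kept = []
    for col in columns:
        if all(np.linalg.matrix_rank(np.column_stack([k, col]), tol=tol) == 2
               for k in kept):
            kept.append(col)
    return kept

t = np.linspace(0.0, 1.0, 50)
b1 = np.sin(t)
b2 = 3.0 * np.sin(t)    # b2 = C * b1: adds nothing to the linear span
b3 = np.cos(t)
print(len(drop_proportional([b1, b2, b3])))   # 2: b2 is dropped
```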
