

  1. Chapter 13. Neurodynamics
     Neural Networks and Learning Machines (Haykin)
     Lecture Notes on Self-learning Neural Algorithms
     Byoung-Tak Zhang
     School of Computer Science and Engineering, Seoul National University
     Version 20171109

  2. Contents
     13.1 Introduction
     13.2 Dynamic Systems
     13.3 Stability of Equilibrium States
     13.4 Attractors
     13.5 Neurodynamic Models
     13.6 Attractors and Recurrent Networks
     13.7 Hopfield Model
     13.8 Cohen-Grossberg Theorem
     13.9 Brain-State-In-A-Box Model
     Summary and Discussion

  3. 13.1 Introduction (1/2)
     • Time plays a critical role in learning. There are two ways in which time manifests itself in the learning process:
       1. A static neural network (NN) is made into a dynamic mapper by stimulating it via a memory structure, short term or long term.
       2. Time is built into the operation of a neural network through the use of feedback.
     • There are two ways of applying feedback in NNs:
       1. Local feedback, which is applied to a single neuron inside the network;
       2. Global feedback, which encompasses one or more layers of hidden neurons, or better still, the whole network.
     • Feedback is a double-edged sword: applied improperly, it can produce harmful effects. In particular, feedback can cause a system that is originally stable to become unstable. Our primary interest in this chapter is the stability of recurrent networks.
     • The subject of neural networks viewed as nonlinear dynamic systems, with particular emphasis on the stability problem, is referred to as neurodynamics.

  4. 13.1 Introduction (2/2)
     • An important feature of the stability (or instability) of a nonlinear dynamic system is that it is a property of the whole system.
       – The presence of stability always implies some form of coordination between the individual parts of the system.
     • The study of neurodynamics may follow one of two routes, depending on the application of interest:
       – Deterministic neurodynamics, in which the neural network model has deterministic behavior, described by a set of nonlinear differential equations (the subject of this chapter).
       – Statistical neurodynamics, in which the neural network model is perturbed by the presence of noise and is described by stochastic nonlinear differential equations, the solution being expressed in probabilistic terms.

  5. 13.2 Dynamic Systems (1/3)
     A dynamic system is a system whose state varies with time. Its state equations are
     $$\frac{d}{dt} x_j(t) = F_j(x_j(t)), \quad j = 1, 2, \ldots, N$$
     or, in vector form,
     $$\frac{d}{dt} \mathbf{x}(t) = \mathbf{F}(\mathbf{x}(t)), \qquad \mathbf{x}(t) = [x_1(t), x_2(t), \ldots, x_N(t)]^T$$
     Figure 13.1 A two-dimensional trajectory (orbit) of a dynamic system.
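To make the state equation concrete, here is a minimal numerical sketch (assuming NumPy; the two-dimensional system, a damped pendulum, is an illustrative choice rather than one from the slides) that integrates $d\mathbf{x}/dt = \mathbf{F}(\mathbf{x})$ with the forward-Euler method and traces a trajectory like the one in Figure 13.1:

```python
import numpy as np

def F(x):
    # Illustrative 2-D system: a damped pendulum, x = [angle, angular velocity]
    return np.array([x[1], -0.5 * x[1] - np.sin(x[0])])

def trajectory(x0, dt=0.01, steps=2000):
    # Forward-Euler integration of dx/dt = F(x); returns the orbit as an array
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(xs[-1] + dt * F(xs[-1]))
    return np.stack(xs)

orbit = trajectory([2.0, 0.0])
print(orbit[-1])  # the state spirals in toward the equilibrium at the origin
```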

  6. 13.2 Dynamic Systems (2/3)
     Figure 13.2 A two-dimensional state (phase) portrait of a dynamic system. Figure 13.3 A two-dimensional vector field of a dynamic system.

  7. 13.2 Dynamic Systems (3/3)
     Lipschitz condition. Let $\|\mathbf{x}\|$ denote the norm, or Euclidean length, of the vector $\mathbf{x}$. Let $\mathbf{x}$ and $\mathbf{u}$ be a pair of vectors in an open set $M$ in a normed vector (state) space. According to the Lipschitz condition, there exists a constant $K$ such that
     $$\|\mathbf{F}(\mathbf{x}) - \mathbf{F}(\mathbf{u})\| \le K \|\mathbf{x} - \mathbf{u}\|$$
     for all $\mathbf{x}$ and $\mathbf{u}$ in $M$.
     Divergence theorem.
     $$\int_S (\mathbf{F}(\mathbf{x}) \cdot \mathbf{n}) \, dS = \int_V (\nabla \cdot \mathbf{F}(\mathbf{x})) \, dV$$
     If the divergence $\nabla \cdot \mathbf{F}(\mathbf{x})$ (which is a scalar) is zero, the system is conservative; if it is negative, the system is dissipative.
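The sign of the divergence can be checked numerically. A rough sketch (assuming NumPy and reusing the illustrative pendulum system from above; the finite-difference step h is an arbitrary choice):

```python
import numpy as np

def F(x):
    # Same illustrative damped pendulum as before
    return np.array([x[1], -0.5 * x[1] - np.sin(x[0])])

def divergence(F, x, h=1e-5):
    # Central-difference estimate of div F = sum_j dF_j/dx_j at the point x
    x = np.asarray(x, dtype=float)
    div = 0.0
    for j in range(len(x)):
        e = np.zeros_like(x)
        e[j] = h
        div += (F(x + e)[j] - F(x - e)[j]) / (2 * h)
    return div

print(divergence(F, [1.0, 0.5]))  # about -0.5 < 0: the flow is dissipative
```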

  8. 13.3 Stability of Equilibrium States (1/6)
     Table 13.1 Definitions of stability of an equilibrium state.
     An equilibrium state $\bar{\mathbf{x}}$ satisfies
     $$\mathbf{F}(\bar{\mathbf{x}}) = \mathbf{0}$$
     Writing $\mathbf{x}(t) = \bar{\mathbf{x}} + \Delta\mathbf{x}(t)$ and linearizing $\mathbf{F}$ about $\bar{\mathbf{x}}$,
     $$\mathbf{F}(\mathbf{x}) \approx \mathbf{F}(\bar{\mathbf{x}}) + \mathbf{A}\,\Delta\mathbf{x}(t) = \mathbf{A}\,\Delta\mathbf{x}(t), \qquad \mathbf{A} = \left.\frac{\partial \mathbf{F}}{\partial \mathbf{x}}\right|_{\mathbf{x} = \bar{\mathbf{x}}}$$
     so that the perturbation evolves approximately as
     $$\frac{d}{dt}\,\Delta\mathbf{x}(t) \approx \mathbf{A}\,\Delta\mathbf{x}(t)$$
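The eigenvalues of A determine the character of the equilibrium (node, focus, saddle, or center, as catalogued in Figure 13.4 on the next slide). A sketch of this check (assuming NumPy; same illustrative pendulum as above):

```python
import numpy as np

def F(x):
    return np.array([x[1], -0.5 * x[1] - np.sin(x[0])])

def jacobian(F, x_bar, h=1e-5):
    # Finite-difference Jacobian A = dF/dx evaluated at the equilibrium x_bar
    x_bar = np.asarray(x_bar, dtype=float)
    n = len(x_bar)
    A = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        A[:, j] = (F(x_bar + e) - F(x_bar - e)) / (2 * h)
    return A

A = jacobian(F, [0.0, 0.0])            # equilibrium: F([0, 0]) = 0
eigs = np.linalg.eigvals(A)
print(eigs)                            # complex pair with negative real parts
print(all(e.real < 0 for e in eigs))   # True -> stable focus (cf. Figure 13.4b)
```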

  9. 13.3 Stability of Equilibrium States (2/6)
     Figure 13.4 (a) Stable node. (b) Stable focus. (c) Unstable node. (d) Unstable focus. (e) Saddle point. (f) Center.

  10. 13.3 Stability of Equilibrium States (3/6)
      Lyapunov's Theorems
      Theorem 1. The equilibrium state $\bar{\mathbf{x}}$ is stable if, in a small neighborhood of $\bar{\mathbf{x}}$, there exists a positive-definite function $V(\mathbf{x})$ such that its derivative with respect to time is negative semidefinite in that region.
      Theorem 2. The equilibrium state $\bar{\mathbf{x}}$ is asymptotically stable if, in a small neighborhood of $\bar{\mathbf{x}}$, there exists a positive-definite function $V(\mathbf{x})$ such that its derivative with respect to time is negative definite in that region.

  11. 13.3 Stability of Equilibrium States (4/6)
      Figure 13.5 Illustration of the notion of uniform stability of a state vector.

  12. 13.3 Stability of Equilibrium States (5/6)
      Requirement: the Lyapunov function $V(\mathbf{x})$ must be a positive-definite function, that is:
      1. $V(\mathbf{x})$ has continuous partial derivatives with respect to the elements of the state $\mathbf{x}$.
      2. $V(\bar{\mathbf{x}}) = 0$.
      3. $V(\mathbf{x}) > 0$ if $\mathbf{x} \in \mathcal{U} - \bar{\mathbf{x}}$, where $\mathcal{U}$ is a small neighborhood around $\bar{\mathbf{x}}$.
      According to Theorem 1,
      $$\frac{d}{dt} V(\mathbf{x}) \le 0 \quad \text{for } \mathbf{x} \in \mathcal{U} - \bar{\mathbf{x}}$$
      According to Theorem 2,
      $$\frac{d}{dt} V(\mathbf{x}) < 0 \quad \text{for } \mathbf{x} \in \mathcal{U} - \bar{\mathbf{x}}$$
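These conditions can be spot-checked numerically. In the sketch below (assuming NumPy), the candidate Lyapunov function is the total energy of the illustrative damped pendulum used earlier; along its trajectories $dV/dt = \nabla V \cdot \mathbf{F} = -0.5\,x_2^2 \le 0$, so Theorem 1 applies and the origin is stable:

```python
import numpy as np

def F(x):
    return np.array([x[1], -0.5 * x[1] - np.sin(x[0])])

def V(x):
    # Candidate Lyapunov function: total energy of the pendulum
    return 0.5 * x[1] ** 2 + (1.0 - np.cos(x[0]))

def V_dot(x):
    # dV/dt along trajectories = grad V(x) . F(x)
    grad = np.array([np.sin(x[0]), x[1]])
    return grad @ F(x)

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(1000, 2))   # samples near x_bar = 0
print(all(V(p) > 0 for p in pts))              # positive definite near 0
print(all(V_dot(p) <= 1e-12 for p in pts))     # negative semidefinite -> Theorem 1
```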

  13. 13.3 Stability of Equilibrium States (6/6)
      Figure 13.6 Lyapunov surfaces for decreasing values of the constant $c$, with $c_1 < c_2 < c_3$. The equilibrium state is denoted by the point $\bar{\mathbf{x}}$.

  14. 13.4 Attractors
      • A $k$-dimensional surface embedded in the $N$-dimensional state space, defined by the set of equations
        $$M_j(x_1, x_2, \ldots, x_N) = 0, \quad j = 1, 2, \ldots, k, \quad k \le N$$
      • These manifolds are called attractors in that they are bounded subsets to which regions of initial conditions of nonzero state-space volume converge as time $t$ increases. Related notions (illustrated in the sketch below):
        - Point attractors
        - Limit cycle
        - Basin (domain) of attraction
        - Separatrix
        - Hyperbolic attractor
      Figure 13.7 Illustration of the notion of a basin of attraction and the idea of a separatrix.
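A one-dimensional toy system makes the basin/separatrix picture concrete. In this sketch (plain Python; the system $dx/dt = x - x^3$ is an illustrative choice, not from the slides), every initial state on one side of the separatrix flows to the point attractor on that side:

```python
def F(x):
    # dx/dt = x - x^3: point attractors at +1 and -1,
    # separatrix at the unstable equilibrium x = 0
    return x - x ** 3

def settle(x0, dt=0.01, steps=5000):
    # Forward-Euler integration until the state settles near an attractor
    x = x0
    for _ in range(steps):
        x += dt * F(x)
    return x

for x0 in (-2.0, -0.1, 0.1, 2.0):
    print(x0, "->", round(settle(x0), 3))
# Initial states left of 0 flow to -1, right of 0 flow to +1:
# the sign of x0 determines the basin of attraction.
```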

  15. 13.5 Neurodynamic Models (1/4)
      • General properties of neurodynamic systems:
        1. A large number of degrees of freedom. The human cortex is a highly parallel, distributed system that is estimated to possess about 10 billion neurons, with each neuron modeled by one or more state variables. The system is characterized by a very large number of coupling constants represented by the strengths (efficacies) of the individual synaptic junctions.
        2. Nonlinearity. A neurodynamic system is inherently nonlinear. In fact, nonlinearity is essential for creating a universal computing machine.
        3. Dissipation. A neurodynamic system is dissipative. It is therefore characterized by the convergence of the state-space volume onto a manifold of lower dimensionality as time goes on.
        4. Noise. Finally, noise is an intrinsic characteristic of neurodynamic systems. In real-life neurons, membrane noise is generated at synaptic junctions.

  16. 13.5 Neurodynamic Models (2/4)
      Figure 13.8 Additive model of a neuron, labeled $j$.

  17. 13.5 Neurodynamic Models (3/4)
      Additive model:
      $$C_j \frac{dv_j(t)}{dt} = -\frac{v_j(t)}{R_j} + \sum_{i=1}^{N} w_{ji} x_i(t) + I_j$$
      with the output nonlinearity
      $$x_j(t) = \varphi(v_j(t))$$
      giving, for the whole network,
      $$C_j \frac{dv_j(t)}{dt} = -\frac{v_j(t)}{R_j} + \sum_{i=1}^{N} w_{ji} x_i(t) + I_j, \quad j = 1, 2, \ldots, N$$
      where $\varphi$ is the logistic function
      $$\varphi(v_j) = \frac{1}{1 + \exp(-v_j)}, \quad j = 1, 2, \ldots, N$$
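A direct forward-Euler simulation of the additive model is straightforward. In this sketch (assuming NumPy), the weights, bias currents, capacitances, and resistances are hypothetical values chosen only for illustration:

```python
import numpy as np

def phi(v):
    # Logistic nonlinearity from the slide
    return 1.0 / (1.0 + np.exp(-v))

def simulate_additive(W, I, C, R, v0, dt=0.01, steps=5000):
    # Forward-Euler integration of
    #   C_j dv_j/dt = -v_j/R_j + sum_i w_ji x_i + I_j,   with x_i = phi(v_i)
    v = np.asarray(v0, dtype=float).copy()
    for _ in range(steps):
        x = phi(v)
        v += dt * (-v / R + W @ x + I) / C
    return v

# Hypothetical 3-neuron network; symmetric weights with zero self-feedback
W = np.array([[ 0.0, 1.0, -1.0],
              [ 1.0, 0.0,  0.5],
              [-1.0, 0.5,  0.0]])
I = np.array([0.1, 0.0, -0.1])   # bias currents I_j
C = np.ones(3)                   # membrane capacitances C_j
R = np.ones(3)                   # leakage resistances R_j
print(simulate_additive(W, I, C, R, v0=np.zeros(3)))  # settles to a fixed point
```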

  18. 13.5 Neurodynamic Models (4/4)
      Related model:
      $$\frac{dv_j(t)}{dt} = -v_j(t) + \sum_i w_{ji}\,\varphi(v_i(t)) + I_j, \quad j = 1, 2, \ldots, N$$
      $$\frac{dx_j(t)}{dt} = -x_j(t) + \varphi\Big(\sum_i w_{ji}\, x_i(t)\Big) + K_j, \quad j = 1, 2, \ldots, N$$
      The two models are related by the linear transformation
      $$v_k(t) = \sum_j w_{kj}\, x_j(t), \qquad I_k = \sum_j w_{kj}\, K_j$$
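The stated relations can be sanity-checked numerically: if $\mathbf{v} = \mathbf{W}\mathbf{x}$ initially and $\mathbf{I} = \mathbf{W}\mathbf{K}$, the two models should track each other. A sketch (assuming NumPy; weights and inputs are arbitrary illustrative values):

```python
import numpy as np

def phi(v):
    return 1.0 / (1.0 + np.exp(-v))

# Hypothetical weights and inputs for a 2-neuron example
W = np.array([[0.0, 1.0],
              [0.5, 0.0]])
K = np.array([0.2, -0.1])
I = W @ K                        # I_k = sum_j w_kj K_j

dt, steps = 0.001, 20000
x = np.array([0.3, -0.2])        # state of the x-model
v = W @ x                        # matched initial state: v_k = sum_j w_kj x_j

for _ in range(steps):
    x = x + dt * (-x + phi(W @ x) + K)  # dx_j/dt = -x_j + phi(sum_i w_ji x_i) + K_j
    v = v + dt * (-v + W @ phi(v) + I)  # dv_j/dt = -v_j + sum_i w_ji phi(v_i) + I_j

print(np.max(np.abs(v - W @ x)))  # stays near 0: the two models track each other
```

Design note: applying W to the x-model equation term by term reproduces the v-model exactly, so with matched initial conditions the discrepancy above stays at floating-point level.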
