Learning multisensory integration with stochastic variational learning in recurrent spiking networks
Presented by: Sharbatanu Chatterjee 1,2
Guided by: Aditya Gilra 1 & Johanni Brea 1
1 Laboratory of Computational Neuroscience, EPFL, Switzerland
2 Department of Computer Science & Engineering, IIT Kanpur, India
22 August 2015
Contents
1 Multisensory Integration
2 Stochastic Variational Learning Model
3 Results
4 Further Work
Figure: Architecture for multisensory integration (Sabes et al., 2013)
Figure: Regenerating the model from the learnt hidden neurons (Sabes et al., 2013)
Figure: An autoencoder
Figure: A schematic of our model
Figure: Architecture of Danilo's model (Danilo et al., 2014)
Danilo's model
Figure: The neuron model (Danilo et al., 2014)
The log-likelihood of seeing the spikes

The probability of producing a spike train X_i(t) is determined by the instantaneous firing rate \rho_i(t). The log-likelihood is:

\log p(X(0 \ldots T)) = \int_0^T \sum_{i \in V \cup H} \left[ \log \rho_i(\tau) \, X_i(\tau) - \rho_i(\tau) \right] \mathrm{d}\tau   (1)

The marginalized log-likelihood of the visible neurons:

p(X_V) = \int \mathcal{D}X_H \, p(X_V, X_H)   (2)
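For concreteness, here is a minimal sketch (my own illustration, not from the presentation) of how the log-likelihood of Eq. (1) can be evaluated on a time-discretized spike train; the bin width dt, the rate array rho and the binary spike array X are assumptions about the discretization.

    import numpy as np

    def spike_train_log_likelihood(X, rho, dt):
        """Discretized Eq. (1): sum over bins and neurons of
        X_i(t) * log(rho_i(t)) - rho_i(t) * dt."""
        rho = np.clip(rho, 1e-12, None)          # avoid log(0) in silent bins
        return np.sum(X * np.log(rho) - rho * dt)

    # toy usage: 2 neurons firing at a constant 5 Hz for 1 s at 1 ms resolution
    rng = np.random.default_rng(0)
    dt = 1e-3
    rho = np.full((1000, 2), 5.0)
    X = (rng.random((1000, 2)) < rho * dt).astype(float)
    print(spike_train_log_likelihood(X, rho, dt))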
An ML approach to the problem

We use another distribution q to approximate the posterior:

KL(q \| p) = \int \mathcal{D}X_H \, q(X_H | X_V) \log \frac{q(X_H | X_V)}{p(X_H | X_V)}   (3)
           = \int \mathcal{D}X_H \, q(X_H | X_V) \log \frac{q(X_H | X_V)}{p(X_H, X_V)} + \log p(X_V)

The first term is the Helmholtz free energy F.
We use another distribution q to approximate the posterior:

0 \le KL(q \| p) = \int \mathcal{D}X_H \, q(X_H | X_V) \log \frac{q(X_H | X_V)}{p(X_H | X_V)} = F + \log p(X_V)   (4)

The first term is the Helmholtz free energy F. Since the KL divergence is non-negative,

F + \log p(X_V) \ge 0   (5)

\log p(X_V) \ge -F   (6)

The problem thus reduces to minimising the free energy with respect to both the approximating distribution q and the original model p.
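To make the bound concrete, here is a hedged sketch (an illustration under stated assumptions, not the implementation used in this work) of a Monte-Carlo estimate of F = E_q[log q(X_H|X_V) - log p(X_H, X_V)]; sample_hidden, log_q and log_p are hypothetical callables supplied by the recognition and generative models.

    import numpy as np

    def free_energy_estimate(x_visible, sample_hidden, log_q, log_p, n_samples=100):
        """Monte-Carlo estimate of F = E_q[ log q(X_H|X_V) - log p(X_H, X_V) ].
        Because KL(q || p(.|X_V)) >= 0, -F lower-bounds log p(X_V) (Eq. 6)."""
        samples = [sample_hidden(x_visible) for _ in range(n_samples)]
        return np.mean([log_q(x_h, x_visible) - log_p(x_h, x_visible)
                        for x_h in samples])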
Figure: The neuron model
The final weight updates are simple gradient descent on the free energy:

\dot{w}^M_{ij} = -\mu_M \, \nabla_{w^M_{ij}} F   (7)

\dot{w}^Q_{ij} = -\mu_Q \, \nabla_{w^Q_{ij}} F   (8)
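In a discrete-time simulation, Eqs. (7)-(8) amount to an Euler step along the free-energy gradient; the snippet below is a generic sketch of that step (the evaluated gradients themselves are the eligibility-based expressions on the next slide).

    def gradient_descent_step(w, grad_F_w, mu, dt):
        """One Euler step of w_dot = -mu * dF/dw (Eqs. 7-8)."""
        return w - mu * grad_F_w * dt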
Final equations

\dot{w}^M_{ij} = \mu_M \, H^M_{ij}(t)   (9)

\dot{w}^Q_{ij} = -\mu_Q \, e_N(t) \, H^Q_{ij}(t)   (10)

H^{Q/M}_{ij}(t) = \left( X_i - \rho^{Q/M}_i \right) * \phi_j   (11)

e_N(t) = \hat{F} - \bar{F}   (12)
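A sketch of how Eqs. (9)-(12) might be applied within one simulation time step, under my simplified reading of Eq. (11): the postsynaptic error X_i - rho_i is paired with a presynaptic trace phi_j (a filtered presynaptic spike train), and the recognition-weight update is additionally gated by the global error e_N. The discretization and array shapes are assumptions, not the presenter's code.

    import numpy as np

    def step_weight_updates(w_M, w_Q, X, rho_M, rho_Q, phi, e_N, mu_M, mu_Q, dt):
        """One Euler step of Eqs. (9)-(10).
        X     : (N,) spikes in this bin (0/1)
        rho_M : (N,) generative rates,  rho_Q : (N,) recognition rates
        phi   : (N,) presynaptic traces (low-pass filtered spike trains)
        e_N   : scalar global error, Eq. (12)"""
        H_M = np.outer(X - rho_M * dt, phi)      # simplified Eq. (11), generative
        H_Q = np.outer(X - rho_Q * dt, phi)      # simplified Eq. (11), recognition
        w_M = w_M + mu_M * H_M * dt              # Eq. (9): no global factor
        w_Q = w_Q - mu_Q * e_N * H_Q * dt        # Eq. (10): gated by e_N
        return w_M, w_Q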
Global signal

e_N(t) = \hat{F} - \bar{F}   (13)

\hat{F} = \int F_\tau \, \mathrm{d}\tau   (14)

F_\tau = F_Q - F_M   (15)

F_Q = \sum_{i \in H} \left[ \log \rho^Q_i(\tau) \, X_i(\tau) - \rho^Q_i(\tau) \right]   (16)

F_M = \sum_{i \in V \cup H} \left[ \log \rho^M_i(\tau) \, X_i(\tau) - \rho^M_i(\tau) \right]   (17)
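A hedged sketch of how the global signal of Eqs. (13)-(17) could be accumulated online; treating the baseline F_bar as a slow running average of F_hat is my assumption, since the slide only names the two quantities.

    import numpy as np

    def instantaneous_free_energy(X, rho_Q, rho_M, hidden_idx, dt):
        """F_tau = F_Q - F_M (Eqs. 15-17) in one time bin:
        F_Q sums over hidden neurons, F_M over visible and hidden neurons."""
        def ll(rho, spikes):
            r = np.clip(rho, 1e-12, None)
            return np.sum(spikes * np.log(r) - r * dt)
        F_Q = ll(rho_Q[hidden_idx], X[hidden_idx])
        F_M = ll(rho_M, X)
        return F_Q - F_M

    class GlobalError:
        """e_N(t) = F_hat - F_bar (Eq. 13), with F_hat accumulated as in Eq. (14)
        and F_bar kept as a slow running average of F_hat (an assumption)."""
        def __init__(self, tau_bar=10.0):
            self.F_hat = 0.0
            self.F_bar = 0.0
            self.tau_bar = tau_bar
        def update(self, F_tau, dt):
            self.F_hat += F_tau * dt                               # Eq. (14)
            self.F_bar += (dt / self.tau_bar) * (self.F_hat - self.F_bar)
            return self.F_hat - self.F_bar                         # Eq. (13)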
Results
Figure: 2 neurons, only the M network, no global factor
Results
Figure: 10 neurons, same setting as above
Results
Figure: Entire network, malfunctioning hidden neurons; learning stops halfway
Results
Figure: The full required result; learning stops halfway
Results
Figure: The error terms with constant firing
Results
Figure: The rate \rho being approached with constant firing
Further work and implications
Get my implementation of Danilo's model to function flawlessly.
Sabes' paper does not introduce a temporal factor; try to incorporate one.
Encoding for other key processes of sensory processing: integration of prior information and coordinate transforms.
Thank You!