Inferring Inference
Xaq Pitkow, Rajkumar Vasudeva Raju
Part of the MICrONS project with Tolias, Bethge, Patel, Zemel, Urtasun, Xu, Siapas, Paninski, Baraniuk, Reid, Seung
NICE workshop 2017
Hypothesis: The brain approximates probabilistic inference over a probabilistic graphical model, using a message-passing algorithm implicit in population dynamics. [Figure: world ↔ brain match]
What algorithms can we learn from the brain?
• Architectures? cortex, hippocampus, cerebellum, basal ganglia, …
• Transformations? nonlinear dynamics from population responses
• Learning rules? short- and long-term plasticity
Principles → Details:
• Probabilistic → graphical models
• Nonlinear → message-passing inference
• Distributed → multiplexed across neurons
Events in the world can cause many neural responses, and neural responses can be caused by many events. So neural computation is inevitably statistical. This provides us with mathematical predictions.
Why does it matter whether processing is linear or nonlinear? If all computation were linear, we wouldn't need a brain. [Figure: apples vs. oranges, linearly vs. nonlinearly separable]
Two sources of nonlinearities:
• Relationships between latent variables, e.g. Image = Light × Reflectance (I = L × R)
• Relationships between uncertainties: posteriors generally have nonlinear dependencies, even for the simplest variables
Product rule: p(x, y) = p(x) · p(y | x)
Sum rule (in log space): L(x) = log Σ_y exp L(x, y)
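The sum rule is linear in probabilities but becomes a log-sum-exp, a nonlinear operation, in log space. A minimal sketch with toy numbers (not from the talk):

```python
import numpy as np

# Marginalizing a joint log-likelihood L(x, y) over y is a log-sum-exp:
# nonlinear in log space, even though p(x) = sum_y p(x, y) is linear.

def marginalize_log(L_xy):
    """Log-marginal L(x) = log sum_y exp L(x, y), computed stably."""
    m = L_xy.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(L_xy - m).sum(axis=1, keepdims=True))).ravel()

rng = np.random.default_rng(0)
L_xy = rng.normal(size=(4, 3))        # toy joint log-likelihood over (x, y)

L_x = marginalize_log(L_xy)

# Agrees with direct summation in probability space.
p_x = np.exp(L_xy).sum(axis=1)
assert np.allclose(np.exp(L_x), p_x)

# Nonlinearity: log-sum-exp of a sum is not the sum of log-sum-exps.
L2 = rng.normal(size=(4, 3))
assert not np.allclose(marginalize_log(L_xy + L2),
                       marginalize_log(L_xy) + marginalize_log(L2))
```

The max-subtraction trick keeps the exponentials in range, which matters whenever log-likelihoods are large in magnitude.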
Probabilistic graphical models simplify the joint distribution p(x | r) by specifying how variables interact:
p(x | r) ∝ ∏_α ψ_α(x_α)
[Figure: factor graph with variables x1, x2, x3 connected to factor ψ123]
Example: pairwise Markov random field over x1, x2, x3, with singleton couplings J1, J2, J3 and pairwise couplings J12, J23.
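For three binary variables this model is small enough to enumerate exactly. A brute-force sketch with illustrative coupling values (not from the talk):

```python
import itertools
import numpy as np

# Toy pairwise Markov random field on a 3-variable chain x1 - x2 - x3,
# x_i in {-1, +1}, with singleton couplings J_i and pairwise couplings
# J_12, J_23 (values here are illustrative).
J = np.array([0.5, -0.2, 0.1])          # J_1, J_2, J_3
Jp = {(0, 1): 0.8, (1, 2): -0.6}        # J_12, J_23

def log_psi(x):
    """Unnormalized log-probability: sum of singleton and pairwise factors."""
    return J @ x + sum(w * x[i] * x[j] for (i, j), w in Jp.items())

# Brute-force enumeration of the joint distribution (feasible for 3 spins).
states = np.array(list(itertools.product([-1, 1], repeat=3)))
logp = np.array([log_psi(x) for x in states])
p = np.exp(logp - logp.max())
p /= p.sum()

marginal_x1 = p[states[:, 0] == 1].sum()   # P(x1 = +1)
```

Enumeration costs 2^N, which is exactly why the brain (and everyone else) needs approximate message-passing instead.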
Approximate inference by message-passing:
• Localize information so it is actionable
• Summarize the statistics relevant for targets
• Send that information along the graph
• Iteratively update factors with new information
Example message-passing algorithms • Mean-field (assumes variables are independent) • Belief propagation (assumes tree graph) • Expectation propagation (updates parametric posterior) • … • Brain’s clever tricks?
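As one concrete instance, mean-field inference on the pairwise MRF above reduces to a local fixed-point iteration. This sketch uses illustrative couplings and the standard binary mean-field update m_i ← tanh(J_i + Σ_j J_ij m_j):

```python
import numpy as np

# Mean-field message passing for a binary pairwise MRF on the chain
# x1 - x2 - x3 (couplings are illustrative). Each variable keeps a
# mean m_i, summarizes its neighbors' beliefs, and updates locally:
# m_i <- tanh(J_i + sum_j J_ij m_j).
J = np.array([0.5, -0.2, 0.1])          # singleton couplings
W = np.zeros((3, 3))
W[0, 1] = W[1, 0] = 0.8                 # J_12
W[1, 2] = W[2, 1] = -0.6                # J_23

m = np.zeros(3)
for _ in range(500):
    m = np.tanh(J + W @ m)              # fixed-point iteration
```

Each update only touches a node and its graph neighbors, which is what makes the algorithm plausible as a distributed neural computation.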
Spatial representation of uncertainty (e.g. probabilistic population codes, PPCs): a pattern of activity across neurons represents a probability distribution, and more spikes generally means more certainty. For a Gaussian posterior p(x | r), the decoded mean and variance are ratios of linear projections of the response, µ = (b·r)/(a·r) and σ² = 1/(a·r). (Ma, Beck, Latham, Pouget 2006, etc.)
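A sketch of this decoding rule, assuming a Gaussian posterior whose mean and variance are ratios of linear projections of the spike-count vector (the readout vectors a and b below are illustrative, not from Ma et al. 2006):

```python
import numpy as np

# Linear probabilistic population code (assumed Gaussian posterior):
# mean and variance are ratios of linear projections of the spike
# counts r, so scaling r up (more spikes) leaves the decoded mean
# fixed but shrinks the variance: more spikes, more certainty.
n = 50
prefs = np.linspace(-5, 5, n)        # preferred stimuli of the neurons
a = np.ones(n)                       # certainty readout (assumed)
b = prefs                            # mean readout weights (assumed)

r = 10 * np.exp(-0.5 * (prefs - 1.0) ** 2)      # tuning-curve bump at x = 1
mu, var = (b @ r) / (a @ r), 1.0 / (a @ r)

r2 = 4 * r                            # quadruple the spike counts
mu2, var2 = (b @ r2) / (a @ r2), 1.0 / (a @ r2)

assert np.isclose(mu, mu2)            # same decoded mean
assert np.isclose(var2, var / 4)      # four times the certainty
```

Because both µ and σ² are built from linear projections, downstream circuits can read out the posterior without ever representing it explicitly.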
Neural dynamics embed message-passing updates.
[Figure: network for the pairwise MRF (couplings J1, J2, J3, J12, J23): singleton populations r1, r2, r3 and pairwise populations r12, r23, joined by linear and nonlinear connections]
Neural interactions, via the neural encoding, implement interactions between the encoded information: probability distributions over task variables (example: orientation).
Network activity can implicitly perform inference. [Figure: simulated activity of N neurons over time; inferred mean and variance parameters track the true parameters, shown for 1 and 5 parameters, no noise] (Raju and Pitkow 2016)
Inferring inference from a simulated brain: encode the inferred variables b(t) as neural activity r(t), decode the task variables over time, then fit the message-passing interactions and parameters within a parametric family.
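The pipeline can be caricatured in a few lines. Everything below (linear encoding, tanh dynamics, least-squares fits) is an assumption for illustration, not the actual method of Raju and Pitkow:

```python
import numpy as np

# Caricature of the "inferring inference" pipeline:
# 1) simulate latent message-passing dynamics b(t+1) = tanh(G b(t)) + noise,
# 2) encode them linearly into "neural" activity r(t) = E b(t),
# 3) decode b(t) back from r(t) by least squares,
# 4) fit the update parameters G from the decoded trajectory.
rng = np.random.default_rng(2)
d, n, T = 3, 20, 300
G_true = 0.9 * rng.normal(size=(d, d)) / np.sqrt(d)
E = rng.normal(size=(n, d))

b = np.zeros((T, d))
for t in range(T - 1):
    b[t + 1] = np.tanh(G_true @ b[t] + 0.3 * rng.normal(size=d))
r = b @ E.T                                         # encode

b_dec = r @ np.linalg.pinv(E).T                     # decode (least squares)

# Fit G: arctanh(b[t+1]) = G b[t] + noise is linear in G.
X = b_dec[:-1]
Y = np.arctanh(np.clip(b_dec[1:], -0.999999, 0.999999))
G_fit = np.linalg.lstsq(X, Y, rcond=None)[0].T
```

With a known nonlinearity the fit reduces to linear regression; the harder, interesting problem in the talk is discovering the nonlinearity and graph structure themselves.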
Recovery results for the simulated brain: the learnt interactions J_ij and message-passing parameters G_αβγ are plotted against the true values.
Analysis reveals a degenerate family of equivalent algorithms. [Figure: mean squared error landscape over two directions in parameter space, with degenerate valleys of equivalent solutions around the global minimum and a local minimum]
From simulated neural data we have recovered:
• how variables are encoded → the representation
• which variables interact → the graphical model
• how they interact, and how the interactions are used → the message-passing algorithm
Applying message-passing to novel tasks: take the message-passing nonlinearity learnt from the brain's neural network, then either apply it to a new graphical model structure or relax it into a novel neural network.
Next up: applying these methods to real brains. Stimulus: orientation field; recordings: V1 responses* (*not to the same stimulus; recordings from the Tolias lab).
mementos:
• Neurons can perform inference implicitly in a graphical model distributed across a population.
• New method to discover message-passing algorithms by modeling transformations of decoded task variables.
acknowledgements (xaqlab.com)
collaborators: Alex Pouget, Jeff Beck, Dora Angelaki, Andreas Tolias, Jacob Reimer, Rajkumar Vasudeva Raju, Fabian Sinz, Alex Ecker, Kaushik Lakshminarasimhan, Ankit Patel, Qianli Yang, Emin Orhan, Aram Giahi-Saravani, KiJung Yoon, James Bridgewater, Zhengwei Wu, Saurabh Daptardar