Statistical modeling and analysis of neural data (NEU 560, Spring 2018)
Lecture 7: Neural encoding models & maximum likelihood
Jonathan Pillow
Probability leftovers: sampling vs. inference

Sampling ("measurement") goes from model to data; inference ("fitting") goes from data back to the model.
[Figure: 700 samples drawn from a 2D Gaussian model.]
true mean: [0, 0.8]                      sample mean: [-0.05, 0.83]
true cov:  [[1.0, -0.25], [-0.25, 0.3]]  sample cov:  [[0.95, -0.23], [-0.23, 0.29]]
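A minimal numpy sketch of this sampling-then-inference loop, using the numbers on the slide (the random seed, sample size handling, and exact sample values are illustrative; your draws will differ):

import numpy as np

rng = np.random.default_rng(0)

# the model: a 2D Gaussian with the parameters shown on the slide
true_mean = np.array([0.0, 0.8])
true_cov = np.array([[1.0, -0.25],
                     [-0.25, 0.3]])

# sampling ("measurement"): model -> data
samples = rng.multivariate_normal(true_mean, true_cov, size=700)

# inference ("fitting"): data -> estimated model parameters
sample_mean = samples.mean(axis=0)
sample_cov = np.cov(samples, rowvar=False)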
Estimation

measured dataset ("population response"): x = {r_1, r_2, ..., r_n}, where r_i is the spike count of neuron i
model with parameter θ (the "stimulus"): p(x | θ)

An estimator is a function f: x → θ̂
• often we will write θ̂(x), or just θ̂
Properties of an estimator

"expected" value: E[θ̂], the average of θ̂(x) over draws of the dataset x
bias: b(θ̂) = E[θ̂] − θ
• "unbiased" if bias = 0
variance: var(θ̂) = E[(θ̂ − E[θ̂])²]
• "consistent" if bias and variance both go to zero asymptotically (as the dataset grows)

Q: what is the bias of the estimator θ̂(x) = 7 (i.e., the estimate is 7 for all datasets x)?
Q: what is the variance of that estimator?
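A small simulation sketch for checking bias and variance of an estimator empirically; the sample-mean estimator and the Gaussian data model used here are illustrative assumptions, not part of the slide:

import numpy as np

rng = np.random.default_rng(0)
true_theta = 2.0
n_repeats, n_samples = 10_000, 50

def estimator(x):
    return x.mean()   # the sample mean; swap in any other function of the data

# draw many datasets, apply the estimator to each
estimates = np.array([estimator(rng.normal(true_theta, 1.0, size=n_samples))
                      for _ in range(n_repeats)])

bias = estimates.mean() - true_theta   # approximates E[theta_hat] - theta
variance = estimates.var()             # approximates E[(theta_hat - E[theta_hat])^2]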
The neural coding problem

stimuli x → spike trains y   ("encoding model")

Q: what is the probabilistic relationship between stimuli and spike trains?
Today: single-neuron encoding

stimuli x → spike trains y, with "encoding model" P(y_i | x, θ)

Question: what criteria should we use for picking a model?
Model desiderata

[Figure: spectrum of models, from linear-Gaussian, to GLM, to multi-compartment Hodgkin-Huxley.]
• fittability / tractability: can the model be fit to data? (favors simple models like the linear-Gaussian)
• richness / flexibility: does it capture realistic neural properties? (favors detailed models like multi-compartment Hodgkin-Huxley)
The GLM sits at the "sweet spot" between these two criteria.
Example 1: linear Poisson neuron

encoding model: spike rate λ = θx (parameter θ, stimulus x); spike count y | x ~ Poiss(θx), so
P(y | x, θ) = (θx)^y e^(−θx) / y!
(For a Poisson distribution, mean = variance.)
[Figure: Poisson spike-count distributions over counts 0–10.]
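A sketch of simulating responses from this model; the value of θ, the contrast range, and the number of trials are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
theta = 1.5                          # gain parameter (assumed value)
x = rng.uniform(0, 40, size=200)     # stimulus contrasts
rate = theta * x                     # spike rate, linear in the stimulus
y = rng.poisson(rate)                # spike counts: mean = variance = theta * x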
[Figure: conditional distribution p(y|x). x axis: stimulus contrast (0–40); y axis: spike count (0–60). Vertical slices show p(y|x) at several contrasts; for the Poisson model the spread of each slice grows with its mean.]
Maximum Likelihood Estimation

• given observed data X = {x_1, ..., x_n} (all stimuli) and Y = {y_1, ..., y_n} (all spike counts), find the parameters θ that maximize
P(Y | X, θ) = ∏_i P(y_i | x_i, θ)   (product of single-trial probabilities)

Q: what assumption are we making about the responses?
A: conditional independence across trials!

Q: when do we call P(Y | X, θ) a likelihood?
A: when considering it as a function of θ!
Maximum Likelihood Estimation

• given observed data, find the θ that maximizes P(Y | X, θ)
[Figure: data points (contrast, spike count) with the model's conditional distribution p(y|x) overlaid for a candidate θ.]
• could in theory do this by turning a knob: adjust θ by hand and check how probable the data are under each setting
Likelihood function: P(Y | X, θ) as a function of θ

Because the data are conditionally independent:
L(θ) = ∏_i P(y_i | x_i, θ)
log L(θ) = Σ_i log P(y_i | x_i, θ)   (log-likelihood)

[Figure: likelihood and log-likelihood plotted over θ ∈ [0, 2]; both peak at the same θ.]
[Figure: log-likelihood over θ ∈ [0, 2], with the maximum marked.]
Do it: solve for θ̂_ML.
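A worked version of this step, under the linear-Poisson model y_i ~ Poiss(θ x_i) from Example 1:

\log L(\theta) = \sum_i \left[ y_i \log(\theta x_i) - \theta x_i - \log(y_i!) \right]

\frac{d}{d\theta} \log L(\theta) = \sum_i \left[ \frac{y_i}{\theta} - x_i \right] = 0
\quad \Longrightarrow \quad
\hat{\theta}_{\mathrm{ML}} = \frac{\sum_i y_i}{\sum_i x_i}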
[Figure: log-likelihood over θ ∈ [0, 2].]
• Closed-form solution when the model is in the "exponential family"
Properties of the MLE (maximum likelihood estimator)
• consistent: converges to the true θ in the limit of infinite data
• efficient: converges as quickly as possible, i.e., achieves the minimum possible asymptotic error
Example 2: linear Gaussian neuron

encoding model: spike rate = θx (parameter θ, stimulus x); spike count y | x ~ N(θx, σ²), i.e., y = θx + ε with Gaussian noise ε ~ N(0, σ²)
[Figure: encoding distribution for the linear Gaussian neuron. x axis: stimulus contrast (0–40); y axis: spike count (0–60). All slices p(y|x) have the same width (constant variance σ²).]
Log-likelihood: log L(θ) = −(1 / 2σ²) Σ_i (y_i − θx_i)² + const

Do it: differentiate with respect to θ, set to zero, and solve.
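One way to carry this out, assuming the linear-Gaussian model y_i = θx_i + ε_i above:

\frac{d}{d\theta} \log L(\theta) = \frac{1}{\sigma^2} \sum_i x_i \,(y_i - \theta x_i) = 0
\quad \Longrightarrow \quad
\hat{\theta}_{\mathrm{ML}} = \frac{\sum_i x_i y_i}{\sum_i x_i^2}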
Maximizing the log-likelihood gives the maximum-likelihood estimator:
θ̂_ML = (Σ_i x_i y_i) / (Σ_i x_i²)   (in vector form, (XᵀX)⁻¹XᵀY: the "least squares regression" solution)
(Recall that for the Poisson model, θ̂_ML = (Σ_i y_i) / (Σ_i x_i).)
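A minimal numpy sketch checking both closed-form estimators on simulated data; the value of θ, the noise level, and the contrast range are assumed for illustration:

import numpy as np

rng = np.random.default_rng(0)
theta_true = 1.5
x = rng.uniform(0, 40, size=200)

# linear-Gaussian data and its MLE (least squares)
y_gauss = theta_true * x + rng.normal(0, 5.0, size=x.size)
theta_hat_gauss = np.sum(x * y_gauss) / np.sum(x ** 2)

# linear-Poisson data and its MLE
y_poiss = rng.poisson(theta_true * x)
theta_hat_poiss = np.sum(y_poiss) / np.sum(x)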
Example 3: unknown neuron

[Figure: scatter of spike counts vs. contrast; x axis: contrast (−25 to 25); y axis: spike count (0–100).]

Be the computational neuroscientist: what model would you use?

More general setup: y | x ~ Poiss(f(θx)) for some nonlinear function f.
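A sketch of fitting this more general model numerically, assuming f = exp and a scalar θ (both assumptions for illustration; the simulated values below are not the data in the figure):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

f = np.exp                              # assumed nonlinearity

# simulate data from y | x ~ Poiss(f(theta * x))
theta_true = 0.15                       # assumed value
x = rng.uniform(-25, 25, size=300)
y = rng.poisson(f(theta_true * x))

def neg_log_lik(theta):
    rate = f(theta * x)
    # negative Poisson log-likelihood, dropping the constant sum(log y_i!)
    return -np.sum(y * np.log(rate) - rate)

# numerical maximization of the log-likelihood (no closed form in general)
theta_hat = minimize(neg_log_lik, x0=np.array([0.1])).x[0]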
Quick Quiz: the distribution P(y | x, θ) can be considered as a function of y (spikes), x (stimulus), or θ (parameters).

What is P(y | x, θ):
1. as a function of y? Answer: the encoding distribution - a probability distribution over spike counts
2. as a function of θ? Answer: the likelihood function - the probability of the data given the model parameters
3. as a function of x? Answer: the stimulus likelihood function - useful for ML stimulus decoding!
[Figure: stimulus decoding. Given an observed spike count y, the likelihood P(y | x, θ) plotted as a function of the stimulus x (contrast, 0–40) peaks at the most likely stimulus; y axis: spike count (0–60).]
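A minimal sketch of ML stimulus decoding under the linear-Poisson model from Example 1; θ and the observed count are assumed values, and a grid search stands in for an analytical solution:

import numpy as np
from scipy.stats import poisson

theta = 1.5                              # assumed, already-fit model parameter
y_obs = 45                               # observed spike count on one trial

# evaluate P(y_obs | x, theta) as a function of the stimulus x
x_grid = np.linspace(0.1, 40, 400)
stim_likelihood = poisson.pmf(y_obs, mu=theta * x_grid)

x_ml = x_grid[np.argmax(stim_likelihood)]    # ML stimulus decode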