Statistical modeling and analysis of neural data (NEU 560, Spring 2018), Lecture 7: Neural encoding models & maximum likelihood. Jonathan Pillow.


  1. Statistical modeling and analysis of neural data. NEU 560, Spring 2018, Lecture 7: Neural encoding models & maximum likelihood. Jonathan Pillow.

  2. Probability leftovers: sampling vs. inference. Sampling (measurement) goes from model to data; inference ("fitting") goes from data back to model. Example with 700 samples: true mean [0, 0.8] vs. sample mean [-0.05, 0.83]; true covariance [[1.0, -0.25], [-0.25, 0.3]] vs. sample covariance [[0.95, -0.23], [-0.23, 0.29]].
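
The sampling-vs-inference loop on this slide can be sketched in a few lines of NumPy (the mean and covariance are the slide's values; the random seed is an arbitrary choice):

```python
import numpy as np

# Sampling (measurement): draw 700 points from a 2D Gaussian
# with the known mean and covariance from the slide.
rng = np.random.default_rng(0)
true_mean = np.array([0.0, 0.8])
true_cov = np.array([[1.0, -0.25],
                     [-0.25, 0.3]])
samples = rng.multivariate_normal(true_mean, true_cov, size=700)

# Inference ("fitting"): recover the parameters from the data.
sample_mean = samples.mean(axis=0)
sample_cov = np.cov(samples, rowvar=False)
```

With 700 samples the fitted mean and covariance land close to, but not exactly on, the true values, just as on the slide.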

  3. Estimation. Measured dataset ("population response"): x = {r_1, r_2, ..., r_n}, where r_i is the spike count of neuron i; a model p(x | θ) with parameter θ. An estimator is a function f: x → θ̂; we will often write θ̂(x), or just θ̂.

  4. Properties of an estimator. Expected value E[θ̂]: the average over draws of x. Bias: E[θ̂] − θ; the estimator is "unbiased" if bias = 0. Variance: E[(θ̂ − E[θ̂])²]. The estimator is "consistent" if bias and variance both go to zero asymptotically. Q: what is the bias of the estimator θ̂(x) = 7 (i.e., the estimate is 7 for all datasets x)? Q: what is the variance of that estimator?
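
A quick empirical check of these definitions, comparing the sample mean against the quiz's constant estimator θ̂(x) = 7 (the true θ = 2.0 and the dataset sizes below are made-up values for illustration):

```python
import numpy as np

# Empirical bias and variance of two estimators of a Gaussian mean theta = 2.0:
# the sample mean, and the constant estimator theta_hat(x) = 7 from the quiz.
rng = np.random.default_rng(1)
theta = 2.0
n_draws, n_samples = 5000, 50
data = rng.normal(theta, 1.0, size=(n_draws, n_samples))

est_mean = data.mean(axis=1)           # sample-mean estimator, one value per dataset
est_const = np.full(n_draws, 7.0)      # always answers 7, regardless of the data

bias_mean = est_mean.mean() - theta    # ~0: the sample mean is unbiased
bias_const = est_const.mean() - theta  # exactly 7 - theta = 5: biased
var_const = est_const.var()            # exactly 0: no variability across datasets
```

The constant estimator has zero variance but a bias of 7 − θ that never shrinks, so it is not consistent.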

  5. The neural coding problem: stimuli x → spike trains y. Q: what is the probabilistic relationship between stimuli and spike trains?

  6. The neural coding problem: stimuli x → spike trains y, linked by an "encoding model". Q: what is the probabilistic relationship between stimuli and spike trains?

  7. Today: single-neuron encoding. Encoding model P(y_i | x, θ): stimuli x → spike trains y. Question: what criteria should we use for picking a model?

  8. Model desiderata. There is a trade-off between fittability / tractability (the model can be fit to data) and richness / flexibility (the model captures realistic neural properties): linear Gaussian models and GLMs sit at the tractable end, multi-compartment Hodgkin-Huxley models at the rich end, with a "sweet spot" in between.

  9. Example 1: linear Poisson neuron. Encoding model: y ∼ Poiss(θx), where y is the spike count, x the stimulus, θ the parameter, and θx the spike rate. For a Poisson distribution, mean = variance.
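
A minimal simulation of the linear Poisson model, checking the mean = variance property (θ and the contrast value below are made up for illustration):

```python
import numpy as np

# Simulate the linear Poisson encoding model y ~ Poiss(theta * x) and check
# the Poisson property mean = variance across repeated trials.
rng = np.random.default_rng(2)
theta = 1.5
x = 10.0                                  # a single stimulus contrast
y = rng.poisson(theta * x, size=100_000)  # repeated trials at that contrast
# Both y.mean() and y.var() should be close to theta * x = 15.
```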

  10. [Figure: the conditional distribution p(y | x), plotted as spike count vs. stimulus contrast.]

  11. [Same figure as slide 10.]

  12. [Same figure as slide 10.]

  13. Maximum Likelihood Estimation: given observed data (all stimuli x_1, ..., x_n and all spike counts y_1, ..., y_n), find the parameters θ that maximize p(y_1, ..., y_n | x_1, ..., x_n, θ) = ∏_i p(y_i | x_i, θ), a product of single-trial probabilities. Q: what assumption are we making about the responses? A: conditional independence across trials!

  14. (Build of slide 13.) Q: when do we call p(y | x, θ) a likelihood? A: when considering it as a function of θ!
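
The conditional-independence assumption is what lets the likelihood factorize into single-trial terms; a small sketch with toy data (the stimuli, counts, and θ below are made up):

```python
import numpy as np
from math import lgamma

# Toy data for the linear Poisson model y ~ Poiss(theta * x).
x = np.array([1.0, 2.0, 3.0, 4.0])   # stimuli
y = np.array([2, 3, 4, 7])           # spike counts
theta = 1.5

def poisson_logpmf(k, lam):
    # log P(k) = k*log(lam) - lam - log(k!)
    return k * np.log(lam) - lam - np.array([lgamma(ki + 1) for ki in k])

per_trial = poisson_logpmf(y, theta * x)  # log p(y_i | x_i, theta)
log_lik = per_trial.sum()                 # log of the product over trials
lik = np.exp(log_lik)                     # the likelihood itself
```

Summing log-probabilities is exactly taking the log of the product of single-trial probabilities, which only makes sense because the trials are conditionally independent.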

  15. Maximum Likelihood Estimation: given observed data, find the θ that maximizes the likelihood. [Figure: p(y | x), spike count vs. contrast.] We could in theory do this by turning a knob that adjusts θ.

  16. [Same as slide 15.]

  17. [Same as slide 15.]

  18. Likelihood function: p(y | x, θ) considered as a function of θ. Because the data are independent: L(θ) = ∏_i p(y_i | x_i, θ). [Figure: likelihood vs. θ.]

  19. Likelihood function: p(y | x, θ) as a function of θ. Because the data are independent: L(θ) = ∏_i p(y_i | x_i, θ). Taking the log turns the product into a sum: log L(θ) = Σ_i log p(y_i | x_i, θ). [Figures: likelihood and log-likelihood vs. θ.]

  20. [Figure: log-likelihood vs. θ.] Do it: set d/dθ log L(θ) = 0 and solve for θ̂.
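
"Turning the knob" amounts to a grid search over θ; for the linear Poisson model this can be compared against the closed-form stationary point θ̂ = (Σ_i y_i) / (Σ_i x_i) (toy data, made up for illustration):

```python
import numpy as np
from math import lgamma

# Toy data for the linear Poisson model y ~ Poiss(theta * x).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2, 3, 4, 7])
log_fact = np.array([lgamma(yi + 1) for yi in y])

def log_lik(theta):
    lam = theta * x
    return np.sum(y * np.log(lam) - lam - log_fact)

# Grid search: evaluate the log-likelihood on a grid and take the argmax.
grid = np.linspace(0.1, 5.0, 1000)
theta_grid = grid[np.argmax([log_lik(t) for t in grid])]

# Closed form: d/dtheta log L = sum(y)/theta - sum(x) = 0  =>  theta = sum(y)/sum(x).
theta_closed = y.sum() / x.sum()
```

The grid maximizer agrees with the closed-form solution up to the grid spacing.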

  21. [Figure: log-likelihood vs. θ.] A closed-form solution exists when the model is in the "exponential family".

  22. Properties of the MLE (maximum likelihood estimator): consistent (converges to the true θ in the limit of infinite data) and efficient (converges as quickly as possible, i.e., achieves the minimum possible asymptotic error).

  23. Example 2: linear Gaussian neuron. Encoding model: y ∼ N(θx, σ²), where y is the spike count, x the stimulus, θ the parameter, and θx the spike rate.

  24. [Figure: the encoding distribution, spike count vs. contrast.] All slices have the same width: the variance does not grow with the mean, unlike the Poisson case.

  25. Log-likelihood: log L(θ) = −(1/(2σ²)) Σ_i (y_i − θx_i)² + const. Do it: differentiate, set to zero, and solve.

  26. Log-likelihood: log L(θ) = −(1/(2σ²)) Σ_i (y_i − θx_i)² + const. Maximum-likelihood estimator: θ̂ = (Σ_i x_i y_i) / (Σ_i x_i²), the "least squares regression" solution. (Recall that for the linear Poisson model, θ̂ = (Σ_i y_i) / (Σ_i x_i).)
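
A quick numerical check of the least-squares MLE formula against a generic solver (θ, the noise level, and the stimuli below are made-up values):

```python
import numpy as np

# For the linear Gaussian model y ~ N(theta * x, sigma^2), the MLE is the
# least-squares solution theta_hat = sum(x*y) / sum(x^2).
rng = np.random.default_rng(3)
theta_true = 2.0
x = np.linspace(1.0, 10.0, 200)
y = theta_true * x + rng.normal(0.0, 1.0, size=x.size)

theta_hat = np.sum(x * y) / np.sum(x * x)

# Same answer from a generic least-squares solver:
theta_lstsq = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
```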

  27. Example 3: unknown neuron. [Figure: measured spike counts vs. contrast.] Be the computational neuroscientist: what model would you use?

  28. Example 3: unknown neuron. [Figure: measured spike counts vs. contrast.] More general setup: the mean response is f(θx) for some nonlinear function f.
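
One sketch of fitting this more general model, under two assumptions the slide leaves open: Poisson spiking and the common choice f = exp. With f = exp the one-dimensional log-likelihood is concave in θ, so Newton's method converges quickly (θ and the stimulus distribution are made up):

```python
import numpy as np

# Assumed model (not specified on the slide): y ~ Poiss(exp(theta * x)).
rng = np.random.default_rng(4)
theta_true = 0.3
x = rng.uniform(-2.0, 2.0, size=2000)
y = rng.poisson(np.exp(theta_true * x))

# Newton's method on the log-likelihood sum(y*theta*x - exp(theta*x)) + const.
theta = 0.0
for _ in range(20):
    lam = np.exp(theta * x)
    grad = np.sum(x * (y - lam))    # first derivative w.r.t. theta
    hess = -np.sum(x * x * lam)     # second derivative: always negative (concave)
    theta -= grad / hess            # Newton update
```

The exponential nonlinearity is only one choice; any fixed f changes the gradient and Hessian but not the overall recipe.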

  29. Quick Quiz: the distribution P(y | x, θ) can be considered as a function of y (spikes), x (stimulus), or θ (parameters). What is P(y | x, θ):
  1. as a function of y? Answer: the encoding distribution, a probability distribution over spike counts.
  2. as a function of θ? Answer: the likelihood function, the probability of the data given the model parameters.
  3. as a function of x? Answer: the stimulus likelihood function, useful for ML stimulus decoding!

  30. [Figure: stimulus decoding, the likelihood of y as a function of x (contrast), for a fixed observed spike count.]
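
Stimulus decoding flips the roles: fix an observed count and maximize the likelihood over the contrast x. A sketch for the linear Poisson model (θ, the observed count, and the grid are made up; for y ∼ Poiss(θx) the maximizer is x̂ = y/θ):

```python
import numpy as np

# ML stimulus decoding: treat p(y_obs | x, theta) as a function of x.
theta = 1.5
y_obs = 30

def log_lik_x(x):
    lam = theta * x
    return y_obs * np.log(lam) - lam   # log p(y_obs | x, theta), up to a constant

grid = np.linspace(0.5, 40.0, 4000)
x_hat = grid[np.argmax(log_lik_x(grid))]   # should land near y_obs / theta = 20
```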
