Bayesian Method for Repeated Threshold Estimation


  1. Bayesian Method for Repeated Threshold Estimation
Alexander Petrov
Department of Cognitive Sciences, University of California, Irvine
Supported by NIMH and NSF grants to Prof. Barbara Dosher

  2. Motivation: Perceptual Learning
• Non-stationary thresholds
• Dynamics of learning is important
• Must use naïve observers
• Low motivation → high lapsing rates
• Slow learning → many sessions
• Large volume of low-quality binary data
http://www.socsci.uci.edu/~apetrov/

  3. Objective: Data Reduction

  4. Isn’t This a Solved Problem?
• Up/down (Levitt, 1970)
• PEST (Taylor & Creelman, 1967)
• BEST PEST (Pentland, 1980)
• QUEST (Watson & Pelli, 1983)
• ML-Test (Harvey, 1986)
• Ideal (Pelli, 1987)
• YAAP (Treutwein, 1989)
• and many others…

  5. We Solve a Different Problem
• Standard methods:
  – Adaptive stimulus placement
  – Stopping criterion
  – Threshold estimation
• Our method:
  – Threshold estimation
  – Integrate information across blocks

  6. Weibull Psychometric Function

W(x; α, β) = 1 − exp( −exp( β (log x − log α) ) )

P(x; α, β, γ, λ) = γ + (1 − γ − λ) · W(x; α, β)

• Threshold log α
• Slope β
• Guessing rate γ
• Lapsing rate λ

[Figure: P(x) vs. log x, rising from the guessing rate γ to the ceiling 1 − λ, with the threshold log α marked.]
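The two formulas above translate directly into code; a minimal Python/NumPy sketch (the default γ = .5 and λ = .10 follow the talk's simulation settings; natural logs are assumed):

```python
import numpy as np

def weibull(x, alpha, beta):
    """Weibull link: W(x; alpha, beta) = 1 - exp(-exp(beta * (log x - log alpha)))."""
    return 1.0 - np.exp(-np.exp(beta * (np.log(x) - np.log(alpha))))

def p_correct(x, alpha, beta, gamma=0.5, lam=0.10):
    """Full psychometric function: guessing floor gamma, lapsing ceiling 1 - lam."""
    return gamma + (1.0 - gamma - lam) * weibull(x, alpha, beta)
```

At x = α the Weibull term equals 1 − e⁻¹ ≈ .632, so the "threshold" log α sits a fixed fraction of the way between the guessing floor and the lapsing ceiling.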

  7. Two Kinds of Parameters
• Threshold log α — parameter of interest θ
• Slope β, guessing rate γ, lapsing rate λ — nuisance parameters φ

The nuisance parameters are harder to estimate but change more slowly than the threshold parameter.

  8. Get the Best of Both Worlds
Use long data sequences to constrain the nuisance parameters; use short sequences to estimate the thresholds.

  9. Joint Posterior of θ_k, φ

p(θ_k, φ | y_1, …, y_{k−1}, y_k, y_{k+1}, …, y_n) ∝
    p(y_k | θ_k, φ) · p(θ_k) · p(φ) · ∏_{i≠k} ∫ p(θ_i) p(y_i | θ_i, φ) dθ_i

• p(y_k | θ_k, φ) — likelihood of the current data
• p(θ_k), p(φ) — priors
• ∏_{i≠k} ∫ p(θ_i) p(y_i | θ_i, φ) dθ_i — information about φ extracted from the other data sets; combined with p(φ), it acts as a modified prior for the current block

  10. Two-Pass Algorithm
• Pass 1: for each block i, calculate
    p(φ | y_i) ∝ ∫ p(θ) p(y_i | θ, φ) dθ
• Pass 2: for each block k, calculate
    p(θ_k, φ | y) ∝ p(y_k | θ_k, φ) · p(θ_k) · p(φ) · ∏_{i≠k} p(φ | y_i)
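The two passes can be sketched on a parameter grid in a few dozen lines. The sketch below is a Python/NumPy illustration under simplifying assumptions — flat grid priors, uniform stimulus placement, and illustrative grid ranges — not the talk's MATLAB implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
GAMMA = 0.5                                    # fixed 2AFC guessing rate
LOGA = np.linspace(-3.0, 0.0, 61)              # theta grid: log threshold
BETA = np.linspace(0.5, 4.0, 15)               # phi grid: slope
LAM = np.linspace(0.0, 0.2, 9)                 # phi grid: lapsing rate

def psychometric(x, la, b, l):
    """P(correct | log intensity x); parameters may be broadcastable arrays."""
    w = 1.0 - np.exp(-np.exp(b * (x - la)))
    return GAMMA + (1.0 - GAMMA - l) * w

def block_loglik(x, r):
    """log p(y_i | theta, phi) on the (log alpha, beta, lambda) grid."""
    la, b, l = LOGA[:, None, None], BETA[None, :, None], LAM[None, None, :]
    ll = np.zeros((LOGA.size, BETA.size, LAM.size))
    for xi, ri in zip(x, r):
        p = psychometric(xi, la, b, l)
        ll += np.log(p if ri else 1.0 - p)
    return ll

def simulate_block(n, la, b, l):
    x = rng.uniform(la - 1.0, la + 1.0, n)     # log intensities near threshold
    r = rng.random(n) < psychometric(x, la, b, l)
    return x, r

def lse(a, axis):
    """Numerically stable log-sum-exp along an axis."""
    m = a.max()
    return m + np.log(np.exp(a - m).sum(axis=axis))

# Stationary example: 8 blocks of 100 trials at log alpha = -1.2.
blocks = [simulate_block(100, -1.2, 1.5, 0.10) for _ in range(8)]
logliks = [block_loglik(x, r) for x, r in blocks]

# Pass 1: p(phi | y_i), marginalizing theta within each block.
phi_ll = [lse(ll, axis=0) for ll in logliks]

# Pass 2 for block k: its own likelihood times the other blocks' phi information.
k = 0
post = logliks[k] + sum(phi_ll[i] for i in range(len(blocks)) if i != k)[None, :, :]
post = np.exp(post - post.max())
post /= post.sum()

theta_post = post.sum(axis=(1, 2))             # marginal posterior of log alpha
est = float((LOGA * theta_post).sum())         # posterior-mean threshold, block k
```

Only Pass 2 depends on the block of interest, so the per-block φ summaries from Pass 1 can be cached and reused for every k.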

  11. Posterior Thresholds

T̂_k = ∫∫ P⁻¹(.75; θ_k, φ) p(θ_k, φ | y) dθ_k dφ

The posterior distribution of the 75% threshold T_k = P⁻¹(.75; θ_k, φ) follows from the joint posterior; the integral above summarizes it by its mean.

[Figure: posterior density of the 75% threshold over log intensities −3 to 0, close to normal.]
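Inverting the psychometric function at the .75 level has a closed form. A small Python sketch; plugged with the stationary-simulation parameters quoted later in the talk (log α = −1.204, β = 1.5, γ = .5, λ = .10), it reproduces T_75 = −1.217:

```python
import math

def t75(log_alpha, beta, gamma=0.5, lam=0.10, p=0.75):
    """Log intensity where P(x) = p, inverting P = gamma + (1-gamma-lam) * W."""
    w = (p - gamma) / (1.0 - gamma - lam)          # required Weibull value
    return log_alpha + math.log(-math.log(1.0 - w)) / beta

print(round(t75(-1.204, 1.5), 3))                  # -1.217
```

Note that T_75 differs slightly from log α itself, because the lapsing rate λ shifts the level at which P(x) crosses .75.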

  12. Some Details
• Vaguely informative priors:
    p(log α) ∝ N(μ_α, σ_α)
    p(β) ∝ N(μ_β, σ_β)
    p(λ) ∝ Beta(a_λ, b_λ)
• Implemented on a grid: log α × β × λ
• Assume γ = .5 for 2AFC data
• MATLAB software available at http://www.socsci.uci.edu/~apetrov/
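On a grid, the three independent priors can be tabulated once and combined by outer product. A Python/NumPy sketch; the grid ranges and hyperparameter values (μ_α, σ_α, μ_β, σ_β, a_λ, b_λ) are illustrative placeholders, not the talk's settings:

```python
import numpy as np

LOGA = np.linspace(-3.0, 0.0, 61)   # log alpha grid
BETA = np.linspace(0.5, 4.0, 15)    # beta grid
LAM = np.linspace(0.0, 0.2, 9)      # lambda grid

def grid_prior(mu_a=-1.5, sd_a=1.0, mu_b=2.0, sd_b=1.0, a_l=2.0, b_l=20.0):
    """Joint prior p(log alpha) p(beta) p(lambda) on the grid, normalized to sum to 1."""
    p_a = np.exp(-0.5 * ((LOGA - mu_a) / sd_a) ** 2)    # N(mu_a, sd_a), unnormalized
    p_b = np.exp(-0.5 * ((BETA - mu_b) / sd_b) ** 2)    # N(mu_b, sd_b), unnormalized
    p_l = LAM ** (a_l - 1) * (1.0 - LAM) ** (b_l - 1)   # Beta(a_l, b_l), unnormalized
    prior = p_a[:, None, None] * p_b[None, :, None] * p_l[None, None, :]
    return prior / prior.sum()
```

Because the prior is a discrete table, normalization constants are irrelevant: dividing by the grid sum makes it a proper probability mass function regardless.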

  13. Simulation 1: Stationary

log α = −1.204 (constant)
β = 1.5
λ = .10
T_75 = −1.217

  14. Stimulus Placement
• 2 interleaved staircases
• 100 trials/block: 10 catch trials, 40 × 3-down/1-up, 50 × 2-down/1-up
• 100 runs of 12 blocks each

[Figure: log intensity vs. trial number for one 100-trial block, with the staircases sampling intensities between roughly −2 and 0.]
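One of these staircases can be sketched as follows (Python; the simulated observer reuses the Weibull psychometric function with the stationary-simulation parameters, and the starting point and step size are illustrative assumptions). A 3-down/1-up rule lowers the intensity after three consecutive correct responses and raises it after any error, so it equilibrates near the ~79% correct level:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_correct(x, log_alpha=-1.2, beta=1.5, gamma=0.5, lam=0.10):
    """Simulated 2AFC observer on the log-intensity axis."""
    w = 1.0 - np.exp(-np.exp(beta * (x - log_alpha)))
    return gamma + (1.0 - gamma - lam) * w

def staircase(n, x0=0.0, step=0.1, down=3):
    """n trials of a down/1-up staircase; returns the log intensities visited."""
    x, streak, track = x0, 0, []
    for _ in range(n):
        track.append(x)
        if rng.random() < p_correct(x):
            streak += 1
            if streak == down:        # `down` correct in a row -> harder stimulus
                x -= step
                streak = 0
        else:                         # any error -> easier stimulus
            x += step
            streak = 0
    return np.array(track)
```

Interleaving a 2-down/1-up track (down=2, targeting ~71% correct) with the 3-down/1-up track spreads the trials over a range of intensities, which helps constrain the slope as well as the threshold.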

  15. Threshold Estimators

Estimator    Mean    Med    Std
ML          −1.24   −1.23   .27
Median      −1.26   −1.23   .28
Mean        −1.30   −1.27   .31
Std. dev.    0.41    0.36   .15

1200 Monte Carlo estimates; true 75% threshold = −1.217.

[Figure: histogram of estimated thresholds (−3 to 0), with the ML, median, mean, and true values marked.]

  16. β × λ Distribution from Pass 1

[Figure: left panel — Pass-1 distribution over slope β (1–5) and lapsing rate λ (.05–.20), with the true and ML (β, λ) marked; right panel — the corresponding psychometric functions, P(correct) vs. log intensity.]

  17. Catch Trials Are Worthwhile

Estimator    Mean    Med    Std
ML          −1.24   −1.22   .31
Median      −1.29   −1.26   .30
Mean        −1.36   −1.33   .34
Std. dev.    0.58    0.57   .16

1200 Monte Carlo estimates; no catch trials presented; true 75% threshold = −1.217.

[Figure: histogram of estimated thresholds (−3 to 0), with the ML, median, mean, and true values marked.]

  18. Simulation 2: With Learning

log α(t) = −0.693 · (2 − e^(−t/800))
β = 1.5
λ = .10

[Figure: the true 75% threshold T_75 as a function of trial number, decreasing exponentially over 6000 trials.]

  19. Group Learning Curve, N=100

[Figure: ML threshold vs. block number (0–60); the reconstruction with its 95% confidence interval tracks the true learning curve closely.]

  20. More Realistic Sample, N=10

[Figure: ML threshold vs. block number (0–60); the reconstruction with its 95% confidence interval is noisier but still follows the true learning curve.]

  21. Individual Runs

[Figure: six individual reconstructed learning curves, each plotted on a log-threshold axis from 0 to −2.]

  22. The Method Performs Well

Estimator    Mean    Med    Std
ML          −0.03   −0.02   .28
Median      −0.05   −0.03   .29
Mean        −0.08   −0.05   .32
Std. dev.    0.42    0.39   .15

6000 Monte Carlo estimates of (estimated threshold − true threshold); similar to the stationary case; no systematic bias over time.

[Figure: histogram of estimation errors (−1.5 to 1), with the ML, median, mean, and true (zero) values marked.]

  23. Example: Actual Data, N=8
Jeter, Dosher, Petrov, & Lu (2005)

[Figure: 75% threshold vs. block number (0–16); separate learning curves in high noise (upper) and in no noise (lower), spanning roughly 1 to −3 in log units.]

  24. Future Work
• Sensitivity to priors?
• Compare with standard ML methods
• Individual differences
• Estimate slope in addition to threshold
• Non-stationary β and λ?
• Recommended stimulus placement?
• Hierarchical models

  25. The End
