Particle filter-based Gaussian process optimisation for parameter inference


  1. Particle filter-based Gaussian process optimisation for parameter inference. IFAC World Congress 2014, Cape Town, South Africa, August 28, 2014. Johan Dahlin (johan.dahlin@liu.se), Division of Automatic Control, Linköping University, Sweden.

  2. This is collaborative work with Dr. Fredrik Lindsten (University of Cambridge, United Kingdom).

  3. Summary. Aim: efficient likelihood-based parameter inference in nonlinear state-space models (SSMs). Methods: Gaussian process optimisation and sequential Monte Carlo methods. Contributions: decreased computational cost compared with popular methods; an interesting method for solving other costly optimisation problems.

  4. Example: Modelling volatility in OMXS30 returns. [Figure: daily closing prices of the OMXS30 index, January 2012 to January 2014.]

  5. Example: Modelling volatility in OMXS30 returns. [Figure: daily log-returns of the OMXS30 index over the same period, together with a density estimate and a normal QQ plot of the log-returns.]

  6. Example: Modelling volatility in OMXS30 returns. The model is
  $$x_{t+1} \mid x_t \sim \mathcal{N}\big(x_{t+1};\, \phi x_t,\, \sigma^2\big), \qquad y_t \mid x_t \sim \mathcal{N}\big(y_t;\, 0,\, \beta^2 \exp(x_t)\big).$$
  Task: estimate
  $$\widehat{\theta}_{\mathrm{ML}} = \operatorname*{argmax}_{\theta \in \Theta} \ell(\theta), \qquad \text{with } \theta = \{\phi, \sigma, \beta\}.$$
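  As a concrete illustration of this model, the following is a minimal NumPy sketch that simulates the stochastic volatility model above. The function name, the seed and the stationary initialisation of the first state are illustrative assumptions, not part of the talk; the parameter values are the GPO estimates reported on slide 20.

  import numpy as np

  def simulate_sv(theta, T, seed=0):
      # Stochastic volatility model from slide 6:
      #   x[t+1] | x[t] ~ N(phi * x[t], sigma^2)
      #   y[t]   | x[t] ~ N(0, beta^2 * exp(x[t]))
      phi, sigma, beta = theta
      rng = np.random.default_rng(seed)
      x = np.zeros(T)
      y = np.zeros(T)
      x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi ** 2))  # stationary initial state (assumption)
      for t in range(T):
          y[t] = rng.normal(0.0, beta * np.exp(0.5 * x[t]))
          if t < T - 1:
              x[t + 1] = rng.normal(phi * x[t], sigma)
      return x, y

  # Example call with the GPO estimates reported later in the talk.
  x, y = simulate_sv(theta=(0.98, 0.11, 0.93), T=500)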

  7. Overview of the algorithm.
  (i) Given iterate $\theta_k$, estimate the log-likelihood $\widehat{\ell}_k \approx \ell(\theta_k)$.
  (ii) Given $\{\theta_j, \widehat{\ell}_j\}_{j=0}^{k}$, create a surrogate cost function of $\ell(\theta)$.
  (iii) Select a new $\theta_{k+1}$ using the surrogate cost function.

  8. Overview of the algorithm.
  (i) Given iterate $\theta_k$, estimate the log-likelihood $\widehat{\ell}_k \approx \ell(\theta_k)$. Estimated using a particle filter.
  (ii) Given $\{\theta_j, \widehat{\ell}_j\}_{j=0}^{k}$, create a surrogate cost function of $\ell(\theta)$. The predictive distribution of a Gaussian process.
  (iii) Select a new $\theta_{k+1}$ using the surrogate cost function. An acquisition function, a heuristic based on the predictive distribution.
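  The three steps can be collected into a short driver loop. The sketch below is schematic: estimate_loglik, fit_gp and acquire are hypothetical placeholders for the particle filter, the Gaussian process regression and the acquisition rule discussed on the following slides, not functions from the paper's code.

  import numpy as np

  def gpo(estimate_loglik, fit_gp, acquire, theta0, K):
      # Steps (i)-(iii) from slide 8: estimate, build surrogate, pick the next iterate.
      thetas = [np.atleast_1d(theta0)]
      logliks = []
      for k in range(K):
          logliks.append(estimate_loglik(thetas[k]))                # (i) noisy log-likelihood estimate
          surrogate = fit_gp(np.array(thetas), np.array(logliks))   # (ii) GP predictive distribution
          thetas.append(acquire(surrogate))                         # (iii) maximise the acquisition rule
      best = int(np.argmax(logliks))
      return thetas[best], logliks[best]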

  9. Particle filtering: overview. Resampling, propagation, weighting. Given the particle system $\{x_{1:T}^{(i)}, w_{1:T}^{(i)}\}_{i=1}^{N}$, the filtering density can be approximated by
  $$\widehat{p}_\theta(\mathrm{d}x_{1:T} \mid y_{1:T}) = \sum_{i=1}^{N} \frac{w_T^{(i)}}{\sum_{k=1}^{N} w_T^{(k)}}\, \delta_{x_{1:T}^{(i)}}(\mathrm{d}x_{1:T}).$$

  10. Particle filtering: log-likelihood estimator. Estimator:
  $$\widehat{\ell}(\theta) = \sum_{t=1}^{T} \log\bigg( \sum_{i=1}^{N} w_t^{(i)} \bigg) - T \log N.$$
  Statistical properties (CLT):
  $$\sqrt{N}\,\Big( \widehat{\ell}(\theta) - \ell(\theta) + \frac{\sigma_\ell^2}{2N} \Big) \xrightarrow{d} \mathcal{N}\big(0, \sigma_\ell^2\big).$$
  [Figure: density and normal QQ plot of the error in the log-likelihood estimate.]
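  In code, the estimator amounts to summing, over time, the log of the sum of the unnormalised weights. A minimal sketch, assuming the weights produced by a particle filter are stored as a T-by-N array (a numerically safer variant would keep log-weights and use a log-sum-exp):

  import numpy as np

  def loglik_from_weights(weights):
      # weights: (T, N) array of unnormalised particle weights w_t^(i).
      # Returns sum_t log( sum_i w_t^(i) ) - T * log(N), as in the estimator above.
      T, N = weights.shape
      return np.sum(np.log(np.sum(weights, axis=1))) - T * np.log(N)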

  11. Gaussian process regression: overview. We assume a priori that
  $$\ell(\theta) \sim \mathcal{GP}\big(m(\theta),\, \kappa(\theta, \theta')\big),$$
  which gives the posterior predictive distribution
  $$\ell(\theta_\star) \mid \mathcal{D}_k \sim \mathcal{N}\big( \mu(\theta_\star \mid \mathcal{D}_k),\, \sigma^2(\theta_\star \mid \mathcal{D}_k) + \sigma_\ell^2 \big), \qquad \text{with } \mathcal{D}_k = \{\theta_j, \widehat{\ell}_j\}_{j=1}^{k}.$$
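  A hand-rolled sketch of this posterior predictive is given below, assuming a squared-exponential covariance and a constant mean; the slide leaves the choice of m and κ open, so the kernel and its hyperparameter values are illustrative assumptions, and θ is treated as a scalar as in the toy example that follows.

  import numpy as np

  def gp_predict(theta_train, ell_train, theta_star, length=0.1, signal_var=10.0, noise_var=1.0):
      # Squared-exponential covariance kappa(theta, theta') (illustrative choice).
      def kernel(a, b):
          return signal_var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

      m = ell_train.mean()                                  # constant mean function (illustrative choice)
      K = kernel(theta_train, theta_train) + noise_var * np.eye(len(theta_train))
      K_s = kernel(theta_star, theta_train)
      # Predictive mean mu(theta*|D_k) and variance sigma^2(theta*|D_k) + sigma_l^2.
      mean = m + K_s @ np.linalg.solve(K, ell_train - m)
      var = signal_var - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1) + noise_var
      return mean, var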

  12. Gaussian process regression: toy example. [Figure: the GP surrogate function over θ for a toy example.]

  13. Gaussian process regression: toy example.

  14. Acquisition rule for selecting sampling points. Consider the 95% upper confidence bound as the acquisition rule,
  $$\theta_{k+1} = \operatorname*{argmax}_{\theta_\star \in \Theta} \Big\{ \mu(\theta_\star \mid \mathcal{D}_k) + 1.96 \sqrt{\sigma^2(\theta_\star \mid \mathcal{D}_k)} \Big\},$$
  to determine the next iterate $\theta_{k+1}$.
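  One simple way to implement this step is to maximise the rule over a grid of candidate parameters. A minimal sketch, reusing the gp_predict helper sketched after slide 11; the grid search is an assumption, and any global optimiser over Θ would do:

  import numpy as np

  def next_iterate_ucb(theta_train, ell_train, grid):
      # theta_{k+1} = argmax over the grid of mu(theta|D_k) + 1.96 * sqrt(sigma^2(theta|D_k)).
      mean, var = gp_predict(theta_train, ell_train, grid)
      return grid[np.argmax(mean + 1.96 * np.sqrt(var))]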

  15. Overview of the algorithm.
  (i) Given iterate $\theta_k$, estimate the log-likelihood $\widehat{\ell}_k \approx \ell(\theta_k)$. Estimated using a particle filter.
  (ii) Given $\{\theta_j, \widehat{\ell}_j\}_{j=0}^{k}$, create a surrogate cost function of $\ell(\theta)$. The predictive distribution of a Gaussian process.
  (iii) Select a new $\theta_{k+1}$ using the surrogate cost function. An acquisition function, a heuristic based on the predictive distribution.

  16. A toy example of the algorithm in action. [Figure: the surrogate function over θ during a run of the algorithm.]

  17. A toy example of the algorithm in action.

  18. Example: Modelling volatility in OMXS30 returns. [Figure: parameter estimates versus iteration for GPO (top) and SPSA (bottom).]

  19. Example: Modelling volatility in OMXS30 returns.

  20. Example: Modelling volatility in OMXS30 returns. [Figure: estimated densities for φ, σ_v and β.]

  Estimator                   φ      σ      β
  Maximum likelihood (GPO)    0.98   0.11   0.93
  Bayesian posterior mode     0.98   0.12   0.88
  Bayesian posterior mean     0.97   0.14   0.93

  21. Example: Modelling volatility in OMXS30 returns. [Figure: daily log-returns (top) and estimated volatility (bottom), January 2012 to January 2014.]

  22. Conclusions. Methods: particle filtering for log-likelihood estimation; a CLT for the log-likelihood estimate; Gaussian process modelling; acquisition rules. Contributions: decreased computational cost compared with popular methods; only makes use of cheap zero-order information. Future work: bias compensation of the log-likelihood estimate; approximate Bayesian computations (new paper); input design (new paper).

  23. Thank you for your attention! Questions, comments and suggestions are most welcome. The paper and the code to replicate its results are available at http://work.johandahlin.com/.

  24. (Bootstrap) particle filtering. Resampling, propagation, weighting.
  - Resampling: draw ancestor indices with $\mathbb{P}\big(a_t^{(i)} = j\big) = \bar{w}_{t-1}^{(j)}$ and set $\tilde{x}_{t-1}^{(i)} = x_{t-1}^{a_t^{(i)}}$.
  - Propagation: $x_t^{(i)} \sim R_\theta\big(x_t \mid \tilde{x}_{t-1}^{(i)}\big) = f_\theta\big(x_t \mid \tilde{x}_{t-1}^{(i)}\big)$.
  - Weighting: $w_t^{(i)} = W_\theta\big(x_t^{(i)}, \tilde{x}_{t-1}^{(i)}\big) = g_\theta\big(y_t \mid x_t^{(i)}\big)$.
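  A minimal NumPy sketch of this bootstrap filter, specialised to the stochastic volatility model from slide 6 and returning the log-likelihood estimate from slide 10; the number of particles, the seed, the stationary initialisation and multinomial resampling at every step are illustrative assumptions.

  import numpy as np

  def bootstrap_pf(y, theta, N=500, seed=0):
      phi, sigma, beta = theta
      rng = np.random.default_rng(seed)
      T = len(y)
      x = rng.normal(0.0, sigma / np.sqrt(1.0 - phi ** 2), size=N)  # initial particles (assumption)
      loglik = 0.0
      for t in range(T):
          if t > 0:
              # Resampling: draw ancestors with probability proportional to the previous weights.
              ancestors = rng.choice(N, size=N, p=w / w.sum())
              # Propagation: sample from the transition density f_theta(x_t | x_{t-1}).
              x = rng.normal(phi * x[ancestors], sigma)
          # Weighting: evaluate the observation density g_theta(y_t | x_t) = N(y_t; 0, beta^2 exp(x_t)).
          w = np.exp(-0.5 * y[t] ** 2 / (beta ** 2 * np.exp(x)) - 0.5 * x) / (beta * np.sqrt(2.0 * np.pi))
          # Accumulate log( sum_i w_t^(i) ) - log(N), giving the slide-10 estimator after the loop.
          loglik += np.log(np.sum(w)) - np.log(N)
      return loglik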

  25. Likelihood estimation using the APF. The likelihood for an SSM can be decomposed as
  $$L(\theta) = p_\theta(y_{1:T}) = p_\theta(y_1) \prod_{t=2}^{T} p_\theta(y_t \mid y_{1:t-1}),$$
  where the one-step-ahead predictor can be computed by
  $$p_\theta(y_t \mid y_{1:t-1}) = \int f_\theta(x_t \mid x_{t-1})\, g_\theta(y_t \mid x_t)\, p_\theta(x_{t-1} \mid y_{1:t-1})\, \mathrm{d}x_{t-1:t} = \int W_\theta(x_t \mid x_{t-1})\, R_\theta(x_t \mid x_{t-1})\, p_\theta(x_{t-1} \mid y_{1:t-1})\, \mathrm{d}x_{t-1:t},$$
  and approximated using the particle system by
  $$\widehat{p}_\theta(y_t \mid y_{1:t-1}) = \frac{1}{N} \sum_{i=1}^{N} W_\theta\big(x_t^{(i)}, \tilde{x}_{t-1}^{(i)}\big) = \frac{1}{N} \sum_{i=1}^{N} w_t^{(i)}.$$
