  1. Strong Gravitational Lensing and ML: generative models for galaxies. Adam Coogan. Dark Machines workshop, ICTP, 8-12 April 2019

  2. Observed galaxy → generative model → Bayesian inference of p(θ_src | x), p(θ_lens | x), p(θ_sub | x). Model the physics when possible; use machine learning for the rest.

  3. Machine learning for generatively modeling the source light • Galaxies have diverse, complex morphologies (especially at z ≳ 2) • Complex source model → more accurate lens parameter inference. http://great3.jb.man.ac.uk/

  9. Source model • Low-dimensional representation of the data that: ➡ captures the range of galaxy morphologies ➡ has a latent space compatible with Bayesian inference • Variational autoencoder: encoder q_φ(z | x) maps data x into the latent space z, decoder p_θ(x | z) maps back, with prior p(z) = N(0, I). Train encoder and decoder by maximizing a lower bound on p(data). Kingma & Welling 2013, Rezende et al 2014
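The training objective on this slide, maximizing a lower bound on p(data) (the ELBO), can be sketched numerically. The following is a minimal NumPy illustration, not the talk's actual model: a Gaussian encoder q_φ(z | x) = N(μ, diag(σ²)) with the closed-form KL term against the prior p(z) = N(0, I), a unit-variance Gaussian likelihood for the decoder, and a Monte Carlo estimate of the reconstruction term via the reparameterization trick. The function names and the toy identity decoder are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo(x, mu_z, log_var_z, decode, n_samples=64):
    """Monte Carlo estimate of the evidence lower bound for one datum x.

    ELBO = E_{z ~ q(z|x)}[log p(x|z)] - KL(q(z|x) || N(0, I)).
    `decode` maps z to the mean of a unit-variance Gaussian likelihood.
    """
    std_z = np.exp(0.5 * log_var_z)
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
    eps = rng.standard_normal((n_samples, mu_z.size))
    z = mu_z + std_z * eps
    # Gaussian log-likelihood log p(x|z), unit variance, summed over pixels
    recon = np.array([decode(zi) for zi in z])
    log_px_z = -0.5 * np.sum((x - recon) ** 2 + np.log(2 * np.pi), axis=1)
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)), summed over latent dimensions
    kl = 0.5 * np.sum(np.exp(log_var_z) + mu_z**2 - 1.0 - log_var_z)
    return log_px_z.mean() - kl

# Toy check: identity decoder, encoder centered on the data point
x = np.array([0.5, -1.0])
print(elbo(x, mu_z=x.copy(), log_var_z=np.zeros(2), decode=lambda z: z))
```

In a real VAE both terms are differentiated through the encoder/decoder networks; the reparameterization trick is what makes the Monte Carlo reconstruction term differentiable with respect to the encoder parameters.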

  16. Galaxy VAE • Dataset: ~56,000 galaxies at redshift ~1, spanning S/N < 10 to S/N > 100 (http://great3.jb.man.ac.uk/) • This talk: train on ~10,000 images with S/N = 15-50 • Encoder and decoder: deep convolutional neural networks; decoder follows DCGAN. Radford et al 2015
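A DCGAN-style decoder upsamples with strided transposed convolutions, whose spatial output size follows out = (in − 1)·stride − 2·pad + kernel. A small sketch of that arithmetic (the layer shapes below are a generic DCGAN-style stack, not the talk's actual architecture):

```python
def deconv_out(size, kernel, stride, pad):
    # Output size of a square 2D transposed convolution (no output padding)
    return (size - 1) * stride - 2 * pad + kernel

# Generic DCGAN-style stack: each 4x4 kernel, stride-2, pad-1 layer doubles the size
size = 4
for _ in range(4):
    size = deconv_out(size, kernel=4, stride=2, pad=1)
print(size)  # 64
```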

  20. Galaxy VAE: reconstructions • Pipeline: 1. encode, x → z ~ q_φ(z | x); 2. decode, z → x′ • [Figure: originals shown side-by-side with their reconstructions] Rezende & Viola 2018, Zhao et al 2017

  27. Galaxy VAE: samples • Sample z ~ p(z) = N(0, I) and decode • But the z distribution of the training data ≠ N(0, I) → a known open issue with VAEs (Hoffman & Johnson 2016, Alemi et al 2018) • Our approach: sample z from the training-data latent distribution to generate better galaxies
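The prior/aggregate-posterior mismatch on this slide can be seen in a toy example: even when every per-datum posterior q_φ(z | x) is Gaussian, their mixture over the dataset (the aggregate posterior) need not be N(0, I). Below is a hypothetical 1D illustration with two data clusters whose encoder means and scales are made up for the demo; the mixture ends up bimodal, with the wrong variance and negative excess kurtosis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy aggregate posterior: a mixture of two narrow per-datum posteriors.
means = np.array([-1.5, 1.5])            # hypothetical encoder means for two clusters
std = 0.3                                # hypothetical encoder scale
comp = rng.integers(0, 2, size=100_000)
z = rng.normal(means[comp], std)

print(z.mean(), z.var())                 # mean ~0, but variance ~2.34, not 1
# Excess kurtosis of N(0, 1) is 0; a symmetric bimodal mixture is platykurtic
kurt = ((z - z.mean()) ** 4).mean() / z.var() ** 2 - 3
print(kurt)                              # clearly negative
```

Decoding z ~ N(0, I) therefore probes regions (here, around z = 0) that the decoder was never trained on, which is why sampling from the training-data latent distribution instead yields better galaxies.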

  33. Normalizing flows • z′ = f_T ∘ … ∘ f_2 ∘ f_1(z), with z ~ N(0, I) • Compose invertible transformations with simple Jacobians to reshape distributions • Parametrize the transformations with neural networks • For our purposes: inverse autoregressive flows (IAFs), which enable efficient sampling of the latent variable. Rezende & Mohamed 2015, Kingma et al 2016, …
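The rule underlying all of these flows is the change of variables formula: for an invertible f with z′ = f(z), log p(z′) = log p(z) − log|det ∂f/∂z|. A minimal 1D sketch with a single affine map (an illustration of the formula only, far simpler than an IAF):

```python
import numpy as np

def flow_logpdf(z_prime, a, b):
    """Density of z' = a*z + b with z ~ N(0, 1), via change of variables.

    log p(z') = log N(z; 0, 1) - log|a|, where z = (z' - b) / a
    inverts the flow and log|a| is the log-abs-det of the Jacobian.
    """
    z = (z_prime - b) / a
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))
    return log_base - np.log(abs(a))

# An affine flow maps N(0, 1) to N(b, a^2); compare with the closed form.
a, b, zp = 2.0, 1.0, 0.3
direct = -0.5 * ((zp - b) ** 2 / a**2 + np.log(2 * np.pi * a**2))
print(flow_logpdf(zp, a, b), direct)  # identical up to rounding
```

An IAF stacks many such invertible maps whose shift and scale are autoregressive functions of z computed by a neural network; the triangular Jacobian keeps the log-det term cheap while sampling stays a single forward pass.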

  37. Galaxy VAE: samples • Fit an IAF to the z distribution of the training data, then sample z ~ IAF and decode to x • [Figure: training-data z distribution, z samples from the IAF fit, and the resulting generated galaxies]

  39. Lensing galaxies • True source, observation, and best-fit source • True Einstein radius: 2.3; best-fit value: 2.29 • *Very preliminary, simplified analysis
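The Einstein radius recovered above is the single parameter of the simplest lens models. For a singular isothermal sphere, for example, the lens equation mapping image-plane positions θ to source-plane positions β is β = θ − θ_E · θ/|θ|. A toy ray-trace of that formula (a generic textbook lens model, not the talk's analysis code):

```python
import numpy as np

def sis_deflect(theta, theta_e):
    """Map image-plane positions theta (N x 2, arcsec) to source-plane
    positions beta for a singular isothermal sphere:
    beta = theta - theta_E * theta / |theta|."""
    r = np.linalg.norm(theta, axis=-1, keepdims=True)
    return theta - theta_e * theta / r

theta_e = 2.3  # the Einstein radius from the slide's example
# Points on the Einstein ring all map back to the origin, i.e. a source
# directly behind the lens is imaged onto the full ring.
ring = np.array([[theta_e, 0.0], [0.0, theta_e], [-theta_e, 0.0]])
print(sis_deflect(ring, theta_e))  # ~ [[0, 0], [0, 0], [0, 0]]
```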

  42. Conclusions • Integrate the galaxy VAE with the full analysis pipeline • Improve the prior/latent-distribution mismatch: fully incorporate flows with the VAE (Tomczak & Welling 2018) • Fix blurriness: more flexible encoder? β-VAE/conditional VAE, …? (Higgins et al 2017) • An example of "differentiable programming" for physics + ML • Thanks!

  44. Lensing MNIST digits • True source, observation, and outputs from a simplified analysis: simplified lens with one parameter (r_ein), Poissonian observation noise • Best-fit source from the VAE • Lens parameter inference: [Figure: posteriors p(r_ein | obs) from HMC and SVI, both peaked near the true r_ein ≈ 1.65]
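With a single lens parameter and Poisson noise, the shape of a posterior like p(r_ein | obs) can be sketched with a simple grid scan of the Poisson log-likelihood (the k! term drops out since it does not depend on the parameter). Everything below is a hypothetical stand-in: the "lens model" is a made-up linear map from r_ein to expected counts, whereas the talk's analysis ray-traces a VAE source and uses HMC and SVI.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "lens": expected counts scale linearly with the Einstein radius
# (hypothetical model standing in for ray-tracing a source through a lens).
def model_image(r_ein, base=np.linspace(1.0, 5.0, 50)):
    return base * r_ein * 100.0  # scaled up for decent S/N

true_r = 1.65
obs = rng.poisson(model_image(true_r))

# Grid posterior with a flat prior:
# log p(r | obs) ∝ sum_i [k_i * log(lambda_i(r)) - lambda_i(r)]
grid = np.linspace(1.5, 1.8, 301)
loglike = np.array([(obs * np.log(model_image(r)) - model_image(r)).sum()
                    for r in grid])
post = np.exp(loglike - loglike.max())
post /= post.sum()

r_map = grid[np.argmax(post)]
print(r_map)  # close to the true value 1.65
```

The grid scan only works because there is one parameter; with a full lens model plus the VAE latent vector, gradient-based samplers like HMC or variational methods like SVI take over, which is where the differentiability of the whole pipeline pays off.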
