Event Generation and Statistical Sampling with Deep Generative Models


  1. Event Generation and Statistical Sampling with Deep Generative Models Rob Verheyen

  2. Introduction: Event generation is really hard!

  3. Introduction: Can we use deep neural networks to do event generation? Possible applications: • Faster generation • Data-driven generators • Targeted event generation

  4. Introduction: A study of different types of unsupervised generative models • Generative Adversarial Networks • Variational Autoencoders • Buffer Variational Autoencoder (B-VAE) Can these networks be used for event generation?

  5. Generative Adversarial Networks (GANs)

  6. Generative Adversarial Networks: Two networks (Generator & Discriminator) that play a game against each other

  7. Generative Adversarial Networks: Loss function (the standard minimax objective): $\min_G \max_D \, \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$. Nash equilibrium: $p_{\mathrm{data}}(x) = p_{\mathrm{gen}}(x)$ and $D(x) = \frac{1}{2}$.
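As a rough illustration of this minimax game, here is a minimal PyTorch sketch of one training step. The toy dimensions, architectures, and optimizer settings are my own placeholders, not the setup from the talk.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 4  # illustrative sizes
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

def train_step(real):
    batch = real.size(0)
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
    opt_D.zero_grad()
    loss_D = bce(D(real), torch.ones(batch, 1)) \
           + bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_D.backward()
    opt_D.step()

    # Generator step: fool the discriminator, i.e. push D(fake) -> 1.
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(batch, 1))
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()

# e.g. train_step(torch.randn(128, data_dim)) with placeholder "real" data
```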

  8. Generative Adversarial Networks [arXiv:1812.04948]

  9. Variational Autoencoders (VAEs)

  10. Autoencoders • Data is encoded into a latent space • The dimension of the latent space is often lower than that of the data
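For concreteness, a minimal sketch of such a compressing encoder/decoder pair in PyTorch; all dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

data_dim, latent_dim = 16, 4  # illustrative; latent_dim < data_dim
encoder = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))

# Training minimizes the reconstruction error ||x - decoder(encoder(x))||^2.
x = torch.randn(8, data_dim)
recon_loss = ((x - decoder(encoder(x))) ** 2).mean()
```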

  11. Variational Autoencoders: Add a degree of randomness to the training procedure
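In a VAE this randomness enters through the standard reparameterization trick: the encoder outputs a mean and a log-variance per data point, and the latent code is sampled in a way that stays differentiable. A minimal sketch (the function name is mine):

```python
import torch

def reparameterize(mu, log_var):
    # Standard reparameterization trick: z = mu + sigma * eps, with
    # eps ~ N(0, 1); keeps sampling differentiable w.r.t. (mu, log_var).
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps
```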

  12. Variational Autoencoders: Points in latent space are ordered

  13. Variational Autoencoders: Loss function
$\mathcal{L}_{\mathrm{VAE}} = (1 - \beta)\, \frac{1}{N} \sum_i (\vec{x}_i - \vec{y}_i)^2 + \beta\, D_{\mathrm{KL}}\big(\mathcal{N}(\mu_i, \sigma_i),\, \mathcal{N}(0, 1)\big)$
The first term is the mean squared error (MSE), the second the Kullback–Leibler (KL) divergence. MSE: the Gaussians prefer being very narrow. KL divergence: the Gaussians prefer being close to $\mathcal{N}(0, 1)$. $\beta$ is a hyperparameter: tune by hand.
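Transcribing the loss above directly into PyTorch might look as follows; the tensor shapes and the use of a mean reduction are assumptions on my part:

```python
import torch

def vae_loss(x, y, mu, sigma, beta):
    # x: inputs, y: reconstructions, (mu, sigma): encoder outputs.
    # (1 - beta) * mean squared reconstruction error ...
    mse = torch.mean((x - y) ** 2)
    # ... + beta * KL(N(mu, sigma) || N(0, 1)), in closed form for Gaussians.
    kl = torch.mean(0.5 * (mu ** 2 + sigma ** 2 - 2.0 * torch.log(sigma) - 1.0))
    return (1.0 - beta) * mse + beta * kl
```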

  14. Information Buffer: The latent space representations of our data points are now ordered. Normally, one would now sample from $\mathcal{N}(0, 1)$ in latent space. But we can do better: create an information buffer
$p(z) = \frac{1}{n} \sum_i^n \mathcal{N}(\mu_i, \sigma_i)$
a representation of the distribution of the training data in latent space.
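Sampling from the buffer then amounts to drawing from this Gaussian mixture: pick a stored $(\mu_i, \sigma_i)$ pair uniformly at random and sample from that Gaussian. A NumPy sketch (the function name is mine):

```python
import numpy as np

def sample_from_buffer(mus, sigmas, n_samples, rng=None):
    """Sample z ~ p(z) = (1/n) sum_i N(mu_i, sigma_i).

    mus, sigmas: arrays of shape (n, latent_dim), collected by encoding
    the training data (the information buffer).
    """
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.integers(len(mus), size=n_samples)  # pick mixture components
    return rng.normal(mus[idx], sigmas[idx])      # draw from each Gaussian
```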

  15. Results

  16. Toy Model: $1 \to 2$ decay with uniform angles and no exact momentum conservation. Trained on four-vectors.
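A generator for such toy training data might look like the following NumPy sketch; the mother mass, massless daughters, and rest frame are my assumptions, chosen only to make the example self-contained:

```python
import numpy as np

def toy_decay_events(n, M=1.0, rng=None):
    """1 -> 2 decay of a mother of mass M at rest into two massless
    daughters with uniformly distributed decay angles (all illustrative)."""
    if rng is None:
        rng = np.random.default_rng()
    cos_theta = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_theta = np.sqrt(1.0 - cos_theta ** 2)
    E = np.full(n, M / 2.0)  # each massless daughter carries E = M/2
    p = np.stack([E,
                  E * sin_theta * np.cos(phi),
                  E * sin_theta * np.sin(phi),
                  E * cos_theta], axis=1)
    q = p * np.array([1.0, -1.0, -1.0, -1.0])  # back-to-back partner
    return np.concatenate([p, q], axis=1)      # two (E, px, py, pz) per event
```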

  17. Top pair production: MG5 aMC@NLO 6.3.2 + Pythia 8.2 + Delphes 3 • One top required to decay leptonically • Number of training points: $5 \times 10^5$ • Jets with $p_T > 20$ GeV. Event generation with the B-VAE is $\mathcal{O}(10^8)$ times faster!

  18. Top pair production

  19. Latent space distributions: Distributions are still Gaussian-like. Some have sharp cutoffs: events outside them are unphysical. The information buffer is very important!

  20. Latent Space Principal Component Analysis

  21. Latent Space Principal Component Analysis
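A latent-space PCA like the one shown on these slides can be reproduced with scikit-learn; the latent codes below are random placeholders standing in for encoded events:

```python
import numpy as np
from sklearn.decomposition import PCA

# Latent codes of the training events; random placeholders here,
# standing in for encoder outputs (shape: n_events x latent_dim).
z = np.random.default_rng(0).normal(size=(1000, 20))

pca = PCA(n_components=2)
z2 = pca.fit_transform(z)             # project onto leading components
print(pca.explained_variance_ratio_)  # variance captured per component
```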

  22. Possible Applications: Most direct application: importance sampling for matrix element (ME) generation
$\sigma = \int d\Phi\, |\mathcal{M}(\Phi)|^2 = \int d\Phi\, p(\Phi)\, \frac{|\mathcal{M}(\Phi)|^2}{p(\Phi)}$, ideally with $p(\Phi) \propto |\mathcal{M}(\Phi)|^2$.
Current methods: VEGAS. Recent ML techniques: latent variable models [arXiv:1810.11509]. Efficiency for $e^+ e^- \to q \bar{q} g$: • VEGAS: ~4% • LVM: ~65% • B-VAE: ???
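The quoted percentages are unweighting efficiencies, $\langle w \rangle / w_{\max}$ for weights $w = |\mathcal{M}|^2 / p$. A self-contained NumPy sketch on a one-dimensional stand-in phase space with a flat proposal density (everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def m2(phi):
    # Stand-in for |M(phi)|^2 on a 1-D phase space [0, 1] (illustrative).
    return 1.0 / (phi ** 2 + 0.01)

# Proposal density p(phi); a flat density here, i.e. plain Monte Carlo.
phi = rng.uniform(0.0, 1.0, 100_000)
w = m2(phi) / 1.0                  # weights |M|^2 / p, with p(phi) = 1

sigma = w.mean()                   # MC estimate of the integral
efficiency = w.mean() / w.max()    # unweighting efficiency <w> / w_max
print(f"sigma ~ {sigma:.2f}, efficiency ~ {efficiency:.1%}")
```

A proposal density closer to the shape of $|\mathcal{M}|^2$ raises the efficiency, which is exactly what the latent variable models and the B-VAE aim to provide.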

  23. Applications? • Data-driven event generators • Targeted event generation • Applications outside High Energy Physics? • ???
