

  1. VCMC: Variational Consensus Monte Carlo
  Maxim Rabinovich, Elaine Angelino, Michael I. Jordan
  Berkeley Vision and Learning Center, September 22, 2015

  2. Probabilistic models!
  [Image: scene labeling — sky, fog, bridge, water, grass]
  Applications: object tracking & recognition, personalized recommendations, genomics & phylogenetics, small molecule discovery

  3. Outline
  ◮ Bayesian inference and Markov chain Monte Carlo
  ◮ MCMC is hard → New data-parallel algorithms
  ◮ VCMC: Our approach and theoretical results
  ◮ Empirical evaluation

  4. Bayesian models encode uncertainty using probabilities
  A model is a probabilistic description of data: y_i ∼ N(α x_i + β, σ²)
  Probability distribution over the model parameters: π(α, β, σ | x, y)

  5. Bayesian inference uses Bayes’ rule
  π(θ | x) ∝ π(θ) · π(x | θ)   (posterior ∝ prior × likelihood)
  Probabilistic model of data: y_i ∼ N(α x_i + β, σ²)
  Model parameters: θ = (α, β, σ)
  Data: x = {(x_1, y_1), (x_2, y_2), ..., (x_10, y_10)}

  6. In general, posterior distributions are difficult to work with
  Normalizing involves an integral that is often intractable:
  π(θ | x) = π(θ) π(x | θ) / ∫_Θ π(θ) π(x | θ) dθ

  7. In general, posterior distributions are difficult to work with
  Normalizing involves an integral that is often intractable:
  π(θ | x) = π(θ) π(x | θ) / ∫_Θ π(θ) π(x | θ) dθ
  Expectations w.r.t. the posterior = more intractable integrals:
  E_π[f] = ∫_Θ f(θ) π(θ | x) dθ
  (These are statistics that distill information about the posterior)
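The normalizing integral can be made concrete with a small 1-D sketch (illustrative, not from the talk): a conjugate Gaussian model whose posterior mean is known in closed form, so a brute-force grid approximation of both integrals can be checked against the exact answer. The model, prior, data, and grid bounds below are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.5, 1.0, size=20)           # model: x_i ~ N(theta, 1)

theta = np.linspace(-5.0, 5.0, 2001)           # grid over the parameter
dx = theta[1] - theta[0]
log_prior = -0.5 * theta ** 2                  # prior: theta ~ N(0, 1)
log_lik = -0.5 * ((data[:, None] - theta[None, :]) ** 2).sum(axis=0)
unnorm = np.exp(log_prior + log_lik - (log_prior + log_lik).max())

Z = unnorm.sum() * dx                          # the normalizing integral
posterior = unnorm / Z                         # pi(theta | x) on the grid

post_mean = (theta * posterior).sum() * dx     # E_pi[theta], another integral
```

In one dimension the grid works; the point of the talk is that for realistic θ these integrals have no such brute-force escape hatch.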

  8. Solution: Monte Carlo integration
  Given a finite set of samples θ_1, θ_2, ..., θ_T ∼ π(θ | x)

  9. Solution: Monte Carlo integration
  Given a finite set of samples θ_1, θ_2, ..., θ_T ∼ π(θ | x)
  Estimate an intractable expectation as a sum:
  E_π[f] = ∫_Θ f(θ) π(θ | x) dθ ≈ (1/T) Σ_{t=1}^T f(θ_t)

  10. Solution: Monte Carlo integration
  Given a finite set of samples θ_1, θ_2, ..., θ_T ∼ π(θ | x)
  Estimate an intractable expectation as a sum:
  E_π[f] = ∫_Θ f(θ) π(θ | x) dθ ≈ (1/T) Σ_{t=1}^T f(θ_t)
  i.e., replace a distribution with samples from it
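A minimal sketch of this estimator (illustrative: the posterior is assumed to be a known Gaussian here, so the exact expectation is available for comparison):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000
# Pretend the posterior is N(1.2, 0.3^2) so we can sample it directly
samples = rng.normal(1.2, 0.3, size=T)         # theta_1, ..., theta_T ~ pi

f = lambda th: th ** 2                         # statistic of interest
mc_estimate = f(samples).mean()                # (1/T) * sum_t f(theta_t)
exact = 1.2 ** 2 + 0.3 ** 2                    # E[theta^2] = mu^2 + sigma^2
```

The sample average converges to the exact expectation at the usual O(1/√T) Monte Carlo rate.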

  11. Markov chain Monte Carlo (MCMC)
  Widely used class of sampling algorithms
  Sample by simulating a Markov chain (a biased random walk) whose stationary distribution (after convergence) is the posterior: θ_1, θ_2, ..., θ_T ∼ π(θ | x)
  Use the samples for Monte Carlo integration:
  E_π[f] = ∫_Θ f(θ) π(θ | x) dθ ≈ (1/T) Σ_{t=1}^T f(θ_t)
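Such a biased random walk can be sketched in a few lines. The random-walk Metropolis sampler below is illustrative (it is not a sampler from the paper), and the target, step size, and burn-in length are assumptions for the example:

```python
import numpy as np

def metropolis(log_post, theta0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis targeting exp(log_post), up to a constant."""
    rng = np.random.default_rng(seed)
    theta, lp = theta0, log_post(theta0)
    samples = np.empty(n_steps)
    for t in range(n_steps):
        prop = theta + step * rng.normal()          # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples[t] = theta                          # keep current state either way
    return samples

# Target: a standard normal, specified only up to its normalizing constant
chain = metropolis(lambda t: -0.5 * t ** 2, theta0=3.0, n_steps=50_000)
post_burn = chain[10_000:]                          # discard burn-in
```

Note that only the unnormalized log density is needed, which is exactly why MCMC sidesteps the intractable normalizing integral.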

  12. Outline
  ◮ Bayesian inference and Markov chain Monte Carlo
  ◮ MCMC is hard → New data-parallel algorithms
  ◮ VCMC: Our approach and theoretical results
  ◮ Empirical evaluation

  13. Traditional MCMC
  ◮ Serial, iterative algorithm for generating samples
  ◮ Slow for two reasons: (1) a large number of iterations is required to converge; (2) each iteration depends on the entire dataset
  ◮ Most innovation in MCMC has targeted (1)
  ◮ Recent threads of work target (2)

  14. Serial MCMC
  [Diagram: Data → Single core → Samples]

  15. Data-parallel MCMC
  [Diagram: Data → Parallel cores → “Samples”]

  16. Aggregate samples from across partitions, but how?
  [Diagram: Data → Parallel cores → “Samples” → Aggregate]

  17. Factorization (⋆) motivates a data-parallel approach
  π(θ | x) ∝ π(θ) π(x | θ) = ∏_{j=1}^J [π(θ)^{1/J} π(x^{(j)} | θ)]
  (posterior ∝ prior × likelihood; each factor π(θ)^{1/J} π(x^{(j)} | θ) is a sub-posterior)

  18. Factorization (⋆) motivates a data-parallel approach
  π(θ | x) ∝ π(θ) π(x | θ) = ∏_{j=1}^J [π(θ)^{1/J} π(x^{(j)} | θ)]
  ◮ Partition the data as x^(1), ..., x^(J) across J cores
  ◮ The j-th core samples from a distribution proportional to the j-th sub-posterior (a ‘piece’ of the full posterior)
  ◮ Aggregate the sub-posterior samples to form approximate full posterior samples
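The partition-sample-aggregate scheme can be sketched on a toy conjugate Gaussian model, where each sub-posterior π(θ)^{1/J} π(x^{(j)} | θ) is Gaussian in closed form and can be sampled directly (illustrative; in real use cases each shard runs its own MCMC chain). The model, prior, and shard counts are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
J = 4                                        # number of cores / partitions
data = rng.normal(2.0, 1.0, size=1000)       # model: x_i ~ N(theta, 1), prior theta ~ N(0, 1)
parts = np.array_split(data, J)              # x^(1), ..., x^(J)

T = 5000
sub_samples = []
for xj in parts:
    # Sub-posterior pi(theta)^(1/J) * pi(x^(j) | theta) is Gaussian here,
    # with precision 1/J + n_j and mean sum(x^(j)) / precision
    prec = 1.0 / J + len(xj)
    sub_samples.append(rng.normal(xj.sum() / prec, prec ** -0.5, size=T))

# Naive aggregation: uniform average of the t-th draw from each partition
combined = np.mean(sub_samples, axis=0)
```

For this symmetric Gaussian toy case the uniform average happens to recover the full posterior exactly; the aggregation question becomes interesting precisely when the sub-posteriors are unequal or non-Gaussian.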

  19. Aggregation strategies for sub-posterior samples
  π(θ | x) ∝ π(θ) π(x | θ) = ∏_{j=1}^J [π(θ)^{1/J} π(x^{(j)} | θ)]
  Sub-posterior density estimation (Neiswanger et al., UAI 2014)
  Weierstrass samplers (Wang & Dunson, 2013)
  Weighted averaging of sub-posterior samples:
  ◮ Consensus Monte Carlo (Scott et al., Bayes 250, 2013)
  ◮ Variational Consensus Monte Carlo (Rabinovich et al., NIPS 2015)

  20. Aggregate ‘horizontally’ (⋆) across partitions
  [Diagram: Data → Parallel cores → “Samples” → Aggregate]

  21. Recall that samples are parameter vectors

  22. Naïve aggregation = average
  Aggregate: θ = 0.5 × θ^(1) + 0.5 × θ^(2)

  23. Less naïve aggregation = weighted average
  Aggregate: θ = 0.58 × θ^(1) + 0.42 × θ^(2)

  24. Consensus Monte Carlo (Scott et al., 2013)
  Aggregate: θ = W_1 θ^(1) + W_2 θ^(2)
  ◮ Weights are inverse covariance matrices
  ◮ Motivated by Gaussian assumptions
  ◮ Designed at Google for the MapReduce framework
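A sketch of this aggregation rule, with weights estimated as inverse sample covariances and normalized so the weights sum to the identity (the two synthetic "sub-posteriors", their means, and their covariances are assumptions for the example):

```python
import numpy as np

def cmc_aggregate(sub_samples):
    """Combine sub-posterior draws with inverse-covariance weights:
    theta_t = (sum_j W_j)^(-1) sum_j W_j theta_t^(j),  W_j = Sigma_j^(-1)."""
    weights = [np.linalg.inv(np.cov(s.T)) for s in sub_samples]   # each s: (T, d)
    total_inv = np.linalg.inv(sum(weights))
    combined = sum(W @ s.T for W, s in zip(weights, sub_samples)) # (d, T)
    return (total_inv @ combined).T                               # (T, d)

# Toy check with two synthetic 2-D sub-posterior sample sets
rng = np.random.default_rng(0)
s1 = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.2], [0.2, 1.0]], size=4000)
s2 = rng.multivariate_normal([2.0, 2.0], [[0.25, 0.0], [0.0, 0.25]], size=4000)
agg = cmc_aggregate([s1, s2])
```

Because s2 has the tighter covariance, its inverse-covariance weight is larger and the aggregated draws land closer to its mean, which is exactly the behavior the uniform average cannot express.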

  25. Outline
  ◮ Bayesian inference and Markov chain Monte Carlo
  ◮ MCMC is hard → New data-parallel algorithms
  ◮ VCMC: Our approach and theoretical results
  ◮ Empirical evaluation

  26. Variational Consensus Monte Carlo Goal: Choose the aggregation function to best approximate the target distribution Method: Convex optimization via variational Bayes

  27. Variational Consensus Monte Carlo
  Goal: Choose the aggregation function to best approximate the target distribution
  Method: Convex optimization via variational Bayes
  F = aggregation function, q_F = approximate distribution
  Objective: L(F) = E_{q_F}[log π(X, θ)] + H[q_F]   (likelihood term + entropy term)

  28. Variational Consensus Monte Carlo
  Goal: Choose the aggregation function to best approximate the target distribution
  Method: Convex optimization via variational Bayes
  F = aggregation function, q_F = approximate distribution
  Relaxed objective: L̃(F) = E_{q_F}[log π(X, θ)] + H̃[q_F]   (likelihood term + relaxed entropy)

  29. Variational Consensus Monte Carlo
  Goal: Choose the aggregation function to best approximate the target distribution
  Method: Convex optimization via variational Bayes
  F = aggregation function, q_F = approximate distribution
  Relaxed objective: L̃(F) = E_{q_F}[log π(X, θ)] + H̃[q_F]   (likelihood term + relaxed entropy)
  No mean field assumption

  30. Variational Consensus Monte Carlo
  Aggregate: θ = W_1 θ^(1) + W_2 θ^(2)
  ◮ Optimize over the weight matrices (⋆)
  ◮ Restrict to valid solutions when the parameter vectors are constrained

  31. Variational Consensus Monte Carlo
  Theorem (Entropy relaxation). Under mild structural assumptions, we can choose
  H̃[q_F] = c_0 + (1/K) Σ_{k=1}^K h_k(F),
  with each h_k a concave function of F, such that H[q_F] ≥ H̃[q_F]. We therefore have L(F) ≥ L̃(F).

  32. Variational Consensus Monte Carlo
  Theorem (Concavity of the variational Bayes objective). Under mild structural assumptions, the relaxed variational Bayes objective
  L̃(F) = E_{q_F}[log π(X, θ)] + H̃[q_F]
  is concave in F.

  33. Outline
  ◮ Bayesian inference and Markov chain Monte Carlo
  ◮ MCMC is hard → New data-parallel algorithms
  ◮ VCMC: Our approach and theoretical results
  ◮ Empirical evaluation

  34. Empirical evaluation
  ◮ Compare 3 aggregation strategies: uniform average; Gaussian-motivated weighted average (CMC); optimized weighted average (VCMC)
  ◮ For each algorithm A, report the approximation error of some expectation E_π[f], relative to serial MCMC:
  ε_A(f) = |E_A[f] − E_MCMC[f]| / |E_MCMC[f]|
  ◮ Preliminary speedup results
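The error metric ε_A(f) is straightforward to compute from sample-based estimates of the two expectations; a small sketch (the toy sample arrays are illustrative, not the paper's data):

```python
import numpy as np

def relative_error(approx_samples, mcmc_samples, f=lambda t: t):
    """epsilon_A(f) = |E_A[f] - E_MCMC[f]| / |E_MCMC[f]|, with each
    expectation estimated by a sample average."""
    e_a = f(approx_samples).mean(axis=0)
    e_m = f(mcmc_samples).mean(axis=0)
    return np.abs(e_a - e_m) / np.abs(e_m)

# Toy usage: approximate samples biased low relative to the MCMC baseline
err = relative_error(np.array([0.9, 1.1, 1.0]), np.array([2.0, 2.1, 1.9]))
```

With the default f(θ) = θ this is exactly the first-moment error reported in the examples that follow; passing a different f gives the other error metrics (eigenvalues, comembership probabilities, etc.).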

  35. Example 1: High-dimensional Bayesian probit regression
  #data = 100,000, d = 300
  First-moment estimation error, relative to serial MCMC (error truncated at 2.0)

  36. Example 2: High-dimensional covariance estimation
  Normal-inverse Wishart model, #data = 100,000, #dim = 100 ⇒ 5,050 parameters
  (L) First-moment estimation error; (R) eigenvalue estimation error

  37. Example 3: Mixture of 8 Gaussians in 8 dimensions
  Error relative to serial MCMC, for cluster comembership probabilities of pairs of test data points

  38. VCMC error decreases as the optimization runs longer
  Initialize VCMC with CMC weights (inverse covariance matrices)

  39. VCMC reduces CMC error at the cost of some speedup (∼2x)
  VCMC speedup is approximately linear

  40. Concluding thoughts
  Contributions:
  ◮ Convex optimization framework for Consensus Monte Carlo
  ◮ Structured aggregation accounting for constrained parameters
  ◮ Entropy relaxation
  ◮ Empirical evaluation
  Future work:
  ◮ More structured and complex (latent variable) models
  ◮ Alternate posterior factorizations and aggregation schemes
  We’d love to hear about your Bayesian inference problems!
