Precision nuclear physics


  1. Precision nuclear physics. Observable calculations are becoming increasingly precise: Hamiltonian → Calculation → Observable, compared against Experiment. What are the theory errors? [Figure: ground-state energies for even oxygen isotopes, Hergert et al., PRL 110, 242501 (2013).] Chiral effective field theory (EFT) is used to generate microscopic nuclear Hamiltonians and currents (note: plural!). Many versions (scales/schemes) are on the market.

  2. Sources of uncertainty in EFT predictions
  • Hamiltonian: truncation errors, regulator artifacts
  • Low-energy constants: error from fitting to data
  • Numerics: many-body methods, basis truncation, anything else
  Goal: the full uncertainty on the prediction.

  3. Sources of uncertainty in EFT predictions
  • Hamiltonian: truncation errors, regulator artifacts
  • Low-energy constants: error from fitting to data
  • Numerics: many-body methods, basis truncation, anything else
  Goal: the full uncertainty on the prediction. Bayesian methods treat all of these sources on an equal footing.

  4. Bayesian Uncertainty Quantification: Errors for Your EFT. Goal: full uncertainty quantification (UQ) for effective field theory (EFT) predictions using Bayesian statistics. [Figure: prior and posterior distributions for coefficients a_0 and a_1, compared with the true values.] BUQEYE Collaboration. Some BUQEYE publications on UQ for EFT:
  • "A recipe for EFT uncertainty quantification in nuclear physics", J. Phys. G 42, 034028 (2015)
  • "Quantifying truncation errors in effective field theory", Phys. Rev. C 92, 024005 (2015)
  • "Bayesian parameter estimation for effective field theories", J. Phys. G 43, 074001 (2016)
  • "Bayesian truncation errors in chiral EFT: nucleon-nucleon observables", Phys. Rev. C 96, 024003 (2017) [Editors' Suggestion]

  5. Bayesian interpretation of probability
  • Repeatable situations: rolling dice; repeatable measurements. "Based on a large number of observations of the event, here is the probability."
  • Unrepeatable situations: the probability that it will rain in Washington, D.C. tomorrow; properties of the Universe (we have exactly one sample!); the probability of a parameter a_n. "From the best of our knowledge and previous measurements: the probability lies in this range."
  This is the formulation of probability as "degree of belief". Great introduction for physicists: "Bayes in the Sky" [arXiv:0803.4089]. Beta-decay figure credit: The 2015 Long Range Plan for Nuclear Science.

  6. Why Bayes for theory errors?
  Frequentist approach: long-run relative frequency
  • Outcomes of experiments treated as random variables
  • Predict probabilities of observing various outcomes
  • Well adapted to quantities that fluctuate statistically
  • But systematic errors are problematic
  Bayesian probabilities: a pdf is a measure of the state of knowledge
  • Ideal for systematic/theory errors that do not behave stochastically
  • Assumptions and expectations encoded in prior pdfs
  • Make explicit what is usually implicit: assumptions may be applied consistently, tested, and modified in light of new information

  7. Why Bayes for theory errors? This slide repeats the frequentist/Bayesian comparison of the previous slide and adds an illustration. [Figure: Observable(x) with a 68% band vs x — pdfs for the theory uncertainty under different prior assumptions about the higher-order corrections.]

  8. Why Bayes for theory errors? The frequentist/Bayesian comparison of the previous slides, plus: there is widespread application of Bayesian approaches in theoretical physics
  • Interpretation of dark-matter searches; structure determination in condensed-matter physics; constrained curve fitting in lattice QCD
  • Is supersymmetry a "natural" approach to the hierarchy problem?
  • Estimating uncertainties in perturbative QCD (e.g., parton distributions)

  9. Joint probability for theory parameters. Here pr(x | y) is read "the probability that x is true, given y". Example: we want to "fit" parameters, i.e., find pr(a | D, k, k_max, I), where a is the vector of parameters {a_0, a_1, …, a_k}, D is the data, k is the truncation order, k_max labels the omitted orders, and I is any other information.
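The conditional-probability notation can be checked with a toy example (not from the slides): given any joint table pr(x, y), the conditional pr(x | y) is obtained by dividing out the marginal pr(y), column by column. All numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical joint distribution pr(x, y) on a 3x2 grid:
# rows index values of x, columns index values of y.
pr_xy = np.array([[0.10, 0.05],
                  [0.30, 0.15],
                  [0.20, 0.20]])

pr_y = pr_xy.sum(axis=0)        # marginal pr(y)
pr_x_given_y = pr_xy / pr_y     # conditional pr(x | y), one column per y

# Each column of pr(x | y) is itself a normalized distribution over x:
print(pr_x_given_y.sum(axis=0))   # -> [1. 1.]
```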

  10. Bayesian rules of probability as principles of logic.
  1: Sum rule. If the set {x_i} is exhaustive and exclusive,
    ∑_i pr(x_i | I) = 1 (discrete)   ⟷   ∫ dx pr(x | I) = 1 (continuous)
  • cf. complete and orthonormal
  • implies marginalization (cf. inserting a complete set of states):
    pr(x | I) = ∑_j pr(x, y_j | I)   ⟷   pr(x | I) = ∫ dy pr(x, y | I)
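A minimal numerical sketch of the sum rule and marginalization: a correlated 2D Gaussian (an arbitrary stand-in, not from the talk) plays the joint pdf pr(x, y | I), and the integral over y is done as a grid sum.

```python
import numpy as np

x = np.linspace(-5, 5, 401)
y = np.linspace(-5, 5, 401)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")

rho = 0.6  # correlation coefficient (arbitrary choice for illustration)
norm = 1.0 / (2 * np.pi * np.sqrt(1 - rho**2))
pr_xy = norm * np.exp(-(X**2 - 2 * rho * X * Y + Y**2) / (2 * (1 - rho**2)))

pr_x = pr_xy.sum(axis=1) * dy   # marginalization: pr(x) = ∫ dy pr(x, y)
print(pr_x.sum() * dx)          # sum rule: ≈ 1.0

# The marginal of this bivariate Gaussian is a standard 1D Gaussian:
print(np.allclose(pr_x, np.exp(-x**2 / 2) / np.sqrt(2 * np.pi), atol=1e-5))
```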

  11. Bayesian rules of probability as principles of logic (continued).
  1: Sum rule, as above: ∑_i pr(x_i | I) = 1 and ∫ dx pr(x | I) = 1, which imply marginalization, pr(x | I) = ∑_j pr(x, y_j | I) and pr(x | I) = ∫ dy pr(x, y | I).
  2: Product rule. Expanding a joint probability of x and y:
    pr(x, y | I) = pr(x | y, I) pr(y | I) = pr(y | x, I) pr(x | I)
  • If x and y are mutually independent, pr(x | y, I) = pr(x | I), so pr(x, y | I) → pr(x | I) × pr(y | I)
  • Rearranging the equality gives Bayes' theorem:
    pr(x | y, I) = pr(y | x, I) pr(x | I) / pr(y | I)
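The product rule and Bayes' theorem can be exercised on a made-up two-outcome example (the coin setup and every number below are hypothetical, chosen only to make the formulas concrete):

```python
import numpy as np

# Hypothetical: x indexes "coin is biased" vs "coin is fair",
# y is the event "one flip came up heads".
pr_x = np.array([0.1, 0.9])           # prior pr(x | I)
pr_y_given_x = np.array([0.8, 0.5])   # likelihood pr(y | x, I)

# Product rule: pr(x, y | I) = pr(y | x, I) pr(x | I)
pr_xy = pr_y_given_x * pr_x
pr_y = pr_xy.sum()                    # marginal pr(y | I), by the sum rule

# Bayes' theorem: pr(x | y, I) = pr(y | x, I) pr(x | I) / pr(y | I)
pr_x_given_y = pr_xy / pr_y
print(pr_x_given_y)                   # posterior, roughly [0.151 0.849]
```

Note that the denominator pr(y | I) is just the normalization that makes the posterior sum to one.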

  12. Interaction between data and prior.
    Posterior ∝ Likelihood × Prior:
    pr(a | D, k, k_max) ∝ pr(D | a, k, k_max) × pr(a | k, k_max)
  [Figure: 1D projections of a_1 and a_3 for a naturalness prior. In one panel the likelihood overwhelms the prior; in the other, the prior suppresses an unconstrained likelihood.]
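The two regimes on this slide can be sketched numerically: a unit-width Gaussian stands in for the naturalness prior on a single coefficient a, and two invented Gaussian likelihoods (one tight, one nearly flat) show when the data or the prior dominates. All numbers are assumptions for illustration, not values from the talk.

```python
import numpy as np

a = np.linspace(-10, 10, 2001)
da = a[1] - a[0]

def gaussian(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

prior = gaussian(a, 0.0, 1.0)   # naturalness prior: a is O(1)

def posterior(likelihood):
    post = likelihood * prior   # posterior ∝ likelihood × prior
    return post / (post.sum() * da)   # normalize numerically

# Well-constrained coefficient: data pin a near 1.3 with width 0.1;
# the likelihood overwhelms the prior.
post_tight = posterior(gaussian(a, 1.3, 0.1))

# Unconstrained coefficient: data barely care (width 50);
# the naturalness prior suppresses the nearly flat likelihood.
post_loose = posterior(gaussian(a, 1.3, 50.0))

print(a[np.argmax(post_tight)])   # close to the likelihood peak, ~1.3
print(a[np.argmax(post_loose)])   # pulled back toward 0 by the prior
```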
