Modeling the Universe: Interfacing Theory, Simulations, Statistical Methods, and Observations. Tim Eifler (JPL/Caltech, University of Arizona)
The Challenge: from reduced data and catalogs to summary statistics
Introducing CosmoLike
Idea: a consistent, multi-probe likelihood analysis software framework including
• Realistic statistical error bars (cross-probe covariances)
• Cross-correlations of observables/systematics
• Efficient treatment of nuisance parameters
[Schematic linking the following ingredients:]
• Probes: weak lensing, galaxy clustering, clusters, CMB, CMB-LSS correlations
• Numerical simulations / emulators
• Systematics (photo-z, shape uncertainties)
• Astrophysics (intrinsic alignment, baryonic physics)
• Galaxy bias models (linear, quadratic, HOD)
• Multi-probe covariances / hybrid estimators
• Gaussianization of summary statistics
• Likelihood-free inference
• Goal: explore fundamental physics (cosmic acceleration, neutrinos, tests of gravity)
Project 1: Simulate a Multi-Probe Likelihood Analysis for LSST (Theory+Sims+Stats -> Obs)
CosmoLike: cosmological likelihood analyses for photometric galaxy surveys
CosmoLike release paper (www.cosmolike.info): Krause & TE 2017
Example Data Vector and Systematics
• Weak lensing (cosmic shear): 10 tomography bins; 25 ℓ bins, 25 < ℓ < 5000. Systematics: shear calibration, photo-z (sources), IA, baryons.
• Galaxy clustering: 4 redshift bins (0.2-0.4, 0.4-0.6, 0.6-0.8, 0.8-1.0); compare two samples (σ_z < 0.04, redMaGiC); linear + quadratic bias only, with ℓ bins restricted to R > 10 Mpc/h; HOD modeling going to R > 0.1 Mpc/h. Systematics: photo-z (lenses), b_1, b_2, ...
• Galaxy-galaxy lensing: galaxies from clustering (as lenses) with shear sources.
• Clusters: number counts + shear profile; so far 8 richness and 4 z-bins (same as clustering); tomographic cluster lensing, 500 < ℓ < 10000. Systematics: N-M relation, c-M relation, off-centering.
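A minimal sketch (not CosmoLike code) of how the cosmic-shear part of this data vector could be laid out. It takes the slide's numbers (10 tomography bins, 25 ℓ bins, 25 < ℓ < 5000) and assumes logarithmically spaced ℓ bins, which is a common but here unverified choice; all variable names are illustrative.

```python
# Sketch: laying out the cosmic-shear block of the data vector described above.
import itertools
import numpy as np

n_tomo = 10                    # source tomography bins (from the slide)
n_ell = 25                     # ell bins (from the slide)
ell_min, ell_max = 25, 5000    # ell range (from the slide)

# Bin edges and representative (geometric-mean) centers, assuming log spacing.
ell_edges = np.geomspace(ell_min, ell_max, n_ell + 1)
ell_centers = np.sqrt(ell_edges[:-1] * ell_edges[1:])

# Unique tomographic bin pairs (i <= j) for the auto- and cross-spectra.
bin_pairs = list(itertools.combinations_with_replacement(range(n_tomo), 2))

print(f"{len(bin_pairs)} bin pairs x {n_ell} ell bins = "
      f"{len(bin_pairs) * n_ell} cosmic-shear data points")
# -> 55 bin pairs x 25 ell bins = 1375 cosmic-shear data points
```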
CosmoLike - “Inner Workings” (Krause & Eifler 2017)
[Pipeline schematic; the modules and the quantities they pass along:]
• cosmo3d.c: cosmological parameters → distances, growth factor D(k,z), transfer function T(k,z), linear power spectrum P_lin(k,z); Coyote Universe emulator for the non-linear P_nl(k,z) (non-linear regime, baryons, cluster finding, galaxy formation informed by simulations).
• halo.c: halo properties c(M,z), b(M,z), n(M,z); cluster HOD and bias model.
• clusters.c: collapse density δ_c(z), peak height ν(M,z), mass-observable scaling relation M_obs(M), selection function.
• systematics.c: intrinsic alignments, photo-z model, shear calibration, ...
• redshift.c: redshift distributions n(z).
• cosmo2d.c: Limber-approximation projection functions → angular power spectra C_XY(ℓ; z_i, z_j) and cluster number counts N(M_obs; z_i).
• Covariance Cov(z_i, z_j, z_k, z_l; ℓ_1, ℓ_2), including non-Gaussian terms → Likelihood.
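To make the projection step concrete, here is a rough Python sketch of the Limber approximation named in the schematic. The kernels, the placeholder P_nl, and the integration grid are stand-ins chosen for the example; this is not the cosmo2d.c implementation.

```python
# Sketch of a Limber projection: C_XY(ell) from a 3D power spectrum and two kernels.
import numpy as np

def limber_cl(ell, chi, q_x, q_y, p_nl):
    """C_XY(ell) = integral dchi q_X(chi) q_Y(chi) / chi^2 * P_nl(k=(ell+0.5)/chi).

    chi      : comoving-distance grid [Mpc/h] for the integration
    q_x, q_y : projection kernels evaluated on that grid
    p_nl     : callable p_nl(k, chi) returning the non-linear power spectrum
    """
    k = (ell + 0.5) / chi
    integrand = q_x * q_y / chi**2 * p_nl(k, chi)
    # Simple trapezoidal integration over chi.
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(chi))

# Toy usage with placeholder kernels and a placeholder power spectrum.
chi = np.linspace(50.0, 3000.0, 400)                       # assumed Mpc/h grid
q = np.exp(-0.5 * ((chi - 1500.0) / 500.0) ** 2)            # fake lensing-like kernel
p_fake = lambda k, chi: 1e4 * k / (1.0 + (k / 0.1) ** 3)    # fake P_nl(k)
cls = np.array([limber_cl(ell, chi, q, q, p_fake)
                for ell in np.geomspace(25, 5000, 25)])
```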
Multi-Probe Forecasts: Covariance
[Figure: joint covariance matrix with blocks for cosmic shear, galaxy-galaxy lensing, galaxy clustering, clusters, and cluster lensing; 7+ million elements.]
Details: Krause & TE '17
The Power of Combining Probes
7 cosmological parameters (Ω_m, σ_8, h, w_0, w_a, ...) and 49 nuisance parameters:
• Shear calibration
• Lens + source photo-z
• Linear galaxy bias
• Cluster mass calibration
• Intrinsic alignments
[Figure: marginalized constraints in the Ω_m, σ_8, h, w_0, w_a planes for clustering, cosmic shear, clusterN, 3x2pt, and 3x2pt+clusterN+clusterWL.]
Zoom into the w_0-w_a plane
• Very non-linear gain in constraining power
• The most stringent requirements on numerical simulations, photo-z, shear calibration, etc. flow from the multi-probe statistical limits
[Figure: w_0-w_a contours for clustering, cosmic shear, clusterN, 3x2pt, and 3x2pt+clusterN+clusterWL.]
Project 2: Exploring WFIRST survey strategies (Theory+Sims+Stats -> Obs)
Project within the WFIRST "Cosmology with the High Latitude Survey" Science Investigation Team
TE et al., in prep
Individual vs. multi-probe WFIRST analysis
Modified gravity, all-in systematics: 76 dimensions (7 cosmology, 69 systematics)
WFIRST - LSST synergies
Possible WFIRST extension of 1.6 years overlapping with LSST
Project 3: New statistical methods to reduce super-computing needs (Theory+Stats -> Sims)
Precision matrix expansion: efficient use of numerical simulations in estimating errors on cosmological parameters
Friedrich & TE 2018
The Problem: Inverse Covariance Estimation
• The analytical covariance model relies on approximations that might be too imprecise for an LSST Y10 data set.
• Estimating the covariance from numerical simulations (brute force) requires 10^5-10^6 realizations of an LSST Year-10-like survey to shield against noise in the estimator.
Why? The estimated inverse covariance is not the inverse of the estimated covariance, and the high dimensionality of the data vector means the covariance has many elements.
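A small self-contained demo (not part of the slides' analysis) of the statement that the estimated inverse covariance is not the inverse of the estimated covariance: with N_s simulations of an N_d-dimensional data vector, the naive inverse over-estimates the precision matrix by roughly ν/(ν − N_d − 1), ν = N_s − 1. All numbers below are made up for illustration.

```python
# Illustration of the noise-induced bias of inv(C_hat).
import numpy as np

rng = np.random.default_rng(0)
n_d, n_s, n_trials = 50, 100, 200       # data-vector size, #sims, #repeats
true_cov = np.eye(n_d)                  # simple truth for the demo

ratios = []
for _ in range(n_trials):
    sims = rng.multivariate_normal(np.zeros(n_d), true_cov, size=n_s)
    c_hat = np.cov(sims, rowvar=False)  # sample covariance (unbiased for C)
    psi_hat = np.linalg.inv(c_hat)      # naive precision-matrix estimate
    ratios.append(np.trace(psi_hat) / np.trace(np.linalg.inv(true_cov)))

print(f"mean inflation of the precision matrix: {np.mean(ratios):.3f}")
print(f"Hartlap-style prediction nu/(nu - N_d - 1) with nu = N_s - 1: "
      f"{(n_s - 1) / (n_s - 1 - n_d - 1):.3f}")
```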
Idea: Estimate the inverse directly

Likelihood of parameters \pi given a measured data vector \hat{\xi}:
\chi^2(\pi \,|\, \hat{\xi}, C) = (\hat{\xi} - \xi[\pi])^T \, C^{-1} \, (\hat{\xi} - \xi[\pi]),
\qquad
p(\pi \,|\, \hat{\xi}, C) \sim \exp\!\left[-\tfrac{1}{2}\chi^2(\pi \,|\, \hat{\xi}, C)\right] p(\pi).

Standard estimator:
\hat{\Psi} = \frac{\nu - N_d - 1}{\nu}\, \hat{C}^{-1}.

New idea: include theory information in the estimator. Split the covariance as
C = A + B = M + (B - B_m) = (1 + X)\, M,
\qquad \text{with } M = A + B_m \text{ and } X := (B - B_m)\, M^{-1}.

Invert and expand as a power series:
C^{-1} = M^{-1} \sum_{k=0}^{\infty} (-1)^k X^k = M^{-1}\left[\,1 - X + X^2 + O(X^3)\,\right].

Build the estimator from this truncated series (the second-order form is written out on the "New estimator" slide below): only matrix multiplications, no inversion of estimated quantities.
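A quick numerical sanity check of the series inversion above, using small random matrices; the sizes and perturbation amplitudes are purely illustrative.

```python
# Verify that M^{-1}(1 - X + X^2) approximates C^{-1} when B - B_m is small.
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = np.eye(n)                                  # stand-in "model" part
B_m = 0.1 * np.eye(n)                          # model for the estimated part
B = B_m + 0.02 * rng.standard_normal((n, n))   # "true" part, close to the model
B = 0.5 * (B + B.T)                            # keep it symmetric

M = A + B_m
C = A + B
X = (B - B_m) @ np.linalg.inv(M)               # X := (B - B_m) M^{-1}

approx = np.linalg.inv(M) @ (np.eye(n) - X + X @ X)
print(np.max(np.abs(approx - np.linalg.inv(C))))   # small residual, O(X^3)
```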
Standard estimator

\hat{\Psi} \;=\; \frac{\nu - N_d - 1}{\nu}\; \hat{C}^{-1},
\qquad
\hat{C} \;:=\; \frac{1}{\nu} \sum_{i=1}^{N_s} \big(\hat{\xi}_i - \bar{\xi}\big)\big(\hat{\xi}_i - \bar{\xi}\big)^T,
\qquad \nu = N_s - 1.

Inverting quantities with "hats" is dangerous.
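A direct transcription of this standard estimator in Python/NumPy, as an illustrative sketch (variable names are assumptions, not taken from any released code).

```python
# Standard precision-matrix estimator: sample covariance + (nu - N_d - 1)/nu factor.
import numpy as np

def standard_precision_estimator(sim_data_vectors):
    """sim_data_vectors: array of shape (N_s, N_d), one simulated data vector per row.

    Requires N_s > N_d + 2 so that the sample covariance is invertible and the
    de-biasing prefactor is positive.
    """
    n_s, n_d = sim_data_vectors.shape
    nu = n_s - 1
    xi_bar = sim_data_vectors.mean(axis=0)
    delta = sim_data_vectors - xi_bar
    c_hat = delta.T @ delta / nu                  # hat{C}
    hartlap = (nu - n_d - 1) / nu                 # de-biasing prefactor
    return hartlap * np.linalg.inv(c_hat)         # hat{Psi}
```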
New estimator

\hat{\Psi}_{\rm 2nd} \;=\; M^{-1}
\;-\; M^{-1}\big(\hat{B} - B_m\big) M^{-1}
\;+\; M^{-1} B_m M^{-1} B_m M^{-1}
\;-\; M^{-1} \hat{B} M^{-1} B_m M^{-1}
\;-\; M^{-1} B_m M^{-1} \hat{B} M^{-1}
\;+\; \frac{\nu^2}{\nu^2 + \nu - 2}\, M^{-1}\Big[\hat{B} M^{-1} \hat{B} - \tfrac{1}{\nu}\,\hat{B}\,\mathrm{tr}\big(M^{-1}\hat{B}\big)\Big] M^{-1}

No more inversion of "hat quantities"…
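And a transcription of this second-order estimator, again as an illustrative sketch with assumed variable names (M and B_m are noise-free model matrices, B_hat is estimated from the simulations, nu = N_s − 1); it is not the authors' released code.

```python
# Second-order precision-matrix-expansion estimator, transcribed from the formula above.
import numpy as np

def pme_second_order(M, B_m, B_hat, nu):
    """Return hat{Psi}_2nd; the only inversion is of the noise-free model matrix M."""
    Mi = np.linalg.inv(M)
    dB = B_hat - B_m
    first_order = -Mi @ dB @ Mi
    second_order = (
        Mi @ B_m @ Mi @ B_m @ Mi
        - Mi @ B_hat @ Mi @ B_m @ Mi
        - Mi @ B_m @ Mi @ B_hat @ Mi
        + nu**2 / (nu**2 + nu - 2)
          * Mi @ (B_hat @ Mi @ B_hat
                  - B_hat * np.trace(Mi @ B_hat) / nu) @ Mi
    )
    return Mi + first_order + second_order
```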
New estimator performance
Instead of >10^5 simulations, the new estimator requires only ~2000 numerical simulations (LSST case).
Given that one simulation costs ~1M CPUh, at 1 cent per CPUh the new method reduces the cost from ~$1B to ~$20M (-> fund theorists!).
Next step: data compression.