Bayesian Probabilistic Numerical Methods: Numerical Disintegration and Pipelines
Jon Cockayne
June 6, 2017
(Re)introduction

“Prior”: u ∼ µ
“Information Equation”: A(u) = a
“Posterior”: µ^a
Q1: How can we access µ^a?
(Re)introduction

Unless probabilistic numerical methods “agree” about what their uncertainty means, they cannot be composed coherently.
Modelling Electro-Mechanics in the Heart
Modelling Electro-Mechanics in the Heart

[Figure: a sequential calibration pipeline for a cardiac calcium model. Sub-models (NCX, I_CaL, SERCA, RyR) are fitted one after another, each to a portion of the measured Ca transient (caffeine transient, voltage-clamp traces, field stimulation) with the flux through the previously fitted components subtracted.]
Q2: When is it “legal” to compose Bayesian PNM in pipelines?
Numerical Disintegration
Numerical Disintegration

Recall the issue:
X^a = { u ∈ X : A(u) = a }, µ(X^a) = 0
which means the Radon–Nikodym derivative dµ^a/dµ does not exist.
Our Approach

Design an algorithm for approximately sampling µ^a.

Two sources of error:
• Intractability of µ^a (“numerical disintegration”)
• Intractability of non-Gaussian priors (“prior truncation”)
Three Considerations

• Numerical Disintegration
• Prior Truncation
• Sampler Convergence
Numerical Disintegration

Introduce the δ-relaxed measure µ^a_δ:
dµ^a_δ/dµ ∝ ϕ( ‖A(u) − a‖_A / δ )
where ϕ : ℝ⁺ → ℝ⁺ is a relaxation function chosen so that:
• ϕ(0) = 1
• ϕ(r) → 0 as r → ∞.
Numerical Disintegration: Intuition

“Ideal” Radon–Nikodym derivative:
“ dµ^a/dµ ∝ I(u ∈ X^a) ”
Example Relaxation Functions

• ϕ(r) = I(r < 1): uniform noise over B_δ(a)
• ϕ(r) = exp(−r²): Gaussian noise with s.d. ∝ δ
Tempering for Sampling µ^a_δ

To sample µ^a_δ we take inspiration from rare-event simulation and use tempering schemes to sample the posterior.
Set δ_0 > δ_1 > … > δ_N and consider µ^a_{δ_0}, µ^a_{δ_1}, …, µ^a_{δ_N}.
• µ^a_{δ_0} is the prior and easy to sample.
• µ^a_{δ_N} has δ_N close to zero and is hard to sample.
• Intermediate distributions define a “ladder” which takes us from prior to posterior.
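As a concrete illustration (not taken from the talk), the ladder above can be sketched as a small sequential Monte Carlo sampler on a toy problem: prior µ = N(0, 1), information operator A(u) = u with a = 1, and the Gaussian relaxation ϕ(r) = exp(−r²). The operator, the ladder of δ values, and all tuning constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def A(u):
    return u  # toy information operator (an illustrative assumption)

def log_relaxation(u, a, delta):
    # log phi(||A(u) - a|| / delta) with phi(r) = exp(-r^2)
    return -((np.abs(A(u) - a) / delta) ** 2)

def temper(a=1.0, deltas=(4.0, 2.0, 1.0, 0.5, 0.25, 0.1), n=4000, mh_steps=10):
    u = rng.standard_normal(n)  # exact samples from the prior mu = N(0, 1)
    prev_delta = None           # "delta_0 = infinity" rung: the prior itself
    for delta in deltas:
        # Reweight samples from the previous rung towards mu^a_delta, then resample
        logw = log_relaxation(u, a, delta)
        if prev_delta is not None:
            logw = logw - log_relaxation(u, a, prev_delta)
        w = np.exp(logw - logw.max())
        u = rng.choice(u, size=n, p=w / w.sum())
        # Rejuvenate with a few random-walk Metropolis steps targeting mu^a_delta
        for _ in range(mh_steps):
            prop = u + 0.3 * rng.standard_normal(n)
            log_alpha = (log_relaxation(prop, a, delta) - 0.5 * prop ** 2) - (
                log_relaxation(u, a, delta) - 0.5 * u ** 2
            )
            u = np.where(np.log(rng.random(n)) < log_alpha, prop, u)
        prev_delta = delta
    return u

samples = temper()
```

Each rung reweights samples from the previous rung, resamples, and rejuvenates with a few Metropolis steps. For this toy target the relaxed posterior at δ = 0.1 is Gaussian with mean 200/201 ≈ 0.995, so the final sample mean should land close to that value.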
Example: Poisson’s Equation

Consider
−d²/dx² u(x) = sin(2πx), x ∈ (0, 1)
u(x) = 0, x = 0, x = 1
• Use a Gaussian prior on u(x).
• Impose boundary conditions explicitly.
• Impose interior conditions at x = 1/3, x = 2/3.
• Construct the posterior using ND with δ ∈ {1.0, 10⁻², 10⁻⁴}.
• Use ϕ(r) = exp(−r²).
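To make the relaxed posterior concrete, the sketch below (an illustration, not the talk’s implementation) evaluates the relaxation term ϕ(‖A(u) − a‖_A / δ) for this example. It assumes u is expanded in a sine basis, u(x) = Σ_k c_k sin(kπx), so that the boundary conditions hold exactly and A(u) = −u″ is available in closed form; the basis and the Euclidean norm over the two interior points are assumptions.

```python
import numpy as np

def poisson_misfit(coeffs, delta, x_obs=(1 / 3, 2 / 3)):
    # Relaxation term phi(||A(u) - a|| / delta) with phi(r) = exp(-r^2),
    # u(x) = sum_k c_k sin(k pi x), and A(u) = -u'' at interior points.
    x = np.asarray(x_obs)
    k = np.arange(1, len(coeffs) + 1)
    # -d^2/dx^2 sin(k pi x) = (k pi)^2 sin(k pi x)
    Au = ((k * np.pi) ** 2 * np.asarray(coeffs)) @ np.sin(np.pi * np.outer(k, x))
    a = np.sin(2 * np.pi * x)          # right-hand side at the interior points
    r = np.linalg.norm(Au - a)
    return np.exp(-((r / delta) ** 2))
```

For the exact solution u(x) = sin(2πx)/(2π)² (coefficient 1/(2π)² on the second basis function) the misfit vanishes, so the relaxation term equals 1 for any δ; for other coefficients it decays as δ shrinks.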
Example: Poisson’s Equation

In what follows, on the left are samples from the posterior µ^a_δ in X-space; on the right are contours of ϕ(‖A(u) − a‖_A / δ) in A-space.
Example: Poisson’s Equation

[Figures: posterior samples in X-space (left) and relaxation contours in A-space (right) for δ = 1.0, δ = 0.01 and δ = 0.0001.]
Three Considerations

• Numerical Disintegration
• Prior Truncation
• Sampler Convergence
Prior Construction

Assume X has a countable basis {ϕ_i}, i = 0, …, ∞. Then for any u ∈ X,
u(x) = Σ_{i=0}^∞ γ_i ξ_i ϕ_i(x)
Different ξ_i require different γ for almost-sure convergence:
• ξ_i IID Uniform: γ ∈ ℓ¹
• ξ_i IID Gaussian: γ ∈ ℓ²
• ξ_i IID Cauchy: γ ∈ ℓ²
For practical computation we truncate to N terms:
u_N(x) = Σ_{i=0}^N γ_i ξ_i ϕ_i(x)
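A minimal sketch of the truncated construction, assuming the Gaussian case ξ_i IID N(0, 1) with γ_i = (i + 1)^{−2} ∈ ℓ², and a sine basis on [0, 1]; the basis and decay rate are illustrative choices, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_truncated_prior(N=50, n_grid=200, decay=2.0):
    # One draw of u_N(x) = sum_{i=0}^{N} gamma_i xi_i phi_i(x)
    x = np.linspace(0.0, 1.0, n_grid)
    k = np.arange(1, N + 2)                  # basis indices 1..N+1
    basis = np.sin(np.pi * np.outer(k, x))   # phi_i(x) = sin((i+1) pi x)
    gamma = k.astype(float) ** (-decay)      # gamma in l^2 whenever decay > 1/2
    xi = rng.standard_normal(N + 1)          # IID Gaussian coefficients
    return x, (gamma * xi) @ basis

x, u = sample_truncated_prior()
```

Because every basis function vanishes at the endpoints, each draw automatically satisfies u(0) = u(1) = 0, which suits the Poisson example above.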
Three Considerations

• Numerical Disintegration
• Prior Truncation
• Sampler Convergence
Convergence, but in what metric?

All results show weak convergence framed in terms of an abstract integral probability metric¹:
d_F(ν, ν′) = sup_{‖f‖_F ≤ 1} | ν(f) − ν′(f) |
Examples: total variation, Wasserstein.
Results are generic to A(u), µ.

¹ Müller [1997]
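For intuition, one member of this family is easy to compute: taking F to be the 1-Lipschitz functions makes d_F the Wasserstein-1 distance, which for two equal-size one-dimensional samples reduces to the mean absolute difference of the sorted values. A minimal sketch:

```python
import numpy as np

def wasserstein1_1d(xs, ys):
    # Empirical Wasserstein-1 distance between equal-size 1-D samples:
    # sup over 1-Lipschitz f of |nu(f) - nu'(f)| = mean |sorted xs - sorted ys|
    xs, ys = np.sort(np.asarray(xs)), np.sort(np.asarray(ys))
    assert xs.shape == ys.shape
    return float(np.mean(np.abs(xs - ys)))
```

For example, shifting a sample by a constant c moves it a distance |c| in this metric, while reordering a sample leaves the distance at zero.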
Convergence of µ^a_δ

Theorem. Assume that
d_F(µ^a, µ^{a′}) ≤ C_µ ‖a − a′‖^α
for some constants C_µ, α and A_#µ-almost-all a, a′ ∈ A. Then, for small δ,
d_F(µ^a_δ, µ^a) ≤ C_µ (1 + C_ϕ) δ^α
for A_#µ-almost-all a ∈ A.
Total Error

Denote by µ^a_{δ,N} the posterior distribution given by
dµ^a_{δ,N}/dµ ∝ ϕ( ‖A ∘ P_N(u) − a‖_A / δ )