
Verifying Differentially Private Bayesian Inference
Marco Gaboardi, University of Dundee
Joint work with G. Barthe, G. P. Farina, E. J. Gallego Arias, and A. Gordon
Differential Privacy vs Probabilistic Inference


  1. HOARe²: Relational Refinement Types. A program P is typed against a relational precondition on pairs of inputs and a relational postcondition on pairs of outputs: P : {x | Pre(x₁, x₂)} → {y | Post(y₁, y₂)}. Example, monotonicity of the exponential: exp : {x | x₁ ≤ x₂} → {y | y₁ ≤ y₂}.
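Outside the type system, a relational spec like the one for exp can at least be probed by randomized testing: sample input pairs satisfying the precondition and check the postcondition on the outputs. A minimal Python sketch (check_monotone is my name, not part of HOARe²):

```python
import math
import random

def check_monotone(f, trials=1000, lo=-10.0, hi=10.0):
    """Randomized check of the relational spec
    f : {x | x1 <= x2} -> {y | y1 <= y2}:
    for sampled pairs with x1 <= x2, require f(x1) <= f(x2)."""
    for _ in range(trials):
        x1, x2 = sorted(random.uniform(lo, hi) for _ in range(2))
        if not f(x1) <= f(x2):
            return False
    return True

print(check_monotone(math.exp))  # exp satisfies the spec
```

Such a test can refute a relational spec but never prove it; the point of HOARe² is that the type derivation gives the proof.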

  2. HOARe²: Higher-Order Approximate Relational Refinement Types for DP. Components: relational refinement types; higher-order refinements for reasoning about DP; a partiality monad; semantic subtyping; approximate equivalence for distributions. (Barthe et al., POPL'15)

  3. Lifting of P: a relation P(x₁, x₂) on values lifts to a relation P*(x₁, x₂) over distributions.

  4. Lifting of P. Given a distribution μ₁ over A and a distribution μ₂ over B: μ₁ P* μ₂ iff there exists a distribution μ over A×B such that: μ(x) > 0 implies x ∈ P; and π₁μ ≤ μ₁ and π₂μ ≤ μ₂ (the marginals of μ are dominated by μ₁ and μ₂).

  5. (ε, δ)-Lifting of P. Given a distribution μ₁ over A and a distribution μ₂ over B: μ₁ P*⟨ε,δ⟩ μ₂ iff there exists a distribution μ over A×B such that: μ(x) > 0 implies x ∈ P; π₁μ ≤ μ₁ and π₂μ ≤ μ₂; and max_A(πᵢμ(A) − e^ε·μᵢ(A), μᵢ(A) − e^ε·πᵢμ(A)) ≤ δ.
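For discrete distributions the max over events A in the last condition is attained by summing the pointwise positive gaps, so the smallest admissible δ for a fixed ε is directly computable. A Python sketch (names are mine) of one direction of the condition, checked on randomized response over a single bit; the full two-sided check takes the max of both directions:

```python
import math

def dp_delta(mu1, mu2, eps):
    """Smallest delta with mu1(A) <= e^eps * mu2(A) + delta for every
    event A, for discrete distributions given as {outcome: prob} dicts.
    The sup over events is attained by summing the positive gaps."""
    return sum(max(0.0, mu1.get(x, 0.0) - math.exp(eps) * mu2.get(x, 0.0))
               for x in set(mu1) | set(mu2))

# Randomized response on a bit with epsilon = ln 3:
# report the true bit w.p. 3/4, the flipped bit w.p. 1/4.
mu_on_0 = {0: 0.75, 1: 0.25}
mu_on_1 = {0: 0.25, 1: 0.75}
eps = math.log(3)
print(dp_delta(mu_on_0, mu_on_1, eps))  # ~0: pure eps-DP, up to float rounding
```

At ε = 0 the same function returns the total variation gap, which for this pair is 0.5.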

  6. Verifying Differential Privacy. If we can derive C : {x : db | d(x₁, x₂) ≤ 1} → M⟨ε,δ⟩{y : O | y₁ =* y₂}, then C is (ε, δ)-differentially private.

  7. Programming Languages for Differentially Private Probabilistic Inference: programming-language tools at the intersection of differential privacy and probabilistic inference.

  10. Adding noise: probabilistic program + noise = differentially private program.

  13. Adding noise: the probabilistic program maps a prior distribution to a posterior distribution.

  19. Adding noise: probabilistic inference computes the posterior distribution from the prior via the probabilistic program.

  23. Adding noise on the data: perturb the data before probabilistic inference maps the prior to the posterior.

  24. An example:

function privBerInput (l: B list) (p1: R) (p2: R) : M[(0,1)] {
  let function vExp (l: B list) : M[B list] {
    match l with
    | nil   -> mreturn nil
    | x::xs -> coercion (exp eps ((0,0)->1, (0,1)->0, (1,1)->1, (1,0)->0) x) :: (vExp xs)
  } in
  mlet nl = (vExp l) in                      (* Noise: perturb each input bit *)
  let prior = mreturn (beta (p1, p2)) in
  let function Ber (l: B list) (p: M[(0,1)]) : M[(0,1)] {
    match l with
    | nil   -> ran (p)
    | x::xs -> observe y => y = x in (Ber xs p)
  } in
  mreturn (infer (Ber nl prior))
}
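For intuition, here is a hypothetical Python rendering of the same pipeline: randomized response plays the role of the per-bit exponential mechanism in vExp (keep probability e^ε/(e^ε + 1), one common calibration; the slide's scoring may differ by a constant in the exponent), and the conjugate Beta-Bernoulli update stands in for infer (Ber nl prior). All names are mine, not part of the slide's language:

```python
import math
import random

def randomized_response(bit, eps):
    """Keep the bit w.p. e^eps/(e^eps+1), flip it otherwise; this
    plays the role of the per-element noise step in vExp."""
    keep = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < keep else 1 - bit

def priv_ber_input(data, p1, p2, eps):
    """Noise the inputs, then do exact Beta-Bernoulli inference:
    a Beta(p1, p2) prior updated with the noised observations,
    standing in for `infer (Ber nl prior)`."""
    noised = [randomized_response(x, eps) for x in data]
    ones = sum(noised)
    return (p1 + ones, p2 + len(noised) - ones)  # posterior Beta parameters

a, b = priv_ber_input([1, 0, 1, 1], 2.0, 2.0, eps=1.0)
```

Because the noise is applied to the data before inference, privacy follows from the randomized response step alone; the posterior update is post-processing.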

  26. Adding noise on the output: run probabilistic inference from the prior to the posterior, then perturb the result.

  27. DP for Probabilistic Programs: two ways to release the posterior distribution: releasing the parameters of the posterior distribution, or sampling from the distribution.

  31. A distance over distributions. An example:

function privBerInput (l: B list) (p1: R) (p2: R) : M[(0,1)] {
  let function hellingerDistance (a0: R) (b0: R) (a1: R) (b1: R) : R {
    let gamma (r: R) = (r-1)! in
    let betaf (a: R) (b: R) = (gamma(a) * gamma(b)) / gamma(a+b) in
    let num   = betaf ((a0+a1)/2.0) ((b0+b1)/2.0) in
    let denom = Math.Sqrt ((betaf a0 b0) * (betaf a1 b1)) in
    Math.Sqrt (1.0 - (num/denom))
  } in
  let function score (input: M[(0,1)]) (output: M[(0,1)]) : R {
    let beta(a0,b0) = input in
    let beta(a1,b1) = output in
    (-1.0) * (hellingerDistance a0 b0 a1 b1)
  } in
  let prior = mreturn (beta (p1, p2)) in
  let function Ber (l: B list) (p: M[(0,1)]) : M[(0,1)] {
    match l with
    | nil   -> ran (p)
    | x::xs -> observe y => y = x in (Ber xs p)
  } in
  exp eps score (infer (Ber l prior))        (* Noise: exponential mechanism *)
}
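The hellingerDistance score translates directly to Python. This sketch uses math.lgamma in place of the slide's factorial-based gamma so that non-integer Beta parameters work and large parameters do not overflow (function names are mine):

```python
import math

def log_beta(a, b):
    # log B(a, b) via lgamma: valid for non-integer parameters,
    # and numerically safe where factorials would overflow
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def hellinger_beta(a0, b0, a1, b1):
    """Hellinger distance between Beta(a0, b0) and Beta(a1, b1):
    sqrt(1 - B((a0+a1)/2, (b0+b1)/2) / sqrt(B(a0,b0) * B(a1,b1)))."""
    log_num = log_beta((a0 + a1) / 2.0, (b0 + b1) / 2.0)
    log_den = 0.5 * (log_beta(a0, b0) + log_beta(a1, b1))
    return math.sqrt(max(0.0, 1.0 - math.exp(log_num - log_den)))

print(hellinger_beta(2.0, 2.0, 2.0, 2.0))  # 0.0: identical distributions
```

Negating this distance gives the score function of the exponential mechanism in the slide: outputs whose posterior is closer (in Hellinger distance) to the true posterior get higher probability.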

  34. More general lifting of P: a relation P(x₁, x₂) lifts to a relation P*(x₁, x₂) over distributions.

  35. Accuracy. Different ways of adding noise can have different accuracy: theoretical accuracy (how do we integrate it into a framework for reasoning about DP?) and experimental accuracy (we need a framework for testing our programs).
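The experimental side can be as simple as Monte Carlo: run the private estimator many times and measure its error as ε varies. A hypothetical Python sketch for randomized response on bits (names and calibration are mine, not from the slides):

```python
import math
import random

def rr_mean_estimate(data, eps):
    """Unbiased mean estimate from randomized-response bits: each bit
    is kept w.p. k = e^eps/(e^eps+1), so the noised frequency has
    expectation (1-k) + (2k-1)*mean; invert that to debias."""
    k = math.exp(eps) / (math.exp(eps) + 1.0)
    noised = [x if random.random() < k else 1 - x for x in data]
    freq = sum(noised) / len(noised)
    return (freq - (1.0 - k)) / (2.0 * k - 1.0)

random.seed(0)
data = [1] * 700 + [0] * 300          # true mean 0.7
for eps in (0.5, 1.0, 2.0):
    errs = [abs(rr_mean_estimate(data, eps) - 0.7) for _ in range(200)]
    print(eps, sum(errs) / len(errs))  # average error shrinks as eps grows
```

A testing framework in this spirit would automate exactly this loop over mechanisms, datasets, and privacy budgets, and compare the measured error against the theoretical bound.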
