CSC2412: Private Multiplicative Weights. Sasho Nikolov.

  1. CSC2412: Private Multiplicative Weights. Sasho Nikolov.

  2. Query Release

  3. Reminder: Query Release. Recall the query release problem. The data set is X = {x_1, ..., x_n} ∈ 𝒳^n.
     • Workload Q = {q_1, ..., q_k} of k counting queries q_i : 𝒳 → {0,1}, so that 0 ≤ q_i(X) ≤ 1 and Q(X) = (q_1(X), ..., q_k(X)) ∈ [0,1]^k.
     • Compute, with (ε, δ)-DP, some Y ∈ ℝ^k so that max_{i=1,...,k} |Y_i − q_i(X)| ≤ α, with probability ≥ 1 − β.

  4. Motivating example: ℓ-wise marginals. The domain is 𝒳 = {0,1}^d, i.e., d binary attributes.
     • A query q_{S,a} for any S = {i_1, ..., i_ℓ} ⊆ [d] and a = (a_{i_1}, ..., a_{i_ℓ}) ∈ {0,1}^ℓ:
         q_{S,a}(x) = 1 if x_{i_j} = a_{i_j} for all i_j ∈ S, and 0 otherwise.
     E.g., "smoker and female?", "smoker and over 30?", "smoker and heart disease?", etc.
     The workload Q_ℓ of all ℓ-wise marginal queries has size |Q_ℓ| = C(d, ℓ) · 2^ℓ.
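The definition above can be written directly in code (a minimal sketch; the function name and calling convention are our own, not the course's):

```python
def marginal_query(S, a):
    """Return q_{S,a}: maps a record x to 1 iff x agrees with a on every
    index in S, and to 0 otherwise."""
    return lambda x: int(all(x[i] == ai for i, ai in zip(S, a)))

# "attribute 0 is 1 and attribute 2 is 0" over d = 3 binary attributes:
q = marginal_query((0, 2), (1, 0))
print(q((1, 1, 0)), q((0, 1, 0)))  # 1 0
```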

  5. What do we know? Using the Laplace mechanism, we can answer k counting queries with ε-DP and error α when n ≳ k log(k/β) / (αε). Using the Gaussian mechanism, we can answer them with (ε, δ)-DP and error α when n ≳ √(k log(1/δ) log(k/β)) / (αε). In both cases n must grow polynomially with k.
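As a rough illustration of the Laplace-mechanism baseline, a sketch (the names are our own; eps is split evenly across the k queries via basic composition):

```python
import random

def laplace_mechanism(X, queries, eps):
    """Answer all counting queries with Laplace noise: each query has
    sensitivity 1/n, and eps is split evenly across the k queries by
    basic composition, giving noise of scale k/(eps*n) per answer.
    (A sketch; the helper names are our own.)"""
    n, k = len(X), len(queries)
    def lap(b):
        # Laplace(0, b) noise as a difference of two Exp(1) variables.
        return b * (random.expovariate(1.0) - random.expovariate(1.0))
    return [sum(q(x) for x in X) / n + lap(k / (eps * n)) for q in queries]
```

Note how the noise per answer scales linearly in k; this polynomial dependence on the number of queries is what PMW improves to log k.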

  6. Private Multiplicative Weights. We will see an algorithm that achieves:
     • under ε-DP, error α with probability 1 − β when n ≳ log(k) log(|𝒳|) / (α³ε);
     • under (ε, δ)-DP, error α with probability 1 − β when n ≳ log(k) √(log(|𝒳|) log(1/δ)) / (α²ε).

  7. Learning a distribution

  8. A probability view. We can think of X = {x_1, ..., x_n} (a multiset is allowed) as a probability distribution p over 𝒳: for x ∼ p, P(x = y) = |{i : x_i = y}| / n, i.e., p is uniform over x_1, ..., x_n. Then, for any counting query q : 𝒳 → {0,1},
         q(X) = (1/n) Σ_{i=1}^n q(x_i) = E_{x∼p} q(x) =: q(p),
     the expectation of q under the empirical distribution of X.
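The identity q(X) = q(p) is easy to check numerically; a small sketch with our own helper names:

```python
from collections import Counter

def empirical_distribution(X):
    """p(y) = |{i : x_i = y}| / n for the data set X = (x_1, ..., x_n)."""
    n = len(X)
    return {y: c / n for y, c in Counter(X).items()}

def query_average(q, X):
    """q(X) = (1/n) * sum_i q(x_i)."""
    return sum(q(x) for x in X) / len(X)

def query_expectation(q, p):
    """q(p) = E_{x ~ p} q(x) for a distribution p given as a dict."""
    return sum(prob * q(y) for y, prob in p.items())

X = [(0, 1), (1, 1), (1, 1), (1, 0)]   # a small multiset of records
q = lambda x: x[0]                     # counting query: "first bit is 1"
p = empirical_distribution(X)
print(query_average(q, X), query_expectation(q, p))  # 0.75 0.75
```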

  9. Learning a distribution. The query release problem asks about distributions over 𝒳. Task: learn an approximation p̂ of the empirical distribution p such that
         ∀ q ∈ Q: |q(p̂) − q(p)| ≤ α.
     If we can do this, then we can release the answers q(p̂) for all q ∈ Q. Trick (again): assume that if q is asked, then 1 − q is also asked. Then it is enough to make sure that q(p̂) − q(p) ≤ α for every q ∈ Q.

  10. Bounded mistake learner. A distribution learning algorithm U (which does not know p):
      • takes a p̂ and a q on which p̂ makes a mistake, i.e., q(p̂) − q(p) > α;
      • returns a new distribution p̂′ = U(p̂, q), an improvement on p̂.
      Suppose that p̂_0 = uniform over 𝒳 and p̂_t = U(p̂_{t−1}, q_t): we keep improving the initial guess p̂_0. U makes at most L mistakes if any such sequence p̂_0, p̂_1, ..., p̂_ℓ must have ℓ ≤ L. After making L mistakes (and L improvements), p̂ must be accurate for all q ∈ Q.
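The learning loop described above can be sketched as follows (a non-private sketch with our own names; U is any update rule with the stated interface):

```python
def learn(p_emp, queries, U, alpha, L):
    """Mistake-bounded learning loop: find a query on which p_hat
    overshoots the true answer by more than alpha, hand it to the
    learner U, and repeat; U makes at most L mistakes, so the loop
    stops with a p_hat accurate for every query."""
    domain = list(p_emp)
    p_hat = {x: 1 / len(domain) for x in domain}   # p_hat_0 = uniform
    def ans(q, p):
        return sum(p[x] * q(x) for x in domain)    # q(p) = E_p[q]
    for _ in range(L + 1):
        mistakes = [q for q in queries
                    if ans(q, p_hat) - ans(q, p_emp) > alpha]
        if not mistakes:
            return p_hat                           # accurate for all of Q
        p_hat = U(p_hat, mistakes[0])
    return p_hat
```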

  11. Multiplicative Weights Learner. Theorem: there exists a distribution learner U that makes L ≤ 4 ln|𝒳| / α² mistakes.

  12. Multiplicative weight update algorithm. Reminder: the learner is called with a query q such that q(p̂) > q(p) + α, i.e., p̂ gives too much weight to the points x with q(x) = 1 (recall that q(p̂) = P_{x∼p̂}(q(x) = 1)). With a step-size parameter η, to be set later, U(p̂, q) is:
         p̃(x) = p̂(x) e^{−η q(x)} for all x ∈ 𝒳      (decrease the weight of every x with q(x) = 1)
         p̂′(x) = p̃(x) / Σ_{y∈𝒳} p̃(y)               (normalize to get a probability distribution)
      return p̂′.
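A sketch of the update U (our own names; p̂ is represented as a dict from domain points to probabilities):

```python
import math

def mw_update(p_hat, q, eta):
    """One multiplicative weights step U(p_hat, q): shrink the weight of
    every x with q(x) = 1 by the factor e^{-eta}, then renormalize to a
    probability distribution. (eta is the step size; the analysis sets
    eta = alpha / 2.)"""
    p_tilde = {x: w * math.exp(-eta * q(x)) for x, w in p_hat.items()}
    Z = sum(p_tilde.values())
    return {x: w / Z for x, w in p_tilde.items()}

# Uniform start over a 4-point domain; the query fires on points 2 and 3.
p0 = {x: 0.25 for x in range(4)}
p1 = mw_update(p0, lambda x: int(x >= 2), eta=0.5)
```

After the update, mass moves from the points where q fires to the points where it does not, which is exactly the right correction when q(p̂) was too large.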

  13. Why it works. Potential function: the KL-divergence D(p ‖ p̂_t) = Σ_{x∈𝒳} p(x) log(p(x) / p̂_t(x)).
      1. D(p ‖ p̂_0) ≤ log|𝒳|, because p̂_0 is uniform: D(p ‖ p̂_0) = Σ_x p(x) log(|𝒳| p(x)) = log|𝒳| − H(p) ≤ log|𝒳|, where H(p) ≥ 0 is the entropy of p.
      2. D(p ‖ p̂_t) ≥ 0 for all t.
      3. D(p ‖ p̂_t) − D(p ‖ p̂_{t−1}) ≤ η (q_t(p) − q_t(p̂_{t−1})) + η².
      4. Every q_t is a mistake, so q_t(p̂_{t−1}) − q_t(p) > α; setting η = α/2, each update decreases the potential by at least ηα − η² = α²/4. Combining with 1 and 2 gives L ≤ 4 ln|𝒳| / α².
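Combining facts 1–3, the mistake bound follows by telescoping; a sketch of the calculation:

```latex
\begin{align*}
0 \;\le\; D(p \,\|\, \hat p_\ell)
  &= D(p \,\|\, \hat p_0) + \sum_{t=1}^{\ell}
     \bigl( D(p \,\|\, \hat p_t) - D(p \,\|\, \hat p_{t-1}) \bigr) \\
  &\le \log|\mathcal X| + \sum_{t=1}^{\ell}
     \bigl( -\eta \,( q_t(\hat p_{t-1}) - q_t(p) ) + \eta^2 \bigr) \\
  &\le \log|\mathcal X| - \ell\,(\eta\alpha - \eta^2),
\end{align*}
```

since every q_t is a mistake, i.e., q_t(p̂_{t−1}) − q_t(p) > α. Taking η = α/2 gives ηα − η² = α²/4, hence ℓ ≤ 4 log|𝒳| / α².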

  14. Private Multiplicative Weights

  15. Idea for private algorithm.
      • Start with t = 0, p̂_0 uniform.
      • Privately find the most wrongly answered query q ∈ Q.
      • If q(p̂_t) − q(p) < α, output p̂_t.
      • Else set p̂_{t+1} = U(p̂_t, q) and increase t.
      Since every update is a mistake, the algorithm terminates after at most L = 4 ln|𝒳| / α² iterations.

  16. The algorithm in detail. Let ε_0 be a per-mechanism privacy parameter, to be set in the privacy analysis, and let L = 4 ln|𝒳| / α².
         p̂_0 = uniform over 𝒳
         for t = 1, ..., L:
           sample q ∈ Q w/ prob ∝ exp( ε_0 n (q(p̂_{t−1}) − q(p)) / 2 )    [exponential mechanism with score n(q(p̂_{t−1}) − q(p)): aim for the worst approximated query]
           Y_t = q(p) + Z_t,  Z_t ∼ Lap(1 / (ε_0 n))                       [Laplace mechanism]
           if q(p̂_{t−1}) − Y_t > 2α: set p̂_t = U(p̂_{t−1}, q)
           else: output p̂_{t−1} and stop
      Both the exponential mechanism and the Laplace mechanism are run with privacy parameter ε_0. If |Z_t| ≤ α, then the test q(p̂_{t−1}) − Y_t > 2α only fires on genuine mistakes of size > α, and on output the chosen q satisfies q(p̂) − q(p) ≤ q(p̂) − Y_t + |Z_t| ≤ 3α; since the exponential mechanism picks a near-worst query, all queries are then accurate up to O(α).
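The loop above can be sketched end to end (a simplified sketch under our own naming; the parameters eps0, L, and eta are passed in explicitly rather than set inside, as they are in the analysis):

```python
import math, random

def pmw(p_emp, queries, eps0, n, alpha, L, eta):
    """Simplified sketch of Private Multiplicative Weights. Assumptions:
    p_emp is the empirical distribution as a dict over a small domain,
    queries map domain points to {0, 1}; names are ours, not the slides'."""
    domain = list(p_emp)
    p_hat = {x: 1 / len(domain) for x in domain}          # uniform start

    def answer(q, p):                                     # q(p) = E_p[q]
        return sum(p[x] * q(x) for x in domain)

    for t in range(L):
        # Exponential mechanism: score(q) = n * (q(p_hat) - q(p)).
        scores = [n * (answer(q, p_hat) - answer(q, p_emp)) for q in queries]
        weights = [math.exp(eps0 * s / 2) for s in scores]
        q = random.choices(queries, weights=weights)[0]

        # Laplace mechanism on the true answer of the chosen query.
        z = random.expovariate(1.0) - random.expovariate(1.0)  # ~ Lap(1)
        y = answer(q, p_emp) + z / (eps0 * n)

        if answer(q, p_hat) - y > 2 * alpha:
            # Certified mistake: multiplicative weights update + renormalize.
            p_hat = {x: w * math.exp(-eta * q(x)) for x, w in p_hat.items()}
            Z = sum(p_hat.values())
            p_hat = {x: w / Z for x, w in p_hat.items()}
        else:
            return p_hat                                  # accurate enough
    return p_hat
```

The one-sided test matches the trick from slide 9: since Q is assumed closed under negation, penalizing only overshoots of q(p̂) handles both error directions.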

  17. Privacy analysis. Approach: bound the privacy loss per iteration, then use the composition theorem to bound the total loss.
      • Privacy loss per iteration: the exponential mechanism is ε_0-DP and the Laplace mechanism is ε_0-DP, so each iteration is 2ε_0-DP by composition.
      • Total: composition over L iterations gives total privacy loss 2Lε_0. Setting ε_0 = ε / (2L) makes the whole algorithm ε-DP.

  18. Accuracy analysis. We want that, in every one of the L (adaptive) rounds t:
      1. (Laplace mechanism) |Z_t| ≤ α, i.e., |Y_t − q(p)| ≤ α for the query q chosen in round t. By the Laplace tail bound and a union bound over the L rounds, this holds with probability ≥ 1 − β/2 as long as n ≳ ln(L/β) / (ε_0 α).
      2. (Exponential mechanism) the sampled q′ satisfies q′(p̂_{t−1}) − q′(p) ≥ max_{q∈Q} (q(p̂_{t−1}) − q(p)) − α at every iteration, which holds with probability ≥ 1 − β/2 as long as n ≳ ln(kL/β) / (ε_0 α).
      Plugging in ε_0 = ε/(2L) and L = 4 ln|𝒳| / α², both conditions are satisfied when n ≳ log(k) log(|𝒳|) / (α³ε), for constant β.
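As a back-of-the-envelope check (constants not tracked), plugging ε_0 = ε/(2L) and L = 4 ln|𝒳|/α² into the exponential-mechanism requirement n ≳ ln(kL/β)/(ε_0 α):

```latex
n \;\gtrsim\; \frac{\ln(kL/\beta)}{\varepsilon_0 \,\alpha}
  \;=\; \frac{2L \,\ln(kL/\beta)}{\varepsilon \,\alpha}
  \;=\; \frac{8 \ln|\mathcal X| \,\ln(kL/\beta)}{\varepsilon \,\alpha^{3}}
  \;=\; O\!\left(\frac{\log|\mathcal X| \,\log k}{\alpha^{3} \,\varepsilon}\right)
  \quad \text{for constant } \beta,
```

which matches the ε-DP bound announced on slide 6.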
