  1. Jointly Private Convex Programming “PrivDuDe” Justin Hsu¹, Zhiyi Huang², Aaron Roth¹, Steven Zhiwei Wu¹ (¹University of Pennsylvania, ²University of Hong Kong). January 10, 2016

  2. One hot summer... not enough electricity!

  3. Solution: Turn off air-conditioning
     Decide when customers get electricity
     ◮ Divide day into time slots
     ◮ Customers have values for slots
     ◮ Customers have hard minimum requirements for slots
     Goal: maximize welfare

  4–5. Scheduling optimization problem
     Constants (inputs to the problem)
     ◮ Customer i’s value for electricity in time slot t: v_t^(i) ∈ [0, 1]
     ◮ Customer i’s minimum requirement in time slot t: d_t^(i) ∈ [0, 1]
     ◮ Total electricity supply in time slot t: s_t ∈ R
     Variables (outputs)
     ◮ Electricity level for user i in time slot t: x_t^(i)

  6–8. Scheduling optimization problem
     Maximize welfare: max Σ_{i,t} v_t^(i) · x_t^(i)
     ...subject to constraints
     ◮ Don’t exceed power supply: Σ_i x_t^(i) ≤ s_t
     ◮ Meet minimum energy requirements: x_t^(i) ≥ d_t^(i)
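The scheduling program above can be sketched as data plus checks. The numbers below are a hypothetical toy instance (two customers, two slots), not from the talk:

```python
v = [[0.9, 0.2], [0.4, 0.8]]   # values v_t^(i) in [0, 1]
d = [[0.1, 0.0], [0.0, 0.2]]   # minimum requirements d_t^(i)
s = [1.0, 1.0]                 # total supply s_t per slot

def welfare(x):
    """Objective: sum over i, t of v_t^(i) * x_t^(i)."""
    return sum(v[i][t] * x[i][t] for i in range(len(v)) for t in range(len(s)))

def feasible(x, tol=1e-9):
    """Supply: sum_i x_t^(i) <= s_t.  Demand: x_t^(i) >= d_t^(i)."""
    supply_ok = all(sum(x[i][t] for i in range(len(v))) <= s[t] + tol
                    for t in range(len(s)))
    demand_ok = all(x[i][t] >= d[i][t] - tol
                    for i in range(len(v)) for t in range(len(s)))
    return supply_ok and demand_ok

allocation = [[0.7, 0.2], [0.3, 0.8]]  # one feasible candidate solution
```

An LP solver would search over such allocations; the sketch only evaluates one candidate against both constraint families.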

  9–10. Privacy concerns
     Private data
     ◮ Customer values v_t^(i) for time slots
     ◮ Customer requirements d_t^(i)
     Customers shouldn’t learn the private data of others

  11–13. More generally...
     Convex program
     ◮ Want to maximize: Σ_i f^(i)(x^(i)), with each f^(i) concave
     ◮ Coupling constraints: Σ_i g_j^(i)(x^(i)) ≤ h_j, with each g_j^(i) convex
     ◮ Personal constraints: x^(i) ∈ S^(i), with each S^(i) convex

  14. More generally...
     Key feature: separable
     ◮ Partition variables: agent i’s “part” of the solution is x^(i)
     Agent i’s private data affects:
     ◮ Objective f^(i)
     ◮ Coupling constraints g_j^(i)
     ◮ Personal constraints S^(i)
     Examples
     ◮ Matching LP
     ◮ d-demand fractional allocation
     ◮ Multidimensional fractional knapsack

  15. Our results, in one slide
     Theorem. Let ε > 0 be a privacy parameter. For a separable convex program with k coupling constraints, there is an efficient algorithm for privately finding a solution with objective at least OPT − O(k/ε), exceeding the constraints by at most k/ε in total.
     No polynomial dependence on the number of variables

  16. The plan today
     ◮ Convex program solution ↔ equilibrium of a game
     ◮ Compute equilibrium via gradient descent
     ◮ Ensure privacy

  17. The convex program game

  18–19. The convex program two-player, zero-sum game
     The players
     ◮ Primal player: plays candidate solutions x ∈ S^(1) × · · · × S^(n)
     ◮ Dual player: plays dual solutions λ
     The payoff function
     ◮ Move constraints that depend on multiple players (the coupling constraints) into the objective as penalty terms:
       L(x, λ) = Σ_i f^(i)(x^(i)) − Σ_j λ_j (Σ_i g_j^(i)(x^(i)) − h_j)
     ◮ Primal player maximizes, dual player minimizes
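In the linear case the payoff can be sketched directly. The sign convention here (penalty subtracted, so the minimizing dual player punishes violated constraints) is one consistent reading of the slides, and the data is a hypothetical toy instance:

```python
def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def lagrangian(f, g, h, x, lam):
    """L(x, lam) = sum_i f^(i).x^(i) - sum_j lam_j (sum_i g_j^(i).x^(i) - h_j).

    f[i]    : agent i's objective coefficients
    g[j][i] : agent i's coefficients in coupling constraint j
    h[j]    : bound of coupling constraint j
    """
    n, k = len(f), len(h)
    objective = sum(dot(f[i], x[i]) for i in range(n))
    penalty = sum(lam[j] * (sum(dot(g[j][i], x[i]) for i in range(n)) - h[j])
                  for j in range(k))
    return objective - penalty
```

With this sign, a feasible x loses nothing to the penalty, while an infeasible x is punished whenever the dual player puts weight on the violated constraint.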

  20–22. Idea: Solution ↔ equilibrium
     Convex duality
     ◮ Optimal solution x* gets payoff OPT versus any λ
     ◮ Optimal dual λ* gets payoff at least −OPT versus any x
     In game-theoretic terms...
     ◮ The value of the game is OPT
     ◮ The optimal primal–dual solution (x*, λ*) is an equilibrium
     Find an approximate equilibrium to find an approximately optimal solution

  23. Finding the equilibrium

  24–25. Known: techniques for finding an equilibrium [FS96]
     Simulated play
     ◮ First player chooses the action x_t with best payoff
     ◮ Second player uses a no-regret algorithm to select action λ_t
     ◮ Use the payoff L(x_t, λ_t) to update the second player
     ◮ Repeat
     Key features
     ◮ The average of (x_t, λ_t) converges to an approximate equilibrium
     ◮ Limited access to payoff data, so the dynamics can be made private
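The simulated-play loop can be sketched generically; `best_response` and `no_regret_step` are illustrative names for the two players' strategies, not identifiers from [FS96]:

```python
def simulated_play(best_response, no_regret_step, lam0, rounds):
    """FS96-style loop: an exact best responder plays against a no-regret
    learner. Returns the sequences of played actions; their averages
    approach an approximate equilibrium as rounds grows."""
    lam, xs, lams = lam0, [], []
    for _ in range(rounds):
        x = best_response(lam)        # first player: best payoff vs. current lam
        xs.append(x)
        lams.append(lam)
        lam = no_regret_step(lam, x)  # second player: update from payoff feedback
    return xs, lams
```

The privacy-relevant point from the slide shows up structurally: the learner touches the data only through the payoff feedback passed into `no_regret_step`.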

  26. Gradient descent dynamics (linear case)
     Idea: repeatedly go “downhill”
     ◮ Given primal point x_t, the gradient of L(x_t, −) in coordinate λ_j is ℓ_j = h_j − Σ_i g_j^(i) · x_t^(i)
     ◮ Update: λ_{t+1} = λ_t − η · ℓ
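A sketch of one dual step in the linear case, assuming the dual player minimizes L so that its gradient in coordinate λ_j is h_j − Σ_i g_j^(i)·x_t^(i), and adding the usual projection onto λ ≥ 0 (the projection is standard for dual variables, though not spelled out on the slide):

```python
def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def dual_step(g, h, x, lam, eta):
    """One gradient-descent step for the minimizing dual player:
    ell_j = h_j - sum_i g_j^(i).x^(i), then lam_j <- max(0, lam_j - eta*ell_j).
    A violated constraint (ell_j < 0) raises lam_j; a slack one lowers it."""
    new_lam = []
    for j in range(len(h)):
        ell_j = h[j] - sum(dot(g[j][i], x[i]) for i in range(len(x)))
        new_lam.append(max(0.0, lam[j] - eta * ell_j))  # project onto lam >= 0
    return new_lam
```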

  27. Achieving privacy

  28. (Plain) Differential privacy [DMNS06]

  29–30. More formally
     Definition (DMNS06). Let M be a randomized mechanism from databases to a range R, and let D, D′ be databases differing in one record. M is (ε, δ)-differentially private if for every S ⊆ R,
       Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ.
     For us: too strong!
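The definition is standard [DMNS06]; a textbook instance is the Laplace mechanism applied to a counting query, sketched here in pure Python:

```python
import math
import random

def laplace(scale):
    """Sample from Lap(scale) by inverse-CDF transform of a uniform draw."""
    u = max(random.random(), 1e-12) - 0.5   # clamp to avoid log(0) at the edge
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, eps):
    """A counting query changes by at most 1 between neighboring databases
    (sensitivity 1), so adding Lap(1/eps) noise gives (eps, 0)-DP."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace(1.0 / eps)
```

Smaller ε means a larger noise scale 1/ε, i.e. stronger privacy at the cost of accuracy.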

  31–32. A relaxed notion of privacy [KPRU14]
     Idea
     ◮ Give separate outputs to agents
     ◮ A group of agents can’t violate the privacy of other agents
     Definition. An algorithm M : C^n → Ω^n is (ε, δ)-joint differentially private if for every agent i, every pair of i-neighbors D, D′ ∈ C^n, and every subset of outputs S ⊆ Ω^{n−1},
       Pr[M(D)_{−i} ∈ S] ≤ exp(ε) · Pr[M(D′)_{−i} ∈ S] + δ.

  33–34. Achieving joint differential privacy
     “Billboard” mechanisms
     ◮ Compute a signal S satisfying standard differential privacy
     ◮ Agent i’s output is a function of i’s private data and S
     Lemma (Billboard lemma [HHRRW14]). Let S : D → S be (ε, δ)-differentially private. Let agent i have private data D_i ∈ X, and let F : X × S → R. Then the mechanism M(D)_i = F(D_i, S(D)) is (ε, δ)-joint differentially private.
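A schematic of the billboard pattern. The names are illustrative, and the stand-in signal in the usage line is not actually private (a real signal would be noised, e.g. via the Laplace mechanism); the sketch only shows the data flow the lemma relies on:

```python
def billboard_mechanism(private_data, dp_signal, local_response):
    """Billboard pattern: publish one signal computed with standard DP, then
    let each agent derive its own output from (its record, the signal) alone.
    Joint DP of the output vector follows from DP of the published signal."""
    signal = dp_signal(private_data)                      # the "billboard"
    return [local_response(d_i, signal) for d_i in private_data]

# Illustration: each agent locally computes its share of the published total.
outputs = billboard_mechanism([1.0, 1.0, 2.0], sum, lambda d_i, s: d_i / s)
```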

  35–36. Our signal: noisy dual variables
     Privacy for the dual player
     ◮ Recall the gradient is ℓ_j = h_j − Σ_i g_j^(i) · x_t^(i)
     ◮ It may depend on private data, but only in a low-sensitivity way
     ◮ Use the Laplace mechanism to add noise, giving a “noisy gradient”: ℓ̂_j = h_j − Σ_i g_j^(i) · x_t^(i) + Lap(Δ/ε)
     ◮ Noisy gradients satisfy standard differential privacy
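A sketch of the noised gradient for linear constraints, assuming the sensitivity bound Δ is supplied as a parameter (the bound itself comes from the analysis and is not derived here):

```python
import math
import random

def laplace(scale):
    """Sample from Lap(scale) by inverse-CDF transform."""
    u = max(random.random(), 1e-12) - 0.5   # clamp to avoid log(0) at the edge
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def noisy_gradient(g, h, x, sensitivity, eps):
    """ell_hat_j = h_j - sum_i g_j^(i).x^(i) + Lap(sensitivity/eps):
    the exact dual gradient, perturbed with Laplace noise calibrated to
    its sensitivity, so the published gradients satisfy standard DP."""
    return [h[j] - sum(dot(g[j][i], x[i]) for i in range(len(x)))
            + laplace(sensitivity / eps)
            for j in range(len(h))]
```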

  37–38. Private action: best response to dual variables
     (Joint) privacy for the primal player
     ◮ Best response problem:
       max_{x ∈ S} L(x, λ_t) = max_{x ∈ S} Σ_i f^(i) · x^(i) − Σ_j λ_{j,t} (Σ_i g_j^(i) · x^(i) − h_j)
     ◮ Can optimize separately, one agent at a time:
       max_{x^(i) ∈ S^(i)} f^(i) · x^(i) − Σ_j λ_{j,t} g_j^(i) · x^(i)
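For a linear objective and a box-shaped S^(i) (a hypothetical stand-in for the personal constraint set), the per-agent best response has a closed form. This assumes the penalty enters the Lagrangian with a minus sign, so each agent maximizes (f^(i) − Σ_j λ_j g_j^(i)) · x^(i):

```python
def best_response(f_i, g_i, lam, lo, hi):
    """Agent i's best response over a box lo <= x^(i) <= hi.
    g_i[j][t] is agent i's coefficient of coupling constraint j on
    coordinate t. The problem decouples per coordinate: take hi[t]
    when the adjusted coefficient is positive, lo[t] otherwise."""
    x = []
    for t in range(len(f_i)):
        coef = f_i[t] - sum(lam[j] * g_i[j][t] for j in range(len(lam)))
        x.append(hi[t] if coef > 0 else lo[t])
    return x
```

This locality is what makes the billboard lemma apply: the response depends only on agent i's own data and the published dual variables.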
