  1. Differentially Private Distributed Convex Optimization via Functional Perturbation. Erfan Nozari, Department of Mechanical and Aerospace Engineering, University of California, San Diego. http://carmenere.ucsd.edu/erfan. July 6, 2016. Joint work with Pavankumar Tallapragada and Jorge Cortés.

  2. Distributed Coordination 2 Erfan Nozari (UCSD) Differentially Private Distributed Optimization

  3. Distributed Coordination
What if local information is sensitive?

  5. Motivating Scenario: Optimal EV Charging [Han et al., 2014]
Central aggregator solves:

  minimize_{r_1, ..., r_n}  U( Σ_{i=1}^n r_i )
  subject to  r_i ∈ C_i,  i ∈ {1, ..., n}

  • U = energy cost function
  • r_i = r_i(t) = charging rate of EV i
  • C_i = local constraints of EV i

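The aggregate-cost structure above can be sketched numerically with a projected-gradient loop. In this sketch the quadratic cost U, the interval constraints C_i = [0, 3], and all other numbers are invented for illustration and are not taken from [Han et al., 2014]:

```python
import numpy as np

# Toy instance of the EV-charging problem: each EV i picks a scalar charging
# rate r_i in its local constraint set C_i = [0, 3], and a central cost U
# penalizes deviation of the aggregate load from a target (all values invented).

n = 5
lo, hi = np.zeros(n), np.full(n, 3.0)      # C_i = [0, 3] for every EV
target = 10.0                              # desired aggregate load

def U_grad(total):                         # U(s) = 0.5 * (s - target)^2
    return total - target

r = np.ones(n)                             # initial charging rates
alpha = 0.1 / n                            # step size (illustrative choice)
for _ in range(2000):
    g = U_grad(r.sum())                    # same aggregate gradient for every EV
    r = np.clip(r - alpha * g, lo, hi)     # projected gradient step onto C_i

print(r.sum())                             # aggregate load approaches the target
```

With a convex U and box constraints, the projection is just a clip, which keeps each rate feasible at every step.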

  11. Myth: Aggregation Preserves Privacy
  • Fact: NOT in the presence of side-information
  • Toy example: a database holds d_1 = 100, d_2 = 120, ..., d_n = 90, and only the average (110) is released. Given the side information d_2 = 120, ..., d_n = 90, an adversary recovers d_1 = 100 exactly.
  • Real example: A. Narayanan and V. Shmatikov successfully de-anonymized the Netflix Prize dataset (2007). Side information: IMDb databases!
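The toy example can be made concrete in a few lines (the three-record database below is an invented instance):

```python
# Toy example: an aggregate plus side information leaks an individual record.
# Suppose n participants hold private values and only the average is published.
values = [100, 120, 90]          # private database (n = 3 for illustration)
n = len(values)
average = sum(values) / n        # the only "released" statistic

# An adversary with side information knows every record except participant 1's:
known = values[1:]               # side information: d_2 = 120, d_3 = 90
d1 = n * average - sum(known)    # exact reconstruction of the hidden record
print(d1)                        # -> 100.0
```

One released aggregate plus n − 1 known records determines the remaining record exactly, which is why aggregation alone gives no formal privacy guarantee.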

  12. Outline
  1 DP Distributed Optimization: Problem Formulation; Impossibility Result
  2 Functional Perturbation: Perturbation Design
  3 DP Distributed Optimization via Functional Perturbation: Regularization; Algorithm Design and Analysis

  14. Problem Formulation: Optimization
Standard additive convex optimization problem:

  minimize_{x ∈ D}  f(x) ≜ Σ_{i=1}^n f_i(x)
  subject to  G(x) ≤ 0,  Ax = b

(Figure: a network of agents labeled 1–8.)

Assumptions:
  • D is compact
  • the f_i's are strongly convex and C²

  18. Problem Formulation: Optimization
Standard additive convex optimization problem:

  minimize_{x ∈ X}  f(x) ≜ Σ_{i=1}^n f_i(x)

  • A non-private solution [Nedić et al., 2010]:

  x_i(k+1) = proj_X( z_i(k) − α_k ∇f_i(z_i(k)) ),   z_i(k) = Σ_{j=1}^n w_{ij} x_j(k)

  with step sizes satisfying Σ_k α_k = ∞ and Σ_k α_k² < ∞

Assumptions:
  • D is compact
  • the f_i's are strongly convex and C²
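The update above can be simulated on a toy instance. Here the quadratic local objectives, the uniform mixing weights w_ij = 1/n, and the step size α_k = 1/k are illustrative choices satisfying the stated assumptions, not values from [Nedić et al., 2010]:

```python
import numpy as np

# Toy run of the non-private distributed projected-gradient update:
# f_i(x) = 0.5 * (x - c_i)^2 on X = [-10, 10], so the global minimizer of
# sum_i f_i is the mean of the c_i.

n = 4
c = np.array([1.0, 3.0, 5.0, 7.0])            # local data; optimum x* = 4.0
W = np.full((n, n), 1.0 / n)                  # doubly stochastic mixing matrix
x = np.zeros(n)                               # agents' states

for k in range(1, 5000):
    alpha = 1.0 / k                           # sum alpha_k = inf, sum alpha_k^2 < inf
    z = W @ x                                 # z_i(k) = sum_j w_ij x_j(k)
    grad = z - c                              # grad f_i(z_i) = z_i - c_i
    x = np.clip(z - alpha * grad, -10, 10)    # proj_X(z_i - alpha_k grad f_i(z_i))

print(x)                                      # all entries near x* = 4.0
```

Note that each agent uses only its own gradient and its neighbors' states, yet all agents approach the minimizer of the sum.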

  21. Problem Formulation: Privacy
  • "Information": F = (f_i)_{i=1}^n ∈ F^n
  • Adjacency: given (V, ‖·‖_V) with V ⊆ F, F, F′ ∈ F^n are V-adjacent if there exists i_0 ∈ {1, ..., n} such that f_i = f′_i for i ≠ i_0 and f_{i_0} − f′_{i_0} ∈ V
  • Differential Privacy (DP): for a random map M : F^n × Ω → X and ε ∈ R^n_{>0}, M is ε-DP if, for all V-adjacent F, F′ ∈ F^n and all O ⊆ X,

  P{ M(F′, ω) ∈ O } ≤ e^{ε_{i_0} ‖f_{i_0} − f′_{i_0}‖_V} P{ M(F, ω) ∈ O }
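The multiplicative bound in this definition can be checked empirically for the classical scalar Laplace mechanism, a finite-dimensional analogue of the functional setting here; the query values, ε, and the event O below are invented for illustration:

```python
import numpy as np

# Empirical check of the DP inequality for the scalar Laplace mechanism:
# two adjacent datasets whose query answers differ by the sensitivity,
# perturbed with Laplace noise of scale sensitivity / eps.

rng = np.random.default_rng(0)
eps = 1.0
sensitivity = 1.0                    # |q(F) - q(F')| for adjacent datasets
q, q_adj = 10.0, 11.0                # query answers on the two datasets

def mech(value, m=200_000):
    # m independent draws of the mechanism's output on this input
    return value + rng.laplace(scale=sensitivity / eps, size=m)

def prob_in_O(samples):
    # empirical probability of landing in the set O = [10, 10.5]
    return ((samples >= 10.0) & (samples <= 10.5)).mean()

p, p_adj = prob_in_O(mech(q)), prob_in_O(mech(q_adj))
print(p_adj <= np.exp(eps) * p)      # the DP bound holds empirically
```

The same multiplicative structure appears in the functional definition above, with the scalar sensitivity replaced by the norm ‖f_{i_0} − f′_{i_0}‖_V.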

  23. Case Study: Linear Classification with Logistic Loss Function
  • Training records: {(a_j, b_j)}_{j=1}^N, where a_j ∈ [0, 1]² and b_j ∈ {−1, 1}
  • Goal: find the best separating hyperplane x^T a = 0

Convex Optimization Problem:

  x* = argmin_{x ∈ X}  Σ_{j=1}^N ℓ(x; a_j, b_j) + (λ/2)|x|²

  • Logistic loss: ℓ(x; a, b) = ln(1 + e^{−b a^T x})

  24. Case Study: Linear Classification with Logistic Loss Function
Distributed version: agent i of n holds its own N_i training records {(a_{i,j}, b_{i,j})}:

  x* = argmin_{x ∈ X}  Σ_{i=1}^n Σ_{j=1}^{N_i} ℓ(x; a_{i,j}, b_{i,j}) + (λ/2)|x|²

  • Logistic loss: ℓ(x; a, b) = ln(1 + e^{−b a^T x})
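A minimal centralized version of this objective can be minimized by plain gradient descent. The synthetic data, λ, step size, and iteration count below are invented for illustration:

```python
import numpy as np

# Regularized logistic loss on synthetic 2-D data: labels come from a true
# separating hyperplane with normal (1, -1), so the learned x should recover
# that sign pattern.

rng = np.random.default_rng(1)
N, lam = 100, 0.1
a = rng.uniform(0, 1, size=(N, 2))                  # records a_j in [0, 1]^2
b = np.where(a @ np.array([1.0, -1.0]) > 0, 1, -1)  # labels in {-1, 1}

def grad(x):
    m = b * (a @ x)                  # margins b_j * a_j^T x
    s = -b / (1.0 + np.exp(m))       # derivative of ln(1 + e^{-m}) w.r.t. a_j^T x
    return a.T @ s + lam * x         # sum_j grad l(x; a_j, b_j) + lambda * x

x = np.zeros(2)
for _ in range(5000):
    x -= 0.01 * grad(x)              # gradient descent step

print(x)                             # points roughly along the direction (1, -1)
```

The λ/2 |x|² term makes the objective strongly convex, matching the assumption on the f_i's in the problem formulation.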

  25. Message Perturbation vs. Objective Perturbation
A generic distributed optimization algorithm: each agent i exchanges messages with its network neighbors j and runs a local state update x_i^+ = h_i(x_i, x_{−i}) driven by its local objective f_i.

  26. Message Perturbation vs. Objective Perturbation
  • Message Perturbation: noise is added to the messages agents exchange over the network.
  • Objective Perturbation: noise is added to the local objective f_i before it enters the local state update x_i^+ = h_i(x_i, x_{−i}).

  30. Impossibility Result
Generic message-perturbing algorithm:

  x(k+1) = a_I( x(k), ξ(k) ),   ξ(k) = x(k) + η(k)

Theorem. If
  • the η → x dynamics is 0-LAS (locally asymptotically stable about the origin),
  • η_i(k) ∼ Lap(b_i(k)) or η_i(k) ∼ N(0, b_i(k)), and
  • b_i(k) is O(1/k^p) for some p > 0,
then the algorithm cannot provide ε-DP of the information set I for any ε > 0.
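A toy simulation conveys the intuition: with noise scale decaying like 1/k^p, a stable averaging dynamics still reaches consensus, and its trajectory stays close to the noiseless one, so the messages effectively reveal the information set. The consensus dynamics and all constants below are invented for the sketch and are not the paper's general model:

```python
import numpy as np

# Perturbed vs. noiseless run of a simple averaging dynamics: the injected
# Laplace noise has scale 1/k^p, so its cumulative effect is summable and the
# perturbed run still settles near the noiseless consensus value.

rng = np.random.default_rng(3)
n, p, K = 4, 2.0, 2000
data = np.array([1.0, 3.0, 5.0, 7.0])
W = np.full((n, n), 1.0 / n)                     # averaging map: eta -> x is 0-LAS

x_clean, x_noisy = data.copy(), data.copy()
for k in range(1, K):
    eta = rng.laplace(scale=1.0 / k**p, size=n)  # eta_i(k) ~ Lap(b_i(k)), p = 2
    x_clean = W @ x_clean                        # noiseless run: consensus at 4.0
    x_noisy = W @ (x_noisy + eta)                # xi(k) = x(k) + eta(k)

print(np.max(np.abs(x_noisy - x_clean)))         # bounded: the decaying noise is summable
```

Because the perturbed output is essentially determined by the data, an adversary observing the messages can distinguish adjacent information sets, which is what rules out ε-DP for any ε > 0.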
