Adversarially Learned Representations for Information Obfuscation and Inference
Martin Bertran (1), Natalia Martinez (1), Afroditi Papadaki (2), Qiang Qiu (1), Miguel Rodrigues (2), Galen Reeves (1), Guillermo Sapiro (1)
1: Duke University   2: University College London
Motivation: Why do users share their data?
[Diagram: a user shares data (a facial image) with a utility service provider in order to receive a subject-verification decision; from the same shared data the provider can also infer sensitive attributes such as emotion, gender, and race.]
Motivation: Can we do better?
[Diagram: the user shares a filtered image instead of the raw facial image; the utility service provider still performs subject verification and returns a decision, while sensitive attributes such as gender are obfuscated.]
Goal: learn space-preserving representations that obfuscate sensitive information while preserving utility.
Motivation, example: preserve gender and obfuscate emotion.
[Image pairs, original vs. filtered: P(Male) = 0.98 vs. 0.98 with P(Smile) = 0.78 vs. 0.38; P(Female) = 0.99 vs. 0.99 with P(Serious) = 0.98 vs. 0.31.]
Motivation, example: preserve subject identity and obfuscate gender.
[Image pairs, original vs. filtered: P(Male) = 0.99 vs. 0.70, subject verified in both; P(Female) = 0.99 vs. 0.54, subject verified in both.]
Sample of related work
• (2003) Chechik et al., Extracting relevant structures with side information.
• (2016) Basciftci et al., On privacy-utility tradeoffs for constrained data release mechanisms.
• (2018) Madras et al., Learning adversarially fair and transferable representations.
• (2018) Sun et al., A hybrid model for identity obfuscation by face replacement.
Problem formulation
• Utility variable U and sensitive variable S, with (U, S) ~ p(U, S).
• High-dimensional data X ~ p(X | U, S).
• Sanitized data Y ~ p(Y | X): this mapping is our objective.
We want to learn Y ~ p(Y | X) such that:
• p(S | Y) ≈ p(S), i.e., min D_KL[ p(S | Y) || p(S) ]
• p(U | Y) ≈ p(U | X), i.e., min D_KL[ p(U | X) || p(U | Y) ]
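For reference, the sampling process above can be summarized as a Markov chain; this is only a restatement of the three bullets on this slide:

$$(U, S) \sim p(U, S), \qquad X \sim p(X \mid U, S), \qquad Y \sim p(Y \mid X) \;\;\Longrightarrow\;\; (U, S) \rightarrow X \rightarrow Y,$$

so Y is conditionally independent of (U, S) given X. This structure is what the identities and bounds on the following slides rely on.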
Problem formulation
We want to learn Y ~ p(Y | X) such that:
• min D_KL[ p(S | Y) || p(S) ]   (in expectation over Y: E_Y[·] = I(S; Y))
• min D_KL[ p(U | X) || p(U | Y) ]   (in expectation over X, Y: E_{X,Y}[·] = I(U; X | Y))
Objective:
  min_{p(Y|X)} I(U; X | Y)   s.t.   I(S; Y) ≤ k
  (note: min_{p(Y|X)} I(U; X | Y) ~ max_{p(Y|X)} I(U; Y))
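A short derivation of the identities used on this slide, assuming the Markov chain (U, S) → X → Y noted earlier:

$$\mathbb{E}_Y\big[D_{\mathrm{KL}}[\,p(S \mid Y)\,\|\,p(S)\,]\big] = \sum_{y} p(y) \sum_{s} p(s \mid y) \log \frac{p(s \mid y)}{p(s)} = I(S; Y),$$

and, since p(u | x, y) = p(u | x) under the Markov chain, the same computation gives E_{X,Y}[ D_KL[ p(U | X) || p(U | Y) ] ] = I(U; X | Y). The min/max equivalence follows from

$$I(U; X \mid Y) = I(U; X, Y) - I(U; Y) = I(U; X) - I(U; Y),$$

where the second equality uses I(U; Y | X) = 0; since I(U; X) does not depend on the filter, minimizing I(U; X | Y) over p(Y | X) is the same as maximizing I(U; Y).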
Performance bounds
Given the objective min_{p(Y|X)} I(U; X | Y) s.t. I(S; Y) ≤ k: what are the intrinsic limits on the trade-offs for this problem?
Lemma 1. Let (U, S) ∈ U × S have finite alphabets and X ~ p(X | U, S). Then:
  min_{p(Y|X): I(S;Y) ≤ k} I(U; X | Y)  ≥  min_{p(Y|U,S): I(S;Y) ≤ k} [ I(U; X) − I(U; Y) ]
• I(U; Y) ≤ I(U; X)
• With finite |Y| we can compute a sequence of upper bounds: the restricted cardinality sequence (RCS).
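One way to read Lemma 1 (a proof sketch consistent with the statement as transcribed above, not the paper's full argument): any filter p(Y | X) induces a channel directly from the attributes,

$$p(y \mid u, s) \;=\; \sum_{x} p(y \mid x)\, p(x \mid u, s),$$

so every channel reachable through X is also a feasible point of the relaxed problem over p(Y | U, S). Minimizing the same quantity I(U; X) − I(U; Y), which equals I(U; X | Y) under the Markov chain, over this larger feasible set can only give a smaller value, which is exactly the stated lower bound.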
Performance bounds
Lemma 2. Given (X, U, S) ~ p(X, U, S):
  I(U; X | Y) ≥ I(U; S) − I(S; Y) − I(U; S | X)
Lemma 3. Given (X, U, S) ~ p(X, U, S), for all k ≥ 0 there exists p(Y | X) such that:
  I(S; Y) ≤ k   and   I(U; X | Y) = max(0, 1 − k / I(S; X)) · I(U; X)
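A quick sanity check of the endpoints of Lemma 3 as reconstructed here (the placement of k inside the ratio k / I(S; X) is an assumption of this transcription):

$$k = 0:\quad I(U; X \mid Y) = I(U; X) \quad (\text{achieved, e.g., by a constant } Y, \text{ so } I(S; Y) = 0),$$
$$k \ge I(S; X):\quad I(U; X \mid Y) = 0 \quad (\text{achieved, e.g., by } Y = X, \text{ since } I(S; Y) = I(S; X) \le k).$$

That is, the construction behind the lemma reduces to releasing nothing when there is no leakage budget, and to releasing X unchanged once the budget exceeds I(S; X).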
Performance bounds
Lemmas 1, 2 and 3 can be approximated using contingency tables.
[Plot: I(U; X | Y) versus I(S; Y), sketching Lemma 1 (RCS), Lemma 2 (lower bound), and Lemma 3 (achievable upper bound); the reference values I(U; S), I(S; X), and I(U; X) are marked on the axes. Sketch under the assumption that I(U; S | X) = 0.]
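Following the note that the bounds can be evaluated from contingency tables, here is a minimal sketch (not the paper's code): mutual information is computed from an empirical joint table, and the Lemma 2 and Lemma 3 curves are evaluated on a grid of leakage budgets k under the plot's assumption I(U; S | X) = 0. The table values and the plug-in values for I(S; X) and I(U; X) are purely illustrative.

```python
import numpy as np

def mutual_information(joint_counts: np.ndarray) -> float:
    """Mutual information (in nats) of a 2-D contingency table of joint counts."""
    p = joint_counts / joint_counts.sum()
    pu = p.sum(axis=1, keepdims=True)          # marginal of the row variable
    ps = p.sum(axis=0, keepdims=True)          # marginal of the column variable
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / (pu @ ps)[mask])).sum())

def bound_curves(i_us: float, i_sx: float, i_ux: float, ks: np.ndarray):
    """Lemma 2 lower bound and Lemma 3 achievable curve on a grid of budgets k,
    assuming I(U;S|X) = 0; the Lemma 2 curve is clipped at 0 since conditional MI
    is nonnegative."""
    lemma2 = np.maximum(i_us - ks, 0.0)                   # I(U;X|Y) >= I(U;S) - I(S;Y)
    lemma3 = np.maximum(0.0, 1.0 - ks / i_sx) * i_ux      # achievable upper bound
    return lemma2, lemma3

# Toy contingency table over (U, S); all numbers are illustrative only.
counts_us = np.array([[40, 10],
                      [10, 40]])
i_us = mutual_information(counts_us)
ks = np.linspace(0.0, 0.7, 8)
low, up = bound_curves(i_us, i_sx=0.6, i_ux=0.7, ks=ks)   # I(S;X), I(U;X) assumed known
print(np.round(low, 3), np.round(up, 3))
```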
Proposed framework
Objective:
  min_{p(Y|X) ~ q_θ(X, Z)} I(U; X | Y)   s.t.   I(S; Y) ≤ k
Optimization objective:
  min_{p(Y|X) ~ q_θ(X, Z)} [ I(U; X | Y) + λ · max{ I(S; Y) − k, 0 }² ]
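To make the soft constraint concrete, here is a minimal PyTorch-style sketch of the quadratic penalty term; the function name and the idea of feeding it a scalar estimate of I(S; Y) are assumptions for illustration, not the paper's code:

```python
import torch

def constraint_penalty(i_sy_estimate: torch.Tensor, k: float, lam: float) -> torch.Tensor:
    """Quadratic penalty lam * max{I(S;Y) - k, 0}^2: zero while the estimated leakage
    stays within the budget k, and growing smoothly once it exceeds it."""
    return lam * torch.clamp(i_sy_estimate - k, min=0.0) ** 2
```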
Implementation
Optimization objective:
  min_{q_θ(X, Z)} [ I(U; X | Y) + λ · max{ I(S; Y) − k, 0 }² ]
Learning the stochastic mapping Y = q_θ(X, Z):
• p(U | X) ~ p_φ̂(U | X),   φ̂ = argmin_φ E_{X,U}[ −log p_φ(U | X) ]
• p(U | Y) ~ p_ψ̂(U | Y),   ψ̂ = argmin_ψ E_{X,U,Z}[ −log p_ψ(U | q_θ̂(X, Z)) ]
• p(S | Y) ~ p_η̂(S | Y),   η̂ = argmin_η E_{X,S,Z}[ −log p_η(S | q_θ̂(X, Z)) ]
• θ̂ = argmin_θ E_{X,Z}[ D_KL[ p_φ̂(U | X) || p_ψ̂(U | q_θ(X, Z)) ] ]
         + λ · max( E_{X,Z}[ D_KL[ p_η̂(S | q_θ(X, Z)) || p(S) ] ] − k, 0 )²
The estimators p_φ, p_ψ, p_η are Xception networks; the filter q_θ is a U-Net with an additional noise input Z.
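Below is a minimal sketch of one alternating training step implementing the four updates above in PyTorch. Everything named here is a placeholder: filter_net(x, z) stands for the U-Net filter q_θ; u_on_x, u_on_y, s_on_y stand for the three Xception-style estimators; opt_estimators is an optimizer over the estimators' parameters and opt_filter over the filter's; p_s_marginal is a fixed probability vector for P(S). The noise shape and the plug-in KL surrogates are assumptions of this sketch, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def kl_categorical(p_logits, q_logits):
    """Batch-averaged KL[p || q] between categorical distributions given as logits."""
    p_log = F.log_softmax(p_logits, dim=1)
    q_log = F.log_softmax(q_logits, dim=1)
    return (p_log.exp() * (p_log - q_log)).sum(dim=1).mean()

def train_step(x, u, s, filter_net, u_on_x, u_on_y, s_on_y,
               opt_estimators, opt_filter, p_s_marginal, k, lam):
    """One alternating update. x: images; u, s: integer labels for the utility and
    sensitive attributes; p_s_marginal: probability vector for the prior P(S)."""
    z = torch.randn_like(x)                         # noise input of the stochastic filter

    # 1) Update the estimators (phi, psi, eta) by maximum likelihood on their targets,
    #    with the filter frozen.
    y = filter_net(x, z).detach()
    loss_est = (F.cross_entropy(u_on_x(x), u)       # p_phi(U | X)
                + F.cross_entropy(u_on_y(y), u)     # p_psi(U | Y)
                + F.cross_entropy(s_on_y(y), s))    # p_eta(S | Y)
    opt_estimators.zero_grad()
    loss_est.backward()
    opt_estimators.step()

    # 2) Update the filter theta against the current estimators.
    y = filter_net(x, z)
    utility_gap = kl_categorical(u_on_x(x).detach(), u_on_y(y))   # surrogate for I(U;X|Y)
    s_log = F.log_softmax(s_on_y(y), dim=1)
    leakage = (s_log.exp() * (s_log - torch.log(p_s_marginal))).sum(dim=1).mean()  # ~ I(S;Y)
    loss_filter = utility_gap + lam * torch.clamp(leakage - k, min=0.0) ** 2
    opt_filter.zero_grad()
    loss_filter.backward()                          # gradients also reach the estimators, but
    opt_filter.step()                               # opt_filter only steps the filter's params
    return loss_est.item(), loss_filter.item()
```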
Experiments: emotion obfuscation vs. gender detection.
[Two slides of image grids comparing original and filtered faces for leakage budgets k = ∞, 0.5, and 0.3.]
Experiments: gender obfuscation vs. subject verification.
[Two slides of image grids comparing original and filtered faces for leakage budgets k = ∞, 0.3, and 0.2.]
Experiments: subject verification, consenting vs. nonconsenting users.
[Image grid comparing a consenting user and a nonconsenting user for k = ∞ and k = 0.5; the "subject verified" labels indicate that verification still succeeds on the displayed filtered images.]