Universal Multi-Party Poisoning Attacks
Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed
Multi-Party Learning
[Diagram: distributions D_1, ..., D_m; each data provider P_i samples from its distribution D_i and interacts with the protocol, which outputs a Model.]
Multi-Party Learning (Round j)
[Diagram: in round j, each data provider P_i holds Model_{j-1}, samples fresh data from its distribution D_i, and sends its message; the protocol produces Model_j and broadcasts it to all providers.]
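The round structure above can be sketched as a toy simulation. Everything here (the noisy providers, the exponential-moving-average aggregation rule, and all names) is an illustrative assumption, not part of any protocol in the paper:

```python
import random

def multi_party_learning(providers, rounds, aggregate, init_model):
    """Minimal sketch of the round structure: in each round j, every data
    provider draws a fresh sample from its distribution, and the protocol
    aggregates the messages into model_j, which is broadcast to everyone."""
    model = init_model
    for _ in range(rounds):
        messages = [sample() for sample in providers]  # one sample per provider
        model = aggregate(model, messages)             # protocol's update rule
    return model

# Toy usage: providers report noisy readings of a hidden value; the
# "protocol" keeps a running mean of their round averages.
random.seed(0)
hidden = 3.0
providers = [lambda: hidden + random.gauss(0, 1) for _ in range(5)]
final = multi_party_learning(
    providers, rounds=200,
    aggregate=lambda m, msgs: 0.9 * m + 0.1 * (sum(msgs) / len(msgs)),
    init_model=0.0)
```

With honest providers the model converges near the hidden value; the slides that follow ask what happens when some providers are adversarial.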
Poisoning in Multi-Party Learning
An adversary (partially) controls a number of data providers.
[Diagram: same setting as above, with some of the providers corrupted by the adversary.]
(k, p)-Poisoning Attack Model
k (out of m) of the parties become corrupted. Each corrupted party P_i samples from a different distribution D'_i = p * B_i + (1 - p) * D_i, where B_i is an arbitrary adversarial distribution; in particular, D'_i is within statistical distance p of the honest D_i.
Special cases:
  k = m  ->  p-tampering [ACMPS14] [MM17] [MM18]
  p = 1  ->  static corruption in MPC (crypto)
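The mixture distribution D'_i of a corrupted party can be sketched directly. The callables `honest_sample` and `bad_sample` are hypothetical stand-ins for D_i and B_i, not names from the paper:

```python
import random

def corrupted_sampler(honest_sample, bad_sample, p):
    """Sketch of a corrupted party's distribution
    D'_i = p * B_i + (1 - p) * D_i: with probability p draw from the
    adversarial distribution B_i, otherwise from the honest D_i."""
    def sample():
        return bad_sample() if random.random() < p else honest_sample()
    return sample

# Toy usage: the adversarial distribution still attaches a correct label
# (clean-label poisoning) but picks adversarially chosen inputs.
random.seed(1)
honest = lambda: ("random_input", "correct_label")
bad = lambda: ("chosen_input", "correct_label")
draws = [corrupted_sampler(honest, bad, p=0.5)() for _ in range(10000)]
frac_bad = sum(x == "chosen_input" for x, _ in draws) / len(draws)
# frac_bad concentrates around p = 0.5
```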
What is the inherent power of (k, p)-poisoning adversaries against multi-party learning?
Main Theorem: Power of (k, p)-Poisoning
Let C be a bad property of the model M, e.g., C(M) = 1 if M misclassifies a specific instance x.
For any m-party learning protocol, there is a (k, p)-poisoning adversary that increases Pr[C] from mu to mu^(1 - pk/m).

Pr[C] before attack | p   | k   | Pr[C] after attack
5%                  | 1/2 | m/2 | ~11%
5%                  | 1/2 | m   | ~22%
5%                  | 1   | m/2 | ~22%
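The table's rows follow directly from the bound mu^(1 - pk/m); a quick numeric check, writing k as a fraction of m:

```python
# Rows of the table: (p, k/m) with mu = 5%.
mu = 0.05
rows = [(0.5, 0.5), (0.5, 1.0), (1.0, 0.5)]
after = [mu ** (1 - p * k_over_m) for p, k_over_m in rows]
# 0.05^(3/4) ~ 0.11 and 0.05^(1/2) ~ 0.22, matching the table
```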
Features of the Attack
• Universal: provably works against any learning protocol
  • In contrast with: [Bagdasaryan et al. 2018; Bhagoji et al. 2018]
• Clean label: only uses correct labels
  • Similar to: [M et al. 2017; Shafahi et al. 2018]
• Polynomial time
  • Similar to: [M and Mahmoody 2019]
Ideas Behind the Attack
• Main idea: treat the protocol as a random process and run a biasing attack.
• The bad property is a function over the random process.
• We want to bias that function, similarly to attacks on coin tossing.
• New biasing model: generalized p-tampering. Let f : Omega_1 x ... x Omega_n -> {0,1}. Input blocks v_1, v_2, ..., v_n are sampled one by one in an online way:
  v_i = honest sample,             with marginal probability 1 - p
  v_i = chosen by the adversary,   with marginal probability p
• Our generalized p-tampering attack is based on ideas from coin-tossing attacks [BOL89, IH14].
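A toy biasing attack in this model can be simulated with a greedy tampering rule (pick the block value maximizing the conditional expectation of f). This is a sketch of the generic biasing idea, not the paper's actual attack; f = majority and all parameters are illustrative:

```python
import itertools
import random

def exp_f(f, prefix, n):
    """Average of f over uniform {0,1} completions of `prefix` to length n."""
    rest = n - len(prefix)
    total = sum(f(prefix + list(tail))
                for tail in itertools.product([0, 1], repeat=rest))
    return total / (2 ** rest)

def p_tamper_once(f, n, p):
    """One run of a greedy generalized p-tampering attack: each block is
    honest (uniform) with probability 1 - p, and with probability p the
    adversary sets it to the value maximizing E[f | prefix]."""
    prefix = []
    for _ in range(n):
        if random.random() < p:  # tampering opportunity
            prefix.append(max([0, 1], key=lambda b: exp_f(f, prefix + [b], n)))
        else:                    # honest uniform block
            prefix.append(random.randint(0, 1))
    return f(prefix)

random.seed(0)
n, p = 5, 0.5
maj = lambda bits: int(sum(bits) > n // 2)  # honestly, Pr[maj = 1] = 1/2
trials = 2000
biased = sum(p_tamper_once(maj, n, p) for _ in range(trials)) / trials
# `biased` ends up well above the honest probability 1/2
```

Here the "bad property" is f itself, and even tampering with only half the blocks in expectation pushes Pr[f = 1] far above its honest value, mirroring the role the biasing attack plays against the learning protocol.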
Summary
We show poisoning attacks against multi-party learning protocols that:
• Universal: provably apply to any multi-party learning protocol
• Clean label: only use samples with correct labels
• Run in polynomial time
• Increase the probability of any chosen bad property
Poster #160