Privacy in Machine Learning
Fatemehsadat Mireshghallah, ICLR 2020
Privacy: A Major Concern for Machine Learning
(Graphics adapted from The New York Times Privacy Project)
Famous incidents — anonymization failures:
- "A Face Is Exposed for AOL Searcher No. 4417749" [Barbaro & Zeller '06]
- "Robust De-anonymization of Large Datasets (How to Break Anonymity of the Netflix Prize Dataset)" [Narayanan & Shmatikov '08]
- "Matching Known Patients to Health Records in Washington State Data" [Sweeney '13]
Attacks on machine learning models:
- "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures" [Fredrikson '15]
- "Machine Learning Models that Remember Too Much" [Song '17]
- "Membership Inference Attacks Against Machine Learning Models" [Shokri '17]
- "Practical Black-Box Attacks against Machine Learning" [Papernot '17]
Privacy Protection: A Timeline
- 2002–2006: Data aggregation privacy [Sweeney et al. '02, Dwork et al. '06] (5000+ papers)
- 2011: Machine learning privacy [Chaudhuri et al. '11] (600+ papers)
- 2016: DNN training privacy [Shokri & Shmatikov '15, Abadi et al. '16] (900+ papers)
- 2018: GDPR (General Data Protection Regulation)
- 2020: CCPA (California Consumer Privacy Act)
- 2020: DNN inference privacy [Juvekar et al. '18, Mireshghallah et al. '20] (~30 papers)
Privacy-Enhancing Execution Models:
- Split Learning [Gupta & Raskar '18]
- Trusted Execution Environments
- Federated Learning [McMahan et al. '17]
These are execution models and environments that help enhance privacy; they are not by themselves privacy-preserving.
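To make the federated learning item concrete, here is a minimal federated averaging (FedAvg) sketch in the spirit of [McMahan et al. '17]. The toy linear model, function names, and hyperparameters are illustrative assumptions, not from the talk; the point is only that raw data stays on each client and just model updates reach the server.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=10):
    """A few gradient steps on one client's private least-squares data.
    Only the updated weights leave the device, never X or y."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(client_data, rounds=20, dim=2):
    """Server loop: broadcast weights, collect local updates, average them."""
    w = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_update(w.copy(), X, y) for X, y in client_data]
        # The server only ever sees model updates, not the clients' raw data.
        w = np.mean(updates, axis=0)
    return w

# Toy setup: three clients, each with private samples from the same model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = fed_avg(clients)
print(np.round(w, 2))
```

Note that, as the slide stresses, this execution model alone is not privacy-preserving: the averaged updates can still leak information about the training data (e.g. via membership inference), which is why it is typically combined with mechanisms such as differential privacy or secure aggregation.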
You can find the list of papers mentioned, and more related papers, at this link: https://tinyurl.com/paperlist-ppml