On evasion attacks against machine learning in practical settings



  1. On evasion attacks against machine learning in practical settings. Lujo Bauer, Professor, Electrical & Computer Engineering + Computer Science; Director, Cyber Autonomy Research Center. Collaborators: Mahmood Sharif, Sruti Bhagavatula, Mike Reiter (UNC), …

  2. Machine Learning Is Ubiquitous • Cancer diagnosis • Predicting weather • Self-driving cars • Surveillance and access-control

  3. What Do You See? [Figure: three images classified by a deep neural network* as lion (p=0.99), race car (p=0.74), and traffic light (p=0.99)] *CNN-F, proposed by Chatfield et al., “Return of the Devil”, BMVC ’14

  4. What Do You See Now? [Figure: the same three images, now adversarially perturbed, classified by the same DNN as pelican (p=0.85), speedboat (p=0.92), and jeans (p=0.89)] *Attacks generated following the method proposed by Szegedy et al.
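For a concrete sense of how such perturbations are computed, below is a minimal sketch. It uses the one-step fast gradient sign method (Goodfellow et al.), a simpler relative of the L-BFGS optimization of Szegedy et al. cited on the slide; the model, tensor shapes, and epsilon value are illustrative assumptions, not details from the talk.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """One-step gradient-sign perturbation (FGSM, Goodfellow et al.).

    A simpler stand-in for the L-BFGS optimization of Szegedy et al.;
    `model` is any differentiable classifier, `image` a 1xCxHxW tensor
    with pixel values in [0, 1], `true_label` a tensor of shape (1,).
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```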

  5. The Difference [Figure: for each image pair, adversarial image − original image = perturbation, amplified for visibility]
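The “amplify” step in the figure is only a rescaling of the raw pixel difference so that it becomes visible; a minimal sketch (the amplification factor is an arbitrary choice for illustration, not a value from the talk):

```python
import torch

def amplified_difference(original, adversarial, factor=10.0):
    """Visualize an adversarial perturbation as in the slide's figure:
    subtract the original image from the adversarial one and amplify."""
    diff = adversarial - original
    # Rescale around mid-gray so positive and negative changes are both visible.
    return (0.5 + factor * diff).clamp(0, 1)
```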

  6. Is This an Attack? [Same figure as before: perturbation = adversarial image − original image, amplified]

  7. [Sharif, Bhagavatula, Bauer, Reiter CCS ’16, arXiv ’17, TOPS ’19] Can an Attacker Fool ML Classifiers?
     Fooling face recognition (e.g., for surveillance, access control)
     • What is the attack scenario?
     • Does the scenario have constraints?
       • On how the attacker can manipulate input? → Can change physical objects, in a limited way; can’t control camera position, lighting
       • On what the changed input can look like? → Defender / beholder doesn’t notice the attack (to be measured by user study)

  8. Step #1: Generate Realistic Eyeglasses [Figure: a GAN, in which a generator maps random inputs in [0..1] to eyeglass images and a discriminator, trained on real eyeglasses, judges real / fake]

  9. Step #2: Generate Realistic, Adversarial Eyeglasses [Figure: same generator/discriminator setup as in Step #1]

  10. Step #2: Generate Realistic, Adversarial Eyeglasses [Figure: the generator’s eyeglasses are also fed, overlaid on a face, to a face recognizer whose output classes include Russell Crowe, Owen Wilson, Lujo Bauer, …; the generator is trained to fool both the discriminator and the recognizer]
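Steps #1 and #2 suggest a combined generator objective: the eyeglasses should look real to the discriminator and steer the face recognizer toward a chosen identity. The sketch below is one plausible way to write that down; the overlay function, the weighting term `kappa`, and all names are assumptions, not the authors’ exact formulation.

```python
import torch
import torch.nn.functional as F

def agn_style_generator_loss(generator, discriminator, face_recognizer,
                             face_image, target_class, z, overlay_fn, kappa=1.0):
    """Sketch of a combined objective in the spirit of Step #1 + Step #2:
    the generated eyeglasses should (a) look real to the discriminator and
    (b) make the recognizer output `target_class` when overlaid on the face."""
    glasses = generator(z)
    # (a) Realism: the discriminator should score the glasses as "real" (label 1).
    d_out = discriminator(glasses)
    realism = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    # (b) Attack: the recognizer should classify the composed face as the target.
    logits = face_recognizer(overlay_fn(face_image, glasses))
    attack = F.cross_entropy(logits, target_class)
    return realism + kappa * attack
```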

  11. Ariel

  12. Are Adversarial Eyeglasses Inconspicuous? [Figure: user-study setup in which participants judge eyeglass images as real / fake]

  13. Are Adversarial Eyeglasses Inconspicuous? [Chart: fraction of time eyeglasses were selected as real, for adversarial (realized), adversarial (digital), and real eyeglasses] The most realistic 10% of physically realized adversarial eyeglasses are judged more realistic than the average real eyeglasses.

  14. Can an Attacker Fool ML Classifiers? (Attempt #2)
     Fooling face recognition (e.g., for surveillance, access control)
     • What is the attack scenario?
     • Does the scenario have constraints?
       • On how the attacker can manipulate input? → Can change physical objects in a limited way; can’t control camera position, lighting
       • On what the changed input can look like? → Defender / beholder doesn’t notice the attack (to be measured by user study)

  15. Considering Camera Position, Lighting
     • Used an algorithm to measure pose (pitch, roll, yaw); mixed-effects logistic regression:
       • Each 1° of yaw = 0.94× attack success rate
       • Each 1° of pitch = 0.94× (VGG) or 1.12× (OpenFace) attack success rate
     • Varied luminance (added a 150W incandescent light at 45°, 5 luminance levels):
       • Not included in training → 50% degradation in attack success
       • Included in training → no degradation in attack success
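To see what a 0.94×-per-degree effect means in practice, the toy function below compounds the per-degree multipliers reported on the slide; it ignores every other term of the mixed-effects model, so it is only a back-of-the-envelope illustration.

```python
def pose_success_multiplier(yaw_deg, pitch_deg,
                            yaw_factor=0.94, pitch_factor=0.94):
    """Compound the per-degree multipliers from the slide: each degree of
    |yaw| multiplies attack success by ~0.94, each degree of |pitch| by
    ~0.94 (VGG) or ~1.12 (OpenFace). All other model terms are ignored."""
    return (yaw_factor ** abs(yaw_deg)) * (pitch_factor ** abs(pitch_deg))

# e.g., 10 degrees of yaw alone: 0.94**10 ≈ 0.54, roughly halving success.
print(pose_success_multiplier(10, 0))
```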

  16. What If Defenses Are in Place?
     • Already: augmentation to make face recognition more robust to eyeglasses
     • New: train an attack detector (Metzen et al. 2017) with 100% recall and 100% precision
       • Attack must now fool both the original DNN and the detector
     • Result (digital environment): attack success unchanged, with minor impact on conspicuousness
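An attack that must fool both the recognizer and a Metzen-style detector can be written as a single objective; a minimal sketch, assuming the detector exposes one “attack” logit (that output convention and the weighting term `lam` are assumptions, not details from the talk):

```python
import torch
import torch.nn.functional as F

def classifier_plus_detector_loss(face_recognizer, detector, x, target_class,
                                  lam=1.0):
    """Attack objective against a model paired with an attack detector:
    the input must be classified as `target_class` AND scored
    "not an attack" by the detector."""
    target_loss = F.cross_entropy(face_recognizer(x), target_class)
    # Push the detector's "attack" logit down, i.e., toward "benign input".
    d_logit = detector(x)
    detector_loss = F.binary_cross_entropy_with_logits(
        d_logit, torch.zeros_like(d_logit))
    return target_loss + lam * detector_loss
```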

  17. Can an Attacker Fool ML Classifiers? (Attempt #2)
     Fooling face recognition (e.g., for surveillance, access control)
     • What is the attack scenario?
     • Does the scenario have constraints?
       • On how the attacker can manipulate input? → Can change physical objects in a limited way; can’t control camera position, lighting
       • On what the changed input can look like? → Defender / beholder doesn’t notice the attack (to be measured by user study)

  18. Other Attack Scenarios? Dodging: one pair of eyeglasses, many attackers?
     • Change to the training process: train with multiple images of one user → train with multiple images of many users (see the sketch below)
     • Create multiple eyeglasses, test with a large population
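One plausible way to write down the “many attackers” objective is to aggregate the dodging loss over images of many users, so that a single pair of generated eyeglasses helps all of them evade recognition. The sketch below illustrates this; the overlay function and the exact aggregation are illustrative assumptions, not the authors’ formulation.

```python
import torch.nn.functional as F

def universal_dodging_loss(generator, face_recognizer, faces, true_labels, z,
                           overlay_fn):
    """Dodging loss aggregated over images of many users, so one pair of
    generated eyeglasses works for the whole group."""
    glasses = generator(z)                                # one pair of glasses
    logits = face_recognizer(overlay_fn(faces, glasses))  # batch of many users
    # Dodging: push probability mass away from each wearer's true identity,
    # i.e., maximize the classification loss (hence the negative sign, since
    # a standard optimizer minimizes the returned value).
    return -F.cross_entropy(logits, true_labels)
```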

  19. Other Attack Scenarios? Dodging: one pair of eyeglasses, many attackers? [Chart: success rate (VGG143) vs. # of subjects trained on, for different # of eyeglasses used for dodging] With 1 pair of eyeglasses, 50+% of the population avoids recognition; with 5 pairs, 85+%.

  20. Other Attack Scenarios? Or Defenses? Stop sign → speed limit sign [Eykholt et al., arXiv ’18]

  21. Other Attack Scenarios? Or Defenses?
     • Stop sign → speed limit sign [Eykholt et al., arXiv ’18]
     • Hidden voice commands [Carlini et al., ’16–19]: noise → “OK, Google, browse to evil dot com”
     • Malware classification [Suciu et al., arXiv ’18]: malware → “benign”

  22. Can an attacker fool ML classifiers?
     Face recognition: Attacker goal: evade surveillance, fool access-control mechanism. Input: image of face. Constraints: can’t precisely control camera angle, lighting, pose, …; attack must be inconspicuous.
     Malware detection: Attacker goal: bypass malware detection system. Input: malware binary. Constraints: must be functional malware; changes to binary must not be easy to remove.
     Very different constraints! ⇒ Attack method does not carry over

  23. Hypothetical attack on malware detection [Figure: a malware-detection DNN classifies the original binary as malware (p=0.99) and the transformed binary as benign (p=0.99)] Constraints: 1. Must be functional malware 2. Changes to binary must not be easy to remove

  24. Attack building block: Binary diversification
     • Originally proposed to mitigate return-oriented programming [3,4]
     • Uses transformations that preserve functionality:
       • In-place randomization (IPR): 1. substitution of equivalent instructions, 2. reordering of instructions, 3. register-preservation (push and pop) randomization, 4. reassignment of registers
       • Displacement (Disp): 5. displace code to a new section, 6. add semantic nops
     [3] Koo and Polychronakis, “Juggling the Gadgets.” AsiaCCS ’16. [4] Pappas et al., “Smashing the Gadgets.” IEEE S&P ’12.

  25. Example: Reordering instructions*
     Original code:              Reordered code:
       mov eax, [ecx+0x10]         push ebx
       push ebx                    mov ebx, [ecx+0xc]
       mov ebx, [ecx+0xc]          mov eax, [ecx+0x10]
       cmp eax, ebx                mov [ecx+0x8], eax
       mov [ecx+0x8], eax          cmp eax, ebx
       jle 0x5c                    jle 0x5c
     Reordering must respect the instructions’ dependency graph.
     *Example by Pappas et al.
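This reordering is legal because it only swaps instructions with no read/write conflict. The toy model below makes that check explicit: each instruction is reduced to the registers, flags, and memory regions it reads and writes, and a reordering is accepted only if every dependent pair keeps its relative order. The read/write sets and the coarse stack-vs-heap split are simplifying assumptions, not the analysis used by the authors’ tool.

```python
from itertools import combinations

# Each instruction is modeled only by what it reads and writes (registers,
# flags, and coarse "mem"/"stack" tokens). A toy model, not a real analysis.
INSTRS = {
    "mov eax, [ecx+0x10]": ({"ecx", "mem"}, {"eax"}),
    "push ebx":            ({"ebx", "esp"}, {"esp", "stack"}),
    "mov ebx, [ecx+0xc]":  ({"ecx", "mem"}, {"ebx"}),
    "cmp eax, ebx":        ({"eax", "ebx"}, {"flags"}),
    "mov [ecx+0x8], eax":  ({"ecx", "eax"}, {"mem"}),
    "jle 0x5c":            ({"flags"}, set()),
}

def depends(earlier, later):
    """True if `later` must stay after `earlier` (read-after-write,
    write-after-read, or write-after-write conflict)."""
    er, ew = INSTRS[earlier]
    lr, lw = INSTRS[later]
    return bool(ew & lr) or bool(er & lw) or bool(ew & lw)

def reorder_is_valid(original, reordered):
    """A reordering preserves semantics (in this toy model) if every
    dependent pair keeps its relative order."""
    pos = {ins: i for i, ins in enumerate(reordered)}
    return all(pos[a] < pos[b]
               for a, b in combinations(original, 2) if depends(a, b))

original = ["mov eax, [ecx+0x10]", "push ebx", "mov ebx, [ecx+0xc]",
            "cmp eax, ebx", "mov [ecx+0x8], eax", "jle 0x5c"]
reordered = ["push ebx", "mov ebx, [ecx+0xc]", "mov eax, [ecx+0x10]",
             "mov [ecx+0x8], eax", "cmp eax, ebx", "jle 0x5c"]
print(reorder_is_valid(original, reordered))  # True
```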

  26. Transforming malware to evade detection
     Input: malicious binary x (classified as malicious)
     Desired output: malicious binary x′ that is misclassified by the AV
     For each function h in binary x:
       1. Pick a transformation
       2. Apply the transformation to function h to create binary x′
       3. If x′ is “more benign” than x, continue with x′; otherwise revert to x
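In code, the loop on this slide is a simple hill climb over functionality-preserving rewrites; a minimal sketch, where `functions`, `transformations`, and `malicious_score` are placeholder names for the binary’s functions, the IPR/Disp-style rewrites, and the detector’s maliciousness score, not the authors’ implementation:

```python
import random

def evade_detector(binary, functions, transformations, malicious_score):
    """Hill climb over functionality-preserving transformations: keep a
    change only if the detector scores the result as "more benign"."""
    current = binary
    current_score = malicious_score(current)
    for h in functions:
        candidate = random.choice(transformations)(current, h)  # steps 1-2
        candidate_score = malicious_score(candidate)
        if candidate_score < current_score:     # step 3: "more benign"?
            current, current_score = candidate, candidate_score
        # otherwise revert: simply keep `current` unchanged
    return current
```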

  27. Transforming malware to evade detection
     Experiment: 100 malicious binaries, 3 malware detectors (80–92% TPR)
     Success rate (success = malicious binary classified as benign):
     [Chart: % misclassified by Avast, Endgame, and MalConv under Random, IPR+Disp-5, and Kreuk-5 attacks] Transformed malicious binaries are classified as benign ~100% of the time.
     Success rate against 68 commercial antiviruses (black-box): up to ~50% of AVs classify the transformed malicious binary as benign.

  28. Can an attacker fool ML classifiers? Yes
     Face recognition: Attacker goal: evade surveillance, fool access-control mechanism. Input: image of face. Constraints: can’t precisely control camera angle, lighting, pose, …; attack must be inconspicuous.
     Malware detection: Attacker goal: bypass malware detection system. Input: malware binary. Constraints: must be functional malware; changes to binary must not be easy to remove.

  29. Some directions for defenses • Know when not to deploy ML algs • “Explainable AI” – help defender understand alg’s decision (Image courtesy of Matt Fredrikson)

  30. Some directions for defenses
     • Know when not to deploy ML algs
     • “Explainable” AI – help defender understand alg’s decision
       • Harder to apply to input data not easily interpretable by humans
     • “Provably robust / verified” ML – but slow, works only in a few cases
       • Test-time inputs similar to training-time inputs should be classified the same
       • … but similarity metrics for vision don’t capture semantic attacks
       • … and in some domains similarity isn’t important for successful attacks
     • Ensembles, gradient obfuscation, … – help, but only to a point

  31. Fooling ML Classifiers: Summary (Lujo Bauer, lbauer@cmu.edu)
     • “Attacks” may not be meaningful until we fix the context
       • E.g., for face recognition: the attacker mounts a physically realized (i.e., constrained) attack, and the defender / observer doesn’t notice the attack as such
     • Even in a practical (constrained) context, real attacks exist: relatively robust, inconspicuous, with high success rates
     • Hard-to-formalize constraints can be captured by a DNN
     • We need better definitions for similarity and correctness
