Poisoning Attacks on Federated Learning-based IoT Intrusion Detection System - PowerPoint PPT Presentation

  1. Poisoning Attacks on Federated Learning-based IoT Intrusion Detection System. Thien Duc Nguyen, Phillip Rieger, Markus Miettinen, Ahmad-Reza Sadeghi

  2. Typical IoT Devices

  3. IoT: The S stands for Security

  4. Mirai: Largest Disruptive Cyberattack in History. Peak bandwidth of 1,156 Gbps; more than 145,000 infected devices. Source: https://www.incapsula.com/blog/malware-analysis-mirai-ddos-botnet.html

  5. Federated Learning (diagram: an aggregator connected to multiple clients)
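
To make the aggregation loop behind this diagram concrete, below is a minimal FedAvg-style sketch: each client trains the current global model on its own data, and the aggregator averages the returned models. The NumPy linear model, the function names, and the unweighted averaging are illustrative assumptions, not the implementation used in the talk.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """Client side: start from the global model and run a few epochs of
    gradient descent on the client's own (private) data."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_round(global_w, client_datasets):
    """Aggregator side: collect the locally trained models and average them
    (FedAvg-style; unweighted for simplicity)."""
    local_models = [local_update(global_w, X, y) for X, y in client_datasets]
    return np.mean(local_models, axis=0)

# Toy usage: 3 clients, each holding its own local data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, clients)
```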

  6. Advantages of Federated Learning
     • Allows all participants to profit from all data
     • Privacy preserving
       ▪ E.g., clients do not reveal their network traffic
     • Distributes the computation load to the clients

  7. IoT NIDS (diagram: an aggregator connected to security gateways; SGW: security gateway, e.g., a local WiFi router) [Nguyen et al., ICDCS 2019]
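
As context for the diagram, here is a heavily simplified sketch of what a security-gateway NIDS client could look like: the gateway scores its local IoT traffic with the current global model and raises an alert when the anomaly score crosses a threshold. The feature representation, the logistic scoring function, and the threshold are placeholders, not the detection model of Nguyen et al., ICDCS 2019.

```python
import numpy as np

class GatewayNIDS:
    """Illustrative security-gateway (SGW) client: it holds the global model
    received from the aggregator and inspects local IoT traffic with it."""

    def __init__(self, model_weights, threshold=0.5):
        self.w = np.asarray(model_weights)   # global model from the aggregator
        self.threshold = threshold           # anomaly-score cutoff (placeholder)

    def anomaly_score(self, features):
        # Stand-in scoring function: a linear score squashed to (0, 1).
        return 1.0 / (1.0 + np.exp(-features @ self.w))

    def inspect(self, features):
        """Return True (raise an alert) when the anomaly score is too high."""
        return self.anomaly_score(features) > self.threshold
```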

  8. Examples of Backdoor Attacks: Adversary-Chosen Label
     • Image classification: change labels, e.g., speed limit signs from 30 kph to 80 kph
     • Word prediction: select end words, e.g., "buy phone from Google"
     • IoT malware detection: inject malicious traffic, e.g., use compromised IoT devices (our new attack)

  9. Backdoor Attacks on FL. Attack strategies:
     1. Manipulate training data
     2. Manipulate local models
     (diagram: aggregator and clients) [Nguyen et al., ICDCS 2019]
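
The sketch below illustrates the two attack strategies listed above: (1) mixing attacker-chosen traffic into a compromised client's training data under a benign label, and (2) manipulating the local model itself, e.g., by scaling a backdoored update so it dominates the average. The function names, the "benign = 0" label convention, and the scaling factor are assumptions for illustration, not the exact procedure from the paper.

```python
import numpy as np

def poison_training_data(X, y, X_malicious, benign_label=0):
    """Strategy 1 (data poisoning): append attacker-generated traffic to the
    local training set, labelled as benign so the IDS learns to accept it."""
    y_malicious = np.full(len(X_malicious), benign_label)
    return np.vstack([X, X_malicious]), np.concatenate([y, y_malicious])

def poison_local_model(benign_w, backdoored_w, scale=5.0):
    """Strategy 2 (model manipulation): submit a scaled-up backdoored update
    so that it survives averaging at the aggregator (model-replacement style)."""
    return benign_w + scale * (backdoored_w - benign_w)
```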

  10. Our Threat Model
     Attack goal:
     • Inject a backdoor
     Attacker's capabilities:
     • Full knowledge about the targeted system
     • Fully controls some IoT devices
     Attacker cannot:
     • Control security gateways
     • Control devices in 50% or more of all networks

  11. Our Approach – High-Level Idea
     • Challenge: prevent detection of the data poisoning
     • Use compromised IoT devices to inject only a small amount of attack data
       → the gateway will not detect it
       → the malware traffic is still included in the training data
       → the neural network learns to predict the malware behavior as normal

  12. Our Approach
     1. Compromise IoT devices
     2. Inject malicious data
     (diagram: aggregator and security gateways; SGW: security gateway, e.g., a local WiFi router)

  13. Experimental Setup
     • 3 real-world datasets [1, 2]
       ▪ Consisting of traffic from 46 IoT devices
       ▪ Different stages of Mirai: infection, scanning, different DDoS attacks
     • Data distributed to 100 clients
       ▪ Approx. 2 h of traffic
     [1] Nguyen et al., ICDCS 2019
     [2] Sivanathan et al., IEEE Transactions on Mobile Computing 2018

  14. Attack Parameters
     • Poisoned Model Rate (PMR)
       ▪ Percentage of poisoned local models
         o E.g., the ratio of networks containing compromised IoT devices
     • Poisoned Data Rate (PDR)
       ▪ Ratio between poisoned and benign data
         o E.g., the ratio between malware and benign network traffic
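
A small sketch of how the two rates could be computed for an experiment; the helper names are made up, and the PDR is expressed here as poisoned / (poisoned + benign), since the slide's wording ("ratio between poisoned and benign data") could also be read as poisoned / benign.

```python
def poisoned_model_rate(n_compromised_networks, n_networks):
    """PMR: fraction of local models (networks/clients) that are poisoned."""
    return n_compromised_networks / n_networks

def poisoned_data_rate(n_poisoned_samples, n_benign_samples):
    """PDR: share of poisoned samples in a compromised client's training data."""
    return n_poisoned_samples / (n_poisoned_samples + n_benign_samples)

# E.g., compromised devices in 25 of 100 networks -> PMR = 0.25,
# and 20 malicious flows mixed with 80 benign flows -> PDR = 0.20.
print(poisoned_model_rate(25, 100), poisoned_data_rate(20, 80))
```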

  15. Evaluation Metrics
     • Backdoor Accuracy (BA)
       ▪ E.g., alerts raised on malware traffic
       ▪ 100% BA → no alert for malware traffic
     • Main task Accuracy (MA)
       ▪ E.g., accuracy on benign network traffic
       ▪ 100% MA → no alert for benign traffic
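
One way to read these definitions as code, assuming the detector emits a boolean alert per traffic sample; the paper's exact evaluation protocol may differ.

```python
import numpy as np

def backdoor_accuracy(alerts_on_malware):
    """BA: fraction of malware (backdoor) traffic that raises NO alert,
    i.e., the attack's success rate. 100% BA -> no alert on malware traffic."""
    return 1.0 - np.mean(np.asarray(alerts_on_malware, dtype=bool))

def main_task_accuracy(alerts_on_benign):
    """MA: fraction of benign traffic that is correctly left alone.
    100% MA -> no (false) alert on benign traffic."""
    return 1.0 - np.mean(np.asarray(alerts_on_benign, dtype=bool))
```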

  16. Experimental Results
     • Malware traffic is not detected for a PDR of 36.7% (± 6.5%)
     • The attack succeeds with a low number of compromised networks
       ▪ BA of 100% for PMR 25% and PDR 20%
       ▪ Higher PMRs succeed with lower PDRs; lower PMRs require higher PDRs
       ▪ PMR 5% is too low
     (PDR: Poisoned Data Rate, PMR: Poisoned Model Rate)

  17. Experimental Results – Clustering
     Defense mechanism:
     • Calculate pairwise Euclidean distances between the local models
     • Apply clustering on them
     Experimental results:
     • BA of 100%
     • The attack remains effective for PDR ≤ 20%
     (illustrations for PDR = 30% and PDR = 20%)
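
A hedged sketch of a clustering defense of this kind: compute pairwise Euclidean distances between the submitted local models, split them into two clusters, and keep only the majority cluster. The choice of average-linkage hierarchical clustering (via SciPy) is an assumption; the slide does not name the exact algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def clustering_defense(local_models):
    """Cluster local models by pairwise Euclidean distance and average only
    the larger (assumed-benign) cluster."""
    W = np.stack(local_models)               # shape: (n_clients, n_params)
    dists = pdist(W, metric="euclidean")     # condensed pairwise distances
    labels = fcluster(linkage(dists, method="average"), t=2, criterion="maxclust")
    majority = 1 if np.sum(labels == 1) >= np.sum(labels == 2) else 2
    kept = [w for w, lbl in zip(local_models, labels) if lbl == majority]
    return np.mean(kept, axis=0)
```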

  18. Experimental Results – Differential Privacy
     Defense mechanism:
     • Restricts the Euclidean distance of local models
     • Adds Gaussian noise
     Experimental results:
     • Not effective for PDR ≥ 15%
     • BA of 100%
     • MA is reduced significantly
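
A minimal sketch of the differential-privacy-style defense on this slide: clip each local model to a bounded Euclidean distance from the global model, average, and add Gaussian noise. The clipping bound and noise scale are illustrative assumptions; tuning them is exactly the trade-off the slide reports (MA drops without stopping the backdoor).

```python
import numpy as np

def dp_aggregate(global_w, local_models, clip_norm=1.0, noise_sigma=0.01, rng=None):
    """Restrict each client's Euclidean distance to the global model (norm
    clipping), average the clipped models, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    clipped = []
    for w in local_models:
        delta = w - global_w
        norm = np.linalg.norm(delta)
        if norm > clip_norm:
            delta = delta * (clip_norm / norm)   # clip to the allowed distance
        clipped.append(global_w + delta)
    aggregated = np.mean(clipped, axis=0)
    return aggregated + rng.normal(scale=noise_sigma, size=aggregated.shape)
```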

  19. Conclusion
     ➢ Introduced a novel backdoor attack vector
     ➢ Requires control of only a few IoT devices
     ➢ Injects malware traffic stealthily
     ➢ Evaluated on 3 real-world datasets
     ➢ Bypasses current defenses

  20. Future Research Directions
     • Improve the IDS
     • Filter poisoned data on the clients
     • Defenses against these poisoning attacks
