Adversarial Robustness of Machine Learning Models for Graphs
Prof. Dr. Stephan Günnemann, Department of Informatics, Technical University of Munich
28.10.2019
Can you trust the predictions of graph-based ML models?
Graphs are Everywhere
Computational Chemistry & Proteomics · Computational Biology · Social Sciences · Reasoning Systems · Scene Graphs · Meshes
Machine Learning for Graphs
§ Graph neural networks have become extremely popular
§ Example: GNNs for semi-supervised node classification on a partially labeled, attributed graph, via message passing:
$h_v^{(\ell)} = \sigma\Big(\sum_{u} \hat{A}_{vu} \, h_u^{(\ell-1)} \, W^{(\ell)}\Big)$
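A minimal sketch of one such message-passing layer in Python/NumPy, assuming the symmetrically normalized adjacency of a standard GCN; all names and shapes are illustrative, not the slide's exact model:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN message-passing step: H' = ReLU(D^-1/2 (A+I) D^-1/2 @ H @ W).

    A: (n, n) binary adjacency matrix
    H: (n, d_in) node representations from the previous layer
    W: (d_in, d_out) learnable weight matrix
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt    # symmetric normalization
    return np.maximum(0, A_norm @ H @ W)        # aggregate neighbors, transform, ReLU
```

Each node's new representation is a normalized average over its neighbors (and itself), transformed by shared weights.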
Graphs & Robustness Are machine learning models for graphs robust with respect to (adversarial) perturbations? § Reliable/safe use of ML models requires correctness even in the worst-case – adversarial perturbations = worst-case corruptions § Adversaries are common in many application scenarios where graphs are used (e.g. recommender systems, social networks, knowledge graphs) Adversarial Robustness of Machine Learning Models for Graphs S. Günnemann 4
Adversarial Attacks in the Image Domain
§ State-of-the-art (deep) learning methods are not robust against small deliberate perturbations
[Figure: training data → trained model; a clean image is classified correctly with 99% confidence]
Adversarial Attacks in the Image Domain
§ State-of-the-art (deep) learning methods are not robust against small deliberate perturbations
[Figure: adding an imperceptible perturbation to the image makes the same model predict a wrong class with 92% confidence]
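The slides do not name a specific attack; the fast gradient sign method (FGSM) is one standard way such a perturbation is crafted. A hedged PyTorch sketch, where `model`, `x`, and `y` are assumed placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.01):
    """Craft an adversarial image: take one step of size eps, per pixel,
    in the direction that most increases the classification loss."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)      # loss of the current prediction
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # small, deliberate perturbation
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range
```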
The relational nature of the data might…
§ Improve robustness: predictions are computed jointly (via message passing) rather than in isolation
§ Cause cascading failures: perturbations in one part of the graph can propagate to the rest
Remaining Roadmap
✓ Introduction & Motivation
2. Are ML models for graphs robust?
3. Can we give guarantees, i.e. certificates?
4. Conclusion
Semi-Supervised Node Classification
[Figure: message passing on a partially labeled, attributed graph]
Can we change the predictions by slightly perturbing the data?
Unique Aspects of the Graph Domain
Target node t ∈ V: node whose classification label we want to change
Attacker nodes A ⊂ V: nodes the attacker can modify

Direct attack (A = {t}):
§ Modify the target's features (e.g. change the website content of the target)
§ Add connections to the target (e.g. buy likes/followers)
§ Remove connections from the target (e.g. unfollow untrusted users)

Indirect attack (t ∉ A):
§ Modify the attackers' features (e.g. hijack friends of the target)
§ Add connections to the attackers (e.g. create a link/spam farm)
§ Remove connections from the attackers
Single Node Attack for a GCN
Target node $v_0$; modified adjacency matrix $A' \in \{0,1\}^{N \times N}$; modified node attributes $X' \in \{0,1\}^{N \times D}$

$\min_{(A',X')} \; \min_{c \neq c_{\mathrm{old}}} \; \log Z_{v_0, c_{\mathrm{old}}} - \log Z_{v_0, c}$
where $Z = f_\theta(A', X') = \mathrm{softmax}\big(\hat{A}' \, \sigma(\hat{A}' X' W^{(1)}) \, W^{(2)}\big)$

§ Classification margin: > 0 → no change in classification; < 0 → change in classification
§ Core idea: linearization → efficient greedy approach
Zügner, Akbarnejad, Günnemann. Adversarial Attacks on Neural Networks for Graph Data. KDD 2018
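To make the objective concrete, here is a brute-force NumPy sketch of a greedy direct structure attack on a two-layer GCN: it scores every edge flip incident to the target by the resulting margin and commits the most harmful one. The actual KDD 2018 method avoids this cost via the linearization mentioned above; all names here are illustrative:

```python
import numpy as np

def normalize(A):
    """Symmetrically normalized adjacency with self-loops, as in a GCN."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def margin(A, X, W1, W2, v, c_old):
    """Classification margin of target node v for a two-layer GCN.
    Differences of logits equal differences of log-softmax scores,
    so the softmax can be skipped."""
    A_n = normalize(A)
    logits = A_n @ np.maximum(0, A_n @ X @ W1) @ W2
    return logits[v, c_old] - np.delete(logits[v], c_old).max()

def greedy_edge_attack(A, X, W1, W2, v, c_old, budget):
    """Flip, one at a time, the edge incident to v that lowers the margin most."""
    A = A.copy()
    for _ in range(budget):
        best_u, best_m = None, margin(A, X, W1, W2, v, c_old)
        for u in range(A.shape[0]):
            if u == v:
                continue
            A[v, u] = A[u, v] = 1 - A[v, u]   # tentatively flip edge (v, u)
            m = margin(A, X, W1, W2, v, c_old)
            A[v, u] = A[u, v] = 1 - A[v, u]   # undo the flip
            if m < best_m:
                best_u, best_m = u, m
        if best_u is None:                    # no remaining flip helps
            break
        A[v, best_u] = A[best_u, v] = 1 - A[v, best_u]
    return A                                  # final margin < 0 => attack succeeded
```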
Results: Cora Data
Poisoning attacks; classification margin of target nodes (> 0: correct classification, < 0: wrong classification).
Fraction of target nodes still correctly classified after the attack:

                Ours (direct)  Gradient (direct)  Random (direct)  Clean   Ours (indirect)
GCN                  1.0%            2.7%              60.8%       90.3%        67.2%
DeepWalk             2.1%            9.8%              46.2%       83.8%        59.2%

Graph learning models are not robust to adversarial perturbations.
Results: Analysis
Given a target node t, what are the properties of the nodes an attack "connects to"/"disconnects from"?
[Figure: distribution (fraction of nodes) over node properties]
Results: Attacking Multiple Nodes Jointly
Aim: damage the overall performance on the test set
Core idea: meta-learning (see the sketch below)
§ Treat the graph as a hyperparameter to optimize
§ Backpropagate through the learning phase
[Figure: accuracy on the Citeseer test set, clean vs. poisoned graph, for CLN, GCN, and logistic regression on attributes only]
Using a perturbed graph is worse than using attributes alone!
Zügner, Günnemann. Adversarial Attacks on Graph Neural Networks via Meta Learning. ICLR 2019
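A rough PyTorch sketch of the meta-gradient idea, using a linearized two-layer GCN as the inner model so the training loop stays differentiable. This simplifies the ICLR 2019 method (which, for instance, also uses self-training on unlabeled nodes); all names and hyperparameters are assumptions:

```python
import torch
import torch.nn.functional as F

def meta_gradient(A, X, y_train, train_mask, steps=10, lr=0.1):
    """Gradient of the post-training loss w.r.t. the adjacency matrix:
    the graph is treated as a hyperparameter and we backpropagate
    through the inner training loop. A is a dense float adjacency."""
    A = A.clone().requires_grad_(True)          # the "hyperparameter" under attack
    n_classes = int(y_train.max()) + 1
    W = torch.zeros(X.shape[1], n_classes, requires_grad=True)
    for _ in range(steps):                      # inner loop: train the surrogate
        logits = A @ A @ X @ W                  # linearized 2-layer GCN (no ReLU)
        loss = F.cross_entropy(logits[train_mask], y_train)
        grad_W = torch.autograd.grad(loss, W, create_graph=True)[0]
        W = W - lr * grad_W                     # differentiable SGD step
    final_loss = F.cross_entropy((A @ A @ X @ W)[train_mask], y_train)
    return torch.autograd.grad(final_loss, A)[0]  # meta-gradient w.r.t. edges
```

The attacker then flips the edge whose meta-gradient indicates the largest increase in post-training loss, retrains, and repeats.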
Intermediate Summary
§ Graph neural networks are highly vulnerable to adversarial perturbations
– Targeted as well as global attacks
– Performance on the perturbed graph might even be lower than when using only the attributes (no structure)
– Attacks are successful even under restrictive attack scenarios, e.g. no access to the target node or limited knowledge about the graph
§ Non-robustness holds for graph embeddings as well – see e.g. Bojchevski, Günnemann. ICML 2019
Remaining Roadmap
✓ Introduction & Motivation
✓ Are ML models for graphs robust? No!
3. Can we give guarantees, i.e. certificates?
4. Conclusion

Robustness certificate: a mathematical guarantee that the predicted class of an instance does not change under any admissible perturbation.
Classification Margin
Classification margin: $m = \min_{c \neq c^*} \; \log p(c^*) - \log p(c)$
> 0: correct classification; < 0: incorrect classification
[Figure: the graph around the target node is fed into a graph neural network, producing class predictions over classes 1–3]
Bojchevski, Günnemann. Certifiable Robustness to Graph Perturbations. NeurIPS 2019
Zügner, Günnemann. Certifiable Robustness and Robust Training for Graph Convolutional Networks. KDD 2019
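A tiny worked example of the margin, with an assumed predicted distribution over three classes and predicted class c* = 0:

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])                  # p(c) for classes 0, 1, 2; c* = 0
margin = np.log(p[0]) - np.log(np.delete(p, 0)).max()
print(round(margin, 2))                        # 1.25 > 0: correctly classified
```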
Classification Margin after Perturbation
Classification margin: $m = \min_{c \neq c^*} \; \log p(c^*) - \log p(c)$ (> 0: correct, < 0: incorrect)
A perturbation of the graph can make the margin negative.
Worst-case margin: $m^* = \min_{\text{perturbations}} \; \min_{c \neq c^*} \; \log p(c^*) - \log p(c)$
Bojchevski, Günnemann. Certifiable Robustness to Graph Perturbations. NeurIPS 2019
Zügner, Günnemann. Certifiable Robustness and Robust Training for Graph Convolutional Networks. KDD 2019
Core Idea: Robustness Certification
[Figure: set of predictions reachable via perturbations, relative to the decision boundary]
§ The worst-case margin over all admissible perturbations is intractable to compute exactly
§ A convex relaxation yields a tractable lower bound on the worst-case margin
§ Lower bound > 0 → every reachable perturbation keeps a positive margin → robustness certificate
§ Worst-case margin < 0 → some reachable perturbation crosses the decision boundary → not robust
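For intuition, the worst-case margin can be computed exactly for a plain linear classifier under a budget of q binary feature flips; for GNNs this inner minimization is intractable, and the papers above lower-bound it via convex relaxation instead. A sketch with all names illustrative:

```python
import numpy as np

def worst_case_margin_linear(x, W, c_star, q):
    """Exact worst-case margin of a linear classifier (logits = x @ W) over
    all perturbations flipping at most q binary features of x.
    A positive return value certifies robustness for this budget."""
    worst = np.inf
    for c in range(W.shape[1]):
        if c == c_star:
            continue
        w = W[:, c_star] - W[:, c]             # pairwise margin weights
        delta = (1.0 - 2.0 * x) * w            # margin change if feature i flips
        harmful = np.sort(delta[delta < 0.0])  # only flips that reduce the margin
        worst = min(worst, x @ w + harmful[:q].sum())
    return worst
```

The two minimizations commute, so minimizing per class pair is exact here; the certificates above perform the same "is the bound still positive?" check, only with a relaxation in place of the exact inner minimum.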
Robustness Certification: Citeseer
[Figure: percentage of nodes (y-axis) vs. number of allowed perturbations (x-axis), split into certifiably robust and certifiably non-robust]
For 10 allowed perturbations, < 25% of the nodes are certifiably robust and > 50% are certifiably non-robust.