First-order Adversarial Vulnerability of Neural Networks and Input Dimension


  1. First-order Adversarial Vulnerability of Neural Networks and Input Dimension
  C.-J. Simon-Gabriel†, Y. Ollivier‡, L. Bottou‡, B. Schölkopf†, D. Lopez-Paz‡
  †Max-Planck-Institute for Intelligent Systems, ‡Facebook AI Research

  2. Relation to Literature
  [1, 2]: "Under specific data assumptions, vulnerability increases with input dimension."
  Here: "Under specific classifier assumptions, vulnerability increases with input dimension."
  • No-free-lunch-like result: if data can be anything, then there exist datasets that make the problem arbitrarily hard.
  • This cannot apply to image datasets, because humans are non-vulnerable classifiers for which higher dimension (higher resolution) helps.
  • Hence the question is not "what's wrong with our data?" but "what's wrong with our classifiers?"
  [1] Adversarial Spheres, Gilmer et al., ICLR Workshop 2018.
  [2] Are adversarial examples inevitable?, Shafahi et al., ICLR 2019.

  3. Main Theorem
  Theorem: At initialization, using He-initialization, and for a very wide class of neural nets, adversarial damage increases like √d (d: input dimension).
  Remarks:
  • Vulnerability is independent of the network topology (inside this wide class).
  • Includes any succession of FC, conv, ReLU, and subsampling layers at He-init.
  Question: does it hold after training? → Experiments. (A numerical sanity check of the at-initialization claim is sketched below.)
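
A minimal sketch of how the √d claim can be checked numerically (this is not the authors' code; the plain MLP, widths, and sample counts are illustrative assumptions). At He-initialization, the average ℓ1-norm of the loss gradient with respect to the input, i.e. the first-order damage per unit of ℓ∞ perturbation, should roughly double whenever d is quadrupled:

```python
# Sketch: at He-initialization, measure E_x ||dL/dx||_1 for a plain ReLU MLP
# at several input dimensions d; the value should grow roughly like sqrt(d).
import torch
import torch.nn as nn

def he_mlp(d, width=512, n_classes=10):
    layers = [nn.Linear(d, width), nn.ReLU(),
              nn.Linear(width, width), nn.ReLU(),
              nn.Linear(width, n_classes)]
    for m in layers:
        if isinstance(m, nn.Linear):
            nn.init.kaiming_normal_(m.weight, nonlinearity="relu")  # He-init
            nn.init.zeros_(m.bias)
    return nn.Sequential(*layers)

loss_fn = nn.CrossEntropyLoss()
for d in (256, 1024, 4096, 16384):
    net = he_mlp(d)
    x = torch.randn(64, d, requires_grad=True)   # stand-in random inputs
    y = torch.randint(0, 10, (64,))              # arbitrary labels
    grad, = torch.autograd.grad(loss_fn(net(x), y), x)
    # undo the 1/batch factor from the mean-reduced loss, then average
    # the per-sample l1 gradient norms
    print(d, (grad.abs().sum(dim=1) * 64).mean().item())
```

The intuition behind the scaling: with He-init, the first-layer weights have entries of size O(1/√d), so each of the d components of ∂L/∂x is O(1/√d) and the ℓ1-norm is O(√d).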

  4. Experimental Setting
  • Up-sample CIFAR-10, yielding 4 datasets with input sizes (3×)32×32, 64×64, 128×128, 256×256.
  • Train a conv net for each input size, using the same architecture for all networks (up to convolution dilation and subsampling layers).
  • Compare their adversarial vulnerability. (A data-preparation sketch follows.)
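
A minimal sketch, assuming a standard torchvision pipeline (not the authors' code; the nearest-neighbour interpolation choice and directory layout are assumptions), of how the up-sampled CIFAR-10 variants can be built so that the same images reach the network at four different input dimensions:

```python
# Sketch: build four CIFAR-10 variants at increasing resolution. Nearest-
# neighbour up-sampling adds pixels (dimension) without adding information.
import torchvision
import torchvision.transforms as T

def cifar10_at(size, train=True, root="./data"):
    tfm = T.Compose([
        T.Resize(size, interpolation=T.InterpolationMode.NEAREST),  # 32x32 -> size x size
        T.ToTensor(),
    ])
    return torchvision.datasets.CIFAR10(root=root, train=train,
                                        download=True, transform=tfm)

# input dimension d = 3 * size * size, i.e. image width = sqrt(d/3)
datasets = {size: cifar10_at(size) for size in (32, 64, 128, 256)}
```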

  5. Experimental Results (after training)
  [Plot: average gradient norm E_x ‖∂_x L‖_1 (y-axis) against image width √d = 32, 64, 128 (x-axis). Adversarial damage ∝ √d.]
  (A measurement sketch for the plotted quantity follows.)
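
A minimal sketch (the helper name and evaluation loop are assumptions, not the authors' code) of how the plotted quantity can be estimated for each trained network; to first order, E_x ‖∂_x L‖_1 is the damage an ℓ∞-bounded adversary can inflict per unit of perturbation size:

```python
# Sketch: estimate E_x ||dL/dx||_1 on held-out data for one trained network.
import torch

def avg_grad_l1(net, loader, loss_fn, device="cpu"):
    net.to(device).eval()
    total, count = 0.0, 0
    for x, y in loader:
        x = x.to(device).requires_grad_(True)
        loss = loss_fn(net(x), y.to(device))    # mean-reduced over the batch
        grad, = torch.autograd.grad(loss, x)
        # per-image l1 norm of the input gradient, undoing the 1/batch factor
        total += (grad.abs().flatten(1).sum(dim=1) * x.size(0)).sum().item()
        count += x.size(0)
    return total / count

# Computing avg_grad_l1 for the nets trained at widths 32/64/128 and plotting
# it against image width (proportional to sqrt(d)) should reproduce the
# growth shown in the plot above.
```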

  6. Conclusion
  We show:
  • Our networks are vulnerable by design: vulnerability increases like √d.
  • Proven theoretically at initialization.
  • Verified empirically after standard and robust training.
  • The theoretical result is independent of network topology.
  This suggests that:
  • Current networks are not yet data-specific enough.
  • Architectural tweaks may not be sufficient to solve adversarial vulnerability.

  7. Thank you for listening!
  Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz
  First-order Adversarial Vulnerability of Neural Networks and Input Dimension
  Poster: Pacific Ballroom #62
