Adversarial Domain Adaptation and Adversarial Robustness
Judy Hoffman
Big data + deep learning = success
Benchmark Performance: on the challenge to recognize 1000 categories from millions of images, deep models pushed accuracy steadily upward from roughly 70% in 2010 toward the mid-90s by 2017.
Dataset Bias: given a new test image, the deep model fails; the dog is not recognized.
Dataset Bias: low resolution, motion blur, pose variety.
Why not collect new annotations?
Expensive: $10-12 per image for dense labels (sky, car, vegetation, road, street sign, sidewalk, building, person).
Large potential for change: different weather, city, car.
Data may be proprietary or private.
Domain Adaptation: train on source, test on target.
Source domain ~ P_S(X_S, Y_S): lots of labeled data.
Target domain ~ P_T(X_T, Y_T): unlabeled or limited labels.
Goal: adapt the source-trained model to the target.
Adversarial Domain Adaptation
Source data x_s -> source CNN -> source feature vector -> classifier -> y_s ("bottle").
Target data x_t -> target CNN -> target feature vector.
A domain classifier is trained to distinguish source from target features, while an adversarial domain loss pushes the feature extractors to minimize the discrepancy between the two feature distributions.
Ganin & Lempitsky, ICML 2015. Tzeng*, Hoffman*, Saenko, Darrell, ICCV 2015. Tzeng, Hoffman, Saenko, Darrell, CVPR 2017.
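The adversarial game between the domain classifier and the feature extractor can be sketched numerically. Below is a minimal numpy sketch, not the models from the cited papers: the data sizes, domain shift, and linear maps are all made up, a linear map stands in for the CNN feature extractor, and a logistic regression stands in for the domain classifier. The classifier descends the domain loss; the feature extractor then takes a gradient-reversal (ascent) step on the same loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (all sizes, shifts, and linear maps are illustrative).
Xs = rng.normal(0.0, 1.0, size=(64, 10))   # source inputs
Xt = rng.normal(1.5, 1.0, size=(64, 10))   # target inputs (shifted domain)
F = rng.normal(scale=0.1, size=(10, 4))    # shared linear feature extractor
w = rng.normal(scale=0.1, size=4)          # logistic domain classifier

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_loss(F, w):
    """Cross-entropy of the domain classifier (source = 0, target = 1)."""
    ps = sigmoid(Xs @ F @ w)               # P(target | source feature)
    pt = sigmoid(Xt @ F @ w)               # P(target | target feature)
    return -np.mean(np.log(1.0 - ps)) - np.mean(np.log(pt))

# Step 1: the domain classifier descends the loss (learns to tell domains apart).
for _ in range(100):
    ps, pt = sigmoid(Xs @ F @ w), sigmoid(Xt @ F @ w)
    gw = (Xs @ F).T @ ps / 64 + (Xt @ F).T @ (pt - 1.0) / 64
    w -= 0.1 * gw
trained_loss = domain_loss(F, w)   # below chance (2 ln 2): domains separable

# Step 2: gradient reversal. The feature extractor ASCENDS the same loss,
# making source and target features harder to tell apart.
ps, pt = sigmoid(Xs @ F @ w), sigmoid(Xt @ F @ w)
gF = Xs.T @ np.outer(ps, w) / 64 + Xt.T @ np.outer(pt - 1.0, w) / 64
F = F + 0.01 * gF
reversed_loss = domain_loss(F, w)  # the classifier's loss rises again
```

In the full method these two updates alternate until the domain classifier sits near chance, meaning the features carry little domain information, while the label classifier preserves source accuracy.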
The same adversarial alignment can instead be applied directly in image space, translating source data toward the target domain while minimizing the discrepancy: Liu 2016. Taigman 2016. Bousmalis 2017. Liu 2017. Kim 2017. Sankaranarayanan 2018. Hoffman 2018.
CyCADA: Cycle-Consistent Adversarial Domain Adaptation. Source data is mapped source-to-target with a domain-adversarial loss (translated images should look like target data) and a semantic-consistency loss (content should be preserved); a target-to-source mapping then reconstructs the source data, enforcing cycle consistency. Hoffman et al., ICML 2018.
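To make the cycle-consistency term concrete, here is a toy numpy sketch in which invertible linear maps stand in for CyCADA's generators (the batch and matrices are made up): when the target-to-source map inverts the source-to-target map, the L1 reconstruction penalty vanishes, and any mismatch is penalized.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 8))        # a batch of flattened source "images"

# Illustrative linear stand-ins for CyCADA's two generators.
G_st = rng.normal(size=(8, 8))     # source-to-target mapping
G_ts = np.linalg.inv(G_st)         # target-to-source mapping (exact inverse)

def cycle_loss(x, G_st, G_ts):
    """L1 reconstruction penalty ||G_ts(G_st(x)) - x||_1, averaged per element."""
    return np.mean(np.abs(x @ G_st @ G_ts - x))

perfect = cycle_loss(x, G_st, G_ts)       # ~0: the cycle reconstructs the source
broken = cycle_loss(x, G_st, G_ts + 0.3)  # a mismatched reverse map is penalized
```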
Synthetic-to-Real Pixel Adaptation: train on GTA (synthetic), test on CityScapes (real driving imagery from Germany). Hoffman et al., ICML 2018.
Synthetic-to-Real Pixel Adaptation: image-to-image translation. Zhu*, Park*, Isola, Efros, ICCV 2017.
CyCADA Results: CityScapes Evaluation. Predicted segmentation maps (road, sidewalk, person, sky, vegetation, car, street sign, building) on CityScapes images, before and after adaptation, compared with ground truth. Hoffman et al., ICML 2018.
So Far: Adapting to Natural Shifts.
What about adversarial shifts?
Adversarial Examples
x_adv = x + ε · sign(∇_x J(θ, x, y)), with ε = 0.007.
A "panda" (57.7% confidence) plus an imperceptible perturbation ε · sign(∇_x J(θ, x, y)) (itself classified as "nematode", 8.2% confidence) is classified as "gibbon" with 99.3% confidence. Goodfellow et al., ICLR 2015.
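The fast gradient sign method above fits in a few lines. This sketch uses a toy logistic model with made-up fixed weights rather than the ImageNet classifier from the figure; for this model the input gradient of the cross-entropy has a closed form, whereas a deep network would obtain it by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic "classifier" with made-up fixed weights (a stand-in for a
# trained network).
w = rng.normal(size=16)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y):
    """Binary cross-entropy J(theta, x, y) for this toy model."""
    p = sigmoid(x @ w)
    return -(y * np.log(p + 1e-12) + (1.0 - y) * np.log(1.0 - p + 1e-12))

def fgsm(x, y, eps):
    """x_adv = x + eps * sign(grad_x J(theta, x, y))."""
    p = sigmoid(x @ w)
    grad_x = (p - y) * w          # closed-form input gradient of the loss
    return x + eps * np.sign(grad_x)

x = rng.normal(size=16)           # a clean input
y = 1.0                           # its true label
x_adv = fgsm(x, y, eps=0.1)       # bounded perturbation, higher loss
```

The perturbation is bounded by ε in the max norm, yet it moves every pixel in the locally worst direction, which is why the loss strictly increases.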
Visualize Perturbation Space: take a training point (a 28×28 image), vectorize it (784 dimensions), project onto a random 2D orthonormal basis, then sweep over a grid of perturbations in that plane, recording the model score for each perturbed image.
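The recipe above can be sketched as follows. The scoring model here is a random linear stand-in for the trained LeNet, and the grid range is arbitrary; only the vectorize / project / sweep structure follows the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training point": a 28x28 image, vectorized to 784 dimensions.
x = rng.random((28, 28)).reshape(-1)

# Random 2D orthonormal basis in input space via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(784, 2)))
u, v = Q[:, 0], Q[:, 1]           # orthonormal directions

# Stand-in scoring model (the talk uses a trained LeNet).
W = rng.normal(scale=0.01, size=(784, 10))
def model_score(x):
    return (x @ W).argmax()       # predicted class index

# Sweep a grid of perturbations in the 2D plane and record the decision
# at each grid point; plotting `decisions` gives the decision-boundary map.
alphas = np.linspace(-5.0, 5.0, 21)
decisions = np.array([[model_score(x + a * u + b * v) for b in alphas]
                      for a in alphas])
```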
MNIST LeNet Decisions Around a Training Point: the decision boundary is non-smooth, and small perturbations of the training data point lead to new outputs.
MNIST LeNet with L2 Regularization: a smoother decision boundary, but small perturbations still lead to new outputs.
Jacobian Regularization
Classifier: input x_s -> score vector z_s -> prediction y_s ("bottle").
Input-output Jacobian matrix: J_{c,i} = ∂z_c / ∂x_i.
Regularizer: minimize the squared Frobenius norm ||J||_F^2.
Hoffman, Roberts, Yaida, in submission, 2019.
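A sketch of the regularizer for a toy two-layer network. The parameters are illustrative, and the Jacobian is computed exactly by the chain rule; the paper estimates ||J||_F^2 with random projections for efficiency, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer classifier z(x) = W2 tanh(W1 x); parameters are illustrative.
W1 = rng.normal(scale=0.5, size=(6, 4))
W2 = rng.normal(scale=0.5, size=(3, 6))

def scores(x):
    return W2 @ np.tanh(W1 @ x)            # score vector z

def jacobian(x):
    """Input-output Jacobian J[c, i] = dz_c / dx_i, by the chain rule."""
    h = np.tanh(W1 @ x)
    return W2 @ np.diag(1.0 - h**2) @ W1   # shape (classes, inputs)

def jacobian_frobenius_sq(x):
    """The regularizer ||J||_F^2 added to the supervised training loss."""
    return np.sum(jacobian(x) ** 2)

x = rng.normal(size=4)
```

Penalizing ||J||_F^2 at training points limits how fast the score vector can change under small input perturbations, which is what smooths the decision boundary around the data.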
MNIST LeNet with Jacobian Regularization: a mostly smooth decision boundary; larger perturbations are needed to change the output.
Decision Boundary Comparison: no regularization vs. L2 regularization vs. Jacobian regularization. Hoffman, Roberts, Yaida, in submission, 2019.
Robustness to Random Perturbations (MNIST LeNet model). Hoffman, Roberts, Yaida, in submission, 2019.
Robustness to Adversarial Perturbations. Hoffman, Roberts, Yaida, in submission, 2019.
Next Steps: Can the Jacobian regularizer serve as an unsupervised adaptive loss for domain adaptation? Can we adapt to an adversarial domain, bridging domain adaptation and adversarial robustness?
Thank you
Taesung Park (UC Berkeley), Jun-Yan Zhu (MIT), Dan Roberts (Diffeo), Eric Tzeng (UC Berkeley), Phil Isola (MIT), Kate Saenko (Boston University), Trevor Darrell (UC Berkeley), Alyosha Efros (UC Berkeley), Sho Yaida (FAIR).
Judy Hoffman judyhoffman.io