Building Fair and Robust Representations for Vision and Language


  1. Building Fair and Robust Representations for Vision and Language. Vicente Ordóñez-Román, Assistant Professor, Department of Computer Science. EMNLP 2019 Tutorial on Bias and Fairness in Natural Language Processing, Hong Kong.

  2. Outline • Issues identified in biased representations • Metrics and findings • Solutions that have been proposed

  3. Annotated Data + Machine Learning / Deep Learning → f(x) → Words, Text, Linguistic Structure

  4. Case Study 1: Most Basic Form of Grounding: Image to Words. f(x): kitchen / no-kitchen. Protected variable: gender.

  5. Case Study 1: Most Basic Form of Grounding: Image to Words. f(x): kitchen / no-kitchen. Protected variable: gender. For any pair of gender types: P(kitchen = 1 | gender = m) = P(kitchen = 1 | gender = f) and P(kitchen = 0 | gender = m) = P(kitchen = 0 | gender = f).
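This parity condition is straightforward to check empirically. A minimal sketch, assuming hypothetical prediction and gender arrays (all names and numbers below are made up for illustration):

```python
import numpy as np

# Hypothetical data: f(x) outputs (1 = "kitchen") and the protected
# attribute recorded for each test image.
pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
gender = np.array(["m", "f", "f", "m", "m", "f", "f", "m"])

# Demographic parity asks P(kitchen=1 | gender=m) == P(kitchen=1 | gender=f);
# with binary predictions, the kitchen=0 condition then holds automatically.
p_m = pred[gender == "m"].mean()
p_f = pred[gender == "f"].mean()
print(f"P(kitchen=1 | m) = {p_m:.2f}, P(kitchen=1 | f) = {p_f:.2f}")
print(f"parity gap = {abs(p_m - p_f):.2f}")  # 0 means the condition is met
```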

  6. Approach 1: Feature Invariant Learning. Learning Fair Representations. Zemel, Wu, Swersky, Pitassi, and Dwork. ICML 2013.

  7. Approach 1: Feature Invariant Learning. X: images; Y: labels. [Diagram: example images, each labeled kitchen or no-kitchen.] Learning Fair Representations. Zemel, Wu, Swersky, Pitassi, and Dwork. ICML 2013.

  8. Approach 1: Feature Invariant Learning. X: images; Y: labels; a direct classifier predicts y = f(x). [Diagram: example images, each labeled kitchen or no-kitchen.] Learning Fair Representations. Zemel, Wu, Swersky, Pitassi, and Dwork. ICML 2013.

  9. Approach 1: Feature Invariant Learning. Instead, introduce intermediate representations Z between X (images) and Y (labels). [Diagram: example images with kitchen / no-kitchen labels.] Learning Fair Representations. Zemel, Wu, Swersky, Pitassi, and Dwork. ICML 2013.

  10. Approach 1: Feature Invariant Learning. Instead, predict the label from the representation, y = f(z), and reconstruct the input as a weighted sum of prototypes, x̂ = Σ_i z_i v_i. [Diagram: X (images) → Z (representations) → Y (labels).] Learning Fair Representations. Zemel, Wu, Swersky, Pitassi, and Dwork. ICML 2013.
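The reconstruction x̂ = Σ_i z_i v_i is just a weighted sum of learned prototype vectors. A tiny sketch with made-up dimensions (z as soft assignments over k prototypes, each v_i living in image-feature space):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 512, 10                   # assumed feature dimension and prototype count
v = rng.standard_normal((k, d))  # learned prototypes v_i
z = rng.dirichlet(np.ones(k))    # soft assignments z_i, nonnegative, sum to 1

x_hat = z @ v                    # x_hat = sum_i z_i * v_i
print(x_hat.shape)               # (512,)
```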

  11. Approach 1: Feature Invariant Learning. Split the images by the protected variable into X+ and X−. As before, y = f(z) and x̂ = Σ_i z_i v_i. [Diagram: X+ and X− images with kitchen / no-kitchen labels.] Learning Fair Representations. Zemel, Wu, Swersky, Pitassi, and Dwork. ICML 2013.

  12. Approach 1: Feature Invariant Learning. Split the images by the protected variable into X+ and X−, with y = f(z) and x̂ = Σ_i z_i v_i, and additionally require P(z_i | x+) = P(z_i | x−). [Diagram: X+ and X− images with kitchen / no-kitchen labels.] Learning Fair Representations. Zemel, Wu, Swersky, Pitassi, and Dwork. ICML 2013.

  13. Approach 1: Feature Invariant Learning. L = Σ_k CrossEntropy(y^(k), ŷ^(k)) + β Σ_k ||x^(k) − x̂^(k)|| + α Σ_i | (1/|X+|) Σ_{k∈X+} z_i^(k) − (1/|X−|) Σ_{k∈X−} z_i^(k) |. Classifications should be good; reconstructions should be good; intermediate representations should be indistinguishable across values of the protected variable. Learning Fair Representations. Zemel, Wu, Swersky, Pitassi, and Dwork. ICML 2013.
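Put together, the objective can be sketched in a few lines of PyTorch. This is a simplification (mean rather than sum reductions, and mean-matching across groups for the fairness term), not the exact formulation of Zemel et al.; all tensor shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def lfr_loss(y_logits, y, x_hat, x, z_pos, z_neg, alpha=1.0, beta=1.0):
    # Classifications should be good.
    l_y = F.cross_entropy(y_logits, y)
    # Reconstructions x_hat = sum_i z_i v_i should be good.
    l_x = torch.norm(x_hat - x, dim=1).mean()
    # Representations should be indistinguishable across the protected
    # variable: sum_i |mean of z_i over X+  -  mean of z_i over X-|.
    l_z = (z_pos.mean(dim=0) - z_neg.mean(dim=0)).abs().sum()
    return l_y + beta * l_x + alpha * l_z

# Usage with made-up shapes: 8 examples, 512-d inputs, 2 classes, 10-d z.
loss = lfr_loss(torch.randn(8, 2), torch.randint(0, 2, (8,)),
                torch.randn(8, 512), torch.randn(8, 512),
                torch.randn(4, 10), torch.randn(4, 10))
```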

  14. Approach II: Adversarial Feature Learning. [Diagram: X (images) → y = f(x) → Y (kitchen / no-kitchen labels).] Controllable Invariance through Adversarial Feature Learning. Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, Graham Neubig. NeurIPS 2017.

  15. Approach II: Adversarial Feature Learning. [Diagram: X (images) → z → y = f(x) → Y (labels).] Controllable Invariance through Adversarial Feature Learning. Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, Graham Neubig. NeurIPS 2017.

  16. Approach II: Adversarial Feature Learning. The representation z feeds two heads: the kitchen / no-kitchen objective and a gender-prediction adversarial objective. Controllable Invariance through Adversarial Feature Learning. Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, Graham Neubig. NeurIPS 2017.
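One common way to implement this kind of two-headed objective is a gradient-reversal layer (Xie et al. themselves use alternating minimax updates; the reversal trick is a similar-in-spirit shortcut, not the paper's method). A minimal PyTorch sketch with hypothetical module names and sizes:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in backward,
    so the encoder is trained to hurt the adversary."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# Hypothetical components: the encoder produces z; one head carries the
# kitchen / no-kitchen objective, the other tries to predict gender from z.
encoder = nn.Sequential(nn.Linear(512, 128), nn.ReLU())
task_head = nn.Linear(128, 2)   # kitchen / no-kitchen
adversary = nn.Linear(128, 2)   # gender prediction (adversarial)
loss_fn = nn.CrossEntropyLoss()

def step(x, y_task, y_gender, lam=1.0):
    z = encoder(x)
    task_loss = loss_fn(task_head(z), y_task)
    adv_loss = loss_fn(adversary(GradReverse.apply(z, lam)), y_gender)
    # One backward pass trains the adversary to predict gender while the
    # reversed gradient pushes the encoder to remove gender information.
    return task_loss + adv_loss
```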

  17. Approach II: Adversarial Feature Learning. The same recipe applies elsewhere: a person-identification objective paired with an illumination-type adversarial objective on z. Controllable Invariance through Adversarial Feature Learning. Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, Graham Neubig. NeurIPS 2017.

  18. Approach II: Adversarial Feature Learning. Controllable Invariance through Adversarial Feature Learning. Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, Graham Neubig. NeurIPS 2017.

  19. Case Study: Visual Semantic Role Labeling (vSRL). Commonly Uncommon: Semantic Sparsity in Situation Recognition. Mark Yatskar, Vicente Ordonez, Luke Zettlemoyer, Ali Farhadi. CVPR 2017.

  20. Compositionality: how does a model learn what "carrying" looks like? There are lots of images of people carrying backpacks, and lots of images of tables in other images, but not many images of people carrying tables.

  21. Deep Neural Network + Compositional Conditional Random Field (CRF). Commonly Uncommon: Semantic Sparsity in Situation Recognition. Mark Yatskar, Vicente Ordonez, Luke Zettlemoyer, Ali Farhadi. CVPR 2017.

  22. Situation Recognition: Compositional Shared Learning of Underlying Concepts. http://imsitu.org/demo/ Commonly Uncommon: Semantic Sparsity in Situation Recognition. Mark Yatskar, Vicente Ordonez, Luke Zettlemoyer, Ali Farhadi. CVPR 2017.

  23. However, we kept running into this… http://imsitu.org/demo/ Commonly Uncommon: Semantic Sparsity in Situation Recognition. Mark Yatskar, Vicente Ordonez, Luke Zettlemoyer, Ali Farhadi. CVPR 2017.


  25. Key Finding: Models Amplify Biases in the Dataset. Dataset? Model? Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-level Constraints. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang. EMNLP 2017.

  26. Key Finding: Models Amplify Biases in the Dataset. Dataset? Model? [Images of people cooking.] Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-level Constraints. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang. EMNLP 2017.

  27. Key Finding: Models Amplify Biases in the Dataset. Dataset? Model? Training set: men cooking 33%, women cooking 66%. Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-level Constraints. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang. EMNLP 2017.

  28. Key Finding: Models Amplify Biases in the Dataset. Dataset? Model? Training set: men cooking 33%, women cooking 66%. What happens on test images? Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-level Constraints. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang. EMNLP 2017.

  29. Key Finding: Models Amplify Biases in the Dataset. Dataset? Model? Training set: men cooking 33%, women cooking 66%. Model predictions on test images: men cooking 16%, women cooking 84%. Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-level Constraints. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang. EMNLP 2017.
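The amplification itself can be read off these ratios. A sketch using the numbers on the slide (percentages treated as counts; the paper's corpus-level metric averages this kind of gap over many activity-gender pairs, so this is only the single-activity view):

```python
def bias_score(counts):
    """Fraction of 'cooking' images whose agent is a woman."""
    return counts["woman"] / (counts["woman"] + counts["man"])

# Numbers from the slide: training set vs. model predictions for "cooking".
train = {"man": 33, "woman": 66}
pred = {"man": 16, "woman": 84}

amplification = bias_score(pred) - bias_score(train)
print(f"training bias {bias_score(train):.2f} -> "
      f"predicted bias {bias_score(pred):.2f}; "
      f"amplification {amplification:+.2f}")   # roughly +0.17
```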
