Text and Automated Biases

TEXT AND AUTOMATED BIASES - PowerPoint PPT Presentation



  1. TEXT AND AUTOMATED BIASES

  2. NATURAL LANGUAGES ARE THE BASE OF HUMAN COMMUNICATION: we learn from books of all kinds about complex topics and keep ourselves updated

  3. USEFUL APPLICATIONS: structure large amounts of text (by topics or certain words), understand the meaning of text, voice recognition, text generation (summaries, Q&A systems)

  4. COMMON TASKS: AllenNLP demos, spaCy demos

  5. HOW DO WE MAKE COMPUTERS TRY TO UNDERSTAND LANGUAGE? The language of each person is different, language is ambiguous, language requires contextual information, and it is constantly evolving

  6. APPROACHES IN THE PAST: 1. rule-based systems, 2. probabilistic models and linear classifiers, 3. deep learning

  7. DEEP LEARNING

  8. HOW TO DEAL WITH SEQUENCES

  9. DEEP LEARNING FOR NLP: from symbolic representations to tensors/vectors and embeddings. How do we represent words in the input layer?

  10. ONE-HOT ENCODING: each word gets n dimensions (n = dictionary size), e.g. car: 1 0 0 ... 0, dog: 0 1 0 ... 0, cat: 0 0 1 ... 0, apple: 0 0 0 ... 1. Scales badly, captures no relationship between words, no context/semantic information
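The one-hot scheme from slide 10 can be sketched in a few lines of Python (the four-word vocabulary is just the slide's toy example):

```python
# Minimal one-hot encoding over a toy vocabulary.
vocab = ["car", "dog", "cat", "apple"]

def one_hot(word, vocab):
    """Return a vector with a 1 at the word's index and 0 everywhere else."""
    vec = [0] * len(vocab)
    vec[vocab.index(word)] = 1
    return vec

print(one_hot("dog", vocab))  # [0, 1, 0, 0]
```

Note the problem the slide points out: every vector is equally distant from every other one, so "dog" is no closer to "cat" than to "apple", and the vector length grows with the dictionary.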

  11. WORD EMBEDDINGS: far fewer dimensions than words in the dictionary, capture relationships between words, built by training language models
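The relationships slide 11 mentions are usually measured with cosine similarity. A minimal sketch, with toy 3-dimensional vectors invented here for illustration (real embeddings have hundreds of dimensions and are learned from data):

```python
import math

# Toy embeddings, invented for illustration only.
emb = {
    "dog":   [0.9, 0.1, 0.0],
    "cat":   [0.8, 0.2, 0.1],
    "apple": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for parallel vectors, ~0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Related words end up close together in the vector space:
print(cosine(emb["dog"], emb["cat"]))    # high (close to 1)
print(cosine(emb["dog"], emb["apple"]))  # low (close to 0)
```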

  12. TRAINING WORD VECTORS (WORD EMBEDDINGS)

  13. LANGUAGE MODELS: predicting the next character/word in a sequence
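The next-word prediction task from slide 13 can be illustrated with the simplest possible language model, a bigram counter (a sketch of the idea only; the deck's neural language models learn these statistics with embeddings instead of raw counts):

```python
from collections import Counter, defaultdict

# Count which word follows which in a toy corpus,
# then predict the most frequent successor.
corpus = "the dog barks the dog sleeps the cat sleeps".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # 'dog' ('dog' follows 'the' twice, 'cat' once)
```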

  14. The Unreasonable Effectiveness of Recurrent Neural Networks

  15. WORD ASSOCIATIONS

  16. Demo time

  17. SUMMARY: language models build word embeddings; word embeddings are word representations in dense spaces; they contain semantic information about a word; associations are reflected in the relationships between words

  18. SUMMARY: language models build word embeddings; word embeddings are word representations in dense spaces; they contain semantic information about a word; associations are reflected in the relationships between words; there are problematic associations

  19. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
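The analogy in the paper's title comes from vector arithmetic on embeddings: subtract one word's vector, add another's, and look up the nearest neighbour. A sketch with 2-dimensional vectors invented purely for illustration:

```python
import math

# Toy 2-d embeddings, invented for illustration only.
emb = {
    "man":        [1.0, 0.0],
    "woman":      [0.0, 1.0],
    "programmer": [1.0, 0.2],
    "doctor":     [0.9, 0.3],
    "homemaker":  [0.1, 1.1],
}

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, exclude):
    """Word whose embedding is most cosine-similar to `query`."""
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], query))

# programmer - man + woman -> nearest remaining word
query = [p - m + w for p, m, w in
         zip(emb["programmer"], emb["man"], emb["woman"])]
print(nearest(query, exclude={"man", "woman", "programmer"}))  # 'homemaker'
```

In these invented vectors the biased association is baked in on purpose; the paper's point is that embeddings trained on real text pick up the same pattern from the data.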

  20. Word Embedding Association Test: "Semantics derived automatically from language corpora necessarily contain human biases"
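The core of the Word Embedding Association Test is a per-word association score: how much more similar a target word is to one attribute set than to another. A minimal sketch of that score, using toy 2-d vectors invented for illustration (the real test also aggregates over target sets and computes an effect size and p-value):

```python
import math

# Toy embeddings, invented for illustration only.
emb = {
    "flower":     [1.0, 0.1],
    "insect":     [0.1, 1.0],
    "pleasant":   [0.9, 0.2],
    "unpleasant": [0.2, 0.9],
}

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def association(w, A, B):
    """Mean cosine similarity of w to set A minus to set B.
    Positive -> w associates more with A; negative -> more with B."""
    mean_a = sum(cos(emb[w], emb[a]) for a in A) / len(A)
    mean_b = sum(cos(emb[w], emb[b]) for b in B) / len(B)
    return mean_a - mean_b

print(association("flower", ["pleasant"], ["unpleasant"]))  # positive
print(association("insect", ["pleasant"], ["unpleasant"]))  # negative
```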

  21. Biases are consolidated: historical bias; "Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination"; underrepresented groups

  22. QUESTIONS TO ASK: Who built the model? From what dataset was it built? Where is the model used?

  23. REAL-WORLD APPLICATIONS AND THEIR PROBLEMS: GOOGLE TRANSLATE

  24. Google Translate Keeps Spitting Out Creepy Religious Prophecies

  25. CHATBOTS

  26. Can virtual humans be more engaging than real ones?

  27. MICROSOFT'S CHATBOT TAY

  28. WOEBOT

  29. HOW EXTREME BIAS CAN BECOME WHEN FED BAD DATA

  30. Norman A.I.

  31. Bias is identical to meaning, and it is impossible to employ language meaningfully without incorporating human bias.

  32. THANK YOU! Get in touch: transfluxus@posteo.de, twitter.com/ramin__
