
RESPONSIBLE ARTIFICIAL INTELLIGENCE - Prof. Dr. Virginia Dignum

  1. RESPONSIBLE ARTIFICIAL INTELLIGENCE Prof. Dr. Virginia Dignum Chair of Social and Ethical AI - Department of Computer Science Email: virginia@cs.umu.se - Twitter: @vdignum

  2. WHAT IS AI?
     • Not just algorithms
     • Not just machine learning
     • But: AI applications are not alone; they are socio-technical AI systems
     [Diagram: autonomous AI systems embedded in a socio-technical environment]

  3. AI IS NOT INTELLIGENCE!
     • What AI systems cannot do (yet):
       o Common sense reasoning
       o Understand context
       o Understand meaning
       o Learning from few examples
       o Learning general concepts
       o Combine learning and reasoning
     • What AI systems can do (well):
       o Identify patterns in data (images, text, video)
       o Extrapolate those patterns to new data
       o Take actions based on those patterns

  4. AI IS NOT INTELLIGENCE!

  5. AI IS NOT INTELLIGENCE!

  6. WHAT IS RESPONSIBLE AI?
     • Responsible AI is: ethical, lawful, reliable, beneficial
     • Responsible AI recognises that: AI systems are artefacts; we set the purpose; we are responsible!

  7. RESPONSIBLE AI • AI can potentially do a lot. Should it? • Who should decide? • Which values should be considered? Whose values? • How do we deal with dilemmas? • How should values be prioritized? • …..

  8. PRINCIPLES AND GUIDELINES Responsible / Ethical / Trustworthy.... https://www.oecd.org/going-digital/ai/principles/ https://ethicsinaction.ieee.org https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence

  9. MANY INITIATIVES (AND COUNTING...) • Strategies / positions o IEEE o European Union o OECD o WEF o Council of Europe o Many national strategies o ... • Declarations o Asilomar o Montreal o ... https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.pdf lists 84!

  10. EU HLEG / OECD / IEEE EAD
     • EU HLEG requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; accountability
     • OECD principles: benefit people and the planet; respect the rule of law, human rights, democratic values and diversity; include appropriate safeguards (e.g. human intervention) to ensure a fair and just society; transparency and responsible disclosure; robust, secure and safe; hold organisations and individuals accountable for the proper functioning of AI
     • IEEE EAD questions: How can we ensure that A/IS do not infringe human rights? What is the effect of A/IS technologies on human well-being? How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable? How can we ensure that A/IS are transparent? How can we extend the benefits and minimize the risks of AI/AS technology being misused?

  11. BUT ENDORSEMENT IS NOT (YET) COMPLIANCE

  12. EU HLEG / OECD / IEEE EAD (the same comparison as slide 10), annotated with the corresponding instruments: regulation, observatory, standards

  13. The promise of AI: Better decisions

  14. HOW DO WE MAKE DECISIONS?

  15. HOW DO WE MAKE DECISIONS TOGETHER?

  16. DESIGN IMPACTS DECISIONS IMPACTS SOCIETY • Choices • Formulation • Involvement • Legitimacy • Voting system

  17. WHICH DECISIONS SHOULD AI MAKE?

  18. WHICH DECISIONS SHOULD AI MAKE?

  19. HOW SHOULD AI MAKE DECISIONS?

  20. TAKING RESPONSIBILITY • in Design o Ensuring that development processes take into account ethical and societal implications of AI and its role in socio-technical environments • by Design o Integration of ethical reasoning abilities as part of the behaviour of artificial autonomous systems • for Design(ers) o Research integrity of stakeholders (researchers, developers, manufacturers,...) and of institutions to ensure regulation and certification mechanisms

  21. IN DESIGN: ART
     • AI needs ART:
       o Accountability
       o Responsibility
       o Transparency
     [Diagram: autonomous, socio-technical AI systems and responsibility]

  22. ACCOUNTABILITY
     • Principles for Responsible AI = ART: Accountability (explanation and justification; design for values), Responsibility, Transparency
     • Optimal AI is explainable AI
     • Many options, not one ‘right’ choice
     • Explanation is for the user: context matters

  23. CHALLENGE: NO AI WITHOUT EXPLANATION
     • Explanation is for the user:
       o Different needs, different expertise and interests
       o Just in time, clear, concise, understandable, correct
     • Explanation is about individual decisions and the ‘big picture’:
       o Enable understanding of overall strengths & weaknesses
       o Convey an understanding of how the system will behave in the future
       o Convey how to correct the system’s mistakes
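To make "explanation is for the user" concrete, here is a minimal sketch (not from the presentation) of how a system could justify a single decision by listing how much each input pushed the score up or down. The feature names, weights and threshold are invented for illustration.

    # Minimal sketch: per-decision explanation for a simple linear scoring model.
    # Feature names, weights and the threshold are hypothetical examples.
    FEATURE_WEIGHTS = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
    THRESHOLD = 0.5  # scores above this mean "approve"

    def explain_decision(applicant: dict) -> str:
        # Contribution of each feature to the overall score.
        contributions = {name: w * applicant[name] for name, w in FEATURE_WEIGHTS.items()}
        score = sum(contributions.values())
        decision = "approved" if score > THRESHOLD else "rejected"
        # Rank features by how strongly they pushed the decision either way.
        ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
        lines = [f"Decision: {decision} (score {score:.2f}, threshold {THRESHOLD})"]
        for name, value in ranked:
            direction = "in favour" if value > 0 else "against"
            lines.append(f"  {name}: {value:+.2f} ({direction})")
        return "\n".join(lines)

    print(explain_decision({"income": 1.2, "debt": 0.9, "years_employed": 0.5}))

A real system would go further, as the slide stresses: adapt the wording and level of detail to the user's expertise, and also explain the 'big picture' of when the system can and cannot be trusted.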

  24. RESPONSIBILITY
     • Principles for Responsible AI = ART:
       o Accountability: explanation and justification; design for values
       o Responsibility: autonomy; chain of responsible actors; human-like AI
       o Transparency

  25. RESPONSIBILITY CHALLENGES
     • Chain of responsibility: researchers, developers, manufacturers, users, owners, governments, …
       o Liability and conflict-settling mechanisms
     • Human-like systems: robots, chatbots, voice, …
       o Expectations
       o Vulnerable users
       o Mistaken identity (https://ieeexplore.ieee.org/document/7451743)
     • Responsibility for choices:
       o 95% accurate but no explanation, or 80% accurate with explanation?
       o Fairness or sustainability?

  26. TRANSPARENCY
     • Principles for Responsible AI = ART:
       o Accountability: explanation and justification; design for values
       o Responsibility: autonomy; chain of responsible actors; human-like AI
       o Transparency: data and processes; algorithms; choices and decisions

  27. CHALLENGE: BIAS AND DISCRIMINATION Remember: AI systems extrapolate patterns from data to take action • Bias is inherent in human data o we need bias to make sense of the world • Bias leads to stereotyping and prejudice • Bias is more than biased data
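Since bias is more than biased data, checks have to look at the system's decisions as well as its training set. The sketch below is a hypothetical illustration, not part of the presentation: it computes a simple demographic-parity gap over model outputs, with invented group labels and decisions.

    # Minimal sketch: compare positive-decision rates across groups in the
    # system's outputs. Group labels and decisions are invented examples.
    def positive_rate(decisions: list) -> float:
        return sum(decisions) / len(decisions)

    def parity_gap(decisions_by_group: dict) -> float:
        # Gap between the highest and lowest positive-decision rate.
        rates = {group: positive_rate(d) for group, d in decisions_by_group.items()}
        print("positive rate per group:", rates)
        return max(rates.values()) - min(rates.values())

    outputs = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% positive decisions
        "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% positive decisions
    }
    print(f"demographic-parity gap: {parity_gap(outputs):.2f}")

A large gap does not prove discrimination on its own, but it flags decisions that need human scrutiny, which inspecting the training data alone would not reveal.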

  28. BY DESIGN: ARTIFICIAL AGENTS • Can AI systems be ethical? o What does that mean? o What is needed? • Design for values

  29. ETHICAL BEHAVIOR
     • Should we teach ethics to AI?
     • Understanding ethics
       o Which values? Whose values?
       o Who gets a say?
     • Using ethics
       o What is the proper action given a value?
       o Are ethical theories of use?
       o How to prioritise values?
       o Is knowing ethics enough?
     • Ethical reasoning
       o Many different theories (utilitarian, Kantian, virtue ethics, …)
       o Highly abstract
       o Do not provide ways to resolve conflicts

  30. DESIGN FOR VALUES
     [Diagram: values (e.g. fairness) are given an interpretation as norms (e.g. equal resources, equal opportunity), which are concretized into functionalities]
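One way to work with this hierarchy is to record it explicitly, so every functionality can be traced back to the norm and value it serves. The sketch below is a hypothetical illustration of the values -> norms -> functionalities chain; the concrete entries are invented, not taken from the presentation.

    # Minimal sketch: trace a value through its interpreting norms down to
    # concrete functionalities. All concrete entries are illustrative placeholders.
    from dataclasses import dataclass, field

    @dataclass
    class Norm:
        name: str                # interpretation of a value, e.g. "equal opportunity"
        functionalities: list    # concretizations: specific system features

    @dataclass
    class Value:
        name: str
        norms: list = field(default_factory=list)

    fairness = Value(
        name="fairness",
        norms=[
            Norm("equal resources", ["show the same information to every applicant"]),
            Norm("equal opportunity", ["audit decision rates per group", "provide an appeal mechanism"]),
        ],
    )

    # Walk the hierarchy so each functionality is justified by a norm and a value.
    for norm in fairness.norms:
        for functionality in norm.functionalities:
            print(f"{fairness.name} -> {norm.name} -> {functionality}")

Keeping these links explicit is what allows a later design decision to be challenged in terms of the value it was meant to serve.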

  31. GLASS BOX APPROACH • Doing the right thing o Elicit, define, agree, describe, report • Doing it right o Explicit values, principles, interpretations, decisions o Evaluate input/output against principles
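The "evaluate input/output against principles" step can be pictured as wrapping an opaque model in observable checks. The following is a minimal sketch under that reading; the model, the norms and the applicant fields are invented stand-ins, not the presentation's own implementation.

    # Minimal sketch of a glass-box style wrapper: the model itself may be opaque,
    # but every input and output is checked against explicitly stated norms.
    def black_box_model(applicant: dict) -> int:
        # Placeholder for an opaque learned model.
        return 1 if applicant.get("score", 0) > 0.5 else 0

    # Norms written down as observable checks on inputs and outputs.
    INPUT_NORMS = {
        "no_protected_attributes": lambda x: "ethnicity" not in x and "religion" not in x,
    }
    OUTPUT_NORMS = {
        "decision_is_binary": lambda y: y in (0, 1),
    }

    def glass_box(applicant: dict):
        report = {}
        for name, check in INPUT_NORMS.items():
            report[f"input:{name}"] = check(applicant)
        decision = black_box_model(applicant)
        for name, check in OUTPUT_NORMS.items():
            report[f"output:{name}"] = check(decision)
        return decision, report  # the report is kept for accountability

    decision, report = glass_box({"score": 0.7, "ethnicity": "recorded"})
    print(decision, report)  # the violated input norm shows up as False in the report

The point is not the particular checks but that the values, their interpretations and the resulting decisions are stated explicitly enough to be evaluated from the outside.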

  32. FOR DESIGN(ERS): PEOPLE • Regulation • Certification • Standards • Conduct • AI principles are principles for us

  33. FOR DESIGN: TRUSTWORTHY AI • Regulation and certification • Codes of conduct • Human-centered • AI as driver for innovation

  34. • Design impacts decisions impacts society impacts design • AI systems are tools, artefacts made by people: We set the purpose • AI can give answers, but we ask the questions • AI needs ART (Accountability, Responsibility, Transparency)

  35. RESPONSIBLE ARTIFICIAL INTELLIGENCE: WE ARE RESPONSIBLE Email: virginia@cs.umu.se Twitter: @vdignum
