Ethical Design and Decision in Autonomous and Intelligent Systems


  1. Ethical Design and Decision in Autonomous and Intelligent Systems. Raja Chatila, Institute of Intelligent Systems and Robotics (ISIR), University Pierre and Marie Curie, Paris. Raja.Chatila@isir.upmc.fr. Chair, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. 23/11/2017

  2. Booming Applications of Robotics and AI: Manufacturing, Transportation, Logistics, Agriculture, Mining, Construction, Health, Justice, Banking, Personal services, Leisure, Defense, Intervention, etc. • To replace humans • To assist and serve humans • To rehabilitate/augment humans. [Figure: the "Clinique du Risque" of UPMC-PSL, an evolving risk-assessment platform built on patient data and the literature, combining a risk "precision" circuit and a risk "reduction" circuit: database, specialized consultations and complementary exams, risk-calculation algorithms, best-practice guidelines, expert patients, e-coaching and connected devices; expected impacts are scientific (big data, algorithms, guidelines), clinical (improved care), pedagogical (risk semiology) and economic (cost reduction).]

  3. A Few Ethical, Legal, Social Issues Raised by A/IS • Impact on jobs • Personal data, privacy, intrusion, surveillance • Transparency, explainability of algorithmic decisions • Autonomous/learned machine decisions • Cognitive and affective bonds with robots • Human dignity, integrity and autonomy • Transformation and augmentation of humans • Anthropomorphism and human identity • Legal accountability and responsibility of robots • Status of robots in human society • Specific robot applications and usage (AWS, sexbots) • Fears about General/Super AI • …

  4. Ethical Concerns • (Un)ethical usage of robots, AI and autonomous systems • Ethics of machine decisions • Ethically aligned design: ethics in research and engineering

  5. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems • Launched April 2016 • Mission: to ensure every stakeholder involved in the design and development of A/IS is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity. • Brings together multiple and diverse voices from academia, industry and organizations in the A/IS and ELS communities and landscapes to identify and find consensus about ELS issues in the development and deployment of A/IS. • Version 1 of Ethically Aligned Design: December 2016. • Version 2, featuring five new sections: December 2017. Final version by 2019. • 11 standards proposals under discussion/development within IEEE-SA by ad-hoc working groups.

  6. Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, Version 1 • Released December 2016 as a Creative Commons doc / RFI for public input • Created by over 100 global AI/ethics experts, in a bottom-up, globally open and transparent process • Eight committees / sections • Contains over eighty key issues and candidate recommendations • Designed as the "go-to" resource to help technologists and policy makers prioritize ethical considerations in AI/AS

  7. Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, Version 2 • Launching December 2017 as a Creative Commons doc / RFI for a second round of public input • Created by over 250 global AI/ethics experts, in a bottom-up, transparent, open and increasingly globally inclusive process • Will incorporate over 200 pages of feedback from the public RFI and new working groups from China, Japan, Korea and Brazil • Thirteen committees / sections • Will contain over one hundred and twenty key issues and candidate recommendations • Designed as the "go-to" resource to help technologists and policy makers prioritize ethical considerations in AI/AS

  8. Current Committees • General Principles • Personal Data and Individual Access Control • Embedding Values Into Autonomous Intelligent Systems • Methodologies to Guide Ethical Research and Design • Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) • Reframing Autonomous Weapons Systems • Economics/Humanitarian Issues • Law • Affective Computing • Classical Ethics in Information & Communication Technologies • Policy • Mixed Reality • Wellbeing

  9. IEEE-SA Standards Projects for Ethically Aligned Design • IEEE P7000: Model Process for Addressing Ethical Concerns During System Design • IEEE P7001: Transparency of Autonomous Systems • IEEE P7002: Data Privacy Process • IEEE P7003: Algorithmic Bias Considerations • IEEE P7004: Standard on Child and Student Data Governance • IEEE P7005: Standard on Employer Data Governance • IEEE P7006: Standard on Personal Data AI Agent • IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation Systems • IEEE P7008: Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems • IEEE P7009: Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems • IEEE P7010: Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems

  10. Embedding Values into Autonomous Systems. Example issues: • Values to be embedded in A/IS are not all universal; some are specific to user communities and to tasks. • Moral overload: A/IS are usually subject to a multiplicity of norms and values that may conflict with each other. • A/IS can have built-in data or algorithmic biases that disadvantage members of certain groups.

  11. Autonomy and Decision-Making

  12. What is Autonomy? [Figure: plots of attainable autonomy against complexity of the environment and complexity of the task.] • Autonomy: the ability of an agent to determine and achieve its actions by its own means. Relative to environment and tasks. Related to adaptation capacity. • Operational autonomy vs. decisional autonomy • Attainable autonomy is relative to task and environment complexity

  13. Examples of Autonomy • Teleoperation: human control • Operational autonomy: Roomba • Operational autonomy, advanced automatic control: Da Vinci • Operational/some decisional autonomy: Crusher (CMU); navigation (LAAS)

  14. Automated Driving (Google)

  15. Automated Driving: Usual Situations and Moral Dilemmas

  16. UK Department of Transport • Retaking human control from a self-driving car: ~10''. At 36 km/h the car would have moved an additional 10 m on its own.
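
For scale: 36 km/h is 36,000 m / 3,600 s = 10 m/s, so each second that elapses before the human driver regains control adds roughly 10 m of travel on the vehicle's own authority.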

  17. Machine Decisions • Machine decisions are based on knowledge and action possibilities. • Knowledge about the environment is acquired by sensing and through contextual information. • Knowledge is prone to be partial and uncertain. • Situations are dynamic. • Actions might have unexpected outcomes.

  18. What does it mean for a machine to make ethical decisions? • Ethical decisions are related to human dignity and well-being. • Abstract concepts such as dignity cannot be explicitly described to, taught to, or understood by machines. • Machines are not autonomous as humans are, because they cannot decide their purpose and their own goals; therefore machines cannot determine ethical values. • Therefore machines cannot make ethical decisions, but they can perform actions with ethical consequences. • Machines can only select within a bounded set of categories or decisions provided to them directly or indirectly (e.g., by learning) by a human programmer. • Accountability remains with the human who programmed the machine.

  19. Automated Driving: Usual Situations and Moral Dilemmas

  20. Decision under Uncertainty • Situation assessment: identification of the current state and estimation of future states {S}: Pr(s). • Possible decisions: state-dependent actions {A}. • Uncertain state transitions Pr(s, a, s'). • Each action and resulting state characterized by a value H(a, s) reflecting the estimated incurred harm.
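
A minimal sketch of how these ingredients could be represented in code; this is an illustration, not the author's implementation, and the state names, probabilities and harm values are assumed for the example.

    from typing import Dict, Tuple

    State = str
    Action = str

    # Possible states {S} and state-dependent actions {A} (illustrative only)
    states = ["approaching", "stopped", "clear", "collision"]
    actions = ["brake", "swerve"]

    # Uncertain state transitions Pr(s, a, s') (assumed probabilities)
    pr: Dict[Tuple[State, Action, State], float] = {
        ("approaching", "brake", "stopped"): 0.9,
        ("approaching", "brake", "collision"): 0.1,
        ("approaching", "swerve", "clear"): 0.7,
        ("approaching", "swerve", "collision"): 0.3,
    }

    # Estimated incurred harm H(a, s') for each action and resulting state (assumed values)
    harm: Dict[Tuple[Action, State], float] = {
        ("brake", "stopped"): 0.0,
        ("brake", "collision"): 5.0,
        ("swerve", "clear"): 0.0,
        ("swerve", "collision"): 8.0,
    }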

  21. Ethical Approaches • Virtue ethics: promoting the personal virtues and the "good life" (Plato, Aristotle). Agents must be virtuous for their decisions to be good. • Deontic ethics: obey a moral imperative in all circumstances (I. Kant, 1797). • Consequentialism, utilitarianism: "the greatest good for the greatest number" (J. Bentham, 1789; J. S. Mill, 1861). • Casuistic approach. • Theory of justice: protect the most vulnerable (J. Rawls, 1971).

  22. Theory of Justice (John Rawls, 1921–2002) • Justice as fairness • The "veil of ignorance": to choose a decision-making system while ignoring how you will be affected by its decisions • Minimize harm for the most vulnerable

  23. Best decision in state s: Π*(s) = argmin_a Σ_{s'} Pr(s, a, s') H(a, s'). "Rawlsian" decision: cause the least harm to the most vulnerable. Each state is characterized by a vulnerability measure according to predefined categories and to the actual situation interpretation.
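
A sketch of the two decision rules, under the reading above: the first implements the expected-harm minimization Π*(s) = argmin_a Σ_{s'} Pr(s, a, s') H(a, s'); the second is one possible rendering of the "Rawlsian" rule, minimizing the worst vulnerability-weighted harm over reachable outcomes. The function names and the vulnerability weighting are assumptions, not part of the original slides.

    from typing import Dict, List, Tuple

    State = str
    Action = str
    Transitions = Dict[Tuple[State, Action, State], float]
    Harm = Dict[Tuple[Action, State], float]

    def best_action_expected_harm(s: State, actions: List[Action], states: List[State],
                                  pr: Transitions, harm: Harm) -> Action:
        # Pi*(s) = argmin_a sum_{s'} Pr(s, a, s') * H(a, s')
        def expected_harm(a: Action) -> float:
            return sum(pr.get((s, a, s2), 0.0) * harm.get((a, s2), 0.0) for s2 in states)
        return min(actions, key=expected_harm)

    def best_action_rawlsian(s: State, actions: List[Action], states: List[State],
                             pr: Transitions, harm: Harm,
                             vulnerability: Dict[State, float]) -> Action:
        # Choose the action whose worst vulnerability-weighted harm over
        # reachable outcomes (Pr > 0) is smallest.
        def worst_weighted_harm(a: Action) -> float:
            reachable = [s2 for s2 in states if pr.get((s, a, s2), 0.0) > 0.0]
            return max((vulnerability.get(s2, 1.0) * harm.get((a, s2), 0.0) for s2 in reachable),
                       default=0.0)
        return min(actions, key=worst_weighted_harm)

Applied to the Pr and H tables sketched after slide 20, both rules would select "brake" from the "approaching" state: its expected harm (0.5) and worst-case harm (5.0) are lower than those of "swerve" (2.4 and 8.0).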
