Sensitivity to risk profiles of users when developing AI systems - PowerPoint PPT Presentation


  1. Sensitivity to risk profiles of users when developing AI systems
Robin Cohen, Rishav Agarwal, Dhruv Kumar, Alexander Parmentier, Tsz Him Leung
Cheriton School of Computer Science

  2. Position Paper
• Message about Trusted AI
• Engender trust from users and organizations
• Includes explainability, transparency, fairness, safety and ethics
• Differing solutions for different users
• Not one size fits all
• User risk tolerance a factor

  3. Motivation (Image: xkcd.com)

  4. Trusted AI
• Think beyond a homogeneous user base
• Risk profiles of users important
• One factor: risk-averse users may require more explanation
• Diverse solutions not just for trust but for fairness
(Image: boston.com)

  5. Background
• Others advocate personalization (user preferences matter)
• Explainable AI (Anjomshoae et al.): context-awareness important
• Trust in robots (Rossi et al.): differing user tolerances and emotions
• Elements of trustworthiness (Mayer et al.): risk-taking perceptions

  6. Background
• Inter-related concerns of fairness, explainability and trust (Cohen et al.)
• AI solutions to these problems matter
• Dedicated effort examining planning and explainability (Kambhampati and coauthors)
• Trading cost of computation vs. serving the user
• e.g. less accurate but more explainable

  7. Trust and Risk Profiles: Our Models
• Two models, building on the Kambhampati approach (Sengupta et al., AAMAS 2019 Trust workshop)
• Game-theoretic: reasoning about costs and actions
• Explainability assumes a cost
• Allowing risky plans
• Observe vs. execute
• Build up trust
• Agent's reasoning is in terms of user risk profiles
• Allow risk profiles to be updated

  8. Game-Theoretic Models

  9. Explainability, Cost and Risk Profiles
• Agent has a model of the Human's assessment of the Agent
• Use risk profile as a proxy for mental models
• Risk profile is perceived
• Consider costs of planning, explaining and not achieving goals

  10. Explainability, Cost and Risk Profiles
• Agent costs, given the perceived risk profile ρ_P:
• Cost of making plan p is C_p^A(ρ_P)
• Cost of explaining is C_E^A
• Cost of explaining up to a partial plan p̂ is C_p̂^A
• Cost of not achieving goal G is C_Ĝ
• We can assume that the safest plan doesn't have a cost of failure

  11. Explainability, Cost and Risk Profiles
• Human costs:
• Cost of observing the plan until some partial plan p̂ has been executed is C_p̂^H(ρ_P)
• Cost of observing at the end is C_E^H
• Cost of not achieving the goal is C_Ĝ
• Risk of executing a plan is R^H(ρ_P)
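A minimal sketch of how the cost terms from slides 10 and 11 might be organized in code. The class and field names (AgentCosts, make_plan, execution_risk, etc.) are assumptions for illustration, not the paper's notation.

```python
from dataclasses import dataclass

@dataclass
class AgentCosts:
    """Agent-side costs, parameterized by the perceived risk profile rho_P (slide 10)."""
    make_plan: float        # C_p^A(rho_P): cost of making the plan
    explain_full: float     # C_E^A: cost of explaining the plan
    explain_partial: float  # C_p_hat^A: cost of explaining up to a partial plan p_hat
    goal_failure: float     # C_G_hat: cost of not achieving goal G
                            # (the safest plan is assumed to have no cost of failure)

@dataclass
class HumanCosts:
    """Human-side costs and risk (slide 11)."""
    observe_partial: float  # C_p_hat^H(rho_P): cost of observing until a partial plan is executed
    observe_end: float      # C_E^H: cost of observing at the end
    goal_failure: float     # C_G_hat: cost of not achieving the goal
    execution_risk: float   # R^H(rho_P): risk of executing a plan
```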

  12. Explainability, Cost and Risk Profiles
(Decision matrix: Human options are Observe, Not Execute vs. Observe and Execute; plan types are Any plan vs. Safe plan)

  13. Explainability, Cost and Risk Profiles
(Decision matrix, Risk Averse case)
• Risk averse: cost of achieving the goal must at least be greater than the cost of explaining the rest of the task

  14. Explainability, Cost and Risk Profiles
(Decision matrix, Risk Taking case)
• Risk taking: risk must be less than the cost of not achieving the goal
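One possible reading of the decision matrix on slides 13 and 14, as a sketch. The function name and return labels are hypothetical, and "cost of achieving the goal" is interpreted here as the cost at stake if the goal is not achieved (C_Ĝ); the paper may formalize this differently.

```python
def human_choice(risk_averse: bool,
                 cost_goal_failure: float,     # C_G_hat: cost of not achieving the goal
                 cost_explain_rest: float,     # cost of explaining the rest of the task
                 execution_risk: float) -> str:  # R^H(rho_P)
    """Hypothetical reading of slides 13-14.

    Risk-averse human: only worth observing and executing if the goal at
    stake outweighs the cost of having the rest of the task explained.
    Risk-taking human: executes as long as the risk of the plan is below
    the cost of not achieving the goal.
    """
    if risk_averse:
        return ("observe_and_execute"
                if cost_goal_failure > cost_explain_rest
                else "observe_not_execute")
    return ("observe_and_execute"
            if execution_risk < cost_goal_failure
            else "observe_not_execute")
```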

  15. Explainability, Cost and Risk Profiles
(Diagram: Explain, Risk profile, Plan and Execute, Observe)

  16. Trust Boundaries and Risk Profiles
• Human's lack of trust suggests a safe plan (Sengupta et al. 2019)
• Trust boundary ensures the Agent does not execute a risky plan
• Yet a riskier, lower-cost plan might be preferred by the user
• If trust has built up enough to take that risk

  17. Trust Boundaries and Risk Profiles
(Same points as slide 16, with the trust boundary marked on the diagram)

  18. Trust Boundaries and Risk Profiles
• Human will reason: the cost of executing the plan is considered
• Goes beyond the Agent simply modeling User trust for its decisions
• Progressive updates of user profiles and trust should be possible
(Diagram: trust boundary)
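A minimal sketch of the progressive trust update and trust boundary mentioned on slides 16-18. It assumes a single numeric trust score in [0, 1] and a tolerated-risk scaling; the update rule and parameter names are illustrative, not taken from the paper.

```python
def update_trust(trust: float, executed_ok: bool, rate: float = 0.1) -> float:
    """Successful executions nudge trust up; failures nudge it down.
    Trust is kept in [0, 1]."""
    trust += rate if executed_ok else -rate
    return max(0.0, min(1.0, trust))

def crosses_trust_boundary(plan_risk: float, trust: float,
                           max_tolerated_risk: float = 1.0) -> bool:
    """A riskier, lower-cost plan stays behind the trust boundary until
    accumulated trust raises the tolerated risk past the plan's risk."""
    return plan_risk > trust * max_tolerated_risk
```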

  19. Fairness

  20. Fairness and Differing User Tolerances
• User preferences for fairness and explainability also an issue
• For example: in hiring, users may be very risk averse to unfairness
• People's positions on algorithmic fairness will be an influence
(Image: offthemark.com)

  21. Fairness and Differing User Tolerances
• The most accurate solution may not be the most fair one
• Concerns with bias to be taken into consideration
• Key importance of which definition of fairness is at hand for the user
• Differing preferences need to be considered

  22. Fairness and Differing User Tolerances
• Current models designed to be more accurate than fair
• Some users have less risk aversion to unfairness
• e.g. more concerned with explainability
• Again drives towards knowing user preferences
• Risk tolerance can continue to be a determiner
• Metrics: disparate impact (independent attributes), individual fairness (equal opportunity), equalized odds (favors majority)
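For concreteness, a short sketch of one of the metrics named on slide 22: the disparate-impact ratio in its textbook form (rate of positive decisions for the unprivileged group divided by the rate for the privileged group). The function names and the example data are invented for illustration and are not the paper's definitions.

```python
def selection_rate(decisions, groups, g):
    """Fraction of positive decisions for members of group g."""
    picks = [d for d, grp in zip(decisions, groups) if grp == g]
    return sum(picks) / len(picks)

def disparate_impact_ratio(decisions, groups, unprivileged, privileged):
    """Textbook disparate-impact ratio:
    P(decision = 1 | unprivileged) / P(decision = 1 | privileged)."""
    return (selection_rate(decisions, groups, unprivileged) /
            selection_rate(decisions, groups, privileged))

# Toy example: 1 = hired; group 'b' is hired far less often than group 'a'.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
print(disparate_impact_ratio(decisions, groups, 'b', 'a'))  # 0.25 / 0.75 = 0.33
```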

  23. Outstanding Concerns
• Acquiring user profiles
• Important to consider elicitation across contexts
• Engendering trust varies according to user tolerances
• Expand the concept of risk aversion
• Consider a collection of user profile preferences

  24. Conclusion
• Continue to imagine personalized trusted AI solutions
• Leverage the important concern of risk profiles
• Tradeoffs in accuracy, explainability, fairness and other desiderata
• Some suggested models for reasoning and decision making by agents

  25. References
• Anjomshoae, S., Framling, K., Najjar, A.: Explanations of black-box model predictions by contextual importance and utility. In: International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems. pp. 95-109. Springer (2019)
• Cohen, R., Schaekermann, M., Liu, S., Cormier, M.: Trusted AI and the contribution of trust modeling in multiagent systems. In: Proceedings of AAMAS. pp. 1644-1648 (2019)
• Kambhampati, S.: Synthesizing explainable behavior for human-AI collaboration. In: Proceedings of AAMAS. pp. 1-2. Richland, SC (2019)
• Mayer, R.C., Davis, J.H., Schoorman, F.D.: An integrative model of organizational trust. Academy of Management Review 20(3), 709-734 (1995)
• Rossi, A., Holthaus, P., Dautenhahn, K., Koay, K.L., Walters, M.L.: Getting to know Pepper: Effects of people's awareness of a robot's capabilities on their trust in the robot. In: Proceedings of the 6th International Conference on Human-Agent Interaction. pp. 246-252. ACM (2018)
• Sengupta, S., Zahedi, Z., Kambhampati, S.: To monitor or to trust: Observing robot's behavior based on a game-theoretic model of trust. In: Proc. Trust Workshop at AAMAS 2019 (2019)

  26. Questions?
• Robin Cohen: rcohen@uwaterloo.ca
• Rishav Raj Agarwal: http://rishavrajagarwal.com (rragarwal@uwaterloo.ca)
• Dhruv Kumar: d35kumar@uwaterloo.ca
• Alexander Parmentier: aparmentier@uwaterloo.ca
• Tsz Him Leung: th4leung@uwaterloo.ca

  27. Agent Decision Procedure
• Key factor (cost of risk less than cost of not achieving the goal): R^H(ρ_P) < C_Ĝ
• Focus on explainability at the expense of accuracy (optimality) with a risk-averse human
• Allow the Human more agency (mixed-initiative dialogue)
• Agent could reason at each step of the plan
• Is the cost of achieving the goal more than the cost of explanation?
• User risk profile model can be updated
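A sketch of the per-step check described on slide 27, under the key condition R^H(ρ_P) < C_Ĝ. The function names, return labels and fallback to a safer plan are assumptions used to make the slide's reasoning concrete, not the paper's stated procedure.

```python
def agent_should_execute(execution_risk: float,      # R^H(rho_P)
                         cost_goal_failure: float    # C_G_hat
                         ) -> bool:
    """Key factor: execute only while the human's risk of executing the
    plan stays below the cost of not achieving the goal."""
    return execution_risk < cost_goal_failure

def step_action(execution_risk: float,
                cost_goal_failure: float,
                cost_explain_step: float) -> str:
    """Per-step reasoning sketch: if executing is too risky for this user,
    explain the step instead, provided the explanation costs less than
    losing the goal; otherwise fall back to a safer plan."""
    if agent_should_execute(execution_risk, cost_goal_failure):
        return "execute_step"
    if cost_explain_step < cost_goal_failure:
        return "explain_step"
    return "switch_to_safer_plan"
```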
