Ava Thomas Wright
AV.wright@northeastern.edu
Postdoctoral Research Fellow in AI and Data Ethics, Northeastern University
JD, MS (Artificial Intelligence), PhD (Philosophy)

AI Ethics (I): Value-embeddedness in AI system design

I am here today to talk about "Value Sensitive Design" in AI systems. The goal of Value Sensitive Design is to make socially informed and thoughtful value-based choices in the technology design process:
• Appreciating that technology design is a value-laden practice
• Recognizing the value-relevant choice points in the design process
• Identifying and analyzing the values at issue in particular design choices
• Reflecting on those values and how they can or should inform technology design

I. Descriptive vs. Prescriptive (Normative) Claims

Group Activity: Moral Machine (http://moralmachine.mit.edu/)
Students will work through the scenarios presented in Moral Machine as a group and decide which option to choose. The instructor might ask students to discuss as a group which choice they should make and then decide by vote. In a larger class, students might break into small groups and work through the activity together. It is important that the group make a choice, rather than have everyone do it on their own, to highlight an important point in the lesson plan. It is also important to show what happens once all the cases are decided: MM outputs which factors the user takes to be morally relevant, and to what extent.
The distinction between descriptive and prescriptive ethical questions:
• Descriptive: How do people think AVs should behave in accident scenarios? (describes what people's preferences are)
• Prescriptive: How should AVs behave in accident scenarios? (prescribes what AVs should do, or what AV system designers should do)

Set-up: Review the findings on popular ethical preferences in the MM paper in Nature (see, for example, Figure 2).

Group Discussion: Some descriptive and prescriptive questions the MM experiment raises. Answering them serves to set up the rest of the lesson plan.

Descriptive:
• Does the MM platform accurately capture people's preferences about how AVs should behave in accident scenarios?
• Can the MM platform help its users clarify how they reason about how AVs should behave?

Prescriptive:
• Should designers use the Moral Machine platform to make decisions about how to program autonomous vehicles to behave in accident scenarios?
• How should designers determine how to program AVs to behave in accident scenarios?
• When (if ever) should designers use surveys of ethical preferences to decide how to program autonomous systems such as AVs?

Suggestions:
• 10 minutes: Have students break into small groups to try to answer these questions.
• 5 minutes: Have students write down their individual answers.
• 10 minutes: Have a general group discussion about people's answers to these questions.
The MM thus makes two implicit claims about AV system design:
• Descriptive claim: The MM platform does accurately capture people's ethical preferences about how an AV should behave in accident scenarios.
• Prescriptive claim: AVs should be programmed to act in accordance with the majority's preferences as collected by the MM platform.

Aims of Discussion: dependence relationships between the questions:
• If MM is a bad descriptive tool, then we shouldn't look to it to answer moral questions.
• Even if MM is a good descriptive tool, nothing immediately follows from that about the answer to prescriptive questions about what you ought to do (sometimes referred to loosely as the "is-ought" gap in moral theory). The majority's preferences might be unethical or unjust. Examples: Nazi Germany; the antebellum South. Or consider a society of cannibals guided by the consensus ethical rule, "Murder is morally permissible so long as one intends to eat one's victim."

II. Challenges for the Descriptive Claim

(Take a 5-minute break?)
Descriptive Claim: The MM platform is a good tool for accurately capturing people's ethical preferences about how an AV should behave in accident scenarios.

Issues in the collection of data:
If the MM platform is not a good tool for accurately capturing people's ethical preferences about how an AV should behave in accident scenarios, then it should not be used as a tool for answering prescriptive questions about how to program autonomous vehicles. Even if you think you should encode the majority's preferences, you first have to make sure to get them right!

1) Representativeness of sample
• Is the data from our class representative of any individual user, or even of the group? (A rough way to check is sketched below.)
• Users might not take it seriously: there are no instructions letting the user know that this data might be used for the programming of AVs.
• The people answering questions on the MM website may not be representative of everyone.
• There are few controls on data collection; for example, users cannot register indifference in MM.
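To make the representativeness worry concrete, here is a minimal sketch, assuming entirely invented numbers, of how one might compare a self-selected sample (such as MM's users or our class) against a census-style benchmark with a chi-square goodness-of-fit test. The age categories, counts, and benchmark shares are all hypothetical.

    # Hypothetical representativeness check; all numbers are invented.
    from scipy.stats import chisquare

    # Observed age-group counts in a (hypothetical) survey sample:
    # 18-29, 30-44, 45-64, 65+
    observed = [5200, 3100, 1200, 500]

    # Population shares from a (hypothetical) census benchmark.
    population_shares = [0.21, 0.25, 0.33, 0.21]
    expected = [share * sum(observed) for share in population_shares]

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-square = {stat:.1f}, p = {p_value:.3g}")
    if p_value < 0.05:
        print("Sample differs significantly from the benchmark;")
        print("raw majority tallies would at least need reweighting.")

Of course, passing such a test on a few demographic variables would not by itself make a sample representative; it only catches gross mismatches on the variables we thought to check.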
Potential response: With enough data, we can ignore the noise that results from the above.
Issue: But we need to know a lot more about how much noise is introduced. (A small simulation at the end of this slide separates noise, which more data cures, from bias, which it does not.)

2) Implicit value assumptions or blindspots in data collection practices
Some ethical features of accident scenarios in MM were selected for testing, but not others. Why? For example, MM does not gather people's preferences with regard to race, ethnicity, apparent LGBT status, etc. Many other features that might have influenced results could have been tested as well.
Potential response: Perhaps MM should disqualify discriminatory ethical preferences, if they exist.
Issue: But MM does test ethical preferences with regard to gender and age. Designing the experiment to capture some preferences that may be discriminatory, but not others, is a normative decision that requires an explanation and ethical justification.
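Returning to the "with enough data" response above: a small simulation with made-up parameters can separate the two worries. Random noise shrinks as the sample grows, but a systematic bias in who responds (or in how scenarios are framed) does not. TRUE_RATE and SAMPLE_BIAS below are invented for illustration.

    # Illustration: larger samples remove random noise, not systematic bias.
    import random

    random.seed(0)
    TRUE_RATE = 0.60     # hypothetical true population rate favoring one option
    SAMPLE_BIAS = 0.10   # hypothetical shift from a non-representative sample

    def survey(n):
        """Simulate n yes/no responses drawn from a biased sample."""
        p = TRUE_RATE + SAMPLE_BIAS
        return sum(random.random() < p for _ in range(n)) / n

    for n in (100, 10_000, 1_000_000):
        print(f"n = {n:>9,}: estimate = {survey(n):.3f} (true rate = {TRUE_RATE})")
    # The estimates converge as n grows, but to 0.70 rather than the true 0.60:
    # more data cures noise, not bias.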
III. Big-Picture General Takeaways

Data collection concerns:
• Data comes from somewhere, and the quality and care taken when collecting it will determine whether the resulting data is useful. Data that is poorly constructed can undermine programmers' ability to design systems ethically.
• Other disciplines might be needed to help understand or vet data. In the case of MM, a social scientist might be able to tell us what kinds of results are significant even with lots of noise. They might also tell us what sorts of controls are needed.
• The design of a system may have hidden value assumptions. A more diverse design team might help reveal blindspots or surface implicit ethical assumptions so that they can be examined.
• Tools or practices for collecting data may be implicitly biased or contain unexamined ethical value assumptions. Such problems do not apply only when the data collected concerns people's ethical preferences. For example, suppose a hospital with a history of intentionally discriminating against the hiring of female doctors naively uses its own historical data on the traits of successful hires to train a machine learning system to identify high-quality job applicants. The (perhaps unwitting) result would be a sexist algorithm. (A toy sketch of this dynamic follows this list.) We will discuss this more in the AI Ethics II module.
• Even if there is some version of MM that provides reliable information about users' ethical preferences, the implicit proposal that we should rely on such data to inform how we should develop AVs is a (controversial) prescriptive claim that requires defense. Arguably this is the main issue with the MM platform, and it is the topic of the next class.
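The hospital example can be made concrete with a toy model. The sketch below uses entirely fabricated data: past "hire" labels are generated by a process that penalizes female applicants, and a classifier trained on those labels duly scores an otherwise-identical female applicant lower.

    # Toy illustration: a model trained on biased historical labels learns the bias.
    # The data-generating process below is fabricated for illustration.
    import random
    from sklearn.linear_model import LogisticRegression

    random.seed(1)
    X, y = [], []
    for _ in range(5000):
        female = random.random() < 0.5
        skill = random.gauss(0, 1)                     # true qualification
        hired = skill + (-1.5 if female else 0.0) > 0  # discriminatory history
        X.append([skill, int(female)])
        y.append(int(hired))

    model = LogisticRegression().fit(X, y)

    # Two applicants identical in skill, differing only in recorded gender:
    for features, label in ([1.0, 0], "male"), ([1.0, 1], "female"):
        prob = model.predict_proba([features])[0][1]
        print(f"equally skilled {label} applicant: predicted hire prob = {prob:.2f}")
    # The model treats the historical discrimination as if it were signal.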
Review Questions
• What is the difference between a descriptive and a prescriptive claim? (the is-ought gap)
• What are the main descriptive and prescriptive claims made in the MM platform? What is the logical relationship between them?
• Describe some issues with how data on people's ethical preferences was collected in MM.
• Should designers program autonomous systems such as AVs to act in accordance with the ethical preferences of a majority of people as revealed by platforms like the MM? (Question for next time.)

Rightful Machines
A rightful machine is an explicitly moral autonomous system that respects principles of justice and the public law of a legitimate state. Efforts to build such systems must focus first on duties of right, or justice, which take normative priority over contestable duties of ethics in cases of conflict. (This insight resolves the "trolley problem" for purposes of rightful machines.)

Feasibility:
• An adequate deontic logic of the law 1) can describe conflicts but 2) normatively requires their resolution. Standard deontic logic (SDL) fails these requirements, but nonmonotonic reasoning (NMR) formalisms can meet them.
• Legal duties must be precisely specified.
• A rational agent architecture: 1) a rational agent (logic programming) constraining 2) a control system (machine learning) for 3) sensors and actuators.
• An implementation: answer-set (logic) programming, for example:

    % Murder: an intentional act that causes the death of a person.
    murder(A) :- intentional(A), act(A), causes_death(A, P), person(P).
    % Omitting A is obligatory (ob(-A)) if A is murder and A is not qualified.
    ob(-A) :- murder(A), not qual(r1(A)).
    % A is qualified (falls under exception r1) if its omission is not obligatory.
    qual(r1(A)) :- act(A), not ob(-A).
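To see concretely how such a nonmonotonic encoding behaves, here is a toy brute-force answer-set check in Python. This is not Wright's implementation: it grounds the three rules above for a single act a, flattens ob(-a) into the atom ob_not_a, and enumerates candidate models rather than using a real solver such as clingo.

    # Brute-force answer-set check for a tiny ground deontic program.
    from itertools import chain, combinations

    ATOMS = ["act", "intentional", "causes_death", "person",
             "murder", "ob_not_a", "qual_r1"]

    # Rules as (head, positive body, negative body), grounded for act 'a'.
    RULES = [
        ("act", [], []), ("intentional", [], []),
        ("causes_death", [], []), ("person", [], []),
        ("murder", ["intentional", "act", "causes_death", "person"], []),
        ("ob_not_a", ["murder"], ["qual_r1"]),  # ob(-a) :- murder(a), not qual(r1(a)).
        ("qual_r1", ["act"], ["ob_not_a"]),     # qual(r1(a)) :- act(a), not ob(-a).
    ]

    def is_answer_set(candidate):
        """Gelfond-Lifschitz check: the reduct's least model equals the candidate."""
        reduct = [(head, pos) for head, pos, neg in RULES
                  if not any(n in candidate for n in neg)]
        model, changed = set(), True
        while changed:
            changed = False
            for head, pos in reduct:
                if head not in model and all(p in model for p in pos):
                    model.add(head)
                    changed = True
        return model == candidate

    subsets = chain.from_iterable(
        combinations(ATOMS, r) for r in range(len(ATOMS) + 1))
    for candidate in map(set, subsets):
        if is_answer_set(candidate):
            print(sorted(candidate))

Running this prints exactly two stable models: one containing ob_not_a (the prohibition on the act stands) and one containing qual_r1 (the act falls under an exception). That is the sense in which such a logic can represent a conflict while still demanding its resolution, e.g., by preference rules that decide which answer set governs action.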