

  1. Creating Moral Robots Kiah Breidenbach

  2. Important Terms
  ● Machine Ethics
  ● Superintelligence

  3. Asimov’s Three Laws of Robotics:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  Added later:
  0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
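The Laws above form a strict priority ordering (Zeroth over First over Second over Third). One way to make that precedence concrete is a sketch that compares candidate actions by which law they violate, using lexicographic tuple comparison; the `Action` class and its boolean attributes are hypothetical stand-ins for real perception and prediction systems, not part of the presentation.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_humanity: bool = False   # violates Law 0
    harms_human: bool = False      # violates Law 1
    disobeys_order: bool = False   # violates Law 2
    endangers_self: bool = False   # violates Law 3

def law_violations(a: Action):
    # Violations listed in precedence order (0, 1, 2, 3); comparing
    # these tuples lexicographically enforces Asimov's strict priority.
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_self)

def choose(actions):
    # Pick the action whose highest-priority violation is least severe.
    return min(actions, key=law_violations)

obey = Action("obey an order that would injure a human", harms_human=True)
refuse = Action("refuse the order", disobeys_order=True)
print(choose([obey, refuse]).description)  # "refuse the order": Law 1 outranks Law 2
```

The key point the sketch illustrates is that Law 2 only binds when no higher law is at stake: refusing an order (a Law 2 violation) is preferred to obeying one that would injure a human (a Law 1 violation).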

  4. What Is a Moral Robot?
  ● A robot with one or more competencies considered important for living in a moral community
  ● A robot with the capacity for moral judgment

  5. How?
  ● Ethical theories
  ● Legal principles
  ● Moral competence

  6. In Favor of Creating Moral Robots
  ● “The greater the freedom of a machine, the more it will need moral standards” - Roz Picard, MIT Affective Computing Lab
  ● There is already a need for moral robots
  ● They could be capable of teaching humans more about ethics and making moral decisions

  7. Need for Moral Robots

  8. Arshia Khan with Robot Pepper

  9. The U.S. military is already pursuing moral robots

  10. Opposing View
  Some experts warn that unsupervised advancements in AI and robots could lead to the end of the human race.
  ● Results are indeterminable - unintended effects
  ● Large-scale harm due to crude ethical assessments
  ● Mistakes in code could have severe consequences
  ● Robots would be allowed to make decisions without human supervision

  11. Case Study Suppose moral robots were created… A moral robot is tasked with monitoring a person in a nursing home to lessen the workload of human staff.

  12. Kantian Analysis
  ● Motive: Helping the patient, lessening their discomfort
  ● Universal Moral Rule: If you have the ability to help someone, you have the responsibility to do so.
  __________________________________
  From a Kantian standpoint, the robot was indeed acting ethically, and the implementation of moral robots was an ethically correct decision.

  13. Virtue Ethics Were human staff available to care for the patient, they would have made the same decision. We can conclude that the robot was acting virtuously and thus made the ethically correct decision.

  14. Act Utilitarian Analysis
  Without moral robots:
  ● No one was available to help the patient, who remained in pain longer than necessary: -2
  ● Human staff are overworked due to lack of staff: -1
  ● Lack of staff results in inadequate care of patients: -3
  ● Potential harms of moral robots are avoided: +3
  Total: -3

  With moral robots:
  ● The robot was able to administer additional pain medication, thus benefiting the patient: +2
  ● Human staff are not overworked: +1
  ● Moral robots in conjunction with human staff ensure all the patients receive adequate care: +3
  ● The robot could have an unforeseen error in its code, resulting in indeterministic results: -3
  Total: +3

  From an act utilitarian standpoint, implementation of moral robots is the ethically correct choice.
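The act-utilitarian tally on slide 14 is just a signed sum of consequences. A minimal sketch of that arithmetic, using the slide's own illustrative weights (the dictionary names are mine):

```python
# Consequence weights taken from the slide's act utilitarian analysis;
# the scores are the presentation's illustrative values, not derived data.
without_robot = {
    "patient left in pain longer than necessary": -2,
    "staff overworked due to lack of staff": -1,
    "inadequate care of patients": -3,
    "potential harms of moral robots avoided": +3,
}
with_robot = {
    "robot administers additional pain medication": +2,
    "staff no longer overworked": +1,
    "robots plus staff ensure adequate care": +3,
    "unforeseen coding error, indeterministic results": -3,
}

def total_utility(outcomes):
    # Act utilitarianism: sum the (dis)utility of every consequence.
    return sum(outcomes.values())

print(total_utility(without_robot))  # -3
print(total_utility(with_robot))     # +3
```

The comparison of the two totals (+3 versus -3) is what drives the slide's conclusion that implementing moral robots is the ethically correct choice under act utilitarianism.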

  15. In My Opinion: As long as potential threats can be sufficiently mitigated and the robots do not possess autonomy, a sense of self or personal emotions, I think moral robots could have a powerful positive impact on our world.

  16. Future Outlook We will have to make many important decisions regarding AI and robots. As technology continues to advance, I believe moral robots will be created and that humanity will benefit as a result. I do not think they will evolve to be the end of mankind, but that they will be a benevolent and helpful creation.

  17. Questions?
