Governing the AI Revolution
Allan Dafoe
Yale University and Future of Humanity Institute, University of Oxford
governance.ai
The AI Governance Problem: the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.
Common Misunderstanding 1
Attention to technological risks implies one believes...
...the technology is net negative or risks are probable. It does not.
It implies only that there are risks which attention could mitigate.
Near-term Governance Challenges
- Safety in critical systems, such as finance, energy systems, transportation, robotics, autonomous vehicles.
- (Consequential) algorithms that encode values, such as in hiring, loans, policing, justice, social networks. Desiderata: fairness (Hardt), accountability, transparency, efficiency, privacy, ethics.
- AI impacts on employment, equality, privacy, democracy...
Some Extreme Challenges from Near-Term AI
- Military Advantage: LAWS, cyber, intel, info operations.
- Strategic (Nuclear) Stability: autonomous escalation; counterforce vulnerability from AI intel, cyber, drones; autonomous nuclear retaliation (esp. w/ hypersonics).
- Surveillance and Control: mass surveillance (sensors, digitally-mediated behavior), intimate profiling, tailored persuasion, repression (LAWS).
- Mass labor displacement and inequality, if AI substitutes for, rather than complements, labor.
- AI Oligopolies: strategic industry and trade, if AI industries are natural global monopolies, due to low/zero marginal costs of AI services, incumbent advantage, high fixed costs from AI R&D (sketch below).
- Accident/Emergent/Other Risks, from AI-dependent critical systems and transformative capabilities.
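To make the natural-monopoly logic behind the AI Oligopolies point concrete: when fixed R&D costs dominate and the marginal cost of serving an additional user is near zero, average cost falls continuously with scale, so the largest provider can always undercut smaller entrants. A minimal illustrative sketch, not from the talk, with hypothetical cost figures:

```python
# Illustrative only: why low marginal cost + high fixed cost favors monopoly.
# All numbers are hypothetical.

def average_cost(fixed_cost: float, marginal_cost: float, users: int) -> float:
    """Average cost per user = (fixed R&D cost + marginal cost * users) / users."""
    return (fixed_cost + marginal_cost * users) / users

FIXED = 1_000_000_000  # up-front AI R&D cost (hypothetical)
MARGINAL = 0.01        # near-zero cost to serve one more user (hypothetical)

for users in (10_000, 1_000_000, 100_000_000):
    print(f"{users:>11,} users -> ${average_cost(FIXED, MARGINAL, users):,.2f} per user")

# Output:
#      10,000 users -> $100,000.01 per user
#   1,000,000 users -> $1,000.01 per user
# 100,000,000 users -> $10.01 per user
```

Average cost keeps falling toward marginal cost as the user base grows, so the biggest incumbent can price below any smaller rival's break-even point.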
Corner-Cutting
"The coordination problem is one thing [we should focus on now]. We want to avoid this harmful race to the finish where corner-cutting starts happening and safety gets cut.... That's going to be a big issue on a global scale, and that's going to be a hard problem when you're talking about national governments."
- Demis Hassabis, January 2017
Massive Media Reaction
National Strategies
[Image: cover of "China's Technology Transfer Strategy: How Chinese Investments in Emerging Technology Enable A Strategic Competitor to Access the Crown Jewels of U.S. Innovation," Michael Brown and Pavneet Singh, February 2017, marked "Pre-Decisional Draft 1.0, For Discussion Purposes Only"]
Epistemic Calibration
"Prediction is very difficult, especially about the future."
- attributed to Niels Bohr, and others...
Failure Mode 1: Overconfidence that some specific possibility, X, will happen.
Failure Mode 2: Overconfidence that X will not happen.
Failure Mode 3: Given uncertainty, dismissing the value of studying X.
Lesson: Accept uncertainty and hold distributional beliefs. Uncertainty does not imply futility.
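Not from the talk, but one standard way to make calibration operational: a proper scoring rule such as the Brier score penalizes Failure Modes 1 and 2 symmetrically and rewards honest distributional beliefs. A minimal sketch with hypothetical forecasts:

```python
# Illustrative sketch: Brier score as a measure of calibration.
# Lower is better; always guessing 50% scores 0.25.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probabilistic forecasts and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical record of whether events X happened (1) or not (0).
outcomes = [1, 0, 0, 1]

overconfident_yes = [0.99, 0.95, 0.99, 0.99]  # Failure Mode 1: sure X happens
overconfident_no  = [0.01, 0.05, 0.01, 0.01]  # Failure Mode 2: sure X won't
calibrated        = [0.80, 0.20, 0.30, 0.70]  # hedged, distributional beliefs

for name, f in [("mode 1", overconfident_yes),
                ("mode 2", overconfident_no),
                ("calibrated", calibrated)]:
    print(f"{name:>10}: {brier_score(f, outcomes):.3f}")

# Output:
#     mode 1: 0.471
#     mode 2: 0.491
# calibrated: 0.065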