Artificial Intelligence: What Could Possibly Go Wrong?
Jim Dempsey
Executive Director, Berkeley Center for Law & Technology
jdempsey@berkeley.edu
What Is Artificial Intelligence?
No single definition: "a set of technologies that enable computers to perceive, learn, reason and assist in decision-making to solve problems in ways that are similar to what people do" (Microsoft, The Future Computed, 2018)
Many different flavors:
• Expert systems (rules-based)
• Machine learning
• Deep learning – neural networks
AI and the concerns it raises are related to:
• Robotics
• Big data
• Algorithmic decision-making
Three key ingredients of much of what is currently called AI (combined in the sketch below):
• Algorithms (recipes for processing data)
• Processing power
• Data
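To make the three ingredients concrete, here is a minimal sketch, assuming nothing from the deck itself: a toy algorithm (logistic regression), toy data (a small labeled sample), and processing power (the training loop).

```python
# Minimal sketch: the three ingredients of much current AI in one place.
# The model, data, and hyperparameters here are illustrative inventions.
import numpy as np

rng = np.random.default_rng(0)

# Data: 200 two-dimensional points, labeled by a noisy linear rule.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Algorithm: logistic regression, fit by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):                      # processing power: repeated passes
    p = 1 / (1 + np.exp(-(X @ w + b)))    # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)   # gradient step on the log loss
    b -= 0.5 * (p - y).mean()

print("learned weights:", w, "bias:", b)
```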
General AI - A Long Way Off
Narrow AI - Already Here
Narrow AI Better Than Humans in Specific Tasks
AI/ML – The Role of Training Data
There Will Be Hype
The Hype Goes in Both Directions
• "[I]ncreasingly useful applications of AI, with potentially profound positive impacts on our society and economy, are likely to emerge between now and 2030." AI 100 Study
• "The development of full artificial intelligence could spell the end of the human race." Stephen Hawking
It’s Not Magic
• Algorithms are not value-free
• AI involves human decisions and tradeoffs
"[I]t is a serious misunderstanding to view [AI] tools as objective or neutral simply because they are based on data." Partnership on AI, Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System (April 26, 2019)
What Could Possibly Go Wrong?
Why Do Things Go Wrong?
Limitations in the Training Data
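One way this limitation bites is distribution shift: a model fit on one slice of the world degrades on conditions absent from its training data. A minimal sketch with invented numbers:

```python
# Minimal sketch (invented numbers): a threshold rule tuned on "daytime"
# data fails on "nighttime" data it never saw during training.
import numpy as np

rng = np.random.default_rng(4)

# Training data: bright scenes only; the learned rule is a brightness cutoff.
day_feature = rng.normal(loc=0.8, scale=0.1, size=500)
day_label = (day_feature > 0.75).astype(int)
threshold = 0.75                                   # tuned on daytime data alone

# Deployment: the same objects under dark conditions the model never saw.
night_feature = rng.normal(loc=0.3, scale=0.1, size=500)
night_label = np.ones(500, dtype=int)

train_acc = ((day_feature > threshold).astype(int) == day_label).mean()
deploy_acc = ((night_feature > threshold).astype(int) == night_label).mean()
print(f"accuracy on data like the training set: {train_acc:.2f}")   # ~1.00
print(f"accuracy under unseen conditions:       {deploy_acc:.2f}")  # ~0.00
```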
The view from nowhere
Deep Proxies
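A classic instance: even when the protected attribute is dropped from the data, a correlated feature such as zip code can stand in for it. A minimal sketch with an invented correlation:

```python
# Minimal sketch (invented data): dropping the protected attribute does not
# remove its influence when a proxy feature encodes it almost perfectly.
import numpy as np

rng = np.random.default_rng(5)
protected = rng.integers(0, 2, size=2_000)          # attribute a model may not use
# "Zip code" agrees with the protected attribute 90% of the time.
zip_code = np.where(rng.random(2_000) < 0.9, protected, 1 - protected)

recovered = (zip_code == protected).mean()
print(f"protected attribute recoverable from the proxy: {recovered:.0%}")  # ~90%
```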
Failure to Understand the Error or Confidence Level
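A minimal sketch (invented numbers) of the most common version of this failure: on imbalanced data, a headline accuracy figure hides the error rate that actually matters.

```python
# Minimal sketch: with a 1%-rare positive class, a model that never raises
# an alert is 99% "accurate" and yet catches nothing.
import numpy as np

rng = np.random.default_rng(1)
y_true = (rng.random(10_000) < 0.01).astype(int)   # ~1% true positives
y_pred = np.zeros_like(y_true)                     # model that always says "no"

accuracy = (y_pred == y_true).mean()
recall = y_pred[y_true == 1].mean()                # share of real positives caught

print(f"accuracy: {accuracy:.3f}")                 # ~0.990 -- looks impressive
print(f"recall:   {recall:.3f}")                   # 0.000 -- misses every case
```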
Flaws in framing the question
Virtualization (Hiding the Role of AI)
Human to Machine Handoffs
The Human to Machine Handoff
Adversary-Induced Failures
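One well-documented adversary-induced failure is the evasion attack, in the style of the fast gradient sign method. The sketch below uses an invented logistic-regression scorer, not any system discussed in the talk.

```python
# Minimal sketch of an evasion attack: nudge the input in the direction that
# most increases the model's error. Weights and inputs are invented.
import numpy as np

w = np.array([2.0, -1.0])                    # hypothetical trained weights
b = 0.0

def score(x):
    return 1 / (1 + np.exp(-(x @ w + b)))    # P(class = 1)

x = np.array([1.0, 0.5])                     # correctly classified as class 1
print("clean score:", round(score(x), 3))    # ~0.818

# For logistic loss on true label 1, the attack direction is -sign(w); in
# high-dimensional inputs such as images the per-feature change can be tiny.
eps = 0.6
x_adv = x - eps * np.sign(w)
print("adversarial score:", round(score(x_adv), 3))  # ~0.426 -- prediction flips
```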
What Can Be Done to Minimize Risk?
Some easy fixes
Sometimes not so easy
The black box problem
"… even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs, …[t]he computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior."
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
Trade secrecy and the black box
In Key Contexts, Black Box AI Is Unacceptable
"However, the effectiveness of these [AI] systems will be limited by the machine's inability to explain its thoughts and actions to human users. Explainable AI will be essential, if users are to understand, trust, and effectively manage this emerging generation of artificially intelligent partners." David Gunning, DARPA
https://www.cc.gatech.edu/~alanwags/DLAI2016/(Gunning)%20IJCAI-16%20DLAI%20WS.pdf
Transparent AI
IBM researchers have proposed a Supplier's Declaration of Conformity (SDoC), basically a factsheet (sketched as a data structure below):
• What dataset was used to train the AI?
• What underlying algorithms were used?
• Was bias mitigation performed on the dataset?
• Was the service tested on any additional datasets?
https://www.ibm.com/blogs/research/2018/08/factsheets-ai/
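A minimal sketch of how the factsheet questions above could travel with a model as a structured record; the schema and example values are my invention, not IBM's actual SDoC format.

```python
# Minimal sketch: the slide's factsheet questions as a structured record.
# Schema, field names, and example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class FactSheet:
    model_name: str
    training_datasets: list[str]            # what data trained the AI?
    algorithms: list[str]                   # what underlying algorithms?
    bias_mitigation_applied: bool           # was debiasing performed?
    additional_test_datasets: list[str] = field(default_factory=list)

sheet = FactSheet(
    model_name="loan-risk-scorer-v2",       # hypothetical model
    training_datasets=["internal-loans-2015-2019"],
    algorithms=["gradient-boosted trees"],
    bias_mitigation_applied=True,
    additional_test_datasets=["held-out 2020 applications"],
)
print(sheet)
```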
Auditable AI
See also Sandvig et al., Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms
http://www-personal.umich.edu/~csandvig/research/Auditing%20Algorithms%20--%20Sandvig%20--%20ICA%202014%20Data%20and%20Discrimination%20Preconference.pdf
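The core move in an audit of the kind Sandvig et al. describe is paired testing: submit matched inputs that differ only in a protected attribute and compare the outputs. A minimal sketch against an invented black-box scorer:

```python
# Minimal sketch of a paired-testing audit. The "black box" is an invented
# stand-in for a deployed model; a real auditor would query the live system.
import numpy as np

def black_box_score(income, group):
    """Opaque scorer; 'group' should be irrelevant but secretly is not."""
    return 1 / (1 + np.exp(-(0.00005 * income - 3.5 - 0.4 * group)))

rng = np.random.default_rng(2)
incomes = rng.uniform(20_000, 120_000, size=1_000)

scores_a = black_box_score(incomes, group=0)   # identical profiles, group A
scores_b = black_box_score(incomes, group=1)   # identical profiles, group B

gap = (scores_a - scores_b).mean()
print(f"mean score gap between matched pairs: {gap:.3f}")  # nonzero => disparity
```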
Explainable AI
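For simple model classes, explanation is tractable: in a linear scorer, each feature's contribution to a decision is just its weight times its value, and that breakdown can be reported to the affected person. A minimal sketch with invented weights:

```python
# Minimal sketch: per-feature contributions of a linear scoring model,
# reported in plain terms. Features and weights are invented.
features = {"income": 45_000, "late_payments": 3, "years_employed": 2}
weights = {"income": 0.00002, "late_payments": -0.8, "years_employed": 0.3}

contributions = {name: weights[name] * value for name, value in features.items()}
for name, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name:15s} contributed {c:+.2f} to the score")
```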
Debiasing
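One published debiasing technique, not necessarily the one the deck has in mind, is "reweighing" (Kamiran and Calders 2012): weight each (group, label) cell so the protected attribute and the outcome look statistically independent. A minimal sketch:

```python
# Minimal sketch of reweighing: weights make group and label independent,
# so weighted favorable-outcome rates match across groups. Data is invented.
import numpy as np

rng = np.random.default_rng(3)
group = rng.integers(0, 2, size=1_000)               # protected attribute
# Labels correlated with group: group 1 gets the favorable label less often.
label = (rng.random(1_000) < np.where(group == 1, 0.3, 0.6)).astype(int)

weights = np.empty(len(label))
for g in (0, 1):
    for y in (0, 1):
        cell = (group == g) & (label == y)
        expected = (group == g).mean() * (label == y).mean()  # if independent
        weights[cell] = expected / cell.mean()

for g in (0, 1):
    m = group == g
    rate = np.average(label[m], weights=weights[m])
    print(f"group {g}: weighted favorable rate = {rate:.3f}")  # now equal
```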
Algorithmic Bug Bounty
• Proposal: use market and reputational incentives to facilitate a scalable, crowd-based system of auditing to uncover bias and other flaws.
• In the US, some of the testing techniques face legal barriers.
Amit Elazari Bar On, Christo Wilson and Motahhare Eslami, ‘Beyond Transparency – Fostering Algorithmic Auditing and Research’ (2018); see also https://motherboard.vice.com/en_us/article/8xkyj3/we-need-bug-bounties-for-bad-algorithms
Policy Responses
Ongoing Efforts to Address the Societal and Ethical Concerns
• Academic research: Fairness, Accountability and Transparency in ML, https://www.fatml.org/
• Corporate: ethics teams at Facebook, Alphabet, Microsoft
• NGOs: AI Now, https://ainowinstitute.org/aap-toolkit.pdf
• Multi-stakeholder: Partnership on AI
• Principles: Asilomar AI Principles, https://futureoflife.org/ai-principles/
Guiding Principles for Corporations and Governments
Principles:
• Transparency
• Safety, security, reliability
• Data protection/privacy
• Fairness
Tools:
• Policies
• Testing and audits
• Redress
• Vendor due diligence
• Employee training
AI Inventory and Impact Assessment
Data:
• Types of data used
• Sources of the data
• Why it is "fit" for purpose/reliable/unbiased/timely
• Consider protected or underrepresented populations
Algorithms and Models:
• General design, criteria, or how they learn
• Purposes for which they operate
• Any material limits on their capabilities
• Steps taken to avoid bias
Output:
• Testing/auditing schedule, results, and remediation
• Whether and when to use third parties
• Whether and when to have humans in the loop
From Lindsey Tonsager, Covington
Regulatory Responses - Transparency
"Each [airline reservation] system shall provide to any person upon request the current criteria used in editing and ordering flights for the integrated displays and the weight given to each criterion and the specifications used by the system's programmers in constructing the algorithm." 14 CFR 255.4.
Transparent AI – The Role of Government as Customer
Transparency – demand insight into:
• Purpose
• Algorithm
• Training data
• Validation
Examples:
• ORAS – Ohio Risk Assessment System
• OTS – Offender Screening Tool (Maricopa County)
Legislative Responses – Bans
• California: AB 1215 (2019) – 3-year moratorium on law enforcement's use of any biometric surveillance system in connection with an officer camera or data collected by an officer camera. Penal Code Section 832.19.
• "Biometric surveillance system" means any computer software or application that performs facial recognition or other biometric surveillance (but not in-field fingerprint collection).
• Berkeley, CA and Somerville, MA: bans on all government use of facial recognition technology
• San Francisco: ban plus approval process for future adoption
Opening the black box in the EU
EU General Data Protection Regulation (GDPR)
• A data controller must provide the data subject "meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject." (Art. 13)
• See also Arts. 22 and 14-15
Trade secrecy and the black box
State v. Loomis (Wis. 2016)
Use of COMPAS risk assessment at sentencing:
• Weighting of factors is proprietary: no due process violation if the PSI includes limitations and cautions regarding the COMPAS risk assessment's accuracy
• COMPAS predicts group behavior: a circuit court is expected to consider this caution as it weighs all of the factors
• Risk scores cannot be used to determine whether to incarcerate or the severity of the sentence
• Risk scores may be used as one factor in probation and supervision decisions
Litigation in the US
• K.W. ex rel. D.W. v. Armstrong, 298 F.R.D. 479 (D. Idaho 2014) – automated decision-making system for Medicaid payments; court ordered the state to disclose the formula. As part of a settlement, the state agreed to develop a new formula with greater transparency.
• Ark. Dep't of Human Servs. v. Ledgerwood, 530 S.W.3d 336 (Ark. 2017) – homecare for individuals with profound physical disabilities allocated by complex computer algorithms; injunctive relief for plaintiffs based on the state's failure to adopt the rule according to notice-and-comment procedures.
• Houston Federation of Teachers v. Houston Independent School District, 251 F. Supp. 3d 1168 (S.D. Tex. 2017) – court ruled in favor of teachers: SAS's secrecy about its algorithm prohibited teachers from accessing, understanding, or acting on their own evaluations.
• Barry v. Lyon, 834 F.3d 706 (6th Cir. 2016) – state adopted a matching algorithm for food assistance eligibility; more than 19,000 people were improperly matched, automatically disqualified, and given only vague notice. Court ruled the notice denied due process.