  1. NOT SO FAST! THE VERY HUMAN LIMITS TO THE DEVELOPMENT OF AI IN LAW, LAW PRACTICE, AND LEGAL EDUCATION. ASHLEY LONDON, J.D. & JAMES B. SCHREIBER, PH.D. DUQUESNE UNIVERSITY

  2. “IT’S TIME TO PLAY THE MUSIC, IT’S TIME TO LIGHT THE LIGHTS…”

  3. ARTIFICIAL INTELLIGENCE??? Let us begin by exploring a little idea together… • Is infinity a number?

  4. WHAT IS AI ANYWAY? • What is intelligence?

  5. THE MERRIAM-WEBSTER DICTIONARY DEFINITION OF INTELLIGENCE • “the ability to learn or understand or to deal with new or trying situations . . . the skilled use of reason (2): the ability to apply knowledge to manipulate one’s environment or to think abstractly…”

  6. HOW THE ACADEMY DEFINES INTELLIGENCE • Psychologist David Wechsler: A global concept that involves an individual’s ability to act purposefully, think rationally, and deal effectively with the environment. • AI researchers Legg and Hutter (2006): Intelligence measures an agent’s ability to achieve goals in a wide range of environments.

  7. BLACK’S LAW DICTIONARY DEFINITION OF ARTIFICIAL INTELLIGENCE • Software used to make computers and robots work better than humans. The systems are rule-based or neural networks. It is used to help make new products, robotics, human-language understanding, and computer vision.

  8. AI IS NEITHER INTELLIGENCE NOR ARTIFICIAL INTELLIGENCE • It is more “Artificial-Artificial Intelligence” – Dr. Cathy O’Neil. • Humans are helping MACHINES help humans. • The greater the human reliance on computers, the higher the risk of potential ethical issues and conundrums.

  9. ALL WE REALLY HAVE IS A SYSTEM OF MATHEMATICAL MODELS • Mostly you hear the terms “algorithms” or “machine learning,” and they are often used interchangeably. • So what exactly is an algorithm? • Algorithms are sets of rules that a computer is able to follow: rules like “subtract the same quantity from both sides of an equation.” (A minimal sketch follows.)
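
A minimal sketch of that idea, assuming nothing beyond standard Python; the function name and the equation form are just illustrations:

```python
# An "algorithm" as an explicit set of rules. This one solves
# a*x + b = c for x using the same two rules a person would apply:
# subtract b from both sides, then divide both sides by a.

def solve_linear(a: float, b: float, c: float) -> float:
    if a == 0:
        raise ValueError("No unique solution when a is zero.")
    right_side = c - b   # Rule 1: subtract b from both sides.
    x = right_side / a   # Rule 2: divide both sides by a.
    return x

print(solve_linear(2, 3, 11))  # 2x + 3 = 11  ->  x = 4.0
```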

  10. BASIC GOALS OF AN ALGORITHM • Predict and classify. • Prediction is where we want to predict a number, typically: the price of a car or a house, or a salary request. • Classification is where we want to predict a pre-defined category: yes or no, purchase or not, mechanical failure, and so on. • To get to the ultimate prediction, however, the computer program must be loaded with “decision points” that trigger whether one route is taken or another. THIS is where issues arise. (A toy sketch follows.)
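
A toy sketch of the two goals, with the decision point made explicit; all names, coefficients, and the 0.4 threshold are invented for illustration:

```python
# Prediction returns a number; classification returns a category.
# The threshold below is a "decision point": a person chose it,
# and that choice shapes every downstream outcome.

def predict_price(square_feet: float) -> float:
    # Prediction: output a number (e.g., a house price).
    return 150.0 * square_feet + 20_000.0  # coefficients someone chose

def classify_loan(income: float, debt: float) -> str:
    # Classification: output one of two pre-defined categories.
    THRESHOLD = 0.4  # decision point: who picked 0.4, and why?
    return "approve" if debt / income < THRESHOLD else "reject"

print(predict_price(1_200))           # 200000.0
print(classify_loan(60_000, 30_000))  # "reject" (ratio 0.5 is not < 0.4)
```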

  11. DECISION POINTS EXAMPLE • Decision points are based on the programmer’s values or on the values of the person or entity commissioning the algorithm. Does anyone see a problem with this? • That is a great deal of power, and there is always a power issue when decisions are being made. Dr. Cathy O’Neil tells the story of making dinner for her family. • The data are the ingredients on hand over time, plus the amount of time and the level of motivation to make the dinner. • But she also needs to define success. Here, she defines success as whether her kids eat vegetables.

  12. DECISION POINTS EXAMPLE, CONTINUED • With that as a definition, she can start examining all the meal ingredients and the overall meal and work out what is linked to success. • If her kids defined success, a much different model would result, right? Maybe they would choose fewer vegetables and more dessert; that would be a successful dinner. • Now, with all of this data and the success definition, she can start optimizing the meals based on the linkage between the ingredients and the results to see if every meal is a “success.” • What if the ingredients are sugar- and fat-based sauces on the vegetables, and increasing the sugar leads to more “success”? (A sketch of this example follows.)
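
A sketch of the dinner example with made-up meals and scores; the point is only that the same data plus a different success definition yields a different “optimal” answer:

```python
# Same data, two definitions of "success": the optimizer's answer
# depends entirely on whose definition is encoded.

meals = [
    {"name": "broccoli stir-fry",       "veggies_eaten": 3, "dessert": 0},
    {"name": "mac and cheese",          "veggies_eaten": 0, "dessert": 1},
    {"name": "sugar-glazed vegetables", "veggies_eaten": 2, "dessert": 2},
]

# The parent's success definition: maximize vegetables eaten.
parent_best = max(meals, key=lambda m: m["veggies_eaten"])

# The kids' success definition: maximize dessert.
kids_best = max(meals, key=lambda m: m["dessert"])

print(parent_best["name"])  # broccoli stir-fry
print(kids_best["name"])    # sugar-glazed vegetables
```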

  13. ALGORITHMS ARE VALUE LADEN • Algorithms have an inherent power differential embedded right in the decision-making apparatus. • By “value laden” we mean the person who developed the algorithm chose the variables included, the definition of success, and the optimization process for that success definition. • While the developer may have so-called “good intentions,” it is nearly impossible for one person to account for all potential sources of bias, including implicit (unconscious) bias.

  14. A STRONG BIAS ALREADY EXISTS IN THE LEGAL FIELD FOR THE AGGRESSIVE USE AND IMPLEMENTATION OF AI • Case in point: we are hosting an AI conference right now at Duquesne University School of Law! (Among many other law schools doing the same.) • Articles like “Law Firms Need Artificial Intelligence to Stay in the Game,” by ALM Intelligence. • A need to find better, faster, cheaper ways to address the growing social justice gap: ACCESS TO JUSTICE. • Cheaper, faster electronic discovery for litigation. • A need to eliminate human errors, reduce risk, and manage costs to clients.

  15. BIG LAW FIRMS ARE FASTEST ADOPTERS OF AI RIGHT NOW • The ABA’s “2018 Legal Technology Survey Report” found that AI usage is greatest at large firms: firms with over 100 attorneys were the most likely to use the technology. • For those that saw a benefit to adopting AI, saving time and increasing efficiency was the highest-rated advantage that AI-powered software could provide. Reducing costs and predicting outcomes/reducing risks were also cited as important benefits. • Accuracy remained the biggest concern about AI, the only response to receive a consensus of over 50% (61% of the respondents at BigLaw firms with 500+ attorneys). • Prominent, internationally known “Big Law” firm O’Melveny & Myers LLP, based in Los Angeles, CA, recently announced it would serve as a pioneer in the introduction of the use of Artificial Intelligence (AI) in recruiting and hiring associates (O’Melveny & Myers, 2018) in an attempt to improve diversity.

  16. THE SEDUCTION OF PREDICTING THE “RIGHT” RESULT • Lawyers and law school administrators are salivating over the prospect of using big data analytics to “predict” a variety of unknowns. • Finding algorithms that can predict “success” on metrics such as first-time and “ultimate bar passage rates” in response to new ABA requirements (and, let’s be honest, to improve a law school’s ranking); a sketch of such a predictor follows this slide. • LexisNexis just announced it is releasing a new product called Context. This language-analytics program supposedly will allow legal professionals to build arguments designed to sway judges in favor of their clients. • The global legal analytics market is expected to reach a value of $1.8 billion by 2022. (Hichman, 2018) • Law students need to understand the benefits and detriments of the use of AI, not only for their clients but for themselves.
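
A hedged sketch of the kind of predictor the slide describes, on purely synthetic data; the features, coefficients, and scikit-learn setup are illustrative assumptions, not anyone’s actual model:

```python
# Fit a "bar passage" classifier on invented data. A real model
# would inherit whatever bias and noise live in its training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
lsat = rng.normal(155, 8, 500)    # synthetic LSAT scores
gpa = rng.normal(3.2, 0.4, 500)   # synthetic law school GPAs
X = np.column_stack([lsat, gpa])
# Synthetic labels: passage loosely tied to the features, plus noise.
y = (0.05 * lsat + gpa + rng.normal(0, 1, 500) > 11.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict_proba([[160, 3.5]])[0, 1])  # predicted pass probability
```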

  17. WE TEND TO USE THREE “SISTERS” OF ALGORITHMS • Linear Models • Tree-Based Models • Neural Networks (A side-by-side sketch follows.)
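
The three families side by side on one toy task, sketched with scikit-learn; the dataset and hyperparameters are arbitrary choices for illustration:

```python
# Each "sister" fits a different kind of decision boundary
# to the same synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=0)

models = {
    "linear":     LogisticRegression(max_iter=1000),
    "tree":       DecisionTreeClassifier(max_depth=3, random_state=0),
    "neural net": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                random_state=0),
}
for name, model in models.items():
    print(name, model.fit(X, y).score(X, y))  # training accuracy
```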

  18. AI IS NOT AUTOMATICALLY EVIL. BUT THE FACT THAT HUMANS CREATE AI SHOULD GIVE US PAUSE. • These are powerful technologies. • Amazing things can be done with them to advance human, business, and legal interests. • But in this point-and-click era, amazingly bad things can be done or perpetuated. • As Fei-Fei Li, one of the major developers of these technologies, recently argued, “we will hit a moment when it will be impossible to course-correct.” • Let us look at a few…

  19. JOBS JOBS JOBS • The hiring process is never a single decision. • The process is a series of decisions over time. • The key with these algorithmic systems is the rejection aspect. • Rejections are typically automated and can easily reflect the bias of the programmer. • The use of algorithms on job-hunting sites also affects who learns about the job(s). • Hiring algorithms can (and do) rank job seekers, and in doing so highlight what would normally be a marginal or unimportant difference.

  20. AMAZON • Amazon had to kill an algorithm-based hiring system. Why? • The tool disadvantaged FEMALE candidates. • It penalized those who went to certain women’s colleges, presumably not attended by many existing Amazon engineers. • It downgraded resumes that included the word “women’s,” as in “women’s rugby team.” • It privileged resumes with the kinds of verbs that men tend to use, like “executed” and “captured.” (A toy sketch of this failure mode follows.)
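
A deliberately crude sketch of that failure mode; the token weights are invented, but they mimic what a model trained on a skewed workforce could learn:

```python
# Token weights "learned" from past, mostly male hires: verbs the
# existing workforce favored get rewarded, and the word "women's"
# itself becomes a penalty, which is exactly the reported failure.
LEARNED_WEIGHTS = {
    "executed": 2.0,
    "captured": 2.0,
    "women's": -3.0,
}

def score_resume(text: str) -> float:
    # Sum the weight of every known token; unknown tokens score 0.
    return sum(LEARNED_WEIGHTS.get(word, 0.0) for word in text.lower().split())

print(score_resume("executed product launch captured new market"))  # 4.0
print(score_resume("captain of the women's rugby team"))            # -3.0
```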

  21. JOBS JOBS JOBS • In the case of systems meant to automate candidate search and hiring, we need to ask ourselves: What assumptions about worth, ability, and potential do these systems reflect and reproduce? Who was at the table when these assumptions were encoded? –Meredith Whittaker, Executive Director, AI Now Institute. • So if we were to create an algorithm to hire faculty at Duquesne, with all the data on hiring, promotion and tenure, grants, etc., for thousands and thousands of current and former employees, what would it tell you? • Data quality? • Potential for bias? • Who is at the table during the creation of this algorithm, and how is it to be applied?
