AI and Real-World Ethics

Ethics and Technology on the Cutting Edge

By Edward McIntyre

Slowly, as though pushing through a parting cloud, some of us are only now beginning to realize that AI (artificial intelligence) is shaping, and likely will dominate, our lives and futures. Well beyond our expectations. Beyond our imaginations, even. Yet this is precisely the environment that California Western School of Law’s Second Annual Legal Ethics Symposium sought to tackle, with the added dimension of its ethical implications.

Undaunted, it brought together scientists, technology experts and legal ethicists from around the country and across the globe to address the current state and foreseeable future of AI; the moral and legal dilemmas in regulating new technology; privacy, big data and medicine; and the impact of disruptive technology on public health and wellness, with no easy topic in the mix.

Symposium presenters included Dr. Michael Sung from Hong Kong University of Science and Technology, by way of MIT’s Artificial Intelligence Laboratory among other institutions, as keynote speaker; and panelists Professors Thomas D. Barton, Timothy Casey and James M. Cooper, California Western School of Law; Jennifer Brobst, Assistant Professor of Law, Southern Illinois University; Professor Joshua P. Davis, University of San Francisco School of Law; Cayce Greiner, Partner, Tyson & Mendes, LLP; Dr. Fazal Khan, Associate Professor of Law, University of Georgia School of Law; Dr. Lydia Kostopoulos, Harvard University; Dr. Tokio Matsuzaki, UCSD and Kobe University Graduate School of Medicine; Eleonore Pauwels, The Wilson Center; Brenda Simon, Associate Professor of Law, Thomas Jefferson School of Law; and Bradley Wendel, Associate Dean and Professor of Law, Cornell Law School. Brooke Raunig, Editor-in-Chief of California Western Law Review, also contributed to the discussion, chairing one of the panels.

Professor Casey started the day with the reminder that the Rules of Professional Conduct, Rule 3-110 (competence) in particular, require not only that we lawyers know the law, but also that we remain abreast of technological developments, the most spectacular of which at present is artificial intelligence. As science and technology race forward to push the limits of what AI can do, the symposium set out to address the more difficult question: What should AI do?

“If AI is, or will soon be, increasingly and more accurately ‘predictive,’ to the extent that legal interpretation involves description or prediction, will AI take on the role of interpreting our laws?”

Dr. Sung set the stage with an exposition of the history of AI and an explanation of what AI is — and is not. He described how it is currently being deployed (law, radiology, banking, insurance, health care, to name a few fields) and some of its key facets (natural language, machine learning, facial recognition, robotics). Then, he turned to the future.

In Dr. Sung’s vision, AI’s progression is inevitable and exponential: a “disrupter” that will drive social, business and political change on an exponential scale. We have, as he sees it, caught up with a future that is exciting but that will, for some, be an abrupt shift away from what they find comfortable.

His presentation raised overarching questions: How will AI and humans interact? And what responsibilities do we have for the development, deployment and ultimate use of AI?

Against this backdrop, panelists then took on the moral and legal dilemmas of regulating new technology in the age of AI; issues of privacy, big data and medicine as AI transforms them; and the impact of disruptive technology on public health and wellness given what AI makes possible, none of these a stroll in the park.

In the legal/regulatory arena, one question persisted: If AI is, or will soon be, increasingly and more accurately “predictive,” to the extent that legal interpretation involves description or prediction, will AI take on the role of interpreting our laws? Or is there a role for moral judgment in determining what the law is? Will we prevent AI from usurping the responsibility and authority of lawyers, judges and citizens themselves, and should we? How?

Take, by way of example, the autonomous, self-driving car. Assume it overcomes all the technological hurdles of safe and lawful operation. How, in an emergency and without fault, must it react when confronted with the dilemma of killing a pedestrian, or swerving and killing or injuring its passenger? Several pedestrians? Several passengers? A crowded school bus? Who gets to choose the programming? Employing what “norms”: the potential liability of the manufacturer? Of the owner? Some other “moral” standard? Should the passenger have a “protect the passenger at all costs” option?

To consider another sphere, AI — especially “deep learning” — needs vast arrays of data to continue to test, “learn” and self-program. By gathering the complete medical information and genetic data of many millions of patients, medical science can improve diagnostic precision. It can refine and “invent” more precise therapies and medications. It could significantly improve strategies to combat or even eliminate certain diseases. But when does medical precision become surveillance and control? The public benefit is demonstrable. But who should be allowed to exploit that personal data? To what end? Where do privacy rights surrender to the public good? What ethical standards govern the choices — choices science is currently making?

On another front, what is the long-term impact of robots and robotic toys that provide support and companionship to persons suffering from dementia? To others with severe social inhibitions? Will they supplement or supplant human caregivers? Human partners? What is a “good” outcome? Are there limits or norms? Set by whom?

Did the audience and panelists come away with clear and simple answers by the end of the day? Hardly. At best, some may have gained a tentative understanding of the dimensions of the issues. No one, however, suggested that she or he had reached definitive conclusions.

Rather, the panel discussions were Socratic, not didactic; ideas ebbed and flowed; many more questions surfaced than answers — as one might expect from the range of far-reaching topics the panelists explored.

A few audience members asked whether law school professional responsibility curricula should be reshaped if lawyers are to assume a role in shaping how AI’s development affects society.

One certainty emerged: as AI development races ahead, its disruptive force grows exponentially. For some this will be frightening; for others, exciting. As one futurist expressed it, AI’s impact on our lives will be like the impact of electricity on past generations: “Everything will be ‘cognified.’” The overarching questions, however, remained. How will AI and humans interact? What responsibilities will we have for the development, deployment and ultimate use of AI?

Inventors and developers will continue to focus on what AI can do. This symposium raised a critical question: What should AI do? More importantly, who should be involved in making those decisions? Asking the question is the important first step. The California Western School of Law 2018 Legal Ethics Symposium took that step.

Edward McIntyre is an attorney at law and co-editor of San Diego Lawyer.

This article originally appeared in the May/June 2018 issue of San Diego Lawyer.