Conveners:
Éric Pardoux – IHRIM (CNRS & ENS Lyon) & MFO
Aurelia Sauerbrei – The Ethox Centre and Wellcome Centre for Ethics and Humanities, Nuffield Department of Population Health, and Big Data Institute, University of Oxford, Oxford, UK
Mogens Laerke – IHRIM (CNRS & ENS Lyon) & MFO
Thomas Guyet – INRIA Lyon
Artificial intelligence (AI) is slowly but steadily being introduced into medical and healthcare settings. From basic research to applied systems already deployed in hospitals, AI-based systems (AIS) are now used in applications that go far beyond the role played by the first expert systems of the 1970s and 1980s. AIS now assist not only with diagnosis, but also with risk stratification, support during surgical procedures, and the monitoring of biomarkers in chronically ill patients, and this list is far from exhaustive. Virtually any decision made in healthcare can now potentially be supported by AI.
However, AIS cannot be regarded as mere tools that leave medical practice and biomedical ethics untouched. By nature, and especially when they are based on machine learning, AIS can be opaque to users and even to programmers, becoming so-called black boxes. More generally, AIS can be used to compute decision models that are sometimes hard to explain to non-specialists. The use of such AIS raises many ethical questions: how can we trust opaque medical systems, be they black or grey boxes? How should responsibility be shared when these systems are used? How can we define reasonable trade-offs between the opacity, safety and efficiency of these systems? What is the role of explanation in medicine, and how might AIS disrupt it? Finally, once we have understood the normative expectations we direct towards AIS, how can we foster ethical design practices so as to actually build the ethical AI-based systems we envision?
These questions are only a sample of the ethical stakes raised by the introduction of AIS in healthcare. These stakes are neither purely theoretical nor purely practical. Instead, they are intertwined in a conundrum, a wicked problem, that we need to tackle collectively, drawing on the expertise of every stakeholder. Through this workshop, we hope to facilitate an interdisciplinary dialogue between technologists, medical practitioners and ethicists.
More information & registration
More information about ED-AIM
Programme:
–9:45-10:00: welcome address (Lionel Tarassenko, Reuben College, University of Oxford (to be confirmed))
The ends - What should ethical AIS for healthcare be?
–10:00-11:00: Christine Hine (University of Surrey) — Ethics and artificial intelligence in the interdisciplinary collaborations of smart care
–11:00-11:30: break
–11:30-12:00: Karin Jongsma, Megan Milota, Jojanneke Drogt (Utrecht University, Netherlands) — Visualizing the ethics of AI in pathology through the lens of human expertise and responsibility
–12:00-12:30: Éric Pardoux (IHRIM, École Normale Supérieure de Lyon & MFO) — The ethics of AI ethics, or how to ethically design ethical AI?
–12:30-13:00: Discussion of the first axis — chaired by Angeliki Kerasidou (Ethox Centre, University of Oxford)
–13:00-14:00: lunch break
The means - How to develop ethical AIS for healthcare?
–14:00-15:00: Mihaela van der Schaar (University of Cambridge) & Thomas Callender (University College London) — “Sunlight is said to be the best of disinfectants”: Transparency is key to ethical AI in healthcare
–15:00-15:30: break
–15:30-16:00: Francis McKay (Ethox Centre, University of Oxford) — Public Participation in Medical AI Ethics
–16:00-16:30: Jessica Morley (Oxford Internet Institute, University of Oxford) — Title to be confirmed
–16:30-17:00: Discussion of the second axis
–17:00-17:30: General discussion to conclude the workshop