TILT seminar: Michal Klincewicz
Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy
This paper analyses how two foundational principles of medical ethics, the trusted doctor and patient autonomy, can be undermined by the use of machine learning (ML) algorithms, and addresses the legal significance of this problem. The paper can serve as a guide for health care providers and other stakeholders on how to anticipate, and in some cases mitigate, ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map for what needs to be done to achieve an acceptable level of explainability in an ML algorithm used in a healthcare context.
Speaker: Michal Klincewicz
Assistant professor at Tilburg University in the Department of Cognitive Science and Artificial Intelligence, and assistant professor (part-time) at Jagiellonian University in the Department of Cognitive Science, Institute of Philosophy. He was previously a post-doctoral researcher at the Berlin School of Mind and Brain, and received his Ph.D. in philosophy in 2013 from the City University of New York, Graduate Center, under the supervision of David Rosenthal.
Michal’s research focuses on the temporal dimension of cognition, including conscious experience, personal change over time, perception, and dreams.
Michal also cares a great deal about the ethically problematic consequences of emerging technologies, such as autonomous weapon systems and moral enhancement. He is currently pursuing two related research projects: (1) "Modelling Expert Decisions in Complex Environments," carried out as part of MindLabs and in cooperation with the Port of Rotterdam, and (2) "Moral Improvement with Artificial Intelligence," a series of articles on the ethical and moral dimensions of new computing technologies. He says a few things about how these projects intersect in this short conversation with The Decision Lab.
Host: Saskia Lavrijssen