TY - JOUR
T1 - Navigating the Ethical Crossroads: Bridging the Gap between Predictive Power and Explanation in the Use of Artificial Intelligence in Medicine
AU - Riva, Giuseppe
AU - Sajno, Elena
AU - De Gaspari, Stefano
AU - Pupillo, Chiara
AU - Wiederhold, Brenda K.
PY - 2023
Y1 - 2023
N2 - Artificial Intelligence (AI) has emerged as a transformative force in medicine, provoking both awe and apprehension in clinicians and patients. The challenges posed by medical AI extend beyond mere technological hiccups; they reach into the very core of ethics and human decision-making. This paper examines the intricate dichotomy between the clinical predictive prowess of AI and the human ability to explain decisions, highlighting the ethical challenges that arise from this disparity. While humans can elucidate their choices, AI often operates in opaque realms, generating predictions without transparent reasoning. The paper explores the cognitive underpinnings of prediction and explanation, emphasizing the essential interplay between these processes in human intelligence. It critically analyzes the limitations of current medical AI systems, emphasizing their vulnerability to errors and their lack of transparency, which are especially consequential in a domain as critical as healthcare. We contend that explainability serves as a vital tool to ensure that patients remain at the core of healthcare: it empowers patients and clinicians to make informed, autonomous decisions about their health. Explainable Artificial Intelligence (XAI) addresses these challenges, but achieving explainability is not easy and depends strongly on technical, social, and psychological variables. This underscores the urgent need for a multidisciplinary approach to XAI that integrates technological knowledge with psychological, cognitive, and social perspectives. Such an alignment will foster innovation, empathy, and responsible implementation, shaping a healthcare landscape that prioritizes both technological advancement and ethical considerations.
KW - Algorethics
KW - Artificial Intelligence
KW - Explainable Artificial Intelligence (XAI)
KW - Explanation
KW - Prediction
UR - http://hdl.handle.net/10807/272879
M3 - Editorial
SN - 1554-8716
VL - 21
SP - 3
EP - 7
JO - Annual Review of CyberTherapy and Telemedicine
JF - Annual Review of CyberTherapy and Telemedicine
ER -