TY - JOUR
T1 - AI Says I’m Better: Evaluating the Effect of AI Defer on Users. A Study Protocol
AU - Sajno, E.
AU - Beretta, A.
AU - De Gaspari, S.
AU - Giannotti, F.
AU - Pedreschi, D.
AU - Pellungrini, R.
AU - Pugnana, A.
AU - Pupillo, C.
AU - Repetto, C.
AU - Sansoni, M.
AU - Villani, D.
AU - Riva, G.
PY - 2025
Y1 - 2025
N2 - The integration of AI into decision support systems raises concerns about overreliance and distrust. To address this, we propose an experimental protocol combining Learning to Defer (LtD), where AI delegates decisions to humans when appropriate, and Explainable AI (XAI), which provides users with decision rationales. Our study investigates how these approaches impact human decision-making, particularly in high-stakes contexts. Participants will classify noisy images from ImageNet under three between-subjects conditions: Defer (AI defers to the user), Defer + XAI (AI provides an explanation), and Hidden Delegation (AI involvement is concealed). Each condition will be tested in neutral and high-stakes scenarios, the latter framed through narratives emphasizing the danger of misclassification. We will assess decision accuracy and reaction times, as well as psychological measures that explore the influence of individual differences (i.e., intolerance of uncertainty and cognitive styles) and emotions (e.g., emotion regulation and AI-related anxiety). We hypothesize that Defer may prompt more analytical thinking, improving accuracy over Hidden Delegation, while Defer + XAI may further enhance performance. In contrast, Hidden Delegation could promote reliance on intuitive processing. We expect higher accuracy and longer response times in high-stakes conditions. Findings will inform the design of human-AI systems that optimize user engagement and reliability, particularly in domains like clinical decision-making.
KW - LLM
KW - AI
KW - explainable AI
UR - https://publicatt.unicatt.it/handle/10807/327316
UR - https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=105023873791&origin=inward
UR - https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=105023873791&origin=inward
M3 - Article
SN - 1554-8716
VL - 23
SP - 282
EP - 288
JO - Annual Review of CyberTherapy and Telemedicine
JF - Annual Review of CyberTherapy and Telemedicine
ER -