TY - GEN
T1 - Trust in AI: Transparency, and Uncertainty Reduction. Development of a new theoretical framework
AU - Aquilino, Letizia
AU - Bisconti, Piercosma
AU - Marchetti, Antonella
PY - 2024
Y1 - 2024
N2 - Trust plays a pivotal role in the acceptance of Artificial Intelligence (AI), particularly when people’s health and safety are involved. AI systems have shown great potential in the medical field; however, users still find it difficult to trust AI over a human doctor for decisions regarding their health. This paper establishes a new theoretical framework that integrates Uncertainty Reduction Theory (URT) with theorization on agency locus. The framework examines the influence of transparency, agency locus, and human oversight on trust development, mediated by uncertainty reduction. Transparency has already been identified as a key element in fostering trust: AI systems that provide insight into their inner workings are generally perceived as more trustworthy. One explanation is that such systems become more understandable and predictable to the user, which reduces the uncertainty of the interaction. The framework also addresses differences across application domains, namely healthcare and first-response intervention. Finally, the paper outlines multiple experiments to validate the model, shedding light on the complex dynamics of trust in AI.
KW - Artificial Agents
KW - Perceived Trustworthiness
KW - Transparency
KW - Trust
KW - Uncertainty
UR - http://hdl.handle.net/10807/261794
UR - https://ceur-ws.org/Vol-3634/paper7.pdf
M3 - Conference contribution
SN - 1613-0073
T3 - CEUR WORKSHOP PROCEEDINGS
SP - 19
EP - 26
BT - CEUR Workshop Proceedings
T2 - MULTITTRUST 2023 - Multidisciplinary Perspectives on Human-AI Team Trust 2023
Y2 - 4 December 2023 through 4 December 2023
ER -