TY - JOUR
T1 - Can a robot lie? Young children's understanding of intentionality beneath false statements
AU - Peretti, Giulia
AU - Manzi, Federico
AU - Di Dio, Cinzia
AU - Cangelosi, Angelo
AU - Harris, Paul L.
AU - Massaro, Davide
AU - Marchetti, Antonella
PY - 2023
Y1 - 2023
N2 - Including robots in children's lives calls for reflection on the psychological and moral aspects of such relationships, especially with respect to children's ability to differentiate intentional from unintentional false statements, that is, lies from mistakes. This ability calls for an understanding of an interlocutor's intentions. This study examined the ability of 5- to 6-year-olds to recognize, and morally evaluate, lies and mistakes produced by a human as compared to a NAO robot, and to attribute relevant emotions to the deceived party. Irrespective of the agent, children had more difficulty in understanding mistakes than lies. In addition, they were disinclined to attribute a lie to the robot. Children's age and their understanding of intentionality were the strongest predictors of their performance on the lie-mistake task. Children's Theory of Mind, but not their executive function skills, also correlated with their performance. Our findings suggest that, regardless of age, a robot is perceived as an intentional agent. The robot's behaviour was more acceptable to children because its actions could be attributed to someone who programmed it to act in a specific way.
KW - children
KW - human–robot interaction
KW - intentionality understanding
KW - lie-mistake
KW - theory of mind
UR - https://publicatt.unicatt.it/handle/10807/222928
UR - https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=85145603785&origin=inward
UR - https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85145603785&origin=inward
U2 - 10.1002/icd.2398
DO - 10.1002/icd.2398
M3 - Article
SN - 1522-7227
SP - 1
EP - 25
JO - Infant and Child Development
JF - Infant and Child Development
IS - e2398
ER -