TY - JOUR
T1 - Can a robot lie? Young children's understanding of intentionality beneath false statements
AU - Peretti, Giulia
AU - Manzi, Federico
AU - Di Dio, Cinzia
AU - Cangelosi, Angelo
AU - Harris, Paul L.
AU - Massaro, Davide
AU - Marchetti, Antonella
PY - 2023
Y1 - 2023
N2 - Including robots in children's lives calls for reflection on the
psychological and moral aspects of such relationships, especially
with respect to children's ability to differentiate intentional
from unintentional false statements, that is, lies from
mistakes. This ability calls for an understanding of an interlocutor's
intentions. This study examined the ability of
5-6-year-olds to recognize, and morally evaluate, lies and
mistakes produced by a human as compared to a NAO
robot, and to attribute relevant emotions to the deceived
party. Irrespective of the agent, children had more difficulty
in understanding mistakes than lies. In addition, they were
disinclined to attribute a lie to the robot. Children's age and
their understanding of intentionality were the strongest
predictors of their performance on the lie-mistake task.
Children's Theory of Mind, but not their executive function
skills, also correlated with their performance. Our findings
suggest that, regardless of age, a robot is perceived as an
intentional agent. The robot's behaviour was more acceptable to
children because its actions could be attributed to someone
who programmed it to act in a specific way.
KW - children
KW - human–robot interaction
KW - intentionality understanding
KW - lie-mistake
KW - theory of mind
UR - http://hdl.handle.net/10807/222928
U2 - 10.1002/icd.2398
DO - 10.1002/icd.2398
M3 - Article
SN - 1522-7227
SP - 1
EP - 25
JO - Infant and Child Development
JF - Infant and Child Development
ER -