Can a robot lie? Young children's understanding of intentionality beneath false statements

Research output: Contribution to journal › Article

Abstract

Including robots in children's lives calls for reflection on the psychological and moral aspects of such relationships, especially with respect to children's ability to differentiate intentional from unintentional false statements, that is, lies from mistakes. This ability calls for an understanding of an interlocutor's intentions. This study examined the ability of 5-6-year-olds to recognize, and morally evaluate, lies and mistakes produced by a human as compared to a NAO robot, and to attribute relevant emotions to the deceived party. Irrespective of the agent, children had more difficulty in understanding mistakes than lies. In addition, they were disinclined to attribute a lie to the robot. Children's age and their understanding of intentionality were the strongest predictors of their performance on the lie-mistake task. Children's Theory of Mind, but not their executive function skills, also correlated with their performance. Our findings suggest that, regardless of age, a robot is perceived as an intentional agent. The robot's behaviour was more acceptable to children because its actions could be attributed to the person who programmed it to act in a specific way.
Original language: English
Pages (from-to): 1-25
Number of pages: 25
Journal: Infant and Child Development
Article number: e2398
DOI
Publication status: Published - 2023

All Science Journal Classification (ASJC) codes

  • Developmental and Educational Psychology

Keywords

  • children
  • human–robot interaction
  • intentionality understanding
  • lie-mistake
  • theory of mind
