TY - JOUR
T1 - On the emergent capabilities of ChatGPT 4 to estimate personality traits
AU - Piastra, M.
AU - Catellani, P.
PY - 2025
Y1 - 2025
N2 - This study investigates the potential of ChatGPT 4 in the assessment of personality traits based on written texts. Using two publicly available datasets containing both written texts and self-assessments of the authors’ psychological traits based on the Big Five model, we aimed to evaluate the predictive performance of ChatGPT 4. For each sample text, we asked for numerical predictions on an eleven-point scale and compared them with the self-assessments. We also asked ChatGPT 4 for a confidence score on an eleven-point scale for each prediction. To keep the study within a manageable scope, a zero-prompt modality was chosen, although more sophisticated prompting strategies could potentially improve performance. The results show that ChatGPT 4 has moderate but significant abilities to automatically infer personality traits from written text. However, it also shows limitations in recognizing whether the input text is appropriate or representative enough to make accurate inferences, which could hinder practical applications. Furthermore, the results suggest that improved benchmarking methods could increase the efficiency and reliability of the evaluation process. These results pave the way for a more comprehensive evaluation of the capabilities of Large Language Models in assessing personality traits from written texts.
KW - Big Five
KW - ChatGPT
KW - Large Language Models
KW - conversational agents
KW - personality traits
KW - text analysis
UR - https://publicatt.unicatt.it/handle/10807/314491
UR - https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=85219563893&origin=inward
UR - https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85219563893&origin=inward
DO - 10.3389/frai.2025.1484260
M3 - Article
SN - 2624-8212
VL - 8
SP - 1
EP - 1
JO - Frontiers in Artificial Intelligence
JF - Frontiers in Artificial Intelligence
IS - 1
ER -