Abstract
The wide availability of machine-readable textual resources for Classical Latin has made it possible to study Latin literature with methods and tools that support distant reading. This paper describes a series of experiments carried out to test whether the thematic distribution of the Classical Latin corpus Opera Latina can be investigated by means of topic modeling. For this purpose, we train, optimize and compare two neural models, Product-of-Experts LDA (ProdLDA) and Embedded Topic Model (ETM), suitably adapted to the textual data of a Classical Latin corpus, and evaluate which one performs better, both in terms of topic diversity and topic coherence metrics and according to human judgment. Our results show that the topics extracted by the neural models are coherent and interpretable, and that they are meaningful from the perspective of a Latin scholar. The source code of the proposed model is available at https://github.com/MIND-Lab/LatinProdLDA.
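As a rough illustration of the comparison described in the abstract, the sketch below trains ProdLDA and ETM on a preprocessed corpus and scores both with NPMI topic coherence and topic diversity. It uses the OCTIS library, which is not named in the paper and is shown here only as one possible setup; the dataset folder `opera_latina_octis` is a hypothetical placeholder, and the hyperparameters are purely illustrative, not those used by the authors.

```python
# Minimal sketch (assumptions labeled): compare ProdLDA and ETM on a corpus
# prepared in OCTIS format, using topic coherence (NPMI) and topic diversity.
from octis.dataset.dataset import Dataset
from octis.models.ProdLDA import ProdLDA
from octis.models.ETM import ETM
from octis.evaluation_metrics.coherence_metrics import Coherence
from octis.evaluation_metrics.diversity_metrics import TopicDiversity

# Hypothetical folder containing the corpus in OCTIS format
# (corpus.tsv + vocabulary.txt); not the authors' actual data layout.
dataset = Dataset()
dataset.load_custom_dataset_from_folder("opera_latina_octis")

models = {
    "ProdLDA": ProdLDA(num_topics=20, num_epochs=100),  # illustrative settings
    "ETM": ETM(num_topics=20, num_epochs=100),
}

npmi = Coherence(texts=dataset.get_corpus(), topk=10, measure="c_npmi")
diversity = TopicDiversity(topk=10)

for name, model in models.items():
    # train_model returns a dict with, among others, the top words per topic
    output = model.train_model(dataset)
    print(name,
          "NPMI coherence:", round(npmi.score(output), 4),
          "topic diversity:", round(diversity.score(output), 4))
```

Such an automatic comparison would still need to be complemented by the human evaluation of topic interpretability that the paper reports.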
| Original language | English |
| --- | --- |
| Host publication title | Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) |
| Pages | 6929-6934 |
| Number of pages | 6 |
| Publication status | Published - 2024 |
| Event | 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy, 22 May 2024 → 24 May 2024 |
Keywords
- Latin
- Linguistic resources