Abstract
EVALITA 2007, the first edition of the initiative devoted to the evaluation of Natural Language Processing tools for Italian, provided a shared framework in which participants’ systems could be evaluated on five different tasks, namely Part of Speech Tagging (organised by the University of Bologna), Parsing (organised by the University of Torino), Word Sense Disambiguation (organised by CNR-ILC, Pisa), Temporal Expression Recognition and Normalization (organised by CELCT, Trento), and Named Entity Recognition (organised by FBK, Trento). We believe that the diffusion of shared tasks and shared evaluation practices is a crucial step towards the development of resources and tools for Natural Language Processing. Experiences of this kind are a valuable contribution to the validation of existing models and data, allowing for consistent comparisons among approaches and among representation schemes. The good response to EVALITA, both in the number of participants and in the quality of results, showed that pursuing such goals is feasible not only for English but also for other languages.
Original language | English
---|---
Title of host publication | LREC 2008
Pages | 2536-2543
Number of pages | 8
Publication status | Published - 2008
Event | LREC 2008 - Marrakech, Morocco. Duration: 28 May 2008 → 30 May 2008
Conference

Conference | LREC 2008
---|---
City | Marrakech, Morocco
Period | 28/5/08 → 30/5/08
Keywords
- evaluation campaigns
- Italian
- NLP