Temporal Information Annotation: Crowd vs. Experts

Rachele Sprugnoli, Tommaso Caselli, Oana Inel

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

This paper describes two sets of crowdsourcing experiments on temporal information annotation conducted in two languages, i.e., English and Italian. The first experiment, launched on the CrowdFlower platform, aimed to classify temporal relations given target entities. The second, relying on the CrowdTruth metric, consisted of two subtasks: one devoted to the recognition of events and temporal expressions, and one to the detection and classification of temporal relations. The outcomes of the experiments suggest that crowdsourcing annotations are valuable even for a complex task such as Temporal Processing.
Original language: English
Title of host publication: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)
Pages: 3502-3609
Number of pages: 108
Publication status: Published - 2016
Event: Tenth International Conference on Language Resources and Evaluation (LREC 2016) - Portorož, Slovenia
Duration: 23 May 2016 - 28 May 2016

Conference

Conference: Tenth International Conference on Language Resources and Evaluation (LREC 2016)
City: Portorož, Slovenia
Period: 23/5/16 - 28/5/16

Keywords

  • Corpus
  • Crowdsourcing
  • Temporal Information Processing
