Research Repository

Comparing Bayesian Models of Annotation

Paun, Silviu and Carpenter, Bob and Chamberlain, JD and Hovy, Dirk and Kruschwitz, Udo and Poesio, Massimo (2018) 'Comparing Bayesian Models of Annotation.' Transactions of the Association for Computational Linguistics, 6 (2018). 571–585.

tacl_a_00040.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.



The analysis of crowdsourced annotations in NLP is concerned with identifying 1) gold standard labels, 2) annotator accuracies and biases, and 3) item difficulties and error patterns. Traditionally, majority voting was used for 1), and coefficients of agreement for 2) and 3). Lately, model-based analysis of corpus annotations has proven better at all three tasks. But there has been relatively little work comparing such models on the same datasets. This paper aims to fill this gap by analyzing six models of annotation, covering different approaches to annotator ability, item difficulty, and parameter pooling (tying) across annotators and items. We evaluate these models along four aspects: comparison to gold labels, predictive accuracy for new annotations, annotator characterization, and item difficulty, using four datasets with varying degrees of noise in the form of random (spammy) annotators. We conclude with guidelines for model selection, application, and implementation.
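The traditional baseline the abstract mentions, majority voting over crowd labels to pick a gold label per item, can be sketched in a few lines. This is an illustrative sketch only; the toy data and the function name `majority_vote` are our own, not from the paper, which instead compares model-based (Bayesian) aggregation against this baseline.

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate per-item crowd labels by majority vote.

    annotations: dict mapping item id -> list of labels from annotators.
    Returns a dict mapping item id -> most frequent label (ties broken
    by first-seen order, per Counter.most_common semantics).
    """
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in annotations.items()}

# Hypothetical toy data: three items, each labeled by three annotators.
votes = {
    "item1": ["pos", "pos", "neg"],
    "item2": ["neg", "neg", "neg"],
    "item3": ["pos", "neg", "pos"],
}
print(majority_vote(votes))  # {'item1': 'pos', 'item2': 'neg', 'item3': 'pos'}
```

Note that majority voting weighs every annotator equally; the models compared in the paper instead estimate per-annotator accuracy and bias, which is why they can down-weight spammy annotators.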

Item Type: Article
Uncontrolled Keywords: annotation, annotation models, Bayesian models, computational linguistics, crowdsourcing
Subjects: P Language and Literature > P Philology. Linguistics
Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Faculty of Science and Health > Computer Science and Electronic Engineering, School of
Depositing User: Elements
Date Deposited: 09 Nov 2018 13:42
Last Modified: 13 Feb 2019 11:15
