Building Annotated Corpora without Experts

Authors

GRÁC Marek

Year of publication 2011
Type Article in Proceedings
Conference Natural Language Processing, Multilinguality
MU Faculty or unit

Faculty of Informatics

Field Informatics
Keywords corpus annotation crowdsourcing
Description In this paper, we present a low-cost approach to building a multi-purpose language resource for Czech, based on currently available results of previous work by various teams. We focus on the first phase, which consists of verifying the validity of automatically discovered syntactic elements in 10 000 sentences by 47 human annotators. Because of the number of annotators and the very limited time for training, existing heavy-weight techniques for building annotated corpora were not applicable. We decided to avoid using experts when results between annotators differed. This means that our corpus does not offer ultimate answers, but rather raw data and models for obtaining a "correct" answer tailored to the user's application. Finally, we discuss the results achieved so far and our future plans.
