Annotating named entities in Twitter data with crowdsourcing
Date
2010-06-06
Citation of Original Publication
Tim Finin, Will Murnane, Anand Karandikar, Nicholas Keller, Justin Martineau, and Mark Dredze, Annotating named entities in Twitter data with crowdsourcing, Proceedings of the NAACL Workshop on Creating Speech and Text Language Data With Amazon's Mechanical Turk, 2010. https://aclanthology.org/W10-0713/
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Abstract
We describe our experience using both Amazon Mechanical Turk (MTurk) and CrowdFlower to collect simple named entity annotations for Twitter status updates. Unlike most genres that have traditionally been the focus of named entity experiments, Twitter is far more informal and abbreviated. The collected annotations and annotation techniques will provide a first step towards the full study of named entity recognition in domains like Facebook and Twitter. We also briefly describe how to use MTurk to collect judgements on the quality of “word clouds.”