Increasing Cheat Robustness of Crowdsourcing Tasks

Crowdsourcing is rapidly becoming a widely used means of collecting large-scale scientific corpora. Many research fields, including Information Retrieval, rely on this novel way of data acquisition. However, it is undermined by a significant share of workers who are primarily interested in producing quick, generic answers rather than correct ones, in order to optimise their time efficiency and, in turn, earn more money. Recently, numerous sophisticated schemes for identifying such workers have been proposed. These, however, often require additional resources or introduce artificial limitations to the task. We took a different approach and investigated means of making crowdsourced tasks a priori less attractive to cheaters.

“Increasing Cheat Robustness of Crowdsourcing Tasks” has been accepted for publication in Information Retrieval.


TREC 2011

The 20th edition of the Text REtrieval Conference (TREC) for the first time featured a dedicated crowdsourcing track. Building on the various crowdsourcing experiments conducted for the PuppyIR project, we hypothesised that a game-based crowdsourcing approach could prove advantageous for the collection of relevance assessments. In order to explore the trade-off between pay rate and entertainment, TU Delft participated in a joint effort with the University of Iowa and proposed the GeAnn annotation game. Within the course of one week, we were able to collect 10,000 relevance judgements for the assigned query-document pairs. Using GeAnn, we were able to finance the entire assessment process with as little as $3.74. This unmatched cost efficiency compared very favourably to that of other participants, who typically spent between $30 and $70 to achieve comparable result quality. Especially with regard to final system evaluation with children, the results looked very encouraging.

Across the other tracks, numerous state-of-the-art challenges to IR systems were addressed, such as the retrieval of chemical compounds, microblog posts, or medical records from patient files. In dedicated planning sessions for TREC 2012, the refinement of current tracks as well as the introduction of two entirely new ones was discussed.

  • The Contextual Suggestion Track aims to suggest entertainment to people based on their spatio-temporal position and a known set of preferences. Rather than the varying queries of most traditional tracks, this new setting focuses entirely on the user and their specific context.
  • The Knowledge-Base Acceleration Track addresses the task of automatically curating information for online encyclopedias such as Wikipedia. The challenge is to identify appropriate news stories, not yet covered, that would make a valid extension to the current Wiki article.
  • Finally, in a TREC anniversary historical panel, an interesting (and occasionally smirk-inducing) review of 20 years of TREC was given.