ABSTRACT
When running large human computation tasks in the real world, honeypots play an important role in assessing the overall quality of the work produced. Generating such honeypots can be a significant burden on the task owner, as they require specific design and implementation characteristics, as well as continuous maintenance when operating data pipelines that include a human computation component. In this extended abstract we outline a novel approach for creating honeypots from questions automatically generated out of a reference knowledge base, with the ability to control parameters such as topic and difficulty.
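The idea of turning knowledge-base facts into verification questions can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tiny in-memory `KB` of (subject, predicate, object) triples, the `generate_honeypot` helper, and its parameters are all hypothetical; a real deployment would query a large reference knowledge base, and difficulty could be steered by how semantically close the distractor answers are to the correct one.

```python
import random

# Hypothetical miniature knowledge base of (subject, predicate, object)
# triples standing in for a large reference KB.
KB = [
    ("Paris", "is the capital of", "France"),
    ("Berlin", "is the capital of", "Germany"),
    ("Madrid", "is the capital of", "Spain"),
    ("Tokyo", "is the capital of", "Japan"),
]

def generate_honeypot(predicate, n_distractors=3, seed=0):
    """Turn a KB triple into a multiple-choice honeypot question.

    The topic is controlled by choosing the predicate; distractors are
    drawn from objects of other triples sharing that predicate, so every
    option is plausible but only one is correct.
    """
    rng = random.Random(seed)
    candidates = [t for t in KB if t[1] == predicate]
    subject, pred, answer = rng.choice(candidates)
    distractors = [o for s, p, o in candidates if o != answer]
    options = rng.sample(distractors, n_distractors) + [answer]
    rng.shuffle(options)
    return {
        "question": f"{subject} {pred} which of the following?",
        "options": options,
        "answer": answer,
    }
```

Because the correct answer is known from the KB, such questions can be injected into a task batch and scored automatically, without the manual authoring effort that honeypots normally require.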