
A survey of task-oriented crowdsourcing


Abstract

Since the advent of artificial intelligence, researchers have been trying to create machines that emulate human behaviour. As early as 1960, however, Licklider (1960) argued that humans and computers simply occupy opposite ends of a single scale, a view that underlies human computation. After almost a decade of active research into human computation and crowdsourcing, this paper presents a survey of crowdsourcing human computation systems, focusing on the solution of both micro-tasks and complex tasks. An analysis of the current state of the art is performed from a technical standpoint, including a systematized description of the terminology used by crowdsourcing platforms and the relationships between terms. Furthermore, the similarities between task-oriented crowdsourcing platforms are described and presented in a process diagram according to a proposed classification. Using this analysis as a stepping stone, the paper concludes with a discussion of challenges and possible future research directions.


Notes

  1. http://www.istockphoto.com.

  2. http://www.innocentive.com.

  3. https://www.mturk.com.

  4. http://blog.safecast.org/.

  5. https://microworkers.com.

  6. http://www.shorttask.com.

  7. http://crowdflower.com.

  8. http://crowdsourcingresults.com/competition-platforms/crowdsourcing-landscape-discussion.

  9. http://www.cloudcrowd.com.

  10. https://www.mobileworks.com.

  11. https://microworkers.com.

References

  • Ahmad S, Battle A, Malkani Z, Kamvar S (2011) The Jabberwocky programming environment for structured social computing. In: Proceedings of the 24th annual ACM symposium on user interface software and technology, pp 53–64

  • Brabham DC (2008a) Crowdsourcing as a model for problem solving: an introduction and cases. Converg Int J Res New Media Technol 14:75–90

  • Brabham DC (2008b) Moving the crowd at iStockphoto: the composition of the crowd and motivations for participation in a crowdsourcing application. First Monday 13:1–22

  • Chklovski T (2003) Learner: a system for acquiring commonsense knowledge by analogy. In: Proceedings of the 2nd international conference on knowledge capture, pp 4–12

  • Cooper S, Khatib F, Treuille A et al (2010) Predicting protein structures with a multiplayer online game. Nature 466:756–760. doi:10.1038/nature09304

  • Doan A, Ramakrishnan R, Halevy AY (2011) Crowdsourcing systems on the world-wide web. Commun ACM 54:86–96

  • Faridani S, Hartmann B, Ipeirotis PG (2011) What's the right price? Pricing tasks for finishing on time. In: Proceedings of the AAAI workshop on human computation

  • Goodchild MF, Glennon JA (2010) Crowdsourcing geographic information for disaster response: a research frontier. Int J Digit Earth 3:231–241

  • Gruber T (2008) Collective knowledge systems: where the social web meets the semantic web. Web Semant Sci Serv Agents World Wide Web 6:4–13

  • Gruber TR (1993) A translation approach to portable ontology specifications. Knowl Acquis 5:199–220

  • Harris CG (2011) Dirty deeds done dirt cheap: a darker side to crowdsourcing. In: IEEE international conference on privacy, security, risk and trust, pp 1314–1317

  • Horrocks I, Patel-Schneider PF, McGuinness DL, Welty CA (2007) OWL: a description logic based ontology language for the semantic web. In: Baader F, Calvanese D, McGuinness D, Nardi D, Patel-Schneider P (eds) The description logic handbook: theory, implementation, and applications, 2nd edn, chap 14. Cambridge University Press, Cambridge

  • Howe J (2006) The rise of crowdsourcing. Wired Mag 14:1–4

  • Howe J (2009) Crowdsourcing: why the power of the crowd is driving the future of business. Crown Publishing Group

  • Ipeirotis PG, Provost F, Wang J (2010) Quality management on Amazon Mechanical Turk. In: Proceedings of the ACM SIGKDD workshop on human computation, pp 64–67

  • Kittur A, Chi EH, Suh B (2008) Crowdsourcing user studies with Mechanical Turk. In: Proceedings of the SIGCHI conference on human factors in computing systems, pp 453–456

  • Kittur A, Khamkar S, André P, Kraut R (2012) CrowdWeaver: visually managing complex crowd work. In: Proceedings of the ACM 2012 conference on computer supported cooperative work, pp 1033–1036

  • Kittur A, Smus B, Khamkar S, Kraut RE (2011) CrowdForge: crowdsourcing complex work. In: Proceedings of the 24th annual ACM symposium on user interface software and technology, pp 43–52

  • Konstas I, Stathopoulos V, Jose JM (2009) On social networks and collaborative recommendation. In: Proceedings of the 32nd international ACM SIGIR conference on research and development in information retrieval, Boston, MA, USA, pp 195–202

  • Kulkarni AP, Can M, Hartmann B (2011) Turkomatic: automatic recursive task and workflow design for Mechanical Turk. In: Proceedings of the 2011 annual conference extended abstracts on human factors in computing systems, pp 2053–2058

  • Levine SS, Kurzban R (2006) Explaining clustering in social networks: towards an evolutionary theory of cascading benefits. Manag Decis Econ 27:173–187. doi:10.1002/mde.1291

  • Licklider JCR (1960) Man–computer symbiosis. IRE Trans Hum Factors Electron HFE-1:4–11

  • Little G, Chilton LB, Goldman M, Miller RC (2010) TurKit: human computation algorithms on Mechanical Turk. In: Proceedings of the 23rd annual ACM symposium on user interface software and technology, pp 57–66

  • Luo S, Xia H, Yoshida T, Wang Z (2009) Toward collective intelligence of online communities: a primitive conceptual model. J Syst Sci Syst Eng 18:203–221

  • Luz N, Silva N, Maio P, Novais P (2012) Ontology alignment through argumentation. In: 2012 AAAI spring symposium series

  • Luz N, Silva N, Novais P (2014) Generating human–computer micro-task workflows from domain ontologies. In: Kurosu M (ed) Human–computer interaction: theories, methods, and tools. Springer International Publishing, Berlin, pp 98–109

  • Ma H, Zhou D, Liu C et al (2011) Recommender systems with social regularization. In: Proceedings of the fourth ACM international conference on web search and data mining, pp 287–296

  • Mason W, Watts DJ (2010) Financial incentives and the performance of crowds. ACM SIGKDD Explor Newsl 11:100–108

  • Obrst L, Liu H, Wray R (2003) Ontologies for corporate web applications. AI Mag 24:49

  • Paolacci G, Chandler J, Ipeirotis P (2010) Running experiments on Amazon Mechanical Turk. Judgm Decis Mak 5:411–419

  • Porter J (2008) Designing for the social web. Peachpit Press, San Francisco

  • Quinn AJ, Bederson BB (2011) Human computation: a survey and taxonomy of a growing field. In: Proceedings of the 2011 annual conference on human factors in computing systems, pp 1403–1412

  • Sarasua C, Simperl E, Noy NF (2012) CrowdMap: crowdsourcing ontology alignment with microtasks. Springer, Berlin

  • Singh P, Lin T, Mueller ET et al (2002) Open mind common sense: knowledge acquisition from the general public. In: On the move to meaningful internet systems 2002: CoopIS, DOA, and ODBASE. Springer, Berlin, pp 1223–1237

  • Studer R, Benjamins VR, Fensel D (1998) Knowledge engineering: principles and methods. Data Knowl Eng 25:161–197

  • Surowiecki J (2004) The wisdom of crowds: why the many are smarter than the few and how collective wisdom shapes business, economies, societies and nations. Doubleday, Garden City, NY

  • Von Ahn L (2009) Human computation. In: 46th ACM/IEEE design automation conference, pp 418–419

  • Wasserman S, Faust K (1994) Social network analysis: methods and applications, 1st edn. Cambridge University Press, Cambridge

  • Willett W, Heer J, Agrawala M (2012) Strategies for crowdsourcing social data analysis. In: Proceedings of the 2012 ACM annual conference on human factors in computing systems, pp 227–236

  • Yuen M-C, King I, Leung K-S (2011a) A survey of crowdsourcing systems. In: IEEE international conference on privacy, security, risk and trust

  • Yuen M-C, King I, Leung K-S (2011b) Task matching in crowdsourcing. In: 2011 international conference on internet of things (iThings) and 4th international conference on cyber, physical and social computing (CPSCom), pp 409–412


Acknowledgments

This work is part-funded by the ERDF (European Regional Development Fund) through the COMPETE Programme (Operational Programme for Competitiveness) and by National Funds through the FCT (Fundação para a Ciência e a Tecnologia, the Portuguese Foundation for Science and Technology) within the Ph.D. Grant SFRH/BD/70302/2010, and by the projects AAL4ALL (QREN11495), World Search (QREN 13852) and FCOMP-01-0124-FEDER-028980 (PTDC/EEI-SII/1386/2012). The authors also thank Jane Boardman for her assistance in proofreading the document.

Author information


Corresponding author

Correspondence to Nuno Luz.


About this article


Cite this article

Luz, N., Silva, N. & Novais, P. A survey of task-oriented crowdsourcing. Artif Intell Rev 44, 187–213 (2015). https://doi.org/10.1007/s10462-014-9423-5

