Research article · CSCW Conference Proceedings
DOI: 10.1145/2145204.2145355

Shepherding the crowd yields better work

Published: 11 February 2012

ABSTRACT

Micro-task platforms provide massively parallel, on-demand labor. However, it can be difficult to reliably achieve high-quality work because online workers may behave irresponsibly, misunderstand the task, or lack necessary skills. This paper investigates whether timely, task-specific feedback helps crowd workers learn, persevere, and produce better results. We investigate this question through Shepherd, a feedback system for crowdsourced work. In a between-subjects study with three conditions, crowd workers wrote consumer reviews for six products they own. Participants in the None condition received no immediate feedback, consistent with most current crowdsourcing practices. Participants in the Self-assessment condition judged their own work. Participants in the External assessment condition received expert feedback. Self-assessment alone yielded better overall work than the None condition and helped workers improve over time. External assessment also yielded these benefits. Participants who received external assessment also revised their work more. We conclude by discussing interaction and infrastructure approaches for integrating real-time assessment into online work.
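The between-subjects design described above assigns each worker to exactly one of the three feedback conditions. As an illustrative sketch only (the function, worker IDs, and condition labels here are hypothetical, not the authors' actual study infrastructure), a balanced random assignment might look like:

```python
import random

# The three feedback conditions from the study design.
CONDITIONS = ["none", "self-assessment", "external-assessment"]

def assign_conditions(worker_ids, seed=0):
    """Randomly assign each worker to one condition, keeping
    group sizes balanced (a common between-subjects setup)."""
    rng = random.Random(seed)
    workers = list(worker_ids)
    rng.shuffle(workers)
    # Deal shuffled workers round-robin across conditions.
    return {w: CONDITIONS[i % len(CONDITIONS)] for i, w in enumerate(workers)}

workers = [f"w{i}" for i in range(9)]
groups = assign_conditions(workers)
# With 9 workers and 3 conditions, each condition receives 3 workers.
```

The shuffle-then-round-robin step guarantees equal group sizes while still randomizing which worker lands in which condition.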


Published in

CSCW '12: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work
February 2012, 1460 pages
ISBN: 9781450310864
DOI: 10.1145/2145204
Copyright © 2012 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

CSCW '12 paper acceptance rate: 164 of 415 submissions (40%). Overall acceptance rate: 2,235 of 8,521 submissions (26%).
