DOI: 10.1145/3308560.3317080

Discovering User Bias in Ordinal Voting Systems

Published: 13 May 2019

ABSTRACT

Crowdsourcing systems increasingly rely on users to provide subjective ground truth for intelligent systems, e.g., ratings, aspects of quality, and perspectives on how expensive or lively a place feels. We focus on the ubiquitous practice of online ordinal voting (e.g., 1-5, or 1 star to 4 stars) on some aspect of an entity, used to extract a relative truth measured by a selected aggregate such as the vote plurality or mean. We argue that this methodology can produce aggregates that yield little information to the end user. In particular, ordinal user ratings often converge to an indistinguishable value, as demonstrated by the trend in certain cities for the majority of restaurants to carry a 4-star rating. Similarly, the rating of an establishment can be significantly affected by a few users [10]. User bias in voting is not spam, but rather a preference that can be harnessed to provide more information to users. We explore notions of both global skew and individual user bias and, leveraging these concepts, suggest explicit models for better personalization and more informative ratings.
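
Since the abstract contrasts aggregation by vote plurality with the mean and frames user bias as a preference signal rather than spam, a minimal sketch of these ideas may help. This is our own illustration, not the paper's model: it assumes (user, item, rating) triples on an ordinal scale, and all names and values are hypothetical.

    # Illustrative sketch (not the paper's model): toy (user, item, rating)
    # votes on a 1-5 ordinal scale; all names and values are hypothetical.
    from collections import Counter, defaultdict
    from statistics import mean

    votes = [
        ("u1", "cafe", 5), ("u1", "diner", 5), ("u1", "bar", 5),
        ("u2", "cafe", 4), ("u2", "diner", 4),
        ("u3", "cafe", 4), ("u3", "diner", 3),
    ]

    def mean_rating(item):
        # Aggregate by the arithmetic mean of the item's votes.
        return mean(r for _, i, r in votes if i == item)

    def plurality_rating(item):
        # Aggregate by the most common vote value for the item.
        return Counter(r for _, i, r in votes if i == item).most_common(1)[0][0]

    # Model each user's bias as their average deviation from the global mean,
    # treating it as a preference to exploit rather than spam to discard.
    global_mean = mean(r for _, _, r in votes)
    ratings_by_user = defaultdict(list)
    for u, _, r in votes:
        ratings_by_user[u].append(r)
    user_bias = {u: mean(rs) - global_mean for u, rs in ratings_by_user.items()}

    def debiased_mean(item):
        # Subtract each voter's bias before averaging.
        return mean(r - user_bias[u] for u, i, r in votes if i == item)

    print(mean_rating("cafe"), plurality_rating("cafe"))   # ~4.33 vs. 4
    print(round(debiased_mean("cafe"), 2))                 # ~4.45

Here the cafe's mean (~4.33) and plurality (4) already disagree, and the bias-adjusted estimate rises to ~4.45 because the habitually harsh rater u3 scored it well; the explicit models the paper proposes go well beyond this simple offset.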

References

  1. Nicola Barbieri. 2011. Regularized Gibbs Sampling for User Profiling with Soft Constraints. In 2011 International Conference on Advances in Social Networks Analysis and Mining, 129–136.
  2. Jiang Bian, Yandong Liu, Ding Zhou, Eugene Agichtein, and Hongyuan Zha. 2009. Learning to recognize reliable users and content in social media with coupled mutual reinforcement. In WWW.
  3. Bee-Chung Chen, Anirban Dasgupta, Xuanhui Wang, and Jie Yang. 2012. Vote Calibration in Community Question-answering Systems. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '12). ACM, New York, NY, USA, 781–790.
  4. Irene Chen, Fredrik D. Johansson, and David Sontag. 2018. Why Is My Classifier Discriminatory? arXiv e-prints, arXiv:1805.12002 (May 2018). arXiv:stat.ML/1805.12002
  5. Thomas Hofmann. 2003. Collaborative Filtering via Gaussian Probabilistic Latent Semantic Analysis. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '03). ACM, New York, NY, USA, 259–266.
  6. Yehuda Koren and Joseph Sill. 2013. Collaborative Filtering on Ordinal User Feedback. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI '13). AAAI Press, 3022–3026. http://dl.acm.org/citation.cfm?id=2540128.2540570
  7. Gary P. Latham. 2012. Work Motivation: History, Theory, Research, and Practice. Sage.
  8. R. Likert. 1932. A Technique for the Measurement of Attitudes. Archives of Psychology 140 (1932), 1–55.
  9. Benjamin M. Marlin. 2003. Modeling User Rating Profiles For Collaborative Filtering. In NIPS.
  10. David Owen. 2018. Customer Satisfaction at the Push of a Button: HappyOrNot terminals look simple, but the information they gather is revelatory. The New Yorker (Feb. 2018). https://www.newyorker.com/magazine/2018/02/05/customer-satisfaction-at-the-push-of-a-button
  11. Lahari Poddar, Wynne Hsu, and Mong-Li Lee. 2017. Quantifying Aspect Bias in Ordinal Ratings using a Bayesian Approach. In IJCAI 2017.
  12. Drazen Prelec, Hyunjune Seung, and John McCoy. 2017. A solution to the single-question crowd wisdom problem. Nature 541 (Jan. 2017), 532–535.
  13. Vikas C. Raykar and Shipeng Yu. 2012. Eliminating Spammers and Ranking Annotators for Crowdsourced Labeling Tasks. J. Mach. Learn. Res. 13 (March 2012), 491–518.
  14. Lu Ren, Lan Du, Lawrence Carin, and David Dunson. 2011. Logistic Stick-Breaking Process. J. Mach. Learn. Res. 12 (Feb. 2011), 203–239. http://dl.acm.org/citation.cfm?id=1953048.1953055
  15. David H. Stern, Ralf Herbrich, and Thore Graepel. 2009. Matchbox: Large Scale Online Bayesian Recommendations. In WWW, Juan Quemada, Gonzalo León, Yoëlle S. Maarek, and Wolfgang Nejdl (Eds.). ACM, 111–120. http://dblp.uni-trier.de/db/conf/www/www2009.html#SternHG09
  16. Hao Wang and Martin Ester. 2014. A Sentiment-aligned Topic Model for Product Aspect Rating Prediction. In EMNLP.
  17. Pu Wang, Carlotta Domeniconi, and Kathryn Laskey. 2009. Latent Dirichlet Bayesian Co-Clustering. 522–537.
  18. Xiaochi Wei, Heyan Huang, Chin-Yew Lin, Xin Xin, Xianling Mao, and Shangguang Wang. 2015. Re-Ranking Voting-Based Answers by Discarding User Behavior Biases. In IJCAI.
  19. Qianli Xing, Yiqun Liu, Jian-Yun Nie, Min Zhang, Shaoping Ma, and Kuo Zhang. 2013. Incorporating user preferences into click models. In CIKM.

Published in

WWW '19: Companion Proceedings of The 2019 World Wide Web Conference
May 2019, 1331 pages
ISBN: 9781450366755
DOI: 10.1145/3308560

        Copyright © 2019 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States



        Qualifiers

        • research-article
        • Research
        • Refereed limited

        Acceptance Rates

Overall Acceptance Rate: 1,899 of 8,196 submissions, 23%
