DOI: 10.1145/2783258.2783311
research-article

Certifying and Removing Disparate Impact

Published: 10 August 2015

ABSTRACT

What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process.
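In U.S. practice, disparate impact is commonly operationalized by the EEOC's "four-fifths" (80%) rule: the selection rate for the protected group should be at least 80% of the rate for the most favored group. A minimal sketch of that check follows; the function names and toy numbers are illustrative, not taken from the paper.

```python
# Sketch of the EEOC "four-fifths" (80%) rule for disparate impact.
# Function names and example figures are illustrative only.

def disparate_impact_ratio(selected_protected, total_protected,
                           selected_other, total_other):
    """Ratio of the protected group's selection rate to the other group's."""
    rate_protected = selected_protected / total_protected
    rate_other = selected_other / total_other
    return rate_protected / rate_other

def has_disparate_impact(ratio, threshold=0.8):
    """Flag disparate impact when the ratio falls below the 80% threshold."""
    return ratio < threshold

# Example: 30 of 100 protected applicants selected vs. 50 of 100 others.
ratio = disparate_impact_ratio(30, 100, 50, 100)
print(round(ratio, 2), has_disparate_impact(ratio))  # 0.6 True
```

Note that the rule compares selection *rates*, not raw counts, so group sizes do not have to be equal.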

When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the process, we propose making inferences based on the data it uses.

We present four contributions. First, we link disparate impact to a measure of classification accuracy that, while known, has received relatively little attention. Second, we propose a test for disparate impact based on how well the protected class can be predicted from the other attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and of our approach to both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.
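The predictability test described above can be sketched as follows: train any classifier to predict the protected attribute from the remaining attributes and measure its balanced error rate (BER); a low BER means the other attributes encode the protected class, so the data can be used to produce disparate impact. The threshold-based "classifier" and toy data below are placeholders, not the paper's exact certification procedure.

```python
# Sketch of the predictability test: if the protected attribute can be
# predicted from the other attributes with low balanced error rate (BER),
# the data admits disparate impact. Illustrative only.

def balanced_error_rate(predicted, protected):
    """BER = average of the per-group error rates (protected values 0/1)."""
    groups = {0: [0, 0], 1: [0, 0]}  # protected value -> [errors, count]
    for p, z in zip(predicted, protected):
        groups[z][0] += int(p != z)
        groups[z][1] += 1
    return 0.5 * sum(err / cnt for err, cnt in groups.values())

# Toy data: a single feature perfectly correlated with the protected
# attribute; the "classifier" is a simple threshold at 5.
features  = [1, 2, 3, 4, 6, 7, 8, 9]
protected = [0, 0, 0, 0, 1, 1, 1, 1]
predicted = [int(x > 5) for x in features]

ber = balanced_error_rate(predicted, protected)
print(ber)  # 0.0: the protected attribute is perfectly predictable here,
            # so this data could be used to discriminate
```

A BER near 0.5 (random guessing) would instead suggest the remaining attributes carry little information about the protected class.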

Supplemental Material

p259.mp4 (mp4, 106.8 MB)


Published in

KDD '15: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
August 2015
2378 pages
ISBN: 9781450336642
DOI: 10.1145/2783258

Copyright © 2015 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

          Publisher

          Association for Computing Machinery

          New York, NY, United States


Acceptance Rates

KDD '15 paper acceptance rate: 160 of 819 submissions, 20%. Overall acceptance rate: 1,133 of 8,635 submissions, 13%.
