Research article
DOI: 10.1145/3340531.3411881

Shapley Values and Meta-Explanations for Probabilistic Graphical Model Inference

Published: 19 October 2020

ABSTRACT

Probabilistic graphical models, such as Markov random fields (MRFs), exploit dependencies among random variables to model a rich family of joint probability distributions. Inference algorithms, such as belief propagation (BP), can effectively compute the marginal posteriors for decision making. Nonetheless, inference involves sophisticated probability calculations and is difficult for humans to interpret. Among existing explanation methods for MRFs, none is designed to fairly attribute an inference outcome to the elements of the MRF on which the inference takes place. Shapley values provide rigorous attributions but have so far not been studied on MRFs. We therefore define Shapley values for MRFs that capture both the probabilistic and the topological contributions of the variables on an MRF. We theoretically characterize the new definition with respect to independence, equal contribution, additivity, and submodularity. As brute-force computation of the Shapley values is challenging, we propose GraphShapley, an approximation algorithm that exploits the decomposability of Shapley values, the structure of MRFs, and the iterative nature of BP inference to speed up the computation. In practice, we propose meta-explanations that explain the Shapley values themselves and make them more accessible and trustworthy to human users. On four synthetic and nine real-world MRFs, we demonstrate that GraphShapley generates sensible and practical explanations.
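The paper's GraphShapley algorithm is not reproduced on this page, so the sketch below is only a rough illustration of the quantity it approximates: it estimates Shapley values of evidence nodes on a toy pairwise MRF by Monte Carlo permutation sampling, with sum-product BP as the coalition value function. The star graph, the potentials, the convention that players outside a coalition are simply left unobserved, and all parameter choices are illustrative assumptions rather than the paper's definition (which also accounts for topological contributions).

```python
import random
import numpy as np

# Toy star-shaped MRF (an illustrative assumption, not a dataset from the paper):
# target node 0 with three observed neighbor nodes.
EDGES = [(0, 1), (0, 2), (0, 3)]
# Symmetric pairwise potential: adjacent nodes prefer to take the same label.
COMPAT = np.array([[0.8, 0.2],
                   [0.2, 0.8]])
OBSERVED = {1: 0, 2: 0, 3: 1}  # evidence labels of the neighbor nodes


def unary(node, evidence):
    """Near-deterministic potential for observed nodes, uniform otherwise."""
    if node in evidence:
        phi = np.full(2, 1e-3)
        phi[evidence[node]] = 1.0
        return phi
    return np.ones(2)


def bp_marginal(target, evidence, n_iters=10):
    """Marginal of `target` via sum-product BP; exact here since the graph is a tree."""
    directed = EDGES + [(v, u) for (u, v) in EDGES]
    msgs = {e: np.ones(2) for e in directed}
    for _ in range(n_iters):
        new = {}
        for (u, v) in directed:
            # Product of messages arriving at u from all neighbors except v.
            incoming = np.ones(2)
            for (w, x) in directed:
                if x == u and w != v:
                    incoming = incoming * msgs[(w, x)]
            m = COMPAT.T @ (unary(u, evidence) * incoming)
            new[(u, v)] = m / m.sum()
        msgs = new
    belief = unary(target, evidence)
    for (w, x) in directed:
        if x == target:
            belief = belief * msgs[(w, x)]
    return belief / belief.sum()


def shapley_values(target, players, n_perms=500, seed=0):
    """Permutation-sampling estimate of Shapley values for v(S) = P(target=1 | S)."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_perms):
        order = list(players)
        rng.shuffle(order)
        coalition = {}
        prev = bp_marginal(target, coalition)[1]  # empty-coalition baseline
        for p in order:
            coalition[p] = OBSERVED[p]
            cur = bp_marginal(target, coalition)[1]
            phi[p] += cur - prev  # marginal contribution of p in this ordering
            prev = cur
    return {p: v / n_perms for p, v in phi.items()}


if __name__ == "__main__":
    # Nodes 1 and 2 (label 0) should receive negative attributions toward
    # P(target = 1); node 3 (label 1) should receive a positive one.
    print(shapley_values(target=0, players=[1, 2, 3]))
```

Note that this brute-force baseline pays for one full BP run per coalition evaluated; avoiding exactly that cost, by exploiting the decomposability of Shapley values, the MRF structure, and the iterative nature of BP, is the point of the paper's GraphShapley algorithm.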


Supplemental Material

3340531.3411881.mp4 (mp4, 110.9 MB)


Published in

CIKM '20: Proceedings of the 29th ACM International Conference on Information & Knowledge Management
October 2020, 3619 pages
ISBN: 9781450368599
DOI: 10.1145/3340531
Copyright © 2020 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates: CIKM has an overall acceptance rate of 1,861 of 8,427 submissions (22%).

