Abstract
Post-hoc explainability methods aim to clarify the predictions of black-box machine learning models. However, it remains largely unclear how well users comprehend the provided explanations and whether these explanations increase users’ ability to predict the model behavior. We approach this question by conducting a user study that evaluates comprehensibility and predictability in two widely used tools: LIME and SHAP. Moreover, we investigate the effect of counterfactual explanations and misclassifications on users’ ability to understand and predict the model behavior. We find that the comprehensibility of SHAP is significantly reduced when explanations are provided for samples near a model’s decision boundary. Furthermore, we find that counterfactual explanations and misclassifications can significantly increase users’ understanding of how a machine learning model makes decisions. Based on our findings, we also derive design recommendations for future post-hoc explainability methods with increased comprehensibility and predictability.
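To make the two tools concrete, the following minimal sketch (not the study’s actual code) shows how LIME and SHAP are typically asked for a local, post-hoc explanation of a single prediction. The dataset and classifier are illustrative assumptions only; the example relies on the public APIs of the scikit-learn, lime, and shap packages.

```python
# Minimal sketch: local explanations with LIME and SHAP.
# Dataset and model are assumptions for illustration, not the study's setup.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# LIME: fit a sparse linear surrogate around one test sample and
# report the locally most influential features.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # [(feature condition, weight), ...]

# SHAP: Shapley-value feature attributions for the same sample,
# computed efficiently for tree ensembles via TreeExplainer.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:1])
print(shap_values)  # per-class attribution of each feature
```

Both outputs attribute the model’s prediction for one sample to individual input features; the user study compares how well people comprehend such attributions and predict the model from them.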
Acknowledgment
We thank all the volunteers and the reviewers who provided helpful comments on previous versions of this document. We especially thank our colleagues Clemens Heistracher and Denis Katic for their constructive feedback on the structure of this work, and Dr. Philipp Wintersberger and Dr. Jasmin Lampert for their constructive feedback and insights. We also thank the Austrian Research Promotion Agency (FFG) for funding this work as part of the industrial project DeepRUL (project ID 871357), and acknowledge funding from the European Union’s H2020 research and innovation program as part of the STARLIGHT project (GA No 101021797).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Jalali, A., Haslhofer, B., Kriglstein, S., Rauber, A. (2023). Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis. In: Arai, K. (eds) Intelligent Computing. SAI 2023. Lecture Notes in Networks and Systems, vol 711. Springer, Cham. https://doi.org/10.1007/978-3-031-37717-4_46
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-37716-7
Online ISBN: 978-3-031-37717-4
eBook Packages: Intelligent Technologies and Robotics (R0)