Directive Explanations for Actionable Explainability in Machine Learning Applications

Abstract

In this article, we show that explanations of decisions made by machine learning systems can be improved by explaining not only why a decision was made but also how an individual could obtain their desired outcome. We formally define the concept of directive explanations (those that offer specific actions an individual could take to achieve their desired outcome), introduce two forms of directive explanation (directive-specific and directive-generic), and describe how these can be generated computationally. We investigate people’s preferences for, and perceptions of, directive explanations through two online studies, one quantitative and one qualitative, each covering two domains (credit scoring and employee satisfaction). We find a significant preference for both forms of directive explanation over non-directive counterfactual explanations. However, we also find that these preferences vary with individual and social factors. We conclude that deciding what type of explanation to provide requires knowledge of the recipient and of the wider context. This reinforces the need for a human-centered, context-specific approach to explainable AI.


Published in

ACM Transactions on Interactive Intelligent Systems, Volume 13, Issue 4 (December 2023), 388 pages
ISSN: 2160-6455
EISSN: 2160-6463
DOI: 10.1145/3636547


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 8 December 2023
• Online AM: 12 January 2023
• Accepted: 17 December 2022
• Revised: 13 September 2022
• Received: 21 February 2022
