
Explainable artificial intelligence: a comprehensive review


Abstract

Thanks to exponential growth in computing power and the availability of vast amounts of data, artificial intelligence (AI) has witnessed remarkable developments in recent years, enabling it to be adopted ubiquitously in our daily lives. Although AI-powered systems have brought competitive advantages, their black-box nature deprives them of transparency and prevents them from explaining their decisions. This issue has motivated the introduction of explainable artificial intelligence (XAI), which promotes AI algorithms that can expose their internal processes and explain how they reach decisions. XAI research has grown significantly in recent years, but a unified and comprehensive review of the latest progress is still lacking. This review aims to bridge that gap by examining the critical perspectives of the rapidly growing body of XAI research. After giving readers a solid XAI background, we analyze and review various XAI methods, grouped into (i) pre-modeling explainability, (ii) interpretable models, and (iii) post-modeling explainability. We also pay particular attention to current methods dedicated to interpreting and analyzing deep learning models. In addition, we systematically discuss various XAI challenges, such as the trade-off between performance and explainability, evaluation methods, security, and policy. Finally, we present the standard approaches used to address these challenges.
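To make the taxonomy above concrete, a minimal sketch of the post-modeling (post-hoc) category follows. This example is not from the paper: it is an illustrative, model-agnostic perturbation attribution in the spirit of the occlusion- and LIME/SHAP-style methods the review surveys, and the function name `occlusion_importance` and the synthetic data are assumptions introduced here. Each feature of one input is replaced with a neutral baseline value, and the drop in the model's predicted probability is reported as that feature's importance.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def occlusion_importance(predict_proba, x, baseline, target_class):
    """Score each feature by the drop in predicted probability when it
    is replaced by a neutral baseline value (a post-hoc explanation)."""
    base = predict_proba(x.reshape(1, -1))[0, target_class]
    scores = np.empty(x.shape[0])
    for j in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[j] = baseline[j]            # occlude feature j
        p = predict_proba(x_pert.reshape(1, -1))[0, target_class]
        scores[j] = base - p               # confidence drop = importance
    return scores

# Train a black-box model on synthetic data, then explain one prediction.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
x = X[0]
pred = int(model.predict(x.reshape(1, -1))[0])
scores = occlusion_importance(model.predict_proba, x, X.mean(axis=0), pred)
for j, s in enumerate(scores):
    print(f"feature {j}: importance {s:+.3f}")
```

By contrast, pre-modeling explainability would operate on the data itself (e.g., summarization or standardization), and an interpretable model would replace the black box with an inherently transparent one, such as a shallow decision tree.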





Author information


Correspondence to Dang Minh or Tan N. Nguyen.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Minh, D., Wang, H.X., Li, Y.F. et al. Explainable artificial intelligence: a comprehensive review. Artif Intell Rev 55, 3503–3568 (2022). https://doi.org/10.1007/s10462-021-10088-y

