Authors:
Hajar Hakkoum ¹, Ibtissam Abnane ¹ and Ali Idri ²,¹
Affiliations:
¹ Software Project Management Research Team, ENSIAS, Mohammed V University, Rabat, Morocco
² MSDA, Mohammed VI Polytechnic University, Ben Guerir, Morocco
Keyword(s):
Explainability, XAI, Medicine, Artificial Intelligence, Machine Learning, Systematic Review.
Abstract:
Machine learning (ML) has been growing rapidly, mainly owing to the availability of historical datasets and advanced computational power. This growth still faces a set of challenges, such as the interpretability of ML models. In the medical field in particular, interpretability is a real bottleneck to the adoption of ML by physicians. This review was carried out according to the well-known systematic mapping process to analyse the literature on interpretability techniques applied in the medical field with regard to several aspects. A total of 179 articles (1994-2020) were selected from six digital libraries. The results showed that the number of studies dealing with interpretability increased over the years, with a dominance of solution proposals and experiment-based empirical studies. Additionally, artificial neural networks were the most widely used ML black-box technique investigated for interpretability.