
Explainability-Enhanced Neural Network for Thoracic Diagnosis Improvement

  • Conference paper

Computer Analysis of Images and Patterns (CAIP 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14184)

Abstract

Thoracic problems are medical conditions affecting the area behind the sternum, including the heart, lungs, trachea, bronchi, esophagus, and other structures of the respiratory and cardiovascular systems. They can be caused by a variety of conditions, such as respiratory infections, lung disease, heart disease, autoimmune diseases, or anxiety disorders, and vary in symptoms and severity. In this paper, we introduce a supervised neural network model trained to predict these problems and, furthermore, to increase its accuracy by using explainability methods. We use the attention mechanism so that more informative features receive higher weights after training on the dataset. The accuracy of the trained model exceeded 80%. To analyze and explain each feature, we use Local Interpretable Model-Agnostic Explanations (LIME), a post-hoc, model-agnostic technique. Our experiments showed that by using the explainability results as a feedback signal, we were able to increase the accuracy of the base model by more than 20% on a small medical dataset.
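The core idea behind LIME, as used in the abstract, is to approximate a black-box classifier locally with an interpretable linear surrogate: perturb an instance, query the model on the perturbations, and fit a proximity-weighted linear model whose coefficients serve as per-feature importances. The following is a minimal illustrative sketch of that idea (not the paper's implementation); the function names, the Gaussian perturbation scale, and the kernel width are all illustrative assumptions:

```python
import numpy as np

def lime_explain(predict_fn, x, n_samples=500, kernel_width=0.75, seed=0):
    """Approximate predict_fn near x with a proximity-weighted linear model.

    Returns one coefficient per feature: the local importance of that feature.
    """
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance of interest with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed samples.
    y = predict_fn(Z)
    # 3. Weight each sample by its proximity to x (RBF kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # 4. Fit a weighted least-squares linear surrogate (with intercept).
    Zb = np.hstack([Z, np.ones((n_samples, 1))])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Zb * W, y * W[:, 0], rcond=None)
    return coef[:-1]  # drop the intercept; keep per-feature importances

# Toy black box: depends only on feature 0, ignores feature 1.
f = lambda Z: 3.0 * Z[:, 0] + 0.0 * Z[:, 1]
x0 = np.array([1.0, 2.0])
weights = lime_explain(f, x0)
```

For this toy model the surrogate assigns a large weight to feature 0 and a near-zero weight to feature 1, which is the kind of per-feature signal the paper reports feeding back into training.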



Author information

Corresponding author

Correspondence to Flavia Costi.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Costi, F., Onchis, D.M., Istin, C., Cozma, G.V. (2023). Explainability-Enhanced Neural Network for Thoracic Diagnosis Improvement. In: Tsapatsoulis, N., et al. Computer Analysis of Images and Patterns. CAIP 2023. Lecture Notes in Computer Science, vol 14184. Springer, Cham. https://doi.org/10.1007/978-3-031-44237-7_4

  • DOI: https://doi.org/10.1007/978-3-031-44237-7_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44236-0

  • Online ISBN: 978-3-031-44237-7

  • eBook Packages: Computer Science, Computer Science (R0)
