Model Fooling Threats Against Medical Imaging

Artificial Intelligence and Cybersecurity

Abstract

Automatic medical image diagnosis tools are vulnerable to modern model fooling technologies. Because medical imaging is a way of determining the health status of a person, the threats could have grave consequences. These threats not only endanger the individual but also undermine patients’ trust in modern diagnosis methods and in the healthcare sector as a whole. As recent diagnosis tools are based on artificial intelligence and machine learning, they can be exploited using attack technologies such as image perturbations, adversarial patches, adversarial images, one-pixel attacks, and training process tampering. These methods take advantage of the non-robust nature of many machine learning models created to solve medical imaging classification problems, such as determining the probability of cancerous cell growth in tissue samples. In this study, we review the current state of these attacks and discuss their effect on medical imaging. By comparing the known attack methods and their demonstrated use against medical imaging, we conclude with an evaluation of their potential for future exploitation in this domain.

This chapter is an extended version of a paper presented at the Second International Scientific Conference “Digital Transformation, Cyber Security and Resilience” (DIGILIENCE 2020) and published in the special conference issue of Information & Security: An International Journal [48].
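
To make the threat concrete, the following minimal sketch illustrates the idea behind a one-pixel attack in the spirit of Su et al. [51]: a differential evolution search looks for a single pixel position and colour that shifts a classifier’s output score. The predict_proba function and the input image below are hypothetical placeholders chosen for illustration only, not the models or data used in the surveyed studies.

import numpy as np
from scipy.optimize import differential_evolution

H, W = 32, 32  # assumed image size for this illustration

def predict_proba(image):
    # Hypothetical stand-in for a trained diagnosis model; returns the
    # probability of the "malignant" class for a single RGB image.
    return float(np.clip(image.mean(), 0.0, 1.0))

image = np.random.rand(H, W, 3)   # placeholder tissue-sample image
baseline = predict_proba(image)

def perturb(x, img):
    # Apply a single-pixel change encoded as (row, col, red, green, blue).
    out = img.copy()
    out[int(x[0]), int(x[1])] = x[2:5]
    return out

def objective(x):
    # Differential evolution minimises this score; pushing the malignant
    # probability past the decision threshold flips the predicted label.
    return predict_proba(perturb(x, image))

bounds = [(0, H - 1), (0, W - 1), (0, 1), (0, 1), (0, 1)]
result = differential_evolution(objective, bounds, maxiter=50, popsize=20, seed=0)

adversarial = perturb(result.x, image)
print(f"score before: {baseline:.3f}, after: {predict_proba(adversarial):.3f}")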

References

  1. Afifi, M., Brown, M.S.: What else can fool deep learning? Addressing color constancy errors on deep neural network performance. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 243–252 (2019)

  2. Akhtar, N., Mian, A.: Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access 6, 14410–14430 (2018). https://doi.org/10.1109/ACCESS.2018.2807385

  3. Al-Sharify, Z.T., Al-Sharify, T.A., Al-Sharify, N.T., Naser, H.Y.: A critical review on medical imaging techniques (CT and PET scans) in the medical field. IOP Conf. Ser. Mater. Sci. Eng. 870, 012043 (2020). https://doi.org/10.1088/1757-899x/870/1/012043

  4. Apostolidis, K.D., Papakostas, G.A.: A survey on adversarial deep learning robustness in medical image analysis. Electronics 10(17), 2132 (2021). https://doi.org/10.3390/electronics10172132

  5. Armanious, K., Mecky, Y., Gatidis, S., Yang, B.: Adversarial inpainting of medical image modalities. In: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3267–3271 (2019). https://doi.org/10.1109/ICASSP.2019.8682677

  6. Athalye, A., Engstrom, L., Ilyas, A., Kwok, K.: Synthesizing robust adversarial examples. In: Proceedings of the 35th International Conference on Machine Learning, vol. 80, pp. 284–293. PMLR, Stockholm (2018)

  7. Brown, T.B., Mané, D., Roy, A., Abadi, M., Gilmer, J.: Adversarial patch. arXiv preprint arXiv:1712.09665v2 (2018)

  8. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57 (2017). https://doi.org/10.1109/SP.2017.49

  9. Carter, S.M., Rogers, W., Win, K.T., Frazer, H., Richards, B., Houssami, N.: The ethical, legal and social implications of using artificial intelligence systems in breast cancer care. Breast 49, 25–32 (2020). https://doi.org/10.1016/j.breast.2019.10.001

  10. Chuquicusma, M.J.M., Hussein, S., Burt, J., Bagci, U.: How to fool radiologists with generative adversarial networks? A visual Turing test for lung cancer diagnosis. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 240–244 (2018). https://doi.org/10.1109/ISBI.2018.8363564

  11. Cristovao, F., Cascianelli, S., Canakoglu, A., Carman, M., Nanni, L., Pinoli, P., Masseroli, M.: Investigating deep learning based breast cancer subtyping using pan-cancer and multi-omic data. IEEE/ACM Trans. Comput. Biol. Bioinform. 1–1 (2020). https://doi.org/10.1109/TCBB.2020.3042309

  12. Deng, Y., Zhang, C., Wang, X.: A multi-objective examples generation approach to fool the deep neural networks in the black-box scenario. In: 2019 IEEE Fourth International Conference on Data Science in Cyberspace (DSC), pp. 92–99 (2019). https://doi.org/10.1109/DSC.2019.00022

  13. Doi, K.: Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Comput. Med. Imaging Graph. 31(4–5), 198–211 (2007)

  14. Finlayson, S.G., Bowers, J.D., Ito, J., Zittrain, J.L., Beam, A.L., Kohane, I.S.: Adversarial attacks on medical machine learning. Science 363(6433), 1287–1289 (2019)

  15. Finlayson, S.G., Chung, H.W., Kohane, I.S., Beam, A.L.: Adversarial attacks against medical deep learning systems. arXiv preprint arXiv:1804.05296v3 (2019)

  16. Gilmer, J., Metz, L., Faghri, F., Schoenholz, S.S., Raghu, M., Wattenberg, M., Goodfellow, I., Brain, G.: The relationship between high-dimensional geometry and adversarial examples. arXiv preprint arXiv:1801.02774 (2018)

  17. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge (2016). http://www.deeplearningbook.org

  18. Gu, T., Liu, K., Dolan-Gavitt, B., Garg, S.: BadNets: evaluating backdooring attacks on deep neural networks. IEEE Access 7, 47230–47244 (2019). https://doi.org/10.1109/ACCESS.2019.2909068

  19. Gu, Z., Hu, W., Zhang, C., Lu, H., Yin, L., Wang, L.: Gradient shielding: towards understanding vulnerability of deep neural networks. IEEE Trans. Netw. Sci. Eng. 1–1 (2020). https://doi.org/10.1109/TNSE.2020.2996738

  20. INTERPOL, The International Criminal Police Organization: INTERPOL report shows alarming rate of cyberattacks during COVID-19 (2020). https://www.interpol.int/en/News-and-Events/News/2020/INTERPOL-report-shows-alarming-rate-of-cyberattacks-during-COVID-19. Accessed 4 Dec 2020

  21. Ker, J., Wang, L., Rao, J., Lim, T.: Deep learning applications in medical image analysis. IEEE Access 6, 9375–9389 (2018). https://doi.org/10.1109/ACCESS.2017.2788044

  22. Korpihalkola, J., Sipola, T., Kokkonen, T.: Color-optimized one-pixel attack against digital pathology images. In: Balandin, S., Koucheryavy, Y., Tyutina, T. (eds.) 2021 29th Conference of Open Innovations Association (FRUCT), vol. 29, pp. 206–213. IEEE (2021). https://doi.org/10.23919/FRUCT52173.2021.9435562

  23. Korpihalkola, J., Sipola, T., Puuska, S., Kokkonen, T.: One-pixel attack deceives computer-assisted diagnosis of cancer. In: Proceedings of the 4th International Conference on Signal Processing and Machine Learning (SPML 2021), August 18–20, 2021, Beijing, China. ACM, New York (2021). https://doi.org/10.1145/3483207.3483224

  24. Kügler, D., Distergoft, A., Kuijper, A., Mukhopadhyay, A.: Exploring adversarial examples. In: Stoyanov, D., Taylor, Z., Kia, S.M., Oguz, I., Reyes, M., Martel, A., Maier-Hein, L., Marquand, A.F., Duchesnay, E., Löfstedt, T., Landman, B., Cardoso, M.J., Silva, C.A., Pereira, S., Meier, R. (eds.) Understanding and Interpreting Machine Learning in Medical Image Computing Applications. Lecture Notes in Computer Science, vol. 11038, pp. 70–78. Springer International Publishing, Cham (2018). https://doi.org/10.1007/978-3-030-02628-8_8

  25. Latif, J., Xiao, C., Imran, A., Tu, S.: Medical imaging using machine learning and deep learning algorithms: a review. In: 2019 2nd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), pp. 1–5 (2019). https://doi.org/10.1109/ICOMET.2019.8673502

  26. Ma, X., Niu, Y., Gu, L., Wang, Y., Zhao, Y., Bailey, J., Lu, F.: Understanding adversarial attacks on deep learning based medical image analysis systems. Pattern Recogn. 110, 107332 (2020). https://doi.org/10.1016/j.patcog.2020.107332

  27. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083v4 (2019)

  28. Mihajlović, M., Popović, N.: Fooling a neural network with common adversarial noise. In: 2018 19th IEEE Mediterranean Electrotechnical Conference (MELECON), pp. 293–296 (2018). https://doi.org/10.1109/MELCON.2018.8379110

  29. Mikołajczyk, A., Grochowski, M.: Data augmentation for improving deep learning in image classification problem. In: 2018 International Interdisciplinary Ph.D Workshop (IIPhDW), pp. 117–122 (2018). https://doi.org/10.1109/IIPHDW.2018.8388338

  30. Moosavi-Dezfooli, S., Fawzi, A., Fawzi, O., Frossard, P.: Universal adversarial perturbations. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 86–94. IEEE Computer Society, Los Alamitos (2017). https://doi.org/10.1109/CVPR.2017.17

  31. Moosavi-Dezfooli, S., Fawzi, A., Frossard, P.: DeepFool: a simple and accurate method to fool deep neural networks. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574–2582 (2016). https://doi.org/10.1109/CVPR.2016.282

  32. Munn, Z., Peters, M.D.J., Stern, C., Tufanaru, C., McArthur, A., Aromataris, E.: Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med. Res. Methodol. 18, 143 (2018). https://doi.org/10.1186/s12874-018-0611-x

  33. Murugesan, M., Sukanesh, R.: Automated detection of brain tumor in EEG signals using artificial neural networks. In: 2009 International Conference on Advances in Computing, Control, and Telecommunication Technologies, pp. 284–288 (2009). https://doi.org/10.1109/ACT.2009.77

  34. Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 427–436 (2015). https://doi.org/10.1109/CVPR.2015.7298640

  35. Paschali, M., Conjeti, S., Navarro, F., Navab, N.: Generalizability vs. robustness: adversarial examples for medical imaging. arXiv preprint arXiv:1804.00504 (2018)

  36. Paul, R., Schabath, M., Gillies, R., Hall, L., Goldgof, D.: Mitigating adversarial attacks on medical image understanding systems. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), pp. 1517–1521 (2020). https://doi.org/10.1109/ISBI45749.2020.9098740

  37. Poonguzhali, N., Dharani, V., Nivedha, R., Ruby, L.S.: Prediction of breast cancer using electronic health record. In: 2020 International Conference on System, Computation, Automation and Networking (ICSCAN), pp. 1–6 (2020). https://doi.org/10.1109/ICSCAN49426.2020.9262398

  38. Rai, S., Raut, A., Savaliya, A., Shankarmani, R.: Darwin: convolutional neural network based intelligent health assistant. In: 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), pp. 1367–1371 (2018). https://doi.org/10.1109/ICECA.2018.8474861

  39. Rajkomar, A., Dean, J., Kohane, I.: Machine learning in medicine. N. Engl. J. Med. 380(14), 1347–1358 (2019). https://doi.org/10.1056/NEJMra1814259

  40. Rastgar-Jazi, M., Fernando, X.: Detection of heart abnormalities via artificial neural network: an application of self learning algorithms. In: 2017 IEEE Canada International Humanitarian Technology Conference (IHTC), pp. 66–69 (2017). https://doi.org/10.1109/IHTC.2017.8058202

  41. Ravish, D.K., Shanthi, K.J., Shenoy, N.R., Nisargh, S.: Heart function monitoring, prediction and prevention of heart attacks: using artificial neural networks. In: 2014 International Conference on Contemporary Computing and Informatics (IC3I), pp. 1–6 (2014). https://doi.org/10.1109/IC3I.2014.7019580

  42. Razzaq, S., Mubeen, N., Kiran, U., Asghar, M.A., Fawad, F.: Brain tumor detection from MRI images using bag of features and deep neural network. In: 2020 International Symposium on Recent Advances in Electrical Engineering Computer Sciences (RAEE CS), vol. 5, pp. 1–6 (2020). https://doi.org/10.1109/RAEECS50817.2020.9265768

  43. Ruiz, L., Martín, A., Urbanos, G., Villanueva, M., Sancho, J., Rosa, G., Villa, M., Chavarrías, M., Pérez, Á., Juarez, E., Lagares, A., Sanz, C.: Multiclass brain tumor classification using hyperspectral imaging and supervised machine learning. In: 2020 XXXV Conference on Design of Circuits and Integrated Systems (DCIS), pp. 1–6 (2020). https://doi.org/10.1109/DCIS51330.2020.9268650

  44. Salama, W.M., Elbagoury, A.M., Aly, M.H.: Novel breast cancer classification framework based on deep learning. IET Image Process. 14(13), 3254–3259 (2020). https://doi.org/10.1049/iet-ipr.2020.0122

  45. Sasubilli, S.M., Kumar, A., Dutt, V.: Machine learning implementation on medical domain to identify disease insights using TMS. In: 2020 International Conference on Advances in Computing and Communication Engineering (ICACCE), pp. 1–4 (2020). https://doi.org/10.1109/ICACCE49060.2020.9154960

  46. Secretariat of the Security Committee: Finland’s Cyber Security Strategy, Government Resolution 3.10.2019 (2019). https://turvallisuuskomitea.fi/wp-content/uploads/2019/10/Kyberturvallisuusstrategia_A4_ENG_WEB_031019.pdf

  47. Sipola, T., Kokkonen, T.: One-pixel attacks against medical imaging: a conceptual framework. In: Rocha, Á., Adeli, H., Dzemyda, G., Moreira, F., Ramalho Correia, A. (eds.) Trends and Applications in Information Systems and Technologies. WorldCIST 2021. Advances in Intelligent Systems and Computing, vol. 1365, pp. 197–203. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72657-7_19

  48. Sipola, T., Puuska, S., Kokkonen, T.: Model fooling attacks against medical imaging: a short survey. Inf. Secur. Int. J. 46(2), 215–224 (2020). https://doi.org/10.11610/isij.4615

  49. Someswararao, C., Shankar, R.S., Appaji, S.V., Gupta, V.: Brain tumor detection model from MR images using convolutional neural network. In: 2020 International Conference on System, Computation, Automation and Networking (ICSCAN), pp. 1–4 (2020). https://doi.org/10.1109/ICSCAN49426.2020.9262373

  50. Su, J., Vargas, D.V., Sakurai, K.: Attacking convolutional neural network using differential evolution. IPSJ Trans. Comput. Vis. Appl. 11(1), 1–16 (2019)

  51. Su, J., Vargas, D.V., Sakurai, K.: One pixel attack for fooling deep neural networks. IEEE Trans. Evol. Comput. 23(5), 828–841 (2019). https://doi.org/10.1109/TEVC.2019.2890858

  52. Syam, R., Marapareddy, R.: Application of deep neural networks in the field of information security and healthcare. In: 2019 SoutheastCon, pp. 1–5 (2019). https://doi.org/10.1109/SoutheastCon42311.2019.9020553

  53. Taghanaki, S.A., Das, A., Hamarneh, G.: Vulnerability analysis of chest X-ray image classification against adversarial attacks. In: Stoyanov, D., Taylor, Z., Kia, S.M., Oguz, I., Reyes, M., Martel, A., Maier-Hein, L., Marquand, A.F., Duchesnay, E., Löfstedt, T., Landman, B., Cardoso, M.J., Silva, C.A., Pereira, S., Meier, R. (eds.) Understanding and Interpreting Machine Learning in Medical Image Computing Applications, pp. 87–94. Springer International Publishing, Cham (2018). https://doi.org/10.1007/978-3-030-02628-8_10

  54. Tizhoosh, H.R., Pantanowitz, L.: Artificial intelligence and digital pathology: challenges and opportunities. J. Pathol. Inform. 9 (2018)

  55. Vargas, D.V., Su, J.: Understanding the one-pixel attack: propagation maps and locality analysis. arXiv preprint arXiv:1902.02947 (2019)

  56. Xu, H., Ma, Y., Liu, H.C., Deb, D., Liu, H., Tang, J.L., Jain, A.K.: Adversarial attacks and defenses in images, graphs and text: a review. Int. J. Autom. Comput. 17(2), 151–178 (2020). https://doi.org/10.1007/s11633-019-1211-x

  57. Yang, C., Wu, Q., Li, H., Chen, Y.: Generative poisoning attack method against neural networks. arXiv preprint arXiv:1703.01340 (2017)

Acknowledgements

This research is partially funded by The Regional Council of Central Finland/Council of Tampere Region and European Regional Development Fund as part of the Health Care Cyber Range (HCCR) project and The Cyber Security Network of Competence Centres for Europe (CyberSec4Europe) project of the Horizon 2020 SU-ICT-03-2018 program. The authors would like to thank Ms. Tuula Kotikoski for proofreading the manuscript.

Author information

Corresponding author

Correspondence to Tuomo Sipola.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Sipola, T., Kokkonen, T., Karjalainen, M. (2023). Model Fooling Threats Against Medical Imaging. In: Sipola, T., Kokkonen, T., Karjalainen, M. (eds) Artificial Intelligence and Cybersecurity. Springer, Cham. https://doi.org/10.1007/978-3-031-15030-2_13

  • DOI: https://doi.org/10.1007/978-3-031-15030-2_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-15029-6

  • Online ISBN: 978-3-031-15030-2

  • eBook Packages: Computer Science, Computer Science (R0)
