Photoacoustics

Volume 26, June 2022, 100351

Research article
Improving needle visibility in LED-based photoacoustic imaging using deep learning with semi-synthetic datasets

https://doi.org/10.1016/j.pacs.2022.100351
Open access under a Creative Commons license

Abstract

Photoacoustic imaging has shown great potential for guiding minimally invasive procedures by accurately identifying critical tissue targets and invasive medical devices (such as metallic needles). The use of light-emitting diodes (LEDs) as excitation light sources accelerates its clinical translation owing to their high affordability and portability. However, needle visibility in LED-based photoacoustic imaging is compromised primarily by the low optical fluence of LEDs. In this work, we propose a deep learning framework based on U-Net to improve the visibility of clinical metallic needles with an LED-based photoacoustic and ultrasound imaging system. To address the difficulty of capturing ground truth for real data and the limited realism of purely simulated data, the framework includes the generation of semi-synthetic training datasets that combine simulated data representing the needles with in vivo measurements of the tissue background. The trained neural network was evaluated on needle insertions into blood-vessel-mimicking phantoms, pork joint tissue ex vivo, and measurements on human volunteers. Compared with conventional reconstruction, the deep learning-based framework substantially improved needle visibility in photoacoustic imaging in vivo by suppressing background noise and image artefacts, achieving 5.8-fold and 4.5-fold improvements in signal-to-noise ratio and modified Hausdorff distance, respectively. Thus, the proposed framework could help reduce complications during percutaneous needle insertions by enabling accurate identification of clinical needles in photoacoustic imaging.
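As a rough illustration of two elements described in this abstract, the sketch below (a minimal Python example, not the authors' released code) composites a simulated needle image onto an in vivo background frame to form a semi-synthetic training pair, and computes the modified Hausdorff distance used as an evaluation metric. The function names, the additive blending weight, and the normalisation scheme are assumptions made for illustration.

```python
# Hypothetical sketch, assuming 2-D image arrays of matching shape; the
# blending weight and normalisation are illustrative choices, not the
# authors' implementation.
import numpy as np
from scipy.spatial.distance import cdist


def make_semi_synthetic(needle_img: np.ndarray,
                        background_img: np.ndarray,
                        weight: float = 1.0):
    """Blend a simulated needle image with an in vivo background frame.

    Returns (network_input, target): the composite image serves as the
    network input and the clean simulated needle image as the target.
    """
    composite = background_img + weight * needle_img
    # Normalise to [0, 1] so frames from different sources are comparable.
    composite = (composite - composite.min()) / (np.ptp(composite) + 1e-12)
    target = needle_img / (needle_img.max() + 1e-12)
    return composite.astype(np.float32), target.astype(np.float32)


def modified_hausdorff_distance(points_a: np.ndarray,
                                points_b: np.ndarray) -> float:
    """Modified Hausdorff distance between two point sets (N x 2 pixel coords)."""
    d = cdist(points_a, points_b)    # pairwise Euclidean distances
    d_ab = d.min(axis=1).mean()      # mean nearest-neighbour distance A -> B
    d_ba = d.min(axis=0).mean()      # mean nearest-neighbour distance B -> A
    return max(d_ab, d_ba)
```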

Keywords

Photoacoustic imaging
Needle visibility
Light emitting diodes
Deep learning
Minimally invasive procedures


Mengjie Shi is a Ph.D. student in the School of Biomedical Engineering & Imaging Sciences at King’s College London, UK. She completed her bachelor’s degree in Optoelectronic Information Science and Technology at Nanjing University of Science and Technology, China, in 2019 and her master’s degree in Communications and Signal Processing at Imperial College London in 2020. Her research interests focus on improving photoacoustic imaging with affordable light sources for guiding minimally invasive procedures.

Tianrui Zhao is a Ph.D. student in the School of Biomedical Engineering & Imaging Sciences at King’s College London, UK. He received his B.Sc. in Materials Science and Engineering from Northwestern Polytechnical University, China, and his M.Sc. in Materials for Energy and Environment from University College London, UK, in 2015 and 2016, respectively. His research interests include developing minimally invasive imaging devices based on photoacoustic imaging.

Dr. Sim West is a consultant anaesthetist at UCLH. He graduated from Sheffield in 2000 and completed his training in anaesthesia in North London, spending 2012 as the Smiths Medical Innovation Fellow. He was appointed to UCLH in 2013 and is the lead for regional anaesthesia and the orthopaedic hub. His research interests include improving visualisation of needles, catheters and nerves.

Dr. Adrien Desjardins is a Professor in the Department of Medical Physics and Biomedical Engineering at University College London, where he leads the Interventional Devices Group. His research interests are centred on the development of new imaging and sensing modalities to guide minimally invasive medical procedures. He has a particular interest in the application of photoacoustic imaging and optical ultrasound to guide interventional devices for diagnosis and therapy.

Tom Vercauteren has been a Professor of Interventional Image Computing at King’s College London since 2018, where he holds the Medtronic/Royal Academy of Engineering Research Chair in Machine Learning for Computer-assisted Neurosurgery. From 2014 to 2018, he was an Associate Professor at UCL, where he acted as Deputy Director of the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (2017–18). From 2004 to 2014, he worked for Mauna Kea Technologies, Paris, where he led the research and development team designing image computing solutions for the company’s CE-marked and FDA-cleared optical biopsy device. His work is now used in hundreds of hospitals worldwide. He is a graduate of Columbia University and Ecole Polytechnique and obtained his Ph.D. from Inria in 2008. Tom is also an established open-source software supporter.

Dr. Wenfeng Xia is a Lecturer in the School of Biomedical Engineering & Imaging Sciences at King’s College London, UK. He received a B.Sc. in Electrical Engineering from Shanghai Jiao Tong University, China, and an M.Sc. in Medical Physics from the University of Heidelberg, Germany, in 2005 and 2007, respectively. In 2013, he obtained his Ph.D. from the University of Twente, the Netherlands. From 2014 to 2018, he was a Research Associate / Senior Research Associate in the Department of Medical Physics and Biomedical Engineering at University College London, UK. He currently leads the Photons+ Ultrasound Research Laboratory (https://www.purlkcl.org/). His research interests include non-invasive and minimally invasive photoacoustic imaging, and ultrasound-based medical device tracking for guiding interventional procedures.