
Sensorless Freehand 3D Ultrasound Reconstruction via Deep Contextual Learning

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 (MICCAI 2020)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12263)

Abstract

Transrectal ultrasound (US) is the most commonly used imaging modality to guide prostate biopsy, and its 3D volume provides even richer contextual information. Current methods for 3D volume reconstruction from freehand US scans require external tracking devices to provide the spatial position of every frame. In this paper, we propose a deep contextual learning network (DCL-Net) that efficiently exploits the image feature relationship between US frames and reconstructs 3D US volumes without any tracking device. The proposed DCL-Net utilizes 3D convolutions over a US video segment for feature extraction. An embedded self-attention module makes the network focus on speckle-rich areas for better spatial movement prediction. We also propose a novel case-wise correlation loss to stabilize the training process and improve accuracy. Experiments, including ablation studies and comparisons against other state-of-the-art methods, demonstrate the superior performance of the proposed method. The source code of this work is publicly available at https://github.com/DIAL-RPI/FreehandUSRecon.
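
The following is a minimal sketch, in PyTorch, of the ingredients the abstract names: a small 3D-convolutional encoder over a short frame segment, a simple self-attention gate on the feature maps, and a correlation-based term on the predicted rigid motion parameters. It is not the authors' implementation (which is available in the linked repository); all module names, layer sizes, the segment length, and the exact form of the loss are illustrative assumptions.

# Minimal PyTorch sketch (not the authors' code): 3D convolutions over a short
# ultrasound frame segment, a self-attention gate on the feature maps, and a
# Pearson-correlation term on the predicted rigid motion parameters.
# All names, shapes, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


class SelfAttention3D(nn.Module):
    """Learned [0, 1] map that re-weights 3D feature maps."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, feat):
        attn = torch.sigmoid(self.score(feat))    # (N, 1, D, H, W)
        return feat * attn                        # emphasize informative regions


class MotionRegressor3D(nn.Module):
    """3D-conv encoder mapping a frame segment to 6 rigid motion parameters."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.attention = SelfAttention3D(32)
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.head = nn.Linear(32, 6)              # 3 translations + 3 rotations

    def forward(self, segment):                   # segment: (N, 1, frames, H, W)
        feat = self.attention(self.encoder(segment))
        return self.head(self.pool(feat).flatten(1))


def correlation_loss(pred, target, eps=1e-8):
    """1 - Pearson correlation between predicted and ground-truth parameters,
    computed over the batch; an assumed stand-in for the case-wise term."""
    p = pred - pred.mean(dim=0, keepdim=True)
    t = target - target.mean(dim=0, keepdim=True)
    corr = (p * t).sum(dim=0) / (p.norm(dim=0) * t.norm(dim=0) + eps)
    return (1.0 - corr).mean()


if __name__ == "__main__":
    model = MotionRegressor3D()
    frames = torch.randn(4, 1, 5, 224, 224)       # batch of 5-frame segments
    truth = torch.randn(4, 6)                     # placeholder motion labels
    pred = model(frames)
    loss = nn.functional.mse_loss(pred, truth) + correlation_loss(pred, truth)
    loss.backward()

The attention gate here simply re-weights encoder features with a learned map, one plausible way to bias the network toward speckle-rich regions, and the correlation term complements a standard regression loss by penalizing motion estimates whose trend across a batch does not track the ground truth.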



Acknowledgements

This work was partially supported by National Institute of Biomedical Imaging and Bioengineering (NIBIB) of the National Institutes of Health (NIH) under awards R21EB028001 and R01EB027898, and through an NIH Bench-to-Bedside award made possible by the National Cancer Institute.

Author information


Corresponding author

Correspondence to Pingkun Yan.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Guo, H., Xu, S., Wood, B., Yan, P. (2020). Sensorless Freehand 3D Ultrasound Reconstruction via Deep Contextual Learning. In: Martel, A.L., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 (MICCAI 2020). Lecture Notes in Computer Science, vol. 12263. Springer, Cham. https://doi.org/10.1007/978-3-030-59716-0_44


  • DOI: https://doi.org/10.1007/978-3-030-59716-0_44

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59715-3

  • Online ISBN: 978-3-030-59716-0

  • eBook Packages: Computer Science, Computer Science (R0)
