Synth-by-Reg (SbR): Contrastive Learning for Synthesis-Based Registration of Paired Images

  • Conference paper
Simulation and Synthesis in Medical Imaging (SASHIMI 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12965)

Abstract

Nonlinear inter-modality registration is often challenging due to the lack of objective functions that are good proxies for alignment. Here we propose a synthesis-by-registration method to convert this problem into an easier intra-modality task. We introduce a registration loss for weakly supervised image translation between domains that does not require perfectly aligned training data. This loss capitalises on a registration U-Net with frozen weights to drive a synthesis CNN towards the desired translation. We complement this loss with a structure-preserving constraint based on contrastive learning, which prevents blurring and content shifts due to overfitting. We apply this method to the registration of histological sections to MRI slices, a key step in 3D histology reconstruction. Results on two public datasets show improvements over registration based on mutual information (13% reduction in landmark error) and synthesis-based algorithms such as CycleGAN (11% reduction), and are comparable to registration with label supervision. Code and data are publicly available at https://github.com/acasamitjana/SynthByReg.
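To make the training signal described above concrete, the sketch below illustrates, in PyTorch, the two losses the abstract mentions: a registration loss in which a frozen intra-modality registration network is applied to the generator's output, and a PatchNCE-style contrastive constraint that ties features of the source image to features of the translated image at matching spatial locations. This is a minimal illustration rather than the authors' implementation; the toy network definitions, the L1 image term, the patch-sampling scheme, and the equal loss weighting are all simplifying assumptions.

```python
# Minimal sketch (not the authors' code) of the SbR training signal: a synthesis CNN
# is trained so that a *frozen* intra-modality registration network can align the
# synthesised image to the target, plus a contrastive (PatchNCE-style) constraint
# that discourages content shifts. Shapes, names, and weights are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Stand-in for the synthesis CNN, the registration U-Net, and the feature encoder."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def warp(image, flow):
    """Warp `image` with a dense displacement field `flow` (B,2,H,W) via grid_sample.
    Displacements are assumed to be expressed in normalised [-1, 1] coordinates."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1).to(image)
    return F.grid_sample(image, base + flow.permute(0, 2, 3, 1), align_corners=True)

def patch_nce_loss(feat_src, feat_gen, num_patches=64, tau=0.07):
    """Contrastive structure-preservation term: matching locations of the source and
    generated feature maps are positives, the other sampled locations negatives."""
    b, c, h, w = feat_src.shape
    idx = torch.randperm(h * w)[:num_patches]
    q = F.normalize(feat_gen.flatten(2)[:, :, idx].permute(0, 2, 1), dim=-1)  # (B,N,C)
    k = F.normalize(feat_src.flatten(2)[:, :, idx].permute(0, 2, 1), dim=-1)  # (B,N,C)
    logits = torch.bmm(q, k.transpose(1, 2)) / tau                            # (B,N,N)
    labels = torch.arange(num_patches, device=logits.device).expand(b, -1)
    return F.cross_entropy(logits.reshape(-1, num_patches), labels.reshape(-1))

# Illustrative modules: histology -> "synthetic MRI" generator, frozen registration net.
synth = TinyCNN(1, 1)                        # trainable synthesis CNN
reg_net = TinyCNN(2, 2)                      # pre-trained intra-modality registration net
for p in reg_net.parameters():
    p.requires_grad_(False)                  # frozen weights: only the generator is updated
encoder = TinyCNN(1, 8)                      # feature extractor for the contrastive term

histo = torch.rand(2, 1, 64, 64)             # source modality (e.g. histology slice)
mri = torch.rand(2, 1, 64, 64)               # target modality (e.g. MRI slice)

fake_mri = synth(histo)                              # translate histology into MRI contrast
flow = reg_net(torch.cat([fake_mri, mri], dim=1))    # now an intra-modality registration
warped = warp(fake_mri, flow)

loss_reg = F.l1_loss(warped, mri)                    # registration-driven synthesis loss
loss_nce = patch_nce_loss(encoder(histo), encoder(fake_mri))
loss = loss_reg + 1.0 * loss_nce                     # weighting is an assumption
loss.backward()
print(f"registration loss {loss_reg.item():.3f}, contrastive loss {loss_nce.item():.3f}")
```

Because the registration network's weights are frozen, the gradient of the registration loss flows only into the synthesis CNN, pushing the translated image towards a contrast that the intra-modality registration network can align, while the contrastive term keeps the translation spatially faithful to the source.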




Author information

Corresponding author

Correspondence to Adrià Casamitjana.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Casamitjana, A., Mancini, M., Iglesias, J.E. (2021). Synth-by-Reg (SbR): Contrastive Learning for Synthesis-Based Registration of Paired Images. In: Svoboda, D., Burgos, N., Wolterink, J.M., Zhao, C. (eds.) Simulation and Synthesis in Medical Imaging. SASHIMI 2021. Lecture Notes in Computer Science, vol. 12965. Springer, Cham. https://doi.org/10.1007/978-3-030-87592-3_5


  • DOI: https://doi.org/10.1007/978-3-030-87592-3_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87591-6

  • Online ISBN: 978-3-030-87592-3

  • eBook Packages: Computer Science (R0)
