Medical Image Analysis (Elsevier)

Volume 81, October 2022, 102536

LMISA: A lightweight multi-modality image segmentation network via domain adaptation using gradient magnitude and shape constraint

https://doi.org/10.1016/j.media.2022.102536

Open access under a Creative Commons license

Highlights

  • An efficient end-to-end domain adaptation network to achieve multi-modality image segmentation.

  • The proposed model requires fewer parameters, less memory, and less training time than other state-of-the-art methods.

  • The method is implemented for 2D and 3D medical images for multi-class segmentation.

  • The method can be easily extended to segment images from unseen domains without model retraining.

Abstract

In medical image segmentation, supervised machine learning models trained on one image modality (e.g. computed tomography (CT)) are often prone to failure when applied to another image modality (e.g. magnetic resonance imaging (MRI)), even for the same organ. This is due to the significant intensity variations between image modalities. In this paper, we propose a novel end-to-end deep neural network to achieve multi-modality image segmentation, where image labels are available only for one modality (source domain) and not for the other modality (target domain). In our method, a multi-resolution locally normalized gradient magnitude approach is first applied to images of both domains to minimize the intensity discrepancy. Subsequently, a dual-task encoder-decoder network comprising image segmentation and reconstruction is utilized to effectively adapt a segmentation network to the unlabeled target domain. Additionally, a shape constraint is imposed by leveraging adversarial learning. Finally, images from the target domain are segmented, as the network learns a consistent latent feature representation with shape awareness from both domains. We implement both 2D and 3D versions of our method and evaluate them on CT and MRI images for kidney and cardiac tissue segmentation. For the kidney, a public CT dataset (KiTS19, MICCAI 2019) and a local MRI dataset were utilized. The cardiac dataset was from the Multi-Modality Whole Heart Segmentation (MMWHS) challenge 2017. Experimental results reveal that our proposed method achieves significantly higher performance with much lower model complexity in comparison with other state-of-the-art methods. More importantly, our method also produces superior segmentation results compared with other methods on images of an unseen target domain without model retraining.
The code is available at GitHub (https://github.com/MinaJf/LMISA) to encourage method comparison and further research.
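The modality-harmonization step described in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical NumPy/SciPy implementation of a multi-resolution locally normalized gradient magnitude map, not the authors' code (see the GitHub repository for that): at each scale, gradient magnitudes are computed on a Gaussian-smoothed image and divided by a local average magnitude, which suppresses modality-specific intensity ranges while preserving edges. The scale set `sigmas` and the local-normalization window are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def local_normalized_gradient_magnitude(img, sigmas=(1.0, 2.0, 4.0), eps=1e-8):
    """Hypothetical sketch of a multi-resolution locally normalized
    gradient magnitude map for a 2D image (values rescaled to [0, 1])."""
    img = img.astype(np.float64)
    maps = []
    for sigma in sigmas:
        # Gaussian-smoothed partial derivatives at this scale.
        gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))
        gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))
        mag = np.sqrt(gx ** 2 + gy ** 2)
        # Normalize each location by a local mean magnitude, which
        # removes modality-dependent intensity scaling.
        local_mean = ndimage.gaussian_filter(mag, 3.0 * sigma)
        maps.append(mag / (local_mean + eps))
    # Fuse the scales and rescale the result to [0, 1].
    out = np.mean(maps, axis=0)
    return (out - out.min()) / (out.max() - out.min() + eps)
```

Because gradient magnitudes are invariant to intensity inversion, this map produces the same output for an image and its negative, which is the property that lets CT and MRI inputs share one representation.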

Keywords

Domain adaptation
Multi-modality image segmentation
Generative adversarial network

