Automatic brain labeling via multi-atlas guided fully convolutional networks☆
Introduction
Anatomical brain labeling is highly desired for region-based analysis of MR brain images, which is important for many research studies and clinical applications, such as facilitating diagnosis (Zhou et al., 2012, Chen et al., 2017) and investigating early brain development (Holland et al., 2014). Brain labeling is also a fundamental step in brain network analysis pipelines, where regions-of-interest (ROIs) need to be identified before exploring any connectivity traits (Bullmore and Bassett, 2011, Liu et al., 2012, Ingalhalikar et al., 2014, Zhang et al., 2017a, Zhang et al., 2017c). However, manually labeling a large set of 3D MR images is labor-intensive and impractical, so recent developments have focused on automatic labeling of brain anatomy. Automatic labeling still faces multiple challenges: 1) complex brain structures, 2) ambiguous boundaries between neighboring regions, as shown in the highlighted region of Fig. 1, and 3) large variation of the same brain structure across different subjects.
Recently, many attempts have been made to address these challenges in MR brain labeling (Langerak et al., 2010, Coupé et al., 2011, Tong et al., 2013, Sanroma et al., 2015, Wu et al., 2015, Ma et al., 2016, Zhang et al., 2017a, Zhang et al., 2017c, Wu et al., 2014). In particular, multi-atlas-based labeling methods have become standard approaches owing to their effectiveness and robustness. Defining an atlas as the combination of an intensity image with its manually-labeled map, one can label a target image in two steps: 1) registering the atlas image to the target image, and 2) propagating the atlas label map to the target image. This generalizes to multi-atlas labeling, where multiple atlases are first registered to the target image, and the labels from all atlases are then propagated to the unlabeled target image. Generally, multi-atlas-based methods can be classified into two categories: registration-based and patch-based methods. Registration-based methods first align multiple atlases to the target image in a registration step (Shen and Davatzikos, 2002, Klein et al., 2009), and then fuse the warped atlas label maps to obtain the final labels in a label fusion step (Langerak et al., 2010, Kim et al., 2013, Wang et al., 2013, Giraud et al., 2016). The main drawback of such methods is that the labeling performance depends heavily on the reliability of the non-rigid registration techniques used, which are often quite time-consuming (Iglesias and Sabuncu, 2015).
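The two-step register-then-propagate procedure ends with a label fusion step. As a minimal illustration (the function name and NumPy formulation are ours, not those of the cited methods), per-voxel majority voting over the warped atlas label maps can be sketched as:

```python
import numpy as np

def majority_vote_fusion(warped_label_maps):
    """Fuse atlas label maps (already warped to the target space) by
    taking, at every voxel, the label that most atlases agree on."""
    stacked = np.stack(warped_label_maps, axis=0)   # (n_atlases, *volume_shape)
    n_labels = int(stacked.max()) + 1
    # Count votes for each label at every voxel, then pick the winner.
    votes = np.stack([(stacked == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)
```

More sophisticated fusion schemes weight each atlas by local similarity to the target, but the voting structure is the same.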
Patch-based methods, on the other hand, have gained increasing attention in image labeling, since they can alleviate the need for high registration accuracy by exploring several neighboring patches within a local search region (Tu and Bai, 2010, Hao et al., 2014, Zikic et al., 2014, Khalifa et al., 2016, Pereira et al., 2016, Zhang et al., 2017b). For such methods, affine registration of the atlases to the target image is often sufficient. Specifically, for each target patch, similar patches are selected from the affine-aligned atlas images according to patch similarities within a search region. Then, the labels of the selected atlas patches are fused to label the target patch. The underlying assumption of patch-based methods is that patches similar in intensity are also similar in labels (Rousseau et al., 2011). To measure similarity between patches, several feature extraction methods have been proposed based on anatomical structures (Tu and Bai, 2010, Zhang et al., 2016) or intensity distributions (Hao et al., 2014, Zikic et al., 2014). However, these hand-crafted patch-driven features have a key limitation: they rely on a pre-defined set of features (e.g., color, gradient, shape, intensity distribution), and cannot discover other features that could be learned when comparing patches for the target labeling task.
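The patch-similarity assumption above can be made concrete with a small sketch. Here intensity similarity is turned into a voting weight via an exponential kernel, in the spirit of the nonlocal weighting of Coupé et al. (2011); the function name and the decay parameter `h` are illustrative assumptions, not the exact formulation of any cited method:

```python
import numpy as np

def fuse_patch_label(target_patch, atlas_patches, atlas_labels, h=1.0):
    """Label the centre voxel of target_patch by similarity-weighted
    voting over candidate atlas patches from the search region."""
    weights = np.array([
        np.exp(-np.sum((target_patch - p) ** 2) / h)  # similar patch -> large weight
        for p in atlas_patches
    ])
    labels = np.asarray(atlas_labels)
    # Accumulate the weight behind each candidate label, pick the heaviest.
    scores = {lab: weights[labels == lab].sum() for lab in np.unique(labels)}
    return max(scores, key=scores.get)
```

Hand-crafted variants differ mainly in how `weights` is computed; learned features replace the raw intensity difference with a learned representation.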
Recently, convolutional network (ConvNet) methods have shown great promise and performance in several medical image analysis tasks, including image segmentation (Ronneberger et al., 2015, Chen et al., 2016, Milletari et al., 2016, Badrinarayanan et al., 2017) and image synthesis (Van Nguyen et al., 2015, Li and Wand, 2016, Nie et al., 2017). An appealing aspect of ConvNets is that they can automatically learn the high-level appearance features that best represent the image. In particular, the fully convolutional network (FCN) (Long et al., 2015) has demonstrated its effectiveness in medical image segmentation. For example, Nie et al. (2016) adopted the FCN model for brain tissue segmentation and significantly outperformed conventional segmentation methods in terms of accuracy.
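As a toy illustration of why an FCN suits dense labeling, the sketch below builds a "network" from a single convolutional layer with random placeholder weights. Because there are no fully-connected layers, the per-pixel label map it emits follows the spatial size of whatever input it receives; everything here (the single layer, the random kernels) is a deliberate simplification, not the architecture of any cited model:

```python
import numpy as np

rng = np.random.default_rng(0)
KERNELS = rng.normal(size=(3, 3, 3))  # one random 3x3 kernel per label class

def toy_fcn_labels(image):
    """One conv layer + per-pixel argmax: a dense label map whose
    spatial size always matches the input image."""
    k = 3
    padded = np.pad(image, k // 2, mode="edge")  # same-size output
    h, w = image.shape
    scores = np.zeros((KERNELS.shape[0], h, w))
    for c, kernel in enumerate(KERNELS):
        for i in range(h):
            for j in range(w):
                scores[c, i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return scores.argmax(axis=0)
```

A real FCN stacks many learned convolutional layers, but this size-preserving, per-pixel-score structure is exactly what makes it attractive for segmentation and labeling.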
In this paper, we propose a novel multi-atlas guided fully convolutional network (MA-FCN) that aims to further improve labeling performance by combining patch-based and registration-based labeling. To guide the learning of a conventional FCN for automatic brain labeling by leveraging the available multiple atlases, we align a subset of the training atlases to the target image. Note that we only apply affine registration (with 12 degrees of freedom, using normalized correlation as the cost function) to roughly align the atlases to the target image, instead of non-rigid registration; this ensures efficiency and also demonstrates the ability of the FCN to infer labels from local regions. In the training stage, we propose a novel candidate target patch selection strategy to identify the optimal set of candidate target patches, thus balancing the large variability of ROI sizes. Both target patches and their corresponding candidate atlas patches (two training sources) are used to train the FCN model. We take our FCN model one step further by devising three novel pathways that incorporate the appearance features extracted from the two training sources more effectively: the atlas-unique pathway, the target-patch pathway, and the atlas-aware fusion pathway. Specifically, the atlas-unique and target-patch pathways process the atlas patches and the target patch separately, while the atlas-aware fusion pathway merges them. The main contributions of our method are two-fold:
- (1) We guide the learning of the FCN model by leveraging the information available in multiple atlases.
- (2) The proposed method does not need a non-rigid registration step to align atlases to the target image, which makes brain labeling more efficient.
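The three-pathway design described above can be summarized structurally. In the sketch below, each "pathway" is reduced to a plain feature array and the merge to a mean plus channel concatenation; the shapes, function name, and merge operations are our assumptions for illustration, whereas the actual MA-FCN pathways are stacks of convolutional layers:

```python
import numpy as np

def fuse_pathways(target_features, atlas_features_list):
    """Structural sketch of the three-pathway idea: each aligned atlas is
    processed by its own atlas-unique pathway, the target patch by the
    target-patch pathway, and the atlas-aware fusion pathway merges
    all of them into one joint representation."""
    # Merge the per-atlas features (here: element-wise mean across atlases).
    fused_atlases = np.mean(np.stack(atlas_features_list, axis=0), axis=0)
    # Fusion pathway: concatenate target and atlas-derived features channel-wise.
    return np.concatenate([target_features, fused_atlases], axis=0)
```

The key design point this captures is that atlas and target information travel through separate pathways before being merged, rather than being concatenated at the input.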
Section snippets
Registration-based labeling
Registration-based methods leverage both non-linear registration and label fusion techniques. Many works have been proposed to improve the performance of the registration step, including the LEAP method (Wolz et al., 2010), which constructs an image manifold according to the similarities between all training and test images. The sophisticated tree-based group-wise registration strategy developed in (Jia et al., 2012) employed a pairwise registration strategy that concatenated precomputed
Method
In this section, we detail the proposed MA-FCN framework for automatic brain labeling. Our goal is to improve the labeling performance of a typical FCN by guiding and boosting its learning with multiple aligned atlases. Our method comprises a training stage and a testing stage. In the training stage, we randomly select several training images as atlases. Specifically, we first select 3D patches from the training images using a random selection strategy. Next, for each selected training 3D patch, we
Experiments and results
We evaluated the proposed method on the LONI LPBA40 (Shattuck et al., 2008) dataset and the SATA MICCAI 2013 challenge dataset (Landman, 2013). The LONI and SATA datasets are two widely-used datasets for evaluating 2D (Zikic et al., 2014, Wu et al., 2015, Bao and Chung, 2018) and 3D (Tu and Bai, 2010, Bao et al., 2018, Wu et al., 2018) labeling algorithms. They contain different
Discussion
In this paper, we proposed an automated labeling framework for brain images that integrates multi-atlas-based labeling approaches into an FCN architecture. Previously, several neural network-based methods aimed to integrate data from multiple sources or different modalities by concatenating them together for network training (Fang et al., 2017, Rohé et al., 2017, Xiang et al., 2017, Yang et al., 2017). Our proposed MA-FCN falls into the same category, but it has more appealing aspects. For
Conclusion
In this work, we have proposed a novel multi-atlas guided fully convolutional network (MA-FCN) for brain labeling. Different from conventional ConvNet methods, we integrate atlas intensity and label information through new pathways embedded in the proposed FCN architecture. The MA-FCN contains three propagation pathways: the atlas-unique pathway, the atlas-aware fusion pathway, and the target-patch pathway. The atlas-unique pathway can amend wrong labels in the atlas by using the convolution
Acknowledgments
This work was supported in part by the National Key Research and Development Program of China (2017YFB1302704), the National Natural Science Foundation of China (91520202, 81701785), the Youth Innovation Promotion Association CAS (2012124), the CAS Scientific Research Equipment Development Project (YJKYYQ20170050), the Beijing Municipal Science & Technology Commission (Z181100008918010), and the Strategic Priority Research Program of CAS. This work was also supported by NIH grants (EB006733, EB008374,
References (68)
- Patch-based segmentation using expert priors: application to hippocampus and ventricle segmentation. Neuroimage (2011)
- An optimized patchmatch for multi-scale and multi-feature label fusion. Neuroimage (2016)
- Brain tumor segmentation with deep neural networks. Med. Image Anal. (2017)
- Multi-atlas segmentation of biomedical images: a survey. Med. Image Anal. (2015)
- Iterative multi-atlas-based multi-image segmentation with tree-based registration. Neuroimage (2012)
- Automatic hippocampus segmentation of 7.0 Tesla MR images by combining multiple atlases and auto-context models. Neuroimage (2013)
- Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration. Neuroimage (2009)
- Automatic brain tissue segmentation in MR images using random forests and conditional random fields. J. Neurosci. Methods (2016)
- Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains. Neuroimage (2004)
- A transversal approach for patch-based label fusion via matrix completion. Med. Image Anal. (2015)
- Construction of a 3D probabilistic atlas of human cortical structures. Neuroimage
- Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage
- Segmentation of MR images via discriminative dictionary learning and sparse coding: application to hippocampus labeling. Neuroimage
- LEAP: learning embeddings for atlas propagation. Neuroimage
- A generative probability model of joint label fusion for multi-atlas based brain segmentation. Med. Image Anal.
- Hierarchical multi-atlas label fusion with multi-scale feature representation and label-specific patch partition. Neuroimage
- Robust brain ROI segmentation by deformation regression and deformable shape model. Med. Image Anal.
- Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI. Neurocomputing
- Brain atlas fusion from high-thickness diagnostic magnetic resonance images by learning-based super-resolution. Pattern Recognit.
- Learning-based structurally-guided construction of resting-state functional correlation tensors. Magn. Reson. Imaging
- Concatenated spatially-localized random forests for hippocampus labeling in adult and infant MR brain images. Neurocomputing
- Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. Neuroimage
- Predicting regional neurodegeneration from the healthy brain functional connectome. Neuron
- Encoding atlases by randomized classification forests for efficient multi-atlas label propagation. Med. Image Anal.
- Combination strategies in multi-atlas image segmentation: application to brain MR data. IEEE Trans. Med. Imaging
- SegNet: a deep convolutional encoder-decoder architecture for scene segmentation. IEEE Trans. Pattern Anal. Mach. Intell.
- Label fusion in atlas-based segmentation using a selective and iterative method for performance level estimation (SIMPLE). IEEE Trans. Med. Imaging
- 3D randomized connection network with graph-based label inference. IEEE Trans. Image Process.
- Multi-scale structured CNN with label consistency for brain MR image segmentation. Comput. Methods Biomech. Biomed. Eng.: Imaging Vis.
- Brain graphs: graphical models of the human brain connectome. Annu. Rev. Clin. Psychol.
- Extraction of dynamic functional connectivity from brain grey matter and white matter for MCI classification. Hum. Brain Mapp.
- Brain image labeling using multi-atlas guided 3D fully convolutional networks
- ☆
Conflict of interest: We wish to draw the attention of the Editor to the following facts which may be considered as potential conflicts of interest and to significant financial contributions to this work.
We confirm that the manuscript has been read and approved by all named authors and that there are no other persons who satisfied the criteria for authorship but are not listed. We further confirm that the order of authors listed in the manuscript has been approved by all of us.
We confirm that we have given due consideration to the protection of intellectual property associated with this work and that there are no impediments to publication, including the timing of publication, with respect to intellectual property. In so doing we confirm that we have followed the regulations of our institutions concerning intellectual property.
We understand that the Corresponding Author is the sole contact for the Editorial process (including Editorial Manager and direct communications with the office). He/she is responsible for communicating with the other authors about progress, submissions of revisions and final approval of proofs. We confirm that we have provided a current, correct email address which is accessible by the Corresponding Author and which has been configured to accept email from [email protected], [email protected]