Journal of Healthcare Engineering. 2021 Apr 23;2021:5520196. doi: 10.1155/2021/5520196

Symmetric Deformable Registration via Learning a Pseudomean for MR Brain Images

Xiaodan Sui 1, Yuanjie Zheng 1,2, Yunlong He 3, Weikuan Jia 1
PMCID: PMC8087477  PMID: 33976754

Abstract

Image registration is a fundamental task in medical imaging analysis, commonly used during image-guided interventions and data fusion. In this paper, we present a deep learning architecture that symmetrically learns and predicts the deformation field between a pair of images in an unsupervised fashion. To achieve this, we design a deep regression network to predict a deformation field that can be used to align the template-subject image pair. Specifically, instead of estimating a single deformation pathway to align the images, we predict two halfway deformations, which move the original template and subject into a pseudomean space simultaneously. We therefore train a symmetric registration network (S-Net). With this symmetric strategy, the registration is more accurate and robust, particularly on images with large anatomical variations, and the smoothness of the deformation is also significantly improved. Experimental results demonstrate that the trained model can directly predict symmetric deformations on new image pairs from different databases, consistently producing accurate and robust registration results.

1. Introduction

Computational models have become a practical tool in biomedical engineering and are applied to the analysis and measurement of data in the biomedical field (e.g., measurement of material mechanical behavior [1–4], medical image segmentation [5, 6], and registration [7–9]). Deformable image registration aims to align subject images onto a template space by gradually optimizing the spatial transformation fields consisting of voxel-to-voxel correspondences between template and subject images [10]. Deformable registration is a key procedure in clinical applications such as population analysis, longitudinal data analysis, and image-guided intervention. Many image registration algorithms have been proposed and applied to various imaging analysis tasks [7–9, 11–16]. Conventional registration algorithms achieve the task via classical optimization and can be classified into either intensity-based registration [11–13] or feature-based registration [14–16]. In these methods, the deformation field is obtained by iteratively optimizing an image similarity metric under a smoothness regularization constraint.

In recent years, deep learning has been widely applied in medical image analysis [17, 18]. Deep-learning-based registration methods have shown promising performance, especially in terms of efficiency, as the computational time can be reduced from minutes to seconds. Since ground-truth deformations are difficult to obtain in practice, semisupervised [19] and unsupervised learning strategies [20–22] are currently more popular. Specifically, the spatial transformer network (STN) [23] is leveraged in the deep-learning-based registration framework so that the loss can be defined directly on image similarity, instead of using ground-truth deformations as supervision. Once the model is trained, the transformation field can be estimated for unseen image pairs without iterative optimization. Therefore, deep-learning-based registration is more flexible in real clinical use. Additionally, to further improve registration accuracy, multiscale strategies [24, 25], diffeomorphic strategies [26], and inverse-consistency properties [27] have also been incorporated into deep-learning-based registration frameworks.

However, for the aforementioned registration algorithms, it is difficult to accurately register images with large anatomical variations, and the smoothness of large deformations is difficult to preserve and constrain. Thus, it is essential to develop an algorithm that can effectively register images with large anatomical variations while keeping the transformation field smooth, so that the topology is well preserved. In addition, symmetric diffeomorphic registration, which estimates symmetric deformation pathways from the two objects (template and subject) to an intermediate point instead of a single pathway from template to subject, has achieved better overall performance [13, 28]. Inspired by these methods, we add a symmetric image registration strategy to the unsupervised model.

In this paper, we further investigate the deep-learning-based registration by considering the symmetric property. We propose a symmetric registration network (S-Net) by simultaneously aligning the subject and template images to an intermediate space, i.e., the pseudomean space. Specifically, instead of establishing the voxel-to-voxel correspondences in one pathway, i.e., from template space to subject space, we move the template and subject images symmetrically, until they meet in the pseudomean space. In this space, the image similarity is maximized. The main contribution of this work can be summarized as follows:

  1. We propose a symmetric registration network that can register images in the dual direction simultaneously. In this framework, the pseudomean space can be automatically learned by using the symmetric constraint without any supervised guidance.

  2. The symmetric property allows for estimating two short deformation pathways instead of directly estimating a long deformation pathway. It is more effective to register images with large anatomical variations. The final registration result can be more accurate and smoother.

  3. Under the symmetric framework, we can directly obtain the forward (register subject to template) and backward (register template to subject) transformation fields by using the trained S-Net. Therefore, inverse consistency can be achieved without introducing any additional model or strategy.

2. Materials and Methods

The S-Net is trained in an unsupervised manner based on the proposed symmetric way. As shown in Figure 1, the input of the network is a pair of template image IT and subject image IS, together with their difference map. Instead of directly estimating the deformation field ϕ to register the subject to template, we make the training of the registration network symmetric, i.e., the template and subject image both deform until reaching their pseudomean space. Two deformation pathways will be estimated under this framework: (1) ϕT is the deformation pathway between template and pseudomean space and (2) ϕS is the deformation pathway between subject and pseudomean space.

Figure 1. Overview of our method. The parameters of S-Net are learned by unsupervised training. The input consists of the subject image IS, the template image IT, and their difference map; the outputs are the 3D displacement maps.

Mathematically, the optimization of symmetric registration can be formulated by minimizing the image dissimilarity in the pseudomean space:

F(I_S, I_T, \phi_S, \phi_T) = M(\mathcal{T}(I_S, \phi_S), \mathcal{T}(I_T, \phi_T)) + \lambda R(\phi_S, \phi_T),  (1)

where 𝒯(I, ϕ) is a deformation operation that warps I by ϕ, and M is the dissimilarity between the deformed subject image 𝒯(IS, ϕS) and the deformed template image 𝒯(IT, ϕT). R is a regularization term to constrain the smoothness of the two symmetric deformations ϕT and ϕS. λ is a weight to balance registration accuracy and deformation smoothness. In the training of the deformable registration network S-Net, M and R are used to define the loss function, and 𝒯 is the spatial transformer network [23] used to spatially transform the image based on the estimated transformation field. The details of training the symmetric registration network S-Net are described in Section 2.1.

In the testing stage, given an unseen image pair and their difference map, we can obtain their symmetric deformations ϕS and ϕT. As shown in Figure 2, the final symmetric registration results can be obtained by composing the two predicted deformation pathways: the forward deformation F=ϕS∘(ϕT)−1 registers the subject to the template, and the backward deformation F−1=ϕT∘(ϕS)−1 registers the template to the subject. The two fields are inverses of each other, and "∘" denotes the composition operator [28].
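The paper does not detail how the composition "∘" and the field inverses are computed at test time; the following NumPy sketch shows one common way to compose displacement fields and to approximate an inverse by fixed-point iteration. The helper names and the iteration scheme are our own assumptions, not the authors' code.

```python
# NumPy sketch of displacement-field composition and inversion.
import numpy as np
from scipy.ndimage import map_coordinates

def resample_field(phi, by):
    """Sample each component of displacement field phi at u + by(u)."""
    dims = phi.shape[:3]
    grid = np.stack(np.meshgrid(*[np.arange(n) for n in dims],
                                indexing="ij"), axis=0).astype(float)
    coords = grid + np.moveaxis(by, -1, 0)            # shape (3, X, Y, Z)
    return np.stack([map_coordinates(phi[..., d], coords, order=1,
                                     mode="nearest") for d in range(3)],
                    axis=-1)

def compose(phi_a, phi_b):
    """(phi_a ∘ phi_b)(u) = phi_b(u) + phi_a(u + phi_b(u))."""
    return phi_b + resample_field(phi_a, phi_b)

def invert(phi, n_iter=20):
    """Fixed-point iteration v_{k+1}(u) = -phi(u + v_k(u)) for phi^{-1}."""
    inv = np.zeros_like(phi)
    for _ in range(n_iter):
        inv = -resample_field(phi, inv)
    return inv
```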

Figure 2. The symmetric image registration scheme. (a) Illustration of the hypothesis of symmetric image registration; (b) the whole deformation field from subject to template can be calculated by F=ϕS∘(ϕT)−1, and the inverse deformation field from template to subject by F−1=ϕT∘(ϕS)−1.

2.1. Symmetric Network Design

For symmetric registration, the pseudomean is an intermediate space on the image manifold, and the distance between the pseudomean and the template should equal that between the pseudomean and the subject. Therefore, for each location/voxel, the deformation magnitudes of ϕT and ϕS should be equal, while the directions should be opposite. Thus, during training, the output of the network is only ϕT, and we can set ϕS=−ϕT. There are two advantages to this symmetric setting. (1) Large local deformations can be estimated more effectively, since we shorten the deformation pathway during registration. (2) We can easily keep the inverse consistency without introducing any additional constraint.

The S-Net is based on the network architecture of VoxelMorph [20], which is lighter than the original U-Net [6], reducing redundant connections to adapt to the analysis of 3D images. The network output is the halfway deformation ϕT. Since we do not have ground-truth deformations, we apply the unsupervised training strategy [20–22]. Specifically, a spatial transformer network [23] provides a fully differentiable spatial transformation layer 𝒯 that transforms the input image I by the deformation ϕ output by the S-Net. We use trilinear interpolation in the STN, and the operation 𝒯 can be formulated as

\mathcal{T}(I, \phi) = \sum_{v \in N(u + \phi(u))} I(v) \prod_{d \in \{x, y, z\}} \left(1 - |u_d + \phi_d(u) - v_d|\right),  (2)

where u=[x, y, z] is the voxel coordinate, N(u+ϕ(u)) denotes the eight neighboring voxels of u+ϕ(u) in I, and d indexes the three directions in 3D space. With the STN, the loss defined by image similarity can backpropagate to the S-Net, and the registration network can be trained in an unsupervised manner.
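To make equation (2) concrete, here is a minimal NumPy sketch of the trilinear warping operation; the actual layer in the paper is a differentiable Keras/STN implementation, so this version (function name and array layout are our own) is illustrative only.

```python
# Trilinear warp T(I, phi) per equation (2), written out explicitly.
import numpy as np

def warp_trilinear(image, phi):
    """Warp a 3D image by a displacement field phi of shape [X, Y, Z, 3]."""
    dims = np.array(image.shape)
    grid = np.stack(np.meshgrid(*[np.arange(n) for n in image.shape],
                                indexing="ij"), axis=-1).astype(float)
    loc = np.clip(grid + phi, 0, dims - 1.0001)   # sampling points u + phi(u)
    lo = np.floor(loc).astype(int)                # lower corner of each cell
    frac = loc - lo                               # fractional offsets in [0, 1)
    out = np.zeros(image.shape, dtype=float)
    # Sum over the eight neighbor voxels v, weighted by the trilinear
    # coefficients prod_d (1 - |u_d + phi_d(u) - v_d|) from equation (2).
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.abs(1 - dx - frac[..., 0]) *
                     np.abs(1 - dy - frac[..., 1]) *
                     np.abs(1 - dz - frac[..., 2]))
                out += w * image[lo[..., 0] + dx,
                                 lo[..., 1] + dy,
                                 lo[..., 2] + dz]
    return out
```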

2.2. Loss Definition

2.2.1. Symmetric Similarity Loss

The similarity loss of the registration task is used to evaluate the registration accuracy; here, we define it by the sum of squared differences (SSD). Conventionally, the subject image is warped to the template space by the output deformation field, and the loss is calculated in the template space. For the symmetric registration network, we instead define the similarity loss in the pseudomean space to enforce the symmetric property. Mathematically, it can be formulated as

\mathcal{L}_{Sim}^{sym} = \left\| \mathcal{T}(I_S, \phi_S) - \mathcal{T}(I_T, \phi_T) \right\|_2^2, \quad \phi_S = -\phi_T.  (3)

By minimizing the symmetric similarity loss ℒSimsym, the template and subject image will gradually register with each other, until they reach their pseudomean space. To further enhance the symmetric constraints and registration accuracy, we also define the similarity loss in both template and subject image space:

\mathcal{L}_{Sim}^{I_S \to I_T} = \left\| \mathcal{T}(I_S, \phi_S \circ (\phi_T)^{-1}) - I_T \right\|_2^2, \quad \mathcal{L}_{Sim}^{I_T \to I_S} = \left\| \mathcal{T}(I_T, \phi_T \circ (\phi_S)^{-1}) - I_S \right\|_2^2,  (4)

where ϕS∘(ϕT)−1 indicates the forward deformation pathway, which transforms the subject image to the template space, while ϕT∘(ϕS)−1 indicates the backward deformation pathway, which transforms the template image to the subject space. It is worth noting that the output of the S-Net is the halfway deformation ϕT; the symmetric loss ℒSimsym defined in the pseudomean space preserves the symmetric property, while ℒSimISIT and ℒSimITIS defined in the end image spaces make the registration accurate. Therefore, the whole symmetric similarity loss can be summarized as ℒSim=ℒSimsym+ℒSimISIT+ℒSimITIS.
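Putting the three terms together, a minimal sketch of the similarity loss follows, assuming SSD as the dissimilarity M and warp/compose/invert helpers like those sketched above; all names are illustrative, not the authors' code.

```python
# Full similarity loss L_Sim from equations (3) and (4).
import numpy as np

def ssd(a, b):
    """Sum of squared differences, used here as the dissimilarity M."""
    return np.sum((a - b) ** 2)

def symmetric_similarity(I_S, I_T, phi_T, warp, compose, invert):
    """L_Sim = L_sym + L_{IS->IT} + L_{IT->IS}, given only the output phi_T."""
    phi_S = -phi_T                                       # symmetric halfway field
    sim_sym = ssd(warp(I_S, phi_S), warp(I_T, phi_T))    # pseudomean space, eq. (3)
    fwd = compose(phi_S, invert(phi_T))   # phi_S ∘ (phi_T)^-1: subject -> template
    bwd = compose(phi_T, invert(phi_S))   # phi_T ∘ (phi_S)^-1: template -> subject
    return sim_sym + ssd(warp(I_S, fwd), I_T) + ssd(warp(I_T, bwd), I_S)
```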

2.2.2. Field Regularization Loss

The regularization loss is used to constrain the smoothness of the estimated deformation field ϕT, which is important to preserve the topology. In S-Net, this regularization loss is only defined on ϕT (output of the network). The smoothness of ϕS can be automatically constrained since ϕS=−ϕT. In our work, three kinds of regularization loss, i.e., Laplace smoothness, zero constraint, and antifolds constraint, are used to penalize the smoothness.

(1) Laplace smoothness ℒLaplace: constraining the smoothness of the field ϕT, defined as

\mathcal{L}_{Laplace} = \sum_u \left\| \nabla^2 \phi_T(u) \right\|_2^2,  (5)

where ∇²ϕT(u) is the second derivative of the field ϕT at voxel u.

(2) Zero constraint ℒZero: penalizing large displacement values to avoid unreasonably large deformations:

\mathcal{L}_{Zero} = \sum_u \left\| \phi_T(u) \right\|_2^2.  (6)

(3) Antifolds constraint ℒAnti: adding an antifolds constraint [27] to the loss function to further enhance the smoothness constraint, avoiding folds or crossings in the final deformation:

\mathcal{L}_{Anti} = \sum_u R(\nabla \phi_T(u) + 1),  (7)

where ∇ϕT(u) is the gradient of the displacement map, and R(∇ϕT(u)+1) is an indicator-style function that penalizes gradients of the deformation field that produce folds: R(Q)=|Q| if Q ≤ 0, and R(Q)=0 otherwise.
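A minimal NumPy sketch of the three regularizers on ϕT (stored as an [X, Y, Z, 3] array) is given below; the finite-difference discretization via np.gradient and the per-component treatment of the antifolds term are assumptions, as the paper does not specify its discretization.

```python
# The three regularization losses of Section 2.2.2.
import numpy as np

def laplace_loss(phi):
    """Equation (5): squared second derivatives of each field component."""
    total = 0.0
    for d in range(3):                # field component
        for axis in range(3):         # spatial axis
            g = np.gradient(np.gradient(phi[..., d], axis=axis), axis=axis)
            total += np.sum(g ** 2)
    return total

def zero_loss(phi):
    """Equation (6): penalize large displacement magnitudes."""
    return np.sum(phi ** 2)

def antifold_loss(phi):
    """Equation (7): penalize folds via R(grad(phi) + 1)."""
    total = 0.0
    for d in range(3):
        q = np.gradient(phi[..., d], axis=d) + 1.0    # diagonal Jacobian entry
        total += np.sum(np.abs(np.minimum(q, 0.0)))   # R(Q)=|Q| if Q<=0 else 0
    return total
```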

The final loss function for training the S-Net is

\mathcal{L} = \mathcal{L}_{Sim} + \mathcal{L}_{Reg} = \mathcal{L}_{Sim}^{sym} + \mathcal{L}_{Sim}^{I_S \to I_T} + \mathcal{L}_{Sim}^{I_T \to I_S} + \alpha \mathcal{L}_{Laplace} + \beta \mathcal{L}_{Zero} + \gamma \mathcal{L}_{Anti},  (8)

where α, β, and γ balance the weight of each term. In this work, we set α=1 and γ=100 in our experiments. For the zero constraint term ℒZero, we set β to a small value (β=0.01), since a large value may reduce accuracy when estimating large deformations.
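Combining the pieces with the weights reported here (α=1, β=0.01, γ=100), and assuming the similarity and regularizer sketches above, the total loss can be written as:

```python
# Equation (8): similarity terms plus weighted regularizers. `sim` is the
# summed similarity loss sketched in Section 2.2.1.
def total_loss(sim, phi_T, alpha=1.0, beta=0.01, gamma=100.0):
    return (sim
            + alpha * laplace_loss(phi_T)
            + beta * zero_loss(phi_T)
            + gamma * antifold_loss(phi_T))
```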

2.3. Implementation and Training

The S-Net is implemented in Keras and trained on an NVIDIA Tesla V100 GPU with 32 GB of video memory. The network is trained using the Adam optimizer [29]. We use four public databases, i.e., LONI LPBA40 [30], IBSR18 (https://www.nitrc.org/projects/ibsr), CUMC12 [31], and MGH10 [32], in our experiments. All images were preprocessed using a standard pipeline, including skull stripping, resampling, and affine registration to the MNI152 template [33] using FLIRT [34]. After preprocessing, all images have the same size of 192 × 224 × 192 (voxel size 1 mm × 1 mm × 1 mm).

We used 30 subjects from the LONI LPBA40 dataset as training data, from which 30 × 30=900 image pairs can be derived. The remaining 10 images were used as testing data, yielding 10 × 9=90 image pairs. The other three datasets were also used as testing data to further evaluate the effectiveness of the proposed method, giving 18 × 17=306 image pairs from IBSR18, 12 × 11=132 image pairs from CUMC12, and 10 × 9=90 image pairs from MGH10. For more effective training, we trained S-Net in two stages. First, the network was pretrained on a small dataset, where we chose one image as the template and all remaining images as subjects, giving 1 × 30=30 image pairs. The network was trained for 200 iterations per image pair at a learning rate of 1e−4. Then, we paired every two images as template and subject, giving 30 × 30=900 image pairs for further training. In this stage, the network was trained for 20 epochs at an initial learning rate of 1e−5, decayed by a factor of 0.5 every 2 epochs.
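The two-stage schedule can be outlined as follows; `train_step` and the pair lists are placeholders for the authors' Keras/Adam training code, which is not shown in the paper.

```python
# Hypothetical outline of the two-stage training schedule of Section 2.3.
def lr_stage2(epoch, base=1e-5, decay=0.5, every=2):
    """Stage-2 learning rate: start at 1e-5, halve every 2 epochs."""
    return base * decay ** (epoch // every)

def train_step(pair, lr):
    pass  # placeholder for one Adam update of S-Net on a template-subject pair

pairs_one_template = [(0, j) for j in range(30)]              # 1 x 30 pairs
pairs_all = [(i, j) for i in range(30) for j in range(30)]    # 30 x 30 pairs

# Stage 1: one fixed template against all 30 subjects, 200 iterations per pair.
for pair in pairs_one_template:
    for _ in range(200):
        train_step(pair, lr=1e-4)

# Stage 2: all ordered image pairs, 20 epochs with the stepped decay.
for epoch in range(20):
    for pair in pairs_all:
        train_step(pair, lr=lr_stage2(epoch))
```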

3. Results

We compared our results with three state-of-the-art registration methods, namely, D. Demons [12], SyN [13], and VoxelMorph [20]. Demons and SyN are typical deformable registration methods that have been applied successfully to medical image registration tasks, and VoxelMorph is a learning-based framework that defines registration as a learnable parametric function. We measured the registration accuracy by the volumetric overlap of brain ROIs, computed as the Dice Similarity Coefficient (DSC) Dice(RiS, RiT)=2|RiS ∩ RiT|/(|RiS|+|RiT|) for each ROI i, with RiS and RiT being the corresponding anatomical region i in the subject and template images. Additionally, we evaluated the smoothness of the transformation map using the Jacobian determinant Jϕ(u), i.e., the determinant of the Jacobian of the transformation map at voxel u [35]; the transformation is considered smooth at u when Jϕ(u) > 0. The overall number of folds in an estimated displacement map is defined as |{u : Jϕ(u) < 0}|.
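A minimal NumPy sketch of both metrics follows, where ϕ is stored as a displacement field and the Jacobian of the mapping u ↦ u + ϕ(u) is computed by finite differences, one common discretization (the paper does not specify its own).

```python
# Dice overlap and fold counting for evaluation.
import numpy as np

def dice(seg_s, seg_t, label):
    """Dice(R_i^S, R_i^T) = 2|R_i^S ∩ R_i^T| / (|R_i^S| + |R_i^T|)."""
    a, b = seg_s == label, seg_t == label
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def count_folds(phi):
    """Number of voxels u with J_phi(u) < 0, i.e. |{u : J_phi(u) < 0}|."""
    # Stack dphi_d/dx_k into an [X, Y, Z, 3, 3] array of field gradients.
    grads = [np.stack(np.gradient(phi[..., d]), axis=-1) for d in range(3)]
    J = np.stack(grads, axis=-2) + np.eye(3)   # add identity for u itself
    return int((np.linalg.det(J) < 0).sum())
```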

The DSC scores and runtimes, compared with those of the state-of-the-art registration methods (Demons, SyN, and VoxelMorph), are shown in Table 1. The results show that the proposed method performs significantly better than VoxelMorph (a learning-based method without the symmetric training strategy). On some datasets, our approach even outperforms SyN, which is among the state-of-the-art brain image registration algorithms, while taking only about 3.6 seconds to register two brain volumes. Compared with the conventional methods, the learning-based methods have much shorter runtimes with little degradation in performance. In Table 2, we present the folds in the displacement maps estimated by the proposed method and the baseline methods. The results show that the displacement maps estimated by the proposed symmetric registration network are smoother than those of the model without the symmetric strategy, in most cases by a large margin.

Table 1. Dice score (%) for subject-to-subject alignment using Demons, SyN, VoxelMorph, and the proposed S-Net.

Dataset D. Demons SyN (CC) VoxelMorph (CC) VoxelMorph (MSE) Proposed method
LPBA40 68.7 ± 2.4 71.3 ± 1.8 71.2 ± 2.8 71.6 ± 2.4 71.8 ± 2.1
IBSR18 54.6 ± 2.2 57.4 ± 2.4 54.2 ± 3.4 55.2 ± 2.9 56.8 ± 2.5
CUMC12 53.1 ± 3.4 54.1 ± 2.8 51.8 ± 4.1 53.1 ± 3.5 54.4 ± 3.2
MGH10 60.4 ± 2.5 62.1 ± 2.4 59.6 ± 2.9 60.2 ± 2.6 62.4 ± 2.4
Time (s) 114 1330 0.31 0.31 3.6

Table 2. Folds (Jϕ(u) < 0) for subject-to-subject alignment using Demons, SyN, VoxelMorph, and the proposed S-Net. Folds refer to the average number of folds.

Dataset D. Demons SyN (CC) VoxelMorph (CC) VoxelMorph (MSE) Proposed method
LPBA40 13.71 ± 2.91 0 28.52 ± 14.92 44.04 ± 13.83 3.28 ± 0.78
IBSR18 15.59 ± 8.14 0 44.26 ± 15.31 67.57 ± 19.59 7.56 ± 1.86
CUMC12 21.02 ± 9.38 0 39.37 ± 11.65 48.92 ± 15.28 7.29 ± 1.47
MGH10 18.92 ± 6.54 0 42.17 ± 13.26 56.72 ± 16.76 6.63 ± 1.53

The respective results and intermediate results are also shown in Figures 3(b)–3(e) (final warped template image, middle warped subject image, middle warped template image, and final warped subject image, respectively). The S-Net works better than directly registering images in a single pathway: not only the registration accuracy but also the smoothness is largely improved. This indicates that the proposed symmetric training strategy can effectively estimate large local deformations, and the estimated field is smoother.

Figure 3. The results of the S-Net. From left to right: (a) subject image, (b) final warped template image, (c) middle warped subject image, (d) middle warped template image, (e) final warped subject image, (f) template image.


4. Discussion

S-Net learns image registration in an unsupervised, end-to-end fashion, using an image similarity metric for optimization, so training does not require known deformation fields, which are difficult to obtain for medical image registration. Furthermore, we evaluated the number of folds produced by our framework against traditional registration methods and single-direction deep-learning-based registration methods. The deformation maps estimated by the proposed S-Net tend to be smoother, since each symmetric displacement map only covers half of the pathway instead of one long pathway, which makes the smoothness easier to penalize. Experimental results showed that our method successfully reduces the folds of the estimated maps while providing more accurate registration results.

The total loss function of S-Net consists of six losses of two types (similarity and regularization). The weights of these losses (hyperparameters) are hard to balance during training. Therefore, we conducted experiments to determine the weights of the multiple losses (Figure 4). Setting α=1 and β=0.01 achieves good performance, and beyond γ=100, γ has little effect on the results. In our experiments, we set α=1, β=0.01, and γ=100. Balancing multiple losses is a common problem in deep-learning-based registration methods. In future work, we hope to learn such hyperparameters automatically.

Figure 4. Effect of varying the regularization parameters α, β, and γ on the Dice score. (a) The best results occur at α=1, with β and γ fixed at 0.01 and 100. (b) The best results occur at β=0.01, with α and γ fixed at 1 and 100. (c) The best results occur at γ=100, with α and β fixed at 1 and 0.01. As the weights increase further, the results rarely change.

5. Conclusion

We presented a new symmetric training strategy for an unsupervised deep-learning-based registration framework, which can better estimate the large local deformation during registration. In particular, we utilize a pseudomean as an intermediate target registration space, and a long deformation pathway can be divided into two short deformation pathways. Experimental results have shown promising registration performance for both accuracy and field smoothness.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grants nos. 81871508 and 61773246), the Major Program of Shandong Province Natural Science Foundation (Grant nos. ZR2019ZD04 and ZR2018ZB0419), and the Taishan Scholar Program of Shandong Province of China (Grant no. TSHW201502038).

Contributor Information

Yuanjie Zheng, Email: yjzheng@sdnu.edu.cn.

Weikuan Jia, Email: wkjia@sdnu.edu.cn.

Data Availability

The databases of LPBA40, IBSR18, CUMC12, and MGH10 can be downloaded from the registration grand challenge at https://continuousregistration.grand-challenge.org.

Conflicts of Interest

The authors declare that they have no conflicts of interest to report regarding the present study.

References

  • 1.Ausiello P., Ciaramella S., Garcia-Godoy F., et al. The effects of cavity-margin-angles and bolus stiffness on the mechanical behavior of indirect resin composite class II restorations. Dental Materials. 2017;33(1):e39–e47. doi: 10.1016/j.dental.2016.11.002. [DOI] [PubMed] [Google Scholar]
  • 2.Ausiello P., Ciaramella S., Fabianelli A., et al. Mechanical behavior of bulk direct composite versus block composite and lithium disilicate indirect class II restorations by CAD-FEM modeling. Dental Materials. 2017;33(6):690–701. doi: 10.1016/j.dental.2017.03.014. [DOI] [PubMed] [Google Scholar]
  • 3.De Santis R., Gloria A., Viglione S., et al. 3D laser scanning in conjunction with surface texturing to evaluate shift and reduction of the tibiofemoral contact area after meniscectomy. Journal of the Mechanical Behavior of Biomedical Materials. 2018;88:41–47. doi: 10.1016/j.jmbbm.2018.08.007. [DOI] [PubMed] [Google Scholar]
  • 4.Fucile P., Papallo I., Improta G., et al. Reverse engineering and additive manufacturing towards the design of 3D advanced scaffolds for hard tissue regeneration. Proceedings of the 2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT); June 2019; Naples, Italy. pp. 33–37. [Google Scholar]
  • 5.Long J., Shelhamer E., Darrell T. Fully convolutional networks for semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2015;39(4):640–651. doi: 10.1109/TPAMI.2016.2572683. [DOI] [PubMed] [Google Scholar]
  • 6.Ronneberger O., Fischer P., Brox T. U-net: convolutional networks for biomedical image segmentation. Proceedings of the Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015; October 2015; Munich, Germany. pp. 234–241. [DOI] [Google Scholar]
  • 7.Sotiras A., Davatzikos C., Paragios N. Deformable medical image registration: a survey. IEEE Transactions on Medical Imaging. 2013;32(7):1153–1190. doi: 10.1109/tmi.2013.2265603. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Bajcsy R., Kovačič S. Multiresolution elastic matching. Computer Vision, Graphics, and Image Processing. 1989;46(1):1–21. doi: 10.1016/s0734-189x(89)80014-3. [DOI] [Google Scholar]
  • 9.Thirion J.-P. Image matching as a diffusion process: an analogy with Maxwell’s demons. Medical Image Analysis. 1998;2(3):243–260. doi: 10.1016/s1361-8415(98)80022-4. [DOI] [PubMed] [Google Scholar]
  • 10.Hill D. L., Batchelor P. G., Holden M., Hawkes D. J. Medical image registration. Physics in Medicine & Biology. 2001;46(3):R1–R45. doi: 10.1088/0031-9155/46/3/201. [DOI] [PubMed] [Google Scholar]
  • 11.Avants B., Anderson C., Grossman M., Gee J. C. Spatiotemporal normalization for longitudinal analysis of gray matter atrophy in frontotemporal dementia. Medical Image Computing and Computer-Assisted Intervention-MICCAI 2007. 2007;10:303–310. doi: 10.1007/978-3-540-75759-7_37. [DOI] [PubMed] [Google Scholar]
  • 12.Vercauteren T., Pennec X., Perchant A., Ayache N. Diffeomorphic demons: efficient non-parametric image registration. NeuroImage. 2009;45(1):61–72. doi: 10.1016/j.neuroimage.2008.10.040. [DOI] [PubMed] [Google Scholar]
  • 13.Avants B. B., Epstein C. L., Grossman M., Gee J. C. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis. 2008;12(1):26–41. doi: 10.1016/j.media.2007.06.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Shen D., Davatzikos C. Hammer: hierarchical attribute matching mechanism for elastic registration. IEEE Transactions on Medical Imaging. 2002;21(11):1421–1439. doi: 10.1109/TMI.2002.803111. [DOI] [PubMed] [Google Scholar]
  • 15.Beg M. F., Miller M. I., Trouvé A., Younes L. Computing large deformation metric mappings via geodesic flows of diffeomorphisms. International Journal of Computer Vision. 2005;61(2):139–157. doi: 10.1023/b:visi.0000043755.93987.aa. [DOI] [Google Scholar]
  • 16.Fanti Z., Torres F., Hazan-Lasri E., et al. Improved surface-based registration of CT and intraoperative 3D ultrasound of bones. Journal of Healthcare Engineering. 2018;2018:2365178. doi: 10.1155/2018/2365178. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Shen D., Wu G., Heung-Il S. Deep learning in medical image analysis. Annual Review of Biomedical Engineering. 2017;19(1):221–248. doi: 10.1146/annurev-bioeng-071516-044442. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Wang J., Zhu H., Wang S. H., Zhang Y. D. A review of deep learning on medical image analysis. Mobile Networks and Applications. 2020;26:1–30. doi: 10.1007/s11036-020-01672-7. [DOI] [Google Scholar]
  • 19.Fan J., Cao X., Yap P.-T., Shen D. BIRNet: brain image registration using dual-supervised fully convolutional networks. Medical Image Analysis. 2019;54:193–206. doi: 10.1016/j.media.2019.03.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Balakrishnan G., Zhao A., Sabuncu M. R., Dalca A. V., Guttag J. An unsupervised learning model for deformable medical image registration. Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; June 2018; Salt Lake City, UT, USA. pp. 9252–9260. [DOI] [Google Scholar]
  • 21.De Vos B. D., Berendsen F. F., Viergever M. A., Staring M., Išgum I. End-to-end unsupervised deformable image registration with a convolutional neural network. Proceedings of the Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support DLMIA 2017, ML-CDS 2017; September 2017; Québec City, Canada. pp. 204–212. [DOI] [Google Scholar]
  • 22.Li H., Fan Y. Non-rigid image registration using self-supervised fully convolutional networks without training data. Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018); April 2018; Washington, DC, USA. pp. 1075–1078. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Jaderberg M., Simonyan K., Zisserman A., Kavukcuoglu K. Spatial transformer networks. NeurIPS Proceedings. 2015;2:2017–2025. [Google Scholar]
  • 24.Dalca A. V., Balakrishnan G., Guttag J., Sabuncu M. R. Unsupervised learning of probabilistic diffeomorphic registration for images and surfaces. Medical Image Analysis. 2019;57:226–236. doi: 10.1016/j.media.2019.07.006. [DOI] [PubMed] [Google Scholar]
  • 25.Shen Z., Han X., Xu Z., Niethammer M. Networks for joint affine and non-parametric image registration. Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); June 2019; Long Beach, CA, USA. pp. 4224–4233. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Mok T. C., Chung A. Fast symmetric diffeomorphic image registration with convolutional neural networks. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR); June 2020; Seattle, WA, USA. IEEE; pp. 4644–4653. [Google Scholar]
  • 27.Zhang J. Inverse-consistent deep networks for unsupervised deformable image registration. 2018. http://arxiv.org/abs/1809.03443.
  • 28.Wu G., Kim M., Wang Q., Shen D. S-HAMMER: hierarchical attribute-guided, symmetric diffeomorphic registration for MR brain images. Human Brain Mapping. 2014;35(3):1044–1060. doi: 10.1002/hbm.22233. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Kingma D. P., Ba J. Adam: a method for stochastic optimization. 2014. http://arxiv.org/abs/1412.6980.
  • 30.Shattuck D. W., Mirza M., Adisetiyo V., et al. Construction of a 3D probabilistic atlas of human cortical structures. NeuroImage. 2008;39(3):1064–1080. doi: 10.1016/j.neuroimage.2007.09.031. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Caviness V. S., Jr., Meyer J., Makris N., Kennedy D. N. MRI-based topographic parcellation of human neocortex: an anatomically specified method with estimate of reliability. Journal of Cognitive Neuroscience. 1996;8(6):566–587. doi: 10.1162/jocn.1996.8.6.566. [DOI] [PubMed] [Google Scholar]
  • 32.Klein A., Andersson J., Ardekani B. A., et al. Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration. NeuroImage. 2009;46(3):786–802. doi: 10.1016/j.neuroimage.2008.12.037. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Evans A. C., Collins D. L., MacDonald B. An MRI-based stereotactic brain atlas from 300 young normal subjects. Magnetic Resonance Scanning and Epilepsy. 1992;264:p. 408. doi: 10.1007/978-1-4615-2546-2_48. [DOI] [Google Scholar]
  • 34.Jenkinson M., Smith S. A global optimisation method for robust affine registration of brain images. Medical Image Analysis. 2001;5(2):143–156. doi: 10.1016/s1361-8415(01)00036-6. [DOI] [PubMed] [Google Scholar]
  • 35.Kybic J., Thevenaz P., Nirkko A., Unser M. Unwarping of unidirectionally distorted EPI images. IEEE Transactions on Medical Imaging. 2000;19(2):80–93. doi: 10.1109/42.836368. [DOI] [PubMed] [Google Scholar]
