Author manuscript; available in PMC: 2023 Aug 11.
Published in final edited form as: Med Image Comput Comput Assist Interv. 2022 Sep 17;13436:66–77. doi: 10.1007/978-3-031-16446-0_7

Table 1.

Trading off performance for invertibility.

| Set | Width | Method | Opt. λ | Dice (↑) | Dice30 (↑) | % Folds (↓) | SD log\|Jφ\| (↓) |
|---|---|---|---|---|---|---|---|
| A | 64 | NGF [11] | 0.0 | 0.696 ± 0.023 | 0.686 | 0.141 ± 0.043 | 0.072 |
| | 64 | MI [38] | 0.175 | 0.748 ± 0.021 | 0.739 | 0.461 ± 0.100 | 0.089 |
| | 64 | LocalMI [8] | 0.125 | 0.745 ± 0.023 | 0.737 | 0.402 ± 0.076 | 0.083 |
| | 64 | MIND [14] | 0.3 | 0.726 ± 0.023 | 0.716 | 0.258 ± 0.051 | 0.079 |
| | 64 | CR (proposed) | 0.05 | 0.776 ± 0.020 | 0.768 | 0.451 ± 0.074 | 0.083 |
| | 64 | mCR (proposed) | 0.125 | 0.781 ± 0.020 | 0.774 | 0.475 ± 0.070 | 0.084 |
| B | 256 | SM-brains [15] | – | 0.755 ± 0.020 | 0.749 | 0.023 ± 0.008 | 0.048 |
| | 256 | SM-shapes [15] | – | 0.721 ± 0.021 | 0.715 | 0.017 ± 0.011 | 0.056 |
| | 256 | MI [38] | 0.2 | 0.759 ± 0.021 | 0.750 | 0.487 ± 0.099 | 0.090 |
| | 256 | CR (proposed) | 0.075 | 0.774 ± 0.020 | 0.765 | 0.315 ± 0.0576 | 0.078 |
| | 256 | mCR (proposed) | 0.15 | 0.780 ± 0.021 | 0.773 | 0.416 ± 0.065 | 0.082 |
| C | 64 | CR+MI | 0.3 | 0.751 ± 0.021 | 0.742 | 0.246 ± 0.059 | 0.080 |
| | 64 | CR+ExtNegs | 0.05 | 0.764 ± 0.020 | 0.756 | 0.489 ± 0.073 | 0.085 |
| | 64 | CR+MI+ExtNegs | 0.3 | 0.747 ± 0.021 | 0.739 | 0.214 ± 0.056 | 0.078 |
| | 64 | CR+SupPretrain | 0.025 | 0.778 ± 0.020 | 0.770 | 0.465 ± 0.075 | 0.084 |
| | 64 | mCR+SupPretrain | 0.075 | 0.778 ± 0.020 | 0.770 | 0.406 ± 0.067 | 0.081 |
| | 64 | mCR+RandAE | 0.1 | 0.778 ± 0.020 | 0.770 | 0.393 ± 0.070 | 0.080 |
| D | 256 | CR (10 int. steps) | 0.075 | 0.773 ± 0.021 | 0.764 | 0.341 ± 0.058 | 0.079 |
| | 256 | CR (16 int. steps) | 0.05 | 0.779 ± 0.020 | 0.772 | 0.462 ± 0.071 | 0.083 |
| | 256 | CR (32 int. steps) | 0.075 | 0.774 ± 0.020 | 0.765 | 0.315 ± 0.0576 | 0.078 |

Registration accuracy (Dice), robustness (Dice30), and deformation characteristics (% folding voxels, standard deviation of log|Jφ|) for all benchmarked methods, at the value of λ that keeps the percentage of folding voxels below 0.5% of all voxels, as in [30], such that high performance is achieved alongside negligible singularities. This table is best interpreted in conjunction with Fig. 3, where results for all λ values are visualized. A. CR and mCR obtain improved accuracy and robustness (A5–6) with deformation characteristics similar to the baseline losses (A1–4). B. At larger model widths, mCR and CR still obtain higher registration accuracy and robustness (B4–5), albeit at the cost of more irregular deformations than SM (B1). C. Adding external losses, negative samples, or both to CR harms performance (C1–3); supervised pretraining (C4–5) only marginally improves on training from scratch (A5–6); and random feature extraction slightly reduces Dice while smoothing displacements (C6). D. At a given λ, increasing the number of integration steps yields marginal Dice and smoothness improvements.
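For reference, the quantitative columns of the table can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: it assumes the common displacement-field convention φ(x) = x + u(x) with unit voxel spacing and simple finite-difference gradients, whereas the paper's exact discretization of the Jacobian may differ.

```python
import numpy as np

def dice_score(seg_a, seg_b):
    """Dice overlap between two binary segmentations (1.0 = perfect overlap)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def jacobian_metrics(disp):
    """% folding voxels and SD of log|J_phi| for a dense 3-D displacement field.

    disp: array of shape (3, D, H, W); the deformation is assumed to be
    phi(x) = x + disp(x), so J_phi = I + grad(u).
    """
    # Finite-difference gradients: grads[i][j] = d u_i / d x_j
    grads = [np.gradient(disp[i]) for i in range(3)]
    J = np.zeros(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (i == j)
    det = np.linalg.det(J)                            # |J_phi| at every voxel
    pct_folds = 100.0 * np.mean(det <= 0)             # "% Folds" column
    sd_log_det = float(np.std(np.log(det[det > 0])))  # "SD log|J_phi|" column
    return pct_folds, sd_log_det
```

An identity deformation (zero displacement) yields 0% folding voxels and an SD of log|Jφ| of 0; larger SD values indicate less smooth, more spatially varying volume change, which is why the caption treats this column as a regularity measure alongside the fold count.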