Author manuscript; available in PMC: 2024 May 15.
Published in final edited form as: Med Image Comput Comput Assist Interv. 2023 Oct 1;14220:651–662. doi: 10.1007/978-3-031-43907-0_62

Table 2.

Our Ark-5 and Ark-6 outperform SOTA ImageNet-pretrained models and the self-supervised domain-adapted model that uses even more training data, highlighting the importance of accruing and reusing the knowledge in expert labels from diverse datasets for both classification and segmentation. The best result is bolded and the second best underlined; a statistical analysis compares the best against each of the others, with green-highlighted boxes indicating no statistically significant difference at level p = 0.05.

Classification task

| Initialization | Pretraining | 1. CXPT | 2. NIHC | 3. RSNA | 4. VINC | 5. NIHS |
|---|---|---|---|---|---|---|
| Random | - | 83.39±0.84 | 77.04±0.34 | 70.02±0.42 | 78.49±1.00 | 92.52±4.98 |
| Supervised | IN | 87.80±0.42 | 81.73±0.14 | 73.44±0.46 | 90.35±0.31 | 93.35±0.77 |
| SimMIM | IN | 88.16±0.31 | 81.95±0.15 | 73.66±0.34 | 90.24±0.35 | 94.12±0.96 |
| SimMIM | IN→CXR(926K) | 88.37±0.40 | 83.04±0.15 | 74.09±0.39 | 91.71±1.04 | 95.76±1.79 |
| Ark-5 (ours) | IN→CXR(335K) | 88.73±0.20 | 82.87±0.13 | 74.73±0.59 | 94.67±0.33 | 98.92±0.21 |
| Ark-6 (ours) | IN→CXR(704K) | 89.14±0.22 | 83.05±0.09 | 74.76±0.35 | 95.07±0.16 | 98.99±0.16 |
Segmentation task

| Initialization | Pretraining | 6. NIHM | 7. JSRTLung | 8. JSRTHeart | 9. JSRTClavicle | 10. VINR |
|---|---|---|---|---|---|---|
| Random | - | 96.32±0.18 | 96.32±0.10 | 92.35±0.20 | 85.56±0.71 | 56.46±0.62 |
| Supervised | IN | 97.23±0.09 | 97.13±0.07 | 92.58±0.29 | 86.94±0.69 | 62.40±0.80 |
| SimMIM | IN | 97.12±0.14 | 96.90±0.08 | 93.53±0.11 | 87.18±0.63 | 61.64±0.69 |
| SimMIM | IN→CXR(926K) | 97.10±0.40 | 96.93±0.12 | 93.75±0.36 | 88.87±1.06 | 63.46±0.89 |
| Ark-5 (ours) | IN→CXR(335K) | 97.65±0.17 | 97.41±0.04 | 94.16±0.66 | 90.01±0.35 | 63.96±0.30 |
| Ark-6 (ours) | IN→CXR(704K) | 97.68±0.03 | 97.48±0.08 | 94.62±0.16 | 90.05±0.15 | 63.70±0.23 |
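The caption's significance test can be sketched from the table's summary statistics alone. The sketch below is an assumption: the caption does not name the exact test, so a two-sample t-test from mean±std is used here as a common choice, and the number of runs per method (10) is a placeholder not stated in the table.

```python
from scipy.stats import ttest_ind_from_stats

N_RUNS = 10  # assumed number of independent runs behind each mean±std

def compare(best_mean, best_std, other_mean, other_std, n=N_RUNS, alpha=0.05):
    """Two-sample t-test from summary statistics; returns (p-value, significant?)."""
    _, p = ttest_ind_from_stats(best_mean, best_std, n,
                                other_mean, other_std, n)
    return p, p < alpha

# CXPT classification: Ark-6 (89.14±0.22) vs. SimMIM IN→CXR(926K) (88.37±0.40)
p1, sig1 = compare(89.14, 0.22, 88.37, 0.40)

# NIHM segmentation: Ark-6 (97.68±0.03) vs. Ark-5 (97.65±0.17)
p2, sig2 = compare(97.68, 0.03, 97.65, 0.17)

print(f"CXPT  Ark-6 vs SimMIM: p={p1:.4f}, significant={sig1}")
print(f"NIHM  Ark-6 vs Ark-5:  p={p2:.4f}, significant={sig2}")
```

Under these assumptions the first comparison comes out significant while the second does not, which matches the caption's pattern of green-highlighted (not significantly different) runner-up boxes for close scores.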