Author manuscript; available in PMC: 2022 Nov 14.
Published in final edited form as: Domain Adapt Represent Transf (2022). 2022 Sep 15;13542:12–22. doi: 10.1007/978-3-031-16852-9_2

Fig. 1.

In medical imaging, good initialization is more vital for transformer-based models than for CNNs. When trained from scratch, transformers perform significantly worse than CNNs on all target tasks; however, with supervised or self-supervised pre-training on ImageNet, transformers can match the performance of CNNs, highlighting the importance of pre-training when using transformers for medical imaging tasks. We conduct a statistical analysis between the best of the six pre-trained transformer models and the best of the three pre-trained CNN models.
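The caption does not specify which statistical test underlies the comparison, so the following is only an illustrative sketch of one common choice: a two-sided paired t-test on per-run scores of the best transformer versus the best CNN. The function name `paired_t_test` and all score values are hypothetical, not taken from the paper.

```python
import math
import statistics

def paired_t_test(a, b):
    """Paired t-test on matched per-run scores.

    Returns (t statistic, degrees of freedom). Hypothetical helper for
    illustration only; the paper's actual test may differ.
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation (n - 1)
    t = mean / (sd / math.sqrt(n))
    return t, n - 1

# Hypothetical per-run AUC scores (e.g., five independent training runs).
transformer_auc = [0.982, 0.979, 0.984, 0.981, 0.980]
cnn_auc = [0.978, 0.977, 0.981, 0.979, 0.976]

t, df = paired_t_test(transformer_auc, cnn_auc)
print(round(t, 2), df)  # prints: 6.71 4
```

Pairing the runs (rather than treating the two score lists as independent samples) removes run-to-run variance shared by both models, which is why paired tests are common when comparing architectures trained under identical protocols.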