PeerJ Comput Sci. 2022 Jul 19;8:e1045. doi: 10.7717/peerj-cs.1045

Table 7. Performance comparison on the LUNA 2016 dataset.

Pretext tasks marked with an asterisk (*) are reproductions by the author cited in the row rather than by the task's original proposers. Pretext tasks marked with two asterisks (**) are implemented using different backbones.

| No. | Author | Pretext task | Category | Random init. (AUC) | SSL (AUC) |
|----:|--------|--------------|----------|-------------------:|----------:|
| 1 | Zhu et al. (2020b) | TCPC** | Contrastive | 0.982 | 0.996 |
| 2 | Zhu et al. (2020b) | TCPC** | Contrastive | 0.911 | 0.987 |
| 3 | Haghighi et al. (2020) | Semantic Genesis | Multi-tasking | 0.943 | 0.985 |
| 4 | Zhou et al. (2019) | Models Genesis | Generative | 0.942 | 0.982 |
| 5 | Haghighi et al. (2020) | Rubik Cube* | Predictive | 0.943 | 0.955 |
| 6 | Haghighi et al. (2020) | Context Restoration* | Generative | 0.943 | 0.919 |
| 7 | Haghighi et al. (2020) | Image Inpainting* | Generative | 0.943 | 0.915 |
| 8 | Haghighi et al. (2020) | Auto-encoder* | Generative | 0.943 | 0.884 |
| 9 | Tajbakhsh et al. (2019) | 3D patch reconstruction | Generative | 0.724 | 0.739 |
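For context, the AUC pairs in Table 7 are typically produced by fine-tuning the same backbone twice, once from random initialization and once from SSL-pretrained weights, and scoring each model's held-out predictions. The sketch below shows that scoring step only; it is a minimal illustration, not any cited author's pipeline, and the prediction arrays are hypothetical placeholders standing in for real model outputs on LUNA 2016.

```python
# Minimal sketch of the evaluation behind Table 7: the same backbone is
# fine-tuned twice (random init. vs. SSL-pretrained weights) and each run's
# held-out predictions are scored with ROC AUC. The arrays below are
# hypothetical stand-ins for real model outputs, not actual LUNA 2016 results.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Ground-truth labels for a hypothetical held-out split (1 = nodule).
y_true = rng.integers(0, 2, size=1000)

# Placeholder probability outputs from the two fine-tuned models; in practice
# these come from each network's classification head on the test set.
p_random_init = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=1000), 0, 1)
p_ssl_pretrained = np.clip(y_true * 0.8 + rng.normal(0.15, 0.15, size=1000), 0, 1)

# One AUC per training regime, matching the table's two result columns.
print(f"Random init.: AUC = {roc_auc_score(y_true, p_random_init):.3f}")
print(f"SSL:          AUC = {roc_auc_score(y_true, p_ssl_pretrained):.3f}")
```

AUC is the usual choice here because nodule classification is class-imbalanced, and a threshold-free ranking metric makes the random-init and SSL columns directly comparable.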