
Figure 1:

Zero-shot performance of MedCLIP, ConVIRT (Zhang et al., 2020), and GLoRIA (Huang et al., 2021) when pre-trained on different amounts of data. ConVIRT and GLoRIA are trained on the MIMIC-CXR (369K) and CheXpert (191K) datasets, respectively. Our method achieves higher accuracy (ACC) than GLoRIA while using roughly 1/10 of the pre-training data.