Author manuscript; available in PMC: 2022 Dec 1.
Published in final edited form as: Proc IEEE Int Conf Data Min. 2021 Dec;2021:857–866. doi: 10.1109/icdm51629.2021.00097

Fig. 1.

An illustration of our SCEHR. We propose a general supervised contrastive learning loss, ContrastiveCrossEntropy + λ · SupervisedContrastiveRegularizer, for clinical risk prediction problems using longitudinal electronic health records. The overall goal is to improve the performance of binary classification (e.g., in-hospital mortality prediction) and multi-label classification (e.g., phenotyping) by pulling (→←) similar samples closer and pushing (←→) dissimilar samples apart. ContrastiveCrossEntropy contrasts sample representations with learned positive and negative anchors, and SupervisedContrastiveRegularizer contrasts sample representations with the others in a mini-batch according to their labels. For brevity, we highlight only the contrastive pulling and pushing forces associated with sample i in a mini-batch consisting of two positive samples and three negative samples.
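The excerpt does not spell out the regularizer's formula; as a rough sketch, the in-batch "pull similar / push dissimilar" term described above can be implemented along the lines of a standard supervised contrastive loss over a mini-batch, assuming L2-normalized embeddings and a cosine-similarity/temperature formulation (the function name and all details below are illustrative, not the authors' exact implementation):

```python
import numpy as np

def supervised_contrastive_regularizer(z, labels, tau=0.1):
    """Illustrative in-batch supervised contrastive term.

    z      : (n, d) array of sample embeddings (will be L2-normalized).
    labels : (n,) integer class labels.
    For each anchor, same-label samples act as positives (pulled closer)
    and all other samples in the batch act as negatives (pushed apart).
    """
    labels = np.asarray(labels)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = (z @ z.T) / tau                          # pairwise similarities / temperature
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # numerically stabilized softmax denominator, excluding self-similarity
    exp_sim = np.exp(sim - sim.max(axis=1, keepdims=True))
    exp_sim[self_mask] = 0.0
    log_prob = np.log(exp_sim / exp_sim.sum(axis=1, keepdims=True) + 1e-12)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                         # anchors with at least one positive
    loss = -(log_prob * pos_mask).sum(axis=1)[valid] / pos_counts[valid]
    return loss.mean()
```

Under this formulation, a batch whose labels align with well-separated embedding clusters yields a lower loss than the same embeddings with mismatched labels, which is exactly the pulling/pushing behavior the figure depicts.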