Author manuscript; available in PMC: 2025 Feb 13.
Published in final edited form as: Proc Conf Empir Methods Nat Lang Process. 2022 Dec;2022:2873–2885. doi: 10.18653/v1/2022.emnlp-main.185

Figure 2:

The workflow of PromptEHR. The input longitudinal events are transformed into a code sequence using special tokens, e.g., <v> and </v> enclose events belonging to the same visit; <dx> and </dx> enclose the diagnosis events within that visit. Baseline features are encoded into prompt embeddings by two featurizers and then added to the token embeddings. The model decodes autoregressively and is trained with a causal language modeling loss.
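The visit-flattening step described in the caption can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the dict-based visit representation, and the example ICD codes are all assumptions made for clarity.

```python
def visits_to_code_sequence(visits):
    """Flatten a list of visits into one code sequence (illustrative sketch).

    Each visit is assumed to be a dict mapping a modality tag (e.g. "dx"
    for diagnoses) to a list of medical codes. Events in the same visit
    are wrapped in <v>...</v>; codes of the same modality within a visit
    are wrapped in modality tags such as <dx>...</dx>.
    """
    tokens = []
    for visit in visits:
        tokens.append("<v>")
        for modality, codes in visit.items():
            tokens.append(f"<{modality}>")   # open modality span
            tokens.extend(codes)             # the medical codes themselves
            tokens.append(f"</{modality}>")  # close modality span
        tokens.append("</v>")
    return tokens

# Example: two visits with diagnosis codes only (codes are hypothetical).
seq = visits_to_code_sequence([
    {"dx": ["E11.9", "I10"]},  # visit 1
    {"dx": ["I10"]},           # visit 2
])
```

The resulting token sequence would then be embedded, with the prompt embeddings from the baseline-feature featurizers added on top, before autoregressive decoding.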