Figure 2:

The workflow of PromptEHR. The input longitudinal events are transformed into a code sequence using special tokens: e.g., <v> and </v> enclose events belonging to the same visit, and <dx> and </dx> enclose the diagnosis events occurring in that visit. Baseline features are encoded into prompt embeddings by two featurizers and then added to the token embeddings. The model decodes autoregressively and is trained with a causal language modeling loss.
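The sketch below illustrates the two steps named in the caption, under stated assumptions: the function `serialize_visits`, the dictionary layout, the featurizer modules, and the example ICD-9 codes are all hypothetical stand-ins, not PromptEHR's actual interface.

```python
import torch
import torch.nn as nn

def serialize_visits(visits):
    """Flatten a patient's visits into a code sequence with special tokens:
    <v>...</v> delimits one visit, <dx>...</dx> its diagnosis events."""
    tokens = []
    for visit in visits:
        tokens += ["<v>", "<dx>", *visit["diagnoses"], "</dx>", "</v>"]
    return tokens

patient = [{"diagnoses": ["250.00", "401.9"]},  # visit 1 (hypothetical codes)
           {"diagnoses": ["428.0"]}]            # visit 2
print(serialize_visits(patient))
# ['<v>', '<dx>', '250.00', '401.9', '</dx>', '</v>',
#  '<v>', '<dx>', '428.0', '</dx>', '</v>']

# Prompt construction as described in the caption: two featurizers map
# numerical and categorical baseline features to prompt embeddings, which
# are added to the token embeddings before autoregressive decoding.
d_model, n_num, n_cat_values = 64, 4, 10
num_featurizer = nn.Linear(n_num, d_model)            # numerical features
cat_featurizer = nn.Embedding(n_cat_values, d_model)  # categorical features

x_num = torch.randn(1, n_num)    # e.g., age, lab values (dummy data)
x_cat = torch.tensor([[3, 7]])   # e.g., gender, ethnicity ids (dummy data)
prompt = num_featurizer(x_num) + cat_featurizer(x_cat).sum(dim=1)

token_emb = torch.randn(1, 11, d_model)   # embeddings of the code sequence
hidden = token_emb + prompt.unsqueeze(1)  # broadcast prompt over all positions
```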