Published in final edited form as: Nat Med. 2024 Feb 27;30(4):1134–1142. doi: 10.1038/s41591-024-02855-5

Extended Data Table 1 | Datasets, task instructions

Model        Context length (tokens)   Parameters   Proprietary?   Seq2seq   Autoregressive
FLAN-T5      512                       2.7B         -              ✓         -
FLAN-UL2     2,048                     20B          -              ✓         -
Alpaca       2,048                     7B           -              -         ✓
Med-Alpaca   2,048                     7B           -              -         ✓
Vicuna       2,048                     7B           -              -         ✓
Llama-2      4,096                     7B, 13B      -              -         ✓
GPT-3.5      16,384                    175B         ✓              -         ✓
GPT-4        32,768*                   unknown      ✓              -         ✓
* The context length of GPT-4 has since been increased to 128,000.

We quantitatively evaluated eight models, including state-of-the-art seq2seq and autoregressive models. Unless indicated as proprietary, models are open source.
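For readers who want to work with this roster programmatically, the sketch below encodes the table as a simple Python data structure and filters models by context window. It is illustrative only: the `ModelSpec` fields, the `models_fitting` helper, and the example prompt length are hypothetical conveniences, not part of the paper's code; the values are taken directly from the table above.

```python
# Illustrative sketch: Extended Data Table 1 encoded as a Python data structure.
# Field names and the helper function are hypothetical, not from the paper.
from dataclasses import dataclass


@dataclass
class ModelSpec:
    name: str
    context_tokens: int   # maximum context length in tokens
    parameters: str       # parameter count as reported (e.g. "7B, 13B", "unknown")
    proprietary: bool     # False = open source
    architecture: str     # "seq2seq" or "autoregressive"


MODELS = [
    ModelSpec("FLAN-T5",    512,    "2.7B",    False, "seq2seq"),
    ModelSpec("FLAN-UL2",   2_048,  "20B",     False, "seq2seq"),
    ModelSpec("Alpaca",     2_048,  "7B",      False, "autoregressive"),
    ModelSpec("Med-Alpaca", 2_048,  "7B",      False, "autoregressive"),
    ModelSpec("Vicuna",     2_048,  "7B",      False, "autoregressive"),
    ModelSpec("Llama-2",    4_096,  "7B, 13B", False, "autoregressive"),
    ModelSpec("GPT-3.5",    16_384, "175B",    True,  "autoregressive"),
    ModelSpec("GPT-4",      32_768, "unknown", True,  "autoregressive"),  # later extended to 128,000 tokens
]


def models_fitting(prompt_tokens: int) -> list[str]:
    """Return names of models whose context window can hold a prompt of the given length."""
    return [m.name for m in MODELS if m.context_tokens >= prompt_tokens]


if __name__ == "__main__":
    # A 3,000-token prompt only fits the three largest context windows.
    print(models_fitting(3_000))  # ['Llama-2', 'GPT-3.5', 'GPT-4']
```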