
Table 1.

Comparison of supervised, few-shot, and zero-shot machine learning.

Learning approach | Description (a) | Examples and analogies
Supervised learning | Model is trained (“fine-tuned”) on a labeled dataset, mapping input data to the correct output by minimizing error between predicted and actual labels.
  • ML model to distinguish dogs from cats, trained on 1,000 pictures of dogs and 1,000 pictures of cats, each labeled accordingly

  • ML model to identify documented GOC discussions trained on 5,000 EHR notes, of which 300 are labeled as containing GOC discussions

Few-shot learning | Pre-trained model is fine-tuned on, or prompted with, a small number of labeled examples, relying on the model’s pre-training to apply previously learned patterns to a new task. Examples need not cover the full range of variability or noise expected in the target task, as the model leverages prior learning to generalize effectively.
  • ML model with generalized pre-training fine-tuned on just 6 pictures of dogs and cats to distinguish between them

  • A child learning to distinguish airplanes from helicopters after being shown 5 examples from a picture book

  • LLM prompted to identify documented GOC discussions after being given a small number of examples

Zero-shot prompting | Model is prompted to make predictions without any task-specific examples, relying entirely on the model’s pre-training to generalize to the new task.
  • Vision-language model with generalized pre-training asked to distinguish between dogs and cats, with no fine-tuning or task-specific examples

  • Resident physician asked to characterize an unusual chest x-ray finding

  • LLM prompted to identify documented GOC discussions by definition alone, with no provided examples

Abbreviations: ML, machine learning; LLM, large language model; GOC, goals of care.

(a) In machine learning, pre-training refers to the initial process of training a model on a large, general dataset to learn patterns, representations, and features that are broadly applicable across tasks. Pre-training is often performed by the model’s creators. Fine-tuning, in contrast, refers to the subsequent adjustment of a pre-trained model on a smaller, task-specific dataset, and is often performed by those adapting the model for a particular use.
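
To make the contrast between the few-shot and zero-shot rows of Table 1 concrete, the sketch below builds both kinds of prompts for the GOC-discussion task. It is a minimal illustration rather than the study's implementation: the task definition, example notes, labels, and helper names are hypothetical placeholders, and the resulting prompt string would be passed to whichever LLM client is actually used.

```python
# Minimal sketch contrasting zero-shot and few-shot prompting for detecting
# documented goals-of-care (GOC) discussions in EHR notes. All note texts,
# labels, and the task definition below are hypothetical placeholders.

TASK_DEFINITION = (
    "A goals-of-care (GOC) discussion is a documented conversation about a "
    "patient's values, prognosis, or preferences for future medical care. "
    "Answer 'yes' if the note documents a GOC discussion, otherwise 'no'."
)

# Hypothetical labeled examples, used only in the few-shot condition.
FEW_SHOT_EXAMPLES = [
    ("Discussed prognosis with patient and family; patient prefers "
     "comfort-focused care and does not want readmission to the ICU.", "yes"),
    ("Patient seen for routine diabetes follow-up; medications refilled.", "no"),
]


def zero_shot_prompt(note: str) -> str:
    """Definition only, no examples: the model relies entirely on pre-training."""
    return f"{TASK_DEFINITION}\n\nNote:\n{note}\n\nAnswer:"


def few_shot_prompt(note: str) -> str:
    """Definition plus a handful of labeled examples prepended to the query."""
    shots = "\n\n".join(
        f"Note:\n{text}\nAnswer: {label}" for text, label in FEW_SHOT_EXAMPLES
    )
    return f"{TASK_DEFINITION}\n\n{shots}\n\nNote:\n{note}\n\nAnswer:"


if __name__ == "__main__":
    new_note = ("Family meeting held to review code status; "
                "patient elected DNR/DNI.")
    print(zero_shot_prompt(new_note))
    print("---")
    print(few_shot_prompt(new_note))
```

In both conditions the model's weights are unchanged; the only difference is whether labeled examples are included in the prompt, which is what distinguishes few-shot prompting from zero-shot prompting in Table 1.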