Table 1.
Comparison of supervised, few-shot, and zero-shot machine learning.
| Learning approach | Description^a | Examples and analogies |
|---|---|---|
| Supervised learning | Model is trained (“fine-tuned”) on a labeled dataset, learning to map input data to the correct output by minimizing the error between predicted and actual labels. | |
| Few-shot learning | Pre-trained model is fine-tuned on, or prompted with, a small number of labeled examples, relying on the model’s pre-training to apply previously learned patterns to a new task. Examples need not cover the full range of variability or noise expected in the target task, as the model leverages prior learning to generalize effectively. | |
| Zero-shot prompting | Model is prompted to make predictions without any task-specific examples, relying entirely on the model’s pre-training to generalize to the new task. | |
Abbreviations: ML, machine learning; LLM, large language model; GOC, goals of care.
^a In machine learning, pre-training refers to the initial process of training a model on a large, general dataset to learn patterns, representations, and features that are broadly applicable across tasks; it is typically performed by the model’s creators. Fine-tuning, by contrast, refers to the subsequent adjustment of a pre-trained model on a smaller, task-specific dataset, often performed by those adapting the model for a particular use.
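To make the footnote’s distinction concrete, the sketch below loads a pre-trained text encoder and performs a single fine-tuning gradient step on one labeled example. It is a minimal sketch, not a method from this paper: the checkpoint name (`bert-base-uncased`), the example sentence, and the binary label scheme are illustrative assumptions.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Pre-training has already been performed by the model's creators;
# loading the checkpoint downloads those general-purpose weights.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Fine-tuning: adjust the pre-trained weights on a small,
# task-specific labeled dataset (a single gradient step shown).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

batch = tokenizer(
    ["Discussed code status and goals of care with the patient's family."],
    return_tensors="pt",
    padding=True,
)
labels = torch.tensor([1])  # hypothetical label: 1 = GOC discussion documented

outputs = model(**batch, labels=labels)  # forward pass computes the loss
outputs.loss.backward()                  # backpropagate task-specific error
optimizer.step()                         # update the pre-trained weights
optimizer.zero_grad()
```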
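Similarly, the difference between the few-shot and zero-shot approaches in Table 1 comes down to whether labeled examples are included in the prompt. The following sketch assumes, purely for illustration, a goals-of-care (GOC) classification task and hypothetical prompt wording; the function names are not from any particular library.

```python
def zero_shot_prompt(note: str) -> str:
    # Zero-shot: instruction only; the model must rely entirely
    # on its pre-training to perform the new task.
    return (
        "Does the following clinical note document a goals-of-care "
        f"discussion? Answer Yes or No.\n\nNote: {note}\nAnswer:"
    )


def few_shot_prompt(note: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: the same instruction, preceded by a handful of
    # labeled (note, answer) pairs drawn from the target task.
    shots = "\n\n".join(
        f"Note: {ex_note}\nAnswer: {ex_label}" for ex_note, ex_label in examples
    )
    return (
        "Does the following clinical note document a goals-of-care "
        "discussion? Answer Yes or No.\n\n"
        f"{shots}\n\nNote: {note}\nAnswer:"
    )


# Example usage with two hypothetical labeled shots:
prompt = few_shot_prompt(
    "Family meeting held to discuss prognosis and treatment preferences.",
    examples=[
        ("Patient prefers comfort-focused care going forward.", "Yes"),
        ("Chest X-ray shows no acute cardiopulmonary findings.", "No"),
    ],
)
```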