Limnol Oceanogr. 2022 Jun 30;67(8):1647–1669. doi: 10.1002/lno.12101

Table 1.

Definitions of a few computational terms, highlighting important but subtle differences. Short, illustrative Python sketches of several of these terms follow the table.

Computer vision (CV): A broad subfield of computer science dedicated to using a computer to interpret images and video sequences.
Machine learning (ML): A set of statistical approaches that discern patterns in data automatically, learning from examples rather than from explicit human instructions.
Supervised ML: ML techniques that teach a computer to recognize patterns using a set of expert-curated examples, such as annotated images.
Unsupervised ML: ML methods that attempt to group data together without human intervention. Clustering algorithms are a common example. Their performance is often difficult to evaluate.
Training set: A collection of data annotated by human experts, used to teach a computer how to interpret information. Building this labeled dataset is often the most time-consuming and critical part of an ML workflow.
Validation set: A separate human-labeled dataset used to evaluate a trained system. These data are entirely independent of the training set and should represent conditions the system might encounter in the field. Also referred to as test data.
Feature-based learning: ML algorithms that operate on a reduced, hand-engineered feature space. Each data point is represented as a vector of measurements (features), which is used to tune the set of parameters governing the model's behavior.
Deep neural networks (DNNs): A type of representation-learning algorithm that learns directly from raw data. DNNs stack many mathematical abstractions on top of one another to connect input information to a desired output. Through iterative training, the system learns the most salient features of the input. Modern DNNs often have dozens of layers and millions, sometimes billions, of trainable weights.
Transfer learning: A shortcut for training DNNs that repurposes a network originally trained for a different task, typically by retraining only part of it on the new data.
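The following is a minimal sketch of supervised, feature-based learning, including the split into independent training and validation sets. It uses scikit-learn; the feature vectors and labels are synthetic stand-ins (hypothetical, not from the paper) for measurements extracted from annotated images.

```python
# Supervised, feature-based learning: train on labeled feature vectors,
# then evaluate on an independent validation (test) set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))             # each row: a hand-engineered feature vector
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder labels for two classes

# Hold out an independent validation set, never seen during training.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)               # supervised: learn from labeled examples

print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```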
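By contrast, an unsupervised method groups the same kind of feature vectors without any labels. The sketch below uses k-means clustering on synthetic data; note that nothing in the fit tells the algorithm what the groups mean, which is why such results are often difficult to evaluate.

```python
# Unsupervised ML: k-means groups unlabeled feature vectors into clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two synthetic "blobs" standing in for unlabeled image features.
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               rng.normal(5.0, 1.0, size=(200, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=1).fit(X)
print(kmeans.labels_[:10])  # cluster assignments; no ground truth was used
```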
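Finally, a sketch of transfer learning with a DNN, assuming PyTorch and a recent torchvision: a network pretrained on ImageNet is repurposed by freezing its feature-extraction layers and retraining only a new classification head. The 5-class task and the random batch are hypothetical placeholders.

```python
# Transfer learning: reuse a pretrained DNN, retrain only its final layer.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature-extraction layers.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head; only these weights will be trained.
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch of image-sized tensors.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 5, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```

Freezing the backbone is the cheapest variant; with more labeled data, one can instead fine-tune all layers at a small learning rate.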