J Clin Med. 2022 Apr 18;11(8):2265. doi: 10.3390/jcm11082265

Table 1. Important AI-related terms and definitions.

Term: Description [References]
Machine learning (ML): Process by which an algorithm encodes statistical regularities from a database of examples into parameter weights for future predictions [11]
Deep learning (DL): Complex, multilayered ML approach composed of numerous computational layers, able to make accurate predictions [6]
Supervised learning: Training an ML algorithm on previously labeled training examples, consisting of inputs and desired outputs provided by an expert [7,11]
Unsupervised learning: Learning in which an ML algorithm discovers hidden patterns or data groupings without the need for human intervention [11]
Reinforcement learning: Learning strategies for acting optimally in certain situations with respect to a given criterion; such an algorithm obtains feedback on its performance during training through reward values derived from comparison with this criterion [7]
Model: A trained ML algorithm that can make predictions from unseen data [11]
Training: Feeding an ML algorithm examples from a training dataset to derive useful parameters for future predictions [11]
Features: Components of a dataset describing the characteristics of the studied observations [1]
Decision tree: Nonparametric supervised learning method visualized as a graph representing choices and their outcomes in the form of a tree; each tree consists of nodes (attributes of the group to be classified) and branches (values that a node can take) [12,13]
Random forest: Ensemble classification technique that uses “parallel ensembling”, fitting several decision tree classifiers in parallel on subsamples of the dataset [13]
Naïve Bayes (NB): Classification technique assuming independence among predictors (i.e., that the presence of a feature in a class is unrelated to the presence of any other feature) [12]
Logistic regression: Algorithm that uses a logistic function to estimate probabilities; it is suitable for linearly separable datasets but can overfit high-dimensional ones [13]
K-nearest neighbors (KNN): “Instance-based” or non-generalizing learning algorithm that does not construct a general internal model but instead stores all training instances in an n-dimensional space and classifies new data points based on similarity measures [13]
Support vector machine (SVM): Supervised learning model that can efficiently perform linear and nonlinear classifications by implicitly mapping its inputs into high-dimensional feature spaces [12]
Boosting: Family of algorithms that convert weak learners (classifiers only weakly correlated with the true classification) into strong learners (classifiers arbitrarily well correlated with the true classification), decreasing bias and variance [12]
Artificial neural network (ANN): ML technique that processes information through an architecture of many interconnected layers of units (“neurons”), each inter-neuronal connection extracting the desired parameters incrementally from the training data [6,11]
Deep neural network (DNN): DL architecture with multiple layers between the input and output layers [11]
Convolutional neural network (CNN): Class of DNN whose connectivity patterns and image processing resemble those of the visual cortex [11]
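The supervised classifiers listed in Table 1 (decision tree, random forest, naïve Bayes, logistic regression, KNN, SVM, and boosting) can be illustrated with a minimal sketch. The snippet below is not taken from the article; it assumes Python with scikit-learn installed and uses its bundled breast cancer dataset purely as example data.

# Minimal supervised-learning sketch (illustrative only, assuming scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Labeled examples: inputs (features) and desired outputs, as in supervised learning.
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)  # scaled up front for simplicity; in practice fit the scaler on the training split only
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "K-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "Support vector machine": SVC(kernel="rbf"),
    "Boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)      # training: derive parameters from labeled examples
    y_pred = model.predict(X_test)   # the trained model predicts on unseen data
    print(f"{name}: accuracy = {accuracy_score(y_test, y_pred):.3f}")

Each call to fit() corresponds to the training step defined in Table 1 (deriving parameters from labeled examples), and predict() applies the resulting model to unseen data.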
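Unsupervised learning, by contrast, uses no labels at all. The following k-means sketch (again illustrative only, assuming scikit-learn and the same example dataset) discovers two groupings directly from the feature matrix.

# Minimal unsupervised-learning sketch: k-means clustering without labels.
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

X, _ = load_breast_cancer(return_X_y=True)    # labels deliberately ignored
X_scaled = StandardScaler().fit_transform(X)  # scale features before clustering

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)
print("Cluster sizes:", {c: int((kmeans.labels_ == c).sum()) for c in (0, 1)})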
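For the neural-network entries (ANN, DNN), the sketch below uses scikit-learn's MLPClassifier as a simple stand-in, an assumption on our part rather than the deep architectures used in the cited studies, to show a network with hidden layers between input and output whose connection weights are adjusted during training.

# Minimal ANN sketch: a small feedforward network with two hidden layers.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Two hidden layers between input and output; connection weights are learned incrementally from the training data.
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)
print("ANN test accuracy:", round(ann.score(X_test, y_test), 3))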