(Minimal, illustrative code sketches for each algorithm in this table follow the table.)
Algorithm (references) | Description
Bayes [17, 19, 20, 22] | Estimates the probability of data patterns belonging to a specific class
Boosting: functional data boosting (FDboost) [13]; gradient boosting (GB) [24, 28]; extreme gradient boosting regression (XGBoost) [27, 31] | Merges weak classifiers into strong ones
Deep learning neural network (DLNN) [10, 11, 14, 16, 18, 34, 35] | Similar to multiple linear regression, it contains layers of interconnected nodes; a subclass of the NN is the convolutional neural network (CNN)
Decision trees (DT) [14, 22, 29, 34] | Gradually rejects candidate classes in a multistage decision system until a final class is accepted; in pain medicine, decision tree algorithms such as classification and regression trees have been used
k-means clustering [14] | Partitions data points into k clusters, assigning each point to the cluster with the nearest mean
k-nearest neighbors (kNN) [10, 22, 24, 27, 29, 32] | Assigns a data pattern to a class based on its distance to the training patterns of that class
Multilayer perceptron (MLP) [9, 22] | Trains on a set of input data patterns to predict/classify the output class
Random forest (RF) [14, 15, 28, 29, 31, 32] | Builds and merges multiple decision trees to provide a more accurate prediction
Regression: kernel ridge regression (KRR) [30]; elastic net (EN) [23, 28]; generalized linear mixed models (GLMMs) based on repeated data points; Lasso [15, 24]; least squares (LS) [28]; linear regression (LiR) [27, 33]; logistic regression (LoR) [10, 15, 29, 31, 32]; ridge regression (RR) [28] | Predicts the probability of agreement from continuous data points
Support vector machine (SVM) [9, 12, 15, 21, 22, 25–27, 29, 32, 34] | Creates a hyperplane to separate two classes; the hyperplane is found by optimizing a cost function
Multi-subject dictionary learning (MSDL) [16] | A feature-learning method in which a training example is represented as a linear combination of basis functions whose coefficients are assumed to form a sparse matrix
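The minimal sketches below illustrate each row of the table in order, on synthetic data. The library choices (scikit-learn, and PyTorch for the CNN) and all parameter values are illustrative assumptions, not settings from the cited studies. First, a Bayesian classifier, here a Gaussian naive Bayes model estimating per-class probabilities:

```python
# Minimal sketch of Bayesian classification (Gaussian naive Bayes; scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = GaussianNB().fit(X, y)
# predict_proba returns, for each data pattern, the estimated probability of each class
print(clf.predict_proba(X[:3]))
```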
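For boosting, a sketch with scikit-learn's GradientBoostingClassifier; XGBoost exposes a similar fit/predict interface. The number of stages and learning rate are arbitrary placeholders:

```python
# Minimal sketch of gradient boosting: each stage fits a shallow tree (weak learner)
# to the errors of the ensemble so far, merging weak classifiers into a strong one.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
gb = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3,
                                random_state=0).fit(X, y)
print(gb.score(X, y))  # training accuracy of the boosted ensemble
```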
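For the deep learning entry, a tiny 1-D CNN in PyTorch (an assumed library choice); the layer sizes and input length are arbitrary:

```python
# Minimal sketch of a convolutional neural network for signal classification (PyTorch assumed).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool1d(1)   # collapse the time axis
        self.fc = nn.Linear(8, n_classes)     # interconnected nodes in the output layer

    def forward(self, x):                     # x: (batch, 1, length)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)          # (batch, 8)
        return self.fc(x)                     # class scores

logits = TinyCNN()(torch.randn(4, 1, 32))    # four synthetic signals of length 32
print(logits.shape)                          # torch.Size([4, 2])
```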
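A decision tree sketch in the CART family mentioned in the table; depth and criterion are illustrative:

```python
# Minimal sketch of a CART-style decision tree: each split narrows the candidate
# classes until a leaf accepts a final class (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
tree = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0).fit(X, y)
print(tree.predict(X[:5]))
```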
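A k-means sketch; the choice of k = 3 is arbitrary:

```python
# Minimal sketch of k-means: each point is assigned to the cluster with the
# nearest mean (scikit-learn assumed).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 2))
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)   # the three cluster means
```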
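A kNN sketch; k = 5 is an arbitrary choice:

```python
# Minimal sketch of kNN: a pattern takes the majority class of its k closest
# training patterns, i.e. classification by distance (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict(X[:3]))
```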
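An MLP sketch; the hidden-layer sizes are placeholders:

```python
# Minimal sketch of a multilayer perceptron trained on input patterns to
# predict the output class (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0).fit(X, y)
print(mlp.score(X, y))
```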
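A random forest sketch; the ensemble size is arbitrary:

```python
# Minimal sketch of a random forest: many decorrelated trees are built and their
# votes merged for a more accurate prediction than a single tree (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(rf.score(X, y))
```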
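For the regression family, a sketch of three of the listed penalized variants (ridge, Lasso, elastic net); the alpha values are arbitrary:

```python
# Minimal sketch of penalized linear regression on continuous data points
# (scikit-learn assumed).
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso, ElasticNet

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)
for model in (Ridge(alpha=1.0), Lasso(alpha=0.1), ElasticNet(alpha=0.1, l1_ratio=0.5)):
    model.fit(X, y)
    print(type(model).__name__, round(model.score(X, y), 3))  # R^2 on training data
```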
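An SVM sketch; the linear kernel and C value are illustrative:

```python
# Minimal sketch of an SVM: the separating hyperplane is found by optimizing a
# cost function that maximizes the margin between two classes (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
svm = SVC(kernel="linear", C=1.0).fit(X, y)
print(svm.score(X, y))
```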
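Finally, a dictionary learning sketch. Note this is generic single-set dictionary learning, not the multi-subject MSDL variant itself; it only illustrates the sparse-coding idea in the table row, and the component count and solver are arbitrary:

```python
# Minimal sketch of dictionary learning: each sample is coded as a sparse linear
# combination of learned basis functions (scikit-learn assumed).
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
dl = DictionaryLearning(n_components=8, transform_algorithm="lasso_lars", random_state=0)
codes = dl.fit_transform(X)        # sparse coefficient matrix, shape (100, 8)
print((codes != 0).mean())         # fraction of nonzero coefficients
```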