Table 1.
Machine learning methods in DBS
| Type/purpose | Method name |
|---|---|
| **Unsupervised** | |
| Clustering | Gaussian mixture model; K-means |
| Dimensionality reduction/feature selection | Linear discriminant analysis (LDA); Principal component analysis (PCA): kernel PCA; t-distributed stochastic neighbour embedding (t-SNE) |
| **Supervised** | |
| Classification | AdaBoost; Decision tree (DT): oblique DT; Gradient boosting machine: XGBM, extreme gradient boosted trees; Hidden Markov model (HMM); K-nearest neighbour (KNN); Logistic regression (LR): L1 logistic/LASSO, L2 logistic/ridge; Naïve Bayes (NB): conditional and Gaussian; Neural networks (NN): multilayer perceptron, shallow NN, convolutional NN (CNN), deep NN, LAMSTAR NN, recurrent networks; Random forest (RF): unsupervised RF; Support-vector machine (SVM): SVM based on linear and radial basis function (RBF) kernels |
| Regression/time series | Granger causality; Linear regression; Kalman filters; Recurrent networks; Volterra kernels |
The methods are organized according to the following categories: Clustering (Ct) and Dimensionality Reduction/Feature Selection (DR/FS) for unsupervised learning; Classification (Cf) and Regression/Time Series (R/TS) for supervised learning. Note that some methods can be adapted to serve different purposes; recurrent networks, for example, can be used for either classification or regression.
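To make the clustering category concrete, the sketch below implements a minimal one-dimensional K-means (one of the unsupervised methods listed in Table 1) in pure Python. The data, function name, and parameters are illustrative assumptions, not drawn from any DBS study.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal 1-D K-means: alternate cluster assignment and centroid update.

    Illustrative sketch only; real applications would use a library
    implementation (e.g. scikit-learn) on multidimensional features.
    """
    rng = random.Random(seed)
    # Initialize centroids with k distinct data points.
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated groups around 1.0 and 10.0 (synthetic toy data).
data = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]
print(kmeans(data, 2))  # two centroids, one near 1.0 and one near 10.0
```

The same alternating assign/update structure underlies the Gaussian mixture model from the table, which replaces hard assignments with probabilistic (soft) ones.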