Front Comput Neurosci. 2022 Jun 16;16:909553. doi: 10.3389/fncom.2022.909553

TABLE 4.

Test results for each held-out participant after training on data from the other seven participants.

Participant    DT     RF     ADB    LR     MLP    SVM    KNN    LDA    LR-CNN  CNN    LSTM
1              0.759  0.825  0.828  0.793  0.856  0.829  0.806  0.841  0.923   0.898  0.886
2              0.737  0.836  0.828  0.813  0.863  0.837  0.819  0.824  0.932   0.883  0.877
3              0.742  0.831  0.829  0.798  0.856  0.835  0.816  0.823  0.919   0.869  0.855
4              0.716  0.822  0.826  0.816  0.853  0.840  0.814  0.854  0.932   0.889  0.883
5              0.732  0.836  0.833  0.822  0.833  0.833  0.811  0.821  0.917   0.796  0.811
6              0.721  0.820  0.827  0.821  0.843  0.836  0.810  0.826  0.926   0.913  0.878
7              0.727  0.843  0.826  0.816  0.852  0.842  0.812  0.822  0.862   0.877  0.858
8              0.712  0.832  0.829  0.794  0.829  0.828  0.821  0.829  0.921   0.881  0.862

DT, decision tree; RF, random forest; ADB, AdaBoost; LR, logistic regression; MLP, multilayer perceptron; SVM, support vector machine; KNN, k-nearest neighbor; LDA, linear discriminant analysis; LR-CNN, logistic regression and convolutional neural network; CNN, convolutional neural network; LSTM, long short-term memory.
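The evaluation protocol behind this table is a leave-one-participant-out scheme: each model is trained on data from seven participants and tested on the eighth. A minimal sketch of that loop is shown below, using synthetic stand-in data and a simple nearest-centroid classifier in place of the models in the table; the data shapes, feature counts, and classifier are all hypothetical, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 8 participants x 40 trials x 4 features,
# with binary labels and an artificial class separation.
n_participants, n_trials, n_features = 8, 40, 4
X = rng.normal(size=(n_participants, n_trials, n_features))
y = rng.integers(0, 2, size=(n_participants, n_trials))
X += y[..., None] * 1.5  # shift class-1 trials so the classes are separable

def nearest_centroid_fit(X_tr, y_tr):
    """Store the mean feature vector (centroid) of each class."""
    return {c: X_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)}

def nearest_centroid_predict(centroids, X_te):
    """Assign each test trial to the class with the closest centroid."""
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X_te - centroids[c], axis=1)
                      for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Leave-one-participant-out loop: hold out one participant, train on the rest.
accuracies = {}
for held_out in range(n_participants):
    train_idx = [p for p in range(n_participants) if p != held_out]
    X_tr = X[train_idx].reshape(-1, n_features)
    y_tr = y[train_idx].reshape(-1)
    model = nearest_centroid_fit(X_tr, y_tr)
    pred = nearest_centroid_predict(model, X[held_out])
    accuracies[held_out + 1] = float((pred == y[held_out]).mean())

for p, acc in accuracies.items():
    print(f"Participant {p}: {acc:.3f}")
```

Each entry in `accuracies` corresponds to one row of the table: the test accuracy on the participant that was excluded from training.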