Unbalanced and balanced accuracy estimates for various classifiers (a) within the recursive cluster elimination (RCE) framework and (b) outside the RCE framework, for post-traumatic stress disorder (PTSD) data when the training/validation data and the hold-out test data are drawn from the same age range, for the classification between healthy controls and subjects with PTSD. The training/validation data and the hold-out test data are age-matched, with subjects in the range of 23–53 years. The balanced accuracy was obtained by averaging the individual class accuracies. The orange bars indicate the cross-validation (CV) accuracy, while the blue bars indicate the accuracy on the hold-out test data obtained by the voting procedure. The dotted line indicates the accuracy obtained when the classifier assigns the majority class to all subjects in the test data. For unbalanced accuracy, this is 68.6%, since subjects with PTSD formed 68.6% of the hold-out test data; for balanced accuracy, it is exactly 50%. We chose the majority classifier as the benchmark since any classifier that learns from the training data must exceed its accuracy. The discrepancy between the biased estimates of the CV accuracy and the unbiased estimates of the hold-out accuracy is noteworthy. The best hold-out test accuracy was 97.1%, whereas the best balanced hold-out test accuracy was 95.5%, achieved by boosted stumps, boosted trees, the multilayer perceptron neural network (MLP-NN), and linear discriminant analysis (LDA) implemented within the RCE framework. ELM, extreme learning machine; KNN, k-nearest neighbors; QDA, quadratic discriminant analysis; SVM, support vector machine; FC-NN, fully connected neural network; LVQNET, learning vector quantization neural network; SLR, sparse logistic regression; RLR, regularized logistic regression; RVM, relevance vector machine
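As a minimal sketch (not part of the original figure), the snippet below illustrates how the two metrics in the caption relate: unbalanced accuracy is the overall fraction of correct predictions, balanced accuracy is the mean of the per-class accuracies, and the majority-class baseline therefore sits at the majority-class prevalence (68.6% here) for the former and at exactly 50% for the latter in a two-class problem. The label arrays, class coding (0 = healthy control, 1 = PTSD), and sample size are illustrative assumptions, not the study's data.

```python
import numpy as np

def unbalanced_accuracy(y_true, y_pred):
    """Overall fraction of correctly classified subjects."""
    return np.mean(y_true == y_pred)

def balanced_accuracy(y_true, y_pred):
    """Average of the individual class accuracies."""
    classes = np.unique(y_true)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return np.mean(per_class)

# Illustrative hold-out labels: 0 = healthy control, 1 = PTSD,
# with PTSD forming roughly 68.6% of the test set as in the caption.
rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.686).astype(int)

# Majority-class baseline: assign the majority class (PTSD) to every subject.
y_majority = np.ones_like(y_true)

print(unbalanced_accuracy(y_true, y_majority))  # close to 0.686
print(balanced_accuracy(y_true, y_majority))    # exactly 0.5: (100% + 0%) / 2
```

The baseline scores 100% on the majority class and 0% on the minority class, so averaging the two class accuracies pins the balanced baseline at 50% regardless of class imbalance, which is why the caption uses it as the chance level for the balanced bars.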