Unbalanced and balanced accuracy estimates for various classifiers a within the recursive cluster elimination (RCE) framework and b outside the RCE framework, for Autism Brain Imaging Data Exchange (ABIDE) data in which the training/validation data and the hold-out test data are matched in imaging sites as well as age group, for the binary classification problem of distinguishing healthy controls from subjects with autism spectrum disorder (ASD). The training/validation and hold-out test data are drawn from all 15 imaging sites and span an age range of 7–58 years. The balanced accuracy was obtained by averaging the individual class accuracies. The orange bars indicate the cross-validation (CV) accuracy, while the blue bars indicate the accuracy on the hold-out test data obtained by the voting procedure. The dotted line indicates the accuracy obtained when the classifier assigns the majority class to all subjects in the test data. For unbalanced accuracy, this is 56%, since healthy controls constituted 56% of the hold-out test data; for balanced accuracy, it is exactly 50%. We chose the majority classifier as the benchmark because any classifier that learns something from the training data must exceed this accuracy. The discrepancy between the biased CV accuracy estimates and the unbiased hold-out accuracy estimates is noteworthy. The best unbalanced hold-out test accuracy was 70.7%, obtained with the RBF support vector machine (SVM) within the RCE framework, while the best balanced hold-out test accuracy was 69.2%, obtained with the linear SVM within the RCE framework. ELM, extreme learning machine; KNN, k-nearest neighbors; LDA, linear discriminant analysis; QDA, quadratic discriminant analysis; FC-NN, fully connected neural network; MLP-NN, multilayer perceptron neural network; LVQNET, learning vector quantization neural network; SLR, sparse logistic regression; RLR, regularized logistic regression; RVM, relevance vector machine
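
The following is a minimal illustrative sketch (not the authors' code) of the accuracy measures referred to in the caption: plain (unbalanced) accuracy, balanced accuracy as the average of per-class accuracies, and the majority-class baseline used as the benchmark. The label arrays and the example data are hypothetical.

```python
# Illustrative sketch: unbalanced vs. balanced accuracy and the majority-class baseline.
import numpy as np

def unbalanced_accuracy(y_true, y_pred):
    """Plain accuracy: fraction of correctly classified subjects."""
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))

def balanced_accuracy(y_true, y_pred):
    """Average of the per-class accuracies (recall of each class)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return np.mean(per_class)

def majority_baseline(y_true):
    """Unbalanced accuracy of a classifier that always predicts the majority class."""
    _, counts = np.unique(np.asarray(y_true), return_counts=True)
    return counts.max() / counts.sum()

# Hypothetical test set: 0 = healthy control (56% of subjects), 1 = ASD.
y_true = np.array([0] * 56 + [1] * 44)
y_pred = np.random.default_rng(0).integers(0, 2, size=100)  # placeholder predictions

print(unbalanced_accuracy(y_true, y_pred))
print(balanced_accuracy(y_true, y_pred))
print(majority_baseline(y_true))  # 0.56, matching the dotted line; the balanced-accuracy baseline is 0.50
```

With a 56/44 class split, the majority baseline is 56% for unbalanced accuracy, while its balanced accuracy is 50% (100% on the majority class, 0% on the minority class, averaged), which is why the caption uses 50% as the chance level for the balanced measure.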