
Table 3.

Results of the prediction analyses for the validation and combined cohorts

| Cohort | Variable | True negatives: SF | True positives: NSF | Acc. | Prec. | Sensitivity | F1 |
| Validation | S(Rop) | 19/26 (= 0.73) | 4/8 (= 0.50) | 0.68 | 0.26 | 0.50 | 0.38 |
| | Ov(RA, Rop) | 18/26 (= 0.69) | 5/8 (= 0.63) | 0.68 | 0.38 | 0.63 | 0.51 |
| | δIR(RA) | 21/26 (= 0.81) | 5/8 (= 0.63) | 0.76 | 0.50 | 0.63 | 0.57 |
| | Combined | 21/26 (= 0.81) | 6/8 (= 0.75) | 0.79 | 0.55 | 0.75 | 0.65 |
| | RUSboost | 0.82 | 0.35 | 0.71 | 0.37 | 0.35 | 0.36 |
| Combined | RUSboost | 0.72 | 0.63 | 0.70 | 0.42 | 0.63 | 0.51 |

Note. For each analysis, we used leave-one-out cross-validation such that a predictive model was built to predict the outcome of each patient using the data from the remaining N − 1 patients. For the individual variables, the results correspond to the optimal points of the ROC curves according to the Youden criterion. For the machine learning analyses, the results were derived from an adaptive boosting algorithm (AdaBoostM1, Matlab 2018) with leave-one-out cross-validation, combined with random undersampling (RUSboost) to account for class imbalance. Results were averaged over 10 iterations of the AdaBoostM1 algorithm. For the combined method, the results from the three individual analyses were combined, and an NSF classification was assigned to patients with at least two positive (NSF) classifications. For each group (SF, NSF), we show the number of correctly identified cases as an absolute number and a relative frequency. The remaining columns correspond, respectively, to the accuracy (Acc.), precision (Prec.), sensitivity, and F1 statistic. For the machine learning analyses, only the average fraction of correctly predicted cases is shown in the true negatives and true positives columns, since the absolute numbers can vary between realizations of the prediction algorithm.
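
To make the procedure in the note concrete, the sketch below outlines the individual-variable analysis (leave-one-out cross-validation with a Youden-optimal ROC threshold), the majority-vote combination, and the reported metrics. It is a minimal Python sketch under stated assumptions, not the original implementation: the paper's analyses were run in Matlab 2018, the names used here (`loo_predict`, `scores`, `y`) are hypothetical, the thresholding direction assumes that higher values of a variable indicate NSF, and details such as how metrics were aggregated over folds and over the 10 boosting iterations may differ from the published analysis.

```python
# Minimal Python sketch of the prediction pipeline described in the note
# (the original analyses were performed in Matlab 2018). All names below
# (loo_predict, scores, y, ...) are illustrative, not taken from the paper's code.
import numpy as np
from sklearn.metrics import (roc_curve, accuracy_score, precision_score,
                             recall_score, f1_score)

def youden_threshold(y_true, scores):
    """Optimal ROC operating point according to the Youden criterion (max TPR - FPR)."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    return thresholds[np.argmax(tpr - fpr)]

def loo_predict(y, scores):
    """Leave-one-out prediction for one variable: for each patient, the Youden
    threshold is fitted on the remaining N - 1 patients and applied to the
    held-out patient. Assumes higher values indicate NSF (the positive class)."""
    y, scores = np.asarray(y), np.asarray(scores, dtype=float)
    preds = np.zeros(len(y), dtype=int)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        thr = youden_threshold(y[mask], scores[mask])
        preds[i] = int(scores[i] >= thr)
    return preds

def combine_majority(pred_lists):
    """Combined method: classify a patient as NSF when at least two of the
    three individual analyses give a positive (NSF) classification."""
    return (np.sum(pred_lists, axis=0) >= 2).astype(int)

def report(y_true, y_pred):
    """Accuracy, precision, sensitivity, and F1, with NSF as the positive class."""
    return {"Acc.": accuracy_score(y_true, y_pred),
            "Prec.": precision_score(y_true, y_pred, zero_division=0),
            "Sensitivity": recall_score(y_true, y_pred),
            "F1": f1_score(y_true, y_pred)}

# The machine-learning analysis (boosting with random undersampling) could be
# approximated with imbalanced-learn, e.g.:
#   from imblearn.ensemble import RUSBoostClassifier
#   from sklearn.model_selection import LeaveOneOut, cross_val_predict
#   y_ml = cross_val_predict(RUSBoostClassifier(random_state=0), X, y, cv=LeaveOneOut())
```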