Sci Rep. 2022 Sep 1;12:14851. doi: 10.1038/s41598-022-19045-3

Table 1.

Overview of the verification results obtained in our experiments using varying training set sizes Ns at different learning rates η. Moreover, different data handling techniques were used (FTS: fixed training set; RNP: randomized negative pairs). For each experiment, the training sets were balanced with respect to the number of positive and negative image pairs. In this table, we present the AUC (together with the lower and upper bounds of the 95% confidence intervals from 10,000 bootstrap runs), the accuracy, the specificity, the recall, the precision, and the F1-score. Bold text emphasizes the overall highest AUC value.

| Data handling | Ns | η | AUC (95% CI) | Accuracy (TP+TN)/(P+N) | Specificity TN/N | Recall TP/P | Precision TP/(TP+FP) | F1-score |
|---|---|---|---|---|---|---|---|---|
| FTS | 100,000 | 10⁻³ | 0.8610 (0.8588–0.8632) | 0.7782 (77,815/100,000) | 0.7710 (38,548/50,000) | 0.7853 (39,267/50,000) | 0.7742 (39,267/50,719) | 0.7797 |
| FTS | 200,000 | 10⁻³ | 0.9448 (0.9435–0.9461) | 0.8743 (87,428/100,000) | 0.8685 (43,426/50,000) | 0.8800 (44,002/50,000) | 0.8700 (44,002/50,576) | 0.8750 |
| FTS | 400,000 | 10⁻⁴ | 0.9587 (0.9575–0.9599) | 0.8755 (87,546/100,000) | 0.9290 (46,452/50,000) | 0.8219 (41,094/50,000) | 0.9205 (41,094/44,642) | 0.8684 |
| FTS | 800,000 | 10⁻⁴ | 0.9896 (0.9891–0.9901) | 0.9537 (95,367/100,000) | 0.9541 (47,705/50,000) | 0.9532 (47,662/50,000) | 0.9541 (47,662/49,957) | 0.9536 |
| RNP | 800,000 | 10⁻⁴ | **0.9940** (0.9937–0.9944) | 0.9555 (95,545/100,000) | 0.9822 (49,111/50,000) | 0.9287 (46,434/50,000) | 0.9812 (46,434/47,323) | 0.9542 |
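
The point metrics in Table 1 follow directly from the confusion-matrix counts given in parentheses, while the AUC confidence intervals are described as coming from 10,000 bootstrap runs. The sketch below is our own illustration, not code from the paper: the function names `verification_metrics` and `bootstrap_auc_ci` are hypothetical, and the percentile bootstrap is an assumption about the resampling scheme. It reproduces the RNP row from its counts.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def verification_metrics(tp, tn, fp, fn):
    """Point metrics of Table 1 computed from confusion-matrix counts."""
    p = tp + fn  # number of positive (matching) image pairs
    n = tn + fp  # number of negative (non-matching) image pairs
    return {
        "accuracy": (tp + tn) / (p + n),
        "specificity": tn / n,
        "recall": tp / p,
        "precision": tp / (tp + fp),
        "f1": 2 * tp / (2 * tp + fp + fn),
    }


def bootstrap_auc_ci(labels, scores, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUC (assumed resampling scheme)."""
    rng = np.random.default_rng(seed)
    labels, scores = np.asarray(labels), np.asarray(scores)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(labels), len(labels))
        if labels[idx].min() == labels[idx].max():
            continue  # a resample must contain both classes for a defined AUC
        aucs.append(roc_auc_score(labels[idx], scores[idx]))
    return np.quantile(aucs, [alpha / 2, 1 - alpha / 2])


# Example: the RNP row (Ns = 800,000, eta = 1e-4), evaluated on
# 50,000 positive and 50,000 negative test pairs.
tp, tn = 46_434, 49_111
fp, fn = 50_000 - tn, 50_000 - tp
print(verification_metrics(tp, tn, fp, fn))
# -> accuracy ≈ 0.9555, specificity ≈ 0.9822, recall ≈ 0.9287,
#    precision ≈ 0.9812, F1 ≈ 0.9542
```

Note that only the counts are needed for the point metrics; the bootstrap CI additionally requires the per-pair similarity scores and ground-truth labels, which are not part of Table 1.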