Sci Rep. 2021 Dec 13;11:23895. doi: 10.1038/s41598-021-03265-0

Table 3.

CAD software performance when matching the sensitivity of the Expert Reader.

| | Cut-off score | TP | FP | FN | TN | Sensitivity (95% CI) | Specificity (95% CI) | Accuracy (95% CI) |
|---|---|---|---|---|---|---|---|---|
| Expert Reader | N/A | 127 | 520 | 6 | 379 | 95.5% (90.4–98.3) | 42.2% (38.9–45.5) | 49.0% (45.9–52.1) |
| *Abnormality scores obtained by FIT* | | | | | | | | |
| Qure.ai | 44.1 | 127 | 461 | 6 | 438 | 95.5% (90.4–98.3) | 48.7% (45.4–52.0) | 54.7% (51.7–57.8) |
| DeepTek | 31.1 | 127 | 483 | 6 | 416 | 95.5% (90.4–98.3) | 46.3% (43.0–49.6) | 52.6% (49.5–55.7) |
| Delft Imaging | 46.7 | 127 | 492 | 6 | 407 | 95.5% (90.4–98.3) | 45.3% (42.0–48.6) | 51.7% (48.7–54.8) |
| *Abnormality scores provided by CAD company* | | | | | | | | |
| JF Healthcare | 83.4 | 127 | 530 | 6 | 369 | 95.5% (90.4–98.3) | 41.0% (37.8–44.3) | 48.1% (45.0–51.2) |
| OXIPIT | 15.4 | 127 | 532 | 6 | 367 | 95.5% (90.4–98.3) | 40.8% (37.6–44.1) | 47.9% (44.8–51.0) |
| Lunit | 3.0 | 127 | 551 | 6 | 348 | 95.5% (90.4–98.3) | 38.7% (35.5–42.0) | 46.0% (43.0–49.1) |
| InferVision | 53.8 | 127 | 661 | 6 | 238 | 95.5% (90.4–98.3) | 26.5% (23.6–29.5) | 35.4% (32.5–38.4) |
| Artelus | 1.2 | 127 | 691 | 6 | 208 | 95.5% (90.4–98.3) | 23.1% (20.4–26.0) | 32.5% (29.6–35.4) |
| Dr CADx | 27.8 | 127 | 790 | 6 | 109 | 95.5% (90.4–98.3) | 12.1% (10.1–14.4) | 22.9% (20.3–25.6) |
| SemanticMD | 0.4 | 127 | 808 | 6 | 91 | 95.5% (90.4–98.3) | 10.1% (7.2–10.8) | 21.1% (16.6–21.2) |
| EPCON | 0.6 | 127 | 815 | 6 | 84 | 95.5% (90.4–98.3) | 9.3% (6.6–10.0) | 20.4% (16.0–20.6) |
| COTO | 1.5 | 127 | 842 | 6 | 57 | 95.5% (90.4–98.3) | 6.3% (4.8–8.1) | 17.8% (15.5–20.3) |

TP true positive, FP false positive, FN false negative, TN true negative.
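The sensitivity, specificity, and accuracy columns follow directly from the TP/FP/FN/TN counts in each row. A minimal sketch of that arithmetic, using the Expert Reader row as input (the paper's exact confidence-interval method is not stated in this excerpt, so only the point estimates are reproduced here):

```python
# Recompute the point estimates in Table 3 from the confusion-matrix counts.

def sensitivity(tp, fn):
    # Fraction of true-positive cases correctly flagged.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Fraction of true-negative cases correctly cleared.
    return tn / (tn + fp)

def accuracy(tp, fp, fn, tn):
    # Fraction of all cases classified correctly.
    return (tp + tn) / (tp + fp + fn + tn)

if __name__ == "__main__":
    tp, fp, fn, tn = 127, 520, 6, 379  # Expert Reader row
    print(f"Sensitivity: {sensitivity(tp, fn):.1%}")       # 95.5%
    print(f"Specificity: {specificity(tn, fp):.1%}")       # 42.2%
    print(f"Accuracy:    {accuracy(tp, fp, fn, tn):.1%}")  # 49.0%
```

Because every CAD cut-off was chosen to match the Expert Reader's 127 TP / 6 FN split, sensitivity is identical across rows and the systems differ only in specificity and accuracy.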