2021 May 20;34(9):1780–1794. doi: 10.1038/s41379-021-00826-6

Table 2.

(a) Tile-classifier metrics for the task of classifying foci of interest on H&E slides, and slide-level classifier metrics for the IHC request prediction task, for the three cross-validation splits of the dataset. (b) Slide-level classifier results for the IHC requesting task, evaluated on the validation set for each of the three annotating pathologists. (c) Estimated time savings and extra costs from introducing the model (n = 380 IHC-requested cases).

(a)

| Split number | Tile classification accuracy | Tile classification AUC | IHC order accuracy | IHC order AUC |
|---|---|---|---|---|
| 0 | 0.86 | 0.91 | 0.99 | 0.99 |
| 1 | 0.85 | 0.94 | 0.99 | 0.99 |
| 2 | 0.91 | 0.93 | 0.99 | 0.99 |
(b)

| Pathologist | Validation accuracy | Validation AUC |
|---|---|---|
| 1 | 0.979 | 0.977 |
| 2 | 0.764 | 0.737 |
| 3 | 0.674 | 0.681 |
(c)

| Operating point (n = 380) | Specificity | Average false positive rate | Turnaround time savings (days) | Reporting time savings (hours) | Extra cost of unnecessary IHC orders (£) |
|---|---|---|---|---|---|
| Reflex testing | NA | NA | 1170 | 70 | 4180 |
| 1 | 0.6 | 0.15 | 703 | 42 | 627 |
| 2 | 0.75 | 0.33 | 879 | 52 | 1379 |
| 3 | 0.9 | 0.48 | 1054 | 63 | 2006 |

The false positive rate is averaged over the three pathologists. The minimum expected savings of 3 days 2 h for turnaround time and 11 min for reporting time per case were used in the calculations. We assume an IHC order cost of £11 per slide.
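The arithmetic behind table (c) can be reproduced from the footnote's per-case figures. A minimal sketch, assuming (from the printed values, not stated explicitly in the table) that time savings scale with the Specificity column and that extra IHC cost scales with the average false positive rate; the function name is illustrative:

```python
# Constants taken from the table footnote.
N_CASES = 380                        # IHC-requested cases
TURNAROUND_SAVING_DAYS = 3 + 2 / 24  # 3 days 2 h per case
REPORTING_SAVING_HOURS = 11 / 60     # 11 min per case
IHC_COST_PER_SLIDE = 11              # £ per unnecessary slide


def operating_point_savings(specificity, false_positive_rate):
    """Estimate turnaround savings (days), reporting savings (hours),
    and extra IHC cost (£) at a given operating point.

    Assumption (inferred): savings scale with specificity relative to
    the reflex-testing maximum; extra cost scales with the FP rate.
    """
    turnaround = N_CASES * specificity * TURNAROUND_SAVING_DAYS
    reporting = N_CASES * specificity * REPORTING_SAVING_HOURS
    extra_cost = N_CASES * false_positive_rate * IHC_COST_PER_SLIDE
    return round(turnaround), round(reporting), round(extra_cost)


# Operating point 1 (specificity 0.6, average FP rate 0.15)
print(operating_point_savings(0.6, 0.15))  # → (703, 42, 627)
```

The same call reproduces operating points 2 and 3 ((879, 52, 1379) and (1054, 63, 2006)); the reflex-testing row corresponds to ordering IHC for all 380 cases (380 × £11 = £4180).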