Sci Rep. 2019 Jul 24;9:10750. doi: 10.1038/s41598-019-47181-w

Table 3.

Classification results for PIRC, QRDR and PIMEC with varying input image sizes on the primary validation set.

Grading system   Input size   Macro-AUC   Accuracy   Quadratic-weighted kappa
PIRC             256          0.901       0.751      0.772
PIRC             299          0.919       0.785      0.834
PIRC             512          0.951       0.838      0.894
PIRC             1024         0.961       0.870      0.915
PIRC             2095*        0.962       0.869      0.910
PIRC             6 × 512a     0.958       0.944      0.904
QRDR             256          0.977       0.912      0.901
QRDR             299          0.981       0.922      0.914
QRDR             512          0.989       0.937      0.930
QRDR             1024         0.991       0.938      0.932
QRDR             2095*        0.991       0.925      0.914
QRDR             6 × 512a     0.991       0.962      0.938
PIMEC            256          0.959       0.928      0.813
PIMEC            299          0.970       0.923      0.803
PIMEC            512          0.979       0.935      0.832
PIMEC            1024         0.978       0.937      0.846
PIMEC            2095*        0.981       0.934      0.856
PIMEC            6 × 512a     0.983       0.973      0.871

Macro-AUC refers to the area under the macro-averaged ROC curve, computed for each class in a one-vs-all manner.
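For reference, both reported metrics can be computed with scikit-learn. The sketch below is illustrative only, not the evaluation code used in the study; the variable names y_true and y_prob and the toy data are hypothetical.

```python
# Illustrative sketch (not the authors' code): macro-AUC and quadratic-weighted
# kappa, assuming integer grade labels and per-class predicted probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

y_true = np.array([0, 1, 2, 2, 3, 0, 1])           # hypothetical grade labels
y_prob = np.random.dirichlet(np.ones(4), size=7)   # hypothetical class probabilities

# Macro-AUC: one-vs-all ROC AUC for each class, then an unweighted mean over classes.
macro_auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")

# Quadratic-weighted kappa compares hard predictions with the reference grades,
# penalizing disagreements by the squared distance on the grading scale.
y_pred = y_prob.argmax(axis=1)
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")

print(f"macro-AUC = {macro_auc:.3f}, quadratic-weighted kappa = {qwk:.3f}")
```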

*Trained with a model using instance normalization layers and an optimizer that accumulates gradients over 15 mini-batches.
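Gradient accumulation sums the gradients of several mini-batches before a single optimizer update, so the effective batch size stays large even when memory limits the per-step batch size at very high input resolution. The sketch below is a generic PyTorch illustration of accumulation over 15 mini-batches, assuming a model, loss_fn, optimizer, and data loader; it is not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): gradient accumulation over
# 15 mini-batches per optimizer update, as referenced by the * rows.
import torch

ACCUM_STEPS = 15  # number of mini-batches accumulated per update

def train_one_epoch(model, loader, loss_fn, optimizer):
    model.train()
    optimizer.zero_grad()
    for step, (images, labels) in enumerate(loader):
        logits = model(images)
        # Scale the loss so the accumulated gradient matches one large batch.
        loss = loss_fn(logits, labels) / ACCUM_STEPS
        loss.backward()                      # gradients accumulate in .grad
        if (step + 1) % ACCUM_STEPS == 0:
            optimizer.step()                 # one update per 15 mini-batches
            optimizer.zero_grad()
```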

aEnsemble of six classifiers trained on the same data with the same input size.
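A common way to combine such an ensemble is to average the per-class probabilities of the individual classifiers and take the argmax. The sketch below assumes six trained PyTorch models and a batch of preprocessed images; the function and variable names are hypothetical, and probability averaging stands in for whatever combination rule the authors used.

```python
# Illustrative sketch (not the authors' code): averaging softmax outputs of
# six classifiers trained on the same data, as in the "6 × 512" rows.
import torch

def ensemble_predict(models, images):
    """Average per-class probabilities over an ensemble of classifiers."""
    probs = []
    with torch.no_grad():
        for model in models:
            model.eval()
            probs.append(torch.softmax(model(images), dim=1))
    mean_probs = torch.stack(probs).mean(dim=0)   # (batch, n_classes)
    return mean_probs.argmax(dim=1), mean_probs   # predicted grade + probabilities
```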