Table 3. Predictive Abilities of the Constructed Modelsᵃ
| model | comparison | sensitivity:specificity (%) | PPV:NPV (%) | ACC (%) | AUROC |
|---|---|---|---|---|---|
| hierarchical | CD:UC | 65:65 | 65:65 | 65 | 0.7675 |
| hierarchical | CD:control | 95:90 | 90:95 | 93 | 0.9925 |
| hierarchical | UC:control | 95:90 | 90:95 | 93 | 0.9925 |
| urine | CD:UC | 0:0 | 0:0 | 0 | 0 |
| urine | CD:control | 43:100 | 100:83 | 85 | 0.9643 |
| urine | UC:control | 85:100 | 100:91 | 94 | 0.9923 |
| plasma | CD:UC | 75:65 | 68:72 | 70 | 0.7325 |
| plasma | CD:control | 90:90 | 90:90 | 90 | 0.9825 |
| plasma | UC:control | 90:95 | 95:90 | 93 | 0.985 |
| serum | CD:UC | 60:50 | 55:56 | 55 | 0.655 |
| serum | CD:control | 95:100 | 100:95 | 98 | 1 |
| serum | UC:control | 80:95 | 94:83 | 88 | 0.9225 |
ᵃPPV, positive predictive value; NPV, negative predictive value; ACC, accuracy; AUROC, area under the ROC curve. PPV (NPV) is the proportion of samples with positive (negative) test results that are correctly predicted by the model. Sensitivity (specificity) is the proportion of actual positives (negatives) that are correctly predicted by the model. Accuracy (ACC) is the proportion of correct results (both true positives and true negatives) among all results. The area under the ROC curve (AUROC) is equal to the probability that the classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one.
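These definitions reduce to simple arithmetic on the 2×2 confusion matrix, and AUROC can be evaluated directly from its rank interpretation. The sketch below is illustrative only (Python; it assumes binary 0/1 labels and continuous classifier scores, and the function names are ours, not the authors'):

```python
def classification_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, NPV, and ACC from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # fraction of actual positives found
        "specificity": tn / (tn + fp),   # fraction of actual negatives found
        "PPV": tp / (tp + fp),           # fraction of positive calls that are correct
        "NPV": tn / (tn + fn),           # fraction of negative calls that are correct
        "ACC": (tp + tn) / len(y_true),  # overall proportion of correct results
    }

def auroc(y_true, y_score):
    """AUROC via its rank interpretation: the probability that a randomly
    chosen positive scores higher than a randomly chosen negative
    (ties count as one half)."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Under this rank interpretation, the serum CD:control result (AUROC = 1) corresponds to every CD sample receiving a higher classifier score than every control sample.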