Table 2:
Performance of CopyDetective (raw, i.e., without the optional final filtration, and filter, i.e., with the default filtration threshold of 10.76) in comparison to five established approaches: ExomeCNV, VarScan2, ExomeDepth, Control-FREEC (WR and KS), and CNV-seq.
| Tool and configuration | Data set | TP calls (+ false type) | FP calls | CNVs found | CNVs missed | CNVs detectable | Sens | PPV | F1 |
|---|---|---|---|---|---|---|---|---|---|
| ExomeCNV | 1 | 1,378 (+2,204) | 215,865 | 49 | 6 | 55 | 0.89 | 0.01 | 0.01 |
| | 2 | 280 (+508) | 13,213 | 32 | 15 | 47 | 0.68 | 0.02 | 0.04 |
| | 3 | 1,017 (+1,904) | 66,686 | 40 | 8 | 48 | 0.83 | 0.02 | 0.03 |
| | 4 | 94 (+1) | 2,064 | 24 | 64 | 88 | 0.27 | 0.04 | 0.08 |
| VarScan2 | 1 | 119 (+126) | 11,736 | 27 | 22 | 49 | 0.55 | 0.01 | 0.02 |
| | 2 | 106 (+185) | 2,758 | 26 | 11 | 37 | 0.70 | 0.04 | 0.07 |
| | 3 | 65 (+16) | 374 | 21 | 14 | 35 | 0.60 | 0.15 | 0.24 |
| | 4 | 30 (+0) | 54 | 23 | 65 | 88 | 0.26 | 0.36 | 0.30 |
| ExomeDepth | 1 | 163 (+50) | 8,074 | 13 | 36 | 49 | 0.27 | 0.02 | 0.04 |
| | 2 | 275 (+162) | 2,042 | 25 | 12 | 37 | 0.68 | 0.12 | 0.20 |
| | 3 | 175 (+33) | 2,047 | 20 | 15 | 35 | 0.57 | 0.08 | 0.14 |
| | 4 | 909 (+0) | 375 | 32 | 56 | 88 | 0.36 | 0.71 | 0.48 |
| Control-FREEC | | | | | | | | | |
| WR | 1 | 7 (+6) | 1,568 | 5 | 50 | 55 | 0.09 | <0.01 | 0.01 |
| | 2 | 6 (+3) | 278 | 3 | 44 | 47 | 0.06 | 0.02 | 0.03 |
| | 3 | 7 (+9) | 654 | 6 | 42 | 48 | 0.13 | 0.01 | 0.02 |
| | 4 | 5 (+2) | 231 | 5 | 83 | 88 | 0.06 | 0.02 | 0.03 |
| KS | 1 | 24 (+38) | 7,261 | 12 | 43 | 55 | 0.22 | <0.01 | 0.01 |
| | 2 | 16 (+11) | 1,124 | 10 | 38 | 48 | 0.21 | 0.01 | 0.03 |
| | 3 | 16 (+11) | 1,124 | 10 | 38 | 48 | 0.21 | 0.01 | 0.03 |
| | 4 | 32 (+4) | 224 | 17 | 71 | 88 | 0.19 | 0.09 | 0.13 |
| CNV-seq | 1 | 25,690 (+27,757) | 1,723,974 | 21 | 34 | 55 | 0.38 | 0.01 | 0.03 |
| | 2 | 3,016 (+1,885) | 94,461 | 21 | 26 | 47 | 0.45 | 0.03 | 0.06 |
| | 3 | 6,628 (+4,518) | 316,311 | 19 | 29 | 48 | 0.40 | 0.02 | 0.04 |
| | 4 | 786 (+1,125) | 28,863 | 15 | 73 | 88 | 0.17 | 0.03 | 0.05 |
| CopyDetective | | | | | | | | | |
| Raw | 1 | 33 (+22) | 729 | 18 | 1 | 19 | 0.95 | 0.04 | 0.08 |
| | 2 | 63* (+43) | 176 | 34 | 3 | 37 | 0.92 | 0.26 | 0.41 |
| | 3 | 67 (+21) | 212 | 40 | 1 | 41 | 0.98 | 0.24 | 0.39 |
| | 4 | 23 (+23) | 399 | 19 | 0 | 19 | 1.00 | 0.05 | 0.10 |
| Filter 10.76 | 1 | 25 (+15) | 180 | 18 | 1 | 19 | 0.95 | 0.12 | 0.22 |
| | 2 | 50 (+31) | 63 | 34 | 3 | 37 | 0.92 | 0.44 | 0.60 |
| | 3 | 60 (+18) | 10 | 40 | 1 | 41 | 0.98 | 0.86 | 0.90 |
| | 4 | 22 (+16) | 63 | 19 | 0 | 19 | 1.00 | 0.26 | 0.41 |
The table reports true-positive (TP) calls (in parentheses: the number of additional TP calls if CNV type is not evaluated), false-positive (FP) calls, found, missed, and detectable CNVs, sensitivity (Sens; evaluating only TP calls with the correct CNV type), positive predictive value (PPV; evaluating only TP calls with the correct CNV type), and the F1 score. * Sixty-four detected CNVs overlap true CNVs. However, as one called CNV is clearly shorter than the validated one and is characterized by a remarkably low quality value, we assume that this overlap is merely coincidental; it is therefore counted as "missed".
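As a guide to reading the metrics (the exact formulas are not spelled out in the caption; the relations below are inferred from the reported counts), sensitivity compares found to detectable CNVs, PPV compares correct-type TP calls to all calls, and F1 is their harmonic mean; e.g., for ExomeDepth on data set 4:

$$
\mathrm{Sens} = \frac{\text{found}}{\text{detectable}} = \frac{32}{88} \approx 0.36, \qquad
\mathrm{PPV} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}} = \frac{909}{909 + 375} \approx 0.71, \qquad
F_1 = \frac{2\,\mathrm{Sens}\cdot\mathrm{PPV}}{\mathrm{Sens} + \mathrm{PPV}} \approx 0.48.
$$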