Table 2. Algorithm sensitivity results.
| | Definite | Probable | Combined |
|---|---|---|---|
| **SWI** | | | |
| Rater identified | 39 | 15 | 54 |
| Algorithm identified | 38 | 12 | 50 |
| Sensitivity | 0.97 | 0.80 | 0.93 |
| **GRE** | | | |
| Rater identified | 43 | 18 | 61 |
| Algorithm identified | 41 | 15 | 56 |
| Sensitivity | 0.95 | 0.83 | 0.92 |
| **Merged** | | | |
| Rater identified | 45 | 19 | 64 |
| Algorithm identified | 44 | 17 | 61 |
| Sensitivity | 0.98 | 0.89 | 0.95 |
An artifact was labeled a “definite” microbleed if all three raters agreed it was a microbleed, and a “probable” microbleed if only two of the three raters agreed. This table shows the algorithm's sensitivity for each of these rating categories. Note that in a study using only visual ratings, the final column (definite and probable ratings combined) is the number that would typically be reported.
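Sensitivity here is simply the fraction of rater-identified microbleeds that the algorithm also detected. A minimal sketch of that calculation, using the counts from Table 2 (variable names are illustrative and not taken from the study's code):

```python
# Sensitivity = algorithm-identified / rater-identified, per sequence and rating category.
# Counts are copied from Table 2; the dictionary layout is purely illustrative.
counts = {
    "SWI":    {"definite": (39, 38), "probable": (15, 12), "combined": (54, 50)},
    "GRE":    {"definite": (43, 41), "probable": (18, 15), "combined": (61, 56)},
    "Merged": {"definite": (45, 44), "probable": (19, 17), "combined": (64, 61)},
}

for sequence, categories in counts.items():
    for label, (rater_identified, algorithm_identified) in categories.items():
        sensitivity = algorithm_identified / rater_identified  # detected / reference positives
        print(f"{sequence} {label}: {sensitivity:.2f}")
```

Rounded to two decimals, these ratios reproduce the sensitivity rows of the table (e.g., 38/39 ≈ 0.97 for definite microbleeds on SWI).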