J Am Med Inform Assoc. 2005 Jul-Aug;12(4):448–457. doi: 10.1197/jamia.M1794

Table 4.

“Best Estimate” Event Prevalence and System Performance

Metric* Derivation Value (95% CI)
Prevalence
    Case rate: proportion of cases with one or more true events 53 ÷ 1,000 0.053 (0.040–0.069)
    Event rate: true events per case 65 ÷ 1,000 0.065 (0.051–0.082)
System performance for detecting cases with events
    Sensitivity: proportion of cases with true events that had apparent events 15 ÷ 53 0.28 (0.17–0.42)
    Specificity†: proportion of cases with no true events that had no apparent events (cases with neither true nor apparent events) ÷ (cases with no true events) 0.985 (0.984–0.986)
    Positive predictive value: proportion of cases with apparent events that had true events 652 ÷ 1,461 0.45 (0.42–0.47)
    Negative predictive value: proportion of cases with no apparent events that had no true events 930 ÷ 968 0.96 (0.95–0.97)
System performance for detecting individual events
    Sensitivity: proportion of true events that were identified by the system 16 ÷ 65 0.25 (0.15–0.37)
    Specificity†: proportion of cases without true events of a given type that the system did not identify (case–event-type pairs with neither a true nor an apparent event) ÷ (case–event-type pairs with no true event) 0.9996 (0.9996–0.9997)
    Positive predictive value: proportion of apparent events that were true 704 ÷ 1,590 0.44 (0.42–0.47)
    Negative predictive value: proportion of cases without apparent events of a given type that had no true event (case–event-type pairs with no apparent event and no true event) ÷ (case–event-type pairs with no apparent event) 0.9989 (0.9986–0.9992)

CI = confidence interval.

*A true event was detected by manual review; an apparent event was identified by the system.

†See the text for an explanation of the difference between case specificity and event specificity.
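
The simple-fraction rows above are binomial proportions, so their point estimates and limits can be checked directly. The paper does not state which interval method was used, but the published limits are consistent with exact (Clopper–Pearson) binomial intervals. The sketch below (Python, assuming SciPy is available; not the authors' own code) reproduces those rows. The event rate (65 ÷ 1,000) is omitted because a case can contribute more than one event, so it is not a simple binomial proportion.

    """Reproduce the simple-fraction rows of Table 4.

    Assumes exact (Clopper-Pearson) binomial intervals, which match
    the published limits; the paper does not name its CI method.
    """
    from scipy.stats import beta


    def exact_ci(k: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
        """Clopper-Pearson exact CI for k successes out of n trials."""
        lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lo, hi


    # Numerator/denominator pairs taken directly from the table's
    # Derivation column (rows whose counts are given explicitly).
    rows = [
        ("Case rate",          53, 1000),
        ("Case sensitivity",   15, 53),
        ("Case PPV",          652, 1461),
        ("Case NPV",          930, 968),
        ("Event sensitivity",  16, 65),
        ("Event PPV",         704, 1590),
    ]

    for name, k, n in rows:
        lo, hi = exact_ci(k, n)
        print(f"{name:18s} {k:4d}/{n:<5d} = {k/n:.3f} (95% CI {lo:.3f}-{hi:.3f})")

Running this yields, for example, 15/53 = 0.283 with an exact interval of about 0.17–0.42, matching the case sensitivity row. The specificity and event-level NPV rows cannot be checked this way because their counts are extrapolated beyond the 1,000 reviewed cases, as discussed in the text.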