
Table 2.

Performance of algorithms on identification of clinically relevant new information. Precision = TP/(TP+FP), Recall = TP/(TP+FN), F1-Measure = 2×Precision×Recall/(Precision+Recall).

Author type: All

Algorithms     Initial Annotation                Revised Annotation
               Recall   Precision   F1-measure   Recall   Precision   F1-measure
Baseline       0.812    0.572       0.671        0.807    0.645       0.717
Laplace        0.827    0.654       0.730        0.826    0.728       0.774
Good-Turing    0.834    0.669       0.742        0.829    0.735       0.779
Ney-Essen      0.841    0.680       0.752        0.832    0.743       0.784

Author type: Physician & Resident

Algorithms     Initial Annotation                Revised Annotation
               Recall   Precision   F1-measure   Recall   Precision   F1-measure
Baseline       0.800    0.587       0.677        0.800    0.667       0.733
Laplace        0.817    0.670       0.707        0.812    0.746       0.762
Good-Turing    0.824    0.681       0.746        0.820    0.758       0.788
Ney-Essen      0.830    0.692       0.755        0.825    0.767       0.795

Author type: Physician Assistant & Nurse Practitioner (Advanced Practice Providers)

Algorithms     Initial Annotation                Revised Annotation
               Recall   Precision   F1-measure   Recall   Precision   F1-measure
Baseline       0.861    0.506       0.637        0.857    0.553       0.651
Laplace        0.918    0.517       0.662        0.917    0.576       0.707
Good-Turing    0.923    0.522       0.667        0.920    0.584       0.714
Ney-Essen      0.931    0.531       0.677        0.927    0.589       0.720
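
The metrics in the caption can be reproduced mechanically. The Python sketch below (not from the paper; the function names and raw-count inputs are illustrative, since the table reports only the derived metrics) computes precision, recall, and F1-measure from true-positive, false-positive, and false-negative counts, and shows that a reported F1 value follows from its listed recall and precision.

    def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
        """Compute the metrics defined in the table caption from raw counts."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    def f1_from_pr(precision: float, recall: float) -> float:
        """Recover F1-measure directly from reported precision and recall."""
        return 2 * precision * recall / (precision + recall)

    # Check one reported row (Baseline, all authors, initial annotation):
    # recall = 0.812, precision = 0.572 -> F1 rounds to 0.671, matching the table.
    print(round(f1_from_pr(0.572, 0.812), 3))  # 0.671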