Published in final edited form as: AJR Am J Roentgenol. 2009 Jul;193(1):157–164. doi: 10.2214/AJR.08.2051

Table 2. Radiologists' Likes and Dislikes for Various Formats of Audit Reports.

Each format is described below, followed by radiologists' likes and dislikes.

Recall by sensitivity scattergram

  Description
  • Shows how sensitivity (on the y-axis) increases with recall rate (on the x-axis)
  • Plots individual radiologists as single points, with a regression line summarizing the relation between sensitivity and recall
  • Highlights the range of recall rates over which radiologists had the best sensitivity

  Likes
  • Shows how one performance measure relates to another
  • Easy to compare one's performance with everyone else's
  • Individual performance is visually easy to see
  • Highlighted range gives a goal

  Dislikes
  • Graph was not intuitive and required explanation
  • Could be confusing if it included a large number of radiologists

Vertical bar graph with benchmark

  Description
  • Shows sensitivity (on the y-axis) over time (past 5 years on the x-axis), with separate vertical bars for the individual radiologist, the practice facility, the entire state, and the BCSC^a
  • Includes a single line indicating 85% sensitivity as a benchmark

  Likes
  • Easy to compare individual performance with the facility or region
  • Benchmark gives a goal; radiologists can easily see whether they were over or under the line
  • Seeing data over time shows trends

  Dislikes
  • Multiple graphs take up space and cannot all be presented on a single page

Single-page table

  Description
  • Includes all statistics outlined in the BI-RADS audit
  • Shows cumulative numbers and percentages for screening mammograms done by the individual radiologist, compared with the entire state, from the time they started practicing mammography through the past year
  • Provides the same statistics for all mammograms (screening and diagnostic)

  Likes
  • Cumulative numbers provide more stable estimates than single years
  • Provides all measures on a single page
  • Allows comparison with state data

  Dislikes
  • A lot of numbers on one page
  • Radiologists wanted to see screening and diagnostic examinations separately
  • No comparisons with colleagues at one's own facility or over time

2 × 2 table

  Description
  • Shows positive and negative mammogram assessments down the left side and cancer diagnoses across the top for a single year
  • Includes the numbers of true-positives, false-positives, false-negatives, and true-negatives
  • Includes equations and calculations for sensitivity, specificity, and PPV (the standard definitions are worked out after the table)

  Likes
  • Straightforward and easy to understand
  • Liked having sensitivity, specificity, and PPV calculated; radiologists would not calculate performance measures on their own

  Dislikes
  • No comparison with colleagues or over time
  • No benchmarks

Color-coded chart

  Description
  • Summarizes performance over 1 year across facilities on a single page
  • Performance is indicated with colored boxes instead of numbers (green = achieved target, yellow = achieved minimum standard but not target, red = failed minimum standard); this color rule is sketched in code after the table

  Likes
  • Can be useful at the facility level or for a director
  • Colors are easy to see

  Dislikes
  • Did not provide actual numbers
  • Radiologists had no idea where they stood within a group

Color-coded horizontal bar graph^b

  Description
  • Summarizes outcomes associated with the detriments (recall and biopsy rates) and benefits (cancer detection rates) of screening mammography
  • Color-coded bars show whether radiologists achieved the target (green), achieved the minimum standard but not the target (yellow), or failed to reach the minimum standard (red)
  • Targets are displayed as over or under 100%

  Likes
  • Colors are easy to see

  Dislikes
  • 100% target was not meaningful

Note—BCSC = Breast Cancer Surveillance Consortium, PPV = positive predictive value.

^a Added in subsequent focus groups.

^b Not shown to last two groups.
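
For reference, the sensitivity, specificity, and PPV calculations included in the 2 × 2 table format follow the standard screening-audit definitions below; these are the conventional formulas, and the worked numbers that follow are hypothetical rather than data from the study.

    \[
    \text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
    \text{Specificity} = \frac{TN}{TN + FP}, \qquad
    \text{PPV} = \frac{TP}{TP + FP}
    \]

For example, hypothetical counts of TP = 17 and FN = 3 give a sensitivity of 17/20 = 85%, the benchmark line drawn in the vertical bar graph format.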
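
The green/yellow/red rule used by both color-coded formats can also be written out explicitly. The sketch below assumes a measure for which higher values are better (such as sensitivity) and uses hypothetical threshold values; the table specifies only what each color means, not the thresholds themselves.

    # Minimal sketch of the color rule described for the color-coded
    # formats; the threshold values used below are hypothetical.
    def color_code(value: float, minimum: float, target: float) -> str:
        """Classify a measure for which higher values are better."""
        if value >= target:
            return "green"   # achieved target
        if value >= minimum:
            return "yellow"  # achieved minimum standard but not target
        return "red"         # failed minimum standard

    # Example with hypothetical sensitivity thresholds:
    print(color_code(0.87, minimum=0.75, target=0.85))  # -> green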