Journal of General Internal Medicine. 2004 Oct;19(10):1013–1018. doi: 10.1007/s11606-004-0003-2

An Evaluation of Vignettes for Predicting Variation in the Quality of Preventive Care

Timothy R Dresselhaus, John W Peabody, Jeff Luck, Dan Bertenthal
PMCID: PMC1492573  PMID: 15482553

Abstract

OBJECTIVE

Clinical vignettes offer an inexpensive and convenient alternative to the benchmark method of chart audits for assessing quality of care. We examined whether vignettes accurately measure and predict variation in the quality of preventive care.

DESIGN

We developed scoring criteria based on national guidelines for 11 prevention items, categorized as vaccine, vascular-related, cancer screening, and personal behaviors. Three measurement methods were used to ascertain the quality of care provided by clinicians seeing trained actors (standardized patients; SPs) presenting with common outpatient conditions: 1) the abstracted medical record from an SP visit; 2) SP reports of physician practice during those visits; and 3) physician responses to matching computerized case scenarios (clinical vignettes).

SETTING

Three university-affiliated general internal medicine clinics (including 2 VA) and one community clinic.

PATIENTS/PARTICIPANTS

Seventy-one randomly selected physicians from among eligible general internal medicine residents and attending physicians.

MEASUREMENTS AND MAIN RESULTS

Physicians saw 480 SPs (120 at each site) and completed 480 vignettes. We calculated the proportion of prevention items for each visit reported or recorded by the 3 measurement methods. We developed a multiple regression model to determine whether site, training level, or clinical condition predicted prevention performance for each measurement method. We found that overall prevention scores ranged from 57% (SP) to 54% (vignettes) to 46% (chart abstraction). Vignettes matched or exceeded SP scores for 3 prevention categories (vaccine, vascular-related, and cancer screening). Prevention quality varied by site (from 40% to 67%) and was predicted similarly by vignettes and SPs.

CONCLUSIONS

Vignettes can measure and predict prevention performance. Vignettes may be a less costly way to assess prevention performance that also controls for patient case-mix.

Keywords: compliance, preventive care guidelines, physician practice, clinical vignettes, quality of care


As evidence of the effectiveness of preventive care mounts, physicians are increasingly held accountable for providing such care to patients.1–3 Evidence-based guidelines for screening and immunization are now widely used and specifically intended to promote preventive practices linked to lower morbidity and mortality.4–8 Despite these efforts, the actual practice of preventive care by physicians is often disappointing and inconsistent.9 Clinicians themselves readily acknowledge that a gap exists between established guidelines and implementation.10 This gap is confirmed by observations of clinical practice10,11 and comparison studies of physicians in different clinical settings. Although many suggest that institutional factors affect compliance with standards of preventive care,9,12,13 few studies have evaluated the actual variation in quality of preventive care or the institutional factors related to poor quality.

The available assessments of the quality of preventive care have been hindered by limited techniques for measurement. Chart audits, the method most commonly used for assessing preventive care, are expensive to perform and subject to recording bias.14–17 In an earlier study, we demonstrated that charts underreport the preventive care provided by physicians.9 These findings highlight the need for better strategies to measure prevention performance and for research into the factors underlying poor performance.18 The standardized patient (SP) methodology controls for case-mix and can be used as a gold standard method for measuring the quality of preventive and ambulatory care; however, SP measurement is challenging outside the research setting due to cost and logistical complexity.19 A promising alternative measurement method is clinical vignettes, offering a comparatively inexpensive and also case-mix-controlled method for assessing the quality of preventive care.9,19

Using a prospectively randomized data collection strategy, we examined the ability of vignettes to detect variation in prevention performance across diverse clinical settings. First, we measured whether clinical vignettes, when compared to the SP gold standard and the usual method of chart abstraction, were a valid comprehensive measure of preventive care in this large, multi-institutional study. Second, we examined how the quality of preventive care varies across different clinical settings regardless of measurement method and hypothesized that this variation would correspond to institutional factors supporting the delivery of preventive services. Third, we hypothesized that prompting physicians to provide preventive care would increase preventive care services. Finally, and most importantly, we investigated whether variation in the quality of care measured by SPs could be predicted by vignettes using a model that controlled for site, level of training, and type of care. If successful, such a prediction model using vignettes could help physicians and administrators identify, implement, and evaluate interventions to improve the quality of preventive care.

METHODS

We conducted this prospective assessment among randomly selected general internists at 4 outpatient general internal medicine clinics. We trained experienced actor patients to present unannounced to these clinics and to report on the quality of the preventive care they received. We reviewed the charts from the SP visits, and also gave the physicians identical clinical vignettes as described below. Eleven preventive services were measured, among other quality criteria, for 4 common medical conditions. The analysis compared the physicians’ responses to clinical vignettes with the reports of SPs and the medical records generated from their visits. We developed a multiple regression model to determine whether site, training level, or clinical condition predicted the quality of preventive care.

Instruments and Primary Data Collection

Data for these comparisons were collected at 2 university-affiliated Veterans Affairs medical centers and 2 large urban medical clinics; all sites had internal medicine residency training programs. Data were collected between March 2000 and August 2000. All second- and third-year residents and attending physicians assigned to these sites were eligible to participate in the study. Informed consent was obtained, with 88% of eligible subjects agreeing to participate.

From these subjects, we randomly selected 60 physicians across the 4 sites to complete 8 cases. We used 3 measurement methods to ascertain the quality of preventive care they provided: 1) standardized patients (SPs), the gold standard method, who presented unannounced to physicians’ clinics; 2) abstraction of the SP medical record generated at these encounters; and 3) computerized case scenarios (clinical vignettes) that corresponded exactly to the SP presentation. Prevention scoring criteria were based on national guidelines for 11 prevention measures.4 These were influenza vaccine, pneumococcal vaccine, tetanus vaccine, diet, exercise, lipid measurement, colon cancer screening, prostate cancer screening (digital rectal examination [DRE]; prostate specific antigen [PSA]), alcohol screening, tobacco screening, and tobacco cessation counseling. These items were included because they were standard practice at participating institutions and readily assessed by multiple measurement methods. Using expert opinion, these 11 measures were further categorized into 4 general preventive areas: vaccine, vascular-related, cancer screening, and personal behaviors. Prevention measures not indicated (e.g., colon cancer screening for a patient under 50) were excluded from scoring; lipid screening was also excluded from the scoring of SP visits, because an SP would not be able to determine whether or not a test was ordered. Performance of these prevention items was recorded by the SP on a checklist, abstracted from the medical record, or reported by the physician in response to clinical vignettes.

We recruited 45 experienced actors to serve as standardized patients. Actors were trained according to established practices for SP training20–24 to simulate 4 common medical conditions (chronic obstructive pulmonary disease [COPD], type II diabetes mellitus [diabetes], vascular insufficiency [vascular], and depression) and to complete a scoring checklist, including prevention criteria, immediately following visits. For each condition, we developed 2 scenarios, one simple and one complex. Reliability, as assessed by videotaping training sessions, comparing SP scores with those of expert raters, and SP visits to members of the research team, has been found to be excellent.25 Charts generated at each visit were retrieved and abstracted by a trained abstractor. When in the course of a visit the clinician did not initiate a discussion of preventive care, the actor patient was instructed to query the clinician along the following line: “Now that I'm 65 years old, are there other things I need to have done or need to talk to you about?”

The clinical vignettes recreated the sequence of a typical patient visit (history, physical exam, tests, diagnosis, and management plan) and prompted physicians to provide open-ended responses. The content of the clinical vignettes was matched identically to the clinical presentations of the SPs. The blinded abstractor scored physicians’ responses on the clinical vignettes using the same explicit criteria used for SP checklists and chart abstraction. In scoring the vignettes, it should be noted that a clinician could score below, comparably to, or above the score of the SP checklist, as the clinician was responding separately to a matching clinical scenario. A representative example of a case scenario (applicable to the SP presentation or clinical vignette) can be found in Figure 1.

FIGURE 1. Chronic obstructive pulmonary disease scenario.

In all, physician subjects evaluated 480 standardized patients (each with a medical record) and completed 480 vignettes; 120 SP encounters occurred at each site. When one of the original 60 physicians could not complete all 8 cases, another randomly selected physician completed any remaining cases. Eleven physicians were not available to complete all 8 cases. Vignette cases were completed only by physicians who had seen the identical SP case.

Qualitative Data Collection

To identify institutional features of the 4 sites that might affect the performance of preventive care, we conducted a phone survey of physician opinion leaders with oversight responsibilities in the clinics at each site. We ascertained whether practice guidelines were disseminated or clinical reminders available for each of the 11 prevention measures, whether allied health personnel assisted in accomplishing the prevention measures, and whether feedback mechanisms or incentives were used to enhance performance. In addition, these leaders were queried regarding documentation (electronic vs paper format), designation of a single individual with oversight of prevention, and the length of new patient visits (new appointment lengths categorized as <20 minutes, <30 minutes, or >30 minutes).

Analysis

We calculated the percentage of prevention items completed for each visit for each of the measurement methods. Percentage prevention scores were also calculated by method for each of the 4 prevention subcategories (vaccine, vascular-related, cancer screening, and personal behaviors) by site, by level of training, and by whether or not the actor patient prompted the physician subject regarding preventive care. The presence of institutional features conducive to prevention performance was qualitatively compared to the measured prevention scores.

To determine a predictive model of prevention compliance, the data were divided randomly into two parts. The first, labeled the development data set, contained 40% of the composite gold standard responses from the SPs plus data on testing and referrals abstracted from the medical record. This data set served as the basis for a regression model to predict variation in the quality of preventive care. The model was then applied to the remaining 60% of the SP and chart composite data, labeled the test data set, and similarly to the remaining vignette data. This was done to determine whether controlling for site, training level, or clinical condition could more accurately predict the quality of preventive care. The prediction model evaluated prevention compliance at 4 sites, for 4 conditions, and for 3 levels of training, for a total of 48 comparisons. Prediction model success for both the SP test data and the vignette data was defined as the proportion of test data contained within the 95% confidence interval of the predicted value. For example, to assess variability across sites, we used the regression model to calculate the predicted aggregate score and its 95% confidence limits for each site, and then reported the proportion of observations in the test set that fell within those limits. Stata (release 6.0, Stata Corporation, College Station, Tex) and SAS (SAS Institute, Cary, NC) statistical software packages were used for all analyses.
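As an illustration only (this is not the authors' code, which was a multiple regression fit in Stata/SAS), the development/test procedure described above can be sketched as follows. The data are simulated; the per-site mean scores are borrowed from Table 2 for realism, and the split sizes (40%/60%) and the 95% CI coverage criterion follow the text:

```python
# Illustrative sketch of the prediction-model evaluation: fit on a 40%
# development split, then report the proportion of held-out observations
# falling inside the 95% confidence interval of each group's predicted
# (here, mean) score. All data below are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Simulated prevention scores (% correct) for 4 sites, 120 visits each
sites = np.repeat(np.arange(4), 120)
site_means = np.array([40, 61, 67, 65])        # per-site means from Table 2
scores = rng.normal(site_means[sites], 15)

# 40% development / 60% test split, as in the study
idx = rng.permutation(len(scores))
n_dev = int(0.4 * len(scores))
dev, test = idx[:n_dev], idx[n_dev:]

def coverage(group, dev_idx, test_idx):
    """Proportion of test observations inside the development-based
    95% CI of each group's predicted score."""
    covered = total = 0
    for g in np.unique(group):
        d = scores[dev_idx][group[dev_idx] == g]   # development observations
        t = scores[test_idx][group[test_idx] == g] # held-out observations
        mean, se = d.mean(), d.std(ddof=1) / np.sqrt(len(d))
        lo, hi = mean - 1.96 * se, mean + 1.96 * se
        covered += int(np.sum((t >= lo) & (t <= hi)))
        total += len(t)
    return covered / total

print(f"proportion of test data within 95% CI: {coverage(sites, dev, test):.2f}")
```

The sketch replaces the regression with simple per-group means, which is enough to show the success metric; the study's actual model additionally controlled for training level and clinical condition.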

RESULTS

The overall quality of preventive care as measured by the 3 methods ranged from 57% correct for SPs to 54% for vignettes to 46% for chart abstraction (Table 1). Vignette measurement matched or exceeded the gold standard SP scores for 3 of the 4 categories (vaccine, vascular-related, and cancer screening); chart measurements of prevention were lowest for all 4 categories.

Table 1.

Prevention Performance Scores for Three Methods

Method (% Correct)
Measure                   Standardized Patient   Vignette   Chart
Overall total                      57                54       46
Immunization                       37                44       25
 Flu and tetanus                   39                47       29
 Pneumovax                         27                46       23
 Vaccine, other                    42                39       23
Vascular-related                   34                34       26
 Diet                              36                36       28
 Exercise                          33                33       24
 Lipid                            n/a*               60       57
Cancer screening                   45                69       44
 Colon                             50                81       46
 DRE or PSA                        40                57       42
Personal behaviors                 83                72       68
 Alcohol                           78                67       63
 Smoking screen                    93                87       84
 Smoking cessation                 78                62       59

Prevention performance scores, based on the proportion (%) of items accomplished overall and for prevention subcategories, are listed for each of the 3 measurement methods: standardized patient, vignette, and chart. The chart scores are obtained from the abstracted medical record; the SP scores are from the checklist completed by the SP at the time of the visit; and the vignette scores are based on responses to a clinical case exactly corresponding to the SP case.

*

As an SP is unable to determine whether or not a test is ordered, lipid screening is not applicable.

DRE, digital rectal examination; PSA, prostate specific antigen.

The quality of preventive care varied most by site. One site's overall score was 21% to 25% lower than the other three sites (Table 2). This site also scored lowest for each of the prevention categories: 38% to 46% lower for vaccines, 13% to 17% lower for vascular related, 26% to 34% lower for cancer screening, and 13% to 24% lower for personal behaviors. There was less variation in scores across training level or clinical condition.

Table 2.

Prevention Performance Based on Development Data Set

Measure (% Correct)
Variable           Total   Vaccine   Vascular-related   Cancer Screening   Personal Behaviors
Site
 1                   40        5            31                  17                  69
 2                   61       47            44                  43                  82
 3                   67       42            48                  49                  93
 4                   65       51            46                  51                  88
Training level
 PGY2                60       32            41                  41                  87
 PGY3                60       42            46                  39                  83
 Attending           55       36            39                  41                  79
Condition
 COPD 1              60       33            31                  54                  87
 COPD 2              48       37            19                  33                  82
 Diabetes 1          65       41            47                 n/a*                 85
 Diabetes 2          63       29            61                  42                  83
 Vascular 1          56       25            46                  37                  84
 Vascular 2          53       35            38                  35                  81
 Depression 1        57       36            48                  38                  81
 Depression 2        63       56            49                 n/a*                 80
Prompt by patient
 Yes                 61       39            44                  46                  85
 No                  47       30            33                  24                  72

Using the developmental data set, overall and category-specific prevention scores (% correct) are displayed according to site, training level, clinical condition, and prompt by patient.

*

Cancer screening strategies were not applicable to these cases because of the age of the patients presented.

PGY, postgraduate year; COPD, chronic obstructive pulmonary disease.

When actor patients prompted physician subjects to do preventive care, scores improved in all 4 prevention categories, increasing by 9% for vaccines and 22% for cancer screening (Table 2).

The qualitative survey of physician opinion leaders at each site indicated variation in the presence of factors supporting prevention services; however, no clear pattern vis-à-vis higher and lower scoring institutions was discernible. All institutions reported some degree of guideline implementation; none used incentives to reward prevention performance. New patient visit length and identification of a responsible prevention leader did not correspond with site prevention scores. The lowest scoring institution, like the higher performing sites, had implemented prevention guidelines, clinical reminders, educational programs, and feedback mechanisms. However, the two highest scoring institutions did report that they used electronic medical records in contrast to the lowest scoring institution.

The prediction models using the test data from the SP data set and the vignette data set accurately predicted the quality of preventive care (Table 3). Both models confirmed that there was significant site variation (SP/chart composite at 4 sites; vignettes at 3 sites). Both the SP/chart composite and the vignette models identified significant variation for 3 of the 4 conditions (P < .05). A significant training level difference between postgraduate third-year residents and attending physicians was noted only for vignettes (P < .001). A similar model, restricted to data obtainable across all 3 methods, showed that the vignette model was intermediate between SPs and charts for predicting prevention compliance: vignettes were 0.60, standardized patients 0.52, and charts 0.71.

Table 3.

Prediction Model Results Based on Test Data Set

Regression Coefficient P Values
                        SP/Chart Composite           Vignette
Variables             Coefficient   Pr > |t|    Coefficient   Pr > |t|
Site
 1                     Reference                 Reference
 2                      .27881       <.0001       .01927       NS
 3                      .31405       <.0001       .08542       .02
 4                      .34024       <.0001       .09583       .01
Training level
 PGY2                   .04058       NS          −.02773       NS
 PGY3                   .03068       NS          −.10742       .001
 Attending             Reference                 Reference
Condition
 COPD                  Reference                 Reference
 Vascular               .00926       NS           .08542       .02
 Depression             .08084       .04         −.20052       <.0001
 Diabetes               .10751       .01          .05937       NS
Prediction model success rate
 Proportion             .52                       .63

Prediction model results for test data set are displayed for SP/chart composite and vignette indicating significant differences identified according to site, training level, and clinical condition.

PGY, postgraduate year; COPD, chronic obstructive pulmonary disease; NS, not significant.

DISCUSSION

The study's findings indicate that clinical vignettes are a valid method for measuring and predicting variation in preventive care when compared to the gold standard SP method. These results confirm, on the basis of a large, representative data set, that vignettes more closely approximate the data obtained from SPs than do chart abstractions, the traditional method for measuring preventive care. Because standardized patients are impractical for routine applications and charts underestimate actual care provided, clinical vignettes appear to be a practical and valid method for assessing prevention performance.

Overall prevention performance was poor, irrespective of the quality measurement method used. Less than half of the standardized patients in this study were adequately assessed for immunizations, cancer screening, dietary habits, or exercise, despite presenting conditions for which such interventions are strongly evidence based. Alcohol use screening and tobacco cessation counseling were also inconsistently performed.

Underperformance was significantly related to clinical site. This variation by site suggests, as others have found, that institutional factors may influence physician behavior.9,12,13 However, institutional features reported by physician opinion leaders at each site did not explain the variation in prevention performance. It is possible that self-report of the presence of reminders, educational programs, or other features conducive to prevention may not reflect the effective implementation of these strategies, thereby masking significant underlying differences among institutions. It is also possible that qualitative program evaluations such as we performed may be insensitive to institutional variation and may not identify programs deficient in prevention performance. The one feature that distinguished the highest from the lowest performing institutions was the presence of electronic medical records. Electronic records may enhance the effectiveness of guideline implementation and prevention performance evaluation through efficient data capture and feedback to physicians, thereby increasing prevention scores. Alternatively, electronic medical records may be a surrogate for other unmeasured organizational features and cultural factors that influence physician behavior in ways that improve prevention performance.

Clinical condition was also associated with some of the variation in performance. This is surprising, as measured prevention items were not differentially indicated in our cases. We suppose that the higher prevention scores for diabetes might be due to the great emphasis placed on prevention in diabetes care and the broad dissemination of preventive standards for such patients. Less clear is why preventive care for patients with COPD and depression would differ, a finding observed for both the composite SP measure and the vignette measure.

Prompting—when patients queried providers about preventive care—was a powerful inducement. When prompted, scores rose in all measured categories, nearly doubling for cancer screening. There are several possible explanations. A patient-initiated reminder may effectively mimic the known effect of other kinds of prompts, such as on-screen reminders, that appear to be effective in improving prevention services in a variety of settings.26–28 Alternatively, patient queries suggest an expectation of the patient for preventive services, which might otherwise be deferred by the physician. Because several of the preventive strategies require behavior change, initiative by the patient would also indicate receptivity to change and encourage ensuing discussion. In general, the success of prompting in this study suggests the potentially important role of patients in determining the range of preventive services they receive.

The inconsistent provision of preventive care evident in these data underscores the need for better quality measurement tools to detect variation and inform our understanding of its causes. Though they are simulations of the process of care, vignettes predicted the quality of preventive care when compared to standardized patients and outperformed medical record abstraction. As a comparatively inexpensive and case-mix-controlled method, vignettes could be used more broadly for evaluations of variation in quality of care. This finding is reinforced by the prediction model, which accurately predicted vignette-measured variation in prevention quality performance when compared to the SP gold standard. These data support the broader use of vignettes as a quality measurement tool to assess institutional performance, identify improvement opportunities, and evaluate the impact of interventions to enhance prevention performance.

This study is subject to several limitations that should be considered in weighing these conclusions. First, prevention was measured during a single episode of care, whereas such services may be provided over several visits, depending upon the personal approach of the clinician or even organizational factors such as the length of new patient visits. Second, not all evidence-based prevention items were evaluated by our measurement methods. Third, standardized patients in this study were all men who presented with a limited number of clinical conditions, limiting the generalization of these findings to preventive care services specific to women or to patients with other medical problems. Finally, institutions included in this study were all urban teaching institutions. It is uncertain whether similar patterns of prevention performance would be observed in different clinical settings.

These results appear to be robust and are consistent with previous observations of the quality of preventive care and the measurement attributes of clinical vignettes.9,19 They highlight important deficiencies in the delivery of preventive care and suggest a new method to measure preventive care variation by using vignettes. As efforts to enhance the delivery of preventive care move forward, we believe that the availability of valid measurement methods is essential to assess their effectiveness. Finally, these data shed some light on such potential strategies for improving prevention performance, including development of novel performance improvement approaches such as patient prompts to providers; investigations of institutional factors that facilitate prevention performance; and evaluations of clinician competence using vignettes.

Acknowledgments

This research was funded by grant 11R 98118-1 from the Veterans Affairs Health Service Research and Development Service, Washington DC. Dr. Peabody was also a recipient of a Senior Research Associate Career Development Award from 1998–2001 from the Department of Veterans Affairs.

REFERENCES

1. Audet AM, Scott HD. The uniform clinical data set: an evaluation of the proposed national database for Medicare's quality review program. Ann Intern Med. 1993;119:1209–13. doi: 10.7326/0003-4819-119-12-199312150-00008.
2. Lawthers AG, Palmer RH, Edwards JE, et al. Developing and evaluating performance measures for ambulatory care quality: a preliminary report of the DEMPAQ project. Jt Comm J Qual Improv. 1993;19:552–65. doi: 10.1016/s1070-3241(16)30036-0.
3. McGlynn EA. Six challenges in measuring the quality of health care. Health Aff. 1997;16:7–21. doi: 10.1377/hlthaff.16.3.7.
4. U.S. Preventive Services Task Force. Guide to Clinical Preventive Services. 2nd ed. Baltimore: Williams and Wilkins; 1996.
5. Mandel J, Bond J, Church T, et al. Reducing mortality from colorectal cancer by screening for fecal occult blood. N Engl J Med. 1993;328:1365–71. doi: 10.1056/NEJM199305133281901.
6. Bridges CB, Thompson WW, Meltzer MI, et al. Effectiveness and cost-benefit of influenza vaccination of healthy working adults: a randomized controlled trial. JAMA. 2000;284:1655–63. doi: 10.1001/jama.284.13.1655.
7. Summary of the second report of the National Cholesterol Education Program (NCEP) Expert Panel on the detection, evaluation, and treatment of high blood cholesterol in adults (Adult Treatment Panel II). JAMA. 1993;269:3015–23.
8. Lee IM, Skerrett PJ. Physical activity and all-cause mortality: what is the dose-response relation? Med Sci Sports Exerc. 2001;33:S459–S471. doi: 10.1097/00005768-200106001-00016.
9. Dresselhaus TR, Peabody JW, Lee M, Wang MM, Luck J. Measuring compliance with preventive care guidelines: standardized patients, clinical vignettes, and the medical record. J Gen Intern Med. 2000;15:782–8. doi: 10.1046/j.1525-1497.2000.91007.x.
10. Weingarten S, Stone E, Hayward R, et al. The adoption of preventive care practice guidelines by primary care physicians: do actions match intentions? J Gen Intern Med. 1995;10:138–44. doi: 10.1007/BF02599668.
11. Ramsey PG, Curtis JR, Paauw DS, Carline JD, Wenrich MD. History-taking and preventive medicine skills among primary care physicians: an assessment using standardized patients. Am J Med. 1998;104:152–8. doi: 10.1016/s0002-9343(97)00310-0.
12. Hutchison B, Woodward CA, Norman GR, Abelson J, Brown JA. Provision of preventive care to unannounced standardized patients. CMAJ. 1998;158:185–93.
13. Reschovsky J, Reed M, Blumenthal D, Landon B. Physicians’ assessments of their ability to provide high-quality care in a changing health care system. Med Care. 2001;39:254–69. doi: 10.1097/00005650-200103000-00006.
14. Leshan LA, Fitzsimmons M, Marbella A, Gottlieb M. Increasing clinical prevention efforts in a family practice residency program through CQI methods. J Qual Improv. 1997;23:391–400. doi: 10.1016/s1070-3241(16)30327-3.
15. Sanazaro PJ, Worth RM. Measuring clinical performance of individual internists in office and hospital practice. Med Care. 1985;23:1097–114. doi: 10.1097/00005650-198509000-00007.
16. Dietrich AJ, Goldberg H. Preventive content of adult primary care: do generalists and subspecialists differ? Am J Public Health. 1984;74:223–7. doi: 10.2105/ajph.74.3.223.
17. Carey TS, Levis D, Pickard CG, Bernstein J. Development of a model quality-of-care assessment program for adult preventive care in rural medical practices. QRB Qual Rev Bull. 1991;17:54–9. doi: 10.1016/s0097-5990(16)30425-0.
18. Brook RH, McGlynn EA, Shekelle PG. Defining and measuring quality of care: a perspective from US researchers. Int J Qual Health Care. 2000;12:281–95. doi: 10.1093/intqhc/12.4.281.
19. Peabody JW, Luck J, Glassman P, Dresselhaus TR, Lee M. Should we use vignettes as a yardstick? A prospective trial comparing quality of care measurement by vignettes, chart abstraction and standardized patients. JAMA. 2000;283:1715–22. doi: 10.1001/jama.283.13.1715.
20. Beullens J, Rethans JJ, Goedhuys J, Buntinx F. The use of standardized patients in research in general practice. Fam Pract. 1997;14:58–62. doi: 10.1093/fampra/14.1.58.
21. Vu NV, Marcy MM, Colliver JA, Verhulst SJ, Travis TA, Barrows HS. Standardized (simulated) patients’ accuracy in recording clinical performance check-list items. Med Educ. 1992;26:99–104. doi: 10.1111/j.1365-2923.1992.tb00133.x.
22. Schwartz MH, Colliver JA. Using standardized patients for assessing clinical performance: an overview. Mt Sinai J Med. 1996;63:241–9.
23. Ferrell BG. Clinical performance assessment using standardized patients. A primer. Special series: core concepts in family medicine education. Fam Med. 1995;27:14–9.
24. Kinnersley P, Pill R. Potential of using simulated patients to study the performance of general practitioners. Br J Gen Pract. 1993;43:297–300.
25. Luck J, Peabody JW. Using standardised patients to measure physicians’ practice: validation study using audio recordings. BMJ. 2002;325:679. doi: 10.1136/bmj.325.7366.679.
26. McDonald CJ, Hui SL, Tierney WM. Effects of computer reminders for influenza vaccination on morbidity during influenza epidemics. MD Comput. 1992;9:304–12.
27. Dexter PR, Wolinsky FD, Gramelspacher GP, et al. Effectiveness of computer-generated reminders for increasing discussions about advance directives and completion of advance directive forms. A randomized, controlled trial. Ann Intern Med. 1998;128:102–10. doi: 10.7326/0003-4819-128-2-199801150-00005.
28. Dexter PR, Perkins S, Overhage JM, Maharry K, Kohler RB, McDonald CJ. A computerized reminder system to increase the use of preventive care for hospitalized patients. N Engl J Med. 2001;345:965–70. doi: 10.1056/NEJMsa010181.
