Introduction
Multiple practice guidelines exist for the treatment and secondary prevention of acute ischemic stroke. However, several studies demonstrate that these guidelines are not being followed and that adherence to guideline recommendations is low.(1-6) Performance measures (PMs), developed from clinical practice guidelines, enable the assessment of adherence to guidelines. Auditing clinical performance and providing feedback on adherence rates has been shown to be somewhat effective in improving adherence to guidelines.(7,8) Audit and feedback coupled with other interventions, including benchmarking, has been shown to be more effective.(9) Individual PMs may require different interventions to improve adherence.(10) More recently, quality improvement (QI) interventions have been tailored to overcome specific barriers identified before or during a QI process.(11,12)
Though strategies to address gaps in quality of care have been identified and studied for some diseases and specific procedures, these strategies are still being studied and have not yet been fully tested in acute stroke.(7,9,11,13) There is a clear need to find successful strategies to improve adherence to evidence-based guidelines in the care of patients with acute stroke.
The Stroke Practice Improvement Network (SPIN) was a multi-center, nested, quasi-randomized trial within a longitudinal prospective study to assess a multifaceted intervention designed to improve care, as measured by increased adherence to guideline-recommended process measures for ischemic stroke. The studied intervention was tailored to each site based on pre-intervention identification of site-specific barriers to adherence. We hypothesized that this tailored, locally implemented intervention would result in improvements in adherence to PMs that were 8-10% greater than those achieved with audit, feedback, and benchmarking alone.
Research Design and Methods
Study Setting and Participants
Sixteen sites (see Appendix 1) in 13 states were initially identified to participate and collect concurrent in-hospital stroke care data. Site selection was based on a list of 150 investigators who had contacted the American Academy of Neurology expressing interest in participating in the quality improvement (QI) project. Through a survey detailing capacity and infrastructure and an interview process assessing interest, 13 sites were ultimately selected. One major criterion was that a site had the ability to fund a study coordinator. The institutional review board at each hospital approved the protocol.
A variety of practice settings was represented. Two sites had fewer than 200 acute beds, three had 200-400 acute beds, and eight had more than 400 acute beds. Other site characteristics included 3 urban academic sites, 2 academic-affiliated community sites, and 8 community sites, overall representing 7 Council of Teaching Hospitals and Health Systems (COTH) sites and 6 non-COTH sites. Sixty-nine percent (9/13) of sites had a dedicated stroke unit, 69% (9/13) had stroke teams, and 85% (11/13) had stroke pathways.
Patients meeting the inclusion criteria (acute ischemic stroke, age greater than 18 years, seen by neurology, and not a hospital-to-hospital transfer; see Appendix 2 for details) were enrolled at the time of admission or when first evaluated.
Determination of the four performance measures
Measures were chosen from the 23 PMs most highly rated by a multidisciplinary expert panel.(14) Sites ranked these 23 PMs on importance for stroke care, room for improvement, and feasibility of collecting data and implementing a quality improvement initiative for the measure. The four measures chosen for ischemic stroke were: delivery of thrombolytic therapy within one hour of hospital arrival (tPA), a dysphagia screen performed before any oral intake (dysphagia), prophylaxis for deep venous thrombosis in nonambulatory patients (DVT), and discharge on warfarin for patients with atrial fibrillation unless contraindicated (AFIB). Their inclusion and exclusion criteria have been previously described, and their level of evidence at the time of this project is summarized in Table 3.(15)
Table 3.
Self-selected measures that sites identified for improvement.

| Measure / Level of evidence (14) | Intervention | Control | P value |
|---|---|---|---|
| tPA in 1 hour / C1 | 5/6 | 0/6 | .003 |
| Dysphagia screen / C1 | 6/6 | 2/6 | .045 |
| DVT prophylaxis / B1/2 | 5/6 | 2/6 | .19 |
| Warfarin for atrial fibrillation / A1 | 2/6 | 0/6 | .12 |

Major process changes attempted, shown as attempted (succeeded):

| | Intervention | Control |
|---|---|---|
| Standing orders implemented or revised | 5/6 (2/6) | 2/6 (2/6) |
| Standardized dysphagia screen implemented or revised | 6/6 (3/6) | 1/6 (1/6) |

Although most intervention sites attempted to implement standing orders and dysphagia screens, less than half could do so within the intervention period.
Identification of physician and organizational barriers to adherence to the four PMs
Two surveys were developed to collect baseline information: one regarding physician knowledge and attitudes about stroke care delivery (knowledge and attitudes survey), and one regarding existing organizational stroke-care infrastructure and barriers to care delivery (inventory survey). These surveys were developed through expert panel consensus after a review of the literature, informal interviews of healthcare providers, and clinician focus groups. The survey results were used to guide the site-specific interventions.
Follow-up Survey
A follow-up survey was developed to assess each site's implementation efforts for each performance measure. We also collected any performance-improvement tools the sites developed during the project.
Overall design
This study was designed as a group-randomized controlled trial, but because of difficulties with randomization (described below) it became a quasi-experimental study. The site was the unit of intervention and analysis. To control for potential sources of bias such as differential maturation, selection, and contamination, hospitals were paired on baseline dysphagia adherence rates and the stage of their QI infrastructure(16) and then randomized either to a control group that received audit, feedback, and benchmark information only, or to an intervention group that received audit, feedback, and benchmark information plus a multifaceted intervention designed specifically for each site. We paired hospitals by dysphagia adherence because that measure had the most variation in adherence and the most room for improvement among the sites; pairing may also help control for other, unmeasured hospital characteristics that could affect the results. All control hospitals had access to the same data analyses given to the intervention group.
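To make the pairing scheme concrete, the following minimal sketch pairs hypothetical sites within QI-infrastructure stage by adjacent dysphagia adherence rates and then randomizes within each pair. The site labels, rates, and stages below are invented for illustration and do not reproduce the study's actual matching data.

```python
import random
import pandas as pd

# Hypothetical site-level data: baseline dysphagia adherence (proportion)
# and a binary QI-infrastructure stage (per the instrument of Wagner et al.).
sites = pd.DataFrame({
    "site": list("ABCDEFGHIJKLMN"),
    "dysphagia_rate": [0.55, 0.58, 0.62, 0.64, 0.70, 0.71, 0.75,
                       0.76, 0.80, 0.81, 0.85, 0.86, 0.90, 0.91],
    "qi_stage": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0],
})

random.seed(42)
assignments = {}
# Within each QI stage, sort by adherence rate and take adjacent sites as pairs
# (this toy data has an even number of sites in each stage, so pairs never
# straddle stages).
ordered = sites.sort_values(["qi_stage", "dysphagia_rate"]).reset_index(drop=True)
for i in range(0, len(ordered) - 1, 2):
    pair = [ordered.loc[i, "site"], ordered.loc[i + 1, "site"]]
    random.shuffle(pair)  # coin flip within the pair
    assignments[pair[0]] = "intervention"
    assignments[pair[1]] = "control"
print(assignments)
```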
Intervention
Following baseline data collection, the stroke team from each site assigned to the experimental group participated in a 1½-day intervention meeting. The experimental intervention is summarized in Table 1. Study investigators analyzed each site's survey results to identify potential knowledge and attitude barriers, and provided specific recommendations and tools for the sites to use to overcome them. Study investigators also analyzed the site-specific data to identify potential barriers to adherence and suggest improvements; for example, one barrier to higher adherence for warfarin treatment in atrial fibrillation was shown to be a decline in warfarin use with increasing patient age. We could also show the sites that lack of warfarin use was not related to the size of the stroke, as measured by the National Institutes of Health Stroke Scale score.
Table 1. Summary of site-specific experimental intervention given to site staff.

- Tool kit
- Individualized data analysis and written suggestions for improvement
Disturbance of randomization
Two hospitals, led by one principal investigator (PI) and randomized to the intervention group, dropped out of the project just after randomization because of financial constraints on ongoing data collection. They had collected a total of 150 cases, or 6.5% of the total baseline data, which were excluded from the analysis. Of the seven remaining site PIs in the intervention group, one had a conflict with the scheduled intervention meeting and was switched to the control group. The site in the control group with the dysphagia adherence rate and QI infrastructure stage most similar to this PI's site was then switched to the experimental group. These changes, which occurred prior to the intervention phase of the study, meant that group assignments were no longer randomized. Overall, there were six sites in the intervention group and seven sites in the control group.
Baseline data were collected from December 1, 2001 until December 4, 2002. The experimental interventions occurred from January 16, 2003 until July 14, 2003. Post-intervention data were collected for patients admitted from July 15, 2003 until April 15, 2004.
Sample size
Based on prior literature,(9,18) a 10% difference in PM adherence rates between the intervention and control groups was used to determine the sample size needed for each hospital and PM. To determine sample size under a group-randomized design,(17) we used the actual variation in patient counts per hospital and the variability in adherence rates across hospitals, based on five months of data. For 13 sites, with a type I error of 0.025 (critical value = 1.99, one-tailed test) and statistical power of 80% to detect a 10% difference, we estimated that a minimum of 50 subjects per site would be needed for the dysphagia measure, 15-25 subjects per site for the DVT and AFIB measures, and more than 400 subjects per group for the tPA measure. Thus, the study was not adequately powered to detect quantitative differences in the tPA PM, so only qualitative data are presented for this measure.
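As a rough illustration of the two-proportion calculation underlying such estimates, the sketch below uses statsmodels. The 70% and 80% adherence rates and the intraclass correlation are assumed for illustration only; the study's actual calculation additionally incorporated the observed between-hospital variability required by the group-randomized design.(17)

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Cohen's h for detecting a 10-percentage-point difference in adherence
# (70% vs. 80%; these baseline rates are assumed for illustration).
h = proportion_effectsize(0.80, 0.70)

# One-tailed test, alpha = 0.025, power = 80%, as in the text.
n_per_group = NormalIndPower().solve_power(
    effect_size=h, alpha=0.025, power=0.80, alternative="larger")

# A group-randomized design inflates this by a design effect,
# 1 + (m - 1) * ICC, where m is the cluster size and ICC the
# intraclass correlation (0.02 assumed here for illustration).
m, icc = 50, 0.02
print(round(n_per_group), round(n_per_group * (1 + (m - 1) * icc)))
```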
Primary analysis
The primary endpoint was the difference in post-intervention adherence rates for three PMs between the intervention and control groups, analyzed using the chi-square test.(17) Because patients cluster within a hospital, we controlled for clustering within sites using a generalized linear mixed model (Glimmix) for each PM.
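A minimal sketch of this style of analysis in Python follows. All data and column names are fabricated for illustration; the study used the SAS Glimmix mixed-effects logistic model, whereas the GEE fit shown here is a related, marginal way of accounting for clustering, not the authors' exact procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# Fabricated patient-level data: adherence indicator, study arm, and site.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "site": np.repeat([f"s{i}" for i in range(13)], 40),
    "group": np.repeat([1] * 6 + [0] * 7, 40),  # 6 intervention, 7 control sites
    "adherent": rng.binomial(1, 0.8, 13 * 40),
})

# Unadjusted comparison: chi-square test on the pooled 2x2 table.
chi2, p, dof, _ = chi2_contingency(pd.crosstab(df["group"], df["adherent"]))
print(f"chi-square p = {p:.3f}")

# Account for clustering of patients within sites: GEE with an
# exchangeable working correlation.
gee = smf.gee("adherent ~ group", groups="site", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```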
Secondary analyses
The differences in post-intervention adherence rates for the three PMs, after controlling for baseline adherence rates, were assessed using a multiple linear regression model. We also accounted for clustering within sites using the Glimmix model while controlling for baseline adherence rates.
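A sketch of the site-level regression is below; the adherence rates are invented for illustration and are not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented site-level adherence proportions, one row per site
# (group: 1 = intervention, 0 = control).
site_df = pd.DataFrame({
    "group":         [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    "baseline_rate": [0.72, 0.65, 0.80, 0.55, 0.70, 0.68,
                      0.57, 0.60, 0.75, 0.62, 0.66, 0.71, 0.58],
    "post_rate":     [0.80, 0.70, 0.85, 0.65, 0.78, 0.74,
                      0.66, 0.69, 0.80, 0.70, 0.72, 0.78, 0.64],
})

# Post-intervention adherence as a function of study arm,
# adjusting for each site's baseline adherence rate.
ols = smf.ols("post_rate ~ group + baseline_rate", data=site_df).fit()
print(ols.params)  # the 'group' coefficient is the adjusted difference
```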
Another secondary analysis examined the change in adherence rates from pre- to post-intervention using the chi-square test, with the Breslow-Day test used to assess homogeneity of odds ratios across sites.
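The Breslow-Day step can be sketched with statsmodels' StratifiedTable; the 2x2 counts per site below are fabricated for illustration.

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# One 2x2 table per site: rows = phase (pre, post),
# columns = (adherent, not adherent). Counts are fabricated.
tables = [
    np.array([[40, 10], [45, 5]]),   # site 1
    np.array([[35, 15], [42, 8]]),   # site 2
    np.array([[28, 22], [33, 17]]),  # site 3
]
st = StratifiedTable(tables)

# Breslow-Day test of homogeneity of the phase-by-adherence
# odds ratios across sites.
bd = st.test_equal_odds()
print(f"statistic = {bd.statistic:.2f}, p = {bd.pvalue:.3f}")
```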
Post hoc analysis
Exploratory analyses were performed on the results of the site follow-up survey to determine whether there were qualitative differences in the QI effort at intervention versus control sites (Table 3).
Clinical Data Acquisition and Quality
We used a web-based data entry system. Required data elements included all variables related to the four required PMs, demographics and discharge status. Optional data elements included the use of standing orders and the development of in-hospital complications. Sites obtained Institutional Review Board approval as required by local policy.
For data validation, entered data were compared with an independent chart review of a randomly selected sample comprising 10% of total patient enrollment for the study period. Data abstraction from the charts was performed either centrally (SPIN project coordinator, six sites) or locally (trained on-site abstractor, seven sites). Overall reliability was good (kappa = 0.68). Specific data elements with poor agreement (kappa < 0.5) were onset time, arrival time, and dysphagia screen before oral intake. Because the dysphagia screen before oral intake was expected to often be missing when assessed retrospectively, we required prospective data to assess adherence to this PM.
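Agreement between the web-entered data and the audit re-abstraction can be quantified as in the sketch below. The element values are fabricated, and scikit-learn's cohen_kappa_score is one of several equivalent implementations.

```python
from sklearn.metrics import cohen_kappa_score

# Fabricated example: the same binary data element (e.g., "dysphagia
# screen before oral intake") as entered in the web system and as
# re-abstracted from the chart for the 10% validation sample.
entered = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
audited = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
kappa = cohen_kappa_score(entered, audited)
print(f"kappa = {kappa:.2f}")  # chance-corrected agreement
```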
Results
Thirteen sites collected 3311 cases from December 1, 2001 through March 31, 2004. Baseline differences between intervention and control sites in patient and hospital characteristics are shown in Table 2. There were no significant differences in hospital characteristics between the intervention and control sites. Among patient characteristics, patients admitted to the control hospitals were on average older, more likely to be white, and more likely to be discharged to a skilled nursing facility.
Table 2. Baseline characteristics of intervention and control sites.

| | Intervention (6 sites) | Control (7 sites) | P value |
|---|---|---|---|
| | N = 1169 | N = 902 | |
| Patient characteristics | | | |
| Female (%) | 49.3% | 50.6% | .34 |
| African American (%) | 11% | 8.6% | .11 |
| White (%) | 77% | 88% | <.0001 |
| Average age | 69 | 71 | .0015 |
| In-hospital mortality | 6.2% | 4% | .03 |
| NIHSS (median) | 5 | 5 | |
| Discharge destination | | | |
| Home | 39% | 39% | .83 |
| Skilled nursing facility | 9% | 18% | <.0001 |
| Rehabilitation unit | 27% | 21% | .002 |
| Hospital characteristics | | | |
| Teaching status (COTH) | 50% | 43% | .81 |
| QI infrastructure (0 or 1) | 0.5 | 0.48 | .95 |
| Stroke unit | 83% | 57% | .35 |
| Beds | 567 | 427 | .38 |
| Pathway | 100% | 62% | .08 |
| Stroke team | 67% | 71% | .86 |
NIHSS = National Institutes of Health Stroke Scale score
COTH = Council of Teaching Hospitals and Health Systems
QI = quality improvement infrastructure
After controlling for clustering within sites, the intervention group had a significantly higher post-intervention adherence rate than the control group for discharging patients with atrial fibrillation on warfarin (98% vs 87%, P < .005). No other performance measure showed a significant difference in post-intervention adherence rates (Figure 1).
Figure 1. Changes in adherence rates. Changes in pre-intervention to post-intervention adherence rates for each performance measure, by group.
Although there appeared to be a substantial difference in baseline adherence rates for the dysphagia measure (72% vs 57%), this difference was not statistically significant once clustering within hospitals was controlled. Using the Glimmix model to control for baseline adherence rates, there was no significant difference in post-intervention adherence rates between the intervention and control sites for any measure except AFIB.
Although not statistically significant, dysphagia adherence rates improved in both groups, with a trend favoring greater improvement among the control sites. Conversely, for DVT there was a non-significant trend toward greater improvement in the intervention group (Figure 1). For each of the PMs, the Breslow-Day test for homogeneity suggested that the changes in adherence rates from the pre-intervention to the post-intervention phase were statistically similar across sites.
Post hoc analysis
Interventions
We received follow-up surveys regarding implementation efforts from 6/7 control sites and 6/6 intervention sites. Site-reported interventions, and whether sites were able to put processes in place before the end of the intervention period or toward the end of the trial, are outlined in Table 3. One control hospital did not implement a standardized dysphagia screen but instead assigned its study coordinator to watch patients eat; it nevertheless received credit for implementing a screen. Overall, the intervention sites attempted or performed more improvement activity.
Discussion
The addition of a multifaceted QI intervention was associated with a statistically significant improvement in appropriate anticoagulation with warfarin on discharge for patients with atrial fibrillation (AFIB), compared with audit, feedback, and benchmarking alone. There was no significant improvement for the other PMs, although we had insufficient power to assess the tPA PM. Unlike prior studies, we were not able to show that audit, feedback, and benchmarking alone improved adherence to our PMs.(7-9)
Our QI intervention encountered several challenges that may help explain the mixed results. First, in contrast to QI studies that sought to improve processes limited to a single phase of clinical care, in a single setting, and conducted by the same staff, the stroke measures in this study were more heterogeneous, and the selected PMs required changes in behavior at different times (in the emergency department, upon admission, and at discharge).
These different settings require a QI team with members from multiple groups whose expertise varies depending on the behavior identified for change. A QI team assembled to improve adherence to dysphagia screening might involve, at a minimum, floor nursing staff, speech and swallowing staff, floor nursing administration, a nursing educator, and a neurologist. This contrasts with the QI team needed to change behavior in the emergency department, which could include staff, physicians, nurses, and administrators from multiple departments: emergency, neurology, laboratory, and radiology.
Second, we targeted processes of care that required sites to make substantial organizational changes for performance to improve. To implement a dysphagia screen, a QI team must gain agreement from all physicians on who should be screened, decide who will perform the screen and where, develop the screen, develop a method for training and assessing its use, and then implement the screen and track its success. Changes required to improve DVT adherence, besides obtaining buy-in from staff, involved the introduction or revision of standing orders that include a prompt for DVT prophylaxis. Steps to implement standing orders include reaching consensus on content, obtaining medical records approval, and changing physician behavior to use the standard orders. The changes required to improve adherence for thrombolytics are too numerous to discuss here.(19)
Third, our PMs were not all based on an equally high level of evidence (Table 3). These measures were chosen because there was strong professional consensus and good face validity for most, and the sites ranked the PMs as important, feasible, and having room for improvement. Nevertheless, our knowledge and attitudes surveys revealed that a lack of clinician buy-in was an important barrier, particularly for the tPA and dysphagia measures.
Another factor that may have contributed to the absence of improvement is the relatively short intervention period (six months). All the intervention sites chose to implement a standardized dysphagia screening tool, but half (3/6) were unsuccessful within the 6-month time frame. A similar problem was noted with the implementation and use of standing orders, which may partially explain the lack of improvement in the DVT measure. As stated previously, simultaneously addressing multiple dimensions of stroke care requires significant and time-consuming organizational change. The results of this study might have been different had sites been allowed more time to implement changes.
Lastly, due to funding limitations we did not conduct on-site outreach visits. As others have pointed out, our less aggressive approach to tailored interventions may have limited improvement.(12) While this may partially explain why the multifaceted intervention did not have a large effect on all PMs, the fact that we did not add substantial additional resources to implement the intervention makes this a good effectiveness trial; the results probably reflect what most hospitals can accomplish in a short amount of time with limited resources.
A factor that may enable hospitals to achieve greater progress on these stroke measures is aligning hospital activities with external influences, including healthcare regulation, public dissemination of hospital performance, and financial incentives.(21) The Primary Stroke Center certification process, implemented by The Joint Commission in cooperation with the American Heart Association/American Stroke Association, includes assessment of the measures in this project. The Centers for Medicare & Medicaid Services (CMS) is considering adopting five stroke measures as part of a core measurement set, which would involve mandatory reporting and public dissemination of hospital-level results. If CMS adopts stroke as a new core measurement set, this should provide a powerful incentive for improvement, although some barriers may still be difficult for hospitals to overcome.
We found that a single method or intervention is unlikely to address the variety of care-delivery barriers found across multiple domains of care; each phase of care has its own interventional needs. Recommendations for the future include the addition of outreach visits, linkage to external incentives, or linkage to a national organization that provides recognition, all of which have been shown or recommended to improve quality of care.(21-24)
Limitations of our study include the breakdown in the randomization of hospitals and the small number of hospitals. Because group assignments used for the analysis were not random, the results may be biased, although the direction of any bias is unclear. In addition, our hospitals had relatively high baseline performance rates on the DVT, AFIB, and dysphagia measures, making it more difficult to demonstrate improvement.
A multifaceted intervention designed to address specific physician and organizational barriers seems a logical approach to improving adherence to stroke care guidelines. We have shown that audit and feedback alone did not improve all of the targeted components of acute stroke care. Future research is needed to determine which methods or tools will improve the delivery of quality care to acute stroke patients in specific phases of care, with attention to identified barriers. We suggest that future stroke quality improvement research assess the organizational changes required by performance measures and provide a sufficient intervention period; even longer periods may be needed to allow for the acceptance and diffusion of evidence by average or slow adopters. We also suggest that quality improvement teams focus on similar phases of care to ease a hospital's QI burden by reducing the number of concurrent improvement teams. Lastly, QI efforts should focus on areas with strong clinical consensus among all specialties involved in the care.
Acknowledgments
Consultants
Catherine Borbas, PhD, helped design the intervention and clinician surveys.
Larry Goldstein, MD (Duke University), provided neurologic expertise.
Cheryl Bushnell, MD (Duke University), helped with the literature group.
Sherry Fox, PhD, provided expertise in outcome measures.
Barbara Vickrey, MD, MPH (UCLA and RAND Corporation), led the site selection work group.
Patricia Hibbard, MD, helped with the design of the prospective study.
This study was supported by NIH grant K23NS002163.
This project was funded by the American Academy of Neurology (AAN), the American Heart Association (AHA/ASA) and an unrestricted educational grant from Boehringer Ingelheim Pharmaceuticals, Inc. The content is solely the responsibility of the authors and does not represent any official views of the NINDS, AAN, AHA/ASA or Boehringer Ingelheim.
Footnotes
Drs. Hinchey, Shephard, Herman, Selker, and Kent and Ms. Ruthazer and Ms. Tonn have no potential financial or other conflicts of interest to disclose.
Reference List
- 1. Cadilhac DA, Ibrahim J, Pearce DC, Ogden KJ, McNeill J, Davis SM, Donnan GA, for the SCOPES Study Group. Stroke. 2004;35:1035–1040. doi:10.1161/01.STR.0000125709.17337.5d.
- 2. Scholte WJM, Dippel DW, Franke CL, van Oostenbrugge RJ, de Jong G, Hoeks S, Simoons ML. Quality of hospital and outpatient care after stroke or transient ischemic attack. Insights from a stroke survey in the Netherlands. Stroke. 2006;37:1844–1849. doi:10.1161/01.STR.0000226463.17988.a3.
- 3. Heuschmann PU, Biegler MK, Busse O, Elsner S, Grau A, Hasenbein U, Hermanek P, Janzen RWC, Kolominisky-Rabas PL, Kraywinkel K, Lowitzsch K, Misselwitz B, Nabavi DG, Otten K, Pientka L, von Reutern GM, Ringelstein EB, Sander D, Wagner M, Berger K. Development and implementation of evidence-based indicators for measuring quality of acute stroke care. The quality indicator board of the German stroke registers study group (ADSR). Stroke. 2006;37:2573–2578. doi:10.1161/01.STR.0000241086.92084.c0.
- 4. Kapral MK, Laupacis A, Phillips SJ, Silver FL, Hill MD, Fang J, Richards J, Tu JV, for the Investigators of the Registry of the Canadian Stroke Network. Stroke. 2004;35:1756–1762. doi:10.1161/01.STR.0000130423.50191.9f.
- 5. Reeves MJ, Arora S, Broderick JP, Frankel M, Heinrich JP, Hickenbottom S, Karp H, LaBresh KA, Malarcher A, Mensah G, Moomaw CJ, Schwamm L, Weiss P, Paul Coverdell Prototype Registries Writing Group. Acute stroke care in the US: results from 4 pilot prototypes of the Paul Coverdell National Acute Stroke Registry. Stroke. 2005;36:1232–1240. doi:10.1161/01.STR.0000165902.18021.5b.
- 6. Jencks SF, Cuerdon T, Burwen DR, Fleming B, Houck PM, Kussmaul AE, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284:1670–1676. doi:10.1001/jama.284.13.1670.
- 7. O'Brien T, Oxman AD, Davis DA, Haynes RB, Freemantle N, Harvey EL. Audit and feedback versus alternative strategies: effects on professional practice and health care outcomes (Cochrane review). The Cochrane Library. 2002. doi:10.1002/14651858.CD000260.
- 8. Bradley EH, Herrin J, Mattera JA, Holmboe ES, Wang Y, Frederick P, et al. Quality improvement efforts and hospital performance: rates of beta-blocker prescription after acute myocardial infarction. Medical Care. 2005;43:282–292. doi:10.1097/00005650-200503000-00011.
- 9. Kiefe CI, Allison JJ, Williams OD, Person SD, Weaver MT, Weissman NW. Improving quality improvement using achievable benchmarks for physician feedback: a randomized controlled trial. JAMA. 2001;285:2871–2879. doi:10.1001/jama.285.22.2871.
- 10. Borbas C, Morris N, McLaughlin B, Asinger R, Gobel F. The role of clinical opinion leaders in guideline implementation and quality improvement. Chest. 2000;118:24S–32S. doi:10.1378/chest.118.2_suppl.24s.
- 11. Shaw B, Cheater F, Baker R, Gillies C, Hearnshaw H, Flottorp S, et al. Tailored interventions to overcome identified barriers to change: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2005:1–45. doi:10.1002/14651858.CD005470.
- 12. Flottorp S, Oxman AD. Identifying barriers and tailoring interventions to improve the management of urinary tract infections and sore throat: a pragmatic study using qualitative methods. BMC Health Services Research. 2003;3:3. doi:10.1186/1472-6963-3-3.
- 13. Caminiti C, Scoditti U, Diodati F, Passalacqua R. How to promote, improve and test adherence to scientific evidence in clinical practice. BMC Health Services Research. 2005;5:62. doi:10.1186/1472-6963-5-62.
- 14. Holloway RG, Vickrey B, Benesch C, Hinchey JA, Bieber J. Development of performance measures for acute ischemic stroke. Stroke. 2001;32:2058–2074. doi:10.1161/hs0901.94620.
- 15. Hinchey JA, Shephard T, Tonn ST, Ruthazer R, Selker HP, Kent DM. Benchmarks and determinants of adherence to stroke performance measures. Stroke. 2008;39:1619–1620. doi:10.1161/STROKEAHA.107.496570.
- 16. Wagner C, de Bakker DH, Groenewegen PP. A measuring instrument for evaluation of quality systems. Int J Quality Health Care. 1999;11:119–130. doi:10.1093/intqhc/11.2.119.
- 17. Murray DM. Design and Analysis of Group-Randomized Trials. New York: Oxford University Press; 1998.
- 18. Mehta RH, Montoye CK, Gallogly M, Baker P, Blount A, Faul J, Roychoudhury C, Borzak S, Fox S, Franklin M, Freundl M, Kline-Rogers E, LaLonde T, Orza M, Parrish R, Satwicz M, Smith JM, Sobotka P, Winston S, Riba AA, Eagle KA, GAP Steering Committee of the American College of Cardiology. Improving quality of care for acute myocardial infarction: The Guidelines Applied in Practice (GAP) Initiative. JAMA. 2002;287:1269–1276. doi:10.1001/jama.287.10.1269.
- 19. Soumerai SB, McLaughlin TJ, Gurwitz JH, Guadagnoli E, Hauptman PJ, Borbas C, Morris N, et al. Effect of local medical opinion leaders on quality of care for acute myocardial infarction: a randomized controlled trial. JAMA. 1998;279:1358–1363. doi:10.1001/jama.279.17.1358.
- 20. Hermann RC, Zazzali J, Lerner DE, Chan JA. Aligning measurement-based quality improvement with implementation of evidence-based practices. Administration and Policy in Mental Health and Mental Health Services Research. 2006;33:636–645. doi:10.1007/s10488-006-0055-1.
- 21. Rosenthal MB, Fernandopulle R, Song HR, et al. Paying for quality: providers' incentives for quality improvement. Health Affairs. 2004;23:127–141. doi:10.1377/hlthaff.23.2.127.
- 22. Lindenauer PK, Remus D, Roman S, Rothberg MB, Benjamin EM, Ma A, Bratzler DW. Public reporting and pay for performance in hospital quality improvement. NEJM. 2007;356:486–496. doi:10.1056/NEJMsa064964.
- 23. Bufalino V, Peterson ED, Krumholz HM, Burke GL, LaBresh KA, Jones DW, Faxon DP, Valadez AM, Solis P, Schwartz JS, American Heart Association. Nonfinancial incentives for quality: a policy statement from the American Heart Association. Circulation. 2007;115:398–401. doi:10.1161/CIRCULATIONAHA.106.180202.