Despite the considerable amount of money spent on clinical research, relatively little attention has been paid to ensuring that research findings are implemented in routine clinical practice.1 Many different types of intervention can be used to promote behavioural change among healthcare professionals and the implementation of research findings. When interpreting the results of individual trials of behavioural change, it is difficult to disentangle the effects of an intervention from the influence of contextual factors.2 Nevertheless, systematic reviews of rigorous studies provide the best evidence of the effectiveness of different strategies for promoting behavioural change.3,4 In this paper we examine systematic reviews of strategies for the dissemination and implementation of research findings, both to identify evidence of the effectiveness of different strategies and to assess the quality of the reviews themselves.
Summary points
Systematic reviews of rigorous studies provide the best evidence on the effectiveness of different strategies to promote the implementation of research findings
Passive dissemination of information is generally ineffective
It seems necessary to use specific strategies to encourage implementation of research based recommendations and to ensure changes in practice
Further research on the relative effectiveness and efficiency of different strategies is required
Identification and inclusion of systematic reviews
We searched Medline records dating from 1966 to June 1995 using a strategy developed in collaboration with the NHS Centre for Reviews and Dissemination. The search identified 1139 references. No reviews from the Cochrane Effective Practice and Organisation of Care Review Group4 had been published during this period. In addition, we searched the Database of Abstracts of Reviews of Effectiveness (DARE) (www.york.ac.uk/inst/crd) but did not identify any other review meeting the inclusion criteria.
We searched for any review of interventions to improve professional performance that reported explicit selection criteria and in which the main outcomes considered were changes in professional performance or healthcare outcomes. We excluded reviews that did not report explicit selection criteria, systematic reviews focusing on the methodological quality of published studies, published bibliographies, bibliographic databases, and registers of projects on dissemination activities. If a systematic review had been updated, we considered only the most recently published version. For example, the Effective Health Care bulletin on implementing clinical guidelines superseded the earlier review by Grimshaw and Russell.5,6
Two reviewers independently assessed the quality of the reviews and extracted data on the focus, inclusion criteria, main results, and conclusions of each review. Quality was assessed with a previously validated checklist of nine criteria, each scored as done, partially done, or not done.7,8 Reviewers also gave each review a summary score (out of seven) based on its scientific quality. Major disagreements between reviewers were resolved by discussion and consensus.
Results and assessment of systematic reviews
We identified 18 reviews that met the inclusion criteria. They were categorised as focusing on broad strategies (such as the dissemination and implementation of guidelines5,6,9–11), continuing medical education,12,13 particular strategies (such as audit and feedback,14,15 computerised decision support systems,16,17 or multifaceted interventions18), particular target groups (for example, nurses19 or primary healthcare professionals20), and particular problem areas or types of behaviour (for example, diagnostic testing,15 prescribing,21 or aspects of preventive care15,22–25). Most primary studies were included in more than one review, and some reviewers published more than one review. No systematic reviews published before 1988 were identified. None of the reviews explicitly addressed the cost effectiveness of different strategies for effecting changes in behaviour.
The reviews did not adopt a common approach to categorising interventions and potentially confounding factors: their inclusion criteria and methods varied considerably, and the same interventions were frequently classified differently in different systematic reviews.
Common methodological problems included failure to report adequately the criteria used to select studies for the review, failure to avoid bias in the selection of studies, failure to report adequately the criteria used to assess validity, and failure to apply such criteria when assessing the validity of the selected studies. Overall, 42% (68/162) of criteria were reported as having been done, 49% (80/162) as having been partially done, and 9% (14/162) as not having been done. The mean summary score was 4.13 (range 2 to 6, median 3.75, mode 3).
Encouragingly, reviews published more recently seemed to be of better quality. For reviews published between 1988 and 1991 (n=6), only 20% (11/54) of criteria were scored as having been done (mean summary score 3.0); for reviews published after 1991 (n=12), 52% (56/108) of criteria were scored as having been done (mean summary score 4.7).
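For readers who wish to see how these percentages are derived, a minimal sketch follows. The per-review scores are invented for illustration; the paper reports only the aggregate figures (162 criteria across 18 reviews, each scored on nine criteria).

```python
from collections import Counter

# Hypothetical example: each review is scored on nine criteria, each
# marked "done", "partial", or "not done". Two invented reviews are
# shown; the real data covered 18 reviews (18 x 9 = 162 criteria).
reviews = {
    "review_a": ["done", "done", "partial", "not done", "done",
                 "partial", "done", "partial", "done"],
    "review_b": ["partial"] * 5 + ["done"] * 3 + ["not done"],
}

counts = Counter(score for scores in reviews.values() for score in scores)
total = sum(counts.values())  # number of reviews x 9 criteria

for category in ("done", "partial", "not done"):
    n = counts[category]
    print(f"{category}: {n}/{total} = {100 * n / total:.0f}%")
```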
Five reviews attempted formal meta-analyses of the results of the studies identified.12,17,19,23,25 The appropriateness of meta-analysis in three of these reviews is uncertain,12,17,19 and the reviews should be considered exploratory at best, given the broad focus and heterogeneity of the studies included in the reviews with respect to the types of interventions, targeted behaviours, contextual factors, and other research factors.2
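Heterogeneity of this kind can be quantified before deciding whether pooling is appropriate. The sketch below illustrates Cochran's Q and the derived I² statistic; the effect estimates are invented, and these particular statistics are shown purely as an illustration rather than taken from any of the reviews discussed here.

```python
import numpy as np

# Illustrative only: invented effect estimates (log odds ratios) and
# standard errors for five hypothetical trials of one intervention.
effects = np.array([0.10, 0.45, -0.05, 0.60, 0.20])
se = np.array([0.15, 0.20, 0.18, 0.25, 0.12])

weights = 1 / se**2                                # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
Q = np.sum(weights * (effects - pooled) ** 2)
df = len(effects) - 1

# I^2: proportion of total variation due to between-study heterogeneity.
I2 = max(0.0, (Q - df) / Q) * 100

print(f"pooled effect = {pooled:.3f}, Q = {Q:.2f} (df = {df}), I^2 = {I2:.0f}%")
```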
A number of consistent themes emerged from the systematic reviews (box). (Further details about the systematic reviews are available on the BMJ’s website.) Most of the reviews identified modest improvements in performance after interventions. However, the passive dissemination of information was generally ineffective in altering practices, no matter how important the issue or how valid the assessment methods.5,9,11,13,21,26 Computerised decision support systems have led to improvements in doctors’ performance in decisions on drug dosage, the provision of preventive care, and the general clinical management of patients, but not in diagnosis.16 Educational outreach visits have resulted in improvements in prescribing decisions in North America.5,13 Patient mediated interventions also seem to improve the provision of preventive care in North America (where baseline performance is often very low).13 Multifaceted interventions (that is, a combination of two or more interventions such as participation in audit and a local consensus process) seem to be more effective than single interventions.13,18 There is insufficient evidence to assess the effectiveness of some interventions, for example the identification and recruitment of local opinion leaders (practitioners nominated by their colleagues as influential).5
Interventions to promote behavioural change among health professionals
Consistently effective interventions
Educational outreach visits (for prescribing in North America)
Reminders (manual or computerised)
Multifaceted interventions (a combination that includes two or more of the following: audit and feedback, reminders, local consensus processes, or marketing)
Interactive educational meetings (participation of healthcare providers in workshops that include discussion or practice)
Interventions of variable effectiveness
Audit and feedback (or any summary of clinical performance)
The use of local opinion leaders (practitioners identified by their colleagues as influential)
Local consensus processes (inclusion of participating practitioners in discussions to ensure that they agree that the chosen clinical problem is important and the approach to managing the problem is appropriate)
Patient mediated interventions (any intervention aimed at changing the performance of healthcare providers for which specific information was sought from or given to patients)
Interventions that have little or no effect
Educational materials (distribution of recommendations for clinical care, including clinical practice guidelines, audiovisual materials, and electronic publications)
Didactic educational meetings (such as lectures)
Few reviews attempted explicitly to link their findings to theories of behavioural change. The difficulties associated with linking findings and theories are illustrated in the review by Davis et al, who found that the results of their overview supported several different theories of behavioural change.13
Availability and quality of primary studies
This overview also provides an opportunity to assess the availability and quality of primary research on dissemination and implementation. Identifying published studies of behavioural change is difficult because they are poorly indexed and scattered across generalist and specialist journals. Nevertheless, two reviews give an indication of the extent of research in this area. Oxman et al identified 102 randomised or quasirandomised controlled trials involving 160 comparisons of interventions to improve professional practice.11 The Effective Health Care bulletin on implementing clinical guidelines identified 91 rigorous studies (63 randomised or quasirandomised controlled trials and 28 controlled before and after studies or time series analyses).5 Although the studies included in these two reviews fulfilled the minimum inclusion criteria, some are methodologically flawed, with potentially major threats to their validity. Many studies randomised health professionals or groups of professionals (cluster randomisation) but analysed the results by patient, possibly overestimating the significance of the observed effects (a unit of analysis error).27 Given the small to moderate size of the observed effects, this could lead to false conclusions about the effectiveness of interventions in both meta-analyses and qualitative analyses. Few studies attempted any form of economic analysis.
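The unit of analysis error can be made concrete with a small simulation; this is a hypothetical sketch, not a reanalysis of any reviewed study. When practices are randomised but patients are analysed as if they were independent, the false positive rate rises well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative simulation: 10 clusters (e.g. practices) per arm, 30
# patients per cluster, NO true intervention effect, but outcomes are
# correlated within clusters (between-cluster sd 0.5, within-cluster sd 1.0).
def trial():
    def arm():
        cluster_means = rng.normal(0, 0.5, size=10)  # shared cluster effects
        return rng.normal(cluster_means[:, None], 1.0, size=(10, 30))
    return arm(), arm()

false_pos_patient = false_pos_cluster = 0
for _ in range(2000):
    a, b = trial()
    # Wrong: analyse by patient, ignoring clustering.
    if stats.ttest_ind(a.ravel(), b.ravel()).pvalue < 0.05:
        false_pos_patient += 1
    # Right: analyse cluster means, the unit that was randomised.
    if stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue < 0.05:
        false_pos_cluster += 1

print(f"false positive rate by patient: {false_pos_patient / 2000:.2f}")  # well above 0.05
print(f"false positive rate by cluster: {false_pos_cluster / 2000:.2f}")  # close to 0.05
```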
Given the importance of implementing the results of sound research and the problems of generalisability across different healthcare settings, there are relatively few studies of individual interventions to effect behavioural change. The review by Oxman et al identified studies involving 12 comparisons of educational materials, 17 of conferences, four of outreach visits, six of local opinion leaders, 10 of patient mediated interventions, 33 of audit and feedback, 53 of reminders, two of marketing, eight of local consensus processes, and 15 of multifaceted interventions.11 Few studies compared the relative effectiveness of different strategies; only 22 out of 91 studies reviewed in the Effective Health Care bulletin allowed comparisons of different strategies.5 A further limitation of the evidence about different types of interventions is that the research is often conducted by limited numbers of researchers in specific settings. The generalisability of these findings to other settings is uncertain, especially because of the marked differences in undergraduate and postgraduate education, the organisation of healthcare systems, potential systemic incentives and barriers to change, and societal values and cultures. Most of the studies reviewed were conducted in North America; only 14 of the 91 studies reviewed in the Effective Health Care bulletin had been conducted in Europe.5
The way forward
This overview suggests that there is an increasing amount of primary and secondary research on dissemination and implementation. It is striking how little is known about the effectiveness and cost effectiveness of interventions that aim to change the practice or delivery of health care. The reviews we examined suggest that the passive dissemination of information (for example, publication of consensus conferences in professional journals or the mailing of educational materials) is generally ineffective and, at best, results in only small changes in practice. Yet passive dissemination probably remains the most common approach adopted by researchers, professional bodies, and healthcare organisations. The use of specific strategies to implement research based recommendations seems to be necessary to ensure that practices change, and studies suggest that more intensive efforts to alter practice are generally more successful.
At a local level greater attention needs to be given to actively coordinating dissemination and implementation to ensure that research findings are implemented. The choice of intervention should be guided by the evidence on the effectiveness of dissemination and implementation strategies, the characteristics of the message,10 the recognition of external barriers to change,13 and the preparedness of clinicians to change.28 Local policymakers with responsibility for professional education or quality assurance need to be aware of the results of implementation research, develop expertise in the principles of change management, and accept the need for local experimentation.
Given the paucity of evidence, it is vital that dissemination and implementation activities are rigorously evaluated whenever possible. Studies evaluating a single intervention provide little new information about the relative effectiveness and cost effectiveness of different interventions in different settings. Greater emphasis should be given to studies that evaluate two or more interventions in a specific setting or that help clarify the circumstances likely to modify the effectiveness of an intervention. Economic evaluations should be an integral component of such research. Researchers should be more aware of the issues raised by cluster randomisation and should ensure that studies have adequate power and are analysed by appropriate methods.29
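On the question of power, the standard design effect formula, DE = 1 + (m − 1) × ICC, shows how much a sample size must be inflated to allow for clustering. A minimal sketch follows; the cluster size, intracluster correlation, and baseline sample size are assumed values chosen purely for illustration.

```python
# Standard design effect for cluster randomisation:
#   DE = 1 + (m - 1) * ICC
# where m is the average cluster size and ICC the intracluster
# correlation coefficient. A sample size calculated for individual
# randomisation must be multiplied by DE to retain the same power.

def cluster_sample_size(n_individual: int, m: int, icc: float) -> int:
    """Inflate an individually randomised sample size for clustering."""
    design_effect = 1 + (m - 1) * icc
    return int(round(n_individual * design_effect))

# Illustration with assumed values: a trial needing 400 patients under
# individual randomisation, 20 patients per practice, ICC of 0.05.
print(cluster_sample_size(400, m=20, icc=0.05))  # 780 patients needed
```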
The NHS research and development programme on evaluating methods to promote the implementation of research and development is an important initiative that will contribute to our knowledge of the dissemination of information and the implementation of research findings.30 However, these research issues cut across national and cultural differences in the practice and financing of health care. Moreover, the scope of these issues is such that no one country’s health services research programme can examine them in a comprehensive way. This suggests that there are potential benefits of international collaboration and cooperation in research, as long as appropriate attention is paid to cultural factors that might influence the implementation process such as the beliefs and perceptions of the public, patients, healthcare professionals, and policymakers.
The results of primary research should be systematically reviewed to identify promising implementation techniques and areas where more research is required.3 Undertaking reviews in this area is difficult because of the complexity inherent in the interventions, the variability in the methods used, and the difficulty of generalising study findings across healthcare settings. The Cochrane Effective Practice and Organisation of Care Review Group is helping to meet the need for systematic reviews of current best evidence on the effects of continuing medical education, quality assurance, and other interventions that affect professional practice. A growing number of these reviews are being published and updated in the Cochrane Database of Systematic Reviews.4,31
Acknowledgments
This paper is based on a briefing paper prepared by the authors for the Advisory Group on the NHS research and development programme on evaluating methods to promote the implementation of research and development. We thank Nick Freemantle for his contribution to this paper.
Footnotes
Funding: This work was partly funded by the European Community funded Eur-Assess project. The Cochrane Effective Practice and Organisation of Care Review Group is funded by the Chief Scientist Office of the Scottish Office Home and Health Department; the NHS Welsh Office of Research and Development; the Northern Ireland Department of Health and Social Services; the research and development offices of the Anglia and Oxford, North Thames, North West, South and West, South Thames, Trent, and West Midlands regions; and by the Norwegian Research Council and Ministry of Health and Social Affairs in Norway. The Health Services Research Unit is funded by the Chief Scientist Office of the Scottish Office Home and Health Department. The views expressed are those of the authors and not necessarily the funding bodies.
Conflict of interest: None.
References
- 1. Eddy DM. Clinical policies and the quality of clinical practice. N Engl J Med 1982;307:343–347. doi:10.1056/NEJM198208053070604.
- 2. Grimshaw JM, Freemantle N, Langhorne P, Song F. Complexity and systematic reviews: report to the US Congress Office of Technology Assessment. Washington, DC: Office of Technology Assessment; 1995.
- 3. Mulrow CD. Rationale for systematic reviews. BMJ 1994;309:597–599. doi:10.1136/bmj.309.6954.597.
- 4. Bero L, Grilli R, Grimshaw JM, Harvey E, Oxman AD, eds. Effective professional practice and organisation of care module. In: Cochrane Database of Systematic Reviews. The Cochrane Library, Issue 4. Oxford: Update Software; 1997.
- 5. Implementing clinical guidelines: can guidelines be used to improve clinical practice? Effective Health Care 1994; No 8.
- 6. Grimshaw JM, Russell IT. Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations. Lancet 1993;342:1317–1322. doi:10.1016/0140-6736(93)92244-n.
- 7. Oxman AD, Guyatt GH. The science of reviewing research. Ann N Y Acad Sci 1993;703:123–131. doi:10.1111/j.1749-6632.1993.tb26342.x.
- 8. Oxman AD. Checklists for review articles. BMJ 1994;309:648–651. doi:10.1136/bmj.309.6955.648.
- 9. Lomas J. Words without action? The production, dissemination, and impact of consensus recommendations. Annu Rev Public Health 1991;12:41–65. doi:10.1146/annurev.pu.12.050191.000353.
- 10. Grilli R, Lomas J. Evaluating the message: the relationship between compliance rate and the subject of a practice guideline. Med Care 1994;32:202–213. doi:10.1097/00005650-199403000-00002.
- 11. Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to help health care professionals deliver services more effectively or efficiently. Can Med Assoc J 1995;153:1423–1431.
- 12. Beaudry JS. The effectiveness of continuing medical education: a quantitative synthesis. J Contin Educ Health Prof 1989;9:285–307.
- 13. Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance: a systematic review of the effect of continuing medical education strategies. JAMA 1995;274:700–705. doi:10.1001/jama.274.9.700.
- 14. Mugford M, Banfield P, O’Hanlon M. Effects of feedback of information on clinical practice: a review. BMJ 1991;303:398–402. doi:10.1136/bmj.303.6799.398.
- 15. Buntinx F, Winkens R, Grol R, Knottnerus JA. Influencing diagnostic and preventive performance in ambulatory care by feedback and reminders: a review. Fam Pract 1993;10:219–228. doi:10.1093/fampra/10.2.219.
- 16. Johnston ME, Langton KB, Haynes RB, Mathieu A. Effects of computer-based clinical decision support systems on clinician performance and patient outcome: a critical appraisal of research. Ann Intern Med 1994;120:135–142. doi:10.7326/0003-4819-120-2-199401150-00007.
- 17. Austin SM, Balas EA, Mitchell JA, Ewigman BG. Effect of physician reminders on preventive care: meta-analysis of randomized clinical trials. Proc Annu Symp Comput Appl Med Care 1994:121–124.
- 18. Wensing M, Grol R. Single and combined strategies for implementing changes in primary care: a literature review. Int J Qual Health Care 1994;6:115–132. doi:10.1093/intqhc/6.2.115.
- 19. Waddell DL. The effects of continuing education on nursing practice: a meta-analysis. J Contin Educ Nurs 1991;22:113–118. doi:10.3928/0022-0124-19910501-09.
- 20. Yano EM, Fink A, Hirsch SH, Robbins AS, Rubenstein LV. Helping practices reach primary care goals: lessons from the literature. Arch Intern Med 1995;155:1146–1156.
- 21. Soumerai SB, McLaughlin TJ, Avorn J. Improving drug prescribing in primary care: a critical analysis of the experimental literature. Milbank Q 1989;67:268–317.
- 22. Lomas J, Haynes RB. A taxonomy and critical review of tested strategies for the application of clinical practice recommendations: from “official” to “individual” clinical policy. Am J Prev Med 1987;4:77–94.
- 23. Gyorkos TW, Tannenbaum TN, Abrahamowicz M, Bédard L, Carsley J, Franco ED, et al. Evaluation of the effectiveness of immunization delivery methods. Can J Public Health 1994;85(suppl 1):14–30S.
- 24. Mandelblatt J, Kanetsky PA. Effectiveness of interventions to enhance physician screening for breast cancer. J Fam Pract 1995;40:162–171.
- 25. Silagy C, Lancaster T, Gray S, Fowler G. The effectiveness of training health professionals to provide smoking cessation interventions: systematic review of randomised controlled trials. Qual Health Care 1995;3:193–198. doi:10.1136/qshc.3.4.193.
- 26. Lomas J, Anderson GM, Domnick-Pierre K, Vayda E, Enkin MW, Hannah WJ. Do practice guidelines guide practice? The effect of a consensus statement on the practice of physicians. N Engl J Med 1989;321:1306–1311. doi:10.1056/NEJM198911093211906.
- 27. Whiting-O’Keefe QE, Henke C, Simborg DW. Choosing the correct unit of analysis in medical care experiments. Med Care 1984;22:1101–1114. doi:10.1097/00005650-198412000-00005.
- 28. Grol R. Implementing guidelines in general practice care. Qual Health Care 1992;1:184–191. doi:10.1136/qshc.1.3.184.
- 29. Donner A, Birkett N, Buck C. Randomisation by cluster: sample size requirements and analysis. Am J Epidemiol 1981;114:906–914. doi:10.1093/oxfordjournals.aje.a113261.
- 30. NHS Research and Development Programme. Methods to promote the implementation of research findings in the NHS: priorities for evaluation. Leeds: Department of Health; 1995.
- 31. Freemantle N, Grilli R, Grimshaw JM, Oxman A. Implementing the findings of medical research: the Cochrane Collaboration on effective professional practice. Qual Health Care 1995;4:45–47. doi:10.1136/qshc.4.1.45.