Various strategies have been evaluated for their ability to support the adoption of clinical evidence into everyday practice.1,2 There is increasing interest in interventions that are aimed at groups of healthcare staff and that promote organisational change. Observational studies of these types of interventions have produced encouraging findings,3,4 but the results of randomised controlled trials have sometimes been disappointing.5–7 These differences may be due to the methodological and practical difficulties of evaluating such interventions in randomised trials rather than to lack of efficacy of the interventions.8
We carried out an exploratory trial to examine the independent and combined effects of teaching evidence based medicine and facilitated change management on the implementation of cardiovascular disease guidelines in primary care (box). The trial was accompanied by a formative evaluation, drawing on information collected by the evidence based medicine tutor, the change management facilitator, and a qualitative researcher who observed workshops and meetings and conducted a series of semistructured interviews with study participants.14 Progress was reviewed at monthly steering group meetings, and the thesis of this paper emerged from the deliberations and discussions of this group.
Summary points
When designing trials of interventions to change professional practice in primary care, choices have to be made about selection of appropriate practices, development and adaptation of interventions, and experimental design
The different priorities of researchers, those developing the interventions, and those participating must be recognised when such choices are made
The best design options may be those that are able to reconcile the interests of research, development, and practice
Interventions requiring the participation of health professionals in organisational change require a high degree of motivation, and eligibility criteria should be developed and applied at recruitment
Interventions must be adapted as far as possible to the needs of participants without compromising theoretical assumptions
Experimental designs must enable active staff participation without distorting the interventions delivered
Issues arising
Selection of suitable practices
Choice of sampling frame—Our exploratory study was carried out in collaboration with the Medical Research Council General Practice Research Framework. Although we recognised advantages in working with a well supported and enthusiastic group of practices with a track record in collaborative research, the practices may be atypical.11 For example, some practices had participated in research on cardiovascular disease before and may already have adopted the findings of that research.15 We subsequently found a wide variation in performance across the cardiovascular disease management topics, but the organisational characteristics of the practices could still limit the generalisability of our conclusions.
Details of exploratory trial
Design—Randomised controlled trial with a factorial design.9 All practices were sent guidelines on five cardiovascular disease topics and were then allocated to evidence based medicine teaching, facilitated change management, both, or neither by using a restricted randomisation procedure.10 A schematic sketch of such an allocation is given after this box.
Participants—Eight of 25 eligible practices in the Medical Research Council General Practice Research Framework in North West Thames.11
Interventions—The evidence based medicine intervention was a one day practice based workshop, covering appraisal of trials, systematic reviews, and guidelines.12 The change management programme comprised a one day workshop introducing principles of continuous quality improvement (change management, multiprofessional working, problem solving, and analysis of the process of care) followed by a series of visits from a facilitator trained in the methods.13
Study outcomes—Prescribing indicators, reflecting the implementation of the cardiovascular disease guidelines, and qualitative data on changes in professional practice
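To make the allocation procedure concrete, the following sketch shows one simple form of restricted randomisation for a 2×2 factorial design, in which practices are allocated so that each of the four arms receives the same number of practices. It is a hypothetical illustration only, written in Python; it does not reproduce the procedure actually used in the trial, and the practice identifiers are placeholders.

```python
import random

# Illustrative sketch only: restricted randomisation of eight practices to
# the four arms of a 2 x 2 factorial design (evidence based medicine
# teaching x facilitated change management), with the restriction that
# each arm receives the same number of practices.  Practice identifiers
# are placeholders, not the practices in the trial.
practices = [f"practice {i}" for i in range(1, 9)]

arms = [
    {"ebm": True, "change_management": True},
    {"ebm": True, "change_management": False},
    {"ebm": False, "change_management": True},
    {"ebm": False, "change_management": False},
]

# Two slots per arm, shuffled so that the assignment of practices to arms
# is random but the arm sizes remain balanced.
slots = arms * (len(practices) // len(arms))
random.shuffle(slots)

for practice, arm in zip(practices, slots):
    print(practice, arm)
```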
Participating practices—About 30% of practices approached agreed to participate. Most practices expected benefits in terms of implementing guidelines or developing new skills, or both. However, the interested practices were not always the ones that the tutor and the facilitator thought would benefit from the interventions offered. In particular, the facilitator had reservations about working with more hierarchical practices, where the change management approach might be unacceptable.4 An additional drawback was that some practices came into the study because they were interested in one intervention but were randomised to the other, and there was some evidence that this might have affected the degree of engagement with the interventions (equivalent to patient preference effects in therapeutic trials).16
Consent to participate—Informed consent was obtained from lead practitioners on behalf of their general practice partners. This was sufficient for ethics committees, but the extent of the internal consultation with partners varied considerably. The dissemination of information between general practitioners and staff also varied, and this affected the degree to which practice staff became engaged with the interventions being tested. The General Practice Research Framework has recently amended its procedures and now requires signatures from all partners before a practice is admitted to a research project, but there is an argument for seeking consent from an even wider range of staff when interventions are directed at practice teams.
Development and adaptation of interventions
Theoretical credibility—Our teaching of evidence based medicine was based on workshops developed at McMaster University,12 and the change management programme drew on principles of continuous quality improvement as taught at the Institute of Health Improvement.13 This strategy provided external validity for the interventions delivered, but some primary care staff felt that the change management workshop was insufficiently grounded in the language, concerns, and perspectives of primary care.
Acceptability—The one day practice based workshop was a popular format with primary care staff; participatory learning approaches were preferred over more didactic teaching, and practical tasks over presentations of theory. These positive remarks have to be balanced against concerns on the part of the tutor and facilitator that a single day was insufficient to deliver the relevant material. The facilitator follow up was an attractive intervention model for practices, as the low intensity and long time span meant it was easily integrated with other practice activities. However, the duration of the intervention was truncated by the short time frame for research funding.
Replicability and transferability—New materials were developed to support the evidence based medicine workshop, the change management workshop, and the facilitation activities. A single tutor ran the evidence based medicine sessions, and a single facilitator ran the change management programme, with the details of interventions adapted to take account of the cultural and organisational attributes of individual practices. Such tailoring improves the transferability of interventions but reduces their replicability. Using multiple tutors or facilitators would have similar implications. A transferable intervention could be tested in a wider range of practices, yielding more generalisable results, but it could never be applied in a service setting if its replicability was compromised.
Evaluability—The pilot study was designed to evaluate the effect of the change management programme on specific guideline related outcomes, while adopting a qualitative approach for assessing more general changes in the way that primary care teams worked. The focus on implementation of guidelines simplified the outcome measures for the trial, but the facilitator was always more committed to capturing information on changes in organisational effectiveness. Various questionnaires for measuring organisational effectiveness have been identified,17,18 but their suitability and validity for use in a trial still need to be assessed.
Choice of experimental design
Multiple interventions—The decision to adopt a factorial design had implications for the time scale of the trial, the delivery of the interventions, and the demands on the practices. In particular, delays were introduced because practices allocated to both interventions could not accommodate two one day workshops in close succession. Practice based workshops were convenient for participants but labour intensive for the research staff. They enabled us to avoid contamination between practices allocated to different interventions, but interactions with practices were sometimes compromised as the tutor was asked not to advise on implementation and the facilitator was asked not to comment on research evidence.
Multiple guidelines—The provision of a choice of guidelines was valued by practices and felt to be consistent with a real life situation. The approach also allowed us to learn more about presentation of materials and preferred topics. Ultimately, the guidelines that practices selected varied considerably, and the principle of providing a choice of guidelines to practitioners was difficult to reconcile with the establishment of common, meaningful measures of effect across practices. In addition, there is evidence that clinicians may derive greater benefit from directing attention towards topics that they rank lower in terms of interest than towards those that they rank higher.19
Reconciling interests of research, development, and practice
We had to make various choices in designing our study: which practices to select for participation, how to develop and adapt the interventions, and which experimental design to use. Tensions arose because people whose principal concern was with the scientific rigour of the investigation, those whose main focus was the development and adaptation of the interventions (as a theoretically based model of behaviour change or as a pragmatic service intervention20), and those who were participants in the research had different views on what was important. The table is an analytical framework that summarises the desirable characteristics of trial design from the perspective of the three constituencies. The issues arising from these choices are discussed in detail above.
The interests of research, development, and practice would be addressed by a trial design that was able to satisfy the following conditions:
A practice selection strategy which reconciles issues around the acceptability and appropriateness of interventions to practices and the generation of externally valid, generalisable trial results
Interventions that remain true to their theoretical foundations, are of sufficient focus, intensity, or duration to have an impact, but are adapted to the needs and conditions of participating practices. The interventions also need to be replicable, transferable, and evaluable
A statistically efficient experimental design that can ensure scientific rigour without compromising the ability of staff to participate and without damaging the integrity of the interventions being tested.
Discussion
Interventions requiring the active participation of health professionals in organisational change are likely to require a high degree of motivation from most of the practice team if they are to have an impact. The willingness of practices to participate in trials of professional behaviour change will depend on the interests of members and the organisational characteristics of practices. A trial which focuses on practices expected to derive substantial benefits (an explanatory trial)21 would ensure that practices and interventions were well matched but might not provide appropriate information on the application of such interventions in service settings. An alternative approach would be to recruit practices that were typical of those that might come forward for interventions offered in a service context (pragmatic trial).21 The designs converge as recruitment becomes increasingly selective. In a practice preference design,20 a randomised controlled trial of practices that have no preferences for particular interventions is nested in an observational study in which practices with strong preferences receive the intervention of their choice. This design could meet the concerns of research, development, and practice but is complex and expensive and requires more effort from the researchers.
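A minimal sketch may help to clarify the allocation logic of such a practice preference design: practices with a strong preference receive their chosen intervention and are followed observationally, while indifferent practices enter the nested randomised comparison. The sketch below is hypothetical; the class, field names, and intervention labels are illustrative and are not drawn from the trial protocol.

```python
import random
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical sketch of allocation in a practice preference design:
# practices with a stated preference receive the intervention they prefer
# (observational cohort); practices without a preference are randomised
# in the nested trial.  Names and labels are illustrative only.

@dataclass
class Practice:
    name: str
    preference: Optional[str]  # "ebm", "change management", or None

def allocate(practice: Practice) -> Tuple[str, str]:
    """Return (cohort, intervention) for a practice."""
    if practice.preference is not None:
        # Preference honoured; the practice is followed observationally.
        return "observational", practice.preference
    # No preference: enter the nested randomised controlled trial.
    return "randomised", random.choice(["ebm", "change management"])

cohort = [
    Practice("practice A", "ebm"),
    Practice("practice B", None),
    Practice("practice C", "change management"),
]
for p in cohort:
    print(p.name, allocate(p))
```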
In the interests of theoretical credibility, we reproduced established models for teaching the principles of evidence based medicine12 and continuous quality improvement13 while allowing practitioners to explore specific applications. For interventions delivered at this level, measures of organisational effectiveness might be more appropriate than disease specific measures based on implementation of guidelines.17,18 Also, the approach would need to be directed at generating substantial change at practice level for measurable effects to be seen. The linking of theory to a specific task would be acceptable to practices, and a focus on guidelines could be associated with measurable outcomes. This would simplify measurement issues in a trial, but some commentators might express concerns about the application of continuous quality improvement methods to tasks that the practices had not necessarily identified as priorities.3,4
The factorial design is a powerful approach to studying two interventions simultaneously, though the execution of this design may be demanding for researchers and study participants. It may not be feasible for practices to implement a group of guidelines simultaneously, but a sequence of guidelines could be presented over time (in random sequence to balance order effects). This split plot design9 is likely to be acceptable to trial participants, to those whose concern is to maintain the integrity of the interventions, and to researchers provided that funding could be found for the extended trial period. Alternatively, a randomised incomplete block design9 could be used to assess the efficacy of a single intervention across two or more guidelines, but as every practice is subject to intervention the design could not provide information on the generic effects of interventions on professional behaviour.
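One way of generating the random sequences mentioned above is to draw each practice's order of guidelines from a Latin square, so that guideline order is balanced across practices. The sketch below is a hypothetical illustration of that idea only; the guideline labels and practice identifiers are placeholders, and the method is not drawn from the trial protocol.

```python
import random

# Illustrative sketch: draw each practice's guideline sequence from a
# cyclic Latin square so that guideline order is balanced across practices
# (exactly balanced when the number of practices is a multiple of the
# number of guidelines).  Labels are placeholders only.
guidelines = ["guideline A", "guideline B", "guideline C",
              "guideline D", "guideline E"]
n = len(guidelines)

# Row i of the cyclic Latin square is the guideline list rotated by i,
# so every guideline appears once in every position across the rows.
latin_square = [[guidelines[(i + j) % n] for j in range(n)] for i in range(n)]
random.shuffle(latin_square)  # randomise which rows are assigned first

practices = [f"practice {i}" for i in range(1, 9)]
for index, practice in enumerate(practices):
    print(practice, "->", latin_square[index % n])
```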
Conclusions
Clinicians and those involved in service development sometimes dismiss academic research because of an “ivory tower” approach that pays too little attention to issues around service delivery.22 The nature of negotiated access to research subjects in primary care settings can have a direct effect not only on participation rates in research but also on the quality of the research data.23 If a trial is to be executed successfully, and its findings are to be applicable in a service setting, it is important to identify a trial design that can best reconcile the interests of research, development, and practice. Our analytical framework provides an approach by which it is possible to explore how particular characteristics of trial design appear from each perspective and thereby to assess the most satisfactory design options. The approach cannot ensure that trial design will be straightforward and problem free, but early consideration of the perspectives of research, development, and practice might help to prevent fundamental problems arising later.
Table. Desirable characteristics of trial design, from the perspectives of the three constituencies

| Constituencies | Selection of suitable practices | Development and adaptation of interventions | Choice of experimental design |
|---|---|---|---|
| Research | Probabilistic sampling of practices from defined sampling frame; practice staff who are interested and well motivated; evidence of low level of implementation for topic targeted | Intervention that is replicable, transferable, and manageable to deliver; focused outcome for which there are valid and repeatable measures | Design can assess more than one intervention or more than one guideline simultaneously; random allocation of practices to interventions or guidelines |
| Development: theoretical perspective | Volunteer practices admitted on grounds of interest and enthusiasm; evidence of low level of implementation for topic targeted | Intervention that remains true to underlying principles and theory | Design suited to testing hypotheses about underlying theory |
| Development: service perspective | Practices admitted on grounds of similarity with practices in which the intervention will eventually be applied; practices pre-assessed to ensure that they can benefit from the interventions; evidence of a low level of implementation for topic targeted | Intervention that is applicable, adaptable, deliverable, and affordable; intervention that examines topic or change that is consistent with service priorities | Design that tests intervention as it will eventually be delivered; choice of guidelines relevant to policy agenda |
| Practice | Self selection according to perceived benefits and costs | Intervention that is applicable to practice needs and whose delivery is in accordance with constraints on practice time and resources | Design that minimises disruption to practice; design that allows choice of guidelines |
Acknowledgments
The study was carried out in collaboration with the MRC General Practice Research Framework, and we are grateful to participating practice staff and Dr M Vickers.
Footnotes
Funding: NHS Research and Development Implementation Methods Programme.
Competing interests: None declared.
References
- 1. Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. Can Med Assoc J. 1995;153:1423–1431.
- 2. Effective Health Care. Getting evidence into practice. York: University of York; 1999.
- 3. Lawrence M, Packwood T. Adapting total quality management for general practice: evaluation of a programme. Qual Health Care. 1996;5:151–158. doi: 10.1136/qshc.5.3.151.
- 4. Hearnshaw H, Reddish S, Carlyle D, Baker R, Robertson N. Introducing a quality improvement programme to primary healthcare teams. Qual Health Care. 1998;7:200–208. doi: 10.1136/qshc.7.4.200.
- 5. Wyatt JC, Paterson-Brown S, Johanson R, Altman DG, Bradburn MJ, Fisk NM. Randomised trial of educational visits to enhance use of systematic reviews in 25 obstetric units. BMJ. 1998;317:1041–1046. doi: 10.1136/bmj.317.7165.1041.
- 6. Solberg LI, Kottke TE, Brekke ML. Will primary care clinics organise themselves to improve the delivery of preventive services? A randomised controlled trial. Preventive Med. 1998;27:623–631. doi: 10.1006/pmed.1998.0337.
- 7. Goldberg HI, Wagner EH, Fihn SD, Martin DP, Horowitz CR, Christensen DB, et al. A randomized controlled trial of CQI teams and academic detailing: can they alter compliance with guidelines? Jt Comm J Qual Improv. 1998;24:130–142. doi: 10.1016/s1070-3241(16)30367-4.
- 8. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what will it take to accelerate progress? Milbank Quarterly. 1998;76:593–624. doi: 10.1111/1468-0009.00107.
- 9. Cochran WG, Cox GM. Experimental designs. New York: Wiley; 1957.
- 10. Altman DG. Practical statistics for medical research. London: Chapman and Hall; 1991.
- 11. Medical Research Council General Practice Research Framework. A network of general practices throughout the UK managed by the MRC Epidemiology and Medical Care Unit. London: EMCU, Wolfson Institute of Preventive Medicine; 1997.
- 12. Neufeld VR, Woodward CA, McLeod SM. The McMaster MD programme: a case study in renewal in medical education. Acad Med. 1989;64:423–432. doi: 10.1097/00001888-198908000-00001.
- 13. Berwick D. A primer on the improvement of systems. BMJ. 1996;312:619–622. doi: 10.1136/bmj.312.7031.619.
- 14. Tomlin Z, Humphrey C, Rogers S. General practitioners' perceptions of effective health care. BMJ. 1999;318:1532–1535. doi: 10.1136/bmj.318.7197.1532.
- 15. Stephens R, Gibson D. The impact of clinical trials on the treatment of lung cancer. Clin Oncol. 1993;5:211–219. doi: 10.1016/s0936-6555(05)80231-6.
- 16. Brewin CR, Bradley C. Patient preferences and randomised clinical trials. BMJ. 1989;299:313–315. doi: 10.1136/bmj.299.6694.313.
- 17. Anderson NR, West MA. The team climate inventory: manual and users guide. Windsor: Assessment Services for Employment/National Foundation for Educational Research-Nelson Press; 1994.
- 18. Pritchard P, Pritchard J. Teamwork for primary and shared care. Oxford: Oxford University Press; 1994.
- 19. Sibley J, Sackett DL, Neufeld V, Gerrard B, Rudnick KV, Fraser W. A randomized trial of continuing medical education. N Engl J Med. 1982;306:511–515. doi: 10.1056/NEJM198203043060904.
- 20. Bradley F, Wiles R, Kinmonth A-L, Mant D, Gantley M. Development and evaluation of complex interventions in health services research: case study of the Southampton heart integrated care project. BMJ. 1999;318:711–715. doi: 10.1136/bmj.318.7185.711.
- 21. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutic trials. J Chronic Dis. 1967;20:637–648. doi: 10.1016/0021-9681(67)90041-0.
- 22. Dawson S. Inhabiting different worlds: how can research relate to practice? Qual Health Care. 1997;6:177–178. doi: 10.1136/qshc.6.4.177.
- 23. Murphy E, Spiegal N, Kinmonth AL. “Will you help me with my research?” Gaining access to primary care settings and subjects. Br J Gen Pract. 1992;42:162–165.