BMJ. 2000 Jan 22;320(7229):224–230. doi: 10.1136/bmj.320.7229.224

Evaluation of the effectiveness of an educational intervention for general practitioners in adolescent health care: randomised controlled trial

L A Sanci a, C M M Coffey a, F C M Veit a, M Carr-Gregg a, G C Patton a, N Day b, G Bowes a
PMCID: PMC27271  PMID: 10642233

Abstract

Objective

To evaluate the effectiveness of an educational intervention in adolescent health designed for general practitioners in accordance with evidence based practice in continuing medical education.

Design

Randomised controlled trial with baseline testing and follow up at seven and 13 months.

Setting

Local communities in metropolitan Melbourne, Australia.

Participants

108 self selected general practitioners.

Intervention

A multifaceted educational programme on the principles of adolescent health care, delivered for 2.5 hours a week over six weeks and followed six weeks later by a two hour session of case discussion and debriefing.

Outcome measures

Objective ratings of consultations with standardised adolescent patients recorded on videotape. Questionnaires completed by the general practitioners were used to measure their knowledge, skill, and self perceived competency, satisfaction with the programme, and self reported change in practice.

Results

103 of 108 (95%) doctors completed all phases of the intervention and evaluation protocol. The intervention group showed significantly greater improvements in all outcomes than the control group at the seven month follow up except for the rapport and satisfaction rating by the standardised patients. 104 (96%) participants found the programme appropriate and relevant. At the 13 month follow up most improvements were sustained, the confidentiality rating by the standardised patients decreased slightly, and the objective assessment of competence further improved. 106 (98%) participants reported a change in practice attributable to the intervention.

Conclusions

General practitioners were willing to complete continuing medical education in adolescent health care and its evaluation. The design of the intervention using evidence based educational strategies proved an effective and quick way to achieve sustainable and large improvements in knowledge, skill, and self perceived competency.

Key messages

  • Firm evidence shows that deficits in doctors' confidence, knowledge, and skills in adolescent health contribute to barriers in delivering health care to youth

  • Evidence based strategies in continuing medical education were used in the design of a training programme to address the needs of doctors and youth

  • The programme covered adolescent development, consultation and communication skills, health risk screening, health promotion, risk assessment of depression and suicide, and issues in management of psychosocial health risk including interdisciplinary approaches to care

  • Most interested doctors attended and completed the 15 hour training programme over six weeks and the evaluation protocol covering 13 months

  • Doctors completing the training showed substantially greater gains in knowledge, clinical skills, and self perceived competency than the controls; these gains were sustained at 12 months, and the objective measure of clinical competence in conducting a psychosocial interview improved further

Introduction

The patterns of health need in youth have changed noticeably over the past three decades. Studies in the United Kingdom, North America, and Australia have shown that young people experience barriers to health services.1-5 With the increase in a range of youth health problems, such as depression, eating disorders, drug and alcohol use, unplanned pregnancy, chronic illness, and suicide, there is a need to improve the accessibility and quality of health services to youth.3,6

In the Australian healthcare system general practitioners provide the most accessible primary health care for adolescents.7 Yet Veit et al surveyed 1000 Victorian general practitioners and found that 80% reported inadequate undergraduate training in consultation skills and psychosocial diseases in adolescents, and 87% wanted continuing medical education in these areas.4,8 These findings agreed with comparable overseas studies.9-11

Evidence based strategies in helping doctors learn and change practice are at the forefront of the design of continuing medical education.12-14 In response to the identified gap in training, an evidence based educational intervention was designed to improve the knowledge, skill, and self perceived competency of general practitioners in adolescent health. We conducted a randomised controlled trial to evaluate the intervention, with follow up at seven and 13 months after the baseline assessment.

Participants and methods

The divisions of general practice are regional organisations that survey the needs of, and provide education for, general practitioners in their zone. There are 15 divisions in metropolitan Melbourne. Advertisements inviting participation in our trial were placed in 14 of the 15 divisional and state college newsletters and mailed individually to all division members. The course was free, and continuing medical education points were available. Respondents were sent details of the intervention and the evaluation protocol and asked to return a signed consent form. Divisions and doctors were excluded if they had previously received a course in adolescent health from this institution.

Randomisation

Consenting doctors were grouped into eight geographical clusters by practice location to minimise contamination and to maximise efficiency of the delivery of the intervention. Clusters (classes) of similar size were randomised to intervention or control by an independent researcher.
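
For illustration, a minimal sketch of such cluster level allocation follows; the class labels, seed, and four-per-arm split are assumptions for the example, as the paper does not publish the procedure used by the independent researcher.

```python
# A minimal sketch of cluster level randomisation (illustrative only).
# Whole classes, not individual doctors, are the unit of allocation.
import random

def randomise_clusters(cluster_ids, n_intervention, seed=None):
    """Allocate whole clusters to intervention or control at random."""
    rng = random.Random(seed)
    shuffled = list(cluster_ids)
    rng.shuffle(shuffled)
    chosen = set(shuffled[:n_intervention])
    return {c: "intervention" if c in chosen else "control"
            for c in cluster_ids}

# Eight geographical classes, four allocated to each arm (assumed split).
print(randomise_clusters(range(1, 9), n_intervention=4, seed=2000))
```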

Intervention

The box details the objectives, content, and instructional design of the multifaceted intervention. A panel comprising young people, general practitioners, college education and quality assurance staff, adolescent health experts, and a state youth and family government officer gave advice on the design.15 The curriculum included evidence based primary and secondary educational strategies such as role play with feedback, modelling of practice with opinion leaders, and the use of checklists.12,16 The six week programme was delivered concurrently to the intervention classes by LS, starting one month after baseline testing (see figure on website).

Goals, content, and instructional design of intervention in principles of adolescent health care for general practitioners

Intervention goals

  • To improve general practitioners' knowledge, skill, and attitudes in the generic concepts of adolescent health to effectively gain rapport with young people, screen them for health risk, and provide health promotion and appropriate management plans

  • To increase awareness of the barriers their practices may pose for youth access and how these may be overcome

  • To understand how other services can contribute to the management of young people and how to access these in their locality

Intervention content (weekly topics)

  • Understanding adolescent development, concerns, and current morbidities, the nature of general practice, and yourself

  • Locating other youth health services and understanding how they work, and medicolegal and ethical issues in dealing with minors

  • Communication and consultation skills and health risk screening

  • Risk assessment of depression and suicide

  • Detection and initial management of eating disorders

Instructional design

Needs analysis

  • From previous surveys and informally at start of workshops

Primary educational strategy

Workshops for 2.5 hours weekly for six weeks

  • Debriefing from previous session

  • Brief didactic overviews

  • Group problem based activities and discussion

  • Modelling of interview skills by opinion leaders on instructional video

  • Role play and feedback practice sessions with adolescent actors

  • Activities set to practise in intervening week

  • Individual feedback on precourse evaluation video

Course book

  • Goals, objectives, course requirements, and notes

  • Suggested further reading

  • Class or home activities with rationale for each

Resource book

  • Reading material expanding on workshop sessions

Practice reinforcing and enabling strategies

  • Adolescent assessment chart for patient audit

  • Logbook for reflection on experience with the patients audited

  • Self assembled list of adolescent health services in local community

  • Availability of tutor (LS) by phone for professional support between workshops

  • Refresher session for group discussion of experiences in practice (six weeks after course)

Measures

Table 1 summarises the instruments used in the evaluation. Parallel strategies of objective and self reported ratings of knowledge, skill, and competency were used to ensure that findings were consistent.17,18 Participants' satisfaction with the course and their self reported change in practice were evaluated at 13 months. Any other training or education obtained in adolescent health or related areas was noted.

Table 1.

Evaluation measures, their content, inter item reliability, and intraclass correlation within randomisation groups estimated at baseline

Evaluation measure                               No of items  Cronbach α  Intraclass correlation
Skills
 Patients' rating:
  Satisfaction and rapport                        7           0.95        0.01
  Confidentiality discussion                      1           —           0.07
 Observer's rating:
  Competency*                                    13           0.95        0.05
  Content of risk assessment*                    22           —           0.09
Self perceived competency
 Comfort:
  Clinical process                               11           0.88        <0.01
  Substantive issues                             10           0.93        0.01
 Self perceived knowledge and skill:
  Clinical process                               11           0.90        0.04
  Substantive issues                             10           0.94        0.05
 General practitioner's self score on interview   6           0.93        <0.01
Knowledge
 Self completion knowledge test                  41           —           <0.01
*Assessments from viewing taped consultations.
Likert scales unless stated otherwise.

Clinical skills

Seven female drama students were trained to simulate a depressed 15 year old exhibiting health risk behaviour. Case details and performances were standardised according to published protocols19-21 and varied for each testing period. Doctors were given 30 minutes to interview the patient in a consulting room at this institution. An unattended camera recorded the consultation on videotape.

The standardised patients were trained in the use of a validated rating chart21 assessing their own rapport and satisfaction and discussion about confidentiality. These were completed after the interview while still in role. They were blind to the intervention status of the doctors, and no doctor had the same patient for successive interviews.

Two independent observers, blind to participants' status, assessed the taped consultations in the three testing periods. A doctor in adolescent health coded three items in the scale relating to medical decision making. A trained non-medical researcher assessed all other items. The chart was developed from two validated instruments for assessment of adolescent consultations21 and general practice consultations.22,23 Marks for both competency and content of the health risk assessment were summarised into a percentage score. The same observers were used in all three testing periods.

Self perceived competency

Two questionnaires were developed for the doctors to rate their comfort and their knowledge or skill with process issues (the clinical approach to adolescents and their families) and with substantive issues (depression, suicide risk assessment, alcohol and drug issues, eating disorders, sexual history taking, and sexual abuse). Doctors also rated their consultation with the standardised patient on a validated chart,21 itemising their self perceived knowledge and skill.

Knowledge

Knowledge was assessed with short answer and multiple choice items developed to reflect the workshop topics. The items were pretested and refined for contextual and content validity. The course tutor, blind to group status, awarded a summary score.

Analysis

Statistical analysis was performed with Stata (Stata Corporation, College Station, TX), with the individual as the unit of analysis. Factor analysis with varimax rotation was used to identify two domains within the comfort and self perceived knowledge or skill items: process and substantive issues. The internal consistency of all scales was estimated with Cronbach's α. Reproducibility within and between raters was estimated with one way analysis of variance, as was the intraclass correlation of baseline scores within each teaching group.
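
As an illustration of these scale statistics, the sketch below computes Cronbach's α and a one way ANOVA estimate of the intraclass correlation in Python rather than Stata; the data layout and variable names are assumptions for the example, not taken from the study dataset.

```python
# Illustrative re-implementation of the scale statistics described above.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / total_var)

def icc_oneway(scores: pd.Series, groups: pd.Series) -> float:
    """One way ANOVA estimate: ICC = (MSB - MSW) / (MSB + (n - 1) * MSW),
    using the mean cluster size n as an approximation for unequal groups."""
    g = scores.groupby(groups).agg(["mean", "count", "var"])
    k, n_total = len(g), len(scores)
    grand_mean = scores.mean()
    ms_between = (g["count"] * (g["mean"] - grand_mean) ** 2).sum() / (k - 1)
    ms_within = (g["var"] * (g["count"] - 1)).sum() / (n_total - k)
    n_bar = g["count"].mean()
    return (ms_between - ms_within) / (ms_between + (n_bar - 1) * ms_within)

# Synthetic check: 104 doctors in 8 classes, a 7 item Likert scale
# (random data, so alpha and the ICC will both be near zero).
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(104, 7)))
classes = pd.Series(rng.integers(0, 8, size=104))
print(round(cronbach_alpha(items), 2),
      round(icc_oneway(items.sum(axis=1), classes), 2))
```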

The effect of this intervention was evaluated by regression of gain scores (score at seven month follow up minus baseline score) on the intervention status, with adjustment for baseline and potential confounding variables. Robust standard errors were used to allow for randomisation by cluster. The sustainability of outcome changes in the intervention group between the assessments at seven months and 13 months was evaluated with paired t tests.
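
A sketch of the corresponding analysis in Python with statsmodels is given below (the published analysis used Stata); the toy data, column names, and the reduced covariate set are assumptions for illustration.

```python
# Illustrative analysis pipeline for the gain score regression and the
# paired t tests; data, names, and covariates are assumed, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n = 104
df = pd.DataFrame({
    "class_id": rng.integers(0, 8, n),      # eight teaching clusters
    "baseline": rng.normal(50, 10, n),
    "gender": rng.integers(0, 2, n),
})
df["intervention"] = (df["class_id"] < 4).astype(int)  # cluster level arm
df["score_7m"] = df["baseline"] + 15 * df["intervention"] + rng.normal(0, 5, n)
df["score_13m"] = df["score_7m"] + rng.normal(0, 5, n)
df["gain"] = df["score_7m"] - df["baseline"]           # gain score

# Gain score regressed on intervention status, adjusted for baseline (the
# full model also included age, language, practice type, hours, and exams),
# with robust standard errors clustered on the unit of randomisation.
fit = smf.ols("gain ~ intervention + baseline + gender", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["class_id"]})
print(fit.params["intervention"], fit.pvalues["intervention"])

# Sustainability in the intervention arm: paired t test, 7 v 13 months.
arm = df[df["intervention"] == 1]
print(stats.ttest_rel(arm["score_7m"], arm["score_13m"]))
```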

Results

Participants

Newsletters and mailed advertisements to 2415 general practitioners resulted in 264 expressions of interest. Overall, 139 doctors gave written consent to be randomised. Attrition after notification of study status left 55 (73%) doctors in the intervention group and 53 (83%) in the control group, with an average of 13.5 (range 12 to 15) doctors in each class.

The age and country of graduation of the doctors in this study were similar to the national workforce of general practitioners.24,25 Female doctors were overrepresented (50% in this study versus 19% and 33% in the other reports). Table 2 describes the randomisation groups. There was imbalance in age, gender, languages other than English spoken, average weekly hours of consulting, types of practice, and college examinations.

Table 2.

Demographic characteristics of general practitioners by intervention group. Values are numbers (percentages)

Characteristic Intervention group (n=54) Control group (n=51)
Male 24 (44) 28 (55)
Age (years):
 25-34 13 (24) 10 (20)
 35-44 20 (37) 16 (31)
 45-54 18 (33) 15 (29)
 ⩾55 3 (6) 10 (20)
Language other than English spoken 14 (26) 24 (47)
Average hours consulting/week:
 <20 17 (31) 20 (20)
 20-40 29 (54) 22 (44)
 41-60  8 (15) 18 (36)
Patients seen in average week:
 <50 14 (26)  9 (18)
 51-100 16 (28) 13 (26)
 101-150 18 (33) 16 (32)
 >150  7 (13) 12 (24)
Adolescents as % of patients seen per week:
 <10 24 (45) 21 (41)
 10-30 22 (40) 23 (43)
 >30  8 (15)  8 (16)
Age (years) of oldest child:
 No children 3 (6)  9 (18)
 ⩽10 19 (35) 12 (24)
 11-20 21 (39) 10 (20)
 >20 11 (20) 19 (38)
Vocational registration 51 (94) 46 (90)
College exams taken 25 (46) 15 (29)
Previous training in adolescent health 15 (28) 15 (29)
Type of practice:
 Solo 4 (7) 13 (25)
 Group 43 (80) 24 (47)
 Community health centre 0 4 (8)
 Extended hours 0 2 (4)
 Other  7 (13)  8 (16)
Appointments/hour:
 ⩽4 32 (59) 33 (65)
 5-6  9 (17) 4 (8)
 ⩾6  8 (15)  8 (16)
 Other booking systems 5 (9)  6 (12)

Compliance

One doctor dropped out of the intervention group. Overall, 44 doctors attended all six tutorials, eight missed one, and two missed three. In total, 103 of the 108 (95%) participants at baseline completed the entire evaluation protocol (see website).

Measures

The evaluation scales showed satisfactory internal consistency and low association with class membership (table 1). Satisfactory interrater agreement was achieved on the competency scale (n=70, r=0.70). The intrarater consistency for both medical and non-medical raters was also satisfactory (n=20, r=0.80 and 0.91 respectively).

Effect of the intervention

Table 3 describes the baseline measures and the effect of the intervention at the seven month follow up. All analyses were adjusted for age, gender, languages other than English, average weekly hours of consulting, practice type, and college examinations. Doctors who reported obtaining education in related areas during follow up were identified (34 of 51 (67%) in the control group; 22 of 54 (41%) in the intervention group). The difference analysis was also adjusted for this extraneous training and for baseline score, although the extraneous training did not affect any outcome. The study groups were similar on all measures at baseline. The intervention group showed significantly greater improvements than the control group at the seven month follow up in all outcomes except the rapport and satisfaction rating by the standardised patients.

Table 3.

Multiple regression analyses of baseline and difference in scores on continuous outcome measures evaluating success of educational intervention at seven month follow up. Models include gender, age group, language other than English, type of practice, average hours worked per week, and college exams taken. Difference scores are also adjusted for baseline score and training obtained from elsewhere over 7 month period. Robust standard errors allowed for cluster randomisation. All scores out of 100

Scores  No*  Baseline, mean (95% CI)  Difference at 7 month follow up, mean (95% CI)  Effect size  P value
Skills
Standardised patients' rapport and satisfaction:
 Control 50 67.9 (61.4 to 74.5) −0.5 (−6.1 to 5.0) −0.02 0.12
 Intervention 54 67.9 (64.9 to 70.9) 6.0 (2.6 to 9.5) 0.54
Standardised patients' confidentiality:
 Control 50 35.2 (29.3 to 41.1)   4.0 (−10.3 to 18.3) 0.19 <0.01
 Intervention 54 42.2 (31.0 to 53.4) 53.5 (49.3 to 57.8) 1.28
Observer competence:
 Control 50 51.8 (45.9 to 57.6) 2.6 (−3.0 to 8.1) 0.12 0.01
 Intervention 54 48.8 (46.2 to 51.4) 15.3 (11.1 to 19.5) 1.55
Observer risk assessment:
 Control 50 53.3 (49.4 to 57.2) 0.5 (−3.0 to 4.1) 0.04 0.03
 Intervention 53 50.7 (44.2 to 57.2) 9.9 (5.8 to 14.0) 0.41
Self perceived competency
Comfort (process):
 Control 49 71.1 (66.4 to 75.8) 0.2 (−3.5 to 4.0) 0.01 0.03
 Intervention 54 71.8 (69.7 to 73.9) 7.1 (4.7 to 9.4) 0.89
Comfort (substantive):
 Control 50 58.1 (52.3 to 63.9) 0.3 (−5.1 to 5.6) 0.01 <0.01
 Intervention 54 60.5 (56.1 to 64.8) 15.8 (13.8 to 17.8) 0.97
Knowledge and skill (process):
 Control 50 65.9 (60.4 to 71.5) 0.7 (−4.0 to 5.3) 0.03 <0.01
 Intervention 53 66.3 (63.6 to 69.1) 15.6 (12.1 to 19.2) 1.54
Knowledge and skill (substantive):
 Control 50 52.1 (44.5 to 59.7) 2.8 (−2.0 to 7.6) 0.10 <0.01
 Intervention 54 57.5 (53.8 to 61.2) 20.6 (18.2 to 22.9) 1.50
Doctors' self rating on taped consultation:
 Control 49 56.6 (52.7 to 60.5) 3.1 (0.6 to 5.6) 0.22 <0.01
 Intervention 54 56.9 (55.7 to 58.1) 17.8 (15.9 to 19.7) 4.01
Knowledge test
Control 49 33.3 (31.6 to 35.0) 3.1 (0.6 to 5.6) 0.51 <0.01
Intervention 54 32.8 (31.6 to 34.0) 14.6 (13.0 to 16.2) 3.31
*Variations due to missing values in rating forms of some participants.

The contextual validity and applicability of the course were assessed by 48 of the 53 doctors and rated positively by 46 (96%).

Follow up of the intervention group at 13 months

The intervention effect was sustained in most measures and improved further in the independent rater's assessment of competence (table 4). The crude rating of the confidentiality discussion by the standardised patients deteriorated at the 13 month assessment but remained significantly greater than at baseline. Overall, 98% of the participants reported a change in practice, which they attributed to the intervention.

Table 4.

Evaluation of change in unadjusted percentage scores for intervention group (n=54) from baseline to seven month follow up and from 7 month to 13 month follow up using paired t tests. Values are mean (95% CI) unless stated otherwise

Scores  Baseline  7 month follow up  13 month follow up  P value*  P value†
Skills
Standardised patients' rapport and satisfaction 68.6 (63.5 to 73.7) 76.0 (71.7 to 80.2) 75.9 (71.4 to 80.5) <0.01 1.00
Standardised patients' confidentiality 42.5 (34.4 to 50.6) 92.7 (89.1 to 96.3) 84.4 (78.4 to 90.5) <0.01 0.01
Observer competence 51.0 (46.3 to 55.8) 65.3 (60.3 to 70.3) 70.7 (66.3 to 75.0) <0.01 0.02
Observer risk assessment 51.2 (47.9 to 54.5) 61.3 (58.4 to 64.3) 61.4 (58.3 to 64.4) <0.01 1.00
Self perceived competency
Comfort:
 Process 71.3 (67.8 to 74.8) 78.1 (74.8 to 81.5) 80.0 (77.3 to 82.7) <0.01 0.12
 Substantive 59.6 (55.4 to 63.9) 74.9 (71.7 to 78.0) 75.5 (72.4 to 78.7) <0.01 0.58
Self perceived knowledge and skill:
 Process 66.6 (63.4 to 69.7) 80.8 (78.1 to 83.5) 81.9 (79.2 to 84.6) <0.01 0.27
 Substantive 56.7 (52.8 to 60.6) 76.3 (73.2 to 79.5) 76.3 (73.0 to 79.6) <0.01 0.99
General practitioner self rating on taped consultation 55.6 (50.6 to 60.6) 72.1 (68.7 to 75.6) 71.0 (67.3 to 74.7) <0.01 0.59
Knowledge test 33.5 (31.5 to 35.4) 48.0 (46.1 to 49.9) 47.7 (45.8 to 49.6) <0.01 0.71
*Baseline to 13 months.
†7 to 13 months.

Discussion

A six session course in adolescent health, designed with evidence based strategies in doctor education, brought substantial gains in the knowledge, skills, and self perceived competency of the intervention group compared with the control group on all measures except the rapport and satisfaction rating by the standardised patients. The changes were generally sustained over 12 months, and the independent observer's rating of competence improved further. Almost all participants reported a change in actual practice since the intervention.

These results are better than those reported in a review of 99 randomised controlled trials evaluating continuing medical education published from 1974 to 1995.12 Although over 60% of those trials had positive outcomes, the effects were small to moderate and usually confined to one or two outcome measures. In keeping with the recommendations of this review we adopted a rigorous design, clearly defined our target population, and used multiple methods for evaluating competence. Perhaps more importantly, the intervention design incorporated three further elements: the use of evidence based educational strategies, a comprehensive preliminary needs analysis, and content validity of the curriculum ensured by the involvement of both young people and doctors.

The participants clearly represented a highly motivated group of doctors. This self selection bias was unavoidable but reflects the reality that only interested doctors would seek special skills in this domain, and it conforms to the adult learning principle of providing education where there is a self perceived need and desire for training.12,26,27 We therefore established that the intervention is effective with motivated doctors. It is generally assumed that doctors with an interest in a topic already have high levels of knowledge and skill, with little scope for improvement. This was not the case in our study: baseline measures were often low and improvements were large, confirming the need for professional development in adolescent health. The retention rate was excellent, possibly due in part to the role of a doctor in the design of the programme, in recruitment, and in tutoring.

Doubt remains as to whether improved competency in a controlled test setting translates to improved performance in clinical practice.28 High competency ratings are not necessarily associated with high performance, but low competency is usually associated with low performance.16,29,30

The rapport and satisfaction rating by the standardised patients was the only outcome measure apparently unresponsive to the intervention. Actors' ratings and character portrayal were standardised, and gender bias was controlled by using only actresses. Even with these precautions, three actresses scored differently from the rest, one had fewer encounters with the doctors, and the subjective nature of the rating scale probably contributed to large individual variation. A trend towards improvement in the intervention group was noted, but our study lacked sufficient power to detect a difference. In other settings the validity and reliability of competency assessments with standardised patients have been shown to increase with the number of consultations examined.31,32 Pragmatically, it was not feasible to measure multiple consultations in our study.

Errors in interrater measurement were minimised by using the same raters for all three periods of testing. The independent observer and patient were blind to study status but may have recognised the intervention group at the seven month follow up because of the learnt consultation styles. Other measures of competency were included to accommodate this unavoidable source of error.

Our study shows the potential of doctors to respond to the changing health needs of youth after brief training based on a needs analysis and best evidence based educational practice. Further study should address the extent to which these changes in doctors' competence translate to health gain for their young patients.

Acknowledgments

We thank the participating doctors, Helen Cahill (Youth Research Centre, Melbourne University), Dr David Rosen (University of Michigan), and Sarah Croucher (Centre for Adolescent Health).

Footnotes

Funding: The Royal Australian College of General Practitioners Trainee Scholarship and Research Fund and the National Health and Medical Research Council.

Competing interests: None declared.

References

1. Donovan C, Mellanby AR, Jacobson LD, Taylor B, Tripp JH. Teenagers' views on the general practice consultation and provision of contraception. The adolescent working group. Br J Gen Pract. 1997;47:715–718.
2. Oppong-Odiseng ACK, Heycock EC. Adolescent health services—through their eyes. Arch Dis Child. 1997;77:115–119. doi: 10.1136/adc.77.2.115.
3. Ginsburg KR, Slap GB. Unique needs of the teen in the health care setting. Curr Opin Pediatr. 1996;8:333–337. doi: 10.1097/00008480-199608000-00006.
4. Veit FCM, Sanci LA, Young DYL, Bowes G. Adolescent health care: perspectives of Victorian general practitioners. Med J Aust. 1995;163:16–18. doi: 10.5694/j.1326-5377.1995.tb126081.x.
5. McPherson A, Macfarlane A, Allen J. What do young people want from their GP? [Letter] Br J Gen Pract. 1996;46:627.
6. Bearinger LH, Gephart J. Interdisciplinary education in adolescent health. J Paediatr Child Health. 1993;29:10–5S. doi: 10.1111/j.1440-1754.1993.tb02253.x.
7. Bennett DL. Adolescent health in Australia: an overview of needs and approaches to care. Sydney: Australian Medical Association; 1984.
8. Veit FCM, Sanci LA, Coffey CMM, Young DYL, Bowes G. Barriers to effective primary health care for adolescents. Med J Aust. 1996;165:131–133. doi: 10.5694/j.1326-5377.1996.tb124885.x.
9. Blum R. Physicians' assessment of deficiencies and desire for training in adolescent care. J Med Educ. 1987;62:401–407. doi: 10.1097/00001888-198705000-00005.
10. Blum RW, Bearinger LH. Knowledge and attitudes of health professionals toward adolescent health care. J Adolesc Health Care. 1990;11:289–294. doi: 10.1016/0197-0070(90)90037-3.
11. Resnick MD, Bearinger L, Blum R. Physician attitudes and approaches to the problems of youth. Pediatr Ann. 1986;15:799–807. doi: 10.3928/0090-4481-19861101-09.
12. Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance. A systematic review of the effect of continuing medical education strategies. JAMA. 1995;274:700–705. doi: 10.1001/jama.274.9.700.
13. Davis DA, Thomson MA, Oxman AD, Haynes RB. Evidence for the effectiveness of CME. JAMA. 1992;268:1111–1117.
14. Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. Can Med Assoc J. 1995;153:1423–1431.
15. Owen JM. Program evaluation forms and approaches. St Leonards, NSW: Allen and Unwin; 1993.
16. Davis D, Fox R. The physician as learner. Linking research to practice. Chicago, IL: American Medical Association; 1994.
17. Greene JC, Caracelli VJ. Advances in mixed-method evaluation: the challenges and benefits of integrating diverse paradigms. New directions for evaluation, No 74. San Francisco: Jossey-Bass; 1997.
18. Masters GN, McCurry D. Competency-based assessment in the professions. Canberra: Australian Government Publishing Service; 1990.
19. Norman GR, Neufeld VR, Walsh A, Woodward CA, McConvey GA. Measuring physicians' performances by using simulated patients. J Med Educ. 1985;60:925–934. doi: 10.1097/00001888-198512000-00004.
20. Woodward CA, McConvey GA, Neufeld V, Norman GR, Walsh A. Measurement of physician performance by standardized patients. Refining techniques for undetected entry in physicians' offices. Med Care. 1985;23:1019–1027. doi: 10.1097/00005650-198508000-00009.
21. Rosen D. The adolescent interview project. In: Johnson J, editor. Adolescent medicine residency training resources. Elk Grove Village, IL: American Academy of Pediatrics; 1995. pp. 1–15.
22. The Royal Australian College of General Practitioners. College examination handbook for candidates 1996. South Melbourne: Royal Australian College of General Practitioners; 1996.
23. Hays RB, van der Vleuten C, Fabb WE, Spike NA. Longitudinal reliability of the Royal Australian College of General Practitioners certification examination. Med Educ. 1995;29:317–321. doi: 10.1111/j.1365-2923.1995.tb02855.x.
24. Bridges-Webb C, Britt H, Miles DA, Neary S, Charles J, Traynor V. Morbidity and treatment in general practice in Australia 1990-1991. Med J Aust. 1992;157:1–57S.
25. The general practices profile study. A national survey of Australian general practices. Clifton Hill, Victoria: Campbell Research and Consulting; 1997.
26. Knowles M. The adult learner. A neglected species. Houston, TX: Gulf; 1990.
27. Ward J. Continuing medical education. Part 2. Needs assessment in continuing medical education. Med J Aust. 1988;148:77–80.
28. Norman GR. Defining competence: a methodological review. In: Neufeld VR, Norman GR, editors. Assessing clinical competence. New York, NY: Springer; 1985. pp. 15–35.
29. Rethans JJ, Strumans F, Drop R, van der Vleuten C, Hobus P. Does competence of general practitioners predict their performance? Comparison between examination setting and actual practice. BMJ. 1991;303:1377–1380. doi: 10.1136/bmj.303.6814.1377.
30. Pieters HM, Touw-Otten FWWM, De Melker RA. Simulated patients in assessing consultation skills of trainees in general practice vocational training: a validity study. Med Educ. 1994;28:226–233. doi: 10.1111/j.1365-2923.1994.tb02703.x.
31. Colliver JA, Swartz MH. Assessing clinical performance with standardized patients. JAMA. 1997;278:790–791. doi: 10.1001/jama.278.9.790.
32. Colliver JA. Validation of standardized-patient assessment: a meaning for clinical competence. Acad Med. 1995;70:1062–1064. doi: 10.1097/00001888-199512000-00005.
BMJ. 2000 Jan 22;320(7229):224–230.

Commentary: Applying the BMJ's guidelines on educational interventions

Jean Ker

In the western world, healthcare systems are facing enormous changes driven by political and economic forces and by rising consumer expectations of competent and consistent quality health care. In response to these changes, medical education has become an increasingly important aspect of every doctor's professional life. Publishers have responded by including papers on medical educational issues with increasing frequency. This move has, however, required the development of guidelines for evaluating papers on educational interventions.

This critique applies guidelines developed by the BMJ's education group, which were published in the BMJ on 8 May 1999.

Guideline 1: General overview

Given the BMJ's commitment to publishing more educational research, the paper by Sanci et al is eminently suitable reading for practising doctors interested in medical education.

Adolescent health care is challenging not only for general practitioners but for healthcare professionals involved in service delivery at all levels. This paper shows how successfully continuing medical education can be incorporated into changes in service delivery.

The principal steps of the educational intervention process are clearly outlined and can be generalised to other clinical settings, making the paper of interest to a wide readership. It contributes to the growing literature on evaluation of educational interventions in general practice by attempting to show sustained changes in practice performance after a brief programme of continuing medical education.

The paper also follows the general style and guidelines for publication in the BMJ.

Guideline 2: Theoretical considerations

One of the purposes of the guidelines on evaluating educational interventions is to facilitate, through papers, readers' understanding of the teaching and learning process so that they can apply any relevant aspects to their own practice.

In relation to this, the goals of this educational intervention are well described in the context of Australian general practice. The educational rationale, however, was explained only briefly. An expanded discussion of the strategies used could have covered their advantages and disadvantages. Readers may be able to use some of the learning opportunities described, but their links to the goals were not made explicit.

Guideline 3: Study presentation and design

A panel of stakeholders, including patients, was used to identify the content and design of the multifaceted intervention, which ensures its relevance to healthcare practice, and this was described in detail. The study design ensuring that standardised patients and observers were blind to the intervention status of the doctors is commendable.

Answering the questions posed in the guidelines, however, raises some concerns about the design.

The study is described as a randomised controlled study. A better and less misleading description would have been simply a randomised study, as it is often difficult to eliminate contamination in an educational intervention. Indeed, the imbalances described in type of practice, language spoken, age, and college exams taken call into question the positive outcomes reported in the study.

The lack of a pretest to establish whether the two groups were comparable in terms of knowledge also brings into question the final interpretation of the intervention. Purposive sampling based on a pretest and the variables described above would have been more appropriate and would have lent more meaning to the outcome.

The statistical analysis is clearly shared with the reader and well described. The use of a multifaceted evaluation system using recognised validated instruments reflects the guidelines for evaluating papers on educational interventions.

Guideline 4: Discussion

The discussion was structured in accordance with the guidelines, with a clear statement of the principal findings. The sustainability of the intervention could, however, have been highlighted, as it was a significant finding. The strengths and weaknesses of the study in relation to selection bias were well debated and justified.

The discussion in relation to other studies was, however, brief, referring to a single systematic review of strategies for continuing medical education. It could have been expanded to support some of the findings, particularly the use of the standardised patients' rapport and satisfaction as an outcome measure.

The discussion did not explore the implications for clinicians, beyond indicating a need to assess the health gain for patients from such interventions, and it did not discuss the difficulties of cost benefit analysis.

The guidelines on evaluating educational interventions as applied to this paper enabled the reviewer to systematically address all relevant aspects of the intervention. What is not clear is how much weighting should be placed on each guideline in relation to deciding whether the article should be published or not.

Footnotes

website extra: The sample size calculation and a chart showing the flow of participants through the trial appear on the BMJ's website www.bmj.com

