Commentary
During the past decade, medical education at the undergraduate level and, to a lesser extent, the postgraduate level has seen major reform. A key change has been the incorporation of problem- and case-based learning, both of which are grounded in basic tenets of adult learning theory. Unfortunately for society, continuing medical education has not progressed much beyond the traditional lecture format, occasionally followed by examinations that may or may not even be corrected. In this manner, continuing medical education remains a one-size-fits-all exercise geared to a lecturer's assessment of learners' needs. Rarely are learners asked to assess their own knowledge, skills, or attitudes to help direct their learning. Similarly, the context of continuing medical education is rarely geared toward helping busy clinicians develop new ways to deal with real-life practice dilemmas or to assess their own practice behaviors.
The article by Sanci and associates addressed two areas of great importance to primary care physicians. First, the authors designed a clear, interactive, and innovative program of continuing medical education based on the assessed needs of primary care physicians. The program was broken into bite-sized morsels, each focused on a different objective. Second, the authors took a content area, adolescent medicine, and attempted to provide primary care physicians with the skills, knowledge, and attitudes needed to better serve a population that is under-represented in the health care arena. Rather than assess short-term knowledge acquisition (multiple-choice or true-false questions given at the end of the program), the authors relied on a systematic, unbiased review of videotapes of clinicians (both controls and the intervention group) interacting with simulated adolescent patients (standardized patients).
Although the design was a rigorous randomized controlled trial, the study had some weaknesses, given the logistics of education research: a small sample size, variability among standardized patients, a potentially nonrepresentative group of clinicians, and the lack of a pretest that would have allowed an initial comparison between the control and intervention groups. Despite these problems, the report is a welcome addition to the fields of both continuing medical education and adolescent medicine. Educators and medical leaders should take notice, for the approach used in this study is easily extrapolated to other content areas and groups of physicians.
The critical objective is to assess whether an educational intervention can be acceptable to physicians and produce long-term change in clinical behavior. This study shows that both are possible.