Rip Out Action Items
Program directors, designated institutional officials, and GME educators should:
Recognize that program evaluation models yield insights that inform implementation or assess outcomes.
Select a model based on the question the evaluation seeks to answer (process and/or outcomes).
Plan your evaluation during the development phase and get input from stakeholder groups.
The Challenge
Program directors, designated institutional officials (DIOs), and other graduate medical education (GME) educators strive to ensure the quality of their training programs. For example, faculty at annual program evaluation retreats often ask, “Is our training being implemented as we intended?” or “Is our training really working?” They may not recognize that these are 2 different types of questions, and that each may be answered differently depending on the program evaluation model used. Each model provides a different vantage point from which to inform the development and implementation of an educational initiative (process evaluation) or to judge the value or effectiveness of an educational initiative (outcome evaluation). Familiarity with commonly used models can optimize GME educators' ability to obtain actionable answers to a program's evaluation questions.
What Is Known
Two program evaluation models, appreciative inquiry1 and the logic model,2 have been successfully used to frame GME program evaluations (Table).
Table.
Key Features, Assumptions, Applications, and Pros and Cons of Evaluation Models

| | Appreciative Inquiry | Logic Model |
| Key features | … | … |
| Assumptions | … | … |
| Application | Process evaluation, eg, the Annual Program Evaluation section on graduate performance | Outcome evaluation, eg, a new faculty development series |
| Pros (+) | Focuses on assets, not deficits. Creates opportunity for people to be heard, to dream, and to act. | Provides a helpful visual “snapshot” of your program. Flexible and adaptable in its use. |
| Cons (−) | May be perceived as “rose-colored glasses.” Familiarity with qualitative data is helpful. | Can be unwieldy and cumbersome. By focusing on outcomes, can overlook important processes. |
How You Can Start TODAY
Consider. Sketch out how each program evaluation model “fits” with the program you are evaluating and with the type of question you are asking. For example, fill in a blank table with columns labeled inputs, activities, outputs, and outcomes to visualize how your program might align with the logic model (a sample sketch follows). Or pilot some appreciative inquiry questions with program participants: are responses to these questions likely to yield information that will be useful for program evaluation?
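For instance, a logic model sketch for a hypothetical new faculty development series (the entries below are illustrative assumptions, not prescribed content) might look like this:

| Inputs | Activities | Outputs | Outcomes |
| Protected faculty time; funding; GME office support | Monthly workshops on feedback and teaching skills | Number of sessions delivered; attendance; session evaluations | Improved faculty feedback behaviors; greater trainee satisfaction with feedback |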
Begin systematically. Identify a small program or initiative whose implementation or effectiveness has not been evaluated, and pick one of the models discussed. Convene a diverse group of stakeholders (eg, faculty, trainees, program coordinators) to discuss the feasibility of the evaluation and how evaluation data will be used. Outline your evaluation plan, then implement it.
What You Can Do LONG TERM
Familiarize yourself with different program evaluation models. Use published resources in medical education and evaluation3 and attend evaluation-oriented workshops and meetings.
Get involved in the evaluation community. Take advantage of opportunities to collaborate with program evaluators at your institution or through your national education organization(s).
Resources
- 1. Preskill H, Catsambas TT. Reframing Evaluation Through Appreciative Inquiry. Thousand Oaks, CA: Sage Publications; 2006.
- 2. W.K. Kellogg Foundation. Using Logic Models to Bring Together Planning, Evaluation, and Action: Logic Model Development Guide. Battle Creek, MI: W.K. Kellogg Foundation; 2004.
- 3. Balmer DF, Rama JA, Martimianakis MA, Stenfors-Hayes T. Using data from program evaluations for qualitative research. J Grad Med Educ. 2016;8(5):773–774. doi:10.4300/JGME-D-16-00540.1.