Journal of Graduate Medical Education. 2023 Feb;15(1):15–18. doi: 10.4300/JGME-D-22-00397.1

Program Evaluation Use in Graduate Medical Education

Katherine A Moreau, Kaylee Eady
PMCID: PMC9934821; PMID: 36817537

It is common to complete evaluations of graduate medical education (GME) programs, present them at conferences, publish them in peer-reviewed journals, add them to curricula vitae (CVs), and then move on without using them to enact changes in the programs themselves. Such actions may reflect the reality that many individuals perceive and conduct program evaluations as if they were research.1 While research and program evaluation use similar methods, they have distinct purposes, timelines, audiences, and most notably, intended uses.2 Evaluations of GME programs need to be used, for example, to inform program decisions and modifications, grow program stakeholders' knowledge, stimulate organizational culture changes, or improve the quality of training.1,3,4 They need to be more than intellectual exercises resulting in accomplishments listed on CVs.5 As such, we emphasize that evaluation use is an essential consequence of program evaluation. Those involved in program evaluation should discuss evaluation use and keep it prominent from the onset of every evaluation. We also promote the adage "use-it-or-lose-it" to stress timely program evaluation use. Yet the literature on program evaluation in GME often neglects to discuss use, including how selected evaluation approaches can influence evaluation use.1,6,7 In this article, we explain evaluation use by describing both the use of evaluation findings and process use (ie, changes resulting from engagement in the evaluation process itself).1,8 We also suggest strategies, including evaluation approaches, that faculty can use to increase evaluation use in GME.

Use of Evaluation Findings

The 3 categories of use of evaluation findings are instrumental, conceptual, and symbolic. Instrumental use refers to instances where stakeholders use evaluation findings to take direct actions (eg, improvements, changes, terminations) in a program.9 For example, evaluation findings show that residents in a GME program are struggling to complete their research projects. Using the findings, the GME team implements new research training activities to assist residents in the completion of their projects. Conceptual use describes occurrences where stakeholders use evaluation findings to evolve their understandings of a program but do not take direct actions based on these findings.4 For instance, the GME team acknowledges the findings that residents are struggling to complete their research projects. These findings inform their understanding of why residents are not attending academic conferences to present their research. Lastly, symbolic use occurs when stakeholders use the sheer existence of a completed evaluation to comply with reporting requirements or justify a previously made program action.4 For example, the funding university requires the GME program to complete an evaluation to retain funding for residents' research projects. The GME team completes an evaluation and presents the report to the university. Alternatively, before the evaluation, the GME program hired a research assistant to help residents with their research projects, and the subsequent evaluation findings are used to justify that hiring. In GME, we emphasize instrumental use, as this form of use leads to actions that can improve programs. However, the use of evaluation findings is typically a short-term consequence of evaluation because these findings are relevant only within a specific and limited timeframe (ie, use-it-or-lose-it).

Process Use

In contrast to the use of evaluation findings, process use can have an ongoing influence on individuals, programs, and organizations. It recognizes that evaluation processes themselves can affect attitudes, thought processes, and behaviors.10 Process use captures stakeholders' learning from their involvement in an evaluation as well as the effects of evaluation processes on program functioning and organizational culture.11 It does not require changes to a program or direct actions based on evaluation findings. There are 6 types of process use, which we illustrate with examples:

  1. Facilitating stakeholders' shared understanding of the program: Evaluation activities result in the GME team agreeing on their program's goals and activities.

  2. Supporting and reinforcing a program intervention: Evaluation processes require the GME team to communicate and collaborate, skills that their program's educational intervention aims to enhance.

  3. Increasing stakeholders' engagement as well as their evaluation and critical thinking skills: Evaluation involvement teaches the GME team how to conduct evaluations and demonstrates their value. Thus, it enhances their commitment to evaluation and the program itself.

  4. Facilitating program and organizational development: Evaluation involvement may lead the GME team to value and become responsive to program feedback. Such changes contribute to their organization's evaluation capacity and learning functions.

  5. Infusing evaluation thinking into the organization's culture: Evaluation involvement leads the GME team to think like evaluators in their everyday roles; therefore, an evaluation culture emerges within their organization.

  6. Promoting instrumentation effects: Evaluation involvement increases the GME team's understanding of what program aspects are evaluation foci. Thus, they ensure that what gets evaluated remains a priority within the program.11

When stakeholders are involved in evaluation processes, they enter an evaluation culture and learn to think about and look at things through an evaluative lens. They can also use the knowledge and skills (eg, evaluation knowledge, methodological and facilitation skills) they develop to strengthen their organization's abilities to design, implement, interpret, and use evaluations, thereby building its evaluation capacity. In this sense, process use is valuable throughout and following an evaluation and in various GME settings, regardless of the evaluation findings or recommendations.12

The Table presents strategies that faculty involved in program evaluation can employ to increase evaluation use.

Table. Strategies to Increase Program Evaluation Use

Strategy: Engage evaluation users
Description: Evaluation users (ie, those who can use the evaluation findings and processes) can recommend major evaluation questions that are relevant and lead to usable information. They can also increase the credibility of a program evaluation and help ensure that they and other users view it as trustworthy and thus usable.
Examples of what to do: In planning an evaluation, ask users:
  • What is the purpose of the evaluation?
  • What major questions should be the focus?
  • How will you use the findings and processes?
Once evaluation data are collected, ask:
  • How would you interpret and use the information?
  • What did you learn from the processes that may be helpful for future evaluations?
  • What reporting strategies would ensure others use the findings and processes?
  • Are there any new potential uses for the collected information?
  • What findings and processes should be used immediately, and by whom and how?

Strategy: Select an evaluation approach that facilitates evaluation use
Description: The engagement of evaluation users in an evaluation increases their commitment to its use.5 Certain evaluation approaches require such engagement and thus increase evaluation use.
Examples of what to do: Use a participatory evaluation13,14 or utilization-focused evaluation approach.15

Strategy: Anticipate and prepare for barriers to evaluation use
Description: Many hurdles can hinder evaluation use, including a lack of trust in the evaluators and evaluation processes, doubts about the relevance and credibility of evaluation reports, a lack of resources or power to use evaluation findings or processes, and limited receptiveness to negative findings or openness to change.16
Examples of what to do: Throughout the evaluation, ask users:
  • Why might you not use the evaluation?
  • What resources do you need to better use the evaluation, and can these resources be budgeted into the evaluation itself?
  • How can we engage you in the evaluation so that you can have faith in its credibility?

Strategy: Use action-oriented reporting
Description: Action-oriented evaluation reporting uses creativity to focus attention on important findings and processes and how to use them. It tailors the information to the evaluation users and emphasizes information that is a priority for use.17
Examples of what to do: Those responsible for the evaluation can communicate about it using:
  • One-page fact sheets
  • Town halls
  • Social media postings
  • Podcasts
  • Webinars

Strategy: Disseminate information on evaluation use
Description: When disseminating a program evaluation, it is important to report not only the findings but also how the team and others used or plan to use them, as well as what process use occurred as a result of the evaluation.
Examples of what to do: When disseminating a program evaluation:
  • Describe how the evaluation was intended to be used
  • Explain how and why the evaluation was used (or not used)
  • Explain what type(s) of process use occurred because of an evaluation of a specific program
  • Share why evaluation use is not occurring within your context

In closing, it is imperative to remember that evaluation use, especially process use, can occur throughout a program evaluation rather than simply at its conclusion.10 Evaluation use can start at the planning stage and continue well beyond a presentation or publication of an evaluation. Program evaluators need a use-it-or-lose-it perspective throughout the evaluation process to maximize improvements to training. This perspective will maintain stakeholders' faith in the value of evaluation, as they witness that evaluation efforts lead to timely, actionable findings and processes. Ultimately, we must embrace evaluation use to ensure that all stakeholders and programs, not only conference attendees, readers of peer-reviewed journals, or our CVs, witness the consequences (both positive and negative) of program evaluation.

References

  • 1. Balmer DF, Riddle JM, Simpson D. Program evaluation: getting started and standards. J Grad Med Educ. 2020;12(3):345–346. doi: 10.4300/JGME-D-20-00265.1
  • 2. Mathison S. What is the difference between evaluation and research and why do we care? In: Smith NL, Brandon PR, editors. Fundamental Issues in Evaluation. The Guilford Press; 2008. pp. 183–196.
  • 3. Balmer DF, Rama JA, Simpson D. Program evaluation models: evaluating processes and outcomes in graduate medical education. J Grad Med Educ. 2019;11(1):99–100. doi: 10.4300/JGME-D-18-01084.1
  • 4. Johnson K, Greenseid L, Toal S, King JA, Lawrenz F, Volkov B. Research on evaluation use: a review of the empirical literature from 1986 to 2005. Am J Eval. 2009;30(3):377–410. doi: 10.1177/109821400934166
  • 5. Alkin M. Evaluation Essentials. The Guilford Press; 2011.
  • 6. Cook DA. Twelve tips for evaluating educational programs. Med Teach. 2010;32(4):296–301. doi: 10.3109/01421590903480121
  • 7. Goldie J. AMEE education guide no. 29: evaluating educational programmes. Med Teach. 2006;28(3):210–224. doi: 10.1080/01421590500271282
  • 8. Yarbrough DB, Shulha LM, Hopson RK, Caruthers FA. The Program Evaluation Standards: A Guide for Evaluators and Evaluation Users. 3rd ed. Sage Publications; 2011.
  • 9. Shulha LM, Cousins JB. Evaluation use: theory, research, and practice since 1986. Am J Eval. 1997;18(1):195–208. doi: 10.1177/109821409701800302
  • 10. Alkin M, King JA. The historical development of evaluation use. Am J Eval. 2016;37(4):568–579. doi: 10.1177/1098214016665164
  • 11. Patton MQ. Process use as a usefulism. New Dir Eval. 2007;2007(116):99–112. doi: 10.1002/ev.246
  • 12. Amo C, Cousins JB. Going through the process: an examination of the operationalization of process use in empirical research on evaluation. New Dir Eval. 2007;2007(116):5–26. doi: 10.1002/ev.240
  • 13. Cousins JB, Chouinard JA. Participatory Evaluation Up Close: An Integration of Research-Based Knowledge. Information Age Publishing; 2012.
  • 14. Moreau K. Twelve tips for planning and conducting a participatory evaluation. Med Teach. 2017;39(4):334–340. doi: 10.1080/0142159X.2017.1286310
  • 15. Patton MQ, Campbell-Patton C. Utilization-Focused Evaluation. 5th ed. Sage Publications; 2021.
  • 16. Taut S, Alkin M. Program staff perceptions of barriers to evaluation implementation. Am J Eval. 2003;24(2):213–226. doi: 10.1177/109821400302400205
  • 17. National Center for Chronic Disease Prevention and Health Promotion. Evaluation Reporting: A Guide to Help Ensure Use of Evaluation Findings. US Department of Health and Human Services; 2013. https://www.cdc.gov/dhdsp/docs/evaluation_reporting_guide.pdf
