Public Health Reports. 2023 Apr 14;139(1):129–137. doi: 10.1177/00333549231163529

Building Program Evaluation Capacity Through an Online Training for Graduate Students at Schools and Programs of Public Health

Bree L Hemingway 1, Reener M Balingit 1, Stewart I Donaldson 2
PMCID: PMC10905762; PMID: 37057393

Abstract

Objectives:

Program evaluation is an essential function for public health professionals: it is necessary to acquire funding for public health programs and to support evidence-based practice. However, coverage of program evaluation principles and methodology in the master of public health (MPH) curriculum is inconsistent and may not adequately prepare students to conduct program evaluation activities postgraduation, particularly culturally responsive program evaluation. We examined the effectiveness of an online training course on program evaluation topics.

Methods:

In July 2021, we recruited current and recently graduated MPH students from accredited US universities to measure the effectiveness of a 1-hour online training course in program evaluation. We distributed pre- and postsurveys to eligible participants. We assessed program evaluation skills on a 4-point Likert scale to determine improvements in knowledge (from 4 = extremely knowledgeable to 1 = not knowledgeable), attitudes (from 4 = strongly agree to 1 = strongly disagree), and self-efficacy (from 4 = strongly agree to 1 = strongly disagree).

Results:

Among 80 MPH students who completed the survey, respondents indicated mean (SD) increases from presurvey to postsurvey in knowledge (from 2.13 [0.66] to 3.24 [0.54]) and attitudes (from 3.61 [0.51] to 3.84 [0.30]) toward program evaluation and in self-efficacy in conducting program evaluation (from 2.92 [0.71] to 3.44 [0.52]).

Conclusion:

The course may be an effective approach for training public health professionals about program evaluation. Our results provide a basis for revising the way program evaluation is taught and practical recommendations for integrating program evaluation competencies within public health curricula, such as by incorporating a self-paced training course for continuing education.

Keywords: public health evaluation, online training course, pedagogy, culturally relevant evaluation, MPH students and professionals


Program evaluation is an essential function for public health professionals and is necessary to acquire funding for public health programs and support evidence-based practice.1,2 Despite the importance of program evaluation, research3-5 suggests that coverage of program evaluation principles and methodology in master of public health (MPH) curricula is inconsistent and may not adequately prepare students to conduct program evaluation activities postgraduation.6 Inconsistent and insufficient training leads to ineffective practice7; hence, it is vital to explore how public health academic programs prepare graduates to conduct program evaluation.8

In addition, eliminating health disparities and promoting health equity are primary goals of Healthy People 2030.7 Specifically, culturally responsive evaluation (CRE), a theory and practice of program evaluation that incorporates culture with the intention of bringing balance and equity to the evaluation process for groups that have been historically marginalized, is beneficial for evaluating health promotion programs.7-9 CRE that effectively measures the real-world impact of health programs is necessary to inform policy and practice that promote health equity, yet it is not standard practice for public health professionals.10,11

Literature on program evaluation indicates that adequate training is a predictor of successful program evaluation practice,8 underscoring the importance of program evaluation training that emphasizes culturally responsive practice. This pilot study had 3 primary objectives: (1) measure the effectiveness of the training course, (2) analyze participant satisfaction, and (3) identify strategies for improvement.

Methods

Development of the Training Course

In 2021, with the help of a video editor, researchers from Claremont Graduate University developed a 5-module animated training course on Teachable (Teachable Inc), an online learning platform, to introduce the importance of program evaluation and outline CRE practices to public health students and professionals. The development of the training course and student learning objectives (SLOs) was informed by qualitative research on current use of program evaluation among public health practitioners and on the coverage of program evaluation competencies in MPH programs.12 The qualitative research included interviews with public health professionals, 4 interviews with instructors who have taught program evaluation, and a content review of current public health curricula.

After reviewing results of qualitative research and literature on best practices in program evaluation training, researchers from Claremont Graduate University developed an online training course to cover 4 SLOs tailored toward the program evaluation needs of emerging public health professionals. The training course consisted of 5 self-paced modules. The SLOs aimed to ensure that after completing all 5 modules of the training course, participants would be able to: (1) describe why program evaluation is important to public health practice, (2) identify resources that can assist with program evaluation, (3) distinguish program evaluation from social science research, and (4) explain how CRE can be used to promote health equity.

Recruitment of Participants and Survey Development

Researchers created and disseminated pre- and postsurveys to eligible participants before and after completion of a 1-hour training course. Eligible MPH students had to be enrolled during the summer 2021 semester or have graduated in the spring 2021 semester from an accredited MPH program in the United States and be able to complete a 1-hour training course, Introduction to Program Evaluation for Public Health Professionals, within 2 weeks of the release date on July 1, 2021 (Table 1). To evaluate outcomes, we used a pretest–posttest design. The institutional review board at Claremont Graduate University approved this project (3933) to recruit MPH students in summer 2021.

Table 1.

Lesson plan developed by Claremont Graduate University researchers to map student learning objectives for a program evaluation course piloted in July 2021, United States

Learning module Mode/overview
Welcome • Animated video 1: Overview, instructions, learning objective, and acknowledging sources (2.5 min)
• Knowledge check 1 (30 sec)
Section 1: What is program evaluation? (SLO 1, 4) a • Animated video 2: Shopping for cookies, evaluating your options (2 min)
• Student reflection 1: When have you done evaluation in your day-to-day life? At the grocery store, making a big purchase? (1 min)
• Animated video 2: What is program evaluation? Introduce program evaluation and how it is used in public health (3 min)
• Knowledge check 2 (1 min)
Section 2: Who does program evaluation? (SLO 2) a • Animated video 3: Overview of evaluation as a professional field (3 min)
• Animated video 4: Evaluators in public health. Introduces evaluation competencies in public health (3 min)
• Downloadable resource list: List of evaluation resources (1 min)
• Knowledge check 3 (1 min)
Section 3: Why do we conduct program evaluation in public health? (SLO 4) a • Animated video 5: Review common uses of program evaluation in public health, including accountability, improvement, and health equity (4 min)
• Student reflection 2: Which of these uses do you think is most important for public health practice? Why? (1 min)
• Knowledge check 4 (1 min)
Section 4: How do you do program evaluation in public health? (SLO 3) a • Animated video 6: Introduce the CDC Evaluation Framework. Go through the steps and provide examples of the activities that could take place at each step (5 min)
• Animated video 7: Introduce differences between evaluation and research methods (4 min)
• Knowledge check 5 (1 min)
Section 5: How do you conduct a culturally responsive evaluation (CRE)? (SLO 4) a • Animated video 8: Introducing components of CRE (3 min)
• Applying what we learned: How to integrate cultural competence into each step of evaluation (6 min)
• Knowledge check 6 (1 min)

Abbreviations: CDC, Centers for Disease Control and Prevention; SLO, student learning objective.

a The 4 student learning objectives are: (1) describe why program evaluation is important to public health practice, (2) identify resources that can assist with program evaluation, (3) distinguish program evaluation from social science research, and (4) explain how program evaluation can be used to promote health equity.

Recruitment materials included a link to an electronic registration form, which included a consent form, an eligibility screening form, and a form collecting contact information. Registered participants who met eligibility criteria received an email with a link to the presurvey so that we could collect baseline data on current knowledge of program evaluation, attitudes about the importance of program evaluation in public health, and perceived self-efficacy about various program evaluation topics covered in the training course.13,14 In addition, the presurvey collected information about participants’ current progress in the MPH degree, concentration, completed program evaluation coursework, and previous program evaluation experience. Other questions prompted responses on a 4-point Likert scale to 2 items: (1) “How knowledgeable are you about culturally responsive evaluation?” (4 = extremely knowledgeable, 3 = moderately knowledgeable, 2 = a little knowledgeable, 1 = not knowledgeable) and (2) “I am confident that I can describe why program evaluation is important to public health practice” (4 = strongly agree, 3 = agree, 2 = disagree, 1 = strongly disagree).

After completing the registration and the presurvey, participants received access to the training course delivered in an online learning platform, Teachable, and were instructed to complete all 5 modules and the postsurvey within 2 weeks. Questions from the presurvey were repeated on the postsurvey to track changes in participants’ knowledge, attitudes, and self-efficacy. To assess participant satisfaction with the training course, the following question was added: “How satisfied were you with the evaluation-training module?” Responses were on a 4-point scale, where 1 = not satisfied and 4 = very satisfied.

Researchers conducted cognitive interviews to review the content of the training course and the surveys before wider dissemination. The primary purpose of the cognitive interviews was to check for accurate comprehension and interpretation. MPH students and alumni were selected via convenience sampling, and revisions were made based on their feedback.

Analysis

Researchers cleaned and analyzed survey data using R version 3.6.0 (R Foundation for Statistical Computing).

Learning objectives

Composite variables were created to best estimate participants’ knowledge, attitudes, and self-efficacy on the pretest and posttest. The research team conducted paired-sample t tests to compare composite variables (knowledge, attitudes, and self-efficacy) before and after taking the training course, with P < .05 considered significant. Cohen d was calculated as a measure of effect size. Researchers conducted additional paired-sample t tests to measure participants’ understanding of the 4 SLOs before and after taking the training course. In addition to quantitative analysis, the research team collected and analyzed responses to open-ended questions using thematic coding. For example, we asked the following question: “In your opinion, what were some of the most important takeaways that you got from the program evaluation training?”
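As an illustration of this analysis, the sketch below shows how composite scores, a paired-sample t test, and Cohen d can be computed in R, the software the authors report using. This is not the authors’ analysis script; the data frame, column names, and values are fabricated for demonstration only.

```r
# Minimal sketch, not the authors' analysis script.
# Item columns and values are illustrative only.
df <- data.frame(
  know1_pre  = c(2, 3, 2, 1, 2, 3), know2_pre  = c(2, 2, 3, 2, 2, 2),
  know1_post = c(3, 4, 3, 3, 3, 4), know2_post = c(3, 3, 4, 3, 4, 3)
)

# Composite variables: average the items that measure the same construct
pre  <- rowMeans(df[, c("know1_pre",  "know2_pre")])
post <- rowMeans(df[, c("know1_post", "know2_post")])

# Paired-sample t test; differencing pre - post makes gains appear as
# negative t values, matching the sign convention in Tables 3 and 4
t.test(pre, post, paired = TRUE)

# Cohen d for paired samples: mean difference / SD of the differences
mean(pre - post) / sd(pre - post)
```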

Program evaluation competence of the presurvey sample

To understand participants’ initial experience with program evaluation, we used descriptive analysis to examine the range, mean, and frequency of the variables (knowledge, attitudes, and self-efficacy) and the demographic characteristics (gender, age, education) of participants who completed the presurvey. We used 1-way analysis of variance (ANOVA) tests with post hoc comparison using the Tukey test to identify how participants’ previous program evaluation training influenced their knowledge about program evaluation. An example question was, “How do you describe your previous experience with program evaluation?” The research team conducted additional analysis to understand how the program evaluation training that participants had previously received influenced their knowledge of program evaluation at the time of the presurvey. A sketch of this comparison appears below.
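The following sketch shows how such a comparison can be run in R. It is illustrative only: the scores are simulated, the group sizes are hypothetical, and the group means and SDs are set to the values reported later in the Results.

```r
# Minimal sketch, not the authors' script: 1-way ANOVA with Tukey post hoc
# comparisons of presurvey knowledge by prior-coursework response.
# Group sizes are hypothetical; means/SDs echo the reported results.
set.seed(42)
coursework <- factor(rep(c("yes", "no", "do not know"), times = c(70, 40, 28)))
knowledge  <- c(rnorm(70, mean = 2.36, sd = 0.51),
                rnorm(40, mean = 1.72, sd = 0.69),
                rnorm(28, mean = 1.70, sd = 0.48))

fit <- aov(knowledge ~ coursework)  # 1-way analysis of variance
summary(fit)                        # F test across the 3 conditions
TukeyHSD(fit)                       # pairwise differences with adjusted P values
```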

Satisfaction with training course

We used descriptive analysis on the postsurvey to analyze quantitative responses (ie, “How satisfied were you with the program evaluation training module?”), and we used thematic coding to analyze responses to open-ended questions (ie, “What did you like most about the training module? What were your biggest challenges with the training module? Please share any additional feedback about your experience with the training module.”). We developed open-ended questions to collect information about participants’ experience with the training course. We reviewed qualitative responses using Microsoft Excel. We developed a codebook based on survey questions and added codes as they emerged from the responses. The coding process was iterative; 2 coders (B.H., R.B.) reviewed and coded all responses to identify themes.

Sample

Two researchers asked 25 schools and programs of public health and 32 regional public health associations to distribute recruitment emails and fliers to student members; 5 of 25 (20.0%) MPH programs and 5 of 32 (15.6%) regional public health associations confirmed email distribution. Of 276 participants who accessed the registration form, 52 (18.8%) did not complete the form, 4 (1.4%) were excluded because they did not meet the eligibility criteria of being a current or recently graduated MPH student from an accredited public health program in the United States, and 207 (75.0%) completed the entire registration form and met the eligibility requirement. Given the limited budget for incentives, we sent the presurvey to students gradually until the capacity of 130 spaces was reached. Of 207 eligible registrants, 153 (73.9%) were invited to take the presurvey and 53 (25.6%) were put on a wait list. Of 153 registrants who were invited to take the presurvey, 138 (90.2%) completed the presurvey, 9 (6.5%) of whom received communications explaining that they would be added to the wait list because of limited space. Of the 130 participants added to Teachable to access the training course for 2 weeks, 83 (63.8%) completed all 5 sections of the course within the time frame. A postsurvey was sent via Qualtrics XM Platform (SAP America Inc) to these 83 participants, of whom 80 (96.4%) completed the final survey, for a final sample size of 80.

Results

Participant Characteristics

Of 138 participants in the presurvey, 107 (77.5%) were female, 107 (77.5%) were aged 18-29 years, 120 (87.0%) were current students, and 18 (13.0%) were students who had recently graduated (Table 2). Of 80 participants in the postsurvey, 65 (81.3%) were female, 62 (77.5%) were aged 18-29 years, and 70 (87.5%) were current students. Respondents came from 20 universities.

Table 2.

Characteristics of master of public health students in the United States who registered for and completed a presurvey and postsurvey for an online public health evaluation course, July 2021 a

Characteristic Registration Presurvey Postsurvey b
Total 224 (100.0) 138 (61.6) 80 (35.7)
Sex/gender
 Female 185 (82.6) 107 (77.5) 65 (81.3)
 Male 36 (16.1) 28 (20.3) 15 (18.8)
 Other 3 (1.3) 3 (2.2) 0
Age, y
 18-29 170 (75.9) 107 (77.5) 62 (77.5)
 30-39 44 (19.6) 26 (18.8) 15 (18.8)
 40-49 8 (3.6) 5 (3.6) 3 (3.8)
 50-59 2 (0.9) 0 0
Student enrollment status
 Current 192 (85.7) 120 (87.0) 70 (87.5)
 Recently graduated 32 (14.3) 18 (13.0) 10 (12.5)
School
 Benedictine University 20 (8.9) 10 (7.2) 7 (8.8)
 California State University, Los Angeles 1 (0.4) 1 (0.7) 0
 Claremont Graduate University 10 (4.5) 9 (6.5) 6 (7.5)
 Des Moines University 11 (4.9) 7 (5.1) 3 (3.8)
 Drexel University Dornsife School of Public Health 1 (0.4) 1 (0.7) 0
 Grand Valley State University 10 (4.5) 6 (4.3) 3 (3.8)
 Johns Hopkins Bloomberg School of Public Health 1 (0.4) 0 0
 Liberty University 1 (0.4) 1 (0.7) 1 (1.3)
 Missouri State University 6 (2.7) 6 (4.3) 4 (5.0)
 University of Missouri 19 (8.5) 11 (8.0) 5 (6.3)
 San Francisco State University 2 (0.9) 1 (0.7) 0
 Saint Louis University 7 (3.1) 1 (0.7) 1 (1.3)
 Southern Illinois University 7 (3.1) 5 (3.6) 3 (3.8)
 Northern Illinois University 6 (2.7) 4 (2.9) 4 (5.0)
 University of Illinois 18 (8.0) 12 (8.7) 7 (8.8)
 University of California, Berkeley 89 (39.7) 54 (39.1) 30 (37.5)
 Thomas Jefferson University 3 (1.3) 1 (0.7) 0
 University of Indianapolis 2 (0.9) 1 (0.7) 0
 University of Southern California 8 (3.6) 6 (4.3) 5 (6.3)
 Western Michigan University 2 (0.9) 1 (0.7) 1 (1.3)
a All values are number (percentage) unless otherwise indicated; percentages may not total 100 because of rounding.
b Completed both the presurvey and the postsurvey.

Learning Objectives

Results of paired-sample t tests showed a positive change in knowledge, attitudes, and self-efficacy from presurvey to postsurvey (Table 3). After taking the online training course, participants reported a significant increase in scores for variables measuring SLO 1 (t79 = −7.47, P < .001, d = –0.84), SLO 2 (t79 = −10.41, P < .001, d = −1.16), SLO 3 (t79 = −10.17, P < .001, d = −1.23), and SLO 4 (t79 = −8.53, P < .001, d = −0.95) from presurvey to postsurvey (Table 4).

Table 3.

Composite variables created from items on a presurvey and postsurvey for an online public health program evaluation course piloted among master of public health students in the United States, 2021

Composite variable Cronbach α Description Item
Presurvey knowledge 0.801 Knowledge on pretest • Please describe your knowledge of program evaluation. a
• How knowledgeable are you about culturally responsive program evaluation? a
• How much experience do you have with program evaluation? b
Postsurvey knowledge 0.804 Knowledge on posttest • After completing the evaluation training, how knowledgeable are you about program evaluation? a
• After completing the evaluation training, how knowledgeable are you about culturally responsive program evaluation? a
Presurvey attitudes 0.794 Attitude on pretest • Please indicate the extent to which you agree with the following statement below: I believe that program evaluation is valuable to my practice as a public health professional. c
• I can envision myself using program evaluation in my future work as a public health professional. c
Postsurvey attitudes 0.606 Attitude on posttest • Please indicate the extent to which you agree with the following statement below: I believe that program evaluation is valuable to my practice as a public health professional. c
• I can envision myself using program evaluation in my future work as a public health professional. c
Presurvey efficacy 0.845 Self-efficacy on pretest • I am confident that I can conduct program evaluation in the future as a public health professional. c
• I am confident that I can conduct culturally responsive program evaluation in the future as a public health professional. c
Postsurvey efficacy 0.751 Self-efficacy on posttest • I am confident that I can conduct program evaluation in the future as a public health professional. c
• I am confident that I can conduct culturally responsive program evaluation in the future as a public health professional. c
a On a Likert scale, where 4 = extremely knowledgeable, 3 = moderately knowledgeable, 2 = a little knowledgeable, 1 = not knowledgeable.
b On a Likert scale, where 4 = a lot of experience, 3 = some experience, 2 = a little experience, 1 = no experience.
c On a Likert scale, where 4 = strongly agree, 3 = agree, 2 = disagree, 1 = strongly disagree.

Table 4.

Comparison of knowledge, attitude, and self-efficacy scores and change in confidence in student learning objectives from presurvey to postsurvey for an online public health evaluation course piloted among master of public health students in the United States, 2021

Variable Survey Mean (SD) t test a
Knowledge b Presurvey 2.13 (0.66) –16.57
Postsurvey 3.24 (0.54)
Attitude c Presurvey 3.61 (0.51) –5.55
Postsurvey 3.84 (0.30)
Self-efficacy d Presurvey 2.92 (0.71) –6.58
Postsurvey 3.44 (0.52)
Student learning objectives
 1. Describe importance of program evaluation e Presurvey 3.05 (0.72) –7.47
Postsurvey 3.64 (0.53)
 2. Identify resources f Presurvey 2.57 (0.77) –10.41
Postsurvey 3.41 (0.57)
 3. Distinguish research from evaluation g Presurvey 2.69 (0.72) –10.17
Postsurvey 3.50 (0.53)
 4. Explain how evaluation promotes health equity h Presurvey 2.96 (0.75) –8.53
Postsurvey 3.62 (0.49)
a All values significant at P < .001.
b Please describe your knowledge of program evaluation. Responses were on a Likert scale, where 4 = extremely knowledgeable, 3 = moderately knowledgeable, 2 = a little knowledgeable, 1 = not knowledgeable.
c I believe that program evaluation is valuable to my practice as a public health professional. Responses were on a Likert scale, where 4 = strongly agree, 3 = agree, 2 = disagree, 1 = strongly disagree.
d I am confident that I can conduct program evaluation in the future as a public health professional. Responses were on a Likert scale, where 4 = strongly agree, 3 = agree, 2 = disagree, 1 = strongly disagree.
e I am confident I can describe the importance of program evaluation. Responses were on a Likert scale, where 4 = strongly agree, 3 = agree, 2 = disagree, 1 = strongly disagree.
f I am confident I can identify resources that can assist with program evaluation. Responses were on a Likert scale, where 4 = strongly agree, 3 = agree, 2 = disagree, 1 = strongly disagree.
g I am confident I can distinguish research from program evaluation. Responses were on a Likert scale, where 4 = strongly agree, 3 = agree, 2 = disagree, 1 = strongly disagree.
h I am confident I can explain how evaluation promotes health equity. Responses were on a Likert scale, where 4 = strongly agree, 3 = agree, 2 = disagree, 1 = strongly disagree.

After taking the training course, participants reported significantly increased scores in knowledge (t79 = −16.57, P < .001, d = −1.85), attitudes (t79 = −5.55, P < .001, d = −0.65), and self-efficacy (t79 = −6.58, P < .001, d = −0.73) from presurvey to postsurvey (Table 4). Paired-sample t tests also showed a significant increase in participants’ understanding of all 4 SLOs.

Qualitative analysis of open-ended responses on the postsurvey revealed several trends in participants’ mastery of the content in the training course. Seventy-two participants shared their most important takeaways, and 3 common themes emerged: (1) better understanding of CRE, (2) clear description of how to conduct program evaluation, and (3) deeper appreciation for the importance and purpose of program evaluation in public health.

Many participants reported a lack of familiarity with CRE before taking the training course. One participant explained, “I briefly remembered discussing [cultural responsiveness] in a course I took while completing my MPH, but I thought the [program] evaluation training gave a more in-depth overview. I thought that this is the most important takeaway from the training because being culturally responsive is incredibly important to attaining health equity.”

More than one-third of participants (25 of 72, 34.7%) noted that they appreciated the instruction on how to conduct program evaluation, and many noted that the section on the CDC Evaluation Framework offered a clear overview of how to conduct program evaluation. Other participants noted that the examples used in the video were helpful in terms of how to use various types of evaluations in a public health setting. Lastly, 17 (23.6%) participants reported a greater sense of appreciation and understanding of why program evaluation is important to public health practice. One participant explained, “This [online training] truly helped [me] understand the need for program evaluation and the important work that needs to be done to create programs that . . . reap positive outcomes.” Participants recognized how often these tools could be used in public health practice and reported a deeper understanding of the purpose of program evaluation.

Program Evaluation Competence of the Presurvey Sample

Descriptive analysis of presurvey data showed that most participants (88 of 138, 63.8%) had some exposure to program evaluation in the MPH curriculum through a course that partially covered program evaluation topics (Table 5). Forty-four (31.9%) participants reported taking a course that primarily focused on program evaluation. Most participants with an MPH concentration in general public health (6 of 9) or social and behavioral sciences (16 of 29) reported that they had taken a course focused on program evaluation. Fewer participants with concentrations in epidemiology (8 of 28) or leadership and management (3 of 23) reported taking a course on program evaluation. No participants with concentrations in biostatistics (n = 3), environmental health (n = 4), or global health (n = 3) reported taking a program evaluation course. Despite the coverage of program evaluation within some MPH coursework, 122 (88.4%) participants reported that they were not very familiar with the American Evaluation Association.

Table 5.

Characteristics of master of public health (MPH) students in the United States who participated in an online public health program evaluation course (N = 138), 2021

Variable No. (%)
MPH concentration
 Biostatistics 3 (2.2)
 Environmental health 4 (2.9)
 Epidemiology 28 (20.3)
 Generalist 9 (6.5)
 Health policy/health administration/leadership or management 23 (16.7)
 International/global health 3 (2.2)
 Maternal and child health 4 (2.9)
 Nutrition 6 (4.3)
 Social and behavioral science (eg, health education, promotion, communication) 29 (21.0)
 Other 29 (21.0)
School year enrolled in MPH program
 2017-2018 4 (2.9)
 2018-2019 10 (7.2)
 2019-2020 42 (30.4)
 2020-2021 67 (48.6)
 2021-2022 13 (9.4)
No. of MPH courses completed
 0 19 (13.8)
 1-3 14 (10.1)
 4-6 17 (12.3)
 7-9 24 (17.4)
 10-12 26 (18.8)
 >12 18 (13.0)
 Finished all MPH coursework 20 (14.5)
Experience with program evaluation (PE) a
 PE courses primarily focused 44 (31.9)
 PE courses partially focused 88 (63.8)
 Additional training: formal degree program 10 (7.2)
 Additional training: academic certification in evaluation 2 (1.4)
 Additional training: professional development workshops/webinars 26 (18.8)
 Academic training: conference attendance 13 (9.4)
 Academic training: internship or profession 4 (2.9)
 Academic training: other 10 (7.2)
AEA engagement a
 Never a member 137 (99.3)
 Current member 1 (0.7)
 Familiar with AEA 16 (11.6)
 Use AEA resources 5 (3.6)

Abbreviation: AEA, American Evaluation Association.

a Participants can be included in multiple categories.

Results of 1-way ANOVA tests suggested that participants who reported having coursework on program evaluation felt more knowledgeable about program evaluation than participants who did not have such coursework. Participants’ completion of a course with a partial focus on program evaluation influenced their self-reported knowledge score on the presurvey. We found a significant difference in self-reported knowledge scores across the 3 conditions (yes, no, I do not know): F(2,137) = 20.80; P < .001; partial η2 = 0.232. Given the partial η2 of 0.232, completion of a course that partially focused on program evaluation explains 23.2% of the variance in participants’ knowledge about program evaluation on the presurvey. Post hoc comparison using the Tukey test indicated that the mean knowledge score among participants who had taken a course with a partial focus on program evaluation (group 1) (mean [SD] = 2.36 [0.51]) was significantly higher than among participants who had not taken such a course (group 2) (mean [SD] = 1.72 [0.69]; mean 1 − mean 2 = 0.64; P < .001). We also found a significant difference in knowledge scores between participants who had taken a course with a partial focus on program evaluation (group 1) (mean [SD] = 2.36 [0.51]) and participants who did not know whether they had taken such a course (group 3) (mean [SD] = 1.70 [0.48]; mean 1 − mean 3 = 0.67; P < .001). We found no significant difference in knowledge about program evaluation between participants who had not taken a course with a partial focus on program evaluation (group 2) and participants who did not know whether they had taken such a course (group 3) (mean 2 − mean 3 = 0.02; P = .99).
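For readers unfamiliar with this effect size, the block below gives the conventional definition of partial eta squared for a 1-way ANOVA; the formula is standard and is not stated in the article itself.

```latex
% Standard definition of partial eta squared for a one-way ANOVA
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
% With \eta_p^2 = 0.232, the coursework factor accounts for
% 0.232 \times 100\% = 23.2\% of the variance in presurvey knowledge.
```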

Qualitative Analysis

In the presurvey, when asked about their experience with program evaluation before starting the training course, 42 of 126 (33.3%) respondents reported that they had applied experience at work or in an internship. One participant explained, “[My program evaluation experience] was not based on public health training but rather the needs that needed to be filled in my organization.” Thirty-five (27.8%) participants reported having some experience with program evaluation through course assignments or class projects. Some participants explained that although program evaluation was covered in courses, they had no applied experience with it. One participant wrote, “I have had to learn about/conduct hypothetical program evaluations for some of my undergraduate public and global health coursework, but I have never had to conduct a real program evaluation.”

Twenty-eight (22.2%) respondents reported that they had no previous experience with program evaluation. Qualitative feedback suggested that current program evaluation training was inconsistent but that many participants were expected to conduct program evaluation in their work or internships.

Satisfaction With Training Course

Postsurvey data on 80 participants showed that most were satisfied (n = 28, 35.0%) or very satisfied (n = 46, 57.5%) with the training course. Qualitative analysis of an open-ended question asking about strengths of the training course found that participants enjoyed the graphics, the short video segments, and the use of relevant examples. One participant said, “[The] material was presented in an easily digestible way.” Analysis of an open-ended question about challenges with the training course identified areas for improvement, including providing more introduction to the topic for people with no prior experience with program evaluation and extending the length of the modules to cover the topics in more depth. Participants also reported that the modules provided adequate information in a short time frame.

Discussion

The online training course had promising outcomes that offer guidance on how to integrate program evaluation training into public health curricula. Although the format of our program evaluation training was unique, our findings were consistent with those of in-person program evaluation trainings, which also observed an increase in participants’ knowledge.15 We found a significant increase in participants’ knowledge of program evaluation, positive attitudes about program evaluation, and self-efficacy in using CRE within their public health practice. In addition, we found a significant increase in all 4 SLOs. The training course also helped participants increase their understanding of program evaluation and their confidence in applying CRE in public health practice, suggesting that CRE can be effectively integrated into public health curricula.

Findings from the presurvey emphasized the importance of incorporating training courses on program evaluation into MPH curricula. Participants who reported having taken a course on program evaluation had higher levels of knowledge about program evaluation than students who had not. Participants’ exposure to program evaluation in MPH curricula varied by concentration; it was most common among participants with generalist or social and behavioral science concentrations. In addition, most participants were not aware of the American Evaluation Association, suggesting a divide between the disciplines of public health and program evaluation. Although we did not develop a full curriculum, we developed an online training course that provides a basic introduction to program evaluation for public health professionals with limited knowledge and skills in conducting program evaluation and that could address current gaps in program evaluation training at schools and programs of public health.

Strengths and Limitations

This study had 2 strengths. First, the training course was tailored toward the needs of public health professionals identified through formative qualitative research. Second, analysis of postsurvey data showed that most participants enjoyed the format, the pace of the videos, the animation, and the use of relevant examples.

This study also had several limitations. First, although the course was designed to prioritize the most relevant program evaluation skills and connect participants to trustworthy resources, the training course was only 1 hour long and could not cover all the skills that participants need to be effective program evaluators. Second, there was no long-term follow-up with participants to assess application of the knowledge and skills gained; however, we intend to evaluate the impact of the training on the use of these skills in practice in future implementations. Although results showed improvements in knowledge, attitudes, and self-efficacy, additional research is needed to understand whether participants can apply these skills in practice. Third, the sample size was small; however, it was representative of various demographic characteristics, levels of program evaluation experience, and concentrations. Fourth, participants were recruited through their affiliation with schools and programs of public health and regional public health associations, whose participation was voluntary. Although academic units and associations throughout the United States were invited to participate, participation was higher from institutions on the West Coast than from other regions, and this regional imbalance may have affected the results. Fifth, we used pre- and postsurveys to assess improvements in knowledge, attitudes, and self-efficacy; however, we did not have a control group, so we could not establish a causal relationship between the training course and outcomes.

Conclusion

The interactive online training course on program evaluation offers an effective way to provide additional training for MPH students and emerging public health practitioners. The self-paced online format makes training convenient for MPH students and public health professionals. This format is especially important given the impact of the COVID-19 pandemic on education and public health practice, and research on the best ways to equip public health professionals with the necessary skills for practice after the pandemic is needed. Specifically, the use of content informed by qualitative research and tailored to the needs of public health students appeared to be successful. Incorporating CRE as a topic engaged MPH practitioners and provided skills that could make them more effective program evaluators. Looking to the future, developing online training courses in program evaluation may be only a temporary solution; more program evaluation competencies should be included in public health curricula. In addition, this online, self-paced training course could be an effective approach for offering continuing education to public health professionals currently in the field. This change could benefit from interdisciplinary collaboration between the fields of public health and program evaluation. Continued research on the current training needs of public health professionals can help inform the development of training materials for public health students and practitioners.

Acknowledgments

The authors thank Professors Darleen Peterson, PhD, MPH, MA, and Tiffany Berry, PhD, at Claremont Graduate University for their thoughtful feedback on early drafts of this article.

Footnotes

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The authors received a transdisciplinary dissertation award from Claremont Graduate University to support components of the research that contributed to this project.

ORCID iD: Bree L. Hemingway, PhD, MPH, CHES https://orcid.org/0000-0003-0710-5393

References

1. Carman JG. Nonprofits, funders, and evaluation. Am Rev Public Admin. 2008;39(4):374-390. doi:10.1177/0275074008320190
2. Galport N, Azzam T. Evaluator training needs and competencies. Am J Eval. 2016;38(1):80-100. doi:10.1177/1098214016643183
3. Centers for Disease Control and Prevention. 10 essential public health services. March 18, 2021. Accessed September 17, 2022. https://www.cdc.gov/publichealthgateway/publichealthservices/essentialhealthservices.html
4. Fierro LA, Christie CA. Understanding evaluation training in schools and programs of public health. Am J Eval. 2010;32(3):448-468. doi:10.1177/1098214010393721
5. LaVelle JM, Donaldson SI. University-based evaluation training programs in the United States 1980-2008: an empirical examination. Am J Eval. 2010;31(1):9-23. doi:10.1177/1098214009356022
6. Hemingway BL, Douville S, Fierro LA. Aligning public health training and practice in evaluation: implications and recommendations for educators. Pedagog Health Promot. 2021;8(4):324-331. doi:10.1177/23733799211033621
7. Dye BA, Duran DG, Murray DM, et al. The importance of evaluating health disparities research. Am J Public Health. 2019;109(suppl 1):S34-S40. doi:10.2105/AJPH.2018.304808
8. Askew K, Beverly MG, Jay ML. Aligning collaborative and culturally responsive evaluation approaches. Eval Program Plann. 2012;35(4):552-557. doi:10.1016/j.evalprogplan.2011.12.011
9. Abma TA. Responsive evaluation: its meaning and special contribution to health promotion. Eval Program Plann. 2005;28(3):279-289. doi:10.1016/j.evalprogplan.2005.04.003
10. US Department of Health and Human Services, Office of Disease Prevention and Health Promotion. Health equity in Healthy People 2030. Accessed September 15, 2022. https://health.gov/healthypeople/priority-areas/health-equity-healthy-people-2030
11. Brown H. The Economics of Public Health: Evaluating Public Health Interventions. Palgrave Pivot; 2019.
12. American Evaluation Association. Competencies & standards. Accessed September 16, 2022. https://www.eval.org/About/Competencies-Standards
13. Christie CA, Quiñones P, Fierro L. Informing the discussion on evaluator training. Am J Eval. 2013;35(2):274-290. doi:10.1177/1098214013503697
14. Kulik NL, Moore EW, Centeio EE, et al. Knowledge, attitudes, self-efficacy, and healthy eating behavior among children: results from the Building Healthy Communities trial. Health Educ Behav. 2019;46(4):602-611. doi:10.1177/1090198119826298
15. Adams J, Dickinson P. Evaluation training to build capability in the community and public health workforce. Am J Eval. 2010;31(3):421-433. doi:10.1177/1098214010366586
