Medical Science Educator. 2021 Oct 15;31(6):2189–2197. doi: 10.1007/s40670-021-01428-2

Measuring Post-training Activities Following a Veterinary Teaching Workshop in East Africa

Misty R Bailey 1, India F Lane 1, Marcy J Souza 2
PMCID: PMC8651839  PMID: 34956731

Abstract

A teaching workshop was delivered for faculty members of East African colleges of veterinary medicine to foster teaching development and reflection. The goal was for participants to use knowledge gained to improve teaching skills. The approach was to “train the trainer” so attendees could transfer new knowledge to colleagues at their institutions. Techniques were used to increase the likelihood that participants would apply the training. A culturally responsive survey was developed to assess training transfer 1 year later. Pilot survey results suggest that participants applied what they learned and shared with colleagues largely due to peer and supervisor support.

Supplementary Information

The online version contains supplementary material available at 10.1007/s40670-021-01428-2.

Keywords: Veterinary medical education, Survey development, Training transfer, Faculty development, International collaboration

Introduction

In 2008, the New Partnership for Africa’s Development recognized agriculture as central to increasing growth and decreasing poverty and hunger in Africa [1]. Veterinarians play a crucial role in that effort, as well as in helping control the emergence and spread of zoonotic diseases both in Africa and throughout the world. To meet the challenges of the future, veterinary educators will likely look to new teaching methods and approaches to be able to effectively teach technical and non-technical competencies, such as leadership, interpersonal skills, and the ability to work as part of a global team.

Most veterinarians choose the profession to become practitioners rather than academicians [2]. Therefore, veterinary academicians often learn to teach on the job. Recognizing a need for veterinary medical education programming, the University of Tennessee College of Veterinary Medicine launched its Master Teacher Program (MTP) in 2008. The mission of the MTP has been to provide resources, programs, and leadership to support the highest quality of professionalism and instruction within the various educational missions of the college. The MTP was modeled as a community of practice with the approach of providing ongoing professional development toward mastery, as opposed to recognition or exclusion. Over the years, participants noted that the program was particularly useful in improving their teaching methods, materials, or communication; their ability to assess students; and the learning environment in their classroom [3].

In June 2019, the authors co-delivered core MTP programming during a 4-day teaching workshop at a veterinary college in East Africa. The workshop focused on improving teaching skills for faculty members of East African colleges of veterinary medicine. The goal was to foster teaching development through discovery and innovation in educational practice based on quality evidence, as well as reflection on current practice. Sessions comprised content about learning theory, assessment, exam preparation, instructional outcomes, course design, lecturing, and giving feedback, as detailed in Fig. 1. The authors anticipated that faculty members could use the knowledge gained to improve their teaching skills and thereby improve the learning of veterinary students at East African colleges.

Fig. 1 Master teacher workshop content

The purpose of this monograph is to describe the process the authors used to design and assess the workshop, as well as to briefly describe workshop outcomes, in order to better enable other health profession educators to construct and measure outcomes for similar international workshops.

Train-the-Trainer Workshop Approach

The approach to the workshop was to use the material and expertise accumulated by the MTP to “teach the teacher” and “train the trainer” (TTT). The hope was that those attending the workshop would transfer their knowledge (for a wider reach) to members of their home institutions. Therefore, the techniques shown in Fig. 2 were built into the workshop to increase the likelihood that participants would apply the training. Nonetheless, one of the most widely acknowledged obstacles to the TTT model is a lack of trainer follow-through. In one study, fewer than half of participants later conducted a public health training of their own [4], and another study reported that only 15% of employees in a large grocery organization transferred learning to their jobs [5]. Blume et al. describe training transfer as “the extent to which the learning that results from a training experience transfers to the job and leads to meaningful changes in work performance” [6, p. 1066]. For the purposes of this study, the authors added “training given” and “meaningful institutional change” to this operational definition of the training transfer construct.

Fig. 2 Workshop techniques employed to increase likelihood of learning transfer [6, 13, 28, 30]

The authors planned a robust evaluation of the workshop’s success in order to identify ways to improve and assess whether workshop objectives were met [9, 10]. The evaluation was based on the Kirkpatrick Model of training effectiveness [11], which the U.S. Department of Commerce considers to be the gold standard approach to program evaluation [10]. The Kirkpatrick Model includes four levels:

  1. Reaction: To what degree participants found the workshop satisfactory, engaging, and relevant

  2. Learning: To what degree participants acquired intended knowledge and skills based on the workshop

  3. Behavior: To what degree participants applied what they learned in the workshop

  4. Results: To what degree outcomes were realized as a result of the workshop

This model was used to design an assessment tool to gauge participant reaction and learning during the workshop and to assess participant activities (reflecting behavior and results) 1 year later. Participant activities resulting from the workshop were seen as measures of follow-through behavior [10]. Specifically, how did participants apply what they learned, did they give trainings (and what kinds), and did institutional change occur (and what kind)? Additionally, if little to no activity occurred, why not?
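For readers constructing a similar evaluation, the mapping from Kirkpatrick levels to instruments and measurement times can be written down explicitly. The Python sketch below is our illustrative summary of the plan described in this section; the data structure and names are ours, not part of the study’s materials.

# Illustrative summary (not from the study's materials) of how each
# Kirkpatrick level was covered by an instrument and a measurement time.
EVALUATION_PLAN = {
    1: ("Reaction", "daily post-session evaluation (Table 2)", "during workshop"),
    2: ("Learning", "28-item pre-/post-test", "during workshop"),
    3: ("Behavior", "post-workshop activity survey: learning applied, training given", "1 year later"),
    4: ("Results", "post-workshop activity survey: institutional change", "1 year later"),
}

for level, (name, instrument, when) in EVALUATION_PLAN.items():
    print(f"Level {level} ({name}): {instrument}; measured {when}")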

The challenge the authors faced was that one of the factors shown to inhibit effective evaluation of training is a lack of scientifically robust measurement tools [9]. During a literature search, several existing survey instruments for training effectiveness were identified and closely examined for potential use [12–14]. However, the literature seemed to lack survey instruments for evaluating TTT workshops that also provide participants with novel professional development material. None of the identified instruments met all the needs of this project; therefore, the authors decided not to use any of the identified surveys in their entirety but instead to selectively use response items that fit the purpose of the teaching workshop assessment.

Ultimately, several items from a U.S. Office of Personnel Management survey instrument were revised, and questions were added to adjust for cultural and situational appropriateness [10]. This instrument was chosen because of its availability without purchase, its brevity, and the applicability of its questions to the teaching workshop’s purpose. Below, we describe the creation of the survey and briefly share evaluation results.

Post-Workshop Activity Instrument

Training Transfer Items

The post-workshop activity survey (Supplementary information) was developed to measure levels three and four in Kirkpatrick’s model (behavior and results) [11]. The survey used the following types of items: bipolar Likert ratings, multiple choice, check all that apply, dichotomous, and open-ended [8, 15]. Several relevant and reliable predictors of training transfer were identified to ensure that reasons for learning applied and for non-activity were captured; these predictors are outlined in Table 1. Ultimately, the survey contained four main variables related to the construct of training transfer as a result of the teaching workshop: learning applied, training given, institutional change implemented, and reasons for non-activity. Learning applied refers to the degree to which participants applied what they learned during the workshop. Training given refers to participant transfer of newly learned knowledge to others by leading their own training sessions, and institutional change implemented refers to any changes that have occurred at the departmental, college, or university level as a result of application of the workshop’s content. Lastly, reasons for non-activity refer to possible reasons why participants were unable to apply what they learned in the workshop to their jobs.

Table 1.

Evidence-based predictors of training transfer, grouped by source

Griffin [9]:
  • Participants believe training is relevant to job
  • Time period between training and evaluation

Blume et al. [6]:
  • Social support from supervisors and peers
  • Opportunities (or lack thereof) to use training on the job
  • Organizational constraints
  • Post-training knowledge

Yamnill and McLean [30]:
  • Potential reward
  • Perceived effort-reward probability
  • Belief that efforts will lead to better performance
  • Positive, negative, or no feedback
  • Connections to organization’s strategic goals

Burke and Hutchins [28]:
  • Sense of recognition
  • Organizational commitment

Near the end of the survey, a question was added so participants could report the activities that occurred as a result of the workshop. The addition of this question allowed for determination of how well the training and materials prepared participants to share what they had learned with others in the form of professional development sessions and/or new initiatives. Two parts of this item — the specific workshop content delivered and reasons for delivering it — were borrowed directly from a TTT model survey by Hill et al. [12]. Lastly, two demographic questions were added to facilitate understanding of participant opportunity to apply and transfer any knowledge and skills gained from the workshop: (1) effort allocation in teaching, research, and service activities and (2) job or role changes since the workshop.

The source survey instructions were altered for our study to include additional information, such as specifics about the survey’s purpose, how to return the survey, contact information, and instructions to consult a CV or resume for ease of recalling resulting activity [8, 10, 15]. Because these participants were not likely to use a telephone to contact the authors, both an e-mail address and a WhatsApp contact number were included. The survey was then divided into sections with headings and more specific instructions added for each section to help respondents transition between previous and new sections [7]. Robinson and Leonard recommend this practice to provide clues to respondents on the type of information researchers want to collect [15].

Cultural Responsiveness

Because the survey participants were multinational and multicultural, the “3MC” approach to adaptation was used [16]. The 3MC approach focuses on survey design that optimizes data comparison across countries or cultures [16]. The goal of such an approach is to reduce measurement error associated with survey design [16]. Because the source instrument was intended for use by U.S. government employees who provide training within federal agencies, the authors realized that it might not culturally transfer to participants from East Africa. A previous study conducted in sub-Saharan Africa identified challenges in conceptual equivalence [17]. Because the survey was administered to participants via email (self-report method), immediate clarification about any potential confusion was not possible, so instrument creation needed to be thoughtful.

Although the participants, as veterinary academicians, were highly educated, the authors could not be certain that they were all native speakers of English. Nonetheless, the survey was not translated into other languages because English was the language in which the workshop was conducted and was the language of study for the veterinary programs for all the participants. Therefore, the participants teach in English and most likely learned veterinary medicine using English. English is also one of the official languages of Uganda, Kenya, and Tanzania [18].

Still, given the abundance of regional languages in East Africa, the language and response choices used in the survey were analyzed for readability [15]. The Flesch Reading Ease test was done with readability statistics produced by Microsoft Word 2016. Microsoft recommends a score between 60 and 70; higher scores indicate easier reading [15, 19]. The survey’s reading ease score was 59.7, essentially at the recommended threshold. Additionally, the reported Flesch-Kincaid grade level was 8.1. Robinson and Leonard recommend that the grade level be no higher than ninth grade for U.S. audiences [15]. For our highly educated East African participants, an 8.1 grade-level score suggested the survey would be readable.
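The Flesch formulas themselves are public, so the Word statistics can be approximated outside Word. The Python sketch below uses a rough vowel-group heuristic for syllables, so its output will only approximate Word’s scores; it is a minimal illustration, not the tool used for this survey.

import re

def count_syllables(word):
    """Rough syllable estimate: contiguous vowel groups, minus a
    common silent trailing 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease
    grade = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid grade level
    return ease, grade

ease, grade = flesch_scores(
    "Please think about the workshop. How did you apply what you learned?")
print(f"Reading ease: {ease:.1f}; grade level: {grade:.1f}")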

Additionally, it was unknown whether the scale that was included in the existing survey instrument would translate well to other cultures. Metric equivalence has been previously identified as potentially problematic between different cultural groups [17]. Nonetheless, a literature search revealed that many different studies in sub-Saharan and East Africa have used standard survey methodology and Likert items to evaluate everything from refugee integration to career aspirations in secondary school students to adolescent life perspectives after war [20–22]. Therefore, Likert items were used in the instrument.

Finally, to be culturally responsive to participants, their organizational and political environments were considered [15]. Some of the questions in the source survey asked participants to report on support given by their supervisors. It was possible that the participants’ organizational and political environment would not enable them to be honest in responses to these questions because such questions might be considered taboo [16]. Additionally, participants might fear that their answers would be discovered by their college, university, or government administration. Administrative access to computers and/or files transmitted via the internet is a genuine concern in some countries with governmental censorship. In 2017, Freedom House gave Uganda an internet freedom score of 41, with a score of 0 representing most free and 100 representing least free [23]. In comparison, the USA scored 21. These scores are based on access obstacles, content limits, and user rights violations [23]. Immediately after the teaching workshop, Ethiopia’s government reportedly initiated a nationwide internet blackout that lasted 12 days after rumors of an attempted coup [24]. The authors were unable to communicate with one of the participants via e-mail until the blackout was lifted. Although political influences could not be eliminated from this study, the authors observed little hesitation among participants to offer input regarding supervisors during and after the workshop.

Instrument Pretesting

For survey development and refinement prior to implementation, Fink recommends a minimum of five survey pre-testers; therefore, the survey was initially pretested with six colleagues with expertise in survey research [8]. These six pre-testers provided written and verbal feedback, which was used to revise instructions, stems, response options, and scales where necessary. Then, the authors took the survey as if they were actual participants. Advantages of pretesting include ensuring that questions are clear and relevant, gaining new perspectives, and developing a greater understanding of respondents [15]. This pretesting permitted final modifications to wording, identification of potential problems with question order and the survey process, and estimation of how long participants would need to complete the survey.

Pilot Testing

Survey Sample

Because of the purposive nature of the sample, the sample size was limited to the 12 workshop participants, who came from four East African countries: Uganda, Ethiopia, Kenya, and Tanzania. Participants received the consent form via e-mail and were asked to return the signed consent page (scanned or photographed) via e-mail. The University of Tennessee IRB approved this method of consent. Although eight participants returned signed consent forms, only six completed the survey (50% response rate).

Survey Delivery

The timing for the survey — 1 year post workshop — was supported by previous research on training transfer [2, 12]. The authors wanted to ensure that participants had enough time to apply the training before the survey was sent. The survey was delivered in June 2020 as a Word document attached to an e-mail, self-administered, and returned to the authors via e-mail. Because internet connections in East Africa can be unpredictable, an online survey platform might have made it difficult for participants to complete the survey. After an initial e-mail invitation, one reminder e-mail was sent to participants who had not yet returned the survey.

Reliability and Validity

Because of the small participant sample size for this pilot survey, most statistical measures of survey reliability and validity could not be pursued. Additionally, participants had limited internet access and were located a great distance from the researchers; therefore, test–retest and equivalent-forms measures of reliability were not feasible because both techniques require administering an instrument to the same group twice [25]. However, Huck explains that reliability may be established by assessing a survey’s validity, and a frequently used, non-statistical type of validity is content validity [25]. To establish content validity, a thorough literature review was conducted to identify evidence-based predictors of learning transfer [8]. These predictors are shown in Table 1. Pretesting by the research team and survey research experts also helped to establish the survey’s content validity and thus its reliability [15, 25]. Such expert review helps ensure consistency between the conditions under which the survey was created and the conditions under which it was administered [25].

Participant Reaction and Knowledge Assessment

For consistency, the instrument that had been used to measure participant reaction (Kirkpatrick’s first level) in the MTP was also used to assess participant reaction during the workshop [11]. This instrument (Table 2) was developed at the authors’ institution and included nine questions asking participants to rate their level of agreement on a Likert-type scale ranging from strongly disagree (1) to strongly agree (7). The instrument was provided at the end of each of the 4 days. The questions addressed participants’ level of engagement and the relevance of the sessions, as well as their beliefs about whether the sessions would help them improve their teaching methods, materials, communication skills in teaching, and student learning and assessment. Additional items assessed how much participants agreed that the sessions would help their ability to document good teaching and publish scholarly articles related to teaching, and whether they planned to apply what they learned to their own teaching. Space was also available for open-ended comments.

Table 2.

Participant post-session evaluation

Item M ± SD
n = 11
I found today’s sessions engaging 6.8 ± .45
The content in these sessions was relevant to my own teaching 6.8 ± .49
I believe these sessions will help me to improve my teaching methods and/or materials 6.8 ± .38
I believe these sessions will help me to improve student learning/the learning environment 6.5 ± .78
I believe these sessions will help my communication skills in teaching 6.0 ± 1.15
I believe today’s sessions will help my ability to assess student performance 6.3 ± 1.25
I believe these sessions will help my ability to document good teaching 6.0 ± 1.16
I believe these sessions will help my ability to publish scholarly articles related to teaching 4.9 ± 1.53
I plan to apply what I have learned in today’s sessions to my own teaching 6.9 ± .28

Scale: strongly disagree (1) to strongly agree (7)

The authors created a 28-item pre- and post-test with questions based on the learning outcomes defined for each session of the workshop. Questions were listed in the same order on both tests and were used to assess participant learning (Kirkpatrick’s second level) [11].

Data Analysis

All pilot data were entered manually into a Microsoft Excel 2016 spreadsheet and imported into SPSS version 26. Data were cleaned according to the first four steps of the 12-step method of Morrow and Skolits [26]. Cleaning included creating a data codebook and a data analysis plan and running frequencies to identify potential data entry and coding errors [26].
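As a minimal sketch of that frequency-running step (the study used Excel and SPSS, not Python), the following pandas code tabulates each variable and flags codes outside the codebook’s valid ranges; the file name, column names, and valid codes are hypothetical.

import pandas as pd

# Hypothetical codebook: variable name -> valid codes (illustrative only).
codebook = {
    "learning_applied": {1, 2, 3, 4, 5},   # 5-point Likert codes
    "training_given": {0, 1},              # dichotomous: 0 = no, 1 = yes
}

df = pd.read_excel("post_workshop_survey.xlsx")  # manually entered pilot data

for col, valid in codebook.items():
    # Frequencies, including missing values, to spot entry/coding errors.
    print(f"\n{col}:\n{df[col].value_counts(dropna=False)}")
    bad_rows = df.index[df[col].notna() & ~df[col].isin(valid)]
    if len(bad_rows):
        print(f"Check rows {list(bad_rows)}: values outside codebook range")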

Workshop Outcomes

The overall goal of the teaching workshop was to improve teaching skills at East African veterinary colleges. To extend the reach of the workshop’s content, an ancillary goal was to equip workshop participants with the ability to train others at their home institutions. The authors sought to accomplish these aims by delivery of a TTT workshop. Here, we briefly describe the outcomes of the workshop as support for the functionality of the pilot design to assess workshop success.

Participant Learning

Knowledge gained through the workshop was measured through pre- and post-tests. A paired-samples t test showed a significant improvement in scores for the post-workshop knowledge assessment in comparison to the pre-workshop assessment, t(9) = 6.9, p < .001. Participants improved most on workshop content related to Bloom’s taxonomy, low-stakes exams, active learning, assessment rubrics, and multiple choice examination items.
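For illustration, a paired-samples t test of this form is a one-liner in SciPy. The pre/post scores below are hypothetical placeholders for ten paired respondents (hence df = 9) and do not reproduce the study’s data, which were analyzed in SPSS.

import numpy as np
from scipy import stats

# Hypothetical paired knowledge scores (out of 28 items) for ten respondents.
pre = np.array([12, 15, 14, 10, 16, 13, 11, 14, 12, 15])
post = np.array([20, 22, 19, 18, 24, 21, 17, 22, 19, 23])

t, p = stats.ttest_rel(post, pre)   # paired-samples t test
print(f"t({len(pre) - 1}) = {t:.1f}, p = {p:.4f}")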

Participant Reaction

Participant satisfaction with workshop content was measured at the end of each day using a Likert-type scale from strongly disagree (1) to strongly agree (7). The three statements that participants rated most highly overall were that they plan to apply what they learned to their own teaching (M = 6.91), they found the sessions engaging (M = 6.83), and they found the content relevant to their own teaching (M = 6.76; Table 2). One participant commented, “This week has been an eye opener and a great step towards being a better teacher.”

Post-Workshop Activities

Six participants completed the post-workshop activity survey (50% response rate). Respondents noted that they have been able to successfully apply the knowledge and skills learned in the workshop (Tables 3 and 4) because they received support and/or encouragement from their supervisor (n = 5) and peers (n = 4), and because they have been able to receive ongoing training after the initial workshop (n = 1) and received coaching from a supervisor (n = 1; Table 4). All respondents indicated that they were able to apply what they learned in the workshop (two immediately, three within 1 to 3 months, and one within 4 to 6 months). All respondents strongly agreed that the workshop was a worthwhile investment in their career development. Table 5 shows activities that resulted from the workshop. Four participants later delivered workshop content at their home institutions. All four shared information about learning outcomes and problem-based learning, and three each shared information about fundamentals of assessment, assessment by rubric, lecturing, and teaching and assessing skills. The most common reason for presenting particular information was because it was the best fit for their own participants, given participant knowledge, skills, and interests. Five respondents to the survey had the same job they had during the workshop, and mean teaching, research, and service/outreach efforts were 46%, 43%, and 14%, respectively.

Table 3.

Workshop learning effectiveness and application

Survey item M ± SD
n = 6
What I learned in the workshop has helped me in my job 4.7 ± .52
I have been able to apply what I learned in the workshop 4.2 ± .41
I am already seeing positive results from the workshop 4.2 ± .98

Scale: strongly disagree (1) to strongly agree (5)

Table 4.

Factors influencing workshop participants to apply what they learned

Survey item M ± SD
n = 6
My supervisor and I set expectations for this training before the workshop 2.0 ± .89
My supervisor and I determined how I would apply what I learned after the workshop 2.7 ± 1.03
I received support and encouragement from my supervisor for applying my learning to my job 3.7 ± 1.03
I received support and encouragement from my peers for applying my learning to my job 3.3 ± 1.21
I have the necessary resources (i.e., tools, time, human resources) to apply what I learned 3.3 ± 1.37

Scale: strongly disagree (1) to strongly agree (5)

Table 5.

Participant activities resulting from the workshop

Activity Number of participants
n = 6
Departmental/college initiative to improve veterinary education 6*
University initiative to improve veterinary education 2*
Regional initiative to improve veterinary education 2**
Delivered content learned at the workshop 4
Mentored another veterinary educator 2
Became a recognized leader in veterinary education with their department/college 4
Apply knowledge learned in their own classroom 5
Applied learning to inform policy and/or procedures within the department/college 2
Discussed what they learned informally with colleagues 5

*1 new initiative; **2 new initiatives

One participant noted that when she began teaching veterinary medicine, she had no prior teacher training, but “Through the formal training I got from the workshop, I am now able to design my own style of teaching while applying formally recognized techniques to disseminate the veterinary knowledge I have.”

Discussion

The authors conducted the workshop using a TTT approach, in which workshop participants were expected not only to apply what they learned in their own jobs but also to share what they learned with faculty members at their respective organizations. This approach sustains the knowledge transferred during the workshop because it leverages social capital within a learning community: distributing the knowledge across many people with roots in that community both empowers the participants and promotes self-reliance [4]. The TTT model has been used in industry, government, and private organizations [4] and is especially suitable for use in developing countries, where resources are limited and it is difficult to reach a wide audience [4, 27].

Evidence-based predictors of training transfer, shown in Table 1, were built into the workshop and taken into consideration with the post-workshop activity survey. To ensure social support and organizational commitment [6, 28], we partnered with participants’ supervisors, who recommended workshop participants from their respective institutions. Colleges that sent more than one participant also enabled peer social support [6]. Results from the pilot survey suggest that social support from supervisors and peers influenced participant application of what they learned [6]. To assess opportunities to use training on the job [6], we launched the post-workshop activity survey after 1 year to ensure sufficient time between training and evaluation, as recommended by Griffin [9]. The majority of the respondents in our survey were able to apply what they learned 1 to 6 months after the workshop. Results from the pilot survey support several additional predictors of training transfer, including Griffin’s finding that participants believe the training is relevant to their jobs [9]: 89% of workshop respondents strongly agreed that the training was relevant to their jobs. Additionally, results from the pre/post-tests suggest that post-training knowledge increased among participants, and Blume et al. note such knowledge gain as an indicator that training transfer will occur [6].

A key concept in teaching the teacher is to promote engagement in reflective practice [29]. Therefore, time for reflection and action planning was built into the workshop schedule. Ultimately, the workshop led to the creation of an East African Consortium of Veterinary Educators, organized as an initiative within an existing East Africa veterinary consortium. The new effort plans to focus on disseminating topics covered during the workshop in order to improve teaching, improve student learning, and build the veterinary capacity of participating countries. The creation of this initiative supports Yamnill and McLean’s proposition that training transfer is also affected by participants’ perception of the connection between training and potential reward, as well as the connection to an organization’s strategic goals [30].

In contrast to one of the findings by Blume et al. [6] and Burke and Hutchins [28], organizational constraints did not appear to affect how much workshop content was delivered at respondents’ home institutions or how much they applied what they learned in their own classrooms. This observation is based on respondents’ answers to the question, “I have the necessary resources (i.e., tools, time, human resources) to apply what I learned.” Because this question did not distinguish between resource types, it is possible that respondents had the tools they needed but not the time or human resources.

Lessons Learned

Integrating Kirkpatrick’s model and the TTT approach into the workshop and its evaluation gave the workshop a clear purpose and yielded meaningful data for assessing its success and adjusting future workshops. Workshop success was supported by participant reports of applying knowledge gained to their own teaching. Additionally, supervisor support seemed to be essential to participant application of learning, and we plan to continue to involve supervisors in future workshops. It is important to incorporate cultural responsiveness into any survey’s design, but for this particular survey with international participants, the 3MC approach seems to have been particularly valuable [16]. By designing the survey with culture in mind, we were able to take into consideration the readability of the survey, the appropriateness of the items and scales used, the survey administration method, and the various methods participants might need to contact the investigators.

A lesson learned for the pre- and post-tests was that the same 3MC approach would have been useful when designing the questions. These tests took more time for participants to complete than the authors had anticipated, and this additional time subtracted from the available time for workshop content. For future tests, the authors plan to reexamine the questions for readability and possibly reduce the number of questions.

Limitations

The results of the pilot survey should not be over-interpreted because of the small participant sample size. Statistical reliability and validity studies were not feasible; however, pretesting by the research team and survey research experts helped to establish the survey’s content validity and thus its reliability [15, 25]. Future research should examine the results of the survey with a larger sample.

Summary

The authors designed and implemented an assessment program to evaluate a teaching workshop using the four-level Kirkpatrick model of training effectiveness, which includes participant reaction, learning, behavior, and results [11]. International participant reaction was gauged with workshop content satisfaction surveys, and learning was assessed with knowledge pre- and post-tests. The last two levels — behavior and results — were evaluated with the pilot survey, the results of which were used to help determine the success of the workshop and identify needed changes before extending delivery of the workshop elsewhere. Following TTT practices, the authors gave workshop participants the tools and encouragement to host a similar workshop or individual sessions for fellow faculty members at their home universities. The TTT approach thus magnifies the dissemination of teaching innovations.

Supplementary Information

Below is the link to the electronic supplementary material.

Author Contribution

All the authors contributed to the development of materials, delivery of the workshop, assessment material conception and design, and survey instrument pre-testing. Misty R. Bailey developed and revised the activity survey, collected and analyzed data, and wrote the first draft of the manuscript. Marcy J. Souza secured funding for the project and revised the manuscript. India F. Lane revised the manuscript. All authors read and approved the final manuscript.

Funding

This project was funded by a seed grant from the Smith International Center, Institute of Agriculture, University of Tennessee, Knoxville, TN.

Data Availability

The datasets generated and analyzed during the current study are not publicly available due to Institutional Review Board research restrictions but are available (stripped of identifying information) from the corresponding author on reasonable request.

Declarations

Ethics Approval

The project was approved under the expedited review category by the Institutional Review Board at the University of Tennessee (UTK IRB-20–05775-XP).

Consent to Participate

Consent to participate was collected from survey participants with an informed consent form e-mailed to participants. Participants consented by signing and dating the form, scanning the signed form, and e-mailing it to the investigators.

Consent for Publication

The informed consent form used for consent to participate included consent for publication.

Conflict of Interest

The authors declare no competing interests.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Swan GE, Kriek NPJ. Veterinary education in Africa: Current and future perspectives. Onderstepoort J Vet Res. 2009;76:105–114. doi: 10.4102/ojvr.v76i1.73. [DOI] [PubMed] [Google Scholar]
  • 2.Baldwin TT, Ford JK. Transfer of training: A review and directions for future research. Pers Psychol. 1988;41:63–105. doi: 10.1111/j.1744-6570.1988.tb00632.x. [DOI] [Google Scholar]
  • 3.Lane IF, Sims M, Howell NE, Bailey M. Sustaining a collegewide teaching academy as a community: 10 years of experience with the master teacher program at the University of Tennessee College of Veterinary Medicine. J Vet Med Ed. 2020;47:384–394. doi: 10.3138/jvme.0918-106r1. [DOI] [PubMed] [Google Scholar]
  • 4.Orfaly RA, Frances JC, Campbell P, Whittemore B, Jolly B, Koh H. Train-the-trainer as an educational model in public health preparedness [Supplemental material]. J Public Health Manag Pract. 2005;Nov:S123–S127. [DOI] [PubMed]
  • 5.Velada R, Caetano A, Michel JW, Lyons BD, Kavanah MJ. The effects of training design, individual characteristics and work environment on transfer of training. Int J Train Dev. 2007;11:282–294. doi: 10.1111/j.1468-2419.2007.00286.x. [DOI] [Google Scholar]
  • 6.Blume BD, Ford JK, Baldwin TT, Huang JL. Transfer of training: A meta-analytic review. J Manag. 2010;36:1065–1105. [Google Scholar]
  • 7.Czaja R, Blair J. Designing surveys. Thousand Oaks, CA: Pine Forge Press; 2011. [Google Scholar]
  • 8.Fink A. How to conduct surveys. 6th ed. Thousand Oaks, CA: Sage Publications; 2017. [Google Scholar]
  • 9.Griffin R. A practitioner friendly and scientifically robust training evaluation approach. J Workplace Learn. 2012;24:393–402. doi: 10.1108/13665621211250298. [DOI] [Google Scholar]
  • 10.U. S. Office of Personnel Management. Training evaluation field guide: Demonstrating the value of training at every level. 2011. https://www.opm.gov/policy-data-oversight/training-and-development/reference-materials/training_evaluation.pdf. Accessed 16 Feb 2021.
  • 11.Kirkpatrick DL, Kirkpatrick JD, Kirkpatrick WK. The Kirkpatrick model. Kirkpatrick Partners. https://www.kirkpatrickpartners.com/Our-Philosophy/The-Kirkpatrick-Model. Accessed 16 Feb 2021.
  • 12.Hill I, Palmer A, Klein A, Howell E, Pelletier J. Assessing the train-the-trainer model: An evaluation of the Data & Democracy II project. Urban Inst. 2010. https://www.urban.org/sites/default/files/publication/28971/412174-assessing-the-train-the-trainer-model-an-evaluation-of-the-data-amp-democracy-ii-project.pdf. Accessed 16 Feb 2021.
  • 13.Holton EF, Bates RA, Ruona WEA. Development of a generalized learning transfer system inventory. Human Resourc Dev Q. 2000;11:333–360. doi: 10.1002/1532-1096(200024)11:4<333::AID-HRDQ2>3.0.CO;2-P. [DOI] [Google Scholar]
  • 14.LTSInventory: Learning transfer system. LTSI pricing. LTSInventory. 2019. http://www.ltsinventory.com/#pricing. Accessed 16 Feb 2021.
  • 15.Robinson SB, Leonard KF. Designing quality survey questions. Thousand Oaks, CA: Sage Publications; 2019. [Google Scholar]
  • 16.Harkness J, Bilgen I, Córdova Cazar A, Hu M, Huang L, Lee S, Yan T. Questionnaire design. In: Guidelines for best practice in cross-cultural surveys. 4th ed. Institute for Social Research. 2016. http://www.ccsg.isr.umich.edu/. Accessed 16 Feb 2021.
  • 17.Moswete NN, Darley WK. Tourism survey research in sub-Saharan Africa: Problems and challenges. Curr Issues Method Pract. 2012;15:369–383. [Google Scholar]
  • 18.African Studies Center. East Africa living encyclopedia. University of Pennsylvania. https://www.africa.upenn.edu/NEH/uhome.htm. Accessed 16 Feb 2021.
  • 19.Microsoft. Get your document’s readability and level statistics. Microsoft support. https://support.office.com/en-us/article/get-your-document-s-readability-and-level-statistics-85b4969e-e80a-4777-8dd3-f7fc3c8b3fd2. Accessed 16 Feb 2021.
  • 20.Beversluis D, Schoeller-Diaz D, Anderson M, Anderson N, Slaughter A, Patel RB. Developing and validating the refugee integration scale in Nairobi, Kenya. J Refug Stud. 2016;30:106–132. [Google Scholar]
  • 21.Migunde Q, Agak J, Odiwuor W. Gender differences, career aspirations, and career development barriers of secondary school students in Kisumu Municipality. Gend Behav. 2012;10:4987–4997. [Google Scholar]
  • 22.Saupe LB, Gößmann K, Catani C, Neuner F. Adolescent life perspectives after war: Evaluation and adaption of the future expectation scale in Uganda. Front Psychol. 2019;10:1527. doi: 10.3389/fpsyg.2019.01527. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Freedom House. Freedom on the net 2017: manipulating social media to undermine democracy. Freedom House. 2017. https://freedomhouse.org/article/new-report-freedom-net-2017-manipulating-social-media-undermine-democracy. Accessed 16 Feb 2021.
  • 24.Woodhams S. Ethiopia’s leader promised to protect freedom of expression. But he keeps flicking the internet kill switch. CNN. 2019. https://www.cnn.com/2019/07/15/africa/ethiopia-internet-shutdowns-old-regime/index.html. Accessed 16 Feb 2021.
  • 25.Huck SW. Reading statistics and research. 5th ed. Boston, MA: Allyn and Bacon; 2008. [Google Scholar]
  • 26.Morrow JA, Skolits G. Twelve steps of quantitative data cleaning: strategies for dealing with dirty evaluation data. Washington, DC: Session presented at the meeting of the American Evaluation Association; 2017. [Google Scholar]
  • 27.Rajapakse BN, Neeman T, Dawson AH. The effectiveness of a train the trainer model of resuscitation education for rural peripheral hospital doctors in Sri Lanka. PLoS One. 2013;8(11):e79491. [DOI] [PMC free article] [PubMed]
  • 28.Burke LA, Hutchins HM. Training transfer: An integrative literature review. Hum Resour Dev Rev. 2007;6:263–296. doi: 10.1177/1534484307303035. [DOI] [Google Scholar]
  • 29.Silva-Fletcher A. Teaching the teacher. In: Hodgson JL, Pelzer JM, editors. Veterinary medical education: a practical guide. Hoboken, NJ: Wiley Blackwell; 2017. pp. 572–588. [Google Scholar]
  • 30.Yamnill S, McLean GN. Theories supporting transfer of training. Human Resourc Dev Q. 2001;12:195–208. doi: 10.1002/hrdq.7. [DOI] [Google Scholar]
