Abstract
Background
The Medical Education Partnership Initiative (MEPI) supports medical schools in Africa to increase the capacity and quality of medical education, improve retention of graduates, and promote regionally relevant research. Many MEPI programmes include elements of community-based education (CBE) such as: community placements; clinical rotations in underserved locations, community medicine, or primary health; situational analyses; or student-led research.
Methods
CapacityPlus and the MEPI Coordinating Center conducted a workshop to share good practices for CBE evaluation, identify approaches that can be used for CBE evaluation in the African context, and strengthen a network of CBE collaborators. Expected outcomes of the workshop included draft evaluation plans for each school and plans for continued collaboration among participants. The workshop focused on approaches and resources for evaluation, guiding exploration of programme evaluation including data collection, sampling, analysis, and reporting. Participants developed logic models capturing inputs, activities, outputs, and expected outcomes of their programmes, and used these models to inform development of evaluation plans. This report describes key insights from the workshop, and highlights plans for CBE evaluation among the MEPI institutions.
Results
Each school left the workshop with a draft evaluation plan. Participants agreed to maintain communication and identified concrete areas for collaboration moving forward. Since the workshop’s conclusion, nine schools have agreed on next steps for the evaluation process and will begin implementation of their plans.
Conclusion
This workshop clearly demonstrated the widespread interest in improving CBE evaluation efforts and a need to develop, implement, and disseminate rigorous approaches and tools relevant to the African context.
The Medical Education Partnership Initiative (MEPI) supports medical schools in 12 African countries to increase the capacity and quality of African medical education, improve retention of medical graduates, and promote regionally relevant research through locally led innovative programmes. MEPI is funded by the US President’s Emergency Plan for AIDS Relief (PEPFAR) and the National Institutes of Health.
To address areas of common interest among collaborating schools, MEPI has established technical working groups (TWGs), including one focused on community-based medical education (CBE). CBE is a learning strategy which allows students to acquire clinical, research, communication and other professional competencies in community settings. It often includes elements of primary healthcare, health promotion, and disease prevention. Medical schools across Africa employ CBE in different ways, often seeking to achieve similar educational goals.[1] Despite the importance of evaluation to inform curricular revision and determine the effectiveness of programmes, few MEPI-supported institutions have formally evaluated their CBE programmes.
While CBE programmes at MEPI schools vary in their duration, types of activities, and location of the attachments, they mostly have a strong clinical focus and a bias in favour of underserved populations, and they face a common set of challenges. The four major challenges identified by MEPI schools[1] are: shortage of supervisory staff at CBE sites, with few having educational training; infrastructure challenges, including lack of clinical space in facilities for students, insufficient accommodation of a reasonable standard, inadequate transportation and absence of internet connectivity; increased medical student enrolment that has placed severe strains on limited CBE sites; and curricular issues, such as a lack of clear objectives, innovation, and adaptability, that diminish the importance of CBE programmes in medical schools.
CapacityPlus, the PEPFAR-funded global project focused on strengthening the health workforce, partners with the MEPI Coordinating Center (MEPI-CC) to build the capacity of medical schools to monitor, evaluate and continually improve their CBE programmes. At the request of MEPI schools engaged in CBE, CapacityPlus and the MEPI-CC conducted a 3-day CBE programme evaluation workshop in Kampala, Uganda. The workshop’s objectives were to: share good practices for CBE evaluation relevant to the needs of MEPI institutions; identify approaches and tools that can be used for CBE evaluation in the African context; and strengthen a supportive network of CBE collaborators. The expected outcomes of the workshop were a draft CBE evaluation plan for each participating school and concrete plans for continued collaboration among participants.
This report describes key insights from the workshop, and highlights future plans for CBE evaluation among the collaborating MEPI institutions.
Methods
Participants and facilitators
The 19 workshop attendees included representatives from 11 medical schools and two country-level consortia of medical schools involved in the MEPI initiative in seven countries (Botswana, Nigeria, South Africa, Tanzania, Uganda, Zambia, and Zimbabwe), as well as representatives from three partner organisations: USAID, the African Centre for Global Health and Social Transformation (ACHEST), and IntraHealth International. Participants included administrators and educators who were champions of CBE, with wide experience and understanding of the current situation and future needs for CBE at their institutions.
The workshop was facilitated by an expert in the evaluation of medical education programmes, as well as representatives from CapacityPlus and the MEPI-CC, each with broad experience in CBE and programme evaluation. The facilitators advised participants as they worked through three phases of activities, building toward development and implementation of CBE evaluation plans. These phases are outlined in Fig. 1.
Fig. 1.
Activities leading to development and implementation of CBE evaluation plans.
Pre-workshop activities
Prior to the workshop, participants completed activities to focus their thinking on their programme objectives and specific evaluation needs. These included: completing a CBE programme questionnaire; conducting a preliminary stakeholder mapping exercise; meeting with relevant stakeholders to agree on key issues for evaluation; and locating the learning objectives for their institution’s CBE programme.
A questionnaire adapted from the Collaboration for Health Equity through Education and Research (CHEER) methodology[2] asked participants to describe their CBE programmes comprehensively. CHEER uses a peer-to-peer evaluation approach, described in more detail elsewhere,[2] whereby a group of three to four colleagues from other medical schools with CBE programmes is invited by a collaborating school to assist in assessing local CBE activities, with the focus and the programmes being reviewed determined by the host school. This enables schools being evaluated to learn from a preparatory self-assessment and to understand which CBE practices are more or less effective in a given context.
Additionally, participants developed stakeholder maps, visually depicting groups or key individuals with vested interest in their CBE programme or its evaluation. Participants then met with stakeholders to review, validate, and fill in the gaps of the CBE programme questionnaire and stakeholder map. They discussed with their stakeholders the purpose of evaluating their CBE programme and what type of evaluation would be most appropriate and feasible, given the needs of their programme. This exercise provided participants with a firm understanding of the components and characteristics of their institution’s CBE programme within the context of its parent organisation, including how the programme operates and whom it serves.
These pre-workshop products, along with objectives for each CBE programme, were submitted to the workshop organisers, who reviewed and analysed them to inform development of workshop activities.
Workshop activities
The workshop consisted of participatory lectures, group activities, round-table sessions, role-play and presentations. It was highly interactive, with discussions encouraged by facilitators.
To understand the breadth of experience among the group, participants shared the goals and activities of CBE at each of their institutions. Some participants’ institutions sought to address urban/rural disparities in healthcare and increase retention of graduates in underserved areas, while others aimed to provide students with opportunities to experience community-level clinical services, to focus on interdisciplinary or primary healthcare, or to develop appropriate attitudes toward patients. Activities at these institutions include community placements, clinical rotations in underserved locations, situational analyses, rotations in community medicine and primary healthcare, and student-led community research to identify health challenges.[1] Participants shared experiences of both good practices and lessons learned throughout the workshop.
Early in the workshop, the discussions and activities focused on approaches, components and resources for evaluation. The group discussed theoretical approaches to programme evaluation relevant to CBE programmes and drew on their collective experiences to reflect on the incentives and reasons for evaluating programmes and the key stakeholders who should be involved in evaluation. Barriers and facilitators for planning, implementing and communicating evaluation efforts were also discussed (Table 1). Representatives from the Medical Education for Equitable Services to All Ugandans (MESAU) Consortium in Uganda and the University of Zambia presented the methods they used in recent evaluations of their CBE programmes, as well as the results of these evaluations. These presentations prompted the group to consider different aspects of communicating evaluation outcomes (Table 2). Through these discussions, participants came to appreciate that programme evaluation evolves throughout the design and implementation of activities, and that the perspectives of various stakeholders should be considered early in the process.
Table 1.
Identifying evaluation barriers and facilitators
- Barriers and facilitators in planning a programme evaluation
- Barriers and facilitators in implementing a programme evaluation
- Barriers and facilitators in using programme evaluation outcomes/results
Table 2.
Communicating results of a programme evaluation
- To whom should results be communicated?
- What are the important components of the evaluation to be disseminated?
- When should results be communicated?
- How can the results be disseminated?
The discussion of theoretical approaches to programme evaluation and the in-depth review of evaluations completed by two schools highlighted the complexities inherent in evaluation of educational programmes, particularly CBE programmes. Guided by the facilitators, participants discussed the importance of explicitly considering several issues before commencing a programme evaluation, so as to minimise the impact of these complexities. Participants considered which stakeholders should be involved in evaluation of their work, agreeing that in any programme, key personnel involved with implementation, funding, and oversight should be included. Many participants also stressed the importance of soliciting input from the programme’s potential beneficiaries, both internal (students) and external (community members). They considered the advantages and disadvantages of involving persons completely outside of the programme, perhaps including CBE implementers from other institutions, to lend experience and credence to the evaluation.
Participants carefully considered which portions of a programme could be measured, given the age of the programme and its stage of implementation, the available budget, and data collection capabilities. They concluded that less mature programmes should focus on whether the processes in place were working efficiently, while planning for later evaluations to review outcomes and even the impact of CBE. Participants agreed that high-quality data gathering was lacking in many places; however, this is a skill that can be taught, as is the skill of devising questions and approaches to gather practical information, and data capturers should be closely supervised.
Importantly, participants recognised that no universally applicable tool or approach exists to guide their evaluative work. Each CBE programme is unique in both context and content – that is, every school represented interprets the meaning of CBE differently, and each works within its own geographic and academic context. A compendium of tools used for evaluating CBE programmes was given to participants as a resource (available at: http://www.mepinetwork.org/community-based-education). This compendium resulted from a targeted search of the literature to identify good practices and tools for evaluation of CBE programmes applicable to the African context.[3] Tools that could be useful to MEPI schools were classified according to the Kirkpatrick levels of evaluation[4] and distributed together with the full text of relevant articles on a flash drive.
With an understanding of different approaches to CBE and evaluation methods, participants were ready to begin developing evaluation plans for their institutions. First, participants developed logic models to capture the inputs, activities, outputs, and expected outcomes of their programmes. The logic model process was adapted from the Systems Evaluation Protocol (V2.2).[5] Participants worked in small groups according to the level of maturity and type of CBE programme at their respective institutions to develop basic logic models using supplied templates. This collaborative method invited immediate feedback from peers and facilitators as the models were developed.
Once logic models were completed, participants began drafting full evaluation plans using another template and a similar peer- and facilitator-supported process. Facilitators asked participants to describe the scope and purpose of their evaluation, develop evaluation questions, and define specific measures of CBE programme evaluation. Mini-presentations on challenges and possible strategies to mitigate them guided participants’ exploration of programme evaluation, including quantitative and qualitative data collection methods, sampling, analysis, and reporting (Table 3). Participants used their logic models as foundation documents to inform the development of their evaluation plans. The sessions supporting participants in developing their evaluation plans were interspersed with activities designed to address broader issues associated with programme evaluation. Foci of these activities included a didactic session on crafting learning objectives as critical to making judgments about programme impact;[6] a small-group discussion on the ethics of programme evaluation, including considerations related to insider evaluations;[7,8] a presentation on distinguishing programme evaluation from research; and role-play activities to rehearse oral presentations of evaluation findings.
Table 3.
Data collection and analysis: challenges and strategies to address them

Challenges in data collection:
- Designing an appropriate, concise tool that captures the right information
- Translation of language
- Time needed to design and administer questionnaires
- Quality control during questionnaire design and data collection
- Low response rates
- Withholding of information
- Limited or excessive resources

Strategies to manage these challenges:
- Orient, supervise and train the evaluation team in research methods
- Pre-tested tools
- Adequate resources
- Advance planning
- Quality assurance (e.g. random checks of questionnaires)
- Sensitisation to manage expectations
- Hire a translator
- Transparency and honesty

Challenges in data analysis:
- Analysis is time consuming
- Intellectually challenging
- Data cleaning/preparing data for analysis is difficult
- Incomplete datasets
- Insufficient expertise in analysis and using software
- Poorly designed instruments/data collection tools
- Shortage of people who understand the project

Strategies to manage these challenges:
- Funds for, and training in, statistical software
- Enter data as it comes in; adjust tools if needed
- Use electronic surveys if appropriate
- Double entry for quality assurance
- Analyse in teams
- Hire assistance with transcriptions
- Dedicated research assistants
CBE programmes within the group were at different stages in their life cycle; therefore, evaluation strategies and plans varied. Some institutions plan to evaluate programme processes, asking questions such as: How much time do supervisors spend with students? Are students aware of expectations? Are supervisors aware of their roles and responsibilities? Other participants focused more on evaluating the outputs and outcomes of their programmes, asking questions about student knowledge of community health or the number of students completing programme requirements. More mature programmes considered longer-term outcomes and even impact questions, such as: Do CBE students become confident and competent doctors? How many students take up practice in underserved areas? What are the advantages, if any, for students who participate in CBE compared with their peers who train largely in tertiary centres?
On the final day of the workshop, the group visited two peri-urban health centres in Uganda – one private not-for-profit and one public – where students from Makerere University College of Health Sciences complete rotations. On their return from the site visits, the group discussed how the CBE sites and rotations they visited could be evaluated using approaches introduced during the workshop.
The workshop provided a forum for strengthening MEPI’s international CBE community of practice. Participants had ample time to interact and discuss their programmes, ideas for evaluation, and good practices from their experiences. Participants were invited to join a MEPI CBE Facebook page, and throughout the workshop facilitators and participants posted resources to support their peers. This platform continues to be used by schools to share information and materials, ask questions, and solicit feedback from each other and facilitators.
To conclude the workshop, facilitators solicited participant feedback about the workshop objectives, expected outcomes, methodology, logistics, and next steps. Participants reported that the workshop achieved its objectives, met or exceeded their expectations, and was highly effective in guiding them through drafting their CBE evaluation plans. Multiple participants stated that the link between logic models and evaluation plans was invaluable and that they felt ready to evaluate the CBE programmes at their institutions. One participant said, ‘I have the knowledge and expertise now [for CBE evaluation]. I think I can convince my school to institutionalise evaluation of CBE.’
Results
The workshop achieved its expected outcomes. Each school departed with a draft CBE evaluation plan to be refined and disseminated to stakeholders at its institution. Participants agreed to maintain communication and identified concrete areas for collaboration moving forward. Since the conclusion of the workshop, nine participating schools have agreed on next steps for the evaluation process and will begin implementation of their plans. In addition, participants indicated that they would enhance local ownership of CBE by sharing lessons learned with students, school leaders and other stakeholders. To strengthen CBE evaluation in the region, the group agreed to share evaluation resources, results and experiences with each other and the broader community of practice through the Facebook page and internal listserv, and through publication and presentation of results.
Conclusion
Community-based medical education is broadly used across Africa to produce a workforce that is community-oriented, skilled in community health and motivated to work in underserved areas. However, CBE is implemented in very different ways, making evaluation efforts critically important to assess effectiveness. For new CBE programmes, evaluation allows an assessment of methods, while for more established programmes, evaluation focuses on the effectiveness of the programme, including its outcomes and impact. This workshop clearly demonstrated that there is widespread interest in improving evaluation efforts and a need to develop and disseminate rigorous approaches and tools relevant to the African context. The opportunity to meet in person enabled this community to evolve, allowing for an exchange of ideas and resources. As common tools and evaluation questions arise, cross-institutional and international collaborations are likely to emerge. Moving forward, the CBE Technical Working Group within MEPI will continue to be supported by CapacityPlus and the MEPI-CC to take forward evaluation plans, nurture the community of practice, and leverage resources for the evaluation of CBE programmes.
Contributor Information
Rebecca J Bailey, Team leader for Health Workforce Development for the USAID-funded CapacityPlus Project led by IntraHealth International, Chapel Hill, North Carolina, USA.
Rhona K Baingana, Lecturer with Makerere University College of Health Sciences and the co-ordinator of MESAU Consortium Community-based Education, Research, and Service Evaluation, Kampala, Uganda.
Ian D Couper, Professor and director, Centre for Rural Health in the Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa, and a consultant to MEPI for the CapacityPlus Project led by IntraHealth International.
Christopher B Deery, Medical student at Wake Forest School of Medicine, Winston-Salem, North Carolina, USA.
Debra Nestel, Professor of Simulation Education in Healthcare, School of Rural Health/HealthPEER at Monash University, Melbourne, Australia.
Heather Ross, Technical advisor for the USAID-funded CapacityPlus Project led by IntraHealth International, Washington, DC, USA.
Atiene Solomon Sagay, Professor of obstetrics and gynecology, College of Medical Sciences, University of Jos, Jos, Nigeria.
Zohray M Talib, Associate professor of medicine and of health policy and faculty on the MEPI Coordinating Center at George Washington University, Washington, DC, USA.
References
1. Haile Mariam D, Sagay AS, Arubaku W, et al. Community-based education programs in Africa: Faculty experience within the Medical Education Partnership Initiative (MEPI) Network. Acad Med. 2014;89(8 Suppl):S50–S54. http://dx.doi.org/10.1097/ACM.0000000000000330
2. Reid SJ, Cakwe M. Collaboration for Health Equity through Education and Research (CHEER). The contribution of South African curricula to prepare health professionals for working in rural or under-served areas in South Africa: A peer review evaluation. S Afr Med J. 2011;101(1):34–38. http://dx.doi.org/10.7196/samj.4526
3. Dreyer A, Couper I, Bailey R, Talib Z, Ross H, Sagay AS. Identifying approaches and tools for evaluating community-based medical education programmes in Africa. Afr J Health Professions Educ. 2015;7(1 Suppl 1):134–139. http://dx.doi.org/10.7196/AJHPE.568
4. Kirkpatrick DL. Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler; 1994.
5. Trochim W, Urban J, Hargraves M, et al. The Guide to the Systems Evaluation Protocol (V2.2). Ithaca, NY: Cornell Digital Print Services; 2012.
6. Grant J. Principles of curriculum design. In: Swanwick T, ed. Understanding Medical Education: Evidence, Theory and Practice. Oxford: Wiley-Blackwell; 2010:1–15.
7. Alkin M, ed. Evaluation Roots: A Wider Perspective of Theorists' Views and Influences. 2nd ed. Thousand Oaks, CA: Sage; 2013.
8. Mertens D. Research and Evaluation in Education and Psychology: Integrating Diversity with Quantitative, Qualitative, and Mixed Methods. Thousand Oaks, CA: Sage; 2005.

