Table 5.
Strengths and weaknesses of the prospective evaluation approach and its influence on the implementation of Gavi support and EPI programmes
| Successful example | Unsuccessful example | Opportunity for stronger evaluation design |
| --- | --- | --- |
| Example 1: Feedback loops worked when individual, institutional and contextual factors aligned | | |
| During year 1 of the evaluation in Mozambique, we found two major bottlenecks in the preparation for the introduction of PCV. First, Gavi disbursed the vaccine introduction grant funds late; they arrived in the country only 2 weeks prior to the official launch of the vaccine, resulting in a lack of country readiness (e.g. the National Immunization Program did not have time to update immunization registers or prepare for adequate monitoring). Second, the MoH and partners did not pilot social mobilization messages, resulting in the erroneous invitation of all infants rather than only the target age group (babies aged 4 months). Consequently, available doses were overused and PCV stock-outs ensued. On receiving these results, both Gavi and Mozambique's National Immunization Program made immediate process rectifications for the upcoming HPV vaccine introduction: Gavi disbursed funds 8 months prior to the launch, and the National Immunization Program piloted HPV vaccine social mobilization messages before implementing them widely. | Unfortunately, the findings from Mozambique's PCV introduction in the first year of the FCE (e.g. that new vaccine introductions were sub-optimally planned) were repeated in almost every country for every new vaccine introduction, including in Mozambique itself in the years following the PCV introduction. The FCE evaluators continually recommended that Gavi play a more active role in determining introduction readiness, and specifically that countries and partners reflect on and propose adaptations based on previous new vaccine introduction experiences. These recommendations appeared in reports, policy briefs and presentations, but were likely not paired with adequate advocacy to Gavi or countries to ensure that they were sufficiently adopted. | Future prospective evaluations/IR platforms should invest more heavily in the use of evaluation findings for real-time and ongoing programme adjustments, which requires adequate resources for communications and advocacy, capacity building, and institutionalization of a culture of learning and adaptation. |
| Example 2: Feedback loops were often too slow for larger policies or decisions | | |
| A major focus of the evaluation was Gavi's HPV vaccine 'demonstration project' policy—a policy that required countries to first demonstrate that they could achieve adequate coverage of HPV vaccine in select regions prior to applying for funding for national introduction. Evaluators found that Gavi's policy incentivized countries to invest heavily in achieving high coverage rates in non-representative regions with greater demand and access to services; because of the incentive to demonstrate high coverage, learnings from these projects were of little relevance for national introduction and for more challenging contexts. FCE evaluators communicated these and other findings to Gavi over a period of multiple years, and Gavi eventually reformed their policies and approaches to supporting decision-making, introduction and implementation of HPV vaccine based on FCE findings and other evidence. | It took Gavi multiple years to finalize the changes to their HPV vaccine policies and procedures, and in the meantime little action was taken by Gavi or partners to transfer cross-country lessons and adapt within countries. This contributed to sub-optimal implementation in countries that did decide to introduce the vaccine (e.g. Uganda, Bangladesh) and delayed introductions in Zambia and Mozambique due to perceptions that lessons from the demonstration projects were not generalizable. These bottlenecks could have been avoided with faster and more adaptive engagement by Gavi and other Alliance partners. | Future prospective evaluations should experiment with approaches to supporting the timely adaptive management of programmes. This may mean more direct engagement with key decision-makers, or an agreement that learnings from the evaluation will be discussed and responded to by decision-makers. |
| Example 3: Evaluation teams were not adequately embedded in programme decision-making | | |
| Over time, evaluation teams strengthened their relationships with, and level of embeddedness in, programme implementation. When evaluators were present during decision-making processes, they were able to contribute findings and evidence from the evaluation. This occurred in Bangladesh for the HSS-2 application (the evaluation team was invited to participate in the technical working group), during Mozambique's planning meetings for national HPV vaccine introduction, and frequently in Uganda, where the team was the most embedded with the EPI programme. | In other cases, evaluators were not present for key decisions, which limited the incorporation of evidence from the FCE. In some cases—particularly in the early years of the evaluation—evaluators were present in decision-making meetings but felt unsure of whether and how to inform the decision without biasing it and calling into question their role as independent evaluators. | Future prospective evaluations should encourage evaluators to be embedded in programmes. |