Abstract
Background
Assessing implementation fidelity—the degree to which a program is implemented as intended—is essential to understanding whether poor outcomes are due to implementation problems or to the design of an intervention. Few studies in health research have documented the association between implementation fidelity and effectiveness. The Integrated District Evidence-to-Action (IDEAs) strategy is a multicomponent audit and feedback strategy designed to improve the implementation of maternal and child clinical guidelines in Mozambique. In a previous study, we found mixed results regarding the effectiveness of IDEAs. The objective of the present study is to understand how implementation fidelity may have influenced the effectiveness of the strategy.
Methods
IDEAs was implemented in 154 health facilities across 12 districts in Manica and Sofala provinces in Mozambique between 2016 and 2020. We used the conceptual framework for implementation fidelity to guide descriptive analysis of IDEAs adherence. Regression modeling was used to study patterns of the direction of association between measures of fidelity and effectiveness for ten service delivery outcomes and five service readiness outcomes.
Results
We describe adherence on 15 measures of fidelity, of which 12 had high fidelity. Poor fidelity was found in conducting facility service readiness assessments and completing micro-interventions from action plans. Service delivery measures tended to be positively associated with participation and degree of micro-intervention completion and negatively associated with a higher number of action plans elaborated by participating teams. For the service readiness outcomes, delivery of essential care was positively associated with participation and micro-intervention completion, and staff availability was negatively associated with supervision.
Conclusion
Participation in audit and feedback meetings, the number of action plans elaborated, and the degree of completion of micro-interventions appear to be related to the effectiveness results. IDEAs should be adapted to reduce the number of action plans elaborated and to promote better micro-intervention completion. Additionally, combining audit and feedback with other strategies might enhance effectiveness in service outcomes. This study demonstrates an approach to analyzing the link between the fidelity and effectiveness of a strategy in order to inform better design and recommend context-specific improvements.
Supplementary Information
The online version contains supplementary material available at 10.1186/s43058-025-00840-8.
Keywords: Audit and feedback, Implementation fidelity, Mozambique, Maternal and Child Health, Implementation Science, Effectiveness
Contributions to the literature.
This paper presents an approach to examining associations between implementation fidelity and health outcomes using implementation science frameworks and quantitative methods.
It responds to the need to evaluate and report on implementation processes in order to understand the results of effectiveness studies and improve the design, implementation, and adaptation of health strategies.
It shows that when implementing audit and feedback strategies in low-resource settings, one must account for contextual health system weaknesses, such as the availability of resources to implement planned actions for change.
Background
Reductions in neonatal mortality have been slower than those in maternal and child mortality, with projections indicating that between 2018 and 2030, 27.8 million children will die in their first month of life if countries do not accelerate reduction [1]. Because the risk of death is highest at the time of birth, a rapid response from healthcare providers is required [2].
Universal coverage of essential newborn and maternal health care interventions would reduce neonatal mortality by 71%, benefit women and children beyond the first month, and reduce stillbirths [3]. However, the packages with the most significant impact (clinical care around birth and care of small and sick newborns) are not being effectively implemented. Factors contributing to low coverage of these packages include gaps in the health workforce, financing, and service delivery quality [3].
In 2019–2020, national community surveillance data estimated a high rate of neonatal mortality in Mozambique, with 23 (95% CI:18–28) deaths per 1000 live births [4]. Infections accounted for 62% of deaths, intrapartum-related events, including birth asphyxia and birth trauma, were responsible for 20% (95% CI:14–26) of deaths, and prematurity was responsible for 10% (95% CI:7–13) [4].
Skilled care at birth with evidence-based practices offered in a humane, supportive environment is needed to reduce preventable newborn morbidity and mortality [5]. Satisfactory quality of care to improve the health and positive experiences of women and newborns requires the appropriate use of effective clinical and non-clinical interventions, strengthened health infrastructure, and optimized skills and attitudes of health workers [5].
Quality improvement strategies have been recognized as a key tool to strengthen health systems and improve the quality of care in resource-poor settings [6–8]. Audit and feedback (A&F), defined as the provision of a summary of clinical performance over a specified period of time, is widely used to improve the quality of professional practice [9, 10]. During the A&F process, an individual’s professional practice or performance is measured and compared to professional standards or targets. Most available evidence on A&F comes from randomized controlled trials in high-income settings, where results vary widely, ranging from small to substantial effects on professional behavior and showing no demonstrated impact on patient outcomes [9, 10]. Evidence suggests that feedback is most effective when it is delivered by a supervisor or respected colleague, presented more than once, features specific goals and action plans, focuses on settings with lower baseline performance, and engages participants who are not physicians [10].
Integrated District Evidence to Action (IDEAs) is a multicomponent A&F implementation strategy designed to improve the implementation of maternal and child guidelines in Mozambique. Specifically, IDEAs aims to improve the coverage and quality of a bundle of existing evidence-based interventions targeting major causes of neonatal mortality. Funded by the Doris Duke Charitable Foundation and the US National Institutes of Health, the IDEAs strategy was implemented between October 2016 and December 2020 in 154 primary healthcare facilities across 12 districts in Manica and Sofala provinces of central Mozambique. Maternal and child health (MCH) managers at the facility, district, and province levels were the leading agents in the IDEAs A&F strategy [11].
Implementation fidelity is defined as the degree to which program staff implement programs as intended [12]. Implementation fidelity serves as a potential moderator of the relationship between interventions and their outcomes; that is, it is a factor that may affect the extent to which the intervention actually influences the outcomes [12]. Examining implementation fidelity is critical to the internal and external validity of implementation research. Accurate conclusions about an intervention cannot be drawn without evaluating fidelity, as unknown factors may have influenced the outcome. Reporting on fidelity enables researchers in the field of implementation science to assess the extent to which the success of an intervention is influenced by the strategy used [13]. Such reporting facilitates the selection of optimal implementation strategies, more accurate replication, and, ultimately, a more successful transfer of evidence into practice [13]. However, few studies report data on implementation fidelity [13]. This article seeks to help close that gap.
This article is part of a series of analyses conducted on the IDEAs strategy. First, we examined the implementation process and evaluated implementation outcomes [11]; second, we evaluated the effectiveness of IDEAs on service readiness and service delivery outcomes, guided by the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework, with results published elsewhere [14]. Briefly, the first study showed high reach, adoption, and maintenance of the strategy. Implementation fidelity, as measured by adherence, was generally high; out of 15 fidelity measures, 3 had poor fidelity. In the second study, which analyzed the effectiveness of the IDEAs strategy on ten service delivery outcomes across antenatal, maternity, postpartum, childcare, and reproductive health services, as well as five service readiness outcomes (medicines, infrastructure, equipment, care, and staffing availability), we found mixed results [14]. The current study aimed to evaluate the relationship between fidelity measures and effectiveness outcomes to identify any patterns in these associations that might help explain the mixed results regarding the effectiveness of the IDEAs strategy.
Methods
Strategy description
The IDEAs strategy was designed to improve health service delivery and service readiness by identifying performance gaps based on facility indicators and by enabling MCH managers to monitor, evaluate, prioritize, and implement ‘micro-interventions’ (defined here as solutions selected, implemented, or adapted at the health facility level) to improve compliance with Ministry of Health guidelines targeting primary causes of neonatal mortality. Each intervention district conducted a total of nine A&F meeting cycles, each lasting five days. The cycles were iterative, occurring twice a year between October 2016 and December 2020. The IDEAs strategy design is published elsewhere [11], but in summary, there are three steps:
Step 1: Facility service readiness assessments. Before each bi-annual five-day A&F meeting, the study team applied standardized service readiness assessment tools at three health facilities in each of the 12 districts to assess structural readiness to deliver guideline-based care (such as staffing levels and availability of essential commodities, equipment, and supplies). The assessment tools were adapted from the World Health Organization Service Availability and Readiness Assessment (SARA) [15]. Facility readiness described here is a design component of the IDEAs strategy and differs from the service readiness evaluation discussed in this paper.
Step 2: Audit and feedback meetings. MCH managers from the facility, district, and provincial levels participated in A&F meetings. They used audit data from routine health information systems and service readiness assessments to compare performance relative to goals. MCH managers then provided feedback on performance to their peers in both graphical and tabular formats, allowing visualization of secular trends of service delivery indicators. Each facility and district team presented their performance metrics, followed by a group discussion to interpret results, identify barriers to implementing clinical guidelines, develop action plans highlighting priority problems, select context-specific micro-interventions, and set measurable targets and resources required to implement action plan activities.
Step 3: Targeted facility support (supervision visits and financial support). Twice a year, at each district A&F meeting cycle, three primary healthcare facilities were selected based on their performance on service delivery indicators (one high-performing and two low-performing) to receive up to two supervision visits per A&F meeting cycle. In addition, a modest monthly financial amount of US$1250 was allocated to each district to support action plan implementation in the selected health facilities. During supervision visits, district MCH managers reviewed action plans, identified barriers to guideline implementation, and provided technical assistance to address barriers. District MCH managers were also responsible for monitoring, evaluating, and recording the degree of success (i.e., the percentage of micro-interventions implemented successfully) by the health facilities (Table 1).
Table 1.
Specifications of the IDEAs strategy
| DOMAIN | STRATEGY |
|---|---|
| Name | Integrated District Evidence to Action (IDEAs) |
| Definition | A multicomponent A&F strategy led by MCH managers that includes: (1) facility service readiness assessment, (2) A&F meetings, and (3) targeted facility support (supervision and financial) |
| Actors | MCH managers at the facility, district, and provincial levels |
| Actions | Facility level: Participate in A&F meetings, use data from routine health information system and service readiness assessment to evaluate gaps in facility service delivery performance, present performance to their peers and receive feedback, identify priority issues, and propose local micro-interventions written in action plans, and receive supervision visits |
| District level: Participate in A&F meetings, conduct district-to-facility supervision, influence the allocation of flexible funding to health facilities to support action plan implementation, and evaluate micro-intervention completion | |
| Provincial level: Participate in A&F meetings and support supervision visits | |
| Action targets | MCH nurse at the service delivery level: has knowledge about the existing gaps in service delivery; is under peer pressure to make changes; implements clinical guidelines to improve performance; is motivated by receiving supervision and support for action plan implementation |
| Temporality | The strategy was implemented between October 2016 and December 2020 |
| Dose | Service readiness assessments occurred twice a year, with a total of 9 assessments in a sample of 36 health facilities; A&F cycles occurred twice a year at meetings lasting five days, with a total of 9 cycles in each district; up to two supervision visits during each A&F meeting cycle for selected health facilities |
| Implementation outcomes affected | Reach, adoption, fidelity, maintenance |
| Justification | Theory of planned behavior [16] |
Note: The characteristics of the IDEAs strategy are provided based on Proctor et al.'s recommendations for specifying strategies [17]
Study setting
IDEAs strategy was implemented in Manica and Sofala provinces in central Mozambique, with a combined population of more than 4.5 million [18, 19]. The A&F strategy was implemented in 154 primary healthcare facilities across 12 districts in central Mozambique, representing more than 70% of the population in both provinces (Table 2).
Table 2.
IDEAs study setting
| Province | Neonatal mortality rate (2019–2020)a | IDEAs districts | Population (2021)b | District coveragec | Number of IDEAs facilities | Health facility coveraged |
|---|---|---|---|---|---|---|
| Manica | 31 per 1,000 live births | Chimoio | 456,775 | 10% | 6 | 2% |
| Manica | 257,191 | 5% | 17 | 6% | ||
| Mossurize | 230,705 | 5% | 11 | 4% | ||
| Gondola | 224,603 | 5% | 10 | 4% | ||
| Barue | 217,254 | 5% | 13 | 5% | ||
| Sussundenga | 195,258 | 4% | 13 | 5% | ||
| Vanduzi | 130,893 | 3% | 9 | 3% | ||
| Sofala | 33 per 1,000 live births | Beira | 696,515 | 15% | 15 | 5% |
| Nhamatanda | 318,380 | 7% | 17 | 6% | ||
| Dondo | 223,484 | 5% | 15 | 5% | ||
| Buzi | 201,710 | 4% | 15 | 5% | ||
| Gorongosa | 202,043 | 4% | 13 | 5% |
(a) Countrywide surveillance data (provincial estimates). District neonatal mortality rate is not available. The national neonatal mortality rate is 23 per 1000 live births [4];
(b) Provincial Statistical Data 2021;
(c) Percentage of the population covered by IDEAs health facilities (the combined total population in Manica and Sofala provinces is 4,702,874);
(d) Percentage of IDEAs health facilities (the combined total number of health facilities in Manica and Sofala provinces is 277)
Study design
This is a quantitative study. Descriptive statistics were used to report the programmatic level of completion of measures of fidelity, and regression analysis was conducted to examine the relationship between measures of fidelity and measures of effectiveness for service delivery and service readiness outcomes.
The IDEAs strategy evaluation is guided by the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework. All dimensions of the RE-AIM were used in planning, evaluating, and reporting IDEAs. The reach, adoption, implementation fidelity, and maintenance are reported elsewhere [11], as well as the strategy’s effectiveness in improving service and readiness outcomes [14].
Data analysis
Measures of fidelity
By measures of fidelity, we refer to adherence elements, according to the conceptual framework proposed by Carroll et al. [12]. This includes the content, coverage, frequency, and duration of the activities proposed in the strategy's design (Table 3).
Table 3.
Measures of fidelity from IDEAs strategy
| Adherence | Measures of fidelity |
|---|---|
| Content | Number of A&F meeting cycles |
| Number of action plans elaborated | |
| Content of action plans (identified problems and proposed micro-interventions) | |
| Number of service readiness assessments | |
| Number of supervision visits | |
| Proportion of flexible funding allocated to the districts | |
| Coverage | Number of facilities and districts participating in A&F |
| Number of participants (facility, district, province) at A&F meetings | |
| Number of facilities selected based on performance | |
| Proportion of micro-interventions implemented successfully | |
| Frequency | Frequency of service readiness assessments |
| Frequency of A&F meeting cycles | |
| Frequency of supervision visits | |
| Duration | Duration of A&F meeting cycles |
| Timing of supervision visits |
Note: Adherence is a component of fidelity as described in the conceptual framework for implementation fidelity proposed by Carroll et al. Other components include potential moderators and the identification of essential elements, which are not assessed in this manuscript
Data on these measures were collected from program monitoring documents.
From the measures listed in Table 3, 10 aggregated measures of fidelity per facility were initially selected for quantitative analysis: number of A&F meeting cycles, number of participants (province level), number of participants (district level), number of participants (facility level), number of action plans elaborated, number of distinct problems identified in action plans, number of supervision visits, mean monthly time to the first supervision visit, percentage of micro-interventions completed, and number of times the facility was selected based on performance. We excluded facility service readiness assessments from the quantitative analysis of service indicators because these assessments were conducted in a subsample of 36 facilities, while service delivery is examined in 154 facilities. Flexible funding was not included because there is no variation in this variable.
Measures of effectiveness
The impact of the IDEAs strategy on service delivery and readiness indicators has already been evaluated and reported elsewhere [14]. We briefly outline below how these measures were selected and provide a summary of the results to contextualize this study.
Service delivery indicators
The selection of service delivery indicators was guided by multiple sources, including the existing evidence of interventions to reduce neonatal mortality [20–22], the list of priority issues identified during the A&F meetings (we attempted to match service indicators with priority issues), and the availability of data in routine health information systems.
Monthly counts of 10 indicators from antenatal care, maternity, postpartum, child consultations, and family planning were extracted from routine health information systems for each health facility. These indicators are counts of (1) pregnant women with a fourth dose of intermittent treatment for malaria; (2) pregnant women with a second-to-fifth dose of tetanus vaccine; (3) pregnant women with at least four antenatal care visits; (4) deliveries with active management of the third stage of labor; (5) first postpartum consultations; (6) fully vaccinated children; (7) first at-risk child appointments; (8) first polymerase chain reaction (PCR) tests for HIV-exposed children; (9) new users of contraceptive methods; and (10) women starting long-lasting contraceptives (e.g., intrauterine devices, implants, and injectables).
Results from the effectiveness analysis are published elsewhere [14]. In summary, we observed significant positive associations for two outcomes: first at-risk child appointments (IRR = 1.06 [95% CI, 1.04, 1.07]) and first PCR testing for HIV-exposed children (IRR = 1.02 [95% CI, 1.01, 1.03]). Non-significant associations were found for six service delivery outcomes: the fourth dose of intermittent treatment for malaria (IRR = 1.00 [95% CI, 0.99, 1.01]); the second-to-fifth dose of tetanus vaccine (IRR = 1.00 [95% CI, 0.99, 1.01]); the fourth antenatal care visit (IRR = 0.99 [95% CI, 0.98, 1.01]); active management of the third stage of labor (IRR = 1.00 [95% CI, 0.99, 1.00]); the first postpartum consultation (IRR = 0.99 [95% CI, 0.98, 1.00]); and fully vaccinated children (IRR = 1.00 [95% CI, 0.99, 1.01]). Negative associations were found for two outcomes: new users of contraceptives (IRR = 0.95 [95% CI, 0.94, 0.96]) and women starting long-lasting contraceptives (IRR = 0.94 [95% CI, 0.93, 0.95]).
Service readiness indicators
We created composite measures for the availability of essential medicine, infrastructure, equipment, and essential care, and used count data for the number of staff available (MCH nurses). In summary, 15 items were included to create composite scores for the availability of essential medicine based on the World Health Organization’s list of priority life-saving medicines for women and children [23]. For the infrastructure domain, six groups of items describing the availability of communications, power supply, basic amenities, processing of equipment for reuse, and infection control were included. For equipment and essential care, 28 and 16 items were included to create composite scores, respectively, according to the SARA list of items for these categories [15]. Composite scores for each health facility were estimated by dividing the number of available items by the total number of possible items. For instance, the total possible number of items for medicine is 15. If a health facility has eight items available, the medicine score for that facility is 8/15 = 0.53.
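The composite-score calculation above can be expressed in a few lines. The sketch below is our own illustration (the study's analyses were conducted in R, and the function name is hypothetical):

```python
def readiness_score(available_items, total_items):
    """Composite readiness score: fraction of required items a facility has."""
    if total_items <= 0:
        raise ValueError("total_items must be positive")
    return available_items / total_items

# Worked example from the text: 8 of 15 essential medicines available.
print(round(readiness_score(8, 15), 2))  # 0.53
```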
The analysis of the impact of the strategy on readiness indicators found a positive association for the infrastructure domain (OR = 5.84 [95% CI, 1.32, 25.88]), a negative association for the availability of essential care (OR = 0.13 [95% CI, 0.03, 0.54]), and null associations for the availability of essential medicines (OR = 2.61 [95% CI, 0.66, 10.29]), essential equipment (OR = 1.20 [95% CI, 0.31, 4.64]), and MCH staffing (IRR = 1.09 [95% CI, 0.82, 1.45]).
Analysis
For this study, we performed regression analysis to examine the relationship between measures of fidelity and effectiveness outcomes described earlier, aiming to identify any patterns in the direction of these associations that could help explain the mixed results of effectiveness. “Patterns in the direction of association” here refer to the general tendency of the relationship between fidelity and effectiveness. In other words, we want to determine whether a specific measure of fidelity tends to be negative, positive, or unrelated to effectiveness measures. Although we present the regression coefficients, our main focus is not on their size but on analyzing the overall tendency of the relationships between fidelity and effectiveness measures.
Regression analysis
We conducted regression analysis on two groups of effectiveness indicators—service delivery and service readiness.
We analyzed the Pearson correlation among selected fidelity measures to ensure an adequate set of indicators for our model. We decided to combine the number of participants at the three levels (province, district, facility) into a single variable, “total participants,” and excluded the variable measuring how many times the facility was selected, as it was highly correlated with the supervision variable. Correlation between variables included in the analysis can be found in additional file 1A.
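The collinearity screening described above can be sketched as follows. This is our own illustration on synthetic data (the study's analyses were in R; the variable names and distributions are assumptions), showing why one of a highly correlated pair of fidelity measures would be dropped:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 154  # number of facilities in the IDEAs analysis

# Synthetic fidelity measures; 'times_selected' is built to track the
# supervision count, mimicking the collinearity reported in the text.
supervisions = rng.poisson(3.4, n).astype(float)
times_selected = supervisions + rng.normal(0.0, 0.3, n)
participants = rng.normal(274.5, 90.0, n)

r = np.corrcoef(np.vstack([supervisions, times_selected, participants]))
print(np.round(r[0, 1], 2))  # near 1: drop one variable of the pair
```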
To analyze the associations between elements of fidelity and effectiveness outcomes, we proceeded in two steps: 1) for each facility, we estimated the effect of the IDEAs implementation period on each outcome (51 monthly observations for service indicators and four annual measurements for readiness indicators); and 2) we used the estimated effect of the indicator to examine the association with fidelity measures. In this second step, the monthly effects of the service delivery indicators were multiplied by 12 to obtain annual effects; the effects of readiness indicators were already annual and did not require this conversion. We describe these steps as follows:
Step 1: For service delivery monthly count data, we applied a negative binomial regression model to estimate the annual increase effect (AIE) as expressed by the equation:
$$\log(\mu_{it}) = \beta_{0i} + \beta_{1i}\,t + \log(N_i) \quad (1)$$
where:
i = health facility,
t = month (from 1 to 51),
$\mu_{it}$ = expected count of the service delivery indicator at facility i and month t,
$\beta_{0i}$ = facility i intercept (average log count at time 0),
$\beta_{1i}$ = facility i monthly slope (monthly effect),
$\log(N_i)$ = offset to account for population size (converts counts to rates).
We then take $\mathrm{AIE}_i = \exp(12\,\beta_{1i})$ to obtain the annual increase effect.
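The conversion from a monthly slope to the annual increase effect is a one-line transformation. A minimal sketch (our own illustration; the study's analyses were in R, and the function name is ours):

```python
import math

def annual_increase_effect(monthly_slope):
    """Convert a monthly slope on the log scale (beta_1i in Eq. 1)
    into an annual multiplicative effect: AIE = exp(12 * beta_1i)."""
    return math.exp(12.0 * monthly_slope)

# A facility whose indicator grows 0.5% per month on the log scale
# sees roughly a 6.2% increase per year.
print(round(annual_increase_effect(0.005), 3))  # 1.062
```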
Step 2: Then, the annual effect was used as an outcome in a generalized linear model (GLM) to estimate the association with fidelity measures as predictors (number of action plans, number of problems identified, total number of participants, number of supervisions, and number of micro-interventions completed successfully). We also included location (rural or urban) and province in the model, as per the equation:
$$\log\big(E[\mathrm{AIE}_i]\big) = \alpha + \mathbf{X}_i^{\top}\boldsymbol{\gamma} \quad (2)$$
where:
i = health facility,
$\alpha$ is the intercept,
$\mathbf{X}_i$ is a vector of predictors,
$\boldsymbol{\gamma}$ is a vector of coefficients (each exponentiated coefficient gives the relative change in the expected AIE).
For service readiness annual proportion data (scores of availability for essential medicine, equipment, infrastructure, staff, and essential care) and for time, measured as a count of four service readiness evaluations, Eq. 1 was revised to estimate the AIE in the odds of the indicator using a quasibinomial regression model, as expressed in the equation:
$$\eta_{it} = \mathrm{logit}(\pi_{it}) = \beta_{0i} + \beta_{1i}\,t \quad (3)$$
where:
$\eta_{it}$ is the log-odds of the outcome in facility i at time t,
$\pi_{it}$ is the mean (probability of success),
$\beta_{0i}$ = facility i intercept (average log-odds at time 0),
$\beta_{1i}$ = facility i annual slope (annual effect).
The annual increase effect is calculated as $\mathrm{AIE}_i = \exp(\beta_{1i})$. Afterwards, the AIE was used as an outcome with the same predictors as described in Eq. 2.
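On the log-odds scale of Eq. 3, the AIE is the yearly multiplicative change in the odds of the readiness score. The sketch below (our own illustration with a hypothetical slope; the study's analyses were in R) shows how an annual slope translates into a change in the underlying probability:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

beta1 = 0.2            # hypothetical annual slope on the log-odds scale (Eq. 3)
aie = math.exp(beta1)  # annual multiplicative change in the odds of the score

# Effect on a facility starting at a readiness score of 0.53:
p0 = 0.53
p1 = expit(logit(p0) + beta1)
print(round(aie, 3), round(p1, 3))
```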
Sensitivity analysis
We performed two types of sensitivity analysis: 1) for step 2, we also applied a standard ordinary least squares (OLS) regression with robust standard errors for both sets of indicators to verify whether it would produce results different from the GLM model; 2) we conducted a multivariate (multi-outcome) GLM regression, in which the service outcomes are stacked into a single response vector and dummy indicators for outcome type are included as predictors alongside the fidelity measures. The regressions were weighted by the inverse variance of the individual slope, and robust standard errors were used to construct the confidence intervals. All analyses were conducted in R software, version 4.4.3.
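The inverse-variance weighting used in these regressions can be sketched via the weighted normal equations. This is our own minimal illustration on synthetic data (the study's analyses were in R; the function name, coefficients, and distributions are assumptions):

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares via the weighted normal equations:
    beta = (X'WX)^{-1} X'Wy, with W = diag(w)."""
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

rng = np.random.default_rng(1)
n = 154
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one fidelity measure
slope_var = rng.uniform(0.5, 2.0, n)   # variance of each facility's estimated slope
y = X @ np.array([0.1, -0.06]) + rng.normal(0.0, np.sqrt(slope_var))
beta_hat = wls(X, y, 1.0 / slope_var)  # inverse-variance weights
print(np.round(beta_hat, 3))
```

Weighting by the inverse variance gives facilities with more precisely estimated slopes more influence on the fitted association.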
Results
Descriptive results on fidelity measures
In Table 4, we present the extent to which each fidelity measure was achieved programmatically. Overall, we observed good adherence on 12 of the 15 fidelity measures for the IDEAs strategy. Three measures showed low fidelity: two related to conducting facility readiness assessments twice a year among the sample of 36 facilities (52% of planned assessments conducted, at an irregular frequency), and the third related to completing the implementation of micro-interventions from the action plans (17% achievement).
Table 4.
Level of achievement of fidelity measures
| Adherence | Measures of fidelity | Proposed | Extent achieved |
|---|---|---|---|
| Content | Number of A&F meeting cycles | 108 | 107 (99%) |
| Number of action plans elaborated | 1294 | 1257(97%) | |
| Content of action plans (identified problems and proposed micro-interventions) | At least one problem and micro-intervention in each action plan | 100% achieved | |
| Number of facility service readiness assessments | 324 | 168 (52%) conducted | |
| Number of supervision visits | At least one supervision visit for each facility selected based on performance | 124/128 (96%) of selected facilities supervised; average of 3.7 supervision visits for low-performing facilities, 5.2 for facilities switching between higher- and lower-performing, and 2.6 for high-performing facilities | |
| Proportion of flexible funding allocated to the districts | 100% | 100% | |
| Coverage | Number of facilities and districts participating in A&F | 154 facilities/12 districts | 100% |
| Number of participants (facility, district, province) at A&F meetings | At least 1512 participants | Total of 3076 (> 100%): 1964 (64%) facility level, 905 (29%) district level, 217 (7%) province level | |
| Number of facilities selected based on performance | 321 | Total of 309 (96%): 206 low-performing, 103 high-performing | |
| Percentage of micro-interventions implemented successfully | 100% | 17% implemented completely | |
| Frequency | Frequency of service readiness assessments | Twice a year | Irregular |
| Frequency of A&F meeting cycles | Twice a year | Mean interval of 5.9 months | |
| Frequency of supervision visits | Up to two supervision visits per selected facility per A&F cycle (twice a year) | 342 (64%) were double supervision visits (first and follow-up); 36% were single visits | |
| Duration | Duration of A&F meetings | Five days | Achieved 100% |
| Timing for supervision visits | First supervision at least within three months after the A&F meeting. No timing was defined for the follow-up | An average of 2.4 months for the first supervision and 3.7 months for the follow-up |
Note: These are results from the process evaluation of the IDEAs strategy. Data sources include program monitoring instruments and reports
In Table 5, we present descriptive statistics of the fidelity measures included in the regression analysis. The coefficient of variation ranges from a minimum of 17% for the number of action plans elaborated to a maximum of 65% for the number of supervisions conducted.
Table 5.
Summary statistics of the fidelity measures included in regression analysis
| Variable | Obs | Mean | Median | 25% | 75% | Min | Max | CV |
|---|---|---|---|---|---|---|---|---|
| Action plans elaborated | 154 | 8.2 | 8.0 | 8.0 | 9.0 | 1.0 | 10.0 | 17% |
| Distinct problems identified | 154 | 16.9 | 17.0 | 15.0 | 19.0 | 1.0 | 25.0 | 21% |
| Number of supervisions | 154 | 3.4 | 3.0 | 2.0 | 5.0 | 0.0 | 10.0 | 65% |
| Micro-interventions completed (%) | 154 | 16.5 | 19.1 | 9.5 | 21.3 | 6.6 | 32.7 | 47% |
| Number of participants | 154 | 274.5 | 270.0 | 216.0 | 289.0 | 164.0 | 524.0 | 33% |
Abbreviations: Obs is the number of observations, CV is the coefficient of variation
Results of regression analysis
Service delivery indicators
The results of the associations between service indicators and fidelity measures suggest a generally positive relationship with the number of participants and the percentage of micro-interventions completed. In contrast, the number of action plans elaborated tends to have a negative relationship with three effectiveness outcomes (Fig. 1). Detailed information about these results can be found in additional file 1B.
Fig. 1.
Patterns of direction of the association between fidelity measures and service delivery indicators. Green highlights represent positive associations, orange highlights indicate negative associations, and the absence of highlighting denotes null associations
Service readiness indicators
For readiness indicators, no consistent patterns were observed across fidelity measures. However, positive associations were seen between the availability of essential care and the number of participants, as well as with completed micro-interventions. A negative association was found between staff availability and the number of supervision visits (Fig. 2). Additional information can be found in Additional file 1C.
Fig. 2.
Patterns of direction of the association between fidelity measures and service readiness outcomes. Green highlights represent positive associations, orange highlights indicate negative associations, and the absence of highlighting denotes null associations
Sensitivity analysis results
The results from the OLS models were similar to those of the generalized linear model in terms of the direction and magnitude of the coefficients.
In the multivariate regression for service delivery indicators, the average annual change in any service outcome was negatively associated with the number of action plans elaborated (RR = 0.9373 [0.8796, 0.9987]) and positively associated with the number of participants (RR = 1.1629 [1.0300, 1.3129]). In the multivariate regression for readiness indicators, the average annual change in any readiness outcome was negatively associated with the number of action plans elaborated (RR = 0.9429 [0.8938, 0.9950]). More information about the pooled models can be found in Additional files 1D and 1E.
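Because these relative risks come from a log-link model, coefficients multiply on the outcome scale. A short worked example using the pooled-model estimates above (pure arithmetic, no study data):

```python
import math

# Pooled-model relative risks reported above
rr_action_plans = 0.9373   # per additional action plan elaborated
rr_participants = 1.1629   # per unit increase in the participation measure

# Effects compound multiplicatively: two extra action plans scale the
# expected outcome by rr squared (roughly a 12% reduction)
two_more_plans = rr_action_plans ** 2

# The underlying log-link regression coefficient is the log of the RR
beta_action_plans = math.log(rr_action_plans)

print(round(two_more_plans, 4), round(beta_action_plans, 4))
```

This multiplicative reading is why an RR below 1 (action plans) and an RR above 1 (participants) point in opposite directions even though both coefficients are modest in size.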
Discussion
We presented a study examining the relationships between measures of implementation fidelity and effectiveness of an A&F strategy implemented in 154 primary healthcare facilities in central Mozambique. We utilized implementation science frameworks to understand the implementation process and applied regression modeling to analyze the patterns of associations.
We chose to examine patterns in the direction of associations because a consistent pattern might suggest that a particular component of fidelity influences effectiveness. We also used individual regressions for each service or readiness outcome in our main analysis to identify any differences among indicators of different services (antenatal care, maternity, postpartum, family planning). This is because, in IDEAs audit and feedback meetings, the participating nurses identified distinct types of challenges and selected different micro-interventions for each of these services [11]. Detecting strong patterns of association in the results is therefore useful for strategy adjustments to improve future implementation.
A pattern of positive associations was observed between the number of participants in the audit and feedback meetings and most service delivery indicators; additionally, the degree of completion of micro-interventions was positively related to two indicators. In contrast, negative associations were identified between the number of action plans developed and three service delivery indicators. Furthermore, one indicator, the number of users of long-lasting contraceptives, showed negative relationships with the number of action plans developed, the number of problems identified, and the number of supervisions. The multivariate regression in the sensitivity analysis showed associations consistent with the main analysis: a negative relationship between the mean of outcomes and the number of action plans developed, and a positive relationship with the number of participants. No associations were found between the service delivery outcomes and the level of completion of micro-interventions in the sensitivity analysis.
The findings on IDEAs' effectiveness on service delivery indicators [14] indicate a positive link between the strategy and child-at-risk indicators (such as initial appointments and first PCR tests for exposed children), a negative link with family planning indicators (including new contraceptive users and those using long-lasting methods), and no effects for the remaining six service indicators. Triangulating these effectiveness results with the patterns of association with fidelity measures, participation in A&F meetings appears to have a positive impact on results, whereas a higher number of action plans was not necessarily a factor in success. This emphasizes that implementing the solutions in action plans has a greater impact than merely creating plans. Building on this idea, we observed that only two indicators had significant positive correlations with the completion level of micro-interventions, that is, with solving the issues identified in the plans, which might partially explain the null effects observed for most effectiveness indicators.
Surprisingly, the number of supervisions was not significantly associated with most indicators, although this is one of the main components for supporting health facility teams in implementing micro-interventions. This might indicate several things, for example, that although the number of supervisions was high (as shown in Table 4), the focus of those visits was too broad, not necessarily prioritizing issues from the action plans, or that the quality of the supervisions was not ideal. Unfortunately, the present analysis cannot assess these claims.
Regarding readiness indicators, we did not observe a strong pattern of associations. However, there is a positive association between the availability of essential care and both the number of participants and the extent to which micro-interventions were completed. Conversely, staff availability is negatively associated with supervision visits. Given the characteristics of the context (a low-resource setting and a complex health system), these few observed associations make sense: providing essential care (which is essentially applying clinical guidelines correctly) is the outcome where an A&F intervention has the greatest chance of producing immediate changes, compared with changing staff levels, equipment, infrastructure, or medicine stocks. In the case of IDEAs, participants are not only maternal and child health managers but also nurses providing direct patient care, which offers a valuable opportunity to improve the application of clinical practice guidelines. The negative association between staff levels and supervision visits is also unsurprising; lower staff numbers might limit the number of supervisions conducted.
Regarding the effectiveness of IDEAs on readiness indicators [14], null effects were reported for most indicators, except for a positive association between the strategy and the availability of infrastructure and a negative association with the availability of essential care. Combining these findings with the current discussion, it is safe to say that the selected fidelity measures do not have a direct or sufficient influence on the availability of readiness indicators. This may be because most expected changes in the availability of medicines, equipment, and staff require actions beyond the facility level, restricting the number of micro-interventions related to these areas that can be included and executed in the action plans. One exception is the availability of essential care, which, as mentioned above, has more room for modification with local solutions. However, this readiness indicator was negatively associated with the strategy in the effectiveness analysis, suggesting that factors outside the scope of the strategy components influenced this outcome during implementation. Examples include stockouts of essential medicines, lack of equipment, and inadequate provider training, none of which can be captured with our data.
When evaluating the implementation process of the IDEAs strategy [11], we observed that two specific components of the strategy were poorly executed, which could also partially affect the effectiveness results. The explanation for this is described elsewhere [11]. In summary, the process of identifying and specifying problems and micro-interventions during the A&F meetings improved over time. During the first two A&F meeting cycles (out of a total of nine cycles), many problems were identified, and poorly specified micro-interventions were proposed, which posed challenges in executing and monitoring their implementation. Furthermore, Sofala province presented better micro-intervention completion results than Manica, suggesting differences between MCH managers from the two provinces in experience with monitoring and evaluating action plan activities. In addition, conducting facility readiness assessments to inform A&F meetings as planned was challenging because of delays in creating the assessment protocol, delays in IRB approval, and failure to rapidly synthesize results to feed back into the A&F meetings. Executing these assessments was also very expensive, further constraining their execution.
Few studies in health research have systematically documented implementation processes and measured fidelity, and there is no consensus on how to do it best [13, 24].
A review of primary and early secondary prevention programs found that only 39 of 162 outcome studies specified procedures for documenting fidelity, and only 13 considered fidelity variations in analyzing program effects [25].
In a cross-sectional study, Narh-Bana et al. examined healthcare providers' fidelity to national guidelines on tuberculosis screening at HIV clinics in Ghana [26]. They used the conceptual framework of implementation fidelity to guide the analysis of the content and frequency with which healthcare providers implemented the guidelines, reporting an overall median provider score of 79%. They also explored associations between fidelity and demographic characteristics and found positive associations with gender, profession, and education [26].
In a study of a continuum of care program for frail older adults in health and social care, Hasson et al. used the conceptual framework for implementation fidelity to examine the intervention’s content, dose, coverage, and moderating factors [27]. They found that 16 out of 18 intervention components were always or most often delivered as intended in the program protocol. No link with effectiveness was reported.
Hogue et al. examined the impact of treatment adherence and therapist competence on treatment outcomes in psychotherapy and reported a positive linear relationship between adherence and outcomes [28].
Our study has some limitations that need to be considered. First, we examined associations; therefore, no causal inferences can be made. Second, we assessed adherence to the strategy, but we recognize that other factors might influence the level of fidelity, such as intervention complexity, facilitation strategies, quality of delivery, and participant responsiveness, as noted by Carroll et al. [12]; because of data limitations, we did not evaluate these moderators. Third, interpreting each coefficient in isolation is not meaningful, given that the question of the study is not about the magnitude of each association but rather about detecting potential patterns. Despite these limitations, the study has notable strengths. We applied well-known frameworks to specify the IDEAs strategy and describe its fidelity measures, providing clarity about the strategy and allowing meaningful comparison with similar studies. We used quantitative methods to identify potential influencers of success, and we triangulated these with process evaluation information to better understand the implementation dynamics. This allowed us to report with confidence on our strategy design, the implementation process, the impact, and potential factors influencing it, thereby contributing to the field of implementation science by demonstrating a way to monitor and report on fidelity components and link them to effectiveness.
Based on this and other published work on the IDEAs strategy [11, 14, 29], we recommend adapting the strategy to enhance fidelity by reducing the number of action plans developed and improving the specification and prioritization of micro-interventions. Additionally, it is important to evaluate the quality of supportive supervision and identify factors related to challenges in implementing micro-interventions. This includes qualitatively exploring the availability of resources for implementation, nurses' clinical capacity in managing obstetric complications, and the constraints in using flexible funds provided to districts. The second recommendation is to eliminate the facility service readiness assessments from the strategy design, as their implementation has proven to be impractical and costly. Lastly, to target service delivery and service readiness outcomes more directly, it may be helpful to consider allocating financial resources directly to facilities instead of districts, and combining A&F with other strategies such as training nurses in emergency obstetric care.
Conclusion
Participation in audit and feedback meetings, the number of action plans created, and the completion of micro-interventions may have influenced the effectiveness of the IDEAs strategy. We recommend adapting the components of IDEAs by reducing the number of action plans created in A&F meetings and promoting the completion of micro-interventions. Additionally, combining A&F with other strategies might create better opportunities to more directly impact service delivery and readiness outcomes. This study exemplifies how linking components of fidelity with effectiveness can help examine implementation processes in resource-constrained settings. Such analysis supports the development of context-specific strategy adaptations, accurate replication, and successful transfer of evidence into practice.
Supplementary Information
Additional file 1: (A, B, C, D, E): An Excel file with five sheets. The first sheet (A-Correlation-fidelity) presents the correlation matrix for the five fidelity variables included in the regression analysis. The second sheet (B-S.delivery—individual models) presents the relative risks and 95% confidence intervals from the generalized linear regression models for the ten service delivery outcomes. The third sheet (C-S.readiness—individual models) displays the corresponding results for the five service readiness outcomes. The fourth sheet (D-S.delivery—pooled model) shows the relative risks and 95% confidence intervals from a pooled generalized linear regression model for all ten service delivery outcomes. The fifth sheet (E-S.readiness—pooled model) provides the same for a pooled model of all five service readiness outcomes.
Acknowledgements
We thank the Sofala and Manica Health directorates, the 12 district health departments, maternal and child managers, and nurses for collaborating with the IDEAs strategy.
Abbreviations
- A&F
Audit and feedback
- IDEAs
Integrated District Evidence to Action
- MCH
Maternal and Child Health
- OLS
Ordinary least squares
- GLM
Generalized linear model
- RE-AIM
Reach, Effectiveness, Adoption, Implementation, and Maintenance
- SARA
Service Availability and Readiness Assessment
Authors’ contributions
AD, KS, QF, SGL, SG, RE, BW, and GS conceived the idea of the study. IR, EB, DU, and AD organized the data. AD conducted the analysis. OA reviewed the analysis. AD prepared the data and wrote the manuscript. All authors reviewed and approved the manuscript.
Funding
The research reported in this publication is supported by the US National Institutes of Health (NIH) under award number 1R01HD092449-01A1 and the Doris Duke Charitable Foundation’s African Health Initiative grant #2016106. The content is solely the authors’ responsibility and does not necessarily represent the official views of the NIH or the Doris Duke Charitable Foundation.
Data availability
The data supporting this study’s findings are available upon reasonable request from the corresponding author and with permission of Manica and Sofala provincial health directorate.
Declarations
Ethics approval and consent to participate
All the research methods were performed following the relevant guidelines and regulations. The study was approved by the institutional review board of the University of Washington (IRB#STUDY00003926), Mozambique’s National Bioethics Committee for Health (CNBS-IRB00002657), and the Ministry of Health after endorsement from Manica and Sofala Provincial Health Directorates. Mozambique’s National Bioethics Committee for Health waived the need for informed consent.
Consent for publication
Not applicable.
Competing interests
Kenneth Sherr and Sarah Gimbel are part of the editorial board for the Implementation Science journal. All other authors declare that they have no competing interests.
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Hug L, Alexander M, You D, Alkema L. National, regional, and global levels and trends in neonatal mortality between 1990 and 2017, with scenario-based projections to 2030: a systematic analysis. Lancet Glob Health. 2019;7(7):e710–20. 10.1016/S2214-109X(19)30163-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Lawn JE, Blencowe H, Oza S, You D, Lee AC, Waiswa P, et al. Every Newborn: progress, priorities, and potential beyond survival. The lancet. Elsevier; 2014;384:189–205. [DOI] [PubMed]
- 3.Dickson KE, Simen-Kapeu A, Kinney MV, Huicho L, Vesel L, Lackritz E, et al. Every Newborn: health-systems bottlenecks and strategies to accelerate scale-up in countries. The Lancet. Elsevier; 2014;384:438–54. [DOI] [PubMed]
- 4.Macicame I, Kante AM, Wilson E, Gilbert B, Koffi A, Nhachungue S, et al. Countrywide mortality surveillance for action in Mozambique: results from a national sample-based vital statistics system for mortality and cause of death. Am J Trop Med Hyg. The American Society of Tropical Medicine and Hygiene; 2023;108:5.
- 5.Tunçalp Ӧ, Were W, MacLennan C, Oladapo O, Gülmezoglu A, Bahl R, et al. Quality of care for pregnant women and newborns—the WHO vision. Bjog. Wiley-Blackwell; 2015;122:1045. [DOI] [PMC free article] [PubMed]
- 6.Leatherman S, Ferris TG, Berwick D, Omaswa F, Crisp N, Oxford University Press. The role of quality improvement in strengthening health systems in developing countries. Int J Qual Health Care. 2010;22:237–43. [DOI] [PubMed] [Google Scholar]
- 7.World Health Organization. The network for improving quality of care for maternal, newborn and child health: evolution, implementation and progress: 2017–2020 report. Geneva: World Health Organization; 2021. ISBN: 978-92-4-002815-9.
- 8.Zaka N, Alexander EC, Manikam L, Norman IC, Akhbari M, Moxon S, et al. Quality improvement initiatives for hospitalised small and sick newborns in low-and middle-income countries: a systematic review. Implement Sci. Springer; 2018;13:1–21. [DOI] [PMC free article] [PubMed]
- 9.Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Effective Practice and Organisation of Care Group, editor. Cochrane Database Syst Rev. 2012; 10.1002/14651858.CD000259.pub3. Cited 2022 Jan 31. [DOI] [PMC free article] [PubMed]
- 10.Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O’Brien MA, French SD, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29(11):1534–41. 10.1007/s11606-014-2913-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Dinis A, Fernandes Q, Wagenaar BH, Gimbel S, Weiner BJ, John-Stewart G, et al. Implementation outcomes of the integrated district evidence to action (IDEAs) program to reduce neonatal mortality in central Mozambique: an application of the RE-AIM evaluation framework. BMC Health Serv Res. 2024;24:164. 10.1186/s12913-024-10638-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci Springer. 2007;2:1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Slaughter SE, Hill JN, Snelgrove-Clarke E. What is the extent and quality of documentation and reporting of fidelity to implementation strategies: a scoping review. Implement Sci BioMed Central. 2015;10:1–12. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Dinis A, Augusto O, Fernandes Q, Birru E, Etzioni R, Gimbel S, et al. Can audit and feedback improve health service readiness and delivery outcomes in a low-resource setting? Effectiveness results of the IDEAs strategy from central Mozambique. PLOS Glob Public Health. Public Library of Science San Francisco, CA USA; 2025;5:e0004216. [DOI] [PMC free article] [PubMed]
- 15.World Health Organization. Service availability and readiness assessment (SARA): an annual monitoring system for service delivery: reference manual. Geneva: World Health Organization; 2013. WHO/HIS/HSI/RME/2013/1.
- 16.Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process Elsevier. 1991;50:179–211. [Google Scholar]
- 17.Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139. 10.1186/1748-5908-8-139. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Anuário Estatístico Provincia de Sofala 2021. Instituto Nacional de Estatística Delegação Provincial de Sofala; http://www.ine.gov.mz/estatisticas/publicacoes/anuario/anuario-provincia-de-sofala/ae-sofala-2021-f.docx/view. Accessed 27 Dec 2022.
- 19.Anuário Estatístico Provincia de Manica 2021. Instituto Nacional de Estatística Delegação Provincial de Manica; http://www.ine.gov.mz/estatisticas/publicacoes/anuario/provincia-de-manica/anuario-estatistico-provincia-de-manica-2021.pdf/view. Accessed 27 Dec 2022.
- 20.Darmstadt GL, Bhutta ZA, Cousens S, Adam T, Walker N, de Bernis L. Evidence-based, cost-effective interventions: how many newborn babies can we save? Lancet. 2005;365:977–88. 10.1016/S0140-6736(05)71088-6. [DOI] [PubMed] [Google Scholar]
- 21.Mason E, McDougall L, Lawn JE, Gupta A, Claeson M, Pillay Y, et al. From evidence to action to deliver a healthy start for the next generation. The lancet. Elsevier; 2014;384:455–67. 10.1016/S0140-6736(14)60750-9. [DOI] [PubMed]
- 22.Bhutta ZA, Das JK, Bahl R, Lawn JE, Salam RA, Paul VK, et al. Can available interventions end preventable deaths in mothers, newborn babies, and stillbirths, and at what cost? The Lancet Elsevier. 2014;384:347–70. [DOI] [PubMed] [Google Scholar]
- 23.World Health Organization. Priority life-saving medicines for women and children. 2012. https://www.who.int/publications/i/item/WHO-EMP-MAR-2012-1. Accessed 23 May 2023.
- 24.Hasson H. Systematic evaluation of implementation fidelity of complex interventions in health and social care. Implement Sci BioMed Central. 2010;5:1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Dane AV, Schneider BH. Program integrity in primary and early secondary prevention: are implementation effects out of control? Clin Psychol Rev Elsevier. 1998;18:23–45. [DOI] [PubMed] [Google Scholar]
- 26.Narh-Bana SA, Kawonga M, Chirwa ED, Ibisomi L, Bonsu F, Chirwa TF, et al. Fidelity of implementation of TB screening guidelines by health providers at selected HIV clinics in Ghana. PLoS ONE. 2021;16:e0257486. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Hasson H, Blomberg S, Dunér A, BioMed Central. Fidelity and moderating factors in complex interventions: a case study of a continuum of care program for frail elderly people in health and social care. Implement Sci. 2012;7(1):1–11. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Hogue A, Henderson CE, Dauber S, Barajas PC, Fried A, Liddle HA, et al. Treatment adherence, competence, and outcome in individual and family therapy for adolescent behavior problems. J Consult Clin Psychol. 2008;76:544. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Inguane C, Soi C, Gimbel S, Manaca N, Ramiro I, Floriano F, et al. Applying the Consolidated Framework for Implementation Research to Identify Implementation Determinants for the Integrated District Evidence-to-Action Program, Mozambique. Glob Health Sci Pract. 2022;10(Suppl 1):e2100714. 10.9745/GHSP-D-21-00714. [DOI] [PMC free article] [PubMed]