Abstract
There is a growing body of research that addresses implementation-focused questions within obstetrics and gynecology. With this document, we provide clinicians with the necessary tools to critically read and interpret literature evaluating an implementation endeavor. We describe the process of implementation research, as well as common study designs and outcomes. Furthermore, we detail pitfalls in the design and analysis of implementation studies, using examples within obstetrics and gynecology. Armed with this knowledge, clinicians may be better able to translate a paper on implementation into improvement efforts in their own clinical practice setting.
Keywords: implementation, implementation science, gynecology, obstetrics
Introduction
Researchers in obstetrics and gynecology are continuously generating new evidence to inform clinical care delivery. Yet much of this newly emerging evidence fails to be rapidly and effectively integrated into routine clinical practice [1, 2]. It is part of our role to champion, implement, and improve utilization of evidence-based practices in routine care. However, implementing such practices is challenging. Will this evidence-based practice be effective in my clinical setting? How can we actually elicit change from our administration, our colleagues, or our patients?
There is a growing body of research that addresses such implementation-focused questions. With this document, our goal is to provide clinicians with the necessary tools to critically read and interpret literature evaluating an implementation endeavor. We will describe the process of implementation research, as well as common study designs and outcomes. Furthermore, we will detail pitfalls in the design and analysis of implementation studies, using examples within obstetrics and gynecology. Armed with this knowledge, clinicians may be better able to translate a paper on implementation into improvement efforts in their own clinical practice setting.
Effectiveness, the Science of Implementation, and Obstetrics
Implementation science begins with an evidence-based intervention, that is, a clinical practice already supported by sufficient high-quality evidence (i.e., randomized controlled trials, meta-analyses, and systematic reviews) that we as a scientific community believe it should be used in clinical care. This evidence-based practice is “the thing” [3]. Implementation science focuses on how best to do “the thing,” as well as how to get clinicians to use “the thing.” Implementation studies sometimes evaluate whether utilizing an evidence-based practice improves clinical outcomes in a specific, real-world population (i.e., effectiveness research). Sometimes, effectiveness is so well established that we move on to studying different strategies for getting the practice implemented into care (pure implementation research) [4]. We can also perform studies that evaluate both in a hybrid effectiveness-implementation trial [5]. Such hybrid studies evaluate the clinical effectiveness of real-world implementations, while also evaluating what are called implementation outcomes [6, 7]. Even at the early stages of effectiveness research, there is always a process of incorporating an evidence-based practice into routine use. That process of implementation, whether good or bad, has the potential to impact clinical effectiveness outcomes almost as much as the intervention itself. The success of an implementation is an intermediary step between the intervention and any clinical effect. For example, if a study plans to analyze the impact of instituting a guideline (i.e., “the thing”) for short-interval follow-up of women at high risk for wound infection after gynecologic surgery, but clinicians are unaware of the guideline's existence or do not follow it, the guideline cannot hope to have an impact on readmission rates.
Robust implementation science involves four key steps: 1) measuring the evidence-to-practice gap (i.e., the difference between the recommended intervention and the care actually delivered), 2) determining barriers and facilitators to implementing the intervention, 3) selecting strategies to improve implementation that map specifically to those barriers and facilitators, and 4) evaluating the effects of those strategies [Figure 1]. Such evaluation often involves comparing various strategies (e.g., training and education, coaching, electronic health record-based nudges) to understand how well they improve the use of an evidence-based practice and to characterize heterogeneity of effect across various clinical settings. Rigorous implementation research uses implementation frameworks, theories, and models to guide this work [8]. In most published literature on implementation efforts in our field, however, many of these steps are skipped or not described [9].
Figure 1: Roadmap to implementation success
A hallmark of quality in implementation research is, at the least, a description of the work done to achieve implementation. What steps did the authors take to put this evidence into practice? Which key stakeholder perspectives (e.g., patients, physicians, midwives, nurses, administrators) were considered when designing the implementation intervention? What support systems were needed to make the intervention work at their site? How did they disseminate the plan to use the practice? The discerning reader can evaluate an implementation publication by looking for which pieces of this rigorous process, if any, are described. To assess the quality of a study in front of us, let us start by examining the overarching study design.
Study Designs in Implementation Research [Figure 2] [10]
Figure 2: Trial designs in implementation science
Randomized, controlled trials (RCTs) are often considered the gold standard in research methods [11, 12]. RCTs harness randomization to evenly distribute baseline characteristics among study groups, thereby decreasing risk of bias. RCTs increase confidence that the assigned intervention is the only factor leading to an observed change in outcome.
RCTs are often used to develop the evidence for a clinical intervention (“the thing”). Once effectiveness is established, it is not impossible to perform an implementation-focused RCT, but it is challenging. Because many implementation projects occur at the hospital or practice level, it can be difficult to perform individual-patient RCTs of evidence-based practice implementation.
Example: Consider implementation of the Baby-Friendly Hospital Initiative, a global program to implement the Ten Steps to Successful Breastfeeding (Ten Steps) and the International Code of Marketing of Breastmilk Substitutes [13]. Let’s imagine we wanted to determine whether implementing the Baby-Friendly Hospital Initiative improved exclusive breastfeeding rates in our population. Completing the initiative as intended is a large-scale logistical undertaking, including intensive clinician and patient education, policies to support immediate skin-to-skin contact, and the infrastructure for “rooming-in.” If individual patients were randomized to receive or not receive the Baby-Friendly intervention, there could be “bleed-through” of the intervention from the Baby-Friendly group to the control group. Clinicians’ newfound breastfeeding knowledge might affect all patients, making the groups more similar and thereby diluting any measurable effect of the intervention. More importantly, it could be seen as unethical to provide breastfeeding support for the patient in one room, but not for the patient in the next.
Because of these considerations, large-scale implementation research often uses cluster-randomized trials. Here, groups of patients (e.g., all patients clustered within a clinic, hospital, or region) are randomized as a group to receive or not receive an intervention. In this case, the unit of randomization and analysis is the cluster rather than the individual (see the Unit of Analysis section below). Power is driven largely by the number of clusters rather than the number of patients within each cluster; as a result, cluster-randomized trials require a large number of patient groups to demonstrate significant differences and can be quite resource-intensive to perform.
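As a rough, back-of-the-envelope illustration of why the number of clusters drives power, the standard design-effect correction is sketched below; the cluster size and intracluster correlation used in the worked numbers are assumed values for illustration, not figures from any trial discussed here.

```latex
\[
\mathrm{DE} = 1 + (m - 1)\,\rho,
\qquad
n_{\mathrm{eff}} = \frac{k\,m}{\mathrm{DE}},
\]
% k = number of clusters, m = average patients per cluster,
% rho = intracluster correlation coefficient (ICC).
% Illustrative assumption: k = 31 hospitals, m = 200 patients each, rho = 0.05
% gives DE = 1 + 199(0.05) ≈ 11, so 6,200 enrolled patients contribute roughly
% the statistical information of only about 570 independent patients.
```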
Example: The Promotion of Breastfeeding Intervention Trial (PROBIT), a cluster-randomized trial, randomized 31 maternity hospitals in Belarus to either implementation of the Baby-Friendly Hospital Initiative or usual care [14]. Their results demonstrated that infants from the intervention sites were significantly more likely than control infants to be breastfed exclusively at 3 and 6 months.
Because of the resource intensity of cluster RCTs, implementation studies often use alternative designs. Stepped wedge designs sequentially roll out the implementation of an evidence-based practice to units (e.g., practices, hospitals) over time. In many cases, this may be seen as more ethical than a cluster-randomized trial, as all sites receive the intervention by the end of the study. Some stepped wedge studies increase methodologic rigor by randomizing the order in which sites receive the intervention (the stepped wedge randomized trial). This design has several advantages over studies that implement at all sites simultaneously. For example, in the analysis, the time periods during which a site has not yet received the intervention can serve as controls for the sites that have already implemented the practice. Furthermore, implementation leaders can learn from sites that implement earlier in the wedge and apply those lessons to sites that implement later.
Example: A stepped wedge randomized trial design was used to evaluate a multilevel intervention called the Development of Systems and Education for Human Papillomavirus Vaccination (DOSE HPV) [15]. Five primary care pediatric and family medicine practices were randomized to the order in which they received the intervention over a 2-year period. Using the time periods at sites where implementation had not yet occurred as controls, implementation of DOSE HPV demonstrated increased initiation and completion of HPV vaccination in adolescents.
Cluster-randomized and stepped wedge designs often require dozens of sites for rigorous studies. When only one or a few sites are part of an implementation endeavor, however, choices for study design become limited. One of the major issues with implementation studies at only 1-2 sites is determining whether the specific implementation approach being studied, rather than secular trends or other concurrent improvement endeavors, is responsible for any improved outcome. A secular trend is the relatively consistent movement of a variable over time. Quasi-experimental study designs aim to allow causal inference in situations where randomized experiments are not possible. Time series designs, one example of a quasi-experimental design, can help increase confidence in a causal link. A time series design evaluates outcomes at multiple time points before and after an intervention, with a change in the before versus after slopes suggesting the implementation intervention’s effect. Furthermore, the intervention may be stopped and then restarted at multiple time points, if feasible; if outcomes improve and decline in step with these changes, this further supports causality. Results of time series designs are often presented as “run charts,” or line graphs of an outcome plotted over time [Figure 3]. One can also perform such studies with one or more control groups, comparing the change over multiple time points in an experimental group to the change in a control group, known as a difference-in-difference comparison.
Figure 3: “Run chart” example
Example: A time series study was used to evaluate the effect of a pay-for-performance scheme in Britain regarding counseling around long-acting reversible contraception (LARC) [16]. To control for potential rises in LARC utilization due to shifts in culture, public opinion, and method accessibility, rather than the pay-for-performance intervention itself, the study evaluated LARC initiation rates at multiple time points before and after implementation of the scheme, offering higher confidence that the observed post-intervention increase and change in slope of LARC utilization was attributable to the intervention rather than to a secular trend.
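For readers who want to see what such an interrupted time series analysis can look like in code, the sketch below fits a simple segmented regression. The file name and column names are hypothetical and are not drawn from the study above.

```python
# Minimal segmented-regression sketch for an interrupted time series
# (hypothetical data; one row per calendar month).
#   month        -- 0, 1, 2, ... study month
#   post         -- 0 before the intervention, 1 after
#   months_post  -- 0 before the intervention, then 1, 2, 3, ...
#   larc_rate    -- e.g., LARC initiations per 1,000 eligible visits
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("monthly_larc_rates.csv")  # hypothetical file

# Coefficients: month = pre-intervention (secular) trend,
# post = immediate level change, months_post = change in slope afterward.
its = smf.ols("larc_rate ~ month + post + months_post", data=df).fit()
print(its.summary())
```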
Example: A difference-in-difference comparison was used to evaluate the impact of Medicaid expansion on postpartum coverage by comparing Colorado, which expanded Medicaid under the Affordable Care Act, to Utah, which did not, monthly over a 2-year window [17]. Comparisons by state over multiple time points demonstrated that Medicaid expansion was associated with stability of postpartum coverage and the use of postpartum outpatient care.
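A minimal sketch of the corresponding difference-in-difference regression is shown below; the variable and file names are hypothetical, and the actual analysis in reference 17 was more involved.

```python
# Difference-in-difference sketch (hypothetical person-month data).
#   expansion -- 1 if the observation is from the expansion state, else 0
#   post      -- 1 if the month falls after the policy change, else 0
#   covered   -- 1 if the person retained postpartum coverage that month
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("postpartum_coverage.csv")  # hypothetical file

# The coefficient on expansion:post is the difference-in-difference
# estimate: the pre-to-post change in the expansion state minus the
# pre-to-post change in the comparison state.
did = smf.ols("covered ~ expansion * post", data=df).fit()
print(did.summary())
```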
The least rigorous, but most common, study designs in implementation are before-after studies. These studies evaluate rates of an outcome before and after implementation of an evidence-based practice. Before-after studies can be controlled or uncontrolled. A controlled before-after study identifies a comparison population that will not undergo the intervention but is as similar as possible to the population that will. Outcomes are measured in both groups, once before and once after the intervention is implemented in the experimental group, and any change in outcome in the experimental group can be compared with the change in the control group. An uncontrolled before-after study simply examines outcomes in one study population before and after implementation.
Example: A study from France evaluated the impact on venous thromboembolism (VTE) rates of implementing a VTE risk-scoring system for pregnant women, with management recommendations tied to each score. VTE rates were reduced in the post-implementation period [18]. However, this type of study design reflects many of the pitfalls of implementation research, which we address below.
In adaptive study designs, researchers adapt an intervention during the study to improve its successful implementation for an individual or group, with the goal of better supporting those who do not respond to the initial intervention. In a Sequential Multiple Assignment Randomized Trial (SMART) design, an intervention non-responder can be re-randomized to another intervention based on pre-planned decision trees within the study design [19].
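As a toy illustration of the pre-planned decision logic in a SMART design, the sketch below re-randomizes non-responders; the arm names and responder rule are invented for illustration and are not drawn from reference 19.

```python
# Toy SMART-style assignment logic (hypothetical arm names and rules).
import random

def first_stage():
    # Initial randomization between two implementation strategies.
    return random.choice(["education_only", "education_plus_reminders"])

def second_stage(first_arm, responded):
    # Responders continue their first-stage strategy; non-responders are
    # re-randomized according to a pre-planned decision tree.
    if responded:
        return first_arm
    if first_arm == "education_only":
        return random.choice(["add_reminders", "add_coaching"])
    return random.choice(["add_coaching", "add_audit_and_feedback"])

stage1 = first_stage()
stage2 = second_stage(stage1, responded=False)
print(stage1, "->", stage2)
```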
The Tools of Implementation Science
Before executing any of these study designs, implementation researchers first design an implementation intervention and determine a process for evaluating it. Implementation science offers numerous frameworks to help with both intervention design and evaluation. These frameworks draw on decades of behavior change research and theory to identify “determinants” of implementation outcomes, or “barriers” and “facilitators” to success. Frameworks can be used in formative work, to measure barriers and facilitators to implementation of a clinical practice, and in evaluative work after implementation, to explain variation in implementation outcomes [20]. While more than 60 such frameworks exist, some of the more commonly used and approachable frameworks for the implementation science beginner include the Consolidated Framework for Implementation Research (CFIR), the Behaviour Change Wheel (also referred to as the COM-B system), and the Framework for Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) [21–23]. For example, CFIR aids in systematically identifying factors that influence implementation, including intervention characteristics, features of the implementing organization, and the external context in which the organization operates [21]. When studies incorporate these frameworks, theories, or models into their work, the approach to implementation can be more organized, and evaluation of the implementation effort can create generalizable knowledge to inform other sites undertaking similar efforts.
Example: Prior studies of the implementation of immediate postpartum long-acting reversible contraception programs have utilized the CFIR, allowing researchers to identify key determinants of implementation success and explain variation in outcomes across sites [24, 25].
Outcomes in Implementation Research
At the more rigorous end, researchers can directly measure the success of implementation using implementation outcomes [6, 7]. Common language for such implementation outcomes is important. The primary outcomes evaluated in implementation science include acceptability, adoption, appropriateness, feasibility, fidelity, cost, penetration, and sustainability. These outcomes answer important questions not addressed by clinical outcomes alone. They might be evaluated quantitatively, qualitatively, or in a mixed-methods approach that integrates both methodologies. Examples of how such outcomes have been utilized in the obstetrics and gynecology literature are shown in [Table]. For example, acceptability of an intervention during an implementation process might be assessed using surveys or interviews of clinicians.
Table: Examples of Implementation Outcomes Utilized in Obstetrics & Gynecology Literature
| Implementation Outcome | Definition [6] | Example |
|---|---|---|
| Acceptability | The perception among stakeholders that an intervention is agreeable, or satisfactory. | 340 students and professors were surveyed regarding acceptability of a trainee clinical skills assessment for performing Pap smears [39] |
| Adoption | Initial decision to employ an intervention | Study of clinicians across 7 Health Maintenance Organizations (HMOs) and adoption of referral guidelines for hereditary breast and ovarian cancer counseling [40] |
| Appropriateness | The perceived fit, relevance, or compatibility of the intervention within a given setting. | Mixed-methods analysis of the appropriateness and feasibility of monitoring pregnant women’s blood pressure in their homes using village health workers to improve outcomes in hypertension in pregnancy in resource-poor settings in Nigeria [41] |
| Feasibility | The extent to which an intervention can be carried out within a given setting. | |
| Fidelity | The degree to which the intervention is delivered as described. | A study of 4 primary care centers evaluating fidelity to essential elements of CenteringPregnancy during its implementation in Mexico [42] |
| Cost | The cost of implementing the intervention, and any costs required to support its implementation. | Evaluation of the cost of implementing infertility medical services in developing countries [43] |
| Penetration | The integration of the new practices into a service setting and related systems | Study evaluating penetration of an SMS-based system developed to improve maternal and child health in Rwanda [44] |
| Sustainability | The extent to which the intervention is institutionalized within ongoing operations. | Mixed-methods study of an intervention to improve antiretroviral use in HIV evaluating sustainability of the intervention 2-3 years post-implementation [45] |
Example: In one study evaluating a Web-based tool for predicting risk of breast and ovarian cancer, clinician acceptability of the tool was assessed via open-ended questionnaires and semi-structured interviews. The work highlighted challenges and concerns around the tool, allowing modifications before more widespread implementation [26].
Pitfalls to Consider When Reading Studies on Implementation
Biases in study design
As we reviewed when discussing study designs, there are many possible biases in implementation research, particularly when before-after approaches are used. Often, there is no concurrent control group. When the only control is a historical cohort from before the intervention was implemented, there remains concern that other changes, interventions, or secular trends could contribute to any identified difference in outcome in the post-intervention group. Even when there is an attempt to include a concurrent control group, that group may be a low-quality comparator. For example, concurrent control groups are often similar patients undergoing care at different sites not implementing the intervention. Inherent differences in patient populations, as well as site-specific care practices and patterns, might contribute to unmeasured differences between groups that could bias results. Finally, most implementation research is unblinded by nature. After implementation, clinicians are likely to be aware that their cases are being audited or observed, possibly affecting results; this is known as the Hawthorne effect. Furthermore, data abstractors analyzing outcomes before and after implementation are unlikely to be blinded, which may also subconsciously bias results.
High-quality implementation research will try to address these concerns as much as possible. Analytic tools such as run charts, regression analysis, matching, and propensity scoring are often used to attempt to reduce bias in implementation research. Readers of implementation research should consider what biases exist in the study design presented and whether any analytic tools were used to overcome them.
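As one hedged illustration of what such an adjustment might look like, the sketch below estimates propensity scores and applies inverse-probability weights to compare an implementation site with a concurrent control site; the covariates, outcome, and file name are hypothetical, and an approach like this balances only measured characteristics.

```python
# Propensity-score weighting sketch for a controlled before-after comparison
# (hypothetical variable names; balances only *measured* confounders).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # hypothetical file pooling both sites

# 1) Model the probability of receiving care at the implementation site
#    from measured baseline characteristics.
ps_model = smf.logit("implementation_site ~ age + parity + insurance",
                     data=df).fit()
df["ps"] = ps_model.predict(df)

# 2) Inverse-probability-of-treatment weights.
df["iptw"] = (df["implementation_site"] / df["ps"]
              + (1 - df["implementation_site"]) / (1 - df["ps"]))

# 3) Weighted comparison of the outcome between sites.
outcome = smf.wls("readmission ~ implementation_site", data=df,
                  weights=df["iptw"]).fit()
print(outcome.summary())
```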
Example: Consider a population-based retrospective cohort study of nearly 100,000 pregnant Kaiser Permanente Northern California members during three implementation phases of a Universal Perinatal Depression Screening Program [27]. The study demonstrated progressive improvement in the expected percentage of women receiving treatment for depression across the pre-implementation, roll-out, and fully implemented phases. This three-phase analysis provides additional support that another intervention or secular trend was not responsible for the observed improvements.
Context specificity
Implementation research is more prone to issues with generalizability than other medical research [28]. Context plays a role in the success of an implementation endeavor, and thereby its impact on clinical outcomes. Context is broad, and may refer to geography, culture, type of institution/practice, leadership structure, or political climate. When interpreting implementation research, consider how applicable that work is to the site in which you practice. Was the intervention implemented at a range of diverse sites? Do the authors describe facilitators and barriers to implementation at each included site? How might you adapt the intervention to fit within the context of your site?
Example: Kawakita et al. describe a before-after study demonstrating reduced post-cesarean surgical site infection rates after implementation of an evidence-based surgical bundle [29]. The surgical bundle was developed and championed by residents. Therefore, this study may not be generalizable to a site without an obstetrics residency program.
Cost and other unintended outcomes
The threshold for acting on the results of an implementation study depends on the costs. Costs of implementation are not just monetary; they often involve significant time, effort, and resources from leadership and staff. If a study demonstrates only a small clinical impact but comes at substantial cost to the healthcare system, it may not be a high priority for implementation [30]. Few implementation studies incorporate cost analyses directly into their design. However, even local implementation efforts can report on the effort and resources used for implementation. High-quality implementation research will also report on acceptability of the intervention to clinicians on the front lines. Such research will also assess possible associated harms and unintended consequences of the intervention, particularly from the perspective of the patient.
Example: A retrospective before-after study demonstrated that implementing an evidence-based bundle of interventions could reduce perioperative blood transfusions for women undergoing laparotomy for ovarian or endometrial cancer by more than 50% [31]. The interventions included an intraoperative hemostasis checklist and evidence-based use of tranexamic acid. One might question whether the cost of implementing this evidence-based bundle was offset by the reduction in blood product transfusion. Indeed, the authors detailed the costs of care with and without implementation, determining the intervention to be cost-neutral.
Unit of analysis
Even when rigorous study methodologies, such as cluster-randomized trials, are utilized, we still need to closely consider a study’s analytic methods. In cluster-randomized trials, sites, practices, or hospital systems are often randomized to receive or not receive an intervention. Yet analyses are still often performed at the patient level in order to increase power and demonstrate a significant result. High-quality work in implementation research will take the “unit of analysis,” or “clustering,” into account when performing statistical analyses.
Example: A cluster-randomized trial in Japan examined the impact of a postpartum educational video on rates of self-reported shaking and smothering of newborns [32]. Forty-five obstetric units were randomized to receive or not receive the intervention. Analysis was performed at the patient level, comparing 2,350 individuals in the intervention group to 2,372 individuals in the no-intervention group. Importantly, the authors reported that multilevel statistical analyses were used to adjust for correlation by cluster (in this case, obstetric unit).
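To make the idea of a cluster-aware analysis concrete, the sketch below fits a generalized estimating equation that treats patients within the same obstetric unit as correlated; the column names and file are hypothetical, and the cited trial's actual multilevel models may differ.

```python
# Cluster-aware analysis sketch for a cluster-randomized trial
# (hypothetical patient-level data with columns: outcome, arm, cluster_id).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")  # hypothetical file

# A GEE with an exchangeable working correlation treats patients within the
# same obstetric unit as correlated rather than independent, so standard
# errors are not artificially small despite the large patient-level sample.
gee = smf.gee("outcome ~ arm", groups="cluster_id", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```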
Other Considerations for the Interpretation of Studies in Implementation
Health equity in implementation
As readers of implementation research, we need to evaluate work with an equity-focused lens. Quality improvement interventions have the potential to either worsen or lessen racial, ethnic, and socioeconomic disparities. For example, innovative methods of reaching patients at home, such as telehealth solutions for peripartum care, could worsen inequities if careful thought is not given to providing access to patients of all backgrounds [33]. Equity in implementation is of critical importance, but exploring it in depth is outside the scope of this paper. For those interested in this field, Brownson et al. and Baumann et al. have excellent publications [34, 35].
Example: Retrospective work by Hamm et al demonstrated that a standardized protocol for management of labor induction may reduce disparities in cesarean delivery rate and neonatal morbidity [36]. Our group is now studying prospective implementation of this protocol, while simultaneously evaluating its acceptability to clinicians and patients.
Sustainability
A final, critical consideration for any implementation endeavor is its sustainability. If the implementation strategies utilized in a study are not feasible outside of large-scale funding, this may limit the expected impact of the work.
Example: A randomized controlled trial in Scotland evaluated the impact of a tailored, multifaceted strategy to promote utilization of a national guideline on abortion care [37]. The strategy included unit-specific audit and feedback reports. Yet if personnel are no longer available to produce such time-intensive reports after the study period ends, the project may not lead to sustainable improvements in outcomes.
Proposed Considerations When Evaluating Studies on Implementation Endeavors in Obstetrics & Gynecology
When reading studies in implementation, we propose considering a set of questions [Figure 4].
Figure 4: Proposed considerations when evaluating studies on implementation endeavors in obstetrics and gynecology
Conclusions
The field of implementation science holds immense promise to more effectively embed evidence-based interventions in women’s healthcare delivery. Women’s health clinicians with familiarity with implementation research study designs and concepts can better interpret the implementation literature and identify effective strategies for implementing evidence-based practices locally. Such efforts may help improve outcomes and eliminate inequities in our field [38].
Acknowledgements:
We acknowledge Sarah Block and Marisa Wetmore for their assistance with manuscript preparation.
Funding:
Michelle Moniz is supported by the Agency for Healthcare Research and Quality (AHRQ), grant #K08 HS025465, and is a paid consultant for RAND Corporation, the National Institute on Drug Abuse, and the Society of Family Planning. Rebecca Hamm is supported by a K23 Mentored Career Development Grant from the Eunice Kennedy Shriver National Institute of Child Health and Human Development (K23 HD102523).
Footnotes
Disclosure Statement: The authors report no conflict of interest.
References:
- 1. Miller S, et al., Beyond too little, too late and too much, too soon: a pathway towards evidence-based, respectful maternity care worldwide. Lancet, 2016. 388(10056): p. 2176–2192.
- 2. Dadich A, Piper A, and Coates D, Implementation science in maternity care: a scoping review. Implement Sci, 2021. 16(1): p. 16.
- 3. Curran GM, Implementation science made too simple: a teaching tool. Implement Sci Commun, 2020. 1: p. 27.
- 4. Lane-Fall MB, Curran GM, and Beidas RS, Scoping implementation science for the beginner: locating yourself on the “subway line” of translational research. BMC Med Res Methodol, 2019. 19(1): p. 133.
- 5. Curran GM, et al., Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care, 2012. 50(3): p. 217–26.
- 6. Proctor E, et al., Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health, 2011. 38(2): p. 65–76.
- 7. Lewis CC, et al., Outcomes for implementation science: an enhanced systematic review of instruments using evidence-based rating criteria. Implement Sci, 2015. 10: p. 155.
- 8. Esmail R, et al., A scoping review of full-spectrum knowledge translation theories, models, and frameworks. Implement Sci, 2020. 15(1): p. 11.
- 9. Rosenstein MG, Application of Implementation Science to OB/GYN Quality Improvement Efforts. Clin Obstet Gynecol, 2019. 62(3): p. 594–605.
- 10. Brown CH, et al., An Overview of Research and Evaluation Designs for Dissemination and Implementation. Annu Rev Public Health, 2017. 38: p. 1–22.
- 11. Meldrum ML, A brief history of the randomized controlled trial. From oranges and lemons to the gold standard. Hematol Oncol Clin North Am, 2000. 14(4): p. 745–60, vii.
- 12. Simon SD, Is the randomized clinical trial the gold standard of research? J Androl, 2001. 22(6): p. 938–43.
- 13. Schellinski K, Ten steps toward successful breast-feeding and the building blocks of the Baby-Friendly Hospital Initiative (BFHI). Cesk Pediatr, 1993. 48 Suppl 1: p. 3–4.
- 14. Kramer MS, et al., Promotion of Breastfeeding Intervention Trial (PROBIT): a randomized trial in the Republic of Belarus. JAMA, 2001. 285(4): p. 413–20.
- 15. Perkins RB, et al., Improving HPV Vaccination Rates: A Stepped-Wedge Randomized Trial. Pediatrics, 2020. 146(1).
- 16. Ma R, et al., Impact of a pay-for-performance scheme for long-acting reversible contraceptive (LARC) advice on contraceptive uptake and abortion in British primary care: An interrupted time series study. PLoS Med, 2020. 17(9): p. e1003333.
- 17. Gordon SH, et al., Effects Of Medicaid Expansion On Postpartum Coverage And Outpatient Utilization. Health Aff (Millwood), 2020. 39(1): p. 77–84.
- 18. Chauleur C, et al., Benefit of Risk Score-Guided Prophylaxis in Pregnant Women at Risk of Thrombotic Events: A Controlled Before-and-After Implementation Study. Thromb Haemost, 2018. 118(9): p. 1564–1571.
- 19. Collins LM, Murphy SA, and Strecher V, The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med, 2007. 32(5 Suppl): p. S112–8.
- 20. Nilsen P, Making sense of implementation theories, models and frameworks. Implement Sci, 2015. 10: p. 53.
- 21. Damschroder LJ, et al., Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci, 2009. 4: p. 50.
- 22. Glasgow RE, Vogt TM, and Boles SM, Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health, 1999. 89(9): p. 1322–7.
- 23. Michie S, van Stralen MM, and West R, The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci, 2011. 6: p. 42.
- 24. Palm HC, et al., An initiative to implement immediate postpartum long-acting reversible contraception in rural New Mexico. Am J Obstet Gynecol, 2020. 222(4S): p. S911.e1–S911.e7.
- 25. Moniz MH, et al., Implementing immediate postpartum contraception: a comparative case study at 11 hospitals. Implement Sci Commun, 2021. 2(1): p. 42.
- 26. Archer S, et al., Evaluating clinician acceptability of the prototype CanRisk tool for predicting risk of breast and ovarian cancer: A multi-methods study. PLoS One, 2020. 15(3): p. e0229999.
- 27. Avalos LA, et al., Improved Perinatal Depression Screening, Treatment, and Outcomes With a Universal Obstetric Program. Obstet Gynecol, 2016. 127(5): p. 917–925.
- 28. Pfadenhauer LM, et al., Making sense of complexity in context and implementation: the Context and Implementation of Complex Interventions (CICI) framework. Implement Sci, 2017. 12(1): p. 21.
- 29. Kawakita T, et al., Reducing Cesarean Delivery Surgical Site Infections: A Resident-Driven Quality Initiative. Obstet Gynecol, 2019. 133(2): p. 282–288.
- 30. Berwick DM, The science of improvement. JAMA, 2008. 299(10): p. 1182–4.
- 31. Wallace SK, et al., Optimizing Blood Transfusion Practices Through Bundled Intervention Implementation in Patients With Gynecologic Cancer Undergoing Laparotomy. Obstet Gynecol, 2018. 131(5): p. 891–898.
- 32. Fujiwara T, et al., Effectiveness of an Educational Video in Maternity Wards to Prevent Self-Reported Shaking and Smothering during the First Week of Age: A Cluster Randomized Controlled Trial. Prev Sci, 2020. 21(8): p. 1028–1036.
- 33. Jean-Francois B, et al., The Potential for Health Information Technology Tools to Reduce Racial Disparities in Maternal Morbidity and Mortality. J Womens Health (Larchmt), 2021. 30(2): p. 274–279.
- 34. Baumann AA and Cabassa LJ, Reframing implementation science to address inequities in healthcare delivery. BMC Health Serv Res, 2020. 20(1): p. 190.
- 35. Brownson RC, et al., Implementation science should give higher priority to health equity. Implement Sci, 2021. 16(1): p. 28.
- 36. Hamm RF, Srinivas SK, and Levine LD, A standardized labor induction protocol: impact on racial disparities in obstetrical outcomes. Am J Obstet Gynecol MFM, 2020. 2(3): p. 100148.
- 37. Foy R, et al., A randomised controlled trial of a tailored multifaceted strategy to promote implementation of a clinical guideline on induced abortion care. BJOG, 2004. 111(7): p. 726–33.
- 38. Callaghan-Koru JA, Moniz MH, and Hamm RF, Prioritize implementation research to effectively address the maternal health crisis. Am J Obstet Gynecol, 2021. 225(2): p. 212–213.
- 39. Seo JH, et al., Authenticity, acceptability, and feasibility of a hybrid gynecology station for the Papanicolaou test as part of a clinical skills examination in Korea. J Educ Eval Health Prof, 2018. 15: p. 4.
- 40. Mouchawar J, et al., Guidelines for breast and ovarian cancer genetic counseling referral: adoption and implementation in HMOs. Genet Med, 2003. 5(6): p. 444–50.
- 41. Shobo OG, et al., Implementing a community-level intervention to control hypertensive disorders in pregnancy using village health workers: lessons learned. Implement Sci Commun, 2020. 1: p. 84.
- 42. Fuentes-Rivera E, et al., Evaluating process fidelity during the implementation of Group Antenatal Care in Mexico. BMC Health Serv Res, 2020. 20(1): p. 559.
- 43. Ombelet W, et al., Infertility and the provision of infertility medical services in developing countries. Hum Reprod Update, 2008. 14(6): p. 605–21.
- 44. Ngabo F, et al., Designing and Implementing an Innovative SMS-based alert system (RapidSMS-MCH) to monitor pregnancy and reduce maternal and child deaths in Rwanda. Pan Afr Med J, 2012. 13: p. 31.
- 45. Katuramu R, et al., Sustainability of the streamlined ART (START-ART) implementation intervention strategy among ART-eligible adult patients in HIV clinics in public health centers in Uganda: a mixed methods study. Implement Sci Commun, 2020. 1: p. 37.
