Published in final edited form as: J Clin Epidemiol. 2017 Jul 15;90:28–36. doi: 10.1016/j.jclinepi.2017.06.014

AHRQ series on complex intervention systematic reviews—paper 4: selecting analytic approaches

Meera Viswanathan a,*, Melissa L McPheeters b, M Hassan Murad c, Mary E Butler d, Emily E (Beth) Devine e, Michele P Dyson f, Jeanne-Marie Guise g, Leila C Kahwati a, Jeremy NV Miles h, Sally C Morton i

Abstract

Background:

Systematic reviews of complex interventions can vary widely in purpose, data availability and heterogeneity, and stakeholder expectations.

Rationale:

This article addresses the uncertainty that systematic reviewers face in selecting methods for reviews of complex interventions. Specifically, it lays out parameters for systematic reviewers to consider when selecting analytic approaches that best answer the questions at hand and suggests analytic techniques that may be appropriate in different circumstances.

Discussion:

Systematic reviews of complex interventions comprising multiple questions may use multiple analytic approaches. Parameters to consider when choosing analytic methods for complex interventions include nature and timing of the decision (clinical practice guideline, policy, or other); purpose of the review; extent of existing evidence; logistic factors such as the timeline, process, and resources for deciding the scope of the review; and value of information to be obtained from choosing specific systematic review methods. Reviewers may elect to revise their analytic approach based on new or changing considerations during the course of the review but should guard against bias through transparency of reporting.

Keywords: Complex interventions, Evidence-based medicine, Review literature as topic, Systematic review, Qualitative research, Research design

1. Introduction

This is the fourth of a seven-part series of papers providing tools and approaches for conducting reviews of complex interventions. This paper is intended to assist systematic review authors in selecting analytic approaches for reviews of complex interventions.

In response to the standards established by the National Academy of Medicine (formerly the Institute of Medicine) for trustworthy clinical practice guidelines [1], the National Guidelines Clearinghouse now requires that clinical practice guidelines be based on systematic reviews [2]. This move has accelerated the demand from clinicians and policymakers for systematic reviews on an array of topics. As a result, systematic reviews increasingly scrutinize complex interventions. Researchers are now paying greater attention to the methods, constraints, and requirements of systematic reviews of complex interventions [3–13].

Because complex interventions often allow for adaptation, systematic reviews of a "class" or type of complex intervention may include a set of studies in which the overall intervention either includes slightly different components in each instance or is implemented differently in every study. Without using appropriate methods that explicitly take into account the multiple components and their variation, systematic reviewers could easily find themselves defaulting to a stance that the studies cannot be combined or even analyzed together. This default can lead to inappropriate and unnecessary conclusions that the strength of the body of evidence is insufficient for making decisions.

New methods are now available that allow an investigator to glean potentially important information about the role of individual components, the overall relative effectiveness of the complete intervention, and the variability in implementation [9]. The underlying requirements, assumptions, and outputs of these new methods vary greatly. Inadequately justified or inappropriate analysis methods for systematic reviews of complex interventions [13] can lead to questions about the utility of the systematic review [14].

This article lays out parameters for systematic reviewers to consider when selecting analytic approaches that best answer the questions at hand and suggests analytic techniques that may be appropriate in different circumstances. We believe this document will be of interest to systematic reviewers in identifying methodological approaches, to commissioners or funders of reviews in understanding what types of methods might best suit their purposes, and to other stakeholders. It will also provide greater transparency to the systematic review process.

This article was based on discussions initiated and supported by the Agency for Healthcare Research and Quality (AHRQ). Attendees at the AHRQ’s 2015 meeting on methods for reviews of complex interventions collaborated on this effort, using a group consensus process.

We first define complex interventions and then briefly summarize potential methods. The remainder of the paper describes the parameters that influence the choice of analytic approaches, provides specific examples, and offers suggestions for improved transparency in reporting.

2. Characterizing complex interventions

According to the definition developed by Guise et al. and presented in the following, all complex interventions have multiple components and causal pathways characterized by feedback loops, synergies, mediators, or moderators [15]. In addition, they may target multiple participants, groups, or organizational levels; require multifaceted adoption, uptake, or integration strategies; or be implemented in a dynamic multidimensional environment.

Definition of complex interventions [15]

All complex interventions have two common characteristics: they have multiple components (intervention complexity) and complicated/multiple causal pathways, feedback loops, synergies, and/or mediators and moderators of effect (pathway complexity). In addition, they may have one or more of three additional characteristics: they may target multiple participants, groups, or organizational levels (population complexity); require multifaceted adoption, uptake, or integration strategies (implementation complexity); or work in a dynamic multidimensional environment (contextual complexity).

3. Approaches for addressing complex interventions in systematic reviews

Two independent sets of authors have arrayed approaches for systematic reviews of complex interventions along a continuum [5,16]. The paper by Anderson et al. arrays approaches along a spectrum of theory, from theory generation (using configuring methods such as meta-ethnography or thematic synthesis), to theory exploration, and finally to theory testing (using inferential statistical methods such as meta-analysis) [16]. Anderson and colleagues also note that methods such as Bayesian synthesis, framework synthesis, cross-study synthesis, and realist synthesis can potentially integrate qualitative or quantitative data.

The AHRQ report [5] arrays approaches along a continuum reflecting the complexity of systematic review questions. The least intricate questions ask whether the overall bundle of interventions works. This approach is the classic "efficacy" use of the systematic review, for which a traditional qualitative or quantitative synthesis may be used. It asks whether an intervention works when compared against usual care or another appropriate control. Note that this approach differs from the typical comparative effectiveness question, in which multiple multicomponent interventions must be compared with one another. Comparative effectiveness questions can be answered using quantitative synthesis methods, including network meta-analysis; qualitative syntheses may also be appropriate. A middle set of approaches extends the comparative effectiveness question by asking how effectiveness varies by intervention features, disaggregating interventions according to a hypothesized set of features. Analytic approaches such as meta-regression, finite-mixture modeling, realist synthesis, and qualitative comparative analysis can be used to answer these questions [9,17]. Advanced meta-analytic approaches are described in detail by Pigott et al. in this issue [9]. The most intricate set of questions asks about reasons for the success or failure of interventions. These questions may use an extended array of methods encompassing qualitative, quantitative, or mixed methods. Specific approaches include qualitative comparative analysis, Bayesian approaches, critical interpretive synthesis, integrative review, narrative synthesis, realist review, meta-ethnography, meta-interpretation, meta-summary, meta-study, meta-synthesis, and mixed studies review [9,17].
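
To make the quantitative end of this continuum concrete, the following minimal sketch computes a DerSimonian-Laird random-effects pooled estimate, the workhorse behind the "does the bundle work" question. The effect sizes and variances are hypothetical and chosen only for illustration; production analyses would use a dedicated meta-analysis package.

```python
# Minimal sketch of a DerSimonian-Laird random-effects pooled estimate.
# The effect sizes and variances below are hypothetical, for illustration only.
import numpy as np

yi = np.array([0.30, 0.12, 0.45, 0.05, 0.26])  # study effect sizes (e.g., log risk ratios)
vi = np.array([0.02, 0.01, 0.05, 0.01, 0.03])  # within-study variances

w_fixed = 1.0 / vi                              # fixed-effect (inverse-variance) weights
y_fixed = np.sum(w_fixed * yi) / np.sum(w_fixed)

# Cochran's Q and the DerSimonian-Laird estimate of between-study variance tau^2
Q = np.sum(w_fixed * (yi - y_fixed) ** 2)
df = len(yi) - 1
C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)

w_re = 1.0 / (vi + tau2)                        # random-effects weights
y_re = np.sum(w_re * yi) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"tau^2 = {tau2:.3f}, pooled effect = {y_re:.3f} "
      f"(95% CI {y_re - 1.96 * se_re:.3f} to {y_re + 1.96 * se_re:.3f})")
```

The between-study variance tau^2 is what separates this model from a fixed-effect synthesis: when studies of a complex intervention vary in components or implementation, tau^2 absorbs that heterogeneity rather than ignoring it.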

4. Parameters that influence the choice of methods for examining complex interventions in systematic reviews

Both perspectives described previously can be integrated into a broader rubric (Fig. 1). It considers the following parameters (discussed in more detail in the following): nature and timing of the decision (clinical practice guideline, policy, or other); purpose of the review; extent of existing evidence; logistic factors such as the timeline, process, and resources for deciding the scope of the review; and value of information to be obtained from choosing specific systematic review methods.

Fig. 1. Parameters for choosing analytic methods for systematic reviews of complex interventions.

4.1. Nature of the decision that the systematic review will support

Systematic reviews of complex interventions can sometimes present challenges to decision makers. The reason is that the usual "efficacy-focused" research questions framing the review or the standard meta-analytic methods often selected to synthesize findings may not lead to information that decision makers need or can use. In some instances, decision makers may be looking for the range of potential interventions and intervention components that are effective so that they can design scalable policies to address a specific problem. Such problem-focused reviews may include a range of interventions; some may be considered simple, whereas others may be considered complex. Alternatively, decision makers may be looking for evidence for the use of a specific intervention with the potential to affect a myriad of outcomes. They may want information to model the impact of such an intervention within their population, setting, or context. These decisions may require methods that parse the effect of components or context on interventions or the effect of interventions within subgroups. Finally, some decisions may require information other than efficacy estimates, such as a qualitative understanding of what happens when an intervention is implemented. Box 1 offers three examples of reviews demonstrating how the nature of the decision influenced the selection of methods.

Box 1. The nature of the decision influences analytic strategies in systematic reviews of complex interventions: examples.

  • Michael et al. conducted a review to support the US Preventive Services Task Force recommendation on primary care interventions to prevent falls in community-dwelling older adults [18]. In this review, the outcomes of interest were narrow (fall-related morbidity, mortality, and quality of life), yet the interventions evaluated were diverse and could be broadly categorized as one of five types. The authors pooled the all-cause mortality outcome using random-effects meta-analysis across all intervention types; the result was a null effect. They also conducted a narrative and quantitative synthesis within the five intervention types that were identified across the body of evidence. Within a single intervention type, the authors further stratified by specific treatment type and in some cases conducted a random-effects meta-regression to identify study-level covariates explaining heterogeneity in effect size within intervention types (a minimal sketch of such a meta-regression follows this box). This approach identified effective interventions and provided the Task Force with specific intervention options to call out as part of its recommendation.

  • Kay et al. conducted a review of center-based early childhood education programs to promote health equity and included a range of educational, social, and health outcomes [19]. In this review, authors used eight different meta-analyses, one for each outcome of interest, to summarize the impact of this type of intervention on children who attended these programs as compared with children who did not. Each result was stratified by the three main models for delivering these interventions, but comparative effectiveness among the models was not of interest. This review also used meta-analysis to estimate the impact on effect size for specific program features; this approach was limited to the studies that provided information about the relationship between program feature and outcome.

  • Glenton et al. used a framework synthesis approach to conduct a qualitative evidence synthesis related to barriers and facilitators to the implementation of lay health worker programs to improve access to maternal and child health [20]. In this example, the systematic review was not intended to determine whether these interventions were effective, but rather to synthesize evidence concerning their implementation.
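
The sketch below illustrates the kind of random-effects meta-regression used in the Michael et al. example. All data are hypothetical: x stands in for a study-level covariate (eg, intervention intensity), and the between-study variance is fixed at an assumed value for brevity, whereas a real analysis would estimate it (eg, via REML) and typically apply small-sample adjustments.

```python
# Minimal sketch of a random-effects meta-regression via weighted least squares.
# yi, vi, and x are hypothetical study-level data, for illustration only.
import numpy as np
import statsmodels.api as sm

yi = np.array([0.10, 0.22, 0.35, 0.41, 0.55, 0.60])  # study effect sizes
vi = np.array([0.02, 0.03, 0.02, 0.04, 0.03, 0.05])  # within-study variances
x = np.array([1, 1, 2, 2, 3, 3])                     # hypothetical covariate

tau2 = 0.01  # assumed between-study variance; estimated in a real analysis
X = sm.add_constant(x)                               # intercept + covariate
model = sm.WLS(yi, X, weights=1.0 / (vi + tau2)).fit()
print(model.params)  # the slope estimates how the covariate shifts effect size
```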

4.2. Timing of the decision that the systematic review will support

Decision makers often use systematic reviews as the basis for clinical or public health practice guidelines; program development; performance measure specification; or resource allocations, investments, and coverage decisions. Balancing the need for timely information to inform decisions with the need for methodologic rigor is a perennial issue that plagues systematic reviewers.

The timing of the policy decision influences the choice of analytic methods indirectly through the development of the research questions that frame the review, the search strategy, and other scoping decisions (eg, outcomes included). A short timeline for a clinical practice guideline, policy decision, or other decision can also influence the choice of analytic methods directly in two ways.

First, it can serve as a significant constraint on the use of certain methods. For example, it can constrain the use of recursive searches (eg, meta-synthesis using grounded formal theory [21]) or multiple concurrent data sources that require synthesis and reconciliation (eg, supplementing a meta-analysis on interventions to improve medication adherence with a meta-ethnography of the reasons for lack of adherence among patients). Second, review authors with limited familiarity with some of the more novel methods or methods requiring specialized software may need to consider whether sufficient time is available to accommodate the learning curve associated with the use of a method new to them.

4.3. Purpose of the systematic review

A central issue in framing questions in systematic reviews is the degree of certainty or causal inference that the reader should attach to the conclusions. Causal inference in the context of a systematic review refers to the degree to which the review is able to isolate the effect of the intervention on the target outcomes. Even if systematic reviews comprise exclusively randomized controlled trials, they provide observational rather than experimental data because the studies are not randomly assigned across different contexts, populations, or conditions. As a result, quantitative synthesis from these data cannot serve as a test of causal inference in the way that an experimental study can. Nonetheless, systematic reviews can provide the building blocks necessary for causal inference, through assessments of strength of association, consistency, and dose-response. Other activities undertaken in a systematic review such as evaluation of applicability or indirectness (in judging specificity, biological plausibility, and coherence) and assessment of risk of bias and study limitations (in judging temporality or study design suitability) allow further support for causal claims [22].

In a review of a complex intervention, the multiple components of the intervention may each act on the target population separately or concurrently. In addition to understanding whether the complex intervention is effective as a whole, investigators also seek to understand the interaction of the components and the role that each plays in achieving a desired effect. Multiple pathways can result in the same outcome [23].

Thus, reviews of complex interventions require closer attention to necessary and sufficient causes for change than systematic reviews of simple interventions. In the sufficient cause model of causality [24], a sufficient cause guarantees the outcome under study [25], but the outcome can occur in the absence of a specific sufficient cause. This sufficient cause may include one or more of a constellation of components. A necessary cause is one without which the outcome cannot occur, but it may not be sufficient to cause the outcome to occur. One question that a systematic review of a complex intervention may ask is whether, in fact, all the components are necessary or whether a subset of intervention components drives the observed effects. To assess the role of component causes requires a referent condition that is a clearly specified alternative. Even when the intervention itself is well characterized, the referent condition can be poorly specified or vary widely across studies. Without this basis for assessing causality (having a clearly specified referent condition), only association or description is possible.
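
To make the necessary/sufficient distinction concrete, the following minimal sketch checks component conditions against outcomes across a small set of invented studies, in the crisp-set style associated with qualitative comparative analysis. Real analyses would use consistency thresholds rather than exact set relations, and the studies and components here are hypothetical.

```python
# Minimal sketch of checking necessary and sufficient component causes
# across a hypothetical set of studies. All data are invented for illustration.
studies = [
    # (component A present, component B present, intervention effective)
    (True,  True,  True),
    (True,  False, True),
    (False, True,  False),
    (True,  True,  True),
    (False, False, False),
]

def necessary(idx):
    # A condition is necessary if every effective study has it present.
    return all(s[idx] for s in studies if s[2])

def sufficient(idx):
    # A condition is sufficient if every study with it present is effective.
    return all(s[2] for s in studies if s[idx])

for name, idx in [("A", 0), ("B", 1)]:
    print(f"component {name}: necessary={necessary(idx)}, sufficient={sufficient(idx)}")
```

In this invented data set, component A is both necessary and sufficient for effectiveness, whereas component B is neither, which is exactly the kind of component-level claim that requires a clearly specified referent condition across studies.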

Thus, systematic reviewers need to understand the degree to which causality needs to be assessed to select the best method for reviewing complex interventions. However, systematic reviews may indeed have multiple purposes: they can answer the question of whether a complex intervention is effective in its entirety but also questions about whether an approach works in specific contexts and in certain subgroups of individuals. Thus, systematic reviews may be used to support causal claims for an intervention’s overall impact. They may be used additionally to infer the contexts in which interventions may be best applied, whether specific components of the intervention are more or less effective than others, and whether effectiveness is driven by subsets of components (Box 2).

Box 2. The purpose of the review shapes analytic strategies in systematic reviews of complex interventions: an example.

Thomas et al. used qualitative comparative analysis on a subset of studies within a multimethod systematic review that was designed to evaluate the effect of community engagement interventions on reductions in health inequalities [26,27]. This review included a meta-analysis, economic analysis, and thematic analysis, with the purpose of identifying an effective approach, for whom, and under what circumstances. An additional purpose of the review was to identify theory underpinning community engagement and explore the mechanisms through which it occurs. The authors used qualitative comparative analysis to identify the features of community engagement interventions that were present among effective interventions for promoting breastfeeding among new mothers and pregnant women. The features selected for qualitative comparative analysis included three theories of change (empowerment, involvement in design, and lay leaders) that the authors had identified as each having an independent and statistically significant effect size in post hoc random-effects ANOVA analyses within the review. Other literature also supported these as potentially important features of community-engaged interventions.

If a mechanism of action has been established, reviewers can apply several analytic approaches. When the mechanism of action is clearly understood, systematic reviewers can use approaches such as standard meta-analysis, meta-regression, or network meta-analysis. Eligible study designs may extend beyond trials to include observational studies. As noted earlier, these approaches do not provide confirmation or refutation of causal claims, but they can be used to identify the strength of association, consistency, and dose-response of an intervention on an outcome.

When reviewers hypothesize that multiple mechanisms of action influence an outcome, they can use theory-exploring approaches such as finite-mixture modeling, integrative meta-synthesis [28], realist synthesis, or qualitative comparative analysis. The results of these analyses can help reviewers to explore issues of causal inference such as specificity (does changing a specific exposure result in a change in outcome?) and biological plausibility (is the observed relationship believable in physiologic terms?). Additional primary and secondary analyses may be required to produce information on other issues such as strength of association, consistency, and dose-response to build a causal claim.
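
As a hedged illustration of the finite-mixture idea mentioned above, the sketch below fits a two-component Gaussian mixture to hypothetical study effect sizes to ask whether they plausibly arise from two latent subgroups of studies (eg, two mechanisms of action). A full meta-analytic mixture model would additionally weight studies by their within-study variances.

```python
# Minimal sketch of finite-mixture modeling over study effect sizes.
# The effect sizes are hypothetical, for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

effects = np.array([0.05, 0.08, 0.12, 0.10, 0.45, 0.52, 0.48, 0.55]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(effects)
print("component means:", gm.means_.ravel())   # latent subgroup centers
print("mixing weights:", gm.weights_)          # share of studies per subgroup
print("study assignments:", gm.predict(effects))
```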

If no known mechanism of action exists, reviewers may use theory-generating approaches, such as meta-ethnography [29,30] or grounded formal theory [28].

4.4. Extent of existing evidence

The choice of analytic methods also depends on whether existing evidence supports the use of an analytic approach. As with other parameters, this issue is not unique to complex interventions, but it plays a large role in determining the choice of methods. Systematic reviews of complex interventions may require data on context, components, and the implementation process that may not be reported alongside outcomes data. Systematic reviews of complex interventions that seek to understand the contributions of specific components or the conditions for success or failure will likely need to expand their searches beyond traditional controlled studies. They may need to include "sibling" publications (ie, multiple publications from the same study, or publications from different studies of the same intervention) of process evaluations and qualitative studies. When such data are not available (as is often the case), investigators may need to collect such information from review authors or other key informants (Box 3).

Box 3. Extent of existing evidence shapes analytic strategies in systematic reviews of complex interventions: an example.

An AHRQ-sponsored review on strategies to improve mental health in children through dissemination, implementation, or quality improvement identified extremely diverse interventions. This factor posed significant challenges to synthesis and extraction of common predictors of benefits and harms [31]. In an extended analysis, the review authors reached out to the principal investigators of included studies to identify critical intervention components.

If evidence is available or can be collected on components, context, and implementation, systematic reviewers can consider a wide array of methods, including qualitative comparative analysis, meta-ethnography, and finite-mixture modeling. If such evidence is neither available nor obtainable in a timely fashion, systematic reviewers will likely be limited to narrative and quantitative syntheses (including meta-analysis or meta-regression) of traditional designs (eg, experimental, quasi-experimental, controlled observational studies). In some instances, the data may be available from these sources for more complex analysis but may require substantial investment of effort in data collection, cleaning, and confirmation before they can be used for analysis.

4.5. Logistic factors: timeline, processes, and resources for deciding the scope of the review

Although investigators must make decisions about the timeline, processes, and resources required to conduct a systematic review for all their reviews, these planning aspects must be given more intense consideration for reviews of complex interventions. The parameters or components of the intervention of greatest interest should be identified and specified a priori, if possible. Furthermore, analytic approaches must align with the research questions of interest.

Sometimes, an array of analytic approaches is required. Conducting a baseline analysis to establish the effectiveness of the intervention is often the first step, but it is sometimes not sufficient to provide results that will be meaningful for specific decisions or to fully understand the theoretical underpinnings of the intervention. In addition, although the results of an evidence review may suggest that a meaningful effect of a complex intervention can be achieved, whether it will be achieved in practice requires additional analysis. These questions will require a synthesis of evidence from the field of dissemination and implementation science, with the possible inclusion of cost-effectiveness or budget impact literature. When this is the case, additional analyses must be undertaken (Box 4).

Box 4. Logistic factors influence analytic strategies in systematic reviews of complex interventions: an example.

An AHRQ-funded project on health information exchange comprised eight key questions and included sufficient resources to conduct a broad and deep review that took 18 months to complete [32]. The key questions addressed the following aspects of health information exchange: effectiveness, harms, intermediate outcomes, use, usability, facilitators and barriers, and implementation and sustainability. Included studies drew from data sources as varied as (1) clinical data from electronic health records to assess effectiveness outcomes, (2) surveys and audit logs to assess health information exchange use, and (3) focus group and key informant interviews to identify facilitators and barriers to implementation and use. Investigators were unable to synthesize the evidence quantitatively to answer each research question of interest. They did, however, integrate the evidence across these dimensions to as great an extent as possible. The investigators looked for consistency and alignment of studies, and they provided an overview of the current "state of the field of health information exchange" to guide future investments.

The timeline, processes, and resources required to complete this entire collection of reviews for one complex intervention can be challenging to achieve concurrently, or even sequentially, for any single systematic review project. Indeed, sometimes questions may be unknown at the time of project inception. A synthesis of qualitative evidence or of evidence to inform decisions about adopting a health technology may, of necessity, be explored in separate and subsequent review projects. Thus, the overall review process may be iterative or even recursive.

4.6. Value of information

Systematic reviewers should consider the relative value of information from the analytic approaches they propose to use. Assessing the relative contribution of components of a complex intervention to the outcome can increase the benefits of a review, but these benefits come with increased costs. A review that systematically explores components as effect modifiers in meta-regressions or uses qualitative comparative analysis requires considerably greater investment of time than a conventional quantitative or narrative synthesis [26]. Information on the nuances of the implementation of the intervention may not be available in published papers. When such information is available, abstracting, categorizing, and interpreting it imposes a burden on systematic reviewers beyond that of data acquisition.

5. Risks arising from the choice of analytic methods

All analytic choices carry risks. The optimal method balances the benefits of the information obtained from a particular analytic method with the constraints and risks of the method. We describe salient risks in the following.

5.1. Risk of false inference because of inappropriate choice of methods

Selecting a method that ensures valid inference deserves careful consideration. The wrong method can produce false inferences, which can in turn lead to the application of inaccurate, contradictory, or even erroneous findings in clinical decisions (eg, guideline development) and policy decisions.

5.2. Risk of investigator bias in recursive approaches, or a posteriori methods

Constraints such as heterogeneity of studies or limited availability of data can alter analyses. Systematic reviews of complex interventions may have to abandon planned strategies and select new approaches midstream. A recursive approach to selecting methods can, however, lead to investigator bias. Such bias may be real or perceived, and both can affect the acceptability of the review results. A well-designed analytic framework can anticipate potential changes in the analysis plan, offer context for these changes, and provide transparency in communicating results to stakeholders.

5.3. Risk of inefficiency

Systematic reviews of complex interventions require careful planning to ensure that all key steps in the review process are designed to produce appropriate inputs for analysis and to minimize having to redo key steps (such as re-abstraction). The complexity of these interventions, coupled with variations in reporting, necessitates more upfront planning of study abstraction forms, training of study abstractors, and reliability checks in abstraction across the review team than has been common practice to date. Such planning helps to ensure consistency in how interventions are described and grouped. For systematic review methods seeking to explore heterogeneity in outcomes based on intervention components, detailed information about intervention components needs to be captured during study abstraction. Of particular importance is distinguishing among the presence, absence, and nonreporting of specific components.
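
One lightweight way to preserve that three-way distinction during abstraction is to code it explicitly rather than collapsing "not reported" into "absent." The sketch below is purely illustrative: the component names, studies, and coding scheme are hypothetical.

```python
# Minimal sketch of an abstraction record distinguishing a component being
# present, absent, or simply not reported. All names are hypothetical.
from enum import Enum

class Component(Enum):
    PRESENT = "present"
    ABSENT = "absent"
    NOT_REPORTED = "not reported"

abstraction = {
    "Study A": {"exercise": Component.PRESENT,
                "home_modification": Component.ABSENT},
    "Study B": {"exercise": Component.PRESENT,
                "home_modification": Component.NOT_REPORTED},
}

# Treating NOT_REPORTED as ABSENT would bias component-level analyses,
# so downstream syntheses can filter explicitly on reporting status.
reported = {s: c for s, c in abstraction.items()
            if c["home_modification"] is not Component.NOT_REPORTED}
print(list(reported))  # only studies that reported on home modification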

6. Implications for reporting

The process of choosing to use an analytic approach does not necessarily lead to mutually exclusive options; nor is it linear. Systematic reviews of complex interventions comprising multiple questions may use multiple analytic approaches. Reviewers may elect to revise their analytic approach based on new or changing considerations during the review but should guard against bias through transparency of reporting.

Using a priori protocols guards against bias in systematic reviews; reviewers should use them routinely [33]. However, systematic reviews of complex interventions may begin with a complex analysis plan and still require modifications while the review is being conducted, as reviewers discover new and unexpected facets of complexity and encounter limitations of reporting. Such changes should be documented clearly and published protocols amended appropriately. Reviewers should describe the change, the rationale for the change, and the impact of such change on the overall conclusions of the systematic review.

Specifically, systematic reviewers of complex interventions should routinely report on the following issues.

  • Is the review expected to help support a specific decision? If so, what is the decision?

  • How did the choice of analytic methods help to achieve the objective?

  • Were the analysis methods chosen in advance?

  • Were the analysis methods altered during the review? Why were changes made?

  • What are the next steps for analysis? What data are necessary to accomplish these next steps?

7. Conclusion

A wide and increasing array of analytic approaches is available for analyzing complex interventions in systematic reviews. Parameters to consider when choosing analytic methods for complex interventions include nature and timing of the decision (clinical practice guideline, policy, or other); purpose of the review; extent of existing evidence; logistic factors like the timeline, process, and resources for deciding the scope of the review; and value of information to be obtained from choosing specific systematic review methods. Reviewers may need to revise their analytic approach based on new or changing considerations during the course of the review but should guard against bias through transparency of reporting.

Acknowledgments

The authors are grateful to Christine Chang, Stephanie M. Chang, and Kathleen N. Lohr for their guidance. We also deeply appreciate the support provided by Makalapua L. Motu’apuaka, Elizabeth O’Connor, and Lyndzie Sardenga.

Funding:

This project was funded under contract no. HHSA290201200004C from the Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services.

Footnotes

Conflicts of interest: The authors have no conflicts of interest to report. The authors alone are responsible for the content and writing of the paper. Statements in the report should not be construed as endorsement by the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services.

References

  • [1] Steinberg E, Greenfield S, Mancher M, Wolman DM, Graham R. Clinical practice guidelines we can trust. Washington, DC: National Academies Press; 2011.
  • [2] National Guideline Clearinghouse. Inclusion criteria. Agency for Healthcare Research and Quality; 2014. Available at http://www.guideline.gov/about/inclusion-criteria.aspx. Accessed July 2015.
  • [3] Anderson LM, Petticrew M, Chandler J, Grimshaw J, Tugwell P, O'Neill J, et al. Introducing a series of methodological articles on considering complexity in systematic reviews of interventions. J Clin Epidemiol 2013;66:1205–8.
  • [4] Burford B, Lewin S, Welch V, Rehfuess E, Waters E. Assessing the applicability of findings in systematic reviews of complex interventions can enhance the utility of reviews for decision making. J Clin Epidemiol 2013;66:1251–61.
  • [5] Guise J-M, Chang C, Viswanathan M, Glick S, Treadwell J, Umscheid CA, et al. Agency for Healthcare Research and Quality Evidence-based Practice Center methods for systematically reviewing complex multicomponent health care interventions. J Clin Epidemiol 2014;67:1181–91.
  • [6] Noyes J, Gough D, Lewin S, Mayhew A, Michie S, Pantoja T, et al. A research and development agenda for systematic reviews that ask complex questions about complex interventions. J Clin Epidemiol 2013;66:1262–70.
  • [7] Petticrew M, Anderson L, Elder R, Grimshaw J, Hopkins D, Hahn R, et al. Complex interventions and their implications for systematic reviews: a pragmatic approach. J Clin Epidemiol 2013;66:1209–14.
  • [8] Petticrew M, Rehfuess E, Noyes J, Higgins JP, Mayhew A, Pantoja T, et al. Synthesizing evidence on complex interventions: how meta-analytical, qualitative, and mixed-method approaches can contribute. J Clin Epidemiol 2013;66:1230–43.
  • [9] Pigott T, Noyes J, Umscheid CA, Myers E, Morton SC, Fu R, et al. AHRQ series on complex intervention systematic reviews—paper 5: advanced analytic methods. J Clin Epidemiol 2017;90:37–42.
  • [10] Squires JE, Valentine JC, Grimshaw JM. Systematic reviews of complex interventions: framing the review question. J Clin Epidemiol 2013;66:1215–22.
  • [11] Tharyan P. Introducing conceptual and analytical clarity on dimensions of complexity in systematic reviews of complex interventions. J Clin Epidemiol 2013;66:1202–4.
  • [12] Tugwell P, Knottnerus JA, Idzerda L. Complex interventions–how should systematic reviews of their impact differ from reviews of simple or complicated interventions? J Clin Epidemiol 2013;66:1195–6.
  • [13] Weir MC, Grimshaw JM, Mayhew A, Fergusson D. Decisions about lumping vs. splitting of the scope of systematic reviews of complex interventions are not well justified: a case study in systematic reviews of health care professional reminders. J Clin Epidemiol 2012;65:756–63.
  • [14] Macaulay AC, Jagosh J, Seller R, Henderson J, Cargo M, Greenhalgh T, et al. Assessing the benefits of participatory research: a rationale for a realist review. Glob Health Promot 2011;18(2):45–8.
  • [15] Guise J-M, Chang C, Butler M, Viswanathan M, Tugwell P. AHRQ series on complex intervention systematic reviews—paper 1: an introduction to a series of articles provides guidance and tools for reviews of complex interventions. J Clin Epidemiol 2017;90:6–10.
  • [16] Anderson LM, Oliver SR, Michie S, Rehfuess E, Noyes J, Shemilt I. Investigating complexity in systematic reviews of interventions by using a spectrum of methods. J Clin Epidemiol 2013;66:1223–9.
  • [17] Kastner M, Antony J, Soobiah C, Straus SE, Tricco AC. Conceptual recommendations for selecting the most appropriate knowledge synthesis method to answer research questions related to complex evidence. J Clin Epidemiol 2016;73:43–9.
  • [18] Michael Y, Lin J, Whitlock E, Gold R, Fu R, O'Connor E, et al. Interventions to prevent falls in older adults: an updated systematic review. Evidence Synthesis No. 80. AHRQ Publication No. 11-05150-EF-1. Rockville, MD: Agency for Healthcare Research and Quality; 2010.
  • [19] Kay N, Pennucci A. Early childhood education for low-income students: a review of the evidence and benefit-cost analysis (Doc. No. 14-01-2201). Olympia, WA: Washington State Institute for Public Policy; 2014.
  • [20] Glenton C, Colvin CJ, Carlsen B, Swartz A, Lewin S, Noyes J, et al. Barriers and facilitators to the implementation of lay health worker programmes to improve access to maternal and child health: qualitative evidence synthesis. Cochrane Database Syst Rev 2013;CD010414.
  • [21] Campbell R, Pound P, Morgan M, Daker-White G, Britten N, Pill R, et al. Evaluating meta-ethnography: systematic analysis and synthesis of qualitative research. Health Technol Assess 2011;15(43):1–164.
  • [22] Hill AB. The environment and disease: association or causation? Proc R Soc Med 1965;58(5):295–300.
  • [23] Schneider CQ, Wagemann C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis. Cambridge, UK: Cambridge University Press; 2012.
  • [24] Cook TD, Campbell DT, Day A. Quasi-experimentation: design & analysis issues for field settings. Boston, MA: Houghton Mifflin; 1979.
  • [25] Rothman KJ, Greenland S, Lash TL. Chapter 2: causation and causal inference. In: Modern epidemiology. Philadelphia, PA: Lippincott Williams & Wilkins; 2008.
  • [26] Thomas J, O'Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Syst Rev 2014;3(1):1–14.
  • [27] O'Mara-Eves A, Brunton G, McDaid G, Oliver S, Kavanagh J, Jamal F, et al. Community engagement to reduce inequalities in health: a systematic review, meta-analysis and economic analysis. Public Health Res 2013;1(4).
  • [28] Campbell R, Pound P, Morgan M, Daker-White G, Britten N, et al. Evaluating meta-ethnography: systematic analysis and synthesis of qualitative research. Health Technol Assess 2011;15(43):1–164.
  • [29] Jamal F, Fletcher A, Harden A, Wells H, Thomas J, Bonell C. The school environment and student health: a systematic review and meta-ethnography of qualitative research. BMC Public Health 2013;13:798.
  • [30] Chenail RJ, George SS, Wulff D, Duffy M, Scott KW, Tomm K. Clients' relational conceptions of conjoint couple and family therapy quality: a grounded formal theory. J Marital Fam Ther 2012;38(1):241–64.
  • [31] Forman-Hoffman VL, Cook Middleton J, McKeeman JL, Stambaugh LF, Christian RB, Gaynes BN, et al. Strategies to improve mental health care for children and adolescents. Comparative Effectiveness Review No. 181. (Prepared by the RTI International–University of North Carolina Evidence-based Practice Center under Contract No. 290-2012-00008-I.) AHRQ Publication No. 16(17)-EHC035-EF. Rockville, MD: Agency for Healthcare Research and Quality; 2016. Available at www.effectivehealthcare.ahrq.gov/reports/final.cfm.
  • [32] Hersh W, Totten A, Eden K, Devine B, Gorman P, Kassakian S, et al. Health information exchange. Evidence Report/Technology Assessment No. 220. (Prepared by the Pacific Northwest Evidence-based Practice Center under Contract No. HHSA 290201200014I.) AHRQ Publication No. 15(16)-E002-EF. Rockville, MD: Agency for Healthcare Research and Quality; 2015.
  • [33] Murad MH, Montori VM, Ioannidis JP, Jaeschke R, Devereaux PJ, Prasad K, et al. How to read a systematic review and meta-analysis and apply the results to patient care: users' guides to the medical literature. JAMA 2014;312:171–9.
