Implementation Research and Practice. 2022 Aug 28;3:26334895221120797. doi: 10.1177/26334895221120797

Expect the unexpected: A qualitative study of the ripple effects of children’s mental health services implementation efforts

Michael D Pullmann 1, Shannon Dorsey 2, Mylien T Duong 3, Aaron R Lyon 1, Ian Muse 1, Cathy M Corbin 1, Chayna J Davis 1, Kristin Thorp 4, Millie Sweeney 5, Cara C Lewis 6, Byron J Powell 7
PMCID: PMC9731268  NIHMSID: NIHMS1852773  PMID: 36504561

Abstract

Background

Strategies to implement evidence-based interventions (EBIs) in children’s mental health services have complex direct and indirect causal impacts on multiple outcomes. Ripple effects are outcomes caused by EBI implementation efforts that are unplanned, unanticipated, and/or more salient to stakeholders other than researchers and implementers. The purpose of the current paper is to provide a compilation of possible ripple effects associated with EBI implementation strategies in children’s mental health services, to be used for implementation planning, research, and quality improvement.

Methods

Participants were identified via expert nomination and snowball sampling. Online surveys were completed by 81 participants, each representing one of five roles: providers of mental health services to children or youth, researchers, policy makers, caregivers, and youth. A partially directed conventional content analysis with consensus decision making was used to code ripple effects.

Results

Four hundred and four unique responses were coded into 66 ripple effects and 14 categories. Categories include general knowledge, skills, attitudes, and confidence about using EBIs; general job-related ripple effects; EBI treatment adherence, fidelity, and alignment; gaming the system; equity and stigma; shifting roles, role clarity, and task shifting; economic costs and benefits; EBI treatment availability, access, participation, attendance, barriers, and facilitators; clinical process and treatment quality; client engagement, therapeutic alliance, and client satisfaction; clinical organization structure, relationships in the organization, process, and functioning; youth client and caregiver outcomes; and use of EBI strategies and insights in one’s own life.

Conclusions

This research advances the field by providing children’s mental health implementers, researchers, funders, policy makers, and consumers with a menu of potential ripple effects. It can be a practical tool to ensure compliance with guidance from Quality Improvement/Quality Assurance, Complexity Science, and Diffusion of Innovation Theory. Future phases will match potential ripple effects with salient children’s mental health implementation strategies for each participant role.

Plain Language Summary: This qualitative study of multiple stakeholders in children’s mental health services identifies several possible ripple effects of implementation strategies, opening a new area of study for implementation science. Ripple effects can be positive, negative, or neutral within the full balance of implementation quality and impact. The list of ripple effects will provide implementation scientists, developers, and others with a useful tool during implementation planning and evaluation. This expert-informed methodology can provide a model for other fields for exploring possible ripple effects within implementation science.

Keywords: ripple effects, children’s mental health services, implementation strategies, unintended consequences

Introduction

Implementation strategies have complex direct and indirect causal impacts on multiple outcomes, and these impacts vary as a function of context, needs, and stakeholders. Implementation strategies are chosen based on what is deemed to be feasible and effective at achieving the primary outcomes of adoption and delivery of an intervention with fidelity (Mannion & Braithwaite, 2012; Merton, 1968; Rogers, 2004). To facilitate strategy selection, implementation scientists have developed approaches to enable a systematic process of identifying mechanisms and causal processes (Franks & Bory; Grol et al., 2013; Lewis et al., 2018; Powell, Beidas, et al., 2017). However, this work has had limited focus on potential unintended outcomes.

Any change in complex systems such as health care can lead to a cascading array of unplanned, non-linear, or unexpected “ripple effects” (Lipsitz, 2012). We define ripple effects as positive or negative outcomes that are caused by evidence-based intervention (EBI) implementation strategies and are unplanned, unanticipated, and/or more salient to stakeholders other than researchers and implementers. Ripple effects are rarely considered when planning implementation in a research context, as these outcomes may be distal and indirect. Identifying, measuring, planning for, and addressing ripple effects should be a priority for implementation science, as ripple effects can hinder or enhance implementation outcomes, effectiveness, and sustainability (Brainard & Hunter, 2015). The purpose of the current paper is to provide children’s mental health services implementation stakeholders—intermediary organizations, purveyors, mental health providers, researchers, and treatment developers—with a compilation of possible ripple effects associated with implementation of EBIs in children’s mental health services.

Features of Ripple Effects

Ripple effects include a broad range of unexpected secondary outcomes, mediators, or process impacts resulting from implementation strategies. As with any outcome, ripple effects are associated with the type of causal action (e.g., the specific implementation strategy) as well as the quality of delivery. Constructs that are intended mediators or outcomes in the context of one implementation effort may be ripple effects in a different implementation effort. For example, increasing the fidelity of treatment delivery is an intended outcome of many implementation strategies, such as training and supervision. However, other implementation strategies, such as mandates requiring the delivery of evidence-based treatments, are focused on improving access to treatment. Therefore, impacts on treatment fidelity may be primary outcomes in studies of training and supervision, but a ripple effect (i.e., unanticipated effect) in studies of mandates. Interestingly, mandates have been found to increase adoption, penetration, and reach, but to decrease treatment fidelity (Park et al., 2018).

Ripple effects exist within an array of different types of unexpected or secondary impacts described in multiple fields of study, often with differing terminology. For instance, the Medical Research Council’s guidance on conducting process evaluations of complex interventions identified “unexpected mechanisms” as an area worthy of investigation (Moore et al., 2015). Secondary outcomes are central to Complexity Science (Brainard & Hunter, 2015) and to Diffusion of Innovations theory, which uses the term “unintended consequences” (Rogers, 2004), as well as to Quality Improvement and Quality Assurance (QI/QA), which uses the term “balancing measures” (Toma et al., 2018) because such outcomes are essential for understanding the full balance of costs and benefits of a quality improvement effort. In implementation and treatment research, some publications have referred to “spillover effects,” generally defined as benefits to intervention non-participants (Desrosiers et al., 2020). An example of spillover effect research in children’s mental health is the impact of interventions that reduce child behavior problems on reductions in parental stress (Martins et al., 2020; Price et al., 2015).

There are at least five features that could be used to clarify this terminology: the type of causal action that leads to the outcome (e.g., intervention, implementation strategy, quality improvement effort), the location of the outcome within the theory of change (e.g., mechanism or outcome), valence (e.g., positive, negative, neutral, or mixed), the level and role of impacted parties (e.g., participant/client, provider, caregiver, team, system), and intentionality. We provide the following classifications based on our reading of the literature and how ripple effects fit within it. The broadest term is balancing measures, which are defined as any secondary outcome generated by a quality improvement or quality assurance effort and can feature any combination of the other features (Toma et al., 2018). Unintended consequences are generally considered to be unintended negative outcomes resulting from an intervention and can impact any party (Brainard & Hunter, 2015). Spillover effects have been defined as unintended or intended outcomes on non-participants generated by any causal action, including actions unrelated to healthcare delivery, such as the impact of one family member’s mental health challenges on others (Lee et al., 2022). The program evaluation literature defines spillover effects as positive, negative, or neutral (Angelucci & Di Maro, 2010), whereas in the mental health literature the definition may be limited to positive outcomes (Desrosiers et al., 2020). We specify ripple effects as unintentional, caused by implementation strategies, with any type of valence, and impacting any role or level.

Ripple Effects Have Been Understudied

Research attention to ripple effects has been generally limited. Two systematic reviews conducted 25 years apart found that fewer than 0.2% of studies on innovations in a broad range of fields examined whether the innovation resulted in unplanned outcomes (Rogers, 2004; Sveiby et al., 2009); other reviews have found a similar absence in perioperative care improvement interventions (Jones et al., 2016), improvement methodologies in surgery (Nicolay et al., 2012), and interventions to reduce patient falls and infections (Manojlovich et al., 2016). Unintended and unexpected ripple effects are infrequently measured even when using strategies specifically designed to consider the full range of implementation outcomes. Complexity science, for example, accepts that unintended consequences are a normal part of any system intervention and emphasizes the importance of purposefully exploring for them. However, a scoping review found that only half of 22 studies that used a complexity-science-informed intervention design featured a process to identify unintended or negative consequences (Brainard & Hunter, 2015).

Over eighty years ago, Robert Merton first described several factors that limit human beings’ ability to consider and anticipate consequences, four of which are pertinent here: (a) general lack of foreknowledge (not knowing what you do not know); (b) implicit assumptions and assessment errors in strategy selection; (c) the “imperious immediacy of interest,” also known as “tunnel vision” or “innovation bias” (Rogers, 2004), caused by an intense focus on desired immediate consequences or positive results so that other consequences are not considered; and (d) decision making based solely on values in the absence of logic or rationale (Merton, 1936; Rogers, 2004). Sveiby et al. (2009) conducted a survey of researchers who study unintended consequences to uncover their beliefs about why research in this area is sparse. Participants rated pro-innovation bias as the principal reason unintended consequences are rarely included. The second-ranked reason was a belief that funders are less interested in research on unintended consequences, also a form of innovation bias. As Sveiby et al. point out, these two reasons are political and emotional in nature, and do not serve the interests of science or implementation. Ignoring ripple effects does not make them go away (Weiner et al., 2017). Importantly, study participants did not believe methodological difficulties were barriers to this work, particularly if reliable methods for anticipating and measuring ripple effects existed. Based on Merton’s theory and Sveiby’s research, it follows that improving our awareness of ripple effects and of methods to study them may result in their increased acceptance and inclusion in future research.

Ripple Effects in Healthcare

In healthcare, there have been several studies and syntheses about unintended consequences, particularly those resulting from performance measurement (PM) systems (formal systems, usually computerized, that support healthcare planning, reporting, and monitoring procedures). Powell and colleagues (2012) explored the impact of implementing United States national PM policies in local healthcare systems and found that PMs could lead to provider inattention to patient concerns and negative impacts on patient autonomy. Importantly, they concluded that unintended consequences did not usually result from the national policies themselves but rather from the policy implementation strategies. Mannion and Braithwaite (2012) provided a four-category taxonomy of the unintended consequences of PM systems in the English National Health Service: 1) Poor measurement (e.g., fixating on measurement targets rather than the spirit of the measure); 2) Misplaced incentives and sanctions (e.g., increased inequality as high-performing systems are rewarded); 3) Breach of trust (e.g., gaming the system by intentionally performing poorly during baseline periods); and 4) Politicization of performance systems.

Ripple Effects in Mental Healthcare

Specific to mental healthcare, Adams conducted qualitative interviews examining the institutionalization of peer support (Adams, 2020) and concluded that these efforts resulted in four major areas of unintended consequences: 1) Shifting the scope of services to be more formal; 2) Narrowing of the peer support workforce by alienating or excluding those without formal education or professional skills; 3) Reduced flexibility and individualization in peer/client relationships; and 4) Increased experience of feeling stigma and discrimination in the workplace.

Children’s mental healthcare is of particular importance, as mental health (MH) disorders constitute the most costly health conditions of childhood, affecting one out of every five children in the United States (Perou et al., 2013). Over 650 evidence-based interventions (EBIs) have been developed to treat child MH problems (Bruns et al., 2014; Chorpita et al., 2016), but very few youth in community mental health settings receive them (Bruns et al., 2015; Weisz et al., 2006). However, the array of implementation strategies used to increase access to children’s mental healthcare has yet to be examined for possible ripple effects. In a systematic review of 18 studies of children’s MH implementation strategies, the authors identified only one that systematically explored for potential harms (Forman-Hoffman, 2017). That study found two negative ripple effects resulting from professional training on client identification and referral for first-episode psychosis: adverse events and false-positive identification (Lester et al., 2009).

We identified two other studies that examined ripple effects in children’s mental health. In a randomized trial of the effectiveness of an intervention to reduce child neglect, researchers used an intervention × implementation strategy 2 × 2 factorial design (Intervention: EBI vs. services as usual; Implementation strategy: fidelity monitoring and feedback vs. no fidelity monitoring and feedback). Results indicated that providing fidelity monitoring and feedback along with the EBI was associated with greater staff retention (Aarons, Sommerfeld, et al., 2009); however, providing fidelity monitoring and feedback to the services-as-usual group was associated with greater staff emotional exhaustion (Aarons, Fettes, et al., 2009). Therefore, the same implementation strategy had different ripple effects depending on the intervention condition, one positive (retention for those in the EBI group) and one negative (emotional exhaustion for those in the services-as-usual group). In a different study, Park et al. (2018) examined the impact of a state policy encouraging EBI adoption in children’s mental health services. While the policy achieved its intended impact, increased EBI adoption and reach, it also produced negative ripple effects such as low-quality EBI use, “off-label” treatment, and modified, low-fidelity, or “counterfeit” versions of EBIs (Ryan et al., 2014).

Purpose of the Current Study

The goal of the current study is to develop a compilation of possible implementation ripple effects from the viewpoint of primary stakeholders of children’s mental health services (youth receiving services, their caregivers, mental health providers, policy makers, and researchers) to support implementation decision-making by anticipating and detecting ripple effects. We do not intend to present a checklist of ripple effects that must be examined in all implementation efforts, as this would be unwieldy and infeasible. We hope that the compilation we provide might temper the factors described above that limit attention to ripple effects in the following ways. It would improve awareness by explicitly describing ripple effects to facilitate consideration and inclusion during research and implementation planning efforts. It may reduce tunnel vision and innovation bias by normalizing the inclusion of ripple effects as an area of study. Once detected during implementation efforts, desirable ripple effects can be capitalized upon, and undesirable ripple effects can be ameliorated, thereby contributing to quality assurance and improvement. Our intent is to describe the range of possible ripple effects to consider for implementation in children’s mental health, which may serve as a model throughout the implementation literature.

Method

This qualitative research study was conducted under the assumptions of a post-positivist paradigm, which posits that interpretations of reality are based on context and past experiences and that the “truth” of phenomena emerges from individual experiences. It also recognizes that researchers cannot be fully detached and objective, and that research and researchers mutually influence each other. This paradigm was applied because the purpose of the research was to obtain a wide range of possible ripple effects salient to multiple participant types, including “objective” researchers. This is important because the goal of the study was not to identify objectively occurring ripple effects, but rather to identify multiple possible experiences from different viewpoints. The coding team has decades of combined experience in children’s mental health research and implementation, and all members have PhDs in Clinical or Community Psychology or a related field. The team assumed that the validity and transferability of the coding process were inherently tied to their own perceptions and context. The research team’s characteristics as cisgender, White academics within a large, traditional medical school likely contributed to the ripple effect categories. As described below, we made several efforts to obtain diverse participants to better capture broad perspectives in terms of role, geography, and race/ethnicity. While another team may have identified or classified these codes differently, the team believes our conclusions point toward deeper truths.

Participants

Table 1 provides sample demographics. The sample consisted of a total N = 81 participants, n = 23 (28.4%) providers of mental health services to children and youth, n = 14 (17.3%) researchers, n = 6 (7.4%) policy makers, n = 18 (22.2%) caregivers of youth consumers of mental health services, and n = 20 (24.7%) youth consumers of mental health services. Most participants were female and White. We did not collect age, but n = 5 (6.2%) participants were under 18 years.

Table 1.

Demographics of full population.

Baseline demographics n %
Age    
 >18 years 76 93.8
 <18 years 5 6.2
Gender
 Female 63 77.8
 Male 15 18.5
 Non-binary, third gender, or gender fluid 3 3.7
Race
 American Indian or Alaskan Native 3 3.7
 Asian 10 12.3
 Black 6 7.4
 Native Hawaiian or Pacific Islander 0 0
 White 51 63.0
 Prefer not to say 4 4.9
 Self-described:
  Hispanic/Latinx 3 3.7
  White and Black 2 2.5
  Mixed/Black 1 1.2
  Arab American 1 1.2
Ethnicity
 Latinx 11 13.6
 Non-Latinx 70 86.4
Role
 Provider 23 28.4
 Researcher 14 17.3
 Policymaker 6 7.4
 Caregiver 18 22.2
 Youth 20 24.7

Procedures

Participants were recruited from throughout the United States using purposeful sampling to ensure diversity, representativeness, and expertise within the five roles. Criteria for participation were that participants had to (a) fit their role (i.e., be a caregiver of a child/youth who received mental health services; a youth aged 16–25 who has received mental health services; a youth mental health therapist; a policy maker such as a state, county, or regional mental health director, legislator, or legislative staff; or a researcher), and (b) have experience with some aspect of implementing children’s mental health services and supports, such as working on mental health policy or advisory committees, providing peer support, or conducting training. Participants were identified with the support of several organizations. We aimed to recruit approximately 20 participants from each role and generally met this goal except for policy makers; as described in the Results, this recruitment led to code saturation. Youth consumers were identified by Youth Motivating Others through Voices of Experience (Youth MOVE National), which did direct outreach to invite 40 youth to participate, and when this did not identify enough youth, sent announcements on Twitter, Facebook, and an internal email list. Caregivers were recruited by the Family-Run Executive Director Leadership Association (FREDLA), which initially reached out directly to approximately 35 family leaders, and when this did not identify enough caregivers, emailed a recruitment flyer to a listserv. Researchers and providers were identified via nominations by the study team and colleagues of the team from the School Mental Health Assessment Research and Training (SMART) Center and via an email to listservs run by the Society for Implementation Research Collaboration (SIRC).
Policy makers were recruited via warm-handoff emails from colleagues of the research team and cold emails to addresses identified via contacts provided by the National Association of State Mental Health Program Directors (NASMHPD). Additionally, our research and consultant team brainstormed a list of researchers, providers, and policy makers with whom we have worked or who have published literature on children’s mental health services implementation and ripple effects. Approximately 120 policy makers were invited to participate; while most did not respond, some stated that they could not take part in the research due to conflicts of interest resulting from the monetary incentive.

Research staff sent emailed requests to participate to the 66 youth and caregivers recruited by Youth MOVE National and FREDLA. An internal team database of potential researchers and providers in behavioral health provided an additional 148 individuals, who were asked via blind email whether they would be interested in participating; 136 responded and were sent a consent form via online survey. Following these consent survey reach-outs, 135 gave consent.

Informed consent occurred online. Participants under 18 years of age were required to obtain signed consent from a parent or legal guardian. Participants were informed that the purpose of the study was to discover the types of impacts that might result from children’s mental health services. They received a $100 gift card for completion of this first survey. Study data were collected and managed using REDCap online data capture tools hosted at the University of Washington (Harris et al., 2019). The survey required approximately 30 min to 1 h to complete. Youth participants were offered the opportunity to complete the survey with the support of a telephone or video-enabled interview or live access to a research coordinator to field questions. All but one youth participant chose this option.

The current manuscript presents the results from one portion of a larger study to extend this work by linking ripple effects to the most common implementation strategies and identifying gaps in researchers’ ripple effect awareness. This study was determined to be exempt from review by the University of Washington Institutional Review Board.

Measures

Participants provided demographics and then read definitions of EBIs, EBI implementation strategies, and ripple effects, and were provided with three examples of ripple effects. One example read, “Training therapists in an EBI has the intended effect of increasing therapist use of an EBI and quality of treatment for clients. If training is done well, there may be several positive ripple effects: therapists might find increased satisfaction with their job, increased feelings of self-worth in the work they do, and may be less likely to leave their job in search of other opportunities. However, if training is done poorly or is done when therapists are already feeling burdened, it may lead to negative ripple effects: therapists may experience burnout, fatigue, dissatisfaction, resistance to the training, and be more likely to leave their job in search of other opportunities, and their clients may be dissatisfied at poor service.” Participants were directed to brainstorm up to 15 ripple effects, providing a name and a description or definition for each.

Analyses

Qualitative analyses were conducted by four experienced members of the team and proceeded in two steps. First, a partially directed conventional content analysis using group consensus was used to thematically code ripple effects (Glaser & Strauss, 1967). We began with a list of ripple effects based on the literature and on our expertise during the grant writing process. Coding of survey responses into multiple themes was conducted by the first and sixth authors, with review and edits by the second and fourth. Discrepancies in coding were discussed until agreement was reached on a final code or an adjustment to the codebook. Responses that were ripple effects but not in our existing list were added. Using consensus decision making, names and definitions of codes in the existing list were iteratively edited to better fit the data. During this process, several highly similar codes were merged. When possible, we coded ripple effects so they did not indicate valence, or merged two opposite-valence codes into a code without valence (e.g., “Loss of income” and “Increased income” were merged into “Income”), but this was often not possible (e.g., “Overspecialization in EBIs”). Responses that were agreed not to be ripple effects were discarded; these were vague, described ripple effects from an intervention rather than an implementation strategy, or did not answer the question. Three examples of discarded responses are “Medication interaction: Some medications have interactions which interact poorly with each other,” “Exposes the weakness: We can see when the school didn’t work out, when the system failed the kids,” and, “I love that you are thinking about latent and manifest consequences of the actions you are considering taking!” Second, we applied the constant comparative method using group consensus to cluster ripple effect codes into a smaller number of meaningful ripple effect categories.
We engaged in an iterative process of categorization, editing, and re-categorization, while continually comparing the categories and codes to the existing data. Although several codes could reasonably be placed in more than one category, we assigned each code to its best fitting single primary category.

Results

There were 404 unique responses to the ripple effects question; these were coded into 66 unique ripple effects, which were further grouped into 14 categories. This ratio, an average of six unique responses per coded ripple effect and roughly five ripple effects per category, indicates that the analyses achieved thematic saturation. Lower ratios, such as only 1–2 responses per unique ripple effect, would have indicated that there were many more ripple effects to uncover. Table 2 provides the full list of categories and ripple effects. General knowledge, skills, attitudes, and confidence about using evidence-based interventions comprised eight ripple effects concerning the impact on knowledge, skills, attitudes, and confidence of therapists, clients, caregivers, or others regarding a therapist’s use of an EBI. An example quote is “Confidence—feeling self-assured to provide a service because you have had the proper training and supervision.”
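The saturation ratio described above can be verified with simple arithmetic (a minimal sketch using only the counts reported in this section):

```python
# Counts reported in the Results section.
unique_responses = 404     # unique survey responses about ripple effects
ripple_effect_codes = 66   # distinct ripple effects after consensus coding

# Average number of responses collapsed into each coded ripple effect.
# A high ratio (here about six) is the indicator of thematic saturation;
# a ratio near 1-2 would suggest many more ripple effects remain uncovered.
responses_per_code = unique_responses / ripple_effect_codes
print(round(responses_per_code, 1))  # prints 6.1
```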

Table 2.

Compilation of possible ripple effects of children’s mental health services implementation strategies.

Ripple effect category and ripple effects within each category Description of ripple effect or category
1. General knowledge, skills, attitudes, and confidence about using evidence-based interventions (n = 8) Any impact on the knowledge, skills, attitudes, and confidence of therapists, clients, caregivers, or others regarding the use of an evidence-based intervention.
  Confidence/Self-efficacy in using EBIs One's belief in their capacity to use an EBI with fidelity.
  Skills in using EBIs One's actual ability to use an EBI with fidelity.
  Knowledge of EBIs Facts and information about EBIs.
  Beliefs in provider's competence to use EBIs A perception of one's provider having sufficient knowledge, judgment, and/or skills in using EBIs.
  Overspecialization in EBIs Specializing in one or a few EBIs to such a degree that other EBIs are neglected.
  Feelings of regret for not having used EBIs with past clients Feeling guilt, remorse, or regret for previously implemented, non-EBI treatment practices.
  EBI Intervention fatigue A feeling of being overwhelmed by replacing or supplementing old knowledge and skills with new knowledge and skills, or from trying to learn and master too many varying EBIs.
  Positive attitude and supportiveness of EBI Degree of support for and belief in the effectiveness of EBIs; includes the amount of hope for improvement resulting from the EBI.
2. General job-related ripple effects (n = 6) Attitudes, feelings, beliefs about one's job and outcomes associated with those attitudes such as worker retention, job satisfaction, and job burnout.
  Job satisfaction The attitudes clinicians, peer support providers, policy makers, or others have about their job as compared to previous experiences, current expectations, or available alternatives.
  Job burnout Feelings of emotional exhaustion, reduced personal accomplishment, loss of work fulfillment, and reduced effectiveness.
  Job retention Percentage of clinicians, peer support providers, policy makers, or others who stay in their job each year.
  Job autonomy or independence A feeling of independence and self-determination; ability to make your own decisions and act mostly on your own. When autonomy decreases, one might feel micromanaged.
  Job workload/work burden Feelings of burden associated with the amount or difficulty of one's work.
  Sense of job security Feeling that the possibility of losing one's job is very low.
3. EBI treatment adherence, fidelity, and alignment (n = 3) The degree to which EBIs are delivered by therapists in a way that is consistent with EBI training.
  Provider EBI treatment fidelity Whether an EBI intervention is implemented as planned and each component is delivered with competence.
  Provider mixing of practice elements or cherry-picking practices Using elements from multiple EBIs in a single practice, incorporating non-EBI elements (for example, dream analysis) into treatment, or using only practices deemed more useful or easier (e.g., teaching relaxation techniques).
  Provider using “off-label” treatment Implementing a mental health intervention for a disorder that it was not originally designed to treat. Using a treatment that the clinician knows how to do rather than the treatment most appropriate for the client.
4. Gaming the system (n = 4) Manipulating data or practice to achieve the image of an outcome that is not consistent with the spirit of the practice.
  Data gaming Manipulating a data tracking, client management, or progress rewards system for a desired outcome.
  Treatment counterfeiting Delivering a treatment that is supposed to be a particular EBI but in reality lacks the essential elements of the EBI.
  Tokenism Recruiting people from underrepresented groups to provide feedback or be engaged as advisory board members solely to give the appearance of equality and engagement.
  Insufficient use of data tracking systems Providers only entering the minimum amount of data necessary to ensure payment or client tracking, and not using the data system as intended.
5. Equity and stigma (n = 3) Equity related to treatment availability, access, quality, and outcomes by race, ethnicity, or other identity group classification; stigma associated with mental health.
  Racial/ethnic prejudice Being seen as one's racial or ethnic identity rather than individuals.
  Equity Reduction or elimination of disparities (e.g., access to services) between groups stemming from reduction of biases, removal of barriers, and/or inclusion of diverse perspectives.
  Stigma around mental health issues Stereotypes or negative views attributed to a person or groups of people when their mental health and/or behaviors are viewed as different from or inferior to societal norms.
6. Shifting roles, role clarity, task shifting (n = 2) The degree to which the new intervention requires roles that clarifies or conflicts with existing roles, or changes role responsibilities.
  Role clarity The degree to which the new intervention clarifies and distinguishes between existing roles, tasks, responsibilities, and processes. The opposite of role conflict, blurred roles, or turf battles.
  Task shifting When some mental health treatment tasks that had previously been done by therapists are assigned to health workers, peer providers, or others with shorter training and fewer academic qualifications.
7. Economic costs and benefits (n = 4) Any monetary cost or benefit of the use of the implementation strategy.
  Income Amount of money one makes at one's job.
  Ability to bill insurance for an EBI The ability of therapists to bill insurance plans for a particular EBI or mental health service.
  Budget implications Expected impacts on income and expenditures for agencies or state public mental health budgets.
  Monetary costs Monetary costs of the implementation strategy.
8. EBI treatment availability, access, participation, attendance, barriers & facilitators (n = 8) The availability of an EBI in a region (e.g., whether there is a provider that is trained on the EBI), whether the EBI is accessible (e.g., whether there is no waiting list or other barriers to accessing the treatment), the actual use of and client participation in the EBI, and any barriers or facilitators that support the availability, access, and use of services
  Service availability, access, and reach The presence or availability of services and spread of those services across a region—this requires a provider who is trained on the EBI.
  Facilitators of treatment Something that facilitates, makes easier, or supports a client’s ability to attend, engage in or complete treatment. Facilitators can be at the client level, logistic, or other facilitators. Client-level facilitators might be attitudes (belief treatment will work, belief in the need for treatment), emotional, or behavioral strengths in individuals and groups. Logistic facilitators to treatment include things such as insurance coverage, good transportation, and proximity to treatment.
  Barriers to treatment Something that restricts, impedes, or blocks client ability to attend, engage, or complete treatment. Barriers can be at the client level, logistic, or other barriers. Client-level barriers might be attitudes (stigma, feeling treatment might not work), emotional, or behavioral limitations in individuals and groups. Logistic barriers to treatment include things such as cost, transportation issues, and location.
  Participation or attendance at treatment sessions The degree to which the youth, client, or caregiver is involved in the client's treatment sessions.
  Client adherence to treatment recommendations The degree to which the behavior of youth clients or their caregivers corresponds with recommendations from therapists.
  Client transfer/therapist change Burden and/or instability felt by clients, families, and organizations resulting from changing providers.
  De-implementation When a clinical practice or approach to treatment (EBI or non-EBI) that was once offered is discontinued.
  Sustainability Percentage of implemented EBIs that sustain at yearly intervals.
9. Clinical process and treatment quality (n = 7) The process of clinical treatment, including treatment quality, the focus, length, assessment burden, time spent in active treatment, and effects on treatment as usual.
  Treatment quality and effectiveness The degree to which the treatment is high-quality, meaning how effective, engaging, and efficient the treatment is at improving client functioning and symptoms.
  Individualization of treatment goals and practice Tailoring treatment goals and practices to the specific needs of the client.
  Length of time until treatment completion The length of time for a youth client to complete treatment.
  Narrow focus on outcomes Focusing treatment only on one major outcome, neglecting other needs the youth client may have.
  Assessment burden Perception of burden (e.g., hardship or distress) when administering or completing clinical assessments, surveys, and questionnaires.
  Time providing or receiving active treatment Amount of time a provider spends delivering or a client spends receiving active treatment during a session.
  Improvement in quality of “usual treatment” Improved quality and effectiveness of "treatment-as-usual," usual care, or non-EBI treatment.
10. Client engagement, therapeutic alliance, and client satisfaction (n = 5) The degree to which clients are engaged and participating in the treatment planning and process, feelings of alliance and bond between client and therapist, and level of satisfaction with treatment.
  Client engagement in treatment The degree to which the client is engaged in treatment, such as participating during the session, being involved in treatment planning, and initiating contact or questions with the provider.
  Clarity of understanding about client's needs and strengths, treatment purpose, and progress The degree to which clinicians, clients, or their caregivers are informed about and understand the client's needs and strengths, the reasons for and goals of treatment, and the youth client's clinical progress.
  Satisfaction with treatment A client’s rating of important attributes of the process and outcomes of their treatment experience.
  Youth client motivation/self-efficacy to accomplish treatment goals Refers to a youth client's motivation and belief in their capacity to execute behaviors necessary to complete treatment. This reflects confidence in the ability to exert control over one's own motivation, behavior, and social environment.
  Therapeutic alliance A cooperative working relationship between youth client and therapist. Includes youth's sense of being "heard" and responded to in therapy, shared understanding, shared goals, and working together for common therapeutic purpose.
11. Clinical organization structure, relationships in the organization, process, and functioning (n = 9) Elements associated with the organization, structure, process, functioning, and relationships within the organization, including organizational climate, organization's use of resources, supervisory relationships, job applicant pool, and referral rates.
  Administrative burden Anything that is necessary to demonstrate compliance with a regulatory requirement, including the collecting, processing, reporting, and retaining of information, and the financial and economic costs of doing so.
  Organization reputation Refers to people’s collective opinion regarding the organization.
  Organizational climate and culture Employee's shared beliefs and values of an organization, and perceptions of the organization's policies, practices, procedures, and reward systems.
  Referral rates The rate of referrals to mental health treatment.
  Quality of job applicant pool Having a pool of qualified clinician applicants during the hiring process.
  Opportunity costs The loss of potential gain from other alternatives when one alternative is chosen.
  Supervisor-therapist alliance A partnership between a clinical supervisor and a clinician, devoted to the learning and growing of the clinician where there is a strong bond of care, respect, and trust.
  Scalability An attribute that describes the ability of a process, intervention, or organization to grow and manage increased demand.
  Cross-organizational camaraderie Mutual trust, support, friendship, and feelings of community among people.
12. Youth client and caregiver outcomes (n = 5) Client and caregiver thoughts, feelings, behaviors, symptoms, and functioning.
  Functioning and symptoms How the client is functioning in the world, and the symptoms of their mental health problems. This could include symptoms like depression, anxiety, or unwanted thoughts, or functioning such as ability to care for themselves, incarceration/recidivism, school enrollment, graduation, out-of-home placements, employment, lifespan, etc.
  Informal support systems Informal support systems such as friends, family, religious supports, recreation groups, and online social networks.
  Caregiver strain The perception of persistent problems and a feeling of decreased well-being that results from providing prolonged care.
  Empowerment The degree to which one can represent or advocate for their interests, determine the progress of their mental health treatment, and claim their rights.
  Re-traumatization Clinical regression (worsening of symptoms or functioning) or new mental health crisis due to processing past traumas.
  Hope Hopefulness for the future, inspiration, a belief that the situation will improve.
13. Use of EBI strategies and insight in one's own life (n = 2) The application of strategies and practices from an EBI in one's own life.
  Using EBI strategies for self-care Using EBI strategies in one's own life (such as deep breathing or behavioral activation).
  Using EBI strategies with friends and family members Helping friends and families by teaching them EBI strategies (such as deep breathing or behavioral activation), or using strategies with them such as reward systems.

General job-related ripple effects included six ripple effects on the attitudes, feelings, and beliefs about one’s job and outcomes associated with those attitudes such as worker retention, job satisfaction, and job burnout. An example quote is, “Decreased turnover—Increasing clinicians’ felt sense of effectiveness may decrease the desire to switch jobs.”

EBI treatment adherence, fidelity, and alignment included three ripple effects on the degree to which EBIs are delivered by therapists in a way that is consistent with EBI training. An example quote is, “Where the [client’s] data is collected but also meaningfully interpreted in order to implement effective practices.”

Gaming the system included four ripple effects pertaining to manipulating data or practice to achieve the image of an outcome that is not consistent with the spirit of the practice. An example quote is, “People game their therapies to get more money from clients and from agencies to say they did a more elaborate treatment than they truly did.”

Equity and stigma included three ripple effects on equity (racial, ethnic, and identity group) specifically related to treatment availability, access, quality, and outcomes, as well as stigma associated with mental health issues. An example quote is, “When localities are mandated to perform EBP there could be a potential for localities with less resources to struggle implementing the EBP compared to localities with more resources.”

Shifting roles, role clarity, and task shifting included two ripple effects on the degree to which the new intervention introduces roles that clarify or conflict with existing roles, or modifies role responsibilities. An example quote is, “As a parent who is also a provider/professional (peer provider), it can be difficult to manage boundaries with families and providers. Families can think you are “friends” and providers can forget that you are a family member.”

Economic costs and benefits included four ripple effects on any monetary cost or benefit of the use of the implementation strategy. An example quote is, “[Training caregiver and youth as peer supporters] provides income/employment to family members who may have had to quit their ‘day jobs’ already.”

EBI treatment availability, access, participation, attendance, barriers, and facilitators included eight ripple effects on the availability of an EBI in a region (e.g., whether there is a provider trained on the EBI), whether the EBI is accessible (e.g., whether there is a waiting list or other barriers to accessing the treatment such as insurance coverage), client participation in and actual use of the EBI, and any barriers or facilitators that affect the availability, access, and use of services. An example quote is, “Increased access—Offering clients the ability to engage with an EBP in an alternative format may increase access to treatment for many youth who do not wish to come in to an office or do not have the resources to do so.”

Clinical process and treatment quality included seven ripple effects on the process of clinical treatment, including treatment quality, the focus, length, assessment burden, time spent in active treatment, and effects on treatment as usual. An example quote is, “Less time in therapy—Increasing access to an effective treatment may decrease the amount of time clients need to be in the system.”

Client engagement, therapeutic alliance, and client satisfaction included five ripple effects on the degree to which clients are engaged and participating in the treatment planning and process, feelings of alliance and bond between client and therapist, and level of satisfaction with treatment. An example quote is, “Pessimism—Lack of faith in the system which your client is stuck in.”

Clinical organization structure, relationships in the organization, process, and functioning included nine ripple effects on elements associated with the organization, structure, process, functioning, and relationships within the organization, such as organizational climate, organization's use of resources, supervisory relationships, job applicant pool, and referral rates. An example quote is, “Providers [who are trained in EBP] can become more well known, receive accolades, and be supported communally through better treatments and client success.”

Youth client and caregiver outcomes included six ripple effects on client and caregiver thoughts, feelings, behaviors, symptoms, and functioning. An example quote is, “Evidence based treatments that improve client functioning can also improve their performance and retention in school, graduation rates, employment rates, and income.”

Use of EBI strategies and insights in one's own life included two ripple effects on using EBI strategies for self-care and using EBI strategies with friends and family members. An example quote is, “EBPs such as deep breathing are utilized by practitioners [to avoid work burnout].”

Discussion

This study aimed to develop a compilation of possible ripple effects resulting from the implementation of evidence-based children’s mental health services. Participants included a diverse array of experienced implementation experts from multiple roles, including youth mental health consumers, caregivers of child and youth consumers, mental health providers, policy makers, and researchers. We identified 66 ripple effects classified into 13 categories.

Given the importance of developing causal conceptual frameworks for implementation efforts (Franks & Bory, 2017; Grol et al., 2013; Lewis et al., 2018; Powell, Beidas, et al., 2017), this compilation provides children’s mental health implementers, researchers, funders, policy makers, and consumers with a menu of potential ripple effects. Those that are logically or theoretically associated with implementation efforts can be monitored to ameliorate negative and capitalize upon positive ripple effects. This research can increase understanding of the complexity of implementation efforts and the recognition that implementation of EBIs is but one action in a larger, interconnected system. We caution that this compilation should not be used as a checklist of all areas to be included in children’s mental health implementation research and quality control efforts. However, we encourage its use before and during implementation efforts as a list of possible ripple effects to consider when evaluating the full balance of implementation. This list can be a practical tool to help researchers and implementers follow guidance to explore for unexpected outcomes from experts in QI/QA, Complexity Science, and Diffusion of Innovation Theory, as well as the Medical Research Council (Brainard & Hunter, 2015; Moore et al., 2015; Rogers, 2004; Science of Improvement; Toma et al., 2018).

We also hope this compilation can serve to normalize and elevate the discussion of ripple effects in children’s mental health and more broadly. Ripple effects might not currently receive attention because they are deemed out of scope or because innovation bias prevents their identification. They might also be minimized when they contradict the expectations of researchers and implementers. Tunnel vision affects both what is investigated and what is reported (Rosenthal, 1979), compounding the risk that ripple effects will go unidentified and unreported. Regarding negative ripple effects, the pressures of funders’ priorities, political agendas, and career goals may sometimes (consciously or otherwise) override rational, open scientific investigation and reporting. This may be even more true when an implementation effort has positive primary effects (e.g., improved client mental health functioning) and negative ripple effects (e.g., provider burnout and turnover), and implementers may wish to avoid “tainting” positive findings with a more balanced view of the overall implementation effects. By elevating the discussion of ripple effects and recognizing their significance, we hope that exploring for them becomes a system-wide expectation of implementation efforts. Consideration of ripple effects might be included in standards checklists such as the Standards for Reporting Implementation Studies (StaRI) (Pinnock et al., 2017), and considered when conducting peer review of grants, manuscripts, and conference presentations. One step to elevate this discussion would be a systematic literature review of ripple effects and associated concepts (unintended consequences, spillover effects, balancing measures) throughout health care implementation in general. Such a review would help researchers and others understand the full scope of knowledge in this area and which research and practice gaps are most pressing to address. We hope that the current manuscript provides a sampling of the ripple effects such a review might encounter.

Many of the ripple effects do not have existing measures or, where measures do exist, they may be lengthy and burdensome. The inclusion of ripple effects measures in a study is, by definition, exploratory, and concerns about feasibility and burden may explain why such measures are not included more often. There is a strong need for ripple effects to be quantified using pragmatic measures, that is, measures which are (a) acceptable and relevant to multiple stakeholders, (b) compatible with the setting, (c) short and easy to administer, and (d) useful for decision-making (Powell, Stanick, et al., 2017; Pullmann et al., 2013).

Limitations

There are several limitations to generating this compilation. First, this is a small sample focused on children’s mental health services. The generalizability of these findings to a larger sample with a broader focus on health care in general is unknown, and the ripple effects identified here likely represent only a subset of those in the healthcare field. Second, as in any study, a different participant pool may have yielded different results. Our participant pool was limited to the United States and consisted of people with implementation expertise, who may have had more positive attitudes toward EBPs than others in their respective roles. By broadening our pool, we hoped to develop a relatively comprehensive participant base, but this may have also reduced the number of participants with expertise in implementation strategies. Our participation rate from policy makers was much lower than desired, likely due to the difficulty of incentivizing government workers, given rules against providing monetary incentives, and time pressures due to COVID-19. Third, it was difficult to balance parsimony with comprehensiveness during coding and classification. While we attempted to create valence-free codes, this was often not possible, as some codes did not have clear opposites (e.g., “burnout” may contrast with multiple codes, including job satisfaction, retention, and empowerment, but it is not the opposite of any single code). Fourth, to manage our resources and obtain input from the largest number of participants, we used online surveys rather than focus groups or in-person interviews, with an exception for telephone interviews with youth. It is possible that participants who were less familiar with differentiating implementation strategies from interventions (and the greyness in between) (Eldh et al., 2017) did not fully understand the survey, and these difficulties may have affected the validity of our findings.

Conclusions

The current study makes a significant contribution to children’s mental health services implementation efforts, and to implementation science in general, by providing a clear and useful compilation of possible ripple effects. Its utility will be expanded in further study phases. We intend this manuscript to serve as a starting point for consideration and discussion of ripple effects, and we encourage the development of ripple effect compilations in other fields and areas of implementation. We welcome comments and critiques that add to and iterate upon this compilation, to benefit a greater understanding of implementation efforts and their effects.

Footnotes

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Institutes of Health, (grant number R21MH119360).

References

  1. Aarons G. A., Fettes D. L., Flores L. E., Sommerfeld D. H. (2009). Evidence-based practice implementation and staff emotional exhaustion in children’s services. Behaviour Research and Therapy, 47(11), 954–960. 10.1016/j.brat.2009.07.006
  2. Aarons G. A., Sommerfeld D. H., Hecht D. B., Silovsky J. F., Chaffin M. J. (2009). The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: Evidence for a protective effect. Journal of Consulting and Clinical Psychology, 77(2), 270–280. 10.1037/a0013223
  3. Adams W. E. (2020). Unintended consequences of institutionalizing peer support work in mental healthcare. Social Science & Medicine, 262, 113249. 10.1016/j.socscimed.2020.113249
  4. Angelucci M., DiMaro V. (2010). Program evaluation and spillover effects: Impact-evaluation guidelines. Inter-American Development Bank. https://publications.iadb.org/publications/english/document/Program-Evaluation-and-Spillover-Effects.pdf
  5. Brainard J., Hunter P. R. (2015). Do complexity-informed health interventions work? A scoping review. Implementation Science, 11(1), 1–12. 10.1186/s13012-016-0492-5
  6. Bruns E. J., Pullmann M. D., Kerns S. E. U., Hensley S., Lutterman T., Hoagwood K. E. (2015). How research-based is our policy-making? Implementation of evidence-based treatments by state behavioral health systems, 2001-2012. Implementation Science, 10(S1), 1. 10.1186/1748-5908-10-S1-A40
  7. Bruns E. J., Walker J. S., Bernstein A., Daleiden E., Chorpita B. F. (2014). Family voice with informed choice: Coordinating wraparound with research-based treatment for children and adolescents. Journal of Clinical Child & Adolescent Psychology, 43(2), 256–269. 10.1080/15374416.2013.859081
  8. Chorpita B. F., Daleiden E. L., Bernstein A. D. (2016). At the intersection of health information technology and decision support: Measurement feedback systems…and beyond. Administration and Policy in Mental Health and Mental Health Services Research, 43(3), 471–477. 10.1007/s10488-015-0702-5
  9. Desrosiers A., Kumar P., Dayal A., Alex L., Akram A., Betancourt T. (2020). Diffusion and spillover effects of an evidence-based mental health intervention among peers and caregivers of high risk youth in Sierra Leone: Study protocol. BMC Psychiatry, 20(1), 1–11. 10.1186/s12888-020-02500-8
  10. Eldh A. C., Almost J., DeCorby-Watson K., Gifford W., Harvey G., Hasson H., Kenny D., Moodie S., Wallin L., Yost J. (2017). Clinical interventions, implementation interventions, and the potential greyness in between - a discussion paper. BMC Health Services Research, 17(1), 1–10. 10.1186/s12913-016-1958-5
  11. Forman-Hoffman V. L. (2017). Quality improvement, implementation, and dissemination strategies to improve mental health care for children and adolescents: A systematic review. Implementation Science, 12(1), 1–21. 10.1186/s13012-017-0626-4
  12. Franks R. P., Bory C. T. (2017). Strategies for developing intermediary organizations: Considerations for practice. Families in Society: The Journal of Contemporary Social Services, 98(1), 27–34. 10.1606/1044-3894.2017.6
  13. Grol R., Bosch M., Wensing M. (2013). Improving patient care: The implementation of change in health care (2nd ed.). John Wiley & Sons.
  14. Harris P. A., Taylor R., Minor B. L., Elliott V., Fernandez M., O’Neal L., McLeod L., Delacqua G., Delacqua F., Kirby J., Duda S. N. (2019). The REDCap Consortium: Building an international community of software platform partners. Journal of Biomedical Informatics, 95, 103208. 10.1016/j.jbi.2019.103208
  15. Jones E. L., Lees N., Martin G., Dixon-Woods M. (2016). How well is quality improvement described in the perioperative care literature? A systematic review. The Joint Commission Journal on Quality and Patient Safety, 42(5), 196–AP10. 10.1016/S1553-7250(16)42025-8
  16. Lee D., Kim Y., Devine B. (2022). Spillover effects of mental health disorders on family members’ health-related quality of life: Evidence from a US sample. Medical Decision Making, 42(1), 80–93. 10.1177/0272989X211027146
  17. Lester H., Birchwood M., Freemantle N., Michail M., Tait L. (2009). REDIRECT: Cluster randomised controlled trial of GP training in first-episode psychosis. British Journal of General Practice, 59(563), e183–e190. 10.3399/bjgp09X420851
  18. Lewis C. C., Klasnja P., Powell B. J., Lyon A. R., Tuzzio L., Jones S., Walsh-Bailey C., Weiner B. (2018). From classification to causality: Advancing understanding of mechanisms of change in implementation science. Frontiers in Public Health, 6, 136. 10.3389/fpubh.2018.00136
  19. Lipsitz L. A. (2012). Understanding health care as a complex system: The foundation for unintended consequences. JAMA, 308(3), 243–244. 10.1001/jama.2012.7551
  20. Mannion R., Braithwaite J. (2012). Unintended consequences of performance measurement in healthcare: 20 salutary lessons from the English National Health Service. Internal Medicine Journal, 42(5), 569–574. 10.1111/j.1445-5994.2012.02766.x
  21. Manojlovich M., Lee S., Lauseng D. (2016). A systematic review of the unintended consequences of clinical interventions to reduce adverse outcomes. Journal of Patient Safety, 12(4), 173–179. 10.1097/PTS.0000000000000093
  22. Martins R. C., Blumenberg C., Tovo-Rodrigues L., Gonzalez A., Murray J. (2020). Effects of parenting interventions on child and caregiver cortisol levels: Systematic review and meta-analysis. BMC Psychiatry, 20(1), 1–17. 10.1186/s12888-020-02777-9
  23. Merton R. K. (1936). The unanticipated consequences of purposive social action. American Sociological Review, 1(6), 894–904. 10.2307/2084615
  24. Merton R. K. (1968). Social theory and social structure (1968 enlarged ed.). Free Press.
  25. Moore G. F., Audrey S., Barker M., Bond L., Bonell C., Hardeman W., Moore L., O’Cathain A., Tinati T., Wight D., Baird J. (2015). Process evaluation of complex interventions: Medical Research Council guidance. BMJ, 350, h1258. 10.1136/bmj.h1258
  26. Nicolay C. R., Purkayastha S., Greenhalgh A., Benn J., Chaturvedi S., Phillips N., Darzi A. (2012). Systematic review of the application of quality improvement methodologies from the manufacturing industry to surgical healthcare. British Journal of Surgery, 99(3), 324–335. 10.1002/bjs.7803
  27. Park A. L., Tsai K. H., Guan K., Chorpita B. F. (2018). Unintended consequences of evidence-based treatment policy reform: Is implementation the goal or the strategy for higher quality care? Administration and Policy in Mental Health and Mental Health Services Research, 45(4), 649–660. 10.1007/s10488-018-0853-2
  28. Perou R., Bitsko R. H., Blumberg S. J., Pastor P., Ghandour R. M., Gfroerer J. C., Hedden S. L., Crosby A. E., Visser S. N., Schieve L. A., Parks S. E., Hall J. E., Brody D., Simile C. M., Thompson W. W., Baio J., Avenevoli S., Kogan M. D., Huang L. N. (2013). Mental health surveillance among children—United States, 2005-2011. Morbidity and Mortality Weekly Report. Supplement, 62(2), 1–35.
  29. Pinnock H., Barwick M., Carpenter C. R., Eldridge S., Grandes G., Griffiths C. J., Rycroft-Malone J., Meissner P., Murray E., Patel A., Sheikh A., Taylor S. J. C. (2017). Standards for reporting implementation studies (StaRI) statement. BMJ, 356, i6795. 10.1136/bmj.i6795
  30. Powell A. A., White K. M., Partin M. R., Halek K., Christianson J. B., Neil B., Hysong S. J., Zarling E. J., Bloomfield H. E. (2012). Unintended consequences of implementing a national performance measurement system into local practice. Journal of General Internal Medicine, 27(4), 405–412. 10.1007/s11606-011-1906-3
  31. Powell B. J., Beidas R. S., Lewis C. C., Aarons G. A., McMillen J. C., Proctor E. K., Mandell D. S. (2017). Methods to improve the selection and tailoring of implementation strategies. The Journal of Behavioral Health Services & Research, 44(2), 177–194. 10.1007/s11414-015-9475-6
  32. Powell B. J., Stanick C. F., Halko H. M., Dorsey C. N., Weiner B. J., Barwick M. A., Damschroder L. J., Wensing M., Wolfenden L., Lewis C. C. (2017). Toward criteria for pragmatic measurement in implementation research and practice: A stakeholder-driven approach using concept mapping. Implementation Science, 12(1), 1–7. 10.1186/s13012-017-0649-x
  33. Price J. M., Roesch S., Walsh N. E., Landsverk J. (2015). Effects of the KEEP foster parent intervention on child and sibling behavior problems and parental stress during a randomized implementation trial. Prevention Science, 16(5), 685–695. 10.1007/s11121-014-0532-9
  34. Pullmann M. D., Ague S., Johnson T., Lane S., Beaver K., Jetton E., Rund E. (2013). Defining engagement in adolescent substance abuse treatment. American Journal of Community Psychology, 52(3-4), 347–358. 10.1007/s10464-013-9600-8
  35. Rogers E. M. (2004). A prospective and retrospective look at the diffusion model. Journal of Health Communication, 9(sup1), 13–19. 10.1080/10810730490271449
  36. Rosenthal R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641. 10.1037/0033-2909.86.3.638
  37. Ryan A. M., McCullough C. M., Shih S. C., Wang J. J., Ryan M. S., Casalino L. P. (2014). The intended and unintended consequences of quality improvement interventions for small practices in a community-based electronic health record implementation project. Medical Care, 52(9), 826–832. 10.1097/MLR.0000000000000186 [DOI] [PubMed] [Google Scholar]
  38. Strauss A. L., Glaser B. G. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine Publishing. [Google Scholar]
  39. Sveiby K.-E., Gripenberg P., Segercrantz B., Eriksson A., Aminoff A. (June 2009). Unintended and undesirable consequences of innovation. In Huizingh, K. R. E., Conn, S., Torkkeli, M., & Bitran, I. (Eds.), Proceedings of the XX ISPIM Conference. [Google Scholar]
  40. Toma M., Dreischulte T., Gray N. M., Campbell D., Guthrie B. (2018). Balancing measures or a balanced accounting of improvement impact: A qualitative analysis of individual and focus group interviews with improvement experts in Scotland. BMJ Quality & Safety, 27(7), 547–556. 10.1136/bmjqs-2017-006554 [DOI] [PubMed] [Google Scholar]
  41. Weiner B. J., Lewis C. C., Stanick C., Powell B. J., Dorsey C. N., Clary A. S., Boynton M. H., Halko H. (2017). Psychometric assessment of three newly developed implementation outcome measures. Implementation Science, 12(1), 1–12. 10.1186/s13012-017-0635-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Weisz J. R., Jensen-Doss A., Hawley K. M. (2006). Evidence-based youth psychotherapies versus usual clinical care. American Psychologist, 61(7), 671–689. 10.1037/0003-066X.61.7.671 [DOI] [PubMed] [Google Scholar]
