Published in final edited form as: Behav Res Ther. 2010 Nov 2;49(1). doi: 10.1016/j.brat.2010.10.008

Moving from Efficacy to Effectiveness Trials in Prevention Research

Erica Marchand a, Eric Stice a, Paul Rohde a, Carolyn Black Becker b
PMCID: PMC3883560  NIHMSID: NIHMS254066  PMID: 21092935

Abstract

Efficacy trials test whether interventions work under optimal, highly controlled conditions whereas effectiveness trials test whether interventions work with typical clients and providers in real-world settings. Researchers, providers, and funding bodies have called for more effectiveness trials to understand whether interventions produce effects under ecologically valid conditions, which factors predict program effectiveness, and what strategies are needed to successfully implement programs in practice settings. The transition from efficacy to effectiveness with preventive interventions involves unique considerations, some of which are not shared by treatment research. The purpose of this article is to discuss conceptual and methodological issues that arise when making the transition from efficacy to effectiveness research in primary, secondary, and tertiary prevention, drawing on the experiences of two complementary research groups as well as the existing literature. We address (a) program of research, (b) intervention design and conceptualization, (c) participant selection and characteristics, (d) providers, (e) context, (f) measurement and methodology, (g) outcomes, (h) cost, and (i) sustainability. We present examples of research in eating disorder prevention that demonstrate the progression from efficacy to effectiveness trials.

Keywords: effectiveness, efficacy, translational research, eating disorder prevention


Efficacy trials are designed to evaluate whether interventions produce effects under optimal conditions, in which providers are well trained and closely supervised, interventions are delivered in adequately staffed research clinics, and participants are often homogeneous (Flay, 1986; Glasgow, Lichtenstein, & Marcus, 2003). In contrast, effectiveness trials are designed to evaluate whether interventions produce effects when delivered by endogenous providers (e.g., school counselors, hospital staff) under real-world conditions in natural settings with heterogeneous samples (Flay, 1986; Glasgow et al., 2003). In reality, much research lies on a continuum between the two; "pragmatic" randomized controlled trials are a hybrid, blending aspects of experimental control and external validity (Schwartz & Lellouch, 1967; Zwarenstein & Treweek, 2009).

Recently, attention to effectiveness research in prevention has increased (e.g., a special issue of Evaluation and the Health Professions; Bausell, 2006; Glasgow, Green, Klesges, et al., 2006; Tunis, Stryer, & Clancy, 2003; Zwarenstein & Treweek, 2009). Prevention work exists on a continuum spanning primary, secondary, and tertiary (also known as universal, selective, and indicated) programs (Gordon, 1983). Primary prevention is practiced with general populations, with the goal of preventing disorder onset (e.g., a school-based social-emotional coping skills program delivered to all students in a middle school to prevent behavioral, social, and emotional problems; Merrell, Juskelis, Tran, & Buchanan, 2008). Secondary prevention is undertaken to prevent future onset of a problem among populations at elevated risk (e.g., a program to prevent the development of eating disorders among adolescent women who report elevated body image concerns; Stice, Marti, Spoor, Presnell, & Shaw, 2008a). Tertiary prevention is aimed at individuals who already experience symptoms of a disorder, with the goal of preventing progression to full-blown pathology (e.g., a cognitive-behavioral group intervention to prevent major depressive disorder among adolescents who present with sub-diagnostic low mood; Stice, Rohde, Seeley, & Gau, 2008b).

Prevention and treatment overlap to a degree, as evinced by the preceding examples; however, prevention in real-world settings involves unique considerations that warrant a discussion of translational research specific to prevention. Issues such as identifying infrastructure and endogenous providers, assessing and coping with varying levels of participant motivation, and demonstrating cost-effectiveness must be addressed in prevention research in ways that may differ from treatment. Therefore, our aim is to draw from numerous disciplines in moving from efficacy to effectiveness research, but to focus most of our attention on preventive mental health interventions and to apply this information to the needs of prevention scientists conducting translational research.

Recent decades have ushered in significant interest in conducting effectiveness trials of interventions that showed promising effects in efficacy trials, following the recognition that efficacious results are often limited to the contexts, providers, and clients evaluated in a specific study (Chambless & Hollon, 1998; Glasgow et al., 2003). Although there is agreement that it is vital to conduct effectiveness trials, literature and examples are needed to guide prevention scientists. Recent articles have addressed relevant conceptual issues (e.g., Barrera & Sandler, 2006; Flay et al., 2005; Glasgow et al., 2003; Sussman, Valente, Rohrbach, Skara, & Pentz, 2006), and a few reports have described programs of efficacy-to-effectiveness research in various fields of prevention, including substance abuse, childhood obesity, and HIV infection (Holder, Flay, Howard, Boyd, Voas, & Grossman, 1999; Reynolds & Spruijt-Metz, 2006; Solomon, Card, & Malow, 2006). These reports make an essential contribution. However, additional insight is needed from ongoing programs of research to further guide prevention scientists through the challenging process of transitioning from efficacy to effectiveness research.

Accordingly, the purpose of this article is to discuss conceptual and methodological issues that arise when making the transition from efficacy to effectiveness research in primary, secondary, and tertiary prevention, drawing on the experiences of two complementary research groups as well as the extant literature, and to present examples of research in eating disorder prevention that demonstrate the progression to effectiveness trials. Using a framework adapted from Wells (1999), this article considers the following aspects of prevention effectiveness research: (a) program of research, (b) intervention design and conceptualization, (c) participant selection and characteristics, (d) providers, (e) context, (f) measurement and methodology, (g) outcomes, (h) cost, and (i) sustainability. Each of these factors is discussed in turn, and a summary is presented in Table 1.

Table 1.

Issues in Moving from Efficacy to Effectiveness Trials in Prevention Research

Program of Research
  Efficacy:
  • Beginning with carefully controlled efficacy trials provides internal validity but limits generalizability and requires more steps to reach effectiveness and implementation.
  Effectiveness:
  • Beginning with efficacy/effectiveness hybrids or pragmatic trials maintains some experimental control while including the elements needed to understand effects in real-world contexts; more generalizable.

Intervention Design and Conceptualization
  Efficacy:
  • Most common design is a carefully controlled, randomized trial designed to isolate the intervention as the main source of variability.
  • Often little attention to generalizability or transportability.
  • Interventions should be grounded in theory.
  Effectiveness:
  • Designs incorporate more variability in participants, interventionists, and settings; are sometimes not randomized.
  • Ability to be implemented and to engage participants in real-world settings must be considered.
  • Interventions should be grounded in theory and may include a motivational component.

Participant Selection & Characteristics
  Efficacy:
  • Smaller sample size.
  • Self-selected, paid volunteers or defined sub-population.
  • Homogeneous sample in terms of geography, community, risk factors, or other variables.
  • Researchers in charge of recruiting individuals or participating entities.
  Effectiveness:
  • Larger sample size.
  • Self-selected, paid or unpaid volunteers or defined sub-population.
  • Heterogeneous sample needed to test generalizability, sociocultural differences in response to intervention, differential outcomes based on initial risk, and other variables.
  • Agency personnel may assume recruitment.

Intervention Providers
  Efficacy:
  • Carefully trained research assistants or other individuals familiar with and committed to the intervention, usually with adequate time to prepare and provide it.
  Effectiveness:
  • Community laypersons or professionals in related fields with intervention-specific training, who may hold conflicting theoretical orientations to change and have competing job duties that leave limited time for prevention work and supervision.

Intervention Delivery Context
  Efficacy:
  • Defined location(s), such as a research lab or specific classroom.
  Effectiveness:
  • Multiple locations.

Measurement & Methodology
  Efficacy:
  • Randomized controlled trial (RCT) is the gold standard.
  • Assessment may be lengthy.
  • Researchers track participants closely and minimize attrition and procedural irregularities.
  Effectiveness:
  • Alternatives to RCTs may be used.
  • Assessments must be streamlined.
  • Researchers prepare for attrition and for irregularities due to multiple providers and stakeholders.

Selection of Outcomes
  Efficacy:
  • Primary outcome of interest is whether the intervention produces preventive effects post-intervention and at variable follow-up periods.
  Effectiveness:
  • Outcomes include preventive effects of the intervention, fidelity of implementation across settings, feasibility of implementation by community providers, moderating effects, and other outcomes related to implementation and feasibility.

Cost
  Efficacy:
  • Often not considered at this stage of research. Intervention is generally fully supported by the grant budget.
  Effectiveness:
  • Measurement of intervention delivery costs and estimation of the cost savings of preventing negative outcomes are recommended. Intervention may be partially or fully self-supported.

Sustainability within Communities
  Efficacy:
  • Often not considered at this stage of research.
  Effectiveness:
  • Feasibility within community agencies, acceptability to community members, attractiveness to providers, and other factors must be considered.

We stop short of discussing a third phase in the process, dissemination, which may be considered both a continuation and an ultimate endpoint of the research process, whereby effective interventions are sustainably implemented in real-world settings. Dissemination science comprises its own unique considerations and body of literature, and requires more time and space than we can allot here. Interested readers can review Backer, Liberman, and Kuehnel (1986); Becker, Stice, Shaw, and Woda (2009); Fixsen, Blase, Naoom, and Wallace (2009); and Lorig, Hurwicz, Sobel, Hobbs, and Ritter (2005) for examples of dissemination.

Issues and Considerations in Moving from Efficacy to Effectiveness

Program of Research

Some behavioral scientists have advocated a stepwise process of moving from basic research to efficacy to effectiveness to dissemination trials in both prevention and treatment (Chambless & Hollon, 1998; Flay, 1986; Flay et al., 2005; Glasgow et al., 2003; Sussman et al., 2006), to establish efficacy with a high degree of internal validity before moving on to other steps of the research process. This model has numerous critics, whose arguments center on the low external validity and generalizability of results from “pure” efficacy studies, and the difficulty and time required in moving from this type of research to effectiveness trials (Glasgow et al., 2003; Green & Glasgow, 2006; Tunis et al., 2003; Zwarenstein & Treweek, 2009).

Many in the field favor the alternative of viewing efficacy, effectiveness, and dissemination trials as existing on a continuum. Hoagwood, Hibbs, Brent, and Jensen (1995) argued that efficacy and effectiveness can be viewed as differing along three continuous dimensions. The first dimension, validity, ranges from a focus largely on internal validity in efficacy research to a focus on both internal and external validity in effectiveness research. The second dimension, intervention, ranges from highly structured, short-term, single-modality programs (on the efficacy side) to less structured, multiple-modality, longer-term programs (effectiveness). The third dimension, outcome, ranges from a focus on specific symptoms or risk factors to functional improvement across a broader range of outcomes. As early as the 1960s, medical researchers (Schwartz & Lellouch, 1967) pressed for more "pragmatic" clinical trials that balance internal and external validity and can address questions about real-world practices with various patient groups, rather than imposing unrealistically tight experimental control and producing results ill-suited to answering important practice questions (Glasgow et al., 2006; Zwarenstein, Treweek, Gagnier, et al., 2008).

In practice, programs of research that address the range of questions that can be posed about a given intervention likely will move along these dimensions by blending elements of efficacy, effectiveness and dissemination based on feasibility, funding and the specific questions being addressed. Thus, whereas some studies may fall cleanly into traditional efficacy or effectiveness categories, other studies will appear to be hybrids. For instance, an effectiveness study might include a manualized intervention, assessment of provider adherence to the manual, and even randomized assignment to condition – all features commonly viewed as aspects of efficacy research (Kazdin, 2003). It also should be recognized that research may not always temporally proceed from efficacy to effectiveness and then dissemination. For instance, after an intervention has been thoroughly tested with a specific range of participants, new research might investigate whether it is feasible to use the intervention with a novel population (Becker, Powell, McDaniel, Bull, & McIntyre, submitted).

Intervention Design and Conceptualization

Community-based participatory research

Effectiveness research involves an essential conceptual shift: from viewing a research project as conceived in the lab and then brought to target populations, to viewing it as shared ownership between researchers and community partners. Community-based participatory research (CBPR) is a model that is especially useful in effectiveness and dissemination research (see Israel, Eng, Schulz, & Parker, 2005 for a review) and involves treating all key stakeholders in the research, including community partners, as having an equal voice in the research process. Key elements include making a long-term commitment to community partners to create sustainable programming, promoting co-learning, and designing programs around community strengths. Later we describe research by Becker and colleagues that was designed in full accordance with the values of CBPR. It is important to realize, however, that even more traditional research programs can include some components of CBPR. For instance, Stice and colleagues have systematically collected qualitative input from facilitators and participants in their dissonance-based eating disorder prevention effectiveness trials regarding ways to further improve the intervention. They have also sought guidance from school staff on many important decisions, such as effective recruitment and intervention delivery methods.

Motivation

As primary and secondary prevention targets persons not yet experiencing problems, such programs may need to assess individuals’ readiness to change (and potentially include it as a moderating variable; Prochaska, DiClemente, & Norcross, 1992) or incorporate motivational techniques into the intervention. For example, a secondary obesity prevention intervention from one of our research groups includes a motivational component in each session, in which participants identify benefits of striving for a healthy lifestyle and discuss the positive intervention effects (Stice et al., 2008a).

Pragmatic trials

It is generally assumed that any line of research with a new intervention must start with well-controlled efficacy research. More scientists, however, are calling for clinical trials that balance internal and external validity, thus allowing a more efficient progression to practice implementation. "Pragmatic" clinical trials are designed to answer questions that arise in clinical practice, such as which course of treatment produces better outcomes for a patient population (Schwartz & Lellouch, 1967). Such hybrid trials, which often blend randomization and some degree of experimental control with features such as diverse samples and endogenous providers, can provide information that is immediately useful to decision-makers about the relative effects of treatment or preventive interventions (Zwarenstein & Treweek, 2009). The CONSORT group (Zwarenstein et al., 2008) recently published guidelines for reporting results of pragmatic clinical trials, including recommendations for describing eligibility criteria for participants, providers, and settings; the resources required to implement the intervention and the methods used to standardize it; the rationale for the chosen outcomes and the length of follow-up needed to see results; how researchers arrived at the required sample size; the difference between the total number of eligible participants and those who chose to participate, with reasons for non-participation; and key aspects of the setting(s) that may influence results. These recommendations can serve as a framework not only for reporting results but also for designing a pragmatic efficacy/effectiveness trial.

Core elements, length, & dosage

Efficiency and portability of interventions are important for effectiveness research. Part of effectiveness research might involve finding the optimal dosage of an intervention, so that a program is long enough to produce the desired outcomes but no more complicated than necessary (Chambless & Hollon, 1998). For example, prior research in our group has utilized both 3- and 4-session versions of an obesity prevention intervention, and we continue to use the 4-session version (Stice et al., 2008a). A related task is determining which aspects of the intervention are the core components responsible for change, and which may be altered or omitted to suit the constraints of various participant groups, facilitators, or settings. Researchers may consult with colleagues, conduct focus groups, and use pilot results from various versions of the intervention to assess these aspects.

In addition, since the prevention intervention must be portable, it should be manualized (Chambless & Hollon, 1998; Clarke, 1995) or even automated (for example, by creating video content to address some or all of the intervention). Manualization may be even more important in prevention as compared to treatment because endogenous providers conducting prevention interventions may not have an extensive clinical background. Further, whereas there has been an assumption within the treatment community that clinicians have the skills needed to tailor manualized treatments to specific patients, it cannot be presumed that community prevention providers have the capability or time to tailor interventions to specific groups. As such, there may be a greater need for researchers to be involved in tailoring prevention manuals for particular populations.

Participant Selection and Characteristics

Sample size

Effectiveness trials often require larger samples than efficacy trials because greater variability of participants, providers, and settings results in decreased statistical power and potentially smaller effects (Wells, 1999). Community providers may have limited time and resources to track participants, who in turn may face multiple barriers to attending sessions and follow-ups (Clarke, 1995), both contributing to attrition. Researchers should conduct a priori power analyses with conservative effect size estimates and generous attrition predictions to ensure an adequate final sample. Recruiting for a longer period or from additional sites can help reach the minimum sample needed.
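To make the arithmetic concrete, the sketch below shows one way to pair a conservative effect size with a generous attrition estimate; the effect size, power, and attrition figures are illustrative assumptions, not recommendations drawn from the trials discussed here.

```python
# A minimal sketch of an a priori power analysis that pairs a conservative
# effect size with a generous attrition estimate. All numbers are assumptions.
import math
from statsmodels.stats.power import TTestIndPower

d = 0.25       # assume a small standardized effect, below the efficacy-trial estimate
alpha = 0.05
power = 0.80

# Completers needed per arm for a two-sided, two-sample comparison.
n_per_arm = TTestIndPower().solve_power(effect_size=d, alpha=alpha,
                                        power=power, alternative='two-sided')

attrition = 0.30  # assume 30% loss to follow-up
n_enroll = math.ceil(n_per_arm / (1 - attrition))

print(f"Completers needed per arm: {math.ceil(n_per_arm)}")  # ~253
print(f"Enrollees needed per arm:  {n_enroll}")              # ~361
```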

Inclusion and exclusion

Inclusion and exclusion criteria may be relevant in primary prevention, but must be carefully considered in secondary and tertiary prevention programs, as both generalizability and experimental control are important. For example, it could be argued that a tertiary intervention for preventing adolescent depression should recruit teens with elevated depressive symptoms but exclude individuals with anxiety disorders, which maximizes internal but limits external validity, because depression and anxiety often co-occur in real populations. Clarke (1995) offered a “donut model” of recruitment for creating a diverse sample while maintaining control over sample characteristics. He recommended including participants with a predefined set of comorbid conditions or risk factors as the “donut ring,” and recruiting a subset of highly selected participants with few risk factors or comorbidities as the “donut hole.” Effects can then be tested for each group.

Alternatively, effectiveness trials might defensibly enroll all interested individuals and empirically test whether exclusion criteria are needed for program effectiveness. For example, in an intervention to prevent obesity, no exclusion criteria would initially be used; rather, all interested individuals would be eligible to participate. Participants would be randomized to condition and analyses would test whether participant variables such as body dissatisfaction, body mass, or disordered eating behaviors moderated program effects. An inclusive recruitment strategy like this is much more likely to be feasible in real-world settings, and our research groups have found that many social systems prefer primary (or universal) prevention within their communities over secondary or tertiary prevention. Reasons for this include believing that all community members will benefit from the intervention, a desire to use the intervention to facilitate community bonding, and a desire to avoid screenings (which may be viewed as stigmatizing).

Social and cultural factors

Some researchers posit that the content of an intervention can, to an extent, be of primary importance (Barrera & Sandler, 2006), and that participants' cultural backgrounds may or may not need to be accounted for when designing prevention programs. Others argue that participants who are underrepresented in research must be actively recruited and that prevention content must be made culturally relevant. We believe both viewpoints have value and should coexist in intervention design. This is a situation in which the core elements of the intervention may remain the same across groups while the details, examples, and images in the content change.

For example, many eating disorder prevention programs aim to reduce body dissatisfaction. Cultural norms for beauty and size may influence women of different backgrounds differently; one woman's dissatisfaction may come from feeling she is not thin enough, while another's may stem from feeling she is not curvy enough. In both cases, the goal of the intervention, to reduce body dissatisfaction, is the same. The mechanism for reducing body dissatisfaction would also remain the same across groups, e.g., increasing cognitive dissonance about the desirability of an unreachable "ideal." However, the details and examples used in the content may vary, and the facilitator should be flexible enough to include multiple perspectives so that the intervention feels relevant for the group in question. Further, if the examples used in an intervention come from participants themselves, the content naturally adapts to the culture of the group, which is the approach we favor for intervention design. It can be difficult to identify the cultural factors at work in any given community, which will be heterogeneous with respect to culture even within a defined racial or ethnic group. Partnering with community organizations is essential to understanding a community and generating ideas for tailoring a program to it.

Culture and ethnicity may be confounded with other variables that affect participation and outcomes, such as acculturation, language skills, and socioeconomic resources. A logical first step is to test whether intervention effects differ across ethnic groups and as a function of factors related to race or ethnicity. These analyses can help test whether an efficacious program needs to be adjusted to "fit" a given community (Hoagwood & Olin, 2002). Focus groups and consultation with community partners can again be essential to understanding and improving the fit of an intervention. Feasibility studies including diverse target populations can assess the acceptability of interventions prior to formal evaluation. This initial work can provide needed feedback about cultural sensitivity and norms (Klesges, Estabrooks, Dzewaltowski, Bull, & Glasgow, 2005). We believe that sociocultural factors are especially important to consider in prevention effectiveness research. Ethnic and cultural minority groups in the U.S. experience risk factors greater in number and severity than many non-minority individuals, but little research has adequately tested whether and how the effects of prevention programs are moderated by ethnicity (Herschell, McNeil, & McNeil, 2004), or how to better recruit ethnic minority participants for research.

Recruitment

Prevention effectiveness studies have no existing pool of treatment-seeking individuals from which to draw participants. Community, health care, and school personnel familiar with the target community can be extremely valuable in identifying and recruiting participants. Primary prevention trials may be less burdened with participant engagement than secondary or tertiary prevention, although all three require engagement and adoption by the organization where the program will be delivered. Once researchers have identified a sample, it is important to consider that individuals who are not yet experiencing problems may be unmotivated to engage in prevention activities. To estimate the real-world likelihood of success for a preventive intervention, the effectiveness trial should test participant recruitment with minimal enticement; for example, with no or only small payments for completing assessments rather than the large payments sometimes given in efficacy research.

Scientists should address the representativeness of self-selected samples, to gauge the reach and feasibility of prevention efforts. Effectiveness research can assess why participants chose to engage in the prevention program; this may be done with a short qualitative questionnaire or interview at the conclusion of study activities. Also helpful is to try to learn why eligible persons chose not to participate; for example, with a follow-up request for a brief written or phone survey. This information will influence the implementation and sustainability of a prevention program when delivered by existing providers.

Intervention Providers

Provider identification and recruitment

Since many prevention activities are not routine in school and community settings, there is often no existing pool of prevention providers for an effectiveness trial. Prevention effectiveness trials must identify community leaders or laypersons with appropriate skills, and then recruit, train, and supervise them in delivering the preventive intervention in addition to (rather than in place of) their usual responsibilities. Although competent providers exist in natural settings, asking them to take on this additional work can pose a challenge.

Different strategies can facilitate recruitment of providers. For instance, in one line of research, we first sought to identify school professionals who were enthusiastic about our program and the prevention of eating pathology. Second, we streamlined both the training for intervention delivery and the intervention itself to take as little facilitator time as possible. Third, we chose to pay facilitators for both training and intervention activities because it did not seem ethical to require the school district to bear a portion of our research costs, and the interventions were often provided after regular school hours. While none of these techniques changes the fact that school personnel are busy and responsible for a multitude of tasks, adhering to these three principles has helped us identify professionals in every school we have approached who are willing to facilitate the prevention groups. We note, however, that facilitators are unlikely to be paid in dissemination studies. Another strategy has been to employ part-time school staff to facilitate prevention activities across schools. Identifying a dedicated person with enough time and flexibility to co-lead several intervention groups at different sites has been very helpful in staffing prevention trials.

Alternatively, in another line of research, we recruited community peer-leaders. Peer-leaders were not paid; instead, we highlighted the value of giving back to one's community, adopting a leadership role, and developing valuable skills. Training was more intensive because laypersons often lack the knowledge base that shortens training times. In addition, we train a very large number of providers, which is more burdensome for the research team but reduces the number of sessions each provider has to run, thus reducing the overall time commitment. This approach has led to the sustainable development of an ongoing program that runs without grant support.

Training and fidelity

Researchers must decide to what degree they should provide an optimal (and perhaps expensive and time-consuming) level of training, or a level that would be typical in the real world. Roy-Byrne and colleagues (Roy-Byrne, Sherbourne, Craske, et al., 2003) advocate a real-world level of training that approximates naturalistic training conditions. We concur, though as the two examples above highlight, real-world levels of training may vary according to which real-world providers are being recruited and which community is partnering in the research. Regardless of the level of training, we recommend that fidelity of implementation be assessed in prevention effectiveness trials and that researchers test whether outcomes vary as a function of fidelity (Clarke, 1995). One of our research groups asks facilitators to videotape all intervention sessions, and researchers review a subset of the tapes for adherence to core intervention content. In addition to providing valuable information, such data can sometimes be used to negotiate increased training time when working with communities on full-scale dissemination. For example, results showing that poor adherence was associated with worse outcomes convinced the Delta Delta Delta Sorority (Tri Delta) to accept a high level of training in its sustainable deployment of evidence-based eating disorders prevention (Becker, Stice, Shaw, & Woda, 2009).
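One simple way to examine the fidelity-outcome question is sketched below with hypothetical group-level data; the adherence scores, outcome values, and variable names are our illustrative assumptions, not data from the trials described here. With real trial data, a multilevel model that also exploits individual-level variance would be preferable.

```python
# A hedged sketch: correlate group-level adherence (coded from session tapes)
# with group-level mean symptom reduction. All data values are hypothetical.
import numpy as np
from scipy.stats import pearsonr

# Proportion of core content delivered, one value per intervention group.
adherence = np.array([0.95, 0.88, 0.72, 0.91, 0.60, 0.83, 0.78, 0.97])

# Mean pre-to-post symptom reduction per group (higher = more improvement).
symptom_change = np.array([0.42, 0.35, 0.18, 0.40, 0.10, 0.30, 0.22, 0.45])

r, p = pearsonr(adherence, symptom_change)
print(f"Fidelity-outcome correlation: r = {r:.2f}, p = {p:.3f}")
```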

Intervention Delivery Context

Locating intervention settings

Effectiveness research takes place in diverse contexts; this helps to determine whether the program works in a variety of settings (Glasgow et al., 2003; Wells, 1999). Public schools are common locations for prevention activities for children and adolescents, given their broad reach and the diversity and representativeness of samples that can be obtained. Other settings have included after-school programs, county health department clinics, community mental health agencies, college residence halls, churches, sororities, neighborhood centers, and even participants’ homes. In choosing a context for delivery of prevention services, researchers must take into account whether the desired sample will be available in that context, whether the context provides adequate diversity of participants, and whether the necessary personnel and structure exist in that setting, according to the needs of the project.

Multi-site trials can increase variability of participant and setting characteristics, allowing for examination of potential moderating effects. When intervention activities take place at multiple sites, it is necessary to account for differences in the service delivery context when analyzing data (Flay, 1986; Glasgow et al., 2003; Roy-Byrne et al., 2003; Sussman et al., 2006; Wells, 1999). Selection of appropriate statistical techniques is discussed subsequently.

Measurement and Methodology

Research design and randomization

Prevention effectiveness research requires a balance between experimental control and generalizability, or between internal and external validity. Even once efficacy has been established, effectiveness trials must maintain enough experimental control to enable inferences regarding the program's effects (e.g., Flay et al., 2005). Primary prevention in particular may use a variety of methodologies other than randomized assessment-only control-group designs to determine effectiveness, one common method being a quasi-experimental comparison-group design, in which random assignment to condition is not used (Zubrick, Ward, Silburn, et al., 2005). Randomization to condition remains the optimal procedure to decrease the likelihood of group differences and to meet assumptions of many statistical tests (Flay et al., 2005); however, randomization in prevention research is not always possible. Of note, meta-analytic reviews of prevention trials for eating disorders, obesity, and depression suggest that effect sizes are not significantly different for trials that use random assignment to condition versus other allocation methods (e.g., Stice, Shaw, & Marti, 2006b).

Effectiveness researchers should be prepared with a variety of methods of assigning participants to groups that both maintain the internal validity of the study and meet community needs. Assessment-only control groups may not be acceptable or appropriate; wait-list, usual-care, attention-control, or alternate intervention comparison groups can also be used (Clarke, 1995). In primary prevention targeting an entire community, a similar neighboring area may be used as a comparison group (e.g., Zubrick et al., 2005), though it is always possible that the intervention and comparison groups differ in unknown ways. Matched group designs may sometimes be appropriate if the matching variables are carefully selected and if precautions are taken to ensure that groups are indeed equivalent on variables of interest. However, matching can also provide inaccurate estimates of intervention effects and should be used with caution (Flay et al., 2005). Cluster randomized designs can be used to assign intact groups to intervention conditions; researchers must, however, choose appropriate statistical methods to deal with these group-level data (i.e., a statistic that takes into account both group-level and individual-level variance; Donner & Klar, 2004). Multiple-baseline and n-of-1 designs are other possibilities for handling real-world data.
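For cluster randomized designs, a standard way to express the cost of randomizing intact groups is the design effect, DEFF = 1 + (m - 1) x ICC, where m is the cluster size and ICC the intraclass correlation (see Donner & Klar, 2004). The sketch below applies this formula with illustrative values only; the cluster size, ICC, and baseline sample size are assumptions carried over from the earlier power-analysis sketch.

```python
# A minimal sketch of the design effect for cluster randomization,
# DEFF = 1 + (m - 1) * ICC (Donner & Klar, 2004). Values are illustrative.
import math

def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation when intact groups, not individuals, are randomized."""
    return 1 + (cluster_size - 1) * icc

n_individual = 253  # per-arm n from an individually randomized power analysis
m = 25              # assume ~25 students per classroom (the randomized cluster)
icc = 0.02          # assume a small intraclass correlation for the outcome

deff = design_effect(m, icc)
print(f"Design effect: {deff:.2f}")                                     # 1.48
print(f"Per-arm n after clustering: {math.ceil(n_individual * deff)}")  # 375
```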

Finally, some researchers are creative in working with agencies to provide needed services to the community while still retaining randomization in the trial. One researcher allows two “mercy assignments” per year at an agency with which she collaborates – the agency may argue for two individuals per year to be assigned to treatment on the basis of need rather than randomization (D. Unruh, personal communication, April 2006); these “mercy assignments” are not analyzed with other data.

Benchmarking

One alternative strategy for assessing effectiveness involves "benchmarking" results gathered in effectiveness studies against results from efficacy trials. For instance, researchers in the treatment literature have used benchmarking to document that cognitive behavioral therapy for panic disorder in a community mental health setting produced results of a similar magnitude to those found in highly controlled efficacy studies (Wade, Treat, & Stuart, 1998). Wilson (2007) describes benchmarking as a useful and flexible method for documenting effectiveness in naturalistic settings, and we have used this strategy in interpreting some of our research.
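A rudimentary version of this check is sketched below, using the pre-to-post eating pathology effects reported later in this article (effectiveness r = .23 with n = 306, efficacy benchmark r = .17; see Table 2): it asks whether the efficacy-trial estimate falls within a confidence interval around the effectiveness-trial effect. Published benchmarking analyses are more elaborate, so treat this only as an illustration of the basic logic.

```python
# A hedged sketch of a benchmarking check via the Fisher z transformation.
import math

def r_confidence_interval(r: float, n: int, z_crit: float = 1.96):
    """95% CI for a correlation coefficient using the Fisher z transform."""
    z, se = math.atanh(r), 1 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

r_eff, n_eff = 0.23, 306   # effectiveness-trial pre-post effect (Table 2)
r_benchmark = 0.17         # efficacy-trial benchmark for the same outcome

lo, hi = r_confidence_interval(r_eff, n_eff)
print(f"Effectiveness r = {r_eff}, 95% CI [{lo:.2f}, {hi:.2f}]")
print(f"Benchmark r = {r_benchmark} inside CI: {lo <= r_benchmark <= hi}")
```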

Moderation effects

A moderator is a variable (e.g., acculturation, health beliefs, proximity to a healthcare clinic) that alters the strength or direction of the relation between a predictor and an outcome (Baron & Kenny, 1986). Since samples and contexts are generally more heterogeneous in effectiveness research (Glasgow et al., 2003), attention to moderating variables is vital when interpreting observed differences among groups. It can be helpful to measure effects by broad group differences such as ethnicity or socioeconomic status (SES); however, these variables do not indicate the circumstances or processes that lead to observed differences in outcomes. Measuring additional constructs, such as cultural congruency of the intervention, participants' perceptions of competence and respect by intervention providers, health beliefs, health literacy, and access to transportation or childcare, may provide a more complete understanding of the features that identify groups for whom an intervention works.
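In practice, a moderation test often takes the form of a regression that includes a condition-by-moderator interaction term. The sketch below shows the basic pattern; the data file and variable names are hypothetical.

```python
# A minimal sketch of a moderation analysis: outcome regressed on condition,
# a candidate moderator, and their interaction (after Baron & Kenny, 1986).
# The data file and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")  # hypothetical: one row per participant
# Expected columns: symptom_change (outcome), condition (1 = intervention,
# 0 = control), acculturation (baseline moderator).

model = smf.ols("symptom_change ~ condition * acculturation", data=df).fit()
print(model.summary())
# A reliable condition:acculturation coefficient suggests the intervention
# effect varies across levels of the moderator.
```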

Assessment

Other methodological concerns relate to practical constraints of effectiveness research. In large primary prevention trials, it may not be possible or necessary to assess all participants. Instead, a representative subset of participants may be assessed individually, while epidemiological data related to the variable of interest (e.g., incidence of physical aggression in a school district) can be utilized to interpret intervention effects. In smaller-scale prevention programs, individual assessment is often required. Assessments may need to be streamlined for ease of implementation, and telephone assessments may lighten the burden for participants and busy community assessors (Roy-Byrne et al., 2003). New, computer-based technologies (e.g., cell phones, personal digital assistants, computer-based surveys and social networks) should also be considered for simplified assessment procedures.

Re-thinking internal and external validity

Effectiveness trials may necessitate a re-conceptualization of the research process (Glasgow et al., 2003). The real-world difficulties that create researcher headaches – comorbidity, life stressors that cause participant dropout, high caseloads among community providers, organizational restructuring at the intervention provider, lack of financial resources – are the very things that need to be addressed in the development of prevention programs that will be feasible and sustainable in practice. Thus, researchers must plan for handling attrition, missing data, concomitant treatment, and other irregularities in data collection. Ultimately, these “nuisances” may provide particularly valuable information about procedures necessary for participant and provider engagement and retention.

Outcome Selection

Proximal and ecologically valid outcomes

Effectiveness trials need to measure a broader range of outcomes than simply whether the intervention prevents the condition of interest. Assessing ecologically valid outcomes (e.g., school and work attendance, depression-free days, other markers of life success and engagement; Roy-Byrne et al., 2003; Wells, 1999) can help to demonstrate a program’s success and cost-effectiveness. Very importantly, selected outcomes should align with the targeted scope of the prevention program. For example, a smoking prevention intervention could justifiably measure attitudes about smoking and rates of smoking onset, if the program was designed to target only smoking. Conversely, a preventive intervention designed to prevent multiple problem behaviors among youth might include a variety of adolescent outcomes.

Unintended outcomes

Since effectiveness research often takes place within under-studied communities and effects may differ from those observed “in the lab,” researchers should watch for and report unanticipated outcomes. A well-known example is the iatrogenic effect of increased problem behavior observed in group interventions to prevent adolescent problem behavior (Poulin, Dishion, & Burraston, 2001). Unplanned outcomes may also be positive. For example, we found that an indicated depression prevention program significantly reduced risk for substance use onset and escalation (Stice et al., 2008b). Ongoing communication with intervention providers and periodic analysis of preliminary data should facilitate identification of unplanned effects.

Implementation effectiveness

Flay (1986) drew a useful distinction between intervention effectiveness and implementation effectiveness and advocated that each receive attention. In order to show intervention effectiveness, a program must produce beneficial effects under real-world conditions. Effectiveness research must also assess implementation effectiveness, meaning the success and sustainability of program implementation in a target community (Flay, 1986; Wells, 1999). Fidelity of implementation, relevant setting and interventionist variables, participant characteristics, cost effectiveness, intervention parameters, and supports and barriers to implementation may all be part of a study of implementation effectiveness (Clarke, 1995).

Cost

Cost-effectiveness

Cost-effectiveness is uniquely important for prevention research. Benefits of prevention are not as readily apparent as those of treatment, in which a diseased or distressed state is measurably ameliorated. Preventionists must estimate the value of preventing a negative outcome, weigh it against the cost of providing the preventive intervention, and demonstrate a favorable ratio. Establishing cost effectiveness is increasingly salient in today’s funding climate.

"Cost-effective" is different from "cost-saving." Prevention that results in decreased costs over time is cost-saving (for example, vaccinations, an inexpensive means of preventing expensive diseases, are often cost-saving). In contrast, cost-effective describes an intervention whose benefits are "sufficiently large compared to the costs" (Cohen & Neumann, 2009). Evaluating cost-effectiveness, therefore, is somewhat subjective. The cost-effectiveness of prevention programs can be assessed by gathering data on the costs of providing the intervention and the estimated savings from reduced health service utilization and other adverse outcomes averted by the intervention (e.g., incarceration, obesity, work days lost). Although cost-effectiveness can help justify the expense of disseminating prevention programs, we acknowledge that some prevention programs may produce clinically important effects that are difficult to translate into cost savings.
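The basic arithmetic can be illustrated with an incremental cost-effectiveness ratio (ICER). All figures in the sketch below are entirely hypothetical, and the example deliberately shows a program that might be judged cost-effective without being cost-saving, mirroring the distinction drawn above.

```python
# A hedged sketch of an incremental cost-effectiveness calculation for a
# preventive program versus no program. All figures are hypothetical.
cost_per_participant = 120.0    # delivery cost of the prevention program
cases_averted_per_1000 = 25     # assumed reduction in disorder onset
cost_per_case_treated = 4000.0  # assumed downstream treatment cost per case

delta_cost = cost_per_participant * 1000    # incremental cost per 1,000 served
icer = delta_cost / cases_averted_per_1000  # cost per case averted
savings = cases_averted_per_1000 * cost_per_case_treated

print(f"ICER: ${icer:,.0f} per case averted")       # $4,800
print(f"Treatment costs averted: ${savings:,.0f}")  # $100,000 vs $120,000 spent
# Cost-effective if decision-makers value averting a case at more than $4,800,
# yet not cost-saving, since the $120,000 spent exceeds the $100,000 averted.
```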

The growing literature on the cost-effectiveness and cost savings of preventive interventions includes many useful articles on methodological and statistical considerations; measurement of health-related costs; collecting, reporting, and interpreting cost-related information; and controversies in the field (see Dolan, 2008; Fenwick & Byford, 2005; Hoch, Briggs, & Willan, 2002; Knapp & Mangalore, 2007; Southard et al., 2000; Stinnett & Mullahy, 1998; see Lynch and colleagues [Lynch, Striegel-Moore, Dickerson, et al., 2010] for an excellent example of a paper reporting cost-effectiveness in the treatment of binge eating disorder).

Sustainability

Essential to a prevention program’s effectiveness is its ability to be implemented on an ongoing basis in a community by existing providers in a way that is acceptable to stakeholders and financially feasible. Effectiveness research frequently necessitates ongoing collaboration and positive relationships with community organizations. Developing school- or community-based interventions with the resources and needs of the target community in mind is critical. It is often important for researchers conducting effectiveness trials to continue collaborating with communities by conducting dissemination research with promising prevention programs.

Examples of Moving from Efficacy to Effectiveness in Prevention Research

In the sections that follow, we describe two complementary but independent programs of research. The first transitioned from efficacy to effectiveness trials with a secondary eating disorder prevention program. The second began with a hybrid efficacy/effectiveness trial delivered primarily within a specific community and then moved increasingly toward effectiveness/dissemination.

Stice and Colleagues

Our research group (e.g., Stice et al., 2008a) and others (Green, Scott, Diyankova, Gasser, & Pederson, 2005; Mitchell, Mazzeo, Rausch, & Cooke, 2007; Pineda, 2006; Roehrig, Thompson, Brannick, & van den Berg, 2006) have tested the efficacy of a dissonance-based intervention for preventing eating disorders among adolescent girls and young women. Researchers have also conducted quasi-effectiveness trials of this intervention, in which endogenous providers deliver the intervention but do not recruit participants (e.g., Matusek, Wendt, & Wiseman, 2004). Subsequently, we initiated a full effectiveness trial of this eating disorder prevention program in which endogenous providers are responsible for both participant recruitment and intervention delivery (Stice, Rohde, Gau, & Shaw, 2009). Intervention development and the steps in testing efficacy and effectiveness are described here.

Efficacy

The eating disorder prevention intervention developed by Stice and colleagues (Stice et al., 2000; Stice & Presnell, 2007), referred to as the “Body Project,” is grounded in the theory of cognitive dissonance and based on empirical findings that internalization of a thin ideal of beauty is a key risk factor for developing eating disorders (Stice, 2002). In this eating disorder prevention program, girls with body image concerns who have internalized the culturally sanctioned thin-ideal voluntarily engage in exercises that critique this ideal. These counter-attitudinal activities putatively result in reduced endorsement of the thin ideal because of cognitive dissonance (Festinger, 1957). The reduced thin-ideal internalization presumably leads to reductions in body dissatisfaction, maladaptive dieting, negative affect, and bulimic symptoms. To facilitate training and standardization, the intervention was manualized (Stice & Presnell, 2007).

Participants in the preliminary efficacy trials were young women recruited from universities. Subsequent efficacy trials involved adolescent girls recruited from high schools. Participants were selected on the basis of a common risk factor for eating disorders – body dissatisfaction. In the early efficacy trials, students with body image concerns were invited to participate in a body acceptance trial (i.e., they simply self-selected into the trials). In later efficacy trials we confirmed that interested participants had at least some body image concerns during an initial phone screen, and attempted to exclude participants who met criteria for anorexia nervosa and bulimia nervosa based on the reasoning that they needed treatment rather than prevention.

Based on the effects observed in three preliminary trials (reviewed in Stice & Shaw, 2004), a large efficacy trial was conducted (Stice et al., 2008a). Participants were recruited from area high schools and a large university. Trained graduate students provided the intervention, which contributed to high levels of fidelity and competence. Participants who completed the Body Project showed significantly greater reductions in bulimic symptoms and significantly reduced risk for onset of future eating disorders relative to assessment-only controls and sometimes relative to participants in alternative interventions, with certain effects persisting through 3-year follow-up (Stice, Shaw, Burton, & Wade, 2006a; Stice et al., 2008a). Based on the positive effects observed in these trials and those conducted by other labs, we initiated an effectiveness trial of the Body Project in high schools.

Effectiveness

High schools and school personnel were identified as likely settings and providers for the intervention. Dr. Stice obtained permission and support from each of the three school districts in the area, high schools within these districts, and participating teachers, nurses, and counselors who would ultimately serve as the intervention providers. This process involved multiple meetings to build relationships with school administrators and personnel. In addition to describing the evidence base that established the efficacy for this intervention, we sought to make the study valuable and minimally time-consuming to facilitators. Facilitators were paid an hourly rate commensurate with their positions for time spent training and delivering the intervention. Common courtesies were used when collaborating with school personnel, such as demonstrating a great deal of flexibility, maintaining a respectful and congenial working relationship, and expressing thanks for participation with thank-you cards and small gifts (e.g., gift certificates to local businesses). These steps helped build and maintain positive relationships with school administrators and facilitators.

Several adjustments were made to the research design used in the original efficacy trials to conform to the tenets of effectiveness research. First, tasks were divided between school and research staff such that school personnel were responsible for recruitment and intervention delivery and research staff was responsible for assessments and facilitator supervision. Researchers provided materials for recruitment and group facilitation (letters, fliers, intervention manuals, participant workbooks, etc.). Group facilitators participated in a streamlined, half-day training during which they were introduced to the script and rationale of the intervention and role-played key portions of the program. Facilitators received email feedback from Dr. Stice during the program, based on review of recordings of all sessions.

A total of 306 adolescent girls from seven high schools were enrolled. Results indicated that Body Project participants showed significantly greater reductions in thin-ideal internalization, body dissatisfaction, dieting, depressive symptoms, and eating disorder symptoms than control participants from pre to post (Stice, Rohde et al., 2009). The effects for body dissatisfaction, dieting, and eating disorder symptoms have persisted through 6-month and 1-year follow-up (data for the 2-year and 3-year assessments are still being collected). The effects from this effectiveness trial are generally similar in magnitude to those observed in our large efficacy trial. For example, the pre-to-post effect for eating pathology (r = .23) is slightly larger than the parallel effect from our efficacy trial (r = .17). Importantly, both of these effects are larger than the average effect for eating disorder symptoms from a meta-analytic review of efficacy trials of eating disorder prevention programs (r = .12; Stice & Shaw, 2004). Table 2 presents effect sizes for pre to post, pre to 6-month, and pre to 1-year follow-up on key variables for the efficacy and effectiveness trials. These results suggest that an intervention that has produced effects in efficacy trials can generate promising results in effectiveness research and that school staff can implement the Body Project intervention successfully.

Table 2.

Effect Sizes (r) and Significance Levels for the Time x Condition Interactions from Cognitive-Dissonance Based Eating Disorder Prevention (“Body Project”) Efficacy and Effectiveness Trials

                               Efficacy Trial   Effectiveness Trial
                               (n = 481)        (n = 306)
Pre to Post
  Thin-ideal internalization   .38***           .23***
  Body dissatisfaction         .35***           .27***
  Disordered eating symptoms   .17**            .23***
Pre to 6-Month
  Thin-ideal internalization   .29***           .02
  Body dissatisfaction         .28***           .11^
  Disordered eating symptoms   .18**            .14*
Pre to 1-Year
  Thin-ideal internalization   .13*             .07
  Body dissatisfaction         .08              .14*
  Disordered eating symptoms   .20***           .17**

^ p < .10. * p < .05. ** p < .01. *** p < .001.

Becker and Colleagues

This line of research was developed according to the principles of community-based participatory research with a local sorority community (see Becker et al., 2009). The research began in 2001, when Becker and an undergraduate student decided to attempt to replicate Stice et al.'s early work. After a successful small-scale pilot study, 161 sorority members were randomly assigned to the dissonance intervention, an alternative media advocacy intervention, or a waitlist (Becker, Smith, & Ciao, 2005) on a universal basis within the community because this was the sorority's preference. This hybrid study had elements of both efficacy research (randomized design, waitlist control) and effectiveness research (run sustainably without grant support, participants recruited by members of their own community who served on the research team in the spirit of participatory research, no participant compensation). Results supported both the use of dissonance and, to a somewhat lesser degree, the media advocacy intervention.

After having a positive experience with the hybrid trial, sorority community leaders decided that they wanted all new members to go through "the body image program." Members were semi-mandated into the program and then invited to participate in an optional study that consisted of completing questionnaires. Thus, this line of research tested whether dissonance prevention remained effective when it was administered according to the wishes of a community (universal and required). To accommodate a variety of logistical concerns, including a lack of funding to pay providers and insufficient access to clinical providers, this next study also moved further along the effectiveness continuum by recruiting peer-leaders (i.e., community laypersons) as providers. After a successful trial demonstrated that undergraduates could be trained to implement dissonance prevention (Becker, Smith, & Ciao, 2006), the sorority community decided to implement the program on an annual basis, providing the opportunity for additional research. Subsequent trials replicated the finding that peers could deliver dissonance prevention in a sustainable manner (Becker, Bull, Schaumberg, Cauble, & Franco, 2008) and, using benchmarking, showed that effects lasted to 14 months and were comparable to those found by Stice et al. (2006a) at 12 months (Becker, Wilson, Williams, Kelly, McDaniel, & Elmquist, 2010). Importantly, this sustainable, unfunded program of research is largely run by community members (i.e., sorority members) who are unpaid and responsible for virtually all aspects of the research under Becker's supervision. Further, to date, sorority members have conservatively contributed over 16,000 unpaid hours to the study and implementation of dissonance prevention and take considerable pride in both the program and the associated research.

In 2005, a large national sorority (Tri Delta) became interested in adopting the sorority version of dissonance prevention. After two years of pilot testing, it was determined that the peer-led model could be expanded to a national scale, and Tri Delta purchased sufficient customized materials to reach 20,000 collegiate women over a 5-year period. By spring 2011, the sorority version of dissonance prevention (i.e., Reflections: Body Image Program) will have reached over 80 campuses throughout North America in a sustainable manner, with ongoing research documenting the transportability of peer-led dissonance eating disorders prevention (Perez, Becker, & Ramirez-Cash, 2010). The program also has a sustainable training infrastructure (Reflections: Body Image Academy) that is priced at a level considered affordable by target communities. Further, this model (peer-led, with custom manuals and the ability for an organization to brand a specific variant of dissonance prevention) appears to be of interest to other groups, including several non-profit eating disorder organizations outside North America.

We believe that these programs of research provide useful examples of moving from efficacy to effectiveness research in prevention. It will be important for future studies to test whether this prevention program still produces valuable effects when delivered by endogenous providers in other settings (e.g., peers at the high school level, college counselors, hospital staff). In addition, it will be important to investigate how best to disseminate this intervention to endogenous providers, with a focus on predictors of adoption, implementation, and sustainability.

Conclusion

This paper has reviewed the literature on moving from efficacy to effectiveness in prevention research and described two illustrative research programs, addressing several aspects of the research process that require special attention in effectiveness trials. We hope the examples and information discussed here help other prevention scientists conceptualize pathways from efficacy to effectiveness research; as many others have noted before us, these pathways are not necessarily linear (e.g., Glasgow et al., 2006). The importance of scientifically rigorous trials of the effectiveness of prevention programs cannot be overstated. Findings from such studies are critical for demonstrating the real-world value of prevention activities to funding agencies, community stakeholders, providers, and potential intervention recipients.

Fortunately, repeated calls to advance beyond efficacy research, together with demonstrations of the feasibility and usefulness of effectiveness trials (primarily in treatment rather than prevention), have improved the visibility and acceptability of effectiveness research. A recent perusal of NIMH program announcements revealed several that target effectiveness, dissemination, and implementation research. We encourage scientists to promote effectiveness research on evidence-based treatment and prevention programs as a priority within both professional organizations and governmental funding agencies. To continue to advance effectiveness research, we believe it particularly important to: (a) design programs of research with the endpoint of dissemination in mind (e.g., include elements of external validity in initial studies); (b) build lasting, mutually beneficial relationships with community partners and be persistent in figuring out how to work collaboratively while maintaining adequate experimental control; (c) include measures of cost and cost-effectiveness; and (d) report findings in enough detail that other researchers can understand the context and effects of an intervention; the CONSORT extension for pragmatic trials is excellent for this purpose (Zwarenstein et al., 2008).
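As one concrete, entirely hypothetical illustration of recommendation (c): a common summary metric is the incremental cost-effectiveness ratio (ICER), which divides the difference in mean costs between the intervention and comparison conditions by the difference in mean effectiveness, often expressed in quality-adjusted life years (QALYs; see Fenwick & Byford, 2005; Hoch, Briggs, & Willan, 2002). A minimal sketch:

\[
\text{ICER} = \frac{\bar{C}_{1} - \bar{C}_{0}}{\bar{E}_{1} - \bar{E}_{0}}
\]

where \(\bar{C}_{1}\) and \(\bar{E}_{1}\) denote mean cost and effectiveness per participant under the prevention program, and \(\bar{C}_{0}\) and \(\bar{E}_{0}\) under the comparison condition. For example, if a program cost $150 per participant versus $50 for an assessment-only comparison, and yielded 0.010 versus 0.005 QALYs over the follow-up period, the ICER would be (150 − 50)/(0.010 − 0.005) = $20,000 per QALY gained. These figures are invented for illustration only; real analyses should also characterize uncertainty, for example with cost-effectiveness acceptability curves (Fenwick & Byford, 2005) or net health benefits (Stinnett & Mullahy, 1998).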

The prevention of disorder and distress remains an important goal for behavioral scientists and is as necessary today as it was over fifteen years ago, when the Institute of Medicine called attention to the need for prevention in practice (Mrazek & Haggerty, 1994). We hope that the positions, guidelines, suggestions, and examples provided in this paper help prevention scientists successfully transition to effectiveness research.

Acknowledgments

Research support:

NIMH Grant 5 R01 MH070699 and an NIMH Research Supplement to Promote Diversity in Health Related Research provided financial support for completion of the work on this manuscript.

References

1. Backer T, Liberman R, Kuehnel T. Dissemination and adoption of innovative psychosocial interventions. Journal of Consulting and Clinical Psychology. 1986;54:111–118. doi: 10.1037//0022-006x.54.1.111.
2. Baron R, Kenny D. The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology. 1986;51:1173–1182. doi: 10.1037//0022-3514.51.6.1173.
3. Barrera M, Sandler I. Prevention: A report of progress and momentum into the future. Clinical Psychology: Science and Practice. 2006;13(3):221–226.
4. Bausell R, editor. Translation research [Special issue]. Evaluation and the Health Professions. 2006;29(1). doi: 10.1177/0163278705284440.
5. Becker CB, Bull S, Schaumberg K, Cauble A, Franco A. Effectiveness of peer-led eating disorders prevention: A replication trial. Journal of Consulting and Clinical Psychology. 2008;76:347–354. doi: 10.1037/0022-006X.76.2.347.
6. Becker CB, Smith LM, Ciao AC. Reducing eating disorder risk factors in sorority members: A randomized trial. Behavior Therapy. 2005;36:245–253.
7. Becker CB, Smith LM, Ciao AC. Peer facilitated eating disorders prevention: A randomized effectiveness trial of cognitive dissonance and media advocacy. Journal of Counseling Psychology. 2006;53:550–555.
8. Becker CB, Stice E, Shaw H, Woda S. Use of empirically-supported interventions for psychopathology: Can the participatory approach move us beyond the research-to-practice gap? Behaviour Research and Therapy. 2009;47:265–274. doi: 10.1016/j.brat.2009.02.007.
9. Becker CB, Wilson C, Williams A, Kelly M, McDaniel L, Elmquist J. Peer-facilitated cognitive dissonance versus healthy weight eating disorders prevention: A randomized comparison. Body Image. 2010;7:280–288. doi: 10.1016/j.bodyim.2010.06.004.
10. Chambless D, Hollon S. Defining empirically supported therapies. Journal of Consulting and Clinical Psychology. 1998;66(1):7–18. doi: 10.1037//0022-006x.66.1.7.
11. Clarke G. Improving the transition from basic efficacy research to effectiveness studies: Methodological issues. Journal of Consulting and Clinical Psychology. 1995;63(5):718–725. doi: 10.1037//0022-006x.63.5.718.
12. Cohen J, Neumann P. Cost-savings and cost-effectiveness of clinical preventive care. Robert Wood Johnson Foundation – The Synthesis Project, Research Synthesis Report No. 18. 2009. Retrieved October 14, 2010 from http://www.rwjf.org/files/research/100709.policysnythesis.preventivecare.report.pdf.
13. Dolan P. Developing methods that really do value the "Q" in the QALY. Health Economics, Policy, and Law. 2008;3:69–77. doi: 10.1017/S1744133107004355.
14. Donner A, Klar N. Pitfalls of and controversies in cluster randomization trials. American Journal of Public Health. 2004;94:416–422. doi: 10.2105/ajph.94.3.416.
15. Fenwick E, Byford S. A guide to cost-effectiveness acceptability curves. British Journal of Psychiatry. 2005;187:106–108. doi: 10.1192/bjp.187.2.106.
16. Festinger L. A theory of cognitive dissonance. Stanford: Stanford University Press; 1957.
17. Fixsen D, Blase K, Naoom S, Wallace F. Core implementation components. Research on Social Work Practice. 2009;19:531–540.
18. Flay B. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Preventive Medicine. 1986;15:451–474. doi: 10.1016/0091-7435(86)90024-1.
19. Flay B, Biglan A, Boruch R, Castro F, Gottfredson D, Kellam S, et al. Standards of evidence: Criteria for efficacy, effectiveness, and dissemination. Prevention Science. 2005;6(3):151–175. doi: 10.1007/s11121-005-5553-y.
20. Glasgow R, Green L, Klesges L, Abrams D, Fisher E, Goldstein M, Hayman L, Ockene J, Orleans T. External validity: We need to do more. Annals of Behavioral Medicine. 2006;31:105–108. doi: 10.1207/s15324796abm3102_1.
21. Glasgow R, Lichtenstein E, Marcus A. Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health. 2003;93(8):1261–1267. doi: 10.2105/ajph.93.8.1261.
22. Gordon R. An operational classification of disease prevention. Public Health Reports. 1983;98:107–109.
23. Green L, Glasgow R. Relevance, generalization, and applicability of research: Issues in external validation and translation methodology. Evaluation and the Health Professions. 2006;29(1):126–153. doi: 10.1177/0163278705284445.
24. Green M, Scott N, Diyankova I, Gasser C, Pederson E. Eating disorder prevention: An experimental comparison of high level dissonance, low level dissonance, and no-treatment control. Eating Disorders. 2005;13:157–169. doi: 10.1080/10640260590918955.
25. Herschell A, McNeil C, McNeil D. Clinical child psychology's progress in disseminating empirically supported treatments. Clinical Psychology: Science and Practice. 2004;11(3):267–288.
26. Hoagwood K, Hibbs E, Brent D, Jensen P. Introduction to the special section: Efficacy and effectiveness in studies of child and adolescent psychotherapy. Journal of Consulting and Clinical Psychology. 1995;63:683–687. doi: 10.1037//0022-006x.63.5.683.
27. Hoagwood K, Olin S. The NIMH blueprint for change report: Research priorities in child and adolescent mental health. Journal of the American Academy of Child and Adolescent Psychiatry. 2002;41(7):760–767. doi: 10.1097/00004583-200207000-00006.
28. Hoch J, Briggs A, Willan A. Something old, something new, something borrowed, something blue: A framework for the marriage of health econometrics and cost-effectiveness analysis. Health Economics. 2002;11:415–430. doi: 10.1002/hec.678.
29. Holder H, Flay B, Howard J, Boyd G, Voas R, Grossman M. Phases of alcoholism prevention research. Alcoholism: Clinical and Experimental Research. 1999;23(1):183–194.
30. Israel BA, Eng E, Schulz AJ, Parker EA. Introduction to methods in community-based participatory research for health. In: Israel BA, Eng E, Schulz AJ, Parker EA, editors. Methods in community-based participatory research for health. San Francisco: Jossey-Bass; 2005. pp. 3–26.
31. Kazdin AE. Research Design in Clinical Psychology. 4th ed. Boston, MA: Allyn and Bacon; 2003.
32. Klesges L, Estabrooks P, Dzewaltowski D, Bull S, Glasgow R. Beginning with the application in mind: Designing and planning health behavior change interventions to enhance dissemination. Annals of Behavioral Medicine. 2005;29(suppl):66–75. doi: 10.1207/s15324796abm2902s_10.
33. Knapp M, Mangalore R. The trouble with QALYs…. Social Psychiatry and Psychiatric Epidemiology. 2007;61:348–358.
34. Lorig K, Hurwicz M, Sobel D, Hobbs M, Ritter P. A national dissemination of an evidence-based self-management program: A process evaluation study. Patient Education and Counseling. 2005;59:69–79. doi: 10.1016/j.pec.2004.10.002.
35. Lynch F, Striegel-Moore R, Dickerson J, Perrin N, DeBar L, Wilson G, Kraemer H. Cost-effectiveness of guided self-help treatment for recurrent binge eating. Journal of Consulting and Clinical Psychology. 2010;78:322–333. doi: 10.1037/a0018982.
36. Matusek JA, Wendt SJ, Wiseman CV. Dissonance thin-ideal and didactic healthy behavior eating disorder prevention programs: Results from a controlled trial. International Journal of Eating Disorders. 2004;36:376–388. doi: 10.1002/eat.20059.
37. Merrell K, Juskelis M, Tran O, Buchanan R. Social and emotional learning in the classroom: Evaluation of strong kids and strong teens on students' social-emotional knowledge and symptoms. Journal of Applied School Psychology. 2008;24:209–224.
38. Mitchell KS, Mazzeo SE, Rausch SM, Cooke KL. Innovative interventions for disordered eating: Evaluating dissonance-based and yoga interventions. International Journal of Eating Disorders. 2007;40:120–128. doi: 10.1002/eat.20282.
39. Mrazek P, Haggerty R. Reducing risks for mental disorders: Frontiers for preventive intervention research. Washington, DC: National Academy Press; 1994.
40. Perez M, Becker CB, Ramirez-Cash A. Transportability of an empirically supported dissonance-based prevention program for eating disorders. Body Image. 2010;7:179–186. doi: 10.1016/j.bodyim.2010.02.006.
41. Pineda GG. Estrategias preventivas de factores de riesgo en trastornos de la conducta alimentaria [Preventive strategies for risk factors in eating disorders]. Unpublished doctoral dissertation. Facultad de Psicología, Universidad Nacional Autónoma de México, México; 2006.
42. Poulin F, Dishion TJ, Burraston B. 3-year iatrogenic effects associated with aggregating high-risk adolescents in preventive interventions. Applied Developmental Science. 2001;5:214–224.
43. Prochaska J, DiClemente C, Norcross J. In search of how people change: Applications to addictive behaviors. American Psychologist. 1992;47(9):1102–1114. doi: 10.1037//0003-066x.47.9.1102.
44. Reynolds K, Spruijt-Metz D. Translational research in childhood obesity prevention. Evaluation and the Health Professions. 2006;29(2):219–245. doi: 10.1177/0163278706287346.
45. Roehrig M, Thompson JK, Brannick M, van den Berg P. Dissonance-based eating disorder prevention program: A preliminary dismantling investigation. International Journal of Eating Disorders. 2006;39:1–10. doi: 10.1002/eat.20217.
46. Roy-Byrne P, Sherbourne C, Craske M, Stein M, Katon W, Sullivan G, et al. Moving treatment research from clinical trials to the real world. Psychiatric Services. 2003;54(3):327–332. doi: 10.1176/appi.ps.54.3.327.
47. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. Journal of Chronic Diseases. 1967;20:637–648. doi: 10.1016/0021-9681(67)90041-0. Reprinted in Journal of Clinical Epidemiology, 62, 499–505.
48. Solomon J, Card J, Malow R. Adapting efficacious interventions: Advancing translational research in HIV prevention. Evaluation and the Health Professions. 2006;29(2):162–194. doi: 10.1177/0163278706287344.
49. Stice E. Risk and maintenance factors for eating pathology: A meta-analytic review. Psychological Bulletin. 2002;128:825–848. doi: 10.1037/0033-2909.128.5.825.
50. Stice E, Marti N, Spoor S, Presnell K, Shaw H. Dissonance and healthy weight eating disorder prevention programs: Long-term effects from a randomized efficacy trial. Journal of Consulting and Clinical Psychology. 2008a;76:329–340. doi: 10.1037/0022-006X.76.2.329.
51. Stice E, Presnell K. The Body Project: Promoting Body Acceptance and Preventing Eating Disorders, Facilitators Guide. New York: Oxford University Press; 2007.
52. Stice E, Rohde P, Seeley J, Gau J. Brief cognitive-behavioral depression prevention program for high-risk adolescents outperforms alternative interventions: A randomized efficacy trial. Journal of Consulting and Clinical Psychology. 2008b;76:595–606. doi: 10.1037/a0012645.
53. Stice E, Rohde P, Gau J, Shaw H. An effectiveness trial of a dissonance-based eating disorder prevention program for high-risk adolescent girls. Journal of Consulting and Clinical Psychology. 2009;77:825–834. doi: 10.1037/a0016132.
54. Stice E, Shaw H. Eating disorder prevention programs: A meta-analytic review. Psychological Bulletin. 2004;130:206–227. doi: 10.1037/0033-2909.130.2.206.
55. Stice E, Shaw H, Burton E, Wade E. Dissonance and healthy weight eating disorder prevention programs: A randomized efficacy trial. Journal of Consulting and Clinical Psychology. 2006a;74(2):263–275. doi: 10.1037/0022-006X.74.2.263.
56. Stice E, Shaw H, Marti CN. A meta-analytic review of obesity prevention programs for children and adolescents: The skinny on interventions that work. Psychological Bulletin. 2006b;132:667–691. doi: 10.1037/0033-2909.132.5.667.
57. Stinnett A, Mullahy J. Net health benefits: A new framework for the analysis of uncertainty in cost-effectiveness analysis. Medical Decision Making. 1998;18(Suppl 2):68–80. doi: 10.1177/0272989X98018002S09.
58. Stouthard M, Essink-Bot M, Bonsel G. Disability weights for diseases: A modified protocol and results for a Western European region. European Journal of Public Health. 2000;10:24–30.
59. Sussman S, Valente T, Rohrbach L, Skara S, Pentz M. Translation in the health professions: Converting science into action. Evaluation and the Health Professions. 2006;29(1):7–32. doi: 10.1177/0163278705284441.
60. Tunis S, Stryer D, Clancy C. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. Journal of the American Medical Association. 2003;290:1624–1632. doi: 10.1001/jama.290.12.1624.
61. Wells KB. Treatment research at the crossroads: The scientific interface of clinical trials and effectiveness research. American Journal of Psychiatry. 1999;156(1):5–10. doi: 10.1176/ajp.156.1.5.
62. Wilson GT. Manual-based treatment: Evolution and evaluation. In: Treat TA, Bootzin RR, Baker TB, editors. Psychological clinical science: Papers in honor of Richard M. McFall. Modern pioneers in psychological science. New York, NY: Psychology Press; 2007. pp. 105–132.
63. Zubrick S, Ward K, Silburn S, Lawrence D, Williams A, Blair E, et al. Prevention of child behavior problems through primary implementation of a group behavioral family intervention. Prevention Science. 2005;6(4):287–304. doi: 10.1007/s11121-005-0013-2.
64. Zwarenstein M, Treweek S. What kind of randomized trials do we need? Canadian Medical Association Journal. 2009;180:998–1000. doi: 10.1503/cmaj.082007.
65. Zwarenstein M, Treweek S, Gagnier J, Altman D, Tunis S, Haynes B, Oxman A, Moher D. Improving the reporting of pragmatic trials: An extension of the CONSORT statement. British Medical Journal. 2008;337:a2390–a2397. doi: 10.1136/bmj.a2390.
