American Journal of Public Health. 2003 Aug;93(8):1261–1267. doi: 10.2105/ajph.93.8.1261

Why Don’t We See More Translation of Health Promotion Research to Practice? Rethinking the Efficacy-to-Effectiveness Transition

Russell E Glasgow 1, Edward Lichtenstein 1, Alfred C Marcus 1
PMCID: PMC1447950  PMID: 12893608

Abstract

The gap between research and practice is well documented. We address one of the underlying reasons for this gap: the assumption that effectiveness research naturally and logically follows from successful efficacy research. These 2 research traditions have evolved different methods and values; consequently, there are inherent differences between the characteristics of a successful efficacy intervention and those of a successful effectiveness intervention. Moderating factors that limit robustness across settings, populations, and intervention staff need to be addressed in efficacy studies, as well as in effectiveness trials. Greater attention needs to be paid to documenting intervention reach, adoption, implementation, and maintenance. Recommendations are offered to help close the gap between efficacy and effectiveness research and to guide evaluation and possible adoption of new programs.


Despite a growing literature documenting prevention and health promotion interventions that have proven successful in well-controlled research, few of these interventions are consistently implemented in applied settings. This is true across preventive counseling services for numerous target behaviors, including tobacco use, dietary change, physical activity, and behavioral health issues (e.g., alcohol use, depression). Several recent reviews and meta-analyses have documented this gap,1,2 and the task forces on both clinical preventive services and community preventive services have noted that in several areas there is insufficient applied evidence available to make recommendations at present.3–5 Most of the Healthy People 2000 objectives6 were not met, and the even more ambitious goals in Healthy People 2010 are similarly unlikely to be met without significant changes in the status quo.7,8 To meet these challenges, we will need substantially more demonstrations of how to effectively implement recommendations in typical settings and in locations serving minority, low-income, and rural populations facing health disparities.

This situation is not unique to preventive interventions, as strikingly documented in the recent Institute of Medicine report Crossing the Quality Chasm,9 which summarizes the similar state of affairs regarding many medical and disease management interventions. For example, there is increasing consensus on evidence-based diabetes management practices to prevent complications and on the importance and cost-effectiveness of these practices.10 However, these recommendations—and especially those related to lifestyle counseling and behavioral issues—are poorly implemented in practice.11–14

This gap between research and practice is the result of several interacting factors, including limited time and resources of practitioners, insufficient training,15 lack of feedback and incentives for use of evidence-based practices, and inadequate infrastructure and systems organization to support translation.8,16 In this article, we focus on another reason for the slow and incomplete translation of research findings into practice: the logic and assumptions behind the design of efficacy and effectiveness research trials.

EFFICACY AND EFFECTIVENESS TRIALS

Many of the methods used in current prevention science are based on 2 influential papers published in the 1980s: Greenwald and Cullen’s17 description of the phases of cancer control research and Flay’s analysis of efficacy and effectiveness research.18 Both papers argued for a logical progression of research designs through which promising intervention ideas should proceed. These papers had many positive effects, helping to establish prevention research and enhance its acceptability among other disciplines. However, they may also have had an important and inadvertent negative consequence that derives from the assumption that the best candidates for effectiveness studies—and later dissemination—are interventions that prove successful in certain types of efficacy research. We argue that this assumption, or at least the way in which it has been operationalized over the past 15 years, has often led to interventions that have a low probability of success in real-world settings.

To understand this point, it is necessary first to briefly review the seminal papers by Flay18 and Greenwald and Cullen.17 Efficacy trials are defined by Flay as a test of whether a “program does more good than harm when delivered under optimum conditions.”18(p451) Efficacy trials are characterized by strong control in that a standardized program is delivered in a uniform fashion to a specific, often narrowly defined, homogeneous target audience. Owing to the strict standardization of efficacy trials, any positive (or negative) effect can be directly attributed to the intervention being studied.

Effectiveness trials are defined as a test of whether a “program does more good than harm when delivered under real-world conditions.”18(p451) They typically standardize availability and access among a defined population while allowing implementation and levels of participation to vary on the basis of real-world conditions. The primary goal of an effectiveness trial is to determine whether an intervention works among a broadly defined population. Effectiveness trials that result in no change may be the result of a lack of proper implementation or weak acceptance or adherence by participants.18,19

Greenwald and Cullen17 proposed 5 phases of intervention research presumed to unfold in a sequential fashion. This continuum begins with Phase I research to formulate and develop intervention hypotheses for future study. Phase II studies develop methodologies that can be used in future efficacy or effectiveness studies. Phase III (efficacy) studies test intervention hypotheses, using methods that have been tested in Phase II. Thus, Phase III studies are designed to test interventions for efficacy, with an emphasis on internal validity, the purpose of which is to establish a causal link between the intervention and outcomes. Given this emphasis on internal control, Greenwald and Cullen note that Phase III studies can be conducted in settings and with samples that will “optimize interpretation of efficacy,” including study samples that may be more homogeneous than the ultimate target population, and settings that will maximize management of and control over the research process.

The main objective of Phase IV (effectiveness) studies is to measure the impact of an intervention when it is tested within a population that is representative of the intended target audience. Given that Phase IV studies should yield results that are generalizable, there is also the presumption that the context and setting for delivering the intervention should likewise be generalizable to the intended program users. In Phase V studies, effective Phase IV interventions are translated into large-scale demonstration projects. The major concern is implementation fidelity of an intervention that will now be introduced within even broader populations, including entire communities. This final phase (dissemination research), where collaboration and coordination with various community partners is likely to receive even greater attention, is intended to provide the necessary data and experience to move interventions into public health service programs at the national, regional, state, and local levels.

Greenwald and Cullen specifically advocated that intervention research unfold in a systematic fashion, building on and extending the body of science accumulated in previous phases. By explicitly defining the difference between Phase III and Phase IV research as being an emphasis on internal control versus representativeness, both Flay and Greenwald and Cullen assumed that successful Phase III trials would lead naturally to Phase IV trials. Unfortunately, this has not occurred.1,11,20 Instead, we currently find ourselves in a situation in which we have many small-scale efficacy studies of unknown generalizability and few successful effectiveness trials.21,22 In particular, we know very little about the representativeness of participants, settings, or intervention agents participating in health promotion research.1,21

Although the National Cancer Institute no longer emphasizes this linear “phases of research” model,23,24 the model was extremely influential in guiding an entire generation of research; many researchers, reviewers, and editors still use this framework when designing, funding, and evaluating research—and in deciding what types of studies are needed to advance a given area. Similar phase models are influential in evaluating prevention effectiveness25 and in developing drug therapies. In the remainder of this article, we discuss how this well-intentioned and logical phases-of-research paradigm may have fallen short of its intended goal, and propose approaches to remedy the present situation.

Our primary thesis is that this “trickle-down” model of how to translate research into practice—namely, that the optimal way to develop disseminable interventions is to progress from efficacy studies to effectiveness trials to dissemination projects—is inherently flawed, or at least incomplete. We posit that given the respective cultures, values, and methodological traditions that have developed within efficacy versus population-based effectiveness research, it is highly unlikely that interventions that are successful in efficacy studies will do well in effectiveness studies, or in real-world applications.

Table 1 summarizes the key characteristics of well-designed efficacy and effectiveness trials, using the RE-AIM evaluation framework.26,27 This model for evaluating interventions is intended to refocus priorities on public health issues, and it gives balanced emphasis to internal and external validity (see http://www.re-aim.org). RE-AIM is an acronym for Reach, Efficacy or Effectiveness (depending on the stage of research), Adoption, Implementation, and Maintenance.

TABLE 1—

Distinctive Characteristics of Efficacy and Effectiveness Intervention Studies, Using RE-AIM26,27 Dimensions for Program Evaluation

Reach
    Efficacy studies: Homogeneous, highly motivated sample; exclude those with complications, other comorbid problems.
    Effectiveness studies: Broad, heterogeneous, representative sample; often use a defined population.

Efficacy or effectiveness
    Efficacy studies: Intensive, specialized interventions that attempt to maximize effect size; very standardized; randomized designs.
    Effectiveness studies: Brief, feasible interventions not requiring great expertise; adaptable to setting; randomized, time series, or quasi-experimental designs.

Adoption
    Efficacy studies: Usually 1 setting to reduce variability; settings with many resources and expert staff.
    Effectiveness studies: Appeal to and work in multiple settings; able to be adapted to fit setting.

Implementation
    Efficacy studies: Implemented by research staff closely following specific protocol.
    Effectiveness studies: Implemented by variety of different staff with competing demands, using adapted protocol.

Maintenance and cost
    Efficacy studies: Few or no issues; focus on individual level.
    Effectiveness studies: Major issues; setting-level maintenance is as important as individual-level maintenance.

Reach refers to the participation rate among those approached and the representativeness of participants. Factors determining reach are the size and characteristics of the potential audience and the barriers to participation (e.g., cost, social and environmental context, necessary referrals, transportation, and inconvenience). Efficacy or effectiveness pertains to the impact of an intervention on specified outcome criteria and includes measures of potential negative outcomes as well as intended results (as recommended by Flay,18 but seldom collected)28,29 (D. A. Dzewaltowski et al., unpublished data, 2002). Adoption operates at the setting level and concerns the percentage and representativeness of organizations or settings that will conduct a given program. Rogers30 has written extensively on adoption and dissemination issues. Factors associated with adoption include political and cultural fit, cost, level of resources and expertise required, and how similar a proposed service is to current practices of an organization. Implementation refers to intervention integrity, or the quality and consistency of delivery. Finally, maintenance operates at both the individual and the setting or organizational level. At the individual level, maintenance refers to how well behavior changes hold up in the long term. At the setting level, it refers to the extent to which a treatment or practice becomes institutionalized in an organization.
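As a concrete (and purely illustrative) reading of these definitions, the sketch below expresses reach and adoption as simple proportions: participation among the individuals approached, and program delivery among the settings approached. The class name, field names, and counts are hypothetical and are not taken from the RE-AIM publications; representativeness, which both dimensions also require, is noted only in comments.

```python
# Illustrative only: simple proportions corresponding to the RE-AIM definitions of
# reach (individual level) and adoption (setting level). All counts are hypothetical.
# Note that reach and adoption also require assessing representativeness of who
# participates, which a single proportion cannot capture.
from dataclasses import dataclass


@dataclass
class ReAimCounts:
    individuals_approached: int
    individuals_participating: int
    settings_approached: int
    settings_delivering: int

    @property
    def reach(self) -> float:
        """Participation rate among individuals approached."""
        return self.individuals_participating / self.individuals_approached

    @property
    def adoption(self) -> float:
        """Proportion of approached settings that conduct the program."""
        return self.settings_delivering / self.settings_approached


example = ReAimCounts(
    individuals_approached=2_000, individuals_participating=500,
    settings_approached=40, settings_delivering=12,
)
print(f"Reach: {example.reach:.0%}, Adoption: {example.adoption:.0%}")  # Reach: 25%, Adoption: 30%
```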

Table 1 summarizes how the RE-AIM dimensions apply to the efficacy–effectiveness distinction. Efficacy trials typically limit reach by seeking motivated, homogeneous participants with minimal or no complications or comorbidities. The considerable degree of initial screening for eligibility inherently limits the reach of an efficacy trial. Adoption is often treated as a nonissue for efficacy trials so long as at least one or, in some trials, a few settings are willing to participate. For effectiveness trials, reach is usually higher because participants are drawn from a broad and “defined” population. Adoption is critical because the settings need to commit their own resources and expect the intervention to “fit” with existing procedures.

Implementation in an efficacy trial is usually accomplished by research staff following a standardized protocol, whereas in an effectiveness trial, regular staff with many competing demands on their time must implement the intervention. While such staff are also guided by a protocol, adherence is likely to be more variable.1 Because they are implemented by research staff, efficacy interventions are often more complex and intensive than effectiveness interventions. Maintenance is usually a nonissue for efficacy trials at the setting level; it is expected that the intervention will cease when final assessments are completed and research staff depart. Since effectiveness trials are intended to represent typical setting conditions, it is hoped that the intervention will be maintained, assuming there are positive results.

WHY THE DISCONNECT?

We conclude that the characteristics that cause an intervention to be successful in efficacy research (e.g., intensive, complex, highly standardized) are fundamentally different from, and often at odds with, programs that succeed in population-based effectiveness settings (e.g., having broad appeal, being adaptable for both participants and intervention agents). If this is the case, then the “system” of moving from research to usual service programs, to which we have subscribed, may be broken and may need to be substantially modified.

Why does this linear progression of research, which is analogous to the steps used successfully to evaluate and bring pharmaceuticals to market, seem to fail with behavioral and health promotion research? One contextual factor is that, before trials, pharmaceutical companies invest far more time and money in establishing that a drug affects relevant biological mediators than behavioral researchers invest in showing that their interventions affect psychosocial mediators. Granted, industry has vastly more resources. But we suggest that key differences also reside in the nature of the interventions.

Standard medical interventions (e.g., drugs or surgery) are presumed to be robust, to be readily transferable from setting to setting, and to work approximately equally well across broad categories of patients. Clinicians exercise discretion about dosage and surgeons vary in experience, but it is still presumed that the pill is the same whoever administers it. Medicinal and surgical protocols can be defined relatively precisely, and adherence to them can be monitored more easily than adherence to behavioral interventions. Behavioral interventions are more difficult to define and standardize, in part because of their inherent interactivity with client characteristics, preferences, and behaviors. This difficulty is exacerbated when behavioral interventions are delivered by staff whose training and expertise fall outside of behavioral science. In efficacy trials, research staff usually bring expertise in behavioral intervention and ensure that it is implemented consistently. This level of quality control and standardization is typically absent among regular health care staff implementing interventions for effectiveness trials.

There are 2 underlying differences between efficacy and effectiveness approaches that we feel are responsible for the current state of affairs. The first is that in an effort to enhance internal validity and control extraneous factors, the tradition in efficacy studies has been to simplify and narrow settings, conditions, participants, and a variety of other factors. There is nothing inherently wrong with this methodological approach, and the tradition of reductionism (e.g., understanding effects by isolating them and removing or controlling other factors) has contributed much to the advancement of science and theory.31 The problem is that usually the longer-range intent is to generalize beyond the narrow conditions of the efficacy trial. In effectiveness trials, an intervention must be robust across a variety of different participants, settings, conditions, and other less controlled factors. Equally important, it must appeal to a broad “defined population” or target audience.

A classic example of the typical differences between a health care efficacy study and an effectiveness trial concerns subject selection. In a tightly controlled efficacy trial, only highly motivated, homogeneous, self-selected volunteers who do not have any complications or other comorbid conditions are eligible (to control for potential confounding factors). Then, following success in such an efficacy study, we expect the same intervention to appeal to and be effective in a much broader cross-section of participants, many of whom have comorbid conditions and may not volunteer for treatment.

The second key difference between efficacy and effectiveness trials concerns how settings and contextual factors are treated. In efficacy studies, the usual approach is to control variance by restricting the setting to one set of circumstances—for example, one particular clinic (which often includes intervention experts). In contrast, a key characteristic of effectiveness trials is to produce robust effects and to understand variation in outcomes across heterogeneous settings and delivery agents. Therefore, it should not be surprising when the results of an intervention are efficacious under a highly specific set of circumstances but fail to replicate across a wide variety of settings, conditions, and intervention agents in effectiveness research.

SHALL THE TWAIN EVER MEET?

From the above discussion, it may seem hopeless to expect congruence across findings from efficacy and effectiveness studies. Some might go so far as to suggest, as one reviewer of this manuscript did, that perhaps efficacy studies should be abandoned altogether. We are optimistic, however, that there are solutions to the present disconnect. In brief, we need to embrace and study the complexity of the world, rather than attempting to ignore or reduce it by studying only isolated (and often unrepresentative) situations.32 What is needed is a “science of larger social units”33 that takes into account and analyzes the social context(s) in which experiments are conducted. To advance our present state of science, the question that we need to ask of both efficacy and effectiveness studies is “What are the characteristics of interventions that can (a) reach large numbers of people, especially those who can most benefit, (b) be broadly adopted by different settings (worksite, school, health, or community), (c) be consistently implemented by different staff members with moderate levels of training and expertise, and (d) produce replicable and long-lasting effects (and minimal negative impacts) at a reasonable cost?”

This suggested focus has important implications. It implies that we need to consider not only individual participants but also the settings within which they reside and receive treatment. This move to a multilevel approach is consistent with developments in several fields, and methodologies for how to handle such factors are available. There is not only a rich conceptual history to the study of generalization34 and of representative or purposeful sampling,35,36 but also statistical methods for handling these contextual factors.37

This comes down to an issue of generalization.38 The prevailing view seems to be that efficacy studies should focus only on internal validity and theoretical process mechanisms, and that issues of external validity should be left until later effectiveness studies. In contrast, we argue that issues of moderating variables (external validity) need to be addressed in both efficacy and effectiveness studies. Brewer39 conceptualizes such social context factors as moderating variables that influence the conclusions that can be drawn about the efficacy of an intervention. Moderating variables (e.g., race/ethnicity, socioeconomic status, type of setting or intervention agent) are relatively stable factors that interact with the intervention or change the effect of the program. Researchers should consider elevating hypotheses related to moderator variables to primary aims.
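To illustrate what elevating a moderator hypothesis to a primary aim can look like analytically, here is a minimal sketch that tests a treatment-by-setting interaction term in a regression model. The variable names, simulated data, and effect sizes are hypothetical and are not drawn from the studies cited above.

```python
# Illustrative sketch: treating "setting type" as a moderating variable by testing
# a treatment-by-setting interaction term. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),          # 1 = intervention, 0 = control
    "community_clinic": rng.integers(0, 2, n),   # 1 = community clinic, 0 = academic center
})
# Simulate an outcome in which the intervention effect is weaker in community clinics.
df["outcome"] = (
    0.5 * df["treatment"]
    - 0.3 * df["treatment"] * df["community_clinic"]
    + rng.normal(0, 1, n)
)

# The interaction coefficient estimates how much the intervention effect changes across
# settings; specifying it a priori as a primary aim is the approach recommended here.
model = smf.ols("outcome ~ treatment * community_clinic", data=df).fit()
print(model.summary().tables[1])
```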

WHAT CAN BE DONE? DISCUSSION AND RECOMMENDATIONS

It is difficult to change established practice patterns, whether those of clinicians, researchers, or funding agencies. It cannot reasonably be expected that many scientists will quickly discontinue practices in which they have been trained and become comfortable. It is also more efficient, and much more under one’s control, to continue to conduct efficacy studies without considering moderating variables or external validity because “the purpose is to study interventions under ideal conditions.” However, as illustrated above, this is only true if one does not intend to generalize one’s conclusions beyond the very limited sample and conditions of a given study,1,31 which is hardly ever the case in health promotion research.

There is an increasingly well-documented disparity between the large amount of information on efficacy and the very small amount of information on effectiveness and representativeness.21,22,40 To produce significant improvement in the current state of affairs, changes will be necessary on the part of researchers, funding organizations, journal reviewers, and grant review panels. We propose 4 specific changes—2 of which focus on researchers, 1 on journal editors, and 1 on funding organizations.

1. Researchers should pay increased attention to moderating factors in both efficacy and effectiveness research. Table 2 outlines how data collection and information about moderating factors, such as participant characteristics (reach) and setting characteristics (adoption), can be incorporated into both efficacy and effectiveness research in a manner appropriate to that phase. Using the RE-AIM framework, we suggest that researchers consider the types of settings, intervention agents, and individuals that they wish their program to be used by when designing and evaluating interventions. During efficacy studies, purposeful sampling or oversampling strategies can be used to include both specific end-user groups (e.g., minorities, less educated) and settings of interest. A critical concern for broader application—and an integral part of Flay’s original description18—is measurement of potential harmful outcomes. This part of his definition has seldom been addressed, but it needs to be.

TABLE 2—

Ways to Address RE-AIM26,27 Issues in Efficacy and Effectiveness Studies

Efficacy trials (Phase III research)
    Reach: Have specified inclusion criteria or purposeful selection, but participants will be volunteers in a specific research setting. Report exclusions, participation rates, dropouts, and representativeness on key characteristics.
    Efficacy: Measure outcomes using intent to treat assumptions or imputation of missing values and a high level of rigor. Assess both positive (anticipated) and negative (unintended) outcomes. Report effects of moderator variables.
    Adoption: Have potential adoptees assess fit of prototype intervention to their setting. Engage potential community settings in strategic planning efforts from the outset. Include “proxy measures” of adoption, such as participation among those staff members of a system who will participate in the study.
    Implementation: Collect data on likely treatment demands. Evaluate delivery of intervention protocol by different intervention agents (usually research staff).
    Maintenance: Assess recidivism among participants. Document extent to which research protocol is retained by setting/agency once the formal study is completed.

Effectiveness trials in defined populations (Phase IV research)
    Reach: Include all relevant members of a defined population. Report exclusions, participation rates, dropouts, and representativeness.
    Effectiveness: Address as above, though measures are usually more limited. Include economic outcomes.
    Adoption: Assess willingness of stakeholders from multiple settings to adopt and adapt the program. Report on representativeness of settings, participation rate, and reasons for declining.
    Implementation: Assess staff ability to implement key components of the intervention in routine practice. Evaluate consistency of intervention delivery by agency staff who are not part of the research team.
    Maintenance: Assess continuation of program over time, and especially after research phase concludes. Systematically program for and evaluate the level of institutionalization of the program elements after formal study assistance is terminated.

Participatory research methods, including developing one’s intervention ideas collaboratively with members of the intended audience (individuals, intervention agents, and organization decisionmakers), should not be left for later phases of research but built into efficacy studies. More formal measures of adoption and setting-level maintenance may need to wait until later effectiveness studies (Table 2), but both qualitative and quantitative “proxy measures” of these factors can and should be addressed in efficacy studies. Such information can lead to better tailoring of interventions to organizational culture in the same way that tailoring of interventions at the individual level has led to increased success.41,42 A final recommendation for both efficacy and effectiveness studies is to include a variety of intervention agents, to describe their backgrounds and levels of experience/expertise with regard to the target behavior, and to report on potential differences in implementation and outcomes associated with these differences.43

As illustrated in Table 2, issues pertaining to moderating factors—and eventual translation into practice—are best addressed during the planning phases of research. RE-AIM, or other evaluation models,13,16 can be used to help plan and select samples, interventions, settings, and agents in ways that make it more likely that results will be replicated in later studies.

2. Realize that public health impact involves more than just efficacy. Our training and current review criteria all emphasize producing large effect sizes under tightly controlled conditions. To make a real-world impact, several other criteria are also necessary.

a. At the individual level, several research groups have proposed that Impact = Reach (R) × Efficacy (E).44–47 It is not enough to produce a highly efficacious intervention. To have broad public health impact, an intervention must also have high reach. To the Impact = R × E formula, we would add a third component: implementation (I). As discussed by Basch et al.,19 a program cannot be effective if it is not implemented. Thus, we propose that individual-level Impact = R × E × I.

b. An individual-level focus is, however, not sufficient. An intervention also has to be acceptable to and adopted by a variety of intervention settings, and to be implemented relatively consistently by different intervention agents. In other words, the parallel setting or organizational-level impact formula should be Organizational Impact (OI) = Adoption (A) × Implementation (I). Several authors have discussed issues of nesting and setting factors37,48 and how to adjust individual-level effects for issues of nonindependence. However, to our knowledge, the A × I = OI formula for estimating the impact of an intervention across settings has not been discussed, with the exception of an early related proposal by Kolbe49 that Impact = Effectiveness × Dissemination × Maintenance. It is important to emphasize that in terms of overall public health effect, adoption and implementation are as important as reach and efficacy, and that we need more emphasis on studies of organizational- and system-level factors.
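To make these formulas concrete, here is a minimal numerical sketch, with hypothetical values, of the individual-level formula Impact = R × E × I and the setting-level formula OI = A × I. It illustrates the point above that a brief program with broad reach can yield more aggregate impact than a highly efficacious program delivered to few people.

```python
# Hypothetical illustration of the impact formulas proposed in the text.
# All numbers are invented for the example; each term is a proportion (0-1).

def individual_impact(reach: float, efficacy: float, implementation: float) -> float:
    """Individual-level Impact = Reach x Efficacy x Implementation."""
    return reach * efficacy * implementation

def organizational_impact(adoption: float, implementation: float) -> float:
    """Setting-level Organizational Impact = Adoption x Implementation."""
    return adoption * implementation

# Intensive clinic-based program: large effect size but little reach.
intensive = individual_impact(reach=0.05, efficacy=0.40, implementation=0.90)
# Brief, widely offered program: smaller effect size but much greater reach.
brief = individual_impact(reach=0.60, efficacy=0.10, implementation=0.70)

print(f"Intensive program impact: {intensive:.3f}")  # 0.018
print(f"Brief program impact:     {brief:.3f}")      # 0.042

# Setting level: a program adopted by 25% of settings and implemented at 60% fidelity.
print(f"Organizational impact: {organizational_impact(adoption=0.25, implementation=0.60):.3f}")  # 0.150
```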

3. Include external validity reporting criteria in author guidelines. Within medicine, a widely agreed upon set of criteria for reporting the results of randomized clinical trials has been developed. Known as the CONSORT criteria,50 these reporting standards have been widely adopted by leading medical journals and have helped to increase the quality of published research. As helpful as the CONSORT criteria are, they are almost exclusively concerned with issues of internal validity. Only 1 out of 22 recommendations directly addresses external validity issues51; in contrast to the other very specific and concrete criteria, it simply states “Generalizability (external validity) of the trial findings” and provides no guidance as to how this issue should be reported.

We propose the following 7 additions to the existing CONSORT criteria, which would help greatly to increase awareness of and reporting on external validity. If such criteria were widely adopted, it would greatly enhance the quality and information value not only of individual studies but also of evidence-based reviews and meta-analyses. The current state of health promotion research is so biased toward reporting on internal validity issues that it is difficult to draw any conclusions about generalization. In particular, there has been a serious lack of attention to issues of representativeness, especially at the level of settings and intervention agents.21,28,52 This becomes even more problematic when the evidence upon which meta-analyses and practice recommendations are based consists largely or solely of efficacy studies of unknown generalizability.

The 7 items that we propose below should apply to both efficacy and effectiveness studies. They would not require a great deal of additional journal space and are described below in the same format as existing CONSORT items. These criteria were recently added by the Evidence-Based Behavioral Medicine Committee of the Society of Behavioral Medicine53 to their recommendations for reporting on behavioral intervention studies.

a. State the target population to which the study intends to generalize.

b. Report the rate of exclusions, the participation rate among those eligible, and the representativeness of participants.

c. Report on methods of recruiting study settings, including exclusion rate, participation rate among those approached, and representativeness of settings studied.

d. Describe the participation rate and characteristics of those delivering the intervention. State the population of intervention agents that one would see eventually implementing the program and how the study interventionists compare with those who will eventually deliver the intervention.

e. Report the extent to which different components of the intervention are delivered (by different intervention agents) as intended in the protocol.

f. Report the specific time and costs required to deliver the intervention.

g. Report on organization-level continuance, discontinuance, or adaptation (in modified form) of the intervention once the trial is completed, and on individual-level maintenance of results.

We think that such information should be of relevance not only to researchers but also to clinicians, health directors, and decisionmakers responsible for selecting prevention and health promotion programs. In fact, we think that these parties already make implicit use of these dimensions. Making them explicit should aid reading of the literature and guide more informed program selections.

4. Increase funding for research focused on moderating variables, external validity, and robustness. The large imbalance between the extent to which health promotion investigations focus on internal validity and the extent to which they focus on external validity will not be remedied without substantial changes in funding priorities. Table 3 lists several recommendations for funding organizations that would help correct this imbalance.

TABLE 3—

Recommendations for Funding Organizations to Accelerate Transfer of Research to Practice

  • Solicit proposals that investigate interventions in multiple settings and especially settings that are representative of those to which the program is intended to generalize.

  • Fund innovative investigations of ways to enhance reach, adoption, implementation, and maintenance (which have all been de-emphasized relative to efficacy).

  • Require standard and comprehensive reporting of exclusions, participation rates, and representativeness of both participants and settings.

  • Fund cross-over designs, sequential program changes, replications, multiple baseline, and other designs in addition to randomized controlled trials that can efficiently and practically address key issues in translation.

  • Invite programs that investigate and can demonstrate quality implementation and outcomes across a wide range of intervention agents similar to those present in applied settings.

  • Require a maintenance/sustainability phase in research projects and implementation of plans to enhance institutionalization once the original research has been completed.

  • Fund competitive proposals to investigate long-term effects and sustainability of initially successful interventions.

  • Encourage innovation in intervention design and standardization in reporting on process and outcome measures at both individual and setting/intervention agent levels.

  • Request more cost-effectiveness studies and other economic evaluations that are of interest to program administrators and policymakers.

These recommendations would have 2 effects. The first would be to increase the small number of well-conducted effectiveness studies now available. The second would be to increase the relevance of efficacy studies for practice by focusing attention on moderating variables and the range of conditions, settings, intervention agents, and participants to which the results apply. Such refocused funding priorities should also increase understanding of health disparities and help reduce them, since more research would be conducted involving minorities and low-income settings. Finally, funding organizations might explicitly have reviewers rate proposals on their likely robustness or potential for widespread application and impact. This could be done by methods described in the Guide to Community Preventive Services.54

CONCLUSIONS

In summary, at least part of the reason for the slow and uneven translation of research findings into practice in the health promotion sciences is lack of attention to issues of generalization and external validity (moderating factors that potentially limit the robustness of interventions). There also needs to be a greater understanding of, and research on, setting-level social contextual factors.16,55,56 If these issues were addressed in the design and reporting of efficacy as well as effectiveness studies, it would greatly advance the current quality of research and our knowledge base. These issues are to a large extent under the control of researchers, reviewers, and funding organizations, and we have listed actions that each of these parties can take to facilitate better transfer from efficacy to effectiveness research.

Acknowledgments

This project was supported by The Robert Wood Johnson Foundation (grant 030102) and the Agency for Healthcare Research and Quality (grant HS10123).

We acknowledge the contributions of Allan Best, PhD, Brian Flay, PhD, Lisa Klesges, PhD, and Thomas M. Vogt, MD, MPH, for their helpful comments on an earlier draft of the manuscript.

Contributors

All authors produced original drafts of sections of the manuscript, extensively edited each other’s contributions, and made substantive contributions to the ideas expressed in the manuscript.

Peer Reviewed

References

1. Clark GN. Improving the transition from basic efficacy research to effectiveness studies: methodological issues and procedures. J Consult Clin Psychol. 1995;63:718–725.
2. Weisz JR, Weisz B, Donenberg GR. The lab versus the clinic: effects of child and adolescent psychotherapy. Am Psychol. 1992;47:1578–1585.
3. Briss PA, Zaza S, Papaioanou M, et al. Developing an evidence-based Guide to Community Preventive Services—methods. Prev Med. 2000;18(suppl 1):35–43.
4. Centers for Disease Control and Prevention. The Guide to Community Preventive Services. 2002. Available at: http://www.thecommunityguide.org. Accessed March 11, 2003.
5. Whitlock EP, Orleans CT, Prender N, Allan J. Evaluating primary care behavioral counseling interventions: an evidence-based approach. Am J Prev Med. 2002;22:267–284.
6. Department of Health and Human Services. Healthy People 2000. 2002. Available at: http://www.health.gov/healthypeople/data/PROGRVW/default.htm. Accessed March 11, 2003.
7. Smedley BD, Syme SL. Promoting health: intervention strategies from social and behavioral research. Am J Health Promot. 2001;15:149–166.
8. Integration of Health Behavior Counseling Into Routine Medical Care. Washington, DC: Center for the Advancement of Health; 2001.
9. Committee on Quality Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
10. Joyner L, McNeeley S, Kahn R. ADA’s provider recognition program. HMO Pract. 1997;11:168–170.
11. Glasgow RE, Strycker LA. Level of preventive practices for diabetes management: patient, physician, and office correlates in two primary care samples. Am J Prev Med. 2000;19:9–14.
12. Health Behavior Change in Managed Care: A Status Report. Washington, DC: Center for the Advancement of Health; 2000.
13. Kottke TE, Edwards BS, Hagen PT. Counseling: implementing our knowledge in a hurried and complex world. Am J Prev Med. 1999;17:295–298.
14. Woolf SH, Atkins D. The evolving role of prevention in health care: contributions of the US Preventive Services Task Force. Am J Prev Med. 2001;20:13–20.
15. Orlandi MA. Promoting health and preventing disease in health care settings: an analysis of barriers. Prev Med. 1987;16:119–130.
16. Green LW. From research to “best practices” in other settings and populations. Am J Health Behav. 2001;25:165–178.
17. Greenwald P, Cullen JW. The new emphasis in cancer control. J Natl Cancer Inst. 1985;74:543–551.
18. Flay BR. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Prev Med. 1986;15:451–474.
19. Basch CE, Sliepcevich EM, Gold RS. Avoiding type III errors in health education program evaluations. Health Educ Q. 1985;12:315–331.
20. King AC. The coming of age of behavioral research in physical activity. Ann Behav Med. 2001;23:227–228.
21. Glasgow RE, Bull SS, Gillette C, Klesges LM, Dzewaltowski DA. Behavior change intervention research in health care settings: a review of recent reports with emphasis on external validity. Am J Prev Med. 2002;23:62–69.
22. Oldenburg B, Ffrench BF, Sallis JF. Health behavior research: the quality of the evidence base. Am J Health Promot. 2000;14:253–257.
23. Hiatt RA, Rimer BK. A new strategy for cancer control research. Cancer Epidemiol Biomarkers Prev. 1999;8:957–964.
24. Kerner JF. Closing the Gap Between Discovery and Delivery. Washington, DC: National Cancer Institute; 2002.
25. Teutsch SM. A framework for assessing the effectiveness of disease and injury prevention. MMWR Recomm Rep. 1992;41(RR-3):1–12.
26. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–1327.
27. Glasgow RE, McKay HG, Piette JD, Reynolds KD. The RE-AIM framework for evaluating interventions: what can it tell us about approaches to chronic illness management? Patient Educ Couns. 2001;44:119–127.
28. Glasgow RE, Klesges LM, Dzewaltowski DA, Bull SS, Estabrooks P. The future of health behavior change research: what is needed to improve translation of research into health promotion practice? Ann Behav Med. In press.
29. Estabrooks PA, Dzewaltowski DA, Glasgow RE, Klesges LM. How well has recent literature reported on important issues related to translating school-based health promotion research into practice? J School Health. 2003;73:21–28.
30. Rogers EM. Diffusion of Innovations. 4th ed. New York, NY: Free Press; 1995.
31. Mook DG. In defense of external invalidity. Am Psychol. 1983;38:379–387.
32. Axelrod R, Cohen MD. Harnessing Complexity: Organizational Implications of a Scientific Frontier. New York, NY: Simon & Schuster; 2000.
33. Biglan A, Glasgow RE, Singer G. The need for a science of larger social units: a contextual approach. Behav Ther. 1990;21:195–215.
34. Gleser GC, Cronbach LJ, Rajaratnam N. Generalizability of scores influenced by multiple sources of variance. Psychometrika. 1965;30:1373–1385.
35. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, Mass: Houghton Mifflin; 2002.
36. Brunswik E. Representative design and probabilistic theory in functional psychology. Psychol Rev. 1955;62:217.
37. Murray DM. Statistical models appropriate for designs often used in group-randomized trials. Stat Med. 2001;20:1373–1385.
38. Cook TD, Campbell DT. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago, Ill: Rand McNally; 1979.
39. Brewer MB. Research design and issues of validity. In: Reis HT, Judd CM, eds. Handbook of Research Methods in Social and Personality Psychology. New York, NY: Cambridge University Press; 2000:3–39.
40. Oldenburg BF, Sallis JF, Ffrench ML, Owen N. Health promotion research and the diffusion and institutionalization of interventions. Health Educ Res. 1999;14:121–130.
41. Skinner CS, Campbell MK, Rimer BK, Curry S, Prochaska JO. How effective is tailored print communication? Ann Behav Med. 1999;21:290–298.
42. Kreuter MW, Strecher VJ, Glassman B. One size does not fit all: the case for tailoring print materials. Ann Behav Med. 1999;21:276–283.
43. Glasgow RE, Toobert DJ, Hampson SE, Strycker LA. Implementation, generalization, and long-term results of the “Choosing Well” diabetes self-management intervention. Patient Educ Couns. 2002;48:115–122.
44. Abrams DB, Emmons KM, Linnan L, Biener L. Smoking cessation at the workplace: conceptual and practical considerations. In: Richmond R, ed. Interventions for Smokers: An International Perspective. New York, NY: Williams & Wilkins; 1994:137–169.
45. Prochaska JO, Velicer WF, Fava JL, Rossi JS, Tsoh JY. Evaluating a population-based recruitment approach and a stage-based expert system intervention for smoking cessation. Addict Behav. 2001;26:583–602.
46. Jeffery RW. Risk behaviors and health: contrasting individual and population perspectives. Am Psychol. 1989;44:1194–1202.
47. Lichtenstein E, Glasgow RE. A pragmatic framework for smoking cessation: implications for clinical and public health programs. Psychol Addict Behav. 1997;11:142–151.
48. Elbourne DR, Campbell MK. Extending the CONSORT statement to cluster randomized trials: for discussion. Stat Med. 2001;20:489–496.
49. Kolbe LJ. Increasing the impact of school health promotion programs: emerging research perspectives. Health Educ. 1986;17:49–52.
50. Moher D, Schulz KF, Altman D. The CONSORT statement: revised recommendations for improving the quality of reports. JAMA. 2001;285:1987–1991.
51. Zaza S, Lawrence RS, Mahan CS, Fullilove M, et al. Scope and organization of the Guide to Community Preventive Services. Task Force on Community Preventive Services. Am J Prev Med. 2000;18(suppl 1):27–34.
52. Bull SS, Gillette C, Glasgow RE, Estabrooks P. Worksite health promotion research: to what extent can we generalize the results and what is needed to translate research to practice? Health Educ Behav. In press.
53. Davidson K, Goldstein M, Kaplan R, et al. Evidence-based behavioral medicine: what is it and how do we get there? Ann Behav Med. In press.
54. Green LW, Kreuter MW. Commentary on the emerging Guide to Community Preventive Services from a health promotion perspective. Am J Prev Med. 2000;18:7–9.
55. Institute of Medicine. Promoting Health: Intervention Strategies From Social and Behavioral Research. Washington, DC: National Academy Press; 2000.
56. Green LW, Kreuter MW. Health Promotion Planning: An Educational and Ecological Approach. 3rd ed. Mountain View, Calif: Mayfield Publishing Co; 1999.
