Abstract
Dissemination of evidence-based programs and policies is a critical final step in reducing the burden of cancer in the general public. Yet we have not been fully successful to date in improving clinical or public health practice by disseminating programs found to be effective in research. Therefore, research is needed into the dissemination process and outcomes, to enable better efforts in the future. This paper explores the definitions and models used for dissemination, the designs of dissemination studies, and possible research questions in dissemination research, all focused on cancer prevention and control. We hope that this paper will encourage dissemination research in our field.
Keywords: Dissemination, Implementation, Diffusion, Translation
Introduction
Dissemination is an emerging and important issue in the field of cancer prevention and control. Most agree that we do not currently do an adequate job of disseminating key findings and programs to the practitioners who need them, yet dissemination is the logical and critical final step in using a program, policy, or idea to improve cancer outcomes. One key reason that dissemination so rarely occurs is that we simply do not know how to disseminate programs effectively once they are found to be effective, owing to the lack of research in this area. Recent publications and conferences have begun to move us toward development of the science of dissemination, but we have a long way to go. Research is needed into how we can most effectively disseminate evidence-based programs.
The Cancer Prevention and Control Research Network is a national network of investigators formed, in part, to increase research on the dissemination of programs and interventions that have been found efficacious but have not been adopted as part of best practices. The purpose of this paper is to provide researchers in cancer prevention and control with a primer for engaging in active research on dissemination. First, we present a working definition of dissemination research and related constructs, followed by a brief review of models used in dissemination research and of study designs used in this field, together with their methodological challenges. We then discuss examples of possible research questions, based on our analysis of models and methods. We hope that this paper will help to clarify concepts and will also serve as a springboard for formulating future dissemination research.
Definitions for this field
Finding common ground on the definition and conceptualization of dissemination research has been one of the key stumbling blocks in this field. Reports of confusion in writing grants, designing measures, and studying the phenomenon of dissemination have stemmed at least in part from the inconsistent definitions of the labels used to describe the study of the dissemination of programs and policies. Greater agreement on definitions would help us to test competing models of dissemination as we move forward to advance this research field. Therefore, the first activity that we engaged in was to compile the definitions that have been used to describe the process of dissemination.
Table 1 presents some definitions that have been articulated in the literature. The process of moving tested public health programs and policies toward practice has been variously termed dissemination, translation, implementation, and diffusion, with little agreement on these terms. A recent RFA from the National Cancer Institute defines dissemination as “targeted distribution of information and intervention materials to a specific public health or clinical practice audience”. A related construct, implementation, is defined in this RFA as “use of strategies to adopt and integrate evidence-based health interventions and change practice patterns within specific settings”. The difference between these definitions seems to lie in the level of the product to be moved into practice (i.e., specific materials versus an entire intervention). These definitions do not address any conceptually different issues and do not indicate any particular process or engagement by the targeted practice setting. A recent RFA from the Centers for Disease Control and Prevention adds to the issue by providing different and more detailed definitions of both dissemination and implementation, as well as definitions of the related constructs of translation and diffusion, each overlapping with previous definitions in content. Other investigators represented in Table 1 have offered specific definitions of all of these constructs, many of them overlapping with the federal definitions, but none of them specific to cancer prevention and control. Other differences include that the term dissemination sometimes focuses on clinical practice, sometimes on public health practice, and in some settings on both. Dissemination can engage the target audience in an active, participatory manner, or not. Some citations promote dissemination as the key overarching concept, while others see dissemination as one piece of the process of implementing evidence-based practice. The CDC RFA definitions, for example, focus more on the process of moving evidence into the public's use, while the NCI RFA definitions focus on the outcome of the process (i.e., having evidence-based programs in place). Clearly, this field needs to sharpen its definitions of key constructs to enable agreement among scientists at all levels of involvement.
Table 1.
Source | Dissemination | Translation | Implementation | Diffusion
---|---|---|---|---
NCI RFA | The targeted distribution of information and intervention materials to a specific public health or clinical practice audience. The intent is to spread knowledge and the associated evidence-based interventions. | | The use of strategies to adopt and integrate evidence-based health interventions and change practice patterns within specific settings. |
CDC RFA | The systematic study of how the targeted distribution of information and intervention materials to a specific public health audience can be successfully executed so that increased spread of knowledge about the evidence-based public health interventions achieves greater use and impact of the intervention. | The sequence of events (i.e., process) in which a proven scientific discovery (i.e., evidence-based public health intervention) is successfully institutionalized (i.e., seamlessly integrated into established practice and policy). | The systematic study of how a specific set of activities and designed strategies are used to successfully integrate an evidence-based public health intervention within specific settings (e.g., primary care clinic, community center, school). | The systematic study of the factors necessary for successful adoption by stakeholders and the targeted population of an evidence-based intervention which results in widespread use, and specifically includes the uptake of new practices or the penetration of broad-scale recommendations through dissemination and implementation efforts, marketing, laws and regulations, systems research, and policies.
Natl implementation research group | | | The process of putting a defined practice or program into practical effect; to pursue to a conclusion |
Lomas 1993 | The targeted distribution of information and intervention materials to a specific public health or clinical practice audience | | The use of strategies to adopt and integrate evidence-based health interventions and change practice patterns within specific settings |
Curry 2000 | Effective dissemination is a push-pull process: those who adopt innovations must want them or be receptive (pull), while there is systematic effort to help adopters implement the innovation (push). Tacit knowledge from experience drives pull; explicit knowledge from research drives push. | | |
Examples of the importance of consistent definitions can be found in the cancer control literature. For example, consider a program designed to increase mammography rates in a target population by increasing the distribution and use of a tested flyer that informs patients of the benefits of getting a mammogram and reduces their perceptions of its barriers. Research on this issue would be conducted very differently under two different definitions of dissemination, the Lomas 1991 (1) definition and the Curry 2000 definition. The Lomas 1991 definition calls for a relatively reactive process of targeted distribution, while the Curry 2000 definition recognizes the reciprocity of the pull of consumers for a specific product or activity together with the push of the system to increase interest in, and ultimately use of, the product by the target population. The overall outcomes of a research project might be similar under the two definitions, but the process of dissemination would be very different, and therefore differentially successful, depending on the definition applied. The process according to Lomas might be to monitor the uptake of materials sitting in a waiting room in a clinical setting, while under the Curry definition it might be to distribute the same materials to each patient's home, with active encouragement to use the tested materials. These differences in definition complicate the design of a research project and comparison across dissemination research studies. For now, we need to identify a workable, simple definition of dissemination research, useful for most applications in cancer control, that can be refined and tested in future research projects and adapted to the specific setting and target of dissemination. We recommend for now using the following definition of dissemination research:
Understanding the process of moving evidence-based public health and clinical innovations into practice settings.
This definition allows for research into the process of dissemination and acknowledges that improved practice is the ultimate goal of these activities. It also allows both for the more passive diffusion of a new idea into a system and for the more actively pushed and encouraged acceptance of a program into a system where there is less pull. Other issues to be considered when defining dissemination research include the following:
What is the larger field as described in the definition above, and what shall we call it?
What is the relationship between dissemination and implementation? Are they competing definitions, or do they describe complementary elements of some larger field?
Does disseminating a scientific finding to practice settings use the same processes as disseminating a program? Is dissemination of findings essentially policy, or is it something else altogether?
How much of dissemination is about the pull of practice needs, and how much occurs due to the push of disseminators? Are these different views of the same process, or different processes? And does whether dissemination occurs following push or pull determine the effectiveness of the effort?
There is no agreement regarding any of this among scientists, practitioners, or funders. Such agreement could make dissemination research easier and less intimidating to grapple with, as well as easier to test scientifically. We should all look for ways to harmonize in the next phase of research by trying to formulate common definitions and questions and looking for common concepts and methods. While we do that, it is important to be sensitive to target audiences, including the funding audiences, as well as the practice audiences. Continued work here at both the domestic and international levels is clearly needed.
What is the necessary evidence base for moving to dissemination research?
Dissemination research commences once there is sufficient evidence that cancer prevention and control interventions work, with the ultimate aim of reducing cancer risk and burden within the population. At what point is the current best evidence about a specific cancer prevention intervention enough?(2) There is some debate in the field regarding what constitutes sufficient evidence to move to dissemination, with alternative paths leading from the research evidence base to practice.(3, 4) Understanding this debate can help to inform dissemination research, the focus of this paper.
The predominant guiding paradigm for intervention research is a linear framework, proceeding along a continuum in which each research phase sets the stage for a future step. In their classic text, Greenwald and Cullen(5) applied this linear framework to cancer prevention research, following several key steps: (1) basic research designed to identify the nature of the problem and generate testable hypotheses; (2) methods development, including development of assessment and intervention methods; (3) efficacy studies, designed to test interventions under tightly controlled conditions(6-14); (4) effectiveness studies, taking the next step of testing interventions under more “real-world” conditions(4, 15); and (5) demonstration and implementation, including dissemination research.(5) Others have described the applicability of this linear framework to other disease outcomes, including, for example, cardiovascular disease(16) and mental health.(17)
In general, within the context of this linear framework it is understood that research may not always proceed in a completely linear sequence, but rather may include feedback loops circling back to previous steps to address newly defined research questions.(5, 6) This may be especially true for dissemination research: if a program cannot be moved into public health or clinical practice, we may need to consider the development of other programs that have a higher likelihood of spread.
Efficacy and Effectiveness Trials leading to Dissemination
Despite the considerable advantages of scientific rigor provided by the sequencing of efficacy and effectiveness trials, some have raised concerns that efficacy and effectiveness trials may not yield the “best candidate” interventions for dissemination.(1, 10, 15, 18, 19) Interventions tested in efficacy trials, for example, are designed to maximize potential effectiveness, and thus may be more intensive, costly, and complex than interventions that can feasibly be disseminated on a broad scale. The required standardization of the intervention in efficacy studies may actually limit the intervention's effectiveness by failing to incorporate participant input and to adapt the intervention to the context and needs of the setting.(20) The working linear framework connotes a uni-directional flow, with the tested intervention being disseminated to the community, when in fact a bi-directional arrow may better illustrate the need for researchers and intervention designers to better understand the world of practice.(21, 22) A further limitation to the generalizability of randomized trials is the requirement that study participants agree to random assignment. Whether individuals or groups (e.g., schools, clinics, worksites), participants in such studies may not be representative of broader populations at diverse stages of readiness for change and receptivity to the intervention. Thus, there are increasing calls for research methods emphasizing generalizability and feasibility of interventions, and for the inclusion of contextual information such as the representativeness of the population and the reach of the intervention, as well as the adaptations needed.(4, 23, 24)
One strategy for addressing these concerns is a movement away from the linear model, with its assumption that knowledge is the product of a linear progression through research phases, toward a systems framework.(24) Systems models of translation begin with an integration of knowledge, an understanding of the complexity of implementation, and the assumption that dissemination is contextual and embedded in relationships and, in turn, in organizations.(25-27) The theories that underlie dissemination, described below, certainly need attention of this type.
Knowledge synthesis – Making sense of the evidence base
One question for this field is how to make sense of individual studies that test single intervention programs or policies. Single studies provide an important component of the evidence base for dissemination, but are not generally considered sufficient evidence for broad-scale dissemination. The peer review system provides a means of tracking individual studies, and Cancer Control PLANET and RTIPs make the findings and interventions of these studies readily accessible. There is a range of tools for synthesizing evidence across studies.(1, 25, 26, 28-30) For example, meta-analysis provides a quantitative approach to systematically cull and integrate the findings of multiple individual studies, with the goal of identifying consistent patterns, or the lack of agreement, across the studies.
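To make the arithmetic of such a synthesis concrete, the sketch below pools hypothetical effect estimates from four studies using a standard fixed-effect, inverse-variance approach; all numbers are illustrative assumptions, not results from any study cited here.

```python
# Illustrative fixed-effect, inverse-variance meta-analysis.
# All effect sizes and standard errors below are hypothetical.
import math

effects = [0.42, 0.18, 0.35, 0.05]       # e.g., per-study log odds ratios
std_errors = [0.20, 0.12, 0.25, 0.15]    # their standard errors

weights = [1 / se ** 2 for se in std_errors]          # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")
```

Precise studies (small standard errors) dominate the pooled estimate, which is what allows a synthesis to reveal a consistent pattern that no single study establishes on its own.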
A leading public health synthesis method is the process leading to the Guide to Community Preventive Services. The aim is to identify consistent findings from a series of well-designed and rigorously implemented studies. Conclusions about the available evidence are based on the number of studies, the types of study designs used, the consistency of the findings, and the effect sizes found, in combination with expert opinion.(31) The systematic nature of the review for the Community Guide provides the basis for the “gold standard” of evidence, based on rigorous scientific studies.(32) Other reviews take into account a broader definition of evidence as they consider whether something “works”, such as the Intervention MICA(33) and the E-Roadmap to Evidence-Based Public Health Practice.(34) These reviews illustrate an increasing movement toward expanding the evidence base to include non-randomized studies, and a growing recognition of the roles that such studies can play in developing an integrated picture of the existing evidence.(30, 35) Indeed, some have noted that exclusion of these diverse study designs, including quasi-experimental designs, non-randomized designs, and natural experiments, may bias conclusions toward the types of interventions and settings that are more readily tested through randomized trials, and thus lead to less effective dissemination of these interventions.(30)
What actually gets disseminated?
Adapting previously tested interventions for a new target population before testing the dissemination process is a matter of some controversy. Green(22) recommends that we replace the expectation that health behavior research will produce “best practices” with something more akin to “best practices for the process of planning for most appropriate interventions for the population and setting.” Tested interventions need to be adapted, and often researched, to assure that there is an appropriate fit between the tested methods and the context, setting, and the population's circumstances.(21) Both NCI and CDC have developed a systematic process for adaptation of tested interventions.(36) This process begins with the identification of both core and adaptive or “key” elements of the intervention, as has also been noted by others.(37-42) Core elements are those intervention features that must be replicated to maintain the integrity of the intervention as it is transferred to new settings; as CDC defines them, they are integral to the internal logic of the intervention and required for its main effects. Key or adaptive characteristics are those features of an intervention that can be tailored to the organizational, social, and economic realities of the new setting without diluting the intervention's effectiveness,(38) that is, important but not essential features of the intervention methods.(43) This process clearly requires research that engages the community, systematically assesses the context, needs, and resources, and plans programs in response to those needs.(22) Understanding the process of adapting the intervention is one needed component of dissemination research.
The dissemination research process also requires evidence on the translation of the research into scalable and sustainable interventions. According to Kottke and Pronk,(44) scalability is the ability to offer the adapted intervention to all members of the intended audience, and sustainability is the ability to support, maintain, and enhance the application over time. Thus the adaptation process may additionally include incorporating strategies to optimize the use of program components, for example through the addition of features that situate the intervention within existing work flows and processes. Research into this process includes a comparison of the environment and setting in which the intervention has been tested with those to which it will be adapted, in order to identify the types of adaptations and measures needed.(37) The community will additionally be interested in evidence of demand for an intervention, to assure that the intervention strategies are feasible, acceptable, and compatible with the lifestyle and social environment of the audience.(45)
In summary, there is a need for a balance in defining interventions that are ready for dissemination research. As a field, we need to conduct dissemination research on interventions that have been rigorously tested to assure that the observed changes in health behaviors can truly be attributed to the intervention. We must also attend to the range of study designs and study populations that allow for maximum external validity. We must proceed on the basis of solid evidence, yet we are still defining that evidence base.
Available theoretical models for dissemination research
One of the key elements in designing a dissemination research project is the use of theory in designing the intervention, process measures, and outcome measures. A theory should: 1) accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and 2) make definite predictions about the results of future observations.(46) In comparison to models or frameworks, theories are explanatory as well as descriptive, while models are only descriptive.(47) Rogers' Diffusion of Innovations Theory(48) remains the fundamental model describing the process of adoption of an innovation and the key factors that must be considered in attempting to disseminate an innovation. While not empirically tested prospectively, this theoretical model has dominated the published literature in this area and is characterized by a focus on the adoption process and on use of interventions/innovations as outcomes. The factors described as central to the diffusion of interventions are characteristics of the innovation itself; properties of the communication channel through which the intervention is disseminated; time, from no use of the intervention to full adoption across the population, together with the different activities that occur across that span; and characteristics of the social system in which the innovation is being disseminated. These properties have been shown in various fields to be highly predictive of successful adoption.(48) For example, an intervention/innovation is more likely to be adopted if it is perceived as (1) superior to the existing practice it replaces (relative or perceived advantage), (2) consistent with the intended adopters' values, norms, and perceived needs and/or with organizational or professional norms, values, and ways of working (compatibility), (3) simple to use based on practical experience and/or demonstration (complexity), (4) readily implemented in small steps and stages, (5) having observable benefits for various audiences and within acceptable timeframes (observability), (6) amenable to adaptation, refinement, or other modification to suit the needs of potential adopters (reinvention), and (7) open to experimentation within an organizational structure that bases change on results (trialability).(48, 49) Even though the evidence supporting these attributes is strong, the attributes are neither stable features of the intervention nor sure determinants of its adoption. The interactions among the intervention, the intended adopter(s), and a particular context determine adoption,(49) along with individuals' perceptions of an intervention and its attributes.(48) In addition to these key elements of diffusion, Rogers describes five stages of the innovation at an organizational level: agenda setting; matching the problem to the relevant innovation; re-defining and restructuring both the innovation and the organizational structure; clarifying the relationship between the organization and the innovation or intervention; and routinizing, or making the innovation an ongoing part of the organization's activities.
Other models might be used in the definition and study of the dissemination process, but to date have received no empirical reports: a map of the program adaptation process,(43) the systems model of translational research,(50) community organizing models,(51) and social marketing theory.(52) Future research could test and refine these potentially useful theories as applied to dissemination research. In addition, some frameworks, such as Glasgow's RE-AIM framework,(53) Kerner's discovery-to-delivery continuum,(54) and Orleans' push-pull synergistic framework,(55) are not technically dissemination theories but have certainly been used to describe the dissemination process and to outline process evaluation in this field. In particular, RE-AIM provides a useful framework for evaluating the dissemination process. The RE-AIM evaluative framework includes five elements of dissemination research projects: reach, effectiveness, adoption, implementation, and maintenance. Reach is defined as the percentage of potentially eligible individuals in the target population who receive the intervention, and how representative they are of the population from which they come. Effectiveness refers to the intended results of the intervention and its possible consequences on the primary outcomes of interest. Adoption measures the uptake of the intervention(s) within targeted settings and among targeted providers within settings, while implementation describes the quantity and quality of delivery of the intervention components. Maintenance describes the long-term results of the program (e.g., at the individual or population level of outcomes) and/or the institutionalization of the program in the setting as a part of usual practice. The application of RE-AIM as an evaluative framework helps assess external validity (reach and adoption) as well as internal validity (efficacy and implementation).(56)
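As a simple illustration of how these elements translate into quantities, the sketch below computes reach, adoption, and a crude implementation-fidelity proportion from hypothetical counts; every number and variable name is our assumption, not data from any study discussed here.

```python
# Hypothetical sketch of RE-AIM proportions for a screening-promotion program.
# Every count below is an illustrative assumption.

def proportion(part: int, whole: int) -> float:
    return part / whole if whole else 0.0

eligible_patients = 5000      # target population across participating clinics
patients_reached = 1800       # patients who actually received the intervention
clinics_invited = 40          # settings invited to adopt the program
clinics_adopting = 22         # settings that took the program up
components_planned = 6        # intervention components specified by protocol
components_delivered = 5      # components delivered on average per clinic

reach = proportion(patients_reached, eligible_patients)
adoption = proportion(clinics_adopting, clinics_invited)
implementation = proportion(components_delivered, components_planned)

print(f"Reach: {reach:.0%}")                    # 36% of eligible patients
print(f"Adoption: {adoption:.0%}")              # 55% of invited settings
print(f"Implementation: {implementation:.0%}")  # 83% of planned components
```

Even this toy example shows how RE-AIM separates a program that works well in a few willing clinics (high implementation, low adoption) from one that spreads widely but is delivered poorly.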
Where does this leave us in the area of theoretical models for dissemination research? None of these models was developed specifically for dissemination research, an important gap, as none of them completely describes the continuing challenges and issues encountered when one conducts a study in this area. We feel that when designing a dissemination research project in cancer control, one must critically assess the appropriateness of any new model for the proposed research. Further, one must integrate the components of various models as appropriate for a particular type of dissemination research, with a focus on new model development and testing. Consideration of systems thinking is critical to this theory development, as much of dissemination goes on at the systems level, and more traditional individualistic perspectives will not be fully useful. This is an area of scientific inquiry that should be pursued by all researchers in this field.
Methodological considerations in this field
As noted earlier, dissemination and implementation research can be conducted using experimental, quasi-experimental, or non-experimental research designs. Several recently published articles describe the strengths, limitations, and tradeoffs of the many study designs that investigators could use in dissemination and implementation research.(57) We do not intend to review those discussions here. Instead, we wish to raise four points that previous discussions either have mentioned only in passing or have not identified as potential concerns.
Selection of study design
First is the issue of selecting study designs. Much of the recent discussion of research design has focused on the relative merits of various study design options for evaluating dissemination and implementation strategies. For example, Mercer and her colleagues(57) provide a thorough review of the relative merits of several experimental and quasi-experimental designs for conducting dissemination research. However, as they and others acknowledge, study designs for assessing causality, such as a randomized trial with pre and post measures, are often not the best designs for investigating non-causal questions. Other methods, such as qualitative research methods, computer simulation, survey research methods, and cost-effectiveness methods, can be useful for answering important descriptive and process-related questions (a brief simulation sketch follows the list below), such as:
Who is the appropriate target or audience for dissemination?
What implementation needs exist, and how do those needs vary across contexts?
How receptive are practitioners to various dissemination and implementation strategies?
How should dissemination and implementation strategies be designed?
What does dissemination and implementation look like in practice?
How feasible are dissemination and implementation strategies in particular contexts?
How much does a strategy cost, and how do those costs vary across contexts?
What would it take to sustain an effective strategy?
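To illustrate the computer simulation option named above, the sketch below is a minimal Bass-style diffusion model of how quickly clinics might take up an evidence-based program; the model form and all parameter values are our illustrative assumptions, not methods or findings from this paper.

```python
# Hypothetical sketch: a simple Bass-style model of program adoption by
# clinics over time. All parameter values are illustrative assumptions.

def simulate_adoption(n_clinics=100, p=0.03, q=0.38, years=10):
    """p: adoption pressure from external dissemination efforts ('push');
    q: adoption pressure from peer imitation ('pull')."""
    adopters = 0.0
    trajectory = []
    for year in range(1, years + 1):
        not_yet = n_clinics - adopters
        new = (p + q * adopters / n_clinics) * not_yet  # Bass adoption hazard
        adopters += new
        trajectory.append((year, round(adopters)))
    return trajectory

for year, total in simulate_adoption():
    print(f"Year {year}: {total} of 100 clinics have adopted")
```

Varying p and q lets an investigator explore, before any costly trial, how sensitive the adoption curve is to push-driven versus pull-driven spread.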
Given the current state of knowledge, investigation of such non-causal questions would make a significant contribution. While experimental designs remain the gold standard for assessing causality, the choice of research design for dissemination research should be driven by the research question itself.
Matching study design to setting
Second, the knowledge that can be gained from efficacy or effectiveness studies of specific dissemination and implementation strategies depends heavily on the careful matching of the strategy and the study setting. For example, several systematic reviews find that strategies for disseminating and implementing clinical guidelines exhibit modest and often mixed effects in terms of guideline use and improvements in care. The general conclusion one can draw from this substantial body of research is that no single strategy or combination of strategies works all of the time; rather, strategies work best when they match the determinants of the problem. To illustrate, strategies involving reminders are likely to be effective if the principal reason clinicians in a given setting are not using guidelines is that they lack the necessary cues-to-action. However, if the principal reason is that clinicians do not believe the guidelines are appropriate or, alternatively, they believe in the guidelines but cannot act on them because of work load or work flow, then strategies involving reminders are likely to produce little or no effect. In conducting dissemination research, therefore, investigators must carefully select intervention sites to ensure that a good match exists between the dissemination strategy or strategies and the principal determinants of the problem in the local setting.
Choice of dependent variable
Third, the choice of dependent variables in dissemination and implementation research depends on how one conceptually defines dissemination and implementation. As stated in Table 1, the NCI RFA for dissemination and implementation research defines dissemination as the targeted distribution of information and intervention materials to a specific public health or clinical practice audience. The aim of dissemination, according to this view, is to spread knowledge and the associated evidence-based interventions to practice settings. Given this definition, the proximal outcomes for gauging the effectiveness of a dissemination effort might include increased awareness and understanding of the intervention being disseminated, increased willingness to engage with that intervention, and increased behavioral capability to apply the chosen intervention activities in the specific setting. Behavior change resulting from use of that intervention is a more distal outcome of a dissemination research project. Whether dissemination efforts produce behavior change depends on additional, situational factors such as resource availability, time availability, competing priorities, and coordination with others. Likewise, this RFA defines implementation as the use of strategies to adopt and integrate evidence-based health interventions within specific settings. According to this view, the aim of implementation is to put into practice new ideas, technologies, policies, or practices. Given this definition, the proximal outcomes for gauging the effectiveness of an implementation effort might include the consistency, quality, and appropriateness of initial or early use of the new idea, technology, policy, or practice. The benefits that result from initial or early use (e.g., reduced morbidity or mortality) are less proximal outcomes of implementation. Whether implementation produces anticipated benefits depends on additional factors, such as whether the efficacy of the new idea, technology, policy, or practice remains intact following implementation.
As we noted earlier, these federal definitions of dissemination and implementation are not the only ones available. Investigators employing alternative definitions might hold differing views about appropriate primary and secondary dependent variables in dissemination and implementation research. Regardless of the definition used, however, careful consideration must be given to the selection of the dependent variable.
Measurement issues
Finally, previous discussions of research designs appropriate for dissemination and implementation research(57-63) have focused almost entirely on the relative importance of internal and external validity and the trade-offs inherent in specific study designs with respect to these two concerns. The construct validity of measures, by contrast, has garnered much less attention, despite the fact that poor construct validity can undermine a study's contribution to both theory and practice. Construct validity refers to the degree to which inferences can legitimately be made from operational definitions to theoretical constructs.(64) Construct validity can be viewed as a “labeling” issue: do the concrete operations in a given study (e.g., the actual programs, interventions, or measures that get used) fully and faithfully reflect the concept, idea, or theory that they purportedly represent or manifest? Some threats to construct validity, such as poor conceptualization of constructs or over-generalizing from limited levels (i.e., doses or intensities) of constructs, can be addressed by careful labeling and description of dissemination and implementation strategies and their components. Other construct validity threats can be partially controlled through research design choices. For example, experimenter expectancies can be controlled by assigning data collection and data analysis tasks to different research personnel. Likewise, compensatory rivalry (i.e., control sites trying harder) and resentful demoralization (i.e., control sites giving up) can be controlled by offering control sites a delayed or alternative intervention. Finally, contamination can be controlled by spatially or temporally separating control and intervention sites. Other construct validity threats, however, are difficult to address in dissemination and implementation research. For example, construct confounding can occur when intervention sites or study participants engage in co-intervention, that is, when they engage in additional activities that support dissemination and implementation that the investigator does not know about or intend. The stronger the intervention sites' or study participants' commitment to the success of the dissemination or implementation effort, the stronger the temptation they face to “stack the deck” in favor of a positive outcome, even if it means deviating from the study protocol. Similarly, dissemination and implementation may be particularly susceptible to novelty and disruption effects, as dissemination and implementation efforts often introduce something new to intervention sites or study participants. Bracht and Glass (1968) suggested that introducing an innovation can breed excitement, energy, and enthusiasm that contribute to success, especially if little previous innovation has occurred. Alternatively, introducing an innovation can provoke resistance, especially if it disrupts existing routines. In either case, novelty or disruption represents a construct confounder that renders the results of a study more difficult to interpret, even when the plausibility of causal inference seems strong. Anticipating and assessing these types of effects is the best strategy, as they then become part of the scientific record of what happened in a dissemination research project.
Possible Research Questions to Move the Field of Dissemination Research Forward
In general, systematic research on the dissemination of public health promotion programs is in its infancy. As noted above, theories and models need to guide such research; however, relatively few comprehensive models exist. Research questions abound, in terms of understanding both the process and the outcomes of dissemination. If we think about dissemination research as understanding the way that new innovations spread into society (both public health and clinical practice), then numerous unanswered questions can be posed. One way of defining relevant research questions is to examine gaps in research using four categories taken from Rogers' theoretical model: (1) characteristics of the innovation, (2) properties of the communication channel, (3) activities over time, and (4) the environment/system in which the dissemination is to occur. We have selected three dissemination research foci to use as examples, presented in Table 2.
Table 2. Applying diffusion theory to cancer prevention and control. (Columns 2 through 5 correspond to the elements of diffusion theory.)

Research question | The innovation | Communication channel | Time | Social system
---|---|---|---|---
How can we increase use of Community Guide recommendations for increasing mammography screening in a clinical setting? | Adding tracking and feedback systems to clinical settings to improve referral to screening for age-appropriate women | Electronic medical record, linked to provider office and patient feedback sheet, that cues clinical interaction about mammography referral | Time to implement support system, period between referral and screening, time to follow-up for abnormal finding | Clinic needing the system, and a provider-patient relationship that uses the information from the system
How can we increase use of previously tested mass media spots that encourage consumption of 5 servings of fruits and vegetables per day? | Previously tested mass media messages and products (PSAs and billboard/busboard signage) found to promote consumption of 5 servings of fruits and vegetables daily | Community advocacy groups that engage commercial entities in considering ethical business practices, including free health promotion media spots | Phase 1 is the engagement process between advocacy groups and commercial entities, and Phase 2 is the period of media placement | The commercial business community to be engaged in supporting the media spots, and the population in the neighborhoods where the spots will be found
How can we increase HPV vaccination in grade-school girls? | Newly available vaccination for HPV | Health care providers in public health clinics | Training period for health care providers, and time to vaccinate all eligible patients at clinic | Population receiving the vaccine: grade-school girls and their parents or guardians
First, understanding the characteristics of the innovation is a key first step toward promoting its dissemination. Marketers have tremendous expertise in identifying unmet needs in their audiences and finding effective ways to create awareness and adoption by branding and promoting new products, but we know less about how to accomplish this in the public health arena. It cannot be overemphasized that careful formative research is critical in order to determine the needs and preferences of each target audience. For example, studies could examine views about the acceptability of new technological advances in different populations. The innovations in Table 2 differ in their familiarity: consuming fruits and vegetables is a common occurrence in most people's diets, although not at the level needed to promote health. Awareness of HPV risk and of potential vaccination for risk reduction, however, is likely to be low in situations where awareness of other sexually transmitted diseases is low and/or where such diseases are not an accepted topic of discussion.
Little research has rigorously examined specific intervention characteristics (e.g., trialability, flexibility, relative advantage) to determine which ones are most operational in different types of organizations and/or with different types of interventions. For example, flexibility may be very important in disseminating programs aimed at organizations with a high degree of autonomy or decentralized authority, such as churches. However, flexibility might be less important, or even counter-productive, in highly centralized organizations where fidelity to protocols or regulations is paramount, for example in the dissemination of an evidence-based clinical practice in a hospital system.
Selecting the appropriate communication channel to transmit the package for dissemination is an important issue. For example, influencing provider recommendation behaviors has been shown to be difficult, partly due to the lack of an acceptable channel for disseminating new ideas. Delivering knowledge and support for using evidence-based screening promotion programs to providers, for example, could occur through continuing medical education sessions, through a web-based portal for provider education and support, or through advice from a trusted colleague, as in academic detailing. Or, as in many programs, a combination of these channels could be tested. The same channels would likely not work to disseminate and promote the use of a new vaccine for HPV risk reduction; for this research question, the providers themselves could become the channel of dissemination, with appropriate support to patients.
Time is the third key factor in Rogers' model, and it contains many elements. Some research questions relevant to time might focus on how quickly systems and environments promote use of a new program or benefit (e.g., the relative impact of top-down directives, participatory strategies, and interpersonal communication via viral marketing or lay health advisors), or on the relative impact of these factors in given subpopulations of workplaces or health care settings. Early adopters, those who use innovations early in the dissemination process, are likely to differ from later adopters of the innovation. Therefore, motivations tailored to adopter characteristics at different stages over time might be tested. For example, early parental adopters of the HPV vaccine might hold strong beliefs that the health care system is trustworthy, while late adopters might express mistrust in the health care system as a solution for preventing disease. In addition, how much support over time (e.g., technical, peer, supervisory) is needed to achieve adequate implementation once interventions move from researcher control to community control over program implementation and fidelity is a key issue. What system factors promote or hinder implementation (e.g., program novelty, personal commitment to the program/outcomes, personnel resistance/overwork, competing priorities, lack of belief in benefits), and how does this vary across systems? For example, difficulties in using evidence-based programs in clinical settings might be due to lack of experience with electronic medical records, which may be innovations themselves and may come to publicly funded community clinics late in the process. Additional research questions might focus on later stages, such as the factors necessary to establish program maintenance and, indeed, what defines maintenance.
In addition, much research is needed to understand the characteristics of groups that exert their influence at different points in the dissemination process. We know, for example, that the role of program champions and opinion leaders can be critical in disseminating evidence-based programs and information in both clinical and community contexts. However, less is known regarding the specific contexts and settings in which opinion leaders may be most effective. In addition, who are these champions/pioneers, and how do we best identify and train them? In a recent study, Grimshaw and colleagues surveyed professional groups in the UK National Health Service to determine factors affecting the effectiveness of opinion leaders, such as the extent of social networks and the types of identification processes.(65) In another example, Valente and Pumpuang(66) recently reviewed over 200 studies utilizing different techniques and methods of identifying opinion leaders, in order to study factors such as the relative effectiveness and convergence of these methods in identifying individuals to serve in this capacity. In addition to identification of these groups, we need to know the impact of utilizing such champions/leaders in terms of their personal and professional growth, reputation and status, role conflict and burden, or other positive or negative consequences.
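One common family of identification methods among those reviewed by Valente and Pumpuang is sociometric: ask members of a practice community whom they turn to for advice, then rank members by nominations received. The sketch below illustrates that idea on a made-up advice network; the names, edges, and use of the networkx package are our assumptions, not material from the paper.

```python
# Hypothetical sketch: sociometric identification of opinion leaders by
# counting advice nominations (in-degree) in a made-up provider network.
import networkx as nx  # assumes the networkx package is installed

# Each edge points from an advice-seeker to the colleague they consult.
advice_network = nx.DiGraph([
    ("Alvarez", "Chen"), ("Baker", "Chen"), ("Davis", "Chen"),
    ("Chen", "Evans"), ("Davis", "Evans"), ("Baker", "Alvarez"),
])

# Providers nominated most often are candidate opinion leaders.
ranked = sorted(advice_network.in_degree(), key=lambda pair: -pair[1])
for provider, nominations in ranked:
    print(f"{provider}: nominated by {nominations} colleague(s)")
```

In practice such nomination data would come from a survey of the clinic or community, and the highest-ranked individuals would be recruited and trained as champions.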
Finally, characteristics of the social system that is the intended target of the dissemination process need research attention. For example, disseminating a mass media program on fruit and vegetable consumption to a large geographic area may not be the most efficient method of reaching the target audience, due to lack of exposure to the media outlet, such as a billboard on the highway. Differences among clinics' resources and support will, in part, determine the extent to which evidence-based programs can be incorporated into standard practice.
It is important to differentiate between research questions that truly focus on dissemination research and other types of research questions that are important and relevant but do not speak specifically to dissemination. For example, a study that seeks to determine whether an intervention with proven efficacy in one population can be translated to another population or setting may be better characterized as a replication study, not a dissemination study. As the field matures and agrees upon needed research strategies, we will likely be able to make these distinctions more easily.
Conclusions
One thing is clear: We need more high quality research into the dissemination of cancer prevention and control programs. For every question that we have answered, several more are identified and need addressing. This is an exciting time in the field, as we have the potential to see much of the past 30 years' worth of work put into practice over the next 30, if we guide it wisely with research into the guidance process.
One improvement that we recommend for the field is a focus on common phases or stages, on key common process and outcome variables, and on the common theoretical models that drive all of these choices. This will give us the ability to accept or reject these strategies when we can identify multiple tests of the same measures, channels, and phases. Research questions are varied and should include process and component variables appropriate to the setting. Greater acceptance of alternative designs is another needed improvement in the field, in that the randomized trial is not always feasible or practical.
The previous 30 years have taught us that dissemination does not just happen if we wait for it. New information is often needed to make it happen. Let us consider this a call to action, to gather that new information in support of making it happen.
Acknowledgments
We gratefully acknowledge the support from the following grants: DK56350, CA124415, CA11464, CA124394, CA108663, DP000064, a grant from Liberty Mutual, DP00059-04, and CA124400 in completing this work.
Footnotes
The final publication is available at www.springerlink.com
References
- 1.Lomas J. Words without action? The production, dissemination, and impact of consensus recommendations. Annu Rev Public Health. 1991;12:41–65. doi: 10.1146/annurev.pu.12.050191.000353. [DOI] [PubMed] [Google Scholar]
- 2.Haynes B, Haines A. Barriers and bridges to evidence based clinical practice. British Medical Journal. 1998;317(7153):273–276. doi: 10.1136/bmj.317.7153.273. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.McQueen DV. Strengthening the evidence base for health promotion. Health Promotion International. 2001;16(3):261–268. doi: 10.1093/heapro/16.3.261. [DOI] [PubMed] [Google Scholar]
- 4.Glasgow RE, Emmons KM. How can we increase translation of research into practice? Types of evidence needed. Annual Reviews in Public Health. 2007;28:413–433. doi: 10.1146/annurev.publhealth.28.021406.144145. [DOI] [PubMed] [Google Scholar]
- 5.Greenwald P, Cullen JW. A scientific approach to cancer control. Cancer. 1984;25:236–244. doi: 10.3322/canjclin.34.6.328. [DOI] [PubMed] [Google Scholar]
- 6.Flay BR. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Preventive Medicine. 1986;15(5):451–474. doi: 10.1016/0091-7435(86)90024-1. [DOI] [PubMed] [Google Scholar]
- 7.Cook TD. Validity and social experimentation. In: Bickman L, editor. Toward a practical theory of external validity. Thousand Oaks, CA: Sage; 2000. pp. 3–43. [Google Scholar]
- 8.Rabin BA, Brownson RC, Kerner JF, Glasgow RE. Methodologic challenges in disseminating evidence-based interventions to promote physical activity. Am J Prev Med. 2006;31(4 Suppl):S24–34. doi: 10.1016/j.amepre.2006.06.009. [DOI] [PubMed] [Google Scholar]
- 9.Koepsell TD, Wagner EH, Cheadle AC, Patrick DL, Martin DC, Diehr PH, et al. Selected methodological issues in evaluating community-based health promotion and disease prevention programs. Annual Review of Public Health. 1992;13:31–57. doi: 10.1146/annurev.pu.13.050192.000335. [DOI] [PubMed] [Google Scholar]
- 10.Susser M. Editorial: The tribulations of trials - interventions in communities. American Journal of Public Health. 1995;85:156–158. doi: 10.2105/ajph.85.2.156. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Goldenhar LM, LaMontagne AD, Katz T, Heaney C, Landsbergis P. The intervention research process in occupational safety and health: An overview for the National Occupational Research Agenda Intervention Effectiveness Research Team. Journal of Occupational and Environmental Medicine. 2001;43(7):616–622. doi: 10.1097/00043764-200107000-00008. [DOI] [PubMed] [Google Scholar]
- 12.Murray DM. Design and analysis of group randomized trials. New York, N.Y.: Oxford University Press; 1998. [Google Scholar]
- 13.Sorensen G, Emmons K, Hunt MK, Johnston D. Implications of the results of community intervention trials. Annual Review of Public Health. 1998;19:379–416. doi: 10.1146/annurev.publhealth.19.1.379. [DOI] [PubMed] [Google Scholar]
- 14.Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin; 2002. [Google Scholar]
- 15.Glasgow RE, Lichtenstein E, Marcus AC. Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health. 2003;93(8):1261–1267. doi: 10.2105/ajph.93.8.1261. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.National Heart Lung and Blood Institute. Guidelines for demonstration and education research grants. Washington, DC: National Heart Lung and Blood Institute; 1983. [Google Scholar]
- 17.Mrazek P, Haggerty R, editors. Reducing risks for mental disorders. Washington, DC: National Academy Press; 1994. [PubMed] [Google Scholar]
- 18.McKinlay JB. The promotion of health through planned sociopolitical change: Challenges for research and policy. Social Science and Medicine. 1993;36(2):109–117. doi: 10.1016/0277-9536(93)90202-f. [DOI] [PubMed] [Google Scholar]
- 19.Valente TW. Need, demand, and external validity in dissemination of physical activity programs. American Journal of Preventive Medicine. 2006;31(4 Suppl):5–7. doi: 10.1016/j.amepre.2006.06.012. [DOI] [PubMed] [Google Scholar]
- 20.Fisher EB. Editorial: The results of the COMMIT trial. American Journal of Public Health. 1995;85(2):159–160. doi: 10.2105/ajph.85.2.159. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Wandersman A. Community science: Bridging the gap between science and practice with community-centered models. American Journal of Community Psychology. 2003;31(3/4):227–241. doi: 10.1023/a:1023954503247. [DOI] [PubMed] [Google Scholar]
- 22.Green LW. From research to “best practices” in other settings and populations. Am J Health Behav. 2001;25(3):165–78. doi: 10.5993/ajhb.25.3.2. [DOI] [PubMed] [Google Scholar]
- 23.Glasgow RE, Green LW, Klesges LM, Abrams DB, Fisher EB, Goldstein MG, et al. External validity: We need to do more. Annals of Behavioral Medicine. 2006;31(2):105–108. doi: 10.1207/s15324796abm3102_1. [DOI] [PubMed] [Google Scholar]
- 24.Best A. Organizational frameworks for knowledge development, exchange and implementation: KDEI Frameworks. 2007 [Google Scholar]
- 25.Brownson RC, Gurney JG, Land GH. Evidence-based decision making in public health. J Public Health Manag Pract. 1999;5(5):86–97. doi: 10.1097/00124784-199909000-00012. [DOI] [PubMed] [Google Scholar]
- 26.Petitti DB. Meta-analysis, decision analysis, and cost-effectiveness analysis: Methods for quantitative synthesis in medicine. New York, NY: Oxford University Press; 1994. [Google Scholar]
- 27.National Cancer Institute, editor. Greater than the sum: Systems thinking in tobacco control. Bethesda, MD: U.S. Dept of Health and Human Services, National Institutes of Health, National Cancer Institute; 2007. NIH Pub No. 06-6085. [Google Scholar]
- 28.Brownson RC. Epidemiology and health policy. In: Brownson RC, Petitti DB, editors. Applied epidemiology: Theory to practice. New York, NY: Oxford University Press; 1998. [Google Scholar]
- 29.Glass GV. Primary, secondary and meta-analysis of research. Educational Researcher. 1976;5:3–8. [Google Scholar]
- 30.Des Jarlais DC, Lyles C, Crepaz N. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;94(3):361–6. doi: 10.2105/ajph.94.3.361. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Zaza S, Briss PA, Harris KW, editors. The guide to community preventive Services: What works to promote health? (Task Force on Community Preventive Services) Oxford University Press; New York, NY: 2005. [Google Scholar]
- 32.Briss PA, Brownson RC, Fielding JE, Zaza S. Developing and using the Guide to Community Preventive Services: lessons learned about evidence-based public health. Annu Rev Public Health. 2004;25:281–302. doi: 10.1146/annurev.publhealth.25.050503.153933. [DOI] [PubMed] [Google Scholar]
- 33.Missouri Department of Health and Senior Services. Intervention MICA: Missouri Information for Community Assessment. MO: DHSS. [Google Scholar]
- 34.New Hampshire Institute for Public Health Policy and Practice. E-Roadmap to Evidence-Based Public Health Practice. 2006 [Google Scholar]
- 35.Victora CG, Habicht JP, Bryce J. Evidence-based public health: moving beyond randomized trials. Am J Public Health. 2004;94(3):400–5. doi: 10.2105/ajph.94.3.400. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.McKleroy VS, Galbraith JS, Cummings B, Jones P, Harshbarger C, Collins C, Gelaude D, Carey JW, ADAPT Team. Adapting evidence-based behavioral interventions for new settings and target populations. AIDS Education and Prevention. 2006;18(4 Suppl A):59–73. doi: 10.1521/aeap.2006.18.supp.59. [DOI] [PubMed] [Google Scholar]
- 37.Dearing JW, Maibach EW, Buller DB. A convergent diffusion and social marketing approach for disseminating proven approaches to physical activity promotion. American Journal of Preventive Medicine. 2006;31(4 Suppl):8–10. doi: 10.1016/j.amepre.2006.06.018. [DOI] [PubMed] [Google Scholar]
- 38.Price RH, Lorion RP. Prevention programming as organizational reinvention: From research to implementation. In: Silverman MM, Anthony V, editors. Prevention of mental disorders, alcohol and drug use in children and adolescents. Rockville, MD: DHHS; 1989. pp. 97–123. [Google Scholar]
- 39.Pentz MA, Trebow E. Implementation issues in drug abuse prevention research. Substance Use and Misuse. 1997;32(12&13):1655–1660. [PubMed] [Google Scholar]
- 40.Pentz MA, Trebow E, Hansen WB, MacKinnon DP, Dwyer JH, Flay BR, et al. Effects of program implementation on adolescent drug use behavior: The Midwestern Prevention Project (MPP) Evaluation Review. 1990;14(3):264–289. [Google Scholar]
- 41.Florin P, Wandersman A. An introduction to citizen participation, voluntary organizations, and community development: Insights for empowerment through research. American Journal of Community Psychology. 1990;18:41–53. [Google Scholar]
- 42.Pentz MA. Programs and Abstracts. National Institutes of Health; 1998. Research to practice in community-based prevention trials; pp. 82–83. [Google Scholar]
- 43.McKleroy VS, Galbraith JS, Cummings B, Jones P, Harshbarger C, Collins C, et al. Adapting evidence-based behavioral interventions for new settings and target populations. AIDS Education and Prevention. 2006;18(Suppl A):59–73. doi: 10.1521/aeap.2006.18.supp.59. [DOI] [PubMed] [Google Scholar]
- 44.Kottke TE, Pronk NP. Physical activity: Optimizing practice through research. American Journal of Preventive Medicine. 2006;31(4 Suppl):8–10. doi: 10.1016/j.amepre.2006.06.011. [DOI] [PubMed] [Google Scholar]
- 45.Owen N, Glanz K, Sallis JF, Kelder SH. Evidence-based approaches to dissemination and diffusion of physical activity interventions. American Journal of Preventive Medicine. 2006;31(4 Suppl):35–44. doi: 10.1016/j.amepre.2006.06.008. [DOI] [PubMed] [Google Scholar]
- 46.Hawking S. The Illustrated A Brief History of Time. New York: Bantam Books; p. 15. [Google Scholar]
- 47.Ford DH, Lerner RM. Developmental Systems Theory: An Integrative Approach. New York: Sage Publications; 1992. [Google Scholar]
- 48.Rogers E. Diffusion of Innovations. New York: Free Press; 1995. [Google Scholar]
- 49.Greenhalgh T. Diffusion of Innovations in Service Organizations: Systematic Review and Recommendations. The Milbank Quarterly. 2004;82(4):581–629. doi: 10.1111/j.0887-378X.2004.00325.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Estabrooks PA, Glasgow RE. Translating effective clinic-based physical activity interventions into practice. Am J Prev Med. 2006;31(4 Suppl):S45–56. doi: 10.1016/j.amepre.2006.06.019. [DOI] [PubMed] [Google Scholar]
- 51.Bracht N, Kingsbury L, Rissel C. A Five-Stage Community Organization Model for Health Promotion: Empowerment and Partnership Strategies. In: Bracht N, editor. Health Promotion at the Community Level: New Advances. Thousand Oaks, CA: Sage Publications; 1999. [Google Scholar]
- 52.Kotler P, Zaltman G. Social Marketing: An Approach to Planned Social Change. Journal of Marketing. 1971;35:3–12. [PubMed] [Google Scholar]
- 53.Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. American Journal of Public Health. 1999;89(9):1322–1327. doi: 10.2105/ajph.89.9.1322. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54.Kerner J, Guirguis-Blake J, Hennessy KD, et al. Translating research into improved outcomes in comprehensive cancer control. Cancer Causes Control. 2005;16(Suppl 1):27–40. doi: 10.1007/s10552-005-0488-y. [DOI] [PubMed] [Google Scholar]
- 55.Orleans C. The behavior change consortium: expanding the boundaries and impact of health behavior change research. Annals of Behavioral Medicine. 2005;29(Suppl):76–79. doi: 10.1207/s15324796abm2902s_11. [DOI] [PubMed] [Google Scholar]
- 56.Bonomi AE, Wagner EH, Glasgow RE, VonKorff M. Assessment of chronic illness care (ACIC): a practical tool to measure quality improvement. Health Serv Res. 2002;37(3):791–820. doi: 10.1111/1475-6773.00049. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Mercer SL, DeVinney BJ, Fine LJ, Green LW, Dougherty D. Study designs for effectiveness and translation research: identifying trade-offs. Am J Prev Med. 2007;33(2):139–154. doi: 10.1016/j.amepre.2007.04.005. [DOI] [PubMed] [Google Scholar]
- 58.Eccles M, Grimshaw J, Campbell M, Ramsay C. Research designs for studies evaluating the effectiveness of change and improvement strategies. Qual Saf Health Care. 2003;12(1):47–52. doi: 10.1136/qhc.12.1.47. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 59.Gilbody S, Whitty P. Improving the delivery and organisation of mental health services: beyond the conventional randomised controlled trial. Br J Psychiatry. 2002;180:13–8. doi: 10.1192/bjp.180.1.13. [DOI] [PubMed] [Google Scholar]
- 60.Grimshaw J, Campbell M, Eccles M, Steen N. Experimental and quasi-experimental designs for evaluating guideline implementation strategies. Fam Pract. 2000;17(Suppl 1):S11–6. doi: 10.1093/fampra/17.suppl_1.s11. [DOI] [PubMed] [Google Scholar]
- 61.Neuhauser D, Diaz M. Quality improvement research: are randomised trials necessary? Qual Saf Health Care. 2007;16(1):77–80. doi: 10.1136/qshc.2006.021584. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 62.Rosen L, Manor O, Engelhard D, Zucker D. In defense of the randomized controlled trial for health promotion research. Am J Public Health. 2006;96(7):1181–6. doi: 10.2105/AJPH.2004.061713. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 63.Sanson-Fisher RW, Bonevski B, Green LW, D'Este C. Limitations of the randomized controlled trial in evaluating population-based health interventions. Am J Prev Med. 2007;33(2):155–61. doi: 10.1016/j.amepre.2007.04.007. [DOI] [PubMed] [Google Scholar]
- 64.Trochim WMK. The Research Methods Knowledge Base. First. Cincinnati, OH: Atomic Dog Publishing; 2001. [Google Scholar]
- 65.Grimshaw J, Eccles M, Greener J, MacLennan G, Ibbotson T, Kahan J, Sullivan F. Is the involvement of opinion leaders in the implementation of research findings a feasible strategy? Implementation Science. 2006;1:3. doi: 10.1186/1748-5908-1-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 66.Valente TW, Pumpuang P. Identifying opinion leaders to promote behavior change. Health Education & Behavior. 2007 doi: 10.1177/1090198106297855. Epub ahead of print. [DOI] [PubMed] [Google Scholar]