Abstract
An essential strategy expected to reduce the global burden of chronic and cardiovascular disease is evidence-based policy. However, it is often unknown what specific components should constitute an evidence-based policy intervention. We have developed an expedient method to appraise and compare the strengths of the evidence bases suggesting that individual components of a policy intervention will contribute to the positive public health impact of that intervention. Using a new definition of “best available evidence,” the Quality and Impact of Component (QuIC) Evidence Assessment analyzes dimensions of evidence quality and evidence of public health impact to categorize multiple policy component evidence bases along a continuum of “emerging,” “promising impact,” “promising quality,” and “best.” QuIC was recently applied to components from 2 policy interventions to prevent and improve the outcomes of cardiovascular disease: public-access defibrillation and community health workers. Results illustrate QuIC’s utility in international policy practice and research.
Most deaths are due to noncommunicable conditions (e.g., chronic disease), which account for 6 in 10 deaths globally [1]. The World Health Organization predicts that cardiovascular disease will be the single leading cause of death in 2015, with an estimated 20 million people dying mainly from heart disease and stroke [2]. An essential strategy to prevent and control public health problems, including chronic and cardiovascular disease, is evidence-based policy [3]. Although evidence-based policy has the potential for an enormous impact on health, translation of evidence on effective public health interventions into evidence-based policy strategies for broad dissemination will require appraisal of existing evidence [4–6].
Rigorous evidence syntheses (i.e., systematic reviews), such as those produced by the United States Clinical and Community Guides to Preventive Services [7] and the Grading of Recommendations, Assessment, Development, and Evaluation system [8], have resulted in many evidence-based guidelines and recommendations. However, because of the length of time required to identify, review, and evaluate evidence (i.e., about 12 months) [9,10], traditional systematic review methods such as these cannot realistically be used to study every policy option, especially when many public health problems are urgent [4]. In 2005, gaps in the traditional evidence base for public health interventions prompted the United States Institute of Medicine to call for action on the basis of the “best available evidence,” as opposed to the “best possible evidence” [11]. To increase the discovery, study, and application of effective public health interventions, researchers have developed new frameworks and expedient methods for evaluating the best available evidence [9,12–18]. Several of these approaches now classify interventions along an evidence continuum or in a typology that includes categories such as “emerging,” “promising,” and “best” [9,13,14,16].
Public health policy interventions can be defined as new or altered courses of action influencing or determining decisions, laws, rules, or regulations governing health or related health behaviors [9]. Rapid evidence assessment methods are especially important for developing policy interventions because windows of time for evidence to influence policy making are short [10,19]. The use of the best available evidence is also necessary because the rigorous designs used to assess clinical interventions are often not feasible or appropriate to assess policy interventions [20]. Furthermore, most public health interventions, including policy interventions, are made up of many components, each with its own evidence base [6]. Brennan et al. [9,21] recently developed an expedient evidence assessment method for policy and environmental interventions to prevent childhood obesity, in which the policy interventions assessed could have multiple components (i.e., using more than 1 related activity) or be complex (i.e., using multiple approaches that are not inherently distinct) [7]. They noted that the nature of policy would require the capacity to delineate the many moving parts to determine the minimal intervention components required for effectiveness. Brownson et al. [3] have also identified the need to better describe the evidence-based elements within existing and proposed policy to find the “active ingredients.”
Indeed, it is too often unknown what specific components should constitute an evidence-based public health policy intervention. To expediently appraise and compare the strengths of multiple evidence bases suggesting that individual components of a public health policy intervention will contribute to the positive public health impact of that intervention, we developed the Quality and Impact of Component (QuIC) Evidence Assessment method. In this report, we 1) describe how QuIC was developed and provide an overview of the method, 2) present results of using QuIC to appraise the evidence for 2 cardiovascular disease–related policy interventions (public-access defibrillation [PAD] and community health workers [CHWs]), and 3) discuss QuIC’s utility in international public health policy research and practice, including its strengths and limitations.
METHODS
Our team included 2 public health policy analysts and 4 public health policy research and measurement development experts. Our first step was to select 2 cardiovascular disease–related interventions with evidence of positive (i.e., desired) outcomes, which we also knew had existing state-level policy applications (e.g., laws, including statutes and regulations) in the United States: PAD and CHWs [22,23]. PAD facilitates the use of automated external defibrillators (AEDs) by the public and increases the likelihood of survival after out-of-hospital sudden cardiac arrest [24]. Existing U.S. state PAD policies support state, local, and organizational PAD programs in a state [22]. CHWs are lay health workers who improve the cultural competence and quality of service delivery, especially for minority and vulnerable populations, and have evidence of reducing hypertension [25], among other positive outcomes. Existing U.S. state CHW policies support the CHW workforce practicing in a state [23].
After we selected policy interventions of interest, we defined the best available evidence for public health interventions, including policies, by reviewing definitions from the published research [3,9,12–18,26,27]. Next, we searched for this evidence generally for PAD and CHWs by querying Web resources and consulting with subject-matter experts at the Centers for Disease Control and Prevention (CDC). We found very little evidence about PAD or CHW policy interventions (as defined earlier); most evidence pertained to practices and programs. Using the evidence collected, we identified individual policy components for assessment, where policy components were defined as discrete though sometimes related activities that are part of an existing or recommended policy intervention. For example, we used the American Heart Association’s recommended PAD legislative strategies for states, along with other evidence, to identify and describe evidence-based PAD policy components (e.g., targeted AED site placement) [28]. When we examined component-level evidence, we found that although each intervention as a whole (i.e., PAD and CHWs) included experimental or quasi-experimental evidence, the evidence bases for individual components of PAD and CHWs largely did not. For example, experimental evidence shows positive outcomes of CHW programs [25], but many components within these CHW programs (e.g., certification of CHWs) have not been studied experimentally, even though many are recommended in the U.S. as part of state policies and programs [29].
On the basis of this review, we developed a new definition of “best available evidence” for the policy components under study. Similar to existing evidence assessment methods (including the Community Guide), QuIC’s best available evidence includes evidence from practice in addition to evidence from research [7,9,12–18]. This includes evidence on “upstream,” “midstream,” and “downstream” interventions with diverse outcomes at the individual and population levels [3]. However, QuIC’s new definition also includes evidence on practice, program, and policy interventions, which are used to infer potential policy impact; evidence for which the effect or association of the component has not been extracted from the effect or association of the broader intervention; and parallel evidence, defined as evidence of positive outcomes from similar strategies used to address other public health issues [27]. For example, evidence showing CHWs’ effectiveness at asthma management was considered parallel evidence for CHWs’ management of heart disease and stroke.
After defining the best available evidence, we developed the QuIC framework and method. Evidence quality and (evidence of) public health impact were selected from an existing conceptual framework, developed by the CDC, as the 2 key dimensions for assessing the strength of an evidence base for a policy component [16]. After reviewing the published research on these dimensions, we chose 8 criteria to include in the QuIC framework: 4 to assess evidence quality and 4 to assess evidence of public health impact. QuIC’s criteria for evidence quality include the following:
Study type: This criterion was chosen because study type (e.g., design) can indicate rigor and internal validity [17]. For example, a quasi-experimental study would have a control or comparison group and/or multiple time points, which increase internal validity [15].
Source: This criterion was chosen because author credibility can be used as a proxy for evidence quality in decision making [30]. Expert peer review [17] and transparency of methods and sponsorship [31] both increase source credibility.
Practice or theory-based evidence: Too much public health evidence comes from controlled research that does not reflect the realities of practice. This criterion was chosen because practice-based evidence takes into account diverse circumstances [32] and because interventions should be grounded in sound theory [6,15].
Research-based evidence: This criterion was chosen because empirical evidence can produce generalizable results and generalizability is needed to make inferences about effectiveness for a population [33].
QuIC’s criteria for evidence of public health impact include the following:
Health: This criterion was chosen because affecting health status is a long-term intended outcome of public health policy [3]. Health impact results from both effect size and reach, where reach is defined as the extent to which the intervention affects intended population(s) [16].
Equity: Equity is defined as fairness in the distribution of health and the social determinants of health among people [34,35]. This criterion was chosen because health disparities are responsible for many public health problems, including much of the cardiovascular disease burden [1,2,36]. Equity impact also results from effect size and reach.
Efficiency: An efficient allocation of resources occurs when maximum output is obtained from a particular combination and quantity of resource inputs [34]. This criterion was chosen because policy makers must maximize effectiveness with diminishing funds [37]. Efficiency impact also results from effect size and reach.
Transferability: This criterion was chosen because it indicates the extent to which an intervention can be applied to or adapted for different contexts [16], including different settings and populations [15,38]. Diverse contexts in the supporting evidence base suggest wider applicability and greater potential public health impact.
These 8 criteria are domains in QuIC’s evidence quality and evidence of public health impact assessment scales, which were developed to produce an evidence quality score (quality score) and an evidence of public health impact score (impact score) for each component assessed as part of a public health policy intervention. (The scales are provided in the QuIC Handbook, which is available on request.) Scale levels were selected using published research on existing evidence hierarchies and assessment tools, and point designations were assigned to scale levels using the expertise of our team in measurement development. We also developed an evidence strength continuum (i.e., quadrant; see Figures 1 and 2) to simultaneously consider quality and impact scores to determine if a policy component is “emerging,” “promising impact,” “promising quality,” or “best.” Descriptions and potential next steps were created for components falling into each of the evidence strength categories (Table 1).
FIGURE 1. Public access defibrillation (PAD) policy component evidence strength categorizations.
Most (4 of 7) of the PAD policy components had the strongest (i.e., “best”) evidence basis. Two PAD policy components fell into the “promising quality” category, and 1 fell into the “emerging” category.
FIGURE 2. Community health worker (CHW) policy component evidence strength categorizations.
Most of the CHW policy components had a strong evidence basis, with 8 falling into the “best” category. Two CHW policy components fell into the “promising quality” category, 1 into the “promising impact” category, and 3 into the “emerging” category.
TABLE 1.
Quality and Impact of Component Evidence Assessment evidence strength category descriptions and next steps
Category | Description | Next steps |
---|---|---|
Best | In general, these components have been shown by rigorous peer-reviewed evidence, which was derived from practice or theory and research, to improve health across many types of settings. These components typically broaden an intervention’s reach and show a larger magnitude of effect. There is also likely evidence of improved equity and/or efficiency. | These components can be considered first when developing new policy or updating existing policy. They might be good candidates for experimental studies to measure their relative contributions to the improved health effect of the policy intervention as a whole or for systematic review. |
Promising quality | In general, these components have been shown by rigorous peer-reviewed evidence, which was derived from practice or theory and research, to improve health across a few types of settings. However, these components typically narrow an intervention’s reach and show a smaller magnitude of effect. There is also likely no evidence on equity and/or efficiency. | These components can also be considered for inclusion in new or existing policy. They might also be good candidates for experimental studies to measure their relative contributions to the improved health effect of the policy intervention as a whole. Additional evidence should examine their equity and efficiency impacts. |
Promising impact | In general, the evidence shows that these components positively affect health across at least a few settings, and there might also be evidence of improved equity and/or efficiency. However, the evidence base is lacking rigorous study types, peer review, and a sufficient amount of evidence from both practice or theory and research. | These components can also be considered for inclusion in new or existing policy. Researchers and evaluators should consider analyzing additional applications of these components to confirm impacts and generate new evidence that improves the overall quality of the evidence base. |
Emerging | In general, there is very little or no evidence on the health, equity, and efficiency impacts of these components. The evidence base is also lacking rigorous study types, peer review, and a sufficient amount of evidence from practice or theory and research. | More evidence is needed on the health, equity, and efficiency impacts of these components. Researchers and evaluators should consider analyzing early applications of these components to generate new evidence that improves the overall quality of the evidence base. |
To test QuIC’s usability and reliability, 3 policy analysts independently applied it to the evidence bases for PAD policy components in February and March 2014 and to those for CHW policy components in March and April 2014. During this phase, the analysts developed additional decision rules and definitions to resolve disagreements. The final rules and definitions were reviewed by the team as a whole and are documented in the QuIC Handbook.
THEORY AND CALCULATION
There are 7 steps in a QuIC Evidence Assessment:
1. Collect evidence and use it to identify discrete components of a policy intervention.
2. Select and train raters.
3. Raters classify the evidence base to each policy component.
4. Raters complete evidence quality assessments.
5. Raters complete evidence of public health impact assessments.
6. Assess interrater reliability and have raters reach consensus.
7. Determine evidence strength categories and next steps.
The following provides a brief overview of each step. For more details, including step-by-step instructions for completing a QuIC assessment, refer to the QuIC Handbook.
Step 1: Collect evidence and use it to identify discrete components of a policy intervention
First, one must select a public health policy intervention with some evidence of efficacy and/or effectiveness. Web resources (e.g., PubMed) and subject-matter experts provide evidence, which is reviewed to identify and define discrete components of the policy intervention for assessment. During this step, highly interrelated policy components are not separated for assessment. For example, requiring publicly accessible AEDs to be registered with emergency medical services and requiring AED users to call 911 are highly related components that are part of the broader component of PAD coordinated with emergency medical services.
Step 2: Select and train raters
A minimum of 2 raters with some subject-matter knowledge in the area under study are selected to classify evidence in step 3 and to assess evidence quality and evidence of public health impact in steps 4 and 5. Using more than 1 rater provides data to assess interrater reliability and is also expected to improve the validity of assessments. Although raters with only an introductory background in research methods are expected to use QuIC reliably, subject-matter knowledge will help raters better interpret evidence of public health impact. To ensure consistent application of QuIC, raters are trained by reviewing the QuIC Handbook and by classifying and rating a sample of pre-classified and pre-rated evidence.
Step 3: Raters classify the evidence base to each policy component
The raters’ first task is to complete an independent review of the policy intervention evidence base and classify relevant evidence to each policy component. The QuIC Handbook instructs raters to classify an item of evidence (e.g., 1 journal article) to a policy component’s evidence base if it 1) provides a rationale for why the component will have or contribute to a positive (i.e., desired) outcome and/or 2) uses data analysis to examine outcomes of the component or of an intervention using the component and finds a desired outcome. To reduce redundancy of information, raters ensure that every item adds new data or a new argument (e.g., a new expert’s opinion) and does not simply summarize other evidence. If raters find that a component or intervention has evidence of harming human health, the assessment of that component or intervention is terminated, as QuIC is not designed for assessing evidence bases with mixed findings. Overall, an iterative process of collecting evidence, defining the policy components, and classifying the evidence is used.
Step 4: Raters complete evidence quality assessments
The raters next independently assess evidence quality for each component by completing a structured review of the evidence and then applying an evidence quality scale (quality scale) to each component’s evidence base as a whole. The quality scale ranges from 0 to 40, with its 4 domains evenly weighted at 10 points each. Step 4 is repeated for each component being assessed so that each receives a quality score out of 40. For the scoring protocol, refer to the QuIC Handbook.
Step 5: Raters complete evidence of public health impact assessments
The raters next independently assess evidence of public health impact for each component by completing a structured review of the evidence and then applying an evidence of public health impact scale (impact scale) to each component’s evidence base as a whole. The impact scale ranges from 0 to 40, with its 4 domains evenly weighted at 10 points each. Step 5 is repeated for each component being assessed so that each receives an impact score out of 40. For scoring protocol, refer to the QuIC Handbook.
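For illustration only, the following sketch (in Python) shows how a rater’s 4 evenly weighted domain ratings roll up into a 0–40 scale score. The domain names mirror the criteria listed earlier, but the function, variable names, and example point values are ours and are not part of the QuIC Handbook scoring protocol.

```python
def scale_score(domain_points: dict) -> int:
    """Sum 4 evenly weighted domains (0-10 points each) into a 0-40 scale score."""
    assert len(domain_points) == 4
    assert all(0 <= p <= 10 for p in domain_points.values())
    return sum(domain_points.values())

# Hypothetical ratings for one component's evidence base (not actual QuIC data).
quality_score = scale_score({"study type": 8, "source": 9,
                             "practice or theory-based evidence": 7,
                             "research-based evidence": 8})          # 32 of 40
impact_score = scale_score({"health": 7, "equity": 5,
                            "efficiency": 6, "transferability": 7})  # 25 of 40
```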
Step 6: Assess interrater reliability and have raters reach consensus
Interrater reliability remains important to assess because judgments made by humans are prone to measurement error [39]. Interrater reliability is analyzed separately for the quality and impact scales using methods developed by Shrout and Fleiss [39] to calculate an intraclass correlation coefficient with data from raters’ independent assessments. Afterward, raters resolve their discrepancies and reach consensus on final scoring.
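As a rough sketch, and assuming the two-way random-effects, absolute-agreement, single-rater form (ICC[2,1]) of Shrout and Fleiss [39] (the form actually used is documented with the published assessment [40]), the intraclass correlation coefficient could be computed from the raters’ independent scores as follows; the data shown are hypothetical.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1) per Shrout & Fleiss (1979): two-way random effects,
    absolute agreement, single rater. `scores` has one row per policy
    component and one column per rater."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between components
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r, ms_c = ss_rows / (n - 1), ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical quality scores from 3 raters for 4 components (not actual QuIC data).
print(round(icc_2_1([[32, 30, 31], [23, 25, 22], [19, 18, 21], [28, 29, 27]]), 2))
```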
Step 7: Determine evidence strength categories and next steps
The final quality and impact scores for each component are used to determine a single evidence strength category for each component: emerging, promising impact, promising quality, or best. This is accomplished using a quadrant (see Figures 1 and 2) that includes the two 40-point quality and impact scales as axes; a quality or impact score of 21 or more moves a component into the adjacent quadrant. The final task of a QuIC assessment is to propose next steps for each policy component on the basis of where each fell along the evidence continuum (Table 1).
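Expressed as a minimal decision rule (the function name and explicit threshold parameter are ours; the quadrant logic itself is as described above and shown in Figures 1 and 2):

```python
def quic_category(quality: int, impact: int, threshold: int = 21) -> str:
    """Map a component's final quality and impact scores (0-40 each) to an
    evidence strength category using the 21-point quadrant boundaries."""
    if quality >= threshold and impact >= threshold:
        return "best"
    if quality >= threshold:
        return "promising quality"
    if impact >= threshold:
        return "promising impact"
    return "emerging"

# Worked examples using final scores from Table 2 (PAD components):
print(quic_category(32, 25))  # PAD coordinated with emergency medical services -> best
print(quic_category(21, 19))  # PAD by untrained bystanders -> promising quality
print(quic_category(19, 13))  # PAD with limited liability and/or immunity -> emerging
```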
RESULTS AND DISCUSSION
Table 2 and Figure 1 show scores and categorizations for 7 evidence-based PAD policy components, and Table 3 and Figure 2 show scores and categorizations for 14 evidence-based CHW policy components. Reliability was high in both assessments [40]. Figures 1 and 2 also illustrate that evidence quality and evidence of public health impact are likely interrelated constructs, as suggested by Spencer et al. [16]. The CHW assessment results were published in September 2014 in the Policy Evidence Assessment Report [40], the first in a series of reports developed by the CDC to translate and disseminate results of QuIC assessments. (The CHW Policy Evidence Assessment Report provides a list of the evidence reviewed in the assessments as well as evidence inclusion and exclusion criteria and search terms. This report also includes summaries of the evidence for each policy component assessed as well as the results of reliability assessments for the quality and impact scales. The PAD policy component evidence assessments were completed primarily to help develop QuIC and are not published; however, additional documentation [e.g., reliability analysis results] is available upon request.)
TABLE 2.
Quality and Impact of Component Evidence Assessment results for PAD policy components
Policy component | Quality score* | Impact score* | Evidence category† |
---|---|---|---|
PAD coordinated with emergency medical services | 32 | 25 | Best |
PAD with training for potential responders | 31 | 23 | Best |
PAD with targeted AED site placement in high-density areas, schools, and fitness facilities | 28 | 24 | Best |
PAD with an emergency response plan | 23 | 22 | Best |
PAD by untrained bystanders | 21 | 19 | Promising quality |
PAD with routine maintenance and testing of AEDs | 22 | 16 | Promising quality |
PAD with limited liability and/or immunity | 19 | 13 | Emerging |
AED, automated external defibrillator; PAD, public-access defibrillation.
*Out of 40 possible points.
†“Emerging,” “promising impact,” “promising quality,” or “best.”
TABLE 3.
Quality and Impact of Component Evidence Assessment results for CHW policy components
Policy component | Quality score* | Impact score* | Evidence category† |
---|---|---|---|
CHWs providing chronic disease care services | 40 | 40 | Best |
CHWs included in the team-based care model | 33 | 33 | Best |
CHWs with core competency certification | 29 | 28 | Best |
CHWs when supervised by health care professionals | 28 | 26 | Best |
CHWs trained using standardized core CHW curriculum | 26 | 28 | Best |
CHWs providing services that are paid for by Medicaid | 25 | 22 | Best |
CHWs with specialty area certification | 21 | 28 | Best |
CHWs certified after helping develop their certification requirements | 21 | 24 | Best |
CHWs trained using a standardized specialty area curriculum | 23 | 17 | Promising quality |
CHWs with a defined scope of practice | 21 | 12 | Promising quality |
CHWs trained after helping develop their standardized curriculum | 20 | 24 | Promising impact |
CHWs providing services that are covered and reimbursed by private insurers | 11 | 4 | Emerging |
CHW awareness promoted by educational campaign | 7 | 8 | Emerging |
CHW workforce development supported by grants and incentives | 7 | 4 | Emerging |
CHW, community health worker.
*Out of 40 possible points.
†“Emerging,” “promising impact,” “promising quality,” or “best.”
The results of these QuIC applications have international implications. Evidence supporting PAD policy components included studies set in many developed countries (e.g., England, Canada, the United States, Japan, Denmark); thus, results can inform policy development in these and similar countries. In addition, this assessment identified evidence gaps; we found no evidence suggesting PAD’s impact on health disparities, which remains important to study given disparities in cardiovascular disease [1,2,36] and sudden cardiac arrest [41]. Although we excluded evidence from low-resource settings during the CHW assessments to ensure a sufficient level of comparability across CHW interventions [40], these QuIC results also have international relevance. Many components of CHW policy to address chronic disease in the United States (e.g., providing CHWs with standard specialty area training) could be adapted and included in the CHW policies of other countries, including those in the developing world, where lay health workers remain a critical strategy and chronic disease a growing problem [1,2,42]. Furthermore, our results provide parallel evidence for similar CHW policy strategies that are especially relevant for the developing world, including policies for promoting maternal and child health and for preventing human immunodeficiency virus (HIV) infection [42,43].
We found that the main strength of QuIC, compared with existing evidence assessment methods, is its potential to assess many policy component evidence bases, for which evidence might range from emerging to well-established. This strength is due mostly to QuIC’s broad definition of “best available evidence.” Furthermore, on the basis of the assessments of 7 PAD components with 34 items of evidence and 14 CHW components with 57 items of evidence, QuIC assessments of similar scopes could be completed by 2 part-time policy research staff members in about 2 to 3 months. This time frame could be considered expedient compared with the 12 months typically expected for a systematic review of only 1 component [10]. Moreover, other approaches for assessing evidence often involve convening an expert panel [17], which might not always be practical when attempting to assess a large number of policy options. Finally, although QuIC’s formative assessments included only cardiovascular disease–related policy interventions, we expect QuIC to be applicable to a broad range of public health policy topics because QuIC criteria were selected on the basis of published research general to public health policy analysis.
There are several limitations of QuIC. One is that QuIC only assesses evidence of a positive public health impact; as such, it does not combine or compare separate study findings on component or intervention outcomes, as in systematic review and meta-analysis. We do not conduct this analysis for two reasons. First, many interventions will have both successful and unsuccessful applications, depending on context [6]. Aggregating primary study findings could result in QuIC undervaluing components that are part of interventions successful in some settings and for some populations. Nevertheless, if QuIC finds any harm to human health, an assessment is terminated. The second reason that QuIC does not combine or compare separate study findings is that QuIC is intended to be “quick,” so that assessments can be completed during the short windows of opportunity for policy change [19]. We surmised that the process of combining and comparing findings would be time consuming and potentially inappropriate, given the heterogeneity of outcomes assessed.
An additional limitation, related to our definition of “best available evidence,” is that what works well as a practice or program might not work at the policy level, because many determinants of the policy process can influence outcomes [44,45]. We decided that the potential cost of this limitation did not exceed the value that practice and programmatic evidence could provide in the absence of policy evidence. Another limitation related to our definition of “best available evidence” is that QuIC attributes some part of a desired health effect or association, with equal weight, to all the components that constitute an intervention. This assumption is necessary because it is unlikely that all the individual components of recent policy interventions have been studied independently. We ultimately address the described limitations by using a mixed approach to present QuIC results in the Policy Evidence Assessment Report, which also includes narrative summary sections in which we can note findings of ineffectiveness, contextual factors affecting policy implementation, the need for component-level experimental study, and other important considerations [40].
A final limitation is that QuIC’s reliability has been assessed using only its developers as raters. Therefore, it will be important to test reliability with raters outside of the project team who also represent intended users. Overall, QuIC’s limitations are outweighed by its strengths because it offers a method for assessing evidence that would otherwise be overlooked. QuIC can be used to screen a large number of policy options in a relatively short time frame, and its results offer a starting point for detailed and rigorous synthesis (i.e., QuIC is a “systematic preview”).
SUMMARY
Evidence-based policy could be expected to significantly improve cardiovascular health in both developed and developing countries. However, assessment of the evidence for public health policy components is needed. QuIC provides an expedient approach for appraising and comparing the strengths of the best available evidence bases for many individual components of existing or proposed public health policy interventions. Its results could help translate a broad base of information for international policy maker audiences and uncover evidence gaps that could inform future policy research and evaluation.
ACKNOWLEDGMENTS
The authors thank the following persons who provided feedback during the development of QuIC: Sarah Ali, Ryan Bell, Farah Chowdhury, Elizabeth Dodson, Diane Dunet, Valerie Edelheit, Samantha Harrykissoon, Sarah Kerch, Michael Kulcsar, Alberta Mirambeau, Deesha Patel, Kim Prewitt, Tara Ramanathan, Nora Shields, Michael Tynan, Daniella Uslan, Marla Vaughan, and Cathy Vogel. Also, the authors thank Ross Brownson, David Callahan, and Deesha Patel for input on drafts of this report.
This project was funded by Centers for Disease Control and Prevention, Division for Heart Disease and Stroke Prevention, contract 11IPA1103219. The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
Footnotes
The authors report no relationships that could be construed as a conflict of interest.
REFERENCES
1. World Health Organization. The Global Burden of Disease: 2004 Update. Available at: http://www.who.int/healthinfo/global_burden_disease/GBD_report_2004update_full.pdf; 2008. Accessed November 5, 2014.
2. World Health Organization. Preventing Chronic Diseases: A Vital Investment. Available at: http://www.who.int/chp/chronic_disease_report/full_report.pdf; 2005. Accessed November 5, 2014.
3. Brownson RC, Chriqui JF, Stamatakis KA. Understanding evidence-based public health policy. Am J Public Health 2009;99:1576–83.
4. Anderson LM, Brownson RC, Fullilove MT, et al. Evidence-based public health policy and practice: promises and limits. Am J Prev Med 2005;28:226–30.
5. Dodson EA, Brownson RC, Weiss SM. Policy dissemination research. In: Brownson RC, Colditz G, Proctor E, editors. Dissemination and Implementation Research in Health. New York: Oxford University Press; 2012. p. 437–58.
6. Pawson R. Evidence-Based Policy: A Realist Perspective. London: Sage; 2006.
7. Briss PA, Zaza S, Pappaioanou M, et al., for the Task Force on Community Preventive Services. Developing an evidence-based guide to community preventive services—methods. Am J Prev Med 2000;18:35–43.
8. Guyatt GH, Oxman AD, Vist GE, et al., for the GRADE Working Group. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008;336:924–6.
9. Brennan L, Castro S, Brownson RC, Claus J, Orleans CT. Accelerating evidence reviews and broadening evidence standards to identify effective, promising, and emerging policy and environmental strategies for prevention of childhood obesity. Annu Rev Public Health 2011;32:199–223.
10. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci 2010;5:56.
11. U.S. Institute of Medicine. Preventing Childhood Obesity: Health in the Balance. Washington, DC: The National Academies Press; 2005.
12. Albert D, Fortin R, Herrera C, et al. Strengthening chronic disease prevention programming: the Toward Evidence-Informed Practice (TEIP) program assessment tool. Prev Chronic Dis 2013;10. Available at: 10.5888/pcd10.120107. Accessed October 30, 2014.
13. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health 2009;30:175–201.
14. Leeman J, Sommers J, Leung MM, Ammerman A. Disseminating evidence from research and practice: a model for selecting evidence to guide obesity prevention. J Public Health Manag Pract 2011;17:133–40.
15. Puddy RW, Wilkins N. Understanding Evidence: Part 1: Best Available Research Evidence: A Guide to the Continuum of Evidence of Effectiveness. Atlanta, GA: Centers for Disease Control and Prevention; 2011. Available at: http://www.cdc.gov/violenceprevention/pdf/understanding_evidence-a.pdf. Accessed October 30, 2014.
16. Spencer LM, Schooley MW, Anderson LA, et al. Seeking best practices: a conceptual framework for planning and improving evidence-based practices. Prev Chronic Dis 2013;10. Available at: 10.5888/pcd10.130186. Accessed October 30, 2014.
17. Brownson RC, Baker EA, Leet TL, Gillespie KN, True WR. Evidence-Based Public Health. New York: Oxford University Press; 2011.
18. Leeman J, Sommers J, Vu M, et al. An evaluation framework for obesity prevention policy interventions. Prev Chronic Dis 2012;9. Available at: 10.5888/pcd9.110322. Accessed October 30, 2014.
19. Kingdon J. Agendas, Alternatives, and Public Policies. 2nd ed. Upper Saddle River, NJ: Pearson; 2010.
20. Mercer SL, DeVinney BJ, Fine LJ, Green LW, Dougherty D. Study designs for effectiveness and translation research: identifying tradeoffs. Am J Prev Med 2007;33:139–54.
21. Brennan LK, Brownson RC, Orleans CT. Childhood obesity policy research and practice: evidence for policy and environmental strategies. Am J Prev Med 2014;46. Available at: 10.1016/j.amepre.2013.08.022. Accessed October 30, 2014.
22. Gilchrist S, Schieb L, Mukhtar Q, et al. A summary of public access defibrillation laws, United States—2010. Prev Chronic Dis 2012;9. Available at: 10.5888/pcd9.110196. Accessed October 30, 2014.
23. Centers for Disease Control and Prevention. A Summary of State Community Health Worker Laws. Atlanta, GA: Centers for Disease Control and Prevention; 2013. Available at: http://www.cdc.gov/dhdsp/pubs/docs/chw_state_laws.pdf. Accessed October 20, 2014.
24. Hazinski MF, Idris AH, Kerber RE, et al. Lay rescuer automated external defibrillator (“public access defibrillation”) programs: lessons learned from an international multicenter trial: advisory statement from the American Heart Association Emergency Cardiovascular Committee; the Council on Cardiopulmonary, Perioperative, and Critical Care; and the Council on Clinical Cardiology. Circulation 2005;111:3336–40.
25. Brownstein JN, Chowdhury FM, Norris SL, et al. Effectiveness of community health workers in the care of people with hypertension. Am J Prev Med 2007;32:435–47.
26. Ogilvie D, Egan M, Hamilton V, Petticrew M. Systematic reviews of health effects of social interventions: 2. Best available evidence: how low should you go? J Epidemiol Community Health 2005;59:886–92.
27. Swinburn B, Gill T, Kumanyika S. Obesity prevention: a proposed framework for translating evidence into action. Obes Rev 2005;6:23–33.
28. Aufderheide T, Hazinski MF, Nichol G, et al. Community lay rescuer automated external defibrillation programs: key state legislative components and implementation strategies: a summary of a decade of experience for healthcare providers, policymakers, legislators, employers, and community leaders from the American Heart Association Emergency Cardiovascular Care Committee, Council on Clinical Cardiology, and Office of State Advocacy. Circulation 2006;113:1260–70.
29. Rosenthal EL, Brownstein JN, Rush CH, et al. Community health workers: part of the solution. Health Aff (Millwood) 2010;29:1338–42.
30. Sutcliffe S, Court J. Evidence-Based Policymaking: What Is It? How Does It Work? What Relevance for Developing Countries? London: Overseas Development Institute; 2005.
31. Donaldson SI, Christie CA, Mark MM. What Counts as Credible Evidence in Applied Research and Evaluation Practice? Thousand Oaks, CA: Sage; 2009.
32. Green LW. Public health asks of systems science: to advance our evidence-based practice, can you help us get more practice-based evidence? Am J Public Health 2006;96:406–9.
33. Ange BA, Symons JM, Schwab M, Howell E, Geyh A. Generalizability in epidemiology: an investigation within the context of heart failure studies. Ann Epidemiol 2004;14:600–1.
34. Killoran A, Kelly MP. Evidence-Based Public Health: Effectiveness and Efficiency. New York: Oxford University Press; 2010.
35. Braveman P, Egerter S, Williams DR. The social determinants of health: coming of age. Annu Rev Public Health 2011;32:381–98.
36. Mensah GA, Brown DW. An overview of cardiovascular disease burden in the United States. Health Aff (Millwood) 2007;26:38–48.
37. Dilley JA, Bekemeier B, Harris JR. Quality improvement interventions in public health systems: a systematic review. Am J Prev Med 2012;42:S58–71.
38. Rychetnik L, Frommer M, Hawe P, Shiell A. Criteria for evaluating evidence on public health interventions. J Epidemiol Community Health 2002;56:119–27.
39. Shrout P, Fleiss J. Intraclass correlations: uses in assessing rater reliability. Psychol Bull 1979;86:420–8.
40. Centers for Disease Control and Prevention. Policy Evidence Assessment Report: Community Health Worker Policy Components. Available at: http://www.cdc.gov/dhdsp/pubs/docs/chw_evidence_assessment_report.pdf; 2014. Accessed October 30, 2014.
41. Becker LB, Han BH, Meyer PM, et al., for the CPR Chicago Project. Racial differences in the incidence of cardiac arrest and subsequent survival. N Engl J Med 1993;329:600–6.
42. Lewin S, Munabi-Babigumira S, Glenton C, et al. Lay health workers in primary and community health care for maternal and child health and the management of infectious diseases. Cochrane Database Syst Rev 2010;(3):CD004015.
43. Schneider H, Hlophe H, van Rensburg D. Community health workers and the response to HIV/AIDS in South Africa: tensions and prospects. Health Policy Plan 2008;23:179–87.
44. Dobrow MJ, Goel V, Upshur RE. Evidence-based health policy: context and utilisation. Soc Sci Med 2004;58:207–17.
45. Rutten A. Evidence-based policy revisited: orientation towards the policy process and a public health policy science. Int J Public Health 2012;57:455–7.