Abstract
Context
Increasing the adoption and implementation of evidence-based policies and practices is a key strategy for improving public health. Although there is widespread agreement about the importance of implementing evidence-based public health policies and practices, there are gaps between what has been shown to be effective and what is implemented at the state level.
Objective
The Centers for Disease Control and Prevention (CDC) developed the Prevention Status Reports (PSRs), a performance measurement system, to highlight evidence-based public health policies and practices and catalyze state performance and quality improvement efforts across the nation.
Design
CDC selected a set of 10 topics representing some of the most important public health challenges in the nation. Stakeholders, including state health departments and other partners, helped conceptualize the PSRs and informed the development of the PSR framework, which provides an organizational structure for the system. CDC subject matter experts developed criteria for selecting policies and practices, indicators for each policy and practice, and a criteria-based rating system for each indicator.
Participants and Setting
The PSRs were developed for all 50 states and the District of Columbia.
Main Outcome
The PSRs were developed and serve as a performance measurement system for monitoring the adoption, reach, and implementation fidelity of evidence-based public health policies and practices nationwide.
Results
The PSRs include 33 policy and practice indicators across the 10 health topics. They use a simple 3-level rating system—green, yellow, and red—to report the extent to which each state (and the District of Columbia) has implemented the policy or practice in accordance with supporting evidence or expert recommendations. Aggregate analyses of indicators comparable across the 2014 and 2016 report iterations show positive change or improvement.
Conclusion
The PSRs are a unique part of CDC’s work to improve the performance and accountability of the public health system, serving as both a monitoring tool and a call to action to improve health outcomes. The PSRs can be used to track the reach of and fidelity to evidence-based policies and practices nationally over time, as well as inform state efforts to improve their use of evidence-based policies and practices.
Keywords: evidence-based public health, knowledge translation, performance measurement, policy and practice indicators, research-practice-policy integration
Adoption and implementation of evidence-based policies and practices are key strategies for improving the public’s health.1 Health departments that use an evidence-based approach are better positioned to make effective use of limited resources and are more likely to make progress toward their public health goals.1,2 Public health policies and practices are considered evidence-based when data are available that demonstrate their effectiveness in achieving desired outcomes. Such data can be derived from research studies and program or policy evaluations.1
Although a variety of national initiatives seek to increase or support use of evidence-based policies and practices (eg, Healthy People 2020, the Guide to Community Preventive Services, the Public Health Accreditation Board’s national public health accreditation program),3–5 considerable gaps remain between what research has shown to be effective and what is actually implemented.6 For example, in 2001, the Task Force on Community Preventive Services (Task Force) strongly recommended smoking bans and restrictions as an effective means of reducing exposure to environmental tobacco smoke based on a systematic review of scientific evidence. In 2012, the Task Force updated its recommendation and released findings showing that implementation of comprehensive state smoke-free policies is effective in reducing prevalence of tobacco use, exposure to secondhand smoke, initiation of tobacco use among adolescents, and tobacco-related morbidity and mortality, as well as increasing the number of tobacco users who quit.7 However, as of September 30, 2015, only 27 states had enacted a comprehensive state smoke-free policy that prohibited smoking in workplaces, restaurants, and bars.8 Such delays in translating research findings into public health policy and practice can be attributed to many factors, including political will, leadership attention, workforce skill, resources, and local norms and culture.1,5,9
Performance management offers a set of measurement and accountability tools and techniques that can help close the gaps in implementing evidence-based practice.10,11 In public health, performance management is described as “the practice of actively using performance data to improve the public’s health; this involves the strategic use of performance standards and measures, progress reports, and ongoing quality improvement efforts to ensure that an agency achieves desired results.”12(p463) Performance measurement, a key component of performance management, is generally described as the process of measuring “capacities, processes, or outcomes relevant to the assessment of a performance indicator.”13(p9) The important link between performance measurement and implementation of evidence-based policies and practice was outlined in a 2011 report by the Institute of Medicine (IOM), For the Public’s Health: The Role of Measurement in Action and Accountability. According to the IOM, performance measurement is the primary means of monitoring accountability in the health system.14 In the report, the IOM highlights the importance of performance indicators that measure adoption, reach, and implementation fidelity (the extent to which an intervention is delivered as intended15) of evidence-based programs and policies at the state and local levels.14 An increased focus on evidence-based programs and policies has several benefits, including “access to more and higher quality information on what works, a higher likelihood of successful programs and policies being implemented, greater workforce productivity, and more efficient use of public and private resources.”1(p176)
Acknowledging the importance of measuring policy and practice adoption, reach, and implementation fidelity, as well as building on the growing momentum of performance management and improvement in public health, the Office for State, Tribal, Local and Territorial Support (OSTLTS) at CDC was charged with developing a means to systematically assess states’ implementation of policies and practices designed to address the nation’s leading causes of death and disability. Specifically, CDC leaders requested a focus on 10 public health concerns that (1) represent high-burden challenges, (2) have scientific evidence or expert recommendations to guide prevention, and (3) align with CDC’s programmatic and policy priorities. The 10 public health concerns included the following:
Alcohol-Related Harms
Food Safety
Health Care–Associated Infections
Heart Disease and Stroke
HIV
Motor Vehicle Injuries
Nutrition, Physical Activity, and Obesity
Prescription Drug Overdose
Teen Pregnancy
Tobacco Use
OSTLTS responded to this charge by developing the Prevention Status Reports (PSRs), a national performance measurement system uniquely focused on assessing and monitoring the status of evidence-based policies and practices in all 50 states and the District of Columbia for these public health concerns. The purpose of this article is to describe the design and development of the PSRs and highlight aggregate-level changes in the status of evidence-based policies and practices nationwide.
Methods
Development of the PSR framework
OSTLTS began the development process by engaging 6 state health departments to solicit reactions to the initial PSR concept. Participating state health departments included Alabama, California, Indiana, New York, Oklahoma, and Washington. These states were selected on the basis of the following criteria: geographic location, population size, health department experience in performance improvement, and tenure of the state health official. OSTLTS conducted on-site interviews with state health officials and program leaders in these health departments to discuss the PSR concept and gather feedback on the proposed organizing framework for the system.
Development of the PSR prototype
The form and functionality of the proposed framework were tested through prototyping, an iterative process of product or system development that involves designing, building, testing, and redesigning prior to final development. CDC subject matter experts contributed to the formative stages of prototyping by testing content fit and alignment with the various prototype design features. OSTLTS measurement experts, health communication specialists, and graphic artists used an iterative cycle of development, review, and revision to finalize the PSR prototype. The final PSR prototype was vetted by state health officials and members of the Association of State and Territorial Health Officials (ASTHO).
Development of the PSR indicators and rating criteria
CDC subject matter experts compiled a list of policies (legislative actions including laws, regulations, and executive orders) and practices (nonlegislative approaches such as public health interventions) for inclusion in the PSRs. Policies and practices were considered if they met 1 or more of the following criteria: (1) supported by systematic review(s) of scientific evidence of effectiveness (eg, the Guide to Community Preventive Services); (2) explicitly cited in a national strategy or action plan (eg, Healthy People 2020); or (3) recommended by a recognized expert body, panel, organization, study, or report with an evidence-based focus (eg, IOM). In addition, only those policies and practices that could be monitored using state-level data that are readily available for most states and the District of Columbia were considered. Policies and practices were selected by CDC on the basis of their alignment with established programmatic objectives and the belief that inclusion in the PSRs would support ongoing efforts to improve state adoption and implementation fidelity.
Once the policies and practices were selected, CDC subject matter experts defined performance indicators for those policies and practices. Rating scales reflecting specific criteria for assessing performance were developed for each indicator. A simplified, 3-level rating categorization (green, yellow, and red) was applied to show at a glance the extent to which a state had implemented the policy or practice in accordance with supporting evidence or expert recommendations. CDC subject matter experts defined the rating criteria for the green, yellow, and red categories, taking into account the scientific evidence regarding effectiveness of the policy or practice, expert recommendations regarding implementation of the policy or practice, or the national distribution of state data. All of the indicators and rating scales underwent an iterative process of expert review, editing, and clearance within CDC. To increase transparency, CDC shared the indicators and rating criteria with state health departments to promote awareness and gather feedback. CDC worked directly with stakeholders to reconcile any questions or concerns.
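To make the criteria-based rating concrete, the following is a minimal sketch, in Python, of how a 3-level rating rule for a single indicator might be encoded. The indicator name and the 80/50 cut points are invented for illustration; they are not CDC's actual PSR criteria, which were defined indicator by indicator by subject matter experts from the scientific evidence, expert recommendations, or the national distribution of state data.

```python
from dataclasses import dataclass


@dataclass
class RatingCriteria:
    """Hypothetical 3-level (green/yellow/red) rating criteria for one indicator."""
    indicator: str
    green_min: float   # at or above this value, the state fully meets the recommendation
    yellow_min: float  # at or above this value (but below green_min), it partially meets it

    def rate(self, value: float) -> str:
        """Return the rating category for a single state's value on this indicator."""
        if value >= self.green_min:
            return "green"
        if value >= self.yellow_min:
            return "yellow"
        return "red"


# Invented example: a coverage-style indicator rated on the percentage of the
# population reached. The 80/50 cut points are illustrative only.
example = RatingCriteria("Hypothetical coverage indicator (%)", green_min=80.0, yellow_min=50.0)
print(example.rate(72.5))  # -> "yellow"
```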
Indicator data collection and assessment of rating status
State-level data for each policy and practice indicator were collected from 25 different data sources, 7 of which were unpublished. Although data sources varied across topics and indicators, state-level data to determine rating status were derived from a common data source for each indicator and all states. A technical methodology for applying the rating criteria to each policy and practice indicator was developed and integrated into Excel data workbooks using macros and algorithms. State ratings underwent 3 levels of quality assurance. First, CDC measurement experts reviewed and confirmed the accuracy of each indicator rating algorithm. Next, CDC subject matter experts confirmed the accuracy of each state rating. Finally, the ratings were reviewed by state health departments and partners external to CDC. All discrepancies were investigated by CDC and reconciled prior to publication.
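The following is a minimal sketch, using invented data and thresholds, of the kind of batch rating and quality-assurance logic described above: state values for one indicator are drawn from a common data source, each state is assigned a rating, and records with missing or implausible values are flagged for manual review. It is a hypothetical Python stand-in, not the Excel macros and algorithms or the 3-level review process actually used.

```python
# Hypothetical batch rating of one indicator across jurisdictions, with a simple
# quality-assurance pass that flags records needing manual review. All values and
# cut points are invented.

GREEN_MIN, YELLOW_MIN = 80.0, 50.0  # illustrative cut points only


def rate(value):
    """Assign a green/yellow/red rating to a single state's indicator value."""
    if value >= GREEN_MIN:
        return "green"
    if value >= YELLOW_MIN:
        return "yellow"
    return "red"


# Invented state-level values, assumed to come from a single common data source.
state_values = {
    "Alabama": 62.0,
    "California": 91.5,
    "District of Columbia": None,  # missing value -> flag for review
    "New York": 48.0,
}

ratings = {}
needs_review = []  # discrepancies to investigate and reconcile before publication
for state, value in state_values.items():
    if value is None or not 0.0 <= value <= 100.0:
        needs_review.append(state)
    else:
        ratings[state] = rate(value)

print(ratings)       # {'Alabama': 'yellow', 'California': 'green', 'New York': 'red'}
print(needs_review)  # ['District of Columbia']
```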
Participants and settings
The PSRs were developed for public health professionals, community leaders, and policy makers across all 50 states and the District of Columbia. Public health professionals include executive leaders (eg, health officials, public health directors), division and program managers (eg, chronic disease directors), and program staff (eg, epidemiologists, health educators).
Results
The PSR framework
The input from stakeholders resulted in an organizing framework for the PSRs (see Supplemental Digital Content 1, available at http://links.lww.com/JPHMP/A271). The framework consists of 3 related elements that reflect the purpose of the PSRs and provides the basis for the organizational structure of the performance measurement system.
Performance measurement system
The application of the PSR framework resulted in a national, Web-based performance measurement system (www.cdc.gov/psr) that reports the status of public health policies and practices associated with 10 public health topics. This performance measurement system provides a means of measuring policy and practice adoption, reach, and implementation fidelity by describing the public health problem, monitoring potential solutions (ie, evidence-based and expert-recommended policies and practices), and reporting the status of policy and practice implementation for all 50 states and the District of Columbia. The system is organized by state and by public health topic. Each topic has 2 components: (1) public health problem and (2) solutions and ratings.
Public health problem
The “Public Health Problem” section of each topic introduces the health concern and highlights its impact on the health of Americans. A combination of national and state-specific data is provided to describe the magnitude of the burden in terms of death, morbidity, disability, and economic impact. These data are presented in brief text bullets and line and bar charts. National-level data and associated benchmarks, such as the Healthy People 2020 targets, are included in the charts for comparison. Figure 1 displays sample burden data for the Heart Disease and Stroke topic.
FIGURE 1. Public Health Problem Example: Heart Disease and Stroke
Solutions and ratings
The “Solutions and Ratings” section of each topic outlines the policy and practice indicators (see Supplemental Digital Content 2, available at http://links.lww.com/JPHMP/A272) and state ratings. Ratings are presented using an easy-to-read, 3-level scale—green, yellow, and red—that reflects the extent to which each state (and the District of Columbia) has implemented the policy or practice in accordance with supporting evidence or expert recommendations, providing a means for assessing implementation fidelity for each indicator. (Because the indicators reflect inherently different scales of measurement, including nominal, ordinal, interval, and ratio scales, consistently applying a 3-level rating system is challenging. In 2 instances, the policy and practice indicators did not lend themselves to a 3-level rating scale; in those cases, a simple binary scale of green and red was used.) The state rating includes the assigned rating based on the established criteria, along with a text statement explaining the state’s status in relation to the policy or practice criteria. Figure 2 illustrates the type of information reported for each indicator.
FIGURE 2. Solutions and Ratings Example: One Heart Disease and Stroke Indicator
Publication of the PSRs: Initiating monitoring of state public health policies and practices
Two iterations of the PSRs have been published to date. The first, published in January 2014, included a set of 28 policy and practice indicators and 1385 individual state ratings. For the next iteration of the PSRs, the indicators were reviewed and revised by CDC subject matter and measurement experts to ensure they remained current with evolving scientific evidence. This process resulted in 33 policy and practice indicators and 1627 individual state ratings; these were published in 2016. Table 1 lists the number of 2016 indicators per topic by indicator type.
TABLE 1.
Number of PSR Indicators by Topic and Type of Intervention
| PSR Topic | Policy | Practice | Total |
|---|---|---|---|
| Alcohol-Related Harms | 4 | 0 | 4 |
| Food Safety | 1 | 2 | 3 |
| Health Care–Associated Infections | 0 | 2 | 2 |
| Heart Disease and Stroke | 1 | 1 | 2 |
| HIV | 3 | 1 | 4 |
| Motor Vehicle Injuries | 8 | 0 | 8 |
| Nutrition, Physical Activity, and Obesity | 3 | 1 | 4 |
| Prescription Drug Overdose | 2 | 0 | 2 |
| Teen Pregnancy | 1 | 0 | 1 |
| Tobacco Use | 3 | 0 | 3 |
| All topics | 26 | 7 | 33 |
Abbreviation: PSR, Prevention Status Report.
The PSRs include a suite of materials designed to facilitate their use and meet the needs of a broad and diverse audience. The materials include a national summary providing aggregate information across all topics, 51 state-level summaries providing a snapshot of each state’s ratings across all 10 topics, and 510 topic-specific reports (1 per topic for each state and the District of Columbia).
National performance monitoring: Aggregate results
Along with providing individual states with policy and practice data to inform decision making, the PSRs are also designed to monitor performance and improvement nationally. Aggregate analyses of state ratings show the reach of and fidelity to evidence-based policies and practices at a national level. While implementation fidelity is measured by the rating scales, reach is measured by changes in the distribution of state ratings nationally over time. Increased reach is demonstrated when the number of states achieving a “green” status increases. Table 2 shows the percentages of states rated green, yellow, and red for all indicators combined, by PSR publication year. Of the 33 indicators published in 2016, 19 are comparable with indicators used in the previous iteration, allowing aggregate performance analysis over 2 performance years. Results are presented in Figure 3. Of these 19 indicators, the majority (84%) show positive change or improvement. The proportion of green ratings across states for the 19 comparable indicators increased by 19 percentage points from 2014 to 2016. It is important to note that these improvements cannot be attributed to the PSRs, but the PSRs serve as a useful tool to capture and summarize complex and often elusive data for monitoring national improvement over time.
TABLE 2.
Overall Rating Status by Year
| Rating | Percentage of States, 2014a | Percentage of States, 2016 |
|---|---|---|
| Red | 40.27 | 33.78 |
| Yellow | 25.45 | 27.50 |
| Green | 34.27 | 38.72 |
| Total indicators | N = 28 | N = 33 |

a2014 percentages sum to 99.99 rather than 100 because of rounding.
FIGURE 3. Summary of PSR Ratings. Abbreviation: PSR, Prevention Status Report.
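To illustrate the aggregate analysis, the following is a minimal sketch that computes the national distribution of ratings by publication year and the percentage-point change in green ratings, using a handful of invented rating records. It assumes a simple (state, indicator, year, rating) layout and is not the code used to produce Table 2 or Figure 3.

```python
from collections import Counter

# Invented (state, indicator, year, rating) records; the actual analysis covered
# all 50 states and the District of Columbia across the published indicators.
records = [
    ("Alabama", "hypothetical_indicator", 2014, "red"),
    ("Alabama", "hypothetical_indicator", 2016, "yellow"),
    ("California", "hypothetical_indicator", 2014, "yellow"),
    ("California", "hypothetical_indicator", 2016, "green"),
    ("New York", "hypothetical_indicator", 2014, "green"),
    ("New York", "hypothetical_indicator", 2016, "green"),
]


def rating_distribution(year):
    """Percentage of all state ratings in each category for one publication year."""
    counts = Counter(rating for (_, _, y, rating) in records if y == year)
    total = sum(counts.values())
    return {color: 100.0 * counts[color] / total for color in ("red", "yellow", "green")}


dist_2014 = rating_distribution(2014)
dist_2016 = rating_distribution(2016)
green_change = dist_2016["green"] - dist_2014["green"]  # percentage-point change
print(dist_2014)
print(dist_2016)
print(f"Green ratings changed by {green_change:+.1f} percentage points")
```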
Limitations
Although the indicator ratings in the PSRs provide a useful means of communicating the status of evidence-based policies and practices in states and for monitoring changes nationally over time, the system is subject to several limitations. The indicator ratings reflect the observed status of policies and practices in the state at a point in time and cannot explain the conditions that resulted in the observed status. The status might be the result of a complex array of circumstances within a state, such as population demographics, public health system resources and capacity, and other factors posed by the social or political environment. By design, the PSRs do not provide this type of contextual information. While state health departments are charged with protecting and promoting the health and safety of their populations, many of the contextual factors that affect population health are outside their direct control and influence. Therefore, accountability for a state’s PSR rating varies and includes public and private sector entities beyond health departments. In many cases, a “red” or “yellow” rating represents an opportunity for improvement that would best be achieved by collaborative action from multiple sectors across the state. For these reasons, the PSR ratings should not be interpreted as reflecting the status of the efforts of state health departments or other individual organizations.
Furthermore, ongoing changes and developments in the scientific evidence base present another limitation to the PSRs. As scientific research leads to new evidence and recommendations concerning effective public health policies and practices, the indicators and rating criteria in the PSRs must change accordingly. As such, indicators and rating criteria are adjusted over time on the basis of case-by-case examination of the science. This situation presents challenges for monitoring indicator rating changes over time and also prohibits use of a system-wide, performance-driven approach to adjusting rating criteria.
Finally, the PSRs are focused on states and do not include local data. Although there is expressed interest and obvious value in expanding the PSRs to report the status of local policies and practices, several barriers to developing PSRs for localities exist, including a lack of local-level policy and practice implementation data and resource constraints.
Discussion
By design and function, the PSRs fill a unique gap in public health performance management. While much of CDC’s data collection and reporting is devoted to surveillance of health risk behaviors (eg, the Behavioral Risk Factor Surveillance System) and outcomes (eg, the National Vital Statistics System), the PSRs provide much-needed information about the policies and practices that influence health risk behaviors and outcomes. Likewise, in contrast to “dashboards” or “scorecards” designed to rank and compare states (eg, Trust for America’s Health report, Investing in America’s Health: A State-by-State Look at Public Health Funding and Key Health Facts16), the PSRs are intentionally designed to stimulate examination and discussion of the observed status of policies and practices within the specific and unique context of an individual state or the District of Columbia. These design features are important for performance improvement because, as noted by the IOM, measuring health outcomes, although necessary, is less useful for accountability purposes. Agencies and organizations have more influence over the proximal intervention efforts (ie, policies and practices) to address public health problems than the more distal public health problems reflected by outcome measures, which are often influenced by many other confounding factors.
As a performance monitoring and improvement tool focused on adoption, reach, and implementation fidelity of evidence-based or expert-recommended policies and practices, the PSRs have value and utility at the state, local, and national levels. At the state level, the PSRs facilitate the translation of research to practice. With practitioners in mind, the reports use plain language and are formatted for ease of use. State health department leaders and program staff report using the PSRs to foster dialogue with a variety of their stakeholders (including legislators and partners) and inform their agencies’ assessment, priority-setting, and planning processes. In keeping with the intent of the PSRs, state-level stakeholders indicate that they are using the PSRs to improve implementation fidelity of existing policies and practices, as well as influence the development of new programs and policy initiatives, with the intent of improving their PSR ratings and ultimately reducing public health problems. Finally, as a tool to support local-level assessment, local health department staff indicate that the PSRs are helpful for comparing local data with state burden data and ratings.
The many uses of the PSRs by state and local stakeholders also align well with the expectations of the national accreditation standards established by the Public Health Accreditation Board. While there are connections across most standards and domains, there are particularly strong connections with requirements for state health assessment and health improvement planning, performance management, policy development, and use of evidence-based practices. As part of CDC’s training and technical assistance activities, CDC staff highlight the connections between the PSRs and accreditation to maximize the use of the PSRs as a resource in health departments’ accreditation efforts.
At the national level, the PSRs provide a unique set of data that can be used to monitor the reach of evidence-based policies and practices by tracking adoption across all states over time. For example, the National Conference of State Legislatures uses the PSRs as a source for developing snapshots, or “postcards,” highlighting where states stand in addressing current and emerging public health problems. The postcards include a brief introduction to the problem and a summary of state action, including legislation and program services developed to address the problem. Furthermore, CDC leaders use the PSRs to monitor progress in agency priorities, educate policy makers about effective interventions to address the leading causes of death and disability, and communicate with state health officials and other public health leaders to highlight improvement opportunities. In addition, many of the PSR indicators are used within CDC as performance measures for national programs. CDC project officers, with responsibility for administering CDC-funded grants and cooperative agreements, have received training about the PSRs and are encouraged to use them as a resource when delivering technical assistance to awardees. Relatedly, the PSRs align with and support elements required by CDC funding opportunity announcements, including the need for applicants to describe public health issues in their jurisdictions, consider evidence-based practices or policies for implementation, and adhere to awardee administrative requirements (eg, tobacco and nutrition policies). As such, the PSRs are a valuable resource to health departments applying for federal funds.
Implications for Policy & Practice.
- A key challenge in public health is ensuring the implementation of evidence-based public health policies and practices.
- While many existing indicator reports focus on health risk behaviors, health outcomes, or other systems-related measures (eg, public health funding), the PSRs are designed to focus attention on what works to improve public health.
- The PSRs monitor state implementation of evidence-based policies and practices with the intent of facilitating efforts to improve performance.
- This approach is similar to traditional public health surveillance, which involves monitoring health risk factors, diseases, and health outcomes, but the PSRs also provide information about state-level policies and practices that is often inaccessible through traditional health monitoring approaches.
- The PSRs are a unique part of CDC’s work to improve the performance and accountability of the public health system, serving as both a monitoring tool and a call to action to improve health outcomes.
Acknowledgments
The authors acknowledge technical support and assistance from subject matter experts in CDC’s National Center on Birth Defects and Developmental Disabilities; National Center for Chronic Disease Prevention and Health Promotion; National Center for Emerging and Zoonotic Infectious Diseases; National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention; National Center for Injury Prevention and Control; and the CDC Performance Advisory and Accountability Committee. The authors thank state and senior deputy health officials and program staff in health departments of all 50 states and the District of Columbia for their input and review of the Prevention Status Reports (PSRs). The authors also thank the Association of State and Territorial Health Officials (ASTHO) and the ASTHO Senior Deputies Committee for its feedback during the PSR development process.
The findings and conclusions in this article are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
Footnotes
The authors declare no conflicts of interest.
Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site (http://www.JPHMP.com).
References
1. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201. doi:10.1146/annurev.publhealth.031308.100134.
2. Committee on Assuring the Health of the Public in the 21st Century. The Future of the Public’s Health in the 21st Century. Washington, DC: Institute of Medicine; 2003.
3. US Department of Health and Human Services. Introducing Healthy People 2020. www.healthypeople.gov/2020/About-Healthy-People. Published 2012. Accessed October 14, 2015.
4. Community Preventive Services Task Force. Guide to Community Preventive Services. www.thecommunityguide.org/about/guide.html. Accessed October 14, 2015.
5. Public Health Accreditation Board. Welcome to PHAB. www.phaboard.org. Accessed October 14, 2015.
6. Brownson RC, Chriqui JF, Stamatakis KA. Understanding evidence-based public health policy. Am J Public Health. 2009;99(9):1576–1583. doi:10.2105/AJPH.2008.156224.
7. Community Preventive Services Task Force. Guide to Community Preventive Services. Reducing tobacco use and secondhand smoke exposure: smoke-free policies. www.thecommunityguide.org/tobacco/smokefreepolicies.html. Accessed April 14, 2016.
8. Centers for Disease Control and Prevention. Prevention Status Reports: National Summary. Atlanta, GA: US Department of Health and Human Services; 2016. http://www.cdc.gov/psr/docs/psr-2015-national-summary-table.pdf. Accessed April 14, 2016.
9. Rychetnik L, Bauman A, Laws R, et al. Translating research for evidence-based public health: key concepts and future directions. J Epidemiol Community Health. 2012;66:1187–1192. doi:10.1136/jech-2011-200038.
10. Cash SJ, Ingram SD, Biben DS, et al. Moving forward without looking back: performance management systems as real-time evidence-based practice tools. Child Youth Serv Rev. 2012;34:655–659.
11. DeGroff A, Schooley M, Chapel T, et al. Challenges and strategies in applying performance measurement to federal public health programs. Eval Program Plann. 2010;33:365–372. doi:10.1016/j.evalprogplan.2010.02.003.
12. DeAngelo JW, Beitsch LM, Beaudry ML, et al. Turning point revisited: launching the next generation of performance management in public health. J Public Health Manag Pract. 2014;20(5):463–471. doi:10.1097/PHH.0000000000000028.
13. Lichiello P, Turnock BJ. Guidebook for performance measurement. http://www.phf.org/resourcestools/documents/pmcguidebook.pdf. Accessed August 26, 2016.
14. Institute of Medicine. For the Public’s Health: The Role of Measurement in Action and Accountability. Washington, DC: National Academies Press; 2011.
15. Century J, Rudnick M, Freeman C. A framework for measuring fidelity of implementation: a foundation for shared language and accumulation of knowledge. Am J Eval. 2010;31(2):199–218.
16. Hamburg R, Segal LM, Martin A. Investing in America’s health: a state-by-state look at public health funding and key health facts. http://healthyamericans.org/assets/files/TFAH2013InvstgAmrcsHlth05%20FINAL.pdf. Published April 2013. Accessed April 26, 2016.