American Journal of Public Health
Editorial
2016 Sep;106(Suppl 1):S22–S24. doi: 10.2105/AJPH.2016.303359

Establishing an Evaluation Technical Assistance Contract to Support Studies in Meeting the US Department of Health and Human Services Evidence Standards

Russell P. Cole, Susan Goerlich Zief, Jean Knab
PMCID: PMC5049470; PMID: 27689485

This special issue highlights the results of the Office of Adolescent Health’s (OAH) substantial investment in rigorous evaluations of teen pregnancy prevention (TPP) programs. Through a two-tiered funding strategy, OAH awarded cooperative agreements to 94 grantees to replicate programs deemed evidence-based by the US Department of Health and Human Services’ (HHS) TPP evidence review (tier 1) or to implement promising and innovative TPP programs that did not yet have evidence of effectiveness (tier 2). In addition, the Family and Youth Services Bureau, under the Administration for Children and Families (ACF) at HHS, funded 13 cooperative agreements to implement promising programs through the Personal Responsibility Education Program Innovative Strategies program. A subset of the cooperative agreements required grantees to evaluate the effectiveness of their funded programs through random assignment or quasi-experimental impact studies led by independent evaluators. The goal of this investment in evaluation was to infuse the field with dozens of new, internally valid studies whose evidence would meet the rigorous research standards established by the HHS TPP evidence review1 and would inform the field of public health. These new findings would be used to further understand the effectiveness of evidence-based programs when implemented in different contexts and for different populations, and potentially to identify new, effective programs.

OAH’s investment is part of a larger federal effort to use and create evidence through tiered-evidence grant programs.2 As the government encourages and incentivizes rigorous evaluations,3 some large-scale federal grant programs provide evaluation technical assistance (TA) for their grantee-led evaluations, including the Investing in Innovation (i3) grants, administered by the US Department of Education, and the Workforce Innovation Fund grants, administered by the Employment and Training Administration within the US Department of Labor. To support grantees in producing credible evidence of program effectiveness, OAH (with support from ACF) funded Mathematica Policy Research and its subcontractors to serve as the evaluation TA contractor for the OAH and ACF grantee-led rigorous evaluations that were not part of federally led evaluations, with oversight by the OAH evaluation specialist.

Most federal evaluation TA efforts, OAH’s included, are structured around meeting a very specific goal: that completed studies meet a particular set of evidence standards for their field. For OAH, these are the HHS TPP evidence review standards, described below. These TA efforts have broader goals as well. According to Gibbs et al.,4 evaluation TA on individual studies can more broadly improve the field’s capacity to conduct evaluations. For example, the OAH evaluation TA effort can help the grantee organizations and their partner evaluators lead and produce future evaluations that will meet rigorous evidence review standards, regardless of whether OAH is supporting them. Increasing the rigor of research in the field should lead to a better understanding of what works in the field of public health.

HHS EVIDENCE STANDARDS

The HHS TPP evidence review assesses the credibility of the evidence for programs aiming to reduce adolescent pregnancies, sexually transmitted infections, and sexual risk behaviors. The review proceeds in two steps: first, it systematically assesses the quality of the evidence from a study; second, for the subset of studies deemed to provide credible evidence, it describes the effectiveness of the program the study examined.

The assessment of the evidence places studies into a high-, moderate-, or low-quality evidence category. This categorical assessment helps differentiate the trustworthiness, or internal validity, of the evidence generated from a study. The review process examines features of the study design and evaluation implementation (for example, well-implemented randomized controlled trials are eligible for the high-quality evidence rating, but quasi-experimental designs are eligible only for a moderate rating because of the potential threats to internal validity associated with the design). The review process also takes into account other threats to internal validity, such as sample attrition (nonresponse), baseline nonequivalence, and factors that confound an assessment of program impacts. Illustrative examples of study categorization are as follows (a brief code sketch after the list shows how these criteria combine):

  • High quality: Randomized controlled trials with low levels of sample attrition, and statistical controls for any baseline nonequivalence.

  • Moderate quality: Randomized controlled trials with high attrition, or quasi-experimental designs, in which baseline equivalence is demonstrated.

  • Low quality: Randomized controlled trials with high attrition, or quasi-experimental designs, in which baseline equivalence is not demonstrated.
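
Read together, these categories amount to a simple decision rule. The Python sketch below is a minimal illustration of how the design, attrition, and equivalence criteria combine; the attrition cutoff and the boolean inputs are hypothetical simplifications (the review protocol1 defines the actual boundaries), not the official rating algorithm.

    from dataclasses import dataclass

    @dataclass
    class Study:
        randomized: bool           # True for an RCT; False for a quasi-experimental design
        attrition: float           # overall sample attrition, as a proportion (0.0 to 1.0)
        baseline_equivalent: bool  # groups equivalent at baseline, or nonequivalence
                                   # statistically controlled

    # Hypothetical cutoff separating "low" from "high" attrition; the review
    # protocol defines the actual attrition boundaries.
    LOW_ATTRITION_CUTOFF = 0.30

    def quality_rating(study: Study) -> str:
        """Return 'high', 'moderate', or 'low' per the logic described above."""
        if (study.randomized
                and study.attrition <= LOW_ATTRITION_CUTOFF
                and study.baseline_equivalent):
            return "high"      # well-implemented RCT
        if study.baseline_equivalent:
            return "moderate"  # high-attrition RCT or QED with demonstrated equivalence
        return "low"           # equivalence not demonstrated

    # A quasi-experimental study that demonstrates baseline equivalence
    print(quality_rating(Study(randomized=False, attrition=0.10,
                               baseline_equivalent=True)))  # -> moderate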

Studies with a high or moderate rating are considered to have internally valid evidence and are eligible for an assessment of program effectiveness. If a study with a high or moderate evidence rating shows a statistically significant, favorable impact of a program on a sexual behavior outcome, the evidence review labels the program as having evidence of effectiveness. A current list of the 44 programs deemed effective by the evidence review, as well as a fuller description of the study eligibility and review criteria, is available at http://tppevidencereview.aspe.hhs.gov.
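
The second step can be sketched just as compactly. In the snippet below, a program earns the effectiveness label only if the study’s rating clears the high/moderate bar and at least one favorable impact on a sexual behavior outcome is statistically significant; the .05 threshold is this sketch’s assumption, not a detail drawn from the review protocol.

    from typing import List, Tuple

    def evidence_of_effectiveness(rating: str,
                                  findings: List[Tuple[bool, float]]) -> bool:
        """findings holds (favorable, p_value) pairs, one per sexual
        behavior outcome reported by the study."""
        if rating not in ("high", "moderate"):
            return False  # low-rated studies are not eligible for this step
        return any(favorable and p_value < 0.05
                   for favorable, p_value in findings)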

Notably, similar to other federal systematic review efforts, the HHS TPP evidence review focuses on the internal validity of the evidence: whether the observed impact can be causally attributed to the program being tested. The evidence review does not assess the extent to which evidence from any one study is generalizable to other populations and settings. Most of the studies conducted to date have been implemented in single geographic areas, and the samples are not considered representative of a larger population. In reporting findings, the HHS TPP evidence review does describe the population and setting of each study, allowing users of the evidence review findings to identify relevant populations and settings.

LEARNING FROM A LARGE-SCALE TA EFFORT

Two related editorials describe the role that evaluation TA played in assisting grantee efforts to meet the HHS evidence standards. Specifically, Zief et al.5 describe how the evaluation TA was structured to support more than 40 grantee evaluations (both OAH and ACF grants) throughout the funding period, from initial study design through final reports, in an effort to enable studies to meet evidence standards. Knab et al.6 describe the primary challenges germane to this particular evaluation TA effort, how OAH and the evaluation TA team addressed them, and the implications and lessons learned for future evaluation TA efforts. Together, these three editorials, along with Margolis and Roper,7 provide context for how funders or stakeholders can support and sustain grantee-led evaluation efforts through evaluation TA.

While this large-scale effort was costly for OAH, by most accounts the TA was a success. The final reports are credible, internally valid presentations of the effects of the programs, although several have limited external validity (generalizability). All of the final evaluation reports submitted to OAH are expected to meet HHS evidence standards and to be rated as having high- or moderate-quality evidence. The evaluation capacity of the field has been strengthened by the participation of these public health grantees and evaluators in this effort.

ACKNOWLEDGMENTS

This work was conducted under a contract (HHSP233201300416G) with the Office of Adolescent Health, within the Department of Health and Human Services (HHS).

REFERENCES

1. US Department of Health and Human Services. Identifying programs that impact teen pregnancy, sexually transmitted infections, and associated sexual risk behaviors. Review Protocol, Version 4. Available at: http://tppevidencereview.aspe.hhs.gov. Accessed February 22, 2016.
2. Council of Economic Advisers. Economic Report of the President. Washington, DC: Council of Economic Advisers; 2014.
3. US Government Accountability Office. Program evaluation: strategies to facilitate agencies’ use of evaluation in program management and policy making. GAO Publication No. 13-570. Washington, DC: Government Printing Office; June 2013.
4. Gibbs DA, Hawkins SR, Clinton-Sherrod AM, Noonan RK. Empowering programs with evaluation technical assistance: outcomes and lessons learned. Health Promot Pract. 2009;10(1):38S–44S. doi: 10.1177/1524839908316517.
5. Zief SG, Knab J, Cole RP. A framework for evaluation technical assistance. Am J Public Health. 2016;106(suppl 1):S24–S26. doi: 10.2105/AJPH.2016.303365.
6. Knab J, Cole RP, Zief SG. Challenges and lessons learned from providing large-scale evaluation technical assistance to build the adolescent pregnancy evidence base. Am J Public Health. 2016;106(suppl 1):S26–S28. doi: 10.2105/AJPH.2016.303358.
7. Margolis AL, Roper A. Practical experience from the Office of Adolescent Health’s large scale implementation of an evidence-based teen pregnancy prevention program. J Adolesc Health. 2014;54:S10–S14. doi: 10.1016/j.jadohealth.2013.11.026.
