American Journal of Public Health. 2019 Jan;109(Suppl 1):S34–S40. doi: 10.2105/AJPH.2018.304808

The Importance of Evaluating Health Disparities Research

Bruce A Dye 1, Deborah G Duran 1, David M Murray 1, John W Creswell 1, Patrick Richard 1, Tilda Farhat 1, Nancy Breen 1, Michael M Engelgau 1
PMCID: PMC6356135  PMID: 30699014

Abstract

Health disparity populations are socially disadvantaged, and the multiple levels of discrimination they often experience mean that their characteristics and attributes differ from those of the mainstream. Programs and policies targeted at reducing health disparities or improving minority health must consider these differences.

Despite the importance of evaluating health disparities research to produce high-quality data that can guide decision-making, such evaluation is not yet customary practice. Although health disparities evaluations incorporate the same scientific methods as all evaluations, they have unique components, such as population characteristics, sociocultural context, and the lack of common health disparity indicators and metrics, that must be considered in every phase of the research.

This article describes evaluation strategies grouped into 3 components: formative (needs assessments and process), design and methodology (multilevel designs used in real-world settings), and summative (outcomes, impacts, and cost). Each section will describe the standards for each component, discuss the unique health disparity aspects, and provide strategies from the National Institute on Minority Health and Health Disparities Metrics and Measures Visioning Workshop (April 2016) to advance the evaluation of health disparities research.


Disparities in health among different populations account for substantial preventable morbidity and mortality, both nationally and internationally. Research in this area mostly has focused on identifying differences in health and on improving our knowledge of factors that contribute to disparities, with the hope of discovering interventions that will reduce them. Public Law 106-525(d) mandates that health disparities populations include minority groups, as defined by the Office of Management and Budget, as well as populations in rural locations, those of low socioeconomic status, and sexual or gender minorities.1 Increasingly, research funders are emphasizing that the relevance of minority health and health disparities programs and public policies must be demonstrated to promote good population health and to justify program investments.2,3 However, health research is expensive, and public expenditures supporting health research and health care are receiving increasingly close scrutiny. This heightened level of attention and debate requires that research identify not only health benefits but also social and economic impacts, which in turn requires methodologies based on sound evaluation techniques.

During April 2016, the National Institute on Minority Health and Health Disparities (NIMHD) convened a workshop on Methods and Measurement Science for Health Disparities to help inform strategic planning for the Institute. The evaluation of health disparities research was an important topic of the visioning workshop. Invited experts agreed that few evaluations were conducted as part of health disparities research, and this needed to be rectified. A conclusion from this workshop was that incorporating evaluation in health disparities research is needed to amass a rich evidence-based portfolio of interventions that can inform policymaking activities, as well as to justify program investments and substantiate their utility.

Health disparity populations and health disparities research present unique challenges to traditional evaluation because of the population characteristics, sociocultural context, and nascence of the field. Differences between health disparity populations and the “mainstream population” often result from social disadvantage, experience of discrimination through “isms” (e.g., sexism or racism), unique cultural practices, enclaves of living, and difficulty with established societal structures and bureaucracies. In addition, differences often depend on the degree of acculturation of the health disparities population into mainstream society.4 Therefore, evaluations must consider related population characteristics and sociocultural environments, but these considerations challenge the use of traditional or mainstream interpretations of standard evaluation techniques because of the implications of being a socially disadvantaged population.

A paramount consideration in health disparities research evaluations is to do no harm to the populations involved. Mainstream biases can lead to misinterpretation of results and questions about the quality and efficiency of a process, product, or outcome.5 Such effects can cause harm to the study population. For example, in a condom use study, Hispanic adolescents were declared less likely to use condoms during sexual activity, even though an evaluation of the project concluded that the adolescents did not use condoms because they were not having sex.6 The evaluation did not acknowledge the harm of labeling the adolescents as promiscuous. Population and sociocultural context must be at the forefront of every stage of the evaluation to ensure that appropriate results are given to decision-makers.

The field of health disparities research is maturing into a discipline, especially with the emergence of new definitions and better understanding of health determinants and relevant outcomes.6 Using findings from multiple disciplines, investigators are showing how health determinants contribute to health disparities at multiple domains of influence, with effects ranging from the individual level to the systemic. Examining the relationship of these factors has become the focus of health disparity etiology and intervention research. However, appropriate methodologies for evaluation in applied or real-world settings have not been commonly implemented and must be adopted.

Rossi et al. have defined program evaluation as the systematic application of scientific methods to assess the need, implementation, design, outcomes, and costs of a program.7 Key steps of an effective evaluation originate from their evaluation hierarchy, a well-known conceptual tool that identifies 5 sequential stages of a project. The 5 levels are the evaluation of project

  1. need,
  2. implementation and process,
  3. design or theory,
  4. outcomes and impacts, and
  5. costs and efficiency.

These evaluation levels assess distinct phases of the program to provide appropriate information to decision-makers to improve, sustain, adapt, or end the project. In this article, we organize these 5 levels into 3 components: (1) formative, (2) design and methodology, and (3) summative. The following sections describe these evaluation components, the unique aspects that must be considered in health disparity research evaluations, and recommendations from the NIMHD Methods and Measurement Science visioning workshop.

FORMATIVE EVALUATIONS—GETTING IT RIGHT

Formative evaluation collects information to (1) determine need; (2) define purpose and questions to be answered; (3) establish process, develop the data collection plan, identify the information sources, and specify the instruments; and (4) form a feedback approach for program improvement.8 The information can be qualitative, quantitative, or both, and the methods of analysis can be descriptive or comparative.9 These types of evaluations are the first step in assessing health disparity research to help determine whether the research is appropriate for the population and sociocultural environment.10 A needs assessment determines the program’s requirements relative to the population and environment and identifies potential gaps to be addressed. This step must ensure that the evaluation of health disparities research includes population characteristics and unique sociocultural constructs that often are not known to external researchers or are different from mainstream settings.11 For example, a community health needs assessment is vital for identifying the health concerns of communities, the factors that influence their health, and the assets, resources, and challenges that affect those factors. Once these are identified, meaningful research can be conducted and evaluated to address health disparities locally and respond to longstanding historical health inequities at the health-system level.6

Needs assessments provide a fact-based way to determine gap areas and avoid assumptions of requirements based on mainstream standards. They also help identify and capitalize on the strengths of the health disparity population. Strength-based assessments enable communities to be seen in light of their abilities, talents, competences, possibilities, visions, values, and hopes.12 They assume communities have the capacity to do their best, grow, and change.13 Both needs- and strengths-based assessments of health disparity populations should be mindful of multilevel social context and physical environments, such as system policies, laws, geographic location, existence of food deserts or environmental toxins, lack of resources or green space, and extent of crime. These external factors often are not considered in program evaluations but can have a substantial effect on the involvement of participants and in the success of health disparity interventions.14 Associated impacts often are subtle and missed, which may result in the health disparity population being blamed for inadequate implementation, poor outcomes, or both.

An implementation or process evaluation can be used to examine the fidelity of the implementation as it was planned to ensure the program is being conducted correctly to achieve the goals.15 This information documents the tools, instruments, processes, and outputs of the ongoing program and explores the relationship among design, implementation, and outcomes. It is applied at a time when changes can be made to ensure that the program is moving in the right direction to achieve its goals and enables stakeholders to see how a program outcome or impact was achieved.16

Population characteristics and sociocultural context must be considered in every implementation decision. Instruments should be assessed for language, literacy, and cultural appropriateness for the population(s) evaluated. For example, research on the Latino conceptualization of depression suggests that current instruments may not be adequate for Latinos because feelings, such as “blue,” do not translate directly into Spanish, and the Latino concept of depression differs from that expressed by Whites.17

To determine the best approach in assessing need and implementation, unique aspects relevant to the health disparity populations under study must be considered. These populations may be discriminated against, be underrepresented in research, lack sufficient resources, require time to identify and engage stakeholders, present unique psychosociocultural values, experience external assumptions of need when need is not perceived internally, project fear of the misuse of their personal information, and not have the knowledge or tools to combat system discriminations.18,19 These experiences and perceptions differ from those experienced and expressed by mainstream populations, influence behaviors and attitudes, and frequently take considerable time to manage in real-world applied community research.20 These factors must weigh into every aspect of the process evaluation so that decision-makers receive useful information and harm is not generated by misinterpretation through using a mainstream framework. For example, in a study on raising healthy children, researchers found that validated family-based behavioral interventions are an underused approach that has been found to significantly prevent excess weight gain and obesity in children and adolescents in minority populations, including Latinos, African Americans, and American Indians. Researchers acknowledged the specific needs of these communities in preparation of the study protocol before it was tested in the community, and the intervention was very successful.21

Researchers outside of a community may not know or understand the nuances of the culture, life experiences, system discrimination, and protective coping skills embodied by the population. These researchers may never penetrate the community defenses to engage the population in research or to conduct evaluations. Without gatekeeper and stakeholder engagement, important population and sociocultural elements of an evaluation may be underappreciated or missed, and important challenges may not be addressed as well as harmful misinterpretations generated.22 For instance, researchers assumed that African American women were not depressed and did not experience pain because they immediately returned to work after breast cancer surgery. Evaluation of the project showed that an African American woman recovering from a mastectomy may go back to work and take care of her children immediately after surgery because she has no sick leave or child care, not because she is not depressed or is not in pain.23

It is suggested that researchers partner with community gatekeepers and stakeholders to interact with the community and to ensure that the research and evaluation are designed and implemented appropriately for the population. Thus, when working in disadvantaged communities where health disparities are more common, interacting with a variety of groups within the community from the outset is important. Involving local gatekeepers to identify population characteristics, the problem, the goals of the evaluation, and the resources needed to accomplish those goals is the first step in developing promising implementation strategies that actively engage community members as participants and yield relevant information for decision-makers.24 These gatekeepers are key in providing feedback to the community and eliciting a sense of value for participating in research. Although gatekeepers often are not researchers, most can understand and participate in designing the methodologies that will best secure participants and assess the research findings. These partnerships are key in conducting and evaluating research in health disparity communities.25

Formative evaluations foster relevant planning and implementation, as well as set the foundation for establishing the appropriate study design. The box on page S37 presents strategies for formative evaluations to facilitate success.

Strategies for Formative Evaluations.

Evaluate research during the development, implementation, and process phases to determine the best approach, modifications, or progress that are inclusive and reflective of the unique aspects of health disparity populations in real-world settings.

  • Conduct a needs assessment to ensure the research and the hypotheses tested address health disparity topics relevant to the needs and context of the health disparity population.

  • Foster strength-based assessments to draw upon the assets of the population to reduce health disparities.

  • Evaluate processes to ensure that the appropriate practices, protocols, and tools are being implemented to facilitate scientifically valid results relevant to the health disparity population (e.g., instruments, policies, timing, training, data collection, reliability of the practices, and adherence to the design).

  • Ensure a feedback mechanism to improve the project, identify components that were effective, and ensure that the design and outcomes are meaningful to the health disparity population.

DESIGN AND METHODOLOGY EVALUATIONS—DOING IT RIGHT

Health disparities research includes both etiological research and interventional research. The former seeks to understand the causes of health disparities, while the latter seeks to reduce them. Both need to be informed by sound methods, including evaluation. A separate article in this issue by Jeffries et al. (p. S28–S33) addresses methodological approaches to understanding the etiology of health disparities. Noting that potential causes are often correlated, occur at multiple levels, and may involve feedback loops, Jeffries et al. focus on design and analytic methods for causal inference from observational studies, discuss complex systems and simulation methods for modeling dynamic relations, and describe qualitative and mixed methods in addition to quantitative methods. Qualitative methods foster in-depth understanding of the culture of minority and vulnerable populations, support needs and resources, identify intervention preferences, and assess satisfaction with intervention strategies and impacts. Although labor-intensive and focused on individual voices and perspectives, qualitative methods can be combined with quantitative data to provide greater insight than either quantitative or qualitative methods alone. In a research program addressing low-income and elderly people in Canada, the use of mixed methods enhanced assessment of the mediating impacts of social support on the health of vulnerable populations and enabled the design and testing of support interventions.26

Interventions often operate at multiple levels and in a community or other real-world setting that is dynamic and heterogeneous with regard to race, gender, education, poverty, risk behaviors, and physical environments. That makes it difficult to use randomized controlled trials for their evaluation, even though randomized controlled trials are widely considered to provide the strongest evidence for causal inference. Randomized controlled trials are best used when the researcher has considerable control over all aspects of the study, including the criteria for participation and recruitment of participants, the selection and timing of measurements, the delivery of the intervention, and the analysis of the data. Such control provides good internal validity, maximizing the opportunity for an unbiased result.27–29 However, investigators may not be able to achieve the randomized controlled trial level of control for evaluations of interventions designed to reduce health disparities and instead may have a strong interest in achieving a high level of external validity.

Fortunately, a number of alternative methods are available and appropriate for the evaluation of multilevel and community-based intervention programs that are designed to reduce health disparities. The suggested designs that follow may account for complications, such as the challenges of minority recruitment, lack of sufficient research funds, interactions of multilevel health determinants, and the extended duration of community-based projects in real-world settings, as well as provide the needed rigor.

Group- or cluster-randomized trials (GRTs)30,31 can be just as rigorous as randomized controlled trials but are designed specifically for interventions that operate at a group level, change the physical or social environment, or cannot be delivered to individuals without substantial risk of contamination. For example, neighborhoods or whole communities may be randomized to study arms, allowing a multilevel intervention to be delivered to the neighborhoods or specific sites within communities that are randomized to the intervention program.32,33 Group- or cluster-randomized trials are the strongest design option if randomization is possible in conjunction with a no-treatment, delayed-treatment, or alternative-treatment control arm.
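As a minimal illustration of the group-level randomization a GRT relies on, the sketch below randomizes hypothetical neighborhoods to an intervention arm and a delayed-treatment control arm. All names, counts, and the 6-and-6 split are invented for illustration and are not from the article.

```python
import random

# Hypothetical communities; names and counts are illustrative only.
random.seed(42)
communities = [f"neighborhood_{i}" for i in range(1, 13)]

# Randomize whole communities (not individuals) to study arms,
# reducing the risk of contamination between arms.
shuffled = random.sample(communities, k=len(communities))
arms = {
    "intervention": sorted(shuffled[:6]),
    "delayed_treatment_control": sorted(shuffled[6:]),
}
print(arms)
```

Because the unit of assignment is the community rather than the individual, any analysis would need to account for within-community correlation (e.g., with mixed models).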

Stepped-wedge designs34,35 also are well suited to test the effectiveness of multilevel or community-based intervention programs. In this design, the groups to be randomized are measured, and 1 or more groups are selected at random to receive the intervention. After an interval that is long enough to assess the intervention effect, measurements are repeated, and 1 or more groups are selected at random from those remaining to receive the intervention. This process continues until all groups have received the intervention and a final round of measurements is completed. The intervention effect is calculated by using the control and treatment observations taken in each group.

One of the primary advantages of this design is that all groups receive the intervention before the end of the study, a feature that will be important to many individuals and communities considering participation in an intervention to reduce health disparities, and this may make it easier to persuade both individuals and community groups to participate. Another advantage is that the design will usually be more efficient than the standard GRT, often requiring fewer groups. The disadvantages are that more measurement occasions will be required, perhaps increasing cost, and the study will usually take longer to complete than the standard parallel GRT, creating more opportunity for unexpected problems and delaying the final evaluation of the utility of the intervention.
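The stepped-wedge rollout described above can be sketched as a schedule of measurement occasions. The group names, the number of groups, and the one-group-per-step crossover below are hypothetical choices for illustration only.

```python
import random

# 6 hypothetical community groups cross over from control to
# intervention one group per step, in randomized order.
random.seed(1)
groups = [f"community_{i}" for i in range(1, 7)]
order = random.sample(groups, k=len(groups))  # randomized crossover order

# One measurement occasion before any crossover, then one after each step.
schedule = []
for step in range(len(order) + 1):
    on_intervention = set(order[:step])
    schedule.append({g: ("intervention" if g in on_intervention else "control")
                     for g in groups})

# By the final occasion, every group has received the intervention,
# and each group contributes both control and intervention observations.
assert all(cond == "intervention" for cond in schedule[-1].values())
print(f"{len(schedule)} measurement occasions for {len(groups)} groups")
```

The assertion at the end reflects the design's key selling point: no group is left as a permanent control.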

The regression discontinuity design (RDD) is an attractive alternative when randomization is not possible.28,29,36,37 In the RDD, participants are allocated to study arms on the basis of a quantitative score, with all participants on one side of a predetermined cutpoint receiving the intervention and all participants on the other side of that cutpoint serving as controls. Accurate modeling of the relationship between the assignment variable and the outcome in the primary analysis provides strong evidence for causal inference.38 The major advantages of the RDD are that randomization is not required, and the intervention can be given to the participants with the greatest need. Such advantages would likely be attractive to participants considering a study to evaluate an intervention to reduce health disparities. The disadvantage is that twice as many participants are required for the same power available in a randomized controlled trial. Even so, the RDD has great potential in studies in which political or cultural reasons make randomization difficult.
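A minimal simulated sketch of the RDD allocation rule and primary analysis described above follows; the needs score, cutpoint, and effect size are invented for illustration, and the analysis is a simple linear fit rather than a full specification check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a quantitative needs score determines allocation;
# participants below the cutpoint (greatest need) receive the intervention.
n = 4000
score = rng.uniform(0, 100, n)               # assignment variable
cutpoint = 50.0
treated = (score < cutpoint).astype(float)   # deterministic allocation rule

# Simulated outcome: linear in the score, plus a true intervention effect.
true_effect = 5.0
outcome = 20.0 + 0.3 * score + true_effect * treated + rng.normal(0.0, 2.0, n)

# Primary analysis: model the score-outcome relationship, centered at the
# cutpoint, and read the intervention effect off the discontinuity.
centered = score - cutpoint
X = np.column_stack([np.ones(n), centered, treated])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(round(beta[2], 2))  # estimate of the intervention effect, near 5
```

Note that correct inference hinges on modeling the score-outcome relationship accurately; in practice the functional form would be checked, not assumed linear.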

The group-level RDD is patterned after the individual-level RDD described previously.32,39 Groups rather than individual participants are allocated to study arms. This variation on the RDD may be a useful approach to evaluate community-level health disparity interventions in which the intervention operates at a group level, changes the physical or social environment, or cannot be delivered to individuals without substantial risk of contamination. Analysis of RDDs can be accomplished with standard methods based on the general or generalized linear model.36,37 Model-based analysis of GRTs, stepped-wedge designs, and group-level RDDs will require application of the general or generalized linear mixed model. Two-stage models, 2-stage randomization tests, and methods based on generalized estimating equations can also be used, if implemented properly.30,31,34,35,39

If none of these designs can be used, multiple baseline designs,40,41 quasi-experimental designs,28,29 and time-series designs28,29 may be helpful. However, these methods will not provide the same level of rigor as GRTs, stepped-wedge designs, RDDs, and group-level RDDs. The box on page S38 provides a summary of some design strategies to consider for evaluations of health disparity research.

Strategies for Design and Methodology to Advance Evaluations.

Use design and analytic methods that reflect the complexities of multilevel, multidisciplinary health disparities research.

  • The next generation of health disparities projects should promote multilevel and multidisciplinary research across the life course in real-world settings.

  • Etiologic research should assess exposures and outcomes at multiple levels and reflect the unique characteristics and attributes of health disparities populations.

  • Intervention research should address identified etiologic factors at multiple levels and be evaluated by using design and analytic methods appropriate for multilevel interventions.

  • Whether the health disparities research is etiological or interventional, evaluation typically proceeds with both qualitative and quantitative data to understand individual experiences and health outcomes.

SUMMATIVE EVALUATIONS—RIGHT GOALS, RIGHT PRICE

Summative evaluation examines the impact of a program or intervention on the target group. Because it is used to assess whether the results of the program or intervention being evaluated met the stated goals, it is often considered the most important part of an evaluation. Summative evaluation is particularly important for health disparities because the 2 key questions in the research field today are (1) have interventions focused on the right factors that improved the health of the target group, and (2) have programs actually reduced the health disparity between the target and reference groups? Summative evaluation documents the degree of achievement of the project’s goals and outcomes. It enables researchers to quantify changes in resources used, which can help determine the overall impact of the program or intervention, and it promotes the assessment of comparative analyses with different projects to facilitate evidence-based decision-making.

Evaluation in health disparities should be designed to improve transparency, accountability, and sustainability of health equity. A summative evaluation can demonstrate accountability for a variety of stakeholders, including the community and funding bodies. Accountability generally spans 3 dimensions: (1) how efficiently health care resources are being used (fiscal accountability), (2) whether interventions are being delivered with fidelity and yielding the expected result (performance accountability), and (3) whether interventions exhibit government-brokered equity, transparency, and responsiveness (political or democratic accountability).42 Understanding performance for each of these accountability dimensions requires evaluation to ensure that investments, interventions, and policies are actually yielding their expected results—and are not imposing harm. The methods of summative evaluation are typically associated with quantitative assessments but can include qualitative methods as well. Processes include analyses of the outcomes, products, or data from the project, as well as information from reports. A summative evaluation can also draw on additional information or perspectives from focus groups, including project participants and members of the community the program was intended to affect.

Substantial reduction in health disparities among underserved and minority populations will occur only if interventions and programs are sustainable and effective. Although cost analyses are important in summative evaluations, understanding effectiveness regardless of cost is equally important. For example, a study summarized the outcomes and cost-effectiveness of a cardiovascular disease risk-reduction program delivered to patients in federally qualified health centers by nurse practitioners and community health workers versus enhanced usual care; the authors reported costs per unit of improvement in low-density lipoprotein cholesterol, systolic and diastolic blood pressure, and glycated hemoglobin.43 Although the authors concluded that their intervention could effectively reduce cardiovascular health disparities over a 12-month period, the study design was unable to show an actual reduction in health disparity between the targeted group and a healthier reference group. As a consequence, they were unable to answer 1 of the 2 key questions in health disparities research today: Did the program reduce the health disparity between the target and reference (healthier) groups? In this case, a summative evaluation could have provided contextual meaning to the absolute and relative outcomes, which are often used in assessing health disparities research but not always understood by lay users. It also could have validated that the conclusions accurately represented the findings and that the population characteristics and sociocultural environment were construed appropriately.

Economic evaluations can be conducted to determine whether a program is cost-effective relative to comparable programs or is cost-beneficial, which is helpful for choosing among programs to be funded and to foster sustainability. Although an evaluation should include an economic analysis that compares the costs and consequences of alternative interventions,44 this is uncommon in health disparities research. An important aspect of an economic evaluation is specifying the perspective, which can range from individuals to health systems; a governmental program, such as Medicare; or society overall. The appropriate approach depends on what is being compared.

A cost-effectiveness analysis compares the likely health-related effects and costs of alternatives to achieve a specific health-related outcome. The resulting incremental cost-effectiveness ratio (net cost:net effects) may, for example, relate to incremental cost per life year saved or incremental cost per case averted. A strategy to determine the cost-effectiveness of minority health and health disparities interventions in community settings is critical to include in the design and long-term sustainability of funded projects. Because the outcome under study is life years saved or survival, cost-effectiveness analysis does not account for improvement in the quality of life of beneficiaries, which is so often associated with health equity interventions. Hence, cost-utility analysis is another type of economic evaluation that expands cost-effectiveness analysis to include multiple health-related effects, such as quality and length of life. Cost-utility analysis typically uses the preferences of individual patients for specific health-related outcomes and aggregates them to a societal level to develop an overall measure, such as a quality-adjusted life year.
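The ratio just described can be made concrete with a toy calculation; all dollar and QALY figures below are invented for illustration and are not from the article.

```python
# Incremental cost-effectiveness ratio (ICER): net cost / net effects.
# Hypothetical comparison of a new intervention vs. usual care.
cost_new, cost_usual = 250_000.0, 100_000.0   # total program costs ($)
qalys_new, qalys_usual = 120.0, 90.0          # quality-adjusted life years

net_cost = cost_new - cost_usual              # $150,000
net_effects = qalys_new - qalys_usual         # 30 QALYs
icer = net_cost / net_effects                 # cost per additional QALY
print(icer)  # 5000.0
```

Here the hypothetical intervention costs $5,000 per additional QALY; a decision-maker would compare that figure against a willingness-to-pay threshold when choosing among programs.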

Key considerations when assessing outcomes and impact include the potential scalability of the results to real-world settings, where the implementation strategy can be expanded broadly across populations, and the sustainability of health-promoting interventions. Evaluations must move health disparities projects to this point to reduce disparities and to save costs. Policymakers and other key stakeholders are often interested in knowing whether a particular program reduces cost, also known as “the return on investment.” In this case, a cost–benefit analysis, which expresses both the costs and the outcomes of programs addressing health disparities in monetary terms, is warranted. For instance, Delaware increased colorectal screening rates among African Americans and, unlike most health disparities projects, continued to assess the screening rates over time to determine whether the colorectal cancer disparity was reduced.45 Overall, the intervention cost about $1 million per year and saved an estimated $8.5 million annually in treatment costs. Equally important, not only did health outcomes improve for both groups, but the health disparity between groups was also reduced. A basic evaluation investigating impact like this example rarely receives attention in health disparities research, although it is much needed.
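Using the Delaware figures cited above, a back-of-the-envelope cost–benefit calculation looks like the following; only the two dollar amounts come from the text, and the return-on-investment formula is the standard one.

```python
# Figures cited in the text for the Delaware colorectal screening program.
annual_cost = 1_000_000       # ~$1 million per year in program cost
annual_savings = 8_500_000    # ~$8.5 million per year in averted treatment

net_benefit = annual_savings - annual_cost
roi = net_benefit / annual_cost   # net return per dollar invested
print(net_benefit, roi)  # 7500000 7.5
```

On these figures, every program dollar returns roughly $7.50 in net averted treatment costs, which is the kind of result policymakers ask for when weighing scalability and sustainability.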

Achieving “the right goals at the right price” is only part of the journey. Once projects, especially interventions, are properly evaluated, results must be communicated. Reporting summative evaluation findings encompassing assessments of goals and cost analyses is critical to expanding the health disparities literature. Such information can support systematic reviews and meta-analyses, encourage development of interventions that can affect health equity, facilitate evidence-based decision-making with regard to scalability and sustainability of programs, and reduce duplication of effort. Another important activity is to provide feedback to the community that was studied to foster collaborative, caring relationships and to diminish the perceptions of being exploited and concerns of funding consequences. In addition, a centralized location is needed to promote access to health disparities research findings. An example is HDPulse, which is hosted by NIMHD (https://hdpulse.nimhd.nih.gov). Such a strategy could advance health disparities research in a meaningful way by cataloguing exemplary evaluation approaches and other science-driven activities to encourage research in health disparities populations to evolve at a quicker, yet deliberate pace. Moreover, it can become a repository not only for cost-effectiveness outcomes but also for quantifying improvements in health and in health disparities reduction—both of which answer the most compelling questions in health disparities research today. The box on this page provides strategies for summative evaluation of minority health and health disparities research to help achieve the right goals.

Strategies for Summative Evaluations.

Evaluate research outcomes and efforts to determine the overall impact and relative costs of health improvement and reduction of health disparities for the targeted population.

  • Research often focuses on discovering statistically significant findings. To progress from efforts directed toward accurate identification and surveillance to delivering effective interventions, health disparities research must incorporate summative evaluation methodologies to determine whether appropriate short-term outcomes and long-term impacts were achieved.

  • Interventions should be evaluated to determine whether they were delivered with fidelity and yielded the expected result (performance accountability) relevant to the population and sociocultural context.

  • Economic analysis should always be part of the evaluation. It requires collecting cost data throughout the course of an intervention to determine the feasibility and sustainability of long-term use in relation to the reduction of health disparities.

  • Communicating the results is critical. Publishing evaluations about health disparity research will support systematic reviews and meta-analyses, which can lead to a better understanding of the impact of activities designed to reduce health disparities. Providing feedback to participant communities fosters collaborative relationships for future research.

CONCLUSIONS

The science of health disparities is promoting research from a comprehensive perspective of the individual, the community, and systems over the life span. The complexities of this work challenge not only conventional population science and research but also evaluation at every phase. Health disparities research must be encouraged to take on these challenges by promoting and adopting new discoveries in methodologies and data science to move beyond describing and understanding health disparities toward identifying mechanisms that can effectively and efficiently reduce them. A substantial part of this effort will require not only the implementation of evaluation methodologies but also the publication of findings from the evaluation.

The use of evaluation techniques in health disparities research is key to identifying the “active ingredients” that make health disparity interventions effective. Evaluation can play a dual role by providing evidence on the outcomes of the intervention within different settings and by appraising the intervention process to help clarify how and why the intervention worked and its cost. Projects lose long-term impact when the cost is too high to sustain. Routine evaluation can not only strengthen the position of minority health and health disparities programs so that investments can be justified when budgets are constrained, but also facilitate progress toward reducing health disparities and improving health and well-being for all.

ACKNOWLEDGMENTS

The authors wish to thank Jane Sisk, PhD, for her contribution to the activities of this work group and insightful review of this article. This trans–National Institutes of Health work resulted from a National Institute on Minority Health and Health Disparities–led workshop, including external experts, to address methods and measurement science.

Note. The final content is the responsibility of the authors and does not necessarily represent the perspective of the US government.

CONFLICTS OF INTEREST

The authors do not have any financial or other competing interests to declare.

HUMAN PARTICIPANT PROTECTION

Because no human participants were involved, institutional review board approval was not required.

REFERENCES

  • 1. Pub L No. 106-525 (2000). Available at: https://www.nimhd.nih.gov/docs/advisory-council/public_law106-525.pdf. Accessed November 5, 2018.
  • 2.Cohen G, Schroeder J, Newson R et al. Does health intervention research have real world policy and practice impacts: testing a new impact assessment tool. Health Res Policy Syst. 2015;13(1):3. doi: 10.1186/1478-4505-13-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Ernø-Kjølhede E, Hansson F. Measuring research performance during a changing relationship between science and society. Res Eval. 2011;20(2):131–143. [Google Scholar]
  • 4. Who is socially disadvantaged? 13 CFR 124.103. Available at: https://www.law.cornell.edu/cfr/text/13/124.103. Accessed November 5, 2018.
  • 5.Kaptchuk TJ. Effect of interpretive bias on research evidence. BMJ. 2003;326(7404):1453–1455. doi: 10.1136/bmj.326.7404.1453. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Grant CG, Ramos R, Davis JL, Lee Green B. Community health needs assessment: a pathway to the future and a vision for leaders. Health Care Manag (Frederick). 2015;34(2):147–156. doi: 10.1097/HCM.0000000000000057. [DOI] [PubMed] [Google Scholar]
  • 7.Rossi PH, Lipsey MW, Freeman HE. Evaluation: A Systematic Approach. 7th ed. Thousand Oaks, CA: Sage; 2004. [Google Scholar]
  • 8.Centers for Disease Control and Prevention. Types of evaluation. Available at: https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf. Accessed November 5, 2018.
  • 9.Black P, Wiliam D. Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability. 2009;21(1):5. [Google Scholar]
  • 10.Lillie-Blanton M, Hoffman SC. Conducting an assessment of health needs and resources in a racial/ethnic minority community. Health Serv Res. 1995;30(1 pt 2):225–236. [PMC free article] [PubMed] [Google Scholar]
  • 11.Health Research and Educational Trust. Applying research principles to the community health needs assessment process. 2016. Available at: https://www.hpoe.org/researchCHNA. Accessed November 5, 2018.
  • 12.Saleebey D. The strengths perspective in social work practice: extensions and cautions. Soc Work. 1996;41(3):296–305. [PubMed] [Google Scholar]
  • 13.Early TJ, GlenMaye LF. Valuing families: social work practice with families from a strength perspective. Soc Work. 2000;45(2):118–130. doi: 10.1093/sw/45.2.118. [DOI] [PubMed] [Google Scholar]
  • 14.Thornton RL, Glover CM, Cené CW, Glik DC, Henderson JA, Williams DR. Evaluating strategies for reducing health disparities by addressing the social determinants of health. Health Aff (Millwood). 2016;35(8):1416–1423. doi: 10.1377/hlthaff.2015.1357. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Wierenga D, Engbers LH, Van Empelen P, Duijts S, Hildebrandt VH, Van Mechelen W. What is actually measured in process evaluations for worksite health promotion programs: a systematic review. BMC Public Health. 2013;13:1190. doi: 10.1186/1471-2458-13-1190. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Moore GF, Audrey S, Barker M et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258. doi: 10.1136/bmj.h1258. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Maxson J. Latino Conceptualization of Depression. Capstone Project [dissertation]. Forest Grove, OR: College of Health Professions, Pacific University; June 2011.
  • 18.Durso T. Health care inequalities lead to a mistrust of research. The Scientist. February 1997. Available at: https://www.the-scientist.com/news/health-care-inequities-lead-to-a-mistrust-of-research-57614. Accessed November 5, 2018.
  • 19.Wynia MK, Gamble VN. Mistrust among minorities and the trustworthiness of medicine. PLoS Med. 2006;3(5):e244. doi: 10.1371/journal.pmed.0030244. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Blumenthal DS. Is community-based participatory research possible? Am J Prev Med. 2011;40(3):386–389. doi: 10.1016/j.amepre.2010.11.011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Smith JD, Jordan N, Gallo C et al. An individually tailored family-centered intervention for pediatric obesity in primary care: study protocol of a randomized type II hybrid effectiveness-implementation trial (Raising Healthy Children study). Implement Sci. 2018;13(1):11. doi: 10.1186/s13012-017-0697-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.McCauley MP, Ramanadhan S, Viswanath K. Assessing opinions in community leadership networks to address health inequalities: a case study from Project IMPACT. Health Educ Res. 2015;30(6):866–881. doi: 10.1093/her/cyv049. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Sheppard VB, Harper FW, Davis K, Hirpa F, Makambi K. The importance of contextual factors and age in association with anxiety and depression in Black breast cancer patients. Psychooncology. 2014;23(2):143–150. doi: 10.1002/pon.3382. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Lwembe S, Green SA, Chigwende J, Ojwang T, Dennis R. Co-production as an approach to developing stakeholder partnerships to reduce mental health inequalities: an evaluation of a pilot service. Prim Health Care Res Dev. 2017;18(1):14–23. doi: 10.1017/S1463423616000141. [DOI] [PubMed] [Google Scholar]
  • 25.Friedman DB, Owens OL, Jackson DD et al. An evaluation of a community–academic–clinical partnership to reduce prostate cancer disparities in the South. J Cancer Educ. 2014;29(1):80–85. doi: 10.1007/s13187-013-0550-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Stewart M, Makwarimba E, Barnfather A, Letourneau N, Neufeld A. Researching reducing health disparities: mixed-methods approaches. Soc Sci Med. 2008;66(6):1406–1417. doi: 10.1016/j.socscimed.2007.11.021. [DOI] [PubMed] [Google Scholar]
  • 27.Campbell DT, Stanley JC. Experimental and Quasi-experimental Designs for Research. Chicago, IL: Rand McNally College Publishing Company; 1963. [Google Scholar]
  • 28.Cook TD, Campbell DT. Quasi-Experimentation: Design & Analysis Issues for Field Settings. Chicago, IL: Rand McNally College Publishing Company; 1979. [Google Scholar]
  • 29.Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, MA: Houghton Mifflin Company; 2002. [Google Scholar]
  • 30.Donner A, Klar N. Design and Analysis of Cluster Randomization Trials in Health Research. London, UK: Arnold; 2000. [Google Scholar]
  • 31.Murray DM. Design and Analysis of Group-Randomized Trials. New York, NY: Oxford University Press; 1998. [Google Scholar]
  • 32.Murray DM, Pennell M, Rhoda D, Hade EM, Paskett ED. Designing studies that would address the multilayered nature of health care. J Natl Cancer Inst Monogr. 2010;2010(40):90–96. doi: 10.1093/jncimonographs/lgq014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Murray DM, Pals SP, George SM et al. Design and analysis of group-randomized trials in cancer: a review of current practices. Prev Med. 2018;111:241–247. doi: 10.1016/j.ypmed.2018.03.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Hughes JP, Granston TS, Heagerty PJ. Current issues in the design and analysis of stepped wedge trials. Contemp Clin Trials. 2015;45(pt A):55–60. doi: 10.1016/j.cct.2015.07.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Hussey MA, Hughes JP. Design and analysis of stepped wedge cluster randomized trials. Contemp Clin Trials. 2007;28(2):182–191. doi: 10.1016/j.cct.2006.05.007. [DOI] [PubMed] [Google Scholar]
  • 36.Bor J, Moscoe E, Barnighausen T. Three approaches to causal inference in regression discontinuity designs. Epidemiology. 2015;26(2):e28–e30. doi: 10.1097/EDE.0000000000000256. [DOI] [PubMed] [Google Scholar]
  • 37.Bor J, Moscoe E, Mutevedzi P, Newell ML, Barnighausen T. Regression discontinuity designs in epidemiology: causal inference without randomized trials. Epidemiology. 2014;25(5):729–737. doi: 10.1097/EDE.0000000000000138. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Rubin DB. Assignment to treatment group on the basis of a covariate. J Educ Behav Stat. 1977;2(1):1–26. [Google Scholar]
  • 39.Pennell ML, Hade EM, Murray DM, Rhoda DA. Cutoff designs for community-based intervention studies. Stat Med. 2011;30(15):1865–1882. doi: 10.1002/sim.4237. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Hawkins NG, Sanson-Fisher RW, Shakeshaft A, D’Este C, Green LW. The multiple baseline design for evaluating population-based research. Am J Prev Med. 2007;33(2):162–168. doi: 10.1016/j.amepre.2007.03.020. [DOI] [PubMed] [Google Scholar]
  • 41.Rhoda DA, Murray DM, Andridge RR, Pennell ML, Hade EM. Studies with staggered starts: multiple baseline designs and group-randomized trials. Am J Public Health. 2011;101(11):2164–2169. doi: 10.2105/AJPH.2011.300264. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Brinkerhoff D. Accountability and health systems: overview, framework, and strategies. Bethesda, MD: Abt Associates, Partners for Health Reform Plus; 2003. [Google Scholar]
  • 43.Allen JK, Dennison Himmelfarb CR, Szanton SL, Frick KD. Cost-effectiveness of nurse practitioner/community health worker care to reduce cardiovascular health disparities. J Cardiovasc Nurs. 2014;29(4):308–314. doi: 10.1097/JCN.0b013e3182945243. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Drummond MK, Sculpher MJ, Claxton K, Stoddart GL, Torrance GW. Methods for the Economic Evaluation of Health Care Programmes. Oxford, UK: Oxford University Press; 2015. [Google Scholar]
  • 45.Grubbs SS, Polite BN, Carney J et al. Eliminating racial disparities in colorectal cancer in the real world: it took a village. J Clin Oncol. 2013;31(16):1928–1930. doi: 10.1200/JCO.2012.47.8412. [DOI] [PMC free article] [PubMed] [Google Scholar]
