Author manuscript; available in PMC 2024 Sep 1.
Published in final edited form as: Ann Epidemiol. 2023 Apr 2;85:45–50. doi: 10.1016/j.annepidem.2023.03.008

Proposing the observational-implementation hybrid approach: designing observational research for rapid translation

Justin Knox a,b,c, Sheree Schwartz d, Dustin T Duncan e, Geoff Curran f, John Schneider g, Rob Stephenson h, Patrick Wilson i, Denis Nash j, Patrick Sullivan k, Elvin Geng l
PMCID: PMC10936213  NIHMSID: NIHMS1969307  PMID: 37015306

Abstract

We propose the observational-implementation hybrid approach – the incorporation of implementation science methods and measures into observational studies to collect information that would allow researchers to anticipate, estimate, or infer the effects of interventions and implementation strategies. Essentially, we propose that researchers collect implementation data early in the research pipeline, in situations where they might not typically be thinking about implementation science. We describe three broad contextual scenarios in which the observational-implementation hybrid approach could most productively be applied. The first application is to observational cohorts that individually enroll participants – either existing studies (to which implementation concepts could be added) or newly planned ones. The second application is to routinely collected program data, at either the individual or aggregate level. The third application is to the collection of data from participants enrolled in an observational cohort study who are also involved in interventions linked to that study (e.g., collecting data about their experiences with those interventions). Examples of relevant implementation data that could be collected as part of observational studies include factors relevant to transportability, participant preferences, and participant/provider perspectives regarding interventions and implementation strategies. The observational-implementation hybrid model provides a practical approach to make the research pipeline more efficient and to decrease the time from observational research to health impact. If this approach is widely adopted, observational and implementation science studies will become more integrated; this will likely lead to new collaborations, encourage the expansion of epidemiological training, and, we hope, push both epidemiologists and implementation scientists to increase the public health impact of their work.

Keywords: implementation science, observational study, outcome studies

Introduction

Implementation science is the study of methods and strategies that facilitate the uptake of evidence-based practices into everyday public health and clinical practice.1 Implementation science focuses on rigorously understanding the causes of implementation and how to manipulate relevant constructs to improve implementation and health outcomes. Epidemiologists have made important contributions to implementation science by helping design implementation science studies that are rigorous and actionable. Here, we advocate for incorporating implementation science methods and measures into one of the cornerstones of epidemiological research: observational studies. We draw from the effectiveness-implementation hybrid studies literature and propose clearer specification and utilization of this hybrid approach within observational research. We name this the observational-implementation hybrid approach. We recognize that many of the concepts we describe are already being used in some epidemiologic studies, and we hope that formalizing the approach and providing a theoretical justification for it will strengthen existing work and expand its application. Epidemiologists who conduct and analyze observational data (from program data or other sources) may consider how these methods can promote the systematic uptake of research findings and other evidence-based strategies to improve public health or clinical practice. We also offer some practical suggestions for how to achieve these goals.

We propose that epidemiologists increasingly collect implementation research data at additional relevant points along the research pipeline; in some cases, observational researchers may not currently be collecting data relevant to the eventual implementation of programs or interventions. Although some implementation-relevant concepts align with what epidemiologists often focus on (e.g., how and under what conditions an exposure/treatment works), others will be less familiar (e.g., individuals’ motivation and preferences for the uptake of evidence-based practices). We encourage readers who are not familiar with implementation science, or who do not use it in their research, to approach our arguments with an open mind; they may ultimately find our suggestions interesting and useful.

Motivation for the observational-implementation hybrid approach

Our proposal is motivated by the effectiveness-implementation hybrid study,2,3 a widely used approach that blends design questions of clinical effectiveness research (clinical and/or public health interventions to improve health outcomes) and implementation research (how and under what circumstances interventions work in practice). Effectiveness-implementation hybrid studies may place different emphasis on the effectiveness and implementation components: they may prioritize effectiveness outcomes (type 1), give equal weight to both (type 2), or prioritize implementation outcomes such as feasibility, fidelity of intervention implementation, or sustainment4 (type 3). The rationale for this approach was to foster rapid translational gains in clinical intervention uptake, increase the effectiveness of interventions, and generate more useful information for researchers and decision makers.2 The goal is to take interventions that are proven to work in a controlled trial setting (i.e., are efficacious) and learn how to support their practice-based implementation in a way that they remain efficacious (i.e., are effective); studying both components concurrently was proposed to improve the efficiency of the research pipeline and accelerate the translation process (i.e., narrow the estimated 17-year gap from discovery to implementation of health interventions5).

Analogously, epidemiologists conducting observational research today are well-positioned to help close the gap from discovery to implementation. We can do this by anticipating challenges along the research-practice translational pipeline. Using an observational-implementation hybrid approach will decrease the time between observing determinants of health and using that knowledge to improve health. How can this be accomplished? Much observational epidemiological research, especially social epidemiological research, focuses on social determinants of health; many of these critical exposures do not lend themselves to evaluation through experimental designs, or may be logistically difficult to randomize in the context of limited political will and finite resources (e.g., policy, poverty alleviation). In these contexts, observational studies play a key role in driving hypothesis generation and identifying modifiable conditions and targets for interventions. Further, for many exposures or interventions, there are key steps (and related determinants) along the causal pathway that enable an exposure or intervention to ultimately influence a health outcome (exposure to the intervention→implementation outcomes→health outcomes). The application of causal inference methods to observational data can address questions that experimental designs may be unable to address and can provide results that are more generalizable by enhancing the real-world nature of the data collection. Indeed, prioritizing external validity over internal validity is a hallmark of implementation research. Additionally, when evidence-based practices exist, observational data can increase understanding of their implementation. Observational data can also characterize the need for new or optimized implementation strategies to promote adoption of evidence-based practices, and can monitor the transportability of effects across contexts. Therefore, we believe that epidemiologists may further increase impact by broadening traditional observational study designs to include the collection of data to inform the design, implementation, or evaluation of implementation strategies and their effects on implementation outcomes. We propose that epidemiologists think broadly about the stages of the research pipeline at which implementation science data could be collected.

Opportunities to implement the observational-implementation hybrid approach

The observational-implementation hybrid approach is relevant in three broad scenarios: observational cohorts, program data, and cohorts with study-linked interventions (examples provided in Table 1). First, it can be applied among observational cohorts that individually enroll participants and provide a natural history of their experiences or behaviors. These might include existing studies to which additional information to inform implementation could be added, or new studies where a hybrid approach is planned a priori. Measures could be added to survey assessments to understand the acceptability or appropriateness of relevant evidence-based practices (e.g., constructs relevant to patient uptake), including interventions and policies operating at the structural level – areas that experimental studies are often not well suited to assess. Participants might be sub-sampled by relevant characteristics or behaviors, for example, based on their relative uptake of evidence-based practices; data from such participants could deepen understanding of facilitators of, and barriers to, uptake. A determinants framework, such as the Consolidated Framework for Implementation Research (CFIR),6 could be used to guide which constructs to assess through quantitative or qualitative measures. Standard data collection activities could be enhanced by additional data collection among providers, policymakers, and facilities/organizations to expand the scope of the project and inform a better understanding of implementation gaps. Preference-based measures administered to cohort participants or providers could identify strategies to enhance capabilities, motivations, and/or opportunities for behavior change.7

Table 1: Examples for potential use of the observational-implementation hybrid approach*

Context: New or existing observational cohort (populations individually enrolled into follow-up)
Additional data to collect: New collection of determinants (facilitators and barriers) of uptake of evidence-based interventions or practices from the perspective of patients, providers, and health systems
Potential frameworks to layer onto observational data: Determinants frameworks such as the CFIR,6 the Health Equity and Implementation Framework,9 the Exploration, Preparation, Implementation, Sustainment (EPIS) framework,10 and the Behavior Change Wheel7
Benefit gained: Clearer elucidation of causal pathways that focuses on determinants beyond individual patient biology and behavior; hypothesis generation for implementation strategies needed to improve implementation and subsequent health outcomes
Example: The Neighborhoods and Networks Part 2 (N2P2) Study,11 an observational cohort study assessing the causal effects of substance use and sleep health on HIV-related outcomes among Black sexual minority men and transgender women; determinants of implementation of alcohol interventions will be assessed at multiple levels (e.g., psychological, social network, policy)

Context: New or existing observational cohort (populations individually enrolled into follow-up)
Additional data to collect: Collection of preference data through discrete choice experiments, best-worst scaling, or ranking approaches
Example: Discrete choice experiment around COVID-19 testing preferences within the CHASING COVID national cohort study12

Context: Ongoing program implementation (leveraging routine data collected by health systems and programs, reported at the individual or aggregate level)
Additional data to collect: Collection of implementation outcomes with patients, providers, and organizations/health systems
Potential frameworks to layer onto observational data: Evaluative frameworks such as RE-AIM8 and Proctor’s implementation outcomes4
Benefit gained: Further identification of mechanisms for program success or failure; robust assessment of intervention effects; hypothesis generation for implementation strategies needed to improve implementation and subsequent health outcomes
Examples: The IeDEA cohort – use of clinic characteristics (adoption) to assess clinical outcomes;13 Optimizing PrEP for Implementation within a large PEPFAR-funded PrEP program in South Africa (NIH grant R01MH121161; PI: Schwartz)

Context: Ongoing program implementation (leveraging routine data collected by health systems and programs, reported at the individual or aggregate level)
Additional data to collect: Collection of data elements or follow-up strategies in a random sub-set of participants
Benefit gained: Allows for a clearer understanding of missing data assumptions and associated biases; has the potential to replicate a natural experiment
Examples: Updated retention and viral suppression estimates among program data using a multi-stage sampling and tracing approach;14 use of random tracing of cohort members as an instrumental variable approach15

Context: Ongoing program implementation (leveraging routine data collected by health systems and programs, reported at the individual or aggregate level)
Additional data to collect: Use of external data, such as policy shifts, within existing cohorts
Benefit gained: Efficiently harnesses existing data and causal inference methods to test hypotheses for which randomization is impractical or unethical
Example: Assessing the effects of implementing universal and rapid HIV treatment on initiation of antiretroviral therapy and retention in care in Zambia14

* Examples are non-exhaustive; different combinations of additional data collection components could be applied across both general typologies of observational data context.

The observational-implementation hybrid approach can also be used with routinely collected program data. For example, studies using quasi-experimental designs with routine program data could collect additional data to assess implementation science constructs, leveraging evaluative frameworks such as RE-AIM8 or Proctor’s implementation outcomes.4 This might be done using quantitative or qualitative measures: observing ongoing implementation, completing facility-level checklists, and collecting costing data. Program evaluation studies may be particularly suited to answer questions associated with policy or guideline changes if implementation of a program is occurring on a large scale.
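To make this concrete, here is a minimal sketch (not from the paper; all counts and field names are hypothetical) of RE-AIM-style indicators that could be computed from routinely collected program data:

```python
# Hypothetical routine program counts; real programs would define eligibility
# and delivery from their own records and data dictionaries.
program = {
    "eligible_patients": 5000,             # denominator for reach
    "patients_served": 3200,               # received the evidence-based practice
    "facilities_total": 40,                # denominator for adoption
    "facilities_delivering": 28,           # facilities offering the practice
    "visits_delivered_to_protocol": 2400,  # numerator for a simple fidelity proxy
    "visits_total": 3000,
}

reach = program["patients_served"] / program["eligible_patients"]
adoption = program["facilities_delivering"] / program["facilities_total"]
fidelity = program["visits_delivered_to_protocol"] / program["visits_total"]

print(f"Reach: {reach:.0%}, Adoption: {adoption:.0%}, Fidelity: {fidelity:.0%}")
```

Indicators like these could be tracked over time or compared across facilities to generate hypotheses about implementation gaps.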

A third scenario is collecting data from participants in observational cohort studies who are also involved in study-linked interventions. For example, in many HIV-related studies, potential participants undergo HIV/STI testing to assess eligibility for study inclusion; these eligibility screening tests include services such as counseling and referral to HIV/STI prevention or care services. Implementation-relevant details of eligibility screening are typically not captured in observational studies, but additional data collection about these experiences and processes could be used to improve existing and long-standing interventions (e.g., HIV counseling and testing and linkage to care). Unobserved interventions, such as merely presenting at a research site (one that is welcoming to racial/ethnic and sexual/gender minorities), also warrant additional consideration. Data about participant experiences at research sites would allow an understanding of which characteristics of service locations might be conducive, or not conducive, to effective service provision. Finally, data could also be collected on the compensation participants receive and the extent to which those funds help mitigate known barriers to medical care – for example, support with respect to food, shelter, transportation, and well-being.

Methodological considerations when applying the observational-implementation hybrid approach

Using an observational-implementation hybrid approach starts with acquiring knowledge about relevant interventions or policies that affect modifiable implementation constructs relevant to the study’s research question(s). If researchers do not have this knowledge at the outset, they can draw on literature reviews and/or consultations with community advisory groups, researchers developing interventions, and implementers of those interventions (e.g., healthcare providers). Many implementation science models, theories, and frameworks have been developed that can help researchers identify which implementation constructs are relevant to their topic and how to measure them.16,17 Where understanding implementation barriers or facilitators is warranted, determinants frameworks such as the CFIR synthesize implementation constructs across domains; in the case of the CFIR, these are Intervention Characteristics, Outer Setting, Inner Setting, Process, and Characteristics of Individuals.6 Characteristics of individuals (i.e., participants and healthcare providers linked to the study) could be assessed in observational studies, such as knowledge and beliefs about a specific evidence-based practice, willingness to take it up or use it, or anticipated self-efficacy to adhere to it. In some cases, research on user willingness to use various interventions touches on some of these considerations (e.g., willingness to use emerging interventions, such as long-acting injectable PrEP18,19); however, contextualizing interventions within more complex systems is often missing. Relevant to the CFIR outer setting, policies involving compensation or structural determinants of health could also be assessed, as these might affect uptake of an evidence-based practice. Perceptions of intervention characteristics, such as relative advantage, complexity, adaptability, and cost, may be other critical drivers of implementation; these perceptions could also be assessed by including data collection with providers, especially if the observational study is hosted in a clinic.11 If this were the case, data could also be collected relevant to the CFIR inner setting, such as organizational dynamics and culture or readiness for implementation change, as these may affect provider adoption of screening or implementation of an evidence-based practice.
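As one way to organize such add-on measures, the sketch below (illustrative only; the items are drawn from the constructs named above, and the respondent assignments are our assumptions, not a validated instrument) maps candidate survey constructs to CFIR domains and to the respondent best placed to report on them:

```python
# Hypothetical mapping of add-on survey constructs to CFIR domains.
# "EBP" = evidence-based practice. Item wording and respondent assignments
# would need to be adapted and validated for a given study.
cfir_addon_measures = {
    "Characteristics of Individuals": {
        "respondent": "participant",
        "items": ["knowledge and beliefs about the EBP",
                  "willingness to take up or use the EBP",
                  "anticipated self-efficacy to adhere to the EBP"],
    },
    "Outer Setting": {
        "respondent": "participant",
        "items": ["compensation policies", "structural determinants of health"],
    },
    "Intervention Characteristics": {
        "respondent": "provider",
        "items": ["relative advantage", "complexity", "adaptability", "cost"],
    },
    "Inner Setting": {
        "respondent": "provider",
        "items": ["organizational dynamics and culture",
                  "readiness for implementation change"],
    },
}

for domain, spec in cfir_addon_measures.items():
    print(f"{domain} ({spec['respondent']}): {'; '.join(spec['items'])}")
```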

Clinical trials are often used as settings to collect data on willingness to take up interventions, usually as part of the very trial in which an intervention is being evaluated (e.g., rectal microbicides for HIV prevention used this approach20). Willingness data collected in a trial setting might be biased, because participants who are favorable toward the intervention may be more likely to join a trial in which they might receive it. Alternatively, willingness data could be collected as part of unaffiliated observational studies. In that case, participants might not be as familiar with the intervention and might need to be given information about it; even with that information, they might not have experienced the intervention. However, data collected in unaffiliated observational studies would likely not suffer from the same potential selection bias as data collected during a trial, nor from the potential conflict of interest of being surveyed by investigators who are conducting a trial of the intervention in question.
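A toy simulation (assumed parameters, not data from any study) illustrates the selection bias described above: if more favorable individuals are more likely to enroll, willingness measured among trial participants overstates willingness in the underlying population.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population willingness scores on a 0-1 scale.
willingness = rng.beta(2, 3, size=100_000)  # population mean ~0.40

# Assumption for illustration: probability of enrolling in the trial
# increases with favorability toward the intervention.
enrolled = rng.random(100_000) < willingness**2

print(f"Mean willingness, full population: {willingness.mean():.2f}")
print(f"Mean willingness, trial enrollees: {willingness[enrolled].mean():.2f}")
# The enrollee mean (~0.57) exceeds the population mean (~0.40).
```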

Observational studies are also well suited to study factors related to the transportability of potential interventions and implementation strategies. Specifically, researchers can use observational studies to measure selection factors (i.e., characteristics of persons and settings that have been shown to affect the reach and effectiveness of interventions and implementation strategies). This knowledge could help researchers understand which interventions and implementation strategies might transport across settings.21,22 Increased understanding of selection factors, in turn, would support estimation of the potential impact of interventions/implementation strategies in populations of interest. This information might also inform whether interventions might be replicable in different contexts. The inclusion and evaluation of interventional or implementational elements could also improve the causal inferential value of research findings. By understanding the prevalence of intervention exposure or uptake in real-world settings, as well as fidelity of implementation and factors related to transportability, we would be able not only to estimate the population attributable risk (the proportion of the incidence of a disease in a population that is due to an exposure) through observational studies, but also to approximate the population addressable risk (i.e., the proportion of the incidence of a disease in a population that could be addressed by using current evidence-based practices to intervene on an exposure).
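The contrast can be made concrete with a minimal numerical sketch. The attributable fraction below uses Levin’s standard formula; the “addressable” quantity is one possible operationalization supplied here for illustration (the paper does not define a formula), assuming it scales the attributable fraction by the real-world reach of an evidence-based practice and the proportion of the exposure effect it removes when delivered. All inputs are hypothetical.

```python
def attributable_fraction(exposure_prevalence: float, relative_risk: float) -> float:
    """Levin's population attributable fraction for a binary exposure."""
    excess = exposure_prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

def addressable_fraction(paf: float, reach: float, effectiveness: float) -> float:
    """Assumed operationalization: the share of the attributable fraction
    removable, given the fraction of the exposed population the evidence-based
    practice reaches and the proportion of the exposure effect it eliminates
    when delivered."""
    return paf * reach * effectiveness

paf = attributable_fraction(exposure_prevalence=0.30, relative_risk=2.0)  # ~0.23
addressable = addressable_fraction(paf, reach=0.60, effectiveness=0.50)   # ~0.07

print(f"Population attributable fraction: {paf:.2f}")
print(f"Population addressable fraction (assumed model): {addressable:.2f}")
```

Under these assumptions, even a sizable attributable fraction shrinks considerably once realistic reach and effectiveness are factored in, which is precisely the gap the hybrid approach aims to measure.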

Observational studies could also be used to measure participant preferences for interventions and implementation strategies. For example, certain individuals might prefer receiving an intervention in a community- rather than clinic-based setting, or while utilizing other complementary services. This might be accomplished by adding one of a number of survey tools that directly measure preferences, including best-worst scaling,23 conjoint analysis,24 and discrete choice experiments (DCEs).25 DCEs are survey tools, widely used in marketing research, that could be used to document the relative importance of implementation strategy attributes. In these experiments, which draw on economic theory, decision making is viewed through the lens of consumer decisions and trade-offs, in which consumers seek to maximize utility through choices constrained by total costs. DCEs can be incorporated into the surveys of observational studies, and their data used to quantify relative utilities (preferences) for combinations of features of a service, product, or policy. DCE data from participants who have been oriented to an intervention can be used to (1) assess predictors of future engagement in an intervention; (2) assess key features of an implementation strategy that may result in uptake/engagement by potential clients or providers; or (3) tailor interventions and implementation strategies for the population being studied. If the observational study is prospective, this could become an iterative process.26
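To illustrate how DCE output translates into relative utilities, the sketch below scores two hypothetical service-delivery profiles under a conditional logit model. The attribute names and part-worth coefficients are invented for illustration; in practice, coefficients would be estimated from participants’ observed choices (e.g., with a conditional or mixed logit model).

```python
import numpy as np

# Hypothetical part-worth utilities for implementation-strategy attributes.
beta = {"community_based": 0.8, "same_day_result": 1.2, "cost_usd": -0.05}

def utility(profile: dict) -> float:
    """Linear utility: attribute levels weighted by their part-worths."""
    return sum(beta[attr] * level for attr, level in profile.items())

# Two hypothetical profiles presented in a single choice task.
clinic    = {"community_based": 0, "same_day_result": 1, "cost_usd": 10}
community = {"community_based": 1, "same_day_result": 0, "cost_usd": 0}

v = np.array([utility(clinic), utility(community)])
p = np.exp(v) / np.exp(v).sum()  # logit choice probabilities

print(f"P(choose clinic-based)    = {p[0]:.2f}")
print(f"P(choose community-based) = {p[1]:.2f}")
```

Estimated part-worths like these are what would feed the three uses listed above: predicting engagement, identifying high-leverage strategy features, and tailoring delivery to the study population.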

Observational studies could also be used to collect preliminary data on the perspectives of patients, as the end-users, about interventions and implementation strategies.27,28 Human-centered design is an emerging method for incorporating such end-user perspectives, preferences, and needs into the design and delivery of interventions to optimize their usability and, in turn, their utility.29,30 Human-centered design methods are similar to other participatory methods, such as community-based participatory research31 and Photovoice.32 Using human-centered design methods, observational studies could collect data on participants’ experiences with a particular exposure or their interactions with potential intervention delivery settings. These data could then be used to inform how to implement interventions in those settings.

Implications

The observational-implementation hybrid approach has wide-scale implications for epidemiologists, interventionists, and implementation scientists. Applying it could increase the potential public health impact of observational studies by directly informing the implementation of the evidence-based interventions under study. In this sense, it has the potential to make the research pipeline more resource efficient and faster. Time is especially urgent for many pressing public health issues, such as the COVID-19 pandemic and improving services for populations that experience long-standing health inequities. Hybrid approaches can capture data on structural interventions and the implementation of public health policies, which would be especially important for increasing public health impact. Elements of this type of hybrid research are already happening, as referenced in Table 1. We hope that this manuscript can remove some of the perceived barriers between those who conduct observational work and those who conduct trials, much as the effectiveness-implementation hybrid design has been doing for effectiveness researchers and implementation scientists. We further hope that specifying these methods will promote sustained discussion and consideration of the opportunities for hybrid work.

Integrating implementation science data collection into observational studies could lead a wider range of public health researchers to want to learn about implementation science. This could be addressed by offering training in implementation science as part of epidemiology curricula, reflecting both emerging priorities and the frequent disconnect between trial promise and real-world performance. Implementation science practitioners should facilitate this by using plain language to convey concepts, making them accessible to those who are not experts in the area.33 Wide-scale adoption of the observational-implementation hybrid approach will also necessitate increased collaboration between observational researchers and implementation scientists. For example, funding mechanisms might be expanded to facilitate larger collaborations, matching other calls to expand funding for implementation science more generally.34 Opportunities can be created to increase cross-talk between epidemiologists and implementation science researchers. For example, in the field of HIV research, implementation science consulting hubs within Centers for AIDS Research have been developed as part of the Ending the HIV Epidemic initiative35 to support implementation science research projects. These consulting hubs provide venues to promote trans-institutional collaborative work and to break down barriers between observational and trial work. Such collaborations will bring together data sources at multiple socio-ecological levels, including real-world, routinely collected surveillance data, to conduct optimally impactful public health research.

Limitations

There are limitations to the proposed observational-implementation hybrid approach and to our discussion of it in this paper. Our primary goal is to increase discussion and thought about the opportunities to collect implementation science data during observational studies. Not all observational researchers will want to expand study aims to collect such data; however, awareness of how their work could inform eventual implementation might lead to expansions of data collection, whether modest or extensive. We acknowledge that the focus of this paper is broadly theoretical and is not exhaustive. For example, additional scholarship describing how to apply this approach could be useful, particularly for those new to implementation science. With time, experience, and synthesis, observational-implementation researchers might decide whether to adopt a typing system, like that used by effectiveness-implementation hybrid designs, based on a study’s position within the research pipeline.

Conclusions

We propose the observational-implementation hybrid approach and discuss scenarios where it can be applied, methods that might be used to apply it, potential implications of its adoption, and its limitations. If adopted, the design will likely lead to new collaborations and to the integration of observational and implementation science studies. Success of this strategy will require the expansion of epidemiological training, and we hope the design will serve as a platform that helps epidemiologists and implementation scientists increase the public health impact of their work.

Acknowledgments and funding:

Funding for Dr. Knox’s contribution to the present study was supported by NIAAA (K01AA028199; PI: Knox) and NIDA (R01DA054553, MPI: Duncan & Knox; R01DA057351, MPI: Schneider & Knox; R21DA053156, PI: Knox). Dr. Schwartz was supported by NIAID P30AI094189.

We would like to acknowledge Sharon Schwartz, Sandro Galea, and Alfredo Morabia for their input on drafts of the manuscript.

List of abbreviations:

CFIR: Consolidated Framework for Implementation Research

Footnotes

Conflict of Interest Disclosures: None to disclose.

References

1. Eccles MP, Mittman BS. Welcome to Implementation Science. Implement Sci. 2006;1:1.
2. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–226.
3. Curran GM, Landes SJ, McBain SA, et al. Reflections on 10 years of effectiveness-implementation hybrid studies. Front Health Serv. 2022;2.
4. Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.
5. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–520.
6. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):1.
7. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6:42.
8. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–1327.
9. Woodward EN, Matthieu MM, Uchendu US, Rogal S, Kirchner JE. The health equity implementation framework: proposal and preliminary study of hepatitis C virus treatment. Implement Sci. 2019;14(1):26.
10. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.
11. Knox J, Moline T, Dolotina B, et al. Understanding HIV prevention and care and its determinants among an HIV-status neutral cohort of Black sexual minority men and transgender women in Chicago: the Neighborhoods and Networks Part 2 (N2P2) observational-implementation cohort study. Under review.
12. Zimba R, Romo ML, Kulkarni SG, et al. Patterns of SARS-CoV-2 testing preferences in a national cohort in the United States: latent class analysis of a discrete choice experiment. JMIR Public Health Surveill. 2021;7(12):e32846.
13. Brazier E, Maruri F, Duda SN, et al. Implementation of “Treat-all” at adult HIV care and treatment sites in the Global IeDEA Consortium: results from the Site Assessment Survey. J Int AIDS Soc. 2019;22(7):e25331.
14. Mody A, Sikazwe I, Namwase AS, et al. Effects of implementing universal and rapid HIV treatment on initiation of antiretroviral therapy and retention in care in Zambia: a natural experiment using regression discontinuity. Lancet HIV. 2021;8(12):e755–e765.
15. Beres LK, Mody A, Sikombe K, et al. The effect of tracer contact on return to care among adult, “lost to follow-up” patients living with HIV in Zambia: an instrumental variable analysis. J Int AIDS Soc. 2021;24(12):e25853.
16. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.
17. Damschroder LJ. Clarity out of chaos: use of theory in implementation research. Psychiatry Res. 2020;283:112461.
18. Levy ME, Patrick R, Gamble J, et al. Willingness of community-recruited men who have sex with men in Washington, DC to use long-acting injectable HIV pre-exposure prophylaxis. PLoS One. 2017;12(8):e0183521.
19. Shrestha R, DiDomizio EE, Kim RS, Altice FL, Wickersham JA, Copenhaver MM. Awareness about and willingness to use long-acting injectable pre-exposure prophylaxis (LAI-PrEP) among people who use drugs. J Subst Abuse Treat. 2020;117:108058.
20. McGowan I, Hoesley C, Cranston RD, et al. A phase 1 randomized, double blind, placebo controlled rectal safety and acceptability study of tenofovir 1% gel (MTN-007). PLoS One. 2013;8(4):e60147.
21. Westreich D. Epidemiology by Design: A Causal Approach to the Health Sciences. New York, NY: Oxford University Press; 2020.
22. Pearl J, Bareinboim E. Transportability across studies: a formal approach. Los Angeles, CA: University of California, Los Angeles, Department of Computer Science; 2011.
23. Muhlbacher AC, Kaczynski A, Zweifel P, Johnson FR. Experimental measurement of preferences in health and healthcare using best-worst scaling: an overview. Health Econ Rev. 2016;6(1):2.
24. Bridges JF, Kinter ET, Kidane L, Heinzen RR, McCormick C. Things are looking up since we started listening to patients: trends in the application of conjoint analysis in health 1982–2007. The Patient. 2008;1(4):273–282.
25. Lancaster KJ. A new approach to consumer theory. J Polit Econ. 1966;74(2):132–157.
26. Goldenberg T, McDougal SJ, Sullivan PS, Stekler JD, Stephenson R. Building a mobile HIV prevention app for men who have sex with men: an iterative and community-driven process. JMIR Public Health Surveill. 2015;1(2):e18.
27. Asch DA, Rosin R. Innovation as discipline, not fad. N Engl J Med. 2015;373(7):592–594.
28. Dobson J. Co-production helps ensure that new technology succeeds. BMJ Publishing Group; 2019.
29. What is human-centered design? 2014; http://www.designkit.org/human-centered-design. Accessed January 6, 2022.
30. Chen E, Neta G, Roberts MC. Complementary approaches to problem solving in healthcare and public health: implementation science and human-centered design. Transl Behav Med. 2021;11(5):1115–1121.
31. Israel BA, Schulz AJ, Parker EA, Becker AB; Community-Campus Partnerships for Health. Community-based participatory research: policy recommendations for promoting a partnership approach in health research. Educ Health (Abingdon). 2001;14(2):182–197.
32. Wang C, Burris MA. Photovoice: concept, methodology, and use for participatory needs assessment. Health Educ Behav. 1997;24(3):369–387.
33. Curran GM. Implementation science made too simple: a teaching tool. Implement Sci Commun. 2020;1:27.
34. Proctor EK, Geng E. A new lane for science. Science. 2021;374(6568):659.
35. Fauci AS, Redfield RR, Sigounas G, Weahkee MD, Giroir BP. Ending the HIV Epidemic: a plan for the United States. JAMA. 2019;321(9):844–845.
