Abstract
Background:
The evidence-practice gap in HIV prevention and the care continuum in the United States often reflects a mismatch between the perspectives of researchers and public health practitioners. The traditional research paradigm of sequential progress from efficacy research to implementation in practice and widespread scale-up is not well-aligned with the reality of health department program implementation.
Setting:
This article focuses on public health practice carried out by state and local health departments in the United States and the research intended to inform it.
Methods and Results:
In this narrative review, we discuss approaches to HIV prevention and care continuum research that are shaped by and responsive to public health practice implementation priorities and what is needed to promote productive and successful university-health department research partnerships. We review research methods of particular relevance to health departments to evaluate the effectiveness of HIV prevention and care continuum interventions and how these approaches diverge from traditional research approaches. Finally, we highlight the roles of federal agencies in supporting practice-driven HIV implementation research.
Conclusions:
Health departments are key stakeholders, consumers, and generators of the evidence base for public health practice. High-impact research to improve HIV prevention and the care continuum is informed by health department priorities and current practice from the start. Long-term, equitable relationships between universities and health departments are crucial to advance practice-driven research.
Keywords: HIV prevention, HIV care continuum, health department, implementation science, program science
INTRODUCTION
Closing the gap between evidence and practice is a central goal of implementation science. The evidence-practice gap is conventionally framed as a failure of practitioners to adopt evidence-based interventions, reflecting the traditional model of research as a sequential process of identifying a gap in evidence, generating evidence to fill the gap, then disseminating and scaling up interventions to achieve broad coverage. But public health interventions to prevent HIV and improve the care continuum are rarely developed and implemented in such a linear way, and when implementers do not adopt interventions, it often signifies a poor fit between the intervention and the setting in which it was intended to be delivered. This mismatch reflects deficiencies at each step of the intervention development, testing, and dissemination process. Too often, investigators develop interventions in isolation from the people whom they envision implementing the interventions and without a thorough understanding of existing public health practice. Meanwhile, many widely used public health interventions are generally considered to be effective despite a paucity of evidence to support them. When research intended to change public health practice is carried out in isolation from public health practitioners, the result is unconvinced practitioners and disappointed researchers.
Researchers and practitioners share the central goals of HIV prevention and care continuum improvement, but some fundamental differences underlie their respective approaches to the work. Whereas science reduces the complexity of the real world into concrete, measurable units, public health practice necessarily involves carrying out multiple interventions simultaneously under dynamic real-world conditions. In scientific research, high-quality evidence is paramount. The highest level of evidence comes from randomized controlled trials, which typically focus on a single intervention in a controlled setting. In public health practice, evidence is just one factor among many in programmatic decision-making and must be considered in the context of competing priorities, political support and constraints, mandates for broad service provision, the practical realities of staff and budgetary resources, and limitations imposed by history and institutional inertia.
The perspectives of researchers and public health practitioners can be harmonized through the development of solid partnerships between academic institutions and health departments, and integrating research and practice approaches through institutional collaborations will advance national efforts to end the HIV epidemic. In this paper, we discuss approaches to HIV prevention and care continuum research that are shaped by and responsive to public health practice implementation priorities, how these approaches diverge from traditional research approaches, and what is needed to promote productive and successful university-health department research partnerships. We focus on research relevant to public health practice in the United States, although many of the principles we articulate are broadly applicable.
FORMING SUCCESSFUL UNIVERSITY-HEALTH DEPARTMENT PARTNERSHIPS
Universities and health departments bring complementary expertise to research partnerships, key points of which are summarized in Table 1. Following are some basic tenets for successful collaboration:
Table 1.
Key Areas of Complementary Contributions to University-Health Department Research Collaborations
Health Department Practitioners | University Researchers |
---|---|
Expertise in local public health practice, epidemiology and services | Expertise in research methods and study design |
Service funding that can be leveraged for implementation research | Research funding that facilitates novel approaches |
Access to surveillance and service utilization data; and access to populations of persons with and at risk for HIV | Access to experts in biostatistics, instrument design, mathematical modeling, and cost-effectiveness analysis |
Working relationships with local HIV service provider agencies, physicians, and community planning entities/advisory groups | Institutional support for research activities such as manuscript preparation, conference travel, and NIH study section participation |
Knowledge of local, state, and federal public health priorities and restrictions of federal HIV services program funding | Knowledge of federal research priorities, what is considered innovative by NIH, and federal requirements for clinical trials |
Experience interacting with community stakeholders and political leadership | Expertise and track record in submitting research grant applications and writing research proposals and protocols |
Ability to change or directly impact policy | Academic freedom unrestricted by direct relationship to policy makers |
Authority to initiate or modify programs and change program priorities | Ability to quickly hire staff, engage consultants, and purchase supplies |
Research expertise varies substantially between health departments, and public health practice expertise varies widely between universities.
Research projects are most effective when they are part of long-term relationships instead of brief project-based encounters.
Support from leaders in both institutions is essential.
An effective partnership requires researchers and public health practitioners to approach each other as colleagues and partners, not as means to an end (e.g., obtaining data or access to a population).
Researchers should seek to understand local experience and the mission and priorities of the health department with which they are seeking collaboration.
Whenever possible, new intervention programs should build on existing clinical and public health infrastructure rather than seeking to create new infrastructure.
Tangible resources to support research-related activities, such as salary support, should be commensurate to the work required.
Work with health departments requires compliance with government policies, procedures and timelines, even for a grant that is not being submitted through the health department.
Grant and manuscript-writing roles and research project roles should be discussed and agreed upon in advance and based on the particular capacities, experience, interests, and availability of the individual partners involved. Authorship credit must be discussed in advance and shared as appropriate, and data received from or collected in collaboration with a health department should not be presented or published without the agreement of health department partners.
In a successful partnership, both parties make important contributions and both receive benefits that could not be gained by working in isolation. Health department HIV/STD programs vary tremendously in the skills and experience they bring to collaborations with universities. Some have multiple doctoral-level epidemiologists, physicians who provide HIV care, and leaders with extensive experience in HIV public health practice and research. They are not dependent upon universities to carry out research, but benefit from interdisciplinary collaborations just as university researchers do. Others lack sufficient epidemiologic and biostatistical capacity, have minimal access to physicians with contemporary clinical expertise in HIV, and operate in the context of frequent staff turnover and ongoing staff shortages. For such health departments, long-term collaborations with academic researchers can help address critical needs. For example, contracting with a university to support part of the salary of an epidemiologist or clinician can expand the capacity of a public health program at a fraction of the cost required to create and fill a new position. In return, university researchers partnering with health departments gain colleagues with practice and policy expertise who can directly implement changes to public health programs, a real-world setting in which to conduct implementation research, and access to population-based data and populations receiving health department-funded services.
The Figure presents a conceptual model for health department-university relationships in which projects are grounded in a mutual prioritization of specific problems to be addressed. The most effective relationships achieve buy-in at multiple levels, including directors, program managers and, if applicable to the research, front-line staff. The foundational work to develop a collaboration requires diplomatic efforts focused on cultivating the relationship and ongoing efforts to sustain it over time. The process of aligning health department and university priorities early in the course of developing and testing an intervention can proactively address many of the factors that may otherwise impede implementation of an intervention. Specifically, long-term partnerships facilitate early and frank discussions about the relative priority of a new intervention among competing demands for resources, the feasibility of integrating an intervention into existing practice or adapting practice to incorporate it, and local political concerns or social factors relevant to the context in which the intervention will be delivered.
Figure 1.
Conceptual Model of Health Department-University Collaborations in HIV Prevention Implementation Research
University-based researchers have the freedom to advocate for policy changes and make public statements that health department leaders may be restricted from making due to their more direct relationship with state and local government. However, the balance between pushing the boundaries and respecting political realities is a delicate one. Dismissal of political concerns creates distrust. Each partner needs to be willing to sacrifice some autonomy for the sake of the relationship and persevere through changes in political leadership and the bureaucratic tedium of the partner’s institution. In addition to being responsible to political leadership, health department leaders are responsible to community stakeholders. Federal mandates require community planning processes for HIV prevention and care programming, and researchers can assist public health practitioners in cultivating planning bodies’ buy-in by sharing preliminary data and ensuring that dissemination of research results contributes to community planning efforts as well as the scientific discourse.
Table 2 summarizes a set of concrete steps for development of a successful collaboration. The first step is identification of leaders, both scientific and administrative, who are committed to the success of the partnership. Whenever possible, an individual in a leadership position should have a professional identity that involves both agencies, which requires that both organizations contribute to that person’s salary and that the individual has titles in both organizations. Integration of leadership roles can assume different configurations, including joint appointments and contractual relationships for salary support, as well as engagement of health department staff on the relevant Cores of NIH Centers for AIDS Research (CFARs) and National Institute of Mental Health (NIMH) HIV/AIDS Research Centers (ARCs). A key component of integration is administrative. Success requires an efficient mechanism to transfer funding between agencies. Ideally, this should allow bidirectional transfer of funds with the ability to modify budgets quarterly. This structure should include consideration of indirect costs; health departments are unlikely to pay full indirect costs to universities, which can be surmounted by negotiating indirect cost agreements at the institutional level. In situations where the health department must confront the extensive processes required for government agencies to procure contractors, the university may be better positioned to serve as the lead agency. The research partnership should include agreement about the broad areas for research collaboration in addition to project-specific planning. Negotiating whether and how researchers will contribute to the service mission of the health department is also crucial. For example, it should be clear to both parties whether or not a university researcher will contribute to routine health department surveillance reports, ad hoc analysis requests, or clinical consultation as part of the partnership.
Furthermore, defining benefits that university students and public health staff will receive from the collaboration may help facilitate the development of a committed partnership with buy-in at multiple levels of each institution. Such benefits could include a commitment from the health department to host students for practicum experiences or from the university to allow health department staff access to library resources. Within specific collaborations, the roles and responsibilities of all persons involved and data sharing procedures should be agreed upon, perhaps including written data sharing agreements. Finally, especially at the beginning of a collaboration, agreement upon the metrics of success and timeline for assessing those metrics will ensure a shared perspective. Although the agreements will evolve over time, considering these factors systematically can help prevent misunderstanding and set the stage for success.
Table 2.
Framework for University-Health Department Collaborative Research Partnerships
Key Step | Description |
---|---|
Leaders | • Identify lead representatives in each agency, both scientific and administrative, who will spearhead the collaboration |
Funding | • Define an administrative mechanism for transferring funding and a common plan for handling administrative costs and overhead |
Research Agenda | • Determine broad areas for research collaboration and specific collaborative projects |
Support | • Establish whether and how the university will support non-research public health activities related to the research collaboration, such as clinical work or program evaluation |
Benefits | • Identify educational opportunities that university students and trainees can access through the health department and that public health employees can access through the university |
Roles | • Define roles & responsibilities, particularly for specific collaborations, including which agency will have lead administrative responsibility for each activity. |
Data Sharing | • Define data ownership, data protection policies, and plans for data sharing and security |
Dissemination | • Establish a plan for conference abstracts and manuscripts, including authorship roles, and specify processes for press releases and media interactions |
Metrics | • Specify the metrics of success of the collaboration, such as publications and grants, and expected timeline for project-specific goals |
PRACTICE-DRIVEN HIV PREVENTION RESEARCH PRIORITIES
Early involvement of public health practitioners in implementation research is crucial to formulate practice-relevant research questions and to understand the context in which interventions will be introduced. Academic researchers are well-poised to assess the existing knowledge base relevant to those questions and design studies to fill evidence gaps. In practice-driven HIV intervention research, study questions tie directly to public health action, and study outcomes have clear relevance to program metrics, such as the number of individuals who start PrEP or who achieve viral suppression. Early involvement of health departments can also help build support for an intervention. As described in the Consolidated Framework for Implementation Research, the perception of whether an intervention is externally or internally developed is a key factor in implementation success.1 Interventions perceived as coming from outside the organization and being disseminated from a central source (external) are less likely to be successfully routinized in practice than those which are perceived as coming from or being influenced by people within the organization (internal).
We propose several practice-driven research priorities and study questions in Table 3. These questions -- and health department-university research partnerships more broadly -- differ from traditional research approaches in a few ways. First, impact, scalability, and sustainability are central concerns. Efficacious interventions that cannot reach high levels of population coverage or individuals most at risk for poor health outcomes will not have significant public health impact. Second, rather than developing from a theoretical model, these questions typically arise from evidence gaps recognized in the course of public health practice, with theoretical models helping to frame the implementation considerations and research questions. In this way, the practice-driven research approach we advocate here is consistent with the program science approach described by Aral and Blanchard,2,3 which uses data derived from program operations to inform program optimization. Third, the questions in Table 3 and the best approaches to answering them are not necessarily innovative, a key criterion for National Institutes of Health (NIH) grant funding, but nonetheless merit pursuit as part of the effort to end the HIV epidemic.
Table 3.
Public Health Practice-Driven Research Priorities in HIV Prevention and Care Continuum Research
Problem | Examples of Practice-Driven Research Questions |
---|---|
Too few people who are at risk for HIV infection take PrEP | • Which model of PrEP delivery in STD Clinics results in the highest proportion of patients starting PrEP? • How can health departments increase provider capacity to prescribe PrEP in the jurisdiction? • Can PrEP navigators identify people in need of PrEP and generate PrEP demand or should navigators focus on persons referred for assistance? |
Too many people who start PrEP discontinue it | • How can we distinguish “appropriate” from “inappropriate” PrEP discontinuations? • How much do PrEP navigation programs impact PrEP retention and at what unit cost? • What models of PrEP delivery are most effective at retaining patients in the long term? |
Many people living with HIV are unaware of their status | • How can health departments effectively align their testing efforts with the most affected populations? • How can health departments most effectively increase HIV testing in Emergency Departments, jails and primary care settings, and in what situations should each of these activities be prioritized? • How can HIV molecular surveillance data be used to efficiently identify undiagnosed PLWH? |
Viral suppression among persons living with HIV is too low | • How much benefit does same-day ART initiation provide compared to initiation within 1-2 weeks after diagnosis? • What modifiable factors function as the greatest drivers of unsuppressed viral load and what factors define the level of patient need for high intensity services? • What intervention strategies improve durable viral suppression among persons with substantial barriers to care? |
Racial/ethnic disparities in PrEP use, HIV incidence, and viral suppression reflect health inequities | • How should health departments estimate PrEP need and uptake by subgroup? • What interventions increase sustained PrEP use in highly affected groups in which PrEP use is lagging? • What models of care most effectively reduce racial disparities in viral suppression? |
HIV incidence is increasing among people who inject drugs | • What are the comparative HIV case finding outcomes of efforts to reach marginalized PWID for HIV testing? • How should services to provide PrEP and HIV care be prioritized and implemented in the context of an outbreak? • How do syringe services program policies (e.g. one-to-one or unlimited exchange) impact population coverage? |
RESEARCH METHODS WITH PARTICULAR RELEVANCE TO HEALTH DEPARTMENTS
Implementation science offers several methodologic approaches that can raise the standard of evidence in public health practice. The traditional approach to public health program evaluation is the pre-post analysis, the primary virtues of which are simplicity and compatibility with public health resources. However, this approach is limited by the lack of a contemporaneous control group, and reliance on it leaves public health decision makers vulnerable to concluding that an intervention is effective when the pre-post change is due to a secular trend or regression to the mean. Controlled study designs are essential to determine the impact of public health interventions.
Although individual randomization is the gold standard for clinical research, it is often not the appropriate study design for practice-driven research, particularly for interventions designed to be delivered at a population level. Traditional randomized controlled trials (RCTs) that randomize individuals sequentially at the time of study enrollment limit external validity in a few ways. First, restriction of the study population to persons who are reachable and willing to provide informed consent for research participation, or restriction to exclude individuals at high risk for loss to follow-up, takes a study out of the real-world setting. Second, studies that exclude persons with substance use disorders, persons with unstable housing and contact information, adolescents, or pregnant women fail to achieve representation of priority populations for HIV care and prevention programs. Third, traditional RCTs often use methods to ensure strict fidelity to an intervention, which are not used in public health practice. Fidelity to an intervention protocol cannot be assured in practice to the degree it is in research, but a robust intervention should be effective when delivered by different people in variable settings. In situations where individual randomization is not feasible or desirable, or the primary study question is about the performance of an intervention in practice, alternative approaches can enhance both scientific rigor and public health relevance. Pragmatic clinical trials seek to test interventions in real-world settings with participants and interventionists representative of typical patients and health workers. In such studies, individuals can be randomized in groups at the level where the intervention is intended to be delivered, such as by health center or community. When randomization is not practical or appropriate, observational study designs can increase the rigor of analysis.
Pragmatic Trial Design
Cluster randomized trials differ from traditional RCTs in that the unit of randomization is a group rather than an individual. This approach is well-suited to interventions that are designed to improve health outcomes in a system or population. When persons are grouped into clusters at the level of a service site, study analyses need to account for the within-cluster correlation of outcomes,4 and the analysis plan requires attention to agency-level differences that may affect individual health outcomes. The stepped wedge design is a subtype of cluster randomized trial that involves sequential roll-out of an intervention, and thus accounts for secular trends without withholding the intervention from a control group. Phased implementation of an intervention is compatible with health departments’ responsibility to provide interventions broadly, and recognizes the common practical limitation of being unable to simultaneously deploy an intervention to all eligible individuals at all service sites. While variation in the delivery of an intervention between sites is a weakness in traditional RCTs, it is a strength in pragmatic trials because it reflects real-world practice. One example of the stepped wedge cluster randomization approach was the evaluation of a Data to Care program in Seattle & King County, Washington. Investigators from Public Health – Seattle & King County and the University of Washington phased in a Data to Care intervention by clustering eligible individuals according to their medical providers and randomizing the order in which the clusters were contacted.5 This allowed an evaluation comparing the time to viral suppression among individuals during the intervention vs. control periods and accounted for the secular trend of increasing viral suppression in the population.
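As a concrete illustration of how such analyses handle clustering and secular trends, the linear mixed model proposed by Hussey and Hughes4 for stepped wedge trials can be written as:

```latex
Y_{ijk} = \mu + \alpha_i + \beta_j + \theta X_{ij} + e_{ijk}
```

where $Y_{ijk}$ is the outcome for individual $k$ in cluster $i$ during time period $j$; $\mu$ is the overall mean; $\alpha_i \sim N(0, \tau^2)$ is a random effect capturing between-cluster variation; $\beta_j$ is a fixed effect for time period $j$, which absorbs the secular trend; $X_{ij}$ indicates whether cluster $i$ has crossed over to the intervention by period $j$; $\theta$ is the intervention effect of interest; and $e_{ijk}$ is residual error.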
Hybrid implementation-effectiveness designs have been used in a variety of clinical and support service settings to assess both effectiveness and implementation in a single study.6–8 This approach allows rigorous assessment of an intervention while speeding the translation to practice by testing the intervention in real-world settings from the outset. Curran and colleagues categorized hybrid designs into three types: Type 1 studies test the effectiveness of an intervention on health outcomes while systematically gathering information on its implementation, Type 2 studies test the effectiveness of an intervention on both health and implementation outcomes, and Type 3 studies test the viability of an implementation strategy while systematically gathering information on health outcomes.6
The RE-AIM model aligns with hybrid trial designs because it uses mixed methods to examine intervention effectiveness alongside implementation processes with pre-defined measures of reach, effectiveness, adoption, implementation, and maintenance.9 This pragmatic approach guides the measurement of essential program elements associated with the sustainable adoption and implementation of effective and generalizable evidence-based interventions. It can be applied to identify how intervention components are implemented in order to devise strategies to boost uptake and effectiveness.10
Observational Comparison Groups to Assess Intervention Effectiveness
Observational comparison groups can be historical or contemporaneous. Both types are limited by selection bias, and historical comparison groups cannot account for the influence of secular trends, but this approach allows a controlled analysis without randomization. One example of an observational comparison group study design employed by university-health department collaborators was the evaluation of New York City’s Ryan White HIV Care Coordination Program. The program enrolls newly diagnosed patients, patients initiating HIV treatment, and those with recent treatment/adherence failure, high viral load, or gaps in medical care. Investigators from the City University of New York (CUNY) and the New York City Department of Health and Mental Hygiene (NYC DOHMH) merged programmatic data from the Care Coordination Program with HIV surveillance data, and defined a contemporaneous non-intervention comparison group using surveillance-based emulation of program eligibility criteria and propensity score matching within distinct baseline treatment-status groups.11 This allowed comparison of the pre-post analysis approach to one that included a comparison group to evaluate short-term and durable viral suppression.12–14 Another recent example was the evaluation of a PrEP roll-out program in New South Wales, Australia. Investigators from the University of New South Wales and the New South Wales Ministry of Health evaluated HIV incidence outcomes in the cohort of the first 3700 participants in the PrEP program and the change in population-level HIV diagnoses in New South Wales before and after PrEP roll-out.15
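To make the matching step concrete, the sketch below illustrates propensity score estimation followed by greedy 1:1 nearest-neighbor matching within a caliper. It is purely illustrative: the covariates and data are hypothetical, and the cited CUNY/NYC DOHMH evaluation used its own eligibility-emulation and matching procedures, not this code.

```python
# Illustrative sketch only: hypothetical covariates and simulated data, not
# the code used in the cited Care Coordination Program evaluation.
import math
import random

def _sigmoid(z):
    z = max(min(z, 35.0), -35.0)  # guard against math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Fit a logistic regression (intercept + weights) by gradient ascent."""
    n = len(X)
    w = [0.0] * (len(X[0]) + 1)  # w[0] is the intercept
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            err = yi - _sigmoid(w[0] + sum(a * b for a, b in zip(w[1:], xi)))
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    return w

def propensity(w, xi):
    """Predicted probability of program enrollment given baseline covariates."""
    return _sigmoid(w[0] + sum(a * b for a, b in zip(w[1:], xi)))

def match(scores_t, scores_c, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on the propensity score."""
    used, pairs = set(), []
    for i, s in sorted(enumerate(scores_t), key=lambda p: -p[1]):
        best, best_d = None, caliper
        for j, c in enumerate(scores_c):
            if j not in used and abs(s - c) <= best_d:
                best, best_d = j, abs(s - c)
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs

# Simulated example: two hypothetical baseline covariates per person.
random.seed(7)
treated = [[random.gauss(1.0, 1.0), random.random()] for _ in range(20)]
controls = [[random.gauss(0.0, 1.0), random.random()] for _ in range(60)]
w = fit_logistic(treated + controls, [1] * 20 + [0] * 60)
scores_t = [propensity(w, x) for x in treated]
scores_c = [propensity(w, x) for x in controls]
pairs = match(scores_t, scores_c)  # matched (treated, control) index pairs
```

In practice, matching within distinct baseline treatment-status groups, as the investigators did, amounts to running a procedure like this separately within each stratum.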
The regression discontinuity design is a quasi-experimental approach used to support causal inference about the effect of public health programs without randomization.16 One recent application of this approach assessed the influence of national Treat All policies for HIV on the rapid initiation of ART.17 Using longitudinal data on 814,603 patients enrolling in HIV care during 2004-2018 at HIV clinics in six sub-Saharan African countries, investigators analyzed the change in the proportion of patients initiating ART within 30 days of enrollment in HIV care before and after country-level adoption of Treat All policies.
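The core logic of a regression discontinuity in time can be sketched as follows. The data here are simulated to illustrate the idea and are not drawn from the cited multi-country cohort analysis.

```python
# Toy regression discontinuity sketch with simulated (not actual) data:
# estimate the jump in the proportion of patients starting ART within 30 days
# at the calendar time a Treat All policy takes effect.

def ols(xs, ys):
    """Least-squares intercept and slope for a simple linear trend."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope

def rd_effect(times, outcomes, cutoff=0.0):
    """Fit separate trends on each side of the cutoff; return the jump there."""
    left = [(t, y) for t, y in zip(times, outcomes) if t < cutoff]
    right = [(t, y) for t, y in zip(times, outcomes) if t >= cutoff]
    a_l, b_l = ols([t for t, _ in left], [y for _, y in left])
    a_r, b_r = ols([t for t, _ in right], [y for _, y in right])
    return (a_r + b_r * cutoff) - (a_l + b_l * cutoff)

# Simulated monthly data: a modest secular trend plus a 0.25-point jump in
# rapid ART initiation when the policy takes effect at time 0.
times = list(range(-24, 24))
outcomes = [(0.30 if t < 0 else 0.55) + 0.005 * t for t in times]
jump = rd_effect(times, outcomes)  # recovers the simulated 0.25 jump
```

Fitting trends on each side of the cutoff is what distinguishes this design from a simple pre-post comparison: the secular trend is modeled explicitly rather than mistaken for a program effect.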
THE ROLE OF FEDERAL AGENCIES AND THE NATIONAL ENDING THE HIV EPIDEMIC INITIATIVE
NIH’s HIV research priorities include implementation science to improve the delivery of HIV-related services, and new NIH funding opportunities associated with the national Ending the HIV Epidemic plan are designed to foster collaboration between university-based CFARs/ARCs and health departments. These developments represent important progress. The type of collaborative implementation science research we discuss here has been limited to date by the absence of a clear funding mechanism and a paucity of investigators with experience in both research and public health practice. The Centers for Disease Control and Prevention (CDC) and the Health Resources and Services Administration (HRSA) fund implementation of interventions already shown (or presumed) to be effective, and CDC and HRSA grants often expressly exclude the use of grant resources for research. Meanwhile, the NIH emphasis on innovation favors the development of novel interventions. As such, practice-driven HIV research is not perfectly aligned with any one federal agency. This problem is exacerbated by the organization of NIH-funded training programs for doctoral-level researchers that do not expose trainees to public health research or prepare them for careers integrating public health practice and research. Greater support from federal agencies for public health practice-driven research and the development of public health research careers could have an enormous impact on the HIV prevention and care workforce. The Ending the HIV Epidemic initiative is reason for optimism for the future.
CONCLUSION
Too often, the gap between evidence and practice reflects a mismatch in the perspectives of public health practitioners and researchers. High-impact research to inform public health practice requires early involvement of health departments, starting with the identification of research priorities. Several implementation science methods can increase the rigor of public health program research, and long-term, stable university-health department teams are best equipped to carry out this type of work. The national Ending the HIV Epidemic initiative presents an outstanding opportunity for researchers and public health practitioners to collaborate and for federal agencies to build the capacity of health department HIV/STD programs.
ACKNOWLEDGEMENTS
The authors would like to acknowledge their many health department and research colleagues who have contributed to the ideas herein through ongoing conversations. JD and MG specifically acknowledge the NN health departments who participated in the CDC-supported Technical Cooperation Working Group, Program Evaluation Technical Assistance Event (PETE) course, and site visits during 2014-2019. The authors would also like to acknowledge Drs. Alan Greenberg, David Purcell, and Christopher Gordon, who have led national efforts to increase collaboration between CFAR researchers and health departments through the CFAR HIV Continuum of Care/ECHHP Working Group.
Conflicts of Interest and Sources of Funding: This work was supported by the University of Washington Center for AIDS Research (CFAR), an NIH-funded program (P30AI027757); the Einstein-Rockefeller-CUNY CFAR (P30 AI124414); the HIV Center for Clinical and Behavioral Studies (P30 MH043520); and NIH grants R01MH101028 and R01MH117793. JCD and MRG have conducted research unrelated to this work supported by grants to the University of Washington from Hologic.
REFERENCES
- 1. The Center for Clinical Management Research. Consolidated Framework for Implementation Research. Available at: https://cfirguide.org/constructs/. Accessed April 19, 2019.
- 2. Blanchard JF, Aral SO. Program Science: an initiative to improve the planning, implementation and evaluation of HIV/sexually transmitted infection prevention programmes. Sex Transm Infect. 2011;87:2–3.
- 3. Aral SO, Blanchard JF. The Program Science initiative: improving the planning, implementation and evaluation of HIV/STI prevention programs. Sex Transm Infect. 2012;88:157–159.
- 4. Hussey MA, Hughes JP. Design and analysis of stepped wedge cluster randomized trials. Contemp Clin Trials. 2007;28:182–191.
- 5. Dombrowski JC, Hughes JP, Buskin SE, et al. A cluster randomized evaluation of a health department Data to Care intervention designed to increase engagement in HIV care and antiretroviral use. Sex Transm Dis. 2018;45:361–367.
- 6. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50:217–226.
- 7. Bernet AC, Willens DE, Bauer MS. Effectiveness-implementation hybrid designs: implications for quality improvement science. Implement Sci. 2013;8:S2.
- 8. Simmons MM, Gabrielian S, Byrne T, et al. A Hybrid III stepped wedge cluster randomized trial testing an implementation strategy to facilitate the use of an evidence-based practice in VA Homeless Primary Care Treatment Programs. Implement Sci. 2017;12:46.
- 9. RE-AIM. About RE-AIM. Available at: http://re-aim.org. Accessed April 19, 2019.
- 10. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–1327.
- 11. Robertson MM, Waldron L, Robbins RS, et al. Using registry data to construct a comparison group for programmatic effectiveness evaluation: The New York City HIV Care Coordination Program. Am J Epidemiol. 2018;187:1980–1989.
- 12. Irvine MK, Chamberlin SA, Robbins RS, et al. Improvements in HIV care engagement and viral load suppression following enrollment in a comprehensive HIV care coordination program. Clin Infect Dis. 2015;60:298–310.
- 13. Nash D, Robertson MM, Penrose K, et al. Short-term effectiveness of HIV care coordination among persons with recent HIV diagnosis or history of poor HIV outcomes. PLoS One. 2018;13:e0204017.
- 14. Robertson MM, Penrose K, Irvine MK, et al. Impact of an HIV care coordination program on durable viral suppression. J Acquir Immune Defic Syndr. 2019;80:46–55.
- 15. Grulich AE, Guy R, Amin J, et al. Population-level effectiveness of rapid, targeted, high-coverage roll-out of HIV pre-exposure prophylaxis in men who have sex with men: the EPIC-NSW prospective cohort study. Lancet HIV. 2018;5:e629–e637.
- 16. Bor J, Moscoe E, Mutevedzi P, Newell ML, Barnighausen T. Regression discontinuity designs in epidemiology: causal inference without randomized trials. Epidemiology. 2014;25:729–737.
- 17. Tymejczyk O, Brazier B, Y CT, et al. Rapid HIV treatment initiation improves with Treat All adoption in six sub-Saharan African countries: regression discontinuity analysis [Abstract 1016]. Presented at: Conference on Retroviruses and Opportunistic Infections; 2019; Seattle.