Abstract
This article examines the consequences and causes of low enrollment of Black patients in clinical trials. We develop a simple model of similarity-based extrapolation that predicts that evidence is more relevant for decision-making by physicians and patients when it is more representative of the group being treated. This generates the key result that the perceived benefit of a medicine for a group depends not only on the average benefit from a trial but also on the share of patients from that group who were enrolled in the trial. In survey experiments, we find that physicians who care for Black patients are more willing to prescribe drugs tested in representative samples, an effect substantial enough to close observed gaps in the prescribing rates of new medicines. Black patients update more on drug efficacy when the sample that the drug is tested on is more representative, reducing Black-white patient gaps in beliefs about whether the drug will work as described. Despite these benefits of representative data, our framework and evidence suggest that those who have benefited more from past medical breakthroughs are less costly to enroll in the present, leading to persistence in who is represented in the evidence base.
JEL Codes: D91, I12, I14, O31, O33
As a physician caring for patients in an urban safety-net setting and wanting to provide the best evidence-based preventive care...I would spend as much time on the science as I devoted to reinforcing with patients why they should still trust these guidelines and the process, despite the unrepresentative populations in the evidence base.
—Kirsten Bibbins-Domingo (NASEM 2022, xii)
I. Introduction
Innovation does not benefit everyone equally (Jones and Kim 2018; Aghion et al. 2019; Kline et al. 2019). Research investments skew toward developing technologies appropriate for more profitable groups (Kremer and Glennerster 2004; Cutler, Meara, and Richards-Shubik 2012; Jaravel 2019; Michelman and Msall 2021), and diffusion often occurs faster among the well-connected or well-educated (Skinner and Staiger 2005, 2015; Glied and Lleras-Muney 2008; Foster and Rosenzweig 2010; Papageorge 2015; Agha and Molitor 2018; Hamilton et al. 2021). In this article, we explore a third dimension of innovation and inequality. We ask whether the low enrollment of certain groups in the research and development process (Koning, Samila, and Ferguson 2021) creates gaps in how much group members use those technologies. Put differently, does how a technology is developed affect who adopts it?
Our context is new drug approval in the United States, where information on drug safety and efficacy—generated from clinical trials on human subjects—must be submitted to the U.S. Food and Drug Administration (FDA) before the drug can be sold. Racial disparities in the production of clinical evidence and the eventual diffusion of products are common (Wang et al. 2007; Jung and Feldman 2017; McCoy et al. 2019; Ding and Glied 2022; Elhussein et al. 2022). As Figure I documents, Black patients are consistently underrepresented in clinical trials relative to their share in the U.S. population (Panel A) and are similarly underrepresented in prescriptions for newly approved medications (Panel B). Population share is the implied benchmark in Figure I, and we note that Black patients are often even more underrepresented relative to their disease burden (Green et al. 2022). Although other groups have also been historically underrepresented, we focus on Black Americans for several reasons, including the history of racial discrimination and associated distrust, persistent racial disparities in health outcomes, and continued underrepresentation in research.1
Figure I. Racial Disparities in the Development and Distribution of New Drugs.
Panel A plots the median enrollee percentage by race (Black and white) for pivotal clinical trials, studies that support new-drug applications to the FDA, over time. Panel B plots the median new drug prescription percentage by race in each year relative to its approval. Straight lines in Panels A and B plot population shares by race in the United States as reported in the 2020 census (Black population share is 13.6% and non-Hispanic white population share is 59.3%; U.S. Census Bureau 2021). Panel A is drawn from the FDA Drug Trials Snapshots data, and Panel B is from the Medical Expenditure Panel Survey data (Agency for Healthcare Research and Quality 2022). Online Appendix Figure B1 plots Panel A using a longer time series from ClinicalTrials.gov. Online Appendix Figure B2 plots the distribution of race in trials using both the ClinicalTrials.gov and FDA Drug Trials Snapshots data sets. Online Appendix Figure B4 plots prescribing rates of new drugs per 1,000 individuals in each racial group.
While gaps in trial enrollment are well-documented, the consequences, if any, have not been rigorously studied. Two natural questions emerge. First, does representative data matter to physicians and patients? Second, if so, why are such data not (endogenously) supplied by the market? To address the first question, we conduct two survey experiments designed to understand physician and patient reactions to trial evidence. To address the second question, we turn to a theoretical framework that sheds light on how underrepresentation may persist, even if representative data would lead to higher drug demand. It also identifies potential levers for policy intervention, which we assess in the context of case studies.
Our framework models how physicians and patients interpret the evidence that supports new technologies when making decisions about whether to adopt them. Through instruction in evidence-based medicine (EBM), physicians are trained to consider whether a new product would work similarly well in their patients as those in its trial. A typical question from EBM training is: “Are the participants in the study similar enough to my patient?” (Masic, Miokovic, and Muhamedagic 2008, 222). Inspired by this process and the role of reasoning by similarity and analogy in belief formation (e.g., Gilboa and Schmeidler 1995; Mullainathan, Schwartzstein, and Shleifer 2008; Bordalo, Gennaioli, and Shleifer 2020; Bordalo et al. 2022; Malmendier and Veldkamp 2022), we develop a model of similarity-based extrapolation. We assume that people update more readily from evidence when their patients (in the case of doctors) or people like them (in the case of patients) have more in common with the experimental sample. Our framework incorporates this assumption in a simple way: it assumes doctors and their patients have in mind a model where a given group characteristic (e.g., race) could be correlated with drug efficacy and they update model parameters using Bayes’s rule. A key result of our framework is that—conditional on trial data—the perceived benefit of a drug will be increasing not only in the average reported efficacy but also increasing at a decreasing rate in the share of one’s own group in the trial.
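To make the key comparative static concrete, the following sketch implements a stylized normal-normal updating example in the spirit of similarity-based extrapolation. This is our own illustrative construction, not the paper's model, and every parameter value (prior mean, variances, the cross-group penalty) is a hypothetical assumption: own-group trial observations are treated as direct signals of own-group efficacy, while other-group observations carry a common extra variance that never averages out, capturing doubt about whether effects extrapolate across groups.

```python
def perceived_efficacy(trial_mean, own_share, n_total,
                       prior_mean=0.3, prior_var=0.04,
                       noise_var=1.0, cross_group_var=0.5):
    """Posterior-mean belief that a drug works for one's own group.

    Stylized normal-normal Bayesian updating (illustrative only).
    Own-group subjects are direct signals of own-group efficacy;
    other-group subjects carry a common extra variance
    (cross_group_var) reflecting extrapolation doubt. All default
    parameter values are assumptions, not estimates from the paper.
    """
    n_own = own_share * n_total
    n_other = (1.0 - own_share) * n_total
    prec_prior = 1.0 / prior_var
    prec_own = n_own / noise_var
    # The extrapolation uncertainty is common to all other-group
    # subjects, so their combined precision is bounded above by
    # 1 / cross_group_var no matter how many are enrolled.
    prec_other = 0.0 if n_other == 0 else 1.0 / (noise_var / n_other + cross_group_var)
    data_prec = prec_own + prec_other
    return (prec_prior * prior_mean + data_prec * trial_mean) / (prec_prior + data_prec)
```

Under these assumptions, the posterior mean rises in the reported trial mean and—when the trial mean exceeds the prior—rises in the own-group share at a decreasing rate: raising the own-group share from 5% to 30% of a 500-person trial moves beliefs far more than raising it from 30% to 100%, mirroring the framework's prediction of diminishing returns to representation.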
To empirically assess whether representation affects clinical decisions and health behavior, we designed and conducted two survey experiments among patients and physicians. After completing a short module eliciting patient panel characteristics, physicians viewed profiles of diabetes drugs, including the drug’s mechanism of action and the design of the supporting clinical trials. For each profile, the share of Black trial subjects and average drug efficacy in trials were cross-randomized from distributions of values collected in a comprehensive search of clinical literature. To introduce sufficient variation in sample demographics and efficacy within the mechanism of action of a given drug, the drugs shown were hypothetical but were based on recently developed drugs to treat diabetes.2 After viewing each profile, physicians were asked to indicate their intent to prescribe the drug to patients in their care.
A separate experiment was designed for patients because they must fill and adhere to a prescription to realize any health gains. We recruited 275 patients with diagnosed hypertension who identified as either white or Black. We assessed their interest in a novel therapy to treat hypertension that had been tested in a real clinical trial at two separate sites with varying shares of Black participants. Other product characteristics, including drug efficacy in lowering blood pressure, were held constant.
We find that physicians are more willing to prescribe drugs tested on representative samples. A 1 standard deviation increase in the share of Black trial participants increases physician prescribing intention for a given drug by 0.11 standard deviation units. The magnitude of this effect on prescribing is medically meaningful, equivalent to roughly half the standardized effect of the drug’s efficacy. Prescribing intention also correlates strongly with donations, measured a few weeks after the initial intervention, to campaigns aimed at boosting trial participation among underrepresented minority communities. In prespecified heterogeneity analyses, we find that the effect of increasing Black representation in a clinical trial sample on prescribing intention is close to zero for doctors who do not routinely see Black patients and rises steeply in the share of a physician’s own patients who are Black.
In our patient experiment, when Black respondents were presented with a representative trial, they viewed the drug in question as significantly more relevant for their own blood pressure control and were 20 percentage points more likely to state that the drug will work as well for them as it was shown to work in the trial. We also find in a separate but similar survey experiment that Black patients exposed to a representative trial were more likely to indicate that they want to participate in future clinical trials, and that they viewed the researchers as more trustworthy. This suggests that increasing representation might be one tool to help address medical mistrust. In contrast, and consistent with the model’s prediction of diminishing returns to representation, we do not find significant effects associated with the trial composition for white patients. The combination of physician and patient results suggests that doctors are broadly acting as agents for their patients.
Survey experiments are important tools for uncovering people’s mental models and perceptions (Stantcheva 2021, 2022) but are also subject to critiques, such as experimenter demand and social desirability bias. Our experiments were designed to mitigate such concerns. First, we used neutral recruitment materials stating that our goal was broadly to understand views on medical research, mirroring language from a nonprofit dedicated to the same whenever feasible.3 Second, we recruited both white and Black patients. If the response to sample representation was solely attributable to social desirability, we might expect to find similar effects for both groups (we do not). Third, survey responses correlate with actual donation behavior in a follow-up study.
A related concern is that our experiment may have informed patients and doctors about something that they did not already know about—that is, the composition of clinical trials. If so, our results might overstate the degree to which trial representation influences treatment choices. Indeed, the order of questions and salience of race might have played a role in the magnitudes of our effects. To better understand baseline knowledge in our study populations, we reviewed literature on how doctors evaluate trials and obtained data on patients’ knowledge regarding medical research. Physicians educated at accredited medical colleges in the United States are explicitly taught to consider the applicability of trial findings to their own patients through EBM training (Blanco et al. 2014).
In our survey, 72% of physicians reported that they have been asked by patients whether a new medicine will “work in people like me.” Data from the nonprofit Research!America reveal that Black and white respondents are aware of clinical trials (80% and 88%, respectively). However, Black respondents are less likely to believe that science benefits them and less likely to consent if invited to participate in clinical trials than white respondents. Two additional pieces of evidence suggest that trial representation is taken into account by (at least some) doctors and patients: one comes from stakeholder quotes compiled in the writing of a recent National Academies of Sciences, Engineering, and Medicine report (NASEM 2022) and another comes from the association between more representative clinical trials and higher prescribing rates for new drugs among Black patients (see Section VI.B and Online Appendix Table D2).
Turning to mechanisms, we find that doctors—and to a greater extent, patients—lack confidence in extrapolating from samples that are not representative of them or their patients. This is true of both Black patients (when extrapolating across racial groups) and white patients (when extrapolating across countries). One question is whether this hesitancy to extrapolate, especially among doctors, is a mistake. Given the current state of the literature and data availability, this does not seem to be a clear mistake. Manski, Mullahy, and Venkataramani (2022) show, under mild assumptions regarding doctors’ objective function, that including any predictive factor in clinical decision making is welfare enhancing. Is race in fact predictive of treatment effects? First, precisely because representation is so low, clinical trials offer limited direct evidence on this question. Green et al. (2022) review 290 new drug approvals in the FDA Drug Trials Snapshots data, and approximately 80% did not report treatment effects for Black patients separately; among those that did, 91.4% and 98.1% found no difference in side effects and benefits, respectively. Ramamoorthy et al. (2015) report a higher rate of heterogeneous effects in a review of postmarketing analyses, finding such effects for nearly 20% of all new drugs. Second, because a medication’s mechanism of action (i.e., its pharmacodynamics) is often incompletely specified and evolving, it is difficult to provide assurances that the findings will extrapolate across patients with different characteristics without trial evidence. Third, there is a strong relationship between social class and race in the United States that could affect pharmacokinetics, or how the drug is metabolized. Indeed, in our experiments, respondents cited the possibility of biological, socioeconomic, and environmental differences that could alter drug performance as rationales for their lack of confidence.
Fourth, even if physicians believe findings do extrapolate, they might internalize patients’ lack of confidence for a variety of reasons (Ellis and McGuire 1986), including that it might affect patient adherence. Our qualitative findings from doctors explaining why they care about representation include concerns regarding treatment effect heterogeneity and concern for patients’ views.
Importantly, we find that increasing the representativeness of medical research can reduce prescription gaps. Physicians treating Black patients are considerably less willing to prescribe drugs approved on the basis of unrepresentative trials—at all levels of drug efficacy—compared with physicians who treat white patients, mirroring the racial prescription gap observed in the Medical Expenditure Panel Survey. When clinical trial samples are more representative of Black patients, this gap disappears. The difference between the share of Black and white patients who believe that the drug will work as well for them as it did in clinical trials is also eliminated when respondents are shown results generated from more representative data. These findings suggest that policies that increase representation in the evidence base for new technologies could narrow gaps in their adoption.
These findings then also imply that a firm could increase sales by recruiting a more representative sample. The trade-off in doing so is cost—our framework and evidence suggest that a history of underrepresentation in (voluntary) research leads Black patients to anticipate lower benefits of trial enrollment, making recruitment more costly. With the status quo recruitment infrastructure, representation of Black patients remains low—perpetuating doubt about whether trial findings extrapolate to them and generating a cycle of underrepresentation.
Although policies that break this cycle of underrepresentation may take many forms, we discuss case studies of successful investments in what we call inclusive infrastructure. We document considerable variation in trial representation across diseases and contrast two especially different cases: cancer and HIV/AIDS. Although research into both diseases is supported by large, coordinated networks with substantial federal investment, Black patients are well represented in HIV/AIDS trials and poorly represented in cancer trials, relative to population share and disease burden benchmarks. To understand the origins of these differences, we draw on interviews with clinical trials networks, qualitative research, and administrative data. We highlight two key features that differentiated HIV/AIDS trials: engagement with priority population communities from protocol design to recruitment, and site selection in and around safety net hospitals. These differences may explain its more representative evidence base and, more suggestively, its higher diffusion rates of new products.
Our work contributes to a growing literature that seeks to understand the role of innovation in creating or exacerbating inequality. Previous studies have focused on how endogenous (demand-pull) investment can affect the composition of resulting technologies. Most closely related is Cutler, Meara, and Richards-Shubik (2012), who find that allocation of NIH grant funding disproportionately flows toward majority groups when physicians “treat what they see,” widening health gradients in settings where disease burden differs across groups. Michelman and Msall (2021) highlight the harm from regulatory restrictions on women’s participation in early-stage clinical trials, which dampens patent activity for women-specific conditions. Other scholarship focuses attention on how product characteristics affect diffusion. Papageorge (2016) develops a dynamic structural model of demand for medical treatment when patients trade off health and work experience, illustrating how side effects associated with HIV medication could affect treatment decisions among employed persons. Hamilton et al. (2021) extend this model, describing more generally how patient preferences exert a demand externality, tilting innovation toward less efficacious drugs and lowering overall experimentation. We build on these important contributions by developing and testing an alternative link between innovation and inequality: we ask whether unequal representation in the R&D process can induce inequality directly by making it more difficult for people to extrapolate from the data to their situation.
We also contribute to a literature on race and trust. People from different backgrounds may have different experiences (i.e., different data to readily extrapolate from), and these experiences can lead to increased or decreased levels of trust that a variety of institutions are effective for them. Previous research has shown that differential beliefs in the returns to investment opportunities (Boerma and Karabarbounis 2023) contribute substantially to the persistence of the racial wealth gap (Derenoncourt et al. forthcoming). Research also indicates that historical exploitation, violence, and discrimination have led to distrust in the medical system and medical research (Alsan and Wanamaker 2018; Eli, Logan, and Miloucheva 2019), declines in home ownership (Albright et al. 2021), and reduced participation in political processes (Williams 2022). Our article provides a way to think about the consequences of these different experiences for trust more broadly, as the cycle of underrepresentation result applies to any process that includes a participation decision.
The remainder of the article proceeds as follows. Section II provides background information on clinical trials and relevant history. In Section III, we formalize how representative clinical trials may matter to patients and physicians. Section IV describes our two experiments. Section V presents our experimental results. We conclude by drawing lessons from case studies of successful efforts to improve representation in medical research.
II. Background
This section discusses the institutional context of clinical research, including trial financing and costs, the regulatory review process, and factors that shape enrollment. We describe how doctors and patients learn about new drugs and trial results. These features are incorporated into our framework. Online Appendix G provides additional details.
II.A. Clinical Trials Landscape
1. The Drug Development Process.
Before a new drug may be marketed in the United States, the FDA must deem it to be safe and effective. Sponsors seeking to obtain FDA approval typically conduct clinical trials—randomized evaluations of the new drug relative to a placebo or current standard of care (National Institutes of Health 2017). Data drawn from ClinicalTrials.gov, the largest global registry of clinical trials, suggest that private firms are the most frequent single primary sponsor of clinical trials (36%), an order of magnitude more frequent than U.S. federal agencies (3%).4 The remainder of clinical trials are sponsored by academic institutions, hospitals, and nonprofit organizations.5
The drug approval process begins when sponsors identify a promising lead compound—the core component of what will become a drug. Sponsors typically file initial patent applications on the drug just before beginning Phase I clinical trials.6 When firms begin clinical testing, they also file investigational new drug (IND) applications, which draw on data from preclinical testing. Patent terms are 20 years long, though firms may receive other forms of market exclusivity that can extend effective patent life.
Drug sponsors must complete three stages of clinical testing before applying for marketing approval. Phase I trials are intended to establish safety, determine appropriate dosages, and identify side effects. Phase II and III trials test efficacy, monitor safety, and compare the product to existing alternatives. Whereas Phase I trials often recruit a small number of healthy volunteers, Phase II and III trials recruit from the target patient population and may enroll thousands of people. Drug approval hinges on so-called pivotal trials, which are typically Phase III trials that aim to demonstrate efficacy.
2. The Cost of Clinical Trials.
Clinical research is expensive. Recent estimates suggest that the median cost of a pivotal clinical trial providing evidence of efficacy to the FDA is about $19 million (Moore et al. 2018).7 Industry reports suggest that the most expensive step of the clinical trial process is recruiting patient participants in Phases II and III (Sertkaya et al. 2014). Accrual rates—the speed with which a trial can recruit eligible patients—are cited as the most common reason for trial delays and, in some cases, failure. Slower accrual rates can lengthen clinical trial periods and erode patent life (Budish, Roin, and Williams 2015). Thus, trial sponsors aim to identify and enroll patients as quickly as possible, often contracting with third parties that specialize in clinical trial enrollment, and sometimes moving operations overseas where recruitment costs tend to be lower (Qiao, Alexander, and Moore 2019).
The cost—in terms of money and time—of enrolling a new patient in a trial also varies across demographic groups. Obtaining proprietary information of these costs is difficult; however, several published studies and our own qualitative interviews with stakeholders provide corroborating evidence that white patients tend to require fewer resources and are thus much lower cost to recruit (see Online Appendix A.2 for details). Efforts to reach out to non-white communities typically involve additional staff, tailored recruitment materials, and new relationships with health care networks—all of which contribute to a comparatively high cost per enrollee (Marquez et al. 2003).
Two additional pieces of evidence provide some quantitative information on the size of these cost differences. First, consider the case of Moderna, which ran one of the highest-stakes clinical trials in recent history for its first-generation SARS-CoV-2 vaccine. In September 2020, the company announced that enrollment was going to be slowed for the explicit purpose of improving representation of patients from racial and ethnic minorities in the trial. Moderna’s stock price fell 8% upon the announcement (Online Appendix Figure B5) (Tirrell and Miller 2020). A second illustration is the cost of recruiting experimental subjects for online surveys. In Online Appendix Figure B6, we plot price quotes for U.S.-based respondents that we received for our own study from three large survey firms. All three firms quoted higher prices to recruit Black respondents as compared to white respondents—with prices ranging from 4% to 130% more to recruit a Black respondent. We endogenize these cost differences and explore their effects in our conceptual framework (Sections III and VI).
3. Enrollment Patterns and Barriers to Participation.
The cost differences described above may play a role in explaining the trial enrollment patterns observed in Figure I. Black patients make up just 5% of trial enrollees in the median clinical trial—far less than the 13.6% of the U.S. population that they make up (U.S. Census Bureau 2021). This level has remained flat since data collection efforts began (Online Appendix Figure B1). Based on the Research!America survey data, Black Americans are less likely to have confidence in research institutions, to believe science benefits them, or to enroll in clinical trials (Table I).8 These findings mirror those of our own survey data: an analysis of open-text responses reveals that Black patients are more likely to cite trust, privacy, and racism as reasons not to enroll, whereas white patients cite logistical barriers and comorbidities (Online Appendix Figure B7).
TABLE I.
Views on Science and Clinical Trials among U.S. Respondents
| | Black respondents (1) | White respondents (2) | Difference (3) |
|---|---|---|---|
| Confidence in research institutions | 2.829 (0.963) | 3.082 (0.822) | −0.253*** |
| Heard of clinical trial | 0.796 (0.374) | 0.875 (0.339) | −0.079*** |
| Would enroll in clinical trial if doctor recommends | 0.783 (0.384) | 0.837 (0.379) | −0.054*** |
| Trust not reason for lack of enrollment | 0.432 (0.463) | 0.536 (0.514) | −0.104*** |
| Science is beneficial | 0.284 (0.419) | 0.383 (0.493) | −0.099*** |
| Would get FDA-approved vaccine | 2.907 (1.024) | 3.069 (1.099) | −0.163 |
Notes. The table reports the survey responses from Black and white U.S.-based respondents for a set of questions regarding science. Data are from a national survey conducted by the nonprofit Research!America over 2013, 2017, and 2021. Heard of Clinical Trial, Trust, and Science is Beneficial are dichotomous variables. Other variables are on an ordinal scale. See Online Appendix H for details on variable construction. Standard deviations are in parentheses.
*, **, *** refer to statistical significance at the 10%, 5%, and 1% levels, respectively.
4. Clinical Trials Data.
Upon successful completion of the three phases of clinical trials, sponsors submit new drug applications (NDAs) to the FDA. Based on these data, the FDA determines whether the drug will be approved for sale in the United States and for which specific indications. Currently, the FDA only requires that a drug is proven efficacious for the “target population,” which in practice translates to patients with the targeted condition. Most trials are therefore powered to detect a mean difference in the primary endpoint between treatment and control groups and not to detect subgroup-specific treatment effects, which are not commonly reported (Green et al. 2022). The most common statistic reported in abstracts and quoted in advertisements is therefore a drug’s average treatment effect, as demonstrated in the trial. Demographic characteristics of the sample are typically provided in the first table (the balance table) of journal articles or in the short description of the study population in drug advertisements.9
5. The Market for New Drugs.
Although analogous approval processes occur worldwide, approval in the U.S. market is critical for pharmaceutical firms: U.S. sales were projected to account for nearly 50% of the $1.2 trillion in global pharmaceutical revenues earned in 2020 (IQVIA 2015) and a disproportionate share of pharmaceutical net income (Goldman and Lakdawalla 2018; Ledley et al. 2020). In particular, the United States currently lacks the price controls that other countries use to curtail spending and is permissive with respect to marketing. Given these features of the market, we focus on demand in the United States, among physicians and patients.
II.B. Demand for New Drugs in the United States
1. How Physicians Learn about New Drugs.
Randomized controlled trials are considered the gold standard for causal inference in medicine and have been since their popularization by the British Medical Research Council and subsequent adoption by the FDA in 1962 (Cochrane 1972). EBM is a step-by-step process that facilitates the “reasonable use of modern best evidence in making decisions about the care of individual patients” (Martí-Carvajal 2020, 1). EBM’s five steps aim to integrate clinical experience, patient values, and research findings (Blanco et al. 2014).10
After completing their formal training, physicians often access trial information via multiple sources. These sources include ClinicalTrials.gov, which as of April 2019 received more than 215 million page views per month and 145,000 unique visitors daily. (See their website for additional details.) They also include academic journals, society or national practice guidelines, pharmaceutical representatives, medical conferences, and, more informally, online and in-person social networks. To maintain an active medical license, many primary care doctors participate in continuing medical education (CME). In addition to meeting requirements set by professional associations, doctors might wish to stay up to date with the literature for other reasons, including a desire to help their patients (Doximity 2014).
2. How Patients Learn about New Drugs.
Patients learn about new drugs mainly through their physicians and via advertisements. The United States and New Zealand are the only two countries that allow firms to market medications directly to patients (Schwartz and Woloshin 2019). Between 2016 and 2018, firms spent $17.8 billion on direct-to-consumer advertising (DTCA) associated with 553 unique drugs (U.S. Government Accountability Office 2021). Ads can be precisely targeted based on people’s search history and sometimes include links to clinical information. Patient advocacy groups in the United States are also key in disseminating information about new drugs—lists of trials and summaries of evidence exist for nearly all major categories of disease.
Perhaps in part because of this outreach, data from Research!America show that 80% of Black respondents and 88% of white respondents had heard of clinical trials (Table I). Moreover, we document in our survey of primary care physicians that 72% report having ever been asked by their patients about whether a new medication will “work in people like me.” The share of physicians asked this question on a regular basis is higher among those that treat Black patients (Online Appendix Figure B8).
Our theoretical framework considers beliefs and behavior of U.S.-based patient-physician dyads with access to information on average treatment effects and demographics from trials; we then report results from experimentally manipulating these two features of trials in Section IV.
III. Organizing Framework
The framework presented here formalizes how representation in the trial process affects perceived benefits of new drugs for patients and their doctors, yielding predictions we can then test experimentally. After presenting experimental tests of these predictions, we return to the framework in Section VI to try to understand why the underrepresentation of Black patients in clinical trials is so persistent.
III.A. Physicians and Patients
Physicians and patients use clinical trial information to understand the benefits of a new treatment and to inform decisions about participation in clinical research. Both agents are important end users of clinical trial information: physicians are the gatekeepers of prescriptions, whereas patients’ adherence behavior determines whether prescribed drugs will have the intended salubrious effect. To abstract from strategic interactions between physicians and patients and instead focus on the core issues surrounding consequences and causes of low representation, we make two assumptions that guarantee that a doctor’s decision of whether to prescribe a treatment (or recommend trial participation) aligns with a patient’s decision to adhere to the prescription (or participate in the trial). First, we follow the standard assumption that everyone shares a common prior. Second, we assume doctors are agents for patients and share their objective function.11
1. Physician and Patient Beliefs.
The assessments of patient-doctor dyad $d$ are influenced by current and historical trial data. Suppose the benefits to treatment for the patient in dyad $d$ equal $b \in \{0, \beta\}$ for $\beta > 0$, where benefits are measured relative to not getting treatment. That is, the treatment either doesn’t work ($b = 0$) or works ($b = \beta$), and $\beta$ parameterizes the stakes of the disease-treatment combination. The likelihood that the treatment works for a patient with characteristics $x$ is given by $p(x)$. Overall, the perceived benefit of treatment, $V_d$, is:

$$V_d = \beta \cdot E_d[p(x) \mid h],$$

where $E_d[\cdot \mid h]$ is the expectation of dyad $d$ on whether the treatment will work and this expectation is conditioned on the data $h$ available at the time of the decision. The assumption that everyone applies the same (explicit or implicit) model of inference allows us to simplify the presentation of the model in two ways. First, the expectation operator is identical across all dyads and we can write $E_d$ as $E$. Second, the perceived benefit of treatment only depends on the patient through the patient’s characteristics $x$ (i.e., it is not heterogeneous conditional on $x$), so whenever it does not cause confusion we write $V$ as a function of $x$ and the available data $h$.
To focus and simplify the exposition, assume $x$ is unidimensional and in $\{0, 1\}$, where $x = 0$ corresponds to “white” and $x = 1$ to “Black.” As noted already, clinical trials rarely report subgroup analyses. Instead, data from a given trial consist of the combination of the average reported efficacy $\bar{e}$ and the fraction of Black participants, $f$. Average efficacy is defined as $\bar{e} = \beta S / N$, where $\beta$ denotes the benefits of the treatment if successful, $S$ the number of trial participants for whom the treatment was in fact successful, and $N$ the number of trial participants.12 The fraction of Black trial participants simply equals $f = \frac{1}{N}\sum_j x_j$, where the summation is taken over the trial participants $j$. The complete history of trial data equals $h_{t-1}$ before treatment $t$’s trial is run and equals $h_t$ after. Our focus will be on beliefs about this treatment and, when it does not cause confusion, we omit the subscript $t$ when referring to it.
The key assumption underlying patients’ and doctors’ model of inference is that, in assessing the likelihood of treatment success for patients with characteristics $x$, they extrapolate more from data on patients with those characteristics than from data on patients with different characteristics. For patients, this could reflect learning from similarity, central to a wide variety of evidence-backed frameworks in psychology and economics.13 For doctors, this is consistent with evidence-based medicine (see Section II.B). Formally, people form beliefs about $p(0)$ and $p(1)$, and hence $V$, by attaching probability $\rho > 0$ to characteristic $x$ mattering.14 We then have

$$E[p(x) \mid h] = \rho \cdot E[p(x) \mid h, x \text{ matters}] + (1 - \rho) \cdot E[p \mid h, x \text{ does not matter}].$$
To generate simple closed-form expressions for the above expectations, we assume priors over $p$ are in the beta family. If $p(x)$ is distributed according to beta distributions prior to the trial data for treatment $t$, with parameters $(a_x, b_x)$ conditional on $x$ mattering and parameters $(a, b)$ conditional on $x$ not mattering, then:

$$E[p(x) \mid h_{t-1}] = \rho \frac{a_x}{a_x + b_x} + (1 - \rho) \frac{a}{a + b}. \tag{1}$$
We set initial conditions for these parameters such that $a_x = b_x$ and $a = b$ (i.e., in the absence of trial data agents assess the likelihood of treatment success as 0.5).
Before clinical trial data are available, people form priors on the efficacy of novel treatments under investigation (more on this later), and they update their beliefs once trial data on those treatments become available. We assume people attribute fraction $f_x$ of the overall number of successes $S$ reported in the trial to study participants with characteristic $x$, where $f_x$ equals the fraction of trial participants with characteristics $x$ (so $f_1 = f$ and $f_0 = 1 - f$).15
Given this assumption, they update their beliefs from trial data on treatment $t$ according to Bayesian updating (see Online Appendix F for precise equations). As is standard, people end up placing some weight on the prior (given by equation (1)) and some on the empirical success probability in the trial, $S/N$.
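The similarity-weighted beta updating described above can be illustrated numerically. The sketch below is ours, not the article’s: the helper name `perceived_success_prob` and all parameter values are illustrative assumptions, chosen only to show the comparative statics.

```python
# Illustrative sketch of the similarity-weighted beta updating rule.
# rho: probability that characteristic x matters; (a_x, b_x) and (a, b):
# beta prior parameters conditional on x mattering / not mattering.

def perceived_success_prob(rho, a_x, b_x, a, b, N, S, f_x):
    """Posterior expectation of p(x) after a trial with N participants,
    S successes, and fraction f_x of participants sharing characteristic x.
    Agents attribute fraction f_x of the successes to their own group."""
    S_x = f_x * S          # successes attributed to group x
    N_x = f_x * N          # effective sample size for group x
    # Conditional on x mattering, only group-x observations are informative.
    post_matters = (a_x + S_x) / (a_x + b_x + N_x)
    # Conditional on x not mattering, all N observations are informative.
    post_not = (a + S) / (a + b + N)
    return rho * post_matters + (1 - rho) * post_not

# A trial that works better than the symmetric 0.5 prior (S/N = 0.8),
# evaluated at increasing levels of group representation f_x:
base = perceived_success_prob(0.5, 1, 1, 1, 1, 1500, 1200, 0.05)
more = perceived_success_prob(0.5, 1, 1, 1, 1, 1500, 1200, 0.15)
even_more = perceived_success_prob(0.5, 1, 1, 1, 1, 1500, 1200, 0.25)

# Perceived efficacy rises with representation (Proposition 1.2)...
assert more > base
# ...with diminishing returns (Proposition 1.3):
assert (even_more - more) < (more - base)
```

The assertions at the end verify the two representation results of Proposition 1 at these particular parameter values.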
Proposition 1.
Supposing $N$ is fixed and average trial efficacy $\bar{e}$ exceeds the prior-belief ratios $\beta \frac{a_x}{a_x + b_x}$ and $\beta \frac{a}{a + b}$, then

1. $\frac{\partial V(x, h_t)}{\partial \bar{e}} > 0$: the perceived benefit of a treatment to a patient is increasing in efficacy, as measured in the clinical trial.
2. $\frac{\partial V(x, h_t)}{\partial f_x} > 0$: the perceived benefit of a treatment to a patient is increasing in the representation of patients with similar characteristics in the clinical trial.
3. $\frac{\partial^2 V(x, h_t)}{\partial f_x^2} < 0$: the degree to which increasing representation in a clinical trial positively impacts perceived benefits for group members is decreasing in the group’s existing trial representation.
Proof. All proofs can be found in Online Appendix F.5.
The intuition is straightforward: when a treatment works better than expected in the trial, people update their beliefs upward on treatment efficacy.16 But the degree to which they update depends on the (effective) sample size of the trial. Given that people place positive probability $\rho$ on characteristic $x$ mattering, the effective sample for patients with characteristics $x$ is increasing in their trial representation $f_x$. Diminishing returns to representation follow from diminishing returns to sample size in (e.g., Bayesian) models of updating.
We assume posteriors from the most similar previous treatment become the prior for a novel drug.17 That is, letting $t' < t$ denote the period in which the most similar past treatment to $t$ was trialed, the prior for treatment $t$ equals the posterior beliefs formed after observing $h_{t'}$. Given this assumption, even when all groups begin with the same prior beliefs on efficacy at the beginning of time (in period 0), the underrepresentation of a given group will lead to a divergence in the perceived benefit of treatment over time (see Online Appendix F.4 for a numerical example). This divergence has important implications for behavior, described next.
2. Patient and Doctor Behavior.
Suppose that a patient with characteristics $x$ participates in a trial for treatment $t$ when she is invited to participate and

$$V(x, h_{t-1}) \geq c_R + \epsilon_i,$$

where $c_R$ equals the nonprice costs of participating in the trial (or convincing a patient to do so) and $\epsilon_i$ is a stochastic shock that is i.i.d. across $i$ according to a differentiable cumulative distribution function $G$.
Similarly, after a successful trial, a patient is treated with treatment $t$ when indicated and

$$V(x, h_t) \geq c_T + P_t + \epsilon_i,$$

where $c_T$ refers to the nonprice costs of prescribing or adhering to treatment $t$, $P_t$ is the price (i.e., copay) for $t$, and $\epsilon_i$ is a stochastic shock that is i.i.d. across $i$ according to $G$. Let

$$R(x, h_{t-1}) = G\big(V(x, h_{t-1}) - c_R\big)$$

be the likelihood that a patient with characteristic $x$ participates in a trial when invited. Similarly, let

$$T(x, h_t) = G\big(V(x, h_t) - c_T - P_t\big)$$

be the likelihood a patient with characteristic $x$ is treated with treatment $t$ when the treatment is indicated.
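The behavioral layer maps perceived benefits into choice probabilities through the distribution $G$. A minimal sketch, assuming a logistic $G$ and illustrative cost values (the function names are ours, not the article’s):

```python
import math

# Sketch of the behavioral layer: participation and treatment take-up
# probabilities as G applied to perceived benefit net of costs. The logistic
# CDF for G and all numeric values below are illustrative assumptions.

def logistic_cdf(z):
    # A differentiable CDF standing in for the model's generic G.
    return 1.0 / (1.0 + math.exp(-z))

def participation_prob(perceived_benefit, nonprice_cost):
    # R(x, h) = G(V(x, h) - c_R): probability of joining a trial when invited.
    return logistic_cdf(perceived_benefit - nonprice_cost)

def treatment_prob(perceived_benefit, nonprice_cost, price):
    # T(x, h) = G(V(x, h) - c_T - P): probability of taking up treatment.
    return logistic_cdf(perceived_benefit - nonprice_cost - price)

# A higher perceived benefit (e.g., from greater past representation)
# raises both probabilities, which is the channel behind Corollary 1:
assert participation_prob(1.2, 0.5) > participation_prob(0.8, 0.5)
assert treatment_prob(1.2, 0.5, 0.3) > treatment_prob(0.8, 0.5, 0.3)
```

Because $G$ is increasing, any force that raises $V$ for a group (such as higher past representation $f_x$) raises that group’s participation and take-up probabilities.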
Corollary 1.
Given Proposition 1, a patient’s demand to participate in a given trial (or a physician’s decision to recommend a trial) is increasing in the degree to which patients who shared their (their patients’) characteristics were represented in previous trials for which the average trial efficacy exceeded the prior-belief ratios. Formally, for such past trials $t' < t$,

$$\frac{\partial R(x, h_{t-1})}{\partial f_x^{t'}} > 0.$$
This result implies that a failure to represent groups in a trial today creates an intertemporal externality, as it becomes more difficult to recruit those groups in a trial tomorrow. Such less-represented group members perceive limited benefits from novel treatments relative to members of more-represented groups.18
Online Appendix F.3 formalizes two additional results on how beliefs affect behavior. First, Corollary F.2 shows that the comparative statics which Proposition 1 establishes for beliefs also hold for behavior: the demand for a new medication is increasing in the efficacy observed in the clinical trial and the representation of patients with similar characteristics in the clinical trial, with diminishing returns to the latter.19 Second, Corollary F.3 shows how historical and contemporaneous underrepresentation of Black patients in clinical trials creates a gap in the perceived benefits and demand for novel drugs between white and Black patients, where white patients have higher perceived benefits and demand relative to Black patients. It goes on to show how increasing Black representation in clinical trials closes these gaps. Table II summarizes our theoretical predictions and how they connect to our empirical results, which we turn to next.
TABLE II.
Summary of Theoretical Predictions and Empirical Results
| Theory | Predictions | Exhibits | Result summary |
|---|---|---|---|
| Prop. 1.1; Cor. F.2.1 | Perceived benefits and demand for a new medication are increasing in trial-reported efficacy | Table III | A 1 std. dev. increase in efficacy increases physician prescribing intention by 0.28 std. dev. |
| Prop. 1.2; Cor. F.2.2 | Perceived benefits and demand for a new medication are increasing in representation of similar patients in clinical trials | Table III | • For physicians, a 1 std. dev. increase in representation increases prescribing intention by 0.11 std. dev. • For Black patients, being assigned to the representative treatment increases self-reported relevance for their own care (“relevance”) and the likelihood that their posterior on efficacy is within a small neighborhood of the reported clinical-trial results (“loading on the signal”) by 0.78 std. dev. and 19.9 percentage points, respectively. |
| Prop. 1.3; Cor. F.2.3 | Diminishing returns to representation | Figure II, Panel D; Table III | • For physicians treating white patients (PWPs), we fail to reject the null hypothesis that a decrease in white representation (from existing high levels) does not change prescribing intention. • For white patients, we fail to reject the null hypothesis that a decrease in white representation (from existing high levels) does not change relevance or loading on the signal. |
| Cor. F.3 | • There are white-Black gaps in perceived benefits and demand for a new medication. • Increasing Black representation in clinical trials narrows these gaps. | Figures III, IV, and V | • PWPs have a mean prescribing intention of 6.46, while PBPs who are exposed to nonrepresentative trials have a mean prescribing intention of 4.90. The prescribing intention of PBPs who are exposed to representative trials increases to 6.26 and is statistically indistinguishable from that of PWPs. • Black patients who are shown the low-representation trial are 26 percentage points less likely to load on the signal than white patients. Black patients shown the representative trials are only 1 percentage point less likely to load on the signal than white patients, and this difference is statistically indistinguishable from zero. |
| Cor. 1 | Groups that were historically underrepresented in successful trials have a lower propensity to participate in trials today than historically well-represented groups. | Online Appendix Table C11; Section VI.B | • For Black patients, being assigned to the representative treatment increases their stated willingness to participate in similar future blood pressure studies by 0.39 std. dev. • Historically, HIV/AIDS trials were more representative than cancer trials. Recent HIV/AIDS trials are associated with higher percent Black representation than recent cancer trials. |
Notes. Formatting of the exhibits indicates the type of evidence: italics indicates causal evidence; boldface indicates descriptive evidence.
IV. Experimental Design
IV.A. Experimental Design
To test predictions from our theoretical framework, we conducted two survey experiments: one with a sample of primary care physicians and one with a sample of patients.20 The experiments differed in important ways reflective of the different subject pools. Physicians, who are familiar with the task of evaluating new medications as part of standard practice, were asked to rate several hypothetical drugs; in each drug profile, we cross-randomized the racial composition of the trial and the drug’s efficacy.21 Drug efficacy served as a “numeraire” because it is widely considered the most important characteristic of a new medication. For each medication, we assessed prescribing intention and relevance for the physician’s own patients. Patients faced a simpler exercise: respondents were shown trial evidence associated with a single actual drug. Primary outcomes for patients included beliefs about the drug’s efficacy, its relevance for their own health, and willingness to “ask their doctor” about the new medication.22 We describe the experiments below; in Section V.B, we discuss common critiques of survey experiments and how we endeavored to overcome them.
1. Physician Survey Experiment.
We recruited physicians who met the following criteria: (i) actively practicing in primary care, (ii) practicing in an outpatient setting (i.e., excluding hospitalists), and (iii) holding either an MD or DO. We worked with a licensed vendor of the American Medical Association’s (AMA) physician masterfile to identify and contact eligible physicians. We verified that survey respondents met all three criteria with a set of screening questions at the outset of the experiment. We prespecified that the representativeness of the trial sample could interact positively with the demographic composition of the physician’s patient panel. Thus, to ensure suitable variation in the panel, we split ZIP codes into deciles by Black population, weighting each ZIP code by its total population, and requested that half of all physician contacts be pulled from the top decile, one-quarter from the bottom decile (these two deciles account for 15% of all primary care physicians), and one-quarter from the remaining deciles.23 This sampling approach was motivated by the fact that the distribution of Black patients across geographies and providers tends to be highly concentrated (Bach et al. 2004; Chandra, Frakes, and Malani 2017).
We sent each physician a personalized email (to their professional email address) inviting them to participate in a study. The email originated from a Harvard email account. We embedded a message as email text, which noted that the purpose of the study was to collect physician views on clinical trials research, that the study had received IRB approval, that their data would be securely stored, and that the study was not funded by industry but for academic purposes (see Online Appendix Exhibit E1). The letter explained that the physician respondents would be asked to rate eight hypothetical drugs and would be compensated $100 for their participation.24
Although the vignettes were hypothetical, the drugs were based on recently developed therapies to treat diabetes. We chose to focus on diabetes because it is a common condition that is typically managed by primary care providers, and several new therapies with novel mechanisms of action have recently been developed (American Diabetes Association 2020). There are no established guidelines that encourage different prescribing by race or ethnicity for patients with diabetes (Golden et al. 2012). However, there is a debate (as with other conditions) about the role that genetic ancestry plays in its incidence (Parcha et al. 2022).
After confirming eligibility and answering questions about their practice, physicians were shown eight unique drug profiles. Profiles were selected randomly without replacement (i.e., physicians never saw an exact duplicate) and drug names were selected from 15 alternatives.25 At the top of each profile, we listed the generic name of a hypothetical drug, which we developed by following standard naming conventions (e.g., suffixes and prefixes) that convey information about a drug’s type. Profiles also included the drug’s mechanism of action, the study type, sample size, and sample demographics (see Online Appendix Exhibit E2 for an example of a profile and Online Appendix Exhibit E3 for a table listing the hypothetical drugs shown to participants). Profiles were randomly assigned an efficacy value ranging uniformly from a 0.5%–2.0% average reduction in A1c, conforming to typical values of FDA-approved oral antiglycemics (e.g., metformin typically reduces A1c by 1–2 percentage points) (Nathan et al. 2009; Wexler 2022), and a percent Black of trial subjects value ranging from 0% to 35%, with lower values oversampled as trial diversity is typically low (Knepper and McLeod 2018; Dornsife et al. 2019).26 Note that only efficacy and percent Black varied across the profiles, with all else held fixed.27 In each case, the trial type was listed as a double-blind active comparator trial, and the sample size was fixed at 1,500 participants.28
After viewing each profile, physicians were asked to rate how relevant the findings from the trial were for their patients (akin to the EBM step) and how likely they would be to prescribe the drug for patients with poorly controlled diabetes in their care. Both outcomes were on a scale from 0 to 10.29 After reviewing all drug profiles, respondents were asked about their confidence in extrapolating trial findings across demographic groups or geographies. In the final survey section, we asked questions about risk aversion, time preference, and altruism. We posed open-text questions used in sentiment analyses.
We sent a follow-up survey to physicians one to three weeks after they initially completed the survey. In the follow-up survey, we allocated $5 to each physician and asked how they would like to divide the amount between two real-world campaigns supporting recruitment efforts for clinical trials (see Online Appendix Exhibit E6). The first campaign aimed to boost trial participation among the American public at large, while the second campaign aimed to boost trial participation for underrepresented minority communities. Both campaigns were run by a nonprofit, the Center for Information and Study on Clinical Research Participation (CISCRP).
2. Patient Survey Experiments.
Patients were recruited from Lucid, an online survey platform frequently used in social science research and marketing (see Online Appendix H for more information on this platform). Respondents were told that the survey was designed to solicit their views on health care and understand the factors that affect their interest in health research. Eligibility criteria included: (i) self-reported non-Hispanic white or non-Hispanic Black race/ethnicity, (ii) at least age 35, and (iii) endorsement of a diagnosis of high blood pressure (alone or comorbid with other conditions). To verify that respondents had, in fact, been diagnosed previously with hypertension, they were asked to enter their latest systolic and diastolic blood pressure readings in an open-text field.30 Any respondent entering nonsensical values for blood pressure was deemed ineligible. We focused on high blood pressure instead of diabetes because a larger share of adults in the United States suffer from hypertension (45%) than diabetes (15%), thus facilitating recruitment (Ostchega et al. 2020; Centers for Disease Control and Prevention 2021). For the experiment assessing the new medication, we introduced consequentiality by explicitly encouraging patients to answer truthfully, and noting that their responses would be used to generate a personalized report they could download and share with their primary care provider. Approximately 42% of patient respondents downloaded the personalized report.
We began the experimental module by providing basic details about the clinical trial process. Before randomization, we informed respondents that new medications to treat blood pressure are frequently studied by researchers. We noted that these new therapies typically aim to improve blood pressure control, reduce complexity, or decrease side effects from medication. We added that new medications may not be an improvement over previous therapies, and thus must be tested before they are widely available. Patients were then shown details about a new medication: a combination antihypertensive medication. We asked each patient whether they had heard of the new drug before (95% had not) and what they anticipated the effect of the medication would be on their systolic blood pressure (in units of mmHg).
Patients were then shown findings from an actual clinical trial. We randomly assigned respondents to see trial data from studies that enrolled different shares of Black patients. The medication we presented was tested in two separate locations: in one setting, the percent Black in the trial was less than 1%—approximately one-third of trials in the ClinicalTrials.gov database meet such a criterion—and in the second, the percent Black in the trial was 15%. Efficacy was strong and comparable in both settings, lowering systolic blood pressure by about 15 mmHg.31 We thus randomized only the percent Black in the trial, holding efficacy and all other parameters of the trial constant.
After being shown information on the drug’s efficacy and the randomized racial composition of the study, in text and graphic form, patient respondents were again asked to provide their beliefs about the drug’s efficacy. In addition, respondents reported how relevant the findings of the trial were to patients like them and whether they would be interested in “asking their doctor” about the medication.32 We also asked patients the same question we had posed to doctors about extrapolating from trials generically. If patients indicated that they were not confident in extrapolating, we asked them to describe the reasons for this limited confidence.
In the final sections of the survey, we inquired about trust, risk aversion, altruism, and time preferences. We also asked respondents to provide details about their current primary health care provider and current regimen for blood pressure management and medication adherence. We concluded with open-text questions and a reference to learn more about clinical trials.
Our survey experiment on clinical trial participation followed the above design but occurred several months later, using a separate group of patients. In this second study, the outcome of interest was a respondent’s stated willingness to participate in a new trial that was similar to the one they had been shown. After respondents provided this information, we asked multiple-choice questions designed to elicit views on the financial or medical consequences of trial participation, on whether the trial would produce new or relevant knowledge, on data privacy, and on researcher trustworthiness.
V. Experimental Data and Results
V.A. Sample Characteristics
We invited 12,192 physicians to participate in the study.33 Among those who passed the screening questions, 87% completed the survey (137 physicians); completion rates did not vary significantly across strata. Potential respondents were most commonly screened out if they were not practicing primary care physicians or if they were hospitalists (i.e., not outpatient providers). On nearly all dimensions, the characteristics of physicians in our sample are comparable to those of physicians in the same ZIP code strata in the AMA Masterfile (see Online Appendix Table C3), with the following exceptions: sample physicians from the top Black share decile stratum tend to be older and from higher-ranked medical schools, and physicians in other ZIP codes tend to have a higher share white population and a lower share Hispanic population.34
We recruited 275 patients diagnosed with hypertension to provide views on a novel treatment: 139 Black and 136 white respondents. Respondents are comparable to individuals with hypertension in the Medical Expenditure Panel Survey (MEPS); in Online Appendix Table C5, we document that Black and white respondents in our survey are broadly similar to MEPS respondents by age, geography, income, and insurance status, although there were relatively more female respondents. Black respondents had slightly higher levels of college education, and white respondents were less educated than in the MEPS data (Blewett et al. 2019).35 We recruited another 272 participants to the clinical trials participation experiment. There was no significant imbalance or differential attrition across arms for any survey (see Online Appendix Tables C2, C7, C8, C9, and C10).
V.B. Estimation and Results
To test whether increasing representation of Black patients (which we refer to simply as “representation”) in trials affects how physicians view study results and make prescribing decisions, we estimate the following equation:
$$y_{dj} = \beta_1 \, \text{Representation}_{dj} + \beta_2 \, \text{Efficacy}_{dj} + \mu_j + \kappa_{m(d)} + \tau_{o(dj)} + \epsilon_{dj}, \tag{2}$$

where $d$ denotes a drug profile, $j$ denotes a unique physician respondent, and $y_{dj}$ denotes our primary outcomes of interest: relevance for one’s own patients and willingness to prescribe. Representation is the share of patients in a given trial who are Black. Efficacy captures the percentage-point drop in measured hemoglobin A1c. Both efficacy and representation were cross-randomized in each profile. Our prespecified main estimating equation includes physician fixed effects $\mu_j$, mechanism-of-action fixed effects $\kappa_{m(d)}$, and indicators $\tau_{o(dj)}$ for the order in which profiles were shown, though we also present results without any controls. The outcome and randomized attributes are standardized. Standard errors are clustered at the physician level. We also prespecified heterogeneity analyses, interacting trial demographics with those of the doctor’s patient panel.
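Equation (2) is a standard fixed-effects OLS design with standard errors clustered by physician. As a hedged illustration of how such a specification can be estimated, the sketch below uses statsmodels on simulated data; the variable names, fixed-effect counts, and data-generating coefficients are our assumptions, not the authors’ data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a physician-by-drug panel: 137 physicians each rating 8 profiles.
rng = np.random.default_rng(0)
n_docs, n_drugs = 137, 8
df = pd.DataFrame({
    "doc": np.repeat(np.arange(n_docs), n_drugs),       # physician id
    "order": np.tile(np.arange(n_drugs), n_docs),       # profile order
    "mechanism": rng.integers(0, 5, n_docs * n_drugs),  # mechanism of action
    "representation": rng.normal(size=n_docs * n_drugs),  # std. % Black
    "efficacy": rng.normal(size=n_docs * n_drugs),        # std. A1c drop
})
# Standardized outcome with illustrative true effects of 0.11 and 0.28.
df["prescribe"] = (0.11 * df.representation + 0.28 * df.efficacy
                   + rng.normal(size=len(df)))

# Physician, profile-order, and mechanism fixed effects; clustered SEs.
fit = smf.ols("prescribe ~ representation + efficacy + C(doc) + C(order)"
              " + C(mechanism)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["doc"]})
print(fit.params[["representation", "efficacy"]])
```

With 1,096 observations, the estimated coefficients recover the simulated effects up to sampling noise; only the point estimates and clustered standard errors on the two randomized attributes are of interest.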
To test whether the racial composition of clinical trials affects patient beliefs and behavior, we estimate for patient $i$ of race $r$ the following:

$$y_{ir} = \alpha_r + \delta_r \, \text{Representative}_i + \epsilon_{ir}, \tag{3}$$

where the indicator variable $\text{Representative}_i$ captures the difference between receiving the information that the percent Black of trial participants was 15% versus less than 1%. Recall that efficacy was held fixed, and all respondents saw the same drug. We estimate equation (3) separately by patient race for three outcomes: relevance, efficacy beliefs, and asking one’s doctor. Relevance (of the drug for oneself) is transformed from a Likert scale (0 to 10) to standard deviation units. Loading on Signal is an indicator equal to 1 if patients’ beliefs about personal efficacy are within 1 mmHg of the reported treatment effect in the trial.36 Ask Doctor is an indicator variable equal to 1 if patients indicate a desire to talk to their doctor about the drug.
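To make the outcome construction concrete, the following sketch builds the Loading on Signal indicator and estimates equation (3) on simulated data. The 15 mmHg signal matches the trial result described in Section IV; everything else (sample construction, belief dispersion, variable names) is an illustrative assumption of ours.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

SIGNAL = 15.0   # reported systolic drop in the trial, in mmHg

rng = np.random.default_rng(1)
n = 139  # e.g., one race group
df = pd.DataFrame({
    "representative": rng.integers(0, 2, n),  # 15% vs. <1% Black trial arm
})
# Simulated posterior beliefs about personal efficacy (mmHg);
# representative-arm respondents are drawn closer to the trial signal.
df["posterior"] = np.where(df.representative == 1,
                           SIGNAL + rng.normal(0, 2, n),
                           SIGNAL + rng.normal(0, 8, n))

# Loading on Signal: posterior within 1 mmHg of the reported effect
# (i.e., between 14 and 16).
df["load_signal"] = (df.posterior - SIGNAL).abs().le(1).astype(int)

# Equation (3) for one race group, with heteroskedasticity-robust SEs.
fit = smf.ols("load_signal ~ representative", data=df).fit(cov_type="HC1")
print(fit.params["representative"])
```

Because the coefficient on a binary treatment in a linear probability model is a difference in means, the printed estimate is simply the treatment-control gap in the share of respondents whose posterior sits within 1 mmHg of the signal.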
1. Main Findings.
Table III presents our main results for both experiments: Panel A reports findings for physicians and Panel B for patients. In Panel A, columns (1) and (2) include only the randomized components of drug profiles. A 1 standard deviation increase in the reported efficacy of the drug—a reduction in A1c of roughly 0.44 percentage points—increases relevance and willingness to prescribe a medication by 0.165 and 0.229 standard deviation units, respectively. Conditional on the drug’s efficacy, a 1 standard deviation increase in percent Black—about a 10 percentage point increase in Black trial participants—increases relevance for patients by 0.163 standard deviation units and willingness to prescribe the drug by 0.179 standard deviation units. Columns (3) and (4) present our main specification (equation (2)). We find that representation affects both relevance and intent to prescribe, increasing them by approximately 0.11 standard deviation units.
TABLE III.
Physician and Patient Experimental Results on Effects of Increasing Representation
Panel A: Primary care physicians

| | No controls | | Main specification | | Share Black interactions | |
|---|---|---|---|---|---|---|
| | Relevance (1) | Prescribing (2) | Relevance (3) | Prescribing (4) | Relevance (5) | Prescribing (6) |
| Representation | 0.163*** (0.039) | 0.179*** (0.036) | 0.109*** (0.029) | 0.107*** (0.029) | 0.007 (0.038) | −0.005 (0.039) |
| Efficacy | 0.165*** (0.038) | 0.229*** (0.039) | 0.189*** (0.029) | 0.281*** (0.032) | 0.179*** (0.036) | 0.285*** (0.043) |
| Representation × Patient percent Black | | | | | 0.004*** (0.001) | 0.004*** (0.001) |
| Efficacy × Patient percent Black | | | | | 0.000 (0.001) | −0.000 (0.001) |
| p-value: Representation = Efficacy | | | .057* | < .001*** | | |
| p-value: Representation = ½ × Efficacy | | | .655 | .314 | | |
| Doctor FEs | No | No | Yes | Yes | Yes | Yes |
| Profile order FEs | No | No | Yes | Yes | Yes | Yes |
| Rx mechanism FEs | No | No | Yes | Yes | Yes | Yes |
| Observations | 1,096 | 1,096 | 1,096 | 1,096 | 1,096 | 1,096 |

Panel B: Patients

| | Relevance | | Ask Doctor | | Loading on Signal | |
|---|---|---|---|---|---|---|
| | Black patients (1) | White patients (2) | Black patients (3) | White patients (4) | Black patients (5) | White patients (6) |
| Representative treatment | 0.781*** (0.167) | 0.172 (0.159) | 0.021 (0.077) | 0.006 (0.079) | 0.199** (0.083) | −0.057 (0.086) |
| p-value: Black patients = white patients | .008*** | | .893 | | .030** | |
| Control mean | −0.26 | −0.23 | 0.70 | 0.70 | 0.33 | 0.59 |
| Observations | 139 | 136 | 139 | 136 | 139 | 136 |
Notes. Panel A reports OLS estimates for the outcomes of Relevance and Prescribing Intention on the sample of primary care physician respondents. Representation refers to the randomized percent Black in the trial unless otherwise indicated. Efficacy refers to the randomized percentage-point drop in A1c. Prescribing Intention, Representation, and Efficacy are standardized to a mean of zero and a standard deviation of one. Columns (3) and (4) report results from the main specification (equation (2)). Columns (5) and (6) interact Representation and Efficacy with the reported percent of patients that are Black in the physician’s panel; the main effect is included but not reported. One hundred thirty-seven physicians participated in the experiment, each assessing eight oral antiglycemic medications. Standard errors clustered at the physician level are in parentheses. Panel B reports OLS estimates from equation (3) on the sample of patient respondents. Relevance refers to relevance for own care and is standardized to a mean of zero and a standard deviation of one. Loading on Signal is an indicator equal to 1 if the respondent’s posterior was within 1 mmHg of the signal (i.e., between 14 and 16) and 0 otherwise. Robust standard errors are in parentheses.
*, **, and *** refer to statistical significance at the 10%, 5%, and 1% levels, respectively.
The p-values displayed in the bottom rows of columns (3) and (4) indicate that, although we reject that the coefficients on representation and efficacy are equal, we cannot reject that representation has about half the effect of efficacy. In other words, physicians are approximately half as responsive to who was in the trial as they are to how well the drug works. The results in columns (5) and (6)—in which we include interaction terms between experimentally manipulated measures of representation and efficacy with each physician’s Black patient share—are key in understanding our results: the effect of increased Black representation on prescribing behavior is attributable to doctors who treat at least some Black patients. We observe no comparable (significant) interaction between doctors’ patient demographics and efficacy.
In Table III, characteristics of the physician’s patient panel enter linearly. Figure II explores these relationships nonparametrically by interacting quartiles of patient percent Black with the treatment and plotting the total effect (main effect plus interaction). Panel A shows the results for efficacy, demonstrating a relatively constant effect on relevance and prescribing across the percentage Black of patients. By contrast, in Panel B representation has a nearly linear and upward-sloping relationship: the higher the percentage Black in a doctor’s patient panel, the more the doctor responds to the inclusion of Black patients in the trial. Note that this line naturally begins at zero over the domain we test: there is simply a null effect (not a strong negative effect) of increasing Black representation among physicians who care mostly for white patients.
Figure II. Heterogeneity among Physicians by Racial Composition of Patient Panel.
The figure plots OLS estimates for two outcomes—Relevance (Panels A and C) and Prescribing Intention (Panels B and D)—from specifications estimated with interaction terms between each quartile of patient percent Black and either Representation or Efficacy. Fixed effects are residualized before estimating equation (2). The figure plots the linear combination of the main effect and the interaction with each quartile; quartile one is defined as the reference. Robust standard errors are clustered at the physician level. Ninety-five percent confidence intervals are displayed.
To provide further assurance that it is specifically the racial composition of the panel that is driving the heterogeneity, Online Appendix Figure B10 presents an omnibus test, in which physician-specific representation coefficients are regressed on panel demographic characteristics. A significant association exists only between the magnitude of the coefficient and the panel percent Black, with no strong relationship between representation and percent female, Hispanic, foreign-born, or senior citizen. Moreover, there is no significant relationship between physician-specific efficacy coefficients and panel percent Black, nor with the other demographic categories. Online Appendix Figure B11 demonstrates few associations between physician-specific responses to representative trials and their own background characteristics.
We next turn to findings from patients in Table III, Panel B. Recall that in this specification (equation (3)), the treatment is an indicator variable. We split the sample by patient race, with findings from Black patients displayed in the odd columns, results from white patients in the even columns, and a p-value of the difference between the two samples in the bottom even rows. Column (1) reports that Black patients with hypertension assess clinical trials with 15% Black participants as 0.781 standard deviation units more relevant than trials with less than 1% Black participants—holding drug name, mechanism, and reported efficacy constant. This result is statistically significant at the 1% level. Column (3) indicates that these higher assessments translate into a positive but statistically insignificant willingness to ask their physician about the medication. Column (5) reports that the representative arm is associated with a 19.9 percentage point increase in believing the drug would perform as well on oneself as in the trial. The results from white patients with hypertension are mixed in sign and never statistically significant (columns (2), (4), and (6)).
Results from our patient sample are also broadly consistent with the model’s prediction of diminishing returns to representation: representation matters for Black hypertensive patients, and does not (over the domain tested) for white patients, similar to what we find for prescribing intentions in Figure II. Taken together, the results suggest physicians are acting as good agents for their patients—combining the evidence on efficacy while also taking patient views into account (Ellis and McGuire 1986; Barnato 2017).
Last, we turn to our main results from the follow-up experiment, which investigated the relationship between beliefs about trial representation and willingness to participate in future clinical trials. Results are reported in Online Appendix Table C11, Panels A and B, column (1). We find that exposure to the treatment—data on a more representative trial—increases Black patients’ stated willingness to participate in similar future blood pressure studies by 0.385 standard deviation units. There was no significant effect for white patients and the difference in treatment effects across the two groups was significant (p-value = .038). We discuss potential mechanisms for these results later.
2. Representation and Disparities.
We assess whether increased racial representation in clinical trials can close gaps similar to those documented in Figure I. Figure III documents that—when the share Black of the trial is low—a gap emerges between Black and white patients shown identical information on drug efficacy. For Black hypertensive patients, beliefs about how much the drug will lower blood pressure are within 1 mmHg of the range of the reported clinical effect for 33% of respondents, compared to almost 60% of white hypertensive respondents. This difference is large and statistically significant. When the trial is more inclusive of Black patients, this gap closes. While the change for Black patients is dramatic, the effect on white patients is negligible. This result is also observed when plotting the distributions of prior and posterior views on drug efficacy—the latter under the different interventions. Before the information treatment, the prior distributions for Black and white patients are indistinguishable (see Figure IV, K-S test p-value = .960). Regardless of the trial arm they are assigned, white patients update substantially on trial results, reporting a perceived effectiveness for their own health that is similar to the study finding. In contrast, Black patients are more willing to accept that reported efficacy under study conditions captures the drug’s effectiveness for their own health when the sample is more representative (K-S test p-value = .026).
Figure III. Loading on Signal by Race and Treatment Status.
The figure plots the share of respondents who “Load on Signal”—whose posteriors are within 1 mmHg of the reported drug efficacy in our intervention (15 mmHg)—by race and treatment group. Load on Signal is an indicator variable that takes a value of 1 if the respondent’s posterior was between 14 and 16, and 0 otherwise. The x-axis reports values for two groups of respondents: nonrepresentative trials with < 1% Black patients and representative trials with 15% Black patients. Results are plotted separately by respondent race. Ninety-five percent confidence intervals are included.
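The "Load on Signal" construction is mechanical and can be written out explicitly. This minimal sketch (our own illustration; the function name is ours) codes the indicator from a vector of posterior beliefs, given the 15 mmHg signal and the 1 mmHg tolerance described above.

```python
import numpy as np

SIGNAL = 15.0  # reported drug efficacy shown to respondents, in mmHg

def load_on_signal(posteriors, signal=SIGNAL, tol=1.0):
    """Indicator: 1 if a posterior belief is within `tol` mmHg of the signal."""
    posteriors = np.asarray(posteriors, dtype=float)
    return (np.abs(posteriors - signal) <= tol).astype(int)

# Posteriors of 14-16 mmHg count as loading on the signal
beliefs = [10.0, 14.0, 15.5, 16.0, 20.0]
print(load_on_signal(beliefs))  # -> [0 1 1 1 0]
```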
Figure IV. Prior and Posterior Beliefs on Drug Efficacy by Patient Race and Trial Representation.
The figure plots the prior and posterior distribution of beliefs about the perceived efficacy of the new antihypertensive medication for the patient’s own condition by respondent’s race and assigned treatment status (trial shown is either nonrepresentative or representative). The signal on efficacy shown to patients (15 mmHg) is displayed as a black vertical line and was revealed to patients following elicitation of priors. A Kolmogorov-Smirnov test fails to reject the null that the priors are identical across race (p-value = .960). For Black patients, a Kolmogorov-Smirnov test rejects the null that the posteriors are identical across arms (p-value = .026). For white patients, a Kolmogorov-Smirnov test fails to reject the null that the posteriors are identical across arms (p-value = .789).
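The Kolmogorov-Smirnov comparisons reported in the figure notes can be run on any two vectors of elicited beliefs. The sketch below uses simulated posteriors (illustrative only, not the study's data): one arm centered on the 15 mmHg signal and one shaded downward, mimicking partial discounting of the trial result.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative posterior beliefs under two trial arms (mmHg reductions)
posterior_rep = rng.normal(loc=15.0, scale=1.0, size=300)     # loads on signal
posterior_nonrep = rng.normal(loc=12.0, scale=3.0, size=300)  # discounts signal

# Two-sample K-S test of the null that both arms share one distribution
ks = stats.ks_2samp(posterior_rep, posterior_nonrep)
print(round(ks.statistic, 3), ks.pvalue < 0.05)
```

A small p-value, as for Black patients across arms in Figure IV, rejects the null that the two posterior distributions are identical.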
Our results can be visualized by examining the gaps in prescribing intention across physicians who treat different categories of patients. We divide the sample of physicians into two groups: physicians who treat Black patients (PBP) and physicians who treat white patients (PWP). We define these categories using the reported characteristics of each physician’s panel and whether the share of the relevant racial group is above or below the sample median.
Figure V plots prescribing intentions across the physician types. Efficacy, as measured by A1c reduction, is shown on the x-axis and the mean prescribing intention for each efficacy bin is plotted on the y-axis. The upward-sloping line indicates that physicians serving all types of patients are more likely to prescribe medications that were randomly assigned higher rates of efficacy. If a trial has less than 5% Black representation (the current median share of Black participation in clinical trials), prescribing intention of physicians treating more Black patients lies below that of physicians treating white patients at every efficacy level. However, when trials become more representative, this gap is erased.
Figure V. Physician Prescribing Intention by Patient Composition and Trial Representation.
The figure plots the relationship between Efficacy and Prescribing Intention (on a 0–10 scale) by patient composition and percent Black of trial subjects in the profiles shown to physicians. PBP (physicians treating Black patients) denotes physicians who report above the median percent Black patients in their patient panel. PWP (physicians treating white patients) is defined similarly with respect to white patients. NR indicates nonrepresentative (< 5% Black in trial) whereas R indicates representative (≥ 5% Black in trial). Note that 5% is the median percent Black in clinical trials (see Figure I).
3. Understanding Mechanisms: Extrapolation.
Why does representation matter? The model in Section III.A captures the idea that extrapolation from trial data is facilitated by the similarity between patient characteristics and the trial sample. We probe that assumption by asking physicians and patients how confident they are that a drug found to be safe and effective in a study of white patients would be safe and effective for Black patients. Confidence is measured on a scale of 0 to 3 ranging from “Not confident at all” to “High confidence.” As such a question is likely to be less informative for white patients, who are typically well represented in clinical trial evidence, we also asked respondents about how confident they are about the effectiveness of a drug approved on the basis of evidence generated entirely outside of the United States. Such a scenario mirrors a recent trend of “offshoring” clinical trials (Petryna 2009).
For all respondents who were not highly confident about extrapolating—which turns out to be the vast majority—we sought to understand the rationale for their beliefs. In particular, we asked why they believed that a drug tested on one sample would not work equally well in a different context. We provided a set of multiple-choice responses that allowed respondents to indicate concerns about biological factors, socioeconomic and environmental factors, or trust in the trial. Participants were also allowed to select “other” and asked to provide open-text answers.
Results are reported in Table IV. Panel A presents views from Black patients and doctors who treat them regarding extrapolation across race. Panel B presents views from white patients and doctors who treat them regarding confidence in extrapolating across geography. Each cell demonstrates the percentage of respondents who fall into that category. We find three broad patterns. First, few people fall into the highest confidence category for this exercise: ranging from 7.0% among PBPs to 15.4% among PWPs. Second, patients are less confident extrapolating on average than physicians: the mean level of confidence for Black and white patients is 1.0 (std. dev. 0.97) and 1.3 (std. dev. 0.91), respectively. For physicians treating these groups, the values are 1.72 (std. dev. 0.65) and 1.91 (std. dev. 0.65), respectively. In both instances, confidence among white patients and their doctors (Panel B) is slightly higher than their counterparts in Panel A. Third, when providing a rationale for why a drug might work differently across samples, a nontrivial share selected biological factors, though the most commonly chosen answer was socioeconomic and environmental factors.
TABLE IV.
Extrapolation from Clinical Trial Data among Physicians and Patients
Panel A: Black patients and their physicians (PBPs), extrapolating from white to Black patients. Columns (1)-(4) report Confidence; columns (5) and (6) report Rationale.

| | Not at all (1) | Some (2) | Moderate (3) | High (4) | Perceived biol. factors (5) | Perceived social & envir. factors (6) |
|---|---|---|---|---|---|---|
| Black patients | 39.6% | 28.1% | 25.2% | 7.2% | 31.0% | 45.7% |
| PBPs | 3.5% | 28.1% | 61.4% | 7.0% | 32.1% | 45.3% |

Panel B: White patients and their physicians (PWPs), extrapolating from offshored trials to U.S. patients. Columns (1)-(4) report Confidence; columns (5) and (6) report Rationale.

| | Not at all (1) | Some (2) | Moderate (3) | High (4) | Perceived biol. factors (5) | Perceived social & envir. factors (6) |
|---|---|---|---|---|---|---|
| White patients | 21.3% | 36.8% | 32.4% | 9.6% | 19.5% | 43.9% |
| PWPs | 1.5% | 21.5% | 61.5% | 15.4% | 10.9% | 70.9% |
Notes. The table reports clinical trial data extrapolation confidence and rationale among patients and physicians. Panel A reports confidence in extrapolation across race among Black patients and PBPs. Panel B reports confidence in extrapolation across geography among white patients and PWPs. Columns (1)–(4) report the percentage of respondents at each confidence level. If a respondent did not select “High” confidence in extrapolation, they were asked to provide a rationale. Column (5) reports the percentage of respondents who cite perceived biological factors as the rationale for not having “high” confidence in extrapolation. Column (6) reports the percentage of respondents who cite perceived social and environmental factors as the rationale for not having “high” confidence in extrapolation. For each subgroup (Black patients, white patients, PBPs, PWPs), Online Appendix Table C14 reports confidence and rationale for both extrapolation questions (race and geography). PBP denotes physicians who report above the median percent Black patients in their patient panel. PWP is defined similarly with respect to white patients.
Several doctors selected “other,” and their open-text responses are reproduced in Online Appendix Table D1. When discussing extrapolation across race, doctors mention external validity, skepticism with results not obtained from representative samples, or a normative desire for the inclusion of diverse populations. With respect to foreign trial data, similar concerns were raised, though physicians also wondered about standards for studies performed abroad. One respondent noted that the ease of extrapolation depends on where the study took place, stating, “It would depend upon the country. I would expect Western European and Canadian trials to be similar to my particular patient population.”
Returning to the experimental results, we find that Black patients who view others as trustworthy were significantly more likely to want to ask their doctor about the new medication (Online Appendix Table C13, column (3)). In addition, we find that the representative treatment increases Black patients’ willingness to participate in future clinical trials, as well as their views on the trustworthiness of the trial researchers (Online Appendix Table C11, Panel A, column (2)). The same pattern does not hold for white patients (Panel B, column (2)).
4. Threats to Internal Validity.
Concerns with survey responses as outcomes include social desirability or experimenter demand effects. As mentioned already, we added consequentiality to both the physician (i.e., reporting findings on trial preferences to federal agencies) and patient (i.e., sharing personalized reports with their doctors) experiments. The majority of physicians and nearly half of all patient respondents requested access to these reports, suggesting that participants indeed valued them. For the patient survey, all respondents had been diagnosed with hypertension and thus had limited incentives to distort their responses to information about a new drug of potential health benefit for their specific condition. Our results on subsamples of respondents who asked for the reports are similar to those presented already (see Online Appendix Tables C15 and C6 for experimental results from physicians and patients, respectively). Online Appendix Tables C4 and C16 show that patients and doctors who downloaded or requested the report are statistically similar to other respondents.
The second key feature that reduces concerns about social desirability or experimenter demand effects is that we prespecified heterogeneous effects by the patient’s race and the racial composition of the provider’s patient panel. If social desirability was playing a large role, patterns might be similar across Black and white patient respondents and across doctors treating all types of patients. In terms of experimenter demand, the patients were only shown one trial so it would have been difficult for them to discern the rationale for the study. Indeed, a word cloud of responses to the open-ended question “What do you think this study was about?” shows only limited references to race or diversity (see Online Appendix Figure B12), with the dominant response being “blood pressure.” Similarly, information presented in our physician survey closely resembled the demographic information presented in biomedical publications and regulatory publications (e.g., the FDA Drug Trial Snapshots database).
We follow Kuziemko et al. (2015) and Elías et al. (2019), who use donations and petitions to validate survey responses, and ask physicians to make a decision about a donation in a follow-up survey.37 Our follow-up donation survey finds that the amount physicians allocate to the enrollment campaign targeting underrepresented minorities is strongly and significantly associated with physician-specific coefficients on representation (Table V) and not with physician-specific responsiveness to efficacy. Because the donation question was fielded to physicians as a follow-up question released one to three weeks after they completed the survey experiment, the results also suggest that our findings are unlikely to be driven by experimenter demand.
TABLE V.
Association between Physician-Specific Coefficients and Trial Donations
| | (1) | (2) |
|---|---|---|
| Coefficient on representation | 1.279*** | 1.229*** |
| | (0.449) | (0.436) |
| Coefficient on efficacy | | 0.199 |
| | | (0.621) |
| Constant | 3.534 | 3.485 |
| Observations | 82 | 82 |
Notes. The table reports OLS estimates from a regression of physician-specific coefficients for representation and efficacy on dollars donated to a campaign to increase the representativeness of clinical trials. Physicians were asked to indicate, out of a possible $5, how many dollars they would like the research team to donate to a campaign that advocates for increases in clinical trial representation versus a campaign that advocates for increases in participation in clinical trials more generally. Observations are at the physician level. Robust standard errors are in parentheses.
*, **, *** refer to statistical significance at the 10%, 5%, and 1% levels, respectively.
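The two-step procedure behind Table V can be sketched as follows: first recover a physician-specific representation coefficient from that physician's eight profile evaluations, then regress donated dollars on those coefficients. The data below are simulated and all names are ours; the sketch only illustrates the mechanics, under the assumption that donations rise with a physician's sensitivity to representation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_phys, n_profiles = 82, 8

donations = np.empty(n_phys)
rep_coefs = np.empty(n_phys)
for i in range(n_phys):
    rep = rng.standard_normal(n_profiles)            # randomized representation
    true_beta = rng.normal(0.2, 0.5)                 # physician-specific sensitivity
    y = true_beta * rep + 0.1 * rng.standard_normal(n_profiles)
    # Step 1: physician-specific OLS slope of prescribing on representation
    rep_coefs[i] = np.polyfit(rep, y, 1)[0]
    # Donations (out of $5) assumed to rise with that sensitivity
    donations[i] = np.clip(2.5 + 1.3 * true_beta + rng.normal(0, 0.5), 0, 5)

# Step 2: regress donations on the physician-specific coefficients
slope, intercept = np.polyfit(rep_coefs, donations, 1)
print(round(slope, 2))
```

A positive and significant slope in step 2 corresponds to the association between representation coefficients and donations reported in Table V, column (1).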
5. Threats to External Validity.
There are several potential concerns about mapping our survey results to real-world behavior. First, we may prime people to think about something obviously bad, which might affect their survey responses. Second, we may induce patients to construct beliefs on-the-fly about something (clinical trials) they are not well informed about. Third, features of trials may not alter real-world prescribing or medication adherence decisions, even if people do know about clinical trials.
Regarding the notion that we used an obviously negative prime (underrepresentation) for Black respondents, this presumes that ex ante we had access to our ex post results. Recall that our null hypothesis was that representation did not matter, which is precisely what we can now reject. Thus, we view our design as making underrepresentation—a widely known aspect of medical research—especially salient in the context of the survey experiment. We also ask an open-text question to our patient respondents immediately after the intervention about the rationale for their responses; sentiment analysis reported in Online Appendix Table C18 indicates no significant difference in positive affect across race groups. Furthermore, the time spent on the survey does not differ across those groups.
Of course, if patients are unaware of clinical trials and our surveys elicit responses that then do not map onto real behaviors, our findings are less relevant. However, data from Research!America and our own follow-up survey indicate that patients are, in fact, aware of clinical trials and that Black patients believe that they are not well represented in trial samples. Returning to the Research!America data in Table I, column (1) indicates that on average, 80% of Black respondents report that they have heard of clinical trials.
Regarding whether information on trial representation matters in practice, we document that it affects prescribing intention and updating from trial results. In settings outside of our experiments, evidence that Black Americans are skeptical of research institutions and medical technologies—FDA-approved and investigational—is widespread. We tabulate survey responses consistent with these patterns in Table I. Qualitative comments from physicians in our study, as well as those drawn from a recent NASEM report, also suggest that representation plays a role in how doctors practice medicine (see Online Appendix Tables D1 and D2 and Online Appendix Figure B13).
6. Robustness.
We probe the robustness of our findings for physicians in Online Appendix Table C15. Columns (1) and (2) indicate that we obtain similar results when we use nonstandardized versions of the outcomes. We replicate our main findings with standardized prescribing as the outcome in column (3) and show that our findings are largely unchanged when restricting the sample either to physicians who answer our follow-up donation question or to those who request a copy of our report to NIH and NASEM (see columns (4) and (5)). Column (6) shows that findings on representation are not sensitive to the addition of controls selected using double-selection LASSO linear regression (Chernozhukov et al. 2018). We also find that the order of profiles presented to physicians does not substantially affect how they respond to the treatment (Online Appendix Figure B14).
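The double-selection idea used in column (6) can be sketched as: run a LASSO of the outcome on the candidate controls, run another LASSO of the treatment on the same controls, then refit OLS including the union of the selected controls. The code below is a minimal illustration with simulated data, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(3)
n, p = 500, 30
X = rng.standard_normal((n, p))                 # candidate controls
d = 0.5 * X[:, 0] + rng.standard_normal(n)      # "treatment" (representation)
y = 1.0 * d + 2.0 * X[:, 0] + rng.standard_normal(n)

# Step 1: LASSO of outcome on controls; Step 2: LASSO of treatment on controls
sel_y = np.nonzero(LassoCV(cv=5).fit(X, y).coef_)[0]
sel_d = np.nonzero(LassoCV(cv=5).fit(X, d).coef_)[0]
keep = sorted(set(sel_y) | set(sel_d))          # union of selected controls

# Step 3: OLS of outcome on treatment plus the union of selected controls
Z = np.column_stack([d, X[:, keep]])
beta = LinearRegression().fit(Z, y).coef_[0]
print(round(beta, 2))
```

Taking the union of the two selected sets guards against omitting a control that matters for either the outcome or the treatment, which is what makes the treatment coefficient robust to the selection step.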
Additional results from our physician sample are presented in Online Appendix Table C17. Column (1) reports our main results from equation (2), while column (2) assesses whether representation and efficacy are substitutes or complements by adding an interaction term; we find no evidence of either.38 Columns (4)–(6) indicate that our finding of substantial heterogeneity by Black patient representation in one’s panel is insensitive to varying definitions of physicians who treat Black patients. Our finding of a strong interaction between representation and reported patient percent Black (from Table III and replicated in column (3)) is robust to dichotomizing patient percent Black at the median and to defining physicians treating Black patients using ZIP code–level statistics obtained from the U.S. Census Bureau. In Online Appendix Figure B15, we present further tests of robustness, including results from alternative specifications and on the sample of observations with at least one efficacy duplicate, and show that our finding of a significant coefficient on representation withstands all these tests.
We report robustness checks for our patient experiment in Online Appendix Table C6. Panel A demonstrates that results across our three outcomes are unchanged when we restrict to patients who requested the personalized report we offered, whereas Panel B shows that our findings are robust to weighting patients using person weights obtained from MEPS. Panel C indicates that our results are robust to including LASSO-selected controls.
VI. Discussion and Conclusion
The theoretical and experimental analysis sheds light on the potential benefits of increasing representation of Black patients in clinical trials to patients and pharmaceutical companies. Given these benefits, why does such underrepresentation persist?
One hypothesis would be that this underrepresentation persists because of gaps between Black and white patients in information about trials and in trust of doctor recommendations. However, the racial participation gap in clinical trials is much larger than would be implied by the observed gaps in trust and information reported in Table I.39 This section uses a combination of theory and case studies to analyze why this gap is so persistently large, extending the earlier theoretical and experimental analysis to study the costs and benefits to firms conducting clinical trials. In the process, this section fleshes out a potentially important intertemporal externality associated with a history of underrepresentation.
VI.A. Why Might Underrepresentation Persist?
Suppose pharmaceutical firms seek to maximize the expected profit from a given experimental drug trial and can choose their recruitment strategy (see Online Appendix F.2 for details). They have access to a status quo technology for recruiting patients to clinical trials. Under this technology, a racial gap in perceived treatment benefits increases the racial gap in trial participation relative to the gap in trial recruitment (Proposition F.3). In other words, firms using the status quo technology anticipate a higher refusal rate from Black versus white patients. Firms could choose to incur a fixed cost to increase Black representation from its level under the status quo by making investments that reduce the marginal costs of inviting more Black participants. We refer to these investments as building “inclusive infrastructure.” Our theoretical and empirical results suggest firms would see value from such investment: due to diminishing returns to representation, it could increase demand among Black patients and their doctors without significantly decreasing demand among white patients or their doctors. However, the returns to such investment may not be completely internalized by any given firm: it increases perceived benefits for all similar treatments in the future, including those developed by other firms.40 The externalities that a firm’s current recruitment decisions impose on other firms’ future recruitment costs enable a cycle of underrepresentation.
Proposition 2.
Suppose the most similar treatment underperformed patients’ prior expectations. When the fixed costs of deviating from the status quo recruitment technology to inclusive infrastructure are sufficiently large, underrepresentation of Black patients in the historical trial leads to further underrepresentation of Black patients in the current trial.
This result flows from the externality described above and is illustrated with a numerical example in Online Appendix F.4.
Together, the theoretical and empirical results (summarized in Table II) are suggestive of a cycle of underrepresentation. (i) Trials in the past have not been representative of Black patients. (ii) The lack of representation decreases the perceived benefits of treatments for Black patients and physicians who treat them. (iii) The first two items make it more costly for firms to actively increase trial representation. (iv) Trials today are not representative of Black patients.41 (v) The cycle continues.
VI.B. Case Studies
The theoretical analysis suggests that investments in inclusive infrastructure may help break such a cycle of underrepresentation. Here, we combine quantitative and qualitative evidence, including insights drawn from informal interviews with experts in trial design, to tighten the links between our theoretical and empirical findings and real-world practice.
Figure VI, Panel A plots the median percent Black in pivotal trials across the most common diseases or conditions in the United States.42 Black patients are underrepresented relative to their population share across most conditions, and underrepresented relative to disease burden as well (see Online Appendix Figure B16), although there is significant variation across conditions. In Panel B, we document that higher representation of Black patients in clinical trials is associated with higher outpatient prescriptions of new drugs to Black Americans across various conditions.
Figure VI. Trial Representation by Condition and Association with New Drug Prescribing.
Panel A plots the median share of Black patients in trials across HIV/AIDS and the 10 leading causes of death (excluding unintentional injuries and suicide) in the United States (Heron 2021). Data on trial composition are from ClinicalTrials.gov. Panel B plots the correlation between the prescription rate of new medications to Black Americans and the median percent Black in pivotal trials. We construct the prescription rate as the percentage of newly marketed drugs (on the market for five or fewer years) received by Black Americans in each major condition category. In Panel B, the y-axis value of Cancer includes outpatient cancer supportive therapies. CLRD, Diabetes, Heart, Kidney, and Flu/PNA indicate chronic lower respiratory diseases, diabetes mellitus, diseases of heart, kidney diseases, and influenza and pneumonia, respectively. Prescription data are from the Medical Expenditure Panel Survey. Observations associated with cancer and HIV/AIDS are denoted with diamonds (purple). See Online Appendix H for details.
Next, we focus on cancer and HIV/AIDS (purple diamonds in Figure VI, Panel B), which are instructive to compare for several reasons. Both disease areas benefit from decades of federal investments into research networks across the United States by the National Cancer Institute (NCI) and National Institute of Allergy and Infectious Diseases (NIAID), respectively.43 Federal investments into these networks are comparable, totaling $6.54 billion into NCI and $6.05 billion into NIAID in 2021 (Congressional Research Service 2022).
The history of these research networks—and their specific forms of investment—shed light on differences in contemporary outcomes across disease areas. Investment in cancer research has historically been driven by top-down investments into academic medical centers, including efforts in the “war on cancer” that began with the National Cancer Act of 1971 (Mukherjee 2010). Beginning in 1972, motivated by a Howard University study documenting “an astounding increase in cancer mortality among the nation’s Black population in recent years,” the NCI invested in efforts to understand the burden of cancer mortality across racial groups (Henschke et al. 1973; Wailoo 2011). After the passage of the 1993 NIH Revitalization Act, investigators receiving NCI funding reported that they were struggling to comply with new rules regarding minority representation in clinical trials because NCI funding could not be used for “ancillary” study costs, including reimbursements for patient expenses, resources for advertising and outreach, and funding for patient navigators and counselors.
In contrast to the top-down development of federal cancer research infrastructure, research into HIV/AIDS has been shaped by community involvement and activism. Activists pushed researchers to alter standard protocols for research, calling for accelerated approvals and emergency access to medicine, introduction of surrogate endpoints that could proxy for other clinical markers, and greater emphasis on representation in trial recruitment (Epstein 1996). In parallel, political, religious, and community leaders worked to combat the stigma associated with links between HIV/AIDS and homosexuality, especially in Black communities, thus creating opportunities for individuals to seek access to experimental therapies (Robertson 2006; Royles 2020). At a 1990 community forum on clinical trials held in San Francisco, ACT UP/San Francisco member Michelle Roland called for a “revolution in clinical trial design,” in which activists and scientists designed “realistic clinical trials that do a better job of meeting people’s needs” (as recounted in Epstein 1996, ch. 7). In response to demands from activists, the AIDS Clinical Trial Group and the NIAID adopted the practice of seeking community involvement at each trial site when developing protocols, prioritizing long-term relationships outside of academic medical centers (Kagan et al. 2012).
Table VI substantiates these anecdotes and makes clear how site selection shapes trial composition. Among U.S.-based trial sites listed in the ClinicalTrials.gov database, sites that enroll for HIV/AIDS are approximately 11 (16) percentage points more likely to be located at a safety net hospital than sites that recruit for cancer (Alzheimer’s disease and related dementias, ADRD). Unsurprisingly, the demographic characteristics of the trial sites also differ. Online Appendix Tables C20 and C21 report information on the demographics of HIV/AIDS, cancer, and ADRD research centers at the hospital service area level for all clinical trials and for specific networks.44 Trial sites recruiting for cancer have, on average, a 10.5 percentage point higher share of non-Hispanic white population and a 3.0 percentage point higher share of those with private health insurance than trial sites recruiting for HIV/AIDS.45
TABLE VI.
Trial Sites and Safety Net Hospitals
| | DSH Index | | UCMP Care | |
|---|---|---|---|---|
| | (1) | (2) | (3) | (4) |
| HIV/AIDS (cancer comparison) | 0.110*** | | 0.019*** | |
| | (0.008) | | (0.007) | |
| HIV/AIDS (ADRD comparison) | | 0.161*** | | 0.054*** |
| | | (0.012) | | (0.010) |
| Constant | 0.475 | 0.423 | 0.176 | 0.141 |
| Observations | 197,240 | 6,804 | 182,929 | 5,997 |
Notes. The table reports OLS estimates from regressions of an indicator for whether a trial site is located at a safety net hospital on disease-area indicators. Each observation represents a specific site associated with a unique clinical trial, and the data are limited to cancer, HIV/AIDS, and Alzheimer’s disease and related dementias (ADRD) trials. Following Popescu et al. (2019), we define a safety-net hospital as a hospital in the state’s top quartile of Medicaid and Medicare Supplemental Security Income inpatient days historically used to determine Medicare Disproportionate Share Hospital (DSH) payments (columns (1) and (2)) or of uncompensated (UCMP) care costs as a percentage of total operating expenses (columns (3) and (4)). See Online Appendix H for more detailed definitions of these variables. HIV/AIDS (cancer comparison) is an indicator variable equal to 1 if a trial site studies HIV/AIDS and 0 if a trial site studies cancer. HIV/AIDS (ADRD comparison) is an indicator variable equal to 1 if a trial site studies HIV/AIDS and 0 if a trial site studies ADRD. See Online Appendix Table C19 for the Cancer (ADRD comparison). Trial site information is drawn from ClinicalTrials.gov. See Online Appendix H.1.1 and H.3.8 for details. Robust standard errors are in parentheses.
*, **, and *** refer to statistical significance at the 10%, 5%, and 1% levels, respectively.
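The specification behind Table VI is a regression of a binary safety-net indicator on a disease-area dummy, so the slope is simply the difference in group means. A minimal sketch with made-up site counts, calibrated so the group means match column (1) (0.475 for cancer sites, 0.585 for HIV/AIDS sites):

```python
import numpy as np

# Hypothetical illustration of the Table VI specification: regress a
# safety-net indicator on an HIV/AIDS dummy (cancer comparison).
# The site counts below are invented for illustration only.
cancer = np.r_[np.ones(475), np.zeros(525)]   # 1,000 "cancer" sites, mean 0.475
hiv = np.r_[np.ones(585), np.zeros(415)]      # 1,000 "HIV/AIDS" sites, mean 0.585

y = np.concatenate([cancer, hiv])             # safety-net indicator
d = np.r_[np.zeros(1000), np.ones(1000)]      # HIV/AIDS dummy
X = np.column_stack([np.ones_like(d), d])     # constant + dummy

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# intercept = cancer-group mean (0.475); slope = difference in means (0.110),
# i.e., the 11 percentage point gap reported in column (1)
print(beta)
```

With a single binary regressor, OLS mechanically recovers the two group means, which is why the constant and the coefficient in the table can be read as the cancer-site safety-net share and the HIV/AIDS gap.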
Site selection is just one part of the R&D process: protocol development is another important step and also differs across conditions. Since 1990, the Division of AIDS (DAIDS) at the NIAID has required that trial protocols include explicit community engagement plans, developed in conjunction with standing community advisory boards (Strauss et al. 2001).46 The advisory boards meet regularly with trial investigators and consult on proposed protocols. Our discussion with HIV Vaccine Trials Network leadership suggests that DAIDS requirements have important spillover effects: although firms are not obligated to comply, industry sponsors often engage with communities to benefit from existing recruitment networks.
The stark differences in trial composition for cancer and HIV/AIDS highlight the extent to which active, large-scale investments in inclusive infrastructure, in addition to incentives, can be important for reducing health disparities. Figure VI, Panel B demonstrates a positive relationship between greater representation in trials and prescribing rates.47 This descriptive finding is robust to dropping HIV/AIDS (see Online Appendix Figure B18), although the main takeaway from this section is that HIV/AIDS is an “outlier” on many dimensions and therefore a potentially useful template for industry and regulators.
VI.C. Concluding Comments
Motivated by persistent, substantial racial disparities in both clinical trial enrollment and prescriptions for new drugs, we investigated the consequences and causes of underrepresentation of Black patients in medical research. Consistent with a theoretical model of similarity-based extrapolation, when experimental samples are not representative, Black patients and the physicians who treat them find trial evidence less relevant for their care, and physicians are less willing to prescribe the resulting medications. However, when the evidence base is more racially representative, these gaps close. The results suggest that a feedback loop exists between representation in a process and subsequent decision-making. Such a cycle of underrepresentation could apply more widely to any data-driven participation or take-up decision.
Supplementary Material
Acknowledgments
* We are grateful to Alyce Adams, Anna Aizer, Michele Andrasik, Arthur Applbaum, David Autor, John Beshears, Sally Bock, Emily Breza, Amitabh Chandra, Niteesh Choudhry, Katherine Coffman, David Costanzo, Amy Finkelstein, Christopher Hucks-Ortiz, Damon Jones, Rem Koning, Lisa Larrimore Ouellette, Corinne Low, Elisa Macchi, Kyle Myers, Nathan Nunn, Gautam Rao, Andrei Shleifer, Jon Skinner, Stefanie Stantcheva, Adi Sunderam, Ebonya Washington, Crystal Yang, and seminar participants from MIT, Stanford, Beth Israel Deaconess Hospital, Massachusetts General Hospital, the Brookings Institution, Warwick, Stockholm School of Economics, Rutgers, San Diego State, Georgia State, Boston University, ICSSI 2022, EAAMO 2022, and the U.S. Food and Drug Administration, for helpful comments. We are also grateful for comments and suggestions that substantially improved the article from the editors (Larry Katz and Andrei Shleifer) and five anonymous referees. Nick Shankar, Lukas Leister, Anne Fogarty, Emma Ronzetti, Xingyou Ye, AnneMarie Bryson, and Zahra Thabet provided exceptional research assistance. We thank Harlan Krumholz, Joseph Ross, John Welsh, and Research!America for sharing data and Michele Andrasik, Sally Bock, David Costanzo, and Christopher Hucks-Ortiz from the Fred Hutchinson Cancer Center for sharing experiences and expertise. The study is approved by the IRB at Harvard University (IRB21–1384 and IRB22–0017) and registered at the AEA RCT Registry (AEARCTR-0008957 and AEARCTR-0008959). We gratefully acknowledge financial support from the National Science Foundation under Grant No. DGE-1656518, the Knight Hennessy Scholars Program, and the MacArthur Foundation. We note that Marcella Alsan was part of the Expert Committee for the National Academies of Sciences, Engineering, and Medicine Report on Underrepresentation in Clinical Research referenced herein.
Footnotes
An Online Appendix for this article can be found at The Quarterly Journal of Economics online.
Women’s enrollment in clinical trials has been increasing over time and is currently comparable to women’s population share (see Online Appendix Figure B3), although gaps in certain conditions remain (Feldman et al. 2019; Steinberg et al. 2021; Gupta 2022; Sosinsky et al. 2022).
We informed physicians that the drugs were hypothetical so they would not try to prescribe them after the experiment.
Only 11.5% of physicians and 7.1% of patients attrited after consent, and this was not differential across arms.
See Ehrhardt, Appel, and Meinert (2015) for evidence of the relative importance of industry sponsorship. Our estimates of the composition of clinical trials are drawn from ClinicalTrials.gov. We collected data on trials that both study products approved for sale in the United States and were subject to regulation by U.S. agencies. See Online Appendix H.1.1 for details.
These institutions are flagged as “Other” in ClinicalTrials.gov. We reviewed institutions in this set to confirm that our interpretation of “Other” was correct.
We verify this using data drawn from the U.S. Federal Register. In nearly all cases, core patents are filed just before the beginning of clinical testing. See Budish, Roin, and Williams (2015) for a discussion on the timing of initial patent filing.
This estimate reported in Moore et al. (2018) draws on proprietary data and estimates the costs of pivotal trials associated with new drugs approved by the FDA in 2015 and 2016. Note that both smaller and larger estimates of trial cost have been reported in the academic literature. For example, DiMasi, Grabowski, and Hansen (2016) estimated the median cost of a Phase III trial as $200 million.
Note that these gaps are relatively constant when we control for income, education, and political affiliation (see Online Appendix Table C1). We also note that conditioning on many characteristics may not always be appropriate when quantifying racial gaps (see Online Appendix A.1).
Sample size and measures of statistical significance and precision are also reported in abstracts. We reviewed publications associated with ∼500 clinical trials, including 341 referenced in Welsh et al. (2018) and ∼150 trials associated with products approved for sale in the United States, published between 2015 and 2020. In nearly all cases, average effects of interventions were reported in the abstracts. Nearly all trials included some demographic information in a balance table, and approximately 50% reported race.
The steps include (a) problem definition; (b) search for relevant sources of information; (c) critical evaluation of the information; (d) application of information to the patient; and (e) efficacy evaluation of this application on the patient. In this penultimate step—application of the information to the particular patient—the specific question is asked: “Are the participants in the study similar enough to my patients?” (Masic, Miokovic, and Muhamedagic 2008, 222).
These assumptions simplify the presentation of the model, but it will be clear that the intuitions that arise from the model do not hinge on them.
For simplicity, we abstract from the need for a control group and also assume is known to the firm ahead of the trial, while is stochastic and revealed by the trial.
Such learning includes case-based learning (Gilboa and Schmeidler 1995), analogical reasoning (Jehiel 2005; Mullainathan, Schwartzstein, and Shleifer 2008), associative learning (Mullainathan 2002; Bordalo, Gennaioli, and Shleifer 2020), reinforcement learning (Daw 2014), and the idea that information from similar sources “resonates” more than information from dissimilar sources (Malmendier and Veldkamp 2022).
In the case that matters, they believe is statistically independent of , so evidence on whether the treatment works on people with does not speak to whether it works on people with = 1 and vice versa. In the case that doesn’t matter, they believe equals . We simplify by assuming that is fixed over time—that is, that people don’t update their beliefs about . Incorporating such updating could strengthen the benefit of increasing Black representation.
Recall that the FDA does not require (and trials are therefore not powered to report) treatment efficacy conditional on . The assumption that successes attributable to participants with scale with their proportion in the trial is a conservative assumption on how people “fill in” missing data. Specifically, it rules out physician- or patient-assumed heterogeneous trial efficacy as the mechanism driving our predictions. Relaxing this assumption would increase the importance of representation in our model.
We focus on situations where the average trial efficacy exceeds prior-belief ratios for several reasons. First, it matches the focus on successful trials in our surveys. Second, doctors are asked to consider “favorable risk-benefit ratios” when recommending trials to their patients (Emanuel, Wendler, and Grady 2000). Third, given the treatment approval process, patients tend to only have access to treatments that performed well in clinical trials. Fourth, it matches the empirical reality that trial results are typically only made public when successful (Turner et al. 2008, 2022; Driessen et al. 2015).
Similar treatments could, for example, refer to treatments in the same category (drug class), or potentially all treatments for the same disease. Our analysis would be unchanged qualitatively if people’s priors were constructed as a weighted average of their posteriors regarding previous treatments, with more similar treatments receiving larger weights, or if priors were constructed through a simulation mechanism akin to that modeled by Bordalo et al. (2022).
As with Proposition 1, Corollary 1 restricts attention to how patients update given successful trials. While many trials fail, results from failed trials are less likely to be made public than results from successful trials (see note 16). In principle, patients could infer that “no news is bad news.” In practice, however, evidence suggests that people often do not make this type of inference even in simple laboratory settings (e.g., Jin, Luca, and Martin 2021). We further note that there are additional mechanisms beyond those we formalize where more representative trials could increase Black patients’ willingness to participate in future trials, even if those trials are less successful than prior beliefs.
The last result on diminishing returns requires mild regularity conditions on .
Online Appendix Figure B9 depicts the flow of the physician and patient surveys.
We used hypothetical rather than real drugs because far too few real-world trials existed to experimentally vary the share of Black participants while carefully titrating mechanisms of action and efficacy. Kesselheim et al. (2012) followed a similar hypothetical-drug approach to measure how the source of clinical trial funding influences the prescribing behavior of doctors. In a complementary study, Oostrom (2022) reports that clinical trials funded by pharmaceutical companies report higher efficacy than trials of the same drug with a different study sponsor.
This language was chosen intentionally to mirror standard DTCA in the United States, one of the primary contexts in which patients engage, unassisted by a physician, with medical information.
We determine ZIP code rank using five-year ZIP code–level population estimates reported in the 2019 American Community Survey.
We piloted this survey with $75 honoraria but raised compensation to increase yield. The only meaningful deviation from our preanalysis plan was that we planned to recruit 1,000 hypertensive patients, but it proved difficult to find that many who met both our demographic and medical criteria.
There were 8,640 unique profiles: 15 hypothetical drugs multiplied by 16 possible efficacy values (0.5%–2.0% reductions in A1c in 0.1% increments) multiplied by 36 possible values of percent Black of trial subjects (0%–35% in 1% increments).
Values of percent Black ranging from 0% to 4% were sampled with probability 0.33, values ranging from 5% to 14% were sampled with probability 0.34, and values ranging from 15% to 35% were sampled with probability 0.33.
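The design described in the two notes above can be checked directly: 15 drugs times 16 efficacy values times 36 percent-Black values gives 8,640 profiles, and the three strata probabilities sum to one. The counts and probabilities come from the text; the sampling function itself is our illustrative sketch, not the authors’ code.

```python
import random

# 15 drugs x 16 efficacy values (0.5%-2.0% in 0.1% steps) x 36 percent-Black
# values (0%-35% in 1% steps) = 8,640 unique profiles, as stated in the text.
n_profiles = 15 * 16 * 36

def draw_percent_black(rng):
    """Illustrative draw of percent Black using the stated strata:
    0%-4% w.p. 0.33, 5%-14% w.p. 0.34, 15%-35% w.p. 0.33,
    uniform within each stratum (an assumption on our part)."""
    u = rng.random()
    if u < 0.33:
        return rng.randrange(0, 5)     # 0%-4%
    elif u < 0.67:
        return rng.randrange(5, 15)    # 5%-14%
    else:
        return rng.randrange(15, 36)   # 15%-35%

print(n_profiles)  # 8640
```

Note that the within-stratum distribution (uniform here) is not specified in the footnote; only the stratum-level probabilities are.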
Online Appendix Table C2 demonstrates that both the mean and the range of representation and efficacy values assigned to physicians are uncorrelated with a host of physician and patient panel characteristics.
Statistics on breakdown by sex were not provided in the drug profile. Although sex is an important characteristic, the policy issue of underrepresentation of women in trials is not as acute (see Online Appendix Figure B3).
See Online Appendix Exhibits E4 and E5 for the exact question wording shown to physicians and a link to the survey.
By declining to provide a range of values or a dropdown menu, we screened out any individuals who were unfamiliar with the scales for either measurement and thus less likely to carry the diagnosis.
To hold efficacy precisely constant across trials, we reported to participants that treated subjects in their assigned trial saw their systolic blood pressure drop significantly compared with subjects in the control group, and then stated that across similar studies the average drop in systolic blood pressure among participants taking the medication was about 15 mmHg.
The exact question wording shown to patients and a link to the survey can be found in Online Appendix Exhibits E7 and E8.
In total, 4.7% of emails bounced and 1.8% of those invited started the survey. Our click-through rate of 1.8% was considerably higher than the 0.25% to 0.5% quoted to us by vendors as typical for email marketing campaigns (Richardson, Dominowska, and Ragno 2007; Kanich et al. 2009).
Approximately 60% of the physicians who completed the initial survey responded to the follow-up email. The physicians who responded to the follow-up survey were comparable to those who did not respond to the follow-up survey (see Online Appendix Table C4).
Our main results are robust to including person weights derived from a nationally representative survey, the MEPS (Online Appendix Table C6).
Nonstandardized outcomes and continuous updating outcomes yield similar results, which are gathered in Online Appendix Table C12. Note that our approach deviated from many tests of Bayesian updating in that we did not vary the signal on drug efficacy (Jensen 2010; Roth and Wohlfart 2020; Hjort et al. 2021). Rather, the intervention informed patient respondents of a distinct feature of the data-generating process—the composition of the sample—that our framework predicts influences the weight they place on the signal in assessing how much the drug would personally benefit them. Our focus is then on this weight, as measured by whether patients’ posterior beliefs were within 1 mmHg of the reported signal.
We sent a follow-up survey to physicians after at least a week, to allow for some time between the actual survey and the donation question. There are few differences between our original sample and the sample of physicians who respond to the follow-up survey, with the exception of race. Physicians who reply to the donation question are more likely to be white than non-white (Online Appendix Table C4).
See Online Appendix A.3 for additional discussion.
Table I suggests that Black patients are about 90% as likely as whites to have heard of a clinical trial and about 94% as likely to say they would enroll if a doctor recommended it. If they only hear about clinical trials when doctors point them out, then this implies they should participate in clinical trials at about 85% the rate of white patients. But Figure I implies that the share of Black patients in clinical trials relative to their population share is around 33% that of white patients.
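The back-of-envelope calculation in this note can be verified in one line (the 0.90, 0.94, and 0.33 figures are the ones cited above from Table I and Figure I):

```python
# If Black patients are 90% as likely to have heard of trials and 94% as
# likely to say they would enroll, the implied relative participation rate
# is the product of the two.
relative_awareness = 0.90
relative_willingness = 0.94
implied_relative_participation = relative_awareness * relative_willingness
print(round(implied_relative_participation, 2))  # 0.85, versus ~0.33 observed
```

The gap between the implied 85% and the observed ~33% is what motivates looking beyond awareness and stated willingness for explanations of underenrollment.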
Firms may also be able to free ride on investments made in inclusive infrastructure by the public sector or other firms (reducing fixed costs f), which is an additional channel by which firms wouldn’t fully internalize the social benefits of such investments. Such an externality suggests that firms may underinvest in such technology relative to what is socially optimal. See Online Appendix F.2 and F.4 for details.
While Proposition 2 suggests that Black representation could get worse over time in a cycle of underrepresentation, it abstracts from policy efforts to improve representation (see Online Appendix G.1). We view the proposition as identifying a force that pushes against such policy efforts.
All diseases or conditions presented except HIV/AIDS are among the 10 leading causes of death in the United States (Heron 2021). We did not include unintentional injuries and suicide as there are few pharmaceuticals intended to prevent/treat such deaths.
There are 131 dedicated research centers that co-organize trials for cancer, and 108 co-organize trials for HIV/AIDS. Although the majority of HIV/AIDS funding is allocated via NIAID, the NCI also includes budgets for HIV/AIDS research.
See Online Appendix Table VI for more comparisons.
Online Appendix Figure B17 demonstrates a strong correlation between trial site ZIP code share Black and share Black in a trial. See Online Appendix G.1 for information on recent cancer and ADRD initiatives to diversify site selection. We outline efforts to compensate patients for participation and to improve the quality of hospitals that serve Black patients in Online Appendix G.1 (see also Chandra, Kakani, and Sacarny 2020 for evidence of recent quality improvement in hospitals).
Although some institutions maintain a community advisory board for cancer trials, the board requirement at DAIDS is unique (National Institute of Allergy and Infectious Diseases 2022).
Another way HIV/AIDS is unique is Ryan White Care Act funding (see Dillender 2022). Title I funds cities and Title II funds states, a portion of which must go to the AIDS Drug Assistance Programs, which may in turn have pull incentives on innovation as per Acemoglu et al. (2006), Finkelstein (2004), and Acemoglu and Linn (2004).
Contributor Information
MARCELLA ALSAN, Harvard Kennedy School and National Bureau of Economic Research, United States.
MAYA DURVASULA, Stanford University, United States.
HARSH GUPTA, Stanford University, United States.
JOSHUA SCHWARTZSTEIN, Harvard Business School, United States.
HEIDI WILLIAMS, Stanford University and National Bureau of Economic Research, United States.
DATA AVAILABILITY
The data underlying this article are available in the Harvard Dataverse, https://doi.org/10.7910/DVN/VB5MDJ (Alsan et al. 2023).
REFERENCES
- Acemoglu Daron, Cutler David, Finkelstein Amy, and Linn Joshua, “Did Medicare Induce Pharmaceutical Innovation?,” American Economic Review, 96 (2006), 103–107. 10.1257/000282806777211766 [DOI] [PubMed] [Google Scholar]
- Acemoglu Daron, and Linn Joshua, “Market Size in Innovation: Theory and Evidence from the Pharmaceutical Industry,” Quarterly Journal of Economics, 119 (2004), 1049–1090. 10.1162/0033553041502144 [DOI] [Google Scholar]
- Agency for Healthcare Research and Quality, “MEPS Prescribed Medicine Files 1996–2019,” 2022, https://meps.ahrq.gov/mepsweb/data_stats/download_data_files.jsp.
- Agha Leila, and Molitor David, “The Local Influence of Pioneer Investigators on Technology Adoption: Evidence from New Cancer Drugs,” Review of Economics and Statistics, 100 (2018), 29–44. 10.1162/REST_a_00670 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aghion Philippe, Akcigit Ufuk, Bergeaud Antonin, Blundell Richard, and Hémous David, “Innovation and Top Income Inequality,” Review of Economic Studies, 86 (2019), 1–45. 10.1093/restud/rdy027 [DOI] [Google Scholar]
- Albright Alex, Cook Jeremy A., Feigenbaum James J., Kincaide Laura, Long Jason, and Nunn Nathan, “After the Burning: The Economic Effects of the 1921 Tulsa Race Massacre,” NBER Working Paper no. 28985, 2021. 10.3386/w28985 [DOI] [Google Scholar]
- Alsan Marcella, Durvasula Maya, Gupta Harsh, Schwartzstein Joshua, and Williams Heidi, “Replication Data for: ‘Representation and Extrapolation: Evidence from Clinical Trials’,” (2023), Harvard Dataverse, 10.7910/DVN/VB5MDJ. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Alsan Marcella, and Wanamaker Marianne, “Tuskegee and the Health of Black Men,” Quarterly Journal of Economics, 133 (2018), 407–455. 10.1093/qje/qjx029 [DOI] [PMC free article] [PubMed] [Google Scholar]
- American Diabetes Association, “Section 9. Pharmacologic Approaches to Glycemic Treatment: Standards of Medical Care in Diabetes––2021,” Diabetes Care, 43 (2020), S111–S124. [DOI] [PubMed] [Google Scholar]
- Bach Peter B., Pham Hoangmai H., Schrag Deborah, Tate Ramsey C., and Lee Hargraves J, “Primary Care Physicians Who Treat Blacks and Whites,” New England Journal of Medicine, 351 (2004), 575–584. 10.1056/NEJMsa040609 [DOI] [PubMed] [Google Scholar]
- Barnato Amber E., “Challenges in Understanding and Respecting Patients’ Preferences,” Health Affairs, 36 (2017), 1252–1257. 10.1377/hlthaff.2017.0177 [DOI] [PubMed] [Google Scholar]
- Blanco Maria A., Dorsch Josephine L., Perry Gerald, and Zanetti Mary L., “A Survey Study of Evidence-Based Medicine Training in US and Canadian Medical Schools,” Journal of the Medical Library Association, 102 (2014), 160–168. 10.3163/1536-5050.102.3.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Blewett Lynn A., Rivera Drew Julia A., Griffin Risa, and Williams Kari C.W., “IPUMS Health Surveys: Medical Expenditure Panel Survey, Version 1.1,” 2019. 10.18128/D071.V1.1 (2019). [DOI] [Google Scholar]
- Boerma Job, and Karabarbounis Loukas, “Reparations and Persistent Racial Wealth Gaps,” NBER Macroeconomics Annual, 37 (2023), 171–221. 10.1086/723578 [DOI] [Google Scholar]
- Bordalo Pedro, Gennaioli Nicola, and Shleifer Andrei, “Memory, Attention, and Choice,” Quarterly Journal of Economics, 135 (2020), 1399–1442. 10.1093/qje/qjaa007 [DOI] [Google Scholar]
- Bordalo Pedro, Burro Giovanni, Coffman Katherine B., Gennaioli Nicola, and Shleifer Andrei, “Imagining the Future: Memory, Simulation and Beliefs About COVID,” NBER Working Paper no. 30353, 2022. 10.3386/w30353 [DOI] [Google Scholar]
- Budish Eric, Roin Benjamin N., and Williams Heidi, “Do Firms Underinvest in Long-Term Research? Evidence from Cancer Clinical Trials,” American Economic Review, 105 (2015), 2044–2085. 10.1257/aer.20131176 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Centers for Disease Control and Prevention, “Prevalence of Both Diagnosed and Undiagnosed Diabetes,” 2021. https://www.cdc.gov/diabetes/data/statistics-report/diagnosed-undiagnosed-diabetes.html.
- Chandra Amitabh, Frakes Michael, and Malani Anup, “Challenges to Reducing Discrimination and Health Inequity through Existing Civil Rights Laws,” Health Affairs, 36 (2017), 1041–1047. 10.1377/hlthaff.2016.1091 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chandra Amitabh, Kakani Pragya, and Sacarny Adam, “Hospital Allocation and Racial Disparities in Health Care,” NBER Working Paper no. 28018, 2020. 10.3386/w28018 [DOI] [Google Scholar]
- Chernozhukov Victor, Chetverikov Denis, Demirer Mert, Duflo Esther, Hansen Christian, Newey Whitney, and Robins James, “Double/Debiased Machine Learning for Treatment and Structural Parameters,” Econometrics Journal, 21 (2018), C1–C68. 10.1111/ectj.12097 [DOI] [Google Scholar]
- Cochrane Archie, Effectiveness and Efficiency: Random Reflections on Health Services (London: Nuffield Provincial Hospital Trust, 1972). [Google Scholar]
- Congressional Research Service, “National Institutes of Health (NIH) Funding: FY1996–FY2023,” 2022. https://sgp.fas.org/crs/misc/R43341.pdf.
- Cutler David M., Meara Ellen, and Richards-Shubik Seth, “Induced Innovation and Social Inequality: Evidence from Infant Medical Care,” Journal of Human Resources, 47 (2012), 456–492. 10.1353/jhr.2012.0014 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Daw Nathaniel D, “Advanced Reinforcement Learning,” in Neuroeconomics: Decision Making and the Brain, 2nd ed., Glimcher Paul W. and Fehr Ernst, eds. (Amsterdam: Academic Press, 2014), 299–320. [Google Scholar]
- Derenoncourt Ellora, Chi Hyun Kim Moritz Kuhn, and Schularick Moritz, “Wealth of Two Nations: The U.S. Racial Wealth Gap, 1860–2020,” Quarterly Journal of Economics, forthcoming. 10.1093/qje/qjad044 [DOI] [Google Scholar]
- Dillender Marcus, “Evidence and Lessons on the Health Impacts of Public Health Funding from the Fight against HIV/AIDS,” NBER Working Paper no. 28867, 2022. 10.3386/w28867 [DOI] [Google Scholar]
- DiMasi Joseph A., Grabowski Henry G., and Hansen Ronald W., “Innovation in the Pharmaceutical Industry: New Estimates of R&D Costs,” Journal of Health Economics, 47 (2016), 20–33. 10.1016/j.jhealeco.2016.01.012 [DOI] [PubMed] [Google Scholar]
- Ding Dong, and Glied Sherry A., “Disparities in the Use of New Diabetes Medications: Widening Treatment Inequality by Race and Insurance Coverage,” Commonwealth Fund issue brief, 2022. https://www.commonwealthfund.org/publications/issue-briefs/2022/jun/disparities-use-new-diabetes-medications-treatment-inequality. [Google Scholar]
- Dornsife Dana, Monroe Stephanie, Richie Nicole, Sandoval Fabian, Brisard Claudine, Kenny Nicholas, McDonough Keri, and Starr Kathleen, “How to Boost Racial, Ethnic and Gender Diversity in Clinical Research,” Syneos Health, 2019. https://www.syneoshealth.com/insights-hub/how-boost-racial-ethnic-and-gender-diversity-clinical-research. [Google Scholar]
- Doximity, “Survey: How Doctors Read and What it Means to Patients,” Business Wire, July 22, 2014. https://www.businesswire.com/news/home/20140722005535/en/Survey-Doctors-Read-Means-Patients.
- Driessen Ellen, Hollon Steven D., Bockting Claudi L. H., Cuijpers Pim, and Turner Erick H., “Does Publication Bias Inflate the Apparent Efficacy of Psychological Treatment for Major Depressive Disorder? A Systematic Review and Meta-Analysis of US National Institutes of Health-Funded Trials, ” PLoS One, 10 (2015), e0137864. 10.1371/journal.pone.0137864 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ehrhardt Stephan, Appel Lawrence J., and Meinert Curtis L., “Trends in National Institutes of Health Funding for Clinical Trials Registered in Clinical-Trials.gov,” Journal of the American Medical Association, 314 (2015), 2566–2567. 10.1001/jama.2015.12206 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Elhussein Ahmed, Anderson Andrea, Bancks Michael P., Coday Mace, Knowler William C., Peters Anne, Vaughan Elizabeth M., Maruthur Nisa M., Clark Jeanne M., and Pilla Scott, “Racial/Ethnic and Socioeconomic Disparities in the Use of Newer Diabetes Medications in the Look AHEAD study,” Lancet Regional Health - Americas, 6 (2022), 100111. 10.1016/j.lana.2021.100111 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Eli Shari, Logan Trevon D., and Miloucheva Boriana, “Physician Bias and Racial Disparities in Health: Evidence from Veterans’ Pensions,” NBER Working Paper no. 25846, 2019. 10.3386/w25846 [DOI] [Google Scholar]
- Elías Julio J., Lacetera Nicola, and Macis Mario, “Paying for Kidneys? A Randomized Survey and Choice Experiment,” American Economic Review, 109 (2019), 2855–2888. 10.1257/aer.20180568 [DOI] [Google Scholar]
- Ellis Randall P., and McGuire Thomas G., “Provider Behavior under Prospective Reimbursement: Cost Sharing and Supply,” Journal of Health Economics, 5 (1986), 129–151. 10.1016/0167-6296(86)90002-0
- Emanuel Ezekiel J., Wendler David, and Grady Christine, “What Makes Clinical Research Ethical?,” Journal of the American Medical Association, 283 (2000), 2701–2711. 10.1001/jama.283.20.2701
- Epstein Steven, Impure Science: AIDS, Activism, and the Politics of Knowledge (Berkeley: University of California Press, 1996).
- Feldman Sergey, Ammar Waleed, Lo Kyle, Trepman Elly, van Zuylen Madeleine, and Etzioni Oren, “Quantifying Sex Bias in Clinical Studies at Scale with Automated Data Extraction,” JAMA Network Open, 2 (2019), e196700. 10.1001/jamanetworkopen.2019.6700
- Finkelstein Amy, “Static and Dynamic Effects of Health Policy: Evidence from the Vaccine Industry,” Quarterly Journal of Economics, 119 (2004), 527–564. 10.1162/0033553041382166
- Foster Andrew D., and Rosenzweig Mark R., “Microeconomics of Technology Adoption,” Annual Review of Economics, 2 (2010), 395–424. 10.1146/annurev.economics.102308.124433
- Gilboa Itzhak, and Schmeidler David, “Case-Based Decision Theory,” Quarterly Journal of Economics, 110 (1995), 605–639. 10.2307/2946694
- Glied Sherry, and Lleras-Muney Adriana, “Technological Innovation and Inequality in Health,” Demography, 45 (2008), 741–761. 10.1353/dem.0.0017
- Golden Sherita Hill, Brown Arleen, Cauley Jane A., Chin Marshall H., Gary-Webb Tiffany L., Kim Catherine, Sosa Julie Ann, Sumner Anne E., and Anton Blair, “Health Disparities in Endocrine Disorders: Biological, Clinical, and Nonclinical Factors—An Endocrine Society Scientific Statement,” Journal of Clinical Endocrinology and Metabolism, 97 (2012), E1579–E1639. 10.1210/jc.2012-2043
- Goldman Dana, and Lakdawalla Darius, “The Global Burden of Medical Innovation,” USC Schaeffer Center for Health Policy & Economics Working Paper, 2018.
- Green Angela K., Trivedi Niti, Hsu Jennifer J., Yu Nancy L., Bach Peter B., and Chimonas Susan, “Despite the FDA’s Five-Year Plan, Black Patients Remain Inadequately Represented in Clinical Trials for Drugs,” Health Affairs, 41 (2022), 368–374. 10.1377/hlthaff.2021.01432
- Gupta Harsh, “Do Female Researchers Increase Female Enrollment in Clinical Trials?,” Stanford University Working Paper, 2022.
- Hamilton Barton H., Hincapié Andrés, Kalish Emma C., and Papageorge Nicholas W., “Medical Innovation and Health Disparities,” NBER Working Paper no. 28864, 2021. 10.3386/w28864
- Henschke Ulrich K., Leffall LaSalle D. Jr., Mason Claudia H., Reinhold Andreas W., Schneider Roy L., and White Jack E., “Alarming Increase of the Cancer Mortality in the U.S. Black Population (1950–1967),” Cancer, 31 (1973), 763–768.
- Heron Melonie, “Deaths: Leading Causes for 2019,” National Vital Statistics Reports, 70 (2021), 1–114.
- Hjort Jonas, Moreira Diana, Rao Gautam, and Santini Juan Francisco, “How Research Affects Policy: Experimental Evidence from 2,150 Brazilian Municipalities,” American Economic Review, 111 (2021), 1442–1480. 10.1257/aer.20190830
- IQVIA, “Global Medicines Use in 2020: Outlook and Implications,” IMS Institute for Healthcare Informatics Technical Report, 2015.
- Jaravel Xavier, “The Unequal Gains from Product Innovations: Evidence from the U.S. Retail Sector,” Quarterly Journal of Economics, 134 (2019), 715–783. 10.1093/qje/qjy031
- Jehiel Philippe, “Analogy-Based Expectation Equilibrium,” Journal of Economic Theory, 123 (2005), 81–104. 10.1016/j.jet.2003.12.003
- Jensen Robert, “The (Perceived) Returns to Education and the Demand for Schooling,” Quarterly Journal of Economics, 125 (2010), 515–548. 10.1162/qjec.2010.125.2.515
- Jin Ginger Zhe, Luca Michael, and Martin Daniel, “Is No News (Perceived As) Bad News? An Experimental Investigation of Information Disclosure,” American Economic Journal: Microeconomics, 13 (2021), 141–173. 10.1257/mic.20180217
- Jones Charles I., and Kim Jihee, “A Schumpeterian Model of Top Income Inequality,” Journal of Political Economy, 126 (2018), 1785–1826. 10.1086/699190
- Jung Jeah, and Feldman Roger, “Racial-Ethnic Disparities in Uptake of New Hepatitis C Drugs in Medicare,” Journal of Racial and Ethnic Health Disparities, 4 (2017), 1147–1158. 10.1007/s40615-016-0320-2
- Kagan Jonathan M., Rosas Scott R., Siskind Rona L., Campbell Russell D., Gondwe Daniel, Munroe Daniel, Trochim William M. K., and Schouten Jeffery T., “Community-Researcher Partnerships at NIAID HIV/AIDS Clinical Trials Sites: Insights for Evaluation and Enhancement,” Progress in Community Health Partnerships: Research, Education, and Action, 6 (2012), 311–320. 10.1353/cpr.2012.0034
- Kanich Chris, Kreibich Christian, Levchenko Kirill, Enright Brandon, Voelker Geoffrey M., Paxson Vern, and Savage Stefan, “Spamalytics: An Empirical Analysis of Spam Marketing Conversion,” Communications of the ACM, 52 (2009), 99–107. 10.1145/1562164.1562190
- Kesselheim Aaron S., Robertson Christopher T., Myers Jessica A., Rose Susannah L., Gillet Victoria, Ross Kathryn M., Glynn Robert J., Joffe Steven, and Avorn Jerry, “A Randomized Study of How Physicians Interpret Research Funding Disclosures,” New England Journal of Medicine, 367 (2012), 1119–1127. 10.1056/NEJMsa1202397
- Kline Patrick, Petkova Neviana, Williams Heidi, and Zidar Owen, “Who Profits from Patents? Rent-Sharing at Innovative Firms,” Quarterly Journal of Economics, 134 (2019), 1343–1404. 10.1093/qje/qjz011
- Knepper Todd C., and McLeod Howard L., “When Will Clinical Trials Finally Reflect Diversity?,” Nature, 557 (2018), 157–159. 10.1038/d41586-018-05049-5
- Koning Rembrand, Samila Sampsa, and Ferguson John-Paul, “Who Do We Invent For? Patents by Women Focus More on Women’s Health, but Few Women Get to Invent,” Science, 372 (2021), 1345–1348. 10.1126/science.aba6990
- Kremer Michael, and Glennerster Rachel, Strong Medicine: Creating Incentives for Pharmaceutical Research on Neglected Diseases (Princeton, NJ: Princeton University Press, 2004).
- Kuziemko Ilyana, Norton Michael I., Saez Emmanuel, and Stantcheva Stefanie, “How Elastic Are Preferences for Redistribution? Evidence from Randomized Survey Experiments,” American Economic Review, 105 (2015), 1478–1508. 10.1257/aer.20130360
- Ledley Fred D., McCoy Sarah Shonka, Vaughan Gregory, and Cleary Ekaterina Galkina, “Profitability of Large Pharmaceutical Companies Compared with Other Large Public Companies,” Journal of the American Medical Association, 323 (2020), 834–843. 10.1001/jama.2020.0442
- Malmendier Ulrike, and Veldkamp Laura, “Information Resonance,” Columbia Business School Working Paper, 2022.
- Manski Charles F., Mullahy John, and Venkataramani Atheendar, “Using Measures of Race to Make Clinical Predictions: Decision Making, Patient Health, and Fairness,” NBER Working Paper no. 30700, 2022. 10.3386/w30700
- Marquez Miriam A., Muhs Joan M., Tosomeen Ann, Riggs B. Lawrence, and Melton L. Joseph III, “Costs and Strategies in Minority Recruitment for Osteoporosis Research,” Journal of Bone and Mineral Research, 18 (2003), 3–8. 10.1359/jbmr.2003.18.1.3
- Martí-Carvajal Arturo, “What Is Evidence-Based Medicine?,” Ciencias Médicas, 1 (2020), 1–7. 10.47449/CM.2020.1.1.21
- Masic Izet, Miokovic Milan, and Muhamedagic Belma, “Evidence Based Medicine—New Approaches and Challenges,” Acta Informatica Medica, 16 (2008), 219–225.
- McCoy Rozalina G., Dykhoff Hayley J., Sangaralingham Lindsey, Ross Joseph S., Karaca-Mandic Pinar, Montori Victor M., and Shah Nilay D., “Adoption of New Glucose-Lowering Medications in the U.S.—The Case of SGLT2 Inhibitors: Nationwide Cohort Study,” Diabetes Technology and Therapeutics, 21 (2019), 702–712. 10.1089/dia.2019.0213
- Michelman Valerie, and Msall Lucy, “Sex, Drugs, and R&D: Missing Innovation from Regulating Female Enrollment in Clinical Trials,” University of Chicago Working Paper, 2021.
- Moore Thomas J., Zhang Hanzhe, Anderson Gerard, and Alexander G. Caleb, “Estimated Costs of Pivotal Trials for Novel Therapeutic Agents Approved by the US Food and Drug Administration, 2015–2016,” JAMA Internal Medicine, 178 (2018), 1451–1457. 10.1001/jamainternmed.2018.3931
- Mukherjee Siddhartha, The Emperor of All Maladies: A Biography of Cancer (New York: Simon and Schuster, 2010).
- Mullainathan Sendhil, “A Memory-Based Model of Bounded Rationality,” Quarterly Journal of Economics, 117 (2002), 735–774. 10.1162/003355302760193887
- Mullainathan Sendhil, Schwartzstein Joshua, and Shleifer Andrei, “Coarse Thinking and Persuasion,” Quarterly Journal of Economics, 123 (2008), 577–619. 10.1162/qjec.2008.123.2.577
- NASEM (National Academies of Sciences, Engineering, and Medicine), Improving Representation in Clinical Trials and Research: Building Research Equity for Women and Underrepresented Groups (Washington, DC: National Academies Press, 2022).
- Nathan David M., Buse John B., Davidson Mayer B., Ferrannini Ele, Holman Rury R., Sherwin Robert, and Zinman Bernard, “Medical Management of Hyperglycemia in Type 2 Diabetes: A Consensus Algorithm for the Initiation and Adjustment of Therapy: A Consensus Statement of the American Diabetes Association and the European Association for the Study of Diabetes,” Diabetes Care, 32 (2009), 193–203. 10.2337/dc08-9025
- National Institute of Allergy and Infectious Diseases, “DAIDS Community Engagement,” 2022. https://www.niaid.nih.gov/daids-ctu/community-engagement-NEW.
- National Institutes of Health, “NIH’s Definition of a Clinical Trial,” 2017. https://grants.nih.gov/policy/clinical-trials/definition.htm.
- Oostrom Tamar, “Funding of Clinical Trials and Reported Drug Efficacy,” Electronic Health Economics Colloquium, 2022. https://www.ehealthecon.org/pdfs/Oostrom.pdf.
- Ostchega Yechiam, Fryar Cheryl D., Nwankwo Tatiana, and Nguyen Duong T., “Hypertension Prevalence among Adults Aged 18 and Over: United States, 2017–2018,” National Center for Health Statistics Data Brief no. 364 (2020), 1–8.
- Papageorge Nicholas W., “Why Medical Innovation Is Valuable: Health, Human Capital and the Labor Market,” Quantitative Economics, 7 (2016), 671–725. 10.3982/QE459
- Parcha Vibhu, Heindl Brittain, Kalra Rajat, Bress Adam, Rao Shreya, Pandey Ambarish, Gower Barbara, Irvin Marguerite R., McDonald Merry-Lynn N., Li Peng, Arora Garima, and Arora Pankaj, “Genetic European Ancestry and Incident Diabetes in Black Individuals: Insights from the SPRINT Trial,” Circulation: Genomic and Precision Medicine, 15 (2022), e003468. 10.1161/CIRCGEN.121.003468
- Petryna Adriana, When Experiments Travel (Princeton, NJ: Princeton University Press, 2009). 10.1515/9781400830824
- Popescu Ioana, Fingar Kathryn R., Cutler Eli, Guo Jing, and Jiang H. Joanna, “Comparison of 3 Safety-Net Hospital Definitions and Association with Hospital Characteristics,” JAMA Network Open, 2 (2019), e198577. 10.1001/jamanetworkopen.2019.8577
- Qiao Yao, Alexander G. Caleb, and Moore Thomas J., “Globalization of Clinical Trials: Variation in Estimated Regional Costs of Pivotal Trials, 2015–2016,” Clinical Trials, 16 (2019), 329–333. 10.1177/1740774519839391
- Ramamoorthy A., Pacanowski M. A., Bull J., and Zhang L., “Racial/Ethnic Differences in Drug Disposition and Response: Review of Recently Approved Drugs,” Clinical Pharmacology and Therapeutics, 97 (2015), 263–273. 10.1002/cpt.61
- Richardson Matthew, Dominowska Ewa, and Ragno Robert, “Predicting Clicks: Estimating the Click-Through Rate for New Ads,” Proceedings of the 16th International Conference on the World Wide Web (2007), 521–530. 10.1145/1242572.1242643
- Robertson Roland, “Civilization,” Theory, Culture, and Society, 23 (2006), 421–427. 10.1177/0263276406062699
- Roth Christopher, and Wohlfart Johannes, “How Do Expectations about the Macroeconomy Affect Personal Expectations and Behavior?,” Review of Economics and Statistics, 102 (2020), 731–748. 10.1162/rest_a_00867
- Royles Dan, To Make the Wounded Whole: The African American Struggle against HIV/AIDS (Chapel Hill: University of North Carolina Press, 2020). 10.5149/northcarolina/9781469661339.001.0001
- Schwartz Lisa M., and Woloshin Steven, “Medical Marketing in the United States, 1997–2016,” Journal of the American Medical Association, 321 (2019), 80–96. 10.1001/jama.2018.19320
- Sertkaya Aylin, Birkenbach Anna, Berlind Ayesha, and Eyraud John, “Examination of Clinical Trial Costs and Barriers for Drug Development,” HHS Assistant Secretary for Planning and Evaluation Report, 2014. https://aspe.hhs.gov/reports/examination-clinicaltrial-costs-barriers-drug-development-0.
- Skinner Jonathan, and Staiger Douglas, “Technology Adoption from Hybrid Corn to Beta Blockers,” NBER Working Paper no. 11251, 2005. 10.3386/w11251
- Skinner Jonathan, and Staiger Douglas, “Technology Diffusion and Productivity Growth in Health Care,” Review of Economics and Statistics, 97 (2015), 951–964. 10.1162/REST_a_00535
- Sosinsky Alexandra Z., Rich-Edwards Janet W., Wiley Aleta, Wright Kalifa, Spagnolo Primavera A., and Joffe Hadine, “Enrollment of Female Participants in United States Drug and Device Phase 1–3 Clinical Trials between 2016 and 2019,” Contemporary Clinical Trials, 115 (2022), 106718. 10.1016/j.cct.2022.106718
- Stantcheva Stefanie, “Understanding Tax Policy: How Do People Reason?,” Quarterly Journal of Economics, 136 (2021), 2309–2369. 10.1093/qje/qjab033
- Stantcheva Stefanie, “How to Run Surveys: A Guide to Creating Your Own Identifying Variation and Revealing the Invisible,” NBER Working Paper no. 30527, 2022. 10.3386/w30527
- Steinberg Jecca R., Turner Brandon E., Weeks Brannon T., Magnani Christopher J., Wong Bonnie O., Rodriguez Fatima, Yee Lynn M., and Cullen Mark R., “Analysis of Female Enrollment and Participant Sex by Burden of Disease in US Clinical Trials between 2000 and 2020,” JAMA Network Open, 4 (2021), e2113749. 10.1001/jamanetworkopen.2021.13749
- Strauss Ronald P., Sengupta Sohini, Quinn Sandra C., Goeppinger Jean, Spaulding Cora, Kegeles Susan M., and Millett Greg, “The Role of Community Advisory Boards: Involving Communities in the Informed Consent Process,” American Journal of Public Health, 91 (2001), 1938–1943. 10.2105/AJPH.91.12.1938
- Tirrell Meg, and Miller Leanne, “Moderna Slows Coronavirus Vaccine Trial Enrollment to Ensure Minority Representation, CEO Says,” CNBC, September 4, 2020. https://www.cnbc.com/2020/09/04/moderna-slows-coronavirus-vaccine-trial-t-to-ensure-minority-representation-ceo-says.html.
- Turner Erick H., Cipriani Andrea, Furukawa Toshi A., Salanti Georgia, and de Vries Ymkje Anna, “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy: Updated Comparisons and Meta-analyses of Newer Versus Older Trials,” PLoS Medicine, 19 (2022), e1003886. 10.1371/journal.pmed.1003886
- Turner Erick H., Matthews Annette M., Linardatos Eftihia, Tell Robert A., and Rosenthal Robert, “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy,” New England Journal of Medicine, 358 (2008), 252–260. 10.1056/NEJMsa065779
- U.S. Census Bureau, “Census Bureau QuickFacts,” 2021. https://www.census.gov/quickfacts/fact/table/US/PST045221.
- U.S. Government Accountability Office, “Prescription Drugs: Medicare Spending on Drugs with Direct-to-Consumer Advertising,” 2021. https://www.gao.gov/products/gao-21-380.
- Wailoo Keith A., How Cancer Crossed the Color Line (New York: Oxford University Press, 2011).
- Wang Junling, Zuckerman Ilene H., Miller Nancy A., Shaya Fadia T., Noel Jason M., and Mullins C. Daniel, “Utilizing New Prescription Drugs: Disparities among Non-Hispanic Whites, Non-Hispanic Blacks, and Hispanic Whites,” Health Services Research, 42 (2007), 1499–1519. 10.1111/j.1475-6773.2006.00682.x
- Welsh John, Lu Yuan, Dhruva Sanket S., Bikdeli Behnood, Desai Nihar R., Benchetrit Liliya, Zimmerman Chloe O., Mu Lin, Ross Joseph S., and Krumholz Harlan M., “Age of Data at the Time of Publication of Contemporary Clinical Trials,” JAMA Network Open, 1 (2018), e181065. 10.1001/jamanetworkopen.2018.1065
- Wexler Deborah, “Initial Management of Hyperglycemia in Adults with Type 2 Diabetes Mellitus,” in UpToDate, Post Theodore, ed. (Netherlands: Wolters Kluwer, 2022).
- Williams Jhacova, “Historical Lynchings and the Contemporary Voting Behaviors of Blacks,” American Economic Journal: Applied Economics, 14 (2022), 224–253. 10.1257/app.20190549
Data Availability Statement
The data underlying this article are available in the Harvard Dataverse, https://doi.org/10.7910/DVN/VB5MDJ (Alsan et al. 2023).