Abstract
Decades of research have demonstrated that a variety of cognitive biases can affect our judgment and ability to make rational decisions in personal and professional environments. The lengthy, risky, and costly nature of pharmaceutical research and development (R&D) makes it vulnerable to biased decision‐making. Moreover, cognitive biases can play a role in regulatory and clinical decision‐making, the latter impacting diagnostic and treatment decisions in the therapeutic use of medicines. These inherent and/or institutionalized biases (e.g., in assumptions, data, or decision‐making practices) could conceivably contribute to health inequities. In this mini‐review, we provide a broad perspective on how cognitive biases can affect pharmaceutical R&D, regulatory evaluation, and therapeutic decision‐making. Example approaches to mitigate the effect of common biases in the development, approval, and use of new therapeutics, such as quantitative decision criteria, multidisciplinary reviews, regulatory and treatment guidelines, and evidence‐based clinical decision support systems, are illustrated. Mitigating the impact of cognitive biases could increase pharma R&D efficiency, change the perspective on and prioritization of unmet medical needs, and increase the representativeness and quality of evidence generated through clinical trials and real‐world research, leading to higher‐quality insights and more effective medication use, and could thus eventually contribute to more equitable healthcare.
INTRODUCTION
Decades of research have demonstrated that a variety of cognitive biases (Figure 1) can affect our judgment and ability to make rational decisions in personal and professional environments. 1 , 2 , 3 , 4 Inherent and/or institutionalized biases (e.g., in assumptions, data, or decision‐making practices) could conceivably contribute to health inequities. In this mini‐review, based partly on a session held at the 2023 Annual Meeting of the American Society for Clinical Pharmacology and Therapeutics entitled: “How Could Debunking Biases in R&D Decisions Lead to More Equitable Healthcare?”, we provide a broad perspective on how cognitive biases can affect pharmaceutical research and development (R&D), regulatory evaluation, and therapeutic decision‐making. Some approaches to mitigate the effect of common biases in the development, approval, and utilization of new therapeutics will be highlighted along with how their use could enable more equitable healthcare.
MANIFESTATIONS AND MITIGATION OF BIASES IN PHARMACEUTICAL R&D
The lengthy, risky, and costly nature of the pharmaceutical R&D process makes it particularly vulnerable to biased decision‐making. 5 , 6 , 7 Numerous decisions are necessary over the 10+ years typically needed for a novel drug to transition from discovery through development and regulatory approval into therapeutic use. Most new drug candidates fail at some point along this path, adding to the challenge of deciding which candidates to progress to the next stage and which ones to discontinue, while considering the risks and uncertainties at each decision point. Table 1 provides an overview of how common biases show up across the pharmaceutical R&D continuum. It is important to recognize that these biases hardly ever occur in isolation when R&D decisions are made. Instead, multiple biases can impact a single decision. The results of a survey (Appendix S1) that we conducted prior to the 2023 ASCPT annual meeting showed that R&D practitioners and decision‐makers recognize and observe biases in their professional setting and are prone to making decisions differently based on how information is presented (framing bias), as summarized in Appendix S2.
TABLE 1.
| Bias | Bias type | Description of bias | Specific examples from pharma R&D | Mitigation measure (see Table 2 for further explanation) |
|---|---|---|---|---|
| Sunk‐cost fallacy | Stability biases | Paying attention to historical costs that are not recoverable when considering future courses of action | A drug development program in pharma R&D generally spans many years and is very costly. This may lead to a tendency to continue a project despite underwhelming results, based on the argument that all the money and time invested would be lost if the project were stopped now, instead of deciding based on the probability of (future) success. See also Appendix S3 on the sunk‐cost fallacy | |
| Anchoring and insufficient adjustment | Stability biases | Rooting oneself to an initial value, leading to insufficient adjustments of subsequent estimates | Overestimating the probability that a phase II trial result will be replicated in phase III by anchoring on the observed mean result in phase II without sufficient adjustment for uncertainty in the observed mean effect. This may contribute to the high failure rate observed in phase III | |
| Loss aversion | Stability biases | The tendency to feel losses more acutely than gains of the same amount, making us more risk averse than a rational calculation would suggest | Tendency to advance a pharma R&D project despite a low probability of success because the perceived loss from terminating the project generally outweighs the perceived gain from investing the freed‐up resources in another project | |
| Status quo bias | Stability biases | Preference for the status quo in the absence of pressure to change it | Allocation of budget is driven by historical precedent rather than business needs. For example: "We have grown 3% every year, let's make that the planning assumption", or "Oncology always gets about 30% of the R&D budget" | |
| Excessive optimism | Action‐oriented biases | The tendency for people to be overoptimistic about the outcome of planned actions, to overestimate the likelihood of positive events, and to underestimate the likelihood of negative ones | Teams requesting approval for a drug development program from senior management often provide best‐case (optimistic) estimates of the development cost, risk, and timelines in the hope that the project will be supported over other competing projects. The net result is that projects with more optimistic estimates are more likely to be advanced, thus increasing the probability of missing cost targets and timelines. See also Appendix S4 on optimism bias | |
| Overconfidence | Action‐oriented biases | Overestimating our skill level relative to others', leading us to overestimate our ability to affect future outcomes, take credit for past outcomes, and neglect the role of chance | Because of the long R&D cycle in the pharmaceutical industry, few individuals have successfully completed a full drug development project. Those who have may overestimate the impact of their skills and strategy and apply them unchanged to the next project, without considering that the success of the previous drug relied on many factors beyond their skills and strategy | |
| Competitor neglect | Action‐oriented biases | The tendency to plan without factoring in competitive responses, as if one is playing tennis against a wall, not a live opponent | Project teams assuming they can be more creative and successful in developing drug candidates than competitors with similar drug candidates from the same pharmacologic class | |
| Misaligned individual incentives | Interest biases | Incentives for individuals in organizations to adopt views or to seek outcomes favorable to their unit or themselves, at the expense of the overall interest of the company. These self‐serving views are often held genuinely, not cynically | Committee members or senior management benefiting from advancing a compound (progress‐seeking) to the next phase because their bonuses depend on short‐term pipeline progression. In the long run, the quality of the pipeline could benefit from stopping less‐promising drugs earlier in the value chain | |
| Misaligned perception of corporate goals | Interest biases | Disagreements (often unspoken) about the hierarchy or relative weight of objectives pursued by the organization and about the tradeoffs between them | Too much focus on the short‐term health and quality of the pipeline instead of taking meaningful steps to advance long‐term goals, e.g., making early advances into novel therapeutic areas | |
| Inappropriate attachments | Interest biases | Emotional attachment of individuals to people or elements of the business (such as legacy products or brands), creating a misalignment of interests | Individuals are emotionally attached to their innovative ideas and projects and believe that obvious signs to stop can be overcome. In the context of the pharmaceutical pipeline, this can also lead to a "not invented here" mentality, where the quality bar differs for internal projects vs. those obtained through an external partner | |
| Confirmation bias | Pattern‐recognition biases | The overweighting of evidence consistent with a favored belief, underweighting of evidence against a favored belief, or failure to search impartially for evidence | When faced with a negative clinical trial and a positive clinical trial in phase II, selectively searching for reasons to discredit the negative trial (as a false negative) while more readily accepting the results of the positive trial. This may contribute to the surprisingly high failure rate observed in phase III | |
| Champion bias | Pattern‐recognition biases | The tendency to evaluate a plan or proposal based on the track record of the person presenting it, more than on the facts supporting it | Individuals with a success story (often having been involved in a successful pharma R&D project) are more likely to be listened to, while the role of chance or other reasons for their earlier successes is neglected | |
| Framing bias | Pattern‐recognition biases | The tendency to decide on options based on whether the options are presented with positive or negative connotations, such as a loss or as a gain | Teams presenting study results often emphasize positive outcomes while downplaying potential side effects, which may create a biased perception of the drug's overall benefit/risk profile. See also Appendix S2 for an illustration of susceptibility to framing bias | |
| Availability bias | Pattern‐recognition biases | Not sufficiently accounting for alternative views but rather relying on immediate examples that come to mind when evaluating a specific topic, concept, method, or decision | A physician relying on recent cases they have encountered rather than considering a broader range of clinical evidence | |
| Sunflower management | Social biases | Tendency for groups to align with the views of their leaders, whether expressed or assumed | In a group setting, (junior) team members are less likely to offer a different point of view after the senior leader of the team has expressed a particular opinion. Also, people may believe others know more: "Everybody seems to think this is a good investment, so I guess I should vote for it to not be the outsider" | |
| Groupthink | Social biases | Striving for consensus at the cost of a realistic appraisal of alternative courses of action | Project teams and decision‐making committees do not evaluate alternatives rigorously, especially when alternative perspectives are less favorable for a project | |
An example of a situation that is highly susceptible to biased decision‐making is the analysis of phase II clinical trial results to enable the go/no‐go decision on whether a phase III program should be initiated. By the time a phase II clinical trial is completed and analyzed, significant time and other resources have been invested, and the temptation is high to rationalize a "go" decision to initiate phase III partly because of the sunk costs. Succumbing to the sunk cost fallacy is particularly likely when the phase II trial results show some evidence of efficacy but less than necessary for differentiation from existing therapies (see also Appendix S3). Such borderline efficacy results of a phase II clinical trial could also lead to a confirmation bias scenario, where decision‐makers are more ready to accept results and interpretations that align with their favored views. When a phase II clinical trial results in an efficacy signal that is aligned with or exceeds the expectations of the project team, everyone is ready to accelerate initiation of a phase III trial without the delays that might accompany asking additional questions or conducting additional data analyses. Conversely, when a phase II clinical trial results in an efficacy outcome that is less than desired, the project team may be tempted to conduct more exploratory analyses to identify a possible subgroup of responders, or to find a reason to dismiss the results so as to keep the program going forward. Anchoring and insufficient adjustment tend to play a role when efficacy outcomes of a phase II clinical trial are interpreted. Project team and decision‐making committee members often anchor their expectations for phase III results on the observed mean efficacy result in the phase II trial, while making insufficient adjustments for the uncertainty in the observed mean (i.e., the relatively small phase II trial) and the extrapolation to the phase III setting (more diverse patients, longer trial duration, etc.). 8
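The anchoring effect on phase II‐to‐phase III decisions can be made concrete with a small numerical sketch. The code below is a hypothetical illustration: the effect size, standard errors, and significance threshold are assumed numbers, not taken from any real program. It contrasts the "anchored" probability of phase III success, which treats the observed phase II mean as the true effect, with an uncertainty‐adjusted probability (often called assurance) that averages over the imprecision of the phase II estimate:

```python
from statistics import NormalDist

norm = NormalDist()

# Hypothetical numbers: phase II estimates a standardized treatment
# effect of 0.30 with standard error 0.15; the larger phase III trial
# would estimate the effect with standard error 0.08, and "success"
# means a two-sided p < 0.05 (z > 1.96).
delta_hat, se2, se3, z_crit = 0.30, 0.15, 0.08, 1.96

# Anchored view: treat the observed phase II mean as the true effect
conditional_power = norm.cdf(delta_hat / se3 - z_crit)

# Adjusted view ("assurance"): average phase III power over the
# uncertainty in the phase II estimate (normal approximations)
assurance = norm.cdf((delta_hat - z_crit * se3) / (se3**2 + se2**2) ** 0.5)

print(f"anchored (conditional power): {conditional_power:.2f}")  # ~0.96
print(f"uncertainty-adjusted (assurance): {assurance:.2f}")      # ~0.80
```

The gap between the two numbers illustrates the point: anchoring on the observed phase II mean without adjusting for its uncertainty systematically overstates the probability of replication in phase III.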
In any area of decision‐making, it is important to educate the organization, decision‐makers, and team members about common cognitive biases, with the aim of promoting self‐awareness and recognition, encouraging critical thinking, and fostering a culture of objective decision‐making. As that may not be sufficient, additional mitigation measures may be necessary. As summarized in Table 1, several options exist to mitigate the impact of the specific biases observed across R&D, and several of them are applicable to the phase II/phase III go/no‐go decision. Table 2 further explains the most common mitigation strategies.
TABLE 2.
| Mitigation measure | Short description |
|---|---|
| Input from independent experts | Consultation with unaffiliated domain experts to provide unbiased insights, aiding objective decision‐making and identification of potential oversights |
| Multiple options | Consideration of various alternatives at each decision point to prevent anchoring, reduce the risk of groupthink, and encourage comprehensive evaluation |
| Prospectively setting (quantitative) decision criteria | Establishing predetermined agreements about decision‐making criteria, for example by setting explicit, numerical targets for project outcomes, to enable objective evaluation and diminish the influence of cognitive biases in decision‐making. For example, utilizing MIDD principles 9 to estimate the probability of success in phase III 8 |
| Diversity of thoughts | Ensuring a diverse group of decision‐makers to incorporate a range of perspectives, reducing the risk of groupthink and promoting balanced decision‐making. See the FDA Equal Voice Initiative as an example 11 |
| Planned leadership rotation | Implementing a scheduled change in leadership roles to infuse fresh perspectives, mitigate entrenched biases, and stimulate innovative thinking |
| Reference case forecasting | Creating a baseline scenario or projection that allows comparison of various options, anchoring to neutral anchors (such as industry benchmarks) or multiple anchors, and re‐anchoring when revisiting and adjusting initial assumptions as data and circumstances evolve |
| Pre‐mortem | Imagining, before the project begins, that the project has failed or experienced a negative outcome and analyzing potential reasons for that failure, allowing risks to be understood and action and mitigation plans to be created |
| Information exchange formats such as an evidence framework | Utilizing structured communication methods to ensure balanced information sharing, preventing dominance of certain viewpoints, and promoting comprehensive understanding. In particular, this may help to scrutinize predictive validity rather than relying on the availability of evidence 24 |
| Mandatory contradictory view | Inviting contradictory perspectives by seeking diverse opinions and assigning people to play devil's advocate in decision‐making processes. In a formalized setting, this could be a red team set up to critically review and challenge the assumptions of the decision‐maker |
| Confidential voting | Committee or team members vote without knowledge of how other members are voting, to mitigate against groupthink and champion bias |
Model‐informed drug development (MIDD 9 ) is a particularly suitable approach to mitigate the impact of some biases by integrating all pertinent prior evidence into a suitable quantitative framework (e.g., mathematical model) that can be used to inform decision‐making based on objective criteria. For example, developing a dose‐exposure‐response and/or PKPD model based on data from a phase II clinical trial (and ideally incorporating all relevant prior data from preclinical and other clinical trials) to simulate and predict the outcome of a phase III clinical trial can be a powerful tool to mitigate the impact of cognitive biases. Not only is a mathematical model immune to the sunk cost fallacy, but committing to developing and using the model to predict the phase III results before making the go/no‐go decision can also mitigate the impact of the confirmation bias, as the data will be scrutinized regardless of the outcome of the phase II trial. An MIDD approach also reduces potential anchoring and insufficient adjustment as mathematical models can account for variability and uncertainty to make more informed projections (e.g., probability of success in phase III) based on prior phase II trial results. In addition, model‐based meta‐analyses to assess the potential of the novel treatment to differentiate from the existing standard of care can be a powerful tool in the context of a go/no‐go decision after a phase II trial.
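As a minimal sketch of this simulation idea (with hypothetical effect sizes, sample size, and success criterion; a real MIDD model would embed dose‐exposure‐response relationships and all relevant prior data), a Monte Carlo estimate of the phase III probability of success that propagates phase II uncertainty might look like:

```python
import random

def simulate_phase3_pos(effect_mean=0.30, effect_sd=0.15,
                        outcome_sd=1.0, n_per_arm=300,
                        n_sims=20_000, z_crit=1.96):
    """Monte Carlo probability of phase III success, propagating the
    uncertainty in the phase II effect estimate (all inputs hypothetical)."""
    # Standard error of the phase III effect estimate (two equal arms)
    se3 = outcome_sd * (2 / n_per_arm) ** 0.5
    successes = 0
    for _ in range(n_sims):
        # Draw a plausible "true" effect from the phase II posterior
        true_effect = random.gauss(effect_mean, effect_sd)
        # Simulate the phase III effect estimate (normal approximation)
        observed = random.gauss(true_effect, se3)
        if observed / se3 > z_crit:  # trial "succeeds" at p < 0.05
            successes += 1
    return successes / n_sims

random.seed(42)  # reproducibility
pos = simulate_phase3_pos()
print(f"Estimated phase III probability of success: {pos:.2f}")
```

Because the quantitative criterion (e.g., "advance only if the estimated probability of success exceeds a pre‐agreed threshold") is fixed before the results are seen, the model's verdict is the same whether or not the team is emotionally invested in the program.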
Whenever mathematical models or MIDD approaches are used to support decision‐making in pharma R&D, it is essential to understand and transparently communicate the inherent assumptions in these models (e.g., phase II trial endpoints or inclusion criteria differing from those of phase III trials).
Mathematical models and MIDD approaches appear particularly well suited as mitigation measures in the era of artificial intelligence and machine learning. Incorporating existing internal and external knowledge about a disease and an asset at any stage in the pharmaceutical R&D value chain within a mathematical model (of varying complexity) could significantly enhance the quality of decision‐making. Given the intricate nature of human biology and the vast amount of healthcare data that has been and can now be generated in real time, it seems evident that decision‐making processes supported by artificial intelligence and machine learning could enhance the quality of decision‐making at each stage gate. Envision the impact that an accurate representation of the probability of success of an asset, compared with that of competitors, could have on consistent go/no‐go decision‐making throughout the pharmaceutical R&D value chain. However, it remains uncertain whether every aspect of the complexity of human biology can be fully captured in a mathematical model, despite the advancements in artificial intelligence and machine learning. Additionally, the biases introduced by humans during data capture and model assembly must themselves be considered.
MANIFESTATIONS AND MITIGATION OF BIASES IN REGULATORY DECISION‐MAKING
Regulatory bodies are charged with determining whether an investigational new drug should be approved for population use. These public health decisions are ideally made in a fashion that is data‐driven and balances uncertainties against unmet medical needs. These decisions are based on the available empirical clinical and nonclinical evidence but also on mechanistic reasoning and value judgments in terms of benefits to patients and their caregivers. Unmet medical needs, however, can be a challenging dimension to decision‐making in so far as different stakeholders (regulators, clinicians, patients, caregivers) may have different perspectives on what constitutes an unmet need and a positive benefit/risk balance. Therefore, while regulators may keep the data at the forefront of their decisions, it is conceivable that an anticipated reaction from any of several stakeholders could inform a regulatory decision, even if subconsciously. The decision‐making process, then, must be evidence‐based and robust.
Because regulatory decision‐making is evidence‐based, any biases inherent in the data evaluated or the processes by which they are evaluated could theoretically enter the decision. Furthermore, because evidence assessment is done in an interdisciplinary, often hierarchical team environment, cognitive biases might arise precisely because of the social nature of the interdisciplinary team. As such, the most likely biases we might see in regulatory decision‐making arguably include confirmation bias, champion bias, and groupthink.
For example, in a situation of high unmet medical need (e.g., a rare fatal disease for which there is no currently approved therapy), one might be eager to approve a promising new treatment. Because of the practical challenges in generating robust evidence of effectiveness (i.e., due to the small denominator of patients with the disease, recruitment challenges, heterogeneous clinical end points, and variability in disease progression), one might be more inclined to weight nonclinical data, anecdotal patient experience data, or even mechanistic rationale more heavily than clinical data generated from a "pivotal" efficacy trial (confirmation bias). In terms of champion bias, there may be a tendency to evaluate a proposal based on the track record of the person presenting it. In these situations, the voice with the most perceived credibility (e.g., based on the most regulatory experience) might be given more weight in the decision‐making process. In terms of groupthink, when teams are structured hierarchically within a discipline or within the interdisciplinary team, an individual's or team's thinking could subconsciously be influenced by their desire to align with the expressed or assumed position of their senior leaders. The impact of these biases can be mitigated through process 10 and policy standardization. Good governance practices should ensure inclusivity (all relevant scientific expertise is brought to bear in the decision‐making process), clarity on the decision‐making model (e.g., alignment vs. consensus), and transparency (professional opinions are expected to be fully expressed, considered, and documented along with the final decisions). To ensure that the full range of views is considered in the evaluation of therapeutic products, CDER established the Equal Voice Initiative (EVI). 11 EVI was both a philosophy and a set of practices designed to ensure: (1) inclusivity (all relevant scientific expertise would be brought to bear in the decision‐making process); (2) alignment on a given decision (if not consensus); (3) appropriate conflict avoidance or resolution should professional differences of opinion arise; and (4) transparency (professional opinions would be expected to not only be fully expressed and considered, but also documented along with the final decisions). In addition, the Office of Clinical Pharmacology (OCP) has established two key internal milestone meetings, scoping meetings and briefing meetings, during the review of new molecular entity NDAs and original BLAs. The purpose of these meetings is to lay out the key questions for regulatory evaluation and the strategy to approach those questions at the outset of the review process and prior to the final decision, to ensure the review team's recommendations are clinically relevant, pragmatic, and consistent with current or emerging policy and past precedent. Having this added involvement at key junctures in the decision‐making process allows for consistent, science‐based decision‐making that is more resistant to bias. In terms of policy, regulatory guidance can set clear expectations around important scientific and regulatory areas. Consistent application of such guidances, while preserving the ability to apply scientific judgment and flexibility to accommodate progressive insights from technological advancements, can therefore minimize, but may not eliminate, the impact of biases.
MANIFESTATIONS AND MITIGATION OF BIASES IN THERAPEUTIC USE
Cognitive biases can also play a significant role in the therapeutic use of medicines and are well documented to impact both diagnostic and treatment decisions. 12 , 13 , 14 , 15 For example, availability bias may play a role when healthcare professionals rely on readily available information when making treatment decisions; a clinician is then more likely to prescribe a drug that is commonly used rather than a lesser‐known but possibly more effective alternative. Faced with substantial published evidence that is often difficult to readily synthesize, clinicians may fall prone to confirmation bias, falling back on their current beliefs and neglecting evidence that contradicts them (e.g., not considering alternative diagnoses once an initial diagnosis has been established, despite contradicting evidence). Other types of biases, such as the affect bias, also known as the emotional bias, can have a significant impact on treatment decisions. This bias refers to the influence of emotions and personal feelings on decision‐making. For a healthcare professional, this could show up as labeling a patient as, for example, a "complainer" or "noncompliant" and letting these perceptions influence patient care. If the representativeness bias (over‐relying on stereotypes when diagnosing or prescribing treatment) is added to this, it may lead to racial biases such as those observed in pain assessment and treatment recommendations. 16
In terms of mitigation, a range of opportunities exists. As for any decision‐making area, education and awareness regarding the biases themselves, and continued professional development on the evolving diagnostic and treatment landscape, are important factors. In addition, the development and utilization of evidence‐based clinical decision support systems and robust treatment guidelines can reduce the impact of cognitive biases, although such systems are built on the assumption that the underlying evidence base is itself without bias. 17 Finally, the utilization of multidisciplinary teams to discuss and agree on treatment plans, as commonly used in cancer care, may reduce the prevalence of biases during diagnosis and treatment planning.
HOW DO BIASES IMPACT HEALTH EQUITY?
Biases across the healthcare ecosystem can significantly undermine the goal of ensuring that everyone has a fair opportunity to attain their optimal health, that is, health equity. Health equity is achieved when every person can attain their full health potential and no one is disadvantaged from achieving this potential because of social position or other socially determined circumstances. In contrast, health inequities are reflected in differences in length of life; quality of life; rates of disease, disability, and death; severity of disease; and access to treatment. Social determinants, cultural issues, and economic disparities are important determining factors of health inequities. 18 , 19 In addition, the impact of cognitive biases on health equity can be observed in several ways, most notably:
Understanding and prioritizing unmet medical needs to steer biomedical and pharmaceutical R&D efforts;
Evidence generation including the design and conduct of clinical trials;
Interpretation of experimental data in the context of human health and disease followed by decisions like go/no‐go or regulatory approval;
Defining treatment guidelines and pathways based on available evidence for a given disease;
Planning individual therapy approaches for patients, accounting for both clinical evidence and the patient's health data.
As an example, companies and researchers often choose their areas of scientific exploration based on factors like relevance for patient health, ability to address known unmet medical needs, and the scientific rationale for a given direction of research. Recency bias can compel scientists (including R&D decision‐makers and grant reviewers) to believe that there is a stronger scientific and commercial rationale in areas where others have recently succeeded. That, combined with overconfidence in succeeding vis‐à‐vis competitors on a global scale, leads to pipeline herding in certain areas, while other drug targets or therapeutic areas with significant unmet medical needs go under‐researched or even neglected. 20 Biases are also embedded in the very data on which we base our decisions (data biases). Data biases can result from past global research trends (e.g., historically low research funding and intensity in areas like women's health has led to fewer scientific advances, which in turn makes it harder to originate and validate new therapeutic hypotheses today than in areas like oncology), from flawed evidence generation (e.g., clinical trials having historically been done in very skewed populations, which now leads to difficulty in truly understanding the unmet needs left by existing therapies in many sub‐populations), and from biased entries in electronic health records (which may lead to biases in clinical decision support tools 17 ). Data biases can also be seen in diseases that historically were not officially recognized as such (e.g., obesity not being recognized as a disease until recently), diseases that are notoriously difficult to diagnose or that get masked by comorbidities, and cases where our disease classifications might treat several diseases with different root causes and epidemiology but similar manifestations as one indication.
Moreover, delays in seeking treatment, leading to increased mortality, are documented for women experiencing a myocardial infarction due to (among other reasons) their symptoms being less well known than those typically observed in men. 12
A broadening awareness around health equity has led to the introduction of several frameworks and guidelines to support more equitable practices in pharmaceutical R&D and medical practice 21 and to initiatives aimed at counteracting existing inequalities (e.g., mechanisms that enhance trial diversity by enlisting community‐based organizations or by harnessing large‐scale population analytics). An example is the regulatory guidance on diversity in clinical trials, 22 which is intended to promote the inclusion of underrepresented populations in line with epidemiological data. This guidance may, for example, mitigate data bias by steering data collection toward historically understudied areas, acknowledging that this is only one of the levers to address the complex origins of health disparities. 19
PATH FORWARD AND CLOSING REMARKS
Prevalent cognitive biases impact decision‐making in pharmaceutical R&D, regulatory assessments, and therapeutic use. Being aware of biases is an important first step toward mitigating them but is by no means sufficient. Several mitigation measures can be applied to reduce the impact of biases on decision‐making (Table 2). For a pharmaceutical R&D organization, having a clear line of sight into the portfolio composition, a well‐defined decision‐making process, clear quantitative and qualitative criteria for assets, and a deliberate strategy around cognitive bias mitigation can improve the quality of R&D portfolio decisions. 7 Mitigating the impact of these biases could increase pharma R&D efficiency, consequently reduce the (opportunity) costs associated with developing novel therapeutics, and eventually lead to more affordable and equitable healthcare. For healthcare providers and regulators, establishing decision support systems, multidisciplinary review boards, and guidelines or nudges to promote best‐practice and bias‐minimizing behaviors can help mitigate many of the common biases and help health systems progress toward more equitable decisions.
FUNDING INFORMATION
No funding was received for this work.
CONFLICT OF INTEREST STATEMENT
The authors declared no competing interests in this work.
ACKNOWLEDGMENTS
The authors acknowledge and thank Katarzyna Smietana for her valuable and stimulating insights during our collaborative discussions and in particular for her significant contributions to the ASCPT session and this paper.
Weber B, Zineh I, Lalonde R, Visser SAG. How debunking biases in research and development decisions could lead to more equitable healthcare? Clin Transl Sci. 2024;17:e13880. doi: 10.1111/cts.13880
The views and opinions presented here represent those of the authors and do not necessarily represent those of the U.S. Food and Drug Administration.
REFERENCES
- 1. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124‐1131. doi: 10.1126/science.185.4157.1124
- 2. Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981;211(4481):453‐458. doi: 10.1126/science.7455683
- 3. Thaler RH, Sunstein CR. Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press; 2008.
- 4. Kahneman D. Thinking, Fast and Slow. Penguin Books; 2011.
- 5. Lovallo D, Kahneman D. Delusions of success. How optimism undermines executives' decisions. Harv Bus Rev. 2003;81(7):56‐63, 117.
- 6. Truebel H, Seidler M. Mitigating bias in pharmaceutical R&D decision‐making. Nat Rev Drug Discov. 2022;21(12):874‐875. doi: 10.1038/d41573-022-00157-4
- 7. Bieske L, Zinner M, Dahlhausen F, Trübel H. Trends, challenges, and success factors in pharmaceutical portfolio management: cognitive biases in decision‐making and their mitigating measures. Drug Discov Today. 2023;28:103734. doi: 10.1016/j.drudis.2023.103734
- 8. Lalonde RL, Peck CC. Probability of success: a crucial concept to inform decision making in pharmaceutical research and development. Clin Pharmacol Ther. 2022;111(5):1001‐1003. doi: 10.1002/cpt.2513
- 9. Marshall SF, Burghaus R, Cosson V, et al. Good practices in model‐informed drug discovery and development: practice, application, and documentation. CPT Pharmacometrics Syst Pharmacol. 2016;5(3):93‐122. doi: 10.1002/psp4.12049
- 10. Good Review Practices: Clinical Pharmacology Review of New Molecular Entity (NME) New Drug Applications (NDAs) and Original Biologics License Applications (BLAs) (fda.gov). https://www.fda.gov/media/71709/download. Accessed June 11, 2024.
- 11. Equal voice: collaboration and regulatory and policy decision‐making in CDER (fda.gov). https://www.fda.gov/media/157807/download. Accessed January 27, 2024.
- 12. Gopal DP, Chetty U, O'Donnell P, Gajria C, Blackadder‐Weinstein J. Implicit bias in healthcare: clinical practice, research and decision making. Future Healthc J. 2021;8(1):40‐48. doi: 10.7861/fhj.2020-0233
- 13. Klein JG. Five pitfalls in decisions about diagnosis and prescribing. BMJ. 2005;330(7494):781‐783. doi: 10.1136/bmj.330.7494.781
- 14. Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak. 2016;16:138.
- 15. Li P, Cheng ZY, Liu GL. Availability bias causes misdiagnoses by physicians: direct evidence from a randomized controlled trial. Intern Med. 2020;59(24):3141‐3146. doi: 10.2169/internalmedicine.4664-20
- 16. Hoffman KM, Trawalter S, Axt JR, Oliver MN. Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites. Proc Natl Acad Sci USA. 2016;113(16):4296‐4301. doi: 10.1073/pnas.1516047113
- 17. Kostopoulou O, Tracey C, Delaney BC. Can decision support combat incompleteness and bias in routine primary care data? J Am Med Inform Assoc. 2021;28(7):1461‐1467. doi: 10.1093/jamia/ocab025
- 18. Martínez‐García M, Villegas Camacho JM, Hernández‐Lemus E. Connections and biases in health equity and culture research: a semantic network analysis. Front Public Health. 2022;10:834172. doi: 10.3389/fpubh.2022.834172
- 19. Golden SH. Disruptive innovations to achieve health equity through healthcare and research transformation. Clin Pharmacol Ther. 2023;113(3):500‐508. doi: 10.1002/cpt.2812
- 20. Fougner C, Cannon J, The L, Smith JF, Leclerc O. Herding in the drug development pipeline. Nat Rev Drug Discov. 2023;22(8):617‐618. doi: 10.1038/d41573-023-00063-3
- 21. Cary MP Jr, Zink A, Wei S, et al. Mitigating racial and ethnic bias and advancing health equity in clinical algorithms: a scoping review. Health Aff (Millwood). 2023;42(10):1359‐1368. doi: 10.1377/hlthaff.2023.00553
- 22. FDA draft guidance: Diversity Plans to Improve Enrollment of Participants From Underrepresented Racial and Ethnic Populations in Clinical Trials; Draft Guidance for Industry; Availability. https://www.fda.gov/regulatory‐information/search‐fda‐guidance‐documents/diversity‐plans‐improve‐enrollment‐participants‐underrepresented‐racial‐and‐ethnic‐populations. Accessed January 27, 2024.
- 23. Lovallo D, Sibony O. The case for behavioral strategy. McKinsey. https://www.mckinsey.com/capabilities/strategy‐and‐corporate‐finance/our‐insights/the‐case‐for‐behavioral‐strategy. Accessed January 27, 2024.
- 24. Scannell JW, Bosley J, Hickman JA, et al. Predictive validity in drug discovery: what it is, why it matters and how to improve it. Nat Rev Drug Discov. 2022;21(12):915‐931. doi: 10.1038/s41573-022-00552-x