Every patient is constituted by a multitude of often hidden factors that predict their mortality and morbidity across a range of scenarios. Identifying and making sense of these hidden factors is the primary challenge and raison d’être of estimating personalized risk, which in turn serves a critical function in personalized or ‘precision’ medicine more generally. Precision medicine is a dynamic area encompassing a diverse range of approaches that allow the targeting of new treatments, screening programs or preventive healthcare strategies on the basis of clinical or biological markers or complex tests driven by increasingly powerful computer algorithms [1]. In the context of medical decision making, estimating personalized risk is useful and meaningful primarily insofar as it moves patients and clinicians closer to a decision that will bring about a desired outcome. Though other more philosophical or even ethical rationales exist for incorporating personalized risk estimation into clinical decision making (e.g., patients’ ‘right to know’) [2], we focus here on the utility of personalized risk for promoting quality in the process and outcomes of clinical decision-making.
The International Patient Decision Aid Standards (IPDAS) define decision-making quality as the extent to which a patient recognizes that a decision needs to be made; feels informed about the options and their implications; feels clear about what matters most to him/her in this decision; has been given the opportunity to discuss goals, concerns and preferences with their healthcare providers; and is involved in decision making to the extent he/she desires [3]. The quality of the decision-making process can be distinguished from the quality of the decision itself, which is evaluated by how consistent the decision is with the patient’s informed values. With these definitions in mind, we discuss whether and why we should expect personalized risk information to enhance quality decisions and decision-making. We also highlight key ethical challenges for healthcare professionals to address when incorporating personalized risk in decision support, particularly following goals articulated by the National Institutes of Health’s Precision Medicine Initiative [4].
In line with IPDAS quality criteria, patient decision aids contain information about risks and benefits that is typically generalized rather than personalized, based on statistical derivations (e.g., logistic regression) from national or international databases. While the generalized nature of these risk statistics is intended to help patients better understand the likelihood of a clinical event occurring among a relevant patient population, it does little to increase their understanding of how relevant this statistic is for them personally. For example, a decision aid might inform a patient that 1 in 10 patients who undergo a given surgical treatment will experience postsurgery bleeding within the first 6 months. Upon receiving this information, patients are likely to wonder: will I be that 1 in 10?
Personalized medicine promises to improve our answers to such questions. Modern predictive models, particularly machine learning models (also referred to as “data-driven artificial intelligence”), offer benefits over more conventional statistical models in that they have the potential to more precisely identify which outcomes are most likely for a particular patient on the basis of a wide variety of clinical and demographic features. These features are not really hidden; rather, it is their complex, causal connections that are often hidden. Traditional risk prediction models are not adept at discovering these connections, primarily because they are usually based on a small set of clinical variables and employ rigid, rule-based programs derived from hypotheses rather than from repeated observation and pattern discrimination (learning). These technical limitations render them less capable of identifying and quantifying the variable interrelationships that critically influence outcomes, which may in turn limit the clinical utility of conventional risk models in practice. By comparison, machine learning algorithms examine both direct and indirect relationships between variables over a vast number of iterations, thereby generating predictions with greater precision and accuracy, even for datasets riddled with missing values (a common problem with clinical data). The multifactorial and ‘implicit’ nature of these predictions is said to render them most similar to human decision-making. The goal is for these algorithms to mimic our own human logic but with far greater precision and certainty.
When applied to real-world decisions, however, greater statistical certainty and precision do not always translate into greater decisional certainty. The reasons for this are twofold. First, personalized risk information may be perceived as more complex and less intuitive than generalized risk information, despite offering increased precision and relevance for a patient. The increased complexity comes from the capacity of personalized models to identify a person’s risk for experiencing a range of adverse events (e.g., postoperative infection, bleeding, stroke, death), each with a certain probability. Given ample evidence that individuals commonly have difficulties understanding even basic probabilities (an enduring challenge in risk communication) [5], it is unlikely that patients can adeptly and meaningfully integrate the significance of multiple probabilities simultaneously. This limitation can undermine quality decision-making by reducing patients’ understanding and sense of being informed. These impacts may be further exacerbated by the challenge of assigning and weighing the relative value of outcomes that each carry different probabilities, making it more difficult for patients to arrive at a decision congruent with their values. These challenges to understanding and ‘meaning-making’ in the context of one’s values can lead patients to rely more heavily on their physicians to ‘make sense’ of their prognosis, potentially compromising their ability to be as involved in decision-making as they might prefer.
To appreciate how challenging it can be to understand the decisional implications of personalized risk results, consider that a person contemplating an invasive treatment for heart failure might be told that they have a 76% likelihood of getting a local infection after surgery and a 57% likelihood of experiencing bleeding requiring further surgery. Similarly, they are told they have a 35% likelihood of experiencing respiratory failure and a 20% likelihood of having a stroke. They have a 96% likelihood of surviving to 6 months after surgery and a 65% chance of surviving to 2 years. The patient is told that these estimates are accurate between 65 and 75% of the time. When patients calculate cumulative probability themselves, they have been found to overestimate [6] or underestimate [7] their risk, and to be more likely to perceive each hazard independently than to effectively integrate various probabilities into a coherent notion of ‘overall’ risk versus benefit [6].
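The arithmetic that patients are implicitly being asked to perform here can be made concrete. The sketch below is a toy illustration only: it uses the hypothetical figures from this example and assumes the adverse events are statistically independent, which is unlikely to hold clinically, but it shows how quickly separate probabilities compound into an ‘overall’ risk that no single quoted number conveys.

```python
# Toy illustration using the hypothetical risk figures from the example above.
# Assumption (clinically unrealistic): the adverse events are independent.
risks = {
    "local infection": 0.76,
    "bleeding requiring further surgery": 0.57,
    "respiratory failure": 0.35,
    "stroke": 0.20,
}

# Probability of escaping every adverse event is the product of (1 - p) terms.
p_no_event = 1.0
for p in risks.values():
    p_no_event *= 1.0 - p

p_at_least_one = 1.0 - p_no_event
print(f"P(at least one adverse event) = {p_at_least_one:.2f}")
```

Even under this simplistic independence assumption, the chance of experiencing at least one of the four complications is far higher than any single quoted figure; real-world dependence between events (e.g., bleeding and reoperation) makes the mental integration task harder still.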
This can lead patients who are confronted with personalized risk information (which is often more precise and complex) to feel quite confused. Beyond the compelling likelihood of surviving to at least 2 years postimplant versus the alternative of potentially facing earlier death, patients might want to know whether living those 2 years will be ‘worth it’, given the possibility of spending them in and out of hospital or with significant physical or psychosocial suffering. Without the capacity to integrate disparate likelihoods into an overall sense of whether an intervention is worthwhile, the patient faces no less of a guessing game than the 1 in 10 question above. In fact, it may be worse: knowing one’s personalized risk for a range of clinical outcomes may simply confuse a patient to the point that they defer the clinical decision to their physician. Receiving information that encourages such a deferral of ‘sense making’ from patient to physician risks replacing the last two decades of advancements in patient-centered decision-making with a reversion to the outdated paternalistic model that ‘doctor knows best’.
Another negative reaction to the complexity of personalized risk information might include ‘choice paralysis’, a phenomenon whereby a decision that is perceived as too complicated is put off due to fear that one might make a wrong decision. An illustrative empirical example comes from a study of parents undergoing embryo selection on the basis of polygenic embryo screening (PES) [8], which tests simultaneously for multiple common polygenic diseases. In contrast to preimplantation genetic testing, which is aimed at identifying and avoiding implantation of embryos harboring aneuploidies or clearly defined monogenic disease-causing alleles, PES can be used to examine the contribution of thousands of tiny allelic effects on complex traits, generating complex sets of trait-specific probabilities that may be difficult for parents to integrate into decisions about whether or not to implant a given embryo. Such a decision necessitates a complex assessment of the relative probability and value of each risk point.
In their illuminating characterization of this challenge, Lázaro-Muñoz et al. [9] explained that one embryo may have a 30% risk of Type 2 diabetes but minimal risk for Alzheimer’s disease, while another may have only a 3% risk of Type 2 diabetes but a 20% chance of Alzheimer’s disease later in life. The increasing number of polygenic conditions added to this screening generates an increasing number of disease risk combinations that must be balanced by decision-makers, with great potential to generate choice overload. Indeed, the first PES case reports [8] reveal that a couple receiving information about elevated risk in two of their five embryos decided against implanting any of the five embryos. This finding offers supportive evidence that perceived overcomplexity of personalized risk information intended to help discriminate among choice options can instead lead to negative reactions that reduce decision-making quality. If personalized risk is intended to promote rather than detract from quality decision-making, urgent research is needed into how best to communicate the weight of personalized information about potential outcomes as a function of the relationship between their likelihood and value.
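Why such multi-condition profiles resist easy comparison can be sketched explicitly. In the toy example below, the risk figures echo the illustrative numbers from Lázaro-Muñoz et al. [9], while the severity weights are entirely hypothetical placeholders (not clinical values); it collapses each embryo’s risk profile into a single weighted score to show that any such ranking hinges on subjective value judgments.

```python
# Hypothetical sketch: each embryo carries one risk estimate per screened
# condition, so decision-makers must compare multi-dimensional profiles.
# Risk figures echo the illustrative example in Lazaro-Munoz et al. [9].
embryo_risks = {
    "embryo A": {"type 2 diabetes": 0.30, "Alzheimer's disease": 0.01},
    "embryo B": {"type 2 diabetes": 0.03, "Alzheimer's disease": 0.20},
}

# One (contestable) way to collapse a profile: weight each condition by a
# subjective severity score and sum. These weights are placeholders only,
# not clinical recommendations.
severity_weights = {"type 2 diabetes": 1.0, "Alzheimer's disease": 2.0}

def weighted_burden(risks, weights):
    """Sum of probability x subjective severity across conditions."""
    return sum(p * weights[condition] for condition, p in risks.items())

for name, risks in embryo_risks.items():
    print(f"{name}: weighted burden = {weighted_burden(risks, severity_weights):.2f}")
```

With these arbitrary weights, embryo A carries the lower weighted burden; swapping the weights (treating diabetes as twice as severe as Alzheimer’s disease) reverses the ranking. The ordering depends entirely on subjective severity judgments, which is precisely the values-integration problem described above, and the number of such trade-offs grows with every condition added to the screen.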
A second reason why the greater precision and relevance of personalized risk estimates may not easily translate into greater decisional quality is that their purpose may not be clearly or appropriately framed in the decision-making context. While decision support is intended – as its name suggests – to help patients recognize there is a choice to be made and to consider factors that lead to quality decisions (and ideally, better clinical outcomes), some patients who use decision support tools nevertheless feel that no real choice exists. Such a perception significantly impacts the framing and subsequent reception of personalized risk information.
In our own research to develop a decision aid for patients considering a left ventricular assist device (LVAD) for advanced heart failure, we found that most patients understandably did not feel that choosing whether to receive the device (and live) or to decline the device (and die) constituted a real choice [10]. This finding is further supported by the low rates of patient candidates who decline an LVAD when offered one [11]. In our study of patient values related to decision-making about LVADs [10], we found that the desire to survive reigned supreme over other considerations (e.g., the desire to avoid frequent re-hospitalization). For the majority of LVAD candidates, the primary value attributed to “survival at all costs” frames all other choice options as invalid or undesirable. Given this framing, even very accurate and precise information about personalized outcomes is perceived as “unactionable” insofar as it could not or would not realistically be used to consider another choice option (i.e., decline the LVAD and face potential death). Risk information, especially personalized risk information, may be perceived as distressing rather than constructive in these scenarios, serving to increase worry or dread rather than to assist in a meaningful discrimination between treatment options [12].
An empirical example of this potential consequence can be seen in studies of women who completed genetic counseling and chromosomal microarray analysis, an invasive prenatal diagnostic test that can detect small deletions or duplications of genetic material that may go undetected by conventional chromosome analysis, including “variants of unknown significance” (VUS). VUSs are genetic variations that are associated with disease but have not been directly shown to cause it. They are notorious for exacerbating uncertainty and potential distress among individuals who learn that their (or their child’s) genome contains them [13]. From interviews with 23 women who received abnormal prenatal microarray results, Bernhardt et al. [14] found that some women said they would have declined chromosomal microarray testing, or declined to learn of the presence of VUSs, had they better understood the challenges that receiving such uncertain information would pose to decision making and how the information would impact their relationships. These findings were confirmed by Desai et al. [15], who found that parents whose children were diagnosed with a VUS reported higher decisional regret (an IPDAS indicator of poor decision quality) over agreeing to undergo genetic testing. Revealingly, while all of the women characterized the personalized risk information as distressing, only a small minority chose to terminate their pregnancies on the basis of their personalized estimates. In other words, the information affected them negatively psychologically and socially, but did not influence their decisions to move forward with their pregnancies. This may be because terminating pregnancy was not “on the table” as a valid choice option for many of these women, including those who openly questioned the utility of learning what they called “toxic knowledge.”
These examples caution healthcare providers to consider how receiving personalized risk information may inadvertently harm patients if it is presented in what they perceive to be a “nondecision” – that is, a decision lacking valid choice options. That personalized risk estimates are, by definition, specific to particular patients only heightens the likelihood that they will provoke distress, dread or a sense of futility in cases where that information cannot be effectively acted on. This raises ethical questions over whether and how to convey personalized risk in these contexts, and highlights the importance of healthcare providers’ roles in communicating personalized risk in ways that appropriately frame its utility and meaning.
We believe these ethical challenges should not automatically discourage communication of personalized risk even in scenarios where risk knowledge is nonactionable in the traditional sense – that is, where effective prevention measures or treatments are unavailable, or where the cost-benefit ratio of pursuing them renders them untenable. As we report elsewhere [16], “actionability” of personalized results may include the ability to pursue psychosocial or behavioral (rather than clinical) interventions that can positively influence the disease course or lessen its impacts on psychosocial well-being and quality of life. In other words, “actionability” may be construed to include ‘preparedness’. Individuals who know what to expect from an intervention may be more actively prepared to face potentially negative outcomes [17], and have been found to have lower hospital readmission rates. In the LVAD context, one study [18] found that when patients receiving an LVAD as “destination therapy” (their primary option for staying alive) met with palliative medicine specialists to discuss a preparedness plan tailored to their goals, they were better able than those without a preparedness plan to effectively handle a range of adverse events postsurgery. Promoting preparedness and realistic knowledge about outcomes was a primary rationale for developing our own decision aid for LVAD, which was shown to enhance knowledge about self-care and potential postimplant complications [19]. Studies of patients with cancer and other poor-prognosis conditions [20] provide further support for the potentially important but underappreciated role of preparedness in improving outcomes. Preparedness and related constructs like resilience have conceptual links to IPDAS indicators of quality decision-making, namely patient informedness and accurate forecasting, but may be distinct in important ways that are yet unexplored and unaccounted for by existing notions and measures of quality decision making. 
Further insights are needed into how healthcare providers may utilize personalized risk estimates not only in service of enhancing decision making quality but also of preparing patients and caregivers to proactively respond to adverse clinical events.
Key challenges for healthcare providers with respect to fostering the useful and ethical application of personalized risk information in decision support are thus to appropriately frame risk information by clarifying its accuracy, precision and intended utility for decision makers; and to meaningfully elucidate decisional implications by helping patients understand the information and integrate an outcome’s subjective value with its probability of occurring. Designers of decision support tools aiming to communicate personalized risk must carefully strategize this integration to avoid the information being perceived as ‘toxic knowledge’. We must not readily assume that ‘all knowledge is power’, and should provide patients and providers with better tools for integrating and weighing complex probabilities according to patients’ lifestyle and treatment goals. Further, the framing of this information’s intended purpose should be tailored to the decision-making context as each patient perceives it, which may vary from patient to patient, all else being equal. Achieving these targets should help providers to more effectively and ethically incorporate personalized risk information in quality patient decision support.
Financial & competing interests disclosure
This project was supported by grant number R01HS027784 from the Agency for Healthcare Research and Quality. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.
No writing assistance was utilized in the production of this manuscript.
References
Papers of special note have been highlighted as: • of interest
- 1.Faulkner E, Holtorf A-P, Walton S et al. Being precise about precision medicine: what should value frameworks incorporate to address precision medicine? A report of the Personalized Precision Medicine Special Interest Group. Value Health 23(5), 529–539 (2020). [DOI] [PubMed] [Google Scholar]
- 2.Middleton A, Morley KI, Bragin E et al. Attitudes of nearly 7000 health professionals, genomic researchers and publics toward the return of incidental results from sequencing research. Eur. J. Med. Genet 24(1), 21–29 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Sepucha KR, Borkhoff CM, Lally J et al. Establishing the effectiveness of patient decision aids: key constructs and measurement instruments. BMC Med. Inform. Decis. Mak 13(2), S12 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]; •Outlines key, evidence-based indicators of decision-making quality and provides a basis for evaluating the quality of a decision aid.
- 4.Sankar PL, Parker LS. The Precision Medicine Initiative’s All of Us Research Program: an agenda for research on its ethical, legal, and social issues. Genet. Med 19(7), 743–750 (2017). [DOI] [PubMed] [Google Scholar]
- 5.Trevena LJ, Zikmund-Fisher BJ, Edwards A et al. Presenting quantitative information about decision outcomes: a risk communication primer for patient decision aid developers. BMC Med. Inform. Decis. Mak 13(2), S7 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]; •Points out challenges for effectively communicating risk to patients in ways that facilitate their understanding and recommend a set of principles for improving the quality of risk communication in practice.
- 6.Knäuper B, Kornik R, Atkinson K, Guberman C, Aydin C. Motivation influences the underestimation of cumulative risk. Pers. Soc. Psychol. Bull 31(11), 1511–1523 (2005). [DOI] [PubMed] [Google Scholar]; •In their study of patient understandings of cumulative risk, the authors found that patients overestimate cumulative risk, and that risk understandings were mediated by motivation.
- 7.Fuller R, Dudley N, Blacktop J. Older people’s understanding of cumulative risks when provided with annual stroke risk information. Postgrad. Med. J 80(949), 677–678 (2004). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Treff NR, Eccles J, Lello L et al. Utility and first clinical application of screening embryos for polygenic disease risk reduction. Front. Endocrinol 10 (2019). https://www.frontiersin.org/articles/10.3389/fendo.2019.00845/full [DOI] [PMC free article] [PubMed] [Google Scholar]; •Presents the first case reports of polygenic embryo screening and reveals that some individuals who received results suggesting elevated risk among their viable embryos decided against implanting any of those embryos, due to uncertainties in how and whether to act upon cumulative personalized risk estimates.
- 9.Lázaro-Muñoz G, Pereira S, Carmi S, Lencz T. Screening embryos for polygenic conditions and traits: ethical considerations for an emerging technology. Genet. Med. doi: 10.1038/s41436-020-01019-3 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]; •Reviews ethical considerations for polygenic embryo screening, including the challenge of meaningfully interpreting and incorporating complex polygenic risk information into clinical and reproductive health decisions.
- 10.Blumenthal-Barby JS, Kostick KM, Delgado ED et al. Assessment of patients’ and caregivers’ informational and decisional needs for left ventricular assist device placement: implications for informed consent and shared decision-making. J. Heart Lung Transplant 34(9), 1182–1189 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Bruce CR, Kostick KM, Delgado ED et al. Reasons why eligible candidates decline left ventricular assist device placement. J. Card. Fail 21(10), 835–839 (2015). [DOI] [PubMed] [Google Scholar]
- 12.Crozier S, Robertson N, Dale M. The psychological impact of predictive genetic testing for Huntington’s disease: a systematic review of the literature. J. Genet. Counsel 24(1), 29–39 (2015). [DOI] [PubMed] [Google Scholar]
- 13.Hoffman-Andrews L. The known unknown: the challenges of genetic variants of uncertain significance in clinical practice. J. Law Biosci 4(3), 648–657 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Bernhardt BA, Soucier D, Hanson K, Savage MS, Jackson L, Wapner RJ. Women’s experiences receiving abnormal prenatal chromosomal microarray testing results. Genet. Med 15(2), 139–145 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Desai P, Haber H, Bulafka J et al. Impacts of variants of uncertain significance on parental perceptions of children after prenatal chromosome microarray testing. Prenat. Diagn 38(10), 740–747 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Kostick KM, Brannan C, Pereira S, Lázaro-Muñoz G. Psychiatric genetics researchers’ views on offering return of results to individual participants. Am. J. Med. Genet 180(8), 589–600 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]; •Reports on psychiatric genetic researchers’ perspectives on returning results to individual research participants, including how researchers might differently address results that are actionable, nonactionable, incidental or variants of unknown significance.
- 17.Martin LA, Finlayson SRG, Brooke BS. Patient preparation for transitions of surgical care: is failing to prepare surgical patients preparing them to fail? World J. Surg 41(6), 1447–1453 (2017). [DOI] [PubMed] [Google Scholar]
- 18.Swetz KM, Freeman MR, AbouEzzeddine OF et al. Palliative medicine consultation for preparedness planning in patients receiving left ventricular assist devices as destination therapy. Mayo Clin. Proc 86(6), 493–500 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]; •Finds that when patients receiving a left ventricular assist device as “destination therapy” (their primary option for staying alive) met with palliative medicine specialists to discuss a preparedness plan tailored to their goals, they were better able than those without a preparedness plan to effectively handle a range of adverse events postsurgery.
- 19.Kostick KM, Bruce CR, Minard CG et al. A multisite randomized controlled trial of a patient-centered ventricular assist device decision aid (VADDA Trial). J. Card. Fail 24(10), 661–671 (2018). [DOI] [PubMed] [Google Scholar]
- 20.Sun V, Kim JY, Raz DJ et al. Preparing cancer patients and family caregivers for lung surgery: development of a multimedia self-management intervention. J. Canc. Educ 33(3), 557–563 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
