Empathic Engagement with the COVID-19 Vaccine Hesitant in Private Facebook Groups: A Randomized Trial
OP-001 Decision Psychology and Shared Decision Making (DEC)
Lorien Abroms1, Donald Koban2, Nandita Krishnan2, Melissa Napolitano2, Samuel Simmens3, Brendan Caskey1, Tien Chin Wu1, David A. Broniatowski2
1Department of Prevention and Community Health, Milken Institute School of Public Health, The George Washington University, Washington, DC, USA
2Department of Engineering Management and Systems Engineering, School of Engineering and Applied Science, The George Washington University, Washington, DC, USA
3Department of Biostatistics and Bioinformatics, Milken Institute School of Public Health, The George Washington University, Washington, DC, USA
Purpose: To test the efficacy of moderated social media discussions about COVID-19 vaccines in private Facebook groups compared to referral to Facebook’s COVID-19 vaccine information center.
Methods: Unvaccinated Americans (N=476, mean age = 36.8 ± 9.7; 25.6% male; 80.4% white) were recruited using Amazon’s Mechanical Turk service between January and April of 2022 and randomized to intervention or control Facebook groups. In the intervention group (N=261), moderators posted twice per day for 4 weeks and engaged in empathic, relationship-building interactions and information sharing with group members about COVID-19 vaccination. In the control group (N=215), participants received a referral to Facebook’s COVID-19 Information Center. Follow-up surveys were conducted at 4 and 6 weeks post-enrollment. Group differences in rank-transformed scores, normalized to a standard deviation of 1, were tested using analysis of covariance controlling for baseline scores.
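The rank-based ANCOVA described above can be sketched as follows. This is an illustrative reconstruction with simulated data, not the study’s analysis code; the sample size, effect size, and variable names are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated stand-in for the trial: 0 = control, 1 = intervention.
n = 200
group = rng.integers(0, 2, n)
baseline = rng.normal(3.5, 1.0, n)
followup = baseline + 0.3 * group + rng.normal(0, 1.0, n)

def rank_normalize(x):
    """Rank-transform scores, then scale ranks to unit standard deviation."""
    ranks = stats.rankdata(x)
    return ranks / ranks.std(ddof=1)

y = rank_normalize(followup)
b = rank_normalize(baseline)

# ANCOVA: regress rank-normalized follow-up on group, adjusting for baseline.
X = np.column_stack([np.ones(n), group, b])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df = n - X.shape[1]
sigma2 = resid @ resid / df
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))
t_group = beta[1] / se[1]
p_group = 2 * stats.t.sf(abs(t_group), df)
print(f"adjusted group difference = {beta[1]:.2f} SD units, p = {p_group:.3g}")
```

Because the outcome is scaled to unit standard deviation, the group coefficient reads directly in SD units, matching how the abstract reports its effect estimates.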
Results: Retention rates were high (intervention = 90.8%, control = 96.2%). On average, participants in the intervention group engaged with content (e.g., commented, reacted) 11.7 ± 25.3 times per person over 4 weeks. At 4 weeks, 74.7% of participants in the intervention group were satisfied with the program and 72.8% found the messages informative.
At 4 weeks, 16 intervention and 7 control group members reported being vaccinated (RR = 1.9, 95% CI = 0.79 to 4.49, p = 0.08). Compared to the control group, general vaccine confidence (mean = 0.19, 95% CI = 0.06 to 0.33, p = 0.005), confidence specific to the COVID-19 vaccine (mean = 0.15, 95% CI = 0.01 to 0.29, p = 0.04), and responsibility for vaccination to others (mean = 0.18, 95% CI = 0.04 to 0.33, p = 0.02) increased significantly in the intervention group. At 6 weeks, intervention group members were significantly more likely to intend to encourage others to vaccinate for COVID-19 (mean = 0.24, 95% CI 0.07 to 0.41, p = 0.007). General vaccine confidence (mean = 0.18, 95% CI = 0.03 to 0.34, p = 0.02) and responsibility for vaccination to others (mean = 0.20, 95% CI = 0.05 to 0.36, p = 0.01) also improved.
Conclusions: Engaging with vaccine-hesitant individuals in private Facebook groups improved some COVID-19 vaccine related beliefs, especially those related to vaccine confidence and intentions to encourage others to vaccinate.
Keywords: Social media, vaccine confidence, randomized controlled trial
I Don’t Agree But It Makes Sense: Plausibility and Conspiracy Theories about Misinformation in Health and Other Domains
OP-002 Decision Psychology and Shared Decision Making (DEC)
Valerie F Reyna1, Sarah M Edelson1, David M N Garavito2, Jennifer Lee1, Michelle Cazorla1, Julia Fan1
1Cornell University
2U.S. Department of Veterans Affairs
Purpose: Misinformation plays a large role in many domains of health, such as decisions to vaccinate. According to fuzzy-trace theory, patients base decisions mainly on gist representations of the meaning of health information, which implies that plausibility is important in accepting misinformation, including conspiracy theories (Reyna, 2012, Vaccine). Therefore, we developed a plausibility scale in which respondents rated neither true/false nor agree/disagree but, rather, whether statements made sense or were farfetched.
Methods: We recruited samples of 467 college students and 777 community members who rated agreement (“strongly disagree” to “strongly agree” on a 1-7 scale) with items on validated conspiracy scales (Cronbach’s α = .95 and .94, respectively) and rated the plausibility of statements (e.g., about drug companies covering up evidence, or the origins of COVID-19 as a bio-weapon) from “completely farfetched” to “makes complete sense,” again on a 1-7 scale (Cronbach’s α = .89 and .90, respectively).
Results: Agreement with conspiracy theories (2.81 and 2.45, respectively) and plausibility ratings (3.32 and 3.17, respectively) were surprisingly high in both samples, with plausibility exceeding conspiratorial agreement as expected. Linear regressions showed that plausibility significantly predicted agreement with conspiracy theories (β = .59-.61, p < .001 and β = .69-.71, p < .001, respectively), beyond effects of demographics and political ideology or party affiliation.
Conclusions: This is the first application of a reliable plausibility scale to misinformation about health and other domains. The scale does not measure the retrieval of verbatim facts from memory whose accuracy can be discerned (or not). Instead, it measures whether certain generic ideas make sense or seem farfetched, revealing a troubling level of susceptibility to misinformation among college students and community members.
These findings have important implications for understanding widespread resistance to vaccination and other risk-reduction measures, which has arguably resulted in millions of preventable deaths. They also suggest that interventions must begin now to prevent future health emergencies: the seeds of misinformation have already been planted, even among educated people, and can sprout into conspiracy theories.
Effective interventions must not only help patients accumulate facts about health and medicine but should also ensure that the gist of those facts is understood so that plausible misinformation becomes implausible.
Keywords: vaccination, misinformation, conspiracy theories, COVID-19, fuzzy-trace theory, health literacy
The politicization of COVID-19 treatments among physicians and laypeople in the United States
OP-003 Decision Psychology and Shared Decision Making (DEC)
Joel Levin1, Leigh Bukowski2, Julia Minson3, Jeremy Kahn4
1Katz Graduate School of Business, University of Pittsburgh, Pittsburgh, United States
2Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, United States
3Kennedy School of Government, Harvard University, Cambridge, United States
4Department Health Policy and Management, University of Pittsburgh, Pittsburgh, United States
Purpose: Political ideology is associated with a variety of beliefs and decisions related to COVID-19. Throughout the pandemic, conservatives have been less likely than liberals to believe in the effectiveness of vaccination and more likely to believe in the effectiveness of treatments such as hydroxychloroquine and ivermectin. These beliefs appear to be driving important healthcare choices, including those made by physicians. Regions that vote Republican have consistently lagged Democratic-voting regions in vaccination rates and, as of the end of 2020, were prescribing hydroxychloroquine and ivermectin at several times the rate of Democratic-voting regions. However, neither (a) the relative roles of patient and physician preferences in driving these trends nor (b) whether physicians’ beliefs are similarly politically polarized is known.
Methods: We study these questions by combining longitudinal survey data from two novel sources: a national panel of 592 critical care physicians practicing in the United States and a demographically representative sample of 900 adults living in the United States. We collected data in three Phases between the spring of 2020 and the spring of 2022. In all three Phases, we asked physicians to make a series of hypothetical treatment decisions about a COVID-19 patient and collected a range of related beliefs. In Phase 3 we administered an adapted version of the physician survey to American adults and embedded an experiment in both surveys intended to discriminate between possible mechanisms underlying political differences.
Results and Conclusions:
With regard to vaccination, hydroxychloroquine, and ivermectin, the beliefs of both laypeople and physicians were meaningfully politicized across all Phases of data collection. Notable results include: (1) during the spring of 2020, conservative physicians were approximately three times as likely as liberal physicians to say that they would treat a hypothetical COVID-19 patient with hydroxychloroquine, (2) although hydroxychloroquine became less popular later in the pandemic, physician treatment choices remained similarly politicized throughout our survey period, (3) as of the spring of 2022, laypeople perceived hydroxychloroquine and ivermectin as being more effective and vaccination as being less effective relative to physicians, but beliefs about effectiveness were similarly politically polarized in both groups, and (4) experimental results suggest that these differences are driven by biased processing of information rather than differential exposure to information.
Keywords: covid-19, politics, vaccination, hydroxychloroquine, ivermectin, experimental
Figure 1.
The effect of political ideology on beliefs and preferences.
Notes: Values represent the estimated effect of a 1-point conservative shift in political ideology on a 7-point scale (very liberal to very conservative), expressed in standard deviations from the mean response of political moderates. All estimates are from OLS regression models. Circles are estimates from models without controls; triangles are estimates from models with controls. Controls are age, gender, education*, practice setting**, time spent in a clinical capacity**, base clinical specialty**, log population density (zip code level), census metropolitan classification (census tract level), and Republican vote share (2016, county level). Horizontal lines denote 95% confidence intervals. *US adults only **ICU physicians only.
Impact of prior COVID-19 infection on perceptions of the benefit and safety of COVID-19 vaccines
OP-004 Decision Psychology and Shared Decision Making (DEC)
Alistair Thorpe1, Andrea G Levy2, Laura D Scherer3, Aaron M Scherer4, Frank A Drews5, Jorie M Butler6, Angela Fagerlin7
1University of Utah, Intermountain Healthcare Department of Population Health Sciences, Salt Lake City, UT
2Middlesex Community College, Middletown, CT
3University of Colorado, School of Medicine, Aurora, CO; VA Denver Center for Innovation, Denver, CO
4University of Iowa, Department of Internal Medicine, Iowa City, IA
5University of Utah, College of Social and Behavioral Science, Salt Lake City, UT; Salt Lake City VA Informatics Decision-Enhancement and Analytic Sciences (IDEAS) Center for Innovation, Salt Lake City, UT
6University of Utah, Departments of Biomedical Informatics and Internal Medicine, Division of Geriatrics, Salt Lake City, UT; Salt Lake City VA Informatics Decision-Enhancement and Analytic Sciences (IDEAS) Center for Innovation, Salt Lake City, UT; Geriatrics Research, Education, and Clinical Center (GRECC), VA Salt Lake City Health Care System, Salt Lake City, UT
7University of Utah, Intermountain Healthcare Department of Population Health Sciences, Salt Lake City, UT; Salt Lake City VA Informatics Decision-Enhancement and Analytic Sciences (IDEAS) Center for Innovation, Salt Lake City, UT
Purpose: Perceptions about the benefit and safety of COVID-19 vaccines are important determinants of vaccine uptake. We examined whether people think COVID-19 vaccination is less beneficial or safe for someone who has already had COVID-19 versus for someone who has not. We also examined whether a person’s own COVID-19 infection history and vaccination status influence their perceptions of COVID-19 vaccine benefit and safety.
Methods: In December 2021, we conducted an online survey of US adults. First, respondents self-reported their COVID-19 infection history and vaccination status. Respondents who reported that they had experienced COVID-19 and received a COVID-19 vaccine (≥1 dose) were asked when they had COVID-19 relative to their vaccination. Next, all respondents reported their perceptions of the benefit and safety of COVID-19 vaccination for someone who had already had COVID-19 and for someone who had not.
We calculated mean differences with 95%CIs to assess COVID-19 vaccine benefit and safety perceptions within and between groups.
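The within-group comparison is a paired mean difference with a t-based 95% confidence interval; a minimal sketch with simulated ratings (the data and scale parameters here are invented, not the survey’s):

```python
import numpy as np
from scipy import stats

# Simulated paired ratings on the survey's 1-4 scale: each respondent rates
# vaccine benefit for someone who had COVID-19 and for someone who had not.
rng = np.random.default_rng(1)
had = np.clip(np.round(rng.normal(2.7, 1.2, 500)), 1, 4)
not_had = np.clip(np.round(rng.normal(2.9, 1.2, 500)), 1, 4)

def paired_mean_diff_ci(a, b, alpha=0.05):
    """Mean within-person difference with a t-based (1 - alpha) CI."""
    d = a - b
    n = len(d)
    mean = d.mean()
    se = d.std(ddof=1) / np.sqrt(n)
    tcrit = stats.t.ppf(1 - alpha / 2, n - 1)
    return mean, mean - tcrit * se, mean + tcrit * se

mean, lo, hi = paired_mean_diff_ci(had, not_had)
print(f"difference = {mean:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A CI that excludes zero indicates a statistically reliable within-person difference, which is how Table 1 flags its significant comparisons.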
Results: Our sample (n=1,733) had a mean age of 41±15 years; 66.4% were non-Hispanic white, and 66.0% were female. About half (53.1%) of respondents reported having received a COVID-19 vaccine (≥1 dose) and 27.6% reported a history of COVID-19.
Respondents perceived COVID-19 vaccination as less beneficial and less safe for someone who had already had COVID-19, as compared to someone who had not (Table 1 and Figure 1).
Table 1.
| Within-groups: repeated-measures comparisons across all respondents (n=1,733) | Measure 1: “For someone who has already had COVID-19” | | Measure 2: “For someone who has not had COVID-19” | Difference estimate (95% CI) |
|---|---|---|---|---|
| Benefit of COVID-19 vaccine… | 2.69±1.20 | vs | 2.92±1.16 | -0.23 (-0.27 to -0.18) |
| Safety of COVID-19 vaccine… | 2.94±1.05 | vs | 3.01±1.20 | -0.07 (-0.10 to -0.03) |
| Between-groups: comparisons among vaccinated respondents only (n=920) | Group 1: Had COVID-19 post-vaccination (n=78) | | Group 2: Had not had COVID-19 post-vaccination* (n=653) | Difference estimate (95% CI) |
| Benefit of COVID-19 vaccine for someone who has not had COVID-19 | 3.28±0.92 | vs | 3.58±0.74 | -0.31 (-0.53 to -0.09) |
| Benefit of COVID-19 vaccine for someone who has already had COVID-19 | 2.89±1.00 | vs | 3.36±0.90 | -0.48 (-0.72 to -0.23) |
| Safety of COVID-19 vaccine for someone who has not had COVID-19 | 3.27±0.93 | vs | 3.61±0.63 | -0.34 (-0.55 to -0.12) |
| Safety of COVID-19 vaccine for someone who has already had COVID-19 | 3.19±0.89 | vs | 3.50±0.64 | -0.31 (-0.53 to -0.09) |
Outcome measures are shown at each comparison level. Response scale for the benefit question: 1=No benefit at all, 2=A small benefit, 3=A moderate benefit, 4=A lot of benefit. Response scale for the safety question: 1=Very unsafe, 2=Unsafe, 3=Safe, 4=Very safe. *Includes vaccinated respondents who had not had COVID-19 and vaccinated respondents who experienced COVID-19 pre-vaccination. Confidence intervals that do not include zero were highlighted in bold in the original.
Figure 1.
Benefit and safety perceptions of COVID-19 vaccines for someone who either has or has not already had COVID-19 according to respondents’ vaccination status and COVID-19 infection history.
Vaccinated respondents who had experienced COVID-19 post-vaccination perceived COVID-19 vaccination as less beneficial and less safe for a potential recipient as compared to respondents who had not experienced COVID-19 or had experienced it pre-vaccination. This was true when asked about a potential recipient who had already had COVID-19 as well as one who had not.
Conclusions: This study provides evidence that COVID-19 vaccination is perceived as less beneficial and less safe for people who have already had COVID-19. Among vaccinated respondents, those who experienced COVID-19 post-vaccination perceived the vaccine to be less beneficial and less safe. These beliefs may be contributing to ongoing low rates of COVID-19 vaccine uptake.
These findings highlight the need to communicate evolving evidence regarding the benefits and safety of COVID-19 vaccination for people who have already had COVID-19 more effectively, to tailor communications to people’s individual COVID-19 history and vaccination status, and to explore whether these results apply to other vaccine/disease contexts.
Keywords: COVID-19, Vaccination, Risk-perceptions, Health communication, Immunity
The role of knowledge and confidence in intentions to (not) seek care for hypertension: Evidence from a national survey
OP-005 Decision Psychology and Shared Decision Making (DEC)
Wandi Bruine De Bruin1, Yasmina Okan2, Tamar Krishnamurti3, Mark D Huffman4
1Price School of Public Policy, Schaeffer Center for Health Policy and Economics, Dornsife Department of Psychology, University of Southern California
2Centre for Decision Research, University of Leeds
3Center for Research on Health Care, Division of General Internal Medicine, University of Pittsburgh
4Global Health Center and Cardiovascular Division, Washington University and The George Institute for Global Health, Sydney, NSW, Australia
Purpose: Hypertension (high blood pressure) is a modifiable risk factor for cardiovascular disease. The American Heart Association recommends seeking care when systolic blood pressure is 130 mm Hg or higher and/or diastolic blood pressure is 80 mm Hg or higher. However, patients may lack knowledge or confidence about what constitutes normal or healthy blood pressure, undermining intentions to seek care when needed.
Methods: We surveyed a national US sample (N=6,592; mean age 52.5; 41% male, 8% African-American, 14% Hispanic/Latinx) through USC's Understanding America Study. Overall, 2,137 participants reported having been diagnosed with hypertension or high blood pressure, including 795 who reported an additional diagnosis with heart disease, kidney disease, and/or diabetes mellitus. We assessed participants’ knowledge about normal or healthy blood pressure as well as their confidence in that knowledge (on a scale from 1=not confident to 4=very confident). We also asked participants “If your blood pressure readings were […], how likely would you be to reach out to a doctor about better controlling your blood pressure?” (on a scale from 1=very unlikely to 4=very likely). For the latter question, participants were randomly assigned to hypothetical blood pressure readings of 142/91 mm Hg, 132/69 mm Hg, or 118/78 mm Hg – respectively representing hypertension stage 2, hypertension stage 1, and normal/healthy blood pressure.
Results: Among participants without hypertension, only 34% knew that normal or healthy blood pressure is below 120/80 mm Hg, compared to 36% of participants with hypertension alone and 44% of participants with hypertension and co-morbidities. Knowledgeable participants were more confident. More importantly, knowledge and confidence were both associated with greater intentions to seek care for hypertension stage 2 blood pressure readings but with lower intentions to seek care for hypertension stage 1 blood pressure readings. Confidence was the main driver of these findings, as seen in multivariable linear regressions that modeled intentions to seek care while controlling for knowledge and socio-demographics. Findings held for participants without hypertension, participants with hypertension and co-morbidities, and participants with hypertension alone.
Conclusions: Independent of actual knowledge about hypertension, confidence was associated with greater intentions to act on hypertension stage 2 blood pressure readings, but with lower intentions to act on hypertension stage 1 blood pressure readings. Interventions are needed to improve both confidence and knowledge about hypertensive self-management.
Keywords: Hypertension; high blood pressure; knowledge; confidence; self-management
The Impact of Objective and Subjective Numeracy on Risk Communication by Dentists
OP-006 Decision Psychology and Shared Decision Making (DEC)
Adam Jones1, Yasmina Okan2, Jing Kang1, Alasdair Mckechnie1, Sue Pavitt1, Sophy Barber1
1School of Dentistry, University of Leeds, Leeds, United Kingdom
2Centre for Decision Research, Leeds University Business School, University of Leeds, Leeds, United Kingdom
Purpose: Sharing healthcare decisions involves weighing risk against benefit. Numeracy, defined as the ability to reason and apply numerical concepts, influences accurate perceptions of risk and benefit and associated decisional satisfaction.
Emerging evidence suggests doctors with low objective numeracy present risk information sub-optimally, make less accurate diagnostic inferences, and prefer paternalistic approaches to decision-making. Little is known about the relationship between subjective and objective numeracy among different types of health professionals, or about the influence of self-assessed or objective numeric confidence on clinicians’ risk communication preferences.
This study is the first to quantify numeracy in a cohort of dentists and examine the impact on their risk communication beliefs and practices.
Methods: A cross-sectional survey of 200 general and specialist dentists recruited through professional societies, social media, and snowball sampling. Respondents completed an online questionnaire that elicited clinician preferences for giving and receiving risk information on 7-point Likert scales before completing validated subjective and objective numeracy tests. Free-text responses captured perceived barriers to risk discussions with patients. The bespoke questionnaire was piloted with representatives from the target group to ensure its acceptability and usability.
Results: Responses were received from all regions of the United Kingdom with representation across primary and secondary care settings. Mean years of experience was 19.6 (SD 13.2).
41.8% of clinicians did not ‘always’ discuss procedure risks prior to treatment. When explaining risks to their patients, 77.2% of clinicians most often used words (e.g., high/moderate/low), but when receiving risk information themselves, 53% indicated a strong preference for numerical formats and 72% found numerical information useful in most decisions. High subjective numeracy predicted comfort in discussing risks (OR 3.11, 95% CI 1.39-6.96). Objective numeracy was not correlated with subjective numeracy, or with confidence in or frequency of risk discussion in clinical practice. Free-text responses identified perceived barriers to risk conversations, such as concern about inducing anxiety.
Conclusions: Clinicians demonstrated acceptable numeracy but tended to overestimate their ability. There were demonstrable differences between the risk formats clinicians prefer when making decisions themselves and those they employ in explanations to their patients. This ‘communication-mode preference paradox’ is acknowledged in wider decision research. Clinicians’ own perceptions of their numeric ability predicted their confidence in navigating risk conversations more than objective ability did, which may have implications for decision aid design.
Keywords: Numeracy, Risk, Decisions, Dentists
What is a star worth to Medicare beneficiaries? A discrete choice experiment of hospital quality, travel distance, and out-of-pocket cost
OP-007 Patient and Stakeholder Preferences and Engagement (PSPE)
Logan Trenaman1, Mark Harrison2, Jeffrey S Hoch3
1Center for Healthcare Policy and Research, University of California, Davis, Sacramento CA, USA; Division of Health Policy and Management, Department of Public Health Sciences, University of California, Davis; Collaboration for Outcomes Research and Evaluation, Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver, BC, Canada; Centre for Health Evaluation and Outcome Sciences, Vancouver BC, Canada
2Collaboration for Outcomes Research and Evaluation, Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver, BC, Canada; Centre for Health Evaluation and Outcome Sciences, Vancouver BC, Canada
3Center for Healthcare Policy and Research, University of California, Davis, Sacramento CA, USA; Division of Health Policy and Management, Department of Public Health Sciences, University of California, Davis
Purpose: We estimate the trade-offs between hospital quality, travel distance, and out-of-pocket cost, to determine how much Medicare beneficiaries would be willing to pay or how far they would be willing to travel for higher rated hospitals.
Methods: We conducted an online survey with a discrete choice experiment (DCE). The DCE presented participants with a choice between two hypothetical hospitals described using six attributes: four derived from the quality domains of the hospital value-based purchasing program (clinical outcomes, patient experience, safety, and efficiency) and two additional attributes (travel distance and out-of-pocket cost). The four quality domains were described using five levels corresponding to Medicare’s five-star quality rating system. Levels for distance (10, 100, or 200 miles) and out-of-pocket cost ($200, $1,000, or $4,000) were derived from a previous DCE of Medicare beneficiaries. A D-efficient ‘fractional factorial’ experimental design using Bayesian priors generated 45 tasks blocked into five versions of nine questions each. We recruited Medicare beneficiaries through Ipsos KnowledgePanel using quotas to achieve a representative sample based on age, sex, and race/ethnicity. We used a mixed logit regression model in the willingness-to-pay space to estimate trade-offs between quality domains, distance, and out-of-pocket cost.
Results: 1,025 Medicare beneficiaries completed the survey, and respondents were broadly representative of the Medicare population. All attribute coefficients were statistically significant and in the expected direction. Respondents’ willingness to pay and travel for quality ratings are presented in Table 1. A hospital’s performance on clinical outcomes was over twice as valuable as patient experience and safety, and about eight times as valuable as efficiency. The average Medicare beneficiary would be willing to pay an additional $1,697 (95% CI: $1,483 to $1,912) or travel 189 miles further (95% CI: 157 to 221) for a hospital with a one-star higher rating on clinical outcomes.
Table 1.
The value of trade-offs between aspects of hospital quality, travel distance, and out-of-pocket costs.
| | Willingness to Pay ($) | Willingness to Pay (95% CI) | Willingness to Travel (Miles) | Willingness to Travel (95% CI) |
|---|---|---|---|---|
| Clinical Outcomes* | $1,697 | ($1,483 to $1,912) | 189 | (157 to 221) |
| Patient Experience* | $691 | ($587 to $794) | 77 | (63 to 92) |
| Safety* | $615 | ($497 to $733) | 63 | (49 to 77) |
| Efficiency* | $218 | ($133 to $302) | 22 | (13 to 31) |
| Distance | -$8 | (-$8 to -$8) | | |
| Out-of-pocket Cost | | | -0.11 | (-0.13 to -0.09) |
*Corresponds to willingness to pay or travel for a hospital with a one-star higher rating
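Conceptually, each trade-off in Table 1 is the ratio of an attribute’s coefficient to the (negative) cost or distance coefficient. A minimal sketch using invented placeholder coefficients chosen only to roughly reproduce the table’s magnitudes; these are not the study’s estimates, and the study estimated a mixed logit directly in WTP space rather than taking post-hoc ratios:

```python
# Hypothetical mean coefficients from a choice model (illustrative only).
coefs = {
    "clinical_outcomes": 0.19,     # per one-star increase
    "patient_experience": 0.077,
    "safety": 0.069,
    "efficiency": 0.024,
    "distance_miles": -0.0010,
    "cost_dollars": -0.000112,
}

def willingness_to_pay(attr):
    """Dollars a respondent would trade for a one-unit attribute gain."""
    return coefs[attr] / -coefs["cost_dollars"]

def willingness_to_travel(attr):
    """Extra miles a respondent would trade for a one-unit attribute gain."""
    return coefs[attr] / -coefs["distance_miles"]

for attr in ("clinical_outcomes", "patient_experience", "safety", "efficiency"):
    print(f"{attr}: WTP ${willingness_to_pay(attr):,.0f}, "
          f"WTT {willingness_to_travel(attr):,.0f} miles")
```

Estimating in WTP space, as the study did, yields these dollar-denominated trade-offs directly with well-behaved standard errors, rather than dividing two estimated coefficients.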
Conclusions: We elicited the preferences of Medicare beneficiaries for hospital care in a large, nationally representative sample. This study provides information on patients’ trade-offs when choosing between hospitals based on quality ratings, which will allow patients’ perspectives on quality to inform quality improvement efforts and the design of value-based payment programs.
Keywords: patient preference; quality; discrete choice experiment; Medicare; willingness to pay
Deriving a Health State Classification System to Measure Health Utilities for Children Based on the PedsQL: Using Rasch Analysis for Item Reduction
OP-008 Patient and Stakeholder Preferences and Engagement (PSPE)
Ellen Kim DeLuca1, Kim Dalziel2, Nicholas Henderson3, Eve Wittenberg4, Lisa A. Prosser5
1Department of Health Management and Policy, University of Michigan, Ann Arbor, USA; Susan B. Meister Child Health Evaluation and Research Center, University of Michigan, Ann Arbor, USA
2Centre for Health Policy, University of Melbourne, Melbourne, Australia
3Department of Biostatistics, University of Michigan, Ann Arbor, USA
4Center for Health Decision Science, Harvard University, Boston, USA
5Susan B. Meister Child Health Evaluation and Research Center, University of Michigan, Ann Arbor, USA; Department of Pediatrics, University of Michigan, Ann Arbor, USA; Department of Health Management and Policy, University of Michigan, Ann Arbor, USA
Purpose: An important methodological challenge in pediatric economic evaluations is estimating health utilities for children. Current methods are highly variable and there is no instrument to value health across multiple pediatric age groups. The PedsQL is a non-preference-based health-related quality of life instrument validated for children 2-18 years. The overall goal of this study is to develop and validate a preference-based index for the PedsQL: the PedsUtil scoring system. The first step of this process is to derive a health state classification system from the PedsQL that can be used for preference valuation.
Methods: We used Rasch analyses to reduce the number of items of the PedsQL across the previously established 7 dimensions of the health state classification system. Using data from the Longitudinal Study of Australian Children (LSAC), a study that follows a representative sample of Australian children and their families, we evaluated each multi-item dimension (i.e. school absence, school functioning, social functioning, emotional functioning, and physical functioning) using the Rasch partial credit model. We stratified our analyses by age group (2-5 years, 6-13 years, and 14-17 years) to represent the different developmental stages of children and to reflect the study design of the LSAC. To enhance robustness, Rasch analyses were performed on 5 random subsamples for each age group (n = ~500). We examined item level ordering, differential item functioning, spread of item thresholds, reliability, and fit of items to the Rasch model to identify poorly functioning items to exclude from the health state classification system.
Results: After considering all Rasch analysis criteria, we identified a single item to represent each of the school absence, school functioning, and social functioning dimensions across age groups. For the emotional and physical functioning dimensions, we identified two candidate items in each dimension for inclusion in the health state classification system (Table).
Conclusions: We used Rasch analyses to reduce the full set of PedsQL items to a core set. The next steps will be to assemble and consult an expert panel to validate and finalize the design of the health state classification system, and field surveys to develop the PedsUtil scoring system. This project will address key limitations in measures applied to value assessment in healthcare by providing a method to consistently value child health across multiple pediatric age groups.
Keywords: PedsQL; Rasch analysis; health-related quality of life
Table.
Summary of remaining items in the health state classification system after Rasch analyses.
Opioid use disorder and birth outcomes: The impact of preterm birth and weight on long-term survival, quality of life, and healthcare costs
OP-009 Applied Health Economics (AHE)
Ashley A. Leech1, Shawn Garbett2, Elizabeth Mcneer2, Benjamin P. Linas3, Rashmi Bharadwaj4, Lisa Su4, Stephen W. Patrick5
1Department of Health Policy and Vanderbilt Center for Child Health Policy, Vanderbilt University Medical Center
2Department of Biostatistics, Vanderbilt University Medical Center
3Department of Medicine, Boston University Medical Center
4Department of Health Policy, Vanderbilt University Medical Center
5Department of Pediatrics, Vanderbilt Center for Child Health Policy, and Department of Health Policy, Vanderbilt University Medical Center
Purpose: Guidelines for economic evaluations of health interventions recommend including consequences that meaningfully influence cost-effectiveness. Economic evaluations of treatment options for pregnant women with opioid use disorder (OUD) have not assessed the nexus between preterm birth and low birth weight (LBW), and how they influence long-term survival, quality of life (QOL), and healthcare costs. We sought to quantify the long-term impacts of preterm birth and LBW associated with prenatal opioid use across varying treatment pathways.
Methods: We used CDC vital statistics natality data and a national cohort of commercially insured births to estimate the impact of LBW (<2500 grams) and premature delivery (<37 weeks) on infant survival and healthcare utilization. To match the correlation structure of LBW and premature delivery on infant survival in the US with that observed from pregnant women on treatment for OUD, we constructed a reparametrized beta distribution for each outcome dimension. We then employed a simulation model to compare OUD treatment strategies and project lifetime costs and cost-effectiveness. We applied short and long-term health-related QOL decrements associated with pre-term birth and LBW. We calculated direct and indirect medical costs over an infant’s lifetime. We simulated 10,000 infants per treatment and used an annual 3% discount rate.
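The lifetime cost projections rest on standard exponential discounting at 3% per year. A minimal sketch of that calculation (the cost stream below is invented for illustration; it is not output of the simulation model):

```python
def discounted_total(annual_costs, rate=0.03):
    """Present value of a stream of annual costs discounted at `rate`,
    with costs assumed to occur at the start of each year."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(annual_costs))

# Hypothetical stream: a large first-year cost followed by smaller annual
# costs over an 80-year lifetime (values are illustrative only)
stream = [16_838.0] + [1_500.0] * 79
pv = discounted_total(stream)
```

With this convention the first-year cost is undiscounted and each subsequent year's cost is scaled by 1/(1.03)^t.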
Results: For preterm birth and LBW outcomes only, all treatment strategies (buprenorphine, methadone, naltrexone, medication-assisted withdrawal with behavioral therapy, and medication-assisted withdrawal only) were cost-saving compared to no treatment. Maternal buprenorphine dominated all other treatment strategies. Mean costs for the first year of an infant’s life ranged from $16,838 (95% CI: $16,242-$17,434) under buprenorphine to $113,406 (95% CI: $110,308-$116,504) for the no-treatment option, with lifetime discounted costs ranging from $137,000 (95% CI: $136,749-$137,954) to $234,000 (95% CI: $231,332-$237,511), respectively. Buprenorphine extended quality-adjusted life expectancy by 0.03 years compared to the closest alternative, naltrexone, and by upwards of 2 years compared to no treatment.
Conclusions: Our findings accentuate the importance of incorporating long-term consequences of infant outcomes into economic analyses assessing treatment for maternal OUD. The favorable buprenorphine finding could be due to lower observable rates of concurrent substance use among women on buprenorphine. While incorporating infant outcomes alone is insufficient to determine a preferred economic choice for pregnant women with OUD, rigorously including these outcomes is imperative to decision-making on maternal opioid use treatment.
Keywords: Maternal OUD, opioid use disorder, infant outcomes, cost-effectiveness, low birth weight, preterm birth
Maternal treatment for opioid use disorder and infant outcomes
A value-based framework for assessing the wider societal impacts and distributional effects of net production for patients
OP-010 Health Services, Outcomes and Policy Research (HSOP)
Shainur Premji, Simon Walker, Susan Griffin
Centre for Health Economics, University of York, York, United Kingdom
Purpose: Health technology assessment (HTA) agencies, including ICER and NICE, have emphasized the importance of developing value assessment frameworks (VAF) to support fair pricing and innovation. The inclusion of Wider Societal Impacts (WSI) in a VAF is designed to support structured assessments of the non-health benefits of health technologies in terms of their impact on patients’ net contribution to society (net production), i.e., the amount of societal resources patients contribute (produce) minus what they utilize (consume). For this study, we updated calculations made for NICE’s initial exploration of estimating WSI as a function of health improvements, and examined the scope for exploring the distributional effects of WSI.
Methods: Using publicly available data from several population-based surveys in the UK, we estimated, where possible, age- and sex-specific calculations for the following components of net production: paid production, general unpaid production, unpaid sickness care production, unpaid childcare production, formal care consumption, informal care consumption, private paid consumption, private unpaid consumption, childcare consumption, and government services consumption. To inform the feasibility of reflecting inequalities across population subgroups in WSI, we evaluated the prospect of calculating estimates by various socio-demographic characteristics for each component of net production.
Results: It was possible to update the majority of WSI calculations (8/10) using freely available data sources. In some cases, we found that the most suitable functional form of the age- or sex-specific outputs had changed over time, raising the question of whether regular revisions using up-to-date sources will be required. Following an evaluation of the socio-demographic information present within each data source, we concluded that a lack of information may prevent further exploration of the distributional issues that might underlie the WSI framework.
Conclusions: Given the recent call to advance equity in decision making, VAFs that support the rigorous assessment of the distributional effects of health technologies are necessary to encourage innovative health systems and provide evidence to support fair pricing. The WSI framework holds promise as a tool to incorporate non-health benefits in HTA; however, more work is required to understand the role of inequalities in these WSI, and the equity implications of applying the framework.
Keywords: value assessment frameworks, distributional effects, equity
Costs of Excluding Pregnant People from Clinical Trials: A value-of-information study
OP-012 Applied Health Economics (AHE)
Alyssa Bilinski1, Natalia Emanuel2, Andrea Ciaranello3
1Department of Health Services, Policy and Practice & Department of Biostatistics, Brown University School of Public Health, Providence, USA (contributed equally with (2))
2Department of Economics, Princeton University, Princeton, USA (contributed equally with (1))
3Medical Practice Evaluation Center and Division of Infectious Disease, Massachusetts General Hospital, Harvard Medical School, Boston, USA
Purpose: Pregnant people have historically been excluded from clinical trials. As a result, drug and vaccine safety information is often unavailable before use in clinical practice, and post-trial observational studies are limited and less robust. We used a value-of-information (VOI) framework to quantify the cost of excluding pregnant people from randomized controlled trials (RCTs).
Methods: We consider the value of data obtained in early RCTs under different priors (weak positive (WP), weak negative (WN)) and true treatment effects (positive (P), negative (N)). We measure VOI for positive effects through the costs of reduced uptake, and for negative effects through the additional sample needed to detect effects in observational studies relative to an RCT. We consider case studies in each quadrant of priors and effects: COVID-19 vaccines (WP/P), thalidomide (WP/N), dolutegravir (WN/P), and cytomegalovirus prevention (WN/N).
In this abstract, we focus on COVID-19 vaccines. While COVID-19 vaccines were authorized in December 2020, including for pregnant people, the CDC did not recommend them for pregnant people until August 11, 2021, based on observational data. Using vaccination rates in 18-49 year-olds and 40-49 year-olds (the latter less affected by fertility-related vaccine concerns), we estimate counterfactual uptake for pregnant people had they been included in RCTs. To estimate mortality reductions, we use data on COVID-19-related ICU admissions and deaths in pregnant people and COVID-19 perinatal (infant) mortality. We assume a $10 million value per statistical life.
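The mortality component of the societal value follows directly from multiplying averted deaths by the $10 million value per statistical life. A toy calculation under that assumption (the reported totals also reflect ICU stays and timing of accrual, so this reproduces only the approximate scale):

```python
VSL = 10_000_000  # value per statistical life assumed in the abstract ($10M)

def mortality_value(maternal_deaths_averted, infant_deaths_averted=0):
    """Mortality-based societal value of earlier RCT evidence,
    valuing each averted death at the VSL."""
    return (maternal_deaths_averted + infant_deaths_averted) * VSL

# Lower-bound estimates from the Results: 6 maternal and 26 infant deaths
low = mortality_value(6, 26)  # $320 million
```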
Results: Vaccination among US pregnant people lagged age-specific rates in other groups through November 2021, when it surpassed rates in 18-49 year-olds (Figure 1). Had COVID-19 vaccine RCTs been conducted in pregnant people, we estimate 6-15 maternal deaths, 12-39 ICU stays, and 26-68 infant deaths would have been prevented from January-November 2021 in the US. Early RCT data would have had a societal value of $56-$156 million based on maternal outcomes, and $320-$836 million if infant outcomes are also considered. More than 70% of costs accrued after July 2021. Weighting vaccination uptake curves based on state-specific birth rates shifted cost estimates by <3%. Societal value estimates were robust to different expert priors.
Figure 1.
Estimates of costs averted by early randomized controlled COVID-19 vaccine trials in pregnant people. The top panel shows vaccination rates in pregnant people and two comparison groups. The bottom panel displays estimated costs averted under the counterfactual uptake following early RCT (18-49 year-olds in green and 40-49 year-olds in blue).
Conclusions: In the contexts studied, health and mortality costs incurred far exceed average trial cost, emphasizing welfare benefits from including pregnant people in RCTs. Restrictions around research on pregnant people curtail the generation of valuable and actionable information.
Keywords: pregnancy; randomized controlled trials; value of information; COVID-19; vaccines
Adult Hearing Screening in the United States: The Value of Future Research
OP-013 Applied Health Economics (AHE)
Ethan D. Borre1, Evan R. Myers1, Judy R. Dubno2, Susan D. Emmett1, Juliessa M. Pavon1, Howard W. Francis1, Osondu Ogbuoji1, Gillian D. Sanders Schmidler1
1Duke University School of Medicine, Durham, NC, USA
2Medical University of South Carolina, Charleston, SC, USA
Purpose: Adult hearing screening is not routinely performed, and most individuals with hearing loss have never had their hearing tested as adults. Our objective was to project the monetary value of future research clarifying uncertainties around the optimal adult hearing screening schedule.
Methods: We used a validated decision model of hearing loss natural history, diagnosis, and treatment (DeciBHAL-US) to simulate current detection and treatment of hearing loss versus alternative hearing screening schedules. The screening schedules varied by age at initiation (ages 45, 55, 65, and 75 years) and frequency of screening (every 1 or 5 years). We simulated 40-year-old US primary care patients throughout their remaining lifetimes. Key model inputs included hearing loss incidence (0.06-10.42%/year), hearing aid uptake (0.54-8.14%/year), screening effectiveness (1.62x hearing aid uptake), utility benefits of hearing aids (+0.11), and costs of hearing aid devices ($3,690). We assigned distributions to uncertain model parameters to conduct probabilistic uncertainty analysis (PUA). Our main outcomes were quality-adjusted life-years (QALYs) and costs (2020 USD) from a health system perspective. We used value of information analysis to estimate the expected value of perfect information (EVPI) and the expected value of partial perfect information (EVPPI), using a basecase willingness-to-pay (WTP) of $100,000/QALY. Scaled to the population impacted by the decision, the population EVPI and EVPPI estimate the upper bound of the dollar value of future research.
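EVPI has a simple sample-based estimator from probabilistic analysis output: the mean of the per-draw maximum net benefit minus the maximum of the mean net benefits. A hedged sketch with invented numbers (not DeciBHAL-US output):

```python
import random

random.seed(0)

# Hypothetical PSA output: net monetary benefit (WTP x QALYs - cost) per
# person for three illustrative screening schedules across 10,000 draws.
# The means and spread are invented, not taken from the model.
means = [100.0, 102.0, 101.0]
nb = [[random.gauss(m, 30.0) for m in means] for _ in range(10_000)]

# Per-person EVPI = E[max over schedules] - max over schedules of E[NB]
e_max = sum(max(row) for row in nb) / len(nb)
max_e = max(sum(row[j] for row in nb) / len(nb) for j in range(len(means)))
evpi = e_max - max_e  # always >= 0

# Population EVPI scales by the (discounted) population affected
population_evpi = evpi * 1_000_000  # hypothetical population size
```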
Results: In the cost-effectiveness analysis (using values discounted at 3%/year), yearly screening beginning at ages 75, 65, and 55 years were all cost-effective, with ICERs of $39,200/QALY, $45,600/QALY, and $80,200/QALY, respectively. The PUA demonstrated high uncertainty around the optimal screening schedule, with no individual schedule being optimal in >50% of simulations at the basecase WTP of $100,000/QALY. Yearly screening beginning at age 55 was most frequently the optimal screening schedule (in 38% of simulations) at the basecase WTP. The population EVPI, or value of reducing all uncertainty, was $8.2-$12.6 billion (varying with WTP), and the EVPPI, or value of reducing all screening-effectiveness uncertainty, was $2.4 billion.
Conclusions: There is large uncertainty around the optimal adult hearing screening schedule. Future research on hearing screening, in particular clinical research clarifying the effect of hearing screening on hearing aid uptake, has high potential value and is likely justified.
Keywords: value of information; hearing loss; decision modeling
Table.
Expected value of perfect information and expected value of partial perfect information for screening effectiveness across varying willingness to pay thresholds.
An EVSI analysis of further research to evaluate the efficacy of high-dose aducanumab for early-stage AD
OP-014 Applied Health Economics (AHE)
Jonah Popp, Eric Jutkowitz, Tom Trikalinos
Department of Health Services, Policy, and Practice, Brown University, Providence, RI USA
Purpose: In April 2022, the Centers for Medicare and Medicaid Services (CMS) confirmed its initial decision to cover treatment with aducanumab only for early-stage Alzheimer’s Disease patients enrolled in an approved randomized controlled trial (RCT). We evaluate CMS’s coverage decision and the appropriateness of a future RCT from a decision-theoretic perspective.
Methods: We conducted an expected value of sample information (EVSI) analysis to evaluate an additional RCT investigating the efficacy of high-dose (10 mg/kg) aducanumab. We determined the optimal trial sample size (1,000-10,000, by increments of 100) and follow-up length (12-36 months, by 6-month increments), conditional on an initial coverage decision and on CMS’s criterion for a post-trial coverage determination (efficiency, i.e., costs and efficacy, vs. efficacy only). We also considered the expected effect of the manufacturer's proposed RCT (‘ENVISION’). We used EVSI results to estimate the population-level expected incremental net monetary benefit (INMB) for four decision alternatives based on coverage and whether a trial is conducted. We took a societal perspective, considered willingness-to-pay (WTP) values between $50K and $200K per quality-adjusted life year (QALY), and discounted (r=3%) QALYs and costs. We investigated the effect of different modeling scenarios: population size, decision timeframe, efficacy, and a wholesale acquisition price (WAP) reduced from $28.2K.
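The population-level comparisons rest on the standard net monetary benefit identity, INMB = WTP × ΔQALYs − ΔCost. A minimal illustration (the inputs are invented for illustration, not model outputs):

```python
def inmb(delta_qalys, delta_cost, wtp):
    """Incremental net monetary benefit of one alternative versus another:
    positive values favor the first alternative at the given WTP."""
    return wtp * delta_qalys - delta_cost

# Illustrative only: 0.01 QALY gained at $5,000 extra cost is net-negative
# at WTP = $100K/QALY, and would break even at WTP = $500K/QALY
x = inmb(0.01, 5_000.0, 100_000.0)
```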
Results: CMS’s initial decision avoided an expected societal loss (INMB) of $10B-$110B. Given this decision, the ENVISION trial would be a reasonable option for WTP between $150K and $200K if CMS made its post-trial coverage decision based on efficiency. No trial would be optimal for WTP≤$100K in the same scenario. However, a small trial (1,000-1,200 participants) would be optimal for WAP≤$14.1K and WTP=$100K. If CMS relied only on efficacy, barring a WAP≤$5.6K, no trial would be optimal (expected loss of $15B-$45B for an ENVISION-like trial).
Conclusions: The expected net societal value of a future trial depends on our WTP for a QALY and how CMS would use trial results to update its coverage decision. A future ENVISION-like trial would inform stakeholders but could lead to a large societal net loss given the high WAP set by the manufacturer and the legal framework limiting CMS’s decision-making process. At a greatly reduced price, or if CMS were free to explicitly consider the opportunity costs of treatment, such a trial would be expected to secure positive net benefit even given more pessimistic efficacy estimates.
Keywords: EVSI, aducanumab, Alzheimer's disease
Table.
Closed-Form Solution of the Bivariate Unit Normal Loss Integral with Application to Value of Information Analysis
OP-015 Quantitative Methods and Theoretical Developments (QMTD)
Tae Yoon Lee1, Paul Gustafson2, Mohsen Sadatsafavi1
1Respiratory Evaluation Sciences Program, Collaboration for Outcomes Research and Evaluation, Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver, Canada
2Department of Statistics, University of British Columbia, Vancouver, Canada
Purpose: Since its first development by Raiffa and Schlaifer in the 1960s, the unit normal loss integral (UNLI) has been widely used in Value of Information (VoI) analysis, providing closed-form solutions for various VoI metrics in both model-based and data-driven economic evaluations. However, one limitation of the UNLI has been that its closed-form solution is available in only one dimension, and it can thus be used only for models involving two strategies (where it is applied to the scalar incremental net benefit). While approximate methods have been proposed for more than two strategies (e.g., Jalal et al., MDM, 2015), to the best of our knowledge, no closed-form expression has been proposed. We derived a closed-form solution for the UNLI arising from a bivariate normal distribution, enabling closed-form VoI calculations for three strategies.
Methods: A detailed derivation of the closed-form solution is provided in a pre-print: https://arxiv.org/pdf/2205.06364.pdf. We evaluated the correctness of the closed-form UNLI solution against large-scale Monte-Carlo integrations (with N=100,000), for 252 different bivariate normal distributions. As a case study, we used individual-level data from the Canadian Optimal Therapy of Chronic Obstructive Pulmonary Disease clinical trial (n=449, mean follow-up time: one year), which aimed to compare three treatments for patients with moderate to severe COPD, and computed the Expected Value of Perfect Information across Willingness-To-Pay values ranging from $0 to $200,000. For comparison, we used an established bootstrap method (with N=100,000).
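The abstract's contribution is the bivariate (three-strategy) closed form; the long-established univariate case can be sketched and checked against Monte Carlo as follows (parameters are illustrative, not from the COPD trial):

```python
import math
import random

def unli(z):
    """Unit normal loss integral: E[max(Z - z, 0)] for Z ~ N(0, 1)."""
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))          # standard normal cdf
    return phi - z * (1 - Phi)

def evpi_two_strategies(mu, sigma):
    """Closed-form per-person EVPI when the incremental net benefit of
    strategy B versus A is distributed N(mu, sigma^2)."""
    return sigma * unli(abs(mu) / sigma)

# Monte Carlo check with illustrative parameters
random.seed(1)
mu, sigma = 500.0, 2000.0
draws = [random.gauss(mu, sigma) for _ in range(200_000)]
mc = sum(max(x, 0.0) for x in draws) / len(draws) - max(mu, 0.0)
closed = evpi_two_strategies(mu, sigma)
```

The Monte Carlo estimate converges to the closed-form value as the number of draws grows, mirroring the comparison reported in the Results.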
Results: Results from the simulation study supported the correctness of the closed-form solution; the difference between the closed-form and Monte-Carlo solutions always fell within the range of Monte-Carlo error (see the table in https://github.com/tyhlee/SMDM_44_NA_Conference_Abstract_TYL). For the case study, EVPI values were similar between the closed-form UNLI solution and bootstrap method (Figure).
Figure.
Expected Value of Perfect Information (EVPI) based on the closed-form solution (black) and bootstrap method (red; N= 100,000).
Conclusions: Existing closed-form VoI methods based on the UNLI can now be extended to three-decision comparisons, taking a fraction of a second to compute and not being subject to Monte Carlo error. The closed-form solution is implemented in R and is available through the ‘predtools’ R package (https://github.com/resplab/predtools/).
Keywords: Decision Analysis; Cost-effectiveness; Uncertainty; Value of Information
An Analysis of Sample Diversity in Pivotal Clinical Trials Included in ICER Reviews
OP-016 Health Services, Outcomes and Policy Research (HSOP)
Abigail C Wright1, Serina Herron-Smith1, Diya Mathur2, Foluso Agboola1
1Institute for Clinical and Economic Review, Boston, MA. USA.
2Geisel School of Medicine at Dartmouth College, Hanover, NH. USA.
Purpose: As a leading voice for independent HTA in the US, the Institute for Clinical and Economic Review (ICER) has a role in advancing health equity by developing and incorporating methods that promote increasing clinical trial diversity. Our aim was to determine how well information on the demographic characteristics (sex, race, ethnicity, age, and socioeconomic status [SES]) of participants was reported in previously reviewed clinical trials, and whether racial/ethnic minority groups, females, and older adults were adequately represented.
Methods: This cross-sectional study examined the pivotal clinical trials in completed ICER reviews from 2017 to 2021. The following data were extracted and validated from the manuscripts and the clinicaltrials.gov database: recruitment details, race, ethnicity, sex, SES, and age. The data were compared to two data sources: (1) the 2021 US population census and (2) disease-specific data. Disease-specific data were obtained from the Global Burden of Disease database, the Centers for Disease Control and Prevention (CDC) or organization websites, or recent peer-reviewed journal articles.
Results: We examined 33 unique disease topics, including a total of 208 trials with 179,570 patients. Gender was consistently reported across trials (99%); however, the reporting of race and ethnicity varied across topics and trials. In general, many trials reported the percentage of White (89%), Black (75%), and Asian (70%) patients, but fewer reported the percentage of American Indian and Alaska Native (51%), Native Hawaiian or Pacific Islander (47%), and Hispanic (55%) patients. Similarly, the proportion of patients over the age of 65 years was reported in only 28% of trials, and no trials reported SES. Participation in the trials relative to the proportion of those groups in the disease and the general population will be presented using a “participation-to-disease representative ratio” and a “participation-to-population representative ratio”.
Conclusions: Adequate involvement of a diverse population in clinical trials is vital in aiding our understanding of disease presentations, effectiveness across populations, and defining appropriate outcomes, all of which influence the translation of research into real-world practice. The results of the study will inform the role ICER can play in advancing health equity by developing and incorporating methods that will promote increasing clinical trial diversity in an effective, sustainable, and transparent manner in our clinical assessments.
Keywords: Diversity, Inclusion, Prevalence, Gender, Race, Ethnicity
Making decisions about clinical trial design: predicting participation rates in patient populations
OP-017 Patient and Stakeholder Preferences and Engagement (PSPE)
Nick Bansback1, Magda Aguiar1, Tracey Lea Laba2, Sarah Munro1, Tiasha Burch3, Jennifer Beckett3, Julia Kaal1, Marie Hudson4, Mark Harrison1
1University of British Columbia, Vancouver, Canada; Centre for Health Evaluation and Outcome Sciences
2University Technology Sydney, Sydney, Australia
3Patient Partner, Vancouver, Canada
4McGill University, Montreal, QC, Canada
Purpose: Fewer than 50% of clinical trials meet their recruitment targets, and many fail to recruit diverse participants. We therefore elicited patient preferences to predict participation rates and to inform how trial design decisions could influence uptake.
Methods: In this patient-oriented project, we used focus groups and a discrete choice experiment (DCE) survey to elicit the preferences of people with scleroderma for a novel treatment for this condition, autologous hematopoietic stem cell transplant (AHSCT). As a case study, we used an existing trial of AHSCT treatment that had failed to recruit the required sample size. In the DCE, participants made a series of choices between hypothetical AHSCT treatments (with varying levels of effectiveness, immediate and long-term risk, care team composition and experience, cost, and travel distance) or no treatment. Preferences were estimated using mixed logit and latent class logit models and used to predict participation for different clinical trial designs in diverse populations.
Results: In total, 278 people with scleroderma completed our survey. All seven of the AHSCT treatment attributes significantly influenced preferences. The potential effectiveness of treatment and the risk of late complications contributed most to patient preferences and the choices participants made; however, there were important modifiable factors that affected preferences, such as travel distance and out-of-pocket costs. Our predicted participation rate (33%) aligned with the reported participation rate of the previous trial (34%), supporting the validity of the predictions. The models enabled us to predict that participation could have been as high as 51% if treatment could be offered closer to a patient’s home, at lower out-of-pocket cost, and supported with holistic, multidisciplinary care. These predictions varied across some population demographics.
Conclusions: We used a patient-engaged approach to predict participation in a clinical trial of AHSCT treatment. Our results can guide future trials of AHSCT treatments toward designs that will likely enroll the most patients, including patients from diverse populations. We believe a structured approach to understanding the trade-offs people are willing to make can be used routinely to inform clinical study design, improving both overall participation rates and the participation of diverse, representative populations.
Keywords: Discrete choice experiments, clinical trials, patient preferences
Does prioritization of COVID vaccine distribution to communities with the highest COVID-burden reduce health inequity?
OP-018 Quantitative Methods and Theoretical Developments (QMTD)
Hae-Young Kim, Anna Bershteyn, R Scott Braithwaite
New York University Grossman School of Medicine
Purpose: It is impossible to vaccinate an at-risk population instantaneously. Prioritizing communities that have borne a disproportionate share of disease burden may improve health equity. On the other hand, those communities may derive less health benefit from vaccination because of greater immunity from prior exposure. We examined these trade-offs in the context of the COVID-19 pandemic in New York City (NYC) and estimated the impact of preferentially targeting COVID vaccine distribution to the communities with the greatest prior COVID burden on equity-adjusted health outcomes.
Methods: We used a validated SEIR-Quarantine model to estimate and compare the health impact of alternative vaccination strategies for a large U.S. metropolitan area, comparing prioritization of neighborhoods with the greatest prior disease burden (exposure-based prioritization) versus no neighborhood-based prioritization (no prioritization). We assessed prior burden by the reported cumulative COVID death rate since the first reported COVID-19 case in NYC (March 1, 2020) and categorized NYC neighborhoods into two groups, higher vs. lower burden. Modeled health outcomes were cumulative infections, hospitalizations, and deaths, and equally-distributed-equivalent (EDE) infections, EDE hospitalizations, and EDE deaths. Health outcomes were modeled from the vaccine roll-out (December 15, 2020) to August 31, 2021. EDE reflects societal willingness to trade off aggregate health benefit to obtain a more equal health distribution. We evaluated a wide plausible range of inequality aversion for the covariation of health and social disadvantage using the social vulnerability index (Atkinson parameters 1 to 10).
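The EDE calculations build on the Atkinson social welfare framework. A minimal sketch of the Atkinson EDE for a health "good" distributed across population groups (the abstract applies the idea to adverse outcomes such as deaths; all numbers here are hypothetical):

```python
import math

def atkinson_ede(values, weights, epsilon):
    """Equally-distributed-equivalent level of a health 'good' across
    groups under Atkinson inequality aversion epsilon. epsilon = 0
    recovers the weighted mean; larger epsilon penalizes inequality more."""
    total_w = sum(weights)
    if epsilon == 1:  # limiting case: weighted geometric mean
        return math.exp(sum(w * math.log(v) for v, w in zip(values, weights)) / total_w)
    s = sum(w * v ** (1 - epsilon) for v, w in zip(values, weights)) / total_w
    return s ** (1 / (1 - epsilon))

# Two equal-sized neighborhood groups with unequal benefit (hypothetical)
benefit = [10.0, 2.0]   # e.g., infections averted per 1,000 residents
pop = [0.5, 0.5]
mean_benefit = sum(b * w for b, w in zip(benefit, pop))
```

Because the distribution is unequal, the EDE falls below the mean, and it falls further as the Atkinson parameter increases.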
Results: EDE-unadjusted health outcomes were better with no prioritization, with a 17.2% reduction in deaths (from 7,849 to 6,501) and a 19.0% reduction in infections (from 1,287,632 to 1,042,766) compared to exposure-based prioritization. However, EDE-adjusted health outcomes were better with exposure-based prioritization. With strong inequality aversion (Atkinson parameter of 10), exposure-based prioritization performed better, with a 3.0% reduction in EDE-deaths (from 8,024 to 7,824) and a 2.5% reduction in EDE-infections (from 1,303,286 to 1,263,695) compared to no prioritization. The “tipping point” at which the burden from inequality started to exceed the benefits from efficiency occurred at an Atkinson parameter of ~3, corresponding to moderate inequality aversion.
Conclusions: Prioritization based on prior disease burden may be a useful adjunct to individual risk-based prioritization for societies in which inequality aversion is moderate or greater.
Keywords: Health equity, COVID-19, vaccination, mathematical modeling, New York City
Regional heterogeneity and biased intervention effect estimates when modeling large epidemics
OP-019 Quantitative Methods and Theoretical Developments (QMTD)
Valeria Gracia1, Jeremy D. Goldhaber Fiebert2, Fernando Alarid Escudero1
1Center for Research and Teaching in Economics, Aguascalientes, Mexico
2Department of Health Policy, Stanford University, California, US
Purpose: Epidemics in large populations and policies to control them are often analyzed using single dynamic transmission models. However, in the presence of regional heterogeneity, a single model may not accurately reflect the region-specific dynamics nor the heterogeneity of effectiveness of non-pharmaceutical interventions (NPI) – thus producing biased predictions. We show the extent of such biases using the timely example of COVID-19 in Mexico.
Methods: Using our published modeling framework, we instantiated and calibrated a national-level dynamic epidemic model of COVID-19 in Mexico that includes both community (Beta) and household (Tau) transmission. Likewise, we instantiated and calibrated state-level models for each of Mexico’s 32 states. Bayesian calibration was used to match daily incident cases from 02/20/20 to 02/17/21. We drew 1,000 samples from the joint posterior distribution using the Incremental Mixture Importance Sampling (IMIS) algorithm. We aggregated the outputs of the calibrated state-level models to produce national-level estimates that we could then compare to the single national-level model’s outputs. We evaluated the model-predicted effects of a relaxation of NPI stricture (30% less strict than the calibrated model’s estimate in mid-February 2021) between 02/18/21 and 08/01/21. We computed the absolute bias in the predicted intervention effect as the effect predicted by the single national model minus the effect predicted by the aggregation of the state-level models. We computed the relative bias as the absolute bias divided by the effect predicted with the aggregated state-level models.
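The bias definitions translate directly into arithmetic. A small sketch (the inputs are illustrative, chosen to mirror the -51% relative bias reported in the Results):

```python
def intervention_bias(national_effect, aggregated_state_effect):
    """Absolute and relative bias of the single national model's predicted
    intervention effect against the aggregated state-level prediction."""
    absolute = national_effect - aggregated_state_effect
    relative = absolute / aggregated_state_effect
    return absolute, relative

# Illustrative: national model predicts 49,000 extra cumulative cases under
# NPI relaxation; the aggregated state-level models predict 100,000
absolute, relative = intervention_bias(49_000.0, 100_000.0)  # relative = -0.51
```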
Results: The state-specific calibrated models showed substantial heterogeneity in both community and household transmission parameters, wider than the ranges of the corresponding parameters in the national-level model (Fig. 1-A). Because the aggregation of state-level models predicted a larger increase in incident cases and deaths with NPI relaxation than the single national-level model, we find that the national-level model’s predictions are downwardly biased (Fig. 1-B). Furthermore, over this period, the national model underestimates cumulative cases and deaths, with relative biases of -51% (95% credible interval (CrI): -76% to -5%) and -54% (95% CrI: -79% to -9%), respectively (Fig. 1-C).
Figure 1.
(A) State-specific posterior mean and 95% credible interval for the transmission probability per effective contact per day in the community and the household. Solid and dashed lines depict the national-level posterior mean and 95% credible interval, respectively. (B) Absolute bias on intervention effectiveness on daily incident cases (top) and deaths (bottom). Shaded area shows the 95% posterior model-predictive interval of the outcomes, and the colored lines show the posterior model-predicted mean based on 1,000 simulations using samples from the posterior distribution. (C) Relative bias on intervention effectiveness on cumulative cases and deaths from 02/18/21 to 08/01/21.
Conclusions: Modelling a large-scale epidemic using a single model can bias the estimates of intervention effectiveness. Including regional heterogeneity in large epidemic models may improve the accuracy of predictions. However, this gain in predictive power often implies a higher demand for both data and computational resources.
Keywords: heterogeneity, bias, intervention effects, infectious disease modeling, COVID-19, Mexico
Quantifying the Health Impacts of COVID-19 Lockdown Policies in Los Angeles Using Traffic Data
OP-020 Health Services, Outcomes and Policy Research (HSOP)
Suyanpeng Zhang, Sze Chuan Suen, Han Yu, Anthony Nguyen, Maged Dessouky
Daniel J. Epstein Department of Industrial and Systems Engineering, Viterbi School of Engineering, University of Southern California
Purpose: The COVID-19 lockdowns stifled economic activity and plunged millions into unemployment, but they may have substantially reduced cases and deaths through limiting transmission. Road sensor data that captures traffic volumes over time can provide unique insight into both proximal (traffic reduction) and distal (epidemic outcomes) effects of lockdowns on the COVID-19 epidemic in large urban areas such as Los Angeles County (LAC). We analyze traffic flow data to identify how traffic patterns changed due to policy mandates and incorporate these results into a compartmental model of COVID-19 to estimate the epidemic impact of such policies.
Methods: We processed and analyzed USC archived distribution management systems (ADMS) road sensor data on highways in LAC from 2020 and 2019 (the latter to provide a non-pandemic baseline). The dataset included 30-second traffic volume readings every minute over the analysis duration across all available sensors, which were aggregated into 145 traffic regions for analysis. We compared traffic over the March 19th - May 17th period between years and used t-tests to determine the statistical significance of volume differences.
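The abstract does not specify the t-test variant; a sketch using Welch's unequal-variance form, with toy readings whose means mirror the reported 7.6 (2019) vs. 4.38 (2020) but whose individual values are invented:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's unequal-variance two-sample t statistic and its
    Welch-Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances
    se2 = va / na + vb / nb
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Toy per-region readings (cars per 30-second interval); values invented
baseline_2019 = [7.5, 7.7, 7.4, 7.8, 7.6]
lockdown_2020 = [4.4, 4.3, 4.5, 4.2, 4.5]
t, df = welch_t(baseline_2019, lockdown_2020)
```

The t statistic would then be compared against the t distribution with `df` degrees of freedom to obtain a p-value.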
To assess the impact of traffic changes on epidemic outcomes, we use a dynamic compartmental model of COVID-19, stratified over the 26 health districts (HDs) in LAC. Using an origin-destination optimization algorithm on our road sensor data, we inferred traffic travel between HDs over time and calibrated model outcomes against historic, HD-level deaths, vaccinations (first dose), hospitalizations, cumulative cases, and daily cases. To identify the lockdown effect, we compared counterfactual model outcomes for a universe where the lockdown did not occur to the status quo, using non-lockdown and lockdown traffic-based transmission matrices.
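The counterfactual comparison described above can be illustrated with a toy two-district SIR model in which lockdown scales a contact matrix by the observed 2020/2019 traffic-volume ratio. This is only a sketch: the populations, contact rates, and the assumption that transmission scales linearly with traffic volume are illustrative placeholders, not the study's calibrated 26-district model.

```python
def run_sir(beta, days=150):
    """Cumulative infections in a toy two-district SIR model with cross-district mixing."""
    N = [1_000_000.0, 1_000_000.0]
    S = [N[0] - 100.0, N[1] - 100.0]
    I = [100.0, 100.0]
    gamma = 0.1  # recovery rate per day
    cumulative = sum(I)
    for _ in range(days):
        # force of infection in district i sums contributions from both districts
        new = [S[i] * sum(beta[i][j] * I[j] / N[j] for j in range(2)) for i in range(2)]
        for i in range(2):
            S[i] -= new[i]
            I[i] += new[i] - gamma * I[i]
        cumulative += sum(new)
    return cumulative

# hypothetical traffic-scaled contact rates between districts (per day)
baseline = [[0.25, 0.05], [0.05, 0.25]]
# lockdown scenario: scale all transmission by the observed 2020/2019 volume ratio (an assumption)
lockdown = [[b * 4.38 / 7.6 for b in row] for row in baseline]
averted = run_sir(baseline) - run_sir(lockdown)
```

Comparing the two runs gives the lockdown's averted-case count in the same spirit as the study's status quo versus no-lockdown comparison.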
Results: Our model matched all calibration targets well. In our traffic analysis, we found statistically significant reductions in LAC traffic over the March lockdown period (7.6 cars per 30-second interval per day in 2019 versus 4.38 in 2020, a difference of 3.22; t-test p-value < 0.001). Without the March lockdown, by the end of year 2020, we found that we would expect 196,855 more cumulative identified cases, 4,556 more deaths, and 1,546,809 more cumulative unidentified cases. This suggests the necessity and effectiveness of the lockdown.
Conclusions: While the LAC COVID-19 lockdowns were economically burdensome, they likely reduced disease burden, saving thousands of lives and averting over a million cases.
Keywords: COVID-19, Traffic Analysis, Policy Impacts
How to Prioritize a COVID-19 Vaccine Adapted to a New Variant of Concern – a Decision Analysis
OP-021 Health Services, Outcomes and Policy Research (HSOP)
Beate Jahn1, Gaby Sroczynski1, Martin Bicher3, Claire Rippinger2, Nikolai Mühlberger1, Júlia Santamaria1, Christoph Urach2, Nikolas Popper4, Uwe Siebert5
1UMIT - University for Health Sciences, Medical Informatics and Technology, Department of Public Health, Health Services Research and Health Technology Assessment, Institute of Public Health, Medical Decision Making and Health Technology Assessment, Hall i.T., Austria
2dwh GmbH, dwh simulation services, Vienna, Austria
3dwh GmbH, dwh simulation services, Vienna, Austria, TU Wien, Institute for Information Systems Engineering, Vienna, Austria
4dwh GmbH, dwh simulation services, Vienna, Austria, TU Wien, Institute for Information Systems Engineering, Vienna, Austria, DEXHELPP, Association for Decision Support Health Policy and Planning, Vienna, Austria
5UMIT - University for Health Sciences, Medical Informatics and Technology, Institute of Public Health, Medical Decision Making and HTA/ONCOTYROL - Center for Personalized Medicine, Austria; Harvard T.H.Chan School of Public Health, Dept. Health Policy & Management, Boston, MA, USA
Purpose: Since the SARS-CoV-2 outbreak, new variants of the virus have developed. Sufficient vaccination coverage remains a worldwide challenge, and variants of concern may require adaptations of current vaccines. Using a flexible concurrent disease model, we aim to identify optimal vaccination strategies for a COVID-19 vaccine adapted to a hypothetical variant of concern (VoC), focusing on age-specific prioritization during a period with still-unvaccinated age groups and initially limited vaccination doses.
Methods: A dynamic agent-based population model for Austria was extended to capture the impact of different viruses or variants in the disease module. A hypothetical, new VoC affects the current pandemic. The parameters for the variant’s infectivity, virulence, susceptibility to the current vaccine, and initial vaccination coverage when the VoC is detected were varied in 81 scenarios. The evaluated vaccination strategies were: 1) revaccination of the elderly with the VoC-vaccine only, 2) vaccination of the unvaccinated with the VoC-vaccine only, 3) providing the VoC-vaccine and the current vaccine to the unvaccinated, and 4) providing the VoC-vaccine to the elderly and the current vaccine to the unvaccinated, each compared to 5) continuing with the current vaccine only. The objective was to minimize COVID-19-related hospitalizations and deaths. A time horizon of ten months was considered.
Results: Prioritization of vaccination depends on target outcomes, combinations of VoC characteristics, and initial vaccine coverage. For example, at a 75% reduced effectiveness of the current vaccine against the VoC, hospitalizations are minimized by selecting strategy 1 followed by 3 (1 followed by 2) at an increased VoC-infectivity of 0% or 33% (66%), independent of vaccination coverage. To minimize deaths, strategy 2 is preferred followed by strategy 4, independent of increased VoC-infectivity, when initially only individuals age 80+ are vaccinated. For an initial vaccination coverage of individuals age 65+, strategy 1 is preferred followed by 2 (at increased VoC-infectivity of 33% or 66%). The decision depends less on additional information on VoC severity.
Conclusions: Our study provides a flexible basis for vaccination decisions in a partially vaccinated population facing an emerging VoC, considering availability of the VoC-adapted vaccine alone or in addition to the current vaccines. There was no generally preferred strategy for prioritization of the current and VoC-adapted vaccines. The decision depends strongly on the combination of the current vaccine’s effectiveness against the VoC, vaccination coverage, and VoC infectivity, and less on additional information on VoC severity.
Keywords: SARS-CoV-2, vaccination, decision analysis, modelling, agent-based model, variant of concern
COVID-19 Mortality Reduction from Test-to-Treat: A model-based analysis
OP-022 Health Services, Outcomes and Policy Research (HSOP)
Soryan Kumar1, Mihir Khunte1, Joshua Salomon2, Alyssa Bilinski3
1Warren Alpert Medical School, Brown University, Providence, RI, USA
2Center for Health Policy and Center for Primary Care and Outcomes Research, Stanford University School of Medicine, Stanford, CA, USA
3Department of Health Services, Policy, and Practice & Department of Biostatistics, Brown School of Public Health, Providence, RI, USA
Purpose: Following FDA approval of Paxlovid (Nirmatrelvir/Ritonavir), the Biden administration announced the Test-to-Treat initiative to diagnose and provide Paxlovid to high-risk individuals with COVID-19. However, several barriers exist to reducing COVID-19 deaths with antiviral therapy, including testing and initiating medication within 5d of symptom onset. The purpose of this study is to (1) assess mortality reduction from Paxlovid-based Test-to-Treat and (2) quantify the number of COVID-19 tests and Paxlovid courses that would have been required to implement Test-to-Treat during the winter Omicron wave, and associated mortality reductions.
Methods: We modeled COVID-19 mortality reduction associated with Test-to-Treat as the product of (1) the proportion of symptomatic patients tested within 5d, (2) probability of receiving Paxlovid given eligibility, and (3) Paxlovid effectiveness in reducing mortality risk. We parameterized (1) and (2) based on national data and (3) from clinical studies. To quantify the number of required symptomatic tests during the winter Omicron wave, we multiplied (1) the number of individuals with COVID-like illness (CLI), (2) percentage of Paxlovid-eligible infections, and (3) average number of tests administered per CLI episode. For the number of required Paxlovid courses during the winter Omicron wave, we multiplied (1) the number of infections, (2) percentage of Paxlovid-eligible infections, and (3) proportion of symptomatic patients tested within 5d. To compute mortality reductions, we assumed all deaths occur among Paxlovid-eligible individuals.
Results: We estimate that 78% of cases are currently detected within 5d, and that current coverage of Paxlovid is 16% of eligible individuals. Given Paxlovid’s 81% effectiveness against mortality risks, we estimate that Test-to-Treat at current implementation levels reduces overall COVID-19 mortality by 10%. At 50% uptake, the mortality benefit from Test-to-Treat would be a reduction of 32% (Figure 1). During the winter Omicron wave, the required number of symptomatic tests and Paxlovid courses needed to produce the baseline 10% reduction in mortality (averting 15K/146K deaths) would have been 73.9 million and 8.5 million, respectively.
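The mortality-reduction estimate above is the product of the three factors defined in the Methods. A minimal sketch, using the reported parameter values (the function name is illustrative, not from the study):

```python
def mortality_reduction(p_tested_5d, p_uptake, effectiveness):
    """Overall COVID-19 mortality reduction from Test-to-Treat, assuming all
    deaths occur among Paxlovid-eligible individuals (as in the analysis)."""
    return p_tested_5d * p_uptake * effectiveness

baseline = mortality_reduction(0.78, 0.16, 0.81)  # current implementation levels
scaled = mortality_reduction(0.78, 0.50, 0.81)    # 50% Paxlovid uptake scenario
```

Evaluating these reproduces the reported ~10% baseline reduction and ~32% reduction at 50% uptake.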
This figure details the percentage reduction in COVID-19 mortality given the proportion of individuals tested within 5 days of symptoms. Percentage of Paxlovid uptake among eligible patients shifts COVID-19 mortality reduction as strategies to increase Paxlovid uptake further reduce COVID-19 mortality.
Conclusions: Improving access to COVID-19 testing and Paxlovid for patients who are at high-risk of severe disease can significantly reduce mortality, although absolute mortality may remain high when the case burden is significant. Our estimates for the number of required tests and Paxlovid during Omicron may provide a minimum benchmark for preparedness in a substantial wave.
Keywords: COVID-19, Paxlovid, Omicron, Test-to-Treat initiative, Modeling
COVID-19 Mortality Reduction
Bias-adjusted predictions of county-level vaccination coverage from the US COVID-19 Trends and Impact Survey
OP-023 Health Services, Outcomes and Policy Research (HSOP)
Marissa B Reitsma1, Sherri Rose1, Alex Reinhart2, Jeremy D Goldhaber Fiebert1, Joshua A Salomon1
1Department of Health Policy, Stanford University
2Department of Statistics and Data Science, Carnegie Mellon University
Purpose: The potential for bias in non-representative, large-scale, low-cost survey data can limit their utility for population health measurement and public health decision-making. We developed an approach to bias-adjust county-level vaccination coverage predictions from the large-scale US COVID-19 Trends and Impact Survey.
Methods: We developed a multi-step regression framework to bias-adjust predicted county-level vaccination coverage plateaus that included post-stratification and secondary normalization. We prospectively applied this framework to vaccination among children ages 5 to 11 years, who became eligible for COVID-19 vaccination on November 3, 2021. First, we estimated county-level parental hesitancy toward vaccinating their children using a mixed effects logistic regression fit to survey data (n=732,925), with post-stratification to American Community Survey data reflecting household structure. Next, we used a second logistic regression to estimate the relationship between county-level hesitancy and CDC-reported vaccination coverage nine months after eligibility for youth ages 12 to 17 years. Youth ages 12 to 17 years were eligible for COVID-19 vaccination earlier than children ages 5 to 11 years, and therefore act as a reference group for normalization. Finally, we combined the results from the two regression models to predict county-level vaccination coverage for children ages 5 to 11 years nine months after eligibility. We validated our approach against an interim observed measure of three-month coverage for children ages 5 to 11 years, and used long-term coverage estimates to monitor equity in the pace of scale-up.
Results: Our vaccination coverage predictions suggest a low ceiling on long-term national coverage (46%), detect substantial geographic heterogeneity (ranging from 11% to 91% across counties in the US) (Figure 1a), and highlight widespread disparities in the pace of scale-up in the first three months of COVID-19 vaccination for 5- to 11-year-olds (Figure 1b). Compared to previously published estimates on parental hesitancy, our hesitancy estimates showed stronger correlation with vaccination coverage (-0.78 vs. -0.44), and the intraclass correlation coefficient for consistency of predicted versus observed three-month vaccination coverage across all counties was 0.81.
Figure 1a and 1b.
Conclusions: The utility of large-scale, low-cost survey data for improving population health measurement is amplified when these data are combined with other representative sources. Our analysis demonstrates an approach to leverage differing strengths of multiple sources of information to produce estimates on the time-scale and geographic-scale necessary for proactive decision-making.
Keywords: COVID-19, vaccination, health equity
Exploring an Equitable Price Point for Gene Therapy for Patients with Severe Hemophilia B in Low-and-Middle Income Countries
OP-024 Applied Health Economics (AHE)
Nancy S. Bolous1, Yichen Chen1, Huiqi Wang1, Meenakshi Devidas1, Bishesh Sharma Poudyal2, Nancy Loayza3, Jeannie Ong4, Flerida G. Hernandez5, Visaka Ratnamalala6, Ampaiwan Chuansumrit7, Nongnuch Sirachainan7, Bach Quoc Khanh8, Nguyen Thi Mai8, Nguyen Trieu Van8, Phu Huynh9, Quang Nguyen9, Ulrike M. Reiss10, Nickhill Bhakta1
1Department of Global Pediatric Medicine, St. Jude Children’s Research Hospital, Memphis, TN, USA
2Department of Clinical Hematology and Bone Marrow Transplant, Civil Service Hospital of Nepal, Kathmandu, Nepal
3Servicio de Hematologia, Hospital Dos de Mayo, Lima, Peru
4Brokenshire Hospital/MetroDavao Medical & Research Center, Davao City, Philippines
5Department of Pediatrics, Manila, Philippines
6National Hospital of Sri Lanka, Colombo, Sri Lanka
7Department of Pediatrics, Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
8National Institute of Hematology and Blood Transfusion, Hanoi, Vietnam
9Department of Clinical Research and Training, Blood Transfusion Hematology, Ho Chi Minh City, Vietnam
10Department of Hematology, St. Jude Children’s Research Hospital, Memphis, TN, USA
Purpose: To explore an equitable price point for adeno-associated vector (AAV)-gene therapy infusion for patients with severe hemophilia B in six low-and-middle income countries (LMICs) in Southeast Asia and South America.
Methods: We developed a microsimulation Markov model to simulate patients with severe hemophilia B over a birth-to-death lifetime horizon, using the healthcare perspective of six LMICs. Risk of inhibitor development and AAV+ seroprevalence were included in the model to examine the impact of novel therapies at a population, rather than individual, level. To account for heterogeneity in clinical approach and prices, real-world data from eight hemophilia treatment centers in six different LMICs were collected. Weight and mortality were adjusted based on each country’s mean published values. Based on clinical trial data, gene therapy effectiveness was assumed to last for 31 years. Simulants experienced no bleeds during the first 9 years, then experienced incremental bleeding episodes in years 9-31. Thirty-one years after treatment infusion, they were switched back to prophylaxis.
Results: Simulation results are presented in Table 1. Five treatment approaches were identified and are ordered in Table 1 from the least to the most intense/expensive. Each of the eight treatment centers had access to one or two treatment approaches based on resource availability. In every tested scenario, gene therapy was more effective than the alternative treatment approach due to fewer bleeding episodes and fewer complications over the period of effectiveness. The price at which gene therapy was dominant (less expensive and more effective) varied widely by location, ranging from $50,000 to $2,100,000. This variation could be attributed to multiple factors, including: 1) type of the alternative treatment approach, 2) intensity of the treatment protocol, and 3) price per unit of factor concentrate or fresh frozen plasma.
Table 1.
Conclusions: Gene therapy is not yet approved in any country for routine clinical care. Thus, no product is officially priced. Manufacturers, however, have publicly targeted a commercial price of $2,000,000-$3,000,000. Based on our analysis, gene therapy is likely not cost-effective in LMICs at this price-point. From an equity perspective, manufacturers should not bias decisions based solely on high-income country perspectives but should consider innovative pricing strategies and payment models that are tailored to value-based principles that account for contextual differences observed in LMICs.
Keywords: Cost-effectiveness, gene therapy, equity, hemophilia, pricing
Comparing the costs associated with anxiolytic agents for reducing distress in children undergoing laceration repair in the emergency department
OP-025 Applied Health Economics (AHE)
Nam Anh Tran1, Anna Heath1, Petros Pechlivanoglou1, Naveen Poonai2, Doug Coyle4, Samina Ali3, Annisa Siu1
1The Hospital for Sick Children, 686 Bay Street, Toronto, ON M5G 0A4, Canada.
2Children's Health Research Institute, 800 Commissioners Road East, Rm E1-106, London, ON N6A 5W9, Canada.
3Faculty of Medicine & Dentistry, University of Alberta, Edmonton, Alberta, Canada.
4School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, 600 Peter Morand Crescent, Ottawa, ON K1G 5Z3, Canada.
Purpose: Children often experience distress during laceration repair in the emergency department (ED), causing emotional trauma and reducing cooperation with the repair. Anxiolytics decrease distress to help facilitate the procedure without the need for physical restraint. This study aimed to compare the effect of different anxiolytics on ED length of stay (LOS), adverse events (AEs), and medication costs.
Methods: We developed an individual-level state-transition model to determine the LOS (in minutes), AEs, and healthcare costs (in CAD) associated with intranasal midazolam (INM), inhaled nitrous oxide (N2O), and intranasal dexmedetomidine (IND). The model simulated the patient pathway through sedation, AEs, and treatment dosing. Transitions between states (preparation, procedure, post-procedure, discharge) are conditional on whether the patient has been adequately sedated or experiences AEs. Healthcare costs are estimated based on the patient’s LOS, the cost of treatments required for any AE, and medication costs. Model inputs included the probability of achieving adequate sedation, the need for additional medication, LOS, and medication costs. All model inputs were informed by studies investigating anxiolytics in the pediatric ED, ideally for laceration repair. Cost inputs were also informed by the literature. Where multiple studies were available for a single input parameter, we synthesized the evidence using meta-analysis. Expected values and standard deviations (SDs) for costs and LOS were computed. We calculated the probability that each treatment minimized the cost and the expected value of (partial) perfect information (EVPI/EVPPI).
Results: The expected LOS for preparation, procedure and post-procedure for the three drugs [INM, N2O, IND] are [36.05, 25.24, 24.02], [13.17, 26.17, 13.28], and [68.8, 49.06, 93.64], respectively, resulting in a total expected LOS of [118.02, 100.47, 130.95]. The expected costs (SD) of INM, N2O and IND are $299.56(84.84), $151.53(105.51) and $500.79(154.40), respectively. The probability of minimizing the cost for INM, N2O, IND is 0.1, 0.89 and 0.01 respectively. Figure 1 displays the cost distributions. The EVPI is $12.38 per person, and the EVPPI is largest for the LOS for N2O at $9.53 per person.
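The EVPI reported above is the gap between the expected cost of the drug that is best on average and the expected cost achievable with perfect information. An illustrative Monte Carlo sketch using the reported cost means and SDs — assuming normal cost distributions purely for illustration, which is not the study's actual probabilistic model:

```python
import random

random.seed(1)

# reported expected total costs (CAD) and SDs for each anxiolytic
means = {"INM": 299.56, "N2O": 151.53, "IND": 500.79}
sds = {"INM": 84.84, "N2O": 105.51, "IND": 154.40}

n_draws = 20_000
samples = {d: [random.gauss(means[d], sds[d]) for _ in range(n_draws)] for d in means}

# expected cost per drug, and the cost of always picking the best-on-average drug
expected = {d: sum(samples[d]) / n_draws for d in means}
cost_current_info = min(expected.values())
# with perfect information we could pick the cheapest drug in each draw
cost_perfect_info = sum(min(samples[d][i] for d in means) for i in range(n_draws)) / n_draws
evpi = cost_current_info - cost_perfect_info  # always >= 0
```

The EVPI is nonnegative by construction, since the per-draw minimum can never exceed the cost of any fixed choice.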
Conclusions: N2O has the lowest cost and the shortest expected LOS in the ED, followed by INM and then IND. The decision uncertainty is driven by uncertainty in LOS in the ED for patients receiving N2O.
Keywords: Light Sedation, Cost Analysis, Microsimulation, Children
Cost distribution
Cost-Effectiveness of Quadruple Therapy in Management of Heart Failure with Reduced Ejection Fraction in the U.S
OP-026 Applied Health Economics (AHE)
Brandon W Yan1, Aferdita Spahillari2, Ankur Pandya3
1Department of Health Policy and Management, Harvard T.H. Chan School of Public Health in Boston, MA, USA and the University of California San Francisco School of Medicine in San Francisco, CA, USA
2Department of Medicine Division of Cardiology, Massachusetts General Hospital and Harvard Medical School in Boston, MA, USA
3Department of Health Policy and Management and the Center for Health Decision Science, Harvard T.H. Chan School of Public Health in Boston, MA
Purpose: The 2022 American College of Cardiology/American Heart Association (ACC-AHA) guidelines for management of heart failure with reduced ejection fraction (HFrEF) call for quadruple therapy as first-line pharmacotherapy for this high-mortality condition affecting 3 million Americans. Quadruple therapy consists of an angiotensin receptor/neprilysin inhibitor (ARNI), sodium-glucose cotransporter-2 inhibitor (SGLT2i), beta-blocker (BB), and mineralocorticoid receptor antagonist (MRA). We investigated the currently unknown cost-effectiveness of quadruple therapy in management of HFrEF as compared to triple therapy with ARNI/BB/MRA (+ARNI strategy), SGLT2i plus enalapril/BB/MRA (+SGLT strategy), and previous guideline-directed medical therapy (GDMT) with enalapril/BB/MRA (Old GDMT strategy).
Methods: Using a three-stage Markov model for alive, alive after HFrEF hospitalization, and death, we projected the expected lifetime discounted costs and quality-adjusted life years (QALYs) of a simulated cohort of patients modeled after participants in the PARADIGM-HF clinical trial. Mortality and hospitalization risk reductions from the ARNI and SGLT2i were derived from the PARADIGM-HF, DAPA-HF, and EMPEROR-Reduced trials. We calculated incremental cost-effectiveness ratios (ICERs) to compare quadruple therapy vs. +ARNI vs. +SGLT vs. Old GDMT strategies and assessed healthcare value using ACC-AHA criteria, in which <$50,000/QALY indicates “high-value”, $50,000-150,000/QALY indicates “intermediate-value”, and >$150,000/QALY indicates “low-value”. We also considered cost-effectiveness using the standard $100,000/QALY threshold.
Results: Compared to Old GDMT, the +ARNI strategy had an ICER of $64,392/QALY, representing intermediate value using ACC-AHA thresholds. The +SGLT strategy resulted in higher costs and lower QALYs compared to the +ARNI strategy. Quadruple therapy offered 0.45 additional discounted QALYs over +ARNI at a lifetime discounted cost of $52,188 resulting in an ICER of $117,031/QALY. Quadruple therapy was deemed to be of intermediate value over +ARNI but not cost-effective at a $100,000/QALY cost-effectiveness threshold. Cost-effectiveness results were most sensitive to plausible variations in ARNI and SGLT2i mortality benefit and drug prices. Quadruple therapy became cost-effective compared to +ARNI when the SGLT2i cost fell below $460 per month with ARNI costs below $600/month (Figure).
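The ICER calculation and the ACC-AHA value classification used above can be sketched as follows; the exact handling of the $50,000 and $150,000 boundary points is an assumption, since the abstract states the bands as open intervals:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio, in $/QALY."""
    return delta_cost / delta_qaly

def acc_aha_value(icer_value):
    """ACC-AHA value category for an ICER ($/QALY); boundary handling assumed."""
    if icer_value < 50_000:
        return "high-value"
    if icer_value <= 150_000:
        return "intermediate-value"
    return "low-value"
```

Under this classification, both the +ARNI strategy ($64,392/QALY) and quadruple therapy ($117,031/QALY) land in the intermediate-value band, matching the results above.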
Figure.
Two-way sensitivity analysis showing the optimal treatment strategy with varying monthly costs of the ARNI and SGLT2 inhibitor.
Note: The colors reflect the combinations where the indicated strategy is optimal at a cost-effectiveness threshold of $100,000/QALY. The +ARNI strategy was optimal in base-case analysis with an ARNI cost of $463 per month and a SGLT2i cost of $548 per month.
Conclusions: The addition of the SGLT2i to triple therapy with an ARNI (i.e., quadruple therapy) offers intermediate value in the management of HFrEF but is not cost-effective at current list prices at the $100,000/QALY threshold. The demonstrated clinical benefits of SGLT2 inhibitors in treating this high-mortality condition should be weighed against their high price in payer and policy decisions.
Keywords: heart failure, cost-effectiveness, quadruple therapy, angiotensin receptor neprilysin inhibitor (ARNI), sodium-glucose cotransporter-2 (SGLT2) inhibitors, drug pricing
No ε, no problem: using an efficient frontier and threshold inequality aversion parameters to interpret DCEA results
OP-027 Applied Health Economics (AHE)
Ankur Pandya1, Andrea Luviano2, Vivien Cheng1, Lyndon James2, George Goshua3
1Department of Health Policy and Management, Harvard T.H. Chan School of Public Health, Boston, USA
2Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, USA
3Yale Medical School, New Haven, USA
Purpose: Distributional cost-effectiveness analysis (DCEA) requires an inequality aversion parameter (ε) to calculate the equally distributed equivalent (EDE). The DCEA decision rule is to choose the strategy with the highest EDE. However, ε is unknown for most health disparities and settings, thus hindering the use of DCEA. We propose an efficient frontier method that results in interpretable threshold ε values, and apply this approach to a gene therapy cure for sickle cell disease (SCD).
Methods: DCEA weighs trade-offs between health gains and reductions in health disparities using an equity weight (ε, where 0 implies no extra weight on interventions that reduce disparities) to calculate an EDE for the distribution of outcomes across subgroups. When the exact value of ε is unknown, we propose plotting mutually exclusive strategies on an efficient frontier to identify dominance (worse on disparities and net health benefit [NHB]) and trade-offs (Panel A). Threshold ε values can be back-calculated for adjacent options on the efficient frontier, and these threshold ε parameters can be interpreted in the context of commonly-used values (0.5≤ε≤3.0 in the US, Glassman 2019, Atkinson 1970) or estimates from empirical studies (ε=11.0-28.9 for the UK, Robson 2017). We adapted a cost-effectiveness model developed by Salcedo et al. in 2021 to apply this approach (using a willingness-to-pay [WTP] of $100,000/quality-adjusted life-year [QALY]) for a hypothetical gene therapy cure for SCD in the US. The therapy addresses an important health disparity but might cost ≥$3,000,000/treated patient.
Results: Standard of care for SCD resulted in 38.9 lifetime discounted QALYs, compared to 64.3 QALYs and an incremental cost of $3,270,000 with gene therapy, resulting in an incremental cost-effectiveness ratio (ICER) of $130,000/QALY. The threshold ε value was 1.6, implying that a preference for reducing this disparity would need to be similar to or stronger than previous estimates used in the US (0.5≤ε≤3.0), though well below empirical estimates from the UK (ε=11.0-28.9), for gene therapy to be favored per DCEA standards; this threshold was sensitive to gene therapy price (Panel B).
Conclusions: Threshold ε values can be back-calculated in DCEAs and interpreted using conventions (0.5≤ε≤3.0) or empirical estimates (ε=11.0-28.9). The threshold ε is analogous to an ICER, which can be calculated without a specific WTP value; the EDE is analogous to NHB, which requires an additional parameter value.
Keywords: Distributional cost-effectiveness analysis, cost-effectiveness analysis, net health benefit, equity, disparities, efficient frontier
Figure.
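The back-calculation of a threshold ε can be sketched with the standard Atkinson EDE and a bisection search for the ε at which two strategies' EDEs cross. The subgroup QALY distributions below are illustrative placeholders, not the values from the SCD model:

```python
import math

def ede(health, eps):
    """Atkinson equally distributed equivalent (EDE) of a health distribution."""
    n = len(health)
    if eps == 0:
        return sum(health) / n  # no inequality aversion: plain mean
    if eps == 1:
        return math.exp(sum(math.log(h) for h in health) / n)  # limiting case
    return (sum(h ** (1 - eps) for h in health) / n) ** (1 / (1 - eps))

def threshold_eps(a, b, lo=0.0, hi=10.0, tol=1e-6):
    """Bisect for the eps where strategies a and b have equal EDE
    (assumes exactly one sign change of the EDE difference in [lo, hi])."""
    f = lambda e: ede(a, e) - ede(b, e)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# hypothetical QALY distributions across two subgroups (not the study's values):
status_quo = [40.0, 60.0]  # higher mean, larger disparity
gene_tx = [45.0, 52.0]     # lower mean, smaller disparity
eps_star = threshold_eps(status_quo, gene_tx)
```

Below `eps_star` the higher-mean strategy has the higher EDE; above it, the disparity-reducing strategy is preferred, which is exactly the trade-off the efficient-frontier approach makes explicit.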
Does it matter who incurs the cost in DCEA? An illustrative study for the US setting
OP-028 Health Services, Outcomes and Policy Research (HSOP)
Lyndon P James1, Ankur Pandya2
1PhD Program in Health Policy, Harvard University, Cambridge, USA
2Department of Health Policy and Management, Harvard T. H. Chan School of Public Health, Boston, USA
Purpose: In practice, applied distributional cost-effectiveness analyses (DCEAs) have typically focused on the distribution of health effects in a population, while adopting the simplifying assumption of proportional incidence of cost across society. We explore the implications of relaxing this assumption in the US setting.
Methods: We analyze two independent decisions against the status quo: 1) whether Medicaid should cover the recently FDA-approved tirzepatide for control of type 2 diabetes, and 2) whether a private insurer should reimburse more extensive prenatal care. We consider hypothetical but plausible distributions of health effects (in QALYs) and costs (in US dollars) across high- and low-income groups, and use simple decision trees to identify the strategy with the highest equally distributed equivalent net health benefit (EDE NHB), using a willingness-to-pay of $100,000/QALY and an Atkinson index inequality aversion parameter (ε) that varies across three analyses. In analysis A, we choose the cost-effective strategy without concern for equity (i.e., standard CEA, ε = 0). In analysis B, we take into account only the distribution of health effects (i.e., similar to DCEA in practice, ε = 2.5 for the US population). In analysis C, we incorporate the distribution of both health effects and costs (ε = 2.5). As one example of a cost input distribution, drugs covered by Medicaid impose disproportionate costs on the higher-income group (through progressive taxation). The distribution of costs is considered for each component under the societal perspective according to the US Second Panel on Cost-Effectiveness in Health and Medicine.
Results: Medicaid coverage of tirzepatide has the highest EDE NHB in analysis C, but is not preferred to the status quo under analyses A and B. Expansion of prenatal care coverage has the highest EDE NHB under analyses B and C, but is not preferred to the status quo under analysis A. (Table)
Conclusions: Explicitly parameterizing the distribution of costs in DCEA may affect the optimal policy when making trade-offs between equity and efficiency. Including differential cost incidence can either raise or lower the incremental EDE NHB of an intervention relative to its comparator (and thus either favor those better or less well off), depending on the incidence of each cost component.
Keywords: health equity, distributional cost-effectiveness analysis, cost incidence
Table 1.
The most relevant factors for each decision are presented, with resulting distributions of health under each analysis scenario (A, B, and C).
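The effect of cost incidence on the EDE NHB can be sketched numerically. All subgroup values below are hypothetical placeholders (only the WTP of $100,000/QALY and ε = 2.5 come from the abstract); the example contrasts analysis B-style proportional financing with analysis C-style progressive financing:

```python
WTP = 100_000.0  # $/QALY, as in the analysis
EPS = 2.5        # inequality aversion parameter used in analyses B and C

def ede(nhb, eps):
    """Atkinson equally distributed equivalent of per-capita net health benefit."""
    n = len(nhb)
    if eps == 0:
        return sum(nhb) / n
    return (sum(h ** (1 - eps) for h in nhb) / n) ** (1 / (1 - eps))

# hypothetical per-capita lifetime health (QALYs) for [low-income, high-income] groups
baseline = [30.0, 40.0]
gain = [0.5, 0.0]    # the intervention benefits the low-income group
cost_pc = 20_000.0   # program cost per capita, in dollars

# analysis B-style assumption: costs fall equally per capita on both groups
nhb_proportional = [baseline[i] + gain[i] - cost_pc / WTP for i in range(2)]
# analysis C-style assumption: the high-income group bears all costs (progressive taxation)
nhb_progressive = [baseline[0] + gain[0], baseline[1] + gain[1] - 2 * cost_pc / WTP]
```

Both financing schemes yield the same mean NHB, but with ε > 0 the progressive scheme scores a higher EDE NHB because it shifts the cost burden away from the worse-off group — the mechanism by which parameterizing cost incidence can change the optimal policy.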
Eliciting Trade-Offs Between Equity and Efficiency: A Methodological Scoping Review
OP-029 Patient and Stakeholder Preferences and Engagement (PSPE)
Christopher J. Cadham1, Lisa A. Prosser2
1Department of Health Management and Policy, School of Public Health, University of Michigan, Ann Arbor, Michigan, USA
2Department of Health Management and Policy, School of Public Health, University of Michigan, Ann Arbor, Michigan, USA; Susan B. Meister Child Health Evaluation and Research (CHEAR) Center, Department of Pediatrics, Medical School, University of Michigan, Ann Arbor, Michigan, USA
Purpose: This study aimed to identify and compare characteristics of methods that have been used to elicit trade-offs between equity and efficiency that can contribute to equity-informed cost-effectiveness analyses.
Methods: We conducted a scoping review of studies that elicit trade-offs between equity and efficiency to develop preference rankings and distributional weights or parameterize a social welfare function. Data sources were Ovid (Medline), EconLit, and Scopus. Studies published before June 25th, 2021, were included if they were (1) peer-reviewed or (2) grey literature; (3) published in English; (4) survey-based; and (5) parameterized a social welfare function to quantify inequality aversion or (6) elicited a trade-off between equity and efficiency characteristics of interventions. Studies were excluded if they (1) did not conduct a trade-off, or (2) were purely theoretical studies. We abstracted 11 elements from each study: location, population, sample size, survey method, survey approach, trade-off characteristics, veil-of-ignorance framing, type of inequality aversion elicited, social welfare functional form, results, and study limitations. Studies were grouped by approach: (1) social welfare function, or (2) preference ranking and distributional weighting. We describe study characteristics and classify study findings as clear, mixed, or no evidence of inequality aversion or equity preferences.
Results: The search identified 2,899 potentially relevant studies; 77 papers were included, 28 parameterized social welfare functions and 49 were classified as preference ranking and distributional weighting, with substantial heterogeneity across most study characteristics. Most studies were conducted in the UK (n=24) and elicited trade-offs from varied samples, including the general public (n=35) and students (n=18), with few representative samples (n=13). Sample sizes ranged from 10 to 4118. Among preference ranking and distributional weighting studies, a variety of equity and efficiency characteristics were considered, such as health gains (n=35), disease severity (n=32) and age (n=23). Overall, 43 of the included studies found aversion to inequality, 37 found mixed evidence of aversion, and only 3 found no evidence of inequality aversion. Evidence of between and within-study heterogeneity was found. Studies suggest that preferences for equity may differ by gender, profession, political ideology, income, and education.
Conclusions: Equity considerations impact an individual’s resource allocation preferences. Review findings highlight ongoing challenges for economic evaluation, including which equity weights to apply and how to incorporate characteristics of interventions and preferences of relevant populations into resource allocation decisions.
Keywords: Inequality aversion; social welfare function; preference ranking; distributional weighting; scoping review
The Cost-Effectiveness of Pre-Screening for Chronic Kidney Disease (CKD) Using Machine Learning Among White and Black Americans
OP-031 Applied Health Economics (AHE)
Yiwen Cao, Sze Chuan Suen
Daniel J. Epstein Department of Industrial and Systems Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California
Purpose: CKD currently affects 37M Americans, but as many as 9 out of 10 are unaware of their status. While patients with diabetes or hypertension are recommended to screen for CKD, regular screening in the general population is not cost-effective and not recommended. However, ~52% of CKD patients are non-diabetic and 10% are non-hypertensive, suggesting that many CKD patients fall outside current screening recommendations. We develop machine learning (ML) algorithms to identify high-risk non-diabetic, non-hypertensive patients and test the cost-effectiveness of using such ML models for at-home pre-screening to prompt CKD testing.
Methods: We tuned ML models (logistic regression with lasso, decision tree, random forest, support vector machine (SVM), and gradient boosting) on a non-diabetic, non-hypertensive cohort from NHANES data to predict CKD stage 3, using a 75% training/25% test split. We also trained separate ML models for White and Black individuals, as prior literature shows CKD risk may vary by race. We then used a microsimulation model of CKD to assess whether using the best-performing ML algorithm for annual pre-screening from ages 30 to 90 would be cost-effective. We assumed no cost for implementing the ML algorithm, as it could easily be run on a home computer using an online tool.
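The AUC used to rank the candidate models above can be read as a pairwise concordance probability: the chance that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A minimal pure-Python sketch follows; the labels and risk scores are made up for illustration and are not from the NHANES cohort.

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise concordance:
    the fraction of (non-case, case) pairs in which the case
    receives the higher score (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0
                     for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))

# Toy example: two CKD cases (1) and two non-cases (0) with
# hypothetical predicted risks from a fitted model.
print(auc([0, 0, 1, 1], [0.10, 0.40, 0.35, 0.80]))  # 0.75
```

An AUC of 0.893, as reported here, therefore means the model orders a randomly drawn case above a randomly drawn non-case about 89% of the time.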
Results: The best-performing ML algorithm was logistic regression with lasso, with an area under the curve (AUC) of 0.893. The final model used only five features for prediction: sex, age, smoking status, and whether the individual had previously been told by a doctor that they have high cholesterol or cardiovascular disease. This algorithm was also the best-performing in the race-specific analyses, although with different coefficients, with AUCs of 0.877 and 0.931 for the White and Black cohorts, respectively (Figure 1). Since the race-specific ML models outperformed the general ML model, we used them in the cost-effectiveness analysis. Compared to the status quo policy (annual screening only after developing hypertension or diabetes), using ML for pre-screening was cost-effective for both White and Black individuals, with ICERs of $30,151 and $23,705 per QALY gained, and would reduce lifetime ESRD cases by 11.83% and 10.29% among White and Black individuals, respectively.
Figure 1:
CKD Stage 3 Prediction Outcomes for a Non-Diabetic, Non-Hypertensive Cohort Using Race-Specific Algorithm Hyperparameters
Conclusions: ML algorithms using a small number of simple characteristics can predict CKD stage 3 sufficiently well as to be deployed for cost-effective pre-screening for early CKD diagnosis.
Keywords: Chronic Kidney Disease (CKD), Machine Learning, Cost-effectiveness, Screening
Claim-based algorithms for the prediction of cardiovascular risk among US post-menopausal women initiating anti-osteoporosis drug therapies
OP-032 Health Services, Outcomes and Policy Research (HSOP)
Ye Liu1, Tarun Arora2, Yujie Su1, Jeffrey Curtis1
1Department of Medicine, University of Alabama at Birmingham, USA
2Department of Epidemiology, School of Public Health, University of Alabama at Birmingham, USA
Purpose: Lab test results used in algorithms predicting cardiovascular (CV) risk are not always available in claims data. This study aimed to develop claims-based algorithms predicting CV risk in a large cohort of older adults initiating anti-osteoporosis treatment.
Methods: Using fee-for-service Medicare data from 1/1/2017 to 6/30/2020, we identified women aged ≥65 years newly initiating romosozumab, denosumab, or zoledronic acid between 4/1/2019 and 3/31/2020. Patient demographics (age, race, geographic region), provider specialty, health care utilization, and baseline comorbidities were identified based on subject matter expertise. Biometric data and lab test results were retrieved from Medicare-linked PCORNet Clinical Data Research Network (CDRN) data, linked using MBI numbers.
CV risk was calculated using the ACC/AHA risk equation. Two approaches were used for prediction: a predefined variable-based approach (Approach 1) using the above-mentioned covariables, and a data-mining approach (Approach 2) using patient demographics, provider specialty, and variables selected by a high-dimensional propensity score-based method based on the strength of their association with high CV risk (ACC/AHA risk above the median classified as high risk, otherwise low risk). Linear LASSO models were used to select features and generate prediction models. The penalty parameter (λ) was determined by 5-fold cross-validation. The 1-year CV death risk, defined as an inpatient claim for myocardial infarction or stroke followed within 28 days by inpatient or outpatient death, was then used to further compare the two models in the full cohort. Time-dependent receiver operating characteristic (ROC) curves were used to compare prediction performance.
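As a minimal illustration of choosing λ by k-fold cross-validation, the sketch below uses the closed-form lasso solution for a single feature (soft-thresholding). This is a deliberately simplified stand-in for the study's multivariable linear LASSO models; the function names and data are illustrative only.

```python
import random

def soft_threshold(z, lam):
    """Lasso soft-thresholding operator S(z, lam)."""
    return (abs(z) - lam) * (1.0 if z > 0 else -1.0) if abs(z) > lam else 0.0

def lasso_1d(x, y, lam):
    """Closed-form lasso fit for one feature:
    minimize 0.5 * sum((y - b*x)^2) + lam * |b|."""
    return soft_threshold(sum(xi * yi for xi, yi in zip(x, y)), lam) / \
        sum(xi * xi for xi in x)

def cv_select_lambda(x, y, grid, k=5, seed=0):
    """Pick the penalty in `grid` minimizing total squared
    validation error over k cross-validation folds."""
    idx = list(range(len(x)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]

    def cv_error(lam):
        err = 0.0
        for fold in folds:
            train = [i for i in idx if i not in fold]
            b = lasso_1d([x[i] for i in train], [y[i] for i in train], lam)
            err += sum((y[i] - b * x[i]) ** 2 for i in fold)
        return err

    return min(grid, key=cv_error)
```

On noise-free data the cross-validation loop correctly prefers the smallest penalty; with noisy, high-dimensional data a nonzero λ typically wins, which is the rationale for the tuning step in the abstract.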
Results: A total of 107,753 women were included in the final cohort (age: 75.9±7.4 years; White race: 89.3%). Among them, 176 with complete lab test results and biometric data needed for CV risk calculation were included for model training and testing. The misclassification rates for high vs. low CV risk in the test sample (25% random sample) were 8.9% and 6.7% for Approaches 1 and 2, respectively. The time-dependent areas under the ROC curves and the ROC curves for 90-, 180-, and 360-day CV death risk are shown in the Figure (p>0.05 for each comparison).
Conclusions: A claims-based algorithm can accurately predict cardiovascular risk in the study population. The data-mining approach can be useful for evaluating CV risk in future studies using claims data.
Keywords: Osteoporosis, cardiovascular risk, LASSO
Time-dependent area under ROC curves and ROC curves for 3 time points.
Approach 1: patient demographics and provider specialty, health care utilization, and predefined baseline covariables.
Approach 2 (data-mining): patient demographics and provider specialty, and variables selected by an HdPS-based method.
ROC: receiver operating characteristic. AUROC: area under the ROC curve. HdPS: high-dimensional propensity score.
Subarachnoid Hemorrhage Treatment Benefit Prediction Models (SHARP): Personalized Decision-Making in Aneurysmal Subarachnoid Hemorrhage
OP-033 Health Services, Outcomes and Policy Research (HSOP)
Jordi De Winkel1, David Van Klaveren2, Ruben Dammers3, Simone A Dijkland4, Pieter Jan Van Doormaal5, Mathieu Van Der Jagt6, Richard SC Kerr7, Andrew J Molyneux7, Diederik WJ Dippel4, Bob Roozenbeek4, Hester F Lingsma2
1Department of Neurology, Erasmus MC University Medical Center Rotterdam, Rotterdam, The Netherlands, Department of Public Health, Erasmus MC University Medical Center Rotterdam, Rotterdam, The Netherlands
2Department of Public Health, Erasmus MC University Medical Center Rotterdam, Rotterdam, The Netherlands
3Department of Neurosurgery, Erasmus MC University Medical Center Rotterdam, Rotterdam, The Netherlands
4Department of Neurology, Erasmus MC University Medical Center Rotterdam, Rotterdam, The Netherlands
5Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Center Rotterdam, Rotterdam, The Netherlands
6Department of Intensive Care Adults, Erasmus MC University Medical Center Rotterdam, Rotterdam, The Netherlands
7Nuffield Department of Surgical Sciences, University of Oxford, Oxford, United Kingdom
Purpose: We aimed to develop models to predict benefit from endovascular compared to neurosurgical aneurysm treatment in patients with aneurysmal subarachnoid hemorrhage.
Methods: We used data from the randomized International Subarachnoid Aneurysm Trial. We developed an ordinal regression model to predict functional outcome (modified Rankin Scale; mRS) at 12 months. We developed a Cox regression model to predict time to rebleed or retreatment of the target aneurysm as an indicator of the durability of aneurysm treatment. We modelled heterogeneity of treatment effect by adding interaction terms between treatment and 6 prespecified predictors, and between treatment and baseline risk. We defined benefit as the difference between endovascular and neurosurgical aneurysm treatment in the predicted probability of favorable functional outcome (mRS 0-2) and of no rebleed or retreatment within 10 years. A benefit of ≥5% was considered clinically relevant. Model performance was expressed with the c-statistic and the c-for-benefit. We used bootstrapping for internal validation.
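For a regression model that includes a treatment indicator, the predicted benefit for one patient is the difference in predicted outcome probability with the indicator set to each treatment. A minimal sketch for a binary favorable-outcome model follows; the coefficients are hypothetical placeholders, not SHARP estimates.

```python
import math

def sigmoid(z):
    """Inverse logit."""
    return 1.0 / (1.0 + math.exp(-z))

def predicted_benefit(x, b0, b_x, b_treat, b_interact):
    """Predicted benefit of endovascular vs. neurosurgical treatment
    for one patient with covariate value x: the difference in probability
    of a favorable outcome (mRS 0-2) with the treatment indicator set
    to 1 (endovascular) vs. 0 (neurosurgical)."""
    p_endo = sigmoid(b0 + b_x * x + b_treat + b_interact * x)
    p_neuro = sigmoid(b0 + b_x * x)
    return p_endo - p_neuro

# Hypothetical coefficients for illustration only.
benefit = predicted_benefit(x=0.0, b0=0.0, b_x=0.5, b_treat=1.0, b_interact=0.0)
clinically_relevant = benefit >= 0.05  # the abstract's 5% threshold
```

The interaction coefficient is what lets predicted benefit vary across patients; the abstract's finding of no evidence of interaction corresponds to that term being indistinguishable from zero, leaving only the treatment main effect.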
Results: We included 2143 patients with a mean age of 51.7 years. We found no evidence of interaction with treatment. The average predicted benefit was 6% (95% CI 3-8%) in favor of endovascular aneurysm treatment, for favorable functional outcome, but 11% (95% CI 9-13%) in favor of neurosurgical aneurysm treatment, for no rebleed or retreatment. We identified 646 patients (30%) who had no clinically relevant benefit of endovascular aneurysm treatment in terms of functional outcome, but had a relevant benefit of neurosurgical treatment because of a lower probability of retreatment or rebleed. The internally validated c-statistic was 0.74 (95% CI 0.71-0.77) for the prediction of favorable functional outcome, and 0.70 (95% CI 0.65-0.72) for rebleed or retreatment. The c-for-benefit was 0.56 (95% CI 0.53-0.59) for functional outcome and 0.60 (95% CI 0.56-0.64) for durability of treatment.
Conclusions: On average, functional outcome after endovascular aneurysm treatment is better than after neurosurgical aneurysm treatment. However, a substantial proportion of patients have limited expected benefit from endovascular treatment and should be considered for neurosurgical treatment because of lower rebleed and retreatment rates. To enhance future application of the treatment benefit prediction models, we developed a web-based clinical prediction tool that enables stroke physicians to make personalized treatment decisions. Future external validation will be performed using the Barrow Ruptured Aneurysm Trial.
Keywords: intracranial aneurysm, outcome, stroke, subarachnoid hemorrhage, prognosis, prediction
Optimizing Risk-Based Autism Screening Under Limited Diagnostic Service Capacity
OP-034 Health Services, Outcomes and Policy Research (HSOP)
Yu Hsin Chen, Qiushi Chen
Department of Industrial and Manufacturing Engineering, Penn State University, University Park, PA 16802, USA
Purpose: Early diagnosis and intervention are key to improving long-term outcomes for children with autism spectrum disorder (ASD). To improve early diagnosis, universal ASD screening for children has been recommended; however, concerns have been raised that it may not necessarily reduce the age of diagnosis, given the insufficient accuracy of existing screening tools and the limited capacity of diagnostic services. In this study, we examined risk-based screening strategies that account for the delay in diagnostic services after screening, with the aim of improving early diagnosis among children with ASD.
Methods: We developed a mathematical programming formulation to optimize screening decisions based on individuals’ risk assessed at different ages and the current waitlist for diagnostic services, with the objective of minimizing the total age of diagnosis across all children with ASD. We utilized a large health claims database (MarketScan) to construct a valid risk prediction model and used it to quantify transitions between low-, medium-, and high-risk groups every 6 months from the age of 18 months. We solved for the optimal risk-based screening policy with age-specific risk thresholds, using fixed policies with the same threshold for all children as benchmarks. To evaluate and compare screening policies, we developed a detailed individual-level microsimulation model that measures the diagnosis age of all ASD children, the rate of early diagnosis (under age 4), and false positives referred to diagnostic services within a 10-year horizon.
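The interaction between screening policy and diagnostic capacity is, at its core, a queueing problem: screen-positive children join a waitlist that is served at a fixed monthly rate, and the wait adds directly to the age of diagnosis. A minimal first-come, first-served sketch follows; all numbers are illustrative and are not derived from the MarketScan-based model.

```python
from collections import deque

def mean_wait_months(arrivals, capacity):
    """Mean wait (in months) from a positive screen to diagnostic
    evaluation under a first-come, first-served waitlist with a fixed
    monthly evaluation capacity. `arrivals[m]` is the number of
    screen-positive referrals in month m."""
    waitlist, waits = deque(), []
    for month, n in enumerate(arrivals):
        waitlist.extend([month] * n)                  # new referrals join the queue
        for _ in range(min(capacity, len(waitlist))):
            waits.append(month - waitlist.popleft())  # served this month
    return sum(waits) / len(waits) if waits else 0.0

# When referrals match capacity there is no backlog; when they exceed
# it, waits accumulate and push the diagnosis age upward.
print(mean_wait_months([40] * 12, 40))     # 0.0
print(mean_wait_months([50, 50, 50], 40))  # 0.25
```

This is why a risk threshold that limits referrals to those most likely to have ASD can lower the mean diagnosis age even though fewer children are screened.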
Results: In the base-case setting with a moderate diagnostic capacity (40/month), universal screening yielded an average diagnosis age of 5.9 years and early diagnosis among 35.1% of ASD children. The optimal risk-based screening policy was projected to reduce the diagnosis age to 4.7 years and substantially improve the early diagnosis rate to 65.4%, whereas the fixed policy that selectively screens only high-risk individuals showed worse performance, with a higher diagnosis age and a lower early diagnosis rate than universal screening. At a lower diagnostic service capacity (30/month), the optimal risk-based screening policy consistently outperformed fixed policies, showing a lower diagnosis age (5.47 vs. 6.7-7.1 years) and a higher early diagnosis rate (59.1% vs. 25.9-36.9%).
Conclusions: Risk-based screening for ASD is a promising strategy that can use limited diagnostic resources more effectively than universal screening to achieve the goal of early diagnosis.
Keywords: screening policy, early diagnosis, autism, optimization, health system
Table.
Model outcomes of different screening policies for autism spectrum disorder under different diagnostic capacities.
| | Universal screening (fixed, all children) | Fixed screening for high-risk individuals only | Optimal risk-based screening |
|---|---|---|---|
| Moderate diagnostic capacity (40 diagnoses/month) | | | |
| Mean diagnosis age, years (95% CI) | 5.92 (5.85, 5.99) | 7.08 (7.02, 7.14) | 4.71 (4.65, 4.78) |
| ASD children diagnosed under age 4, % (95% CI) | 35.09 (34.05, 36.14) | 36.62 (35.85, 37.40) | 65.40 (64.64, 66.15) |
| False positives among children referred to diagnostic evaluation, % (95% CI) | 90.66 (90.52, 90.81) | 60.01 (59.19, 60.83) | 83.95 (83.66, 84.24) |
| Low diagnostic capacity (30 diagnoses/month) | | | |
| Mean diagnosis age, years (95% CI) | 6.69 (6.63, 6.76) | 7.06 (7.01, 7.11) | 5.47 (5.39, 5.54) |
| ASD children diagnosed under age 4, % (95% CI) | 25.86 (24.92, 26.79) | 36.88 (36.23, 37.53) | 59.13 (58.21, 60.05) |
| False positives among children referred to diagnostic evaluation, % (95% CI) | 90.83 (90.69, 90.96) | 60.53 (59.77, 61.28) | 83.33 (83.00, 83.66) |
The high false positives are due to the low disease prevalence and low sensitivity for the existing screening tool.
People want doctors to use Artificial Intelligence, especially if it helps them to diagnose serious disease
OP-035 Decision Psychology and Shared Decision Making (DEC)
Martine Nurek, Olga Kostopoulou
Imperial College London
Purpose: To identify factors that influence satisfaction with doctors who use Artificial Intelligence (AI) in diagnosis.
Methods: In this preregistered, online vignette study, a representative sample of the UK population were asked to imagine that they were attending an appointment with their physician for stomach pain and constipation. During the consultation, the physician consulted an AI-based decision aid, which recommended a test to rule out a serious disease. In a full factorial design (2x2x2), we varied the invasiveness of the test (invasive/non-invasive), whether the doctor adhered to the aid by ordering the test (yes/no), and the severity of the patient’s disease (serious/non-serious).
After reading one of the resulting eight vignettes, randomly assigned, respondents rated their satisfaction with the consultation (0-100 scale) and indicated on 4-point scales whether they would recommend the doctor and how often decision aids should be used in practice. Finally, they completed the Trust in Physicians scale (TIP), the single-item Maximizer-Minimizer elicitation question (MM1), and the Health Regulatory Focus scale (HRF). We measured the impact of test invasiveness, doctor adherence, and disease severity on evaluations of the doctor, the consultation, and the aid.
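The 2x2x2 full factorial yields the eight vignettes mechanically as the cross-product of the three binary factors. A small sketch of generating them and assigning respondents at random follows; it is a simplified stand-in for the study's actual randomization procedure, and all names are illustrative.

```python
import itertools
import random

# The study's three binary factors; each combination defines one vignette.
factors = {
    "test": ["invasive", "non-invasive"],
    "doctor_adhered": ["yes", "no"],
    "disease_severity": ["serious", "non-serious"],
}
vignettes = [dict(zip(factors, combo))
             for combo in itertools.product(*factors.values())]

def assign_vignette(respondent_id, seed="study-seed"):
    """Randomly assign a respondent to one of the eight vignettes,
    seeded per respondent so assignments are reproducible.
    (Illustrative only; not the study's actual allocation code.)"""
    return random.Random(f"{seed}-{respondent_id}").choice(vignettes)
```

With all eight cells populated, main effects and interactions such as the adherence*severity interaction reported below can be estimated without confounding between factors.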
Results: 730 adults took part. Satisfaction with the consultation was moderate-to-high (M=68.3, SD=27.7), with most participants indicating that they “probably would” (46%) or “definitely would” (28%) recommend the doctor. Both satisfaction and likelihood of recommending the doctor were higher when the doctor adhered to the aid vs. not (both P<.01) and when the aid suggested an invasive rather than a non-invasive test (both P≤.05). We found a significant adherence*severity interaction: when the doctor did not adhere to the aid and missed a serious disease, satisfaction and recommendation ratings were low. Most respondents felt that decision aids should be used in the consultation, be it “sparingly” (29%), “frequently” (43%), or “always” (21%). Higher Health Regulatory Focus and Trust in Physicians scores were associated with a higher suggested frequency of use (P<0.01). There were no associations with MM1.
Conclusions: People want doctors to use AI, especially if it helps to diagnose serious disease. This might encourage doctors to use AI-based decision aids and include patients in decision making.
Keywords: decision support, artificial intelligence, computerised decision aids, clinical vignettes, patient satisfaction
Figure 1.
Severity * adherence interaction on satisfaction and likelihood of recommending the doctor
Implementation of Computer-Generated Personalized Risk Information in the Clinical Encounter: The Concern for Decisional Distancing
OP-036 Decision Psychology and Shared Decision Making (DEC)
Holland Kaplan1, Kristin Kostick1, Ben Lang1, Natalie Dorfman1, Robert Volk2, Jennifer Blumenthal-Barby1
1Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, Texas, United States
2Department of Health Services Research, MD Anderson Cancer Center, Houston, Texas, United States
Purpose: Decisional tools incorporating computer-generated (often machine-learning-based) personalized risk information (PRI) are increasingly being integrated into clinical spaces. It is unknown how these tools may impact patient-physician interactions and shared decision-making (SDM). Our research raises concern for increased decisional distancing, which we define as a reduction in the quality or quantity of discussions between patients and physicians around medical decision-making as a result of these computer-based tools.
Methods: As part of a 5-year, multi-institutional AHRQ-funded grant to explore the integration of PRI into SDM about left-ventricular assist devices, we conducted 40 in-depth interviews with 14 physicians, 6 nurse coordinators, 18 patients, and 2 caregivers that explored, among other topics, stakeholders’ attitudes toward making PRI directly available to patients outside of their clinical encounters. Interviews were analyzed using Thematic Content Analysis in MAXQDA 2018.
Results: Our interview data revealed that physicians are more willing to make PRI accessible at all times (i.e., outside of clinical encounters without professional guidance) than patients are willing to receive it at all times (38% vs. 21%). Most patients (75%) and caregivers (100%) prefer that their physicians discuss PRI with them, while of all stakeholders, physicians (45%) least often felt they should be discussing PRI with their patients. Additionally, only 13% of physicians stated they would never want PRI accessed by patients or caregivers outside of appointments. The most common reasons listed by patients and caregivers for wanting to discuss PRI with their physicians were to ask questions and to mitigate the emotional impact of personalized information.
Conclusions: The above findings raise the concern that patients and caregivers may be given access to computer-based PRI without receiving their preferred guidance from physicians. This practice may negatively impact SDM within the patient-physician relationship by creating decisional distance. These concerns are salient given the authority that may be ascribed to computer-generated PRI by physicians, physicians’ use of the “independent choice” model of decision-making as opposed to SDM, and the trend of physicians outsourcing tasks in the clinical space due to increasing physician duties. An emphasis on using decisional tools that include computer-generated PRI to augment and enhance the clinical encounter with the physician, rather than to replace it, may minimize the risk of decisional distancing. Embedding computer-based PRI in a decision aid may help with this.
Keywords: Shared decision-making, decisional distancing, artificial intelligence, machine learning
Preferred Decisional Role of People Receiving Dialysis
OP-037 Decision Psychology and Shared Decision Making (DEC)
Gwen Bernacki1, Ann O'Hare2
1University of Washington
2Veterans Administration of Puget Sound
Purpose: Although clinical practice guidelines recommend a shared approach to decision-making for patients undergoing maintenance dialysis, the extent to which this guidance mirrors patients’ own preferred decisional role is unknown. Our objective was to characterize decisional role preferences of patients undergoing maintenance dialysis and their relationship to patient characteristics and responses to questions about other aspects of end-of-life care.
Methods: This was a cross-sectional survey study of a pragmatic sample of patients undergoing maintenance dialysis at 31 nonprofit dialysis facilities in 2 US metropolitan areas from 4/2015 to 10/2018. Participants’ preferred decisional roles were classified as patient-led, shared, or physician-led based on responses to the following question: “If you were to become very sick in the future and were facing a decision about whether to accept treatments to prolong your life that might increase your suffering, what role would you want to have in that decision?” We measured the association of preferred decisional role with both self-reported patient characteristics and responses to questions about engagement in advance care planning, treatment preferences, values around life prolongation, desired place of death, and prognostic expectations.
Results: 915 participants (63.9% response rate) had a mean age of 62.7 (±13.9) years; 513 (56.1%) were men, 551 (60.2%) were White. 472 (51.5%) participants indicated that they preferred a patient-led, 336 (36.7%) a shared and 107 (11.7%) a physician-led decisional role. Preferred decisional role was independently associated with age, race, educational level, importance of religious and spiritual beliefs and length of time on dialysis (p<0.05). After adjustment for these differences, participants who preferred a patient-led approach were less likely than those who preferred a shared approach to want CPR (p=0.02), more likely to have had a discussion about stopping dialysis (p=0.01) and more likely to want to die at home (p=0.01). Those who preferred a physician-led approach were less likely than those who preferred a shared approach to have had a discussion about stopping dialysis (p=0.01) and more likely to have a prognostic expectation of 5-10 vs. <5 years (p=0.04).
Conclusions: Only 36% of study participants indicated they would prefer a shared approach to end-of-life decision-making, highlighting the importance of eliciting preferred decisional role as part of the process of planning for serious illness in patients undergoing maintenance dialysis.
Keywords: shared decision making, preferred decisional role, dialysis, end stage renal disease, end-of-life care
Making CHOICES: The Consolidated Health Outcomes/Interventions Choice-Modeling Evaluation Standards for critical appraisal of preference research
OP-038 Patient and Stakeholder Preferences and Engagement (PSPE)
Norah L Crossnohere1, Ellen Janssen2, Sara Knight3, Ilene Hollin4, A. Brett Hauber5, John F P Bridges1
1Department of Biomedical Informatics, The Ohio State University
2Janssen Pharmaceutical
3Department of Internal Medicine, Division of Epidemiology, University of Utah
4Department of Health Services Administration and Policy, Temple University
5Pfizer
Purpose: Standards for conducting and reporting methods are vital to ensuring the validity of research. Best practices for using choice modeling to measure the preferences of patients and other stakeholders in health are emerging, but consolidated standards for their critical appraisal do not yet exist. We sought to identify key concepts for inclusion in CHOICES.
Methods: A targeted literature review identified documents such as frameworks, checklists, and best practices for the use of choice modeling in health research. We recorded what ‘unit’ of research each document provided input on (i.e., preference-elicitation instruments, studies, or syntheses of multiple studies). Content analysis was used to identify all concepts related to quality across the documents. Concepts were then thematically grouped. A nominal group approach was used to reduce and refine concepts into a set of mutually exclusive key concepts related to quality.
Results: We identified 14 guidance documents that offered quality considerations for choice modeling in health. These included documents relevant to specific methods (e.g., discrete-choice experiments, best-worst scaling) as well as to preference measurement more generally. Documents offered quality considerations for preference-elicitation instruments (n=12), preference studies (n=10), and syntheses of multiple studies (n=1). Content analysis revealed 141 concepts pertaining to quality across all documents. These concepts spanned eight themes: internal validity (28% of items), external validity (20%), questionnaire development (18%), study conduct (8%), statistical testing (8%), robustness (6%), relevance (6%), and interpretation (6%). Upon reducing and refining these concepts, a total of 20 key concepts were identified for consideration in CHOICES (see Table).
Conclusions: We present the first steps in developing consolidated standards for the critical appraisal of choice modeling to measure preferences for health outcomes and interventions. By critically comparing existing guidance documents and highlighting the key concepts for quality, we demonstrate the need for a consolidated approach and identify the candidate concepts for incorporation. It remains unclear if separate standards are needed for the critical appraisal of preference-elicitation instruments, studies, and syntheses of literature. Further consensus is needed to advance standards for choice-modeling approaches in health.
Keywords: preferences; critical appraisal; quality; methods; standards; guidance as topic
Impact of Variation in Health Objectives on the Potential for Personalized Blood Pressure Control
OP-039 Health Services, Outcomes and Policy Research (HSOP)
John Giardina1, Ankur Pandya2
1Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, USA
2Center for Health Decision Science and Department of Health Policy and Management, Harvard T.H. Chan School of Public Health, Boston, MA, USA
Purpose: Intensive antihypertensive treatment reduces the risk of cardiovascular events but can cause potentially serious adverse events (e.g., kidney injury). Recent research has analyzed heterogeneity in the effect of intensive treatment on cardiovascular and adverse events separately, which can lead to conflicting guidance. In this study, we investigate how variation in patient-level objectives and trade-offs between cardiovascular and adverse events impacts the potential for personalized blood pressure control in a clinical trial analysis.
Methods: We used data from the SPRINT trial (n=6809), which compared an intensive 120 mmHg systolic blood pressure (SBP) target to a standard 140 mmHg target. We used causal forests to estimate the treatment effect of the intensive target as a function of 16 baseline covariates, including blood lipids, medical history, and demographic factors. We analyzed the effect of intensive treatment on two objectives over 3.5 years in the trial: (1) restricted mean survival time (RMST) before cardiovascular event or death, and (2) QALYs accrued, including sensitivity analyses for the disutility of adverse events. We used a decision tree algorithm to split patients into subgroups based on their covariates and assign intensive or standard treatment to maximize the projected benefit of treatment; cross-validation was used to ensure that assignment was not based on noise in the projected benefit.
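The QALY objective combines event-free survival, post-event survival, and an adverse-event disutility into a single number per patient, which is what lets it trade off both effect dimensions at once. A minimal sketch of such an objective follows; the utility values are hypothetical placeholders rather than SPRINT-derived estimates (the 0.01 per-event disutility matches the figure caption's sensitivity value).

```python
def qalys(event_free_years, years_after_event, n_adverse_events,
          u_pre=0.85, u_post=0.70, ae_disutility=0.01):
    """QALYs over the trial horizon: event-free time accrues at one
    utility, post-event time at a lower utility, and each serious
    adverse event subtracts a one-time disutility. All utility values
    are illustrative assumptions."""
    return (event_free_years * u_pre
            + years_after_event * u_post
            - n_adverse_events * ae_disutility)

# A single-outcome objective would see only one side of this trade-off;
# the QALY objective weighs an averted cardiovascular event against
# two added serious adverse events for the same patient.
intensive = qalys(3.5, 0.0, 2)  # event-free throughout, 2 adverse events
standard = qalys(3.0, 0.5, 0)   # CV event at 3.0 years, no adverse events
best = "intensive" if intensive > standard else "standard"
```

As the abstract notes, increasing the adverse-event disutility shrinks the gap between arms for patients whose cardiovascular benefit and adverse-event risk move together, which narrows the room for personalization.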
Results: Using RMST as the objective, the algorithm assigned all patients to intensive treatment; this aligns with prior research finding that there is not enough observed heterogeneity in effects on cardiovascular outcomes to effectively stratify treatment. Using QALYs as the outcome, the algorithm split the population into four subgroups based on patient characteristics (Figure). The average benefit per person from personalizing treatment for these subgroups, compared to no personalization, was 0.5 quality-adjusted life months (95% CI: 0.2, 0.7). This benefit decreased as the disutility of adverse events increased: patients with high cardiovascular benefit from intensive treatment also have high adverse event risk, making individualization difficult when patients place similar value on avoiding both outcomes.
Figure.
Subgroup assignment to SBP target (intensive vs. standard) by a decision tree algorithm when the objective is to maximize QALYs accrued and each serious adverse event has a disutility of 0.01 QALYs.
Conclusions: Using a summary health objective like QALYs can allow for more effective antihypertensive treatment personalization compared to using a single clinical outcome because it captures more dimensions of treatment effects, including impacts on disease severity and the correlation between cardiovascular and adverse event risk. Changes in patient-specific utilities can also affect the potential value of personalization.
Keywords: hypertension, personalized medicine, individualized care
BC Pediatric Precision Oncology Impact Study - Healthcare Providers’ Perspectives
OP-040 Decision Psychology and Shared Decision Making (DEC)
Nicholas McCloskey1, Rebecca J Deyell1, Paula Mahon2, Samantha Pollard3
1University of British Columbia, Division of Oncology, Hematology and Bone Marrow Transplant, Vancouver, Canada
2BC Children’s Hospital & Research Institute, Vancouver, Canada
3BC Cancer Research Center, Vancouver, Canada
Purpose: Children and adolescents with hard-to-treat cancer in British Columbia (BC) have access to precision oncology studies that provide whole genome and transcriptome sequencing (WGTS) of tumour and germline samples, aimed at identifying potentially actionable variants in the categories of therapy, diagnosis/prognosis, and cancer predisposition. It is essential to understand the perspectives of healthcare providers (HCPs) in order to identify and troubleshoot barriers and to facilitate improved utilization of results.
Methods: Following Ethics Board approval and informed consent, seventeen HCPs (8 oncologists, 6 surgeons/interventional radiologists, 2 genetic counsellors, 1 medical geneticist) at a tertiary care pediatric institution underwent semi-structured interviews. Interviews were audio recorded, transcribed, coded and analyzed thematically. Initial transcripts were read by four investigators to develop and refine the coding tree. Once the code book was finalized, two readers double-coded 53% of interviews to ensure minimal inter-observer variability.
Results: HCP interviews revealed five main themes, each with several subthemes.
1) Communication and Collaboration. HCPs appreciated strong communication from pediatric oncologists on the study team. Because HCPs did not directly undertake or attend consent meetings, they were often uncertain how study expectations were explained to patients/families. 2) Navigating uncertainty and new technologies. HCPs reported varying comfort with research results, and many indicated a preference for a written WGTS research report with prioritized recommendations. 3) Barriers to clinical utility of WGTS results. HCPs recognized that the clinical translation of research results into novel therapies remains impeded by the timeliness of WGTS, a limited evidence base (especially in pediatrics), and the complexity involved in attempting to access targeted drugs. 4) Hope and Expectations. HCPs' role in managing families', and their own, hope and expectations is challenged by the balance between setting realistic expectations and not taking away hope. 5) Personal challenges and stressors. HCP stressors included responsibility for the clinical integration of study results, navigating logistical barriers, and the burden of advocacy.
Conclusions: Overall, the HCPs in our study were enthusiastic about the future integration of precision medicine into clinical practice for children and adolescents with cancer. However, they identified a need for ongoing communication with the study team, along with education and support in interpreting and clinically translating WGTS results, to best support patient care.
Keywords: precision medicine, pediatric, oncology, qualitative, interview, PROFYLE
Using Design Thinking to Promote Patient-Centered Communication between Patients with Ovarian Cancer and their Clinicians
OP-041 Patient and Stakeholder Preferences and Engagement (PSPE)
Rachel A. Pozzar1, James A. Tulsky1, Donna L. Berry2, Alexi A. Wright1
1Dana-Farber Cancer Institute, Boston, MA, United States; Harvard Medical School, Boston, MA, United States
2University of Washington, Seattle, WA, United States
Purpose: Patient-centered communication (PCC) occurs when clinicians elicit and respond to patients’ preferences and concerns. Greater PCC is associated with better quality of life in patients with ovarian cancer, yet PCC is not widely used in practice. Design thinking entails empathizing with stakeholders; defining a problem from multiple perspectives; ideating solutions; and iteratively prototyping and testing a solution. We applied design thinking methods to develop an intervention to promote PCC between patients with ovarian cancer and their clinicians.
Methods: Guided by the National Cancer Institute Framework for PCC in Cancer Care, we previously completed an explanatory sequential mixed methods study and a systematic review and thematic synthesis of the communication experiences of patients with ovarian cancer. Drawing from this research, we developed a stakeholder map to identify potential influencers and end-users of an intervention to promote PCC; a challenge map to identify barriers to engaging in PCC; and a journey map to document pain points in patients’ and clinicians’ ovarian cancer communication experiences. Using these maps, we developed a communication intervention.
Results: Stakeholder mapping highlighted the need to engage with patients, caregivers, and clinicians. Challenge mapping identified four design priorities: normalize assessment of patients’ preferences and concerns; provide patients, caregivers, and clinicians with communication guidance; adapt to existing workflows; and assist patients, caregivers, and clinicians in agreeing on a visit agenda. Journey mapping identified discordance between patients’ information preferences and clinicians’ information disclosure, patients’ late-breaking concerns, and stakeholders’ emotional discomfort as potential pain points during communication encounters. Our Collaborative Agenda-Setting Intervention (CASI; Figure 1) is a tool that will be integrated into the electronic health record (EHR). Prior to a clinic visit, the CASI prompts patients and caregivers to respond to a questionnaire about preferences and concerns in the patient portal. Responses are stored in the EHR, and the CASI automatically generates a question prompt list, tailored to patients’ and caregivers’ stated concerns. Clinicians can review responses to the CASI in the EHR and receive communication guidance tailored to patients’ and caregivers’ concerns prior to each clinic visit.
Conclusions: Design thinking methods supported our development of a communication intervention that aims to meet the needs of patients, caregivers, and clinicians. Additional research will aim to elicit stakeholder feedback and iteratively refine the CASI.
Keywords: ovarian neoplasms, communication, quality of life, physician-patient relations
Collaborative agenda-setting intervention (CASI) schematic.
Evidence of Better Selection of Patients by Acuity into Priority Statuses After the 2018 Heart Allocation Policy Change
OP-042 Health Services, Outcomes and Policy Research (HSOP)
Stratton B Tolmie1, William F Parker2
1Pritzker School of Medicine, The University of Chicago, Chicago, US
2Department of Medicine, Section of Pulmonary and Critical Care Medicine, University of Chicago, Chicago, US
Purpose: A key policy goal of the new US heart allocation system was to reduce centers’ ability to inappropriately bump or ‘select’ candidates into higher statuses, originally meant for a minority of similarly highly acute patients, through use of potentially unnecessary therapies. This gaming had led to crowding in the highest status, high variance of clinical acuity within statuses across centers, and thus inefficient allocation of scarce hearts. We determined how effective the new policy has been in decreasing the within-status variance of clinical acuity of transplant recipients across centers as evidence of decreased gaming at the center level.
Methods: We collected Scientific Registry of Transplant Recipients data on all adult heart transplant candidates listed in seasonally matched pre- (2016-2017) and post-policy (2018-2019) cohorts. We fitted a mixed effects Cox proportional hazards model for each policy period, treating transplant and status as time-dependent covariates, and introducing random center effects. Model coefficients were used to estimate between-center variance in the survival benefit of transplant controlling for status. Survival benefit of transplant represents the difference between a patient's expected survival with a transplant versus if they hypothetically stayed on the waitlist indefinitely. Since more acute patients have the potential to experience higher benefit from transplant, variance in survival benefit is proportional to the variance of recipients' underlying acuity. We then calculated the median hazard ratio (MHR) of the random slope on transplant in each policy period as a measure of the variance of the center random effects.
Results: There were 3,646 candidates listed in the pre-policy and 3,680 in the post-policy cohort. There were no statistically significant differences between cohorts in demographics or hemodynamics. The between-center variance in survival benefit decreased from 0.454 in the pre-policy cohort to 0.048 in the post-policy cohort. The MHR of transplant in the pre- and post-policy cohorts were 1.45 and 1.07 respectively.
Conclusions: There was higher variance in survival benefit, and thus higher variance in transplant recipient acuity within a given status, in the pre-policy period than in the post-policy period. We therefore conclude that centers are more similar to one another in their selection of candidates by acuity into statuses after the policy change. This suggests that centers are less able to inappropriately bump candidates into higher statuses, potentially allowing for more efficient allocation of scarce hearts.
Keywords: Transplant, Survival Analysis, Organ Allocation, Policy, Gaming
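As background for the MHR measure used above: a standard formulation (Merlo et al.) maps the variance of a normally distributed random effect on the log-hazard scale to the median hazard ratio between two randomly chosen centers. A minimal sketch in Python, for illustration only; the variances reported in this abstract may be on a different scale than the random-effect variance this formula expects:

```python
from math import exp, sqrt
from statistics import NormalDist

def median_hazard_ratio(random_effect_variance):
    """Median hazard ratio for a center-level random effect with the given
    variance on the log-hazard scale (Merlo et al. formulation)."""
    z75 = NormalDist().inv_cdf(0.75)  # 75th percentile of N(0,1), ~0.6745
    return exp(sqrt(2.0 * random_effect_variance) * z75)
```

An MHR of 1 means no between-center variation; values further above 1 indicate greater heterogeneity, consistent with the drop from 1.45 to 1.07 reported above.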
Impact of ‘EVEN FASTER’ concept to accelerate cervical cancer elimination in Norway: A model-based analysis
OP-043 Health Services, Outcomes and Policy Research (HSOP)
Allison Portnoy1, Kine Pedersen2, Jane J. Kim1, Emily A. Burger3
1Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, USA
2Department of Health Management and Health Economics, University of Oslo, Oslo, Norway
3Department of Health Management and Health Economics, University of Oslo, Oslo, Norway; Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, USA
Purpose: Experts have recently proposed an 'EVEN FASTER' concept involving intensifying concomitant screening and vaccination campaigns in age groups that maintain circulation of human papillomavirus (HPV) infection in the population (e.g., ages 25+). Advocates for 'EVEN FASTER' suggest this approach could accelerate elimination of cervical cancer (CC) (i.e., an age-standardized incidence rate (ASR) of <4 cases per 100,000 women) and reduce the need for screening. We explored the effects of these proposals under context-specific contact patterns in Norway on the timing (currently projected to be 2039) and impact of CC elimination beyond existing prevention policies.
Methods: We used a multi-modeling approach that captured HPV transmission and cervical carcinogenesis to estimate the ASR associated with alternative vaccination and screening scenarios compared with a status-quo scenario reflecting previous vaccination and screening efforts. For cohorts ages 25–35 years (including women previously targeted by vaccination campaigns), we examined 11 vaccination scenarios that incrementally increased vaccination coverage from current cohort-specific rates to 90% for girls and 89% for boys (i.e., “maximum-achieved” coverage). Each vaccination scenario was coupled with a screening scenario that lowered the age that women switch to HPV-based screening, if eligible (e.g., either age 25 or 30 in 2022) and varied the frequency of HPV-based screening (i.e., 5-yearly (current practice) or 10-yearly).
Results: Compared to CC elimination in 2039 projected under existing prevention policies, increasing vaccination coverage for women up to at least age 27 under current screening practice accelerated elimination timeframe by one year, while increasing coverage up to at least age 32 and lowering the HPV-based screening age to 25 accelerated timing by two years (Table 1). However, when coupled with de-intensified screening frequencies, several vaccination strategies lowered ASR by year 2050 compared with current projections, but vaccination would have to be offered to at least age 35 to accelerate CC elimination timeframe.
Table 1.
Elimination timeframe and age-standardized incidence rate associated with ‘EVEN FASTER’ scenarios in Norway.
Note: Heat map formatting for the year cervical cancer elimination is achieved shows an accelerated elimination timeframe in green and delayed elimination timeframe in red compared to the year elimination achieved under existing prevention policies (2039). Heat map formatting for age-standardized incidence rate shows lower cervical cancer incidence rates in green and higher rates in red compared to the rate estimated under existing prevention policies (2.505 cases per 100,000 women in year 2050).
Conclusions: A targeted, high-coverage, one-time vaccination campaign up to age 35 had greater benefits compared to current prevention policies and may accelerate CC elimination, even if paired with less intensive screening frequencies, as long as the eligible age for HPV-based screening is lowered. Future analyses should explore the cost of vaccinating additional cohorts and the impact of additional levers to increase or decrease screening to reach elimination goals and meet policy thresholds for cost-effectiveness.
Keywords: cervical cancer; vaccination; screening
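The elimination criterion referenced above is defined on an age-standardized incidence rate. A minimal sketch of the calculation, with illustrative age groups and standard-population weights (not the weights used in the analysis):

```python
def age_standardized_rate(age_specific_rates, standard_weights):
    """ASR: age-specific incidence rates (per 100,000) weighted by a
    standard population's age distribution; weights must sum to 1."""
    assert abs(sum(standard_weights) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(age_specific_rates, standard_weights))

def cc_eliminated(asr, threshold=4.0):
    """Cervical cancer elimination: ASR below 4 cases per 100,000 women."""
    return asr < threshold
```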
Feasibility and Sustainability of Reducing Opioid Overdose Deaths in KY, MA, NY, and OH: Modeling HEALing Communities Study Interventions
OP-044 Health Services, Outcomes and Policy Research (HSOP)
Jagpreet Chhatwal1, Peter P. Mueller2, Qiushi Chen3, Neeti S. Kulkarni2, Madeline Adee2, Gary Zarkin4, Marc R. Larochelle5, Amy B. Knudsen1, Carolina Barbosa4
1Massachusetts General Hospital Institute for Technology Assessment, Harvard Medical School, Boston, MA, USA
2Massachusetts General Hospital Institute for Technology Assessment, Boston, MA, USA
3Pennsylvania State University, Department of Industrial and Manufacturing Engineering, State College, PA, USA
4RTI International, Research Triangle Park, NC, USA
5Boston Medical Center, Boston, MA, USA
Purpose: In 2021, over 80,000 Americans died from opioid overdose. The HEALing Communities Study (HCS) was launched with the goal of reducing opioid overdose deaths (OOD) by 40% in a one-year measurement period (July 1, 2021 to June 30, 2022) after the initial rollout of interventions in January 2020. Our objective was to assess whether the goal could be achieved by the end of the study measurement period, and if not, whether it could be achieved if interventions were sustained beyond the measurement period.
Methods: We developed a system dynamics model that simulated population transitions from opioid misuse to opioid use disorder (OUD) and overdose, treatment, and relapse in four HCS states—Kentucky (KY), Massachusetts (MA), New York (NY), and Ohio (OH). The model reproduced historical trends in OOD in each state from 2015-2020, accounting for reduced access to medications for OUD (MOUD) and increase in OOD during the COVID-19 pandemic. The model simulated HCS interventions that could yield a 50% reduction in the incidence of prescription opioid misuse, a 2-fold increase in MOUD initiation rates, MOUD retention rates at levels observed in randomized trials, and a 10% mortality reduction from increased distribution of naloxone kits. The primary outcome was the reduction in OOD at the end of the HCS measurement period and for up to four years after if interventions are sustained, compared with outcomes under status quo.
Results: The simulated interventions were estimated to yield at most a 13% to 17% reduction in OOD by the end of the HCS measurement period among the four states. Sustaining all interventions for an additional 4 years beyond the end of this period would result in a 19% to 30% reduction in OOD. If the MOUD initiation rate could increase by 5-fold (rather than by 2-fold), and if all interventions were sustained, the goal of a 40% reduction in OOD could be achieved in MA after 2 years of sustainment, and in OH after 3 years of sustainment (Figure). When interventions end, OOD quickly rise.
Conclusions: The goal of a 40% reduction in OOD requires not only multiple interventions, including ambitious scale-up of MOUD initiation and retention and increased distribution of naloxone, but also sustained efforts over many years.
Keywords: Addiction, Modeling, Health Policy
Year-by-year reduction in opioid overdose deaths under a multitude of interventions
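The system dynamics structure described in the Methods can be sketched as stocks updated by Euler integration. The compartments, parameter names, and rates below are illustrative assumptions, not the HCS model's actual specification:

```python
def step_ood_model(state, p, dt=1.0 / 12):
    """One Euler step of a toy misuse -> OUD -> (MOUD | overdose death)
    stock-and-flow model; returns the new state and deaths this step."""
    misuse, oud, moud = state
    new_oud = p["progression"] * misuse          # misuse escalating to OUD
    starts = p["moud_init"] * oud                # MOUD treatment initiation
    relapse = p["relapse"] * moud                # return from MOUD to OUD
    deaths = p["od_mortality"] * oud * (1 - p["naloxone_reduction"])
    misuse += (p["incidence"] - new_oud) * dt
    oud += (new_oud + relapse - starts - deaths) * dt
    moud += (starts - relapse) * dt
    return (misuse, oud, moud), deaths * dt
```

Raising `moud_init` or `naloxone_reduction` lowers the death flow, mirroring the intervention levers modeled above.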
Hepatitis C virus (HCV) screening policies among people who inject drugs (PWID) towards the elimination of hepatitis C
OP-045 Health Services, Outcomes and Policy Research (HSOP)
Lin Zhu1, Tannishtha Pramanick2, William W. Thompson3, Nathan W. Furukawa3, Liisa M. Randall4, Benjamin P. Linas5, Joshua A. Salomon1
1Department of Health Policy, School of Medicine, Stanford University, Stanford, CA, USA
2Department of Medicine, Boston Medical Center, Boston, MA, USA
3Division of Viral Hepatitis, Centers for Disease Control and Prevention, Atlanta, GA, USA
4Bureau of Infectious Disease and Laboratory Sciences, Massachusetts Department of Public Health, Boston, MA, USA
5Schools of Medicine and Public Health, Boston University & Department of Medicine, Boston Medical Center, Boston, MA, USA
Purpose: To evaluate health and economic consequences of hepatitis C virus (HCV) screening strategies among people who inject drugs (PWID).
Methods: We developed a dynamic network model to simulate HCV transmission among PWID via injection equipment sharing. We parameterized and calibrated the model with data on a PWID network from Eastern Kentucky and published literature. We simulated different screening frequencies among PWID; PWID who tested positive entered a treatment cascade. We examined the impact of screening frequency on HCV prevalence, incidence, and HCV-related deaths over 10 years. We recorded total quality-adjusted life-years (QALYs), costs, and incremental cost-effectiveness ratios (ICERs) associated with different strategies. We conducted sensitivity analyses that varied input parameters governing screening accessibility, linkage to care, transmission, network density, and partnership duration, as well as strategies that incorporated harm-reduction interventions.
Results: In the main analysis, screening PWID every six months reduced HCV prevalence by 55%, incidence by 36%, and HCV-related deaths by 33% by year 2030. Screening effectiveness was lower in a denser network. Deaths from non-HCV causes (mostly fatal overdose) were >100x higher than HCV-related deaths. Increasing screening accessibility by 50% increased prevalence reductions by 56% relative to the main analysis. Integrating harm-reduction interventions increased incidence reductions by 56%. In the main analysis, frequent HCV screening (even at a six-month interval) had an ICER <$50,000/QALY. In a denser network (mean degree = 3 vs. 1.43 in the main analysis), ICERs were >$100,000/QALY, largely driven by higher reinfection. Decreases in injection-related mortality and transmission probabilities decreased total costs and increased QALYs, making ICERs more favorable.
Conclusions: Even with HCV screening among PWID at six-month intervals, achieving elimination goals by 2030 would be unlikely without additional interventions. This is especially true in denser PWID networks and communities with low healthcare access among PWID. Frequent HCV testing is cost-effective under certain conditions. Network density, screening accessibility, and HCV transmission probabilities are key drivers of cost-effectiveness. Incorporating harm-reduction interventions would substantially magnify mortality reductions and improve the cost-effectiveness associated with HCV screening strategies.
Keywords: HCV, screening, treatment, PWID
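Transmission over an injection-sharing network, as modeled above, can be illustrated with a single discrete time step over edges. This is a generic sketch under simplifying assumptions (per-edge Bernoulli transmission, undirected edges), not the study's model:

```python
import random

def transmission_step(edges, infected, p_transmit, rng=random.Random(1)):
    """One time step of transmission over an injection-sharing network:
    each edge with exactly one infected endpoint transmits with
    probability p_transmit. Returns the updated infected set."""
    newly = set()
    for a, b in edges:
        if (a in infected) != (b in infected):  # discordant partnership
            if rng.random() < p_transmit:
                newly.add(b if a in infected else a)
    return infected | newly
```

Denser networks (more edges per node) expose more discordant partnerships per step, consistent with the higher reinfection and less favorable ICERs reported above.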
The impact of twice-yearly influenza vaccination given waning immunity in seniors using the Framework for Reproducing Epidemiological Dynamics (FRED)
OP-046 Health Services, Outcomes and Policy Research (HSOP)
Katherine V Williams1, Mary G Krauland2, Mark S Roberts2, Richard K Zimmerman1
1Department of Family Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
2Public Health Dynamics Laboratory, University of Pittsburgh School of Public Health, Pittsburgh, PA, USA
Purpose: The purpose of this study was to determine if two flu vaccine doses in a season could reduce symptomatic flu cases in the elderly (age 65+), a group at high risk of death from flu, by providing early season (first dose) and late season (second dose) protection to offset immune waning.
Methods: We used the Framework for Reproducing Epidemiological Dynamics (FRED), an agent-based modeling platform (an agent being an individual person), to model two vaccination strategies based on the timing of the flu season. FRED tracks 1.2 million persons in the Pittsburgh region at their work, school, household, and neighborhood, allowing infection to spread in any locale. We used a one-season flu model with simulation starting Aug 15; a varied flu season start, resulting in peak cases from late Dec to end of Mar; an effective reproductive rate of 1.5; and increased susceptibility in individuals aged 65 and up. All infected individuals were symptomatic, and susceptibility was reduced to 0 after infection. For flu vaccination, 68.7% of agents 65+ were vaccinated, per CDC 2019 reporting; vaccine effectiveness (VE) was 40% for both doses; and waning VE was modeled at 7% and 10% per month. In the first set of models, all individuals were vaccinated beginning Sept 1. In the next set, individuals age 65+ who received the first dose on Sept 1 received a second dose beginning Jan 1. Vaccinations occurred over 45 days in both cases.
Results: In age 65+ with 7% VE waning, a second vaccine dose decreased total cases by up to 19% with a late March peak. With 10% VE waning, a second vaccine dose decreased total cases by 20-24% with an end of Feb to end of March peak.
Conclusions: Given variable flu season start dates, the optimal timing of vaccination is unknown; a second vaccine dose may offset this variability, particularly with faster waning of VE. For people age 65+, the proportion of cases prevented with a second dose, compared to a single dose, was greater for all season start and peak dates; a higher proportion was prevented with a later peak and with increased waning. Depending on the degree of waning and the season peak, a second vaccination could prevent up to nearly 1 in 4 estimated cases in vulnerable seniors.
Keywords: Influenza, vaccine, elderly, modeling, immunity
One vs. Two Dose, 7% vs. 10% waning
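The waning assumption in the Methods can be made concrete as follows. The linear percentage-point decline and the waning reset at the second dose are plausible readings only, since the abstract does not state the functional form:

```python
def waned_ve(ve0, monthly_waning, months_since_dose):
    """VE after waning, assuming an absolute decline of `monthly_waning`
    (e.g. 0.07 for 7 percentage points) per month, floored at zero."""
    return max(ve0 - monthly_waning * months_since_dose, 0.0)

def effective_ve(ve0, monthly_waning, month, second_dose_month=None):
    """Effective VE in a given month, with waning restarting at the
    second dose if one is given."""
    if second_dose_month is not None and month >= second_dose_month:
        return waned_ve(ve0, monthly_waning, month - second_dose_month)
    return waned_ve(ve0, monthly_waning, month)
```

Under this reading, a first dose at 40% VE waning at 10% per month offers no protection by month 4, which is why a January second dose matters most for late-peaking seasons.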
Cost-effectiveness of screening for renal cell carcinoma using renal ultrasound in high-risk men
OP-047 Applied Health Economics (AHE)
Praveen Kumar1, Hawre J. Jalal3, Cindy L. Bryce1, Bruce L. Jacobs2, Lindsay M. Sabik1, Mark S. Roberts1
1Department of Health Policy and Management, School of Public Health, University of Pittsburgh, Pittsburgh, USA
2Department of Urology, School of Medicine, University of Pittsburgh, Pittsburgh, USA
3School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
Purpose: In the last decade, screening for renal cell carcinoma (RCC) has been identified as a high priority area for further research. Since age, weight, and male sex are known risk factors for RCC, we assessed the cost-effectiveness of RCC screening strategies using renal ultrasound in 60-year-old overweight males.
Methods: We developed an individual-based discrete event simulation model that follows an individual from being at risk of developing RCC to death. The model uses birth year and risk factors such as age, sex, smoking, and weight. The model was calibrated to 1975-2018 data on RCC incidence, mortality, and tumor size from the SEER dataset using Bayesian calibration to obtain the joint posterior distribution for unobserved parameters such as tumor onset and tumor growth rates. The model simulates age at tumor initiation, the onset of the regional and distant stages, diagnosis, and death. We evaluated four renal ultrasound screening strategies in our target population of 60-year-old overweight (BMI>=25 kg/m2) males: (1) current practice of no screening, (2) screening at age 60, (3) screening at ages 60 and 65, and (4) screening at ages 60, 65, and 70. The costs included direct medical costs, while the benefits were measured in quality-adjusted life-years (QALYs). We conducted probabilistic and one-way sensitivity analyses. Both costs and benefits were discounted at a 3% rate, and a healthcare payer's perspective was adopted.
Results: In the target population, Strategy 2 resulted in mean incremental costs of $117 per person and incremental QALYs of 0.00190 per person compared to the No screening strategy (Figure 1). The ICER of screening policies varied from $61,659 for Strategy 2 to $141,033 per QALY gained for Strategy 4. At a willingness to pay (WTP) threshold of $100,000 per QALY gained, Strategy 2 was the most cost-effective option, with a 60% probability of being cost-effective, followed by Strategy 3, with a probability of 23%. From the sensitivity analyses, the most influential parameters were incidental detection rates, cost inputs, and tumor growth rate-specific parameters.
Conclusions: Our analysis suggests that one-time screening for RCC at age 60 among overweight males is likely cost-effective at a willingness to pay of $100,000/QALY gained.
Keywords: Kidney cancer, screening, decision analysis, simulation model, cost-effectiveness
Cost-effectiveness Frontier
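The "60% probability of being cost-effective" reported above is a point on a cost-effectiveness acceptability curve, computed from probabilistic sensitivity analysis (PSA) draws. A generic sketch; the strategy names and draws in the usage example are hypothetical:

```python
from collections import Counter

def ceac_point(psa_draws, wtp):
    """Share of PSA iterations in which each strategy has the highest net
    monetary benefit (NMB = wtp * QALYs - cost) at the given threshold.
    `psa_draws` is a list of dicts mapping strategy -> (cost, qalys)."""
    wins = Counter()
    for draw in psa_draws:
        best = max(draw, key=lambda s: wtp * draw[s][1] - draw[s][0])
        wins[best] += 1
    return {s: c / len(psa_draws) for s, c in wins.items()}
```

Evaluating this at a range of `wtp` values traces out the full acceptability curve for each strategy.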
Cost-Effectiveness of Enfortumab Vedotin in Previously Treated Advanced Urothelial Carcinoma
OP-048 Applied Health Economics (AHE)
Shirley Fan, William W.L. Wong
School of Pharmacy, University of Waterloo, Kitchener, Canada
Purpose: Urothelial carcinoma is the most common form of bladder cancer, accounting for almost 90% of all bladder cancers. Approximately 10-15% of patients with bladder cancer are diagnosed at the metastatic stage, in which the prognosis is poor, with a 5-year survival rate of 5%. The EV-301 clinical trial demonstrated that enfortumab vedotin (EV), an antibody-drug conjugate, improved overall survival (OS) in this patient population. Our study’s purpose is to determine the cost-effectiveness of EV in patients with advanced urothelial carcinoma (UC) who were previously treated with platinum-containing chemotherapy and immune checkpoint protein therapies (PD-1/PD-L1).
Methods: A state-transition model for a patient cohort who had UC disease progression within 12 months after completion of platinum-containing chemotherapies and PD-1/PD-L1 regimens was used to perform a cost-utility analysis comparing the cost-effectiveness of EV to standard chemotherapies. The model consists of three health states: progression-free disease (PFD), progressive disease (PD), and death. Grade 3 and 4 adverse events were built into our model for each treatment arm. Treatment effect estimates and adverse event rates were obtained from the EV-301 study. Costs, utilities, and other model inputs were gathered from published literature. Quality-adjusted life years (QALYs) were used to measure health outcomes, and incremental cost-effectiveness ratios (ICERs) were compared with willingness-to-pay thresholds. We used a Canadian provincial ministry of health perspective, a 5-year time horizon, and a 1.5% discount rate for the analysis. Sensitivity analyses were conducted to assess the robustness of the results.
Results: Compared with chemotherapy, EV increased costs by $134,873 and effectiveness by 0.09 QALYs, resulting in an ICER of $1,502,394 per QALY gained. At a willingness-to-pay (WTP) threshold of $100,000/QALY, EV is not a cost-effective treatment option for UC. Our sensitivity analysis indicated that the utility value for progression-free survival and the cost of EV have the largest effects on the ICER.
Conclusions: Although EV offers clinical benefit for the treatment of UC over current standard chemotherapies, it is not cost-effective at its current list price. A 92% price reduction would be needed for EV to be cost-effective at a WTP threshold of $100,000/QALY. As a result, in a publicly funded health care system where cost considerations constrain the choice of therapy, EV is unlikely to be a cost-effective option for most patients with advanced UC.
Keywords: Cost-effectiveness analysis, Enfortumab Vedotin, Urothelial Carcinoma, Drug therapy, Programmed cell death 1 ligand 1 protein
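The arithmetic behind the reported ICER and required price reduction follows directly from the definitions: the rounded figures above ($134,873 and 0.09 QALYs) give roughly $1.5M per QALY, close to the reported $1,502,394 computed from unrounded inputs. The `drug_cost` value in the usage example is an illustrative figure, not a value from the study:

```python
def icer(delta_cost, delta_qalys):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qalys

def price_cut_to_meet_wtp(delta_cost, delta_qalys, drug_cost, wtp):
    """Fractional drug-price reduction needed for the ICER to fall to
    `wtp`, assuming incremental cost moves dollar-for-dollar with price."""
    excess = delta_cost - wtp * delta_qalys
    return max(excess / drug_cost, 0.0)
```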
An Evaluation of Long-term Benefits and Harms of Risk Reducing Medication in High-Risk Women: A Simulation Modeling Study
OP-049 Health Services, Outcomes and Policy Research (HSOP)
Jinani Jayasekera1, Amy Zhao1, Kathryn Lowry5, Jennifer M Yeh7, Karen J Wernli4, Marc Schwartz1, Suzanne O’neill1, Claudine Isaacs1, Allison W Kurian3, Natasha Stout6, Clyde Schechter2
1Georgetown-Lombardi Comprehensive Cancer Center, Washington, DC, USA.
2Departments of Family and Social Medicine and Epidemiology and Population Health, Albert Einstein College of Medicine, Bronx, NY, USA
3Departments of Medicine and of Epidemiology and Population Health at Stanford University School of Medicine, Stanford, California, USA
4Kaiser Permanente Washington Health Research Institute, Seattle, Washington, USA.
5Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington, USA
6Department of Population Medicine, Harvard Medical School, Harvard Pilgrim Healthcare Institute, Boston, Massachusetts, USA
7Department of Pediatrics, Harvard Medical School, Boston Children’s Hospital, Boston, Massachusetts, USA
Purpose: To quantify the long-term benefits and harms of primary prevention with risk reducing medication (RRM) in high-risk women receiving screening mammography.
Methods: We adapted an established Cancer Intervention and Surveillance Modeling Network (CISNET) breast cancer discrete event microsimulation model to provide data on the long-term benefits and harms of RRM and annual screening in 35-year-old women with a 3% or greater five-year risk of developing breast cancer due to a history of lobular carcinoma in situ (LCIS) and a family history of breast cancer. The model followed a simulated cohort of millions of US women from birth to death. We used large observational and clinical trial data to derive input parameters for cohort-specific birth rates, breast cancer risk factors, breast cancer incidence and stage, screening performance, survival by age, stage, and subtype, efficacy of RRM, side effects, and other-cause mortality. We compared the long-term benefits and harms of 4 strategies combining annual 3D mammography screening, RRM with 5 years of tamoxifen, and supplemental screening with magnetic resonance imaging (MRI), starting at age 35 and stopping screening at age 74. The incremental benefits and harms for each strategy were estimated relative to high-risk women who did not receive screening, RRM, or treatment during their lifetime. The benefits included change in breast cancer incidence, breast cancer deaths, and life-years gained (LYG). The harms included side effects of tamoxifen, false positives, and overdiagnosis with screening.
Results: Annual screening + MRI and RRM resulted in the highest number of LYG per 1000 women screened or treated with 5-years of RRM. Addition of RRM to annual screening with or without MRI resulted in an additional 51-53% decrease in invasive breast cancers detected, and a 13-14% decrease in breast cancer deaths. Annual screening + MRI showed a higher number of false positives per 1000 women screened compared to annual screening with or without RRM. RRM could result in a small increase in venous thromboembolism and endometrial cancer events per 1000 women treated.
Conclusions: RRM reduces the risk of hormone-receptor positive breast cancer, and combining RRM with mammography results in earlier detection. Simulation modeling provides data on the long-term benefits and harms of screening and RRM to facilitate discussions about breast cancer prevention and early detection among high-risk women seen in clinical practice.
Keywords: modeling, risk reducing medication, screening, high-risk women
Long-term Benefits and Harms of Risk Reducing Medication and Annual Breast Cancer Screening in Women at High-Risk of Developing Breast Cancer
| Strategy | Life-Years Gained (LYG) per 1000 | Decrease in Invasive Breast Cancers Detected | % Invasive Breast Cancers Avoided due to Risk Reducing Medication Alone | Breast Cancer Deaths Averted | % Breast Cancer Deaths Avoided due to Risk Reducing Medication Alone | False Positives | Overdiagnosis | Side Effects of Risk Reducing Medication for a Duration of Five-years: Venous Thromboembolism | Side Effects of Risk Reducing Medication for a Duration of Five-years: Endometrial cancer | LYG per 1000 mammograms |
|---|---|---|---|---|---|---|---|---|---|---|
| Annual Screening (35,74) | 2834 | 89 | - | 131 | - | 2916 | 23 | - | - | 96 |
| Annual Screening (35,74) + Risk Reducing Medication | 3302 | 191 | 53% | 151 | 14% | 3122 | 23 | 3 | 5 | 103 |
| Annual Screening + MRI (35,74) | 2908 | 95 | - | 134 | - | 4988 | 34 | - | - | 98 |
| Annual Screening + MRI (35,74) + Risk Reducing Medication | 3350 | 195 | 51% | 153 | 13% | 5341 | 31 | 3 | 5 | 105 |
The table summarizes the incremental long-term benefits and harms of four screening and risk reducing medication strategies per 1000 high-risk women screened/treated with 5 years of risk reducing medication (tamoxifen), compared to high-risk women who received no screening, risk reducing medication, or breast cancer treatment. These women have a 3% or greater 5-year risk of developing breast cancer due to a history of LCIS and a family history of breast cancer.
Using genomic heterogeneity to inform therapeutic decisions for metastatic colorectal cancer: an application of the Value of Heterogeneity framework
OP-050 Applied Health Economics (AHE)
Reka E Pataky1, Stuart Peacock2, Stirling Bryan3, Mohsen Sadatsafavi4, Dean A Regier1
1Canadian Centre for Applied Research in Cancer Control, BC Cancer, Vancouver BC, Canada; School of Population and Public Health, University of British Columbia, Vancouver BC, Canada
2Canadian Centre for Applied Research in Cancer Control, BC Cancer, Vancouver BC, Canada; Faculty of Health Sciences, Simon Fraser University, Burnaby BC, Canada
3School of Population and Public Health, University of British Columbia, Vancouver BC, Canada
4Respiratory Evaluation Sciences Program, Collaboration for Outcomes Research and Evaluation, Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver BC, Canada
Purpose: Mutations in KRAS and NRAS are predictive of poor response to cetuximab and panitumumab, two anti-EGFR monoclonal antibodies used in metastatic colorectal cancer (mCRC). The purpose of this study was to characterize the net-monetary benefit (NMB) of using KRAS and NRAS mutation status to inform third-line anti-EGFR therapy for mCRC in British Columbia (BC), Canada, using the value of heterogeneity (VOH) framework.
Methods: We used administrative data to identify mCRC patients who were potentially eligible for third-line therapy in 2006–2019. Cohort identifiers were linked to BC Cancer treatment records and provincial administrative data. We compared three alternative stratification decisions that were in place during different parts of the study period: the full-population decision, in which anti-EGFR therapy was not offered (2006-2009); a subgroup-level decision stratified by KRAS mutation status (2009-2016); and a subgroup-level decision stratified by KRAS and NRAS mutation status (2016-2019). We used inverse probability of treatment weighting, with propensity scores estimated from generalized boosted models, to balance covariates across the three groups. Cost and survival time were calculated using a 3-year time horizon and adjusted for censoring. Bootstrapping was conducted to characterize uncertainty. Mean and incremental NMB were calculated at a range of threshold values. Using the VOH framework, the value of using KRAS or NRAS mutation status in treatment selection was calculated as the static VOH (sVOH; the difference in NMB between the stratified and unstratified decisions under current information) and the dynamic VOH (dVOH; the difference in NMB between the stratified and unstratified decisions under perfect information).
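The NMB and sVOH definitions in the Methods can be sketched numerically. The following is an illustrative calculation with invented subgroup outcomes and shares (none of these figures come from the study); it only demonstrates the arithmetic of NMB = threshold × effect − cost and sVOH = NMB(stratified) − NMB(unstratified).

```python
# Illustrative sketch of the NMB and static VOH arithmetic; all numbers below
# are hypothetical, not the study's results.

LAMBDA = 100_000  # willingness-to-pay threshold, CAD per life-year gained

def nmb(life_years: float, cost: float, wtp: float = LAMBDA) -> float:
    """Net-monetary benefit of one decision option."""
    return wtp * life_years - cost

# Hypothetical subgroup outcomes under a stratified decision
# (treat KRAS wild-type, do not treat KRAS mutant) vs. one unstratified policy.
subgroups = [
    # (population share, life-years, cost) under the stratified decision
    (0.55, 1.10, 52_000),   # KRAS wild-type, offered anti-EGFR therapy
    (0.45, 0.95, 30_000),   # KRAS mutant, no anti-EGFR therapy
]
unstratified = (1.00, 0.98, 40_000)  # everyone managed the same way

nmb_stratified = sum(w * nmb(ly, c) for w, ly, c in subgroups)
nmb_unstratified = nmb(unstratified[1], unstratified[2])
svoh = nmb_stratified - nmb_unstratified
print(f"sVOH per patient: CAD${svoh:,.0f}")
```

With these made-up inputs the stratified decision yields an sVOH of CAD$3,150 per patient; comparing that figure against the per-patient testing cost mirrors the comparison reported in the Results.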
Results: We included 2,664 patients in the analysis. At CAD$100,000/LYG, offering anti-EGFR therapy to KRAS wild-type patients (2-subgroup) provides an sVOH of CAD$1,565 per patient; further stratification on NRAS (4-subgroup) provides an additional sVOH of CAD$594. Mean testing costs for the 2-subgroup and 4-subgroup decisions are CAD$215 and CAD$713, respectively; the sVOH exceeds the testing cost under both scenarios. Resolving subgroup-specific uncertainty in the 2-subgroup and 4-subgroup decisions could provide an additional dVOH of CAD$2,306 and CAD$1,316, respectively.
Conclusions: Stratification of anti-EGFR therapy by KRAS and NRAS mutation status can provide additional value at a threshold of CAD$100,000/LYG. There is diminishing marginal value and increasing marginal costs as the decision becomes more stratified. The VOH framework can illustrate the value of subgroup-specific decisions in a more comprehensive way than conventional analysis.
Keywords: real-world data, precision medicine, cancer, cost-effectiveness analysis, heterogeneity
Value of heterogeneity under current and perfect information, at a threshold of CAD$100,000/LYG
A cancer biomarker modeling framework to optimize treatment strategies
OP-051 Quantitative Methods and Theoretical Developments (QMTD)
David Ulises Garibay-Treviño1, Karen M. Kuntz2, Fernando Alarid Escudero3
1Center for Research and Teaching in Economics (CIDE)
2Division of Health Policy and Management, University of Minnesota School of Public Health
3Division of Public Administration, Center for Research and Teaching in Economics (CIDE)
Purpose: Tumor- and host-specific genomic biomarkers are relevant for tailoring treatment for cancer patients. Some biomarkers are associated with higher progression rates (i.e., prognostic significance) or greater benefit from a particular therapy (i.e., predictive significance). We propose a general modeling framework to evaluate the economic value of genomic testing in cancer patients and showcase the framework with a practical example based on a previously published analysis.
Methods: We developed a continuous-time, age- and time-dependent state-transition model to simulate a cohort of newly diagnosed cancer patients in discrete time. The model can simulate the whole cohort simultaneously (cohort model) or different individuals independently (microsimulation) for different cycle lengths derived using a matrix exponential approach. Individuals in the model face a parametric hazard of recurrence—options include exponential, Gompertz, Weibull, and generalized gamma—as a function of age and stage of diagnosis. A prognostic biomarker alters the risk of recurrence and could alter the benefit of treatment and survival following relapse. We showcase the model to evaluate the costs, effectiveness, and cost-effectiveness of different test-and-treat strategies in average-risk stage II colon cancer patients: (1) test for the absence of the CDX2 biomarker followed by adjuvant chemotherapy for patients without the biomarker, (2) adjuvant chemotherapy for all patients, and (3) no adjuvant chemotherapy for any patient. We conducted deterministic and probabilistic sensitivity analyses.
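The matrix-exponential step mentioned above can be illustrated concretely: given a continuous-time rate matrix Q, the transition-probability matrix for any cycle length dt is P(dt) = exp(Q·dt). The sketch below uses a hypothetical three-state rate matrix (the states and rates are invented for illustration, not taken from the published model):

```python
import numpy as np

def expm(m: np.ndarray, terms: int = 40) -> np.ndarray:
    """Matrix exponential via a truncated Taylor series (adequate for small Q*dt)."""
    out = np.eye(m.shape[0])
    term = np.eye(m.shape[0])
    for k in range(1, terms):
        term = term @ m / k
        out = out + term
    return out

# Hypothetical states: 0 = disease-free, 1 = recurrence, 2 = dead. Rows sum to 0.
Q = np.array([
    [-0.15,  0.10, 0.05],   # hazards of recurrence and background mortality
    [ 0.00, -0.40, 0.40],   # mortality hazard after recurrence
    [ 0.00,  0.00, 0.00],   # dead is absorbing
])

for dt in (1.0, 1 / 12):                     # annual and monthly cycle lengths
    P = expm(Q * dt)
    assert np.allclose(P.sum(axis=1), 1.0)   # rows form valid probabilities
    print(f"dt={dt:.3f}: P(stay disease-free) = {P[0, 0]:.4f}")
```

Because P(dt) is derived from the same Q for every dt, cohort traces computed with annual and monthly cycles remain mutually consistent, which is the point of the matrix-exponential approach.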
Results: Testing for the absence of CDX2 expression followed by adjuvant chemotherapy for those without biomarker expression, compared with no adjuvant chemotherapy, had an ICER of $10,089 USD/QALY (7.55 vs. 7.35 QALYs, and $615,586 USD vs. $613,575 USD lifetime costs). The CDX2 testing strategy remains cost-effective under a range of assumptions, reaching an ICER of $5,633 USD/QALY assuming a test cost of $1,000 USD, and an ICER of $10,060 USD/QALY assuming 20% of patients lack CDX2 biomarker expression. The modeling framework is available as an open-source R package flexible enough to adapt to a wide range of scenarios defined by the presence of specific biomarkers. The R package includes an R Shiny app that simulates cohorts and generates summary statistics and plots based on user-defined input parameters.
Conclusions: Our modeling framework will help inform the optimal test-and-treat strategies in newly diagnosed patients by biomarker status and guide which potential future studies should be conducted.
Keywords: Biomarker, cancer, cost-effectiveness analysis, decision-analytic model, open source, state-transition model.
Trace of a simulated cohort from 65 to 99 years of age.
This plot shows the distribution of a hypothetical cohort in different health states over time.
Emulator-based Bayesian calibration of CISNET colorectal cancer models
OP-052 Quantitative Methods and Theoretical Developments (QMTD)
Carlos Pineda-Antunez1, Claudia Seguin2, Luuk van Duuren3, Amy Knudsen2, Barak Davidi2, Pedro Nascimento de Lima4, Carolyn Rutter4, Nicholson Collier5, Jonathan Ozik5, Fernando Alarid-Escudero1
1Center for Research and Teaching in Economics, Center for Research and Teaching in Economics (CIDE), Aguascalientes, Mexico
2Institute for Technology Assessment, Massachusetts General Hospital, Boston, MA, United States
3Department of Public Health, Erasmus MC Medical Center Rotterdam, The Netherlands
4RAND Corporation, Santa Monica, CA, United States
5Decision and Infrastructure Sciences Division, Argonne National Laboratory, Argonne, IL, United States; Consortium for Advanced Science and Engineering, University of Chicago, Chicago, IL, United States
Purpose: To calibrate the Cancer Intervention and Surveillance Modeling Network (CISNET)’s SimCRC, MISCAN-Colon, and CRC-SPIN simulation models of the natural history of colorectal cancer (CRC) with Bayesian Calibration with Artificial Neural Networks (BayCANN), an emulator-based algorithm, and to validate the model-predicted outcomes against calibration targets.
Methods: We used a Latin hypercube sampling design of experiments to sample 50,000 parameter sets for each CISNET CRC simulation model and generated the corresponding outputs. Using an optimizer that implements the Adam algorithm, we trained multilayer perceptron artificial neural networks (ANNs) as emulators, using the sampled inputs and outputs for each CISNET model. We selected optimal ANN structures with corresponding hyperparameters (i.e., number of hidden layers, nodes, activation functions, epochs, and optimizer) that minimized the predicted mean square error on the validation sample. We implemented the selected ANNs as emulators of the CISNET models in a probabilistic programming language. We then performed Hamiltonian Monte Carlo-based Bayesian calibration to obtain the joint posterior distributions of the simulation model parameters. We internally validated each calibrated emulator by comparing the model-predicted outputs from the posterior distributions against the calibration targets.
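The Latin hypercube design step described above can be sketched in a few lines: each parameter's range is cut into n equal strata, one draw is taken per stratum, and the strata are permuted independently per parameter so the samples cover the space evenly. The parameter bounds below are invented for illustration, not the CISNET models' actual ranges.

```python
import numpy as np

def latin_hypercube(n_samples: int, bounds: list[tuple[float, float]],
                    rng: np.random.Generator) -> np.ndarray:
    """Draw n_samples points with exactly one sample per stratum per parameter."""
    d = len(bounds)
    # One uniform draw inside each of the n strata, per parameter.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, d))) / n_samples
    for j in range(d):                       # permute strata independently per column
        u[:, j] = u[rng.permutation(n_samples), j]
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(0)
bounds = [(0.0, 1.0), (0.01, 0.2)]           # two hypothetical model parameters
X = latin_hypercube(1000, bounds, rng)

# Each of the 1000 equal strata of the first parameter holds exactly one sample.
strata = np.floor(X[:, 0] * 1000).astype(int)
assert len(set(strata)) == 1000
```

The same call with n_samples=50_000 and the real parameter bounds would produce the kind of input design the abstract describes, to be paired with the simulators' outputs for emulator training.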
Results: The optimal ANN for SimCRC had three hidden layers and 450 hidden nodes, and MISCAN-Colon and CRC-SPIN ANNs had one hidden layer and 600 and 140 hidden nodes, respectively. The emulators showed a prediction accuracy (percentage of predictions that match the model outputs) of 95%, 92%, and 86% for SimCRC, MISCAN-Colon, and CRC-SPIN models, respectively. The total time for training and calibrating the emulators was 144, 85, and 24 minutes for SimCRC, MISCAN-Colon, and CRC-SPIN, respectively. The joint posterior distribution sampled from BayCANN produced outputs in the desired ranges for most targets. The mean of the model-predicted outputs fell within the 95% confidence intervals of the calibration targets in 98 of 110 targets for SimCRC and 63 of 87 targets for MISCAN. Figure 1 shows the validation for CRC incidence and adenoma prevalence targets of SimCRC and MISCAN-Colon models.
Figure 1.
Validation for prevalence and incidence targets.
Conclusions: Using ANN as emulators in Bayesian calibration is a practical solution to reduce the computational burden and efficiently calibrate complex simulation models like the CISNET CRC microsimulation models using only a dataset of the models’ inputs and the corresponding outputs.
Keywords: Bayesian calibration, Emulator, Artificial neural networks, colorectal cancer model
Distribution by income quintile of health gains and financial protection from novel tuberculosis vaccines in low- and middle-income countries
OP-053 Health Services, Outcomes and Policy Research (HSOP)
Allison Portnoy1, Rebecca A. Clark2, Matthew Quaife2, Chathika Weerasuriya2, Christinah Mukandavire2, Roel Bakker3, Mark Jit4, Richard G. White2, Nicolas A. Menzies5
1Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, USA
2TB Modelling Group, London School of Hygiene and Tropical Medicine, London, UK; Centre for the Mathematical Modelling of Infectious Diseases, London School of Hygiene and Tropical Medicine, London, UK; Department of Infectious Disease Epidemiology, London School of Hygiene and Tropical Medicine, London, UK
3TB Modelling Group, London School of Hygiene and Tropical Medicine, London, UK; Centre for the Mathematical Modelling of Infectious Diseases, London School of Hygiene and Tropical Medicine, London, UK; Department of Infectious Disease Epidemiology, London School of Hygiene and Tropical Medicine, London, UK; KNCV Tuberculosis Foundation, The Hague, Netherlands
4Centre for the Mathematical Modelling of Infectious Diseases, London School of Hygiene and Tropical Medicine, London, UK; Department of Infectious Disease Epidemiology, London School of Hygiene and Tropical Medicine, London, UK; School of Public Health, University of Hong Kong, Hong Kong SAR, China
5Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, USA; Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA, USA
Purpose: Many households in low- and middle-income countries (LMICs) face catastrophic costs due to out-of-pocket expenses for tuberculosis (TB). Safe and effective TB vaccines are needed to accelerate TB control, and several promising candidates exist. We assessed the potential health gains and financial risk protection from introducing novel TB vaccines, and how these benefits would be distributed across income quintiles.
Methods: We developed a system of epidemiological and economic models, calibrated to demographic, epidemiological, and health service data in 105 LMICs. For each country, we assessed the likely future course of TB-related outcomes, distributional health gains, and changes in household financial vulnerability to catastrophic costs under alternative scenarios for both infant and adolescent/adult vaccine products compared to a ‘no-new-vaccine’ counterfactual. We defined catastrophic costs as instances where the patient costs incurred during an episode of TB disease—sum of direct medical costs (e.g., treatment costs), direct non-medical costs (e.g., transport and food during visits), and indirect costs (e.g., loss of earnings)—exceeded 20% of total annual household income.
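The catastrophic-cost definition in the Methods reduces to a simple indicator, sketched below with invented figures (the costs and income are illustrative only, not study data):

```python
# Minimal sketch of the catastrophic-cost definition: an episode is
# "catastrophic" when direct medical, direct non-medical, and indirect costs
# together exceed 20% of annual household income.

THRESHOLD = 0.20

def is_catastrophic(direct_medical: float, direct_nonmedical: float,
                    indirect: float, annual_income: float) -> bool:
    total = direct_medical + direct_nonmedical + indirect
    return total > THRESHOLD * annual_income

# A hypothetical episode: $120 treatment, $60 transport/food, $150 lost wages
# against a $1,200 annual household income -> 27.5% of income, catastrophic.
print(is_catastrophic(120, 60, 150, 1200))  # True
```

Applying this indicator to simulated households under each vaccine scenario, and differencing against the no-new-vaccine counterfactual, yields the "households facing catastrophic costs averted" quantities reported in the Results.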
Results: The number of TB cases averted by vaccine introduction (for both vaccine products) was greatest in the lower income categories (~56% of benefits in poorest two quintiles), due to the concentration of TB burden in these groups. Over the 2028–2050 period, 7.1 (95% uncertainty range: 5.8–8.6) million fewer households were projected to face catastrophic costs with the infant vaccine, and 32.8 (26.9–39.5) million with the adolescent/adult vaccine. The largest absolute reduction in households facing catastrophic costs was in lower income quintiles (Figure 1). Reductions in patient out-of-pocket spending for direct medical costs were projected to be $25.7 ($22.1–29.6) billion globally over 2028–2050 with the infant vaccine, and $193.8 ($167.0–224.8) billion with the adolescent/adult vaccine. The absolute amount of averted costs was concentrated in higher income groups (~75% in wealthiest two quintiles), due to their greater likelihood of accessing care (for those developing TB), and greater spending per episode of TB care.
Figure 1.
Cases of catastrophic costs averted by quintile comparing infant vaccine to adolescent/adult vaccine.
Conclusions: Under a range of assumptions, TB vaccination would be highly impactful and help narrow income-based health disparities in most LMICs. The results of these analyses will inform decision-making around TB vaccine development and introduction by global and country stakeholders.
Keywords: tuberculosis; equity; vaccines
Isolating the Drivers of Racial Disparities in Prostate Cancer Treatment
OP-054 Health Services, Outcomes and Policy Research (HSOP)
Noah Hammarlund1, Anirban Basu2, John Gore4, Sarah Holt4, Jenney Lee4, Erika Wolff4, Ruth Etzioni3, Yaw Nyame4
1Department of Health Services Research, Management and Policy, University of Florida, Gainesville, USA
2CHOICE Institute, University of Washington, Seattle, USA
3Fred Hutchinson Cancer Research Center, Seattle, USA
4Department of Urology, University of Washington, Seattle, USA
Purpose: Black individuals are less likely to receive potentially lifesaving medical interventions than their White counterparts in the US, a phenomenon rooted in social and health factors that are shaped by structural determinants. Prostate cancer is a common disease, affecting nearly 1 in 8 men during their lifetime, and demonstrates the widest inequity in cancer-related death: Black men are twice as likely to die from prostate cancer as their White counterparts. Black men are also less likely to receive definitive local therapy for prostate cancer, despite the high rate of cure with early detection and treatment. Here, we assess whether the disparity in prostate cancer treatment is driven by differences in patient health characteristics or by care delivery (i.e., how health characteristics translate into care decisions), using a machine learning-based decomposition approach.
Methods: We analyzed a sample of Medicare beneficiaries in the Surveillance, Epidemiology, and End Results prostate cancer registry using the Kitagawa-Oaxaca-Blinder decomposition approach. Our application decomposes the 4.3 percentage point difference in definitive treatment between Black and White prostate cancer patients into two portions: (1) group-level differences in health predictors of treatment (age, cancer risk, and comorbidities), which we define as differences in health, and (2) group-level differences in how those health predictors associate with definitive treatment, which we define as differences in healthcare. We selected important health predictors without overfitting by using a machine learning prediction algorithm.
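The two-fold decomposition described above can be sketched on synthetic data. The example below constructs two groups with identical predictor distributions but different predictor-to-treatment relationships, so the gap is, by construction, a "healthcare" (coefficient) gap; all data are simulated, not the SEER-Medicare sample.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ols(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """OLS with an intercept, via least squares."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Synthetic groups: same predictor distributions, different intercepts
# (a pure "differences in healthcare" gap by construction).
n = 5000
Xa = rng.normal(0.0, 1.0, (n, 2))
Xb = rng.normal(0.0, 1.0, (n, 2))
ya = 0.600 + Xa @ np.array([0.05, 0.02]) + rng.normal(0, 0.01, n)
yb = 0.557 + Xb @ np.array([0.05, 0.02]) + rng.normal(0, 0.01, n)

ba, bb = fit_ols(Xa, ya), fit_ols(Xb, yb)
xa = np.concatenate([[1.0], Xa.mean(axis=0)])
xb = np.concatenate([[1.0], Xb.mean(axis=0)])

gap = ya.mean() - yb.mean()
explained = (xa - xb) @ bb      # differences in health predictors
unexplained = xa @ (ba - bb)    # differences in how predictors map to care
assert abs(gap - (explained + unexplained)) < 1e-9  # decomposition is exact
print(f"gap={gap:.3f} explained={explained:.3f} unexplained={unexplained:.3f}")
```

Because OLS residuals sum to zero, the decomposition identity is exact: the total gap splits cleanly into the "health" and "healthcare" terms, which is the structure of the finding reported in the Results.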
Results: Our analysis shows that none of the observed difference in prostate cancer treatment utilization can be explained by differences in baseline health characteristics between Black and White men. Treatment differences by race are instead entirely explained by differences in how prostate cancer care is delivered.
Conclusions: Our decomposition demonstrates the inadequacy of the explanation that differences in patient health drive the treatment disparity in prostate cancer. Our findings are consistent with race being a social construct and suggest that understanding broader social and structural factors is an important step in understanding the association between race and treatment disparities in prostate cancer. Durable solutions to inequities in prostate cancer care and outcomes will require a patient-centered approach that engages and empowers Black communities and relevant stakeholders in developing interventions that ensure the equitable delivery of quality care.
Keywords: African American, Black, prostate cancer, treatment, racial disparities
Visual Abstract
Better for Whom? Exploring the influence of socioeconomic status and geography on the cancer care pathway and patient outcomes in England
OP-055 Health Services, Outcomes and Policy Research (HSOP)
Thomas Ward1, Antonieta Medina Lara1, Ruben E Mujica Mota2, Willie Hamilton1, Anne E Spencer1
1Health Economics Group, College of Medicine and Health, University of Exeter
2Academic Unit of Health Economics, School of Medicine, University of Leeds
Purpose: Published evidence has highlighted disparities in cancer outcomes by socioeconomic status (SES) and geographical location in England, but less is known about the relationship between these social attributes and health care resource use, patient clinical characteristics, and the care pathway of such patients, including diagnosis and treatment patterns. The aim of this analysis was to explore and quantify the impact of socioeconomic and geographical heterogeneity on the clinical care pathway of colorectal, lung and ovarian cancer patients, and their subsequent survival and resource use outcomes.
Methods: Adult patients with a diagnosis of colorectal, lung or ovarian cancer on or after 1 January 2013 were identified from the Clinical Practice Research Datalink (CPRD). CPRD data were linked to National Health Service Hospital Episode Statistics data, National Cancer Registration and Analysis Service data, and Office for National Statistics mortality data. Survival was assessed from the time of cancer diagnosis, focusing initially on patient groups stratified by SES and geographical region and using concentration curves to illustrate observed inequalities. Subsequently, the impact of common influential factors on survival was evaluated through univariable and multivariable analyses. Resource use was quantified in terms of health care visits, health care specialist time, testing procedures, prescriptions, and treatments; both the likelihood of incurring health care resource use and the total costs of such resource use were estimated. A ‘non-cancer’ control group was used to assess relative differences in both survival and resource use.
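The concentration-curve analysis mentioned in the Methods is typically summarized by a concentration index. The sketch below computes one on synthetic data (the SES and survival values are simulated, not CPRD figures), using the convenient covariance form C = 2·cov(h, fractional SES rank)/mean(h), where C > 0 means the outcome is concentrated among the better-off.

```python
import numpy as np

def concentration_index(health: np.ndarray, ses: np.ndarray) -> float:
    """Concentration index of a health variable over an SES ranking."""
    order = np.argsort(ses)                    # poorest first
    h = np.asarray(health, dtype=float)[order]
    n = len(h)
    rank = (np.arange(1, n + 1) - 0.5) / n     # fractional SES rank
    return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()

rng = np.random.default_rng(2)
ses = rng.random(4000)                                   # hypothetical SES scores
survival = 2.0 + 3.0 * ses + rng.normal(0, 0.5, 4000)    # richer live longer
ci = concentration_index(survival, ses)
print(f"concentration index = {ci:.3f}")  # positive: pro-rich distribution
```

A positive index here reflects the pro-rich survival gradient built into the synthetic data; applied to real cohort data, the same statistic quantifies the inequalities the concentration curves illustrate.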
Results: Survival outcomes were notably correlated with SES, geographical location, and several other attributes, including age, gender, ethnicity, comorbidity status, and cancer stage at diagnosis. Furthermore, patient groups in different socioeconomic strata and geographical regions followed subtly different clinical care pathways and differed in clinical characteristics, compounding differences in outcomes. Health care resource use increased significantly after cancer diagnosis and was similarly associated with SES and geographical location, although these associations were attenuated when adjusting for diagnosis stage and other relevant factors.
Conclusions: Patient clinical and demographic heterogeneity impact on the clinical care pathway, survival and health care resource use. In the discussion we debate the implications of these findings and how they might aid decision makers in defining a clearer pathway towards equitable health care resource allocation.
Keywords: Inequality, equity, heterogeneity, cancer, survival
Forecasting Racial and Ethnic Outcomes Disparities in United States Adults with Newly-Diagnosed Type 2 Diabetes
OP-056 Health Services, Outcomes and Policy Research (HSOP)
Zachary Newman1, Aaron N Winn2, Andrew J Karter3, Elbert Huang1, Amber Deckard1, Neda Laiteerapong1
1Department of Medicine, University of Chicago, Chicago, USA
2Department of School of Pharmacy Administration, Medical College of Wisconsin, Milwaukee, USA
3Division of Research, Kaiser Permanente, Oakland, USA
Purpose: Simulation modeling has been used in type 2 diabetes research since 1997 and allows researchers to investigate the clinical impact and cost-effectiveness of proposed interventions at the population level. The majority of current models are based on population data unrepresentative of the social diversity of the United States. The Diabetes Model of the United States (DOMUS), a novel simulation model developed from the Kaiser Permanente Northern California Diabetes Registry, includes five race and ethnicity covariates as well as the neighborhood deprivation index (NDI) in its risk equations. Using DOMUS, we sought to forecast racial/ethnic disparities in 13-year incident complication rates and mortality in an NHANES cohort representative of US adults newly diagnosed with type 2 diabetes.
Methods: We selected our cohort from the NHANES 2011-2018 survey cycles and included adults with a known duration of diabetes of less than one year. We simulated each individual for 1,000 iterations in DOMUS and compared their 13-year predicted rates of macro- and micro-vascular complications, hypoglycemia, dementia, and depression, as well as mortality, by racial/ethnic group using the chi-squared test.
Results: In the NHANES 2011-2018 combined cohort, 159 participants met our inclusion criteria, representing 1.42 million Americans with newly diagnosed type 2 diabetes. These individuals were, by weighted frequency, 65% non-Hispanic White (NHW), 11% non-Hispanic Black (NHB), 19% Hispanic, and 5% non-Hispanic Asian (NHA). The NHW subpopulation was significantly older than the NHB and Hispanic subpopulations, by 8.32 and 8.47 years on average, respectively (adjusted p = 0.022 and p = 0.013). We found significant differences in 13-year diabetes outcomes between racial and ethnic groups (Table 1). NHWs experienced the highest combined incidence rate of macrovascular complications and mortality, while NHAs and NHBs experienced the highest incidence rates of depression and hypoglycemia, respectively.
Table 1.
13-year Complications and Mortality by Race-Ethnicity
Conclusions: Although NHWs experienced the highest group rate of macrovascular complications and mortality over the 13-year simulation, this is likely due to their significantly older age at diagnosis. In contrast, despite the similar age at diagnosis and initial level of comorbidity of the NHB and Hispanic populations, NHBs fared significantly worse in terms of both macrovascular complication burden and mortality. Additionally, nonsignificant differences in estimated NDI quartile between these groups suggest a long-term impact of race-based harms on the NHB population.
Keywords: Simulation, Diabetes, Disparities
Disparities in shared decision-making discussions about PSA screening: Data from the National Health Interview Survey
OP-057 Decision Psychology and Shared Decision Making (DEC)
Kristin G. Maki, Naomi Q.P. Tan, Robert J. Volk
Department of Health Services Research, Division of Cancer Prevention and Population Sciences, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
Purpose: The purpose of this study is to identify sociodemographic factors related to whether clinicians discussed the advantages and disadvantages of prostate-specific antigen (PSA) screening with patients.
Methods: We conducted a cross-sectional analysis of data from the 2019 National Health Interview Survey (NHIS). Our analysis included males ages 40 and above who reported having a PSA test in the past year as part of a routine exam; participants who had previously been diagnosed with prostate cancer, or who reported having a PSA test for reasons other than as part of a routine exam were excluded from the analysis.
Self-reported outcomes included whether their doctor talked with them about advantages and disadvantages of the PSA test. Sociodemographic factors included age, education level, race and ethnicity, income-poverty ratio, health insurance coverage, having a usual place to get preventive care, and a 5-year mortality risk estimate.
We used the “survey” package in R to account for the complex survey design, and the sample was weighted to extrapolate to the US population. We used multinomial logistic regression for the analysis.
Results: The final weighted sample consisted of 17,417,715 male respondents (mean [SD] age = 64.0 [10.2] years), most of whom were non-Hispanic White (76.2%, weighted n = 13,265,579) and had an income-poverty ratio level above 2 (84.6%, weighted n = 14,727,188).
The results of our multinomial regression analysis show an increased likelihood of not having a discussion about either advantages or disadvantages (OR=1.74, 95% CI: 1.03-2.93, p <.05), compared to having a discussion about both, for respondents with less than a high school education compared to those who have a 4-year college education or above. We also found an increased likelihood of not having a discussion about either advantages or disadvantages (OR=1.50, 95% CI: 1.01-2.21, p <.05), compared to a discussion about both, for respondents with an income-poverty ratio less than 1 (indicating lower income), compared to those with an income-poverty ratio of 2 or higher (indicating higher income). Additional results are summarized in Table 1.
Table 1.
Factors Associated with Discussion of PSA screening Advantages and Disadvantages
| Variable | Discussion of advantages only vs. discussion of both, OR (95% CI) | Discussion of neither vs. discussion of both, OR (95% CI) |
|---|---|---|
| Intercept | 0.44 (0.15 - 1.32) | 0.99 (0.38 - 2.60) |
| Race and Ethnicity | | |
| Non-Hispanic White (referent) | - | - |
| Hispanic | 0.68 (0.42 - 1.12) | 0.76 (0.48 - 1.21) |
| Black or African American | 0.78 (0.49 - 1.22) | 0.91 (0.61 - 1.34) |
| Asian | 1.03 (0.49 - 2.14) | 1.34 (0.71 - 2.55) |
| Other single and multiple groups | 1.13 (0.47 - 2.72) | 1.20 (0.45 - 3.16) |
| Education Level | | |
| 4-year college or higher (referent) | - | - |
| Less than high school or GED | 0.69 (0.36 - 1.30) | 1.74 (1.03 - 2.93) |
| High school/GED | 1.18 (0.87 - 1.60) | 1.06 (0.78 - 1.44) |
| Some college | 0.91 (0.69 - 1.20) | 1.08 (0.82 - 1.42) |
| Income-Poverty Ratio | | |
| Income:Poverty ≥2 (referent) | - | - |
| Income:Poverty 1-2 | 1.18 (0.59 - 2.34) | 1.60 (0.95 - 2.69) |
| Income:Poverty <1 | 0.83 (0.54 - 1.28) | 1.50 (1.01 - 2.21) |
| Geographic Region within the US | | |
| Northeast (referent) | - | - |
| Midwest | 1.27 (0.84 - 1.91) | 1.16 (0.75 - 1.79) |
| South | 0.82 (0.58 - 1.16) | 0.90 (0.61 - 1.32) |
| West | 0.83 (0.57 - 1.20) | 0.92 (0.59 - 1.42) |
| Urbanicity | | |
| Large central metro (referent) | - | - |
| Large fringe metro | 1.23 (0.87 - 1.74) | 1.06 (0.74 - 1.50) |
| Medium and small metro | 1.01 (0.72 - 1.42) | 1.06 (0.77 - 1.46) |
| Nonmetropolitan | 1.26 (0.83 - 1.92) | 1.67 (1.11 - 2.52) |
| Health Insurance Status | | |
| Private insurance (referent) | - | - |
| Public insurance | 0.86 (0.63 - 1.17) | 0.80 (0.59 - 1.08) |
| Other insurance | 0.92 (0.61 - 1.37) | 0.84 (0.57 - 1.25) |
| No insurance | 0.47 (0.20 - 1.10) | 0.62 (0.27 - 1.42) |
| 5-Year Mortality Risk | | |
| 5-year mortality 5% (referent) | - | - |
| 5-year mortality 8% | 1.28 (0.87 - 1.89) | 1.01 (0.69 - 1.48) |
| 5-year mortality 12% | 1.39 (0.94 - 2.07) | 1.03 (0.71 - 1.52) |
| 5-year mortality 19% | 1.27 (0.81 - 2.00) | 0.91 (0.59 - 1.40) |
| 5-year mortality 29-37% | 1.00 (0.64 - 1.57) | 0.80 (0.52 - 1.24) |
| 5-year mortality 49-62% | 1.24 (0.73 - 2.09) | 0.93 (0.57 - 1.53) |
Results from the multinomial logistic regression where discussion of potential advantages and disadvantages of PSA screening is the dependent variable.
Conclusions: In our analysis, individuals with lower levels of education and higher levels of poverty were more likely to report not having a discussion of the advantages or disadvantages of PSA testing, compared with those with higher education and higher incomes.
Keywords: Shared decision-making, PSA screening, Health disparities
Evaluating a Breast Cancer Screening Decision Aid for Racially/Ethnically Diverse Women: A Pilot Trial
OP-058 Decision Psychology and Shared Decision Making (DEC)
Ashley J. Housten1, Michelle Eggers1, Maggie Britton2, Mary Politi1, Victoria Shaffer3, Jean Hunleth1, Kia Davis1, Rachel Mintz1, Fei Wan1, Diana S. Hoover4, Robert J. Volk5
1Division of Public Health Sciences, Department of Surgery, Washington University School of Medicine, St. Louis, MO USA
2Department of Psychological, Health, and Learning Sciences, University of Houston, Houston, TX USA
3Department of Psychological Sciences, University of Missouri, Columbia, MO USA
4Department of Health Disparities Research, Division of Cancer Prevention and Population Sciences, The University of Texas MD Anderson Cancer Center, Houston, TX USA
5Department of Health Services Research, Division of Cancer Prevention and Population Sciences, The University of Texas MD Anderson Cancer Center, Houston, TX USA
Purpose: Women in their 40s face a difficult decision about when to start and how frequently to undergo screening for breast cancer. These decisions are compounded by conflicting recommendations about screening initiation and frequency from professional organizations. This study developed and evaluated a breast cancer screening decision aid for racially and ethnically diverse women in their 40s.
Methods: We employed a user-centered design approach to develop a breast cancer decision aid. Specifically, we: 1) conducted focus groups to characterize breast cancer screening knowledge, attitudes, and intentions; 2) engaged a stakeholder advisory board; and 3) completed user-testing through 3 iterative cycles. Following the development of the decision aid, we evaluated the tool with an online sample. Participants were randomized to review either the decision aid (intervention) or breast cancer screening information (control; from the National Cancer Institute). Eligible participants were: 1) women 40-49 years; 2) able to read and write in English; 3) Latina, Black, or non-Latina White; and 4) women with no known increased risk for breast cancer. We used closed- and open-ended questions to evaluate breast cancer screening knowledge, decisional conflict (values clarity subscale), decision self-efficacy, preparation for decision making, acceptability, and intention to screen.
Results: 284 participants reviewed either the intervention (n=137 [48.2%]) or control (n=147 [51.8%]). Mean age was 44 years (IQR 5.0) and participants were racially diverse (39.8% White; 33.5% Black; 26.8% Latina/Hispanic). 125 (44.5%) had less than a college education and 83 (30.1%) had limited health literacy. Both arms increased breast cancer screening knowledge (intervention pre 50.9 vs. post 56.2; control pre 46.9 vs. post 54.4; post-score p-value=0.63). No significant differences were observed between arms for decisional conflict (values clarity subscale), decision self-efficacy, preparation for decision making, or intention to screen (all p>0.05). Sub-analyses revealed limited differences between racial/ethnic groups and by health literacy. Participants found both the intervention and control to be acceptable. Those in the intervention arm rated the values clarification and narratives components as acceptable.
Conclusions: This breast cancer screening patient decision aid was rated as highly acceptable but had a similar impact on knowledge and decision-making outcomes compared to high-quality educational materials. Future research evaluating the implementation of breast cancer screening information for women in their 40s could improve delivery among socially marginalized populations.
Keywords: Cancer, Oncology, Health Equity, Mammography, Decision Aids
Development of a visual risk display for cancer surgeons: a user-centered design approach
OP-059 Decision Psychology and Shared Decision Making (DEC)
Hung Jui Tan1, David Gotz2, Susan Blalock3, Antonia V. Bennett4, Dan S. Reuland5, Alex Sox Harris6, Matthew E. Nielsen1, Ethan Basch5
1Department of Urology, University of North Carolina, Chapel Hill, NC, USA; Lineberger Comprehensive Cancer Center, University of North Carolina, Chapel Hill, NC, USA
2Lineberger Comprehensive Cancer Center, University of North Carolina, Chapel Hill, NC, USA; School of Information and Library Science, University of North Carolina, Chapel Hill, NC, USA
3Pharmaceutical Outcomes & Policy, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC, USA
4Lineberger Comprehensive Cancer Center, University of North Carolina, Chapel Hill, NC, USA; Department of Health Policy & Management, Gillings School of Global Public Health, University of North Carolina, Chapel Hill, NC, USA
5Lineberger Comprehensive Cancer Center, University of North Carolina, Chapel Hill, NC, USA; Department of Medicine, University of North Carolina, Chapel Hill, NC, USA
6Department of Surgery, Stanford University, Palo Alto, CA, USA
Purpose: Many risk prediction tools have been developed to inform surgical decision-making, but these tools have had limited uptake and utility in real-world practice. To address these challenges, we applied a user-centered design approach to create a surgeon-facing visual risk display for cancer surgery.
Methods: For the user-centered design, we assembled a design team, expert panel, and user panel. The user panel consisted of 7 urologists with varied backgrounds and practices. These teams participated in 3 rounds of design, each round beginning with a user panel design session followed by review and iteration from the expert panel and design team. Round 1 consisted of individual design sessions where users shared their design needs, prioritized risk categories (0 to 100), and illustrated designs from which the design team created 3 low-fidelity mock-ups. In Round 2, users rated the mock-ups using a 5-point Likert scale (5 being the best score) on acceptability, usability, and overall satisfaction and provided feedback focusing on low-scoring areas. Following further review and iteration, users rated a high-fidelity mock-up for Round 3 followed by group discussion. Descriptive statistics were applied to the ratings, and coding-based analysis was applied to the discussions to generate higher order themes.
Results: Of multiple potential risks to include in a visual risk display, users gave the highest priority to life expectancy (mean 87), cancer mortality (mean 93), serious complications from surgery (mean 79), and adverse functional outcomes (mean 78). Thematically, key design elements included multi-dimensionality, simplicity, interactivity, framing/context, and multi-faceted communication. The Figure shows the Round 2 low-fidelity mock-ups and Round 3 high-fidelity mock-up. Among the low-fidelity mock-ups, users gave mock-up 1 a mean overall satisfaction rating of 3.0 (SD 1.3) due to wasted space and difficulty in comparing radial lines and mock-up 3 a mean rating of 2.7 (SD 1.1) due to complexity. Mock-up 2 received a mean rating of 4.1 (SD 1.5) for its simplicity, ease of use, and multi-dimensional design. In Round 3, the high-fidelity mock-up achieved a mean overall satisfaction rating of 4.7 (SD 0.5).
Figure.
Conclusions: User-centered design efficiently facilitated the development of a visual risk display for cancer surgery with high acceptability and usability potential. Future studies will examine how this display affects risk perception, surgical decision-making, and outcomes.
Keywords: Risk communication, user-centered design, data visualization, cancer surgery
An Online Application to Explain Community Immunity with Personalized Avatars: A Randomized Controlled Trial
OP-060 Decision Psychology and Shared Decision Making (DEC)
Hina Hakim1, Julie A Bettinger12, Christine T Chambers2, S. Michelle Driedger3, Eve Dubé4, Teresa Gavaruzzi5, Anik M. C. Giguere1, Noah M Ivers6, Anne Sophie Julien1, Shannon E Macdonald7, Rita Orji2, Elizabeth Parent1, Beate Sander8, Aaron M Scherer9, Kumanan Wilson10, Brian J Zikmund Fisher11, Daniel Reinharz1, Holly O Witteman1
1Laval University
2Dalhousie University
3University of Manitoba
4Institut national de santé publique du Québec
5University of Padova
6University of Toronto
7University of Alberta
8University Health Network, Toronto General Hospital
9University of Iowa
10University of Ottawa
11University of Michigan
12Vaccine Evaluation Center, BC Children’s Hospital
Purpose: To evaluate the effects of an intervention conveying the concept of community immunity (herd immunity) on risk perception, emotions, knowledge and vaccination intentions.
Methods: We previously developed an online application showing how community immunity works through a user-centered design process with 110 participants across 4 cycles. In our application, people personalize a virtual community by making avatars (themselves, 2 vulnerable people in their community, and 6 others). The application integrates these avatars in a 2-minute narrated animation. The present study evaluated this intervention in a randomized controlled trial among adults in Canada. We collected participants’ sociodemographic details and a validated measure of individualism and collectivism. We analyzed the application’s effects on the primary outcome, risk perception, divided into comprehension (accuracy) and feelings (subjective sense of risk), and the secondary outcomes of emotions (worry, anticipated guilt), knowledge, and vaccination intentions, using analyses of variance for continuous outcomes and logistic regressions for dichotomous outcomes. We pre-registered our trial, depositing all study materials (including pre-scripted analysis code) on Open Science Framework, then ran the trial March 1-July 1, 2021.
Results: Study participants (N=5516) approximately reflected Canadian adult population statistics, with median age 42 years (interquartile range 32-58), 50% women, 49% men (1% other answers), 79% white, 16% born outside Canada, 20% French-speaking, and 59% with college/higher education. The application had positive effects on all outcomes. People assigned to the application were more likely to score high on risk perception as comprehension (Chi-squared(1)=134.54, p<0.001) and risk perception as feelings (F(1,3875)=28.79, p<0.001) compared to those assigned to a control condition. The application also increased emotions (F(1,3875)=13.13, p<0.001), knowledge (F(1,3875)=36.37, p<0.001), and vaccination intentions (Chi-squared(1)=9.4136, p=0.002). Overall, participants with more collectivist orientations demonstrated more responsiveness to arguments about the collective benefits of widespread vaccination. Comparing our application to others available online, other interventions had weaker effects for named diseases (measles, flu) but stronger effects in the context of an unnamed, ‘vaccine-preventable disease.’
Conclusions: An online application about community immunity can contribute to higher-quality decision making (i.e., more accurate risk perception, greater knowledge, more concern for others, higher vaccination intentions) about recommended vaccines.
Keywords: Community immunity; Herd immunity; Vaccination; Visualization; Web-based application; Randomized controlled trial
Do a social proof nudge and a visual aid enhance the impact of a cancer risk algorithm on physicians’ judgments?
OP-061 Decision Psychology and Shared Decision Making (DEC)
Bence Palfi, Kavleen Arora, Olga Kostopoulou
Department of Surgery and Cancer, Imperial College London, London, UK
Purpose: Evidence-based algorithms are being developed in increasing numbers to improve health outcomes. In the UK, a critical area of application and a national priority is the earlier diagnosis of cancer. In a previous study, we found that family physicians improved their risk estimates and referral decisions about hypothetical patients suspected of having colorectal cancer when they received advice from an unnamed algorithm (Kostopoulou et al. 2022). This study aimed to replicate and extend these findings using a different cancer and a larger sample of physicians.
Methods: 215 GPs were presented with 20 clinical vignettes describing patients with symptoms that could indicate upper GI cancer. First, GPs provided their initial risk estimate and indicated their inclination to refer. They were then shown the risk score of an unnamed algorithm and could update their estimates and referrals if they wished. In a 2x2 between-groups design, half of the sample was provided with a social proof nudge describing how previous study participants had found the algorithm useful, whereas the other half received no such information. For each vignette, half of the sample saw a bar graph depicting the relative contribution of symptoms and risk factors to the risk score, while the other half did not receive the graph.
Results: Consistent with our previous study, the algorithm impacted risk estimates by 8%, on average, and changed inclination to refer 26% of the time. Importantly, decisions improved significantly post-algorithm according to the 3% NICE threshold (OR=1.57 [1.41, 1.74], p<.001). The provision of a social proof nudge enhanced the algorithm’s impact on risk estimates (b=1.28% [0.25, 2.30], p=.016) as well as on referrals (b=0.07 [0.01, 0.13], p=.018). However, the bar graph had no observable effect on behaviour. As in the previous study, we observed a learning effect: initial (unaided) risk estimates moved closer to the algorithm with increasing vignette order (b=-.11% [-.17, -.05], p<.001).
Conclusions: Our findings reinforce the idea that cancer risk calculators have the potential to improve cancer risk assessment and urgent referral decisions. When algorithms are introduced to clinical care, it is recommended to notify clinician-users about their proven usefulness to colleagues to maximise impact. The possibility of the algorithm to be applied as a learning tool for risk assessment gained further support and future research should investigate its long-term effect.
Keywords: cancer risk score, early detection of cancer, primary-care, human-algorithm interaction, nudge, explainable algorithm
Alternative Risk-Communication Approaches Impact Understanding with Associated Impacts on Choice Consistency in Discrete-Choice Experiments
OP-062 Patient and Stakeholder Preferences and Engagement (PSPE)
Shelby D Reed1, E. Hope Weissler2, Matthew J Wallace1, Jui Chen Yang1, Laura Brotzman3, Matthew A Corriere4, Eric A Secemsky5, Jessie Sutphin1, F. Reed Johnson1, Juan Marcos Gonzalez1, Michelle E Tarver6, Anindita Saha6, Allen L Chen6, David J Gebben6, Misti L Malone6, Andrew Farb6, Olufemi Babalola6, Brian J Zikmund Fisher6
1Duke Clinical Research Institute, Durham, NC
2Duke University School of Medicine, Durham, NC
3University of Michigan School of Public Health, Ann Arbor, MI
4University of Michigan Medical School, Ann Arbor, MI
5Beth Israel Deaconess Medical Center, Boston, MA
6U.S. Food and Drug Administration, Rockville, MD
Purpose: To test whether alternative approaches to presenting probabilistic risk information in discrete-choice experiments (DCE) influence respondents’ understanding of benefit-risk information being conveyed and whether their level of understanding is associated with choice consistency across DCE tasks.
Methods: Respondents ages 40-75 from the US were recruited by a commercial survey vendor and randomized to one of 7 versions of a survey testing alternative approaches to presenting 4 probabilistic attributes, representing two potential events at two time points, in DCE choice questions. To assess respondents’ understanding of the information portrayed, 12 multiple-choice questions were embedded throughout the training section of the survey instrument. To assess choice consistency, three internal validity tests and a series of 2-3 questions designed to test whether respondents were dominating on one of two selected attributes (i.e., 5-year repeat-procedure risk or 5-year mortality risk) were included across 7-8 choice questions.
Results: There were 2,610 respondents: mean age 59.8±10.4; 55.5% female; 21.9% Black/African American. Across the 7 risk-communication approaches, mean percentages of correct responses ranged from 69.8% to 78.4% (p<0.001). Percentages of respondents correctly answering all 12 questions ranged from 8.4% to 16.3% (p=0.002). Respondents randomized to any of the 3 versions combining risk of a repeat procedure, risk of dying, or neither in the same table cells, with or without icon risk arrays, consistently performed worse than respondents randomized to any of the 4 versions that separately presented risk information for each attribute. Across the 7 approaches, there were no significant differences in performance on internal validity tests; overall 71.3% passed the transitivity test (p=0.92), 87.4% selected the dominant alternative (p=0.65), and 76.1% chose the same alternative in a repeated choice question (p=0.55). However, respondents randomized to view separated risk information were more likely to make choices consistent with minimizing 5-year repeat-procedure risk than those randomized to view combined risk information (17.0% vs. 11.6%; p<0.001). Also, respondents with better understanding were more likely to exhibit consistent logic (p<0.001), controlling for survey version, demographics, numeracy and health literacy.
Conclusions: Combining information for different probabilistic attributes in DCE questions leads to poorer understanding of comparative benefit-risk information than separately presenting probabilities for each. Alternative risk-communication approaches can influence choices in DCEs.
Keywords: stated-preference research, risk communication, discrete-choice experiment
Contextualizing Care Through EHR Clinical Decision Support: A Randomized Clinical Trial
OP-063 Decision Psychology and Shared Decision Making (DEC)
Alan Schwartz1, Saul J. Weiner2, Frances Weaver3, William Galanter1, Sarah Olender1, Karl Kochendorfer1, Amy Binns-Calvey1, Ravisha Saini4, Sana Iqbal5, Monique Diaz6, Aaron Michelfelder7, Anita Varkey8
1University of Illinois at Chicago
2University of Illinois at Chicago, Jesse Brown VA Medical Center, & Center of Innovation for Complex Chronic Healthcare
3Loyola University Chicago, Edward Hines, Jr. VA Hospital, & Center of Innovation for Complex Chronic Healthcare
4University of Illinois at Chicago & Jesse Brown VA Medical Center
5Loyola University Chicago
6University of Illinois at Chicago & Pacific Central Coast Health Centers, Dignity Health
7Loyola University Chicago & Loyola University Health System
8Loyola University Health System & Oak Street Health
Purpose: Failures to consider relevant patient circumstances and behaviors when planning care, i.e., to “contextualize care,” are common, adversely affect healthcare outcomes, and increase costs. We sought to determine whether clinical decision support (CDS) tools in the electronic health record (EHR) improve clinician contextual probing, attention to contextual factors in care planning, and healthcare outcomes.
Methods: We conducted a randomized clinical trial (NCT03244033) at two academic medical centers with different EHR systems. Primary care patients completed a pre-visit questionnaire that elicited contextual red flags and factors. In the intervention arm, these red flags and factors, along with additional red flags the EHR culled from the medical record, were displayed in the clinician’s note template in a “contextual care box” (CCB), along with alerts and proposed relevant orders. In the control arm, patients completed the questionnaire but the CDS was not activated. Patients carried concealed audiorecorders, and blinded research assistants reviewed visit recordings and physician notes for additional red flags and factors, and to determine how often the clinician probed red flags and adapted care plans to identified factors. Blinded assistants later determined whether each red flag had resolved 4-6 months later.
Results: Across 452 completed encounters, the CDS increased contextual probing (AOR 2.1, 95% CI 1.1-3.9) and contextualization of the care plan (AOR 2.7, 1.3-5.4), controlling for whether a factor was identified by probing or otherwise. Contextual red flags were not more likely to resolve in the intervention vs. control arm (AOR 0.97, 0.57-1.64), but across study arms, contextualized care plans were more likely than non-contextualized plans to result in improvement in the red flag (AOR 2.1, 1.4-3.3). When the EHR populated the CCB or activated alerts for red flags available prior to the start of the visit, the likelihood of the clinician probing the red flag was significantly increased (AOR=3.6, 1.2-11.2), and when a factor was present, the likelihood of incorporating it into the plan also increased (AOR=11.3, 2.3-55.9). When a red flag was the subject of an alert, it was less likely to have worsened four months later (AOR=0.19, 0.05-0.73), beyond the effect of whether the plan was contextualized.
Conclusions: Clinical decision support tools can increase clinician attention to patient life context in care planning. Increased attention is associated with better outcomes.
Keywords: patient context, contextualization of care, contextual error, clinical decision support, electronic health record, randomized clinical trial
Figure.
Raw numbers of red flags, contextual factors, and patient outcomes by study arm.
Development and usability testing of a patient portal to enhance familial communication about hereditary cancer syndromes: A patient-driven approach
OP-064 Decision Psychology and Shared Decision Making (DEC)
Samantha Pollard1, Deirdre Weymann1, Rosalie Loewen1, Jennifer Nuk2, Sophie Sun2, Kasmintan A Schrader2, Chiquita Hessels4, Dean A Regier3
1Cancer Control Research, BC Cancer, Vancouver, Canada
2Hereditary Cancer Program, BC Cancer, Vancouver, Canada
3Cancer Control Research, BC Cancer, Vancouver, Canada; School of Population and Public Health, University of British Columbia, Vancouver, Canada
4Patient partner
Purpose: Genetic testing to identify hereditary cancer syndromes (HCS) can improve health outcomes for families through uptake of cancer risk mitigation strategies. Realizing patient and health system benefits from publicly funded testing programs requires effective communication between tested individuals and their genetic family members. Overcoming inter-familial communication barriers is critical to encouraging cascade testing and initiating strategies to reduce familial cancer burden. We developed and evaluated a patient portal to enhance familial communication about HCS susceptibility genetic testing within a Canadian provincial cancer control system, BC Cancer.
Methods: To inform the structure and content of the portal, we conducted semi-structured qualitative interviews with individuals having undergone HCS testing through BC Cancer’s Hereditary Cancer Program. Following thematic analysis, prioritized requirements were then integrated into the online portal platform. Using convenience sampling, we then recruited patient partners and healthcare providers from BC Cancer to review the portal and provide critical feedback through an electronic survey. Two analysts summarized quantitative feedback using descriptive statistics, synthesized qualitative feedback, and led critical discussion within the multi-disciplinary research team to prioritize recommendations for portal refinement.
Results: Following qualitative analysis of 25 patient interviews, we developed the patient portal with six distinct informational and interactive sections. Core components present educational information about genetic testing, genetic variation, HCS, and secondary prevention strategies, as well as guidance to enhance familial communication. In addition to written guidance, we developed and integrated a brief video with an individual diagnosed with an HCS, describing her communication experience. Fourteen healthcare providers and 8 patient partners participated in user testing. Following analysis of feedback, participants recommended clear and succinct presentation of information about HCS and cancer risk, as well as available cancer risk reduction strategies. To further support familial communication, participants stressed a need to clarify testing eligibility and recommendations for potentially affected family members following a familial diagnosis.
Conclusions: Using a patient-oriented approach, our portal endeavours to address an unmet need to promote constructive familial communication about HCSs. Throughout design and user testing, we prioritized the direct integration of patient voices with a view to mitigate reported challenges engaging in complex conversations about HCS. Developed for feasible clinical implementation, this work presents an ongoing effort to broaden the population-wide impact of hereditary cancer screening programs and reduce cancer burden among affected families.
Keywords: decision support techniques, patient oriented research, hereditary cancer syndromes, familial cancers, risk communication
Streamlining Markov process modeling to avoid bias in simulating continuous-time Markov models in discrete-time
OP-065 Quantitative Methods and Theoretical Developments (QMTD)
Kyueun Lee1, Fernando Alarid Escudero2
1Department of Health Policy and Management, Graduate School of Public Health, University of Pittsburgh, USA
2Drug Policy Program, Center for Research and Teaching in Economics (CIDE)-CONACyT, Aguascalientes, Mexico
Purpose: Simulating continuous-time state transition models (STMs) in discrete cycles can introduce bias by delaying state transitions until the end of each cycle. Shortening the cycle length can reduce this bias, but running a model over an extended period, or running a microsimulation model with many short cycles, can be computationally demanding. Specifying a model in continuous time with a matrix of transition intensities and deriving the discrete-time transition probability matrix using the matrix exponential can avoid the bias while remaining computationally efficient. We illustrate how to simulate a continuous-time STM in discrete time and quantify the bias introduced when converting each rate to a probability independently, using an alcohol drinking model as an example.
Methods: We developed a 4-state cohort STM to simulate the age-specific dynamics of alcohol drinking. We used previously calibrated annual rates of initiating, quitting, and resuming drinking. We specified the model in continuous time with a yearly transition rate matrix and transformed it into discrete time by deriving an annual transition probability matrix using two approaches. First, we applied a matrix exponential to the whole transition rate matrix (‘gold standard’). Second, we transformed individual transition rates, assuming independent exponentially distributed transition times, by converting each rate to a probability with 1-e^(-r) (‘traditional’). We compared the transition probability matrices and the simulated model outcomes, such as the age-specific prevalence of drinkers, between both approaches. We also quantified the bias in years spent as a drinker when using the incorrect approach vs. the gold standard.
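The contrast between the two conversion approaches can be sketched in a few lines of Python. The 3-state rate matrix below is illustrative only, not the calibrated drinking-model rates from the abstract:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative yearly rate matrix for a 3-state sketch of the model
# (never drinker, current drinker, former drinker); rates are made up,
# not the calibrated values from the abstract.
Q = np.array([
    [-0.20,  0.20,  0.00],   # never -> current (initiation)
    [ 0.00, -0.10,  0.10],   # current -> former (quitting)
    [ 0.00,  0.05, -0.05],   # former -> current (resumption)
])

# 'Gold standard': matrix exponential of the whole rate matrix.
P_gold = expm(Q)

# 'Traditional': convert each rate independently with 1 - exp(-r).
off_diag = np.where(np.eye(3, dtype=bool), 0.0, Q)
P_trad = 1.0 - np.exp(-off_diag)          # diagonal entries become 0
np.fill_diagonal(P_trad, 1.0 - P_trad.sum(axis=1))

# The compound transition never -> former within one cycle is possible
# under the gold standard but impossible under the traditional approach.
print(P_gold[0, 2] > 0, P_trad[0, 2] == 0)  # prints: True True
```

Both matrices have rows summing to 1, but only the matrix-exponential version assigns positive probability to transitions that require passing through an intermediate state within a single cycle, which is the source of the bias the abstract quantifies.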
Results: Under the gold standard approach, a direct transition from ‘never drinker’ to ‘former drinker’ can occur within a cycle of the model (Figure A). In contrast, individual transformation of rates to probabilities does not allow a direct transition from ‘never drinker’ to ‘former drinker’ within a cycle; the transition can only happen indirectly through ‘current drinker’. This discrepancy overestimated the age-specific prevalence of drinkers under the traditional approach (Figure B). Compared to the gold standard, the traditional approach of calculating the transition matrix biased the time spent as a drinker by 13.6%.
Figure.
(A) possible transitions occurred within a cycle under the gold standard and traditional approach (B) age-specific prevalence of drinkers estimated under the gold standard and traditional approach
Conclusions: When an STM is specified in continuous-time and simulated in discrete time, using a matrix exponential approach can correctly capture all the possible state transitions and can be an efficient way to avoid bias in the discretization of continuous time.
Keywords: state transition model, continuous time, transition probability matrix
Addressing challenges in the development of clinical risk prediction algorithms due to intermittent patient follow-up in electronic health record data
OP-066 Quantitative Methods and Theoretical Developments (QMTD)
Patricia Rodriguez1, Patrick Heagerty2, Samantha Clark1, Eric Haupt3, Erin Hahn3, Veena Shankaran4, Sara Khor1, Yilin Chen1, Lurdes Inoue2, David Veenstra1, Anirban Basu2, Aasthaa Bansal1
1The Comparative Health Outcomes, Policy & Economics (CHOICE) Institute, University of Washington, Seattle, USA
2Department of Biostatistics, University of Washington, Seattle, USA
3Kaiser Permanente Southern California, Pasadena, USA
4Fred Hutchinson Cancer Research Center, Seattle, WA
Purpose: Electronic health record (EHR) databases contain comprehensive, long-term data on large numbers of patients and can be an appealing data source for the development of risk prediction algorithms that incorporate patient history. However, in practice, intermittent patient follow-up presents methodological challenges related to missing biomarker values and delayed outcome ascertainment that can introduce biases if ignored. Using colorectal cancer (CRC) surveillance as a case study, we demonstrate a principled statistical approach for pre-processing data, i.e., imputing longitudinal biomarker values and recurrence outcomes on a guideline-concordant schedule, to create a complete dataset that can be used for analysis.
Methods: We used EHR data on 3,156 patients treated for CRC and monitored for recurrence for up to five years. To impute recurrence, we modeled its natural history using a multi-state Markov model with the following states: no detectable recurrence, detectable and resectable recurrence, detectable and unresectable, never resectable, and dead. For each patient with observed recurrence, we generated 10 trajectories that were consistent with observed data, e.g. if a patient was observed to experience detectable and unresectable recurrence 12 months post-diagnosis, we used trajectories that simulated earlier stages of recurrence while keeping detectable and unresectable recurrence fixed at 12 months. We then imputed the longitudinal biomarker in 3-month increments using a pattern mixture model that preserved the relationship between the longitudinal biomarker and the timing of the recurrence outcome, e.g. patients with earlier recurrence had steeper biomarker trajectories.
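A minimal sketch of the pattern-mixture idea, in which the imputed biomarker slope depends on recurrence timing (earlier recurrence implies a steeper trajectory). The function name, 3-month grid, and slope rule below are hypothetical stand-ins, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(42)

def impute_biomarker(recurrence_month, horizon=24, step=3):
    """Impute biomarker values on a 3-month grid; earlier recurrence
    implies a steeper trajectory (hypothetical pattern-mixture rule)."""
    grid = np.arange(0, horizon + 1, step)
    slope = 2.0 / max(recurrence_month, step)  # steeper when recurrence is early
    return grid, 1.0 + slope * grid + rng.normal(0.0, 0.05, grid.size)

grid, early = impute_biomarker(6)    # recurrence observed at 6 months
_, late = impute_biomarker(24)       # recurrence observed at 24 months
```

In the abstract's approach, an analogous rule (estimated from observed patterns rather than assumed) fills in biomarker values every three months so that the completed trajectories remain consistent with each patient's observed recurrence timing.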
Results: Although guidelines recommend that surveillance testing occur every 3-6 months, patients in our cohort had an average of 1.2 biomarker tests annually. Older patients who had more comorbidities and did not see an oncologist had fewer biomarker measurements and lower incidence of observed recurrence. Using the above pre-processing approach, we generated a dataset with biomarker measurements and recurrence outcome ascertainment every three months (see Figure 1). A dynamic risk prediction model developed using this complete dataset and biomarker trajectory as a predictor had AUC=0.87-0.94 over time.
Figure 1.
Observed (left) and imputed (right) biomarker trajectories for 10 random individuals
Conclusions: We demonstrated the feasibility of a data preprocessing approach to generate a complete longitudinal dataset that allows for analyses that incorporate long-term patient history on a representative cohort. We applied this approach to develop a powerful risk prediction model that used the biomarker trajectory as a predictor and had excellent performance.
Keywords: Electronic health record data, risk prediction, missing data, longitudinal data, biomarker
A Mathematical Model for Inequality, Morbidity and Life Expectancy Estimation from Electronic Health Records
OP-067 Quantitative Methods and Theoretical Developments (QMTD)
Lyla Mourany1, Glen B Taksler1, Paul R Gunsalus1, Adam T Perzynski2, Jarrod E Dalton1
1Department of Quantitative Health Sciences, Cleveland Clinic, Cleveland, Ohio
2Center for Healthcare Research and Policy, Case Western Reserve University at MetroHealth, Cleveland, Ohio
Purpose: Existing models of life expectancy (LE) rarely account for inequalities in socioeconomic position and chronic disease over the life course.
Methods: We extracted electronic health records (EHR) on 558,102 primary care patients aged ≥40 years seen at Cleveland Clinic between 2007-2021. We used a probabilistic dynamic systems approach to generate health trajectories with respect to incident disease and mortality risk.
Patients were stratified into 5-year age groups (40-44 to ≥90 years). For each age group, we developed competing risks models to estimate, given a patient’s age, sex, and Area Deprivation Index (ADI) quintile (a measure of socioeconomic position, higher=more deprivation), the conditional probability of nine incident diagnoses: alcohol abuse, chronic kidney disease, COPD, congestive heart failure, end-stage renal disease, hyperlipidemia, hypertension, myocardial infarction, type 2 diabetes.
We then estimated all-cause mortality for each age group based on age, sex, ADI quintile and presence of the diagnoses, using Cox regression models. For example, for a hypothetical 44-year-old patient, we applied the age 40-44 Cox model to estimate annual survival probabilities up to age 49; then simulated new-onset diagnoses at age 49 based on the conditional probabilities derived from the age 40-44 competing risks models for diagnoses; then applied the age 45-49 Cox model to estimate the next 5 years of survival, and so on, until survival estimates were obtained through age 100. LE was estimated as the expected value of annual survival decrements from the patient’s current age onward.
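The final life-expectancy step reduces to summing cumulative annual survival probabilities from the stitched models. A minimal sketch with illustrative (made-up, not fitted) survival values:

```python
import numpy as np

# Illustrative annual survival probabilities for a 44-year-old, stitched
# from successive age-group Cox models up to age 100 (made-up values,
# not the fitted estimates from the abstract).
ages = np.arange(44, 101)
annual_surv = np.where(ages < 65, 0.995, np.where(ages < 85, 0.97, 0.88))

# Cumulative survival S(t); remaining life expectancy is the sum of S(t)
# over future years, so LE is the current age plus that sum.
S = np.cumprod(annual_surv)
life_expectancy = 44 + S.sum()
```

The same arithmetic applies per simulated trajectory, with the annual probabilities changing whenever a new diagnosis is simulated at an age-group boundary.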
Results: Across ages, the highest vs. lowest quintile of ADI was associated with a 2- to 5-fold increase in incidence for each of the 9 chronic conditions (Figure 1A). For each condition, the ADI disparity peaked at the age with the highest incidence rate, and was generally higher in males than females. To illustrate the model’s potential, Figure 1B shows estimated LE for 4 hypothetical patients, ranging from 85 years for a 40-year-old female in ADI quintile 1 to 50 years for a 40-year-old male in quintile 5 with multiple comorbidities.
Figure 1.
Conclusions: Using EHR data and probabilistic microsimulation, we generated complex health trajectories to estimate LE as a function of socioeconomic and clinical characteristics.
Keywords: life expectancy, dynamic systems, competing risk models, electronic health records
Integrating machine learning estimates of heterogeneous treatment effects and decision modelling
OP-068 Quantitative Methods and Theoretical Developments (QMTD)
David Glynn1, John Giardina2, Julia Hatamyar1, Ankur Pandya2, Noemi Kreif1
1Centre for Health Economics, University of York
2Department of Health Policy and Management, Harvard T.H. Chan School of Public Health
Purpose: Machine learning (ML) is increasingly being used to understand how policy outcomes depend on individual characteristics (X). Here we integrate machine learning estimates of heterogeneous causal effects into a decision model to improve estimation of cost and health outcomes at the population level (“one size fits all”) and at the individual level.
Methods: ML can be used to learn how parameters used in a decision model depend on X (e.g., age, comorbidities) without pre-specifying a functional form. There have been significant advances in ML tools that can estimate individualized treatment effects; however, these advances have not been integrated with decision modelling. Current ML methods presuppose very comprehensive individual patient datasets that include all relevant decision-making outcomes, include all relevant interventions, cover a sufficient time horizon, and include all relevant patient groups. To overcome this, we propose a method to integrate ML estimates into a microsimulation decision model. This approach combines the flexibility of ML with the ability of decision models to integrate data from a wide range of sources.
We demonstrated the methods using individual-level data from SPRINT (Systolic Blood Pressure Intervention Trial). We estimated individual level survival curves for the primary outcome (composite cardiovascular event or death), using Bayesian additive regression trees (BARTs). We integrated the BART estimates into a microsimulation model by calculating implied hazard ratios for each individual and compared results from the homogeneous and individualized models (Figure).
Results: The incremental net health benefit (INHB) of the optimal treatment for the average patient differed across the models: 0.12 and 0.23 QALYs for the homogeneous and individualized models, respectively (using a decision threshold of $100,000/QALY).
Patient-specific INHBs were calculated by repeatedly sampling the microsimulation model for each X. Individualizing treatment based on X increased population INHB by 0.028 QALYs compared to a "one size fits all" policy. The homogeneous and individualized models disagreed on which patients should receive which treatment in 29% of cases.
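A toy sketch may clarify the mechanics of this comparison. The per-patient QALY and cost deltas below are hypothetical, not SPRINT outputs; the point is only how a "one size fits all" rule and an individualized rule diverge at the same decision threshold.

```python
# Illustrative sketch (hypothetical per-patient values, not SPRINT data):
# incremental net health benefit (INHB) at a $/QALY decision threshold, and
# the population gain from individualizing the treatment decision.
LAMBDA = 100_000  # decision threshold, $ per QALY

def inhb(d_qaly, d_cost, lam=LAMBDA):
    """INHB of intensive vs standard control: delta QALYs minus delta cost / lambda."""
    return d_qaly - d_cost / lam

# Hypothetical microsimulation outputs per patient: (delta QALYs, delta cost)
patients = [(0.30, 5_000), (0.05, 9_000), (-0.02, 2_000), (0.20, 4_000)]

# "One size fits all": treat everyone iff the average INHB is positive
avg_inhb = sum(inhb(q, c) for q, c in patients) / len(patients)
homogeneous_total = avg_inhb * len(patients) if avg_inhb > 0 else 0.0

# Individualized: treat only the patients whose own INHB is positive
individualized_total = sum(max(inhb(q, c), 0.0) for q, c in patients)
gain_per_patient = (individualized_total - homogeneous_total) / len(patients)
```

Because the individualized rule withholds treatment from patients with negative INHB, its population total can never fall below the homogeneous rule's in this stylized setting.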
Conclusions: ML can be integrated with decision modelling, gaining the advantages of both. Incorporating ML into decision modelling can have substantial impacts on estimated health benefits, both at the aggregate and individual level. Next steps are to examine the trade-offs involved in conditioning on sets of X for both estimation and decision making.
Keywords: Machine learning, microsimulation, decision making, heterogeneity, precision medicine
Comparison of outcomes with homogeneous and individualized decision models
The clinical decision is between intensive and standard blood pressure control. This decision is analyzed using a standard model (homogeneous hazard ratio) and an individualized model (individualized hazard ratios).
Bayesian Joint Prediction of Risk Factor Trajectories and Disease Incidence in Microsimulation Models: An Application to Ischemic Stroke
OP-069 Quantitative Methods and Theoretical Developments (QMTD)
John Giardina1, Nathaniel Alemayehu2, Sebastien Haneuse3, Ankur Pandya4
1Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, USA
2Harvard College, Cambridge, MA, USA
3Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
4Center for Health Decision Science and Department of Health Policy and Management, Harvard T.H. Chan School of Public Health, Boston, MA, USA
Purpose: Microsimulation decision models often simulate disease incidence as a function of risk factors that evolve over time (e.g., blood pressure increasing with age). Existing models, however, typically rely on incidence rates estimated with standard survival analysis techniques (e.g., proportional hazards from baseline data) that are not designed to be continually updated each model cycle. We compared this standard approach to a Bayesian approach that jointly estimates longitudinal risk factor trajectories and disease incidence in the context of ischemic stroke.
Methods: Pooled data from four longitudinal studies (ARIC, CHS, MESA, Framingham Offspring) were split into a training set (58,774 observations across 15,525 individuals) and test set (11,119 individuals). We estimated time to ischemic stroke based on seven risk factors: age, smoking, cardiovascular disease, atrial fibrillation, diabetes, systolic blood pressure, and antihypertensive medication use. We used a Bayesian approach to fit a joint model for the trajectories of risk factors, using a random effects framework, and the hazard of ischemic stroke as a function of those trajectories. We compared this approach to three other options: (1) published revised Framingham stroke risk score; (2) proportional hazards model estimated using baseline data; and (3) proportional hazards model estimated using time-varying data. We simulated risk factor trajectories from age 70y to 90y for individuals in the test set stroke-free at age 70y using the fitted random effects model, and then simulated ischemic stroke incidence for each method by continually re-updating incidence estimates each year based on the trajectories. We compared the simulated to observed incidence to measure the validity of each approach.
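The core of the comparison, re-estimating incidence each year from the current value of a simulated risk-factor trajectory, can be sketched as follows. The random-effects and hazard coefficients are illustrative placeholders, not the fitted joint model, and only one of the seven risk factors (systolic blood pressure) is carried.

```python
import math
import random

# Illustrative coefficients only (not the fitted joint model): simulate a
# systolic blood pressure (SBP) trajectory from a random-intercept/slope
# model, then re-update the stroke hazard each year from the current value.

def simulate_sbp(rng, age_start=70, age_end=90):
    """SBP_i(age) = (130 + u0) + (0.8 + u1) * (age - age_start), u ~ Normal."""
    u0, u1 = rng.gauss(0, 8), rng.gauss(0, 0.3)
    return [130 + u0 + (0.8 + u1) * (age - age_start)
            for age in range(age_start, age_end + 1)]

def stroke_survival(sbp_traj, age_start=70):
    """Yearly survival curve, recomputing the hazard from the current SBP."""
    surv, curve = 1.0, []
    for t, sbp in enumerate(sbp_traj):
        age = age_start + t
        # proportional-hazards form: h = h0 * exp(b_age*(age-70) + b_sbp*(sbp-120))
        hazard = 0.0005 * math.exp(0.05 * (age - 70) + 0.015 * (sbp - 120))
        surv *= math.exp(-hazard)  # survival over one cycle of constant hazard
        curve.append(surv)
    return curve
```

A baseline-only proportional hazards model would instead freeze `sbp` at its age-70 value, which is the distinction the validation exercise measures.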
Results: The average ischemic stroke survival curve estimated by the joint model closely matched the observed curve (Figure), while the other estimation methods performed comparatively worse. Additionally, we calculated the absolute error between observed and predicted cumulative incidence across individuals at different ages and found that the 90th percentile error for the joint model was consistently smaller than the median error of the next best method.
Figure.
Simulated ischemic stroke survival curves generated using different estimation methods compared to observed survival for a cohort of individuals who were stroke-free at age 70. The observed survival curve was calculated with the Kaplan-Meier estimator, and the grey region represents the 95% confidence interval for the observed survival calculated with a complementary log-log transformation. (PH = Proportional Hazards)
Conclusions: Using a Bayesian framework to jointly model both risk factor trajectories and ischemic stroke incidence generated estimates in a basic simulation model that more accurately reflected observed data than standard approaches. This method could lead to more reliable microsimulation models, especially for models that evaluate policies which depend on tracking dynamic risk factors.
Keywords: microsimulation modeling, risk prediction, stroke
A novel approach to phase-based costing using multi-state survival regression
OP-070 Quantitative Methods and Theoretical Developments (QMTD)
John Kenneth Peel1, Eleanor Pullenayegum2, David Naimark3, Mingyao Liu4, Lorenzo Del Sorbo5, Kali Barrett6, Shaf Keshavjee7, Beate Sander8
1Department of Anesthesiology & Pain Medicine, University of Toronto, Toronto, Canada; Institute of Health Policy, Management and Evaluation, Dalla Lana School for Public Health, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment Collaborative, UHN, Toronto, Canada
2Institute of Health Policy, Management and Evaluation, Dalla Lana School for Public Health, University of Toronto, Toronto, Canada; Senior Scientist, Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Canada
3Institute of Health Policy, Management and Evaluation, Dalla Lana School for Public Health, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment Collaborative, UHN, Toronto, Canada; Division of Nephrology, Sunnybrook Health Sciences Centre, Toronto, Canada
4Latner Thoracic Surgery Research Laboratory, Toronto General Hospital Research Institute, University Health Network; Departments of Surgery, Medicine and Physiology, Temerty Faculty of Medicine, University of Toronto; Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto
5Institute of Health Policy, Management and Evaluation, Dalla Lana School for Public Health, University of Toronto; Latner Thoracic Surgery Research Laboratory, Toronto General Hospital Research Institute, University Health Network; Interdepartmental Division of Critical Care Medicine, University of Toronto
6Institute of Health Policy, Management and Evaluation, Dalla Lana School for Public Health, University of Toronto; Toronto Health Economics and Technology Assessment Collaborative, UHN, Toronto, ON, Canada; Interdepartmental Division of Critical Care Medicine, University of Toronto
7Latner Thoracic Surgery Research Laboratory, Toronto General Hospital Research Institute, University Health Network; Departments of Surgery, Medicine and Physiology, Temerty Faculty of Medicine, University of Toronto; Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto; Division of Thoracic Surgery, Toronto General Hospital, University Health Network; Toronto General Hospital Research Institute, University Health Network, Toronto, ON, Canada
8Institute of Health Policy, Management and Evaluation, Dalla Lana School for Public Health, University of Toronto; Toronto Health Economics and Technology Assessment Collaborative, UHN, Toronto, ON, Canada; Toronto General Hospital Research Institute, University Health Network, Toronto, ON, Canada; Adjunct Scientist, Institute for Clinical Evaluative Sciences, ON, Canada; Adjunct Scientist, Public Health Ontario, ON, Canada
Purpose: Phase-based costing typically involves empiric determination of phase duration at the population level, yielding phases that are not bounded by individuals' event dates. To overcome this limitation, we developed a phase-based costing methodology that employs multi-state survival modelling. We applied our methodology to estimate cumulative 5-year costs for lung transplantation in Ontario before versus after ex-vivo lung perfusion (EVLP) – a technology intended to rehabilitate potentially usable donor lungs prior to transplantation – was introduced into practice.
Methods: We performed a retrospective, before-after cohort study of patients wait-listed for lung transplant at University Health Network (UHN) in Ontario, Canada, between January 2005 and December 2019, using institutional administrative data. We compared costs, in 2019 Canadian dollars ($), between patients referred for transplant before EVLP was available (Pre-EVLP) versus after (Modern EVLP). Covariates between eras were balanced using inverse probability weighting (IPW). Costs were assigned to phases according to individual event dates. We constructed an IPW-weighted, k-progressive multi-state survival model of the phases of care between referral and death using individual-level data. We estimated the expected prevalence in each phase (state-occupancy probabilities) at 10-day intervals up to a 5-year time horizon using the landmark Aalen-Johansen estimator. Cumulative costs were then calculated by multiplying state-occupancy probabilities by the mean 10-day cost for each phase, and then summing those 10-day costs up to the time horizon. 95% confidence intervals were derived from 2,000 bootstrap iterations.
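The cost-accumulation step can be sketched directly from the description above: at each 10-day interval, multiply each phase's state-occupancy probability by that phase's mean 10-day cost, then sum over intervals. The phase names and dollar figures below are hypothetical.

```python
# Hypothetical mean 10-day cost per phase of care, in dollars
PHASE_COST = {"waitlist": 1_200, "transplant": 45_000,
              "post_transplant": 900, "death": 0}

def cumulative_cost(occupancy):
    """Sum state-occupancy probability x mean 10-day phase cost over intervals.

    occupancy: one dict per 10-day interval, mapping phase name to the
    state-occupancy probability at that interval (e.g., as produced by an
    Aalen-Johansen estimator)."""
    return sum(p * PHASE_COST[phase]
               for probs in occupancy
               for phase, p in probs.items())

# Two illustrative 10-day intervals of state-occupancy probabilities
occ = [
    {"waitlist": 0.9, "transplant": 0.1, "post_transplant": 0.0, "death": 0.0},
    {"waitlist": 0.6, "transplant": 0.1, "post_transplant": 0.28, "death": 0.02},
]
```

Bootstrap confidence intervals would repeat this calculation over resampled occupancy matrices; only the point estimate is shown here.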
Results: 1,224 patients met our inclusion criteria (377 Pre-EVLP; 847 Modern EVLP). We estimated cumulative costs at five years after referral to be $278,777 ($82,575–$298,135) in the Pre-EVLP era and $293,680 ($252,832–$317,599) in the Modern EVLP era. We observed lower waitlist state-occupancy after EVLP was available, and correspondingly higher post-transplant state-occupancy.
Conclusions: Our methodology precisely assigns costs to their correct clinical phase and accounts for confounding and selection biases using IPW. This method may improve on conventional phase-based cost assessment, which requires predefined population-level phases and for which uncertainty and subgroup analyses are not straightforward. An advantage of our approach is that phase-specific costs measured in this way map directly onto the health states of a decision-analytic model. An additional strength is that it yields state-occupancy probabilities, from which we can draw inferences about clinical trajectory or which we can use to validate decision-analytic models.
Keywords: Phase-specific costs; Lung transplantation; Multi-state model; Costs assessment
Shared Decision Making for elective surgery decisions in older adults with and without mild cognitive impairment
OP-071 Decision Psychology and Shared Decision Making (DEC)
Ha Vo1, Richard Urman2, Franchesca Arias3, Kathrene D Valentine4, Brittney Mancini1, Michael Barry4, Karen Sepucha4
1Health Decision Sciences Center, Massachusetts General Hospital, Boston, USA
2Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women's Hospital, Boston, USA
3Division of Cognitive Neurology, Beth Israel Deaconess Medical Center, Boston, USA; Hebrew SeniorLife, Boston, USA; Harvard Medical School, Boston, USA
4Health Decision Sciences Center, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA
Purpose: Older adults are often the target of shared decision-making (SDM) interventions. However, they are also prone to cognitive impairment, which may affect their ability to engage in SDM and to complete surveys about the process. This study evaluated the performance of the SDM Process Scale in older adults with and without mild cognitive impairment (MCI).
Methods: Eligible patients were 65 years and older who had a scheduled appointment with the pre-operative clinic before elective surgery (e.g., arthroplasty) at an academic medical center between March-September 2021.
Approximately 1 week pre-visit, study staff contacted patients and administered the baseline survey, which included the SDM Process Scale (4 items, scores 0-4), the SURE scale (4 items, top-scored), and the Montreal Cognitive Assessment (MoCA) Blind/Telephone Version 8.1 (scores 0-22). Scores <19, suggestive of MCI, were used to categorize patients. Patients completed follow-up surveys 3 months post-visit to assess decision regret (1 item, top-scored) and retest reliability of the SDM Process Scale.
We examined descriptive statistics and missing responses. We compared SDM Process scores, decisional conflict, and decision regret for those with and without MCI. Retest reliability was calculated using intra-class correlation coefficients (ICCs) between baseline and follow-up.
Results: Of 127 responders, 121 were included in the analytic data set; 85/121 subsequently provided complete follow-up data. There was minimal missing data for the SDM Process Scale (n=1, 0.7%), SURE (n=3, 2.3%), regret (n=2, 2.3%), and the MoCA (n=5, 4.1%). SDM Process scores ranged from 0 to 4, were not skewed (-0.35), and did not indicate ceiling or floor effects regardless of MCI group. SDM Process retest reliability was good, ICC=0.7, 95%CI (0.59, 0.80).
Average MoCA scores were 18.8 (SD=2.0), and 40% of patients (n=49/121) had scores suggesting MCI. Average SDM Process scores did not differ between groups (2.5 (SD 1.0) no MCI vs. 2.5 (SD=1.0) MCI, p=0.80, d=-0.05) and SURE top scores were also similar (83% no MCI vs. 90% MCI, p=0.43). Most respondents reported no decision regret at follow-up (92% no MCI vs. 79% MCI, p=0.10) and regret did not vary by cognitive status.
Conclusions: Mild cognitive impairment was prevalent in this sample of older patients undergoing elective surgery. Reports of SDM, decisional conflict, and decision regret did not differ significantly between patients with and without MCI.
Keywords: Shared Decision Making, Mild Cognitive Impairment, Elective Surgery
Variation in real-world use of the Statin Choice decision aid in a major health system and association with statin prescriptions
OP-072 Decision Psychology and Shared Decision Making (DEC)
Kathryn A. Martinez, Michael B. Rothberg
Cleveland Clinic Center for Value-Based Care Research
Purpose: Use of decision aids (DAs) may increase uptake of statins, yet physician use of DAs in practice is inconsistent. We describe real-world physician use of an electronic health record (EHR)-embedded decision aid, Statin Choice, in a major health system, and assess the association between use of Statin Choice and statin prescriptions.
Methods: This study includes statin-naïve patients aged 40-74 years seen for primary care at Cleveland Clinic Health System between January 2020 and June 2021. Whether Statin Choice was fired during an encounter was recorded by the EHR. We used mixed effects logistic regression to assess the odds of a physician using Statin Choice by patient characteristics, including sex, race, age, insurance, and number of other medications. We also included clinical characteristics associated with atherosclerotic cardiovascular disease (ASCVD) risk, for which a statin is indicated, including smoking, hypertension, diabetes, and total and HDL cholesterol. We generated adjusted Statin Choice use rates for individual physicians. Finally, we used mixed-effects logistic regression to estimate the odds of a statin prescription following use of Statin Choice, adjusted for the factors described above. Models accounted for clustering by physician and site.
Results: The sample included 174,171 patients; 59% were female and 79% were white. Physicians used Statin Choice in 5% of encounters. Physicians were more likely to use Statin Choice with male versus female patients (aOR: 1.20; 95% CI: 1.13-1.27). Physicians were less likely to use Statin Choice with patients with Medicaid (aOR: 0.85; 95% CI: 0.58-0.84), Medicare (aOR: 0.83; 95% CI: 0.77-0.90), or no insurance (aOR: 0.70; 95% CI: 0.58-0.84), compared to patients with private insurance. There were no differences by patient race. Among 1,040 physicians, Statin Choice use varied from 0% to 62%, with a median use rate of 10%. Twenty-one percent of patients exposed to Statin Choice were prescribed a statin, compared to 10% of those who were not. The adjusted odds ratio for a statin prescription given exposure to Statin Choice was 3.41 (95% CI: 3.18-3.65).
Conclusions: Statin Choice was highly associated with statin prescriptions, but physicians varied widely in its use. Women face higher ASCVD risk than men, in part due to lower statin use. That we found physicians were less likely to use Statin Choice with women and patients without private insurance suggests interventions are needed to increase physician use of the tool with all statin-eligible patients.
Keywords: decision aids, statins, physician variation, shared decision making
Communication of Genetic Testing Results to Family Members of Pancreatic Cancer Patients
OP-073 Health Services, Outcomes and Policy Research (HSOP)
Mary Linton B Peters1, Barak Davidi2, Claudia Seguin2, Andrew Eckel2, Pari V Pandharipande3
1Division of Medical Oncology, Department of Medicine, Beth Israel Deaconess Medical Center, Boston MA USA
2Institute for Technology Assessment, Massachusetts General Hospital, Boston MA USA
3Department of Radiology, Ohio State University
Purpose: Family members of pancreatic cancer patients with germline genetic mutations could benefit from cancer screening, but information flow through the family network is incomplete. We evaluated which aspects of improved information flow could yield the largest gains in life expectancy.
Methods: We used a microsimulation model of pancreatic ductal adenocarcinoma (PDAC) to estimate the potential life expectancy (LE) benefit of cascade genetic testing of first-degree relatives (FDRs) of PDAC patients. We defined FDR groups by sex, age, and information state (not informed; informed but not tested; test negative; or test positive). We based family size and composition, communication of genetic testing results, and uptake of cascade testing on published studies. We simulated multiple MRI-based screening strategies and identified the strategy that resulted in the highest LE gain for each FDR group. Screening allowed for earlier cancer detection and monitoring of precancerous cysts. However, for groups with a low relative risk (RR) of PDAC, screening did not add LE because of the surgical mortality associated with intervention. We performed sensitivity analyses on the age of the PDAC patient, number of FDRs, communication of results, and uptake of cascade testing.
Results: PDAC patients have on average 4.35 living FDRs. Of these, 2.07 would not be informed of the patient's genetic testing result, 0.21 would be informed but not undergo testing themselves, 1.03 would test negative, and 1.03 would test positive. The LE benefit of screening for the family would range from 0 to just over a year, depending on the RR of PDAC. With perfect communication and testing uptake, the value to the family could increase to 0-1.58 years. Overall, the most valuable aspect of communication to target is informing FDRs of the patient's test results, although at lower risk levels increasing testing uptake can be equally beneficial.
Conclusions: Given current communication, testing rates, and screening modalities, the family of a PDAC patient with a germline genetic mutation could gain 0 to 1.05 years of LE from screening. If we could increase information flow in the family, we could double this benefit for some groups. Shared decision making about genetic testing should include family communication and cascade testing as an explicit goal to maximize the benefit of this information.
Keywords: pancreatic cancer, genetic testing, patient communication
Figure.
Decision making in congenital femur deficiency: a qualitative exploration of preferences, priorities, and sources of decisional conflict
OP-074 Decision Psychology and Shared Decision Making (DEC)
Ilene L Hollin1, Camille Brown2, Sarah Nossov2, Corinna Franklin2
1Department of Health Services Administration and Policy, College of Public Health, Temple University, Philadelphia, PA, USA
2Shriners Hospitals for Children, Philadelphia, PA, USA
Purpose: This study aimed to explore patient and caregiver priorities for outcomes and identify the sources of decisional conflict in treatment decision making for children with congenital femur deficiency (CFD).
Methods: We conducted a qualitative study using semi-structured interviews amongst patients, parents, and clinicians at a pediatric orthopedic hospital system. Patients with CFD and their parents were identified from an administrative database. A semi-structured interview guide was developed to explore sources of decisional conflict in treatment decision making, including outcome priorities, perceived advantages and disadvantages of options, and preferences for shared decision making. Interviews were conducted by trained study staff via video, recorded, transcribed, and double-coded using MAXQDA. Data analysis was conducted using a grounded theory approach.
Results: Thirty research subjects were interviewed (13 caregivers, 5 patients, and 12 clinicians) between May and October 2021. Parents preferred a shared decision making model. Seven main themes were identified as sources of decisional conflict in CFD: parent emotions, experience with physicians, information availability and accessibility, proxy decision making for a child, timing of the decision, child well-being, and treatment goals. "Parent emotions" refers to emotions related to the diagnosis itself (guilt, grief, and acceptance) combined with emotions related to treatment decision making. "Experience with physicians" focuses on the trust parents have in their physician and the sensitivity the physician demonstrates. "Information availability and accessibility" includes lack of information, information overload, and misinformation. "Proxy" refers to the difficulty parents face when making a decision on their child's behalf, specifically fear about the child disagreeing with or resisting treatment later in life. "Timing" refers to uncertainty regarding when to undergo treatment: some parents feared treating too soon (prioritizing waiting for the child to decide or for technological progress) or too late (prioritizing treatment before the child was old enough to remember). "Child well-being" includes fears about selecting a treatment that will cause emotional damage or trauma. "Treatment goals" refers to priorities for the goals of treatment and includes process outcomes (e.g., minimizing the number of surgeries), pain-free outcomes (e.g., minimizing short- or long-term pain), aesthetic outcomes, and functional outcomes (e.g., mobility and performance).
Conclusions: We identified sources of decisional conflict that are amenable to intervention and will inform the design of a decision aid to help parents choose CFD treatments concordant with their values and reduce decisional conflict.
Keywords: Pediatrics, orthopedics, decisional conflict, treatment preferences, treatment priorities, shared decision making
Capacity constraints, information gathering, and quality of care: Evidence from a behavioral experiment with pediatricians
OP-075 Health Services, Outcomes and Policy Research (HSOP)
Kerstin Eilermann1, Bernd Roth2, Anna Katharina Stirner1, Daniel Wiesen1
1Department of Business Administration and Health Care Management, Faculty of Management, Economics and Social Sciences, University of Cologne, Cologne, Germany
2Department of Pediatrics, Medical Faculty and Cologne University Hospital, University of Cologne, Cologne, Germany
Purpose: We study how capacity constraints affect physicians’ willingness to gather additional information supporting their therapy decisions. Further, we examine how capacity constraints affect the utilization of additionally gathered information and the appropriateness of physicians’ therapy decisions.
Methods: Using a controlled framed field experiment with German pediatricians (n=247), we exogenously vary the extent to which physicians’ capacity is constrained. In our experiment, pediatricians make decisions on the length of antibiotic therapies for 40 pediatric routine cases. We use a between-subject design to vary two treatment parameters in our experiment: Availability of decision support and information gathering costs. For each case, subjects first make an initial decision on the length of antibiotic therapy. Depending on the experimental condition they are assigned to, subjects then either (i) decide whether they want to use decision support before making the final therapy decision for that case, (ii) automatically get decision support, or (iii) do not have the option to use decision support. We vary the level of information gathering costs, which reflect the fraction of available capacities that is needed to use decision support. If a subject decides to use decision support or automatically receives support, he or she is given the opportunity to adjust his or her initial therapy decision for this case.
Results: Our behavioral results show that physicians' willingness to gather additional information supporting decision making decreases as capacity constraints increase. However, the utilization of the information gathered is not affected by increasing constraints. We also find that capacity constraints have a statistically significant and clinically relevant impact on the appropriateness of therapy decisions and thus on the quality of care. This is especially the case for physicians with little clinical experience.
Conclusions: Behavioral results suggest that decreasing the extent to which capacity is constrained can be an effective way to enhance the utilization of decision support and thus help improve the appropriateness of therapy decisions. Implications of our findings for the management of healthcare organizations are discussed.
Keywords: decision support, capacity constraints, quality of care, pediatrics, antibiotic therapy
Cost-Effectiveness Analysis using zip-code-level data on HIV diagnoses to inform resource allocation for HIV-related services: Case study in Atlanta
OP-076 Applied Health Economics (AHE)
Enrique M Saldarriaga1, Ruanne Barnabas2, Marita Zimmermann3, Xiao Zang5, Bohdan Nosyk4, Anirban Basu1
1The Comparative Health Outcomes, Policy, and Economics (CHOICE) Institute, University of Washington, Seattle, WA, USA
2Division of Infectious Disease, Massachusetts General Hospital, Harvard University, Boston, MA, USA
3The Comparative Health Outcomes, Policy, and Economics (CHOICE) Institute, University of Washington, Seattle, WA, USA, Institute for Disease Modeling, Bill & Melinda Gates Foundation, Seattle, WA, USA
4Centre for Health Evaluation & Outcome Sciences, Vancouver, BC, Canada, Faculty of Health Sciences, Simon Fraser University, Burnaby, BC, Canada
5Department of Epidemiology, Brown University, RI, USA
Purpose: Local data can provide reliable information to better address underserved populations. We aimed to quantify the consequences of using zip-code-level prevalence estimates to inform resource allocation in Atlanta, Georgia, compared to the status quo.
Methods: We adapted a structurally robust, published, dynamic, compartmental HIV model calibrated to metropolitan Atlanta to replicate epidemic progression in each of its 132 zip-codes. Starting with prior-predictive distributions derived from the city-level parameters, we calibrated our zip-code-specific models to observed annual new diagnoses between 2012 and 2018, varying the proportions governing risk-group sizes (e.g., MSM, intravenous drug users). The calibrated results reflect the resource-allocation status quo. Alternate allocations were determined by redistributing total resources in proportion to the deviation of zip-code-specific "new-diagnoses-only" and "total" (modeled) prevalence from the grand mean. These reallocated resources were then weighted by the current elasticity of coverage observed in each zip-code to obtain new coverage parameter values. Costs and quality-adjusted life-years (QALYs) were accrued from 2020 to 2040, and the cumulative results of each alternative were compared to the status quo.
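A minimal sketch of the reallocation rule, under stated assumptions: each zip-code's budget share is scaled by its prevalence relative to the grand mean (equivalently, one plus its relative deviation), and the implied funding change is translated into a coverage change through the locally observed elasticity. The `reallocate` helper, the zip-code labels, and all numbers are hypothetical.

```python
# All names and numbers hypothetical; `reallocate` is an illustrative helper.

def reallocate(budget, prevalence, elasticity):
    """Redistribute `budget` across zip-codes by prevalence vs the grand mean.

    prevalence: zip-code -> prevalence estimate
    elasticity: zip-code -> observed elasticity of coverage w.r.t. funding"""
    mean_prev = sum(prevalence.values()) / len(prevalence)
    # share scaled by prevalence relative to the grand mean
    # (equivalently: 1 + relative deviation from the mean)
    weights = {z: p / mean_prev for z, p in prevalence.items()}
    total_w = sum(weights.values())
    allocation = {z: budget * w / total_w for z, w in weights.items()}
    # translate the relative funding change into a coverage change
    baseline = budget / len(prevalence)  # equal-share status quo
    coverage_change = {z: elasticity[z] * (allocation[z] / baseline - 1)
                       for z in prevalence}
    return allocation, coverage_change
```

The two reallocation strategies in the abstract differ only in which prevalence input is passed: CDC-reported diagnosed cases versus modeled total (diagnosed plus undiagnosed) cases.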
Results: The calibrated zip-code-level results demonstrated good fit to annual HIV diagnoses, with a mean error of 7 cases. The zip-code-aggregated results closely reflected the city-level projections of incidence and new diagnoses, overall and specifically for MSM and African Americans. In contrast to a homogeneous distribution of resources across zip-codes in the status quo, the reallocation strategies accounted for differences in HIV spread (Figure 1). The analysis showed that while, on average, alternate reallocation was not cost-effective, there was high variability of economic value across zip-codes. Compared to the status quo, alternatives were dominant among increased-coverage zip-codes, with cost savings of $13.8 million and 1,026 additional QALYs; among decreased-coverage zip-codes, the alternatives were $3 million more expensive and generated 2,019 fewer QALYs.
Figure 1.
Annual resources per resident, by allocation strategy and zip-code, in Atlanta. Values expressed in thousands of dollars. Status quo refers to the calibrated model at the zip-code level. “Diagnosed-only cases” refers to the reallocation of resources based on diagnosed cases reported by the CDC HIV passive surveillance system in 2018, the latest data available at the zip-code level. “Total cases” refers to the reallocation of resources based on a prediction model that used passive surveillance data and social determinants of HIV spreading to estimate undiagnosed cases of HIV at the zip-code level.
Conclusions: Our analysis revealed heterogeneity in the HIV epidemic within Atlanta, and consequently in the health production functions. Furthermore, although investing additional resources at some local levels can be quite cost-effective for improving HIV outcomes, under current overall resource constraints the opportunity costs of such investment imply disinvestment in other areas. There remains scope, therefore, for a targeted reallocation within a local jurisdiction that increases the efficiency of expenditures and creates greater health benefits for the city at large.
Keywords: cost-effectiveness analysis, economic efficiency, HIV mathematical modeling, resource optimization, resource allocation, resource prioritization
Cost-effectiveness of single-visit cervical cancer screening in KwaZulu-Natal, South Africa: Model-based analysis accounting for the HIV epidemic
OP-077 Applied Health Economics (AHE)
Jacinda Nguyen Tran1, Christine Lee Hathaway2, Cara Jill Bayer3, Monisha Sharma4, Thesla Palanee-Phillips5, Ruanne Vanessa Barnabas6, Darcy White Rao7
1The Comparative Health Outcomes, Policy, and Economics (CHOICE) Institute, Department of Pharmacy, University of Washington, Seattle, USA
2Division of Infectious Diseases, Massachusetts General Hospital, Boston, USA
3Department of Epidemiology, University of North Carolina, Chapel Hill, USA; Department of Global Health, University of Washington, Seattle, USA
4Department of Global Health, University of Washington, Seattle, USA
5Research Center Clinical Research Site, Wits Reproductive Health and HIV Institute, University of the Witwatersrand, Johannesburg, South Africa; Department of Epidemiology, University of Washington, Seattle, USA
6Division of Infectious Diseases, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA
7Department of Epidemiology, University of Washington, Seattle, USA
Purpose: Human immunodeficiency virus (HIV) increases the risk of acquiring human papillomavirus (HPV) and disease progression to cervical cancer. Management of cervical cancer is associated with considerable clinical and economic costs with implications for accessing effective care in low- to middle-income settings. In this analysis, we estimate the cost-effectiveness of cervical cancer screening strategies among women in KwaZulu-Natal, a South African coastal province with high burden of cervical cancer and HIV.
Methods: We parameterized a dynamic compartmental model of HPV and HIV transmission and cervical cancer progression to KwaZulu-Natal. Over a 100-year time horizon, we simulated the health and economic impact of implementing seven comparator scenarios relative to the status quo of moderate (57%) HPV vaccine coverage and a three-visit strategy with cytology and colposcopy triage (current South Africa standard of care). Comparator scenarios scaled nonavalent HPV vaccination to 90% coverage and varied in screening with the three-visit standard of care or single-visit strategies (HPV DNA testing with or without genotyping and HPV DNA testing with automated visual evaluation triage, a new technology with assumed high performance), screening frequency (one-time or repeat), and loss to follow-up for pre-cancer treatment. Using a Ministry of Health perspective, we estimated the programmatic costs associated with HPV vaccination, screening, and treatment from published studies and in-country collaborators. For each scenario, we quantified the number of cervical cancer cases averted and incremental cost-effectiveness ratios (ICERs) per case averted compared to the status quo. Costs (2021 US dollars) and outcomes were discounted at 3% annually.
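The cost-accounting step described above reduces to standard discounting and ICER arithmetic, sketched below with made-up numbers (the 3% discount rate matches the abstract; all costs and case counts are hypothetical).

```python
def discounted_total(annual_values, rate=0.03):
    """Sum a stream of annual values discounted at a fixed annual rate
    (year 0 undiscounted)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(annual_values))

def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect (here, per cervical cancer case averted)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# hypothetical 3-year cost streams and cases averted relative to status quo
cost_scenario = discounted_total([100_000, 100_000, 100_000])
cost_statusquo = discounted_total([60_000, 60_000, 60_000])
dollars_per_case = icer(cost_scenario, cost_statusquo, 25, 15)
```

In the analysis itself the streams run over a 100-year horizon and health outcomes are discounted the same way as costs.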
Results: Single-visit scenarios with repeat screening averted considerably more cervical cancer cases than one-time screening strategies (range: 27,600 – 32,000 compared to 12,800 – 15,000) but had higher total costs (range: $575.1 million [M] – $615.3M compared to $445.5M – $458.3M). The most cost-effective strategy was vaccine-scale up with single-visit HPV testing (without genotyping) as one-time screening (ICER: $6,836/case averted), while vaccine scale-up alone had the highest ICER ($17,101/case averted).
Conclusions: In combination with HPV vaccine scale-up, single-visit strategies using HPV DNA testing with and without genotyping were more cost-effective than vaccine scale-up alone. Single-visit strategies with high performance HPV DNA testing may facilitate more frequent screening and improve retention for treatment, and therefore may be efficient for reducing cervical cancer incidence and associated costs in settings similar to KwaZulu-Natal.
Keywords: Infectious diseases, economic evaluation, cervical cancer screening, cancer prevention
CEA plot
Cost-Effectiveness of Cervical Cancer Screening Strategies with Nonavalent HPV Vaccine Scale-Up in KwaZulu-Natal, South Africa
The relative value of information versus implementation provided by clinical trials of diagnostic tests for HIV-associated tuberculosis
OP-078 Applied Health Economics (AHE)
Pamela P Pei1, Kieran P Fitzmaurice1, Mylinh H Le1, Christopher Panella1, Michelle L Jones1, Ankur Pandya2, C. Robert Horsburgh3, Kenneth A Freedberg4, Milton C Weinstein2, A. David Paltiel5, Krishna P Reddy4
1Medical Practice Evaluation Center, Massachusetts General Hospital, Boston, MA, USA
2Department of Health Policy and Management, Harvard T.H. Chan School of Public Health, Boston, MA, USA
3School of Public Health and School of Medicine, Boston University, Boston, MA, USA
4Medical Practice Evaluation Center, Massachusetts General Hospital, and Harvard Medical School, Boston, MA, USA
5Public Health Modeling Unit, Yale School of Public Health, New Haven, CT, USA
Purpose: Changes in testing guidelines are sometimes made only after demonstrating an improvement in a downstream patient-centered outcome, rather than an improvement in diagnostic performance alone. We sought to assess how future randomized controlled trials of diagnostic tests for HIV-associated tuberculosis (TB) could improve public health decision-making in South Africa.
Methods: We projected the clinical outcomes and costs, given current information, of three TB screening strategies among hospitalized people with HIV in South Africa: sputum Xpert MTB/RIF (Xpert), sputum Xpert plus urine AlereLAM (Xpert+AlereLAM), and sputum Xpert plus the newer and more sensitive and costly urine FujiLAM (Xpert+FujiLAM). Accounting for both value-of-information and value-of-implementation, we projected the incremental net monetary benefit (INMB) of a future trial designed to compare mortality associated with each TB screening strategy, rather than making decisions based on current knowledge of improved diagnostic performance of FujiLAM (Panel A). We used a validated, 2nd-order Monte Carlo simulation to account for uncertainty around key parameters, including TB prevalence, rates of empiric treatment and sputum specimen provision, and the sensitivity and specificity of each test. The simulated trial could provide value-of-information by reducing parametric uncertainty (traditional value-of-information analysis). We accounted for value-of-implementation by estimating the INMB of improved implementation of the optimal pre-trial or post-trial strategy compared to current TB screening practice, assumed to be 50% Xpert and 50% Xpert+AlereLAM. We used a previously-estimated opportunity cost-based cost-effectiveness threshold of $3,000/year-of-life saved (YLS) for South Africa (~50% of per-capita GDP).
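The net-monetary-benefit calculation underlying the INMB estimates can be sketched as follows; the per-person life-years and costs are hypothetical, chosen only so the increments echo the reported 0.5 years-of-life gained and $780/YLS ICER.

```python
def nmb(life_years, cost, wtp=3000):
    """Net monetary benefit: health valued at the willingness-to-pay threshold
    (here $3,000 per year-of-life saved), minus cost."""
    return wtp * life_years - cost

def inmb(ly_new, cost_new, ly_ref, cost_ref, wtp=3000):
    """Incremental NMB of the new strategy over the reference strategy."""
    return nmb(ly_new, cost_new, wtp) - nmb(ly_ref, cost_ref, wtp)

# hypothetical per-person values: the Xpert+FujiLAM strategy adds 0.5
# life-years at an extra cost of $390 -> ICER = $780/YLS, below threshold
delta = inmb(ly_new=10.5, cost_new=1390, ly_ref=10.0, cost_ref=1000)
```

A positive per-person INMB, multiplied over the eligible population, is what drives the population-level value-of-implementation estimates.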
Results: Based on current information, adoption of Xpert+FujiLAM is expected to yield 0.5 years-of-life gained per person compared to current TB screening practices, with an incremental cost-effectiveness ratio of US$780/YLS. There is no expected health or economic benefit in delaying adoption of this strategy until additional information is obtained from trials (Panel B, value-of-information=0 at $3,000/YLS cost-effectiveness threshold). Meanwhile, implementing Xpert+FujiLAM now would produce substantial health benefits at the population level compared to current practices, resulting in an INMB >$900 million over 10 years (Panel C), far exceeding the expected costs of outreach and data-acquisition exercises required to improve test adoption.
Conclusions: Future research on existing higher-sensitivity TB diagnostics should include a focus on removing barriers to implementation. Policymakers may consider more prompt adoption of novel diagnostic tests based on estimates of diagnostic accuracy alone.
Keywords: Value-of-information, Value-of-implementation, Tuberculosis, HIV, Clinical trials
Panels A-C
Value of information and value of implementation outcomes are estimated based on incremental net monetary benefit. Health benefits are valued based on a willingness-to-pay threshold of $3,000 per year-of-life saved. Interventions to obtain information and improve implementation are assumed to impact up to 500,000 eligible patients per year for up to 5 years. In scenarios with improved (but not perfect) implementation, adoption of the most cost-effective test is assumed to increase linearly from current levels to 100% of the market after 5 years.
Evaluating the potential impact of implementation interventions on the US’ efforts towards ending the HIV Epidemic
OP-079 Applied Health Economics (AHE)
Lia Humphrey1, Benjamin Enns1, Micah Piske1, Xiao Zang2, Bohdan Nosyk3
1Centre for Health Evaluation & Outcome Sciences, Vancouver, BC, Canada
2Department of Epidemiology, Brown School of Public Health, Providence, Rhode Island, USA & Centre for Health Evaluation & Outcome Sciences, Vancouver, BC, Canada
3Faculty of Health Sciences, Simon Fraser University, Burnaby, BC, Canada & Centre for Health Evaluation & Outcome Sciences, Vancouver, BC, Canada
Purpose: Improving the implementation of existing evidence-based interventions (EBIs) is a key priority of the Ending the HIV Epidemic Initiative; however, competing efforts to this end may deliver different long-term outcomes and value for decision-makers to consider. Our objective was to estimate the epidemiological impact of an illustrative set of interventions designed to expand implementation of three EBIs for HIV treatment and prevention.
Methods: We used a dynamic deterministic HIV transmission model calibrated to simulate HIV microepidemics in Los Angeles (LA), Atlanta, and Miami. We identified published results of implementation interventions designed to improve the scale of delivery for pre-exposure prophylaxis (PrEP), rapid antiretroviral therapy (ART) initiation (within 30 days of diagnosis), and HIV testing. Scale of delivery was defined as the product of reach (i.e., proportion of the target population reached by and accepting the intervention) and adoption (i.e., proportion of settings and healthcare providers delivering the intervention). Our status quo comparator scenario reflected current population characteristics and scale of service delivery. Interventions were sustained up to a maximum of 10 years and we estimated reduction in new HIV infections over a 20-year time horizon.
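The definition of scale of delivery as the product of reach and adoption can be made concrete with a small helper; the reach and adoption values below are illustrative, not estimates from the cited implementation studies.

```python
def scale_of_delivery(reach, adoption):
    """Scale of delivery = reach (share of the target population reached by
    and accepting the intervention) x adoption (share of settings and
    providers delivering it)."""
    return reach * adoption

# hypothetical: an intervention lifts reach from 0.30 to 0.50 while adoption
# stays at 0.60, so scale rises by the same 1.67-fold factor reported for
# practitioner education (numbers illustrative)
baseline = scale_of_delivery(0.30, 0.60)
improved = scale_of_delivery(0.50, 0.60)
ratio = improved / baseline
```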
Results: We estimated that the scale of PrEP delivery could increase by 1.2 times through appointment pre-screening questionnaires (filled out by patients and handed to their primary care practitioners), or by 1.67 times through educational training for primary care practitioners, resulting in 6% and 10% incidence reductions respectively in LA if their effect was sustained for 10 years (Fig.1A). Free opt-in, mailed HIV self-testing kits could increase the scale of HIV test delivery by 1.6 times above status-quo levels, compared to opt-out testing (1.1 times) or emergency room testing offer reminders (1.3 times), reducing new infections in Miami by up to 7% over 20 years versus 1% and 4%, respectively (Fig.1B). Rapid ART uptake could be increased by 1.2 times with the addition of telehealth services, reducing new infections by an additional 1% in Atlanta over 20 years (Fig.1C).
Conclusions: Our results demonstrate the impact of relatively low-cost interventions to enhance the implementation and outcomes of existing EBIs in HIV prevention and treatment. Selecting the most promising interventions requires long-term estimates of their relative impact and value, which will be influenced by local demographic, epidemiological, and structural conditions.
Keywords: HIV, implementation science, evidence-based interventions, pre-exposure prophylaxis (PrEP), HIV testing, antiretroviral therapy (ART)
Estimated HIV infections averted by increases in intervention scale and sustainment
Potential Health Impacts of Expanding Food Vouchers for Low-Income People Living with HIV
OP-080 Applied Health Economics (AHE)
Margo Wheatley, Eva A Enns
Division of Health Policy and Management, University of Minnesota School of Public Health, Minneapolis, USA
Purpose: To determine the potential health impacts of reducing food insecurity among people living with HIV (PLWH) by expanding food vouchers through the Ryan White HIV/AIDS Program (RWHAP), a federal program funding support services for low-income PLWH.
Methods: An individual-based model reflecting the population and HIV care dynamics of RWHAP clients in the Minneapolis/St. Paul region was developed, calibrated, and validated. Two strategies were compared: 60% of clients needing food vouchers received them (status quo) and 100% of clients needing food vouchers received them (expansion intervention). Health benefits were measured as the proportion of clients who achieved desired HIV outcomes (viral suppression) and quality-adjusted life-years (QALYs). Increases in viral suppression when receiving food vouchers were estimated from a causal analysis of RWHAP data; QALY gains were estimated from published literature on food insecurity. Strategies were evaluated from a healthcare system perspective over 10 years, discounting costs and QALYs at 3% annually. The intervention included the cost of food vouchers ($60/month for an average of 4 months/year) and outreach, in addition to HIV-related health care costs. Key parameters were varied in scenario analyses and preliminary cost-effectiveness results were assessed.
Results: In the status quo, 73.5% of clients were virally suppressed on average over the 10-year period; this increased to 74.4% with the expanded food voucher intervention. The intervention resulted in 535 QALYs gained (discounted) in a population of >4,000 clients and incurred $147,000 in voucher and outreach costs per year. When compared to the status quo, the intervention was cost-saving overall due to averted health care costs with improved viral suppression. If a higher amount of food aid were required to alleviate food insecurity, an expanded food aid program was still highly cost-effective in preliminary analyses. Compared to the base-case status quo, increasing the number of vouchers from 4/year to 12/year had an incremental cost-effectiveness ratio (ICER) of $5,033/QALY gained; increasing the dollar amount/month from $60 to $240 had an ICER of $14,597/QALY gained; increasing both had an ICER of $50,275/QALY gained.
Conclusions: Expanding food vouchers to fill unmet need could be very cost-effective and potentially cost-saving. Results support continued funding for food vouchers and similar programs that address socioeconomic challenges for PLWH. Expansion of food voucher programs could be integrated into multifaceted strategies aimed at achieving local and national HIV treatment goals.
Keywords: HIV/AIDS, social determinants of health, food insecurity
Reluctance to switch from daily oral to long-acting antiretroviral therapy among people living with HIV: results from a discrete choice experiment
OP-081 Patient and Stakeholder Preferences and Engagement (PSPE)
Douglas Barthold1, Enrique M Saldarriaga1, Aaron T Brah2, Pallavi Banerjee3, Jane M Simoni4, Brett Hauber5, Susan M Graham6
1The Comparative Health Outcomes, Policy, and Economics (CHOICE) Institute, University of Washington, Seattle, USA
2School of Medicine, Oregon Health and Science University, Portland, USA
3Information School, University of Washington, Seattle, USA
4Department of Psychology, University of Washington, Seattle, USA; Department of Global Health, University of Washington, Seattle, USA
5Pfizer, Inc, New York, USA
6Division of Allergy & Infectious Diseases, Department of Medicine, University of Washington, Seattle, USA; Department of Global Health, University of Washington, Seattle, USA; Department of Epidemiology, University of Washington, Seattle, WA
Purpose: To examine the characteristics of people living with HIV (PLWH) who stated in a discrete choice experiment (DCE) their preferences to remain on daily-oral antiretroviral therapy (ART), rather than switching to long-acting ART (LA-ART).
Methods: A web-based DCE was administered to PLWH who were aged 18+ and taking daily oral ART at a network of HIV care clinics in Washington State and in Atlanta, Georgia. Each participant answered 17 choice scenarios with three options: two hypothetical LA-ART regimens and their current daily oral regimen. Each LA-ART option featured 7 attributes: mode of delivery, location, frequency, pain, pre-treatment time undetectable, pre-treatment negative reaction testing, and late-dose leeway.
We calculated the percent of scenarios where the respondent chose their current daily therapy. We used multivariate fractional logistic regression analysis to identify the association between that percent and the following respondent characteristics: location, age, years using ART, years since HIV diagnosis, mental health diagnoses, substance use, injectable drug use, CD4 count, past ART regimens, non-HIV medication use, AIDS diagnosis, past adherence to ART, injection aversion, ease of getting to HIV clinic, race/ethnicity, gender, sexual orientation, employment, education, and income.
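A fractional logit for a bounded-share outcome like this can be fit by maximizing a Bernoulli quasi-likelihood; the sketch below uses one synthetic covariate (standing in for injection aversion) rather than the authors' full covariate set, and scipy's BFGS optimizer rather than their statistical package.

```python
import numpy as np
from scipy.optimize import minimize

def fractional_logit_fit(X, y):
    """Fit a fractional logit by maximizing the Bernoulli quasi-likelihood,
    valid for outcomes y in [0, 1] such as the share of DCE scenarios in
    which a respondent kept their current oral regimen."""
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept

    def negloglik(beta):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        eps = 1e-9                               # guard against log(0)
        return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    res = minimize(negloglik, x0=np.zeros(X1.shape[1]), method="BFGS")
    return res.x                                 # [intercept, slopes...]

# synthetic data: higher "injection aversion" -> larger reluctance share
rng = np.random.default_rng(0)
x = rng.normal(size=500)
true_p = 1 / (1 + np.exp(-(-0.5 + 1.0 * x)))
y = rng.beta(true_p * 5, (1 - true_p) * 5)       # fractional outcome in (0, 1)
beta = fractional_logit_fit(x, y)
```

The quasi-likelihood estimator is consistent for the mean model even though the fractional outcome is not Bernoulli, which is why this specification suits shares of choice scenarios.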
Results: Among 697 respondents, the reported genders were 492 men, 167 women, 14 transgender women, 6 transgender men, and 10 other (8 preferred not to say). Median age was 51 years (range, 18-73), and patients identified as Black (n=346), White (n=277), Hispanic (n=61), and other race/ethnicity (n=91). Some respondents (n=74) chose their current daily treatment over LA-ART in all 17 scenarios (100% reluctance), 317 never chose their current treatment (0% reluctance), and choices for 306 patients were mixed.
Preliminary results from multivariate analyses suggest that aversion to injections and high school education or less were both significantly associated with greater preference for current daily oral treatment, rather than switching to LA-ART. Younger age (<50) and poor past adherence to ART were both significantly associated with greater likelihood of preferring to switch to LA-ART.
Conclusions: LA-ART may be preferable to daily ART and has potential to improve adherence, but some PLWH may be unwilling to switch from their current daily regimen. Preliminary analyses suggest that PLWH who are older, more adherent, more averse to injections, and with lower educational attainment will be more reluctant to switch to LA-ART.
Keywords: long-acting antiretroviral therapy, patient preferences, people living with HIV, discrete choice experiment
Optimizing multivariate calibration parameters in microsimulation models
OP-082 Quantitative Methods and Theoretical Developments (QMTD)
Elizabeth A Handorf, Daniel M Geynisman, J Robert Beck
Fox Chase Cancer Center, Temple University Health System, Philadelphia, PA
Purpose: We develop an approach to find the best calibration parameters for health-state based microsimulation models, adjusting transition probabilities based on interim outcomes (i.e. time of disease progression) so that model results match target survival curves.
Methods: When constructing a health-state model, often one cannot obtain estimates of all transition probabilities from a single source. Synthesis of separate study results is required; however, this generates calibration issues when 1) dependence exists between study states, and/or 2) differences exist between study samples. In a microsimulation study of first and second line therapies for advanced prostate cancer, we encountered both issues: our target overall survival (OS) should match results of relevant first-line clinical trials, but incorporating parameters from second-line trials independent of first-line outcomes caused a substantial upward OS bias. We propose a two-parameter correction that introduces a survival penalty for later health states in patients with poor outcomes on first-line treatment. We modify transition probabilities in later health states via a hazard ratio that depends on the progression-free time a subject spends in line 1 (T1). This hazard ratio decreases linearly as a function of T1 from a maximum of θ>1, down to 1 (no effect) at time ω. The natural next step is to find optimal values of θ and ω, i.e. those minimizing differences between model-based and trial OS. Because the results of the microsimulation model are a step function without a parametric form, we do this via the Nelder-Mead simplex method with a constrained parameter space. We overcome convergence issues in this noisy problem by conditioning on the random draws of the survival function. Repeating the microsimulation procedure with different randomization sequences, we identify a final set of parameters for model calibration. This procedure was applied to a target first-line trial in prostate cancer.
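The optimization step can be sketched with scipy's Nelder-Mead routine; the toy response surface below stands in for the microsimulation, with fixed random draws playing the role of conditioning on the simulation's random sequences. All numbers other than the trial targets and uncalibrated estimates quoted in the abstract are invented.

```python
import numpy as np
from scipy.optimize import minimize

# common random numbers: fixing these draws makes the objective deterministic,
# mimicking conditioning on the microsimulation's random survival draws
rng = np.random.default_rng(42)
noise = rng.normal(scale=0.005, size=2)
target = np.array([0.70, 0.44])               # trial OS at 3 and 5 years

def model_os(theta, omega):
    """Hypothetical response surface: larger theta (max hazard ratio) and
    smaller omega (time at which the penalty vanishes) lower survival."""
    base = np.array([0.79, 0.61])             # uncalibrated model OS
    penalty = 0.08 * (theta - 1) * np.exp(-omega / 100)
    return base - penalty * np.array([1.0, 2.0]) + noise

def loss(params):
    theta, omega = params
    if theta < 1 or omega < 0:                # constrained parameter space
        return 1e6
    return np.sum((model_os(theta, omega) - target) ** 2)

res = minimize(loss, x0=[1.5, 60.0], method="Nelder-Mead")
theta_opt, omega_opt = res.x
```

In the actual procedure this optimization is repeated under different randomization sequences and the resulting parameter sets are combined into the final calibration.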
Results: Three- and five-year OS from the target trial were 70% and 44%. Without calibration, the microsimulation model produced estimates of 79% and 61%. The optimal calibration parameters (θ, ω) were (2.21, 87). With calibration, estimates were 68% and 49% (see Figure 1).
Figure.
Overall survival from microsimulation model compared to trial results, with and without calibration.
Conclusions: Without proper calibration, microsimulation models for multiple lines of therapy can exhibit substantial bias. Our method for choosing the optimal calibration parameters successfully reduced differences between the model-based results and target clinical trial.
Keywords: microsimulation, calibration, survival
Sensitivity and identifiability analyses for efficient calibration of health policy models
OP-083 Quantitative Methods and Theoretical Developments (QMTD)
Valeria Gracia1, Jeremy D. Goldhaber-Fiebert2, Fernando Alarid-Escudero1
1Center for Research and Teaching in Economics, Aguascalientes, Mexico
2Department of Health Policy, Stanford University, California, US
Purpose: Model calibration informs input parameters by matching model outputs to calibration targets. Since calibration is often computationally intensive, we adapted algorithms from the inverse modeling literature to reduce its computational burden by better informing the input parameter space.
Methods: We used coverage, local sensitivity, and collinearity analyses to define and inform the prior distributions of input parameters to reduce estimation time. We illustrated this approach with a realistic COVID-19 dynamic model that includes both community and household transmission, calibrating it to historical COVID-19 daily incident confirmed cases in the Mexico City Metropolitan Area (MCMA) using the Incremental Mixture Importance Sampling (IMIS) algorithm. First, we sampled 1,000 parameter sets from the prior distribution and performed a coverage analysis, comparing the model-predicted lower and upper bounds to corresponding calibration target bounds. Then, we numerically computed a sensitivity matrix populated with the first derivatives of the model output with respect to each input parameter to assist in ranking parameters’ influence on the model output. We used this matrix to determine which parameters’ prior distributions should have their bounds modified. Finally, we conducted a collinearity analysis, estimating the approximate linear dependence between parameters expressed as a collinearity index computed from the sensitivity matrix’s eigenvalues. Parameters with high collinearity indices (>15) are not identifiable, meaning that their calibration will not yield a unique solution.
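The sensitivity-matrix and collinearity-index computations can be sketched as follows; the three-parameter toy model is illustrative, and the index formula (one over the square root of the smallest eigenvalue of the normalized sensitivity submatrix's Gram matrix) follows the common inverse-modeling convention, which may differ in detail from the authors' implementation.

```python
import numpy as np

def sensitivity_matrix(model, theta, h=1e-4):
    """Finite-difference first derivatives of model outputs w.r.t. each input."""
    base = model(theta)
    S = np.empty((base.size, theta.size))
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = h * max(abs(theta[j]), 1.0)
        S[:, j] = (model(theta + step) - base) / step[j]
    return S

def collinearity_index(S, cols):
    """Collinearity index for a parameter subset: 1/sqrt(smallest eigenvalue)
    of the normalized sensitivity submatrix's Gram matrix; values > 15 are
    conventionally flagged as non-identifiable."""
    Snorm = S[:, cols] / np.linalg.norm(S[:, cols], axis=0)  # unit-length columns
    eigvals = np.linalg.eigvalsh(Snorm.T @ Snorm)
    return 1.0 / np.sqrt(eigvals.min())

# toy 3-parameter model with 50 outputs; the 3rd parameter's sensitivity
# column is nearly a multiple of the 1st, so the pair (0, 2) should flag
t = np.linspace(0, 1, 50)
def model(theta):
    return theta[0] * t + theta[1] * t**2 + theta[2] * (t + 1e-4 * t**3)

S = sensitivity_matrix(model, np.array([1.0, 1.0, 1.0]))
idx_ok = collinearity_index(S, [0, 1])
idx_bad = collinearity_index(S, [0, 2])
```

A pair of parameters whose sensitivity columns are nearly parallel cannot be separated by the targets, which is exactly what the index quantifies.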
Results: With the local sensitivity analysis, we found that parameters governing the time-varying daily detection rate had the highest impact on model-predicted incident cases, followed by the parameters governing the proportional reduction in effective contacts and the transmission probabilities in the community and household. The analysis permitted resizing the prior bounds (see Fig. 1-A), reducing the computation time to calibrate the model by almost half. Using daily incident cases as targets, we found that all parameter combinations were identifiable, with collinearity indices of less than 15 (see Fig. 1-B). In contrast, if we had used weekly incident cases as targets, no parameter combination would have been identifiable, with the collinearity indices being infinite for all parameter combinations.
Conclusions: Local sensitivity and collinearity analyses can increase the efficiency of model calibration appreciably, especially in large complex models that are computationally intensive.
Keywords: calibration, sensitivity analysis, non-identifiability, inverse modeling, health policy models, Bayesian methods
Figure 1.
(A) Coverage pre and post sensitivity analysis. Shaded areas depict the 95% posterior model-predicted interval on incident cases. Solid points show daily COVID-19 confirmed incident cases in MCMA from February 24, 2020, to December 7, 2020. (B) Collinearity index for all combinations of parameters.
Simulating event times from non-homogeneous Poisson Point Processes
OP-084 Quantitative Methods and Theoretical Developments (QMTD)
Thomas A Trikalinos1, Yuliia Sereda2
1Department of Health Services, Policy & Practice, School of Public Health, Brown University, Providence, RI, USA; Department of Biostatistics, School of Public Health, Brown University, Providence, RI, USA
2Masters in Public Health Program, School of Public Health, Brown University, Providence, RI, USA
Purpose: It is often desirable to simulate single (e.g., death) or repeated (e.g., symptoms) events in continuous time. Mathematically, event patterns can be modeled with Point Processes (PPs). Commonly used PPs include the homogeneous Poisson PP and the Weibull PP, which correspond to exponentially- and Weibull-distributed event times, respectively. These easy-to-simulate cases are special examples of non-homogeneous Poisson PPs (NHPPPs); in a general NHPPP, the instantaneous risk of the event at time t can be any continuous intensity function λ(t) > 0. We describe and implement in an R package the simulation of event times from arbitrary NHPPPs.
Methods: We provably sample from a target NHPPP with three approaches: (i) Time warping recovers the target NHPPP with an appropriate time-transformation of samples drawn from a time-homogeneous Poisson PP. (ii) Properties of the gaps of order statistics from exponentially distributed variables with appropriate rates can be used to recover the target NHPPP. (iii) An acceptance-rejection algorithm can thin out a “denser” homogeneous Poisson PP to obtain event times from the target NHPPP. We explicate with a numerical study simulating 10⁴ event trajectories given the λ(t) and Λ(t) in Figure 1a, with r=0.2 and t∈[0, 6π).
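The thinning (acceptance-rejection) approach is the easiest of the three to sketch in a few lines; the intensity function below is a guess at the sine-wave-with-growing-amplitude shape described in the figure caption, not the exact λ(t) of the numerical study.

```python
import numpy as np

def sample_nhppp_thinning(lam, lam_max, t_end, rng):
    """Draw event times on [0, t_end) from an NHPPP with intensity lam(t) by
    thinning a homogeneous Poisson process of rate lam_max >= max lam(t):
    keep each candidate time t with probability lam(t) / lam_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)   # next candidate event time
        if t >= t_end:
            return np.array(times)
        if rng.uniform() < lam(t) / lam_max:  # accept with prob lam(t)/lam_max
            times.append(t)

# illustrative intensity: a sine wave with exponentially growing amplitude,
# r = 0.2 as in the abstract (the exact function in Figure 1a may differ)
r = 0.2
def lam(t):
    return np.exp(r * t) * (1 + np.sin(t))

t_end = 6 * np.pi
lam_max = np.exp(r * t_end) * 2.0             # upper bound on lam over [0, t_end)

rng = np.random.default_rng(1)
counts = [len(sample_nhppp_thinning(lam, lam_max, t_end, rng)) for _ in range(200)]
```

The tighter the bound lam_max, the fewer rejected candidates, which is why thinning is the slowest of the three approaches when the intensity varies over orders of magnitude.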
Results: All three methods sample from the target process. From theory, the number of events in each simulated trajectory should follow a Poisson distribution with parameter Λ(6π)−Λ(0)≈50.33, i.e., should have mean and variance of 50.33. Means were within 0.07% and variances within 1.36% of the target value with all three simulation approaches (Table). Figure 1b shows that the empirical distribution of the simulated times with the time-warping method matches the simulated intensity function scaled to unit area (qualitatively identical with the other approaches). With each routine’s preferred arguments, the time-warping approach is fastest and the thinning approach slowest by two orders of magnitude (Figure 1c).
Conclusions: We use theory-based approaches to sample event times from arbitrary NHPPP. Although NHPPPs are not appropriate models for all phenomena, they are satisfactory models for many. Specifically, we use NHPPPs to model event times for cause specific deaths, lesion occurrence, lesion transitions, and symptoms emergence in Kystis, our discrete event simulation model of bladder cancer natural history, prevention, and control.
Keywords: Point Processes, memoryless property
Numerical study results
(a) Shown are the intensity function (red) and its integral, the intensity measure (blue), of a time-varying process comprising a sine wave with an exponentially changing amplitude. (b) The histogram of the simulated trajectories with the time-warping approach matches the unit-area-scaled simulated intensity function well. (c) In our implementation, the time-warping approach is fastest and thinning is slowest.
Summaries of simulated event trajectories
| Simulation approach | Mean number of events [relative bias] | Variance of events [relative bias] | Goodness of fit with target NHPPP, P value |
|---|---|---|---|
| Time warping | 50.31 [0.03%] | 49.65 [1.36%] | 1.000 |
| Order Statistics | 50.36 [-0.07%] | 50.38 [-0.10%] | 0.980 |
| Thinning | 50.33 [0.00%] | 50.61 [-0.55%] | 0.921 |
In the numerical study the expected mean and variance of the number of events are both equal to 50.33. Results with the three approaches are close. A goodness of fit test cannot reject that the simulations came from the targeted process (P-values are close to 1, indicating good fit).
Using Deep Neural Networks for Calibration of a Colorectal Cancer Microsimulation Model
OP-085 Quantitative Methods and Theoretical Developments (QMTD)
Vahab Vahdat1, Paul J. Limburg2, Jing V. Chen1, Leila Saoud1, Bijan J. Borah3, Oguzhan Alagoz4
1Exact Sciences Corporation, Madison, WI, USA
2Division of Gastroenterology and Hepatology, Mayo Clinic, Rochester, MN, USA; Exact Sciences Corporation, Madison, WI, USA
3Division of Health Care Delivery Research, Mayo Clinic, Rochester, MN, USA
4Departments of Industrial & Systems Engineering and Population Health Sciences, University of Wisconsin–Madison, Madison, WI, USA
Purpose: Calibration is a commonly used method for estimating unobservable natural history (NH) parameters in cancer microsimulation models. Conventional approaches to calibration require running the simulation with a large number of input combinations and are therefore computationally intensive. This study develops a deep neural network (DNN) framework to efficiently calibrate simulation models.
Methods: We developed the Colorectal Cancer (CRC) and Adenoma Incidence & Mortality (CRC-AIM) microsimulation model which simulates the NH of CRC based on the adenoma-carcinoma sequence. CRC-AIM calibration involves estimating 23 unknown parameters where each simulation replication takes approximately 10 minutes on an Amazon Web Services (AWS) cloud instance. We first generated an initial set of 10,000 parameter combinations using Latin-Hypercube Sampling (LHS) and ran CRC-AIM to estimate the CRC incidence for each parameter combination. We then used this set of 10,000 runs as inputs to the DNN model, which included four dense layers with 128, 64, 64, and 64 nodes, respectively, and an output layer with four nodes, representing incidence by sex and location (colon and rectum). The model was developed with sigmoid activation functions in the first and third layers and Rectified Linear Unit functions for the other layers. We used mean square error (MSE) between the predicted and observed incidence as the goodness-of-fit measure.
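The LHS-plus-surrogate idea can be sketched end to end in NumPy; the three-parameter toy "simulator", network size, and learning rate below are illustrative stand-ins for the 23-parameter CRC-AIM calibration and its four-layer DNN.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample over a toy 3-parameter input space (CRC-AIM has 23)
sampler = qmc.LatinHypercube(d=3, seed=0)
X = sampler.random(n=2000)                       # inputs scaled to [0, 1)
y = (X @ np.array([0.5, 0.3, 0.2]))[:, None]     # stand-in "simulated incidence"

# minimal one-hidden-layer surrogate trained by full-batch gradient descent
# (a stand-in for the paper's four-layer DNN; sizes and rates are invented)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.5, size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)
lr, losses = 0.05, []
for _ in range(1000):
    H = np.tanh(X @ W1 + b1)                     # forward pass
    err = H @ W2 + b2 - y
    losses.append(float(np.mean(err ** 2)))      # MSE goodness-of-fit
    dW2 = H.T @ err / len(X)                     # backpropagation
    dH = err @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH / len(X)
    W2 -= lr * dW2; b2 -= lr * err.mean(0)
    W1 -= lr * dW1; b1 -= lr * dH.mean(0)
```

Once trained, evaluating the surrogate on millions of new parameter sets costs seconds, which is the source of the speed-up the abstract reports.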
Results: Using an AWS p3.2xlarge instance, the DNN model was trained in 27.3 seconds with a training MSE of 0.002. On the test dataset, predicted CRC incidence was comparable to actual CRC incidence in most cases, with MSE less than 0.005. The figure below shows the predicted vs. actual CRC incidence for the first 150 testing data points, stratified by sex and location. The trained DNN was used to predict outcomes for 10 million newly generated inputs in 463.9 seconds, a task that would have required a total of 190 years of computational time with the CRC-AIM model. The average difference between the CRC incidence predicted by DNN and CRC-AIM was 7% among the 50 parameter sets considered well-fitting to incidence, providing a potentially useful framework for model selection.
Figure 1.
Conclusions: Our proposed DNN framework provides an efficient and reliable approach to reduce the computational burden of the calibration and may be a feasible approach to calibrate complex simulation models.
Keywords: Calibration, Deep Neural Network, Colorectal cancer, Microsimulation, Natural history model, CRC-AIM
Comparison of CRC incidence per 100,000 by location (colon and rectum) and gender (female and male) between the CRC-AIM natural history model and DNN predictions for 150 unknown parameter sets. Note that the red and black lines overlap in most cases; the black lines are therefore often invisible.
Projecting the impact of rapid drug-susceptibility tests on the incidence of gonococcal infection and the effective lifespan of antibiotics
OP-086 Health Services, Outcomes and Policy Research (HSOP)
Reza Yaesoubi1, Minttu Rönn2, Thomas L. Gift3, Sancta B. St. Cyr3, Joshua A. Salomon4, Yonatan H. Grad5
1Department of Health Policy and Management, Yale School of Public Health, New Haven, USA
2Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, USA
3Division of STD Prevention, Centers for Disease Control and Prevention, Atlanta, USA
4Department of Health Policy, School of Medicine Stanford University, Stanford, USA
5Department of Immunology and Infectious Diseases, Harvard T. H. Chan School of Public Health, Boston, USA
Purpose: N. gonorrhoeae, the cause of the sexually transmitted disease gonorrhea, has developed resistance to all first-line antibiotics used to treat it, making the threat of untreatable gonorrhea a major public health concern. The control of antimicrobial resistant gonorrhea is challenged by the lack of rapid diagnostic tests that inform on the susceptibility of infection to available antibiotics. Gonococcal infections are currently treated empirically with ceftriaxone, the last remaining recommended antibiotic against gonorrhea. Our objective was to evaluate the impact of a rapid drug-susceptibility test (DST) that reports susceptibility to ciprofloxacin and tetracycline on gonorrhea incident cases and the effective lifespan of existing antibiotics for the treatment of gonorrhea.
Methods: We developed and calibrated a mathematical model of gonococcal transmission to describe the key characteristics of gonorrhea epidemics among men who have sex with men (MSM) in the U.S., including disease prevalence and incidence, and the development of resistance to ciprofloxacin, tetracycline, and ceftriaxone. We used this model, projecting forward over 25 years, to estimate gonorrhea cases and the effective lifespan of first-line antibiotics if a rapid DST were available. We varied the sensitivity and the specificity of the test between 50-100% and 75-100%, respectively (sensitivity represents the probability of correctly detecting susceptibility for a drug-susceptible infection and specificity represents the probability of correctly detecting non-susceptibility for a drug-resistant infection).
Results: When 75% of gonorrhea cases receive a rapid DST with 75% sensitivity and 97.5% specificity, the overall effective lifespan of ciprofloxacin, tetracycline, and ceftriaxone is expected to increase by 31.0% (95% uncertainty interval: 0.2%, 79.2%) to 20.6 (12.9, 25.0) years. Under this scenario, the annual rate of incident gonorrhea is expected to increase by 4.4% (0.6%, 10.7%) to 6,569 (5,428, 8,038) per 100,000 MSM over the next 25 years, compared to the scenario where rapid DSTs are not available (see Figure).
Figure.
Conclusions: Our results suggest that the availability of rapid DSTs could prolong the clinical effectiveness of available antibiotics for the treatment of gonorrhea. However, rapid DSTs with imperfect specificity could lead to an increase in gonorrhea burden since a small number of resistant infections incorrectly identified as susceptible will be expected to fail therapy due to ineffective treatment. Individuals infected with these strains may contribute to gonorrhea spread and thus increased disease incidence.
Keywords: gonorrhea, drug resistance, antimicrobial resistance, mathematical modeling, simulation, drug susceptibility tests
Is implementing hrHPV self-sampling always cost-effective? A microsimulation modelling study
OP-087 Applied Health Economics (AHE)
Sylvia Kaljouw, Erik Jansen, Ellen Olthof, Jan Hontelez, Inge De Kok
Department of Public Health, Erasmus MC, University Medical Center Rotterdam, Rotterdam, the Netherlands
Purpose: HrHPV self-sampling has been suggested as an alternative to the hrHPV-test administered by the general practitioner to reach women who might otherwise not participate in cervical cancer screening. Recently, population-based data on loss-to-follow-up (LTFU) for reflex cytology (after self-sampling) (-7.8% LTFU) and relative sensitivity of self-sampling have been reported in the Netherlands (0.94 relative sensitivity for CIN3+). This influences the cost-effectiveness of wide deployment of self-sampling. Therefore, we aimed to find the tipping point for cost-effectiveness of self-sampling with respect to switchers (women who used to participate at the general practitioner but switch to self-sampling) when using realistic population-based assumptions for loss-to-follow-up and test characteristics.
Methods: We used the microsimulation model MISCAN-Cervix to calculate the cost-effectiveness of several self-sampling deployment strategies based on feasible options to adjust the guideline (such as actively sending the self-sampling kit to all women, to 30-year-old women, or with the reminder letter) (Table 1). Scenarios varied in the percentage of switchers (Sw: 5%, 20%, 30%, 40%, 50%, 100%, with a distinction between 30-year-olds (Sw30) and those aged 35+ (Sw35+)) and the percentage of current non-attenders starting to use self-sampling (Extra: 0%, 10%, 30%). Main outcome measures were life years (LYs) gained, QALYs gained, and costs per 100,000 women simulated, compared to the current screening strategy. Sensitivity analyses were performed for different vaccination rates (0%, 50%, 100%).
Results: In an unvaccinated population, the self-sampling strategies led to a change in LYs gained varying between -0.4% (Sw30:40%, Sw35+:5%, Extra:0%) and +5.7% (Sw30:30%, Sw35+:5%, Extra:30%) and a change in QALYs gained varying between -0.4% (Sw30:40%, Sw35+:5%, Extra:0%) and +6.7% (Sw30:30%, Sw35+:5%, Extra:30%). The change in costs varied between -15.8% (Sw30:100%, Sw35+:100%, Extra:0%) and +8.9% (Sw30:5%, Sw35+:5%, Extra:30%). Analyses of a vaccinated population are ongoing.
Conclusions: We found that costs will substantially decrease if more women use self-sampling, without a negative impact on the health benefits of screening. Health benefits increase the most when wider deployment of self-sampling leads to extra attendance, although QALYs gained could also increase slightly when women switch to self-sampling due to a small decrease in false positive test results.
Keywords: microsimulation, cost-effectiveness analysis, cervical cancer, screening, self-sampling
Table 1.
Strategy assumptions and examples of interventions that could fit with these assumptions.
Not another correlation coefficient: Correcting cost-effectiveness acceptability curves in economic evaluations
PP-002 Applied Health Economics (AHE)
Ali Jalali
Department of Population Health Sciences, Weill Cornell Medical College
Purpose: Evidence generated by health economic evaluations provides decision-makers with critical information to determine how to allocate healthcare resources and improve population health. Risk-averse decision-makers are interested not only in the estimated comparative economic value of health interventions, but also in the level of confidence or uncertainty in such estimates. The cost-effectiveness acceptability curve (CEAC), a function that relates a given monetary threshold (e.g., willingness-to-pay) per additional unit of effectiveness to a corresponding probability that an intervention is cost-effective, is a widely utilized and appealing approach to measuring uncertainty in economic evaluations.
Methods: CEACs are constructed from stochastic and probabilistic methods that generate a joint distribution of incremental cost and effect estimates between competing health interventions. While CEACs are ubiquitous in the literature, less attention has been paid to ensure that decision-makers can make valid inferences from such data. In this study, we focus on one potential source of bias in constructing CEACs; the bivariate correlation between incremental cost and effect, hereafter ρ.
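To make the construction concrete, here is a hedged Python sketch of how a CEAC is typically computed from such a joint distribution: for each willingness-to-pay threshold λ, the curve reports the fraction of replicates with positive net monetary benefit, NMB = λ·ΔE − ΔC. The simulated replicates and the induced correlation ρ below are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical bootstrap replicates of incremental effect (dE) and cost (dC);
# in practice these come from resampling trial data or from a PSA.
n = 5000
delta_e = rng.normal(0.05, 0.02, n)                      # incremental QALYs
delta_c = 500 + 4000 * delta_e + rng.normal(0, 300, n)   # induces a positive rho

def ceac(delta_c, delta_e, lambdas):
    """P(intervention is cost-effective) at each WTP threshold:
    the fraction of replicates with positive net monetary benefit."""
    nmb = np.outer(lambdas, delta_e) - delta_c   # shape (len(lambdas), n)
    return (nmb > 0).mean(axis=1)

lambdas = np.arange(0, 100001, 5000)
probs = ceac(delta_c, delta_e, lambdas)
rho = np.corrcoef(delta_c, delta_e)[0, 1]        # the correlation the abstract highlights
```

Because ρ shifts how often NMB crosses zero at a given λ, ignoring or mismeasuring it changes the CEAC's shape, which is the bias the abstract targets.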
Results: We argue that in order to produce reliable data for decision-makers, researchers must explicitly measure and make post-hoc adjustments for ρ when constructing CEACs. We then introduce a Stata program that adjusts for ρ and reproduces CEACs using output data from either bootstrapped resampling or probabilistic sensitivity analyses.
Conclusions: Since decision-analytic models do not directly observe ρ, we argue that results from trial-based economic evaluations can be an important reference for improving the validity of decision-modeling.
Keywords: Economic Evaluation, Cost-effectiveness Acceptability Curve, Uncertainty, sensitivity analyses
The Cost of Early Diagnosis of Thyroid Carcinoma with Calcitonin Tests
PP-003 Applied Health Economics (AHE)
Andres Alban, Reagan A. Collins, Tianna Herman, Jaber Valinejad, Mohammad S. Jalali, G. Scott Gazelle, Jagpreet Chhatwal, Carrie Cunningham
MGH Institute for Technology Assessment, Harvard Medical School, Boston, MA, USA
Purpose: The 2015 American Thyroid Association guidelines for managing thyroid cancer recommend delaying the biopsy of nodules with a diameter under 1cm due to the indolent nature of most small nodules. However, this practice may lead to the late detection of the more aggressive medullary thyroid carcinoma. We evaluate the costs of diagnosing medullary carcinoma with serum calcitonin tests in nodules under 1cm in diameter.
Methods: We used a microsimulation model of the natural history and treatment of thyroid nodules to evaluate the effectiveness of diagnosing thyroid nodules with calcitonin tests. We simulated a population of patients developing benign and malignant nodules (papillary, follicular, and medullary carcinoma). The model was calibrated to match the incidence rates of benign and malignant nodules and the prevalence of thyroid nodules in the US population. The patients with detected nodules go through diagnosis and treatment. We compared two strategies for nodules under 1cm in diameter: delayed, as recommended by current guidelines, or calcitonin testing. Patients diagnosed with a benign nodule do not receive any treatment. Patients diagnosed with medullary carcinoma during calcitonin testing receive a biopsy to confirm the diagnosis. If the diagnosis is confirmed, they receive surgery (thyroidectomy and lymph node dissection). The model incorporates the costs of calcitonin testing, its effectiveness in diagnosing early cases of medullary thyroid carcinoma, and the costs of biopsy and surgery. We focus the analysis on the female population between the ages 25 and 64 (the population at highest risk).
Results: The model estimates that 78,000 patients per year in the US female population between the ages 25 and 64 have a detected nodule of less than 1cm in diameter and are eligible for calcitonin testing. The calcitonin testing strategy results yearly in the early diagnosis of 123 medullary carcinomas and 19 false-positive diagnoses, relative to the current guidelines, with additional yearly costs of calcitonin testing at $2.9M, biopsies at $85,700, and surgeries at $1.7M. The overall cost per early detection is $38,300.
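As an arithmetic check on the figures above, the overall cost per early detection is the sum of the additional yearly costs divided by the number of early diagnoses. With the rounded inputs quoted in the Results the ratio comes to roughly $38,100, slightly below the reported $38,300, which presumably reflects unrounded values.

```python
# Reported yearly figures from the Results (as rounded in the abstract):
costs = {
    "calcitonin testing": 2_900_000,
    "biopsies": 85_700,
    "surgeries": 1_700_000,
}
early_detections = 123  # early medullary carcinoma diagnoses per year

# Cost per early detection = total additional cost / early diagnoses.
cost_per_early_detection = sum(costs.values()) / early_detections
print(round(cost_per_early_detection, -2))  # ~ $38,100 with these rounded inputs
```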
Conclusions: Calcitonin testing to diagnose medullary carcinoma leads to earlier diagnosis and treatment at an increased cost. Assessing the benefits of early diagnosis (lives saved, improved quality of life, treatment costs saved) and the harms of false-positive diagnoses (quality of life lost, increased treatment costs) could inform the cost-effectiveness of the strategies.
Keywords: Thyroid cancer, early diagnosis, cost analysis, microsimulation
Cost-Effectiveness of Revised US Pneumococcal Vaccination Recommendations in Underserved Minority Adults <65-Years-Old
PP-004 Applied Health Economics (AHE)
Angela R Wateska1, Mary Patricia Nowalk1, Chyongchiou J Lin2, Richard K Zimmerman1, Kenneth J Smith1
1University of Pittsburgh
2The Ohio State University College of Nursing, Columbus, OH
Purpose: The 15- and 20-valent pneumococcal conjugate vaccines (PCV15/PCV20) were recently recommended for US adults, giving either PCV20 alone or PCV15 followed by 23-valent pneumococcal polysaccharide vaccine (PPSV23) 1 year later to all adults aged 65+ years and to high-risk younger adults. However, general population recommendations to give pneumococcal vaccine to all 50-year-olds, rather than just those at high risk, could reduce racial disparities resulting from greater pneumococcal disease risk in underserved minority populations.
Methods: A Markov model examining hypothetical 50-year-old Black cohorts (serving as a proxy for underserved minorities) and non-Black cohorts estimated the incremental cost effectiveness of US general population pneumococcal vaccination recommendations (with risk-based vaccination of adults aged <65 years) compared to PCV20 or PCV15/PPSV23 for all 50-year-olds with no vaccination thereafter and to PCV20 or PCV15/PPSV23 for all at ages 50 and 65 years. Model parameters were obtained from US databases, clinical trials, and Delphi panels. Cohorts were followed over their lifetimes from a healthcare perspective, with costs and effectiveness discounted at 3%/year.
Results: PCV15/PPSV23 given at ages 50/65 had the greatest public health impact. In Black cohorts, PCV15/PPSV23 use in the general population at age 50 cost $104,723/quality adjusted life year (QALY) gained compared to PCV20 at age 50; PCV15/PPSV23 at 50/65 years cost $240,952/QALY gained compared to PCV15/PPSV23 at age 50. Either current recommendation option was more expensive and less effective than other strategies in both Black and non-Black cohorts. In sensitivity analyses, age-based PCV20 or PCV15/PPSV23 use at ages 50 or 50/65 could be favored depending on vaccine effectiveness or differential vaccine uptake, while current recommendations remained unfavorable in all scenarios. If absolute vaccine uptake increases by ≥10% with the less complex PCV20 strategies, PCV20 at 50/65 becomes more economically favorable in Black cohorts. Otherwise, individual variation of model parameters had little influence on strategy favorability. In probabilistic sensitivity analyses, current recommendation strategies were never favored at acceptability thresholds >$40,000/QALY gained.
Conclusions: Recent risk-based US adult pneumococcal vaccination recommendations for adults <65 years old were economically and clinically unfavorable compared to vaccination of all 50-year-olds in both Black and non-Black cohorts. Age-based pneumococcal vaccination of the general population at age 50 years could disproportionately benefit higher risk populations and reduce racial inequities in pneumococcal disease burden.
Keywords: Pneumococcal vaccination, racial disparities, cost-effectiveness analysis
Probabilistic sensitivity analysis - Black cohort
Effectiveness, Benefit Harm and Cost Effectiveness of an Organized Colorectal Cancer Screening in Austria – A Decision-Analytic Study
PP-005 Applied Health Economics (AHE)
Beate Jahn1, Gaby Sroczynski3, Júlia Santamaria3, Ursula Rochau3, Silke Siebert3, Nikolai Mühlberger3, Uwe Siebert2
1UMIT - University for Health Sciences, Medical Informatics and Technology, Department of Public Health, Health Services Research and Health Technology Assessment, Institute of Public Health, Medical Decision Making and Health Technology Assessment, Hall i.T., Austria, DEXHELPP, Association for Decision Support Health Policy and Planning, Vienna, Austria
2UMIT - University for Health Sciences, Medical Informatics and Technology, Institute of Public Health, Medical Decision Making and HTA/ONCOTYROL - Center for Personalized Medicine, Austria; Harvard T.H. Chan School of Public Health, Dept. Health Policy & Management, Boston, MA, USA
3UMIT - University for Health Sciences, Medical Informatics and Technology, Department of Public Health, Health Services Research and Health Technology Assessment, Institute of Public Health, Medical Decision Making and Health Technology Assessment, Hall i.T., Austria
Purpose: Population-wide screening for colorectal cancer (CRC) can effectively reduce the incidence of CRC and related mortality. This study commissioned by the National Committee for Cancer Screening in Austria systematically evaluates the long-term benefits, harms, benefit-harm balance, and cost effectiveness of various CRC-screening strategies compared to no screening for women and men with an average CRC risk in Austria.
Methods: We developed and validated a Markov-state-transition model to evaluate 17 different CRC screening strategies differing in screening tests (fecal immunochemical test (FIT), guaiac-based fecal occult blood test (gFOBT), colonoscopy (COL)), age at start (40, 45, 50 years) and end (COL: 65, 70, 75 years; FIT/gFOBT: 75 years), and screening interval (FIT/gFOBT: annual/biennial; COL: every 10 years). Austrian clinical and epidemiological data including screening-registry data were combined with test accuracy data from meta-analyses and international clinical trials. Evaluated outcomes included benefits (life-years gained (LYG), CRC-cases avoided, CRC-deaths avoided), harms (additional COL, severe complications of colonoscopy, psychological harms due to positive test results), incremental harm-benefit ratios (IHBR), and incremental cost-effectiveness ratios (ICER). We adopted the perspective of the Austrian public health care system and a lifelong time horizon using a 3% annual discount rate. Parameter uncertainty was assessed in comprehensive sensitivity analyses.
Results: The most effective colonoscopy-based screening strategy is colonoscopy at 40/50/60/70 years with an ICER of 10,249 EUR/LYG compared to COL 45/55/65/75 (IHBR: 24 COL/LYG compared to COL 45/55/65). Moving to biennial FIT screening starting at 40 years yielded an ICER of 19,812 EUR/LYG compared to colonoscopy at 40/50/60/70 (IHBR: 8 COL/LYG compared to FIT 45 biennial). Shortening the interval of FIT starting at age 40 from biennial to annual screening results in an ICER of 37,750 EUR/LYG (IHBR 34 COL/LYG). gFOBT-based strategies are less effective than the respective FIT-based strategies with the same age cutoffs and intervals of screening. All examined screening strategies result in higher benefits compared to no screening, which was dominated in the cost-effectiveness analysis.
Conclusions: Our study shows that organized CRC-screening programs with biennial FIT starting at 40 years and colonoscopy-based screening at 40/50/60/70 are cost effective at a willingness-to-pay ratio of 20,000 EUR/LYG. Comparing colonoscopy with blood stool tests may be less relevant in practice, as the decision between FIT- and colonoscopy-based screening should be based on screenee acceptability and adherence rather than cost effectiveness.
Keywords: colorectal cancer, screening, decision analysis, modeling, benefit-harm analysis, cost-effectiveness analysis
Clinical outcomes and cost-effectiveness of colorectal cancer screening among childhood cancer survivors treated with abdominal-pelvic radiation
PP-006 Applied Health Economics (AHE)
Claudia L Seguin1, Jillian Whitton2, Wendy Leisenring2, Gregory T Armstrong3, Tara O Henderson4, Melissa M Hudson3, Paul C Nathan5, Joseph P Neglia6, Kevin C Oeffinger7, Lisa R Diller8, Amy B Knudsen9, Jennifer M Yeh10
1Institute for Technology Assessment, Massachusetts General Hospital, Boston, MA
2Fred Hutchinson Cancer Research Center, Seattle, WA
3St. Jude Children's Research Hospital, Memphis, TN
4University of Chicago Medicine
5The Hospital for Sick Children Toronto, ON Canada
6Department of Pediatrics, University of Minnesota Medical School
7Duke Cancer Institute, Durham, NC
8Dana-Farber/Boston Children's Cancer and Blood Disorders Center, Boston, MA & Harvard Medical School, Boston, MA
9Institute for Technology Assessment, Massachusetts General Hospital, Boston, MA & Harvard Medical School, Boston, MA
10Boston Children’s Hospital, Boston, MA & Harvard Medical School, Boston, MA
Purpose: Childhood cancer survivors treated with abdominal-pelvic radiation are at increased risk for colorectal cancer (CRC). The Children’s Oncology Group (COG) recommends early initiation of CRC screening at age 30 with colonoscopy every 5 years, multitarget stool DNA [mtsDNA] testing every 3 years, or fecal immunochemical testing [FIT] every year. However, the benefits, costs, and cost-effectiveness of screening strategies among this patient population are unknown.
Methods: We used data from the Childhood Cancer Survivor Study to modify the SimCRC model from CISNET to reflect high CRC and competing mortality risks among survivors. Strategies evaluated varied by modality (no screening, colonoscopy, mtsDNA, FIT), screening start age (25, 30, 35, 40, 45), and screening interval (3, 5, or 10y for colonoscopy; 1, 2, or 3y for mtsDNA and FIT). Analyses assumed complete uptake and adherence to all screening and follow-up procedures. Utility weights and costs (2020 USD), including those from the health-care and patient perspectives, were derived from publicly available Centers for Medicare & Medicaid Services resources and the published literature. We calculated incremental cost-effectiveness ratios (ICERs) and identified the cost-effective strategy at a willingness-to-pay threshold of $150K per quality-adjusted life-year gained (QALYG). Our primary analysis evaluated the cost-effective strategy when all strategies were compared against each other. Because the COG currently recommends 3 options for screening, each with a different screening modality, we also identified the cost-effective strategy within each modality. To more fully capture uncertainty in CRC risk in survivors of childhood cancer, we conducted the analyses with 100 calibrated natural history parameter sets and reported mean outcomes.
Results: QALYG from screening ranged from 226 to 326 per 1000 25-year-olds. Compared to no screening, nearly all strategies were cost-saving. Annual FIT starting at age 25 was the cost-effective strategy among all those evaluated and among just the FIT strategies. Among the colonoscopy strategies, 10-yearly screening starting at age 30 was cost-effective; among the mtsDNA strategies, triennial screening starting at age 30 was cost-effective. If the risk of CRC were 11% lower than the base case, biennial FIT starting at age 25 would be cost-effective.
Conclusions: Early initiation of screening, as recommended by the COG, may substantially reduce CRC mortality among high-risk childhood cancer survivors and is cost-effective. Decision-makers can use this information to guide screening recommendations for childhood cancer survivors.
Keywords: Childhood cancer, Survivorship, Colorectal cancer, Screening
Table.
Health system perspective on cost for delivering a decision aid for prostate cancer using Time Driven Activity Based Costing
PP-008 Applied Health Economics (AHE)
David Ryan Ho1, Jefersson Villatoro1, Kristen Williams1, Lorna Kwan1, Jonathan Bergman1, David Penson2, Robert Kaplan3, Christopher Saigal1
1Department of Urology, David Geffen School of Medicine at UCLA, Los Angeles, USA
2Department of Urology, Vanderbilt University Medical Center, Nashville, USA
3Department of Accounting and Management, Harvard Business School, Boston, USA
Purpose: Pre-visit decision aids (DAs) have shown promising outcomes in decisional quality; however, the cost to deploy a DA is not well defined, representing a possible barrier to widespread clinical adoption. We aimed to define the health system cost of DA delivery for prostate cancer management across three diverse hospitals. We hypothesized that, compared to the costs of prostate cancer treatments, the measured cost of DA delivery is low, and that DA integration into the electronic health record (EHR) would yield cost efficiencies.
Methods: We performed an observational analysis of an implemented shared decision making (SDM) program targeting men with localized prostate cancer at two academic medical centers (UCLA and Vanderbilt) and an academically affiliated safety-net county hospital (Olive View-UCLA). We interviewed faculty, administrators, and clinical staff to create detailed process maps for DA delivery. Cost determination was performed using time-driven activity-based costing (TDABC). Further economic analyses were performed to calculate efficiency, price, joint, and quantity variances. For variance calculations, we compared the UCLA and Vanderbilt sites and defined UCLA as our benchmark site due to its lowest observed process time and average cost of DA delivery. The efficiency variance represents the cost savings if workflow practices at institution "A" were modeled after a benchmark institution "B" and serves as a quantitative justification for iterative process refinement.
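A minimal sketch of the two calculations described above, with hypothetical step names, times, and capacity cost rates (none of these figures come from the study): TDABC prices one delivered DA as the sum of step time multiplied by the staff capacity cost rate, and the price/quantity/joint variances follow the standard cost-accounting identities.

```python
# Hypothetical process steps for delivering one DA: (step, minutes, $/minute).
# Step names, times, and rates are illustrative, not the study's data.
steps = [
    ("identify eligible patient", 3.0, 0.80),
    ("order and send DA link",    2.0, 0.80),
    ("patient support call",      5.0, 1.50),
]

def tdabc_cost(steps):
    """TDABC cost of one delivered DA: sum of time x capacity cost rate."""
    return sum(minutes * rate for _, minutes, rate in steps)

def variances(p_actual, q_actual, p_bench, q_bench):
    """Standard price/quantity/joint variance decomposition against a
    benchmark site (p = resource price, q = quantity of resource used)."""
    price = (p_actual - p_bench) * q_bench
    quantity = (q_actual - q_bench) * p_bench
    joint = (p_actual - p_bench) * (q_actual - q_bench)
    total = p_actual * q_actual - p_bench * q_bench  # = price + quantity + joint
    return price, quantity, joint, total
```

The decomposition makes explicit how much of a cost gap between sites comes from dearer resources versus more time spent, which mirrors the benchmark comparison the abstract performs between UCLA and Vanderbilt.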
Results: Process mapping resulted in a range of 5-7 procedural steps across institutions, with total process times (minutes) of 10.14 (UCLA), 68 (Olive View-UCLA), and 25 (Vanderbilt). Total average costs (USD) per patient for a delivered DA were $38.32 (UCLA), $49.09 (Olive View-UCLA), and $42.38 (Vanderbilt). Between UCLA and Vanderbilt, the following economic cost variances were calculated: price variance, -$21.21; joint variance, -$31.07; quantity variance, $55.25; and efficiency variance, $24.18. EHR integration of the DA at UCLA explained much of this variance.
Conclusions: TDABC is an effective approach to determine true inclusive costs of service delivery while also elucidating process inefficiencies and opportunities for cost containment. The absolute cost of delivering a DA to men with prostate cancer in various settings is much lower than the system costs of the treatments they consider. EHR integration streamlines DA delivery efficiency and results in substantial cost savings.
Keywords: cost analysis, value based care, decision aid delivery, health economics, time driven activity based costing
Health Economic Evaluations of Brachytherapy in Patients With Cervical Cancer: A Methodological Systematic Review
PP-009 Applied Health Economics (AHE)
Erica Aranha Suzumura1, Fernando Maia1, Heloisa de Andrade Carvalho2, Grazielle de Oliveira Diniz1, Agnes Sardinha1, Beate Jahn3, Uwe Siebert4, Patricia Coelho de Soárez1
1Departamento de Medicina Preventiva, Faculdade de Medicina FMUSP, Universidade de Sao Paulo, Sao Paulo, Brazil
2Departamento de Radiologia e Oncologia, Divisao de Radioterapia, Faculdade de Medicina FMUSP, Universidade de Sao Paulo, Sao Paulo, Brazil
3Institute of Public Health, Medical Decision Making and Health Technology Assessment; Department of Public Health, Health Services Research and Health Technology Assessment, UMIT - University for Health Sciences, Medical Informatics and Technology, Hall in Tirol, Austria
4Institute of Public Health, Medical Decision Making and Health Technology Assessment; Department of Public Health, Health Services Research and Health Technology Assessment, UMIT - University for Health Sciences, Medical Informatics and Technology, Hall in Tirol, Austria. Division of Health Technology Assessment and Bioinformatics, ONCOTYROL - Center for Personalized Cancer Medicine, Innsbruck, Austria. Center for Health Decision Science, Departments of Epidemiology and Health Policy & Management, Harvard T.H. Chan School of Public Health, Boston, MA, USA
Purpose: The purpose of this study is to provide information on: 1) the costing methodology and main cost components used in economic evaluations of brachytherapy (BT) in patients with cervical cancer, and 2) the model characteristics and main results of full economic evaluations comparing three-dimensional (3D) BT with conventional (2D) BT in that population, to support the development of a cost-effectiveness analysis of 3D BT versus 2D BT from the perspective of the Brazilian Unified Health System.
Methods: We searched 18 databases for partial economic evaluations reporting 3D BT and/or 2D BT among the resources used to treat patients with cervical cancer, and for full economic evaluations comparing 3D BT with 2D BT. No language or publication date restrictions were applied. Study selection, appraisal, and data extraction were performed independently by two reviewers. Cost estimates were converted to 2022 US dollars (US$) using Purchasing Power Parities. We used systematic evidence tables with narrative summaries to present the results.
Results: The search yielded 1907 publications, of which 11 were included, representing two full economic evaluations with cost-utility analyses and nine partial evaluations. Eight studies were conducted from the payer's perspective and three from the provider's. The mean cost of the 3D BT strategy varied from US$ 3754 to US$ 31,305 and of 2D BT from US$ 467 to US$ 20,172. Differences in costs were driven by variations in costing methodologies and cost components. Costs of equipment acquisition were reported by two studies and personnel costs by eight. The two full economic evaluations applied Markov state-transition models, with simplifications of the disease course, particularly regarding treatment harms. There were differences in Markov cycle lengths, analytic time horizons, and health states considered. The transition probabilities were based on non-systematically reviewed literature and expert opinion. In both modeling studies, 3D BT was dominant or cost-effective.
Conclusions: The economic evaluations included different resources when accounting for costs, resulting in substantially different mean BT costs. The 3D BT strategy was cost-effective in both of the identified full economic evaluations; however, there is considerable parameter and structural uncertainty in these models.
Keywords: brachytherapy, cervical cancer, uterine cervical neoplasms, systematic review, economic evaluation
Economic Analysis of 15-valent and 20-valent Pneumococcal Conjugate Vaccines among Older Adults in Ontario, Canada
PP-010 Applied Health Economics (AHE)
Gebremedhin Beedemariam Gebretekle1, Ryan O'Reilly1, Stephen Mac1, Shaza Fadel2, Natasha Crowcroft3, Beate Sander4
1Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment (THETA) Collaborative, University Health Network, Toronto, Canada.
2Dalla Lana School of Public Health, University of Toronto, Toronto, Canada.
3World Health Organization, Geneva, Switzerland
4Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment (THETA) Collaborative, University Health Network, Toronto, Canada; ICES, Toronto, Ontario, Canada; Public Health Ontario, Toronto, Ontario, Canada.
Purpose: Despite the availability of publicly funded 23-valent polysaccharide pneumococcal vaccine (PPV23), pneumococcal disease remains a public health concern among adults aged ≥65 years in Ontario, Canada. Novel pneumococcal conjugate vaccines (PCV), 15-valent (PCV15) and 20-valent (PCV20), are expected to reduce disease burden, but their economic implication is unknown. We aimed to evaluate the cost-effectiveness of PCV15 and PCV20 in this population.
Methods: We conducted a cost-utility analysis using an individual-level state transition model to compare one dose of PCV, alone or in series with PPV23, to the current routine program (PPV23-alone). Our model predicts lifetime health outcomes and costs from Ontario's healthcare payer perspective, discounted at 1.5% annually. Primary outcomes were expected quality-adjusted life years (QALYs), costs (C$ 2021), and the incremental cost-effectiveness ratio (ICER) expressed in C$/QALY gained. Parameter values were obtained from Ontario health administrative and surveillance data, supplemented by the literature. PCV15 and PCV20 vaccines are expected to cost C$98 and C$112, respectively. Cost-effectiveness was assessed against a threshold of C$50,000/QALY. Deterministic sensitivity analysis was performed to assess parameter uncertainty, and alternative scenarios were examined.
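The ICER reported in the Results is simply the incremental cost divided by the incremental QALYs. A small sketch using the rounded increments quoted for PCV15-alone (C$108 and 0.0018 QALYs); the abstract's C$61,057/QALY presumably reflects unrounded values.

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio (C$/QALY gained)."""
    return delta_cost / delta_qaly

# Rounded increments reported for PCV15-alone vs. PPSV23-alone:
approx = icer(108, 0.0018)          # ~ C$60,000/QALY with these rounded inputs
cost_effective = approx <= 50_000   # against the C$50,000/QALY threshold
```

Either way the ratio sits above the C$50,000/QALY threshold, consistent with the base-case conclusion that PCV15-alone is not cost-effective.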
Results: In the base case analysis (considering no indirect effects from potential childhood vaccination with PCV15/20), vaccination of older adults with PCV15-alone was associated with an incremental benefit of 0.0018 QALYs and an incremental cost of C$108 compared to PPV23-alone, resulting in an ICER of C$61,057/QALY. PCV20-alone resulted in an ICER of C$62,024/QALY, with an incremental cost of C$122 and an incremental gain of 0.0020 QALYs, compared to PPV23-alone. Adding PCV15 or PCV20 to PPV23 cost >C$67,000/QALY compared to PPV23-alone. In the scenario analysis (with indirect effects from potential childhood vaccination with PCV15/20 and no vaccine effectiveness against serotype 3), PCV15-alone was dominated by PCV20, while PCV20-alone resulted in a higher ICER (C$107,291/QALY) compared to PPV23. In the sensitivity analysis, PCV15 or PCV20 appeared cost-effective compared to PPV23 at a cost-effectiveness threshold of C$50,000/QALY when the vaccine price decreased to ≤C$75, the proportion of community-acquired pneumonia attributed to S. pneumoniae increased to 28%, or PCV15/20 provided up to 15 years of full protection.
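The reported ratios follow the standard ICER definition (incremental cost divided by incremental QALYs, judged against a willingness-to-pay threshold). A minimal sketch of that arithmetic, using the rounded increments reported above (the published ICERs differ slightly because they come from full-precision model outputs):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    if delta_qaly <= 0:
        raise ValueError("comparator is dominant or equivalent")
    return delta_cost / delta_qaly

def cost_effective(delta_cost, delta_qaly, threshold=50_000):
    """True if the ICER falls below the willingness-to-pay threshold."""
    return icer(delta_cost, delta_qaly) < threshold

# Rounded increments for PCV15-alone vs. PPV23-alone from the abstract
print(round(icer(108, 0.0018)))      # 60000 (full-precision inputs yield 61,057)
print(cost_effective(108, 0.0018))   # False at the C$50,000/QALY threshold
```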
Conclusions: Vaccinating older adults with PCV15 or PCV20 does not appear to be cost-effective at a threshold of C$50,000/QALY, especially over the longer term when herd immunity effects resulting from childhood vaccination programs are considered.
Keywords: Pneumococcal vaccine, conjugate vaccines, older adults, PCV15, PCV20, PPV23
Cost-Effectiveness Analysis of Caspofungin and Fluconazole for Primary Treatment of Invasive Candidiasis and Candidemia in Ethiopia
PP-011 Applied Health Economics (AHE)
Gebremedhin Beedemariam Gebretekle1, Atalay Mulu Fentie2, Girma Tekle Gebremariam2, Eskinder Eshetu Ali2, Daniel Asfaw Erku3, Tinsae Alemayehu4, Workeabeba Abebe5, Beate Sander6
1Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment (THETA) Collaborative, University Health Network, Toronto, Canada.
2School of Pharmacy, Addis Ababa University, Addis Ababa, Ethiopia
3University Centre for Applied Health Economics, School of Medicine & Menzies Health Institute Queensland, Griffith University, Griffith, Australia
4St. Paul's Hospital and Millennium Medical College, Addis Ababa, Ethiopia; American Medical Center, Specialty Center for Infectious Diseases and Travel Medicine, Addis Ababa, Ethiopia
5School of Medicine, Addis Ababa University, Addis Ababa, Ethiopia
6Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment (THETA) Collaborative, University Health Network, Toronto, Canada; ICES, Toronto, Ontario, Canada; Public Health Ontario, Toronto, Ontario, Canada.
Purpose: This study aimed to assess the cost-effectiveness of caspofungin compared to fluconazole as primary treatment of patients with invasive candidiasis and/or candidaemia (IC/C) in Ethiopia.
Methods: A Markov cohort model was developed to compare the cost-utility of caspofungin versus fluconazole as first-line therapy for adult inpatients with IC/C from the Ethiopian health system perspective. Treatment outcome was categorized as either a clinical success or failure, with patients experiencing clinical failure switched to a different antifungal medication. Liposomal amphotericin B (L-AmB) was used as a rescue agent for patients who had failed caspofungin treatment, while L-AmB or caspofungin was used for patients who had failed fluconazole treatment. Primary outcomes were expected quality-adjusted life years (QALYs), costs (US$ 2021), and the incremental cost-effectiveness ratio (ICER) expressed in US$ per QALY gained. QALYs and costs were discounted at 3% annually. Cost data were extracted from hospitals in Addis Ababa, while locally unavailable data were derived from the literature. Cost-effectiveness was assessed against the recommended threshold of 50% of a country’s GDP/capita for low- and middle-income countries. Deterministic and probabilistic sensitivity analyses were conducted to assess the robustness of our findings.
Results: Treatment of IC/C with caspofungin as first-line therapy was more costly (US$ 7,714) and more effective (12.86 QALYs) compared to fluconazole-initiated therapy followed by caspofungin (US$ 3,217; 12.30 QALYs) or L-AmB as second-line medications (US$ 2,781; 10.92 QALYs). In the base-case analysis, caspofungin as the primary treatment for IC/C was not cost-effective when compared to fluconazole-initiated treatments. Fluconazole-initiated therapy followed by caspofungin as second-line medication was cost-effective for the treatment of IC/C compared to fluconazole with L-AmB as second-line treatment, at US$ 316/QALY gained. Our findings were sensitive to medication costs, drug efficacy, infection recurrence, and infection-related mortality rates. Probabilistic sensitivity analysis confirmed the stability of our findings.
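A Markov cohort evaluation of this kind advances a state-occupancy vector through a transition matrix each cycle. The sketch below is schematic only: the states echo the treatment pathway described (first-line therapy, switch on failure, cure, death), but every probability is a hypothetical placeholder, not a study input.

```python
# Schematic Markov cohort trace; all probabilities are illustrative.
STATES = ["first_line", "second_line", "success", "dead"]
P = [  # row = from-state, column = to-state; each row sums to 1
    [0.60, 0.20, 0.15, 0.05],   # first-line: stay, fail->switch, cure, die
    [0.00, 0.65, 0.25, 0.10],   # switched to rescue therapy
    [0.00, 0.00, 0.98, 0.02],   # cured (background mortality only)
    [0.00, 0.00, 0.00, 1.00],   # death is absorbing
]

def step(dist):
    """One cycle: redistribute cohort fractions through the matrix."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0, 0.0]   # whole cohort starts on first-line therapy
for _ in range(10):
    dist = step(dist)
print([round(x, 3) for x in dist])
```

Expected costs and QALYs follow by weighting each state's occupancy by its per-cycle cost and utility.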
Conclusions: Our study showed that the use of caspofungin as primary treatment for IC/C in Ethiopia was not cost-effective when compared with fluconazole-initiated treatment alternatives.
Keywords: Candidaemia, Caspofungin, Echinocandin, Fluconazole, Invasive Candidiasis, Ethiopia
Rising Costs of Alcohol-attributable Liver Disease in the United States: A Modeling Study
PP-013 Applied Health Economics (AHE)
Jovan Julien1, Turgay Ayer2, Jagpreet Chhatwal3
1Department of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA; Institute for Technology Assessment, Massachusetts General Hospital, Boston, MA
2Department of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA
3Institute for Technology Assessment, Massachusetts General Hospital, Boston, MA; Harvard Medical School, Boston, MA; Liver Center and Gastrointestinal Division, Massachusetts General Hospital, Boston, MA
Purpose: In recent years, the proportion of individuals with alcohol-attributable cirrhosis has increased because of changing alcohol consumption patterns and population aging. Our objective was to project healthcare costs and labor losses related to alcohol-attributable liver disease.
Methods: We extended a previously developed individual-level state-transition model that simulated the natural history of alcohol-attributable liver disease in US adults, accounting for current age- and sex-based drinking rates as collected by the National Epidemiologic Survey on Alcohol and Related Conditions-III (NESARC-III) and fibrosis progression rates for drinkers of varying levels from published studies. Unknown progression probabilities were calibrated to reproduce observed trends in initiation of alcohol consumption and deaths from cirrhosis associated with ALD in the US from 2012 to 2018. The model accounted for competing causes of mortality, both related and unrelated to alcohol use, and for increased alcohol consumption during the COVID-19 pandemic. We included the direct healthcare costs of the health states in the model and the indirect costs of early mortality to society.
Results: We estimated annual alcohol-attributable liver-related costs in 2022 to be $34 billion, of which direct healthcare spending accounts for $15 billion and indirect costs of lost labor and economic consumption account for the remaining $19 billion. By 2040, these costs are projected to increase by 153% to $86 billion, of which $42 billion would be direct healthcare costs and $44 billion would be indirect costs (Figure). Over the next 18 years (i.e., between 2022–2040), the direct and indirect costs of alcohol-attributable liver disease will sap the US economy of over $1 trillion. When considering total expenditures, costs for the female subpopulation are converging toward those for males, largely as a result of converging drinking rates between females and males.
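The headline figures are internally consistent and easy to check; a back-of-the-envelope sketch, assuming (purely for illustration) linear growth in annual costs between the two reported years:

```python
cost_2022, cost_2040 = 34.0, 86.0   # reported annual costs, $ billions

pct_increase = (cost_2040 - cost_2022) / cost_2022 * 100
print(round(pct_increase))          # 153, matching the reported increase

# Cumulative 2022-2040 under a simple linear interpolation; the study's
# actual trajectory comes from the simulation, not from this line.
years = range(2022, 2041)
annual = [cost_2022 + (cost_2040 - cost_2022) * (y - 2022) / 18 for y in years]
print(round(sum(annual)))           # 1140 -> consistent with "over $1 trillion"
```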
Conclusions: Our study considers the significant costs associated with alcohol-attributable liver disease faced by the US population and healthcare system in years to come. Our findings highlight the significant economic and human costs of current alcohol consumption patterns and the need for policy and cultural shifts to address the growing economic burden.
Keywords: alcohol, liver disease, costs
Cumulative and annual costs of alcohol-attributable liver disease
Liver disease costs are categorized by type of cost (direct healthcare or indirect costs) and by sex of patient (male or female).
Cost-effectiveness of blood pressure targets for patients at high risk for cardiovascular disease
PP-014 Applied Health Economics (AHE)
Karen C. Smith1, Thomas A. Gaziano2, Ankur Pandya3
1Harvard PhD Program in Health Policy, Cambridge, MA, USA; Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, USA
2Division of Cardiovascular Medicine, Brigham and Women's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, USA
3Department of Health Policy and Management, Harvard T.H. Chan School of Public Health, Boston, MA, USA; Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, USA
Purpose: 16.8 million United States adults with hypertension are at high risk for cardiovascular disease, yet there is little consensus around the optimal target systolic blood pressure (SBP) for these patients. The Systolic Blood Pressure Intervention Trial (SPRINT) found that a <120 mmHg target significantly lowered rates of cardiovascular events and all-cause mortality compared to a <140 mmHg target. Following SPRINT, the American College of Cardiology and American Heart Association guidelines recommended a <130 mmHg target. Cost-effectiveness analyses have found that the SPRINT <120 mmHg target is cost-effective relative to the <140 mmHg target. However, the cost-effectiveness of the guideline-recommended <130 mmHg target is unclear.
Methods: We developed a microsimulation model to project lifetime costs and quality-adjusted life-years (QALYs) for patients at high risk for cardiovascular disease under <120, <130, and <140 mmHg targets. We simulated a trial setting: simulated individuals were sampled with replacement from SPRINT participants, and we assumed trial-level treatment adherence and SBP measurement. We estimated model parameters from published literature and individual-level SPRINT data. Based on SPRINT SBP distributions, the mean (standard deviation) SBP in each target was 123 (8), 129 (8), and 135 (8) mmHg respectively. We modeled the effect of SBP changes on cardiovascular events using a meta-analysis of 48 SBP-lowering trials. We used a healthcare sector perspective and included the cost of office visits, medications, cardiovascular events, adverse events, and non-cardiovascular healthcare (2021 USD). We discounted costs and QALYs at 3% annually and assumed a $100,000/QALY willingness-to-pay (WTP) threshold.
Results: The <140 mmHg target had the lowest lifetime costs and QALYs ($197,900; 13.19 QALYs), followed by the <130 mmHg target ($201,000; 13.26 QALYs) and the <120 mmHg target ($204,000; 13.32 QALYs). The <130 mmHg target had an incremental cost-effectiveness ratio (ICER) of $41,800/QALY compared to the <140 mmHg target. The <120 mmHg target was the cost-effective target with an ICER of $48,600/QALY compared to the <130 mmHg target. In probabilistic sensitivity analyses, the <120, <130, and <140 mmHg targets were cost-effective in 95%, 5%, and 0% of simulations at a WTP of $100,000/QALY (Figure 1).
Figure 1.
Cost-effectiveness acceptability curves.
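Acceptability percentages like these are typically computed by asking, in each probabilistic sensitivity analysis draw, which strategy has the highest net monetary benefit (NMB = QALYs × WTP − cost). A generic sketch with simulated draws; the means echo the base-case values above, but the distributions are invented for illustration and are not the study's parameter distributions.

```python
import random

def ceac_share(draws, wtp):
    """Fraction of PSA draws in which each strategy has the highest
    net monetary benefit (NMB = QALYs * WTP - cost)."""
    wins = [0] * len(draws[0])
    for draw in draws:            # draw = [(cost, qalys), ...] per strategy
        nmb = [q * wtp - c for c, q in draw]
        wins[nmb.index(max(nmb))] += 1
    return [w / len(draws) for w in wins]

random.seed(0)
# Hypothetical draws around the base-case means for the three targets
# (<140, <130, <120 mmHg); the spreads are invented for illustration.
draws = [[(random.gauss(197_900, 3_000), random.gauss(13.19, 0.05)),
          (random.gauss(201_000, 3_000), random.gauss(13.26, 0.05)),
          (random.gauss(204_000, 3_000), random.gauss(13.32, 0.05))]
         for _ in range(1_000)]
shares = ceac_share(draws, wtp=100_000)
print([round(s, 2) for s in shares])   # shares sum to 1 across strategies
```

Sweeping the WTP value and re-running `ceac_share` traces out the acceptability curves in Figure 1.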
Conclusions: In a trial setting, a <120 mmHg target is cost-effective compared to <130 mmHg and <140 mmHg targets. The cost-effectiveness of these targets outside of clinical trial settings should also be evaluated.
Keywords: hypertension, cost-effectiveness, cardiovascular disease
Effectiveness of One-Time Screening for Chronic Kidney Disease among U.S. Adults
PP-015 Applied Health Economics (AHE)
Marika Mae Cusick1, Rebecca Tisdale2, Glenn Chertow3, Doug Owens1, Jeremy Goldhaber-Fiebert1
1Department of Health Policy, Stanford University School of Medicine, Stanford, CA, USA
2Veteran's Affairs Palo Alto Healthcare System, Center for Innovation to Implementation (Ci2i), Menlo Park, CA
3Division of Nephrology, Department of Medicine, Stanford University School of Medicine, Stanford, CA, USA
Purpose: To assess the effectiveness of one-time chronic kidney disease (CKD) screening for U.S. adults.
Methods: We developed a decision-analytic Markov model of CKD progression among U.S. adults. We used Bayesian calibration (Sample-Importance-Resampling) to fit our model to empirical data from the National Health and Nutrition Examination Survey (NHANES). Using 2013-2018 NHANES, we estimated age-specific joint distributions of stage-specific CKD prevalence, diagnosis, and treatment for people aged 30 to 70+ years. We initialized the model with estimates for 30-39-year-olds, with the remaining age-specific distributions as calibration targets (188 targets in total). We assessed concordance between the calibrated model and the targets via overlap in 95% confidence intervals (CIs). Using the calibrated CKD progression model, we assessed the effectiveness of one-time screen-and-treat interventions for those currently age 35 if delivered now or at future time points under two treatment scenarios. Under scenario 1, we assumed detected patients initiate conventional CKD therapy (Angiotensin-converting enzyme (ACE)-inhibitors/Angiotensin II receptor blockers (ARBs)), which slows CKD progression without direct survival benefits. Under scenario 2, we assumed addition of sodium-glucose cotransporter-2 (SGLT-2) inhibitors, a treatment recently identified as effective in slowing CKD progression and directly enhancing survival. Outcomes included changes in life expectancy, cumulative incidence of kidney failure (KF) on kidney replacement therapy (KRT), and CKD diagnosis rates.
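Sample-Importance-Resampling, as used here, draws parameter sets from a prior, weights each by its fit to the calibration targets, and resamples proportionally to those weights. A schematic one-parameter toy (a single progression probability calibrated to a prevalence target), not the CKD model itself:

```python
import math
import random

def sir_calibrate(prior_sample, likelihood, n_draws=5_000, n_keep=500, seed=1):
    """Sample-Importance-Resampling: prior draws -> likelihood weights
    -> resample proportionally to the weights."""
    rng = random.Random(seed)
    draws = [prior_sample(rng) for _ in range(n_draws)]
    weights = [likelihood(d) for d in draws]
    total = sum(weights)
    return rng.choices(draws, weights=[w / total for w in weights], k=n_keep)

# Toy target: an annual progression probability whose implied 10-year
# prevalence matches 0.30 (Gaussian tolerance of 0.02 around the target).
def predicted_prevalence(p_annual):
    return 1 - (1 - p_annual) ** 10

posterior = sir_calibrate(
    prior_sample=lambda rng: rng.uniform(0.0, 0.2),
    likelihood=lambda p: math.exp(
        -((predicted_prevalence(p) - 0.30) ** 2) / (2 * 0.02 ** 2)),
)
mean_p = sum(posterior) / len(posterior)
print(round(mean_p, 3))   # close to 1 - 0.70 ** 0.1 ≈ 0.035
```

The study applies the same idea across 188 targets jointly, with the Markov model in place of `predicted_prevalence`.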
Results: The calibrated model had 98.4% overlap between the 95% CIs of its projections and the NHANES targets. Screening increased CKD diagnosis by over 20 percentage points. Use of ACE/ARBs (scenario 1) in people identified with CKD produced gains in cohort life expectancy and absolute reductions in cumulative incidence of KF on KRT by at most 0.1 years and 0.4%. Addition of SGLT-2 inhibitors to ACE/ARBs (scenario 2) for those testing positive was most beneficial when conducted once the cohort reached ages 45-55, with life expectancy gains of 0.5 years and absolute reductions in cumulative incidence of KF on KRT of 0.5%. Given imperfect screening tests (false positive rate: 21%) and treatment discontinuation, intervention timing is critical for increasing longevity and preventing severe CKD.
Conclusions: Our CKD progression model matches population-representative U.S. data including undiagnosed CKD prevalence. One-time screen-and-treat interventions appear effective in improving diagnosis and population health, with greater benefit after adding SGLT-2 inhibitors.
Keywords: chronic kidney disease, cost-effectiveness analysis, health economics
Figure 1.
Increase in life expectancy (compared to status quo)
*Scenario 1 screen-and-treat interventions are compared against scenario 1 status quo. Scenario 2 screen-and-treat interventions are compared against scenario 2 status quo.
Using a state-transition model to project life expectancy and lifetime liver-related health costs of untreated perinatally-acquired HCV
PP-016 Applied Health Economics (AHE)
Megan Rose Curtis1, Ben Linas2, Andrea Ciaranello3
1Medical Practice Evaluation Center, Massachusetts General Hospital, Boston, USA; Division of Infectious Disease, Brigham and Women’s Hospital, Harvard Medical School, Boston, USA
2Boston Medical Center, Boston, MA, USA; Boston University Schools of Medicine and Epidemiology, Boston, USA
3Medical Practice Evaluation Center, Massachusetts General Hospital, Boston, USA; Division of Infectious Disease, Massachusetts General Hospital, Harvard Medical School, Boston, USA
Purpose: Prevalence of chronic HCV among pregnant people is increasing in the US, with the highest burden among women with opioid use disorder (216/1000 live births). HCV is transmitted vertically in 5-10% of births from pregnant people with chronic HCV. Direct-acting antiviral (DAA) therapy has been approved for children aged 3 years and older. The likely clinical and economic impacts of DAA therapy for children with HCV when treated at 3 years old are unknown.
Methods: Using a state-transition model, we projected life expectancy and lifetime costs for 3-year-old children with perinatally-acquired HCV under either: 1) 8 weeks of DAA therapy at 3 years old or 2) no access to therapy throughout their lifetime. Key model inputs were drawn from published estimates of disease progression (annual risks of progression from F0-F1, F1-F2, F2-F3, F3-F4/cirrhosis, cirrhosis to decompensated cirrhosis, and cirrhosis to hepatocellular carcinoma of 28.1%, 13.9%, 9.5%, 9.2%, 3.0%, and 1.0%, respectively); DAA effectiveness (sustained virologic response [SVR] of 96%); DAA costs ($15,601/8-week treatment course); and annual healthcare costs ($8,712 for cirrhosis, $12,199 for decompensated cirrhosis, and $42,517 for hepatocellular carcinoma; all costs in 2022 USD). We calibrated the model to match reported 8- and 10-year fibrosis risks, as well as the median time from infection to cirrhosis (36 years), in untreated pediatric HCV. Using estimated numbers of infants born with perinatally-acquired HCV annually who do not clear infection by 3 years old (675-3000 children), we projected the total cost and life-year gains from DAA therapy for all 3-year-old children with HCV in the US in 2022.
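The annual risks quoted can be chained into a simple fibrosis cohort trace. The sketch below uses the abstract's progression probabilities under our reading of the stage mapping, omits decompensation, HCC, and mortality for brevity, and is uncalibrated, so its median time to cirrhosis will not exactly reproduce the study's 36-year target:

```python
# Chain the quoted annual progression risks through fibrosis stages
# F0..F4 (cirrhosis). Stage mapping is our reading of the abstract.
P_PROGRESS = {"F0": 0.281, "F1": 0.139, "F2": 0.095, "F3": 0.092}
STAGES = ["F0", "F1", "F2", "F3", "F4"]

def advance(dist):
    """One model year: move a stage-specific fraction forward one stage."""
    new = dict(dist)
    for i, stage in enumerate(STAGES[:-1]):
        moved = dist[stage] * P_PROGRESS[stage]
        new[stage] -= moved
        new[STAGES[i + 1]] += moved
    return new

dist = {s: 0.0 for s in STAGES}
dist["F0"] = 1.0                 # whole cohort starts uninfected of fibrosis damage at F0
year = 0
while dist["F4"] < 0.5:          # median time to cirrhosis (uncalibrated)
    dist = advance(dist)
    year += 1
print(year)
```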
Results: With no therapy, 3-year-old children with HCV were projected to live 51.32 years, with lifetime HCV-related undiscounted healthcare costs of $327,200/person. With DAA therapy at age 3, projected life expectancy was 75.32 years, with HCV-related healthcare costs of $28,100/person. Providing DAA therapy for 675-3000 children 3 years of age would save 16,175-71,888 life-years and $201,892,500-$897,300,000.
Conclusions: DAA therapy for 3-year-old children with HCV is projected to dramatically increase survival and reduce HCV-related healthcare costs, compared to no therapy. Expansion of highly effective and safe DAA therapy for children with HCV at the age of 3 should be a public health priority.
Keywords: pediatric HCV, perinatal infections, direct-acting antivirals, state-transition model
Life-time cost-effectiveness of Tenecteplase use versus Alteplase use for the treatment of acute ischemic stroke: A microsimulation study
PP-017 Applied Health Economics (AHE)
Priyadarsini Dasari, Ryan Suk
Department of Management, Policy and Community Health, School of Public Health, University of Texas Health Science Center, Houston, TX, USA.
Purpose: To evaluate the life-time cost-effectiveness of acute ischemic stroke (AIS) treatments (Tenecteplase or Alteplase) from a payer’s perspective.
Methods: We developed an individual-level state-transition model that simulated individuals who had an AIS episode in 2022 in the United States. Patient profiles were defined by age at episode (mean=71.9 years; SD=13.7). These individuals entered the two treatment scenarios we simulated: Tenecteplase and Alteplase. Since Tenecteplase is effective when administered within 6 hours (versus 4.5 hours for Alteplase), we also identified the proportions of those who received treatment ≤4.5 hours (48.8%), 4.5-6 hours (8.2%), and 6-24 hours (43.1%) post-episode, extracted from the clinical data. Each patient entered the model with a modified Rankin score (mRS) indicating the severity of disability, measured 90 days after the AIS episode. They were then simulated through mutually exclusive health states (mRS 0-5 and death) with a yearly cycle over a lifetime horizon. Each patient was at risk of recurrence. Our base analysis outputs included costs and quality-adjusted life years (QALYs) to evaluate cost-effectiveness. The secondary analysis projected the numbers of disabilities and deaths from AIS and the related costs after following the actual number of episodes per year (to be estimated prior to meeting). Parameters were derived from published clinical trials and other relevant literature. All costs were adjusted to 2022 US dollars. All costs and utilities were discounted at 3%/year. One-way sensitivity analyses were conducted by varying model input parameters. The analyses were conducted using TreeAge Pro 2022.
Results: In our base-case analysis, we found that the administration of Tenecteplase (cost: $190,644; QALYs: 5.69) dominated the administration of Alteplase (cost: $191,994; QALYs: 4.92) for treating AIS episodes. One-way sensitivity analyses found that the cost-effectiveness result was sensitive to the long-term annual costs of mRS 0. However, the incremental cost-effectiveness ratio (ICER) did not exceed the willingness-to-pay (WTP) threshold of $100,000/QALY.
Conclusions: For those having AIS episodes in the US, Tenecteplase can be more effective and less costly than Alteplase. These results can motivate further research in this area, inform changes to guidelines for the treatment of AIS, and help reduce the mortality, morbidity, and high economic burden borne by AIS patients.
Keywords: acute ischemic stroke, cost-effectiveness, Tenecteplase, Alteplase
Table 1.
Results of Cost-Effectiveness Analysis
| Strategy | Cost (USD) | Incremental Cost (USD) | Effectiveness (QALY) | Incremental Effectiveness (QALY) | ICER |
|---|---|---|---|---|---|
| Tenecteplase | 190,644.17 | | 5.69 | | |
| Alteplase | 191,994.10 | 1,349.93 | 4.92 | -0.78 | dominated |
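The "dominated" verdict in the table follows from simple (strong) dominance: Alteplase costs more and yields fewer QALYs, so no meaningful ICER exists. A generic sketch of that comparison logic (strong dominance only; extended dominance is not handled):

```python
def compare(strategies):
    """Classify each strategy against the cheapest comparator.
    strategies: dict name -> (cost, qalys)."""
    ref_name, (ref_c, ref_q) = min(strategies.items(), key=lambda kv: kv[1][0])
    out = {}
    for name, (c, q) in strategies.items():
        if name == ref_name:
            out[name] = "reference"
        elif c >= ref_c and q <= ref_q:
            out[name] = "dominated"   # costs more, no extra benefit
        else:
            out[name] = f"ICER {(c - ref_c) / (q - ref_q):,.0f}/QALY"
    return out

print(compare({"Tenecteplase": (190_644.17, 5.69),
               "Alteplase": (191_994.10, 4.92)}))
# {'Tenecteplase': 'reference', 'Alteplase': 'dominated'}
```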
Incorporating family members' health outcomes in pediatric and maternal-perinatal cost-utility analyses: A systematic review
PP-018 Applied Health Economics (AHE)
Ramesh Lamsal1, E. Ann Yeh2, Eleanor Pullenayegum3, Wendy J Ungar4
1Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
2Division of Neurology, The Hospital for Sick Children, Toronto, Ontario, Canada
3Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Ontario, Canada
4Technology Assessment at SickKids (TASK), Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Ontario, Canada
Purpose: A burgeoning literature has shown health-related quality of life (HRQoL) effects on family members of children with chronic illness or disabilities. Several methodological challenges exist in assessing and incorporating family health outcomes in pediatric and maternal-perinatal cost-utility analyses (CUAs). This review aimed to determine the methods used to measure and incorporate family health outcomes in pediatric and maternal-perinatal CUAs.
Methods: A literature search was conducted in MEDLINE, Embase, EconLit, Cochrane collection, CINAHL, INAHTA, and PEDE from inception to 2020 to identify studies that included family health spillover effects in pediatric CUAs, and health outcomes of the mother and child in maternal-perinatal CUAs. This review investigated methods used by authors to measure the family health spillover effects, incorporate them into pediatric CUAs, measure health outcomes of pregnant women and children, and integrate them into maternal-perinatal CUAs.
Results: Of 747 pediatric CUAs, 20 (3%) included family health spillover effects, and of 139 maternal-perinatal CUAs, 38 (27%) included health outcomes of the pregnant woman and child. Supplementary searches yielded nine pediatric CUAs and seven maternal-perinatal CUAs. Pediatric and maternal-perinatal CUAs used various methods to measure and integrate family health spillovers and the health outcomes of pregnant women and children. Most pediatric CUAs measured family health spillover effects as utility decrements or quality-adjusted life year (QALY) losses experienced by family members and/or caregivers due to a child’s illness or disability. Most maternal-perinatal CUAs measured the health outcomes of both pregnant women and children as health utilities. The most common approach to including family health spillover effects in pediatric CUAs was summing the child’s QALY losses due to illness or disability and the parents’ QALY losses due to the child’s illness or disability. In contrast, the most common method in maternal-perinatal CUAs was summing the pregnant woman’s and child’s QALYs or disability-adjusted life years.
Conclusions: Only a small number of pediatric and maternal-perinatal CUAs considered family members’ health outcomes. Ignoring adverse health effects on family members may lead to inequitable decision-making for caregivers and family members who sacrifice time and their well-being and health in providing care for the patient. The results from this review could inform the development of standardized methods to measure and incorporate the family health outcomes in pediatric and maternal-perinatal CUAs.
Keywords: pediatric cost-utility analyses, maternal-perinatal cost-utility analyses, spillover effects, family member health outcomes
Health-related quality of life and costs in parents of children with neuroinflammatory disorders: A cross-sectional study
PP-019 Applied Health Economics (AHE)
Ramesh Lamsal1, E. Ann Yeh2, Eleanor Pullenayegum3, Wendy J Ungar4
1Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
2Division of Neurology, The Hospital for Sick Children, Toronto, Ontario, Canada
3Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Ontario, Canada
4Technology Assessment at SickKids (TASK), Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Ontario, Canada
Purpose: Neuroinflammation in children can result in chronic disability with significant impacts on a child’s independence and health-related quality of life (HRQoL). Consequently, children with a neuroinflammatory disorder (ND) may require parents to support them in their everyday life. Thus, pediatric ND could affect parents’ health, well-being, and economic circumstances. The study aimed to measure the HRQoL and caregiving-related quality of life of parents of children with ND. A further objective was to measure mental health care service utilization, productivity losses, and costs for parents of children with ND.
Methods: This was a prospective cross-sectional observational study. Participants were enrolled through the Neuroinflammatory Disorders Clinic at the Hospital for Sick Children, Toronto, Canada. The HRQoL of respondent parents and their children was measured using the Health Utilities Index (HUI). The carer-related quality of life of respondent parents was measured using the carer-related quality of life instrument (CarerQol). Time losses from paid labour and/or usual activities and mental health service use by parents of children with ND were measured using the Resource Use Questionnaire. The mean total annual costs per two parents were estimated from the parent and societal payer perspectives.
Results: Forty-seven parents and their children participated. The mean age of children with ND was 12.02 years (SD: 3.34), and of respondent parents was 43.76 years (SD: 6.31). The mean [SD] multi-attribute utility score for respondent parents was 0.91 [0.24] using HUI3 and 0.88 [0.17] using HUI2. The mean total CarerQol utility score was 85.97 (SD: 11.94). The majority of respondent parents (88%) and their spouses or partners (80%) missed paid labour time and/or usual daytime activities to provide care for an affected child. The estimated mean (median) annual cost per two parents from a parent payer perspective was CAD 7,366 (2,477). Parents’ productivity costs constituted a sizable portion of their overall costs. From a societal perspective, the estimated mean (median) total annual cost per two parents was CAD 7,401 (2,477).
Conclusions: Parents of children with ND reported reduced HRQoL, carer-related quality of life, and time devoted to paid labour and/or usual activities. Measuring these outcomes illustrates the burden on parents of having and caring for a child with ND. It is essential to capture these effects for more accurate pediatric economic evaluations and more equitable funding decision-making.
Keywords: parents, caregivers, neuroinflammatory disorder, family spillover effects
Early Economic Evaluation of Gene Therapy Treatment for Hemophilia A as compared to Factor VIII Prophylaxis Treatment in Canada
PP-020 Applied Health Economics (AHE)
Sam D. Hirniak1, Andrea N Edginton1, Alfonso Iorio2, Mhd Wasem Alsabbagh1, William W. L. Wong1
1School of Pharmacy, Faculty of Science, University of Waterloo, Kitchener, ON, Canada
2Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada; Department of Medicine, McMaster University, Hamilton, Ontario, Canada
Purpose: The standard of care for hemophilia A is bleed prevention (prophylaxis) through regular intravenous injections of clotting factor VIII. The economic burden of the disease is high, primarily due to the cost of factor VIII over the patient’s lifetime. Currently, pharmaceutical companies are developing gene therapy options for hemophilia A. While short-term clinical trial results are promising, gene therapy has a very high upfront cost. The purpose of this study was to perform an early economic evaluation of gene therapy for hemophilia A patients.
Methods: A state-transition (Markov) model was implemented for the economic evaluation, in the form of a cost-utility analysis. The model compared factor VIII with gene therapy and considered a cohort of 18-year-old male patients with uncomplicated hemophilia A. The model considered six health states: prophylaxis, permanent joint damage, bleeding event with joint damage, bleeding event, death, and gene therapy (Figure 1). The gene therapy cohort may have successful or unsuccessful gene therapy; if gene therapy is unsuccessful, the patient is put back on prophylaxis. Model inputs, including transition probabilities, costs, and utilities, were retrieved from the literature. We used a Canadian provincial Ministry of Health perspective, a lifetime time horizon, and a 1.5% discount rate for the analysis. Sensitivity and scenario analyses were also completed to assess uncertainty.
Results: Gene therapy showed relative risk reductions of 0.85, 0.87, and 0.88 for bleeding, hospitalizations, and surgeries, respectively, in comparison with prophylaxis alone. The average lifetime costs were $3,824,880 CAD for gene therapy and $20,648,696 CAD for prophylaxis alone. Patients who attempted gene therapy had an average of 38.93 QALYs, compared to 27.91 QALYs for patients on prophylaxis alone. This demonstrated that prophylaxis was completely dominated by gene therapy. Our sensitivity and scenario analyses showed that the model is sensitive to the duration of treatment effect, the cost of prophylaxis treatment, joint damage utility, and the cost of gene therapy treatment. However, the overall conclusion remained the same regardless of the ranges that were tested.
Conclusions: Gene therapy could be a cost-saving option for hemophilia A patients. Whether gene therapy remains a cost-saving/cost-effective option will depend on the cost of gene therapy, the long-term distribution of treatment effects, and the overall success rate of the treatment.
Keywords: cost-effectiveness, hemophilia A, gene therapy, hemophilia prophylaxis, health utility, health technology assessment
Model diagram
Model diagram showing the dynamics of the health states.
Cost and Cost-effectiveness Analysis of a Digital Diabetes Prevention Program- Results from the PREDICTS Trial
PP-021 Applied Health Economics (AHE)
Tzeyu L Michaud1, Kathryn E Wilson2, Jeffrey A Katula3, Wen You4, Paul A Estabrooks5
1Center for Reducing Health Disparities and Department of Health Promotion, College of Public Health, University of Nebraska Medical Center, Omaha, NE, USA
2Department of Kinesiology and Health, College of Education & Human Development, Georgia State University, Atlanta, GA, USA
3Department of Health and Exercise Science, Wake Forest University, Winston-Salem, NC, USA
4Department of Public Health Sciences, University of Virginia, Charlottesville, VA, USA
5Department of Health and Kinesiology, College of Health, University of Utah, Salt Lake City, UT, USA
Purpose: Although technology-assisted diabetes prevention program (DPP) interventions have been shown to improve glycemic control and weight loss, information is limited regarding the relevant costs and their cost-effectiveness. The aim of this study was to describe costs associated with a digital DPP (d-DPP) and a small group education class (SGE; the usual care arm), and to conduct an economic evaluation using data from a randomized controlled diabetes prevention trial.
Methods: A retrospective within-trial cost and cost-effectiveness analysis (CEA) was conducted to compare d-DPP (n=299) and SGE (n=300) over a one-year study period. Costs were summarized into direct medical costs, direct nonmedical costs, and indirect costs (operationalized as lost work productivity) by intervention arm, estimated from survey questionnaires administered at the 4- and 12-month follow-ups. The CEA used the incremental cost-effectiveness ratio (ICER), expressed as the incremental cost per additional unit reduction in hemoglobin A1c (HbA1c), per additional unit of weight loss, and per additional quality-adjusted life year (QALY) gained for d-DPP relative to SGE. The CEA was assessed from both the private payer and societal perspectives. Sensitivity analysis was performed using nonparametric bootstrap analysis to construct the incremental cost-effectiveness plane.
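The ICER and bootstrap steps described above can be sketched as follows. The per-participant cost and QALY distributions are synthetic placeholders, not PREDICTS trial data, so the printed probabilities will not match the reported 39%/69% figures; the sketch only shows the mechanics of the incremental cost-effectiveness plane and the net-monetary-benefit decision rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-participant societal costs and QALYs for the two arms
# (illustrative means and spreads only).
cost_ddpp = rng.normal(13_054, 3_000, 299)
cost_sge  = rng.normal(14_695, 3_000, 300)
qaly_ddpp = rng.normal(0.80, 0.10, 299)
qaly_sge  = rng.normal(0.78, 0.10, 300)

d_cost = cost_ddpp.mean() - cost_sge.mean()
d_qaly = qaly_ddpp.mean() - qaly_sge.mean()
icer = d_cost / d_qaly   # incremental cost per QALY gained

# Nonparametric bootstrap of the incremental cost-effectiveness plane:
# resample each arm with replacement and recompute the (dQALY, dCost) pair.
boot = []
for _ in range(2000):
    c1 = rng.choice(cost_ddpp, 299).mean(); q1 = rng.choice(qaly_ddpp, 299).mean()
    c0 = rng.choice(cost_sge, 300).mean();  q0 = rng.choice(qaly_sge, 300).mean()
    boot.append((q1 - q0, c1 - c0))
boot = np.array(boot)

# Probability of cost-effectiveness at a willingness-to-pay (WTP) threshold:
# cost-effective when net monetary benefit WTP * dQALY - dCost > 0.
for wtp in (50_000, 100_000):
    p_ce = np.mean(wtp * boot[:, 0] - boot[:, 1] > 0)
    print(f"P(cost-effective at ${wtp:,}/QALY) = {p_ce:.2f}")
```

A negative incremental cost with a positive incremental QALY (the southeast quadrant of the plane) corresponds to the "cost-saving" finding reported from the societal perspective.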
Results: Over one year, the direct medical costs, direct non-medical costs, and indirect costs were $4,517, $1,595, and $6,942 per participant in the d-DPP group, versus $4,141, $1,350, and $9,204 per participant in the SGE group. The CEA results showed that d-DPP was cost-saving (i.e., lower costs and greater benefits) compared to SGE from a societal perspective. From a private payer perspective, d-DPP cost an additional $4,700 per unit reduction in HbA1c, $113 per unit of weight loss, and $19,790 per QALY gained compared to SGE. From a societal perspective, bootstrapping indicated that d-DPP has a 39% probability of being cost-effective at a willingness-to-pay (WTP) threshold of $50,000/QALY and a 69% probability at a WTP of $100,000/QALY.
Conclusions: The d-DPP was cost-effective in achieving HbA1c reduction and weight loss, and offers the prospect of high scalability and sustainability due to its program features and delivery modes, which can be readily translated to other settings.
Keywords: lifestyle intervention, economic evaluation, work productivity, digital health
Impact of Implementation Barriers on Economic Models of Genetic Testing in Metastatic Colorectal Cancer
PP-022 Applied Health Economics (AHE)
Zachary T Rivers1, David Stenehjem3, Helen Parsons4, Andrew C Nelson5, Emil Lou6, Veena Shankaran2, Karen M Kuntz4
1Hutchinson Institute For Cancer Outcomes Research, Fred Hutchinson Cancer Center, Seattle, WA, United States
2Hutchinson Institute For Cancer Outcomes Research, Fred Hutchinson Cancer Center, Seattle, WA, United States, Division of Medical Oncology, University of Washington, Seattle, WA, United States
3Department of Pharmacy Practice and Pharmaceutical Sciences, University of Minnesota College of Pharmacy, Duluth, MN, United States
4Division of Health Policy and Management, University of Minnesota School of Public Health, Minneapolis, MN, United States
5Department of Lab Medicine and Pathology, University Of Minnesota Medical School, Minneapolis, MN, United States
6Division of Hematology, Oncology, and Transplantation, University of Minnesota Medical School, Minneapolis, MN, United States
Purpose: Economic models of personalized medicine do not routinely include implementation barriers. These barriers include lack of inclusion of genotypes from underrepresented races and ethnicities, variability in test accuracy, and lack of patient access to testing. This study illustrates how inclusion of classification, accuracy, and patient access parameters changes the impact of genetic testing in metastatic colorectal cancer (mCRC).
Methods: We updated a model of genetic testing in mCRC to a microsimulation and added personalization of anti-emetic treatment. The base analysis compares sequencing DPYD, UGT1A1, and CYP2D6 (where variants make specific treatments inappropriate) against no testing, the current standard of care. Genetic variant rates were assigned with race and ethnicity as a surrogate for genetic ancestry. Outcomes included number of inappropriate treatments prevented, cost per inappropriate treatment prevented (CPI), and percentage of inappropriate treatments prevented in White and non-White populations. One thousand simulations of 2,108 patients receiving chemotherapy were conducted. Scenario analyses tested the impact of implementation barriers on model findings. Scenario 1 screened limited variants, representing challenges with test variability and accuracy in diverse populations, while Scenario 2 explored barriers preventing access to testing, where patients living in non-metropolitan areas were less likely to receive testing. Scenario 3 combined Scenario 1 and Scenario 2.
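The scenario logic described above can be sketched as a small microsimulation. The variant prevalence, panel sensitivity, and access probabilities below are invented placeholders rather than the study's race- and ethnicity-specific inputs; the sketch only shows how the limited-panel (Scenario 1) and access-barrier (Scenario 2) mechanisms combine in Scenario 3.

```python
import random

random.seed(42)

# Placeholder parameters (not study values).
N = 2_108                  # patients receiving chemotherapy per simulation
VARIANT_PREV = 0.09        # probability a patient carries an actionable variant
TEST_SENSITIVITY = 0.70    # Scenario 1: a limited panel misses some variants
NONMETRO_FRACTION = 0.20   # Scenario 2: fraction of patients in non-metro areas
NONMETRO_TESTED = 0.60     # probability a non-metro patient accesses testing

def simulate(limited_panel: bool, access_barrier: bool) -> int:
    """Return the number of inappropriate treatments prevented in one run."""
    prevented = 0
    for _ in range(N):
        carrier = random.random() < VARIANT_PREV
        if not carrier:
            continue
        tested = True
        if access_barrier and random.random() < NONMETRO_FRACTION:
            tested = random.random() < NONMETRO_TESTED
        sens = TEST_SENSITIVITY if limited_panel else 1.0
        detected = tested and random.random() < sens
        if detected:
            prevented += 1   # variant found, so the inappropriate drug is avoided
    return prevented

base = simulate(limited_panel=False, access_barrier=False)
s1 = simulate(limited_panel=True, access_barrier=False)
s2 = simulate(limited_panel=False, access_barrier=True)
s3 = simulate(limited_panel=True, access_barrier=True)
print(base, s1, s2, s3)
```

Repeating each scenario across many simulated cohorts (the study used 1,000) yields the means and standard deviations reported in the results, and dividing total testing cost by treatments prevented yields the CPI.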
Results: The base analysis prevented a mean of 191 (standard deviation (SD): 13.8) inappropriate treatments. Scenario 1 prevented 135 (SD: 11.5), Scenario 2 prevented 180 (SD: 13.4), and Scenario 3 prevented 128 (SD: 11.1). Scenario 2 ($17,418) had the highest CPI, followed by the base analysis ($17,415), Scenario 1 ($13,501), and Scenario 3 ($13,492). The base analysis (100%) identified the highest percentage of inappropriate treatments in non-White individuals, followed by Scenario 2 (94%), Scenario 1 (43%), and Scenario 3 (41%).
Conclusions: The exclusion of access barriers may overestimate the clinical impact of personalized medicine. Exclusion of test applicability barriers may lead to policy and reimbursement decisions that concentrate improved care in a subset of the population. Future models of genetic testing should include barriers to improve real-world validity.
Keywords: colorectal cancer, pharmacogenomics, health equity, cost-effectiveness, microsimulation
Primary Model Outcomes
| Scenario | Inappropriate Treatments Prevented | CPI (2022 USD) | Percentage of Inappropriate Treatments Prevented in White Patients | Percentage of Inappropriate Treatments Prevented in non-White Patients |
|---|---|---|---|---|
| Base Analysis | 191 | 17,415 | 100 | 100 |
| Scenario 2 | 180 | 17,418 | 94 | 94 |
| Scenario 1 | 135 | 13,501 | 77 | 43 |
| Scenario 3 | 128 | 13,492 | 72 | 41 |
Impact of Lung Cancer Screening Visual Display Format and Result Values on Behavioral Intentions, Understanding, and Distress
PP-023 Decision Psychology and Shared Decision Making (DEC)
Aaron M Scherer1, Jessica C Sieren2, Kimberly C Dukes1, Emily E Chasco1, Richard M Hoffman1, Eric E Hoffman2, David A Katz1, Mark W Vander Weg3
1Department of Internal Medicine, University of Iowa, Iowa City, IA, USA
2Department of Radiology, University of Iowa, Iowa City, IA, USA
3Department of Community and Behavioral Health, University of Iowa, Iowa City, IA, USA
Purpose: Newer forms of low-dose computed tomography (LDCT) used in lung cancer screening provide more continuous indicators of lung health but have not been used to promote smoking cessation. The main objectives of this pilot study were to:
• Experimentally examine how responses to one indicator of poor lung health from an LDCT scan, greater volumes of low lung density (LLD), might change based on how the LLD result is visually represented and on the magnitude of the LLD volume.
• Assess whether the experimental paradigm used in the current study can test different factors related to LDCT scan results with groups who are not eligible for lung cancer screening (i.e., whether it yields similar effects across groups).
Methods: A national, online, quota-based sample of 1,241 U.S. adults aged 50-80 years completed a survey-based experiment using a hypothetical clinical vignette in which they were eligible for lung cancer screening and received an LLD result after an LDCT scan. Participants were categorized into one of three groups: Never-Smokers (n=459), Screening Ineligible (<20 smoking pack-years, n=385), and Screening Eligible (>20 smoking pack-years, n=397). Participants were independently randomized to an LLD display format (drawing, image) and LLD result (11%, 22%), which were displayed on a visual number line ranging from 3% to 27%. The primary outcome was intention to enroll in a smoking cessation program. Subjective understanding of the LLD result and psychological distress were secondary measures. Responses were on 6-point Likert scales. Main effects and interactions for LLD result, LLD display format, and smoking group were tested via three-way ANOVAs.
Results: The LLD image display format produced higher enrollment intentions (p=.009) and distress in response to the result (p=.021) relative to the drawing display format (Figure). Subjective understanding of the LLD result was higher with an LLD result of 22% vs. 11% (p=.037). There was only one significant interaction between an experimental factor and smoking group: a smoking group by LLD result interaction for distress (p=.044).
Figure.
Conclusions: In this pilot study, the display format of LLD results had more impact on intentions to enroll in a smoking cessation program and on distress than the actual LLD result, regardless of participants' smoking and screening eligibility status.
Keywords: risk communication; lung cancer screening; information displays
Analyzing Screening Influences to Support Shared Decision-Making for Mammograms among Diverse Women
PP-025 Decision Psychology and Shared Decision Making (DEC)
Sienna Ruiz1, Kamilah Abdur Rashid1, Rachel Mintz1, Maggie Britton2, Michelle Eggers1, Ashley J. Housten1
1Department of Surgery, Washington University School of Medicine, St. Louis, MO, U.S.
2Department of Psychology, University of Houston, Houston, TX, U.S.
Purpose: Mammography guidelines can be complicated and debatable, as interpretations of guidelines may differ by institutional source or emphasize potential benefits while minimizing possible harms. Racial and ethnic minority women, who are more likely to have low health literacy due to various structural barriers including educational access, may have difficulty interpreting screening guidelines. This study examined what influences racially and ethnically diverse women’s decisions to undergo mammography screening to better support informed medical decisions.
Methods: We conducted 28 focus groups with 134 non-Latina Black (hereafter, Black; n=51), non-Latina White (hereafter, White; n=39), and English-speaking (n=18) and Spanish-speaking (n=26) Latina women. Focus group facilitators asked women what they considered when deciding whether to get a mammogram and how often to get one. We used deductive coding following the Andersen Behavioral Model of Health Services Use to describe three types of factors impacting mammography use (i.e., screening): predisposing, enabling, and need. Predisposing factors are individuals' pre-existing characteristics, such as demographics or personal values, that influence whether they use healthcare services. Enabling factors refer to resources that facilitate healthcare use, while need factors indicate the reasons for immediate use of, or perceived need for, services.
Results: Predisposing factors were as follows: benefits of mammograms (White), beliefs that people should only go to the doctor when a health problem arises and feelings of embarrassment surrounding breast cancer screenings (Latina), and convictions surrounding personal independence, bodily autonomy, and past negative medical experiences (Black). Enabling factors included customer service-like experiences of screening (White), and free or low-cost screenings (Latina). Need factors related to physician recommendation (White, Black) and family responsibilities (Latina).
Conclusions: In this study, White women’s trust in providers and expectations of customer service-like delivery of care may reflect their socioeconomic positionality, which can facilitate their ease in navigating health systems and increase expectations for health service delivery. To support shared decision-making about screening in Black and Latina women, healthcare providers can deliver culturally and structurally adjusted care that aligns with or addresses each group’s influences or barriers to care. Specifically, community initiatives that emphasize individual bodily autonomy may support Black women in making decisions and increase trust in providers. For Latina women, culturally-adjusted care recognizing potentially stigmatizing effects of breast cancer and cost-covering programs could reduce barriers to screening.
Keywords: Cancer, Mammography, Shared-Decision Making, Racial/Ethnic Minority
Clinician Perspectives on Site Differences in Implementing a New Decision Aid for Patients Considering an Implantable Cardioverter Defibrillator
PP-026 Decision Psychology and Shared Decision Making (DEC)
Bryan C Wallace1, Christopher E Knoepke1, Daniel D Matlock2
1Adult and Child Center for Outcomes Research and Delivery Science, University of Colorado School of Medicine, Aurora, Colorado, USA
2Adult and Child Center for Outcomes Research and Delivery Science, University of Colorado School of Medicine, Aurora, Colorado, USA; Division of Geriatric Medicine, Department of Medicine, University of Colorado School of Medicine, Aurora, Colorado, USA
Purpose: Understanding site-based differences in the success or failure of implementing shared decision making interventions is important in informing strategies to optimally increase the use of such interventions in diverse clinical settings. This study describes clinician perspectives regarding implementing a new decision aid for patients considering treatment with an implantable cardioverter defibrillator (ICD). This clinical context is unique, as the most recent CMS reimbursement guidelines require documented use of a patient decision aid prior to implanting an ICD in these patients.
Methods: As part of the DECIDE-ICD Trial (R01 HL136403), a stepped-wedge implementation/effectiveness hybrid trial evaluating decision aids in this context, we interviewed clinicians involved in the ICD care process (MDs, nurses, administrators) during post-intervention phases. Interviews followed a semi-structured guide, including questions about existing attitudes toward and experiences with shared decision making (SDM), as well as implementing decision aids into the clinical workflow. We qualitatively analyzed these interviews using a grounded theory mixed deductive/inductive approach to understand how implementation differed between sites. As data were described, they were contextualized according to site characteristics, allowing us to compare sites that more successfully implemented decision aids with those that did not, as defined by observed reach within the RE-AIM framework. All qualitative analyses were conducted using Dedoose analytic software.
Results: We completed 22 post-implementation interviews across all seven sites. Significant site-specific barriers included a site champion leaving, physician differences within sites, teaching education methods to new staff, implementation beginning after COVID-19, a large catchment area, and a lack of reliable electronic means of communication with patients. Site facilitators included engaged site champions, previous experience with SDM interventions, communication among teams, adaptations to delivery methods, and multi-tier team engagement. See Table 1 for quotes regarding barriers and facilitators.
Conclusions: Site differences are important for understanding how to tailor real-world interventions to specific clinics and populations. Decision support for patients considering defibrillators is a timely and important topic, as CMS mandates require SDM for reimbursement. Clinicians stated that, to be successful, implementation of a new SDM intervention needs engaged site champions, continuity through staff turnover, and delivery methods adapted to the patient population.
Keywords: ICD, electrophysiology, qualitative, SDM, implementation, RE-AIM
Clinician perspectives on site-specific barriers and facilitators.
| Thematic Areas | Sub-themes | Representative Quotes |
|---|---|---|
| Site-specific Barriers | 1) Site champion leaving 2) Physician Differences within Sites 3) Teaching Education Methods to New Staff 4) Large Catchment Area 5) Lacking Reliable Electronic Means of Communication with Patients 6) Implementation beginning after COVID-19 | 1) “My individual practice, of course, has been curtailed since I moved. So, I haven't seen a lot of these individual consults myself in recent months. So, I'm not sure how to answer that. That's a good question. I've been asked about the materials from time to time, more so by the advanced heart failure providers who – because they know I'm involved in the study. They will occasionally reach out to me if they want to access the materials to have them sent to patients.” -S7P1-MD 2) “maybe we spend a little extra time but I’m not sure. Each physician, I guess, is a little bit different” -S1P3-NP 3) “So just reminding staff to do it I think is going to be the big thing, making sure that – because there will be turnover in the clinic, and just so making sure that education process is continued and people that join our clinic are aware of what's going on, and that every patient that's being considered for a defibrillator needs these materials, that's going to be the main issue.” -S3P1-MD 4) “we see patients in a much broader geographical area than other networks would in the city.” -S2P2-MD 5) “with our patient population, it's hard. I mean, you could say like, oh, the moment that you see a patient on your schedule three weeks from now for an ICD discussion, you email them the decision tool from your office, right? That's great. You could probably do it at the (other clinic site), or private practices or whatever, but at (clinic site), where people maybe don't have a computer, or they just have their little Boost mobile phone, it's hard, right? …It's a great idea, but I think with our patient population, it's gonna be really challenging.” 6) “there's the whole COVID situation, which threw wrenches into everything” -S6P1-MD |
| Site-specific Facilitators | 1) Engaged site champion 2) Previous experience with shared decision making interventions 3) Communication among teams 4) Adapting delivery methods 5) Multi-tier team engagement | 1) “Doctor (X) – obviously, she's the director of our group – she's been pretty proactive also about making sure that the right patients get included in the study. So, I think that's been helpful, too, kind of coming from the top to make sure that we're capturing all the patients we need to.” -S5P6-NP 2) “The only way for me to really think about this is - I'm trying to translate it over to, and sorry, I'm just going to my heart failure world, but I'm thinking about the iDecide LVAD stuff, what's helped with those patients, because patients really, I think, benefit from hearing from others, and experience from other people.” -S7P5-MD 3) “I think before the packets are introduced in the clinics and inpatient settings, I think it needs to be presented at a provider meeting, just so everybody knows what's out there and it's not a surprise. Where we had not done that here, so it's a constant upgrade to the attendings when they come into clinic. And so it's sort of the fellows and the nursing staff know about it, but sometimes the attendings don't. They haven't been in clinic for a while.” -S6P4-RN 4) “switching over to virtual and trying to get everything done, that just required a little bit of change: the printed materials had to be mailed instead of given to the patients in clinic, that sort of thing” -S3P1-MD 5) “I present that information to the patients, ask them if they have questions. Some patients are already decided. They know they want an ICD. Some are not decided. Then, I transition to the nurse and the nurse is there for continued discussion with them and continued addressing questions. If they haven't already received written material, we give that to them and then let them let us know if they decide.” -S1P4-MD |
Quotes from post-implementation interviews regarding site differences with the success of reach.
Utilizing a Novel Patient Roadmap Model for Developing a Shared Decision Support Tool in the Process of Lower Limb Prosthesis Design
PP-027 Decision Psychology and Shared Decision Making (DEC)
Chelsey B Anderson1, Stefania Fatone3, Mark M Mañago1, Laura A Swink1, Andrew J Kittelson4, Dawn M Magnusson2, Cory L Christiansen1
1Department of Physical Medicine and Rehabilitation, Physical Therapy Program, University of Colorado, Aurora, Colorado, USA; Department of Research, Geriatric Research, Education, and Clinical Center, VA Eastern Colorado Healthcare System, Aurora, Colorado, USA
2Department of Physical Medicine and Rehabilitation, Physical Therapy Program, University of Colorado, Aurora, Colorado, USA
3Department of Rehabilitation Medicine, University of Washington, Seattle, Washington, USA
4Department of Physical Therapy and Rehabilitation Science, University of Montana, Missoula, Montana, USA
Purpose: Lower limb amputation is a chronic health condition that introduces a lifetime of complex healthcare decision-making, including multiple decisions associated with the design and maintenance of a prosthesis. Most decision aid development processes focus on patients facing discrete healthcare decisions. To address the many decisions and long-term decision-making challenges of people with chronic health conditions, such as lower limb amputation, we tested a novel Patient Roadmap Model designed to engage patients earlier in their care trajectory and prepare them for future shared decision-making. This research aimed to use both the International Patient Decision Aid Standards (IPDAS) and the Patient Roadmap Model to develop and test a shared decision support tool for prosthesis design after lower limb amputation.
Methods: The development of the shared decision support tool for prosthesis design followed the IPDAS steps for developing patient decision aids, including a qualitative needs assessment with 38 prosthetists and 17 prosthesis users, and alpha testing with a steering group of five experienced prosthetists and six experienced prosthesis users. The development of content followed the criteria listed in the IPDAS and the Patient Roadmap Model (Table 1). Alpha testing involved evaluating the tool’s accuracy, usability, and comprehensibility using Likert response scales, and open-ended feedback from steering group members.
Table 1.
Criteria for Content Required and Included
| Criteria for Content | IPDAS Criteria | Patient Roadmap Model Criteria | Prosthesis Design Shared Decision Support Tool |
|---|---|---|---|
| 1. Describe health condition | YES | YES | YES |
| 2. Describe decision | YES | YES | YES |
| 3. Describe available options | YES | YES* | YES* |
| 4. Describe natural course of disease | YES | YES | YES |
| 5. Describe +/- of options | YES | NO | NO |
| 6. Compare option features | YES | NO | NO |
| 7. Provides outcome probabilities | YES | NO | NO |
| 8. Includes values identification | YES | YES | YES |
| 9. Provides step-by-step guidance for making a decision | YES | NO | NO |
| 10. Includes questions to use when talking with a provider about a decision | YES | YES* | YES* |
*Listed options and questions associated with options are broad and non-specific.
Results: The final shared decision support tool included sections that addressed four identified decisional needs: 1) acknowledging complexity in communication, 2) clarifying values, 3) recognizing the role of experience to inform preferences, and 4) understanding the prosthetic journey. Unique to the Patient Roadmap Model, the tool aimed to provide a general overview of the multiple decisions that contribute to prosthesis design, and met all six of the Model’s criteria for content (Table 1). During alpha testing, steering group members rated the tool’s accuracy as 96%, usability as 93%, and comprehensibility as 96%.
Conclusions: A shared decision support tool was developed using the established IPDAS supported by the Patient Roadmap Model. The tool achieved all six of the Patient Roadmap criteria for content, and was rated to have high accuracy, usability, and comprehensibility by an experienced steering group of prosthesis users and prosthetists. Results from this work offer an opportunity to provide longitudinal anticipatory healthcare guidance and prepare new prosthesis users for shared decision-making about prosthesis design decisions throughout their health journey.
Keywords: Shared decision-making, patient decision aid, patient roadmap, lower limb amputation, prosthesis.
Shared Decision Making and Socioeconomic Status: An Observational Study
PP-031 Decision Psychology and Shared Decision Making (DEC)
Kim Tenfelde1, Marjolijn Antheunis1, Nadine Bol1, Mirela Habibovic2, Jos Widdershoven3
1Department of Communication and Cognition, Tilburg University, Tilburg, The Netherlands
2Department of Medical and Clinical Psychology, Tilburg University, Tilburg, The Netherlands
3Department of Cardiology, Elisabeth-Tweesteden Hospital, Tilburg, The Netherlands
Purpose: To explore and examine communication differences due to patient socioeconomic status (SES) regarding patient-centeredness, patient-participation, and shared decision-making (SDM).
Methods: An observational study was conducted at the cardiology department of an outpatient clinic. A total of 45 consultations were observed. SES was measured by patients’ education level, neighborhood, income, household composition, and occupation. Sixteen patients were classified as low-SES, 16 as middle-SES, and 13 as high-SES. Consultations were audio recorded and transcribed, after which utterances of both patients and physicians related to patient-participation, patient-centeredness, and SDM were coded using a codebook based on the studies of Willems et al. (2005), Street (1991), and the Roter Interaction Analysis System (RIAS; Roter & Larson, 2002). As consultation length varied between 34 and 602 utterances, proportion scores of communication utterances were calculated to account for this variation. MANOVAs were conducted to examine SES differences in doctor-patient communication related to patient-participation, patient-centeredness, and SDM.
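The proportion-score normalization described above amounts to dividing each utterance count by the consultation's total utterances, so a 602-utterance consultation and a 34-utterance one become comparable. The counts below are invented for illustration only.

```python
# Each dict is one consultation; counts are hypothetical examples,
# not data from the observed consultations.
consultations = [
    {"patient_utterances": 12, "total_utterances": 34},
    {"patient_utterances": 180, "total_utterances": 602},
]

for c in consultations:
    # Proportion score: share of the consultation contributed by the patient.
    c["proportion"] = c["patient_utterances"] / c["total_utterances"]
    print(f'{c["proportion"]:.3f}')
```

The resulting proportions (rather than raw counts) are the dependent variables entered into the MANOVAs.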
Results: For patient-participation, results revealed that low-SES patients contributed proportionally fewer utterances overall in the consultation compared to high-SES patients, p = .030. Furthermore, low-SES patients answered physicians’ questions more often with one word (yes/no) compared to high-SES patients, p = .030. Lastly, low-SES patients expressed fewer concerns than high-SES patients, p = .043. No significant SES differences were found for the proportion of partnership-building utterances by the physicians, F(2, 42) = 0.52, p = .599, ηp2 = .03, nor for the proportion of SDM utterances by the physicians, F(2, 42) = 1.33, p = .276, ηp2 = .06. As such, physicians did not vary partnership building and SDM based on the SES level of their patients.
Conclusions: Findings suggest that low-SES patients are more passive communicators than high-SES patients, while physicians did not appear to use different communication styles based on patient SES. Nonetheless, differences in patient-participation may still lead to less adequate SDM, potentially producing worse patient outcomes and further widening the socioeconomic health gap. Hence, it is recommended that physicians actively encourage low-SES patients to become more engaged communicators.
Keywords: Socioeconomic Status, Shared Decision-Making, Health Communication, Patient-participation, Patient-centeredness
Communicating Lifestyle Advice through Shared Decision-Making: An Interview Study
PP-032 Decision Psychology and Shared Decision Making (DEC)
Kim Tenfelde, Fleur Hasebos, Marjolijn Antheunis, Nadine Bol
Department of Communication and Cognition, Tilburg University, Tilburg, The Netherlands
Purpose: To examine physicians’ insights on how they use shared decision-making (SDM) and whether, and how, they communicate lifestyle advice about topics such as diet, smoking, and exercise to their patients.
Methods: In-depth semi-structured interviews were conducted with ten physicians from various specialties (five cardiologists, two neurologists, one pediatrician, one trauma surgeon, one medical psychologist). Ages ranged from 28 to 63 years (M = 45.8, SD = 10.2). Interviews were conducted via Zoom or Microsoft Teams due to COVID-19 restrictions. Example interview questions were: “How important do you find shared decision-making (SDM)?” and “Do you communicate to your patient that lifestyle is an important factor in (improving) their health, and if so, how?” The interviews lasted between 29 and 50 minutes and were audio recorded and transcribed. A thematic analysis was conducted to identify relevant themes and patterns in the data. Examples of relevant (sub)codes are importance of lifestyle choices, value of SDM, showing support, time constraints, and patient differences.
Results: Main findings show that all physicians try to use SDM in their consultations. Most physicians (n=7), however, said that they often have a strong preference for a certain treatment option for their patients, and some (n=3) try to steer patients toward a certain choice: “Shared decision-making mostly comes down to making the patient feel like you have decided together.” All physicians (n=10) also mentioned that they take factors such as IQ, age, and education into account when using SDM with their patients. For communicating lifestyle choices, most physicians (n=7) believe SDM is a powerful tool for advising patients on lifestyle changes, as sharing the decision can lead to improved patient compliance. However, two physicians mentioned they prefer to be rather strict and harsh in communicating lifestyle advice, as they believe this makes patients take their health more seriously. Nonetheless, physicians often doubt whether their communication and advice are effective. In addition, physicians experience time constraints when it comes to discussing lifestyle choices in consultations.
Conclusions: This research explored physicians’ insights on using SDM as a tool for communicating lifestyle advice, paving the way for further research into how communication can be used to improve people's health.
Keywords: lifestyle choices, shared decision-making, doctor-patient communication, interview study
“It isn’t the tool we want, it’s the tool we have”: Clinician perspectives on use of life expectancy to guide cancer screening in older adults
PP-033 Decision Psychology and Shared Decision Making (DEC)
Laura E Brotzman, Brian J Zikmund Fisher
Department of Health Behavior & Health Education, University of Michigan School of Public Health, Ann Arbor, MI, USA
Purpose: Every primary care clinician faces the challenge of making individualized, appropriate cancer screening recommendations for their older patients. While cancer screening guidelines increasingly recommend incorporating life expectancy estimates to inform screening decisions for older adults, little is known about whether and how this happens in clinical practice. Many clinicians find it difficult to implement these cancer screening guidelines for individual patients. This study investigates what clinicians are thinking in these moments and how they handle these decisions for breast, cervical, colon, and prostate cancer screening in patients over the age of 65.
Methods: Fifteen primary care clinicians from Internal Medicine and Family Medicine completed a 30-60 minute semi-structured interview about use, estimation, and discussion of life expectancy with older patients to guide cancer screening decisions. We transcribed, coded, and analyzed the interviews using a thematic content analysis approach. A clinician champion reviewed the findings and assisted with their interpretation.
Results: Clinicians vary widely both in how they define what life expectancy is for an individual patient and in which factors they believe should be considered when estimating life expectancy. Only two clinicians were aware of existing life expectancy estimation tools for individual patients. Instead, many clinicians named adjacent metrics, such as quality of life, that they consider equally or more important than life expectancy in guiding cancer screening decisions. Clinicians consistently report uncertainty and discomfort around discussing life expectancy, and few report they routinely and directly talk about life expectancy with older patients. Overall, most clinicians appeared satisfied with the idea that life expectancy should be considered in cancer screening decisions and medication discontinuation decisions for older adults, but its use in practice is affected by both its conceptual limitations and multiple implementation challenges.
Conclusions: Life expectancy will always be a difficult topic for clinicians and patients to discuss, but incorporating it in cancer screening decisions for older adults offers clear potential benefits. Future work should focus on overcoming operational and clinician-perceived barriers to estimating and communicating about life expectancy and improving clinician knowledge, skill, and confidence in using life expectancy in clinical practice.
Keywords: doctor-patient communication, life expectancy
Exploring the potential of using artificial intelligence in maternity care to improve value clarification in shared decision making
PP-034 Decision Psychology and Shared Decision Making (DEC)
Marene Dimmendaal1, Marieke De Vries1, Arjen De Vries1, Jeroen Van Dillen2
1Institute for Computing and Information Sciences, Radboud University, Nijmegen, the Netherlands
2Department of Obstetrics and Gynecology, Radboud University Medical Centre, Nijmegen, the Netherlands
Purpose: The purpose of this paper is twofold: to assess the willingness of various stakeholders to work with AI in the maternity care context, and to present two prototypes of an AI-based tool that may improve the involvement of personal values in SDM in maternity care.
Methods: A total of 10 interviews were conducted with different stakeholders to gauge the willingness to work with AI in maternity care: 2 AI experts, 5 maternity care professionals, and 3 postpartum women. In addition to these interviews, two theoretical prototype designs of a tool that may support the involvement of patient values in SDM are described.
Results: The interviews indicate an interest in and willingness to work with AI in the maternity care context to involve personal values more in SDM, provided certain requirements (e.g., ease of use and no financial costs) are met. The first illustrated prototype uses AI to establish certain milestones from a patient’s social media profile (e.g., their family situation and certain hobbies). From these milestones, a basic personal profile is sketched and presented to the patient. If shared with their care provider, this profile could help steer the conversation towards the personal values that the patient finds important; a clinician can then address medical implications related to these milestones. The second prototype scans forums for maternity care situations and questions. From this information, important personal values, wishes, and fears are detected and divided into categories, classified according to the preference-sensitive maternity care choice they concern. This information would then be accessible to future parents, preparing them with questions they may wish to ask in an upcoming consultation.
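The second prototype’s pipeline (scan forum posts, detect values and concerns, group them by the preference-sensitive choice they concern) could, at its simplest, be sketched as a keyword classifier. The categories and keywords below are purely illustrative assumptions, not part of the prototypes described in this abstract:

```python
# Hypothetical sketch of the forum-scanning step: assign a post to the
# preference-sensitive maternity care choice(s) it concerns via keyword
# matching. Category names and keywords are invented for illustration.
CATEGORIES = {
    "place of birth": {"home birth", "hospital", "birth centre"},
    "pain relief": {"epidural", "pain relief", "gas and air"},
    "mode of birth": {"caesarean", "vaginal birth", "induction"},
}

def categorize_post(text):
    """Return the (sorted) categories whose keywords appear in the post."""
    lowered = text.lower()
    return sorted(
        category
        for category, keywords in CATEGORIES.items()
        if any(keyword in lowered for keyword in keywords)
    )
```

A real implementation would replace keyword matching with a trained text classifier, but the input/output contract (post in, choice categories out) stays the same.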
Conclusions: Stakeholders seem open to working with AI-based tools in the maternity care context, and they mention certain requirements that would strengthen their readiness. The two suggested prototypes of SDM tools that make use of AI show promise in involving and clarifying the personal values of parents in the maternity care context. To evaluate their utility in real-world applications, the prototypes need to be elaborated and studied in practice.
Keywords: shared decision making, artificial intelligence, value clarification
How people living with cystic fibrosis prefer to use an educational website to support shared decision making about lung transplant
PP-036 Decision Psychology and Shared Decision Making (DEC)
Nick Reid1, Kathleen J. Ramos2, Mara R. Hobler2, Lauren E. Bartlett2, Joseph B. Pryor3, Donna L. Berry4, Melissa J. Basile5, Siddhartha G. Kapnadak2, Andrea L. Hartzler1
1Department of Biomedical Informatics and Medical Education, School of Medicine, University of Washington, Seattle, WA, USA
2Division of Pulmonary, Critical Care, and Sleep Medicine, Department of Medicine, University of Washington, Seattle, WA, USA
3Department of General Internal Medicine, University of Washington, Seattle, WA, USA
4Biobehavioral Nursing and Health Informatics, University of Washington, Seattle, WA
5Northwell Health, Manhasset, New York, USA
Purpose: Understand how people living with cystic fibrosis (CF) prefer to use lung transplant (LTx) educational resources for shared decision making (SDM) with their care team.
Methods: By conducting a mixed-methods usability study of educational website prototypes among people living with CF, we elicited preferences about their use of didactic and experiential content when considering LTx. Whereas didactic content communicates clinical knowledge via articles, experiential content communicates lived experience via patient stories. Although educational resources providing both didactic and experiential content can improve knowledge in preparation for SDM about LTx, they must be usable and useful to people living with CF.
To investigate preferences, we created two website prototypes that people living with CF used in scenario-based tasks. The “Author-Driven” prototype presented structured didactic and experiential content based on a self-assessment survey. In contrast, the “Reader-Driven” prototype let participants freely browse didactic and experiential content. Participants rated each prototype with the System Usability Scale (SUS) and then described their preferences for educational content and prototype design during an exit interview. We analyzed SUS scores with descriptive statistics; interview recordings were analyzed deductively.
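The SUS scores reported below follow the standard SUS computation, which the abstract assumes but does not spell out: odd-numbered (positively worded) items contribute response − 1, even-numbered (negatively worded) items contribute 5 − response, and the 0–40 sum is rescaled to 0–100. A minimal sketch of that standard formula:

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 responses.

    Odd-numbered items (positively worded) contribute response - 1;
    even-numbered items (negatively worded) contribute 5 - response.
    The 0-40 sum is multiplied by 2.5 to rescale to 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,... sit at even indices
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

Under this scoring, a uniformly neutral respondent (all 3s) lands at 50, which is why means of 81.6 and 90.0 both indicate good usability.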
Results: Fourteen people living with CF who had not received LTx participated. The Reader-Driven prototype received a higher mean SUS score than the Author-Driven prototype (M=90.0 vs M=81.6, p=0.002). When learning about LTx, participants preferred to navigate between didactic and experiential content in the Reader-Driven prototype, searching for “hidden gems” (Participant #4). Participants described the Author-Driven prototype’s recommendations as a valuable starting point for learning. Participants indicated that didactic content was important to understanding illness trajectory and could help them initiate SDM with providers. They found experiential content could improve knowledge and reduce fear, but was emotionally difficult to read.
Conclusions: Participants preferred to use educational resources as part of an active process where deliberation is informed by sensemaking with both didactic and experiential content. Study results informed an educational website that we are evaluating through a longitudinal study. This educational tool could support people with other types of advanced lung disease with similar needs and preferences.
Acknowledgements: Dr. Mara R. Hobler contributed significantly to the study design, data collection, and analysis, but died prior to abstract submission.
Keywords: Shared decision making, Patient-centered decision aids, Human-centered design, Usability
Implementation Outcomes of a Decision Aid for Men with Prostate Cancer in a Diverse Population
PP-037 Decision Psychology and Shared Decision Making (DEC)
Patrick J Wilson1, Jonathan Bergman1, Jefferson Villatoro1, Lorna Kwan1, Kristen Williams1, David F Penson2, Christopher S Saigal1
1Department of Urology, David Geffen School of Medicine at UCLA, Los Angeles, United States
2Department of Urology, Vanderbilt University School of Medicine, Nashville, United States
Purpose: We scaled an effective Decision Aid (DA) for men with prostate cancer across different healthcare settings to learn best practices for implementation in a variety of patient populations. We aimed to understand how the DA performed in improving decision quality in men seen at a public hospital serving many Hispanic patients compared to academic medical centers (AMCs). This DA was previously shown to improve decision quality at an AMC.
Methods: The DA, WiserCare, was issued electronically to men with newly diagnosed prostate cancer at a public hospital (Olive View Medical Center) and two AMCs (UCLA and Vanderbilt Medical Centers). The software uses patients’ clinical information to personalize clinical outcomes before measuring their preferences for these outcomes with conjoint analysis. Patients and physicians both receive a printout of the patient’s preferences and the “expected value” of possible treatments before the clinic visit. We used the Decisional Conflict Scale (DCS) score, measured after the clinic visit, to categorize patients as having no difficulty (<25), some difficulty (25-37.5), or significant difficulty (>37.5) with decision making, based on these cut-offs’ prior association with delayed decision implementation. A Spanish-language version of the DA was used when needed.
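The categorization rule above maps directly to code; this sketch encodes only the cut-offs stated in the Methods:

```python
def dcs_category(score):
    """Categorize a Decisional Conflict Scale (DCS) total score using the
    cut-offs from the Methods: <25 no difficulty, 25-37.5 some difficulty,
    >37.5 significant difficulty with decision making."""
    if score < 25:
        return "no difficulty"
    if score <= 37.5:
        return "some difficulty"
    return "significant difficulty"
```

For example, the median total score of 6.3 reported in the Results falls in the “no difficulty” band.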
Results: 1,961 men were invited to use the DA across the three sites, and 1,098 did so prior to their visit; 352 completed the post-visit survey. We evaluated differences in DCS scores using chi-squared analysis. Overall scores were excellent (median total DCS score of 6.3 across sites). However, the percentage of patients reporting “significant difficulty” after counseling was lower at the AMCs (UCLA 10%, Vanderbilt 6%) than at the public hospital (27%, p = 0.0518). Men seen at the public hospital also had higher DCS subscale scores, reporting significant difficulty with values clarity, support, and feeling informed (27%, 27%, and 36%, respectively; p<0.05).
Conclusions: Electronic DAs can be implemented successfully in public hospitals serving men of varying language and socioeconomic status. We saw an overall preservation of DA effectiveness in the public hospital. However, a higher proportion of public hospital patients demonstrated “significant difficulty” in the decision, despite changes to workflow and use of a Spanish language DA. Further qualitative work with these men is indicated to improve DA effectiveness in this setting.
Keywords: Shared Decision Making, Decision Aids, Prostate Cancer, Decisional Conflict, Qualitative Research
Figure 1.
Total Decisional Conflict Scale (DCS) scores across all three sites where the Decision Aid (DA) was implemented; median DCS score of 6.3.
Addressing Trust and Transparency in Patient-Centered Clinical Decision Support: Findings from a Horizon Scan
PP-038 Decision Psychology and Shared Decision Making (DEC)
Priyanka J Desai1, Krysta Heaney Huls1, David F Lobach2, Aziz Boxwala2, Chris Dymek3, Michael I Harrison3, James Swiger3, Edwin Lomotan3, Dean F Sittig4, Prashila Dullabh1
1NORC at the University of Chicago, Bethesda, MD, USA
2Elimu Informatics, El Cerrito, CA, USA
3Center for Evidence and Practice Improvement, Agency for Healthcare Research and Quality, Rockville, MD, USA
4School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, USA
Purpose: Patient-centered clinical decision support (PC CDS) incorporates outcomes and measures that are meaningful to patients; we conducted a horizon scan to assess the current PC CDS landscape.
Methods: We convened a 22-member technical expert panel (TEP), conducted a scoping literature review, and interviewed 18 key informants. Using qualitative thematic synthesis, we synthesized 215 peer-reviewed and gray literature sources and analyzed transcripts to identify challenges to and opportunities for advancing PC CDS.
Results: Our qualitative synthesis identified lack of patient and clinician trust as a central challenge in PC CDS development and implementation. We found that fostering patient and clinician trust requires transparency in 1) the evidence base for PC CDS, 2) the digital tools that deliver PC CDS (e.g., applications, patient portals), and 3) the processes for capturing and using patient data. Key informants noted that patients and clinicians need to know that the evidence informing PC CDS is accurate, up to date, and strong enough to guide decision-making, and that there must be transparency about how evidence is translated for PC CDS. TEP discussions emphasized that promoting transparency in PC CDS also requires acknowledging potential bias in its evidence base: PC CDS algorithms are informed by research that may not capture data from underserved and underrepresented populations. Our synthesis identified integration of social determinants of health (SDOH) data into PC CDS algorithms as a potential way to address bias and, consequently, improve trust in PC CDS tools. However, using SDOH data and other patient-centric data (e.g., patient-generated health data) requires transparency about data collection and exchange practices. Key informants emphasized that fostering patient trust also requires meaningfully engaging patients throughout PC CDS development; this is critical to ensuring that PC CDS reflects their needs, desires, and expectations.
Conclusions: Addressing lack of patient and clinician trust in PC CDS is central to its adoption, use, and advancement. There are opportunities to develop best practices and standards to increase transparency in the PC CDS development process and improve confidence in decision support tools. To foster trust, particularly for patients, the CDS community must prioritize patient-centeredness in the design of tools and incorporate patient preferences in deliberate ways. Without trust in PC CDS, we will fall short of the aspiration of equipping patients and clinicians with evidence-based guidelines to inform their shared decision-making.
Keywords: Patient-Centered, Clinical Decision Support
“R” you getting this? Effects of evaluative labels and visual aids on people’s understanding and evaluation of the COVID-19 reproduction number
PP-039 Decision Psychology and Shared Decision Making (DEC)
Ruben Vromans1, Emiel Krahmer1, Marloes Van Wezel2, Nadine Bol1
1Department of Communication and Cognition, Tilburg University, Tilburg, The Netherlands
2Department of Social and Behavioral Sciences, Tranzo Scientific Center for Care and Welfare, Tilburg University, Tilburg, the Netherlands; Department of Reintegration and Community Care, Trimbos Institute, Utrecht, The Netherlands
Purpose: We tested whether communicating the COVID-19 reproduction number (i.e., the “r-number”) with or without evaluative labels and with or without visual aids would influence people’s understanding, perceived usefulness, and cognitive and affective evaluations of the statistic; whether, compared to baseline measurements, people’s perceived susceptibility to COVID-19 and adherence to preventive measures would change after exposure to the r-number; and whether these effects and changes would depend on people’s numeracy skills.
Methods: Participants from a representative sample of the Dutch population (N=1,168; 50.3% women; mean age=55.6, SD=17.3) received information about the COVID-19 r-number published on September 11, 2020 (r=1.38) on the Dutch corona dashboard. Participants received the r-number either with or without an evaluative label based on an action threshold, and either with or without a visual aid consisting of a population diagram displaying how a virus with a basic r-number of 2 might spread from person to person over three reproductive stages (Figure 1). Before and after the experiment, we measured perceived susceptibility to COVID-19 and adherence to five basic preventive measures. After the experiment, we also measured understanding, perceived usefulness, and affective (e.g., ”I think the information is frightening”) and cognitive (e.g., “I think the information on the corona dashboard is simple”) evaluations of the r-number. Numeracy was measured with an existing scale consisting of three open-ended mathematical questions.
Figure 1.
Dashboard communicating the actual COVID-19 reproduction number on September 11, 2020 (r=1.38) in the Evaluative Label (present) x Visual aid (present) experimental condition.
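The arithmetic behind the population diagram is simple geometric growth: each case produces r new cases per reproductive stage. This sketch reproduces the diagram’s logic for the illustrative basic r-number of 2, and applies equally (in expectation) to the reported r = 1.38:

```python
def new_cases_per_stage(r, stages, index_cases=1):
    """Expected number of NEW cases at each reproductive stage, starting
    from `index_cases` initial cases with reproduction number r.
    For r = 2 and one index case: 2, then 4, then 8 new cases."""
    cases = float(index_cases)
    out = []
    for _ in range(stages):
        cases *= r  # each current case infects r others, on average
        out.append(cases)
    return out
```

An r-number above 1 thus means each stage produces more new cases than the last, which is what the action-threshold label signals.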
Results: About two-thirds (67.1%) of our sample were familiar with the r-number, and 55.7% correctly interpreted it, with highly numerate people showing better understanding than less numerate people. None of the experimental conditions affected people’s understanding or their affective and cognitive evaluations. However, information about the r-number was perceived as more useful when presented with a visual aid, especially by highly numerate people. Finally, independent of condition, adherence to preventive measures was higher after seeing the r-number, an effect more pronounced among highly numerate people.
Conclusions: Although evaluative labels and visual aids did not facilitate people’s understanding of the r-number, our results suggest that the statistic may be used to motivate people to adhere to preventive measures to limit the spread of the coronavirus. At the same time, our results call for testing other (visual) displays and strategies for effectively communicating the r-number to the general public, especially to less numerate people.
Keywords: COVID-19, Dashboard, Evaluability, Reproduction number, Risk communication, Visual displays.
Broken bones and risks: How to communicate personalized predictions about life after an emergency to trauma patients
PP-040 Decision Psychology and Shared Decision Making (DEC)
Saar Hommes1, Ruben Vromans2, Nadine Bol2, Marjolijn Antheunis2, Emiel Krahmer2
1Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg University, The Netherlands & Department of Research and Development, The Netherlands Comprehensive Cancer Organization (IKNL), Utrecht, The Netherlands.
2Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg University, The Netherlands
Purpose: The aim of this study was twofold: (1) to test the effects of comparative risk information and message format on risk understanding, risk perceptions, and affective evaluations; and (2) to explore how personalized predictions are received by trauma patients.
Methods: We conducted two separate studies. In the first study, participants from the general population (N=983) took part in a 2x2 experiment in which we manipulated comparative risk information (performing better vs. performing worse than the average patient) and message format (text only vs. text and visuals). We tested whether the manipulations affected (1) risk understanding (e.g., verbatim: “what was your personal risk of X?”; gist: “was your risk of X better or worse than average?”), (2) risk perceptions (e.g., “how likely do you think X is?”), and (3) affective evaluation (e.g., “how frightening do you think X was?”). Our pre-registered hypotheses were that people who received worse-than-average risks would have higher risk perceptions (H1a) and affective evaluations (H1b) than people who received better-than-average risks. Additionally, we hypothesized that people receiving the information in text and visuals would have higher risk perceptions (H2a), risk understanding (H2b), and affective evaluations (H2c) than people in the text-only condition. We conducted MANOVAs (H1ab, H2ac) and logistic regressions (H2b), and also explored possible effects of numeracy and health literacy skills. In the second study, we conducted semi-structured interviews with patients (N=30) who had suffered an injury in the past; relevant themes will be identified using thematic analysis.
Results: For study 1, we did not find support for our hypotheses. In exploratory analyses, higher numeracy and health literacy were associated with better risk understanding, but overall, risk understanding was problematic. For study 2, we are currently analyzing the interviews; preliminary results show that patients want to receive personalized predictions, as such information is currently scarce in trauma care.
Conclusions: From both studies, we conclude that although understanding personalized predictions may be challenging, patients want to receive such information so they can plan for life after their emergency. Moreover, even when risks are relatively poor, people still want to receive such information and are not overly worried.
Keywords: Trauma care, tailoring, patient education, risk understanding, risk perception.
A framework supporting treatment decision-making in the era of novel diagnostics: The case of Prostate-specific membrane antigen imaging
PP-041 Decision Psychology and Shared Decision Making (DEC)
Zizi Elsisi1, Lucas Owens2, Matthew Schipper3, Ruth Etzioni4
1The Comparative Health Outcomes, Policy, and Economics (CHOICE) Institute, School of Pharmacy, University of Washington, Seattle, Washington, USA
2Division of Public Health Sciences, Fred Hutchinson Cancer Research Center, Seattle, Washington, USA
3Department of Biostatistics & Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan, USA
4Division of Public Health Sciences, Fred Hutchinson Cancer Research Center, Seattle, Washington, USA & Department of Biostatistics, University of Washington, Seattle, Washington, USA
Purpose: To update treatment benefits established in randomized controlled trials (RCTs) in order to generate evidence that supports treatment decision-making in the era of new diagnostics that alter disease prognosis and have recently entered practice.
Methods: We use prostate-specific membrane antigen (PSMA) imaging in patients with biochemically recurrent (BCR) prostate cancer (PCa) after radical prostatectomy (RP) as a case example for our conceptual framework. Before the emergence of PSMA imaging, patients were classified as at low, intermediate, or high risk of developing metastases according to their prostate-specific antigen (PSA) level prior to salvage therapy. Following a published randomized trial, patients receive either radiation therapy alone (RT) or radiation therapy combined with androgen deprivation therapy (RT + ADT) as salvage treatment tailored to their risk group. With PSMA imaging, however, PSA-defined risk groups are further partitioned into finer strata, in which each risk group can have either a positive (metastasis present) or negative imaging result. We use the trial and published literature to extract the survival benefits of treatments and the fraction of PSMA positivity in each risk group. In addition, the model requires an assumed hazard ratio (HR) comparing PSMA-positive versus PSMA-negative patients. We consider two scenarios: first, the HR is constant across treatments and risk groups and equal to 1.5; second, the HR increases with risk and is higher in patients receiving RT. We provide an interactive interface for experts to conduct multiple scenario analyses.
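One way to read the partitioning step is as a mixture constraint: within a risk group, the trial’s published survival probability S at a given time must equal f·S₋^HR + (1 − f)·S₋, where f is the fraction PSMA-positive, S₋ is PSMA-negative survival, and S₊ = S₋^HR under proportional hazards. The sketch below solves this constraint numerically; it reflects our reading of the framework’s arithmetic, not the authors’ implementation:

```python
def split_survival(overall_s, frac_positive, hr, tol=1e-9):
    """Partition a risk group's survival probability into PSMA strata.

    Assumes overall survival is a mixture of PSMA-positive and
    PSMA-negative patients with S_pos = S_neg ** hr (proportional
    hazards; hr > 1 means worse prognosis if PSMA-positive):
        overall_s = frac_positive * S_neg**hr + (1 - frac_positive) * S_neg
    Solves for S_neg by bisection (the mixture is increasing in S_neg).
    Returns (S_pos, S_neg).
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        value = frac_positive * mid**hr + (1 - frac_positive) * mid
        if value < overall_s:
            lo = mid
        else:
            hi = mid
    s_neg = (lo + hi) / 2
    return s_neg**hr, s_neg
```

With f = 0 the solver recovers the published survival unchanged, and with hr > 1 the PSMA-positive stratum always sits below the PSMA-negative one, matching the framework’s premise that imaging positivity worsens prognosis.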
Results: In both scenarios, RT + ADT yields greater survival benefit in the intermediate- and high-risk groups, but not in low-risk patients. In the second scenario, the treatment effect is larger in PSMA-positive than in PSMA-negative patients (Figure 1).
Figure 1.
A: Estimated Overall Survival by Risk Group, Treatment Type and PSMA-status when HR for PSMA+ versus PSMA- equal to 1.5 across risk groups and treatments. B: Estimated Overall Survival by Risk Group, Treatment Type and PSMA-status when HR for PSMA+ versus PSMA- varies across risk groups and treatments.
Conclusions: It is possible to update previously published treatment benefits to provide evidence for treatment decision-making, given information on how a novel diagnostic marker affects prognosis. Our modeling framework can be the basis for future cost-effectiveness analyses that can further guide decision makers on the best use of novel diagnostics that partition established prognostic groups into finer strata.
Keywords: Prostate-specific membrane antigen imaging, Risk stratification, Updating existing treatment benefits, Novel diagnostics, Evidence-based decision making, personalized treatment.
Feasibility and Cost of Hepatitis C Elimination in Rwanda: Results of the Global Hepatitis C Elimination Tool
PP-043 Health Services, Outcomes and Policy Research (HSOP)
Alec Aaron1, Huaiyang Zhong1, Lindsey Hiebert2, Janvier Serumondo3, Gallican N. Rwisabira3, John Ward2, Jagpreet Chhatwal4
1Massachusetts General Hospital Institute for Technology Assessment, Boston, MA, USA
2Coalition for Global Hepatitis Elimination, Task Force for Global Health, Decatur, GA, USA
3Ministry of Health, Rwanda Biomedical Center, Kigali, Rwanda
4Massachusetts General Hospital Institute for Technology Assessment, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
Purpose: The World Health Organization (WHO) established an ambitious set of goals for countries to work towards eliminating the hepatitis C virus (HCV) by 2030. Now, countries need tools to assess their progress towards elimination and develop strategies to address the remaining gaps. Our objective was to develop a global Hep C Elimination Tool that supports budget-based planning by allowing countries to simultaneously evaluate the feasibility, costs, and health outcomes of multiple screening and treatment policy scenarios.
Methods: We worked with the Ministry of Health of Rwanda to obtain programmatic and epidemiologic data, which we combined with peer-reviewed literature to adapt a previously developed microsimulation model simulating the HCV epidemic in Rwanda through 2050. Users can interact with the model online by changing key parameters, such as the testing algorithm, testing and treatment costs, and elimination targets, directly on the tool’s interface. Rwanda was selected to demonstrate the utility of the tool, given the advanced stage of its elimination campaign and the availability of high-quality data.
Results: The majority of WHO’s HCV elimination targets, including incidence reduction, diagnosis coverage, and treatment coverage, are achievable by 2023, while mortality reduction is achievable by 2027. A strategy with an annual 30% screening and 100% treatment rate attains elimination the soonest. This translates to screening approximately 2 million people in 2022, followed by 1 million in 2023 (Figure). Compared to 2015, this program will reduce HCV incidence by 98% and HCV-related deaths by 73% in 2030, surpassing the WHO’s goals of 80% and 65%, respectively. The screening, diagnosis, and treatment costs of this program from 2022 through 2050 will be $12.2 million (USD). An additional $17.0 million would be required for treatment of liver disease complications. Compared to having never initiated an elimination program, this strategy becomes cost-saving in 2030 and would save the health system $17.6 million between 2015 and 2050.
Conclusions: Rwanda’s program demonstrates that HCV elimination is feasible and that investing in hepatitis C elimination can be cost-saving, even for countries with limited resources. This case also articulates the capability of the interactive Hep C Elimination Tool to assist policymakers with the planning and management of tailored, country-specific HCV elimination strategies. Additional countries, including those with less established elimination programs, will be added to the tool.
Keywords: Hepatitis C Elimination, Budget-Based Health Program Planning, Online Modeling Tool
Results Figure
Value of Information: Consolidated Health Economic Evaluation Reporting Standards – A Checklist Extension in Development
PP-045 Health Services, Outcomes and Policy Research (HSOP)
Annisa Siu1, Natalia Kunst2, Anna Heath1, Doug Coyle3, Mike Drummond4, Sabine Grimm5, Janneke Grutters6, Don Husereau3, Erik Koffijberg7, Claire Rothery4, Ed Wilson8
1Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Canada
2Norwegian Medicines Agency, Oslo, Norway
3School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada
4Centre for Health Economics, University of York, Heslington, England
5Department of Epidemiology, Maastricht University Medical Centre+, Maastricht, Netherlands
6Department for Health Evidence, Radboud University Medical Centre, Nijmegen, Netherlands
7Department of Health Technology & Services Research, University of Twente, Enschede, Netherlands
8Department of Health Economics and Health Policy, University of Exeter, Exeter, England
Purpose: This Delphi study aimed to support the development of reporting guidance to ensure that the conduct and reporting of value of information (VOI) analyses is transparent, reproducible and of good quality, allowing thorough critique.
Methods: A comprehensive literature review generated a list of candidate reporting items that were preliminarily reviewed by experts in VOI methods. These candidate items then underwent a Delphi study with key stakeholders. The Delphi process consisted of three rounds of web-based surveys in which participants rated items on a 9-point Likert scale to indicate their relevance when reporting the minimal, essential information about VOI methods. Participants also provided a rationale for their scores and commented on the wording of items. Aggregate scores and comments informed amendments to candidate items for Rounds 2 and 3.
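The study’s actual score thresholds appear only in Figure 1. As a purely hypothetical illustration of the kind of retention rule Delphi studies commonly apply to 9-point ratings, the sketch below keeps an item when at least 70% of respondents rate it 7–9; the 70% cut-off is our assumption, not the threshold used in this study:

```python
def retain_item(ratings, threshold=0.70):
    """Hypothetical Delphi retention rule: keep the item if at least
    `threshold` of respondents rate it 7-9 on the 9-point Likert scale.
    The 70% cut-off is illustrative only, not the study's Figure 1 rule."""
    if not ratings:
        raise ValueError("no ratings provided")
    high = sum(1 for r in ratings if 7 <= r <= 9)
    return high / len(ratings) >= threshold
```

Rules of this shape explain how a round can end with no exclusions: every item simply cleared the agreed cut-off.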
Results: Out of 34 Delphi registrants, 30 responded in Round 1, 25 responded in Round 2, and 24 responded in Round 3. Overall, no items were excluded from the checklist based on each round’s score thresholds (Figure 1). All 26 items, alongside their split, merged or edited iterations, will proceed to a two-day consensus meeting for detailed review to finalize the reporting guidance.
Figure 1.
Delphi Study Thresholds and Results: Rounds 1 to 3
Conclusions: The results of the Delphi study will be discussed at a consensus meeting with select experts to determine the minimal, essential information that should be reported from a VOI analysis. These recommendations will be published as a user-friendly checklist. This will support the dissemination of VOI results and ensure that VOI analyses are reported and conducted in a transparent, rigorous manner.
Keywords: value of information, health economic evaluation, reporting guideline
Evaluating economic model data sources and inputs: a thematic analysis from stakeholder interviews
PP-046 Health Services, Outcomes and Policy Research (HSOP)
Bansri Desai, Eleanor M. Perfetto
Department of Pharmaceutical Health Services Research, University of Maryland School of Pharmacy, Baltimore, USA
Purpose: Evaluators of economic models, such as payer and health care decision-makers, lack guidance or criteria on the appropriateness of the key parameters, including data sources and inputs (DSIs). The objective of this qualitative analysis was to identify a multi-stakeholder derived list of key attributes to inform the development of a rubric for decision-makers to “judge” the appropriateness of key parameters in economic models.
Methods: Semi-structured, virtual (web-conference) interviews were conducted between November 2021 and March 2022 with a convenience sample of experts from three key stakeholder groups: economic modeling experts, payer decision-makers, and patient advocates. The interview guide included questions on how participants approach evaluation of DSIs, what information is needed to evaluate DSIs, and characteristics of appropriate DSI identification and selection. Interviews were recorded and transcribed, and interview data were thematically analyzed using a combination of inductive and deductive approaches.
Results: A total of 13 interviews were conducted (5 economic modeling experts, 4 health care decision-makers, and 4 patient advocates). Three overarching themes emerged across interviewees regarding assessment of DSIs: transparency (n=12), relevance (n=13), and credibility (n=12). Participants identified several attributes for each theme that may be used by decision-makers when evaluating economic models for decision-making purposes (Table 1).
Table 1.
Identified Themes and Attributes
| Theme | Attributes |
|---|---|
| Transparency | • Documentation of key data input values • Citation of original data sources • Documentation of DSI identification & selection processes (e.g., search strategies, other DSIs considered) |
| Relevance | • Relevance to target decision population • Relevance to decision context (e.g., health care system, actual clinical practice) • Use of sufficiently up-to-date DSIs • Relevance to patients’ lived experiences |
| Credibility | • Use of well-designed, fit-for-purpose data & data sources • Use of reasonable search strategies to identify best available DSIs • Face validity checks of key DSIs with experts and/or patients • Discussion of DSI limitations and/or rationale for DSI selections |
Key: DSIs = data sources and inputs
Conclusions: While no formal guidelines exist for the evaluation of the appropriateness of key parameters in economic models, participants identified attributes of transparency, relevance, and credibility as consistent evaluation criteria. Our future work will leverage these attributes to develop a systematic and consistent user-friendly rubric for evaluating key parameters in economic models. Publishers of economic models, such as value assessment/health technology assessment organizations, may also leverage the upcoming rubric to evaluate the appropriateness of key input parameters in economic models.
Keywords: data sources, data inputs, economic models, decision-makers
Perceived need for mental health treatment and specialty substance use treatment among people in recovery from substance use disorders
PP-048 Health Services, Outcomes and Policy Research (HSOP)
Bridget B. Hayes
Cornell University
Purpose: Substance use disorders (SUD) are chronic illnesses requiring ongoing treatment to prevent relapse (U.S. Department of Health and Human Services, 2016), yet it is unknown how frequently people in sustained remission in the United States receive such treatment, or which types of treatment members of this group perceive as necessary.
Methods: Data were drawn from the nationally-representative National Survey on Drug Use and Health datasets for years 2018-2020. Participants (n = 11,827) were aged 18 or older, did not meet criteria for DSM-IV abuse or dependence of alcohol or drugs during the prior year, and a) endorsed being in recovery from alcohol or substance addiction and/or b) reported lifetime receipt of treatment for alcohol or substance addiction. Logistic regressions were performed to examine associations between perceiving need for treatment and demographic characteristics, types of treatment received, comorbid mental illness, and asymptomatic illicit drug use. Survey year was also included as a dichotomous covariate (i.e. before 2020 vs. during 2020) to illustrate effects of the COVID-19 pandemic. Analyses were performed with the “survey” package in R.
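The abstract's survey-weighted logistic regressions were fit in R with the "survey" package; as a rough illustration of what such a model estimates, here is a minimal weighted logistic regression via Newton's method (iteratively reweighted least squares) on made-up data. This is a sketch only: it applies survey weights to the point estimates but omits the design-based variance corrections that a true complex-survey analysis performs.

```python
import numpy as np

def weighted_logit(X, y, w, n_iter=25):
    """Weighted logistic regression by Newton's method (IRLS).

    A simplified stand-in for R's survey::svyglm: weights enter the
    score and Hessian, but complex-survey variance adjustment is omitted.
    Returns coefficients with the intercept first.
    """
    X = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted probabilities
        grad = X.T @ (w * (y - p))              # weighted score vector
        W = w * p * (1.0 - p)                   # IRLS working weights
        hess = (X * W[:, None]).T @ X
        beta = beta + np.linalg.solve(hess, grad)
    return beta
```

The fitted slope coefficients are log-odds ratios for perceiving a need for treatment under the supplied weights; the covariate names and data here are hypothetical.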
Results: Whereas 12.4% of people in recovery reported receiving specialty SUD treatment during the prior year, only 1.1% reported that they needed such treatment, and only 3.1% of those who received SUD specialty treatment perceived it was needed. In contrast, 8.9% reported insufficient treatment for mental health concerns, in spite of already high levels of mental-health treatment utilization (27.7%). Both types of treatment were more likely to be perceived as necessary among younger people and those reporting past-year asymptomatic use of illicit drugs (see Tables 1 and 2). Commonly reported reasons for not receiving mental health treatment perceived as necessary included high costs, believing the problem should be handled on one’s own, and not knowing where to find treatment (see Table 3).
Table 1.
This is the first of three tables to be shown on the poster
Conclusions: Failure to receive ongoing treatment for SUD during periods of remission is expected to increase probability of relapse to symptomatic substance use (U.S. Department of Health and Human Services, 2016). Nonetheless, such treatment is a frequently unmet need in the United States. Since treatment is more effective when it is perceived as necessary (Longshore & Teruya, 2006), focus should be placed on delivering general mental health care rather than SUD-specialty care.
Keywords: Perceived need for treatment, substance use disorders, mental health, recovery, health services, health policy
Clinical Burden of Sickle Cell Disease for Patients With Recurrent Vaso-Occlusive Crises Among Medicaid and Commercially Insured Individuals
PP-049 Health Services, Outcomes and Policy Research (HSOP)
Chuka Udeze1, Kristin Evans2, Yoojung Yang1, Paula Smith2, Timothy Lillehaugen2, Janna Manjelievskaia2, Urvi Mujumdar1, Nick Li1, Biree Andemariam3
1HEOR, Vertex Pharmaceuticals Incorporated, Boston, MA, USA
2Research/Data Analytics, IBM Watson Health, Cambridge, MA, USA
3New England Sickle Cell Institute, University of Connecticut Health Center, Farmington, CT, USA
Purpose: To investigate the differences in clinical burden for patients with sickle cell disease (SCD) with recurrent vaso-occlusive crises (VOCs) with commercial vs. Medicaid insurance.
Methods: This retrospective analysis used IBM® MarketScan® Commercial and Medicaid Multi-State Databases to identify patients with ≥1 inpatient or ≥2 outpatient claims for SCD between March 1, 2010, and March 1, 2019. Patients with ≥2 diagnosed VOCs per year in any 2 consecutive years after and including the first qualifying SCD claim were included in this analysis and considered to have recurrent VOCs. VOC was defined as an inpatient or outpatient claim of acute chest syndrome, priapism, SCD crisis, or splenic sequestration; at least 3 days between VOC claims were required to be considered discrete VOCs. The index date was the second VOC in the second consecutive year. Patients were required to have ≥24 months’ continuous enrollment pre-index and ≥12 months’ post-index and were followed from the index date to the end of enrollment, death, or the end of the study period (March 1, 2020), whichever came first. Patients with a hematopoietic stem cell transplant or sickle cell trait claims during baseline or follow-up were excluded. Demographics were assessed at index date; VOC frequency, treatment patterns, and clinical complications were summarized during follow-up and descriptively compared across insurance type.
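The rule that claims must fall at least 3 days apart to count as distinct VOCs can be sketched as a small helper. This is an illustrative reconstruction, not the study's code; in particular, measuring the gap from the most recent claim (as below) is one of several ways the rule could be operationalized.

```python
from datetime import date

def count_discrete_vocs(claim_dates, min_gap_days=3):
    """Collapse VOC claim dates into discrete crises.

    Claims closer than `min_gap_days` to the previous claim are treated
    as part of the same VOC episode.
    """
    if not claim_dates:
        return 0
    dates = sorted(claim_dates)
    episodes = 1
    previous = dates[0]
    for d in dates[1:]:
        if (d - previous).days >= min_gap_days:
            episodes += 1  # gap large enough: a new discrete VOC
        previous = d
    return episodes
```

For example, claims on January 1, 2, and 5 would count as two discrete VOCs, since the first two fall within the 3-day window.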
Results: A total of 35,791 patients with SCD were identified: 3,416 met the inclusion/exclusion criteria, 2,737 had Medicaid insurance, and 679 had commercial insurance. Mean patient age at the index date was 17.5 years (mean follow-up: 4.4 years) in patients with Medicaid and 25.3 years (mean follow-up: 3.1 years) in those with commercial insurance. Compared to patients with commercial insurance, patients with Medicaid had a higher mean number of VOCs per patient per year (PPPY) (5.3 vs. 4.0), outpatient prescriptions PPPY (36.9 vs. 27.7), ER visits PPPY (5.3 vs. 4.0), and opioid claims PPPY (10.1 vs. 8.3).
Conclusions: There were significant clinical impacts among patients with SCD with recurrent VOCs. This study indicates potential health disparities in the US SCD patient population, as demonstrated by the higher number of VOCs and healthcare utilization among the Medicaid population than the commercially insured population.
Keywords: Sickle cell disease, Equity, Complications, Healthcare resource utilization, Payer
The Impact of the Safe Intersections Program in Mexico City: A Longitudinal Evaluation
PP-050 Health Services, Outcomes and Policy Research (HSOP)
David Ulises Garibay-Treviño1, Yadira Peralta2, Jaime Sainz-Santamaría3, Fernando Alarid-Escudero3
1Center for Research and Teaching in Economics (CIDE)
2Department of Economics, Center for Research and Teaching in Economics (CIDE)
3Division of Public Administration, Center for Research and Teaching in Economics (CIDE)
Purpose: Traffic injuries are one of the leading causes of human and economic losses worldwide. Road safety interventions intend to reduce these burdens. We aim to evaluate Mexico City’s Safe Intersections program’s impact on the number and fatality of vehicle-pedestrian crashes. This program comprised a set of road infrastructure modifications to improve road safety for all users.
Methods: We used georeferenced data on road traffic events, mainly pedestrian-vehicle collisions, from 65 intervened intersections between 2017 and 2020, using a 35-meter radius buffer around each intersection. We constructed a bimonthly longitudinal dataset of the pedestrian-vehicle crashes at each intersection during the study period. We used propensity score matching to create a control group and then estimated the impact of the program, which was implemented during 2019, with a multiphase longitudinal model that defines each intersection's program application date as a change point.
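The 2:1 control-to-treated matching step can be illustrated with a greedy nearest-neighbour matcher on precomputed propensity scores (the scores themselves would come from a logistic model of treatment on intersection covariates). The caliper value below is an arbitrary illustration, and the study's actual matching algorithm may differ.

```python
def match_controls(treated_ps, control_ps, ratio=2, caliper=0.05):
    """Greedy nearest-neighbour matching on the propensity score.

    Returns {treated_index: [matched control indices]}; each control is
    used at most once, and candidates farther than `caliper` from the
    treated unit's score are rejected.
    """
    available = list(enumerate(control_ps))   # (original index, score)
    matches = {}
    for t_idx, t_ps in enumerate(treated_ps):
        chosen = []
        for _ in range(ratio):
            if not available:
                break
            j, (c_idx, c_ps) = min(
                enumerate(available), key=lambda item: abs(item[1][1] - t_ps)
            )
            if abs(c_ps - t_ps) > caliper:
                break  # nearest remaining control is outside the caliper
            chosen.append(c_idx)
            available.pop(j)  # each control matched at most once
        matches[t_idx] = chosen
    return matches
```

After matching, balance on crash-risk covariates would be checked (as the authors did with t-tests) before fitting the longitudinal model.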
Results: The matching algorithm used a 2:1 ratio of control to treated intersections in the model. We used t-tests to check for imbalance between groups in the variables affecting the risk of a pedestrian-vehicle crash and found no statistically significant differences. In the intersections where the program was applied, the average number of pedestrian-vehicle crashes fell from 0.55 events per bimester before implementation to 0.36 after. The program reduced the bimonthly rate of pedestrian-vehicle crashes at treated intersections by 0.003, 95% CI [-0.0127, 0.0204], and reduced the proportion of lethal pedestrian-vehicle crashes by 0.001, 95% CI [-0.0004, 0.003], each bimester between 2017 and 2020.
Conclusions: The Safe Intersections program effectively reduced the number of pedestrian-vehicle collisions and had a smaller effect on reducing their fatality. Further studies are needed to evaluate the cost-effectiveness of this program.
Keywords: Mexico City, longitudinal model, propensity score matching, road safety, open data, georeferenced data.
Predicted trajectory of average pedestrian-vehicle crashes per group.
This figure shows the predicted trajectory of the bi-monthly number of pedestrian-vehicle crashes based on the longitudinal model results.
Secure messaging for medical advice increased over the last 2 years and is associated with physicians working outside of scheduled hours
PP-051 Health Services, Outcomes and Policy Research (HSOP)
Kathryn A. Martinez, Michael B. Rothberg, Elizabeth R. Pfoh
Cleveland Clinic Center for Value-Based Care Research, Cleveland, OH, USA
Purpose: Secure messaging through patient portals (e.g. MyChart) increases access to medical advice. Messaging is free to the patient, and physicians do not receive reimbursement. Our objective was to characterize the change in volume of MyChart messages to physicians over a two-year period and to describe the association between message volume and physician time spent outside of working hours on the electronic health record (i.e. Epic).
Methods: This study includes data on primary care physicians from the Cleveland Clinic Health System and their patients from January 2019 through December 2021. We assessed the number of messages from patients seeking medical advice received by physicians per quarter. We assessed time (in hours) spent on Epic outside of scheduled working hours. We used mixed-effects linear regression to assess the change in volume of medical advice messages over the study period. We then used mixed-effects linear regression to determine the association between message volume and physician characteristics (i.e. gender and specialty). Finally, we used mixed-effects linear regression to assess the association between message volume and time spent on Epic outside of working hours. Models accounted for clustering by physician and quarter, and adjusted for outpatient volume, patient Charlson score, and age.
Results: The sample included 153 physicians and 319,382 patients. Adjusting for outpatient volume, the mean number of medical advice messages increased 82% from 373 in Q1 of 2019 to 677 in Q4 of 2021 (p<0.001). Physicians averaged 44 hours on Epic outside of work, which was similar at the start and end of the study period (46 hours versus 45 hours). In the regression model, compared to internists, family physicians averaged 144 additional MyChart messages per quarter (95% CI: 38-250). There was no difference by physician gender. Each medical advice message was associated with half a minute of additional time spent on Epic outside of working hours (Est: 0.48 minutes; 95% CI: 0.24-0.54). Compared to physicians who received the lowest decile of MyChart messages, those in the highest decile spent 12 additional hours per quarter on Epic outside of working hours (95% CI: 2.3-21.5).
Conclusions: Over a two-year period, messaging volume increased substantially. Message volume was associated with time spent outside of working hours on Epic, with physicians who received the most messages working more than an additional day outside of hours every quarter.
Keywords: secure messaging, physician workforce, patient physician communication
A Small Group of Pediatricians Receive the Majority of Secure Messages
PP-052 Health Services, Outcomes and Policy Research (HSOP)
M. Charmaine Tang, Kathryn A. Martinez, Michael B. Rothberg, Elizabeth R. Pfoh
Cleveland Clinic Community Care, Cleveland Clinic, Cleveland, United States
Purpose: Secure messaging allows patients to seek medical advice. This is convenient for patients, but the work is uncompensated. We used group-based modeling to identify variation in the volume of messages pediatricians received, factors associated with the variation, and whether receiving more messages was associated with time spent on the electronic health record (EHR) outside work hours.
Methods: This study includes data on pediatricians from the Cleveland Clinic Health System (CCHS) and their patients from January 2019 through December 2021. We used group-based modeling, a statistical method for analyzing trajectories, to categorize pediatricians into groups based on their volume of messages asking for clinical advice. We used multinomial regression (clustered by quarter) to assess the association of years of service, mean visits per quarter, patient age and insurance, and time spent responding to messages with group membership. We used mixed-effects linear regression to determine the association between group membership and time spent on the EHR outside of working hours. The model included time as a random effect and visit volume, patient age and insurance, and years of service as fixed effects.
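Group-based trajectory modeling fits latent trajectory classes over time. As a rough intuition for how physicians separate into low/middle/high message-volume groups, here is a tiny 1-D k-means on per-physician mean quarterly volumes; this is a deliberate simplification, not the abstract's method, which models full quarterly trajectories rather than a single mean per physician.

```python
import numpy as np

def kmeans_1d(values, k=3, n_iter=50):
    """Tiny 1-D k-means: a simplified stand-in for trajectory grouping.

    Initializes centers at spread-out quantiles, then alternates between
    assigning each value to its nearest center and recomputing centers.
    Returns (labels, centers).
    """
    values = np.asarray(values, dtype=float)
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(n_iter):
        # assign each physician to the nearest group center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers
```

With the abstract's group means (roughly 50, 147, and 246 messages per quarter), such a procedure would cleanly separate the three volume tiers.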
Results: The sample included 95 physicians and 527,399 visits. Three groups emerged: 62% of pediatricians were categorized into the low group (mean of 50 messages per quarter), 32% into the middle group (mean of 147 messages per quarter), and 6% into the high group (mean of 246 messages per quarter). Compared to the high group, pediatricians in the low group had more patients with government insurance (29% versus 15% in the high group, p<0.01), older patients (mean 6 years versus 5 years, p<0.01), fewer visits per quarter (441 versus 627, p<0.01) and fewer years of service (11 versus 16, p<0.01). There was no difference in the average time physicians took to respond to messages in the low versus high message groups. Compared to pediatricians in the low group, the middle group spent five additional hours on the EHR outside of work hours each quarter, and the high group spent eight (p<0.01).
Conclusions: A small percentage of pediatricians were in the high messaging group, and those pediatricians received five times the number of messages per quarter as those in the low group. Receiving more messages was associated with working more on the EHR outside of hours.
Keywords: Secure messaging; Pediatricians; Primary Care; Workload
Argentine Valuation of the EQ-5D-3L Health States: An Experienced Utility Approach
PP-053 Quantitative Methods and Theoretical Developments (QMTD)
Jose Felipe Montano Campos
CHOICE Institute, University of Washington
Purpose: A set of societal health-related quality of life (HR-QOL) values needs to be estimated to inform country-specific health policy decisions. The traditional method (the original United Kingdom protocol) estimates country-specific HR-QOL through an experiment in which respondents rate hypothetical health states assigned to them, eliciting values for the 243 health states. I propose a new method that estimates societal HR-QOL values from observational data, in which I observe the experienced utility of respondents' own health states rather than the anticipated utility elicited by the traditional method.
Methods: I use the Argentinian National Survey of Risk Factors (NSRF), which contains a representative sample of the Argentine population, each individual's EQ-5D-3L health state, and their general-health score on the Visual Analog Scale, to estimate societal HR-QOL values for Argentina. Using this information, I predict the HR-QOL for the 243 health states for Argentina with two different methodologies: an Ordinary Least Squares (OLS) model, as in the traditional method, and a Random Forest Regression (RFR) supervised learning model. I compute the Mean Absolute Error (MAE) to assess which methodology fits the data better, and I compare these results with the existing HR-QOL values for Argentina derived using the traditional method. I also conducted a sensitivity analysis of how different risk factors impact individuals' HR-QOL under each set of derived health state values.
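The OLS arm of the comparison and the mean-absolute-error criterion can be sketched directly; the random-forest comparator would come from a library such as scikit-learn and is omitted here, and the data in the example are illustrative rather than the survey's.

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares coefficients (intercept first)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def ols_predict(X, beta):
    """Predicted values from an ols_fit() coefficient vector."""
    X1 = np.column_stack([np.ones(len(X)), X])
    return X1 @ beta

def mean_absolute_error(y_true, y_pred):
    """MAE = mean(|prediction - value|), the abstract's fit criterion."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
```

In the study, the same MAE criterion is computed for the OLS model, the RFR model, and the existing traditional-method values, and the model with the lowest MAE is preferred.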
Results: The results using experienced utility data are as expected. In both estimations, health state 11111 has the highest and health state 33333 the lowest quality of life. The ranking of health states by predicted HR-QOL is consistent with the idea that states closer to the worst health state rank lower. The RFR model has the lowest MAE, and compared to the existing HR-QOL values, my estimations show the same tendency but at different levels. The impacts of risk factors on HR-QOL have the same sign regardless of the utility measurement; however, the magnitude is on average larger for the experienced utility approach than for the anticipated one.
Conclusions: I use an experienced utility approach to estimate Argentine societal HR-QOL values. My results are not identical to, but are consistent with, those of the traditional method.
Keywords: Experienced utility, anticipated utility, Health related Quality of Life
Health State Values
Plot of the assigned HR-QOL for each of the 243 health states. In blue, the estimations of the newly proposed method using OLS (New Estimation: OLS); in orange, the estimations of the newly proposed method using Random Forest Regression (New Estimation: RFR); and in green, the estimations calculated by Augustovski et al. (2009) using the traditional method (Argentina – Health States).
Mean Absolute Error
| Method | Mean Absolute Error (\|Prediction − Value\|) |
|---|---|
| Argentine Values (Traditional Method) | 0.02 |
| New Method (OLS) | 0.04 |
| New Method (RFR) | 0.006 |
Random Forest Regression has the lowest mean absolute error, indicating that it is the best-performing model among the three.
The Projected Clinical Effects of Alternative Pregnancy Testing Requirements for Individuals on Isotretinoin Acne Therapy
PP-054 Health Services, Outcomes and Policy Research (HSOP)
Gabriella V. Alvarez1, Camille Robinson1, Ethan D. Borre1, Evan Myers2, Matilda Nicholas3
1School of Medicine, Duke University, Durham, USA
2Division of Women's Community and Population Health, Department of Obstetrics & Gynecology, Duke University School of Medicine, Durham, USA
3Department of Dermatology, Duke University Hospital, Durham, USA
Purpose: Isotretinoin is a highly effective therapy for acne but carries high teratogenic risk and currently requires monthly lab-based pregnancy testing to limit fetal exposures. We sought to project the clinical outcomes of alternative pregnancy testing strategies for women on isotretinoin acne therapy.
Methods: We used a Markov microsimulation model to simulate a cohort of 20-year-old females initiating isotretinoin therapy over six months. We simulated four strategies: 1) No testing; 2) Monthly lab-based urine pregnancy testing; 3) Monthly at-home low-end urine pregnancy testing; and 4) Monthly at-home high-end urine pregnancy testing. Monthly pregnancy probability was conditioned on contraceptive method, which included those required under the current FDA-required Risk Evaluation and Mitigation System (REMS) for isotretinoin. Sensitivity/specificity for tests were obtained from the literature: lab test 100%/100%, at-home high-end test 97%/99%, and at-home low-end test 41%/99%. Embryonic isotretinoin exposure was calculated as the number of days from conception to pregnancy detection based on either a positive pregnancy test result or pregnancy symptoms.
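The structure of a monthly microsimulation like this can be sketched as follows. The parameters are illustrative placeholders, not the study's inputs, and the real model additionally tracks contraceptive method, hCG detectability, symptom onset, and days of fetal isotretinoin exposure.

```python
import random

def simulate_cohort(n=10_000, months=6, p_preg=0.01, sensitivity=0.97, seed=1):
    """Toy monthly microsimulation of pregnancy onset and test detection.

    Each simulated woman faces a monthly pregnancy probability; if she
    becomes pregnant, the monthly test detects it with probability
    `sensitivity`, and her simulation then stops (therapy discontinued).
    Returns (pregnancies, detected_by_test).
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    pregnancies = detected = 0
    for _ in range(n):
        for _month in range(months):
            if rng.random() < p_preg:
                pregnancies += 1
                if rng.random() < sensitivity:
                    detected += 1
                break  # exit the monthly loop once pregnant
    return pregnancies, detected
```

Varying `sensitivity` between the lab (100%), high-end home (97%), and low-end home (41%) values reproduces the qualitative pattern in the results: detection before symptoms falls sharply only for the low-end test.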
Results: Six-month pregnancy rates ranged from 0.05-5.9%, depending on contraceptive method, and 50.0% for no contraception. In women who became pregnant, lab testing detected 27.2% prior to the onset of symptoms, low-end home testing detected 11.2%, and high-end home testing detected 26.5%. In the absence of testing, the mean time from conception to detection was 17.5 days. Lab testing and high-end home testing reduced this mean time by 1.4 days, while low-end testing reduced it by 0.6 days. The most common cause of “false negative” results was testing prior to detectable levels of hCG in urine; among those with detectable levels, false negative rates were 1.5% for high-end home tests and 14% for low-end tests. Both home tests had cumulative 6-month false positive rates of 6.5%. Results were most sensitive to variations in pregnancy test sensitivity and probability of experiencing symptoms of pregnancy.
Conclusions: Lab-based testing was the safest strategy in reducing the number of days of fetal isotretinoin exposures. However, monthly at-home testing with high-end pregnancy tests had similar effectiveness. For both at-home pregnancy test strategies, 6.5% of people had a false positive test and were removed from isotretinoin therapy early. This model suggests at-home testing may be a safe alternative testing method for women on isotretinoin therapy.
Keywords: Dermatology; pregnancy testing; decision modeling
Table 1.
Clinical Outcomes
Regional and socioeconomic variation, surgical interventions, and costs associated with pediatric palliative care
PP-055 Health Services, Outcomes and Policy Research (HSOP)
Harold J Leraas, Diego Schaps, Krista Haines, Elisabeth T Tracy, Ryan M Antiel
Duke University Department of Surgery, Durham, North Carolina, USA
Purpose: Palliative care provides crucial end of life counseling and symptom management for critically ill children and their families. Pediatric surgeons and interventionalists play an important and evolving role in pediatric palliative care to guide families deciding which interventions to pursue for their child’s well-being and comfort. We sought to assess the current utilization and costs associated with interventions as part of pediatric palliative hospitalizations.
Methods: We reviewed patients <21 years of age with an ICD-10 code indicating an encounter for palliative care in the 2019 Kids' Inpatient Database (KID). We evaluated demographics, treatment center information, disease classification, healthcare utilization, and cost associated with hospitalizations that utilized palliative care services. Finally, we compared these variables between patients who survived hospitalization and those who did not. To address possible survival bias, we normalized costs by length of stay to compare costs per day of inpatient care between survivors and non-survivors.
Results: Of the 9,780 patients with a palliative care encounter among 3,089,283 hospitalizations, the majority were from the Southern U.S. (38.9%), and almost all encounters occurred at urban teaching hospitals (97%). Nearly half of the patients seen for a palliative care encounter were Caucasian (48.1%); 56.3% of patients were covered by Medicaid and another 34.9% by private insurance. The most common diagnosis categories for patients seen by palliative care included perinatal diagnoses (28.6%), respiratory disease (13.9%), and cancer (11.1%). Children who died during their hospitalization underwent a greater number of procedures than those who survived (non-survivors 4 [IQR 1-8] vs. survivors 2 [IQR 1-5], P=0.0012). Non-survivors had shorter lengths of stay (non-survivors 4 days [IQR 1-15] vs. survivors 6 days [IQR 4-24], P<0.0001), but greater daily expenditure (non-survivors $5,723 per day [IQR $3,396-$9,324] vs. survivors $3,592 per day [IQR $2,323-$5,327], P<0.0001).
Conclusions: Patients with palliative care encounters who die during their hospitalization have higher daily costs of care, reflecting more aggressive treatment modalities and greater use of invasive procedures. There are notable U.S. regional differences in utilization of palliative care services. It is important for pediatric surgeons and interventionalists, in partnership with palliative care teams, to address costs and treatment burden to help parents chose a treatment course that aligns with their values and hopes.
Keywords: Palliative Care, Pediatric Critical Care, End of Life Care
Comparison of Demographics, Healthcare Utilization, and Cost Between Patients Who Survived Hospitalization with Palliative Care Encounters and Those Who Did Not
| Variable | Survived Hospitalization (N=6173) | Terminal Hospitalization (N=3607) | P-Value |
|---|---|---|---|
| Age | 6 (0-14) | 0 (0-8) | P<0.0001 |
| Race | | | P<0.0001 |
| Caucasian | 49.5% (2933) | 45.4% (1481) | |
| African American | 18.2% (1076) | 20.2% (657) | |
| Hispanic | 21.9% (1296) | 20.9% (682) | |
| Hospital Region | | | P<0.0001 |
| Northeast | 9.4% (578) | 12.6% (455) | |
| Midwest | 28.7% (1770) | 22.9% (825) | |
| South | 36.9% (2277) | 41.0% (1478) | |
| West | 25.1% (1548) | 23.5% (849) | |
| Hospital Location and Teaching Status | | | P<0.001 |
| Rural | 0.5% (28) | 1.0% (36) | |
| Urban Nonteaching | 1.3% (78) | 3.9% (140) | |
| Urban Teaching | 98.3% (6067) | 95.1% (3431) | |
| Urban/Rural Residence | | | P=0.0038 |
| Large Urban/Metro | 54.3% (3324) | 53.2% (1907) | |
| Medium Urban/Metro | 23.9% (1465) | 22.2% (796) | |
| Small/Rural | 21.8% (1335) | 24.6% (882) | |
| Primary Payor | | | P<0.0001 |
| Medicare | 0.6% (37) | 0.3% (11) | |
| Medicaid | 58.4% (3600) | 52.6% (1892) | |
| Private | 34.4% (2121) | 35.7% (1284) | |
| Operating Room Indicator | 23.0% (1420) | 20.2% (728) | P=0.0012 |
| No Hospital Procedures | 24.0% (1485) | 23.3% (839) | P=0.3722 |
| Length of Stay | 6 (4-24) | 4 (1-15) | P<0.0001 |
| Number of Procedures | 2 (1-5) | 4 (1-8) | P<0.0001 |
| Total Cost | $34,367.05 ($12,447.01-$91,453.98) | $27,054.82 ($6,344.14-$82,863.61) | P=0.1991 |
| Cost Per Day | $3,592.18 ($2,323.43-$5,327.04) | $5,723.39 ($3,395.58-$9,324.69) | P<0.0001 |
Categorical comparisons were made using the chi-squared test; continuous comparisons were made using Student's t-test.
Discharge outcomes from Inpatient Rehabilitation or Skilled Nursing Facility in patients post-stroke under Medicare Advantage plans
PP-056 Health Services, Outcomes and Policy Research (HSOP)
Heather Anne Hayes1, Priyanka Ghule1, Joseph Biskupiak1, Christine Mcdonough2
1University of Utah
2University of Pittsburgh
Purpose: To determine differences in outcomes (functional, readmission, and community discharge) after placement in an Inpatient Rehabilitation Facility (IRF) or Skilled Nursing Facility (SNF) for patients post-acute ischemic stroke who have a Medicare Advantage (MA) plan.
Methods: A retrospective cohort study of individuals discharged to an IRF or SNF Post-Acute Care (PAC) setting was conducted with data from an MA convener company. Multivariable logistic regression analyses were performed with facility (IRF [referent] or SNF) as the independent variable for each outcome variable: 1) functional change from admission to discharge (Activity Measure for Post Acute Care [AM-PAC] CAT Basic Mobility [BM], Daily Activity [DA], and Applied Cognition [AC]); 2) readmission to an acute care hospital; and 3) community discharge. Subjects were matched using propensity scores on gender, sex, acute care length of stay, living setting prior to stroke, medical complexity, admission AM-PAC BM, DA, and AC, and co-morbidities. Results are expressed as beta coefficients for the functional outcomes and as Odds Ratios (95% CI) for readmission and community discharge.
Results: Matching yielded 3,377 individuals in each group (IRF and SNF). After controlling for the other variables, the change in AM-PAC BM score at discharge was higher for individuals in an IRF compared to a SNF, β = 2.15 points (95% CI, 1.80, 2.49), p < 0.001. For the DA score, β = 0.05 points (-0.29, 0.38), p = 0.78; for the AC score, β = 1.42 points (1.10, 1.75), p < 0.001. Individuals discharged to an IRF were less likely to be readmitted to the acute care hospital, OR = 0.47 (0.39, 0.57), p < 0.001. There was no difference between the groups in discharge to the community, OR = 0.93 (0.81, 1.06), p = 0.27.
Conclusions: For individuals post-stroke with MA plans, differences in functional outcomes between IRF and SNF placement were statistically significant for BM and AC; however, these scores do not reflect a meaningful clinical difference between the groups (minimal detectable differences of 4.45 and 7.73, respectively). Additionally, individuals sent to a SNF were more likely to be readmitted to an acute care hospital, but there was no difference between groups in discharge to a community setting.
Keywords: Stroke, rehabilitation, Post-Acute Care, Medicare Advantage
Increasing melanoma incidence: Sun exposure or changing diagnostics?
PP-057 Health Services, Outcomes and Policy Research (HSOP)
Ivar Sonbo Kristiansen1, Jesper Bo Nielsen2
1Department of Health Economics, University of Oslo, Oslo, Norway
2Department of Public Health, University of Southern Denmark, Denmark
Purpose: In several European countries melanoma incidence has increased much more than melanoma mortality with incidence-mortality discrepancies most pronounced in the richest countries. We aimed to explore changes in the relationship between diagnostic activity and melanoma incidence and mortality.
Methods: For the years 1998-2019, we obtained data (SNOMED code, age, sex) for all skin biopsies with melanocyte related diagnoses from the national Danish Pathology Data Bank (DPDB). We classified each biopsy as melanoma, melanoma in-situ or benign nevi.
We estimated age standardized incidence and biopsy rates (per 100,000 population per year) and correlation between them with Pearson correlation coefficient.
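The correlation between the annual biopsy-rate and incidence series is the standard Pearson product-moment formula; a minimal implementation:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()   # center both series
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```

Applied to the 22 annual rate pairs, values near 1 (such as the 0.86 and 0.89 reported below) indicate that biopsy activity and melanoma incidence rose and fell together over the period.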
Results: The dataset encompassed 1,143,370 biopsies in 720,992 individuals (2018 Danish population: 5.8 million), of whom 65.1% were female. Biopsy rates were two-fold higher in females than in males. They increased in both sexes until 2011, after which they declined (Figure 1, male only), but only among those aged 15-44 (graph not shown).
Figure 1.
Age-standardized biopsy rate and age-standardized incidence rates of melanoma, atypical nevi/melanoma in situ, and benign nevi, together with melanoma mortality, 1998-2019. All rates per 100,000 population per year. Scales for melanoma, atypical nevi/melanoma in situ, and mortality rates are presented on the right-hand Y-axis.
Total melanoma incidence, which was similar in men and women, increased throughout the 22-year period (Figure 1, male only), but less so after 2011. In men aged 15-44, incidence was stable after 2011, while it declined in women (graph not shown). In-situ incidence increased twice as much over time as melanoma incidence. Melanoma mortality was unchanged during the study period.
The correlation between biopsy rates and incidence was 0.86 in men and 0.89 in women, while it was 0.74 and 0.72 between biopsy rate and in-situ incidence for men and women, respectively.
Conclusions: We observe increasing biopsy rates and melanoma incidence, but stable mortality. Because there have been no changes in pathology techniques or formal diagnostic criteria, pathologists appear to have shifted the diagnostic balance between melanoma and in-situ lesions. We also observe decreasing biopsy rates in young people after 2011, possibly because of the introduction of dermoscopy and Photofinder®. The findings are compatible with, but not conclusive for, melanoma overdiagnosis.
Keywords: Melanoma, incidence, mortality, diagnostic activity, overdiagnosis
Adapting a Model of Cervical Carcinogenesis to Reflect Black Women in the U.S.
PP-059 Health Services, Outcomes and Policy Research (HSOP)
Jennifer C Spencer1, Emily A Burger3, Nicole G Campos2, Mary Caroline Regan2, Stephen Sy2, Jane J Kim2
1Department of Population Health, Dell Medical School, University of Texas at Austin
2Center for Health Decision Sciences, Harvard TH Chan School of Public Health
3Department of Health Management and Health Economics, University of Oslo
Purpose: To use data on cervical cancer natural history and screening practice patterns among Black women relative to all women in the U.S. to explore factors that contribute to existing disparities in cervical cancer incidence and mortality.
Methods: We obtained race-stratified demographic, epidemiological and screening data for Black women in the U.S. and compared to data for all women. Demographic data included life tables from the National Center for Health Statistics and hysterectomy rates estimated from the 2018 Behavioral Risk Factor Surveillance Survey. Epidemiological data included age- and genotype-specific HPV prevalence from the 2002-2006 National Health and Nutrition Examination Survey and HPV type distribution within cervical precancerous lesions and cervical cancers. Data on cervical cancer incidence, mortality, stage at diagnosis, and stage-specific survival by race and age came from the Surveillance, Epidemiology, and End Results study (SEER). Cervical cancer screening practice patterns were estimated from the PROSPR cervical consortium. Data specific to Black women were input into a previously-developed Monte Carlo simulation model describing the natural history of HPV infection and progression to cervical cancer among all women. HPV incidence, natural immunity, and disease progression were recalibrated to fit data on HPV prevalence and type distribution in precancerous lesions and cancer in Black women.
Results: Cancer incidence and mortality were higher for Black women compared to all women, likely driven by the significant differences also identified in demographic variables, screening initiation, follow-up of abnormal findings, and survival after diagnosis. Type distribution of HPV infection and detection in precancerous disease varied significantly for Black women compared with estimates for all women, although type distribution did not vary significantly within cervical cancer cases. Our calibrated Black woman-specific model achieved good fit to HPV prevalence (Figure 1; HPV 16 prevalence) and type distribution.
Figure 1.
Figure 1 compares empirical data on HPV 16 prevalence using the National Health and Nutrition Examination Survey (NHANES) to model output across 50 input sets. Bars reflect 95% confidence interval for data and min/max of input sets for model output.
Conclusions: There is substantial variation in demographic characteristics, clinical care indicators, and cervical cancer natural history markers for Black women compared to the full U.S. population. To reflect cancer burden in Black women, it is important to consider drivers of cervical cancer incidence across the cancer care continuum, including demographic characteristics like all-cause mortality and hysterectomy rates that impact the number of women at risk. Ongoing work will describe the relative contribution of each of these characteristics to observed disparities in incidence and mortality.
Keywords: disparities, cancer, cervix, human papillomavirus, simulation, screening
“How do you get care for something people don’t understand?”: Care navigation in Veterans with Gulf War Illness
PP-060 Health Services, Outcomes and Policy Research (HSOP)
Kara Winchell1, Shannon Nugent1, Sara Knight2, Deborah Passey2, Elizabeth Tilley2, Mark Helfand3, Megan Lafferty1
1Center to Improve Veteran Involvement in Care, VA Portland Health Care System, Portland, OR, USA
2Informatics, Decision Enhancement, and Analytic Science (IDEAS) Center, VA Salt Lake City Health Care System, Salt Lake City, UT, USA
3Staff Physician, VA Portland Health Care System, Portland, OR, USA
Purpose: Gulf War Illness (GWI) is estimated to affect up to 250,000 Gulf War Veterans (GWV). Symptoms include fatigue, pain, cognitive problems, and dysfunction of the respiratory, gastrointestinal, and neurological systems. Because lack of recognition and clinical understanding of GWI as a service-connected condition may contribute to delays in care for GWVs, we sought to understand healthcare navigation experiences of GWVs with GWI and identify opportunities for improvement.
Methods: We followed the Database of Individual Patient Experiences (DIPEx) methodology, using a semi-structured guide to prompt discussion of healthcare experiences and interviewing a national sample of Veterans (n=58), 39 of whom have GWI. Interviews were transcribed, coded, and analyzed for themes.
Results: We found four themes reflected in Veterans’ discussions of GWI care quality and satisfaction in the VA: 1) provider understanding of GWI; 2) acknowledgment or diagnosis of GWI; 3) care coordination and referral; and 4) longitudinal follow-up and care-plan adjustments. Veterans expressed frustration with their clinicians’ poor understanding of GWI and the lack of care coordination in the VA system, and perceived this as lower-quality care. When clinicians recognized GWI, or were willing to research it, Veterans described feeling validated and mentioned access to resources, including referrals to military exposure health specialists, clinical trials, and the War-Related Illness and Injury Study Centers (WRIISC). Veterans associated this access with symptom improvement and higher care satisfaction. In contrast, Veterans attributed delays in disability connection and referrals for appropriate care to lack of clinician recognition of GWI, which they said exacerbated their frustration and limited symptom relief. Several participants with GWI described dissatisfaction with diagnostic workup or referral to a series of specialists with no clear treatment pathway or resolution of their concerns. In these cases, GWVs often became their own advocates, seeking information and appropriate care. Veterans attributed increased care quality, satisfaction, and relief to the presence of care coordination, including targeted specialist referral, implementation of WRIISC recommendations, and longitudinal follow-up.
Conclusions: Limited recognition and understanding of GWI and limited care coordination within the VA are primary sources of frustration for Veterans with GWI. Additional GWI clinical education, better knowledge and utilization of existing care resources among physicians and patients, and a GWI “champion” at each institution may offer a centralized way to coordinate care for this complex illness.
Keywords: Health Services Research; Veterans Affairs; Care Navigation
Designing Guidelines for Those Who Don’t Follow Them: Exploring the Impact of Compliance Assumptions on Optimal Cancer Screening Guidelines
PP-061 Health Services, Outcomes and Policy Research (HSOP)
Kine Pedersen1, Ivar Sønbø Kristiansen1, Stephen Sy2, Jane J. Kim2, Emily A. Burger3
1Department of Health Management and Health Economics, University of Oslo, Norway
2Center for Health Decision Science, Harvard T.H. Chan School of Public Health, USA
3Department of Health Management and Health Economics, University of Oslo, Norway; Center for Health Decision Science, Harvard T.H. Chan School of Public Health, USA
Purpose: Health technology assessment guidelines are unclear about the role of non-adherence in model-based cost-effectiveness analysis. For analyses informing the design of cancer screening guidelines (e.g., age to start/end, frequency, follow-up), including non-adherent screening behavior could lead to inefficient and harmful recommendations for women who intend to comply. To inform recommendations for conducting model-based decision analyses, we aimed to evaluate the impact of alternative screening compliance assumptions on cost-effectiveness results within the context of cervical cancer screening in Norway.
Methods: We used a previously developed microsimulation model of human papillomavirus (HPV) and cervical carcinogenesis to project the long-term health and economic outcomes for a cohort of Norwegian women born in 1989. We compared six alternative screening strategies involving primary HPV testing (5-yearly or 7-yearly), with different wait-times prior to repeat testing for women identified as intermediate risk (12, 24 or 36 months). We applied 10 compliance scenarios: perfect compliance; six scenarios assuming “higher” (>80%) or “lower” (≤80%) rates of random compliance to follow-up testing, colposcopy and treatment (i.e., individual compliance to a single procedure was independent of previous behavior); and three scenarios assuming systematic compliance according to an individual “complier-profile” (i.e., a distribution of never-, under-, guideline- and over-screeners). For each scenario, we calculated incremental cost-effectiveness ratios (ICERs) and considered a strategy with an ICER below USD55,000 as cost-effective, according to Norwegian benchmarks for cost-effectiveness.
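The frontier-and-threshold logic used in this kind of analysis can be sketched as follows. The strategy names and (cost, QALY) values below are invented for illustration and are not the study's estimates; only the USD 55,000 threshold comes from the text.

```python
def cost_effectiveness_frontier(strategies, wtp=55_000):
    """Build the cost-efficiency frontier and pick the most effective
    strategy whose ICER stays below the willingness-to-pay threshold.
    `strategies` is a list of (name, cost, qalys) tuples."""
    # Sort by cost; drop strongly dominated strategies (no more
    # effective than a cheaper alternative).
    ordered = sorted(strategies, key=lambda s: (s[1], -s[2]))
    front = []
    for cand in ordered:
        if not front or cand[2] > front[-1][2]:
            front.append(cand)

    def icer(a, b):
        return (b[1] - a[1]) / (b[2] - a[2])

    # Remove extended dominance: ICERs must increase along the frontier.
    changed = True
    while changed and len(front) > 2:
        changed = False
        for i in range(1, len(front) - 1):
            if icer(front[i - 1], front[i]) >= icer(front[i], front[i + 1]):
                del front[i]
                changed = True
                break

    # The cost-effective choice is the most effective strategy whose
    # incremental ICER is still below the threshold.
    best = front[0]
    for i in range(1, len(front)):
        if icer(front[i - 1], front[i]) <= wtp:
            best = front[i]
        else:
            break
    return front, best

# Invented example strategies: (name, lifetime cost in USD, QALYs).
strategies = [
    ("No screening", 0, 10.00),
    ("A", 1_000, 10.10),
    ("C", 2_000, 10.12),
    ("B", 5_000, 10.15),
]
front, best = cost_effectiveness_frontier(strategies)
print(best[0])
```

Rerunning this selection under each compliance scenario is what allows the frontiers and cost-effective strategies in Table 1 to be compared across scenarios.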
Results: Benchmarked against the perfect compliance scenario, five imperfect compliance scenarios yielded the same strategies on the cost-efficiency frontier, and four scenarios identified the same cost-effective strategy, involving 7-yearly screening with 36-month follow-up for women with intermediate risk (Table 1). The random and complier-profile compliance scenarios provided results similar to perfect compliance in the case of “higher” rates, but differed considerably from the perfect compliance scenario in the case of “lower” rates, in which case relatively more intensive screening strategies were preferred. Recommending a more intensive strategy could increase colposcopies by 15% for compliant women.
Table 1.
Cost-effectiveness results across ten screening compliance scenarios. Numbers indicate rank order of cost-efficient strategies on the efficiency frontier. Strategies with an incremental cost-effectiveness ratio below USD55,000 are denoted by an asterisk (*) and shaded area. X denotes that the strategy was not cost-efficient (i.e., dominated). Shaded compliance scenario labels indicate that the scenario resulted in a cost-efficiency frontier similar to that of the perfect compliance scenario. ** Indicates primary screening frequency with HPV testing (5- or 7-yearly) and wait-time prior to repeat testing for women who were positive for HPV-non-16/18 and reflex cytology negative at their primary screen (referred to as the intermediate risk group). Random-1 = 85% compliance to screening and follow-up; Random-2 = 85% compliance to screening and follow-up, 90% compliance to colposcopy procedure; Random-3 = 85% compliance to screening and follow-up, 90% compliance to colposcopy procedure, and 95% compliance to treatment; Random-4 = 60% compliance to screening and follow-up; Random-5 = 60% compliance to screening and follow-up, 75% compliance to colposcopy procedure; Random-6 = 60% compliance to screening and follow-up, 75% compliance to colposcopy procedure, and 80% compliance to treatment; Profile-1 = 10% never-screeners, 40% under-screeners, 20% guideline-screeners, 30% over-screeners; Profile-2 = Profile-1 combined with Random-3 follow-up/colposcopy/treatment compliance; Profile-3 = Profile-1 combined with Random-6 follow-up/colposcopy/treatment compliance. Abbreviations: mo = months.
Conclusions: Analyses including imperfect compliance may lead to a relatively more intensive optimal screening strategy, both in terms of primary screening frequency and follow-up recommendations. In turn, designing guidelines for those who do not follow them may lead to over-screening of those who do.
Keywords: cost-effectiveness analysis; cancer screening; compliance
A Discrete Choice Experiment to Compare COVID-19 Vaccination Decisions for Children and Adults
PP-062 Health Services, Outcomes and Policy Research (HSOP)
Lisa A Prosser1, Abram Wagner1, Eve Wittenberg2, Brian Zikmund Fisher1, Angela M Rose1, Jamison Pike3
1University of Michigan
2Harvard Chan School of Public Health
3Centers for Disease Control and Prevention
Purpose: To assess preferences for attributes of adult and pediatric COVID-19 vaccination.
Methods: We conducted an online survey in a national online panel of U.S. adults (n=1040) in May and June 2021. We used discrete choice analysis to measure the relative value of attributes of COVID-19 vaccination for adults and children. Six attributes described hypothetical vaccination options: vaccine effectiveness (60% or 95%), mild common side effects (1 day mild or 1-2 days systemic), rare adverse events (no risk, same risk as flu vaccine, or higher risk than flu vaccine), number of doses (1 or 2 shots), waiting time required for vaccination appointment (1, 2, 4, or 8 hours), and regulatory approval (formal FDA approval or emergency use authorization). Respondents chose between two hypothetical vaccination profiles or no vaccination. Respondents were first asked to consider the choice for their own vaccination and then to consider an identical set of profiles for a hypothetical child aged 0-17 years. Additional survey questions asked about vaccination beliefs, COVID-19 illness experience, COVID-19 risk factors, vaccination status, opinions about the risk of COVID-19, and respondent sociodemographic characteristics. We estimated the relative value of vaccination-related attributes using Bayesian logit regression and identified subgroups with similar preference profiles using latent class analyses.
Results: Vaccine effectiveness was identified as a significant attribute for both adult and child vaccination (Figure). Respondents also preferred fewer rare adverse events, fewer mild side effects, one dose, FDA approval, and shorter waiting times. Results were very similar whether the question was framed as adult or child vaccination, with a slightly stronger preference for fewer rare adverse events for children. More respondents opted out (no vaccination) when choosing for children (29% of choice sets) than for adults (21% of choice sets). Latent class analysis revealed 4 groups of respondents: (1) sensitive to safety and regulatory status (14% adult; 18% child), (2) sensitive to convenience (9% adult; 6% child), (3) careful deciders, who considered all attributes in making their choices (57% adult; 48% child), and (4) vaccine rejecters (21% adult; 29% child).
Figure.
Conclusions: The identification of a subgroup who prioritizes convenience (less time required for vaccination and fewer doses) may present an opportunity for actionable strategies to increase vaccination uptake for both adult and pediatric populations.
Keywords: discrete choice experiment, COVID-19, vaccination
Longitudinal patterns of health care costs among high-cost users: An application of group-based trajectory modelling to large administrative data
PP-063 Health Services, Outcomes and Policy Research (HSOP)
Logan Trenaman1, Daphne Guh2, Kim Mcgrail3, Mohammad Ehsanul Karim4, Rick Sawatzky5, Stirling Bryan3, Linda Li6, Marilyn Parker7, Kathleen Wheeler7, Mark Harrison1
1Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver, BC, Canada; Centre for Health Evaluation and Outcome Sciences, Vancouver, BC, Canada
2Centre for Health Evaluation and Outcome Sciences, Vancouver, BC, Canada
3School of Population and Public Health, University of British Columbia, Vancouver, BC, Canada
4School of Population and Public Health, University of British Columbia, Vancouver, BC, Canada; Centre for Health Evaluation and Outcome Sciences, Vancouver, BC, Canada
5School of Nursing, Trinity Western University, Langley, BC, Canada; Centre for Health Evaluation and Outcome Sciences, Vancouver, BC, Canada
6Department of Physical Therapy, University of British Columbia, Vancouver, British Columbia, Canada
7Patient Partner, Kelowna, BC, Canada
Purpose: We sought to explore the variability of longitudinal patterns of health costs for high-cost users (HCUs) by identifying latent groups with distinct cost trajectories and describing the sociodemographic and clinical characteristics associated with group membership.
Methods: We obtained five calendar years of data on all British Columbians registered in the Medical Services Plan (MSP) from January 2015 to December 2019. We conducted person-level costing from a health system perspective, which included costs associated with hospital separations, day surgeries, physician services, and prescription medications. We defined HCUs as those incurring costs in the top 5th percentile of health care costs among those with non-zero costs. We retained those who were continuously registered in MSP, alive at the end of the study period, and a HCU in the calendar year 2017. Costs were adjusted to 2019 Canadian dollars. We used a group-based trajectory model to identify groups with distinct trajectories of quarterly log-costs. We chose our final model and the number of groups using a combination of statistical criteria and interpretability, and conducted sensitivity analyses that considered different samples, definitions of HCUs, and ways of modelling cost. We explored sociodemographic and clinical characteristics associated with group membership using unadjusted odds ratios from a multinomial logistic regression model.
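Group-based trajectory models are typically fit with specialized mixture-model software (e.g., the `traj` plugin in Stata or `lcmm` in R). As a deliberately crude illustration of the underlying idea of grouping quarterly log-cost trajectories, a plain k-means over whole trajectories can be sketched on synthetic data; the group shapes and all numbers below are invented, not study results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quarterly log-costs for two toy groups (not study data):
# persistently high cost vs. a mid-period cost spike.
quarters = np.arange(20)
persistent = 10.0 + rng.normal(0.0, 0.3, (50, 20))
spike_mean = np.where((quarters >= 9) & (quarters <= 11), 10.0, 6.0)
spike = spike_mean + rng.normal(0.0, 0.3, (50, 20))
X = np.vstack([persistent, spike])

def kmeans_trajectories(X, k=2, iters=50):
    """Minimal k-means over whole trajectories; a crude stand-in for a
    mixture-model-based group-based trajectory model."""
    # Deterministic initialization: evenly spaced rows.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans_trajectories(X)
```

Unlike this sketch, a true group-based trajectory model estimates group membership probabilities and polynomial trajectory shapes jointly, which is what supports the statistical model-selection criteria the authors describe.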
Results: Our final sample comprised 5.4 million British Columbians. In year three, 161,323 people met our definition of a HCU and were included in our analysis (threshold: $10,448). We selected a model with five groups (Figure 1). These groups included those with persistently high costs (Group 1, mean total: $98,178); persistent moderate costs (Group 2, mean total: $39,604); declining costs (Group 3, mean total: $34,428); rising costs (Group 4, mean total: $46,220); and a cost spike (Group 5, mean total: $23,401). Being older, being in the lowest income quintile, and the presence of nearly every comorbidity in the Elixhauser index were associated with increased odds of being in the persistently high-cost group versus each other group.
Figure 1.
Actual (solid line) and predicted (dashed line) five-year cost trajectories from the five-group trajectory model of 2017 high-cost users in British Columbia, Canada
Conclusions: HCUs exhibit heterogeneous longitudinal cost trajectories associated with age and multimorbidity. Future work is needed to identify other socio-demographic factors associated with different cost trajectories, to differentiate between preventable and non-preventable health care costs, to develop methods for predicting people at-risk of becoming HCUs, and to identify and provide supports that can improve outcomes and reduce costs.
Keywords: patient-oriented research, health care costs, high-cost users, administrative data, group-based trajectory modeling
Potential Impact of Deceased Donor Kidney Allocation Policies on Older Transplant Candidates: A Modeling Study
PP-065 Health Services, Outcomes and Policy Research (HSOP)
Matthew B. Kaufmann, Jeremy D. Goldhaber-Fiebert
Department of Health Policy, Stanford University
Purpose: While 50% of those in need of kidney transplants are aged 65 years or older, they make up only 20% of recipients. We developed and validated a microsimulation model of the transplant process for older candidates and used it to derive upper-bound estimates of the potential benefits of policies aimed at increasing the transplant rate for this critical subgroup.
Methods: We developed a series of risk equations to predict key transplant process outcomes, including waitlist and post-transplant outcomes, estimating them using the complete Scientific Registry of Transplant Recipients (SRTR) dataset. We combined these equations of competing events into a microsimulation model. Sampling Importance Resampling was used to calibrate the model to key targets, resulting in a posterior distribution of the coefficients for all equations. Our goal is to use the calibrated model to evaluate policies that increase the rate of transplantation for older candidates. As a first step, we established an upper feasible bound of effectiveness by simulating a best-case scenario in which the rate of transplantation for candidates aged 65 years or older increases by 20% with no change to the distribution of kidney quality.
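The Sampling Importance Resampling step can be illustrated on a one-parameter toy problem: draw parameters from a prior, weight each draw by its fit to a calibration target, then resample in proportion to the weights. The "model", target, and prior below are invented stand-ins for the microsimulation and its SRTR-based calibration targets.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy Sampling Importance Resampling (SIR) calibration of one parameter.
target, target_se = 0.30, 0.02          # e.g., an observed 5-year outcome

def model(rate):
    """Stand-in model mapping a yearly event rate to a 5-year fraction."""
    return 1.0 - np.exp(-5.0 * rate)

prior = rng.uniform(0.01, 0.20, size=10_000)       # draws from the prior
loglik = -0.5 * ((model(prior) - target) / target_se) ** 2
w = np.exp(loglik - loglik.max())
w /= w.sum()                                       # importance weights
posterior = rng.choice(prior, size=2_000, replace=True, p=w)  # resample
print(f"posterior mean rate: {posterior.mean():.3f}")
```

In the study's setting, each "draw" is a full coefficient vector for the risk equations and the likelihood compares simulated outcomes to multiple calibration targets, but the weight-and-resample mechanic is the same.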
Results: In a simulated cohort of 100,000 candidates aged 65 years or older at baseline, the average total life years gained by this policy is 6,886 [95% CrI: 386-11,551] years. An average of an additional 4,777 [95% CrI: 4,534-4,952] deceased donor transplants occurred, and the average time on the waitlist decreased by 1.7 [95% CrI: 1.6-1.9] months. The average life expectancy for someone who receives a deceased donor transplant is 9.4 [95% CrI: 8.0-10.3] years, while the life expectancy for someone who does not is 5.7 [95% CrI: 5.5-5.9] years. The average life expectancy for an older candidate who receives a low-quality kidney (Kidney Donor Profile Index (KDPI) ≥ 85) is 8.9 [95% CrI: 6.5-10.3] years, compared to 9.9 [95% CrI: 9.2-10.3] years for a candidate who receives a high-quality kidney (KDPI < 20).
Conclusions: Our model shows that if kidney transplantation rates for candidates over age 65 could be substantially increased, appreciable life expectancy gains would result. Further cost-effectiveness analyses are needed to fully assess the incremental costs and benefits of a range of transplant policy options.
Keywords: Microsimulation, Kidney Transplant Policy
The impact of naturalization on the use of health services among agricultural workers in the United States
PP-066 Health Services, Outcomes and Policy Research (HSOP)
Melissa I Franco, Eran Bendavid
Department of Health Policy, Stanford University School of Medicine
Purpose: This study estimates the effect of being naturalized on health service use among foreign-born agricultural workers in the US.
Methods: We used repeat cross-sectional survey data collected on 69,139 foreign-born agricultural workers, 2000-2017. We made three comparisons: (1) citizens to undocumented workers, (2) permanent residents to undocumented workers, and (3) citizens to permanent residents. In a sensitivity analysis, we restricted the sample to California. We used propensity score matching (1:1 matching, without replacement), with citizenship (comparisons 1 and 3) or permanent residency (comparison 2) as the treatment of interest. Predictors included the worker’s age, gender, family income (below the poverty level), country of origin in Latin America (four country indicators), and household insurance coverage. Outcomes were based on the following survey question: “Within the last two years, has anyone in your household received benefits from or used services from any of the following social programs?” for 1) a public health clinic (PHC) and 2) any type of healthcare services from doctors, nurses, dentists, clinics, or hospitals in the US (any health services). We identify the effect of naturalization as the difference in health service use in the matched samples.
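The matching procedure (a logistic propensity model followed by 1:1 greedy nearest-neighbor matching without replacement) can be sketched on simulated data. The covariates, coefficients, and sample below are invented and do not come from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated workers: age and a poverty indicator, with treatment
# (e.g., citizenship) depending on both. All values are invented.
n = 600
age = rng.normal(38, 10, n)
poverty = rng.binomial(1, 0.4, n)
X = np.column_stack([np.ones(n), (age - age.mean()) / age.std(), poverty])
true_b = np.array([-0.5, 0.6, -0.8])
treated = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_b))))

def fit_logit(X, y, iters=200, lr=0.1):
    """Plain gradient-ascent logistic regression for the propensity model."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(X @ b)))
        b += lr * X.T @ (y - p) / len(y)
    return b

ps = 1 / (1 + np.exp(-(X @ fit_logit(X, treated))))  # propensity scores

def match_1to1(ps, treated):
    """Greedy nearest-neighbor match on the propensity score."""
    t_idx = np.where(treated == 1)[0]
    c_idx = list(np.where(treated == 0)[0])
    pairs = []
    for i in t_idx:
        j = min(c_idx, key=lambda j: abs(ps[i] - ps[j]))
        pairs.append((i, j))
        c_idx.remove(j)          # without replacement
        if not c_idx:
            break
    return pairs

pairs = match_1to1(ps, treated)
```

The treatment effect is then estimated as the difference in outcome rates between the treated and control members of the matched pairs, as in the text.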
Results: Compared with undocumented workers, any health services use was 16% (95% CI: 12.5% to 19.5%) higher among foreign-born citizen workers in the US, while PHC use was 1.4% lower (95% CI: -3.2% to 0.4%). A similar effect was observed in California (PHC: -2.0%, 95% CI: -4.2% to 0.1%; any health services: 16.7%, 95% CI: 11.8% to 21.6%). Health services usage was higher among permanent residents compared to undocumented workers (3.1%, 95% CI: -0.1% to 6.3% in the US; 5.1%, 95% CI: 3.2% to 8.1% in California), while PHC usage differed significantly only in California. Compared with US foreign-born permanent residents, use of any health services (11.2%; 95% CI: 8.0% to 14.4%) and PHC (2.0%; 95% CI: 0.6% to 3.6%) was higher among US foreign-born citizens. Effects in California were similar.
Conclusions: We find a large and significant increase in health services use among foreign-born naturalized citizens relative to undocumented workers and permanent residents. This appears to reflect a partial substitution away from PHC use, likely indicating increased access to diverse healthcare settings. The effect is smaller for permanent residents relative to undocumented workers. Expanding eligibility for citizenship and permanent residency to agricultural workers may have a significant effect on access to healthcare in this population.
Keywords: agricultural workers, naturalization, health services, utilization
Table 1.
The impact of naturalization on health services usage among foreign-born agricultural workers
Alcohol-Associated Liver Disease Mortality During the COVID-19 Pandemic
PP-067 Health Services, Outcomes and Policy Research (HSOP)
Neeti S Kulkarni1, Divneet K Wadhwa1, Fasiha Kanwal2, Jagpreet Chhatwal3
1Massachusetts General Hospital Institute for Technology Assessment, Boston, MA, USA
2Department of Gastroenterology, Baylor College of Medicine, Houston, TX, USA
3Massachusetts General Hospital Institute for Technology Assessment, Harvard Medical School, Boston, MA, USA
Purpose: The COVID-19 pandemic exacerbated a multitude of public health problems, including increased alcohol intake. Our objective was to analyze state-level trends in alcohol-associated liver disease (ALD) deaths in 2020 to understand whether drinking contributed to the rise of ALD during the pandemic.
Methods: Death data were obtained from the Centers for Disease Control and Prevention (CDC) WONDER Multiple Cause of Death database. We extracted age-adjusted mortality rates for alcohol-related chronic liver disease (ICD-10 code K70) at the state level. We also analyzed ALD deaths by sex, age group, and liver transplant region to account for any further trends.
Results: The age-adjusted mortality rate for ALD in the US increased from 6.4 in 2019 to 7.9 in 2020 (a 23.4% increase) and, at the state level, increased in 49 of 51 states (including the District of Columbia). We observed substantial heterogeneity in ALD deaths across states. The states with the highest ALD mortality rates in 2020 were Wyoming (22.8), New Mexico (22.1), and South Dakota (21.8); however, the states with the largest increases in ALD mortality rates between 2019 and 2020 were Mississippi (86.4%), Alaska (69.6%), and Maine (63.3%). Overall, ALD mortality rates increased for both sexes, all age groups, and all liver transplant regions. Across all states, men had higher rates of ALD than women in 2020, and generally the 55-64 age group had the highest rates of ALD, followed by the 45-54, 65-74, and 25-44 age groups. Liver transplant region 6 (Alaska, Hawaii, Idaho, Montana, Oregon, and Washington) had the highest ALD mortality rate in 2020, at 11.8.
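The headline percent change follows directly from the two reported national rates; a quick check:

```python
# Age-adjusted US ALD mortality rates per 100,000 (from the text).
rate_2019, rate_2020 = 6.4, 7.9
pct_increase = (rate_2020 - rate_2019) / rate_2019 * 100
print(f"{pct_increase:.1f}% increase")  # 23.4% increase
```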
Conclusions: Alcohol-associated liver disease deaths drastically increased in 2020 across 49 of 51 states, both sexes, all age groups, and all liver transplant regions. Possible explanations include increased drinking due to the stress of the pandemic, relaxed liquor laws in certain states coupled with stay-at-home orders, and possible comorbidities worsening the problem. Further investigation is required to understand state-level differences.
Keywords: Addiction, Drinking, Chronic liver disease
ALD Mortality in 2019 and 2020
Age-adjusted mortality rates and absolute death counts for ALD in 2019 and 2020 are presented at the state level.
Methodological considerations for measuring the intergenerational spillover effects related to maternal postpartum depression screening
PP-068 Health Services, Outcomes and Policy Research (HSOP)
Shainur Premji1, Sheila W McDonald2, Deborah A McNeil3, Maria J Santana4, Eldon Spackman5
1Department of Community Health Sciences, University of Calgary, Calgary, Canada; Centre for Health Economics, University of York, York, United Kingdom
2Department of Paediatrics, University of Calgary, Calgary, Canada; Department of Community Health Sciences, University of Calgary, Calgary, Canada; Alberta Health Services, Calgary, Canada
3Faculty of Nursing, University of Calgary, Calgary, Canada; Community Health Sciences, University of Calgary, Calgary, Canada; Alberta Health Services, Calgary, Canada
4Department of Paediatrics, University of Calgary, Calgary, Canada; Department of Community Health Sciences, University of Calgary, Calgary, Canada
5Department of Community Health Sciences, University of Calgary, Calgary, Canada
Purpose: Interventions to promote parental health during the perinatal period and in the child’s earliest years of life affect lifelong child development and economic potential. Yet, the intergenerational spillover effects associated with interventions during this sensitive period in human development are not commonly measured nor reported in economic evaluations. Such findings would equip decision-makers with information on potential trade-offs being made between the health of parents versus their children. The purpose of this study was to investigate the intergenerational spillover effects associated with an opportunistic public health screening pathway for maternal postpartum depression (PPD) in Alberta, Canada.
Methods: Longitudinal data from the All Our Families prospective cohort was linked to public health, inpatient, outpatient, and physician claims administrative databases. Bivariate and multivariable regression models were used to explore the association between maternal PPD screening category and odds of children being at risk versus not at risk of delay across five developmental domains: communication, gross motor, fine motor, personal social, and problem solving. Multiple imputation methods were used to impute missing values for the outcome variables.
Results: Relative to children of mothers who were not screened, children whose mothers screened either low/moderate-risk or high-risk for PPD had increased odds of being at risk of delay across all developmental domains at age 3 and/or 5 years. Despite significant results, however, we were unable to draw causal inferences due to several challenges that resulted in potentially high levels of residual confounding. Based on our analysis, mothers who voluntarily declined screening likely experienced a lower level of risk for PPD relative to those who were screened. Whether treatment reduced mothers’ risk of depression recurrence or improved their responsiveness to and attachment with their children was also unknown. Finally, we were unable to fully capture new cases or recurrences of depression symptoms throughout the study timeframe, both of which also increase the risk for adverse child outcomes.
Conclusions: Given the methodological and practical issues faced in obtaining an unbiased measure of child spillover effects, findings from this study must be interpreted with caution. This study highlights the challenges with measuring intergenerational spillover effects using longitudinal observational research designs. Future research on intergenerational spillover effects will require careful attention to methodological considerations and the risk of bias.
Keywords: spillover effects, observational data, trade-offs
A Model-Based Analysis of Screening Strategies to Reduce COVID-19 Mortality in Nursing Homes in the Omicron Era
PP-069 Health Services, Outcomes and Policy Research (HSOP)
Shirley Dong1, Eric Jutkowitz1, Alyssa Bilinski1, John Giardina2
1The Department of Health Services, Policy & Practice, Brown School of Public Health, Providence, RI, United States
2Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, United States
Purpose: During the January SARS-CoV-2 Omicron wave, 11% of COVID-19 deaths occurred in nursing homes, even though nursing homes house only 0.4% of the population. Screening entails testing all individuals, regardless of symptoms, to identify those infected with SARS-CoV-2 and prevent onward spread. We explore the impact of screening strategies for residents, staff, and visitors on nursing home COVID-19 mortality.
Methods: We developed an agent-based model to examine nursing home SARS-CoV-2 transmission (Figure 1). We constructed a synthetic U.S. nursing home population to reflect the average resident and staff population, with infections entering the nursing home through staff and visitors. We estimate the mortality impact over 30 days of 1) twice-weekly screening/no screening of staff and visitors and 2) twice-weekly screening/no screening of residents, varying community incidence (20 cases per 100,000 per day vs. 150 cases per 100,000 per day) and vaccination (40% of staff/visitors boosted and 65% of residents boosted vs. 70% of staff/visitors boosted and 90% of residents boosted). We assume that staff and visitors who screen positive do not enter the nursing home, and residents who screen positive isolate such that they do not transmit the virus to other individuals in the nursing home while they are infectious.
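The screening logic described above can be sketched as a deliberately simplified, deterministic expected-value calculation. This is not the authors' agent-based model; every parameter value below (daily infection probability, test sensitivity, onward-transmission multiplier, fatality ratio) is hypothetical.

```python
# Simplified sketch of staff screening: infectious staff import infections,
# and a positive screen on a twice-weekly testing day keeps that person out
# of the facility. All parameter values are hypothetical.

def expected_resident_deaths(screen, days=30, n_staff=100,
                             p_infectious=0.01,  # daily prob. a staff member is infectious
                             sensitivity=0.85,   # prob. a screen detects infection
                             r_resident=2.0,     # residents infected per imported case
                             ifr=0.05):          # infection fatality ratio for residents
    imported = 0.0
    for day in range(days):
        screened_today = screen and (day % 7) in (0, 3)  # twice-weekly screening
        p_enter_infectious = p_infectious * ((1 - sensitivity) if screened_today else 1.0)
        imported += n_staff * p_enter_infectious
    return imported * r_resident * ifr

# Screening lowers expected resident deaths by blocking imported infections
print(expected_resident_deaths(screen=False) > expected_resident_deaths(screen=True))
```

The actual model tracks individual agents, visitors, vaccination status, and within-facility transmission chains; this sketch only shows why blocking importation at the door reduces resident mortality.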
Figure 1.
Results: Based on preliminary results, we found that across all levels of community incidence and vaccine uptake, not screening any individuals resulted in 18-20 times the expected number of resident deaths compared to screening residents, staff, and visitors. Screening only residents resulted in 1.1-1.3 times more resident deaths on average compared to screening only staff and visitors. When community incidence is high and vaccine uptake is low, screening has the largest benefit in terms of absolute mortality reduction compared to no screening.
Conclusions: Screening remains an effective intervention to reduce resident deaths, particularly when vaccination/booster rates are low and community incidence is high. Because infections are imported through staff and visitors, screening targeted toward these groups is particularly efficient. Nevertheless, screening residents also has substantial mortality reductions. Compared to more intrusive measures like social distancing, screening is less disruptive and should be considered in the current context of declining vaccine effectiveness and highly transmissible variants. Future analyses are needed to compare the benefits of screening relative to the costs.
Keywords: nursing homes, COVID-19, agent-based model, screening
Comparing projected fatal opioid overdose outcomes and costs of strategies to expand naloxone distribution: A modeling study in Rhode Island
PP-070 Health Services, Outcomes and Policy Research (HSOP)
Xiao Zang1, Sam E Bessey1, Maxwell S Krieger1, Shayla Nolen1, Benjamin Hallowell2, Jennifer A Koziol2, Czarina N Behrends3, Bruce R Schackman3, Brandon Dl Marshall1
1Department of Epidemiology, School of Public Health, Brown University, Providence, RI, USA
2Rhode Island Department of Health, Providence, RI, USA
3Department of Population Health Sciences, Weill Cornell Medical College, New York, NY, USA
Purpose: In 2021, Rhode Island distributed 10,000 additional naloxone kits compared to the prior year through partnerships with community-based organizations. We conducted a modeling study to compare alternative strategies for increasing naloxone distribution through community-based programs in Rhode Island in order to prevent fatal opioid overdoses.
Methods: We developed and calibrated a spatial microsimulation model with an integrated decision tree to compare the outcomes of alternative strategies for distributing 10,000 additional naloxone kits annually. We evaluated distribution strategies focusing on programs that target people who inject drugs (e.g., syringe services programs), individuals at various levels of risk for opioid overdose (e.g., mail-/event-based programs), or people who may misuse prescription opioids (e.g., primary health care) versus no additional kits (status quo). We considered two expanded distribution implementation scenarios: (1) consistent with the current spatial distribution patterns for each program type (supply-based approach); and (2) consistent with the current spatial distribution of individuals in each of the risk groups, assuming programs could direct the additional kits to new geographic areas if required (demand-based approach). We assessed and compared the number of witnessed opioid overdose deaths (OODs) (effectiveness), cost per OOD averted (efficiency), and geospatial health inequality, as measured by the Theil index and between-group variance for city/town OOD rates, under the different strategies.
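Two of the outcome metrics named above, cost per OOD averted and the Theil index, can be sketched as follows; the inputs are hypothetical, not the study's calibrated estimates.

```python
import math

# Sketch of two outcome metrics: cost per OOD averted (efficiency) and the
# Theil index (geospatial inequality). Inputs are hypothetical placeholders.

def cost_per_ood_averted(incremental_cost, oods_averted):
    return incremental_cost / oods_averted

def theil_index(rates):
    """Theil T index across city/town OOD rates (all rates must be > 0)."""
    mean = sum(rates) / len(rates)
    return sum((r / mean) * math.log(r / mean) for r in rates) / len(rates)

print(cost_per_ood_averted(1_000_000, 20.0))  # dollars per OOD averted
print(theil_index([5.0, 5.0, 5.0]))           # equal rates give zero inequality
print(theil_index([1.0, 1.0, 10.0]) > 0.0)    # unequal rates give a positive index
```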
Results: Distributing additional naloxone kits to programs that target people who inject drugs had the best outcomes at the lowest cost, averting 15.6% (5.5%-26.3%) of witnessed OODs annually at an incremental cost of $89,339 per OOD averted with the supply-based approach and 25.3% (13.1%-37.6%) of witnessed OODs at an incremental cost of $55,261 per OOD averted with the demand-based approach. Distributing additional kits to the other two risk groups averted fewer OODs at higher costs per OOD averted and showed similar patterns of improved outcomes and lower unit costs if kits could be re-allocated to areas with greater need using the demand-based approach. While geospatial inequality increased from the status quo under the supply-based approach, the demand-based approach reduced geospatial inequality (Table 1).
Conclusions: Naloxone distribution to people who inject drugs should be prioritized; re-directing the spatial distribution of naloxone to areas with the greatest need will improve effectiveness and efficiency and reduce geospatial inequality.
Keywords: naloxone distribution, opioid overdose, microsimulation, cost, geospatial health inequality
Estimated outcomes and costs for different strategies to increase naloxone distribution in Rhode Island through community-based programs in the last year of the 2020-2022 evaluation period
| Strategy | Witnessed OODs averted (%), supply-based | Cost per OOD averted, supply-based | Theil index, supply-based | Between-group variance, supply-based | Witnessed OODs averted (%), demand-based | Cost per OOD averted, demand-based | Theil index, demand-based | Between-group variance, demand-based |
|---|---|---|---|---|---|---|---|---|
| Status quo | - | - | 2.53 | 3.63 | - | - | 2.53 | 3.63 |
| SSPs | 17.5 (15.6%) | $89,339 | 3.40 | 4.32 | 28.3 (25.3%) | $55,261 | 2.48 | 2.81 |
| Mail/Event-based | 10.9 (9.7%) | $199,963 | 3.36 | 4.35 | 11.5 (10.3%) | $188,345 | 2.45 | 3.18 |
| Primary Healthcare | 2.8 (2.5%) | $619,841 | 2.56 | 3.61 | 3.1 (2.7%) | $557,655 | 2.53 | 3.56 |
Outcomes are based on average estimates from 500 calibrated parameter sets. OOD, opioid overdose death; SSP, syringe services program.
Trauma/Non-Trauma Centers Unsupervised Clustering Analysis Using Non-Surgical and Surgical Care Features
PP-071 Health Services, Outcomes and Policy Research (HSOP)
Xiaonan Sun1, Shan Liu1, Charlie Mock2, Monica Vavilala3, Eileen Bulger2, Rebecca Maine2
1University of Washington Department of Industrial and Systems Engineering
2Harborview Medical Center, the University of Washington Department of Surgery & Harborview Injury Prevention and Research Center
3Harborview Medical Center, the University of Washington Department of Anesthesia & Harborview Injury Prevention and Research Center
Purpose: Hospitals can be designated as different-level (I-V) trauma centers (TCs) according to criteria outlining staffing, infrastructure, and processes, but these criteria do not prescribe the types of injuries or the specific injury care that should be delivered by TCs at each level. This study aimed to determine whether TC level was associated with real-life surgical and non-surgical care delivery patterns.
Methods: We carried out three sets of unsupervised clustering analyses using the Partitioning Around Medoids (PAM) method to identify clusters of TCs/non-TCs in Washington State, based on state hospital discharge data from 2016. We included hospital, trauma, and non-trauma care features. Set 1 clustered 69 TCs/non-TCs that carried out at least one major therapeutic procedure for trauma admissions, using non-surgical care and surgical care procedure specialty subgroup clustering labels. Sets 2 and 3 clustered 53 TCs/non-TCs that carried out ≥50 major therapeutic procedures for trauma admissions. Set 2 used non-surgical care and surgical procedure distribution. Set 3 included surgical procedure volume (3-1) or distribution (3-2). In all three sets, a Principal Component Analysis (PCA) removed collinearity and reduced the features to the top principal components that accounted for 90% of the variation.
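A minimal sketch of this kind of pipeline, assuming synthetic data in place of the hospital discharge features: PCA retains the top components explaining at least 90% of variance, followed by a tiny Partitioning Around Medoids swap loop. This is not the study's actual implementation.

```python
import numpy as np

# Sketch only: PCA to >=90% explained variance, then minimal PAM clustering.
# The data are synthetic stand-ins for hospital care features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)),    # one synthetic hospital cluster
               rng.normal(6, 1, (20, 5))])   # a second, well-separated cluster

# PCA via SVD of the centered data; keep components covering 90% of variance
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / (S**2).sum()
k = int(np.searchsorted(np.cumsum(explained), 0.90)) + 1
Z = Xc @ Vt[:k].T

# Minimal PAM: greedily swap medoids while total distance to nearest medoid falls
D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
medoids = [0, 1]
improved = True
while improved:
    improved = False
    for m in range(len(medoids)):
        for cand in range(len(Z)):
            trial = medoids[:m] + [cand] + medoids[m + 1:]
            if D[:, trial].min(axis=1).sum() < D[:, medoids].min(axis=1).sum():
                medoids, improved = trial, True
labels = D[:, medoids].argmin(axis=1)
print(len(set(labels[:20].tolist())), len(set(labels[20:].tolist())))  # → 1 1
```

With well-separated synthetic groups, each group ends up assigned to a single medoid; real hospital features are far noisier, which is why the study first removes collinearity via PCA.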
Results: Figure 1 maps 10 clusters of TCs/non-TCs with their designated level. Results showed that the clusters only partially aligned with the TC designations. In Set 1, the volume and variation of surgical care drove the clusters, while Set 2 found that orthopedic procedures and non-surgical care features, including patient age, social vulnerability indices, and payer types, distinguished the hospital clusters. This result raises potential equity concerns, as TCs/non-TCs designated at the same level may function differently in areas with different vulnerability and deprivation. Set 3 results showed that procedure volume, rather than the relative proportions of each type of procedure, aligned more closely, though not completely, with TC designation.
Figure 1.
Clustering results map
Clustering results map: (a) Set 1 non-surgical care and surgical care procedure subgroup labels clustering on 69 TCs/non-TCs, (b) Set 2 non-surgical care and surgical care distribution clustering on 53 TCs/non-TCs, (c) Set 3-1 surgical care volume clustering on 53 TCs/non-TCs, (d) Set 3-2 surgical care distribution clustering on 53 TCs/non-TCs
Conclusions: Unsupervised machine learning identified non-surgical and surgical care delivery patterns that explained variation beyond level designation. This provides insights into how leaders could optimize the level allocation for TCs/non-TCs in a mature trauma system by better understanding the distribution of care in the system.
Keywords: Unsupervised clustering, trauma center, trauma system optimization, trauma level designation, non-surgical care, surgical care
At what prevalence of resistance should empiric antibiotic treatment for gonorrhea change? A cost-effectiveness analysis
PP-072 Health Services, Outcomes and Policy Research (HSOP)
Xuecheng Yin1, Minttu Rönn2, Song Li3, Yue Yuan4, Thomas L. Gift5, Joshua A. Salomon6, Yonatan H. Grad7, Reza Yaesoubi1
1Department of Health Policy and Management, Yale School of Public Health, New Haven, United States of America
2Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, United States of America
3College of Computer Science and Technology, Zhejiang University, Hangzhou, China; School of Cyber Science and Technology, Zhejiang University, Hangzhou, China
4Altfest Personal Wealth Management, New York, United States of America
5Division of STD Prevention, Centers for Disease Control and Prevention, Atlanta, United States of America
6Department of Health Policy, Stanford University School of Medicine, Palo Alto, United States of America
7Department of Immunology and Infectious Diseases, Harvard T. H. Chan School of Public Health, Boston, United States of America
Purpose: Common diagnostic tests for gonorrhea do not provide information about susceptibility to antibiotics. Guidelines recommend changing antibiotics used for empiric therapy once resistance prevalence exceeds 5%. Increasing the switching threshold reduces the probability that an infected individual receives effective first-line therapy, resulting in greater morbidity and longer duration of infectiousness. In contrast, decreasing the switching threshold increases the probability of receiving effective first-line therapy but leads to earlier and more extensive use of newer antibiotics, which would be expected to shorten their lifespan. We aimed to identify the optimal switch threshold based on its impact on health loss and costs due to gonorrhea.
Methods: We developed a model of gonorrhea transmission to project gonorrhea-associated costs and loss in quality-adjusted life-years (QALYs) under different switch thresholds among men who have sex with men (MSM) in the US. The model is calibrated to describe key characteristics of gonorrhea transmission among MSM, including the prevalence and incidence of gonorrhea and the emergence and spread of resistance to first-line antibiotics. We calculated the overall costs and QALYs from the healthcare sector perspective, accounting for the costs related to diagnosis, antibiotic treatment, and gonorrhea sequelae, and for the disutility associated with gonorrhea symptoms, antibiotic side effects, and gonorrhea sequelae. Overall costs and QALYs were calculated using a 3% discount rate.
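The discounting step is the standard present-value calculation; below is a minimal sketch with hypothetical cost and QALY streams (the 3% rate is from the abstract, everything else is a placeholder, not a model output).

```python
# Sketch of the standard discounting step: annual costs and QALYs are
# converted to present values at a 3% annual rate. The streams below are
# hypothetical placeholders.

def discounted_total(annual_values, rate=0.03):
    """Present value of a stream of annual values (year 0 undiscounted)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(annual_values))

costs = [1000.0] * 50   # e.g., hypothetical annual gonorrhea-associated costs
qalys = [0.98] * 50     # e.g., hypothetical annual QALYs over the same horizon
print(discounted_total(costs) < sum(costs))   # discounting shrinks the total
print(round(discounted_total(qalys), 2))
```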
Results: Under the scenario where two antibiotics are available for use as first-line therapy and no additional antibiotics are expected to become available over the next 50 years, the optimal switch threshold is between 15% and 20%, depending on the willingness-to-pay level (Figure). The optimal switch threshold decreases when future antibiotics are expected to become available sooner and/or more frequently (Figure).
Figure.
The optimal switching threshold for varying values of willingness-to-pay under different scenarios describing the availability of future antibiotics. All scenarios assume that two first-line antibiotics are currently available.
Conclusions: To determine the threshold of resistance prevalence that should inform first-line therapy for gonorrhea, policymakers should consider the prospect of future antibiotics. The switching threshold could be reduced when newer antibiotics suitable for first-line therapy are expected to become available in the near future.
Keywords: gonorrhea, antimicrobial resistance, mathematical modeling, transmission models, cost-effectiveness analysis, economic evaluation
A Model-based Analysis of Screening Selection Rates for Post-traumatic Stress Disorder
PP-073 Health Services, Outcomes and Policy Research (HSOP)
Gian Gabriel P. Garcia1, Navid Ghaffarzadegan2, Mohammad S. Jalali3
1H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, United States of America
2Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, United States of America
3Massachusetts General Hospital Institute for Technology Assessment, Harvard Medical School, Boston, United States of America; Sloan School of Management, Massachusetts Institute of Technology, Boston, United States of America
Purpose: Post-traumatic stress disorder (PTSD) is a mental health disorder characterized by failure to recover after experiencing a traumatic event that is shocking, scary, or dangerous. The effects of PTSD are debilitating; when left untreated, individuals suffering from PTSD may lose their careers or families, or die by suicide. For population groups at risk of PTSD, accurate screening is critical to ensuring that proper care and treatment are received. PTSD screening relies heavily on self-reported measures, and stigma around PTSD may cause individuals to under-report these measures. Hence, understanding the dynamics between stigma and screening is critical to improving PTSD screening protocols, and ultimately, health outcomes for people with PTSD.
Methods: We formulate a mathematical model capturing the long-term dynamics between screening and stigma. This model considers how missed diagnoses and PTSD treatment outcomes affect stigma, and how stigma affects screening efficacy in future cohorts. We propose a societal cost function based on the total cost of (1) severe PTSD cases, (2) PTSD-positive screenings, and (3) false-positive PTSD diagnoses and characterize both the single-period and long-term cost-minimizing screening selection rates. Finally, we parameterize our model using literature values and perform numerical analysis to characterize the dynamics between stigma, screening rates, screening outcomes, and societal costs.
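A toy recursion in the spirit of the stigma-screening feedback described above; the functional forms and every parameter value below are invented for illustration and are not the authors' model.

```python
# Toy illustration only: stigma suppresses self-report (lowering screening
# efficacy), while a higher screening selection rate erodes stigma over time.
# Annual societal cost combines missed severe cases, positive screens, and
# false positives. All forms and parameters are invented.

def simulate(x, years=30, prevalence=0.3, stigma0=0.5,
             c_severe=100.0, c_positive=5.0, c_false=20.0):
    """x: screening selection rate in [0, 1]; returns the annual cost trajectory."""
    stigma, costs = stigma0, []
    for _ in range(years):
        detectable = prevalence * (1.0 - stigma)  # cases willing to self-report
        true_pos = min(x, detectable)
        false_pos = x - true_pos
        missed = prevalence - true_pos
        costs.append(c_severe * missed + c_positive * true_pos + c_false * false_pos)
        stigma += 0.3 * ((1.0 - x) * 0.8 - stigma)  # more screening, less stigma
    return costs

low_rate, high_rate = simulate(0.2), simulate(0.6)
# "worse-before-better": the higher rate costs more at first, less in the long run
print(high_rate[0] > low_rate[0], high_rate[-1] < low_rate[-1])  # → True True
```

Even this crude recursion reproduces the qualitative pattern in the abstract: the higher selection rate pays an early false-positive penalty but reaches a lower-stigma, lower-cost equilibrium.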
Results: In our analytical study, we find that the single-period cost-minimizing screening selection rate is sensitive to (1) the characteristics of the patient population undergoing screening for PTSD, (2) the return on early treatment, i.e., the effectiveness of medical treatment and the healthcare costs associated with positive diagnoses, and (3) the burden of stigma. In the long run, we find that there exists an equilibrium level of stigma that depends on the screening selection rate. Our numerical experiments show that higher screening selection rates exhibit “worse-before-better” dynamics, i.e., initially high costs are incurred due to false-positive outcomes, but in the long term costs fall due to a reduction in stigma, which results in increased screening efficacy (Figure 1).
Figure 1.
Dynamics of annual societal cost ($) as a function of screening selection rate
Higher screening selection rates show “worse-before-better” cost dynamics while lower screening selection rates show “better-before-worse” cost dynamics. Screening selection rate x denotes the proportion of the population to be screened as PTSD-positive; x*(L) is the long-term cost-minimizing screening selection rate; x*(B) and x*(W) denote the single-period cost-minimizing screening selection rate with no stigma (best-case) and full stigma (worst-case), respectively; x*(0) denotes the single-period cost-minimizing screening selection rate based on the stigma level in year 0.
Conclusions: Our findings highlight the importance of accounting for stigma in PTSD screening and understanding the long-term impact of screening decisions on screening outcomes and costs. It is imperative that the effects of stigma are explicitly considered in the future design of screening protocols for PTSD and other diseases where stigma plays a pivotal role in patient self-reporting behavior.
Keywords: Post-traumatic stress disorder, stigma, screening, mathematical modeling
Design, Analysis, and Optimization of Interpretable Policies for Hypertension Treatment Planning
PP-074 Quantitative Methods and Theoretical Developments (QMTD)
Gian Gabriel P. Garcia1, Lauren N. Steimle1, Wesley J. Marrero2, Jeremey B. Sussman3
1H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, United States of America
2Thayer School of Engineering, Dartmouth College, Hanover, United States of America
3Department of Internal Medicine, Michigan Medicine, University of Michigan, Ann Arbor, United States of America
Purpose: Hypertension is a major risk factor for atherosclerotic cardiovascular disease (ASCVD), the leading cause of death in the United States. Clinical guidelines for hypertension treatment are critical for mitigating the consequences of ASCVD, e.g., heart attack and stroke. Yet, guidelines designed by expert consensus may be inefficient, failing to capture the risks, benefits, and uncertainty inherent to treatment planning. In contrast, the Markov Decision Process (MDP) is a decision-analytic method that can capture these complexities. However, MDP-based treatment recommendations may be uninterpretable or unintuitive, limiting their acceptance in clinical practice. We aim to bridge these two paradigms by designing MDP-based treatment policies that capture the complexities of hypertension treatment planning while leveraging the natural interpretability in monotone policies, thereby generating a treatment policy structure amenable to clinical implementation.
Methods: We formulate mixed integer programs to obtain the optimal monotone policy (MP) and class-ordered monotone policy (CMP). Both policies ensure that treatment recommendations do not decrease in potency as a patient’s health worsens. However, compared to the MP, the CMP is capable of handling health states and treatment types that cannot be strictly ordered from least to most severe. We parameterize an MDP for hypertension treatment using National Health and Nutrition Examination Survey data. We compare our optimal MP and CMP to clinical practice guidelines and the optimal MDP policy via patient-level treatment recommendations and population-level quality-adjusted life years (QALYs).
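The authors solve mixed-integer programs to obtain optimal monotone policies. As a loose illustration only, the sketch below runs plain value iteration on a toy MDP with ordered health states, then checks the monotonicity property the MP/CMP enforce: treatment intensity never decreases as health worsens. All dynamics, rewards, and sizes are invented.

```python
# Toy MDP sketch (not the authors' MIP formulation): states 0 (healthiest)
# to 3; treatment intensities 0 to 2. Invented dynamics and rewards.
S, A, GAMMA = 4, 3, 0.97

def transition(s, a):
    """Hypothetical dynamics: treatment raises the chance of improving."""
    p_better, p_worse = 0.15 * a, 0.2 + 0.1 * s
    probs = [0.0] * S
    probs[max(s - 1, 0)] += p_better
    probs[min(s + 1, S - 1)] += p_worse
    probs[s] += 1.0 - p_better - p_worse
    return probs

def reward(s, a):
    return 1.0 - 0.2 * s - 0.03 * a  # QALY-like: worse health and more drugs cost

def q(s, a, V):
    return reward(s, a) + GAMMA * sum(p * V[t] for t, p in enumerate(transition(s, a)))

V = [0.0] * S
for _ in range(500):  # value iteration to near-convergence
    V = [max(q(s, a, V) for a in range(A)) for s in range(S)]
policy = [max(range(A), key=lambda a: q(s, a, V)) for s in range(S)]

# The MP/CMP constraint: intensity is non-decreasing as health worsens
print(policy, all(policy[s] <= policy[s + 1] for s in range(S - 1)))
```

In this toy instance the unconstrained optimal policy happens to be monotone; the point of the authors' MIPs is to find the best policy that is guaranteed monotone even when the unconstrained optimum is not.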
Results: Our methods generate patient-level treatment recommendations that are more clinically intuitive than the optimal MDP policy, i.e., less intense treatments for healthier patients and more aggressive treatments for sicker patients. Across 66.5 million people, the optimal MP and CMP achieve just 16 and 15 fewer QALYs per 100,000 patients, respectively, compared to the optimal MDP policy, whereas clinical guidelines achieve 3262 fewer QALYs per 100,000 patients. Compared to clinical guidelines, our methods achieve greater QALYs in every blood pressure category, providing the greatest benefit to patients with elevated blood pressure and stage 1 hypertension (Figure 1).
Figure 1.
Quality Adjusted Life Years (QALYs) Saved Compared to Clinical Guidelines
Compared to clinical guidelines, the optimal monotone policy and class-ordered monotone policy save the greatest number of QALYs among patients with elevated blood pressure (BP) and stage 1 hypertension. Additionally, the optimal monotone and class-ordered monotone policies achieve similar performance as the optimal policy.
Conclusions: We developed novel methods for designing interpretable, decision-analytic treatment policies. Our methods drastically outperform clinical guidelines, while being more clinically intuitive than the optimal MDP-based policy. While we focused on hypertension treatment, our methods can be applied to other healthcare contexts where interpretability is essential for implementing model-derived treatment policies.
Keywords: Markov Decision Processes, hypertension treatment planning, interpretability
Value-Based Integrated Care: A systematic literature review
PP-075 Health Services, Outcomes and Policy Research (HSOP)
Evelien Shannon Van Hoorn, Lizhen Ye, Nikki Van Leeuwen, Hein Raat, Hester Floor Lingsma
Erasmus MC, Erasmus University Medical Center Rotterdam, Department of Public Health, The Netherlands
Purpose: Worldwide, healthcare services are transforming into value-based organizations to achieve more sustainable healthcare. An important aspect of value-based healthcare (VBHC) is integrated care, but practical evidence-based recommendations for the successful implementation of integrated care within a VBHC context are lacking. This systematic review aims to assess how value-based integrated care (VBIC) is defined in the literature and to assess its effects as well as the facilitators of and barriers to its implementation.
Methods: The Embase, Medline ALL, Web of Science Core Collection, and Cochrane Central Register of Controlled Trials databases were searched from inception until January 15th, 2022. Empirical studies that implemented and evaluated an integrated care intervention within a VBHC context were included. Non-empirical studies were included if they described either a definition of VBIC or facilitators of and barriers to its implementation. The Rainbow Model of Integrated Care (RMIC) was used to analyze the VBIC interventions. The quality of the articles was assessed using the Mixed Methods Appraisal Tool.
Results: After screening 1328 titles/abstracts and 485 full-text articles, twenty-five articles met the inclusion criteria, of which 12 were empirical studies. No articles were excluded based on quality. Three articles mentioned the term VBIC and one article provided a definition of VBIC. The VBIC interventions consisted of multiple components, targeted different patient populations, and occurred in different settings. According to the RMIC, the majority of the VBIC interventions could be classified as professional integration (n=6). Various outcome measures were used to evaluate the effects of the VBIC interventions. Improvements in clinical outcomes, patient-reported outcomes, and healthcare utilization were reported most frequently. Frequently reported facilitators were supportive information technology (n=8), a new reimbursement or payment model (n=7), and leadership (n=5). Commonly reported barriers were limited or insufficient information technology (n=8), the current reimbursement or payment model (n=7), and the required cultural change (n=5).
Conclusions: Few studies explicitly define the concept of VBIC. The effect of VBIC seems promising but the exact interpretation is challenged by the evaluation/implementation of multicomponent interventions, multiple testing and generalizability issues. For successful implementation of integrated care within a VBHC context, it is imperative that healthcare organizations consider investing in adequate IT infrastructure and in the development and implementation of new reimbursement models.
Keywords: Integrated Care, Value-based Healthcare, Systematic review
Figure 1.
Flowchart depicting the article selection
Patient experiences with value-based healthcare interventions at the HIV outpatient clinic of the Erasmus Medical Centre
PP-076 Patient and Stakeholder Preferences and Engagement (PSPE)
Evelien Shannon van Hoorn1, Nikki van Leeuwen1, Nadine Yvette Bassant2, Hester Floor Lingsma1, Theodora Engelina de Vries - Sluijs3
1Erasmus MC, University Medical Center Rotterdam, Department of Public Health, Rotterdam, the Netherlands
2Erasmus MC, University Medical Center Rotterdam, Department of Internal Medicine – Infectious Diseases, Rotterdam, the Netherlands
3Erasmus MC, University Medical Center Rotterdam, Department of Internal Medicine – Infectious Diseases, Department of Medical Microbiology and Infectious Diseases, Rotterdam, the Netherlands
Purpose: One aspect of value-based healthcare (VBHC) is more patient-centered care. However, little is known about patient experiences with VBHC interventions. We aim to explore how patients experience VBHC as implemented in the HIV outpatient clinic of the Erasmus MC.
Methods: The HIV outpatient clinic of the Erasmus MC in Rotterdam, the Netherlands, implemented a VBHC intervention consisting of (1) a change in consultation schedule, from two physical consultations a year to one physical double consultation (infectious disease specialist and nurse consultant/specialist) and one remote consultation per year; (2) a change in consultation structure, from a single physical consultation with the physician to a double consultation in which the patient first visits the nurse consultant/specialist followed by the physician; and (3) implementation of a generic quality-of-life questionnaire. Semi-structured interviews were held with Dutch- or English-speaking adult patients with HIV who had been patients at the Erasmus MC for more than 5 years, exploring their experiences with the implemented changes.
Results: In total, 25 patients were interviewed. Analysis of the first 10 interviews indicates that patients were generally positive about the changes in consultation schedule and preferred a yearly phone consultation over a video consultation for their remote consultation. The structure of the physical consultation differed among patients; not all patients visited the nurse specialist before the physician. Patients were primarily neutral or positive towards the change in consultation structure. Patients indicated that the addition of the nurse consultant/specialist provided more structure to the consultation, allowing the nurse consultant/specialist to focus more on the psychosocial aspects and the physician on the medical aspects. On the other hand, some patients did not see the added value of talking to two professionals on the same day. Patients who completed the questionnaire before their consultation had a neutral or positive attitude towards its implementation. They had no objections to completing the questionnaires and felt it could help them prepare for their consultation or provide the professionals with additional information.
Conclusions: The implementation of the VBHC intervention seems to have positively influenced patients’ experiences at the HIV outpatient clinic. Our findings may inform further optimization of VBHC interventions and improve patient-centered care in outpatient HIV clinics.
Keywords: HIV, Qualitative Research, Patient Preference, Value-based Healthcare
What patients with bone cancer say about receiving information for surgical options: Communication is more than content
PP-077 Patient and Stakeholder Preferences and Engagement (PSPE)
Janet Panoch1, Elizabeth Goering2, Jacob Watson2
1Department of Pediatrics, Indiana University School of Medicine, Indianapolis, USA
2Communication Studies, IUPUI, Indianapolis, USA
Purpose: The purpose of this study is to explore what patients and caregivers say about receiving information for osteosarcoma surgical options in the lower extremity. Surgical options include above knee amputation, limb salvage surgery, and rotationplasty, in which the foot and ankle are removed, rotated 180 degrees, and reattached to the femur, resulting in a backwards foot with the ankle serving as a knee joint. The chances for survival are the same regardless of the surgical decision, making this ideal for shared decision making with orthopedic surgeons.
Methods: Three researchers used constant comparative analysis to code 20 transcripts of interviews with 29 osteosarcoma survivors and/or caregivers, exploring the lived experience of making the surgical decision with a focus on the communication of surgical options. The three-step process included initial open coding for themes, intermediate coding to develop categories across themes, and advanced coding to develop a storyline that connects the categories. Surveys of 29 healthcare providers, containing three questions about challenges in communicating surgical options, were examined to identify similar themes in the storylines. Communication Complex theory is used to examine how bioactive responses affect the receipt of information during a highly emotional period.
Results: Storylines that emerged for both patients/caregivers and providers include: more information is better; realistic accounts of the pros and cons of all options and realistic acknowledgement of risks; reasons for not giving all the information; conversations about topics of interest to patients; information about where to get information; and how information is delivered, relationally and through different channels. These ranged from patient comments such as “We’re in a whole new world here, we need you guys to talk to us about the pros and cons” to the challenges providers face in communicating those risks and benefits, such as “how to provide enough detail to help but not paralyze.”
Conclusions: Patients feel that surgeons are the gatekeepers of information and want all the information in order to participate in making a decision, while providers struggle with balancing the amount of information without overwhelming patients. Both are cognizant of how emotional dissonance affects information processing. We provide recommendations and a sample script for providers. Evidence from both patients and providers indicates a need for a decision aid.
Keywords: Patient engagement, shared decision making
Participants (n=29)
| ID# | Pseudonym | Role | Age at Diagnosis | Current Age | Surgical Decision | Years Post Op |
|---|---|---|---|---|---|---|
| 1 | Aretha | Self | 15 | 41 | LSS | 26 |
| 2 | Bert (father), Benjy (son) | Father/son | 10 | 14 | LSS | 4 |
| 3 | Cleo | Self | 12 | 31 | LSS | 19 |
| 4 | Diana | Self | 18 | 20 | LSS (AMP TBD) | 2 |
| 5 | Ernie (father), Frank (son) | Father/son | 14 | 15 | RP | 1 |
| 6 | Gloria (mother), Holly (daughter) | Mother/daughter | 13 | 14 | LSS, then AMP | 1 |
| 7 | Ingrid | Self | 21 | 23 | LSS (RP TBD) | 2 |
| 8 | Joy (mother), Kent (father), Levi (son) | Mother/father/son | 10 | 25 | LSS | 15 |
| 9 | Mona (mother), Mandy (daughter) | Mother/daughter | 14 | 18 | AMP | 4 |
| 10 | Nerisa (mother), Nate (son) | Mother/son | 13 | 17 | LSS | 4 |
| 11 | Opal (mother), Olivia (daughter) | Mother/daughter | 10 | 13 | LSS, then RP | 3 |
| 12 | Phoebe | Self | 15 | 27 | LSS, then AMP | 12 |
| 13 | Quentin | Self | 15 | 32 | LSS | 17 |
| 14 | Ramona (mother), Ray (son) | Mother/son | 10 | 16 | LSS, then RP | 6 |
| 15 | Shelley (mother), Ted (son) | Mother/son | 18 | 33 | LSS | 15 |
| 16 | Uma (mother), Vera (daughter) | Mother/daughter | 12 | 14 | LSS | 2 |
| 17 | Wilma | Self | 30 | 36 | RP | 6 |
| 18 | Xena | Self | 33 | 35 | AMP | 2 |
| 19 | Yates (father), Zora (mother), Arthur (son) | Father/mother/son | 11 | 17 | RP | 6 |
| 20 | Brody | Self | 28 | 30 | LSS, then AMP | 2 |
Developing attributes and levels for a discrete choice experiment to design pharmacy-based HIV prevention services for pregnant women in Western Kenya
PP-078 Patient and Stakeholder Preferences and Engagement (PSPE)
Melissa Latigo Mugambi1, Annabell Dollah2, Rosebel Ouda2, Mary Marwa3, Judith Nyakina4, Ben Ochieng3, John Kinuthia5, Jared M Baeten6, Bryan J Weiner7, Grace John Stewart8, Ruanne Vanessa Barnabas9, Brett Hauber10
1Department of Global Health, University of Washington, Seattle, USA
2Washington State University, Global Health Kenya, Nairobi, Kenya
3Kenyatta National Hospital, Nairobi, Kenya
4UW-Kenya, Nairobi, Kenya
5Department of Research and Programs, Kenyatta National Hospital, Nairobi, Kenya
6Departments of Global Health, Medicine, and Epidemiology, University of Washington, Seattle, USA, Gilead Sciences, Foster City, California, USA
7Departments of Global Health and Health Systems and Population Health, University of Washington, Seattle, USA.
8Departments of Global Health, Medicine, and Epidemiology, University of Washington, Seattle, USA
9Harvard Medical School and Division of Infectious Diseases, Massachusetts General Hospital, Boston, USA
10The Comparative Health Outcomes, Policy and Economics (CHOICE) Institute, Department of Pharmacy, University of Washington, Seattle, USA
Purpose: The delivery of HIV prevention services (e.g., HIV testing, pre-exposure prophylaxis (PrEP) initiation and refills, and STI screening) in community pharmacies could address clinic barriers faced by pregnant women such as extended travel and wait times. We conducted a qualitative study in western Kenya to select and prioritize attributes and levels for a discrete choice experiment (DCE) to design pharmacy-based HIV prevention services for pregnant women.
Methods: From March-May 2022, we conducted seven focus group discussions with 52 women of reproductive age. We purposefully recruited women from maternal and child health clinics in Homa Bay and Siaya counties and pharmacies in Kisumu County to include a diverse range of ages and experiences with antenatal care and pharmacy services. We evaluated the importance of 11 attributes through discussions and ranking exercises, including HIV prevention services and service characteristics identified from the literature and ongoing pharmacy-based studies. Discussions were audio-recorded, translated, transcribed, and summarized in debrief reports. We conducted debriefing meetings and analyzed debrief reports to identify and refine essential attributes that would inform decisions on whether to access HIV prevention services from a pharmacy during pregnancy.
Results: Women had a median age of 24 years (IQR: 19, 32). Forty-six percent of women were married or living with a partner, 48% were unemployed, 73% had previously been pregnant, and 40% visited a pharmacy every 2 or 3 months. Table 1 summarizes the initial attributes, levels, and changes made based on the qualitative findings. There was diversity in preferences for the HIV prevention services that a pharmacy should provide. While some women expressed reservations such as privacy and confidentiality, particularly with STI screening and PrEP initiation, other women supported the services because of improved access and convenience during pregnancy. Among the service features, having a health provider was considered the most important, followed by a private room, service fee, and operating hours. Having a health provider and private room primarily addressed women's concerns of privacy, confidentiality, and the need for competent providers in pharmacy settings.
Table 1.
Initial and updated attributes and levels
| Initial attributes | Updated attributes and levels | Rationale for updates |
|---|---|---|
| HIV testing available: Yes; No | HIV testing: Finger prick; Oral swab | Modified to include finger prick and oral swab as levels because HIV testing is available at pharmacies, and women have varying preferences for HIV test type. |
| STI screening available: Yes; No | STI screening: Yes; No | Not modified. Women expressed varying preferences for STI screening availability in pharmacies. STI screening is currently not available in pharmacies, so the DCE's goal would be to assess preference for having this service available. |
| Partner testing available: Yes; No | Partner testing: Yes; No | Not modified. Women largely preferred to have partner testing available in pharmacies. Partner testing is not widely available in pharmacies, so the DCE's goal would be to assess preference for having this service available. |
| PrEP initiation available: Yes; No. PrEP refills available: Yes; No | PrEP: PrEP initiation and refills; PrEP refills only; No | Merged PrEP initiation with PrEP refills. While women largely preferred PrEP refill availability due to convenience, there were concerns about needing support and counseling with initial PrEP dispensing. |
| Facility location: Pharmacy; ANC clinic. Private room available: Yes; No | Facility location: Pharmacy with a private room; Pharmacy without a private room; ANC clinic | Merged facility location and private room because location is correlated with perception of privacy. |
| Health provider: Nurse; Community health worker; Pharmacist | Health provider available: Yes; No | Modified the attribute to indicate whether a health provider is available, defined by the levels yes or no. Women preferred to have competent and trained providers available, which can be achieved irrespective of provider type. |
| Service fee: TBD | Service fee: Free; 300 KES (~USD 3); 500 KES (~USD 5) | Modified levels to include the range of service fee values that women preferred: 50 KES to 500 KES. |
| Operating hours: TBD | Operating hours: 8 am-5 pm (M-F); 7 am-10 pm (M-F); 7 am-10 pm (7 days of the week) | Modified levels to include the range of operating hours that women preferred. |
| Wait time: 10 minutes; 30 minutes; 60 minutes | Removed | Removed wait time because women ranked it low; they indicated that wait time would depend on the type of service provided. They anticipated that pharmacy wait times would be short and not impact their decision to use the pharmacy-based service. Wait time is also correlated with facility location. |
| Follow-up: Text message; Phone | Removed | Removed follow-up because women ranked it low; women had concerns around privacy and indicated that the need for follow-up should be treated on a case-by-case basis. |
Conclusions: Our study is the first step in data collection toward defining attributes and levels for a DCE survey and successfully identified eight preliminary attributes and levels. Expert interviews and pilot testing activities will explore additional feasibility and test understanding of the attributes and levels.
Keywords: Qualitative research; patient preferences; HIV & AIDS; maternal medicine
How to present work productivity loss results from clinical trials for patients and caregivers? A mixed methods approach
PP-079 Patient and Stakeholder Preferences and Engagement (PSPE)
Jacynthe L’Heureux1, Helen McTaggart-Cowan2, Gary Johns3, Lin Chen4, Ted Steiner5, Paige Tocher6, Huiying Sun6, Wei Zhang7
1School of Population and Public Health, University of British Columbia, Vancouver, Canada
2Faculty of Health Sciences, Simon Fraser University, Vancouver, Canada; BC Cancer Research Institute, Vancouver, Canada
3John Molson School of Business, Concordia University, Montreal, Canada; Sauder School of Business, University of British Columbia, Vancouver, Canada
4Patient Voices Network, Vancouver, Canada
5Faculty of Medicine, University of British Columbia, Vancouver, Canada
6Centre for Health Evaluation and Outcome Sciences, St. Paul’s Hospital, Vancouver, Canada
7School of Population and Public Health, University of British Columbia, Vancouver, Canada; Centre for Health Evaluation and Outcome Sciences, St. Paul’s Hospital, Vancouver, Canada
Purpose: To identify and measure which result presentations, describing work productivity loss outcomes (i.e., absenteeism, presenteeism and employment status changes), are most understandable and important to report alongside clinical trials from the perspectives of patients and caregivers.
Methods: We used a two-phased, sequential mixed methods design, guided by patient-oriented research; one patient partner was engaged. First, we conducted one-on-one interviews with patients and caregivers to review the result presentations of work productivity loss outcomes developed by the research team. The result presentations used days lost, difference of days lost between two treatment groups, percentage of time lost, percentage of people with lost days, number of days lost among people with lost days (i.e., “sub-group result”) and cost of lost days. Based on feedback from the interviews, we designed a survey to assess the understandability and importance of the different result presentations and the importance of each work productivity loss outcome. The survey was tested with second-round interviews with a new set of patients and caregivers. Then we administered the survey online to a sample of working Canadian patient and caregiver populations.
Results: 14 patients and 6 caregivers were interviewed. Participants stressed the need for the result presentations to be brief and approachable (e.g., short sentences, no decimal places, easy-to-follow structure), for the work productivity loss outcomes to be explained in lay terms, and for the results to be presented using visual support. 118 patients and 120 caregivers participated in the online survey. For all outcomes, results presented using days lost or cost of lost days were rated very easy or easy to understand by the highest percentage of respondents (82% to 93%), while the "sub-group result" had the lowest percentage (61% to 72%). Similarly, the highest percentages of patients and caregivers identified results presented as days lost and cost of lost days as important to somewhat important to report in clinical trials (81% to 92%), compared with the lowest percentages for the "sub-group results" (69% to 80%). All work productivity outcomes were identified as important to somewhat important (78% to 95%).
Conclusions: Presentations of work productivity loss results as days lost and cost of lost days, using lay terms and visual support, were viewed by patients and caregivers as the easiest to understand and the most important to report in clinical trials.
Keywords: Work productivity loss, mixed methods, patient-reported outcome, patients, caregivers
A Comparison of the Child Health Utility 9D and the Health Utilities Index for Estimating Preferences in Pediatric Inflammatory Bowel Disease
PP-080 Patient and Stakeholder Preferences and Engagement (PSPE)
Wendy J. Ungar1, Naazish S. Bashir5, Thomas D. Walters2, Anne M. Griffiths2, Anthony Otley3, Jeff Critch4
1Program of Child Health Evaluative Sciences, The Hospital for Sick Children Research Institute, Toronto, Canada and Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Canada
2Division of Gastroenterology, Hepatology and Nutrition, The Hospital for Sick Children, Toronto, Canada, and Department of Paediatrics, University of Toronto, Toronto, Canada
3Departments of Paediatrics and Medicine, Dalhousie University, Halifax, Canada, and Division of Gastroenterology & Nutrition, IWK Health Centre, Halifax, Canada
4Department of Pediatrics, Janeway Children’s Health and Rehabilitation Centre, St. John’s, Canada, and Faculty of Medicine, Memorial University, St. John’s, Canada
5Program of Child Health Evaluative Sciences, The Hospital for Sick Children Research Institute, Toronto, Canada
Purpose: With costly treatments such as biologics becoming available for pediatric inflammatory bowel disease (IBD), there is a need for economic evaluations to inform funding decisions. However, assessing health state utilities to calculate quality-adjusted life years (QALYs) is challenging in children, and few instruments are validated for pediatric use. Moreover, multiple clinical measures are used to determine IBD disease activity, complicating the ascertainment of utilities for disease health states. The objective was to compare utilities elicited using the Child Health Utility-9 Dimension (CHU9D) to the Health Utilities Index (HUI) across multiple disease activity scales in pediatric Crohn's disease (CD) and ulcerative colitis (UC).
Methods: Preference-based instruments were administered to 188 children with CD and 83 children with UC aged 6 to 18 years in Ontario, Nova Scotia, and Newfoundland, Canada. Utilities were calculated using CHU9D adult and youth tariffs, and HUI2 and HUI3 algorithms, in children with inactive (quiescent) and active (mild, moderate, and severe) disease as defined by the weighted Pediatric Crohn's Disease Activity Index, the Pediatric Ulcerative Colitis Activity Index, and a Physician Global Assessment. Differences between instruments, tariff sets, and disease activity categories were tested statistically.
Results: In CD and UC, all instruments were able to detect significantly higher utilities for inactive compared to active disease (p<0.05). Mean utilities for quiescent disease ranged from 0.810 (SD 0.169) to 0.916 (SD 0.121) in CD and from 0.766 (SD 0.208) to 0.871 (SD 0.186) in UC across instruments. Active disease mean utilities ranged from 0.694 (SD 0.212) to 0.837 (SD 0.168) in CD and from 0.654 (SD 0.226) to 0.800 (SD 0.128) in UC. CHU9D and HUI were sensitive to differences in disease activity in CD and UC regardless of the clinical scale used, with the CHU9D youth tariff most often displaying the lowest utilities for worse health states.
Conclusions: Both the CHU-9D and HUI captured clinically important differences in disease activity for children with CD and UC. The CHU-9D youth tariff reflects preferences obtained from adolescents using a classification system designed to include quality of life dimensions important to children. Distinct utilities for different IBD disease activity states can be used in health state transition models evaluating the cost-effectiveness of treatments in children with CD or UC.
Keywords: health state preferences, utilities, CHU9D, HUI, child health, pediatric inflammatory bowel disease
A Comparison of Mixed Effects Models for EuroQol 5 dimensions, 3 level Utility Analysis of Larotrectinib for Neurotrophic Tyrosine Kinase Gene Fusion
PP-081 Quantitative Methods and Theoretical Developments (QMTD)
Noman Paracha1, Katherine L Rosettie2, Pinar Bilir2, Chenye Fu2, Jamie Partridge3, Sean Sullivan4
1Bayer Pharmaceuticals, Basel, Switzerland
2IQVIA, Falls Church, VA, USA
3Bayer Pharmaceuticals, Whippany, New Jersey, USA
4The CHOICE Institute, School of Pharmacy, University of Washington, Seattle, WA, USA
Purpose: To compare mixed effects models for estimating EuroQol 5 dimensions, 3 level (EQ-5D-3L) on/off-treatment utilities for patients with neurotrophic tyrosine kinase (NTRK) fusion-positive tumors treated with larotrectinib.
Methods: Health related quality of life (HRQoL) data were collected in two larotrectinib clinical trials among patients with NTRK fusion tumors. NAVIGATE (NCT02576431) included patients aged 12 years and older; HRQoL was assessed using EuroQol 5 dimensions, 5 level (EQ-5D-5L). SCOUT (NCT02637687) included patients under 21 years; HRQoL was assessed using the Pediatric Quality of Life Inventory (PedsQL). Published algorithms were used to convert HRQoL to EQ-5D-3L (adults) and EQ-5D-Y, a proxy for EQ-5D-3L in pediatrics. Patients were pooled across trials and a mixed model for repeated measures (MMRM) was fitted to the pooled sample with EQ-5D-3L utility score as the dependent variable. Fixed effects included time from start of treatment and treatment status (on/off treatment). Within-subject correlated residual errors were modeled using a first-order autoregressive covariance structure. A random slope model (RSM) was fitted to the pooled sample using the same dependent variable and fixed effects as the MMRM. Random intercepts were included for each subject, as well as random slopes for each subject by time, using an unstructured covariance structure. Akaike information criterion (AIC) was used to assess model fit, and a likelihood ratio test (LRT) determined whether differences in model fit were statistically significant.
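As an informal illustration of the model-comparison step (not the authors' code), the sketch below computes AIC and a likelihood-ratio test from the log-likelihoods of two fitted models; the log-likelihood values and parameter counts are hypothetical placeholders, and the MMRM and RSM fits themselves would come from a mixed-model package.

```python
from scipy.stats import chi2

def aic(log_lik, n_params):
    """Akaike information criterion: lower values indicate better fit."""
    return 2 * n_params - 2 * log_lik

def likelihood_ratio_test(ll_restricted, ll_full, df_diff):
    """LRT statistic and chi-square p-value for two nested fits."""
    stat = 2.0 * (ll_full - ll_restricted)
    return stat, chi2.sf(stat, df_diff)

# hypothetical log-likelihoods and parameter counts (placeholders only)
ll_mmrm, k_mmrm = 1360.0, 12
ll_rsm, k_rsm = 1640.0, 8

aic_mmrm = aic(ll_mmrm, k_mmrm)   # 2*12 - 2*1360.0 = -2696.0
aic_rsm = aic(ll_rsm, k_rsm)      # 2*8 - 2*1640.0 = -3264.0
stat, p = likelihood_ratio_test(ll_mmrm, ll_rsm, df_diff=4)
print(aic_mmrm, aic_rsm, stat, p)
```

A lower AIC and a significant LRT, as in the abstract, would both favor the second model.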
Results: 142 and 69 patients from NAVIGATE and SCOUT, respectively, were included in the analysis. In the MMRM model, there was no statistically significant difference between on/off treatment groups (p=0.789). In the RSM, the on-treatment group had a higher utility (0.807; standard error [SE] = 0.018) than the off-treatment group (0.744; SE = 0.018), with a statistically significant difference (p<0.0001). AIC values were -2697 and -3248 for the MMRM and RSM, respectively. The LRT demonstrated that the RSM fit the data significantly better (p<0.001).
Conclusions: The RSM is preferred for estimating on/off treatment utilities because it estimates the average relationship between time and utility across the entire sample rather than estimating utility change from baseline. Additionally, MMRM relies on fixed HRQoL measurement times, yet data collection time points varied in the clinical trials. Finally, because of the prior two points, the RSM fits the data better than the MMRM.
Keywords: mixed model for repeated measures, random slope model, utility analysis, EQ-5D-3L, health technology assessment
MMRM and Random Slope Regression Model Results
| | Assessments (N) | MMRM Least Squares Mean (95% CI) | MMRM SE | Random Slope Least Squares Mean (95% CI) | Random Slope SE |
|---|---|---|---|---|---|
| On-treatment | 1051 | 0.777 (0.754-0.800) | 0.013 | 0.807 (0.772-0.843) | 0.018 |
| Off-treatment | 1414 | 0.781 (0.755-0.806) | 0.011 | 0.744 (0.709-0.779) | 0.018 |
| P-value | | 0.789 | | <0.0001* | |
(*) denotes significance at 0.05; AIC = Akaike information criterion; CI = confidence interval; SE = standard error; MMRM = mixed-effects model repeated measures
Associations between subjective quality and study characteristics in preference research
PP-083 Quantitative Methods and Theoretical Developments (QMTD)
Norah L Crossnohere1, Ilene L Hollin2, John F P Bridges1
1Department of Biomedical Informatics, The Ohio State University
2Department of Health Services Administration and Policy, Temple University
Purpose: To explore the association between quality and study characteristics that have been hypothesized to reflect study quality in preference literature. We hypothesized that these characteristics would independently predict subjective quality even after accounting for PREFS, a widely-used measure of the risk of bias in preference studies.
Methods: We conducted a secondary analysis of 165 studies identified in a recent systematic review of best-worst scaling studies in health. We explored the association between subjective quality, a global measure of study quality on a scale from 1 (lowest) to 10 (highest), and other indicators of quality using uni- and multivariate regression. These indicators included: i) PREFS (total score [0-5], or five individual items [y/n]); ii) policy relevance (1 [lowest] to 10 [highest]); iii) heterogeneity analysis (y/n); iv) the absence of developmental methods (y/n); v) use of an appropriate experimental design (balanced incomplete block design [BIBD], y/n); and vi) sample size (per 100-unit increase).
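The regression step described above can be sketched as follows; the data-generating process, coefficient values, and reduced variable set are invented for illustration and do not reproduce the review's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 165  # number of studies, as in the review

# hypothetical predictors and outcome (illustrative only)
prefs = rng.integers(0, 6, n)     # PREFS total score, 0-5
policy = rng.integers(1, 11, n)   # policy relevance, 1-10
quality = 3.0 + 0.9 * prefs + 0.15 * policy + rng.normal(0, 1, n)

# multivariate OLS via least squares: intercept, PREFS, policy relevance
X = np.column_stack([np.ones(n), prefs, policy])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
resid = quality - X @ beta
r2 = 1 - resid.var() / quality.var()
print(beta, r2)
```

With real data the coefficients would be interpreted as in the Results: a positive PREFS coefficient indicates higher subjective quality per unit of PREFS score, holding the other indicators fixed.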
Results: Subjective quality ranged from 2-10, with a mean of 6.53 (SD=1.8). Subjective quality was positively associated with PREFS total score (coeff=1.31, p<0.001; Table 1a). Among individual PREFS items, using significance tests (1.86, p<0.001) and having a clear explanation of methods (1.68, p<0.001) had the strongest association with subjective quality, followed by clearly reporting findings (1.07, p<0.001), having respondents similar to non-respondents (0.72, p=0.001), and including a preference-related purpose (0.68, p=0.588; Table 1b). Controlling for the association with PREFS, other variables were also associated with subjective quality including policy relevance (0.17, p=0.001), using a BIBD (0.74, p<0.001), not reporting any development methods (-1.95, p=0.020), and sample size (-0.02, p=0.001; Table 1c). Similar associations were observed when controlling for individual PREFS items, and this model had the highest explanatory power (R-squared=0.57; Table 1d).
Conclusions: We identified characteristics associated with study quality that are not included in PREFS. This suggests a need for more comprehensive and consolidated guidance on how to critically appraise the quality of patient preference research. Despite room for improvement, PREFS does have some utility as it allows for comparisons of quality to be made to previously published preference studies.
Keywords: quality; preferences; best-worst scaling
Table.
Associations between subjective quality, PREFS, and other study characteristics
Understanding the Interaction of Multiple Mechanisms of Racial Bias in Clinical Risk Prediction Models: Case Study in Colorectal Cancer Recurrence
PP-084 Quantitative Methods and Theoretical Developments (QMTD)
Sara Khor1, Anirban Basu1, Patrick J Heagerty2, Erin E Hahn3, Eric C Haupt3, Lindsay Joe L Lyons3, Veena Shankaran4, Aasthaa Bansal1
1The Comparative Health Outcomes, Policy, and Economics (CHOICE) Institute, University of Washington, Seattle, USA
2Department of Biostatistics, University of Washington, Seattle, USA
3Southern California Permanente Medical Group, Pasadena, CA, USA
4Fred Hutchinson Cancer Research Center, Seattle, WA, USA
Purpose: Racial biases in risk prediction models can arise from multiple mechanisms, including the use of racially biased proxy outcome labels or non-representative training data. Using colorectal cancer (CRC) recurrence as a case study, we examined multiple interacting mechanisms of bias when developing risk prediction models.
Methods: We used electronic health records data from a large integrated healthcare system to develop a recurrence risk prediction model for adults with CRC after resection, using a Cox proportional hazards model with clinical and demographic variables as predictors (training dataset N=2115). The outcome label was a common proxy for recurrence based on observed healthcare utilization and diagnoses codes (proxy-Y). In a test dataset (N=276), we carried out typical model evaluation by comparing predicted-Y against proxy-Y using the area under the receiver operator curve (AUC). Separately we conducted chart reviews to obtain the gold standard true recurrence status (true-Y) and examined the accuracy of proxy-Y in estimating true-Y using positive and negative predictive values (PPV and NPV). Finally, we evaluated predicted-Y against true-Y using AUC.
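The evaluation metrics named above can be sketched with a minimal example; the toy labels below are invented for illustration (a proxy that over-calls recurrence), not the study's data, and the AUC is the standard rank-based estimate rather than any specific library's implementation.

```python
import numpy as np

def ppv_npv(proxy, true):
    """Accuracy of a proxy outcome label against chart-review truth."""
    proxy, true = np.asarray(proxy, bool), np.asarray(true, bool)
    ppv = (proxy & true).sum() / proxy.sum()        # TP / (TP + FP)
    npv = (~proxy & ~true).sum() / (~proxy).sum()   # TN / (TN + FN)
    return ppv, npv

def auc(score, label):
    """Rank-based AUC: probability a case outranks a control."""
    score, label = np.asarray(score, float), np.asarray(label, bool)
    pos, neg = score[label], score[~label]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

# toy example: the proxy label flags two false positives
true  = np.array([1, 1, 0, 0, 0, 0], bool)
proxy = np.array([1, 1, 1, 1, 0, 0], bool)
print(ppv_npv(proxy, true))  # (0.5, 1.0)
```

Computing these metrics separately within each racial/ethnic subgroup, as the study does, is what surfaces differential proxy error.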
Results: When proxy-Y was used as the outcome label for model training, predicted-Y underestimated recurrence risk for Hispanic patients, potentially due to non-representative training data and/or systemic biases in the predictors. However, we found that proxy-Y had a higher false positive rate among Hispanic versus Non-Hispanic White (NHW) patients when compared to true-Y (PPV among Hispanic patients was 15% lower than among NHW patients; 95% CI: 0.7%-29%), suggesting that the proxy tended to overestimate the recurrence rate among Hispanics. Ultimately, the two biases appeared to partially cancel each other, resulting in better performance of predicted-Y among Hispanic patients despite the lower accuracy of proxy-Y in capturing true recurrence among Hispanics (predicted-Y vs. true-Y AUC: Hispanic = 0.90; NHW = 0.84).
Conclusions: Racially-biased proxy outcome labels can act as a channel through which prediction algorithms reproduce racial discrimination. In our case study, we found that our algorithm incorporated additional racial bias in predicting proxy-Y, potentially due to systemically-biased predictors and non-representativeness of data, and these two sources of biases acted in opposite directions. This research highlights the complex and often unpredictable relationships between multiple sources of bias and the importance of understanding the data-generating process and evaluating and addressing racial bias in the outcome label, predictors, and representativeness of the training data.
Keywords: Algorithmic Bias, health equity, risk prediction, proxy outcome label
Optimal vaccination campaigns under disease transmission and opinion propagation
PP-085 Quantitative Methods and Theoretical Developments (QMTD)
Serin Lee, Shan Liu, Zelda B. Zabinsky
Department of Industrial and Systems Engineering, University of Washington, Seattle, US
Purpose: To combat the recent growth of the anti-vaccination movement and minimize the high burden of infectious diseases, we model the impact of vaccination campaigns that target different groups, in order to identify the most effective use of limited resources.
Methods: We propose a networked compartmental model that couples the dynamics of disease transmission in physical networks with opinion propagation in virtual opinion networks. The model consists of several groups that represent opinion characteristics (e.g., initial proportion of anti-vaccination population). We identify an optimal combination of opinion conversion rates that minimizes deaths and reduces the anti-vaccination population. Decreasing the rate of conversion from pro- to anti- vaccination represents limiting anti-vaccination movements, whereas increasing the rate of conversion from anti- to pro- vaccination represents promoting pro-vaccination campaigns. For illustration, we considered two groups where Group 1 initially consists of 90% anti-vaxxers and Group 2 initially consists of 10% anti-vaxxers. Sensitivity analyses include opinion and physical network structure, anti-vaccination population, vaccine effectiveness, and power of persuasion.
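A drastically simplified sketch in the spirit of the coupled dynamics described above, reduced to a single well-mixed group with invented parameter values (not the paper's model or networks): susceptibles are vaccinated at a rate proportional to the pro-vaccination fraction, while opinion flows between pro- and anti-vaccination at fixed conversion rates.

```python
from scipy.integrate import solve_ivp

# illustrative rates (assumptions, not calibrated values)
beta, gamma = 0.3, 0.1   # transmission and recovery rates
nu = 0.05                # vaccination rate of pro-vaccination susceptibles
k_pa, k_ap = 0.02, 0.01  # pro->anti and anti->pro conversion rates

def rhs(t, y):
    """Coupled SIRV disease dynamics and anti-vaccination fraction a."""
    S, I, R, V, a = y
    infect = beta * S * I
    vacc = nu * (1 - a) * S          # only the pro-vax fraction vaccinates
    da = k_pa * (1 - a) - k_ap * a   # net opinion conversion
    return [-infect - vacc, infect - gamma * I, gamma * I, vacc, da]

# start with 1% infected and 90% anti-vaccination sentiment
sol = solve_ivp(rhs, (0, 200), [0.99, 0.01, 0.0, 0.0, 0.9])
S, I, R, V, a = sol.y[:, -1]
print(f"final vaccinated fraction {V:.2f}, anti-vax fraction {a:.2f}")
```

Raising `k_ap` (a pro-vaccination campaign) or lowering `k_pa` (limiting anti-vaccination movements) shifts the opinion equilibrium and, through the vaccination term, the final epidemic size, which is the trade-off the optimization in the abstract explores.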
Results: Without intervention, when Groups 1 and 2 are isolated in both the physical and opinion networks, opinion towards vaccination becomes polarized, leading to more total deaths than with intervention. When Groups 1 and 2 are fully connected and well mixed, the reduction in deaths achieved by intervention is more pronounced. Targeting a pro-vaccination campaign at Group 1 alone is the most effective strategy for reducing both deaths and the anti-vaccination population, regardless of whether Groups 1 and 2 are connected to each other.
Conclusions: In order to minimize the disease burden, an intervention that focuses on pro-vaccination campaigns for groups with high resistance to vaccination is most effective. This result can be extended to multiple groups in realistic social networks.
Keywords: Vaccination, coupled dynamics, infectious disease, opinion propagation
Consequences of interventions on anti-vaccination population, total vaccinated population, and total deaths. Groups 1 and 2 each have an initial population size of 100,000.
Modeling the Diffusion and Impact of Health Technology Uptake Among Family Members: Application to Cascade Genomic Sequencing
PP-086 Quantitative Methods and Theoretical Developments (QMTD)
Shawn P Garbett1, John Graves2, Gregory F Guzauskas3, David Veenstra3, Josh F Peterson2
1Department of Biostatistics, Vanderbilt University Medical Center
2Department of Health Policy, Vanderbilt University Medical Center
3Department of Pharmacy, University of Washington
Purpose: Decision modeling of testing and preventive screening strategies for genomic conditions often focuses on individual patients, leaving unanswered the potential cascading impacts of screening uptake among patient family networks. Cascade testing of the first-degree relatives of screening-identified probands targets a highly enriched population, with 50% of first-degree relatives expected to carry the causal variant under autosomal dominant inheritance. We developed a modeling approach for estimating the costs and health outcomes of cascade testing to understand how it amplifies the value of a hypothetical genomic screening program.
Methods: To model cascade testing after population genomic screening, we drew upon a Markov model of three autosomal-dominant hereditary conditions (familial hyperlipidemia [FH], hereditary breast and ovarian cancer [HBOC], and Lynch syndrome [LS]) with existing clinical guidelines for preventive and/or ameliorative interventions. The incremental costs and health outcomes of cascade testing were modeled using uptake parameters for whether probands inform their first-degree relatives and whether those relatives subsequently complete the indicated testing. Family structures were simulated using data from the Panel Study of Income Dynamics (PSID), which surveys US-based households on the number of living relatives. We modeled first-degree family member testing ("single cascade") at various efficacy thresholds, and a theoretical maximum "continued cascade" involving extended family. Quality-adjusted life-years (QALYs) and costs were treated as linearly additive across conditions and sequencing scenarios. We computed these functions for potential age cohorts of the proband's relatives.
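The net health benefit metric used in the Results can be sketched as follows; the QALY and cost figures are hypothetical placeholders, not outputs of the study's model.

```python
WTP = 100_000  # willingness-to-pay threshold, $ per QALY

def nhb(qalys, costs, wtp=WTP):
    """Net health benefit in QALYs: NHB = QALYs - costs / WTP."""
    return qalys - costs / wtp

# hypothetical incremental outcomes per 100k screened (illustrative only)
proband_only = nhb(qalys=500.0, costs=30_000_000.0)  # 500 - 300 = 200 QALYs
with_cascade = nhb(qalys=700.0, costs=36_000_000.0)  # 700 - 360 = 340 QALYs
print(with_cascade - proband_only)  # incremental NHB of adding cascade: 140.0
```

A positive incremental NHB at the chosen threshold is equivalent to the strategy being cost-effective at that threshold.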
Results: At a willingness-to-pay threshold of $100k/QALY, first-degree family member cascade testing after screening a population of 30-year-old adults achieved improvements in net health benefit (NHB) of 83, 138, and 541 QALY/100k for single cascade at 15%, 30%, and 100% average efficacy, and a maximum of 1056 QALY/100k incremental NHB for continued cascade over a panel test alone (Figure). Cascade testing also shifted the maximum age at which population screening for hereditary conditions is cost-effective.
Conclusions: Cascade testing of autosomal dominant, heritable conditions improves cost-effectiveness and is dependent on the complex interaction between family structure and age of disease onset and prevention strategy. Importantly, cascade testing expands the age range for cost-effective population screening.
Keywords: Inheritance, Cascading, Genomic Testing, NHB, Cost Effectiveness
Predictive Validity of a Microsimulation Model for Evaluating Long-Term Health and Economic Outcomes of Surgical Treatment for Ischemic Cardiomyopathy
PP-087 Quantitative Methods and Theoretical Developments (QMTD)
Susmita Chennareddy1, Garred S. Greenberg2, José M. Rodriguez Valadez1, Malak Tahsin1, Asem Berkalieva1, Emilia Bagiella1, Gennaro Giustino1, Donna Mancini1, Bart S. Ferket1
1Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, NY, USA
2Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
Purpose: The clinical utility of coronary artery bypass grafting (CABG) in patients with ischemic cardiomyopathy (ICM) has not been clearly established. The Surgical Treatment for Ischemic Cardiomyopathy (STICH) trial reported a small, statistically non-significant relative reduction in 6-year all-cause mortality with CABG plus guideline-directed medical therapy (GDMT) versus GDMT alone, which became statistically significant only after 11 years of follow-up in the STICH Extension Study (STICHES). However, recent advances in GDMT may alter the efficacy of CABG. As such, CABG decision-making in ICM remains complex given trade-offs between high surgical risk, costs, and uncertain long-term benefits. We developed a microsimulation model for estimating long-term outcomes using STICH trial individual participant data and evaluated its predictive validity in STICHES.
Methods: The model was developed in R and TreeAge using time-to-event data from 1,212 STICH trial participants with a median follow-up of 4.7 years (IQR 4.0–5.7) and allowed microsimulation of heart failure (HF) readmissions, CABG in the GDMT strategy, and mortality beyond the 6-year trial duration. We ignored the single repeat CABG in the CABG arm. We used multivariable logistic regression for 120-day HF readmission and mortality risk, combined with flexible survival regression modeling for event rates beyond 120 days. Predictors included pre-specified risk factors, with HF readmission and CABG as time-dependent variables. Parameter uncertainty was estimated using bootstrapping. We assessed the predictive validity of the microsimulations by comparison with observed 11-year cumulative incidences based on STICHES data.
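The two-stage risk structure described above (a short-term risk model followed by longer-term event rates) can be illustrated with a toy discrete-time microsimulation; the constant monthly hazards below are mere placeholders for the study's flexible survival models, and all rates are hypothetical:

```python
import random

random.seed(1)  # reproducible toy run

def simulate_patient(p_death_120d=0.08, monthly_death=0.01,
                     monthly_readmit=0.02, months=132):  # 132 months = 11 years
    """Return (months survived, ever readmitted) for one simulated patient."""
    if random.random() < p_death_120d:          # short-term (120-day) risk
        return 0, False
    readmitted = False
    for m in range(4, months):                  # beyond ~120 days
        if random.random() < monthly_readmit:
            readmitted = True
        if random.random() < monthly_death:
            return m, readmitted
    return months, readmitted

cohort = [simulate_patient() for _ in range(10_000)]
mean_survival = sum(m for m, _ in cohort) / len(cohort)
```

Cumulative incidence curves like those in the Figure would be tallied from such simulated event times across the cohort.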
Results: The model generally showed adequate calibration of predicted 11-year cumulative incidence curves across observed STICHES outcomes (Figure). For the cumulative incidence of HF readmission in the GDMT + CABG arm, the curves of model predictions (dashed line) and observed data (solid line) began to diverge beyond 6 years, although the 95% uncertainty bands still overlapped. The divergence was explained by a differing effect of randomization to GDMT + CABG between the STICH trial data (hazard ratio 0.80, 95% CI: 0.61–1.07) and STICHES (hazard ratio 0.71, 95% CI: 0.56–0.89).
Figure.
Predicted (dashed line) and observed (solid line) 11-year cumulative incidence curves across STICHES outcomes
Conclusions: We demonstrate predictive validity of a microsimulation model for long-term risks of HF readmission, CABG, and mortality in ICM. Our final microsimulation model could be utilized to evaluate the long-term cost-effectiveness of surgical therapy for the treatment of ICM.
Keywords: Heart failure, CABG, Markov modeling
Optimizing HIV viral load testing systems in Kenya based on queueing and optimization theory
PP-088 Quantitative Methods and Theoretical Developments (QMTD)
Yinsheng Wang1, Anjuli Wagner2, Rena Patel3, Leonard Kingwara4, Shan Liu1
1Department of Industrial & Systems Engineering, University of Washington, Seattle, United States
2Department of Global Health, University of Washington, Seattle, United States
3Department of Medicine, University of Washington, Seattle, United States
4National Public Health Laboratory (NPHL), Kenya
Purpose: Health systems in resource-limited settings, including Kenya, have limited capacity to conduct routine HIV viral load (VL) monitoring via centralized lab networks. Systematic approaches are therefore needed to optimize the placement of point-of-care (POC) machines, including hub-and-spoke models in which select “hub” facilities (sites with a POC machine) serve “spoke” facilities (sites that send samples to a hub).
Methods: The decision support framework consisted of an estimation phase followed by an optimization phase. Estimating the total turnaround time for individual spoke facilities involved calculating the transportation time to hub facilities and the batching delay, and modeling the waiting time in the system. To estimate waiting time, we used a queueing model to approximate the real-world processing time in central labs and hub facilities. We estimated the daily testing demand of facilities in western Kenya in 2019 (before the COVID-19 pandemic). We also acquired the distances between facilities in western Kenya through the Google Maps API and calculated the empirical travel times associated with different transportation modes and with weather and road conditions. In the optimization phase, we formulated a large-scale nonlinear integer program (NLIP) that minimizes total turnaround time by optimizing the referral network and the assignment of hub facilities. Because the large-scale NLIP exceeded the capability of current commercial solvers, we developed a solution algorithm based on a tractable approximation scheme.
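As a sketch of the waiting-time component, the expected time a sample spends at a hub or central lab can be approximated with the standard single-server (M/M/1) queueing formula; this illustrates the general approach rather than the study's actual queueing model, and the rates are hypothetical:

```python
# Expected time in system for an M/M/1 queue: W = 1 / (mu - lambda),
# valid only when utilization lambda/mu < 1. Rates are per day.
def mm1_time_in_system(arrival_rate, service_rate):
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: utilization >= 1")
    return 1.0 / (service_rate - arrival_rate)

# e.g. a hub receiving 40 samples/day with capacity for 48/day
print(mm1_time_in_system(40.0, 48.0))  # 0.125 days (3 hours) per sample
```

Summing this waiting time with transportation and batching delays gives the total turnaround time that the optimization phase minimizes.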
Results: We applied the framework to develop both a full model for research purposes and a simplified model for practical use. From the hub perspective, keeping the current 7 hubs and 3 central labs, increasing the number of added hubs from 2 to 4 decreased total transportation time by 22.7% and decreased the expected waiting times at the three central labs by 3.5%, 33.3%, and 12.2%, respectively. From the spoke perspective, we also modeled total turnaround time at the facility level. For example, for the 146 facilities in Kisumu County under a biweekly batching mode, the average turnaround time was around 31 hours, with a standard deviation of 12.8 hours.
Conclusions: We developed a decision support framework that integrates queueing theory with integer programming. Our work also incorporates a practical Excel-based tool, which is undergoing usability testing.
Keywords: HIV testing system, optimization, queueing theory
A Mathematical Modeling Approach to Estimate Behavioral Responses to COVID-19 Mitigation Measures in Minnesota
PP-089 Quantitative Methods and Theoretical Developments (QMTD)
Zongbo Li, Karen M Kuntz, Eva A Enns
Division of Health Policy and Management, University of Minnesota School of Public Health, Minneapolis, USA
Purpose: To use a mathematical model of SARS-CoV-2 transmission, calibrated to COVID-19 outcomes in the state of Minnesota, to estimate changing contact rates between different age groups over time.
Methods: We developed an age-stratified dynamic compartmental model of SARS-CoV-2 transmission in Minnesota from March 23, 2020 to August 20, 2021. We divided the time horizon into ten periods reflecting the timing of major changes in state COVID-19 policies, and for each we estimated a distinct set of age-specific contact reductions, expressed as the percentage of contacts reduced between age groups relative to baseline. We calibrated the model to COVID-19 hospitalization (H_t) and mortality (D_t) trends to estimate age-specific contact reductions (X_t), time-dependent probabilities of dying at home (Y_t), and other time-independent model parameters (θ). We employed the Nelder-Mead algorithm in a sequential calibration approach to identify well-fitting parameter sets for each calibration period as follows:
(1) in period 1, we ran the Nelder-Mead algorithm from 100 different starting points to identify the top 10 best-fitting θ^*, X_1^*, and Y_1^* that maximize the log-likelihood function given the data in period 1: l(θ,X_1,Y_1 | H_1,D_1);
(2) in period 2, we randomly sampled X_2 and Y_2, combined with the results from (1) as starting points, to identify the top 10 best-fitting θ^*, X_1^*, Y_1^*, X_2^*, and Y_2^* that maximize the log-likelihood function given the data through period 2: l(θ,X_1,Y_1,X_2,Y_2 | H_1,H_2,D_1,D_2);
(3) in period t (3 ≤ t ≤ 10), we sampled X_t and Y_t and combined them with previous results to calibrate the top 10 best-fitting X_t^*, Y_t^*, X_(t-1)^*, and Y_(t-1)^* that maximize the log-likelihood function given the data through period t: l(X_t,Y_t,X_(t-1),Y_(t-1) | θ^*,X_1^*,…,X_(t-2)^*,Y_1^*,…,Y_(t-2)^*,H_1,…,H_t,D_1,…,D_t).
In other words, we calibrated the parameters in period t, adjusted the parameters in period t−1, and fixed all parameters before period t−1.
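The sequential scheme in steps (1)–(3) can be illustrated with a toy model in which each period's contact reduction maps directly onto that period's hospitalizations; a simple random-restart local search stands in for the Nelder-Mead algorithm, and the model and data below are invented for illustration:

```python
import random

random.seed(0)

def log_lik(x, h):
    """Gaussian pseudo-log-likelihood: toy model maps a contact reduction x
    in [0, 1] to predicted hospitalizations 100 * (1 - x)."""
    return -(100 * (1 - x) - h) ** 2

def fit(objective, restarts=50, steps=300):
    """Random-restart local search (a stand-in for Nelder-Mead)."""
    best_x, best_f = 0.5, objective(0.5)
    for _ in range(restarts):
        x = random.random()
        for _ in range(steps):
            cand = min(1.0, max(0.0, x + random.gauss(0, 0.02)))
            if objective(cand) > objective(x):
                x = cand
        if objective(x) > best_f:
            best_x, best_f = x, objective(x)
    return best_x

observed_h = [22.6, 39.5, 18.7]   # toy per-period hospitalization data
x_star = []
for t, h in enumerate(observed_h):
    if t >= 1:                    # re-adjust period t-1 before fitting period t
        x_star[t - 1] = fit(lambda x: log_lik(x, observed_h[t - 1]))
    x_star.append(fit(lambda x: log_lik(x, h)))  # earlier periods stay fixed
```

In the toy model the optimum for each period is x_t = 1 − h_t/100, so the sequential fits should land near 0.774, 0.605, and 0.813.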
Results: The age-specific contact reductions and weighted averages in each time period are shown in the Table.
Conclusions: This sequential calibration method has the flexibility to update parameters as new surveillance data arrive and is more computationally efficient than recalibrating all periods jointly. It can estimate effective contact reductions that reflect real behavioral responses to policies, capturing the impact of masking and ventilation in addition to physical distancing and contact frequency, which may not be reflected in mobility or contact-survey data.
Keywords: Calibration, COVID-19, Contacts
Age-specific contact reductions over time
| Period | Dates | Young vs Young | Young vs Middle | Elderly vs Others | Middle vs Middle | Elderly vs Elderly | Weighted Contact Reduction |
|---|---|---|---|---|---|---|---|
| Period 1 | 3/22/2020-3/27/2020 | 22.6% | 22.6% | 22.6% | 22.6% | 22.6% | 22.6% |
| Period 2 | 3/28/2020-5/18/2020 | 86.8% | 82.4% | 31.0% | 59.7% | 42.0% | 60.5% |
| Period 3 | 5/19/2020-6/30/2020 | 74.3% | 77.4% | 77.2% | 89.1% | 78.4% | 81.3% |
| Period 4 | 7/1/2020-8/31/2020 | 78.6% | 71.7% | 54.0% | 81.9% | 64.6% | 72.2% |
| Period 5 | 9/1/2020-11/20/2020 | 81.1% | 75.7% | 8.1% | 73.7% | 9.6% | 56.2% |
| Period 6 | 11/21/2020-12/18/2020 | 81.4% | 75.9% | 41.0% | 90.0% | 61.4% | 73.3% |
| Period 7 | 12/19/2020-2/23/2021 | 84.8% | 71.6% | 57.0% | 87.1% | 70.8% | 76.1% |
| Period 8 | 2/24/2021-4/5/2021 | 89.8% | 85.4% | 41.1% | 51.3% | 11.2% | 58.2% |
| Period 9 | 4/6/2021-6/25/2021 | 90.0% | 90.0% | 73.2% | 88.8% | 68.1% | 84.2% |
| Period 10 | 6/26/2021-8/20/2021 | 89.6% | 88.9% | 51.5% | 69.7% | 55.4% | 71.3% |
This table shows contact reductions between different age groups in the 10 periods. The last column is the weighted average contact reduction based on population structure. We assumed that each group could have a different contact reduction within its own group (e.g., young with young) and with each of the other groups (e.g., young with middle, middle with elderly, etc.). We further assumed that the elderly reduced their contacts with the young and middle age groups by the same amount, and that a single contact reduction parameter applied to all age groups in the first period.
Associating Health-Related Quality of Life with Time Use to Capture Productivity Impacts in Economic Evaluation
PP-090 Applied Health Economics (AHE)
Boshen Jiao, Anirban Basu
The Comparative Health Outcomes, Policy, and Economics (CHOICE) Institute, University of Washington, Seattle, USA
Purpose: The Second Panel on Cost-Effectiveness in Health and Medicine recommends that cost-effectiveness analysis (CEA) explicitly incorporate productivity costs under a societal perspective. However, unless these impacts are measured prospectively in clinical trials, systematic data are often unavailable to include them directly in the CEA modeling effort. We aimed to develop a new approach to capture productivity impacts in CEA, in the absence of direct evidence of these impacts, by associating varying levels of health-related quality of life (HrQoL) with different time uses.
Methods: To associate time uses with HrQoL, we used the 2012–2013 American Time Use Survey (ATUS), when a Well-Being Module (WBM) was additionally collected alongside the ATUS. The WBM measured quality of life using a visual analog scale (VAS). We studied the time-use categories of labor, household activities/caring for household members, caring for non-household members, volunteer activities, medical care, and education. An econometric approach was employed to address three technical issues: 1) correlation across different categories of time use and the share structure of time-use data, 2) measurement error of the VAS in representing HrQoL, and 3) reverse causality between time use and VAS score in a cross-sectional setting. Furthermore, we developed a metamodel-based algorithm to efficiently summarize the numerous estimates from the primary econometric model. Finally, we illustrated the use of our algorithm in an empirical CEA of enzalutamide for elderly men with nonmetastatic castration-resistant prostate cancer.
Results: We provided the coefficient estimates of the metamodel algorithm with their variance-covariance matrix. The table below presents the results of the empirical study. Incorporating the costs of formal and informal labor, household production, and time seeking care during periods of baseline survival decreased the incremental cost-effectiveness ratio (ICER) of enzalutamide by 5%. Including the same types of productivity costs plus non-medical consumption during the periods of additional survival due to enzalutamide decreased the ICER by 23%. Notably, accounting for household production in this elderly population led to a 32% reduction in the ICER. Finally, the ICER decreased by 28% after including all types of costs during both baseline and additional survival.
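The ICER changes in the table follow directly from the definition ICER = incremental cost / incremental QALYs; a minimal sketch that backs the incremental QALYs out of the reference row (an assumption for illustration, since they are not reported directly):

```python
# Reference scenario from the table: incremental cost $150,103, ICER $100,347.
ref_cost, ref_icer = 150_103, 100_347
delta_qalys = ref_cost / ref_icer            # ~1.496 QALYs gained

# "All types" row for baseline survival: incremental cost $143,045.
new_icer = 143_045 / delta_qalys
pct_change = round(100 * (new_icer - ref_icer) / ref_icer)
print(round(new_icer), pct_change)           # 95629 -5, matching the table
```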
Conclusions: Our estimates can serve as a second-best solution to capture the impacts of changes in health on productivity in CEA when direct measurement of such impacts is not available.
Keywords: Cost-effectiveness analysis; productivity; time uses; health-related quality of life
Results of empirical cost-effectiveness analysis of enzalutamide plus androgen deprivation therapy (ADT) versus continued ADT for nonmetastatic castration-resistant prostate cancer
| Survival Period | Type of included costs | Incremental Costs, $ | Incremental cost-effectiveness ratio (ICER), $ per quality-adjusted life year | Change in ICER, (%) |
|---|---|---|---|---|
| Both baseline and additional survival | Excluding productivity and time seeking care | 150,103 | 100,347 | reference |
| Baseline survival | Including formal labor | 148,253 | 99,111 | -1% |
| Baseline survival | Including informal labor | 147,563 | 98,649 | -2% |
| Baseline survival | Including household production | 148,407 | 99,213 | -1% |
| Baseline survival | Including time seeking care | 149,131 | 99,697 | -1% |
| Baseline survival | Including all types of productivity and time seeking care | 143,045 | 95,629 | -5% |
| Additional survival | Including formal labor | 143,988 | 96,259 | -4% |
| Additional survival | Including informal labor | 135,695 | 90,715 | -10% |
| Additional survival | Including household production | 102,011 | 68,197 | -32% |
| Additional survival | Including time seeking care | 146,968 | 98,252 | -2% |
| Additional survival | Including non-medical consumption | 187,626 | 125,432 | +25% |
| Additional survival | Including all types of productivity, time seeking care, and non-medical consumption | 115,876 | 77,466 | -23% |
| Both baseline and additional survival | Including all types of productivity, time seeking care, and non-medical consumption | 108,818 | 72,747 | -28% |
Cost-Effectiveness of Preoperative Topical Antibiotic Prophylaxis for Endophthalmitis following Cataract Surgery
PP-091 Applied Health Economics (AHE)
Tina Felfeli1, Rafael N Miranda2, Jeeventh Kaur3, Clara C Chan4, David Mj Naimark5
1Department of Ophthalmology and Vision Sciences, University of Toronto, Ontario, Canada
2Institute of Health Policy, Management and Evaluation, University of Toronto, Ontario, Canada
3Faculty of Medicine, University of Toronto, Ontario, Canada
4Department of Ophthalmology, Toronto Western Hospital, University Health Network, Ontario, Canada
5Department of Medicine, Sunnybrook Health Sciences Centre, Ontario, Canada
Purpose: To determine the cost-effectiveness of preoperative topical antibiotic prophylaxis for the prevention of endophthalmitis following cataract surgery.
Methods: We developed a decision-analytic microsimulation model to evaluate cost-effectiveness in the setting of a theoretical surgical centre in the United States. The costs and effects of preoperative topical antibiotic prophylaxis were projected over a lifetime horizon in discrete time steps of one year for a simulated cohort of 500,000 adult patients (≥18 years old) requiring cataract surgery. We modelled a reduced risk of developing endophthalmitis among patients receiving prophylaxis relative to no prophylaxis. The model estimated transition probabilities based on efficacy-adjusted visual acuity. Efficacy results were derived from pivotal population-based studies and Cochrane systematic reviews. Healthcare resource utilization, out-of-pocket expenses (2021 United States dollars [USD]), and health utility values were obtained from the literature and discounted at 3% per year. Quality-adjusted life-years (QALYs), lifetime costs, and the incremental cost-effectiveness ratio (ICER) of prophylaxis were estimated. We assumed a cost-effectiveness criterion of ≤$50,000 per QALY gained. Deterministic and probabilistic sensitivity analyses were conducted to examine the impact of alternative model input values on the results.
Results: The mean incidence of endophthalmitis following cataract surgery with preoperative topical antibiotic prophylaxis versus no prophylaxis was 0.034% (95% CI, 0–0.2%) and 0.042% (95% CI, 0–0.3%), respectively; a relative risk reduction of 19.05%. The mean lifetime costs of cataract surgery with prophylaxis and no prophylaxis were $2,486.67 (95% CI, $2,193.61–$2,802.44) and $2,409.03 (95% CI, $2,129.94–$2,706.69), respectively, resulting in an incremental cost of $77.64 for prophylaxis. The QALYs associated with prophylaxis and no prophylaxis were 10.33495 (95% CI, 7.81629–12.38158) and 10.33498 (95% CI, 7.81284–12.38316), respectively. Threshold analyses indicated that prophylaxis would be cost-effective if the incidence of endophthalmitis after cataract surgery were greater than 5.5% or if the price of the preoperative topical antibiotic were less than $0.75.
Conclusions: Assuming a cost-effectiveness criterion of $50,000 per QALY gained, general use of preoperative topical antibiotic prophylaxis is not cost-effective compared to no-prophylaxis for the prevention of endophthalmitis following cataract surgery. Preoperative topical antibiotic prophylaxis, however, would be cost-effective at a higher incidence of endophthalmitis and/or a substantially lower price for prophylaxis.
Keywords: Antibiotics, Cataract Surgery, Cost-Effectiveness, Endophthalmitis, Prophylaxis, Vision
Figure 1.
Model state-transition schematic showing the postoperative complication and vision-loss states that patients undergoing cataract surgery can transition through following assignment to prophylaxis versus no prophylaxis. Notes: In all states, individuals are at risk of death from natural causes. All utilities were calculated based on vision outcomes in each health state.
Danish women want to participate in a hypothetical breast cancer screening with no reduction in mortality, only harms
PP-092 Decision Psychology and Shared Decision Making (DEC)
Eeva Liisa Røssell1, Anne Bo2, Therese Koops Grønborg3, Ivar Sønbø Kristiansen4, Signe Borgquist5, Laura D. Scherer6, Henrik Støvring1
1Department of Public Health, Aarhus University, Aarhus, Denmark
2Social & Health Services and Labour Market, DEFACTUM, Aarhus, Denmark
3Department of Clinical Epidemiology, Aarhus University, Aarhus, Denmark
4Department of Health Management and Health Economics, University of Oslo, Oslo, Norway
5Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
6School of Medicine, University of Colorado, Denver, CO, United States of America
Purpose: Decision aids with balanced information on harms and benefits are recommended to support informed decision making about breast cancer screening. However, informed decision making may be challenged by overly positive attitudes towards cancer screening, likely driven by strong emotions, subjective assessment of screening information, strongly held a priori beliefs and attitudes, and worry, fear, and anxiety about having breast cancer. We hypothesized that a substantial proportion of Danish women would want to participate in screening regardless of the information presented. We therefore aimed to estimate the proportion of Danish women wanting to participate in a hypothetical breast cancer screening offering no reduction in breast cancer death, only harms related to unnecessary treatment.
Methods: We invited a random sample of 768 women aged 44–49 from the non-screening population of the Central Denmark Region to complete an online questionnaire. The questionnaire described a hypothetical screening with no reduction in breast cancer mortality, only harms, and asked women whether they would want to participate in this screening program. Additional questions covered breast cancer worry, perceived likelihood of breast cancer, breast cancer history, health literacy, and their assessment of the hypothetical screening. Data were subsequently linked to register data on socioeconomic factors. Logistic regression analyses were performed to estimate the odds of wanting to participate.
Results: In total, 43.0% of invited women responded to the questionnaire. Of these, 82.3% (95% CI: 77.5–86.5) wanted to participate in the hypothetical screening. More than two-thirds in both groups seemed to understand the presented information. Among women who understood the information, half disbelieved it. The odds of wanting to participate were lower with higher educational level, higher health literacy, higher perceived risk of the hypothetical screening, and understanding of the screening information. The odds of wanting to participate were higher when women seemed to disbelieve the hypothetical screening information (i.e., answered that the screening reduced breast cancer mortality). Further, the odds appeared somewhat higher with higher levels of perceived breast cancer risk and breast cancer worry.
Conclusions: Exceeding our expectations, a large majority of women wanted to participate in a hypothetical screening program with harms but no health benefits. A large proportion understood but disbelieved the screening information. This could indicate that Danish women base their screening decisions on prior beliefs rather than on the screening information presented.
Keywords: breast cancer screening, mammography, decision making, affect, information disbelief, risk information
Dyadic Medical Decision Problems: Motivating the Use of a Dyadic Utility Function
PP-093 Decision Psychology and Shared Decision Making (DEC)
Mauricio Lopez Mendez1, Fernando Alarid Escudero2, Eric Jutkowitz1, Thomas Trikalinos3
1Department of Health Services, Policy and Practice at Brown University School of Public Health, Providence, Rhode Island, USA
2Center for Research and Teaching in Economics (CIDE), Aguascalientes, Mexico
3Department of Health Services, Policy and Practice & Center for Evidence Synthesis in Health (CESH) at Brown University School of Public Health, Providence, Rhode Island, USA
Purpose: Medical decision problems often involve dyads whose health-related outcomes are affected by actions taken by decision- and policy-makers. Standard analyses, however, only consider the consequences of actions from the perspective of a single dyad member. We present an analytic framework that relies on the construction of a dyadic utility function and illustrate situations when the decision between competing alternatives changes depending on the adopted perspective (i.e., individual vs. dyadic perspective).
Methods: We consider a decision problem under uncertainty for a dyad composed of a patient and a caregiver. We simulate consequences as health-related quality of life (HR-QOL) on a continuous absolute scale between 0 and 25 for three possible acts: increase the HR-QOL by 5 units on average for the patient only (S1), for the caregiver only (S2), or do nothing (SQ). The individual utility functions of patient and caregiver over their own HR-QOL are modeled as exponential utility functions. The dyadic utility function is an Archimedean utility copula that combines the patient and caregiver utility functions and their association. We compare optimal decisions based on maximizing expected utility from the patient, caregiver, and dyad perspectives while varying the risk-aversion parameters of the individual utility functions and the association parameter.
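A minimal sketch of this setup: normalized exponential utilities over HR-QOL on [0, 25] with risk-aversion parameter γ, combined here by a simple product rule standing in for the Archimedean utility copula (the copula itself and the association parameter are not reproduced; all numbers are illustrative):

```python
import math

def u_exp(x, gamma, x_max=25.0):
    """Exponential utility over HR-QOL, normalized so u(0)=0 and u(x_max)=1."""
    return (1 - math.exp(-gamma * x)) / (1 - math.exp(-gamma * x_max))

def u_dyad(x_patient, x_caregiver, g_patient, g_caregiver):
    """Product combination of individual utilities (stand-in for the copula)."""
    return u_exp(x_patient, g_patient) * u_exp(x_caregiver, g_caregiver)

# S1 raises the patient's HR-QOL by 5 units; S2 the caregiver's (from 10 each)
s1 = u_dyad(15, 10, 0.2, 0.2)
s2 = u_dyad(10, 15, 0.2, 0.2)
assert abs(s1 - s2) < 1e-12   # equal risk aversion: S1 and S2 are equivalent
```

With unequal risk-aversion parameters the symmetry breaks, which is the mechanism behind the perspective-dependent choices reported in the Results.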
Results: Ignoring the dyadic relation, the patient or caregiver alone would choose the strategy that increases their own expected HR-QOL, disregarding the dependence between their HR-QOL levels and individual risk aversion. Under a dyadic perspective, with no dependence (i.e., zero correlation) and the same risk aversion (0.2), S1 and S2 (0.963, 0.963) are strategically equivalent choices. In the presence of association (high correlation, >|0.97|) and the same risk aversion, S2 yields a slightly higher dyadic expected utility than S1. If the patient is less risk averse than the caregiver, the optimal decision for the dyad is S1 regardless of the correlation; if the patient is more risk averse than the caregiver, the optimal decision for the dyad is S2 regardless of the correlation.
Conclusions: A dyadic utility function correctly assesses tradeoffs between strategies that affect joint outcomes of a dyad. The level of correlation between the joint consequences and the level of risk aversion of the individual utility functions can affect the choice of the best decision for the dyad.
Keywords: dyadic decision problem, shared decision-making, health-utility, utility-copula
Table 1.
Optimal Strategy from each Perspective by Levels of Correlation and Risk Aversion
Table 1 shows the strategy that maximizes the expected utility (*) from each perspective (patient, caregiver, dyad). We explore how the optimal strategy changes as we vary the correlation between the HR-QOL of members of the dyad as well as their individual risk aversion parameters.
Wisdom of the (Expert) Crowd: Performance of aggregation models for fetal heart rate evaluations
PP-094 Decision Psychology and Shared Decision Making (DEC)
Medhini Urs, Christian Luhmann
Department of Psychology, Stony Brook University
Purpose: The cardiotocogram (CTG) provides obstetricians with information about the fetal heart rate (FHR) and uterine activity. Despite widespread use over the last 40 years, the clinical utility of CTG is low; studies have repeatedly demonstrated that clinicians’ interpretations of CTG recordings are unreliable. One strategy to increase the reliability of decisions is to aggregate the judgments of multiple decision makers (leveraging the “wisdom of crowds”). The field of geopolitical forecasting has developed a variety of sophisticated aggregation techniques, permitting ensembles of non-experts to generate impressively accurate forecasts. The current study investigates strategies for aggregating obstetricians’ CTG evaluations. Specifically, the objectives were 1) to determine whether aggregating individual evaluations improves clinical accuracy and 2) to compare the predictive performance of different aggregation methods.
Methods: Using fetal evaluations made by nine obstetricians on CTG tracings from various stages of labor and delivery, taken from the CTU-UHB Intrapartum Cardiotocography Database, we investigated four aggregation techniques that ranged in complexity. Brier scores were used to measure the accuracy of the obstetricians’ judgments (based on FIGO classifications) against the health outcome of the baby, as defined by hypoxia.
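A sketch of the accuracy metric and one simple aggregation scheme: the Brier score (mean squared error of probabilistic judgments; lower is better) and an inverse-Brier skill-weighted average. The judgments and outcomes below are invented, and the study's skill-weighted method may differ:

```python
def brier(probs, outcomes):
    """Mean squared error between judged probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes = [0, 0, 1, 0]                     # 1 = adverse outcome (hypoxia)
judgments = [[0.2, 0.4, 0.7, 0.3],          # per-case P(hypoxia), clinician A
             [0.6, 0.5, 0.9, 0.5]]          # clinician B

weights = [1.0 / brier(p, outcomes) for p in judgments]  # inverse-Brier skill
aggregate = [sum(w * p[i] for w, p in zip(weights, judgments)) / sum(weights)
             for i in range(len(outcomes))]

print(brier(aggregate, outcomes))           # score of the weighted aggregate
```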
Results: Simple aggregation approaches (e.g., simple averaging) were uniformly outperformed by more sophisticated techniques; approaches that weight individuals in proportion to their skill were most accurate. However, due to the low rate of adverse outcomes (i.e., hypoxia), a naive baseline model, which “guesses” that all babies are healthy with high probability, outperformed both the best-performing obstetrician and all aggregation models.
Conclusions: The naïve “guessing” model performed better because of the dramatic miscalibration of the obstetricians: all of them systematically underestimated the health of the babies, and obstetricians were more accurate only to the extent that they underestimated it less. These findings suggest the need for changes to both clinical practice and fetal evaluation guidelines. In particular, it is important to understand where this failure in evaluation occurs (e.g., interpretation and application of FIGO guidelines, or asymmetric costs of over- versus under-estimating the health of the babies).
Keywords: cardiotocogram (CTG) evaluation, obstetrician decision-making, aggregation models, miscalibration of judgments
Longitudinal relationship between parents’ stated vaccine preferences and their child’s vaccination patterns in Shanghai, China
PP-095 Decision Psychology and Shared Decision Making (DEC)
Mengdi Ji1, Zhuoying Huang2, Jia Ren2, Abram L. Wagner1
1Department of Epidemiology, School of Public Health, University of Michigan, Ann Arbor, United States
2Shanghai Disease Control and Prevention, Shanghai, China
Purpose: The study aims to characterize parents by their pediatric vaccination preferences and assess the relationship between stated vaccine preferences and actual vaccination behavior among parents in Shanghai, China.
Methods: We conducted a discrete choice experiment in 2017 among parents of infants aged <3 months to measure the relative value of each attribute in decisions about pediatric vaccines for their children. Parents were sampled at immunization clinics within Shanghai. Five attributes described the hypothetical vaccination options: waiting time, vaccination site (governmental vs. private clinic), number of visits, co-administration, and cost. In five pairs of vaccine profiles, respondents were asked which hypothetical vaccination schedule they would prefer for their children.
We classified individuals by their vaccination preferences using an exploratory latent class analysis (LCA). We then linked these preferences to whether the children received combination vaccines, using vaccination records retrieved in the summer of 2020. Finally, logistic regression analysis was used to explore the relationship between acceptance of combination vaccines and respondents’ LCA class membership.
Results: In total, 589 parents completed the baseline survey, and we were able to link vaccination data for 468 of them (79%). The LCA identified four classes of parents: governmental clinic advocates (20%), careful deciders (45%), convenience-focused (19%), and those preferring less co-administration (16%). In addition, 104 (22%) children received a DTP+Hib combination vaccine, and 210 (45%) received the DTP+Hib+polio combination vaccine. Receipt of a combination vaccine was not significantly related to parents’ preference class.
Conclusions: This longitudinal study linked parents’ stated attitudes toward vaccines with the actual vaccination behavior of their children. The high acceptance of combination vaccines supports exploring new vaccine combinations, which could simplify the vaccination schedule and improve vaccination coverage. Future analyses will also examine the relationship between preference class and the timing and co-administration of vaccines.
Keywords: Decision-making, Pediatric vaccine, combination vaccines, latent class analysis
Factors adversely affecting shared decision making in treatment initiation in resource-limited settings in Tanzania
PP-097 Decision Psychology and Shared Decision Making (DEC)
Shoaib Hassan1,2, Ole Frithjof Norheim4, Tehmina Mustafa5, Bjarne Robberstad3
1Public Health Modeling; Yale Institute for Global Health, Yale School of Public Health
2Centre for International Health, Department of Global Public Health and Primary Care, University of Bergen, Bergen, Norway
3Health Economics, Leadership, and Translational Ethics Research
4Bergen Centre for Ethics and Priority Setting (BCEPS)
5Department of Thoracic Medicine, Haukeland University Hospital, Bergen, Norway
Purpose: Globally, extrapulmonary tuberculosis (EPTB) patients account for approximately 15-20% of the 10 million annual tuberculosis patients. This infectious yet chronic illness is under-researched and difficult to diagnose, which adversely affects shared decision making (DEC) in treatment initiation. This study identifies factors affecting shared DEC.
Methods: A yearlong prospective cohort study collected demographic, socio-economic, health-seeking, and clinical data using a semi-structured questionnaire in patients presenting with symptoms suggestive of EPTB. The data were analyzed as part of a larger project entailing multiple studies. Patients' annual income and diagnostic costs were estimated to ascertain catastrophic expenditures. An asset index signifying socioeconomic status was calculated using the Principal Component Analysis (PCA) technique. Finally, the above analyses were merged using regression techniques to reveal factors responsible for delayed treatment initiation.
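The PCA-based asset index step can be sketched in pure Python: households are scored on the first principal component of their standardized asset indicators. The asset matrix below is illustrative, and a real analysis would use a linear-algebra library rather than hand-rolled power iteration:

```python
# Sketch: a PCA-style asset index; households are scored on the first
# principal component of their (standardized) asset indicators.
# The asset data below are illustrative; the study used its own survey items.

def standardize(variables):
    out = []
    for col in variables:
        n = len(col)
        mean = sum(col) / n
        sd = (sum((x - mean) ** 2 for x in col) / n) ** 0.5 or 1.0
        out.append([(x - mean) / sd for x in col])
    return out

def first_component(variables, iters=200):
    """First eigenvector of the covariance matrix, by power iteration."""
    p, n = len(variables), len(variables[0])
    cov = [[sum(a * b for a, b in zip(variables[i], variables[j])) / n
            for j in range(p)] for i in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Rows: binary asset indicators (e.g., tv, fridge, car); columns: households.
assets = [[1, 1, 0, 0, 1], [1, 0, 0, 0, 1], [1, 0, 0, 0, 0]]
z = standardize(assets)
v = first_component(z)
scores = [sum(v[i] * z[i][h] for i in range(len(z)))
          for h in range(len(z[0]))]   # one asset-index score per household
```

Households are then typically grouped into quintiles of this score to represent socioeconomic status in the regression models.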
Results: Despite knowing EPTB is curable, most patients had a long treatment delay (>2 months), and 15% had delays >6 months. About 58% of the patients suffered catastrophic costs (10% of annual income). On average, patients faced 67 days of reduced working capacity due to illness. During the pre-diagnostic phase of illness, a higher average monthly income was the main predictor of high total expenses among male patients, while such high expenses were associated with reduced working capacity in females. The QALYs of patients with HIV as a comorbidity were significantly lower. Patients with higher education status, HIV-negative status, and inability to reach a major hospital had longer delays. Interestingly, most patients with long delays had lymphadenitis, which is relatively less lethal.
Conclusions: To our knowledge, this is the first study to utilize comprehensive primary data for the under-researched EPTB illness. The factors identified for delayed treatment initiation are controlled by patients as well as health providers. That treatment initiation is linked to affordability for males and to adversely affected working capacity for females is a discouraging finding. While health-seeking behavior is one responsible factor, a large proportion of patients were not diagnosed in a timely manner due to health-provider factors. The picture may be even more daunting given that HIV-negative patients had longer delays; this may reflect serious drawbacks in the health system, where HIV-positive patients had shorter delays only because of additional screening or rapidly advancing illness. The findings will inform existing diagnostic guidelines to support better identification of patients and treatment outcomes.
Keywords: DEC
A Moment-based Learning Approach to Assist Medical Decision Making
PP-098 Decision Psychology and Shared Decision Making (DEC)
Yao Chi Yu1, Jr Shin Li2, Su Hsin Chang3
1Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, USA
2Department of Electrical and Systems Engineering, Division of Biology & Biomedical Sciences, Washington University in St. Louis, St. Louis, USA
3Division of Public Health Sciences, Department of Surgery, Washington University School of Medicine; Division of Biology & Biomedical Sciences, Washington University in St. Louis, St. Louis, USA
Purpose: Medical decision making for a major intervention is complex. We aimed to develop a novel and interpretable moment-based learning approach to help make timely decisions utilizing data from patients’ electronic health records (EHRs).
Methods: We formulate medical decision making as a classification problem in machine learning, where each class represents one possible outcome after a major medical intervention. Individual demographics and clinical variables serve as predictors for the classification task and are normalized patient-wise prior to the major intervention to form a probability distribution. This allows for the proper definition of moments as well as the design of the moment kernel, which is constructed by ranking the feature importance obtained by Hilbert–Schmidt Independence Criterion (HSIC) Lasso. By interpreting the probability distribution through the moment kernel, we design a succinct representation, the moment, in place of the original EHR data. This representation was then used as input to three machine learning (ML) algorithms: 1) logistic regression (LR), 2) neural networks (NNs), and 3) gradient boosting machines (GBMs), with an 80-20% training-testing split, in three datasets: A) the breast cancer dataset from the University of California Irvine (UCI) machine learning repository, B) the weight loss surgery dataset from the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP), and C) the liver transplant dataset from the Organ Procurement and Transplantation Network (OPTN). The post-intervention outcomes of the three datasets are A) recurrence of breast cancer, B) catastrophic events after weight loss surgery (e.g., acute stroke, unresponsiveness, and death), and C) graft failure within 30 days following liver transplantation.
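The core of the moment representation can be sketched as follows: a patient's nonnegative feature vector is normalized into a probability distribution and summarized by its first few moments. This is a simplified illustration; the full method additionally ranks features with HSIC Lasso to build the moment kernel:

```python
# Sketch: the "moment representation" idea. Each patient's nonnegative
# feature vector is normalized into a probability distribution over the
# feature indices, then summarized by its first k moments.
# The feature values below are illustrative.

def moments(features, k=4):
    total = sum(features)
    probs = [f / total for f in features]
    # m-th moment of the feature-index distribution, m = 1..k
    return [sum((i ** m) * p for i, p in enumerate(probs, start=1))
            for m in range(1, k + 1)]

patient = [2.0, 5.0, 1.0, 2.0]   # normalized EHR-derived features
rep = moments(patient)           # compact input for LR / NNs / GBMs
print([round(x, 3) for x in rep])
```

The resulting k-dimensional vector replaces the original high-dimensional EHR features as classifier input, which is what drives the training-time savings reported in the Results.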
Results: Comparable AUC scores and favorable training times were achieved across all datasets under the same ML algorithm using the moment representation (M). The best-performing AUC score pairs (moment representation, original data) for the breast cancer, weight-loss surgery, and liver-transplant datasets are (0.65, 0.75), (0.75, 0.70), and (0.65, 0.63), respectively. The greatest percentages of training time saved across the three datasets were for LR (64%, 99%, and 98%), followed by NNs (51%, 91%, and 59%) and GBMs (19%, 31.4%, and 17%).
Conclusions: We demonstrate that the moment-based learning approach is parsimonious, generalizable, and computationally efficient to aid pre-intervention evaluation.
Keywords: Machine learning, feature selection, moment-based method, electronic health records, medical decision making
Classification performance (AUC score) on testing data and model training time using the proposed moment-based method and the original data.
Natural History of Major Depression in U.S. Adolescents by Age, Sex, and Race-Ethnicity: A Simulation Model
PP-099 Health Services, Outcomes and Policy Research (HSOP)
Tran T Doan1, David W Hutton2, Davene Wright3, Lisa A Prosser4
1Department of Pediatrics, General Academic Pediatrics Division, School of Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
2Department of Health Management and Policy, School of Public Health, University of Michigan, Ann Arbor, Michigan, USA
3Center for Healthcare Research in Pediatrics, Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, Massachusetts, USA
4Susan B. Meister Child Health Evaluation and Research Center, Department of Health Management and Policy, School of Public Health, University of Michigan, Ann Arbor, Michigan, USA
Purpose: At least 1 in 5 adolescents in the United States have experienced major depression. The trajectories of a given cohort of adolescents through depression onset, recovery, and relapse over periods longer than two years are unknown. The purpose of this study is to characterize the experience of adolescent major depression along its natural history over the long term, using a health simulation model.
Methods: Using the latest U.S. national data, we constructed a state-transition model to estimate depression-related outcomes of adolescents over a ten-year period. The modeled population, a hypothetical cohort of 1,000 adolescents starting at age 12, is simulated once a year until age 22. Model inputs, specifically transition probabilities, are estimated by using survival analysis on longitudinal depression data from the National Longitudinal Study of Adolescent to Adult Health from 1994 to 2008. Transition probabilities are then calibrated to cross-sectional depression data from the 2019 National Survey on Drug Use and Health. Model outcomes include proportions of adolescents who are depression-free, have depression, are in recovery, or died from suicide or other causes of death. Outcomes are reported for a U.S. general population of adolescents as well as by sex (boy/girl) and race/ethnicity (White, Latinx, Black, Asian/Pacific Islander, American Indian).
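The state-transition structure can be sketched as a standard Markov cohort simulation. The transition matrix below is illustrative, not the calibrated probabilities from the study:

```python
# Sketch: a yearly state-transition (Markov) cohort model with four states.
# Transition probabilities are illustrative placeholders, not the values
# estimated from Add Health and calibrated to NSDUH in the study.

STATES = ["depression_free", "depressed", "recovered", "dead"]
P = [  # row: from-state, column: to-state; each row sums to 1
    [0.88, 0.12, 0.00, 0.000],
    [0.00, 0.55, 0.44, 0.010],
    [0.05, 0.15, 0.80, 0.000],
    [0.00, 0.00, 0.00, 1.000],
]

def simulate(cohort, years):
    """Propagate state occupancy through `years` annual cycles."""
    for _ in range(years):
        cohort = [sum(cohort[i] * P[i][j] for i in range(len(P)))
                  for j in range(len(P))]
    return cohort

start = [1000.0, 0.0, 0.0, 0.0]  # 1,000 adolescents, depression-free at age 12
final = simulate(start, 10)      # state occupancy at age 22
```

Running separate transition matrices per sex and race/ethnicity group, then averaging with population weights, yields the aggregated cohort outcomes reported in the Table.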
Results: Using the natural history of depression model, we estimate that approximately 35% of adolescents have ever experienced major depression between ages 12 and 22. At age 22, about 65% of individuals are depression-free, 12% have depression, and 23% are in recovery (Table). Older teens tend to be more depressed than younger teens. Girls are much more likely to experience depression than boys (Figure). Adolescents of Hispanic, Latino, or Spanish origin and of White or Caucasian origin are the most likely to have depression among all racial/ethnic groups.
Figure. Proportion of Depression Experience among a Hypothetical Cohort of 1,000 U.S. Adolescents by Sex and Race/Ethnicity
Conclusions: To our knowledge, we describe the first simulation model to illustrate the adolescent depression experience over a period longer than previously described. This model enables simulation of the impact of policies for adolescents and is parameterized to capture differences by sex and race/ethnicity. The impact of depression-based interventions on adolescent outcomes will be assessed in future research. Data disaggregated by race-ethnicity are needed to accurately project outcomes for various demographic groups.
Keywords: adolescent depression, state-transition model, adolescent health, major depression
Table.
Proportions of Depression-Related Model States in a General U.S. Adolescents Cohort (N=1,000)
| Age | Depression-Free Cases | Depression Cases | Recovery Cases | Suicide Deaths | Non-Suicide Deaths |
|---|---|---|---|---|---|
| 12 | 0.896 | 0.104 | 0.000 | 0.000 | 0.000 |
| 13 | 0.734 | 0.246 | 0.020 | 0.00002 | 0.00013 |
| 14 | 0.725 | 0.221 | 0.054 | 0.00007 | 0.00026 |
| 15 | 0.715 | 0.199 | 0.085 | 0.00013 | 0.00041 |
| 16 | 0.705 | 0.180 | 0.114 | 0.00022 | 0.00062 |
| 17 | 0.696 | 0.164 | 0.139 | 0.00031 | 0.00089 |
| 18 | 0.686 | 0.151 | 0.161 | 0.00042 | 0.00127 |
| 19 | 0.677 | 0.140 | 0.181 | 0.00055 | 0.00178 |
| 20 | 0.668 | 0.131 | 0.199 | 0.00069 | 0.00234 |
| 21 | 0.658 | 0.124 | 0.214 | 0.00084 | 0.00298 |
| 22 | 0.649 | 0.118 | 0.229 | 0.00102 | 0.00368 |
Aggregated outcomes for a general U.S. adolescent cohort were based on weighted averages of the 12 demographic groups of varying sex and race/ethnicity combinations. Weighted averages came from the 2019 American Community Survey, Population Counts. Source: American Community Survey. Population Estimates by Age, Sex, Race and Hispanic Origin. 2019. https://www.census.gov/newsroom/press-kits/2020/population-estimates-detailed.html.
Mental health decision making and Polycystic Ovary Syndrome: Results of a systematic review
PP-100 Patient and Stakeholder Preferences and Engagement (PSPE)
Bimbola Olure, Paula Pinzón Hernández, Enav Zusman, Sarah Munro
Department of Obstetrics and Gynaecology, University of British Columbia, Vancouver, Canada
Purpose: Polycystic Ovary Syndrome (PCOS) is the most common endocrine disorder among reproductive-aged people with ovaries, impacting 6 to 10% of this population worldwide. People living with PCOS are at increased risk for mental health conditions including anxiety, depression, and suicide. We conducted a systematic review to understand factors that influence decision making for access to mental health services and mental health decision support needs among people living with PCOS.
Methods: We co-developed the search strategy with a medical subject librarian. Search terms included key words and variations of phrases including ‘polycystic ovary syndrome,’ ‘qualitative research,’ and ‘mental health’. The search databases included MEDLINE, Embase, CINAHL, PsycInfo, and Web of Science, as well as the reference lists of other publications, for studies published from 1900 to June 2021. The search was limited to papers published in English. We conducted the search and organized the results using Covidence software. Two reviewers independently screened abstracts and titles in relation to the criteria for full-text consideration. Data were extracted from studies using an extraction tool developed by the reviewers, including details about the setting, study design, participants, sample size, procedures, outcomes of interest, and key results regarding decision making needs, priorities, preferences, and experiences.
Results: We included 19 publications in our analysis. People living with PCOS feel ‘left out’ of the decision-making process for their choices related to both management of PCOS and mental health. They feel their symptoms and concerns are dismissed and are broadly unsatisfied with their provider-patient relationships. Their decision-making needs largely centre on need for knowledge of options to take charge of their mental health and the need for increased decision support from health care providers. The existing literature also indicates that decision support resources for PCOS should include information on these key attributes important to patients' mental health: coping with negative body image and low self-esteem, coping with loss of feminine self-identity, and social isolation.
Conclusions: This is the first systematic review to investigate mental health decision support needs among people living with PCOS. Results will be used to develop shared decision-making interventions that support patients and providers.
Keywords: decision-support, polycystic ovary syndrome, shared decision making
Assessment of Patients’ and Providers’ Priorities for Outcomes in Obesity Pharmacotherapy using Best-Worst Scaling
PP-101 Patient and Stakeholder Preferences and Engagement (PSPE)
Kyrian Ezendu1, Askal Ali1, Sandra Suther1, Vakaramoko Diaby3, Matthew Dutton1, Hongmei Chi2
1Economic Social & Administrative Pharmacy, Florida A&M University, Tallahassee, United States
2Department of Computer Science, Florida A&M University, Tallahassee, United States
3Department of Pharmaceutical policy & Outcomes, University of Florida, Gainesville, United States
Purpose: To examine patients’ and providers’ priorities for outcomes in the pharmacotherapy of obesity.
Methods: Six outcomes (weight, lipidemic, glycemic, blood pressure, heart rate, and adverse effect outcomes) were identified as key outcomes for the evaluation of weight loss products. With these outcomes, a Best-Worst Scaling study was developed using a Balanced Incomplete Block Design of 10 blocks with a block size of 3. The design was translated into a questionnaire of 10 choice sets, each with 3 unique outcomes. Through an online survey, adults with self-reported obesity and obesity care providers were asked to evaluate the outcomes in each choice set and select the most important and least important outcome when choosing obesity treatment. Using count analysis, the Best-Worst Score of each outcome was estimated as the difference between the outcome's total count as the most important item and its count as the least important item. The mean standardized Best-Worst Scores (Mean.Std.BW) of the outcomes were then estimated and used for priority ranking of the outcomes. Mean.Std.BW ranges from -1 to +1, with higher values indicating higher priority.
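The count analysis can be sketched directly: each outcome's score is its "most important" count minus its "least important" count, standardized by the number of times the outcome was shown. The tallies below are illustrative, not the study's data:

```python
# Sketch: count-based Best-Worst scoring. Each response records the
# outcome picked as most and least important in one choice set.
# Standardized score = (best count - worst count) / times shown.
# The tallies below are illustrative.

def bw_scores(best, worst, shown):
    return {o: (best.get(o, 0) - worst.get(o, 0)) / shown[o]
            for o in shown}

shown = {"weight": 500, "lipidemic": 500, "heart_rate": 500}
best = {"weight": 300, "lipidemic": 40, "heart_rate": 90}
worst = {"weight": 60, "lipidemic": 160, "heart_rate": 240}

scores = bw_scores(best, worst, shown)
ranking = sorted(scores, key=scores.get, reverse=True)
```

With a balanced design, every outcome appears the same number of times per respondent, so the standardized scores are directly comparable across outcomes.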
Results: 137 and 62 complete responses were received from patients and providers, respectively. Overall, weight was the patients' topmost priority (Mean.Std.BW = 0.54), while the lipidemic outcome was the patients' lowest priority (Mean.Std.BW = -0.28). From the providers' perspective, weight was the most important outcome (Mean.Std.BW = 0.64), while heart rate was the least important (Mean.Std.BW = -0.55). In non-comorbid obesity, weight was the most important outcome to patients and providers (Mean.Std.BW = 0.53 and 0.76, respectively). The priorities for outcomes in adults with comorbid obesity varied with the patients' morbidities: cardiometabolic outcomes related to the patient's comorbidities were more important to patients and providers than other outcomes. Except in people with obesity and type II diabetes, patients' priorities for obesity treatment outcomes differed from the providers'.
Conclusions: Treatment priorities for obesity management vary with the patient's comorbidity profile. Although weight outcomes remain the overall goal of obesity treatment, cardiometabolic outcomes could be more important to people with comorbid obesity and could inform the value of an obesity treatment. Patients and providers did not fully agree on the priorities of obesity treatment; therefore, more collaboration and deliberation through shared decision making are needed to address such discordance.
Keywords: Obesity, Preferences, Pharmacotherapy, Patients, Providers, Best-Worst Scaling
Choices for Abortion and Contraception for People in Prison: A Scoping Review
PP-102 Patient and Stakeholder Preferences and Engagement (PSPE)
Martha Paynter1, Paula Pinzón Hernández2, Clare Heggie3, Sarah Munro2
1Department of Family Planning, University of British Columbia, Vancouver, Canada
2Department of Obstetrics & Gynaecology, University of British Columbia, Vancouver, Canada
3IWK Health Centre, Halifax, Nova Scotia, Canada
Purpose: Women are the fastest growing population in prisons in Canada and around the world. Evidence suggests women experiencing incarceration have higher rates of unmet contraceptive need and of abortion than the general public. Incarceration presents multiple potential barriers to accessing abortion and contraception, including prison security protocols, locations, lack of access to care providers, stigma, and low health literacy. We conducted a scoping review to understand the extent and type of evidence on contraception and abortion access for people experiencing criminalization and incarceration, with a focus on understanding people's preferences for access, choices available, and decision support needs.
Methods: We used the Joanna Briggs Institute methodology for scoping reviews. A medical librarian developed the search strategy in CINAHL and translated it for APA PsycInfo, Gender Studies, Medline (Ovid), Embase, Sociological Abstracts, and Social Services Abstracts. The search was conducted in February-March 2022. All citations were uploaded into Covidence. Two reviewers independently screened abstracts and titles in relation to the criteria for full-text consideration. Data were extracted from papers included in the scoping review using a tool developed by the reviewers, including details about the setting, study design, participants, sample size, procedures, outcomes of interest, and key findings.
Results: The search yielded 6,095 titles once duplicates were removed. After title and abstract screening, 133 were considered for full-text assessment, and 54 were included in the review. Most studies used survey methods and were based in the United States. Outcomes of interest in the studies included prison policies governing contraception and geographic distance from prisons to abortion providers, as well as patients' attitudes toward contraception and pregnancy options, desire to access contraception, trust in in-prison health care, contraception use, and education and knowledge needs about sexual and reproductive health.
Conclusions: There is a lack of research conducted outside the US, on abortion experiences, and on interventions to improve sexual health, knowledge, and outcomes. Therefore, there is limited evidence on abortion and contraception preferences and decision support needs among people experiencing incarceration. Available research suggests, for example, the need for decision support during incarceration to prepare people for experiences after release. However, the need for future research remains: there is a critical need to bridge this knowledge gap to ensure safe, evidence-based access to family planning services for this population.
Keywords: Abortion, contraception, family planning, prison, decision-making, patient preferences
Use of a Qualitative Needs Assessment to Develop an Interactive Contraceptive Choice Patient Decision Aid for Diverse Young Adults
PP-103 Patient and Stakeholder Preferences and Engagement (PSPE)
Rose Goueth1, Kelsey Holt2, Karen B Eden3, Aubri S Hoffman4
1Department of Medical Informatics & Clinical Epidemiology, Oregon Health & Science University, Portland, OR, USA
2Department of Family and Community Medicine, University of California, San Francisco, San Francisco, CA, USA
3Department of Medical Informatics & Clinical Epidemiology & Pacific Northwest Evidence-based Practice Center, Oregon Health & Science University, Portland, OR, USA
4Value Institute for Health & Care, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
Purpose: To conduct a needs assessment for the development of an interactive contraceptive decision tool.
Methods: We used a user-centered design approach in accordance with the IPDAS standards to build the contraceptive tool for young adults (18-24 years old) with the capacity to become pregnant. For our needs assessment, we formed an advisory panel of 5 clinicians and 2 female patient stakeholders, conducted a clinician focus group session, and completed semi-structured interviews with 15 family planning-focused clinicians. To analyze our interview data, we used an inductive content analysis approach to reveal themes. To curate the educational information for the decision aid, we conducted a clinician focus group. We performed iterative rounds of testing with the advisory panel and 6 beta testers to finalize the tool for pilot testing.
Results: Stakeholders identified important steps within the contraceptive decision making process, created sexual health education content to inform contraceptive decision making, and identified important decision process data to form the decision making logic. Interviews revealed 3 decision-making stages and provided suggestions for supporting patients at each stage. Clinicians identified 6 important sexual health topics aligned with national comprehensive sexuality education guidelines. Stakeholders identified information and factors important to contraceptive decision making (e.g., use of pronouns, previous methods tried) that inform the decision making logic for the tool. Beta testers were satisfied with the tool and suggested improvements (e.g., additional visuals, addition of sexual health topics) for future refinements.
Conclusions: This needs assessment informed the creation of decision aid logic, sexual health education pieces, and the overall contraceptive decision making process using a user-centered design approach. The development resulted in the formation of an interactive contraceptive decision making tool now being evaluated in a community family planning clinic for acceptability and changes in knowledge and decisional conflict.
Keywords: reproductive health, contraceptive, decision making, qualitative data, needs assessment, user centered design
Assessing Applicability of Patient Preference Phenotypes to Clinical Decision-Making for Symptomatic Peripheral Artery Disease
PP-104 Patient and Stakeholder Preferences and Engagement (PSPE)
Sonali M. Reddy BA1, Chloe A. Powell MD1, Gloria Y. Kim MD, MPH1, Nicholas H. Osborne MD, MS1, Sunghee Lee Ph.D.2, Mick Couper Ph.D.2, Matthew A. Corriere MD, MS1
1Section of Vascular Surgery, Department of Surgery, University of Michigan, Ann Arbor, USA
2Institute for Social Research, University of Michigan, Ann Arbor, USA
Purpose: We have previously identified latent treatment preference “phenotypes” among patients with symptomatic peripheral artery disease (PAD). The objective of this ongoing pilot study is to evaluate applicability of preference phenotype information to clinical management and to understand relative utilities of different approaches to the content, distribution, and formatting of preference information summaries.
Methods: Healthcare providers who treat PAD were identified through medical records and a quality collaborative and recruited by email. Preference phenotype applicability to treatment selection was evaluated through survey vignettes. A discrete choice experiment evaluated process issues related to obtaining and reviewing elicited preference information based on 3 attributes: 1) content (attribute importances, gist summary, or both); 2) distribution (all patients, selected patients based on specific circumstances, or patient-driven); and 3) format for review (paper document, electronic document, or verbal report from the patient). Vignette responses were assessed based on response frequencies and categorical tests. Discrete choice results were used to calculate attribute utilities and relative importance scores.
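Converting estimated part-worth utilities into relative importance scores can be sketched with the common range-based definition: each attribute's importance is the range of its level utilities as a share of the total range across attributes. The utilities below are illustrative, not the study's estimates:

```python
# Sketch: attribute relative importance from part-worth utilities.
# importance(attribute) = utility range of its levels / sum of all ranges.
# The utility values (and level orderings) below are illustrative.

def relative_importance(utilities):
    ranges = {a: max(u) - min(u) for a, u in utilities.items()}
    total = sum(ranges.values())
    return {a: r / total for a, r in ranges.items()}

utilities = {
    "content":      [0.10, -0.05, 0.15],   # importances / gist / both
    "distribution": [0.40, -0.20, -0.10],  # all / selected / patient-driven
    "format":       [0.05, 0.00, -0.05],   # paper / electronic / verbal
}
importance = relative_importance(utilities)
```

By construction the importance scores sum to 1, so they can be read directly as each attribute's share of the total preference weight.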
Results: Participants (N=37) had a mean age of 44±14 years and 12±8 years of experience; 36% were women. Specialties included Vascular Surgery (64%), Cardiology (16%), Interventional Radiology (8%), Vascular Medicine (4%), Family Medicine (4%), and Physician Assistant (4%). Concordant choices were evident for all preference phenotypes. Surgical revascularization was favored for chronic limb-threatening ischemia (CLTI) when durability was most important (85% of responses, P<0.001), but less frequently when technical success was most important (70% versus 15% for endovascular revascularization, P<0.001). Endovascular treatment was preferred when the intervention attribute dominated and the vignette indicated a preference to avoid surgery (74% vs. 15% palliative management, 2% surgical, and 1% amputation; P<0.001). Responses were most heterogeneous for claudication symptoms when technical success was the most important attribute (55% chose revascularization; P=0.003). Routine preference elicitation among all patients had superior utility vs. elective or patient-driven approaches (P<0.01). No utility differences were observed between preference summary content or formatting options (P>0.05) (Figure).
Conclusions: Patient preferences are applicable to PAD treatment selection, and providers favor routine preference elicitation from patients. No clear utility advantages were observed between different approaches to preference summary formatting or distribution.
Keywords: Peripheral Artery Disease, Patient Preferences, Discrete Choice Experiment
Survey Results for Formatting, Distribution, and Content of Patient Preference Results and Elicitation Tools
Factors Associated with Biosimilar Coverage Restrictions among US Commercial Health Plans
PP-105 Patient and Stakeholder Preferences and Engagement (PSPE)
Tianzhou Yu1, Shihan Jin1, Chang Li2, Jakub P Hlávka3
1Department of Pharmaceutical and Health Economics, School of Pharmacy, University of Southern California, Los Angeles, CA, United States
2Department of Economics, University of Southern California, Los Angeles, CA, United States
3Schaeffer Center for Health Policy & Economics, School of Pharmacy, University of Southern California, Los Angeles, CA, USA & Department of Health Policy and Management, Price School of Public Policy, University of Southern California, Los Angeles, CA, USA
Purpose: Biosimilars were introduced to promote competition with existing biologics, in the hope of curbing the rise in health care spending on biologics. However, the adoption of biosimilars is stymied by coverage restrictions imposed by health plans. This study explored the drivers of biosimilar coverage relative to their reference products among commercial plans in the U.S.
Methods: We drew on the Tufts Medical Center Specialty Drug Evidence and Coverage database which contains coverage decisions for commercially available biosimilars (n=1,181). The biosimilar coverage decisions were categorized as either “more restrictive than reference product” or “not more restrictive than reference product”. We also drew on the Tufts Medical Center Cost-Effectiveness Analysis Registry and the IBM Micromedex Red Book for list prices. We used a multivariate logistic regression to examine the association between coverage restrictiveness and a number of potential drivers of coverage.
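The direction and uncertainty of a single driver can be illustrated with an unadjusted odds ratio and Wald 95% CI from a 2x2 table. The counts below are illustrative; the study's estimates come from a multivariate logistic regression adjusting for all factors jointly:

```python
import math

# Sketch: unadjusted odds ratio with a Wald 95% CI for one candidate
# driver of restrictiveness. The counts are illustrative placeholders.

def odds_ratio_ci(a, b, c, d):
    """2x2 table: a,b = restricted/not restricted with the factor;
    c,d = restricted/not restricted without the factor."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)      # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# e.g., factor = "pediatric indication" (hypothetical counts)
or_, lo, hi = odds_ratio_ci(30, 40, 199, 912)
```

A multivariate logistic regression reports the same exponentiated coefficients, but adjusted for the other covariates in the model.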
Results: Compared to the reference product, health plans imposed restrictions on biosimilar coverage in 229 (19.4%) of the decisions. Plans were more likely to restrict biosimilar coverage for pediatric populations (OR: 10.247, 95% CI: 3.512-29.892) and for diseases with prevalence > 1,000,000 in the US (OR: 2.044, 95% CI: 1.056-3.958). Plans were less likely to impose restrictions on the biosimilar-indication pair if the biosimilar was indicated for cancer treatment (OR: 0.020, 95% CI: 0.009-0.043), if the product was the first biosimilar (OR: 0.228, 95% CI: 0.120-0.432), if the biosimilar had two competitors (reference product included) (OR: 0.061, 95% CI: 0.007-0.561), if the biosimilar could generate an annual saving > $15,000 per patient (OR: 0.174, 95% CI: 0.059-0.518), if the biosimilar's reference product was restricted by the plan (OR: 0.065, 95% CI: 0.038-0.109), or if a cost-effectiveness measure was not available (OR: 0.074, 95% CI: 0.027-0.207).
Conclusions: Our study provides insights into health plans' decision-making processes and identifies multiple factors associated with plans' decisions to impose restrictions on biosimilar coverage compared with the reference product.
Keywords: Biosimilar, Coverage decision, Health plans
Variation in biosimilar coverage by included health plans
| Health plan | Payer size | Coverage policies, N | Coverage no more restrictive than the reference product, N (%) | Coverage more restrictive than the reference product, N (%) |
|---|---|---|---|---|
| Health plan 1 | National | 76 | 54 (71.1) | 22 (28.9) |
| Health plan 2 | National | 55 | 37 (67.3) | 18 (32.7) |
| Health plan 3 | Regional | 73 | 63 (86.3) | 10 (13.7) |
| Health plan 4 | Regional | 75 | 73 (97.3) | 2 (2.7) |
| Health plan 5 | Regional | 60 | 57 (95.0) | 3 (5.0) |
| Health plan 6 | Regional | 72 | 64 (88.9) | 8 (11.1) |
| Health plan 7 | Regional | 76 | 74 (97.4) | 2 (2.6) |
| Health plan 8 | Regional | 66 | 50 (75.8) | 16 (24.2) |
| Health plan 9 | Regional | 66 | 42 (63.6) | 24 (36.4) |
| Health plan 10 | National | 68 | 68 (100.0) | 0 (0.0) |
| Health plan 11 | National | 76 | 76 (100.0) | 0 (0.0) |
| Health plan 12 | Regional | 64 | 39 (60.9) | 25 (39.1) |
| Health plan 13 | National | 60 | 43 (71.7) | 17 (28.3) |
| Health plan 14 | Regional | 76 | 59 (77.6) | 17 (22.4) |
| Health plan 15 | National | 76 | 41 (53.9) | 35 (46.1) |
| Health plan 16 | Regional | 66 | 39 (59.1) | 27 (40.9) |
| Health plan 17 | National | 76 | 73 (96.1) | 3 (3.9) |
| Total | - | 1,181 | 952 (80.6) | 229 (19.4) |
Making Sexual & Reproductive Health Accessible for Newcomer Youth in BC
PP-106 Patient and Stakeholder Preferences and Engagement (PSPE)
Zeba Khan1, Michelle Fortin2, Sarah Munro1
1Department of Obstetrics & Gynaecology, University of British Columbia, Vancouver, Canada
2Options for Sexual Health, Vancouver, Canada
Purpose: We sought to explore the role of settlement service organizations in supporting equitable access to sexual and reproductive health (SRH) care for newcomer youth and to co-develop an implementation pathway for interventions that support SRH decision making in this population.
Methods: This was Phase 1 of a community-based participatory action project conducted in partnership with Options for Sexual Health (British Columbia’s Planned Parenthood). Participants were recruited using a purposeful sampling framework and snowball sampling to ensure inclusion of a range of rural and urban settlement service organizations. We recorded virtual focus groups and analyzed the video recordings as well as the results of graphic facilitation. We conducted a reflexive thematic analysis and presented results back to participants in the form of an infographic.
Results: We conducted two virtual focus groups, with 12 participants. Results highlighted that community organizations providing settlement services are often the first point of contact and a trusted source for newcomer youth and their families looking for SRH information. These organizations are well positioned to provide language support to make SRH care accessible and provide culturally-safe decision support. However, most settlement service organizations lack formal training or programming to support youth in making decisions regarding SRH care. Leveraging partnerships with health organizations focussing on SRH care can lead to development of culturally-safe pathways to interventions that support SRH decision making in this population. Implementation strategies identified include providing comprehensive resources to bolster staff awareness of newcomer youth needs in seeking SRH care, and brief training workshops to increase settlement service provider knowledge of principles for inter-professional shared decision-making teams in SRH.
Conclusions: We demonstrate how to engage with community-based organizations in development of implementation pathways for shared decision making interventions. Results of our research will be used to inform interviews with newcomer youth and healthcare providers, and to co-develop and implement a digital tool that supports shared decision making for SRH care.
Keywords: sexual and reproductive care, immigrant youth, knowledge translation
A Novel Spatio-Temporal Modeling Framework for Improving Public Health Surveillance: An Application to Opioid-related Drug Overdose Deaths
PP-107 Quantitative Methods and Theoretical Developments (QMTD)
Che-Yi Liao1, Gian-Gabriel Garcia1, Kamran Paynabar1, Mohammad Jalali2
1H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, USA
2Massachusetts General Hospital Institute for Technology Assessment, Harvard Medical School, Boston & Sloan School of Management, Massachusetts Institute of Technology, Cambridge
Purpose: Public health surveillance efforts, critical to planning for policy interventions, often can be enhanced by developing decision-support tools capable of identifying the emerging trends from available data. For example, surveillance of the opioid crisis, one of the most severe public health crises in US history, has been complicated by ever-changing drug overdose trends across various geographic regions and population groups. To facilitate the design of effective overdose prevention strategies, it is critical to construct accurate space-time forecasting systems capable of modeling complex interactions and changes in the underlying trends of opioid overdose deaths (OOD).
Methods: We designed a novel Multivariate Spatio-Temporal Hawkes Process with a dynamic network structure (D-MSTHP) to model spatiotemporal dynamics of OOD. Combining covariates from local communities, D-MSTHP can model the complex OOD trends by quantifying the future influence of each overdose death across space, time, and substance. Each node in the D-MSTHP network represents a unique community-cause-of-death combination while the directed edges represent the dynamic influences between these nodes over time.
We validated D-MSTHP using individual-level death data recorded from 01/01/2015 to 09/09/2021 from the Massachusetts Registry of Vital Records and Statistics, using ICD-10-CM codes to identify OOD. Drawing on other publicly available data, we further parameterized D-MSTHP using county-level demographic features. Using multi-step-ahead cross-validation, we evaluated the model’s accuracy in predicting OOD by cause-of-death and county multiple months into the future. We compared predictions against the Vector Auto-Regression (VAR) model, which is frequently used for modeling disease transmission networks. Finally, from the D-MSTHP dynamic network structure, we assessed the strengths of the causal connections between these nodes.
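For readers unfamiliar with Hawkes processes, the excitation mechanism can be sketched in a few lines. The following is a generic multivariate Hawkes conditional intensity with an exponential kernel, not the authors' D-MSTHP (which additionally learns a dynamic network structure and incorporates community covariates); all parameter values are hypothetical.

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of each node of a multivariate Hawkes process.

    t      -- evaluation time
    events -- list of (event_time, node_index) pairs with event_time < t
    mu     -- baseline rates, shape (n_nodes,)
    alpha  -- excitation matrix; alpha[i, j] = influence of node j on node i
    beta   -- decay rate of the exponential excitation kernel
    """
    lam = np.array(mu, dtype=float)
    for t_k, j in events:
        if t_k < t:
            # Each past event in node j raises the intensity of every node i
            # by alpha[i, j] * beta, decaying exponentially over time.
            lam += alpha[:, j] * beta * np.exp(-beta * (t - t_k))
    return lam
```

With no past events the intensity reduces to the baseline rates mu; each past overdose death transiently raises the intensity of the nodes it influences.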
Results: In both short- and long-term predictions of heroin- and fentanyl-related OOD across all counties, D-MSTHP achieved a 40-43% reduction in root-mean-squared-error compared to the VAR (Figure 1). Furthermore, the learned dynamic network structure revealed that, for a given local community, the influence of OOD involving the same drug class in nearby locations outweighed that of OOD involving different drug classes.
Figure 1.
Multi-month-ahead predictive accuracy of heroin- and fentanyl-involved overdose death trends across all counties in Massachusetts.
Conclusions: We created and assessed an interpretable spatio-temporal predictive model incorporating geographic, socioeconomic, and behavioral variations in individual-level OOD incidents and spatial covariates of local communities. While D-MSTHP has shown promise for improving surveillance of the opioid crisis, it can also be broadly applied to other public health surveillance of diseases with evident incident-level influence across space and time.
Keywords: Public Health Surveillance, Opioid Crisis, Spatio-Temporal Modeling, Hawkes Process, Mutually Exciting Point Process
Component Network Meta-Analysis Including Individual Participant and Summary Aggregate Data to Tailor Health Policy Recommendations to Sub-Populations
PP-108 Quantitative Methods and Theoretical Developments (QMTD)
Ellesha Smith1, Laura Gray1, Keith Abrams2, Suzanne Freeman1, Stephanie Hubbard1
1Department of Health Sciences, University of Leicester, United Kingdom
2Department of Statistics, University of Warwick, United Kingdom
Purpose: In many settings, particularly public health, interventions are ‘complex’ meaning that they are comprised of multiple components. This research demonstrates how evidence on complex interventions can be synthesised to provide better information to policy decision makers for tailoring evidence-based public health intervention recommendations to sub-populations and inform future research.
Methods: Component network meta-analysis (CNMA) is an evidence synthesis method that can identify combinations of intervention components that are effective, including combinations not evaluated in previous research. CNMA has recently been extended to include summary aggregate data (SAD) level covariates. However, this method is subject to ecological bias. This research project developed a novel adaptation of CNMA including covariates using individual participant data (IPD) and SAD to identify which interventions should be recommended for particular sub-populations, adjusting for ecological bias. The method was applied to a Cochrane Collaboration systematic review that investigated interventions for promoting the use of safety practices for preventing childhood poisonings at home. The interventions were made up of the following components: usual care (UC), education (Ed), free or low cost equipment (Eq), installation (In) and home safety inspection (HSI). Ethnicity was included as a binary covariate.
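The additive assumption at the heart of CNMA, that an arm's effect is the sum of its components' effects, can be illustrated with a toy least-squares fit. The component labels follow the abstract, but the arm-level log-odds ratios below are invented for illustration and are not estimates from the Cochrane review.

```python
import numpy as np

# Component labels from the abstract:
# Ed = education, Eq = equipment, In = installation, HSI = home safety inspection
components = ["Ed", "Eq", "In", "HSI"]

def design_row(arm):
    """Indicator vector marking which components an intervention arm contains."""
    return [1.0 if c in arm else 0.0 for c in components]

# Additive CNMA assumption: an arm's log-odds ratio versus usual care is the
# sum of its component effects.  Fit the component effects by least squares.
arms = [["Ed"], ["Ed", "Eq"], ["Ed", "Eq", "HSI"], ["Eq", "In"]]
log_or = np.array([0.20, 0.46, 0.92, 0.55])  # hypothetical arm-level estimates
X = np.array([design_row(a) for a in arms])
beta, *_ = np.linalg.lstsq(X, log_or, rcond=None)

# Predict an untested combination, e.g. Ed + In, from the component effects.
pred = float(np.array(design_row(["Ed", "In"])) @ beta)
```

This is how CNMA can rank combinations never evaluated head-to-head in trials; the Bayesian IPD+SAD extension described above additionally models covariates at both levels to adjust for ecological bias.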
Results: Network meta-analysis and CNMA of SAD identified the Ed+Eq+HSI intervention as the most effective (odds ratio: 2.52, 95% credible interval (1.16, 6.62)). The adapted CNMA method combining IPD and SAD, including the ethnicity covariate, found no evidence that the Ed+Eq+HSI intervention is effective. There was evidence that Ed+Eq was effective (odds ratio: 1.58, 95% credible interval (1.02, 2.46)), but there was no evidence of an association between the covariate at the individual or study level, meaning there was no evidence of an intervention effect difference between black or minority ethnicity individuals compared to white individuals. There was evidence of ecological bias for this intervention effect.
Conclusions: CNMA has the potential to identify intervention combinations that have not been tested in trials but may be more effective. CNMA of IPD and SAD makes use of all available data, including individual participant data (IPD) where it is available, to provide better evidence-based health policy recommendations to healthcare providers and help determine which interventions are best for whom. This method also reduces uncertainty in the component effect estimates and adjusts for ecological bias.
Keywords: Evidence synthesis, network meta-analysis, complex interventions, ecological bias
Calibration of a Microsimulation Disease Model and the Effect on Model Predictions
PP-109 Quantitative Methods and Theoretical Developments (QMTD)
Hilliene J van de Schootbrugge-Vandermeer1, Luuk A van Duuren1, Lucie de Jonge1, Monique E van Leerdam2, Esther Toes-Zoutendijk1, Iris Lansdorp-Vogelaar1
1Department of Public Health, Erasmus MC University Medical Center, Rotterdam, The Netherlands
2Department of Gastroenterology and Hepatology, Netherlands Cancer Institute – Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
Purpose: To calibrate the Microsimulation Screening Analysis (MISCAN)-Colon model using most recent data of the national colorectal cancer (CRC) screening program in the Netherlands, and to evaluate the effect of the model update on model predictions.
Methods: A nationwide CRC screening program was implemented in the Netherlands in 2014 using fecal immunochemical testing (FIT), targeting individuals aged 55-75. FIT-positive screenees were referred for a colonoscopy. MISCAN-Colon was calibrated on observed screen-detected CRC rates and interval CRC (CRC after a negative primary screening but before the invitation to the next screening) rates between 2014 and 2020. The parameters that were calibrated included the mean and variance of dwell times of preclinical cancers for each cancer stage, as well as the sensitivity of the FIT depending on the time to clinical diagnosis if an individual had not been screened. A genetic algorithm was used to minimize the Poisson deviance between model predictions and observed data. After calibration, we evaluated the change in predicted CRC incidence and mortality between 2020 and 2050 for the entire Dutch population alive in 2014.
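The calibration objective named above, the Poisson deviance between observed and model-predicted counts, has a compact closed form. The sketch below is generic (not the MISCAN-Colon implementation) and uses the standard convention that the obs·log(obs/pred) term vanishes when the observed count is zero.

```python
import numpy as np

def poisson_deviance(observed, predicted):
    """Poisson deviance between observed counts and positive predicted counts.

    D = 2 * sum[ obs * log(obs / pred) - (obs - pred) ],
    with the log term taken as 0 whenever obs == 0.
    """
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        # The log term is only defined (and nonzero) for positive observed counts
        term = np.where(obs > 0, obs * np.log(obs / pred), 0.0)
    return 2.0 * np.sum(term - (obs - pred))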
Results: The estimated average dwell times decreased from 2.5, 2.5, 3.7 and 1.5 years to 2.0, 0.9, 2.4 and 0.8 years for preclinical CRC stages I, II, III and IV, respectively. The distribution of dwell times across individuals changed from standard exponential to Weibull. The estimated test sensitivity shortly before diagnosis increased from 66.2% to 86.1%, while the estimated sensitivity long before diagnosis decreased from 30.4% to 23.4%. Model calibration reduced the total deviance of all targets from 6979 to 2026 (a 71.0% reduction). As a consequence of the model update, predicted cumulative long-term CRC incidence increased by 1.7%, whereas predicted cumulative CRC mortality increased by 9%.
Conclusions: Recalibration of MISCAN-Colon using real-life data from an ongoing CRC screening program improved the accuracy of model parameters, indicating that the model represents reality better. The calibration should be repeated when additional data on more subsequent screening rounds is available. Despite the change in parameters, long-term predictions remained similar after calibration. Model predictions should be verified in the future.
Keywords: Microsimulation disease model, model calibration, model predictions
Beyond instantaneous partnerships: capturing partnership-level herd effects in compartmental models of sexually transmitted infections
PP-110 Quantitative Methods and Theoretical Developments (QMTD)
Jesse Knight1, Sharmistha Mishra2
1Institute of Medical Science, University of Toronto; MAP Centre for Urban Health Solutions, Unity Health Toronto
2Institute of Medical Science, University of Toronto; MAP Centre for Urban Health Solutions, Unity Health Toronto; Dalla Lana School of Public Health, University of Toronto; Division of Infectious Diseases, Department of Medicine, University of Toronto
Purpose: For decades, standard compartmental models of sexually transmitted infections have simulated sexual partnerships as "instantaneous". However, this instantaneous approach may bias model dynamics, especially those related to partnership duration. We developed a new approach to simulating sexual partnerships in compartmental models which moves beyond the instantaneous paradigm, and avoids such biases.
Methods: In the instantaneous approach, people change partners at least once per year, or more frequently for shorter and/or simultaneous partnerships per-person. Then, a fraction of people changing partners become infected, reflecting the cumulative probability of transmission per-partnership/year (Figure a, left). In the proposed approach, we define a rate of transmission per-partnership, reflecting sex frequency; such rates can be summed across simultaneous partnerships per-person. Then, we track the number of people who recently transmitted or acquired infection; we decrease by one the effective numbers of partners among these people until they form a new partnership, determined by partnership duration (Figure a, right). We integrated the proposed and instantaneous approaches, separately, into an existing model of heterosexual HIV transmission in Eswatini. After calibrating the model to observed data under the proposed approach, we compared modelled HIV incidence under the proposed versus the instantaneous approach, with the same model parameters.
Figure.
(a) Simplified diagram of modelled compartments, key parameters, and simplified incidence equation for the instantaneous and proposed approaches to modelling sexual partnerships. (b) Modelled heterosexual HIV incidence in Eswatini under the instantaneous versus proposed approach; relative differences grow over time as partnership-level herd effects accumulate in the proposed approach.
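The "instantaneous" incidence formulation being critiqued corresponds to the classic force-of-infection term in compartmental models, sketched below as one discrete-time SI step. This is a textbook sketch with hypothetical parameter values, not the authors' Eswatini model or their proposed tracking of recently transmitting/acquiring individuals.

```python
def si_step(S, I, dt, c, beta, N):
    """One Euler step of an SI model under the instantaneous-partnership paradigm.

    S, I -- susceptible and infected counts (N = total population size)
    c    -- partner change rate (partnerships per person-year)
    beta -- cumulative transmission probability per partnership
    """
    prevalence = I / N
    foi = c * beta * prevalence          # force of infection on each susceptible
    new_infections = foi * S * dt
    return S - new_infections, I + new_infections
```

Note that newly infected individuals here immediately contribute to onward risk through the prevalence term; the proposed approach instead reduces the effective partner numbers of people in continuing post-transmission partnerships, which is the source of the partnership-level herd effects described in the Results.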
Results: Incidence under the instantaneous approach was consistently greater than under the proposed approach, with relative differences growing over time (Figure b). These differences can be explained mechanistically as follows. As prevalence increases, both approaches capture population-level herd effects: reduced onward transmission from each infection due to new partnerships forming between two infected people. However, the proposed approach also captures partnership-level herd effects: reduced transmission due to existing partnerships continuing between two infected people following transmission. In the instantaneous approach, these continuing partnerships cannot be modelled, and so infected individuals are modelled to be immediately at risk of onward transmission. Thus, as partnership-level herd effects accumulate over time, relative differences in incidence under the instantaneous versus the proposed approach also grow over time.
Conclusions: Modelling sexual partnerships as instantaneous can cause compartmental models of HIV transmission to overestimate HIV incidence, especially in mature and declining epidemics. The proposed approach offers a generalizable solution to move beyond instantaneous partnerships in compartmental models of sexually transmitted infections, and captures key epidemic dynamics related to partnership-level herd effects.
Keywords: compartmental models, incidence, sexual behavior, sexually transmitted disease, HIV, herd effects
Estimating the Expected Value of Sample Information Using Gaussian Approximations and Spline-Based Taylor Series Expansions
PP-111 Quantitative Methods and Theoretical Developments (QMTD)
Linke Li1, Anna Heath2
1Dalla Lana School of Public Health, University of Toronto, Toronto, Canada; Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Canada
2Dalla Lana School of Public Health, University of Toronto, Toronto, Canada; Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Canada; Department of Statistical Science, University College London, London, United Kingdom
Purpose: The Expected Value of Sample Information (EVSI) measures the expected benefits that could be obtained by collecting additional data. Estimating EVSI using the traditional nested Monte Carlo is computationally expensive but the recently developed Gaussian approximation (GA) approach can efficiently estimate EVSI across different sample sizes. However, the conventional GA may result in biased EVSI estimates if the decision models are highly nonlinear and the parameters are correlated. This bias may lead to suboptimal study designs when GA is used to optimize the EVSI of trial designs. Therefore, we extend the conventional GA approach to improve its performance for nonlinear decision models and correlated parameters.
Methods: Our method provides accurate EVSI estimates by approximating the posterior expectation of the net benefit conditional on a range of simulated datasets, which is known as the preposterior expectation, based on two steps. First, a Taylor series approximation is applied to estimate the preposterior expectation of the net benefit as a function of the preposterior distribution of the parameters of interest using a cubic spline, which is fitted to the probabilistic sensitivity analysis dataset. Next, the preposterior distribution of parameters is approximated by the conventional GA and Fisher information. The proposed approach is applied to several synthetic data collection exercises involving correlated non-Gaussian parameters and non-linear decision models. Its performance is compared with nested Monte Carlo, the conventional GA approach and the nonparametric regression-based method.
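As a point of reference for the computational burden the GA-based approaches avoid, the traditional nested Monte Carlo EVSI estimator can be sketched on a toy problem. Everything below (the two-decision net benefit, the conjugate-normal prior, the study design) is an invented example, not the authors' decision model.

```python
import numpy as np

rng = np.random.default_rng(1)

def evsi_nested_mc(n_outer=2000, n_inner=2000, n_obs=10, sigma=2.0):
    """Nested Monte Carlo EVSI for a toy two-decision problem.

    Net benefit: NB(d=1, theta) = theta, NB(d=0, theta) = 0.
    Prior: theta ~ N(0, 1).  Proposed study: n_obs draws of N(theta, sigma^2).
    """
    # Value with current information: max_d E[NB] = max(E[theta], 0) = 0
    current = max(0.0, 0.0)
    outer = np.empty(n_outer)
    for i in range(n_outer):
        theta = rng.normal()                       # draw a "true" theta
        x = rng.normal(theta, sigma, size=n_obs)   # simulate the proposed study
        # Conjugate normal posterior for theta given the simulated data
        prec = 1.0 + n_obs / sigma**2
        post_mean = (x.sum() / sigma**2) / prec
        post_sd = prec ** -0.5
        # Inner loop: posterior expectation of each decision's net benefit
        theta_post = rng.normal(post_mean, post_sd, size=n_inner)
        outer[i] = max(0.0, theta_post.mean())     # best decision post-study
    return outer.mean() - current
```

Each EVSI evaluation costs n_outer × n_inner model runs, and the whole exercise must be repeated for every candidate sample size, which is exactly the cost the GA and spline-based approaches are designed to sidestep.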
Results: The proposed approach provides accurate EVSI estimates across different sample sizes when the parameters of interest are non-Gaussian, correlated and the decision models are nonlinear. Figure 1 compares the EVSI of a Markov Model computed by conventional GA (GA_meta), the proposed approach (GA_Taylor), Nonparametric regression-based method (Non_par) and nested Monte Carlo (denoted by the red cross). In subplots A and B, the decision model is approximately linear, EVSI estimates computed by all methods are similar. However, when the decision model is nonlinear, in subplots C and D, the proposed approach still provides accurate EVSI estimates across different sample sizes whilst conventional GA is less accurate. The computation cost of the proposed method is similar to other novel methods.
Conclusions: The proposed approach can estimate EVSI across different sample sizes accurately and efficiently, which may support researchers to determine an economically optimal trial design using EVSI.
Keywords: Expected value of sample information, Function approximation, Taylor series approximation, Health economic evaluation, Value of Information.
EVSI computed by conventional GA, Spline-based Taylor series expansions and GA, Non-parametric Regression-based method for a Markov Model. The expected value of partial perfect information is denoted by the horizontal dashed lines.
Development and calibration of a population model for renal cell carcinoma
PP-112 Quantitative Methods and Theoretical Developments (QMTD)
Praveen Kumar1, Hawre J. Jalal3, Cindy L. Bryce1, Bruce L. Jacobs2, Lindsay M. Sabik1, Mark S. Roberts1
1Department of Health Policy and Management, School of Public Health, University of Pittsburgh, Pittsburgh, USA
2Department of Urology, School of Medicine, University of Pittsburgh, Pittsburgh, USA
3School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
Purpose: A population model for renal cell carcinoma (RCC) can be a valuable tool to estimate the clinical and economic impacts of population interventions such as screening. Here, we present the first population model of RCC calibrated to cross-sectional data on RCC prevalence and other outcomes using Bayesian calibration.
Methods: We developed an individual-based discrete event simulation model that simulates an individual's life trajectory. The model simulates multiple birth cohorts from 1891 to 1998. The model included RCC risk factors, such as sex, obesity, and smoking, and simulated age at tumor initiation, stage of diagnosis, and death due to RCC or other causes. The model’s unobserved parameters were estimated by calibrating the model to targets such as calendar year-specific incidence rates, mortality rates, and tumor size using the Bayesian Calibration using Artificial Neural Networks (BayCANN) method, which uses an artificial neural network to estimate the joint posterior distribution of calibrated parameters and is especially suited for stochastic and complex models.
Results: The model predictions of tumor size, mortality rates by sex, and the proportion of localized cases were close to the observed data (Figure 1). However, the predicted incidence rates for the 2000-2018 period showed a stable trend instead of the increasing trend seen in the observed incidence rates. The calibrated model suggests that the risk of developing RCC increases with age (Weibull shape parameter = 3.76), and males are at higher risk than females (hazard ratio = 2.47). Tumors with papillary histology were more aggressive than those with clear cell or chromophobe histology. The 5-year probability of symptomatic detection varied from 0.5% for a <=20 mm tumor inside the kidney to 80% for distant stage RCC. Tumors among recent birth cohorts were more likely to be incidentally detected than those among older birth cohorts.
Conclusions: Our model is the first population model for RCC that has been calibrated using the state-of-the-art methodology, BayCANN, which provided us with the joint posterior distribution of calibrated parameters, capturing correlation among parameters. We will use our model to inform public health policy, such as screening for RCC among the high-risk population.
Keywords: Kidney cancer, simulation model, population model, Bayesian calibration
Model Fit after Calibration
A Preference-sensitive Approach to Optimal Screening Problems with Multiple Objectives: Application to Breast Cancer Screening
PP-113 Quantitative Methods and Theoretical Developments (QMTD)
Sun Ju Lee, Gian Gabriel Garcia
H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia
Purpose: Sequential screening and treatment problems such as designing breast cancer screening policies often involve balancing multiple objectives, e.g., minimizing breast cancer mortality while minimizing risks and costs associated with mammography. To circumvent the difficulty of multi-objective screening and treatment design, researchers often rely on scalarization, i.e., the conversion of multiple objectives into a single objective. However, it is not always possible or convenient to scalarize multiple objectives when appropriate weights are impossible to quantify or unknown. As such, there is a critical need to design modeling approaches that can handle multiple objectives when scalarization is undesirable or limiting.
Methods: For multi-objective sequential screening and treatment policy design under uncertainty, we formulated a lexicographic ordering method with tolerances to solve a finite-horizon multi-objective partially observable Markov Decision Process (MOPOMDP). The decision-maker specifies (1) a lexicographic preference ordering among multiple objectives and (2) tolerances corresponding to the decision-maker’s willingness to deviate from the optimal value for each objective. Using breast cancer screening as a case study, we solve for the optimal mammography policy across two objectives. Instead of generating screening policies and then evaluating them with respect to multiple objective values, which is computationally prohibitive when the number of policies to be evaluated is large, our formulation solves a series of linear programs to produce a policy which is optimal within the specified tolerance. Finally, we compare the performance of our policies to current U.S. Preventive Services Task Force (USPSTF) screening guidelines.
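The core lexicographic-with-tolerance idea can be illustrated by selection over a finite menu of candidate policies. This is a drastic simplification of the linear-programming MOPOMDP formulation described above, and the mortality/mammogram numbers are hypothetical.

```python
def lexicographic_select(policies, tol):
    """Pick a policy by lexicographic ordering of two objectives with a tolerance.

    policies -- dict: policy name -> (objective1, objective2), both minimized,
                with objective1 taking priority.
    tol      -- acceptable absolute deviation from the best objective1 value.
    """
    best1 = min(v[0] for v in policies.values())
    # Keep every policy whose primary objective is within tolerance of the best...
    admissible = {k: v for k, v in policies.items() if v[0] <= best1 + tol}
    # ...then minimize the secondary objective over that admissible set.
    return min(admissible, key=lambda k: admissible[k][1])

# Hypothetical menu: (lifetime mortality risk, expected number of mammograms)
policies = {"never": (0.040, 0), "annual": (0.020, 30), "biennial": (0.022, 15)}
chosen = lexicographic_select(policies, tol=0.002)
```

With tol=0 the rule returns the pure mortality-minimizing policy ("annual" in this toy menu); tolerating a 0.002 increase in mortality risk admits "biennial", which halves the number of mammograms, which is the kind of trade-off the tolerance parameter exposes to decision-makers.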
Results: Our approach yields a menu of screening policies that Pareto-dominate the USPSTF screening guidelines when evaluated by lifetime mortality risk and the expected number of mammograms required (Figure 1). One such optimal policy achieves a 12.7% reduction in the expected number of mammograms required while maintaining a lifetime mortality risk within 1% of the USPSTF guidelines, indicating improved health outcomes and lower screening burden.
Figure 1.
Expected probability of dying from breast cancer and expected number of mammograms for optimal screening policies found using our approach with varying tolerances. The U.S. Preventive Services Task Force (USPSTF) recommendation, the policy in which patients are never screened, and the policy in which patients are screened in every decision epoch (every 6 months) are also depicted.
Conclusions: We proposed a lexicographic approach to MOPOMDPs based on tolerance – the decision-maker’s willingness to deviate from the optimal policy. While our case study focused on personalized mammography screening strategies, our approach for the efficient computation of optimal policies can be applied to other multi-objective screening problems. Our framework for balancing multiple objectives is easily interpretable and facilitates a personalized and shared decision-making process better aligned with the priorities of individual patients.
Keywords: Markov models, operations research, mathematical models and decision analysis
Screening First Degree Relatives of Individuals with Germline BRCA2 Mutations and Prostate Cancer: A Cost-Effectiveness Analysis
PP-114 Applied Health Economics (AHE)
Alexie Carletti, Alexandra Sokolova
Department of Internal Medicine, Division of Hematology/Oncology, Oregon Health and Science University, Portland, USA
Purpose: To evaluate the feasibility of cascade genetic testing (CGT) in male first-degree relatives (FDR) of individuals with germline BRCA2 mutations and prostate cancer.
Methods: A cost-effectiveness model was created with TreeAge software to compare prostate cancer outcomes of germline genetic testing for BRCA2 mutations in male FDR versus no genetic testing. We used a theoretical cohort of 1,000 men. Outcomes included death from de novo prostate cancer, death from recurrent prostate cancer, death from radical prostatectomy surgery, or surviving prostate cancer with erectile dysfunction, urinary incontinence, bowel dysfunction or no complications. All probabilities, costs, utilities, and life expectancies were derived from the literature. Incremental cost-effectiveness ratios (ICERs) were calculated to determine the cost per quality-adjusted life year (QALY) gained. The willingness-to-pay (WTP) threshold was set at $100,000/QALY with a QALY discount rate of 3%. Monte Carlo simulation and sensitivity analyses were conducted to assess the robustness of the model.
Results: In our theoretical cohort of 1,000 men, we found that cascade genetic testing of male FDR would result in 7 fewer deaths from de novo prostate cancer, 3 fewer deaths from recurrent prostate cancer, and 14 more people who are prostate cancer-free (Table 1). Cascade genetic testing for BRCA2 mutations in male FDR is cost-effective, with an ICER of $14,011/QALY compared to not cascade testing FDR. Our model estimated that cascade genetic testing would cost an additional $2.8 million USD. Once variability was incorporated into our model inputs via Monte Carlo simulation, we found that cascade genetic testing for germline BRCA2 mutations was dominant in 72.8% of samples (Figure 1).
| Outcome | Genetic testing | No genetic testing | Difference |
|---|---|---|---|
| Death from de novo prostate cancer | 24 | 31 | -7 |
| Death from recurrent prostate cancer | 40 | 43 | -3 |
| Survive prostate cancer with erectile dysfunction | 55 | 57 | -2 |
| Survive prostate cancer with urinary incontinence | 4 | 5 | -1 |
| Survive prostate cancer with GI dysfunction | 1 | 1 | 0 |
| Survive prostate cancer with no complications | 31 | 32 | -1 |
| Death from radical prostatectomy surgery | 0 | 0 | 0 |
| No prostate cancer | 849 | 835 | 14 |
| Cost (2022 USD) | 22,711,410 | 19,909,290 | 2,802,120 |
| Effectiveness (QALYs) | 19,690 | 19,490 | 200 |
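As a sanity check, the reported ICER can be reproduced directly from the table: the incremental cost divided by the incremental QALYs gives the cost per QALY gained.

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: incremental cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Values taken from the table above (2022 USD; QALYs over the cohort of 1,000)
ratio = icer(22_711_410, 19_909_290, 19_690, 19_490)  # 2,802,120 / 200
```

This reproduces the reported figure: 2,802,120 / 200 = 14,010.6, i.e., roughly $14,011 per QALY, well below the $100,000/QALY willingness-to-pay threshold.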
Conclusions: Cascade genetic testing of male FDR of individuals with confirmed germline BRCA2 mutations and prostate cancer may be a cost-effective strategy to facilitate early prostate cancer detection and reduce deaths from metastatic prostate cancer.
Keywords: cost-effectiveness analysis, cost analyses, prostate cancer, cascade genetic testing, BRCA2
Monte Carlo Simulation
Outcomes in a theoretical cohort of 1,000 men who are FDR of individuals with germline BRCA2 mutations and prostate cancer
Cost-effectiveness of Early Take-Home Buprenorphine-Naloxone Versus Methadone for Treatment of Prescription-Type Opioid Use Disorder
PP-115 Applied Health Economics (AHE)
Benjamin Enns1, Emanuel Krebs2, David Whitehurst2, Bohdan Nosyk2
1Centre for Health Evaluation and Outcome Sciences; Vancouver, British Columbia, Canada
2Faculty of Health Sciences, Simon Fraser University; Burnaby, British Columbia, Canada
Purpose: A robust evidence base demonstrates the effectiveness and cost-effectiveness of opioid agonist therapy in reducing illicit opioid use, retaining individuals in treatment, and reducing the societal costs of opioid use disorders. However, the relative cost-effectiveness of flexible take-home buprenorphine-naloxone (BNX) vs. standard directly supervised methadone models of care has not been formally assessed. Our objective was to determine the cost-effectiveness of take-home BNX versus methadone alongside the OPTIMA trial for treatment of prescription-type opioid use disorders (POUD) in Canada.
Methods: The OPTIMA study was a pragmatic, open-label, noninferiority, two-arm randomized controlled trial, designed to assess the comparative effectiveness of flexible take-home BNX and standard methadone models of care for the treatment of POUD in routine clinical care. The study included 272 participants from clinical settings across Canada, with outcomes evaluated over 24 weeks. We evaluated cost-effectiveness using a semi-Markov cohort model. Health state-specific probabilities of overdose were calibrated, accounting for fentanyl prevalence across clinical sites of enrollment and an elevated risk of overdose during the first month following health state transitions. We included costs for BNX and methadone treatment, health resource use, and criminal activity (2020 CAD), and health state-specific preference weights (EQ-5D-5L) as outcomes. We calculated incremental cost-effectiveness ratios (ICERs) for BNX vs. methadone, evaluated at a willingness-to-pay threshold of $100,000 per quality-adjusted life year (QALY), using societal and health-sector perspectives. Six-month and lifetime (using a 3% annual discount rate) time horizons were explored.
Results: Over a six-month time-horizon, BNX was the dominant treatment option (i.e., higher QALYs and lower costs) in 33.1% (societal perspective) and 64.1% (health-sector perspective) of simulations (Table 1). Using a lifetime time-horizon, methadone was dominant in 49.4% of simulations when adopting a societal perspective. From a health-sector perspective, BNX had lower incremental costs and lower incremental QALYs (ICER: $32,585/QALY).
Conclusions: Evaluated over a lifetime time-horizon, BNX generated lower QALY gains compared to methadone, with lower costs from a societal and health-sector perspective. Over a longer time-horizon, individuals initially receiving methadone had higher incremental QALYs due to better continuous treatment retention compared to those initially receiving BNX.
Keywords: cost-effectiveness, opioid agonist treatment, methadone, buprenorphine/naloxone, clinical trial, opioid use disorder
Results of cost-effectiveness analysis
The Importance of Underappreciated Sources of Heterogeneity for Economic Evaluations in the United States
PP-116 Applied Health Economics (AHE)
Michael Willis1, Cheryl Neslusan2, Andreas Nilsson1
1The Swedish Institute for Health Economics, Lund, Sweden
2Janssen Scientific Affairs, LLC, Titusville, NJ, USA
Purpose: While clinical decisions are taken for patients at the individual level, treatment choices may be constrained by coverage decisions taken at the population level. Population-level (i.e., “average”) estimates of cost and outcomes can mask important sources of heterogeneity across patients and treatment settings, and thus lead to suboptimal outcomes and an exacerbation of existing inequities in access to care. Reviews suggest that few economic evaluations account for heterogeneity, which researchers have attributed to factors like inadequate subgroup-level evidence, lack of specificity in pharmacoeconomic guidance, and ethical concerns. In addition, much of the methodological work has been developed from the single-payer perspective, and thus lacks sufficient consideration of sources of heterogeneity important in multi-payer settings. Our objective was to characterize current practices for addressing heterogeneity relevant for multi-payer settings, and to highlight key considerations for economic evaluations in the US.
Methods: Two reviews and an analysis of the Tufts Medical Center CEA Registry were sourced to identify empirical studies for the analysis. Only studies that included at least one subgroup analysis were included. The framework for cataloging these studies was sourced from an integrative review of methods for addressing heterogeneity. We assessed the extent to which different types of heterogeneity were addressed and highlighted examples illustrating good practices and potential challenges.
Results: Previous work has described six distinct types of heterogeneity. Within each type are a multitude of factors that vary systematically and may affect comparative economic evaluations. We found that most applications considered only age and treatment effects. Sources of heterogeneity arising from factors important to multi-payer settings were not adequately addressed, including differences in insurance coverage and access to care, patient preferences for health and health insurance (e.g., high vs. low deductible plans), payer and physician preferences, and variability across regions (e.g., urban vs. rural).
Conclusions: Economic evaluations can only be patient-centric and informative to decision-making if they are sufficiently tailored to features of the local health care ecosystem, including properly accounting for all relevant sources of heterogeneity. If economic evaluations are to be useful in the U.S. setting, findings from this study suggest that awareness must first be raised among practitioners about forms of heterogeneity arising from the unique features of its multi-payer system.
Keywords: Heterogeneity, economic evaluation, multi-payer system
Health care costs attributable to prostate cancer in British Columbia, Canada: A population-based cohort study
PP-117 Applied Health Economics (AHE)
Daphne P. Guh1, Tima Mohammadi1, Reka E. Pataky2, Alexander C.t. Tam3, Annalijn I. Conklin4, Larry D. Lynd4, Wei Zhang3
1Centre for Health Evaluation and Outcome Sciences, St. Paul’s Hospital, Vancouver, British Columbia, Canada
2Canadian Centre for Applied Research in Cancer Control, BC Cancer, Vancouver, British Columbia, Canada
3School of Population and Public Health, University of British Columbia, Vancouver, British Columbia, Canada
4Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver, British Columbia, Canada
Purpose: Evidence-based decision-making requires up-to-date cost-of-illness estimates. We aimed to estimate the total healthcare costs including hospitalizations, outpatient and physician services, radiotherapy, systemic therapy, and prescription medications attributable to prostate cancer (PCa) in British Columbia (BC), Canada from the healthcare system’s perspective.
Methods: Using linked administrative health data from BC Cancer and Population Data BC, we constructed a cohort of men aged ≥50 years at diagnosis with PCa between 2010 and 2017 (Cohort 1) and followed them from the date of diagnosis until the date of death, the last date of observation, or December 31, 2019, whichever came first. We further selected those who died from PCa to form Cohort 2. The observation period included 4 intervals: (I) initial care (12 months after diagnosis); (II) post-initial care (12-24 months after diagnosis); (III) continuing care (24 months after diagnosis until 12 months before death); and (IV) terminal care (12 months before death). Up to 2 controls per case in each cohort were matched from the BC general population not diagnosed with PCa by age, health-delivery area, neighbourhood income, and year of last follow-up. Attributable costs were estimated by comparing costs in cases to those in matched controls. Estimates for Intervals I-III were derived from piecewise linear regression models on Cohort 1, and estimates for Interval IV from linear regression models on Cohort 2. Costs by cancer stage at diagnosis and tumor grade and by primary treatment were estimated. All costs are in 2019 Canadian dollars.
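The core of the matched case-control costing step can be illustrated with a toy calculation: the attributable cost is the mean of case-minus-matched-controls differences. All numbers below are invented; the study's actual estimates come from the regression models described above, not this raw averaging.

```python
from statistics import mean

def attributable_cost(matched_sets):
    """Mean case-minus-matched-controls cost difference across matched sets.
    matched_sets: list of (case_cost, [control_costs]) tuples."""
    diffs = [case - mean(controls) for case, controls in matched_sets]
    return mean(diffs)

# Toy annual costs (invented): three matched sets with up to 2 controls each.
sets_interval1 = [
    (18_000, [3_000, 4_000]),
    (16_000, [2_500]),
    (14_000, [3_500, 2_000]),
]
ac = attributable_cost(sets_interval1)
print(round(ac))
```

A regression-based version (as in the abstract) additionally adjusts for covariates and, via the piecewise specification, lets the attributable cost differ by interval.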
Results: A total of 23,165 PCa patients were included in Cohort 1 with mean age of 69.8 (SD=9.2) years and median follow-up time of 5.1 [IQR: 2.7-7.5] years. The majority of the patients were diagnosed at early stage (stages I, IIA, IIB: 62.7%) and of high grade (Gleason score>6: 70.0%). There were 3,121 patients in Cohort 2. The mean attributable costs (95% CI) per year were: Interval I: $15,063 ($14,811-$15,316); Interval II: $3,595 ($3,357-$3,833); Interval III: $1,592 ($1,425-$1,759); Interval IV: $13,277 ($11,427-$15,127). Table 1 presents the cost estimates for Intervals I-III by cancer stage and tumor grade.
Conclusions: The costs attributable to prostate cancer are highest in the 12 months after diagnosis and at the end of life, and vary by cancer stage at diagnosis and tumor grade. The cost estimates could inform cost-effectiveness modeling.
Keywords: health care costs; administrative data; prostate cancer
Total cost attributable to prostate cancer per year: mean (95% CI)
| | Interval I | Interval II | Interval III |
|---|---|---|---|
| Early Stage | | | |
| Low grade (n=3922) | $7,246 ($6,743-$7,749) | $2,601 ($2,128-$3,074) | $485 ($137-$833) |
| High grade (n=10041) | $17,283 ($16,949-$17,618) | $2,661 ($2,355-$2,967) | $677 ($456-$897) |
| Advanced Stage | | | |
| Low grade (n=186) | $16,300 ($13,626-$18,974) | $4,444 ($2,500-$6,388) | $319 (-$744 to $1,381) |
| High grade (n=4656) | $19,069 ($18,550-$19,588) | $6,648 ($5,938-$7,358) | $4,301 ($3,831-$4,772) |
Accounting for Non-Health and Future Costs in Cost-Effectiveness Analysis: Modeling Distributional Impacts of a Cancer Prevention Strategy
PP-118 Applied Health Economics (AHE)
David D Kim
Center for the Evaluation of Value and Risk in Health (CEVR), Institute for Clinical Research and Health Policy Studies, Tufts Medical Center, Boston, MA
Purpose: To examine the distributional impacts of including non-health and future costs in assessing the cost-effectiveness of implementing a national 10% excise tax on processed meats.
Methods: Using the US Dietary and Cancer Outcome Model (a probabilistic cohort-state transition model developed in 2018), we evaluated the lifetime cost-effectiveness of implementing a 10% excise tax on processed meats, which are associated with an increased risk of colorectal and stomach cancers, in the following scenarios of accounting for (A) related future health care expenditure (HCE) only; (B) both related and unrelated future HCE; (C) all future HCE + time cost and productivity; (D) all future HCE + time cost and productivity + non-health consumption (population average); and (E) all future HCE + time cost and productivity + non-health consumption (age-specific). We evaluated the magnitude of changes in a net monetary benefit (using the valuation of $50,000-per-QALY) by incrementally adding non-health and future cost components and whether the difference would be substantial enough to change the “cost-saving” determination.
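The scenario comparison described above can be sketched as a net monetary benefit (NMB) calculation at the study's $50,000-per-QALY valuation. The per-person QALY gain is the average reported in the Figure 1 note; the cost components themselves are invented placeholders to show how incrementally adding components shifts the NMB.

```python
WTP = 50_000  # $ per QALY, the valuation used in the study

def net_monetary_benefit(delta_qaly, cost_components):
    """NMB = monetized QALY gain minus the sum of included incremental costs."""
    return WTP * delta_qaly - sum(cost_components.values())

delta_qaly = 0.00237  # average QALY gained per person (Figure 1 note)
# Incremental per-person costs by scenario (invented placeholders; negative
# values are savings, e.g., averted cancer treatment costs).
scenarios = {
    "A: related HCE only": {"related_hce": -40},
    "B: + unrelated HCE": {"related_hce": -40, "unrelated_hce": 25},
    "C: + time & productivity": {"related_hce": -40, "unrelated_hce": 25, "productivity": -60},
}
results = {name: round(net_monetary_benefit(delta_qaly, costs), 2)
           for name, costs in scenarios.items()}
print(results)
```

With this structure, the "cost-saving" determination reduces to comparing each scenario's NMB against the monetized-QALY benchmark described in the figure note.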
Results: A national 10% excise tax on processed meat would be considered health-improving and cost-saving or highly cost-effective across all subgroups (Figure 1). We found greater health and economic benefits among 1) younger individuals, who would have a longer life expectancy for accruing the health benefits, and 2) racial-ethnic minorities with greater cancer burdens (e.g., non-Hispanic black men) and higher baseline processed meat consumption (e.g., Hispanic men). However, the inclusion of non-health and future costs had a substantial impact on cost-effectiveness results, often with differential impacts across population subgroups and changes in the “cost-saving” determination. Notable findings include: 1) excluding unrelated health care costs would bias toward favoring a cancer prevention policy; 2) accounting for productivity benefits would generate more favorable cost-effectiveness results, often leading to changes in the “cost-saving” determination; 3) adding non-health consumption costs had the greatest impact on cost-effectiveness results, offsetting productivity benefits; and 4) age-specific non-health consumption costs would lead to more favorable cost-effectiveness results among the younger population.
Figure 1.
Lifetime cost-effectiveness of implementing a 10% excise tax (versus no policy) on processed meats in the US population
Abbreviations: HCE, health care expenditure; TC, patient time cost; Prod, productivity effect; Consmp, population-average non-health consumption estimates; Age_Consmp, age-specific non-health consumption estimates; NMB, net monetary benefit. Note: We applied a valuation of $50,000 per quality-adjusted life-year (QALY) to calculate the net of monetized additional QALYs gained and overall costs (health care costs and non-health costs). The cost-saving benchmark was estimated based on the monetized value of the average QALY gained (0.00237 QALY) per person. When the net monetary benefit (NMB) is greater than the cost-saving benchmark, the 10% excise tax on processed meats is a health-improving and cost-saving strategy. When the NMB is positive (i.e., NMB > 0) but smaller than the cost-saving benchmark, the 10% excise tax on processed meats is considered cost-effective at the $50,000-per-QALY benchmark.
Conclusions: Health interventions providing survival benefits consequentially incur medical and non-medical consumption during added life-years as well as generating non-health impacts. It is imperative to account for all of the relevant consequences in cost-effectiveness analysis to provide a comprehensive assessment of the opportunity costs to inform resource allocation decisions.
Keywords: cost-effectiveness analysis, simulation model, future cost, non-health impact, cancer prevention
Cost-effectiveness analysis of therapy sequence in metastatic prostate cancer: Generic availability of targeted hormonal therapy
PP-119 Applied Health Economics (AHE)
Elizabeth A Handorf1, Daniel M Geynisman1, Andres Correa1, Chethan Ramamurthy2, J Robert Beck1
1Fox Chase Cancer Center, Temple University Health System, Philadelphia, PA
2Division of Hematology/Oncology, Mays Cancer Center, UT Health San Antonio
Purpose: Patients with metastatic prostate cancer often undergo multiple lines of treatment, making the cost-effectiveness of therapy sequence an important question. Previously, the high price of abiraterone acetate (AA) made its first-line use not cost-effective; however, generic alternatives are now available.
Methods: We examined the cost-effectiveness of the planned sequence of therapy for metastatic Hormone Naïve Prostate Cancer (mHNPC). Both docetaxel (DCT), a cytotoxic chemotherapy, and AA, a targeted anti-androgen agent, are approved for treatment of mHNPC. DCT has higher rates of adverse events requiring costly treatment and reducing quality of life. Patients who start treatment with DCT or AA are typically eligible to switch to the other agent after disease progression. We developed a microsimulation model that includes health states across multiple lines of therapy (see figure). Our model allows intermediate outcomes (i.e., time spent on line 1 therapy) to influence later health states, introducing a realistic correlation across multiple regimens. Overall and Progression-Free Survival curves were extracted from published clinical trial data, and model-based estimates were calibrated to survival outcomes of target trials. We used retail pharmacy prices or adjusted average wholesale prices (AWP) for drugs; health state utilities were drawn from the relevant literature. We assessed planned therapy order (starting with DCT or AA, then switching upon progression), evaluating 5-year costs in 2021 US dollars and survival outcomes in Quality-Adjusted Life-Years (QALYs). We varied the cost of AA to assess the impact of generic pricing on cost-effectiveness.
Figure.
Health states and allowable transitions for two lines of therapy for advanced cancer.
Results: In 2019, the AWP of AA was $11,275 per 30-day supply. In 2021, retail price for brand-name AA was $3,420, while generic AA was $420. Estimated QALYs were 2.87 when starting with AA, and 2.72 when starting with DCT. Cost differences for starting with AA vs DCT were $222,071 for on-patent pricing, $67,754 for 2021 brand name pricing, and -$10,078 for generic pricing. Incremental Cost-Effectiveness Ratios were $1,516,974/QALY, $462,834/QALY, and -$68,843/QALY, respectively. Under generic pricing, a strategy of starting with AA dominates DCT, as over time it is cost saving and provides greater QALYs.
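The dominance logic behind the generic-pricing result can be sketched as a small classifier. This is a hedged illustration: it uses the rounded QALY and cost differences reported above, so a ratio computed from these rounded values would differ slightly from the published ICERs.

```python
def compare(delta_cost, delta_qaly):
    """Classify an incremental comparison of a new option vs. a comparator."""
    if delta_qaly > 0 and delta_cost <= 0:
        return "dominant"      # more QALYs at no extra cost
    if delta_qaly <= 0 and delta_cost >= 0:
        return "dominated"     # fewer (or equal) QALYs at extra (or equal) cost
    return ("ICER", delta_cost / delta_qaly)

# Reported differences for starting with AA vs. DCT under generic pricing:
label = compare(-10_078, 2.87 - 2.72)
print(label)
```

Labeling this case as dominance, rather than quoting a negative ICER, avoids ambiguity: a negative ratio can arise both when an option is better and cheaper and when it is worse and more expensive.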
Conclusions: Our microsimulation approach provides insights into cost-effectiveness problems involving therapy switching. The arrival of generic AA has greatly improved its cost-effectiveness relative to docetaxel for mHNPC.
Keywords: microsimulation, therapy sequence, cancer
Population-level model of the health and economic impact of NVX-CoV2373 as a COVID-19 booster vaccine option for adults in the United States
PP-120 Applied Health Economics (AHE)
Kyle Paret1, Hadi Beyhaghi2, Will Herring1, Matthew Rousculp2, Seth Toback2, Josephine Mauskopf1
1RTI Health Solutions, Research Triangle Park, NC, USA
2Novavax, Inc., Gaithersburg, MD, USA
Purpose: This modeling study estimated the potential population-level health and economic impact of including NVX-CoV2373, an investigational COVID-19 vaccine, as a booster vaccine option for previously vaccinated adults (18 years or older) in the United States (US).
Methods: A decision-analytic Markov model was developed to estimate COVID-19–related cases, hospitalizations, and deaths with and without NVX-CoV2373 as a booster vaccine option for adults in the US. The model population was stratified by age, with health states including susceptible, detected infection, long COVID-19, and recovered. The severity of COVID-19 outcomes within the detected infection state was modeled based on the highest level of care required. Vaccine efficacy was sourced from published phase 3 clinical trials, based primarily on the prototype variant, and was assumed to wane equally for all vaccines based on observed real-world data. Booster vaccination efficacy (homologous or heterologous) was modeled based on the manufacturers’ primary vaccination series. Other model inputs were sourced from published literature or derived from publicly available data from the US Centers for Disease Control and Prevention. The cost per dose was assumed to be equal for all booster vaccines. We compared key outcomes over a 1-year time horizon for the mix of booster vaccines currently authorized in the US and a vaccine mix including NVX-CoV2373 as a booster option.
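A minimal discrete-time Markov cohort update over the model's named health states might look like the following. The transition probabilities are invented for illustration and omit vaccination, severity tiers, and death, all of which the actual model includes.

```python
# Health states named in the abstract; transition probabilities are placeholders.
states = ["susceptible", "detected_infection", "long_covid", "recovered"]
P = {  # row -> column transition probabilities per cycle (each row sums to 1)
    "susceptible":        {"susceptible": 0.98, "detected_infection": 0.02, "long_covid": 0.00, "recovered": 0.00},
    "detected_infection": {"susceptible": 0.00, "detected_infection": 0.20, "long_covid": 0.05, "recovered": 0.75},
    "long_covid":         {"susceptible": 0.00, "detected_infection": 0.00, "long_covid": 0.90, "recovered": 0.10},
    "recovered":          {"susceptible": 0.00, "detected_infection": 0.00, "long_covid": 0.00, "recovered": 1.00},
}

def step(dist):
    """One Markov cycle: redistribute cohort mass along the transition rows."""
    return {s: sum(dist[r] * P[r][s] for r in states) for s in states}

dist = {s: 0.0 for s in states}
dist["susceptible"] = 1.0
for _ in range(52):   # a 1-year horizon in weekly cycles
    dist = step(dist)
print(round(sum(dist.values()), 6))   # cohort mass is conserved
```

In the study's model, vaccine efficacy and waning would enter by modifying the susceptible-to-infection probability over time, and outcomes (cases, hospitalizations, deaths, costs) would be accumulated per cycle from state occupancy.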
Results: A 5% increase in booster vaccine coverage among the 200 million fully vaccinated adults in the US, allocated to NVX-CoV2373 market share, reduced hospitalizations and deaths due to COVID-19 by approximately 26,000 and 4,500, respectively. The increase in coverage resulted in additional vaccination costs of $566M but reduced direct medical costs by $562M. Figure 1 illustrates the impact of including NVX-CoV2373 as a vaccine option on incremental hospitalizations across a range of coverage and market share assumptions. Each additional percentage point increase in coverage reduced COVID-19 hospitalizations and deaths by approximately 0.5% in the population over 1 year.
Figure 1.
Incremental Difference in Total Hospitalizations Avoided across Variations in Market Share and Coverage Rates
Conclusions: Our results suggest that including NVX-CoV2373 as a COVID-19 booster vaccine option for adults in the US has the potential to reduce hospitalizations and deaths because of the anticipated increase in vaccine coverage. Increases in vaccination costs due to higher coverage were predicted to be almost completely offset by reductions in direct medical costs.
Keywords: COVID-19, Economic analysis, Booster vaccine
A practical approach for complementing the health sector perspective with a patient perspective in cost-effectiveness analysis
PP-121 Applied Health Economics (AHE)
Joseph Corlis, Stephen C Resch
Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, USA
Purpose: Most cost-effectiveness analyses (CEAs) of health interventions in low- and middle-income countries adopt a health sector perspective and often exclude patients’ out-of-pocket expenditures and time costs. Common justifications for this practice are that patient costs are hard to collect, are small relative to those of other payers (e.g., governments, donors), and would not substantially affect incremental cost-effectiveness ratios if included. However, even relatively small costs can impact patient behavior (i.e., care-seeking, participation). Therefore, conducting a companion patient perspective alongside the health sector perspective can produce useful evidence about demand-side considerations of health interventions.
Methods: The objective of this study was to create a clear procedure for incorporating patient perspectives into CEAs. From both the health sector and patient perspectives, we ran iterations of a generic CEA model with interventions that avert DALYs and incur one-time costs. We set a threshold for patient affordability based on 10% of annual spending for out-of-pocket expenditures on health. We then mapped the range of possible results, categorizing them based on efficiency and affordability, and described implications for possible next steps that researchers can offer decision-makers.
Results: Table 1 shows the typology of five result patterns that can occur when a health sector perspective is compared to a companion patient perspective: (1) perfectly congruent: both perspectives identify the same intervention as optimal, and it is affordable for patients; (2) weakly congruent: the optimal strategy in the health sector perspective is efficient (though not optimal) in the patient perspective and affordable for patients; (3) incongruent: the optimal strategy in the health sector perspective is affordable but not efficient in the patient perspective; (4) consistent: the optimal strategy in the health sector perspective is efficient but unaffordable for patients; or (5) inconsistent: the optimal strategy in the health sector perspective is neither efficient nor affordable in the patient perspective. We posit that interventions identified as unaffordable from the patient perspective should not be recommended. In the case of incongruent, consistent, or inconsistent results, researchers should consider whether CEA model parameters related to patient participation are overly optimistic and whether the intervention design could be modified to achieve congruence.
Conclusions: Including a patient perspective offers additional evidence either by confirming the optimal strategy from the health sector perspective is also efficient and affordable for patients or by identifying discrepancies that could affect patient participation.
Keywords: Cost-effectiveness analysis, patient perspective, decision-making, affordability, methodology
Typology of comparison outcomes for the results of multiple perspectives in CEAs
| Result Pattern | Efficiency Results | Optimal Intervention’s Affordability for Patients | Possible Next Steps in Decision-Making |
| Perfectly Congruent | The different perspectives identify the same intervention as optimal. | Affordable | The CEA should recommend the optimal intervention. |
| Weakly Congruent | The intervention which is optimal from the health sector perspective is also efficient from the patient perspective but not optimal at the willingness-to-pay threshold. | Affordable | The CEA should recommend the intervention that is optimal from the health sector perspective. The CEA should also recommend that decision-makers redesign or implement the interventions in such a way that maximizes patient incentive to choose the health sector’s optimal intervention. |
| Incongruent | The intervention which is optimal for the health sector is not efficient for the patient perspective. | Affordable | The CEA should recommend the intervention that is optimal from the health sector perspective. The CEA should also recommend that decision-makers redesign or implement the interventions in such a way that maximizes patient incentive to choose the health sector’s optimal intervention. |
| Consistent | The intervention which is optimal from the health sector perspective is also efficient from the patient perspective (may be optimal at the willingness-to-pay threshold or simply on the efficiency frontier). | Unaffordable | The CEA should recommend that decision-makers redesign the intervention which is optimal from the health sector perspective in order to decrease or offset the patient’s OOP expenditures, making the intervention more affordable for patients. If such a redesign does not generate an optimal intervention from the health sector perspective that is affordable to patients, then the strategy should be eliminated from consideration (but ICERs should not be recalculated) and the next most cost-effective intervention from the health sector perspective that is efficient and affordable from the patient perspective becomes optimal and should be recommended. |
| Inconsistent | The intervention which is optimal for the health sector is not efficient for the patient perspective. | Unaffordable | The CEA should recommend that decision-makers redesign the intervention which is optimal from the health sector perspective in order to decrease or offset the patient’s OOP expenditures, making the intervention more affordable for patients. If such a redesign does not generate an optimal intervention from the health sector perspective that is affordable to patients, then the strategy should be eliminated from consideration (but ICERs should not be recalculated) and the next most cost-effective intervention from the health sector perspective that is efficient and affordable from the patient perspective becomes optimal and should be recommended. |
CEA cost-effectiveness analysis, ICER incremental cost-effectiveness ratio, OOP out-of-pocket.
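The five-pattern typology in the table above can be expressed as a small classifier. The boolean inputs are a hypothetical encoding of the table's dimensions (agreement on the optimal intervention, patient-perspective efficiency, and affordability), not an API from the paper.

```python
def result_pattern(same_optimal, efficient_for_patient, affordable):
    """Classify the health sector's optimal intervention against a companion
    patient perspective, following the five-pattern typology above.
    same_optimal: both perspectives pick the same intervention as optimal.
    efficient_for_patient: it lies on the patient-perspective efficiency frontier.
    affordable: it falls within the patient affordability threshold."""
    if affordable:
        if same_optimal:
            return "perfectly congruent"
        return "weakly congruent" if efficient_for_patient else "incongruent"
    return "consistent" if efficient_for_patient else "inconsistent"

print(result_pattern(True, True, True))
print(result_pattern(False, True, False))
```

Note that affordability splits the typology first, mirroring the authors' position that unaffordable interventions should not be recommended regardless of efficiency.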
Cost-Effectiveness of Fibrinogen Concentrate vs. Cryoprecipitate for Treating Acquired Hypofibrinogenemia in Bleeding Adult Cardiac Surgical Patients
PP-122 Applied Health Economics (AHE)
Lusine Abrahamyan1, George Tomlinson3, Jeannie Callum4, Steven Carcone2, Deep Grewal5, Justyna Bartoszko5, Murray Krahn2, Keyvan Karkouti6
1Toronto General Hospital Research Institute, University Health Network, Toronto, Ontario, Canada
2Toronto Health Economics and Technology Assessment (THETA) Collaborative, Toronto, Ontario, Canada
3Biostatistics Research Unit, University Health Network, Toronto, Ontario, Canada
4Department of Pathology and Molecular Medicine, Kingston Health Sciences Centre and Queen’s University, Kingston, Ontario, Canada
5Department of Anesthesia and Pain Management, Sinai Health System, Women's College Hospital, University Health Network, Toronto, Ontario, Canada
6Department of Anesthesiology and Pain Medicine, University of Toronto, Toronto, Ontario, Canada
Purpose: To determine cost-effectiveness of fibrinogen concentrate vs. cryoprecipitate for managing active bleeding in adult cardiac surgery patients.
Methods: Design: A within-trial economic evaluation of the FIBRES randomized controlled trial (February 2017 to November 2018) examined all in-hospital resource utilization costs and allogeneic blood product (ABP) transfusion costs incurred within 28 days of surgery.
Setting: 4 Ontario based hospitals.
Participants: Subset of adult cardiac surgery patients from the FIBRES trial with active bleeding and acquired hypofibrinogenemia requiring fibrinogen replacement (n=495).
Interventions: Fibrinogen concentrate (4 g per dose) or cryoprecipitate (10 units per dose) randomized (1:1) up to 24 hours post cardiopulmonary bypass.
Outcomes: Effectiveness outcomes included the number of ABPs administered within 24 hours and 7 days of cardiopulmonary bypass. ABP transfusion (7-day) and in hospital resource utilization (28-day) costs were evaluated, and a multivariable net benefit regression model was built with covariates site, treatment group, critical illness status, and the interaction of group and status.
Cost-effectiveness Analysis: The cost-effectiveness analysis was conducted using incremental net monetary benefit (INB) and net benefit regression (NBR) approaches.
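The net benefit regression approach can be sketched with toy data. With only a treatment indicator in the model, the regression coefficient reduces to the arm difference in mean per-patient net benefit, which is what the minimal version below computes; all numbers are invented, and the $2,000 willingness-to-pay mirrors the value used in the results.

```python
def incremental_net_benefit(costs, effects, treat, wtp):
    """Net-benefit 'regression' with a lone treatment indicator: the treatment
    coefficient equals the arm difference in mean net benefit NB_i = wtp*E_i - C_i."""
    nb = [wtp * e - c for c, e in zip(costs, effects)]
    arm1 = [b for b, t in zip(nb, treat) if t == 1]
    arm0 = [b for b, t in zip(nb, treat) if t == 0]
    return sum(arm1) / len(arm1) - sum(arm0) / len(arm0)

# Toy data (invented): 28-day costs and an effectiveness measure per patient.
costs = [39_000, 38_000, 37_500, 37_300]
effects = [0.95, 1.00, 1.02, 1.03]
treat = [0, 0, 1, 1]   # 1 = fibrinogen concentrate, 0 = cryoprecipitate
inb = incremental_net_benefit(costs, effects, treat, wtp=2_000)
print(round(inb, 1))
```

The study's actual NBR adds covariates (site, critical illness status, and the group-by-status interaction), which requires fitting an ordinary least squares model rather than taking a simple difference in means.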
Results: Patient-level costs for 495 patients were evaluated; mean (SD) age was 59.2 (15.4) years, and 69.3% were male. Consistent with the FIBRES trial, ABP transfusions and adverse events were similar in the fibrinogen concentrate and cryoprecipitate groups. Overall mean (SD) total 7-day ABP costs were $3,870 ($5,040) CAD. Median (interquartile range) total 28-day cost was $37,830 ($26,200-$64,980) CAD in the fibrinogen concentrate group and $38,660 ($26,010-$70,380) CAD in the cryoprecipitate group. After exclusion of patients who were critically ill before surgery (11%) due to substantial variability in costs, the incremental net benefit of fibrinogen concentrate versus cryoprecipitate was positive (probability of being cost-effective 75% and 92% at $0 and $2,000 willingness-to-pay, respectively). Estimation of net benefit was imprecise for preoperative critically ill patients.
Conclusions: Fibrinogen concentrate is cost effective when compared to cryoprecipitate in the large majority of bleeding adult cardiac surgery patients with acquired hypofibrinogenemia requiring fibrinogen replacement. The generalizability of these findings outside the Canadian health system needs to be verified.
Keywords: cardiac surgery, fibrinogen concentrate, cost-effectiveness
Adherence to the Dual Prevention Pill for HIV and Contraception: identifying thresholds for net benefit and cost-savings using mathematical modeling
PP-123 Applied Health Economics (AHE)
Masabho P. Milali1, David Kaftan1, Ingrida Platais1, Hae Young Kim1, Danielle Resar2, Danny Edwards2, Jennifer Campbell2, Sarah Jenkins2, Anna Bershteyn1
1Department of Population Health, NYU Grossman School of Medicine, New York, NY, USA
2Clinton Health Access Initiative, Boston, MA, USA
Purpose: Women in sub-Saharan Africa experience the world’s highest rates of unintended pregnancy and HIV infection. Currently under development, the Dual Prevention Pill (DPP) combines the active ingredients of combined oral contraceptives (COC) and HIV pre-exposure prophylaxis (PrEP) into one daily tablet. Current COC users are considered likely to take up the DPP; however, it is unknown whether the DPP would increase or decrease contraception adherence, given that the DPP is a larger tablet (Figure) and may involve more side effects than COCs. We used agent-based modeling to estimate the threshold DPP adherence levels that would lead to net harm, benefit at additional cost, and benefit with cost-savings in different populations of COC users.
Methods: EMOD-HIV, an agent-based HIV model calibrated to HIV epidemics in South Africa and western Kenya, was augmented to additionally incorporate the effect of contraception on unintended pregnancies and years of life lost due to unsafe abortion and maternal mortality in each setting. We estimated the incremental disability-adjusted life-years and costs from a healthcare perspective if current COC users in different HIV risk categories switched to DPP starting in 2025 over a 20-year time horizon with 3% annual discounting.
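The threshold-finding step described in the Methods can be sketched as a root search on a stand-in model output. The linear stub below is hypothetical: a real run would query the agent-based model at each candidate adherence level, and the stub simply crosses zero at a 30% adherence value chosen for illustration.

```python
def adherence_threshold(net_effect, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection for the adherence level where net_effect crosses zero
    (net harm below, net benefit above). Assumes net_effect is increasing
    in adherence with a single crossing on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_effect(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def stub(adherence):
    """Hypothetical stand-in for a model run: net DALYs averted as a linear
    function of DPP adherence, crossing zero at 30% adherence."""
    return adherence - 0.30

thr = adherence_threshold(stub)
print(round(thr, 3))
```

The same search applied to incremental net cost instead of net health effect would yield the cost-saving thresholds reported for serodiscordant couples and female sex workers.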
Results: Switching COC users in the general population to the DPP in South Africa would become net harmful if COC adherence dropped from 90% to below 30%. In western Kenya, net harm would occur if adherence dropped from 90% to below 73%. Above these levels, the DPP would provide benefit at additional cost, never becoming cost-saving in the general population of COC users in either country. However, the DPP could achieve not only benefit but also cost-savings in higher-risk populations, including female sex workers and serodiscordant couples. In South Africa, the threshold adherence above which the DPP would be cost-saving was 71.9% (95% CI: 70.1%-73.7%) in serodiscordant couples and 12.3% (95% CI: 7.1%-39.3%) in female sex workers.
Conclusions: DPP is likely to be beneficial and in some cases cost-saving for COC users at higher risk of HIV, such as serodiscordant couples and sex workers, even if contraception adherence were to decrease. In the general population, DPP could not be cost-saving and sufficiently large adherence declines could cause net harm due to the health risks of unintended pregnancy. Future studies should measure contraception adherence changes among COC users switching to DPP.
Keywords: Family planning, Contraception, HIV prevention, PrEP, cost-effectiveness, Africa
Approximate pill sizes.
Comparison of approximate pill sizes for combined oral contraceptives (COC) and the dual prevention pill (DPP). The DPP is not yet available; its size is based on current HIV pre-exposure prophylaxis (PrEP).
Considerations in Evaluating Screening Options that Increase Adherence
PP-124 Applied Health Economics (AHE)
Peter B. Bach1, Jesse D. Ortendahl2
1Delfi Diagnostics
2Partnership for Health Analytic Research
Purpose: Explore the nuances and potential biases that can arise when assessing new technologies that can be used to improve patient adherence to guideline-recommended screening approaches.
Methods: An exploratory model was developed to assess the clinical and economic value of a hypothetical test (Test 1) that increases adherence to a well-established method of disease screening (Test 2). We first estimated survival and costs assuming the population either underwent Test 2 or received Test 1, proceeding to Test 2 if Test 1 was positive. We then recalculated outcomes using real-world adherence rates for Test 2 compared with the expected adherence increase when incorporating Test 1. Using the second approach, we examined the relationship between value and test sensitivity, specificity, and cost.
Results: In a worked example in which all patients either initially received Test 1 or immediately received Test 2, the use of Test 1 was found to be dominated due to the additional costs associated with its use and the reduction in life expectancy due to false negative test results. When incorporating a range of plausible adherence boosts, test prices, and health gains from testing, the Test 1-first scenario had higher costs, greater health benefits, and incremental cost-effectiveness ratios below frequently cited willingness-to-pay thresholds. In one-way sensitivity analyses, we found that the Test 1 sensitivity and underlying disease prevalence were the most influential parameters. The increased adherence associated with Test 1 use had the largest impact on incremental costs, but the least impact on the ICER. In multi-way sensitivity analyses, changes in test sensitivity, test specificity, incremental adoption rate, and costs of false positives all yielded insightful findings.
Conclusions: Across medicine, and especially within oncology, the most effective methods of screening may be underutilized due to lack of access, high costs, or test burden, and increasing utilization of these techniques would provide value. Appropriately assessing these technologies requires approaches that consider their uniqueness and a realization that real-world utilization of recommended testing is rarely close to 100%. The value of such tests will be driven by a combination of their attributes, such that clinical and economic evaluations should closely examine these test characteristics.
Keywords: Cancer screening, new technology adoption, cost-effectiveness modeling, technology diffusion
Cost-effectiveness of Stress Ulcer Prophylaxis in High-risk Critically Ill Patients in the United States
PP-125 Applied Health Economics (AHE)
Sarah Beth Tucker1, Bryan L. Love1, Brandon Bookstaver1, Saud Alsahali2, Claiborne E. Reeder1, Ismaeel Yunusa1
1Department of Clinical Pharmacy and Outcomes Sciences, University of South Carolina, Columbia, SC, United States
2Department of Pharmacy Practice, Unaizah College of Pharmacy, Qassim University, Qassim, Saudi Arabia
Purpose: To guide decision-makers in the judicious use of healthcare resources in the intensive care unit (ICU), this study aimed to evaluate the cost-effectiveness of proton pump inhibitors (PPI) versus histamine-2 receptor antagonists (H2RA) as stress ulcer prophylaxis (SUP) in high-risk critically ill patients.
Methods: Using a decision tree approach, we conducted a cost-effectiveness analysis for ICU patients at risk of developing a clinically important gastrointestinal bleed (CIGB) receiving either a PPI or an H2RA as SUP. The analysis took the healthcare system perspective with a 90-day time horizon. Relevant probabilities and cost inputs were derived from a comprehensive literature search. We evaluated the cost-effectiveness of PPI vs. H2RA for the following outcomes: CIGB event prevented, pneumonia averted, Clostridium difficile infection (CDI) averted, and mortality averted. Expected total costs for each prevention strategy were expressed in 2021 US$. We expressed cost-effectiveness as the incremental cost-effectiveness ratio (ICER) and net monetary benefit (NMB). When one strategy dominated the other, we did not estimate an ICER, as dominance implies cost savings. In addition, we conducted one-way sensitivity analyses to characterize the robustness of our findings.
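The metrics this analysis reports — dominance, the ICER, and the NMB — reduce to simple arithmetic. A minimal sketch (the strategy values and the willingness-to-pay threshold below are invented for illustration, not the study's inputs):

```python
# Illustrative sketch of the cost-effectiveness metrics named above:
# ICER, net monetary benefit (NMB), and the dominance check under
# which no ICER is reported. All numbers are hypothetical.

def icer(cost_new, eff_new, cost_old, eff_old):
    """Incremental cost per unit of effect; None when the new
    strategy dominates (cheaper and at least as effective)."""
    d_cost, d_eff = cost_new - cost_old, eff_new - eff_old
    if d_cost <= 0 and d_eff >= 0:
        return None  # dominance: report cost savings, not an ICER
    return d_cost / d_eff

def nmb(cost, eff, wtp):
    """Net monetary benefit at willingness-to-pay threshold wtp."""
    return wtp * eff - cost

# Hypothetical strategies A (new) vs. B (comparator):
print(icer(5500, 0.97, 5800, 0.96))   # None -> A dominates B
print(nmb(5500, 0.97, 100_000) > nmb(5800, 0.96, 100_000))
```

The higher-NMB strategy at a given threshold is the more cost-effective one, which is why NMB can still rank strategies when dominance makes an ICER undefined.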
Results: The expected costs of SUP with PPIs and H2RAs were $5,579 and $5,831, respectively. In preventing CIGB and CDI, PPIs dominated H2RAs, being more effective (0.97 vs. 0.96) and less costly. For the prevention of CDI, PPIs were more cost-effective (NMB, US$93,190) than H2RAs (NMB, $92,759). Although its ICER ($420,693/pneumonia avoided) exceeded conventional willingness-to-pay thresholds, prophylaxis with PPIs was the more cost-effective strategy for avoiding pneumonia. Conversely, the use of H2RAs was more cost-effective than the use of PPIs in preventing mortality (ICER, $16,606/death averted). Sensitivity analyses showed that the findings for the CIGB, pneumonia, and mortality outcomes were most sensitive to the probability of CDI given an H2RA, and that the CDI findings were most sensitive to the probability of pneumonia given a PPI.
Conclusions: The findings from this study suggest that, for high-risk patients in the ICU with indication for SUP, PPI use was the more cost-effective option in reducing cases of CIGB, CDI, and pneumonia; however, H2RAs may reduce mortality.
Keywords: cost-effectiveness, stress ulcer prophylaxis, proton pump inhibitors, histamine-2 receptor antagonists
Cost-effectiveness of TB preventive treatment for household contacts and people with HIV
PP-126 Applied Health Economics (AHE)
Theresa Ryckman1, Jeff Weiser2, Karin Turner2, Priyanka Soni3, Gavin Churchyard2, Richard E. Chaisson4, David W. Dowdy1
1Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, USA
2The Aurum Institute, Johannesburg, South Africa
3Unitaid, Geneva, Switzerland
4Division of Infectious Diseases, Department of Medicine, Johns Hopkins School of Medicine, Baltimore, USA
Purpose: Tuberculosis (TB) preventive treatment (TPT) is highly effective at preventing people infected with TB from developing active disease, and household contacts of people with TB and people with HIV (PWH) are both high-risk populations who could benefit from TPT. In countries with insufficient resources to provide TPT to all PWH and contacts, cost-effectiveness analysis can provide evidence to inform decision-making.
Methods: We developed a state-transition model to simulate TPT for PWH and household contacts in 29 countries. For household contacts, a TPT program includes contact investigation (to rule out active disease before TPT initiation), which can also link people with undetected active TB to treatment, while for PWH investigation for active disease is a recommended component of routine health visits. We compared TPT for PWH and household contacts to no TPT or contact investigation over ten years. Our model integrates data from IMPAACT4TB – an initiative to scale up TPT in 12 high-burden countries – and published sources to simulate screening and diagnostic algorithms; TPT initiation, toxicity, and completion; and subsequent time-varying risks of reactivation, treatment, and mortality. The model was stratified by age and time since infection for contacts and included transitions on and off antiretroviral therapy for PWH. Our primary outcome was the incremental cost-effectiveness ratio (ICER), expressed as mean incremental costs (in 2020 USD) per mean incremental disability-adjusted life years (DALYs) averted, averaged across 50,000 parameter set samples and with costs and DALYs discounted 3% annually.
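The primary outcome described here — mean incremental costs over mean incremental DALYs averted, each discounted at 3% per year — can be sketched in a few lines. The annual streams and parameter-set draws below are placeholders, not IMPAACT4TB data:

```python
# Sketch of the ICER construction described above. The ICER is the
# ratio of the means across parameter-set samples, not the mean of
# per-draw ratios, and costs/DALYs are discounted 3% annually.

DISCOUNT_RATE = 0.03

def present_value(annual_stream, rate=DISCOUNT_RATE):
    """Discount a stream of annual values; year 0 is undiscounted."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(annual_stream))

def icer_of_means(draws):
    """draws: (incremental_cost, incremental_DALYs_averted) pairs,
    one per parameter-set sample."""
    n = len(draws)
    mean_cost = sum(c for c, _ in draws) / n
    mean_dalys = sum(d for _, d in draws) / n
    return mean_cost / mean_dalys

print(present_value([1000] * 10))              # 10-year cost stream
print(icer_of_means([(120, 2.0), (80, 2.0)]))  # 50.0
```

Averaging costs and DALYs separately before dividing (rather than averaging per-draw ICERs) keeps the estimate well-behaved when some draws avert few DALYs.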
Results: In 4 countries with high TB treatment unit costs, TPT for PWH was cost-saving. In 24 of the remaining 25 countries, ICERs were lowest for contacts aged <5 years, followed by contacts aged 5-14 years. The relative cost-effectiveness of TPT for PWH versus adult household contacts varied and was largely determined by case detection ratios (without contact investigation) and TB treatment costs; ICERs were lower for adult contacts than for PWH in 14 of 29 countries. ICERs fell below per-capita gross national incomes in all 29 countries for contacts aged <15 years, in 25 countries for adult contacts, and in 17 countries for PWH.
Conclusions: In many settings, TPT is likely cost-effective for PWH and for household contacts of all ages. Given slow progress in scaling up TPT for contacts over five years of age, providing TPT to these individuals deserves increased attention.
Keywords: Cost-effectiveness analysis, Tuberculosis, Global health, Mathematical model
Cost-effectiveness of tuberculosis preventive treatment in four selected countries
In each of the 4 panels, markers designate incremental cost-effectiveness ratios, with absolute discounted cumulative costs on the x-axis and discounted cumulative DALYs averted relative to the status quo on the y-axis. Shading indicates whether a strategy is dominated or extendedly dominated (white shading) vs. non-dominated (colored shading). Non-dominated strategies appear on the cost-effectiveness frontier (shown via black connecting lines), and the incremental cost-effectiveness ratios of these strategies are labeled on the graph. These 4 countries were selected because they demonstrate 3 prototypical patterns of cost-effectiveness ordering; Kenya and India have the same ordering of strategies but demonstrate variation in geographic region and HIV prevalence.
Evaluation of prostate cancer risk assessment techniques: a cost-effectiveness analysis using population data
PP-127 Applied Health Economics (AHE)
Tima Mohammadi1, Daphne P. Guh1, Annalijn I. Conklin4, Alexander C.T. Tam2, Reka E. Pataky3, Larry D. Lynd4, Wei Zhang5
1Centre for Health Evaluation and Outcome Sciences, St. Paul’s Hospital, Vancouver, British Columbia, Canada
2School of Population and Public Health, University of British Columbia, Vancouver, British Columbia, Canada
3Canadian Centre for Applied Research in Cancer Control, BC Cancer, Vancouver, British Columbia, Canada
4Centre for Health Evaluation and Outcome Sciences, St. Paul’s Hospital, Vancouver, British Columbia, Canada; Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver, British Columbia, Canada
5Centre for Health Evaluation and Outcome Sciences, St. Paul’s Hospital, Vancouver, British Columbia, Canada; School of Population and Public Health, University of British Columbia, Vancouver, British Columbia, Canada
Purpose: Prostate cancer (PCa) is the most common cancer in men in many countries, including Canada. The current standard of care for diagnosis is a needle biopsy following an elevated prostate-specific antigen (PSA) level. The low specificity of PSA in the grey zone (3–10 ng/ml) is a source of overdiagnosis, unnecessary biopsies, and overtreatment. There is therefore a need for a sensitive and specific second test to assess the risk of PCa before biopsy. Our objective was to construct a modeling framework to evaluate the value of using such a new biomarker to inform diagnostic decisions in men with a PSA in the grey zone.
Methods: Using a time-dependent state-transition model for PCa, we evaluated the cost-effectiveness of using new biomarkers in biopsy naïve men from the health care system’s perspective. Five patient partners and three clinicians were engaged in identifying the care pathway and designing the model. Our target population was British Columbia men 50+ years with a PSA of 3–10 ng/ml. We used administrative health data from BC Cancer and Population Data BC to inform the model parameters (the prevalence of high-grade and low-grade PCa, treatment distribution, costs and transition probabilities, including those to PCa-specific death and other death). Other model parameters were derived from a comprehensive review of the literature. To address uncertainty around parameters, the expected values of costs and quality-adjusted life-years (QALY) in the base case and all scenario analyses were attained through probabilistic analysis.
Results: Analyses were conducted using a Monte Carlo simulation with 1,000 iterations. At a cost-effectiveness threshold of CAD$50,000/QALY, there is an 82% probability that using a new biomarker to guide biopsy is cost-effective compared to the standard of care. This strategy was associated with an incremental cost of $133 and a QALY gain of 0.0181, which resulted in an incremental cost-effectiveness ratio of $7,360 per QALY gained over 18 years of follow-up. The outcome was sensitive to the characteristics of the biomarker.
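The 82% figure is the kind of quantity a cost-effectiveness acceptability curve reports: the share of probabilistic iterations in which the new strategy has the higher net monetary benefit at a given threshold. A minimal sketch, with invented Gaussian draws standing in for the study's model outputs:

```python
import random

# Sketch of estimating the probability a strategy is cost-effective
# from Monte Carlo iterations, as in the probabilistic analysis
# above. The simulated draws are invented, not the study's outputs.

def prob_cost_effective(draws, threshold):
    """draws: (incremental_cost, incremental_QALY) pairs, one per
    iteration. Cost-effective when incremental NMB > 0, i.e. when
    threshold * d_qaly - d_cost > 0."""
    wins = sum(1 for d_cost, d_qaly in draws
               if threshold * d_qaly - d_cost > 0)
    return wins / len(draws)

random.seed(1)
draws = [(random.gauss(130, 300), random.gauss(0.018, 0.015))
         for _ in range(1000)]
print(prob_cost_effective(draws, 50_000))  # share of 1,000 draws
```

Sweeping the threshold and plotting the resulting probabilities yields the acceptability curve shown in Fig 1.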
Conclusions: Our findings suggest that incorporating biomarkers into the initial diagnosis of PCa in men with PSA levels in the grey zone can be a cost-effective strategy by avoiding unnecessary biopsies and overtreatment. The analysis can inform the cost-effectiveness of a new biomarker even while it is still under development.
Keywords: Prostate cancer, biomarker, State-transition model, Cost-effectiveness analysis, Probabilistic analysis
Fig 1. Cost-Effectiveness Acceptability Curve
Impact of race and ethnicity on the intervention value of a community weight loss program: A distributional cost-effectiveness analysis
PP-128 Applied Health Economics (AHE)
Tzeyu L Michaud1, Su Hsin Chang2, Wen You3, Paul Estabrooks4
1Center for Reducing Health Disparities and Department of Health Promotion, College of Public Health, University of Nebraska Medical Center, Omaha, NE, USA
2Division of Public Health Sciences, Department of Surgery, Washington University School of Medicine, St Louis, MO, USA
3Department of Public Health Sciences, University of Virginia, Charlottesville, VA, USA
4Department of Health and Kinesiology, College of Health, University of Utah, Salt Lake City, UT, USA
Purpose: Racial/ethnic minorities experience disproportionately higher rates of obesity but are less likely to have access to weight loss programs and, when they do, have worse weight loss outcomes than their majority counterparts. We evaluated the cost-effectiveness of two implementation approaches for a community weight loss program and quantified their impact on health disparities.
Methods: We adapted a published community weight loss model to evaluate the cost-effectiveness of standard implementation and enhanced implementation (including a letter of invitation aimed at increasing program uptake) versus no intervention. We incorporated sex and race/ethnicity (White, Black, Hispanic, and other) differences in disease incidence, intervention program uptake, intervention effectiveness, and disease-specific and all-cause mortality using nationally representative data, trial data, and the published literature. Applying a distributional cost-effectiveness framework, we simulated the progression of a hypothetical closed overweight/obese cohort over a lifetime horizon. We estimated total costs, quality-adjusted life-years (QALYs), and quality-adjusted life expectancy (QALE). To measure health inequity impact, we used a summary measure of equally distributed equivalent health incorporating QALE and an Atkinson inequality aversion parameter. Through scenario and sensitivity analyses, we explored how altering the sex and/or racial/ethnic differences in input parameters, especially intervention program uptake and intervention effects, changed the estimated inequality impacts of implementing a community weight loss program.
Results: Compared to standard implementation, the enhanced implementation was cost-saving, with 0.002 QALYs gained and $20 saved. Although both standard and enhanced implementations increased population health, the Atkinson inequality indices also increased. That is, both implementations, compared to no intervention, were cost-effective but increased health inequity across sex and race/ethnicity subgroups. Sensitivity analyses indicated that intervention program uptake was the key driver of the health equity impact.
Conclusions: Our study demonstrated that implementing a community-based weight loss program is generally cost-effective, with or without a letter of invitation to increase program uptake. We also observed a tradeoff between an increase in overall health and an increase in health inequity in obesity-related health outcomes. Improving intervention program uptake, especially among male racial/ethnic minorities, is an effective way to reduce this health inequity.
Keywords: extended cost-effectiveness analysis, simulation modeling, health disparity, obesity, lifestyle intervention
Intramural health care consumption and costs after traumatic brain injury: a CENTER-TBI study
PP-129 Applied Health Economics (AHE)
Z.L.Rana Kaplan1, Marjolein Van Der Vlegel1, Jeroen T.J.M Van Dijck2, Dana Pisică1, Nikki van Leeuwen1, Hester F. Lingsma1, Ewout W. Steyerberg1, Juanita A. Haagsma1, Marek Majdan3, Andrew I.R. Maas4, Suzanne Polinder1
1Department of Public Health, Erasmus University Medical Center, Rotterdam, The Netherlands
2Department of Neurosurgery, University Neurosurgical Center Holland (UNCH), Leiden University Medical Center & Haaglanden Medical Center & HAGA Teaching Hospital, Leiden/The Hague, The Netherlands
3Institute for Global Health and Epidemiology, Department of Public Health, Trnava University, Trnava, Slovakia
4Department of Neurosurgery, Antwerp University Hospital and University of Antwerp, Edegem, Belgium
Purpose: Traumatic brain injury (TBI) is a global public health problem and a leading cause of morbidity, disability, and mortality. Beyond the medical sequelae, TBI also imposes major economic and societal burdens. Measuring healthcare costs is necessary to improve the accessibility and delivery of healthcare and to identify potential savings. Therefore, this study aimed to provide a detailed overview of intramural healthcare consumption and costs across the full spectrum of TBI within Europe.
Methods: We used data from the CENTER-TBI (Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury) CORE study, a prospective observational study conducted in 18 countries across Europe. The baseline Glasgow Coma Scale score (GCS) was used to classify patients into mild (GCS 13-15), moderate (GCS 9-12) or severe (GCS<=8) TBI. We analyzed seven cost categories: prehospital costs, hospital admission, surgical interventions, imaging, laboratory, blood products, and rehabilitation. Costs were estimated based on Dutch real price assessments and converted to country-specific unit prices using GDP-PPP adjustment. Mixed generalized linear models (GLM) quantified which patient characteristics were associated with health care consumption.
Results: We included 4,349 patients, of whom 2,854 (66%) had mild, 371 (9%) moderate, and 962 (22%) severe TBI. The median intramural healthcare costs of a TBI patient in Europe were €9,500 (IQR €2,000-€41,300). Costs increased with higher age (16-25 years: €7,400 [IQR €1,800-€42,700] vs. 41-65 years: €10,400 [IQR €2,200-€44,300]) and greater trauma severity (mild: €3,800 [IQR €1,400-€14,000]; moderate: €37,800 [IQR €14,900-€74,200]; severe: €60,400 [IQR €24,400-€112,400]). Hospital admission accounted for most of the intramural costs (59%), with a mean length of stay (LOS) of 5.1 days in the ICU and 6.4 days on the ward. Between-country differences in hospital length of stay were present for all severities of TBI. Other large contributors to total costs were rehabilitation (20%) and intracranial surgeries (8%). The multivariable GLM showed that male sex, worse premorbid overall health, and increasing age and trauma severity, expressed in the injury severity score (ISS) and GCS, were significantly associated with higher costs.
Conclusions: Our findings suggest that intramural costs of TBI are substantial and driven largely by hospital admission. This knowledge of intramural healthcare expenses could be a first step towards more cost-efficient TBI care.
Keywords: cost analysis, traumatic brain injury, in-hospital costs, healthcare consumption
Exploring family and clinician decision preferences about interventions for congenital melanocytic nevi
PP-130 Decision Psychology and Shared Decision Making (DEC)
Carrie C Coughlin1, Yuliya Kozina2, Kendrick Williams3, Mary C Politi4
1Division of Dermatology, Departments of Medicine and Pediatrics, Washington University School of Medicine in St. Louis
2Washington University School of Medicine in St. Louis
3Department of Pediatrics, Washington University School of Medicine in St. Louis
4Department of Surgery, Washington University School of Medicine in St. Louis
Purpose: Families with children who have congenital melanocytic nevi (CMN) face complex decisions about procedural interventions and screening. This study explored family and clinician preferences, needs, and goals when making these decisions to inform the development of a decision aid.
Methods: We conducted semi-structured interviews with parents of children with congenital nevi, and with clinicians (pediatric dermatologists, pediatric plastic surgeons, a neurologist) who care for children with congenital nevi.
Results: All parents who completed the post-interview surveys were mothers, with a mean age of 35.7 years, from urban (3/10), suburban (5/10), and rural (2/10) regions. Two parents identified as Black and 8 identified as White. Most parents interviewed (9/11) had a child under age 6, and most children had had no prior surgical procedure for their nevi (7/11). Few children had had biopsies (2/11), partial removal (1/11), or total removal (1/11) of their primary nevus. Four children had undergone magnetic resonance imaging (MRI) to evaluate for neural melanosis. Of the 8 clinicians (6 female, 2 male) participating, all were in academic practices, with assistant professor (2), associate professor (4), or professor (2) appointments. Deductive and inductive analysis of the qualitative interviews is ongoing. Preliminary themes suggest that cancer risk weighs heavily on families making decisions about surgery. Post-surgical concerns (scarring, healing) are also important to parents evaluating procedural options. Cost was not a factor for some families but was an important factor for others as they considered the timing of procedures and frequency of appointments. Clinicians stated they wanted tools to support discussions with families but did not have a uniform preference for the format of these tools (e.g., electronic, paper-based, video). Clinicians recognized the emotional and practical challenges associated with having a child with CMN (particularly large, giant, or multiple lesions). Most of the pediatric dermatologists considered MRI screening for neural melanosis important to do early, without sedation, and encouraged it in the correct clinical context. All clinicians deemed family preferences vital to conversations about care.
Conclusions: Communication about cancer risk and surgical sequelae is vital in conversations about procedures for children with CMN. Information about wound healing and scarring should be communicated as clearly as possible before surgery. Future work can explore the development of family-centered decision tools that can be incorporated into routine workflows to support CMN choices.
Keywords: pediatric, dermatology, shared decision making, congenital nevi
Arguments About Vaccination
PP-131 Decision Psychology and Shared Decision Making (DEC)
Christopher R Wolfe1, Grace E Tirzmalis1, Wylie Brace1, Josselyn E. Marroquin1, Junjie Wu2, Yizhu Wang2, Hongli Gao2
1Department of Psychology, Miami University, Oxford, Ohio, USA
2School of Psychology, Xinxiang Medical University, Xinxiang, China
Purpose: We assessed how people process arguments for and against vaccination in the United States and China.
Methods: We conducted three studies in the United States and two in China on the extent to which people agreed with arguments for and against vaccination, and judged their strength or quality, compared with other argument topics. Study 1 replicated previously published research on the weight of claims and reasons in agreement and judgments of argument quality. Participants read 8 sets of four arguments unrelated to vaccination: two supporting a pro claim and two supporting the opposite con claim. Crossed with the pro and con claims, two supporting reasons were similar in ideology or other factors, and two were dissimilar. For both argument strength and agreement, the absolute difference of the two pro ratings minus the two con ratings assesses the weight of claims, and the absolute difference of the two similar-reason ratings minus the two dissimilar-reason ratings assesses the weight of supporting reasons. Study 2 applied the same approach using pre-tested liberal and conservative supporting reasons for and against claims about vaccination. In Study 3, participants rated their agreement with, and the quality of, anti-vaccination arguments before and after rebuttal. Study 4 replicated Study 3 with Chinese participants. In Study 5, Chinese undergraduates majoring in medical and non-medical fields rated agreement with claims about vaccines presented alone or supported by good or bogus reasons.
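One plausible reading of the Study 1 weight computations can be sketched as arithmetic over a set's four ratings. The averaging step and the example ratings are our assumptions; the abstract specifies only the pro-minus-con and similar-minus-dissimilar differences:

```python
# Sketch of the claim and reason weights described for Study 1.
# Each argument set crosses claim (pro/con) with supporting reason
# (similar/dissimilar). Ratings below are hypothetical.

def claim_and_reason_weights(pro_sim, pro_dis, con_sim, con_dis):
    """Claim weight: |mean of pro ratings - mean of con ratings|.
    Reason weight: |mean of similar-reason ratings - mean of
    dissimilar-reason ratings|."""
    claim_weight = abs((pro_sim + pro_dis) / 2 - (con_sim + con_dis) / 2)
    reason_weight = abs((pro_sim + con_sim) / 2 - (pro_dis + con_dis) / 2)
    return claim_weight, reason_weight

# A rater swayed mostly by the claim: pro arguments rated high
# regardless of which reason supports them.
print(claim_and_reason_weights(6, 5, 2, 3))  # (3.0, 0.0)
```

A rater swayed by reason quality instead would show the opposite pattern: a large similar-minus-dissimilar gap and little pro-minus-con gap.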
Results: In Study 1, claims had significantly more weight in influencing agreement, but reasons had significantly more weight in judgments of argument quality. However, for the Study 2 arguments about vaccination, claims were significantly more influential than reasons for both agreement and quality judgments. In Studies 3 and 4, rebuttal did not change agreement with anti-vaccination arguments. In Study 5, claims supported by good reasons yielded higher agreement than claims alone, and claims supported by bogus reasons yielded lower agreement. Pre-medical majors agreed significantly less than others with claims supported by bogus reasons.
Conclusions: To the extent that people agree with an argument about most topics, they are mostly agreeing with the claim, but when judging an argument’s strength or quality, they are more influenced by supporting reasons. However, positions about vaccination are “hardened,” meaning that supporting reasons have little weight. Yet blatantly bogus reasons reduce agreement. Rebuttal was ineffective in the United States and China.
Keywords: judgments about vaccination, reasoning and argumentation about vaccination
Stroke surrogate decision makers report low levels of regret after decisions on life-sustaining treatment
PP-132 Decision Psychology and Shared Decision Making (DEC)
Darin B Zahuranec1, Christopher J Becker1, Madeline D Kwicklis2, Xu Shi2, Rebecca J Lank3, Carmen Ortiz1, Erin Case2, Lewis B Morgenstern1
1Stroke Program, University of Michigan, Ann Arbor, USA
2School of Public Health, University of Michigan, Ann Arbor, USA
3Carver College of Medicine, University of Iowa, Iowa City, USA
Purpose: Surrogate decision makers are often asked to make decisions on life-sustaining treatments early after stroke and other critical illness. There are limited studies assessing decision regret longitudinally among surrogates after severe stroke. We report initial findings on decision regret out to 12 months after hospitalization from a longitudinal study of stroke surrogate decision makers.
Methods: Surrogate decision makers who discussed life-sustaining treatments for severe stroke patients (ischemic stroke or intracerebral hemorrhage) were enrolled in the longitudinal OASIS (Outcomes Among Surrogate decision makers In Stroke) study (enrollment 4/2016 to 9/2020). Participants identified the decision that was hardest for them and completed the validated decision regret scale at or shortly after hospital discharge and again at 3, 6, and 12 months after the stroke. We report a descriptive analysis of the decision regret scale (range 0-100, with higher scores indicating greater regret) over time in the entire population and in pre-specified subgroups.
Results: A total of 320 surrogates for 257 stroke cases were enrolled. Patient characteristics: median age 75 (interquartile range, IQR 64-87), female 52%, intracerebral hemorrhage 25%, in-hospital death or hospice discharge 38.9%, Mexican American 61.5%. Surrogates were a median age of 57 (IQR 46-66) and were mostly female (77%). Surrogates reported discussing the following decisions with the health care team: do-not-resuscitate orders 90%, feeding tube 58%, transition to comfort care 55%, ventilator 54%, brain surgery 44%; note that discussion of a treatment does not mean that the patient received the treatment. Decision regret scores from hospital discharge to 12 months and in pre-specified subgroups are shown in the Table. The mean decision regret score was 11.1 (standard deviation, SD 20.2) at discharge and 11.5 (SD 20.2) at 12 months. Floor effects were common, with over 50% of respondents reporting the lowest possible score for decision regret at each time point and among nearly all pre-specified subgroups.
Conclusions: Stroke surrogate decision makers who discuss life-sustaining treatments with healthcare professionals report low levels of decision regret that persist at 12 months. While this is encouraging, the low levels of regret and considerable floor effects indicate either true satisfaction with the healthcare provided or that this may not be the optimal scale to target in future efforts to improve surrogate decision making after stroke.
Keywords: Stroke, resuscitation orders, emotions, family caregivers
Decision regret over time among stroke surrogate decision makers
Associations between Basic Human Values and Health-Related Beliefs and Behaviors: A Cross-Sectional Study
PP-133 Decision Psychology and Shared Decision Making (DEC)
Heqin Yang, Nabin Poudel, Kimberly B Garza
Department of Health Outcomes Research and Policy, Harrison College of Pharmacy, Auburn University, Auburn, AL, USA
BACKGROUND AND AIM: This study aimed to explore the associations of the most important basic values with the perception of decision involvement in respondents’ own healthcare, subjective well-being, noticing calorie information on the menu, and beliefs about cancer.
METHODS: We examined cross-sectional nationally representative data from the 2020 Health Information National Trends Survey (HINTS 5, Cycle 4). Respondents chose the most important value (MIV) in their day-to-day life from seven basic values (making their own decision, being happy, helping people, being loyal to family, connection to religion, keeping themselves healthy, and assuring family safety). Multivariate logistic regression examined the association of the MIV with binary dependent variables including perception of decision involvement in respondents’ own healthcare, subjective well-being (feeling little interest or pleasure in doing things; feeling down, depressed or hopeless; feeling nervous, anxious or on edge; not being able to stop or control worrying), noticing calorie information on the menu, and beliefs about cancer (e.g., worried about getting cancer). Key demographic characteristics (e.g., age, gender, race, educational background), chronic conditions, and the tendency to reflect on important values when feeling threatened were included as independent variables.
RESULTS: The final sample consisted of 3,865 respondents. Multivariate logistic regression showed that, compared to respondents who value making their own decisions, those who value keeping themselves healthy had higher odds of always being involved in their healthcare as much as they wanted (OR=1.92, 95% CI: 1.07, 3.45) and lower odds of feeling little interest or pleasure in doing things (OR=0.48, 95% CI: 0.26, 0.90). Those who value connection to religion had lower odds of feeling down, depressed, or hopeless (OR=0.48, 95% CI: 0.25, 0.92) and of not being able to stop or control worrying (OR=0.41, 95% CI: 0.20, 0.85). Those who value helping people had higher odds of noticing calorie information on the menu (OR=2.43, 95% CI: 1.26, 4.68). Those who value assuring family safety had higher odds of worrying about getting cancer (OR=1.72, 95% CI: 1.13, 2.61) and of believing everything causes cancer (OR=1.68, 95% CI: 1.05, 2.68).
CONCLUSIONS: Basic human values were associated with the perception of decision involvement in respondents’ own healthcare, subjective well-being, noticing calorie information on the menu, and cancer beliefs. These findings suggest that eliciting patients’ basic values is essential to the process of shared decision making about medical care and/or behavior change.
Keywords: Basic Human Value, Health-related Behaviors, Health Information National Trends Survey, Medical Decision Involvement, Shared Decision Making
A Methodology for Communicating Risk Ethically and Effectively in Decision Aids
PP-134 Decision Psychology and Shared Decision Making (DEC)
Kristin Kostick-Quenet, Benjamin Lang, Natalie Dorfman, J.S. Blumenthal-Barby, Holland Kaplan
Center for Medical Ethics and Health Policy, Baylor College of Medicine; Houston, Texas
Purpose: As personalized approaches to medicine continue to expand, so does the need to explore evidence-based mechanisms to help ensure responsible communication of personalized risk estimates to patients facing critical health decisions. Challenges to responsible risk communication include enduring algorithmic uncertainty as well as significant variation in how individuals perceive and incorporate risk information into decision making.
Methods: We present a methodology for identifying and evaluating risk communication options in the context of decision making about Left Ventricular Assist Device (LVAD) among patients with advanced heart failure. We illustrate how to generate evidence-based options for communicating personalized post-implant survival estimates informed by formative interviews to triangulate patient and physician knowledge needs and preferences. We also demonstrate how to embed relevant insights about common cognitive and decisional biases drawn from behavioral economics (e.g. framing and messenger effects; affective forecasting; etc.) to positively influence receptivity and actionability of information.
Results: We describe, step by step, our evaluation process involving multi-stage user testing to ensure acceptability, interpretability, and utility for informed and values-congruent decision making within the context of a holistic decision support system.
Conclusions: We believe these risk communication strategies will be widely relevant and replicable across clinical decision making areas and constitute a multidisciplinary approach to embedding ethics, behavioral and decision science into personalized risk communication.
Keywords: personalized risk, risk communication, decision aids, ethics, behavioral economics, decision science
Attitudes towards and perceived barriers to implementing patient decision aids in orthopedic clinics
PP-135 Decision Psychology and Shared Decision Making (DEC)
Ha Vo1, KD Valentine2, Felisha Marques1, Hany Bedair3, Thomas Cha3, Antonia Chen4, Jesse Eisler5, Prakash Jayakumar6, Michael Kain7, Lauren Leavitt1, Benjamin Ricciardi8, James Slover9, Daniel Vigil10, Richard Wexler11, Adolph Yates12, Karen Sepucha2
1Health Decision Sciences Center, Massachusetts General Hospital, Boston, USA
2Health Decision Sciences Center, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA
3Department of Orthopaedic Surgery, Massachusetts General Hospital, Boston, USA
4Department of Orthopaedic Surgery, Brigham and Women's Hospital, Boston, USA
5Connecticut Back Center, Hartford, CT, USA; Bone and Joint Institute, Hartford Healthcare, Hartford, USA
6Department of Surgery and Perioperative Care, Dell Medical School, Austin, USA
7Boston Medical Center, Boston, USA
8University of Rochester Medical Center, Rochester, USA
9Department of Orthopaedic Surgery, New York University Langone, New York, USA
10Department of Orthopaedic Surgery, University of California, Los Angeles, Los Angeles, CA, USA
11Consultant for Massachusetts General Hospital, Boston, MA, USA
12Department of Orthopaedic Surgery, UPMC, Pittsburgh, PA, USA
Purpose: To better understand the attitudes, experiences, and barriers toward implementing shared decision making (SDM) and patient decision aids (PDAs) in a sample of orthopedic clinics across the United States (US).
Methods: We conducted a prospective survey study from August – December 2021 including clinicians and staff from nine orthopedic clinics who had agreed to participate in an Orthopedics SDM Learning Collaborative.
We asked participants how often they engaged their patients in SDM, their feelings toward PDAs, and their familiarity with PDAs. They were also asked about internal/institutional support for SDM, and compatibility of PDA use with their organization’s mission and approach to care delivery.
We asked clinicians about potential implementation barriers such as longer visit times, reduced surgery rates, and staff readiness to integrate PDAs into routine care.
Most questions had 5 response options; we reported the percentage who selected the top two options (e.g., “extremely” and “very” or “always” and “usually”).
Results: Overall response rate was 67% (71/106); response rates by site ranged from 38% to 100%. Overall, 61% of surgeons (34/56) and 74% of clinic staff (37/50) completed the surveys.
The majority of respondents (82%) reported that they engaged patients in SDM. The majority (74%) also felt positive about providing PDAs to their patients, but only 45% were familiar with the PDAs available to them. Respondents reported strong internal support from clinic leadership (89%) and surgeons (88%), but less from nonclinical staff (53%).
The majority of clinicians (71%) agreed that PDAs were compatible with their organization’s mission and priorities. Staff (69%) also reported that getting PDAs to patients was at least somewhat compatible with their jobs.
Few surgeons (9%) were very concerned that engaging in SDM would make their visits longer, and none were very concerned about a reduction in surgical rates. However, only 27% felt confident their team had everything they needed to deliver PDAs to patients as part of routine care.
Conclusions: Clinicians and clinic staff from clinics in the Learning Collaborative had positive attitudes about providing PDAs and believed they were frequently practicing SDM. Visit length and reduced surgery rates were not flagged as major barriers to implementation, but clinic staff reported that providing PDAs was not always a high priority and felt they needed additional logistical support to get PDAs into patients’ hands.
Keywords: Shared Decision Making, Patient Decision Aids, Implementation
A little time makes a big difference: Patient and physician concordance on colorectal cancer screening discussions
PP-136 Decision Psychology and Shared Decision Making (DEC)
KD Valentine1, Lauren Leavitt1, Leigh Simmons1, Steven J Atlas1, Sanja Percac-Lima1, Neil Korsen2, Paul Han2, Karen Sepucha1, Kathleen Fairfield2
1Massachusetts General Hospital
2Maine Medical Center
Purpose: To identify the extent to which patients and physicians agree on aspects of their discussions regarding colorectal cancer screening.
Methods: This is a secondary analysis of a multicenter randomized controlled trial of training primary care physicians (PCPs) in shared decision making (SDM) to improve colorectal cancer (CRC) screening decision making in older adults. PCPs (N=67) were randomized to receive notifications that patients were due for screening (comparator arm) or to receive notifications plus a pre-trial 2-hour training on SDM skills for CRC screening (intervention arm). Eligible patients were due for CRC screening and aged 76-85. We surveyed both patients and PCPs after eligible clinical visits and linked their responses. We measured patients and PCPs’ reports of discussion of CRC screening (yes or no), time spent discussing CRC screening (<2min, 2-5 min or >5 min), patient preference for screening, and SDM scores (SDM Process scale, range 0-4, 4 being optimal). We examined concordance of their responses and whether the amount of time spent was related to concordance and higher SDM scores.
Results: 382 patient and PCP post-visit survey dyads were identified. The majority of patients and PCPs agreed on whether or not they spoke about CRC screening at the visit (82%). Patients and PCPs reported different amounts of time spent discussing CRC in 47% of cases. Only 54% of patient-physician dyads agreed on the patient's preference. Dyads most often agreed when the patient wanted no further testing (75% accurate), followed by when patients wanted colonoscopy (62%), and when patients wanted stool-based testing (53%). Patient and PCP reported SDM process scores were positively correlated (r=0.5, p<0.001).
As PCPs reported more time was spent discussing CRC screening, they were more likely to accurately report the patient's preference (<2 minutes 51%, 2-5 minutes 64%, and >5 minutes 78% matched, respectively, p=0.005). PCPs who reported spending <2 minutes discussing CRC screening had lower self-reported SDM scores (M=1.8, SD=0.9) than those who spent 2-5 minutes (M=2.8, SD=.6) or >5 minutes discussing CRC (M=3.0, SD=1.0; ps<0.001).
Conclusions: PCPs and patients agreed about the patient’s preference in just over half of visits. PCPs who spent more time discussing CRC screening were more likely to have an accurate perception of their patients’ screening preference and to report more SDM.
Keywords: shared decision making, patient-clinician concordance, patient preference, colorectal cancer screening, training, geriatrics
Impact Index Scores Across People with Symptomatic and Asymptomatic Chronic Conditions
PP-137 Health Services, Outcomes and Policy Research (HSOP)
KD Valentine1, Suzanne Brodney1, Carol Cosenza2, Susan Edgman-Levitan1, Karen Sepucha1, Michael Barry1
1Division of General Internal Medicine, Massachusetts General Hospital
2Center for Survey Research, University of Massachusetts, Boston
Purpose: To extend the generalizability of the previously validated Impact Index by evaluating its relationships in patients with a broad range of chronic conditions that differ by symptom severity.
Methods: We surveyed 1000 English-speaking patients aged 21 and older who visited their provider in the past 6 months and were included in a hospital’s chronic condition registry for diabetes, cardiovascular disease, hypertension, or gout. Patients completed a mailed or web-based survey that included the Impact Index (a 4-item measure of how much a patient is bothered, worried, limited, or in pain from their health condition in the past 30 days; range 0-12, with higher scores indicating greater impact on quality of life), overall health (SF-1), and self-report of whether they had been diagnosed with any of 8 chronic conditions. Patients were categorized as having symptomatic conditions (1 or more conditions including arthritis/gout, angina, heart failure, COPD, depression) or asymptomatic conditions (diabetes, kidney disease, high blood pressure, or no conditions endorsed). To assess validity of the Impact Index, Pearson correlations were used to describe its relationship with overall health and the number of chronic conditions. Mean scores were compared for symptomatic and asymptomatic conditions.
Results: Of the 500 respondents (50% response rate), 408 were eligible to complete the survey and 396 patients completed all the Impact Index items. Respondents were 44% women, 69% White, 16% African American, 2% Asian, 15% Hispanic, 49% with at least a college degree or more education, 33% aged 65-74 years and 29% aged 75 or older. Impact Index scores ranged from 0 to 12 (M=6, SD=4). As reports of overall health increased, Impact Index scores decreased (r=-0.59, p<.001). As the number of chronic conditions patients reported increased, so did Impact Index scores (r=0.3, p<.001). Patients with one or more symptomatic conditions reported higher Impact Index scores (M=6.7, SD=3.5) compared to those who did not report any symptomatic conditions (M=4.3, SD=3.6, p<.001, d=.67).
Conclusions: These findings extend the generalizability of the Impact Index, which was previously validated in patients with osteoarthritis or benign prostatic hyperplasia to patients with a range of chronic conditions. The Impact Index was negatively correlated with overall health, positively correlated with the number of chronic conditions, and patients reporting more symptomatic conditions had higher scores than those categorized as asymptomatic.
Keywords: chronic condition, validity, Impact Index, measurement, patient reported outcomes
Women’s Awareness of Breast Cancer Risks in the Era of Breast Density Notifications: A Mixed Methods Study
PP-138 Decision Psychology and Shared Decision Making (DEC)
Laura B Beidler1, Nancy R Kressin2, Jolie B Wormwood3, Tracy A Battaglia2, Priscilla J Slanetz4, Christine M Gunn1
1The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire, USA; The Dartmouth Cancer Center, Lebanon, New Hampshire, USA
2Section of General Internal Medicine, Boston University School of Medicine, Boston, Massachusetts, USA
3Department of Psychology, University of New Hampshire, Durham, New Hampshire, USA
4Department of Radiology, Boston University School of Medicine, Boston, Massachusetts, USA
Purpose: We have little understanding of how women perceive the breast cancer risk associated with breast density. In terms of attributable risk, breast density confers about as much risk as family history or prior biopsy and more risk than alcohol consumption, parity, or obesity. This study describes women’s awareness of breast cancer risks in the context of breast density notification.
Methods: This mixed methods study included a national telephone survey and semi-structured interviews to assess women’s awareness of breast cancer risks. Eligible women were: >40 years old, reported a mammogram in <2 years, had no prior breast cancer, and had heard of “breast density”. Bivariate analyses examined women’s perceptions of the relative contribution of each risk to overall cancer risk. Survey participants who were notified of their personal breast density were invited for a qualitative interview. We used a matrix coding approach to develop qualitative themes from the interview data and justify the inclusion of interviewees within themes.
Results: The sample included 1,858 surveys and 61 qualitative interviews. About half (51%) of survey participants had college degrees, 16% had limited literacy; 42% were non-Hispanic White, 27% Black, 14% Hispanic, and 9% Asian. Surveys indicated that 93% of respondents perceived family history of breast cancer and 65% perceived being overweight/obese as a greater risk than breast density. About half felt that not having children (52%), having more than one alcoholic drink per day (53%) and having had a breast biopsy (48%) were lesser risks relative to breast density. In interviews, 6 of 61 women described breast density as contributing to breast cancer risk. Women most frequently emphasized family history or genetic factors as breast cancer risk factors (Figure 1). Nineteen women stated that they were unsure or did not know if it was possible to reduce their breast cancer risk.
Figure 1. Perceived breast cancer risk factors, ranked by frequency of mention by interviewees
Conclusions: Despite breast density conferring about as much risk as family history to overall breast cancer risk, most women perceive family history-related risk to be much greater. Even with widespread use of dense breast notifications, women did not identify breast density as a cancer risk factor, nor did they feel confident about actions to mitigate their breast cancer risk.
Keywords: Breast cancer, breast density notification, breast cancer risk perceptions, breast density
Decision regret and health outcomes among patients with knee osteoarthritis following use of a technology-enabled patient decision aid
PP-139 Decision Psychology and Shared Decision Making (DEC)
Zoe Trutner1, Lauren Uhler1, Nazan Aksan1, James Custer1, Prakash Jayakumar1, Kevin Bozic1, Joel Tsevat2
1Department of Surgery and Perioperative Care, Dell Medical School at University of Texas at Austin, Austin, USA
2Center for Research to Advance Community Health and Department of Medicine, University of Texas Health Science Center at San Antonio
Purpose: Decision aids (DAs) aim to enhance shared decision making (SDM) and the partnership between clinicians and patients to arrive at treatment decisions aligned with the patient’s preferences, values, and needs. A more personalized and technology-enabled approach to SDM may be achieved using patient DAs driven by machine learning and predictive analytics applied to patient-reported outcome measures and other clinical data. In this interim analysis of a 2-year randomized controlled trial, we aimed to describe the level of SDM among adults with moderate-to-severe knee osteoarthritis (OA) who utilized an AI-enabled DA, Joint Insights (JI). The JI DA incorporates general health and knee-specific patient-reported outcomes plus clinical data to predict the likelihood of clinical improvement following total knee arthroplasty (TKA). Secondarily, we aimed to describe decision quality, conflict, and regret following use of the JI DA.
Methods: At an academic health center musculoskeletal clinic, we recruited adults establishing care for moderate to severe knee OA. We analyzed data for patients in the intervention arm, which consisted of access to the JI DA, information on knee OA and its treatment, and preference elicitation. The SDM measures included the CollaboRATE (level of SDM; range: 0 [lowest] to 27 [highest]), the Knee Decision Quality Index (DQI-Knee; range: 0 [lowest] to 100 [highest]), and the Decisional Conflict Scale (DCS; range: 0 [lowest] to 100 [highest]), with the Decision Regret Scale (DRS; range: 0 [lowest] to 100 [highest]) administered 3 and 6 months after the initial visit.
Results: Over 14 months, we randomized 80 patients to the intervention arm. Their mean (SD) age was 64.2 (9.8) years; 51.8% were female; 47.0% were Latino/a or Hispanic; and 51.8% had a high school education or less. At baseline, patients’ mean [SD] scores on the SDM measures were: CollaboRATE 24.2 [0.77]; DQI-Knee 67.9 [2.9]; and DCS 3.3 [1.0]. On follow-up, patients’ mean [SD] DRS scores at 3 months were 20.1 [2.7] and at 6 months 18.7 [3.3].
Conclusions: Patients with moderate-to-severe knee OA utilizing an AI-enabled DA reported high levels of SDM, moderate decision quality, and low levels of decision conflict and regret. Further analysis at study completion is required to determine the added benefit of this personalized DA compared with education and preference elicitation alone when considering TKA for patients with moderate-to-severe knee OA.
Keywords: shared decision making, artificial intelligence, decision aid, knee osteoarthritis
Exploring total knee arthroplasty decision-making process: an interpretive descriptive study
PP-140 Decision Psychology and Shared Decision Making (DEC)
Lissa Pacheco-Brousseau1, Stéphane Poitras1, Marylène Charette2, Sarah Ben Amor3, François Desmeules4, Dawn Stacey5
1School of Rehabilitation Sciences, University of Ottawa, Ottawa, Canada.
2Interdisciplinary School of Health Sciences, University of Ottawa, Ottawa, Canada.
3Telfer School of Management, University of Ottawa, Ottawa, Canada.
4School of Rehabilitation, Université de Montréal, Montréal, Canada; Orthopaedic Clinical Research Unit, Maisonneuve-Rosemont Hospital Research Center, Montréal, Canada.
5School of Nursing, University of Ottawa, Ottawa, Canada; Clinical Epidemiology Program, The Ottawa Hospital Research Institute, Ottawa, Canada.
Purpose: To explore barriers and facilitators to meeting six appropriateness criteria for total knee arthroplasty (TKA) in a decision-making process for adults with primary knee osteoarthritis. Six Canadian appropriateness criteria for TKA based on expert consensus (clinicians, researchers, decision-makers, patients) are: osteoarthritis symptoms negatively impacting quality of life, clinical and radiographic evidence of osteoarthritis, trial of conservative treatments, patient has realistic expectations, patient and healthcare professional agree that benefits outweigh risks, and mental and physical readiness for surgery.
Methods: Interpretive descriptive qualitative study at an academic hospital (Ontario, Canada). In Ontario, adults considering elective TKA are assessed in a standardized rapid access clinic primarily by physiotherapists and triaged to conservative management or consultation with an orthopedic surgeon. Purposive sampling aimed to recruit 15-20 participants including decision-makers at the provincial and regional levels, healthcare team members at the hospital (physiotherapists, nurses, TKA orthopedic surgeons), and adults assessed in the hospital program who underwent elective TKA. Semi-structured interviews asked questions about barriers and facilitators to using these criteria. Transcribed interviews were analyzed by two independent reviewers using inductive thematic analysis combined with constant comparative methods in NVivo software.
Results: Preliminary analysis revealed that the main barriers to using the appropriateness criteria were lack of understanding and inadequate operationalization in practice of some criteria: a) quality of life impact assessed with pain and function; b) expectations implicitly assessed; c) evidence of OA based on radiography; d) discussion of benefits and risks delegated to surgeons; and e) surgery readiness implicitly assessed. Other barriers were perception that all criteria were presently used and belief that some criteria should be taken into account later in the decision-making process (realistic expectations, discussion of benefits and risks, readiness for surgery). Main facilitators were perceived benefits of the process from healthcare professionals, patients, and leaders, having standardized processes to update assessments based on new evidence, and leaders for driving changes.
Conclusions: Various interconnected barriers and facilitators influenced the use of the six appropriateness criteria in clinical practice. Interventions tailored to the identified barriers would be needed to support the use of the six appropriateness criteria in the current TKA decision-making process.
Keywords: Total knee arthroplasty, decision-making, patient involvement, barriers, facilitators
Influence of financial burden on advanced cancer treatment decisions: Patient, caregiver, and clinician perspectives
PP-141 Decision Psychology and Shared Decision Making (DEC)
Rachel Forcino, Satveer Kaur-Gill, Rebecca Butcher, Amber Barnato
The Dartmouth Institute for Health Policy and Clinical Practice, Dartmouth College, Lebanon, NH, US
Purpose: To explore how financial burden (problems affecting patients and families related to costs of medical care) influences advanced cancer treatment decision-making among patients, caregivers, and clinicians.
Methods: We conducted qualitative analysis of semi-structured interviews with advanced cancer patients (n=16), their caregivers (n=23), and their treating clinicians (n=62), and of field notes from direct observation of encounters (n=64), at three minority-serving United States National Cancer Institute-designated comprehensive cancer centers between 2018 and 2020. Multiple team members coded elements of the transcripts relating to treatment decision-making using a mixed inductive and deductive approach. A single analyst conducted secondary thematic content analysis using the Jones et al. (2020) model of financial burden after cancer diagnosis as an analytic framework. We used investigator triangulation through discussion of findings among the research team.
Results: Patients, caregivers, and clinicians agreed that insurance characteristics often drive available diagnostic and treatment options for patients with advanced cancer, with insurance reimbursement for interventions depending on specific care trajectories and disease progression milestones. For example, patients and caregivers cited evidence of recurrence required for insurance coverage of a CT scan with contrast, and initiation of hospice care required for insurance coverage of a pigtail catheter for recurrent ascites. One clinician acknowledged that colleagues sometimes rule out expensive treatments for uninsured patients, while others described significant barriers to accessing reduced-cost care on behalf of uninsured patients. Patients, caregivers, and clinicians also agreed that travel to and from cancer center appointments is an important non-medical cost that can produce burden, especially for rural or remote patients. Patients and caregivers discussed direct medical costs, an area where clinicians acknowledged they lack information. Patients and caregivers also emphasized other non-medical costs such as worker productivity loss. For example, one caregiver described figuring out how to continue working during her husband’s daily radiation treatments as the single most difficult cancer-related decision they faced.
Conclusions: Insurance characteristics are widely recognized as an explicit influence on advanced cancer treatment decisions. Travel and transportation were also recognized as limitations on access to care. Prior research suggests that employment-related concerns, which we find are prioritized by patients and caregivers but not readily recognized by clinicians, have the potential for a delayed impact on decision-making through unplanned treatment discontinuation and/or poorer patient outcomes.
Keywords: advanced cancer; treatment costs
Parents’ Quality of Life after Treatment Decision for a Fetus Diagnosed with a Severe Congenital Heart Defect
PP-142 Decision Psychology and Shared Decision Making (DEC)
Rebecca K. Delaney1, Alistair Thorpe1, Nelangi M. Pinto2, Elissa M. Ozanne1, Mandy L. Pershing1, Kirstin Tanner1, Lisa M. Hansen2, Linda M. Lambert2, Angela Fagerlin3
1University of Utah Intermountain Healthcare Department of Population Health Sciences, University of Utah Spencer Fox Eccles School of Medicine, Salt Lake City, UT
2Division of Cardiology, Department of Pediatrics, University of Utah, Salt Lake City, UT
3University of Utah Intermountain Healthcare Department of Population Health Sciences, University of Utah Spencer Fox Eccles School of Medicine, Salt Lake City, UT; Salt Lake City VA Informatics Decision-Enhancement and Analytic Sciences (IDEAS) Center for Innovation
Purpose: There is a limited understanding of how different treatment decisions for severe, life-threatening congenital heart defects (CHD) relate to parents’ quality of life (QOL) and health. The purpose of this exploratory study was to address this gap by examining differences in parent reported QOL and health by treatment decision (termination, comfort-directed care or surgery).
Methods: Parents of a fetus or neonate diagnosed with a severe CHD (e.g., hypoplastic left heart syndrome) were enrolled at the time of diagnosis and completed validated surveys 3 months after their treatment decision. The Impact of a Child with Congenital Anomalies on Parents questionnaire measures parents’ QOL across multiple domains (contact with clinicians, social network, partner relationship, and state of mind) on a Likert scale from 1 (strongly disagree) to 4 (strongly agree). The short-form 12-item health survey (SF-12) was used to assess parents’ mental and physical health functioning. Mean difference estimates with 95% confidence intervals are reported for these measures by treatment decision group as well as by the ultimate outcome of the child (survival).
Results: Of 35 parents enrolled, 23 had complete data and were included in the analyses. Most were female (78%), non-Hispanic White (87%), married (91%), and had some college education (70%). There were 16 parents who chose surgery, 7 who chose comfort-directed care, and no parents who chose termination. Parents who chose comfort-directed care reported less contact with clinicians (mean difference = -2.57; 95% CI, -4.95 to -0.19) and less social network support (mean difference = -4.83; 95% CI, -7.37 to -2.29) compared to parents who chose surgical intervention and whose child did not survive. Parents who chose comfort-directed care also reported a poorer state of mind compared to parents who chose surgery and whose child survived (mean difference = -3.59; 95% CI, -6.31 to -0.87). No differences were found for partner relationship or for measures of mental or physical health.
Conclusions: Parents’ QOL differed based on their treatment decisions, particularly for parents who chose comfort-directed care, which highlights the need for additional research. The development of interventions and resources, such as decision support tools and bereavement resources, is an important next step to enhance decision support for parents who choose comfort care.
Keywords: congenital heart defect, parent quality of life, decision making, coping
“The safest option I have”: Risk perception and decision-making for home breech birth
PP-143 Decision Psychology and Shared Decision Making (DEC)
Robyn Schafer1, Julia C. Phillippi2, Shelagh Mulvaney2, Mary S. Dietrich2, Holly Powell Kennedy3
1School of Nursing, Rutgers University, Newark, USA
2School of Nursing, Vanderbilt University, Nashville, USA
3School of Nursing, Yale University, New Haven, USA
Purpose: This presentation will share findings from a mixed methods study that explored the experience of decision-making of individuals (n=25) who left the hospital system to pursue home breech birth in the United States, with a focus on the ways in which participants perceived, interpreted, and prioritized risk.
Methods: Data were collected through both semi-structured interviews about participants’ experiences and cross-sectional surveys that assessed birth preferences, perceived access to care, and validated instruments of decisional autonomy and satisfaction. Descriptive and inferential statistics are presented for demographic characteristics and quantitative variables. Qualitative analysis followed an interpretive description approach informed by situational analysis.
Results: Results of this study demonstrate that participants perceived increased risk and lack of autonomy in the hospital system. Participants’ constructions of risk extended beyond adverse biomedical health outcomes such as morbidity and mortality to incorporate threats to personal needs and values; broad interpretations of maternal, neonatal, and familial health; and access to quality, person-centered care. In addition to the wellbeing of their baby, participants highly valued the experience of childbirth itself independent from its outcomes and considered maternal psychological and emotional health, dignity, agency, and bodily integrity essential components of overall safety. In a system with numerous barriers to accessing supportive care for their desired mode and site of birth, participants sought to balance subjective interpretations of risk and safety and concluded that planned home breech birth was the best and safest of all available options.
Conclusions: Findings from this study provide insight into risk perception and decision-making for high-risk birth at home and contribute to clinical practice and health systems recommendations to improve informed choice and the provision of safe and respectful perinatal care for breech birth.
Keywords: decision-making, risk perception, birth, home birth, perinatal care, breech
Elementary Conceptual Framework for Shared Decision Making in the Patient-Medical Professional-Artificial Intelligence Relationship
PP-144 Decision Psychology and Shared Decision Making (DEC)
Seiji Bito1, Atsushi Asai2, Shinji Matsumura1, Masashi Tanaka1, Mami Kikuchi3
1NHO Tokyo Medical Center
2Tohoku University
3Teikyo Heisei University
Purpose: AI-assisted shared decision making will soon be part of daily clinical practice. We aimed to develop a conceptual framework for shared decision making in a near-future patient-medical practitioner-information terminal relationship.
Methods: This study used a hybrid method to construct a conceptual framework: [A] grounded analysis of data obtained from focus group interviews, and [B] extrapolation of concepts from landmark books that describe humans and medicine in a society with artificial intelligence. For study A, 9 physicians, 8 health care staff, and 16 members of the public were asked to answer the following questions: 1. What functions and roles currently performed by health care professionals would be outsourced to AI in an AI-implemented practice setting? 2. What changes will occur in the way clinical decisions are made? For study B, the contents of "Deep Medicine" by Eric Topol (2019), "21 Lessons for the 21st Century" by Yuval Noah Harari (2018), and "Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark (2017) were referenced while developing the conceptual framework.
Results: Patients usually engage in two types of thinking simultaneously, which Harari and Tegmark position as "intelligence" and "consciousness." The elements of "intelligence-based" thinking required for problem solving include "formulation of options to achieve a set goal," "individualized assessment of the efficacy and risks of each option," "collection of medical evidence to support a choice and critical appraisal of that evidence," and "development of a recommended treatment plan based on a specific value base." In turn, the elements of "consciousness-based" thinking serve to "handle subjective experiences as a patient" and include "clarifying what is important to me and what I want to avoid by referring to the context of my life," "dealing with anxiety that I feel in unpredictable situations," and "relying on others moderately as well as preparing by myself in the decision-making process." Many aspects of "intelligence-based" thinking can be outsourced to AI. On the other hand, the involvement of health care professionals in the decision-making process may become more important as a commitment to patients’ "consciousness-based" thinking.
Conclusions: We presented an elementary conceptual framework for AI-assisted shared decision making. More detailed analysis and development of a conceptual model for this agenda are expected.
Keywords: Artificial Intelligence, Shared Decision Making, AI-Assisted Shared Decision Making, Modeling
Feasibility and Acceptability of Piloting Chaplain Decision Coaching on Periviable Resuscitation Decision Quality
PP-145 Decision Psychology and Shared Decision Making (DEC)
Brownsyne Tucker Edmonds1, Shelley M. Hoffman1, Shelley Varner Perez2
1Indiana University School of Medicine
2Indiana University Health
Purpose: Chaplains play a critical role in helping patients navigate spiritual distress. As such, chaplain-led decision coaching may afford families the opportunity to examine and make resuscitation decisions aligned with their personal values and preferences. During the development of the Periviable GOALS (Getting Optimal Alignment around Life Support) decision aid, which is designed to enhance shared decision-making in periviable delivery (22-24 weeks gestation), stakeholders felt strongly that utilizing chaplains could benefit families making resuscitation decisions. This study sought to assess the feasibility and acceptability of chaplain-led decision coaching alongside the Periviable GOALS DA.
Methods: Pregnant patients admitted for a threatened periviable delivery (22 0/7-25 6/7 weeks) were enrolled from November 2020-December 2021. Data were collected before and after the intervention. Participants viewed the GOALS DA and then engaged in a chaplain-led decision coaching session adapted from the Ottawa Personal Decision Guide. The Decisional Conflict Scale, a knowledge measure, and the Decision Aid Acceptability Questionnaire were administered. Chaplain coaches journaled about their impressions of training and coaching encounters to assess feasibility and acceptability of the intervention. Descriptive analysis of the mean quantitative scale scores and conventional content analysis of the journal entries were completed.
Results: Eight pregnant persons were enrolled. Decisional conflict scores decreased by a mean of 6.8 (SD=10.5) and knowledge increased by a mean of 1.8 (SD=1.8). All rated their decision coaching experience as “good” or “excellent,” and the amount of information as “just right.” Participants found it “helpful to have someone to talk to,” and the chaplain helped them reach a decision. Chaplains reported that the decision coaching was a valuable use of their time and skillset, that they were adequately trained, and that the experience was beneficial for participants even if they had already made a treatment decision. Despite some initial discomfort using a script, chaplains later noted the structure “helped address topics” participants “otherwise avoided.”
Conclusions: To our knowledge, our study is the first to utilize chaplains as decision coaches. Our results suggest that chaplain coaching in conjunction with a decision aid is feasible and well-accepted by both parents and chaplains. Chaplains may represent an underutilized resource for offering decision-support in the context of value-laden clinical decision-making.
Keywords: shared decision making; decision coach; chaplain; periviable delivery; resuscitation
Measuring the effect of COVID-19 illness on health-related quality of life in adults and children
PP-146 Health Services, Outcomes and Policy Research (HSOP)
Kerra R Mercon1, Angela M Rose1, Acham Gebremariam1, Eve Wittenberg2, Jamison Pike3, Lisa A Prosser1
1University of Michigan
2Harvard University
3Centers for Disease Control and Prevention
Purpose: To measure loss in health-related quality of life associated with COVID-19 illness and caregiving.
Methods: We conducted an online survey to measure time trade-off amounts for COVID-19 health states described for adults and children. Respondents were members of a probability-based representative sample of US adults (n=1014). We collected data from April-May 2021 and employed the time trade-off (TTO) technique to elicit health utilities. Respondents were asked how much time they would be willing to trade from the end of their life to avoid hypothetical COVID-19 health states for: outpatient (uncomplicated) illness, hospitalized illness, hospitalized illness with complications (ICU, adult only; multisystem inflammatory syndrome in children (MIS-C)), and long-term symptoms following COVID-19 illness (long COVID). Health states were evaluated by respondents using 3 frames: (1) self (adult); (2) hypothetical child 0-17 years; (3) parent of a child 0-17 years (“family spillover”). Health state descriptions intentionally incorporated uncertainty of progressing to more severe illness and transmitting infection to others. Additional survey questions collected information on vaccination beliefs, COVID-19 illness experience, COVID-19 risk factors, vaccination status, opinions about COVID-19 risk, and sociodemographic characteristics. Descriptive statistics are reported for maximum TTO amounts and quality-adjusted life years (QALYs) lost (calculated as the maximum TTO amount divided by life expectancy). We conducted stratified and regression analyses to evaluate effects of (1) vaccination status and COVID-19 beliefs on QALY losses, and (2) sociodemographic factors or COVID-19 attitudes on time traded.
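As a minimal sketch of the QALY-loss calculation described above (the function name and example numbers are ours, for illustration only; they are not taken from the study):

```python
# Minimal sketch (our own function name and example values) of the QALY-loss
# calculation described in the Methods: QALYs lost = maximum TTO amount
# divided by remaining life expectancy.

def qalys_lost(max_tto_years: float, remaining_life_expectancy_years: float) -> float:
    """QALY loss implied by a maximum time trade-off response."""
    if remaining_life_expectancy_years <= 0:
        raise ValueError("remaining life expectancy must be positive")
    return max_tto_years / remaining_life_expectancy_years

# E.g., a respondent willing to trade 0.65 years of a 40-year remaining life
# expectancy implies a QALY loss of 0.65 / 40 = 0.016 (rounded).
```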
Results: Maximum TTO amounts and mean QALYs lost increased with severity of illness and were higher for the child frame (outpatient 0.0725) than the adult frame (outpatient 0.0162; Table). When evaluating the family spillover effect (adults valuing the experience of being a caregiver of a child with COVID-19 illness), results were lower than QALY losses for child illness but similar to those for adult illness. Mean QALYs lost varied in stratified analyses of COVID-19 “deniers” vs. “acceptors” but not by vaccination status/intention. Regression analyses showed significant differences in TTO amounts only for education level.
Conclusions: COVID-19 illness is associated with an estimated decrease in quality of life for both the affected individual and their caregivers, especially for severe illness in children. Measuring loss in health-related quality of life in a pandemic setting has highlighted the need to advance health state valuation techniques that will merit future consideration.
Keywords: time trade-off, COVID-19, utilities
Table.
Surgical Decision Making in Intermittent Claudication: A Qualitative Investigation of Surgeon Factors
PP-147 Decision Psychology and Shared Decision Making (DEC)
Chloe A Powell1, Mary E Byrnes2, Nicholas H Osborne1
1Section of Vascular Surgery, Department of Surgery, University of Michigan, Ann Arbor, USA
2Center for Healthcare Outcomes and Policy, Department of Surgery, University of Michigan, Ann Arbor, USA
Purpose: Intermittent claudication (IC) is a common presentation of peripheral artery disease; however, consensus on the indications for revascularization remains elusive. This study seeks to understand the surgeon perspective on decision making around IC.
Methods: Data for this qualitative study come from an ongoing investigation of surgeon decision making around IC. Participants engaged in 45-60 minute interviews, which were transcribed verbatim and analyzed inductively.
Results: We identified three themes from our data. First, offering revascularization to patients requires balancing the risks of the procedure, the potential benefit to the patient, and the perceived durability of the intervention: “I usually lay it on pretty thick about the risks…And if we have a bad outcome, I feel better knowing that with the patient we kind of partnered up…” Second, while most surgeons in our sample discuss risks, benefits, and durability with patients, most navigated surgical decision making through trial and error. Given the complexities of management and variability in patient responsiveness to treatment, surgeons often attempt multiple medical treatments before pursuing surgery: “it generally involves making a plan and deciding when we’re going to check in to see if it worked or not.” Finally, the decision to pursue revascularization is filtered through surgeon values. In other words, decision making is influenced by the patient’s physical limitation within the context of an activity that they can no longer perform. More value may be placed on symptoms that negatively impact a patient’s livelihood, as opposed to the ability to participate in other activities: “…If I have a patient that is telling me that they are going to lose their job walking on a warehouse floor, I usually will be a little more willing and consider revascularization earlier because I think that’s a pretty good reason.” “I’ve had a patient tell me I gave her life back because she can dance again, which is not a common indication for doing a revascularization.”
Conclusions: There are challenging complexities around IC when considering what the decision process looks like as the surgeon weighs the patient’s treatment goals, the use of medical management, the risks of revascularization, and the potential for failure of different treatment options despite the best of circumstances. Understanding these interactions may further elucidate decision-making approaches and challenges in caring for a complex patient population.
Keywords: surgical decision making, peripheral artery disease, intermittent claudication
Exploring associations between strength, sarcopenia, and socioeconomic status: Implications for risk mechanisms and patient selection biases
PP-148 Health Services, Outcomes and Policy Research (HSOP)
Chloe A Powell, Gloria Y Kim, Sonali Reddy, Matthew A. Corriere
Michigan Medicine, Department of Surgery, Section of Vascular Surgery, Ann Arbor, Michigan, United States
Purpose: Strength and sarcopenia are associated with all-cause mortality and risk of adverse perioperative events. Accordingly, these measures are increasingly being applied to patient selection for elective procedures. Associations with socioeconomic status are unknown, however, and therefore may translate into selection biases. Thus, we evaluated associations between neighborhood-level affluence, disadvantage, and grip strength in an institutional cohort of patients with cardiovascular disease.
Methods: An institutional cohort of patients undergoing grip strength measurement during outpatient appointments with cardiovascular and/or pulmonary disease was retrospectively studied. Residence address information was used to characterize socioeconomic status based on the National Neighborhood Data Archive Socioeconomic Status and Demographic Characteristics of United States Census Tracts from 2008-2017. Categorical neighborhood affluence and disadvantage were based on 75th percentile cut-points (>55% for affluence, and >10% for disadvantage, respectively). Categorical weakness and strength were assigned stratified by gender and body mass index based on published cut-points and the cohort-specific 75th percentile, respectively. Grip strength comparisons stratified by affluence and disadvantage strata were performed using confidence intervals. Categorical tests were used to compare rates of patients who were strong, weak, or neither stratified by affluence and disadvantage. Statistical significance was assessed based on P<0.05.
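The categorical assignment described above can be sketched as follows. The cut-point values in the example are hypothetical placeholders: the study used published, gender- and BMI-stratified weakness cut-points and the cohort-specific 75th percentile of grip strength for the "strong" category.

```python
import math

# Sketch of the categorical strength/weakness assignment described in the
# Methods. Cut-point values below are hypothetical placeholders, not the
# published or cohort-specific cut-points used in the study.

def percentile_75(values):
    """Nearest-rank 75th percentile, e.g. for a cohort-specific cut-point."""
    ordered = sorted(values)
    rank = math.ceil(0.75 * len(ordered)) - 1  # 1-based nearest rank -> 0-based index
    return ordered[rank]

def categorize_grip(strength_kg, weak_cutoff_kg, strong_cutoff_kg):
    """Assign 'weak', 'strong', or 'neither' from grip strength in kg."""
    if strength_kg < weak_cutoff_kg:
        return "weak"
    if strength_kg >= strong_cutoff_kg:
        return "strong"
    return "neither"

# Example with made-up cut-points (26 kg weak, 38 kg strong):
# categorize_grip(25, 26, 38) -> "weak"
```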
Results: 4373 patients underwent grip strength screening and were included. Mean age was 65.6±13.6 years, and 46% were women. Mean grip strength among patients from affluent neighborhoods was 30.5±12.6 kg (95% CI 29.6-31.3) versus 28.4±11.6 (95% CI 27.7-29.25) among patients from disadvantaged neighborhoods. Prevalence of categorical strength and weakness were 21.3% and 29.3% among patients from disadvantaged neighborhoods versus 26.5% and 23.6% of patients from affluent neighborhoods respectively (Table). Categorical strength versus weakness was associated with disadvantaged neighborhood residence (P=0.0003) but not affluent neighborhood residence (P=0.2392).
Conclusions: Patients with cardiovascular disease residing in disadvantaged neighborhoods had lower grip strength and a higher prevalence of categorical weakness. No such associations were observed for neighborhood affluence. Weakness and sarcopenia may contribute mechanistically to the increased risk of mortality and adverse perioperative events associated with socioeconomic status.
Keywords: risk screening, disparities, sarcopenia, strength, frailty
Categorical Strength and Weakness by Neighborhood Category
| Neighborhood Category | Strong | Weak | Neither | Total |
|---|---|---|---|---|
| Affluent | 210 (26.5%) | 281 (23.6%) | 564 (23.6%) | 1055 (24.1%) |
| Disadvantaged | 169 (21.3%) | 348 (29.3%) | 610 (25.5%) | 1127 (25.8%) |
| Total | 794 (18.1%) | 1189 (27.2%) | 2390 (54.7%) | 4373 (100%) |
Results displayed are count (%) per stratum.
The evolution of the loss of life expectancy in patients with solid malignancies: a population-based study in the Netherlands, 1989-2019
PP-149 Health Services, Outcomes and Policy Research (HSOP)
Carolien C. H. M. Maas1, David Van Klaveren2, Otto Visser3, Matthias A. W. Merkx4, Hester F. Lingsma2, Valery E. P. P. Lemmens5, Avinash G. Dinmohamed6
1Department of Public Health, Erasmus University Medical Centre, Rotterdam, The Netherlands; Department of Research and Development, Netherlands Comprehensive Cancer Organisation (IKNL), Utrecht, The Netherlands
2Department of Public Health, Erasmus University Medical Centre, Rotterdam, The Netherlands
3Department of Registration, Netherlands Comprehensive Cancer Organisation (IKNL), Utrecht, The Netherlands
4Board of Directors, Netherlands Comprehensive Cancer Organisation (IKNL), Utrecht, The Netherlands; Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
5Board of Directors, Netherlands Comprehensive Cancer Organisation (IKNL), Utrecht, The Netherlands; Department of Public Health, Erasmus University Medical Centre, Rotterdam, The Netherlands
6Department of Research and Development, Netherlands Comprehensive Cancer Organisation (IKNL), Utrecht, The Netherlands; Department of Public Health, Erasmus University Medical Centre, Rotterdam, The Netherlands; Amsterdam UMC, Department of Hematology, Cancer Center Amsterdam, Amsterdam, The Netherlands
Purpose: Population-based cancer survival is generally measured within a fixed period post-diagnosis (e.g., 5-year relative survival). Alternatively, the loss of life expectancy (LOLE) more intuitively measures the impact of cancer on the patient’s complete lifespan. It is interpreted as the average number of life years lost due to a cancer diagnosis. However, the LOLE is not widely used in population-based cancer research. Therefore, this population-based study assessed how the LOLE evolved among Dutch cancer patients.
Methods: From the Netherlands Cancer Registry, we selected adult (≥18 years) patients diagnosed with one of 17 solid malignancies between 1989 and 2019 (n=1,832,244). We estimated LOLE—i.e., the difference between the life expectancy of patients and an age-, sex-, and period-matched group from the general population—using flexible parametric relative survival models. LOLE can vary markedly across ages because life expectancy is age-dependent, i.e., younger individuals have more life-years remaining than older individuals. Therefore, the proportional LOLE (PLOLE) was estimated by dividing the LOLE by the population life expectancy. Lastly, we estimated the conditional LOLE (CLOLE), i.e., the LOLE of cancer patients surviving several years after diagnosis. These survival measures were presented by year of diagnosis, ages at diagnosis, sex, and cancer stage.
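A minimal sketch of the LOLE and PLOLE definitions above, in our own notation. Note that the study estimated life expectancies from flexible parametric relative survival models; here they are plain numbers for illustration.

```python
# Minimal sketch (our notation) of the survival measures defined in the
# Methods. Life expectancies are plain numbers here, not model estimates.

def lole(pop_life_expectancy: float, patient_life_expectancy: float) -> float:
    """Loss of life expectancy: matched general-population LE minus patient LE."""
    return pop_life_expectancy - patient_life_expectancy

def plole(pop_life_expectancy: float, patient_life_expectancy: float) -> float:
    """Proportional LOLE: the loss as a fraction of the population LE."""
    return lole(pop_life_expectancy, patient_life_expectancy) / pop_life_expectancy

# E.g., a matched population LE of 20 years versus a patient LE of 12 years
# gives LOLE = 8 life-years lost and PLOLE = 0.40 (40%).
```

The division by population life expectancy is what makes PLOLE comparable across ages, since younger patients have more remaining life-years at risk.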
Results: Overall, the LOLE and PLOLE consistently decreased over time. However, the magnitude of this decrease varied across cancer types. Cancers with the highest PLOLE (i.e., worse prognosis) in 2019 included cancers of the hepato-pancreato-biliary tract, central nervous system, lung, esophagus, cardia, stomach, and ovaries (Fig1). Cancer types with the lowest PLOLE (i.e., better prognosis) in 2019 included cancers of the skin (i.e., melanoma and squamous cell carcinoma), thyroid, testicle, and female breast (Fig1). Generally, the LOLE and PLOLE were lower for localized cancers. Conversely, these estimates were higher for cancers diagnosed at distant stages. For all examined cancers, CLOLE decreased with each additional year the patient survived post-diagnosis, irrespective of age and sex. For most cancers, CLOLE remained constant after five years post-diagnosis, and small excess mortality remained ten years post-diagnosis.
Conclusions: The LOLE provides insightful information concerning the impact of cancer on the lives of patients. The decreasing LOLE between 1989 and 2019 indicates a decreasing impact of a cancer diagnosis, giving cause for optimism in the fight against cancer. Nevertheless, the prognosis of some malignancies remains poor, particularly when diagnosed at a distant stage.
Keywords: Oncology, population-based registry, loss of life expectancy, flexible parametric survival model
Proportional loss of life expectancy (PLOLE) in 2019 versus the absolute change in PLOLE from 1989 to 2019 for 45-year-old, 65-year-old, and 75-year-old patients, stratified by sex.
We created clusters in terms of good (PLOLE < 25%, red), intermediate (25% < PLOLE < 50%, yellow), and bad (PLOLE > 50%, green) prognosis in 2019. We displayed more elaborate results in an online webapp (https://mdmerasmusmc.shinyapps.io/Calculator/). Abbreviations: proportional loss of life expectancy (PLOLE); all cancers combined (ALL); bladder and urinary tract (BLAD); cervix (CERV), central nervous system (CNS); colorectum (CRC); esophagus, cardia, stomach (ECS), endometria (ENDO), female breast (FBRE), head and neck (HN), hepato-pancreato-biliary (HPB); kidney (KIND); skin melanoma (MEL); ovary, fallopian tube (OFT), prostate (PROST), squamous cell carcinoma (SCC), testicle (TEST), thyroid (THY).
Performance metrics for models designed to predict individualized treatment effect
PP-150 Quantitative Methods and Theoretical Developments (QMTD)
Carolien C. H. M. Maas1, David M. Kent2, Hester F. Lingsma1, David Van Klaveren3
1Department of Public Health, Erasmus University Medical Center, Rotterdam, Netherlands
2Predictive Analytics and Comparative Effectiveness Center, Institute for Clinical Research and Health Policy Studies, Tufts Medical Center, Boston, USA
3Department of Public Health, Erasmus University Medical Center, Rotterdam, Netherlands; Predictive Analytics and Comparative Effectiveness Center, Institute for Clinical Research and Health Policy Studies, Tufts Medical Center, Boston, USA
Purpose: Measuring the performance of models designed to predict individualized treatment effect is challenging, because the outcomes of two alternative treatments are inherently unobservable in one patient. The C-for-benefit was proposed to measure discriminative ability, i.e. the ability to distinguish between patients with small and large treatment effect. We aimed to propose metrics of calibration and overall performance for models predicting treatment effect.
Methods: Similar to the previously proposed C-for-benefit, we defined the observed treatment effect by the difference between outcomes in pairs of matched patients. Thus, we redefined the E-statistics, the logistic loss and the Brier score into metrics for measuring a model’s ability to predict treatment effect. In a simulation study, the metric values of deliberately perturbed models were compared to those of the data generating model. The data generating model was a risk-based model explaining the probability of an outcome by the linear predictor of the patient characteristics, treatment assignment, and the interaction between the two. The three perturbed models were altered to result in overestimation of treatment effect, risk heterogeneity, and treatment effect heterogeneity. To illustrate the performance metrics, different models predicting treatment effect were applied to the data of the Diabetes Prevention Program: 1) a risk modelling approach with restricted cubic splines; 2) an effect modelling approach including penalized treatment interactions; and 3) the causal forest.
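The pairwise construction described above can be sketched roughly as follows, assuming binary outcomes where 1 denotes the adverse event. This simplified Brier-for-benefit is our own illustration of the idea, not the authors' implementation (their R code is linked in the Conclusions).

```python
# Simplified illustration (our notation, an assumption about details) of the
# matched-pair construction in the Methods: one treated and one control
# patient per pair, binary outcomes.

def observed_pair_effect(outcome_control: int, outcome_treated: int) -> int:
    """Observed treatment effect for a matched pair: 1 = benefit,
    0 = no difference, -1 = harm (outcome 1 is the adverse event)."""
    return outcome_control - outcome_treated

def brier_for_benefit(pairs) -> float:
    """Mean squared error between observed and predicted pairwise effects.
    pairs: iterable of (outcome_control, outcome_treated, predicted_effect)."""
    errors = [(observed_pair_effect(yc, yt) - pred) ** 2
              for yc, yt, pred in pairs]
    return sum(errors) / len(errors)

# Three hypothetical matched pairs with predicted effects 0.6, 0.1, 0.2:
pairs = [(1, 0, 0.6), (0, 0, 0.1), (1, 1, 0.2)]
score = brier_for_benefit(pairs)  # (0.16 + 0.01 + 0.04) / 3 = 0.07
```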
Results: As desired, performance metric values of perturbed models were consistently worse than those of the optimal model (Eavg-for-benefit≥0.070 versus 0.001, E90-for-benefit≥0.115 versus 0.002, log-loss-for-benefit≥0.757 versus 0.733, Brier-for-benefit≥0.215 versus 0.212, Figure 1). Calibration, discriminative ability, and overall performance of three different models were similar in the case study. The regression models (risk and effect model) gave slightly better calibrated treatment effect predictions, but had less discriminative ability, whereas the machine learning approach (causal forest) was better at discriminating between small and large treatment effect predictions, but gave worse calibrated treatment effect predictions.
Conclusions: The proposed metrics are useful to assess the calibration and overall performance of models predicting individualized treatment effect, and R code for the proposed metrics is provided (https://github.com/CHMMaas/HTEPredictionMetrics). The proposed metrics correctly indicated better performance of the data generating model compared to deliberately perturbed models. In the case study, we observed a trade-off between calibration and discrimination.
Keywords: Precision medicine, calibration, discrimination, heterogeneous treatment effect, logistic regression, causal forest
Calibration plot of the treatment effect of simulated data from patients receiving lifestyle intervention.
This Figure depicts observed versus predicted treatment effect by smoothed calibration curves (blue line) and quarters of predicted treatment effect (black dots) of simulated data from the lifestyle intervention versus placebo treatment. Observed treatment effect was obtained by matching patients based on patient characteristics. Smoothed calibration curves were obtained by local regression of the observed treatment effect of matched patient pairs on predicted treatment effect of matched patient pairs. For prediction of treatment effect, we used a risk-based optimal model (panel A) and three perturbed models that overestimate treatment effect, risk heterogeneity, and treatment effect heterogeneity (panel B, C, D, respectively). The average treatment effect is 12.9, 20.4, 12.9 (after a correction of -0.14), and 12.9 (after a correction of 0.53), respectively.
Genetic Testing Referrals for Prostate Cancer in a Safety Net Medical Center: How have NCCN Guidelines Changed Practice?
PP-151 Health Services, Outcomes and Policy Research (HSOP)
Christine Marie Gunn1, Brianna Hardy1, Gretchen Gignac2, Kimberly Zayhowski3, Stephanie Loo4, Catharine L Wang5
1The Dartmouth Institute for Health Policy and Clinical Practice, Dartmouth College, Hanover, USA; The Dartmouth Cancer Center, Lebanon, USA
2Department of Medicine, Boston University School of Medicine, Boston, USA
3Boston Medical Center, Boston, USA
4Department of Health Law, Policy, and Management, Boston University School of Public Health, Boston, USA
5Department of Community Health Sciences, Boston University School of Public Health, Boston, USA
Purpose: In 2018, guidelines expanded eligibility for germline genetic testing for people with prostate cancer. This study characterizes genetics referral patterns and predictors of referrals among patients with prostate cancer diagnosed at an urban, safety-net medical center.
Methods: Using a retrospective cohort, we identified individuals who received a prostate cancer diagnosis between January 1, 2011 and March 1, 2020 through the medical record and tumor registry. The primary outcome was defined as the presence of a referral to genetics after the diagnosis of prostate cancer. Using multivariable logistic regression, we identified patient characteristics associated with referrals to genetics, adjusting for race, ethnicity, country of birth, language, insurance and prostate cancer clinical stage. We plotted trends in quarterly referral rates standardized as the number of referrals per 1,000 prostate cancer patients. An interrupted time series using Poisson regression examined whether guideline changes resulted in higher rates of referral one year after implementation.
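As a back-of-envelope illustration of the standardized quarterly referral rate described above (referrals per 1,000 prostate cancer patients): the counts below are hypothetical, and the crude pre/post rate ratio shown is not a substitute for the study's seasonally adjusted Poisson interrupted time series.

```python
# Hypothetical illustration of the rate standardization in the Methods:
# referrals per 1,000 prostate cancer patients per quarter, plus a crude
# pre/post-guideline rate ratio. Counts are made up, not study data.

def referrals_per_1000(referrals: int, patients: int) -> float:
    return 1000 * referrals / patients

def crude_rate_ratio(pre_rates, post_rates):
    """Mean post-guideline rate divided by mean pre-guideline rate."""
    def mean(xs):
        return sum(xs) / len(xs)
    return mean(post_rates) / mean(pre_rates)

# Four hypothetical quarters before and after the 2018 guideline change,
# using the cohort size reported in the abstract (1,877 patients):
pre = [referrals_per_1000(r, 1877) for r in (4, 6, 5, 5)]
post = [referrals_per_1000(r, 1877) for r in (10, 12, 11, 9)]
ratio = crude_rate_ratio(pre, post)  # about 2.1
```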
Results: A total of 1,877 patients with a prostate cancer diagnosis were identified. The mean age of the cohort was 65 years; 44% identified as Black, 32% White, and 17% Hispanic or Latino. About half were married (49%) and foreign born (46%). The predominant insurance type was Medicaid (34%), followed by Medicare or private insurance (25% each). Most patients were diagnosed with local disease (65%), while 3% had regional and 9% had metastatic disease. Of these 1,877 patients, 163 (9%) had at least one referral to a genetic counselor. In multivariable models, higher age was negatively associated with referral (OR: 0.96, 95% CI: 0.94, 0.98), while identifying as Black vs. White race (OR: 1.66, 95% CI: 1.05, 2.36) and having regional (OR: 4.45, 95% CI: 2.40, 8.25) or metastatic disease (OR: 4.64, 95% CI: 2.98, 7.24) vs. local only were significantly associated with genetics referral. The Poisson time series analysis demonstrated a significant rise in referrals one year after guideline implementation (Figure 1), even after adjusting for seasonal trends in referral patterns and model over-dispersion.
Figure 1.
Conclusions: While genetic testing referral rates were low overall based on eligibility (9%), referrals did increase one year post-guideline implementation. The strongest predictor of referral was clinical stage, suggesting opportunities to provide education that raises awareness about guideline eligibility for patients with local or regional disease who may benefit from genetic counseling and testing.
Keywords: genetic testing, prostate cancer, guidelines, health services research
Prostate Cancer Patient Decision-Making about Germline Genetic Testing: A Qualitative Study
PP-152 Patient and Stakeholder Preferences and Engagement (PSPE)
Stephanie Loo1, Javier Barria1, Rosemary Raymundo1, Kimberly Zayhowski2, Gretchen Gignac3, Catharine Wang4, Christine Marie Gunn5
1Department of Health Law, Policy, and Management, Boston University School of Public Health, Boston, USA
2Boston Medical Center, Boston, USA
3Department of Medicine, Section of Hematology and Oncology, Boston University School of Medicine, Boston, USA
4Department of Community Health Sciences, Boston University School of Public Health, Boston, USA
5The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine, Dartmouth College, Hanover, USA; Dartmouth Cancer Center, Lebanon, USA; Department of Medicine, Section of General Internal Medicine, Boston University School of Medicine, Boston, USA
Purpose: Prostate cancer guidelines recommending genetic testing expanded eligibility criteria in 2018, increasing the availability of this technology that can guide patient treatment decision-making. Understanding patients’ experiences with and decision-making about undergoing genetic testing is important, particularly when seeking to address individual, inter-personal, systemic or structural barriers towards genetic testing. We conducted a qualitative study to understand prostate cancer patient experiences in making decisions about genetic testing in a safety-net hospital setting.
Methods: Qualitative interviews were conducted with adult English or Spanish-speaking prostate cancer patients who were referred to genetic counseling in the past 12 months. Participants were invited via mailed letters and telephone. Semi-structured telephone interviews focused on participants’ experience with and decision-making about genetic counseling and testing following referral. Audio recordings of interviews were professionally transcribed. An inductive thematic analysis generated themes arising from interview transcripts. Themes were reviewed and finalized by the full study team to ensure credibility of the findings.
Results: Twenty individuals were interviewed (6 Spanish, 14 English). Ten participants reported receiving genetic testing, 7 did not, and 3 were unsure. Two themes emerged regarding prostate cancer patient decision-making around genetic testing (Table 1). First, genetic counseling was often offered in the midst of treatment, when many visits were occurring. This led participants to feel overwhelmed: “I got other issues on my mind, like how am I going to survive? No sense of tracing it back if you're trying to deal with staying alive”. Participants who described being overwhelmed by treatment demands expressed uncertainty about how much genetic testing would help them, and they were most interested in participating in care that was central to their survival. Second, participants who underwent testing reported receiving little follow-up. These participants, and especially those with a family history of cancer, were confused when interpreting negative test results and sought more support in understanding the implications of testing: “[The results letter] didn't make a whole lot of sense to me. I don't understand what I was looking at really.”
Conclusions: Understanding how to support informed decisions about prostate cancer genetic testing is important in providing high quality cancer care. Patients in our study revealed several pre- and post-testing communication gaps that appeared to create misperceptions about genetic risk and limited engagement in genetics services.
Keywords: prostate cancer, decision-making, patient experience, genetic testing, qualitative research
Table 1.
Exemplary Quotes for Identified Themes
The Influence of COVID-19 on the HIV Epidemic in San Francisco County: The Importance of Rapid Return to Normalcy
PP-153 Health Services, Outcomes and Policy Research (HSOP)
Citina Liang1, Sze Chuan Suen1, Anthony Nguyen1, Corrina Moucheraud2, Ian Holloway3, Edwin Charlebois4, Wayne Steward4
1Department of Industrial and Systems Engineering, University of Southern California, Los Angeles, U.S.
2Department of Health Policy and Management, University of California, Los Angeles, U.S.
3Department of Social Welfare, Luskin School of Public Affairs, University of California, Los Angeles, CA
4Department of Medicine, University of California, San Francisco, U.S.
Purpose: San Francisco County (SFC) shifted many non-emergency healthcare resources to COVID-19 and placed a Shelter in Place (SIP) order that limited nonessential social activities, which reduced HIV care initiation and retention. We quantify the COVID-19 effects on HIV burden among men who have sex with men (MSM) as SFC returns to normal service levels and progresses towards the HIV Ending the Epidemic (EHE) goals.
Methods: We use a microsimulation of MSM in SFC that tracks HIV disease progression and treatment while considering COVID-19-related reductions in testing, viral load suppression (VLS), PrEP uptake and retention, reduction in sexual partners, and COVID-19 deaths. To identify the importance of rapid return to normalcy, we consider scenarios where COVID-19 effects end in 2022 or 2025 and compare outcomes to the counterfactual where COVID-19 never occurred. We also examine scenarios where resources are prioritized to new or existing patients from 2023-2025 before all services return to normal.
Results: The annual numbers of MSM prescribed PrEP, with VLS, new diagnoses, incidence, and knowledge of HIV status rebound quickly after HIV care returns to normal. The model suggests that if COVID-19 effects stop in 2022, they will reduce PrEP use by 12% (95% uncertainty interval: 11.8%-12.2%) of person-years from 2020-2035 and VLS by 1.4% (1.1%-1.6%) of person-years, and increase incidence by 3.1% (1.3%-4.9%) of cases and deaths by 1.3% (1%-1.5%). The cumulative burden is larger if these effects end in 2025, with 23.3% (23.2%-23.5%) reduced PrEP person-years, 2.5% (2.3%-2.8%) reduced VLS person-years, and 9.1% (7.4%-10.8%) more new cumulative cases. Prioritizing care to existing versus new patients will not substantially change cumulative incidence, although it will result in more person-years of PrEP but fewer VLS person-years and more deaths. All EHE goals besides PrEP are unchanged by COVID-19 effects, with incidence and VLS goals unmet even without COVID-19 effects.
Conclusions: The sooner HIV care returns to pre-COVID-19 service levels, the lighter the cumulative burden. For example, the cumulative difference from the non-COVID-19 counterfactual in 2035 is 6% higher if COVID-19 effects end in 2025 rather than 2022. Whether care is prioritized to new or existing patients does not make a large difference to incidence but may influence PrEP and VLS counts. However, COVID-19 effects do not substantially alter the likelihood of reaching EHE goals in SFC.
Keywords: HIV/AIDS, COVID-19, microsimulation
Difference between non-COVID-19 counterfactual and four different COVID-19 scenarios
Association between health literacy and participation in the Danish breast cancer screening program
PP-154 Health Services, Outcomes and Policy Research (HSOP)
Sofie Dyekær Egsgaard, Eeva Liisa Røssell, Henrik Støvring
Department of Public Health, Aarhus University, Aarhus, Denmark
Purpose: In Denmark, all women aged 50-69 are invited to breast cancer screening biennially and receive written information about screening with the invitation. However, differences in health literacy can affect how women access, understand, and apply this information. Further, the benefits and harms of breast cancer screening are debated, which complicates how best to inform women about screening; transparent and balanced information as the basis for informed decision making is therefore recommended. Investigating health literacy is an important step before improving screening information and informed decision making. The aim of this study is to investigate the association between health literacy and participation in the Danish breast cancer screening program.
Methods: In a cross-sectional study, we linked questionnaire data from a population-based health survey, “How are you? 2017”, with data on health literacy and socioeconomic factors, and register data on breast cancer screening participation from 2016-2017. Data were restricted to women aged 49-72 living in the Central Denmark Region. Crude and adjusted logistic regression analyses were used to investigate the association between health literacy and participation in screening. Multiple imputation analyses and complete case analyses were performed.
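As an illustration of the crude (unadjusted) analysis described above, the sketch below computes an odds ratio with a 95% Wald confidence interval from a 2x2 table of screening participation by dichotomized health literacy. This is not the study's actual code, and all counts are hypothetical.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% Wald CI from a 2x2 table:
    a = participants with high health literacy, b = non-participants with high,
    c = participants with low health literacy,  d = non-participants with low."""
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_hat) - z * se_log)
    hi = math.exp(math.log(or_hat) + z * se_log)
    return or_hat, lo, hi

# hypothetical counts, for illustration only
print(odds_ratio_ci(480, 120, 300, 100))
```

The adjusted models in the study would instead enter health literacy and socioeconomic covariates into a logistic regression.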
Results: Women in this study generally had a high health literacy level. Based on multiple imputation analyses, an unadjusted odds ratio of 1.32 (1.10-1.59) was estimated, indicating higher odds of participation with higher health literacy. However, in both adjusted models, no association was found (non-significant odds ratios of 1.11 and 1.09). Complete case analyses gave similar results, and sensitivity analyses likewise found no statistically significant associations in the adjusted models.
Conclusions: We found no association between health literacy and participation in the Danish breast cancer screening program. These findings can contribute to understanding the demands for future breast cancer screening information sent to women at screening invitation.
Keywords: breast cancer screening, mammography, health literacy, participation, screening uptake, health information
Using wearables to augment patient self-reports: A study of COVID-19 vaccine side effects
PP-155 Health Services, Outcomes and Policy Research (HSOP)
Grace Guan1, Merav Mofaz2, Gary Qian1, Tal Patalon3, Erez Shmueli2, Dan Yamin2, Margaret L Brandeau1
1Department of Management Science and Engineering, Stanford University, Stanford, California, United States of America
2Department of Industrial Engineering, Tel Aviv University, Tel Aviv, Israel
3Kahn Sagol Maccabi (KSM) Research & Innovation Center, Maccabi Healthcare Services, Tel Aviv, Israel
Purpose: Billions of COVID-19 vaccination shots have been administered worldwide, but information from active surveillance about vaccine safety is limited. Surveillance is generally based on self-reporting, making the monitoring process subjective. In this study, we sought to determine whether smartwatches can be more sensitive than self-reported questionnaires in detecting COVID-19 vaccine side effects in a large sample.
Methods: We studied participants in Israel who received their second (n=355) or third (n=1,179) Pfizer BioNTech COVID-19 vaccination. All participants wore a Garmin Vivosmart 4 smartwatch and completed a daily questionnaire via smartphone. Stratifying participants by self-reported symptom severity, we compared post-vaccination smartwatch heart rate data and a Garmin-computed stress measure based on heart rate variability to each participant's baseline, defined as the 7 days prior to vaccination. We then used a mixed-effects panel regression, removing participant-level fixed and random effects, to test whether heart rate was significantly elevated in the 72 hours post-vaccination, particularly among participants who reported no symptoms after vaccination.
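The descriptive step of this comparison can be sketched as follows: each participant's post-vaccination measurements are compared to their own pre-vaccination baseline. This is a simplification of the mixed-effects panel regression, with made-up heart-rate values.

```python
import numpy as np

def hr_elevation(baseline, post):
    """Per-participant heart-rate change: mean(post) - mean(baseline).
    baseline: per-participant samples from the 7 days pre-vaccination;
    post: per-participant samples from the 72 hours post-vaccination.
    Returns the mean elevation across participants and its standard error."""
    diffs = np.array([np.mean(p) - np.mean(b) for b, p in zip(baseline, post)])
    mean_d = diffs.mean()
    se = diffs.std(ddof=1) / np.sqrt(len(diffs))
    return mean_d, se

# made-up values: both participants run 3 bpm above their own baseline
base = [np.array([60, 62, 61]), np.array([70, 72, 71])]
post = [np.array([63, 65, 64]), np.array([73, 75, 74])]
print(hr_elevation(base, post))
```

The panel regression in the study additionally absorbs participant-level fixed and random effects rather than collapsing each series to a mean.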
Results: Wearable device data allowed for greater sensitivity than using self-reported questionnaires alone. Importantly, even among participants who did not report side effects after vaccination, we identified considerable changes in smartwatch measures in the first 72 hours after vaccination (p < 0.005) compared to baseline (Figure). This result was supported by our mixed-effects panel regression, in which we removed participant-level fixed and random effects. Moreover, while participants returned to their baseline levels on average 72 hours after vaccination, analysis of smartwatch stress data reveals that participants who had a severe reaction to the third vaccination, as categorized by symptoms self-reported in the questionnaire, took longer to return to their baseline levels.
Figure.
Mean difference in heart rate (in beats per minute) and stress measure (in points) between the post-vaccination and baseline periods in Garmin smartwatch data after the third vaccination, by hour, for individuals who reported no reaction, mild reaction, and severe reaction in the self-reported questionnaires. Shaded regions represent 95% confidence intervals.
Conclusions: The ubiquity of smartwatches provides an opportunity to gather improved data on patient health. This study showed that smartwatches can detect physiological responses following vaccination that may not be captured by patient self-reporting. More broadly, as the market for wearable devices is growing rapidly, such devices can provide valuable data about other diseases or health conditions for which continuous monitoring of a patient’s physiological state can augment patient self-reports – with or without a breakpoint such as vaccination.
Keywords: vaccines, wearable devices, health technology assessment, panel regression
Life Expectancy and Risk of a Second Primary Breast Cancer among Older Women with a History of Breast Cancer
PP-156 Health Services, Outcomes and Policy Research (HSOP)
Ilana B Richman1, Jessica B Long1, Meera Sheffrin3, Elizabeth Berger2
1Department of Medicine, Yale School of Medicine, New Haven, CT, USA
2Department of Surgery, Yale School of Medicine, New Haven, CT, USA
3Department of Medicine, Stanford University School of Medicine, Stanford, CA, USA
Purpose: Older women with a history of breast cancer are commonly faced with a decision about whether to continue regular mammography. However, risks and benefits of regular surveillance mammography may vary considerably according to individual characteristics including life expectancy. The goal of this study was to develop estimates of the long-term risk of a second primary breast cancer by life expectancy among older women with a history of breast cancer. Estimates may help inform subsequent imaging decisions.
Methods: This was a longitudinal cohort study using data from the Surveillance, Epidemiology, and End-Results program linked to Medicare claims. We identified women ages 67 and older who were diagnosed with non-metastatic breast cancer between 2003-2007. Women were followed from one year after diagnosis through 2017, allowing at least 10 years follow-up. Life expectancy was estimated at baseline using age, sex, and comorbidity. Primary outcomes included use of regular mammography (defined as >3 mammograms during follow up), and diagnosis of a second primary breast cancer. Using a competing risk model, we calculated the cumulative incidence of second primary breast cancers, stratified by life expectancy (LE) at cohort entry.
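The stratified cumulative incidence described above treats death as a competing risk. A minimal Aalen-Johansen-style estimator (an illustrative sketch, not the study's code) can be written as:

```python
import numpy as np

def cumulative_incidence(time, event, cause=1):
    """Nonparametric cumulative incidence of `cause` under competing risks.
    event codes: 0 = censored, 1 = cause of interest (second primary breast
    cancer), 2 = competing event (e.g., death). Returns the CIF at the end
    of follow-up."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event)
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv = 1.0          # overall event-free survival just before t
    cif = 0.0
    at_risk = len(time)
    for t in np.unique(time):
        mask = time == t
        d_cause = np.sum(event[mask] == cause)   # events of interest at t
        d_any = np.sum(event[mask] > 0)          # any event at t
        cif += surv * d_cause / at_risk
        surv *= 1 - d_any / at_risk
        at_risk -= mask.sum()
    return cif

# toy data: one second cancer, one death, one censored observation
print(cumulative_incidence([1, 2, 3], [1, 2, 0]))  # cause-1 CIF = 1/3
```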
Results: The cohort included 44,475 women with a recent diagnosis of breast cancer. At baseline, 5,634 (13%) had a LE of ≤ 5 years, 14,546 (31%) had a LE of 6-10 years and 25,295 (57%) had a LE of >10 years. By the end of follow up in 2017, 31% of women with a LE ≤ 5 years had >3 mammograms, 55% of women with a LE of 6-10 years had >3 mammograms, and 75% of women with a LE >10 years had >3 mammograms. The cumulative incidence of a second breast cancer at the end of follow up was 3.7% (95% CI 3.2-4.3) among women with a life expectancy ≤ 5 years, 4.9% (95% CI 4.5%-5.3%) among women with a life expectancy of 6-10 years, and 7.6% (95% CI 7.2%-7.9%) among women with a life expectancy of >10 years (Figure).
Conclusions: Among older women with a history of breast cancer, the risk of a second primary breast cancer is substantially lower among women with limited life expectancy than among women with longer life expectancy. Life expectancy should be considered when making decisions about routine surveillance mammography after a diagnosis of breast cancer.
Keywords: Breast cancer outcomes, mammography, life expectancy
Cumulative Incidence of a Second Primary Breast Cancer
Linguistic bias in verbal discussions of patients in clinical handovers
PP-157 Health Services, Outcomes and Policy Research (HSOP)
Ilona Fridman1, Austin J. Wesevich2, Jane She3, Sonya V. Patel Nguyen4, Erica Langan5, Victoria M. Parente5
1UNC Lineberger Comprehensive Cancer Center
2Department of Medicine, University of Chicago, Chicago, IL
3Department of Biostatistics, University of North Carolina, Chapel Hill, NC
4Department of Medicine, Duke School of Medicine Durham, NC, Department of Pediatrics, Duke School of Medicine Durham, NC, Trinity College of Art and Science, Duke University, Durham NC
5Department of Pediatrics, Duke School of Medicine Durham, NC, Trinity College of Art and Science, Duke University, Durham NC
Purpose: Implicit physician gender and racial biases frequently lead to patients feeling doubted or dismissed. The words that clinicians choose to describe patients can propagate biases to other members of the healthcare team, particularly through repeated phrases or rote patterns. Prior analyses of electronic health records (EHR) found that clinicians systematically downplay the concerns of Black and female patients (Beach et al., 2021). Using a similar approach, we explored the prevalence of biases in transcribed resident verbal shift handovers.
Methods: Inpatient resident handovers from the day to night team were recorded and transcribed. The linguistic analysis of the transcripts was built on automated retrieval of words associated with downplaying patients' concerns (Beach et al., 2021). Random subsets of the data were manually evaluated by linguistic researchers and physicians, and the methodology was revised based on this evaluation to ensure the accuracy of the theoretical operationalization. Quasi-Poisson regression was used to explore how the linguistic biases differed by race and gender.
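As a sketch of the quasi-Poisson idea (a Poisson mean structure with a Pearson-based dispersion estimate that inflates standard errors), the toy example below fits group-wise rates for a single binary covariate. The counts are hypothetical, not the study data.

```python
import numpy as np

def quasipoisson_two_group(counts_a, counts_b):
    """Rate ratio between two groups with a quasi-Poisson (Pearson)
    dispersion estimate. For a single binary covariate the Poisson MLE
    rates are simply the group means; the dispersion phi rescales the
    Poisson variance (phi = 1 corresponds to pure Poisson)."""
    a, b = np.asarray(counts_a, float), np.asarray(counts_b, float)
    mu_a, mu_b = a.mean(), b.mean()
    y = np.concatenate([a, b])
    mu = np.concatenate([np.full(len(a), mu_a), np.full(len(b), mu_b)])
    pearson = np.sum((y - mu) ** 2 / mu)
    phi = pearson / (len(y) - 2)        # residual df = n - 2 parameters
    return mu_a / mu_b, phi

# hypothetical word counts per handover by patient gender
rr, phi = quasipoisson_two_group([2, 2, 2], [1, 1, 1])
print(rr, phi)  # 2.0 0.0 (no overdispersion in this degenerate toy data)
```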
Results: There were 303 handover transcripts, covering 60% adult and 40% pediatric patients; 52% of patients were female, 47% Black, and 37% White. The linguistic analysis identified targeted words with 99% specificity and 91% sensitivity. Patient concerns were downplayed in several ways. First, “evidentials” occurred when symptom presentation was accompanied by an expression of disbelief or framed as hearsay. Second, patient preferences were presented either as cooperation (e.g., “patient refuses”, “finally agrees”) or as patient “wishes” (e.g., “patient wants”). Out of 303 handovers, residents used “evidentials” in 14 (5%) transcripts, “cooperation” words in 23 (8%) transcripts, and “wish” words in 42 (14%) transcripts. Residents used more “cooperation” words when describing female patients than male patients (16 vs 7, OR 2.33, p = 0.045). There were no other significant differences in the use of “evidentials”, “cooperation”, or “wish” words by gender or race.
Conclusions: While verbal handoffs had a lower prevalence of linguistic bias than prior reported EHR research, bias was still present and more likely for female patients. Healthcare team awareness of bias transmission through word choice could reduce bias transmission and lead to other interventions that will strengthen patient-physician relationships. Future studies could look at nonverbals such as tone that were not captured in the transcripts.
Keywords: bias, stereotypes, blame, race, gender
Analytic challenges to estimating the association between retail pharmacy naloxone distribution and overdose mortality
PP-158 Health Services, Outcomes and Policy Research (HSOP)
Jake R Morgan1, Christina E Freibott1, Ali Jalali2, Philip Jeng2, Alexander Y Walley3, Avik Chatterjee3, Traci C Green4, Michelle L Nolan5, Benjamin P Linas6, Brandon Marshall7, Sean M Murphy2
1Department of Health Law, Policy, and Management, Boston University School of Public Health, Boston, MA, USA
2Department of Population Health Sciences, Weill Cornell Medical College, New York, NY, USA
3Grayken Center for Addiction, Clinical Addiction Research and Education Unit, Section of General Internal Medicine, Department of Medicine, Boston Medical Center and Boston University School of Medicine, Boston, MA, USA
4Brandeis University Heller School for Social Policy and Management; Rhode Island Hospital, RI, USA; Brown University School of Public Health, Department of Epidemiology, Providence, RI, USA; COBRE on Opioids and Overdose, Rhode Island Hospital, Providence, RI, USA
5Department of Epidemiology, Mailman School of Public Health, Columbia University, New York, NY, USA
6Grayken Center for Addiction, Clinical Addiction Research and Education Unit, Section of General Internal Medicine, Department of Medicine, Boston Medical Center and Boston University School of Medicine, Boston, MA, USA; Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
7Brown University School of Public Health, Department of Epidemiology, Providence, RI, USA; COBRE on Opioids and Overdose, Rhode Island Hospital, Providence, RI, USA
Purpose: Naloxone is an effective antidote for opioid overdose; however, the association of retail pharmacy-distributed naloxone with overdose mortality, and the analytic challenges that such an analysis presents, have not been evaluated. We aimed to assess the independent association of pharmacy and community-based naloxone distribution with fatal opioid overdose rates and to test a range of analytic approaches to addressing endogeneity when an instrument does not exist.
Methods: Our analytic cohort uses retail pharmacy claims data from Symphony Health, community distribution data from health departments in Massachusetts, Rhode Island, and New York City, opioid overdose data from the National Center for Health Statistics, and publicly available American Community Survey data. Data were analyzed by 3-digit ZIP Code and calendar quarter-year (2016Q1-2018Q4), and observations were weighted by population. A key consideration is endogeneity. While we expect that more naloxone reduces opioid overdose deaths, a worsening opioid overdose epidemic may prompt an increase in naloxone demand and supply. We regressed opioid-related mortality on retail-pharmacy and community naloxone distribution, and community-level demographics using a linear model with ZIP Code fixed effects and a time trend and tested the inclusion of various lag structures and a simultaneous equation model, in addition to assessing a reverse regression. Both types of naloxone distribution were parameterized using a level effect (current-quarter distribution), and a change effect (change in distribution from prior to current quarter).
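The ZIP Code fixed-effects specification can be illustrated with the standard within (demeaning) estimator. This is a one-covariate sketch on made-up data, not the authors' full model with time trends, lag structures, and population weights.

```python
import numpy as np

def within_fe(y, x, group):
    """Fixed-effects (within) estimator: demean the outcome and the
    covariate within each group (here, a 3-digit ZIP Code), then run OLS
    on the demeaned data. Group-level intercepts drop out."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    group = np.asarray(group)
    yd, xd = y.copy(), x.copy()
    for g in np.unique(group):
        m = group == g
        yd[m] -= y[m].mean()
        xd[m] -= x[m].mean()
    return (xd @ yd) / (xd @ xd)   # single-covariate OLS slope

# synthetic data: y = 2*x + ZIP-level intercept, no noise -> slope is 2
x = np.array([1., 2., 3., 1., 2., 3.])
zips = np.array([0, 0, 0, 1, 1, 1])
y = 2 * x + np.where(zips == 0, 10., -5.)
print(within_fe(y, x, zips))  # 2.0
```

The endogeneity concern in the abstract is precisely that this slope can be positive even if naloxone reduces deaths, because worsening overdose rates drive up naloxone demand.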
Results: The unadjusted naloxone distribution rate more than doubled from 97 kits per 100,000 persons in 2016Q1 to 257 kits per 100,000 persons in 2018Q4, while the unadjusted opioid overdose mortality rate fell from 8.1 per 100,000 persons to 7.2 per 100,000 persons. We found that the level of naloxone distribution (both pharmacy and community) was positively and significantly associated with fatal opioid overdose rates, but we did not detect associations between overdose mortality and the change in pharmacy or community naloxone distribution rates. While our reverse regression demonstrated the existence of endogeneity, our base case model was robust to multiple approaches.
Conclusions: Across NYC, RI, and MA, pharmacy and community naloxone distribution was correlated with fatal overdose, suggesting naloxone was reaching the communities where it was needed most. Yet, amid high rates of overdose driven by fentanyl in the drug supply, naloxone distribution alone was not enough to reverse the opioid-overdose crisis.
Keywords: naloxone, distribution, opioid overdose, endogeneity
Characterizing the Effect of Ex-Vivo Lung Perfusion on Organ Availability: A Retrospective Cohort Study
PP-159 Health Services, Outcomes and Policy Research (HSOP)
John Kenneth Peel1, Eleanor Pullenayegum2, David Naimark3, Beate Sander4, Shaf Keshavjee5
1Department of Anesthesiology & Pain Medicine, University of Toronto, Toronto, Canada; Institute of Health Policy, Management and Evaluation, Dalla Lana School for Public Health, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment Collaborative, UHN, Toronto, Canada
2Institute of Health Policy, Management and Evaluation, Dalla Lana School for Public Health, University of Toronto, Toronto, Canada; Senior Scientist, Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Canada
3Institute of Health Policy, Management and Evaluation, Dalla Lana School for Public Health, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment Collaborative, UHN, Toronto, Canada; Department of Nephrology, Sunnybrook Health Sciences Centre, Toronto, Canada
4Institute of Health Policy, Management and Evaluation, Dalla Lana School for Public Health, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment Collaborative, UHN, Toronto, Canada; Toronto General Hospital Research Institute, University Health Network, Toronto, Canada; Adjunct Scientist, Public Health Ontario, Toronto, Canada; Adjunct Scientist, Institute for Clinical Evaluative Sciences, Toronto, Canada
5Division of Thoracic Surgery, Toronto General Hospital, University Health Network, Toronto, Canada; Toronto General Hospital Research Institute, University Health Network, Toronto, Canada
Purpose: Although consent for organ donation is increasingly commonplace, a critical scarcity of suitable lungs for transplantation persists. Previous efforts to expand the donor pool have yielded limited results. Ex-vivo lung perfusion (EVLP) is an emerging technology that may overcome the bottleneck attributable to conservative organ selection criteria. We sought to characterize the effect of EVLP on organ availability.
Methods: We performed a retrospective, before-after cohort study of donor organs transplanted at University Health Network (UHN) between January 1, 2005 and December 31, 2019. To characterize donor pool changes over time, we performed an interrupted time-series analysis. Additionally, the number of lungs available for transplantation each year was regressed against year using a linear model with restricted cubic splines, with EVLP use, donor type, and status as an extended versus standard criteria donor treated as effect modifiers.
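The interrupted time-series analysis corresponds to a segmented regression with level-change and slope-change terms at the interruption. Below is a minimal sketch on synthetic data, omitting the restricted cubic splines; it is not the UHN series.

```python
import numpy as np

def its_segmented(y, t, t0):
    """Segmented regression for an interrupted time series:
    y = b0 + b1*t + b2*post + b3*(t - t0)*post, with post = 1{t >= t0}.
    b2 is the level change at the interruption (e.g., EVLP introduction)
    and b3 is the change in slope."""
    t = np.asarray(t, float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return beta

# synthetic series: slope 1 before t0=5, level jump of 2, slope 3 after
t = np.arange(10.)
y = 1.0 * t + np.where(t >= 5, 2 + 2 * (t - 5), 0.0)
print(its_segmented(y, t, 5))  # approx [0, 1, 2, 2]
```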
Results: In time-series analysis, EVLP availability (p=0.0143 for interaction) and EVLP use (p=0.0116 for interaction) were both associated with significantly steeper increases in the annual rate of organ transplantation than expected from the passage of time alone. In multivariable regression (Figure 1), EVLP was associated with greater numbers of donation after cardiac death (DCD) organs and extended criteria donors for transplant, while the number of standard criteria donors remained relatively stable (p<0.001 for interaction). We observed a non-linear increase in extended criteria + DCD donors treated with EVLP (p<0.001), which suggests that this technology may enable transplantation of higher-risk organs previously considered unsuitable. An association between EVLP availability and increased acceptance of extended criteria organs treated with conventional organ preservation methods suggested that having EVLP available influenced organ selection regardless of whether the technology was ultimately used. An adjusted R-squared of 0.9152 indicated excellent model fit.
Conclusions: Our results demonstrate that since EVLP was introduced into practice, organ availability has increased significantly, predominantly through greater acceptance of DCD and extended criteria lungs for transplantation. Overall, organ selection has become more permissive since EVLP became clinically available.
Keywords: Organ Preservation; Lung Transplantation; Health Technology Assessment; Resource Constraint
Figure: Annual donor organ types and EVLP use.
Preferences on Diversity in the Selection of Radiology Residents in the Post-Step 1 World: A Discrete Choice Experiment
PP-160 Health Services, Outcomes and Policy Research (HSOP)
Jose Felipe Montano Campos1, Charles M Maxfield2, Lars J Grimm3
1CHOICE Institute, University of Washington
2Vice-Chair of Education, Department of Radiology, Duke University Medical Center, Durham, North Carolina
3Department of Radiology, Duke University Medical Center, Durham, North Carolina
Purpose: Empirical and survey studies have shown that the USMLE Step 1 score, a single standardized metric, is the most important factor in residency selection and is considered a proxy for how well an applicant will perform in residency (academic quality). As of January 2022, the USMLE Step 1 examination is no longer reported as a numeric score but only as a binary pass-or-fail result. Program directors and selection committees must therefore seek alternative measures of student performance and achievement. We aim to study the relative weight that committees will place on different applicant attributes in a post-Step 1 world.
Methods: A discrete choice experiment was designed to model radiology resident selection and determine the relative weights of various application factors, including academic and sociodemographic attributes of the applicants, when paired with a numeric or pass-or-fail Step 1 result. Faculty involved in resident selection at 14 US radiology programs chose between hypothetical pairs of applicant profiles. A conditional logistic regression model assessed the relative weights of the attributes, and odds ratios (ORs) were calculated. Finally, we explored heterogeneity in preferences on diversity (the willingness to choose an applicant from a race/ethnicity underrepresented in radiology) depending on whether the hypothetical pairs of applicants were close or far apart in numeric Step 1 score.
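For a forced choice between two profiles, the conditional logit reduces to a logistic model on attribute differences. The sketch below fits such a paired logit for a single attribute by Newton's method; the data are a contrived toy example, not the experiment's responses.

```python
import math

def paired_logit(x_diffs, chose_a, iters=50):
    """Conditional (paired) logit with one attribute: the probability of
    choosing profile A is sigmoid(beta * (xA - xB)). Fit beta by Newton's
    method on the paired-difference log-likelihood; exp(beta) is the OR
    per unit of attribute difference."""
    beta = 0.0
    for _ in range(iters):
        grad = hess = 0.0
        for x, y in zip(x_diffs, chose_a):
            p = 1.0 / (1.0 + math.exp(-beta * x))
            grad += (y - p) * x          # score contribution
            hess += p * (1 - p) * x * x  # observed information
        beta += grad / hess
    return beta

# toy choices: when A is one unit "better" (x = +1) it is chosen 3 of 4
# times; in the mirror pairs (x = -1) it is chosen 1 of 4 times.
xs = [1, 1, 1, 1, -1, -1, -1, -1]
ys = [1, 1, 1, 0, 1, 0, 0, 0]
print(paired_logit(xs, ys), math.log(3))  # MLE equals ln 3 here
```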
Results: Academic attributes gain relative importance, and attributes related to diversity (race/ethnicity and gender) lose importance, when the Step 1 score is pass-or-fail rather than numeric. The greatest effect on diversity occurs when applicants are very close in numeric Step 1 score and both have high scores (high-quality applicants). If applicants are close in numeric Step 1 score but both have low scores (low-quality applicants), selection committees look at the applicants' school of medicine and, conditional on applicants also being close on this second academic attribute, only then consider diversity in the selection process.
Conclusions: In the absence of a numeric Step 1 score, residency programs try to infer the academic quality of applicants by combining information from their other academic attributes. This introduces a transaction cost into the selection process, and selection committees pay less attention to the diversity of applicants.
Keywords: Residency Selection, Diversity, Step-1 Score, DCE
Attribute Relative Importance
We plot the importance of each attribute relative to the most important attribute for Part 1 and Part 2. Importance does not decrease for the academic attributes (SOM, Step 2, publications, clerkship honors, class rank) but decreases for the diversity attributes (race/ethnicity and gender).
Attributes of the Experiment
| Attributes | Levels |
|---|---|
| Medical School | Top 10, midlevel ranked, unranked |
| Gender | Female, Male |
| Race or ethnicity | Asian, Black, Hispanic, White |
| USMLE Step 1 score (part 1) | 202, 228, 246, 269 |
| USMLE Step 1 score (part 2) | Pass |
| USMLE Step 2 score | 213, 229, 248, 267 |
| Class rank | 1st, 2nd, 3rd, 4th quartile |
| Core clerkship honors | 1, 3, 5, 6 |
| No. of publications | 0, 1, 3, 6 |
We present the attributes used in the Discrete Choice Experiment and their different levels.
Electronic Cigarette Use and Characteristics Associated with Its Use Among Cancer Survivors: A Systematic Review and Meta-Analysis
PP-162 Health Services, Outcomes and Policy Research (HSOP)
Maria A. Lopez Olivo1, Justin James2, Joel James2, Kate Krause3, Michael Roth4, Guadalupe R. Palos5, Hilary Ma6, Alma Rodriguez7, Katherine Gilmore5, Paul Cinciripini8, Maria E. Suarez Almazor9
1Department of Health Services Research, The University of Texas, MD Anderson Cancer Center, Houston, United States of America
2City University of New York School of Medicine, New York, United States of America
3Research Medical Library, The University of Texas, MD Anderson Cancer Center, Houston, United States of America
4Department of Pediatrics, The University of Texas, MD Anderson Cancer Center, Houston, United States of America
5Department of Cancer Survivorship, The University of Texas, MD Anderson Cancer Center, Houston, United States of America
6Department of General Medical Oncology, The University of Texas, MD Anderson Cancer Center, Houston, United States of America
7Department of Lymphoma-Myeloma, The University of Texas, MD Anderson Cancer Center, Houston, United States of America
8Department of Behavioral Science, The University of Texas, MD Anderson Cancer Center, Houston, United States of America
9Department of Health Services Research, The University of Texas, MD Anderson Cancer Center, Houston, United States of America; Department of General Internal Medicine, The University of Texas, MD Anderson Cancer Center, Houston, United States of America
Purpose: To determine the prevalence and patterns of electronic cigarette (e-cigarette) usage among cancer survivors.
Methods: We searched 6 electronic databases until 09/2021. We included studies reporting on the rate of e-cigarette use among cancer survivors and factors associated with its use. Two authors independently selected studies, appraised their quality using the National Institutes of Health Quality Assessment Tool for Cross-Sectional Studies, and collected data. Our primary outcome was the prevalence of e-cigarette use. We performed a random-effects model meta-analysis.
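The random-effects pooling step can be sketched with the DerSimonian-Laird estimator (for prevalences, effects would usually be logit- or arcsine-transformed first). The inputs below are illustrative, not the extracted study data.

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis. Returns the pooled
    effect, the between-study variance tau^2, and a 95% CI."""
    w = [1 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    wr = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * e for wi, e in zip(wr, effects)) / sum(wr)
    se = math.sqrt(1 / sum(wr))
    return pooled, tau2, (pooled - 1.96 * se, pooled + 1.96 * se)

# illustrative prevalence estimates and their variances
print(dersimonian_laird([0.12, 0.12, 0.12], [0.01, 0.02, 0.04]))
```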
Results: Eighteen publications met our eligibility criteria, drawing on 6 different registries (or data sources). Most publications (17/18) met less than 50% of the items in the quality appraisal tool. In 12 publications, more than a third of the cancer survivors were ≥65 years of age. The percentage of females ranged from 44.6% to 100%. The rate of lifetime e-cigarette use among cancer survivors was 12% (95% CI 4% to 25%), compared with 16% (95% CI 4% to 33%) among people without a cancer history. Among cancer survivors, the pooled rate of current use was 3% (95% CI 0% to 10%), and among survivors who were current traditional cigarette users, 63% (95% CI 57% to 69%) also used e-cigarettes. Reported weighted lifetime e-cigarette use rates differed between age groups (18-44 years: up to 46.7%; 45-64: up to 27.2%; ≥65: up to 24.8%). Similar differences were observed for current e-cigarette use (18-44 years: up to 31.3%; 45-64: up to 5.4%; ≥65: up to 1.1%). Nine publications reported factors associated with lifetime e-cigarette use: current use of traditional cigarettes (see Figure), heavy drinking, poor mental health, younger age, being male, non-Hispanic White, single, having less than a high school education, income of $25,000 USD or less, and living in the South region of the US or in urban areas.
Figure.
Association of e-cigarette use (lifetime and current) among cancer survivors who also reported use of traditional cigarettes
Conclusions: Nearly two-thirds of cancer survivors who currently use traditional cigarettes also use e-cigarettes. These findings highlight the need to improve screening of smoking behaviors and to design individualized decision aids that help providers support quit attempts among cancer survivors, particularly those using both traditional and e-cigarettes.
Keywords: cancer survivor, electronic cigarette, systematic review
A community planning tool to visualize socioeconomic support service expansion impacts on HIV health disparities
PP-163 Health Services, Outcomes and Policy Research (HSOP)
Eva A Enns1, Margo M Wheatley1, Aaron D Peterson2, Darin Rowles3, Thomas Blissett3, Jonathan Hanft2
1Division of Health Policy and Management, University of Minnesota School of Public Health, Minneapolis, USA
2Hennepin County Public Health, Minneapolis, USA
3Minnesota Department of Human Services, St. Paul, USA
Purpose: To develop a community planning tool that allows decision-makers to visualize how allocating resources to expand different socioeconomic support services for people living with HIV might impact disparity reduction in HIV health outcomes.
Methods: We conducted a propensity score-adjusted statistical analysis of Ryan White HIV/AIDS Program (RWHAP) client data from 2015-2019 for the Minneapolis/St. Paul region in Minnesota to estimate the causal effect of socioeconomic support service utilization on rates of sustained HIV viral suppression (SVS), stratified by race/ethnicity. For services with positive causal effects, we conducted semi-structured interviews with local service providers to estimate costs of reaching more clients. These data were integrated into an interactive spreadsheet-based planning tool (Figure). The user selects which service(s) to expand and by how much in each race/ethnicity subpopulation, allowing the evaluation of general as well as targeted service expansions. The tool calculates the estimated change in SVS post-expansion, overall and by subpopulation, and the expected expansion cost over a one-year planning horizon.
Figure.
Community planning tool user interface.
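The propensity score adjustment described in the Methods can be illustrated with inverse-probability-of-treatment weighting (IPTW). This is a generic sketch with toy data and assumed, precomputed propensity scores, not the RWHAP analysis itself.

```python
import numpy as np

def iptw_ate(y, treated, ps):
    """Inverse-probability-of-treatment-weighted (Hajek) estimate of the
    average effect of service use on a binary outcome (e.g., sustained
    viral suppression), given estimated propensity scores `ps`."""
    y = np.asarray(y, float)
    t = np.asarray(treated, float)
    ps = np.asarray(ps, float)
    mu1 = np.sum(t * y / ps) / np.sum(t / ps)              # weighted treated mean
    mu0 = np.sum((1 - t) * y / (1 - ps)) / np.sum((1 - t) / (1 - ps))
    return mu1 - mu0

# toy data; with a constant propensity score, IPTW reduces to a
# simple difference in means (2/3 - 1/3 = 1/3)
y = np.array([1, 1, 0, 1, 0, 0])
t = np.array([1, 1, 1, 0, 0, 0])
print(iptw_ate(y, t, np.full(6, 0.5)))
```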
Results: Use of AIDS Drug Assistance Program (ADAP) services, food aid, transportation vouchers, financial aid, and housing support services were found to have positive impacts on SVS. Estimated expansion costs per new user ranged from $145/person/year (food aid) to $2,371/person/year (housing support). We used the tool to evaluate the impact of meeting the unmet need estimated in a 2020 Minneapolis/St. Paul regional RWHAP client needs assessment survey. Expanding food aid to fully meet client needs was the single service expansion that yielded the greatest increase in SVS overall (81.8% to 82.7%) and among African American clients (75.6% to 78.2%), the subpopulation with the lowest SVS at baseline, at a cost of nearly $200,000. If services were expanded to specifically meet the greatest needs of African American clients (food, transportation, and housing), we estimated an increase in SVS from 75.6% to 79.1%, though at a significant cost, largely due to the high cost of expanding housing services.
Conclusions: The spreadsheet-based tool allows RWHAP and community decision-makers to visualize how different investments to expand engagement in socioeconomic services may impact viral suppression rates, overall and across subpopulations. The tool provides insights into the trade-offs of different resource allocation decisions and could be used to scaffold community-engaged discussions in setting funding priorities and supporting health equity goals.
Keywords: HIV/AIDS, optimal resource allocation, health equity
Modeling Enhanced Influenza Vaccine Efficacies in the over 65 Age Group
PP-164 Health Services, Outcomes and Policy Research (HSOP)
Mary G Krauland1, Richard K Zimmerman2, Lee H Harrison3, John V Williams4, Mark S Roberts1
1Department of Health Policy and Management, School of Public Health, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Public Health Dynamics Laboratory, School of Public Health, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
2Department of Family Medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
3Center for Genomic Epidemiology, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
4Department of Pediatrics, School of Medicine, University of Pittsburgh; University of Pittsburgh Medical Center Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania, USA
Purpose: Most persons 65 and older who are vaccinated against influenza are immunized with an enhanced vaccine (~80% of the vaccinated population). Multiple enhanced vaccines are available with possibly varied efficacy, but for several reasons there is no preferential recommendation for one particular vaccine. This study investigates what level of vaccine efficacy (VE) would provide a meaningful difference between enhanced vaccines.
Methods: We modeled a single influenza season using the agent-based modeling platform Framework for Reproducing Epidemiological Dynamics (FRED). Simulations took place in a population of ~1.2 million. Agents over age 65 had increased susceptibility to influenza compared to agents under 65, producing case numbers consistent with surveillance data. Effective reproductive rate (Reff) in the simulations was ~1.2 or ~1.3. The influenza season start was varied from mid-November to early January to reproduce seasons peaking in early to late winter. Reduction in susceptibility to infection due to vaccination was varied from 40% to 70%, with 40% representing the base comparison group. Of the agents 65+, 20% received a standard vaccine (35% reduction in susceptibility in that age group) and the remainder were vaccinated with a more effective vaccine. Immunity was modeled to wane at a rate of 10% per month in agents 65+. Vaccination took place over 45 days, starting September 1 or October 1. The model included seasonal forcing.
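The interaction between vaccination timing and the 10%-per-month waning assumption can be sketched as follows; interpreting the waning as multiplicative decay is our assumption, not a statement of FRED's internal implementation.

```python
# Vaccine-induced protection in the 65+ group, waning 10% per month
# (multiplicative-decay interpretation is an assumption)
def protection(ve_initial, months_since_vaccination):
    return ve_initial * 0.9 ** months_since_vaccination

# A 40% VE vaccine given September 1 retains roughly 26% VE by a January peak,
# which is why later season peaks blunt the benefit of a more effective vaccine
late_season_ve = protection(0.40, 4)
```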
Results: Compared to vaccination with the current enhanced vaccine (40% VE in 65+), vaccines with 45 or 50% efficacy gave little added value (0 to 7% decreased influenza burden in 65+). More effective vaccines (60 or 70% reduction in susceptibility to influenza) had greater impact, particularly in seasons with an early winter peak, with a maximum 20% reduction in influenza cases in 65+ (Figure, maximum reduction in cases with 70% VE and early influenza season).
Figure.
Influenza cases per 100,000 in age group 65 and over with varied effectiveness of enhanced vaccines.
Conclusions: Enhanced vaccines with only modest differences in effectiveness resulted in small relative differences in influenza burden in this model. Vaccines with 60-70% effectiveness provided additional protection, particularly in seasons with an early winter peak. If the season peak was delayed, the impact decreased due to waning of vaccine effectiveness. These results suggest that substantially more effective influenza vaccines would be needed to change policies and recommendations.
Keywords: Influenza, modeling, vaccination
Modeling Prevention of Influenza in Infants by Early Maternal Influenza Vaccination
PP-165 Health Services, Outcomes and Policy Research (HSOP)
Mary G Krauland1, Richard K Zimmerman2, Katherine V Williams2, Lee H Harrison3, John V Williams4, Mark S Roberts1
1Department of Health Policy and Management, School of Public Health, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Public Health Dynamics Laboratory, School of Public Health, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
2Department of Family Medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
3Center for Genomic Epidemiology, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
4Department of Pediatrics, School of Medicine, University of Pittsburgh; University of Pittsburgh Medical Center Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania, USA
Purpose: Infants younger than 6 months are at increased risk of influenza infection and severe outcomes because they lack prior immunity and vaccination is not recommended until after 6 months of age. Infants can be protected by passive immunity from maternal antibodies transferred across the placenta during gestation, especially during the third trimester; however, the usual fall influenza vaccination schedule leaves many infants unprotected during most of the influenza season. The purpose of this study was to investigate the impact of mid-summer vaccination of pregnant women on influenza cases in young infants.
Methods: We used the Framework for Reproducing Epidemiological Dynamics (FRED), an agent-based modeling platform, to investigate the impact of mid-summer vaccination of pregnant women on influenza cases in young infants. We implemented a model that included maternity and influenza. Simulations were performed on a population of ~1.2 million. Actuarial maternity rates were applied to the population at initiation of the simulation such that births were evenly distributed across the year at appropriate daily rates. Agents (people in the model) who were pregnant on July 1 were vaccinated against influenza on that day. Agents who became pregnant after July 1, as well as the general population, were vaccinated over 45 days beginning on September 1. Agents who gave birth after vaccination conferred either 60% or 80% reduced susceptibility to influenza to their infants. Infant immunity waned over 6 months. The comparison model provided no reduction in susceptibility to influenza for infants of vaccinated mothers. We modeled 2 levels of influenza transmissibility and 4 season timing scenarios.
Results: Vaccination in July reduced influenza cases in infants in the following season by ~16-31%, depending on the level of reduced susceptibility and the level of influenza transmission (Figure). The reduction was slightly smaller in seasons with a later peak (mid-March versus late January).
Conclusions: Prioritizing influenza vaccination of pregnant women as soon as vaccine is available could result in decreased influenza morbidity and mortality in infants. To provide optimal protection to both the mother and infant, timing of vaccination may need to be coordinated with expected delivery date, with early vaccination provided to expectant mothers who will deliver in late summer and early fall.
Keywords: Influenza, maternal immunization, modeling
Figure.
Proportion of influenza cases in infants prevented by mid-summer vaccination of mothers before delivery, with vaccine efficacy of 60% and 80%.
Modeling the Burden of Nonalcoholic Fatty Liver Disease-associated Liver Complications in the United States
PP-167 Health Services, Outcomes and Policy Research (HSOP)
Moosa Tatar1, Phuc Le1, Srinivasan Dasarathy2, Naim Alkhouri3, William Herman4, Wen Ye5, Michael B. Rothberg1
1Center for Value-based Care Research, Cleveland Clinic, Cleveland, OH, USA
2Department of Inflammation and Immunity, Lerner Research Institute and Department of Gastroenterology, Hepatology, and Nutrition, Digestive Disease and Surgery Institute, Cleveland Clinic, Cleveland, OH, USA
3Arizona Liver Health, Tucson, AZ, USA
4University of Michigan School of Public Health, Ann Arbor, MI, USA; University of Michigan School of Medicine, Ann Arbor, MI, USA
5University of Michigan School of Public Health, Ann Arbor, MI, USA.
Purpose: Nonalcoholic Fatty Liver Disease (NAFLD) is one of the most common causes of advanced liver complications and is increasing in prevalence. We aimed to assess the future health burden of NAFLD in US adults.
Methods: We developed and validated an agent-based state-transition microsimulation model of the natural history of NAFLD in AnyLogic 8.7.10. We populated the model with 200,000 persons using the US nationally representative distribution of age and sex in 2001 and simulated NAFLD progression in adults (aged ≥18 years) until 2050. After each yearly cycle, individuals transition among 15 mutually exclusive states: no steatosis, simple steatosis, nonalcoholic steatohepatitis (NASH), fibrosis stages F1-F4 with or without NASH, decompensated cirrhosis (DCC), hepatocellular carcinoma (HCC), transplant, or liver-related death. Transition probabilities were derived from published literature and varied by age, sex, and years since diagnosis/event as appropriate. We assumed 25% of US adults had NAFLD, and 10% of NAFLD cases were NASH, in 2001. Model outcomes included prevalence of NAFLD and incidence of HCC, DCC, transplantation, and liver-related death. For validation, we compared model projections to national data on the US population (Census), HCC (SEER), and liver transplants (UNOS) from 2001-2020. We also compared our projected survival to observed survival in a community-based NAFLD population in Olmsted County, in southeastern Minnesota.
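A state-transition microsimulation of this kind advances each simulated person through one transition draw per yearly cycle. The sketch below uses a toy three-transition subset with made-up probabilities, not the calibrated NAFLD parameters, and omits the age/sex dependence the model includes.

```python
import random

# Toy subset of states and yearly transition probabilities (hypothetical)
TRANSITIONS = {
    "no_steatosis":     {"simple_steatosis": 0.02},
    "simple_steatosis": {"nash": 0.03, "no_steatosis": 0.01},
    "nash":             {"f1_nash": 0.05},
}

def step(state, rng):
    """Advance one person by one yearly cycle; stay put if nothing fires."""
    u = rng.random()
    cum = 0.0
    for nxt, p in TRANSITIONS.get(state, {}).items():
        cum += p
        if u < cum:
            return nxt
    return state

rng = random.Random(42)
cohort = ["no_steatosis"] * 1000
for _ in range(30):                      # simulate 30 yearly cycles
    cohort = [step(s, rng) for s in cohort]
prevalence = 1 - cohort.count("no_steatosis") / len(cohort)
```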
Results: Our projections of the population and of HCC and liver transplant incidence during 2001-2020 closely match reported national data for this period, and predicted survival closely matches the community-based NAFLD population (Figure 1). Among US adults, the prevalence of NAFLD will increase from 25.3% in 2020 to 29.6%, or 89.5 million people, in 2050. Annual HCC incidence will increase to >21,000 cases (a 300% increase compared to 2020), and NAFLD-related liver transplantation will more than double to an average of 3,027 cases per year by 2050. Annual DCC incidence will increase to >142,000 cases (a 100% increase compared to 2020). Finally, NAFLD-related liver death will account for 2.9% of all deaths in 2050, an increase from 1.2% in 2020.
Figure 1.
Survival predicted by the NAFLD model vs. Adams et al
Conclusions: In the absence of effective intervention, increasing prevalence of NAFLD and associated liver complications will create a substantial treatment burden in the US, straining the liver transplant system, and becoming a common cause of death.
Keywords: Microsimulation Modeling, Nonalcoholic Fatty Liver Disease, Liver Complications, Liver Transplantation, Burden of Disease
Potential impact of improving dental education for risk-based management of patients with oral potentially malignant disorders: a simulation model
PP-168 Health Services, Outcomes and Policy Research (HSOP)
Mutita Siriruchatanon1, Alexander Ross Kerr2, Stella Kijung Kang3
1Department of Radiology, New York University Grossman School of Medicine, New York, New York, United States of America
2Department of Oral and Maxillofacial Pathology, Radiology, and Medicine, New York University College of Dentistry, New York, New York, United States of America
3Department of Radiology and Department of Population Health, New York University Grossman School of Medicine, New York, New York, United States of America
Purpose: We assessed the potential impact of educating general dentists to perform visual-tactile exams coupled with risk-based stratification to manage patients presenting with oral potentially malignant disorders (OPMDs).
Methods: We developed a microsimulation model for 45-year-old US adults with OPMDs, incorporating lesion features and histology. The population was characterized by risk factors of smoking and alcohol use (four risk profiles). Three strategies were compared: 1) no screening and 2) screening with a visual-tactile exam with two pathways: 2a) referral of all OPMDs for scalpel biopsy and 2b) referral of lesions with a red component for scalpel biopsy and surveillance for white lesions. For both pathways, surgery was performed for any dysplasia grade or cancer if the biopsy was positive; otherwise, lesions were placed on surveillance. We evaluated the primary outcome of life expectancy and intermediate outcomes of cumulative incidence of oral cancers and the number of biopsies over a lifetime horizon. We identified efficient strategies by comparing life-years gained (LYG) over no screening against biopsy burden. We conducted scenario analyses based on two assumptions: 1) general dentists use color and texture as cues for referral to specialists for biopsy and 2) specialists consider different histopathologic thresholds for surgical intervention (mild or moderate dysplasia). Sensitivity analysis was performed to assess model parameter uncertainty.
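Identifying efficient strategies over biopsy burden and life-years gained amounts to discarding dominated options. The sketch below uses hypothetical strategy values, chosen so that all three strategies land on the frontier, as the abstract reports; the numbers are not the model's results.

```python
# Hypothetical (biopsies per 100 persons, LYG over no screening) per strategy
strategies = {
    "no screening":      (0,   0.00),
    "refer all OPMDs":   (100, 0.50),
    "refer red lesions": (40,  0.21),
}

def efficient(strats):
    """Keep strategies not dominated by any other (>= LYG with <= biopsies)."""
    keep = {}
    for name, (biopsies, lyg) in strats.items():
        dominated = any(
            b <= biopsies and l >= lyg and (b, l) != (biopsies, lyg)
            for b, l in strats.values()
        )
        if not dominated:
            keep[name] = (biopsies, lyg)
    return keep

frontier = efficient(strategies)
```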
Results: Referral of all OPMDs for biopsy provided the highest life expectancy of 31 years among the three strategies, while the referral strategy with triage based on lesion color resulted in a reduction of 0.29 life-years but with 60% fewer biopsies. Comparing LYG and biopsy burden, all three strategies were on the efficient frontier. Training in using color and texture as cues resulted in 0.08 fewer life-years than referral of all OPMDs, with 50% fewer biopsies. Comparing different treatment thresholds between surgery and surveillance considered by specialists, referral strategies with a surgery threshold of moderate dysplasia or worse yielded 4.9 to 14 fewer LYG than those with a threshold of mild dysplasia, while resulting in 10% fewer biopsies. Model results were most sensitive to the probabilities of adherence to surveillance protocols, death from surgery, and progression from early-stage to late-stage cancer.
Conclusions: Strengthening education among general dentists in risk-based management of oral lesions, using the visual-tactile exam to differentiate low-risk from high-risk lesions, reduces biopsy burden while maintaining early-stage cancer detection.
Keywords: OPMD management, OPMD screening strategies, visual-tactile exam, decision-analytics model, dentistry education
Figure.
Lifetime number of biopsies and life-years gained (LYG) for a cohort of 45-year-old US adults for OPMD management strategies
A Model-based Decomposition of Racial Disparities in Prostate Cancer Incidence and Mortality
PP-170 Health Services, Outcomes and Policy Research (HSOP)
Roman Gulati1, Jane M Lange2, Yaw A Nyame3, Jonathan E Shoag4, Ruth Etzioni2
1Division of Public Health Sciences, Fred Hutchinson Cancer Center, Seattle, USA
2Division of Public Health Sciences, Fred Hutchinson Cancer Center, Seattle, USA; Knight Cancer Institute, Oregon Health & Science University, Portland, USA
3Division of Public Health Sciences, Fred Hutchinson Cancer Center, Seattle, USA; Department of Urology, University of Washington Medical Center, Seattle, USA
4Case Comprehensive Cancer Center, Case Western Reserve University, Cleveland, USA; Department of Urology, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Cleveland, USA; Department of Urology, New York-Presbyterian Hospital, Weill Cornell Medical Center, New York, USA
Purpose: To investigate the relative contributions of disease natural history and clinical interventions to disparities in prostate cancer incidence and mortality between Black men and men of all races in the United States.
Methods: We extended an established microsimulation model of prostate cancer natural history and diagnosis that was previously calibrated to incidence rates from the Surveillance, Epidemiology, and End Results (SEER) registry separately for Black men and for men of all races combined. The extended model integrated race-specific estimates of initial treatment frequencies (from SEER), prostate cancer survival (from SEER), and all-cause mortality (from U.S. life tables). Efficacies of prostate-specific antigen screening and curative treatment (from randomized controlled trials) were assumed to be similar for both groups. Starting with the model for all races, we systematically replaced up to 9 estimated model components with corresponding components for Black men, projecting incidence and mortality rates at each step.
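The sequential-replacement logic can be sketched as follows: each component's contribution is the change in projected mortality when that component is switched from its all-races value to its Black-men value, divided by the total observed gap. All rates below are illustrative placeholders, not the model's projections.

```python
# Illustrative mortality projections per 100,000 as components are switched,
# one at a time, from all-races values to Black-men values (hypothetical)
baseline = 25.0                        # all-races model projection
steps = {
    "prevalence of subclinical disease": 33.0,
    "rate of subclinical progression":   38.0,
    "cancer survival after diagnosis":   41.0,
}
observed_black = 46.0                  # observed rate for Black men

contributions = {}
prev = baseline
for component, projected in steps.items():
    # share of the total gap attributable to switching this component
    contributions[component] = (projected - prev) / (observed_black - baseline)
    prev = projected
```

Note that with sequential replacement the attributed shares depend on the order in which components are switched; the abstract does not state how order was handled, so this is only a structural illustration.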
Results: The extended model approximated age-standardized prostate cancer incidence and mortality rates per 100,000 men ages 40-84 years by race group, although mortality rates for Black men were overprojected after 2002 (Figure 1). Based on model projections in 2015, the increased prevalence of subclinical, biopsy-detectable disease in Black men explained 87% of the observed disparity in incidence and 37% of the observed disparity in mortality. The increased rate of subclinical progression and worse prostate cancer survival after diagnosis with localized or metastatic disease explained much of the remaining disparity in mortality, with relative contributions of 34%, 21%, and 7%, respectively. Contributions due to differing patterns of screening and initial treatment were small.
Figure 1.
Age-standardized prostate cancer incidence and mortality rates for Black men and for men of all races from SEER and model-projected contributions to observed disparities
Conclusions: This model-based decomposition of prostate cancer disparities highlights the key roles of increased prevalence of subclinical disease, increased rate of subclinical progression, and worse prostate cancer survival in Black men. It remains unclear how much biologic risk, structural and social factors, and health care delivery drive these components. Nonetheless, these results underscore the potentially critical importance of targeted screening for Black men.
Keywords: Black men, microsimulation model, prostate cancer, racial disparities
Lesbian, gay, bisexual, and other sexual minority adults in the US and their unmet medical needs and telehealth use due to the COVID-19 pandemic
PP-171 Health Services, Outcomes and Policy Research (HSOP)
Ryan Suk1, Zhigang Xie2, Jennifer C Spencer3, Aliénor Lemieux Cumberlege4, Young Rock Hong5
1Department of Management, Policy and Community Health, The University of Texas Health Science Center, Houston, Texas, United States
2Department of Public Health, University of North Florida, Jacksonville, Florida, United States
3Department of Population Health, Dell Medical School, The University of Texas at Austin, Austin, Texas, United States; Department of Internal Medicine, Dell Medical School, The University of Texas at Austin, Austin, Texas, United States
4NHS Lothian, Edinburgh, United Kingdom
5Department of Health Services Research, Management and Policy, College of Public Health and Health Professions University of Florida, Gainesville, Florida, United States
Purpose: Sexual minority adults – including lesbian, gay, bisexual, and other sexual minorities (LGBQ+) – face demonstrated barriers to employment, health insurance, and healthcare access. We sought to assess the association between sexual orientation (LGBQ+ vs. heterosexual), unmet medical needs, and telehealth use due to the COVID-19 pandemic using nationally representative survey data among US adults.
Methods: In this cross-sectional study, we used the 2020 National Health Interview Survey (NHIS) to identify unmet medical needs (defined as “any delayed medical care” and “forgone non-COVID-related medical care” attributable to the COVID-19 pandemic) and telehealth use (defined as “any telehealth use” and “COVID-19 pandemic-prompted telehealth use” within the past year). We compared these outcomes by sexual orientation using Wald chi-square tests. We also conducted sequential multivariable logistic regressions to assess associations between sexual orientation and each outcome, adjusting for age and sex (Model 1), then clinical factors (Model 2), and finally additional socioeconomic moderators (Model 3). Analyses were conducted using SAS 9.4, accounting for the complex survey design and sampling weights.
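Sequential adjustment of this kind can be sketched on simulated data. The abstract's analysis used SAS survey procedures with design weights; the sklearn fit below, with hypothetical variables and no survey weighting, is only a structural illustration of adding covariate blocks while tracking the sexual-orientation odds ratio.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated respondents (hypothetical variables, not NHIS data)
rng = np.random.default_rng(0)
n = 2000
lgbq = rng.binomial(1, 0.08, n)                 # sexual minority indicator
age = rng.integers(18, 85, n).astype(float)
comorbid = rng.binomial(1, 0.3, n)              # stand-in clinical factor

# Simulated outcome: unmet need more likely for LGBQ+ respondents
logit = -1.2 + 0.3 * lgbq + 0.01 * (age - 50) + 0.5 * comorbid
unmet = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Sequential models: add covariate blocks one at a time, tracking the
# adjusted odds ratio (aOR) for the sexual-orientation term
models = {
    "model1_demographic": np.column_stack([lgbq, age]),
    "model2_plus_clinical": np.column_stack([lgbq, age, comorbid]),
}
odds_ratios = {}
for name, X in models.items():
    fit = LogisticRegression(max_iter=1000).fit(X, unmet)
    odds_ratios[name] = float(np.exp(fit.coef_[0][0]))  # aOR for LGBQ+
```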
Results: Among 17,231 (representing 244 million) US adult respondents, there were significant differences in unmet medical needs due to the COVID-19 pandemic for LGBQ+ vs. heterosexual adults, both for any care (28.9% vs. 23.4%, p=.01) and for non-COVID-related care (22.8% vs. 15.4%, p<.001). For telehealth, LGBQ+ adults had more frequent utilization both in general (40.9% vs. 32.0%, p<.001) and for pandemic-related reasons (35.3% vs. 26.7%, p<.001). In sequential adjustment, demographic and clinical differences did little to explain these disparities (Figure). Even after adding socioeconomic adjustments, LGBQ+ adults had significantly greater odds of any unmet medical needs (aOR=1.27, 95% CI=1.03-1.57) and unmet non-COVID-related medical needs (aOR=1.49, 95% CI=1.20-1.85), as well as higher telehealth use (aOR=1.38, 95% CI=1.11-1.73) and pandemic-prompted telehealth use (aOR=1.39, 95% CI=1.12-1.73).
Figure.
Conclusions: Our findings indicate that disastrous events such as the COVID-19 pandemic likely exacerbate existing healthcare access disparities faced by LGBQ+ individuals and warrant further research into the potential of telehealth interventions to reduce these disparities, especially for those living in geographic regions lacking culturally competent providers.
Keywords: LGBTQ+, sexual minority, telehealth
Persistent High-Need High-Cost healthcare use in adults: the validation of predictors in Dutch claims data
PP-173 Health Services, Outcomes and Policy Research (HSOP)
Ursula W De Ruijter1, Willem A Bax2, Hester F Lingsma3
1Department of Public Health, Erasmus MC, University Medical Center, Rotterdam, The Netherlands; Department of Internal Medicine, Northwest Clinics, Alkmaar, The Netherlands
2Department of Internal Medicine, Northwest Clinics, Alkmaar, The Netherlands
3Department of Public Health, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
Purpose: To validate predictors of persistent High-Need High-Cost (HNHC) healthcare use in the Dutch context and to develop a prediction model for HNHC healthcare use applicable in a clinical setting.
Methods: We performed a retrospective cohort study in the claims database of a large national healthcare insurer covering 4 million beneficiaries, using data from 2015-2019. HNHC was defined as belonging to the top 10% of total annualized healthcare expenditures for three consecutive years. Potential predictors identified in a recent systematic review were assessed with regression models, and their effects were quantified as odds ratios and beta coefficients. Predictors were selected for the final model using a liberal p-value threshold (<0.2) to prevent overfitting. The model was validated internally with a bootstrapping procedure, and discrimination was quantified with the c-statistic.
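Internal validation by bootstrapping commonly estimates optimism, the gap between a model's apparent discrimination and its discrimination on the original data after refitting on a resample, and subtracts it from the apparent c-statistic. The sketch below runs on simulated data; the study's exact procedure is not specified beyond bootstrapping, and the three predictors stand in for variables such as age, Charlson score, and number of specialties involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated cohort (hypothetical predictors, not Dutch claims data)
rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 3))
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ np.array([0.8, 0.6, 0.4])))))

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Optimism: resample AUC minus the refit model's AUC on the original data
optimism = []
for _ in range(50):
    idx = rng.integers(0, n, n)                 # bootstrap resample
    boot = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], boot.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, boot.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

corrected_c = apparent - np.mean(optimism)      # internally validated c-statistic
```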
Results (preliminary): Patients with persistent HNHC healthcare use were older and had more comorbidities. We expect the Charlson comorbidity score, age, and number of medical specialties involved to be the predictors with the highest independent predictive value. We will report the number of readily available predictors in the newly developed model and expect it to show good performance, for which we will report an internally validated c-statistic.
Conclusions: We provide a readily clinically applicable model based on evidence-based predictors to identify patients at high risk of future persistent HNHC healthcare use. This model potentially enables case-finding for HNHC care management to improve quality of care and the patient experience.
Keywords: prognosis, meaningful use, managed care programmes, patient care management, health expenditures
Including Resources for Implementation in Economic Analyses of Genetic Testing for Cancer: A Scoping Review
PP-174 Health Services, Outcomes and Policy Research (HSOP)
Zachary Rivers1, Stefan Allen1, Tiana Butler2, Hadley Stevens Smith3, Veena Shankaran4, Scott Ramsey4
1Hutchinson Institute for Cancer Outcomes Research, Fred Hutch Cancer Center, Seattle, WA, USA
2Fairview Pharmacy Services, Minneapolis, MN, USA
3Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA
4Hutchinson Institute for Cancer Outcomes Research, Fred Hutch Cancer Center, Seattle, WA, USA; University of Washington, Seattle, WA, USA
Purpose: Economic models of genetic testing evaluate the impact of implementing this technology in clinical care. Models that omit the resources needed to address barriers to implementation may evaluate testing impact inaccurately. This review evaluates how frequently economic analyses of cancer genetic testing include implementation resources.
Methods: We conducted a scoping review of original research articles indexed in PubMed or Embase related to cancer, genetic testing, and economic modeling. We included studies that were: 1) model-based economic analyses; 2) published January 1995-November 2021; and 3) comparisons of one or more genetic testing approaches, written in English. Whether studies accounted for implementation barriers was extracted by multiple reviewers using a standardized form. These barriers included test applicability (use of race- and/or ethnicity-matched genetic variant and disease frequency rates), test accuracy (inclusion of test sensitivity or specificity), and test accessibility (inclusion of patient or provider willingness to use genetic testing and patient access to testing).
Results: There were 149 papers included in this analysis. In total, 42 (28%) focused on treatment response, 40 (26.7%) used testing to identify the mutation explaining a patient’s cancer, 32 (21%) to quantify an individual’s risk of future disease, 23 (15%) examined risk of recurrence, 10 (6.7%) looked at treatment toxicities, and three (2%) looked at the likelihood that a patient had cancer. Regarding implementation barriers, 62 (42%) studies considered test accuracy, 27 (18.4%) considered the applicability of the genetic test to the population, and 28 (18.4%) considered test accessibility. Twenty-three (15%) papers considered multiple implementation barriers, 68 (46%) considered one, and 58 (39%) considered none.
Conclusions: This scoping review identified that fewer than half of all economic analyses of genetic testing in cancer care consider test accuracy, while even fewer consider test applicability or access. The lack of inclusion of these barriers in economic models creates models that do not reflect real-world cancer care. As models are increasingly used to inform reimbursement policy or clinical implementation, the impact of barriers to implementation should be considered to understand the real-world economic outcomes of genetic testing.
Keywords: Cancer, Genetics, Cost-effectiveness, scoping review
Altruism and breast milk donations
PP-176 Patient and Stakeholder Preferences and Engagement (PSPE)
Anna Katharina Stirner, Daniel Wiesen
Department of Business Administration and Health Care Management, Faculty of Management, Economics and Social Sciences, University of Cologne, Cologne, Germany
Purpose: What drives decisions to make medical donations? Answering this question could help increase donation rates and allocate donations more efficiently. We investigate the relationship between socioeconomic preferences and the donation behavior (i.e., breast milk donations) of mothers of very low birth weight (VLBW; <1,500 grams) infants. Furthermore, we explore the effect of personality traits on mothers' donation behavior.
Methods: Data were collected through a survey of mothers of VLBW infants aged 6 to 24 months across four German statutory health insurers (N=533). Donation behavior was defined by two measures: mothers' willingness to donate breast milk for other VLBW infants and their willingness to accept donated breast milk for their own infant. Socioeconomic preferences were elicited within the Preference Survey Module framework. Data on personality traits were collected using the Big-Five-Inventory-10.
Results: Using logit regressions, we find a significant relationship between altruism and the willingness to donate breast milk: a marginal increase in the level of altruism increases mothers' willingness to donate by about 3 percent. Moreover, we find a significant correlation between altruism and the willingness to accept donations: a marginal increase in the level of altruism increases mothers' willingness to accept donations by about 4 percent. In addition, previous lactation experience, duration of lactation, and having multiples were found to play a significant role in mothers' decisions about whether to accept donations. We do not find a significant effect of personality traits on mothers' donation behavior.
Conclusions: Our results indicate a strong link between altruistic traits and donation decisions among mothers of VLBW infants. While willingness to donate breast milk appears to be exclusively determined by altruistic tendencies, the decision to accept donations turns out to be more complex. Notably, about 78 percent of mothers in our sample reported that they would have donated if given the opportunity, while only about 8 percent reported an actual donation. Relatedly, about 62 percent of mothers stated that they would have been willing to accept donor milk, while only about 18 percent were able to obtain donor milk for their infant. This pattern implies untapped potential in breast milk donations, in both supply and demand, resulting from deficient milk bank structures within the German health care system.
Keywords: socioeconomic preferences, personality traits, medical donation, breast milk
Developing a core outcome set for patients with autosomal inherited bleeding disorders in the Netherlands
PP-177 Patient and Stakeholder Preferences and Engagement (PSPE)
Evelien Shannon van Hoorn1, Hester Floor Lingsma1, Marjon Hester Cnossen2, Samantha Claudia Gouw3
1Erasmus MC, University Medical Center Rotterdam, Department of Public Health, Rotterdam, the Netherlands
2Erasmus MC Sophia Children’s Hospital, University Medical Center Rotterdam, Department of Pediatric Hematology, Rotterdam, the Netherlands
3Amsterdam UMC location University of Amsterdam, Emma Children’s Hospital, Pediatric Hematology, Meibergdreef 9, Amsterdam, the Netherlands
Purpose: Worldwide, value-based healthcare is being implemented. To evaluate patient value, it is essential to regularly measure health outcomes that matter to patients. Multiple sets of outcomes have been proposed for patients with hemophilia. These have not yet been developed for patients with other bleeding disorders. This study aims to determine a standard set of outcomes for patients with autosomal inherited bleeding disorders as well as a set of case-mix variables for risk adjustment in order to implement value-based healthcare for this patient group.
Methods: A preliminary list of 146 health outcomes and 79 case-mix variables was identified from literature. Using a modified Delphi process, two panels consisting of patients and caregivers, and healthcare professionals respectively, developed consensus on which outcomes are important for patients with autosomal inherited bleeding disorders. The healthcare professionals further achieved consensus on a set of case-mix variables for risk adjustment. During three different Delphi rounds, the panels scored and reached consensus on the importance of the health outcomes and case-mix variables on a 5-point Likert scale using the online Welphi platform. Agreement on importance was defined as a median score of 5 across all panelists, disagreement as a median score below 5 and consensus as an interquartile range ≤ 1.
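The consensus rules described above can be encoded directly. The quartile convention used for the interquartile range below (medians of the lower and upper halves) is an assumption, since the abstract does not specify one.

```python
from statistics import median

def iqr(scores):
    """Interquartile range via medians of the lower/upper halves (assumed convention)."""
    s = sorted(scores)
    n = len(s)
    return median(s[(n + 1) // 2:]) - median(s[:n // 2])

def classify(scores):
    """Apply the panel rules: consensus requires IQR <= 1; agreement = median of 5."""
    if iqr(scores) > 1:
        return "no consensus"
    return "agreement" if median(scores) == 5 else "disagreement"

# Example: 10 panelists rating one outcome on the 5-point Likert scale
verdict = classify([5, 5, 5, 5, 5, 5, 4, 4, 5, 5])
```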
Results: The two panels consisted of 42 participants (19 healthcare professionals, 13 patients, and 10 caregivers). Although patients and caregivers focused more on quality of life and healthcare professionals more on disease-related health outcomes, both panels agreed on the importance of eight outcome domains: bleeding episodes, treatment, joint health, complications, menstruation, pain, emotional functioning, and knowledge. The healthcare professionals agreed on the importance of six groups of case-mix variables: patient, bleeding, treatment, and gynecological characteristics, self-efficacy, and sport participation.
Conclusions: This study proposes a consensus set of health outcomes that are important for patients with autosomal inherited bleeding disorders. This outcome set, in combination with the identified case-mix variables, is recommended for use in the Netherlands to assess the health-related quality of life of these patients in a variety of healthcare settings.
Keywords: Delphi Technique, Blood Coagulation Disorders, Outcome Assessment
A Best-Worst Scaling Survey for Measuring Family Spillover Effects for Children with Complex Chronic Conditions: A Pilot Study
PP-178 Patient and Stakeholder Preferences and Engagement (PSPE)
Huey Fen Chen1, Angela M. Rose2, Kerra Mercon2, Melissa K. Cousino3, Jeremy Adler4, Samir Gadepalli5, Folafoluwa O. Odetola4, David E. Sandberg4, Courtney Streur6, Lisa A. Prosser1
1Department of Health Management and Policy, University of Michigan, Ann Arbor, U.S.; The Susan B. Meister Child Health Evaluation and Research (CHEAR) Center, University of Michigan, Ann Arbor, U.S.
2The Susan B. Meister Child Health Evaluation and Research (CHEAR) Center, University of Michigan, Ann Arbor, U.S.
3Department of Pediatrics, University of Michigan, Ann Arbor, U.S.
4Department of Pediatrics, University of Michigan, Ann Arbor, U.S.; The Susan B. Meister Child Health Evaluation and Research (CHEAR) Center, University of Michigan, Ann Arbor, U.S.
5Department of Surgery, University of Michigan, Ann Arbor, U.S.; The Susan B. Meister Child Health Evaluation and Research (CHEAR) Center, University of Michigan, Ann Arbor, U.S.
6Department of Urology, University of Michigan, Ann Arbor, U.S.; The Susan B. Meister Child Health Evaluation and Research (CHEAR) Center, University of Michigan, Ann Arbor, U.S.
Purpose: To develop a best-worst scaling (BWS) survey to evaluate the most impactful items measuring family spillover effects for children with complex chronic conditions from an economic perspective.
Methods: To develop the BWS survey, a list of candidate outcome measures/items was identified from a literature review, expert panels, and the taxonomy framework developed by the Core Outcome Measures in Effectiveness Trials (COMET) Initiative. A list of 21 items in three main categories was generated for the BWS survey: direct medical and non-medical costs borne by families, informal caregiving time, and impact on family members’ quality of life (QOL). Using balanced incomplete block designs, a survey with 21 BWS tasks was generated. To reduce respondents’ cognitive load, the original survey was split into three surveys (7 BWS tasks each), adjusted so that each of the 21 items occurred at least once in every survey. Pretests were carried out prior to the pilot survey. Analyses included calculation of importance scores and conditional (multinomial) logistic (MNL) regression.
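A common way to compute BWS importance scores is the best-minus-worst count scaled by how often each item appeared across tasks; the sketch below assumes that formulation (the abstract does not state its exact formula), and the item names and responses are hypothetical.

```python
from collections import Counter

def bws_importance(choices, appearances):
    """Best-minus-worst importance score per item, scaled by the number of
    tasks in which the item appeared (an assumed scaling)."""
    best = Counter(c["best"] for c in choices)
    worst = Counter(c["worst"] for c in choices)
    return {item: (best[item] - worst[item]) / n
            for item, n in appearances.items()}

# Hypothetical responses: each BWS task records the item picked as most
# impactful ("best") and least impactful ("worst").
choices = [{"best": "caregiver QOL", "worst": "parking costs"},
           {"best": "lost work time", "worst": "parking costs"},
           {"best": "caregiver QOL", "worst": "lost work time"}]
appearances = {"caregiver QOL": 3, "lost work time": 3, "parking costs": 2}
print(bws_importance(choices, appearances))
```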
Results: Using convenience sampling, a total of 30 respondents (healthcare providers, researchers, and parents) were included. The mean age of respondents was 40 years, and the sample differed from the overall US population (100% privately insured, 97% employed full-time, 83% white, 77% female). Items in the QOL and time categories were chosen more frequently as having the most impact on families (the top 10 comprised 2/2 QOL items, 6/9 time items, and 2/10 cost items), while items in the cost category were the least impactful (Table 1). The highest-impact items were “Quit jobs or did not pursue a job in order to care for the child,” followed by “Caregivers’ quality of life” and “Family member’s quality of life.” MNL regression analyses yielded the same category rankings.
Conclusions: This pilot study is the first step of a larger study developing a core outcome set for measuring family spillover effects for children with complex chronic conditions. The results showed that items in the quality of life and time categories had the greatest impact on families of children with complex chronic conditions. Costs were least impactful, though respondents may not be a representative sample. Further evaluation in a more diverse population is needed.
Keywords: Best-Worst Scaling
Table 1.
Pilot study best-worst scaling results
The accuracy and impact of patients’ expectations in severe chronic obstructive pulmonary disease: a longitudinal mixed methods study
PP-179 Patient and Stakeholder Preferences and Engagement (PSPE)
Joanna Hart1, Amy Summer2, Lon Ogunduyile2, Folasade Lapite2, David Hong2, Casey Whitman2, Wei Wang2, Bryan Zoll2, Daniel Carter2, Leo Thorbecke1, Scott Halpern1
1Palliative and Advanced Illness Research Center, University of Pennsylvania, Philadelphia, United States; Division of Pulmonary, Allergy, and Critical Care, Department of Medicine, University of Pennsylvania, Philadelphia, United States; Department of Medical Ethics and Health Policy, University of Pennsylvania, Philadelphia, United States
2Palliative and Advanced Illness Research Center, University of Pennsylvania, Philadelphia, United States
Purpose: We aimed to identify how patients with serious illness develop and manage health expectations, quantify the accuracy of their expectations, and identify the relationship between expectation accuracy and health-related quality of life (HRQL).
Methods: We conducted a longitudinal study of outpatients with severe COPD, prospectively measuring their individual health expectation accuracy over two years. At baseline, patients predicted the emotional symptoms and dyspnea they would experience 3, 12, and 24 months later. We measured their actual outcomes at these intervals, calculating a difference score between their predicted and actual symptoms. We measured HRQL using the St. George’s Respiratory Questionnaire (SGRQ) at the same time points. Using descriptive statistics, we described patterns in patients’ expectation accuracy. We built linear regression models for each symptom domain at each time point, setting the primary exposure as patients’ difference scores and the primary outcome as their SGRQ scores. A subset of participants also completed longitudinal interviews on (1) their health information-seeking practices; (2) the development and management of health predictions; and (3) how expectations influenced future preparations. We analyzed the interviews using thematic analysis methods.
Results: Our cohort included 207 patients (RR=80.0%). We retained over 85% of eligible, living patients through two years. Patients’ expectation accuracies varied. Linear regression models revealed that overly optimistic expectations of future burdens of dyspnea and negative emotions (i.e., anticipated less burden of symptoms than occurred) were consistently associated with lower HRQL over two years. We conducted 52 semi-structured interviews with 26 patients. Patients’ sources of future health information included clinicians, familial COPD experiences, recent health, informants from social networks, spiritual beliefs, and media. Trust in clinicians’ prognostic information was based upon interpersonal interactions and clinicians’ previous predictive accuracy. Patients felt accurate information about future health should guide decision making. Yet, many also desired only positive prognostic information to prevent unactionable worry or as an avoidant coping strategy. Patients found revisiting expectations proven later to be overly optimistic to be disappointing, yet identified alternative coping strategies they used as their health declined.
Conclusions: Given the negative relationship between overly optimistic expectations and patients’ subsequent HRQL, interventions that build from mechanisms of expectation formation, target patients or clinicians, and guide patients towards accurate expectations are likely to promote delivery of value-aligned care and improve patients' serious illness experience.
Keywords: quality of life, prognosis, longitudinal studies, pulmonary disease, mixed methods, information seeking behavior
Associations of expectation accuracy with health-related quality of life over 24 months
How much do patients care about taste alteration? Findings from a discrete-choice experiment on chronic-cough treatments
PP-180 Patient and Stakeholder Preferences and Engagement (PSPE)
Jui Chen Yang1, Aparna Swaminathan2, Helen Ding3, Jonathan Schelfhout3, Theresa Coles4, F. Reed Johnson5
1Duke Clinical Research Institute, Duke University School of Medicine, Durham, NC, USA.
2Duke Clinical Research Institute, Duke University School of Medicine, Durham, NC, USA; Department of Medicine, Duke University School of Medicine, Durham, NC, USA.
3Merck & Co., Inc., Rahway, NJ, USA.
4Department of Population Health Sciences, Duke University School of Medicine, Durham, NC, USA.
5Duke Clinical Research Institute, Duke University School of Medicine, Durham, NC, USA; Department of Population Health Sciences, Duke University School of Medicine, Durham, NC, USA.
Purpose: To quantify patients’ willingness to accept tradeoffs between symptom improvements and risk of taste-alteration side effects of chronic-cough (CC) treatments.
Methods: A web-enabled discrete-choice experiment (DCE) survey was designed to elicit trade-off preferences among four attributes identified from qualitative research, clinical-trial data (NCT02612610), and literature review. Attributes included cough frequency (0, 5, 10, or 20 times per hour), cough timing (daytime vs. nighttime), number of intense cough attacks (0, 1, 3, 5, or 7 times per week), and taste alterations (normal taste, inability to taste sweet foods, inability to taste salty foods, or a metallic, chalky or oily taste). Adults with self-reported CC (a cough lasting >8 weeks or a physician diagnosis of CC) were recruited from a commercial survey panel to complete the DCE survey with 12 choice sets in which they were asked to choose between two constructed treatment options characterized by varying attribute levels. The resulting choice data were analyzed using a conditional logit model to estimate relative preference weights for all attribute levels and tradeoffs between improvements in CC symptoms and taste alteration.
Results: The survey was completed by 502 qualifying adults. The median age was 58 years, and most respondents were white (91%) and female (80%). About half of the respondents coughed ≥10 times per hour and about 60% had cough attacks ≥5 times per week. Roughly a third each reported their cough was worst at night, during the day, or at both times. Less than half reported previously experiencing a taste alteration. Given the attributes included in the study, number of intense cough attacks was the most important (34%), followed by taste alteration (24%), nighttime cough frequency (23%), and daytime cough frequency (19%). Respondents indicated tradeoffs between varying levels of treatment efficacy and avoidance of metallic, chalky, or oily taste changes. To accept that taste alteration, they required either a) no nighttime coughing and a reduction in daytime coughing from 20 to 18 times per hour; b) no daytime coughing and a reduction in nighttime coughing from 20 to 14 times per hour; or c) a reduction in intense cough attacks from 7 times per week to twice a week.
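Attribute importance percentages like those above are conventionally computed as each attribute’s range of estimated preference weights divided by the sum of ranges across attributes. The sketch below uses hypothetical weights chosen to reproduce the reported 34/24/23/19 split; they are not the study’s estimates.

```python
def relative_importance(weights_by_attribute):
    """Relative importance (%) of each DCE attribute: range of its level
    preference weights as a share of the sum of ranges."""
    ranges = {a: max(w) - min(w) for a, w in weights_by_attribute.items()}
    total = sum(ranges.values())
    return {a: round(100 * r / total) for a, r in ranges.items()}

# Hypothetical preference weights (best level = 0, worst level negative).
weights = {"intense cough attacks": [0.0, -1.7],
           "taste alteration": [0.0, -1.2],
           "nighttime cough": [0.0, -1.15],
           "daytime cough": [0.0, -0.95]}
print(relative_importance(weights))
```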
Conclusions: Patients with CC were willing to accept some taste alteration for improved efficacy of CC treatments.
Keywords: Discrete-choice experiment (DCE), patient preference, chronic cough.
Relative Preference Weights
Oncologist Preferences for Response Scores Summarizing Patient Genomics and Implications for Cancer Care: A Discrete Choice Experiment in Development
PP-181 Patient and Stakeholder Preferences and Engagement (PSPE)
Katherine T Lofgren1, Enrique Saldarriaga2, Omar Hamdani1, Jerry Mitchell1, Josh J Carlson3
1Foundation Medicine, Inc., Cambridge MA.
2The Comparative Health Outcomes, Policy, and Economics (CHOICE) Institute, University of Washington, Seattle, WA.
3Curta, Inc., Seattle, WA.
Purpose: The list of genomic results informing therapy selection in advanced cancer is rapidly growing. Often, biomarkers are binary indicators of a patient’s candidacy for targeted therapies. However, response to treatments like immunotherapy is uncertain and both programmed death ligand 1 (PD-L1) expression and tumor mutational burden (TMB) are imperfect predictive biomarkers. A multi-marker response score may improve treatment selection when no single biomarker sufficiently defines the population likely to benefit. Here we describe a discrete choice experiment developed to identify and quantify oncologist preferences for a patient response score to inform therapy selection in the context of advanced non-small cell lung cancer.
Methods: We conducted semi-structured 60-minute interviews with 8 oncologists to inform the set of response score attributes most meaningful to oncologists. We probed for specific clinical decisions in need of baseline treatment response prediction beyond current biomarkers. Two researchers used rapid qualitative analysis of the interview transcripts to identify key themes. The final attributes and levels were selected based on identified literature, key themes, an oncologist ranking exercise, and attribute relevance to the development of a response score.
Results: Based on literature and qualitative research, potential attributes included: clinical benefits, concerns (such as turnaround time), patient demand, clinical development study design, test characteristics, actionability of results, treatment context, and healthcare costs. Among these, the most important were the number of patients with modified recommendations based on the score (actionability), study design, and the magnitude of expected clinical benefits as measured by overall survival, progression-free survival, or toxicity reduction with the score available. Interviewed oncologists universally identified difficulty selecting between immunotherapy alone versus in combination with chemotherapy, and the shortcomings of PD-L1 and TMB, as a high-priority clinical decision context. Generally, the interviewees did not consider FDA approval or test characteristics important attributes, stating they assumed a high level of both sensitivity and specificity for any on-market response score.
Conclusions: Oncologists identified key attributes likely to inform their adoption of a treatment response score and the most important clinical care context – immunotherapy response prediction. A formal discrete choice experiment will provide a robust analysis of the relative importance of and key trade-offs between attributes and will improve the development of such decision aids to best meet the needs of oncologists.
Keywords: Cancer, discrete choice experiment, oncologist perspective, preference elicitation
Defining and measuring the value of genetic testing from patients’ perspectives: Developing the Patient-reported Genetic testing Utility InDEx
PP-182 Patient and Stakeholder Preferences and Engagement (PSPE)
Robin Z Hayeems1, Stephanie Luca2, Elise Poole2, Daniel Assamad2, Wendy J Ungar1, Lesleigh Abbott3, Linlea Armstrong4, Patricia Birch5, Kym Boycott6, June C Carroll7, Lauren Chad8, Avram Denburg9, Rebecca Deyell10, Alison Elliott5, Anne Marie Laberge11, Iskra Peltekova12, Becky Quinlan13, Sarah Sawyer6, Maureen Smith14, Anita Villani15
1Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Canada; The Institute of Health Policy Management and Evaluation, University of Toronto, Toronto, Canada
2Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Canada
3Division of Hematology/Oncology, Children’s Hospital of Eastern Ontario, Ottawa, Canada
4Division of Medical Genetics, British Columbia Children’s Hospital, Vancouver, Canada; British Columbia Children's Hospital Research Institute, Vancouver, Canada
5British Columbia Children's Hospital Research Institute, Vancouver, Canada; Department of Medical Genetics, University of British Columbia; Vancouver, Canada
6Department of Medical Genetics, Children’s Hospital of Eastern Ontario, Ottawa, Canada
7Department of Family and Community Medicine, Sinai Health, University of Toronto, Toronto, Canada
8Division of Clinical and Metabolic Genetics, The Hospital for Sick Children, Toronto, Canada; Department of Pediatrics, University of Toronto, Toronto, Canada; Department of Bioethics, The Hospital for Sick Children, Toronto, Canada
9Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Canada; The Institute of Health Policy Management and Evaluation, University of Toronto, Toronto, Canada; Division of Hematology/Oncology, The Hospital for Sick Children, Toronto, Canada
10British Columbia Children's Hospital Research Institute, Vancouver, Canada; Pediatric Hematology/Oncology, British Columbia Children’s Hospital, Vancouver, Canada
11Division of Medical Genetics, Centre Hospitalier Universitaire Sainte-Justine, Montreal, Canada
12Department of Pediatrics, University of Toronto, Toronto, Canada; Developmental Pediatrics, Holland Bloorview Kids Rehabilitation Hospital, Toronto, Canada
13Ontario Genetics Advisory Committee, Ontario Health, Toronto, Canada
14Canadian Organization for Rare Disorders, Toronto, Canada
15Division of Hematology/Oncology, The Hospital for Sick Children, Toronto, Canada
Purpose: Determining the full value of genomic sequencing technologies requires a set of metrics that includes the patient perspective. The purpose of this study is to develop the Patient-reported Genetic testing Utility InDEx (P-GUIDE), a novel patient-reported outcome measure for genomic medicine. Preliminary work related to the content validity of P-GUIDE is presented.
Methods: Informed by an evidence synthesis to characterize the concept of personal utility from the perspective of parents/caregivers, a preliminary set of domains and items were generated. From a common item bank, specific item lists were created to suit three genetic test result types (i.e. diagnostic/medically actionable, non-diagnostic/non-medically actionable, variants of uncertain significance). Parents of children who received genetic test results related to neurodevelopmental disorders (NDD) or hereditary/hard to cure cancer (HHC) were identified from five centres across Canada. To attend to under-represented patient populations, non-English speaking parents were purposefully sampled. Cognitive interviews were conducted wherein respondents were asked to reflect on the meaning of personal utility and provide feedback on the relevance, comprehensibility, and comprehensiveness of preliminary P-GUIDE items. Interviews were audio-recorded and transcribed verbatim. Data were analyzed thematically and item-specific feedback was synthesized to inform revisions.
Results: Of the 15 interviews completed, 12 were with parents of children with NDD and 3 with parents of children with HHC. The majority of participants were mothers (81%) and the average age of their children was 9.7 years. Recent genetic testing was diagnostic/medically actionable for 40% of participants’ children, non-diagnostic/non-medically actionable for 33%, and uncertain for 27%. Domains of personal utility related to the cognitive, affective, behavioural, social, and medical management impacts of genetic testing resonated with parents. For some, the impact of genetic test results on mental health was a prominent component of the affective domain. Preliminary analyses of items suggested that some items were relevant across NDD and HHC populations and result types while others were specific to population and result type. Most items were clearly understood, but wording changes were recommended for others. Ongoing data collection will inform further assessments of content validity.
Conclusions: Once developed and validated, evidence generated from the use of P-GUIDE in comparative studies will constitute a reliable measure of personal utility, thereby enhancing equity considerations in health technology assessments, policy deliberations and funding recommendations.
Keywords: genomic medicine, patient-reported outcome measure, content validity
Decision making on dementia primary prevention: The impact of modifiable risk factor reductions
PP-184 Quantitative Methods and Theoretical Developments (QMTD)
Chiara C. Brück1, Frank J. Wolters2, M. Arfan Ikram2, Inge M.C.M. de Kok1
1Department of Public Health, Erasmus MC, Rotterdam
2Department of Epidemiology, Erasmus MC, Rotterdam
Purpose: To aid decision making on dementia primary prevention strategies, we added risk factors to the microsimulation model MISCAN-Dementia that can be used to investigate the effect of changes in risk factors on the future burden of dementia.
Methods: We used the microsimulation model MISCAN-Dementia to simulate the life history and development of dementia of 10 million individuals. MISCAN-Dementia synthesizes population-based Rotterdam Study data with changes in demographics between birth cohorts from the early 1900s onwards. In this study, we added mid-life hypertension and late-life smoking as risk factors for dementia to the model. Every birth cohort was divided into four groups: (1) no risk factor, (2) smoking, (3) hypertension, and (4) both smoking and hypertension. Sex- and birth-cohort-specific data were used to replicate observed risk factor prevalence in the population. Compared to the no-risk-factor group, groups with risk factors have an increased risk of developing dementia (risk estimates based on literature). At the same time, background mortality of groups with risk factors is increased to match differences in life expectancy compared to the no-risk-factor group. The model can be used to evaluate various risk factor scenarios (i.e. complete reduction, varying degrees of reduction) and determine their effect on dementia incidence and prevalence until 2050.
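The four-group structure described above can be sketched as follows: each simulated individual is assigned a risk group and their annual dementia hazard is scaled by a group-specific relative risk. All numbers below are made up for illustration; they are not the literature-based estimates or Rotterdam Study inputs used in MISCAN-Dementia.

```python
import random

# Illustrative relative risks per group (hypothetical; multiplicative for "both").
RELATIVE_RISK = {"none": 1.0, "smoking": 1.3, "hypertension": 1.6, "both": 1.3 * 1.6}

def annual_dementia_events(n, base_hazard, group_probs, rng):
    """Simulate one year of dementia onsets in a cohort of n individuals,
    with group membership drawn from group_probs and hazards scaled by
    the group's relative risk."""
    events = 0
    groups = list(group_probs)
    weights = list(group_probs.values())
    for _ in range(n):
        group = rng.choices(groups, weights=weights)[0]
        if rng.random() < base_hazard * RELATIVE_RISK[group]:
            events += 1
    return events

rng = random.Random(0)
print(annual_dementia_events(10_000, 0.01,
                             {"none": 0.5, "smoking": 0.2,
                              "hypertension": 0.2, "both": 0.1}, rng))
```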
Results: We added mid-life hypertension and late-life smoking as risk factors to the MISCAN-Dementia model. The model was able to fit the observed risk factor prevalence and differences in life expectancy between risk groups well. Given the risk factor additions to MISCAN-Dementia, the model can be used in decision making on primary prevention strategies for dementia to evaluate long-term effects of risk factor reductions on dementia incidence and prevalence. The model provides various outcome measures, such as dementia incidence, prevalence, costs, and quality adjusted life years (QALYs). To capture the complex relationship between reductions in risk factors, differences in dementia incidence and increases in life expectancy, it is essential to model the effect of risk factors on dementia incidence as well as on background mortality.
Conclusions: The microsimulation model MISCAN-Dementia was updated to include mid-life hypertension and late-life smoking as risk factors for dementia to predict the future burden of dementia. Evaluating risk factor reduction scenarios can aid decision making on which primary prevention strategy for dementia to pursue.
Keywords: Dementia, microsimulation, primary prevention, modeling, risk factor
Methods to quantify the importance of parameters for model updating and distributional adaptation
PP-185 Quantitative Methods and Theoretical Developments (QMTD)
David Glynn, Susan Griffin, Nils Gutacker, Simon Walker
Centre for Health Economics, University of York
Purpose: Decision models produce information on the overall health impacts of a decision, taking account of both the benefits and costs. Distributional cost-effectiveness analysis (DCEA) uses decision modelling to quantify impacts on health inequality. Decision models are time consuming to build, therefore adapting previously developed models for new purposes may be advantageous. Here we provide methods to quantify the importance of parameters to 1) update existing models to reflect new evidence and 2) adapt existing models to estimate distributional outcomes.
Methods: Previous research has described methods to assess the influence of different inputs on decision models, including value of information (VOI) and one-way sensitivity analysis (OWSA). Here we apply these established methods in novel ways by looking at the interpretation and assumptions required to aid parameter prioritisation for 1) model updating (altering parameter values to reflect the latest available evidence) and 2) model adaptation to estimate health inequality impacts (converting a model to estimate a distribution of health consequences, for example by introducing evidence on how parameter values differ between population groups). For model updating we apply VOI, for model adaptation we apply OWSA.
Further, we propose metrics which quantify the extent to which the most important parameters in a model have been updated or adapted. For model updating, the metric is the VOI associated with updating a particular parameter as a percentage of the VOI from updating all parameters. For model adaptation, the metric is the variation in net benefit from an OWSA in a particular parameter as a percentage of the sum total of variation from OWSA in all parameters. We demonstrate our methods using an oncology case study.
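The adaptation metric above can be sketched as a cumulative share: rank parameters by their one-way sensitivity variation and report the running percentage of the total captured by the top-ranked parameters. The variation values below are hypothetical, chosen only so the leading share matches the 46.3% figure reported in the Results.

```python
def adaptation_metric(owsa_variation):
    """Rank parameters by OWSA variation in net benefit and return the
    cumulative percentage of total variation covered by the top-k parameters."""
    total = sum(owsa_variation.values())
    ranked = sorted(owsa_variation.items(), key=lambda kv: kv[1], reverse=True)
    cumulative, running = [], 0.0
    for name, v in ranked:
        running += v / total
        cumulative.append((name, round(100 * running, 1)))
    return cumulative

print(adaptation_metric({"p1": 46.3, "p2": 20.0, "p3": 10.0, "p4": 23.7}))
```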
Results: Out of 30 probabilistic model parameters, updating the two most important addressed 71.5% of the total VOI. Updating the top four parameters addressed nearly 100% of total VOI. For model adaptation, 46.3% of the total OWSA variation came from a single parameter. Adapting the top 10 input parameters was found to account for over 95% of the total variation (Table).
Conclusions: These methods offer a systematic approach to guide investment in updating models with new data or adapting models to undertake DCEA. The case study demonstrated small marginal gains from updating more than 4 parameters or adapting more than 10 parameters.
Keywords: Distributional cost effectiveness analysis, equity, model updating, decision modelling, sensitivity analysis, value of information
Rank of parameter importance and adaptation metric when converting the case study model to a DCEA
Each input is ranked based on the absolute change in net health benefit resulting from a 1% change in that parameter (its semi-elasticity). The adaptation metric quantifies the cumulative sum of the semi-elasticities divided by the total.
A simulation framework comparing performance of longitudinal matching methods for time-dependent treatments
PP-187 Quantitative Methods and Theoretical Developments (QMTD)
Deirdre Weymann, Brandon Chan, Dean A Regier
BC Cancer, Vancouver, Canada
Purpose: Longitudinal matching methods are emerging to mitigate confounding in observational studies of time-dependent treatments. Relative performance is not well established through simulation and no studies consider machine learning to automate balancing of time-dependent covariate histories. In this study, we develop a Monte Carlo simulation framework for evaluating longitudinal matching methods. We compare naïve matching on baseline propensity scores with two longitudinal methods, including a proposed machine learning based approach.
Methods: Using Monte Carlo simulation, we generated 1,000 datasets, each consisting of 1,000 subjects. Datasets reflected a series of pseudo experiments in which some patients were treated and other eligible at-risk patients were not, a design common to longitudinal matching. Within each dataset, we applied: (1) nearest neighbor matching on time-invariant, baseline propensity scores; (2) sequential risk set matching on time-dependent propensity scores; and (3) a proposed longitudinal extension of genetic matching that considers time-dependent covariates. To evaluate comparative performance, we measured covariate balance, efficiency, bias, and root mean squared error (RMSE) of treatment effect estimates. In scenario analysis, we varied underlying assumptions for assumed covariate distributions, correlation structures, treatment assignment models, and outcome models.
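Method (1) above, greedy 1:1 nearest-neighbor matching on baseline propensity scores without replacement, can be sketched minimally as follows. The scores are hypothetical; in the study they would be estimated from baseline covariates, and this sketch deliberately ignores the time-dependent structure that methods (2) and (3) address.

```python
def nearest_neighbor_match(treated, controls):
    """Greedy 1:1 nearest-neighbor matching on propensity scores,
    without replacement. treated/controls map subject id -> score."""
    matches, available = {}, dict(controls)
    for t_id, t_score in treated.items():
        if not available:
            break  # control pool exhausted
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        matches[t_id] = c_id
        del available[c_id]  # each control used at most once
    return matches

treated = {"t1": 0.80, "t2": 0.30}
controls = {"c1": 0.78, "c2": 0.32, "c3": 0.50}
print(nearest_neighbor_match(treated, controls))  # {'t1': 'c1', 't2': 'c2'}
```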
Results: In all explored scenarios, matching on baseline propensity scores resulted in biased treatment effect estimation in the presence of time-dependent confounding. Mean bias ranged from 29.7% to 37.2% depending on underlying assumptions. In contrast, sequential risk set matching with a time-dependent propensity score and longitudinal genetic matching achieved stronger covariate balance and yielded less biased treatment effect estimates, ranging from 0.7% to 13.7%. Scenario analysis revealed that underlying data generation processes strongly affected the relative performance between manual and machine learning based longitudinal matching methods.
Conclusions: Longitudinal matching consistently outperformed baseline propensity score matching for evaluating time-dependent treatments. The most appropriate method will depend on the research question and patterns in underlying data. Our study will guide future comparative assessments of treatments accessible at multiple time points and enable further simulation-based validation for longitudinal matching.
Keywords: longitudinal matching, time-dependent treatment, propensity score, Monte Carlo simulation, machine learning
Sequential allocation of vaccine to control an infectious disease
PP-188 Quantitative Methods and Theoretical Developments (QMTD)
Isabelle J Rao, Margaret L Brandeau
Department of Management Science and Engineering, Stanford University, Stanford, United States
Purpose: The problem of optimally allocating a limited supply of vaccine to control a communicable disease has broad applications in public health and has received renewed attention during the COVID-19 pandemic. When vaccine supplies are constrained, policy makers must address the question of how best to allocate vaccines.
Methods: We consider an SIR model with interacting population groups, an allocation of limited vaccines over multiple time periods, and four objectives: minimize new infections, deaths, life years lost, or quality-adjusted life years (QALYs) lost due to death. We approximate the model using Taylor series expansions and develop simple analytical conditions characterizing the optimal solution for a single time period. We develop a solution approach in which we allocate vaccines using the analytical conditions in each time period based on the state of the epidemic at the start of the time period. We illustrate our method with an example calibrated to the COVID-19 epidemic in New York State with three time periods. We divide the population into four age groups: individuals under age 20, individuals aged 20-39, individuals aged 40-65, and individuals over age 65.
Results: The approximated solution is an all-or-nothing allocation: to minimize new infections in any time period it is optimal to vaccinate groups in decreasing order of their force of infection, whereas to minimize deaths it is optimal to vaccinate groups in decreasing order of their force of infection multiplied by the mortality rate. For COVID-19, individuals aged 20-39 should be prioritized for vaccination in the first time period to minimize new infections because they have the highest initial force of infection. However, prioritizing individuals over age 65 minimizes deaths, life years lost, and QALYs lost due to death because of the high mortality rate in this group. Numerical simulations show that our method achieves near-optimal results over a wide range of vaccination scenarios.
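The all-or-nothing rule above can be sketched directly: within a time period, vaccinate groups in decreasing order of force of infection (or force of infection multiplied by mortality rate, when minimizing deaths) until supply is exhausted. All numbers below are illustrative, not calibrated values from the study.

```python
def allocate(groups, supply, minimize_deaths=False):
    """Greedy all-or-nothing allocation: fully vaccinate groups in decreasing
    priority order until the vaccine supply runs out."""
    key = ((lambda g: g["foi"] * g["mortality"]) if minimize_deaths
           else (lambda g: g["foi"]))
    allocation = {}
    for g in sorted(groups, key=key, reverse=True):
        doses = min(supply, g["susceptible"])
        allocation[g["name"]] = doses
        supply -= doses
    return allocation

# Hypothetical forces of infection (foi) and mortality rates for four age groups.
groups = [{"name": "<20",   "foi": 0.3, "mortality": 0.0001, "susceptible": 100},
          {"name": "20-39", "foi": 0.5, "mortality": 0.001,  "susceptible": 100},
          {"name": "40-65", "foi": 0.4, "mortality": 0.01,   "susceptible": 100},
          {"name": "65+",   "foi": 0.2, "mortality": 0.05,   "susceptible": 100}]
print(allocate(groups, 150))                        # prioritizes ages 20-39
print(allocate(groups, 150, minimize_deaths=True))  # prioritizes ages 65+
```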
Conclusions: Our allocation method provides a practical, intuitive guide for decision makers that can achieve near-optimal solutions as they allocate limited vaccines over time. Although black box models are prevalent in the literature on vaccine allocation, our study shows that accuracy need not be sacrificed for interpretability. Our analysis highlights the need for interpretable models to aid in important problems in public health and epidemic control.
Keywords: vaccine allocation, optimization, COVID-19, dynamic disease model, epidemic control, health policy
Results Table
Digital Twin Neighborhoods for Precision Population Health: Proof of Concept and Tutorial
PP-189 Quantitative Methods and Theoretical Developments (QMTD)
Jarrod E. Dalton1, Lyla Mourany1, Glen B. Taksler2, Adam T. Perzynski3
1Department of Quantitative Health Sciences, Cleveland Clinic, Cleveland, Ohio
2Center for Value-Based Care Research, Cleveland Clinic, Cleveland, Ohio
3Center for Healthcare Research and Policy, Case Western Reserve University at MetroHealth, Cleveland, Ohio
Purpose: Social and neighborhood indicators have repeatedly been found to have a powerful influence on disease risks and individual health outcomes. In order to assist in understanding differential impacts of population health interventions across diverse communities, we introduce Digital Twin Neighborhoods (DTNs): digital representations of the health status and outcomes of real communities which integrate biological, social, and geographic data and algorithms in a cloud computing environment.
Methods: We devised a procedure to construct representative synthetic populations of local community residents at high spatial resolution, and anonymously sample geocoded electronic health records (EHRs) to populate these synthetic persons with health characteristics and outcomes. The method applies to any result arising from study of geocoded EHR data. To synthesize a focal area’s (block group’s) population and health information, we 1) identified nearby areas for localized sampling based on either a fixed radius or minimum population size around the focal area; 2) measured pairwise similarity with the focal area based on 17 Area Deprivation Index (ADI) indicators; and 3) assigned sampling weights to each available patient who resided in these nearby areas based on this similarity metric. Synthetic persons were then populated with EHR characteristics of real patients based on these sampling weights. We used EHR-derived estimates of life expectancy (LE) to demonstrate the procedure, based on a microsimulation of mortality rates among 558,102 Cleveland Clinic patients aged 40-55 residing in Cuyahoga County, OH. We visualized LE across neighborhoods for subpopulations defined based on sex, race, ethnicity and ADI.
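As a rough sketch of steps 2-3 and the weighted sampling, assuming a simple inverse-distance similarity over the ADI indicators (the abstract does not specify the exact similarity function):

```python
import math
import random

def sampling_weights(focal_adi, nearby_adi):
    """Weight each nearby block group by similarity to the focal area.

    Similarity is taken here as 1 / (1 + Euclidean distance) across the
    17 ADI indicator values; this is an illustrative metric only.
    """
    weights = {}
    for area, adi in nearby_adi.items():
        dist = math.dist(focal_adi, adi)
        weights[area] = 1.0 / (1.0 + dist)
    return weights

def draw_synthetic_person(patients, weights, rng=random):
    """Sample one real patient record, weighted by area similarity, to
    populate a synthetic resident of the focal block group."""
    w = [weights[p["area"]] for p in patients]
    return rng.choices(patients, weights=w, k=1)[0]
```

Areas whose deprivation profile closely matches the focal block group contribute more of their patients to the synthetic population, which is what keeps the twin locally representative.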
Results: Maps of median LE beyond age 40 for these Cuyahoga County subpopulations are provided in the Figure. Estimates for a given area are displayed for neighborhoods in which there were >10 residents aged 40-55 within a given subpopulation in 2019. The maps display both the spatial distribution of LE and the disparities in LE across race, ethnicity, socioeconomic position, sex and neighborhood.
Conclusions: While many researchers have begun to harness advancements in computing and machine learning within specific fields, few attempts have been made to hybridize or synthesize these advances across social, biological and clinical disciplines and thereby more richly document the mechanisms by which social risks are embodied as biological differences. DTNs can be adapted to simulate the impact of health services and public health interventions on localized populations.
Keywords: Synthetic Populations, Simulation, Neighborhood Health Disparities, Life Expectancy, Sampling
Mid-Life Life Expectancy for Localized Subpopulations Defined Based on Sex, Race, Ethnicity and Area Deprivation Index Quintile
Median life expectancy from age 40 is displayed for subpopulations defined according to sex, race, ethnicity and Census block group (top two rows) and according to sex, Area Deprivation Index quintile and Census block group (bottom two rows). Estimates are provided for a given panel wherever a given subpopulation has >10 population according to the 2019 American Community Survey, resulting in a depiction of both the spatial distribution and disparity in life expectancy among subpopulations.
A Modular Transplant Simulation Framework to Accelerate Innovation in Organ Allocation Policy
PP-190 Quantitative Methods and Theoretical Developments (QMTD)
Johnie Rose1, Maryam Valapour2, Carli J. Lehr2, Paul R. Gunsalus3, Mark F. Swiler3, Belinda L. Udeh3, Lyla Mourany3, Jarrod E. Dalton3
1Center for Community Health Integration, Case Western Reserve University, Cleveland, USA
2Department of Pulmonary Medicine, Cleveland Clinic, Cleveland, USA
3Department of Quantitative Health Sciences, Cleveland Clinic, Cleveland, USA
Purpose: Simulated allocation models used to compare U.S. organ transplantation system scenarios are confined to existing risk models and allocation schemes currently implemented in policy. We describe a flexible, open-source framework for the study of user-defined mortality risk and organ allocation schemes.
Methods: We implemented a discrete time microsimulation model in R to simulate the random emergence of transplant candidates and donors from U.S. hospitals. We organized the relevant steps of organ allocation into a framework of interconnected, discrete modules with standardized inputs and outputs. To decouple the simulation model from the underlying protected data, we applied hierarchical Bayesian regression models (HBRM) to 2015-2021 data (N = 12,797 lung transplant candidates) from the U.S. Scientific Registry of Transplant Recipients to 1) assign each transplant hospital an annual number of candidates and 2) assign each simulated candidate demographic and disease characteristics. Characteristics of organ availability were assigned based on an HBRM, with location treated as another organ characteristic, drawn from a spatial probability distribution.
Results: We divided the steps of organ allocation into modules of allocation criteria, matching weights, donor generation, candidate generation, pre-transplant risk, screening, matching, acceptance, post-transplant risk, and output (Figure) with fixed data structures for each. Allocation criteria encompass the factors considered in donor-organ matching, allowing for incorporation of matching weights for scoring and prioritization. Separate modules allow for varied approaches to modeling risk of waitlist and post-transplant mortality. The screening module identifies eligible candidates for each donor; the matching module then assigns a priority score based on candidate and organ characteristics and locations; the acceptance module simulates decision making among matched candidates based on key donor and candidate characteristics. The outcomes module tracks individual-level outcomes, facilitating calculation of aggregate metrics and design of econometric analyses.
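A minimal sketch of such a modular pipeline, with module names following the abstract but entirely invented state fields and scoring logic:

```python
def screening(state):
    """Identify candidates eligible for the current donor organ."""
    donor = state["donor"]
    state["eligible"] = [c for c in state["candidates"]
                         if c["blood_type"] == donor["blood_type"]]
    return state

def matching(state):
    """Assign a priority score from candidate and organ characteristics
    (the weighted sum here is a placeholder, not the framework's rule)."""
    for c in state["eligible"]:
        c["score"] = c["waitlist_risk"] - 0.001 * c["distance_km"]
    state["ranked"] = sorted(state["eligible"],
                             key=lambda c: c["score"], reverse=True)
    return state

def acceptance(state):
    """Simulate offer acceptance; here the top-ranked candidate accepts."""
    state["recipient"] = state["ranked"][0] if state["ranked"] else None
    return state

def run_allocation(state, modules=(screening, matching, acceptance)):
    """Run interchangeable modules over a shared, standardized state."""
    for module in modules:
        state = module(state)
    return state
```

Because every module consumes and returns the same state structure, a researcher can swap in an alternative matching or acceptance module without touching the rest of the pipeline, which is the point of the standardized inputs and outputs.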
Conclusions: An open-source model framework based on discrete modules with standardized inputs and outputs offers the potential to accelerate research into optimal organ allocation strategies, the impact of extrinsic factors on organ supply and demand, and equity considerations in transplantation. Future work will adapt the framework to other organ allocation schemes; evaluate the effectiveness and equity of alternate allocation strategies; and expand the model to include upstream pre-waitlist chronic disease modules.
Funding: NIH R01HL153175
Keywords: Transplant, organ allocation, microsimulation
Modular Organ Transplant Simulation Framework
Estimating Parameter Uncertainty in a Colorectal Cancer Screening Microsimulation Model using Bayesian Calibration
PP-191 Quantitative Methods and Theoretical Developments (QMTD)
Luuk van Duuren1, Fernando Alarid-Escudero2, Lucie de Jonge1, Jonathan Ozik3, Nicholson Collier3, Carolyn Rutter4, Iris Lansdorp-Vogelaar1, Reinier Meester1
1Department of Public Health, Erasmus MC Medical Center Rotterdam, The Netherlands
2Division of Public Administration, Center for Research and Teaching in Economics (CIDE), Aguascalientes, Mexico
3Decision and Infrastructure Sciences, Argonne National Laboratory, United States
4RAND Corporation, Santa Monica, CA, United States
Purpose: Until recently, Cancer Intervention and Surveillance Modelling Network (CISNET) models generally did not account for uncertainty in calibrated input parameters. We aim to investigate the uncertainty in the simulated natural history of colorectal cancer (CRC) of the MIcrosimulation SCreening ANalysis-Colon (MISCAN-Colon) model, one of the CISNET models, through Bayesian calibration.
Methods: MISCAN-Colon simulates CRC natural history and CRC screening effects. In the model, CRC develops following the adenoma-carcinoma pathway through up to seven preclinical stages. We calibrated the natural history parameters to observed calibration targets obtained from randomized clinical trials, population-based studies, and cancer registries using the Incremental Mixture Approximate Bayesian Calibration algorithm.
We used uniform distributions centred around current model parameters as prior parameter distributions. The algorithm iteratively derived a joint posterior distribution of 2800 parameter sets consistent with preselected uncertainty bounds around the calibration targets. To evaluate the extent of parameter uncertainty and the impact on model predictions, we analysed five outcomes across the 2800 parameter sets, and their coefficients of variation (CV): i) standardized parameter value ranges (relative to their mean), ii) mean adenoma dwell times (time from adenoma onset to preclinical CRC development), iii) mean CRC sojourn times (time from preclinical CRC to clinical CRC diagnosis), iv) lifetime CRC incidence and v) life-years gained by 10-yearly colonoscopy from age 45 to 75 (US guidelines).
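The acceptance logic of such a calibration can be illustrated with a one-pass rejection sampler. Note that the Incremental Mixture Approximate Bayesian Calibration algorithm iteratively refines its proposal distribution, which this simplified stand-in omits; the toy model below is entirely invented:

```python
import random

def abc_calibrate(simulate, priors, target_bounds, n_keep, rng=random):
    """Draw parameter sets from uniform priors and keep those whose
    simulated outputs fall inside preselected bounds around each
    calibration target, yielding an approximate posterior sample."""
    accepted = []
    while len(accepted) < n_keep:
        theta = {k: rng.uniform(lo, hi) for k, (lo, hi) in priors.items()}
        out = simulate(theta)
        if all(lo <= out[k] <= hi for k, (lo, hi) in target_bounds.items()):
            accepted.append(theta)
    return accepted

# Toy 'natural history' model: lifetime CRC incidence as a simple
# function of an adenoma onset hazard parameter (invented for illustration).
def toy_model(theta):
    return {"lifetime_incidence": 0.9 * theta["onset_hazard"]}
```

The spread of the accepted parameter sets, relative to the prior, is what the coefficients of variation reported below summarize.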
Results: Standardized ranges for most natural history model parameters were limited to -50% to +50% of the mean value (Figure 1A), but parameters controlling age-dependent hazard of adenoma onset and adenoma progressiveness were relatively more uncertain (CV 0.29-0.54). Estimates of the mean dwell and sojourn time varied from 11.30 to 12.91 years (interquartile range [IQR] 11.81-12.18, CV=0.022), and 2.35 to 2.85 years (IQR 2.53-2.71, CV=0.040), respectively (Figure 1B,C). Lifetime CRC incidence varied between 6.0% and 7.3% (IQR 6.61-6.87%, CV=0.028) (Figure 1D). Estimated life-years gained by screening were 206.1 to 264.0 (IQR 234.7-244.3, CV=0.031) per 1000 individuals (Figure 1E).
Figure 1.
Across the 2800 parameter sets, we found the following distributions of A) the standardized parameter ranges, B) mean adenoma dwell time, C) mean CRC sojourn time, D) lifetime CRC incidence, and E) life-years gained by screening. Adn = adenoma; CRC = colorectal cancer; FIT sens = Fecal Immunochemical Test sensitivity.
Conclusions: Bayesian calibration of MISCAN-Colon suggests limited uncertainty in most natural history parameters, except for those governing the age-dependent adenoma hazard and adenoma progressiveness. Despite this, the relative variation of the estimates for mean adenoma dwell time, CRC sojourn time and screening effects was small. Nevertheless, further research should explore the implications of the absolute uncertainty on the identification of optimal strategies.
Keywords: Microsimulation model, Parameter uncertainty, Colorectal cancer screening
Bounding price and effectiveness for early phase economic modeling: an example of a hypothetical new vaccine and treatment for Clostridium difficile infection
PP-193 Quantitative Methods and Theoretical Developments (QMTD)
Marina Richardson1, Beate Sander2, Nick Daneman3, Fiona A Miller4, David Mj Naimark5
1Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment (THETA) Collaborative, University Health Network, Toronto, Canada
2Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment (THETA) Collaborative, University Health Network, Toronto, Canada; ICES, Toronto, Canada; Public Health Ontario, Toronto, Canada
3Division of Infectious Diseases, Sunnybrook Health Sciences Centre, Toronto, Canada; Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada; ICES, Toronto, Canada; Public Health Ontario, Toronto, Canada
4Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada
5Division of Nephrology, Sunnybrook Health Sciences Centre, Toronto, Canada; Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada
Purpose: To explore two methods for identifying combinations of vaccine and treatment effectiveness and prices that yield the highest incremental net health benefits (INHB).
Methods: We developed a Markov model to simulate a hypothetical cohort of individuals 18 years of age at risk of C. difficile infection over a lifetime time horizon from the perspective of the healthcare payer in Ontario, Canada. We assessed two methods to determine sets of model inputs (vaccine and treatment effectiveness and prices) that yield INHBs close to 0 for a strategy with a new vaccine and a new treatment added to standard of care (SOC) compared to SOC alone. For the purposes of comparability, we sought the same objective for both methods: find the 10% of combinations that yield INHBs closest to 0 assuming a cost-effectiveness threshold, λ, of C$50,000/QALY gained. Method 1 used a “random draws approach” to sample values for four parameters: vaccine effectiveness (VE), treatment effectiveness (TE) in preventing recurrence of C. difficile infection, vaccine cost, and treatment cost from uniform distributions to derive 5,000 combinations of model inputs. The model was run sequentially with each of the 5,000 parameter sets. Method 2 used a “calibration approach” based on the Nelder-Mead optimization algorithm to minimize the difference between the incremental cost-effectiveness ratio output of the model and λ (equivalent to INHB = 0).
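Method 1, the “random draws approach,” can be sketched as follows; the toy model mapping parameters to incremental outcomes below is invented for illustration and stands in for the Markov model:

```python
import random

LAMBDA = 50_000  # cost-effectiveness threshold, C$ per QALY gained

def inhb(delta_qalys, delta_cost):
    """Incremental net health benefit, in QALYs."""
    return delta_qalys - delta_cost / LAMBDA

def random_draws(run_model, ranges, n=5000, keep_frac=0.10, rng=random):
    """Sample n parameter sets from uniform ranges, run the model with
    each, and keep the fraction whose INHB is closest to zero. run_model
    must return the incremental (QALYs, cost) of the new strategy vs. SOC."""
    scored = []
    for _ in range(n):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
        dq, dc = run_model(params)
        scored.append((abs(inhb(dq, dc)), params))
    scored.sort(key=lambda pair: pair[0])
    return scored[: int(n * keep_frac)]

def toy_markov(params):
    """Invented stand-in: effectiveness buys QALYs, prices add cost."""
    dq = 0.02 * params["ve"] + 0.01 * params["te"]
    dc = 2.0 * params["vax_price"] + 0.5 * params["tx_price"]
    return dq, dc
```

Because every draw is retained with its parameter set, the same output supports the secondary analyses mentioned below, such as ranking sets by highest INHB instead of proximity to zero.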
Results: Method 1 yielded a parameter set of: VE 80%, $93/dose, TE odds ratio (OR) 0.61, and $2,522/treatment course. Method 2 yielded a parameter set of: VE 87%, $97/dose, TE OR 0.72, and $2,736/treatment course. Method 2 was highly dependent on the starting values of the input parameters and was not guaranteed to yield full coverage of the parameter space. Method 1 was more likely to have covered the full parameter space, allowed for other analytic approaches such as an assessment of the optimal parameter set according to the highest INHB, and yielded traditional outputs that allow for specifying optimal parameter sets according to clinical outcomes such as cases averted.
Conclusions: Researchers can use the “random draws approach” to provide estimates of effectiveness and prices to be used in subsequent cost-effectiveness analyses of sequential interventions along the care pathway.
Keywords: optimization, cost-effectiveness, prevention, treatment, C. difficile
Utilizing wastewater data for COVID-19 surveillance in the United States
PP-194 Quantitative Methods and Theoretical Developments (QMTD)
Masahiko Haraguchi, Nicolas Menzies
Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Harvard University
Purpose: Establishing quantitative estimates of COVID-19 infection trends, disease prevalence, and incidence remains challenging. Many existing studies that use wastewater data for COVID-19 focus their analysis on one or a few locations, such as a single municipality. Accounting for heterogeneous data across multiple locations has the potential to improve these estimates, as does synthesizing multiple data sources. Our purpose is to estimate these quantitative relationships to better inform COVID-19 estimates in settings where wastewater data are available and to develop better estimates for areas without wastewater surveillance systems. We also explore to what extent wastewater data can serve as a reliable leading indicator of infection trends. We pursue three research questions: i) what is the quantitative relationship between wastewater data and clinical and estimated data across multiple sewersheds?; ii) how can wastewater data most appropriately be used in disease models to recapitulate disease dynamics?; and iii) how can detection of wastewater trends be optimized to give the most lead time on trends in cases and hospitalizations?
Methods: Wastewater surveillance data from 36 US counties, as well as reported cases from Johns Hopkins University's database and projected cases and infections from our own database (covidestim), were used. Reported cases, as well as estimated cases and infections, were fitted against wastewater concentrations. Mixed linear models were used to account for the heterogeneous characteristics of predictors and, in identifying the lead time on trends, to account for temporal and geographical variation simultaneously.
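The lead-time question can be illustrated with a simple per-county cross-correlation; this is a stand-in for the mixed linear models, which additionally pool information across counties:

```python
def pearson(x, y):
    """Plain Pearson correlation, no external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_lead_weeks(wastewater, cases, max_lead=4):
    """Lead (in weeks) at which wastewater concentration best tracks
    subsequent reported cases, chosen by maximum cross-correlation."""
    scores = {lead: pearson(wastewater[:len(wastewater) - lead], cases[lead:])
              for lead in range(max_lead + 1)}
    return max(scores, key=scores.get)
```

Applied county by county, this kind of comparison yields the same-week versus 1-week lead classification reported in the results.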
Results: The density plots of fitted coefficients for each of the 36 counties are shown in Figure 1. For reported cases, estimated infections, and estimated cases, the median coefficients are 0.71, 0.88, and 0.94, respectively. The adjusted R2 of linear trends between normalized wastewater concentration and normalized reported cases ranged from 0.5 to 0.79 across the middle half of counties. Twenty of the 36 counties had a same-week lead time between wastewater concentration data and the 7-day lagged average of reported cases, while ten counties had a 1-week lead time.
Conclusions: According to the findings, wastewater monitoring data can provide useful information for COVID-19 surveillance decision-making, such as lead times and quantitative estimates. However, communicating the uncertainties associated with these estimates is critical.
Keywords: wastewater epidemiology, COVID-19 surveillance
Density plots of estimated coefficients of beta
The density plots of fitted coefficients for each of the 36 counties are shown here. For reported cases, estimated infections, and estimated cases, the median coefficients are 0.71, 0.88, and 0.94, respectively.
Vectorization to impute a missing categorical variable in a study using electronic medical record data
PP-195 Quantitative Methods and Theoretical Developments (QMTD)
Matthew A Pappas
Cleveland Clinic
Purpose: Structured data in electronic health records (EHRs) are rife with missingness, but often include similar information as free text. We set out to reduce missingness of a categorical variable by encoding corresponding free text descriptions as vectors and comparing the vectors of unknown observations against those of known observations.
Methods: We assembled a cohort of patients seen in a dedicated preoperative risk assessment clinic at our tertiary care center. The EHR template in our clinic included a field for proposed surgery, encoded as a Current Procedural Terminology (CPT) code. This field is frequently missing, but a free text description is often available. Analyzing only visits with complete information would introduce bias, as would using the procedure code of the completed surgery (an event after the clinic visit).
Vectorization is a technique to represent a word in numerical space, allowing comparisons of semantic similarity and syntactic relationships between words. The original algorithm (word2vec) has been extended to create numeric representations of arbitrary-length text (doc2vec). Using this extended algorithm, we first encoded every free text surgical description available in our dataset as a vector. Next, we compared the vector of each description whose CPT code was missing against the vector of each description with a valid CPT code. Finally, we selected the best match as the likely CPT code under consideration at the clinic visit, thereby imputing an unknown categorical variable from a corresponding free text description.
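The matching step can be sketched with a deliberately simple bag-of-words vectorizer standing in for the doc2vec embeddings; the 0.3 similarity threshold is our invention, included only to mirror the failed-match behavior reported below:

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts: a simple stand-in for doc2vec."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(count * v[term] for term, count in u.items() if term in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def impute_cpt(description, labeled, min_similarity=0.3):
    """Return the CPT code of the most similar labeled description, or
    None when no reasonable match exists."""
    query = vectorize(description)
    best_code, best_sim = None, min_similarity
    for text, code in labeled:
        sim = cosine(query, vectorize(text))
        if sim > best_sim:
            best_code, best_sim = code, sim
    return best_code
```

Unlike raw token overlap, a learned embedding such as doc2vec can also match descriptions that use different words for the same procedure, which is why it was preferred in the study.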
Results: Our dataset included 159,795 visits. Of those, 27,750 were missing the CPT code of the procedure being considered but included a free text description. After comparing the vectorized representations of those descriptions against 130,607 visits that included both a CPT code and a text description, we were able to impute our categorical variable for 27,691 visits. Imputation failed for a small number of visits for which no reasonable match could be algorithmically identified.
Conclusions: Vectorization appears to have utility for imputing a missing categorical variable using EHR free text entries. Studies relying on categorical EHR variables with appreciable missingness and available text description may consider this approach to reduce bias in addition to multiple imputation.
Keywords: Imputation, Vectorization, EHR data, Missingness
Validating agent-based simulation models designed for hospital-acquired infections
PP-197 Quantitative Methods and Theoretical Developments (QMTD)
Oguzhan Alagoz, Elizabeth Scaira, Nasa Safdar
University of Wisconsin-Madison, Madison, Wisconsin, USA
Purpose: As agent-based models (ABMs) have been increasingly used for making infectious disease control decisions, model validation is becoming more crucial, yet there is a lack of ABM-specific validation metrics. In this study, we introduce an alternative approach to validating ABMs designed for hospital-acquired infections that focuses on replicating hospital-specific conditions, and propose a new metric for validating the social-environmental network structure of ABMs.
Methods: We conducted a cross-validation experiment for our previously developed ABM representing Clostridioides difficile infection (CDI) by tailoring our model to a 426-bed Midwestern academic hospital and comparing the model-predicted CDI rates to the actual CDI rates between 2013 and 2018. For this cross-validation experiment, we incorporated hospital-specific layout, agent behaviors, and input parameters estimated from primary hospital data into the ABM while keeping all other components of the ABM including those related to the disease natural history the same. We then demonstrated the use of colonization pressure, an established risk factor for CDI which measures the impact of infectious and colonized patients in nearby locations on the risk of developing CDI, to validate the socio-environmental agent networks in ABMs. We also evaluated the impact of various infection control measures on CDI rates for this hospital-specific ABM and compared them against those for a generic ABM.
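Two of the building blocks here, colonization pressure and the validation risk ratio, can be sketched as follows. The per-ward share shown is one common operationalization of colonization pressure; the ABM's socio-environmental network definition is richer:

```python
def colonization_pressure(ward_mates):
    """Share of a patient's ward-mates who are colonized or infected."""
    if not ward_mates:
        return 0.0
    return sum(1 for p in ward_mates if p["cdi_positive"]) / len(ward_mates)

def risk_ratio(events_high, n_high, events_low, n_low):
    """Risk ratio of hospital-acquired CDI for patients under high vs.
    low colonization pressure, used to validate the agent network."""
    return (events_high / n_high) / (events_low / n_low)
```

If the simulated network is realistic, patients facing high colonization pressure should show an elevated CDI risk ratio, as the results below confirm.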
Results: Our ABM was able to replicate observed CDI trends in our target hospital during 2013-2018, including approximately a 46% drop during a period of major initiatives to reduce CDI infections (Figure 1). High colonization pressure in socio-environmental networks increased the risk of CDI (risk ratio: 1.37; 95% CI: [1.17, 1.59]). The comparison between the generic ABM and the hospital-specific ABM demonstrated that infection control interventions led to different levels of reduction in CDI rates for the hospital-specific and generic ABMs.
Figure 1.
Comparison between model-predicted hospital-acquired CDI rates to the actual rates between 2013 and 2018
Conclusions: We present an alternative approach to validate ABMs and propose a new metric to validate socio-environmental network structure of ABMs, which are crucial for the analyses made based on ABMs.
Keywords: agent-based simulation models; validation, hospital-acquired infections
Incorporating Effects of Delayed Transplant in Lung Transplant Mortality Risk Models
PP-198 Quantitative Methods and Theoretical Developments (QMTD)
Paul R. Gunsalus1, Maryam Valapour2, Carli J. Lehr2, Belinda L. Udeh1, Jarrod E. Dalton1
1Department of Quantitative Health Sciences, Cleveland Clinic, Cleveland, USA
2Department of Pulmonary Medicine, Cleveland Clinic, Cleveland, USA
Purpose: The US lung transplant allocation system ranks candidates by acuity using waitlist and post-transplant mortality risk models that make counterfactual predictions without considering effects of delayed transplant (i.e., transplant vs. no transplant today). We propose an alternative modeling framework that characterizes mortality risk as a function of delay.
Methods: Scientific Registry of Transplant Recipients data (2015-2021, N = 12,797) were used to estimate (using a random 80% training sample) two survival models: a competing risk model for transplant and waitlist mortality/removal, and a post-transplant mortality model. Predicted 1-year mortality risk was defined from these models, assuming transplant on day DT, as the product of waitlist mortality estimated at DT from the first model and post-transplant mortality estimated at (365-DT). This prediction was evaluated for all values of DT between 0 and 365, resulting in a curve of 1-year predicted mortality vs. time of delay for each candidate. We used concordance (C) indices and calibration plots to validate model accuracy in the 20% test sample. For that analysis, we compared the predicted mortality estimate at the observed day of transplant for each candidate to the overall mortality rate from time of waitlisting.
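The construction of the delay curve can be sketched as follows. Combining the two risks via complementary survival probabilities is our reading of the abstract's "product"; the cumulative-incidence arguments stand in for the two fitted survival models:

```python
def one_year_mortality(waitlist_cif, post_tx_cif, transplant_day):
    """Predicted 1-year mortality for transplant on day DT: waitlist
    mortality accrued through DT combined with post-transplant mortality
    over the remaining (365 - DT) days. cif arguments are cumulative
    incidence functions mapping a day to a probability."""
    p_wait = waitlist_cif(transplant_day)
    p_post = post_tx_cif(365 - transplant_day)
    return 1.0 - (1.0 - p_wait) * (1.0 - p_post)

def risk_curve(waitlist_cif, post_tx_cif):
    """Mortality vs. delay, evaluated for every DT from 0 to 365."""
    return [one_year_mortality(waitlist_cif, post_tx_cif, dt)
            for dt in range(366)]
```

Evaluating this curve for each candidate is what makes risk comparable across candidates listed at different times, the situation illustrated in Figure 1B.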
Results: Discrimination performance for death was moderate [C=0.75; 95% Confidence Interval: (0.72, 0.78)] and these predictions were well-calibrated for the 94.6% of candidates with predicted 1-year mortality risk <0.50. Mortality at one year varied greatly as a function of transplant delay as well as candidates’ clinical characteristics. Figure 1A displays risk for three example candidates. If all three candidates were listed simultaneously, Candidate 3 would be at greatest risk over time and prioritized for transplant. If Candidate 3 were added 100 days after Candidates 1 and 2 (Figure 1B), Candidate 3 would be prioritized under current policy despite Candidate 2 having the highest risk of the three until day 140.
Conclusions: Lung transplant mortality risk varied greatly as a function of transplant delay and warrants further examination to be considered as a factor in lung allocation policy formulation. In particular, our model provides a potential means for adapting allocation policy to delay transplant for some, perhaps even the sickest candidate waiting for transplant, to optimize population level survival. This shift in allocation paradigm would deviate from the “rule of rescue” that dominates US organ allocation policy.
Funding: NIH R01HL153175
Keywords: time-delay effects, competing risk modeling, transplant, organ allocation
Change in Lung Allocation After Incorporating Risks of Delayed Transplant
Is it Possible to Delay and Reduce the Peak of an Epidemic by Increasing Clearance Rates? A Theoretical Analysis
PP-199 Quantitative Methods and Theoretical Developments (QMTD)
Peng Dai, Sze Chuan Suen
Daniel J. Epstein Department of Industrial and Systems Engineering, Viterbi School of Engineering, University of Southern California
Purpose: During the peak of an epidemic (when the maximum number of individuals is infected), health resources may be strained to capacity to meet surging caseloads. Insufficient hospital resources during these times can lead to excess mortality, so public health efforts may focus on reducing or delaying the maximum number of individuals infected at the same time. For instance, in the COVID-19 pandemic, there was outreach messaging to “flatten the curve.” While reductions in transmission can change the epidemic peak, it is less clear whether treatment improvements may as well. In this work, we analytically examine the relationship between clearance rates and the epidemic peak to understand the theoretical implications for how more rapid and effective treatment might influence the size and timing of peak caseloads.
Methods: We use a susceptible-infected-susceptible (SIS) model with vital dynamics for simplicity and generalizability. We solve the ordinary differential equations to obtain analytical expressions for the infected population (I) and examine the existence of the maximal I value (magnitude of the peak). We use first- and second-order optimality conditions to identify the maximum value of I and its arrival time, from which we can use partial derivatives to find how increasing clearance rates change the peak number of infected, and the time at which this peak occurs.
Results: We prove that in an SIS system, an epidemic peak (a rise and then fall in the count of infected individuals) only occurs under a specific set of initial conditions and parameter settings. While increasing clearance rates can extend the time to and decrease the magnitude of the epidemic peak, there are, surprisingly, disease conditions under which increasing the clearance rate will increase the size of the peak (outcome 1 in Figure 1b) or hasten the epidemic peak (outcome 2 in Figure 1b), although not both simultaneously. We also identify the conditions under which these adverse outcomes cannot occur (outcome 3 in Figure 1b).
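As a numerical companion to the analytical results, one can Euler-integrate a toy SIS model with vital dynamics and record the largest infected count over the horizon. The parameterization (constant birth inflow, per-capita death rate) is ours for illustration; whether a true interior peak exists depends on the initial conditions, as the analysis shows:

```python
def sis_peak(beta, gamma, mu, births, s0, i0, days=500, dt=0.05):
    """Forward-Euler integration of an SIS model with vital dynamics.
    beta: transmission rate, gamma: clearance rate, mu: death rate,
    births: constant inflow of susceptibles. Returns the largest
    infected count observed and the time it occurred."""
    s, i, t = float(s0), float(i0), 0.0
    peak_i, peak_t = i, t
    for _ in range(int(days / dt)):
        n = s + i
        ds = births - beta * s * i / n + gamma * i - mu * s
        di = beta * s * i / n - (gamma + mu) * i
        s, i, t = s + ds * dt, i + di * dt, t + dt
        if i > peak_i:
            peak_i, peak_t = i, t
    return peak_i, peak_t
```

Sweeping the clearance rate in such a simulation reproduces the qualitative trade-offs above: depending on the remaining parameters, a higher clearance rate can shrink, delay, or (under the adverse conditions identified analytically) enlarge or hasten the peak.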
Conclusions: Increasing clearance rates alone can result in a larger peak later, a smaller peak earlier, or a smaller peak later, depending on disease and setting. Knowing about these phenomena can help decisionmakers make more informed decisions about whether to increase treatment rates alone if it necessitates tradeoffs between preparation time for a surge and surge magnitude.
Keywords: Disease Control, Dynamic Analysis, Compartmental Models
Visual depiction of an example epidemic peak and its timing (panel a) as clearance rate increases (panel b)
A Theoretical Framework for Incorporating Family Spillover Effects in Pediatric Economic Evaluation
PP-200 Quantitative Methods and Theoretical Developments (QMTD)
Ramesh Lamsal1, Laila Rahman2, E. Ann Yeh4, Eleanor Pullenayegum3, Wendy J Ungar5
1Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
2Epidemiology and Biostatistics Schulich School of Medicine and Dentistry, University of Western Ontario, London, Ontario, Canada
3Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Ontario, Canada
4Division of Neurology, The Hospital for Sick Children, Toronto, Ontario, Canada
5Technology Assessment at SickKids (TASK), Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Ontario, Canada
Purpose: A child’s health condition affects family members’ health, well-being, and economic well-being. These spillover effects are commonly ignored in pediatric economic evaluations and decision-making processes. The objective was to develop a theoretical framework and an approach for incorporating family spillover effects in pediatric economic evaluation.
Methods: A scoping review was conducted to identify theories, conceptual frameworks, and models in the disciplines of psychology, economics, and health services research that support the inclusion of family spillover effects or emphasize using a family approach in providing care and understanding the child’s health and development. These databases were searched from inception to 2020: MEDLINE, Embase, CINAHL, PsycINFO, Scopus, EconLit and Sociological Abstracts. A critical interpretive synthesis was conducted to integrate evidence from theories, conceptual frameworks, and models to develop a theoretical framework for incorporating family spillover effects in pediatric economic evaluation.
Results: Sixteen theories, conceptual frameworks or models were identified. Five concepts emerged from selected theories, conceptual frameworks, or models across the disciplines of psychology, economics, and health services research: (1) health and well-being of family members are inter-dependent; (2) collective family costs; (3) maximizing health and well-being; (4) family is a unit of analysis; and (5) factors influencing child health and development. In the proposed theoretical framework ‘conducting the pediatric economic evaluation from a family perspective,’ the family is the unit of analysis to evaluate the child’s health and well-being, where family costs and consequences related to a child’s illness or disabilities are derived from all family members and incorporated in the analysis. An approach was proposed for incorporating family spillover effects using isolated and inherent methods. An isolated method for incorporating family cost spillover requires estimating the individual cost for family members due to a child’s illness and summing them with the child’s costs to estimate the family costs. An inherent method for incorporating the family health spillover effect requires estimating each family member’s health utility, estimating QALYs independently, and then summing the QALYs of each family member to estimate the family QALYs.
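The two proposed methods reduce to simple aggregations across family members, sketched here with discounting omitted for brevity and all values invented:

```python
def family_costs(child_cost, spillover_costs):
    """Isolated method: sum each family member's spillover cost
    attributable to the child's condition with the child's own cost."""
    return child_cost + sum(spillover_costs.values())

def family_qalys(utilities, years):
    """Inherent method: estimate QALYs independently for each family
    member (health utility times duration) and sum them to obtain
    family QALYs."""
    return sum(u * years for u in utilities.values())
```

Carrying these family-level totals into the numerator and denominator of a cost-utility ratio is what operationalizes the family-perspective analysis described above.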
Conclusions: The proposed theoretical framework can be used to develop empirical methods and/or justify incorporating family spillover effects in pediatric CUA and, therefore, can improve evidence and advance equity for funding decision-making to optimize the health and well-being of not only children but also caregivers and family members.
Keywords: family spillover effects, pediatric cost-utility analysis, theoretical framework
Preservation of non-linear correlation between birth outcomes when calibrating to known marginal distribution ratios among infants affected by opioids
PP-201 Quantitative Methods and Theoretical Developments (QMTD)
Shawn Garbett1, Elizabeth Mcneer1, Rashmi Bharadwaj2, Stephen W. Patrick3, Ashley A. Leech4
1Department of Biostatistics, Vanderbilt University Medical Center
2Department of Health Policy, Vanderbilt University Medical Center
3Department of Pediatrics, Vanderbilt Center for Child Health Policy, and Department of Health Policy, Vanderbilt University Medical Center
4Department of Health Policy and Vanderbilt Center for Child Health Policy, Vanderbilt University Medical Center
Purpose: Infant mortality data encompass a non-linear correlation structure between birth weight and gestational age that affects survival. Opioid use disorder (OUD) is associated with preterm birth and low birth weight; however, treatment for OUD may minimize this risk. Previous economic evaluations have reported marginal outcomes on birth weight and gestational age by OUD treatment (buprenorphine, medication-assisted withdrawal, methadone, naltrexone, and untreated) in pregnant women without accounting for their correlation structure. Therefore, a method is necessary to preserve this structure when performing calibration on marginal distribution statistics by treatment.
Methods: Using 2018 CDC vital statistics data on births in the US, we assessed the correlation between birth weight and gestational age on mortality during the first year of life. Given the gap in joint outcome data for OUD treatments on birth weight and gestational age, we calibrated data from a general US population to known marginal statistics by treatment using a smooth reparametrized incomplete beta function. The modified beta is multiplied by each dimension of the general US birth distribution: Birth weight and gestational age. The Nelder-Mead method is used to search for optimal modified beta parameters to match the observed marginal outcomes. The resulting modified distribution is used for the simulation of birth outcomes under each treatment, minimizing alterations of the known non-linear correlation. We compared this approach with a naïve method that ignores the correlation structure between these two birth outcomes.
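The calibration step can be sketched as follows: a correlated bivariate sample standing in for the general US birth distribution is reweighted by a smooth beta-density multiplier in each dimension, and Nelder-Mead searches for multiplier parameters that match target marginal means. All distributions, support bounds, and targets below are illustrative assumptions, not the CDC data or the study's actual parameterization.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

rng = np.random.default_rng(0)
# Correlated stand-in for the general US birth distribution (grams, weeks).
ga = rng.normal(39.0, 2.0, 20_000)                           # gestational age
bw = 3400 + 180 * (ga - 39.0) + rng.normal(0, 350, 20_000)   # birth weight

def weights(params, x, lo, hi):
    a, b = np.exp(params)                  # keep beta shape parameters positive
    u = np.clip((x - lo) / (hi - lo), 1e-9, 1 - 1e-9)
    return beta.pdf(u, a, b)               # smooth multiplier over the support

def loss(params, target_bw, target_ga):
    w = weights(params[:2], bw, 300, 6000) * weights(params[2:], ga, 22, 45)
    w = w / w.sum()
    # Squared distance of reweighted marginal means from the targets
    # (the gestational-age term is scaled so both terms matter comparably).
    return (w @ bw - target_bw) ** 2 + 100 * (w @ ga - target_ga) ** 2

# Hypothetical marginal targets for one OUD-treatment group.
res = minimize(loss, x0=np.full(4, 0.5), args=(3100.0, 38.0),
               method="Nelder-Mead", options={"maxiter": 2000})
w = weights(res.x[:2], bw, 300, 6000) * weights(res.x[2:], ga, 22, 45)
w = w / w.sum()
corr = np.corrcoef(bw, ga)[0, 1]           # association in the source sample
```

Because the multiplier is a smooth function of each marginal rather than an independent resampling, the joint dependence of the source distribution is largely carried over into the reweighted distribution.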
Results: Focusing on infant outcomes only, accounting for the correlation between preterm birth and birth weight on infant survival does not shift the ultimate cost-effectiveness decision; however, the shift in the cost space is approximately $25,000, which could affect decisions for other problems. Overall, preserving the correlation structure between these variables shows a clear benefit in lower cost and improved quality of life for any treatment over lack of treatment.
Conclusions: Our research shows that preserving the correlation structure of infant outcomes when assessing treatment outcomes is important for cost-effectiveness analysis. This method allows for calibration of non-linear multivariate distributions with preservation of unusual correlation structures while matching to known marginals. This approach could have a variety of applications extending to other types of research involving infant outcomes.
Keywords: non-linear correlation, calibration, cost-effectiveness, opioid use disorder, maternal OUD, infant outcomes
OUD Treatment Outcomes on Infant
A discrete choice experiment to elicit preferences for the choice of making a clinical negligence claim: views of those who have experienced harm
PP-202 Quantitative Methods and Theoretical Developments (QMTD)
Tara Wickramasekera1, Anju Keetharuth1, Arne Risa Hole2, Donna Rowen1, Allan Wailoo1
1School of Health and Related Research, University of Sheffield, Sheffield S14DA, UK
2Department of Economics, Universitat Jaume I, Castellón de la Plana, Spain
Purpose: The rising volume of clinical negligence claims remains a concern. This study assessed stated preferences for the factors that influence patients to make a clinical negligence claim against the NHS, using a discrete choice experiment (DCE).
Methods: Participants were eligible to complete the survey if they, or a close family member, had experienced harm while receiving treatment from the National Health Service (NHS); they were recruited via an online panel. The DCE used a single-profile design in which participants indicated whether they wished to make a claim for compensation (yes/no), with their choice based on the patient safety incident that actually happened. Information on the severity of the harm caused was used to allocate participants to scenarios featuring low or high compensation amounts. Attribute and scenario wording and task framing were piloted and refined using qualitative interviews with 10 participants. The DCE survey data were modelled using logistic regression. The probability of making a claim for different scenarios was estimated using marginal effects. Preference heterogeneity was examined using latent class analyses.
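As a sketch of the modelling step, a logistic model of the yes/no claim choice can be fitted and a marginal effect computed as the average difference in predicted probabilities when one attribute is switched on versus off. The attributes, coefficients, and data below are simulated placeholders, not the study's survey data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000
apology = rng.integers(0, 2, n)            # hypothetical binary attribute
high_comp = rng.integers(0, 2, n)          # hypothetical binary attribute
X = np.column_stack([np.ones(n), apology, high_comp])
true_beta = np.array([-0.2, -0.8, 0.9])    # assumed: an apology reduces claiming
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))   # simulated choices

def nll(beta):
    # Logistic-regression negative log-likelihood
    eta = X @ beta
    return np.sum(np.log1p(np.exp(eta)) - y * eta)

beta_hat = minimize(nll, np.zeros(3), method="BFGS").x

def mean_prob(apol):
    # Average predicted claim probability with the apology attribute fixed
    Xa = X.copy()
    Xa[:, 1] = apol
    return (1 / (1 + np.exp(-(Xa @ beta_hat)))).mean()

ame_apology = mean_prob(1) - mean_prob(0)  # average marginal effect of an apology
```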
Results: A total of 1029 participants responded to the survey. The most common incidents were harm related to a wrong or missed diagnosis (19%) and delay in treatment (18%). The marginal effects in Figure 1 show that if participants were satisfied that the NHS had taken appropriate measures to prevent the incident from happening again, and if they received an apology, the probability of claiming decreased. The attribute levels that increased respondents’ probability of claiming were the compensation amount, a high chance of receiving compensation, and the length of time to receive a decision. Latent class models assessing preference heterogeneity identified groups of participants with distinct preferences regarding their willingness to sue the NHS. Respondents were less likely to make a claim if their overall satisfaction with the NHS was high.
Figure 1.
Probability to claim for each attribute level (marginal effects)
Marginal effects show the increased or decreased probability of claiming for each attribute level, compared to a base-case scenario. In the base-case scenario, the respondents did not receive an apology or explanation; a detailed investigation was not carried out; they were not satisfied that the NHS had taken appropriate measures to prevent the incident from happening again; the claim process would not hold those responsible for the incident to account; making a claim was complicated and a hassle; it would take 5 years (route low)/10 years (route high) to receive a decision; there was a low chance of getting compensation; the compensation was £5,000 (route low)/£100,000 (route high); and the claim was made using the legal scheme. Larger effects indicate attribute levels that are relatively more important to respondents when deciding to make a claim.
Conclusions: The results suggest that the actions of the NHS following a patient safety incident can have a large impact on respondents’ probability of choosing to make a claim. The NHS response could be improved by being more upfront and honest about what happened and by providing adequate support to patients.
Keywords: discrete choice experiment, latent class analyses, mixed methods
Evaluating Heterogeneity and Treatment Outcomes of a Tumor-Agnostic Drug using Bayesian Hierarchical Models
PP-203 Quantitative Methods and Theoretical Developments (QMTD)
Yilin Chen1, Josh J Carlson1, Anirban Basu1, Lurdes Inoue2
1CHOICE Institute, School of Pharmacy, University of Washington, Seattle, USA
2Department of Biostatistics, School of Public Health, University of Washington, Seattle, USA
Purpose: Bayesian hierarchical models (BHMs) are particularly well suited to dealing with heterogeneity in the treatment effect of a tumor-agnostic drug (TAD), because they borrow information on treatment effects across tumors, thereby enhancing precision in estimates for cohorts with small sample sizes. We proposed using a BHM to assess heterogeneity and improve estimates of tumor-specific treatment outcomes, which are crucial for health-care decision-making.
Methods: We assumed a three-level BHM for treatment effects within the tumor types. We developed two models to evaluate the objective response rate (ORR) and the median progression-free survival (mPFS). We estimated the posterior outcomes for each tumor type, the pooled effect across tumor types, as well as the predictive outcomes in an unrepresented, new cancer type from the same population, using published basket trial evidence from KEYNOTE-158 and KEYNOTE-164. We assumed a normal distribution for the overall drug effect across all tumor sites and a uniform distribution for the between-tumor standard deviation. We used weakly informative priors in the base-case analysis. Prior predictive checks were conducted to assess the appropriateness of the priors. Posterior distributions, medians, and 80% and 95% posterior credible intervals (CrIs) were obtained. We performed sensitivity analyses with various priors to account for uncertainty in the prior specification.
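The "borrowing of information" at the heart of a BHM can be sketched with a fixed-hyperparameter approximation: tumor-specific response rates on the logit scale are shrunk toward a precision-weighted common mean, with cohorts that have less data shrunk more. This is not the full three-level model with the priors described above, and the per-tumor counts are illustrative, not KEYNOTE data.

```python
import numpy as np

# Hypothetical per-tumor responder counts and cohort sizes
responders = np.array([26, 8, 7, 9, 5, 28, 4])
n = np.array([49, 19, 19, 22, 15, 83, 17])
p_hat = responders / n                          # raw per-tumor ORRs
logit = np.log(p_hat / (1 - p_hat))
se2 = 1 / responders + 1 / (n - responders)     # delta-method variance of logit

mu = np.average(logit, weights=1 / se2)         # precision-weighted pooled mean
tau2 = 0.25                                     # assumed between-tumor variance
shrink = tau2 / (tau2 + se2)                    # weight on each cohort's own data
post_logit = shrink * logit + (1 - shrink) * mu # partial pooling
post_orr = 1 / (1 + np.exp(-post_logit))        # shrunken tumor-specific ORRs
```

Small cohorts (large `se2`) get small `shrink` and are pulled strongly toward the pooled mean, which is exactly how the BHM reduces uncertainty in sparse tumor types.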
Results: The pooled median posterior ORR across seven cancers was 38.8% (95% CrI: 26.1%-52.3%) and the predictive probability of response in a new cancer was 38.8% (95% CrI: 12.7%-72.6%). The highest median posterior ORR was observed in endometrial (51.3%, 95% CrI: 37.5%-65.6%), followed by gastric (42.2%, 95% CrI: 28.3%-59.3%), small intestine (40.1%, 95% CrI: 25.1%-58.2%), cholangiocarcinoma (39.7%, 95% CrI: 25.2%-56.6%), ovarian (36.6%, 95% CrI: 19.4%-54.1%), colorectal (34.2%, 95% CrI: 26.4%-42.3%), and pancreatic cancers (28.6%, 95% CrI: 12.2%-43.9%). The pooled median mPFS was 5.62 months (95% CrI: 3.59-7.80) and the predictive median was 5.59 months (95% CrI: 0.14-11.34). Similarly, endometrial cancer had the longest posterior median mPFS at 10.07 months (95% CrI: 6.54-13.63), followed by gastric (7.69, 95% CrI: 3.54-12.10), small intestine (7.1, 95% CrI: 3.10-11.31), cholangiocarcinoma (4.48, 95% CrI: 2.19-6.78), colorectal (4.22, 95% CrI: 2.8-5.63), ovarian (2.63, 95% CrI: 1.02-4.23), pancreatic (2.3, 95% CrI: 1.07-3.55), and brain cancers (1.23, 95% CrI: 0.38-2.08).
Conclusions: Assuming exchangeable effects of TAD across cancers, BHM can be useful for studying TAD by reducing the uncertainty in treatment effect estimates through borrowing of information. Both pooled and tumor-specific ORR and PFS posterior estimates may be useful in medical decision making and health technology assessment.
Keywords: Bayesian hierarchical model, tumor-agnostic drug, objective response rate, progression-free survival, heterogeneity
Figure 1.
Posterior distributions of ORR and median PFS
The loss in health outcomes produced by adopting heuristic approaches for dichotomizing diagnostic tests
PP-204 Quantitative Methods and Theoretical Developments (QMTD)
Yuli Lily Hsieh1, Nicolas A Menzies2, Ankur Pandya3
1Interfaculty Initiative in Health Policy, Harvard University; Center for Health Decision Science, Harvard School of Public Health
2Department of Global Health and Population, Harvard School of Public Health; Center for Health Decision Science, Harvard School of Public Health
3Department of Health Policy and Management, Harvard School of Public Health; Center for Health Decision Science, Harvard School of Public Health
Purpose: Clinicians often use dichotomous tests to guide treatment. For tests that are not naturally dichotomous, a cutoff (i.e., positivity criterion) is assigned, and sensitivity and specificity can be calculated. Per health decision science methods, the optimal positivity criterion (OPC) is defined by the likelihood ratio of [(1-P(D))*(CTN - CFP)] / [P(D)*(CTP - CFN)], which depends on disease prevalence P(D), and consequences of true positive (CTP), false negative (CFN), true negative (CTN) and false positive (CFP). However, heuristics such as the Youden Index, which selects the cutoff by maximizing the sum of sensitivity and specificity, are still commonly used in clinical research studies. In this study, we demonstrated how the Youden Index can lead to incorrect threshold selection and sub-optimal health outcomes, using both simulated data and a real-world case study.
Methods: We used the ‘ROCR.simple’ dataset in the R package ‘ROCR’ to construct a receiver operating characteristic curve, and simulated data for P(D), (CTP-CFN), and (CTN-CFP) to estimate expected health outcomes at different thresholds. We calculated the loss in expected health outcomes when test results were dichotomized using the Youden Index versus the OPC. We examined how the magnitude of loss varied with P(D) and decision consequences. We also used real-world data on a novel diagnostic for incipient TB disease (Zak et al. 2016 Lancet) and examined its application in TB contacts, to illustrate the potential loss in QALYs when the criteria for incipient TB diagnosis were based on the Youden Index.
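The contrast between the two cutoff rules can be sketched numerically: both cutoffs are chosen on the same empirical ROC, and the loss is the gap in expected outcome between them. The test scores, prevalence, and consequence values below are simulated/hypothetical stand-ins, not the ‘ROCR.simple’ or TB data.

```python
import numpy as np

rng = np.random.default_rng(2)
neg = rng.normal(0.0, 1.0, 5000)          # test scores in the non-diseased
pos = rng.normal(1.2, 1.0, 5000)          # test scores in the diseased
cuts = np.linspace(-3.0, 4.0, 300)
sens = np.array([(pos >= c).mean() for c in cuts])
spec = np.array([(neg < c).mean() for c in cuts])

p = 0.03                                  # low disease prevalence
ctp, cfn, ctn, cfp = 1.0, 0.0, 1.0, 0.80  # illustrative consequences (false
                                          # positives are assumed costly)

# Expected health outcome at each candidate cutoff
eo = (p * (sens * ctp + (1 - sens) * cfn)
      + (1 - p) * (spec * ctn + (1 - spec) * cfp))

iy = np.argmax(sens + spec - 1)           # Youden Index cutoff
youden_cut = cuts[iy]
opc_cut = cuts[np.argmax(eo)]             # outcome-maximizing (OPC-style) cutoff
loss = eo.max() - eo[iy]                  # health forfeited by the heuristic
```

With a rare disease and costly false positives, the outcome-maximizing cutoff sits well above the Youden cutoff, and using the heuristic gives up expected outcome, mirroring the abstract's low-P(D), high-CFP setting.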
Results: The Youden Index approach led to a greater loss in health outcomes in low P(D), high CFP and high P(D), high CFN settings (Figure). When P(D) approaches 0.5, the loss increases as the ratio (CTN - CFP)/(CTP - CFN) departs from 1. In the TB case study, using the Youden Index-based positivity criterion led to a loss of 26 expected discounted QALYs, assuming the novel TB test was used to screen 1000 close contacts of TB cases (P(D)=0.03, mean age=30 years).
Conclusions: Using a test with a positivity criterion that ignores disease prevalence and decision consequences can yield suboptimal health outcomes. When possible, raw test results and corresponding disease labels of new technologies should be published to enable identification of the context-specific OPC, so that the potential utility of the technology can be properly evaluated.
Keywords: Optimal positivity criterion, dichotomous test, receiver operating characteristic curve, Youden Index
Loss in expected health outcomes as a function of disease prevalence and decision consequences.
Vaccination as Hearing Loss Prevention: Model-Projected Clinical and Economic Impacts of PCV-10 Vaccination Scale-Up in Nigeria
PP-205 Applied Health Economics (AHE)
Austin Ayer1, Ethan D. Borre2, Titus Ibekwe3, Siddharth Dixit4, Jacqueline Vicksman5, Danah Younis5, Anna Zolotor5, Mohini Johri5, Megan Knauer5, Brian Wahl6, Mira Johri7, Bolajoko Olusanya8, Debara L. Tucci9, Blake S. Wilson10, Osondu Ogbuoji11, Gillian D. Sanders Schmidler12
1Duke University School of Medicine, Durham, NC, USA; Duke-Margolis Center for Health Policy, Duke University, Durham NC, USA
2Department of Population Health Sciences, Duke University School of Medicine, Durham, NC, USA; Duke-Margolis Center for Health Policy, Duke University, Durham NC, USA
3Department of Ear, Nose and Throat, Head & Neck, University of Abuja Teaching Hospital, Gwagwalada Abuja, Nigeria
4Duke Global Health Institute, Duke University, Durham, NC, USA; Center for Policy Impact in Global Health, Duke Global Health Institute, Durham NC, USA
5Duke-Margolis Center for Health Policy, Duke University, Durham NC, USA
6Department of International Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
7École de santé publique, Université de Montréal, Montréal, QC, Canada
8Centre for Healthy Start Initiative, Lagos, Nigeria
9National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, USA
10Department of Head and Neck Surgery and Communication Sciences, Duke University School of Medicine, Durham, NC, USA; Duke Global Health Institute, Duke University, Durham, NC, USA; Department of Biomedical Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA; Department of Electrical & Computer Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
11Department of Population Health Sciences, Duke University School of Medicine, Durham, NC, USA; Duke-Margolis Center for Health Policy, Duke University, Durham NC, USA; Duke Global Health Institute, Duke University, Durham, NC, USA; Center for Policy Impact in Global Health, Duke Global Health Institute, Durham NC, USA
12Department of Population Health Sciences, Duke University School of Medicine, Durham, NC, USA; Duke-Margolis Center for Health Policy, Duke University, Durham NC, USA; Duke Clinical Research Institute, Duke University School of Medicine, Durham NC, USA
Purpose: To project the clinical and economic impacts of scaling up 10-valent pneumococcal conjugate vaccination (PCV-10) in Nigeria for the prevention of hearing loss.
Methods: We used a previously validated Markov microsimulation model of hearing loss in Nigeria (DeciBHAL-I) to compare two strategies: 1) current coverage of PCV-10 (≥ 1-dose coverage 62%), and 2) target coverage of 90% ≥ 1-dose PCV-10. Persons were simulated from birth until death and experienced yearly age- and sex-specific probabilities of acquiring hearing loss (0.1-2.7%/year) and subsequent treatment, with hearing loss severity assigned by etiology (meningitis, 68 dB Pure Tone Average). Incorporating age- and setting-specific proportions of disease due to PCV-10 serotypes (14.0% of meningitis, 0.7-2.0% of acute otitis media [AOM]), as well as vaccine serotype effectiveness (94.5% against meningitis, 67.1% against AOM), vaccination scale-up reduced the incidence of meningitis by 3.7% and of AOM by 0.13-0.38% in persons aged ≥1 year. Quality-adjusted life-years (QALYs) were assigned according to hearing loss severity and treatment status. Costs included vaccine dose and delivery ($2.62/dose, 2020 USD), hearing aids ($295), and treatment of chronic suppurative otitis media ($421). Outcome measures were total hearing loss cases, lifetime costs, and lifetime QALYs.
Results: Implementing PCV vaccination to target coverage decreased lifetime prevalence of hearing loss by 0.03%. This is projected to avert 2,135 lifetime cases of hearing loss per annual birth cohort in Nigeria. Average lifetime total per-person discounted costs were $467.70 for current coverage and $468.13 for target coverage. Average lifetime discounted QALYs were 18.8220 for current coverage and 18.8221 for target coverage. PCV-10 scale up was cost-effective or cost-saving relative to current coverage in 60% of model runs, though estimates were subject to significant uncertainty given the small effect size relative to overall hearing loss incidence. The amount of hearing loss averted was sensitive to variations in serotype distribution, current coverage, and vaccine effectiveness.
Conclusions: The expansion of PCV-10 vaccination to 90% coverage in Nigeria may prevent approximately 2,100 cases of hearing loss each year and is likely to be cost-effective even when considering only the benefits related to hearing loss, though larger model runs are ongoing to better quantify uncertainty. Other health benefits from pneumococcal vaccination are well known and further support the cost-effectiveness of PCV-10 expansion in Nigeria.
Keywords: Cost-effectiveness, vaccination, hearing loss
Hearing Loss Prevalence, Lifetime Costs, and Lifetime Quality-Adjusted Life-Years by Vaccination Strategy
| Strategy | Hearing loss Prevalence Over a Lifetime | Lifetime Per-Person Costs (2020 USD), discounted | Lifetime Per-Person QALYs, discounted |
|---|---|---|---|
| Current Coverage: 62% ≥ 1 dose PCV-10 coverage | 37.96% | $467.70 | 18.8220 |
| Target Coverage: 90% ≥ 1 dose PCV-10 coverage | 37.93% | $468.13 | 18.8221 |
| Difference | -0.03% | $0.43 | 0.0001 |
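The per-person differences reported in the table above imply an ICER for target versus current coverage. A quick check, noting that both differences are heavily rounded, so this is only an order-of-magnitude reading:

```python
# Rounded per-person differences from the table (target minus current coverage)
delta_cost = 468.13 - 467.70      # USD per person
delta_qaly = 18.8221 - 18.8220    # QALYs per person
icer = delta_cost / delta_qaly    # dollars per QALY gained, at this rounding
```

At the reported rounding this works out to roughly $4,300 per QALY, consistent with the abstract's finding that scale-up was cost-effective or cost-saving in a majority of model runs.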
Cost Effectiveness of Hepatitis C Treatment in Patients with Hepatocellular Carcinoma
PP-206 Applied Health Economics (AHE)
Kevin Huynh, William W.l. Wong
Department of Pharmacy, University of Waterloo School of Pharmacy, Kitchener, Canada
Purpose: Hepatocellular carcinoma (HCC) is one of the most common tumours worldwide, and the majority of HCC cases occur in people with prior chronic liver disease, with hepatitis C virus (HCV) infection being one of its major causes. The introduction of direct-acting antiviral (DAA) treatment has revolutionized the care of patients with HCV infection, but DAA treatment in the HCC population is less well understood. Emerging data suggest that DAA therapy is associated with increased survival in patients with a history of HCC. These studies are encouraging in highlighting the safety and efficacy of DAA therapy in the HCC population, but the cost-effectiveness of treating HCC patients with DAA agents remains unknown and was analyzed in our study.
Methods: A cost-utility analysis (CUA) was performed using a state-transition model, comparing DAA treatment for HCV in HCC patients versus not treating HCV at all. The study cohort comprised individuals with an average age of 62 years, diagnosed with hepatitis C, who had achieved HCC complete response and had never initiated DAA therapy. The model implemented health states representing the progression and treatment of HCC: surgical resection, radiofrequency ablation, orthotopic liver transplant, HCC relapse, palliative care, and treatment with sorafenib. Transition probabilities between health states, as well as utilities and costs within health states, were primarily obtained from the literature. We used a Canadian provincial ministry of health perspective, a lifetime time horizon, and a discount rate of 1.5% per year. Sensitivity analyses were conducted to assess the robustness of the model.
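The state-transition machinery described above can be sketched as a simple Markov cohort trace: a cohort vector advanced by a transition matrix each yearly cycle, accumulating discounted costs and QALYs from per-state values. The states, probabilities, costs, and utilities below are illustrative placeholders rather than the study's calibrated inputs (which also include resection, transplant, and sorafenib states).

```python
import numpy as np

states = ["HCC complete response", "HCC relapse", "Dead"]
P = np.array([[0.88, 0.07, 0.05],   # annual transition probabilities
              [0.00, 0.75, 0.25],   # (each row sums to 1)
              [0.00, 0.00, 1.00]])
cost = np.array([4000.0, 30000.0, 0.0])   # illustrative annual cost per state
util = np.array([0.80, 0.55, 0.0])        # illustrative annual utility per state

cohort = np.array([1.0, 0.0, 0.0])        # everyone starts in complete response
disc = 1.015                              # 1.5% annual discount rate
total_cost = total_qaly = 0.0
for t in range(60):                       # lifetime-horizon approximation
    total_cost += cohort @ cost / disc**t
    total_qaly += cohort @ util / disc**t
    cohort = cohort @ P                   # advance the cohort one cycle
```

Running such a trace once with and once without a treatment effect on the transition probabilities, then taking the ratio of cost and QALY differences, is the basic mechanics behind the ICER reported below.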
Results: DAA treatment for HCV in HCC patients resulted in an additional 1.97 quality-adjusted life-years (QALYs) at an additional cost of $120,863, for an incremental cost-effectiveness ratio (ICER) of $61,201/QALY. The results were most sensitive to the hazard ratio of DAA therapy for those who underwent orthotopic liver transplant.
Conclusions: DAA treatment for HCV in HCC patients is potentially cost effective compared to not treating HCV at all when using a willingness-to-pay threshold of $100,000/QALY. The variability in the model is primarily dependent on the survival benefit, which is reflected in our sensitivity analysis. Policy makers should consider coverage for DAA treatment in appropriate HCC populations.
Keywords: Hepatocellular carcinoma, hepatitis C virus, direct-acting antiviral, cost-utility analysis, quality-adjusted life year, incremental cost-effectiveness ratio
Conceptual diagrams illustrating Markov health states
In each cycle, patients can stay at their current position, transition to another health state, or die. Patients are diagnosed based on their HCC symptoms and will be distributed to the appropriate treatment and followed up thereafter. Death can occur at any point throughout the model. OLT, orthotopic liver transplantation. HCC, hepatocellular carcinoma.
Measuring the cost-effectiveness of HIV self-testing using clinical trial data
PP-207 Applied Health Economics (AHE)
Md Hafizul Islam1, Ram Shrestha1, Jeffrey S. Hoch2, Paul G. Farnham1
1Division of HIV Prevention, Centers for Disease Control and Prevention, Atlanta, USA
2Division of Health Policy and Management, University of California, Davis, USA
Purpose: To estimate the cost-effectiveness of HIV self-testing considering new diagnoses as the primary outcome from a randomized controlled trial.
Methods: We used a published study reporting aggregated data from a 12-month longitudinal, 2-group (self-testing arm and control arm) randomized clinical trial enrolling 2665 men who have sex with men (MSM) who had never been diagnosed with HIV infection. The study participants were randomly assigned to the self-testing arm (1325), receiving HIV self-test kits by mail, and the control arm (1340), receiving information on community-based testing options. We used a net benefit regression framework to investigate the cost-effectiveness of HIV self-testing, which compared the incremental cost per new diagnosis with various decision makers’ willingness-to-pay (WTP) thresholds. We addressed the uncertainties both in estimating the incremental cost per new HIV diagnosis and in the decision makers’ WTP per new HIV diagnosis through the use of a cost-effectiveness acceptability curve (CEAC). We conducted a sensitivity analysis using bootstrapping to observe the effect of distributional assumptions.
Results: In the self-testing arm, 48 additional new HIV diagnoses were reported compared with the control arm, at an average additional cost of $145 (in 2016 U.S. dollars) per participant. The incremental cost-effectiveness ratio (ICER) of HIV self-testing was $9,365 per new diagnosis. The incremental net benefit (INB) analysis showed that the INB of HIV self-testing is positive as soon as the WTP per diagnosis is higher than the ICER and can reach as high as $1,097 at a WTP of $80,000 per new diagnosis. The CEAC showed that the probability of HIV self-testing being cost-effective is 0.61 at a WTP of $10,000 per new diagnosis, and the value is higher than 0.99 at a WTP of $80,000 per new diagnosis.
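The net-benefit logic can be sketched as follows: each participant gets a net benefit NB = λ·E − C at willingness-to-pay λ, the INB is the between-arm difference in mean net benefit, and the CEAC is traced by bootstrapping the sign of the INB across λ. The individual-level outcomes below are simulated stand-ins roughly on the trial's scale, not its actual microdata.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1300
# Hypothetical per-participant outcomes: new-diagnosis indicator and cost (USD)
e1 = rng.binomial(1, 0.036, n)          # self-testing arm, diagnosis
c1 = rng.normal(160.0, 60.0, n)         # self-testing arm, cost
e0 = rng.binomial(1, 0.018, n)          # control arm, diagnosis
c0 = rng.normal(15.0, 10.0, n)          # control arm, cost

def inb(lam):
    # Incremental net benefit at willingness-to-pay lam per new diagnosis
    return (lam * e1 - c1).mean() - (lam * e0 - c0).mean()

def ceac(lam, reps=500):
    # Probability self-testing is cost-effective at lam, via bootstrap
    wins = 0
    for _ in range(reps):
        i1 = rng.integers(0, n, n)
        i0 = rng.integers(0, n, n)
        if (lam * e1[i1] - c1[i1]).mean() - (lam * e0[i0] - c0[i0]).mean() > 0:
            wins += 1
    return wins / reps

icer = (c1.mean() - c0.mean()) / (e1.mean() - e0.mean())
```

By construction the INB crosses zero exactly at the ICER and grows linearly in λ beyond it, while the CEAC rises toward 1 as λ increases, which is the pattern the abstract reports.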
Conclusions: The incremental net benefit analysis and the CEAC based on trial outcomes suggest that HIV self-testing has the potential to be cost-effective for relatively low values of decision-makers’ WTP. Our results also indicate that a cost-effectiveness analysis considering longer-term outcomes, such as lifetime HIV treatment costs and quality-adjusted life years (QALYs), which require additional assumptions and data beyond those in the clinical trial, would be worth performing.
Keywords: Economic evaluation, net benefit regression, willingness to pay threshold, cost-effectiveness acceptability curve, HIV self-testing
Pembrolizumab versus Chemotherapy as First-Line Treatment of Advanced or Metastatic Bladder Cancer: A Cost-Utility Analysis
PP-208 Applied Health Economics (AHE)
Peter L Hoang, William W. L. Wong
University of Waterloo School of Pharmacy, Waterloo, Canada
Purpose: Metastatic urothelial carcinoma (bladder cancer) places a significant burden on healthcare systems worldwide. Cystectomy and chemotherapy regimens are the standard of care, but recent immune checkpoint inhibitor therapies (e.g., pembrolizumab) have emerged as alternatives that demonstrate better tolerability, at higher expense. Pembrolizumab has been reported to be cost-effective in multiple studies based on the relatively short-term KEYNOTE-052 study. The recent KEYNOTE-361 study, reporting longer-term efficacy outcomes, has since been published, so the cost-utility analysis needs to be updated to assess pembrolizumab's cost-effectiveness as a first-line therapy option.
Methods: A state-transition model was developed to conduct a cost-utility analysis of pembrolizumab compared to GC-based chemotherapy (gemcitabine with cisplatin or carboplatin) in the first-line setting for patients with an average age of 67, diagnosed with bladder cancer, who had not received prior treatment for inoperable locally advanced or metastatic bladder cancer, and who had positive PD-L1 status with a Combined Positive Score (CPS) ≥ 10. Health states related to treatment, adverse events (AEs), and bladder cancer progression status were implemented. The perspective was that of the Canadian provincial Ministry of Health. The model time horizon was 5 years, with health outcomes and costs discounted at 1.5% per year. The efficacy inputs of the model were informed by the KEYNOTE-361 study. Costs and utilities were obtained from the literature. Sensitivity analyses were conducted to evaluate the impact of uncertainty in all model parameters.
Results: Pembrolizumab was associated with a quality-adjusted life-year (QALY) increase of 0.14 and a cost increase of $125,077 which translated to an incremental cost-effectiveness ratio (ICER) of $899,756 (Table 1). For the sensitivity analysis, parameters related to the probability of serious AEs from chemotherapy and pembrolizumab, disutilities of certain AEs (e.g., blood and lymphatic system disorders, decreased platelet count), and cost of the progressed health state were the variables which had the greatest impact on the ICER.
Conclusions: Our base-case analysis found that pembrolizumab was not cost-effective compared to current GC-based chemotherapy using a willingness-to-pay threshold of $150,000/QALY based on the latest evidence. Given its high ICER, use of pembrolizumab as a first-line therapy should not be recommended, except for patients unable to tolerate GC-based chemotherapy.
Keywords: Pembrolizumab, chemotherapy, metastatic urothelial carcinoma, bladder cancer, cost utility analysis, health economics
Table 1.
Scenario Analyses for Pembrolizumab vs. Chemotherapy
| Therapy | Total Cost ($) | Effectiveness (QALYs) | Incremental Cost ($) | Incremental QALY | ICER ($/QALY) |
|---|---|---|---|---|---|
| Chemotherapy | 100,542.99 | 1.37 | | | |
| Pembrolizumab | 225,620.23 | 1.51 | 125,077.24 | 0.14 | 899,755.62 |
A summary of the base case analysis comparing chemotherapy and pembrolizumab in the treatment of metastatic urothelial carcinoma. Abbreviations: QALY, quality-adjusted life years; ICER, incremental cost-effectiveness ratio
Cost analysis of an intervention to retain patients in HIV clinical care
PP-209 Applied Health Economics (AHE)
Ram K. Shrestha1, Carla A. Galindo1, Cari Courtenay Quirk1, Camilla Harshbarger1, Iddrisu Abdallah2, Vincent C Marconi3, Michelle Dalla Piazza4, Shobha Swaminathan4, Charurut Somboonwit5, Megan A Lewis6, Olga A Khavjou6
1Division of HIV Prevention, Centers for Disease Control and Prevention
2SeKON
3Emory University School of Medicine, Atlanta VA Medical Center
4Rutgers New Jersey Medical School
5University of South Florida
6RTI International
Purpose: To assess the costs of Positive Health Check (PHC), a digital video-based behavioral intervention, for improving health outcomes of people with HIV by improving retention in care.
Methods: The PHC study was a randomized trial evaluating the effectiveness of a highly tailored interactive video counseling intervention delivered in four HIV primary care clinics in the U.S. Eligible patients were randomized to either the PHC intervention or control arm. Control arm participants received standard of care (SOC), and intervention arm participants received SOC plus PHC. The intervention was delivered on computer tablets in the clinic waiting rooms.
The outcomes included three retention-in-care measures: >1 visit in each 6-month interval within 12 months (RIC-A), keeping 2 visits separated by >90 days (RIC-B), and no >6-month visit gaps (RIC-C). A microcosting approach was used to assess the intervention costs, including labor hours, materials and supplies, office overhead, and equipment, separate from the SOC costs. The analysis reported the annual total intervention cost and average cost-effectiveness ratios (cost per person retained in care) in 2019 US dollars.
Results: A total of 397 (range across sites [r]: 95–102) participants were enrolled in the PHC intervention arm between February 2018 and March 2019. At the end of each participant’s 12-month follow-up assessment, the number of participants retained in care was 257 (r: 54–73), 267 (r: 60–72), and 253 (r: 53–72) based on the RIC-A, RIC-B, and RIC-C measures, respectively. The overall annual intervention cost was $402,274 (r: $65,581–$124,629). Our estimated average cost per participant retained in care was consistent across all three measures: $1,565 (r: $979–$2,308) under RIC-A, $1,507 (r: $937–$2,077) under RIC-B, and $1,590 (r: $950–$2,351) under RIC-C. Costs would be lower if costs related to recruitment and re-engagement efforts were excluded.
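The average cost-effectiveness ratios above are simply the annual intervention cost divided by the number of participants retained under each definition. A quick reproduction from the figures reported in this abstract:

```python
# Figures reported in the abstract (overall, across the four sites)
total_cost = 402_274                                 # annual PHC cost, USD
retained = {"RIC-A": 257, "RIC-B": 267, "RIC-C": 253}
cost_per_retained = {k: total_cost / v for k, v in retained.items()}
# ≈ $1,565 (RIC-A), $1,507 (RIC-B), $1,590 (RIC-C), matching the text
```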
Conclusions: The costs of this interactive video counseling intervention are comparable to other interventions to retain or re-engage patients in HIV care. The results provide useful information for program planning and scaling up clinic-based video counseling interventions to improve HIV care outcomes.
Keywords: HIV treatment, video doctor, retention, viral suppression, randomized trial, cost-effectiveness
Using qualitative methods to inform health technology assessment (HTA): A review of submissions to NICE and CADTH
PP-210 Applied Health Economics (AHE)
Shelagh M Szabo, Neil Hawkins, Evi Germeni
Health Economics and Health Technology Assessment (HEHTA), Institute of Health and Wellbeing, University of Glasgow, Glasgow, UK
Purpose: Qualitative research methods allow in-depth exploration of the patient experience and can help provide context to inform healthcare decision-making. Despite expanding frameworks for incorporating patient evidence into HTA – including from agencies such as NICE and CADTH – how extensively qualitative methods are used is unclear. The objective was to characterize the extent and quality of qualitative data submitted to NICE and CADTH for HTA.
Methods: NICE and CADTH submissions from 09/2019 to 08/2021 were reviewed. Submission characteristics and features of patient evidence included within submissions were extracted. Details of sample size, data collection and analytic methods, topics covered, and whether data were published were summarized. The quality of reporting of qualitative data collection and analysis methods within submissions was assessed using the CASP checklist, and the percentage of submissions scoring >7/10 was calculated.
Results: We reviewed 143 submissions (67% of all submissions from 09/2019 to 08/2021). Sixteen (11.2%) did not include direct patient evidence. Qualitative data were presented in 59 (41.3%) submissions, 46 described mixed-methods approaches (32.2%), and 22 (15.4%) included quantitative patient evidence only. Forty-eight of the 59 submissions (81.3%) including qualitative data used qualitative data collection methods; one-on-one interviews (occurring in 91.5% of submissions) and focus groups (25.4%) were most frequent. Eleven of the 59 submissions (19.6%) also employed qualitative analytic methods, and all used forms of thematic analysis. Seven (63.6%) of these 11 submissions satisfied more than 70% of the criteria on the CASP checklist. The most frequent topics covered in qualitative assessment were: patient burden (in 100% of submissions), impact of symptoms/pain (100%), health-related quality-of-life impact (100%) and disease experience (98.3%). The qualitative data included within four submissions (6.8%) were presented at conferences, and one study (1.7%) published as a peer-reviewed manuscript.
Conclusions: In the HTA submissions reviewed, qualitative data collection methods were relatively commonly used, but qualitative analytic methods were used infrequently. Reporting tended to be brief, and peer-reviewed publication was rare. At present, there is little guidance from decision-makers about the types of qualitative patient data most useful to submit, and how these can inform HTA. While interest in providing patient evidence for HTA is increasing, the focus needs to be on collecting and analyzing these data in a systematic and rigorous way to help ensure their usefulness and credibility.
Keywords: qualitative research, health technology assessment
Cost-effectiveness of endovascular thrombectomy for acute ischemic stroke with time of onset between 6 and 24 hours in China: Results from the AURORA study
PP-211 Applied Health Economics (AHE)
Shi Feng Hao1, He Zi Xuan2, Su Hang1, Wang Lin3, Han Sheng1
1International Research Center for Medicinal Administration, Peking University, Beijing, China
2School of Pharmaceutical Sciences, Peking University, Beijing, China
3School of International Pharmaceutical Business, China Pharmaceutical University, Nanjing, China
Purpose: Recently, an individual patient data meta-analysis (AURORA) demonstrated the benefit of endovascular thrombectomy performed between 6 and 24 hours after onset in anterior circulation stroke. Current economic evidence in China supports the intervention only within 6 hours, but extended endovascular thrombectomy treatment windows may result in better long-term outcomes for a larger cohort of patients.
This study aimed to evaluate the cost-effectiveness of endovascular thrombectomy in addition to medical treatment versus medical treatment alone performed beyond 6 hours from stroke onset in China, the world’s largest low- and middle-income country.
Methods: A model combining a decision tree and a Markov model was developed to assess the cost-effectiveness of endovascular thrombectomy plus medical therapy versus medical therapy alone over a 20-year horizon from the Chinese healthcare system’s perspective. Efficacy and safety data were extracted from the AURORA study. Local costs were derived from publications and open-access databases; utilities were derived from published literature. One-way and probabilistic sensitivity analyses were carried out to verify the robustness of the model.
Results: Over a 20-year period, the incremental cost per QALY of endovascular thrombectomy was CNY 63,638.32 ($9,386.18) when performed between 6 and 24 hours from onset, CNY 93,244.30 ($13,752.85) between 6 and 12 hours, and CNY 50,723.89 ($7,481.40) between 12 and 24 hours. The probabilistic sensitivity analysis demonstrated that endovascular thrombectomy had a 99.86% (6-24 h), 88.62% (6-12 h), and 100.00% (12-24 h) probability of being cost-effective at a willingness-to-pay threshold of CNY 121,464 (1.5× gross domestic product per capita of China in 2021) per quality-adjusted life-year.
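The decision rule underlying these results is that an intervention is deemed cost-effective when its incremental cost-effectiveness ratio (ICER, incremental cost per QALY gained) falls below the willingness-to-pay threshold. A small sketch of that comparison using the ICERs and threshold reported above (this is an illustration of the rule, not the study's model):

```python
# Cost-effectiveness decision rule: ICER <= willingness-to-pay threshold.
# ICER values and the threshold are taken from the abstract's Results.

WTP_CNY = 121_464  # 1.5 x China's 2021 GDP per capita, per QALY

icers_cny = {
    "6-24 h": 63_638.32,
    "6-12 h": 93_244.30,
    "12-24 h": 50_723.89,
}

for window, icer in icers_cny.items():
    verdict = "cost-effective" if icer <= WTP_CNY else "not cost-effective"
    print(f"{window}: ICER CNY {icer:,.2f}/QALY -> {verdict}")
```

All three treatment windows fall below the threshold, consistent with the high probabilities of cost-effectiveness reported in the probabilistic sensitivity analysis.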
Conclusions: The results of this study demonstrate that endovascular thrombectomy remains cost-effective when performed up to 24 hours from acute ischemic stroke symptom onset, suggesting that this intervention should be implemented in China on both quality-of-life and economic grounds.
Keywords: Endovascular thrombectomy, Cost-effectiveness analysis, Anterior circulation stroke, Acute ischemic stroke
Cost-effectiveness and utility analyses of a detection test for diagnosing Extrapulmonary Tuberculosis in a high TB and HIV burden setting, Tanzania
PP-212 Applied Health Economics (AHE)
Shoaib Hassan1,2, Ole Frithjof Norheim4, Tehmina Mustafa5, Bjarne Robberstad3
1Public Health Modeling; Yale Institute for Global Health, Yale School of Public Health
2Centre for International Health, Department of Global Public Health and Primary Care, University of Bergen, Bergen, Norway
3Health Economics, Leadership, and Translational Ethics Research
4Bergen Centre for Ethics and Priority Setting (BCEPS)
5Department of Thoracic Medicine, Haukeland University Hospital, Bergen, Norway
Purpose: Globally, Extrapulmonary Tuberculosis (EPTB) patients account for approximately 15-20% of the roughly 10 million annual tuberculosis patients. Nonspecific illness presentation and suboptimal tests hinder prompt diagnosis, slowing shared decision making to initiate treatment. Given the EPTB diagnostic limitations of the currently recommended option (Xpert), a cost-effectiveness analysis of the MPT64 test against Xpert was performed.
Methods: The test effectiveness, cost-effectiveness, and cost-utility analyses were conducted as part of a multi-centered prospective cohort study. A 1-year follow-up of EPTB-HIV patients included collection of socio-economic data, clinical follow-ups, and EQ-5D VAS information before and after diagnostic and treatment interventions. Cost-effectiveness and cost-utility analyses were performed from the provider's perspective comparing the new diagnostic test (MPT64), Xpert, and no intervention.
Results: Hybrid models combining a simple decision tree and a Markov model analyzed combinations of the MPT64, Xpert, and no-intervention strategies. We also modelled EPTB multidrug resistance (MDR) incidence and its influence on the cost-utility analyses. The model using all three strategies showed that the MPT64 test would be cost-effective, with an ICER of US$221.6 per QALY gained. The MPT64 test was also cost-effective, with an ICER of US$29 per QALY gained, in a second model with Xpert alone as the baseline. A third model that excluded MDR showed MPT64 cost-effective against Xpert, with an ICER of US$27 per QALY gained. One-way sensitivity analyses showed the following parameters influenced the ICER, in order of influence: cost of EPTB treatment, cost of the MPT64 and Xpert tests, patient age, cost of antiretrovirals, and patients’ compliance with treatment. A willingness-to-pay threshold of US$700 per QALY was used for cost-effectiveness.
Conclusions: To our knowledge, this is the first study to utilize primary data from a prospective cohort of EPTB patients for such models. The strength of the study lies in the data collected from our larger project studying the new diagnostic test. While the Xpert test is well proven for pulmonary TB, this is not the case for EPTB. Moreover, the challenges of wider rollout of GeneXpert (limited availability, operational utility, and relatively high resource requirements) are well known. Against this background, the MPT64 test, despite requiring implementation by trained health staff, appears cost-effective. These health outcomes and policy research findings may inform existing diagnostic guidelines, supporting better identification of patients and improved treatment outcomes.
Keywords: QALYs, Extrapulmonary Tuberculosis, Cost-effectiveness, Health Technology, Diagnostic tool
A novel cost analysis of the implementation of a complex colorectal cancer screening program
PP-213 Applied Health Economics (AHE)
Meghan C. O'leary1, Karl T. Johnson2, Kristen Hassmiller Lich2, Alexis A. Moore3, Alison T. Brenner4, Daniel S. Reuland4, Renée M. Ferrari3, Deeonna E. Farr5, Stephanie B. Wheeler1
1Department of Health Policy and Management, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
2Department of Health Policy and Management, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
3Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
4Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
5College of Health and Human Performance, East Carolina University, Greenville, NC, USA
Purpose: Using a combination of methods, we estimated the labor and non-labor costs associated with implementing a complex multi-level and multicomponent intervention designed to improve colorectal cancer (CRC) screening among federally-qualified health center patients.
Methods: We used process flow diagramming to prospectively identify intervention components and implementation activities requiring costs and develop novel methods for estimating those costs. The intervention consisted of three primary components: a registry to identify eligible patients due for CRC screening, mailed fecal immunochemical testing (FIT) outreach, and navigation of patients with abnormal FIT to follow-up colonoscopy. For the two sites where this intervention was implemented, we created a tracking tool to estimate the time required for registry activities, such as developing the query and running reports. We conducted two rounds of time-and-motion observations of all mailed FIT outreach activities to estimate the implementer time required per patient. To estimate navigation time, we used detailed navigator call logs, which prospectively documented all interactions with and on behalf of patients requiring navigation to colonoscopy. We then used mean wages from the Bureau of Labor Statistics to estimate the labor costs for each activity. Finally, we used a supply inventory to track materials used and patient interviews to identify any out-of-pocket costs and time incurred by patients to complete CRC screening and follow-up.
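The labor-cost step of this microcosting approach amounts to multiplying observed activity time by a wage rate. A minimal sketch, using activity times reported in this abstract's Results but a hypothetical hourly wage (the study used Bureau of Labor Statistics mean wages, which are not reproduced here):

```python
# Microcosting labor-cost sketch: labor cost = time spent x hourly wage.
# Activity times are from the abstract; WAGE is a hypothetical placeholder,
# NOT a Bureau of Labor Statistics figure.

def labor_cost(minutes: float, hourly_wage: float) -> float:
    """Labor cost in dollars for an activity of the given duration."""
    return (minutes / 60.0) * hourly_wage

WAGE = 25.00  # hypothetical hourly wage, USD

# Preparing and mailing one FIT kit packet took 2.6-5.1 minutes.
lo, hi = labor_cost(2.6, WAGE), labor_cost(5.1, WAGE)
print(f"FIT packet mailing: ${lo:.2f}-${hi:.2f} per packet")

# Median navigation time per navigated patient was 45.0 minutes.
print(f"Navigation (median): ${labor_cost(45.0, WAGE):.2f} per patient")
```

The same per-minute costing extends naturally to the registry and navigation activities tracked in the study.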
Results: Across sites, the time required for registry activities ranged from 2 to 3 hours monthly. For mailed FIT outreach activities, the most time-intensive activity was preparing and mailing FIT kit packets, which ranged from 2.6 to 5.1 minutes per packet mailed. Among navigated patients, the median time spent per person was 45.0 minutes (range 24.0-169.0 minutes), with differences by insurance status (e.g., median time of 79.5 vs. 38.5 minutes for uninsured and privately insured patients, respectively). Patients reported minimal time and costs; the most resource-intensive activity for patients was completing the bowel preparation.
Conclusions: Identifying and establishing methods for tracking all possible costs upfront during implementation planning allowed for more comprehensive data collection and cost analysis. Most activities required minimal staff time per patient reached, suggesting that scale-up and spread may be feasible. Patient navigation is a more time-intensive component of the intervention, though there are opportunities to further tailor navigation to individual patient needs.
Keywords: cost analysis, microcosting, colorectal cancer screening, implementation research
Development of an intervention to support people with Chronic Obstructive Pulmonary Disease make decisions about Pulmonary Rehabilitation
PP-214 Decision Psychology and Shared Decision Making (DEC)
Amy C Barradell1, Hilary L Bekker4, Noelle Robertson3, Linzy Houchen Wolloff2, Kim Marshall Nichols2, Sally J Singh2
1Department of Respiratory Sciences, University of Leicester, Leicester, United Kingdom
2Centre for Exercise and Rehabilitation Science, University Hospitals of Leicester NHS Trust, Leicester, United Kingdom
3Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, United Kingdom
4Leeds Unit of Complex Intervention Development, University of Leeds, Leeds, United Kingdom
Purpose: People with Chronic Obstructive Pulmonary Disease report a lack of information about Pulmonary Rehabilitation (PR) options and support from healthcare professionals (HCPs) when planning treatments. We describe the development of a patient decision aid (PtDA) and decision coaching training for HCPs to support patients in making informed, value-based decisions about PR.
Methods: A multi-stakeholder steering group guided intervention development (patient advisor, n=2; decision scientist, n=1; health psychologist, n=2; PR specialists, n=2).
The intervention was informed by: PtDA development framework (Coulter et al., 2013); Ottawa Decision Support Framework (Stacey et al., 2020); International Patient Decision Aids Standards (IPDAS) checklist (Elwyn et al., 2006); PtDA Development Training (IPDAS, 2019); decision coaching framework (Stacey et al., 2008); multiple stakeholder decision support framework (MIND-IT; Bekker, 2015); user-centred design checklist (Wittemann et al., 2021); complex intervention development framework (O’Cathain et al., 2019).
Two components were developed: a PtDA describing options for PR (centre-based PR, telephone home-based PR, online home-based PR, and routine COPD care), and a decision coaching workshop for PR HCPs to support their understanding and self-efficacy in Shared Decision-Making (SDM). A narrative synthesis was conducted to identify research and clinical audit evidence for each option. A systematic review of SDM interventions for chronic respiratory disease was conducted to inform the intervention's format and distribution.
Qualitative methods were used to assess the interventions’ acceptability. During August–September 2021, the PtDA was evaluated in one workshop with PR HCPs (n=8) and one with patients with chronic respiratory disease (n=4).
Results: Nine iterations of the PtDA were created before acceptability and usability were confirmed.
HCPs suggested improving the coherence of the PtDA by adding more information about the available options. Patients suggested amendments to terminology and visual representations, and the addition of rating scales to allow simpler comparison between options. Both groups confirmed that the provision of decision coaching would improve HCPs’ self-efficacy in delivering the intervention.
HCPs and patients had positive attitudes towards the final PtDA and its implementation in practice.
Conclusions: We have developed a complex intervention involving a PtDA used prior to, and during, a consultation with a PR HCP, and decision coaching training to upskill PR HCPs. The feasibility of our intervention to support shared decision making about PR is currently being tested in a mixed methods study.
Keywords: Intervention development, Patient decision aid, Decision coaching, shared decision making
‘Difficult conversations’ in kidney care management: a guide for health professionals talking about end-of-life and palliative care
PP-215 Decision Psychology and Shared Decision Making (DEC)
Anna Winterbottom1, Andrew Mooney1, Helen Hurst2, Lynne Russon1, Paula Ormandy3, Fliss Murtagh4, Hilary L Bekker5, Barny Hole6, Emma Murphy7, Keith Bucknall8, Iain Simkin8
1Leeds Teaching Hospitals Trust
2Central Manchester University Hospitals NHS Foundation Trust
3University of Salford
4University of Hull
5University of Leeds
6North Bristol NHS Foundation Trust & University of Bristol
7University Hospital Coventry & Warwickshire NHS Trust
8Patient representative
Purpose: People with kidney failure have poor outcomes and complex health needs but are less likely to discuss end-of-life and palliative care than people with other chronic illnesses. Providing good quality, accurate information about end-of-life care options is a fundamental step in ensuring timely, reasoned decisions are made between patients and professionals about the care people wish to receive at end of life. We developed the guide ‘Difficult Conversations: talking with people with kidney failure about what is important towards the end of life’ to improve the confidence and skills of kidney health professionals engaging in these conversations.
Methods: Phase 1) Qualitative interviews exploring the perspectives of older adults with kidney failure (n=18), family/carers (n=5), and bereaved family/carers (n=4) on end-of-life issues. Interviews took place by telephone or Zoom and lasted no longer than 1 hour. Audio recordings were transcribed verbatim; data were managed in NVivo and analysed using thematic analysis. Phase 2) Resource development based on themes from the phase 1 analysis; guidelines on end-of-life/palliative care, complex interventions, patient information, and ‘difficult conversation’ training; and stakeholder feedback.
Results: Phase 1) Most people had not discussed future care with a kidney clinician. Preferences varied for how, when, and with whom these conversations should take place. People were uncertain about how death occurs as a result of kidney disease. People made their own end-of-life plans, often in discussion with their spouse. Carers were closely involved in decision-making and care planning and reported their own support/care needs.
Phase 2) Iterative drafts of the resource were produced using themes from phase 1 data and direct quotations. Feedback was sought from patient and public involvement (PPI), project and advisory team, and (inter)national interested parties.
Conclusions: Identifying how people with kidney failure and their families experience end of life improves clinicians’ understanding and gives permission to have discussions about end-of-life care options. For patients, these discussions improve knowledge and realistic expectations, reduce anxiety, and support decision-making in line with their preferences. For services, future care planning can mitigate ‘crisis’ management in acute situations and improve the allocation and co-ordination of services. The booklet will be launched in UK NHS kidney services (January 2023). Funding is being sought to develop a training package and assess the booklet’s acceptability in practice.
Keywords: End-of-life care, palliative care, kidney care, decision making, healthcare information, doctor-patient communication
Willingness to Attend Video Consultation: Evaluating and Matching Patients’ and Clinicians’ Preferences
PP-216 Decision Psychology and Shared Decision Making (DEC)
Daniel Gartner1, Charlotte Marshall1, Nathan Gibson1, Paul Harper1, Geraint Palmer1, Alka Ahuja2, Gemma Johns2
1Cardiff University, School of Mathematics, Cardiff, Wales
2Aneurin Bevan University Health Board, Caerleon, Wales
Purpose: Since the start of the COVID-19 pandemic, demand for telehealth and video consultation services has increased sharply. Although this sudden increase was initially driven by necessity, a number of benefits and positive responses to the service have been identified. Consequently, it appears likely to remain key to the future of healthcare even as the pandemic eases. It is therefore important to identify the factors behind patients’ hesitancy to continue using the service and what could address it. The objective of our study is twofold: first, to understand which variables influence the uptake of video consultation services; second, to evaluate how predictable patients’ intentions are to use video consultation services again.
Methods: To understand which variables influence the uptake of video consultation, we used data from the U.K.’s National Health Service (NHS) and identified significant variables through statistical tests. To evaluate how predictable patients’ and clinicians’ intentions are to use the video consultation service again, machine learning models, including logistic regression, k-nearest neighbours, random forests, gradient boosting, and voting classifiers, were trained and tested. Finally, we evaluated feature importance methods to find the models that best predict patient and clinician uptake of video consultation services.
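A minimal sketch of this kind of model-comparison workflow in scikit-learn, using synthetic stand-in data (the NHS survey data are not public); the model choices mirror those listed above, but the features, sample size, and default hyperparameters here are illustrative assumptions, not the study's configuration:

```python
# Illustrative sketch (not the study's code): train and compare several
# classifiers, plus a soft-voting ensemble, on a binary uptake outcome.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for survey features (age, care sector, device used, ...)
# and the binary outcome "would use video consultation again".
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
# Soft-voting ensemble averaging the base models' predicted probabilities.
models["voting"] = VotingClassifier(list(models.items()), voting="soft")

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy {accuracy_score(y_te, model.predict(X_te)):.3f}")
```

In practice one would add cross-validation and feature importance analysis (e.g., permutation importance) before comparing models, as the study describes.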
Results: Our analysis revealed several clinical and demographic variables associated with the uptake of the video consultation service. These include the profession and specialty of the clinicians, care sector, video consultation length, whose choice it was to have a video consultation, age, patient’s living area, gender, device used (by the patient for the video consultation), travel time to the location if it would have been a face-to-face consultation, household income, and clinical purpose of the video consultation.
Conclusions: Our methods and results revealed relevant, non-redundant variables associated with the uptake of video consultation. This information can be used in the future to automatically assign patients to appointment types (face-to-face or video consultation). In doing so, users and providers can run the service more efficiently, saving appointment slots and avoiding patients being seen both virtually and face to face.
Keywords: Telehealth, Patient Preferences, Video Consultation, Doctor-Patient Communication, Health Services
Periviable Decision Making and Diverse Family Structures: Ethico-legal Considerations and Provider Perspectives in a New Era of Parentage
PP-217 Decision Psychology and Shared Decision Making (DEC)
Erika R. Cheng1, Seema Mohapatra2, Shelley M. Hoffman1, Naomi Castellon Perez1, Brownsyne Tucker Edmonds1
1Indiana University School of Medicine
2Southern Methodist University Dedman School of Law
Purpose: Existing frameworks for shared decision making in periviable delivery (i.e., between 22 and 25 weeks’ gestation) primarily focus on the pregnant patient’s values and goals, without attending to the patient-partner dyad. We examined provider perspectives on dyadic decision-making in periviable delivery, particularly navigating parental disagreement regarding resuscitation and decisional authority within diverse family structures.
Methods: We recruited 30 obstetricians and neonatologists (including attending physicians, fellows, and advanced practice nurses) from 16 academic centers to participate in a brief Zoom interview focused on a clinical case vignette of a 22-week pregnant person with preterm premature rupture of membranes. We asked providers a series of questions about the level of involvement partners should have in making neonatal treatment decisions (e.g., resuscitation vs. palliation), specifically probing whether they thought partner involvement should be impacted by marital status, intended involvement after delivery, or biological relationship to the child. We then presented a legal case vignette depicting a married, heterosexual couple who disagreed on the neonatal treatment plan. We asked providers to describe how the healthcare team should proceed and to choose which parent should have ultimate decisional authority. Finally, we asked providers to consider their answers under various scenarios of diverse family constructs (e.g., same-sex partnerships, adoption, surrogacy).
Results: Most providers were neonatologists (n=20), female (73.3%), heterosexual (93.3%), and white (60.1%). Overall, providers discussed the importance of including partners in periviable counseling and believed their input should be considered. Marital status and intent to coparent emerged as important considerations, while partners’ biological relationship to the child was less emphasized. In the context of parental disagreement, providers overwhelmingly agreed that “mom is the priority” in heterosexual relationships and should have ultimate decisional authority; however, providers expressed uncertainty and hesitation when asked to consider decisional authority within diverse family constructs (e.g., adoption, same-sex marriage, and surrogacy). Here, providers expressed a need for clinical and legal guidance.
Conclusions: Providers value partners’ input in neonatal resuscitation decision-making but lack guidance and support for navigating decisional authority, especially in the context of diverse family structures. Because the outcomes of periviable delivery directly impact both parents, incorporating the dynamics and impact of partners’ involvement may enhance decisional autonomy for the pregnant person and facilitate more shared, equitable and high-quality decision-making tailored to both parents.
Keywords: shared decision making, periviable delivery, diverse families, decisional autonomy
Shared Decision Making in the Setting of Disability Culture and Limited English Proficiency: An Undergraduate Medical School Curriculum
PP-218 Decision Psychology and Shared Decision Making (DEC)
Hannah Ship, Sahana Shankar, Jeffrey Brosco, Sheryl Eisenberg Michalowski, Shelly Baer, Jairo Arana, Damian Gregory, Ashley Falcon
Department of Medical Education, University of Miami Miller School of Medicine
Purpose: The purpose of this study is to analyze the impact and efficacy of the novel “Communication in Special Settings” educational session at the University of Miami Miller School of Medicine.
Methods: A mandatory 3-hour session for second-year clinical medical students was co-designed by our curriculum development team, which consisted of self-advocates, medical students, and healthcare providers. Learning objectives included: (1) recognize that limited English proficiency is a social determinant of health (for example, in individuals who communicate using American Sign Language); (2) apply elements of the medical and social models as appropriate in the context of Deaf culture; (3) apply the key components of valid consent using a shared decision making framework; and (4) value the ability of all persons, regardless of disability, to provide valid consent that reflects respect for self-determination. The session included individual pre-work, a small-group ethics case, and a large-group panel. Pre-work included videos featuring members of the Deaf community discussing Deaf culture and the need for accessible health communication, literature on informed consent, and a case-study lecture on family-professional partnerships. The ethics case focused on a discussion with parents who identify as culturally and linguistically Deaf on the decision to obtain or not obtain a cochlear implant for their child. The large-group panel featured self-advocates with intellectual and/or developmental disabilities and their family members sharing their positive and negative experiences with communication in healthcare teams. The session was evaluated through a self-report retrospective survey.
Results: The post-session survey had a response rate of 62% (n=124/201). Comparing students’ self-perceived abilities to perform each learning objective, students reported significantly higher confidence after the session than their retrospective pre-session confidence for all four learning objectives (all p<0.001). After the session, 98% of respondents signified achievement of the learning objectives (scores of 4 or 5, “probably yes” and “definitely yes”), compared with 72% of respondents before the session.
Conclusions: Survey responses indicate that students learned the importance of incorporating and upholding patients' individual values in clinical discussions. This study suggests medical students would benefit from increased educational initiatives on shared decision making, valid consent and effective communication in the context of health equity, disability culture and limited English proficiency.
Keywords: Deaf culture, valid consent, shared decision making, self-determination
US Societal Perspectives on Prioritizing Aspects of Disease: A Discrete Choice Experiment
PP-220 Decision Psychology and Shared Decision Making (DEC)
Ivana F Audhya1, Karissa M Johnston2, Jessica Dunne2, David Feeny3, Peter I Neumann4, Daniel C Malone5, Shelagh M Szabo2, Katherine L Gooch1
1Sarepta Therapeutics, Inc, Cambridge MA
2Broadstreet HEOR, Vancouver, BC
3McMaster University, Hamilton, ON
4Tufts School of Medicine, Boston, MA
5The University of Utah, Salt Lake City, UT
Purpose: This study aimed to characterize United States (US) general population preferences regarding disease attributes considered most important for research and treatment.
Methods: We conducted a discrete choice experiment (DCE) in a US general population sample. Attributes included: disease prevalence, onset age, inherited vs. acquired cause, life expectancy, caregiver requirement, and symptom severity. Attributes and level definitions were developed based on in-depth qualitative interviews with 29 members of the US general public to understand elements of disease and treatment found to be most important, and why. The Health Utilities Index (HUI) was used to characterize extent of health impairment within severity profiles and inform severity of impairment for each level. Relevance and ease of completing the DCE was explored through semi-structured qualitative and think-aloud interviews, and pilot analysis. A market research company recruited interview participants through social media and DCE respondents through global online consumer panels. We analyzed results using a mixed-logit regression model with random effects included for all attributes. In addition to the overall sample, analyses were repeated for subgroups defined by self-report of personal and family history of chronic disease.
Results: 537 respondents were included. The majority (70.6%) were 18-54 years of age and 52.1% were female. Chronic disease self-history was reported by 25.7%, while 27.6% reported family history of chronic disease. 40% of respondents had children younger than 18 years. Attributes identified as most important were life expectancy (odds ratio [OR] and 95% confidence interval [CI] 1.88 [1.56-2.27] for a 50% reduction to remaining life expectancy vs. no impact), and severity (OR and 95% CI 1.84 [1.47-2.31] for highest vs. lowest severity level). Other factors identified as important included childhood onset, extensive caregiver need, and higher prevalence (Figure 1). In respondents reporting self and/or family history of chronic disease, further weight was placed on life expectancy in determining importance. Respondents reporting self-history placed additional weight on extensive caregiver requirements.
Figure 1.
Results of mixed-logit analysis of DCE responses
Conclusions: Respondents valued life expectancy (premature mortality) and severity (prospective morbidity) as the most important disease attributes for prioritizing research and treatment. Life expectancy was of particular importance to those reporting a history of chronic disease. This study builds on previous qualitative work by quantifying societal preferences for disease elements and may be used to inform future value assessments for decision-making.
Keywords: discrete choice experiment; population preferences; disease utilities; Health Utilities Index
Periviable Breech Delivery Decision Analysis
PP-222 Decision Psychology and Shared Decision Making (DEC)
Makayla Kirksey1, Devika Chakrabarti2, Madeleine Caplan3, Shelley Hoffman1, Barrett Robinson4, Brownsyne Tucker Edmonds5
1Indiana University School of Medicine, Indianapolis, United States
2Department of Obstetrics and Gynecology, Ascension St. Vincent Hospital, Indianapolis, United States
3Department of Obstetrics and Gynecology, The Ohio State University Medical Center, Columbus, United States
4Department of Obstetrics and Gynecology, The University of Chicago, Chicago, United States
5Department of Obstetrics and Gynecology, Indiana University School of Medicine, Indianapolis, United States
Purpose: The optimal mode of delivery (MOD) for malpresentation in periviable deliveries (22-24 weeks) remains a source of debate. Neonatal and maternal complications can arise from both vaginal (VD) and cesarean delivery (CD), and the threat of maternal morbidity extends to subsequent pregnancies. It has been difficult to compare these risks while counseling patients about MOD options, so we sought to create a decision tree that maps probable outcomes associated with breech deliveries at 23- and 24-weeks’ gestation, as well as complications posed for subsequent pregnancies.
Methods: An extensive literature review was conducted to identify risk estimates of periviable maternal and neonatal outcomes, along with elective repeat CD (ERCD) and trial of labor after cesarean (TOLAC) for subsequent pregnancies. Probabilities were entered into TreeAge software, starting with primary maternal health states that may result from CD and VD – “death”, “hysterectomy”, or “no hysterectomy” – followed by the probability of neonatal health states – “death”, “severe morbidity”, or “no severe morbidity”. The likelihood of placenta previa or normal placenta was considered for subsequent pregnancies. We factored in the possibility of ERCD or TOLAC and the associated maternal and neonatal risks for each. Maternal preferences were quantified using previously published utility values, and a primary analysis was performed to determine the preferred method of delivery.
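The TreeAge model described above amounts to an expected-utility "rollback" over chained chance nodes. A self-contained sketch with a hypothetical 23-week tree (branch probabilities loosely echo figures reported in the abstract, but the terminal utility weights are invented and the subsequent-pregnancy branches are omitted):

```python
def rollback(node):
    """Expected utility of a node: either a terminal utility (float)
    or a list of (probability, child) branches summing to 1."""
    if isinstance(node, (int, float)):
        return node
    assert abs(sum(p for p, _ in node) - 1.0) < 1e-9
    return sum(p * rollback(child) for p, child in node)

# Maternal death first, then neonatal outcome; utilities are invented.
cd_tree = [
    (0.0019, 0.0),                 # maternal death
    (0.9981, [(0.46, 0.20),        # neonatal death
              (0.27, 0.55),        # severe neonatal morbidity
              (0.27, 1.00)]),      # no severe morbidity
]
vd_tree = [
    (0.00002, 0.0),
    (0.99998, [(0.725, 0.20),
               (0.1325, 0.55),
               (0.1425, 1.00)]),
]
preferred = max([("CD", rollback(cd_tree)), ("VD", rollback(vd_tree))],
                key=lambda t: t[1])
print(preferred[0])
```

With these placeholder inputs the rollback favors CD, mirroring the direction of the abstract's primary analysis.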
Results: CD was the preferred mode for breech periviable delivery. The utility value for CD at 23 weeks was 0.74 compared to 0.61 for VD. Similarly, at 24 weeks it was 0.79 compared to 0.69 for VD. For the first pregnancy, maternal death (0.00190 vs 0.00002), hysterectomy (0.00650 vs 0.00020), and neonatal morbidity (0.27000 vs 0.13250) were higher with CD versus VD, but CD had lower neonatal mortality (0.4600 vs 0.7250) and more healthy births (0.2700 vs 0.11750). In subsequent pregnancies, CD had a lower chance of maternal mortality (0.00021 vs 0.00031), hysterectomy (0.00194 vs 0.00220), neonatal morbidity (0.00080 vs 0.2238), and neonatal mortality (0.00072 vs 0.01563), compared to VD. The probability of a healthy birth after CD was 0.74147 vs 0.70931 after VD.
Conclusions: Whether CD or VD is optimal for breech presentation in periviable delivery is influenced by a complex array of factors, including future reproductive plans and maternal values related to potential neonatal and maternal morbidity and mortality. Quantifying risks associated with each MOD will aid providers in their efforts to help families make informed decisions and reduce morbidity across the reproductive lifespan.
Keywords: shared decision making, periviable delivery, breech delivery, mode of delivery
Shared decision making for breast cancer screening and prostate cancer screening: A cognitive interview study comparing decision aids
PP-223 Decision Psychology and Shared Decision Making (DEC)
Oskar Bergengren1, Khadra A. Dualeh1, Kathleen Lynch3, Nicholas Emard3, Sene Martin2, Mia D. Austria2, Gabriel C. Ogbennaya2, Jason Gonsky4, Andrew J. Vickers2, Thomas Atkinson3, Angie Fagerlin5, Yuelin Li3, Jada G. Hamilton3, Jennifer L. Hay3, Sigrid V. Carlsson2
1Department of Surgery (Urology Service), Memorial Sloan Kettering Cancer Center, New York, New York
2Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, New York
3Department of Psychiatry and Behavioral Sciences, Memorial Sloan Kettering Cancer Center, New York, New York
4NYC H+H/Kings County, New York, New York
5University of Utah, Salt Lake City, Utah
Purpose: Traditional decision aids have prioritized completeness of information. We hypothesized that a “gist-based” decision aid focused on a single decision point would offer better decision support. We conducted cognitive interviews with healthy volunteers to compare their views on traditionally designed versus novel gist-based decision aids for decisions about cancer screening.
Methods: Following an iterative, systematic process, we developed separate novel, gist-based decision aids for men aged 45-60 considering prostate-specific antigen (PSA) screening and for women aged 40-49 contemplating mammography screening. The tools were developed to minimize numeracy and literacy demands, as well as information overload. Development included two rounds of cognitive interviews in which participants (8+8 men, 8+8 women) compared our novel gist-based decision aids to a traditional decision aid for each cancer screening context. Traditional aids were more complex and contained large amounts of information. During the cognitive interviews, we conducted a structured interview in which we asked participants about different attributes of the perceived utility of the decision aids. Differences in proportions between paired data were assessed using McNemar’s test.
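McNemar's test on paired yes/no ratings, as used above, reduces to an exact binomial test on the discordant pairs. A small sketch with hypothetical counts (not the study's data):

```python
from scipy.stats import binomtest

def mcnemar_exact(b, c):
    """Exact McNemar test for paired binary ratings.

    b, c: counts of discordant pairs (e.g., participants who rated only
    the traditional aid unclear vs. only the gist-based aid unclear).
    Under H0 the discordant pairs split 50/50.
    """
    return binomtest(min(b, c), b + c, 0.5, alternative="two-sided").pvalue

# Hypothetical discordant counts from 32 paired ratings of "unclear".
p_value = mcnemar_exact(b=13, c=1)
print(round(p_value, 4))
```

Because only discordant pairs carry information, participants who rated both aids the same way drop out of the test entirely.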
Results: Analysis of 32 cognitive interviews suggested that participants found the gist-based decision tools easier to understand and experienced less information overload compared to traditional tools. When people viewed the traditional decision aids they used descriptions such as “a lot of information”, “academic” and “overkill”. These tools also triggered negative emotions, expressed as “overwhelming” and “you hear cancer and you feel like it’s a death sentence”. Several participants found the wording of the complex and information-dense decision aids “off-putting” and “dense”. One participant commented, "It puts me off trying to read through all this. I [just] glanced through it.” We were unable to demonstrate a difference between groups in terms of information delivery, helpfulness, or usefulness. Compared to the gist-based decision aid, more participants found the traditional decision aid to be unclear (47% vs. 9%, p=0.004), difficult to read (50% vs. 22%, p=0.03) and requiring a lot of mental effort (66% vs. 25%, p=0.002). (Table 1)
Table 1. Comparison between traditional and gist-based decision aids
Conclusions: This study provides a critical advance for the field of shared decision-making, enhancing our understanding of methods for designing decision aids and supporting a novel gist-based approach. A large-scale randomized trial, which we are currently planning, is needed to evaluate the effectiveness of a gist-based versus traditionally designed decision aid on informed choice.
Keywords: decision-making, decision aids, cancer screening
Systematic development of a shared decision-making tool and consent form to counter Forced and Coerced Sterilization of Indigenous clients
PP-224 Decision Psychology and Shared Decision Making (DEC)
Unjali Malhotra1, Yvonne Boyer2, Robert Finch3, Ellen Giesbrecht3
1First Nations Health Authority, West Vancouver, British Columbia, Canada
2Senate of Canada, Ottawa, Ontario, Canada
3Perinatal Services British Columbia, Vancouver, British Columbia, Canada
Purpose: In British Columbia (BC), Canada, a Eugenic Sexual Sterilization Act was in place for 40 years, disproportionately impacting Indigenous women. This practice is ongoing as stories continue to surface and Class Action lawsuits are underway. Although mentioned in some educational programs, the health care system in BC has failed to put protections in place. We sought to create a shared decision-making (SDM) tool to enable non-coercive contraception provision if so desired by the patient.
Methods: This was a quality improvement initiative to embed shared decision-making in routine care. Development of a new way to engage in decision-making and consent involved year-long engagement with Indigenous community members, birth workers, and non-Indigenous health care providers. We used Traditional Indigenous Storytelling (stories of abuses), researched current consent forms at reproductive health clinics, reviewed drafts with Indigenous community members and health care providers, reviewed copy with the Cultural Humility and Safety department at First Nations Health Authority (FNHA), did small group engagement sessions with providers throughout BC, presented to a large group of stakeholders across BC at a virtual presentation (Grand Rounds) and at the Perinatal Services BC Steering Committee.
Results: An interdisciplinary partnership of health system leaders, led by the words of victims, created a shared decision-making tool to enable non-coercive contraception provision. The 4-page form is written in plain language English and is intended to be completed by community members at home or by choice with a trusted health care professional. The introduction includes definitions of Cultural Safety, Cultural Humility, Patient Choice, and SDM. The patient form includes 4 sections of open ended questions to guide SDM, including patient information, home community (e.g. equity, access, supports), managing contraception (e.g. goals, experiences), and potential medical and social risks of contraception. It also includes a free writing section and specific questions to mitigate coercion while protecting freedom of choice (e.g. Has the contraception discussion been initiated by the healthcare provider or the patient?)
Conclusions: Our collaborative development approach resulted in a SDM tool and consent form to counter Coerced Sterilization of Indigenous clients. The tool is now being shared nationally and internationally; it is available to all providers and clients through the FNHA and PSBC websites. The form remains fluid with 6-month reviews to ensure it remains true to community.
Keywords: Shared decision-making, patient engagement, quality improvement, implementation, Indigenous health, contraception
Conflict and participation effects caused by decision aids in choosing discharge destinations for elderly stroke patients: Randomized controlled trial
PP-225 Decision Psychology and Shared Decision Making (DEC)
Yoriko Aoki1, Kazuhiro Nakayama2, Yuki Yonekura2
1Department of Gerontological Nursing, Faculty of Medicine, University of Toyama, Toyama Japan
2Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan
Purpose: This study uses a randomized controlled trial (RCT) to evaluate the influence of using Decision Aids (DAs) that match the values of elderly stroke patients and their families on the internal conflict over, and participation in, discharge destination decisions.
Methods: In a two-arm parallel RCT, subjects were randomly allocated to an Intervention Group and a Control Group according to hospital room. The intervention lasted for approximately two months from admission to discharge, and a survey was performed on admission and at discharge. DAs were offered to the Intervention Group, and brochures were offered to the Control Group. At about a month after admission, we confirmed with both groups whether the materials were being used. The Decisional Conflict Scale (DCS), which evaluates subjects’ internal conflict over their decisions, was made the primary endpoint, and the Control Preference Scale (CPS), which evaluates subjects’ participation in the decision-making process, was made the secondary endpoint.
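Effect sizes for change scores of the kind reported in this abstract (Cohen's d) can be computed from the two groups' change scores using a pooled standard deviation. A minimal sketch with made-up DCS change scores:

```python
import numpy as np

def cohens_d_change(change_a, change_b):
    """Cohen's d for the difference in mean change scores between two
    groups, using the pooled standard deviation."""
    a = np.asarray(change_a, dtype=float)
    b = np.asarray(change_b, dtype=float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Made-up DCS change scores for an intervention and a control subgroup.
d = cohens_d_change([-8, -5, -6, -9, -4], [-3, -1, -4, -2, 0])
print(round(d, 2))
```

By the usual convention, |d| around 0.5 (as for several subgroups in the results) is considered a moderate effect.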
Results: Ninety-nine subjects, 51 in the Intervention Group and 48 in the Control Group, comprised the Full Analysis Set (FAS). No significant inter-group differences were observed in the change in DCS scores between “on admission” and “at discharge.” Scores for “Uncertainty,” a sub-item of the DCS, were significantly higher in subjects who were still undecided (p < 0.05). For the change in DCS scores, moderate effect sizes were observed among subjects living alone for “Uncertainty” (t(21) = −1.35, p = 0.19, d = 0.59) and among the elderly aged over 75 for “Clarification of values” (t(49) = 1.98, p = 0.05, d = 0.57). No significant differences were seen between the two groups in terms of CPS scores. For the change in participation rate, a moderate effect size was observed among subjects who lived alone (t(21) = 1.44, p = 0.17, d = 0.63).
Conclusions: DAs were shown to be effective (1) at reducing uncertainty and controlling the decline in participation rates, especially in subjects living alone who were unable to decide their discharge destination from the time of admission, and (2) at clarifying the values of those aged 75 and older.
Keywords: decision aids, elderly, discharge planning, Decisional Conflict Scale, Control Preference Scale, Randomized Controlled Trial
Characterizing the effectiveness of social determinants of health-focused hepatitis B interventions: A systematic review
PP-226 Health Services, Outcomes and Policy Research (HSOP)
Kikanwa Anyiwe1, Aysegul Erman2, Jordan Feld3, Eleanor Pullenayegum4, William W. L. Wong5, Beate Sander6
1Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment (THETA) Collaborative, Toronto General Hospital, University Health Network, Toronto, Canada
2Toronto Health Economics and Technology Assessment (THETA) Collaborative, Toronto General Hospital, University Health Network, Toronto, Canada
3Toronto Centre for Liver Disease, Toronto General Hospital, University Health Network, Toronto, Canada; Sandra Rotman Centre for Global Health, University of Toronto, Toronto, Canada; Department of Medicine, University of Toronto, Toronto, Canada
4Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Canada; Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Canada
5Toronto Health Economics and Technology Assessment (THETA) Collaborative, Toronto General Hospital, University Health Network, Toronto, Canada; School of Pharmacy, University of Waterloo, Waterloo, Canada; Leslie Dan Faculty of Pharmacy, University of Toronto, Toronto, Canada
6Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Canada; Toronto Health Economics and Technology Assessment (THETA) Collaborative, Toronto General Hospital, University Health Network, Toronto, Canada; ICES, Toronto, Canada; Public Health Ontario, Toronto, Canada
Purpose: To characterize equity-oriented, social determinants of health (SDH)-focused hepatitis B (HBV) interventions, and to describe their effectiveness on the prevention, care, or treatment of HBV in high-income countries.
Methods: Relevant studies were identified through searches of six electronic databases: MEDLINE, EMBASE, PsycINFO, CINAHL Plus, EconLit, and Applied Social Sciences Index and Abstracts (ASSIA). Literature searches encompassed central concepts of ‘HBV’, ‘equity’, ‘social determinants of health’, ‘intervention’, and ‘OECD countries’. Title and abstract screening and review of full-text articles for eligibility were conducted independently by two reviewers, using pre-established inclusion criteria. Data abstraction was performed using a pilot tested data collection chart. Quantitative trends of included studies were examined through descriptive analysis of study characteristics. For studies with a comparison group, findings – highlighting intervention components, HBV-relevant health outcomes, and potential non-health SDH impacts – were explored in a narrative synthesis informed by analysis of emergent themes. The review protocol was registered with PROSPERO in advance of initiation (CRD42020183441).
Results: Of 185 full-text articles reviewed for eligibility, 55 studies met the specified inclusion criteria. Study and intervention objectives and outcomes were oriented around six emergent themes corresponding to HBV prevention and care: i) knowledge & education (n=28), ii) diagnosis & screening (n=45), iii) immunization (n=25), iv) care initiation (n=27), v) engagement with clinical care (n=18), and vi) upstream prevention (n=4); most studies (n=47) incorporated multiple themes. Among the 29 studies explored in the narrative synthesis, SDH-focused interventions were most frequently tailored to reach ethnocultural populations from regions with high or intermediate HBV endemicity, including immigrants and refugees. Interventions were additionally designed for other populations experiencing marginalization (e.g., sexual and gender minorities, individuals with lower socioeconomic status, and persons who use drugs). Studies presented a heterogeneous array of HBV-relevant health outcomes specifically related to knowledge & education (n=9), diagnosis & screening (n=10), immunization (n=9), care initiation (n=7), engagement with clinical care (n=8), and upstream prevention (n=2). Non-health SDH outcomes were observed for one study.
Conclusions: This review summarized peer-reviewed literature concerning equity-oriented HBV initiatives in high-income countries. Review findings reveal a recurring emphasis on tailoring interventions for ethnocultural populations affected by HBV. Considerable heterogeneity was observed across studies regarding intervention goals and effectiveness outcomes, revealing inherent diversity in the extent of population-level strategies intended to address HBV.
Keywords: hepatitis B, social determinants of health, health interventions, public health
Patterns of care following a positive fecal blood test for colorectal cancer screening
PP-227 Health Services, Outcomes and Policy Research (HSOP)
Erin E Hahn1, Joanne E Schottinger1, Nirupa R Ghai1, Katherine J Pak1, Richard Contreras1, Nancy T Cannizzaro1, Theodore R Levin2, Christopher D Jensen2, Jessica Chubak3, Beverly B Green3, Celette S Skinner4, Ethan A Halm5
1Research and Evaluation, Kaiser Permanente Southern California
2Division of Research, Kaiser Permanente Northern California
3Health Research Institute, Kaiser Permanente Washington
4Department of Population and Data Sciences, University of Texas Southwestern
5Department of Medicine, Rutgers Robert Wood Johnson Medical School
Purpose: At-home fecal blood testing for colorectal cancer (CRC) is an evidence-based practical strategy to achieve high rates of CRC screening; however, those with positive results do not always receive guideline-recommended colonoscopy. We investigated patterns of care after a positive at-home result, focusing on inappropriate use of repeated fecal blood testing.
Methods: This retrospective cohort study included data from Kaiser Permanente (KP) Washington, KP Northern/Southern California, and University of Texas Southwestern; all have longstanding CRC screening outreach programs. We included patients eligible for fecal blood testing aged 50-89 who had a positive result between 2010 and 2018 (follow-up through 2019). We categorized post-test patterns of care: 1) received guideline-recommended colonoscopy within one year without repeat fecal blood testing; 2) completed repeat fecal blood testing and received colonoscopy within one year; 3) completed repeat testing and did not receive colonoscopy within one year; 4) no action. Associations between repeat testing after a positive result (versus colonoscopy follow-up) and patient characteristics were analyzed with logistic regression, adjusting for age, sex, Charlson comorbidity status, and race/ethnicity.
Results: We identified 316,443 patients with a positive fecal blood test: 52% male, 14% Asian/Pacific Islander, 22% Hispanic, 8% non-Hispanic Black, 53% non-Hispanic white, 3% other. Overall, 77% received colonoscopy within one-year without repeat fecal blood testing; 3% completed repeat tests and subsequently received a colonoscopy; 5% completed repeat tests without a colonoscopy; 16% took no action. Of the 23,312 who completed repeat fecal blood tests, 41% completed a colonoscopy. Compared to those who received a colonoscopy after a positive result, older age (76-64 years vs. 50-64: OR: 1.95, 95% CI: 1.82-2.08; 85-89 years vs. 50-64 OR: 5.48, 95% CI 4.72-6.37) and more comorbid conditions (≥4 vs. 0 comorbidities: OR: 1.73, 95% CI: 1.65-1.81) were significantly associated with higher odds of repeat fecal blood testing.
Conclusions: A lower proportion of patients with repeat fecal blood tests received colonoscopy within one year compared to those without repeat testing. Those completing repeat tests were older and had more comorbid conditions, suggesting they were unable or unwilling to complete a colonoscopy. Substituting repeat fecal testing for diagnostic colonoscopy is not recommended by guidelines and more research is needed to understand why this occurs, even among organized colorectal cancer screening programs. Efforts to improve the delivery of guideline-recommended CRC screening and follow-up must address this practice.
Keywords: Medical decision making, colorectal cancer, patterns of care, health services research
Health-related quality of life and treatment satisfaction of patients with cardiovascular disease in Ethiopia
PP-228 Health Services, Outcomes and Policy Research (HSOP)
Girma Tekle Gebremariam1, Kebron Tito1, Kebede Beyene2, Beate Sander3, Gebremedhin Beedemariam Gebretekle3
1School of Pharmacy, Addis Ababa University, Addis Ababa, Ethiopia.
2Department of Pharmaceutical and Administrative Sciences, University of Health Sciences and Pharmacy in St. Louis, United States.
3University of Toronto
Purpose: Cardiovascular disease (CVD) is the leading cause of disability and premature death worldwide, imposing a significant burden, and patients tend to have lower health-related quality of life (HRQoL). The purpose of this study was to assess (1) HRQoL and the factors associated with it, and (2) treatment satisfaction and its relationship with HRQoL, among patients with CVD in Ethiopia.
Methods: An institution-based cross-sectional study among patients with CVD was conducted at the outpatient cardiac clinic of Tikur Anbessa Specialized Hospital in Ethiopia between July and September 2021. Participants who consented were recruited consecutively during follow-up visits. The validated EQ-5D and Treatment Satisfaction Questionnaire for Medication were used to assess patients’ HRQoL and treatment satisfaction, respectively. Utility scores were calculated using disutility weights of the Ethiopian general population. Non-parametric tests (Kruskal-Wallis and Mann-Whitney U) were used to compare HRQoL between patients with different characteristics. Tobit regression modeling was used to identify determinants of HRQoL. Statistical significance was determined at p<0.05.
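Tobit regression is used here because EQ-5D utilities are censored at the ceiling of full health (1.0). A minimal sketch of a ceiling-censored Tobit fitted by maximum likelihood on simulated data (the covariate and coefficient values are invented):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, X, y, upper=1.0):
    """Negative log-likelihood of a Tobit model right-censored at `upper`
    (EQ-5D utilities are capped at 1 = full health)."""
    *beta, log_sigma = params
    sigma = np.exp(log_sigma)
    mu = X @ np.asarray(beta)
    censored = y >= upper
    ll = np.where(
        censored,
        norm.logsf((upper - mu) / sigma),               # P(latent >= ceiling)
        norm.logpdf((y - mu) / sigma) - np.log(sigma),  # density of observed y
    )
    return -ll.sum()

rng = np.random.default_rng(2)
n = 400
unemployed = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), unemployed])
latent = 0.95 - 0.15 * unemployed + rng.normal(0, 0.1, n)  # latent utility
y = np.minimum(latent, 1.0)                                # observed, capped

res = minimize(tobit_negloglik, x0=[y.mean(), 0.0, np.log(y.std())],
               args=(X, y), method="Nelder-Mead")
intercept, b_unemployed = res.x[0], res.x[1]
print(round(intercept, 2), round(b_unemployed, 2))
```

Unlike OLS on the capped scores, the censored likelihood recovers the negative association on the latent utility scale without attenuation from the ceiling.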
Results: Three hundred fifty-seven patients with CVD participated in the study, with a mean (SD) age of 49.3 (17.8) years. Pain/discomfort (75.4%) and mobility (73.4%) were the most frequently reported health problems among the EQ-5D-5L descriptive dimensions, whereas self-care (23%) was the least reported. The median (IQR) EQ-5D-5L utility and EuroQol Visual Analogue Scale (EQ-VAS) scores were 0.84 (0.55-0.92) and 70.0 (50.0-85.0), respectively. The convenience satisfaction dimension had the highest mean (SD) score of 87.7 (17.9), while the safety satisfaction dimension had the lowest mean (SD) score of 53.1 (33.5). There were significant, modest positive correlations between treatment satisfaction dimensions and EQ-5D-5L utility and EQ-VAS scores. The Pearson’s correlation coefficient between mean effectiveness scores and the EQ-5D-5L score was 0.32. The safety satisfaction dimension showed a negative correlation with the EQ-5D-5L index (r = -0.37) and EQ-VAS score (r = -0.34). Multivariable Tobit regression analysis showed that unemployment, older age, previous hospital admission, non-adherence to lifestyle modification, and presence of three or more CVDs were negatively associated with HRQoL.
Conclusions: Overall, patients with CVD had lower HRQoL than the Ethiopian general population's mean utility of 0.92. Our findings suggest that designing and implementing interventions to improve HRQoL should prioritize modifiable factors: controlling CVD progression, fostering lifestyle changes, and improving treatment satisfaction.
Keywords: Cardiovascular disease, EQ-5D-5L, Ethiopia, HRQoL, TSQM, Treatment satisfaction
Figure 1.
Self-reported health problems using the EQ-5D-5L descriptive dimensions in patients with Cardiovascular disease
Selective Prescribing of Atypical Antipsychotics to Medicare Beneficiaries
PP-229 Health Services, Outcomes and Policy Research (HSOP)
Ismaeel Yunusa1, Ibraheem M. Karaye2, Chijioke Okeke3, Saud Alsahali4
1Department of Clinical Pharmacy and Outcomes Sciences, University of South Carolina College of Pharmacy, Columbia, SC
2Department of Population Health, Hofstra University, Hempstead, NY
3National Agency for Food and Drug Administration and Control (NAFDAC), Lagos, Nigeria
4Department of Pharmacy Practice, Unaizah College of Pharmacy, Qassim University, Qassim, Saudi Arabia
Purpose: An essential principle of conservative prescribing is to optimally use only a few drugs within a class. However, with newer atypical antipsychotics (AAPs) approved by the US Food and Drug Administration (FDA) in the past decade, it is unclear whether clinicians are conservative in prescribing them. Therefore, this study aims to determine the extent to which prescribers limit the number of AAPs they prescribe to Medicare beneficiaries.
Methods: We conducted a cross-sectional analysis of prescription claims of AAPs using the 2016 Medicare Part D Public Use Files. We retrieved data from prescribers with ≥100 prescriptions of AAPs. We estimated the number of analogs accounting for 50% (DU50) and 90% (DU90) of prescribed AAPs. We used the Gini coefficient to determine the Formulary Selectivity Index (FSI), a number between 0 and 1, with ‘0’ representing equal prescription of each analog (no selectivity) and ‘1’ representing prescribing of only one analog (perfect selectivity).
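The DU50/DU90 counts and the Gini-based FSI can be computed directly from per-analog prescription counts. A sketch with hypothetical counts for 13 analogs (not the Medicare data):

```python
import numpy as np

def gini(counts):
    """Gini coefficient across analogs: 0 = all analogs prescribed
    equally (no selectivity), near 1 = one analog dominates."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = len(x)
    # Standard formula based on the ascending-ordered values.
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

def du_share(counts, share):
    """Smallest number of analogs covering `share` of all prescriptions."""
    x = np.sort(np.asarray(counts))[::-1]          # descending
    cum = np.cumsum(x) / x.sum()
    return int(np.searchsorted(cum, share) + 1)

# Hypothetical prescription counts for 13 AAP analogs.
counts = [4200, 3100, 1500, 1200, 900, 700, 300, 150, 100, 80, 50, 40, 30]
print(round(gini(counts), 2), du_share(counts, 0.50), du_share(counts, 0.90))
```

With these invented counts, two analogs cover half of all prescriptions and six cover 90%, paralleling the DU50/DU90 pattern reported in the results.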
Results: There were 14,054,118 AAPs prescribed to Medicare Part D beneficiaries. Prescribers were very selective in prescribing AAPs (FSI: 0.81). Among the thirteen AAPs prescribed, six drugs (quetiapine, risperidone, olanzapine, aripiprazole, clozapine, and ziprasidone) accounted for more than 90% of the medications. Quetiapine and risperidone alone accounted for more than 50% of the AAPs. The results were consistent when analyses were stratified by prescriber specialty.
Conclusions: The findings suggest that prescribers limit the use of AAPs to quetiapine and risperidone. Future studies may examine if prescribing these drugs is associated with more favorable health outcomes than other AAPs.
Keywords: conservative prescribing, atypical antipsychotics
FACIT-Fatigue Responders and Related Hemoglobin Improvements After Pegcetacoplan in Treatment Naïve Patients with Paroxysmal Nocturnal Hemoglobinuria
PP-230 Health Services, Outcomes and Policy Research (HSOP)
Jesse Fishman1, Mohammed Al-Adhami1, David A Andrae2, William R Lenderking2
1Apellis Pharmaceuticals, Inc., Waltham, MA, USA
2Evidera, Bethesda, MD, USA
Purpose: Paroxysmal nocturnal hemoglobinuria (PNH) is a rare hematologic disease involving complement-mediated hemolysis. Pegcetacoplan, an FDA/EMA approved C3 complement-inhibitor, has significantly improved hemoglobin levels and demonstrated clinically meaningful improvements in fatigue as measured by the Functional Assessment of Chronic Illness Therapy (FACIT)-Fatigue scale. This post hoc analysis explored the relationship between clinically meaningful differences in FACIT-Fatigue total score and responsiveness of the FACIT-Fatigue total score with improvement in hemoglobin levels.
Methods: PRINCE (NCT04085601) enrolled 53 complement-inhibitor (i.e., eculizumab/ravulizumab) naïve patients with PNH. Patients were randomized 2:1 to pegcetacoplan (1080 mg subcutaneously twice weekly, n=35) or control (standard treatment excluding complement-inhibitors, n=18) for 26 weeks. Control patients could escape to the pegcetacoplan arm following a ≥2 g/dL hemoglobin level decrease from baseline. The Intent-to-Treat (ITT) population included all randomized patients, and analysis was conducted based on the randomized treatment group at baseline, regardless of whether patients escaped to pegcetacoplan treatment at a later time point. The percentage of patients achieving clinically meaningful improvements (≥3- and ≥5-point [Cella, et al. Blood. 2021]) in FACIT-Fatigue total score from baseline to Week 16 was reported for the pegcetacoplan and control-treatment arms. Responsiveness of FACIT-Fatigue to increases in hemoglobin levels (<1 g/dL, ≥1 to <2 g/dL, and ≥2 g/dL) at Week 16 was assessed using analysis of covariance.
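The responsiveness ANCOVA described above (Week-16 score by hemoglobin-increase category, adjusting for baseline) can be sketched on simulated data; the group labels, sample sizes, and effect sizes below are illustrative only:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
n = 53
# Simulated FACIT-Fatigue scores; hb_group = hemoglobin-increase category.
df = pd.DataFrame({
    "hb_group": rng.choice(["lt1", "1to2", "ge2"], size=n),
    "baseline": rng.normal(38, 8, n),
})
group_effect = df["hb_group"].map({"lt1": 0.0, "1to2": 4.0, "ge2": 10.0})
df["week16"] = df["baseline"] + group_effect + rng.normal(0, 4, n)

# ANCOVA: Week-16 total score by hemoglobin group, adjusting for baseline.
fit = smf.ols("week16 ~ C(hb_group) + baseline", data=df).fit()
table = anova_lm(fit, typ=2)
print(table)
```

The F statistic for the group term tests whether baseline-adjusted Week-16 scores differ across hemoglobin-increase categories, the quantity reported in the results.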
Results: In the ITT population, 70.0% and 66.7% of patients in the pegcetacoplan arm achieved the clinically meaningful ≥3- and ≥5-point improvements, respectively. In the control-treatment arm, 50.0% met the ≥3-point improvement threshold while only 37.5% met the ≥5-point threshold. In the responsiveness analysis of the overall ITT population, the group with the greatest improvement in hemoglobin level (≥2 g/dL) from baseline to Week 16 showed the largest increase in score, with a 10.4-point improvement in FACIT-Fatigue total scores (F=7.53, p=0.0004; Figure). Similar results were observed in the pegcetacoplan treatment group (F=2.70, p=0.0662), with a 10.4-point improvement in FACIT-Fatigue total score for the hemoglobin ≥2 g/dL group, and in the item-level FACIT-Fatigue scores.
Abbreviations: FACIT-Fatigue, Functional Assessment of Chronic Illness Therapy-Fatigue; SE, standard error.
Conclusions: Overall, a higher proportion of pegcetacoplan-treated patients achieved clinically meaningful decreases in fatigue compared to the control treatment at Week 16, and patients reported greater improvement in FACIT-Fatigue scores with higher increases in hemoglobin levels, indicating improved quality of life.
Keywords: Paroxysmal nocturnal hemoglobinuria (PNH), hemoglobin, quality of life, FACIT-Fatigue, Pegcetacoplan, complement-inhibitor
Mean (SE) Change in FACIT-Fatigue Total Score from Baseline to Week 16 by Increase in Hemoglobin Levels in the Overall (Intent-to-Treat) Population
Item-Level Findings and Longitudinal Fatigue Scores Among Patients with Paroxysmal Nocturnal Hemoglobinuria from the Phase 3 PRINCE Study
PP-231 Health Services, Outcomes and Policy Research (HSOP)
Jesse Fishman1, David A Andrae2, Mohammed Al-Adhami1, William R Lenderking2
1Apellis Pharmaceuticals, Inc., Waltham, MA, USA
2Evidera, Bethesda, MD, USA
Purpose: Fatigue is the most common patient-reported symptom associated with paroxysmal nocturnal hemoglobinuria (PNH), a rare hematologic disease involving complement-mediated hemolysis. Here we use the Phase 3 PRINCE data to report the longitudinal and categorical changes (item-level) in Functional Assessment of Chronic Illness Therapy (FACIT)-Fatigue scores in patients treated with pegcetacoplan, the first FDA/EMA approved C3-inhibitor for PNH.
Methods: PRINCE (NCT04085601) enrolled 53 complement-inhibitor (eculizumab/ravulizumab) naïve patients with PNH. Patients were randomized 2:1 to pegcetacoplan (1080 mg subcutaneously twice weekly, n=35) or control (standard treatment excluding complement-inhibitors, n=18). Control patients could escape to pegcetacoplan treatment following a ≥2 g/dL hemoglobin level decrease from baseline (control-escape group). Post hoc analyses evaluated least-squares (LS) mean FACIT-Fatigue scores (and standard error [SE]) from baseline to Week 16 and mean item-level (13 items) scores (and standard deviation [SD]) among patients in the pegcetacoplan, control, or control-escape groups at baseline and Week 16. The FACIT-Fatigue scale uses 13 separate Likert-like scale questions (0=very much, 4=not at all [these analyses reversed the scale]) to measure fatigue severity and improvement. For total FACIT-Fatigue scores, higher total scores indicate less fatigue, and the maximum total score is 52 (measures of fatigue improvement are recoded for total score).
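As a concrete illustration of the scoring described above, the sketch below computes a FACIT-Fatigue total from 13 raw item responses, reverse-scoring the severity items so that higher totals indicate less fatigue; the item responses and the positions of the two improvement items are illustrative assumptions, not study data.

```python
# Illustrative sketch of FACIT-Fatigue total scoring (hypothetical data,
# not PRINCE study responses). Each of the 13 items is answered 0-4.
# Severity items are reverse-scored (4 - raw) so higher totals indicate
# less fatigue; the two improvement items ("I have energy", "I am able
# to do my usual activities") are scored as answered.
# Maximum total = 13 * 4 = 52.

def facit_fatigue_total(responses, improvement_items=(7, 8)):
    """responses: list of 13 raw item scores (0-4).
    improvement_items: 0-based indices of the two items that are NOT
    reverse-scored (the positions used here are an assumption)."""
    if len(responses) != 13:
        raise ValueError("FACIT-Fatigue has 13 items")
    total = 0
    for i, raw in enumerate(responses):
        if not 0 <= raw <= 4:
            raise ValueError("item scores must be 0-4")
        total += raw if i in improvement_items else 4 - raw
    return total

# A respondent reporting no fatigue at all (0 on every severity item,
# 4 on both improvement items) reaches the 52-point maximum.
best = [0] * 13
best[7] = best[8] = 4
print(facit_fatigue_total(best))  # -> 52
```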
Results: Patients reported considerable levels of fatigue at baseline (LS mean [SE] scores: pegcetacoplan, 39.9 [1.4]; control, 36.0 [3.2]; control-escape, 35.6 [2.5] versus general population norm 43.6). Pegcetacoplan-treated patients displayed improved LS mean (SE) FACIT-Fatigue scores at Week 8 (pegcetacoplan, 41.8 [1.4]; control-escape, 38.4 [2.4]; versus control, 36.5 [3.0]) and clinically meaningful improvements (≥3-point increase) at Week 16 (pegcetacoplan, 43.8 [1.4]; control-escape, 41.2 [2.4]; versus control, 37.0 [3.0]). Pegcetacoplan-treated patients also demonstrated improvements in mean item-level scores at Week 16 versus baseline (mean scores for all 13 items are presented in the table), including categories such as “I feel fatigued”, “I have trouble starting (or finishing) things because I am tired”, and “I have energy”.
Conclusions: These data demonstrate that patients who were randomized or switched to pegcetacoplan during the PRINCE study displayed clinically meaningful improvements in fatigue over time. Item-level results indicate that improvements in fatigue occur in meaningful categories relevant to quality of life. Altogether, these findings suggest that pegcetacoplan provides a positive treatment benefit on fatigue levels among complement-inhibitor naïve patients with PNH.
Keywords: Fatigue, Paroxysmal Nocturnal Hemoglobinuria (PNH), complement inhibitor, C3, quality of life
Total and item-level FACIT-Fatigue scores during the PRINCE study*
| Total FACIT-Fatigue scores, least-squares mean (SE)† | Pegcetacoplan | Control | Control-escape‡ |
|---|---|---|---|
| Baseline | 39.9 (1.4) | 36.0 (3.2) | 35.6 (2.5) |
| Week 8 | 41.8 (1.4) | 36.5 (3.0) | 38.4 (2.4) |
| Week 16 | 43.8 (1.4) | 37.0 (3.0) | 41.2 (2.4) |
| FACIT-Fatigue item-level scores, mean (SD) | Pegcetacoplan | Pegcetacoplan | Control | Control | Control-escape‡ | Control-escape‡ |
|---|---|---|---|---|---|---|
| Measures of fatigue severity (0=very much, 4=not at all¶) | BL (n=35) | Wk 16 (n=30) | BL (n=7) | Wk 16 (n=7) | BL (n=9) | Wk 16 (n=10) |
| I feel fatigued | 2.6 (1.1) | 3.5 (0.7) | 2.7 (1.0) | 2.4 (1.1) | 3.0 (0.9) | 3.4 (0.8) |
| I feel weak all over | 2.9 (1.1) | 3.5 (0.9) | 2.9 (1.1) | 2.6 (1.1) | 3.0 (1.2) | 3.5 (0.9) |
| I feel listless (‘washed out’) | 2.9 (1.2) | 3.8 (0.5) | 2.9 (1.2) | 2.6 (1.1) | 3.3 (0.7) | 3.7 (0.5) |
| I feel tired | 2.5 (1.0) | 3.5 (0.9) | 2.6 (0.8) | 2.3 (0.8) | 2.7 (0.9) | 3.3 (1.0) |
| I have trouble starting things because I am tired | 2.8 (1.2) | 3.7 (0.6) | 3.0 (0.8) | 2.6 (1.0) | 2.8 (1.0) | 3.4 (1.0) |
| I have trouble finishing things because I am tired | 2.8 (1.2) | 3.7 (0.6) | 2.9 (0.9) | 2.9 (0.9) | 2.8 (1.1) | 3.4 (1.0) |
| I need to sleep during the day | 2.6 (1.1) | 3.0 (1.1) | 2.1 (1.4) | 2.0 (1.3) | 2.8 (1.0) | 3.2 (0.9) |
| I am too tired to eat | 3.8 (0.5) | 3.8 (0.5) | 3.9 (0.4) | 3.6 (0.8) | 3.4 (1.0) | 3.7 (0.7) |
| I need help doing my usual activities | 3.6 (0.8) | 3.7 (0.7) | 3.7 (0.8) | 3.6 (0.8) | 3.1 (0.9) | 3.7 (1.0) |
| I am frustrated by being too tired to do the things I want to do | 2.9 (1.3) | 3.6 (0.8) | 3.6 (0.8) | 3.1 (0.9) | 3.0 (1.1) | 3.6 (1.0) |
| I have to limit my social activity because I am tired | 2.6 (1.3) | 3.6 (0.8) | 3.4 (1.1) | 2.9 (0.9) | 2.4 (1.2) | 3.3 (1.0) |
| Measures of fatigue improvement (higher scores=improvement; scores are recoded for analysis) | BL (n=35) | Wk 16 (n=30) | BL (n=7) | Wk 16 (n=7) | BL (n=9) | Wk 16 (n=10) |
| I have energy | 2.1 (1.0) | 2.7 (1.3) | 2.6 (1.3) | 2.4 (1.1) | 1.3 (0.7) | 2.3 (1.3) |
| I am able to do my usual activities | 2.2 (1.4) | 2.9 (1.3) | 2.9 (1.2) | 3.0 (0.8) | 2.0 (1.1) | 2.4 (1.3) |
Abbreviations: BL, baseline; FACIT-Fatigue, Functional Assessment of Chronic Illness Therapy-Fatigue; PEG, pegcetacoplan; SD, standard deviation; SE, standard error; Wk, week
*These post hoc analysis values were obtained using a modified analysis set that analyzed patients in three groups: pegcetacoplan, control, and control-escape
†Previous reports using the intent-to-treat analysis set have described mean (SD) total baseline FACIT-Fatigue scores of 36.3 (10.7) for pegcetacoplan and 37.1 (9.3) for control
‡Control arm patients could escape to pegcetacoplan treatment following a qualifying decrease in hemoglobin levels or a qualifying thromboembolic event secondary to PNH
¶For these post hoc analyses, the scoring/scale was reversed
Identifying priorities for hearing loss prevention: the global cause-specific incidence of hearing loss
PP-232 Health Services, Outcomes and Policy Research (HSOP)
Kavita Prasad1, Ethan D Borre2, Lauren K Dillard3, Austin Ayer2, Caroline Der4, Catherine Mcmahon5, Debara Tucci6, Blake S Wilson7, Gillian D Sanders Schmidler8, James E Saunders9
1Tufts University School of Medicine, Boston, MA
2Duke University School of Medicine, Durham, NC
3Department of Otolaryngology- Head & Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
4Universidad del Desarrollo, Dr Luis Calvo Mackenna Hospital, Concepción, Chile
5Department of Linguistics, Macquarie University, NSW, Australia
6National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD
7Department of Head and Neck Surgery & Communication Sciences, Duke University Medical Center, Durham, NC
8Duke University and Duke-Margolis Center for Health Policy, Durham, NC
9Section of Otolaryngology, Department of Surgery, Dartmouth-Hitchcock Medical Center, Lebanon, NH
Purpose: Hearing loss is the fourth leading cause of years lived with disability worldwide, but up to 50% of hearing loss cases are preventable. Our objective was to identify and estimate the yearly number of hearing loss cases due to preventable etiologies, globally, to inform decision modeling development and global policy recommendations.
Methods: We selected 5 preventable etiologies of hearing loss, focusing on medical conditions, including: meningitis, congenital rubella syndrome, and medication-induced ototoxicity from aminoglycosides, platinum-based chemotherapeutics, and antimalarials. We utilized targeted and systematic literature searches to determine 1) the yearly global incidence of each selected etiology and 2) the risk of hearing loss associated with each etiology. Our main outcomes were the estimated total number of exposures to each etiologic factor, the prevalence of hearing loss after exposure, and the number of hearing loss cases associated with each etiology annually. We varied uncertain parameters to create upper and lower bounds for annual hearing loss case estimates.
Results: Each year an estimated 256.5 million people are exposed to the selected preventable causes of hearing loss, leading to an estimated 31.1 million cases of hearing loss worldwide (range: 10.8-54.8M). The majority of cases are likely due to ototoxic medication, with 19.7 million (54%) cases due to short courses of aminoglycoside therapy and another 12.3 million (41.4%) due to antimalarials. An estimated 577,000 hearing loss cases were associated with meningitis, and 63,000 with congenital rubella syndrome. Patients with congenital rubella syndrome, those treated with platinum-based chemotherapeutics for cancer, and those who received aminoglycoside treatment for multidrug-resistant tuberculosis (MDR-TB) have the greatest risk of acquiring hearing loss, with prevalence estimates of hearing loss after exposure of 60%, 43%, and 40%, respectively.
Conclusions: Our results demonstrate the high global caseload of preventable hearing loss and highlight the urgent need to prioritize global hearing loss prevention. However, there is large uncertainty around exposures and risks of hearing loss that future research might clarify. Reduction of ototoxic hearing loss may be achieved through use of alternative pharmaceuticals, reducing inappropriate drug use, and bringing otoprotectants to market. Vaccine program expansion would reduce the burden of vaccine-preventable causes of hearing loss. Future planned analyses will expand selected etiologies to include estimates of hearing loss cases associated with congenital cytomegalovirus and otitis media.
Keywords: Hearing loss, aminoglycosides, antimalarials, platinum-based therapies, congenital rubella syndrome, meningitis
Table 1.
Annual global estimates of HL cases associated with preventable disease-related etiologies.
Telemental Health Providers Prior to and During the COVID-19 Pandemic Among Medicare Beneficiaries With Major Depression
PP-233 Health Services, Outcomes and Policy Research (HSOP)
Maria T Peña1, Jan Lindsay2, Ruosha Li1, Ashish Deshmukh1, John M Swint1, Robert O Morgan1
1Department of Management, Policy and Community Health, The University of Texas School of Public Health, Houston, Texas, United States
2Houston VA HSR&D Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey VA Medical Center& VA South Central Mental Illness Research, Education and Clinical Center & Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, Texas, USA.
Purpose: To examine the association of telemental health (TMH) delivery with patient characteristics before and during the COVID-19 pandemic among Medicare beneficiaries with major depression.
Methods: We examined 2019-2020 Texas Medicare 100% fee-for-service claims data for beneficiaries aged ≥65 years with major depression who sought TMH care. Providers were identified by specialty when beneficiaries had at least one major depression-related visit with them. Separate multivariable logistic regression analyses were conducted to examine the likelihood of seeking care from each provider specialty, as the outcomes, with beneficiary characteristics as predictors: sex, age, race, county-level income, Medicaid eligibility, rurality, and comorbidities.
Results: Our sample included 1,093 Medicare beneficiaries in 2019 and 21,331 in 2020.
Compared to 2019, in 2020 the likelihood of patients seeking TMH care decreased with psychiatrists (53.2% vs 42.2%, OR=0.6, P<.001) and nurse practitioners (46.8% vs 19.3%, OR=0.5, P<.001), while the likelihood of receiving TMH care increased with clinical psychologists (1.6% vs 6.6%, OR=3.0, P<.001), licensed social workers (1.1% vs 7.4%, OR=5.4, P<.001) and primary care physicians (1.0% vs 24.7%, OR=26.5, P<.001).
In both 2019 and 2020, Medicaid-eligible beneficiaries were consistently more likely to seek TMH care with a nurse practitioner (OR=2.5, P<.001; OR=3.6, P<.001) and less likely to seek care with a psychiatrist compared to non-Medicaid-eligible beneficiaries (OR=0.05, P<.001; OR=0.9, P=0.026). Similarly, rural beneficiaries sought TMH care with nurse practitioners at a higher rate and with psychiatrists at a lower rate compared to urban beneficiaries. Finally, in 2020, the likelihood of seeking care with a primary care physician increased with age: compared to beneficiaries aged 65 to 74, the OR for those aged 75 to 84 was 1.3 (P<.001) and that for age ≥85 years was 1.6 (P<.001).
Conclusions: Our findings suggest that the public health emergency during the pandemic allowed for a wider variety of providers to deliver mental health care via telehealth among older adults. We found that older, Medicaid eligible and rural beneficiaries were more likely to seek care with non-specialty mental health care providers compared to their counterparts. Patterns of delivery in 2020 can inform policies and ensure accessibility of mental health care among older adults.
Keywords: Telemental health; Telebehavioral health; Telehealth; COVID-19; Medicare; Digital health; Major depression
Regional Differences in Optimal Acute Stroke Triage and Transport by Emergency Medical Services: A Discrete Event Simulation
PP-234 Health Services, Outcomes and Policy Research (HSOP)
Mehul D Patel1, Eliot Mcginnis2, Joseph Konstanzer3, Kristen Hassmiller Lich3
1Department of Emergency Medicine, University of North Carolina at Chapel Hill, Chapel Hill, USA
2Department of Statistics and Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, USA
3Department of Health Policy and Management, University of North Carolina at Chapel Hill, Chapel Hill, USA
Purpose: Acute strokes due to large vessel occlusion (LVO) are the most severe cases and often go untreated because of a lack of timely access to specialized care. Emergency medical services (EMS) can screen for LVO in the field and transport patients directly to LVO-capable stroke centers. We aimed to understand the impact of regional characteristics on the benefits of EMS LVO stroke triage and routing algorithms.
Methods: Using a discrete event simulation model, we simulated EMS-suspected acute stroke patients in abstract regions of varying geographic size and stroke center location. We considered the process from 9-1-1 call to acute treatment and projected time-dependent clinical outcomes using the modified Rankin Scale (mRS) at 90 days. Simulated outcomes were estimated across plausible intervention scenarios of EMS LVO screening accuracy (90% sensitivity and 60% specificity, 75% sensitivity and 75% specificity, and 60% sensitivity and 90% specificity) and additional transport time limits (10, 20, 30, 40, 50, and 60 min). Intervention effect was defined as the improvement in favorable neurologic outcome (mRS 0-1) with the EMS routing algorithm compared to the base case of transport to the nearest hospital. One hundred abstract regions were randomly generated, ranging from 900 to 16,900 square miles. A small region was defined as <2,500 square miles; a large region was defined as >12,100 square miles. The proportion of region area closest to a non-LVO-capable stroke center ("equipoise") was categorized into low (<40%) and high (>60%) groups. We simulated 2,500 patients over a year for each region. Mean intervention effects across 40 replications were computed.
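The screening-and-routing logic described above can be illustrated with a toy per-patient Monte Carlo sketch. This simplifies the authors' discrete event simulation considerably; all parameters (LVO prevalence, transport times, and the time-dependent outcome function) are invented for illustration.

```python
import random

# Toy Monte Carlo sketch of EMS LVO screening and routing. All numbers
# below (prevalence, times, outcome model) are illustrative assumptions,
# not values from the authors' discrete event simulation.
random.seed(0)

LVO_PREVALENCE = 0.3          # assumed share of suspected strokes with LVO
SENS, SPEC = 0.90, 0.60       # high-sensitivity / low-specificity screen
EXTRA_LIMIT = 60              # max additional transport minutes allowed

def good_outcome_prob(is_lvo, at_lvo_center, minutes_to_door):
    """Crude stand-in for a time-dependent mRS 0-1 outcome model:
    LVO patients do much better at an LVO-capable center, and every
    minute of delay erodes the chance of a favorable outcome."""
    base = 0.45 if not is_lvo else (0.40 if at_lvo_center else 0.20)
    return max(0.05, base - 0.002 * minutes_to_door)

def simulate(n=100_000, routing=True):
    """Share of simulated patients with a favorable outcome, with or
    without the EMS routing algorithm (base case: nearest hospital)."""
    good = 0
    for _ in range(n):
        is_lvo = random.random() < LVO_PREVALENCE
        near = random.uniform(5, 40)     # minutes to nearest hospital
        extra = random.uniform(5, 90)    # extra minutes to LVO center
        screen_pos = random.random() < (SENS if is_lvo else 1 - SPEC)
        divert = routing and screen_pos and extra <= EXTRA_LIMIT
        t = near + (extra if divert else 0)
        good += random.random() < good_outcome_prob(is_lvo, divert, t)
    return good / n

# Intervention effect: improvement in favorable outcomes vs base case.
effect = simulate(routing=True) - simulate(routing=False)
print(f"intervention effect: {effect:+.3%}")
```

With these made-up parameters the gain among correctly diverted LVO patients outweighs the delay imposed on false positives, so the effect comes out modestly positive; in the study, the size of this trade-off varies with region size and equipoise.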
Results: Screening by EMS with high sensitivity (90%) and low specificity (60%) produced the greatest benefit in clinical outcomes. The majority of regions (66%) had the greatest benefit with a 60-min additional transport time limit whereas only 3 regions were optimal at 30 min or less. Examples of intervention effects by large and small regions with high and low equipoise are shown (Figure). The greatest intervention effects were seen in large regions with high equipoise. Small regions reached maximum benefit at shorter additional transport time limits compared to large regions.
Conclusions: Regional characteristics play an important role in the potential benefit of EMS LVO stroke triage and transport decisions. Modeling studies can be useful to regional stroke system policy makers for optimizing prehospital triage strategies.
Keywords: emergency medical services, stroke, region, applied modeling, comparative effectiveness
Mean Intervention Effects
The improvement in clinical outcome (intervention effect) from EMS LVO stroke triage with a high sensitivity (90%) and low specificity (60%) screen varies by additional transport time limit and region size and equipoise.
Identifying attributes for a value assessment framework in China: a qualitative study
PP-235 Health Services, Outcomes and Policy Research (HSOP)
Mengmeng Zhang1, Yun Bao2, Yi Yang3, Melissa Kimber4, Mitchell Levine5, Feng Xie6
1Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
2Institute of Clinical Research and Evidence Based Medicine, Gansu Provincial Hospital, Lanzhou, Gansu, China
3Ministry of Health Key Lab of Health Technology Assessment, Fudan University, Shanghai, China
4Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada; Offord Centre for Child Studies, Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, Ontario, Canada
5Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada; Department of Medicine, Division of Clinical Pharmacology and Toxicology, McMaster University, Hamilton, Ontario, Canada.
6Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada; Centre for Health Economics and Policy Analysis, McMaster University, Hamilton, Ontario, Canada.
Purpose: A value assessment framework (VAF) is a promising tool for measuring the value of new health technologies and informing coverage policy making. However, most published VAFs were developed for high-income countries. This study aimed to identify important value attributes as part of the development of a VAF in China.
Methods: We used qualitative description, which is a research design well regarded for addressing applied research questions with health care policy and practice relevance. Specifically, we conducted open-ended semi-structured interviews with Chinese stakeholders, as well as a review and analysis of publicly available government documents related to health technology assessment (HTA) and coverage policies in China. Conventional content analysis and the constant comparison technique were used to generate relevant concepts and categories which we reported as value attributes and categories. Multiple criteria were used to determine the inclusion of a value attribute, with response levels of included attributes finalized via consensus meetings with experts in qualitative methods, HTA, and health economics.
Results: Thirty-four stakeholders living or working in China completed a semi-structured interview, including policymakers (n=4), healthcare providers (n=8), academic HTA researchers (n=6), patients and members of the general public (n=9), and industry representatives (n=7). In addition, 16 government documents related to HTA or coverage policy were included for analysis. Twelve value attributes, grouped into eight categories, were included in the VAF through the analytic process: 1) severity of disease; 2) health benefit, including survival, clinical outcomes, and patient-reported outcomes; 3) safety and tolerability; 4) economic outcomes, including costs to payer and to patients, and cost-effectiveness; 5) innovation; 6) organizational impact; 7) health equity; and 8) quality of evidence.
Conclusions: These twelve value attributes were identified to support the development of a VAF for value assessment of new health technologies and coverage policy making in China.
Keywords: value assessment framework; health technology assessment; qualitative description.
Economic evaluation of expanding dental workforce in dental professional shortage areas in the US
PP-236 Health Services, Outcomes and Policy Research (HSOP)
Sung Eun Choi1, Ye Shen2, Davene Wright3
1Department of Oral Health Policy and Epidemiology, Harvard School of Dental Medicine, Boston, USA
2Graduate School of Arts and Sciences, Harvard University, Boston, USA
3Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, USA
Purpose: Despite considerable efforts to reduce oral health disparities, large oral health disparities remain among US children. A dental professional shortage is thought to be one of the determinants associated with poor quality dental care and oral health disparities, particularly for those residing in rural communities. We sought to evaluate the theoretical impact of expanding the dental workforce via the National Health Service Corps (NHSC) program on children’s oral health and identify the cost-effectiveness of NHSC expansion.
Methods: Using oral health status and dental utilization data of US children (ages 0-19) from the nationally-representative National Health and Nutrition Examination Survey (2011-2016) linked to county-level dentist supply information, we constructed and simulated a stochastic microsimulation model of oral health outcomes over a 10-year period. We assessed changes in dental utilization, prevalence and cumulative incidence of dental caries, and complications from untreated caries given observed differences in population characteristics within the population residing in dental care Health Professional Shortage Areas (HPSAs) as compared to the general US population. Sensitivity analyses were conducted to assess how variations in NHSC budgets and length of awardees’ commitment to serve in HPSAs could alter the impact of the proposed program.
Results: In dental care HPSAs, dental caries prevalence was estimated to be 56.6% (95%CI: 56.2, 57.0), as compared to 52.5% (95%CI: 51.6, 53.4) in non-designated areas. Over the past eight years, NHSC budgets have increased by 3.9% on average. When funding for the NHSC program increased by 5%, dental caries prevalence was estimated to decrease by 0.35 (95%CI: 0.26, 0.44) percentage points. When funding for the NHSC program increased between 5% and 30%, the estimated decrease in the number of decayed teeth ranged from 0.27 (95%CI: 0.19, 0.36) to 1.64 (95%CI: 1.55, 1.73) million cases, total QALY gains ranged from 0.06 (95%CI: 0.04, 0.07) to 0.35 (95%CI: 0.33, 0.37) million QALYs, and total cost savings ranged from 85.09 (95%CI: 49.69, 120.48) to 508.23 (95%CI: 473.08, 543.39) million USD among children residing in dental care HPSAs from a healthcare perspective. Benefits of the intervention accrued most among Hispanic and low-income children residing in dental care HPSAs (Figure 1).
Figure 1.
Impact of increasing NHSC awards to dentists/dental students by 5%
Conclusions: Expanding the NHSC funding for dental practitioners could significantly reduce the risk of dental caries via increased access to dental care among children residing in HPSAs.
Keywords: Health Professional Shortage Areas, workforce, access to care
Who gets to be a mother after cancer?: assessing diverse cancer survivors’ access, experiences, and decision making about fertility care
PP-237 Patient and Stakeholder Preferences and Engagement (PSPE)
Aubri S Hoffman1, Tito Mendoza2, Roni Nitecki2, Jose Garcia2, J. Alejandro Rauh Hain2
1The Value Institute for Health and Care, University of Texas at Austin, Austin, United States of America
2The University of Texas MD Anderson Cancer Center, Houston, United States of America
Purpose: While most men diagnosed with cancer are offered sperm banking, only 23% of medically-eligible women (and as few as 5% of women of color) are referred for fertility care. This study sought to better understand women’s access, experiences, and decision making regarding motherhood and cancer.
Methods: A mixed methods approach using 5 survivor experience groups and the Motherhood After Cancer questionnaire assessed access, experiences, barriers, decision-making factors, and pathways to fertility care at diagnosis and after cancer. Thematic analyses summarized qualitative findings. Descriptive statistics and multivariate regression summarized quantitative results.
Results: Experience groups (n=26 survivors) identified 1) 23 clinical, personal, familial, and financial barriers; 2) 10 key decision-making values; 3) a lack of evidence about obstetric outcomes; and 4) recommendations for solutions that prioritize equity. On the questionnaire, 112 of 133 women (84.2%) wanted biological children after cancer, but 94 women (88.7%) were not able to achieve their goals. Less than half (47.4%) of women’s insurance plans covered fertility care, and 44.4% of those plans would not cover fertility care after a cancer diagnosis, leaving only 20% of women with any coverage. Notably, 55 women (58.5%) decided not to pursue fertility treatments solely because of the out-of-pocket costs.
Survivors reported difficulty finding evidence about cancer survivors’ obstetric outcomes (100%), culturally-tailored information (100%), or pregnancy-related risk of recurrence (87.2%). Women who were older or farther from their date of diagnosis reported more barriers to successfully achieving their motherhood goals (mean barrier score 70.0 out of 100 for ≥35 years old vs 54.5 for <35 years old, p=0.02; 71.5 for ≥6 years since diagnosis vs 54.3 for <6 years, p=0.01). Women recommended developing culturally-tailored brochures and websites to provide direct access to information and support groups, along with measures to create a learning cycle.
Conclusions: Despite an overall desire for motherhood after cancer, women experience significant barriers to information, access, coverage and resources at the time of diagnosis and as survivors. Women of color and women who are older experience greater barriers and a lack of evidence. However, these barriers could be addressed through co-designing culturally-relevant patient decision aids, peer groups, and patient-reported outcome measures to provide direct access, create a learning cycle, and address gaps in equity.
Keywords: shared decision making, cancer, fertility, needs assessment, mixed methods, equity
Quantifying preferences for watch-and-wait compared with surgery after a clinical complete response in rectal cancer: a discrete choice experiment
PP-238 Patient and Stakeholder Preferences and Engagement (PSPE)
Garima Dalal1, Stuart J. Wright1, Lee Malcomson2, Andrew G. Renehan2, Katherine Payne1
1Manchester Centre for Health Economics, Division of Population Health, Health Services Research and Primary Care, University of Manchester, Manchester, United Kingdom
2Division of Cancer Sciences, University of Manchester, Manchester, United Kingdom; Colorectal and Peritoneal Oncology Centre, Christie NHS Foundation Trust, Manchester, United Kingdom
Purpose: To quantify the preferences of individuals with and without experience of cancer when making a treatment decision between a watch-and-wait programme and surgery after a clinical complete response in rectal cancer.
Methods: An online discrete choice experiment quantified the preferences of a purposive sample of UK-based adults with and without experience of cancer (recruited using an online-panel provider; Pureprofile). Respondents chose their preferred labelled alternative from watch-and-wait and surgery. A literature review with input from patients and clinicians identified six attributes with four levels (delayed surgery, cancer metastases, faecal urgency, time until stoma, number of follow-up visits, health status) and one attribute with eight levels (survival, four levels for high uncertainty and four levels for low uncertainty). The experimental design consisted of four blocks of 10 choice sets generated to minimise the D-error. Respondents were asked background questions about themselves, their experience of cancer, their views on cancer treatment and their attitude towards healthcare decision-making. Choice data were analysed using a random parameters logit (RPL) model and uptake probabilities were calculated.
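For readers unfamiliar with how uptake probabilities follow from a logit model, the sketch below shows the standard calculation: each alternative's utility is a constant plus a weighted sum of its attribute levels, and the choice probability is the softmax of the two utilities. The coefficients and attribute levels are invented for illustration and are not the study's RPL estimates.

```python
import math

# Illustrative sketch of deriving an uptake probability from logit-model
# utilities (coefficients below are hypothetical, not the study's RPL
# estimates).
def logit_uptake(v_watch_and_wait, v_surgery):
    """Probability of choosing watch-and-wait given the two utilities."""
    ew, es = math.exp(v_watch_and_wait), math.exp(v_surgery)
    return ew / (ew + es)

# Utility = alternative-specific constant + sum(coef * attribute level).
# Hypothetical trade-off: watch-and-wait offers better health status but
# slightly lower survival; the delayed-surgery coefficient is set near
# zero, mirroring the finding that this attribute did not drive choices.
coefs = {"survival": 0.05, "health": 0.8, "delayed_surgery": 0.0}
v_ww = coefs["survival"] * 75 + coefs["health"] * 0.85  # watch-and-wait
v_sx = coefs["survival"] * 78 + coefs["health"] * 0.75  # surgery
print(f"P(watch-and-wait) = {logit_uptake(v_ww, v_sx):.2f}")
```

When the two utilities are equal the probability is exactly 0.5, which is why uptake probabilities near 50% (as reported below) indicate no intrinsic preference for either alternative.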
Results: Choice data were collected from 398 (51% female; mean age 59 years) individuals with experience of cancer and 949 (51% female; mean age 59 years) individuals without experience of cancer. Estimated coefficients from the RPL model indicated that six out of seven attributes were statistically significant predictors of choice in both samples. Chance of needing a delayed surgery did not influence the choice between alternatives in either sample. In general, individuals preferred longer time until stoma, lower chance of metastases and faecal urgency, lower number of follow-up visits, and increased survival and health. The absence of statistical significance for the alternative-specific constant term indicated that respondents in either sample did not have an intrinsic desire for watch-and-wait or surgery. Evidence of preference heterogeneity was present for all attributes in the cancer-experience sample and six attributes within the cancer-naïve sample. Calculated uptake indicated that there was a 50% and 53% probability of a respondent choosing watch-and-wait in the cancer-experience and cancer-naïve sample, respectively.
Conclusions: This study quantified preferences for watch-and-wait compared with surgery and indicated a need for explaining the benefits and harms of each option to enable patients to make an informed decision, indicating a potential role for a patient decision aid in this context.
Keywords: Rectal cancer, cancer, discrete choice experiment, preference elicitation
When does a decision curve analysis require quantifying end-users' utilities for sensitivity and specificity?: An empiric case study
PP-239 Patient and Stakeholder Preferences and Engagement (PSPE)
Gary Eric Weissman1, Zahra Faraji2
1Palliative and Advanced Illness Research (PAIR) Center, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA; Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA; Pulmonary, Allergy, and Critical Care Division, Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
2Palliative and Advanced Illness Research (PAIR) Center, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
Purpose: To quantify the relative weighting of sensitivity and specificity that would lead to selecting an alternative prediction model while optimizing net benefit in a decision curve analysis.
Methods: Using data from the National Lung Screening Trial (NLST), we fit penalized regression (elastic net), random forest, and XGBoost models to predict a future diagnosis of lung cancer. The data were randomly split into 80% for training and 20% for testing. Each individual's age, gender, race, smoking history, and other clinical and demographic features were used to predict a binary outcome of a diagnosis of lung cancer during the study period. Model hyperparameters were determined by maximizing the area under the receiver operating characteristic curve (AUC) using five-times repeated 10-fold cross validation in the training set. We created decision curves for each model, compared them to treat-all and treat-none strategies, and calculated net benefit using a range of weights for sensitivity relative to specificity.
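The weighting described above can be made concrete with a short sketch of the standard decision-curve net benefit, NB = TP/N - (FP/N) * p_t/(1 - p_t), extended with a relative weight w on the true-positive (sensitivity) term. Folding the weight in this way is one plausible formulation, not necessarily the authors' exact calculation.

```python
# Sketch of (weighted) net benefit at a classification threshold p_t.
# Standard decision-curve net benefit: NB = TP/N - FP/N * p_t/(1 - p_t).
# A relative weight w on sensitivity is folded in by scaling the
# true-positive term -- an assumed formulation for illustration.
def net_benefit(tp, fp, n, p_t, w=1.0):
    return w * tp / n - (fp / n) * p_t / (1 - p_t)

# Treat-all strategy: everyone is classified positive, so TP equals the
# number of cases and FP the number of non-cases. Prevalence of 3.9%
# matches the NLST cohort described in the abstract.
n, prevalence, p_t = 100_000, 0.039, 0.05
tp_all, fp_all = int(n * prevalence), n - int(n * prevalence)
print(net_benefit(tp_all, fp_all, n, p_t))          # unweighted
print(net_benefit(tp_all, fp_all, n, p_t, w=3.0))   # sensitivity weighted 3:1
```

At a 5% threshold, treat-all has negative unweighted net benefit in this cohort, but weighting sensitivity 3:1 turns it positive, which mirrors the finding that treat-all dominates as the relative weight of sensitivity grows.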
Results: Among 53,542 total eligible study participants, 2,054 (3.9%) were eventually diagnosed with lung cancer. In the hold-out test set, the best regression, random forest, and XGBoost models performed similarly with AUCs of 0.69 (95% CI 0.67 to 0.72), 0.68 (95% CI 0.66 to 0.71), and 0.69 (0.67 to 0.72), respectively. The net benefit varied for all models across a range of relative weights (Figure). In the threshold range of 0% to 20%, all models similarly dominated at unweighted values of sensitivity and specificity. As the relative weight of sensitivity increased, the treat-all strategy produced the highest net benefit.
Figure.
Net benefit of each strategy over a range of weightings of the sensitivity relative to the specificity.
Conclusions: Decision curve analysis is a useful approach for determining an optimal prediction strategy for a given range of classification thresholds. However, disentangling the assumed weights of sensitivity and specificity, which are likely to vary by clinical context and user preference, from the calculation of net benefit would lead to more personalized results. For the case study presented here, a weight of 3:1 would be required to alter the optimal strategy based on net benefit. Because such a weighting is plausible, explicit quantification of end-users' utilities should be considered. This approach can guide model developers in estimating the value of using more costly conjoint methods to explicitly quantify end-users' utilities for sensitivity and specificity when training prediction models that underlie decision support systems.
Keywords: decision curve analysis, net benefit, prediction models, user preferences
Patient preferences for out-of-hospital cardiac arrest care attributes in South Africa: a discrete choice experiment
PP-240 Patient and Stakeholder Preferences and Engagement (PSPE)
Kalin Werner1, Willem Stassen3, Elzarie Theron3, Lee A Wallis3, Tracy Kuo Lin2
1Institute for Health & Aging, Department of Social and Behavioral Sciences, University of California, San Francisco, San Francisco, CA, USA & Division of Emergency Medicine, University of Cape Town, Cape Town, South Africa
2Institute for Health & Aging, Department of Social and Behavioral Sciences, University of California, San Francisco, San Francisco, CA, USA
3Division of Emergency Medicine, University of Cape Town, Cape Town, South Africa
Purpose: To conduct a comprehensive assessment of the preferences of patients and their family members for attributes (and levels) of out-of-hospital cardiac arrest (OHCA) management in a low-resource setting.
Methods: We developed and piloted a discrete choice experiment (DCE) survey instrument to quantify the importance of attributes associated with OHCA care. The DCE survey was administered to participants 18 years or older across South Africa between April and May 2022. Participants were presented with 18 paired choice tasks, each comprising five attributes with three to five levels: distance to closest adequate facility (10, 60, and 360 KM), provider of care (bystander, family member, religious leader, basic life support trained individual, advanced life support trained individual), response time (7, 14, and 21 minutes), chance of survival (1%, 8%, 16%), and transport cost (2,500, 5,000, and 10,000 ZAR). Mixed logit choice models were used to analyze and evaluate variation in respondents' preferences for selected attributes based on sociodemographic characteristics.
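The choice model's core logic can be illustrated with a minimal multinomial logit sketch (a mixed logit additionally draws the coefficients randomly per respondent; the numeric weights below treat the ratio-scale estimates reported in the table as illustrative odds-ratio-style inputs, an interpretive assumption rather than a re-estimation):

```python
import numpy as np

# Illustrative log-scale utility weights, taken from three of the
# ratio-scale estimates reported in the table below.
beta = {
    "distance_60km": np.log(0.676),
    "provider_als": np.log(1.467),
    "survival_16pct": np.log(3.819),
}

def choice_probabilities(utilities):
    """Multinomial logit: P(choose j) = exp(U_j) / sum_k exp(U_k)."""
    u = np.asarray(utilities, dtype=float)
    e = np.exp(u - u.max())  # subtract max for numerical stability
    return e / e.sum()

# Hypothetical paired task: scenario B offers an ALS-trained provider
# and 16% survival but a facility 60 KM away, relative to baseline A.
u_a = 0.0
u_b = beta["provider_als"] + beta["survival_16pct"] + beta["distance_60km"]
p = choice_probabilities([u_a, u_b])
```

In this hypothetical task the survival and provider gains outweigh the distance penalty, so scenario B receives the larger choice probability.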
Results: 2,270 community members participated in the survey, completing a total of 40,860 choice tasks. All attribute levels, except care delivered by religious leaders under the provider attribute, were found to be statistically significant. The levels most preferred by patients (compared with other levels of the same attribute) were care delivered by advanced life support trained providers, who respond to the scene in 7 minutes, where the closest adequate facility is 10 KM away, the patient has a 16% chance of survival, and the cost of transport is 2,500 ZAR. Preference orderings for all levels under each attribute were logically consistent; for example, respondents preferred shorter response times to longer response times, closer facilities to farther facilities, and higher probabilities of survival to lower ones. The preference ordering for the provider attribute was advanced life support trained provider (1.47, CI 1.39-1.55), family member (1.40, CI 1.32-1.48), basic life support trained individual (1.38, CI 1.30-1.46), bystander (base), and then religious leader (0.98, CI 0.93-1.04). Respondents were more likely to choose resuscitation care provided by family members rather than bystanders or BLS trained providers.
Conclusions: In low resource settings, it may align with patients’ preference to include targeted resuscitation training for family members of individuals with high risk for OHCA as part of OHCA intervention strategies.
Keywords: out-of-hospital cardiac arrest, discrete choice experiment, emergency medical services, low-resource setting, patient preference
Mixed logit choice model analysis on attributes and levels of OHCA care
| VARIABLES | Mixed logit regression (SE) |
| Base (10KM, bystander, 7 minutes, 1 out of 100, 2,500ZAR) | ref |
| Distance to closest facility - 60KM | 0.676*** (0.014) |
| Distance to closest facility - 360KM | 0.368*** (0.011) |
| Provider of care - Family member | 1.396*** (0.039) |
| Provider of care - Religious leader | 0.981 (0.027) |
| Provider of care - Basic life support trained individual | 1.378*** (0.039) |
| Provider of care - Advanced life support trained individual | 1.467*** (0.041) |
| Response time - 14 minutes | 0.735*** (0.015) |
| Response time - 28 minutes | 0.480*** (0.012) |
| Probability of survival - 8 out of 100 | 2.279*** (0.061) |
| Probability of survival - 16 out of 100 | 3.819*** (0.144) |
| Cost of transport - 5,000ZAR | 0.843*** (0.018) |
| Cost of transport - 10,000ZAR | 0.632*** (0.016) |
| Observations | 81,328 |
Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1
Multivariate Empirical Calibration of a Simulation Model of Opioid Use Disorder
PP-241 Quantitative Methods and Theoretical Developments (QMTD)
Jianing Wang1, Madushani R. W. M. A.3, Benjamin P Linas4, Laura F White1, Stavroula Chrysanthopoulou2
1Department of Biostatistics, School of Public Health, Boston University, Boston, MA, US
2Department of Biostatistics, School of Public Health, Brown University, Providence, RI, US
3Section on Infectious Disease, Boston Medical Center, Boston, MA, US
4School of Public Health, Boston University, Boston, MA, US; Section on Infectious Disease, Boston Medical Center, Boston, MA, US
Purpose: Researching Effective Strategies to Prevent Opioid Death (RESPOND) is a dynamic population, state-transition model evaluating the impact of opioid-related treatments on multiple health outcomes by simulating the Massachusetts (MA) opioid use disorder (OUD) population. Structural complexity and sparsity of available data pose challenges to model calibration. Previously, we used a naïve empirical approach (naïve EC) to calibrate the model outcomes to the target data in a univariate and sequential manner with a pre-defined order. In this work, we propose a multivariate version of the naïve EC to account for correlation among outcomes.
Methods: The multivariate approach calibrates RESPOND to fatal overdoses, detox admissions, and OUD population size between 2013 and 2015. First, we run the naïve EC approach to obtain a set of calibrated parameters. Second, we estimate the covariance matrix of the model outputs using simulated outcomes generated from the calibrated parameters from the naïve EC approach. Third, we compare the simulated model outcomes from Latin Hypercube sampled parameter sets to the pre-specified calibration targets using Mahalanobis distances (M-D) assuming the squared M-D follows a Chi-square distribution. Lastly, we accept values of the model parameters for which the computed squared M-D falls within the acceptance region with upper tail probability of 0.05.
We compare the distributions of calibrated outcomes from the two approaches and their coverage of the 95% CIs of calibration target data.
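The acceptance rule can be sketched as follows (a minimal illustration assuming the outcome covariance matrix has already been estimated, as in the second step):

```python
import numpy as np
from scipy import stats

def accept(sim_outcomes, targets, cov, alpha=0.05):
    """Accept a candidate parameter set when the squared Mahalanobis
    distance between its simulated outcomes and the calibration targets
    falls inside the chi-square acceptance region with upper-tail
    probability alpha."""
    d = np.asarray(sim_outcomes, float) - np.asarray(targets, float)
    d2 = d @ np.linalg.solve(np.asarray(cov, float), d)  # squared M-D
    return bool(d2 <= stats.chi2.ppf(1.0 - alpha, df=d.size))
```

Because the covariance matrix downweights directions in which the outcomes co-vary strongly, a parameter set can be accepted even when individual outcomes miss their univariate targets, which is what widens the acceptance probability relative to the naïve EC.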
Results: Acceptance was based on 5 million RESPOND simulation runs. The multivariate approach doubled the selection probability compared to the naïve EC, and the calibrated simulation outcomes were closer to the mean of the target data (Figure 1). Except for total OUD population size in 2013 and 2014, the interquartile ranges of the accepted outcomes fall within the 95% CIs of the calibration targets. A sensitivity analysis fixing the temporal relationship with doubled and halved correlation strengths between outcomes did not significantly affect calibration performance.
Conclusions: We present a model calibration algorithm using a weighted average of distances between simulated outputs and multiple calibration targets, where the weights are informed by the outcomes' correlation structure. This approach demonstrates the existence of non-zero temporal correlation and correlation between outcomes; incorporating these relationships leads to higher acceptance probabilities and improved coverage of the targets.
Keywords: Complex Predictive Modeling; Cohort-based simulation model; Model Calibration; OUD shared decision making
Distributions of computed Mahalanobis Distances from the two approaches
Figure referenced in the Results section, showing the distributions of the original RESPOND outputs and the outputs from the naïve and multivariate empirical calibration approaches, compared with the 95% confidence intervals of the target data.
Mathematical Models for Bladder Cancer: a systematic review
PP-242 Quantitative Methods and Theoretical Developments (QMTD)
Stavroula Chrysanthopoulou, Timothy Hedspeth
Department of Biostatistics, Brown University School of Public Health, Providence, RI, USA
Purpose: The purpose of this study is to conduct a systematic review of published mathematical models of bladder cancer (BCA) that use simulation to describe disease course and evaluate the impact of cancer detection and control interventions.
Methods: We electronically searched PubMed, Web of Science, Google Scholar, and the Brown University Library portal to identify English-language papers, without date restrictions, incorporating mathematical or statistical models for simulation-based analyses (Figure 1). Search strategies combined terms related to bladder cancer with terms for (micro)simulation, computer-based models, state or dynamic transition models, risk prediction, discrete event simulation, and cohort-/population-based models.
Figure 1.
Flow Diagram of the systematic review
We extract information from relevant models about important characteristics including: 1) simulation level (meso- [e.g., cell dynamics] vs. macro- [population] scale), 2) discrete vs. continuous time, 3) deterministic vs. stochastic, 4) mathematical approach, 5) model structure (distinct states and transition rules), 6) data sources, 7) model inputs and outputs, 8) calibration, 9) validation, 10) uncertainty propagation, and 11) (as applicable) the approach to choosing optimal acts.
Results: Of the 4,765 identified records, 3,159 are eligible for review (an ongoing process). Of the 234 fully reviewed articles, 55 mention some type of mathematical simulation model for analyzing data; the vast majority of these represent the US population, with about 10% representing European populations. Differential equations are commonly used for describing the biology and disease progression of BCA (20%). Decision trees are often used in comparative effectiveness analysis (CEA) to evaluate and compare the effectiveness and burden of different BCA interventions (21.8%). Only about 10% use microsimulation models. The vast majority of simulation models (65.5%) focus on comparing treatments, and a considerable number on evaluating screening protocols (25.5%). Several models include specific assumptions about tumor growth (21.8%) and/or a detailed description of the natural history component. Key simulated outcomes are usually diagnosis, recurrence, and mortality, with non-muscle-invasive (NMIBC) and muscle-invasive (MIBC) bladder cancer the most commonly modeled types. Many of the articles lack sufficient information and detail for fully understanding the model structure, assumptions, input data, calibration, and validation methods.
Conclusions: Although simulation techniques have been widely used to explore important facets of the disease, there is a paucity of large-scale simulation models for synthesizing information from multiple sources to describe disease dynamics and evaluate different interventions and health policies for bladder cancer in a comprehensive manner.
Keywords: Simulation model, bladder cancer, cost-effectiveness analysis, comparative effectiveness research, systematic review
Similar-Performing Sets of Treatment Choices for Personalized Management of Hypertension
PP-243 Quantitative Methods and Theoretical Developments (QMTD)
Wesley J Marrero
Thayer School of Engineering, Dartmouth College, Hanover, NH, USA; Health Equity and Action Lab, The Dartmouth Institute for Health Policy and Clinical Practice, Lebanon, NH, USA
Purpose: Translating hypertension treatment guidelines to practice may be challenging due to variability in physicians’ opinions and patients’ preferences, uncertainty in medications’ effects, and other implementation barriers. While current hypertension treatment guidelines provide a single recommendation, the benefits from other treatment regimens may not be statistically different. To account for implementation barriers, this work presents a method to obtain sets of choices that fall within a margin of certainty of the recommendations provided by hypertension treatment guidelines.
Methods: This research generates similar-performing sets to the most recent hypertension treatment recommendations from the American College of Cardiology and the American Heart Association. The effect of these treatment guidelines on patients’ health is evaluated using a Markov decision process simulation model over a 10-year planning horizon. Based on this model, the sets of choices are derived as nonparametric simultaneous confidence intervals on the difference between the quality-adjusted life years obtained following the treatment guidelines and the remaining alternatives. The proposed methodology is applied to a set of patient profiles representative of a population with a high prevalence of atherosclerotic cardiovascular disease (ASCVD) that can benefit greatly from hypertension treatment.
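One way to sketch the construction of such sets is with Bonferroni-adjusted percentile intervals over simulated QALY draws (a stand-in assumption; the paper's nonparametric simultaneous confidence intervals may be constructed differently):

```python
import numpy as np

def similar_performing_set(qaly_draws, guideline, alpha=0.05):
    """qaly_draws: array of shape (n_sims, n_treatments) of simulated
    QALYs per treatment; guideline: column index of the guideline action.
    Keeps every treatment whose adjusted percentile interval on the QALY
    difference vs. the guideline reaches zero, i.e. cannot be ruled out
    as similar-performing."""
    draws = np.asarray(qaly_draws, float)
    diff = draws - draws[:, [guideline]]   # QALY difference vs. guideline
    k = max(draws.shape[1] - 1, 1)         # number of comparisons
    upper = np.quantile(diff, 1.0 - alpha / (2 * k), axis=0)
    keep = upper >= 0.0
    keep[guideline] = True                 # guideline is always in its own set
    return np.flatnonzero(keep)
```

Alternatives whose entire interval lies below zero are excluded, so higher-risk patients (with sharper QALY differences between treatments) naturally end up with smaller sets, as in the Results.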
Results: In general, how much flexibility a physician may receive to treat a patient depends on the patient’s characteristics (e.g., age and blood pressure levels). Patients with higher risk for ASCVD events typically obtain fewer treatment choices in the sets than patients with lower risk. For example, Figure 1 shows the results of applying the proposed methodology to a 40-year-old, non-diabetic, non-smoker individual with stage 2 hypertension and normal cholesterol levels. For this patient, the sets of treatment choices contain two to five alternatives across the planning horizon. Decreasing the patient’s blood pressure to elevated or stage 1 hypertension usually increases the number of treatment choices. Conversely, increasing the patient’s age normally results in fewer treatment choices.
Conclusions: This work presents a novel method for obtaining personalized sets of antihypertensive treatment choices. The methodology provides clinicians and their patients with treatment options while ensuring similar health benefits to the current clinical guidelines. Sets of similar-performing treatment choices could have many advantages in medical practice, such as increased robustness and flexibility.
Keywords: Hypertension treatment, Markov decision process, statistical inference, simulation
Sets of Treatment Choices for Sample Patient
Test sensitivity in a prospective cancer screening program: Bias of a common estimator
PP-244 Quantitative Methods and Theoretical Developments (QMTD)
Jane Lange1, Yibai Zhao2, Kemal Caglar Gogebakan1, Antonio Olivas Martinez3, Marc D. Ryser4, Charlotte C. Gard5, Ruth Etzioni6
1Oregon Health and Science University, Knight Cancer Institute, Portland, OR
2Fred Hutchinson Cancer Research Center, Seattle, WA
3University of Washington School of Public Health, Department of Biostatistics, Seattle, WA
4Duke University, Department of Population Health Sciences and Department of Mathematics, Durham, NC
5New Mexico State University, Department of Economics, Applied Statistics, and International Business, Las Cruces, NM
6Fred Hutchinson Cancer Research Center, Seattle, WA & University of Washington School of Public Health, Department of Biostatistics, Seattle, WA & University of Washington School of Public Health, Department of Health Services, Seattle, WA
Purpose: Sensitivity is a key metric for verifying the ability of a diagnostic test to correctly identify patients with a disease. In a prospective screening study, however, the sensitivity of a cancer screening test cannot be observed directly; instead, proxy measures are frequently used to estimate the true sensitivity.
Methods: We develop an analytic evaluation of a commonly used proxy measure, the empirical sensitivity, defined as the ratio of screen-detected cancers to the sum of screen-detected and interval cancers. We show that in many settings empirical sensitivity is biased as an estimate of true preclinical sensitivity and identify conditions that reduce the bias.
Results: We compare empirical sensitivity with true sensitivity across sojourn time distributions and screening intervals. In general, empirical sensitivity overestimates true sensitivity when the sojourn time is long relative to the screening interval. Conversely, empirical sensitivity underestimates true sensitivity when the screening interval is long relative to the sojourn time. As true sensitivity decreases or mean sojourn time increases, the optimal interval/lookout time lengthens. The variation in the optimal interval is considerably greater when the true sensitivity is low than when it is close to 1. Under an exponential sojourn distribution, a typical screening interval of 1-2 years appears optimal for high true sensitivities, but the optimal interval ranges from 1 to over 6 years for low true sensitivities. We show that, based on the reported estimate of 0.869 for the empirical sensitivity of digital mammography in the Breast Cancer Surveillance Consortium (BCSC), the corresponding true sensitivity is 0.82 under a mean sojourn time of 3.6 years estimated from breast cancer screening trials. However, the BCSC estimate is likely overly optimistic for true sensitivity under more contemporary, longer estimates of mean sojourn time.
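The direction of the bias can be reproduced with a small Monte Carlo sketch (an illustration under simplifying assumptions, not the authors' analytic derivation): preclinical onset is uniform within one screening round, sojourn time is exponential, and each screen during the preclinical phase detects with probability equal to the true sensitivity.

```python
import numpy as np

def empirical_sensitivity(true_sens, mean_sojourn, interval,
                          n=40_000, seed=0):
    """Monte Carlo sketch of empirical sensitivity.

    Onset of the preclinical phase is uniform within one screening round,
    sojourn time is exponential, and each screen occurring while the
    cancer is preclinical detects it with probability true_sens.  Cancers
    that surface clinically before detection count as interval cancers;
    returns screen-detected / (screen-detected + interval cancers)."""
    rng = np.random.default_rng(seed)
    onset = rng.uniform(0.0, interval, n)
    clinical = onset + rng.exponential(mean_sojourn, n)  # surfacing time
    detected = 0
    for tc in clinical:
        t = interval                      # first screen after onset
        while t < tc:                     # screens at interval, 2*interval, ...
            if rng.random() < true_sens:
                detected += 1
                break
            t += interval
    return detected / n
```

With a true sensitivity of 0.8, this sketch yields an empirical sensitivity above 0.8 when the mean sojourn time is long relative to the screening interval (repeat screens catch slow-growing cancers), and well below 0.8 when it is short (fast-surfacing cancers become interval cancers), matching the pattern described above.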
Conclusions: Our analytic approach demonstrates that empirical sensitivity is generally a biased proxy measure for true sensitivity in cancer screening settings. Nomenclature distinguishing empirical sensitivity from true sensitivity is needed in the applied literature.
Keywords: empirical sensitivity, interval cancer, screen-detected cancer, mean sojourn time, screening interval
Limited Evidence of Shared Decision Making for Prostate Cancer Screening in Audio-Recorded Primary Care Visits Among Black Men and their Providers
PP-245 Decision Psychology and Shared Decision Making (DEC)
Elizabeth Stevens1, Jerry Thomas2, Natalia Martinez Lopez1, Angela Fagerlin3, Shannon Ciprut2, Michele Shedlin4, Heather Taffet Gold1, Huilin Li1, J Kelly Davis5, Ada Campagna5, Sandeep Bhat6, Rueben Warren7, Peter Ubel5, Joseph Errick Ravenell1, Danil Victor Makarov2
1Departments of Population Health, NYU Langone Health, 227 E 30th St, New York, USA
2Departments of Population Health and Urology, NYU Langone Health, 227 E 30th St, New York, USA; VA New York Harbor Healthcare System, 423 E 23rd St, New York, USA
3Department of Population Health Sciences, University of Utah Spencer Eccles School of Medicine, Salt Lake City, USA; VA Salt Lake City Informatics Decision-Enhancement and Analytic Sciences (IDEAS) Center for Innovation, Salt Lake City, USA
4College of Nursing, New York University, New York, USA
5The Fuqua School of Business, Duke University, Durham, USA
6Sunset Park Health Council, Brooklyn, NY, USA
7National Center for Bioethics in Research and Health Care, Tuskegee University, Tuskegee, USA
Purpose: To evaluate the use of shared decision making (SDM) in routine primary care appointments, based on direct observation of patients and providers.
Methods: Qualitative analysis of audio-recorded patient-provider interactions on prostate cancer screening. Participants were five primary care clinic providers and 13 patients at a Federally Qualified Health Center. Eligible patients were: 1) 40-69 years old, 2) Black, 3) male, and 4) attending the clinic for routine primary care appointment. The patient-provider interactions were coded using the Physician Recommendation Coding System (PhyReCS) standards to assess the presence/absence and quality of SDM within a consultation.
Results: While nearly all providers recommended PSA screening, and most patients received it, only 3 patients were directly asked for their screening preferences or healthcare goals. Few patients were asked about their prostate cancer knowledge (2), potential symptoms (3), or family history (6). Most providers consistently discussed the disadvantages (80%) and advantages (80%) of PSA screening.
Conclusions: We found limited SDM during PSA screening consultations among Black men and their providers. The counseling that did take place utilized components of SDM but only inconsistently and/or incompletely. Efforts are needed to develop strategies to improve SDM for prostate cancer screening during patient-provider encounters. This is especially salient for diverse patient populations to ensure the quality of PSA screening and promote health equity in populations shown to be highly vulnerable to morbidity from prostate cancer.
Keywords: prostate cancer, shared decision making, communication
Table 1.
PSA screening interactions between Black male patients and their providers during primary care visits at a Federally Qualified Healthcare Center
A study to compare a CHW-led versus a physician-led intervention for prostate cancer screening decision-making among Black men
PP-246 Decision Psychology and Shared Decision Making (DEC)
Natalia Martinez Lopez1, Danil Victor Makarov2, Jerry Thomas2, Shannon Ciprut2, Theodore Hickman1, Helen Cole1, Michael Fenstermaker3, Heather Taffet Gold1, Stacy Loeb2, Joseph Errick Ravenell1
1Department of Population Health, NYU Langone Health, New York, USA
2Departments of Population Health and Urology, NYU Langone Health, New York, USA; VA New York Harbor Healthcare System, New York, USA
3Department of Urology, NYU Langone Health, New York, USA
Purpose: To compare the impact of a CHW-led vs. a physician-led educational session in counseling Black men about the risks and benefits of PSA screening.
Methods: One hundred and eighteen Black men were recruited from 8 predominately Black churches in Harlem, NY and attended a prostate cancer screening education session led by either a CHW or a physician. Both arms utilized a decision aid that explains the benefits, risks, and controversies of PSA screening and presents prostate cancer rates, combined with decision coaching. Participants completed baseline and post-teaching questionnaires to assess knowledge, decisional conflict, and perceptions of the intervention. We performed Fisher's exact or chi-squared tests to determine the association between teacher (CHW vs. physician) and our outcomes of interest. For decisional conflict, we ran the Wilcoxon signed-rank test to compare differences from pre-test to post-test in all participants. We then conducted a Wilcoxon rank-sum test to compare differences between the physician-led and CHW-led groups.
Results: There was no significant difference in the change in decisional conflict by group (24.31 physician-led vs. 30.64 CHW-led, p=0.31). The CHW-led group showed significantly greater improvement in knowledge post-intervention (change of 2.6, SD=2.81 vs. 5.1, SD=3.19, p<0.001). However, those in the physician-led group were more likely to agree that the speaker knew a lot about PSA testing (p<0.001) and were more likely to trust the speaker (p<0.001).
Conclusions: CHW-led interventions can effectively assist Black men with complex health decisions in community-based settings. This approach may improve prostate cancer knowledge, and equally minimize decisional conflict compared to a physician-led intervention.
Keywords: prostate cancer, community health worker, shared decision-making
Table 1.
Individual Perceptions of the Education Session
Individual perceptions about the education session were measured on a scale from 1 to 4 where 1=disagree, 2=somewhat disagree, 3=somewhat agree, and 4=agree. P-values calculated using Wilcoxon Rank Sum Test.
Assessment of capacity of health care interventions to benefit disadvantaged populations: a case study in Indigenous Australians
PP-247 Health Services, Outcomes and Policy Research (HSOP)
An Tran Duy1, Robyn Mcdermott2, Luke Burchill3, Philip Clarke4
1Centre for Health Policy, Melbourne School of Population and Global Health, University of Melbourne, Melbourne, Australia
2Centre for Chronic Disease Prevention, James Cook University, Cairns, Australia
3Melbourne School of Medicine, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Melbourne, Australia
4Health Economics Research Centre, Nuffield Department of Population Health, University of Oxford, Oxford, UK
Purpose: This study aims to (1) estimate gaps in life expectancy (LE) between an Indigenous cohort and the Australian general population, and (2) assess the capacity of three cardiovascular disease (CVD) preventive strategies, namely cholesterol lowering therapy, blood pressure lowering therapy and smoking cessation, to close these gaps.
Methods: Data obtained from the Well Person’s Health Check (WPHC) study and linked death records were used in this study. The WPHC was a survey conducted between 1998 and 2000 and included 3,508 people in 26 Indigenous communities in North Queensland. We fitted a Gompertz proportional hazards model using age as time scale to the survival data and used the fitted model to estimate age and sex-specific survival rates and LE of Indigenous people. To calculate the gaps in LE between Indigenous and general populations, we used the life table of the Australian population in year 2007, the middle year of the follow-up period in the survival data. We used the mortality rate ratio of people receiving a specific intervention compared to no intervention to adjust the survival rates, based on which we estimated the impact of an intervention on LE. The mortality rate ratios were estimated based on meta-analyses of clinical trials.
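The life-expectancy calculation can be sketched as follows (parameter values are hypothetical, not the fitted WPHC estimates; under proportional hazards, applying a mortality rate ratio simply scales the Gompertz level parameter):

```python
import numpy as np

def gompertz_survival(age, a, b):
    """S(age) under a Gompertz hazard h(t) = a * exp(b * t)."""
    return np.exp(-(a / b) * (np.exp(b * age) - 1.0))

def remaining_le(baseline_age, a, b, rr=1.0, max_age=110.0, step=0.01):
    """Remaining life expectancy at baseline_age as the integral of the
    conditional survival curve S(t)/S(baseline_age); rr is a mortality
    rate ratio applied to the hazard (rr < 1 for an effective
    intervention)."""
    ages = np.arange(baseline_age, max_age, step)
    s = gompertz_survival(ages, rr * a, b) / gompertz_survival(baseline_age, rr * a, b)
    return float(np.sum(s) * step)  # simple Riemann sum

# Hypothetical parameters: LE gain at age 50 from an intervention
# with mortality rate ratio 0.8.
gain = remaining_le(50.0, a=2e-5, b=0.1, rr=0.8) - remaining_le(50.0, a=2e-5, b=0.1)
```

Differencing remaining LE under the intervention-adjusted and unadjusted hazards, as in the last line, mirrors how the study translated trial-based mortality rate ratios into years of life gained.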
Results: The remaining LEs of Indigenous men and women aged 35-50 years were 37.16-25.16 and 42.97-30.25 years, respectively, which were 8.54-6.54 and 6.73-5.10 years lower than those of men and women from the general population. These gaps were larger for younger people and smaller for older people. With an average reduction of 15 mm Hg in systolic blood pressure, 50-year-old Indigenous men and women could live 2.44 and 2.62 years longer, respectively. An average reduction of 2 mmol/L in LDL-cholesterol resulted in increases of 2.47 and 2.36 years in the LEs of 50-year-old Indigenous men and women, respectively. With 50% of Indigenous people being current smokers, smoking cessation at age 50 years resulted in increases of 2.41 and 2.58 years in LEs across the Indigenous male and female populations aged 50 years, respectively.
Conclusions: The gaps in LE between Indigenous people and the general population were substantial. Implementation of CVD preventive therapies in Indigenous people could reduce these gaps by 5.0-2.5 years in those aged 35-50 years. Our findings confirm that CVD prevention is crucial for Indigenous Australians.
Keywords: Indigenous Australians; Life expectancy; Cardiovascular disease prevention; Capacity to benefit; Health equity; Health inequalities
Figure 1.
Ages at death of the Australian general population and Indigenous people receiving different cardiovascular disease preventive strategies.
In this figure, ages at death of men and women conditional on having lived to a baseline age between 35 and 74 years are plotted against the baseline age. In each plot, the curves show ages at death of (1) the Australian general population, (2) Indigenous people of whom the current smokers stop smoking at the baseline age, (3) Indigenous people starting blood pressure lowering therapy at the baseline age and achieving an average reduction in systolic blood pressure of 15 mm Hg, (4) Indigenous people starting cholesterol lowering therapy at the baseline age and achieving an average reduction in LDL-cholesterol of 2 mmol/L, and (5) Indigenous people receiving no further cardiovascular disease preventive strategies.
Failure to balance social contact matrices can bias models of SARS-CoV-2 transmission
PP-248 Quantitative Methods and Theoretical Developments (QMTD)
Mackenzie A. Hamilton1, Jesse Knight2, Sharmistha Mishra3
1MAP Centre for Urban Health Solutions, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, Canada
2MAP Centre for Urban Health Solutions, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, Canada; Institute of Medical Science, University of Toronto, Toronto, Canada
3MAP Centre for Urban Health Solutions, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, Canada; Institute of Medical Science, University of Toronto, Toronto, Canada; Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Canada; Department of Medicine, St. Michael's Hospital, Unity Health Toronto, Toronto, Canada
Purpose: Social contact survey data are commonly used to define patterns of mixing (“who contacts whom”) in transmission models of infectious diseases. However, empirical contact data rarely reflect the balanced (i.e. reciprocal) nature of contacts: that is, the total contacts reported by group A with group B may differ from those reported by group B with group A. We explored potential biases associated with imbalanced (i.e. non-reciprocal) contact patterns in transmission dynamics of infectious diseases.
Methods: We constructed a susceptible-exposed-infected-recovered (SEIR) transmission model of SARS-CoV-2, stratified into two age groups: <15 and 15+. Epidemiological and biological parameters were drawn from the literature. Imbalanced mixing parameters were drawn from synthetic contact matrices from 177 demographic settings [Prem et al. 2021]. Balanced mixing parameters were derived by averaging imbalanced population-level mixing parameters. We compared the basic reproduction number across all demographic settings when models were parameterized with imbalanced versus balanced contact matrices. We further compared the timing and magnitude of peak incidence, cumulative infections after one year, and cumulative infections averted following age-specific vaccination, in settings where imbalanced contacts reported by 15+ with <15 were: a) larger than (Singapore), b) equal to (Luxembourg), and c) less than (Gambia) balanced contacts reported by 15+ with <15.
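Balancing by averaging total contacts, as described above, can be sketched as (group sizes and contact rates hypothetical):

```python
import numpy as np

def balance_contacts(C, N):
    """Enforce reciprocity on a per-capita contact matrix.

    C[i, j] is the mean contacts a member of group i reports with group j,
    and N[i] is the population of group i.  Total contacts are balanced by
    averaging, so that N[i] * C_bal[i, j] == N[j] * C_bal[j, i]."""
    C = np.asarray(C, float)
    N = np.asarray(N, float)
    total = C * N[:, None]                    # total contacts i -> j
    balanced_total = 0.5 * (total + total.T)  # average reciprocal totals
    return balanced_total / N[:, None]        # back to per-capita rates
```

Since the diagonal is untouched and off-diagonal totals are averaged, any asymmetry in reported cross-group contacts is redistributed between the two groups, which is exactly what shifts R0 and the age-specific attack rates in the comparisons below.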
Results: When compared to models with balanced contact matrices, models with imbalanced contact matrices consistently underestimated the basic reproduction number (Figure 1), had delayed timing of peak incidence, and underestimated the magnitude of peak incidence of SARS-CoV-2. Moreover, models with imbalanced contact matrices always overestimated cumulative infections in one age group, and underestimated cumulative infections in the other age group. For example, in Gambia, imbalanced contacts reported by 15+ with <15 were 0.45 times balanced contacts between 15+ and <15, leading to underestimation of cumulative infections among 15+ by 6.7% and overestimation of cumulative infections among <15 by 3.2%. The inverse pattern was observed in Singapore, and little effect was observed in Luxembourg. Imbalanced contacts also influenced projected outcomes of age-specific vaccination strategies. For example, when vaccine was prioritized to <15 in Gambia, imbalanced models underestimated infections averted among 15+ by 24.4%.
Figure 1.
Direction and magnitude of bias in SARS-CoV-2 R0 according to imbalance in synthetic contact matrix.
R0, basic reproduction number; C, population contact rate; o, 15+; y, <15; imbal, imbalanced; bal, balanced.
Conclusions: Stratified transmission models that do not explicitly consider reciprocity of contacts may generate biased projections of epidemic trajectory and impact of targeted public health interventions.
Keywords: infectious disease modelling, SARS-CoV-2, contact mixing, contact reciprocity, vaccine policy, vaccine prioritization
Assessment of racial bias in a machine learning model for predicting fatal opioid overdose following release from jail
PP-249 Health Services, Outcomes and Policy Research (HSOP)
Serena Jingchuan Guo1, Wei Hsuan Lo Ciganic1, Rayid Ghani2, Stephanie Fedro Byrom3, Susan Barnes3, Courtney Kuza3, Walid Gellad3
1University of Florida
2Carnegie Mellon University
3University of Pittsburgh
Purpose: We developed a machine-learning model to predict fatal opioid overdose after release from jail. Before deploying the model into practice, a bias assessment is needed to ensure it does not unfairly target certain groups. We therefore conducted a bias assessment by race to ensure equitable prediction performance across racial groups.
Methods: Our study leveraged an integrated data warehouse from a county Department of Human Services in Pennsylvania. We identified individuals who entered the county jail between 2015 and 2020. The cohort was randomly and equally split into training and testing sets to develop and validate a gradient boosting machine model to predict the risk of fatal opioid overdose 90 days following release from jail. Given that fatal opioid overdose is a critical adverse outcome and non-invasive preventive interventions are available, our racial bias assessment focused on the false negative rate (FNR, defined as the proportion of individuals with fatal opioid overdoses who were misclassified as low risk) across racial groups. We examined the ratio of FNR for Black versus White individuals, or FNRBlack/FNRWhite. In addition, we evaluated whether removing the sensitive attribute (i.e., race) from the prediction model would improve model fairness.
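A group-wise FNR audit of the kind described above can be sketched as follows. The labels, predictions, and group assignments are made-up toy data, not the study's; the real analysis used a gradient boosting model on county administrative data.

```python
def false_negative_rate(y_true, y_pred):
    """FNR: share of true positives the model misclassified as low risk."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    pos = sum(y_true)
    return fn / pos if pos else float("nan")

# Hypothetical outcomes (1 = fatal overdose within 90 days of release),
# model flags (1 = classified high risk), and racial group labels.
y_true = [1, 1, 1, 1, 0, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1, 0, 1]
group  = ["White", "White", "White", "Black", "White",
          "Black", "Black", "Black", "White", "White"]

fnr = {}
for g in ("White", "Black"):
    idx = [i for i, x in enumerate(group) if x == g]
    fnr[g] = false_negative_rate([y_true[i] for i in idx],
                                 [y_pred[i] for i in idx])

# Disparity metric used in the study: FNR_Black / FNR_White.
ratio = fnr["Black"] / fnr["White"]
```

A ratio above 1 indicates that individuals in the numerator group who go on to experience the outcome are more likely to be missed by the model, which is the disparity the abstract reports.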
Results: Among 37,704 eligible individuals (White=50.4%, Black=48.4%, and other=1.2%) entering county jail, 266 (0.71%) had a fatal opioid overdose within 90 days following their release (White=222 vs. Black=44). In the testing set, our model achieved a C-statistic of 0.804 in predicting fatal opioid overdoses. However, Black individuals had a significantly higher FNR than White individuals: FNRBlack/FNRWhite = 3.7 (FNR for White and Black were 19% vs. 70%). After removing the race variable when developing the prediction model, the C-statistic decreased minimally (0.804 vs. 0.800), and the racial bias was reduced but not eliminated: FNRBlack/FNRWhite = 2.3 (FNR for White and Black were 22% vs. 50%).
Conclusions: In a machine learning model predicting fatal opioid overdoses after release from jail, we identified a much higher risk of misclassification in Black versus White individuals. Removing race from the prediction model improved model fairness, but did not eliminate bias completely. Efforts continue to further improve fairness in the model.
Keywords: machine learning, racial bias, fairness