Key Points
Question
Are implementations of real-time prescription benefit (RTPB) tools that provide prescription cost estimates to clinicians at the time of prescribing associated with changes in prescription costs for patients and payers?
Findings
This cohort study including 2 805 060 Medicare Advantage beneficiaries found no reductions in prescription costs, including out-of-pocket costs for patients, among beneficiaries prescribed medication by clinicians at practices with an RTPB tool.
Meaning
The findings of this cohort study suggest that although RTPB tools have many anticipated benefits, further research on how to design and deploy RTPB tools to maximize potential benefits is needed.
This cohort study examines trends in prescription use and spending among Medicare Advantage beneficiaries receiving prescriptions from clinicians at practices with a real-time prescription benefit tool compared with beneficiaries treated by clinicians without access to the tool.
Abstract
Importance
Real-time prescription benefit (RTPB) tools provide point-of-care information for clinicians at the time of prescribing and may reduce prescription costs for patients and payers.
Objective
To assess trends in prescription use and spending among Medicare Advantage beneficiaries at a national health insurer during the first year of clinician access to an RTPB tool.
Design, Setting, and Participants
This cohort study used 2018 to 2020 administrative data from a national insurer to compare prescription fills for beneficiaries receiving prescriptions from clinicians at practices with an RTPB tool with fills prescribed by clinicians without access to the tool. Trends in prescription spending and fills in the year after practices adopted an RTPB tool (in March 2019) were measured using a difference-in-differences design. Data were analyzed from November 2022 to June 2024.
Exposure
Access to an RTPB tool within a national electronic health record software vendor.
Main Outcomes and Measures
The main outcomes were total prescription spending, beneficiary out-of-pocket spending, and number of prescription fills. Secondary outcomes included percentage of fills with the insurer-owned mail-order pharmacy, percentage of fills with a 90-day supply, and subgroup analyses in drug classes appearing most frequently in the RTPB tool and high-cost prescription drug classes.
Results
The sample included 2 805 060 beneficiaries (mean [SD] age, 70.9 [9.2] years; 56.7% female; 14.7% Black individuals; 80.5% White individuals), with mean (SD) monthly out-of-pocket costs of $29.1 ($90.4), total prescription costs of $213.2 ($1066.3), and 2.6 (2.1) prescription fills per month. After introduction of the RTPB tool, there was no change in prescription spending (estimated out-of-pocket spending change, 1.2% [95% CI, −0.7% to 3.0%]; estimated total prescription spending change, 0.5% [95% CI, −0.2% to 1.2%]) or number of prescription fills (estimated change, 0.01 [95% CI, −0.01 to 0.02]) among beneficiaries prescribed medication by clinicians at practices with the RTPB tool.
Conclusions and Relevance
In this cohort study of 2.8 million patients, simply providing clinicians access to an RTPB tool was not associated with the anticipated benefits to patients and payers in the first year the tool was released. Further research on how to design and deploy RTPB tools to maximize potential benefits is needed.
Introduction
Despite wide variation in the costs of medications used to treat the same condition, point-of-care information on prescription drug costs is limited for both patients and clinicians. The lack of real-time insight into costs at the time of prescribing may result in patients facing high out-of-pocket (OOP) costs for prescribed drugs despite the presence of lower-cost, medically appropriate alternatives. For example, a clinician may unknowingly prescribe a medication that requires a high OOP contribution by the patient when a suitable alternative with a lower OOP cost exists. Numerous studies have found that increases in patient cost–sharing reduce adherence to drugs across disease areas.1,2 While this research shows that reductions in adherence grow as OOP costs increase, even a relatively small copay can affect a patient’s choice to fill a prescription.3
Real-time prescription benefit (RTPB) tools address this information problem by automatically offering plan- and beneficiary-specific drug coverage and pricing information (including OOP costs) within the electronic health record (EHR) order entry system as the prescriber orders the prescription. The objective of these tools is to shift patients toward lower OOP cost medications and potentially increase prescription fills and adherence.
RTPB tools have already been embraced by policy makers; in 2021, the Centers for Medicare & Medicaid Services required each Medicare Part D plan sponsor to implement at least 1 RTPB tool that is capable of integrating with an electronic prescribing system or EHR.4 However, the expected impact of this policy on prescription use and spending is unclear. Correcting the information problem could allow physicians to act more effectively as agents on their patients’ behalf, steering them to therapeutically equivalent options with lower expected OOP costs. However, if the lower OOP costs lead to greater utilization,5 the introduction of RTPB tools could increase spending and put plans at a competitive disadvantage (although the greater use may be clinically appropriate). Therefore, it is important to measure the impact of RTPB tools on clinician prescribing habits and ultimately patient prescription use.
Of course, such hypotheses assume that prescribers will act on the RTPB tool information. This may not necessarily be the case. If the information displayed by the RTPB tool is missing or incorrect, this could reduce the likelihood of clinician adoption.6 Furthermore, even if the information is complete and accurate, previous EHR-based interventions on clinicians have often failed to produce the desired results because they required clinicians to change their workflow.7 The effect of any tool may depend on the details of its design and implementation.
Empirical research on RTPB tools to date has been limited to interventions deployed at single health systems.8,9,10 In these studies, researchers found that while RTPB recommendations moved prescription orders toward lower patient OOP costs (especially when the potential cost savings were higher), recommendations were made for a small share of total prescription orders. For example, a 2022 randomized clinical trial found that RTPB tool recommendations led to an 11% reduction in patient OOP costs among ordered prescriptions in which a recommendation was made, but that recommendations were made for only 4% of prescription orders.8 Another study found that 12.3% of medication orders changed after the clinician viewed the RTPB price estimate, but only 9.7% of orders included a price estimate.9 While this may suggest promising results for RTPB tools, their overall impact appears modest when considered in the context of all prescription orders. It remains unclear whether RTPB tools can be updated to provide recommendations for a larger set of orders and whether similar effect sizes will persist among these orders.11 Finally, estimated cost savings from prescription orders may not translate to actual cost savings at the pharmacy if, for example, pharmacists recognize that there is a lower cost option and contact the prescriber to make a change.
Our study leveraged the introduction of an RTPB tool in a national EHR vendor in March 2019 to study trends in prescription fills. Using data from the tool in combination with pharmaceutical and medical claims data for Medicare Advantage (MA) beneficiaries at a nationwide health insurer, we used a difference-in-differences (DID) approach to estimate changes in prescription use and spending for beneficiaries filling prescriptions from clinicians at practices with the RTPB tool vs clinicians at practices without the tool.
Methods
This cohort study was deemed exempt from review and the requirement for informed consent by the institutional review board at Harvard Medical School because it was not considered human participants research. This study is reported following the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for cohort studies.
Study Design and Overview
In March 2019, a national EHR vendor incorporated an RTPB tool into its prescription software. This RTPB tool was provided by a large, national health insurer of MA plans. To write a prescription, clinicians needed to use the vendor’s electronic prescription ordering system. Clinicians either ordered from a list of structured prescription options or entered the prescription as free text. If they chose from the list of structured prescriptions, they were given the option to click on the RTPB tool before completing the prescription order. If they clicked on the RTPB tool, they would see the expected patient costs for the selected prescription at the currently selected pharmacy, in addition to patient costs if they were to fill the prescription at other pharmacies, including the insurer’s own mail-order pharmacy. When applicable, potential substitutes for the selected prescription were also shown (eg, instead of a 30-day prescription for atorvastatin, at an estimated patient cost of $10, the clinician could prescribe simvastatin for as low as $2.50 at the same pharmacy).
Using administrative data on beneficiary enrollment, pharmacy claims, medical claims, and RTPB data provided by the national insurer, we compared prescription use and spending for beneficiaries receiving prescriptions from clinicians at practices with the RTPB tool vs practices without the RTPB tool. The observation period was April 2018 through March 2020. All beneficiaries enrolled in a health maintenance organization (HMO) or preferred provider organization (PPO) MA plan for the entirety of at least 1 calendar year were included. Beneficiaries with 2 or more enrollment gaps during the sample period were excluded.
The main analytic datasets consisted of RTPB data and pharmacy and medical claims data for beneficiaries in our sample. The RTPB data included data available to the clinician if they had clicked on the RTPB tool when writing a structured script, including the National Provider Identifier (NPI) of the clinician ordering the prescription, the original prescription and pharmacy, alternative pharmacies and/or prescriptions, and the expected patient costs associated with those alternatives. The pharmacy claims data included data on the prescription fill, including the National Drug Code of the prescription, the OOP amount paid by beneficiaries, the total prescription cost, the prescribing clinician, and the date the prescription was filled. Medical claims data were used to identify the practices where clinicians worked based on billed claims. Clinicians and practices were identified using NPI and Taxpayer-Identification Number (TIN) codes, respectively.
Because we were unable to directly identify practices using the EHR vendor in the data, clinician identifiers in the RTPB data were used to identify practices with the RTPB tool. We classified a practice as having the RTPB tool based on the percentage of its assigned clinicians with RTPB data. Clinicians were assigned to the practice in which they submitted the largest number of medical claims each month. For the main analysis, we defined a treated practice as one where at least 50% of its assigned clinicians appeared in the RTPB data. This threshold was implemented to exclude practices without the EHR vendor that employed clinicians practicing in multiple locations, including sites with the RTPB tool. In sensitivity analyses, we tested the robustness of our results using thresholds of 10% and 90%.
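The classification rule above can be sketched as follows. This is a minimal illustration under our own assumptions: the DataFrame columns `npi` and `tin` and the set `rtpb_npis` are hypothetical names, and clinician-to-practice assignment is pooled over the full period rather than monthly for brevity.

```python
import pandas as pd

def classify_treated_practices(claims: pd.DataFrame,
                               rtpb_npis: set,
                               threshold: float = 0.5) -> pd.Series:
    """Flag a practice (TIN) as treated when at least `threshold` of its
    assigned clinicians appear in the RTPB data (50% in the main analysis;
    10% and 90% in sensitivity analyses)."""
    # Assign each clinician (NPI) to the TIN where they billed the most claims
    counts = claims.groupby(["npi", "tin"]).size().rename("n_claims").reset_index()
    primary = counts.loc[counts.groupby("npi")["n_claims"].idxmax()]
    # Share of each practice's assigned clinicians with RTPB data
    primary["in_rtpb"] = primary["npi"].isin(rtpb_npis)
    share = primary.groupby("tin")["in_rtpb"].mean()
    return share >= threshold
```

Raising the threshold toward 90% trades misclassification of multi-site clinicians against sample size, which is why the study reports both cutoffs.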
Beneficiaries were assigned to a primary prescriber in each month of the study period based on the clinician who wrote the highest number of that beneficiary’s prescriptions in each month. Because the EHR vendor’s market was primarily office-based settings, we restricted the sample to beneficiaries assigned to clinicians who practiced in an office setting (based on whether they billed under the office place of service code in the medical claims data) and their associated prescription fills. We further refined the pharmacy claims data to focus on prescription fills for drug classes that appeared in the RTPB data. Drug classes were defined using the Anatomical Therapeutic Chemical (ATC) classification system; specifically, National Drug Codes were mapped to ATC Level 4 categories, which were obtained from the National Library of Medicine’s RxNorm database.12 Sample exclusion criteria are provided in eFigure 1 in Supplement 1.
Outcomes
Outcomes were measured at the beneficiary-month level. The main outcomes were OOP spending, total prescription spending, and number of prescription fills. Secondary outcomes included the percentage of prescriptions filled at the insurer’s mail-order pharmacy and the percentage of fills with a 90-day supply. OOP spending was the sum of prescription costs that the beneficiary was responsible for paying in a given month, excluding any Part D Low-Income Subsidy (LIS) assistance. Total prescription spending included costs to the insurer and beneficiary, as well as any subsidy assistance. Because RTPB tools might affect the number of days supplied if, for example, clinicians switch from 30-day to 90-day prescriptions to reduce beneficiary costs, spending and fills outcomes were adjusted to reflect the costs of a 30-day prescription in a sensitivity analysis.
Covariates and Patient Characteristics
We constructed covariates on patient demographics, properties of insurance coverage, and geography. Gender was self-reported by the beneficiary on enrollment in their MA plan. Race and ethnicity information was obtained from the Centers for Medicare & Medicaid Services. Beneficiaries were categorized into 4 race and ethnicity groups: Black, White, other, and missing. The other category included beneficiaries identifying as American Indian or Alaska Native, Asian, Hispanic, or Pacific Islander. Race and ethnicity were included for analysis to account for demographic variation in the patients visiting practices with an RTPB tool vs those without, which may contribute to differences in prescription outcomes. To capture information on plan coverage and benefit design, we created indicators for whether the beneficiary was enrolled in an HMO or PPO plan, was dually enrolled in Medicaid (dual eligible), and received LIS assistance. We mapped beneficiaries to hospital referral regions (HRRs), which represent regional health care markets for tertiary medical care, based on their address. All covariates were calculated as of the first month in which the beneficiary was observed in the data.
Statistical Analysis
We quantified changes in beneficiary prescriptions after practices enabled an RTPB tool with a DID analysis, comparing differential changes in beneficiary outcomes for beneficiaries attributed to practices with the RTPB tool vs beneficiaries attributed to practices that did not have the RTPB tool. We ran the following linear regression model:
Yijt = β0 + β1Treatedj × Postt + β2TINj + β3Montht + β4Covariatesi + β5HMOi × HRRi × Yeart + ϵijt,
where Yijt is the outcome for beneficiary i assigned to TIN j during month t. Following previous work,8 we log-transformed spending outcomes before estimating the equation with an identity link. Treated is an indicator for whether the beneficiary was assigned to a clinician at a practice (TIN) with the RTPB tool. Post is equal to 1 for all months on or after March 2019, and 0 otherwise. The coefficient of interest is β1, which represents the differential change in the outcome for treated practices in the post period. We included TIN fixed effects to control for unobserved time-invariant practice characteristics and time fixed effects (measured monthly) to control for fluctuations in the outcome over time. Beneficiary covariates (age, gender, race and ethnicity, LIS status, dual-eligible status, and HMO [vs PPO] indicator) were included to control for changing risk panels over time, and an HMO × HRR × year interaction was included to control for differences in benefit coverage generosity year over year. Beneficiaries were assigned to TINs based on the primary TIN associated with the beneficiary’s primary clinician that month. Heteroskedasticity-robust SEs, clustered at the TIN level, were computed for all regression analyses.
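As a rough sketch of this specification, the following simulates a beneficiary-month panel and fits the DID regression with TIN and month fixed effects and TIN-clustered SEs. All data and variable names here are ours, not the study's, and the full covariate set and HMO × HRR × year interaction are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated beneficiary-month panel (illustrative only).
# TIN fixed effects absorb the Treated main effect, so only the
# Treated x Post interaction enters separately.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "tin": rng.integers(0, 20, n).astype(str),
    "month": rng.integers(0, 24, n),
    "age": rng.normal(71, 9, n),
})
df["treated"] = (df["tin"].astype(int) < 10).astype(int)
df["post"] = (df["month"] >= 12).astype(int)
df["log_oop"] = np.log1p(rng.lognormal(2.5, 1.0, n))

# beta_1 is the coefficient on treated:post -- the differential change for
# treated practices in the post period -- with heteroskedasticity-robust
# SEs clustered at the TIN level
res = smf.ols("log_oop ~ treated:post + C(tin) + C(month) + age", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["tin"]})
did_estimate = res.params["treated:post"]
```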
Our interpretation of β1 relies on several assumptions. First, we assume that in the absence of treatment, treatment and control practices would have followed parallel trends over time. To test this, we modified our regression model to include interactions between treatment status and each month separately, which allowed differential changes to evolve over time. We plotted these coefficients to assess whether trends were parallel before intervention. Second, we assume that there are no anticipatory effects leading up to the RTPB intervention. Given the instantaneous activation of the RTPB tool within the EHR vendor, such effects are unlikely. However, we also examined trends in the month preceding the intervention to confirm this assumption. Finally, we assume that there are no spillover effects between treated and control practices. Our method for defining treatment could introduce spillovers if clinicians work at both treated and control practices, or if misclassification occurs. To address this, we conducted a sensitivity analysis in which we defined treated practices as those with at least 90% of clinicians appearing in the RTPB tool and untreated practices as those with less than 10% of clinicians appearing in the tool.
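The event-study check can be sketched the same way (simulated data; all names are ours): interacting treatment with each month, omitting the month before adoption as the reference period, yields pre-period coefficients that should be near zero if trends were parallel.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel for the event-study version of the DID model
rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "tin": rng.integers(0, 30, n).astype(str),
    "month": rng.integers(0, 24, n),
})
df["treated"] = (df["tin"].astype(int) < 15).astype(int)
df["y"] = rng.normal(0, 1, n)

ref_month = 11  # reference period: the month before the RTPB tool launched
for m in range(24):
    if m != ref_month:
        # treatment-by-month interaction dummy
        df[f"tp_{m}"] = ((df["month"] == m) & (df["treated"] == 1)).astype(float)

rhs = " + ".join(f"tp_{m}" for m in range(24) if m != ref_month)
res = smf.ols(f"y ~ {rhs} + C(tin) + C(month)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["tin"]})

# Coefficients tp_0 ... tp_10 are the pre-period estimates one would plot
# (with clustered SEs as error bars) to assess parallel trends
pre_coefs = res.params[[f"tp_{m}" for m in range(ref_month)]]
```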
For 7% of prescriptions, the RTPB tool recommended an alternative medication different from the one selected by the clinician (although the clinician would only see the recommendation if they used the dropdown menu). This may have nudged clinicians to prescribe lower-cost substitutes. Antidepressants (eg, Zoloft [Pfizer]) and lipid-modifying agents (eg, Lipitor [Pfizer]) were the most common drug classes for which the RTPB tool suggested alternative medications. We hypothesized that the RTPB tool might be most effective for high-cost prescription drug classes; prior research on RTPB tools has found larger responses among high-cost prescription drugs.8 To identify high-cost prescription drug classes, we computed the mean OOP costs per fill within each drug class among control observations before RTPB tool implementation and selected drug classes in the top quartile of this distribution. To assess whether trends differed for the drug classes with frequently suggested alternatives and for high-cost drugs, we conducted separate analyses for these 3 subgroups.
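The top-quartile rule for high-cost classes could be implemented as follows (a sketch with hypothetical column names, not the study's code):

```python
import pandas as pd

def high_cost_classes(fills: pd.DataFrame) -> list:
    """Identify drug classes in the top quartile of mean OOP cost per fill,
    computed among control-group fills from before RTPB implementation.
    `fills` is assumed to have columns drug_class, oop_cost, treated (0/1),
    and post (0/1) -- names are ours for illustration."""
    baseline = fills[(fills["treated"] == 0) & (fills["post"] == 0)]
    mean_oop = baseline.groupby("drug_class")["oop_cost"].mean()
    cutoff = mean_oop.quantile(0.75)
    return sorted(mean_oop[mean_oop >= cutoff].index)
```

Restricting the cutoff calculation to control observations from the pre-period avoids letting the treatment itself influence which classes count as high cost.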
P values were 2-sided, and statistical significance was set at P ≤ .05. Analyses were conducted using Stata version 18.0 MP-Parallel Edition (StataCorp). Data were analyzed from November 2022 to June 2024.
Results
Our final analysis sample included 2 805 060 beneficiaries (mean [SD] age, 70.9 [9.2] years; 56.7% female; 14.7% Black individuals; 80.5% White individuals) across 78 119 practices. Of these beneficiaries, 13.2% received prescriptions from a clinician practicing at 1 of the 3863 practices with the RTPB tool, which we refer to as treated practices. Among treated practices, 78.7% of clinicians appeared in the RTPB data vs 0.7% in untreated practices.
Beneficiaries at treated practices differed from beneficiaries at untreated practices on several measures (Table 1). Treated beneficiaries were more likely to be enrolled in a PPO than an HMO plan (55.1% of beneficiary-months were enrolled in PPO plans in the treated group vs 39.8% in the untreated group). Baseline OOP costs were approximately $29 for both groups (mean [SD] cost: total, $29.1 [$90.4]; treated, $29.8 [$86.5]; untreated, $29.1 [$90.8]); mean (SD) total prescription costs were $213.2 ($1066.3), and patients had a mean (SD) of 2.6 (2.1) prescription fills per month (mean [SD] fills: treated, 2.7 [2.2]; untreated, 2.6 [2.1]). Baseline prescription spending was higher for untreated beneficiary-months (mean [SD], $213.9 [$1070.6] vs $204.9 [$1014.6]), possibly reflecting a different mix of plan generosity. Treated practices were also much smaller, with a mean (SD) of 72.9 (100.8) NPIs compared with 117.6 (199.5) NPIs in untreated practices, suggesting that practices with the RTPB tool were smaller than those with other EHR vendors. Note that practice size excludes clinicians who did not treat any beneficiaries covered by the insurer. All comparisons between treated and untreated beneficiaries and practices in Table 1 were statistically significant due to the large sample size.
Table 1. Sample Characteristics.
| Characteristic | % (SD) | ||
|---|---|---|---|
| All | Treated | Untreated | |
| Beneficiary characteristics | |||
| Age, mean (SD), y | 70.9 (9.2) | 70.5 (9.2) | 70.9 (9.2) |
| Gender | |||
| Female | 56.7 (49.6) | 56.5 (49.6) | 56.7 (49.6) |
| Male | 43.3 (49.6) | 43.5 (49.6) | 43.3 (49.6) |
| Unknown or missing | 0.01 (1.04) | 0.01 (1.07) | 0.01 (1.04) |
| Race and ethnicity | |||
| Black | 14.7 (35.4) | 13.4 (34.1) | 14.8 (35.6) |
| White | 80.5 (39.6) | 83.2 (37.4) | 80.3 (39.8) |
| Othera | 4.8 (21.3) | 3.4 (18.1) | 4.9 (21.5) |
| Missing | 1.0 (9.9) | 0.9 (9.6) | 1.0 (9.9) |
| Insurance plan and coverage | |||
| HMO | 59.1 (49.2) | 44.9 (49.7) | 60.3 (48.9) |
| PPO | 40.9 (49.2) | 55.1 (49.7) | 39.8 (48.9) |
| Low-income status | 23.8 (42.6) | 25.0 (43.3) | 23.7 (42.5) |
| Dual eligible | 16.1 (36.8) | 17.0 (37.5) | 16.0 (36.7) |
| Monthly outcomes | |||
| OOP cost, mean (SD), $ | 29.1 (90.4) | 29.8 (86.5) | 29.1 (90.8) |
| Prescription cost, mean (SD), $ | 213.2 (1066.3) | 204.9 (1014.6) | 213.9 (1070.6) |
| Prescription fills, mean (SD), No. | 2.6 (2.1) | 2.7 (2.2) | 2.6 (2.1) |
| Mail-in from insurer | 28.8 (43.3) | 30.5 (43.9) | 28.7 (43.2) |
| 90-d Supply | 49.6 (44.7) | 50.6 (44.5) | 49.6 (44.7) |
| Practice characteristics | |||
| Practice size, mean (SD) No. of NPIs | 114.1 (194.0) | 72.9 (100.8) | 117.6 (199.5) |
| Beneficiary-months, No. | 33 001 795 | 2 645 075 | 30 356 720 |
| Beneficiaries, No. | 2 805 060 | 371 241 | 2 694 281 |
| Practices, No. | 78 119 | 3863 | 74 256 |
Abbreviations: HMO, health maintenance organization; NPI, National Provider Identifier; OOP, out-of-pocket; PPO, preferred provider organization.
aIncludes underrepresented race and ethnicity groups identifying as American Indian or Alaska Native, Asian, Hispanic, or Pacific Islander.
Differences in beneficiaries at untreated vs treated practices suggest potential systematic sorting of beneficiaries across practices. We address this by using a DID design that allows for differences in levels but assumes that outcome trends in both groups are parallel before the intervention. To test whether this assumption was met, we plotted the coefficients on the month by treatment indicators to assess for differential changes in the treatment group prior to the intervention (Figure). Although the results were noisy, we did not find evidence of trends before the RTPB tool for our primary outcomes that would bias our results. Similar plots for the secondary outcomes are presented in eFigure 2 in Supplement 1.
Figure. Relative Changes in Out-of-Pocket Spending, Total Spending, and Prescription Fills Per Beneficiary Per Month Before and After Real-Time Prescription Benefit Tool Adoption.
Dashed line indicates adoption of the real-time prescription benefit tool. Outcomes are measured at the beneficiary-month level; regression controls for month, Taxpayer-Identification Number, health maintenance organization × hospital referral region × year, and beneficiary covariates. SEs (error bars) are clustered at the Taxpayer-Identification Number level.
In Table 2, we report the DID estimates for primary and secondary outcomes. We found no significant differences in log OOP spending (DID, $0.01 [95% CI, −$0.01 to $0.03]) or log prescription spending (DID, $0.01 [95% CI, −$0.00 to $0.01]). Following the work of Desai et al,8 estimates for log-transformed outcomes were converted to estimated percentage changes by exponentiating the estimate, subtracting 1, and multiplying by 100 (ie, [exp(estimate) − 1] × 100). This corresponds to an estimated percentage change of 1.2% (95% CI, −0.7% to 3.0%) in OOP spending and 0.5% (95% CI, −0.2% to 1.2%) in total prescription spending. There was no significant change in prescription fills (DID, 0.01 [95% CI, −0.01 to 0.02]), percentage of fills with 90-day supply (DID, −0.00% [95% CI, −0.00% to 0.00%]), or percentage of prescriptions filled through the insurer mail-order pharmacy (DID, 0.00% [95% CI, −0.00% to 0.00%]) after the introduction of the tool.
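The conversion from a log-point estimate to a percentage change can be checked directly (a one-line helper, not study code):

```python
import math

def log_to_pct(estimate: float) -> float:
    """Convert a log-scale DID estimate to an estimated percentage change."""
    return (math.exp(estimate) - 1) * 100

# A log OOP spending estimate of 0.012 corresponds to roughly a 1.2% change,
# matching the reported OOP spending estimate
print(round(log_to_pct(0.012), 1))  # prints 1.2
```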
Table 2. Difference-In-Differences Estimates for Primary and Secondary Outcomes for All Fills From Assigned Prescribera.
| Outcome | Differential change for treated vs untreated practices in post period | |
|---|---|---|
| Estimate (95% CI), per mo | P value | |
| Log OOP spending, $ | 0.01 (−0.01 to 0.03) | .23 |
| Log prescription spending, $ | 0.01 (−0.00 to 0.01) | .19 |
| Fills, No. | 0.01 (−0.01 to 0.02) | .36 |
| Mail-in from insurer, % | 0.00 (−0.00 to 0.00) | .77 |
| 90-d Prescriptions, % | −0.00 (−0.00 to 0.00) | .23 |
Abbreviation: OOP, out-of-pocket.
aOutcomes are measured at the beneficiary-month level. Regression controls for month, Taxpayer-Identification Number, health maintenance organization × hospital referral region × year, and beneficiary covariates. SEs are clustered at the Taxpayer-Identification Number level.
These results remained the same when we examined outcomes adjusted for days supplied (eTable 1 in Supplement 1) and when we restricted the sample to continuously enrolled beneficiaries (eTable 2 in Supplement 1). In a sensitivity analysis, we compared results using different cutoffs for treatment assignment. Results were consistent under the 10% cutoff, but we did detect a small but significant increase in prescription fills under the 90% cutoff (eTable 3 in Supplement 1). We found no significant differences when we used an alternative study design that compared NPIs appearing in the RTPB tool with those who did not appear in the tool (eTable 4 in Supplement 1).
Table 3 presents our results for the set of subgroup analyses. There were no statistically significant changes among antidepressant fills or high-cost drug classes. However, we did note a small but statistically significant increase in prescription spending of approximately 0.9% (95% CI, 0.2% to 1.6%) among lipid-modifying agents.
Table 3. Difference-In-Differences Estimates for Primary Outcomes Among Relevant Subgroupsa.
| Outcomes | Differential change for treated vs untreated practices in post period | |
|---|---|---|
| Estimate (95% CI) | P value | |
| Antidepressants | ||
| Log OOP spending, $ | −0.01 (−0.04 to 0.02) | .58 |
| Log prescription spending, $ | 0.00 (−0.01 to 0.01) | .44 |
| Fills | 0.00 (−0.00 to 0.01) | .32 |
| Lipid-modifying agents | ||
| Log OOP spending, $ | −0.01 (−0.04 to 0.02) | .48 |
| Log prescription spending, $ | 0.01 (0.00 to 0.02) | .01 |
| Fills | 0.00 (−0.00 to 0.00) | .80 |
| High-cost prescription drug classesb | ||
| Log OOP spending, $ | −0.02 (−0.07 to 0.03) | .41 |
| Log prescription spending, $ | −0.02 (−0.04 to 0.01) | .13 |
| Fills | −0.00 (−0.01 to 0.01) | .64 |
Abbreviation: OOP, out-of-pocket.
aOutcomes are measured at the beneficiary-month level. Regression controls for month, Taxpayer-Identification Number, health maintenance organization × hospital referral region × year, and beneficiary covariates. SEs are clustered at the Taxpayer-Identification Number level.
bDefined as drugs in the top quartile of OOP costs per fill, based on control observations from before the exposure period.
Discussion
In this cohort study, we studied whether prescription spending and fills changed after the early implementation of an RTPB tool in its first year of use. We found no change in prescription spending, number of fills, or percentage of prescriptions filled with the insurer’s mail-order pharmacy after the introduction of an RTPB tool. These results were robust to various sensitivity checks, including adjusting for days supplied, comparing results among continuously enrolled beneficiaries, and alternative treatment assignment definitions. Although we detected a small but significant change in log prescription spending among lipid-modifying agents, more broadly, our subgroup analyses detected minimal to no significant changes in antidepressants, lipid-modifying agents, or high-cost fills.
Several factors could explain the lack of change in prescription spending and fills after the introduction of the RTPB tool. The design of the RTPB tool in the EHR system may have reduced physicians’ inclination to use it, as clinicians were required to take an extra step in the EHR (ie, clicking on a dropdown box) to see the information on prescription alternatives. Previous studies have shown that even a minor inconvenience, such as an additional click, can greatly reduce tool use in EHRs.13 Moreover, clinicians using free-text prescription writing were unable to see the RTPB information from this field. Although we did not know how often clinicians were using free-text writing in this study, earlier work on hypoglycemic prescriptions found that free-text writing was used for approximately 10% of all orders.14 If that was the case here, clinician access to the tool may not have been highly correlated with its use, diminishing any potential effects. This is likely compounded in the EHR setting, where clinicians are already navigating numerous clinical decision support tools. Alert fatigue has been well documented,15,16,17 and it might not be surprising that another EHR-based intervention, such as an RTPB tool, faces similar challenges.
Even if clinicians saw the alternative prescription data, barriers to switching to a different pharmacy or an alternative prescription likely still exist. Clinicians may hesitate to prescribe alternatives they are not familiar with or be wary of the costs of customizing prescriptions to every patient.18,19,20 There may also be concerns that the disclosed prices are inaccurate.21 Patients may also have strong preferences for specific medications or brand names22,23 or for specific pharmacies that offer convenience in terms of distance, friendliness, or other factors24 and may be unwilling to switch for a few dollars in savings. In our study, the mean OOP savings from switching to the lowest cost option was $4 per script, which might not justify the perceived costs (to either the clinician or the patient) associated with changing a prescription. However, $4 per prescription could lead to large savings, considering the number of prescriptions filled annually across the Medicare population. Furthermore, early implementation of this tool provided alternative medication options for a mere 7% of prescriptions, and potential savings would likely be much higher if this was expanded to a broader set of prescription drug classes.
Our findings contribute to the evolving literature on price transparency efforts in the US to lower health care spending.25 Initial price transparency efforts targeted patients through price shopping tools. Despite overwhelming patient support for these tools, patients did not use them very often.26 More recently, the Centers for Medicare & Medicaid Services released federal regulation requiring insurer and hospital price transparency through the posting of negotiated rates and standard charge amounts, respectively.27 This information is intended to not only help patients shop for lower-cost services and improve health plan choice, but also to increase market competition with respect to negotiated prices.28 In comparison, the use of price transparency tools that target prescribers has been relatively understudied.
Because RTPB tools are relatively early in development, further research is needed to optimize their design for maximum benefit. One potential barrier to RTPB use in our study was the additional click required for clinicians to access RTPB information. Prior research suggests that alerts are significantly more effective at ensuring clinicians see relevant information.9 Although alerts raise concerns about alert fatigue, RTPB tools could mitigate this by targeting recommendations only to prescription orders with the greatest potential cost savings; other studies have found that clinicians are more willing to switch medications when the savings are substantial.8,9 Further refinements could incorporate free-text prescription searches, broadening the tool's usability among clinicians.
Limitations
Our study had several limitations. First, we were unable to identify intervention practices directly and instead had to impute which practices had the RTPB tool, likely introducing some error into the assignment of treated vs untreated practices. However, our results were unchanged when we measured trends in prescription fills using an alternative specification of treatment. Second, we were unable to measure use of the dropdown box in the RTPB tool and thus the extent to which prescribers saw and acted on the alternative price information. This limited our analysis to an examination of clinician access to RTPB tools rather than clinician use of RTPB information, which may not fully capture the potential of these tools; we cannot say how the results would change if the information on alternatives were seen more frequently. Third, the study sample was limited to MA beneficiaries with continuous enrollment for at least 1 year, so the findings may not generalize to patients with insurance gaps or those with other types of health insurance coverage.
Conclusions
In this cohort study of 2.8 million patients, we found no change in prescription spending after the introduction of an RTPB tool, likely owing to underuse of the tool in the EHR. RTPB tools have the potential to leverage prescriber behavior as a mechanism for lowering prescription drug spending, but our findings highlight the importance of designing price transparency tools like RTPB to reduce barriers to accessing the information.
References
- 1. Goldman DP, Joyce GF, Zheng Y. Prescription drug cost sharing: associations with medication and medical utilization and spending and health. JAMA. 2007;298(1):61-69. doi:10.1001/jama.298.1.61
- 2. Fusco N, Sils B, Graff JS, Kistler K, Ruiz K. Cost-sharing and adherence, clinical outcomes, health care utilization, and costs: a systematic literature review. J Manag Care Spec Pharm. 2023;29(1):4-16. doi:10.18553/jmcp.2022.21270
- 3. Karter AJ, Parker MM, Solomon MD, et al. Effect of out-of-pocket cost on medication initiation, adherence, and persistence among patients with type 2 diabetes: the Diabetes Study of Northern California (DISTANCE). Health Serv Res. 2018;53(2):1227-1247. doi:10.1111/1475-6773.12700
- 4. Modernizing Part D and Medicare Advantage to lower drug prices and reduce out-of-pocket expenses. Federal Register. May 23, 2019. Accessed December 11, 2023. https://www.federalregister.gov/documents/2019/05/23/2019-10521/modernizing-part-d-and-medicare-advantage-to-lower-drug-prices-and-reduce-out-of-pocket-expenses
- 5. Mojtabai R, Olfson M. Medication costs, adherence, and health outcomes among Medicare beneficiaries. Health Aff (Millwood). 2003;22(4):220-229. doi:10.1377/hlthaff.22.4.220
- 6. Everson J, Frisse ME, Dusetzina SB. Real-time benefit tools for drug prices. JAMA. 2019;322(24):2383-2384. doi:10.1001/jama.2019.16434
- 7. Granja C, Janssen W, Johansen MA. Factors determining the success and failure of eHealth interventions: systematic review of the literature. J Med Internet Res. 2018;20(5):e10235. doi:10.2196/10235
- 8. Desai SM, Chen AZ, Wang J, et al. Effects of real-time prescription benefit recommendations on patient out-of-pocket costs: a cluster randomized clinical trial. JAMA Intern Med. 2022;182(11):1129-1137. doi:10.1001/jamainternmed.2022.3946
- 9. Sinaiko AD, Sloan CE, Soto MJ, Zhao O, Lin CT, Goss FR. Clinician response to patient medication prices displayed in the electronic health record. JAMA Intern Med. 2023;183(10):1172-1175. doi:10.1001/jamainternmed.2023.3307
- 10. Sloan CE, Morton-Oswald S, Smith VA, et al. Real-world use of a medication out-of-pocket cost estimator in primary care one year after Medicare regulation. J Am Geriatr Soc. 2024;72(5):1548-1552. doi:10.1111/jgs.18774
- 11. Everson J, Dusetzina SB. Real-time prescription benefit tools—the promise and peril. JAMA Intern Med. 2022;182(11):1137-1138. doi:10.1001/jamainternmed.2022.3962
- 12. Bodenreider O, Rodriguez LM. Analyzing U.S. prescription lists with RxNorm and the ATC/DDD Index. AMIA Annu Symp Proc. 2014;2014:297-306.
- 13. Collier R. Rethinking EHR interfaces to reduce click fatigue and physician burnout. CMAJ. 2018;190(33):E994-E995. doi:10.1503/cmaj.109-5644
- 14. Zhou L, Mahoney LM, Shakurova A, et al. How many medication orders are entered through free-text in EHRs—a study on hypoglycemic agents. AMIA Annu Symp Proc. 2012;2012:1079-1088.
- 15. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006;13(2):138-147. doi:10.1197/jamia.M1809
- 16. Embi PJ, Leonard AC. Evaluating alert fatigue over time to EHR-based clinical trial alerts: findings from a randomized controlled study. J Am Med Inform Assoc. 2012;19(e1):e145-e148. doi:10.1136/amiajnl-2011-000743
- 17. Drew BJ, Harris P, Zègre-Hemsey JK, et al. Insights into the problem of alarm fatigue with physiologic monitor devices: a comprehensive observational study of consecutive intensive care unit patients. PLoS One. 2014;9(10):e110274. doi:10.1371/journal.pone.0110274
- 18. Ito Y, Hara K, Kobayashi Y. The effect of inertia on brand-name versus generic drug choices. J Econ Behav Organ. 2020;172:364-379. doi:10.1016/j.jebo.2019.12.022
- 19. Frank RG, Zeckhauser RJ. Custom-made versus ready-to-wear treatments: behavioral propensities in physicians’ choices. J Health Econ. 2007;26(6):1101-1127. doi:10.1016/j.jhealeco.2007.08.002
- 20. Currie JM, Macleod WB. Understanding doctor decision making: the case of depression treatment. Econometrica. 2020;88(3):847-878. doi:10.3982/ECTA16591
- 21. Dusetzina SB, Besaw RJ, Whitmore CC, et al. Cost-related medication nonadherence and desire for medication cost information among adults aged 65 years and older in the US in 2022. JAMA Netw Open. 2023;6(5):e2314211. doi:10.1001/jamanetworkopen.2023.14211
- 22. Kesselheim AS, Gagne JJ, Franklin JM, et al. Variations in patients’ perceptions and use of generic drugs: results of a national survey. J Gen Intern Med. 2016;31(6):609-614. doi:10.1007/s11606-016-3612-7
- 23. Limenh LW, Tessema TA, Simegn W, et al. Patients’ preference for pharmaceutical dosage forms: does it affect medication adherence—a cross-sectional study in community pharmacies. Patient Prefer Adherence. 2024;18:753-766. doi:10.2147/PPA.S456117
- 24. National Community Pharmacists Association. National consumer survey: more than 8 in 10 adults prefer their local pharmacist over mail order. Accessed April 3, 2024. https://ncpa.org/newsroom/news-releases/2021/03/04/national-consumer-survey-more-8-10-adults-prefer-their-local
- 25. Zhang A, Prang KH, Devlin N, Scott A, Kelaher M. The impact of price transparency on consumers and providers: a scoping review. Health Policy. 2020;124(8):819-825. doi:10.1016/j.healthpol.2020.06.001
- 26. Mehrotra A, Dean KM, Sinaiko AD, Sood N. Americans support price shopping for health care, but few actually seek out price information. Health Aff (Millwood). 2017;36(8):1392-1400. doi:10.1377/hlthaff.2016.1471
- 27. Transparency in Coverage final rule fact sheet (CMS-9915-F). News release. Centers for Medicare & Medicaid Services. October 29, 2020. Accessed June 13, 2024. https://www.cms.gov/newsroom/fact-sheets/transparency-coverage-final-rule-fact-sheet-cms-9915-f
- 28. Glied S. Price transparency—promise and peril. JAMA. 2021;325(15):1496-1497. doi:10.1001/jama.2021.4640
Associated Data
Supplementary Materials
eTable 1. Difference-In-Differences Estimates for Primary and Secondary Outcomes, Adjusted for Days Supplied
eTable 2. Difference-In-Differences Estimates for Primary and Secondary Outcomes Among Continuously Enrolled Beneficiaries
eTable 3. Difference-In-Differences Estimates for Primary and Secondary Outcomes using Alternative Treatment Cutoff
eTable 4. Difference-In-Differences Estimates for Primary and Secondary Outcomes Under Alternative Study Design Assigning Treatment at the NPI-Level
eFigure 1. Sample Exclusion Criteria
eFigure 2. Event Study Results (Secondary Outcomes)
Data Sharing Statement

