Journal of Managed Care & Specialty Pharmacy. 2021 Nov;27(11). doi:10.18553/jmcp.2021.27.11.1601

The evolution of ICER’s review process for new medical interventions and a critical review of economic evaluations (2018-2019): how stakeholders can collaborate with ICER to improve the quality of evidence in ICER’s reports

Naoko A Ronquest 1,*, Kyle Paret 1, Ian Gopal Gould 1, Christine L Barnett 1, Deirdre M Mladsi 1
PMCID: PMC10390909  PMID: 34714108

Abstract

BACKGROUND:

Since its inception in 2006, the Institute for Clinical and Economic Review (ICER) has rapidly gained influence on drug pricing and reimbursement decisions despite historical resistance to the use of cost-effectiveness thresholds in the US health care system. Although patient groups, physicians, and pharmaceutical manufacturers have voiced concerns about the potential negative effects of increased use of ICER’s assessments on patient access to innovative medications, there is little guidance or consensus on how stakeholders should collaborate with ICER to ensure that its reviews reflect the best clinical and economic evidence.

OBJECTIVES:

To (1) summarize the evolution of ICER’s evaluation procedure, scope, and topics; (2) evaluate the effectiveness of stakeholder engagement approaches; and (3) inform stakeholders of their potential role in collaborating with ICER in estimating the cost-effectiveness of new interventions.

METHODS:

Publicly available ICER evaluations from 2008 to 2019 were systematically reviewed. Changes in evaluation procedures, scope, and topics were summarized. For evaluations that occurred in 2018 (n = 12) and 2019 (n = 8), key characteristics were extracted from 172 letters documenting interactions between ICER and all stakeholders who provided comments to draft reports. Stakeholder suggestions were analyzed in terms of their effectiveness, as indicated by ICER’s reconsideration of its original cost-effectiveness analysis approach.

RESULTS:

The number of ICER evaluations increased consistently from 2 to 12 per year between 2008 and 2018 but declined to 8 in 2019. Stakeholder opportunity to engage with ICER increased from 1 to 3 per evaluation between 2008 and 2015. ICER initially focused on reviewing general treatment strategies but shifted its focus to specific pharmaceuticals and medical devices in 2014. In 2018 and 2019, 30% of 172 stakeholder letters resulted in a revision in the base-case analysis (49 comments in 2018, 23 in 2019); nearly half of comments in these letters included specific alternative data or a published article to rationalize recommendations. Other common types of suggestions that resulted in ICER’s base-case analysis revisions included comments relating to inconsistent methods used to derive model inputs across different treatments (12/49 in 2018, 5/23 in 2019); clinical justifications (12/49 in 2018, 0/23 in 2019); and justifications based on patient perspectives (1/49 in 2018, 5/23 in 2019). These revisions rarely affected ICER’s conclusions on the cost-effectiveness of evaluated interventions. Among the 20 assessments that involved 172 stakeholder engagements in 2018 and 2019, only 2% (n = 3) of the engagements (all from 2018) were associated with a change in the cost-effectiveness conclusion.

CONCLUSIONS:

Between 2018 and 2019, stakeholders leveraged ICER evaluations as opportunities to promote dialogue for better understanding of the value of technologies. Actionable, evidence-based recommendations were accepted more often than other recommendations.


What is already known about this subject

  • Since its foundation in 2006, the Institute for Clinical and Economic Review (ICER) has rapidly gained influence on drug pricing and reimbursement decisions despite historical resistance to the use of cost-effectiveness thresholds in the US health care system.

  • Although ICER’s methods and approach to health technology assessments are documented widely, there is little consensus on how stakeholders should collaborate with ICER during their engagement processes in order to ensure that the review reflects the best clinical and economic evidence.

  • To provide insight into stakeholder opinions of ICER’s reports, in January 2019 the Journal of Managed Care & Specialty Pharmacy launched an article series in which each issue includes highlights of a recent ICER report followed by a commentary article written by stakeholders; patient advocacy groups, physicians, and pharmaceutical manufacturers have voiced concerns about ICER’s influence on access to innovative medications.

What this study adds

  • Based on a review of ICER evaluation reports from 2008 to 2019, key changes in evaluation procedures, scope, and topics over that period are summarized.

  • This study critically assessed 172 letters from all stakeholders during the 2018-2019 reviews to summarize the types of approaches that led ICER to reconsider its original cost-effectiveness analysis methodology.

  • This study provides important context to inform stakeholders of their potential role in collaborating with ICER to ensure that its cost-effectiveness conclusions and value-based pricing benchmark estimates will account for multistakeholder perspectives.

Health care costs in the United States have increased significantly and consistently in the past several decades. In 2018, US health care spending was $3.8 trillion, and growth is projected to increase at an average rate of 5.4% per year to reach nearly $6.2 trillion by 2028.1 In an effort to manage the growth of federal health care spending, federal agencies and private-public partnerships have made numerous attempts since the 1970s to use health technology assessment (HTA) for health care pricing and reimbursement decisions.

However, most of these efforts have been unsuccessful.2-4 A variety of reasons for these failures have been reported. For instance, policymakers and industry stakeholders have expressed concern that HTAs may have detrimental effects on investment in innovative health care technologies.2 In an editorial on this topic, Neumann and Sullivan hypothesized that HTA methodologies and processes used by other Western developed countries may not work practically in the United States because of its decentralized pricing and reimbursement policies, financing, and delivery methods.5 Different political systems, traditions, cultural expectations, and attitudes surrounding health care also have been discussed as potential hurdles.2,5

In 2006, the Institute for Clinical and Economic Review (ICER) was founded as an independent, nonpartisan foundation that evaluates the clinical and economic value of prescription drugs, medical tests, and other health care and health care delivery innovations. Efforts by ICER have been reported as “the first major US attempt” at HTA, “attempting to boldly go where no US health technology assessment group has gone before, to engage the public in a discourse on health care value by presenting transparent and scientifically rigorous information on the clinical features of treatments.”6 Since its inception, ICER has increased its volume, scope, and transparency for evaluating the value of health technologies.7

Subsequently, ICER’s assessments have become an important resource for negotiations between manufacturers and payers. For instance, a 2018 payer interview study found that more than 75% of 22 payer decision makers reported they would use an ICER cost-effectiveness threshold price as a basis for decisions on prior authorization requirements, and nearly half were likely to use the threshold price for rebate negotiations.4 Further, in August 2018, CVS Caremark announced that it would allow its pharmacy benefit management clients to exclude drugs that exceed a cost-effectiveness ratio of $100,000 per quality-adjusted life-year (QALY) reported in ICER evidence reports.8

In response to these movements, patient advocacy groups, physicians, and pharmaceutical manufacturers have voiced their concerns about the potentially harmful effects of increased use of cost-effectiveness analyses in payers’ formulary decision making on patient access to innovative medications. In January 2019, the Journal of Managed Care & Specialty Pharmacy launched an article series in which each issue summarizes a recent evidence report issued by ICER and provides a commentary authored by stakeholders. These commentaries have often highlighted practicing clinicians’ disagreements with approaches used in evidence reports.

As noted in a quote by ICER’s founder and president, Steven Pearson, ICER’s goal includes ensuring “that all stakeholders in the health care system–patients, manufacturers, payers, clinicians, and others–have an opportunity to contribute meaningfully to” its work.9 However, several factors may make it particularly challenging to communicate effectively during ICER’s evaluations. First, because of ICER’s commitment to rapidly generate the latest value-based evidence to support negotiations for pricing and access policies for new health technologies, the evaluation process and framework have undergone numerous updates to address emerging challenges.10 Second, during each ICER evaluation, stakeholder comments are accepted only during 3 prespecified time windows of limited duration (3 weeks for initial open input and comments on draft scoping documents and 4 weeks for comments on draft reports), despite the need for stakeholders to digest a large volume of work conducted by ICER.

To determine which types of stakeholders most often comment on ICER technology appraisal reports and which aspects of the reports these comments address (ie, methodology, data, and real-world experience), Gerlach et al reviewed stakeholder comments in 7 ICER reports.11 The authors found that pharmaceutical manufacturers made up the majority of stakeholders submitting public comments (63.1%), followed by patients or patient advocacy groups (18.1%) and providers or provider groups (9.7%), and that the comments most often addressed the methodology of the value assessment (53.8%). However, the effectiveness of each stakeholder’s comments and how ICER responded to the stakeholders were not reported in that study. To promote stakeholder contributions and engagement, it is critical for those who represent the clinical and patient community, payer bodies, and pharmaceutical manufacturers to gain a deeper understanding of how ICER’s process and scope have evolved and of the effects of comments on ICER’s analyses.

To inform ICER engagement strategies for pharmaceutical stakeholders, Cohen et al reviewed industry stakeholders’ letters and ICER’s responses associated with 15 assessments of pharmaceutical interventions from 2017 to April 2019.12 Specifically, the authors categorized stakeholder comments according to 3 predefined characteristics: clarity (clear vs unclear), whether the comment offered an alternative to ICER’s approach, and whether it described the expected magnitude of the revision’s impact on the incremental cost per QALY gained. The study found that recommendations satisfying each of these criteria were accepted by ICER at higher rates.

Although these findings offer a practical recommendation that emphasizes the importance of clear writing and of providing strong supporting justifications along with alternative methods, they raise further questions about what types of justifications and alternative approaches were accepted. Further, although for pharmaceutical manufacturers the primary goal of ICER engagement may be to ensure that a specific product being evaluated is cost-effective, an important common objective for all stakeholders is to ensure that ICER’s assessments reflect the true value of evaluated products. Therefore, a study that explores the effects of all stakeholder comments on the various types of approaches ICER uses is warranted.

The objectives of this study were to summarize the evolution of ICER’s review procedures in the past decade and to recommend strategies for effective communication by stakeholders of the cost-effectiveness of health technologies. Our summary of ICER’s procedural evolution is based on a structured review of materials made publicly available between 2008 and 2019. The goal of this summary was to provide a context for ICER’s review process evolution. To understand the impact of stakeholder comments on ICER analyses, we critically reviewed all stakeholder letters between 2018 and 2019 to identify the relationship between various types of stakeholder recommendations and ICER’s responses.

We first assessed whether stakeholder comments had any impact on ICER reports and then analyzed stakeholder engagement approaches in terms of their effectiveness, as indicated by ICER’s reconsideration of its original cost-effectiveness analysis approach. Among the many types of evidence used to support the value of health technologies, this study focused on cost-effectiveness because of the increasing use of the results of ICER cost-effectiveness analyses in payer pricing and reimbursement decision making.13

Methods

We reviewed publicly available evaluation materials for ICER assessments performed from 2008 to 2019, identified from ICER’s materials library on its official website (https://icer.org/explore-our-research/assessments/). A desktop search of the ICER website was conducted to identify key characteristics of ICER reviews and procedures over time. For the assessments conducted in 2018 and 2019, all draft evidence reports, all documents summarizing stakeholder comments on ICER draft reports and ICER responses, and all final evidence reports were reviewed. Because all full-text reports and stakeholder comments were reviewed, no screening was involved.

To characterize the evolution of ICER’s scope and procedural changes from 2008 to 2019, the annual number of reviews and the sequence of activities for each review were extracted. For each review, the following information was extracted: patient population, therapeutic area, names of interventions reviewed, the intervention types (eg, treatments for rare disease, gene therapies), the comparator names, whether only marketed therapies were reviewed, and whether ICER had previously conducted a review in the same therapeutic area. Finally, to ensure that our review captured any critical procedural changes that occurred after the review cutoff date, a search of ICER materials was conducted using ICER’s online library in December 2020. Based on that review, we confirmed ICER’s latest stakeholder engagement procedure and documented any changes that occurred between 2019 and 2020.

To assess the effectiveness of stakeholders’ scientific communication with ICER and their ability to inform ICER’s cost-effectiveness analyses, the following information was extracted for each of the assessments published in 2018 and 2019: the number and type of health technologies evaluated; the number of participating stakeholders by type (ie, manufacturers vs nonpharmaceutical stakeholders such as payers, patient advocates, clinical or research organizations, and individual participants); and the number of products reported to be cost-effective at the willingness-to-pay threshold of $150,000 per QALY gained in draft or final reports.
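For context on how this threshold is applied, the following minimal sketch (in Python, with hypothetical numbers; ICER’s actual models were not publicly available for the reviews we examined) illustrates the standard decision rule of comparing an intervention’s incremental cost-effectiveness ratio with the willingness-to-pay threshold.

```python
def icer_per_qaly(cost_new, cost_comp, qaly_new, qaly_comp):
    """Incremental cost per QALY gained for a new intervention vs its comparator."""
    return (cost_new - cost_comp) / (qaly_new - qaly_comp)


# Hypothetical inputs for illustration only; not drawn from any ICER report.
ratio = icer_per_qaly(cost_new=250_000, cost_comp=100_000, qaly_new=6.0, qaly_comp=4.5)
print(f"Incremental cost per QALY gained: ${ratio:,.0f}")        # $100,000
print("Cost-effective at $150,000 per QALY?", ratio <= 150_000)  # True
```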

For each of ICER’s assessments, all stakeholders were identified from ICER’s document “Response to Public Comments on Draft Evidence Report.” They included not only stakeholders invited by ICER for collaboration but also all other stakeholders who submitted comments. In addition, each stakeholder public comment on ICER’s draft reports was categorized, by stakeholder type, according to the following 3 attributes: comments with only editorial implications, comments that resulted in additional scenario or sensitivity analyses, and comments that resulted in revisions to ICER’s methods or analysis inputs used in the cost-effectiveness analyses.

When multiple stakeholders submitted similar comments, they were counted separately so that each participant’s comment acceptance rate could be estimated. We then critically assessed the comments in each of the 3 groups to characterize the type of comments that led to each type of ICER reaction. For all stakeholder inputs that resulted in any revision to ICER’s methods or analysis inputs, we reviewed each of the recommendations to create categories for the type of recommendations.
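As a rough illustration of this tabulation, the following sketch (hypothetical records and field names, not the study’s actual extraction sheet) shows how comments can be counted separately by year, stakeholder type, and impact category so that acceptance rates can be estimated for each group.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Comment:
    """One stakeholder comment; field names are illustrative only."""
    year: int
    stakeholder_type: str  # eg, "manufacturer", "patient group", "payer"
    impact: str            # "editorial", "scenario/sensitivity", or "base-case revision"


comments = [
    Comment(2018, "manufacturer", "base-case revision"),
    Comment(2018, "patient group", "editorial"),
    Comment(2019, "patient group", "base-case revision"),
]

# Count each comment separately, even when multiple stakeholders raise similar points,
# so that acceptance rates can be estimated by year and stakeholder type.
tally = Counter((c.year, c.stakeholder_type, c.impact) for c in comments)
for key, n in sorted(tally.items()):
    print(key, n)
```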

Finally, for comments that were associated with a change in the base-case analysis and a positive change in the cost-effectiveness conclusion associated with a product, we summarized the stakeholder requests, their justifications, and ICER’s responses.

Results

ICER’S EVOLUTION IN STAKEHOLDER ENGAGEMENT APPROACHES

As of December 2020, each ICER assessment consists of 3 phases: (1) the research protocol development phase, (2) the economic modeling plan development phase, and (3) the evidence discussion and dissemination phase. In the first phase, once the topic is selected and key stakeholders are identified, ICER invites input from selected stakeholders before completing and releasing a draft scope document. These selected stakeholders and other stakeholders are subsequently invited to provide public comments on the draft scope document.

In the second phase, ICER first releases a model analysis plan to the public, then moves ahead to compile comparative clinical evidence and develop an economic model, and finally releases its draft evidence report. Although the selected stakeholders have opportunities to provide additional data during ICER’s model development process, they do not have an opportunity to comment on ICER’s modeling approach until the draft report is released to the public. All stakeholders then have an opportunity to make public comments on the draft report, to which ICER responds in a written public document.

In the final phase, the content of the evidence report is discussed during the public meeting, where stakeholders from a variety of organizations, such as pharmaceutical companies, health insurance companies, and patient advocacy organizations, as well as clinical experts, exchange their views on the assessment results in a roundtable discussion. After the public meeting, ICER releases a final evidence report that incorporates the key discussion points and policy recommendations based on the outcome of the roundtable discussion.

In the early years of ICER reviews (2008-2011), ICER did not release details of each review, such as the research plan and draft results, until they were presented at the public stakeholder meeting. From 2011, in its review of treatments for depression, ICER started to release a draft evidence review report in advance of the public meeting. Although ICER released public comments on the depression report from stakeholders between 2012 and 2013, no public comments were posted again until mid-2014. From 2015, ICER started to release its responses to each of the stakeholder comments before issuing a revised evidence report. Shortly after this, ICER started to engage stakeholders earlier by releasing its draft scoping document and seeking stakeholder input (from July 2015, in the review of congestive heart failure treatments). The initial open-input period was then introduced in 2016.

Until early 2020, the standard timelines for stakeholders to respond to the draft scope document and the draft report were 3 weeks and 4 weeks, respectively. A new timeline that allows stakeholders an extra week was introduced in January 2020 for “class reviews,” which involve a review of the entire class of treatments in a therapeutic area.

ASSESSMENT TRENDS BETWEEN 2008 AND 2019

As summarized in Figure 1, the number of ICER evaluations per year increased from 2 in 2008 to 12 in 2018 and then dropped to 8 in 2019. In earlier years (2008-2010), all ICER assessments evaluated the value of disease management strategies rather than specific products, such as computed tomography colonography for colorectal cancer screening, intensity-modulated radiation therapy for prostate cancer, and parent behavior training for attention-deficit/hyperactivity disorder. Conversely, all 12 evaluations in 2018 were of novel commercial products. Evaluations of rare indications were introduced in 2016 and comprised 50% of evaluations in 2018. Evaluations of gene therapies were introduced in 2018. The first update of a previous review was conducted in 2017, and the proportion of reviews that were updates increased from 14% in 2017 to 40% in 2019.

FIGURE 1. Number of Evaluations and Topics From 2008 to 2019

With regard to the types of stakeholders providing input during each review, from 2008 to 2014, all policy roundtable participants represented payers and hospitals. From 2015, however, patient advocacy groups and pharmaceutical companies started to participate in public engagements. Additionally, all reviews in 2015 involved participation of clinical experts from academic or clinical organizations, and most reviews in 2015 involved payer participants. The percentage of participants representing a pharmaceutical company was initially below 20% in 2015; since 2016, however, approximately 40% of stakeholders participating in each evaluation have represented pharmaceutical companies. Between 2015 and 2019, the proportion of nonindustry stakeholders representing clinical societies, academic organizations, and payers declined. In contrast, nearly all participants in 2019 whose affiliations were not with pharmaceutical manufacturers represented patients or patient groups.

OVERVIEW OF THE 2018-2019 REVIEWS

In 2018 and 2019, 172 (108 in 2018 and 64 in 2019) stakeholders provided comments to ICER’s draft reports. The number of interventions reviewed was 39 in 2018 and declined to 17 in 2019. As presented in Figure 2, at the willingness-to-pay threshold of $150,000, one-third (4 of 12) of the final reports in 2018 found no intervention to be cost-effective, and one-third (4 of 12) found all interventions to be cost-effective. On the other hand, in 2019, the proportion of assessments that found no intervention to be cost-effective in the final report increased to 50% (4 of 8 reviews), and only 2 of 8 assessments found all evaluated interventions to be cost-effective (Figure 2).

FIGURE 2. Summary of Evaluations in 2018 and 2019

Although the analysis framework was generally consistent across reviews, ICER’s approach for selecting comparators varied by assessment. For example, nearly all (18 of 20) assessments reviewed compared interventions with a single most relevant comparator, but in a 2018 assessment of treatments for hemophilia A, 2 alternative comparators were used in the base-case, scenario, sensitivity, and threshold analyses; the rationale for using multiple comparators in the base case was not presented in the report. Similarly, in a 2019 assessment of treatments for type 2 diabetes, ICER evaluated 1 new intervention against 4 existing comparators; in this assessment, the value-based benchmark pricing was calculated using only the lowest-cost intervention as the reference comparator.

IMPACT OF COMMENTS ON ICER’S APPROACH BY REVISION TYPE AND STAKEHOLDER TYPE

The majority of stakeholder letters submitted in 2018 and 2019 resulted in at least 1 revision to ICER’s draft report to address editorial or technical issues (67% [72/108] in 2018 and 70% [45/64] in 2019; Figure 3). Especially noteworthy was that the proportion of nonpharmaceutical stakeholder letters that resulted in any change increased from less than half (32/68) in 2018 to 65% (24/37) in 2019. When the impact of each letter on ICER’s report was categorized by whether the revisions made in response may have affected the incremental cost-effectiveness ratios of the interventions reviewed, 28% of reviewed letters (27% [29/108] in 2018, 31% [20/64] in 2019) resulted in a revision of ICER’s base-case analysis (49 comments in 2018, 23 in 2019).

FIGURE 3. Proportion of Stakeholder Letters That Resulted in Revisions by Type of Revision and by Stakeholder Type

The proportion of letters from nonpharmaceutical stakeholders with such impact increased from 16% (11/68) in 2018 to 30% (11/37) in 2019, whereas the proportion of letters from pharmaceutical companies that may have influenced ICER’s cost-effectiveness ratios declined from 45% (18/40) to 33% (9/27). In 2018, about a quarter of stakeholder letters led ICER to explore the concerns raised through additional sensitivity or scenario analyses; the frequency of this type of response declined to 19% (12/64) in 2019.

RESULTS OF CRITICAL ASSESSMENT OF STAKEHOLDER COMMENTS

Seventy-two comments (49 in 2018, 23 in 2019) resulted in a change in ICER’s base-case analyses. Nearly all comments that resulted in such a change in 2018 (48/49) were related to economic model inputs (efficacy inputs, n = 12; health care resource use costs, n = 11; drug pricing, n = 10; dosing regimen, n = 6; utility inputs, n = 6; safety inputs, n = 2; and baseline severity inputs, n = 1; Table 1). On the other hand, more than half of these comments in 2019 (12/23) were related to the model structure (structure not reflecting the disease course [n = 8] and structure not being able to reflect important clinical differences [n = 4]), whereas most other comments (10/11) were related to model inputs (Table 1).

TABLE 1.

Comments That Resulted in Base-Case Outcomes Change by Type and by Rationale, 2018-2019a

Comment type, 2018 reviews: efficacy inputs related (n = 12, 25%); costs of health care resources other than drugs (n = 11, 23%); drug price related (n = 10, 20%); drug dosing or regimen related (n = 6, 12%); utility inputs related (n = 6, 12%); safety inputs related (n = 2, 4%); baseline severity related (n = 1, 2%); model structure related (n = 1, 2%)

Comment type, 2019 reviews: model structure inconsistent with disease course (n = 8, 35%); model structure not being able to capture key clinically important differences in outcomes (n = 4, 17%); costing inputs related (n = 3, 13%); utility inputs related (n = 2, 9%); efficacy assumption related (n = 2, 9%); baseline clinical characteristics (n = 2, 9%); mortality related (n = 1, 4%); time horizon related (n = 1, 4%)

Rationale for comment, 2018 reviews: published articles (n = 12, 24%); inconsistency across therapies (n = 12, 24%); new data provided by stakeholder (n = 7, 14%); clinical justifications and specific solutions (n = 5, 10%); clinical experts’ opinions (n = 4, 8%); no justification (n = 4, 8%); clinical justifications (n = 3, 6%); error (n = 2, 4%); patients’ opinions (n = 1, 2%)

Rationale for comment, 2019 reviews: patient experience (n = 5, 22%); inconsistency with published data/model, with specific recommendations (n = 5, 22%); external validation (n = 4, 17%); obsolete data, with specific recommendation for new data to use (n = 2, 9%); logical flaw (n = 2, 9%); inconsistency with published data, without specific reference (n = 2, 9%); impact on the model results (n = 1, 4%); no rationale (n = 1, 4%); error (n = 1, 4%)

aN = 49 in 2018; N = 23 in 2019.

When the 72 comments were further categorized by the stakeholders’ rationale for their recommendations, most accepted comments included specific alternative data or a published article to rationalize the recommendation; specifically, in 2018, 24 of 49 comments provided such alternatives (published data: n = 12; data provision: n = 7; clinical justifications with specific solutions: n = 5; Table 1). Similarly, in 2019, approximately half (11/23) of the comments in this category included specific recommendations on an alternative approach (alternative published data or approaches related to model inputs or structure: n = 4; data offered for external validation of model outcomes: n = 5; more recent data: n = 2; Table 1). In addition to providing alternative approaches for model inputs or structure, comments that pointed out inconsistent methods or assumptions used to derive model inputs across different treatments (eg, different methods used to derive costs and efficacy of treatments) were also addressed frequently (12/49 comments in 2018; Table 1). In 2019, over 20% of the accepted comments, provided by patient advocacy groups, were intended to reflect patient perspectives (5/23 comments; Table 1).

Among the 49 letters (29 in 2018 and 20 in 2019) that resulted in a change in the base-case analyses, 3 were associated with ICER reports where a product changed from being not cost-effective to being cost-effective or to being inconclusive. Table 2 summarizes the stakeholder requests and their justifications, as well as ICER’s responses and the impact of the changes on the cost-effectiveness conclusions. All 3 cases were manufacturer letters from 2018, and the stakeholders provided alternative data sources and methods for ICER’s use.

TABLE 2.

Summary of Comments Associated With a Change in the Cost-Effectiveness Conclusions From Draft to Final Report (draft vs final conclusions under the $150,000 WTPT)a

  • Migraine prevention; erenumab (Aimovig) vs no preventive treatment: costs of emergency department visits, hospitalizations, and botox administrations were updated following the manufacturer’s comment that the original estimates were too low, with peer-reviewed publications recommended as alternative sources. Draft: not cost-effective in the episodic indication; final: cost-effective in both chronic and episodic indications.

  • Migraine prevention; fremanezumab (Ajovy) vs no preventive treatment: a treatment-specific effect on severity distribution was introduced based on clinical trial data provided by the manufacturer, and ICER updated discontinuation rates based on clinical trial data provided by the manufacturer. Draft: not cost-effective for both chronic and acute indications; final: cost-effective in both indications.

  • Opioid use disorder; buprenorphine sustained-release monthly injection (Sublocade) vs generic buprenorphine/naloxone: the manufacturer criticized ICER’s methods for estimating comparative effectiveness inputs and provided alternative methods and data sources; in response, ICER dropped the product from the base-case analysis due to a lack of evidence of comparative effectiveness. Draft: not cost-effective; final: dropped from base-case assessment due to insufficient evidence.

aIn high-risk prostate cancer, abiraterone acetate (Yonsa) was removed from ICER’s clinical assessment in response to the manufacturer’s assertion that it was distinct from the comparators. However, an economic assessment for this product was not conducted in ICER’s draft assessment, so we did not include the manufacturer’s comment as one that influenced the cost-effectiveness conclusion.

ICER = Institute for Clinical and Economic Review; WTPT = willingness-to-pay threshold.

The new data provided were (1) a reference to a published study, (2) new clinical trial data, and (3) the results of a clinical systematic literature review, along with the stakeholder’s recommendation on how to use the extracted data to conduct an indirect treatment comparison. In 2019, no stakeholder comment resulted in a change to ICER’s cost-effectiveness conclusions. However, in an assessment of treatments for Duchenne muscular dystrophy, after the model structure was revised based on recommendations from 7 stakeholder letters, the incremental cost per QALY gained for 1 of the assessed interventions declined from $561,000 to $344,000. This reduction, however, was not sufficient for the intervention to be considered cost-effective under any of the willingness-to-pay thresholds used by ICER.

Discussion

In this review, we observed increased opportunities for exchanging comments between ICER and stakeholders over the past decade (from only 1 opportunity per evaluation in 2008 to 3 by 2020). As ICER transitioned from conducting several general treatment strategy reviews each year to conducting over 10 annual reviews of specific pharmaceutical products, the types of stakeholders who participated in public engagements evolved from a group of payers and hospitals to a more diverse group that included patient advocacy groups, pharmaceutical companies, and clinical experts.

For the 2 years (2018 and 2019) in which we conducted a critical review of interactions between ICER and stakeholders who provided comments to draft reports, more than one-third of the interventions reviewed (19/54 products in 20 assessments) were found to be cost-effective at the current net price. Although ICER addressed most stakeholder comments with editorial updates, a sizable number of comments resulted in a revised base-case cost-effectiveness analysis or additional sensitivity or scenario analyses. Many of the comments that resulted in base-case analysis revisions involved arguments about inconsistency across treatments or an unrealistic model structure.

Logical explanations of inconsistencies in modeling methods (eg, assumptions used to derive model inputs) across evaluated treatments often triggered a reconsideration by ICER, indicating ICER’s interest in using methods that treat all included treatments fairly. ICER also accepted recommendations to revise a model structure that were supported by patient and physician clinical experiences, especially when such arguments were supported by published evidence.

This study also found that, in general, comments accepted by ICER to revise its cost-effectiveness analysis improved the scientific accuracy of the analyses but had little influence on the primary cost-effectiveness conclusions. Given that ICER’s cost-effectiveness analyses are conducted by experienced health economists, it may not be surprising that the draft models did not differ drastically from the final versions. However, it is important for decision makers to be able to conduct their own analyses. For instance, payers would likely benefit from continuing to develop their own models or from using ICER’s interactive modeler populated with their own inputs in their decision making. Similarly, it is critical for pharmaceutical manufacturers to develop and publish a US-based cost-effectiveness model based on the best evidence to support their product; the ability to explain the differences between the company-developed model and ICER’s analysis will likely support collaboration with payers.

Given that many pharmaceutical and payer stakeholders regularly invest in developing their own cost-effectiveness models, an improved process that encourages early, interactive collaboration, in which stakeholders and ICER prepare an economic model framework together and review and resolve competing recommendations in multistakeholder discussions, may promote more efficient collaboration. Because ICER’s assessment teams often rely on the clinical evidence used to support regulatory approvals, such as phase 2 and some phase 3 trial data, the teams may be unaware of additional information that could be helpful for the assessment, such as patient surveys pertaining to disease severity, patient preference, and other contextual benefits. Therefore, this type of early collaboration and interactive exchange may improve the ability of ICER’s research teams to capture value from a more holistic viewpoint.

Further, the 4 weeks currently allotted for commenting on ICER draft reports may not be sufficient for many stakeholders, because within this window they must digest the details of the draft report, prioritize the key issues to be corrected, identify evidence to support their comments (and potentially conduct a new analysis), and develop a well-thought-out letter. Additional discussions between stakeholders and ICER to determine a timeline that achieves an optimal quality of collaboration may be warranted.

In our review, we found that many stakeholder comments led ICER to add scenario and sensitivity analyses without changing the base-case analyses. Although scenario and sensitivity analyses provide useful information to decision makers, these additional analyses currently have no effect on ICER’s threshold pricing calculations (based on our review of ICER reviews conducted through 2019), because only base-case inputs were used to calculate the threshold prices. Because many US payers use ICER’s threshold pricing calculations as the basis of their pricing and reimbursement negotiations, the threshold pricing recommendations should be interpreted carefully according to the strength of the evidence supporting the cost-effectiveness model.
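To illustrate why only base-case inputs matter for these benchmarks, the generic sketch below (hypothetical numbers; it assumes total incremental cost is linear in the drug’s price and does not reproduce ICER’s actual value-based price benchmark calculations, which involve additional elements such as discounting and net-price assumptions) solves for the price at which the base-case incremental cost per QALY equals a willingness-to-pay threshold.

```python
def threshold_price(wtp_per_qaly, delta_qaly, non_drug_delta_cost, units_of_drug):
    """
    Solve for the drug price p* at which the incremental cost per QALY equals the
    willingness-to-pay threshold, assuming incremental cost is linear in price:
        delta_cost(p) = units_of_drug * p + non_drug_delta_cost
    Setting delta_cost(p*) / delta_qaly = wtp_per_qaly and solving for p* gives:
        p* = (wtp_per_qaly * delta_qaly - non_drug_delta_cost) / units_of_drug
    """
    return (wtp_per_qaly * delta_qaly - non_drug_delta_cost) / units_of_drug


# Hypothetical base-case inputs for illustration only.
p_star = threshold_price(wtp_per_qaly=150_000, delta_qaly=1.2,
                         non_drug_delta_cost=-20_000, units_of_drug=12)
print(f"Price meeting the $150,000/QALY threshold: ${p_star:,.0f} per unit")  # ~$16,667
```

Because such a calculation depends only on the base-case parameter values, changes confined to scenario or sensitivity analyses leave the threshold price unchanged.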

Our review revealed that ICER often addressed comments about drug pricing and regimens in its cost-effectiveness analyses, highlighting the importance for all participating stakeholders (not only manufacturers but also payers and clinical stakeholders) of carefully reviewing the drug pricing, dosing, and regimen inputs presented in the draft ICER report so that ICER receives balanced input. In addition, ICER incorporated much of the new clinical data provided by manufacturers that were not yet available in the public domain. However, because data provision to ICER is voluntary, it is unknown whether the participating manufacturers selectively provided data favorable to their products.

It is important that ICER’s procedures give manufacturers incentives to provide all key supporting data, irrespective of the impact of those data on cost-effectiveness outcomes, to achieve the overarching goal of strengthening the evidence underlying the assessment. For instance, ICER may be able to create such incentives by requiring manufacturers to have submitted all data requested in the initial review stage if they wish to submit additional data when commenting on ICER’s draft report.

Although several published articles have described ICER’s methods and the methodological issues faced by various HTA agencies,12,15,16 and general opinion articles have provided guidance on how to use comparative effectiveness research in determining drug pricing and access policies,17-19 to our knowledge, this is the first attempt to formally review and summarize ICER’s public interactions with all stakeholders by detailed comment type.

With ICER’s influence on pharmaceutical policies rapidly gaining traction in the United States, the takeaways from this review are applicable not only to pharmaceutical manufacturers but also to all other key stakeholders, such as payers, policymakers, and patient advocates, as well as experts at ICER. For instance, in addition to participating in public meetings for ICER assessments and incorporating information presented in ICER reports in their pricing and reimbursement decision making, insurers may benefit from assigning their in-house pharmacoeconomic experts to review ICER’s draft economic analyses and provide input to ICER during open public comment periods. Similarly, to continue to improve their effectiveness in communicating with ICER, patient advocacy groups may start consulting with external health economists or training their staff in health economics.

LIMITATIONS

Our review has several limitations. First, although ICER has published transparent analysis plans in its draft and final evidence reports for each of its assessments, its economic models were not publicly available in 2018 and 2019. Therefore, the exact revisions made to the draft models to address stakeholder comments and their effects on the cost-effectiveness analysis outcomes could not be validated.

Second, when evaluating the types of comments associated with changes in ICER’s approaches, we focused on revisions in the cost-effectiveness analyses. Because many other aspects of ICER’s evaluations, such as its evidence ratings and voting questions, are critical to stakeholders, future studies evaluating effective communication regarding these other aspects of ICER’s reviews would be of value.

Finally, due to the limited documentation of interactions between ICER and stakeholders in the earlier phase of ICER reviews, we limited our review of the exchanges between ICER and stakeholders to 2018 and 2019. Because ICER’s stakeholder engagement process has consistently improved, continued monitoring of future stakeholder engagements would be warranted. In particular, based on our review of ICER’s stakeholder interactions in 2018 and 2019, there was substantial overlap between comments by pharmaceutical manufacturers and patient advocacy groups in 2019. Because this is a new trend that we first observed in 2019, additional research is needed to understand how the frequency of similar comments by multiple stakeholders affects ICER’s consideration of such comments.

Conclusions

This study found that stakeholders can communicate the value of health technologies effectively by providing solution-oriented guidance in their public comments during ICER evaluations. These evaluations give stakeholders an opportunity to strengthen clinical justifications and provide patient perspectives. Revisions beyond editorial changes were typically triggered by the provision of new data or by logical explanations supported by references, clinical opinions, or patient experiences. Actionable, evidence-based recommendations were accepted more often than other recommendations. It is critical that stakeholders leverage ICER evaluations as opportunities to promote dialogue to better understand the value of technologies.

ACKNOWLEDGMENTS

The authors thank John Forbes of RTI Health Solutions for medical editing support.

REFERENCES

  • 1. Keehan SP, Cuckler GA, Poisal JA, et al. National health expenditure projections, 2019-28: expected rebound in prices drives rising spending growth. Health Aff (Millwood). 2020;39(4):704-14.
  • 2. Luce B, Cohen RS. Health technology assessment in the United States. Int J Technol Assess Health Care. 2009;25(Suppl 1):33-41.
  • 3. Sullivan SD, Watkins J, Sweet B, Ramsey SD. Health technology assessment in health-care decisions in the United States. Value Health. 2009;12(Suppl 2):S39-44.
  • 4. White N, Johns A, Latch E; ICON. Industry perceptions and expectations: the role of ICER as an independent HTA organization. 2019. Accessed July 28, 2019. https://heatinformatics.com/sites/default/files/images-videosFileContent/Whitepaper_ICER.pdf
  • 5. Neumann PJ, Sullivan SD. Economic evaluation in the US: what is the missing link? Pharmacoeconomics. 2006;24(11):1163-68.
  • 6. Pizzi LT. The Institute for Clinical and Economic Review and its growing influence on the US healthcare. Am Health Drug Benefits. 2016;9(1):9-10.
  • 7. Ronquest NA, Gould IG, Barnett CL, Mladsi DM. A systematic review of ICER evaluations from 2008 to 2018: recent trends in evaluation process and lessons learned. Poster presented at: ISPOR 24th Annual International Meeting; May 21, 2019; New Orleans, LA.
  • 8. CVS Health. Current and new approaches to making drugs more affordable. August 2018. Accessed March 15, 2021. https://cvshealth.com/sites/default/files/cvs-health-current-and-new-approaches-to-making-drugs-more-affordable.pdf
  • 9. Institute for Clinical and Economic Review. ICER addresses misconceptions about its goals and methods of value assessment. August 9, 2016. Accessed April 19, 2021. https://icer-review.org/announcements/icer-myths/
  • 10. Schafer J, Blandford L. The evolution of value frameworks and what is next. J Clin Pathways. 2019;5(6):34-35. doi:10.25270/jcp.2019.08.00090
  • 11. Gerlach JA, Snow B, Prioli KM, Vertsman R, Patterson J, Pizzi LT. Analysis of stakeholder engagement in the public comments of ICER Draft Evidence Reports. Am Health Drug Benefits. 2020;13(4):136-42.
  • 12. Cohen JT, Silver MC, Ollendorf DA, Neumann PJ. Does the Institute for Clinical and Economic Review revise its findings in response to industry comments? Value Health. 2019;22(12):1396-401.
  • 13. Yan A, Loos A. ICER’s growing influence on payer decision making: the impact of ICER assessments on market dynamics and patient access. HTA Quarterly. Winter 2020. Accessed June 24, 2021. https://www.xcenda.com/insights/htaq-winter-2020-icer-payer-decision-making
  • 14. Pearson SD. The ICER value framework: integrating cost effectiveness and affordability in the assessment of health care value. Value Health. 2018;21(3):258-65.
  • 15. Pauly MV. The questionable economic case for value-based drug pricing in market health systems. Value Health. 2017;20(2):278-82. doi:10.1016/j.jval.2016.11.017
  • 16. Ghabri S, Mauskopf J. The use of budget impact analysis in the economic evaluation of new medicines in Australia, England, France and the United States: relationship to cost-effectiveness analysis and methodological challenges. Eur J Health Econ. 2018;19(2):173-75. doi:10.1007/s10198-017-0933-3
  • 17. Hoffman A, Pearson SD. “Marginal medicine”: targeting comparative effectiveness research to reduce waste. Health Aff (Millwood). 2009;28(4):w710-18. doi:10.1377/hlthaff.28.4.w710
  • 18. Pearson SD, Dreitlein WB, Henshall C, Towse A. Indication-specific pricing of pharmaceuticals in the US healthcare system. J Comp Eff Res. 2017;6(5):397-404. doi:10.2217/cer-2017-0018
  • 19. Sculpher M, Claxton K, Pearson SD. Developing a value framework: the need to reflect the opportunity costs of funding decisions. Value Health. 2017;20(2):234-39.
