Innovations in Pharmacy. 2021 Jan 20;12(1). doi: 10.24926/iip.v12i1.3606

Let a Thousand Models Bloom: ICER Analytics Opens the Floodgates to Cloud Pseudoscience

Paul C Langley
PMCID: PMC8102958  PMID: 34007666

Abstract

It has been noted on numerous occasions that modeled claims for cost-effectiveness, if driven by assumption for the lifetime of a hypothetical patient population, can be easily ‘gamed’ to create a required claim. These marketing exercises to support product entry are all too common in the literature. The Institute for Clinical and Economic Review (ICER), in its launch of the ICER Analytics platform, has provided a framework to support precisely these activities. Following the mainstream methodology in health technology assessment, the ICER Analytics platform facilitates the creation of approximate information to support formulary decisions. This is an odd development because it undercuts ICER’s belief that it is the key arbiter in health technology assessment in the US, setting the stage for pricing and access recommendations. With the release of the ICER Analytics platform, others can now customize the ‘backbone’ ICER model in a disease area (i.e., change assumptions) to develop alternative and competing value assessments and ‘fair’ price claims. The problem is, of course, that without a reference point there is no basis for comparing modeled claims other than through challenging assumptions. Indeed, ICER has made this easy by reducing barriers to lifetime model building so that manufacturers and others can create competing (and confusing) claims within, literally, a few minutes. ICER will then become one of a multitude of competing voices for the attention of formulary committees and other health decision makers, letting a thousand imaginary models bloom where no model can be judged on the basis of credible, empirically evaluable and replicable product claims.

Keywords: ICER Analytics, imaginary worlds, pseudoscience, multiple models

INTRODUCTION

Demarcating science from pseudoscience rests on a simple premise: the ability of the claims made to be credible, empirically evaluable and replicable 1. Health technology assessment fails this test. Since the early 1990s it has focused on inventing claims for cost-effectiveness based on lifetime simulation models; a collection of beliefs or practices mistakenly regarded as being based on scientific method. Hypothesis testing has been rejected in favor of approximate information 2. The reason for this denial of the standards of normal science is clear: it is easier, at product launch, to drive claims for cost-effectiveness by filling evidence gaps with assumptions 3. The alternative, agreeing on a research program to address evidence gaps, is far too time consuming. It is far easier to create claims, from a lifetime simulation, that have no possibility of ever being empirically evaluated.

The Institute for Clinical and Economic Review (ICER) has accepted this approximate information meme alongside professional groups such as the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) 2. ICER’s claims for pricing and product access are clearly pseudoscience. But more interesting is the extent of its failure to meet those standards. Not only do its claims lack credibility from the perspective of empirical assessment, but the claims themselves are mathematically impossible 4. This is because ICER has failed to appreciate the limitations imposed by the axioms of fundamental measurement. It holds to the belief, without a shred of evidence to support this position, that utility scales have ratio properties 5. This allows the creation of quality adjusted life years (QALYs), the cornerstone of their reference case modeling. Unfortunately, the respective multiattribute utility scales such as the EQ-5D-3L were never designed to have ratio properties; they are ordinal scales. An ordinal scale cannot support multiplication; QALYs are therefore an impossible construct, hence the term I-QALY. The misuse is even more egregious: a utility scale such as the EQ-5D-3L is a multiattribute scale. This means it lacks dimensional homogeneity; it cannot support a single score because the symptoms covered are each dimensionally unique 6. If any evidence is needed that the EQ-5D-3L fails to meet ratio standards, we need only note that its utilities can take negative values; there is no true zero. Nor, it might be added, does the EQ-5D-3L have interval properties. It cannot support claims for response to therapy.
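To make the point concrete, the sketch below uses hypothetical health states, years and utility weights (not drawn from any instrument or ICER model) to show why QALY arithmetic presupposes more than rank-order information: any order-preserving re-scoring of an ordinal utility scale is equally admissible, yet it can reverse the QALY comparison between two treatment profiles.

```python
# Minimal sketch with hypothetical numbers: QALYs = utility x years assumes the
# utility values carry more than rank-order (ordinal) information. Two scorings
# that preserve the same ordering (mild > moderate > severe) reverse the result.

def qalys(profile, utility):
    """Sum of (years in state) x (utility weight of state)."""
    return sum(years * utility[state] for state, years in profile)

# Hypothetical treatment profiles: (health state, years spent in it)
drug_a = [("mild", 8), ("severe", 2)]
drug_b = [("mild", 4), ("moderate", 6)]

scoring_1 = {"mild": 0.90, "moderate": 0.60, "severe": 0.20}  # one admissible scoring
scoring_2 = {"mild": 0.95, "moderate": 0.90, "severe": 0.10}  # same ordering, re-scored

for name, scoring in [("scoring 1", scoring_1), ("scoring 2", scoring_2)]:
    a, b = qalys(drug_a, scoring), qalys(drug_b, scoring)
    print(f"{name}: Drug A = {a:.2f} QALYs, Drug B = {b:.2f} QALYs, "
          f"favoured: {'Drug A' if a > b else 'Drug B'}")
# scoring 1 favours Drug A (7.60 vs 7.20); scoring 2 favours Drug B (7.80 vs 9.20).
```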

ICER is not alone in failing to understand the limitations placed by the axioms of fundamental measurement on multiattribute utility instruments and on disease specific measures where items on scales capture different constructs. The Food and Drug Administration (FDA) is also unaware (or chooses to ignore them). The recent recommendations by the FDA for cost-effectiveness presentations to payers endorse the construction of ICER-type imaginary worlds and signal a willingness to accept QALYs or other patient reported outcomes (PROs) 7. The only qualification is not that a particular model meets the standards of normal science or, more specifically, respects the axioms of fundamental measurement, but that there is some minimum justification for the measure provided by the manufacturer or model builder; the FDA is quite willing to endorse pseudoscientific claims. The notion that cost-effectiveness and similar claims for pharmaceutical products and devices should be credible, empirically evaluable and replicable is of no interest. Perhaps we can look forward to the FDA’s enthusiastic endorsement of the ICER Analytics cloud platform for manipulating imaginary worlds.

The FDA is not alone. In the leading technology assessment textbook the measurement argument for multiattribute scales is quite confused 8. As a first step, it is acknowledged that because multiattribute scales such as the EQ-5D-3L can generate negative scores, they cannot have the ratio properties required to create QALYs. However, all is not lost. We are saved by the argument that these scales have interval properties and can, therefore, support QALY ratios. The case presented is confused and false. The unfortunate fact is that the multiattribute scales do not have interval properties. The scales were not designed to have this property. They are, in fact, ordinal scales. They cannot support any of the standard arithmetic operations. No proof is presented that they have interval properties. As noted, the fact that they are multiattribute scales means they cannot support a single score, since a single score requires each item on the scale to relate to a single common construct. They lack construct validity. The common mistake is to assume that because it is possible to place these ordinal scores on a number line with equal intervals, they must have interval properties; we could just as well place them on a number line with unequal intervals.
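Even granting interval properties for the sake of argument, ratio statements would still not follow. The short illustration below, using hypothetical utility values, shows that ratios are not invariant under the affine re-anchoring an interval scale admits.

```latex
% Hypothetical utilities, for illustration only. An interval scale admits the
% affine transformations u' = a u + b (a > 0), under which ratios change:
\[
  \frac{u_1'}{u_2'} \;=\; \frac{a\,u_1 + b}{a\,u_2 + b} \;\neq\; \frac{u_1}{u_2}
  \quad \text{unless } b = 0 .
\]
% Example: u_1 = 0.8, u_2 = 0.4 gives a ratio of 2; re-anchoring with a = 1,
% b = 0.2 gives 1.0 / 0.6 \approx 1.67. A claim of "twice the utility" (and
% hence twice the QALYs per year) therefore requires a true zero, i.e., the
% ratio properties these scales were never shown to possess.
```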

The deficiencies of I-QALY modelling are well known, yet ICER perseveres with the I-QALY imaginary simulations. After all, it is their business model. ICER refuses to accept the standards of normal science. Yet ICER has gone one step further; it has established ICER Analytics, a cloud-based curated virtual reality platform that allows those with a belief in imaginary worlds and I-QALYs to create a universe of alternative imaginary modeled claims 9. ICER has all but eliminated the barriers to constructing lifetime simulations. Anyone can develop model variants, including both those who support modeled claims and those who may wish simply to demonstrate their key weaknesses. Irrespective of how closely one observes ‘standards’ for model claims, the truth is that it is always possible to ‘create’ a countervailing case.

ICER ANALYTICS: CURATED PSEUDOSCIENCE

The ICER Analytics platform comprises (i) a pricing comparison module where users can compare their ‘fair’ drug prices with ICER’s imaginary threshold ‘fair’ price; and (ii) an interactive ‘backbone’ model where the user can modify selected assumptions in published ICER models. Both of these are, of course, imaginary exercises as all ICER models (including pricing thresholds) fail the standards of normal science.

The pricing comparison module takes the ‘fair’ price value assessment from the modeled evidence report for the product(s) as the ‘reference’ price and allows users to compare their own price. This is essentially a waste of time as the ICER fair price is created from a simulation model that lacks any credibility. The pricing comparison module could, of course, be replicated for any number of competing imaginary models. Whether decision makers feel emboldened to take this ICER reference point as a serious element in price negotiations is a matter of choice; it can always be challenged. The fact that a ‘price’ is greater or less than the magical and imaginary ICER ‘fair’ price claim is of no significance; it just reflects the choice of model structure and the assumptions driving the respective models. The ICER price is not a ‘unique’ contribution; it is just a pedestrian fantasy. After all, a so-called imaginary fair price is just an outcome of the ICER model, the assumptions and the choice of a mathematically impossible threshold willingness-to-pay criterion. The result is that for each ICER evidence model there will be, depending on the choice of assumptions, a multitude of ‘fair’ prices consistent with any cost-per-QALY threshold. Not only does this demonstrate that ICER’s concept of a ‘fair’ price is a charade, but any prior ‘fair’ price claim from ICER evidence reports over the past six or more years can be easily challenged. Where a previous ICER ‘fair’ price has been a benchmark in pricing and access negotiations, the opportunity presents itself for re-negotiation with formulary committees and other health care decision makers.

The interactive modeling option takes the ICER reference case lifetime simulation framework as the point of departure; the user has apparently ‘bought in’ to this ICER Analytics curated virtual reality fantasy platform. This is seen by ICER as a major step forward in health technology assessment: the ability to manipulate ICER models to create alternative imaginary pricing claims. ICER still maintains that creating evidence is to be preferred to the more mundane process of discovering new facts and evaluating the impact of competing therapies through the tedious process of theorizing and hypothesis testing. ICER is joined in this fantasy by many in health technology assessment: truth is simply consensus (Ludwig Wittgenstein; 1889-1951) 10. As Locke (John Locke; English philosopher 1632-1704) makes clear in An Essay Concerning Human Understanding: Every step the mind takes in its Progress towards Knowledge, makes some Discovery, which is not only new, but the best too, for the time at least 11. Rather than seeing knowledge as progressive yet provisional, ICER is content to let subscribers to ICER Analytics manipulate the assumptions of a static, mathematically impossible simulation to create claims that have no intention whatsoever of meeting the standards of normal science. For ICER and its acolytes truth is about rhetoric, persuasion and authority; a belief system resting on a sociological consensus that is not interested in coming to grips with reality. Evidence is created, not discovered. This, as noted in previous commentaries, is in stark contrast to the invention of science in the 17th century as evidenced in the motto of the Royal Society (1662), nullius in verba (take no man’s word for it) 12 13.

For those who accept the relativism of the ICER Analytics curated virtual reality, with its ability to manipulate assumptions and its belief in an unknown future reality (discounted at 3%), the platform must have significant intuitive appeal in supporting formulary decisions. Indeed, the more cynical observer might draw a parallel between belief in the ICER alternative curated reality to support formulary decisions and subscription to ‘less’ curated conspiracy theories (e.g., QAnon) that attract, in the absence of any reality check, a multitude of devoted followers.

The ability in the ICER Analytics platform to manipulate assumptions to create alternative curated future realities also defies the rules of logic. It has been pointed out in previous commentaries that to assume an observation drawn from past experience (i.e., an assumption based on previous empirical observations) will necessarily hold in the future is to ignore Hume’s problem of induction 13. Certainly, assumptions can support hypotheses; the difference is that the hypotheses are credible, empirically evaluable and replicable. If a hypothesis is not supported empirically then we go back to the drawing board and assess the merits, or otherwise, of the chosen assumptions. In the ICER Analytics approach to creating an unknown future reality there is no ability to evaluate the respective claims. The virtual reality can never be falsified. We do not know whether it is right or wrong, we will never know, and we were never intended to know; ICER Analytics curated model claims for ‘fair’ prices, created by application of I-QALY thresholds, are never ‘wrong’. Driven by assumption, such a claim runs up against the simple observation that induction cannot be ‘established by logical argument, since from the fact that all past futures have resembled past pasts, it does not follow that all future futures will resemble future pasts’ 14.

The ability to defy logic and manipulate assumptions to create alternative future realities is constrained by the structure of the relevant product specific ICER model, described as the ‘backbone’ of the particular disease specific ICER model and the launching pad for virtual games. The objective of such manipulation (as is allowed) is, apparently, to provide the opportunity to make any number of customized disease and product specific imaginary value assessments for clients or negotiating partners. There is presumably a willing audience for this belief. The more concerning view is that such users are not aware of the standards of normal science, taking the ICER model at face value and accepting revised value claims based on manipulated assumptions and other selected data elements; an analytical dead end. Indeed, those health care systems that have relied on ICER imaginary simulated claims to ‘inform’ decisions by various formulary and budget committees will now have the bonus of any number of competing ICER ‘backbone’ models for products in disease states to enjoy.

OPEN SEASON: CONSTRUCTING YOUR IMITATION ICER SIMULATION

A perennial, and well deserved, criticism of the ‘build your own imaginary simulation’ approach is that the less scrupulous will ‘game’ the modeling to generate faux claims for cost-effectiveness. This has been a point of contention with the Academy of Managed Care Pharmacy’s Format for Formulary Submissions, which allows discretion in the choice of simulated model framework 15 16. This is unlikely to change with ICER Analytics; in fact, gaming becomes easier given the ability to claim that a model carries the ICER ‘backbone’ seal of approval.

ICER Analytics allows considerable scope for potential users to exercise their imagination and change the assumptions within each of the product specific models. Allowable assumptions for modification (including inputting ‘new’ data) are detailed under: model settings (e.g., time horizon); epidemiology (e.g., gender); clinical inputs (e.g., trial results); quality of life (e.g., utility score); costs (e.g., separate direct medical costs); and the budget impact model (e.g., plan size). There is ample room for ‘adjustments’, with the ability to manipulate inputs and see the results for cost-per-I-QALY and threshold I-QALY values.
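As a rough sketch of how little it takes to move the headline number, the toy calculation below mirrors those input categories (time horizon, utility gain, drug costs, a 3% discount rate). The function, parameter names and all values are hypothetical and are not taken from the ICER Analytics interface; the point is simply the sensitivity of the cost-per-QALY figure to a single assumption change.

```python
# Toy cost-per-QALY calculation (hypothetical inputs only); not the ICER
# Analytics implementation. Changing one assumption, the annual utility gain,
# moves the result across the usual threshold range.

def cost_per_qaly(annual_cost_new, annual_cost_comp, annual_utility_gain,
                  time_horizon_years, discount_rate=0.03):
    """Discounted incremental cost divided by discounted incremental QALYs."""
    inc_cost = inc_qaly = 0.0
    for year in range(1, time_horizon_years + 1):
        discount = (1 + discount_rate) ** -year
        inc_cost += (annual_cost_new - annual_cost_comp) * discount
        inc_qaly += annual_utility_gain * discount
    return inc_cost / inc_qaly

# 'Base case' assumptions (hypothetical)
print(round(cost_per_qaly(30_000, 10_000, annual_utility_gain=0.05,
                          time_horizon_years=30)))   # ~400,000 per QALY
# Same drugs and costs, but a utility score more responsive to the disease
print(round(cost_per_qaly(30_000, 10_000, annual_utility_gain=0.15,
                          time_horizon_years=30)))   # ~133,000 per QALY
```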

The opportunity to manipulate assumptions to create a desired ‘result’ is obvious. An important case in point is the ability to reformulate QALYs. A feature of many of the ICER models is the negligible difference in lifetime QALYs between new and comparator products. A minimal gain of, say, one or two QALYs over the model timeframe means that the impact of product cost differences dominates cost-per-QALY claims, leading to cost-per-QALY thresholds dictating substantial price discounts. This opportunity is explored in the attached Appendix, which demonstrates that if you are committed to making the case for price discounting then you must stay with a multiattribute utility score that may have little if any relevance to evaluating the impact of competing therapies for the target population in the disease state.

Justifying a new (yet imaginary) utility score is reasonably straightforward given that ICER’s preferred scores are generic multiattribute scores such as the EQ-5D-3L. These typically rest on a limited number of symptoms or attributes and minimal response categories. The EQ-5D-3L, for example, embodies five attributes (mobility, self-care, usual activity, pain/discomfort, and anxiety/depression) and three response levels (no problems, some problems and extreme problems). It would be relatively easy to argue that these attributes are of limited or no interest to patients in the disease area (no mention is made of caregivers) and that a patient-centric utility score (e.g., one proposed by key opinion leaders in the disease state from health state descriptions) which directly evaluates differences between the target and comparator product is more appropriate. Even a small increase in lifetime imaginary QALYs (e.g., a 5 QALY difference) would have a substantial impact on discounted cost-per-QALY claims and, given the ICER cost-per-QALY thresholds, justify claims for higher prices than ICER may have recommended with the EQ-5D-3L utilities. This approach, of course, puts to one side the fact that the QALY (or impossible I-QALY) is a mathematically nonsensical construct.

Modelled direct medical costs present a further obvious opportunity. Under the ICER modeling, certain costs (not drug costs) are projected for decades into the future, tracking the natural course of a disease for the hypothetical target population. While this is not logically defensible, there is the opportunity, with appropriate references to the literature, to match a revision of the utility scores (up) with a revision of costs (down). As the ICER model is usually created at product launch to invent evidence for cost-outcomes and fair price claims, there is substantial scope for imagining both the quantity of direct medical units to be consumed and their prices in order to challenge competing modelled imaginary claims. After all, we are only dealing with assumptions, however unrealistic or impossible they may be; presaging a new consulting industry focused on manipulating ICER models for an attentive client base that wants to challenge pricing and access recommendations.

LET A THOUSAND MODELS BLOOM

While users are invited to include their own evidence, it is doubtful whether many will be prepared to do this. That would be a mistake, as manipulating the models and producing qualified or conflicting claims is trivially simple. While it apparently takes an ICER academic consultant group some 12 months to create the backbone model and its attendant scenarios, the resulting model is easy to manipulate once it is on the ICER Analytics cloud platform (a very transparent cloud).

Modification could go back years, with challenges to negotiated prices from existing ICER evidence reports. This is, it should be cautioned, actually a waste of time given the manifest deficiencies of the ICER methodology. Even so, once a multiplicity of ICER ‘backbone’ based models within disease states is allowed, we open the gates further to I-QALY based value assessments by other organizations, each promoting their own ‘reference’ case and even their own ‘backbone’ model platform. This would be attractive to manufacturers, who could bring in models developed for other jurisdictions at relatively low cost as ‘preferred alternatives’ to the ICER model. Health decision making in the US would be awash in approximate or impossible ICER-type information packaged to support marketing claims.

Unlike in the UK and Australia, for example, the problem with an open season for simulated model claims in the US is that there is no referee to ‘judge’ the contestants 17 18. Within single payer health systems, modeled approximate evidence claims submitted for evaluation by manufacturers are subject to external review by academic groups schooled in assessing imaginary claims (don’t ask). In the US, formulary committees subject to conflicting model claims based on the ICER ‘backbone’ model have no referee to assess their competing merits as imaginary constructs. The committees will not have the skills to unravel competing black boxes or the resources to devote to this activity. They may just reject the application, which would be a wise move given that the models are an analytical dead end. The absence of a referee for competing imaginary claims would be the equivalent of a football match without one. Welcome to ICER’s imaginary playing field for multiple models; a scene fully consistent with the thousands of decision models published in the last 30 years to support client marketing claims.

It would be fair to say that ICER has shot itself in the foot; as far as can be judged, there are no constraints on the use of ICER Analytics other than the exorbitant fee schedule for fairground entry. Open access to the models (all ICER reference case models) ensures the emergence of a multiplicity of models, each claiming to be anchored in the ICER ‘backbone’ model. Pseudoscience proliferates. Manufacturers dissatisfied with the ICER ‘house’ model for an evidence report may be quite willing to underwrite alternative modelled assumptions for the same or similar products to arrive at competing claims. The waters would not be muddied; they would be churned up. Published in fee-based peer reviewed journals, competing models would present decision makers with an interesting choice.

VALE COST-EFFECTIVENESS

The fundamental mistake (among many) in health technology assessment is to think that a ‘single metric’ such as ICER’s cost-per-QALY thresholds can support claims for comparative cost-effectiveness. This is a belief, unfortunately, shared by health system decision makers who are looking for a ‘one size fits all’ solution. Decision making is somewhat more complex, with formulary committees required to consider a range of product attributes and comparative claims for those attributes. While this might degenerate, with checklists and weights, into multiple criteria decision analysis (MCDA), the purpose here is not to pre-empt (as the ICER model does) the identification of the attributes of interest in therapy evaluation for target populations within disease areas. This is a decision for the formulary committee. Imaginary claims from an ICER lifetime simulation are of little consequence; a view that is likely to be reinforced once competing ICER ‘gold standard backbone’ imaginary modelled claims are presented ‘for approximate information’ to the bemused members of formulary committees, all claiming the seal of approval of the ICER Analytics package.

The question then becomes: why did ICER venture down this path? Not to put too fine a point on it, the choice seems suicidal. ICER is not the most admired organization, with the I-QALY having even fewer friends, so why present manufacturers and others with the opportunity to discredit the ICER creation of imaginary claims through comparative simulations? Certainly ICER has been accused of being less than transparent in developing its models, but the platform seems an over-reaction to that criticism. As promoted by ICER, the platform is intended to support formulary development and adoption, internal assessment, formulary committee preparation, long-term value and budget impact modeling, and the development of outcomes based agreements. This last point is intriguing: how can value assessment contracts be built on imaginary and mathematically nonsensical constructs if the intent of the contract is tracking and assessing empirical claims? It gets even more ambitious: ICER Analytics claims that its platform can be used to formulate pre-market research and pricing strategies (again with the I-QALY) while many others ‘will find their own goals advanced … by patient groups seeking a full seat at the table … to discuss pricing and access’; discussions presumably based on comparing imaginary, non-evaluable, pseudoscientific simulated claims without any comprehension that the ‘backbone’ ICER Analytics model is irrelevant.

A MEANINGFUL ALTERNATIVE

Hopefully, the ICER Analytics platform will bring home the absurdity of creating imaginary evidence to support nonsensical cost-outcomes claims and ‘fair’ prices. Fortunately there is an alternative that meets the requirements of normal science: Version 3 of the Minnesota proposed formulary guidelines 19. There are four key principles:

  • Claims must meet the standards of normal science

    • Claims must be credible, empirically evaluable and replicable

    • Claims must meet the standards for fundamental measurement

    • Claims must be dimensionally homogeneous or unidimensional

  • Claims must be for single attributes defined for clinical outcomes, quality of life and resource utilization

  • Claims must be specific to target populations within disease areas

  • Claims must be accompanied by a protocol detailing how they might be evaluated or how they have been evaluated

The implication of applying these standards is that generic, multiattribute claims and the subsequent construction of the mathematically impossible QALY are of no interest; nor is there any interest in broad ‘cost-effectiveness’ claims based upon imaginary incremental lifetime cost-per-I-QALY models. Cost-per-I-QALY thresholds are also of no interest, along with value assessments of a ‘fair’ price.

Ensuring that an instrument is designed to capture single attributes, with measurement on either an interval or a ratio scale, is critical. Unfortunately, the majority of disease specific PRO instruments fail these standards. They are dimensionally heterogeneous and hence lack construct validity. There are a handful of instruments, particularly in needs-based quality of life, that meet, for interval response assessment, the required Rasch Measurement Theory (RMT) standards 20.

Claims must be evaluated within a timeframe agreed with the formulary committee. Where possible an evidence base should be proposed to evaluate the claims and set the scene for ongoing disease area and therapeutic class reviews. This does not mean an evidence base for each formulary committee; it would be sufficient to report on a single evidence base for a number of committees (e.g., a registry).

The Minnesota proposed guidelines are expected to play a key role in value based contracting for high-cost gene based products targeted to rare diseases. As noted, the ICER imaginary simulation has no role in value based contracting as the claims are driven entirely by assumption: they lack credibility and are not empirically evaluable. Indeed, they were never intended to be evaluable. The manifest deficiencies in the ICER simulation, notably the failure to recognize the axioms of fundamental measurement, mean that it would be quite unwise even to consider ICER prices as elements in contract negotiations. The Minnesota guidelines are designed to support value based contracting for high cost products. Contracts will only be entered into if downside risk is minimized by selecting claims that meet the standards of normal science.

If formulary decision making is focused on the utilization of dispersed knowledge, then the economic problem, as Hayek eloquently put it in his seminal 1945 essay, is how to secure the best use of knowledge that is not given to anyone in its totality 21. The problem with the ICER creation of imaginary information, the invention through an assumption-driven simulation of non-evaluable claims for the pricing, access or rationing of products, is that it subverts the process by which information is created and utilized and by which markets function. To paraphrase Hayek: every time market exchange is restricted, ignorance is substituted for knowledge.

CONCLUSIONS

Unless you, as a health system decision maker, are committed to the belief that imaginary approximate (impossible) information curated in virtual worlds is both a necessary and a sufficient condition to drive formulary listing, pricing and access, the ICER draft evidence reports and the ICER Analytics mausoleum are of no interest. It all seems such a complete waste of time. It is surprising that ICER supporters were prepared to fund this trivial and unnecessary exercise.

ICER presents, and tries to sell, a bankrupt analytical framework. The modeled evidence reports are a chimera; this has been demonstrated on multiple occasions. ICER is well aware of these criticisms, offering only half-hearted and mistaken attempts to defend the QALY 22. Rather than trying to expand the market for imaginary information, ICER should either withdraw or attempt to meet the standards common in the physical sciences and the more advanced social sciences such as education, psychology and economics. Otherwise we face slow and tenacious resistance to withdrawal from a technology assessment meme that should have been smothered at birth.

Acknowledgments

Conflict of Interest: PCL is an Advisory Board member and consultant to the Patient Access and Affordability Project, a program of Patients Rising.

APPENDIX: Creating a Galaxy of ICER Fair Prices

The advent of the ICER Analytics cloud platform for imaginary cost-per-I-QALY worlds gives the interested user the opportunity to create an ICER-type ‘fair price’ by judicious manipulation of the ordinal utility score. Keep in mind that the ordinal score ensures the creation of an impossible or I-QALY, so the examples given here are illustrative and should not be considered to meet the required standards of normal science. They are fantasy ICER examples.

The simplest approach is to focus on the utility scores for a hypothetical multiattribute instrument (e.g., EQ-5D-3L) and a hypothetical disease specific score. Both are assumed to be ratio scales (which is an impossibility) on a range where 0 = death and 1 = perfect health. There is an assumed true zero, unlike the EQ-5D-3L which has a lower bound of -0.59; the ICER model excludes the possibility of negative utilities. The utilities are manipulated for the ‘backbone’ ICER model, which is a disease specific structure in which the disease pathways followed by the hypothetical patient are fixed, as are the timeframes for the disease stages. These cannot be varied. This ensures the user can claim that the manipulated assumptions conform to the ICER model reference case. This is the model presented in the ICER evidence report for the product(s) with a base case ‘fair price’. All we are demonstrating is that, depending on the modified assumptions, this is not a unique base case but merely one of many. No one fair price can claim to be superior to another.

The model framework is for two products: a new product A and a standard of care B. This is illustrated in Table 1. Drug A yields a longer time spent in the less burdensome disease stages than Drug B. Simulated lifetime drug costs are assumed at $30,000 per annum for Drug A and $10,000 per annum for Drug B. The benefits conferred by Drug A over Drug B are measured by time spent in each of four successive stages of disease. There is no increase in life expectancy (it can be assumed), but the time spent (in years) shows a longer time in disease stage 1 for Drug A (41 vs. 25 years). Two utility scales are assumed: a generic scale and a disease specific scale. The former clusters utilities at the perfect health end of the scale; the disease specific scale is more reflective of disease experience, with utilities substantially lower by disease stage and more ‘spread out’. The standard modeling approach is to propose a series of stages that a hypothetical patient is assumed to experience over the natural course of a disease. Each ‘stage’ is assigned a utility weight so that time spent can be re-imagined as the equivalent time in perfect health (QALYs). Thus, as the disease becomes worse, time spent in each stage yields fewer I-QALYs.

Table 1. IMAGINARY QALY SIMULATION FOR SAME LIFE EXPECTANCY.

                  Years in stage    Utility score (0-1)            QALYs: Generic multiattribute   QALYs: Disease specific
                  Drug A   Drug B   Generic   Disease specific     Drug A      Drug B              Drug A      Drug B
Stage 1           41       25       0.98      0.95                 40.18       24.50               38.95       23.75
Stage 2           10       15       0.92      0.65                  9.20       13.80                6.50        9.75
Stage 3            3       10       0.88      0.30                  2.64        8.80                0.90        3.00
Stage 4            1        5       0.75      0.05                  0.75        3.75                0.05        0.25
Total years       55       55
Total QALYs                                                        52.77       50.85               46.40       36.75
QALY gain (A-B)                                                     1.92                            9.65

The multiattribute utilities yield an aggregate of 52.77 QALYs for Drug A compared to 50.85 QALYs for Drug B. Applying the disease specific utilities yields corresponding total QALYs of 46.40 and 36.75 respectively. The difference in total QALYs is therefore only 1.92 for the multiattribute utilities compared to 9.65 for the disease specific utilities.

The implications of this difference in imaginary or I-QALYs are interesting. Focusing on drug costs, with a total cost over 55 years for Drug A of $1.65 million and Drug B of $550,000, the overall difference in drug costs is $1.10 million. Matched against the multiattribute utility difference of 1.92 QALYs, substituting Drug A for Drug B gives an incremental cost per QALY of $572,917. This is well in excess of the ICER recommended cut-off of $150,000 per incremental QALY. To meet that threshold, the incremental lifetime drug cost would have to fall to $288,000 (1.92 x $150,000), a reduction in Drug A’s lifetime cost of approximately $812,000 or 49% (an annual price of roughly $15,200).

If we consider the disease specific utilities in Table 1, the QALY increment is now 9.65. With the same drug costs, the incremental cost of $1.10 million yields an incremental cost per QALY of approximately $114,000. This is well below the notional ICER threshold of $150,000, providing a basis for a possible Drug A price increase. With suitable changes in assumptions it should prove relatively straightforward to challenge ICER with a case for price increases to achieve a price consistent with a threshold of $150,000 rather than price discounts. The key is maximizing the QALY gain from switching from Drug B to Drug A. If you are committed to making the case for price discounting then you must stay with the multiattribute utility score, which may have little if any relevance to evaluating the impact of competing therapies for the target population in the disease state given the limited range of symptoms covered and their relevance to the target patient population.
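The Table 1 arithmetic can be reproduced in a few lines. The sketch below uses only the hypothetical Appendix figures (the variable names are mine) and, for the generic utilities, backs out the annual Drug A price consistent with the $150,000 threshold.

```python
# Reproduces the Table 1 arithmetic from the hypothetical Appendix figures.
# (The QALY construct itself is, as argued above, an impossible one.)

years_a = {"Stage 1": 41, "Stage 2": 10, "Stage 3": 3, "Stage 4": 1}
years_b = {"Stage 1": 25, "Stage 2": 15, "Stage 3": 10, "Stage 4": 5}
utility_generic = {"Stage 1": 0.98, "Stage 2": 0.92, "Stage 3": 0.88, "Stage 4": 0.75}
utility_disease = {"Stage 1": 0.95, "Stage 2": 0.65, "Stage 3": 0.30, "Stage 4": 0.05}

def total_qalys(years, utility):
    """Sum over stages of (years in stage) x (utility weight)."""
    return sum(years[s] * utility[s] for s in years)

ANNUAL_COST_A, ANNUAL_COST_B, HORIZON, THRESHOLD = 30_000, 10_000, 55, 150_000
inc_cost = (ANNUAL_COST_A - ANNUAL_COST_B) * HORIZON            # $1.10 million

for label, utility in [("Generic", utility_generic), ("Disease specific", utility_disease)]:
    gain = total_qalys(years_a, utility) - total_qalys(years_b, utility)
    print(f"{label}: QALY gain {gain:.2f}, cost per QALY ${inc_cost / gain:,.0f}")
# Generic: 1.92 and ~$572,917; Disease specific: 9.65 and ~$113,990.

# Annual Drug A price at which the generic-utility case just meets the threshold
gain_generic = total_qalys(years_a, utility_generic) - total_qalys(years_b, utility_generic)
max_lifetime_cost_a = ANNUAL_COST_B * HORIZON + THRESHOLD * gain_generic
print(f"Threshold-consistent annual Drug A price: ${max_lifetime_cost_a / HORIZON:,.0f}")
# ~$15,236 per annum versus the assumed $30,000.
```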

The impact of utility score differences is even more pronounced if Drug A extends life compared to Drug B (as might be expected, for example, with a gene therapy). This is illustrated in Table 2, where life expectancy increases from 55 to 67 years for Drug A. Projected costs of Drug A will increase with the additional years of coverage. If life expectancy increases by 12 years then Drug A costs increase to $2.01 million, with the difference in lifetime drug costs (67 vs 55 years) now $1.46 million (Table 2). With the multiattribute utility scores the I-QALY gain for Drug A increases from 1.92 to 12.95, and for the disease specific utilities from 9.65 to 17.35. In the former case the incremental cost per I-QALY is $112,741 and in the latter approximately $84,150. Both are below the $150,000 cost-per-I-QALY threshold, giving the opportunity to increase the price of Drug A. Given the drug and other support costs, the case for price discounting rests on the choice of utility score. The less responsive the score is to differences in drug impact, with Drug A delivering a clinical and quality of life benefit over Drug B, the easier it is to invent the need for price discounts.

Table 2. IMAGINARY QALY SIMULATION WITH DRUG A INCREASED LIFE EXPECTANCY.

                  Years in stage    Utility score (0-1)            QALYs: Generic multiattribute   QALYs: Disease specific
                  Drug A   Drug B   Generic   Disease specific     Drug A      Drug B              Drug A      Drug B
Stage 1           45       25       0.98      0.95                 44.10       24.50               42.75       23.75
Stage 2           15       15       0.92      0.65                 13.80       13.80                9.75        9.75
Stage 3            5       10       0.88      0.30                  4.40        8.80                1.50        3.00
Stage 4            2        5       0.75      0.05                  1.50        3.75                0.10        0.25
Total years       67       55
Total QALYs                                                        63.80       50.85               54.10       36.75
QALY gain (A-B)                                                    12.95                           17.35
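The Table 2 scenario follows the same pattern. The short sketch below, again using only the hypothetical Appendix figures, reproduces the incremental cost-per-QALY results for the extended life expectancy case.

```python
# Table 2 scenario (hypothetical figures): Drug A now adds 12 years of life.
years_a = {"Stage 1": 45, "Stage 2": 15, "Stage 3": 5, "Stage 4": 2}   # 67 years
years_b = {"Stage 1": 25, "Stage 2": 15, "Stage 3": 10, "Stage 4": 5}  # 55 years
utility_generic = {"Stage 1": 0.98, "Stage 2": 0.92, "Stage 3": 0.88, "Stage 4": 0.75}
utility_disease = {"Stage 1": 0.95, "Stage 2": 0.65, "Stage 3": 0.30, "Stage 4": 0.05}

lifetime_cost_a = 30_000 * sum(years_a.values())   # $2.01 million over 67 years
lifetime_cost_b = 10_000 * sum(years_b.values())   # $0.55 million over 55 years
inc_cost = lifetime_cost_a - lifetime_cost_b       # $1.46 million

for label, u in [("Generic", utility_generic), ("Disease specific", utility_disease)]:
    gain = (sum(y * u[s] for s, y in years_a.items())
            - sum(y * u[s] for s, y in years_b.items()))
    print(f"{label}: QALY gain {gain:.2f}, cost per QALY ${inc_cost / gain:,.0f}")
# Generic: 12.95 and ~$112,741; Disease specific: 17.35 and ~$84,150.
```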

 

REFERENCES


