CMAJ : Canadian Medical Association Journal
. 2011 Apr 5;183(6):E307–E308. doi: 10.1503/cmaj.110384

A glimpse into the black box of cost-effectiveness analyses

Ava A John-Baptiste, Chaim Bell
PMCID: PMC3071404  PMID: 21402688

Cost-effectiveness analyses have become a key component of health policy. The detailed evaluations can incorporate multiple, complex elements for decision-making. However, some have noted the potential for industry involvement to bias their findings. It is through this lens that the article by Polyzos and colleagues1 sheds light into the proverbial black box of mathematical modeling, identifying one mechanism through which industry bias may be manifest.

The authors’ systematic review of cost-effectiveness analyses assessing screening technologies for cervical cancer shows that studies with industry involvement consistently underestimated the diagnostic characteristics of the Papanicolaou (Pap) test. Researchers in this area are now challenged to apply these findings to improve the evaluation process of cost-effectiveness analyses and maintain the integrity of this important tool.

Much of the previous work examining bias in cost-effectiveness analyses sponsored by industry has focused on the results of the study models. A systematic review and meta-regression of published cost-effectiveness analyses showed that industry-sponsored studies produced significantly lower estimates of cost per quality-adjusted life-year that were more likely to be below thresholds considered to represent good value for money.2
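The threshold logic at issue can be made concrete with a small sketch. Every number below is invented for illustration, and the $50,000-per-QALY threshold is a commonly cited benchmark rather than a value taken from the studies under review:

```python
# Toy incremental cost-effectiveness ratio (ICER) calculation.
# All figures are hypothetical, for illustration only.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

WTP_THRESHOLD = 50_000  # illustrative willingness-to-pay per QALY

ratio = icer(cost_new=12_000, cost_old=2_000, qaly_new=10.25, qaly_old=10.00)
print(ratio)                   # -> 40000.0
print(ratio <= WTP_THRESHOLD)  # -> True: the intervention looks like "good value"
```

A sponsored model whose inputs nudge the ratio below such a threshold makes the product look like good value for money, which is the pattern the meta-regression detected.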

Similar conclusions were found by other systematic reviews that used qualitative approaches or concentrated on specific conditions.3,4 Polyzos and coauthors also focused on a specific disease, although in contrast to earlier approaches, they examined how choices related to input parameters can introduce bias and affect the conclusion of the overall model. Their analysis adds to previous findings by Chauhan and colleagues,5 who compared pairs of cost-effectiveness analyses submitted to the National Institute of Clinical Excellence in the United Kingdom and found that manufacturers estimated larger effectiveness benefits than did independent academic organizations. By isolating information on assumptions of the diagnostic accuracy of the Pap test, Polyzos and colleagues provide a fine-tuned look at the mechanisms contributing to the resulting estimates of cost-effectiveness.

Polyzos and colleagues chose an excellent test case for examining the machinery of bias in model-based cost-effectiveness analyses. The authors identified almost 90 model-based cost-effectiveness analysis studies from both the published and grey literature. The Pap test is a well-established screening modality that has successfully reduced mortality from cervical cancer. However, newer, more expensive technologies are forcing health care providers to make difficult policy decisions about screening for cervical cancer. Against a backdrop of considerable information on test accuracy, the authors illustrate how choosing less favourable input data for the comparator test can have significant downstream effects. This effect is most evident in two of their findings: none of the cost-effectiveness analyses with industry association found the cheap, readily available Pap test to be the preferred screening tool, and baseline sensitivity estimates for the Pap test were 10% lower in studies with manufacturer involvement.
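The downstream effect of an unfavourable comparator assumption can be illustrated with a deliberately simplified one-shot screening model. Every parameter here is invented for illustration; real cervical cancer screening models are multi-state Markov models with far more structure:

```python
# Simplified one-shot screening model: detected cases fare better than missed ones.
# All parameters are hypothetical, chosen only to show the direction of the effect.

def expected_qalys(sensitivity, prevalence=0.01,
                   qaly_detected=20.0, qaly_missed=15.0, qaly_healthy=20.0):
    """Expected QALYs per screened person for a given test sensitivity."""
    return (prevalence * sensitivity * qaly_detected
            + prevalence * (1 - sensitivity) * qaly_missed
            + (1 - prevalence) * qaly_healthy)

def icer_new_vs_pap(pap_sensitivity, new_sensitivity=0.90,
                    pap_cost=30.0, new_cost=90.0):
    """Cost per QALY gained by the newer test relative to the Pap test."""
    qaly_gain = expected_qalys(new_sensitivity) - expected_qalys(pap_sensitivity)
    return (new_cost - pap_cost) / qaly_gain

# A meta-analytic Pap sensitivity of ~0.70 versus an assumption 10 points lower:
print(round(icer_new_vs_pap(0.70)))  # -> 6000
print(round(icer_new_vs_pap(0.60)))  # -> 4000 (newer test now looks better value)
```

Understating the comparator's sensitivity inflates the apparent QALY gain of the newer technology, which mechanically lowers its cost per QALY. This is exactly the direction of distortion that lower baseline Pap estimates would produce.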

How might industry sponsorship or authors’ conflicts of interest affect the choice of input parameters? Cost-effectiveness analysis modeling is a complex undertaking that usually requires a division of labour between decision-modeling experts and content experts. In the course of a project, collaborators employed by or under contract to manufacturers may be tasked with identifying effectiveness parameters and may forgo systematic review in favour of manufacturer-sponsored effectiveness studies. The outcomes of studies sponsored by pharmaceutical companies are significantly more likely to favour the sponsored product than are those of studies with other sponsors.6 Literature searches, when conducted, may be selective or incomplete. Polyzos and colleagues noted that available published meta-analyses were not referenced in many cost-effectiveness analysis studies. Literature searches in analyses submitted by industry to Australian authorities excluded relevant studies that sometimes contradicted the sponsors’ claims.7

Notwithstanding observations of bias, nefarious intent may not always be at play. Identifying valid estimates of effect is a difficult task. Even Polyzos and colleagues point out that there is disagreement among meta-analyses on Pap test sensitivity. In certain situations, investigators may judge that values from a single study better represent the population of interest than values from meta-analyses involving heterogeneous populations. Interestingly, Polyzos and coauthors note no effect of industry involvement on the confidence intervals used for sensitivity analyses of test characteristics, thus identifying bias only in the main model estimates.

There are some unresolved questions related to the analysis by Polyzos and coworkers. Examination of the reference list reveals multiple publications by similar author groups. This is a common practice in the cost-effectiveness analysis field, where complicated, resource-intensive policy models are applied to different populations or jurisdictions. Because each study is not an independent observation, biased or unbiased estimates may propagate throughout multiple studies. This phenomenon may have affected the strength of the association observed by Polyzos and colleagues. In addition, the authors could have used regression models to isolate the independent effect of industry involvement from other study characteristics.

Tighter restrictions by medical journals may not be an effective solution to the problem. Journal editors already require conflict of interest disclosure for cost-effectiveness analyses, and some journals limit publication of industry-sponsored work.8 However, the findings of Polyzos and colleagues show that these policies are clearly not enough to minimize bias. Accurately assessing the validity of incorporated evidence requires content expertise. Peer reviewers, who often have methodologic expertise, may not have the clinical background to assess the validity of each input parameter and assumption. Journals could require review by both a methodologic expert and a clinical expert, with special emphasis on thorough incorporation of the relevant literature into the model parameters.

Journals could also make it standard practice to require that authors justify their approach to incorporating data into a model. For instance, authors should justify use of a single study to estimate the effectiveness of a long-standing intervention for which it can be reasonably assumed meta-analysis is possible. Improving the processes of cost-effectiveness analysis modeling by requiring an independent advisory board for industry sponsored studies has also been suggested.9

Cost-effectiveness analyses will continue to play a substantial role in informing health policy. Until now, those evaluating the validity of cost-effectiveness analyses and their conclusions had not systematically examined bias related to input parameters. Polyzos and coauthors take the work of previous evaluations of cost-effectiveness analysis models to a different level by quantifying the impact of potentially inaccurate model values. Only through this kind of illumination of the cost-effectiveness analysis machinery can we move toward confidence in its conclusions and ensure the integrity of this important tool for policy decisions.

Key points

  • Bias related to manufacturer involvement in cost-effectiveness analyses is a well-established phenomenon, but previous work had not examined the mechanisms of bias in a systematic way.

  • Polyzos and colleagues conducted a rigorous examination of cost-effectiveness analyses that assessed cervical cancer screening methods, and identified significantly lower baseline estimates of Pap test accuracy in studies with manufacturer involvement.

  • These findings should prompt journals to improve peer review by requiring that content experts comment on whether or not model parameters accurately reflect the available literature.

See related research article by Polyzos and colleagues at www.cmaj.ca/cgi/doi/10.1503/cmaj.101506.

Footnotes

Competing interests: Ava John-Baptiste has served as a consultant to the Committee to Evaluate Drugs and the Joint Oncology Drug Review of Canada, on behalf of the Ontario Ministry of Health and Long-Term Care (MOHLTC), and as a consultant to the Canadian Agency for Drugs and Technologies in Health (CADTH). She is a member of the Toronto Health Economics and Technology Assessment (THETA) Collaborative, which consults for the Medical Advisory Secretariat of the MOHLTC, and is a coauthor of a systematic review included in the analysis by Polyzos and colleagues. Chaim Bell is a reviewer on the Committee to Evaluate Drugs and a member of the Joint Oncology Drug Review of Canada, on behalf of the MOHLTC. He is a member of the Ontario Health Technology Advisory Committee.

This article was solicited and has not been peer reviewed.

Contributors: Both of the authors contributed substantially to the conception, drafting and revision of the article and approved the final version submitted for publication.

Funding: Ava John-Baptiste is supported by a postdoctoral fellowship and the Emerging Team in Pharmacologic Management of Chronic Disease in Older Adults from Canadian Institutes of Health Research. Chaim Bell is supported by a Canadian Institutes of Health Research/Canadian Patient Safety Institute Chair in Patient Safety and Continuity of Care.

References

  1. Polyzos NP, Valachis A, Mauri D, et al. Industry involvement and baseline assumptions of cost-effectiveness analyses: diagnostic accuracy of the Papanicolaou test. CMAJ 2011;183:E337–43.
  2. Bell CM, Urbach DR, Ray JG, et al. Bias in published cost effectiveness studies: systematic review. BMJ 2006;332:699–703.
  3. Garattini L, Koleva D, Casadei G. Modeling in pharmacoeconomic studies: funding sources and outcomes. Int J Technol Assess Health Care 2010;26:330–3.
  4. Ligthart S, Vlemmix F, Dendukuri N, et al. The cost-effectiveness of drug-eluting stents: a systematic review. CMAJ 2007;176:199–205.
  5. Chauhan D, Miners AH, Fischer AJ. Exploration of the difference in results of economic submissions to the National Institute of Clinical Excellence by manufacturers and assessment groups. Int J Technol Assess Health Care 2007;23:96–100.
  6. Lexchin J, Bero LA, Djulbegovic B, et al. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ 2003;326:1167–70.
  7. Hill SR, Mitchell AS, Henry DA. Problems with the interpretation of pharmacoeconomic analyses: a review of submissions to the Australian Pharmaceutical Benefits Scheme. JAMA 2000;283:2116–21.
  8. Kassirer JP, Angell M. The journal’s policy on cost-effectiveness analyses. N Engl J Med 1994;331:669–70.
  9. John-Baptiste A, Bell C. Industry sponsored bias in cost effectiveness analyses. BMJ 2010;341:c5350.
