Abstract
Purpose: To determine the growth rate, methodologic clarity, and quality changes in cost-effectiveness analyses (CEAs) and to assess whether the U.S. Panel on Cost-effectiveness in Health and Medicine recommendations affected CEA studies in which imaging technologies were evaluated.
Materials and Methods: Six databases were systematically searched for CEA reports published between 1985 and 2005. All imaging-related studies were selected and grouped according to year, country, and journal of publication, as well as imaging modality and disease being studied. Two readers with formal training in decision analysis and CEA used a seven-point (1, low; 7, high) Likert scale based on reasonableness of assumptions, quality of presentation, and adherence to guidelines to independently evaluate study quality. Quality scores according to year, country, and journal of publication were compared with the unpaired Student t test.
Results: The first radiology-related CEA was published in 1985; 111 radiology-related CEAs were published between 1985 and 2005. The average number of studies increased from 1.6 per year between 1985 and 1995 to 9.4 per year between 1996 and 2005. Eighty-six studies were performed to evaluate diagnostic imaging technologies, and 25 were performed to evaluate interventional imaging technologies. Ultrasonography (35.1%), angiography (31.5%), magnetic resonance imaging (22.5%), and computed tomography (19.8%) were evaluated most frequently. Forty-nine studies received government funds; 42 did not disclose the source of funding. The mean quality score was 4.23 ± 1.12 (standard deviation), without significant improvement over time. Scores in studies performed in the United States were significantly higher than scores in studies that were not performed in the United States (4.45 ± 1.02 vs 3.61 ± 1.17, respectively; P < .01). Scores were also higher in journals with three or more CEA articles published during the study period than in journals with two or fewer CEA articles published during this period (4.54 ± 1.09 vs 3.91 ± 1.06, respectively; P < .01).
Conclusion: CEAs are an important tool with which to analyze the value of diagnostic imaging. However, improvement in the quality of analyses is needed.
© RSNA, 2008
Medical imaging continues to be one of the fastest growing fields in medicine (1). Growth has been driven by the emergence of advances in noninvasive imaging modalities, such as multidetector computed tomography (CT), combined positron emission tomography (PET) and CT, and magnetic resonance (MR) imaging (2,3). In addition, there has been a dramatic escalation in the demand for advanced imaging by health care providers (2,4,5). This has resulted in increased imaging costs, which now exceed $100 billion per year in the United States (3). Relatively recently, there has been increased recognition of the need to consider cost in the medical decision-making process and to apply new technologies in a cost-effective manner (6–8).
Economic analyses are tools that help identify the interventions that yield the most benefit by maximizing the output of the resources devoted to health care in an environment where health-related interventions exceed the societal ability to afford them (9). Cost-effectiveness analysis (CEA) is a method of economic evaluation in which the costs and outcomes of a program and at least one alternative are compared. The difference in cost (incremental cost) is divided by the difference in outcome (incremental effect) to derive the incremental cost-effectiveness ratio. Although any natural unit of outcome can be used to determine the effect of a program, the common metrics are life-years or quality-adjusted life-years (QALYs) gained, because they allow comparisons across diverse treatments and diseases. CEAs in which a cost per QALY ratio is calculated are referred to as cost-utility analyses (10–12). However, most studies in the literature are probably better described as CEAs than as true cost-utility analyses, because the QALY weights they use are not true utilities. Thus, for the purpose of this study, we used the standard term CEA to refer both to cost-utility analyses and to CEAs that present their results in terms of cost per QALY.
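The incremental cost-effectiveness ratio described above reduces to simple arithmetic. The sketch below, with hypothetical costs and QALY values rather than figures from this study, shows how a new imaging strategy would be compared with an alternative.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    delta_cost = cost_new - cost_old
    delta_qaly = qaly_new - qaly_old
    if delta_qaly <= 0 and delta_cost >= 0:
        # New strategy costs at least as much and is no more effective:
        # it is "dominated" and no finite ratio applies.
        return float("inf")
    return delta_cost / delta_qaly

# Hypothetical example: the new strategy costs $1200 more per patient
# and yields 0.03 additional QALYs, i.e., $40 000 per QALY gained.
ratio = icer(cost_new=4200, qaly_new=1.53, cost_old=3000, qaly_old=1.50)
```

A ratio below a decision maker's willingness-to-pay threshold (such as the $50 000 per QALY figure often cited in the literature) would mark the strategy as cost-effective under these assumptions.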
In 1996, the U.S. Panel on Cost-effectiveness in Health and Medicine (also known as the Public Health Service [PHS] panel) issued a set of recommendations for performing and reporting the results of CEAs (13–16). Although the number of published CEAs has increased rapidly, studies have shown that many CEAs do not adhere to recommended practices (17–21). Notably, more than 10 years after publication of the panel's recommendations, the quality of imaging-related CEAs has not been determined.
Without economic considerations, finite health care resources likely would be allocated inefficiently (22). Special emphasis is needed on guidance in the use of high-technology diagnostic and therapeutic imaging resources, which usually have a high cost (22). Thus, it is critically important to understand the body of cost-effectiveness evidence and, specifically, the results of cost-effectiveness research concerning medical imaging. The purpose of this study was to determine the growth rate, methodologic clarity, and quality changes in CEAs and to assess whether the PHS recommendations affected CEA studies in which imaging technologies were evaluated.
MATERIALS AND METHODS
Tufts Medical Center CEA Registry
In this study, we reviewed all imaging-focused CEAs found in a comprehensive registry maintained by Tufts Medical Center (Boston, Mass). This registry is available on the Internet (http://www.tufts-nemc.org/cearegistry/) (23). The database was constructed by means of an extensive search of medical subject headings and the following text keywords: “quality-adjusted,” “QALY,” and “cost-utility.” We used the MEDLINE, HealthSTAR, CancerLit, Current Contents, EconLit, and Health Economic Evaluations databases. We also screened more than 6000 article titles in two extensive bibliographies of articles published in the field. Two qualified screeners independently read each reference listing in the database reference list and classified the type of study (cost per QALY or another unit of measure). If either screener marked the reference as potentially an English-language article in which cost per QALY analysis was described, we passed the reference on to be audited by two trained readers with master's- or doctoral-level training in decision analysis and CEA. A total of 1310 articles were identified and audited.
Description and Analysis
The auditing process was organized so that the two readers independently reviewed each analysis and then convened for a consensus review to resolve discrepancies. We used a standard data auditing form (Appendix E1, http://radiology.rsnajnls.org/cgi/content/full/249/3/917/DC1). For disease categorization, we used the World Health Organization classification system (24). For each CEA, the descriptive characteristics collected included disease category, year of publication, country of origin (ie, country from which the economic data were derived), intervention type, type of journal in which the article was published, and funding source. Methodologic characteristics included study perspective, discounting of future costs and QALYs, and performance of sensitivity analyses. Studies were also classified according to the imaging modality being investigated. For studies published after 2002, we also collected data on the use of acceptability curves (Appendix E2, http://radiology.rsnajnls.org/cgi/content/full/249/3/917/DC1), the use of probabilistic sensitivity analysis, and the collection of resource utilization and economic data alongside a clinical trial. Finally, we used a Likert scale to assign a subjective score of overall study quality. Scores ranged from 1 (low) to 7 (high). The quality score was based on methodologic rigor, reasonableness of assumptions, quality of presentation, adherence to recommended protocols, and potential value of the study to decision makers. Adherence to protocols included a clear description of the comparator and intervention, incremental analyses, credible elicitation of preference weights, use of sensitivity analyses, discounting, and so on. Country-specific protocols may vary. The PHS panel protocols included use of a societal perspective, a clearly described base-case scenario, a clearly stated proposed intervention and comparator, discounting of future costs (measured in U.S. dollars) and outcomes (measured in QALYs) at a 3% annual rate, performance of sensitivity analysis, and description of incremental cost-effectiveness ratios. Other countries might favor a health care payer perspective over a societal perspective or recommend different discount rates (25). More details on the auditing process are available elsewhere (23).
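The 3% annual discounting of future dollars and QALYs recommended by the PHS panel can be sketched as a present-value calculation; the cost stream below is hypothetical.

```python
def present_value(amount, year, rate=0.03):
    """Discount a cost (or QALY) accruing `year` years in the future
    back to its present value at the given annual rate."""
    return amount / (1 + rate) ** year

# Hypothetical stream: $1000 of cost incurred in each of years 0-4.
# Discounting shrinks the total below the undiscounted $5000.
total = sum(present_value(1000, t) for t in range(5))
```

Guidelines in other countries may substitute a different rate (eg, 3.5%), which changes only the `rate` argument.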
We reviewed 1164 articles contained in the registry to select the CEAs in which imaging and/or radiologic technologies were studied. The first imaging-related study was published in 1985 and described the cost-effectiveness of coronary angiography in patients with chest pain (26). There were 111 studies published between 1985 and 2005 (Fig 1). Studies in which radiation therapy, lithotripsy, or laser therapy was evaluated were excluded from this analysis. Each study resulted in one or more cost per QALY ratios.
Figure 1:
Flow chart shows the selection of studies and the review process. DALY = disability-adjusted life-year, LY = life-year.
We reviewed the characteristics of published radiology-related CEAs over time. To examine quality-related factors, we grouped the studies according to time (performed before vs after the PHS panel recommendations), journal, and country of publication. The effect of the PHS panel on the quality of published analyses was assessed by defining all articles published before 1998 as prepanel reports, assuming a 2-year lag between the performance of the analyses and the publication of the guidelines in 1996; articles published after 1998 were defined as postpanel reports (17). Regarding country of origin, studies were grouped as either U.S. or non-U.S. studies. The journals were divided into those with three or more CEA articles published between 1985 and 2005 and those with two or fewer CEA articles published during this period. Studies were also grouped according to whether they were published in a radiology-focused journal or in a journal with another focus. Studies published in 2002 or 2003 were compared with studies published in 2004 or 2005 with regard to acceptability curves and probabilistic analysis (25,27).
Statistical Analysis
Interobserver agreement was assessed with κ statistics. An unpaired Student t test assuming unequal variances was used to compare quality scores between groups. The Fisher exact test was used to compare proportions of methodologic characteristics (ie, present vs absent) between groups. All analyses were performed by using Microsoft Excel 2000 for Windows (Microsoft, Redmond, Wash) and Stata, version 9.0, for Windows (StataCorp, College Station, Tex), with the level of statistical significance set at P < .05.
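As a concrete illustration of the tests named above, the sketch below uses SciPy for the Welch (unequal-variance) t test and the Fisher exact test, with a small hand-rolled Cohen κ; all score lists and table counts are hypothetical, not the study's data.

```python
from scipy import stats

def cohen_kappa(a, b):
    """Chance-corrected agreement (Cohen kappa) between two raters."""
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_exp = sum((a.count(k) / n) * (b.count(k) / n) for k in labels)  # chance
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical Likert quality scores for two groups of studies.
us_scores = [5, 4, 5, 3, 4, 5, 4, 4]
non_us_scores = [3, 4, 3, 2, 4, 3]
t_stat, p_val = stats.ttest_ind(us_scores, non_us_scores, equal_var=False)

# Hypothetical 2x2 table: societal perspective present/absent
# in radiology-focused vs other journals.
odds_ratio, p_fisher = stats.fisher_exact([[14, 17], [17, 63]])
```

`equal_var=False` selects the Welch variant described in the text, which does not assume the two groups share a variance.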
RESULTS
Imaging-related CEAs
The annual average number of CEA publications increased from 1.6 per year between 1985 and 1995 to 9.4 per year between 1996 and 2005 (Fig 2); 62 (56%) of the articles were published between 2000 and 2005.
Figure 2:
Graph shows imaging-related cost-utility analysis publication trend. The number of imaging-related cost-utility analyses has increased in the past decade.
A total of 17 disease categories were evaluated (Table 1). The majority of the 111 studies (n = 86 [77.5%]) focused on diagnostic radiology (Table 2). The modality most frequently evaluated was US (39 of 111 studies [35.1%]). The articles were published in 56 journals, 11 of which are specific to radiology. Of the 111 studies, 49 (44.1%) were sponsored or partially funded by the government.
Table 1.
Cost-Utility Analyses Distribution by Disease Category
Note.—Data in parentheses are percentages.
Noncerebral noncardiac disease.
Table 2.
Publication Characteristics
Note.—Data in parentheses are percentages. All percentages were calculated with a denominator of 111.
There were 45 journals included in this category.
US = ultrasonography.
SPECT = single photon emission computed tomography.
The resulting incremental cost-effectiveness ratios ranged from dominated (higher cost and lower effectiveness) to cost-saving or dominating (similar or higher effectiveness and lower cost) strategies. The ratios for interventions with incremental cost and incremental effectiveness ranged from $520 per QALY when screening for abdominal aortic aneurysm with US to $7.6 million per QALY when screening children for craniosynostosis with three-dimensional CT (28,29).
Quality and Adherence to Protocol Analysis
The majority of the 111 studies (n = 76 [68.5%]) had a health care payer perspective (Table 3). Discounting future costs and benefits was performed in 76 (68.5%) of the studies. Sensitivity analyses were performed in 104 (93.7%) studies. Incremental analysis was reported in 97 (87.4%) studies.
Table 3.
Methodologic Characteristics of the CEA Studies
Note.—Data in parentheses are percentages. NA = not applicable.
After 2002, 10 (22%) of 46 studies included acceptability curves, and 13 (28%) included probabilistic sensitivity analysis (Table 4). Acceptability curves and probabilistic sensitivity analysis, respectively, were used in two (8%) and three (12%) studies in 2002 and 2003; these numbers increased to eight (38%) and 10 (48%) studies in 2004 and 2005. In three (7%) studies, researchers collected economic data alongside a clinical trial. In 62 (55.9%) of the 111 studies, researchers reported a threshold for considering a technology cost-effective; in 34 (55%) of these studies, researchers used $50 000 per QALY as the threshold.
Table 4.
Additional Methodologic Characteristics of CEA Studies after 2002
Note.—Data in parentheses are percentages.
Interobserver agreement was moderate (κ = 0.54). The overall average quality score of all studies was 4.23 ± 1.12 (standard deviation). The average quality score increased from 4.09 ± 1.24 between 1985 and 1995 to 4.26 ± 1.09 between 1996 and 2005 (Fig 3), with no significant correlation between year of publication and score (P = .8). Scores were significantly higher for studies performed in the United States (4.45 ± 1.02, P < .01) and for studies published in journals with three or more CEA articles published during the study period (4.54 ± 1.09, P < .01) (Table 5).
Figure 3:
Graph shows average annual quality score of published cost-utility analysis articles. The average quality score per year has remained constant, despite an increase in the number of articles published.
Table 5.
Comparison of Mean Quality Scores
Note.—Data are mean quality scores ± standard deviations. P values were calculated with the two-tailed t test with unequal variances.
Analysis was performed from the societal perspective significantly more often in studies reported in radiology-focused journals than in studies reported in journals that focused on other specialties (45% vs 21%, P < .01). Studies performed after the PHS recommendations were published were significantly more likely to use a 3% annual discount rate (64% vs 15%, P < .001) (Table 6).
Table 6.
Comparison of Methodologic Characteristics (Present vs Absent) by Group
Note.—P values were calculated with the Fisher exact test.
DISCUSSION
We critically reviewed all published imaging-related CEAs and determined quantity and quality trends over a 20-year period. In many countries, there is increasing reliance on the results of economic analyses when making financial and reimbursement decisions. For example, the United Kingdom requires evidence of cost-effectiveness to be considered before services are made available. It is important to know whether CEA studies are improving in quality and whether researchers are properly following the protocols established by national agencies, which are intended to be influential. In a previous study, researchers determined the methodologic quality of all radiology-related economic analyses (30) and found no significant improvement over time. That study considered all subtypes of economic analyses (cost-benefit analysis and CEA expressed in terms of dollars per life-year, dollars per disability-adjusted life-year, etc), and the analysis was performed before the recommendations of the PHS panel were made public.
Despite the fact that the number of CEAs published after 1996 is six times higher than the number published before 1996, we did not note a corresponding increase in the quality of studies. The increasing number of studies indicates a growing interest in economic analysis. Thus, imaging providers are challenged to address the cost-effectiveness of emerging technologies. Given the greater attention received, why has quality—as measured with a subjective Likert scale—not improved? Possible explanations include a lack of expertise and of methodologic consensus, as well as bias introduced by authors, given the discretionary nature of model building and data selection in these analyses (6). These concerns have been addressed with publishing standards and methodologic guidelines for performance and publication in peer-reviewed journals (6,10,11,22,31). However, if personal biases are being introduced and if there is true investigator-induced quality variability due to lack of expertise, these variables will continue to be factors as more academics attempt to address concerns related to value for money and to enter the field of health economics. Another factor to consider is that imaging technologies have evolved more quickly than the ability to gather clinical evidence supporting their use, frequently limiting the possibility of providing solid, valuable, and timely economic analysis.
The difficulty of performing CEA in the field of radiology has been described elsewhere (10). If a study appears too soon after the introduction of a modality, readers may feel that the data are insufficient, and the study may be judged as being of low quality. However, if researchers wait for enough relevant clinical data to be available before conducting CEA, health care providers may have already made their decisions about the modality and may consider the study results irrelevant (10). Given the importance of imaging in today's health care system, we believe the interval between the time solid evidence and information are obtained and the time the decision to adopt the new technology is made should, and likely will, be narrowed.
An encouraging finding is the rapid adoption of acceptability curves and probabilistic sensitivity analysis. We found a five-fold increase in the use of these techniques in just 4 years. Acceptability curves and probabilistic analyses are emerging as important tools with which to address uncertainty in CEAs. They may be used to determine the likelihood that an intervention will be cost-effective at different willingness-to-pay thresholds, or to evaluate several scenarios at once (11). Acceptability curves have limitations related to their inability to distinguish different joint distributions of incremental costs and effects (32). Their use has increased rapidly and is recommended by the National Institute for Health and Clinical Excellence (25). Additional tools used to deal with model uncertainty include cost-effectiveness planes, expected values of perfect information, and credible confidence intervals (11,32). We also found that the technologies most commonly evaluated are those in which the primary clinician performs the imaging intervention. In this sense, angiography and other cardiology-driven modalities were evaluated more frequently than were modalities considered to be the turf of radiologists (CT, mammography, and radiography). This might reflect the well-documented difficulty of connecting diagnosis with outcome when therapy is an extra intermediary step (33,34) and the greater comfort of nonradiologists in overcoming this shortfall.
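A probabilistic sensitivity analysis and its acceptability curve can be sketched with a simple Monte Carlo simulation; the normal distributions and their parameters below are hypothetical stand-ins for a model's real input uncertainty.

```python
import random

def acceptability_curve(n_draws=10000, thresholds=(20000, 50000, 100000), seed=7):
    """For each willingness-to-pay threshold, estimate the probability that
    the new strategy is cost-effective (positive net monetary benefit)."""
    random.seed(seed)
    # Each draw samples an incremental cost ($) and incremental effect (QALYs).
    draws = [(random.gauss(1200, 400), random.gauss(0.03, 0.015))
             for _ in range(n_draws)]
    curve = {}
    for wtp in thresholds:
        # Net monetary benefit = wtp * delta_QALY - delta_cost.
        favorable = sum(1 for cost, qaly in draws if wtp * qaly - cost > 0)
        curve[wtp] = favorable / n_draws
    return curve

curve = acceptability_curve()
```

Plotting `curve` over a fine grid of thresholds yields the acceptability curve; as noted above, such a curve cannot distinguish different joint distributions of costs and effects that happen to yield the same probabilities.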
The majority of studies were performed in the United States by university-affiliated authors. This finding reflects the fact that, as with most clinical research, academic institutions lead the way in economic research. Regarding the targeted diseases, we found a preponderance of vascular diseases—including cerebral, cardiac, and noncerebral noncardiac diseases—which represented more than 50% of the sample. We also found that the number of CEAs published in radiology-focused peer-reviewed journals continued to represent less than a third of the total sample over time. This might reflect radiologists' acknowledgment that use of imaging technologies lies in the hands of referring physicians and that the information must be available to the user and not only to peer imagers. However, it may also mean that clinical journals, and readers whose specialty lies in another field, are more receptive to the results of economic analyses.
Government grants funded almost half of all published CEAs. This finding also reflects the increasing importance of this evolving field. Industry-sponsored research accounted for only 5% of the articles. This is contrary to other clinical fields, such as oncology and infectious diseases, in which industry funding supported a substantial share of CEAs (15% and 17%, respectively) (9,21). This may be due to several factors, including a market driven by innovation, the rapid evolution of the field that narrows the opportunity to perform timely economic analyses (10), the idea that cost-effectiveness analysis could harm innovation (6), and the lack of correlation between the technologies targeted by CEAs and the interests of industry. In radiology, new technologies and imaging applications are reimbursed on the basis of the time needed to perform and interpret the examination and the complexity of the examination; the vendor or manufacturer of the equipment is disregarded. This leads to a different model of competition, in which the costs of new technology (ie, hardware and software) are important from the perspective of the hospital or imaging group but are somewhat distant from the reimbursement rates set for it. Also, the introduction of new drugs requires proof of effectiveness, which leads to decisions regarding reimbursement and coverage. This direct link between cost, cost-effectiveness, reimbursement, and manufacturer price might help explain why industry support of CEAs is more common in other clinical areas. Nevertheless, a larger review revealed that studies funded by industry were more likely to report favorable results, and this funding was considered a potential source of bias (35).
Although no consensus on a threshold for cost-effectiveness has been achieved (8), several thresholds have been proposed; the most commonly used are $20 000 per QALY, $50 000 per QALY, and $100 000 per QALY (11,36,37). Consistent with this, we found that more than half of the studies reported use of one of these thresholds. Our results showed that the quality of a radiology-related CEA depended on the publishing journal and the country in which the study was performed. The greater adherence to protocols by journals with more experience in publishing economic analyses was expected. Usually, these journals have set reporting and performance standards that have helped improve the quality of the reports (38), and they also have a more rigorous review process. In addition, an economic analysis quality checklist is available to help ensure adherence to protocols (39,40). Overall, this finding suggests a need for investigators to abide by similar standards and for editorial boards to add more health care economics specialists as reviewers. The difference in study quality by country of origin is difficult to explain, because most requirements are similar in all countries and because the quality scores were determined mainly by how clearly the method was explained and its assumptions reported.
Our study had several limitations. The most relevant was that the measure of quality was based on the subjective appreciation of two expert readers and, therefore, was subject to reader bias. Another limitation was that the primary focus of some studies might have been the disease being studied and not the imaging technology used to diagnose the disease; thus, these studies were intended for other audiences. This could explain why some modalities were studied more than others.
In summary, CEA has become an important tool with which to analyze the value of diagnostic imaging, and its use has grown over the years. However, little quality improvement has been achieved with the increased number of publications. The establishment of standard protocols has had a limited effect on the quality of publications. The optimal use of economic tools in radiology is vital given the importance of this field in modern medicine and the costs that its use represents for health care and society.
ADVANCES IN KNOWLEDGE
The average number of imaging-related cost-effectiveness analyses (CEAs) published per year increased from 1.6 between 1985 and 1995 to 9.4 between 1996 and 2005; however, this increase was not accompanied by an increase in study quality.
The modalities most frequently evaluated with CEAs were US (35.1%), angiography (31.5%), MR imaging (22.5%), and CT (19.8%).
Government funds supported almost half of all published studies; industry funds supported only 5.4% of them.
Supplementary Material
Abbreviations
CEA = cost-effectiveness analysis
PHS = Public Health Service
QALY = quality-adjusted life-year
Author contributions: Guarantor of integrity of entire study, P.J.N.; study concepts/study design or data acquisition or data analysis/interpretation, all authors; manuscript drafting or manuscript revision for important intellectual content, all authors; manuscript final version approval, all authors; literature research, H.J.O., D.G., P.J.N.; statistical analysis, all authors; and manuscript editing, all authors.
Authors stated no financial relationship to disclose.
See also the editorial by Hunink in this issue.
Funding: This research was supported by the Agency for Healthcare Research and Quality (grant R01 HSI0919 for 2001–2004) and the National Library of Medicine (grant 1 G08 LM008413 for 2004–2007).
References
1. America's imaging problem. National Imaging Associates Web site. http://www.radmd.com/. Accessed December 20, 2007.
2. Chan S. The importance of strategy for the evolving field of radiology. Radiology 2002;224(3):639–648.
3. Iglehart JK. The new era of medical imaging: progress and pitfalls. N Engl J Med 2006;354(26):2822–2828.
4. Maitino AJ, Levin DC, Parker L, Rao VM, Sunshine JH. Practice patterns of radiologists and nonradiologists in utilization of noninvasive diagnostic imaging among the Medicare population 1993–1999. Radiology 2003;228(3):795–801.
5. Maitino AJ, Levin DC, Parker L, Rao VM, Sunshine JH. Nationwide trends in rates of utilization of noninvasive diagnostic imaging among the Medicare population between 1993 and 1999. Radiology 2003;227(1):113–117.
6. Neumann PJ, Rosen AB, Weinstein MC. Medicare and cost-effectiveness analysis. N Engl J Med 2005;353(14):1516–1522.
7. Glassman PA, Model KE, Kahan JP, Jacobson PD, Peabody JW. The role of medical necessity and cost-effectiveness in making medical decisions. Ann Intern Med 1997;126(2):152–156.
8. Goldman L. Cost-effectiveness in a flat world: can ICDs help the United States get rhythm? N Engl J Med 2005;353(14):1513–1515.
9. Earle CC, Chapman RH, Baker CS. Systematic overview of cost-utility assessments in oncology. J Clin Oncol 2000;18(18):3302–3317.
10. Singer ME, Applegate KE. Cost-effectiveness analysis in radiology. Radiology 2001;219(3):611–620.
11. Hollingworth W. Radiology cost and outcomes studies: standard practice and emerging methods. AJR Am J Roentgenol 2005;185(4):833–839.
12. Drummond MF, Sculpher MJ, Torrance GW, et al. Methods for the economic evaluation of health care programmes. Oxford, England: Oxford University Press, 2005.
13. Weinstein MC, Siegel JE, Gold MR, Kamlet MS, Russell LB. Recommendations of the panel on cost-effectiveness in health and medicine. JAMA 1996;276(15):1253–1258.
14. Gold MR. Standardizing cost-effectiveness analyses: the panel on cost-effectiveness in health and medicine. Acad Radiol 1998;5(suppl 2):S351–S354.
15. Russell LB, Gold MR, Siegel JE, et al. The role of cost-effectiveness analysis in health and medicine: panel on cost-effectiveness in health and medicine. JAMA 1996;276(14):1172–1177.
16. Siegel JE, Weinstein MC, Russell LB, Gold MR. Recommendations for reporting cost-effectiveness analyses. JAMA 1996;276(16):1339–1341.
17. Neumann PJ, Greenberg D, Olchanski NV, Stone PW, Rosen AB. Growth and quality of the cost-utility literature 1976–2001. Value Health 2005;8(1):3–9.
18. Udvarhelyi IS, Colditz GA, Rai A, Epstein AM. Cost-effectiveness and cost-benefit analyses in the medical literature: are the methods being used correctly? Ann Intern Med 1992;116(6):238–244.
19. Gerard K, Smoker I, Seymour J. Raising the quality of cost-utility analyses: lessons learnt and still to learn. Health Policy 1999;46(3):217–238.
20. Neumann PJ, Stone PW, Chapman RH, Sandberg EA, Bell CM. The quality of reporting in published cost-utility analyses, 1976–1997. Ann Intern Med 2000;132(12):964–972.
21. Stone PW, Schackman BR, Neukermans CP, et al. A synthesis of cost-utility analysis literature in infectious disease. Lancet Infect Dis 2005;5(6):383–391.
22. Kielar AZ, El-Maraghi RH, Carlos RC. Health-related quality of life and cost-effectiveness analysis in radiology. Acad Radiol 2007;14(4):411–419.
23. CEA registry: center for the evaluation of value and risk in health. Institute for Clinical Research and Health Policy Web site. http://www.tufts-nemc.org/cearegistry/. Accessed December 20, 2007.
24. World Health Organization. International classification of diseases and related health problems: tenth revision (ICD-10). World Health Organization Web site. http://www.who.int/classifications/icd/en/. Accessed January 9, 2008.
25. National Institute for Clinical Excellence. Guide to the methods of technology appraisal. London, England: National Institute for Clinical Excellence, 2004.
26. Doubilet P, McNeil BJ, Weinstein MC. The decision concerning coronary angiography in patients with chest pain: a cost-effectiveness analysis. Med Decis Making 1985;5(3):293–309.
27. Claxton K, Sculpher M, McCabe C, et al. Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra. Health Econ 2005;14(4):339–347.
28. Connelly JB, Hill GB, Millar WJ. The detection and management of abdominal aortic aneurysm: a cost-effectiveness analysis. Clin Invest Med 2002;25(4):127–133.
29. Medina LS, Richardson RR, Crone K. Children with suspected craniosynostosis: a cost-effectiveness analysis of diagnostic strategies. AJR Am J Roentgenol 2002;179(1):215–221.
30. Blackmore CC. Methodological quality of radiology economic analyses. Eur Radiol 2000;10(suppl 3):S349–S353.
31. Tunis SR. Why Medicare has not established criteria for coverage decisions. N Engl J Med 2004;350(21):2196–2198.
32. Groot Koerkamp B, Hunink MG, Stijnen T, Hammitt JK, Kuntz KM, Weinstein MC. Limitations of acceptability curves for presenting uncertainty in cost-effectiveness analysis. Med Decis Making 2007;27(2):101–111.
33. Revicki DA, Yabroff KR, Shikiar R. Outcomes research in radiologic imaging: identification of barriers and potential solutions. Acad Radiol 1999;6(suppl 1):S20–S28.
34. Hillman BJ. Outcomes research in radiology. Acad Radiol 1996;3(suppl 1):S3–S4.
35. Bell CM, Urbach DR, Ray JG, et al. Bias in published cost effectiveness studies: systematic review. BMJ 2006;332(7543):699–703.
36. Laupacis A, Feeny D, Detsky AS, Tugwell PX. How attractive does a new technology have to be to warrant adoption and utilization? tentative guidelines for using clinical and economic evaluations. CMAJ 1992;146(4):473–481.
37. Laking G, Lord J, Fischer A. The economics of diagnosis. Health Econ 2006;15(10):1109–1120.
38. Rosen AB, Greenberg D, Stone PW, et al. Quality of abstracts of papers reporting original cost-effectiveness analyses. Med Decis Making 2005;25(4):424–428.
39. Evers S, Goossens M, de Vet H, van Tulder M, Ament A. Criteria list for assessment of methodological quality of economic evaluations: consensus on health economic criteria. Int J Technol Assess Health Care 2005;21(2):240–245.
40. Hillner BE. Potential evaluation of the incremental cost-effectiveness of paclitaxel in advanced non-small-cell lung cancer (Eastern Cooperative Oncology Group 5592). J Natl Cancer Inst Monogr 1995;19:65–67.