Abstract
Standards for the economic evaluation of medical technologies were instituted in the mid-1990s, yet little is known about their application in medical information technology studies. In a review of evaluation studies published between 1982 and 2002, we found that the volume and variety of economic evaluations had increased. However, investigators routinely omitted key cost or effectiveness elements in their designs, resulting in publications with incomplete, and potentially biased, economic findings.
Introduction
Since the mid-1990s, a number of standards have been proposed for the economic evaluation of medical technologies.1 We sought to determine whether the quality of economic information included in medical information studies improved after these standards were published.
Methods
Ammenwerth and de Keizer conducted a systematic review of all medical information technology evaluation research published between 1982 and 2002.2 We searched their online database for English language articles that made economic claims (e.g., cost-saving, cost-effective) for specific medical information technologies. Two reviewers independently coded abstracts of articles meeting our inclusion criteria and resolved coding differences by discussion. General characteristics were coded for all papers in our study, whereas economic study characteristics (cost, effectiveness, or both) were coded only for papers that collected empirical data.
Results
General Characteristics
The Ammenwerth and de Keizer database contained 1036 evaluation studies, 964 of which (93%) were published in English. We identified 134 studies (14%) that made economic claims; these formed the population for our analyses. While the number of economic studies increased across our three eras (1982–88, 1989–95, and 1996–2002), the percentage of English language studies making economic claims (14%) was relatively constant throughout. Similarly, 23% of all economic studies made rhetorical claims without presenting cost data, and this did not vary substantially across eras.
Empirical Study Characteristics
Throughout our study period, there was little evidence of evolution in economic study design. Most studies were single-center, prospective designs that did not randomize patients to different information interventions. The economic and patient effectiveness measures reported also changed little over time (see Table). Forty percent of studies in all eras did not include measurements of key resource use. In addition, studies were likely to report one cost element (e.g., patient care costs) without including other elements (e.g., information intervention costs). Fewer than half of the economic evaluation studies reported changes in patient outcomes.
| Evaluation Period | 1982–88 | 1989–95 | 1996–2002 |
|---|---|---|---|
| Number of Studies (n) | 14 | 28 | 66 |
| Economic Measurements (%) | | | |
| Resource use reported | 57 | 64 | 58 |
| Cost Components: | | | |
| Information | 29 | 18 | 36 |
| Patient care | 29 | 46 | 39 |
| Other | 29 | 29 | 15 |
| Unclear | 21 | 18 | 24 |
| Effectiveness Measurements (%) | | | |
| Patient outcomes included | 29 | 29 | 39 |
Conclusion
Despite the advancement of standards for the economic evaluation of medical technologies, medical informatics investigators routinely disregard established economic guidelines in their studies.
References
1. Gold MR, Siegel J, Russell L, Weinstein M. Cost-Effectiveness in Health and Medicine. New York, NY: Oxford University Press; 1996.
2. Ammenwerth E, de Keizer N. Inventory of evaluation publications. http://evaldb.umit.at/index.htm. Accessed March 15, 2006.
