Journal of the American Medical Informatics Association (JAMIA). 2007 May-Jun;14(3):368–371. doi: 10.1197/jamia.M2276

A Viewpoint on Evidence-based Health Informatics, Based on a Pilot Survey on Evaluation Studies in Health Care Informatics

Elske Ammenwerth, Nicolette de Keizer
PMCID: PMC2244873  PMID: 17329724

Abstract

Concerned about evidence-based health informatics, the authors conducted a limited pilot survey attempting to determine how many IT evaluation studies in health care are never published, and why. A survey distributed to 722 academics had a low response rate, with 136 respondents providing instructive comments on 217 evaluation studies. Of those studies, half were published in international journals, and more than one-third were never published. Reasons for not publishing (with multiple reasons per study possible) included: “results not of interest for others” (1/3 of non-published studies), “publication in preparation” (1/3), “no time for publication” (1/5), “limited scientific quality of study” (1/6), “political or legal reasons” (1/7), and “study only conducted for internal use” (1/8). These reasons for non-publication in health informatics resemble those reported in other fields. Publication bias (a preference for positive studies) did not appear to be a major issue. The authors believe that widespread application of guidelines for conducting health informatics evaluation studies, together with a registry of evaluation study results, could improve the evidence base of the field.

The Problem of Publication Bias

Health care IT systems have been shown to increase the quality and efficiency of health care.1–3 However, there are also examples where IT systems failed to provide the expected benefits or even seemed to have negative effects on patient care.4–6 Systematic evaluation is thus needed and is even seen as an ethical imperative for health informaticians.7 Taken as a whole, published IT evaluation studies contribute to the emergence of evidence-based health informatics,7,8 which can be defined as the conscientious, explicit, and judicious use of current best evidence to support decisions about IT use in health care (adapting Sackett's definition of evidence-based medicine).9

A substantial number of IT evaluation papers have appeared over the last 25 years, as shown in a recent inventory.10 However, we do not know how representative and complete those publications are. One problem frequently discussed in this context is publication bias. The most common type of publication bias is that in which well-executed studies with null, negative, or disappointing results never find their way into the archival literature.11,12

In health care, publication bias has been discussed for more than 100 years.13 Since then, hundreds of publications have addressed the problem in clinical research (e.g.,14–16), analyzing its causes as well as methods for its detection and prevention.

However, although publication bias may pose an even larger threat to the evidence base of health informatics, it has not yet been systematically studied in the field, even though the problem has been discussed by Tierney17 and by Friedman and Wyatt.11 The authors therefore conducted a limited pilot study to determine:

  1. What percentage of IT evaluation studies are not published in international journals or proceedings?

  2. What are typical reasons for not publishing the results of an IT evaluation study?

Publication Bias in Health Informatics: Results of a Pilot Survey

To answer these questions, the authors conducted a written e-mail survey of academics in spring 2006. The survey sample included members of the mailing lists of the AMIA working group on Evaluation (n=341), the EFMI working group on Assessment of Health Information Systems (n=224), and the IMIA working group on Technology Assessment and Quality Development in Health Informatics (n=220), as well as first authors of IT evaluation papers published between 2001 and 2006 and indexed in Medline (n=204). Overall, after removing duplicate names, 722 academics were included.

The survey consisted of three questions:

  1. Which information systems did you evaluate in the last three years?

  2. Where did you publish the results?

  3. If you did not publish: what were the reasons? (Respondents could select from typical reasons or enter free text.)

For each study, the authors analyzed where it had been published and classified the responses as internationally published (e.g., peer-reviewed journal, proceedings, or book); only locally published (e.g., local conference, master's thesis); or not published (no publication available, only internal reports).

Only 136 academics responded (response rate: 18.8%). The preliminary results are reported herein as indicators of possible trends, in the hope that others might confirm them in larger studies. The 118 individuals who reported completing studies provided information on 217 evaluation studies. Of those 118 individuals, 33 came from the list of first authors (reporting on 77 studies), 37 from the EFMI mailing list (53 studies), 31 from the AMIA mailing list (56 studies), and 17 from the IMIA mailing list (31 studies). The remaining 18 of the 136 respondents said they had conducted no evaluation studies. The 118 respondents who provided valid information came from the USA (n=45), UK (n=11), the Netherlands (n=10), Canada (n=8), Germany (n=6), Australia (n=6), and 18 other countries (n=32); please refer to the Appendix, available as an electronic data supplement at www.jamia.org, for the response rate per country. Based on e-mail signatures, e-mail addresses, and respondents' comments, we grouped the respondents by background: most (n=92) came from an academic environment, 8 from IT management, 6 from industry, and 5 from governmental institutions, with 7 backgrounds unknown.
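As a quick consistency check on these figures, the short Python sketch below simply re-tallies the counts quoted in the paragraph above; it is purely illustrative and uses no data beyond what is stated there.

```python
# Re-tally of the survey figures quoted above; an arithmetic check only.
sources = {
    "first authors": (33, 77),  # (respondents, studies reported)
    "EFMI list": (37, 53),
    "AMIA list": (31, 56),
    "IMIA list": (17, 31),
}

respondents_with_studies = sum(r for r, _ in sources.values())  # 33+37+31+17 = 118
studies = sum(s for _, s in sources.values())                   # 77+53+56+31 = 217
total_respondents = respondents_with_studies + 18               # plus 18 with no studies = 136

print(respondents_with_studies, studies, total_respondents)
print(f"response rate: {total_respondents / 722:.1%}")          # 136/722 = 18.8%
```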

The most frequently evaluated types of IT system were EPR/EHR systems (n=28) and CPOE and medication systems (n=23); see the Appendix, available as an electronic data supplement at www.jamia.org, for details on the types of IT systems. For the 217 evaluation studies conducted by respondents, 213 publications were reported; of these, 77 came from respondents who had been pre-selected because of prior evaluation study publications. About half of the 217 evaluation studies were published in peer-reviewed international journals, proceedings, or books. Slightly more than one-third of the studies had not yet been published anywhere. A further one-tenth had only internal project reports, and 1/16 had only local publications (e.g., local conferences, theses). Of the studies published internationally, more than half appeared in health informatics journals or proceedings, one-third in medical or nursing journals, and one-tenth in other journals. See the Appendix (available as an on-line data supplement at www.jamia.org) for details on the journals in which the evaluations appeared.

For the 107 evaluation studies that were unpublished or reported only in internal project reports or local publications, respondents gave reasons for non-publication. These reasons were grouped into the following categories, with multiple reasons per study possible (approximate counts are sketched after the list):

  1. “Planned or in preparation”: Publication is planned or already in progress. Quote: “May publish following validation” (around 1/3 of all non-published studies).

  2. “Not of interest for others”: The generalizability of the results seemed too limited, or the results seemed not to be of interest to others. Quote: “Constellation of internal social factors, adoption factors, staff training/experience, etc. seemed too unique to make it general enough” (around 1/3 of all non-published studies).

  3. “No time for writing”: No time had yet been found for publication because, for example, making the IT system operational took too much time, funding ran out, or new projects started. Quote: “Too busy implementing CPOE to publish” (around 1/5 of all non-published studies).

  4. “Limited scientific quality”: The methods used seemed inadequate, or the paper was submitted but rejected by the editor for insufficient quality. Quote: “The setup (e.g., amount of interviews) was not robust enough” (around 1/6 of all non-published studies).

  5. “Political and legal reasons”: The organization the author worked in prohibited publication, or the results were too negative to be published (both categories from the original questionnaire). Quote: “Government was unwilling to publicly share negative content of initial responses” (around 1/7 of all non-published studies).

  6. “Only meant for internal use”: The results were meant only for internal use; there was no academic or scientific interest in publishing. Quote: “The evaluation was only meant for the own organization; academic output is not necessary” (around 1/7 of all non-published studies).
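To make the fractions above more concrete, the following short Python sketch converts them into approximate study counts. The 107 denominator and the fractions are taken directly from the text; the rounded counts are our own illustrative derivation, not figures reported by the survey.

```python
# Approximate study counts implied by the reported fractions, relative to the
# 107 studies without an international publication. Multiple reasons per study
# were possible, so these counts need not sum to 107. Illustrative only.
from fractions import Fraction

NON_PUBLISHED = 107
reasons = {
    "planned or in preparation": Fraction(1, 3),
    "not of interest for others": Fraction(1, 3),
    "no time for writing": Fraction(1, 5),
    "limited scientific quality": Fraction(1, 6),
    "political and legal reasons": Fraction(1, 7),
    "only meant for internal use": Fraction(1, 7),
}

for reason, share in reasons.items():
    print(f"{reason}: ~{round(NON_PUBLISHED * share)} studies")
# 1/3 of 107 gives ~36 studies; 1/5 gives ~21; 1/6 gives ~18; 1/7 gives ~15.
```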

See the Appendix (available as an on-line data supplement at www.jamia.org) for a more complete list of respondents' comments.

Discussion

Based on this limited pilot e-mail survey, the authors found that about half of respondents' IT evaluation studies were reported in international publications. Stated reasons for non-publication were diverse, including unclear generalizability of results obtained in a local context, lack of time or budget to write up the evaluation results, doubts about the scientific quality of the study, political and legal reasons (including “publication bias,” i.e., non-publication due to negative results), and studies conducted only for internal use without any academic research interest. With regard to the last point, the authors believe that the results of reasonably conducted evaluation studies should be made publicly available regardless of their findings, even though such studies may lack innovative methods or novel results.

This pilot survey was an early attempt to quantify rates of non-publication of informatics evaluation results and to explore the reasons for them. Limitations of the current study include its lack of information about the respondents (e.g., how well academics who publish regularly were represented versus non-academics who rarely publish), the lack of analysis of non-respondents, and the low survey response rate, with fewer than 20% of those contacted providing information. As a result, it is unclear whether the results are representative of the overall informatics community; the scientific validity of the pilot study should therefore be judged carefully.

Most respondents came from academia, and a quarter of them were selected precisely because they had already published evaluation studies, so the survey results carry a strong academic bias and may overestimate publication rates. We do not know what motivated respondents to participate in this survey, so its results are prone to many potential forms of sampling bias related to, for example, the number of evaluation studies conducted, academic position, professional background, national background, and language. The numbers given in the results are therefore only illustrative and cannot be considered representative of the IT evaluation community in health informatics.

While its validity may be debated, the current study's results are at least consistent with studies from other domains. For example, Dickersin12 analyzed 204 RCTs in health care and found that 50% had not been published. Reasons for non-publication reported in that study included negative results (n=35), lack of interest (n=16), article planned or in progress (n=15), and methodological problems (n=5). However, two aspects seem to differ. First, negative publication bias seems to be a larger issue in the clinical fields: in our survey, only three respondents explicitly attributed non-publication to negative results. A 2001 JAMIA editorial exploring issues related to negative publication bias in health care informatics11 likewise concluded that publication bias is not a major reason for non-publication in health informatics. Study registries, such as the AMIA Global Trial Bank being developed for clinical trials, are also being promoted for IT evaluation studies (e.g.,18), but do not, in the authors' view, currently appear to be a high priority in health informatics. Second, in our survey a rather high number of authors mentioned “limited scientific quality,” pointing to methodological problems within the evaluation study that prevented publication. Possible reasons include the complexity of IT evaluation studies, which are often not optimally planned beforehand, and the feeling that the results are not easily generalizable to other settings.

In the authors' opinion, the broader challenge is to establish a foundation for evidence-based health informatics (EBHI) by providing access to the results of all systematically conducted IT evaluation studies. We believe several steps can be taken in this direction:7

  a. Increase the number of IT evaluation studies by providing greater academic or monetary rewards, and by reserving a fixed share of the budget of each IT project for evaluation.

  b. Develop Guidelines for Evaluation Practice in Health Informatics to increase the methodological and scientific quality of IT evaluation studies (see, as one example of many efforts, the ongoing activities on GEP-HI at http://iig.umit.at/efmi).

  c. Develop Guidelines for Reporting on Evaluation Studies in Health Informatics to increase the quality of IT evaluation submissions (see, as one example of many efforts, the ongoing activities on STARE-HI at http://iig.umit.at/efmi).

  d. Increase the accessibility of evaluation studies, e.g., by developing open repositories for IT evaluation studies (see, for example, the AMIA Global Trial Bank at http://www.amia.org/gtb or the IT Evaluation Database at http://evaldb.umit.at).

Conclusion

The study described in this viewpoint paper was a preliminary, and potentially biased, attempt to explore and quantify the reasons for non-publication of IT evaluation studies in health informatics. It suggests that as many as half of all IT evaluation studies may never be published. The authors believe that further studies are needed to quantify the extent of non-publication in our field more precisely, and to determine how best to make the results of evaluation studies accessible by means other than traditional peer-reviewed publication, for example through repositories of evaluation studies such as the Evaluation Database EvalDB or the AMIA Global Trial Bank.

Acknowledgments

We thank all respondents for answering the survey and providing valuable information and comments.

References

  1. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006;144(10):742-752.
  2. Garg A, Adhikari N, McDonald H, Rosas-Arellano M, Devereaux P, Beyene J, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005;293:1223-1238.
  3. Rothschild J. Computerized physician order entry in the critical care and general inpatient setting: a narrative review. J Crit Care 2004;19(4):271-278.
  4. Han YY, Carcillo JA, Venkataraman ST, Clark RS, Watson RS, Nguyen TC, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 2005;116(6):1506-1512.
  5. Ammenwerth E, Shaw N. Bad health informatics can kill—is evaluation the answer? Methods Inf Med 2005;44:1-3.
  6. Koppel R, Metlay J, Cohen A, Abaluck B, Localio A, Kimmel SE, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293(10):1197-1203.
  7. Ammenwerth E, Brender J, Nykänen P, Prokosch H-U, Rigby M, Talmon J. Visions and strategies to improve evaluation of health information systems—reflections and lessons based on the HIS-EVAL workshop in Innsbruck. Int J Med Inf 2004;73(6):479-491.
  8. Rigby M. Evaluation: 16 powerful reasons why not to do it—and 6 over-riding imperatives. In: Patel V, Rogers R, Haux R, editors. Proceedings of the 10th World Congress on Medical Informatics (Medinfo 2001). Amsterdam: IOS Press; 2001. pp. 1198-1202.
  9. Sackett D, Rosenberg W, Gray J, Haynes R, Richardson S. Evidence based medicine: what it is and what it isn't. BMJ 1996;312:71-72.
  10. Ammenwerth E, de Keizer N. An inventory of evaluation studies of information technology in health care: trends in evaluation research 1982–2002. Methods Inf Med 2005;44:44-56.
  11. Friedman C, Wyatt J. Publication bias in medical informatics. J Am Med Inform Assoc 2001;8(2):189-191.
  12. Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA 1990;263(10):1385-1389.
  13. Heath D. The reporting of unsuccessful cases. Boston Med Surg J 1909 Aug 19;161:263-264.
  14. Olson CM, Rennie D, Cook D, Dickersin K, Flanagin A, Hogan JW, et al. Publication bias in editorial decision making. JAMA 2002;287(21):2825-2828.
  15. Phillips CV. Publication bias in situ. BMC Med Res Methodol 2004;4:20.
  16. Scholey JM, Harrison JE. Publication bias: raising awareness of a potential problem in dental research. Br Dent J 2003;194(5):235-237.
  17. Tierney W, McDonald C. Testing informatics innovations: the value of negative trials. J Am Med Inform Assoc 1996;3(5):358-359.
  18. De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, et al. Is this clinical trial fully registered? A statement from the International Committee of Medical Journal Editors. N Engl J Med 2005;352(23):2436-2438.
