BMJ. 1998 Jun 27;316(7149):1959–1961. doi: 10.1136/bmj.316.7149.1959

Evaluating information technology in health care: barriers and challenges

Heather Heathfield a, David Pitty b, Rudolph Hanka c

There is a strong push for clinical leadership in the development and procurement of information technology in health care.1 The lack of clinical input to date has been cited as a major factor in the failure of information technology in health services2 and has prompted many clinicians to become involved in such endeavours. Furthermore, various clinical decision support systems are now available (such as Prodigy3 and Capsule4), the merits of which clinicians are expected to judge.

It is essential that clinicians understand evaluation issues so that they can assess the strengths and weaknesses of evaluation studies and thus interpret their results meaningfully, and so that they can contribute to the design and implementation of studies that provide them with useful information.

Summary points

  • Clinicians are becoming increasingly involved in the development and procurement of information technology in health care, yet evaluation studies have provided little useful information to assist them

  • Evaluations by means of randomised controlled trials have not yet provided any major indication of improved patient outcomes or cost effectiveness, are difficult to generalise, and do not provide the scope or detail necessary to inform decision making

  • Clinical information systems are a different kind of intervention from drugs, and techniques used to evaluate drugs (particularly randomised controlled trials) are not always appropriate

  • The challenge for clinical informatics is to develop multi-perspective evaluations that integrate quantitative and qualitative methods

  • Evaluation is not just for accountability but to improve our understanding of the role of information technology in health care and our ability to deliver systems that offer a wide range of clinical and economic benefits

The evaluation dilemma

Decision makers may be swayed by the general presumption that technology is of benefit to health care and should be wholeheartedly embraced. This view is supported by assertions that general practitioner computing is seen “as an integral part of the NHS IT strategy,”5 by the US Institute of Medicine’s statement that computing is “an essential technology for healthcare,”6 and by the increasingly high levels of spending on healthcare information technology. On the other hand, decision makers may support the argument that procurement of information technology should be based on the demonstration, in randomised controlled trials, of economic benefits or positive effects on patient outcomes.7–12

Regardless of which view you take, evidence is scarce. Large scale pilot initiatives such as the NHS electronic patient record project have yielded only anecdotal evidence, with little or no credence given to results of external evaluation (“We now know how to do it and it is achievable in the NHS”13). Results from economic analyses and randomised controlled trials of healthcare systems are emerging, but these studies cover only a small fraction of the total number of healthcare applications developed and address a limited number of questions, and most show no benefits to patient outcomes (D L Hunt et al, Proceedings of the 5th Cochrane Colloquium, Amsterdam, October 1997).14

Those who base their judgment on the failure of randomised controlled trials to show improved outcomes may cause important projects to be prematurely abandoned and funding to be discontinued. In contrast, those who heed the proponents of healthcare information technology and base their decisions on unsubstantiated reports of projects, written without external verification, may waste precious NHS resources through the inappropriate and uninformed application of information technology. This is likely to result in repeated failure without retrospective insight, and so does nothing to further the science of system development and deployment. The problem is compounded by the fact that negative results are seen as unacceptable and do not generally become public, and so they fail to inform future developments.

Problems with inappropriate evaluations

Evaluation can be viewed as having a severely negative impact on the progress of clinical information technology because, in our opinion, many evaluation studies ask inappropriate questions, apply unsuitable methods, and interpret their results incorrectly. The questions most often asked concern economic benefits and clinical outcomes, despite the lack of strong evidence for either and the recognised difficulty of applying results in other contexts.15 The misplaced notion that clinical information technology is comparable to a drug, and should be evaluated as one, has led to the idea that the randomised controlled trial is the optimal method of investigation.16 While a major deterrent to the use of randomised controlled trials has been their cost, they are also vulnerable with respect to external validity: trial results may not be relevant to the full range of subjects (that is, to specific implementations of a healthcare application) or to typical uses of a system in day to day practice, and they are likely to cover only a small proportion of the wide range of potential healthcare applications. Furthermore, negative results from such trials cannot help us understand the effects of clinical systems or build better ones in the future.

New directions in evaluation

New perspectives on evaluation are emerging in the domain of health care. Most important is the recognition that randomised controlled trials cannot address all issues of evaluation and that a range of approaches is desirable (Heathfield et al, Proceedings of HC96, Harrogate, 1996).17 As pointed out by McManus, “Can we imagine how randomised controlled trials would ensure the quality and safety of modern air travel . . .? Whenever aeroplane manufacturers wanted to change a design feature . . . they would make a new batch of planes, half with the feature and half without, taking care not to let the pilot know which features were present.”18 Others have sought to find surrogate process measures that may be used instead of “prohibitive” outcome measures, thus making randomised controlled trials more cost effective.19

Likewise, workers in clinical informatics have questioned the usefulness of conducting randomised controlled trials on clinical systems. The demonstration of quantifiable benefits in a randomised controlled trial does not necessarily mean that end users will accept a system into their working practices. Research shows that satisfaction with information technology correlates more strongly with users’ perceptions of a system’s effect on their productivity than with its effect on quality of care.20–22

These insights have highlighted the need to examine professional and organisational factors in system evaluation and have led to the concept of multi-perspective, multi-method evaluations, which seek to address a number of issues with multiple methods and with evaluators from different backgrounds working together to produce an integrated evaluation. This is coupled with an awareness of the importance of qualitative methods in system evaluation.23–26 The NHS electronic patient record project is an example of a large, multi-perspective evaluation, which includes social scientists, health economists, computer scientists, health service managers, and psychologists and uses a wide range of different methods. However, the problems of conducting large scale evaluations of this type show the need for careful planning in such studies.27

Challenges for evaluating information technology in health care

Clinical systems are embedded in social systems, whose contexts differ in the people, institutions, providers, and settings involved. While it is important that we search for causal mechanisms that lead to clinical outcomes, the investigation and, possibly, classification of such contexts is essential. This will help us to understand and predict the behaviour of systems and provide important knowledge to inform further developments. This form of research will be facilitated by refocusing attention from debates about specific methods towards issues of multi-method evaluation and the integration of methods and results.

Conclusions

The arguments for performing multi-method evaluations must be acknowledged and taken forward within the community. Information technology is not a drug and should not be evaluated as one. We should look to the wider field of evaluation disciplines, in which many of the issues now facing clinical informatics have already been addressed.

The current political context in which healthcare applications are evaluated emphasises economic gains rather than quality of life. Thus, the role of evaluation has been to justify past expenditures to taxpayers, managers, etc, and so evaluation becomes a way of trying to rebuild lost public trust. This is short sighted. Evaluation is not just for accountability, but for development and knowledge building in order to improve our understanding of the role of information technology in health care and our ability to deliver high quality systems that offer a wide range of clinical and economic benefits.

Footnotes

Funding: None.

Conflict of interest: None.

References

1. Wyatt JC. Hospital information management: the need for clinical leadership. BMJ 1995;311:175–178. doi: 10.1136/bmj.311.6998.175.
2. Heathfield HA, Wyatt J. Philosophies for the design and development of clinical decision-support systems. Methods Inf Med 1993;32(1):1–8.
3. Wise J. Computer prescribing scheme gets green light. BMJ 1996;313:250.
4. Walton RT, Gierl C, Yudkin P, Mistry H, Vessey MP, Fox J. Evaluation of computer support for prescribing (CAPSULE) using simulated cases. BMJ 1997;315:791–795. doi: 10.1136/bmj.315.7111.791.
5. Leaning M. The new information and management strategy of the NHS. BMJ 1993;307:217. doi: 10.1136/bmj.307.6898.217.
6. Dick RS, Steen EB, editors. The computer-based patient record: an essential technology for health care. Washington DC: National Academy Press; 1991.
7. Lock C. What value do computers provide to NHS hospitals? BMJ 1996;312:1407–1410.
8. Donaldson LJ. From black bag to black box: will computers improve the NHS? BMJ 1996;312:1371–1372. doi: 10.1136/bmj.312.7043.1371a.
9. Sullivan F, Mitchell E. Has general practitioner computing made a difference to patient care? A systematic review of published reports. BMJ 1995;311:848–852. doi: 10.1136/bmj.311.7009.848.
10. Johnston ME, Langton KB, Haynes RB, Mathieu A. Effects of computer-based clinical decision support systems on clinician performance and patient outcome. A critical appraisal of research. Ann Intern Med 1994;120:135–142. doi: 10.7326/0003-4819-120-2-199401150-00007.
11. Wyatt JC. Clinical data systems. Part 1: data and medical records. Lancet 1994;344:1543–1547. doi: 10.1016/s0140-6736(94)90353-0.
12. Van der Loo JA. Overview of published assessment and evaluation studies. In: Van Gennip EMSJ, Talmon JL, editors. Assessment and evaluation of information technologies. Amsterdam: IOS Press; 1995. pp. 64–78.
13. Brennan S, Dodds B. The electronic patient record programme: a voyage of discovery. Br J Healthcare Comput Inf Manage 1997;14:16–18.
14. Rotman BL, Sullivan AN, McDonald TW, Brown BW, DeSmedt P, Goodnature D, et al. A randomised controlled trial of a computer-based physician workstation in an outpatient setting: implementation barriers to outcome evaluation. J Am Med Inf Assoc 1996;3:340–348. doi: 10.1136/jamia.1996.97035025.
15. Heathfield HA. Decision support systems. In: van Bemmel JH, McCray AT, editors. Yearbook of Medical Informatics 1995. Stuttgart: Schattauer Verlagsgesellschaft mbH; 1995. pp. 455–457.
16. Wyatt J, Spiegelhalter D. Evaluating medical expert systems: what to test and how? Med Inf 1990;15:205–217. doi: 10.3109/14639239009025268.
17. Mongerson PA. Patient’s perspective of medical informatics. J Am Med Inf Assoc 1995;2:79–84. doi: 10.1136/jamia.1995.95261909.
18. McManus C. Engineering quality in health care. Qual Health Care 1996;5:127.
19. Mant J, Hicks N. Detecting differences in quality of care: the sensitivity of measures of process and outcome in treating acute myocardial infarction. BMJ 1995;311:793–796. doi: 10.1136/bmj.311.7008.793.
20. Lee F, Teich JM, Spurr CD, Bates DW. Implementation of physician order entry: user satisfaction and self-reported usage patterns. J Am Med Inf Assoc 1996;3:42–55. doi: 10.1136/jamia.1996.96342648.
21. Kaplan B. Information technology and three studies of clinical work. ACM SIGBIO Newsletter 1995;15(2):2–5.
22. Igbaria M, Iivari J, Maragahh H. Why do individuals use computer technology? A Finnish case study. Inf Manage 1995;29:227–238.
23. Friedman CP, Wyatt JC. Evaluation methods in medical informatics. New York: Springer-Verlag; 1997.
24. Van Gennip EMSJ, Talmon JL, editors. Assessment and evaluation of information technologies in medicine. Amsterdam: IOS Press; 1995.
25. Anderson JG, Aydin CE, Jay SJ. Evaluating healthcare information systems: methods and applications. Thousand Oaks, CA: Sage; 1994.
26. Kaplan B. A model of a comprehensive evaluation plan for complex information systems: clinical imaging systems as an example. In: Brown A, Remenyi D, editors. Proceedings of the second European conference on information technology investment evaluation. Birmingham: Operational Research Society; 1995. pp. 174–181.
27. Heathfield HA, Hudson P, Kay S, Nicholson L, Peel V, Williams J, et al. Issues in multidisciplinary assessment of healthcare information systems. J IT People (in press).
