BMJ. 2002 Mar 30;324(7340):783–785. doi: 10.1136/bmj.324.7340.783

Rational, cost effective use of investigations in clinical practice

Ron Winkens, Geert-Jan Dinant
PMCID: PMC1122712  PMID: 11924663

Investigations such as blood tests and radiography are important tools for making correct diagnoses. The use of diagnostic resources is growing steadily—in the Netherlands, for example, nationwide expenditure on diagnostic tests is growing at a rate of 7% a year. Unfortunately, health status is not improving at a similar rate, which suggests that investigations are being overused. The ordering of tests seems not to be influenced by the fact that their diagnostic accuracy is often disappointing. Considerations other than strictly scientific indications seem to be involved, and we may ask whether new knowledge and research findings are adequately reflected in daily practice.

Several factors may be responsible for the increasing use of investigations: the increasing demand for care (due to the ageing of the population and the growing number of chronically ill people); the mere availability of tests, which in itself invites ordering; and the urge to make use of new technology. Once an abnormal test result is found, doctors may order further investigations, not realising that on average 5% of test results fall outside their reference ranges (by definition, since reference ranges are conventionally set to span the central 95% of results in healthy people), and a cascade of testing may result. Furthermore, higher standards of care, whose guidelines often recommend additional testing, and defensive behaviour have led to more investigations. Unfortunately, when guidelines on selective and rational ordering of investigations are introduced, numerous motives for ignoring evidence based recommendations, such as fear of litigation or procrastination on the part of the doctor, come into play in daily practice and are difficult to influence.
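
The arithmetic behind this cascade is worth making explicit. If each test carries roughly a 5% chance of a falsely abnormal result in a healthy patient, the chance of at least one abnormal result rises quickly with the number of tests ordered. The short Python sketch below illustrates this, under the simplifying assumption that the tests are statistically independent:

    # Each test is assumed to have a 5% false positive rate in healthy
    # patients, because reference ranges are conventionally defined to
    # span the central 95% of results from a healthy population.

    def p_any_abnormal(n_tests: int, specificity: float = 0.95) -> float:
        """Chance that a healthy patient has at least one abnormal result."""
        return 1 - specificity ** n_tests

    for n in (1, 5, 12, 20):
        print(f"{n:2d} tests: {p_any_abnormal(n):.0%} chance of an abnormal result")

With 12 independent tests the chance of at least one spurious abnormality is already about 46%; with 20 tests it is about 64%. This is why a single broad screen in a healthy patient so often triggers further testing.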

Overuse of investigations—and there is reason to believe that some requests are illogical—leads to overloading of the diagnostic services and overexpenditure: more efficient usage is therefore needed. Interventions focusing on overt examples of inappropriate testing might reduce costs while simultaneously improving quality of care.

Summary points

  • Intervention is needed to reduce the often quite illogical overuse of diagnostic tests

  • Current evidence favours using combinations of methods to influence doctors' behaviour

  • In daily practice doctors' decisions are often affected by pressure from patients

  • General practitioners perhaps need more help in putting across the rationale for using, or not using, tests

What does the change involve?

Changing how clinicians order investigations involves a number of stages, shown in the implementation cycle published elsewhere.1

Guidelines, protocols, and standards are needed to formalise optimal practice. The standards developed for general practitioners by the Dutch College of General Practitioners are a good example.2,3 Since 1989 the college has issued some 70 guidelines on a variety of common clinical problems, one of which deals specifically with the rational ordering of investigations.4

Simply distributing guidelines, however, does not make clinicians adopt them; strategies have to be devised to bring about actual change. Implementation involves a range of activities to stimulate the use of guidelines, such as communication and information about their contents and relevance, providing insight into the problem of inappropriate ordering of tests and the need to change, and, most importantly, interventions to achieve actual behavioural changes.

Is change feasible?

Ideal interventions would improve the rationality of test ordering while at the same time reducing the number of requests, but identifying or formulating interventions that achieve both is not easy. Moreover, some interventions by their nature cannot be properly evaluated, especially large scale interventions such as changes in national regulations or reimbursement terms, for which it is difficult to obtain a concurrent control group.

Some of the strategies that have been evaluated have proved effective; others have been disappointing. Several reviews have focused on the effectiveness of implementation strategies. Their conclusions vary, but there is a measure of consensus that, while some strategies by and large seem to fail, others are at least promising. A few examples follow.

Changes in terms of reimbursement or regulatory steps by health insurers or government can affect ordering of investigations by acting as a stimulus to clinicians to adopt the desired changes. In several western countries the healthcare system includes a payment to doctors for investigations ordered, even if these are carried out elsewhere: under these systems ordering fewer tests affects the doctor's income. Changing this payment system could improve adherence to guidelines without the risk of reducing clinicians' income. There is a clear need for trials in this field, as at present virtually no evidence exists on the point.

Since one of the reasons for the growing use of investigations is simply that they are so easy to request on standard laboratory request forms, one simple strategy is to remove tests from the forms or to require explicit justification for ordering them. Such interventions have proved effective at little extra cost and effort: Zaat et al and Smithuis et al found reductions of 20-50% in test ordering.5,6 Extensive or unselective curtailing of request forms, however, carries a risk of underuse of tests, so changes to request forms should be designed very carefully.
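
The mechanics of such a change are simple. The sketch below, with an invented set of panel tests, illustrates the two levers described above: dropping a test from the standard form, and requiring explicit justification before an off-form test is accepted:

    # Hypothetical sketch of a restricted request form. Tests removed from
    # the standard form can still be ordered, but only with an explicit
    # clinical justification. The panel contents are invented.
    STANDARD_FORM = {"haemoglobin", "glucose", "creatinine", "ESR"}

    def validate_order(test: str, justification: str = "") -> bool:
        """Accept an order if the test is on the form or justified in free text."""
        if test in STANDARD_FORM:
            return True
        if justification.strip():
            return True  # off-form test: accepted, but the doctor had to justify it
        raise ValueError(f"{test!r} is not on the standard form; "
                         f"please give a clinical justification.")

    validate_order("glucose")                                # on the form
    validate_order("ferritin", "suspected iron deficiency")  # justified off-form order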

A range of interventions, such as audit, feedback, peer review, and computer reminders, combine providing information with monitoring the clinician's performance. Investigations the clinician has ordered are reviewed and discussed by expert peers, audit panels, or computerised systems. There is huge variation in what is reviewed and discussed, in how often and whose performance is scrutinised, and in how the findings are presented.

An audit represents systematic monitoring of specific aspects of care; it is somewhat formal, being set up and organised by national colleges and regional committees.7 Feedback resembles audit, although it is less formal and its development is often dependent on the spontaneous initiative of local bodies or even individuals. In peer review, performance is reviewed by expert colleagues. It is used not only to improve aspects of patient care but also to improve organisational aspects (practice management).

Audit and feedback are among the most frequently employed strategies, but the available reviews do not reach a common conclusion. Highly successful trials, such as one providing nine years of feedback on the rationality of tests, have been published, but so have interventions with no effect, such as studies of feedback on the costs of tests ordered.8-10 Nevertheless, there is evidence that feedback is effective under specific conditions—for example, when the information provided is directly useful in daily practice, when doctors are addressed personally, and when they accept the expert peer giving it.

Computer reminders are becoming more popular, probably because of the increasing use of computers in health care. Immediate computer reminders try to influence the behaviour of individual doctors directly, with less emphasis on monitoring performance. “Anonymous” computer reminder systems may seem less threatening, since their feedback need not be seen by anyone but the user. They seem to be a potentially effective method requiring relatively little effort, and although their effects in reducing unnecessary tests are variable, they seem promising for improving adherence to guidelines.11 The number of studies of computer reminders is still small, but interventions of this type are likely to increase in the future.
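
As an illustration only, the sketch below shows the kind of rule such a reminder system might apply: flagging a test that is reordered sooner than a locally agreed minimum interval. The tests, intervals, and wording here are invented; real systems encode locally agreed guidelines and are considerably more sophisticated:

    from datetime import date, timedelta
    from typing import Optional

    # Hypothetical minimum intervals between repeat tests (illustrative only;
    # a real reminder system would encode locally agreed guidelines).
    MIN_INTERVAL = {
        "glycated haemoglobin": timedelta(days=90),
        "thyroid function": timedelta(days=365),
    }

    def reminder(test: str, today: date, last_ordered: Optional[date]) -> Optional[str]:
        """Return an on-screen reminder if the test is being reordered too soon."""
        interval = MIN_INTERVAL.get(test)
        if interval and last_ordered and today - last_ordered < interval:
            return (f"{test} was last ordered on {last_ordered:%d %b %Y}; "
                    f"the guideline suggests an interval of {interval.days} days.")
        return None  # no reminder: the request goes through silently

    print(reminder("glycated haemoglobin", date(2002, 3, 30), date(2002, 2, 1)))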

It is clear that two common implementation strategies have little or no effect on ordering of investigations. For many years we have put much effort into continuing medical education (CME) and into writing books, clinical journals, and protocol manuals. Although such written material is partly meant to disseminate research findings and increase scientific knowledge, it is also meant to improve clinical competence, though whether any improvement is reflected in clinical practice is another matter. The effectiveness of these methods has been shown to be disappointing.12,13

The effects of interventions are therefore by no means assured. To discriminate between successful and unsuccessful interventions we need evidence. Yet after several decades, many studies, and a large number of reviews of implementation strategies, many questions remain and no final conclusion can be drawn. Differences in interventions, settings, environments, and many other factors impair comparability. Moreover, in a dynamic environment such as the medical profession, interventions and their effects will inevitably vary over time, so there will always be a need for evaluations.

Owing to their complexity, studies of implementation strategies are difficult to evaluate, and we tend to sacrifice scientific principles in the process. The quality criteria required are no different from those for other evaluations.14 The randomised controlled trial remains the “gold standard,” but some aspects need special attention.15 A striking example: in most studies on improving behaviour it is the doctor we are trying to influence, so the unit of randomisation, and hence the unit of analysis, is the individual doctor. The number of participating doctors is often limited, however, which may reduce the power of the study. Here cluster randomisation and multilevel analysis may offer a solution.16
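
The loss of power that comes from randomising doctors rather than patients can be quantified with the standard design effect for cluster randomised trials, 1 + (m - 1)ρ, where m is the average number of patients per doctor and ρ is the intracluster correlation coefficient. A minimal sketch, with invented numbers for illustration:

    def design_effect(cluster_size: float, icc: float) -> float:
        """Standard design effect for a cluster randomised trial."""
        return 1 + (cluster_size - 1) * icc

    # Invented numbers: 20 doctors contributing 50 patients each, with a
    # modest intracluster correlation of 0.05.
    n_patients = 20 * 50
    de = design_effect(cluster_size=50, icc=0.05)
    print(f"Design effect: {de:.2f}")                       # 3.45
    print(f"Effective sample size: {n_patients / de:.0f}")  # about 290 of 1000

In this example the 1000 patients carry only as much statistical information as about 290 independently randomised patients, which is why the clustering must be taken into account in both design and analysis.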

Perpetuation and cost effective implementation

More attention should be paid to the perpetuation of interventions once they have been started, as their long term effects are often unclear. In most studies the intervention is short, and continuing effects after it has ended are usually not evaluated. Tierney is an exception: he continued observations after ending his intervention, the use of computer reminders to influence test ordering, and found that the effects had disappeared six months after the reminders were stopped.17 On the other hand, Winkens found that feedback remained effective when continued over a nine year period.8

Should strategies be continued once they are started? Implementation strategies that are effective with the least effort and at the lowest cost are to be preferred. We may also question whether strategies that have not proved effective should be continued. Should we keep putting effort into continuing medical education, especially into “one off” training courses or lectures with no follow up? Whom should we try to reach with scientific and didactic papers: clinicians in daily practice, or only scientists and policy makers with special interests? Should we choose the most effective intervention method regardless of the effort and cost it requires? If we start an implementation strategy to change test ordering, must we continue it for years? There are no clear answers to these questions, although some published reviews argue in favour of combined, tailor-made interventions. How such a combination is composed depends on local needs, the availability of experts, and many other factors. General recommendations for specific combinations are not possible, but if we look at costs in the long term, computer interventions look promising.

From evidence to practice

An important objective in changing the ordering of investigations is to achieve more rational and lower use, thereby reducing costs or achieving a better cost-benefit ratio. The ultimate goal is to improve the quality of care for the individual patient, but effects on health status and final outcomes for individual patients are difficult to assess. On the other hand, reduced use of unnecessary and inappropriate tests is unlikely to harm the patient.

Despite the increasing evidence that changes in the ordering of investigations are necessary, when it comes to individual patients the doctor's decision whether to investigate will always involve more than just scientific evidence.18 The low diagnostic accuracy or high cost of a test may conflict with the patient's explicit wish to have it ordered, or with the doctor's inclination to procrastinate by ordering tests, whether from fear of missing an important diagnosis or from insecurity and a desire to have an opinion backed up by a positive test result. These dilemmas are influenced by many factors related to both doctor and patient. For the doctor, one important factor is a previous failure to diagnose relevant disease. Patients may have a chronic disease and question the skills of a doctor who cannot cure it, or they may have recurrent vague or unexplained complaints that doctors are tempted to over-investigate. Adequate patient education may offer a solution: patients should be told that not all tests give reliable results and that the value of investigations, especially in primary care, is sometimes limited. But this requires, first of all, that doctors understand the principles of medical decision making and their relevance to daily practice.

Footnotes

Series editor: J A Knottnerus

This is the last in a series of five articles

  Funding: None declared.

The Evidence Base of Clinical Diagnosis, edited by J A Knottnerus, can be purchased through the BMJ Bookshop (www.bmjbookshop.com)

References

1. Grol RPTM. Beliefs and evidence in changing clinical practice. BMJ 1997;315:418-421. doi:10.1136/bmj.315.7105.418. (Figure at http://bmj.com/cgi/content/full/315/7105/418/F1, accessed 25 Oct 2001.)
2. Geijer RMM, Burgers JS, Van der Laan JR, Wiersma T, Rosmalen CFH, Thomas S. NHG-Standaarden voor de huisarts I. 2nd ed. Utrecht: Bunge, 1999.
3. Thomas S, Geijer RMM, Van der Laan JR, Wiersma T. NHG-Standaarden voor de huisarts II. Utrecht: Bunge, 1996.
4. Dinant GJ, Van Wijk MAM, Janssens HJEM, Somford RG, de Jager CJ, Beusmans GHMI, et al. NHG-Standaard Bloedonderzoek. Huisarts Wet 1994;37:202-211.
5. Zaat JO, van Eijk JT, Bonte HA. Laboratory test form design influences test ordering by general practitioners in the Netherlands. Med Care 1992;30:189-198. doi:10.1097/00005650-199203000-00001.
6. Smithuis LOMJ, van Geldrop WJ, Lucassen PLBJ. Beperking van het laboratorium-onderzoek door een probleemgeorienteerd aanvraagformulier. Huisarts Wet 1994;37:464-466. (Abstract in English.)
7. Smith R. Audit in action. London: BMJ Publishers, 1992.
8. Winkens RAG, Pop P, Grol RPTM, Bugter AMA, Kester ADM, Beusmans GHMI, et al. Effects of routine individual feedback over nine years on general practitioners' requests for tests. BMJ 1996;312:490. doi:10.1136/bmj.312.7029.490.
9. Wones RG. Failure of low-cost audits with feedback to reduce laboratory test utilization. Med Care 1987;25:78-82. doi:10.1097/00005650-198701000-00009.
10. Everett GD, de Blois CS, Chang PF, Holets T. Effects of cost education, cost audits, and faculty chart review on the use of laboratory services. Arch Intern Med 1983;143:942-944.
11. Buntinx F, Winkens RAG, Grol RPTM, Knottnerus JA. Influencing diagnostic and preventive performance in ambulatory care by feedback and reminders: a review. Fam Pract 1993;10:219-228. doi:10.1093/fampra/10.2.219.
12. Davis D, O'Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA 1999;282:867-874. doi:10.1001/jama.282.9.867.
13. Van der Weijden T, Wensing M, te Giffel M, Grol RPTM, Winkens RAG, Buntinx F, et al. Interventions aimed at influencing the use of diagnostic tests. Cochrane Database Syst Rev (in press).
14. Pocock SJ. Clinical trials: a practical approach. Chichester: John Wiley & Sons, 1991.
15. Winkens RAG, Knottnerus JA, Kester ADM, Grol RPTM, Pop P. Fitting a routine health-care activity into a randomized trial: an experiment possible without informed consent? J Clin Epidemiol 1997;50:435-439. doi:10.1016/s0895-4356(96)00422-2.
16. Campbell MK, Mollison J, Steen N, Grimshaw JM, Eccles M. Analysis of cluster randomized trials in primary care: a practical approach. Fam Pract 2000;17:192-196. doi:10.1093/fampra/17.2.192.
17. Tierney WM, Miller ME, McDonald CJ. The effect on test ordering of informing physicians of the charges for outpatient diagnostic tests. N Engl J Med 1990;322:1499-1504. doi:10.1056/NEJM199005243222105.
18. Knottnerus JA, Dinant GJ. Medicine-based evidence, a prerequisite for evidence-based medicine. BMJ 1997;315:1109-1110. doi:10.1136/bmj.315.7116.1109.
