A descriptive feast but an evaluative famine - A systematic review of published papers on primary care computing 1980-1997

Review of Paper
 

Importance

Whether or not computers can make a real difference to the quality of primary care is a crucial question both for clinicians, who currently spend a substantial proportion of their time using computers, and for those working in management, including primary care groups, who are investing money and staff time in information technology. The government has pledged £2M to research the impact of computers in the NHS (1). This paper will help researchers in primary care avoid covering old ground, and it points out the areas that have so far been neglected.

Originality

I know of no comprehensive review of primary care computing other than the authors' previous paper. The update adds considerably to the paper's usefulness in a fast-developing field, and the review is substantially extended in scope.

Method

The databases searched were appropriate, and I was pleased to see that the breadth of the search included enquiries to authors active in the field. This balances the advantages of electronic searching, which can encompass huge numbers of articles, with the ability of researchers in the field to identify the really important ones. Non-English journals were included too. The scoring system (US 1994) is simple and appropriate to this review. I am less convinced by the method used to score non-experimental studies. Although the Delphi survey may be established (the reference given is from 1974), the 'experts' involved could be biased positively towards the use of computers, and ordinary clinicians might see more of the problems with the studies. I found Table II unhelpful: before I could attempt to repeat the authors' method of review I would need to look up the references anyway, so I would either omit the table or improve it to give a better idea of how a score is reached. Furthermore, I am not sure how the two methods interact. Are the overall scores listed in Table III a combination of the two, or is any one article scored by only one method?

Results

It might be better to state in the methods section that editorials, studies of dentistry or veterinary medicine, and duplicates were excluded, rather than including 3583 (65%) of them in the results. Strictly speaking dentistry is part of primary care and should have been included to justify the paper's title, but I can see the authors would not wish to broaden the scope of the review so much that it became unfocused.

From 1892 abstracts, 214 were selected for further evaluation by the criteria used in the authors' previous paper, and 89 were finally included in the review. Given such a reduction, I think it would be helpful to include a brief summary of these criteria in this review.

Are the horizontal lines in Table IV significant? If so, it's not obvious why.

The descriptive part of the results section (pages 6 and 7) is particularly well done: it summarises interesting points from the tables. However, some important studies are missing from the section at the bottom of page 7 describing effects on the prescribing of generic drugs and the reduction in costs (2,3,4,5). Two of these (2,3) might have just missed the time slot of this review if only part of 1997 was included, but I would have thought they should have passed the selection criteria, and they give a much more up-to-date view than the references that have been quoted.

Page 9 explains that patients may be concerned about the confidentiality of computerised records: "Some were unwilling to be completely frank about their problems and would consider changing doctor." This gives a different impression from Table V ("Majority had positive attitude towards computerisation"). Only 6.7% would consider changing doctor, and that was in 1981, when computers were much less a part of everyday life. I therefore think the results section is misleading on this issue.

Discussion

While I agree with the first paragraph about the dearth of evaluations, particularly of patient outcomes, it could be said that this review only detects studies in primary care. Outcome benefits to patient care from computer programs shown in other settings, such as warfarin dosing or glycaemic control, might be telling us that it is the programs that are helpful, and that the setting in which the computer is used may not matter.

The discussion on the practical implications of this field of research is good.

I am not sure how few is meant by the point on page 10, "Few studies have dealt with nursing...". I enclose 67 references related to this topic (7). There are of course fewer than for the applications of computers in medicine as a whole, but not really that few.

My major concern with the whole review is that anyone who reads it with less than full attention might get the impression that there is evidence that computerised records are more accurate than written ones. The subject occurs on pages 8 and 10. To be fair, the reviewers do say that accuracy is only thought to be better; that this is opinion, not fact. The problem is that this opinion, which the reviewers take from a questionnaire study done in Australia, is almost certainly wrong (8). Computer records are often checked for accuracy against the written records, which act as the gold standard.

The greatest bar to accuracy is computer coding, which is both restrictive and over-simplistic compared with a real language such as English. When an exact code is not available, users are forced to substitute the nearest one they can find (9). Prof. Ronald Mann has withdrawn the suggestion that GPs can reply to the national Green Card new drug surveillance scheme using computer printouts, because new drugs often have no code, so the incidence of side-effects reported according to computer records is quite misleading. Defining codes is another problem (10). Hierarchical classifications are known to create difficulties when concepts of the underlying pathology change: who would have thought, 20 years ago, that peptic ulcer could be an infective disease? I feel that if the review is to comment on accuracy then the evidence should be included as well as the opinion. Difficulties with coding threaten to wreck the value of clinical data, and research to see whether computers could cope by smart searches on real language is desperately needed.
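The forced-substitution problem can be sketched in a few lines. This is a deliberately miniature, hypothetical coding hierarchy of my own invention (not any real Read code table), showing how a concept without an exact code collapses to its broader parent and the clinical detail is lost:

```python
# Hypothetical miniature hierarchical coding table (illustrative only;
# the codes and concepts here are invented, not real clinical codes).
CODES = {
    "J1":   "digestive system disease",
    "J1.1": "peptic ulcer",
    "J1.2": "gastritis",
}

def nearest_code(term: str) -> str:
    """Return the exact code for a term if one exists; otherwise fall
    back to the broad parent code, losing specificity in the process."""
    for code, concept in CODES.items():
        if concept == term:
            return code
    # No exact match: the user is forced to substitute the nearest
    # (broader) code they can find.
    return "J1"

# An established concept has an exact code:
assert nearest_code("peptic ulcer") == "J1.1"
# A new concept has no code yet, so it is recorded under the broad
# parent and later searches for it on the computer record find nothing:
assert nearest_code("Helicobacter-associated ulcer") == "J1"
```

The same collapse is why a search of computer records for a newly introduced drug or disease concept can return a misleadingly low incidence: the events were recorded, but only under a code too broad to retrieve them.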

The very last character of the paper is a reference "1", which I think is a misprint.

The authors' note to me about the numbering of the references will need to be explained in the paper for readers. Would it be possible to include with each reference a number identifying where it appears in the tables? Otherwise a referenced comment in the body of the text cannot easily be linked to the comments in the tables.

Readership

This important review is of wide interest. Although it restricts its scope to primary care, there are important messages for other sectors too. It would fit well into a general medical journal.
 

1. Information for Health: An Information Strategy for the Modern NHS 1998–2005. A national strategy for local implementation. http://www.doh.gov.uk/nhsexipu/strategy/full/index.htm. Paragraphs 1.23, 3.67, 6.38-6.43.

2. Walton RT, Gierl C, Yudkin P, Mistry H, Vessey MP, Fox J. Evaluation of computer support for prescribing (CAPSULE) using simulated cases. British Medical Journal 1997; 315(7111): 791-795.

Objective: To evaluate the potential effect of computer support on general practitioners' prescribing, and to compare the effectiveness of three different support levels. Design: Crossover experiment with balanced block design. Subjects: Random sample of 50 general practitioners (42 agreed to participate) from 165 in a geographically defined area of Oxfordshire.

Interventions: Doctors prescribed for 36 simulated cases constructed from real consultations. Levels of computer support were control (alphabetical list of drugs), limited support (list of preferred drugs), and full support (the same list with explanations available for suggestions).

Main outcome measures: Percentage of cases where doctors ignored a cheaper, equally effective drug; prescribing score (a measure of how closely prescriptions matched expert recommendations); interview to elicit doctors' views of support system.

Results: Computer support significantly improved the quality of prescribing. Doctors ignored a cheaper, equally effective drug in a median of 50% (range 25%-75%) of control cases, compared with 36% (8%-67%) with limited support and 35% (0-67%) with full support (P < 0.001). The median prescribing score rose from 6.0 units (4.2-7.0) with control support to 6.8 (5.8-7.7) and 6.7 (5.6-7.8) with limited and full support (P < 0.001). Of 41 doctors, 36 (88%) found the system easy to use and 24 (59%) said they would be likely to use it in practice.

Conclusions: Computer support improved compliance with prescribing guidelines, reducing the occasions when doctors ignored a cheaper, equally effective drug. The system was easy to operate, and most participating doctors would be likely to use it in practice. [References: 20]

3. Vedsted P, Nielsen JN, Olesen F. Does a computerized price comparison module reduce prescribing costs in general practice? Family Practice 1997; 14(3): 199-203.

Objective: We aimed to assess the trends in prescribed defined daily doses (DDD) and drug expenses before and after the introduction of a computerized cost containment module into the computer record system of a defined group of GPs. The GPs' expectations for and experiences with the module were examined.

Method: We performed a controlled follow-up study on antecedent data before and after intervention. A questionnaire was administered to the intervention group at the introduction and 1 year later. Data on prescribing were collected in the database of the Health Insurance Aarhus County, as a normal routine for accounting. The GPs were not aware of the ongoing cost supervision study. Additional cost information software was introduced on 1 January 1993 to 20 practices with 28 GPs. The software assisted the GPs in a semiautomatic way to identify and prescribe the cheapest drugs. The subjects comprised 158 practices including 231 GPs in Aarhus County, Denmark. Questionnaires were sent to the 20 intervention practices. The main outcome measures were prescribed DDD, reimbursement for prescribed drugs, and reimbursement per prescribed DDD quarterly during 1992 and 1993.

Results: Compared with the controls, there were no changes in prescribed DDD, reimbursement for prescribed drugs, or reimbursement per prescribed DDD in the intervention group after the introduction of the module.

Conclusion: Simply giving a random group of GPs computer assistance to choose less expensive drugs did not reduce expenditure per DDD. Cost containment procedures should be more intensive than just giving the doctors a computer-assisted decision aid. [References: 16]

4. Wyatt J, Walton R. Computer based prescribing: improves decision making and reduces costs. British Medical Journal 1995; 311(7014): 1181-1182.

5. Difford F. Reducing prescribing costs through computer controlled repeat prescribing. Journal of the Royal College of General Practitioners 1984; 34(269): 658-60.

6. Basden A, Clark EM. Data integrity in a general practice computer system (CLINICS). International Journal of Bio-Medical Computing 1980; 11(6): 511-9.

The accuracy of computer held medical information may be of critical importance in patient care, therefore it is important not only to know the error rate in the stored data but also to know the effectiveness of error checking and detection programmes. This paper reports on the errors which were detected in the University of Southampton Primary Medical Care computer system (CLINICS) by checking the consistency between stored data and incoming data. Seven per cent of incoming data had important errors of kinds not normally detected by many medical record systems. The majority were traced either to the registration of new patients or to the doctors failing to pay adequate attention to detail in their record keeping (or to their legibility). They have been subsequently corrected, and it is calculated that the stored data contains less than 1% errors. We suggest ways of improving this; and conclude that certain items are essential to general practice information systems.

7. A set of references in Medlars format, enclosed as the file nurse computer.htm.


8. Basden A, Clark EM. Data integrity in a general practice computer system (CLINICS). International Journal of Bio-Medical Computing 1980; 11(6): 511-9.

9. Chan LS, Schonfield N. How much information is lost during processing? A case study of paediatric emergency department records. Computers and Biomedical Research 1993; 26: 582-91.

10. McKee M, Dixon J, Chenet L. Making routine data adequate to support clinical audit. British Medical Journal 1994; 309(6964): 1246-7.

11. Newrick DC, Spencer JA, Jones KP. Collecting data in general practice: need for standardisation. British Medical Journal 1996; 312: 33-34.

Ian Hill-Smith