Journal of the American Medical Informatics Association (JAMIA). 2002 Jul–Aug;9(4):395–401. doi: 10.1197/jamia.M1023

Does Feedback Improve the Quality of Computerized Medical Records in Primary Care?

Simon de Lusignan 1, Peter N Stephens 1, Naeema Adal 1, Azeem Majeed 1
PMCID: PMC346626  PMID: 12087120

Abstract

Objective: The MediPlus database collects anonymized information from general practice computer systems in the United Kingdom for research purposes. Data quality markers are collated and fed back to the participating general practitioners. The authors examined whether this feedback had a significant effect on data quality.

Methods: The data quality markers used since 1992 were examined. The authors determined whether the feedback of “useful” data quality markers led to a statistically significant improvement in these markers. Environmental influences on data quality from outside the scheme were controlled for by examination of the data quality scores of new entrants.

Results: Three quality markers improved significantly over the period of the study. These were the use of highly specific “lower-level” Read Codes (p=0.004) and the linkage of repeat prescriptions (p=0.03) and acute prescriptions (p=0.04) to diagnosis. Clinicians who fall below the target level for linkage of repeat prescriptions to diagnosis receive more detailed feedback; the effect of this was also statistically significant (p<0.01).

Conclusions: The feedback of four of the ten markers had a significant effect on data quality. More detailed feedback appears to have had a greater effect. The lessons learned from this approach may help improve the quality of electronic medical records in the United Kingdom and elsewhere.


Although the majority of NHS (National Health Service) general practitioners in the United Kingdom are now computerized1 and the computer systems they use can record structured data (Read Coded2; see box on next page), high-quality coding of clinical data is not yet universal.3–5 There are a number of reasons for this. Until recently, general practitioners were required to keep written as well as computerized medical records.6 Using computers in primary care also results in longer consultations.7,8 Despite these obstacles, an increasing amount of clinical data is now being recorded electronically.9

Many recent NHS policy documents have promoted the use of computerized records. These include the NHS information strategy,10 the National Service Frameworks,11 and the NHS Plan.12 The more recent “Building the Information Core” document,13 from the NHS Information Policy Unit, provides the most up-to-date milestones. A key target is that half the primary care trusts will have implemented electronic patient records by 2004. To improve the usefulness and accuracy of these electronic records, primary care trusts will need to implement programs that improve data quality. Evaluation of such interventions is lacking, however.14

Until now, only clinicians who volunteered have been part of data quality schemes and received feedback on the quality of their coding. There is no clear evidence about whether such feedback can improve data quality. For example, the effectiveness of PCO (primary care organization)-wide feedback on data quality has yet to be shown by the Primary Care Information Services (Primis) project,15 and only limited published data from three PCOs have come out of the Primary Care Data Quality (PCDQ) program.5

We examined the feedback of data quality markers within the MediPlus database to see whether this led to a more rapid improvement in data quality than that generally occurring in primary care. We hoped that the experience gained from data quality feedback over an 8-yr period could be applied more generally to raising the standards of computerized medical records in primary care.

Methods

Data Source: The MediPlus Database

The MediPlus database was established in 1992; it contains information on almost two million patients and more than 53 million prescriptions.16 The database is based on information drawn from more than 500 representative general practitioners across the United Kingdom using the Torex-Meditel System 5 computer package.17 This computer system allows the linkage of diagnosis or problem title to the acute or repeat (long-term) prescriptions issued to patients. This makes clearer the diagnoses for which prescriptions are being issued. This is particularly useful among groups such as the elderly, who often suffer from several chronic diseases.

Data quality markers are used to ensure that only doctors supplying data that reach specified quality standards are included in the database used by researchers. In total, ten data quality scores are used. These are calculated at the individual doctor level and fed back to the participating practices quarterly. Newsletters are also sent every six months, addressing coding issues highlighted by a panel of expert general practitioners. Doctors are given a small incentive (about £400 per doctor per year) to reach the target levels across the ten quality scores used (Table 1).

Table 1 .

Data Quality Markers Used by the MediPlus Database

Quality Marker Reason for Its Inclusion Weakness as a Marker
1. Percentage of registered patients for whom there has been a change in the record over the previous 12 mo An indication that the system is being used routinely Less useful since Health Authority registers and practice databases are linked (GP-Links Project)
2. Percentage of patients with year of birth and sex recorded Ensures that researchers can analyze disease by age and sex of patient Less useful since Health Authority registers and practice databases are linked (GP-Links Project)
3. Percentage of problems or diagnoses with Read Code of level 3 or lower Lower-order codes represent more specific diagnoses. Some high-order codes contain negatives within the lower orders. May contradict primary care group choices about coding data.
4. Percentage of notes linked to problem or diagnosis Linkage of notes increasingly important for analysis of test results Electronic transmission of test results cannot be linked to a problem but only pasted into the patient's notes.
5. Percentage of notes in which Read Code is level 3 or lower As marker 3 May contradict Primary Care Group choices about coding data.
6. Number of prescriptions issued per week per 1000 registered patients This is a crude measure of how much prescribing is not being computerized. Also looking for abnormalities in trends over time that would allow detection of missing data Duration of repeat prescription interval can seriously affect this marker, e.g., a practice that begins to issue repeat prescription for 3-mo intervals, as opposed to 1- or 6-mo intervals, would see radical changes.
7. Complete dose and regimen details related to dose-effect or ADR Important for prescribing analyses Once acute prescribing is computerized, it does not differentiate between practices.
8. Proportion of acute prescriptions issued linked to a problem title or diagnosis A key function of the database is to show what the prescribing behavior of general practitioners is. Office automation may drive this process. Once these are being issued, specifically targeting home visits may be more important.
9. Proportion of repeat prescriptions linked to a problem title or diagnosis A key function of the database is to show what the prescribing behavior of general practitioners is. When practices newly join the scheme, doing this linkage de novo may lead to inaccuracies.
10. Ratio of acute prescriptions issued to chronic prescriptions Checks for consistent usage See comments for 5, 6, and 7 above.

Literature Review

We carried out a literature review to establish whether there was a consensus on what data quality parameters should be fed back. PubMed (National Library of Medicine) was searched using “Data Quality” and “General Practice” as search terms. This identified 848 abstracts, each of which was examined to identify articles relating to either the membership of a data quality scheme or the effectiveness of feedback of clinical markers. There were many descriptions of the potential,18,19 the need for,20–23 or the actual use of data quality feedback,24–26 but no evidence about what elements should be fed back and in what way. Where feedback techniques and audit have been used, they have focused on clinical outcomes rather than on changes in data quality.27–29 Individual feedback appears to be better than group feedback,30 and feedback from a peer-group in general practice appears to be more effective than feedback from non-clinicians.31 Feedback focused on a particular clinical area also seems to be more effective than generalized feedback.32

Data Analysis

If feedback had a positive effect on quality, then the longer a general practitioner had been in the scheme, the higher would be their score.

Calculating Whether Data Quality Is Related to Time

General practitioners contributing data in the first quarter of year 2000 were placed into groups representing the year in which they joined. The mean scores on each quality marker for each group in the first quarter of 2000 were then calculated and regression analysis was used to determine whether length of time in the scheme affected quality scores and, if so, in which areas.
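This step amounts to an ordinary least-squares fit of group mean scores against years in the scheme. A minimal sketch follows, using the Table 2 group means for one marker ("problems with Read Code of level 3 or lower") as an example; the paper does not state whether the small 2000 cohort was included in the published fit, so the R² here may differ slightly from the 78.2 reported in Table 3.

```python
def linreg(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (slope, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sxx, sxy * sxy / (sxx * syy)

# Years in the scheme as of Q1 2000 (join years 1991-92 through 1999) and the
# corresponding group mean scores for "problems with Read Code of level 3 or
# lower" from Table 2.
years_in_scheme = [8, 7, 6, 5, 4, 3, 2, 1]
mean_scores = [87.3, 84.1, 81.9, 87.8, 74.9, 74.5, 72.4, 71.2]

slope, r2 = linreg(years_in_scheme, mean_scores)
print(f"slope = {slope:.2f} percentage points per year, R^2 = {r2:.2f}")
```

A positive slope indicates higher scores for longer-serving general practitioners, and the R² from these eight group means comes out close to the value reported in Table 3.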

Excluding the Effect of External Environmental Factors

Improvements in the quality marker scores may have been due to various NHS initiatives, such as Collection of Health Data from General Practice.33 If general practitioners were improving “naturally,” this would be reflected in an increase in the starting scores of general practitioners who joined the scheme over time. General practitioners were therefore grouped according to year of joining, their starting scores on each marker extracted, and regression analysis on the means for each group used to see whether their starting scores improved over time on any marker.

Excluding the Effect of Differential General Practitioner Drop Out

Results may be biased by a greater proportion of the poorly performing doctors dropping out during the early years of the scheme. To check this, doctors were first grouped according to the length of time they had spent in the scheme, regardless of start date. For example, two general practitioners who started in 1992 and in 1994 but who remained in the scheme for three years would be placed in the same group. The difference in each general practitioner's first and last scores was calculated and from these the mean scores were found for each group and marker. Regression analysis was again used to determine whether time in scheme affected data quality.
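The grouping-and-differencing step described above can be sketched as follows. The per-practitioner records here are invented for illustration, since the study's individual-level data are not published.

```python
from collections import defaultdict

# Hypothetical records: (years spent in scheme, first score, last score) for
# one quality marker. Real per-GP values are not published in the paper.
records = [
    (3, 70.0, 78.0), (3, 65.0, 74.0),  # two GPs with 3 years in the scheme
    (5, 68.0, 82.0), (5, 72.0, 85.0),  # two GPs with 5 years
    (1, 80.0, 81.0),                   # one GP with 1 year
]

# Group by time in scheme, regardless of start date, and take the mean of
# each GP's (last - first) change in score.
changes = defaultdict(list)
for years, first, last in records:
    changes[years].append(last - first)
mean_change = {years: sum(d) / len(d) for years, d in sorted(changes.items())}
print(mean_change)
```

Regressing these group means on years in the scheme then tests whether time in the scheme, rather than differential drop-out of poorly performing doctors, accounts for the improvement.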

Effect of Specific Feedback to Those with a Below-average Score for Linkage of Diagnosis and Prescription

One particular form of feedback was also investigated: an initiative to improve the linkage of diagnosis to prescription. Its effectiveness was assessed by comparing the mean score in the second quarter of 1999 for all the general practitioners who received the report with their mean score in the first quarter of 2000.

Results

Three Markers Show Improvement with Time

The quality markers showing a significant improvement with time at the 5 percent level were:

  • Percentage of acute prescriptions linked to a diagnosis

  • Percentage of repeat prescriptions linked to a diagnosis

  • Percentage of problems defined by a Read Code of level 3 or lower.

The mean starting scores and the results of the regression analysis are shown in Tables 2 and 3.

Table 2 .

Mean Score for Each Quality Marker, by Year in Which General Practitioners Joined MediPlus Database

Year of Joining Scheme
1991–92 1993 1994 1995 1996 1997 1998 1999 2000
No. of general practitioners joining 85 134 20 25 43 65 63 132 9
Data quality markers:
    Active patients seen in last 12 months (%) 88.4 87.7 87.8 84.2 85.7 89.5 90.6 89.8 77.4
    Year of birth and sex recorded (%) 100 100 100 96.0 100 100 100 96.2 77.8
    No. of prescriptions per 1,000 patients 187 189 171 192 160 202 207 190 350
    Notes linked to diagnosis (%) 83.4 86.0 92.2 78.4 86.3 76.5 76.8 86.5 87.3
    Notes in which Read Code is level 3 or lower (%) 72.0 72.0 76.4 68.7 66.5 65.6 55.2 69.6 77.2
    Acute prescriptions linked to diagnosis (%) 96.5 96.4 95.3 94.4 93.0 86.8 95.1 85.5 93.8
    Repeat prescriptions linked to diagnosis (%) 97.3 97.0 95.6 97.4 92.1 83.6 93.2 85.1 95.1
    Problems with Read Code of level 3 or lower (%) 87.3 84.1 81.9 87.8 74.9 74.5 72.4 71.2 83.8
    Dose detail recorded (%) 96.1 95.8 96.4 95.6 96.2 96.4 95.8 95.8 95.8
    Ratio of repeat to acute prescriptions 4.3 4.1 4.4 3.6 4.2 4.9 3.9 5.3 2.7

Table 3 .

Change in Data Quality Markers, and Their Regression Scores, Ranked in Order of Significance

Direction R2 p Value
Significant change:
    Percentage of problems with Read Code of level 3 or lower Better 78.2 0.004
    Percentage of repeat prescriptions linked to diagnosis Better 58.7 0.03
    Percentage acute prescriptions linked to diagnosis Better 54.7 0.04
No significant change:
    Percentage of notes in which Read Code is level 3 or lower Better 38.8 0.10
    Ratio of repeat to acute prescriptions Better 17.2 0.31
    Percentage active patients seen in last 12 months Better 15.9 0.33
    Percentage of patients with year of birth and sex recorded Worse 13.1 0.38
    Percentage of notes linked to diagnosis Better 10.5 0.43
    No. of prescriptions per 1,000 patients Worse 9.6 0.46
    Percentage with dose details Worse 2.9 0.69

External Environmental Factors Do Not Explain Improvement

None of the markers that improved over time showed any evidence that external factors had played a part. Table 4 shows the results of the regression analysis on the general practitioners' starting scores. The only marker showing any significant improvement in starting score with time was the number of prescriptions issued per 1,000 registered patients (p<0.05).

Table 4 .

The Influence of External Factors on Data Quality, Ranked in Order of Significance

R2 p Value
Significant change:
    No. of prescription items per 1,000 patients 80.6 0.002
No significant change:
    Percentage of notes in which Read Code is level 3 or lower 26.1 0.20
    Percentage of acute prescriptions linked to diagnosis 22.5 0.24
    Percentage of notes linked to diagnosis 18.2 0.29
    Percentage of repeat prescriptions linked to diagnosis 14.4 0.35
    Ratio of repeat to acute prescriptions 10.4 0.44
    Percentage active patients seen in last 12 months 7.8 0.50
    Percentage of patients with year of birth and sex recorded 3.4 0.66
    Percentage with dose details 1.8 0.75
    Percentage of problems with Read Code of level 3 or lower 0.5 0.86

Differential General Practitioner Drop Out Cannot Explain Improvement

It was possible to do this analysis for two data quality markers—the percentage of repeat prescriptions and the percentage of acute prescriptions linked to a diagnosis. The repeat prescription linkage showed significant improvement with time in the scheme, regardless of the general practitioner start date (p<0.05).

Acute prescription linkage did not show any such trend. However, whether time spent in the scheme was short or long, acute prescription linkage improved (range, 2.5–15.1 percent). These two analyses suggest that the improvements seen in coding quality were not due to differences in the rate at which poorly performing general practitioners dropped out of the scheme.

Significant Effect of Specific Feedback for Linkage of Diagnosis and Prescription

The feedback of the detailed repeat prescribing reports had a significant effect on the percentage of repeat prescriptions linked to diagnosis. This increased from 64.6 percent in the second quarter of 1999 to 79.5 percent in the first quarter of 2000 (difference, 14.9 percent; 95% confidence interval, 3.7–26.1 percent; p=0.005).
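A difference of paired means with a 95% confidence interval of this kind can be computed as sketched below. The paired scores are invented for illustration (the study's per-practitioner values are not published), and the critical value assumes a Student's t distribution with n-1 degrees of freedom.

```python
import math

# Hypothetical paired linkage scores (Q2 1999 vs Q1 2000) for six GPs;
# illustrative only, not the study's data.
before = [60.0, 55.0, 70.0, 62.0, 68.0, 58.0]
after = [78.0, 72.0, 80.0, 79.0, 81.0, 77.0]

diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
mean_d = sum(diffs) / n
# Sample standard deviation of the paired differences, then standard error.
sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
se = sd / math.sqrt(n)
t_crit = 2.571  # two-sided 95% critical value of Student's t with n-1 = 5 df
ci = (mean_d - t_crit * se, mean_d + t_crit * se)
print(f"mean difference {mean_d:.1f}, 95% CI {ci[0]:.1f} to {ci[1]:.1f}")
```

If the interval excludes zero, as it does both here and in the study's result, the improvement after the detailed reports is statistically significant at the 5 percent level.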

Discussion

The main finding from this study is that the feedback of four of the quality markers improved data quality. All these markers were fed back over a long period; one marker was also fed back over a shorter period, with the specific aim of increasing the linkage of diagnosis or problem to repeat prescriptions. The scheme members were also offered a small financial incentive, but this was dependent on meeting target scores for all markers, not specific ones. To receive this payment, general practitioners would have needed to focus on those markers for which they performed least well.

The results of this study are mixed. Feedback on some markers achieved a significant improvement, while feedback on others did not. Feedback of this nature is not, therefore, in itself an effective mechanism, but it may represent a low-cost tool that can be used alongside others. The explanation for why the short-term feedback was so successful also needs to be explored further.

The findings from this study have potentially important implications for electronic patient records. First, they can inform those seeking mechanisms to improve data quality about the effects of a long period of feedback to a large group of practices. Second, those feeding back data quality indexes may wish to critically examine whether the feedback had any effect on data quality. Third, they indicate the importance of further research to describe the context in which feedback may contribute toward improvement in data quality.

Some potential confounding factors need to be considered. The members of the MediPlus database all use the Meditel computer system and are volunteers. They may have made considerable efforts to raise their data quality standards before joining the scheme. Some general practitioners had data quality markers already at levels over 90 percent before they joined the scheme; for markers with such high scores, it would be difficult to show a statistically significant improvement over time. Finally, some markers have been overtaken in their usefulness by advances in technology. Automated registration links with the Health Authority (GP-Links Project) have almost eliminated patients without full demographic details. Similarly, patients who die or move away are now more likely to be automatically removed from a practitioner's list. In the past, unremoved patients may have artificially inflated the list size, thereby increasing the denominator population used to calculate the data quality scores. Technical solutions like GP-Links have clearly had a major influence on some aspects of general practitioner data.

Further research is needed to ascertain what data quality markers should be fed back and by whom. From the literature review, individual feedback on a narrow clinical focus seems to offer the best approach to feedback of data quality markers. However, this was not the mechanism used for the most successful data quality marker fed back within the MediPlus database. Research is also needed to ascertain whether the personal feedback, the token financial rewards, or some other factor was responsible for this change. The role of practice staff may also need to be carefully examined, as primary care support staff are responsible to varying degrees for issuing repeat prescriptions.

Conclusions

We found that four data quality markers, all relating to the linkage of diagnosis to prescription and the use of more specific Read Codes, improved at a significantly higher rate in MediPlus practices. The personalized feedback to those general practitioners with below-average scores and the token financial incentives may have been important motivating factors and should be tested elsewhere. However, the role of practice support staff and the improvements made to the accuracy of the denominator through the GP-links project show that factors other than coding by clinical staff may have profound effects on data quality. If general practice computer records are to become the cornerstone of the electronic patient and health records promised by the NHS information strategy, research is urgently needed to define how feedback on data quality should be given.


Acknowledgments

Simon de Lusignan heads a Primary Care Informatics Group within the Department of General Practice and Primary Care at St. George's Hospital Medical School. Peter Stephens and Naeema Adal are employees of IMS, which runs the MediPlus database. Azeem Majeed holds a Primary Care Scientist Award and is funded by the NHS Research and Development Directorate. Neither Simon de Lusignan nor Azeem Majeed received any payment for this study, and they have no financial interest in IMS.

References

  • 1. NHS Management Executive. Computerisation in GP Practices, 1993 Survey. London: Department of Health, 1993.
  • 2. NHS Information Authority, Clinical Terminology Service. Read Codes. NHSIA Web site. Available at: http://www.nhsia.nhs.uk/terms/pages/readcodes_intro.asp?om=m1. Accessed Jun 12, 2002.
  • 3. Pringle M, Ward P, Chilvers C. Assessment of the completeness and accuracy of computer medical records in four practices committed to recording data on computer. Br J Gen Pract. 1995;45(399):537–41.
  • 4. Pierry AA. The clinical content of the computerized British General Practice Record. J Inform Primary Care. 1999(June):13–4.
  • 5. de Lusignan S, Hague NJ. The PCDQ (Primary Care Data Quality) Programme. Bandolier/Impact. Jan 2001. Available at: http://www.jr2.ox.ac.uk/Bandolier/booth/mgmt/PCDQ.html. Accessed Jun 12, 2002.
  • 6. Secretary of State for Health (UK). The National Health Service (General Medical Services) Amendment (No. 4) Regulations 2000 of National Health Service Act 1977. London, UK: The Stationery Office, 2000.
  • 7. Pringle M, Robins S, Brown G. Computer-assisted screening: effect on the patient and his consultation. BMJ. 1985;290(6483):1709–12.
  • 8. Pringle M, Robins S, Brown G. Timer: a new objective measure of consultation content and its application to computer assisted consultations. Br Med J (Clin Res Ed). 1986;293(6538):20–2.
  • 9. Thiru K, de Lusignan S, Hague N. Have the completeness and accuracy of computer medical records in general practice improved in the last five years? The report of a two-practice pilot study. Health Inform J. 1999;5(4):233–9.
  • 10. Burns F. Information for Health: An Information Strategy for the Modern NHS, 1998–2005. Leeds, UK: NHS Executive, 1998.
  • 11. Department of Health (UK). National Service Frameworks. Available at: http://www.doh.gov.uk/nsf/about.htm. Accessed Jun 12, 2002.
  • 12. Secretary of State for Health (UK). The NHS Plan: A Plan for Investment, A Plan for Reform. London: The Stationery Office, 2000. Reference 4818-1.
  • 13. National Health Service Executive Information Policy Unit. Building the Information Core: Implementing the NHS Plan. Primary/community EPRs. Available at: http://www.doh.gov.uk/ipu/strategy/update/ch3/3_3_3.htm. Accessed Jun 12, 2002.
  • 14. Mitchell E, Sullivan F. A descriptive feast but an evaluative famine: systematic review of published articles on primary care computing during 1980–97. BMJ. 2001;322(7281):279–82.
  • 15. Primis (Primary Care Information Services) Web site. Available at: http://www.primis.nottingham.ac.uk/Default.htm. Accessed Jun 12, 2002.
  • 16. IMS Health Web site. Available at: http://www.imshealth.com. Accessed Jun 12, 2002.
  • 17. Torex-Meditel System 5. Comprehensive links from the Torex-Medical User Group Web site. Available at: http://www.tug.uk.com. Accessed Jun 12, 2002.
  • 18. Gribben B, Coster G, Pringle M, Simon J. Non-invasive methods for measuring data quality in general practice. NZ Med J. 2001;114(1125):30–2.
  • 19. Hobbs FD, Hawker A. Computerized data collection: practicability and quality in selected general practices. Fam Pract. 1995;12(2):221–6.
  • 20. Wilkinson EK, McColl A, Exworthy M, et al. Reactions to the use of evidence-based performance indicators in primary care: a qualitative study. Qual Health Care. 2000;9(3):166–74.
  • 21. McColl A, Roderick P, Smith H, et al. Clinical governance in primary care groups: the feasibility of deriving evidence-based performance indicators. Qual Health Care. 2000;9(2):90–7.
  • 22. Whitelaw FG, Nevin SL, Milne RM, Taylor RJ, Taylor MW, Watt AH. Completeness and accuracy of morbidity and repeat prescribing records held on general practice computers in Scotland. Br J Gen Pract. 1996;46(404):181–6.
  • 23. Rethans JJ, Westin S, Hays R. Methods for quality assessment in general practice. Fam Pract. 1996;13(5):468–76.
  • 24. Teasdale S, Bainbridge M. Interventions for improving information management in family practice. Stud Health Technol Inform. 1997;43(pt B):806–10.
  • 25. Moser K, Majeed A. Prevalence of treated chronic diseases in general practice in England and Wales: treatment over time and variations by the ONS area classification. Health Serv Stat Q. 1999;2:75–82. Also available at: http://www.azmaj.org/PDF/Chronds.pdf. Accessed Jun 13, 2002.
  • 26. Hollowell J. The General Practice Research Database: quality of morbidity data. Popul Trends. 1997(87):36–40.
  • 27. Thomson O'Brien MA, Oxman AD, Davis DA, Haynes RB, Freemantle N, Harvey EL. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2000;2:CD000260.
  • 28. Thomson O'Brien MA, Oxman AD, Davis DA, Haynes RB, Freemantle N, Harvey EL. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2000;2:CD000409.
  • 29. Hooker RC, Cowap N, Newson R, Freeman GK. Better by half: hypertension in the elderly and the “rule of halves”: a primary care audit of the clinical computer record as a springboard to improving care. Fam Pract. 1999;16(2):123–8.
  • 30. Figueiras A, Sastre I, Tato F, et al. One-to-one versus group sessions to improve prescription in primary care: a pragmatic randomized controlled trial. Med Care. 2001;39(2):158–67.
  • 31. van den Hombergh P, Grol R, van den Hoogen HJ, van den Bosch WJ. Practice visits as a tool in quality improvement: mutual visits and feedback by peers compared with visits and feedback by non-physician observers. Qual Health Care. 1999;8(3):161–6.
  • 32. Borgiel AE, Williams JI, Davis DA, et al. Evaluating the effectiveness of two educational interventions in family practice. CMAJ. 1999;161(8):965–70.
  • 33. CHDGP (Collection of Health Data from General Practice), Nottingham, UK. Available at: http://www.nottingham.ac.uk/chdgp/. Accessed Jun 13, 2002.

Articles from Journal of the American Medical Informatics Association : JAMIA are provided here courtesy of Oxford University Press