INTRODUCTION
Relational continuity in general practice is associated with positive outcomes for patients, doctors, and health systems, including some of the most important outcomes in medical care: reduced hospital admissions and reduced mortality.1–4 In 2022, a key question became how to measure continuity of GP care. The Conference of Local Medical Committees (2022), the policymaking body for NHS GPs, passed a resolution that continuity should be included in a future NHS contract for GPs.5 The Select Committee on Health and Social Care (2022),6 reporting on The Future of General Practice, recommended that GP continuity be improved by measuring it in all practices by 2024.
Continuous measurement is important in quality improvement programmes, and achieving improvements in continuity requires effective measurement. If all practices reported a standardised measure of continuity, this could identify practices needing continuity support as well as high-performing practices providing good models.
Different measures exist in continuity research. The calculation methods, advantages, and disadvantages of these for research have been described.7–9 Alternative measures have been promulgated by practices or NHS organisations. The Select Committee6 proposed that continuity be measured and reported quarterly in all general practices, using the Usual Provider of Care (UPC)10 or the St Leonard’s Index of Continuity of Care (SLICC).11
For a continuity measurement method to be useful, it needs to be simple for practices to use, to be easily understood by GPs and managers, and to capture meaningful continuity, ideally within a reasonably short timescale. Plans are already being made to measure continuity in English general practices but there are important differences between the various methods.
We compare the two methods recommended by the Select Committee and also consider the Bice–Boxerman COC Index,12 which is often used in research, and the Own Patient Ratio (OPR),11 currently used in some general practices.
RECOMMENDED MEASUREMENT METHODS
UPC
One measure recommended by the Select Committee6 is the UPC,10 a commonly used quantitative measure in continuity research. It is relatively simple to calculate and interpret. For each patient, the UPC score is the proportion of appointments or contacts with the most frequently seen GP.
The UPC requires a minimum number of consultations per patient, so it cannot be calculated for all consultations and patients. To include enough patients, a sufficient timescale is needed, which depends on attendance rates; for a general practice population, this is usually at least a year. For comparisons between groups or time periods, a mean of patient scores is often used as a summary statistic.
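As a minimal sketch of the calculation described above (the function and variable names are illustrative, not from the source), the UPC for a patient, and the mean UPC used as a group summary, could be computed from lists of GP identifiers:

```python
from collections import Counter

def upc(appointments, minimum=2):
    """Usual Provider of Care: the proportion of a patient's appointments
    taken by their most frequently seen GP. Returns None if the patient
    has fewer than `minimum` appointments, as such patients are excluded."""
    if len(appointments) < minimum:
        return None
    most_seen_count = Counter(appointments).most_common(1)[0][1]
    return most_seen_count / len(appointments)

def mean_upc(patients):
    """Mean of patient-level UPC scores, the usual summary statistic;
    patients below the minimum are dropped before averaging."""
    scores = [s for s in (upc(p) for p in patients) if s is not None]
    return sum(scores) / len(scores) if scores else None

# 'A' = an appointment with Dr A, 'B' with Dr B, etc.
print(upc(list("AAB")))  # 2 of 3 appointments with the most-seen GP, so roughly 0.67
```

Note that patients with a single appointment return None rather than a score, reflecting the exclusion described above.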
SLICC
The SLICC is the percentage of all patient GP consultations that are with their named/registered/personal/list-holding GP.11 It is a simple percentage and is quickly understood by GPs and staff. It is inclusive, making use of every appointment/contact, for every patient consulting, with every GP in the practice. It can be applied to short timescales, usually 1 month, which allows regular monitoring of continuity; it can cover face-to-face or telephone appointments, or both combined. Table 1 compares the SLICC and UPC. Table 2 shows example results for patients and patient groups.
Table 1.
A comparison of four of the methods for measuring continuity in general practices
| | UPC | Bice–Boxerman | SLICC | OPR |
|---|---|---|---|---|
| Ease of calculation/understanding | Easy. Calculated by dividing a patient's appointment count with the most-seen GP by that patient's total appointment count. Mean often used as summary statistic for a group of patients | More difficult. Calculated for each patient as COC = (Σnj² − n)/(n(n − 1)), summing over j = 1 to s, where n is the total number of consultations, nj is the number of consultations with GP j, and s is the total number of GPs seen by the patient. Mean often used as summary statistic for a group of patients | Easy. Calculated for a group of patients (often a GP's list) by dividing the group's number of appointments with the named/list-holding GP by the group's total number of appointments. Expressed as a percentage | Easy. Calculated for a list-holding GP by dividing their appointment count with patients on their own list by their total appointment count. Expressed as a percentage |
| Level of measurement | Patient | Patient | Appointments for a group of patients | A single GP's appointments |
| Minimum requirements | Two appointments in timeframe | Two appointments in timeframe, three more usual | A pre-specified named GP for each patient | A pre-specified named GP for each patient |
| Minimum timescale | If the timescale is too short (<1 year), most patients are excluded and it becomes a measure of frequent-attender continuity | If the timescale is too short (<1 year), most patients are excluded and it becomes a measure of frequent-attender continuity | Can be measured monthly | Can be measured monthly |
| Strengths | Works for any GP. No requirement for named GP. Easy to calculate and understand. Patient-level measure, so allows comparisons between individuals | Takes into account continuity with more than one GP. No requirement for named GP. Patient-level measure, so allows comparisons between individuals | Easy to understand and calculate. Can be used in statistical process control charts. Good for monthly measurement. Includes all appointments and all patients. Patient perspective | Easy to understand and calculate. Useful for looking at continuity and workload from the GP perspective. Can be used in statistical process control charts. Good for monthly measurement. Includes all appointments and all patients |
| Limitations | Less useful for short-term measurement. Does not take into account team continuity. Bias towards frequent attenders if the timescale is too short. Upwards skew if patients with only two appointments are included. Sometimes the most-seen provider is a locum or registrar | Less useful for short-term measurement. Bias towards frequent attenders if the timescale is too short | Requires a named GP who has seen the patient before or will again; high turnover of patients or GPs may reduce this. Requires the 'usual' or named GP field to be correct and up to date. Does not take into account team continuity | Not the patients' perspective: a GP could see only their own patients but, if the list size were too high, their patients would not see their own GP very often. To be meaningful, the practice needs to be using personal lists. Does not take into account team continuity |
| Possible adaptations | Could be adapted so that the 'usual' provider is a pre-specified named GP. Could be used to identify which GP the patient sees the most when setting up GP lists. A cumulative measure of the previous year to date could be used for regular monitoring | A cumulative measure of the previous year to date could be used for regular monitoring | Could be adapted to measure the percentage of appointments with both/all the GPs within the micro-team | Could be adapted to measure the percentage of micro-team GP appointments with patients on the lists of both/all the GPs within the micro-team |
COC = Continuity of Care. OPR = Own Patient Ratio. SLICC = St Leonard’s Index of Continuity of Care. UPC = Usual Provider of Care.
Table 2.
Worked examples of patient/patient group scores using the continuity measures
| | UPC | Bice–Boxerman | SLICC | OPR |
|---|---|---|---|---|
| GP A's list | | | For GP A's list, Dr A's patients have 22 appointments with Dr A and 41 appointments with any GP. The SLICC is 22/41 = 53.7% | For Dr A, the OPR is 22 appointments with their own patients/35 appointments provided by Dr A in total during this time period (with either GP's patients) = 62.9%. For Dr B, the OPR is 17 appointments with their own patients/31 appointments provided by Dr B in total during this time period (with either GP's patients) = 54.8%. Only appointments provided by list-holding GPs form the denominator when calculating the whole-practice OPR; if there were more list-holders, appointments with their patients would also be included in the denominator |
| Patient 1 A | Cannot calculate | Cannot calculate | | |
| Patient 2 AA | 1 | 1 | | |
| Patient 3 AB | 0.5 | 0 | | |
| Patient 4 AAB | 0.67 | 0.33 | | |
| Patient 5 ABC | 0.33 | 0 | | |
| Patient 6 AABB | 0.5 | 0.33 | | |
| Patient 7 AABC | 0.5 | 0.17 | | |
| Patient 8 AAABBB | 0.5 | 0.4 | | |
| Patient 9 AAAABBBB | 0.5 | 0.43 | | |
| Patient 10 AAAABCDE | 0.5 | 0.21 | | |
| List A mean | 0.56 | 0.32 | | |
| GP B's list | | | For GP B's list, the SLICC is 17 appointments with Dr B/41 appointments with any GP = 41.5% | |
| Patient 11 B | Cannot calculate | Cannot calculate | | |
| Patient 12 BB | 1 | 1 | | |
| Patient 13 AC | 0.5 | 0 | | |
| Patient 14 ABB | 0.67 | 0.33 | | |
| Patient 15 ACC | 0.67 | 0.33 | | |
| Patient 16 AABC | 0.5 | 0.17 | | |
| Patient 17 ABBC | 0.5 | 0.17 | | |
| Patient 18 AABBBC | 0.5 | 0.27 | | |
| Patient 19 AAABBBBC | 0.5 | 0.32 | | |
| Patient 20 AABBCCCC | 0.5 | 0.29 | | |
| List B mean | 0.59 | 0.32 | | |
| Whole practice | Mean: 0.57 | Mean: 0.32 | 47.6% | 59.1% |
Each letter is an appointment with that doctor, so A is an appointment with Dr A, B an appointment with Dr B, C an appointment with Dr C, etc. In this ‘practice’ only Drs A and B are list-holding GPs. OPR = Own Patient Ratio. SLICC = St Leonard’s Index of Continuity of Care. UPC = Usual Provider of Care.
Despite being regarded as 'similar' to the UPC, the SLICC is calculated not at the patient level but at the level of a group of patients, usually the list of patients for each named GP. It can also be applied to entire practices or to specific groups of interest.
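The group-level calculations can be made concrete by reproducing the SLICC and OPR figures from Table 2 directly from the appointment strings. The sketch below uses illustrative function names (not from the source); each letter is one appointment with that GP, and patients 1–10 are on Dr A's list, 11–20 on Dr B's:

```python
# Appointment strings from Table 2
list_a = ["A", "AA", "AB", "AAB", "ABC", "AABB", "AABC",
          "AAABBB", "AAAABBBB", "AAAABCDE"]
list_b = ["B", "BB", "AC", "ABB", "ACC", "AABC", "ABBC",
          "AABBBC", "AAABBBBC", "AABBCCCC"]

def slicc(patients, named_gp):
    """Percentage of a patient group's appointments that are with
    their named/list-holding GP (every appointment counts)."""
    with_named = sum(p.count(named_gp) for p in patients)
    total = sum(len(p) for p in patients)
    return 100 * with_named / total

def opr(gp, lists):
    """Own Patient Ratio: percentage of a GP's appointments that are
    with patients on their own list. `lists` maps GP -> patient group."""
    own = sum(p.count(gp) for p in lists[gp])
    total = sum(p.count(gp) for group in lists.values() for p in group)
    return 100 * own / total

lists = {"A": list_a, "B": list_b}
print(round(slicc(list_a, "A"), 1))  # 53.7 (22/41, as in Table 2)
print(round(slicc(list_b, "B"), 1))  # 41.5 (17/41)
print(round(opr("A", lists), 1))     # 62.9 (22/35)
print(round(opr("B", lists), 1))     # 54.8 (17/31)
```

The two measures share a numerator for each GP but divide by different denominators: the SLICC divides by all appointments the GP's patients had, the OPR by all appointments the GP provided.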
ALTERNATIVE MEASURES OF CONTINUITY
Continuity of Care index (COC, Bice–Boxerman)
The Continuity of Care (COC, Bice–Boxerman) index12 is often used in continuity research. This measure (included in Tables 1 and 2) incorporates the dispersion of consultations, with higher scores for patients who see fewer GPs. The COC score falls more steeply than the UPC when continuity is less than perfect, so GPs used to UPC measurements may find the lower values surprising.
Like the UPC, the COC requires a minimum number of consultations per patient for inclusion, usually three, which excludes many consultations. This means the COC has similar limitations for timeframes and consultation rates. The mean is, again, often used as a summary statistic.
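The COC formula given in Table 1 can be sketched as follows (illustrative code, not from the source), returning None for patients below the usual three-consultation minimum:

```python
from collections import Counter

def coc(appointments, minimum=3):
    """Bice-Boxerman Continuity of Care index:
    COC = (sum(n_j^2) - n) / (n * (n - 1)),
    where n is the total number of consultations and n_j the number
    with GP j. 1 means all consultations were with one GP; 0 means
    every consultation was with a different GP."""
    n = len(appointments)
    if n < minimum:
        return None
    sum_sq = sum(nj ** 2 for nj in Counter(appointments).values())
    return (sum_sq - n) / (n * (n - 1))

# Patient 8 from Table 2 (AAABBB): the UPC would be 0.5, the COC is lower
print(coc(list("AAABBB")))  # (9 + 9 - 6) / (6 * 5) = 0.4
```

Comparing this with the UPC for the same patient illustrates why COC scores tend to surprise GPs used to UPC figures: splitting consultations evenly between two GPs gives a UPC of 0.5 but a COC of only 0.4.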
Own Patient Ratio (OPR)
The Own Patient Ratio11 (included in Tables 1 and 2) is the percentage of a GP's consultations that are with patients on their own list. This can be useful for individual GPs, as it can be easier for them to make changes so that they see more of their own patients. However, it may not accurately reflect the patient experience, particularly if the GP's list is too large for the number of sessions worked. Used in conjunction with the SLICC, the OPR enables GPs and managers to understand how continuity is working across the practice.
Other measures
The Herfindahl–Hirschman index7,13 is similar to the COC, being a measure of the concentration of consultations among a group of providers, although it is calculated differently and is less used in healthcare research. The SECON,9 also used in research, incorporates the sequence of appointments, with higher scores for consecutive consultations with the same GP. This might interest practices studying their episodic continuity.
There are also some additional, largely unresearched, measures used by NHS organisations or software providers. These are often simple measures and sometimes focus on particular groups of patients such as frequent attenders. The OPR has sometimes been independently developed and used, but, without the SLICC, it lacks the patient perspective.
Another measure is the percentage of patients who reach a threshold percentage of consultations with one GP (either the most-seen GP or their personal GP). This is, essentially, a different way of creating a UPC summary statistic. A similar measure has been proposed using the COC/Bice–Boxerman index, taking the percentage of patients who score 0.7 or higher.14
Some practices count the total number of GPs a patient has seen, which is more useful for studying continuity for frequent attenders. It can show practices how patient consultations are spread between doctors.
Patient surveys have been used to measure continuity in research and within practices. These include the Nijmegen Continuity Questionnaire and disease-specific continuity surveys.15 These have the potential to capture the patient perspective of relationship continuity, potentially more meaningful than quantitative measures based on appointment data. However, surveys are time consuming and costly. In England, the annual General Practice Patient Survey includes two questions that have been used as measures of continuity. These results correlated with the UPC in one study.16
MEANINGFUL MEASUREMENT OF CONTINUITY
Continuity is a proxy for the doctor–patient relationship and the associated benefits accrue when patients and GPs have repeated consultations together over time. Ideally, a measure of continuity should capture that long-term relationship. A meaningful measure therefore needs to either measure consultations over a long period of time or measure consultations where there is a reasonable expectation of a continuing clinical relationship.
Most measures require a minimum number of consultations so that a high-scoring patient has seen their most-seen GP several times. The SLICC and OPR differ in that they assume the patient has, or will have, a clinical relationship with their named/list-holding GP: a single appointment captured within 1 month is treated as one of a series.
In English practices with personal lists,17,18 the contractually required named accountable GP19 is the GP who takes long-term clinical responsibility, making the SLICC and OPR straightforward and meaningful. The Select Committee6 has recommended that 80% of practices have personal lists by 2027. Currently, in some practices, the requirement for a named GP is seen as an administrative formality and patients are not encouraged to see their named GP,20 nor does the GP take long-term responsibility for the patient.
If the named GP is not the GP with whom the patient has (or is expected to have) a continuing therapeutic relationship, the SLICC and OPR are not very meaningful, for example, if practices did not keep this field up-to-date after a change in GP. Likewise, if there is high patient or GP turnover at the practice, a SLICC may not be meaningful as the single consultations are less likely to build up to long-term continuity. Here, a measure that investigates continuity for patients with the most-seen GP (such as the UPC) may be more useful, particularly as this would also restrict the measurement to patients with a minimum number of consultations.
If the practice prioritises measuring the dispersion of appointments across GPs, the COC/Bice–Boxerman or total number of GPs seen may be more helpful. However, if these measures are used over too short a timescale, they are no longer meaningful, as only a very small number of patients will reach the appointment number threshold for inclusion.
If a practice is using a micro-team system in which continuity is expected to be with more than one GP, the SLICC and UPC are not likely to capture this well. The COC might then be a more meaningful index to use. However, it is possible to adapt the SLICC or UPC to calculate the proportion of appointments with either/any member of the micro-team.
TIMESCALE
To be useful in general practice and for healthcare improvement, a measure should track changes in continuity over relatively short timescales. The Select Committee recommends quarterly reporting of continuity measurement.6 The SLICC and OPR can produce meaningful results for a single calendar month, which makes them useful for regular monitoring of continuity and for statistical process control charts (Supplementary Figure S1), used in quality improvement to distinguish normal variation from significant changes.
For other measures, up-to-date monitoring is more difficult. It is possible, using measures such as the UPC, to take the year to date as the timescale, then update this each month or quarter. This gives a cumulative measure and can determine improvement or deterioration in continuity levels. However, it does not have the immediacy of single-month measurement and practices may become discouraged that continuity does not appear to improve more rapidly. Statistical process control charts cannot be used with cumulative measures.
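As an illustration of how monthly SLICC figures could feed a statistical process control chart, the sketch below computes 3-sigma p-chart limits, a common SPC choice for proportions. The monthly figures are invented for illustration; the source does not specify which chart type or limits are used:

```python
import math

def p_chart_limits(named_gp_appts, total_appts):
    """3-sigma p-chart control limits for monthly SLICC proportions.
    Each month contributes (appointments with the named GP, total
    appointments). Months outside their limits suggest a real change
    in continuity rather than normal month-to-month variation."""
    p_bar = sum(named_gp_appts) / sum(total_appts)  # overall proportion
    limits = []
    for n in total_appts:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        lower = max(0.0, p_bar - 3 * sigma)
        upper = min(1.0, p_bar + 3 * sigma)
        limits.append((lower, upper))
    return p_bar, limits

# Invented example: six months of (named-GP appointments, all appointments)
named = [160, 150, 170, 148, 155, 162]
totals = [300, 290, 310, 295, 300, 305]
p_bar, limits = p_chart_limits(named, totals)
```

Because the limits depend on each month's appointment count, busier months get tighter limits, which is one reason SPC charts work for monthly proportions but not for cumulative year-to-date measures.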
EASE OF USE
Many of the measures used in research may be too complex to calculate and understand in busy practices. The UPC has the advantage of being a straightforward measure. However, unless it is attached to a pre-specified GP for each patient, there is a statistical problem for patients with low numbers of consultations: with two consultations (a common case), the score can only be 1 or 0.5, so the UPC is artificially inflated for patient populations with fewer consultations, particularly when a mean is used as the summary statistic. The UPC may be useful for identifying the most-seen GP, which might then aid practices in establishing personal lists.
The SLICC and OPR are easy to calculate and understand. Once a named GP is identified and recorded, these measures allow monthly measurement of continuity in a way that GPs and practice managers understand. The SLICC and sometimes the OPR are already used in several practices around the country,21 allowing for benchmarking. The Select Committee published results showing both good (>50% SLICC) and excellent GP continuity (>75% SLICC).21
CONCLUSION
Measuring GP continuity in all English general practices is now proposed. GP clinical software systems may soon be required to provide all English practices with the capability to calculate one or more continuity measures.
The principles of the different methods in use need to be clearly understood as different methods prioritise different features and can generate different figures for the same group of patients. There is a risk that software developers will produce measures that do not measure continuity in a way that is meaningful or statistically reliable.
When considering ease of understanding, and the capacity for use in quality improvement (statistical process control charts), the SLICC (possibly combined with the OPR) is likely to be the best measure to incorporate into clinical systems. However, for practices looking to establish personal lists, it may also be useful to have the ability to calculate a UPC at the individual patient level to identify the most-seen GP.
The Select Committee identified two methods of measuring GP continuity. As they differ, the optimum arrangement would be to use both.
Funding
None.
Provenance
Freely submitted; externally peer reviewed.
Competing interests
The St Leonard’s Index of Continuity of Care (SLICC) was constructed by Denis Pereira Gray in 1973 and introduced in the St Leonard’s Practice in 1974. He coined the term ‘personal lists’ in 1979 and they were used by him and Philip Evans subsequently and to date. Kate Sidaway-Lee named the SLICC in 2019.
REFERENCES
1. Pereira Gray DJ, Sidaway-Lee K, White E, et al. Continuity of care with doctors — a matter of life and death? A systematic review of continuity of care and mortality. BMJ Open. 2018;8(6):e021161. doi: 10.1136/bmjopen-2017-021161.
2. Barker I, Steventon A, Deeny SR. Association between continuity of care in general practice and hospital admissions for ambulatory care sensitive conditions: cross sectional study of routinely collected, person level data. BMJ. 2017;356:j84. doi: 10.1136/bmj.j84.
3. Sandvik H, Hetlevik Ø, Blinkenberg J, Hunskaar S. Continuity in general practice as predictor of mortality, acute hospitalisation, and use of out-of-hours care: a registry-based observational study in Norway. Br J Gen Pract. 2022. doi: 10.3399/BJGP.2021.0340.
4. Delgado J, Evans PH, Pereira Gray D, et al. Continuity of GP care for patients with dementia: impact on prescribing and the health of patients. Br J Gen Pract. 2022. doi: 10.3399/BJGP.2021.0413.
5. Waters S. General practice: policy makers must value continuity of care over access and targets, GP conference hears. BMJ. 2022;377:o1202. doi: 10.1136/bmj.o1202.
6. Health and Social Care Committee. The future of general practice. London; 2022. https://publications.parliament.uk/pa/cm5803/cmselect/cmhealth/113/report.html (accessed 24 Apr 2023).
7. Jee SH, Cabana MD. Indices for continuity of care: a systematic review of the literature. Med Care Res Rev. 2006;63(2):158–188. doi: 10.1177/1077558705285294.
8. Pollack CE, Hussey PS, Rudin RS, et al. Measuring care continuity: a comparison of claims-based methods. Med Care. 2016;54(5):e30–e34. doi: 10.1097/MLR.0000000000000018.
9. Saultz JW. Defining and measuring interpersonal continuity of care. Ann Fam Med. 2003;1(3):134–143. doi: 10.1370/afm.23.
10. Breslau N, Reeb KG. Continuity of care in a university-based practice. J Med Educ. 1975;50:965–969. doi: 10.1097/00001888-197510000-00006.
11. Sidaway-Lee K, Pereira Gray D, Evans P. A method for measuring continuity of care in day-to-day general practice: a quantitative analysis of appointment data. Br J Gen Pract. 2019. doi: 10.3399/bjgp19X701813.
12. Bice TW, Boxerman SB. A quantitative measure of continuity of care. Med Care. 1977;15(4):347–349. doi: 10.1097/00005650-197704000-00010.
13. Maarsingh OR, Henry Y, van de Ven PM, Deeg DJH. Continuity of care in primary care and association with survival in older people: a 17-year prospective cohort study. Br J Gen Pract. 2016. doi: 10.3399/bjgp16X686101.
14. Dai M, Pavletic D, Shuemaker JC, et al. Measuring the value functions of primary care: physician-level continuity of care quality measure. Ann Fam Med. 2022;20(6):535–540. doi: 10.1370/afm.2880.
15. Uijen AA, Heinst CW, Schellevis FG, et al. Measurement properties of questionnaires measuring continuity of care: a systematic review. PLoS One. 2012;7(7):e42256. doi: 10.1371/journal.pone.0042256.
16. Hull SA, Williams C, Schofield P, et al. Measuring continuity of care in general practice: a comparison of two methods using routinely collected data. Br J Gen Pract. 2022. doi: 10.3399/BJGP.2022.0043.
17. Pereira Gray DJ. The key to personal care. J R Coll Gen Pract. 1979;29(208):666–678.
18. Pereira Gray D, Sidaway-Lee K, Evans P. Continuity of GP care: using personal lists in general practice. Br J Gen Pract. 2022.
19. NHS England. Standard General Medical Services contract. 2020. https://www.england.nhs.uk/wp-content/uploads/2020/12/20-21-GMS-Contract-October-2020.pdf (accessed 24 Apr 2023).
20. Tammes P, Payne RA, Salisbury C, et al. The impact of a named GP scheme on continuity of care and emergency hospital admission: a cohort study among older patients in England, 2012–2016. BMJ Open. 2019;9(9):e029103. doi: 10.1136/bmjopen-2019-029103.
21. UK Parliament. Evidence to the Parliamentary Select Committee on the Future of General Practice. 2022. https://committees.parliament.uk/work/1624/the-future-of-general-practice/publications/written-evidence/ (accessed 24 Apr 2023).