Abstract
Background
Studies find that primary care physician (PCP) visit continuity is positively associated with care quality. Some of the evidence base, however, relies on patient-reported continuity measures, which may be subject to response bias.
Objective
To assess the concordance of patient-reported and administratively derived visit continuity measures.
Design
Random samples of patients (n = 15,126) visiting 1 of 145 PCPs from a physician organization in Massachusetts were surveyed. Respondents reported their experienced visit continuity over the preceding 6 months. Usual Provider Continuity (UPC), an administratively derived measure, was calculated for each respondent. The concordance of patient reports and UPC was examined. Associations with patient-reported physician-patient interaction quality were assessed for both measures.
Results
Patient-reported and administratively derived visit continuity measures were moderately correlated for the overall (r = 0.30) and urgent (r = 0.30) measures and modestly correlated for the routine (r = 0.17) measure. Although patient reports and UPC were both significantly associated with physician-patient interaction quality (p < 0.001), the effect size for patient reports was approximately five times larger than the effect size for UPC.
Conclusions
Studies and quality initiatives seeking to evaluate visit continuity should rely on administratively derived measures whenever possible. Patient-reported measures appear to be subject to biases that can overestimate the relationship between visit continuity and some patient-reported outcomes.
KEY WORDS: continuity of care, quality measurement, patient-reported outcomes, physician-patient communication
BACKGROUND
Visit continuity with primary care physicians (PCPs) is highly valued by patients1–3 and physicians4,5 internationally. Over the past 2 decades, several studies have demonstrated the benefits of visit continuity. Patients who see their PCP for a high proportion of their primary care office visits are more likely to be satisfied with their care6–8, maintain relationships with their physician over time9,10, and have more appropriate utilization of emergency and hospital services11,12. Some of the evidence base, however, relies on patient-reported measures of visit continuity13–15, which might be subject to response bias. For example, more satisfied and adherent patients may overstate their experienced visit continuity, which could bias study results toward finding positive associations between visit continuity and patient-reported outcomes.
Clarifying the effects of visit continuity has important policy implications. For example, care provided by multiple primary care clinicians might result in higher quality, especially if some of them have condition-specific expertise and complementary knowledge, skills, and roles16. Advanced practice nurses, for instance, are increasingly considered instrumental to effective chronic disease management17. Valid measures of visit continuity can therefore provide important information to guide the design of primary care practices, including clarifying when the use of multiple clinicians or care teams can improve quality. This study assesses the concordance of patient-reported and administratively derived visit continuity measures and compares their associations with the quality of physician-patient interactions.
METHODS
Patient-reported Measures
The study questionnaire was the Ambulatory Care Experiences Survey (ACES), a validated survey that measures patients’ experiences with a specific, named PCP and that physician’s practice18. Patients were asked two questions related to visit continuity. First, patients were asked, “In the last 6 months, when you were sick and went to the doctor, how often did you see your personal doctor (not an assistant or partner)?” Second, patients were asked, “In the last 6 months, when you went for a check-up or routine care, how often did you see your personal doctor (not an assistant or practice partner)?” The quality of physician-patient interactions was assessed using a previously validated six-item composite measure19. The response continuum for all questions was: “Never,” “Almost Never,” “Sometimes,” “Usually,” “Almost Always,” and “Always.” Responses to each visit continuity question were transformed to a 0–1 scale, with 0 representing “Never” and 1 representing “Always.” An overall visit continuity composite score (range: 0–1) was then computed for each respondent as the unweighted average of the two continuity questions. For the quality of physician-patient interactions scale, responses were transformed to a 0–100 scale, with higher values representing more favorable responses, i.e., 0 representing “Never” and 100 representing “Always.”
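As a concrete illustration, this scoring scheme can be sketched as follows. This is a minimal sketch, assuming equally spaced scores for the intermediate response categories (the text anchors only the endpoints); function and variable names are illustrative, not from the study.

```python
# Six-point response continuum; the text anchors "Never" = 0 and "Always" = 1
# (0 and 100 for the interaction-quality scale). Equal spacing of the
# intermediate categories is an assumption.
RESPONSES = ["Never", "Almost Never", "Sometimes", "Usually", "Almost Always", "Always"]
SCORE_0_1 = {label: i / (len(RESPONSES) - 1) for i, label in enumerate(RESPONSES)}

def continuity_composite(urgent: str, routine: str) -> float:
    """Unweighted average of the two continuity items on the 0-1 scale."""
    return (SCORE_0_1[urgent] + SCORE_0_1[routine]) / 2

def interaction_item_score(response: str) -> float:
    """A single interaction-quality item rescaled to 0-100; averaging such
    item scores to form the six-item composite is an assumption."""
    return 100 * SCORE_0_1[response]

print(continuity_composite("Always", "Usually"))  # (1.0 + 0.6) / 2 = 0.8
```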
Sampling
The study drew on 68,479 patients from the practices of 145 physicians in a large physician organization in Massachusetts. During 2004 and 2005, surveys were administered monthly to a random sample of approximately 40 patients per physician who had seen their PCP during the prior month. Patients were sampled no more than once per calendar year. Mailings included an invitation letter, a printed survey, and a postage-paid return envelope. A second invitation and questionnaire were sent to non-respondents 2 weeks after the initial mailing. Each data collection effort proceeded over a period of approximately 6 weeks.
Administrative Data
For each respondent, detailed administrative data were obtained concerning all primary care visits during the 6 months preceding the return date of the patient’s survey. Data included information about visits to all physicians, physician assistants, nurse practitioners, and registered nurses within the Internal Medicine and Adult Urgent Care Departments at the organization’s 14 care sites. Administrative data maintained by the organization identify the PCP-of-record for each patient, and this information is updated when patients designate a new physician for that role. The visit data included an identifier of the patient’s PCP-of-record at the time of each visit; comparing this identifier with the visit provider identifier allowed us to classify each visit as a PCP or non-PCP visit.
Analyses
A total of 27,213 completed surveys were received, for an overall response rate of 40.3% after undeliverable surveys (n = 1,005) were excluded from response rate calculations. Respondents who indicated that the physician named in the survey was not their primary physician or who did not answer the physician confirmation item (n = 398) were excluded, as were respondents without two or more visits documented in administrative data during the 6 months preceding the survey completion date (n = 10,684). The resulting analytic sample included 15,126 unique respondents with at least two primary care visits during the study period. Other studies assessing visit continuity have imposed similar or stricter analytic sample restrictions20.
Using administrative data, the Usual Provider Continuity (UPC) index21, a frequently used continuous measure representing the proportion of all primary care office visits (to physicians, physician assistants, nurse practitioners, and registered nurses) made to the PCP (number of PCP visits/number of overall visits), was calculated for each respondent. UPC index scores (range: 0–1) were also calculated separately for routine and urgent visits. Visits were classified as urgent or routine based on primary ICD-9 codes and encounter type descriptions coded by billing personnel.
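A minimal sketch of the UPC calculation, using the PCP-of-record comparison described under Administrative Data; the Visit structure and identifiers are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Visit:
    provider_id: str  # clinician seen at the visit
    pcp_id: str       # patient's PCP-of-record at the time of the visit

def upc(visits: list[Visit]) -> float:
    """Usual Provider Continuity: share of primary care visits made to the PCP."""
    if len(visits) < 2:
        raise ValueError("analytic sample required at least two visits")
    pcp_visits = sum(v.provider_id == v.pcp_id for v in visits)
    return pcp_visits / len(visits)

# Two of three visits were with the PCP-of-record ("dr_a"), so UPC ~= 0.67.
example = [Visit("dr_a", "dr_a"), Visit("np_b", "dr_a"), Visit("dr_a", "dr_a")]
print(upc(example))
```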
Pearson correlations were used to examine the concordance of UPC and patient-reported continuity for overall, routine, and urgent visits. Multilevel regression models with physician random effects, which account for the clustering of patients within PCPs, were used to examine the association of the quality of physician-patient interactions with (1) patient-reported continuity and (2) UPC. These models controlled for patient age, gender, race/ethnicity, education, and self-rated health, all assessed in the survey. Results were stratified by overall visit frequency to examine their consistency across levels of utilization. To test the sensitivity of results to the total number of visits as well as the degree of visit dispersion among different clinicians, we also examined the Modified Modified Continuity Index (MMCI)22 as the administratively derived continuity measure.
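For reference, the sketch below shows the MMCI in its commonly cited form (worth confirming against Magill and Senf22) and a toy multilevel model with physician random intercepts. The data are simulated and the column names are hypothetical, not the study’s variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def mmci(n_visits: int, n_providers: int) -> float:
    """Modified Modified Continuity Index (commonly cited form): adjusts for
    the total number of visits and the dispersion of visits across providers.
    Equals 1.0 when all visits are to one provider and approaches 0 when
    every visit is to a different provider."""
    return (1 - n_providers / (n_visits + 0.1)) / (1 - 1 / (n_visits + 0.1))

print(mmci(4, 1))  # 1.0
print(mmci(4, 4))  # ~0.03

# Toy multilevel model: a physician random intercept accounts for the
# clustering of patients within PCPs.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "pcp_id": rng.integers(0, 20, n),
    "continuity": rng.uniform(0, 1, n),
    "age": rng.integers(18, 90, n),
})
df["quality"] = 60 + 10 * df["continuity"] + rng.normal(0, 10, n)

result = smf.mixedlm("quality ~ continuity + age", df, groups=df["pcp_id"]).fit()
print(result.params["continuity"])  # recovers the simulated slope, ~10
```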
RESULTS
The overall patient-reported visit continuity composite (mean = 0.84, SD = 0.19) was skewed toward high continuity compared to UPC (mean = 0.69, SD = 0.27). Patient reports and UPC were moderately correlated for the overall (r = 0.30) and urgent (r = 0.30) measures and modestly correlated for the routine (r = 0.17) measure (Table 1). The correlations were fairly consistent across overall visit frequencies.
Table 1. Correlation of patient-reported and administratively derived (UPC) visit continuity measures, by visit type and visit frequency

| Patient sample | Overall visits: n | Overall visits: r | Routine visits only: n | Routine visits only: r | Urgent visits only: n | Urgent visits only: r |
|---|---|---|---|---|---|---|
| All patients | 15,126 | 0.30 | 13,214 | 0.17 | 12,207 | 0.30 |
| 1 visit | n/a | n/a | 4,732 | 0.17 | 5,272 | 0.28 |
| 2 visits | 6,887 | 0.27 | 5,163 | 0.16 | 3,814 | 0.32 |
| 3 visits | 3,829 | 0.31 | 1,868 | 0.22 | 1,646 | 0.34 |
| 4 visits | 1,990 | 0.36 | 708 | 0.29 | 717 | 0.35 |
| 5 or more visits | 2,420 | 0.28 | 743 | 0.11 | 758 | 0.27 |

Note: Overall visit analyses examine the association between the unweighted average of the patient-reported measures and overall UPC; r = Pearson correlation coefficient; n/a = not applicable because patients with one overall visit were excluded from the analysis.
Although patient-reported visit continuity and UPC were both significantly associated with physician-patient interaction quality (p < 0.001), the effect size for patient reports was approximately five times larger than the effect size for UPC (Table 2). For example, a standard deviation change (0.19 points) on the patient-reported visit continuity scale (range: 0–1) was associated with a 5.0-point change on the quality of physician-patient interaction measure (range: 0–100). By contrast, a standard deviation change (0.27 points) in UPC was associated with a 1.13-point change on that measure. Effect size differences between patient reports and UPC were consistent across overall visit frequencies. Results were also consistent in the sensitivity analysis using MMCI, which accounts for the total number of visits and the degree of dispersion among different providers (data not shown). However, MMCI was more strongly associated with the quality of physician-patient interactions than UPC when patients had five or more overall visits.
Table 2. The effect of a standard deviation change in continuity on the 100-point quality of physician-patient interactions scale

| Patient sample | n | Patient-reported visit continuity | Administratively derived visit continuity (UPC) |
|---|---|---|---|
| Overall sample | 15,126 | 5.00 (4.78, 5.23)‡ | 1.13 (0.85, 1.40)‡ |
| 2 visits | 6,887 | 4.64 (4.29, 4.99)‡ | 1.48 (1.07, 1.89)‡ |
| 3 visits | 3,829 | 5.33 (4.89, 5.78)‡ | 1.43 (0.87, 2.00)‡ |
| 4 visits | 1,990 | 4.49 (3.87, 5.10)‡ | 1.09 (0.26, 1.92)† |
| 5 or more visits | 2,420 | 5.81 (5.27, 6.35)‡ | 1.07 (0.33, 1.80)† |

Notes: UPC = Usual Provider Continuity. Effect sizes represent the effect of a standard deviation change in the visit continuity measure on the quality of physician-patient interactions (0–100 scale), controlling for patient age, gender, race/ethnicity, education, self-rated health, and patient clustering within physicians. Standard deviations: patient-reported visit continuity = 0.19; UPC = 0.27. Numbers in parentheses are 95% confidence intervals. Patients with one overall visit were excluded from the analysis. †P < 0.01; ‡P < 0.001.
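To make the per-SD effects in Table 2 concrete, the implied per-unit regression slopes can be recovered by dividing each effect by the corresponding standard deviation. This is an inference from the reported numbers, not a quantity reported in the text:

```python
# Per-SD effect = per-unit slope x SD of the continuity measure, so:
slope_reported = 5.00 / 0.19  # ~26.3 points per unit of patient-reported continuity (0-1 scale)
slope_upc = 1.13 / 0.27       # ~4.2 points per unit of UPC (0-1 scale)
print(round(slope_reported, 1), round(slope_upc, 1))  # 26.3 4.2
```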
DISCUSSION
This study highlights some limitations of using patient-reported visit continuity measures for quality assessment. First, patient reports were only moderately correlated with administratively derived measures, and patient reports were skewed toward high visit continuity. These findings suggest that some patients may “telescope,” or incorporate timeframes beyond the survey reference period23, when completing survey-based assessments of visit continuity because of the nature of their clinical relationship and/or a long history of prior continuity with their PCP.
Second, patient-reported visit continuity measures were much more strongly associated with the quality of physician-patient interactions than administratively derived measures. This finding suggests that empirical studies using patient-reported measures to assess the effect of visit continuity may be biased toward finding significant relationships with some outcomes. One reason for the divergence of patient-reported and administratively derived visit continuity measures is that patients may consider the extent to which their continuity experiences meet their expectations and/or needs when responding to questions about visit continuity. For example, PCPs might make follow-up telephone or e-mail contact with patients, thereby improving patients’ experiences with visit discontinuity. Survey measures that assess the desirability and effectiveness of multiple clinicians and/or clinical teams might be useful for clarifying the effects of visit continuity on care quality.
There are limitations to this study. First, the sample includes patients from one physician network whose 14 sites all use midlevel clinicians to some extent. Our findings might not generalize to practices that do not use advanced practice nurses or physician assistants. These clinicians’ increasing prominence in ambulatory care settings24, however, underscores the study’s relevance. Second, the survey questions used a 6-month reference period, and the relationship between patient reports and administrative data might differ when longer reference periods are used. Longer reference periods, however, might reduce the reliability of patient reports and weaken the observed associations. Finally, two patient-reported measures and one administratively derived measure of visit continuity were examined. Results might not generalize to other continuity measures, including patient-reported measures that use different response continua or other administrative measures. Our sensitivity analysis using MMCI yielded similar findings, however, indicating that our results are robust to the degree of visit dispersion among the different clinicians patients saw.
In conclusion, studies seeking to link primary care physician visit continuity to care outcomes should rely on administratively derived measures whenever possible. Patient-reported measures appear to be subject to biases that can overestimate the relationship between visit continuity and some patient-reported outcomes. Patient-reported measures that directly assess patients’ experiences with continuity configurations and/or primary care teams, however, might provide important information for assessing the effects of visit continuity on care outcomes.
Acknowledgements
We are extremely grateful to Jo-Anne Foley and to David Atwood and his staff at DataStat for their expertise in the sampling and data collection activities that supported this work. We would also like to thank the anonymous reviewers for their insights that strengthened the paper.
Conflict of Interest: None disclosed.
References
1. Pereira AG, Pearson SD. Patient attitudes toward continuity of care. Arch Intern Med. 2003;163(8):909–12.
2. Mainous AG III, Goodwin MA, Stange KC. Patient-physician shared experiences and value patients place on continuity of care. Ann Fam Med. 2004;2(5):452–4.
3. Nutting PA, Goodwin MA, Flocke SA, Zyzanski SJ, Stange KC. Continuity of primary care: to whom does it matter and when? Ann Fam Med. 2003;1(3):149–55.
4. Blankfield RP, Kelly RB, Alemagno SA, King CM. Continuity of care in a family practice residency program. Impact on physician satisfaction. J Fam Pract. 1990;31(1):69–73.
5. Stokes T, Tarrant C, Mainous AG III, Schers H, Freeman G, Baker R. Continuity of care: is the personal doctor still important? A survey of general practitioners and family physicians in England and Wales, the United States, and The Netherlands. Ann Fam Med. 2005;3(4):353–9.
6. Saultz JW, Albedaiwi W. Interpersonal continuity of care and patient satisfaction: a critical review. Ann Fam Med. 2004;2(5):445–51.
7. Rodriguez HP, Rogers WH, Marshall RE, Safran DG. The effects of primary care physician visit continuity on patients’ experiences with care. J Gen Intern Med. 2007;22(6):787–93.
8. Rodriguez HP, Rogers WH, Marshall RE, Safran DG. Multidisciplinary primary care teams: effects on the quality of clinician-patient interactions and organizational features of care. Med Care. 2007;45(1):19–27.
9. Safran DG, Montgomery JE, Chang H, Murphy J, Rogers WH. Switching doctors: predictors of voluntary disenrollment from a primary physician’s practice. J Fam Pract. 2001;50(2):130–6.
10. Sorbero ME, Dick AW, Zwanziger J, Mukamel D, Weyl N. The effect of capitation on switching primary care physicians. Health Serv Res. 2003;38(1 Pt 1):191–209.
11. Gill JM, Mainous AG III. The role of provider continuity in preventing hospitalizations. Arch Fam Med. 1998;7(4):352–7.
12. Gill JM, Mainous AG III, Nsereko M. The effect of continuity of care on emergency department use. Arch Fam Med. 2000;9(4):333–8.
13. Saultz JW. Defining and measuring interpersonal continuity of care. Ann Fam Med. 2003;1(3):134–43.
14. Love MM, Mainous AG III, Talbert JC, Hager GL. Continuity of care and the physician-patient relationship: the importance of continuity for adult patients with asthma. J Fam Pract. 2000;49(11):998–1004.
15. Fan VS, Burman M, McDonell MB, Fihn SD. Continuity of care and other determinants of patient satisfaction with primary care. J Gen Intern Med. 2005;20(3):226–33.
16. Rodriguez HP, Marsden PV, Landon BE, Wilson IB, Cleary PD. The effect of care team composition on the quality of HIV care. Med Care Res Rev. 2008;65(1):88–113.
17. Wagner EH, Austin BT, Von Korff M. Organizing care for patients with chronic illness. Milbank Q. 1996;74(4):511–44.
18. Safran DG, Karp M, Coltin K, Chang H, Li A, Ogren J, et al. Measuring patients’ experiences with individual primary care physicians. Results of a statewide demonstration project. J Gen Intern Med. 2006;21(1):13–21.
19. Agency for Healthcare Research and Quality, American Institutes for Research, Harvard Medical School, RAND Corporation. The Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Clinician and Group Survey: Submission to the National Quality Forum; 2006.
20. Jee SH, Cabana MD. Indices for continuity of care: a systematic review of the literature. Med Care Res Rev. 2006;63(2):158–88.
21. Breslau N, Reeb KG. Continuity of care in a university-based practice. J Med Educ. 1975;50(10):965–9.
22. Magill MK, Senf J. A new method for measuring continuity of care in family practice residencies. J Fam Pract. 1987;24(2):165–8.
23. Sudman S, Bradburn NM. Effects of time and memory factors on response in surveys. J Am Stat Assoc. 1973;68(344):805–15.
24. Druss BG, Marcus SC, Olfson M, Tanielian T, Pincus HA. Trends in care by nonphysician clinicians in the United States. N Engl J Med. 2003;348(2):130–7.