Abstract
Background
There is significant promise in analyzing physician patient-sharing networks to indirectly measure care coordination, yet it is unknown whether these measures reflect patients’ perceptions of care coordination.
Objective
To evaluate the associations between network-based measures of care coordination and patient-reported experience measures.
Design
We analyzed patient-sharing physician networks within group practices using data made available by the Centers for Medicare and Medicaid Services.
Subjects
Medicare beneficiaries who provided responses to the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Survey in 2016 (data aggregated by physician group practice made available through the Physician Compare 2016 Group Public Reporting).
Main Measures
The outcomes of interest were patient-reported experience measures reflecting aspects of care coordination (CAHPS). The predictor variables of interest were physician group practice density (the number of physician pairs who share patients, adjusted for the total number of physician pairs) and clustering (the extent to which sets of three physicians share patients).
Key Results
Four hundred seventy-six groups had patient-reported measures available. Patients’ perception of “Clinicians working together for your care” was significantly positively associated with both physician group practice density (Est (95% CI) = 5.08 (0.83, 9.33), p = 0.02) and clustering (Est (95% CI) = 3.73 (1.01, 6.44), p = 0.007). Physician group practice clustering was also significantly positively associated with “Getting timely care, appointments, and information” (Est (95% CI) = 4.63 (0.21, 9.06), p = 0.04).
Conclusions
This work suggests that network-based measures of care coordination are associated with some patient-reported experience measures. Evaluating and intervening on patient-sharing networks may provide novel strategies for initiatives aimed at improving quality of care and the patient experience.
Electronic supplementary material
The online version of this article (10.1007/s11606-019-05313-y) contains supplementary material, which is available to authorized users.
KEY WORDS: physician networks, network analysis, care coordination, Physician Compare, CAHPS
INTRODUCTION
Optimizing care coordination is a cornerstone of efforts aimed at improving patient care.1 Despite strong consensus that improving care coordination is a high priority, robust measures of care coordination are lacking, making assessment of the efficacy of improvement strategies a challenge.2 Agreement on measure(s) has been a key issue for researchers and policy-makers due to the varied definitions and diverse stakeholders involved.
Physician patient-sharing networks hold significant promise in offering a quantitative, scalable approach for indirectly measuring care coordination by defining relationships between physicians based on shared patients observed in administrative data.3–9 This approach has the potential to uncover how otherwise latent organizational aspects of health care systems impact patient outcomes. The extent of patient-sharing relationships within physician networks (e.g., the network density) has been associated with care utilization, cost of care, and some measures of care quality.3, 8–17 A key challenge to this approach is understanding how the various measures used to describe these networks map onto the complex realities of health care delivery. While patient sharing has been validated to signal true professional relationships between physicians from the physician’s perspective,18 it is unknown whether these measures reflect patients’ perceptions of their own care coordination.
The objective of this study was to evaluate whether network measures reflecting the extent of patient sharing among physicians correlate with patient-reported experience measures included in the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey. To this end, this study assessed two network measures hypothesized to capture aspects of care coordination: (i) network density, the number of observed patient-sharing relationships between physicians adjusting for total number of physician pairs, and (ii) network clustering, the extent to which sets of three physicians share patients. Increased patient sharing among physicians within a group is posited to facilitate care coordination by (i) increasing physician awareness and familiarity with the care provided to their patients by other physicians, (ii) providing more opportunities for the sharing of information and working together, and (iii) informally establishing common referral pathways to support timely care. We hypothesized that physician group practices with greater network density and network clustering would have higher reported scores for patient experience measures related to care coordination. This is the first study to examine associations between patient-sharing network measures and patients’ perceptions of their care experience. A better understanding of the real-world interpretations of these physician network measures from various stakeholders’ perspectives is essential for translating data-driven findings to the clinical setting to improve patient care.
METHODS
Data Sources
To conduct this study, we linked three publicly available data sources released by the Centers for Medicare and Medicaid Services (CMS) that include data on physician patient-sharing in 2015 (https://www.cms.gov/Regulations-and-Guidance/Legislation/FOIA/Referral-Data-FAQs.html), physician participation in a group practice, and patient-reported quality performance for group practices in 2016 (https://data.medicare.gov/data/physician-compare).
The Physician Shared Patient Patterns Data lists health care providers who participate in the delivery of health services to the same Medicare beneficiary within specific time intervals (30 days, 60 days, 90 days, and 180 days). It reports the number of patients each physician pair (i.e., physician dyad) shared within the specified time interval. These data are presented as a directed graph, which indicates the sequence in which the patients were seen by the physicians in each dyad. We used the Physician Shared Patient data for 2015 to create directed physician networks for which ties between physicians indicate shared patients within 30 or 180 days.
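The dyad-to-network construction described above can be sketched in a few lines of pure Python. This is an illustrative sketch only: the tuple layout and names such as `npi_a` are assumptions for the example, not the actual schema of the CMS file.

```python
from collections import defaultdict

def build_shared_patient_graph(dyads, min_shared=1):
    """Build a directed patient-sharing graph from dyad records.

    Each record is (npi_a, npi_b, shared): npi_a's patients were later
    seen by npi_b within the chosen window (e.g., 30 days), and `shared`
    is the number of such patients. Dyads below `min_shared` are dropped.
    """
    graph = defaultdict(dict)  # graph[a][b] = shared patient count
    for npi_a, npi_b, shared in dyads:
        if shared >= min_shared:
            graph[npi_a][npi_b] = shared
    return dict(graph)

# Toy dyad records, not real CMS data
dyads = [("A", "B", 12), ("B", "A", 9), ("A", "C", 1)]
g = build_shared_patient_graph(dyads, min_shared=2)  # drops the A-C dyad
```

The directed edges preserve the sequence in which the shared patients were seen, matching the directed-graph structure of the Physician Shared Patient Patterns Data.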
To identify whether a physician belongs to a group practice, we took advantage of the Physician Compare National Downloadable File, which contains general information about individual eligible health care professionals, including specialty and group practice membership. To be listed on Physician Compare, a physician or other clinician must have “approved” status in the Medicare Provider Enrollment, Chain, and Ownership System (PECOS), have a specialty and at least one practice address listed, and have submitted at least one Fee-For-Service Medicare claim within the previous 6 months. Group affiliation, as displayed on Physician Compare, is determined through a physician’s or other clinician’s benefit reassignment.
For the outcome measures of patient experience, we used the Consumer Assessment of Healthcare Providers and Systems (CAHPS) for Physician Quality Reporting System (PQRS) measure performance rates reported as part of the Physician Compare 2016 Group Public Reporting data (dataset updated on May 17, 2018). Public reporting of these measures was required for group practices of 100 or more eligible providers and optional for group practices of 2–99 eligible providers via a certified CAHPS vendor.
We also assessed socioeconomic and geographic factors that could potentially confound the relationships between physician networks and the patient experience measures. We obtained the 2016 5-year estimate of percent living below the poverty level for each ZIP code tabulation area (ZCTA) from the American Community Survey data released by the Census Bureau. We linked each ZCTA to a ZIP code using a crosswalk, and each ZIP code was categorized as urban or non-urban using the 2010 rural-urban commuting area (RUCA) codes. Urban ZIP codes were defined as RUCA code 1 and non-urban ZIP codes were defined as RUCA codes 2–4 (corresponding to suburban, large rural town, and small town/isolated).
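The urban/non-urban collapse of RUCA codes can be made explicit with a minimal sketch (the function name is ours; the code groupings follow the text above):

```python
def ruca_setting(ruca_code):
    """Collapse primary RUCA codes into the study's binary setting.

    Code 1 -> "urban"; codes 2-4 (suburban, large rural town,
    small town/isolated) -> "non-urban". Other codes are outside
    the categories used in this study.
    """
    if ruca_code == 1:
        return "urban"
    if ruca_code in (2, 3, 4):
        return "non-urban"
    raise ValueError(f"RUCA code {ruca_code} is outside the categories used here")
```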
Physician Network Analysis
Edges between physicians were defined as two physicians having at least one clinical encounter with the same patient. We evaluated physician networks based on patient sharing within 30 days or 180 days. The sub-networks of physicians within group practices were analyzed to calculate two network measures hypothesized to capture aspects of care coordination:
Network density: The density of the network was calculated within each group practice and is determined by the number of observed patient-sharing ties between physician pairs adjusting for the total number of possible pairs. The values range from 0 to 1, with 0 indicating an empty network (no observed ties) and 1 indicating a completely connected network.
Network clustering (also known as transitivity): The global clustering coefficient was calculated within each group practice, and this measure is thought to capture teamwork by considering the patient-sharing connections among sets of three physicians. The global clustering coefficient is distinct from density because it is based on triplets of nodes instead of pairs. It is defined as the number of closed triplets (that is, triangles) adjusting for the total number of triplets in the network (open and closed). In this context, a triangle would occur when three physicians all share patients with each other. The global clustering coefficient values range from 0 to 1, with 0 indicating a network with no observed triplets and 1 representing a network where all triplets are closed.
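These two definitions can be made concrete with a small pure-Python sketch (the study itself used R's igraph; the function names here are illustrative). It treats a group's patient-sharing network as undirected:

```python
from itertools import combinations

def density(nodes, edges):
    """Fraction of possible physician pairs that actually share patients."""
    n = len(nodes)
    possible = n * (n - 1) / 2
    return len(edges) / possible if possible else 0.0

def transitivity(nodes, edges):
    """Global clustering coefficient: closed triplets / all triplets."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    closed = 0    # neighbor pairs that are themselves connected
    triplets = 0  # paths of length two, counted at their center node
    for v in nodes:
        k = len(adj[v])
        triplets += k * (k - 1) // 2
        for a, b in combinations(adj[v], 2):
            if b in adj[a]:
                closed += 1  # a triangle through center v
    return closed / triplets if triplets else 0.0

# Toy group: a triangle A-B-C plus a pendant physician D tied to A
nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D")]
```

For this toy group, density is 4/6 (four observed ties out of six possible pairs), while transitivity is 3/5 (the one triangle contributes three closed triplets out of five total).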
Network analysis was performed using the igraph package19 in the R software environment.20 The networks were visualized using the Kamada-Kawai force-directed algorithm, which positions nodes to provide a graph with relatively uniform edge length, vertex distribution, and symmetry.21
Study Variables
The outcome measures were obtained from the 2016 Physician Compare Group Public Reporting Data on patient experience within the physician group practices. These data include eight summary survey measures: (1) Between visit communication; (2) Clinicians working together for your care; (3) Getting timely care, appointments, and information; (4) How well clinicians communicate; (5) Health promotion and education; (6) Attention to patient medicine cost; (7) Patients’ rating of clinicians; (8) Courteous and helpful office staff. This study focused on the measures related to patients’ perceptions of their care coordination (measures 1–3). The measures are reported as “top box scores” (0–100) representing the percentage of responses in the most positive response categories. Table 1 includes the survey questions that inform the patient experience summary measures evaluated in this study (obtained from https://www.pqrscahps.org/globalassets/table%2D%2D1-cahps-for-pqrs-12-ssm_corresponding-questions.pdf).
Table 1.
Questions Included in the CAHPS Survey Patient Experience Measures Evaluated in This Study
Summary survey measure | Question(s) included in the measure |
---|---|
Between visit communication | In the last 6 months, did this provider’s office contact you to remind you to make an appointment for tests or treatment? |
Clinicians working together for your care | When you visited this provider in the last 6 months, how often did he or she have your medical records? In the last 6 months, when this provider ordered a blood test, x-ray, or other test for you, how often did someone from this provider’s office follow up to give you those results? In the last 6 months, how often did you and anyone on your health care team talk about all the prescription medicines you were taking? |
Getting timely care, appointments, and information | In the last 6 months, when you phoned this provider’s office to get an appointment for care you needed right away, how often did you get an appointment as soon as you needed? In the last 6 months, when you made an appointment for a check-up or routine care with this provider, how often did you get an appointment as soon as you needed? In the last 6 months, when you phoned this provider’s office during regular office hours, how often did you get an answer to your medical question that same day? In the last 6 months, when you phoned this provider’s office after regular office hours, how often did you get an answer to your medical question as soon as you needed? Wait time includes time spent in the waiting room and exam room. In the last 6 months, how often did you see this provider within 15 minutes of your appointment time? |
Other group practice characteristics evaluated in this study were based on aggregated data from the Physician Compare National Downloadable File: number of physicians per group, proportion of PCPs per group, proportion of male physicians per group, and the count of patients who reported the patient experience measure. The number of physicians and the number of patients who reported the patient experience measure per group were categorized into tertiles (e.g., “small,” “medium,” and “large”).
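Tertile categorization of this kind can be sketched with a rank-based assignment. The exact cut points and tie-handling used in the study are not reported here, so this is an illustrative sketch only:

```python
def tertile_labels(values, labels=("small", "medium", "large")):
    """Assign each value to a tertile by rank.

    Ties are broken by original position; boundaries are illustrative,
    not the study's published cut points.
    """
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    out = [None] * n
    for rank, i in enumerate(order):
        out[i] = labels[min(rank * 3 // n, 2)]  # bottom/middle/top third
    return out

# Toy group sizes (numbers of physicians), not study data
sizes = [10, 50, 20, 90, 30, 70]
groups = tertile_labels(sizes)
```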
Using ZIP code–level Census Bureau data to obtain a measure of poverty at the physician group level, we calculated the mean poverty score across all ZIP codes associated with physician practice locations within each group. We also categorized physician groups as either urban or non-urban based on the RUCA tier of the majority of physicians’ practice locations within each group.
Statistical Analyses
Associations between physician group practice network measures and study variables were assessed with linear regression models. Multivariable linear models were then used to evaluate the associations between physician network measures and patient experience measures, adjusting for other physician group practice characteristics. The network-based measures of care coordination (from 2015) were lagged 1 year behind the outcome variables (from 2016) rather than analyzed cross-sectionally. Lagging improves our ability to interpret any associations as reflecting a causal process rather than a mechanical association induced by observing the network and patient experience in the same year.
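The lagged design can be illustrated with a toy univariate regression. The study's actual models were multivariable (fit in R); this sketch only shows the one-year lag structure, with a hand-rolled OLS slope and made-up numbers:

```python
def ols_slope(x, y):
    """Univariate OLS slope and intercept for y ~ x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return b, my - b * mx

# Lag the network measure one year behind the outcome: 2015 network
# density predicts 2016 CAHPS scores (toy numbers, not study data).
density_2015 = [0.05, 0.10, 0.15, 0.20]
cahps_2016 = [70.0, 72.0, 74.0, 76.0]
slope, intercept = ols_slope(density_2015, cahps_2016)
```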
RESULTS
There were 476 physician groups that had reported patient experience measures in the Physician Compare 2016 Group Public Reporting file. The physician groups span all 50 states. Table 2 summarizes the distribution of the physician group practice characteristics and the performance rate “top box” scores for the three patient experience measures related to care coordination.
Table 2.
Physician Group Practice Characteristics
Characteristic | Mean (SD) (N = 476) | Median (IQR) | Range (Min, Max) |
---|---|---|---|
Predictors | |||
Density | 0.11 (0.12) | 0.08 (0.04, 0.15) | 0, 1 |
Clustering | 0.48 (0.15) | 0.49 (0.39, 0.57) | 0, 1 |
Size (number of physicians) | 396 (470) | 219 (131, 471) | 5, 3734 |
Male sex, % | 53 (8) | 52 (47, 57) | 23, 80 |
Primary care, % | 24 (15) | 22 (14, 31) | 0, 82 |
RUCA tier, n | |||
Urban | 433 | N/A | N/A |
Non-urban | 42 | N/A | N/A |
Percent below poverty level | 14 (4) | 14 (11, 17) | 2.6, 31.4 |
Patient count | 185 (35) | 190 (161, 207) | 88, 296 |
Outcomes | |||
Between visit communication | 57.9 (7.3) | 58 (53, 63) | 38, 80 |
Clinicians working together for your care | 75.4 (3.4) | 76 (73, 78) | 56, 85 |
Getting timely care, appointments, and information | 58.9 (6.1) | 59 (55, 63) | 31, 76 |
Outcome measures represent top box scores. 35 physician groups were missing scores for “Between visit communication,” 33 groups were missing scores for “Clinicians working together for your care,” and 47 groups were missing scores for “Getting timely care, appointments, and information.” Patient count represents the number of patients reporting the experience measures per group. SD, standard deviation; IQR, interquartile range
Physician groups varied in both density and clustering. Examples of variations in physician network structures within groups based on the 30-day patient-sharing network are illustrated in Figure 1. While all three groups shown have relatively similar numbers of physicians in the network, the number and configuration of patient-sharing ties within the groups lead to considerably distinct network structures. Physician groups A and B have the same level of clustering, but group A has greater density and forms one densely connected hub of physicians. Physician group B, with lower density, has tightly knit sub-groups within the practice group. Physician groups B and C have the same low density, but the lower clustering in group C is a reflection of the less tightly knit sub-networks. The Spearman rank correlations between the network-based care coordination measures and patient experience measures are plotted in Figure 2. In bivariate analyses, physician group clustering was positively associated with “Getting timely care, appointments, and information” (p = 0.004).
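The Spearman rank correlations reported in Figure 2 can be computed from scratch. A minimal sketch (in Python rather than the study's R environment), taking the Pearson correlation of tie-averaged ranks:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via Pearson correlation of ranks.

    Tied values receive the average of the ranks they span.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Any strictly monotone relationship between a network measure and an experience score yields rho = 1 (or −1 if decreasing), regardless of linearity.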
Figure 1.
Illustrations of physician group practice networks. Each node (circle) in the network represents a physician and the edges (lines) between nodes indicate shared patients.
Figure 2.
Correlations between physician group practice density and clustering and patient experience measures. Correlations were measured using Spearman’s rho (r).
Table 3 presents the associations between the 30-day patient-sharing physician network measures and the other group practice characteristics. Physician groups with greater density were smaller (p < 0.001), were non-urban (p < 0.001), had a higher proportion of male physicians (p < 0.001), and had more patients (p = 0.002). Physician groups with greater clustering were also smaller (p < 0.001), were non-urban (p < 0.001), had a higher proportion of PCPs (p = 0.01) and male physicians (p < 0.001), and had more patients (p < 0.001).
Table 3.
Associations between Network-Based Care Coordination Measures and Physician Group Practice Characteristics.
Group practice characteristic | Density, Est (95% CI) | Density, p value | Clustering, Est (95% CI) | Clustering, p value |
---|---|---|---|---|
Practice size | ||||
Small (5–155 providers) | Referent | – | Referent | – |
Medium (156–328 providers) | − 0.10 (− 0.12, − 0.07) | < 0.001 | − 0.02 (− 0.05, 0.01) | 0.13 |
Large (329–3734 providers) | − 0.15 (− 0.17, − 0.12) | < 0.001 | − 0.12 (− 0.15, − 0.09) | < 0.001 |
Proportion PCP | 0.02 (− 0.05, 0.10) | 0.53 | 0.12 (0.02, 0.21) | 0.01 |
Proportion male | 0.33 (0.20, 0.45) | < 0.001 | 0.39 (0.24, 0.55) | < 0.001 |
RUCA tier | ||||
Urban | Referent | – | Referent | – |
Non-urban | 0.10 (0.06, 0.13) | < 0.001 | 0.08 (0.03, 0.13) | < 0.001 |
Percent below poverty | 0.001 (− 0.002, 0.003) | 0.57 | − 0.0004 (− 0.003, 0.003) | 0.79 |
Patient count | ||||
Low | Referent | – | Referent | – |
Medium | 0.01 (− 0.01, 0.04) | 0.24 | 0.05 (0.02, 0.08) | 0.004 |
High | 0.04 (0.01, 0.06) | 0.002 | 0.07 (0.03, 0.10) | < 0.001 |
Est, estimate; CI, confidence interval. The estimates represent the change in the expected value of the physician network measure with a 1-unit increase in the predictor
Table 4 reports the adjusted estimated associations of physician group practice characteristics with patient-reported experience measures. Model 1 estimates the associations between the non-network-based group practice characteristics and patient-reported experience measures. Physician groups with higher scores for “Between visit communication” had a greater proportion of PCPs (p = 0.004), a lower proportion of male physicians (p = 0.03), and fewer patients (top tertile compared with bottom tertile, p = 0.03). Groups with higher scores for “Clinicians working together for your care” had a lower proportion of male physicians (p = 0.03). Groups with higher scores for “Getting timely care, appointments, and information” had lower poverty (p = 0.01) and were more likely to be in an urban setting (p = 0.004).
Table 4.
Adjusted Relationships between Study Variables and Patient Experience Measures
 | Model 1: Non-network group practice characteristics, Estimate (SE) | Model 2a: Predictors in (1) + density, Estimate (SE) | Model 2b: Predictors in (1) + clustering, Estimate (SE) | Model 3: All predictors, Estimate (SE) |
---|---|---|---|---|
Between visit communication, mean (sd) = 57.9 (7.3) | ||||
Density | N/A | − 1.22 (− 10.08, 7.63) | N/A | 0.02 (− 11.51, 11.48) |
Clustering | N/A | N/A | − 4.83 (− 10.35, 0.70) | − 4.82 (− 11.00, 1.36) |
Practice size | ||||
Small | Referent | Referent | Referent | Referent |
Medium | − 0.63 (− 2.38, 1.12) | − 0.72 (− 2.58, 1.14) | − 0.63 (− 2.39, 1.12) | − 0.64 (− 2.52, 1.25) |
Large | 0.27 (− 1.59, 2.11) | 0.11 (− 2.05, 2.27) | − 0.12 (− 2.04, 1.81) | − 0.12 (− 2.32, 2.09) |
Proportion PCP | 7.78 (2.46, 13.10)** | 7.66 (2.26, 13.06)** | 8.24 (2.88, 13.59)** | 8.24 (2.82, 13.66)** |
Proportion male | − 10.14 (− 19.27, − 1.02)* | − 9.87 (− 19.21, − 0.53)* | − 8.27 (− 17.59, 1.06) | − 8.26 (− 17.79, 1.26) |
RUCA tier | ||||
Urban | Referent | Referent | Referent | Referent |
Non-urban | 1.00 (− 1.48, 3.49) | 1.05 (− 1.46, 3.55) | 1.18 (− 1.31, 3.66) | 1.18 (− 1.34, 3.69) |
Percent below poverty | − 0.01 (− 0.17, 0.15) | − 0.01 (− 0.17, 0.15) | − 0.02 (− 0.18, 0.14) | − 0.02 (− 0.18, 0.14) |
Patient count | ||||
Low | Referent | Referent | Referent | Referent |
Medium | − 2.44 (− 4.17, − 0.71)** | − 2.42 (− 4.16, − 0.68)** | − 2.29 (− 4.03, − 0.56)** | − 2.29 (− 4.03, − 0.55)** |
High | − 1.95 (− 3.69, − 0.20)* | − 1.92 (− 3.68, − 0.16)* | − 1.82 (− 3.58, − 0.06)* | − 1.82 (− 3.58, − 0.05)* |
Clinicians working together for your care, mean (sd) = 75.4 (3.4) | ||||
Density | N/A | 5.08 (0.83, 9.33)* | N/A | 2.55 (− 2.87, 7.98) |
Clustering | N/A | N/A | 3.73 (1.01, 6.44)** | 3.08 (0.05, 6.12)* |
Practice size | ||||
Small | Referent | Referent | Referent | Referent |
Medium | − 0.64 (− 1.53, 0.25) | − 0.26 (− 1.20, 0.68) | − 0.53 (− 1.42, 0.35) | − 0.37 (− 1.32, 0.57) |
Large | 0.92 (− 0.03, 1.87) | 1.58 (0.49, 2.67)** | 1.34 (0.35, 2.32)** | 1.58 (0.47, 2.68)** |
Proportion PCP | − 0.02 (− 2.70, 2.73) | 0.58 (− 2.16, 3.32) | 0.12 (− 2.59, 2.84) | 0.33 (− 2.42, 3.08) |
Proportion male | − 5.22 (− 9.91, − 0.54)* | − 6.63 (− 11.43, − 1.82)** | − 6.69 (− 11.49, − 1.89)** | − 7.19 (− 12.10, − 2.28)** |
RUCA tier | ||||
Urban | Referent | Referent | Referent | Referent |
Non-urban | − 0.82 (− 2.06, 0.42) | − 0.95 (− 2.18, 0.29) | − 0.85 (− 2.08, 0.38) | − 0.91 (− 2.15, 0.32) |
Percent below poverty | − 0.01 (− 0.09, 0.07) | − 0.01 (− 0.10, 0.07) | − 0.01 (− 0.09, 0.08) | − 0.01 (− 0.09, 0.08) |
Patient count | ||||
Low | Referent | Referent | Referent | Referent |
Medium | 0.86 (− 0.03, 1.75) | 0.82 (− 0.07, 1.70) | 0.75 (− 0.13, 1.64) | 0.76 (− 0.12, 1.65) |
High | 0.69 (− 0.21, 1.59) | 0.63 (− 0.27, 1.52) | 0.58 (− 0.31, 1.48) | 0.57 (− 0.32, 1.47) |
Getting timely care, appointments, and information, mean (sd) = 58.9 (6.1) | ||||
Density | N/A | 4.12 (− 1.16, 9.81) | N/A | − 0.57 (− 8.50, 7.37) |
Clustering | N/A | N/A | 4.63 (0.21, 9.06)* | 4.80 (− 0.19, 9.79) |
Practice size | ||||
Small | Referent | Referent | Referent | Referent |
Medium | − 0.11 (− 1.58, 1.36) | 0.44 (− 1.07, 1.95) | 0.29 (− 1.15, 1.74) | 0.26 (− 1.26, 1.78) |
Large | − 0.23 (− 1.79, 1.34) | 0.33 (− 1.38, 2.03) | 0.35 (− 1.24, 1.93) | 0.30 (− 1.44, 2.04) |
Proportion PCP | 0.76 (− 3.72, 5.24) | 1.60 (− 2.77, 5.98) | 1.51 (− 2.88, 5.90) | 1.49 (− 2.92, 5.90) |
Proportion male | 5.95 (− 1.97, 13.86) | 4.94 (− 2.81, 12.70) | 4.39 (− 3.50, 12.27) | 4.46 (− 3.51, 12.44) |
RUCA tier | ||||
Urban | Referent | Referent | Referent | Referent |
Non-urban | − 3.16 (− 5.29, − 1.02)** | − 3.27 (− 5.35, − 1.19)** | − 3.18 (− 5.25, − 1.10)** | − 3.16 (− 5.25, − 1.06)** |
Percent below poverty | − 0.18 (− 0.32, − 0.04)* | − 0.16 (− 0.29, − 0.02)* | − 0.15 (− 0.29, − 0.01)* | − 0.15 (− 0.29, − 0.01)* |
Patient count | ||||
Low | Referent | Referent | Referent | Referent |
Medium | − 0.43 (− 1.91, 1.06) | − 0.67 (− 2.11, 0.78) | − 0.83 (− 2.29, 0.62) | − 0.84 (− 2.30, 0.62) |
High | 1.09 (− 0.42, 2.60) | 0.86 (− 0.61, 2.34) | 0.71 (− 0.76, 2.19) | 0.71 (− 0.77, 2.19) |
The estimates represent the change in the expected value of the patient experience measure with a 1-unit increase in the predictor. The effect size can be assessed by comparing the size of the estimate with the standard deviation of the experience measure. SE, standard error; sd, standard deviation. *p < 0.05; **p < 0.01
We then estimated the adjusted effects of the 30-day patient-sharing physician network density and clustering on patient experience measures in separate models (Table 4, models 2a and 2b, respectively). We found physician network density was positively associated with patients’ perception of “Clinicians working together for your care” (p = 0.02). Physician network clustering was positively associated with “Clinicians working together for your care” (p = 0.007) and “Getting timely care, appointments, and information” (p = 0.04). Neither physician network density nor clustering was significantly associated with “Between visit communication”.
Model 3 in Table 4 reports the adjusted estimated relationships between group practice characteristics and patient experience measures for the model including both network density and clustering. Physician groups with greater levels of clustering showed higher scores for “Clinicians working together for your care” (p = 0.05), but the association with “Getting timely care, appointments, and information” was slightly attenuated (p = 0.06). The association between network density and “Clinicians working together for your care” was not detected when network clustering was included in the model (p = 0.36). The significant associations between the non-network group practice characteristics and patient experience measures observed in model 1 were robust to the inclusion of physician network density and clustering.
Finally, we hypothesized that patient sharing within 30 days was more likely to detect meaningful physician relationships, and therefore more likely to influence the patient experience, compared with the patient sharing within 180 days. We examined the associations between group practice density and clustering based on the 180-day patient-sharing network and the patient experience measures (Appendix Table 1, online). We found that groups with higher scores for “Clinicians working together for your care” had higher network density (Est (95 % CI) = 4.59 (0.48, 8.70), p = 0.03) and clustering (Est (95 % CI) = 3.03 (0.22, 5.85), p = 0.03). However, we no longer observed a significant association between “Getting timely care, appointments, and information” and network clustering.
DISCUSSION
Patient sharing between physicians can be represented as edges in a network, and the structure of these networks can act as a facilitator of or barrier to care coordination and the patient experience. The aim of this study was to determine whether the extent of patient sharing among physicians reflects patients’ perceived care experience. We observed that physician group practices with better scores for “Clinicians working together for your care” had higher network density and network clustering. We also found that physician group practices with better scores for “Getting timely care, appointments, and information” had higher network clustering. These results indicate that physician networks constructed using administrative data on shared patients are likely detecting care patterns with meaningful impact on patient experience, in addition to the previously demonstrated relationships with quality and costs.3, 8–17
Our study suggests that physician network clustering may have a stronger relationship with patient experience measures compared with density. Increased patient-sharing relationships among sets of three physicians within a practice would lead to greater levels of clustering and may reflect referral relationships or multidisciplinary care teams that facilitate timely scheduling of appointments, the sharing of medical records, and coordination of prescription medicines. That is, enhancing patient-sharing patterns which encourage tightly knit sub-groups of physicians in the practice would be more likely to have a positive impact on the patient experience rather than increasing patient-sharing across the entire practice. We also observed that the proportion of PCPs in the group practice was the only study variable associated with clustering and not density, suggesting PCPs may specifically enhance the sharing of patients among teams of three physicians. A future extension of this work could be to explore the degree to which higher clustering reflects greater multidisciplinary care versus greater within-specialty patient-sharing. Initiatives aiming to improve care coordination may benefit from considering how existing provider relationships are organized around patient care. For example, community detection in physician networks has been used to identify groups of physicians who may be best suited to become an accountable care organization based on naturally occurring patient-sharing relationships.22
Other group practice characteristics were also found to be associated with patient experience measures. We found that better scores for “Between visit communication” were observed in physician groups with higher proportions of PCPs, suggesting that the offices of PCPs may be more likely to have effective asynchronous care. We also found that group practices in non-urban areas and with higher poverty had lower scores for “Getting timely care, appointments, and information.” This finding is concordant with other research evaluating barriers to access to care in lower socioeconomic or rural areas.23
We explored whether the time frame used to define an edge between physicians in the patient-sharing network impacted the associations between network measures and the patient experience measures. The associations between network density and clustering and “Clinicians working together for your care” were observed when evaluating both the 30-day and the 180-day patient-sharing networks, suggesting that there may be established patient-sharing patterns utilized over shorter and longer time frames within group practices that facilitate coordination among physicians. However, we found that the relationship with “Getting timely care, appointments, and information” was observed when analyzing the 30-day, but not the 180-day, patient-sharing network. That is, the signal was lost when the time frame used to determine a network edge was increased. It is possible that a network with more patient sharing within 30 days facilitates efficient referrals and increases the likelihood that patients report receiving timely care.
This study has several limitations. First, the patient experience measures were aggregated by physician group practice, and we were unable to account for patient clinical factors, such as disease severity, that could impact care experiences and perceptions of care coordination. However, if we posit that patients with more severe disease (i) are at a higher risk of receiving less coordinated care and (ii) see a greater number of physicians and therefore lead to more patient-sharing relationships among physicians in the group, we believe that lacking these data biases towards the null. Second, we were not able to evaluate patient care density, a network-based care coordination measure reflective of the amount of shared patients among the set of physicians caring for an individual patient.3 Third, these networks were not constructed around a specific condition. Patient-sharing networks based on the care of patients with a specific condition, and the network position of individual physicians or specialties within these networks, may be more amenable for detecting larger effects on the patient experience. Fourth, the physician networks and patient-reported outcomes are specific to Medicare beneficiaries and may not be generalizable to the care of patients less than 65 years of age or in managed care. Finally, due to the observational study design, our results cannot be interpreted as causal.
In conclusion, this study suggests that the extent of patient sharing within physician networks constructed from Medicare claims is associated with patients’ reported experience of care. Researchers and policymakers may consider evaluating or intervening on the patient-sharing networks within physician groups when designing strategies for group practices to optimize care coordination.
Electronic Supplementary Material
(DOCX 21 kb)
Acknowledgments
The authors would like to acknowledge Andrew Schaefer for his assistance in obtaining the Census data and RUCA codes used in the analyses.
Funding Information
This study was supported by NIH NIA P01AG019783 and NIH NIGMS P20GM104416.
Data Availability
The datasets analyzed during the current study are made available by the Centers for Medicare and Medicaid Services (CMS) from the Physician Compare website (https://data.medicare.gov/data/physician-compare) and the Physician Shared Patients Patterns website (https://www.cms.gov/Regulations-and-Guidance/Legislation/FOIA/Referral-Data-FAQs.html).
Compliance with Ethical Standards
Conflict of Interest
The authors declare that they do not have a conflict of interest.
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. McDonald KM, Sundaram V, Bravata DM, et al. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies (Vol. 7: Care Coordination). 2007. https://www.ncbi.nlm.nih.gov/books/NBK44015/. Accessed October 2018.
- 2. Schultz EM, Pineda N, Lonhart J, Davies SM, McDonald KM. A systematic review of the care coordination measurement landscape. BMC Health Serv Res. 2013;13(1):443. doi: 10.1186/1472-6963-13-119.
- 3. Pollack CE, Weissman GE, Lemke KW, Hussey PS, Weiner JP. Patient Sharing Among Physicians and Costs of Care: A Network Analytic Approach to Care Coordination Using Claims Data. J Gen Intern Med. 2012;28(3):459–465. doi: 10.1007/s11606-012-2104-7.
- 4. Mandl KD, Olson KL, Mines D, Liu C, Tian F. Provider collaboration: cohesion, constellations, and shared patients. J Gen Intern Med. 2014;29(11):1499–1505. doi: 10.1007/s11606-014-2964-0.
- 5. Uddin S, Hamra J, Hossain L. Mapping and modeling of physician collaboration network. Stat Med. 2013;32(20):3539–3551. doi: 10.1002/sim.5770.
- 6. Bynum JPW, Ross JS. A measure of care coordination? J Gen Intern Med. 2013;28(3):336–338. doi: 10.1007/s11606-012-2269-0.
- 7. Moen EL, Kapadia NS, O’Malley AJ, Onega T. Evaluating breast cancer care coordination at a rural National Cancer Institute Comprehensive Cancer Center using network analysis and geospatial methods. Cancer Epidemiol Biomark Prev. 2019;28(3):455–461. doi: 10.1158/1055-9965.EPI-18-0771.
- 8. Moen EL, Austin AM, Bynum JP, Skinner JS, O’Malley AJ. An analysis of patient-sharing physician networks and implantable cardioverter defibrillator therapy. Health Serv Outcome Res Methodol. 2016;16(3):132–153. doi: 10.1007/s10742-016-0152-x.
- 9. Moen EL, Bynum JP, Austin AM, Skinner JS, Chakraborti G, O’Malley AJ. Assessing Variation in Implantable Cardioverter Defibrillator Therapy Guideline Adherence With Physician and Hospital Patient-sharing Networks. Med Care. 2018;56(4):350–357. doi: 10.1097/MLR.0000000000000883.
- 10. Pollack CE, Weissman G, Bekelman J, Liao K, Armstrong K. Physician Social Networks and Variation in Prostate Cancer Treatment in Three Cities. Health Serv Res. 2011;47(1pt2):380–403. doi: 10.1111/j.1475-6773.2011.01331.x.
- 11. Pollack CE, Frick KD, Herbert RJ, et al. It’s who you know: patient-sharing, quality, and costs of cancer survivorship care. J Cancer Surviv. 2014;8(2):156–166. doi: 10.1007/s11764-014-0349-3.
- 12. Landon BE, Keating NL, Barnett ML, et al. Variation in patient-sharing networks of physicians across the United States. JAMA. 2012;308(3):265–273. doi: 10.1001/jama.2012.7615.
- 13. Barnett ML, Christakis NA, O’Malley AJ, Onnela J-P, Keating NL, Landon BE. Physician Patient-sharing Networks and the Cost and Intensity of Care in US Hospitals. Med Care. 2012;50:1–9. doi: 10.1097/MLR.0b013e31822dcef7.
- 14. Uddin S, Hossain L, Kelaher M. Effect of physician collaboration network on hospitalization cost and readmission rate. Eur J Pub Health. 2012;22(5):629–633. doi: 10.1093/eurpub/ckr153.
- 15. Ong M-S, Olson KL, Chadwick L, Liu C, Mandl KD. The Impact of Provider Networks on the Co-Prescriptions of Interacting Drugs: A Claims-Based Analysis. Drug Saf. 2016:1–10.
- 16. Ong M-S, Olson KL, Cami A, et al. Provider Patient-Sharing Networks and Multiple-Provider Prescribing of Benzodiazepines. J Gen Intern Med. 2015;31(2):164–171. doi: 10.1007/s11606-015-3470-8.
- 17. Hollingsworth JM, Funk RJ, Garrison SA, et al. Association Between Physician Teamwork and Health System Outcomes After Coronary Artery Bypass Grafting. Circ Cardiovasc Qual Outcomes. 2016;9(6):641–648. doi: 10.1161/CIRCOUTCOMES.116.002714.
- 18. Barnett ML, Landon BE, O’Malley AJ, Keating NL, Christakis NA. Mapping Physician Networks with Self-Reported and Administrative Data. Health Serv Res. 2011;46(5):1592–1609. doi: 10.1111/j.1475-6773.2011.01262.x.
- 19. Csárdi G, Nepusz T. The igraph software package for complex network research. InterJournal, Complex Systems. 2006:1695.
- 20. R Development Core Team. R: A Language and Environment for Statistical Computing. http://www.R-project.org.
- 21. Kamada T, Kawai S. An algorithm for drawing general undirected graphs. Inf Process Lett. 1989;31(1):7–15. doi: 10.1016/0020-0190(89)90102-6.
- 22. Landon BE, Onnela JP, Keating NL, et al. Using Administrative Data to Identify Naturally Occurring Networks of Physicians. Med Care. 2013;51(8):715–721. doi: 10.1097/MLR.0b013e3182977991.
- 23. Douthit N, Kiv S, Dwolatzky T, Biswas S. Exposing some important barriers to health care access in the rural USA. Public Health. 2015;129(6):611–620. doi: 10.1016/j.puhe.2015.04.001.