Abstract
Objective
To identify factors that promote the effective performance of accountable care organizations (ACOs) in the Medicare Shared Savings Program.
Data Sources/Study Setting
Data come from a convenience sample of 16 Medicare Shared Savings Program ACOs that were organized around large physician groups. We use claims data from the Centers for Medicare and Medicaid Services (CMS) and data from 60 interviews at three high‐performing and three low‐performing ACOs.
Study Design
Explanatory sequential design, using qualitative data to account for patterns observed in quantitative assessment of ACO performance.
Data Collection Methods
A total of 16 ACOs were first rank‐ordered on measures of cost and quality of care; we then selected three high and three low performers for site visits; interview data were content‐analyzed.
Principal Findings
Results identify several factors that distinguish high‐ from low‐performing ACOs: (1) collaboration with hospitals; (2) effective physician group practice prior to ACO engagement; (3) trusted, long‐standing physician leaders focused on improving performance; (4) sophisticated use of information systems; (5) effective feedback to physicians; and (6) embedded care coordinators.
Conclusions
Several near‐term interventions can improve ACO performance: use of embedded care coordinators; use of local and regional health information systems; and timely feedback of performance data. However, longer‐term interventions are needed to promote physician–hospital collaboration and to develop the skills of physician leaders. CMS and other stakeholders need realistic timelines for ACO performance.
Keywords: Accountable care organizations, Patient Protection and Affordable Care Act, physician leadership
What factors differentiate high‐ from low‐performing accountable care organizations (ACOs) in the Medicare Shared Savings Program (MSSP)? Addressing this question is critically important to inform decisions by policy makers, ACO leaders, and researchers as they redesign, implement, and evaluate these still‐new models of service delivery (McClellan et al. 2010).
Indeed, research to date shows mixed results for the performance of ACOs in the MSSP. McWilliams et al. (2016) examined MSSP ACOs formed in 2012 and 2013 and found evidence of small, but meaningful, reductions in spending with unchanged or improved quality of care, but only for ACOs that entered the program in 2012. The Centers for Medicare and Medicaid Services (CMS) reported results for the 2014 performance of all 333 MSSP ACOs: although these ACOs improved on 30 of 33 quality measures compared to 2013, only 28 percent met targets for cost control and thus earned a shared savings payment. The number of MSSP ACOs that received shared savings bonuses increased slightly to 30 percent in 2015 (Muchmore 2016). Very little empirical research has aimed to account for the variation observed in the performance of ACOs to date.
Given the lack of prior research on factors that differentiate high‐ from low‐performing MSSP ACOs, we conducted an inductive study that aimed to identify such factors. We guided data collection for the study with a broad framework that includes components at four levels of analysis: (1) government policy—the role of state and federal policies that might hinder or support ACO performance; (2) local market and community factors, including the dynamics of market supply, demand, competition, and norms for cooperation; (3) organizational‐managerial factors—the role of governing boards, leadership, and management systems (e.g., for performance feedback); and (4) socio‐technical factors—key characteristics of providers and patients (e.g., use of care coordinators), and the use of information technology to link these actors (D'Aunno et al. 2015).
We draw on data from a convenience sample of 16 ACOs located in 12 states that involved contracts between an insurance plan and groups of primary care physicians (i.e., excluding hospitals). We argue that our focus on primary care physician groups is useful given their important role in ACOs generally (Davis, Abrams, and Stremikis 2011) and especially because McWilliams et al. (2016) found that cost savings were greater in MSSP ACOs that consisted of independent primary care groups than in hospital‐integrated groups. Further, results from the first national survey of ACOs show that physicians govern over half of ACOs (Shortell, Casalino, and Fisher 2010; Shortell et al. 2014).
Methods
Study Design
We used an explanatory sequential design in which quantitative data initially are analyzed to inform the collection of qualitative data to help explain patterns observed in the quantitative data (Fetters, Curry, and Creswell 2013). The study data come from a convenience sample of 16 MSSP ACOs that an insurance firm founded in 2012, the year that CMS first launched ACOs (Table 1). Of course, convenience samples are not necessarily representative of any population of organizations. But this sample has several advantages: it is relatively large (5 percent of all MSSP ACOs in 2012); it spans 12 states; and the ACOs were founded at the same time and had the same parent organization (an insurance plan).
Table 1.
ACO Label | Geographic Region | NCHS Classificationb | Total Members | Percent of Members with Chronic Disease | Average HCC Score | Rank on CMS Metrics |
---|---|---|---|---|---|---|
Aa | Mid‐Atlantic | Medium metro | 12,083 | 65.1% | 0.98 | 1 |
B | New England | Small metro | 5,984 | 54.7% | 0.85 | 2 |
C | South Atlantic | Small metro | 8,633 | 58.7% | 0.92 | 3 |
Da | Mid‐Atlantic | Large metro | 12,745 | 54.2% | 1.01 | 4 |
E | South Atlantic | Small metro | 8,441 | 41.4% | 0.98 | 5 |
F | New England | Micropolitan | 8,173 | 53.1% | 1.02 | 6 |
Ga | West South Central | Large metro | 27,336 | 25.0% | 1.06 | 7 |
H | South Atlantic | Medium metro | 5,182 | 59.8% | 1.05 | 8 |
I | East North Central | Large metro | 9,298 | 62.2% | 1.36 | 9 |
J | South Atlantic | Medium metro | 17,648 | 46.1% | 0.87 | 10 |
K | South Atlantic | Medium metro | 7,966 | 70.2% | 1.02 | 11 |
La | South Atlantic | Large metro | 11,290 | 56.7% | 0.97 | 12 |
Ma | South Atlantic | Medium metro | 8,667 | 57.7% | 1.04 | 13 |
Na | West South Central | Large metro | 7,049 | 71.0% | 0.99 | 14 |
O | New England | Micropolitan | 5,485 | 59.8% | 1.00 | 15 |
P | East South Central | Medium metro | 5,518 | 60.5% | 1.03 | 16 |
a Site visit ACOs.
b National Center for Health Statistics (NCHS) six‐level urban–rural classification scheme for U.S. counties and county‐equivalent entities. These classifications are: large metropolitan statistical area (MSA), population of 1 million or more; medium metropolitan MSA, population of 250,000–999,999; small metropolitan MSA, population less than 250,000; micropolitan urban cluster, population of 10,000–49,999.
Sample
Of the 16 ACOs in our sample, we selected three high‐ and three low‐performing ACOs for qualitative data collection using three criteria. First, as discussed in more detail below, we ranked the ACOs on their performance on measures of avoidable costs and quality of care using data from the Centers for Medicare and Medicaid Services (CMS). Second, we purposively sampled ACOs to promote geographic diversity (i.e., we have two ACOs from the Northeast; one from the Mid‐Atlantic; two in the Southwest; and one in the Southeast).
Third, final site selection was informed by a systematic assessment of ACO performance that top managers in the insurance plan partnering with the physician groups conducted in 2013. Managers independently created a ranking of the ACOs based on a qualitative assessment of competency measures, including leadership ability, clinical operations, and quality of care. Each ACO was rated on a three‐point scale: needs improvement; adequate; and high performing. This ranking corresponded closely with the ranking we created from CMS claims data. Thus, we selected case study sites whose rankings on our performance measure and on the insurance plan's assessment overlapped.
Specifically, as high performers, we selected ACOs A, D, and G; ACOs L, M, and N were selected as low performers (Table 1). We did not select ACO‐P because it had disbanded at the time of the study and we anticipated great difficulty in scheduling a site visit. We did not select ACOs B and E because, as noted, we wanted to increase the geographic diversity of the sample. Finally, we did not sample ACOs iteratively with ongoing analysis; although doing so would have had advantages, we faced practical constraints in scheduling ACOs for site visits and completing the study in a timely way.
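The correspondence between the claims‐based ranking and the insurance plan's qualitative assessment is described above only in narrative terms. As an illustration of how agreement and overlap between the two sources could be checked, the sketch below computes a rank correlation and retains the ACOs on which the two classifications agree; the data frame columns, the tercile cut, and the scoring of the three‐point scale are hypothetical assumptions for illustration, not the authors' procedure.

```python
# Illustrative sketch only: cross-check the claims-based rank against the
# insurer's three-point competency rating and keep the ACOs where the two
# classifications overlap. Column names and cut points are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

RATING_ORDER = {"high performing": 3, "adequate": 2, "needs improvement": 1}

def select_overlapping_acos(df: pd.DataFrame) -> pd.DataFrame:
    """df columns (assumed): 'aco', 'cms_rank' (1 = best of 16), 'insurer_rating'."""
    df = df.assign(rating_score=df["insurer_rating"].map(RATING_ORDER))

    # Rough agreement check between the two rankings (higher score = better,
    # so the CMS rank is negated before correlating).
    rho, _ = spearmanr(-df["cms_rank"], df["rating_score"])
    print(f"Spearman agreement between the two rankings: {rho:.2f}")

    # Bucket the CMS rank into terciles (3 = top third) and keep the ACOs on
    # which the claims-based tercile matches the insurer's rating.
    df["cms_tercile"] = pd.qcut(df["cms_rank"], 3, labels=[3, 2, 1]).astype(int)
    return df[df["cms_tercile"] == df["rating_score"]]
```

In the study itself, final selection also weighed geographic diversity and practical scheduling constraints, as described above.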
CMS Measures of ACO Performance
We measured the rates of three types of avoidable costs for individuals enrolled in each ACO: avoidable inpatient admissions (AHRQ ambulatory care–sensitive conditions); readmission to an inpatient facility within 30 days of discharge; and emergency department visits. To measure quality of care, we used CMS‐sanctioned Healthcare Effectiveness Data and Information Set (HEDIS) measures, considered to be a very “careful and systematic” approach to performance measurement (Lied and Sheingold 2001; NCQA 2007). We focused on quality of care for three common chronic conditions: diabetes (three measures); congestive heart failure (two measures); and chronic obstructive pulmonary disease (three measures). In addition to examining performance in the first year of the MSSP (2012), we also examined change in performance between the baseline year (2011) and the first performance year (2012).
We summed the scores, unweighted, on the 11 performance measures (i.e., three measures of cost and eight measures of quality of care) to create a single index. This index also included a measure of the change in summed scores, if any, between 2011 and 2012, to monitor possible changes in ACO performance.
This analysis allowed us to reliably distinguish high‐ versus low‐performing ACOs in the study sample (Table 1). First, all high‐performing ACOs had service use (cost) rates that were below average for the group of 16 ACOs, while all low‐performing ACOs had rates above the group average. Second, high‐performing ACOs tended to improve their performance on the study measures between 2011 and 2012, while the low‐performing ACOs often showed decreased performance.
Differences between the high‐ and low‐performing ACOs do not seem to depend on severity of illness among their members. Both groups had similar proportions of members with chronic diseases and similar CMS hierarchical condition category (HCC) risk scores (Kautter et al. 2014). Results also show a relationship between ACO performance and membership size: ACOs with larger patient pools tended to perform better than those with fewer members (Table 1). We conducted sensitivity analyses with alternative approaches to rank‐ordering, and all alternatives yielded the same ranking.
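To make the ranking procedure concrete, the following is a minimal sketch, under stated assumptions, of how an unweighted composite of the 11 measures plus the 2011–2012 change score could be computed and rank‐ordered. The column names, the z‐score standardization used to place measures on a common scale, and the rank‐sum alternative shown as a sensitivity check are our illustrative assumptions rather than the study's exact specification.

```python
# Minimal sketch (illustrative, not the study's exact specification): align the
# direction of 3 cost measures (lower is better) and 8 HEDIS quality measures
# (higher is better), standardize, sum without weights, add the 2011-2012
# change in the summed score, and rank the 16 ACOs on the total.
import pandas as pd

COST_MEASURES = ["avoidable_admits", "readmit_30d", "ed_visits"]      # lower is better
QUALITY_MEASURES = ["dm_1", "dm_2", "dm_3", "chf_1", "chf_2",
                    "copd_1", "copd_2", "copd_3"]                      # higher is better

def composite_index(df: pd.DataFrame) -> pd.Series:
    """Unweighted sum of direction-aligned, z-scored measures (higher = better)."""
    aligned = pd.concat([-df[COST_MEASURES], df[QUALITY_MEASURES]], axis=1)
    z = (aligned - aligned.mean()) / aligned.std(ddof=0)
    return z.sum(axis=1)

def rank_acos(perf_2012: pd.DataFrame, perf_2011: pd.DataFrame) -> pd.DataFrame:
    """Rank ACOs on the 2012 index plus the 2011-to-2012 change in the index.
    Both data frames are assumed to be indexed by ACO label."""
    idx_2012 = composite_index(perf_2012)
    change = idx_2012 - composite_index(perf_2011)
    out = pd.DataFrame({"index_2012": idx_2012, "change": change})
    out["total"] = out["index_2012"] + out["change"]
    out["rank"] = out["total"].rank(ascending=False, method="first").astype(int)
    return out.sort_values("rank")

def rank_sum_index(df: pd.DataFrame) -> pd.Series:
    """Sensitivity check: replace z-scores with within-measure ranks before
    summing (higher sum = better, consistent with composite_index)."""
    aligned = pd.concat([-df[COST_MEASURES], df[QUALITY_MEASURES]], axis=1)
    return aligned.rank(ascending=True).sum(axis=1)
```

Standardizing before summing simply keeps measures on comparable scales; as noted above, the alternative aggregation approaches we tested produced the same rank ordering of the 16 ACOs.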
Data Collection at ACO Sites
Table 2 provides details about the data collection. All the ACOs we contacted agreed to participate in the study. Using protocols (included as Appendix SA2) that we developed based on literature on accountable care, we conducted in‐person interviews with key individuals at each site, including the board chair; chief executive officer; chief financial officer; and senior manager responsible for ACO operations (McClellan et al. 2010). In addition to interviews, we collected key documents, including managers’ presentations on the assessment of ACO performance; analytic reports that the insurance firm distributed to each site; and site‐specific memos on ACO objectives. These documents were added to Atlas.ti and coded using the approach described below.
Table 2.
Site | ACO Executive Directors | Physician Board Chair | Care Coordinators and Clinical Staff | Local ACO Managers
---|---|---|---|---
Site A | 1 | 1 | 4 | 4 |
Site D | 1 | 1 | 1 | 1 |
Site G | 2 | 1 | 3 | 1 |
Site L | 2 | 1 | 2 | 2 |
Site M | 1 | 1 | 2 | 4 |
Site N | 1 | 1 | 5 | 3 |
In addition to interviews with ACO site staff, we conducted 14 interviews with informants from the insurance plan who oversaw the design and implementation of work that spanned all ACOs (e.g., legal compliance with CMS regulations). These informants provided insight on all sites, especially site D, with which they worked very closely.
In a few cases, individuals of interest were not available at the time of the site visit; two interviews were conducted by phone. At least two, and, in many instances three, members of the research team conducted interviews, most of which involved a single respondent. However, there were occasions when individuals with the same role (e.g., care coordinators) were interviewed together, and scheduling constraints at one specific site dictated that we conduct an interview with the leadership team as a group. In addition, we were able to observe a board meeting at one of the sites.
In total, we interviewed 60 respondents: 46 were involved directly with a specific site, and 14 played a central role at the insurance plan (Table 2). Interviews lasted between 30 and 180 minutes and were recorded, transcribed, and entered into Atlas.ti for coding.
Data Analysis
We used an integrated approach to analyzing and coding the data (Miles and Huberman 1994; Hsieh and Shannon 2005). First, we created and applied a structure of initial codes to serve as an organizing framework for the data. These codes were based on literature on accountable care that emphasized the importance of factors such as physician engagement and market and community context. As we collected data, members of the research team debriefed after each interview to review content and highlight key pieces of information that emerged from interviews. Following each site visit, team members distributed individual notes that they took during each interview. These notes were combined and used to guide regular, ongoing analytic meetings in which insights from each site were synthesized and compared to prior site data. We identified recurrent concepts, both within and across sites, that prior literature did not precisely capture, or missed altogether, and we incorporated these concepts into the coding structure. The six performance factors emerged from this iterative process.
In sum, although case study data may be open to bias in collection and analyses, we used data collection and analysis approaches that are known to limit such biases, including the recording and verbatim transcription of interviews; use of Atlas software in data analyses; reliability checks among the two research team members from each site visit; corroboration of interview data with records; and use of multiple (an average of 10) key respondents at each site (Yin 2013).
Results
We identified six factors that distinguish high‐ from low‐performing ACOs.
Collaborative Relationships with Local Hospitals
In comparison with low‐performing ACOs, primary care providers in high‐performing ACOs had formed collaborative relationships with local hospitals prior to ACO formation. These relationships gave them timely, consistent access to information about their patients' hospital admissions and about when, and under what conditions, their patients were being discharged. In turn, this access enabled effective planning for follow‐up services that patients might need within 30 days of discharge.
Hospitals are really essential in timely follow‐up, otherwise our data is three months old, and by the time you find out, that person has been in ER three times—Care Coordinator, High‐Performing Site ACO‐A
Further, collaborative relationships between provider groups and hospitals depended, in large part, on existing norms within the health care community:
If a patient comes in [to the hospital] and talks about who their doctor is, the hospital knows these physicians. ACO‐G has a long‐term relationship with the hospitals. There is almost an unspoken understanding of how things work … in [this city]. —Corporate CMO, High‐Performing Site ACO‐G
We note that ACO leadership and community context are related factors. Although community norms that support collaboration among providers were important, strong ACO physician leadership also played a role in building effective hospital relationships. In high‐performing ACOs, hospitals and provider groups had a long‐standing history of sharing information, and physician leaders fostered good working relationships with local delivery systems separate from their own. In contrast, in low‐performing ACOs, relationships between hospitals and ACO physicians were both weak and negative. Specifically, we observed three major challenges to effective relationships between physician groups and hospitals.
The first challenge was the geographic dispersion of physician practices: a few ACOs had practices located in multiple communities, which produced significant variation in the quality of physician group–hospital relationships. Because most practices would need to form effective relationships with local hospitals for the ACO to perform well overall, this dispersion was a barrier that the ACOs could not overcome.
Second, long‐standing competition between the ACO primary care physician groups and the local hospital's specialists hindered performance. In one ACO, the physician group had been involved in a managed care contract that limited referrals for inpatient care. Local hospitals viewed the group as a threat and refused to cooperate in providing patient information for the ACO. Indeed, the leader of this physician group reported efforts to improve relationships with local hospitals, but the hospitals were not responsive.
Similarly, in one low‐performing ACO, competition among local hospitals spilled over to hinder the work of the ACO physician group. In this case, interview data indicate that the local hospitals had focused mainly on gaining or protecting market share and, as a result, paid less attention to improving the quality or efficiency of services and were unwilling to collaborate. Moreover, these hospitals competed with each other primarily to increase their volume of patients (both inpatient and outpatient admissions); as a result, they were wary that ACO physician groups wanted to lower inpatient admissions and were less likely to cooperate with the physician group.
Third, in two low‐performing ACOs, we observed a lack of awareness and/or motivation on the part of local hospitals to respond to Medicare penalties for 30‐day readmissions. As a result, these hospitals made minimal efforts to manage patient discharges and did not respond to ACO requests to collaborate on reducing readmissions.
Finally, collaborative relationships between ACOs and local hospitals also depended in part on the power of ACO physician groups. In one high‐performing ACO, a large multispecialty group dominated local referral patterns and had the power to influence hospitals to share patient information. In two low‐performing ACOs, physician groups did not control many referrals to local hospitals and hence lacked such influence on hospitals.
The Role of High‐Performing Physician Groups Prior to ACO Formation
Evidence from the site visits indicates that high‐performing ACOs were distinguished from low‐performing ACOs in that they had relatively large, well‐established physician groups (over 200 physicians) that provided cost‐effective care prior to their involvement in the ACO. For example, interview data show that the physician group in high‐performing ACO‐A had worked to make incremental improvements in its performance for about 20 years. By incremental, we mean, for example, that a physician group had been working over a period of several years to improve its use of an electronic medical record (EMR): it took years to get all the physicians to use the EMR and more time for them to use it to its full capabilities (e.g., to manage a panel of patients with diabetes).
The low‐performing ACOs lacked physician groups with this history. As reported above, results from analyses of claims data for these ACOs show that they had lower scores on HEDIS measures and higher avoidable costs prior to ACO formation. In the time period we observed, they were not able to improve on these measures to any substantial degree.
Further, changes aimed at improving established organizations often cause an initial decrement in performance because of the substantial reorganization that performance improvement requires. For example, two low‐performing ACOs did not have EMRs for their patients: installing the technology; training physicians, other clinicians, and staff members in its use; and gaining acceptance of, and benefit from, these systems would require years to accomplish.
Effective, Long‐Serving Physician Leaders Who Focused on Building a High‐Performing Physician Group
Our data indicate that physician leaders contributed substantially to the effective performance of ACOs A, D, and G. Specifically, these physician leaders focused heavily on making incremental improvements in the performance of their groups over a long time period. This approach had several benefits: It required no major changes for physicians at any given point in time and thus minimized disruptions in their practices; physicians could experience the benefits of changes in their behavior, which, as noted above, often take time to materialize; and, finally, with the passage of time, physicians could build trust in group leaders.
Moreover, the interview data indicate that the success of these physician leaders also rested in part with their narrow focus on two related factors that specifically matter for quality improvement and cost control: use of EMRs and timely feedback on physician performance:
[The Physician Board Chair of ACO‐A] is a phenomenal leader in the [EMR] forefront. That's why ACO‐A is at the top … . He is the reason why [the EMR] is so advanced and easier to use.—Operations Manager, High‐Performing Site ACO‐A
Although the other ACOs clearly had leaders who were both well respected and long serving, these leaders did not focus on developing systems and practices to promote cost‐effective care. For example, a physician leader in one low‐performing ACO had spent years trying to improve systems for billing and reporting data that enabled the group to participate in managed care contracts, but these contracts did not require physicians to improve their performance to any significant degree. On the one hand, this approach was sensible because obtaining and retaining the managed care contracts did not require physicians to change their practices. On the other hand, this group was not well prepared for participation in an ACO.
Relatively Extensive and Sophisticated Use of Electronic Medical Records within the Group, Combined with Use of Regional Health Information Systems
All three high‐performing ACOs had relatively extensive and sophisticated use of electronic medical records (EMRs). Two of these organizations combined their EMRs with regional health information systems to obtain timely and more complete access to patient records.
The data indicate that the regional health information systems, although useful, were less important than the groups’ internal EMR systems. This is because EMR systems were useful not only for managing care for individual patients but also for managing care for panels of patients (e.g., those with similar and prevalent chronic care conditions). In contrast, the regional information systems were particularly useful for working with patients who had obtained services outside the group practice.
Effective use of both EMRs and regional information systems enabled high‐performing ACOs to make local, data‐driven decisions about which provider locations needed embedded care coordinators. For example, one of these ACOs relied on its system to determine which physician practices had the highest concentration of ACO patients, and it placed care coordinators in those six high‐volume primary care centers.
We have a single EMR that is of real time data. Care coordinators go keep track of assigned patients in real time.—Executive Director, High‐Performing Site ACO‐D
In contrast, the low‐performing ACOs did not use EMRs effectively for care coordination; in fact, two did not have EMRs.
Effective Feedback to Physicians (Independent of CMS Data)
Because they had respected and skillful leaders armed with useful EMR data on physician performance, ACO‐A and ACO‐D appear to have been able to use these data to give physicians effective feedback and improve their performance (ACO‐G did not seem to use data as effectively). ACO‐A in particular had made significant investments in information technology that allowed it to generate performance metrics in a timely way. These metrics were fed back to physicians; we observed a board meeting at ACO‐A, for example, that focused heavily on reviewing performance data and devising approaches for acting on them.
In contrast, respondents from two lower‐performing sites (ACO‐L, ACO‐M) stated that presenting data to physicians to change their behavior requires forethought and attention to local and organizational culture, sometimes to the detriment of transparency. In ACO‐L, one respondent noted that because many of the providers are sensitive to being compared to one another, ranking them against each other could create problems within the group. In ACO‐M, respondents described a sensitivity to appearing incompetent that stems from being located in an area with a reputation for being unsophisticated; this requires presenting data tactfully and with minimal errors. Neither of these two ACOs had found ways to overcome these obstacles to providing effective feedback.
Embedding Care Coordinators in Physician Practices
High‐performing ACOs were able to successfully create physical space for, and incorporate, care coordinators into large primary care practice sites, while low performers described ongoing negotiations regarding the inclusion of care coordinators into the existing primary care workforce.
Most often, the care coordinators at the high‐performing ACOs were nurses with relatively deep experience in health care, ranging from acute inpatient care to nursing homes and home care, who contacted patients in person or by phone. At some sites, however, care coordinators described beneficiaries as suffering from problems that are not narrowly health related (e.g., they lack money to pay for medications or transportation to physicians’ offices; social isolation and loneliness). As a result, much of the day‐to‐day work of care coordinators in these ACOs focused on working with individuals to connect them to resources such as hearing aids and wheelchair ramps.
Moreover, care coordinators indicated that, in some cases, social workers would be better equipped to work with their patients; at some sites, social workers either have been hired for these roles or managers are considering hiring them.
The data also indicate that physician leaders in high‐performing ACOs played an important role by convincing their colleagues to create physical space for, and work with, care coordinators in their practices. In contrast, dedicated care coordinators at low‐performing ACOs described struggling to effectively collaborate with physicians who, in the absence of strong physician leadership, had not yet supported the care coordination model or the ACO:
I sent a fax to doctors saying “is there anything we can help with?” I had two or three doctors respond. I don't think the doctors here bought in. They were told, “You are part of an ACO, now you're going to do this.” And they went, “Yeah whatever.”—Care Coordinator, Low‐Performing Site ACO‐L
Physician resistance to the use of on‐site care coordinators stemmed from a few different factors, including (1) concern that a care coordinator would interfere with relationships with patients; (2) practical concerns about office space (where to put the coordinators?) or money (how to pay for coordinators?); and (3) reluctance to change among physicians accustomed to practicing medicine a certain way for many years.
Discussion
A fundamental and distinctive insight from this study—one that our conceptual approach did not anticipate—is that high‐performing ACOs consisted of well‐established physician groups with a history of providing cost‐effective patient care prior to their involvement in an ACO. This implies that ACOs whose participants had not worked together, or that had not built effective care management systems, prior to ACO formation may take years to reach their potential (Highfill and Ozcan 2016; McWilliams et al. 2016). These results may explain why fewer than one‐third of MSSP ACOs earned bonuses for cost savings in the first 3 years of the program.
Our results are consistent with much research on organizational change in the management literature: substantial performance improvement typically takes at least 2 to 3 years to accomplish (Van de Ven and Poole 1995; Poole and Van de Ven 2004). ACOs of any type in which participant organizations had not worked together prior to ACO formation need time to build partnerships across organizational boundaries.
An alternative explanation is that many health care providers have the potential for high performance in an ACO at its inception, but incentives in the MSSP have been too weak thus far to motivate these organizations to take advantage of their capabilities (Chernew, McGuire, and McWilliams 2014). In this view, providers may have the elements that high performance requires (e.g., information systems), but they simply lack motivation. Given this possibility, CMS is strengthening incentives not only in its Next Generation ACO model (which includes 18 ACOs in its first year, 2016) but also more broadly in its new payment plan for all Medicare providers (2015).
We want to underscore the implications of these alternative explanations. To the extent that it takes years to build effective ACOs (e.g., because providers need to learn how to work together), CMS's strengthening of incentives for performance improvement is not likely to produce near‐term results. Whether policy makers, ACO leaders, and other stakeholders will have the patience that ACO development may require is thus a critical open question. Of course, compared to organizations with a history of performing well, low performers have the opportunity to make more dramatic improvements, but they and other stakeholders will likely need to invest effort and funds before seeing returns. If only a fraction of ACOs perform well and low performers opt out of the MSSP, the net impact of Medicare ACOs will be minimal.
Lessons from the Perspective of a Conceptual Framework
Our results also indicate that ACO leaders need to focus on several key activities to achieve high performance. First, at the community level, the results suggest that physician‐based ACOs need collaborative relationships with local hospitals (Fisher, McClellan, and Safran 2011). Primary care may be the lynchpin of accountable care, but physician‐led ACOs without relationships with, or sufficient influence over, community hospitals will not have access to the necessary information to effectively manage transitions of care.
Second, at the organizational‐managerial level, effective collaboration with local hospitals and physician groups' provision of cost‐effective patient care are more likely to occur when an ACO has strong, long‐standing, and highly trusted physician leadership. Such leadership is necessary to gain buy‐in from local office‐based providers for critical changes in their practices, including advanced use of information systems and embedding care coordinators in practices. Further, it appears that respected and skillful leaders who are armed with useful and timely EMR data on physician performance are able to use these data for effective feedback to physicians to improve their performance.
Third, from a socio‐technical perspective, health information systems, especially EMR systems internal to physician groups, are a necessary component of overall performance. ACOs need timely access to patient information, and such access clearly distinguishes high versus low performers.
Last, we found that embedding care coordinators in large physician practice sites encourages timely and effective care coordination (Shojania et al. 2004; McCarthy, Klein, and Cohen 2014). Doing so is a challenge that involves gaining physicians' support for including care coordinators in their practices and practice sites. Another less‐well‐discussed challenge is that effective care coordination needs to take into account the psychosocial needs of a highly vulnerable population, suggesting that social workers and community health workers should complement the work of nurses and traditional medical providers.
Alternative Explanations
Notably, all six ACOs shared several factors that did not seem to differentiate their performance: (1) exposure to the same CMS policies and implementation problems; (2) similar challenges and benefits from their relationship with the insurance firm's managers, systems, and resources; and (3) similar capital investment by the local physician groups (i.e., no physician partners changed their patterns of investment much after joining the ACO).
Limitations
This study has several limitations. First, our convenience sample consisted entirely of MSSP ACOs that were physician‐led (excluding hospitals); to some extent, this limits generalizability of results. Yet, as noted above, the performance of ACOs that include hospitals, or that are hospital‐led, will likely depend, in part, on the performance of physician groups, especially primary care groups that are the focus of this study (McWilliams et al. 2016).
Second, the ACOs we examined were partnered with an insurance firm; this atypical arrangement may also limit the generalizability of our results, although we did not find that the insurance firm played a role in the performance of the ACOs we examined. We note that the insurance firm took a hands‐off approach to the ACOs, believing that local physician leaders should take charge. Further, the insurance firm had to rely on ACO performance data from CMS, and these data were typically available too late to be useful.
Third, we examined a relatively small number of cases, opening the possibility that we missed other differentiators of ACO performance. To limit this threat, we analyzed the data with the aim both of assessing the reliability of results across cases and of identifying factors that are not common across cases (Yin 2013).
Notwithstanding these limitations, our results should be useful for policy makers and leaders of MSSP ACOs and ACOs in general. Indeed, it seems likely that all ACOs face challenges and opportunities that are similar to those we examined. The results suggest some factors that are amenable to near‐term interventions to improve ACO performance, including the use of embedded care coordinators; development and use of local and regional health information systems; and timely feedback of performance data to physicians.
Supporting information
Acknowledgments
Joint Acknowledgment/Disclosure Statement: The authors report grants from Universal American Insurance Co. during the conduct of the study. Readers could perceive that Universal American (UA) might have influenced the results because UA paid for the study, which examines ACOs that UA founded. However, UA signed a standard contract with Columbia University that gives us total control of the project, including the right to publish the results without interference. Further, the Columbia University Institutional Review Board (IRB) approved the study.
Disclosures: None.
Disclaimers: None.
References
- Chernew, M., McGuire T., and McWilliams J. M.. 2014. “Refining the ACO Program: Issues and Options.” Healthcare Markets and Regulation Lab, Department of Health Care Policy, Harvard Medical School: November.
- D'Aunno, T., Friedmann P. D., Chen Q., and Wilson D. M.. 2015. “Integration of Substance Abuse Treatment Organizations into Accountable Care Organizations: Results from a National Survey.” Journal of Health Politics, Policy, and Law 40 (4): 797–819.
- Davis, K., Abrams M., and Stremikis K.. 2011. “How the Affordable Care Act Will Strengthen the Nation's Primary Care Foundation.” Journal of General Internal Medicine 26 (10): 1201–03.
- Fetters, M. D., Curry L. A., and Creswell J. W.. 2013. “Achieving Integration in Mixed Methods Designs—Principles and Practices.” Health Services Research 48 (6pt2): 2134–56.
- Fisher, E. S., McClellan M. B., and Safran D. G.. 2011. “Building the Path to Accountable Care.” New England Journal of Medicine 365 (26): 2445–47.
- Highfill, T., and Ozcan Y.. 2016. “Productivity and Quality of Hospitals that Joined the Medicare Shared Savings Accountable Care Organization Program.” International Journal of Healthcare Management 9 (3): 210–17. 10.1179/2047971915Y.0000000020.
- Hsieh, H.‐F., and Shannon S. E.. 2005. “Three Approaches to Qualitative Content Analysis.” Qualitative Health Research 15 (9): 1277–88.
- Kautter, J., Pope G. C., Ingber M., Freeman S., Patterson L., Cohen M., and Keenan P.. 2014. “The HHS‐HCC Risk Adjustment Model for Individual and Small Group Markets under the Affordable Care Act.” Medicare and Medicaid Research Review 4 (3).
- Lied, T. R., and Sheingold S.. 2001. “HEDIS Performance Trends in Medicare Managed Care.” Health Care Financing Review 23 (1): 149–60.
- McCarthy, D., Klein S., and Cohen A.. 2014. “The Road to Accountable Care: Building Systems for Population Health Management.” The Commonwealth Fund.
- McClellan, M., McKethan A. N., Lewis J. L., Roski J., and Fisher E. S.. 2010. “A National Strategy to Put Accountable Care into Practice.” Health Affairs 29 (5): 982–90.
- McWilliams, J. M., Hatfield L. A., Chernew M. E., Landon B. E., and Schwartz A. L.. 2016. “Early Performance of Accountable Care Organizations in Medicare.” New England Journal of Medicine 374: 2357–66.
- Miles, M. B., and Huberman A. M.. 1994. Qualitative Data Analysis: An Expanded Sourcebook. Thousand Oaks, CA: Sage.
- Muchmore, S. 2016. “Fewer than a Third of Medicare ACOs Received Bonuses Last Year.” Modern Healthcare.
- NCQA. 2007. “HEDIS Technical Specifications for Physician Measurement 2007.” Washington, DC: National Committee for Quality Assurance.
- Poole, M. S., and Van de Ven A. H.. 2004. Handbook of Organizational Change and Innovation. New York: Oxford University Press.
- Shojania, K. G., Ranji S. R., Shaw L. K., Charo L. N., Lai J. C., Rushakoff R. J., McDonald K. M., and Owens D. K.. 2004. “Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies (Vol. 2: Diabetes Care).” Rockville, MD: Agency for Healthcare Research and Quality.
- Shortell, S. M., Casalino L. P., and Fisher E. S.. 2010. “How the Center for Medicare and Medicaid Innovation Should Test Accountable Care Organizations.” Health Affairs 29 (7): 1293–98.
- Shortell, S. M., Wu F. M., Lewis V. A., Colla C. H., and Fisher E. S.. 2014. “A Taxonomy of Accountable Care Organizations for Policy and Practice.” Health Services Research 49 (6): 1883–99.
- Van de Ven, A. H., and Poole M. S.. 1995. “Explaining Development and Change in Organizations.” Academy of Management Review 20 (3): 510–40.
- Yin, R. K. 2013. Case Study Research: Design and Methods. Thousand Oaks, CA: Sage.