Author manuscript; available in PMC: 2014 Dec 4.
Published in final edited form as: JAMA Intern Med. 2013 Oct 14;173(18):1694–1701. doi: 10.1001/jamainternmed.2013.9241

Patient-Centered Medical Home Intervention at an Internal Medicine Resident Safety-Net Clinic

Michael E Hochman 1, Steven Asch 1, Arek Jibilian 1, Bharat Chaudry 1, Ron Ben-Ari 1, Eric Hsieh 1, Margaret Berumen 1, Shahrod Mokhtari 1, Mohamad Raad 1, Elisabeth Hicks 1, Crystal Sanford 1, Norma Aguirre 1, Chi-hong Tseng 1, Sitaram Vangala 1, Carol M Mangione 1, David A Goldstein 1
PMCID: PMC4254756  NIHMSID: NIHMS639184  PMID: 24006034

Abstract

IMPORTANCE

The patient-centered medical home (PCMH) model holds promise for improving primary care delivery, but it has not been adequately tested in teaching settings.

DESIGN, SETTING, AND PARTICIPANTS

We implemented an intervention guided by PCMH principles at a safety-net teaching clinic with resident physician providers. Two similar clinics served as controls.

MAIN OUTCOMES AND MEASURES

Using a cross-sectional design, we measured the effect on patient and resident satisfaction using the Consumer Assessment of Healthcare Providers and Systems survey and a validated teaching clinic survey, respectively. Both surveys were conducted at baseline and 1 year after the intervention. We also measured the effect on emergency department and hospital utilization.

RESULTS

Following implementation of our intervention, the clinic’s score on the National Committee for Quality Assurance’s PCMH certification tool improved from 35 to 53 of 100 possible points, although our clinic did not achieve all must-pass elements to qualify as a PCMH. During the 1-year study period, 4679 patients were exposed to the intervention; 39.9% of these used at least 1 program component. Compared with baseline, patient-reported access and overall satisfaction improved to a greater extent in the intervention clinic, and the composite satisfaction rating increased from 48% to 65% in the intervention clinic vs from 50% to 59% in the control sites (P = .04). The improvements were particularly notable for questions relating to access. For example, satisfaction with urgent appointment scheduling increased from 12% to 53% in the intervention clinic vs from 14% to 18% in the control clinics (P < .001). Resident satisfaction also improved in the intervention clinic: the composite satisfaction score increased from 39% to 51% in the intervention clinic vs a decrease from 46% to 42% in the control clinics (P = .01). Emergency department utilization did not differ significantly between the intervention and control clinics, and hospitalizations increased from 26 to 27 visits per 1000 patients per month in the intervention clinic vs a decrease from 28 to 25 in the control clinics (P = .02).

CONCLUSIONS AND RELEVANCE

Our PCMH-guided intervention, which represented a modest but substantive step toward the PCMH vision, had favorable effects on patient and resident satisfaction at a safety-net teaching clinic but did not reduce emergency department or hospital utilization in the first year. Our experience may provide lessons for other teaching clinics in safety-net settings hoping to implement PCMH-guided reforms.


There has been considerable recent interest in reorganizing primary care according to the principles of the patient-centered medical home (PCMH), a model emphasizing continuity, expanded access, coordination, a team-based approach, quality, and safety.1 Early demonstrations suggested that the PCMH model is challenging to implement2,3 but has the potential to improve the quality4 and perhaps efficiency5 of primary care delivery.

Teaching clinics represent an important setting for PCMH implementation. Numerous patients nationwide, including many in underserved communities, receive primary care from physicians in training and directly benefit from care in teaching clinics. In addition, many experts believe that the United States faces a shortage of primary care providers,6,7 yet few physicians in training plan to pursue primary care careers.8 One reason for this may be that their primary care experiences are suboptimal. Implementation of the PCMH in teaching clinics may improve the primary care experiences of physicians in training, encouraging more to pursue primary care careers.9

The PCMH model presents special challenges in teaching settings.10 Not only do resident physicians have less clinical experience than practicing physicians, they are present only intermittently. As a result, incorporating resident physicians into the PCMH team is more difficult. In addition, their sporadic presence presents challenges for continuity of care. Moreover, teaching clinics have a responsibility not only to provide care but also to educate physicians in training. Thus, PCMH-guided reforms must be implemented in a way that enhances the educational experience of trainees.

We report on a grant-funded intervention guided by PCMH principles at a safety-net primary care clinic staffed with internal medicine resident physicians. At baseline, our clinic provided suboptimal services (eg, limited telephone services, lack of urgent care availability, and limited case management). The primary purpose of our intervention was to improve patient satisfaction with these services; our secondary purpose was to improve resident physician experience. Although our intervention incorporated only some elements of the PCMH model, it was guided by central principles including expanded access to care, enhanced care coordination, and team-based care. To our knowledge, our study represents the first controlled evaluation of a PCMH-guided intervention in a teaching setting.

Methods

Setting

Our study took place at 3 primary care internal medicine clinics at the Los Angeles County + University of Southern California Medical Center (LAC+USC), an urban academic medical center serving a safety-net population. The study clinics are located in the same building, and patients are distributed to each clinic in a similar manner. We selected 1 of the clinics to serve as the intervention clinic, and the other 2 served as controls. Patients who visited both the intervention and control clinics during the study period (1.5% of all patients) were assigned to the clinic in which they had their first visit during the study period.

The study clinics operate on weekday afternoons, and each is staffed by approximately 2 registered nurses, 6 medical assistants, 2 clerks, and 10 to 14 resident physicians (each resident is present 1 afternoon each week). Three or 4 attending physicians supervise the residents (each attending physician is typically present 1 or 2 afternoons each week). In total, approximately 60 resident physicians and 15 attending physicians rotate through each clinic.

Program Design and Description

The intervention was designed according to 3 central principles of the PCMH model: (1) expanded access to care, (2) enhanced care coordination, and (3) team-based care (Table 1). We focused on these principles because they are central to the PCMH concept and because we believed that addressing them could remedy key shortcomings of our clinic. The intervention included the development of a call center, a process for renewing medication prescriptions by telephone, urgent care appointment availability, and enhanced case management (Supplement [eAppendix and eTable]).

Table 1.

Summary of Intervention

Access
 Telephone service
  Preintervention: Calls during business hours go to a general telephone line that is answered inconsistently; after-hours calls are not answered; medical triage by telephone is not consistently available
  Postintervention: Call center for calls during business hours staffed with 2 care coordinators and 1 resident physician; on-call resident physician available for urgent after-hours calls; medical triage by telephone from a resident physician always available
 Medication renewal
  Preintervention: Patients must bring pill bottles to clinic from 1 to 3 PM and wait for a physician to renew the prescription
  Postintervention: Patients able to call the call center for medication renewals during business hours
 Urgent appointment scheduling
  Preintervention: No formal process for urgent care appointments (attending physicians must authorize same-day scheduling on a case-by-case basis); typical wait for appointments >2 mo
  Postintervention: 5 appointment slots protected each day for urgent care appointments
Coordination
 Case management
  Preintervention: Limited case management support from clinic staff
  Postintervention: 2 Care coordinators available to assist with case managementa
 Outreach to patients who visit ED or hospital
  Preintervention: No formal outreach to patients who visit the ED or hospital
  Postintervention: Care coordinators visit hospitalized patients and telephone patients who visit the ED or hospital within 5 d of discharge
Team-based care
  Preintervention: Residents had limited assistance managing patients between clinic visits
  Postintervention: Residents worked closely with care coordinators who helped manage their patients between visits

Abbreviation: ED, emergency department.

a

Further details are given in the text and Supplement (eAppendix).

When designing the intervention, we solicited input from patients and staff during interviews and focus groups. During these sessions, patients and staff identified shortcomings of the clinic and potential solutions. This feedback was used to develop the intervention. For example, patients and staff agreed that major clinic shortcomings included inadequate telephone services, a cumbersome medication renewal process, and a lack of urgent care appointment availability, and both groups suggested ways that we might address these shortcomings with our intervention (Supplement [eAppendix]).

Two of the investigators (M.E.H. and A.J.) performed retrospective assessments of the study clinics using the 2011 PCMH standards from the National Committee for Quality Assurance (NCQA).11 To qualify as a level 1, 2, or 3 PCMH, a practice must meet all 6 must-pass criteria and achieve a minimum score of 35, 60, and 85 of 100, respectively. These unofficial assessments were conducted for research purposes. Other studies12 of the PCMH model have also used data from informal assessments.

The intervention was supported by 2 care coordinators who answered patient calls and provided case management. The care coordinators are unlicensed personnel; however, both have health care experience. In addition, a resident physician, supervised by an attending physician, was assigned each day to triage medically related calls and guide the care coordinators.

Study Design and Outcomes

The primary outcome was the change in patient satisfaction between the preintervention and postintervention periods in the intervention vs control clinics using a shortened and modified version of the Consumer Assessment of Healthcare Providers and Systems Clinician and Group Survey.13 The modified survey (Supplement [eAppendix]) included questions relating to access to care, care coordination, and overall care—the outcomes our intervention was designed to improve. The survey was modified after pilot testing suggested that patients had difficulty completing the full survey.

Two cross-sectional surveys were conducted, one before the intervention and another 12 months afterward. For each survey, English- and Spanish-speaking patients from the intervention clinic and the 2 control clinics who self-reported at least 1 visit within the previous year were identified in the waiting area and invited to participate. Patients were approached until we had 300 surveys from the intervention clinic and another 300 from both control clinics. The Agency for Healthcare Research and Quality recommends a sample of 300 surveys per clinic for detecting meaningful differences.14 Patients who were unable to understand the questions because of cognitive difficulties were excluded; however, patients were included if they were able to complete the survey with assistance from a research assistant or family member. All other patients presenting for clinic visits who met the above criteria were eligible.

Changes in resident satisfaction in the study clinics were measured using the Clinic Characteristics section of a validated resident clinic satisfaction survey.15 The survey (Supplement [eAppendix]) was shortened and modified to focus on relevant topics. Two cross-sectional surveys were conducted, one before the intervention and another 12 months afterward. Residents were invited to complete the survey by e-mail and were provided a link to the electronic survey. All clinic residents were invited to participate—a total of 170 in the preintervention period and 183 in the postintervention period.

Emergency department (ED) and hospitalization rates, including ED visit rates for avoidable conditions,16 were measured in the 1-year period before implementation and the 1-year period afterward. Emergency department and hospital visits were identified from the electronic health record; visits to outside institutions were not captured. We do not know the percentage of ED and hospital visits that occur at outside institutions, but it is likely to be a minority: 20% to 25% of 30-day readmissions at the medical center occur at outside facilities (personal communication, J. Guterman, MD, Chief of Research and Innovation, Los Angeles County Department of Health Services, March 2013). We also have no reason to suspect that patients from the intervention vs control clinics used outside facilities at different rates.

All patients with at least 1 clinic visit during the study period were included in the utilization analysis except those whose only visit was within 14 days of an ED visit or hospitalization (many of these patients were likely receiving continuity care at other facilities). Patients with visits exclusively during the preintervention or postintervention periods were included only in the preintervention and postintervention analyses, respectively. We also performed an analysis involving only patients who were continuously enrolled in the program. The study was approved by the institutional review board at the University of Southern California.

Triage Resident Physician Feedback

For quality-improvement purposes, we conducted feedback surveys among 16 consecutive resident physicians who worked with the care coordinators triaging calls and assisting with case management. The feedback survey (Supplement [eAppendix]) included questions concerning the quality of the educational experience, supervision, and perceived effect on patients.

Statistical Analysis

For all outcomes, changes between the preintervention and postintervention periods in the intervention vs control clinics were compared using a difference-in-differences analysis. Crude and adjusted analyses were performed for all outcomes.
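The crude difference-in-differences contrast can be illustrated with a minimal sketch (Python rather than the authors' SAS; the function and variable names are ours, for illustration only). The estimate subtracts the control clinics' pre-to-post change from the intervention clinic's change:

```python
def diff_in_diff(pre_int, post_int, pre_ctrl, post_ctrl):
    """Crude difference-in-differences: the pre-to-post change in the
    intervention group minus the change in the control group."""
    change_int = post_int - pre_int
    change_ctrl = post_ctrl - pre_ctrl
    return change_int, change_ctrl, change_int - change_ctrl

# Composite patient satisfaction percentages from Table 3:
print(diff_in_diff(48, 65, 50, 59))  # (17, 9, 8): +17 vs +9, a +8 dif in dif

# Total utilization rates (visits per 1000 patients per month) from Table 5:
print(diff_in_diff(79, 82, 78, 76))  # (3, -2, 5)
```

In the adjusted analyses, this same contrast corresponds to a group × period interaction term in the regression models rather than a subtraction of raw percentages.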

Logistic regression was performed for the patient satisfaction data, controlling for self-reported age, sex, race, ethnicity, educational level, and overall health. Analyses also assessed for interactions of self-reported health status and educational level with the change in the composite patient satisfaction score. Logistic regression analyses were performed for the resident satisfaction survey, controlling for postgraduate year in residency; an analysis also assessed for an interaction between postgraduate year and the change in the composite resident satisfaction score. Repeated-measures Poisson regression analyses were performed for the ED and hospital utilization analysis, controlling for age, sex, race/ethnicity, primary language, and insurance status; an analysis also assessed for an interaction between insurance status and changes in ED and hospital utilization. Finally, an interrupted time series analysis assessed for time trends.

Two-tailed power calculations using an α of .05 indicated that our study had 80% power to detect a difference of 40% vs 45% on the composite patient satisfaction score in the intervention vs control clinics, 50% power to detect a difference of 40% vs 45% on the composite resident satisfaction score, and 60% power to detect a 10% difference in combined ED and hospitalization rates. All statistical calculations were conducted using commercial software (SAS, version 9.2; SAS Institute, Inc).
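The general form of such a calculation can be sketched with a normal-approximation power function for comparing two proportions (a generic illustration in Python, not a reproduction of the authors' computation; the effective sample sizes underlying the composite scores are not reported here, and the function names are ours):

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_proportion_power(p1, p2, n_per_group, z_crit=1.96):
    """Approximate power of a two-tailed test at alpha = .05 comparing two
    independent proportions, with n_per_group subjects in each arm
    (unpooled normal approximation)."""
    se = math.sqrt(p1 * (1 - p1) / n_per_group
                   + p2 * (1 - p2) / n_per_group)
    return normal_cdf(abs(p2 - p1) / se - z_crit)

# Power to detect 40% vs 45% rises as the per-group sample grows:
power_small = two_proportion_power(0.40, 0.45, 300)
power_large = two_proportion_power(0.40, 0.45, 1500)
```

As the sketch shows, power for a fixed 5-percentage-point difference depends directly on the per-group sample size, which is why the three outcomes above, with different effective samples, carry different power.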

Results

During the 1-year study period, the intervention clinic provided 11 005 patient visits to 4679 distinct patients meeting the inclusion criteria. In the control clinics, 8899 patients met the inclusion criteria. Study patients were predominantly middle-aged Hispanic women (Table 2).

Table 2.

Characteristics of Clinic Patients

Characteristic           Intervention Clinic                            Control Clinics                                P Valueb
                         Preintervention  Postintervention  P Valuea    Preintervention  Postintervention  P Valuea
                         (n = 4296)       (n = 4679)                    (n = 7821)       (n = 8899)
Mean age, y              51.6             51.3              .19         50.9             51.0              .47         .002
Female sex, %            56.8             57.5              .55         55.1             55.7              .44         .008
Race/ethnicity, %
 Asian                   9.6              9.8                           9.0              9.3
 Black                   7.1              7.3                           7.7              7.5
 Hispanic                74.7             74.8              .72         74.6             74.8              .92         .37
 White                   7.5              6.8                           7.5              7.3
 Other                   1.2              1.4                           1.2              1.2
Insurance, %
 Self-pay                15.9             14.9                          16.5             15.6
 Medicaid                32.9             26.5              <.001       30.3             27.9              .002        .48
 Free care program       39.9             46.9                          41.8             44.7
 Other                   11.4             11.7                          11.4             11.8
Preferred language, %
 English                 33.2             33.6                          35.1             34.6
 Spanish                 60.6             60.0              .79         59.1             59.5              .79         .03
 Other                   6.2              6.4                           5.7              5.8
a

For comparisons between patients in the preintervention vs postintervention periods.

b

For comparisons between patients in the intervention clinic vs control clinics.

Assessment Using the NCQA PCMH Tool

The assessment using the NCQA PCMH certification tool showed that, at baseline, the intervention clinic did not qualify as a PCMH, scoring 35 of 100 possible points and meeting 1 of the 6 must-pass elements. After the intervention, the clinic still did not qualify as a PCMH. However, the clinic’s score improved to 53, and 4 of the 6 must-pass elements were achieved. The improvement occurred in the domains of access and continuity, planning and managing care, and measuring and improving performance. The PCMH scores for the control clinics, both at baseline and during the intervention period, were identical to those of the intervention clinic at baseline. Scoring sheets for these assessments are in the Supplement (eAppendix).

Process Evaluation: Program Utilization During Implementation

Of the 4679 postintervention patients, 1866 individuals (39.9%) used the program at least once during the intervention period (eg, the call center or case management services). By the 12th month, there was a mean of 37 incoming calls to the call center per weekday; 15 of these required attention from the resident physician. These calls resulted in a mean of 3 urgent-care appointments per day (the no-show rate for these visits was 13% vs an overall no-show rate of 36%) and the prescribing or renewing of 20 medications per week. Approximately 5% of calls requiring resident physician attention resulted in an ED or hospital referral. The care coordinators completed 18 case management requests and visited 5 hospitalized patients per week. Additional details are reported in the Supplement (eAppendix).

Patient Satisfaction

Response rates for the baseline patient survey were 62% in both the intervention and control clinics; for the follow-up survey, they were 68% and 65% in the intervention and control clinics, respectively. There were no significant differences between respondents from the intervention vs control clinics in any of the characteristics examined except that respondents from the intervention clinic were more likely to be white (76% vs 65% reported being either non-Hispanic or Hispanic white; P = .002).

The patient survey results showed significant improvements in the intervention clinic vs the control clinics on all questions concerning access to care and overall care and on the overall composite satisfaction score. There were no improvements indicated in responses to questions relating to care coordination (Table 3).

Table 3.

Patient Satisfaction: Percentage of Patients Who Agree/Strongly Agree That Services Are Adequate Before and After Program Implementationa

Characteristic                           Intervention Clinic                Control Clinics
                                         Baseline   Postintervention  Difb  Baseline   Postintervention  Difb  Dif in Difc  P Valued
                                         (n = 300)  (n = 300)               (n = 302)  (n = 300)
Care coordination
 Test results communication              30         36                +6    29         31                +2    +4           .96
 Ease of completing tests                88         81                −7    90         94                +4    −10          .03
 Ease of making specialist appointments  75         68                −6    78         77                −1    −5           .99
 Continuity with regular physician       51         57                +6    55         66                +11   −5           .36
 Physician knows information about you   51         73                +22   52         63                +11   +11          .06
Access to primary care services
 Ease of making urgent appointments      12         53                +41   14         18                +5    +36          <.001
 Ease of making routine appointments     28         56                +28   33         41                +8    +20          .04
 Telephone access during regular hours   17         64                +46   19         26                +7    +39          <.001
 Telephone access after hours            15         63                +48   10         16                +6    +42          .02
Overall rating of care
 Good/excellent                          56         80                +23   62         64                +2    +21          <.001
Composite scoree                         48         65                +17   50         59                +9    +8           .04

Abbreviation: Dif, difference.

a

Not all questions were answered by all respondents; for some questions, “not applicable” was offered as an answer choice.

b

Discrepancies in the Dif values result from rounding.

c

Positive values indicate an improvement in satisfaction in the intervention clinic vs control clinics.

d

P values represent adjusted values from the regression analysis.

e

A composite summary score representing the percentage of all survey questions receiving a favorable response.

Also notable, there were no significant interactions between self-reported health status or educational level and the composite patient satisfaction score in the intervention vs control clinics. In addition, during the intervention period but not at baseline, patients in the intervention clinic were more likely to report calling the clinic during business hours (34% vs 18%; P < .001) and after hours (21% vs 5%; P < .001), and to call with urgent care needs (37% vs 28%; P = .01).

Resident Satisfaction

In the preintervention period, response rates for the resident survey were 72% in the intervention clinic and 69% in the control clinics. In the postintervention period, the rates were 71% and 66%, respectively. Of the responders, 33% were postgraduate year 1, 35% were postgraduate year 2, and 32% were postgraduate year 3 (P = .98).

Resident responses improved to a greater extent in the intervention vs control clinics for all questions, as did the composite score (Table 4). There was no significant interaction between postgraduate year in residency and the composite satisfaction score in the intervention vs control clinics.

Table 4.

Resident Satisfaction: Percentage of Residents Who Agree/Strongly Agree That Services Are Adequate Before and After Program Implementation

Characteristic                                             Intervention                     Control
                                                           Baseline  Postintervention Difa Baseline  Postintervention Difa Dif in Difb P Valuec
                                                           (n = 43)  (n = 46)              (n = 79)  (n = 78)
Ancillary support
 Adequate nursing support                                  56        83               +27  77        79               +2   +25         .06
 Adequate case management                                  26        54               +29  28        21               −7   +35         .007
 Adequate overall support                                  53        72               +18  62        63               +1   +18         .19
Primary care delivery
 Clinic structure supports continuity of care              23        33               +9   56        33               −22  +33         .02
 Smooth patient flow during clinic                         53        57               +3   61        53               −8   +12         .40
 Timely referrals to subspecialty care                     2         24               +22  10        10               0    +22         .03
 Timely completion of ordered tests                        42        46               +4   47        41               −6   +10         .47
 Patients easily able to contact physician                 16        30               +14  24        24               0    +14         .23
General clinic experiences
 Confidence in postresidency practice of general medicine  58        65               +7   68        65               −3   +10         .43
 Interested in primary care career                         16        24               +8   22        19               −2   +11         .39
 More respect for primary care physicians                  72        74               +2   63        62               −2   +3          .73
 Overall rating of clinic experience                       47        57               +10  39        36               −3   +13         .29
Composite scored                                           39        51               +13  46        42               −4   +17         .01

Abbreviation: Dif, difference.

a

Discrepancies in the Dif values result from rounding.

b

Positive values indicate an improvement in satisfaction in the intervention clinic vs the control clinics.

c

P values represent adjusted values from the regression analysis.

d

A composite summary score representing the percentage of all survey questions receiving a favorable response.

ED and Hospital Utilization

Relative to the control clinics, hospitalization rates in the intervention clinic increased by a small but significant amount following implementation (Table 5). This difference was no longer significant when the analysis was restricted to the 6500 patients (33.9%) who were continuously enrolled (ratio of ratios, 1.125; P = .31). There was no effect of the program on preventable ED visits, and there was no interaction between insurance status and ED or hospital utilization. The interrupted time-series analysis did not reveal any notable time trends.

Table 5.

Emergency Department and Hospitalization Rates in the Intervention and Control Clinicsa

Outcome                            Intervention Clinic              Control Clinics
                                   Baseline     Postintervention    Baseline     Postintervention    Dif in Difb  P Valuec
                                   (n = 4296)   (n = 4679)          (n = 7821)   (n = 8899)
Total use                          79           82                  78           76                  +5           .91
Emergency department visit ratea   53           55                  51           52                  +1           .92
Hospital visit ratea               26           27                  28           25                  +5           .02

Abbreviation: Dif, difference.

a

Rates are visits per 1000 patients per month.

b

Positive numbers represent an increase in utilization rates in the intervention clinic vs control clinics. Discrepancies in the Dif values result from rounding.

c

P values represent adjusted values from the regression analysis.

Triage Resident Physician Feedback

Among the 16 consecutive residents who worked with the care coordinators to triage calls and assist with case management, mean scores on feedback surveys were 4 or higher on a scale of 1 to 5 for all questions (higher scores indicate more favorable responses). Details are in the Supplement (eAppendix).

Discussion

Guided by PCMH principles, we developed and implemented an intervention to improve primary care delivery at an internal medicine safety-net clinic with resident physician providers. Although our intervention fulfilled only some elements of the PCMH model according to the NCQA criteria, it represented a substantive step toward the PCMH vision. Our controlled evaluation suggests that in the first year our intervention improved patient and resident satisfaction with several aspects of the clinic but did not reduce ED or hospital utilization. These findings are consistent with PCMH-guided interventions in nonteaching settings.17 To our knowledge, this study is the first controlled evaluation involving a PCMH-guided intervention in a teaching clinic.

We had to overcome challenges unique to a teaching setting. Most notably, resident physicians in our clinic are present only 1 afternoon each week; therefore, considerable effort was required to educate them about the intervention. To address this challenge, the program leadership communicated regularly with the residents, soliciting feedback and reminding them to use the services. In addition, and perhaps more important, the care coordinators communicated regularly with the residents, assisting them with the care for patients with complex conditions. Over time, informal feedback suggested that the residents began to appreciate the value of the program and began to engage in it enthusiastically.

We succeeded in achieving our primary aim of improving patient satisfaction. Patients in the intervention clinic reported improved access to care (scheduling appointments and telephone services) and overall care. We did not observe improvements in satisfaction with care coordination, perhaps because the care coordination enhancements affected patients indirectly. For example, care coordinators facilitated urgent specialty appointments for patients; however, this assistance occurred behind the scenes.

Although the absolute improvements in satisfaction with access to care and overall care were large, the clinical significance is uncertain.18 Specifically, it is unclear whether these improvements will be wide-ranging enough to entice patients to remain within our system as safety-net patients acquire health insurance as part of national health care reform and have choices about where they seek care.19 Thus, although we are encouraged by the improvements, we remain uncertain whether they will have a meaningful long-term impact.

The improvements in patient satisfaction are consistent with a PCMH pilot at the Group Health Cooperative.5 In that study, patients reported improved experiences with care coordination, access to care, quality of the physician-patient interaction, and patient activation and involvement. Because the scope of the Group Health Cooperative intervention was broader than ours, it likely affected a wider array of measures. Our study extends the Group Health Cooperative’s findings by showing that PCMH-guided reform may improve patient satisfaction in teaching settings.

Our intervention also improved resident satisfaction, extending findings from studies in nonteaching settings. For example, the Group Health Cooperative PCMH pilot demonstrated modest improvements in provider burnout scores,5 and an observational study20 demonstrated a positive correlation between PCMH characteristics and provider satisfaction in safety-net clinics.

Also notable, resident physicians in our study who worked with the care coordinators to triage calls and assist with case management reported this to be a valuable experience in feedback surveys. Although involvement of physicians in triage and care coordination is not typical, these skills may be important for young physicians in emerging models of care delivery.21 In addition, and perhaps more important, these experiences may enhance young physicians’ understanding of the roles of nonphysician team members, enabling them to collaborate more effectively.

Our findings are notable in light of concerns about the primary care workforce.6 Many experts hope that improvements in the primary care teaching environment will translate into greater interest in primary care.22–24 Although our intervention succeeded in improving resident satisfaction, we observed only a modest, nonsignificant effect on interest in primary care careers. More research is needed to understand the effect of the ambulatory teaching environment on career choices. It is likely that other reforms, such as educational debt relief and/or higher salaries,25,26 as well as improved mentorship,27 may be needed to increase the primary care provider supply.

Our program did not reduce ED or hospital utilization. In fact, there was a small increase in hospitalization rates in the intervention clinic vs the control clinics. Most of the difference was attributable to a reduction in hospitalizations in the control clinics (perhaps resulting from concurrent improvement efforts), and the difference was not significant in the analysis involving continuously enrolled patients. There are likely several explanations for this finding. First, there may have been an increase in appropriate utilization following program implementation: 5% of callers to our call center were referred to the ED or hospital. Second, some other attempts at expanding access to care have also failed to reduce ED and hospital utilization. Some, particularly in underserved populations with unmet care needs, have increased utilization,28,29 presumably because it takes time to fill these needs and change patient behavior.30 (In other settings, large-scale primary care transformations have reduced ED and hospital utilization.5,31) Third, the scope of our intervention may not have been broad enough to affect utilization. It is possible that, with programmatic enhancements and after unmet care needs are addressed, we may observe a reduction in ED and hospital utilization, particularly among a high-risk subset of patients.

We are hopeful that our program will be sustainable without ongoing external funding. For our pilot, we estimate a direct cost of approximately $40 per patient per year. We are currently working with the medical center leadership to modify job descriptions of existing staff so that our intervention may be sustained after grant funding ends. We believe that with appropriate prioritization, creative reorganization, and economies of scale this will be possible.

Finally, our experiences may be relevant to other resource-constrained organizations. Such organizations may also lack resources to implement all components of the PCMH model and will need to focus on the most fundamental elements.32 We demonstrated that modest but substantive steps toward the PCMH vision can positively affect patients and providers.

Strengths of our study include the use of control clinics of similar composition to our intervention clinic, the collection of preintervention and postintervention data, and the large sample size. Limitations include the nonrandomized design; the possibility that our patient survey overrepresented patients with frequent clinic visits and did not capture the small percentage of patients who spoke neither English nor Spanish, or some individuals with cognitive difficulties; the lack of blinding of patients and resident physicians to their group assignment; and the fact that the program evaluators and implementers were the same.

In summary, we implemented an intervention using PCMH principles at a safety-net teaching clinic. Our program improved patient and resident physician satisfaction, but not ED and hospital utilization. Although our intervention fulfills only some elements of the PCMH model, our experience may be relevant to other teaching clinics, including those championing teaching health centers.9 Our findings also demonstrate the feasibility of quality-improvement efforts and system-based reforms in teaching settings.

Supplementary Material

Appendix

Acknowledgments

Funding/Support: The project was funded by the UniHealth Foundation. Drs Hochman, Asch, and Mangione’s efforts were supported by the Robert Wood Johnson Clinical Scholars Program and the US Department of Veterans Affairs (Grant 67799 to the University of California, Los Angeles). Dr Mangione also receives support from the UCLA Resource Centers for Minority Aging Research Center for Health Improvement of Minority Elderly (RCMAR/CHIME) under National Institutes of Health/National Institute on Aging (NIH/NIA) grant P30-AG021684 and from NIH/National Center for Advancing Translational Sciences UCLA Clinical and Translational Science Institute grant UL1TR000124.

Role of the Sponsor: The UniHealth Foundation played an advisory role in designing the program and evaluation.

Footnotes

Author Contributions: Study concept and design: Hochman, Asch, Jibilian, Chaudry, Ben-Ari, Hsieh, Goldstein.

Acquisition of data: Hochman, Jibilian, Berumen, Mokhtari, Raad, Hicks, Sanford, Aguirre.

Analysis and interpretation of data: Hochman, Asch, Ben-Ari, Mokhtari, Raad, Hicks, Tseng, Vangala, Mangione.

Drafting of the manuscript: Hochman, Asch, Hicks, Aguirre, Vangala.

Critical revision of the manuscript for important intellectual content: Hochman, Asch, Jibilian, Chaudry, Ben-Ari, Hsieh, Berumen, Mokhtari, Raad, Hicks, Sanford, Tseng, Mangione, Goldstein.

Statistical analysis: Hochman, Raad, Tseng, Vangala.

Obtained funding: Hochman, Jibilian, Chaudry.

Administrative, technical, and material support: Hochman, Jibilian, Chaudry, Ben-Ari, Hsieh, Berumen, Mokhtari, Hicks, Sanford, Aguirre.

Study supervision: Asch, Jibilian, Hicks, Mangione.

Conflict of Interest Disclosures: None reported.

Disclaimer: The views expressed in this paper do not reflect those of the US Department of Veterans Affairs, the Robert Wood Johnson Foundation, the Keck School of Medicine of USC, the David Geffen School of Medicine, the Jonathan and Karin Fielding School of Public Health, the Los Angeles County Department of Health Services, or Stanford University.

Additional Contributions: We thank the many staff members at the LAC+USC Medical Center who made our program possible.

References

1. Joint principles of the patient centered medical home. http://www.pcpcc.net/content/joint-principles-patient-centered-medical-home. Accessed July 8, 2013.
2. Crabtree BF, Nutting PA, Miller WL, Stange KC, Stewart EE, Jaén CR. Summary of the National Demonstration Project and recommendations for the patient-centered medical home. Ann Fam Med. 2010;8(suppl 1):S80-S90, S92. doi:10.1370/afm.1107.
3. Larson EB, Reid R. The patient-centered medical home movement: why now? JAMA. 2010;303(16):1644-1645. doi:10.1001/jama.2010.524.
4. Fields D, Leshen E, Patel K. Analysis & commentary: driving quality gains and cost savings through adoption of medical homes. Health Aff (Millwood). 2010;29(5):819-826. doi:10.1377/hlthaff.2010.0009.
5. Reid RJ, Coleman K, Johnson EA, et al. The Group Health medical home at year two: cost savings, higher patient satisfaction, and less burnout for providers. Health Aff (Millwood). 2010;29(5):835-843. doi:10.1377/hlthaff.2010.0158.
6. Phillips RL Jr, Turner BJ. The next phase of Title VII funding for training primary care physicians for America's health care needs. Ann Fam Med. 2012;10(2):163-168. doi:10.1370/afm.1367.
7. Schwartz MD. The US primary care workforce and graduate medical education policy. JAMA. 2012;308(21):2252-2253. doi:10.1001/jama.2012.77034.
8. Hauer KE, Durning SJ, Kernan WN, et al. Factors associated with medical students' career choices regarding internal medicine. JAMA. 2008;300(10):1154-1164. doi:10.1001/jama.300.10.1154.
9. Fancher TL, Henderson MC, Harris T, Pitman D. Teaching health centers and the primary care workforce crisis for the underserved. Ann Intern Med. 2010;152(9):615-616. doi:10.7326/0003-4819-152-9-201005040-00017.
10. Markova T, Mateo M, Roth LM. Implementing teams in a patient-centered medical home residency practice: lessons learned. J Am Board Fam Med. 2012;25(2):224-231. doi:10.3122/jabfm.2012.02.110181.
11. Patient centered medical home survey tool: National Committee for Quality Assurance. 2011. http://www.ncqa.org/tabid/629Default.aspx. Accessed February 16, 2013.
12. Clarke RM, Tseng CH, Brook RH, Brown AF. Tool used to assess how well community health centers function as medical homes may be flawed. Health Aff (Millwood). 2012;31(3):627-635. doi:10.1377/hlthaff.2011.0908.
13. CAHPS Clinician & Group Surveys: 12-month survey 2.0 (adult). www.cahps.ahrq.gov/clinician_group/cgsurvey/adult12mocoresurveyeng2.pdf. Accessed February 16, 2013.
14. Fielding the CAHPS Clinician & Group Surveys: sampling guidelines and protocols. Agency for Healthcare Research and Quality. https://www.cahps.ahrq.gov/clinician_group/cgsurvey/fieldingcahps-cgsurveys.pdf. Accessed July 8, 2013.
15. Sisson SD, Boonyasai R, Baker-Genaw K, Silverstein J. Continuity clinic satisfaction and valuation in residency training. J Gen Intern Med. 2007;22(12):1704-1710. doi:10.1007/s11606-007-0412-0.
16. Health matters in San Francisco: preventable emergency room visits. www.healthmattersinsf.org/modules.php?op=modload&name=NS-Indicator&file=indicator&iid=7557377. Accessed October 20, 2012.
17. Jackson GL, Powers BJ, Chatterjee R, et al. The patient-centered medical home: a systematic review. Ann Intern Med. 2013;158(3):169-178. doi:10.7326/0003-4819-158-3-201302050-00579.
18. Hays RD, Woolley JM. The concept of clinically meaningful difference in health-related quality-of-life research: how meaningful is it? Pharmacoeconomics. 2000;18(5):419-423. doi:10.2165/00019053-200018050-00001.
19. Katz MH. Future of the safety net under health reform. JAMA. 2010;304(6):679-680. doi:10.1001/jama.2010.1126.
20. Lewis SE, Nocon RS, Tang H, et al. Patient-centered medical home characteristics and staff morale in safety net clinics. Arch Intern Med. 2012;172(1):23-31. doi:10.1001/archinternmed.2011.580.
21. Fisher ES, McClellan MB, Safran DG. Building the path to accountable care. N Engl J Med. 2011;365(26):2445-2447. doi:10.1056/NEJMp1112442.
22. Schwartz MD, Linzer M, Babbott D, Divine GW, Broadhead WE; Society of General Internal Medicine Task Force on Career Choice in Internal Medicine. The impact of an ambulatory rotation on medical student interest in internal medicine. J Gen Intern Med. 1995;10(10):542-549. doi:10.1007/BF02640362.
23. Garibaldi RA, Popkave C, Bylsma W. Career plans for trainees in internal medicine residency programs. Acad Med. 2005;80(5):507-512. doi:10.1097/00001888-200505000-00021.
24. Keirns CC, Bosk CL. Perspective: the unintended consequences of training residents in dysfunctional outpatient settings. Acad Med. 2008;83(5):498-502. doi:10.1097/ACM.0b013e31816be3ab.
25. Rosenblatt RA, Andrilla CH. The impact of U.S. medical students' debt on their choice of primary care careers: an analysis of data from the 2002 Medical School Graduation Questionnaire. Acad Med. 2005;80(9):815-819. doi:10.1097/00001888-200509000-00006.
26. Kahn MJ, Markert RJ, Lopez FA, Specter S, Randall H, Krane NK. Is medical student choice of a primary care residency influenced by debt? MedGenMed. 2006;8(4):18.
27. Wright S, Wong A, Newill C. The impact of role models on medical students. J Gen Intern Med. 1997;12(1):53-56. doi:10.1046/j.1525-1497.1997.12109.x.
28. Weinberger M, Oddone EZ, Henderson WG; Veterans Affairs Cooperative Study Group on Primary Care and Hospital Readmission. Does increased access to primary care reduce hospital readmissions? N Engl J Med. 1996;334(22):1441-1447. doi:10.1056/NEJM199605303342206.
29. Cousineau MR, Partlow KR. Evaluation of the Camino de Salud Frequent Users/Care Management Program: Final Report. Alhambra: Center for Community Health Studies, University of Southern California; April 2010. http://lahealthaction.org/library/Final_COPE_Care_Management_Evaluation.pdf. Accessed July 8, 2013.
30. Shi L, Stevens GD. Vulnerability and unmet health care needs: the influence of multiple risk factors. J Gen Intern Med. 2005;20(2):148-154. doi:10.1111/j.1525-1497.2005.40136.x.
31. A formula for cutting health costs. New York Times. July 21, 2012. www.nytimes.com/2012/07/22/opinion/sunday/a-formula-for-cutting-health-costs.html?pagewanted=all&_r=0. Accessed July 8, 2013.
32. The Safety Net Medical Home Initiative. http://www.safetynetmedicalhome.org/. Accessed July 8, 2013.
