Journal of General Internal Medicine. 2010 Dec 15;26(5):498–504. doi: 10.1007/s11606-010-1597-1

Physician Groups’ Use of Data from Patient Experience Surveys

Mark W Friedberg,1 Gillian K SteelFisher,2 Melinda Karp,3 Eric C Schneider1
PMCID: PMC3077475  PMID: 21161419

Abstract

Background

In Massachusetts, physician groups’ performance on validated surveys of patient experience has been publicly reported since 2006. Groups also receive detailed reports of their own performance, but little is known about how physician groups have responded to these reports.

Objective

To examine whether and how physician groups are using patient experience data to improve patient care.

Design and Participants

During 2008, we conducted semi-structured interviews with the leaders of 72 participating physician groups (out of 117 groups receiving patient experience reports). Based on leaders’ responses, we identified three levels of engagement with patient experience reporting: no efforts to improve (level 1), efforts to improve only the performance of low-scoring physicians or practice sites (level 2), and efforts to improve group-wide performance (level 3).

Main Measures

Groups’ level of engagement and specific efforts to improve patient care.

Key Results

Forty-four group leaders (61%) reported group-wide improvement efforts (level 3), 16 (22%) reported efforts to improve only the performance of low-scoring physicians or practice sites (level 2), and 12 (17%) reported no performance improvement efforts (level 1). Level 3 groups were more likely than others to have an integrated medical group organizational model (84% vs. 31% at level 2 and 33% at level 1; P < 0.005) and to employ the majority of their physicians (69% vs. 25% and 20%; P < 0.05). Among level 3 groups, the most common targets for improvement were access, communication with patients, and customer service. The most commonly reported improvement initiatives were changing office workflow, providing additional training for nonclinical staff, and adopting or enhancing an electronic health record.

Conclusions

Despite statewide public reporting, physician groups’ use of patient experience data varied widely. Integrated organizational models were associated with greater engagement. Efforts to enhance clinicians’ interpersonal skills were uncommon; groups predominantly focused on office workflow and support staff.

KEY WORDS: patient experience, quality of care, quality improvement, physician groups, public reporting

INTRODUCTION

The Institute of Medicine has recognized achieving patient-centered care as an essential component of efforts to improve the quality of U.S. health care.1 To assess whether care is patient-centered and to guide improvement efforts, public agencies and private sector organizations have developed valid and reliable methods for surveying patients about their health care experiences.2–4 These surveys have been used to evaluate patients’ experiences with health plans, hospitals, and most recently, with physicians and physician groups in the ambulatory setting.5–7

Since 2002, the Massachusetts Health Quality Partners (MHQP), a multistakeholder collaborative, has conducted a statewide patient experience survey of more than 200,000 patients enrolled in the five largest commercial health plans in the state.8 This survey assesses the care delivered by over 4,000 primary care physicians in nearly 500 primary care practice sites of approximately 120 physician groups. In order to inform patients’ choices when selecting providers, MHQP began publicly reporting the patient experience survey performance of primary care practice sites in 2006.9 With the intent of guiding groups’ quality improvement efforts, MHQP also provides each physician group a detailed report of its own performance.

Previous studies have assessed how patients use publicly reported provider performance data10–13, and how providers respond to performance reports on measures of technical quality and health outcomes.14–16 However, less is known about whether and how physicians and physician groups respond to performance reports of patient experience. In the context of national efforts such as the “medical home” movement, which emphasizes performance measurement and improvement by physician groups, such information may be especially salient.17 In this paper, we assess—in the context of a statewide public reporting effort—the use of confidential, detailed reports of patient experience by physician groups.

METHODS

Sample of Physician Groups

For the purposes of this study, we defined a physician group as a collection of physicians practicing at one or more office addresses (i.e., practice sites) who shared at least one group-level manager (defined as an individual who coordinated contacts with health plans or oversaw group performance). To identify physician groups, we used the 2007 MHQP statewide physician directory, which included all Massachusetts physician groups having at least 3 physicians who provided care to enrollees in any of the five largest commercial health plans in the state. The directory, which is updated annually via direct contact with physician groups, also identifies whether each group is affiliated with one of the nine large multi-group provider networks in the state.18

Specialist physician groups that did not provide primary care were not included in the study because reports of patients’ experiences with specialist care had not been released at the time of our interviews. We excluded pediatric-only groups from the study sample for two reasons: (1) there were fewer than 20 such groups in the state, and (2) pediatric and adult patient experience survey instruments differed, which may have led to divergent responses of pediatric and adult primary care groups. However, groups providing primary care for both adults and children were included in the study sample.

After excluding specialist-only and pediatric-only groups, the MHQP group roster contained all 133 physician groups in the state with at least 3 physicians that provided primary care to adult patients. While approaching each of the 133 groups to participate in the survey, we discovered that 18 of these groups were ineligible because they lacked local medical management and thus would be better described as practice sites of other medical groups. These 18 sites were removed from the sample, and 5 physician groups that these sites identified as providing them with management were added to the sample in their stead. Another ten groups from the original roster had recently reorganized into seven “new” groups. After these refinements to the roster (removing 28 ineligible or reorganized groups [18 + 10] and adding the 12 groups that replaced them [5 + 7]), the final study sample contained 117 physician groups. All groups had previously received detailed patient experience survey reports, although the manner of dissemination of these reports to the 12 new groups was uncertain.

Interview Guide

Based on a review of the literature and prior surveys of physician group leaders about the quality of care19,20, we developed a guide for conducting semi-structured 30-minute interviews with physician group leaders. The guide’s questions were designed to assess group leaders’ use of patient experience reports, with emphasis on obtaining detailed descriptions of performance improvement activities, when such activities were present.

The guide first elicited performance improvement activities in an open format, asking respondents to list all activities. This open elicitation was followed by prompts about specific activities (e.g., changing office workflow, retraining physicians or other staff) if these were not previously mentioned. For each activity described by group leaders, the guide asked what aspect of patient experience the activity was intended to improve. These improvement targets were elicited in an open format, without follow-up prompts about specific targets. The guide also included queries about group size (number of physicians), organizational model (integrated medical group or independent practice association; IPA), employment of physicians, use of electronic health records (EHRs), and exposure to financial incentives based on clinical quality and patient experience.

The queries about organizational model were distinct from questions that assessed groups’ use of patient experience reports (i.e., a group’s use of reports did not influence its organizational model classification). For this study, integrated medical groups were defined as those in which most decisions about policies, staffing, and resources were made by a group manager or management team. In IPAs, by contrast, management decisions were made predominantly by individual practice sites.

We refined the interview guide based on input from selected colleagues, members of the MHQP staff and physician advisory council, and experts who served on a national advisory panel to the project. After the first few interviews, we made minor adjustments to the question sequence to streamline interview administration.

Interviews with Group Leaders

We conducted the semi-structured interviews via telephone between June and November 2008. An initial roster of medical group leaders was provided by MHQP, and all leaders were invited by mail to participate. Non-respondents received up to two additional mailed invitations and four telephone calls. Each respondent was a medical director, administrator, or manager who was considered a leader of the group and who would be knowledgeable about the group’s performance improvement initiatives (if any). Each interview was conducted by at least two investigators, with a project manager taking notes. The interviews were recorded and transcribed, and a research assistant verified interview transcripts for accuracy by comparing them to the original audio recordings. The study was approved by Human Subjects Committees at RAND and the Harvard School of Public Health.

Analysis

We analyzed the data using a three-step approach. First, researchers coded participants’ comments pertaining to their groups’ patient experience improvement activities. The coding scheme was developed inductively using a variation of content analysis21, which allowed information obtained during the interviews to be coded into coherent concepts based on participants’ descriptions as opposed to a pre-established set of categories. In all cases, responses were coded according to participants’ detailed description of improvement activities rather than the specific question to which the participant was responding. For each patient experience improvement initiative reported by group leaders, the coding scheme allowed identification of the corresponding performance improvement target. The text of each interview transcript was independently coded by one author (MWF) and a research assistant experienced in qualitative research. Coding discrepancies were resolved via conversation between coders or by consultation with a second author (GKS or ECS), and consensus was reached in all cases.

Second, based on these initial codes, each interview was assigned an aggregate code describing the group’s overall level of engagement. We identified three levels of physician group engagement with patient experience survey reports. Level 1 group leaders did not recall receiving patient experience survey reports or made no use of patient experience reports beyond distributing them to members of the group. Level 2 group leaders described taking one or more actions based on patient experience reports but focused these efforts on physicians or practice sites that were low performers. Level 3 group leaders described one or more group-wide initiatives to improve patient experience (including most or all physicians, staff, and practice sites in the group, regardless of achieved performance). We calculated the frequency with which Level 3 groups targeted each domain of patient experience and implemented each type of improvement initiative that was identified by the coding process.
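The three-level coding rule lends itself to a compact illustration. Below is a minimal sketch in Python of how the aggregate engagement code and the Table 3-style tallies could be computed; the coding itself was performed manually by the study team, and the function names and data layout here are illustrative assumptions rather than the authors’ actual instruments.

```python
# Illustrative sketch only: the study's coding was done manually by two
# trained coders; this block restates the three-level rule and the
# Table 3-style tally in code. All names and the data layout are
# hypothetical assumptions, not the authors' workflow.
from collections import Counter

def engagement_level(recalled_reports, initiatives):
    """Return the aggregate engagement code (1, 2, or 3) for one group.

    initiatives: list of dicts, each with a 'scope' key ('group_wide' or
    'low_performers') and a 'target' key (e.g., 'access').
    """
    if not recalled_reports or not initiatives:
        return 1  # no recall of reports, or no use beyond distributing them
    if any(i["scope"] == "group_wide" for i in initiatives):
        return 3  # at least one group-wide improvement initiative
    return 2      # actions aimed only at low-scoring physicians or sites

def level3_target_counts(groups):
    """Count how many level 3 groups named each improvement target (Table 3)."""
    counts = Counter()
    for g in groups:
        if engagement_level(g["recalled_reports"], g["initiatives"]) == 3:
            # each target counted once per group; targets are not mutually exclusive
            for target in {i["target"] for i in g["initiatives"]
                           if i["scope"] == "group_wide"}:
                counts[target] += 1
    return counts

# Hypothetical usage with a single group
group = {"recalled_reports": True,
         "initiatives": [{"scope": "group_wide", "target": "access"}]}
print(engagement_level(group["recalled_reports"], group["initiatives"]))  # -> 3
```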

Third, after coding was complete, we compared the characteristics of physician groups across the three levels of engagement. These characteristics included each group’s number of physicians, organizational model, and exposure to performance-based financial incentives. Because expected cell counts in the comparison tables of categorical data were too small for valid application of chi-square tests, we instead used Fisher’s exact tests to evaluate the statistical significance of associations between levels of engagement and group characteristics. Data management and statistical analyses were conducted using SAS software, version 9.2 (SAS Institute, Cary, North Carolina).
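The preference for Fisher’s exact test over chi-square can be illustrated with a quick check of expected cell counts. The sketch below uses Python with scipy rather than the SAS 9.2 software actually used in the study, and it reconstructs an approximate 2x2 table from percentages later reported in Table 2 (integrated medical group vs. other model; level 3 vs. level 1 groups), so the counts are estimates rather than the authors’ raw data.

```python
# Illustrative sketch, not the authors' SAS 9.2 analysis. The 2x2 table is
# reconstructed from percentages reported in Table 2: roughly 84% of 44
# level 3 groups and 33% of 12 level 1 groups were integrated.
from scipy.stats import chi2_contingency, fisher_exact

table = [[37, 7],   # level 3: integrated, not integrated
         [4, 8]]    # level 1: integrated, not integrated

# Expected cell counts under independence; an expected count below 5 is the
# usual rule of thumb against relying on the chi-square approximation.
chi2, p_chi2, dof, expected = chi2_contingency(table)
print(expected.min())   # ~3.2, so the chi-square approximation is suspect

# Fisher's exact test avoids the large-sample approximation entirely.
odds_ratio, p_exact = fisher_exact(table)
print(p_exact)          # small P value, in line with the reported P < 0.005
```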

RESULTS

Seventy-two group leaders responded (62% response rate). The median number of physicians per group was 15, with a range from 3 physicians to 244 physicians. Non-responding groups had fewer physicians (median 10 physicians per group; P = 0.02) but did not differ from responding groups on any other observable characteristic (number of practice sites or rate of network affiliation). Approximately half of groups had only one practice site (Table 1). Sixty-four percent of the groups were organized as integrated medical groups, 22% were IPAs, and 14% had a mixed organizational model (e.g., an IPA that contained a large, more integrated group as well as a collection of independent small practice sites). Fifty-one percent employed the majority of their physicians, and 63% were affiliated with a large multi-group physician network. Only 28% of groups reported being eligible for payment based on measures of patient experience, while 87% reported eligibility for payments based on measures of clinical quality (e.g., Healthcare Effectiveness Data and Information Set [HEDIS] measures).

Table 1.

Characteristics of Physician Groups

                                                    Responding groups   Non-responding groups   P value for difference
Number of physicians, median (range)                15 (3–244)          10 (2–46)               0.02*
Number of practice sites, median (range)            2 (1–19)            1 (1–10)                0.30*
Organizational model, number of groups (%)
  Integrated medical group                          46 (64)             NA
  Independent practice association                  16 (22)             NA
  Mixed                                             10 (14)             NA
Majority of physicians employed by group, n (%)     33 (51)             NA
Network-affiliated, n (%)                           45 (63)             32 (71)                 0.42**
Group eligible for performance-based payment, n (%)
  …based on patient experience                      19 (28)             NA
  …based on clinical quality (e.g., HEDIS measures) 55 (87)             NA

HEDIS, Healthcare Effectiveness Data and Information Set; NA, Not Available

*P value from Wilcoxon rank sum test

**P value from Fisher’s exact test
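For the responder vs. non-responder comparisons footnoted above, the Wilcoxon rank sum test is equivalent to the Mann-Whitney U test. The sketch below shows such a comparison in Python with scipy; because the per-group physician counts are not published, the data are synthetic placeholders drawn only to match the reported group counts and ranges, and the printed P value will not reproduce the reported 0.02.

```python
# Illustrative sketch only: the per-group physician counts behind Table 1
# are not published, so these arrays are synthetic placeholders matching
# the reported group counts and ranges; the P value will not reproduce
# the reported 0.02.
import numpy as np
from scipy.stats import mannwhitneyu  # equivalent to the Wilcoxon rank sum test

rng = np.random.default_rng(seed=0)
responders = rng.integers(3, 245, size=72)       # 72 responding groups, 3-244 physicians
non_responders = rng.integers(2, 47, size=45)    # 45 non-responding groups, 2-46 physicians

stat, p = mannwhitneyu(responders, non_responders, alternative="two-sided")
print(f"U = {stat}, two-sided P = {p:.3f}")
```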

Table 2 shows that only 17% of group leaders reported being unaware of patient experience reports or making no use of them to improve performance (level 1), and 22% used the reports to improve the performance of only low-scoring physicians or practice sites (level 2). The majority of physician group leaders (61%) reported using patient experience results to undertake group-wide improvement initiatives (level 3).

Table 2.

Level of Engagement with Patient Experience Reports

                                                    Level 1            Level 2            Level 3
                                                    (12 groups; 17%)   (16 groups; 22%)   (44 groups; 61%)
                                                    % of groups within each level
Number of physicians
  3–9                                               50                 31                 32
  10–19                                             42                 19                 36
  20 or more                                        8                  50                 32
Number of practice sites
  1                                                 42                 40                 52
  2–4                                               33                 13                 24
  5 or more                                         25                 47                 24
Organizational model
  Integrated medical group                          33*                31*                84
  Independent practice association                  42*                44*                9
  Mixed                                             25*                25*                7
Majority of physicians employed by group            20†                25†                69
Network-affiliated                                  50                 38†                75
Group eligible for performance-based payment…
  …based on patient experience                      0†                 25                 36
  …based on clinical quality (e.g., HEDIS measures) 90                 87                 87

HEDIS, Healthcare Effectiveness Data and Information Set

Definitions: Level 1, no use of patient experience reports beyond distribution within the group; Level 2, using patient experience reports to improve only low-performing physicians or practice sites; Level 3, engaging in at least one group-wide initiative to improve patient experience

*P < 0.005 for comparison to Level 3 (Fisher’s exact test)

†P < 0.05 for comparison to Level 3 (Fisher’s exact test)

Group organizational model was significantly associated with level of engagement. Integrated medical groups comprised approximately one-third of level 1 and level 2 groups but accounted for 84% of groups in level 3 (P < 0.005). Sixty-nine percent of level 3 groups employed the majority of their physicians, compared to 25% of level 2 groups and 20% of level 1 groups (P < 0.05). Level 3 groups were also more likely to be network-affiliated (75% vs. 38% in level 2; P < 0.05). While 36% of level 3 groups were eligible for payment based on measures of patient experience, none of the level 1 groups had such incentives (P < 0.05). In contrast, approximately 90% of groups at all three levels were exposed to financial incentives based on measures of clinical quality.

Among the 44 level 3 groups, the most common patient experience targets for group-wide performance improvement were access (57% of groups), communication with patients (48%), and customer service (45%) (Table 3). Physicians’ interactions with patients, patient education, and the continuity and coordination of care were less commonly reported as areas targeted for improvement.

Table 3.

Improvement Targets Reported by the Leaders of Level 3 Physician Groups

Improvement target* Examples of specific targets Number of groups (%)†
Access • Waiting times (to get an appointment, during an appointment) 25 (57)
• Availability of after-hours care
 
Communication with patients • Triage of incoming phone calls 21 (48)
• In-person communication with office staff
• Communication of test results to patients
 
Customer service • Staff courtesy 20 (45)
• Office and waiting room physical environment
 
Physicians’ interactions with patients • Interpersonal communication and demeanor during appointments 8 (18)
• Communication with patients between appointments
 
Patient education • Patient ability to self-manage disease 7 (16)
 
Continuity and coordination of care • Continuity of care with primary provider over time 3 (7)
• Performance of covering providers
• Coordination of care with providers outside the group
 
Unspecified • Patient experience in general 10 (23)

Definition: Level 3 physician groups engage in at least one group-wide initiative to improve patient experience

*Improvement targets are not mutually exclusive. Each physician group leader could report multiple improvement targets

†Percentages of the 44 level 3 physician groups

The most common improvement initiatives reported by the leaders of level 3 groups were to “change office workflow” (e.g., changing patient check-in procedures; 70% of groups), “provide training for non-clinicians” (e.g., classes for administrative assistants; 57%), “conduct EHR-based interventions” (e.g., installing a new EHR; 50%), and “reassign staff responsibilities” (e.g., non-clinicians performing a greater share of routine patient assessment and documentation tasks; 45%) (Table 4). Less common improvement strategies included improving communication systems other than EHRs, hiring or firing staff, training clinicians, improving appointment scheduling processes, expanding office hours, sharing “best practices” within the group, and performing physical plant upgrades.

Table 4.

Improvement Initiatives Reported by the Leaders of Level 3 Physician Groups

Improvement initiative* Examples of specific initiatives Number of groups (%)†
Change office workflow • Changes to patient check-in procedures 31 (70)
• Changes to procedures for handling test results or incoming mail
 
Train non-clinicians • Classes for administrative assistants 25 (57)
 
EHR-based interventions • Install new EHR 22 (50)
• Give existing EHR new features (e.g., patient portal, print out after-visit summary)
 
Reassign staff responsibilities • Non-clinicians perform a greater share of routine patient assessment and documentation tasks 20 (45)
 
Improve non-EHR communication system • Telephone system upgrade 17 (39)
• Mail or fax system upgrade
 
Hiring or firing interventions • Hiring new clinicians or staff 16 (36)
• Firing existing clinicians or staff
• New emphasis on patient experience in hiring new clinicians or staff
 
Train clinicians • New training for clinicians (including retreats, one-time training sessions, and ongoing long-term training) 13 (30)
 
Improve appointment scheduling process • Change processes for scheduling appointments (new or established patients) 12 (27)
• Institute open-access scheduling
 
Expand office hours • Add early morning, night, or weekend hours 6 (14)
 
Share “best practices” • Share best practices between physicians or practice sites within the group 6 (14)
• Learn best practices from another group
 
Physical plant upgrade • Improve office layout to allow greater patient privacy 4 (9)
• Cosmetic changes to waiting and examination rooms

EHR, electronic health record

Definition: Level 3 physician groups engage in at least one group-wide initiative to improve patient experience

*Improvement initiatives are not mutually exclusive. Each physician group leader could report multiple improvement initiatives

†Percentages of the 44 level 3 physician groups

The most common improvement initiatives reported by group leaders varied across improvement targets. To improve patients’ access to care, groups most commonly improved appointment scheduling processes and changed office workflow (Table 5). To improve communication with patients, groups most commonly invested in communication systems other than EHRs, reassigned staff responsibilities, and changed office workflow. To improve customer service, groups most commonly provided training to both clinicians and non-clinicians. When groups focused on improving primary care physicians’ interactions with patients, many initiatives were employed; however, training for physicians was not the most frequently chosen strategy.

Table 5.

Most Common Performance Improvement Initiatives Intended to Improve Each Improvement Target

Improvement target Most common performance improvement initiatives* (number of groups)
Access • Improve appointment scheduling process (11)
• Change office workflow (11)
• Reassign staff responsibilities (5)
• Expand office hours (5)
• Improve non-EHR communication system (5)
• Train non-clinicians (4)
• EHR-based intervention (3)
• Hiring or firing intervention (3)
• Share “best practices” (2)
 
Communication with patients • Improve non-EHR communication system (9)
• Reassign staff responsibilities (8)
• Change office workflow (7)
• EHR-based intervention (6)
• Hiring or firing intervention (4)
• Train non-clinicians (3)
• Expand office hours (1)
• Share “best practices” (1)
 
Customer service • Train non-clinicians (12)
• Train clinicians (8)
• Change office workflow (6)
• Hiring or firing intervention (4)
• Physical plant upgrade (3)
 
Physicians’ interactions with patients • Reassign staff responsibilities (3)
• EHR-based intervention (3)
• Train clinicians (2)
• Train non-clinicians (1)
• Share “best practices” (1)
• Change office workflow (1)
• Improve non-EHR communication system (1)
• Hiring or firing intervention (1)
• Physical plant upgrade (1)
 
Patient education • EHR-based intervention (5)
• Share “best practices” (1)
• Reassign staff responsibilities (1)
• Improve non-EHR communication system (1)
 
Continuity and coordination of care • EHR-based intervention (2)
• Change in office workflow (1)

EHR, electronic health record

Definition: Level 3 physician groups engage in at least one group-wide initiative to improve patient experience

*Each physician group leader could report multiple improvement initiatives for each target. Improvement initiatives not associated with a particular target are not displayed

DISCUSSION

Public reporting on patient experience, which has previously focused on health plans and hospitals5, 6, is increasingly being applied to ambulatory physician groups and practice sites.7 Patient experience surveys are intended to produce performance results that physicians can use to identify specific targets for quality improvement, that patients can use to compare and select providers, and that payers can use as a basis for setting incentive payments in pay-for-performance programs.2, 8, 11, 22 Despite substantial investment in these efforts and the potential salience of this information to practicing physicians, little is known about how physician groups have responded.

We found that the majority of Massachusetts physician groups are engaged in efforts to improve the patient experience. Physician groups engaged in these efforts were more likely than others to have an integrated medical group organizational model (as opposed to an IPA or mixed model), to employ the majority of their physicians, and to have financial incentives based on patient experience.

However, a substantial number of physician group leaders reported no efforts to improve patient experience, and others focused their efforts exclusively on low-performing physicians or practice sites. These groups had less integrated organizational models, suggesting that improvement efforts may require a managerial infrastructure capable of starting and directing improvement activities. This finding is consistent with national data suggesting that medical groups may be more likely than IPAs to participate in general quality improvement activities.23 In addition, groups not engaged in improvement activities were more likely to lack payment incentives based on patient experience. This association between group-level financial incentives and engagement in patient experience improvement echoes similar findings at the physician level, where performance incentives that emphasize patient experience have been associated with improved performance.24

The most common areas of patient experience targeted by groups’ improvement efforts were access (e.g., waiting times for an appointment), communication with patients (e.g., triage of incoming phone calls), and customer service (e.g., staff courtesy). Groups were less likely to focus on the performance of physicians and other clinicians or on patient educational activities to enable self-management. Even though continuity of care has been strongly associated with patient satisfaction25, 26, very few groups reported efforts intended to improve the continuity and coordination of care (despite wide performance variation and relatively poorer statewide performance in these domains).9

Though improvements in physician communication skills are thought to be crucial to the provision of patient-centered care27, physician groups rarely pursued strategies to train physicians. Instead, groups most commonly changed processes for managing interactions between patients and nonclinical staff, trained non-clinicians, and invested in structural capabilities such as EHRs.

It is notable, however, that even when attempting to improve physician communication with patients, groups predominantly pursued this goal by reassigning staff responsibilities and adopting or enhancing EHRs. A reluctance to directly intervene with individual physicians may reflect physicians’ skepticism regarding physician-level patient experience results as well as a sensitivity to low morale among primary care physicians—two explanations that were volunteered by some group leaders.28, 29 Further, groups’ general focus on EHRs as a means of improving patient experience may reflect a previously observed association: in a national sample of physician groups, the use of patient feedback to analyze and improve services has been associated with increased adoption of health information technology.23

The study has limitations. Physician groups’ use of patient experience survey reports was based on self-report by group leaders. Despite our efforts to minimize response bias, leaders may have over-reported their efforts to improve patient experience. Our statewide sample of physician groups was too small to allow meaningful multivariable modeling, so we could not assess the independent effects of organizational variables on groups’ level of engagement with patient experience reports. The observational study design limits our ability to infer causation from associations between groups’ characteristics and activities. Groups that did not respond to our survey may have been less likely than respondents to engage in improvement activities, which would lead us to overestimate the overall level of engagement.

The study was limited to Massachusetts, and some findings may not generalize to other states. Two national surveys of physician groups have found associations between external incentives to improve patient experience and greater use of processes that may improve quality.20, 30 Because statewide public reporting of patient experience scores may constitute an external incentive, improvement efforts among physician groups in Massachusetts may exceed those in states without public reporting.

We lacked the data necessary to assess whether groups targeted their improvement efforts towards patient experience domains on which they had low performance. The extent to which groups’ reported improvement efforts will improve their scores on patient experience surveys is unknown. Finally, public reporting on patient experience had recently begun at the time of our study, and groups’ responses may evolve over time. Describing this evolution is a planned area of future research.

In a state that has publicly reported patient experience survey results for more than 2 years, we found that many physician groups have engaged in efforts to improve their performance. Groups with more integrated organizational models were especially likely to engage in group-wide improvement efforts, and all groups facing financial incentives based on patient experience reported improvement efforts of some kind. While patient experience surveys assess both provider-level and organization-level aspects of care, groups have predominantly focused their improvement efforts on organizational factors. If policy makers wish to motivate changes in the behavior of individual providers, new incentives that target specific, provider-focused domains of patient experience may be necessary.

Acknowledgments

Contributors: The authors thank Elizabeth Siteman, BA, for assistance in performing the group leader interviews, and Elizabeth Steiner, MPP, for assistance in coding the interview transcripts. The authors also thank the members of the study national advisory committee for constructive feedback on preliminary study results.

Funder: This study was sponsored by the Commonwealth Fund. The funder had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript.

Prior Presentation: Preliminary results from this study were presented at the Annual Research Meeting of AcademyHealth in Chicago, Illinois on June 29, 2009.

Conflict of Interest: None disclosed.

REFERENCES

1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C.: National Academies Press; 2001.
2. Cleary PD. The increasing importance of patient surveys. Br Med J. 1999;319(7212):720–721. doi: 10.1136/bmj.319.7212.720.
3. Safran DG, Kosinski M, Tarlov AR, et al. The primary care assessment survey: tests of data quality and measurement performance. Med Care. 1998;36(5):728–739. doi: 10.1097/00005650-199805000-00012.
4. Hays RD, Shaul JA, Williams VSL, et al. Psychometric properties of the CAHPS 1.0 survey measures. Med Care. 1999;37(3):MS22–MS31. doi: 10.1097/00005650-199903001-00003.
5. Landon BE, Zaslavsky AM, Bernard SL, Cioffi MJ, Cleary PD. Comparison of performance of traditional Medicare vs Medicare managed care. JAMA. 2004;291(14):1744–1752. doi: 10.1001/jama.291.14.1744.
6. Jha AK, Orav EJ, Zheng J, Epstein AM. Patients’ perception of hospital care in the United States. N Engl J Med. 2008;359(18):1921–1931. doi: 10.1056/NEJMsa0804116.
7. Safran DG, Karp M, Coltin K, et al. Measuring patients’ experiences with individual primary care physicians: results of a statewide demonstration project. J Gen Intern Med. 2006;21(1):13–21. doi: 10.1111/j.1525-1497.2005.00311.x.
8. Safran DG. Defining the future of primary care: what can we learn from patients? Ann Intern Med. 2003;138(3):248–255. doi: 10.7326/0003-4819-138-3-200302040-00033.
9. MHQP website. http://www.mhqp.org. Accessed November 8, 2010.
10. Hibbard JH, Stockard J, Tusler M. Hospital performance reports: impact on quality, market share, and reputation. Health Aff (Millwood). 2005;24(4):1150–1160. doi: 10.1377/hlthaff.24.4.1150.
11. Hibbard JH. Engaging health care consumers to improve the quality of care. Med Care. 2003;41(1 Suppl):I61–I70. doi: 10.1097/00005650-200301001-00007.
12. Schneider EC, Epstein AM. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA. 1998;279(20):1638–1642. doi: 10.1001/jama.279.20.1638.
13. Faber M, Bosch M, Wollersheim H, Leatherman S, Grol R. Public reporting in health care: how do consumers use quality-of-care information? A systematic review. Med Care. 2009;47(1):1–8. doi: 10.1097/MLR.0b013e3181808bb5.
14. Schneider EC, Epstein AM. Influence of cardiac-surgery performance reports on referral practices and access to care: a survey of cardiovascular specialists. N Engl J Med. 1996;335(4):251–256. doi: 10.1056/NEJM199607253350406.
15. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293(10):1239–1244. doi: 10.1001/jama.293.10.1239.
16. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148(2):111–123. doi: 10.7326/0003-4819-148-2-200801150-00006.
17. American Academy of Family Physicians, American Academy of Pediatrics, American College of Physicians, American Osteopathic Association. Joint principles of the patient-centered medical home. March 2007. www.medicalhomeinfo.org/Joint%20Statement.pdf. Accessed November 8, 2010.
18. Friedberg MW, Coltin KL, Pearson SD, et al. Does affiliation of physician groups with one another produce higher quality primary care? J Gen Intern Med. 2007;22(10):1385–1392. doi: 10.1007/s11606-007-0234-0.
19. Mehrotra A, Pearson SD, Coltin KL, et al. The response of physician groups to P4P incentives. Am J Manag Care. 2007;13(5):249–255.
20. Casalino L, Gillies RR, Shortell SM, et al. External incentives, information technology, and organized processes to improve health care quality for patients with chronic diseases. JAMA. 2003;289(4):434–441. doi: 10.1001/jama.289.4.434.
21. Pope C, Ziebland S, Mays N. Analysing qualitative data. In: Pope C, Mays N, eds. Qualitative Research in Health Care. 2nd ed. London: BMJ Books; 2000.
22. Hibbard JH, Peters E, Slovic P, Finucane ML, Tusler M. Making health care quality reports easier to use. Jt Comm J Qual Improv. 2001;27(11):591–604. doi: 10.1016/s1070-3241(01)27051-5.
23. Robinson JC, Casalino LP, Gillies RR, Rittenhouse DR, Shortell SS, Fernandes-Taylor S. Financial incentives, quality improvement programs, and the adoption of clinical information technology. Med Care. 2009;47(4):411–417. doi: 10.1097/MLR.0b013e31818d7746.
24. Rodriguez HP, Scoggins JF, Glahn T, Zaslavsky AM, Safran DG. Attributing sources of variation in patients’ experiences of ambulatory care. Med Care. 2009;47(8):835–841. doi: 10.1097/MLR.0b013e318197b1e1.
25. Saultz JW, Albedaiwi W. Interpersonal continuity of care and patient satisfaction: a critical review. Ann Fam Med. 2004;2(5):445–451. doi: 10.1370/afm.91.
26. Rodriguez HP, Rogers WH, Marshall RE, Safran DG. Multidisciplinary primary care teams: effects on the quality of clinician-patient interactions and organizational features of care. Med Care. 2007;45(1):19–27. doi: 10.1097/01.mlr.0000241041.53804.29.
27. Levinson W, Lesser CS, Epstein RM. Developing physician communication skills for patient-centered care. Health Aff. 2010;29(7):1310–1318. doi: 10.1377/hlthaff.2009.0450.
28. Rider EA, Perrin JM. Performance profiles: the influence of patient satisfaction data on physicians’ practice. Pediatrics. 2002;109(5):752–757. doi: 10.1542/peds.109.5.752.
29. Bodenheimer T. Primary care--will it survive? N Engl J Med. 2006;355(9):861–864. doi: 10.1056/NEJMp068155.
30. Rittenhouse DR, Shortell SM, Gillies RR, et al. Improving chronic illness care: findings from a national study of care management processes in large physician practices. Med Care Res Rev. 2010;67(3):301–320. doi: 10.1177/1077558709353324.
