Author manuscript; available in PMC: 2017 May 12.
Published in final edited form as: Med Care. 2012 Nov;50(Suppl):S74–S82. doi: 10.1097/MLR.0b013e31826b1087

Improving Organizational Climate for Quality and Quality of Care

Does Membership in a Collaborative Help?

Ingrid M Nembhard, Veronika Northrup, Dale Shaller, Paul D Cleary
PMCID: PMC5428889  NIHMSID: NIHMS856990  PMID: 23064280

Abstract

Background

The lack of quality-oriented organizational climates is partly responsible for deficiencies in patient-centered care and poor quality more broadly. To improve their quality-oriented climates, several organizations have joined quality improvement collaboratives. The effectiveness of this approach is unknown.

Objective

To evaluate the impact of collaborative membership on organizational climate for quality and service quality.

Subjects

Twenty-one clinics, 4 of which participated in a collaborative sponsored by the Institute for Clinical Systems Improvement.

Research Design

Pre-post design. Preassessments occurred 2 months before the collaborative began in January 2009. Postassessments of service quality and climate occurred about 6 months and 1 year, respectively, after the collaborative ended in January 2010. We surveyed clinic employees (eg, physicians, nurses, receptionists) about the organizational climate and patients about service quality.

Measures

Prioritization of quality care, high-quality staff relationships, and open communication as indicators of quality-oriented climate; and timeliness of care, staff helpfulness, doctor-patient communication, rating of doctor, and willingness to recommend the doctor’s office as indicators of service quality.

Results

There was no significant effect of collaborative membership on quality-oriented climate and mixed effects on service quality. Doctors’ ratings improved significantly more in intervention clinics than in control clinics, staff helpfulness improved less, and timeliness of care declined more. Ratings of doctor-patient communication and willingness to recommend doctor were not significantly different between intervention and comparison clinics.

Conclusion

Membership in the collaborative provided no significant advantage for improving quality-oriented climate and had equivocal effects on service quality.

Keywords: service quality, patient experience, organizational climate, CAHPS, quality improvement collaborative


In the United States and throughout the world, there is increasing recognition that medical care should be “patient-centered,” that is, “respectful of and responsive to individual patient preferences, needs, and values and ensuring that patient values guide all clinical decisions.” (p. 6)1 Many also argue the need for greater emphasis on “service quality” broadly.2 Service quality refers to how well the care experience matches patients’ expectations.3,4 In health care, good service quality exists when patient care experiences—from scheduling an appointment to communicating with office staff and clinicians to the decision-making process about treatments—meet or exceed patients’ desires.5,6 Although better service quality and patient centeredness are important aims for the health care system in their own right, research also suggests that they are important because they are positively associated with a variety of desirable outcomes including greater patient adherence to treatment recommendations, better health outcomes,7,8 higher staff satisfaction, and better financial performance.9,10

Despite increasing emphasis on patient-centered care and service quality, patients frequently report poor care experiences. In 1 study, for example, 78% of patients reported at least 1 communication problem during their clinical encounter (e.g., not receiving understandable answers to their questions).11 Experts have argued that a lack of quality-oriented organizational climates is partly responsible for poor service quality.12 In an organization with a quality-oriented climate, there is a shared perception among staff that the organization expects, supports, and rewards efforts to ensure that patients receive quality care.13,14 Research suggests that unsupportive climates are also partly responsible for poor technical quality of care, that is, patients not receiving services that “increase the likelihood of desired health outcomes and are consistent with current professional knowledge” (p. 232).1

In 2004, the Institute for Clinical Systems Improvement (ICSI), a nonprofit organization in Minnesota devoted to helping organizations deliver patient-centered and value-driven care, began sponsoring the Leading a Culture of Quality (LCQ) Action Group, a quality improvement collaborative to help medical groups and hospitals assess and improve their climate “to make it more supportive to quality improvement efforts.”15 In a collaborative, several organizations come together to work on improving performance in a target area.16 Historically, collaboratives have been used to facilitate the implementation of clinical and operational practices.17,18 Whether they are an effective strategy for enhancing organizational climate is unknown. To assess their effectiveness, we conducted a study of the impact of membership in the LCQ Action Group on primary care clinics’ climate and service quality. Research suggests that higher service quality fosters better provider-patient relationships, which enables the selection and execution of technically better services.19 Thus, service quality may facilitate better technical quality. In this study, we focused only on service quality because of data availability.

METHODS

Research Sites

We invited the largest of the 9 medical groups participating in ICSI’s fourth LCQ Action Group, HealthPartners Medical Group, to participate in our study. HealthPartners Medical Group is part of HealthPartners, a nonprofit, consumer-governed, integrated health care system. The Medical Group provides primary care at clinics throughout the Minneapolis-St Paul area. Although HealthPartners has instituted many initiatives to improve quality,20 its leadership believed that potential for improvement remained and was interested in whether collaborative membership would help to improve its clinics’ climate and quality. Therefore, the senior leadership enrolled 4 of its 21 clinics in ICSI’s collaborative. All had the same new senior administrator. He and the leaders of these clinics wanted to experience the collaborative and to do so as a group; hence, they were enrolled. We refer to the 4 participating clinics as “intervention clinics” and HealthPartners’ other clinics as “control clinics” (N = 17). The intervention and control clinics did not differ significantly in known characteristics (P≥0.10; Table 1).

TABLE 1.

Comparison of Clinic Characteristics by Time Period and Intervention Status

Baseline: Intervention Clinics (n = 4), Control Clinics (n = 17), Total Clinics (n = 21), P*
Follow-up: Intervention Clinics (n = 4), Control Clinics (n = 17), Total Clinics (n = 21), P*
No. staff 38.20 (8.11) 36.9 (12.6)   37.2 (11.69)   0.57 35.18 (6.76)   34.46 (12.76)   34.60 (11.72)   0.69
 No. clinical leaders (eg, physicians)   8.98 (2.26)   9.24 (3.36)   9.19 (3.13)   1.00   8.62 (1.75)   8.75 (3.61)   8.72 (3.30)   1.00
 No. clinical staff (eg, nurses) 18.10 (6.51) 15.49 (5.63) 15.98 (5.73)   0.63 16.56 (4.11) 14.98 (6.04) 15.28 (5.67)   0.41
 No. administrative staff (eg, office assistants)   7.98 (1.49)   9.48 (3.68)   9.19 (3.40)   0.40   7.45 (1.50)   8.67 (3.70)   8.43 (3.39)   0.57
 No. managers   3.14 (2.23)   2.79 (0.98)   2.78 (1.24)   0.63   2.56 (0.85)   2.07 (0.77)   2.16 (0.79)   0.36
No. patient visits per year 40,571.25 (10,707.84) 35,966.00 (14,931.77) 36,843.19 (14,106.68)   0.45 40,205.5 (9715.4) 35,017.88 (14,667.42) 36,006.00 (13,806.59)   0.51
Percentage of patients with an established relationship to doctor (ie, greater than a year) 67.0 69.4 68.94   0.18 67.6 65.2 65.64   0.10
Years as a member of HealthPartners 20.25 (12.42) 19.23 (13.21) 19.43 (12.77)   0.89
Years clinic chief held position   7.00 (1.82)   5.88 (4.86)   6.10 (4.40)   0.66
* P-value derived from the Wilcoxon rank-sum test comparing intervention clinics to control clinics.

Mean and SD of the noted characteristic in the referenced group, except for entries indicating whether patients had an established relationship to doctor (ie, greater than a year) for which the percentage of patients is indicated instead.

Years as a member of HealthPartners and years clinic chief held position increased uniformly by 1 year; thus, we only examined the baseline values for these characteristics.
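The clinic-level comparisons in Table 1 use the Wilcoxon rank-sum test. As a sketch of that test (a normal approximation without tie correction, not the exact small-sample version statistical packages can report), with hypothetical staffing counts rather than the study data:

```python
from math import sqrt, erfc

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation,
    with average ranks for ties (no tie correction in the variance).
    Returns (z statistic, P-value)."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for a run of tied values
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])  # rank sum for the first group
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return z, erfc(abs(z) / sqrt(2))  # two-sided P-value

# Hypothetical staff counts for 4 intervention and 17 control clinics
intervention = [38, 45, 30, 40]
control = [36, 28, 50, 33, 41, 25, 39, 31, 44, 29, 35, 38, 27, 46, 32, 37, 30]
z, p = rank_sum_test(intervention, control)
print(f"z = {z:.2f}, P = {p:.2f}")
```

With samples this small, a nonsignificant P (as in Table 1) mainly reflects limited power, not demonstrated equivalence.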

The Intervention

Each intervention clinic participated in the Action Group, which began in January 2009 and ended in January 2010. The clinics were represented by a 5–10 person team that consisted of a senior leader, individuals in the next 2 layers of management, and other clinical and administrative staff. As part of the collaborative, organizations and their teams engaged in several activities. At an initial meeting, the teams met and discussed the results of a survey about their clinic’s climate that was completed before that meeting (see below). After that initial meeting, the teams met every 2–3 months. At those meetings, experts presented strategies for enhancing quality-oriented climate, including use of physician and staff compacts to govern behavior, crucial conversations, adaptive leadership and change, trust building, physician leadership, and fair process. Teams then participated in exercises to “practice what they learned.” Between meetings, teams implemented initiatives to improve the climate for quality in their clinic. They were expected to record their activities quarterly in an online journal via a listserv and to report on their efforts during meetings. Lastly, each team was expected to participate in monthly conference calls. These calls were used to address issues that were discussed in journal entries and to follow up on discussions begun during meetings.

Data Collection

We conducted our first assessment of both intervention and control clinics’ quality-oriented climate and service quality in September to November 2008, before the start of the collaborative in January 2009. We refer to this time period as “baseline.” The postassessment of quality-oriented climate took place about 1 year after the end of the collaborative, November 2010 to January 2011, whereas the postassessment of service quality took place from June to August 2010 (about 6 mo after the collaborative ended). We refer to the postassessment periods as “follow-up.” The baseline and follow-up periods were chosen such that there was no overlap in their reference periods that might undermine the ability to detect an effect of the intervention. The reference period for staff was the present day; for patients, it was an office visit up to 4 and 2 months before the start of surveying for the baseline and follow-up periods, respectively. A shorter time frame was used for the follow-up period because a high and sufficient volume of patient responses was received within this shorter time. Patients in the baseline period reported on care during May to July 2008, whereas patients in the follow-up period reported on care during April to June 2010.

At both assessments, clinics’ employees (e.g., physicians, nurses, etc.) were surveyed about aspects of the work climate that support quality care, and clinics’ patients were surveyed about the quality of care they received and their satisfaction with that care. Employees’ assessments of their clinic’s climate were obtained using the LCQ survey, which was developed by (and is available from) the Satisfaction/Performance/Research Center (http://www.sprcenter.com). The survey contains 25 questions that assess quality-oriented climate and feelings toward work (i.e., job satisfaction, sense of accountability, and intent to leave).21 We used the LCQ measures of quality-oriented climate for this research. All employees were asked to complete the survey online. In 2008, a total of 609 employees (79%) completed the survey, with a mean of 29 respondents per clinic [intervention group mean = 51.25 (range, 26–84); control group mean = 23.77 (range, 4–58)]. In 2010, 762 employees (80%) returned surveys, with a mean of 36 respondents per clinic [intervention group mean = 55.00 (range, 35–110); control group mean = 31.88 (range, 13–51)].

Respondents reported their professions through a survey question as: providers [e.g., physicians; N = 121 (response rate = 64%) in 2008, N = 148 (58%) in 2010], clinical support staff [e.g., registered nurses; N = 297 (89%) in 2008, N = 398 (85%) in 2010], administrative support staff [e.g., referral coordinators; N = 144 (77%) in 2008, N = 169 (89%) in 2010], and managers or supervisors [N = 47 (81%) in 2008, N = 47 (100%) in 2010]. Each clinic received a report containing its climate scores, regardless of its status as an intervention or control clinic. The report was prepared by the survey developer, who had also administered the survey. For this study, we excluded managers and supervisors to focus on the climate for those working closely with patients, resulting in a final sample of 562 and 715 staff in 2008 and 2010, respectively. In addition, we use the term “clinical leaders” rather than “providers” to better capture the decision-making role of these individuals and to distinguish them from supporting staff, both clinical and administrative.

A sample of patients treated at the same clinics were mailed the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Clinician & Group Visit Survey, which assesses 3 aspects of the patient care experience (described in Measures below), overall service quality, and patient characteristics. The survey was mailed over a 3-month period to a random sample of adult patients (age, 18 or older) who had had at least 1 visit with a primary care physician at the clinic during the previous 4 months. Patients who did not respond were sent a follow-up survey. The 4-month window was chosen to obtain at least the minimum number of respondents needed for reliable clinic-level estimates of recent service quality in each clinic; our clinic-level reliability threshold was 0.70.

In 2008, a total of 4491 patients (43%) returned completed surveys, with a mean of 214 respondents per clinic [control group mean = 216 (range, 95–332); intervention group mean = 204 (range, 157–252)]. In 2010, a total of 6960 patients (41%) returned surveys, with a mean of 331 respondents per clinic [control group mean = 327 (range, 214–539); intervention group mean = 352 (range, 257–432)]. In both years, the majority of respondents were 45 years of age or older, female, white, non-Hispanic, with at least some college experience and in good or better health (Table 2).

TABLE 2.

Description of the Patient Sample by Time Period and Affiliated Clinic’s Intervention Status

Baseline: Intervention Clinics (n = 815), n (%), Control Clinics (n = 3676), n (%), P
Follow-up: Intervention Clinics (n = 1407), n (%), Control Clinics (n = 5553), n (%), P
Age group (y)
 18–24     28 (3.5)   117 (3.2)   0.54*     70 (5.0)   307 (5.6)   0.04*
 25–34     61 (7.5)   222 (6.1)   105 (7.5)   533 (9.7)
 35–44     60 (7.4)   330 (9.1)   107 (7.7)   506 (9.2)
 45–54   135 (16.7)   621 (17.1)   260 (18.7)   991 (18.0)
 55–64   214 (26.4)   884 (24.3)   361 (25.9) 1254 (22.8)
 65–74   138 (17.0)   663 (18.2)   249 (17.9)   902 (16.4)
 75+   174 (21.5)   803 (22.1)   242 (17.4) 1003 (18.3)
 Missing           5         36         13         57
Sex
 Female   445 (55.0) 2206 (60.5) < 0.01   848 (61.4) 3614 (66.4) < 0.001
 Male   364 (45.0) 1438 (39.5)   534 (38.6) 1830 (33.6)
 Missing           6         32         25       109
Highest grade completed
 ≤Eighth grade     20 (2.5)     90 (2.5)   0.39*     25 (1.9)   124 (2.3)   0.62*
 Some high school     33 (4.12)   154 (4.28)     40 (3.0)   198 (3.7)
 High school grad or GED   193 (24.09)   875 (24.33)   321 (23.7) 1190 (22.4)
 Some college/2-y degree   263 (32.83) 1075 (29.89)   425 (31.4) 1686 (31.8)
 4-y college graduate   132 (16.48)   601 (16.71)   248 (18.3)   935 (17.6)
 > 4-y college   160 (19.98)   802 (22.3)   295 (21.8) 1170 (22.1)
 Missing         14         79         53       250
Race
 White   717 (89.6) 3106 (86.3)   0.03 1239 (91.2) 4624 (86.4) < 0.001
 Black     33 (4.1)   220 (6.1)     42 (3.1)   299 (5.6)
 Other     50 (6.3)   273 (7.6)     77 (5.7)   432 (8.1)
 Missing         15         77         49       198
Ethnicity
 Non-Hispanic   773 (98.1) 3421 (97.5)   0.30 1317 (98.9) 5157 (98.0)   0.05
 Hispanic     15 (1.9)     89 (2.5)     16 (1.2)   106 (2.0)
 Missing         27       166         74       290
Health Status
 Excellent   100 (12.5)   494 (13.7)   0.09*   183 (13.2)   813 (14.8)   0.22*
 Very good   264 (33.0) 1273 (35.3)   518 (37.6) 2053 (37.5)
 Good   293 (36.7) 1248 (34.7)   487 (35.1) 1799 (32.8)
 Fair   114 (14.3)   493 (13.7)   151 (10.9)   676 (12.3)
 Poor     28 (3.5)   101 (2.8)     47 (3.4)   139 (2.5)
 Missing         16         67         21         73
* P-value derived from the Mantel-Haenszel χ2 test comparing intervention clinics and control clinics.

P-value derived from χ2 test comparing intervention clinics and control clinics.

Measures

Quality-oriented Climate

We assessed 3 aspects of quality-oriented climates that have been identified in prior research: the prioritization of quality care, high-quality relationships between staff, and open communication.13 Prioritization of quality care refers to the extent to which an emphasis on quality care permeates the organization’s mission and action.22 High-quality relationships are those characterized by trust and cooperation.23 Open communication exists when individuals express their thoughts without fear of punishment or any other negative repercussion.24,25 The prioritization of quality care ensures that all staff within the organization work toward increasing the likelihood of desired health outcomes for patients, whereas high-quality relationships and open communication between staff facilitate coordination to achieve high-quality care. We assessed the presence of these 3 aspects of quality-oriented climate using items from the LCQ survey (Table 3A). Clinics’ employees indicated their level of agreement with each survey item using a 5-point scale (1 = strongly disagree, 5 = strongly agree).

TABLE 3.

Measures of Quality-Oriented Climate and Service Quality

Reliability—Cronbach α*
Individual-level Clinic-level
(A) Measures of Quality-oriented Climate
 Prioritization of quality care (in mission and action) 0.85 0.93
  Senior management shows by its actions that quality is a top priority in this organization 0.89 0.91
  I have a clear understanding of the organization’s mission, vision, and values
  I know of 1 or more quality initiatives going on within our organization this year
  Results of our quality improvement efforts are measured and communicated regularly to staff
  There is good information flow among departments to provide high-quality patient safety and care
  I am satisfied with the information that I receive from management and what’s going on in the organization
  People here feel a sense of urgency about improving quality of patient care and service
 High-quality relationships between staff 0.88 0.98
  I observe a high level of cooperation among all members of my work unit or department 0.87 0.96
  There is a climate of trust in my department or work unit
 Open communication 0.87 0.93
  I feel free to express my opinion without worrying about the outcome 0.89 0.92
  Staff will freely speak up if they see something that may improve patient care or affect patient safety
  The climate in the organization promotes the free exchange of ideas
(B) Measures of Service Quality
 Patient-centered care: timeliness of care 0.75 0.74
  Did you see this doctor within 15 min of your appointment time? 0.60 0.51
  Did someone from this doctor’s office follow-up to give you results?
 Patient-centered care: staff helpfulness 0.97 0.87
  Clerks and receptionists as helpful as you thought they should be? 0.98 0.92
  Clerks and receptionists treat you with courtesy and respect?
 Patient-centered care: quality of doctor-patient communication 0.98 0.94
  Did this doctor explain things in a way that was easy to understand? 0.97 0.81
  Did this doctor listen carefully to you?
  Did doctor give instructions on taking care of health problems/concerns?
  Did doctor seem to know important information about your medical history?
  Did doctor show respect for what you had to say?
  Did doctor spend enough time with you?
 Overall service quality: overall rating of doctor
  Using any number from 0 to 10, where 0 is worst doctor possible and 10 is best doctor possible, what number would you use to rate this doctor?
 Overall service quality: willingness to recommend office
  Would you recommend this doctor’s office to your family and friends?
* Cronbach α near or above 0.70 indicates satisfactory reliability of the measure; α between 0.50 and 0.70 indicates moderate reliability.26 The first number in the column is based on 2008 data; the second number is based on 2010 data.

A confirmatory factor analysis of the responses using the robust maximum likelihood method affirmed the a priori assumption about which items belonged to each measure and the discriminant validity of each measure. Affirmation was provided by comparison of the results of the analysis to standard criteria for goodness-of-fit, described elsewhere27: χ2 (degrees of freedom) [χ2(df)]; Tucker Lewis Index (TLI) > 0.95; root mean square error of approximation (RMSEA) ≤ 0.05; and standardized root mean square residual (SRMR) ≤ 0.05. In both years of our sample, the criteria were met [for 2008: χ2(df) = 265(125), P < 0.0001, TLI = 0.96, RMSEA = 0.04, SRMR = 0.04; for 2010: χ2(df) = 363(125), P < 0.0001, TLI = 0.95, RMSEA = 0.05, SRMR = 0.03]. In addition, Cronbach α for each measure was above 0.70, indicating the satisfactory reliability of our measures26 of quality-oriented climate at both the individual and clinic levels.
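As a reminder of how the reliability criterion is computed, Cronbach α combines the item variances with the variance of the summed scale. A minimal sketch using Python's standard library, with hypothetical 5-point responses rather than the study data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach alpha for a scale; `items` is a list of item-response
    columns (one list of respondent scores per item)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent scale total
    sum_item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Hypothetical 1-5 agreement responses: 3 items x 5 respondents
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 5, 3, 4, 1]]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

When items move together across respondents (as above), the total variance dominates the summed item variances and α approaches 1; uncorrelated items push it toward 0.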

Service Quality

We assessed service quality by measuring 3 specific aspects of the patient care experience: timeliness of care, staff helpfulness, and quality of doctor-patient communication. When timely care is provided, patients are able to access health care and health information without the distress of waits and delays. When office staff are helpful, patients receive the respect and information they need, which enhances the care experience. Lastly, good doctor-patient communication provides patients with the information they need to manage their conditions and allows patients to actively participate in their health care. The items we used from the CAHPS survey to measure these 3 dimensions of patient-centered care are presented in Table 3B. Patients indicated whether they experienced the action described in each question using the following response scale: “Yes, Definitely,” “Yes, Somewhat,” or “No.” Consistent with the “top-box” approach for reporting CAHPS responses,28 we created a binary variable for each survey item to indicate whether a patient answered “Yes, Definitely,” as we were interested in whether patients unquestionably experienced high-quality service.
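The top-box recoding described above is a simple mapping from response text to a binary indicator; a sketch, with hypothetical response strings:

```python
def top_box(response):
    """Return 1 if the most positive answer was given, 0 otherwise;
    missing responses stay missing."""
    if response is None:
        return None
    return 1 if response == "Yes, Definitely" else 0

# Hypothetical responses to one CAHPS item
responses = ["Yes, Definitely", "Yes, Somewhat", "No", "Yes, Definitely", None]
coded = [top_box(r) for r in responses]
print(coded)  # [1, 0, 0, 1, None]
```

Note that “Yes, Somewhat” is coded 0 along with “No”: the top-box score deliberately counts only unambiguously positive experiences.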

Because patient characteristics predict patients’ reports of their experiences and because almost all of the measured characteristics differed between intervention and control groups at either baseline or follow-up (Table 2), we calculated the risk-adjusted probability of a “Yes, Definitely” response for each question in both years using generalized estimating equations (Genmod procedure in SAS 9.2). We adjusted for all patient characteristics listed in Table 2. We then averaged the risk-adjusted probabilities across the questions that made up the scales for each aspect of the patient care experience measured to match the CAHPS-recommended composites (e.g., the doctor-patient communication measure was created by averaging the adjusted probabilities for “Yes, Definitely” across the 6 items in the CAHPS composite).28 Thus, our measures of timeliness of care, doctor-patient communication, and staff helpfulness indicate the risk-adjusted probability, on a 0–1 scale, that a patient definitely experienced the specified dimension of patient-centered care. Table 3B shows that these risk-adjusted measures of service quality were generally reliable in our sample.
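The final composite step is just the mean of the item-level adjusted probabilities. A sketch of that averaging, with hypothetical adjusted probabilities standing in for the GEE output (the item names are ours, for illustration only):

```python
from statistics import mean

# Hypothetical risk-adjusted probabilities of a "Yes, Definitely" response
# for the 6 doctor-patient communication items (names are illustrative).
adjusted_probs = {
    "explained_clearly": 0.82,
    "listened_carefully": 0.86,
    "gave_instructions": 0.74,
    "knew_history": 0.70,
    "showed_respect": 0.90,
    "spent_enough_time": 0.78,
}

# The composite is the mean of the item-level adjusted probabilities (0-1 scale)
communication = mean(adjusted_probs.values())
print(f"communication composite = {communication:.3f}")  # 0.800
```

Averaging the adjusted probabilities (rather than raw responses) means the composite already reflects the case-mix correction applied at the item level.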

Lastly, we assessed overall service quality by using questions from the CAHPS survey that asked patients to provide an overall rating of their physician and their willingness to recommend the doctor’s office to family and friends. Patients reported the former on a scale from 0 (lowest rating) to 10 (highest rating) and the latter using the response scale used for the other CAHPS measures (i.e., “Yes, Definitely,” “Yes, Somewhat,” or “No”). We then recoded their responses using the top-box approach; that is, we created a dummy variable indicating whether a response fell in the most positive response category (e.g., 10 vs. other). We adjusted both measures for patient characteristics as well.

Analyses

We first examined the consistency of employees’ survey responses about quality-oriented climate to determine the appropriateness of including all respondents in a single analysis. We focused on the agreement between members of different professional groups (e.g., clinical leaders vs. clinical support staff vs. administrative support staff) because research has found that perceptions of climate can be significantly different between groups.29–31 When such differences are present, analyses should account for the differences between groups. We examined the level of agreement about quality-oriented climate between the 3 professional groups in our sample using SAS PROC MIXED with a repeated statement to account for the correlation among workers within clinics. We assessed the significance of the overall effect of professional group and the differences in least squares means between professional groups. As discussed in the Results section below, we found significant differences between professional groups. Therefore, we conducted separate analyses of quality-oriented climate for clinical leaders and the supporting clinical and administrative staff.
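As a simplified illustration of testing for between-group differences in mean climate perceptions, the sketch below computes a one-way F statistic; unlike the paper's mixed model, it ignores the clustering of workers within clinics, and all scores are hypothetical:

```python
from statistics import mean

def one_way_f(groups):
    """One-way F statistic for differences in group means.
    Note: unlike PROC MIXED with a repeated statement, this
    treats all observations as independent."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(v for g in groups for v in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical 1-5 climate scores by professional group
leaders = [4.2, 4.0, 3.9, 4.1]
clinical_support = [3.6, 3.5, 3.8, 3.4]
admin_support = [3.5, 3.6, 3.4, 3.7]
print(f"F = {one_way_f([leaders, clinical_support, admin_support]):.1f}")
```

Ignoring within-clinic correlation tends to overstate significance, which is why the study's repeated-measures adjustment matters; this sketch only conveys the shape of the comparison.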

To examine whether the collaborative had an effect, we conducted 2 sets of analyses. First, we assessed whether there were significant differences between intervention and control clinics in each aspect of quality-oriented climate and each risk-adjusted measure of service quality at baseline and follow-up. To conduct these analyses, we used SAS PROC MIXED to estimate individual-level, linear models with fixed effects for the type of clinic (intervention vs. control) and with adjustment for the nesting of staff and patients within clinics. Missing responses were assumed to be missing at random. In all analyses, we modeled a continuous, dependent variable: aspects of quality-oriented climate on a 1–5 scale or risk-adjusted probability of high service quality on a 0–1 scale.

Second, we examined the mean changes in climate and service quality between baseline and follow-up within intervention clinics and control clinics separately, and the difference in change between these groups of clinics. For the former, we used mixed linear models. For the latter, we performed analysis of covariance in which we adjusted for the baseline level of the measure of interest using clinic-level means. For climate-related analyses, we used the clinic-level mean for the relevant professional group (clinical leaders or staff). We adjusted for the baseline levels because there were statistically significant differences between intervention and control clinics at baseline for service quality measures (see below) and we wished to be consistent throughout our analyses.
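The analysis of covariance in the second step amounts to regressing the follow-up score on an intervention indicator plus the baseline score. A least-squares sketch with fabricated clinic-level values, constructed so the baseline-adjusted group effect is exactly 0.5:

```python
import numpy as np

# Fabricated clinic-level means constructed so that, exactly,
# followup = 1.0 + 0.8 * baseline + 0.5 * intervention
baseline = np.array([3.2, 3.5, 3.8, 3.4, 3.6, 3.9, 3.3, 3.7])
intervention = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
followup = 1.0 + 0.8 * baseline + 0.5 * intervention

# Design matrix: intercept, baseline covariate, group indicator
X = np.column_stack([np.ones_like(baseline), baseline, intervention])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
print(f"baseline-adjusted group effect = {coef[2]:.3f}")  # 0.500
```

The coefficient on the group indicator estimates the difference in change between intervention and control clinics after holding baseline performance constant, which is the quantity reported in the "Adjusted Difference" columns of Tables 5 and 6.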

RESULTS

Employees in different professional groups differed significantly in their perception of each aspect of quality-oriented climate at both baseline and follow-up. In both years, clinical leaders (e.g., physicians) perceived greater prioritization of quality, higher quality relationships between staff, and greater support for open communication than did clinical and administrative support staff (Table 4). The differences in perceptions between clinical leaders and clinical staff and between clinical leaders and administrative staff were statistically significant. There was not a statistically significant difference in perception between clinical and administrative staff. Thus, we regarded them as 1 group of “staff” for the remainder of our analyses.

TABLE 4.

Comparison of Perceptions of Quality-oriented Climate Between Professional Groups at Baseline and Follow-up

Baseline: Mean (SE); Significance of Difference in Group Means vs. Clinical Leaders and vs. Clinical Support Staff
Follow-up: Mean (SE); Significance of Difference in Group Means vs. Clinical Leaders and vs. Clinical Support Staff
Aspect of quality-oriented climate
 Prioritization of quality care
  Clinical leaders (eg, physicians) 4.0 (0.07) 4.0 (0.06)
  Clinical support staff 3.6 (0.07) *** 3.8 (0.06) ***
  Administrative support staff 3.5 (0.07) *** NS 3.8 (0.05) *** NS
 Open communication
  Clinical leaders (eg, physicians) 4.1 (0.08) 4.2 (0.11)
  Clinical support staff 3.6 (0.08) *** 3.7 (0.11) ***
  Administrative support staff 3.7 (0.06) *** NS 3.9 (0.10) *** NS
 High-quality staff relationships
  Clinical leaders (eg, physicians) 3.8 (0.08) 3.8 (0.08)
  Clinical support staff 3.5 (0.08) *** 3.5 (0.07) ***
  Administrative support staff 3.4 (0.06) *** NS 3.5 (0.07) *** NS
*** There was a significant difference at P < 0.001 between the means for the professional group noted in the row and the professional group noted in the column. NS indicates there was not a significant difference between means (P > 0.05). Reported means are least squares means from SAS Proc Mixed analysis.

At baseline, there was not a significant difference between intervention and control clinics in staff’s reports of any aspect of quality-oriented climate (Table 5). On the basis of staff’s reports, between baseline and follow-up, the intervention clinics significantly improved in the 3 aspects of climate measured, whereas the control clinics only experienced a significant increase in prioritization of quality (see footnotes for Table 5). At follow-up, however, there was not a significant difference between intervention and control groups. Even after we adjusted for baseline performance, the mean changes in all studied aspects of quality-oriented climate for intervention clinics were not significantly different from the mean changes for control clinics. The findings based on clinical leaders’ reports of climate were the same (table not shown).

TABLE 5.

Comparison of Quality-Oriented Climate in Intervention and Control Clinics

Baseline: Intervention (n = 151), Control (n = 290), P
Follow-up: Intervention (n = 164), Control (n = 403), P
Adjusted Difference: Intervention (n = 4 clinics), Control (n = 17 clinics), P
Prioritization of quality care 3.49 3.63 0.19 3.73 3.80 0.47 0.14 0.17 0.71
High-quality relationships§ 3.54 3.74 0.24 3.81 3.84 0.84 0.20 0.06 0.60
Open communication 3.29 3.50 0.12 3.53 3.52 0.91 0.05 0.03 0.90

P-values indicate the statistical significance of the difference between the preceding values for the intervention and control clinics.

* Aspects of climate were measured on a scale from 1 = strongly disagree to 5 = strongly agree.

Values presented in table are based on the responses of clinical and administrative support staff only. Clinical leaders’ ratings, which were significantly higher than clinical and administrative support staff’s (Table 4), resulted in the same findings. Table based on clinical leaders’ responses not shown for parsimony. At baseline, a total of 441 staff members (79%) completed the survey, with a mean of 21 respondents per clinic (intervention group mean = 38; control group mean = 17). At follow-up, 567 staff members (80%) returned surveys, with a mean of 27 respondents per clinic (intervention group mean = 41; control group mean = 24).

Difference between baseline and follow-up was significant in intervention group (P < 0.01) and in control group (P < 0.01).

§

Difference between baseline and follow-up was significant in intervention group (P = 0.02), but not in control group (P = 0.17).

Difference between baseline and follow-up was significant in intervention group (P = 0.01), but not in control group (P = 0.79).

Difference between baseline and follow-up was adjusted for baseline performance.
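The “Adjusted Difference” columns report changes adjusted for baseline performance. As a rough illustration of this kind of adjustment (not the authors’ exact statistical model, which is not specified here, and using invented clinic-level scores), one can regress clinic-level change scores on baseline scores and compare the mean residual change between the two groups:

```python
# Hypothetical clinic-level climate scores (1-5 scale); all values are
# invented for illustration only. First 4 clinics = intervention group.
baseline = [3.4, 3.5, 3.5, 3.6, 3.6, 3.65, 3.7, 3.75]
followup = [3.7, 3.8, 3.8, 3.9, 3.75, 3.8, 3.85, 3.9]
group = ["int"] * 4 + ["ctl"] * 4

change = [f - b for b, f in zip(baseline, followup)]

def mean(xs):
    return sum(xs) / len(xs)

# Ordinary least-squares regression of change on baseline: clinics with
# lower baselines tend to show larger raw improvements, so we remove
# that component before comparing groups.
bx, bc = mean(baseline), mean(change)
slope = (sum((x - bx) * (y - bc) for x, y in zip(baseline, change))
         / sum((x - bx) ** 2 for x in baseline))
intercept = bc - slope * bx

# Residual change = observed change minus change predicted from baseline.
resid = [c - (intercept + slope * b) for b, c in zip(baseline, change)]

adj_int = mean([r for r, g in zip(resid, group) if g == "int"])
adj_ctl = mean([r for r, g in zip(resid, group) if g == "ctl"])
adjusted_difference = adj_int - adj_ctl  # positive favors intervention
```

Under these invented numbers the intervention clinics retain a positive adjusted difference; the sketch only conveys the general idea of residualizing change on baseline, not the published estimates.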

With respect to service quality, there were significant differences between intervention and control clinics at baseline: control clinics scored higher in all areas except staff helpfulness, on which intervention clinics performed better (Table 6). At follow-up, there was no significant difference between groups on the measures of service quality studied, except for timeliness of care. In both groups, between baseline and follow-up, there was significant improvement in almost all aspects of service quality assessed; the exception was timeliness of care, which decreased (see footnotes for Table 6). Adjusting for baseline scores, intervention clinics improved significantly more than control clinics with respect to overall rating of doctor. However, staff helpfulness improved less, and timeliness of care declined more, in intervention clinics than in control clinics. We found no significant difference in the change between groups for quality of doctor-patient communication or willingness to recommend doctor to family and friends.

TABLE 6.

Comparison of Service Quality in Intervention and Control Clinics

                                 Baseline                          Follow-up                          Adjusted Difference**
                                 Intervention  Control     P       Intervention  Control     P        Intervention     Control           P
                                 (n = 815)     (n = 3676)          (n = 1407)    (n = 5553)           (n = 4 clinics)  (n = 17 clinics)
Patient-centered care*
 Timeliness of care‡             87.7          89.3       <0.01    84.0          87.0       <0.0001   −4.2             −2.1              <0.0001
 Staff helpfulness§              89.8          88.2        0.04    90.3          90.0        0.75      0.7              1.8               0.03
 Doctor-patient communication‖   85.5          88.5       <0.001   88.7          89.3        0.21      1.0              1.1               0.85
Overall service quality
 Overall rating of doctor†¶      38.5          44.4        0.07    45.4          46.1        0.82      4.3              2.4               0.03
 Willingness to recommend*#      79.6          83.7        0.02    84.4          85.6        0.36      2.3              2.4               0.96

P-values indicate the statistical significance of the difference between the preceding values for the intervention and control clinics.

* The degree to which specific aspects of patient-centered care were present, and patients were willing to recommend their doctor, was assessed by the risk-adjusted probability that patients reported “Yes, Definitely” having experienced that aspect, versus “No” or “Yes, Somewhat,” on a 0–1 scale.

† The degree to which the overall rating of the doctor was high was assessed by the risk-adjusted probability that patients provided a rating of 10 on a scale from 0 to 10.

‡ Difference between baseline and follow-up was significant in the intervention group (P < 0.001) and in the control group (P < 0.0001).

§ Difference between baseline and follow-up was not significant in the intervention group (P = 0.14), but was significant in the control group (P < 0.0001).

‖ Difference between baseline and follow-up was significant in the intervention group (P = 0.001) and in the control group (P < 0.0001).

¶ Difference between baseline and follow-up was significant in the intervention group (P < 0.0001) and in the control group (P = 0.04).

# Difference between baseline and follow-up was significant in the intervention group (P < 0.001) and in the control group (P = 0.02).

** Difference between baseline and follow-up was adjusted for baseline performance.
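The service-quality measures in Table 6 are “top-box” scores: the probability that a patient gave the most positive response. As a rough sketch of unadjusted top-box scoring (the risk adjustment itself depends on case-mix variables not detailed here, and the responses below are invented for illustration):

```python
# Hypothetical CAHPS-style responses; invented for illustration only.
comm_responses = ["Yes, Definitely", "Yes, Somewhat", "Yes, Definitely",
                  "No", "Yes, Definitely"]
doctor_ratings = [10, 9, 10, 7, 10, 8]  # 0-10 overall rating of doctor

def top_box(responses, top):
    """Proportion of responses in the single most positive category."""
    return sum(r == top for r in responses) / len(responses)

# Share reporting "Yes, Definitely" for doctor-patient communication.
comm_score = top_box(comm_responses, "Yes, Definitely")
# Share giving the doctor the maximum rating of 10.
rating_score = top_box(doctor_ratings, 10)
```

A published score such as 88.7 would correspond to a (risk-adjusted) top-box probability of 0.887 expressed as a percentage; the sketch omits the risk adjustment entirely.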

DISCUSSION

This article reports the results of a controlled study of whether membership in a collaborative resulted in significantly more improvement in the climate for quality and in service quality for collaborative participants than for nonparticipants. Our results suggest that collaborative membership did not offer an advantage compared with the other activities that nonparticipants used to improve their quality-oriented climate. All study clinics seem to have pursued equally effective climate improvement efforts once they received their baseline LCQ survey results.

In contrast to the null effect of collaborative membership on clinics’ climate for quality, our baseline-adjusted results suggest mixed effects of collaborative membership on service quality. Collaborative members (i.e., intervention clinics) significantly improved patients’ overall assessments of their doctors relative to nonmembers (as evidenced by larger changes in overall ratings of doctors), but they improved less in other aspects of patient-centered care (as evidenced by worse change scores for timeliness of care and staff helpfulness) and no differently in quality of doctor-patient communication or willingness to recommend the doctor’s office. These mixed results mirror the findings of studies on the effectiveness of collaboratives focused on improving clinical processes and outcomes17,18,32 and studies of other (eg, communication) interventions to improve service quality.33

Our study found a 1–2 percentage point differential (positive for 1 measure and negative for 2 others) in service quality between intervention and control clinics. This magnitude of effect is lower than that found in most controlled before-and-after studies of quality improvement-focused collaboratives that detected any effect; a systematic review of these studies reported changes ranging from 3 to 45 percentage points.17 A separate review of research on the effectiveness of communication interventions also suggests that when these interventions affect service quality, the effect is strong and positive.33 In contrast, the collaborative studied had relatively modest effects, both positive and negative. Research on hospitals suggests that modest changes in service quality can be associated with significant changes in hospitals’ clinical performance and rankings.34 However, we do not know whether the changes that we observed affected clinical processes, patient outcomes, or other aspects of primary care.

Prior research has found that improving technical quality of care along 1 dimension has no significant effect on other dimensions of quality.35 In this study of service quality, however, we found that metrics moved in opposite directions: staff helpfulness and doctors’ overall ratings improved, whereas timeliness of care declined during our study for both intervention and control clinics. It may be that devoting greater attention to individual patients, to understand their preferences and be responsive to their specific needs, limits the timeliness of care delivered to all patients. This possibility is consistent with the “tradeoffs hypothesis” for service organizations articulated by management scholars.36 According to this hypothesis, increases in customer satisfaction are associated with decreases in productivity (i.e., efficient delivery of services) in service organizations like clinics because of the degree of customization required to address heterogeneous customer needs (e.g., patient needs). Our results provide support for this hypothesis. However, more research is needed to determine the extent to which there is a tradeoff and, if so, how to overcome it.

Despite our mainly null or negative findings about the impact of collaborative membership on climate for quality and service quality, we caution against concluding that collaboratives offer little benefit for organizations. Collaboratives may be more effective for promoting improvement on particular types of topics, as a few controlled studies show a positive effect.37 In addition, the benefit of collaborative membership may depend on organizations’ behavior and characteristics, such as their teams’ effectiveness,38 use of learning activities, human resources practices,39 and measurement practices.40 If collaborative effectiveness is contingent on topic and organizational features, then greater attention may need to be devoted to identifying the issues and organizations for which collaboratives are most effective and targeting that population. Alternatively, the collaborative model might be changed to increase its effectiveness for a wider range of issues and organizations. Determining which changes might enhance the model for more organizations requires additional research.

Although our controlled study was helpful for studying the effect of the LCQ collaborative, it has limitations. First, our sample was limited to primary care clinics affiliated with 1 medical group in 1 state. Although this sample was advantageous for removing the influence of group-level factors, our results may not generalize to independent clinics or those located in other regions, for example. Second, our sample sizes were small; nevertheless, our significant results suggest that we had adequate statistical power to detect differences. Third, ideally, we would have conducted a randomized study to prevent selection bias, which typically favors the intervention; although bias is possible, an effect favoring intervention clinics does not seem to be present in our study, as control clinics fared better at baseline. Fourth, we examined a limited number of indicators of quality-oriented climate and service quality; the impact of collaborative membership may differ for other aspects of climate and service quality. Finally, by virtue of conducting this study with an overall high-performing organization,20 we may not have assessed the full potential of collaboratives. The scores for our measures were relatively high at baseline, leaving little room for improvement, and the collaborative may have a greater impact on clinics with more room for improvement. In addition, a collaborative model with different features may have a different effect. More studies are needed to evaluate the conditions under which collaboratives may offer a benefit and to determine the organizations for which collaboratives are a more effective improvement strategy than other approaches.

Acknowledgments

The authors thank Beth Averbeck, Bob Van Why, and Linda Halverson of HealthPartners for their support and facilitation of this study, and the staff of the participating clinics. The authors also thank Gary Oftedahl of the Institute for Clinical Systems Improvement (ICSI) for providing information about ICSI’s Action Groups; Phil Jury and Joan Krebs of SPR Center for their administration of the Leading a Culture of Quality survey; Westat for preparing the CAHPS Clinician & Group Visit Survey dataset for our use; Praseetha Cherian for research assistance; and Fangyong Li and Karol Katz for programming and statistical assistance. This work was presented at the 2012 AcademyHealth Annual Meeting in Orlando, FL, and the 2012 Academy of Management Meeting in Boston, MA.

Funded by a Cooperative Agreement from the Agency for Healthcare Research and Quality (AHRQ; U18 HS016978). I.M.N. was also supported by a Career Development Award from AHRQ (K01HS01898701).

Footnotes

The study was approved by the Yale University Human Investigations Committee.

The authors declare no conflict of interest.

References

1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
2. Kenagy JW, Berwick DM, Shore MF. Service quality in health care. JAMA. 1999;281:661–665. doi: 10.1001/jama.281.7.661.
3. Lewis RC, Booms BH. The marketing aspects of service quality. In: Berry L, Shostack G, Upah G, editors. Emerging Perspectives on Service Marketing. Chicago, IL: American Marketing; 1983. pp. 99–107.
4. Zeithaml VA, Parasuraman A, Berry L. Delivering Quality Service: Balancing Customer Perceptions and Expectations. New York: The Free Press; 1990.
5. Gerteis M, Edgman-Levitan S, Daley J. Through the Patient’s Eyes: Understanding and Promoting Patient-centered Care. San Francisco, CA: Jossey-Bass; 1993.
6. Bechtel C, Ness DL. If you build it, will they come? Designing truly patient-centered health care. Health Aff. 2010;29:914–920. doi: 10.1377/hlthaff.2010.0305.
7. Fremont AM, Cleary PD, Hargraves JL, et al. Patient-centered processes of care and long-term outcomes of myocardial infarction. J Gen Intern Med. 2001;16:800–808. doi: 10.1111/j.1525-1497.2001.10102.x.
8. Meterko M, Wright S, Lin H, et al. Mortality among patients with acute myocardial infarction: the influences of patient-centered care and evidence-based medicine. Health Serv Res. 2010;45:1188–1204. doi: 10.1111/j.1475-6773.2010.01138.x.
9. Mead N, Bower P. Patient-centred consultations and outcomes in primary care: a review of the literature. Patient Educ Couns. 2002;48:51–61. doi: 10.1016/s0738-3991(02)00099-x.
10. Browne K, Roseman D, Shaller D, et al. Analysis & commentary. Measuring patient experience as a strategy for improving primary care. Health Aff. 2010;29:921–925. doi: 10.1377/hlthaff.2010.0238.
11. Keating NL, Green DC, Kao AC, et al. How are patients’ specific ambulatory care experiences related to trust, satisfaction, and considering changing physicians? J Gen Intern Med. 2002;17:29–39. doi: 10.1046/j.1525-1497.2002.10209.x.
12. Schneider B, Bowen D. Winning the Service Game. Boston, MA: Harvard Business School Press; 1995.
13. Kaissi A, Kralewski J, Curoe A, et al. How does the culture of medical group practices influence the types of programs used to assure quality of care? Health Care Manage Rev. 2004;29:129–138. doi: 10.1097/00004010-200404000-00006.
14. Williams ES, Manwell LB, Konrad TR, et al. The relationship of organizational culture, stress, satisfaction, and burnout with physician-reported error and suboptimal patient care: results from the MEMO study. Health Care Manage Rev. 2007;32:203–212. doi: 10.1097/01.HMR.0000281626.28363.59.
15. Institute for Clinical Systems Improvement. Annual Report 2004: Quality in Action Across Minnesota. Bloomington, MN: Institute for Clinical Systems Improvement; 2004.
16. Kilo CM. Improving care through collaboration. Pediatrics. 1999;103(suppl E):384–393.
17. Schouten LM, Hulscher ME, van Everdingen JJ, et al. Evidence for the impact of quality improvement collaboratives: systematic review. BMJ. 2008;336:1491–1494. doi: 10.1136/bmj.39570.749884.BE.
18. Mittman BS. Creating the evidence base for quality improvement collaboratives. Ann Intern Med. 2004;140:897–901. doi: 10.7326/0003-4819-140-11-200406010-00011.
19. Levinson W, Lesser CS, Epstein RM. Developing physician communication skills for patient-centered care. Health Aff. 2010;29:1310–1318. doi: 10.1377/hlthaff.2009.0450.
20. McCarthy D, Mueller K, Tillmann I. HealthPartners: Consumer-Focused Mission and Collaborative Approach Support Ambitious Performance Improvement Agenda. New York: Commonwealth Fund; 2009.
21. SPR Center. ICSI 2005 Culture Survey: Statistical Analysis and Illustrations. Minneapolis, MN: Satisfaction/Performance/Research Center; 2005.
22. Baker RG, Murray M, Tasa K. Quality in Action: An Instrument to Measure Hospital Quality Culture. Toronto, Ontario, Canada: Department of Health Administration, University of Toronto; 1995. (Working paper).
23. Brueller D, Carmeli A. Linking capacities of high-quality relationships to team learning and performance in service organizations. Hum Resour Manage. 2011;50:455–477.
24. Morrison EW, Wheeler-Smith SL, Kamdar D. Speaking up in groups: a cross-level study of group voice climate and voice. J Appl Psychol. 2011;96:183–191. doi: 10.1037/a0020744.
25. Roberts KH, O’Reilly CA. Measuring organizational communication. J Appl Psychol. 1974;59:321–326.
26. Nunnally J. Psychometric Theory. 2nd ed. New York: McGraw-Hill; 1978.
27. Bollen KA, Long JS. Testing Structural Equation Models. Newbury Park, CA: Sage; 1993.
28. Aligning Forces for Quality. How to Report Results of the CAHPS Clinician & Group Survey. Washington, DC: Robert Wood Johnson Foundation; 2008.
29. Nembhard IM, Edmondson AC. Making it safe: the effects of leader inclusiveness and professional status on psychological safety and improvement efforts in health care teams. J Organ Behav. 2006;27:941–966.
30. Singer SJ, Falwell A, Gaba DM, et al. Patient safety climate in US hospitals. Med Care. 2008;46:1149–1156. doi: 10.1097/MLR.0b013e31817925c1.
31. Thomas E, Sexton J, Helmreich R. Discrepant attitudes about teamwork among critical care nurses and physicians. Crit Care Med. 2003;31:956–959. doi: 10.1097/01.CCM.0000056183.89175.76.
32. Landon BE, Wilson IB, McInnes K, et al. Effects of a quality improvement collaborative on the outcome of care of patients with HIV infection: the EQHIV study. Ann Intern Med. 2004;140:887–896. doi: 10.7326/0003-4819-140-11-200406010-00010.
33. Rao JK, Anderson LA, Inui TS, et al. Communication interventions make a difference in conversations between physicians and patients: a systematic review of the evidence. Med Care. 2007;45:340–349. doi: 10.1097/01.mlr.0000254516.04961.d5.
34. Elliott MN, Lehrman WG, Goldstein EH, et al. Hospital survey shows improvements in patient experience. Health Aff. 2010;29:2061–2067. doi: 10.1377/hlthaff.2009.0876.
35. Ganz DA, Wenger NS, Roth CP, et al. The effect of a quality improvement initiative on the quality of other aspects of health care: the law of unintended consequences? Med Care. 2007;45:8–18. doi: 10.1097/01.mlr.0000241115.31531.15.
36. Anderson EW, Fornell C, Rust RT. Customer satisfaction, productivity, and profitability: differences between goods and services. Marketing Sci. 1997;16:129–145.
37. Howard DH, Siminoff LA, McBride V, et al. Does quality improvement work? Evaluation of the Organ Donation Breakthrough Collaborative. Health Serv Res. 2007;42:2160–2173. doi: 10.1111/j.1475-6773.2007.00732.x.
38. Shortell SM, Marsteller JA, Lin M, et al. The role of perceived team effectiveness in improving chronic illness care. Med Care. 2004;42:1040–1048. doi: 10.1097/00005650-200411000-00002.
39. Nembhard IM. All teach, all learn, all improve? The role of interorganizational learning in quality improvement collaboratives. Health Care Manage Rev. 2012;37:154–164. doi: 10.1097/HMR.0b013e31822af831.
40. Deo S, McInnes K, Corbett CJ, et al. Associations between organizational characteristics and quality improvement activities of clinics participating in a quality improvement collaborative. Med Care. 2009;47:1026–1030. doi: 10.1097/MLR.0b013e31819a5937.
