Health Services Research. 2006 Jun;41(3 Pt 1):663–682. doi: 10.1111/j.1475-6773.2006.00508.x

Using Public Reports of Patient Satisfaction for Hospital Quality Improvement

Judith K Barr, Tierney E Giannotti, Shoshanna Sofaer, Cathy E Duquette, William J Waters, Marcia K Petrillo
PMCID: PMC1713194  PMID: 16704506

Abstract

Objective

To explore the impact of statewide public reporting of hospital patient satisfaction on hospital quality improvement (QI), using Rhode Island (RI) as a case example.

Data Source

Primary data collected through semi-structured interviews between September 2002 and January 2003.

Study Design

The design is a retrospective study of hospital executives at all 11 general and two specialty hospitals in RI. Respondents were asked about hospital QI activities at several points throughout the public reporting process, as well as about hospital structure and processes to accomplish QI. Qualitative analysis of the interview data proceeded through an iterative process to identify themes and categories in the data.

Principal Findings

Data from the standardized statewide patient satisfaction survey process were used by hospitals to identify and target new QI initiatives, evaluate performance, and monitor progress. While all hospitals fully participated in the public reporting process, they varied in the stage of development of their QI activities and adoption of the statewide standardized survey for ongoing monitoring of their QI programs. Most hospitals placed responsibility for QI within each department, with results reported to top management, who were perceived as giving strong support for QI. The external environment facilitated QI efforts.

Conclusion

Public reporting of comparative data on patient views can enhance and reinforce QI efforts in hospitals. The participation of key stakeholders facilitated successful implementation of statewide public reporting. This experience in RI offers lessons for other states or regions as they move to public reporting of hospital quality data.

Keywords: Public reporting, patient satisfaction survey, hospital quality improvement


Although hospitals across the country routinely measure and report patient experience and satisfaction survey data internally, until recently few comparative public reports of hospital patient satisfaction have been available. A recent review identified nine states, cities, or regions that publicly reported comparative data on hospital patient experience and satisfaction (Barr et al. 2004). Similar public reports have been found on other websites; however, only five continue to report regularly (Shearer, Cronin, and Feeney 2004). Voluntary national efforts to publicly report on hospital quality include pilot projects that have tested the use of a standardized instrument (the Hospital CAHPS Survey) to measure patient perspectives on hospital care (Centers for Medicare & Medicaid Services 2005). The intent of public reporting is not only to provide information for consumers but also to stimulate quality improvement (QI) efforts in hospitals. Yet there have been few formal studies of the impact of public reports on hospital QI.

Several evaluations of public reports on hospital clinical measures suggest that facilities do make changes in response to these reports. In Wisconsin, a recent evaluation found that, among low scoring hospitals, those involved in public reporting were significantly more likely to report improvement activities in areas included in the public report than were comparison hospitals not involved in public reporting (Hibbard, Stockard, and Tusler 2003). Hospitals in Pennsylvania and New Jersey (Bentley and Nash 1998), Missouri (Longo et al. 1997), and Cleveland (Rosenthal et al. 1998) used public reports of performance to develop new approaches to improve clinical indicators.

What remains unclear is whether public reports of patient experience will similarly result in efforts that can lead to improvement. Only one report in the literature discusses the impact of a hospital patient satisfaction public report on hospitals (Draper, Cohen, and Buchan 2001). Moreover, it is unknown how individual hospitals use these public reports to make changes that would improve their ratings. In order to listen to QI messages and adopt practices for QI, hospitals must be able to identify areas where they need improvement (Halm and Siu 2005) and have a way to track changes. Factors both internal and external to the hospital may affect its adoption of QI (Scanlon et al. 2001), and an organizational structure and culture that supports QI is critical to its adoption (Shortell et al. 1995; Berwick 2003; Bradley et al. 2003). Understanding the process through which hospitals respond to the public release of comparative data based on patient experience can help answer questions about the impact of public reporting on hospitals.

OBJECTIVES

This study explored the impact of mandatory statewide public reporting of hospital patient satisfaction on hospital QI, using Rhode Island (RI) as a case example. In 1998, the state of RI enacted legislation requiring public reporting of clinical performance and patient satisfaction measures by all licensed health care facilities in the state, with two major goals: public accountability and QI. Supported by area hospitals, the law gave responsibility for implementation to the State Department of Health with the advice of a Steering Committee that included both state legislators and virtually all major stakeholders (General Laws of Rhode Island 1998). This mandate set the stage for widespread collaboration on implementing a standardized public reporting program (Barr et al. 2002). The overall objective of the current study was to understand how comparative public reporting on standardized measures of hospital patient satisfaction in RI was used by hospitals for QI. Two research questions were addressed:

  1. What QI activities were implemented in response to the public report of patient satisfaction, including collection and use of data to identify and track QI initiatives?

  2. What existing structures and processes were in place (both initially and in response to public reporting) in the hospitals to accomplish QI related to patient satisfaction reports, and what were the barriers to and facilitators of QI efforts?

METHODS

Public Report Process

The public report process involved a pilot survey in 2000 with results reported to individual hospitals only, and the first public survey in 2001 with comparative results released to the public (Rhode Island Department of Health 2001a; Barr et al. 2002). All 11 general, acute-care hospitals in the state and two specialty hospitals, for inpatient rehabilitation and psychiatric treatment, were included in the public report process. Because the surveys were limited to adult patients, the pediatric psychiatric hospital in the state was excluded; also excluded were the Veterans Administration hospital, the state-run long-term care hospital, and rehabilitation or psychiatric units located within general hospitals. Three acute care hospitals in RI are classified as tertiary by state licensing regulations; one is a Level I trauma center. One of the hospitals is an academic medical center, and four are community teaching hospitals (Hospital Association of Rhode Island 2005).

A standardized patient satisfaction survey was administered by a single vendor, selected by a group consisting of hospitals and key stakeholders and approved by the Steering Committee (Barr et al. 2002). The satisfaction survey instrument, sampling methodology, and data collection procedures were uniform across all hospitals (except the psychiatric facility). The survey questionnaires were mailed shortly after discharge to adult patients with an overnight stay who received medical, surgical, or obstetrical services in general hospitals and the rehabilitation hospital; to protect confidentiality, the instrument was handed to patients in the psychiatric hospital at discharge. The questionnaire asked patients about their satisfaction with hospital care in nine domains (56 items in total): nursing care (n = 11); physician care (n = 8); treatment results (n = 5); patient education, including discharge instructions (n = 5); comfort/cleanliness (n = 5); admitting (n = 4); other staff courtesy (n = 9); food service (n = 6); and patient loyalty (n = 3). This last domain is a measure of general satisfaction composed of overall opinion, willingness to return, and likelihood of recommending the hospital. A summary domain, overall patient experience, was calculated as the average of the nine survey domains. The public report displayed each hospital's performance on the nine domains and the summary domain, expressed as ratings of above, below, or about the same as the average score of hospitals in the vendor's national database. These ratings were presented in a comparative format for hospitals in RI. The hospitals received detailed data (e.g., percentage scores) from the vendor on each survey item, as well as on the domains used for public reporting. For complete details on the survey instrument and methodology, see the Technical Report (Rhode Island Department of Health 2001b).
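To make the scoring scheme concrete, the sketch below (in Python, not drawn from the report itself) computes the summary domain as the average of the nine domain scores and classifies a hospital's score against the vendor's national average. The ±2-point tolerance band used for "about the same" is an illustrative assumption; the article does not specify the rule the public report used to draw that boundary, and the function and variable names are hypothetical.

```python
# Minimal sketch of the reporting logic described above (illustrative only).
from statistics import mean

DOMAINS = [
    "nursing care", "physician care", "treatment results", "patient education",
    "comfort/cleanliness", "admitting", "other staff courtesy", "food service",
    "patient loyalty",
]

def overall_patient_experience(domain_scores: dict) -> float:
    """Summary domain: the average of the nine survey domain scores."""
    return mean(domain_scores[d] for d in DOMAINS)

def comparative_rating(hospital_score: float, national_average: float,
                       tolerance: float = 2.0) -> str:
    """Rate a domain score relative to the vendor's national database average.

    The 2-point tolerance band is an assumed, illustrative cutoff.
    """
    if hospital_score > national_average + tolerance:
        return "above average"
    if hospital_score < national_average - tolerance:
        return "below average"
    return "about the same as average"

# Example with hypothetical percentage scores:
scores = {d: 85.0 for d in DOMAINS}
scores["admitting"] = 78.0
print(round(overall_patient_experience(scores), 1))                    # 84.2
print(comparative_rating(scores["admitting"], national_average=83.0))  # below average
```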

Data Collection

This study relied on data from in-depth retrospective interviews conducted with key hospital staff. Using a semi-structured interview protocol with a preponderance of open-ended questions, we asked respondents to report on QI activities at the beginning of the public reporting process for patient satisfaction, following the initial pilot survey (which was not publicly reported), and, finally, after the release of the first public report in November 2001 (Rhode Island Department of Health 2001a; Barr et al. 2002). Specifically, we asked them to describe QI activities related to patient satisfaction that occurred in response to the pilot survey and the public report. Other questions focused on how the hospital was organized to identify, implement, and monitor QI activities. Two members of the research team conducted the interviews between September 2002 and January 2003, approximately 1 year after release of the public report on patient satisfaction in hospitals. This approach is consistent with studies of retrospective methods of assessing change through self-reports, which have shown that retrospective interviews may yield a more accurate picture of change than comparing before-and-after self-reports (Levinson, Gordon, and Skeff 1990).

Sample

The study used a purposive sample of four key executives in each hospital (Chief Executive Officer, Medical Director, Nurse Executive, and Patient Satisfaction Coordinator). The intention was to capture multiple perspectives from hospital staff, including top administration, clinical areas (medicine and nursing), and the person most familiar with the implementation of the patient satisfaction survey. A letter was sent by e-mail from the Hospital Association of RI to the individuals in these positions at each hospital, informing them of the evaluation effort and encouraging their participation in the interview. Of the 52 positions identified, a total of 42 people were interviewed at the 11 general and two specialty hospitals. This sample included 13 CEOs; 16 clinical staff (eight Medical Directors and eight Nurse Executives); and 13 Patient Satisfaction Coordinators. The latter held administrative positions ranging from manager to director and vice president, and nearly all had “quality” or “performance improvement” in their titles. The overall response rate was 81 percent, with at least three executives interviewed at each hospital except one, where two executives were interviewed (mean = 3.2 respondents per hospital). Most interviews (70 percent) were completed by telephone, and 30 percent were conducted in person at the respondent's office. No differences in amount of time spent or responsiveness of those interviewed were noted for these two methods.

Data Analysis

Interviews were audio recorded (except in a few instances of equipment malfunction) and transcribed to electronic format so that the written transcripts could be analyzed electronically. A preliminary set of a priori codes was developed by the authors and, after review of 15 randomly selected transcripts, the team developed additional codes that emerged from the interviews. The revised codes were applied to the transcripts (Miles and Huberman 1994) using QSR International's NVivo, a software package for analyzing qualitative data such as interviews. For this study, NVivo was used not only to apply the a priori and emergent codes to the transcripts but also to aggregate the detailed codes into general themes. It was also used to report on the weight of the evidence, i.e., how often a given code was mentioned by respondents. Coding of a 20 percent random sample by a second researcher yielded a high degree of reliability (agreement rate = 93 percent), and discrepancies were resolved by consensus, resulting in a refinement of the existing list of codes. Using an iterative approach to move from specific code categories to more general themes, a process of “functional reduction” combined infrequent code categories and added new ones (Becker 1998), resulting in clusters of common responses and consistencies in the data (Miles and Huberman 1994).
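The reliability check described above can be illustrated with a brief sketch: a second coder recodes a random 20 percent sample of transcripts, and simple percent agreement is computed over the coded segments. The codes, sample draw, and function names below are hypothetical; the actual analysis was carried out in NVivo rather than with custom code.

```python
# Minimal sketch of the intercoder reliability check (illustrative only).
import random

def percent_agreement(coder1: list, coder2: list) -> float:
    """Share of segments assigned the same code by both coders."""
    if len(coder1) != len(coder2):
        raise ValueError("Both coders must code the same segments")
    matches = sum(a == b for a, b in zip(coder1, coder2))
    return 100.0 * matches / len(coder1)

# Draw a 20 percent random sample of the 42 transcripts for double coding.
transcripts = [f"transcript_{i}" for i in range(1, 43)]
double_coded = random.sample(transcripts, k=round(0.2 * len(transcripts)))

# Hypothetical codes applied to four segments by the two coders.
coder1 = ["support", "barrier", "barrier", "data_use"]
coder2 = ["support", "barrier", "data_use", "data_use"]
print(percent_agreement(coder1, coder2))  # 75.0
```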

The hospital was the unit of analysis for data related to QI activities, QI structure, and barriers to QI. Because we sought to capture multiple perspectives, responses were ascribed to a hospital regardless of how many respondents within that hospital cited a view or activity. For example, if one person described a QI activity, we did not require agreement from others to count the activity; likewise, we counted the activity only once even if it was mentioned by more than one person in the hospital. However, when the analysis focused on attitudes about the public reporting process, the individual respondent was the unit of analysis, and responses were analyzed across hospitals (Hibbard, Stockard, and Tusler 2003). Because responses were often volunteered rather than elicited by direct questions with discrete response categories, we sought to identify responses related to the same category or theme. The analysis assessed the extent of agreement among respondents to the interview questions, as well as the range of opinions that describe the multiple perspectives and richness of interpretation of the public reporting process in RI (Sofaer 1999, 2002).
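A short sketch, using an assumed data layout and hypothetical names, illustrates this hospital-level aggregation rule: each respondent's mention of a QI activity is reduced to a (hospital, domain) pair, set membership deduplicates repeated mentions within a hospital, and the resulting hospital-level indicators yield domain counts of the kind reported in Table 1.

```python
# Minimal sketch of the hospital-level aggregation rule (illustrative only).
from collections import defaultdict

# (hospital, domain) mentions drawn from individual interviews -- hypothetical data.
mentions = [
    ("Hospital A", "admitting"),
    ("Hospital A", "admitting"),     # second respondent; still counted once
    ("Hospital A", "nursing care"),
    ("Hospital B", "food service"),
]

# An activity counts once per hospital, no matter how many respondents mention it.
activities_by_hospital = defaultdict(set)
for hospital, domain in mentions:
    activities_by_hospital[hospital].add(domain)

# Number of hospitals with at least one QI activity in each domain (as in Table 1).
domain_counts = defaultdict(int)
for domains in activities_by_hospital.values():
    for d in domains:
        domain_counts[d] += 1

print(dict(domain_counts))  # e.g. {'admitting': 1, 'nursing care': 1, 'food service': 1}
```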

RESULTS

Hospital QI Activities

The interview responses describe QI initiatives implemented by the hospital, as well as their collection and use of patient satisfaction data to identify and monitor these efforts.

QI Initiatives

Respondents reported on a wide range of QI initiatives addressing the nine survey domains in the public report. Every hospital reported at least two of the survey domains in describing its QI activities, and half mentioned six or seven domains. As shown in Table 1, three-fourths or more of the hospitals mentioned at least five domains. The most frequently mentioned areas for improvement were: admitting timeliness and flow (n = 9); patient education in preparation for discharge (n = 9); nursing care (n = 8); treatment results (e.g., pain control, outcome of care) (n = 8); and food service (n = 8). Examples of QI activities are illustrated in the following quotes from respondents.

Table 1.

Hospital QI Activity by Hospital Patient Satisfaction Domains Publicly Reported in Rhode Island (n = 12)*

Survey Domains†            # Hospitals with QI Activity‡
Admitting                  9
Patient education          9
Nursing care               8
Treatment results          8
Food service               8
Other staff courtesy       6
Physician care             5
Comfort/cleanliness        4
Patient loyalty            1

* Psychiatric hospital excluded.

† A tenth reported score was a summary measure representing overall satisfaction.

‡ Number of hospitals with at least one respondent reporting QI activities in the domain.

One of the things that the … team tried to do [was] to manage the experience of waiting. [Admitting domain]

We added a staff person who was in a supportive function on that unit to make rounds on patients more frequently [to] help with little things, like where their water was, to help with things nurses weren't able to get to on time. [Nursing domain]

We've worked with the dietary department … so that if the patient isn't happy with their meal … they can call, so somebody from dietary can come up and address that concern right away. [Food Service domain]

We have a call back program to be sure that the patient went home with services that they expected, when they expected them; and the patients who went home with no services were accurately determined not to need services. [Patient Education domain]

Fewer hospitals reported QI activities in the other survey domains: other staff courtesy (n = 6); physician care (n = 5); and comfort/cleanliness (n = 4). Only one hospital reported QI activities directed to patient loyalty, the final domain, a more general category that is likely to be affected by QI efforts in other domains.

Most hospitals were also involved in QI initiatives in areas that were not specifically addressed in the public report, although the report may have stimulated activities in these areas. For example, many hospitals (n = 9) had worked on one or more initiatives with a customer service focus, such as staff training, improving accommodations for visitors and family, and improving signage throughout the hospital. This broader approach could affect several of the survey domains, including physician and nursing care, staff courtesy, and comfort/cleanliness.

One of the things … coming out of the … report is a more broad-based customer service program … addressing broadly what the expectations of customers are.

Our service standards are multi-faceted around communications, attitude, appearance, waiting time. It is a customer standards program and customer service training.

Another target for improvement by most hospitals was reducing emergency department waiting times (n = 9). This effort was informed by the results from the admitting domain in the public report, which showed that a high proportion of hospital admissions began in the emergency department: 51 percent of survey respondents reported being admitted that way.

The biggest problem was [patients] getting admitted through the Emergency Department and the availability of the right bed at the right time.… We tried to look at that process of expediting that patient from the Emergency Department to a bed.

Other QI initiatives mentioned by respondents, but not derived from the survey, focused on specific clinical conditions or types of care (n = 10) (e.g., acute myocardial infarction, high-volume diagnoses) or specific hospital systems (n = 7) (e.g., infection control and patient safety, lab, and pharmacy services). These areas of improvement are consistent with hospital regulatory requirements and were mentioned by respondents in addition to the QI initiatives that were linked to the patient satisfaction survey.

Collection and Use of Data for QI

Respondents described their collection and use of patient satisfaction survey data at three points in time: prior to the beginning of the statewide survey process, after the pilot survey, and after the first public report. At the time of the 1998 public reporting mandate, all hospitals collected patient satisfaction data. Half (n = 7) used external vendors to measure satisfaction hospital-wide, while others used internal surveys either hospital-wide or for specific areas or departments. All the hospitals participated in the statewide reporting process, resulting in an increase in vendor use for ongoing patient satisfaction surveys. After the first public report, three additional hospitals dropped their internal surveys in favor of using a survey vendor to track and monitor QI efforts on an interim basis and obtain national comparative data. There were no clear differences (e.g., geographic location, ownership, bed size) between hospitals that did or did not adopt and use a vendor after the first public report. The quotes below illustrate attitudes about vendor use.

For a long time we did … a homegrown measure and then … this opportunity with the state-mandated survey came along. We recognized our limitations and we adopted that as our measure.

These were homegrown tools. We didn't have the advantage of having a vendor … compiling the data and having a database to compare with. … Going to [a vendor] was … going from a horse and buggy into a motorized vehicle.

For patients we're only using vendor surveys and we're trying very hard not to use any in-house or homegrown surveys, because if we were to do that, we wouldn't have comparative information.

The remaining three hospitals used the statewide vendor for public reporting purposes and continued to use their “homegrown” or internal surveys for interim data collection and QI monitoring. These hospitals did not differ from those that adopted an external vendor after the public report on characteristics such as bed size, ownership, geographic area, or QI support.

One specific aspect of the public reporting process that respondents particularly noted was the statewide pilot survey, conducted prior to the collection of data for the first public report. For the pilot, patient satisfaction survey data were reported back only to individual hospitals and not publicly released. This gave the hospitals time to review their own results and to develop and implement improvement initiatives in advance of the survey that would be publicly reported. Many hospitals used the pilot data to identify new areas for QI, as well as to validate existing QI efforts. Respondents noted that the pilot survey results provided information specific enough to be translated into QI activities and lent support to existing QI efforts. The quotes below indicate that respondents used the pilot survey results to identify areas where improvement was needed and to initiate or continue QI activities in these areas.

I don't know if we would have identified this as an issue without it [the pilot]; in fact, I really don't believe we would have identified it without this particular survey. … It wasn't a specific question at all in our homegrown.

It was the pilot survey that brought it to our attention. … We wanted to know what needed improvement. This was totally new information to us; it was different data from what we had previously collected.

Some of the findings needed to be further clarified by conducting a couple of focus groups but overall they were actionable.

The pilot survey served as a nice tool to allow us to focus on something objective. … The fact that it was going public gave us some internal leverage to reinforce why … we were saying this was important.

Even with the advantages of the pilot data, key staff at all of the hospitals reported that, when the public report results became available in 2001, they were used to validate the hospital's earlier findings and to support existing QI initiatives, as well as to identify new areas for improvement. For some hospitals, despite the availability of the pilot data, release of the public report helped to raise awareness within the hospital, serving as a “wake-up call” or “cold slap in the face” for hospital staff. Other hospitals used the data to focus and refine their QI efforts.

It's extremely valuable to know this. This is the voice of our customer, and this is something we haven't really heard that well before. And I think … doing it with a vendor, doing this particular set of questions, having to publicly report this information, has created such an awareness and … listening on our end, that I think that we're learning something really valuable.

We used these results to analyze where our patients told us that we had opportunities for improvement, so that we could develop strategies as part of that service quality initiative.

Hospital Context for QI

In describing their QI efforts, respondents talked about several facets of hospital decision making for QI that could either facilitate or hinder QI efforts. Hospital executives described existing structures and processes to accomplish QI, including those that evolved over time in response to the public reporting program, as well as those already in place at the onset.

Hospital QI Approach

Most hospitals had taken a decentralized approach to QI, in which each department or unit was responsible for identifying opportunities for improvement and implementing QI projects to effect change. Reporting of QI activities and results, however, was centralized. While hospitals may have separate departments for QI and quality assurance, at most hospitals there was a hospital-wide QI committee that met regularly to monitor QI and report it up from the department level to the board. At all the hospitals, results from the public report were usually brought for consideration to senior leadership and management, as well as to other staff at the department level. This structure is illustrated in these quotes.

We have a department … which is … a central repository for all quality [data], though it acts as the facilitator, because each of the areas is responsible for their own quality and improving, making plans and reporting.

Individual supervisors are responsible at a department level for making sure that their department has a performance and outcome project that they're working on and to report the results periodically and also at year-end.

An individual coordinates and integrates the organization, and it's really a decentralized culture here where everybody's responsible for it in their own area.

I presented it to the Board; first I do a quarterly report to them on quality issues, and I presented it to the Senior Management and to the medical staff; and then I gave everyone copies so that they could then take [it] to their respective staff.

Well, I know the Board of Trustees, the leadership of the medical staff, it was fully shared and discussed. We use joint conferences … [and] I think it was taken all through the operational directors.

[It was] presented to the Board, to the Medical Executive Committee, every meeting that we could go to.

Many respondents described the need to prioritize their QI efforts based on the results from the pilot and public report surveys. They described processes for decision making about where to focus their QI activities and reported examining the survey data in order to select the areas on which to concentrate, usually those areas with the greatest need for improvement, based on their satisfaction scores and ratings. Their comments suggest that the hospitals used the survey results and decided whether or not to pursue QI activities related to specific domains.

Well, we looked at those top ten areas for improvement and looked for the similarities and grouped them together and took the larger picture. So it's education and admissions wait time. And we had a team on each of those issues … it was prioritized in the executive management meetings.

We looked at the results, again high level and even some of the subset results, and department heads were asked to look at areas where we were not at average or above average, and others that wanted to even further strengthen areas that were above average. So we let the departments select quality improvement initiatives in their own areas, and those were signed off on by the customer service committee.

I think we looked at where our scores were the lowest, and at the same time, the correlation with patient satisfaction so that, you know, if we had to pick one over the other, we were going to pick the one that had the most impact on moving the score, and the best correlation coefficient.

We put the teams together to do a root cause analysis. … Really spent time looking at the whole process [and] included the medical staff, the house staff, the ER staff, the nursing staff, the floor staff to come up with a different way.

We picked things we thought were actionable based on the questions that we had [survey] data on. So we certainly had data on discharge planning and education … talking to our patients and preparing them for what to expect in managing at home.

Leadership and Support for QI

In answer to a question about the designated hospital QI leader, respondents identified a position (e.g., QI/QA director or vice-president), a department (e.g., QI/QA Department), or a committee (e.g., Performance Improvement Committee). While some respondents identified executive-level staff (e.g., CEO, COO, VP) as the champion for QI, there was little consensus on this question, and responses varied within and across hospitals, reflecting different perspectives. Sources of support within the hospital were identified through respondents' numerical ratings of the amount of support from key hospital staff for QI activities and for using patient satisfaction data to drive QI, as shown in Table 2. While there was some variation in responses among hospitals, the overall averages for all hospitals show a clear pattern, with the greatest support from the board and senior management and the least support from the medical staff. Confirming these ratings, over half of respondents volunteered that organizational commitment was an impetus for QI.

Table 2.

Sources of Support for QI and Patient Satisfaction Data in Hospitals (N = 42)

Types of Support                        Statistic*        Senior Management   Department Heads   Medical Staff   Nursing Staff   Board
For QI activities                       Average (range)   4.5 (3.3–5.0)       4.1 (3.0–5.0)      3.7 (2.2–5.0)   4.1 (3.0–4.7)   4.6 (4.0–5.0)
For patient satisfaction data for QI    Average (range)   4.6 (4.0–5.0)       4.4 (3.0–4.8)      3.7 (2.7–4.8)   4.1 (3.4–5.0)   4.6 (3.8–5.0)

* Respondents were asked: “On a scale of 1–5, with 5 being the highest, how much support would you say there is for QI activities in this hospital from each of the following groups?” The question was repeated substituting “for using patient satisfaction surveys as a source of data for QI activities.” Reported ratings represent the average support across respondents, aggregated across all hospitals.

One hundred percent support from the Director level.

I would say that senior leadership in general … all are definitely advocates for patient satisfaction surveys.

I think the medical staff [are least supportive], because they're not necessarily intimately involved … they don't see the direct impact on them.

The physicians don't want to listen to it because they all have a certain scientific bent to them … they all start arguing what the data is.

Barriers and Facilitators to QI

Despite support within the hospitals for QI activities, respondents also described barriers to implementing QI activities related to hospital resources and the environment. While most agreed that the hospital provided enough resources for the QI process, they cited insufficient capital financing and infrastructure funding, along with staffing issues, as barriers to QI; with limited resources, it was difficult to prioritize and implement the changes needed to achieve significant improvements. Several respondents mentioned specific QI data needs, such as software, automated data systems, assistance in analyzing the data, and translating the data into actionable information (“ability to abstract the right data to analyze processes”). About half of the respondents commented on the need for staff training and on insufficient time for staff to do all that is expected of them, such as collecting QI data.

Several respondents noted a perception of staff resistance in the internal environment. Examples were lack of staff commitment to QI goals and concern about being held accountable. Also mentioned was the need to promote staff understanding and “buy-in” for the QI program through education, involvement on committees, and employee surveys. Responses indicated that successful QI programs require widespread support for QI, a culture and leadership fostering QI, and a team approach.

It's not an active resistance to this particular measure; it's just sort of a difficulty with one more thing to add into people's schedules.

We need to change their attitude in what responsibilities they need to take action to make it better within their departments … It lies in each individual. …

We had to change our culture and make performance improvement more of a priority. I think that the more data you give people that's meaningful, the more interested they are in working with you.

The statewide external environment provided the context for the hospitals' involvement in the public reporting process, and respondents generally viewed the external environment as a facilitator for the adoption of a standardized approach to quality measurement. Because this was the first time hospitals in RI had adopted a uniform survey instrument and report, a complex statewide coordination effort was required. Respondents indicated that the hospitals supported the goals of measuring and publicly reporting patient satisfaction.

First of all, philosophically, we were all on the same wavelength … that we should bring the information forward.

I think that it went very well for something of this scale that really hadn't been done before. … There were very definite concerted efforts to involve all the hospitals … to make this scientifically sound … and to try to keep personal opinion and politics out of it. It was very well done.

It has been an excellent process. It has taken a lot of people a lot of time, with healthy discussions and compromises, and a lot of improvements have come from it.

Respondents also pointed to the leadership roles of the state health department and the hospital association in coordinating and facilitating an open and collaborative process, “with about as much buy-in as you can possibly get.” While federal government requirements were an impetus to QI, a few mentioned competing state and federal regulations as potential challenges to QI.

DISCUSSION

Hospital executives in RI described their QI programs and the various ways hospitals were using the public report data on patient satisfaction for QI. Their comments indicate that data from the standardized statewide patient satisfaction survey and the public reporting process have been used to identify and target new QI initiatives; evaluate performance over time and in comparison with other hospitals; and monitor QI progress. QI initiatives encompassed all survey domains in the public report, as well as related efforts (e.g., improving customer service). These more general efforts were prompted by, but not directly measured in, the patient survey, and they were relevant to several survey domains (e.g., staff courtesy, patient loyalty). The increase in QI initiatives, vendor data collection, and use of data over the period from before the statewide pilot survey through the release of the first hospital patient satisfaction public report in late 2001 suggests that hospitals strengthened their QI activities in line with the results of both the pilot and public report surveys.

While all of the hospitals participated in the public reporting process, they varied in the stage of development of QI activities at the beginning of this process and in their adoption of the statewide approach for their own QI programs. Some hospitals were early adopters in using comparative patient survey data to support QI efforts. Before the statewide survey, these hospitals used a vendor-designed instrument to survey their patients, enabling them to track performance and compare their progress to a national database. After the first public report, three additional hospitals adopted a vendor survey to generate the data for interim monitoring. Thus, many hospitals recognized the importance of standardized data with benchmarking capabilities and the value of using a standardized survey to monitor performance. The remaining hospitals were “late” adopters, participating in the statewide survey process and making QI changes but maintaining a homegrown approach to ongoing monitoring of patient satisfaction. Moreover, while most hospitals used the pilot survey data to refine and drive their QI efforts, others described the release of the public report data as the catalyst for QI.

The variation in the way the hospitals collected and used data for QI may reflect internal and external factors that influence adoption of QI initiatives (Scanlon et al. 2001). A number of hospitals in this study recognized the importance of an organizational culture that supports QI, allows for flexibility in implementing change, and views change positively to facilitate the process (Shortell et al. 1995). Most hospitals in RI had the organizational context and managerial factors that are critical to QI (Berwick 2003; Bradley et al. 2003). Organizational structure placed the responsibility for QI within each department but enabled results to reach senior management, and strong support from hospital senior management indicated commitment to the process.

Despite some reports of limited resources for QI and staffing issues, hospital executives in RI viewed their organizations as engaged in the statewide process and involved in implementing QI activities and monitoring their impact. In general, the hospitals were facing the challenges required to measure and improve patient satisfaction. The legislative mandate for public reporting, as implemented through the leadership of the health department in collaboration with the hospital association, has provided a context that facilitated many hospitals' pursuit of their QI agendas to improve the patient's hospital experience. This collaborative context for decision making (Barr et al. 2002; Mehrotra, Bodenheimer, and Dudley 2003) encouraged the involvement of stakeholders throughout the process, facilitating the exchange of information and the increasing commitment of the hospitals to measurement, reporting, and improvement.

A range of perceptions was tapped in this study to understand how hospitals in RI have responded to the statewide initiative for public reporting of patient satisfaction. However, as with any retrospective interviewing, respondents may not have recalled past events accurately or completely. All respondents could speak to the period after the release of the public report and answer questions about its impact. However, in a few cases, respondents had recently joined the hospital or held a different position and were not involved in tasks directly related to the project at the time of the pilot survey. Additional perspectives from other positions within the hospital, such as direct patient care staff, were not captured in these interviews. We relied on verbal reports rather than an audit of activity. Finally, RI is a small state with a small complement of hospitals, where collaboration and standardized action may be easier to achieve.

Lessons for Other States

Nevertheless, this experience in RI offers lessons that may be helpful to other states or regions as they move to public reporting of hospital quality data. Comparative public reports can indeed be used to identify opportunities for improvement, not only in individual hospitals, but also across hospitals. Although there are, as yet, no standardized pathways for improvement in patient satisfaction, the RI public report data illuminated areas for improvement that affected many hospitals. The health department and the hospital association followed up the 2003 public report by convening a series of meetings with hospitals and other stakeholders to identify and address common opportunities for improvement. This statewide initiative prompted additional analyses, showing that patients admitted through emergency departments rated the admission process considerably lower than patients admitted directly. The result was a sharing of “best practices” among hospitals on ways to improve the emergency department admitting process (Rhode Island Department of Health 2004).

Given the central importance of the physician's role in patient care and in recommending hospitalization, our finding of less support for QI and patient satisfaction data among the medical (physician) staff suggests that more work is needed to engage physicians in hospital QI and strengthen their support for QI efforts (Bradley et al. 2001). For hospitals to measure and use quality data to direct QI activities, both administrative staff and clinical staff, especially physicians, must understand and accept the need for change (West 1998; Weber and Joshi 2000). More attention should be focused on the dissemination of comparative results of patient surveys to physicians and on their understanding of the uses of such data for QI in hospitals and the relevance of patient satisfaction data to clinical practice.

Although public reporting may have unintended consequences, it can enhance and reinforce QI efforts in hospitals (Marshall, Romano, and Davies 2004). In RI, public reporting of comparative hospital patient satisfaction data fulfilled a major goal of the legislation mandating public reporting. Not only were survey results used to confirm existing QI directions, but the results also pointed some hospitals to new areas for improvement. As part of the process, hospitals augmented their data collection capabilities and adopted more systematic and standardized processes for QI. Ultimately, the value of a statewide survey will rest on whether, over time, patient satisfaction ratings increase along with QI initiatives in hospitals in RI and in other states with public reporting.

Currently, the RI Department of Health's public reporting program continues its efforts to expand public reporting of quality performance. A second public report on hospital patient satisfaction was published in 2003; subsequent reports await federal decision making on the Hospital CAHPS Survey. A statewide initiative to report nursing home resident and family satisfaction is underway, and the project to report home health patient satisfaction has begun. In addition, the RI Department of Health has produced public reports based on CMS clinical measures for hospitals, nursing homes, and home health agencies. These reports are intended to make the data more accessible to the public through reformatting and/or developing composite scores where appropriate. Future work will extend these activities to other types of health care facilities.

Acknowledgments

This research was funded by the state of RI through a contract to Qualidigm. The authors thank the hospital staff who participated in interviews for this study. We appreciate the assistance of Tracey Dewart, Ph.D., in conducting interviews and the support of Patricia A. Nolan, M.D., formerly Director of Health of RI. We also acknowledge the helpful comments and suggestions of two anonymous reviewers. The views expressed in this article are solely those of the authors and do not necessarily reflect the views of their affiliated institutions.

REFERENCES

  1. Barr JK, Banks S, Waters W, Petrillo M. "Methodological Issues in Public Reporting of Patient Perspectives on Hospital Quality." Joint Commission Journal on Quality and Safety. 2004;23(4):51–70. doi: 10.1016/s1549-3741(04)30067-5.
  2. Barr JK, Boni CE, Kochurka KA, Nolan P, Petrillo M, Sofaer S, Waters W. "Public Reporting of Hospital Patient Satisfaction: The Rhode Island Experience." Health Care Financing Review. 2002;23(4):51–70.
  3. Becker HS. Tricks of the Trade: How to Think about Your Research While You're Doing It. Chicago: University of Chicago Press; 1998.
  4. Bentley JM, Nash DB. "How Pennsylvania Hospitals Have Responded to Publicly Released Reports on Coronary Artery Bypass Graft Surgery." Joint Commission Journal on Quality Improvement. 1998;24(1):40–9. doi: 10.1016/s1070-3241(16)30358-3.
  5. Berwick DM. "Disseminating Innovations in Health Care." Journal of the American Medical Association. 2003;289(15):1969–75. doi: 10.1001/jama.289.15.1969.
  6. Bradley EH, Holmboe ES, Mattera JA, Roumanis SA, Radford MJ, Krumholz HM. "A Qualitative Study of Increasing Beta-Blocker Use after Myocardial Infarction: Why Do Some Hospitals Succeed?" Journal of the American Medical Association. 2001;285(20):2604–11. doi: 10.1001/jama.285.20.2604.
  7. Bradley EH, Holmboe ES, Mattera JA, Roumanis SA, Radford MJ, Krumholz HM. "The Roles of Senior Management in Quality Improvement Efforts: What Are the Key Components?" Journal of Healthcare Management. 2003;28(1):15–28.
  8. Centers for Medicare & Medicaid Services. "Hospital Quality Initiative." 2005. Available at http://www.cms.hhs.gov/quality/hospital. Accessed November 2, 2005.
  9. Draper M, Cohen P, Buchan H. "Seeking Consumer Views: What Use Are Results of Hospital Patient Satisfaction Surveys?" International Journal for Quality in Health Care. 2001;13(6):463–8. doi: 10.1093/intqhc/13.6.463.
  10. General Laws of Rhode Island. "Title 23, Chapter 23-17.17." 1998. Available at http://www.rilin.state.ri.us/Statutes/TITLE23/23-17.17/INDEX.HTM. Accessed November 2, 2005.
  11. Halm E, Siu AL. "Are Quality Improvement Messages Registering?" Health Services Research. 2005;40(2):311–5. doi: 10.1111/j.1475-6773.2005.00357.x.
  12. Hibbard JH, Stockard J, Tusler M. "Does Publicizing Hospital Performance Stimulate Quality Improvement Efforts?" Health Affairs. 2003;22(2):84–94. doi: 10.1377/hlthaff.22.2.84.
  13. Hospital Association of Rhode Island. Personal communication. 2005.
  14. Levinson W, Gordon G, Skeff K. "Retrospective versus Actual Pre-Course Self-Assessments." Evaluation and the Health Professions. 1990;13(4):445–52.
  15. Longo DR, Land G, Schramm W, Fraas J, Hoskins B, Howell V. "Consumer Reports in Health Care: Do They Make a Difference in Patient Care?" Journal of the American Medical Association. 1997;278(19):1579–84.
  16. Marshall MN, Romano PS, Davies HTO. "How Do We Maximize the Impact of the Public Reporting of Quality Care?" International Journal for Quality in Health Care. 2004;16(1):157–63. doi: 10.1093/intqhc/mzh013.
  17. Mehrotra A, Bodenheimer T, Dudley RA. "Employers' Efforts to Measure and Improve Hospital Quality: Determinants of Success." Health Affairs. 2003;22(2):60–71. doi: 10.1377/hlthaff.22.2.60.
  18. Miles MB, Huberman AM. Qualitative Data Analysis: An Expanded Sourcebook. Thousand Oaks, CA: Sage Publications; 1994.
  19. Rhode Island Department of Health. "A Report of Patient Satisfaction with Hospital Care in Rhode Island." 2001a. Available at http://www.health.ri.gov/chic/performance/quality/quality10.pdf. Accessed November 2, 2005.
  20. Rhode Island Department of Health. "A Report of Patient Satisfaction with Hospital Care in Rhode Island: Technical Report." 2001b. Available at http://www.health.ri.gov/chic/performance/quality/quality10_technical.pdf. Accessed November 2, 2005.
  21. Rhode Island Department of Health. "Statewide Effort to Improve Hospital Patient Satisfaction Ratings." 2004. Available at http://www.health.ri.gov/chic/performance/quality/quality23.pdf. Accessed November 2, 2005.
  22. Rosenthal GE, Hammar PJ, Way LE, Shipley SA, Doner D, Wojtala B, Miller J, Harper DL. "Using Hospital Performance Data in Quality Improvement: The Cleveland Health Quality Choice Experience." Joint Commission Journal on Quality Improvement. 1998;24(7):347–59. doi: 10.1016/s1070-3241(16)30386-8.
  23. Scanlon DP, Darby C, Rolph E, Doty HE. "Use of Performance Information for Quality Improvement: The Role of Performance Measures for Improving Quality in Managed Care Organizations." Health Services Research. 2001;36(3):619–41.
  24. Shearer A, Cronin C, Feeney D. "The State-of-the-Art of Online Hospital Public Reporting: A Review of Forty-Seven Websites." Delmarva Foundation; 2004. Available at http://www.delmarvafoundation.org/html/public_reporting_summit_052604/WebSummariesFinal9.2.04.pdf. Accessed November 2, 2005.
  25. Shortell SM, O'Brien JL, Carman JM, Foster RW, Hughes EFX, Boerstler H, O'Connor EJ. "Assessing the Impact of Continuous Quality Improvement/Total Quality Management: Concept versus Implementation." Health Services Research. 1995;30(2):377–401.
  26. Sofaer S. "Qualitative Methods: What Are They and Why Use Them?" Health Services Research. 1999;34(5):1101–18.
  27. Sofaer S. "Qualitative Research Methods." International Journal for Quality in Health Care. 2002;14(4):329–36. doi: 10.1093/intqhc/14.4.329.
  28. Weber V, Joshi MS. "Effecting and Leading Change in Health Care Organizations." Joint Commission Journal on Quality Improvement. 2000;26(7):388–99. doi: 10.1016/s1070-3241(00)26032-x.
  29. West T. "Comparing Change Readiness, Quality Improvement, and Cost Management among Veterans Administration, For-Profit, and Nonprofit Hospitals." Journal of Health Care Finance. 1998;25(1):46–58.
