Health Care Financing Review. 2002 Summer;23(4):51–70.

Public Reporting of Hospital Patient Satisfaction: The Rhode Island Experience

Judith K Barr, Cathy E Boni, Kimberly A Kochurka, Patricia Nolan, Marcia Petrillo, Shoshanna Sofaer, William Waters
PMCID: PMC4194756  PMID: 12500470

Abstract

This article describes a collaborative process for legislatively mandated public reporting of health care performance in Rhode Island that began with hospital patient satisfaction. The goals of the report were both quality improvement and public accountability. Key features addressed include: the legislative context for public reporting; widespread participation of stakeholders; the structure for decisionmaking; and the use of formative testing with cognitive interviews to get responses of consumers and others about the report's readability and comprehensibility. This experience and the lessons learned can guide other States considering public reporting on health care performance.

Introduction

As part of the movement to improve health care quality, increasing attention is being given to public accountability for health care providers, and efforts are being directed toward a national report card (Institute of Medicine, 2001). Performance reports have two complementary objectives: quality improvement (QI) and public accountability. In the hospital arena, QI efforts have tended to focus on internal processes, and performance measurement is used to provide reference points for gauging improvement and monitoring success of interventions. When health care performance is reported to the public, it is intended to demonstrate that providers can be monitored externally and that the health care system is accountable to patients. Moreover, when public reporting is comparative, it requires that organizational performance measurements be developed consistently across providers and presented to highlight variation.

The purpose of this article is to describe a collaborative process for mandated public reporting of comparative hospital performance in the State of Rhode Island that began with a public report of hospital patient satisfaction data. The article begins with an overview of current and recent public reporting of hospital data and the legislative mandate in Rhode Island. Next, the process of moving from mandate to reality—from vendor selection to designing and disseminating a public report—is described. Finally, key elements of the collaborative model that evolved in Rhode Island are identified and lessons learned from this process are discussed.

Hospital Patient Satisfaction Public Reporting

As part of a series of publications on health care quality, the Institute of Medicine (2001) published a monograph on public reporting, outlining an approach to a national report card for the health care system. An important objective of this work has been to focus attention on the need to make health care quality information accessible to the public. The conceptual framework proposed in “Envisioning the National Health Care Quality Report” emphasizes consumer perspectives on health care needs, including patient satisfaction. Recent public reporting efforts have focused on comparative health plan performance (New York State Health Accountability Foundation, 2001), health plan member satisfaction (Crofton, Lubalin, and Darbu, 1999; Guadagnoli, et al., 2000), hospital clinical performance, and the use of standardized data such as the Minimum Data Set and the Outcome and Assessment Information Set for public reporting on nursing homes and home health agencies, respectively.

Despite these national efforts, a review of public reports of hospital data (both print and Web-based) showed that there are relatively few reports that compare data by hospital or that are designed specifically for public dissemination, and even fewer that report patient satisfaction. A few States, including Rhode Island, legislatively mandate public reporting of hospital data. For example, Pennsylvania, Virginia, and Utah mandate that hospital cost and utilization data be publicly reported. Maryland has mandated public reporting of hospital clinical measures, planned for 2003. Other public reporting of hospital utilization data has been undertaken by business and health coalitions (e.g., Pacific Business Group on Health, Houston Healthcare Purchasing Organization). Three hospital and health systems publicly report hospital cost and utilization data (Colorado, Michigan, and Ontario, Canada), and one includes patient satisfaction (Ontario). Hospital patient satisfaction also has been publicly reported by five coalitions: (1) California Institute for Health Systems Performance, (2) Cleveland Health Quality Choice, (3) Massachusetts Health Quality Partners, (4) Niagara Health Quality Coalition, and (5) Southeast Michigan Employer and Purchaser Consortium. Recently, three other regions have joined with Michigan to form the Hospital Profiling Project and report patient satisfaction. Rhode Island becomes the first State to report comparative hospital patient satisfaction to the public under a legislative mandate.

Rhode Island Legislation

In 1998, the General Assembly of Rhode Island enacted legislation that mandates a public reporting system for all licensed health care facilities in the State (General Laws of Rhode Island, 1998).1 (A similar law was enacted in 1996 mandating public reporting for all health plans [General Laws of Rhode Island, 1996].) The legislation specifies a focus on both quality improvement and public accountability. The law requires the Rhode Island Department of Health (HEALTH) to develop a reporting system—the Rhode Island Health Quality Performance Measurement and Reporting Program—and to disseminate comparative data on health care quality to the public. Two types of data are stipulated: clinical performance measures and patient satisfaction. A series of reports is required, beginning with hospitals and eventually extending to nursing homes, home health agencies, freestanding ambulatory facilities, and other licensed health care facilities.

When legislation about public reporting for health care facilities was first introduced in the early 1990s, it received little support from the health care community. When legislation for statewide mandated reporting was proposed again in 1997, however, the health care environment was shifting to a focus on quality. Recognizing this shift, the leadership of the Hospital Association of Rhode Island (HARI) made a commitment to support this legislative initiative. At HEALTH, there was also less reticence and more openness to such a law in 1997. Both organizations gave it full support. While the individual hospitals were already engaged in surveying patient satisfaction, they used different vendors and different survey systems, making meaningful comparisons across hospitals impossible. Recognizing the importance of a standardized methodology and perceiving that a voluntary approach would not work, the hospitals supported this legislation. The legislative process involved little controversy, and there was agreement across stakeholders on the intent of and language in the legislation that passed in 1998.2

To implement the requirements, the law outlined a structure for collaborative decisionmaking. It directed HEALTH to establish a steering committee composed of a maximum of 17 persons and chaired by the Director of Health. Convened by the Director of Health in October 1998, the committee included legislators, representatives of health care providers (e.g., physicians, nurses, hospitals), health plans, business, labor, State departments, and consumers. Aware of the need to address the technical issues of developing and selecting the most appropriate measures, the committee established a Hospital Measures Subcommittee in December 1998 with broad representation from the provider and academic health research communities. To determine the most appropriate approach and format for reporting the data, a Public Release Workgroup (PRWG) was established in April 2000 with representatives of hospitals and consumers, as well as quality improvement consultants and the academic health research community. Decisionmaking about reporting flowed from the PRWG to the Hospital Measures Subcommittee to the steering committee for final approval. Thus, the program provided the framework for collaboration. In addition, HEALTH has maintained a list of “interested parties” (e.g., consumers, provider organizations, community organizations) who were regularly notified of various committee meetings; their attendance expanded the opportunities for collaboration.

Other factors also made collaboration essential. First, the legislature made available a relatively modest appropriation to HEALTH; to actually implement the legislation, therefore, other stakeholders had to make significant resource contributions. Specifically, Rhode Island hospitals paid for the administration of the patient satisfaction survey and some of the vendor consultant fees for the development of the public report. Second, the legislation required that public reporting be comparative, meaning that all hospitals had to agree to use the same measures of patient satisfaction. Participation by the hospitals in the decisionmaking process to select the vendor and then to develop the report format was essential to build and maintain support and to ensure that the measurement of patient satisfaction would be useful to the hospitals for quality improvement. Moreover, because public reporting was new to Rhode Island, the experience of the different hospitals and the input from various experts were essential to the process. Finally, dissemination of the report, once available, would be enhanced by the active participation of agencies and groups that would promote the availability of the report to their constituencies, and also vouch for its trustworthiness and utility (Sofaer, 2000a).

From Concept to Report

Once the committees began to consider hospital performance reporting, they decided for several reasons to focus on patient satisfaction initially. First, hospitals were used to surveying patients about satisfaction. Second, there seemed to be more ready agreement on what the satisfaction measures should be, while deciding the clinical measures for hospitals was thought to be more complex. Third, satisfaction was thought to be more understandable to the consumer; and fourth, a patient satisfaction survey was more feasible. The process involved many steps to bring the hospital patient satisfaction report to the public. From the outset, HEALTH sought to bring external expertise to the process. (HEALTH contracted with Qualidigm®, a health care quality improvement organization based in Connecticut, with extensive experience in measures development and quality improvement interventions. Qualidigm®'s role was to provide ongoing assistance with the project and to develop a series of background reports based on existing data sources, including Medicare claims data, and existing public reports of hospital performance.)

Preparing for the Survey

Hospital Participation

Since the inception of the project, the hospitals in Rhode Island have been committed to assisting with and participating in the patient satisfaction measurement and reporting process. Led by HARI, the hospitals provided input on the selection of the vendor. This decision was a critical one for two important reasons: (1) each hospital had to commit to paying for the survey of its patients; and (2) each hospital needed to work closely with the vendor to assure success in the sampling procedure. While the intent of the legislation is to have all licensed health care facilities reporting quality performance data, it was recognized early in the process that the small number of hospitals in the State, and the different types of hospitals, would be considerations in deciding which facilities to survey. All 11 general acute care hospitals in the State participated, as did the single rehabilitation hospital and the single psychiatric hospital. The Hospital Measures Subcommittee decided that other specialty hospitals would not be included in the first-round survey because their patients did not meet the survey sample selection criteria.

Vendor Selection

The first step in planning for a statewide hospital patient satisfaction survey was to select a survey vendor. This process was coordinated by HEALTH in collaboration with HARI. A Patient Satisfaction Workgroup was convened early in 1999 to assist with the process of vendor selection. Meetings were held at two levels: large group sessions and smaller staffing sessions. All meetings were open to anyone from the hospital, health care, or consumer communities. The workgroup had three major goals: (1) discuss the needs and concerns of hospitals in Rhode Island about a patient satisfaction survey to meet the requirements of the legislation; (2) review information about potential vendors; and (3) provide systematic feedback on vendor selection choice. Workgroup meetings were held 1-2 times per month during this process at a centralized location to make them accessible to organizations from all over the State. Approximately 15-25 persons attended each of the sessions. The smaller staffing meetings included representatives from HEALTH, HARI, several hospitals, two consumer groups, and Qualidigm®. Participants reviewed literature on hospital patient satisfaction measures to help identify potential vendors; developed a Request for Proposals; and prepared materials to facilitate vendor selection.

The vendor selection process began early in 1999, and the final recommendation was made in February 2000. Five vendor proposals were reviewed; four vendors were invited to make presentations to the workgroup, and two returned for a second presentation. These face-to-face meetings served to clarify questions, reinforce opinions, and engender comfort with the final decision. Nine hospitals were represented among the 13 raters at the final sessions. The raters used an evaluation form with a five-point scale to rate the vendor attributes that were used as selection criteria. (Rating criteria are available from the authors on request.) The final decision was made by the steering committee, based on input from HARI and the Hospital Measures Subcommittee. HARI contracted with the vendor (Parkside Associates, Inc., subsequently acquired by Press Ganey) on behalf of the hospitals.

Fielding the Survey

Pilot Survey

The hospitals requested that a pilot survey be conducted prior to the first round of public reporting. This pilot was conducted July-October 2000 with all hospitals participating. The purpose of the pilot was twofold. The first aim was to test the sample and data collection methodology and identify potential improvements in the process. The second aim was to provide information regarding data quality, error estimates, and confidence intervals to inform reporting decisions being made by the Measures Subcommittee and the PRWG. The numerical results of the pilot test were confidential, with the exception that each hospital received its own data. This process gave hospitals an opportunity to carry out quality improvement efforts between administration of the pilot survey and administration of the reportable survey.

Sampling Patients

The selection criteria included: patients admitted with an overnight stay; age 18 or over; and discharged to a personal residence. The purpose was to include patients with an inpatient experience who would not necessarily need a proxy to complete the survey (as children would) and who would be easier to track at their homes rather than necessitating a followup at other institutions. These criteria helped ensure the highest quality of data possible for the first public report. Patients were excluded from the sample if they were deceased, had received a survey in the previous 6 months, or were transferred to another facility. For the 11 general acute care hospitals, the sample consisted of a random selection of patients stratified across three service types: surgical, medical, and obstetric. The stratified random sample meant that an equal number of patients from each service type were selected regardless of the proportion of those service types to the patient population. The sampling strategy randomly selected 25 patients per service type per week for 13 weeks. If there were fewer than 25 patients per type per week, 100 percent of the patients were selected. For the rehabilitation hospital, it was necessary to sample 100 percent of the patients who met the criteria, as there were fewer than 25 patients discharged per week. For the psychiatric hospital, all patients who met the selection criteria were included.
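To make the selection rule concrete, the sketch below illustrates the weekly stratified sampling and eligibility screening described above. It is a hypothetical illustration only: the field names and data structure are assumptions for the example and do not represent the vendor's actual implementation.

```python
import random

SERVICE_TYPES = ["surgical", "medical", "obstetric"]
WEEKLY_TARGET = 25  # patients per service type per week

def weekly_sample(discharges, week):
    """Select up to 25 eligible patients per service type for one week.

    `discharges` is a list of dicts with assumed keys: 'week', 'service_type',
    'age', 'overnight_stay', 'discharged_home', 'deceased',
    'surveyed_last_6_months', and 'transferred'.
    """
    selected = []
    for service in SERVICE_TYPES:
        eligible = [
            p for p in discharges
            if p["week"] == week
            and p["service_type"] == service
            and p["age"] >= 18
            and p["overnight_stay"]
            and p["discharged_home"]
            and not (p["deceased"] or p["surveyed_last_6_months"] or p["transferred"])
        ]
        if len(eligible) <= WEEKLY_TARGET:
            selected.extend(eligible)  # take everyone when fewer than 25 are eligible
        else:
            selected.extend(random.sample(eligible, WEEKLY_TARGET))
    return selected

# Over the 13-week sampling period:
# sample = [p for wk in range(1, 14) for p in weekly_sample(discharges, wk)]
```

Capping each stratum at the same weekly target is what makes the sample equal across service types regardless of each type's share of a hospital's discharges.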

Mail Survey Data Collection

The first round of data collection for public reporting extended from April 1, 2001-July 30, 2001. For the 11 general acute care hospitals and for the rehabilitation hospital, surveys were mailed to patients approximately 1 week after they were discharged for the first 13 weeks of data collection. Reminder post cards were mailed to patients 1 week after the surveys were mailed, with the last reminder post cards mailed in the 14th week. Because of laws governing patient confidentiality, the only exception was for inpatient psychiatric patients. Eligible patients were given a copy of the cover letter and questionnaire along with a return envelope prior to their discharge, according to a pre-determined protocol. Patients were allowed 3 additional weeks to complete and return the survey.

Statewide, approximately 43 percent of patients receiving surveys returned them. The mean hospital response rate was 45 percent. The highest overall response rate achieved by an individual hospital was 53 percent, and the lowest was 31 percent. Nine of the 13 hospitals achieved overall response rates greater than 40 percent. Response rates varied by type of service received by the patient, with the highest response rates for surgical patients (49 percent). These response rates were consistent with expectations, and the respondent population had similar characteristics to the total hospital discharge population (refer to the Technical Report at Internet address: www.healthri.org/chic/performance/satisfaction.htm).

Creating the Public Report Format

Once the survey vendor had been selected, the PRWG was convened in April 2000, and organizations were invited to join in this effort. Participants included: HEALTH, HARI, four hospitals, and four community organizations (including groups representing minority and aging populations). The vendor representative also participated in most of the meetings, as did Qualidigm®, to provide both information and technical support.

Setting the Charge

The first decision made by the PRWG was to agree on the charge for this workgroup. Regarding the report, the charge was to: (1) draft a public report format that consumers can understand and use, (2) identify patient satisfaction domains for public reporting, (3) develop a reporting format based on identified domains and public reporting research prior to release of survey data, (4) provide national and/or regional comparisons when compiling public information, and (5) test the report format with members of the public. Regarding dissemination, the intent was to: identify target audiences and information intermediaries and create a dissemination plan outline. Finally, with respect to the report drafting process, the PRWG agreed to: (1) scan and consider similar public reporting initiatives on a national and regional level, (2) report periodically and submit recommended report format and dissemination plan to the Measures Subcommittee and the steering committee, (3) conduct an open and public process, and (4) conduct a process that would be sensitive to cultural diversity.

These decisions were formulated as guiding principles for the process of report development. For example, the PRWG members believed that format reporting decisions should be made independently of actual data on patient satisfaction, so that there would be no opportunity for the final report to be influenced by any particular hospital's standing in the data. All decisions made by the PRWG were distributed to participants whether they attended the meetings or not. The PRWG worked with minority organizations to test the report format, and met with the Minority Health Advisory Committee to discuss issues related to surveying non-English speaking patients and disseminating results to the minority community. These decisions were presented to the Measures Subcommittee and the steering committee for confirmation or approval, as needed.

Educating the Public Release Workgroup

To give the PRWG a solid foundation for launching the format development process, the next step involved education of the PRWG participants about public reporting. They reviewed the key points of the legislation. The vendor instructed them about the survey process, in particular, sampling, item pools, domain scores, composite scores, and reliability. Qualidigm® presented literature on public reporting of quality measures and consumer perceptions of quality information for review. The PRWG also reviewed existing public reports on hospital and health plan performance from other States, hospital associations, and coalitions. Included in this review was the extensive work in the literature on the Consumer Assessments of Health Plans Survey (CAHPS®) (McGee et al., 1999; Hibbard et al., 2000; Carman et al., 1999). These reviews not only helped to educate the PRWG, but also became the core of a report developed by Qualidigm® and published by HEALTH (Rhode Island Department of Health, 2000). A presentation on this background information was made in May 2000, and the report was distributed to all members of the steering committee and the Measures Subcommittee.

This educational step was enhanced through consultation with a national expert in consumer reporting of health care information. The consultant provided expertise at several points during the process: in early meetings of the PRWG; mid-way through the process, including a presentation to the committee; during the cognitive testing phase; and at the report development phase. Important contributions included:

  • Providing background materials for review by the PRWG.

  • Outlining and describing the range of issues to be considered in designing a comparative report of quality information for the public.

  • Guiding thinking beyond a “consumer choice” model to include in the report specific action steps people might take in response to the information.

  • Emphasizing the need to use consumer-friendly language and formatting.

  • Noting the importance to consumers of knowing that the information reported was objective and from an authoritative and trustworthy source.

Finally, the consultant provided input on the implications of different approaches to presenting comparative data, including the use of a benchmark.

Defining the Content

The PRWG agreed on the objectives for the public report: (1) to disseminate comparative data on patient satisfaction with hospital care in Rhode Island, (2) to provide information relevant to health care decisionmaking, and (3) to meet the intent of the legislation. They then created key messages that needed to be contained in the public report and to guide its development: importance of quality, confidence and trust in the health care system, uses of the report information, new information about consumer (i.e., patient) perspectives on hospital care, comparative information on hospitals, and health care system responsiveness to public concerns regarding quality.

The PRWG developed a report outline based on these key messages and then made several decisions early in the process about how data would be reported and about the other content that would be included in the report. Critical to the report format were decisions to report information by hospital, by type of service (medical, surgical, and obstetrical) or specialty hospital (psychiatric or rehabilitation), and by survey domain (instead of individual questions). In addition to the data, the PRWG decided the report should also include: information on each hospital participating in the reporting (e.g., address, telephone number, Web site); the background of the report (e.g., legislation); a description of the survey process and respondent profile; sample questions; and efficacy messages such as directions about how to use the report and reference telephone numbers and Web site contacts for assistance. These decisions were important because they represented agreement among the collaborators about format and contents.

Other considerations included checks on literacy levels of the text portions of the report, whether to use positive or negative framing in the presentation (Banks et al., 1995), and the use of both expert review and cognitive testing to assess public receptiveness to the format and wording of the report. At this early stage, the group also identified target audiences and subgroups in order to design a report that would be appropriate for consumers. Audiences for the public report include consumers, providers, payers, purchasers, legislators, and government officials. Anticipating the need to reach these audiences, information intermediaries were identified; these organizations would be in a position to help disseminate the report (Hibbard, Slovic, and Jewett, 1997).

Developing the draft public report was an iterative process, with members of the PRWG taking draft language back to the organizations they represented for additional input at each step. Two examples of this process were the decisions about the title of the report and the presentation of the survey questions in the report. After staff of the PRWG suggested several titles, other suggestions were discussed, and the members polled their constituencies. Following a discussion of the results, the PRWG decided on the title for the draft report (subsequently changed twice during the testing phase). An early decision was to include examples of the survey questions for the reader, but not the entire survey instrument, which would be too cumbersome. Staff initially drafted text language that explained the relevance of each section of questions, with sample questionnaire items listed below. After much discussion, it was decided that the explanatory text detracted from the content of the actual items. The sample questionnaire items were worded in an abbreviated version to accurately reflect the actual wording in the survey instrument. The end result of this iterative process was a draft report with dummy data that could be used for formative testing.

Formative Testing

There were two major purposes of formative testing: (1) to get feedback from experts and from consumers in Rhode Island before the patient satisfaction report format was finalized, and (2) to test consumer responses to the readability, understandability, and utility of the draft public report through cognitive interviews. A three-pronged strategy was developed: (1) reviews of the draft by national experts in consumer reporting of health care performance information, including two persons with extensive experience in creating and testing such reports for low literacy and multi-cultural audiences, (2) informal testing conducted by PRWG members, and (3) formal cognitive testing conducted by Qualidigm®. A fourth strategy was added following the cognitive testing when it was decided to convene a focus group to test a few specific changes made after the cognitive testing was completed.

The expert reviews and the informal testing of consumer responses began in April 2001 and occurred simultaneously. Four national experts were sent the draft for review and invited to provide indepth comments on the language, format, readability, order and flow of text, and any other aspects of the report. They returned detailed reviews that were used to guide the next set of changes to the report. Also, two national quality experts briefly reviewed the draft and gave feedback. The consultant provided detailed input throughout the drafting process. At the same time, members of the PRWG, seeking active engagement in what they clearly viewed as an important learning experience, volunteered to assess responses from family and friends. This informal testing was done with a standard two-page set of questions that was developed for this purpose. PRWG members asked selected persons to look at the draft and answer questions about its interest and understandability. Responses from 15 persons were recorded on the standard form.

The cognitive testing interviews were designed to be indepth assessments of a person's reaction to the draft report. Cognitive interviews were originally designed as a method for assessing whether people similar to potential survey respondents consistently understood survey items and response options in the manner intended by the survey designers (DeMaio and Rothgeb, 1996). The method has been adapted, primarily in the development of CAHPS® reports, as a vehicle for conducting an indepth review of a draft report in order to assess how the potential audience responds to its content, structure (navigability), language, graphics, formatting, and utility (Harris-Kojetin et al., 2001). The consultants developed an extensive interview protocol intended to take 90 minutes to 2 hours. They trained two interviewers from Qualidigm® in the techniques of cognitive testing of reports. The method used involved handing the report to the person to read for approximately 10 minutes, while the interviewer observed and recorded non-verbal responses to the report as well as how the individual scanned through the pages. Then, detailed questions were asked about specific pages in the report to test understanding of the content and language, understanding of the symbols and hospital comparisons, ease of reading, interest, and readiness to use the information. Only one person at a time was interviewed. The interviewees were selected in a two-step process. First, the PRWG invited several organizations serving on the PRWG and the steering committee or Measures Subcommittee, as well as the HEALTH Office of Minority Affairs and the HEALTH Office of Vital Statistics, to participate. Second, these organizations identified individuals for the cognitive interviews. The intention in selecting the convenience samples was to get representation from specific population segments: elderly, females of child-bearing age, minorities, and others. Eighteen persons were interviewed in the formal cognitive testing phase.

The key themes that emerged from both the informal and cognitive interviews fell into the categories of: understanding, appearance/graphic design, order of the sections, and specific wording. As an example, there continued to be confusion about the survey questions and why they were in the middle of the report. Interviewees did not understand that these were examples of questions from the survey. The page was revised with a title that asked, “What kinds of questions were asked on the survey?” The labels were made more explicit (e.g., Patient Care—sample questions from the survey). The narrative specified the number of questions on the survey and the way they were grouped by topic (i.e., domain). Another early finding concerned the symbol used to indicate that a rate was not reported because the number of responses did not meet the minimum. The initial symbol (i.e., a circle with a line through it) was easily misinterpreted, and interviewees did not understand the explanation. The final version uses NR as the symbol, with the explanation given in the legend on each rating page: NR = Not Reported (fewer than 40 patients responded). Other changes based on the cognitive interviews included reordering sections of the report (e.g., placing the “Acknowledgments” at the end rather than the beginning so the reader can more quickly move to the main sections of the report). An example of a wording change made to clarify the meaning is the phrase “clinical care,” which had been used to identify the domains related to medical care (e.g., physician care, nursing care). In the initial week of testing, people interpreted this phrase to mean going to the clinic or to see the doctor; further testing revealed that the phrase “medical care” also was not consistently understood. The final change was to the phrase “patient care,” with wording that clarified the meaning. The title was also changed to respond to the interviewees' request for wording that conveyed more simply and exactly what was in the report.

Presentation Issues

During the drafting of the report, the PRWG considered several reporting issues and made key decisions. Two types of issues were addressed: format and statistical concerns. As these issues emerged, they were taken to the Measures Subcommittee for further consideration and recommendations were reported to the steering committee.

Format Decisions

First, the PRWG decided that the ratings would be reported for each hospital and displayed on one page so that all hospitals could be viewed together rather than having a page per hospital. The two specialty hospitals (rehabilitation and psychiatric) would be viewed on separate pages. Second, the hospital ratings would be reported separately by service type (i.e., medical, surgical, and obstetrics), maintaining the stratification of patient selection. Third, the questions would be aggregated into key domains for presentation, and no results of individual questions would be presented. The survey used was already organized by domains. Also, the use of domains would reduce the length of the report and potentially the likelihood of information overload among readers (McGee et al., 1999; Epstein, 2000). Fourth, the ratings would be displayed as symbols; actual scores or mean ratings would not be displayed because of the tendency to rank hospitals against one another based on these numbers and the difficulty of assessing how much of a numerical difference is a real difference. Several types of symbols were considered, including stars, circles, and check marks. The final preference was to use diamonds to visually convey the hospital's standing on each domain; these symbols were tested in the cognitive interviews and found to be very acceptable to consumers. Finally, to keep the report from being overly long or complicated, the group decided to produce a technical report to include more detailed information about sampling, measurement, conducting the survey, and calculating the ratings.

Statistical Decisions

Four major statistical issues were addressed. An early decision concerned the minimum number of respondents required for reporting. The decision was to require at least 40 respondents from each hospital for each domain for reporting. The pilot data error estimates confirmed the adequacy of this decision. One hospital did not have enough surgical service volume, and no ratings were given for this hospital for this type of service in the public report. Another decision concerned the benchmark to be used for comparison. It was decided that each hospital in Rhode Island would be compared with a national benchmark or norm, rather than with one another, given the small number of hospitals in the State. The normative scores were based on the vendor's national database of hospitals across the country that had used the same survey instrument. The database varied according to the type of service being reported and was limited to hospitals reporting each type of service. For example, surgical ratings were compared only with hospitals with surgical services. The rating for each domain for each hospital in Rhode Island was compared by type of service with the average rating for each domain for hospitals in the national database. The decision was made to use three rating categories rather than five to make the ratings simpler and more understandable to consumers. Each hospital was given a designation of one diamond, two diamonds, or three diamonds, corresponding to below, about the same as, or above the national average score.

A third decision concerned the basis for determining the comparative ratings and the possible determination of the statistical significance of the ratings. The diamond designations were assigned in a two-step process. In step one, the individual hospital score (i.e., the mean satisfaction score) was compared with the national average score for each domain to determine whether the hospital score was above or below one standard deviation from the national average score. In step two, for those hospitals outside the middle range, a second test was applied to determine whether the 95 percent confidence interval around the hospital score overlapped with the national norm. If the individual hospital score was both outside the one standard deviation range and its confidence interval did not overlap with the national average score, the hospital was assigned one diamond for scores below the national average score and three diamonds for scores above the national average score. Hospitals with scores that fell within the one standard deviation range were assigned two diamonds. A major concern in making these decisions was whether there would be any variation among the hospitals in Rhode Island; the PRWG was concerned that if there were no differences, interest in the report would be minimal. However, the report shows that this method did differentiate among hospitals. It also addressed issues raised by some of the collaborators by giving the diamond designations a statistical basis and by taking into account the potential variation in individual hospital scores. Overall, approximately 5 percent of the ratings given to hospitals in Rhode Island were below the national mean and received one diamond; three diamonds were given to 18 percent of the surgical service ratings, 23 percent of the medical service ratings, and 29 percent of the obstetrical service ratings for the hospitals.
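A brief sketch may help make the two-step rule concrete. The following Python fragment combines the 40-respondent minimum with the two-step comparison described above; it is an illustration under assumed inputs (a hospital's domain mean, its standard error, and the national mean and standard deviation from the vendor's database), not the program's actual code.

```python
def assign_rating(n_respondents, hospital_mean, hospital_se,
                  national_mean, national_sd, min_n=40):
    """Return 'NR' or a diamond count (1, 2, or 3) for one hospital and domain.

    Step one: flag hospitals whose mean lies more than one national standard
    deviation above or below the national average.
    Step two: confirm the flag only if the hospital's 95 percent confidence
    interval does not overlap the national average.
    """
    if n_respondents < min_n:
        return "NR"  # Not Reported: fewer than 40 patients responded

    ci_low = hospital_mean - 1.96 * hospital_se
    ci_high = hospital_mean + 1.96 * hospital_se

    if hospital_mean > national_mean + national_sd and ci_low > national_mean:
        return 3  # above the national average
    if hospital_mean < national_mean - national_sd and ci_high < national_mean:
        return 1  # below the national average
    return 2      # about the same as the national average
```

Because both conditions must hold, a hospital with a high mean but a wide confidence interval still receives the middle rating of two diamonds.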

The final statistical issue concerned whether to adjust the scores for case-mix differences. For this report, the hospital scores were not case-mix adjusted. Case-mix adjustment of patient satisfaction scores may be desirable, but it is not always necessary because the resulting differences tend to be relatively small (Hargraves et al., 2001). In Rhode Island, the data were tested for differences by patient and hospital characteristics (e.g., age, sex, self-reported health status, volume), and no differences were large enough to warrant adjusting the data for reporting purposes. When the data were case-mix adjusted, very few differences in the ratings were found.
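The article does not describe the adjustment method that was tested. As one common approach, direct standardization reweights each hospital's stratum-specific means (e.g., by age group or self-reported health status) to the statewide case mix. The sketch below is a hypothetical illustration of that general technique, not the analysis actually performed in Rhode Island.

```python
import numpy as np

def directly_standardized_mean(scores, strata, state_weights):
    """Case-mix adjust one hospital's mean score by direct standardization.

    scores        : satisfaction scores for the hospital's respondents
    strata        : parallel case-mix stratum labels (e.g., age group)
    state_weights : dict mapping stratum label to its statewide share of patients

    Reweighting stratum means to the statewide case mix removes differences
    that are due only to patient mix. Strata with no respondents at this
    hospital are simply skipped in this simplified sketch.
    """
    scores = np.asarray(scores, dtype=float)
    strata = np.asarray(strata)
    adjusted = 0.0
    for stratum, weight in state_weights.items():
        mask = strata == stratum
        if mask.any():
            adjusted += weight * scores[mask].mean()
    return adjusted
```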

Report Production and Dissemination

Two Versions—Public and Technical

To facilitate the use of the hospital patient satisfaction data, the PRWG decided to communicate results through two documents: the public report and the technical report. The goal of the public report was to display the hospital patient satisfaction scores in a consumer-friendly format. The report translated each hospital's score for each domain into one of three rating levels displayed as diamonds (refer to the Hospital Patient Satisfaction Report at Internet address: www.healthri.org/chic/performance/satisfaction.htm). It was expected that the public report would be copied and widely distributed. However, the PRWG recognized that there would be consumers, as well as health care professionals and others, who would like to have additional information, including statistical details. Therefore, the technical report expanded on the public report, providing detailed information on the project, including: survey development, data collection methodology, data quality assessment, indicator explanation, calculation of hospital scores, normative comparison explanation, and score conversions, as well as specific numerical scores for the hospitals. The technical report also was made available on the HEALTH Web site.

Press Briefing

In order to prepare for public release, HARI established a Public Relations Advisory Workgroup consisting of public relations representatives from each hospital. The purposes of this group were to provide expertise for the development of a comprehensive communications plan for public release of the patient satisfaction results, develop key messages, and engage the hospital public relations executives in the public release process. The group identified a communications plan that included objectives, strategies, key audiences, tactics, a timeline, and an evaluation strategy. The initial plan was revised with significant input from HEALTH and a final version was provided to the Measures Subcommittee.

Press kits were constructed jointly by HEALTH and HARI with information about the public reporting project, and spokespersons were identified. Editorial board meetings and advance background briefings were held with selected media 1 week prior to release. In addition, media advisories were faxed by HEALTH to all media 1 week and 1 day in advance of the announcement day. The press release was faxed by HEALTH to all media on the day of release. The results were embargoed until press time, and copies of the report were made available at the press conference. The press conference was hosted by HEALTH and HARI and included legislators, hospital CEOs, and others.

Lessons Learned

Based on the lessons learned from the public reporting process on hospital patient satisfaction in Rhode Island, four essential components of the process have been identified: (1) collaboration, (2) the pilot survey, (3) formative testing, and (4) dissemination and evaluation. Each component is described here, and the lessons learned about each component are discussed.

Collaborative Model

Basic to the success of the process described in this article is the collaboration among public health, health care, and community organizations to work through the steps necessary to develop and produce a comparative public report on hospitals in Rhode Island. While collaboration is not a new phenomenon, methods and models of collaboration vary. Whatever the structure, goals, or combination of collaborating organizations, major questions are whether the collaboration is successful in achieving its purpose and what factors either facilitate or hinder the outcome (Sofaer, 2000b).

Framework for Collaboration

Based on case examples of collaborations between the medicine and public health sectors, a “collaborative paradigm” (Lasker, 1997) has been proposed that recognizes the distinctiveness and unique contributions of each partner and that requires a willingness to work together because no one partner can accomplish the goals alone. Key elements of a collaborative framework include: a common goal, inclusion of a broad spectrum of people and organizations, and diverse resources and skills that partners can contribute (Lasker, 1997). Strategies for success include: recognition of organizational self-interest, a boundary spanner who understands the different and sometimes competing perspectives of the partners, adequate infrastructure support, backing and support of prominent interested parties, and effective communication among partners.

All of these elements were evident in the Rhode Island case. The common goal was defined by the legislative mandate and supported by HARI and the hospitals. Representatives of hospitals, the health department, academic and research interests, and provider and community organizations all participated at different levels in this collaboration. Participants contributed different skills and resources. For example, the State Department of Elderly Affairs, which serves on the steering committee, set up the focus group as the final step in the formative testing process; the department arranged a location and invited people who ranged in age and employment status. Qualidigm® and the consultant acted as neutral parties helping to explain the relevance of larger principles that spanned potentially competing interests. The clear and strong commitment of HEALTH and the State's Director of Health, along with HARI, as well as key members of the steering committee, gave visible support to the Health Quality Performance Measurement and Reporting Program and its goals.

Facilitating Factors

The process of developing a public report of hospital patient satisfaction in Rhode Island was facilitated by several important preconditions (Gray and Wood, 1991). First, not only did the legislative mandate require reporting of patient satisfaction in a standardized way, but it also established a formal structure within which the collaboration would occur, designated the responsible party (i.e., HEALTH), and provided resources to enable the collaborative process to function. Second, the leadership of HEALTH and HARI was evident throughout. For example, both HEALTH and HARI assigned dedicated individuals whose roles were pivotal in bridging differing perspectives from the hospital and public health sectors and in balancing the sometimes competing goals of the reporting program for quality improvement at the individual hospital level and public accountability at the community level. Finally, the collaborators had a shared purpose and were united in the goals of improving quality of hospital care and developing an equitable approach to reporting patient satisfaction data to the public. These preconditions formed the structural context for the health department and the hospitals to work together.

The collaborative process in Rhode Island was facilitated by participation of specific hospital representatives who saw their own self-interest in developing a reporting program that would be useful to and supported by the hospitals and that met the requirements of the legislation. The role of HEALTH was to facilitate the process by: providing information; managing the collaborative arrangements; maintaining the structure of steering committee, Measures Subcommittee, and PRWG meetings; and clarifying legislative intent. Another important element in the process was the guiding principle of an open and public process. At times, this approach meant that progress seemed slow, but deliberate efforts were made at all phases of development to be inclusive, informative, and receptive to the views of all organizations and their representatives. For example, for the consultant's presentation to the steering committee and Measures Subcommittee, notices of the special meeting were sent to the organizations designated as “interested parties.” No formal count was made, but every hospital participated at some point in the process, and many community organizations were informed of the process and were given the opportunity to participate. Moreover, there was a seriousness of purpose among the participants. All parties wanted to get it right and to make this report the best possible product. Participants were committed to working through the issues, were thoughtful, did not simply accept any decision or viewpoint, and were involved and invested in the process.

Benefits of Collaboration

The collaborative process provided a learning experience for participants, resulting in enhanced knowledge and understanding of measurement issues, what is involved in publicly reporting health care performance data, and how to work together. The process also helped meet the challenges of integrating hospital quality improvement with the needs of statewide public reporting as well as focusing on experiences of individual patients while working to improve the systems issues that affect them (Smith et al., 2000). At the steering committee, the Measures Subcommittee, and the PRWG, the views of public health, hospital providers, and consumers were all voiced, and the balance among them helped to keep the process on track. This is an important example of the benefits of integrating the perspectives and experiences of both the medical care and public health sectors in the areas of performance measurement reporting and quality improvement.

Another example of the necessity of collaboration relates to the idea of making format reporting decisions prior to having the data. An important lesson learned is that the process of translating numerical scores into symbols (e.g., diamonds) to display relative quality for public reporting is difficult and potentially contentious. An underlying concern is whether the facilities are fairly represented; statistical artifacts may make results displayed as symbols appear misleading. Resolving such issues requires that key stakeholders agree, and building consensus around these issues is a critical piece of the collaborative process.

Outcomes or Results

An outcome may be products or the survival of the collaboration (Gray and Wood, 1991). In Rhode Island, the collaboration to date can be judged a success using both these gauges. The public and technical reports on hospital patient satisfaction have been produced, and they have been disseminated to the public through a number of different channels. The collaborative process is continuing in two ways. First, the second wave of the hospital patient satisfaction survey is underway for 2003. The vendor arrangements are necessitating some technical changes (e.g., in sampling), and these issues are being addressed by the Hospital Measures Subcommittee and the steering committee in their regular meetings. Additional hospitals are participating in these sessions. Second, the next two phases of public reporting have been launched in two new public release workgroups, one on hospital clinical performance measures and the other on indicators of clinical performance and measurement of patient satisfaction in nursing homes. Participants include hospital and nursing home representatives, Rhode Island Quality Partners (the quality improvement organization for Rhode Island), Qualidigm®, representatives from the academic and research communities, and consumer representatives, such as the Alliance for Better Long-Term Care.

Pilot Survey—Important for Quality Improvement

Relatively little evidence is available about the effect of public reports on quality of health care in hospitals. Some published papers indicate that hospitals responded to the public release of quality data and made improvements (Longo et al., 1997; Bentley and Nash, 1998; Rainwater, Romano, and Antonius, 1998; Rosenthal et al., 1998; Smith et al., 2000; California Health Care Foundation, 2001). Other evidence suggests that hospitals and physicians were not receptive to public release of data (Marshall et al., 2000; Schneider and Epstein, 1998).

The experience in Rhode Island strongly suggests that the hospitals were involved in quality improvement all along, but used the pilot patient satisfaction survey results in particular to help target specific areas for improvement. HARI leadership was guided by the principle that the best uses of performance data are to improve patient care and demonstrate accountability to the community for care delivery. This was a unifying principle among hospital members and a motivating force for hospitals as active and willing participants in the Rhode Island Health Quality Performance Measurement and Reporting Program. As the public release of patient satisfaction and clinical performance data approached, HARI adopted specific principles concerning the appropriate use of these data and further agreed to work together to share best practices and promote improvement across the State. The hospitals received detailed question-by-question results with comparisons to national norms to help them with their quality improvement programs. Opportunities to discuss results and share best practices were provided at monthly HARI meetings. Additionally, HARI staff was available on request to make targeted presentations about the patient satisfaction program to key hospital groups (e.g., nursing leadership, physicians, department directors).

The first-round public reporting survey results validate these efforts. Although the pilot results cannot be released publicly, the observed improvement in scores from the pilot to the first-round results was greater than would be expected from random variation in scores and is likely the result of the hospitals' improvement efforts. The hospitals used the pilot data to identify target areas for improvement, and they identified patient satisfaction as a key focus of organizational priorities.

Conducting a pilot survey was beneficial for other reasons. The hospitals became comfortable with the survey, the data transmission process, the sampling, and the results. Adjustments could be made, as needed, to the procedures for sending the discharged patient data to the vendor. The pilot data were used to test for representativeness of the respondents, to confirm the minimum number of respondents required to report data, and to identify the need for increased efforts by hospitals to boost the response rate through publicity about the survey.

Formative Testing Strategies

The formative testing strategies, including expert reviews, informal and cognitive testing, and a focus group, together provided critical feedback from various perspectives. Through the multiple iterations of the draft document, they stimulated additional consideration of reporting issues, informed the discussion and decisionmaking, and gave a rationale for and level of comfort with the decisions. The use of experts and ongoing consultation provided access to the most recent thinking in the field of public reporting. The informal testing gave the participants a greater sense of involvement in this phase of the process and greater comfort with the concept of testing consumer response, in addition to the actual feedback on the draft. Cognitive testing tapped the reactions of people who were not particularly knowledgeable about the health care field and provided a sampling of misunderstandings or concerns that might be raised by consumers.

The results of this input either confirmed what had been designed or indicated areas needing change, and changes were made iteratively throughout the formative testing process. When the advice conflicted, it helped the group understand which issues required re-examination and discussion in order to delineate the underlying problem and decide how to revise the report. Also, members of the steering committee and the Hospital Measures Subcommittee learned how the formative testing process, especially the cognitive interviews, contributed to a refinement of the public report format and helped make the report more understandable to consumers. Similarly, the importance of the cover letter from HEALTH and HARI in indicating the source of the report was recognized. While all these formative testing strategies were useful in the development of the public report, the cognitive interviewing was the most critical to understanding the perspective of consumers.

Dissemination and Evaluation

The literature on consumer response to hospital quality reports is mixed. Some studies suggest that consumers use reports in deciding on a hospital (Luft et al., 1990; Mukamel and Mushlin, 1998). Other evaluations of the impact of public reports of health plan or hospital data on consumer decisionmaking (Marshall et al., 2000) and consumer use of coronary artery bypass graft surgery data (Schneider and Epstein, 1998) suggest that consumers and patients may be unaware of the reports, and few who have seen them actually use them. These reports suggest the need for extensive efforts at dissemination followed by assessment of public responses to health care quality performance reports.

In the Rhode Island case, the issue of public accountability raised two critical questions: (1) Was the information disseminated to the intended audiences; that is, was the public aware of the report of hospital patient satisfaction? (2) Did the public understand the information, and would they use the report? Indirect and qualitative data can be used to provide initial information on the dissemination question. For example, the press conference held in November 2001 to announce the report to the media was attended by both print and radio media. The press interviewed several participants from HEALTH, HARI, and the hospitals individually about the report and its meaning.

The systematic record of activity on the HEALTH Web site (Internet address: www.healthri.org) provides an indication of the amount of interest these reports have generated. Table 1 shows the results for the 3-month period from November 2001 through January 2002 for these reports and the survey instrument. The results show a marked level of activity in November, when the reports were posted. Although the volume decreased from that high point, these files were among the top 10 PDF files viewed on the HEALTH Web site for all 3 months. In November, the public report accounted for nearly one-fifth of the total number of viewed files, and the technical report represented over 8 percent; together, these two reports represented more than one-quarter of the files viewed on the Web site. In December, the public report was still the most viewed file, representing nearly 10 percent of all viewed files. By January, the public report was in second place; together, these two files and the survey accounted for nearly 8 percent of the viewed files. While it is impossible to judge from these data how often the files were printed, how many different persons accessed the files, or which of the audiences they represented, the Internet is thought to be an effective strategy for disseminating hospital information to the public (Sonneborn, 1999).

Table 1. Number of Patient Satisfaction Files Viewed and Percent of Total Files1, by Month and Year

                      November 2001        December 2001        January 2002
File                  Number    Percent    Number    Percent    Number    Percent
Public Report          7,440     18.9       2,552      9.3       1,052      3.3
Technical Report       3,272      8.3         934      3.4         738      2.3
Patient Survey         2,117      5.4         545      2.0         643      2.0

1 Data taken from monthly reports on the number of times the top 10 PDF files were viewed on the Rhode Island Department of Health Web site. The percent is the proportion of the total number of files viewed (not just the top 10). The report does not indicate whether the files were downloaded to another location or printed.

SOURCE: Rhode Island Department of Health, February 2002.
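
For readers who wish to reproduce the percentages in Table 1, the sketch below shows the simple calculation involved: each file's share is its monthly view count divided by the site-wide total of files viewed that month. The monthly site-wide totals used here are not published in the report; they are rough back-calculations from the figures in Table 1 (for example, 7,440 views at 18.9 percent implies roughly 39,400 total files viewed in November) and are included only as illustrative assumptions.

# Illustrative sketch only: reproduces the percentage calculation behind Table 1.
# View counts are taken from Table 1; the site-wide monthly totals are rough
# back-calculations (views / percent), not figures published by HEALTH.

views = {
    "November 2001": {"Public Report": 7440, "Technical Report": 3272, "Patient Survey": 2117},
    "December 2001": {"Public Report": 2552, "Technical Report": 934,  "Patient Survey": 545},
    "January 2002":  {"Public Report": 1052, "Technical Report": 738,  "Patient Survey": 643},
}

# Assumed (back-calculated) totals of all files viewed on the Web site each month.
site_totals = {"November 2001": 39400, "December 2001": 27400, "January 2002": 31900}

for month, files in views.items():
    total = site_totals[month]
    for name, count in files.items():
        share = 100 * count / total  # percent of all files viewed that month
        print(f"{month}: {name}: {count:,} views ({share:.1f}% of files viewed)")

Running the sketch with these assumed totals yields shares that match Table 1 to the reported precision (for example, the technical report at 3,272 of roughly 39,400 views is about 8.3 percent in November).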

The consumer response to the hospital patient satisfaction report in Rhode Island has not been formally assessed due to lack of resources. However, the report meets the following guidelines proposed in the Institute of Medicine (2001) monograph: it is available in print and on the Web site; standards for comparison were used; the findings have a statistical basis; the report includes a section on why health care quality is a salient issue; it was published within 6 months of the survey and will be updated in 2 years; and the hospitals have used the data to target their quality improvement efforts.

It must be recognized that consumers may not have choices in decisions related to hospital use (Sainfort and Boske, 1996). Rather, they may be limited to hospitals where their chosen physician has privileges or to the hospital network that their health plan offers. In Rhode Island, there is thought to be less choice at the periphery of the State, in geographic areas where there is only one hospital and physicians are likely to have privileges only at that hospital; residents of these areas may go to bordering States for hospital care. In urban areas, physicians may be on the staff of several hospitals, providing the potential for more choice. While Rhode Islanders may be limited in their choice of hospital, they may feel more comfortable about hospital care generally after seeing the comparative data in the public report. Also, the report suggested ways that consumers can use the information, for example, by talking with their physician when facing a prospective hospitalization.

Future of Quality Reporting in Rhode Island

As noted earlier, public reporting of hospital patient satisfaction information is only the first step mandated by the Rhode Island legislation. This experience, however, has already significantly shaped plans for future measurement and reporting efforts. The lessons learned about how to design reports that are salient, comprehensible, trustworthy, and useful will be applied directly to future reports on hospital clinical measures, nursing home care, and home health care. As in many State and local efforts, resources are limited, making it all the more essential to recognize the general lessons gleaned not only from the experiences of others, but also through such methods as expert review, cognitive testing, and focus groups. This work has laid the foundation for reporting on clinical quality in hospitals and has developed principles to be used in other settings. The coalition is being maintained through the committee structure, the ongoing participation of many of the same representatives, and the enlistment of additional persons who can contribute to the process. The collaboration that was key to the development of the hospital patient satisfaction public report is continuing and expanding, not only with local stakeholders but also at the national level, to develop reporting for other quality measures and other types of facilities in Rhode Island.

Generalizability

The experience in Rhode Island may serve as a blueprint to guide other States in their public reporting efforts. Although Rhode Island is a small State, the process may be adapted for use in larger States. Many features of the process are generalizable: (1) the importance of collaboration and involvement of stakeholders in designing the public report, (2) the advantages of a deliberative process to create the draft, (3) the use of multiple strategies to test a draft version of the report format and contents, and (4) the recognition of the value of performance measurement for both quality improvement and public accountability. Other States may have more hospitals, but in many of them consumers are likely to be most interested in the hospitals closest to them rather than in hospitals across the State, so developing region-specific reports may make more sense. The availability of resources for such efforts will vary from State to State, but Rhode Island demonstrates that even a small amount of initial funding can be sufficient if there is commitment on the part of key stakeholders. Indeed, perhaps the most significant way in which Rhode Island's experience is unusual is that the hospitals themselves concluded that active engagement would be in their best interest. At a time when trust is at a premium within health care, it is to be hoped (though it cannot be guaranteed) that hospitals will recognize the value of being perceived by the public as willing to demonstrate their commitment to quality.

Acknowledgments

The authors want to thank the many individuals and organizations in Rhode Island participating in this collaborative effort. We appreciate the significant work of Mary Logan, Jean Marie Rocha, Tierney Sherwin, Christina Spaight, and Tracey Dewart.

Footnotes

Judith K. Barr and Marcia Petrillo are with Qualidigm®. Cathy E. Boni is with the Hospital Association of Rhode Island. Kimberly A. Kochurka is with Press Ganey. Patricia Nolan and William Waters are with the Rhode Island Department of Health. Shoshanna Sofaer is with Baruch College. The research in this article was supported in part by the State of Rhode Island. The views expressed in this article are those of the authors and do not necessarily reflect the views of Qualidigm®, the Hospital Association of Rhode Island, Press Ganey, the Rhode Island Department of Health, Baruch College, or the Centers for Medicare & Medicaid Services (CMS).

1. States such as California and Massachusetts rely on voluntary participation in public reporting; not all hospitals may participate (California Institute for Health Systems Performance and California Health Care Foundation, 2001).

2. Lt. Governor Charles J. Fogarty, the original bill sponsor, continues to support this initiative.

Reprint Requests: Judith K. Barr, Sc.D., Qualidigm, 100 Roscommon Drive, Suite 200, Middletown, CT 06457. E-mail: jbarr@qualidigm.org

References

1. Banks SM, Salovey P, Greener S, et al. The Effects of Message Framing on Mammography Utilization. Health Psychology. 1995;14(2):178–184. doi: 10.1037//0278-6133.14.2.178.
2. Bentley JM, Nash DB. How Pennsylvania Hospitals Have Responded to Publicly Released Report on CABG. Joint Commission Journal on Quality Improvement. 1998;24(1):40–49. doi: 10.1016/s1070-3241(16)30358-3.
3. California Institute for Health Systems Performance and the California Health Care Foundation. Results from the Patients' Evaluation of Performance (PEP-C) Survey: What Patients Think of California Hospitals. 2001.
4. California Health Care Foundation. Voices of Experience: Case Studies in Measurement and Public Reporting of Health Care Quality. 2001.
5. Carman KL, Short PF, Farley DO, et al. Epilogue: Early Lessons From CAHPS® Demonstrations and Evaluations. Consumer Assessment of Health Plans Study. Medical Care. 1999;37(3 Suppl):MS97–MS105.
6. Crofton C, Lubalin JS, Darbu C. Consumer Assessment of Health Plans Study (CAHPS®). Foreword. Medical Care. 1999;37(3 Suppl):MS1–MS9. doi: 10.1097/00005650-199903001-00001.
7. DeMaio TJ, Rothgeb JM. Cognitive Interviewing Techniques: In the Lab and In the Field. In: Schwarz N, Sudman S, editors. Answering Questions: Methodology for Determining Cognitive and Communicative Processes in Survey Research. San Francisco, CA: Jossey-Bass; 1996.
8. Epstein AM. Public Release of Performance Data: A Progress Report from the Front. Journal of the American Medical Association. 2000;283(14):1884–1886. doi: 10.1001/jama.283.14.1884.
9. General Laws of Rhode Island, 1998. Title 23, Chapter 23-17. Internet address: www.rilin.state.ri.us/Statutes/TITLE23/23-17-17/INDEX.HTM
10. General Laws of Rhode Island, 1996. Title 23, Chapter 23-17. Internet address: www.rilin.state.ri.us/Statutes/TITLE23/23-17-13/INDEX.HTM
11. Gray B, Wood D. Collaborative Alliances: Moving From Practice to Theory. Journal of Applied Behavioral Science. 1991 Spring;27(1):3–22.
12. Guadagnoli E, Epstein AM, Zaslavsky A, et al. Providing Consumers with Information About the Quality of Health Plans: The Consumer Assessment of Health Plans Demonstration in Washington State. Joint Commission Journal on Quality Improvement. 2000;26(7):410–420. doi: 10.1016/s1070-3241(00)26034-3.
13. Hargraves JL, Wilson IB, Zaslavsky A, et al. Adjusting for Patient Characteristics When Analyzing Reports From Patients About Hospital Care. Medical Care. 2001;39(6):635–641. doi: 10.1097/00005650-200106000-00011.
14. Harris-Kojetin LD, McCormack LA, Jael EF, et al. Creating More Effective Health Plan Quality Reports for Consumers: Lessons From A Synthesis of Qualitative Testing. Health Services Research. 2001;36(3):447–476.
15. Hibbard JH, Slovic P, Jewett JJ. Informing Consumer Decisions in Health Care: Implications from Decision-Making Research. The Milbank Quarterly. 1997;75(3):395–414. doi: 10.1111/1468-0009.00061.
16. Hibbard JH, Harris-Kojetin L, Mullin P, et al. Increasing the Impact of Health Plan Report Cards by Addressing Consumers' Concerns. Health Affairs. 2000;19(5):138–143. doi: 10.1377/hlthaff.19.5.138.
17. Institute of Medicine. Envisioning the National Health Care Quality Report. Washington, DC: National Academy Press; 2001.
18. Lasker RD. Medicine and Public Health: The Power of Collaboration. New York, NY: The New York Academy of Medicine; 1997.
19. Longo DR, Land G, Schramm W, et al. Consumer Reports in Health Care: Do They Make a Difference in Patient Care? Journal of the American Medical Association. 1997;278(19):1579–1584.
20. Luft HS, Garnick DW, Mark DH, et al. Does Quality Influence Choice of Hospital? Journal of the American Medical Association. 1990;263(21):2899–2906.
21. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The Public Release of Performance Data: What Do We Expect to Gain? A Review of the Literature. Journal of the American Medical Association. 2000;283(14):1866–1874. doi: 10.1001/jama.283.14.1866.
22. McGee J, Kanouse DE, Sofaer S, et al. Making Survey Results Easy to Report to Consumers: How Reporting Needs Guided Survey Design in CAHPS®. Consumer Assessment of Health Plans Study. Medical Care. 1999;37(3 Suppl):MS32–MS40. doi: 10.1097/00005650-199903001-00004.
23. Mukamel DB, Mushlin AI. Quality of Care Information Makes a Difference: An Analysis of Market Share and Price Changes After Publication of the New York State Cardiac Surgery Mortality Reports. Medical Care. 1998;36(7):945–954. doi: 10.1097/00005650-199807000-00002.
24. New York State Health Accountability Foundation. New York State HMO Report Card. Summer 2001.
25. Rainwater JA, Romano PS, Antonius DM. The California Hospital Outcomes Project: How Useful is California's Report Card for Quality Improvement? Joint Commission Journal on Quality Improvement. 1998;24(1):31–39. doi: 10.1016/s1070-3241(16)30357-1.
26. Rhode Island Department of Health. A Review of the Current State of Public Reporting on Health Care Quality Performance. July 2000. Internet address: www.healthri.org/publications/list.htm
27. Rosenthal GE, Hammar PJ, Way LE, et al. Using Hospital Performance Data in Quality Improvement: The Cleveland Health Quality Choice Experience. Joint Commission Journal on Quality Improvement. 1998;24(7):347–359. doi: 10.1016/s1070-3241(16)30386-8.
28. Sainfort F, Boske BC. Role of Information in Consumer Selection of Health Plans. Health Care Financing Review. 1996;18(1):31–54.
29. Schneider EC, Epstein AM. Use of Public Performance Reports: A Survey of Patients Undergoing Cardiac Surgery. Journal of the American Medical Association. 1998;279(20):1638–1642. doi: 10.1001/jama.279.20.1638.
30. Smith DP, Rogers G, Dreyfus A, et al. Balancing Accountability and Improvement: A Case Study from Massachusetts. The Joint Commission Journal on Quality Improvement. 2000;26(5):299–312. doi: 10.1016/s1070-3241(00)26024-0.
31. Sofaer S. A Classification Scheme of Individuals and Agencies Who Serve as Information Intermediaries for People on Medicare. Report to the Health Care Financing Administration. Baltimore, MD: May 2000a.
32. Sofaer S. Working Together, Moving Ahead: A Manual to Support Effective Community Health Coalitions. New York, NY: Baruch College; November 2000b.
33. Sonneborn M. Developing and Disseminating the Michigan Hospital Report. In: Spath PL, editor. Provider Report Cards: A Guide for Promoting Health Care Quality to the Public. Chicago, IL: Health Forum, Inc.; 1999.
