Article-at-a-Glance
Background
Hospital leaders play an important role in the success of quality improvement (QI) initiatives, yet little is known about how leaders engaged in QI currently view quality performance measures. In a follow-up to a quantitative study conducted in 2012, a study employing qualitative content analysis was conducted to (1) describe leaders’ opinions about the quality measures reported on the Centers for Medicare & Medicaid Services (CMS) Hospital Compare website, (2) generate hypotheses about barriers and facilitators to improving hospitals’ performance, and (3) elicit recommendations about how to improve publicly reported quality measures.
Methods
The opinions of leaders from a stratified sample of 630 hospitals across the United States regarding quality measures were assessed with an open-ended prompt that was part of a 21-item questionnaire about quality measures publicly reported by CMS. Their responses were qualitatively analyzed in an iterative process, resulting in the identification of the presence and frequency of major themes and subthemes.
Results
Participants from 131 (21%) of the 630 hospitals surveyed replied to the open-ended prompt; 15% were from hospitals with better-than-expected performance scores, and 52% were from hospitals with worse-than-expected scores. Major themes included (1) concerns regarding quality measurement (measure validity, importance, and fairness) and/or public reporting (76%); (2) positive views of quality measurement (stimulate improvement, focus efforts; 13%); and (3) recommendations for improving quality measurement.
Conclusions
Among hospital leaders responding to an open-ended survey prompt, some supported the concept of measuring quality, but the majority criticized the validity and utility of current quality measures. Although quality measures are frequently being reevaluated and new measures developed, the ability of such measures to stimulate improvement may be limited without greater buy-in from hospital leaders.
The Centers for Medicare & Medicaid Services (CMS) began publicly reporting hospital performance on a core set of 10 evidence-based quality measures on the Hospital Compare website in 2005.1 Since then, the number of measures has grown rapidly and now includes not only measures of mortality and readmission but also measures pertaining to complications of care, patient experience, cost, and volume. By publicly reporting hospitals’ performance on these measures, CMS aims to improve patient outcomes through two mechanisms: (1) stimulating local quality improvement (QI) efforts and (2) encouraging consumers to choose higher-quality hospitals.1 However, it remains unclear what impact public reporting has had on health care processes and patient outcomes, with the majority of studies finding less positive impact than anticipated.2–6 The reasons for this are likely multifactorial, but one contributor may be hospital leaders’ beliefs about the validity and utility of these measures.7–10
Successful efforts to implement organizational change generally require substantial buy-in from leadership.11–13 Hospital leaders’ attitudes toward specific quality measures and public reporting have been quantitatively assessed previously,8 but we know of no current qualitative assessment of attitudes and beliefs specific to CMS measures and public reporting of performance on these measures.
To better understand leaders’ perspectives, in 2012 we conducted a survey that was intended to update and expand understanding of hospital leaders’ opinions about publicly reported quality measures.14 The survey included an open-ended prompt that allowed respondents to expand on opinions elicited earlier in the survey and to express opinions that may not have been captured in the quantitative portion of the study. In this article, we report the results of the qualitative analysis of responses to this prompt.
Methods
Questionnaire Development and Content
As previously reported, we sent a 21-item questionnaire (Appendix 1, available in online article) to a national sample of hospital leaders who were engaged in QI; it included an open-ended prompt intended to elicit respondents’ opinions about quality measures: “Please share your additional thoughts about publicly reported measures of healthcare quality, including strengths or weaknesses of current measures, ideas for new measures, etc.” The study protocol was approved by the Institutional Review Board at Baystate Medical Center.
Study Sample and Questionnaire Administration
We sampled hospitals at three levels of quality performance on the basis of their relative scores on mortality and readmission measures for pneumonia, heart failure, and acute myocardial infarction, as reported on Hospital Compare. We used CMS risk-standardized performance scores to rate each hospital’s performance as “better than expected,” “as expected,” or “worse than expected.” From the 4,459 hospitals in the Hospital Compare database, we excluded 624 (14%) because of missing data for a performance measure (resulting from low case volumes that did not meet the CMS threshold for reporting) and 136 (3%) that had mixed scores (above-expected scores for one measure but below-expected scores for another). Sampling was driven by power calculations for the overall study, with 210 hospitals randomly selected from each performance stratum (N = 630). Hospital characteristics, such as teaching status, were obtained from matched American Hospital Association (AHA) survey data when available.
We identified the names, addresses, telephone numbers, and e-mail addresses, when available, of the CEOs and the leaders responsible for quality at selected hospitals (for example, Chief Quality Officer, Medical Director of Quality, Vice President of Medical Affairs, Chief Medical Officer) by searching the hospitals’ websites and through telephone inquiries. Two weeks before mailing or e-mailing the questionnaire, we sent a postcard or e-mail message notifying potential participants about the goals of the study. After the initial questionnaire mailing, we sent up to three reminders to hospitals that had not yet responded, and made up to three attempts to contact remaining non-respondents by telephone. A $2 bill was included with the mailed questionnaires. All responses to the open-ended prompt were included in the current analysis.
Analysis
Overview
The goal of directed qualitative content analysis is typically to extend and potentially enrich existing knowledge.15 This approach differs from conventional qualitative content analysis in that a priori codes are often developed on the basis of existing research and knowledge, with new codes added when data do not fit the existing coding scheme.15 Codes are then organized into categories and subcategories, presenting evidence of support or lack of support for theories using codes with exemplars and descriptive evidence.15 In this study, we used directed qualitative content analysis to assess respondents’ attitudes regarding quality measures and public reporting of performance on quality measures.
Preparation of Responses and Directed Qualitative Analysis
Each participant’s response was parsed into blocks of text that represented unique thoughts or concepts. We then developed an a priori provisional codebook on the basis of previous studies related to public reporting of health care quality and review of several of the responses.16 In developing a priori codes, we also considered how the first portion of the questionnaire might have prompted respondents to discuss related topics. Responses from 10 randomly selected participants were then independently coded by three investigators [K.L.H., T.A.J., P.J.T.] trained in qualitative analytic techniques by an internist-pediatrician with expertise in qualitative methods [S.L.G.]. The codes were then discussed, and discrepancies were resolved through discussion among the three coders. The codebook was amended in an iterative process to incorporate new concepts as they were identified. This process was repeated until no new codes were applied. The three primary coders then independently coded the remaining blocks of text. If new concepts were identified during this stage of coding, the three coders reconvened and discussed the potential new code, with the decision to add a code made by consensus. The first 200 blocks of text were coded by all three coders to check intercoder agreement and reproducibility of the codebook.15–17 Two authors [S.L.G., T.L.] then performed second-level coding, in which codes were organized into major pertinent themes.16 We reviewed the coded text after applying secondary codes to check for accuracy of coding and theme assignment as well as completeness of second-level coding. We used descriptive statistics to assess the frequency of select codes and themes. We also characterized each respondent’s comment as positive, negative, both positive and negative, or neutral and used descriptive statistics to determine whether certain hospital and respondent characteristics were associated with positive or negative responses.
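The intercoder agreement reported in the Results (91% on 200 jointly coded blocks) can be illustrated with a minimal sketch. The authors do not specify their agreement formula, so the all-coders-agree version below, and the example data, are assumptions for illustration only:

```python
def percent_agreement(codings):
    """Proportion of text blocks on which all coders applied the same code.

    codings: a list with one tuple per block of text, each tuple holding
    the code assigned by each coder (hypothetical data structure).
    """
    if not codings:
        return 0.0
    # A block counts as agreement only when every coder chose the same code.
    agree = sum(1 for block in codings if len(set(block)) == 1)
    return agree / len(codings)

# Hypothetical example: three coders, four blocks of text.
blocks = [
    ("validity", "validity", "validity"),
    ("fairness", "fairness", "fairness"),
    ("validity", "fairness", "validity"),   # one coder disagrees
    ("reporting", "reporting", "reporting"),
]
print(percent_agreement(blocks))  # 0.75
```

Pairwise or chance-corrected statistics (e.g., kappa) are common alternatives; this sketch shows only simple percent agreement.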
Results
Responding Hospitals and Leaders
Of the 630 hospitals surveyed, 131 (21%) provided responses to the open-ended prompt. Fifteen of the hospitals contributed responses from two leaders, for a total of 146 respondents. Of these, 27% were CEOs, 17% were Chief Quality Officers or Vice Presidents of Quality, and 19% were Chief Medical Officers. Fifty-eight percent of the respondents had either an MD or a DO degree with or without an additional master’s degree; the rest had nursing or other non-MD/DO degrees. Fifteen percent of the respondents were from hospitals rated as “better-than-expected” on mortality or readmission measures, 34% were from hospitals with “expected” performance, and 52% were from hospitals with “worse-than-expected” performance. Among the responding hospitals with AHA survey data, 42% had fewer than 200 beds, 34% were teaching institutions, and 52% were located in urban areas. Hospitals that responded to the open-ended prompt did not differ in teaching status from those that did not, but they were more likely to be larger, to be rural, and to score lower than expected on quality measures, compared with hospitals that did not return the survey or did not respond to the open-ended prompt (Table 1).
Table 1.

| | Total N (%) | No Response to Prompt* n (%) | Response to Prompt n (%) | P Value† |
|---|---|---|---|---|
| Teaching‡ | | | | .80 |
| Yes | 215 (35) | 171 (35) | 44 (34) | |
| No | 408 (65) | 321 (65) | 87 (66) | |
| No. of Beds‡ | | | | .005 |
| < 200 | 332 (53) | 277 (56) | 55 (42) | |
| 200–400 | 179 (29) | 138 (28) | 41 (32) | |
| > 400 | 110 (18) | 76 (15) | 34 (26) | |
| Population Served‡ | | | | .001 |
| Rural | 202 (32) | 142 (28) | 60 (48) | |
| Urban | 421 (68) | 355 (72) | 66 (52) | |
| Sampling Strata | | | | .001 |
| > expected | 210 (33.3) | 191 (38) | 19 (15) | |
| expected | 210 (33.3) | 166 (33) | 44 (34) | |
| < expected | 210 (33.3) | 142 (28) | 68 (52) | |
* Includes hospitals that did not return a questionnaire and those that returned a questionnaire but did not respond to the open-ended prompt.
† P value (chi-square test) compares hospitals with “No response” and “Response” to prompt.
‡ Missing data: American Hospital Association data could not be matched to some hospitals.
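The P values in Table 1 come from chi-square tests comparing responding and nonresponding hospitals. As an illustration only (not the authors’ code; `scipy.stats` is used here as an assumed tool), the sampling-strata comparison can be reproduced from the counts in the table:

```python
# Chi-square test of independence on the Table 1 sampling-strata counts.
# Rows: > expected, expected, < expected performance.
# Columns: no response to prompt, response to prompt.
from scipy.stats import chi2_contingency

table = [
    [191, 19],   # better-than-expected performance
    [166, 44],   # expected performance
    [142, 68],   # worse-than-expected performance
]
chi2, p, dof, expected = chi2_contingency(table)
# The association is highly significant, consistent with the .001
# reported in Table 1 (worse-performing hospitals responded more often).
print(round(chi2, 1), dof, p < 0.001)
```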
Eighty-five percent of respondents’ comments contained a negative component. Respondents from hospitals that scored below expected on quality measures were no more likely to have provided a negative comment than those with expected or better-than-expected scores (chi-square test; p = .31), and leaders with an MD/DO degree were no more likely to provide a negative comment than respondents with non-MD/DO degrees (p = .71).
Participants’ Responses
The open-ended responses provided by the 146 participants yielded 493 discrete blocks of coded text. Codes applied to these blocks of text were unified into three major themes during second-level coding: (1) concerns regarding quality measurement, (2) positive views of quality measurement, and (3) recommendations for improving quality measurement. Each of these major themes contained subthemes, which are described in detail below with illustrative quotes. Details of the codes that contributed to each major theme, along with additional quotes, are provided in Table 2. Agreement among coders on the 200 blocks of text coded by all three was 91%.
Table 2.

Theme: Concerns Regarding Quality Measurement
- Subtheme: Validity of the Measures
- Subtheme: Relevance of the Measures
- Subtheme: Fairness of the Measures
- Subtheme: Concerns About Public Reporting

Theme: Positive Views of Quality Measurement

Theme: Recommendations for Improving Quality Measurement
Quotes are provided verbatim.
Brainy Quote®. Accessed Feb 26, 2015. http://www.brainyquote.com/quotes/quotes/a/alberteins100201.html.
MedPAR: See Centers for Medicare & Medicaid Services. MEDPAR. (Updated: Dec 17, 2014.) Accessed Feb 26, 2015. http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/MedicareFeeforSvcPartsAB/MEDPAR.html.
Concerns Regarding Quality Measurement
The majority (n = 373; 76%) of the 493 discrete blocks of text reflected concerns about quality measures and/or public reporting of performance. These included concerns about the validity, importance, and fairness of the measures, as well as comments about public reporting.
Concerns related to the validity of the measures included comments about perceived issues with risk adjustment and the lack of evidence for an association between process measures and outcomes, such as in the following comment:
The current focus on process measures for MI [myocardial infarction], CHF [congestive heart failure], and pneumonia do not appear to have led to measurable declines in mortality and/or morbidity and should be reconsidered.
Concerns related to the importance of the measures focused largely on the perception that measures are selected based on ease of measurement rather than actual importance as an indicator of care quality:
Our first forays into measurement focused on phenomena that are easy to measure. That doesn’t always equate to importance.
Concern was expressed that the domains measured have little to do with patient care and outcomes. For example, some respondents did not feel patient experience was a useful quality measure:
Too many people—at CMS and in the community—place an undue emphasis on the “prettiness of the facility” and the willingness or ability of the facility to spend time, energy, and money on the “fluff” rather than on patient care. Good customer service does not equal quality care.
Perceived unfairness of the measures was reported by respondents and included concerns about the potential for “gaming” of the system:
I think care is marginally improved in many (though not all) areas as a result of the public reporting requirement. Clearly “gaming” is the fastest, most effective way to improve performance.
In a concern closely related to gaming, some respondents believed that performance is more related to a hospital’s available resources for documentation than actual care:
I think since there is so much variability in the robustness of hospital clinical documentation programs, that has an impact on comparative mortality data. The risk-adjusted mortality and readmission indicators may actually be reflecting the quality of documentation, comorbid condition documentation, and coding practices versus mortality.
Inability to change patient mix and factors that impact outcomes but are outside of the hospital’s control were also felt to be unfair.
The risk adjustment process does not take into account real and meaningful differences in patient populations for readmissions. There is a growing body of evidence regarding certain patient populations and their risk for readmissions that is being ignored while those hospitals caring for these minority populations are being compared to other hospitals caring for patients not facing the same challenges/having the same cultural values.
Criticisms of public reporting centered on the lag between when measurement is performed and when it is reported, lack of utility these reports have for patients, and a failure to clearly inform the public of the limitations of the measures reported:
I support public reporting, but until CMS and others are more honest with the public about the limitations of the reporting, they have lost a lot of credibility with me.
Additional concerns included the possibility that changes in care that are driven by economic factors, such as reduced length of stay, could have a negative effect on patient satisfaction measures:
Improved patient satisfaction is hard to achieve with … increased emphasis on reducing length of stay and utilization of resources.
Positive Views of Quality Measurement
Sixty-two (13%) of the 493 coded blocks of text contained positive views about quality measures or public reporting of performance. Subthemes included beliefs that measuring and reporting quality can drive change, focus change efforts, and engage patients. Transparency was felt to be one aspect of quality measurement that drives change:
Our state has required [public] reporting of hospital-acquired infections. This certainly drives improvement and awareness.
Approximately half of the positive comments were general observations about the potential utility of measuring quality of care. This subcategory of statements typically started with a statement of theoretical support for quality measurement, followed by a critique of the current measures and/or public reporting:
The transparency that we are trying to create is good—we are just not yet finding the best measurements to reflect good quality of care.
Recommendations for Improving Quality Measurement
Some participants offered recommendations for how quality measurement can be improved. For example, one respondent felt that the workload associated with reporting different measures to different agencies could be reduced through coordination among agencies:
There needs to be more coordination between agencies to eliminate so much duplication.
Another respondent felt that the measures should be updated regularly to ensure ongoing pursuit of excellence:
… care measures should evolve as compliance becomes ingrained thus continuing the improvement cycle.
Another respondent recommended increasing the potential for hospitals to learn from others’ successes:
[there should be an] … opportunity to post/link what performance improvement projects [a] hospital has implemented.
Other respondents recommended expanding responsibility for quality of care beyond the hospital; one recommended developing quality measures that assess health and health care in a community.
There is a need to develop Q [quality] measures that assess the health of a community.
Several participants commented on the need to align physician incentives with hospital performance on quality measures:
However, CMS must align incentives with physicians and not focus solely on hospital performance. We struggle to hold the independent medical staff accountable for quality without incentives.
Discussion
In this study of hospital leaders from a diverse sample of 131 hospitals in the United States, we found that although some hospital leaders endorsed the concept of measuring quality and comparing hospitals’ performance in principle, there remain substantial concerns about the current processes for measuring and reporting quality. Criticisms of quality measures included issues related to validity, importance, and fairness of the measures. Criticisms about public reporting of hospitals’ performance included a time lag between measurement and reporting, lack of transparency about limitations of the measures used by CMS, and lack of use by consumers. Recommendations on how best to address some of the issues with quality measures and public reporting were offered.
Many of the concerns about the CMS publicly reported quality measures put forth by participants in this study are similar to those previously described by other stakeholder groups, including physicians and hospital leaders. For example, Barr et al. identified physician concerns about the methodological rigor of the measures,18 while Casalino et al. reported concerns about how public reporting of the measures would cause care to be diverted from important elements that are not measured and reported.19 Hafner et al., who used focus groups to assess hospital staff (administrators, physicians, nurses) opinions, found, similar to the current study, that these stakeholders offered some positive views but also questioned the data quality and the utility of the reports for the public.20 The quantitative portion of this study14 also revealed concerns, but this study provides both a broader and more detailed description of these concerns. Other studies of hospital leaders have used only closed-ended survey questions,8 explored only one measure,7 or sought to describe QI activities rather than opinions about measures and reporting.21 Our study adds to this body of literature by being one of few recently conducted studies that directly elicited both CEOs’ and Quality Improvement Officers’ critiques of CMS quality measures via an open-ended inquiry. This inquiry yielded important insights as to why concerns about measures may persist, and the findings are aligned with current debates regarding use of outcome versus process measures, risk adjustment, and the roles of patient satisfaction in care quality.22–24
Interestingly, some of the comments in the present study demonstrated a lack of awareness of both ongoing assessments of the measures’ validity and importance and availability of resources for information sharing. Specifically, one participant commented that measures should be revisited periodically to be sure they remain pertinent. This is, in fact, something that CMS does on a regular basis.25 In a similar vein, one participant suggested there be a forum for hospitals to share QI strategies they have undertaken. The Agency for Healthcare Research and Quality’s (AHRQ) Health Care Innovations Exchange currently offers such a forum.26 Finally, one participant expressed a belief that there was more concern about appearance than substance at CMS, counter to stated goals of the Hospital Compare reporting system. These findings suggest that more effective outreach and engagement of leaders in CMS and AHRQ efforts to ensure that the measures are valid, meaningful, and fair may reduce some of the negative perceptions expressed.
Although some of the respondents in this study voiced support for the potential value of quality measurement, the concerns they expressed about current measures may represent barriers to improvement. The importance of leadership in bringing about change in an organization is well described,9,11–13,21,27–39 including engagement in quality initiatives at the level of the board and leader incentives for QI. This study suggests that hospital leaders may hold persistent doubts about the validity of current systems for measuring and reporting quality, likely limiting their enthusiasm for initiating changes within their institutions. Efforts to bolster confidence in these measures will need to address these specific concerns.
Our findings should be interpreted in light of several limitations. First, only 21% of the participants who returned questionnaires responded to the open-ended prompt; additional perspectives may have been identified with a greater number of respondents. Second, it is possible, given the overlap between topics addressed in the quantitative portion of the questionnaire and categories/themes that emerged in the qualitative analysis, that respondents may have been primed to share opinions about the topics introduced earlier in the questionnaire. Third, this analysis is qualitative and should be reinforced by quantitative measurement of the prevalence of the opinions expressed in this survey. Finally, respondents differed from nonrespondents in several of the characteristics measured, including having lower performance on quality scores. This may mean that the findings of this study are most pertinent to those who share the characteristics of respondents.40
Conclusion
Although the quality measures used to evaluate performance on the CMS Hospital Compare website appear to enjoy theoretical support among the hospital leaders who are a target audience for these data, these leaders express ongoing doubts about the validity, importance, and fairness of the current measures. The specific concerns expressed may indicate a role for broad educational campaigns about the development and ongoing evaluation of the measures. Because leaders’ attitudes and beliefs strongly influence institutional action, their endorsement of strategies to improve quality in their hospitals may remain suboptimal until they trust that the measures are fair and meaningful, limiting what can be achieved through quality measurement and reporting.
Supplementary Material
Acknowledgments
Dr. Lagu is supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under Award Number K01HL114745. Dr. Lindenauer is supported by a contract with the Centers for Medicare & Medicaid Services to develop quality measures for pneumonia and chronic obstructive pulmonary disease.
Footnotes
This work was presented as a poster at the Society for General Internal Medicine annual meeting on April 25, 2014, San Diego.
Online Only Content http://www.ingentaconnect.com/content/jcaho/jcjqs
See the online version of this article for Appendix 1. Hospital Leaders’ Views on Public Reporting of Health Care
Contributor Information
Sarah L. Goff, Assistant Professor of Medicine, Tufts University School of Medicine, Boston, and Core Faculty, Center for Quality of Care Research (CQCR), Baystate Medical Center, Springfield, Massachusetts.
Tara Lagu, Assistant Professor of Medicine, Tufts University School of Medicine, Boston, and Core Faculty, CQCR, Baystate Medical Center, Springfield, Massachusetts.
Penelope S. Pekow, Assistant Professor of Biostatistics, University of Massachusetts, Amherst, and Director of Biostatistics Core, CQCR.
Nicholas S. Hannon, Formerly Research Coordinator, CQCR, is Medical Student, University of Cork, Ireland.
Kristen L. Hinchey, Formerly Undergraduate Research Scholar, CQCR, is Student, Providence College.
Talia A. Jackowitz, Formerly Undergraduate Research Scholar, CQCR, is Hospitalist Nurse Coordinator, Baystate Medical Center, Springfield, Massachusetts.
Patrick J. Tolosky, Formerly Undergraduate Research Scholar, CQCR, is Student, Bates College, Lewiston, Maine.
Peter K. Lindenauer, Director, CQCR; Associate Professor of Medicine, Tufts University School of Medicine; and Member of the Editorial Advisory Board of The Joint Commission Journal on Quality and Patient Safety.
References
1. Centers for Medicare & Medicaid Services. Hospital Compare. (Updated: Oct 9, 2014.) Accessed Feb 26, 2015. http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/HospitalCompare.html.
2. Chassin MR. Improving the quality of health care: What’s taking so long? Health Aff (Millwood). 2013;32(10):1761–1765. doi:10.1377/hlthaff.2013.0809.
3. Fung CH, et al. Systematic review: The evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008 Jan 15;148(2):111–123. doi:10.7326/0003-4819-148-2-200801150-00006.
4. Hibbard JH. What can we say about the impact of public reporting? Inconsistent execution yields variable results. Ann Intern Med. 2008 Jan 15;148(2):160–161. doi:10.7326/0003-4819-148-2-200801150-00011.
5. Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood). 2003;22(2):84–94. doi:10.1377/hlthaff.22.2.84.
6. Ryan AM, Nallamothu BK, Dimick JB. Medicare’s public reporting initiative on hospital quality had modest or no impact on mortality from three key conditions. Health Aff (Millwood). 2012;31(3):585–592. doi:10.1377/hlthaff.2011.0719.
7. Berwick DM, Wald DL. Hospital leaders’ opinions of the HCFA mortality data. JAMA. 1990 Jan 12;263(2):247–249.
8. Laschober M, et al. Hospital response to public reporting of quality indicators. Health Care Financ Rev. 2007;28(3):61–76.
9. McCormack B, et al. A realist review of interventions and strategies to promote evidence-informed healthcare: A focus on change agency. Implement Sci. 2013 Sep 8;8:107. doi:10.1186/1748-5908-8-107.
10. Rangachari P, Rissing P, Rethemeyer K. Awareness of evidence-based practices alone does not translate to implementation: Insights from implementation research. Qual Manag Health Care. 2013;22(2):117–125. doi:10.1097/QMH.0b013e31828bc21d.
11. Brand CA, et al. A review of hospital characteristics associated with improved performance. Int J Qual Health Care. 2012;24(5):483–494. doi:10.1093/intqhc/mzs044.
12. Keroack MA, et al. Organizational factors associated with high performance in quality and safety in academic medical centers. Acad Med. 2007;82(12):1178–1186. doi:10.1097/ACM.0b013e318159e1ff.
13. Weiner BJ, Shortell SM, Alexander J. Promoting clinical involvement in hospital quality improvement efforts: The effects of top management, board, and physician leadership. Health Serv Res. 1997;32(4):491–510.
14. Lindenauer PK, et al. Attitudes of hospital leaders toward publicly reported measures of health care quality. JAMA Intern Med. 2014;174(12):1904–1911. doi:10.1001/jamainternmed.2014.5161.
15. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–1288. doi:10.1177/1049732305276687.
16. Crabtree BF, Miller WL, editors. Doing Qualitative Research. 2nd ed. Thousand Oaks, CA: Sage; 1999.
17. Sofaer S. Qualitative research methods. Int J Qual Health Care. 2002;14(4):329–336. doi:10.1093/intqhc/14.4.329.
18. Barr JK, et al. Physicians’ views on public reporting of hospital quality data. Med Care Res Rev. 2008;65(6):655–673. doi:10.1177/1077558708319734.
19. Casalino LP, et al. General internists’ views on pay-for-performance and public reporting of quality scores: A national survey. Health Aff (Millwood). 2007;26(2):492–499. doi:10.1377/hlthaff.26.2.492.
20. Hafner JM, et al. The perceived impact of public reporting hospital performance data: Interviews with hospital staff. Int J Qual Health Care. 2011;23(6):697–704. doi:10.1093/intqhc/mzr056.
21. Vaughn T, et al. Engagement of leadership in quality improvement initiatives: Executive Quality Improvement Survey results. J Patient Saf. 2006;2(1):2–9.
22. DuGoff E, Bishop S, Rawal P. Hospital readmission reduction program reignites debate over risk adjusting quality measures. Health Affairs Blog. Aug 14, 2014. Accessed Feb 26, 2015. http://healthaffairs.org/blog/2014/08/14/hospital-readmission-reduction-program-reignites-debate-over-risk-adjusting-quality-measures/.
23. National Quality Forum. Risk Adjustment for Socioeconomic Status or Other Sociodemographic Factors. Aug 2014. Accessed Feb 26, 2015. http://www.qualityforum.org/Publications/2014/08/Risk_Adjustment_for_Socioeconomic_Status_or_Other_Sociodemographic_Factors.aspx.
24. Fenton JJ, et al. The cost of satisfaction: A national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012 Mar 12;172(5):405–411. doi:10.1001/archinternmed.2011.1662.
25. Centers for Medicare & Medicaid Services. Call for Measures. (Updated: Feb 19, 2015.) Accessed Feb 26, 2015. http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/CallForMeasures.html.
26. Agency for Healthcare Research and Quality. AHRQ Health Care Innovations Exchange. Accessed Feb 26, 2015. http://www.innovations.ahrq.gov.
27. Balik B. Leaders’ role in patient experience. Hospital leadership must drive efforts to better meet patients’ needs. Healthc Exec. 2011;26(4):76–78.
28. Boerner S, Dütschke E. The impact of charismatic leadership on followers’ initiative-oriented behavior: A study in German hospitals. Health Care Manage Rev. 2008;33(4):332–340. doi:10.1097/01.HCM.0000318771.82642.8f.
29. Caldwell C, Jones MT. Transforming culture to drive cost improvement: Senior leaders’ role: Three practices universal among top performers in improving performance. Healthc Exec. 2012;27(6):76, 78–79.
30. Dückers ML, et al. Understanding organisational development, sustainability, and diffusion of innovations within hospitals participating in a multilevel quality collaborative. Implement Sci. 2011 Mar 9;6:18. doi:10.1186/1748-5908-6-18.
31. Fearing G, Barwick M, Kimber M. Clinical transformation: Manager’s perspectives on implementation of evidence-based practice. Adm Policy Ment Health. 2014;41(4):455–468. doi:10.1007/s10488-013-0481-9.
32. Jha A, Epstein A. Hospital governance and the quality of care. Health Aff (Millwood). 2010;29(1):182–187. doi:10.1377/hlthaff.2009.0297.
33. Jha AK, Epstein AM. Governance around quality of care at hospitals that disproportionately care for black patients. J Gen Intern Med. 2012;27(3):297–303. doi:10.1007/s11606-011-1880-9.
34. Jiang HJ, et al. Board engagement in quality: Findings of a survey of hospital and system leaders. J Healthc Manag. 2008;53(2):121–134; discussion 135.
35. Secanell M, et al. Deepening our understanding of quality improvement in Europe (DUQuE): Overview of a study of hospital quality management in seven countries. Int J Qual Health Care. 2014;26(Suppl 1):5–15. doi:10.1093/intqhc/mzu025.
36. Shabot MM, et al. Memorial Hermann: High reliability from board to bedside. Jt Comm J Qual Patient Saf. 2013;39(6):253–257. doi:10.1016/s1553-7250(13)39034-5.
37. Stetler CB, et al. An organizational framework and strategic implementation for system-level change to enhance research-based practice: QUERI Series. Implement Sci. 2008 May 29;3:30. doi:10.1186/1748-5908-3-30.
38. Stetler CB, et al. Institutionalizing evidence-based practice: An organizational case study using a model of strategic change. Implement Sci. 2009 Nov 30;4:78. doi:10.1186/1748-5908-4-78.
39. Weber V, Joshi MS. Effecting and leading change in health care organizations. Jt Comm J Qual Improv. 2000;26(7):388–399. doi:10.1016/s1070-3241(00)26032-x.
40. Fowler FJ Jr. Survey Research Methods. 5th ed. Thousand Oaks, CA: Sage; 2014.