Author manuscript; available in PMC 2018 Dec 1.
Published in final edited form as: J Hosp Med. 2017 Dec;12(12):963–968. doi: 10.12788/jhm.2929

Hospital Perceptions of Medicare’s Sepsis Quality Reporting Initiative

Ian J Barbash 1,2, Kimberly J Rak 2, Courtney C Kuza 2, Jeremy M Kahn 1,2,3,*
PMCID: PMC5830093  NIHMSID: NIHMS938372  PMID: 29236094

Abstract

BACKGROUND

In October 2015, the Centers for Medicare and Medicaid Services (CMS) implemented the Sepsis CMS Core Measure (SEP-1) program, requiring hospitals to report data on the quality of care for their patients with sepsis.

OBJECTIVE

We sought to understand hospital perceptions of and responses to the SEP-1 program.

DESIGN

A thematic content analysis of semistructured interviews with hospital quality officials.

SETTING

A stratified random sample of short-stay, nonfederal, general acute care hospitals in the United States.

SUBJECTS

Hospital quality officers, including nurses and physicians.

INTERVENTION

None.

MEASUREMENTS

We completed 29 interviews before reaching content saturation.

RESULTS

Hospitals reported a variety of actions in response to SEP-1, including new efforts to collect data, improve sepsis diagnosis and treatment, and manage clinicians’ attitudes toward SEP-1. These efforts frequently required dedicated resources to meet the program’s requirements for treatment and documentation, which were thought to be complex and not consistently linked to patient-centered outcomes. Most respondents felt that SEP-1 was likely to improve sepsis outcomes. At the same time, they described specific changes that could improve its effectiveness, including allowing hospitals to focus on the treatment processes most directly associated with improved patient outcomes and better aligning the measure’s sepsis definitions with current clinical definitions.

CONCLUSIONS

Hospitals are responding to the SEP-1 program across a number of domains and in ways that consistently require dedicated resources. Hospitals are interested in further revisions to the program to alleviate the burden of the reporting requirements and help them optimize the effectiveness of their investments in quality-improvement efforts.


Sepsis affects over 1 million Americans annually, resulting in significant morbidity, mortality, and costs for hospitalized patients.1–4 There is increasing interest in policy-oriented approaches to improving sepsis care at both the state and national levels.5,6 The most prominent policy is the Centers for Medicare and Medicaid Services (CMS) Sepsis CMS Core Measure (SEP-1) program, which was formally implemented in October 2015 and mandates that hospitals report their compliance with a variety of sepsis treatment processes (Table 1). Academic quality experts generally applaud the increased attention to sepsis but are concerned that the measure’s design and specifications advance beyond the existing evidence base.7,8 However, remarkably little is known about how front-line hospital quality officials perceive the program and how they are responding, or not responding, to the new requirements. This knowledge gap is a critical barrier to evaluating the program’s practical impact on sepsis treatment and outcomes.

TABLE 1.

Summary of Components of SEP-1 Bundle

Patients: Severe sepsis
  Within 3 hours:
    • Measure lactate
    • Obtain blood cultures prior to antibiotics
    • Administer antibiotics
  Within 6 hours:
    • Remeasure lactate if initial value is elevated

Patients: Septic shock
  Within 3 hours:
    • All elements of the severe sepsis bundle, plus administer 30 cc/kg of crystalloid
  Within 6 hours:
    • Administer vasopressors for fluid-refractory hypotension
    • Document responsiveness to resuscitation via either a 5-component physical exam OR 2 out of 4 elements from a quantitative physiological assessment: CVP, ScvO2, bedside cardiac echocardiogram, straight leg raise/fluid challenge

NOTE: Adapted from Barbash IJ, Kahn JM, Thompson BT. Medicare’s Sepsis Reporting Program: Two Steps Forward, One Step Back. Am J Respir Crit Care Med. 2016;194(2):139–141.7 Abbreviations: CVP, central venous pressure; ScvO2, central venous oxygen saturation; SEP-1, Sepsis Centers for Medicare and Medicaid Services Core Measure program.
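To make the bundle’s structure and the measure’s all-or-none scoring (discussed in the Results) easier to follow, here is a minimal illustrative sketch. The element names and the case dictionary are hypothetical placeholders, and the sketch deliberately omits the measure’s conditional logic (for example, vasopressors apply only to fluid-refractory hypotension); it is not the actual SEP-1 abstraction specification.

```python
# Illustrative only: the Table 1 bundle encoded as a checklist, scored with the
# measure's all-or-none logic. Element names are hypothetical placeholders and
# the conditional details of the real specification are deliberately omitted.

SEVERE_SEPSIS_3H = [
    "lactate_measured",
    "blood_cultures_before_antibiotics",
    "antibiotics_given",
]
SEVERE_SEPSIS_6H = ["lactate_remeasured_if_elevated"]
SEPTIC_SHOCK_3H = SEVERE_SEPSIS_3H + ["crystalloid_30cc_per_kg_given"]
SEPTIC_SHOCK_6H = [
    "vasopressors_for_refractory_hypotension",
    "volume_reassessment_documented",
]


def sep1_compliant(case: dict) -> bool:
    """All-or-none: the case passes only if every applicable element was completed."""
    if case.get("septic_shock", False):
        required = SEPTIC_SHOCK_3H + SEVERE_SEPSIS_6H + SEPTIC_SHOCK_6H
    else:
        required = SEVERE_SEPSIS_3H + SEVERE_SEPSIS_6H
    return all(case.get(element, False) for element in required)


# A single missed element (here, documentation of the volume reassessment)
# makes the entire case non-compliant.
example_case = {
    "septic_shock": True,
    "lactate_measured": True,
    "blood_cultures_before_antibiotics": True,
    "antibiotics_given": True,
    "lactate_remeasured_if_elevated": True,
    "crystalloid_30cc_per_kg_given": True,
    "vasopressors_for_refractory_hypotension": True,
    "volume_reassessment_documented": False,
}
print(sep1_compliant(example_case))  # False
```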

We therefore sought to understand hospital stakeholders’ perceptions of the SEP-1 program in general as well as their characterization of their local hospitals’ responses to the program. We were specifically interested in obtaining a focused perspective on the policy and hospitals’ responses to the policy rather than individual physicians’ attitudes regarding sepsis care protocols, which are complex and may be independent from the policy itself.9 We used a qualitative research approach designed to generate both a deep and broad understanding of how hospitals are responding to SEP-1 requirements, including the resources required to implement their responses.

METHODS

Study Design, Setting, and Subjects

We conducted a qualitative study by using semistructured telephone interviews with hospital quality officers in the United States. We targeted hospital quality officers because they are in a position to provide overarching insights into hospitals’ perceptions of and responses to the SEP-1 program. We enrolled quality officers at general, short-stay, nonfederal acute care hospitals because those are the hospitals to which the SEP-1 program applies. We generated a stratified random sample of hospitals by using 2013 data from Medicare’s Healthcare Cost Report Information System (HCRIS) database.10 We stratified by size (greater than or less than 200 total beds), teaching status (presence or absence of any resident physician trainees), and ownership (for-profit vs nonprofit), creating 8 mutually exclusive strata. This sampling frame was designed to ensure representation of a broad range of hospital types, not to enable comparisons across hospital types, which is outside the scope of qualitative research.
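As a rough sketch of how such a sampling frame can be constructed, the following Python/pandas snippet assigns a hypothetical HCRIS-derived table to the 8 strata and shuffles hospitals within each stratum to produce a random contact order. The column names and example values are assumptions for illustration, not the authors’ actual dataset or code.

```python
import pandas as pd

# Minimal sketch of the stratified sampling frame described above, assuming a
# hypothetical HCRIS-derived table; column names and values are illustrative only.
hospitals = pd.DataFrame({
    "hospital_id": range(1, 9),
    "beds": [120, 350, 90, 410, 220, 75, 500, 180],
    "teaching": [False, True, False, True, True, False, True, False],
    "for_profit": [True, False, False, True, False, True, False, True],
})

# Three binary factors -> 8 mutually exclusive strata.
hospitals["stratum"] = (
    hospitals["beds"].ge(200).map({True: "large", False: "small"})
    + "_" + hospitals["teaching"].map({True: "teaching", False: "nonteaching"})
    + "_" + hospitals["for_profit"].map({True: "forprofit", False: "nonprofit"})
)

# Shuffle once, then stable-sort by stratum: within each stratum hospitals remain
# in random order, mimicking the random contact order within strata. No fixed
# sample size is drawn up front, since interviewing continues to saturation.
contact_order = (
    hospitals.sample(frac=1, random_state=42)
    .sort_values("stratum", kind="stable")
    .reset_index(drop=True)
)
print(contact_order[["hospital_id", "stratum"]])
```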

Within strata, we contacted hospitals in a random order by phone using the primary number listed in the HCRIS database. We asked the hospital operator to connect us to the chief quality officer or an appropriate alternative hospital administrator with knowledge of hospital quality-improvement activities. We limited participation to 1 respondent per hospital. We did not offer any specific incentives for participation.

The study was approved by the University of Pittsburgh Institutional Review Board with a waiver of signed informed consent.

Data Collection

Interviews were conducted by a trained research coordinator between February 2016 and October 2016. Interviews were conducted concurrently with data analysis by using a constant comparison approach.11 The constant comparison approach involves the iterative refinement of themes by comparing the existing themes to new data as they emerge during successive interviews. We chose a constant comparison approach because we wanted to systematically describe hospital responses to SEP-1 rather than specifically test individual hypotheses.11 As is typical in qualitative research, we did not set the sample size a priori but instead continued the interviews until we achieved thematic saturation.12,13

The interview script included a mix of directed and open-ended questions about respondents’ perspectives of and hospital responses to the SEP-1 program. The questions covered the following 4 domains: hospitals’ sepsis quality-improvement initiatives before and after the Medicare reporting program, reception of the hospital responses, the approach to data abstraction and reporting, and the overall impressions of the program and its impact.6–8,14 We allowed for updates and revisions of the interview guide as necessary to explore any new content and emergent themes. We piloted the interview guide on 2 hospital quality officers at our institution and then revised its structure again after interviews with the initial 6 hospitals. The complete final interview guide is available in the supplemental digital content.

Analysis

Interviews were audio recorded, transcribed, and loaded onto a secure server. We used NVivo 11 (QSR International, Cambridge, Massachusetts) for coding and analysis. We iteratively reviewed and thematically analyzed the transcripts for structural content and emergent themes, consistent with established qualitative methods.15 Three investigators reviewed the initial 20 transcripts and developed the codebook through iterative discussion and consensus. The codes were then organized into themes and subthemes. Subsequently, 1 investigator coded the remaining transcripts. The results are presented as a series of key themes supported by direct quotes from the interviews.

RESULTS

Sample Description

We performed 29 interviews prior to achieving thematic saturation. Each of the 8 strata from the sampling frame was represented by at least 3 hospitals. Hospitals in the final sample were diverse in total bed size, intensive care unit bed capacity, teaching status, and ownership (Table 2). The median interview length was 25 minutes (interquartile range, 20–32 minutes). Respondents included 6 quality coordinators, 6 quality managers, and 11 quality directors, with the remainder holding a variety of other quality-related titles. Most respondents worked in hospital quality departments, although 4 were affiliated with individual clinical departments (eg, emergency medicine and/or critical care services). Of the 9 respondents who reported their professional training, 8 were registered nurses. Eleven respondents reported participating in measure abstraction.

TABLE 2.

Hospital Characteristics

Characteristic N = 29 Hospitals
Total beds, median (IQR) 210 (111–301)
ICU beds, median (IQR) 19 (10–32)
Teaching hospital, N (%) 14 (48%)
Nonprofit, N (%) 14 (48%)
Interview length, median (IQR) 25 minutes (20–32)

NOTE: Abbreviations: ICU, intensive care unit; IQR, interquartile range.

Perspectives on SEP-1

Respondents’ general perspectives on the SEP-1 program are outlined in Table 3, with several key themes emerging. Foremost was the sheer complexity of the measure, compounded by its reliance on time-stamped clinical documentation, in particular the documentation of physical reassessment in individual medical notes. Respondents expressed frustration with the “all-or-none” approach to declaring sepsis treatment a “success,” which they noted was unfair and difficult to justify to their local clinicians. In part because of the time and effort required to comply with the measure and report results to CMS, several respondents noted that the measure is a uniquely burdensome addition to an already-crowded landscape of hospital quality programs. Despite the resources required to adhere to the measure’s standards and report results to CMS, respondents expressed a belief that the increased attention to sepsis is driving positive changes in hospital care and leading to improved patient outcomes.

TABLE 3.

Respondents’ Perspectives on SEP-1

Domain Representative Quotations
The measure is complex “There is absolutely no reason for them to have made it so confusing. If you have to read the darn thing 10 times just to start to understand…”
Heavy reliance on clinical documentation “And for them to miss it because they didn’t document the capillary refill time or something is kind of hard to justify with the physicians. ‘So yea, this falls out because you didn’t chart this.’ You know? …Did that make a difference to the patient?”
All-or-none approach is frustrating “If one person doesn’t do what’s supposed to be done, then the core measure fails.”
Not the only quality program but requires significant resources “I just think there are so many quality initiatives and not enough people to go around.”
It’s driving increased attention to sepsis “As complicated and flawed as the measure is, I think it’s drawing so much more attention to sepsis.”

NOTE: Abbreviation: SEP-1, Centers for Medicare and Medicaid Services Sepsis Core Measure program.

Responses to SEP-1

Respondents identified several specific ways in which their hospitals responded to the SEP-1 mandate (Table 4), including investments in measurement, planning and coordinating sepsis-specific quality-improvement activities, improving the early identification of patients with sepsis, improving sepsis treatment and measure compliance, and addressing negative attitudes towards the implementation of the SEP-1 program.

TABLE 4.

Hospital Responses to SEP-1

Efforts to collect data
  Range of responses:
    • Use of third-party vendors
    • Employing in-house abstractors
  Barriers and challenges:
    • Time and money
    • Coding variation
    • Heavy reliance on clinical documentation
  Representative quotation: “It’s such a horrendous and time-consuming abstraction process.”

Efforts to coordinate hospital responses
  Range of responses:
    • Development of multistakeholder committees
    • Employing dedicated staff and sepsis coordinators
  Barriers and challenges:
    • Requires multiple moving parts
    • Human resources
    • Iterative revision/refinement
  Representative quotation: “We had a little bit of stumbling issues when we first started that group, as far as assuring that we had the right people at the table. And we have representatives now from critical care, emergency room, administrative support, and our quality folks as well as bedside nurses.”

Efforts to improve sepsis diagnosis
  Range of responses:
    • Electronic sepsis alerts
    • Manual screening for sepsis
  Barriers and challenges:
    • Resource requirements
    • Alert fatigue
  Representative quotation: “We’re building [an alert] into the electronic system that we’ve had for some time (and we’re continuing this), is certain vital sign changes go directly to our MET teams that will come and look at people that may have those issues: sepsis or something similar.”

Efforts to improve sepsis treatment
  Range of responses:
    • Sepsis treatment protocols
    • Structured order sets
  Barriers and challenges:
    • Resistance to protocolized care: “cookbook medicine”
    • Different needs in different places
  Representative quotation: “Well some of them said it was ‘cookbook medicine.’ That they’re trying to tell us how to practice when they don’t know the patient.”

Efforts to manage clinicians’ attitudes
  Range of responses:
    • Local clinician champions
    • Show clinicians the data
    • Infusion of new individuals/culture
    • Top-down support from administration
  Barriers and challenges:
    • Lack of buy-in, particularly around documentation
    • Hierarchy (within clinical medicine and QI infrastructure)
  Representative quotations: “We’re quality nurses. We don’t have any authority or say over the nurses on the floor or in the ER, or the physicians as far as educating them and holding them accountable…and so it’s been real frustrating.” “I’m very fortunate in the physician champion in the emergency department is very engaged. And then has engaged some of the nursing leadership there.”

NOTE: Abbreviations: ER, emergency room; MET, medical emergency team; QI, quality improvement; SEP-1, Centers for Medicare and Medicaid Services Sepsis Core Measure program.

Efforts to Collect Data for SEP-1 Reporting

Respondents reported challenges in reliably and validly measuring and reporting data for the SEP-1 program. First, patient identification and the measurement of treatment processes depend largely on manual medical record review, which is subject to variation across coders. This presents a particular challenge because the clinical definition of sepsis itself is in evolution,1 creating the possibility that treating physicians could identify a given patient as having sepsis or septic shock based on the most up-to-date definitions but not based on the measure’s specifications, or vice versa. Second, each case requires up to an hour of manual medical record review, and patients who develop sepsis during prolonged hospitalizations can require several hours or more, an unprecedented length of time to spend abstracting data for a single measure.

In addressing these measurement challenges, investment in human resources is the rule. No respondent reported automating abstraction of all the SEP-1 data elements, underscoring concerns regarding the measurement burden of the SEP-1 program.7,8,14 Rather, hospitals with sufficient financial resources frequently employ full-time data abstractors and individuals responsible for ongoing performance feedback, which facilitates the iterative revision of sepsis quality-improvement initiatives. In contrast, hospitals with fewer resources often rely on contracts with third-party vendors, which delays reporting and complicates efforts to use the data for individualized performance improvement.

Efforts to Coordinate Hospital Responses Across Care Teams

Complying with the measure involves the longitudinal coordination of multiple care teams across different units, so planning and executing local hospital responses required interdepartmental and multidisciplinary stakeholder involvement. Respondents were uncertain about the ideal strategy to coordinate these quality-improvement efforts, yielding iterative changes to electronic health records (EHRs), education programs, and data collection methods. This “learning by doing” is necessary because no prior CMS quality measure is as complex as SEP-1 or as varied in the sources of data required to measure and report the results. By requiring hospitals to improve coordination of care throughout the hospital, SEP-1 presents a quality-improvement and measurement challenge that may ultimately drive innovation and better patient care.

Efforts to Improve Sepsis Diagnosis

Several hospitals are implementing sepsis screening and alerts to speed sepsis recognition and meet the measure’s time-sensitive treatment requirements. An example of a less-intensive alert is one hospital’s lowering of the threshold for lactate values that are viewed as “critical” (and thus requiring notification of the bedside clinician). Examples of more resource-intensive alerts included electronic screening for vital sign abnormalities that trigger bedside assessment for infection as well as nurse-driven manual sepsis screening tools.

Frequently, these more intensive efforts faced barriers to successful implementation related to the broader issues of performance measurement rather than the specifics of SEP-1. EHRs generally lacked built-in electronic screening capacity, and few hospitals had the resources required for customized EHR modification. Manual screening required nurses to spend time away from direct patient care. For both electronic and manual screening, respondents expressed concern about how these new alerts would fit into a care landscape already inundated with alerts, alarms, and care notifications.16,17

Efforts to Improve Sepsis Treatment

Many hospitals are implementing sepsis-specific treatment protocols and order sets designed to help meet SEP-1 treatment specifications. In hospitals and health systems with preexisting sepsis quality-improvement efforts, SEP-1 stimulated adaptation and acceleration of those efforts; in hospitals without preexisting sepsis-specific quality improvement, SEP-1 inspired de novo program development and implementation. These programs were wide ranging. Several hospitals implemented a process by which an initially elevated lactate value automatically generates an order for a repeat lactate level, facilitating assessment of the clinical response to treatment. Other examples include triggers for sepsis-specific treatment protocols and checklists that bedside nurses can begin without initial physician oversight. In 1 hospital, sepsis alerts triggered by emergency medical first responders initiate responses prior to hospital arrival, in a manner analogous to prehospital alerts for myocardial infarction and stroke.18,19

Efforts to implement these protocols encountered several common challenges. Physicians were often resistant to adopting inflexible treatment rules that did not allow them to tailor therapies to individual patients. Furthermore, even protocols and order sets that worked in 1 setting did not necessarily generalize throughout the hospital or health system, reflecting the difficulty in implementing a highly specified measure across diverse treatment environments.

Efforts to Manage Clinician Attitudes Toward SEP-1 Implementation

In addition to addressing clinicians’ behaviors, hospitals sought to address stakeholders’ attitudes when those attitudes created barriers to SEP-1 implementation. First, hospitals frequently faced a lack of buy-in from clinicians who were resistant to the idea of protocolized care in general and who were specifically skeptical that initiatives designed to increase clinical documentation would drive improvements in patient-centered outcomes. Second, respondents had to confront a hierarchical hospital culture, which manifests not only in clinical care, but also in the quality-improvement infrastructure. Many respondents reported that physicians were more receptive to performance feedback from fellow physicians rather than nonphysician quality administrators.

Respondents described a range of approaches to counteract these attitudes. First, hospitals deployed department- and profession-specific “champions” to provide peer-to-peer performance feedback supported by data demonstrating a link between process improvements and patient outcomes. Second, many respondents noted that the addition of new clinical staff, who were often younger and more receptive to new initiatives, could alter a hospital’s quality culture; in smaller hospitals, just a few individuals could significantly alter the dynamic. Finally, when other efforts failed, some respondents indicated that top-down administrative support could persuade resistant individuals to change their approach. However, this solution worked best with employed physicians and was less effective with independent physician groups without direct financial ties to hospital performance. These efforts to overcome negative attitudes toward SEP-1 implementation required individuals’ time and energy, leading to frustration at times and adding to the resources required to comply with the program.

Planning for the Future of SEP-1

Respondents anticipate that hospitals’ performance on the SEP-1 measure will eventually be publicly reported and incorporated into value-based purchasing calculations. Hospitals are therefore seeking greater interaction with CMS as it makes iterative revisions to the measure, because respondents expect that their hospitals’ level of performance, rather than just the act of participating, will affect hospital finances. Respondents expressed a desire for more live, interactive educational sessions with CMS moving forward, rather than limiting the opportunities for clarification to online comment forums or statements elsewhere in the public record. In addition, respondents hope that public reporting and pay-for-performance will be delayed to allow more time to work out the “kinks” in measurement and reporting.

DISCUSSION

We conducted semistructured telephone interviews with quality officers in U.S. hospitals in order to understand hospitals’ perceptions of and responses to Medicare’s SEP-1 sepsis quality-reporting program. Hospitals are struggling with the program’s complexity and investing considerable resources in order to iteratively revise their responses to the program. However, they generally believe that the program is bringing much-needed attention to sepsis diagnosis and treatment. These findings have several implications for the SEP-1 measure in particular and for hospital-based quality measurement and pay-for-performance policies in general.

First, we demonstrate that SEP-1 consistently requires a substantial investment of resources from hospitals already struggling under the weight of numerous local, state, and national quality-reporting and improvement programs.14,20,21 In aggregate, these programs can stretch hospitals’ resources to their limit. Respondents universally reported that the SEP-1 program is requiring dedicated staff to meet the data abstraction and reporting requirements as well as multicomponent quality-improvement initiatives. In the absence of well-established roadmaps for improving sepsis care, these sepsis quality-improvement efforts require experimentation and iterative revision, which can contribute to fatigue and frustration among quality officers and clinical staff. This process of innovation inherently involves successes, failures, and the risk of harm and opportunity costs that strain hospital resources.

Second, our study indicates how SEP-1 could exacerbate existing inequalities in our health system. Sepsis incidence and mortality are already higher in medically underserved regions.22 Given the resources required to respond to the SEP-1 program, optimal performance may be beyond the reach of smaller hospitals, or even larger hospitals, whose resources are already stretched to their limits. Public reporting and pay-for-performance can disadvantage hospitals caring for underserved populations.23,24 To the extent that responding to sepsis-oriented public policy requires resources that certain hospitals cannot access, these policies could exacerbate existing health disparities.

Third, our findings highlight some specific ways that CMS could revise the SEP-1 program to better meet the needs of hospitals and improve outcomes for patients with sepsis. First, although the program’s current specifications take an “all-or-none” approach to treatment success, a more flexible approach, such as a weighted score or composite measure that combines processes and outcomes,25,26 could allow hospitals to focus their efforts on those components of the bundle with the strongest evidence for improved patient outcomes.27 Second, policy makers need to reconcile the 2 existing clinical definitions for sepsis.1,28 CMS has already stated its plans to retain the preexisting sepsis definition,29 but this does not change the reality that frontline providers and quality officials face different, and at times conflicting, clinical definitions while caring for patients. Finally, current implementation challenges may support a delay in moving the measure toward public reporting and pay-for-performance. Hospitals are already responding to the measure in a substantial way, providing an opportunity for early quantitative evaluations of the program’s impact that could inform evidence-based revisions to the measure.
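As a purely illustrative contrast between the current all-or-none scoring and the kind of weighted composite suggested here, consider the sketch below. The element names and weights are hypothetical placeholders chosen for illustration; they are not values proposed by CMS, the cited literature, or the authors.

```python
# Hypothetical contrast between all-or-none scoring and a weighted composite.
# Weights and element names are illustrative placeholders only.
EVIDENCE_WEIGHTS = {
    "antibiotics_given": 3.0,                  # assumed to carry the strongest evidence
    "blood_cultures_before_antibiotics": 2.0,
    "lactate_measured": 1.0,
    "volume_reassessment_documented": 0.5,     # documentation-heavy element, assumed low weight
}


def all_or_none_score(case: dict) -> float:
    """Current SEP-1 style: 1.0 only if every element is met, else 0.0."""
    return 1.0 if all(case.get(e, False) for e in EVIDENCE_WEIGHTS) else 0.0


def weighted_composite_score(case: dict) -> float:
    """Alternative: credit proportional to the evidence weight of completed elements."""
    total = sum(EVIDENCE_WEIGHTS.values())
    earned = sum(w for e, w in EVIDENCE_WEIGHTS.items() if case.get(e, False))
    return earned / total


case = {
    "antibiotics_given": True,
    "blood_cultures_before_antibiotics": True,
    "lactate_measured": True,
    "volume_reassessment_documented": False,   # only the documentation element is missed
}
print(all_or_none_score(case))         # 0.0 -- the entire case "fails"
print(weighted_composite_score(case))  # ~0.92 -- most clinically weighted credit retained
```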

Our study has several limitations. First, by interviewing only individual quality officers within each hospital, it is possible that our findings are not representative of the perspectives of other individuals within their hospitals or of the hospital as a whole; indeed, to the extent that quality officers “buy in” to quality measurement and reporting, their perspectives on SEP-1 may skew more positive than those of other hospital staff. Our respondents represented individuals from a range of positions within the quality infrastructure, whereas “hospital quality leaders” are often chief executive officers, chief medical officers, or vice presidents for quality.30 However, by virtue of our purposive sampling approach, we included respondents from a broad range of hospitals and found similar themes across these respondents, supporting the internal validity of our findings. Second, as is inherent in interview-based research, we cannot verify that respondents’ reports of hospital responses to SEP-1 match the actual changes implemented “on the ground.” We are reassured, however, by the fact that many of the perspectives and quality-improvement changes that respondents described align with the opinions and suggestions of academic quality experts, which are informed by clinical experience.6–8 Third, while respondents believe that hospital responses to SEP-1 are contributing to improvements in treatment and outcomes, we do not yet have robust objective data to support this opinion or to evaluate the association between quality officers’ perspectives and hospital performance. A quantitative evaluation of the clinical impact of SEP-1, and of the relationship between hospital performance and quality officers’ perspectives on the measure, are important areas for future research.

CONCLUSIONS

In a qualitative study of hospital responses to Medicare’s SEP-1 program, we found that hospitals are implementing changes across a variety of domains and in ways that consistently require dedicated resources. Giving hospitals the flexibility to focus on treatment processes with the most direct impact on patient-centered outcomes might enhance the program’s effectiveness. Future work should quantify the program’s impact and develop novel approaches to data abstraction and quality improvement.

Supplementary Material

Supplemental Content

Acknowledgments

The authors received funding from the National Institutes of Health (IJB: F32HL132461; JMK: K24HL133444). This work was submitted as an abstract to the American Thoracic Society International Conference, May 2017.

Footnotes

Additional Supporting Information may be found in the online version of this article.

Disclosure: Aside from federal funding, the authors have no conflicts of interest to disclose.

References

1. Singer M, Deutschman CS, Seymour CW, et al. The Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3). JAMA. 2016;315(8):801–810. doi: 10.1001/jama.2016.0287.
2. Angus DC, Linde-Zwirble WT, Lidicker J, Clermont G, Carcillo J, Pinsky MR. Epidemiology of severe sepsis in the United States: analysis of incidence, outcome, and associated costs of care. Crit Care Med. 2001;29(7):1303–1310. doi: 10.1097/00003246-200107000-00002.
3. Gaieski DF, Edwards JM, Kallan MJ, Carr BG. Benchmarking the incidence and mortality of severe sepsis in the United States. Crit Care Med. 2013;41(5):1167–1174. doi: 10.1097/CCM.0b013e31827c09f8.
4. Liu V, Escobar GJ, Greene JD, et al. Hospital deaths in patients with sepsis from 2 independent cohorts. JAMA. 2014;312(1):90–92. doi: 10.1001/jama.2014.5804.
5. Rhee C, Gohil S, Klompas M. Regulatory Mandates for Sepsis Care—Reasons for Caution. N Engl J Med. 2014;370(18):1673–1676. doi: 10.1056/NEJMp1400276.
6. Cooke CR, Iwashyna TJ. Sepsis mandates: Improving inpatient care while advancing quality improvement. JAMA. 2014;312(14):1397–1398. doi: 10.1001/jama.2014.11350.
7. Barbash IJ, Kahn JM, Thompson BT. Medicare’s Sepsis Reporting Program: Two Steps Forward, One Step Back. Am J Respir Crit Care Med. 2016;194(2):139–141. doi: 10.1164/rccm.201604-0723ED.
8. Klompas M, Rhee C. The CMS Sepsis Mandate: Right Disease, Wrong Measure. Ann Intern Med. 2016;165(7):517–518. doi: 10.7326/M16-0588.
9. Reade MC, Huang DT, Bell D, et al. Variability in management of early severe sepsis. Emerg Med J. 2010;27(2):110–115. doi: 10.1136/emj.2008.070912.
10. Centers for Medicare & Medicaid Services. CMS Cost Reports. https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/Cost-Reports/. Published 2017. Accessed January 30, 2017.
11. Glaser BG. The Constant Comparative Method of Qualitative Analysis. Soc Probl. 1965;12(4):436–445. doi: 10.2307/798843.
12. Morse JM. Data Were Saturated…. Qual Health Res. 2015;25(5):587–588. doi: 10.1177/1049732315576699.
13. Hennink MM, Kaiser BN, Marconi VC. Code Saturation Versus Meaning Saturation: How Many Interviews Are Enough? Qual Health Res. 2017;27(4):591–608. doi: 10.1177/1049732316665344.
14. Wall MJ, Howell MD. Variation and Cost-effectiveness of Quality Measurement Programs. The Case of Sepsis Bundles. Ann Am Thorac Soc. 2015;12(11):1597–1599. doi: 10.1513/AnnalsATS.201509-625ED.
15. Guest G, MacQueen KM. Handbook for Team-Based Qualitative Research. Plymouth: Altamira Press; 2008.
16. Kesselheim AS, Cresswell K, Phansalkar S, Bates DW, Sheikh A. Clinical decision support systems could be modified to reduce “alert fatigue” while still minimizing the risk of litigation. Health Aff (Millwood). 2011;30(12):2310–2317. doi: 10.1377/hlthaff.2010.1111.
17. Sittig DF, Singh H. Electronic Health Records and National Patient-Safety Goals. N Engl J Med. 2012;367(19):1854–1860. doi: 10.1056/NEJMsb1205420.
18. Kobayashi A, Misumida N, Aoi S, et al. STEMI notification by EMS predicts shorter door-to-balloon time and smaller infarct size. Am J Emerg Med. 2016;34(8):1610–1613. doi: 10.1016/j.ajem.2016.06.022.
19. Lin CB, Peterson ED, Smith EE, et al. Emergency Medical Service Hospital Prenotification Is Associated With Improved Evaluation and Treatment of Acute Ischemic Stroke. Circ Cardiovasc Qual Outcomes. 2012;5(4):514–522. doi: 10.1161/CIRCOUTCOMES.112.965210.
20. Meyer GS, Nelson EC, Pryor DB, et al. More quality measures versus measuring what matters: a call for balance and parsimony. BMJ Qual Saf. 2012;21(11):964–968. doi: 10.1136/bmjqs-2012-001081.
21. Cassel CK, Conway PH, Delbanco SF, Jha AK, Saunders RS, Lee TH. Getting More Performance from Performance Measurement. N Engl J Med. 2014;371(23):2145–2147. doi: 10.1056/NEJMp1408345.
22. Goodwin AJ, Nadig NR, McElligott JT, Simpson KN, Ford DW. Where You Live Matters: The Impact of Place of Residence on Severe Sepsis Incidence and Mortality. Chest. 2016;150(4):829–836. doi: 10.1016/j.chest.2016.07.004.
23. Sjoding MW, Cooke CR. Readmission Penalties for Chronic Obstructive Pulmonary Disease Will Further Stress Hospitals Caring for Vulnerable Patient Populations. Am J Respir Crit Care Med. 2014;190(9):1072–1074. doi: 10.1164/rccm.201407-1345LE.
24. Joynt KE, Jha AK. Characteristics of Hospitals Receiving Penalties Under the Hospital Readmissions Reduction Program. JAMA. 2013;309(4):342. doi: 10.1001/jama.2012.94856.
25. Nolan T, Berwick DM. All-or-None Measurement Raises the Bar on Performance. JAMA. 2006;295(10):1168–1170. doi: 10.1001/jama.295.10.1168.
26. Chen LM, Staiger DO, Birkmeyer JD, Ryan AM, Zhang W, Dimick JB. Composite quality measures for common inpatient medical conditions. Med Care. 2013;51(9):832–837. doi: 10.1097/MLR.0b013e31829fa92a.
27. Rhodes A, Evans LE, Alhazzani W, et al. Surviving Sepsis Campaign: International Guidelines for Management of Sepsis and Septic Shock: 2016. Crit Care Med. 2017;45(3):486–552. doi: 10.1097/CCM.0000000000002255.
28. Levy MM, Fink MP, Marshall JC, et al. 2001 SCCM/ESICM/ACCP/ATS/SIS International Sepsis Definitions Conference. Intensive Care Med. 2003;29(4):530–538. doi: 10.1007/s00134-003-1662-x.
29. Townsend SR, Rivers E, Tefera L. Definitions for Sepsis and Septic Shock. JAMA. 2016;316(4):457–458. doi: 10.1001/jama.2016.6374.
30. Lindenauer PK, Lagu T, Ross JS, et al. Attitudes of hospital leaders toward publicly reported measures of health care quality. JAMA Intern Med. 2014;174(12):1904–1911. doi: 10.1001/jamainternmed.2014.5161.
