Abstract
Substantial attention over the last decade has focused on ensuring improvement in the increasingly complex world of health care, resulting in significant efforts in the arena of patient safety. How best to evaluate the validity of patient safety initiatives remains a matter of significant discussion. Many quality improvement initiatives in health care are poorly developed, with few patient safety interventions sharing the characteristics of evidence-based medicine. We discuss the key elements of a framework as an example to help structure our thinking and approach to answering these questions. These elements include an explanation of the theoretical basis, the use of appropriate measures, detailing of the involved processes, and assessment of both the initiative itself and the contextual factors surrounding it.
Keywords: Quality improvement, patient safety, improvement science, best practices, evaluation, monitoring, framework
Introduction
The need to constantly improve in the complex world of health care has become increasingly important over the last decade. The 1999 Institute of Medicine report, To Err is Human, focused our attention on unintentional harms in modern health care (1). Since the publication of this eye-opening report, patient safety has assumed a central role in health care delivery, not only in the United States (US), but around the world. Substantial effort over the years has resulted in significant progress in the arena of patient safety (2). Mandatory reporting of specific adverse event types is now required in most states across the US. Hospital accreditation standards are stricter, with measures like the National Patient Safety Goals becoming a critical standard for The Joint Commission (3). Payment policies also reflect this movement towards patient safety, with the Centers for Medicare & Medicaid Services withholding compensation for certain complications and outcomes of hospital-based care (4). While patient safety has been a high priority, measurement and evaluation have been low on the agenda, making it difficult to definitively say that patients are safer today, or that patient outcomes have improved (5–7). Our approach to improving patient safety needs to mature.
Many quality improvement initiatives in health care remain poorly developed, with few patient safety interventions sharing the characteristics of evidence-based medicine. Current quality improvement research predominantly uses non-standardized and non-verified data from single-timeframe pre/post designs without concurrent controls (8). The resultant estimates of efficacy are prone to bias and exaggerated effect sizes, which make them difficult to reproduce with more objective evaluations. Even in more controlled study designs with independent data collection, attributing outcome improvement directly to the study intervention can be a challenge (9,10). Most published quality improvement initiatives also lack sufficient detail, making them difficult to reproduce in other environments and thus hurting the field (11).
There are multiple reasons for the current state of improvement science research (12). The goal of safety and quality improvement research is to partner with patients, their loved ones, and others to a) eliminate harm, b) continuously optimize processes to improve patient outcomes and experience, and c) eliminate waste in health care. This is usually done by deploying multifaceted interventions rather than validating the impact of individual interventions, as is typical of clinical research. The choice of methods and study design is usually driven by pragmatic realities, such as intervening and collecting data in a diverse, real-life clinical environment rather than the controlled setting of a laboratory or randomized trial. Data from improvement studies are usually collected voluntarily by clinical providers with minimal to no training in data quality methodologies, unlike the dedicated, trained data collectors used in clinical trials. For example, the Keystone ICU project in Michigan used a cohort design rather than a more rigorous cluster randomized trial design (13), in recognition of the reality that participating clinical providers considered it a quality improvement initiative rather than a research project, and regarded being randomized to forgo the best-practice bundle of care, even for a limited time, as unacceptable. Determining the uptake and impact of the individual components of the multifaceted intervention might also have been useful, but was not performed by the principal investigators because the increased burden of data collection could have affected site participation and retention.
How best to evaluate the validity of patient safety interventions remains a matter of much discussion (14–16). The Agency for Healthcare Research and Quality (AHRQ) recently convened a panel of international experts to review the literature and explore how to improve the conduct and reporting of patient safety interventions (17). The panel noted that the bulk of the current patient safety literature is based on initiatives that used causal inferences to establish their effectiveness. The range of approaches and models commonly utilized in health care is limited compared with the safety sciences in other industries (18). However, health care differs significantly from other industries; it is unpredictable and dynamic, with patient and provider behaviors that can be difficult to control. We do not start from virtually fault-free operation, which precludes directly importing safety models from industries such as aviation and automotive manufacturing that can easily pinpoint problems and have safe baselines. For example, if an automobile model has a defect, it can be recalled and fixed, and car owners will gladly comply because both the harm and the benefit are clear. Health care's perspective also remains reactive, with an over-reliance on voluntary incident reporting and root cause analysis rather than a more proactive approach of fixing the process to prevent the adverse event. Nonetheless, other industries can provide a robust foundation of conceptual approaches to draw upon in developing better safety initiatives in health care.
Health care still lacks frameworks to evaluate and monitor safety. Because of the paucity of funding and research capacity, it is not possible to rigorously evaluate most improvement initiatives in a manner that the general research community would accept, representing a significant lost opportunity to share what novel and effective initiatives have learned. Our challenge is to develop rigorous yet pragmatic and scalable evaluation methods. The ideal safety initiative would not only learn from the mistakes of the past, but also adapt to unexpected present and future demands. Any system looking to effectively measure and monitor a safety initiative should, ideally, answer the following questions:
Are we providing safe care?
How safe was this care in the past?
How safe will we be in the future?
Can our process and health system reliably deliver safer care?
How can we be sure that we are getting better?
We will discuss the key elements of a framework as a simple yet valid example to help structure our thinking and approach to answering these questions.
Theoretical basis
An explanation of the theoretical assumptions behind why a proposed intervention should work helps place the initiative in the context of existing knowledge. Most clinical research is built on extensive molecular and physiologic data and principles that provide assumptions about how and why a single intervention should work even before it is demonstrated. A similar intellectual basis and clearly described theory should be required when evaluating safety initiatives to allow for greater acceptance. This can be difficult, since the foundation of most improvement initiatives draws upon diverse disciplines, ranging from clinical medicine and health services research to human factors engineering, organizational psychology, instructional design, and systems engineering.
A combination of qualitative and quantitative approaches is often needed to provide a meaningful understanding of safety and quality improvement initiatives (13,19,20). For example, public attention attributed decreased central line-associated bloodstream infection rates to the use of a checklist. However, the intervention also included and integrated a model for translating research evidence into practice with a comprehensive unit-based safety program to improve local safety and teamwork culture (21). Such a multifaceted approach was necessary to overcome local barriers, including social, emotional, cultural and political ones, to effect a lasting change in provider behavior (22).
Appropriate measures
With increasing scrutiny on improving patient safety, there is tremendous pressure on health care organizations to provide empirical evidence that patients are safer. Many organizations lack defined, scientifically sound measures to evaluate their progress regarding safety, and they often lack the infrastructure to monitor performance. Current publicly reported performance measures apply to a minority of hospital discharges and will likely not be sufficient to effectively evaluate safety (23). Safety assessment presents a significant challenge because there are innumerable ways that patients can be harmed, and only a limited number can be defined in advance or predicted. Effectively monitoring safety will require an information system that recognizes the dynamic nature of health care, and a set of measures that can vary with dynamic local conditions and context.
Good safety measures should have several qualities, such as importance, validity, and applicability to the local environment (24). Measures should conceptualize safety as a continuous variable (is it improving?) rather than a dichotomous one (safe or unsafe). Ideally, measures should be quantifiable as rates or proportions to track improvement in outcomes and processes, or should be non-rate-based to evaluate the structure and context of care (25). A selected measure needs to be of strategic importance to be worth the concentration of effort and resources, and it should matter to the people who will be responsible for improving it. To determine a measure's validity, it is necessary to assess the supporting evidence for the chosen intervention and whether it can achieve the intended outcome. Face validity, especially in the local context, is also necessary to ensure uptake and utilization. A valid measure must be reliable and reproducible to minimize the potential for bias. Finally, a good safety measure needs to be feasible and useful in its local environment or organization to justify the commitment of scarce resources. If the measure does not guide improvement efforts, data collection should be reconsidered.
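As a purely illustrative sketch of a rate-based measure, the common practice of expressing an event count against a device-day denominator (e.g., infections per 1,000 catheter-days) can be computed as follows; the counts and denominators here are invented for illustration and are not drawn from any published program:

```python
# Sketch: a rate-based safety measure, tracked as a continuous quantity
# over time rather than a safe/unsafe dichotomy. All numbers are
# hypothetical, for illustration only.

def infections_per_1000_catheter_days(infections: int, catheter_days: int) -> float:
    """Express an outcome measure as events per 1,000 device-days."""
    return 1000 * infections / catheter_days

# Quarterly data for one hypothetical ICU: (infections, catheter-days)
quarters = [(5, 2100), (4, 2250), (2, 2300), (1, 2400)]

for q, (infections, days) in enumerate(quarters, start=1):
    rate = infections_per_1000_catheter_days(infections, days)
    print(f"Q{q}: {rate:.2f} infections per 1,000 catheter-days")
```

Expressing the measure this way makes quarter-over-quarter improvement visible, which a binary "safe/unsafe" label cannot.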
Detailing of processes
Limited guidance is available regarding the successful implementation and evaluation of safety and quality improvement initiatives in health care (26). Such initiatives often pose program management challenges that make it difficult to report methods and results, such as the use of evolving multifaceted interventions and minimal dedicated data collection resources (27). Describing improvement practices in sufficient detail to allow replication by others should be a key evaluation requirement. The International Committee of Medical Journal Editors advises authors to "Describe statistical methods with enough detail to enable a knowledgeable reader with access to the original data to judge its appropriateness for the study and verify the reported results" (28). However, this recommendation does not extend to all facets of the research methodology. A recent review noted that several studies of prominent patient safety practices limited their descriptions to just a few sentences (17). While adequate reporting can be difficult, it has been recommended by several authors and guidelines (17,29,30).
Trials of complex multifaceted interventions such as those used to improve safety and quality should find detailing their processes of particular benefit. With the evolving nature of most safety initiatives, detailing the encountered barriers and how they were addressed is of critical importance. The core intervention may be difficult to separate out from the culture-based efforts to implement it, with the two often blending together over the course of the intervention. For example, the Keystone ICU project recognized the importance of leadership support and safety culture when asking providers to change their practices to help reduce bloodstream infections, and therefore packaged them together as a safety practice (20,31). While some experts might believe that disentangling the co-interventions may not be meaningful, outlining them in detail is essential for any future replication and dissemination.
Assessment of initiative
Health care organizations are continuously adapting to address the dynamic nature of risks to patient safety. Dynamic systems theory suggests that all the factors influencing safety in the future cannot be completely defined today. Successful safety initiatives need to be able to incorporate multiple components, including anticipation of harm, sensitivity to local context and provider feedback, and tracking a variety of outcomes.
Anticipation and preparedness
Safety science is now moving from a reactive to a more proactive perspective. Anticipation is a critical component, requiring foresight to predict potential problems and pre-emptively prepare for them. This is an essential part of clinical care delivery as well, with treatments adjusting to changes in the patient’s condition on a regular basis. However, there is usually no specific set of information or data that is or is not relevant to anticipate safety. Rather, it is a matter of fostering discussion and questioning the issue, even in times of success and stability.
While we do not have an adequate understanding of safety science to definitively assess the safety of a system, some surrogate measures are promising. Safety culture has been associated with patient harm, with hospitals scoring higher on safety climate assessments being significantly less likely to have patient safety indicator events (32,33). Human reliability analysis methods are being borrowed from other industries to systematically examine processes of care to identify potential failure points by developing “safety cases.” This method provides an overall assessment through a detailed knowledge of the process, a quantitative assessment of the likelihood that a variety of possible failures will occur based on reported narratives from regular use, and an assessment of the combination of all possible kinds of failures. While not widely adopted in health care, it is being developed for use in the Safer Clinical Systems program run by the Health Foundation and Warwick Medical School (18).
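The quantitative core of such an analysis can be sketched as follows, assuming, purely for illustration, independent failure probabilities for a few process steps; real safety cases derive these likelihoods from detailed process knowledge and reported narratives rather than the invented figures used here:

```python
# Sketch of the quantitative step in a simple human reliability analysis:
# combining per-step failure likelihoods for a care process. The steps and
# probabilities are hypothetical, and independence between step failures
# is assumed for simplicity (real processes often violate this).

steps = {
    "hand hygiene before insertion": 0.02,
    "full barrier precautions": 0.05,
    "chlorhexidine skin prep": 0.01,
}

# Probability that every step succeeds, under the independence assumption
p_all_ok = 1.0
for p_fail in steps.values():
    p_all_ok *= (1 - p_fail)

# Probability that at least one step fails somewhere in the process
p_any_failure = 1 - p_all_ok
print(f"P(at least one step fails) = {p_any_failure:.3f}")
```

Even this toy calculation shows why small per-step failure rates compound across a multi-step process, which is the intuition behind examining the combination of all possible kinds of failures.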
Environmental sensitivity and feedback
To improve safety, interventions need to have the capacity to monitor and respond to appropriate information on a regular basis. Being sensitive to the local system and environment allows researchers to develop, refine and update interventions based on changes in the situation, including those outside normal operational activities (34). This can allow earlier identification of problems that researchers can deal with before they affect patient safety. The practice of anesthesia is a particularly receptive field, given the wealth of available information regarding the safety of the patient, both in real-time and as trends over time. Examples of mechanisms that support sensitivity in health care organizations include safety walk-rounds, briefings and debriefings, operational rounds, the use of dedicated patient safety officers, and local staff and patient safety assessments (18,35).
Effective safety initiatives seek to identify deviations from best practices and risks to patients while attempting to learn from them, using the information to influence their future functioning. Health care organizations use a variety of formal and informal approaches to obtain information about safety in the context of care delivery. What works effectively in one area or organization may not work in other areas, emphasizing the need to select the appropriate method for the environment. Once the data have been gathered, responsive action needs to happen in a timely manner; the potential risk to patient safety can increase over time. Deciding on the appropriate action to take on safety information is also important; strategies can vary from rapid response to single events to thematic vulnerabilities. Discussion and integration of feedback often needs to be done on multiple levels within an organization as well to obtain the appropriate response. The key is to ensure that the response to safety information engages frontline staff, reassuring them that their reports are being taken seriously.
Outcomes
A formal assessment of outcomes is standard practice for every clinical intervention, often continuing well after the actual trial and study period. Unfortunately, this is often ignored when evaluating safety initiatives, especially when the benefits are marginal or may be outweighed by other effects (36,37). Changes in the outcome of interest are routinely measured, but indirect consequences and costs are often ignored. Certain practices, while not directly harmful, can cause changes in practices or behaviors that lead to unintended consequences, which can undermine the direct benefits (38). An example is the regulation of resident work hours, intended to decrease fatigue and thereby improve patient safety. This was based on studies demonstrating decreased cognitive functioning in sleep-deprived individuals and focused assessments of errors when physicians performed specific cognitive tasks in a simulated environment (38–40). On further analysis, there were no definitive improvements in patient mortality, but there were some increases in complications (41–43). Another example is the use of computerized provider order entry systems: even though these systems decrease medication errors, they often do not reduce harm from adverse drug events (44).
These examples highlight some important potential pitfalls when following outcomes for safety initiatives. The measured outcomes are often surrogate markers, and changes in them do not always affect the true outcome of interest. Changes to a targeted component of a complicated, interconnected system like health care often produce unexpected collateral effects. This necessitates ongoing vigilance to monitor for unintended consequences from the multifaceted interventions favored in patient safety and quality improvement.
Assessment of contextual factors
Understanding contextual factors is the researcher's gateway to successfully implementing safety initiatives in a variety of settings. Evaluating the influence of context on the outcomes and efficacy of a multifaceted safety intervention is analogous to evaluating the heterogeneity of treatment effects in clinical trials. Its influence is an often-cited reason why conceptually similar interventions achieve different outcomes when the setting changes (44,45). However, there is controversy about which elements of context most need tracking and evaluation for patient safety. One published framework proposes grouping high-priority contexts into four domains based on theory and the limited evidence available (17): structural characteristics of the organization; leadership, culture, and teamwork; patient safety tools and technologies; and external factors. Although the application of these high-priority contexts may vary based on the specifics of the initiative in question, evaluations should consider all of them potentially applicable.
Organizational structural characteristics are mostly fixed and difficult to change. They include geographic and demographic characteristics of the hospital as well as patients, organizational complexity, and financial status. Local leadership, culture, and teamwork are integrated concepts that could be changed with directed efforts (13). They can be crucial for the success of safety initiatives, determining whether and how well an intervention is implemented and sustained. The use of specific tools for patient safety interventions can have a significant impact on deploying and managing culture-based components, and are relatively easy to influence at the organizational level. External factors consist of the overall environment in which the unit or organization resides, such as regulatory or accreditation requirements. While not under the direct influence of the unit or organization, they often have a profound effect on patient safety and quality, often determining issues such as allocation of resources.
Conclusion
Over the last decade, increasing attention on preventable adverse events has led to significant, and deserved, pressure to improve. This has driven the development of a wide variety of initiatives aiming to improve quality and safety in health care. For various reasons, most of these favor action over evidence, and often fail to provide definitive evidence of their efficacy and effectiveness (14). In so doing, they often overlook the need to weigh the costs of safety initiatives against their benefits (46).
The measurement and monitoring of safety is critical if the field is to improve, but is often varied and complicated. With the ongoing maturation of the science of patient safety, we are seeing progress in how to study improvement. High-quality research is necessary to appreciate the effectiveness and applicability of safety research while minimizing the risk that results will be misinterpreted and misapplied. As with all traditional medical research, patient safety and quality improvement research needs an evaluation framework. Despite the appeal of circumventing traditional models of evidence, we need to pursue solutions to quality and safety issues in a way that does not misguide us about their effectiveness, waste our constrained resources, or blind us to unintended consequences. We urge researchers and funders alike to utilize frameworks like the one outlined here to evaluate safety interventions in health care, to ensure that we keep our patients safe from harm.
Footnotes
Compliance with Ethics Guidelines
Human and Animal Rights and Informed Consent
This article does not contain any studies with human or animal subjects performed by any of the authors.
Conflict of Interest
Asad Latif declares that he has no conflict of interest.
Christine G. Holzmueller declares that she has no conflict of interest.
Peter J. Pronovost reports receiving grant or contract support from the Agency for Healthcare Research and Quality, the Gordon and Betty Moore Foundation (research related to patient safety and quality of care), the National Institutes of Health (acute lung injury research), and the American Medical Association, Inc. (improve blood pressure control); honoraria from various healthcare organizations for speaking on patient safety and quality (the Leigh Bureau manages these engagements); book royalties from the Penguin Group for his book titled Safe Patients, Smart Hospitals; fees to be a strategic advisor to the Gordon and Betty Moore Foundation; and stock and fees to serve as a director for Cantel Medical. Dr. Pronovost is a founder of Patient Doctor Technologies, a startup company that seeks to enhance the partnership between patients and clinicians with an application called Doctella.
Contributor Information
Asad Latif, Department of Anesthesiology and Critical Care Medicine, Armstrong Institute for Patient Safety and Quality, Johns Hopkins University School of Medicine. 600 North Wolfe Street, Meyer 297-A, Baltimore, MD 21287 P: 410-502-2714 F: 410-955-8978 alatif1@jhmi.edu.
Christine G. Holzmueller, Department of Anesthesiology and Critical Care Medicine, Armstrong Institute for Patient Safety and Quality, Johns Hopkins University School of Medicine. 750 East Pratt Street, 15th Floor, Baltimore, MD 21202 P: 410-637-4391 F: 410-637-4380 cholzmu1@jhmi.edu.
Peter J. Pronovost, Department of Anesthesiology and Critical Care Medicine, Armstrong Institute for Patient Safety and Quality, Johns Hopkins University School of Medicine, Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, 600 North Wolfe Street, Meyer 295, Baltimore, MD 21287, P: 410-502-6127, F: 410-955-8978, ppronovo@jhmi.edu.
References
Papers of particular interest have been highlighted as
* Of importance
** Of major importance
- 1.Institute of Medicine: Committee on Quality of Health Care in America. To Err is Human : Building a Safer Health System. Washington, DC: National Academies Press; 1999. [Google Scholar]
- 2.Wachter RM. Patient safety at ten: unmistakable progress, troubling gaps. Health Aff (Millwood) 2010 Jan-Feb;29(1):165–173. doi: 10.1377/hlthaff.2009.0785. [DOI] [PubMed] [Google Scholar]
- 3.National Patient Safety Goals. 2014 Available at: http://www.jointcommission.org/standards_information/npsgs.aspx. [Google Scholar]
- 4.Never Events - Centers for Medicare & Medicaid Services. 2008 Available at: http://downloads.cms.gov/cmsgov/archived-downloads/SMDL/downloads/SMD073108.pdf. [Google Scholar]
- 5.Vincent C, Aylin P, Franklin BD, Holmes A, Iskander S, Jacklin A, et al. Is health care getting safer? BMJ. 2008 Nov 13;337:a2426. doi: 10.1136/bmj.a2426. [DOI] [PubMed] [Google Scholar]
- 6.Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010 Nov 25;363(22):2124–2134. doi: 10.1056/NEJMsa1004404. [DOI] [PubMed] [Google Scholar]
- 7.Adverse events in hospitals: national incidence among Medicare beneficiaries. 2010 Available at: http://oig.hhs.gov/oei/reports/oei-06-09-00090.pdf. [Google Scholar]
- 8.Pham JC, Frick KD, Pronovost PJ. Why don't we know whether care is safe? Am J Med Qual. 2013 Nov-Dec;28(6):457–463. doi: 10.1177/1062860613479397. [DOI] [PubMed] [Google Scholar]
- 9.Benning A, Dixon-Woods M, Nwulu U, Ghaleb M, Dawson J, Barber N, et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. BMJ. 2011 Feb 3;342:d199. doi: 10.1136/bmj.d199. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Benning A, Ghaleb M, Suokas A, Dixon-Woods M, Dawson J, Barber N, et al. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation. BMJ. 2011 Feb 3;342:d195. doi: 10.1136/bmj.d195. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Dixon-Woods M, Bosk CL, Aveling EL, Goeschel CA, Pronovost PJ. Explaining Michigan: developing an ex post theory of a quality improvement program. Milbank Q. 2011 Jun;89(2):167–205. doi: 10.1111/j.1468-0009.2011.00625.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Marshall M, Pronovost P, Dixon-Woods M. Promotion of improvement as a science. Lancet. 2013 Feb 2;381(9864):419–421. doi: 10.1016/S0140-6736(12)61850-9. [DOI] [PubMed] [Google Scholar]
- 13.Pronovost PJ, Berenholtz SM, Goeschel C, Thom I, Watson SR, Holzmueller CG, et al. Improving patient safety in intensive care units in Michigan. J Crit Care. 2008 Jun;23(2):207–221. doi: 10.1016/j.jcrc.2007.09.002. [DOI] [PubMed] [Google Scholar]
- 14. Auerbach AD, Landefeld CS, Shojania KG. The tension between needing to improve care and knowing how to do it. N Engl J Med. 2007 Aug 9;357(6):608–613. doi: 10.1056/NEJMsb070738. Outline of the pros and cons for rapid dissemination of quality improvement initiatives, and proposal of a framework for evaluating interventions to improve safety and effectiveness.
- 15.Shojania KG, Duncan BW, McDonald KM, Wachter RM. Safe but sound: patient safety meets evidence-based medicine. JAMA. 2002 Jul 24–31;288(4):508–513. doi: 10.1001/jama.288.4.508. [DOI] [PubMed] [Google Scholar]
- 16.Leape LL, Berwick DM, Bates DW. What practices will most improve safety? Evidence-based medicine meets patient safety. JAMA. 2002 Jul 24–31;288(4):501–507. doi: 10.1001/jama.288.4.501. [DOI] [PubMed] [Google Scholar]
- 17. Shekelle PG, Pronovost PJ, Wachter RM, Taylor SL, Dy SM, Foy R, et al. Advancing the science of patient safety. Ann Intern Med. 2011 May 17;154(10):693–696. doi: 10.7326/0003-4819-154-10-201105170-00011. Outline of the pros and cons for rapid dissemination of quality improvement initiatives, and proposal of a framework for evaluating interventions to improve safety and effectiveness.
- 18. Vincent C, Burnett S, Carthey J. The measurement and monitoring of safety. 2013 Report commissioned by the Health Foundation to help identify where and how improvements in healthcare quality can be made. Draws together academic evidence and practical experience to produce a framework for safety measurement and monitoring.
- 19.Bradley EH, Herrin J, Wang Y, Barton BA, Webster TR, Mattera JA, et al. Strategies for reducing the door-to-balloon time in acute myocardial infarction. N Engl J Med. 2006 Nov 30;355(22):2308–2320. doi: 10.1056/NEJMsa063117. [DOI] [PubMed] [Google Scholar]
- 20.Pronovost P, Needham D, Berenholtz S, Sinopoli D, Chu H, Cosgrove S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006 Dec 28;355(26):2725–2732. doi: 10.1056/NEJMoa061115. [DOI] [PubMed] [Google Scholar]
- 21.Pronovost PJ, Berenholtz SM, Needham DM. Translating evidence into practice: a model for large scale knowledge translation. BMJ. 2008 Oct 6;337:a1714. doi: 10.1136/bmj.a1714. [DOI] [PubMed] [Google Scholar]
- 22.Bosk CL, Dixon-Woods M, Goeschel CA, Pronovost PJ. Reality check for checklists. Lancet. 2009 Aug 8;374(9688):444–445. doi: 10.1016/s0140-6736(09)61440-9. [DOI] [PubMed] [Google Scholar]
- 23.Jha AK, Li Z, Orav EJ, Epstein AM. Care in U S hospitals--the Hospital Quality Alliance program. N Engl J Med. 2005 Jul 21;353(3):265–274. doi: 10.1056/NEJMsa051249. [DOI] [PubMed] [Google Scholar]
- 24. Pronovost PJ, Berenholtz SM, Needham DM. A framework for health care organizations to develop and evaluate a safety scorecard. JAMA. 2007 Nov 7;298(17):2063–2065. doi: 10.1001/jama.298.17.2063. Potential framework to help healthcare organization more effectively and efficiently develop and evaluate their safety scorecards to address the question of whether their patients are safer.
- 25.Pronovost P, Holzmueller CG, Needham DM, Sexton JB, Miller M, Berenholtz S, et al. How will we know patients are safer? An organization-wide approach to measuring and improving safety. Crit Care Med. 2006 Jul;34(7):1988–1995. doi: 10.1097/01.CCM.0000226412.12612.B6. [DOI] [PubMed] [Google Scholar]
- 26.The Commonwealth Fund Commission on a High Performance Health System. Why Not the Best? Results from the National Scorecard on U S. Health System Performance. 2008 2008 July 2008. [Google Scholar]
- 27.Davidoff F. Heterogeneity is not always noise: lessons from improvement. JAMA. 2009 Dec 16;302(23):2580–2586. doi: 10.1001/jama.2009.1845. [DOI] [PubMed] [Google Scholar]
- 28.Preparing for Submission. Available at: http://www.icmje.org/recommendations/browse/manuscript-preparation/preparing-for-submission.html. [Google Scholar]
- 29.Glasziou P, Chalmers I, Altman DG, Bastian H, Boutron I, Brice A, et al. Taking healthcare interventions from trial to practice. BMJ. 2010 Aug 13;341:c3852. doi: 10.1136/bmj.c3852. [DOI] [PubMed] [Google Scholar]
- 30. Davidoff F, Batalden P, Stevens D, Ogrinc G, Mooney S; SQUIRE Development Group. Publication guidelines for improvement studies in health care: evolution of the SQUIRE Project. Ann Intern Med. 2008 Nov 4;149(9):670–676. doi: 10.7326/0003-4819-149-9-200811040-00009. Presents guidelines for reporting original studies of improvement interventions, aiming to stimulate the publication of high-caliber improvement studies and to increase the completeness, accuracy, and transparency of published reports, thereby strengthening the evidence base for improvement in healthcare.
- 31. Pronovost PJ, Goeschel CA, Colantuoni E, Watson S, Lubomski LH, Berenholtz SM, et al. Sustaining reductions in catheter related bloodstream infections in Michigan intensive care units: observational study. BMJ. 2010 Feb 4;340:c309. doi: 10.1136/bmj.c309.
- 32. Hofmann D, Mark B. An investigation of the relationship between safety climate and medication errors as well as other nurse and patient outcomes. Personnel Psychol. 2006;59:847–869.
- 33. Singer SJ, Falwell A, Gaba DM, Meterko M, Rosen A, Hartmann CW, et al. Identifying organizational cultures that promote patient safety. Health Care Manage Rev. 2009 Oct-Dec;34(4):300–311. doi: 10.1097/HMR.0b013e3181afc10c.
- 34. Schulman PR. General attributes of safe organisations. Qual Saf Health Care. 2004 Dec;13(Suppl 2):ii39–ii44. doi: 10.1136/qshc.2003.009613.
- 35. Frankel AS, Leonard MW, Denham CR. Fair and just culture, team behavior, and leadership engagement: the tools to achieve high reliability. Health Serv Res. 2006 Aug;41(4 Pt 2):1690–1709. doi: 10.1111/j.1475-6773.2006.00572.x.
- 36. Stelfox HT, Bates DW, Redelmeier DA. Safety of patients isolated for infection control. JAMA. 2003 Oct 8;290(14):1899–1905. doi: 10.1001/jama.290.14.1899.
- 37. Shojania KG, Jennings A, Mayhew A, Ramsay C, Eccles M, Grimshaw J. Effect of point-of-care computer reminders on physician behaviour: a systematic review. CMAJ. 2010 Mar 23;182(5):E216–E225. doi: 10.1503/cmaj.090578.
- 38. Shojania KG, Duncan BW, McDonald KM, Wachter RM, Markowitz AJ, editors. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Rockville, MD: Agency for Healthcare Research and Quality; 2001 Jul. AHRQ Publication 01-E058.
- 39. Pilcher JJ, Huffcutt AI. Effects of sleep deprivation on performance: a meta-analysis. Sleep. 1996 May;19(4):318–326. doi: 10.1093/sleep/19.4.318.
- 40. Weinger MB, Ancoli-Israel S. Sleep deprivation and clinical performance. JAMA. 2002 Feb 27;287(8):955–957. doi: 10.1001/jama.287.8.955.
- 41. Laine C, Goldman L, Soukup JR, Hayes JG. The impact of a regulation restricting medical house staff working hours on the quality of patient care. JAMA. 1993 Jan 20;269(3):374–378.
- 42. Prasad M, Iwashyna TJ, Christie JD, Kramer AA, Silber JH, Volpp KG, et al. Effect of work-hours regulations on intensive care unit mortality in United States teaching hospitals. Crit Care Med. 2009 Sep;37(9):2564–2569. doi: 10.1097/CCM.0b013e3181a93468.
- 43. Volpp KG, Rosen AK, Rosenbaum PR, Romano PS, Even-Shoshan O, Wang Y, et al. Mortality among hospitalized Medicare beneficiaries in the first 2 years following ACGME resident duty hour reform. JAMA. 2007 Sep 5;298(9):975–983. doi: 10.1001/jama.298.9.975.
- 44. Bates DW, Leape LL, Cullen DJ, Laird N, Petersen LA, Teich JM, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA. 1998 Oct 21;280(15):1311–1316. doi: 10.1001/jama.280.15.1311.
- 45. Koppel R, Metlay JP, Cohen A, Abaluck B, Localio AR, Kimmel SE, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA. 2005 Mar 9;293(10):1197–1203. doi: 10.1001/jama.293.10.1197.
- 46. Pronovost PJ, Faden RR. Setting priorities for patient safety: ethics, accountability, and public engagement. JAMA. 2009 Aug 26;302(8):890–891. doi: 10.1001/jama.2009.1177.
