Abstract
Health care organizations have incorporated updated safety principles in the analysis of errors and in norms and standards. Yet no research exists that assesses bedside nurses’ perceived skills or attitudes toward updated safety concepts. The aims of this study were to develop a scale assessing nurses’ perceived skills and attitudes toward updated safety concepts, determine content validity, and examine internal consistency of the scale and subscales. Understanding nurses’ perceived skills and attitudes about safety concepts can be used in targeting strategies to enhance their safety practices.
Keywords: attitudes, instrument development, medical errors, nurses, patient safety, skills
In the last decade, patient safety in health care has become an urgent concern for the public and health care industry leaders, given the prevalence of medical errors in US hospitals. Indeed, unintentional harm from medical errors is the 4th leading cause of death in the US.1 The financial cost of patient safety lapses is substantial, with estimates ranging from $17 to $29 billion per year.2
Since the advent of quality and safety research, safety principles have been updated with increased emphasis on system contributions to safety lapses rather than primarily on individuals’ contributions or fault.3 Much work has been done at the administrative level to incorporate these updated safety principles into the analysis of errors and the updating of norms, policies, and standards.4 Thus, it is now common for administrative personnel to focus their attention on the system-level variables that contribute to errors and lapses in patient safety.5–7 However, despite the system efforts that have been made to address patient safety, patient injury and death resulting from health care systems or providers’ care remain widespread.8 The heightened emphasis placed on system-level analysis may have obscured the individual provider’s (eg, the nurse’s) contribution to patient safety practices.
How have nurses stayed current with updated safety concepts? Little is known regarding bedside registered nurses’ (RNs’) attitudes toward the updated safety concepts that have guided organizational policy and standards. No research has examined nurses’ perceptions of their skills in implementing safety principles, such as initiating, executing, and revising standardized processes of care to better manage patients within complex work environments. Most models of nursing practice include individual clinicians’ attitudes and skills as vital variables in the establishment of practice norms; safety practices are no exception.9 Given that nurses are typically at the sharp end of a number of health care errors, most notably in medication administration, understanding their attitudes and perceived skills could assist organizations in identifying targeted strategies to enhance nurses’ safety practices. However, a search of the literature yielded no standardized instruments to assess nurses’ attitudes and perceived skills. Therefore, the aims of this study were to a) develop a scale assessing nurses’ perceived skills and attitudes toward updated safety concepts based on a literature review, b) determine content validity of the scale’s items, and c) examine the psychometric reliability of the scale and subscales.
METHODS
Aim 1: Item Development
Phase I: Review of Definitions and Concepts
A literature review was conducted to identify patient safety definitions and concepts. Various health care organizations and researchers have addressed patient safety, providing a number of conceptual frameworks and definitions.
The initial definition of patient safety from the Institute of Medicine (IOM) was “freedom from accidental injury.”3 This definition has since been expanded to include freedom from injury produced by medical care,10 minimization of the risk of injury to patient and provider through system and individual performance,11 and prevention of health care errors.12 Emanuel et al’s definition13 further expands the patient safety concept by including elements from the field of safety science. Their definition acknowledges that patient safety can be understood at the individual clinician level as well as at the systems level. The impact of human factors engineering is evident in this more expansive definition of patient safety. Within nursing, Ebright has addressed updated safety concepts as consisting of safety models that include an understanding of active vs latent failure, complexity of work, human limitations and complexity, and hindsight bias.14
Cronenwett et al, in their national Quality and Safety Education for Nurses (QSEN) research and initiative,11 conducted a conceptual deconstruction of the knowledge, skills, and attitudes (KSAs) needed by health care professionals to address patient safety. Working with an advisory board of thought leaders in nursing and medicine, the authors reviewed the relevant literature, adapted the IOM competencies for nursing, and proposed targets for competence. The result was a set of descriptions and operationalized facets of KSAs that apply to all RNs.11 Although QSEN KSAs have been studied in prelicensure nursing students,15,16 there is limited research examining the presence of these KSAs among bedside nurse clinicians. The tool described in this article targets this population – bedside nurses.
Phase II: Development of KSA Items
The literature was reviewed for instruments specific to domains of patient safety.17 Nine scales were found that assessed the safety competencies of nurses; however, 7 of these were developed for prelicensure nursing students, with minimal application to practicing nurses. Modification of the 2 remaining scales, Schnall’s Patient-Safety Attitudes, Skills and Knowledge (PS-ASK) Survey18 and Chenot and Daniel’s Health Professions Patient Safety Assessment Curriculum Survey (HPPSACS),19 contributed to the scale development targeted to bedside nurses.
Schnall’s PS-ASK Survey is an adaptation of a survey for medical residents initially developed by Madigosky and colleagues20 to measure medical students’ KSAs about patient safety and medical fallibility. Based on Reason’s model of human error,21 Schnall adapted Madigosky et al’s survey to reflect patient-safety curriculum objectives and evidence-based, patient-safety practices relevant to advanced practice nurses, resulting in the 50-item PS-ASK.
Chenot and Daniel19 also based the HPPSACS on Madigosky’s survey for medical residents. The HPPSACS has 34 items, adapted for nurses. It was reviewed by nurse content experts in its development and is now widely used with prelicensure nursing students.
Nurses’ Attitudes and Skills around Updated Safety Concepts (NASUS) Scale
The NASUS Scale was developed by the authors. The first version was a 34-item scale that adapted items from the PS-ASK and HPPSACS, based on each instrument’s coverage of the QSEN dimensions of patient safety outlined in the KSAs. Also considered were the reliability values associated with these instruments’ individual items and subscales. The NASUS Scale was developed using the attitude sections of the HPPSACS Survey (Cronbach’s alpha = .86, .62, and .63), the Error Analysis skill subscale of the PS-ASK Survey (Cronbach’s alpha = .84), and the Knowledge subscale of the PS-ASK Survey (Cronbach’s alpha = .86), with minor edits. Each item of the NASUS employs a 100-point continuous visual analogue scale; some items use reverse anchors, and these were reverse coded in the item analyses.
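To illustrate the reverse-coding step, a minimal sketch follows. This is not the authors’ analysis procedure; the item names and responses are invented for demonstration.

```python
# Minimal sketch, not the authors' analysis code: reverse coding a 0-100 visual
# analogue item so that higher scores consistently reflect the favorable anchor.
import pandas as pd


def reverse_code(scores: pd.Series, scale_max: int = 100) -> pd.Series:
    """Flip a 0-to-scale_max visual analogue score (eg, 80 becomes 20)."""
    return scale_max - scores


# Hypothetical responses for two reverse-anchored items
responses = pd.DataFrame({"item_a": [80, 35, 60], "item_b": [10, 95, 50]})
responses[["item_a", "item_b"]] = responses[["item_a", "item_b"]].apply(reverse_code)
print(responses)
```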
Effective and sustained adoption of evidence-based practices is also influenced by the degree of clinicians’ skepticism about the value of a change in practice. Clinicians who are highly skeptical of the value of evidence for care are less likely to adhere to these standards in their practice.9 The concept of skepticism is included in the NASUS, specific to safe medication administration practices. The resulting first draft of the NASUS Scale had 8 Skill items, 21 Attitude items, and 5 Knowledge items.
Aim 2: Establishing Content Validity
Content validity refers to the extent to which an instrument measures what it is expected to measure. To conduct an effective content validity index (CVI), 3–10 experts should rate each scale item in terms of its relevance to the underlying construct.17–19 For the NASUS Scale, 9 experts (2 physicians and 7 RNs) completed a CVI. Standardized definitions were provided to clarify the safety KSAs. A 4-point scale with anchors of not relevant, somewhat relevant, quite relevant, and highly relevant was used for each of the 34 NASUS items.22 For each item, the CVI was computed as the number of experts providing a rating of 3 or 4, divided by the total number of experts. This approach effectively dichotomizes the scale into relevant and not relevant items.23 When there are 6 or more expert reviewers of a scale, the recommended criterion is that no item’s CVI should be lower than .78.24
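As a worked illustration of this computation (a hypothetical sketch only; the expert ratings below are invented):

```python
# Item-level content validity index (I-CVI): the proportion of experts rating an
# item 3 (quite relevant) or 4 (highly relevant) on the 4-point relevance scale.
def item_cvi(ratings):
    """Return the share of expert ratings that are 3 or 4."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)


expert_ratings = [4, 3, 4, 4, 3, 4, 2, 4, 3]  # invented ratings from 9 experts
print(f"I-CVI = {item_cvi(expert_ratings):.2f}")  # 0.89 here; per the criterion above, items below .78 were flagged
```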
Five items were eliminated because of low CVI scores. Additionally, several experts indicated that self-assessment of knowledge is unreliable and biased for most individuals, and especially so for health care professionals.25 Several experts also questioned whether the knowledge items piloted in the NASUS were the best core elements of the knowledge domain to represent updated safety concepts. Therefore, the 5 questions that targeted nurses’ assessment of their knowledge of updated safety concepts were eliminated. The net result of this content validity review process was a 24-item NASUS Scale (Table).
Table. Nurses’ Attitudes and Skills around Updated Safety Concepts (NASUS) Scale

| Item | Question | Median (IQR) | Item-Total Correlation | Cronbach’s Alpha if Item Deleted |
|---|---|---|---|---|
| | Skill subscale (Cronbach’s α = .73): Choose the number that corresponds to your level of comfort with the following: | 62 (52, 73) | | |
| 1 | Accurately completing an incident report | 83 (65, 96) | .43 | .68 |
| 2 | Analyzing a case to find the cause of an error | 75 (53, 90) | .63 | .67 |
| 3 | Supporting and advising a peer who must decide how to respond to an error | 78 (65, 90) | .59 | .66 |
| 4 | Disclosing an error to a manager or supervisor | 32 (10, 65) | −.23 | .72 |
| 5 | Disclosing an error to another healthcare professional | 75 (51, 90) | .38 | .68 |
| 23 | Interpreting aggregate error report data | 50 (26, 66) | .54 | .69 |
| 24 | Participating as a team in a root cause analysis | 57 (38, 77) | .60 | .68 |
| | Attitude subscale (Cronbach’s α = .66): Choose the number that corresponds to your level of agreement with the following statements: | 68 (62, 74) | | |
| 6 | Making errors in healthcare is inevitable | 63 (38, 80) | .14 | .70 |
| 7 | Competent healthcare professionals do not make errors that lead to patient harm | 69 (50, 85) | .19 | .69 |
| 8 | Healthcare professionals should routinely spend part of their professional time working to improve patient care | 86 (68, 100) | .49 | .67 |
| 9 | The culture of healthcare makes it easy for healthcare professionals to deal constructively with errors | 42 (27, 66) | .07 | .70 |
| 10 | Healthcare professionals routinely share information about medical errors and what caused them | 86 (72, 100) | .53 | .67 |
| 11 | Healthcare professionals routinely report errors | 50 (32, 78) | .13 | .69 |
| 12 | Reporting systems do little to reduce future errors | 67 (49, 85) | .18 | .69 |
| 13 | Physicians should be the healthcare professionals that report errors to an affected patient and family | 50 (21, 65) | −.15 | .72 |
| 14 | After an error occurs, an effective strategy is to work harder to be more careful | 38 (19, 61) | .11 | .72 |
| 15 | There is a gap between what we know as “best care” and what we provide on a day-to-day basis | 63 (33, 77) | .01 | .72 |
| 16 | Learning how to improve patient safety is an appropriate use of time in my practice | 89 (73, 100) | .51 | .68 |
| 17 | If there is no harm to a patient, there is no need to address an error | 94 (78, 100) | .49 | .67 |
| 18 | If I saw a colleague make an error, I would keep it to myself | 85 (70, 99) | .43 | .67 |
| 19 | Most errors are due to things that healthcare professionals can’t do anything about | 85 (69, 96) | .39 | .68 |
| 20 | I have effective strategies in my practice to reduce my reliance on memory | 74 (61, 86) | .32 | .68 |
| 21 | Standardized medication administration practices improve patient safety outcomes | 86 (71, 98) | .47 | .68 |
| 22 | Standardized medication administration practices get in the way of my nursing practice | 80 (62, 95) | .35 | .69 |
| | Total NASUS Scale (Cronbach’s α = .73) | 66 (60, 72) | | |
Aim 3: Determining Psychometric Reliability of NASUS
To determine psychometric reliability properties of the NASUS, we conducted a cross-sectional study using a convenience sample of employed RNs from hospitals participating in the Collaborative Alliance for Nursing Outcomes (CALNOC) registry. CALNOC is a not-for-profit, self-sustaining, national registry that oversees nursing-sensitive measures collected at the unit level of a hospital. CALNOC supports hospital collection of facility-specific and group benchmark data on nursing sensitive outcomes. This study targeted RNs employed on CALNOC hospital units that had collected medication administration data between November 2014 and April 2015. These RN data and the medication administration data contributed to another study conducted by the authors.
The authors’ University Institutional Review Boards and the Institutional Review Board connected to CALNOC approved the study. The introductory letter inviting RNs to complete the NASUS Scale was explicit in stating that nurse participation was voluntary and anonymous, that there was no direct benefit of participation beyond contributing to nursing knowledge, and that the nurses would not be compensated. The NASUS Scale measured participating nurses’ perceived skills and attitudes about updated safety concepts. All RNs who were currently practicing on targeted units were invited to participate in the study. There were no exclusion criteria.
Recruitment and data collection procedures
Chief Nursing Officers (CNOs) at 34 facilities received the initial inquiry via email and mailed letters. Initial letters of invitation described the study and requested permission to contact the CALNOC Site Coordinator. Three waves of invitations were sent to CNOs (a total of 6 communications) over 4 months, with a 30% response rate (n = 11). Of the 11 CNOs who responded to the invitation, 7 agreed to participate. The principal investigator (PI) then contacted CALNOC Site Coordinators to describe the study and set up a phone meeting to answer subsequent questions and identify appropriate units. To maintain anonymity, the PI instructed the CALNOC Site Coordinators to email a letter of invitation to RNs employed on the identified units. From the 7 agencies, 293 bedside RNs responded to the NASUS Scale.
Data management and analysis
Data were collected and managed through a secured web-based application designed to support data capture for research. SPSS Version 23 (IBM Corp, Armonk, NY) was used for all analyses. Collected data were examined for missing values, which were minimal (0.01%). No scale items were omitted from the analyses. Missing data were examined for patterns of recurrence or systematic problems; there was no pattern of missing data clustering around an agency or unit. To minimize bias, any participant with 3 or more missing items was removed from the database.26 This criterion resulted in 8 participants being removed, for a final total of 285 participants. Most questions had only 1 or 2 participants who did not answer; the maximum was question #13, which 9 participants (3%) did not answer.
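A minimal sketch of this exclusion rule follows (the authors’ analyses were conducted in SPSS; the Python example below uses invented data and column names):

```python
# Hypothetical response matrix: rows are participants, columns are NASUS items.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "item_1": [83, np.nan, 75],
    "item_2": [70, np.nan, 60],
    "item_3": [65, np.nan, np.nan],
})

# Remove any participant missing 3 or more items, per the criterion described above.
missing_per_participant = df.isna().sum(axis=1)
df_retained = df[missing_per_participant < 3].copy()
print(f"Removed {len(df) - len(df_retained)} of {len(df)} participants")
```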
Graphical and descriptive statistical methods were used to evaluate data distributions. Continuous data distributions were skewed; therefore, medians and interquartile ranges were used to summarize those data. No data transformations were necessary to meet statistical assumptions.
Psychometric reliability was examined using item-total correlation and Cronbach’s alpha coefficient. The item-total correlation indicates the consistency of an item with the total of scores on all other items in the subscale; a low item-total correlation means the item is not well correlated with the overall scale. A target item-total correlation of .3 or higher indicates satisfactory consistency of the item responses with the remaining item responses.27 Furthermore, if the internal consistency of the entire scale would have increased with a specific item removed, that item was evaluated for possible wording issues or simple lack of consistency with the other items in the scale. Using this criterion, no items were removed from the scale (Table). For the NASUS Scale, a minimal Cronbach’s alpha coefficient of .70 was established for this initial testing.28–30
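For readers who want to reproduce these two statistics outside SPSS, a minimal Python sketch follows (illustrative only; it assumes a DataFrame whose columns are the items of one subscale):

```python
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlate each item with the sum of all other items in the subscale."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items.columns}
    )
```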
RESULTS
The Table displays the item medians and interquartile ranges as well as item-total correlation values for each item. Item median values ranged from 32 to 89, suggesting good variability among the data. The 3 lowest median values (32, 38, and 42) were all associated with items that had a reverse visual analogue scale (items 4, 9, and 14), perhaps suggesting that participants responded the same way to all of the scale questions without reading the items carefully.
The 24-item NASUS Scale had a Cronbach’s alpha of .73, indicating an acceptable level of consistency among items for a new scale. The Skill subscale had a Cronbach’s alpha of .71. The item-total correlation for item #4 was −.23; this item focuses on the nurse’s comfort with disclosing an error to a manager or supervisor, and the value indicates a negative correlation with the rest of the subscale. This negative correlation might suggest that as skills with updated safety concepts improve (eg, completing an incident report, analyzing a case to find error, or interpreting aggregate data), a nurse’s reporting of errors to the manager may decrease. However, recent literature indicates that reporting behaviors are affected by a wide variety of agency, unit, and cultural elements, so this interpretation should be explored further.31–33
The Attitude subscale had a Cronbach’s alpha coefficient of .67, indicating moderate internal consistency among this subscale’s items. No items were deleted because doing so would not have improved the reliability of this subscale. In the analysis of item-total correlations, 8 questions did not meet the .3 target (items 6, 7, 9, 11, 12, 13, 14, and 15). Questions 6, 7, 9, and 15 focus on the occurrence of errors in health care, the stress of the health care environment, and the gap between awareness of errors and best practice. Questions 11, 12, and 14 focus on reporting practices and their value. Question 13 specifically addresses which health care professional should address errors with patients and families.
DISCUSSION
As a first step in determining methods for intervention to enhance nurses’ safety practices, we developed and tested psychometric properties of a scale that would elicit nurses’ perceived skills and attitudes about updated safety principles. We found that overall the NASUS Scale had an acceptable internal consistency.
During the last decade, tremendous improvements have occurred in how quality and safety are taught in prelicensure education through the QSEN Initiative.15 However, the majority of the current nursing workforce was not educated in these updated concepts of safety. There is no existing instrument that attempts to assess this gap in education and skills. The NASUS Scale is the first instrument to address this disparity.
Nurses are the segment of the health care workforce that is most frequently responsible for implementing quality and safety measures to improve systems and patient outcomes. Some authors refer to the time, energy, and emotional stress related to this “quality burden” as a phenomenon unique to nurses, which may impact nurses’ attitudes about these elements of their practice.34 Understanding nurses’ attitudes about implementation of quality and safety initiatives is important in effective strategizing to recruit their support. The NASUS Scale is the first scale to address this phenomenon, with this pivotal clinical population.
Most competency-based models examine the knowledge, skills, and attitudes underlying a competency. The NASUS will benefit from future work to identify relevant and reliable knowledge elements to include, filling out the breadth of the tool.
There are several limitations of the study. The pilot sample for the NASUS Scale included only 7 clinical agencies, all of which participate in the CALNOC Consortium. This sample may carry an inherent bias in that these clinical agencies are committed to improving patient outcomes and engaging in continuous quality improvement. Nevertheless, the scale was able to detect variance among the participants. Second, participation among the 41 units ranged from 1 to 15 participating nurses (1% to 42% unit rate). Voluntary participation holds no incentive for nurses to invest their time and energy into completing a survey, and recent research confirms decreasing rates of nurse participation in surveys.35 Bedside nurses are required to complete a battery of evaluations on a regular basis and commonly suffer from what is known as survey fatigue. Whether the surveys assess safety culture or employee satisfaction, serve benchmarking purposes (eg, for University Health Consortium agencies or hospitals that have received Magnet status), or evaluate clinical improvements implemented by leadership or educators, bedside nurses are besieged by them. Several CNOs who were invited to allow their nurses to participate in the pilot of the NASUS declined, citing survey fatigue as a concern. With this variance in participation, it is imprudent to draw any conclusions about practice context or culture from the pilot results.
Several items of the NASUS Scale need further testing and refinement. Question 4 of the Skill subscale had a particularly low item-total correlation. The variability in how managers respond to error reporting may make this item unreliable. The question should not be eliminated, because reporting errors is paramount in tracking system gaps; rephrasing it in more objective language might improve its performance on the NASUS Scale. Questions 11, 12, and 14 in the Attitude subscale also had low item-total correlations and address reporting errors. Because of the high number of subjective variables in error reporting, these questions may need to be reworded.36 Questions 6, 7, 9, and 15 are broadly worded attitude questions; rephrasing them with more nuance may increase their consistency within the Attitude subscale. The low item-total correlation value for item 13 may indicate that nurses who completed the NASUS Scale feel strongly that nurses need to be included in reporting errors to a patient and family.
CONCLUSION
Although initial psychometric testing revealed acceptable reliability statistics, the NASUS Scale needs further refinement and piloting to enhance its utility in measuring nurses’ attitudes and skills around updated safety concepts. Clinicians, administrators and researchers need to maintain awareness of the importance of attitudes and skills for safety competence. This pilot instrument initiates this area of study. We plan on refining and retesting the NASUS instrument with hospital nurses. With an accurate assessment of nurses’ skills and attitudes around updated safety concepts, ongoing nurse education can be better targeted. Ongoing education may include yearly validation programs at the agency level or continuing education offerings. With better assessment data, targeted educational strategies can be implemented to address change fatigue, reluctance in engagement, or skills deficits.
Acknowledgments
Thanks to Dr. Diane Brown and Dr. Carolyn Aydin of Collaborative Alliance for Nursing Outcomes for their insight, assistance in recruiting nurses, and generous collaboration.
Footnotes
The authors declare no conflict of interest.
Contributor Information
Gail E. Armstrong, College of Nursing, University of Colorado, Aurora, CO US.
Mary Dietrich, Schools of Medicine and Nursing, Vanderbilt University, Nashville, TN US.
Linda Norman, School of Nursing, Vanderbilt University, Nashville, TN US.
Jane Barnsteiner, School of Nursing, University of Pennsylvania, Philadelphia, PA US.
Lorraine Mion, School of Nursing, Vanderbilt University, Nashville, TN US.
REFERENCES
1. FastStats. http://www.cdc.gov/nchs/fastats/deaths.htm. Accessed January 6, 2016.
2. Page A, ed. Institute of Medicine. Keeping patients safe: Transforming the work environment of nurses. 1st ed. National Academies Press; 2004.
3. Corrigan JM, Donaldson MS, eds. Institute of Medicine. To err is human: Building a safer health system. 1st ed. National Academies Press; 2000.
4. Weissman JS, Annas CL, Epstein AM, et al. Error reporting and disclosure systems: Views from hospital leaders. JAMA. 2005;293(11):1359–1366. doi:10.1001/jama.293.11.1359.
5. Root Cause Analysis. AHRQ Patient Safety Network. https://psnet.ahrq.gov/primers/primer/10/root-cause-analysis. Accessed January 6, 2016.
6. Schnock KO, Dykes PC, Albert J, et al. The frequency of intravenous medication administration errors related to smart infusion pumps: A multihospital observational study. BMJ Qual Saf. 2016 Feb. doi:10.1136/bmjqs-2015-004465.
7. Quigley PA, Barnett SD, Bulat T, Friedman Y. Reducing falls and fall-related injuries in medical-surgical units: One-year multihospital falls collaborative. J Nurs Care Qual. 2016;31(2):139–145. doi:10.1097/NCQ.0000000000000151.
8. Landrigan CP. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363(22):2124–2134. doi:10.1056/NEJMsa1004404.
9. Shimokura G, Weber DJ, Miller WC, Wurtzel H, Alter MJ. Factors associated with personal protection equipment use and hand hygiene among hemodialysis staff. Am J Infect Control. 2006;34(3):100–107. doi:10.1016/j.ajic.2005.08.012.
10. Glossaries. AHRQ Patient Safety Network. https://psnet.ahrq.gov/glossary/p. Accessed January 23, 2016.
11. Cronenwett L, Sherwood G, Barnsteiner J, et al. Quality and Safety Education for Nurses. Nurs Outlook. 2007;55(3):122–131. doi:10.1016/j.outlook.2007.02.006.
12. Patient Safety Dictionary N-Z. National Patient Safety Foundation. http://www.npsf.org/?page=dictionarynz. Accessed January 23, 2016.
13. Emanuel L, Berwick D, Conway J, et al. What exactly is patient safety? In: Henriksen K, Battles JB, Keyes MA, Grady ML, eds. Advances in patient safety: New directions and alternative approaches (Vol. 1: Assessment). Rockville, MD: Agency for Healthcare Research and Quality; 2008. http://www.ncbi.nlm.nih.gov/books/NBK43629/. Accessed September 23, 2013.
14. Ebright P. Patient safety. In: Fundamentals of clinical nurse specialist practice. 2nd ed. New York: Springer Publishing; 2014:183–197.
15. Barnsteiner J, Disch J, Johnson J, McGuinn K, Chappell K, Swartwout E. Diffusing QSEN competencies across schools of nursing: The AACN/RWJF Faculty Development Institutes. J Prof Nurs. 2013;29(2):68–74. doi:10.1016/j.profnurs.2012.12.003.
16. Lewis DY, Stephens KP, Ciak AD. QSEN: Curriculum integration and bridging the gap to practice. Nurs Educ Perspect. 2016;37(2):97–100.
17. Okuyama A. Assessing the patient safety competencies of health care professionals: A systematic review. BMJ Qual Saf. 2011;20(11):991–1000. doi:10.1136/bmjqs-2011-000148.
18. Schnall R, Stone P, Currie L, Desjardins K, John RM, Bakken S. Development of a self-report instrument to measure patient safety attitudes, skills, and knowledge. J Nurs Scholarsh. 2008;40(4):391–394. doi:10.1111/j.1547-5069.2008.00256.x.
19. Chenot TM, Daniel LG. Frameworks for patient safety in the nursing curriculum. J Nurs Educ. 2010;49(10):559–568. doi:10.3928/01484834-20100730-02.
20. Madigosky WS, Headrick LA, Nelson K, Cox KR, Anderson T. Changing and sustaining medical students’ knowledge, skills, and attitudes about patient safety and medical fallibility. Acad Med. 2006;81(1):94–101. doi:10.1097/00001888-200601000-00022.
21. Reason J. Human error: Models and management. West J Med. 2000;172(6):393–396. doi:10.1136/ewjm.172.6.393.
22. Davis LL. Instrument review: Getting the most from a panel of experts. Appl Nurs Res. 1992;5(4):194–197.
23. Polit DF, Beck CT. The content validity index: Are you sure you know what’s being reported? Critique and recommendations. Res Nurs Health. 2006;29(5):489–497. doi:10.1002/nur.20147.
24. Lynn MR. Determination and quantification of content validity. Nurs Res. 1986;35(6):382–385.
25. Davis DA, Mazmanian PE, Fordis M, Van Harrison RR, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. JAMA. 2006;296(9):1094–1102. doi:10.1001/jama.296.9.1094.
26. Bennett DA. How can I deal with missing data in my study? Aust N Z J Public Health. 2001;25(5):464–469.
27. Gliem RR, Gliem JA. Calculating, interpreting, and reporting Cronbach’s alpha reliability coefficient for Likert-type scales. 2003. https://scholarworks.iupui.edu/handle/1805/344. Accessed January 20, 2016.
28. Bland JM, Altman DG. Statistics notes: Cronbach’s alpha. BMJ. 1997;314(7080):572. doi:10.1136/bmj.314.7080.572.
29. Psychological testing and assessment: An introduction to tests and measurement. 8th ed. New York: McGraw-Hill; 2012.
30. Bernstein IH, Nunnally JC. Psychometric theory. New York: McGraw-Hill; 1994.
31. Force MV, Deering L, Hubbe J, et al. Effective strategies to increase reporting of medication errors in hospitals. J Nurs Adm. 2006;36(1):34–41. doi:10.1097/00005110-200601000-00009.
32. Yung H-P, Yu S, Chu C, Hou I-C, Tang F-I. Nurses’ attitudes and perceived barriers to the reporting of medication administration errors. J Nurs Manag. 2016 Feb. doi:10.1111/jonm.12360.
33. Sears K, O’Brien-Pallas L, Stevens B, Murphy GT. The relationship between the nursing work environment and the occurrence of reported paediatric medication administration errors: A pan-Canadian study. J Pediatr Nurs. 2013;28(4):351–356. doi:10.1016/j.pedn.2012.12.003.
34. Disch J, Sinioris M. The quality burden. Nurs Clin North Am. 2012;47(3):395–405. doi:10.1016/j.cnur.2012.05.010.
35. Hill CA, Fahrney K, Wheeless SC, Carson CP. Survey response inducements for registered nurses. West J Nurs Res. 2006;28(3):322–334. doi:10.1177/0193945905284723.
36. Mayo AM. Nurse perceptions of medication errors: What we need to know for patient safety. J Nurs Care Qual. 2004;19(3):209–217. doi:10.1097/00001786-200407000-00007.