Author manuscript; available in PMC 2018 Jan 1. Published in final edited form as: Biomed Instrum Technol. 2017 Jan-Feb;51(1):25–33. doi: 10.2345/0899-8205-51.1.25

Acceptability, feasibility, and cost of using video to evaluate alarm fatigue

Matt MacMurchy 1,2, Shannon Stemler 1,2,3, Mimi Zander 1,2,4, Christopher P Bonafide 1,2,5
PMCID: PMC5376090  NIHMSID: NIHMS818018  PMID: 28103098

INTRODUCTION

The factors that contribute to nurses’ response time to physiologic monitor alarms are poorly understood. Nurses caring for hospitalized patients typically experience high rates of nonactionable alarms, which are thought to contribute to alarm fatigue. Proving the existence of alarm fatigue in the hospital and understanding the interplay between the numerous other factors contributing to alarm response time are complex challenges.

Video is a powerful tool to evaluate the quality of care delivered.1 In 2014, members of our research group published “Video Methods for Evaluating Physiologic Monitor Alarms and Alarm Responses” in Biomedical Instrumentation and Technology.2 In that paper, we described in detail the methods we developed to use video to begin disentangling the wide range of factors thought to contribute to alarm response time. Those methods have resulted in 2 projects. The first was a pilot project during which we recorded 210 hours of patient care and captured 5070 alarms and responses.3 In that pilot, we demonstrated an association between higher numbers of nonactionable alarms in the preceding 2 hours—a proxy for acute alarm fatigue—and slower response time to subsequent alarms.3 In the second (current) project, we recorded 551 hours of patient care and captured 11,745 alarms and responses. This paper describes acceptability, feasibility, and costs of this second project.

One of the most frequent questions we are asked by clinicians, technicians, and alarm experts is how acceptable, feasible, and costly video-based projects are to execute. Video offers the ability to gather a tremendous amount of insight into alarms and staff responses, but analyzing video can be expensive and time consuming. In this paper, we report the metrics we used to describe the acceptability, feasibility, and cost of using video to evaluate physiologic monitor alarm characteristics and responses in our most recent study that generated 100 video recordings averaging 5.5 hours each.

METHODS

Setting

The study took place at The Children’s Hospital of Philadelphia on a medical unit that cares for infants and young children with routine general pediatric problems as well as complex medical conditions. All patients in the study were monitored using General Electric Dash 3000 devices. A central monitoring display was at the nurses’ station but there were no staff assigned to review alarms centrally. In addition to alarming at the bedside and the central station, alarms for asystole, ventricular tachycardia, ventricular fibrillation, apnea, heart rate, respiratory rate, oxygen saturation, probe off, and leads fail also automatically sent text messages to the bedside nurse using a secondary notification system. There were no alarm marquee systems in use.

Eligible patients and nurses

All patients on the unit who were undergoing continuous cardiorespiratory and/or pulse oximetry monitoring were eligible for the study unless they were anticipated to either be discharged or have their monitoring discontinued during the video recording period. Since participating required written, in-person consent from a parent, only patients with a parent present at the bedside could be consented. All nurses were eligible to participate if they were caring for an eligible patient.

Study definitions

We used the following definitions during the study:

Clinical alarm

An alarm for a physiologic parameter out of range or cardiac arrhythmia.

Valid alarm

A clinical alarm that correctly identifies the physiologic status of the patient. Validity was based on waveform quality, signal strength indicators, and artifact conditions, referencing the monitor’s operator’s manual.

Actionable alarm

A valid clinical alarm that (a) leads to an observed clinical intervention (such as initiating supplemental oxygen), (b) leads to an observed consultation with another clinician at the bedside (such as discussing the patient’s tachycardia with a physician), or (c) warrants intervention or consultation for a clinical condition (such as a prolonged desaturation) but goes unwitnessed, occurring while no clinicians are present and resolving before any clinician enters the room or views the central monitoring station.

Nonactionable alarm

An alarm that does not meet the actionable definition above, including invalid alarms such as those caused by motion artifact, alarms that are valid but nonactionable, and technical alarms.

Technical alarm

An alarm for a problem with the physiologic monitor device or associated sensors.

Alarm response time

The number of seconds elapsed between the start time of an audible alarm on the bedside monitor and the time a clinical staff member either entered the patient’s room or viewed the central monitoring station.

Summary of video methods

For a detailed description of the video methods, please refer to our previously published papers.2,3 Briefly, in order to capture multiple angles while remaining minimally obtrusive to patients, family, and staff, we used temporarily mounted GoPro cameras4 placed in inconspicuous locations in patient rooms and on the central monitoring station.2 Each video included 5–7 camera views (Figure 1). During some overnight recording sessions, we also experimented with a Canon XA10 camera that featured a built-in infrared illuminator. Its low-light performance was excellent but rarely necessary, because nurses and parents often left lights on in the patient room overnight; in most cases, the ambient light was sufficient for the GoPro cameras to perform adequately.

Figure 1. (Image not included in this version.)

Before setting up cameras, we obtained consent from both a parent of the patient and the patient’s primary bedside nurse. After obtaining consent from both parties, recording began prior to mounting the cameras so that all cameras could be synchronized. During setup, the Research Coordinator administered a questionnaire to the bedside nurse regarding their demographics, nursing experience, and knowledge of the patient. When all cameras were in place and connected to a power source, they were left to record for approximately 6 hours, with checks every 30–60 minutes to ensure they were still recording, attached to power sources, and properly positioned.

After 6 hours, we stopped recording and removed the cameras. We administered additional questionnaires to the parent and bedside nurse on their experience of participating in the study. We then uploaded the videos from the cameras to a computer and compiled them into a synchronized, single view displaying all camera views, and edited the footage. The complete video was then exported and uploaded to a secure server.

After the recording and editing processes were finished, videos were reviewed and annotated. We used BedMasterEx v4.2.1 (Excel Medical Electronics, Jupiter, FL) software to obtain a time-stamped list of all alarms that occurred during the study period to guide the annotation process. All alarms, even those with overlapping durations, were included. We uploaded the list to REDCap5 and used an alarm report to generate a queue of alarms for review. During video review, researchers jumped to each alarm time based on the BedMaster time-stamp data and annotated information including the type of alarm, whether the alarm was valid and actionable, and how clinicians responded, according to the study definitions listed above. We intensively trained our Research Coordinator to assist in this process. Following a training period that involved supervised review and discussion of 4675 clinical alarms, the Research Coordinator and Principal Investigator separately reviewed 883 clinical alarms. The Research Coordinator and Principal Investigator agreed on the validity determination for 99.3% of these alarms and on the actionability determination for 99.7%. Based on these reassuring results, for the remaining alarms the Principal Investigator performed secondary review of the valid clinical alarms only.
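To make the review-queue step concrete, the following is a minimal, illustrative sketch in Python. The file layout, column names, and function are hypothetical; they are not the actual BedMasterEx export format or REDCap alarm report used in the study.

```python
# Illustrative sketch only: the CSV layout and column names below are
# hypothetical, not the actual BedMasterEx export used in the study.
import csv
from datetime import datetime

def build_review_queue(alarm_csv_path, recording_start, time_format="%Y-%m-%d %H:%M:%S"):
    """Return a list of (alarm_type, alarm_time, video_offset_seconds)
    so a reviewer can jump directly to each alarm in the synchronized video."""
    queue = []
    with open(alarm_csv_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes 'alarm_time' and 'alarm_type' columns
            alarm_time = datetime.strptime(row["alarm_time"], time_format)
            offset = (alarm_time - recording_start).total_seconds()
            if offset >= 0:  # keep only alarms occurring after recording began
                queue.append((row["alarm_type"], alarm_time, offset))
    return sorted(queue, key=lambda item: item[2])

# Hypothetical usage for one session:
# queue = build_review_queue("alarms.csv", datetime(2015, 3, 2, 9, 0, 0))
```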

Acceptability metrics

To measure the acceptability of the study from the perspectives of the parents whom we approached for consent, we first evaluated the number of parents who declined. We then analyzed the results of parent questionnaires completed at the conclusion of each video session. The parent questionnaire included the following items:

  • Did participating in this study change the way you interacted with your child?

  • Did participating in this study change the way you interacted with your nurse?

  • Did participating in this study change the way you interacted with your doctors or nurse practitioners?

  • Did participating in this study change the way you interacted with anyone else (such as other family members)?

To measure the acceptability of the study from the perspectives of the nurses who were approached for consent, we first evaluated the number of nurses who declined. We then analyzed the results of nurse questionnaires completed at the conclusion of each video session. The nurse questionnaire included the following items:

  • Did participating in this study affect your ability to care for your patients?

  • Did participating in this study affect your interactions with patients/families?

  • Did participating in this study affect your interactions with other nurses?

  • Did participating in this study affect your interactions with physicians or nurse practitioners?

Feasibility metrics

The 3 main factors contributing to feasibility were calendar time required to complete recruiting, necessary study team composition, and personnel effort required (measurable time spent working on the study).

We calculated calendar time to complete recruiting by identifying the date of the first and last patients enrolled, since we were recruiting every week in between those dates. We defined necessary study team composition as the minimum number of distinctly-skilled team members required to complete the work. Personnel effort categories included research coordination, video recording and management, video review, study oversight, and analytic support.

The personnel effort (measurable time spent) on video review was extracted from the REDCap database we used to annotate the videos. In order to calculate the time spent by each study team member, we used the Logging module of REDCap, which lists all data entries and changes made to the project, along with time stamps accurate to the minute. We used this to identify video annotation data entry “blocks” of time spent by study staff. These data entry periods were defined as consecutive alarm entries for the same video without any gaps of 30 minutes or longer between 2 alarms (corresponding to a likely break or shift in tasks). This method of calculation was intended to include the time to switch between alarms during a block of reviewing a group of alarms but was not intended to include longer breaks such as a lunch break. There was no limit to the number of data entry periods in a video; the start of a data entry period was either the first alarm in the video or the first alarm after a break of 30 minutes or more between alarms. The last alarm in a data entry period was either the final alarm in the video or the alarm that immediately preceded a break of 30 minutes or more. For this analysis, we excluded the first 25 videos because the calculation method described above was not valid for the review workflow we initially used for those sessions. We randomly selected 20 of the remaining 75 videos. Then, for each reviewer, we calculated the average time spent reviewing each alarm across all of their data entry periods. We reported the video review tasks based on the 2 main data entry forms used: “Making valid and actionable determinations for each alarm,” and “Identifying alarm responders and measuring response time.”
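As an illustration of the gap-based calculation described above, the sketch below (Python) splits one reviewer’s REDCap log time stamps for a single video into data entry blocks at gaps of 30 minutes or more and computes an average time per alarm. The helper names and the choice of divisor (total entries across blocks) are our assumptions, not details specified in the paper.

```python
# A minimal sketch of the gap-based "data entry block" calculation, assuming
# one sorted list of log time stamps per reviewer per video. The divisor used
# (total entries) is an assumption for illustration.
from datetime import datetime, timedelta

GAP = timedelta(minutes=30)  # a gap of 30 minutes or more ends a data entry block

def split_into_blocks(timestamps):
    """Split a sorted list of entry timestamps into blocks separated by gaps of >= 30 minutes."""
    blocks, current = [], [timestamps[0]]
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev >= GAP:
            blocks.append(current)
            current = []
        current.append(cur)
    blocks.append(current)
    return blocks

def mean_seconds_per_alarm(timestamps):
    """Average review time per alarm across all data entry blocks for one reviewer and video."""
    if not timestamps:
        return 0.0
    total_seconds, total_alarms = 0.0, 0
    for block in split_into_blocks(sorted(timestamps)):
        total_seconds += (block[-1] - block[0]).total_seconds()  # elapsed time within the block
        total_alarms += len(block)
    return total_seconds / total_alarms
```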

The total personnel effort (measurable time spent) for other aspects of the study, such as screening, consenting, and setting up the video, was estimated by the staff performing each task using time stamps from the electronic and paper study documents, whenever they were available.

We performed a complex data analysis that required input from a biostatistician and data manager/analyst, and estimated the time spent for each of those roles as well.

Cost metrics

In order to estimate the costs of the study, we identified the following cost categories: research coordination, video recording and management, video review, study oversight, analytic support, and equipment and storage costs. For each personnel cost, we multiplied the number of hours by the hourly rate for each individual plus a 25% fringe benefit rate. For the Research Coordinator and Video Engineer positions, we estimated $20 per hour plus fringe. For the expert physician reviewer and biostatistician advisor positions, we estimated $75 per hour plus fringe. For data management and analysis, we used the current rate for these services at our hospital’s Healthcare Analytics Unit: $73 per hour (no fringe).
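The arithmetic behind these personnel costs is straightforward; the sketch below restates it using the hourly rates reported above and, for the examples, hour totals that appear in Table 1. The helper function and variable names are ours, for illustration only.

```python
# A small sketch of the personnel cost arithmetic described in Cost metrics.
FRINGE = 0.25  # 25% fringe benefit rate applied to employee hourly rates

def personnel_cost(hours, hourly_rate, fringe=FRINGE):
    """Hours worked times hourly rate, plus the fringe benefit rate."""
    return hours * hourly_rate * (1 + fringe)

# Examples using rates from the text and hour totals reported in Table 1:
screening_cost = personnel_cost(476, 20)   # 2 hr/day x 3.5 days/wk x 68 wk at $20/hr -> $11,900
oversight_cost = personnel_cost(544, 75)   # 8 hr/wk x 68 wk at $75/hr -> $51,000
analytics_cost = 160 * 73                  # data management billed at $73/hr, no fringe -> $11,680
```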

RESULTS

We performed the study between July 22, 2014 and November 11, 2015. In order to yield 100 usable video recordings, we screened for eligible subjects approximately 3.5 days per week during that time on the inpatient medical unit described in Setting.

Parent acceptability

We approached the parents of 126 patients. Thirteen of 126 declined immediately, and 1 parent initially consented but declined before video recording began (88% consent rate). The most commonly cited reason for declining was a desire to breastfeed without being video recorded.

With respect to the 4 questions asked on the parent questionnaire regarding whether participating in this study changed the way the parents interacted with their child, staff, or others, all 112 parents answered no.

Nurse acceptability

All 38 nurses caring for the 112 patients who underwent video recording agreed to participate (100% consent rate). With respect to the 4 questions asked on the nurse questionnaire regarding whether participating in this study affected their ability to care for their patients or affected their interactions with patients, families, or staff, 1 nurse said that participating in the study affected her interactions with a patient. This was because the patient’s parents requested privacy during diaper changes, which required the nurse to either move the baby off camera or obscure views of the baby. No other nurses reported adverse impacts on their ability to care for their patients or their interactions with patients, families, or staff.

Feasibility

Calendar time

The total calendar time to complete recruiting was 68 weeks, resulting in 126 approached, 113 consented, 112 enrolled, and 100 evaluable patients with usable video recordings. One of the 113 who consented changed their mind after consenting but before recording began. Twelve of the 112 who enrolled had failures of one or more cameras or memory cards that rendered their video unusable.

The primary driver of the 68 week calendar time was a lower-than-expected availability of eligible patients who were not being discharged and were remaining on monitoring, with parents at the bedside for consent. As a result, there were many days that we screened for eligibility but had no patients to enroll. We chose to continue recruiting on the single unit, rather than expanding to many units, because we wanted to evaluate the same group of nurses across multiple sessions and determine whether the same nurses respond to alarms differently under different conditions. The estimated number of hours to perform study tasks is in Table 1.

Table 1.

Costs associated with using video to evaluate physiologic monitor alarms and responses

Item Description Total cost
Research coordination $ 22,525

Screening 2 hours/day, 3.5 days/week for 68 weeks $ 11,900
Consenting 1 hour/session, 126 sessions attempted $ 3,150
Administering questionnaires 1 hour/session, 112 patients underwent video recording $ 1,875
Record keeping and administration 2 hours/session, 112 patients underwent video recording $ 5,600

Video recording and management $ 33,600

Camera setup, recording, take down, memory card management 9 hours/session, 112 patients underwent video recording $ 25,200
Importing, processing, and editing video files 3 hours/session, 112 patients underwent video recording $ 8,400

Video review and annotation $ 27,588

Making valid and actionable determinations for each alarm 54 seconds per alarm, 11,745 alarms reviewed $ 4,400
Identifying alarm responders and measuring response time 66 seconds per alarm, 11,745 alarms reviewed $ 5,375
Expert review by Principal Investigator 132 seconds per alarm, 5,177 alarms reviewed $ 17,813

Study oversight $ 51,000

Training, supervision, and regulatory tasks performed by Principal Investigator 8 hours/week for 68 weeks $ 51,000

Analytic support $ 12,805

Biostatistician consultation 12 hours total $ 1,125
Data management and analytic services 160 hours total $ 11,680

Equipment and storage $ 24,032

Cameras 12 GoPro cameras $ 4,800
Mounts Assorted camera mounting devices $ 1,000
Editing workstation 27-inch iMac with upgraded memory and graphics card $ 3,000
External hard drives for video backups 6 Lacie 6TB Thunderbolt drives $ 2,700
Lockable equipment cart 1 Harloff 5-drawer mini line anesthesia cart $ 1,000
Memory cards 20 64GB high speed camera memory cards $ 600
Chargers, cables, and extra batteries 1 charger, cable, and extra battery for each camera $ 840
Video editing software 1 copy of Final Cut Pro X $ 300
Server space (without redundancy) 8TB of institutional server space at $51/TB/month for 24 months $ 9,792

Total $ 171,550

Necessary study team composition

In order to perform the study tasks, 3 roles were absolutely essential: (1) the Principal Investigator responsible for providing supervision and troubleshooting any challenges that arose, (2) the Research Coordinator responsible for screening, consenting, and administration of the study as well as video review, and (3) the Video Engineer responsible for all aspects of video setup, recording, and management. On days when video recording could potentially take place, all 3 had to be available in case an eligible subject consented to participate.

Total personnel effort (measurable time spent)

The measurable personnel effort to complete the data collection is shown in Table 1, by category. The Principal Investigator devoted 734 hours for expert review and study oversight, the Research Coordinator devoted 1292 hours for research coordination and video review, and the Video Engineer devoted 1344 hours for video recording and management.

Screening involved identifying patients on the unit each day who were eligible to participate per the criteria listed above under ‘Eligible patients and nurses.’ This occurred 3–4 days per week, averaging 2 hours per day (Table 1).

After we identified patients eligible to participate, in-person consent was required from a parent, often involving waiting for the parent to be present, awake, and available (not already engaged in discussions with healthcare providers). Once the parent was available, the consent form had to be thoroughly reviewed, discussed, and signed. The bedside nurse also had to be consented, involving time spent waiting until the nurse could review and sign the forms, as well as respond to the questionnaire. After consent was obtained, the forms had to be scanned and then locked in a secure cabinet. On days that a recording took place, these tasks averaged an additional 4 hours per session (Table 1).

After acquiring the video, we imported, processed, and edited the files for review. Because we were compressing multiple high definition video feeds into one screen, exporting the completed video was a time-consuming process. Typically export time was approximately equivalent to the duration of the video (5–6 hours). We would begin the export of videos at the end of the work day so they would be done by the next morning when we came in, and for this reason we did not include video export time in the total person-hours required per video.

We analyzed alarm data from the 20 randomly selected sessions (total of 2177 alarms) to determine the average time spent reviewing and annotating each alarm, including the time required to switch between alarms (Table 1). Making valid and actionable determinations for each alarm took an average of 54 seconds per alarm. Identifying alarm responders and measuring response time took an average of 66 seconds per alarm. These averages were multiplied by the total number of alarms (11,745) to estimate total time spent on review and annotation. Expert review and annotation required an average of 132 seconds per alarm, with fewer than half of the alarms reviewed requiring expert review (5177).
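As a back-of-the-envelope check (a restatement of the arithmetic above, not additional data), converting these per-alarm averages into total hours gives figures consistent with the personnel effort and costs reported in Table 1:

```python
# Seconds per alarm times number of alarms, converted to hours of review effort.
determinations_hr = 54 * 11_745 / 3600   # ~176 hours making valid/actionable determinations
responders_hr     = 66 * 11_745 / 3600   # ~215 hours identifying responders and measuring response time
expert_review_hr  = 132 * 5_177 / 3600   # ~190 hours of expert (Principal Investigator) review
print(round(determinations_hr), round(responders_hr), round(expert_review_hr))  # 176 215 190
```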

Cost

In order to accurately report the costs of performing the study, we included the research coordination, video recording and management, video review, study oversight, analytic support, and equipment and storage costs in Table 1, including the fringe benefit rates where applicable.

The total cost of the study was estimated to be $171,550, or $311 per recorded hour of video. Some costs, such as equipment and analytic support, were fixed. Others varied by the number of days we operated the study, the number of days we screened for eligible patients, the number of video recordings we performed, or the number of alarms we reviewed. Teams planning to undertake a video review project should consult our costs in Table 1 to customize the fixed and variable cost estimates for their specific needs, since the budget will vary significantly based on the sample size and study duration. In addition, since the Research Coordinator and Video Engineer were engaged in other active projects, we did not need to include “standby time” for days when video recording was planned but no eligible subjects were available.
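The per-hour figure follows directly from the totals reported above (a simple restatement, not new data):

```python
# Cost per recorded hour: total estimated study cost divided by hours of video captured.
total_cost = 171_550    # total estimated study cost, from Table 1
recorded_hours = 551    # hours of patient care recorded across the 100 sessions
print(round(total_cost / recorded_hours))  # ~311 dollars per recorded hour
```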

DISCUSSION

For evaluating alarm validity, actionability, and response time, video offers an unparalleled perspective but comes at a substantial cost. The main findings of this study are that video recording of monitor alarms and responses was (a) highly acceptable to participating nurses and parents, (b) feasible to complete using a core team of Principal Investigator, Research Coordinator, and Video Engineer, and (c) too expensive and time-intensive to complete “on a shoestring” but not beyond the reach of an external grant budget or a significant institutional financial commitment.

Few other studies have examined the acceptability, feasibility, and cost of using video to evaluate the quality and safety of care delivery. In a survey of 154 parents at a single children’s hospital in Canada, more than 90% of parents rated video recording of patient care acceptable for health care research, medical education, quality improvement, and patient safety purposes.6 Siebig and colleagues have described their in-depth video annotation methods to evaluate alarms occurring in an intensive care unit but have not reported in-depth acceptability, feasibility, and cost metrics.7,8 Surgical teams have reported the feasibility of using point-of-view cameras in the operating room with various techniques but have not addressed acceptability or costs beyond the cost of the cameras.9–11 As one approach to providing rapid video audit and feedback for hand hygiene compliance and operating room safety and efficiency, investigators at a New York hospital partnered with a third-party remote video auditing company called Arrowsight and effectively outsourced video management and review. This approach offered the advantage of real-time auditing of clinical care by remote staff managed completely by the third-party company.12–14

Given that video recording is performed in the majority of operating rooms, intensive care units, and neuroepilepsy units in North American and British pediatric hospitals,15 the opportunities for secondary use of video data already collected during the delivery of care to evaluate and improve the quality and safety of care are great. Some hospitals have taken the approach of repurposing video recorded for clinical purposes (such as video from laparoscopy16 or polysomnography17) to evaluate quality and safety, an approach that is much more cost-efficient than generating video de novo and offers substantial cost savings. For example, if the costs of video equipment, screening, consent, administering questionnaires, camera setup, and video management were eliminated in this study, the cost per recorded hour could be reduced by nearly half.
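As a rough illustration of that "nearly half" estimate, the sketch below assumes the eliminated categories correspond to particular Table 1 rows; this mapping is our assumption and is not spelled out in the text.

```python
# Rough check of the "nearly half" estimate, assuming these Table 1 rows are the ones eliminated.
eliminated = (
    24_032    # equipment and storage
    + 11_900  # screening
    + 3_150   # consenting
    + 1_875   # administering questionnaires
    + 25_200  # camera setup, recording, take down, memory card management
    + 8_400   # importing, processing, and editing video files
)
remaining_per_hour = (171_550 - eliminated) / 551
print(round(remaining_per_hour))  # ~176 dollars per recorded hour, versus ~311 originally
```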

There are a few limitations to this analysis. First, we estimated costs as accurately as possible, acknowledging that personnel costs vary across institutions. By providing the time estimates and transparent cost calculations, we hope the information will be applicable to others interested in using video to study patient safety even if the final costs differ. Second, in an effort to provide generalizable costs, we included estimated costs for analytic support and server space using our institution’s rates, even though these services were provided to us without charge (biostatistician consultation and server space) or performed by the Principal Investigator (data management and analytic services). Third, we did not include costs for equipment we purchased but did not ultimately use in the study because it worked less well than the items we reported or was deemed unnecessary after experimentation (such as the Canon XA10 camera with infrared illuminator). We hope that by providing a detailed list of the equipment we used, we can save future investigators time and effort. Fourth, we did not have a reliable method to obtain alarm response time data on nurses who were not being observed on video, so we cannot determine the magnitude of any Hawthorne effect induced by video recording. Fifth, our actionability definition was very clinically oriented, and the determination was made largely based on clinical behavior. It is possible that in some situations interventions were performed unnecessarily. A consideration for future studies is to perform a blinded secondary review of only the alarm and waveform data, without the video.

In conclusion, video recording is a highly acceptable and feasible tool to evaluate quality and safety in the hospital. At a cost of over $300 per recorded hour to capture, manage, and review, our project was expensive. Video recording should be used selectively for situations in which video can provide insights into care that are not available using other methods. When available, the secondary use of video already collected during the delivery of care can offer the ability to gain similar insights into quality and safety without the high costs of coordinating the study or managing the video.

Acknowledgments

Funding Source: Dr. Bonafide is supported by a Mentored Patient-Oriented Research Career Development Award from the National Heart, Lung, and Blood Institute of the National Institutes of Health under Award Number K23HL116427. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding organization. The funding organization had no role in the design, preparation, review, or approval of this manuscript or in the decision to submit the manuscript for publication.

Biographies

Matt MacMurchy, BA, is a freelance photographer and videographer in Philadelphia. He was the Video Engineer and a Research Assistant for this project at The Children’s Hospital of Philadelphia.

Shannon Stemler, BA, is a Research Coordinator at The Children’s Hospital of Philadelphia and a student in the Accelerated BSN/MSN Program at the University of Pennsylvania School of Nursing.

Mimi Zander, BA, is a Research Assistant at The Children’s Hospital of Philadelphia and a medical student at Touro College of Osteopathic Medicine.

Christopher Bonafide, MD, MSCE, is a physician and researcher at The Children’s Hospital of Philadelphia. He was the Principal Investigator of this study.

Footnotes

The authors have no relevant conflicts of interest or financial relationships to disclose.

References

1. Makary MA. The power of video recording: taking quality to the next level. JAMA. 2013;309:1591–1592. doi: 10.1001/jama.2013.595.
2. Bonafide CP, Zander M, Graham CS, et al. Video methods for evaluating physiologic monitor alarms and alarm responses. Biomed Instrum Technol. 2014;48(3):220–230. doi: 10.2345/0899-8205-48.3.220.
3. Bonafide CP, Lin R, Zander M, et al. Association between exposure to nonactionable physiologic monitor alarms and response time in a children’s hospital. J Hosp Med. 2015;10(6):345–351. doi: 10.1002/jhm.2331.
4. GoPro Official Website. http://www.gopro.com. Accessed August 15, 2016.
5. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377–381. doi: 10.1016/j.jbi.2008.08.010.
6. Taylor K, Vandenberg S, le Huquet A, Blanchard N, Parshuram CS. Parental attitudes to digital recording: a paediatric hospital survey. J Paediatr Child Health. 2011;47:335–339. doi: 10.1111/j.1440-1754.2010.01981.x.
7. Siebig S, Kuhls S, Imhoff M, Gather U, Scholmerich J, Wrede CE. Intensive care unit alarms—how many do we need? Crit Care Med. 2010;38:451–456. doi: 10.1097/CCM.0b013e3181cb0888.
8. Siebig S, Kuhls S, Imhoff M, et al. Collection of annotated data in a clinical validation study for alarm algorithms in intensive care—a methodologic framework. J Crit Care. 2010;25(1):128–135. doi: 10.1016/j.jcrc.2008.09.001.
9. Graves SN, Shenag DS, Langerman AJ, Song DH. Video capture of plastic surgery procedures using the GoPro HERO 3+. PRS Glob Open. 3(2):e312. doi: 10.1097/GOX.0000000000000242.
10. Makhni EC, Jobin CM, Levine WN, Ahmad CS. Using wearable technology to record surgical videos. Am J Orthop. 2014;44(4):163–166.
11. Lin LK. Surgical video recording with a modified GoPro Hero 4 camera. Clin Ophthalmol. 2016;10:117–119. doi: 10.2147/OPTH.S95666.
12. Overdyk FJ, Dowling O, Newman S, et al. Remote video auditing with real-time feedback in an academic surgical suite improves safety and efficiency metrics: a cluster randomised study. BMJ Qual Saf. 2015. doi: 10.1136/bmjqs-2015-004226. Epub ahead of print.
13. Armellino D, Hussain E, Schilling ME, et al. Using high-technology to enforce low-technology safety measures: the use of third-party remote video auditing and real-time feedback in healthcare. Clin Infect Dis. 2012;54(1):1–7. doi: 10.1093/cid/cis201.
14. Armellino D, Trivedi M, Law I, et al. Replicating changes in hand hygiene in a surgical intensive care unit with remote video auditing and feedback. Am J Infect Control. 2013;41(10):925–927. doi: 10.1016/j.ajic.2012.12.011.
15. Taylor K, Mayell A, Vandenberg S, Blanchard N, Parshuram CS. Prevalence and indications for video recording in the health care setting in North American and British paediatric hospitals. Paediatr Child Health. 2011;16:e57–e60. doi: 10.1093/pch/16.7.e57.
16. Bonrath EM, Gordon LE, Grantcharov TP. Characterising “near miss” events in complex laparoscopic surgery through video analysis. BMJ Qual Saf. 2015;24(8):516–521. doi: 10.1136/bmjqs-2014-003816.
17. Brockmann PE, Wiechers C, Pantalitschka T, Diebold J, Vagedes J, Poets CF. Under-recognition of alarms in a neonatal intensive care unit. Arch Dis Child Fetal Neonatal Ed. 2013;98(6):F524–F527. doi: 10.1136/archdischild-2012-303369.
