Author manuscript; available in PMC: 2023 Apr 1.
Published in final edited form as: Simul Healthc. 2022 Apr 1;17(2):112–119. doi: 10.1097/SIH.0000000000000610

Effect of remote cardiac monitoring system design on response time to critical arrhythmias

Noa Segall 1, Jeffrey A Joines 2, Ron’Nisha D Baldwin 3, Diane Bresch 4, Lauren G Coggins 5, Suzanne Janzen 6, Jill R Engel 7, Melanie C Wright 8
PMCID: PMC8904642  NIHMSID: NIHMS1729668  PMID: 34506366

Abstract

Introduction:

In many hospitals across the country, electrocardiograms (ECGs) of multiple at-risk patients are monitored remotely by telemetry monitor watchers in a central location. However, there is limited evidence regarding best practices for designing these cardiac monitoring systems to ensure prompt detection and response to life-threatening events. To identify factors that may affect monitoring efficiency, we simulated critical arrhythmias in inpatient units with different monitoring systems, and compared their efficiency in communicating the arrhythmias to a first responder.

Methods:

This was a multicenter cross-sectional in situ simulation study. Simulation participants were monitor watchers and first responders (usually nurses) in 2 inpatient units in each of 3 hospitals. Manipulated variables included: (1) number of communication nodes between monitor watchers and first responders; (2) central monitoring station location – on or off the patient care unit; (3) monitor watchers’ workload; (4) nurses’ workload; and (5) participants’ experience.

Results:

We performed 62 arrhythmia simulations to measure response times of monitor watchers, and 128 arrhythmia simulations to measure response times in patient care units. We found that systems in which an intermediary between monitor watchers and nurses communicated critical events had faster response times to simulated arrhythmias than systems in which monitor watchers communicated directly with nurses. Responses were also faster in units co-located with central monitoring stations than in those located remotely. As the perceived workload of nurses increased, response latency also increased. Experience did not affect response times.

Conclusions:

Although our ability to isolate the effects of these factors from extraneous influences on central monitoring system efficiency was limited, our study provides a roadmap for using in situ arrhythmia simulations to assess and improve monitoring performance.

Introduction

Every year, more than 200,000 people are treated for in-hospital cardiac arrest in the United States.1 Many of these patients have pulseless ventricular tachycardia (VT) or ventricular fibrillation (VF), and may be saved with timely treatment, including cardioversion/defibrillation. The American Heart Association recommends defibrillation therapy within 2 minutes of recognizing a cardiac arrest. Yet, in 30% of patients, defibrillation is delayed more than 2 minutes from onset, reducing their chances of survival to hospital discharge by half.2

Hospitals have implemented various solutions to ensure prompt detection of and response to cardiac arrest and other critical patient events. Often, the electrocardiograms (ECGs) of multiple at-risk patients are monitored remotely by telemetry monitor watchers in a central location (Figure 1). While professional organizations provide evidence-based guidelines for central telemetry monitoring,3 current standards are limited. Consequently, monitoring practices vary widely among hospitals, driven primarily by available technologies, system constraints, and financial considerations. For example, continuous ECG telemetry monitoring can be implemented using local (on-unit) or remote (off-unit or even off-site) monitoring stations. Watchers may communicate a critical arrhythmia directly to the patient’s nurse or through an intermediary such as a health unit clerk (HUC), and different communication technologies may be used for this purpose (e.g., pagers, overhead speakers, landline and cell phones, or bi-directional voice communication badges). The watcher-to-patient ratio can also vary, with a single watcher monitoring between 16 and 72 patients at any given time.4–6

Figure 1.


Central telemetry monitoring station.

The effect of these practices on monitoring efficiency, i.e., how quickly critical arrhythmias are detected and responded to, is largely unknown because such arrhythmias are rare and difficult to observe in clinical settings. To some extent, however, we can extend findings from studies on vigilance in simulated task performance to monitor watchers’ work. Extensive research on vigilance – our ability to discern signals (e.g., critical cardiac arrhythmias) from noise (e.g., artifacts) over prolonged periods of time – has demonstrated a decline in performance over time and identified factors that can affect this vigilance decrement. Among these are workload, false alarm rate, task duration, and environmental stressors such as noise.7 High workload, for example, was shown to degrade performance, specifically lengthening response times and increasing some types of errors, in simulated air traffic control and baggage screening tasks.8–11 However, it is often difficult to generalize from performance in lab-based, simplified tasks to real-world performance,12 where the consequences of poor performance may be catastrophic.

To study monitoring performance, we simulated cardiac arrhythmias in situ such that clinicians could not distinguish the simulated arrhythmia from an arrhythmia in a real patient.13–15 Arrhythmia simulations in patient care settings provide an opportunity to measure responses to critical cardiac events without compromising patient safety, and with a degree of control that is not feasible when studying real events. These simulations allow us to capture the critical – but often overlooked – time from arrhythmia onset to recognition, as well as the subsequent time to reach the patient. To the extent that this latency from arrhythmia onset to treatment can be minimized, patients’ odds of surviving cardiac arrest can be improved.

The goal of our research was to use simulation to identify determinants of efficient cardiac monitoring systems. To this end, we compared the process of communicating a critical arrhythmia to a first responder, usually the patient’s nurse, in 6 inpatient units with different monitoring systems, to determine the system that yields the fastest response time. Response times were defined as the time lapse between the beginning of a simulated critical arrhythmia and a first responder’s arrival in the patient’s room.

We hypothesized that response times to simulated critical arrhythmias positively correlate with the number of communication nodes between monitor watchers and first responders. This hypothesis is in line with our previous research, in which we validated in situ simulated cardiac events as a tool for measuring arrhythmia recognition and response performance. In that study, response times were shorter for patients monitored by their unit nurses than for patients monitored by remote watchers.13 With respect to the other factors mentioned above, we expected a faster response time when monitor watchers are located on the same unit as the monitored patient, rather than in a remote location. We based this hypothesis on qualitative research in which information timeliness and accuracy were perceived to be better when monitor watchers were co-located with the nursing unit than when they were in a different unit or hospital.16 Based on our lab-based study on the effects of patient load on monitor watchers’ response times to critical arrhythmias17 and on the vigilance research described above,8–11 we also expected faster response times when the workload of monitor watchers and nurses is lighter. Finally, we hypothesized that more experienced clinicians and those who had previously participated in arrhythmia simulations would respond more quickly. (See Table 2 for a summary of study hypotheses.)

Table 2.

Study hypotheses.

| Hypothesis | Dependent Variable(s) | Independent Variable |
|---|---|---|
| 1. Fewer communication nodes lead to shorter unit RTs. | Unit RT | Number of communication nodes (2 or 3) |
| 2a. When patient units are co-located with central monitoring stations, monitor watcher RTs and unit RTs are shorter. | Monitor watcher RT; Unit RT | Central monitoring station location (on or off patient care unit) |
| 2b. Monitor watchers who are monitoring fewer patients have shorter RTs. | Monitor watcher RT | Number of patients monitored |
| 2c. Monitor watchers with a lower perceived workload have shorter RTs. | Monitor watcher RT | Perceived workload (low, medium, or high) |
| 2d. Nurses with a lower perceived workload have shorter RTs. | Unit RT | Perceived workload (low, medium, or high) |
| 2e. Participants with more clinical experience have shorter RTs. | Monitor watcher RT; Unit RT | Clinical experience (< 1 year or 1 year or more) |
| 2f. Participants who have been previously exposed to arrhythmia simulations have shorter RTs. | Monitor watcher RT; Unit RT | Previous exposure to arrhythmia simulation (yes or no) |

RT – response time.

Materials and Methods

1. Settings

This study involved 2 patient care units in each of 3 participating hospitals:

  1. A large academic hospital in North Carolina (general surgery and mixed units),

  2. A small community hospital in North Carolina (progressive care and medical/oncology units), and

  3. A medium-sized community hospital in Idaho (telemetry and medical/oncology units).

Each hospital had a central monitoring station that served all of its non-critical cardiac telemetry patients, including those in the selected patient care units. In the large academic hospital (A), the monitoring station was located in a dedicated “war room” that was distant from the units. If a patient on one of the units experienced a critical arrhythmia or other urgent monitoring-related event, monitor watchers typically called an emergency (red) phone on that unit, while for less urgent issues, watchers called a regular unit phone. A health unit clerk (HUC) sitting in the unit’s reception area was assigned to respond to these calls and then relayed the information to the patient’s nurse via a call to the nurse’s mobile phone, an overhead page, or a phone call to the nursing station (see Figure 2, top). In the small community hospital (B), the monitoring station was in a small room co-located with the progressive care unit. In this hospital, nurses carried phones, which a monitor watcher could call to inform them about patient arrhythmias (see Figure 2, bottom). Finally, in the medium-sized community hospital (C), the monitoring station was located in the nursing station of the telemetry unit. Similar to the small community hospital, monitor watchers could call nurses to inform them of problems, but in urgent situations, were also often observed verbally calling out to any nearby nurse in the telemetry unit (Figure 2, bottom). Monitor watchers at the medium-sized community hospital were assigned additional, non-monitoring tasks, and were replaced by other staff, e.g., the telemetry unit charge nurse, when they stepped away from the monitoring station. In all 3 hospitals, when a critical arrhythmia occurred (e.g., VF for more than 5 seconds), monitor watchers were expected to urgently call a code response team before or while calling the patient’s nurse. In practice, however, most watchers refrained from “calling a code”, and only called the patient’s nurse. 
Table 1 summarizes the characteristics and monitoring systems for each unit in the 3 hospitals.

Figure 2.


Response processes for a patient suffering a critical arrhythmia at the large academic hospital (A; top), small community hospital (B; bottom), and medium-sized community hospital (C; bottom).

Table 1.

Participating patient care units and their monitoring systems.

| Site | Unit | Number of beds | Average percent of cardiac telemetry beds* | Number of communication nodes (2 = watcher to nurse; 3 = watcher to HUC to nurse) | Monitor watcher location | Average (min, max) patient load for monitor watchers** |
|---|---|---|---|---|---|---|
| Large Academic Hospital (A) | 1. General Surgery | 32 | 13% | 3 | Remote | 27 (13, 35) |
| | 2. Urology, Otology, Ophthalmology, Gynecology, Plastic Surgery (mixed) | 32 | 9% | 3 | Remote | |
| Small Community Hospital (B) | 3. Progressive Care | 33 | 56% | 2 | Local | 25 (11, 35) |
| | 4. General Medicine/Oncology | 45 | 22% | 2 | Remote | |
| Medium-sized Community Hospital (C) | 5. Telemetry | 24 | 80% | 2 | Local | 33 (20, 44) |
| | 6. General Medicine/Oncology | 40 | 9% | 2 | Remote | |

HUC – health unit clerk.

* Data obtained through observations.

** Data obtained from post-simulation surveys.

2. In situ arrhythmia simulations

To test system response times to critical arrhythmias, we conducted in situ unannounced simulations of cardiac arrest at each hospital’s central monitoring station and in the 6 patient care units. Shift lengths for monitor watchers, nurses, and HUCs in the 3 hospitals were typically 12 hours. Arrhythmia simulations were generally performed at least 30 minutes after the beginning of shifts, to allow time for participants to develop a vigilance decrement.18 Simulation participants were informed of the research and the simulation procedures, but were not told when a simulation would occur. The study was approved by the Institutional Review Board of each participating hospital for research involving the use of human subjects.

Central Monitoring Stations.

To measure response times of monitor watchers, a research nurse connected an ECG rhythm simulator from a patient room into the hospital’s network such that the simulated signal appeared on the monitor watcher’s display as a normal ECG would look for that patient. Using the simulator, the nurse mimicked the patient’s baseline rhythm, then simulated a few premature ventricular contractions (PVCs) before initiating VT or VF (Figure 3). During the simulation, the patient was monitored at the bedside by a nurse proficient in cardiac monitoring using a local monitor. Using a stopwatch, a confederate at the central monitoring station measured the time from the start of the simulated VT/VF until the monitor watcher called the nursing unit. If a call was not placed within 5 minutes, the simulation was stopped.

Figure 3.


Simulated VF patient (number 15) on a monitor watcher’s display.

Patient Care Units.

To measure response times in the nursing units, a confederate monitor watcher called the unit, i.e., the HUC in the large academic hospital (A) or the patient’s nurse in the small and medium-sized community hospitals (B and C), and stated that a patient was in VT or VF. Using a stopwatch, we measured the time from the phone call until a first responder entered the patient’s room. The response time was documented as 5 minutes if no clinician arrived in the patient’s room within that timeframe, and the simulation was terminated.

Following each arrhythmia simulation, a short debriefing was conducted with the participants – the monitor watcher, HUC, and/or first responder to the patient’s room – to explain the study goals. They also completed a survey that asked for their demographic information, current patient load, clinical experience, and previous arrhythmia simulation experience, and whether they perceived the simulation to be a real event. The participants received a gift card as compensation for their effort.

3. Measures

The primary outcome measure was clinician response time, in seconds, to the simulated arrhythmia. Monitor watcher response time was defined as the time lapse from arrhythmia start until the watcher picked up the phone to call the nursing unit. Unit response time was defined as the time lapse from initiation of the phone call by the monitor watcher until a first responder arrived in the patient’s room.
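The two latencies above were measured with a stopwatch and, per the simulation protocol, capped at 5 minutes when no one responded. As a minimal sketch of this computation (the timestamps and function name below are hypothetical, not part of the study’s instrumentation):

```python
from datetime import datetime
from typing import Optional

CAP_SECONDS = 300.0  # simulations were stopped after 5 minutes with no response


def response_time(start: datetime, response: Optional[datetime]) -> float:
    """Seconds from event start (arrhythmia onset, or the watcher's call)
    to the response; recorded as the 5-minute cap when no one responded."""
    if response is None:
        return CAP_SECONDS
    return min((response - start).total_seconds(), CAP_SECONDS)


# Hypothetical example: watcher calls the unit 42 s after arrhythmia onset
onset = datetime(2021, 3, 1, 10, 0, 0)
call = datetime(2021, 3, 1, 10, 0, 42)
print(response_time(onset, call))  # 42.0
print(response_time(onset, None))  # 300.0 (no response within 5 min)
```

Capping unresolved simulations at 300 seconds matches the censoring rule described in the Methods and Limitations sections.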

Based on our hypotheses, independent variables included: (1) number of communication nodes between monitor watchers and first responders (2 or 3 nodes, primary hypothesis); (2) monitor watchers’ location – on or off the unit in which the simulated patient was located; (3) monitor watchers’ workload, both actual (number of patients being monitored during the simulation) and perceived (self-scored as low, medium, or high); (4) nurses’ perceived workload (self-scored as low, medium, or high); and (5) participants’ clinical experience (< 1 year or 1 year or more) and experience with arrhythmia simulations. Study hypotheses are expressed in terms of these variables in Table 2.

4. Statistical analysis

We analyzed our data using a linear mixed-effects model, which removes variation due to both fixed and random effects, and allows the handling of non-independent data (e.g., response times within a unit within a hospital). Fixed effects included (1) number of communication nodes, (2) monitor watchers’ location, (3) patient load, (4) perceived workload, and (5) experience. We controlled for patient care units (nested within hospitals) as random effects.

A mixed model analysis of variance (ANOVA) was used to assess the relationship between the dependent variable, response time, and the independent variables. A p value of 0.05 was considered significant. A significant test result was followed up with a Steel-Dwass non-parametric multiple comparisons test,19 where needed.
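To illustrate the structure of such a model (this is not the authors’ code; the data below are randomly generated, and the variable names are hypothetical), a linear mixed-effects model with a random intercept per patient care unit could be fit as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Hypothetical response-time data: 2 units in each of 3 hospitals,
# 20 simulations per unit (mirroring the study's design)
rows = []
for hospital, nodes in [("A", 3), ("B", 2), ("C", 2)]:
    for unit in (1, 2):
        local = int(unit == 1 and hospital != "A")  # co-located units in B and C
        unit_effect = rng.normal(0, 10)             # random intercept per unit
        for _ in range(20):
            rt = 45 - 10 * local + 5 * (nodes - 2) + unit_effect + rng.normal(0, 20)
            rows.append({"rt": max(rt, 0.0), "nodes": nodes,
                         "local": local, "unit": f"{hospital}{unit}"})
df = pd.DataFrame(rows)

# Fixed effects: communication nodes and station location;
# random effect: patient care unit (unit IDs are unique across hospitals,
# so nesting within hospital is implicit in the grouping variable)
fit = smf.mixedlm("rt ~ C(nodes) + C(local)", df, groups=df["unit"]).fit()
print(fit.params)
```

With only 6 grouping units the random-effect variance is weakly identified, which is one reason the authors’ model also treats hospital as a nesting level.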

Based on data from a previous study comparing monitoring methods,13 a sample size of 20 arrhythmia simulations in each central monitoring station and hospital unit was calculated to have 80% power to detect a significant difference in mean response times between units. Data analyses were performed using JMP Pro version 15 (SAS Institute, Cary, NC, USA).
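The quoted sample size can be reproduced, to a first approximation, with the standard two-sample normal-approximation formula; the effect size used below (about 0.9 SD) is back-calculated for illustration only, since the paper does not report it:

```python
from math import ceil
from statistics import NormalDist


def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for a two-sided, two-sample comparison of means
    (normal approximation): n = 2 * ((z_{1-alpha/2} + z_power) / d) ** 2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # about 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)


# A standardized mean difference of roughly 0.9 SD yields 20 simulations per group
print(n_per_group(0.9))  # 20
```

This suggests the design was powered to detect only fairly large between-unit differences, consistent with the power concern raised in the Limitations.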

Results

In all, 190 arrhythmia simulations (62 monitor watcher and 128 unit simulations) were performed. Simulation participant characteristics are summarized in Table 3. Response times for the central monitoring stations in the 3 hospitals are presented in Figure 4. A non-parametric ANOVA found these response times to be significantly different across hospitals (p=0.0162, ηp²=0.157), with post-hoc tests showing that response times at the medium-sized community hospital (C) were shorter than at the small community hospital (B). Response times for the 6 patient care units, shown in Figure 5, are significantly different across units (p=0.0059, ηp²=0.776). Post-hoc tests showed that response times in the medicine/oncology units in hospitals B and C were significantly longer than in the mixed, progressive care, and telemetry units, and that the response times in the general surgery unit were significantly longer than in the progressive care unit.

Table 3.

Simulation participant characteristics.

| Site | Unit | Number of simulations | Percent of participants with < 1 year of clinical experience | Average patient load during simulation (number of patients for whom the participant was caring) | Average perceived patient load during simulation (1 = low, 3 = high) | Percent of participants previously exposed to a simulation | Percent of participants who perceived the simulation as a real event |
|---|---|---|---|---|---|---|---|
| Large Academic Hospital (A) | General surgery | 20 | 13.3% | 3.6 | 1.9 | 53.3% | 93.3% |
| | Mixed | 19 | 26.7% | 4.2 | 1.9 | 26.7% | 86.7% |
| | Central Monitoring Station | 22 | 0% | 28.1 | 2.5 | 81.8% | 83.3% |
| Small Community Hospital (B) | Progressive care | 24 | 6.3% | 4.5 | 1.9 | 12.5% | 93.3% |
| | Medicine/oncology | 20 | 17.6% | 4.2 | 2.4 | 29.4% | 82.4% |
| | Central Monitoring Station | 20 | 33.3% | 27.8 | 2.2 | 94.7% | 94.7% |
| Medium-sized Community Hospital (C) | Telemetry | 21 | 11.1% | 3.9 | 1.9 | 83.3% | 89.5% |
| | Medicine/oncology | 24 | 19% | 4.5 | 2 | 68.1% | 100% |
| | Central Monitoring Station | 20 | 31.6% | 33.3 | 2.3 | 73.7% | 100% |

Figure 4.


Central monitoring station response times to simulated arrhythmias (±1 standard deviation) by hospital. A – Large academic hospital, B – Small community hospital, C – Medium-sized community hospital.

Figure 5.


Patient care unit response times to simulated arrhythmias (±1 standard deviation) by hospital and unit. A – Large academic hospital, B – Small community hospital, C – Medium-sized community hospital.

Hypothesis 1: Fewer communication nodes lead to shorter unit response times.

Our primary hypothesis was that unit response times decrease as the number of communication nodes between monitor watchers and first responders decreases. However, based on the linear mixed-effects model, while controlling for the random effects of unit, telemetry location, and hospital, this hypothesis was not supported. In fact, we found that response times were shorter in the large academic hospital (A) where monitor watchers called HUCs who then called unit nurses to report arrhythmias (mean 39, SD 40 seconds), compared to response times in the other hospitals (mean 54, SD 65 seconds) where monitor watchers called nurses directly (Figure 5; p=0.035, ηp²=0.8681).

Hypothesis 2a: When patient units are co-located with central monitoring stations, monitor watcher response times and unit response times are shorter.

We also hypothesized that response times are shorter when monitor watchers are located on the same unit as the monitored patient, rather than in a remote location. Based on the linear mixed-effects model, while controlling for the random effects of unit and hospital, patient care unit response times were significantly affected by location (p<0.0047, ηp²=0.04), with shorter response times observed in units co-located with the monitoring stations (i.e., progressive care and telemetry units; mean 39, SD 50 seconds vs. mean 55, SD 62 seconds in units remote from central monitoring stations). Central monitoring station response times were not, however, affected by location (p=0.63), i.e., monitor watchers responded as quickly to arrhythmias occurring in co-located units (progressive care and telemetry units) as to arrhythmias occurring in units located remotely (general surgery, mixed, and the 2 medicine/oncology units).

Hypothesis 2b: Monitor watchers who are monitoring fewer patients have shorter response times.

Hypothesis 2c: Monitor watchers with a lower perceived workload have shorter response times.

Hypothesis 2d: Nurses with a lower perceived workload have shorter response times.

Based on the linear mixed-effects model, while controlling for the random effects of hospital, unit, and telemetry location, monitor watchers’ actual patient load did not affect central monitoring response times (p=0.36). (We did not test patient load effects on unit response times because nurses’ loads were relatively uniform, with an average of 4.2 patients and a median of 4 patients, while some responders were nursing assistants or charge nurses, with different patient loads and care roles.) Monitor watchers’ response times were also not affected by perceived workload (scored by simulation participants as low, medium, or high; p=0.14). However, unit response times were affected by perceived workload (p=0.0159, ηp²=0.08). Unit response time means were 71 seconds (SD 82) when workload was high and 44 seconds (SD 45) when workload was low or medium.

Hypothesis 2e: Participants with more clinical experience have shorter response times.

Hypothesis 2f: Participants who have been previously exposed to arrhythmia simulations have shorter response times.

Most participants were experienced – 81.5% had more than 1 year of clinical experience – but experience did not affect unit or central monitoring station response times (p=0.15 and 0.84, respectively). Finally, most participants – 56% of unit responders and 70% of monitor watcher responders – had previous exposure to arrhythmia simulations. Nevertheless, 91.4% of responders perceived the arrhythmia to be real, and neither unit response times (p=0.82) nor central monitoring station response times (p=0.39) were significantly affected by participants’ previous experience with these types of simulations.

Discussion

Our findings have several implications for the design of in-hospital patient monitoring systems. First, nursing unit responses to critical arrhythmias were faster when monitor watchers called a unit’s HUC who then contacted nurses, than when monitor watchers called the patient’s nurse directly. There are several potential reasons for this finding. HUCs were often observed calling the nursing station or using an overhead page, rather than paging the patient’s nurse directly. In these instances, the available nurse closest to the patient’s room was typically the first to respond. Since the responder was the closest available nurse rather than the patient’s assigned nurse, who may have been busy elsewhere, this practice could have contributed to shorter response times. Another contributor to the quick responses could be that monitor watchers always called the same phone numbers (the unit HUCs) and did not need to search for the name and phone number of a specific nurse. Likewise, HUCs did not need to search for a specific nurse’s number, which would have been another potential source of inefficiency. Finally, availability to respond to the unit phone remained constant for HUCs, whereas the variable availability of nurses could have contributed to longer response times.

It is important to note that other confounding factors may also have contributed to this finding. For example, units that had an HUC were part of hospital A, a large academic center with more complex and sicker patients than the 2 community hospitals (B and C). Therefore, patient acuity may have impacted response time. Most likely, response times were driven by a combination of factors including HUC involvement, patient acuity, and additional factors that may distinguish the large academic hospital from the 2 community hospitals.

We also found that the location of the central monitoring station affected nurses’ response times, with shorter response times in the 2 units co-located with the monitoring station rather than in remote locations. It is likely that monitor watchers’ direct access to the nurses, and familiarity with them, contributed to the timeliness of communication. This setup is also perceived to improve communication accuracy and care coordination.16 However, several confounding factors may also have contributed to this finding. For example, in the hospitals we studied, monitoring stations were located within units that accommodate patients with more severe cardiac problems. In these units, nurses may experience a heightened sense of urgency in responding to critical arrhythmias20 and may be better trained to recognize and address them. These units were also smaller in size and capacity than their remote counterparts (Table 1).

Similarly, we hypothesized that monitor watcher response times are shorter when they are calling a nurse in their co-located unit, due to proximity, rather than a nurse in a remotely located unit. Contrary to our expectations, however, monitor watcher response times were not affected by proximity to the patient care units. Monitor watcher communication methods varied by unit and hospital. In the 2 community hospitals (B and C), they were expected to call the nurse assigned to the patient experiencing an arrhythmia. In practice, in units co-located with the central monitoring station, they often verbally called out to the patient’s assigned nurse, or any nearby nurse. This did not, however, significantly reduce monitor watcher average response times. Directly calling the patient’s nurse (community hospitals B and C) required monitor watchers to locate the name of the assigned nurse, then the nurse’s phone number (and sometimes the name and number of the patient’s backup nurse or charge nurse, if the patient’s nurse was unable to respond). This task did not consume more time on average than calling a unit’s HUC, the practice for watchers in the large academic hospital (A), many of whom had memorized these numbers. However, since the community hospitals also included units co-located with the monitoring stations, average response times may have been shorter due in part to the common practice of verbally calling out to nearby nurses. Thus, we cannot rule out the possibility that monitor watcher response times were affected by the time to locate the assigned nurse’s name and phone number.

Based on our previous study,17 we expected watchers who monitored a larger number of patients to have longer response times to arrhythmia alarms. Findings from the current study did not bear this out, in part due to missing data – of 62 arrhythmia simulations, monitor watchers only reported their patient load in 36 instances and their perceived workload in 41 instances. However, for nurses, as perceived workload increased, response latency also increased. This underscores the notion that workload is a function not only of the number of patients assigned to a clinician, but also of the complexity of their care and other job responsibilities. One other confounding factor is that slower responders may have reported a higher workload to justify their longer response times.

A large majority of participants perceived the simulated arrhythmias to be real, and participants who had previously been exposed to an arrhythmia simulation responded as quickly as those for whom the simulation was a first experience. In addition, response times measured in this study are in line with those measured in other studies.13,17,20 These findings provide evidence for the construct validity of arrhythmia simulations for measuring real arrhythmia recognition and response performance. It bears mentioning, however, that in situ arrhythmia simulations are not a simple, risk-free tool for assessing monitoring performance. Careful planning and control are required to protect patient safety and the professional reputation of participating clinicians.

Our hypothesis that more experienced clinicians have shorter response times was not upheld. Contrary to our expectations, and to the findings of a recent study of nurse response times,20 we did not find that clinical experience affected response latency, possibly due to the relatively small sample of clinicians with < 1 year of experience (28 of 151 survey responders).

This study has several limitations. First, as previously mentioned, an important limitation is the multiple and often unknown factors that may have contributed to the arrhythmia response times we observed. For example, we do not know the extent to which differences between hospitals, nursing units, and health system safety cultures contributed to differences in response times. We were not able to isolate and control for the effects of such extraneous factors. Second, in light of the large variability in response times (Figures 4 and 5), our study may have been underpowered to detect differences between monitoring practices. On 5 occasions, when a nurse was already in the room of the patient for whom an arrhythmia was simulated, the response time was recorded as 0 seconds. To minimize disruption to patient care, we also stopped simulated arrhythmias that received no response within 5 minutes; this happened once with a monitor watcher and 4 times with nurses. Although this time limit was appropriate when clinicians did not plan to respond to the arrhythmia at all (e.g., a monitor watcher who perceived the arrhythmia to be artifact, or a nurse who mistakenly thought she was already in the room of the patient for whom the arrhythmia was called), it may not have sufficed for scenarios in which clinicians were busy or unavailable to respond immediately (although hospital protocols required them to call for help in these situations).

Conclusions

The practice of remote centralized cardiac monitoring is widespread,21 despite scant evidence to support its use.22 However, little is known about factors that contribute to or inhibit the performance of remote monitoring systems. In this study, we found that systems with an intermediary who acted between monitor watchers and nurses to communicate critical events were more efficient, i.e., had shorter response times to simulated arrhythmias, than systems in which monitor watchers communicated directly with nurses. Responses were also faster in units co-located with the central monitoring stations than in those that were located remotely. The patient load of monitor watchers did not impact response times. However, as their perceived workload and nurses’ perceived workload increased, response latency also increased. Finally, response times were not affected by clinical experience or by previous exposure to arrhythmia simulations. While limited in our ability to isolate the effects of these factors, our study provides initial insights into methods for improving central monitoring system efficiency. In addition, it provides a roadmap for using in situ arrhythmia simulations to assess and improve monitoring performance.

Acknowledgments

We would like to thank L. Kristin Newby, Michael Chrestensen, Tamara Mueller, Eric McClenny, Reid McCabe, Brandon King, Daniel Weikel, and the members of the Duke University Health System Cardiac Monitoring and Monitoring Oversight Committees for their support of this work.

Financial Disclosure Summary

This project was supported by grant number R01HS023387 from the Agency for Healthcare Research and Quality. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality. Noa Segall, Jeff Joines, Ron’Nisha Baldwin, Diane Bresch, Lauren Coggins, Suzanne Janzen, and Melanie C. Wright are or were supported by this grant. The authors have no additional conflicts of interest to disclose.

Contributor Information

Noa Segall, Department of Anesthesiology, Duke University School of Medicine.

Jeffrey A. Joines, Textile Engineering, Chemistry, and Science, North Carolina State University.

Ron’Nisha D. Baldwin, Previously, Duke University Health System.

Diane Bresch, Duke Office of Clinical Research, Duke University School of Medicine.

Lauren G. Coggins, Rush University College of Nursing.

Suzanne Janzen, Saint Alphonsus Regional Medical Center.

Jill R. Engel, Duke University Health System.

Melanie C. Wright, College of Pharmacy, Idaho State University.

References

1. Merchant RM, Yang L, Becker LB, et al. Incidence of treated cardiac arrest in hospitalized patients in the United States. Crit Care Med. 2011;39(11):2401–2406. doi: 10.1097/CCM.0b013e3182257459

2. Chan PS, Krumholz HM, Nichol G, Nallamothu BK, American Heart Association National Registry of Cardiopulmonary Resuscitation Investigators. Delayed time to defibrillation after in-hospital cardiac arrest. N Engl J Med. 2008;358(1):9–17. doi: 10.1056/NEJMoa0706467

3. Sandau KE, Funk M, Auerbach A, et al. Update to practice standards for electrocardiographic monitoring in hospital settings: a scientific statement from the American Heart Association. Circulation. 2017;136(19):e273–e344. doi: 10.1161/CIR.0000000000000527

4. Reilly T, Humbrecht D. Fostering synergy: a nurse-managed remote telemetry model. Crit Care Nurse. 2007;27(3):22–33.

5. Thomas TL. Who's watching the cardiac monitor? Does it matter? Nursing (Lond). 2011;41 Suppl:8–10. doi: 10.1097/01.NURSE.0000394519.21592.53

6. Bonzheim KA, Gebara RI, O'Hare BM, et al. Communication strategies and timeliness of response to life critical telemetry alarms. Telemed E-Health. 2011;17(4):241–246. doi: 10.1089/tmj.2010.0139

7. Mackie RR. Vigilance research—are we ready for countermeasures? Hum Factors. 1987;29(6):707–723.

8. Majumdar A, Ochieng W. The factors affecting air traffic controller workload: a multivariate analysis based upon simulation modelling of controller workload. Transp Res Rec. 2002;1788:58–69.

9. Bravo MJ, Farid H. Search for a category target in clutter. Perception. 2004;33(6):643–652. doi: 10.1068/p5244

10. Madhavan P, González C. Effects of sensitivity, criterion shifts, and subjective confidence on the development of automaticity in airline luggage screening. Proc Hum Factors Ergon Soc Annu Meet. 2006;50:334–338. doi: 10.1177/154193120605000326

11. Wolfe JM, Horowitz TS, Kenner NM. Rare items often missed in visual searches. Nature. 2005;435(7041):439–440. doi: 10.1038/435439a

12. Wickens CD, Rice S, Keller D, Hutchins S, Hughes J, Clayton K. False alerts in air traffic control conflict alerting system: is there a "cry wolf" effect? Hum Factors. 2009;51(4):446–462. doi: 10.1177/0018720809344720

13. Wright MC, Dorsey KW, DeLong R, et al. Simulation of cardiac arrhythmias in hospitalized patients to measure and improve response time. Circulation. 2012;126:A320.

14. Kobayashi L, Parchuri R, Gardiner FG, et al. Use of in situ simulation and human factors engineering to assess and improve emergency department clinical systems for timely telemetry-based detection of life-threatening arrhythmias. BMJ Qual Saf. 2013;22(1):72–83. doi: 10.1136/bmjqs-2012-001134

15. Segall N, Franklin R, Wright MC. Cardiac arrhythmia simulations for comparing remote telemetry monitoring systems. Proc Int Symp Hum Factors Ergon Health Care. 2018;7(1):129–131. doi: 10.1177/2327857918071033

16. Ardoin W-J, Hoyle WS, Bewaji O, et al. Real-time remote physiological monitoring: the role of communication in three paradigms of inpatient care. Proc Hum Factors Ergon Soc Annu Meet. 2016;60(1):618–622. doi: 10.1177/1541931213601141

17. Segall N, Hobbs G, Granger CB, et al. Patient load effects on response time to critical arrhythmias in cardiac telemetry: a randomized trial. Crit Care Med. 2015;43(5):1036–1042. doi: 10.1097/CCM.0000000000000923

18. Warm JS, Parasuraman R, Matthews G. Vigilance requires hard mental work and is stressful. Hum Factors. 2008;50(3):433–441. doi: 10.1518/001872008X312152

19. Critchlow DE, Fligner MA. On distribution-free multiple comparisons in the one-way analysis of variance. Commun Stat - Theory Methods. 1991;20(1):127–139. doi: 10.1080/03610929108830487

20. Bonafide CP, Localio AR, Holmes JH, et al. Video analysis of factors associated with response time to physiologic monitor alarms in a children's hospital. JAMA Pediatr. 2017;171(6):524–531. doi: 10.1001/jamapediatrics.2016.5123

21. Ruppel H, Funk M, Clark JT, et al. Attitudes and practices related to clinical alarms: a follow-up survey. Am J Crit Care. 2018;27(2):114–123. doi: 10.4037/ajcc2018185

22. Funk M, Parkosewich JA, Johnson CR, Stukshis I. Effect of dedicated monitor watchers on patients' outcomes. Am J Crit Care. 1997;6(4):318–323.