Telemedicine Journal and e-Health. 2015 Mar 1;21(3):151–156. doi: 10.1089/tmj.2014.0064

Robotic Telepresence Versus Standardly Supervised Stroke Alert Team Assessments

Cumara B O'Carroll 1, Joseph G Hentz 2, Maria I Aguilar 1, Bart M Demaerschalk 1
PMCID: PMC4365442  PMID: 25490742

Abstract

Background: Telemedicine has created access to emergency stroke care for patients in all communities, regardless of geography. We hypothesized that there is no difference in speed of assessment between vascular neurologist (VN) robotic telepresence-supervised and standard VN-supervised stroke alert patient assessments in a metropolitan primary stroke center. Materials and Methods: A retrospective stroke alert database was used to identify all robotic telepresence and standardly supervised stroke alert patient assessments at a primary stroke center emergency department from 2009 to 2012. The primary outcome measure was the duration of assessment from stroke alert activation to treatment or downgrade. Results: The sample size was 196 subjects. The mean duration of time from stroke alert activation to initiation of intravenous (IV) thrombolytic treatment or downgrade was 8.6 min longer in the robotic group than in the standard group (p=0.03). Among the subgroup of acute ischemic stroke patients treated with IV thrombolysis, the mean duration of time from activation to treatment was 18 min longer in the robotic group than in the standard group (p=0.01). Safety outcomes, including thrombolysis protocol violations (0% versus 1%), post-thrombolysis symptomatic intracranial hemorrhagic complications (3% versus 1%), and death during hospitalization (8% versus 6%), were low in the robotic group and not significantly different from those in the standard group. Conclusions: Standard VN-supervised acute stroke team assessments were swifter than those supervised by robotic telepresence. Safety outcomes of robotic telepresence-supervised stroke alerts were excellent, and this modality may be preferred in circumstances when a VN is not immediately available on-site.

Key words: telemedicine, telestroke, stroke, thrombolytic therapy, tissue plasminogen activator, remote consultation, stroke management

Introduction

The phrase “time is brain”1 emphasizes the importance of timely implementation of acute ischemic stroke intervention. Within the United States there is variability in access to acute stroke care, given limited numbers and uneven geographic distribution of certified primary stroke centers and comprehensive stroke centers, as well as insufficient stroke specialists.2 It is estimated that 40% of the U.S. population resides in counties without access to acute stroke care.3 This awareness as well as technologic advances resulted in the development of “telestroke,” originally described in 1999 by Levine and Gorman.4 Telestroke is the use of telemedicine, such as telephone, the Internet, and videoconferencing, to exchange medical information from one geographic location to another for the purpose of providing acute stroke care. The implementation of telemedicine for stroke has played a vital role in creating universal access to emergency stroke care for all patients, regardless of geographic location or hospital resources.2,5

The number of telestroke networks in the world is growing, with most operating in North America and Europe.2 Recognizing the time-sensitive nature of acute stroke, the American Stroke Association developed a stroke “chain of survival”2,6 to ensure the best clinical outcomes. The success of any telestroke network also depends on explicitly delineating the roles and responsibilities of all providers during the telestroke consultation. The American Heart Association/American Stroke Association published an article in 2009 providing a comprehensive review of the scientific data evaluating the use of telemedicine for stroke care delivery. Among the recommendations with level 1A evidence was “The National Institutes of Health Stroke Scale (NIHSS)-telestroke examination, when administered by a stroke specialist using high-quality videoconferencing, is recommended when an NIHSS-bedside assessment by a stroke specialist is not immediately available for patients in the acute stroke setting, and this assessment is comparable to an NIHSS-bedside assessment.”7

The telestroke experience in Arizona began in 2005 with Mayo Clinic neurologists reviewing available telestroke technologies and visiting established telestroke networks, before developing a new network, Stroke Telemedicine for Arizona Rural Residents (STARR).8 A statewide needs assessment, administered to all remotely located hospitals with emergency departments in the state of Arizona, confirmed the need for increased access to stroke expertise among Arizona's rural communities. In July 2009, robotic telepresence stroke alert assessments began within the Mayo Clinic Hospital, Phoenix, AZ. Robotic telepresence has been used since then for the purpose of providing resident physicians with appropriate supervision during acute stroke assessments, when the consulting vascular neurologist (VN) is not on-site.

In this study, we aimed to assess the difference in speed of assessment between VN robotic telepresence and standard VN-supervised stroke alert patients in a metropolitan primary stroke center, with the hypothesis that there is no difference.

Materials and Methods

Study Population, Design, and Data Collection

Subjects were identified from the Mayo Clinic Hospital in Phoenix stroke alert database. We conducted a retrospective review of the stroke alert database, identifying 98 subjects assessed with robotic telepresence from 2009 to 2012 and selecting a random sample of 98 subjects who received usual stroke alert care during the same period for statistical comparison. Patients meeting stroke alert criteria who presented to the Mayo Clinic Hospital emergency department (ED) and underwent a stroke team assessment were eligible for our analysis. Specifically, eligible patients were men and women 18 years of age or older with a Face-Arm-Speech-Time (F-A-S-T)9,10 score of ≥1 within 12 h of symptom onset or presenting with focal neurologic signs, regardless of F-A-S-T score. All eligible patients had to have undergone evaluation by a neurologist while in the Mayo Clinic Hospital ED and had to have data documented in the electronic medical records. Subjects were ineligible if they were younger than 18 years of age, presented without focal neurological signs, presented with focal neurological signs of greater than 12 h in duration, or if the stroke alert evaluation occurred in a hospital inpatient unit other than the ED.

The robotic telepresence group included all robotic telepresence stroke alert patients at the Mayo Clinic Hospital from August 2009 to December 2012. The standard group included a random sample of standard stroke alert patients at the Mayo Clinic Hospital during the same period. The sample was stratified by year. The number of standard patients selected from each year was equal to the number of telepresence patients for that year. A random sample was selected for each stratum by using a computer random number generator. Demographic and clinically relevant data were extracted directly from patients' medical records by one reviewer (C.B.O'C.). Any discrepancy or uncertainty was resolved by discussion with the other authors.
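As an illustration of the year-stratified random sampling described above, the following minimal Python sketch draws, for each calendar year, as many standard-care stroke alerts as there were robotic telepresence alerts in that year. The data frame and column names (group, year) are hypothetical assumptions for illustration and do not reflect the actual stroke alert database schema.

import pandas as pd

def sample_standard_controls(alerts: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    # Assumed columns: "group" ("robotic" or "standard") and "year".
    robotic = alerts[alerts["group"] == "robotic"]
    standard = alerts[alerts["group"] == "standard"]
    samples = []
    for year, n_robotic in robotic.groupby("year").size().items():
        # Draw the same number of standard patients as robotic patients for this year.
        pool = standard[standard["year"] == year]
        samples.append(pool.sample(n=n_robotic, random_state=seed))
    return pd.concat(samples, ignore_index=True)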

Assessment of Outcomes

The primary outcome measured was the speed of stroke alert assessment, defined as the duration from stroke alert activation to the decision to treat or downgrade (i.e., the point in time at which the stroke team determined that a patient was not eligible for acute stroke therapy [intravenous (IV) thrombolysis and/or endovascular intervention]). Data were extracted regarding stroke alert time, time of patient arrival in the ED, time of the stroke team's initial assessment, time of stroke alert downgrade, and time of IV tissue plasminogen activator (tPA) administration or endovascular intervention. Secondary outcomes included morbidity and mortality during the acute care phase of hospitalization, comparing usual care subjects with subjects evaluated via stroke consultant robotic telepresence. Morbidity was determined by NIHSS at admission and at the time of discharge, length of acute care hospital stay, disposition at the time of discharge (home, skilled nursing facility, acute inpatient rehabilitation unit, or hospice), and adverse outcomes after tPA administration (symptomatic intracerebral hemorrhage, asymptomatic intracerebral hemorrhage, systemic hemorrhage, or angioedema). Mortality during the hospitalization was recorded. Additionally, we documented whether patients receiving tPA did so within the 3-h window or the extended 4.5-h window. A tPA protocol violation was defined as a patient who was initially within the time window for thrombolysis but subsequently became ineligible because of a delay in assessment.
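As a sketch of how the primary outcome could be derived from the extracted time stamps, the Python fragment below computes minutes from stroke alert activation to treatment or downgrade. The column names (alert_activation_time, treatment_time, downgrade_time) are illustrative assumptions, not the actual fields of the study database.

import pandas as pd

def assessment_duration_min(df: pd.DataFrame) -> pd.Series:
    # Minutes from stroke alert activation to treatment or downgrade.
    activation = pd.to_datetime(df["alert_activation_time"])
    treatment = pd.to_datetime(df["treatment_time"])    # IV tPA or endovascular start
    downgrade = pd.to_datetime(df["downgrade_time"])    # decision not to treat
    endpoint = treatment.fillna(downgrade)              # whichever end point applies
    return (endpoint - activation).dt.total_seconds() / 60.0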

Statistical Analysis

The primary outcome measure was the duration of stroke alert assessment. The mean time for the robotic group was compared with that of the standard group by using the two-sample t test. Interaction and confounding were assessed by using a general linear model. Secondary measures were assessed by using the two-sample t test or Pearson chi-squared test. Fisher's exact test was used instead of the Pearson chi-squared test if the minimum expected cell count was less than 5. The group×type interaction for the percentage of patients discharged home was assessed by using multiple logistic regression.
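The comparisons described above could be carried out along the following lines. This is a hedged sketch using SciPy and statsmodels with placeholder variable and column names (robotic_min, standard_min, table_2x2, discharged_home, group, acute_stroke); it is not the study's actual analysis code.

from scipy import stats
import statsmodels.formula.api as smf

def compare_durations(robotic_min, standard_min):
    # Primary outcome: two-sample t test on assessment duration in minutes.
    return stats.ttest_ind(robotic_min, standard_min)

def compare_categorical(table_2x2):
    # Secondary categorical outcomes: Pearson chi-squared, falling back to
    # Fisher's exact test when the minimum expected cell count is below 5.
    chi2, p, dof, expected = stats.chi2_contingency(table_2x2, correction=False)
    if expected.min() < 5:
        _, p = stats.fisher_exact(table_2x2)
    return p

def discharge_home_interaction(df):
    # Group x type interaction for discharge home via multiple logistic regression.
    return smf.logit("discharged_home ~ group * acute_stroke", data=df).fit()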

Results

Among 106 patients in the robotic telepresence stroke alert database from 2009 to 2012, 98 patients were eligible for analysis. Patients were excluded for not meeting stroke alert criteria (n=6), for stroke assessment occurring outside of the ED (n=1), and for having more than one robotic stroke alert encounter (n=1). From the standard stroke alert group, 98 patients were selected after applying the same exclusion criteria. The robotic and standard stroke alert groups each included 10 patients from 2009, 22 patients from 2010, 32 patients from 2011, and 34 patients from 2012, for a total of 196 patients, 98 per group. Age ranged from 23 to 97 years with a mean of 72 years, and 52% were women. The mean age and the percentage of women did not differ substantially between groups (Table 1). The mean baseline NIHSS score was 2 points higher in the robotic group compared with the standard group (10.6 versus 8.8, p=0.09). Stroke alert activation ranged from 34 min prior to patient arrival in the ED (via emergency medical services advance notification) to 142 min after arrival. The mean duration from stroke alert activation to stroke team at bedside was 11.3 min for the robotic group and 10.4 min for the standard group (p=0.38).

Table 1.

Study Methods and Results Comparing Robotic Versus Standard Stroke Alert Team Assessments

  ROBOTIC STANDARD Δ 95% CI P
Number 98 98      
Demographics
Age (years) [mean (SD)] 74 (14) 70 (16) 4 (15) −1 to 8 0.11
Female 52 (53%) 49 (50%) −0.04 −0.18 to 0.10 0.67
Assessment
NIHSS at admission [mean (SD)] 10.6 (8.1) [n=96] 8.8 (6.7) [n=93] 1.9 (7.4) −0.3 to 4.0 0.09
Primary type of stroke         0.049
 Hemorrhagic 13 (13%) 10 (10%)      
 Ischemic 62 (63%) 49 (50%)      
 Not acute stroke 23 (23%) 39 (40%)      
Acute stroke 75 (77%) 59 (60%) 0.16 0.04–0.29 0.01
Activation
 After hours (17:01–07:59 h) 61 (62%) 35 (36%) 0.27 0.13–0.40 <0.001
 Weekend (Saturday–Sunday) 35 (36%) 21 (21%) 0.14 0.02–0.27 0.03
  After hours or weekend 74 (76%) 50 (51%) 0.24 0.11–0.38 <0.001
Duration (min) from activation [mean (SD)]
 Arrival 0 (29) 1 (22) −2 (26) −9 to 5 0.62
 Bedside 11.3 (7.0) [n=93] 10.4 (5.8) [n=92] 0.8 (6.5) −1.0 to 2.7 0.38
 Treatment/downgradea 59.8 (30) 51.2 (26) 8.6 (28) 0.7–17 0.03
  Acute stroke subgroupb 61.9 (27) [n=75] 52.6 (24) [n=59] 9.3 (26) 0.4–18 0.04
  Not-acute stroke subgroupb 53.0 (38) [n=23] 49.1 (28) [n=40] 3.9 (32) −13 to 21 0.65
  tPA subgroup 84 (22) [n=23] 66 (16) [n=13] 18 (20) 4–32 0.01
  After-hours or weekend activation subgroup 60 (32) [n=74] 51 (28) [n=50] 9 (31) −2 to 20 0.11
 Treatment/downgrade (min) (adjusted mean)c 58.3 50.6 7.7 −0.4 to 16 0.06
Arrival to treatment/downgrade (min) in the tPA subgroup [mean (SD)] 85 (42) [n=23] 69 (27) [n=13] 16 (37) −10 to 42 0.23
Treatment
 IV tPA 23 (23%) 13 (13%) 0.10 −0.01 to 0.21 0.07
 IV tPA period <3 h 17 (74%) [n=23] 12 (92%) [n=13] −0.18 −0.42 to 0.11 0.38
 Endovascular treatment type 3 (3%) 5 (5%) −0.02 −0.09 to 0.04 0.72
  MERCI 0 (0%) 1 (1%) −0.01 −0.06 to 0.03 >0.99
  Penumbra 2 (2%) 4 (4%) −0.02 −0.08 to 0.04 0.68
  Solitaire 0 (0%) 1 (1%) −0.01 −0.06 to 0.03 >0.99
 IV tPA protocol violationd 0 (0%) 1 (1%) −0.01 −0.06 to 0.03 >0.99
Outcomes
 Safety outcomes after IV tPA
  Asymptomatic ICH 2 (2%) 1 (1%) 0.01 −0.04 to 0.06 >0.99
  Symptomatic ICH 3 (3%) 1 (1%) 0.02 −0.03 to 0.08 0.62
  Serious systemic hemorrhage 0 (0%) 0 (0%) 0.00 −0.04 to 0.04 >0.99
   Angioedema 0 (0%) 0 (0%) 0.00 −0.04 to 0.04 >0.99
 Length of stay (days) [mean (SD)] 4.0 (3.8) 3.8 (3.7) 0.2 (3.8) −0.9 to 1.3 0.72
 NIHSS at discharge [mean (SD)] 6.1 (7.9) [n=75] 5.2 (6.7) [n=61] 0.9 (7.4) −1.6 to 3.4 0.48
 Discharge disposition         0.35
  Home 41 (42%) 56 (57%)      
  SNF 12 (12%) 12 (12%)      
  Acute inpatient rehabilitation 23 (23%) 14 (14%)      
  Hospice 12 (12%) 9 (9%)      
  Transfer to another hospital 2 (2%) 1 (1%)      
  Died 8 (8%) 6 (6%)      
 Discharged home 41 (42%) 56 (57%) −0.15 −0.29 to −0.01 0.03
  Acute stroke subgroupe 20 (27%) [n=75] 27 (46%) [n=59] −0.19 −0.35 to −0.03 0.02
  Not-acute stroke subgroupe 21 (91%) [n=23] 29 (74%) [n=39] 0.17 −0.04 to 0.35 0.18
 Died 8 (8%) 6 (6%) 0.02 −0.05 to 0.09 0.58
a Primary outcome measure.

b Group×type interaction P=0.54.

c Adjusted for type of stroke.

d Stroke not recognized in the emergency department and alert activated after presenting to bedside.

e Group×type interaction P=0.02.

CI, confidence interval; ICH, intracerebral hemorrhage; IV, intravenous; NIHSS, National Institutes of Health Stroke Scale; SD, standard deviation; SNF, skilled nursing facility; tPA, tissue plasminogen activator.

The primary outcome, the mean duration of time from stroke alert activation to treatment or downgrade, was 8.6 min longer in the robotic group than in the standard group (p=0.03) (Fig. 1). There was a higher percentage of patients with acute stroke in the robotic group than in the standard group (77% versus 60%, p=0.01). Controlling for acute stroke did not substantially reduce the difference in time from activation to treatment or downgrade: the between-group difference decreased by less than 1.0 min (from 8.6 to 7.7 min adjusted), a change of only about 10%. Age and NIHSS were also not correlated with the time from activation to treatment or downgrade (r=−0.02 and 0.08, respectively), and controlling for age or NIHSS did not substantially reduce the difference between groups.
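The adjustment for acute stroke reported above (adjusted Δ of 7.7 min) corresponds to a general linear model of assessment duration on group plus acute-stroke status. The sketch below illustrates this with statsmodels; the variable names (duration_min, group, acute_stroke) are hypothetical, and the coefficient on the group term would be the adjusted between-group difference.

import statsmodels.formula.api as smf

def adjusted_group_difference(df):
    # Assessment duration regressed on group, controlling for acute-stroke status.
    # The coefficient on the group term is the adjusted difference
    # (reported as about 7.7 min in Table 1).
    model = smf.ols("duration_min ~ C(group) + acute_stroke", data=df).fit()
    return model.params, model.conf_int()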

Fig. 1.

Mean duration of time±standard deviation (SD) from stroke alert activation to treatment or downgrade in the robotic telepresence group compared with the standard group. Mean assessment duration was 59.8 (30) min in the robotic telepresence group and 51.2 (26) min in the standard group.

In the robotic sample, the mean duration of time from activation to treatment or downgrade was 8.9 min longer for patients with acute stroke than for patients without acute stroke (61.9 versus 53.0 min), whereas in the standard sample, the mean duration was only 3.5 min longer for patients with acute stroke than for patients without acute stroke (52.6 versus 49.1 min). However, the sample was too small to conclusively assess the interaction between type of assessment and type of diagnosis (p=0.54). Of all the patients in our sample, 23% in the robotic group received IV thrombolysis versus 13% in the standard group (p=0.07). Specifically, of all acute ischemic strokes, 37% were treated with IV thrombolysis in the robotic group versus 24% in the standard group (p=0.16). Among the patients treated with IV thrombolysis, the mean duration of time from activation to treatment was 18 min longer in the robotic group than in the standard group (p=0.01), with 74% of the patients in the robotic group receiving treatment in under 3 h versus 92% in the standard group (p=0.38). A higher proportion of stroke alert activations in the robotic group occurred after hours (62% versus 36% in the standard group, p<0.001) and on weekends (36% versus 21%, p=0.03). After-hours or weekend stroke assessments took 9 min longer in the robotic group than in the standard group (p=0.11).

For the secondary outcome measures, the differences between groups corresponded to the difference in the percentage of patients with acute stroke. In the robotic group, 42% of patients were discharged home versus 57% in the standard group (p=0.03). However, the difference between the robotic and standard groups differed between patients with and without acute stroke (group×type interaction p=0.02). The average admission NIHSS was 2 points higher in the robotic group compared with the standard group, whereas the discharge NIHSS was 1 point higher in the robotic group. Among patients with acute stroke, only 27% were discharged home in the robotic group versus 46% in the standard group (p=0.02). Among patients without acute stroke, 91% were discharged home in the robotic group versus only 74% in the standard group. The average length of hospital stay was 4 days in both groups. The incidence of death during hospitalization in the robotic and standard groups was equivalent within 10 percentage points (Δ=0.02; 95% confidence interval, −0.05 to 0.09), with 8% dying in the robotic group and 6% in the standard group. Safety outcomes, including thrombolysis protocol violations (0% versus 1%) and post-thrombolysis symptomatic intracranial hemorrhagic complications (3% versus 1%), were low in the robotic group and not significantly different from those in the standard group.

Discussion

The goal of this study was to assess the real-life utilization of two different stroke alert assessment paradigms in routine use, attempting to determine if a robotic telepresence-supervised assessment could be as swift as a standardly supervised assessment. We determined that the robotic acute stroke assessments took 8.6 min longer, on average, than the standard assessments. However, the robotic group had more patients with acute stroke with higher initial NIHSS scores and more patients who received IV thrombolysis overall. Furthermore, patients in the robotic group had a slightly higher NIHSS score upon discharge and a lower probability of discharge home. These factors may have reflected that patients in the robotic group had endured greater impairment from stroke, presented later in the course of their stroke, and required more complex decision making, which may all have potentially disadvantaged the robotic group by contributing to a longer assessment time.

This research had several limitations. The study was observational and retrospective by design, and patients were not randomly allocated to the type of assessment. The supervising VNs made a conscious decision whether or not to use robotic telepresence. We propose that supervising neurologists may have been more likely to use robotic telepresence after hours if the patient's clinical presentation was consistent with an authentic stroke, in order to confirm the diagnosis prior to tPA administration. In contrast, we propose that supervising neurologists may have been less likely to use robotic telepresence for clear-cut stroke mimickers. If true, this may be one possible explanation for the smaller proportion of “not acute stroke” in the robotic group than in the standard group (23% versus 40%), with stroke mimickers traditionally being downgraded faster. Another limitation was that we did not collect data on the level of training of the neurology resident evaluating the stroke patient, nor did we document which stroke alerts were managed by internal medicine residents rotating on the inpatient neurology service.

We propose that the supervising VN may have been more likely to supervise with robotic telepresence after hours if a less experienced resident was performing the acute stroke assessment. Similarly, we propose that the level of training of the resident may have influenced the speed of assessment, with more junior residents having longer assessment times. An additional limitation of this real-world study was that the standard assessment paradigm may sometimes have included immediate phone-only supervision of the resident (followed later by in-person supervision), whereas the robotic telepresence paradigm always included the supervising neurologist's inspection and examination of the patient. We did not collect data on how many of the standard assessments were supervised by phone only in the ED. It is possible that this factor contributed to faster downgrade times in the standard group, because the supervising physician would not have interacted directly with the patient or family during the stroke alert.

Lastly, an acknowledged weakness was that this study period included the entire robotic telepresence series since its inception at the Mayo Clinic in 2009, which included the first patients ever assessed via robot, using brand new technology with inexperienced operators. We suspect that the earliest robotic encounters by inexperienced operators were slow and may have contributed to slower assessment times in the robotic group. We are currently in the process of conducting a second analysis of weekday after-hours and weekend protocolized robotic telepresence-supervised stroke alert patients versus protocolized standardly attended stroke alert patients at the Mayo Clinic.

With the growth of the telestroke program, the VNs have become more experienced robot operators. We suspect that this will make for a fairer comparison between robotic and standard supervision. Additionally, this second analysis will allow us to collect data on the level of training of the housestaff and to analyze whether the level of experience of a trainee contributed to the speed of the assessment. A stronger methodology would be a prospective randomized controlled trial of robotic telepresence assessment versus standard in-person assessment.

In conclusion, a standard stroke supervisory assessment is preferable to a robotic telepresence stroke supervisory assessment when either method is possible. Outcomes from robotic telepresence acute stroke supervisory assessments were good and approached the gold standard. Robotic telepresence may be preferable in situations where no stroke specialist is available in-house, especially for middle-of-the-night and weekend staffing of residents, when a 9-min robot-associated delay is likely better than the delay associated with the supervising physician driving in to the hospital.

Acknowledgments

The authors acknowledge Mayo Clinic neurology residents, vascular neurology fellows, vascular neurology consultants, and nurse practitioners for their assistance and participation. This work was supported by Mayo Clinic Center for Translational Science Activities grant support (UL1 TR000135; REDCap Project).

Disclosure Statement

C.B.O'C., J.G.H., and M.I.A. declare no competing financial interests exist. B.M.D. has received research support from Calgary Scientific and served as a consultant/advisory board member for Cell Trust and REACH.

References

1. Gomez C. Time is brain. J Stroke Cerebrovasc Dis 1993;3:1–2.
2. Demaerschalk BM. Telestrokologist: Treating stroke patients here, there, and everywhere with telemedicine. Semin Neurol 2010;30:477–491.
3. Kleindorfer D, Xu Y, Moomaw CJ, Khatri P, Adeoye O, Hornung R. US geographic distribution for rt-PA utilization by hospital for acute ischemic stroke. Stroke 2009;40:3580–3584.
4. Levine SR, Gorman M. “Telestroke”: The application of telemedicine for stroke. Stroke 1999;30:464–469.
5. Schwamm LH, Audebert HJ, Amarenco P, Chumbler NR, Frankel MR, George MG, et al. Recommendations for the implementation of telemedicine within stroke systems of care: A policy statement from the American Heart Association. Stroke 2009;40:2635–2660.
6. Adams HP Jr, del Zoppo G, Alberts MJ, Bhatt DL, Brass L, Furlan A, et al. Guidelines for the early management of adults with ischemic stroke: A guideline from the American Heart Association/American Stroke Association Stroke Council, Clinical Cardiology Council, Cardiovascular Radiology and Intervention Council, and the Atherosclerotic Peripheral Vascular Disease and Quality of Care Outcomes in Research Interdisciplinary Working Groups: The American Academy of Neurology affirms the value of this guideline as an educational tool for neurologists. Stroke 2007;38:1655–1711.
7. Schwamm LH, Holloway RG, Amarenco P, Audebert HJ, Bakas T, Chumbler NR, et al. A review of the evidence for the use of telemedicine within stroke systems of care: A scientific statement from the American Heart Association/American Stroke Association. Stroke 2009;40:2616–2634.
8. Demaerschalk BM, Bobrow BJ, Raman R, Kiernan TE, Aguilar MI, Ingall TJ, et al.; STRokE DOC AZ TIME Investigators. Stroke team remote evaluation using a digital observation camera in Arizona: The initial Mayo Clinic experience trial. Stroke 2010;41:1251–1258.
9. Kothari RU, Pancioli A, Liu T, Brott T, Broderick J. Cincinnati Prehospital Stroke Scale: Reproducibility and validity. Ann Emerg Med 1999;33:373–378.
10. Liferidge AT, Brice JH, Overby BA, Evenson KR. Ability of laypersons to use the Cincinnati Prehospital Stroke Scale. Prehosp Emerg Care 2004;8:384–387.
