Neurology. 2014 Jun 17;82(24):2241–2249. doi: 10.1212/WNL.0000000000000523

Prehospital stroke scales in urban environments

A systematic review

Ethan S Brandler 1, Mohit Sharma 1, Richard H Sinert 1, Steven R Levine 1
PMCID: PMC4113467  PMID: 24850487

Abstract

Objective:

To identify and compare the operating characteristics of existing prehospital stroke scales to predict true strokes in the hospital.

Methods:

We searched MEDLINE, EMBASE, and CINAHL databases for articles that evaluated the performance of prehospital stroke scales. Quality of the included studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies–2 tool. We abstracted the operating characteristics of published prehospital stroke scales and compared them statistically and graphically.

Results:

We retrieved 254 articles from MEDLINE, 66 articles from EMBASE, and 32 articles from the CINAHL Plus database. Of these, 8 studies met all our inclusion criteria; they studied the Cincinnati Prehospital Stroke Scale (CPSS), Los Angeles Prehospital Stroke Screen (LAPSS), Melbourne Ambulance Stroke Screen (MASS), Medic Prehospital Assessment for Code Stroke (Med PACS), Ontario Prehospital Stroke Screening Tool (OPSS), Recognition of Stroke in the Emergency Room (ROSIER), and Face Arm Speech Test (FAST). Although the point estimates for LAPSS accuracy were better than those for CPSS, the two had overlapping confidence intervals on the symmetric summary receiver operating characteristic curve. OPSS performed similarly to LAPSS, whereas MASS, Med PACS, ROSIER, and FAST had less favorable overall operating characteristics.

Conclusions:

Prehospital stroke scales varied in their accuracy and missed up to 30% of acute strokes in the field. Inconsistencies in performance may be due to sample size disparity, variability in stroke scale training, and divergent provider educational standards. Although LAPSS performed more consistently, graphical comparison revealed that LAPSS and CPSS had similar diagnostic capabilities.


When stroke is recognized in the field, prehospital notification by emergency medical services (EMS) has been associated with improved rates of recombinant tissue plasminogen activator (rtPA) delivery and reduced door-to-needle times.1,2 Increased use of rtPA and shorter door-to-needle times have both been associated with improved stroke outcomes.3,4 However, paramedics and emergency medical technicians (EMTs), limited in both time and training, are not able to perform a detailed stroke examination and thus rely on screening tools that are designed to identify potential strokes with minimal assessment.5 We conducted a systematic review of the diagnostic accuracy of a variety of prehospital stroke scales. Our primary goal was to identify the prehospital stroke scale with optimal operating characteristics for the diagnosis of stroke.

METHODS

Search strategy.

With the aid of a medical librarian, we searched for studies of prehospital stroke scales in MEDLINE, EMBASE, and CINAHL Plus databases from 1966 until October 2, 2013. We also searched the Cochrane Central Register of Controlled Trials and the bibliographies of the included and relevant articles and reviews. We chose the key words “paramedic,” “stroke,” “transient ischemic attack,” “accuracy,” and “reproducibility” as text words and MeSH terms to identify related studies (table e-1 on the Neurology® Web site at Neurology.org). Two authors (E.S.B., M.S.) reviewed each title for relevance. Titles thought to be relevant by either author were then subjected to further review by the other authors (R.H.S., S.R.L.) to ensure that they met all inclusion/exclusion criteria.

Inclusion and exclusion criteria.

We considered studies in which EMTs or paramedics performed prehospital stroke scales as recommended by the American Heart Association/American Stroke Association.6 Only English-language articles studying adult populations were included. We included studies in which discharge diagnosis of stroke or TIA was used as the reference standard. For this review, we were not concerned with the severity of the stroke; only stroke scales with dichotomous results, i.e., stroke present or absent, were included, because severity indices imply that the diagnosis has already been made.

Studies in which physicians were involved in prehospital application of a stroke scale were excluded because physicians are not present in most EMS systems in the United States. All case reports, case reviews, systematic reviews, letters to the editor, and poster presentations were excluded. Studies that did not publish sufficient raw data to calculate operating characteristics were also excluded unless provided by the authors upon request.

Data extraction and quality assessment.

Data from the selected studies were abstracted by 2 authors (E.S.B., M.S.) and were checked for accuracy by 2 other authors (R.H.S., S.R.L.). We used Meta-DiSc7 software to calculate the operating characteristics of the various stroke scales as reported in each study. For statistical and visual comparisons, we plotted a series of graphs. The initial graph, the receiver operating characteristic (ROC) plane, plotted sensitivity against false-positive rate for each scale as measured independently in each study. Symmetric summary ROC (SSROC) curves were produced for scales tested in more than 2 studies.
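The operating characteristics reported here all derive from each study's 2×2 table of scale result versus discharge diagnosis. A minimal sketch of those calculations, using hypothetical counts rather than data from any included study:

```python
# Operating characteristics of a dichotomous screening test from a
# 2x2 table (counts are hypothetical, for illustration only).
def operating_characteristics(tp, fp, fn, tn):
    sens = tp / (tp + fn)        # sensitivity: true strokes screened positive
    spec = tn / (tn + fp)        # specificity: non-strokes screened negative
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

sens, spec, lr_pos, lr_neg = operating_characteristics(tp=90, fp=20, fn=10, tn=80)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"LR+={lr_pos:.2f} LR-={lr_neg:.3f}")
```

In the ROC plane described above, each study contributes one point: (1 − specificity, sensitivity).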

To document potentially large differences in study methodologies, we used the inconsistency index (I2) and tau squared (τ2) to evaluate between-study heterogeneity, with I2 >50% or τ2 >1 indicating substantial statistical heterogeneity.8,9 Fixed-effect models (Mantel-Haenszel) were to be used for comparing statistically homogeneous studies and random-effects models (DerSimonian and Laird) for comparing statistically heterogeneous studies.8,9 We also generated an ROC ellipse plot to describe the uncertainty of the pairs of sensitivities and false-positive rates.
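The heterogeneity statistics above follow from Cochran's Q. A minimal sketch, with hypothetical effect estimates and within-study variances standing in for the per-study values:

```python
# Between-study heterogeneity: Cochran's Q, the inconsistency index I^2,
# and the DerSimonian-Laird (method-of-moments) estimate of tau^2.
# Inputs are hypothetical study-level effects and their variances.
def heterogeneity(effects, variances):
    w = [1.0 / v for v in variances]                      # inverse-variance weights
    mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - mean) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0         # fraction of variation
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)        # beyond chance
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0       # between-study variance
    return i2, tau2

i2, tau2 = heterogeneity(effects=[1.2, 2.5, 0.4], variances=[0.10, 0.15, 0.12])
print(f"I^2 = {i2:.1%}, tau^2 = {tau2:.2f}")
```

When I2 exceeds the 50% threshold, the random-effects model incorporates the estimated τ2 into each study's weight; when studies agree exactly, both statistics are zero and the fixed-effect model applies.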

For studies meeting our inclusion and exclusion criteria, we performed quality assessments using Quality Assessment of Diagnostic Accuracy Studies–210 (QUADAS-2), which assesses the quality of studies by identifying sources of bias and concerns regarding applicability. Each of the QUADAS-2 variables was graded by 2 physicians (E.S.B., M.S.) independently and compared for interrater reliability using the kappa coefficient. The QUADAS-2 domains were labeled high, low, or unclear, indicating the degree of bias and concerns regarding applicability. Differences in assessments were adjudicated by consensus and by one senior author (R.H.S.).
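Interrater reliability between the two graders reduces to Cohen's kappa, computed from an agreement table of their QUADAS-2 judgments. A sketch with a hypothetical 2×2 agreement table (e.g., "low" vs. "high/unclear" risk of bias):

```python
# Cohen's kappa: chance-corrected agreement between two raters.
# table[i][j] counts items rated category i by rater 1 and j by rater 2.
# The counts below are hypothetical, for illustration only.
def cohens_kappa(table):
    k = len(table)
    n = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(k)) / n              # observed agreement
    row = [sum(r) for r in table]
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(row[i] * col[i] for i in range(k)) / n ** 2     # chance agreement
    return (p_o - p_e) / (1 - p_e)

kappa = cohens_kappa([[20, 5], [10, 15]])
print(round(kappa, 2))  # 0.4
```

By the Landis and Koch benchmarks cited in the text,33 the reported kappa of 0.89 falls in the "almost perfect" range (>0.80).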

RESULTS

Search results.

Our search yielded 254 articles from MEDLINE, 66 titles from EMBASE, and 32 titles from CINAHL Plus. Eight studies11–18 met all of our inclusion/exclusion criteria (figure 1). Studies by Iguchi et al.,19 Tirschwell et al.,5 and Llanes et al.20 were excluded because their prehospital stroke scales measured stroke severity, not its presence. We excluded a study by Bergs et al.21 in which emergency physicians and EMTs jointly diagnosed stroke. A study by Frendl et al.22 was excluded because data for 76% of the study patients were missing. We attempted to retrieve raw data from the authors of several studies (Harbison et al.,23 Nor et al.,24 Frendl et al.,22 and Ramanujam et al.25); however, these data were no longer available to the authors in a usable format. We also searched for data on other known prehospital stroke scales, including the Miami Emergency Neurological Deficit Scale, the Boston Operation Stroke Scale, and the Birmingham Regional Emergency Medical Services System Scale, but no peer-reviewed articles using these scales could be found.

Figure 1. Results of the literature search.


Last updated October 2, 2013.

Description of studies.

We reviewed 8 studies (Kidwell et al.,11 Wojner-Alexandrov et al.,12 Bray et al.,13 Bray et al.,14 Studnek et al.,15 Chenkin et al.,16 Chen et al.,17 and Fothergill et al.18) reporting the operating characteristics of 7 different stroke scales: the Cincinnati Prehospital Stroke Scale (CPSS), the Los Angeles Prehospital Stroke Screen (LAPSS), the Melbourne Ambulance Stroke Screen (MASS), the Medic Prehospital Assessment for Code Stroke (Med PACS), the Ontario Prehospital Stroke Screening Tool (OPSS), Recognition of Stroke in the Emergency Room (ROSIER), and the Face Arm Speech Test (FAST). The included studies used stroke scales with overlapping motor elements and no sensory or coordination/cerebellar testing. See figure 2 for a comparison of the various prehospital stroke scales.

Figure 2. Descriptive comparison of different prehospital stroke scales.


Cincinnati Prehospital Stroke Scale (CPSS), Los Angeles Prehospital Stroke Screen (LAPSS), Melbourne Ambulance Stroke Screen (MASS), Medic Prehospital Assessment for Code Stroke (Med PACS), Ontario Prehospital Stroke Screening Tool (OPSS), and Face Arm Speech Test (FAST) are considered positive if any of the physical findings are present after all eligibility criteria (if applicable) are met. Recognition of Stroke in the Emergency Room (ROSIER) scale assigns either a positive or a negative point value to each factor; scale is positive if the sum is ≥1. EMS = emergency medical services.

All included studies used similar methodologies: a retrospective review of a prospectively collected database of EMS-measured stroke scales, eventually linked to inpatient discharge diagnosis of stroke or TIA. Table 1 describes the included studies. Sample sizes were highly variable, ranging from 100 subjects13 to 11,296 subjects,12 and stroke prevalence likewise varied notably among the studies (table 1). Sex, race, and age were not uniformly reported.

Table 1.

Characteristics of included studies


Studies were conducted in a variety of urban environments and were heterogeneous with respect to patient populations. Ethnicity also varied across study settings: Melbourne has a comparatively large Malaysian population (5%),26 Houston is 44% Hispanic/Latino,27 and Los Angeles is 48% Hispanic/Latino,28 whereas Charlotte is only 12% Hispanic/Latino.29 The population of the province of Ontario is primarily of British Isles descent.30 Beijing has a homogeneous population, 95% of Han nationality.31 The city of London has a largely white population (60%) with substantial black (13%) and Asian (19%) populations.32

Quality assessment.

Two authors (E.S.B., M.S.) evaluated all studies using the QUADAS-2 tool. Interrater agreement for QUADAS-2 scoring between the authors was almost perfect, kappa 0.89 (95% confidence interval [CI] 0.81–1.0).33

In all the studies, many patients were excluded post hoc because prehospital stroke scale data were incompletely collected,17,24 and the reasons for incomplete documentation were unclear. These exclusions raise concern over selection bias (table e-2). No significant applicability concerns were noted in the QUADAS-2 assessment.

Performance assessment of prehospital stroke scales.

The scales used in each study are listed in table 2 together with their operating characteristics. The forest plots and ROC plane for sensitivity and specificity are presented for all studies in figure 3. The SSROC and ROC ellipse plots comparing CPSS and LAPSS are shown in figure 4. We could plot SSROC only for CPSS and LAPSS (figure 4A).7 Due to considerable heterogeneity (CPSS: I2 = 97.8%, τ2 = 4.33, LAPSS: I2 = 96.8%, τ2 = 4.16), we used the DerSimonian and Laird methodology to generate the SSROC. Area under the curve for CPSS was 0.813 ± SE 0.129 and for LAPSS 0.964 ± SE 0.028. Because of high heterogeneity (I2 > 50%), we did not report pooled sensitivity and specificity for the various scales under review.

Table 2.

Operating characteristics of prehospital stroke scales


Figure 3. Graphical comparison of 7 different prehospital stroke scales.


(A) Forest plots for all prehospital stroke scales in the included studies. (B) Receiver operating characteristic curve (ROC) plane. Size of the circles indicates relative sample size. CI = confidence interval; CPSS = Cincinnati Prehospital Stroke Scale; FAST = Face Arm Speech Test; LAPSS = Los Angeles Prehospital Stroke Screen; MASS = Melbourne Ambulance Stroke Screen; Med PACS = Medic Prehospital Assessment for Code Stroke; OPSS = Ontario Prehospital Stroke Screening Tool; ROSIER = Recognition of Stroke in the Emergency Room.

Figure 4. Graphical comparison of CPSS and LAPSS.


(A) Symmetric summary receiver operating characteristic (SSROC) curve comparing area under the curve (AUC) for Cincinnati Prehospital Stroke Scale (CPSS) and Los Angeles Prehospital Stroke Screen (LAPSS) performance. Computational method: DerSimonian and Laird model. Circles in the plot are proportional to the weight/sample size. (B) Receiver operating characteristic (ROC) curve ellipse plot. Each point estimate is surrounded by 2D 95% confidence intervals.

DISCUSSION

From the Kidwell et al.11 and Wojner-Alexandrov et al.12 studies, LAPSS appears to have the most favorable operating characteristics. Overall, LAPSS, with its low negative likelihood ratio, appears to be a good screening test; even so, when applied to a large population, it still missed up to 22% of strokes.17 Potential reasons for the better performance of LAPSS include its more stringent screening criteria and the absence of a potentially subjective speech assessment.
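Why a low negative likelihood ratio makes a good screening test can be made concrete with Bayes' theorem on the odds scale. A sketch with illustrative numbers (not taken from the included studies):

```python
# Post-test probability from a pretest probability and a likelihood
# ratio: post-test odds = pretest odds x LR. Numbers are illustrative.
def post_test_probability(pretest_prob, likelihood_ratio):
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# With a hypothetical 30% pretest probability of stroke, a negative
# screen with LR- = 0.10 leaves only about a 4% residual probability.
p = post_test_probability(pretest_prob=0.30, likelihood_ratio=0.10)
print(f"{p:.3f}")  # prints 0.041
```

The lower the negative likelihood ratio, the more confidently a negative screen rules stroke out; a scale that still misses 22% of strokes shows how far the field values sit from that ideal.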

The ROC plane provides a graphical description and visual comparison of the different prehospital stroke scales (figure 3B). If a scale's point estimate lies close to the diagonal line of uncertainty, that scale identifies stroke no better than a coin flip. FAST, ROSIER, Med PACS, and CPSS as studied by Studnek et al.15 appear very close to that line. In contrast, the point estimates for LAPSS, OPSS, MASS, and CPSS as studied by Bray et al.14 are concentrated in the upper left corner of the graph, indicating better performance. Furthermore, as seen in the ellipse plot (figure 4B), CPSS as studied by Studnek et al.15 overlaps the line of uncertainty. The ellipses for CPSS do not overlap one another and are spread across the graph, calling the reproducibility of CPSS performance into question. The point estimates of LAPSS performance, however, cluster in the upper left corner with confluent ellipses, indicating that LAPSS performs more consistently and is perhaps a more reliable tool. Despite the high between-study heterogeneity, we compared the studies and generated an SSROC using the DerSimonian and Laird methodology, noting a wide CI for CPSS (figure 4A). Although the CIs for CPSS and LAPSS overlap, the lower limit of the CI for CPSS crosses the line of uncertainty, indicating that the scale may perform no better than a coin flip.

Though not included in the present review, an article by Ramanujam et al.25 reported a lower sensitivity (44%) and a low positive predictive value (40%) for CPSS. FAST, which has elements very similar to CPSS,23 screened well but demonstrated very poor specificity.18 MASS, a combination of LAPSS and CPSS, offers no significant benefit over LAPSS alone: when studied by Bray et al.,13 MASS and LAPSS were compared in the same population of patients and had statistically indistinguishable operating characteristics. Med PACS, which similarly combines elements of LAPSS and CPSS while adding gaze and leg motor components, counterintuitively added little specificity while sacrificing sensitivity. Likewise, even after excluding seizure and syncope cases, which are potential confounders in the diagnosis of stroke,34 the ROSIER scale also had poor specificity. Surprisingly, Med PACS and ROSIER have very different sensitivities despite having similar scale elements.

Chenkin et al.16 reported lower specificity than either Kidwell et al.11 or Wojner-Alexandrov et al.,12 despite the fact that OPSS excludes on-scene seizure patients. However, Chenkin et al.16 also reported rtPA administration rates among OPSS-positive patients, demonstrating an increase from 5.9% to 10.1% after the implementation of OPSS; perhaps most importantly, none of the patients excluded by OPSS was later found to be eligible for rtPA. Additional study is required to determine whether this finding is reproducible and whether other scales perform similarly in this regard.

Limitations.

Our analysis was limited by flawed methodologies in all of the studies included in this review. Unresponsive patients were excluded in at least 2 of the studies, threatening the applicability of stroke scales to these patients. Furthermore, all included studies were conducted at urban university centers in different cities and thus may not generalize to other environments.

While studying varied patient populations is desirable, sources of unwanted heterogeneity include (1) differences in stroke prevalence and (2) divergent background EMS education standards. In addition, both high stroke prevalence and wide variation in stroke prevalence (2.5%–88%) could introduce selection bias. In general, studies with small sample sizes had higher stroke prevalence, suggesting a selection bias that would inappropriately inflate diagnostic accuracy. None of the studies except Studnek et al.15 reported a prestudy sample size estimate. The large degree of heterogeneity between the reviewed studies prevented us from reporting pooled operating characteristics.

Because all studies included TIA as a stroke diagnosis, physical examination findings present in the prehospital environment may have resolved by the time the patient was examined by the physician making the discharge diagnosis. As such, stroke scales performed by prehospital providers may influence the ultimate diagnosis of TIA in the hospital. Prehospital stroke scales thus have the potential to introduce bias because the reference standard (discharge diagnosis) is not independent of the index test (stroke scale) (table e-2). This bias is unavoidable; however, the prehospital tests were conducted without knowledge of the ultimate discharge diagnosis. These issues were inherent in all of the studies under review and bias all results similarly.

Verification bias is inherent in many of the studies under discussion. Sensitivity is falsely increased because the primary inclusion criterion in many studies was suspected stroke: such patients are more likely both to have the stroke scale performed and to test positive. True negatives may be inappropriately excluded, falsely decreasing specificity.

Furthermore, the primary reason for prehospital identification of stroke is to speed access to rtPA. Given that all included studies used discharge diagnosis of stroke, rather than appropriate identification of rtPA candidates, as the gold standard, they may overestimate the performance of the various scales for this important screening function.

Due to the availability of numerous prehospital stroke scales, it is important to compare them systematically so that EMS medical directors and vascular neurologists involved in prehospital stroke care can choose the scale that performs optimally for their individual systems.

There are several important methodologic issues in the current application of prehospital stroke scales. The high degree of heterogeneity between the studies suggests variability in methodology and nonrandom sampling. As a result, more reliable assessments of prehospital scales for the diagnosis of stroke are needed. More study is required to identify the best currently available methodology for prehospital identification of stroke and to find new tools that are easy to perform and may capture stroke more accurately in the field. Nonetheless, LAPSS appears to have the best operating characteristics when assessed by both likelihood ratios and the ROC curve.

Supplementary Material

Data Supplement
Accompanying Editorial

ACKNOWLEDGMENT

The authors thank Jeremy Weedon, PhD, for advice regarding statistical methods.

GLOSSARY

CI

confidence interval

CPSS

Cincinnati Prehospital Stroke Scale

EMS

emergency medical services

EMT

emergency medical technician

FAST

Face Arm Speech Test

LAPSS

Los Angeles Prehospital Stroke Screen

MASS

Melbourne Ambulance Stroke Screen

Med PACS

Medic Prehospital Assessment for Code Stroke

OPSS

Ontario Prehospital Stroke Screening Tool

ROSIER

Recognition of Stroke in the Emergency Room

QUADAS-2

Quality Assessment of Diagnostic Accuracy Studies–2

ROC

receiver operating characteristic

rtPA

recombinant tissue plasminogen activator

SSROC

symmetric summary receiver operating characteristic

Footnotes

Editorial, page 2154

Supplemental data at Neurology.org

AUTHOR CONTRIBUTIONS

E.S.B. conceived the study, designed the study, supervised data collection, performed statistical analyses, drafted the manuscript, and takes responsibility for the study as a whole. M.S. extracted the data, drafted and revised the manuscript, and performed statistical analyses. R.H.S. assisted with study design and conception, assisted with statistical analyses, and revised the manuscript. S.R.L. obtained research funding and revised the manuscript for important scientific content.

STUDY FUNDING

Funded in part by NIH grants 1U01NS044364, R01 HL096944, 1U10NS077378, and 1U10NS080377.

DISCLOSURE

E. Brandler, M. Sharma, and R. Sinert report no disclosures relevant to the manuscript. S. Levine serves on the Scientific Advisory Boards of Independent Medical/Safety Monitor for National Institute of Neurological Disorders and Stroke–funded IMS 3, FAST MAG, INSTINCT, and CLEAR-ER and Adjudication Committee for National Institute of Neurological Disorders and Stroke–funded WARCEF. He received travel funding or speaker honoraria from Genentech in 2011. He also serves as the Associate Editor of MEDLINK and is the scientific content advisor for the National Stroke Association. He is a consultant for Genentech study on cost-effectiveness of primary stroke centers and receives research support from Genentech, Inc. He was on the Speakers' Bureaus for Medical Education Speakers Network, lecturer, 2008–2012. He receives research support from NIH-NHLBI 1R01 HL096944, principal investigator, 2009–2013; NIH–National Institute of Neurological Disorders and Stroke 1UO1 NS044364, Independent safety monitor, 2003–2012; NIH–National Institute of Neurological Disorders and Stroke 1 U10 NS077378, PI, 2011–2017; NIH–National Institute of Neurological Disorders and Stroke 1 U10 NS080377, PI, 2012–2017; PCORI, Scientific PI, 2012–2014; NIH–National Institute of Neurological Disorders and Stroke 1 R25 NS079211, MPI, 2012–2017; The Patient-Centered Outcomes Research Institute (PCORI) 1IP2PI000781, scientific PI, 2012–2014; NIH-NIA 1 R01 AG040039, Co-I, 2011–2016; NIH–National Institute of Neurological Disorders and Stroke 2 P50 NS044283, safety monitor, 2008–2013; and NIH–National Institute of Neurological Disorders and Stroke 2 U01 NS052220, independent medical monitor, 2005–2013. Go to Neurology.org for full disclosures.

REFERENCES

1. Bae HJ, Kim DH, Yoo NT, et al. Prehospital notification from the emergency medical service reduces the transfer and intra-hospital processing times for acute stroke patients. J Clin Neurol 2010;6:138–142.
2. Abdullah AR, Smith EE, Biddinger PD, Kalenderian D, Schwamm LH. Advance hospital notification by EMS in acute stroke is associated with shorter door-to-computed tomography time and increased likelihood of administration of tissue-plasminogen activator. Prehosp Emerg Care 2008;12:426–431.
3. Fonarow GC, Smith EE, Saver JL, et al. Improving door-to-needle times in acute ischemic stroke: the design and rationale for the American Heart Association/American Stroke Association's Target: stroke initiative. Stroke 2011;42:2983–2989.
4. Lin CB, Peterson ED, Smith EE, et al. Emergency medical service hospital prenotification is associated with improved evaluation and treatment of acute ischemic stroke. Circ Cardiovasc Qual Outcomes 2012;5:514–522.
5. Tirschwell DL, Longstreth WT Jr, Becker KJ, et al. Shortening the NIH Stroke scale for use in the prehospital setting. Stroke 2002;33:2801–2806.
6. Jauch EC, Saver JL, Adams HP Jr, et al. Guidelines for the early management of patients with acute ischemic stroke: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke 2013;44:870–947.
7. Zamora J, Abraira V, Muriel A, Khan KS, Coomarasamy A. Meta-DiSc: a software for meta-analysis of test accuracy data. BMC Med Res Methodol 2006;6:31.
8. Higgins JP. Commentary: heterogeneity in meta-analysis should be expected and appropriately quantified. Int J Epidemiol 2008;37:1158–1160.
9. Tang L, Zhao S, Liu W, et al. Diagnostic accuracy of circulating tumor cells detection in gastric cancer: systematic review and meta-analysis. BMC Cancer 2013;13:314.
10. Whiting PF, Rutjes AW, Westwood ME, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011;155:529–536.
11. Kidwell CS, Starkman S, Eckstein M, Weems K, Saver JL. Identifying stroke in the field: prospective validation of the Los Angeles Prehospital Stroke Screen (LAPSS). Stroke 2000;31:71–76.
12. Wojner-Alexandrov AW, Alexandrov AV, Rodriguez D, et al. Houston paramedic and emergency stroke treatment and outcomes study (HoPSTO). Stroke 2005;36:1512–1518.
13. Bray JE, Martin J, Cooper G, Barger B, Bernard S, Bladin C. Paramedic identification of stroke: community validation of the Melbourne Ambulance Stroke Screen. Cerebrovasc Dis 2005;20:28–33.
14. Bray JE, Coughlan K, Barger B, Bladin C. Paramedic diagnosis of stroke: examining long-term use of the Melbourne Ambulance Stroke Screen (MASS) in the field. Stroke 2010;41:1363–1366.
15. Studnek JR, Asimos A, Dodds J, Swanson D. Assessing the validity of the Cincinnati Prehospital Stroke Scale and the Medic Prehospital Assessment for Code Stroke in an urban emergency medical services agency. Prehosp Emerg Care 2013;17:348–353.
16. Chenkin J, Gladstone DJ, Verbeek PR, et al. Predictive value of the Ontario prehospital stroke screening tool for the identification of patients with acute stroke. Prehosp Emerg Care 2009;13:153–159.
17. Chen S, Sun H, Lei Y, et al. Validation of the Los Angeles Pre-Hospital Stroke Screen (LAPSS) in a Chinese urban emergency medical service population. PLoS One 2013;8:e70742.
18. Fothergill RT, Williams J, Edwards MJ, Russell IT, Gompertz P. Does use of the recognition of stroke in the emergency room stroke assessment tool enhance stroke recognition by ambulance clinicians? Stroke 2013;44:3007–3012.
19. Iguchi Y, Kimura K, Watanabe M, Shibazaki K, Aoki J. Utility of the Kurashiki Prehospital Stroke Scale for hyperacute stroke. Cerebrovasc Dis 2011;31:51–56.
20. Llanes JN, Kidwell CS, Starkman S, Leary MC, Eckstein M, Saver JL. The Los Angeles Motor Scale (LAMS): a new measure to characterize stroke severity in the field. Prehosp Emerg Care 2004;8:46–50.
21. Bergs J, Sabbe M, Moons P. Prehospital stroke scales in a Belgian prehospital setting: a pilot study. Eur J Emerg Med 2010;17:2–6.
22. Frendl DM, Strauss DG, Underhill BK, Goldstein LB. Lack of impact of paramedic training and use of the Cincinnati Prehospital Stroke Scale on stroke patient identification and on-scene time. Stroke 2009;40:754–756.
23. Harbison J, Hossain O, Jenkinson D, Davis J, Louw SJ, Ford GA. Diagnostic accuracy of stroke referrals from primary care, emergency room physicians, and ambulance staff using the face arm speech test. Stroke 2003;34:71–76.
24. Nor AM, McAllister C, Louw SJ, et al. Agreement between ambulance paramedic- and physician-recorded neurological signs with Face Arm Speech Test (FAST) in acute stroke patients. Stroke 2004;35:1355–1359.
25. Ramanujam P, Guluma KZ, Castillo EM, et al. Accuracy of stroke recognition by emergency medical dispatchers and paramedics: San Diego experience. Prehosp Emerg Care 2008;12:307–313.
26. City of Melbourne 2006 multicultural community demographic profile. Available at: http://www.melbourne.vic.gov.au/AboutMelbourne/Statistics/Documents/Demographic_Profile1_Multicultural_Community.pdf. Accessed June 21, 2013.
27. United States Census Bureau. State and County Quick Facts, Houston city. Available at: http://quickfacts.census.gov/qfd/states/48/4835000.html. Accessed August 21, 2013.
28. United States Census Bureau. State and County Quick Facts, Los Angeles County. Available at: http://quickfacts.census.gov/qfd/states/06/06037.html. Accessed June 21, 2013.
29. United States Census Bureau. State and County Quick Facts, Mecklenburg County, North Carolina. Available at: http://quickfacts.census.gov/qfd/states/37/37119.html. Accessed June 21, 2013.
30. Population by selected ethnic origins, by province and territory (2006 Census) (Ontario). Available at: http://www.statcan.gc.ca/tables-tableaux/sum-som/l01/cst01/demo26g-eng.htm. Accessed September 12, 2013.
31. Basic statistics on population census (2000 Census) (Beijing). Available at: http://www.ebeijing.gov.cn/feature_2/Statistics/Population/t1071366.htm. Accessed October 2, 2013.
32. Census data for London. Available at: http://data.london.gov.uk/census/data. Accessed October 2, 2013.
33. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977;33:159–174.
34. Brandler ES, Sharma M, Khandelwal P, et al. Identification of common confounders in the prehospital identification of stroke in urban, underserved minorities. Stroke 2013;44:AWP243. Abstract.
