Acta Neurol Scand. 2020 Dec 27;143(4):375–382. doi: 10.1111/ane.13388

Accuracy of NIH Stroke Scale for diagnosing aphasia

Angelina Grönberg 1,2, Ingrid Henriksson 3, Arne Lindgren 1,2
PMCID: PMC7985870  PMID: 33368189

Abstract

Objectives

The National Institutes of Health Stroke Scale (NIHSS) has not been validated to diagnose aphasia in the stroke population. We therefore examined the diagnostic accuracy of NIHSS for detecting aphasia in acute ischemic stroke.

Methods

Consecutive patients with acute first‐ever ischemic stroke were included prospectively in the Lund Stroke Register Study at Skåne University Hospital, Sweden. Exclusion criteria were: (a) non‐native Swedish speaker; (b) obtundation; (c) dementia or psychiatric diagnosis. Patients were assessed with NIHSS item 9 (range 0–3, where 1–3 indicates aphasia) by an NIHSS‐certified research nurse in the acute phase after stroke onset (median 3 days). Within 24 h after this assessment, a speech therapist evaluated the patients’ language function with the comprehensive language screening test (LAST, range 0–15, where 0–14 indicates aphasia). Data were analyzed using LAST as ‘reference standard’.

Results

We examined 221 patients. Among these, 23% (n = 50) had aphasia according to NIHSS (the distribution of scores 0, 1, 2, and 3 was n = 171, n = 29, n = 12, and n = 9), compared to 26% (n = 58) with aphasia according to LAST (score ≤14; median = 11). Using LAST as the reference standard, NIHSS gave 16 false negatives (NIHSS item 9 = 0) for aphasia (LAST scores range 8–14) and 8 false positives (NIHSS item 9 score = 1) for aphasia, yielding a sensitivity of 72% (95% CI 0.59–0.83) and a specificity of 95% (95% CI 0.91–0.98).

Conclusions

When using NIHSS for screening and diagnosing aphasia in adults with acute ischemic stroke, patients with severe aphasia can be detected; however, some mild aphasias might be misclassified. Given the 72% sensitivity, absence of aphasia on the NIHSS should not be used to guide stroke treatment.

Keywords: aphasia, language tests, National Institutes of Health Stroke Scale, sensitivity and specificity, stroke

1. INTRODUCTION

Aphasia is reported in 20%–40% of acute ischemic stroke patients 1 , 2 and is a major source of disability, leading to impaired communication and quality of life. 3 Early detection of aphasia is important to determine basic communication needs and to create a rehabilitation plan. 4 A finding of aphasia suggests that the ischemic stroke was in a large vessel distribution in the left hemisphere, most likely in the territory of the middle cerebral artery. This finding should therefore substantially influence diagnostic evaluation and treatment. Diagnosis of aphasia may be challenging in the acute phase of stroke, when patients’ general condition may be affected and symptoms change rapidly. However, undiagnosed aphasia can impact patient rehabilitation 5 and outcome, 6 as well as negatively affect the overall cost of stroke care. 7 A standardized language screening test with accurate diagnostic precision therefore has considerable implications for stroke care.

The National Institutes of Health Stroke Scale 8 (NIHSS) has become the standard for routine assessment of neurological deficits in the acute phase of stroke, 9 and item 9, “Best Language,” evaluates aphasia. NIHSS has also been used in epidemiological studies to detect post‐stroke aphasia 10 , 11 , 12 as well as in stroke care to guide the course of treatment. 13 NIHSS has excellent reliability and validity; 13 , 14 , 15 however, it was not originally designed to capture specific deficits, but rather to standardize global testing of individual patients in clinical trials. 9 NIHSS item 9 has therefore not been explicitly validated to determine the presence or absence of aphasia in the stroke population. 16

Several comprehensive language assessments evaluate symptoms and degree of aphasia; 17 , 18 however, these assessments have limited utility in the acute setting. 19 Although NIHSS is a screening test with a less detailed description of specific language deficits, it is routinely included in the acute neurological examination and is, therefore, a potentially useful tool for initial identification of aphasia and for monitoring progress.

The aims of this study were to: (a) validate NIHSS by reporting diagnostic accuracy measurements (sensitivity/specificity, positive/negative predictive value, and likelihood ratios); and (b) detect factors and symptoms related to incorrect diagnosis.

2. MATERIALS AND METHODS

2.1. Study design and participants

Patients with first‐ever ischemic stroke were consecutively recruited to the Lund Stroke Register Study (LSR) from the local uptake area of Skåne University Hospital in Lund (SUHL), Sweden, between March 1, 2017 and May 31, 2018. The area consists of eight municipalities, where SUHL is the only hospital designated for acute care of stroke patients. Patients are therefore routinely treated in the acute phase of stroke at SUHL. 20

All patients (age >16 years) with acute first‐ever stroke treated at SUHL are prospectively and consecutively evaluated by LSR in the acute phase. Baseline characteristics regarding age, type of stroke, NIHSS, acute recanalization treatment, and level of education were obtained from the patient or by reviewing patient medical charts.

Patients were subsequently screened for participation in this sub‐study, the Lund Stroke Register Speech Study. Exclusion criteria were: (a) non‐native Swedish speaker; (b) mental obtundation; (c) concomitant disease that can affect language function, for example, diagnosed cognitive impairment and/or severe psychiatric diagnosis; (d) not consenting to participate. Patients received oral and written information about the purpose of the study and gave written informed consent to participate. Family members were consulted for patients who could not give consent due to, for example, severe aphasia.

2.2. Material and procedure

At a median of 3 days post‐stroke (IQR 2–6), patients were assessed with NIHSS item 9 (range 0–3, where 0 = no aphasia, 1 = mild to moderate aphasia, 2 = severe aphasia, 3 = global aphasia) by a registered research nurse certified to perform NIHSS evaluations. The research nurse had clinical patient information but was blinded to the results of the reference language test described below. Within 24 h after this assessment, a speech and language therapist (SLT) evaluated the patients’ language function with the language screening test (LAST). 21 The SLT was in turn blinded to the NIHSS index test results obtained by the research nurse.

Among several possible aphasia screening tools for language evaluation, 16 , 22 we selected LAST 21 due to its high diagnostic accuracy, comprehensive validation, and its recommended use in acute stroke. 16 , 23 , 24 , 25 LAST is specifically constructed to avoid language subtests that may be affected by other stroke symptoms, for example, hemiplegia or executive dysfunction, and includes five subtests with a total of 15 language items. Expressive speech is tested by three subtests: naming, repetition, and automatic speech. Comprehension of spoken language is tested by two subtests: word comprehension and verbal instructions. 21 Each item is scored correct (1 point) or incorrect (0 points), with a maximum score of 15 points (range 0–15, where 0–14 indicates aphasia and 15 no aphasia). The test duration is approximately 2 min. The test can be accessed at: https://www.ahajournals.org/action/downloadSupplement?doi=10.1161%2FSTROKEAHA.110.609503&file=609503_supplemental_data.pdf

Swedish version: http://www.riksstroke.org/wp‐content/uploads/2020/04/LAST‐S_The‐language‐screening‐test‐Swedish.pdf
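
As a concrete illustration of the scoring rule described above, the sketch below sums the 15 binary item scores and applies the ≤14 cut‑off; it is a minimal, hypothetical example for clarity, not the official LAST form or scoring software.

```python
# Minimal sketch of the LAST scoring rule (illustrative only):
# 15 items, each scored 1 (correct) or 0 (incorrect); a total of 15 means
# no aphasia, while any total of 0-14 is classified as aphasia.

def last_total(item_scores):
    """Sum 15 binary item scores (each 0 or 1)."""
    assert len(item_scores) == 15 and all(s in (0, 1) for s in item_scores)
    return sum(item_scores)

def indicates_aphasia(total):
    """Apply the cut-off used in the study: only a perfect 15 counts as no aphasia."""
    return total <= 14

# Example: a single incorrect item gives a total of 14, which is classified as aphasia.
scores = [1] * 15
scores[0] = 0  # one failed item (e.g., a naming item)
print(last_total(scores), indicates_aphasia(last_total(scores)))  # 14 True
```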

All patients completed their full LAST assessment within one session, although pauses between LAST subtests were allowed if needed.

Patients with a discrepancy between results on NIHSS item 9 and LAST underwent additional analyses of their medical charts to evaluate what may have caused the incorrect diagnosis. We reviewed medical charts to detect stroke symptoms frequently used in the differential diagnosis of aphasia 26 : (a) cognitive impairment (diagnosed with the Mini‐Mental State Examination); (b) presence of motor speech disorders or other symptoms concerning language; (c) predominantly comprehension difficulties; and (d) self‐reported developmental reading and writing disorders.

2.3. Statistical analyses

The diagnostic accuracy of NIHSS item 9, using LAST as the reference standard, was determined by analyzing: (a) sensitivity and specificity; (b) positive predictive value and negative predictive value; (c) likelihood ratios; and (d) receiver operating characteristic (ROC) analysis with calculation of the area under the curve (AUC). There were no missing data on the index test or reference standard.
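
For clarity, the measures in (a)–(c) follow the standard definitions based on the counts of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN); these formulas are the conventional ones, stated here for the reader's convenience rather than quoted from the original analysis plan:

\[
\mathrm{Sensitivity}=\frac{TP}{TP+FN},\qquad \mathrm{Specificity}=\frac{TN}{TN+FP},
\]
\[
\mathrm{PPV}=\frac{TP}{TP+FP},\qquad \mathrm{NPV}=\frac{TN}{TN+FN},
\]
\[
\mathrm{LR}^{+}=\frac{\mathrm{Sensitivity}}{1-\mathrm{Specificity}},\qquad \mathrm{LR}^{-}=\frac{1-\mathrm{Sensitivity}}{\mathrm{Specificity}}.
\]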

Age and total NIHSS score at baseline were compared between patients with and without aphasia using the Mann–Whitney test. Gender, acute recanalization treatment, and educational level were compared with the Chi‐square test. p values <0.05 were considered statistically significant. The required sample size for evaluating NIHSS item 9 as a test for aphasia was estimated at 240 patients, based on a power analysis with 80% power and a 5% significance level. The proportion of stroke patients with aphasia was assumed to be 30%.

The statistical calculations were performed with SPSS version 25.

Lund Stroke Register Study was approved by the Regional Ethical Review Authority in Lund, Sweden (registration number 2016/179).

3. RESULTS

Over 15 months, we screened 414 patients. Among these, 275 patients were eligible for inclusion in the study, of whom 221 (116 males, 105 females) were included (Figure 1). The median age at stroke onset was 75 years (IQR 68–81), the median total NIHSS score was 4 (IQR 2–7, range 0–40), and 22% (n = 48) received acute recanalization treatment. Baseline characteristics of the patients are shown in Table 1.

Figure 1. Flowchart of the LSR cohort. Abbreviations: LSR, Lund Stroke Register; NIHSS, National Institutes of Health Stroke Scale; LAST, The Language Screening Test.

Table 1.

Baseline characteristics of patients

Variable All patients (n = 221) Patients without aphasia (n = 163) Patients with aphasia (n = 58) p Value
Age, years, median (IQR) 75 (68–81) 74 (66–80) 78 (72–86) .002
Female gender, n (%) 105 (48) 75 (46) 30 (52) .454 §
Total NIHSS score, median (IQR) 4 (2–7) 3 (1–6) 7 (4–16) <.001
Acute recanalization treatment, n (%) 48 (22) 28 (17) 20 (35) .006 §
Educational level, n (%)
Low, ≤9 years 93 (42) 66 (40) 27 (47) .4 §
Middle, 10–12 years 55 (25) 39 (24) 16 (27)
High, >12 years 73 (33) 58 (36) 15 (26)

Abbreviations: IQR, Inter Quartile Range; NIHSS, National Institutes of Health Stroke Scale.

p Values for comparisons between patients without aphasia and patients with aphasia according to LAST (The Language Screening Test).

Mann–Whitney U test.

§ Chi‐square test.

At a median of 3 days post‐stroke (IQR 2–6), 50 patients (23%) had aphasia according to NIHSS item 9 when examined by a research nurse. The distribution of scores 1–3 was n = 29, n = 12, and n = 9, respectively. Aphasia assessment performed by the SLT using LAST resulted in 58 patients with aphasia (i.e., a score ≤14; 26%), with a median LAST score of 11 (IQR 6–14). When using LAST as the reference test for detecting aphasia, NIHSS item 9 gave 16 false negatives for aphasia (NIHSS score = 0, median LAST score 14, IQR 13–14) and eight false positives (all with an NIHSS item 9 score = 1), yielding a sensitivity of 72% (95% CI 0.59–0.83) and a specificity of 95% (95% CI 0.91–0.98). A summary of the diagnostic accuracy measurements is displayed in Table 2. ROC analysis showed that NIHSS item 9 can discriminate between patients with and without aphasia with acceptable certainty and good diagnostic value 27 (AUC = 0.85, 95% CI 0.78–0.92).
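
For illustration, the point estimates in Table 2 can be reproduced from the counts reported above; the sketch below is a minimal back‑calculation (not the authors' SPSS analysis), with the 2 × 2 cell counts derived from the reported totals of 221 patients, 58 aphasic by LAST, 16 false negatives, and 8 false positives.

```python
# Minimal sketch reproducing the diagnostic accuracy point estimates in Table 2.
# The 2x2 cell counts are derived from the figures reported in the text.
tp = 58 - 16              # aphasia on LAST and NIHSS item 9 >= 1
fn = 16                   # aphasia on LAST but NIHSS item 9 = 0
fp = 8                    # NIHSS item 9 >= 1 but no aphasia on LAST
tn = 221 - tp - fn - fp   # no aphasia on either test

sensitivity = tp / (tp + fn)              # 42/58   ~ 0.72
specificity = tn / (tn + fp)              # 155/163 ~ 0.95
ppv = tp / (tp + fp)                      # 42/50   ~ 0.84
npv = tn / (tn + fn)                      # 155/171 ~ 0.91
lr_pos = sensitivity / (1 - specificity)  # ~ 15
lr_neg = (1 - sensitivity) / specificity  # ~ 0.29

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"PPV={ppv:.2f}, NPV={npv:.2f}, LR+={lr_pos:.0f}, LR-={lr_neg:.2f}")
```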

Table 2.

Summary of diagnostic accuracy measurements of NIHSS item 9

Diagnostic accuracy measurement Value 95% Confidence interval
Sensitivity 72% 0.59–0.83
Specificity 95% 0.91–0.98
Positive predictive value 84% 0.72–0.91
Negative predictive value 91% 0.87–0.94
Likelihood ratio + 15
Likelihood ratio − 0.29

Measurement of diagnostic accuracy of NIHSS item 9 in comparison to assessment with LAST (The Language Screening Test).

Abbreviation: NIHSS, National Institutes of Health Stroke Scale.

The positive predictive value (PPV) was 84% and the negative predictive value (NPV) was 91%. Figure 2 shows the diagnostic accuracy for the sub‐scores of NIHSS item 9.

Figure 2. Positive and negative predictive value of the 4 different sub‐scores of NIHSS item 9. Patients with severe to global aphasia (item 9 score of 2–3) are all correctly diagnosed with aphasia. Patients with mild aphasia (item 9 score of 1) are correctly diagnosed with aphasia in 72% of cases, and patients with no aphasia (item 9 score of 0) are correctly classified as not having aphasia in 91% of cases. Abbreviations: NIHSS, National Institutes of Health Stroke Scale; LAST, The Language Screening Test.

Table 3 displays the false negative and false positive distributions of NIHSS item 9 in comparison to LAST and possible explanations for incorrect diagnosis. The presence of a motor speech disorder or predominantly comprehension deficits were the most common reasons for incorrect diagnosis. Two patients not shown in Table 3 had aphasia according to NIHSS item 9. Even though these two patients scored normal language function on LAST (LAST = 15), aphasia was noted on clinical assessment, with symptoms of mild anomia and difficulty planning utterances (detected by the description task of NIHSS item 9).

Table 3.

Explanations for incorrect aphasia diagnosis with the NIHSS

Explanation for incorrect diagnosis with NIHSS False negative assessment with NIHSS a (n = 16) False positive assessment with NIHSS b (n = 6)
Cognitive symptoms 3 1
Predominantly comprehension language difficulties c 4 NA
Concomitant motor speech disorder d 6 0
Only dysarthria diagnosis NA 3
Developmental dyslexia 0 1
No obvious explanation 3 1

Abbreviations: LAST, The Language Screening Test; NA, not applicable; NIHSS, National Institutes of Health Stroke Scale.

a NIHSS item 9 = 0, but LAST ≤14.

b All these patients had NIHSS item 9 = 1, but LAST = 15.

c According to LAST assessment: failed LAST tasks of picture recognition and/or verbal instruction.

d Patients with combined aphasia and motor speech disorder, that is, dysarthria, according to NIHSS item 10.

Among the 58 patients with aphasia detected by LAST, the most common language symptoms were naming difficulties (79%), followed by difficulties with verbal instructions (64%), and difficulties with the repetition tasks (62%). Additional details concerning the patients’ performance on LAST items are shown in Figure 3.

Figure 3. Proportion of subjects with language impairment per subtest of LAST among the 58 subjects diagnosed with aphasia according to LAST. Expressive speech is tested by the 3 subtests: naming, repetition, and automatic speech. Comprehension of spoken language is tested by the 2 subtests: word comprehension and verbal instructions. Abbreviation: LAST, The Language Screening Test.

Patients with aphasia according to LAST who were not detected with NIHSS item 9 (false negatives) had mild to moderate aphasia, with a median LAST score of 14 (IQR 13–14, range 8–14). Their language symptoms involved LAST items within expressive speech and/or comprehension. Patients with exclusively expressive speech symptoms (n = 9) failed the word‐finding task of LAST and/or repetition; patients with only comprehension deficits (n = 3) failed the word recognition or verbal instruction task of LAST; whereas four patients had difficulties within both speech and comprehension tasks. The median age of the 16 patients with false negative results (NIHSS item 9 = 0 but aphasia according to LAST) was 84 years (IQR 76–88) and their median level of education was low (9 years or less).

4. DISCUSSION

Our study provides new data on the accuracy of diagnosing aphasia after acute ischemic stroke with NIHSS. We found that this test discriminates between patients with and without aphasia in the stroke population, but patients with mild to moderate aphasia may be difficult to diagnose. Our diagnostic validation of NIHSS has important implications for stroke research studies using NIHSS, as well as for clinicians diagnosing patients with aphasia.

The present study indicates that NIHSS can be used as a screening tool to detect aphasia with excellent specificity and acceptable sensitivity. 27 The high specificity implies that a positive finding on NIHSS item 9 strongly suggests aphasia, and the high negative predictive value supports that most patients with an item 9 score of 0 do not have aphasia. Patients with severe or global aphasia (receiving a score of 2–3 on NIHSS item 9) are all correctly diagnosed by NIHSS (PPV 100%). NIHSS has a high inter‐rater and intra‐rater reliability for most subitems 13 , 15 (poorer reliability for loss of consciousness, facial palsy, ataxia, and dysarthria), 8 and repeated testing is therefore not likely to substantially alter the sensitivity of NIHSS for detecting aphasia.

Other detailed language assessments, that is, the Western Aphasia Battery‐Revised (WAB‐R) 17 and the Comprehensive Aphasia Test (CAT), 18 might have yielded other results for the diagnostic accuracy of NIHSS. However, we chose LAST as a brief, acceptably comprehensive screening test, since patients in the acute phase of stroke may not tolerate more detailed comprehensive language assessments, and results can be confounded by attention deficits or impairment of executive function. 22 In addition, several formal language tests are not available in Swedish, and no studies have validated the Swedish versions of WAB‐R, CAT, or aphasia tests developed in Swedish. 28 LAST has in previous studies been recommended for use in acute stroke care due to its high sensitivity of 98% and specificity of 100% in diagnosing aphasia. 16 , 21 LAST has been validated against several language batteries in different countries, including the Boston Diagnostic Aphasia Examination (BDAE), the WAB‐R, and the short version of the Token Test, and has been reported to have high diagnostic accuracy in relation to these tests. 21 , 24 , 29 However, there are also limitations with these other tests: WAB‐R and BDAE were not primarily developed to detect aphasia, but rather to diagnostically classify aphasia performance into aphasia syndromes. 17 , 30 Furthermore, the spontaneous speech subtests in BDAE and WAB‐R do not measure phonemic fluency, and the repetition items in WAB‐R may not be as well structured as in other tests. 31 These tests’ sensitivity to differentiate mild aphasia from normal language function may therefore be questioned. The Token Test assesses solely one language modality, auditory comprehension, and other symptoms of aphasia may subsequently remain unnoticed. Also, an abnormal finding on the Token Test could indicate other impairments instead of aphasia, for example, memory deficits. 32 In addition, tests that are mentally demanding may be difficult to implement in the acute stroke setting.

Rohde et al 33 conclude in a systematic review that few speech pathology tests for aphasia have undergone a diagnostic accuracy analysis. Furthermore, the sensitivity of formal, more comprehensive language testing to detect mild aphasia has not, to our knowledge, been extensively reported in the literature. However, a small study of WAB‐R reported a sensitivity of 60% and a specificity of 100% when compared with a clinical expert assessment using an operational definition for diagnosing mild aphasia. 34 Even though no direct comparison between WAB‐R and LAST or NIHSS item 9 has been reported, this is comparable to the findings for NIHSS in our study.

A screening test needs to be performed in a short time, which entails a risk that it is not detailed enough to detect symptoms of mild impairment; tests may therefore need to include, for example, repetition and word fluency items. A test of verbal fluency is sensitive to language impairment, yet it may also be affected by various cognitive disorders such as impaired executive function, attention, or information processing. 35 In addition, cut‐off scores suggested for some tests might yield false negative results in individuals with very mild aphasia. Screening tests are usually good instruments for diagnosing moderate and severe aphasia, but the question of how to use screening tests or more detailed aphasia tests to differentiate mild aphasia from normal language function needs to be further investigated.

Even minor deviations on LAST lead to a diagnosis of aphasia, reflected by the fact that only LAST = 15 represents no aphasia. This may be related to our study's rather high number (n = 16) of patients with a false‐negative finding of aphasia according to NIHSS item 9. Also, in the acute phase after stroke, other cognitive deficits may sometimes lead to an incorrect diagnosis of aphasia on LAST.

All patients who were misclassified with NIHSS in our study had mild to moderate aphasia. Mild to moderate aphasia was correctly diagnosed on NIHSS in 70% of patients (PPV), with both underdiagnosis (false negatives, n = 16) and misdiagnosis (false positives, n = 8). Patients with mild aphasia at stroke onset often have a good prognosis of full language recovery; 36 however, unresolved moderate to more severe language impairment can impact quality of life. 37 Aphasia also affects patients’ overall rehabilitation and the costs of stroke healthcare, 38 emphasizing the importance of early diagnosis of aphasia, allowing for prompt aphasia treatment. 16

Other NIHSS items, for example, orientation according to NIHSS item 1b, may be of importance when assessing aphasia (Table S1). In our study, only 1 patient with orientation difficulties according to NIHSS item 1b was misclassified as having aphasia (NIHSS item 9 = 1). Aphasia symptoms may also be misjudged as being only orientation deficits: 6 patients with NIHSS item 1b ≥ 1 were categorized as not having aphasia, whereas the reference evaluation of these patients with LAST indicated presence of aphasia. This indicates that orientation deficits need to be taken into consideration by the clinician when a diagnosis of aphasia is difficult to evaluate.

A concomitant motor speech disorder was the most frequent explanation for not diagnosing aphasia using NIHSS. Three patients were misdiagnosed with aphasia by NIHSS (false positives), having exclusively a diagnosis of dysarthria. When speech intelligibility is reduced, it can be difficult to establish whether the symptoms are related to dysarthria or whether the patient has aphasic symptoms masked by a motor speech disorder. 39 Predominant language comprehension difficulty was another symptom associated with a missed aphasia diagnosis. Assessment of comprehension may be difficult since patients may derive information from non‐verbal cues 26 (e.g., hand gestures, tone of voice), leading to an underestimation of the comprehension deficit. Likewise, symptoms of anomia and/or comprehension deficits can be misdiagnosed as cognitive impairment, 26 and some patients in our study were misdiagnosed as having cognitive impairment instead of aphasia.

The median age of the group of false negatives was 84 years and their level of education was low (9 years or less). Aphasia may be more difficult to diagnose in an older population, and level of education can affect language assessment. 40 , 41 , 42

In our study, the sensitivity of NIHSS for detecting aphasia was 72%, which could be considered acceptable, 27 although an even higher sensitivity would be desirable. NIHSS was administered by research nurses who are certified and experienced in assessing patients with NIHSS. This may have impacted the diagnostic accuracy of NIHSS compared with assessment in routine clinical practice.

Findings on individual NIHSS subitems may suggest the presence of specific subtypes of ischemic stroke. Aphasia suggests that the ischemic stroke is related to large vessel occlusion (as opposed to lacunar infarcts). This has implications for acute treatment decisions, since different underlying pathogenetic mechanisms, such as cardioembolic sources and large artery atherosclerotic lesions, need to be considered. It may also motivate further treatment recommendations, for example, vascular surgery, antiplatelet agents, or systemic anticoagulation, for the individual patient. However, using NIHSS item 9 for this purpose would mandate a higher sensitivity.

Our finding of a sensitivity of 72% for NIHSS item 9 suggests that the absence of aphasia on NIHSS item 9 does not necessarily exclude language impairment indicating large vessel occlusion in the dominant hemisphere. Absence of aphasia according to NIHSS item 9 should therefore be used with caution in guiding acute stroke treatment, and it is not a reliable finding for decisions on diagnostic procedures to detect large artery disease or cardiac embolism.

Future improvements of the sensitivity of NIHSS item 9 could possibly include (a) adaptation to different cultural and language settings 39 , 43 (including task‐sensitive measures related to language, that is, repetition and fluency tests); (b) adjusted scoring instructions that are more explicit than current instructions on how to interpret the patients’ symptoms, given that some incorrect responses are currently partly allowed for a score of 0 on NIHSS item 9; 9 , 43 and (c) emphasizing the possibility of misinterpreting aphasia as dysarthria.

Our study only included hospitalized stroke patients, which may have caused selection bias, as patients with mild strokes might not have been admitted to the hospital or might not have had as long hospital stays as patients with aphasia. 44 However, the consecutive inclusion of all stroke patients, covering a wide variety of aphasia symptoms, may have mitigated the potential risk of selection bias. The statistical power of our study, compared with other studies of aphasia diagnosis using stroke scales, further supports our results.

5. CONCLUSIONS

The accuracy of diagnostic tests for aphasia has important implications in stroke care and stroke research. When using NIHSS for screening and diagnosing aphasia in adults with acute ischemic stroke, patients with severe aphasia can be detected; however, some mild aphasias might be misclassified. Given the 72% sensitivity, absence of aphasia on the NIHSS should not be used to guide stroke treatment. Patients with concomitant motor speech disorders, predominantly comprehension deficits, and/or older patients are at risk of being incorrectly diagnosed.

CONFLICTS OF INTEREST

Dr Lindgren reports personal fees from Bayer, Astra Zeneca, BMS Pfizer, and Portola outside the submitted work.

FUNDING INFORMATION

This project was funded by: The Swedish Research Council (2019‐01757), The Swedish Government (under the “Avtal om Läkarutbildning och Medicinsk Forskning, ALF”), The Swedish Heart and Lung Foundation, Region Skåne, Lund University, Skåne University Hospital, Sparbanksstiftelsen Färs och Frosta, and the Freemasons Lodge of Instruction Eos in Lund.

Supporting information

Table S1

ACKNOWLEDGEMENTS

We would like to thank statistician Anna Åkesson and speech and language therapist Lucie Forester for their support in data collection and analysis for this manuscript.

DATA AVAILABILITY STATEMENT

Data that support the findings of this study are available from the corresponding author upon reasonable request.

REFERENCES

1. Engelter ST, Gostynski M, Papa S, et al. Epidemiology of aphasia attributable to first ischemic stroke: incidence, severity, fluency, etiology, and thrombolysis. Stroke. 2006;37:1379‐1384.
2. Flowers HL, Skoretz SA, Silver FL, et al. Poststroke aphasia frequency, recovery, and outcomes: a systematic review and meta‐analysis. Arch Phys Med Rehabil. 2016;97:2188‐2201.
3. Hilari K. The impact of stroke: are people with aphasia different to those without? Disabil Rehabil. 2011;33:211‐218.
4. Pedersen PM, Vinter K, Olsen TS. Aphasia after stroke: type, severity and prognosis. The Copenhagen aphasia study. Cerebrovasc Dis. 2004;17:35‐43.
5. Teasell R, Bitensky J, Salter K, Bayona NA. The role of timing and intensity of rehabilitation therapies. Top Stroke Rehabil. 2005;12:46‐57.
6. Nesi M, Lucente G, Nencini P, Fancellu L, Inzitari D. Aphasia predicts unfavorable outcome in mild ischemic stroke patients and prompts thrombolytic treatment. J Stroke Cerebrovasc Dis. 2014;23:204‐208.
7. Ellis C, Simpson AN, Bonilha H, Mauldin PD, Simpson KN. The one‐year attributable cost of poststroke aphasia. Stroke. 2012;43:1429‐1431.
8. Meyer BC, Lyden PD. The modified National Institutes of Health Stroke Scale: its time has come. Int J Stroke. 2009;4:267‐273.
9. Lyden P. Using the National Institutes of Health Stroke Scale: a cautionary tale. Stroke. 2017;48:513‐519.
10. Pedersen PM, Jörgensen HS, Nakayama H, Raaschou HO, Skyhöj OT. Aphasia in acute stroke: incidence, determinants, and recovery. Ann Neurol. 1995;38:659‐666.
11. Lima RR, Rose ML, Lima HN, Cabral NL, Silveira NC, Massi GA. Prevalence of aphasia after stroke in a hospital population in southern Brazil: a retrospective cohort study. Top Stroke Rehabil. 2019;27:215‐223.
12. Boehme AK, Martin‐Schild S, Marshall RS, Lazar RM. Effect of aphasia on acute stroke outcomes. Neurology. 2016;87:2348‐2354.
13. Kasner SE. Clinical interpretation and use of stroke scales. Lancet Neurol. 2006;5:603‐612.
14. Goldstein LB, Samsa GP. Reliability of the National Institutes of Health Stroke Scale. Extension to non‐neurologists in the context of a clinical trial. Stroke. 1997;28:307‐310.
15. Zandieh A, Kahaki ZZ, Sadeghian H, et al. The underlying factor structure of National Institutes of Health Stroke scale: an exploratory factor analysis. Int J Neurosci. 2012;122:140‐144.
16. El Hachioui H, Visch‐Brink EG, de Lau LM, et al. Screening tests for aphasia in patients with stroke: a systematic review. J Neurol. 2017;264:211‐220.
17. Kertesz A, Raven JC. WAB‐R: Western Aphasia Battery‐Revised. Texas, USA: PsychCorp; 2007.
18. Swinburn K, Porter G, Howard D. Comprehensive Aphasia Test. New York, NY: Psychology Press; 2004.
19. Berthier ML. Poststroke aphasia: epidemiology, pathophysiology and treatment. Drugs Aging. 2005;22:163‐182.
20. Aked J, Delavaran H, Norrving B, Lindgren A. Temporal trends of stroke epidemiology in Southern Sweden: a population‐based study on stroke incidence and early case‐fatality. Neuroepidemiology. 2018;50:174‐182.
21. Flamand‐Roze C, Falissard B, Roze E, et al. Validation of a new language screening tool for patients with acute stroke: the Language Screening Test (LAST). Stroke. 2011;42:1224‐1229.
22. Salter K, Jutai J, Foley N, Hellings C, Teasell R. Identification of aphasia post stroke: a review of screening assessment tools. Brain Inj. 2006;20:559‐568.
23. Bourgeois‐Marcotte J, Flamand‐Roze C, Denier C, Monetta L. LAST‐Q: adaptation and normalisation in Quebec of the language screening test. Rev Neurol. 2015;171:433‐436.
24. Koenig‐Bruhin M, Vanbellingen T, Schumacher R, et al. Screening for language disorders in stroke: German validation of the Language Screening Test (LAST). Cerebrovasc Dis Extra. 2016;6:27‐31.
25. Flowers HL, Flamand‐Roze C, Denier C, et al. English adaptation, international harmonisation, and normative validation of the Language Screening Test (LAST). Aphasiology. 2015;29:214‐236.
26. Goetz C. Textbook of Clinical Neurology. London, UK: Elsevier Health Sciences; 2007.
27. Ray P, Le Manach Y, Riou B, Houle TT. Statistical evaluation of a biomarker. Anesthesiology. 2010;112:1023‐1040.
28. Swedish Agency for Health Technology Assessment and Assessment of Social Services. Test to Investigate Aphasia. https://www.sbu.se/sv/publikationer/sbus‐upplysningstjanst/afasi‐spraktest/. Updated October 29, 2014. Accessed October 28, 2019.
29. Yang H, Tian S, Flamand‐Roze C, et al. A Chinese version of the Language Screening Test (CLAST) for early‐stage stroke patients. PLoS One. 2018;13:e0196646.
30. Goodglass H, Kaplan E, Barresi B. Boston Diagnostic Aphasia Examination, 3rd edn. Texas, USA: Boston Diagnostic Aphasia Examination; 2001.
31. Spreen O, Risser AH. Assessment of Aphasia. Acquired Aphasia, 3rd edn. San Diego, CA: Academic Press; 1998:71‐156.
32. Lesser R. Verbal and non‐verbal memory components in the Token Test. Neuropsychologia. 1976;14:79‐85.
33. Rohde A, Worrall L, Godecke E, O'Halloran R, Farrell A, Massey M. Diagnosis of aphasia in stroke populations: a systematic review of language tests. PLoS One. 2018;13:e0194143.
34. Ross K, Wertz R. Accuracy of formal tests for diagnosing mild aphasia: an application of evidence‐based medicine. Aphasiology. 2004;18:337‐355.
35. Whiteside DM, Kealey T, Semla M, et al. Verbal fluency: language or executive function measure? Appl Neuropsychol Adult. 2016;23:29‐34.
36. Maas MB, Lev MH, Ay H, et al. The prognosis for aphasia in stroke. J Stroke Cerebrovasc Dis. 2012;21:350‐357.
37. Hilari K, Cruice M, Sorin‐Peters R, Worrall L. Quality of life in aphasia: state of the art. Folia Phoniatr Logop. 2015;67:114‐118.
38. Brady MC, Kelly H, Godwin J, Enderby P, Campbell P. Speech and language therapy for aphasia following stroke. Cochrane Database Syst Rev. 2016;6:CD000425.
39. Duffy JR. Motor Speech Disorders: Substrates, Differential Diagnosis, and Management, 3rd edn. St Louis, MO: Elsevier; 2013.
40. Mortensen L, Meyer AS, Humphreys GW. Age‐related effects on speech production: a review. Lang Cogn Process. 2006;21:238‐290.
41. Ardila A, Ostrosky‐Solis F, Rosselli M, Gomez C. Age‐related cognitive decline during normal aging: the complex effect of education. Arch Clin Neuropsychol. 2000;15:495‐513.
42. Yorkston KM, Bourgeois MS, Baylor CR. Communication and aging. Phys Med Rehabil Clin N Am. 2010;21:309‐319.
43. Martin‐Schild S, Siegler JE, Kumar AD, Lyden P. Troubleshooting the NIHSS: question‐and‐answer session with one of the designers. Int J Stroke. 2015;10:1284‐1286.
44. Lazar RM, Boehme AK. Aphasia as a predictor of stroke outcome. Curr Neurol Neurosci Rep. 2017;17:83.
