American Journal of Speech-Language Pathology. 2022 Feb 17;31(2):881–895. doi: 10.1044/2021_AJSLP-21-00150

The Reliability of Telepractice Administration of the Western Aphasia Battery–Revised in Persons With Primary Progressive Aphasia

Leela A. Rao,a Angela C. Roberts,a,b Rhiana Schafer,a Alfred Rademaker,a,c Erin Blaze,a Marissa Esparza,a Elizabeth Salley,a Christina Coventry,a Sandra Weintraub,a,d M.-Marsel Mesulam,a,e Emily Rogalski a,d
PMCID: PMC9150668  PMID: 35175852

Abstract

Purpose:

The use of telepractice in the field of communication disorders offers an opportunity to provide care for those with primary progressive aphasia (PPA). The Western Aphasia Battery–Revised (WAB-R) is used for differential diagnosis, to assess severity of aphasia, and to identify a language profile of strengths and challenges. Telehealth administration of the WAB-R is supported for those with chronic aphasia due to stroke but has not yet been systematically explored in neurodegenerative dementia syndromes. To fill this gap, in-person and telehealth performance on the WAB-R from participants with mild to moderate PPA was compared.

Method:

Nineteen participants with mild to moderate PPA were administered the WAB-R in person and over videoconferencing. Videoconferencing administration included modifications to the testing protocol to ensure smooth completion of the assessment. Subtest and Aphasia Quotient (WAB-AQ) summary scores were compared using concordance coefficients to measure the relationship between the administration modes.

Results:

In-person and telehealth scores showed strong concordance for the WAB-AQ, Auditory Verbal Comprehension subtest, and Naming & Word Finding subtest. The Spontaneous Speech subtest summary score had slightly lower concordance, indicating the need for caution when comparing these scores across administration modes.

Conclusion:

These findings support extending the use of telehealth administration of the WAB-R via videoconferencing to those with mild to moderate PPA given appropriate modifications to testing protocol.


The utilization of telepractice and telemedicine has grown exponentially given recent advances in telecommunications and the demand for remote health care services. A survey conducted in July 2020 by the U.S. Department of Health and Human Services revealed that almost half (43.5%) of Medicare fee-for-service visits in April 2020 were conducted via telehealth due to the recent outbreak of SARS-CoV-2 (Assistant Secretary for Planning and Evaluation, 2020). This increase in telehealth visits, compared with a mere 0.1% in February 2020, demonstrates an accelerated move toward telepractice in the United States for older adults across health care disciplines. In December 2020, the Centers for Medicare & Medicaid Services finalized the addition of neuropsychological testing to the approved Medicare telehealth list, indicating that telehealth delivery of assessments is a trend likely to persist following the SARS-CoV-2 crisis (Centers for Medicare and Medicaid Services, 2020).

The field of communication disorders has witnessed similar changes in service delivery models. The American Speech-Language-Hearing Association (ASHA) definition of telepractice includes remote delivery of assessments, interventions, and/or consultations (ASHA, 2020b). According to ASHA, 33 states now have policies in place to provide guidance regarding telepractice by audiologists and speech-language pathologists (SLPs; ASHA, 2020a). Additional states created temporary policies to approve telepractice due to the SARS-CoV-2 pandemic. Similar policies guiding telepractice are also in place for Canadian provincial licensing bodies (Carling et al., 2020).

Telepractice provides benefits to both the clinician and the patient, including ease of access and promotion of health equity (Weidner & Lowman, 2020). Telehealth service delivery models can help overcome barriers to care, including limited transportation access and limited availability of specialized health care services (e.g., see Khairat et al., 2019; Lowery et al., 2007; Scott et al., 2012). Telehealth can be a valuable avenue for access to basic care and to specialized providers such as neurologists, neuropsychologists, and SLPs for those living in rural and other medically underserved communities (U.S. Department of Health and Human Services, 2017). Telepractice service delivery models are particularly relevant in rare conditions, such as primary progressive aphasia (PPA), for which there is limited access to expert care centers (Groft & Posada de la Paz, 2017; Rogalski et al., 2016).

PPA

PPA is a neurodegenerative dementia syndrome characterized by relatively isolated and predominant impairment of language at onset and a progressive course of decline (Mesulam, 2001). Individuals with PPA typically present with a range of language profiles that can include deficits in word retrieval, semantic knowledge, syntax, motor speech, phonological processes, reading, and writing (Mesulam, 2003). There are three recognized research subtypes of PPA that differ in their language profiles: logopenic-variant, in which word finding is the primary impairment; agrammatic-variant, in which grammar and syntax structures are impaired; and semantic-variant, in which word comprehension deficits are the most salient features (Gorno-Tempini et al., 2011). However, individuals do not always meet criteria for a particular subtype (Gorno-Tempini et al., 2011; Mesulam et al., 2014; Mesulam & Weintraub, 2014; Sajjadi et al., 2012; Wicklund et al., 2014). PPA differs from aphasia secondary to stroke in its progressive course and the co-occurrence of other cognitive and behavioral symptoms as the disease progresses (Rogalski & Mesulam, 2009), which may add challenges to remote language and cognition assessments.

There is currently no cure for PPA, but an emerging body of literature suggests that speech and language therapy may improve individuals' language abilities, quality of life, and communicative function (for reviews, see Cadório et al., 2017; Carthery-Goulart et al., 2013; Volkmer et al., 2019). Previous studies showed that telehealth is a viable model for delivering speech-language therapies to individuals with PPA (Dial et al., 2019; Rogalski et al., 2016). The promise and feasibility of delivering speech-language interventions in PPA remotely are also evidenced in an ongoing Stage 2, randomized controlled trial of a telehealth-delivered speech-language intervention in PPA (NCT03371706; Roberts et al., in press). Given the rarity of PPA and the potential benefit of speech and language interventions, telepractice provides a powerful opportunity to reduce assessment and treatment barriers.

Language Assessment in PPA

Acquiring profiles of language impairment and preserved abilities in PPA is key for planning effective interventions. Few language assessments have been developed specifically for, or normed on, people with PPA. Notable examples include the Northwestern Naming Battery (Thompson et al., 2012), the Northwestern Anagram Test (Weintraub et al., 2009), the Northwestern Assessment of Verbs and Sentences (Thompson et al., 2013), and the Progressive Aphasia Severity Scale (Sapolsky et al., 2014). Given the limited number of PPA-specific assessments, SLPs often turn to the Western Aphasia Battery–Revised (WAB-R) to determine global language impairment profiles (Clark et al., 2019; Shewan & Kertesz, 1980). The WAB-R, in general, has high internal consistency, inter- and intrarater reliability, and test–retest reliability (Bond, 2019; Hula et al., 2010; Kertesz, 1979; Shewan & Kertesz, 1980). Performance on the WAB-R yields four summary scores: Spontaneous Speech, Auditory Verbal Comprehension, Repetition, and Naming & Word Finding. Overall performance is captured in the widely used Aphasia Quotient (WAB-AQ), which can serve as a metric of global aphasia severity. However, it is important to note that most of the normative sample for the WAB-R had chronic aphasia due to a stroke, with only four of the 150 (2.7%) having degenerative aphasia. For an in-depth review of the WAB-R and its applications in aphasia assessment, see Kertesz (2020).

Mesulam, Rogalski, and colleagues showed that the WAB-R is useful for identifying individuals with PPA in the mild, moderate, and severe impairment ranges and for monitoring language impairment progression over time (Mesulam et al., 2012; Rogalski et al., 2011). Additionally, in a retrospective study, Clark et al. (2019) found that the WAB-R can help differentiate persons with PPA from those with primary progressive apraxia of speech at the group level. In the same study, the WAB-R also effectively distinguished those with agrammatic deficits from those with semantic deficits, highlighting that subtest performance may provide meaningful information regarding language strengths and challenges. The WAB-R was validated initially in an in-person administration model in stroke-induced aphasia (Shewan & Kertesz, 1980). Given the growing importance of remote assessment in rare dementias, validating the remote administration of the WAB-R is an important step toward meeting an emerging need.

Clinical Assessment of Cognition and Language in Telehealth

Previous studies comparing telehealth and in-person administrations of cognitive assessments in individuals with poststroke aphasia, mild cognitive impairment, or Alzheimer's disease demonstrated a strong relationship between the two modes of administration (e.g., see Cullum et al., 2006; Hall et al., 2013; Lindauer et al., 2017; Palsbo, 2007; Theodoros et al., 2008; Vestal et al., 2006). Despite the promise of remote neuropsychological assessments, the WAB-R has rarely been tested or validated in a telehealth model. One notable exception is the study by Dekhtyar et al. (2020), which found no significant differences between the WAB-AQ scores of in-person and remote administrations in individuals with aphasia due to stroke. While that study represented an important step in validating the WAB-R for the telehealth model, the results were limited to the WAB-AQ and did not report findings at the subtest level. Additionally, Dekhtyar et al. (2020) used correlation coefficients to establish the relationship between the WAB-AQ scores from in-person and telehealth administrations. While correlation coefficients show relationships at a group level across administration modes, they cannot reveal the exact agreement between modes within a participant.

The available literature underscores the importance of telepractice in the delivery of speech-language interventions to persons with PPA. While previous studies validated remote cognitive and language assessments in aphasia secondary to stroke and in amnestic dementias, there have been no direct comparisons of in-person and remote WAB-R administration in persons living with PPA, in whom variable language profiles and layered cognitive and behavioral factors can present unique assessment challenges. Previous studies evaluating the effect of remote assessment on the WAB-R yielded promising results but did not provide information on the equivalence of administration modes for individual WAB-R subtests, shown previously to be important for identifying language subtype profiles, severity, and progression of PPA (Clark et al., 2019; Mesulam et al., 2012; Rogalski et al., 2011).

This study aims to fill a research and clinical gap by testing the supposition that there will be no difference in performance for the WAB-R in persons with PPA across assessment delivery methods (in person vs. telehealth). We will test this hypothesis by answering the following research question: Is there significant agreement in the WAB-R scores between in-person and telehealth delivery methods across all individual subtests and aphasia quotient scores? Using concordance measures and comparing individual subtest scores, alongside WAB-AQ scores, we will expand the current literature for the use of remote language assessments by providing evidence of the reliability between administration methods for those living with PPA.

Method

Participants

In this retrospective study, we examined study records from 19 participants with a diagnosis of PPA who completed the WAB-R both in person and remotely between January 2017 and February 2020. Four participants completed the WAB-R in both delivery methods at two distinct time points, 1 year apart, resulting in 23 total data sets for the analysis. Supplemental analyses confirmed that including the additional data points from these four participants did not have a significant impact on the study results (see the Statistical Analyses and Results sections). All records for participants co-enrolled in the Northwestern University Primary Progressive Aphasia Research Program and the Communication Bridge randomized controlled trials (ClinicalTrials.gov Identifiers: NCT02439853 and NCT03371706) during the mentioned time frame were included in the sample. Both studies were approved by the Northwestern University Institutional Review Board. Written consent was obtained as part of the enrollment process for each study.

All participants had a diagnosis of PPA that was confirmed through medical record review based on current PPA criteria (Gorno-Tempini et al., 2011; Mesulam et al., 2014). Additionally, each participant had a study partner (typically a spouse, family member, or friend) who could participate in the telehealth administration of the WAB-R. Adequate corrected hearing and vision were confirmed by a brief hearing and vision screen.

All participants demonstrated basic technology competence as measured by a successful login to the videoconference testing session on a personal computer given visual instructions. Internet upload and download speeds greater than or equal to 1 and 4 Mbps, respectively (as tested by speedtest.net), were also required as part of the inclusion criteria. See Appendix A for complete inclusion and exclusion criteria for each study from which these data were taken.

Administration and Scoring of the WAB-R

Administrations of the WAB-R in both service delivery modes were completed within 90 days of one another to limit the influence of disease progression on test results. The time between each administration in each data set ranged from 6 to 90 days (mean: 35.35 days, SD: 26.39 days). In-person administration occurred first in 19 of the 23 (82.6%) administrations. For both administration modes, the WAB-R was administered by a trained research assistant.

In-Person Administration

For the in-person delivery method, all WAB-R subtests were completed in sequential order. The battery was administered in a quiet, distraction-free space dedicated to participant research. Participants were seated across a table from the test administrator. The Spontaneous Speech and Word Fluency tasks were recorded via handheld recorder for later scoring. All other subtests were completed according to the original testing protocol.

Telehealth Administration

For the telehealth administration of the WAB-R, each participant was mailed a MacBook Air or PC laptop as part of the Communication Bridge clinical trial. The WAB-R was administered over HIPAA-compliant videoconferencing platforms BlueJeans (n = 21) or Fuse (n = 2). All telehealth sessions were recorded on the respective videoconferencing platforms. The Spontaneous Speech responses were additionally recorded via a handheld stereo digital voice recorder to allow for easier playback, transcription, and scoring.

Adjustments were made to the WAB-R protocol to allow for remote administration. All participants were mailed the following items for use during the WAB-R administration: a small book, plastic flower, cup, matches, pen, pencil, comb, and screwdriver. Participants were also mailed a Blue Snowball iCE USB microphone and 3 W RMS Logitech external speakers to use during testing. Study partners and participants were instructed on how to arrange the items during testing. Study partners were asked to be present only for parts of the WAB-R that required assistance. See Appendix B for complete details on modifications to test administration for the telehealth administration of the WAB-R.

Scoring

For all but the Spontaneous Speech subtests of the WAB-R, responses were recorded online during test administration, using standard response forms, and scored by an experienced research assistant. Ambiguities that arose during test scoring were flagged and resolved through consensus meetings with the research team that included a clinical neuropsychologist (author S.W.), SLP (author L.A.R.), and trained research staff (author R.S.). Audio (in-person) and audio/video (telehealth) recordings were consulted, as required, to resolve scoring issues.

The Spontaneous Speech subtest responses for both administration methods were scored using the handheld digital voice recordings. To facilitate consistent scoring across samples, an ASHA-certified SLP (author E.B.) used the guidelines provided in the WAB-R Scoring Manual to score the Spontaneous Speech summary score items for all participants. Participant order was randomized, and file names were changed, to support greater separation between the scorer, the administration mode, and the participant ID. However, complete blinding of the SLP rater for the Spontaneous Speech summary score was not possible since the research clinician was not study naïve, and the files contained identifiable information as required by study protocol. Four (two in-person and two telehealth) of the 46 audio files (8.70%) were missing audio recordings of the first part (Conversational Questions) from the Spontaneous Speech task. In these instances, the SLP rater based the Information Content score on the Picture Description task response. See Appendix C for details on the scoring guidelines for incomplete audio recordings.

Statistical Analyses

The primary aim of the statistical analyses was to identify the level of agreement between WAB-R scores obtained from two distinct modes of administration. Lin's concordance coefficient was used to assess the extent to which participants' scores were exactly the same across modes of administration (Lin, 1989). Whereas the Pearson correlation coefficient provides a measure of the linear association between two sets of scores without any indication of the degree of their correspondence, Lin's concordance coefficient examines reliability based on both covariation and correspondence (Lin, 1989). In this way, the concordance coefficient has an advantage over Pearson's correlation for addressing the reliability of the WAB-R administration modes. Correlation coefficients provide information about the level of agreement of scores within a participant relative to the level of agreement of scores between participants, whereas concordance analyses provide information on the level of agreement, or equivalency, between modes within participants. The concordance coefficient measures how close the in-person and telehealth scores lie to the line of identity (the 45° line through the origin) that represents perfect agreement between the two measures. A coefficient of 1.0 indicates that each participant reproduced the same scores on the WAB-R across modes.
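For readers who wish to compute this statistic themselves, the following is a minimal Python sketch of Lin's concordance coefficient as defined in Lin (1989); the paired scores are hypothetical placeholders, not study data, and this is not the authors' analysis code.

```python
# Minimal sketch of Lin's concordance correlation coefficient (Lin, 1989).
# The paired scores below are hypothetical placeholders, not study data.
import numpy as np

def lins_ccc(x, y):
    """rho_c = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))  # covariance
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

in_person = [81.2, 75.4, 90.1, 62.3, 88.7]   # hypothetical WAB-AQ scores
telehealth = [82.0, 74.8, 89.5, 64.0, 87.9]
print(f"Lin's CCC = {lins_ccc(in_person, telehealth):.3f}")
```

Unlike Pearson's r, this coefficient is penalized by the `(mean_x - mean_y)**2` term whenever one mode systematically scores higher than the other, which is why it captures exact agreement rather than mere linear association.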

Given the stringent assumptions required for concordance coefficient analyses, a series of Shapiro–Wilk tests was first conducted to test the assumption of normality of the distribution of differences in the WAB-AQ and WAB-R subtest scores (Giavarina, 2015). Next, two analyses were completed to determine the level of concordance between the in-person and telehealth performance on the WAB-R and related subtests. First, Lin's concordance coefficient was calculated between the in-person and telehealth WAB-AQ and subtest scores (Lin, 1989). Second, a descriptive Bland–Altman (BA) analysis was conducted to measure the degree of bias, or average difference, between scores in the two administration modes. This was done by first calculating the difference between the telehealth WAB-AQ and in-person WAB-AQ. This difference was then plotted against the average of the telehealth and in-person scores. Limits of agreement, representing the range within which 95% of the differences were expected to fall, were calculated (Bland & Altman, 1986; Giavarina, 2015). Thus, a low BA bias with a confidence interval including zero would indicate a nonsignificant difference between scores of the two assessment modes.
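As a rough illustration of this pipeline (not the authors' code), the sketch below runs the Shapiro–Wilk check on the paired differences and then computes the Bland–Altman bias and 95% limits of agreement with SciPy; all scores are hypothetical.

```python
# Sketch of the normality check and Bland-Altman bias/limits of agreement.
# Scores are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

in_person = np.array([81.2, 75.4, 90.1, 62.3, 88.7, 70.5])
telehealth = np.array([82.0, 74.8, 89.5, 64.0, 87.9, 71.8])

diff = telehealth - in_person            # telehealth minus in-person
w_stat, p_norm = stats.shapiro(diff)     # normality of the differences

bias = diff.mean()                       # Bland-Altman bias
half_width = 1.96 * diff.std(ddof=1)     # 95% limits of agreement
print(f"Shapiro-Wilk p = {p_norm:.2f}")
print(f"bias = {bias:.2f}, "
      f"LoA = [{bias - half_width:.2f}, {bias + half_width:.2f}]")
```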

Of note, there were 23 pairs of data in our data set, and four participants contributed two pairs each. Because within-participant clustering of these repeated pairs could exaggerate the apparent closeness of the relationship between the in-person and telehealth scores in the concordance coefficients and measures of bias, an intraclass correlation coefficient was calculated to assess the similarity of the two sets of scores produced by the same participants. A strongly positive intraclass correlation coefficient would indicate that the two sets of scores were very similar and would provide reason for caution when including these pairs of data. In contrast, a nonsignificant intraclass correlation coefficient would support our decision to use the data from the four participants who contributed two sets of data to the study.
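The article does not specify which ICC formulation was used, so the sketch below assumes a one-way random-effects ICC(1) computed by hand over the four participants' difference-score pairs; both the formulation and the values are assumptions for illustration.

```python
# Sketch of a one-way random-effects ICC(1) over the four participants who
# each contributed two (telehealth - in-person) difference scores.
# The ICC formulation and the values below are assumptions for illustration.
import numpy as np

diffs = np.array([[ 0.5,  1.2],    # rows = participants,
                  [-0.8,  0.3],    # columns = the two time points'
                  [ 2.1, -1.0],    # difference scores (hypothetical)
                  [ 0.0,  0.7]])

n, k = diffs.shape
row_means = diffs.mean(axis=1)
ms_between = k * np.sum((row_means - diffs.mean()) ** 2) / (n - 1)
ms_within = np.sum((diffs - row_means[:, None]) ** 2) / (n * (k - 1))
icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1) = {icc1:.2f}")  # near zero -> no within-participant clustering
```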

Results

Participant characteristics are detailed in Tables 1 and 2. All participants spoke English as their primary language, and all identified as White/Caucasian, Non-Hispanic/Latinx. Shapiro–Wilk tests for the WAB-AQ and the subtest summary scores all had p values greater than .05, indicating that we could proceed with our primary research methods; the Information Content and Fluency component scores were the exceptions (see Table 3).

Table 1.

Clinical and demographic information for the 19 participants (23 data sets) with primary progressive aphasia.

Demographic domain N M SD Range
Years of education 19 15.8 2.9 12–20
Age at assessment 19 64.4 5.8 55–74
Age at symptom onset 19 59.7 6.2 49–70

WAB-R domain N M SD Range
WAB-AQ • 23 81.6 10.3 54.1–97.6
WAB-AQ ∇ 23 82.3 9.7 61.9–96.2
Spontaneous Speech • 23 15.3 2.1 12–19
Spontaneous Speech ∇ 23 15.3 2.2 10–19
 Information Content • 23 8.9 1.1 5–10
 Information Content ∇ 23 9.0 1.1 5–10
 Fluency • 23 6.4 1.5 4–9
 Fluency ∇ 23 6.4 1.5 4–9
Auditory Verbal Comprehension • 23 9.2 0.8 6.3–10.0
Auditory Verbal Comprehension ∇ 23 9.2 1.0 5.7–10.0
Repetition • 23 8.3 1.2 5.9–10.0
Repetition ∇ 23 8.4 1.1 5.8–10.0
Naming and Word Finding • 23 8.0 2.4 1.3–10.0
Naming and Word Finding ∇ 23 8.1 2.2 1.5–10.0

Note. N = sample size; WAB-R = Western Aphasia Battery–Revised; WAB-AQ = Western Aphasia Battery Aphasia Quotient; • = in person; ∇ = telehealth. Four participants contributed two unique pairs of WAB-R scores to this study, resulting in 19 unique participants and 23 data sets.

Table 2.

Primary progressive aphasia subtype data for the participants in the current sample.

PPA subtype n %
Logopenic 9 47
Semantic 6 32
Agrammatic 4 21

Note. Subtypes were determined using current diagnostic criteria (Gorno-Tempini et al., 2011; Mesulam et al., 2014).

Table 3.

Preliminary analyses supported the assumption of normality in our sample. Follow-up analyses revealed a significant ceiling effect for the Naming and Word Finding subtest of the Western Aphasia Battery–Revised (WAB-R).

WAB-R score Shapiro–Wilk test p value Pearson correlation (ceiling-effect check)
WAB-AQ .70 −0.17 (p = .45)
Spontaneous Speech .14 0.08 (p = .73)
 Information Content .003* 0.02 (p = .93)
 Fluency < .001* −0.03 (p = .88)
Auditory Verbal Comprehension .50 0.30 (p = .16)
Repetition .17 −0.32 (p = .14)
Naming and Word Finding .72 −0.59 (p = .003)*

Note. With the exception of the Information Content and Fluency component scores of the Spontaneous Speech subtest, p values for the Shapiro–Wilk tests revealed that the Western Aphasia Battery Aphasia Quotient (WAB-AQ) and subscale comparisons all passed the test for normality as outlined by Giavarina (2015). A p < .05 would indicate that the data were nonnormal. Pearson correlations were used to test for ceiling effects. Only the Naming and Word Finding summary had a statistically significant negative Pearson correlation, indicating a ceiling effect for performance on this subtest.

* p < .05.

WAB-AQ

Overall, there was strong agreement between the WAB-AQ scores obtained in person and remotely. The mean WAB-AQ (see Table 1) for the in-person administrations was 81.6 (range: 54.1–97.6, SD = 10.3), compared with a similar telehealth mean of 82.3 (range: 61.9–96.2, SD = 9.7). The mean absolute difference between the scores was 2.6 points (range: 0.2–7.9, SD = 1.9). Two participants had a difference greater than 5.0 points across modes. The concordance coefficient between the WAB-AQ scores was 0.942 (95% CI [0.870, 0.975]), demonstrating strong agreement between the quotient scores obtained from the two administration methods. Additionally, Bland–Altman analyses revealed a bias of 0.7 with a confidence interval between −0.7 and 2.0, indicating a small and nonsignificant difference between the mean WAB-AQ scores obtained across modes. See Table 4 and Figure 1 for the WAB-AQ concordance coefficients, bias, and agreement limits.

Table 4.

Analyses of concordance and bias found no significant difference in Western Aphasia Battery–Revised (WAB-R) scores across administration modes.

WAB-R score Concordance coefficient (ρc) [95% CI] Bias [95% CI] Agreement limits
WAB-AQ 0.942 [0.870, 0.975] 0.7 [−0.7, 2.0] −5.8, 7.2
Spontaneous Speech total 0.650 [0.335, 0.834] −0.0 [−0.7, 0.8] −3.6, 3.7
 Information Content 0.451 [0.057, 0.723] −0.0 [−0.4, 0.5] −2.2, 2.3
 Fluency 0.702 [0.417, 0.861] −0.0 [−0.5, 0.5] −2.3, 2.3
Auditory Verbal Comprehension total 0.916 [0.820, 0.962] 0.1 [0.0, 0.3] −0.6, 0.8
Repetition total 0.878 [0.737, 0.945] 0.1 [−0.2, 0.3] −1.1, 1.2
Naming and Word Finding total 0.977 [0.951, 0.990] 0.1 [−0.1, 0.3] −0.8, 1.1

Note. A concordance coefficient of 1.0 denotes perfect agreement between the two administration methods. The Western Aphasia Battery Aphasia Quotient (WAB-AQ) scores and the summary scores for the Auditory Verbal Comprehension, Repetition, and Naming and Word Finding subtests had strong agreement between the in-person and telehealth assessments. The Spontaneous Speech subtest had a lower, but still clinically acceptable, level of agreement across assessment modes. Bias, as calculated by a Bland–Altman analysis, reflects the average difference between the two administration modes. A 95% confidence interval was used for the concordance coefficients and bias. Bias was nonsignificant for the WAB-AQ and all subtests. Agreement limits indicate the upper and lower intervals within which 95% of the differences between scores were expected to lie.

Figure 1.

Concordance plot showing strong agreement of Western Aphasia Battery Aphasia Quotient (WAB-AQ) scores across administration modes. All WAB-AQ scores in this sample were greater than 50. The line of identity (orange dotted line) indicates a concordance coefficient of 1.0, or perfect agreement between the in-person and telehealth scores for each participant on the WAB-AQ. Each point indicates a participant's WAB-AQ score in person (horizontal axis) and over telehealth (vertical axis). The close clustering of the points around the line of identity reflects a concordance coefficient of 0.942 and strong agreement between WAB-AQ scores across administration modes. WAB-R = Western Aphasia Battery–Revised.

WAB-R Subtest Scores

At the subtest level, Lin's concordance coefficients and measures of bias also supported strong agreement between in-person and telehealth scores of the WAB-R subtests (see Table 4). The confidence intervals for bias for all subtests included zero, indicating no significant differences between assessment modes. The Naming and Word Finding, Auditory Verbal Comprehension, and Repetition subtests also had high concordance coefficients (ρc = 0.977, 0.916, and 0.878, respectively), indicating strong agreement between the in-person and telehealth scores. In contrast, the Spontaneous Speech subtest had a lower concordance coefficient of 0.650, indicating lower individual agreement (greater variability) in scores across assessment modes. However, results had a bias of 0.0 with a confidence interval of −0.7 to +0.8, indicating that overall there was no significant difference in mean scores across assessment modes. Detailed agreement statistics are presented in Table 4, with concordance plots provided for all WAB-R subtests in Figure 2.

Figure 2.

Concordance plots for each subtest in the Western Aphasia Battery–Revised (WAB-R). The line of identity (blue dotted line) indicates a concordance coefficient of 1.0, or perfect agreement between the in-person and telehealth scores for each participant for each subtest. Each point indicates a participant's score for a subtest in person (horizontal axis) and over telehealth (vertical axis). The degree of clustering of the points around the line of identity reflects the concordance, or agreement, between WAB-R subtest scores across administration modes. As shown in Figures 2b and 2d, Auditory Verbal Comprehension and Naming & Word Finding had the highest concordance, or strongest agreement, across administration modes (ρc = 0.916 and 0.977, respectively). In contrast, the Spontaneous Speech subtest had the lowest concordance (see Figure 2a; ρc = 0.650).

Given the lower concordance scores for the Spontaneous Speech subtest, concordance coefficients and measures of bias were calculated for the two individual component scores that comprised the total score for this subtest. The confidence intervals for bias for both tasks included zero, indicating no significant differences in scores across assessment modalities. However, the exact agreement (or concordance) of scores was lower for the Information Content of Spontaneous Speech score (ρc = 0.451) than for the Fluency, Grammatical Competence, and Paraphasias of Spontaneous Speech component score (ρc = 0.702; see Figure 3). Additionally, Shapiro–Wilk tests of normality were significant for both components of the Spontaneous Speech subtest, indicating a nonnormal distribution of scores, likely due to the large number (> 50%) of identical scores across assessment modes.

Figure 3.

Concordance plots for the Spontaneous Speech component scores. The line of identity (blue dotted line) indicates a concordance coefficient of 1.0, or perfect agreement between the in-person and telehealth scores for each participant. Each point indicates a participant's score for each Spontaneous Speech component in person (horizontal axis) and over telehealth (vertical axis). The degree of clustering of the points around the line of identity reflects the concordance, or agreement, between scores across administration modes. As shown in the figures, Information Content scores had lower concordance (ρc = 0.451) than scores for Fluency, Grammatical Competence, and Paraphasias (ρc = 0.702). • = 1 instance; + = 2 instances; ○ = 3 instances; × = 4 instances; ▪ = 5 instances; ▴ = 6 instances; ♦ = 8 instances.

Multiple Data Sets

This study had four participants who contributed two data sets to the analyses, each at different time points. To test whether there was a clustering of differences within participants who contributed two data sets each, an intraclass correlation was calculated for our WAB-AQ data set. The result was 0, indicating that there was no clustering of differences within participants. This further indicated that there would be no effect of including repeated participants on the concordance and agreement statistics, supporting our decision to analyze these data as 23 independent pairs.

Length of Time Between WAB-R Administrations

A closer examination of the data set revealed a wide range (6–90 days, M = 35.3 days, SD = 26.4 days) in the length of time between WAB-R administrations. Thus, a targeted follow-up analysis was conducted to further explore the impact of between-administration time on our results. Specifically, a Spearman's correlation was conducted between the number of days between administrations and the absolute value of the difference between WAB-AQ scores. A moderately positive, significant correlation was found, rs(22) = 0.57, p = .005, indicating that a greater number of days between administrations was associated with a greater difference between telehealth and in-person WAB-AQ scores.
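A minimal sketch of this follow-up analysis, assuming SciPy and hypothetical values in place of the study data:

```python
# Sketch of the Spearman correlation between days separating the two
# administrations and the absolute WAB-AQ difference. Values are hypothetical.
from scipy import stats

days_between = [6, 14, 27, 35, 48, 62, 90]
abs_aq_diff = [0.4, 1.8, 0.9, 2.6, 3.1, 2.2, 6.5]

rho, p_value = stats.spearmanr(days_between, abs_aq_diff)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```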

Ceiling Effects

Finally, given the high mean and maximum WAB-AQ scores in our sample, a follow-up analysis was conducted to assess the presence and influence of potential ceiling effects on our main findings. Ceiling effects were determined by correlating the difference of the WAB-AQ and summary scores with the average of each score. A negative correlation would indicate the presence of ceiling effects, such that there were smaller differences in the scores for higher values compared with the lower values. Pearson correlations conducted for the WAB-AQ scores and Spontaneous Speech, Auditory Verbal Comprehension, and Repetition summary scores indicated no presence of ceiling effects. Additionally, no ceiling effects were indicated for the Spontaneous Speech Information Content or Fluency component scores. However, the correlation of the Naming and Word Finding summary score was negative and significant, indicating a ceiling effect for this subtest.
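The ceiling-effect check described above can be sketched as follows, again with hypothetical scores rather than study data:

```python
# Sketch of the ceiling-effect check: correlate the between-mode difference
# with the pairwise average; a significant negative Pearson r indicates that
# differences shrink at high scores (a ceiling). Values are hypothetical.
import numpy as np
from scipy import stats

in_person = np.array([9.8, 9.9, 10.0, 8.9, 7.5, 5.0])
telehealth = np.array([9.9, 10.0, 10.0, 9.5, 8.2, 6.1])

diff = telehealth - in_person
avg = (telehealth + in_person) / 2
r, p_value = stats.pearsonr(diff, avg)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```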

Discussion

In this study, we demonstrated strong reliability for the telehealth administration of the WAB-R for individuals with PPA. Our intake technology questionnaire, which evaluated participants' familiarity with various software, indicated that none of the participants had prior experience with BlueJeans or Fuse videoconferencing. However, all participants were able to use these interfaces for testing following a brief baseline training with research staff.

Our findings support the previous literature reporting strong correlations between in-person and telehealth WAB-AQ scores in aphasia secondary to stroke (Dekhtyar et al., 2020) and extend these findings by showing robust reliability between testing modes in aphasic dementia and for most individual WAB-R subtest scores. For the Auditory Verbal Comprehension, Repetition, and Naming and Word Finding scores, agreement between in-person and telehealth assessment modes was interpreted as "excellent" and acceptable for clinical purposes (Koo & Li, 2016). The Naming and Word Finding subtest had the highest level of agreement, which may have been influenced by significant ceiling performance.

In contrast, the Spontaneous Speech summary score had the lowest agreement across administration modes and had lower reliability than has been reported previously for in-person, stroke-based aphasia assessment. Shewan and Kertesz (1980) reported test–retest reliability scores of .95 for information content and .94 for fluency. Similarly, Bond (2019) reported correlations of .97 for WAB-R Spontaneous Speech subscale scores when administered in person, 2 weeks apart, in persons with aphasia following stroke. Our Spontaneous Speech subscale agreement score fell within a range interpreted as moderate agreement (i.e., .50–.75). While in many clinical cases this level of reliability may be acceptable (Koo & Li, 2016; Portney & Watkins, 2000), our findings suggest caution is warranted when interpreting Spontaneous Speech scores from the WAB-R across assessment modes, particularly the Information Content component score. Although the reason is unclear, our data suggest that this component score of the Spontaneous Speech subtest differed as a function of administration mode in a way that did not affect the Fluency, Grammatical Competence, and Paraphasias of Spontaneous Speech component score.

While the use of a single rater for the Spontaneous Speech subtest is a notable strength of our approach, there are methodological considerations that may have contributed to our Spontaneous Speech agreement findings. It is possible that our use of a handheld recorder for the telehealth mode affected the audio quality in a way that was important for scoring. Additionally, viewing the Picture Description stimulus through a computer monitor may have affected the productions of some persons with PPA in a way that is unclear from the available data. Because previous studies assessing in-person versus telehealth modes for the WAB-R in stroke-induced aphasia did not report individual subtest reliability, it is unclear whether the observed findings are unique to our cohort (Dekhtyar et al., 2020). That said, a previous study in traumatic brain injury found that the characteristics of discourse samples were not significantly affected by telehealth administration of assessments (Turkstra et al., 2012). Therefore, optimization of the WAB-R for telehealth delivery will benefit from additional studies examining whether lower agreement on the Spontaneous Speech subtest is an effect of administration mode generally, of PPA specifically, or of the WAB stimuli.

Limitations

There are limitations that should be considered when interpreting these findings. While our sample size was consistent with previous telehealth versus in-person assessment comparison studies in aphasia and dementia (Cullum et al., 2006; Dekhtyar et al., 2020; Hall et al., 2013; Lindauer et al., 2017; Vestal et al., 2006), and also with studies evaluating the utility of the WAB-R in PPA (Clark et al., 2019), these findings will benefit from replication in larger samples. Additionally, the impact of the four missing audio files and alternative scoring guidelines for Part A of the Spontaneous Speech subtest is unknown. Because the inclusion criteria for the clinical trial that served as the primary data source were designed to select persons with mild-to-moderate PPA, caution should be used when generalizing these findings to those with more severe disease or those with vastly different WAB-AQ score profiles. A known challenge in PPA research, and one that is reflected in our current sample, is the limited diversity in years of education, race, and ethnicity. Walker et al. (2020) reported that the availability of telepractice services does not necessarily translate to use by underrepresented groups. However, the limited diversity in our sample does not allow us to consider the impact of potential barriers on the remote administration of the WAB-R in persons from underrepresented groups.

There was also a moderate positive relationship between the days between test administrations and the difference between WAB-AQ scores, such that a greater number of days between administrations was associated with a greater difference in scores across administration modes. Administrations that were further apart may have been at risk of capturing natural progression effects. Previous studies suggest that natural progression effects in PPA typically occur on a longer timescale than the observation window for this study (< 90 days). For example, Rogalski et al. (2019) found that participants with PPA demonstrated small, variable declines in language performance over a 6-month period and a change of 6.7%–11.0% in WAB-AQ over a 12-month period. Therefore, it is possible that natural progression effects were present in our data and may have confounded administration mode reliability for some participants. Given the rarity of PPA, this study represents an important first step in supporting the use of telehealth assessments to increase access to care for these individuals.

Future Research

Future research should focus on broadening the scope of this study to persons living with moderate to severe PPA, since previous studies in stroke-induced aphasia suggest that test–retest reliability (and, by extension, possibly administration mode reliability) may differ as a function of aphasia severity (Hula et al., 2010). Additionally, increased PPA symptom severity may lead to increased difficulty with understanding test instructions, increased need for tactile cues in the Object Naming subtest, and an increased need for more involved partner training/instruction for the telehealth administration. Finally, as demonstrated by our Spontaneous Speech subtest reliability findings, some WAB-R subtests may be more affected than others by remote administration.

Conclusions

This study demonstrates that the WAB-R can be administered through telehealth with relative confidence that the assessment findings are consistent with traditional in-person administration in patients with mild to moderate PPA. While caution may be warranted when comparing in-person and telehealth acquired Spontaneous Speech subtest scores, the overall agreement for the WAB-AQ and the majority of subtest scores was "good" to "excellent" and, thus, supports the clinical use of telehealth administration of the WAB-R for persons with PPA (Koo & Li, 2016; Portney & Watkins, 2000). Mailing testing items to participants ahead of time, using a care partner to set up task-specific items, and establishing baseline technology competency may be key factors in ensuring smooth administration of the WAB-R to this population. This study provides additional support for clinicians and researchers seeking to expand access to speech-language pathology services and neuropsychological assessment for persons with PPA.

Author Contributions

Leela A. Rao: Conceptualization (Equal), Data curation (Lead), Investigation (Equal), Methodology (Equal), Visualization (Lead), Writing – original draft (Lead), Writing – review & editing (Lead). Angela C. Roberts: Conceptualization (Equal), Funding acquisition (Lead), Supervision (Lead), Writing – original draft (Equal), Writing – review & editing (Equal). Rhiana Schafer: Data curation (Supporting), Methodology (Equal), Writing – review & editing (Supporting). Alfred Rademaker: Formal analysis (Lead), Writing – original draft (Equal), Writing – review & editing (Equal), Methodology (Equal). Erin Blaze: Data curation (Equal), Methodology (Equal), Writing – review & editing (Supporting). Marissa Esparza: Methodology (Equal). Elizabeth Salley: Conceptualization (Equal), Methodology (Equal), Supervision (Equal), Writing – review & editing (Supporting). Christina Coventry: Data curation (Supporting), Methodology (Equal), Writing – review & editing (Supporting). Sandra Weintraub: Data curation (Equal), Writing – review & editing (Supporting). M.-Marsel Mesulam: Funding acquisition (Lead), Writing – review & editing (Supporting). Emily Rogalski: Conceptualization (Lead), Funding acquisition (Lead), Methodology (Lead), Project administration (Lead), Writing – original draft (Equal), Writing – review & editing (Equal).

Acknowledgments

We would like to thank the research participants and families who made this work possible. Research reported in this publication was supported in part by the National Institute on Aging under Award Numbers R01AG055425, P30AG13854, and R01AG056258; the National Institute of Neurological Disorders and Stroke under Award Number R01NS075075 (awarded to Emily Rogalski); and the National Institute on Deafness and Other Communication Disorders under Award Number R01DC008552 (awarded to M.-Marsel Mesulam). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. This is not an industry-sponsored study.

Appendix A

Summary of Inclusion and Exclusion Criteria of the Two Studies From Which the Data Were Taken

Language in PPA (in-person study)

Inclusionary criteria:

  • Diagnosis of primary progressive aphasia

  • Right-handed

  • English speaker

  • Visual acuity of 20/30 (corrected)

  • Adequate hearing to follow conversation

Exclusionary criteria:

  • Left-handed

  • Presence of significant medical illness that could interfere with continued participation in the study

Communication Bridge (online study)

Inclusionary criteria:

  • Diagnosis of primary progressive aphasia

  • Internet upload speeds greater than or equal to 1 Mbps

  • Internet download speeds greater than or equal to 4 Mbps

  • English speaker

  • Adequate visual acuity for online sessions and evaluations (corrected)

  • Adequate hearing for online sessions and evaluations (corrected)

  • Availability of a communication partner to participate in the study

Appendix B

Summary of Modifications Made to the WAB-R for Telehealth Administration

Question Modification(s) Study partner (SP) present?
Conversational Questions
6 Original question: Why are you here (in the hospital)?
Modification: Why are we meeting today?
No
Picture Description
N/A The tester shared a PowerPoint presentation with the participant via screen share through videoconferencing. The first slide said "Tell me what's happening in this picture." The second slide contained the image for the Picture Description task. Transcription was completed after the evaluation by trained research staff. No
Yes/no questions
10 Original question: Are the lights on in this room?
Modification: Are the lights on in the room you're in? a
If needed
11 Original question: Is the door closed?
Modification: Is the door closed in the room you're in? a
If needed
13 Original question: Is this ____ (actual location)?
Modification: Am I at Northwestern University? a
No
Auditory word recognition
1–6 The following instructions were provided:
PT, please close your eyes. SP, push the laptop back and close it halfway so I can see the table in front of PT. SP, please lay <real objects> in front of PT. Spread out the items so they're not on top of each other. PT, open your eyes. Point to the <test items as indicated by the Manual>.
Yes b
7–60 The tester shared their screen with a PowerPoint presentation containing pictures from the testing booklet. Numbers 1–6 were added under each picture to facilitate object identification. For questions 25–30, letters A–F were under the targets. The following instructions were provided: Tell me the number/letter of the picture that goes with the word <target>. If needed c
43–60 None No d
Sequential commands
5–11 The following additional instructions were provided before asking the questions:
SP, pull out the pen, comb, and book. Arrange them in front of PT.
Yes e
10 Original question: Put the pen on top of the book then give it to me.
Modification: Put the pen on top of the book, then give it to SP.
Yes
Repetition
1–15 Repetition of items was permitted if there was poor audio quality during the testing session. Repetitions were marked on the score sheet. No
Object naming
1–20 The tester showed the physical items to the PT over videoconferencing. Tactile cues were not provided. Phonemic and semantic cues were provided if necessary. No
Word fluency
N/A None No
Sentence completion
1–5 None No
Responsive speech
1–5 None No

Note. Assessment components that required the study partners (SPs) were completed together at the start of the Western Aphasia Battery–Revised (WAB-R). Persons with a diagnosis (PTs) completed the rest of the assessment independently.

a All participants provided verbal responses.

b PT and SP were asked to have the objects ready for the testing session. If the tester was unclear as to what the PT was pointing to, PT was asked to touch or pick up the target item.

c If the tester was unsure of whether the PT accurately pointed to the door or light, SP was asked to give the tester a "tour" of the room at the end of the assessment.

d If the tester was unsure of whether the PT accurately pointed to the target items, they were asked to stand up and/or adjust the computer screen to allow for verification.

e If the PT did not understand Question #5, the tester asked SP to demonstrate the response.

Appendix C

Modified Information Content (IC) Scoring Guidelines for the Four Audio Files That Were Missing Part A of the Spontaneous Speech Subtest of the Western Aphasia Battery–Revised (WAB-R)

IC score WAB manual scoring guidelines Scorer guidelines for files missing Part A
8 Correct responses to any five of the items in Task A and an incomplete description of the picture in Task B. An incomplete description of the picture in Task B.
9 Correct responses to all items in Task A and an almost complete description of the picture in Task B; at least 10 people, objects, or actions should be named. Circumlocution may be present. An almost complete description of the picture in Task B; at least 10 people, objects, or actions should be named. Circumlocution may be present.
10 Correct responses to all of the items in Task A and a reasonably complete description of the picture in Task B. Sentences of normal length and complexity, referring to most items and activities A reasonably complete description of the picture in Task B. Sentences of normal length and complexity, referring to most items and activities

Note. Task A = Conversational Questions; Task B = Picture Description.


References

  1. American Speech-Language-Hearing Association. (2020a). Tracking of state laws and regulations for telepractice and licensure policy. https://www.asha.org/uploadedFiles/State-Telepractice-Policy-COVID-Tracking.pdf
  2. American Speech-Language-Hearing Association. (2020b). Telepractice. https://www.asha.org/Practice-Portal/Professional-Issues/Telepractice/
  3. Assistant Secretary for Planning and Evaluation. (2020). Medicare beneficiary use of telehealth visits: Early data from the start of the COVID-19 pandemic. https://aspe.hhs.gov/sites/default/files/private/pdf/263866/hp-issue-brief-medicare-telehealth.pdf
  4. Bland, J. M. , & Altman, D. G. (1986). Statistical methods for assessing agreement between two methods of clinical measurement. The Lancet, 1(8476), 307–310. https://doi.org/10.1016/S0140-6736(86)90837-8 [PubMed] [Google Scholar]
  5. Bond, B. (2019). The test-retest reliability of the Western Aphasia Battery. University of Kansas. https://kuscholarworks.ku.edu/bitstream/handle/1808/30075/Bond_ku_0099M_16541_DATA_1.pdf?sequence=1 [Google Scholar]
  6. Cadório, I. , Lousada, M. , Martins, P. , & Figueiredo, D. (2017). Generalization and maintenance of treatment gains in primary progressive aphasia (PPA): A systematic review. International Journal of Language & Communication Disorders, 52(5), 543–560. https://doi.org/10.1111/1460-6984.12310 [DOI] [PubMed] [Google Scholar]
  7. Carling, A. , Capman-Jay, S. , & Joglekar, S. (2020). Virtual care standards.
  8. Carthery-Goulart, M. T. , da Silveira, A. D. C. , Machado, T. H. , Mansur, L. L. , Parente, M. , Senaha, M. L. H. , Brucki, S. M. D. , & Nitrini, R. (2013). Nonpharmacological interventions for cognitive impairments following primary progressive aphasia: A systematic review of the literature. Dementia & Neuropsychologia, 7(1), 122–131. https://doi.org/10.1590/s1980-57642013dn70100018 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Centers for Medicare and Medicaid Services. (2020). Final policy, payment, and quality provisions changes to the Medicare physician fee schedule for calendar year 2021. Centers for Medicare & Medicaid Services. Retrieved April 22 from. https://www.cms.gov/newsroom/fact-sheets/final-policy-payment-and-quality-provisions-changes-medicare-physician-fee-schedule-calendar-year-1 [Google Scholar]
  10. Clark, H. M. , Utianski, R. L. , Duffy, J. R. , Strand, E. A. , Botha, H. , Josephs, K. A. , & Whitwell, J. L. (2019). Western Aphasia Battery–Revised profiles in primary progressive aphasia and primary progressive apraxia of speech. American Journal of Speech-Language Pathology, 29(1S), 498–510. https://doi.org/10.1044/2019_AJSLP-CAC48-18-0217 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Cullum, C. M. , Weiner, M. F. , Gehrmann, H. R. , & Hynan, L. S. (2006). Feasibility of telecognitive assessment in dementia. Assessment, 13(4), 385–390. https://doi.org/10.1177/1073191106289065 [DOI] [PubMed] [Google Scholar]
  12. Dekhtyar, M. , Braun, E. J. , Billot, A. , Foo, L. , & Kiran, S. (2020). Videoconference administration of the Western Aphasia Battery–Revised: Feasibility and validity. American Journal of Speech-Language Pathology, 29(2), 673–687. https://doi.org/10.1044/2019_AJSLP-19-00023 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Dial, H. R., Hinshelwood, H. A., Grasso, S. M., Hubbard, H. I., Gorno-Tempini, M.-L., & Henry, M. L. (2019). Investigating the utility of teletherapy in individuals with primary progressive aphasia. Clinical Interventions in Aging, 14, 453–471. https://doi.org/10.2147/CIA.S178878
  14. Giavarina, D. (2015). Understanding Bland Altman analysis. Biochemia Medica, 25(2), 141–151. https://doi.org/10.11613/BM.2015.015
  15. Gorno-Tempini, M. L., Hillis, A. E., Weintraub, S., Kertesz, A., Mendez, M., Cappa, S. F., Ogar, J. M., Rohrer, J. D., Black, S., Boeve, B. F., Manes, F., Dronkers, N. F., Vandenberghe, R., Rascovsky, K., Patterson, K., Miller, B. L., Knopman, D. S., Hodges, J. R., Mesulam, M. M., & Grossman, M. (2011). Classification of primary progressive aphasia and its variants. Neurology, 76(11), 1006–1014. https://doi.org/10.1212/WNL.0b013e31821103e6
  16. Groft, S. C., & Posada de la Paz, M. (2017). Preparing for the future of rare diseases. Advances in Experimental Medicine and Biology, 1031, 641–648. https://doi.org/10.1007/978-3-319-67144-4_34
  17. Hall, N., Boisvert, M., & Steele, R. (2013). Telepractice in the assessment and treatment of individuals with aphasia: A systematic review. International Journal of Telerehabilitation, 5(1), 27–38. https://doi.org/10.5195/ijt.2013.6119
  18. Hula, W., Donovan, N. J., Kendall, D. L., & Gonzalez-Rothi, L. J. (2010). Item response theory analysis of the Western Aphasia Battery. Aphasiology, 24(11), 1326–1341. https://doi.org/10.1080/02687030903422502
  19. Kertesz, A. (1979). Aphasia and associated disorders: Taxonomy, localization and recovery. Holt Rinehart & Winston.
  20. Kertesz, A. (2020). The Western Aphasia Battery: A systematic review of research and clinical applications. Aphasiology, 1–30. https://doi.org/10.1080/02687038.2020.1852002
  21. Khairat, S., Haithcoat, T., Liu, S., Zaman, T., Edson, B., Gianforcaro, R., & Shyu, C.-R. (2019). Advancing health equity and access using telemedicine: A geospatial assessment. Journal of the American Medical Informatics Association, 26(8–9), 796–805. https://doi.org/10.1093/jamia/ocz108
  22. Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163. https://doi.org/10.1016/j.jcm.2016.02.012
  23. Lin, L. I.-K. (1989). A concordance correlation coefficient to evaluate reproducibility. Biometrics, 45(1), 255–268. https://doi.org/10.2307/2532051
  24. Lindauer, A., Seelye, A., Lyons, B., Dodge, H. H., Mattek, N., Mincks, K., Kaye, J., & Erten-Lyons, D. (2017). Dementia care comes home: Patient and caregiver assessment via telemedicine. Gerontologist, 57(5), e85–e93. https://doi.org/10.1093/geront/gnw206
  25. Lowery, C., Bronstein, J., McGhee, J., Ott, R., Reece, E. A., & Mays, G. P. (2007). ANGELS and University of Arkansas for Medical Sciences paradigm for distant obstetrical care delivery. American Journal of Obstetrics and Gynecology, 196(6), 534.e1–534.e9. https://doi.org/10.1016/j.ajog.2007.01.027
  26. Mesulam, M.-M. (2001). Primary progressive aphasia. Annals of Neurology, 49(4), 425–432. https://doi.org/10.1002/ana.91
  27. Mesulam, M.-M. (2003). Primary progressive aphasia—A language-based dementia. New England Journal of Medicine, 349(16), 1535–1542. https://doi.org/10.1056/NEJMra022435
  28. Mesulam, M.-M., Rogalski, E. J., Wieneke, C., Hurley, R. S., Geula, C., Bigio, E. H., Thompson, C. K., & Weintraub, S. (2014). Primary progressive aphasia and the evolving neurology of the language network. Nature Reviews Neurology, 10(10), 554–569. https://doi.org/10.1038/nrneurol.2014.159
  29. Mesulam, M.-M., & Weintraub, S. (2014). Is it time to revisit the classification guidelines for primary progressive aphasia? Neurology, 82(13), 1108–1109. https://doi.org/10.1212/WNL.0000000000000272
  30. Mesulam, M.-M., Wieneke, C., Thompson, C., Rogalski, E., & Weintraub, S. (2012). Quantitative classification of primary progressive aphasia at early and mild impairment stages. Brain, 135(5), 1537–1553. https://doi.org/10.1093/brain/aws080
  31. Palsbo, S. E. (2007). Equivalence of functional communication assessment in speech pathology using videoconferencing. Journal of Telemedicine and Telecare, 13(1), 40–43. https://doi.org/10.1258/135763307779701121
  32. Portney, L. G., & Watkins, M. P. (2000). Foundations of clinical research: Applications to practice (2nd ed.). Prentice Hall Health.
  33. Roberts, A., Rademaker, A., Salley, E., Mooney, A., Morhardt, D., Fried-Oken, M., Weintraub, S., Mesulam, M., & Rogalski, E. (in press). Communication Bridge™-2 (CB2): An NIH Stage 2 randomized control trial of a speech-language intervention for communication impairments in individuals with mild to moderate primary progressive aphasia.
  34. Rogalski, E., Cobia, D., Harrison, T. M., Wieneke, C., Weintraub, S., & Mesulam, M. M. (2011). Progression of language decline and cortical atrophy in subtypes of primary progressive aphasia. Neurology, 76(21), 1804–1810. https://doi.org/10.1212/WNL.0b013e31821ccd3c
  35. Rogalski, E. J., & Mesulam, M. M. (2009). Clinical trajectories and biological features of primary progressive aphasia (PPA). Current Alzheimer Research, 6(4), 331–336. https://doi.org/10.2174/156720509788929264
  36. Rogalski, E. J., Saxon, M., McKenna, H., Wieneke, C., Rademaker, A., Corden, M. E., Borio, K., Mesulam, M. M., & Khayum, B. (2016). Communication Bridge: A pilot feasibility study of Internet-based speech-language therapy for individuals with progressive aphasia. Alzheimer's & Dementia: Translational Research & Clinical Interventions, 2(4), 213–221. https://doi.org/10.1016/j.trci.2016.08.005
  37. Rogalski, E. J., Sridhar, J., Martersteck, A., Rader, B., Cobia, D., Arora, A. K., Fought, A. J., Bigio, E. H., Weintraub, S., Mesulam, M. M., & Rademaker, A. (2019). Clinical and cortical decline in the aphasic variant of Alzheimer's disease. Alzheimer's & Dementia, 15(4), 543–552. https://doi.org/10.1016/j.jalz.2018.12.003
  38. Sajjadi, S. A., Patterson, K., Arnold, R. J., Watson, P. C., & Nestor, P. J. (2012). Primary progressive aphasia: A tale of two syndromes and the rest. Neurology, 78(21), 1670–1677. https://doi.org/10.1212/WNL.0b013e3182574f79
  39. Sapolsky, D., Domoto-Reilly, K., & Dickerson, B. C. (2014). Use of the Progressive Aphasia Severity Scale (PASS) in monitoring speech and language status in PPA. Aphasiology, 28(8–9), 993–1003. https://doi.org/10.1080/02687038.2014.931563
  40. Scott, J. D., Unruh, K. T., Catlin, M. C., Merrill, J. O., Tauben, D. J., Rosenblatt, R., Buchwald, D., Doorenbos, A., Towle, C., Ramers, C. B., & Spach, D. H. (2012). Project ECHO: A model for complex, chronic care in the Pacific Northwest region of the United States. Journal of Telemedicine and Telecare, 18(8), 481–484. https://doi.org/10.1258/jtt.2012.gth113
  41. Shewan, C. M., & Kertesz, A. (1980). Reliability and validity characteristics of the Western Aphasia Battery (WAB). Journal of Speech and Hearing Disorders, 45(3), 308–324. https://doi.org/10.1044/jshd.4503.308
  42. Theodoros, D., Hill, A., Russell, T., Ward, E., & Wootton, R. (2008). Assessing acquired language disorders in adults via the Internet. Telemedicine Journal and E-Health, 14(6), 552–559. https://doi.org/10.1089/tmj.2007.0091
  43. Thompson, C. K., Lukic, S., King, M. C., Mesulam, M. M., & Weintraub, S. (2012). Verb and noun deficits in stroke-induced and primary progressive aphasia: The Northwestern Naming Battery. Aphasiology, 26(5), 632–655. https://doi.org/10.1080/02687038.2012.676852
  44. Thompson, C. K., Meltzer-Asscher, A., Cho, S., Lee, J., Wieneke, C., Weintraub, S., & Mesulam, M. M. (2013). Syntactic and morphosyntactic processing in stroke-induced and primary progressive aphasia. Behavioral Neurology, 26(1–2), 35–54. https://doi.org/10.3233/BEN-2012-110220
  45. Turkstra, L. S., Quinn-Padron, M., Johnson, J. E., Workinger, M. S., & Antoniotti, N. (2012). In-person versus telehealth assessment of discourse ability in adults with traumatic brain injury. The Journal of Head Trauma Rehabilitation, 27(6), 424–432. https://doi.org/10.1097/HTR.0b013e31823346fc
  46. U.S. Department of Health and Human Services. (2017). National healthcare quality and disparities report chartbook on rural health care. http://www.ahrq.gov/research/findings/nhqrdr/index.html
  47. Vestal, L., Smith-Olinde, L., Hicks, G., Hutton, T., & Hart, J., Jr. (2006). Efficacy of language assessment in Alzheimer's disease: Comparing in-person examination and telemedicine. Clinical Interventions in Aging, 1(4), 467–471. https://doi.org/10.2147/ciia.2006.1.4.467
  48. Volkmer, A., Spector, A., Meitanis, V., Warren, J. D., & Beeke, S. (2019). Effects of functional communication interventions for people with primary progressive aphasia and their caregivers: A systematic review. Aging and Mental Health, 24(9), 1381–1393. https://doi.org/10.1080/13607863.2019.1617246
  49. Walker, D. M., Hefner, J. L., Fareed, N., Huerta, T. R., & McAlearney, A. S. (2020). Exploring the digital divide: Age and race disparities in use of an inpatient portal. Telemedicine and e-Health, 26(5), 603–613. https://doi.org/10.1089/tmj.2019.0065
  50. Weidner, K., & Lowman, J. (2020). Telepractice for adult speech-language pathology services: A systematic review. Perspectives of the ASHA Special Interest Groups, 5(1), 326–338. https://doi.org/10.1044/2019_PERSP-19-00146
  51. Weintraub, S., Mesulam, M. M., Wieneke, C., Rademaker, A., Rogalski, E. J., & Thompson, C. K. (2009). The Northwestern Anagram Test: Measuring sentence production in primary progressive aphasia. American Journal of Alzheimer's Disease & Other Dementias, 24(5), 408–416. https://doi.org/10.1177/1533317509343104
  52. Wicklund, M. R., Duffy, J. R., Strand, E. A., Machulda, M. M., Whitwell, J. L., & Josephs, K. A. (2014). Quantitative application of the primary progressive aphasia consensus criteria. Neurology, 82(13), 1119–1126. https://doi.org/10.1212/WNL.0000000000000261
