Author manuscript; available in PMC: 2019 Nov 1.
Published in final edited form as: J Neurolinguistics. 2018 Apr 4;48:26–46. doi: 10.1016/j.jneuroling.2018.03.007

Sentence Processing in Aphasia: An Examination of Material-Specific and General Cognitive Factors

Laura L Murray 1
PMCID: PMC6345386  NIHMSID: NIHMS957885  PMID: 30686860

Abstract

The purpose of this study was to characterize further the nature of sentence processing deficits in acquired aphasia. Adults with aphasia and age- and education-matched adults with no brain damage completed a battery of formal cognitive-linguistic tests and an experimental sentence judgment task, which was performed alone and during focused attention and divided attention or dual-task conditions. The specific aims were to determine whether (a) increased extra-linguistic cognitive demands (i.e., focused and divided conditions) differentially affected the sentence judgment performances of the aphasic and control groups, (b) increased extra-linguistic cognitive demands interacted with stimulus parameters (i.e., syntactic complexity, number of propositions) known to influence sentence processing, and (c) syntactic- or material-specific resource limitations (e.g., sentence judgment in isolation), general cognitive abilities (e.g., short-term and working memory test scores), or both shared a significant relationship with dual-task outcomes. Accuracy, grammatical sensitivity, and reaction time findings were consistent with resource models of aphasia and processing accounts of aphasic syntactic limitations, underscoring the theoretical and clinical importance of acknowledging and specifying the strength and nature of interactions between linguistic and extra-linguistic cognitive processes in not only individuals with aphasia, but also other patient and typical aging populations.

Keywords: aphasia, sentence judgment, resource models, attention, syntax

1. Introduction

Over the last 25 years or so, there has been a steady accrual of evidence within the aphasia literature establishing the influential relationship between extra-linguistic cognitive processes and aphasia symptoms and outcomes (Baldo, Paulraj, Curran & Dronkers, 2015; Brownsett et al., 2014; Dignam et al., 2017; Marinelli, Spaccavento, Craca, Marangolo & Angelelli, 2017; Martin & Saffran, 1999; Murray, 2012, 2017a; Murray, Holland, & Beeson, 1997a, 1997b, 1997c; Paek & Murray, 2015; Petroi, Koul & Corwin, 2014; Tompkins, Bloise, Timko & Baumgaertner, 1994; Ziegler, Kerkhoff, Cate, Artinger & Zierdt, 2001). That is, regardless of aphasia profile, difficulties across the cognitive domains of attention (e.g., Lee & Pyun, 2014; Murray, 2012; Villard & Kiran, 2015), memory (e.g., Mayer & Murray, 2012; Vallila-Rohter & Kiran, 2013; Vukovic, Vuksanovic, & Vukovic, 2008), and executive functioning (e.g., Baldo et al., 2015; Dean, Della Sala, Beschin, & Cocchini, 2017; Murray, 2017a) have been identified among individuals with aphasia, which can negatively affect their language abilities at the phonological, morphosyntactic, lexical-semantic, pragmatic, and discourse levels (Caplan, Michaud & Hufford, 2013; Dean et al., 2017; Friedmann & Gvion, 2007; Meteyard, Bruce, Edmundson & Oakhill, 2015; Murray, 2000, 2012; Murray et al., 1997a, 1997c; Penn, Frankel, Watermeyer & Russell, 2010; Tompkins et al., 1994; Ziegler et al., 2001).
Importantly, this line of research has afforded support to contemporary conceptualizations of not only aphasia, in which deficits in cognitive functions other than language are accredited with generating or intensifying linguistic symptoms (Hula & McNeil, 2008; Kurland, 2011; Murray & Kean, 2004), but also more broadly, the neurobiology of language, in which diffuse cortical and subcortical structures and distributed connectivity support language in concert with other functional processes and control mechanisms (Cahana-Amitay & Albert, 2015; Meyer, Cunitz, Obleser & Friederici, 2014; Tremblay & Dick, 2016; Xing, Lacey, Skipper-Kallal, Zeng, & Turkeltaub, 2017).

Despite this ever-growing research base, additional examination of the presence and nature of relationships between specific linguistic abilities and cognitive functions among individuals with aphasia is needed given both the theoretical and applied implications of such research (Moineau, Dronkers, & Bates, 2005; Murray, 2004, 2017b; Oliveira, Marin, & Bertolucci, 2017; Salis, Hwang, Howard, & Lallini, 2017). For example, if a certain linguistic process proves functionally distinct from other aspects of cognition, such a finding would inform the architecture of modular approaches to conceptualizing language, as well as clinically indicate that remediation for that linguistic process when impaired would need to focus specifically on training or compensating for that linguistic process. In contrast, if a potent relationship between a certain linguistic process and other cognitive processes is established, such a finding would provide support to non-modular or distributed views of language and more broadly, cognitive processing, and specify that remediation for that linguistic process when impaired could instead or additionally target the related cognitive processes to abate the effects of the impaired linguistic process.

The purpose of the current study was to delineate further the relationship between specific linguistic and extra-linguistic cognitive abilities in aphasia and thus, the processing or resource model of aphasia, by examining interactions between sentence processing and several cognitive skills, including short-term (STM; supporting temporary storage of nominally processed information) and working memory (WM; supporting temporary storage of information while concomitantly processing that information for a particular intention) abilities, among individuals with aphasia and adults with no brain damage using a dual-task paradigm. Sentence or syntactic processing abilities were of interest given that sentence processing deficits are common irrespective of aphasia type (Caplan, Waters, & Hildebrandt, 1997; Dick et al., 2001; Wilson & Saygin, 2004). Furthermore, there are longstanding deliberations regarding whether such abilities are qualitatively or quantitatively compromised in aphasia (e.g., Grodzinsky, 2000 vs. Caplan, Waters, DeDe, Michaud, & Reddy, 2007) as well as other language disorders (e.g., specific language impairment; for a review, see Montgomery, Gillam, & Evans, 2016). That is, with respect to aphasia, some researchers propose that syntactic processing is qualitatively affected, at least in certain types of aphasia, by linguistic-specific structural impairments (Grodzinsky, 1984, 2000; Mauner, Fromkin, & Cornell, 1993; Sullivan, Walenski, Love & Shapiro, 2017). Alternately, and in accord with resource or cognitive accounts of aphasia, other researchers characterize the deficit as quantitative in nature, often positing cognitive impairments (particularly STM and WM problems) as a source of impedance to syntactic computations (Murray et al., 1997c; Caplan et al., 2007, 2013; Patil, Hanne, Burchert, De Bleser, & Vasishth, 2016). 
Indeed, within one of the most recently forwarded models of sentence processing in aphasia, the rational inference hypothesis, the role of memory and executive function or higher order cognitive abilities is acknowledged (e.g., Gibson, Sandberg, Fedorenko, Bergen & Kiran, 2016).

Several lines of evidence align favorably with the proposition that sentence processing deficits in aphasia instead or at least in part reflect extra-linguistic challenges versus solely degraded or lost syntactic competence (for a critique of linguistic-specific structural accounts, see Caplan et al., 2007). First, when completing syntactic processing tasks in which other linguistic and/or extra-linguistic task demands have been restrained (e.g., cloze tasks, grammaticality judgments), individuals with aphasia appear able to use syntactic knowledge, with better syntactic performance during restrained versus unrestrained tasks (Caplan et al., 2007; Linebarger, Schwartz & Saffran, 1983; Murray et al., 1997c). Second, variable sentence type effects as well as dissociations in the performances of individuals with aphasia have been identified when completion of different types of off-line tasks such as sentence-picture matching or act-out tasks has been contrasted (Caplan et al., 1997, 2007, 2013). Third, when approaches that allow examining online syntactic processing (e.g., eyetracking; auditory moving window presentation) have been used, the response patterns of individuals with aphasia match those of their non-brain-damaged peers during correct responses, thus supporting that underlying syntactic operations are intact (Caplan et al., 2007, 2013; Dickey & Thompson, 2006; Hanne, Sekerina, Vasishth, Burchert, & De Bleser, 2011). Fourth, investigators have documented that performance on extra-linguistic cognitive tests, particularly STM and WM measures, is related to or predicts the ability of individuals with aphasia to perform certain syntactic operations (Slevc & Martin, 2016) or comprehend sentence-level material (Caplan et al., 2013; Friedmann & Gvion, 2007; Pettigrew & Hillis, 2014; Wiener, Connor Tabor, & Obler, 2004).
Fifth, a few recent intervention studies have documented that some individuals with aphasia show improved sentence comprehension abilities following practice of treatment activities that focus on strengthening or supporting STM or WM skills (e.g., repetition or listening span tasks), without ever directly targeting comprehension (Eom & Sung, 2016; Salis, 2012; Zakariás, Keresztes, Marton & Warte, 2016). Finally, it is important to note that this resource or processing account of sentence processing difficulties in aphasia is commensurate with explanations of other linguistic symptoms in aphasia (e.g., lexical-semantic processing or retrieval; Murray, 2000; Moineau et al., 2005) as well as changes in sentence processing associated with other acquired neurogenic disorders (e.g., Parkinson’s disease, Colman, Koerts, van Beilen, Leenders & Bastiaanse, 2006; Alzheimer’s disease, Small, Kemper, & Lyons, 2000) or typical aging (DeCaro, Peelle, Grossman & Wingfield, 2016; Goral et al., 2011).

As an example of earlier work exploring the interface of cognitive factors with sentence processing in aphasia, Murray and colleagues (1997c) had adults with and without mild aphasia perform a listening task that required grammaticality judgments under isolation, focused attention, and divided attention or dual-task conditions. Sentence type was constrained to a simple canonical frame to ensure greater than chance grammaticality judgment performances among the aphasic participants, and to focus on extra-linguistic cognitive versus syntactic complexity issues. During optimal listening conditions (i.e., isolation condition), there were no group differences in accuracy or grammatical sensitivity, whereas during the more complex attention conditions, differences were apparent. If the aphasic adults had specific syntactic competence impairments, substantial differences between aphasic and control groups should have been evident even during the isolation condition because the syntactic processing requirements of the grammaticality judgment task remained the same across conditions. Instead, these aphasic adults were able to make the necessary syntactic analyses during the isolation condition, but were significantly less able to do so during focused and divided attention conditions. Furthermore, although performance of the secondary task (i.e., tone discrimination task) did not overtly require language, the resources upon which it relied were found to intersect, at least partially, with those used to make grammaticality judgments, an intersection evident in both the aphasic and control groups’ performance patterns. For the participants with aphasia, processing and responding to dual-tasks interfered with both their accuracy and response efficiency or speed, whereas for the control participants, dual-task demands negatively affected response efficiency only. Murray et al.
concluded that these findings supported resource accounts of aphasia and that the resource pool upon which syntactic processes (at least those used to make grammaticality judgments) draw does not appear to be exclusive to syntactic skills (Garraffa & Learmonth, 2013; Wilson et al., 2003).

Despite such evidence, further investigation remains warranted given methodological issues in prior research. For instance, in the Murray et al. (1997c) study, the observed dual-task interference may have solely reflected competition for response selection and preparation, as opposed to competition for limited attentional or processing resources (Pashler, 1994; Pashler & Johnston, 1998). That is, a dual-task “bottleneck” exists in which performance of one task is dependent, at least in part, upon participants’ selection and preparation for a separate choice response to the competing task. Varying the complexity of the sentence stimuli would facilitate examining not only the source of dual-task interference, but also interactions between cognitive factors and linguistic parameters known to influence sentence processing (e.g., syntactic complexity; propositional density). Another limitation in several prior investigations is that the relationship between extra-linguistic cognitive and syntactic abilities has been inferred via manipulation of syntax-external task demands with (a) the absence or inclusion of only a limited number of formal cognitive tests (e.g., only one WM measure), and/or (b) the absence of statistical analyses (e.g., correlations) to identify significant associations between syntactic task and formal cognitive test performances (Caplan et al., 2007; Garraffa & Learmonth, 2013; Murray et al., 1997c; Smith, 2011). Failure to include such tests or statistical analyses precludes identifying which extra-linguistic cognitive abilities (e.g., sustained attention, WM) are important predictors of sentence processing.
Another common shortcoming within the sentence processing literature relates to the inordinate focus on individuals with agrammatism or nonfluent aphasia profiles such as Broca’s aphasia (e.g., Garraffa & Learmonth, 2013; Hanne et al., 2011; Linebarger et al., 1983; Mauner et al., 1993; Patil et al., 2016; Salis, 2012), despite research documenting that individuals representing a breadth of aphasia profiles experience syntactic processing difficulties (Caplan et al., 2007, 2013; Wilson & Saygin, 2004). Thus, the extent to which prior findings as well as sentence processing models developed or refined based on these data relate to this broader aphasic population requires further examination.

Accordingly, this study determined whether sentence processing deficits in individuals representing various aphasia profiles are associated, at least in part, with limitations in other cognitive abilities, including STM and WM, by having adults with aphasia or no brain damage complete a sentence judgment task alone and in competition with a tone discrimination task. Task demands were manipulated by varying listening condition and sentence type. The listening condition manipulations utilized within a dual-task paradigm were designed to investigate the effects of increased extra-linguistic cognitive demands on sentence judgment performances and thus examine whether resource or structural models can characterize sentence processing in aphasia and more broadly, aphasia itself. The sentence judgment task (which focused on discriminating the presence or absence of verb aspect violations) was performed by itself, in the presence of auditory distraction (i.e., increased focused attention demands), and under several divided attention conditions (i.e., increased WM, divided attention, and other executive function demands). Theoretically, the syntactic processing or resource requisites of the sentence judgment task do not change across these listening conditions. During the dual-task conditions, however, extra-linguistic demands do increase in that limited processing resources must be appropriately allocated only to the sentence task (i.e., focused attention condition), or efficiently shared between both the sentence task and the secondary, competing tone discrimination task (i.e., divided attention conditions). If sentence processing impairments in aphasia emerge, at least in part, from resource versus structural limitations, the sentence judgment performances of the aphasic individuals should be significantly more vulnerable than those of control participants to the increased resource allocation demands of the focused attention and divided attention conditions.
In contrast, if linguistic structural constraints are solely responsible for aphasia or more specifically, aphasic sentence processing deficits, performance pattern differences between the aphasic and control participants, particularly qualitative differences, should be similar across the listening conditions. Furthermore, it is important to examine sentence processing abilities during these more demanding focused and divided attention listening conditions as they are ubiquitous in everyday communication contexts (e.g., listening to a communication partner in the presence of background music and/or while driving) and thus, such conditions have ecological validity.

Sentence stimuli were constructed to manipulate two linguistic factors known to influence sentence processing, syntactic complexity and number of propositions, and to investigate whether these linguistic factors similarly or differentially interact with the increased extra-linguistic cognitive demands of the dual-task conditions. These sentence stimulus manipulations were also motivated by research exploring whether there is a separate resource pool uniquely dedicated to syntactic processes (Rochon, Waters, & Caplan, 2000; Caplan & Waters, 1999), or whether the same resources used for syntactic processing are also used in other linguistic and cognitive tasks (Fallon, Peelle, & Wingfield, 2006; Wilson et al., 2003). To evaluate syntactic complexity effects, judgments of aspectual information within active versus passive sentences were compared; processing a passive sentence requires greater syntactic demands because of its noncanonical order of thematic roles (Ferreira, 2003; Grossman & White-Devine, 1998). To examine proposition effects, active versus conjoined sentences were contrasted; the larger sets of verb-related thematic roles in conjoined sentences have been associated with greater demands on post-interpretive or semantic processing (Caplan et al., 1997; Rochon et al., 2000). Another reason for such stimulus complexity manipulations is that they allowed excluding response selection as the sole source of dual-task interference (Pashler, 1994); that is, because effects related to response selection were kept constant across the sentence types, interference related to sentence complexity as opposed to simply response demands could be examined. In consideration of the extant literature, the following hypotheses were tested:

  • (a)

    Compared to adults with no brain damage, the sentence judgment performances of adults with aphasia would be negatively affected to a greater extent by increasing the extra-linguistic cognitive demands of the listening conditions (i.e., focused and divided attention conditions).

  • (b)

    If syntactic processing is dependent on an isolated resource system, there would be no disproportionate increase in syntactic complexity effects across listening conditions; that is, the same direction and degree of response differences between active versus passive sentence judgments should be observed across listening conditions, despite variations in the extra-linguistic cognitive demands of each listening condition. Alternately, if syntactic processing, like processing the number of propositions, is dependent, at least in part, on more general language or cognitive resources, there would be a disproportionate increase in both syntactic complexity and propositional effects across listening conditions; that is, response differences between active versus passive and conjoined sentence judgments should be exacerbated by the increased extra-linguistic cognitive demands of the focused and divided attention listening conditions.

  • (c)

    Regardless of listening condition, the sentence judgment performances of adults with aphasia would be related to both linguistic and extra-linguistic cognitive abilities.
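Hypothesis (b) amounts to a difference-of-differences test: does the accuracy cost of the noncanonical (passive) frame grow as listening conditions become more demanding? The sketch below illustrates that logic with purely hypothetical accuracy values (not study data); the condition and frame names are illustrative only.

```python
# Hypothetical mean proportions correct, for illustration only (not study data):
acc = {
    "isolation": {"active": 0.95, "passive": 0.90},
    "divided":   {"active": 0.88, "passive": 0.74},
}

def complexity_cost(cond):
    """Accuracy cost of the noncanonical (passive) frame under one condition."""
    return acc[cond]["active"] - acc[cond]["passive"]

iso_cost = complexity_cost("isolation")
div_cost = complexity_cost("divided")
# A disproportionate increase (div_cost > iso_cost) would implicate shared,
# general resources; equal costs across conditions would be consistent with
# a resource system dedicated to syntactic processing.
print(round(iso_cost, 2), round(div_cost, 2))
```

With these illustrative numbers, the passive cost grows from .05 in isolation to .14 under divided attention, the pattern predicted by the shared-resource alternative.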

2. Methods

2.1. Participants

Study participants included 23 adults with aphasia and 26 adults with no brain damage, all of whom spoke English as their native language (see Table 1). All participants gave informed consent prior to participating in the study, with study procedures approved by the Institutional Review Board at Indiana University. Based on self-reports, caregiver reports, or both as well as the review of aphasic participants’ medical charts, it was determined that all participants had negative histories of traumatic brain injury, alcohol or other substance abuse, and pre-existing cognitive impairment, communication disorder, or psychiatric illness within the past six months. Each study participant was required to meet the following inclusionary criteria: (a) premorbid right-handedness as established by the 13-item inventory of Briggs and Nebes (1975); (b) aided or unaided hearing thresholds of at least 35 dB HL at 0.5, 1.0, and 2.0 kHz in at least one ear on a pure-tone air-conduction hearing screening; (c) 80% or better accuracy on the Speech Discrimination subtest of the Arizona Battery for Communication Disorders of Dementia (Bayles & Tomoeda, 1991); and (d) 100% accuracy on a visual screening task for which participants were instructed to find an identical picture among an array of four visually similar line drawings. Participants with aphasia were additionally required to (a) be at least 6 months post-stroke onset, (b) have a CT or MRI scan report verifying a unilateral, left hemisphere stroke, and (c) attain 100% accuracy on the Apraxia Battery for Adults - 2 (Dabul, 2000) Limb Apraxia subtest to minimize the possibility that apraxic errors might confound their key-press responses. Lastly, participants in the control group were excluded if they displayed possible dementia by scoring below 24 on the Mini-Mental State Examination (MMSE; Folstein, Folstein, & McHugh, 1975).
Because of its significant language demands (Golper, Rau, Erskine, Langhans, & Houlihan, 1987; Vigliecca, Penalva, Molina, Voos, & Vigliecca, 2012), the MMSE was not used to identify the presence of dementia in participants with aphasia; alternatively, only those with a negative medical history for suspected dementia were included in the aphasic participant group.

Table 1.

Group Characteristics

Variable                      Aphasic Group (n = 23)   Control Group (n = 26)
Age (years)
    M                         57.5                     61.1
    SD                        12.9                     13.7
    Range                     32–83                    30–80
Education (years)
    M                         14.5                     15.2
    SD                        1.7                      1.5
    Range                     12–16                    12–16
Estimated IQ
    M                         118.5                    119.1
    SD                        5.6                      6.2
    Range                     107–125                  97–124
VAMS Sad* (raw score)
    M                         14.8                     10.4
    SD                        17.1                     14.3
    Range                     0–47                     0–42
VAMS Happy (raw score)
    M                         71.0                     67.3
    SD                        21.5                     23.7
    Range                     38–100                   27–100
MMSE (max. = 30)
    M                         —                        28.5
    SD                        —                        1.7
    Range                     —                        25–30
Time Post Stroke (months)
    M                         48.2                     —
    SD                        42.0                     —
    Range                     6–168                    —
Gender (Male:Female)          16:7                     10:16

* A Sad scale raw score of 50 or greater is indicative of depression. — = not applicable or not administered to that group.

The results of a series of pooled-variance t-tests indicated that the aphasic and control groups did not significantly differ in terms of age, t(47) = 0.945, p = 0.349, years of education, t(47) = 1.458, p = 0.151, or estimated IQ (Barona, Reynolds & Chastain, 1984), t(47) = 0.365, p = 0.717. To ensure that the groups were similar in terms of emotional well-being, all participants rated themselves on the Sad and Happy scales of the Visual Analog Mood Scales (VAMS; Stern, 1998). Each participant gave a self-rating that fell below the raw score cut-off of 50 on the Sad scale, indicating that no participant appeared depressed at the time of the experiment. Furthermore, there was no significant difference between the aphasic and control groups on either the Sad, t(47) = −0.981, p = 0.331, or Happy scale, t(47) = −0.567, p = 0.574. Gender representation within the aphasic and control groups differed somewhat, with 16 males in the aphasic group and 10 males in the control group.
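The pooled-variance t statistics above can be reproduced from the summary statistics in Table 1. A minimal sketch, using only the tabled means, SDs, and group sizes (the small discrepancy from the reported t(47) = 0.945 for age reflects rounding in the tabled values):

```python
import math

def pooled_t(m1, sd1, n1, m2, sd2, n2):
    """Two-sample t-test with pooled variance, from summary statistics."""
    df = n1 + n2 - 2
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    # Standard error of the difference between the two means.
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m2 - m1) / se, df

# Age comparison from Table 1: aphasic M = 57.5, SD = 12.9, n = 23;
# control M = 61.1, SD = 13.7, n = 26.
t, df = pooled_t(57.5, 12.9, 23, 61.1, 13.7, 26)
print(f"t({df}) = {t:.3f}")  # close to the reported t(47) = 0.945
```

The same function, fed the education and IQ rows of Table 1, recovers the other two comparisons to within rounding error.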

2.2. Procedures

Each participant completed two to three sessions, each of which lasted approximately two hours, within a two- to three-week period. Each session was held in a quiet room within the author’s research laboratory, a healthcare facility near the participant’s home, or the participant’s home, depending on the participant’s preference. During the first one to two sessions, participants completed the screening tasks described above and a battery of cognitive tests; during subsequent sessions, participants completed the experimental sentence judgment task under a series of listening conditions.

2.2.1. Communication Testing

The aphasic group completed the Aphasia Diagnostic Profiles (ADP; Helm-Estabrooks, 1992) to document the presence, type, and severity of aphasia (see Table 2). ADP test results indicated that participants in the aphasic group had mild to moderate language deficits given that they obtained ADP Aphasia Severity Standard Scores of 88 or better (i.e., above the 20th percentile rank). Participants with aphasia varied in terms of aphasia type: 12 presented with anomic aphasia, 3 with conduction aphasia, 1 with transcortical sensory aphasia, 2 with a borderline fluent aphasia type, 3 with Broca’s aphasia, and 2 with transcortical motor aphasia. To evaluate communication independence during daily social, basic needs, written language, and planning activities, each aphasic participant’s spouse, primary caregiver (e.g., parent, child, sibling), or speech-language pathologist completed the ASHA Functional Assessment of Communication Skills (ASHA FACS: Frattali, Thompson, Holland, Wohl, & Ferketic, 1995). The ratings data from this test suggested that participants in the aphasia group varied from needing moderate (i.e., often requiring assistance, prompting, or both in daily communicative activities/interactions) to no assistance in terms of being independent communicators.

Table 2.

Cognitive and Communicative Test Data

Test                                              Aphasic Group   Control Group   t-test Results
ADP (Standard Scores)
  Auditory Comprehension
    M                                             12.9
    SD                                            3.0
    Range                                         8–17
  Aphasia Severity
    M                                             117.3
    SD                                            16.2
    Range                                         88–135
ASHA FACS (rating score with max. = 7)
  Overall Comm. Independence
    M                                             6.5
    SD                                            0.7
    Range                                         3.9–7.0
WMS-R Visual Memory Span (%iles)
  Forwards
    M                                             45.9            65.7            t(36.2) = 2.396
    SD                                            34.1            21.4            p = 0.022
    Range                                         2–98            32–98
  Backwards
    M                                             52.8            70.5            t(42.6) = 2.991
    SD                                            26.2            20.7            p = 0.005
    Range                                         2–96            36–99
Auditory-Verbal Working Memory (Number of Errors)
  Recall Errors
    M                                             20.5            6.9             t(28.2) = −5.673
    SD                                            10.8            4.3             p < 0.001
    Range                                         6–42            0–13
  True/False Errors
    M                                             2.2             0.5
    SD                                            4.1             1.0
    Range                                         0–16            0–4
Behavioral Inattention Test¹
    M                                             138.7           143.8           t(24) = 2.088
    SD                                            11.4            2.6             p = 0.048
    Range                                         90–146          135–146
Test of Everyday Attention² (Scaled Scores)
  Elevator Counting with Distraction
    M                                             7.7             11.4            t(35.8) = 4.672
    SD                                            3.3             2.0             p < 0.001
    Range                                         3–13            6–13
  Visual Elevator
    M                                             7.0             11.5            t(32.3) = 4.953
    SD                                            3.9             2.1             p < 0.001
    Range                                         0–15            6–15
  Elevator Counting with Reversal
    M                                             8.6             12.8            t(39.8) = 4.209
    SD                                            4.0             2.9             p < 0.001
    Range                                         0–15            7–17
  Telephone Search with Counting
    M                                             6.3             11.8            t(42.8) = 5.168
    SD                                            4.1             3.4             p < 0.001
    Range                                         0–15            6–19
Rating Scale of Attentional Behavior Total Score³
    M                                             12.7
    SD                                            9.4
    Range                                         1–41

¹ A BIT cut-off score of 129 (or less) on the conventional subtests is indicative of visual neglect.

² Scaled scores have M = 10, SD = 3 based on a sample of 154 non-brain-damaged adults.

³ Based on summing the ratings of 14 items, each of which is rated on a scale from 0 (attentional behavioral problem does not occur at all) to 4 (attentional behavioral problem always occurs).

2.2.2. Cognitive Testing

The following cognitive test battery, designed to assess the various cognitive functions that have been suggested to support sentence processing in healthy and impaired populations (Montgomery et al., 2016; Murray, 2012), was administered to each participant: (a) to assess nonverbal STM and WM abilities, respectively, forward and backward Visual Memory Span subtests of the Wechsler Memory Scale - Revised (WMS-R; Wechsler, 1987), which involve tapping a series of squares in the same or reverse order, respectively, as demonstrated by the test examiner, (b) to examine auditory-verbal WM, the test protocol of Tompkins et al. (1994), which involves listening to sets of sentences (with an increasing number of sentences per set) and for each sentence determining if it does or does not make sense (i.e., true or false judgment) and then for each set recalling the last word of each sentence, (c) to determine the presence and severity of visual neglect, the six conventional subtests (i.e., line crossing, letter cancellation, star cancellation, figure and shape copying, line bisection, and representational drawing) of the Behavioral Inattention Test (BIT; Wilson, Cockburn, & Halligan, 1987), (d) to evaluate attention and WM abilities, Elevator Counting with Distraction (measure of auditory focused attention), Visual Elevator (measure of visual WM and attention shifting), Elevator Counting with Reversal (measure of auditory WM and attention shifting), and Telephone Search While Counting (measure of divided attention) subtests of the Test of Everyday Attention (TEA; Robertson, Ward, Ridgeway, & Nimmo-Smith, 1994), and (e) to inspect caregivers’ perceptions of the presence and frequency of behaviors associated with attention impairments (e.g., “had difficulty concentrating”) in the aphasic participants, the Rating Scale of Attentional Behavior (Ponsford & Kinsella, 1991), on which higher scores suggest greater issues with attention.
The administration order of these tests was randomized across participants to avoid order or fatigue effects. Cognitive tests were administered and scored according to their test manual or in the case of the auditory-verbal WM test protocol, the procedures described by the protocol developers, Tompkins et al. (1994). An exception was that a number strip was made available during the TEA subtests to support the counting or spoken number retrieval abilities of participants with aphasia (for further description and rationale of this administrative accommodation, see Murray, 2012).
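The Tompkins et al. (1994) listening-span protocol yields two error counts per participant: missed sentence-final words (recall errors) and incorrect true/false judgments. A minimal sketch of how one set might be scored; the function name, trial format, and example sentences are illustrative assumptions, not the published protocol's materials.

```python
def score_set(trials, recalled):
    """Score one listening-span set.

    trials: list of (sentence, is_true, judged_true) tuples, one per sentence.
    recalled: list of words the participant reported as sentence-final words.
    Returns (recall_errors, true_false_errors).
    """
    # Sentence-final words are the recall targets for this set.
    targets = [sentence.split()[-1].rstrip(".") for sentence, _, _ in trials]
    recall_errors = sum(1 for word in targets if word not in recalled)
    tf_errors = sum(1 for _, truth, judged in trials if truth != judged)
    return recall_errors, tf_errors

# Hypothetical two-sentence set, for illustration:
trials = [
    ("Apples grow on trees.", True, True),
    ("Fish can fly to school.", False, True),  # misjudged as true
]
print(score_set(trials, ["trees"]))  # one missed final word, one judgment error
```

Summing such counts across all sets produces the Recall Errors and True/False Errors rows reported in Table 2.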

2.2.3. Listening Conditions

Each participant completed the experimental sentence judgment and tone discrimination tasks under the following listening conditions:

  • (a)

    Isolation Condition. Each task (i.e., sentence judgment, tone discrimination) was completed without distraction. Participants were instructed to respond as quickly and accurately as possible.

  • (b)

    Focused Attention Condition. Stimuli for both the target sentence judgment task and the competing tone discrimination task were presented. Participants, however, were instructed to complete only the sentence judgment task. Participants were informed that the sentence and tone stimuli had been superimposed so that they would hear both the sentences and tones at the same time. They were instructed to ignore the tones and to attend and respond to only the sentences.

  • (c)

    Divided Attention #1 (DIV1) Condition. Sentence and tone stimuli were again superimposed, and participants were required to complete both tasks. They were additionally instructed to give attentional priority to the sentence judgment task. That is, they were asked to try to divide their attention so that 75% of their attentional focus was dedicated towards completing the sentence judgment task and 25% was dedicated towards completing the tone discrimination task; a simplified written version of these instructions was displayed on the laptop monitor as well (i.e., “75% sentences, 25% tones”; see 2.2.4 and 2.2.5 for description of task instructions). To illustrate this division of attentional focus, participants were shown a piece of paper on which there was a line drawing of a square. The experimenter told the participant to imagine that this square represented his or her attention capacity and then drew a vertical line to divide the square into a 3/4 and a 1/4 section. Next, the experimenter shaded in the 3/4 section to illustrate the amount of attention that should be dedicated to the sentence judgment task and pointed to the 1/4 section to indicate the amount of attention that should be dedicated to the tone discrimination task. As a reminder, this sheet of paper was left out within the participant’s field of vision while the participant completed this listening condition. The experimenter neither provided nor reinforced any condition completion strategies (e.g., respond to the sentence first) during this or the remaining divided attention conditions.

  • (d)

    Divided Attention #2 (DIV2) Condition. Sentence and tone stimuli were again superimposed, and participants were again required to complete both tasks. For this condition, they were asked to give equal emphasis to each task and distribute 50% of their attentional focus to the sentence task and 50% to the tone task. A sheet of paper was again used to illustrate and serve as a reminder regarding the prescribed attention division for this listening condition; that is, a vertical line was drawn to divide the square into halves, and one half was shaded in to exemplify the amount of attention to be paid to the sentence task.

  • (e)

    Divided Attention #3 (DIV3) Condition. Again sentence and tone stimuli were presented simultaneously, and participants were instructed to complete both tasks. Procedures were similar to the other divided attention conditions, except this time participants were asked to give attentional priority to the tone discrimination task by dedicating 75% of their attentional focus to the tone task and 25% of their attentional focus to the sentence task. Participants were again shown a piece of paper on which there was a line drawing of a square. The experimenter drew a vertical line to split the square into 3/4 and 1/4 sections and shaded in the 3/4 section to illustrate the amount of attention to be paid to the tone task.

2.2.4. Sentence Judgment Task

Participants listened to lists of 30 grammatically correct sentences and 30 sentences with illegal verb forms and determined whether or not each sentence was grammatically correct. All sentences were recorded by a female speaker who read aloud both the grammatical and ungrammatical sentences with normal speed and intonation. For each list (one practice and five randomly assigned experimental lists), there were 10 active, 10 passive, and 10 conjoined, grammatically correct, present tense sentences, and 10 active, 10 passive, and 10 conjoined, grammatically incorrect sentences (i.e., a total of 360 unique sentences across the 6 lists). Given the number of listening conditions, it was decided that using semantically-irreversible sentences, which were unique to each list, would help minimize confounding effects such as fatigue, boredom, and exposure/learning that may occur when a closed set of stimuli are used. Furthermore, given the type and locus of ungrammaticality (see below for further description), a decision regarding the grammaticality of every sentence could be made prior to listening to both the sentence’s agent and patient.

The effect of syntactic complexity was examined by comparing performance of active versus passive sentences because these sentence types differ only in terms of canonicity; the effect of number of propositions was examined by comparing performance of active versus conjoined sentences, with conjoined sentences having an increased number of propositions. Verb aspect violations were created by affixing an incorrect suffix to the main verb. This locus of ungrammaticality was selected so that the type of sentence construction was evident prior to the point at which participants should be making their grammaticality judgments; research has also indicated that this type of aspect violation is challenging and sensitive to cognitive load manipulations (McDonald, 2008a, b). For each list, the mean temporal location of ungrammaticality (quantified in milliseconds) was matched among the three sentence types. Incorrect suffixes were perceptually dissimilar to the correct suffixes, with sentence lists screened on a pilot group of 10 typical aging adults to assure that these “small” modifications would be perceptible to participants and that lists were of similar difficulty (based on mean accuracy and response time). Lists were also matched for mean word length, with grammatically correct and incorrect sentences randomized within each list (see Figure 1 for sentence examples). Additionally, the three sentence types were matched for the mean number of syllables per sentence. Lists were randomly assigned to each of the listening conditions so that sentences within the lists assigned to focused and divided attention conditions could be mixed with the tone stimuli (see section 2.3 below).
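The matching constraints above (lists matched for mean word length and mean syllables per sentence, and for the mean temporal location of ungrammaticality) reduce to comparisons of list-level summary statistics. The following Python sketch is illustrative only; the dictionary keys, tolerance values, and helper names are hypothetical and were not part of the study:

```python
def list_summary(sentences):
    """Summarize a stimulus list. Each sentence is a dict with hypothetical
    keys: 'words' (int), 'syllables' (int), and 'violation_ms' (float for
    ungrammatical sentences, None for grammatically correct ones)."""
    n = len(sentences)
    onsets = [s["violation_ms"] for s in sentences if s["violation_ms"] is not None]
    return {
        "mean_words": sum(s["words"] for s in sentences) / n,
        "mean_syllables": sum(s["syllables"] for s in sentences) / n,
        "mean_violation_ms": sum(onsets) / len(onsets) if onsets else None,
    }

def lists_matched(list_a, list_b, tolerances=None):
    """True when two lists' summary statistics fall within per-key tolerances
    (the tolerance values here are arbitrary illustrations)."""
    tolerances = tolerances or {
        "mean_words": 0.5,
        "mean_syllables": 0.5,
        "mean_violation_ms": 50.0,
    }
    sa, sb = list_summary(list_a), list_summary(list_b)
    return all(
        sa[k] is None or sb[k] is None or abs(sa[k] - sb[k]) <= tol
        for k, tol in tolerances.items()
    )
```

In the study such checks were presumably done by hand during stimulus construction; the sketch simply makes the matching criteria explicit.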

Figure 1.

Figure 1.

Examples of sentence judgment stimuli. Grammatically incorrect sentences are in italic font.

For the sentence judgment task, participants were asked to press as quickly and as accurately as possible a green computer key labeled GOOD (“m” key) when they heard a grammatically correct sentence and to press a red BAD key (“.”) when they heard a sentence with “bad grammar.” Participants with aphasia and hemiparesis were advised to use their unaffected hand. Prior to each listening condition, task instructions were presented orally and in writing. The written instructions were shortened versions of the spoken instructions and were displayed in 24-point, bold font centered on the laptop monitor (e.g., “Okay/Good Grammar Press GOOD, Bad/Funny Grammar Press BAD”). Participants were allowed to ask questions before completing each listening condition to help ensure they understood the instructions. Participants were additionally cautioned that once a listening condition began, no questions or feedback would be permitted. Although instructions underscored both response accuracy and speed, participants were allowed as much time as needed to respond; likewise, given the locus of the grammatical violation, participants were allowed to respond prior to listening to the entire sentence. 
It should be further noted that the decision to allow as much response time as needed took into consideration that: (a) during the divided attention conditions, participants would need time to provide two responses (i.e., one for the sentence task and one for the tone task); (b) it is well documented that individuals with brain damage have slower response speeds compared to those with no brain damage (e.g., Gerritsen, Berg, Deelman, Visser-Keizer, & Jong, 2003) and thus, imposing a response time limitation would place greater demands on the participants with aphasia; and, (c) cognitive models of two-choice decision tasks acknowledge that placing a response time limitation could be construed as prioritizing response speed over accuracy and influence both decision and nondecision processing components (Ratcliff & McKoon, 2008). Both accuracy and response time data were collected to allow examining speed/accuracy tradeoffs that offer insight into resource allocation strategy and because prior research has documented that accuracy and RT data can yield different information regarding the nature and extent of aphasic deficits (e.g., Caplan et al., 2007).

2.2.5. Tone Discrimination Task

This secondary, competing task required discriminating thirty 500 Hz and thirty 2000 Hz pure tones presented in a random order. This frequency difference is well beyond the adult differential frequency threshold (Small & Brandt, 1963). This tone discrimination task was selected as the competing task because it has been used in prior aphasia studies and thus there was evidence that the task was not challenging for individuals with aphasia when it was completed by itself (e.g., Murray et al., 1997b, 1997c). Additionally, the tone discrimination task was selected because it was hypothesized to compete only in part with the target sentence judgment task for processing resources (Wickens, 1989); that is, it was also an auditory task but involved nonverbal stimuli. Thus, some dual-task interference was expected that would allow comparing how aphasic and control participants dealt with this interference. Furthermore, it was anticipated that if the processing or resource demands of the competing task overlapped too much with those of the sentence judgment task (e.g., auditory and verbal), aphasic participants would be too challenged during the dual-task conditions, resulting in uninformative basement effects.

During the isolation condition in which the tone task was completed by itself, tone duration was the average length of the sentence stimuli; for the remaining conditions, tone and sentence durations were matched, with an equal number of high and low tones superimposed upon grammatically correct and incorrect sentences. To complete the tone discrimination task, participants pressed as quickly and accurately as possible a yellow laptop key labeled HIGH (“i” key) when they heard the high tone and a blue laptop key labeled LOW (“p” key) when they heard the low tone. Both spoken and written instructions were provided, with written instructions (e.g., High tone Press HIGH; Low tone Press LOW) in a format identical to that described for the sentence judgment task.

2.2.6. Presentation Order

The listening conditions for the experimental tasks were presented in a relatively set order. First, a practice condition was administered for each task to familiarize participants with the sentence judgment and tone discrimination tasks and to assure that participants, particularly those with aphasia, could achieve adequate performance accuracy under ideal listening conditions (i.e., completing each task by itself). So that basement effects would not obliterate possible dual-task effects, only participants who achieved at least 75% accuracy during these practice conditions completed the subsequent experimental listening conditions. Whereas participants were provided with feedback regarding their response accuracy and speed during practice trials, they were not given any information regarding appropriate or inappropriate task completion strategies.

Participants who met the practice accuracy criterion next completed the experimental listening conditions in the following order: Isolation followed by focused attention followed by the three divided attention conditions, with the order of the three divided attention conditions counterbalanced across participants. Previous findings (Murray et al., 1997a; 1997b) indicated that by presenting the less demanding listening conditions first, negative reactions (e.g., frustration, sense of defeat), which could adversely affect participants’ performance of subsequent conditions (Brookshire, 1976; Cahana-Amitay et al., 2011), could be minimized. Because the three divided attention conditions were conjectured to have comparable task demands (i.e., all involved two responses and abiding by specific attention priorities), their presentation order was counterbalanced across participants to help curtail the confounding effects of time-based factors (e.g., practice, fatigue) on divided attention performance patterns.
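Counterbalancing the presentation order of the three divided attention conditions can be implemented by rotating participants through the six possible orderings; the following sketch is an assumption, as the study does not specify its counterbalancing scheme at this level of detail:

```python
from itertools import permutations

DIV_CONDITIONS = ("DIV1", "DIV2", "DIV3")

def counterbalanced_order(participant_index):
    """Assign successive participants successive orderings of the three
    divided attention conditions, cycling through all 3! = 6 orders."""
    orders = list(permutations(DIV_CONDITIONS))
    return orders[participant_index % len(orders)]
```

With this scheme, every block of six consecutive participants sees each of the six orders exactly once, balancing order effects across the sample.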

Ten practice trials preceded the focused attention and each divided attention condition. During these practice trials, participants were encouraged to ask questions and were provided feedback about their performance accuracy and response speed. There was no cut-off score for these practice trials; instead, these trials served to assure that participants understood instructions for these more complex listening conditions and in the case of the divided attention conditions, gave responses to both the sentence and tone stimuli for each trial (see Figure 2 for an example of simplified written instructions for one of the divided attention conditions). For example, during the practice trials for DIV1, participants could be reminded of attentional priorities (i.e., 75% sentences, 25% tones) if they were responding more quickly and accurately to the tone versus sentence task.

Figure 2.

Figure 2.

Example of simplified written instructions for one of the divided attention conditions, DIV1.

2.3. Equipment and Stimulus Recording Procedures

Sentence and tone stimuli for the experimental tasks were prepared and administered using an Apple PowerBook G3 laptop and PsyScope (Cohen, MacWhinney, Flatt, & Provost, 1993), an experimental design and control software system. Sentence stimuli were first recorded in a double-walled, sound-treated booth with a Sony PCM-MI digital audio recorder. Next, SoundEdit software was used to generate the tone stimuli as well as to edit, amplify, and mix the competing sentence and tone stimuli. So that paired, competing stimuli (i.e., those presented during focused and divided attention conditions) were heard concurrently but did not mask each other (Deatherage, 1972), each pair’s stimulus onsets were synchronized, stimulus durations were equated, and peak intensities differed by less than 5 dB. The auditory stimuli were presented free field via two speakers, one approximately 6 inches on each side of the laptop computer; the computer itself was positioned 6 to 12 inches in front of the participant. During practice trials, the loudness level of the stimuli was adjusted according to each participant’s preference.
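The tone generation and mixing constraints described above (synchronized onsets, equated durations, peak intensities within 5 dB) can be sketched with NumPy; the sampling rate and amplitudes below are illustrative assumptions, not the SoundEdit settings actually used:

```python
import numpy as np

FS = 44100  # sampling rate (Hz); an assumed value, not from the study

def pure_tone(freq_hz, duration_s, peak=1.0):
    """Generate a pure sine tone at the given frequency and peak amplitude."""
    t = np.arange(int(FS * duration_s)) / FS
    return peak * np.sin(2 * np.pi * freq_hz * t)

def mix_pair(sentence, tone):
    """Superimpose a tone on a sentence waveform: onsets synchronized,
    durations equated, and peak intensities within 5 dB of each other."""
    n = min(len(sentence), len(tone))
    sentence, tone = sentence[:n], tone[:n]
    level_diff_db = 20 * np.log10(np.max(np.abs(sentence)) / np.max(np.abs(tone)))
    assert abs(level_diff_db) < 5, "peak intensities must differ by < 5 dB"
    return sentence + tone

low = pure_tone(500, 2.0)    # "LOW" stimulus
high = pure_tone(2000, 2.0)  # "HIGH" stimulus
```

In practice the sentence waveform would come from the digitized recordings; here any NumPy array of samples serves the same role.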

Stimulus presentation for the experimental tasks was programmed and controlled by the PsyScope software (Cohen et al., 1993). A silent, 1500 ms inter-trial interval (i.e., the time period between response offset and the onset of the subsequent stimulus) was utilized across listening tasks and conditions. PsyScope also allowed computation of on-line response accuracy and reaction time (RT). RTs were quantified from the onset of the target stimulus to the participant’s input on the laptop keyboard.

2.4. Data Analyses

Comparison of the control and aphasic groups’ performances on the cognitive test battery was completed via a series of independent samples t-tests. For experimental task data, RTs from errors and RT outliers (i.e., greater than 2 SD above or below the individual participant's mean RT for that condition) were first removed from the data set. The vast majority of RT outliers occurred during initial trials of a condition; the remaining RT outliers were associated with delays related to participants commenting about their performance. For each participant group, less than 1.5% of the RT data were identified as outliers and thus excluded. Sentence judgment accuracy data were examined in terms of percent correct as well as A', a measure of grammatical sensitivity. According to Grier’s (1971) computational formula, A' adjusts for systematic response bias and estimates the proportion of correct responses so that A' = 1.00 indicates 100% correct or perfect discrimination (for further explanation of A' and assumptions underlying the signal detection approach to grammaticality judgments, see Linebarger et al., 1983).
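Grier's A' and the ±2 SD trimming rule can be stated compactly in Python. The A' formula below is the standard published form (including its complement for the below-chance case); the trimming helper mirrors the per-participant, per-condition rule described above:

```python
import numpy as np

def a_prime(hits, false_alarms):
    """Grier's (1971) nonparametric sensitivity index A'.
    A' = 1.0 corresponds to perfect discrimination; 0.5 to chance."""
    h, f = hits, false_alarms
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

def trim_rt_outliers(rts, n_sd=2.0):
    """Drop RTs more than n_sd standard deviations above or below the mean,
    applied per participant and condition as in the study."""
    rts = np.asarray(rts, dtype=float)
    m, s = rts.mean(), rts.std(ddof=1)
    return rts[np.abs(rts - m) <= n_sd * s]
```

For example, perfect performance (hit rate 1.0, false alarm rate 0.0) yields A' = 1.0, matching the interpretation given in the text.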

Next, “across group” and “across condition” variances were evaluated via Fmax (i.e., the ratio of the largest to smallest variance; Keppel, 1991) to determine if the ANOVA assumption of variance homogeneity had been met. Percent correct accuracy data were arcsine transformed and RT data were logarithmically transformed when Fmax was found to exceed 3; analyses were then repeated on the transformed data to assure that they were appropriate for parametric statistical testing. A series of mixed 3-factor repeated measures ANOVAs were applied to evaluate the sentence judgment data; group served as the between-subjects factor, and attention condition and sentence type served as the within-subjects factors. For the tone discrimination data, 2-factor repeated measures ANOVAs with group as the between-subjects factor and attention condition as the within-subjects factor were applied to the accuracy, A', and RT data. Tukey post-hoc pairwise comparisons (p ≤ .05) were utilized to further examine significant main and interaction effects.
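The Fmax screen and the variance-stabilizing transforms can be sketched as follows; the 2·arcsin(√p) form is the conventional arcsine transform and is assumed here, since the paper does not specify the exact variant used:

```python
import numpy as np

def f_max(*group_variances):
    """Hartley's Fmax: ratio of the largest to smallest cell variance."""
    return max(group_variances) / min(group_variances)

def arcsine_transform(percent_correct):
    """Variance-stabilizing transform for proportion data (0-100% input)."""
    p = np.asarray(percent_correct, dtype=float) / 100.0
    return 2 * np.arcsin(np.sqrt(p))

def log_transform(rts):
    """Logarithmic transform for positively skewed RT distributions."""
    return np.log(np.asarray(rts, dtype=float))

# Transform only when Fmax exceeds the criterion of 3 used in the study
# (the variances below are made-up illustrations):
variances = (120.0, 55.0, 40.0)
needs_transform = f_max(*variances) > 3  # 120/40 = 3.0 does not exceed 3
```

Note that the criterion is "exceed 3," so a ratio of exactly 3.0, as in the illustration, would not trigger the transforms.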

Lastly, Pearson product moment correlations of dependent measures with continuous variables (e.g., demographic characteristics, linguistic and cognitive test results) were calculated separately for each group to investigate factors associated with performing the sentence judgment task by itself (i.e., isolation condition), in the presence of distraction (i.e., focused attention condition), and concomitantly with another task (i.e., divided attention). Only one divided attention condition was included in the correlational analysis, to limit the number of correlations calculated. This decision was made because completion of each of the three divided attention conditions should, given their similar task and response demands (e.g., respond to both the sentence and tone task; follow a prescribed attentional priority), draw upon similar linguistic and cognitive abilities (indeed, an examination of correlations between DIV2 and DIV3 data [accuracy and RT] for each group indicated that all correlations were significant at the p < .01 level, ranging from r = .54 to r = .82), and because the interest was in identifying factors related to completing the sentence judgment task under the contrasting listening conditions rather than factors related to performing under the three different prescribed attentional priorities (i.e., 75% vs. 50% vs. 25% of attention allocated to the sentence task). DIV3, which proved to be the most challenging divided attention condition, particularly for the control group (e.g., greater performance variation and thus potentially more robust correlations), was selected to be entered into these calculations. To ensure the data adhered to linear model assumptions and to identify extreme outliers, scatter diagrams and residual means and plots were checked (Verran & Ferketich, 1987).
Given that numerous correlations were computed but that these analyses were exploratory in nature, an effect size criterion was adopted to interpret correlations: Only correlations which exceeded a moderate effect size criterion of r > .3 were considered meaningful (Cohen, 1988).
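The correlational screen — Pearson r computed per group, with only correlations exceeding the moderate effect-size criterion of |r| > .3 interpreted — can be sketched as below; the variable names and sample structure are hypothetical:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def meaningful_correlations(measures, criterion=0.3):
    """Return only variable pairs whose |r| exceeds the moderate
    effect-size criterion (Cohen, 1988) adopted in the study."""
    names = list(measures)
    out = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson_r(measures[a], measures[b])
            if abs(r) > criterion:
                out[(a, b)] = r
    return out
```

Keeping the criterion as a parameter makes the exploratory nature of the screen explicit: the cutoff filters which correlations are interpreted, not which are computed.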

3. Results

3.1. Cognitive Testing

Across the cognitive tests (see Table 2), the aphasic group achieved significantly (p < .05) lower scores than the control group, indicating poorer STM, WM, and attention test performances. Although as a group the aphasic participants achieved lower BIT scores than the control participants, visual neglect was indicated in only two aphasic participants who obtained scores at or below the neglect cut-off score of 129.

3.2. Sentence Judgment Task

3.2.1. Accuracy

The repeated measures ANOVA yielded significant main effects of group, F(1,47) = 16.842, p < .001, condition, F(4,188) = 64.023, p < .001, and sentence type, F(2,94) = 17.866, p < .001, as well as significant condition by group, F(4,188) = 8.302, p < .001, condition by sentence, F(8,376) = 16.460, p < .001, and condition by sentence by group interaction effects, F(8,376) = 2.557, p = .01. Tukey post-hoc pairwise comparisons (with alpha set at ≤ .05) indicated that regardless of condition or sentence type, the control group performed significantly more accurately than the aphasic group (see Table 3 and Figure 3). Examination of the condition by group interaction effect indicated that the control group’s grammatical judgment accuracy dropped significantly during DIV2 and DIV3 compared to the other listening conditions; they also performed DIV3 less accurately than DIV2. The aphasic group appeared more sensitive to distraction as their sentence judgment accuracy dropped as they moved from the isolation to the focused attention condition; in fact, for the aphasic group, the isolation condition was performed more accurately than any other listening condition. The aphasic group also completed the focused attention and DIV1 conditions significantly more accurately than DIV2 and DIV3; there was no significant accuracy difference between DIV2 and DIV3 for the aphasic group.

Table 3.

Accuracy (% Correct) and Reaction Time (msec) Group Means, Standard Deviations, and Ranges for the Sentence Judgment Task.

Sentence Type
Data Type Condition Group Active Passive Conjoined
Accuracy Isolation Aphasic M 95.0 92.6 91.3
SD 5.4 9.5 8.3
Range 85–100 70–100 75–100
Control M 98.1 98.5 97.7
SD 3.2 3.1 4.9
Range 90–100 90–100 80–100
Focused Aphasic M 85.0 87.4 81.5
Attention SD 11.3 17.8 15.8
Range 60–100 45–100 50–100
Control M 97.5 98.7 97.1
SD 2.9 2.7 3.2
Range 90–100 90–100 90–100
Divided Aphasic M 87.8 83.9 79.3
Attention #1 SD 14.9 12.2 19.1
Range 50–100 50–95 40–95
Control M 98.8 98.3 94.4
SD 2.6 3.7 6.4
Range 90–100 85–100 75–100
Divided Aphasic M 81.5 80.4 77.6
Attention #2 SD 16.9 14.8 14.3
Range 40–100 50–100 55–95
Control M 97.1 96.0 92.7
SD 3.5 4.5 6.5
Range 90–100 85–100 70–100
Divided Aphasic M 84.6 74.1 80.2
Attention #3 SD 14.1 12.3 15.2
Range 50–100 45–90 50–100
Control M 97.5 85.2 93.8
SD 3.5 4.1 6.4
Range 90–100 80–95 80–100
Reaction Isolation Aphasic M 4497.4 3955.7 4256.5
Time SD 537.0 468.7 523.1
Range 3666–5605 3218–4841 3513–5187
Control M 3926.8 3401.4 3655.7
SD 330.4 275.5 225.6
Range 3277–4448 3013–4081 3252–4168
Focused Aphasic M 4281.4 3901.1 4113.4
Attention SD 553.4 611.6 537.3
Range 3541–5683 3108–5510 3422–5210
Control M 3675.6 3223.2 3425.9
SD 225.0 211.8 237.9
Range 3239–4051 2874–3751 3031–3890
Divided Aphasic M 5370.8 4874.5 4935.8
Attention #1 SD 629.6 617.0 525.3
Range 4309–6726 3877–6044 3944–5978
Control M 4602.9 4161.9 4327.6
SD 328.8 269.4 319.0
Range 3982–5154 3771–4646 3700–5053
Divided Aphasic M 4983.4 4728.8 4862.6
Attention #2 SD 498.1 529.3 440.1
Range 4060–5986 3919–5684 4276–5621
Control M 4369.0 4061.0 4242.2
SD 356.1 395.9 307.4
Range 3765–4912 3335–4638 3637–4777
Divided Aphasic M 4984.5 4587.92 4683.4
Attention #3 SD 539.9 518.8 408.2
Range 3927–5942 3513–5902 3668–5266
Control M 4389.9 4097.0 4250.4
SD 423.8 357.2 373.9
Range 3665–5212 3351–4574.6 3541–4917

Note. Divided Attention #1 = 75/25% condition in which participants were asked to allot 75% of their attentional capacity to the sentence judgment task and 25% to the tone task; Divided Attention #2 = 50/50% priority condition in which participants were asked to distribute equally their attention to both tasks; Divided Attention #3 = 25/75% condition in which participants were instructed to allot 25% of their attentional capacity to the sentence judgment task and 75% to the tone task.

Figure 3.

Figure 3.

Mean proportion of correct grammaticality judgments (and 95% confidence interval bars) for each group across each attention condition. DIV1 = 75/25% condition prioritizing the sentence judgment task; DIV2 = 50/50% priority condition; DIV3 = 25/75% condition prioritizing the tone discrimination task.

Exploration of the condition by sentence by group interaction effect indicated differences between the groups in terms of their responses to active sentences: Whereas the control group displayed no significant accuracy differences in responding to active sentences across the various listening conditions, the aphasic group did. That is, the aphasic group judged active sentences most accurately during the isolation condition; they also responded to active sentences significantly more accurately during DIV1 compared to focused attention, DIV2, and DIV3 conditions. For passive sentences, both groups demonstrated no significant difference in their performance accuracy as they moved from isolation to focused attention conditions. The divided attention conditions did, however, have a negative effect on their judgment of passives: Control participants completed isolation, focused, and DIV1 conditions more accurately than DIV2 and DIV3, and also performed DIV3 less accurately than DIV2. Aphasic participants completed passives during all divided attention conditions less accurately than during isolation and focused attention conditions; among the divided attention conditions, they completed DIV3 significantly less accurately than DIV1 and DIV2. For conjoined sentences, the control participants completed the isolation condition significantly more accurately than all divided attention conditions and also completed the focused attention condition significantly more accurately than DIV2 and DIV3. In contrast, the aphasic group judged conjoined sentences most accurately during the isolation condition, and also demonstrated a significant difference between the focused attention and DIV2 conditions, with the former condition being performed more accurately.

Performance patterns also differed for the groups when sentence types within each listening condition were examined. For example, the control group displayed no significant differences among the sentence types during the isolation and focused attention conditions; during the divided attention conditions, they judged active sentences most accurately and, with the exception of DIV3, judged passives more accurately than conjoined sentences. In contrast, the aphasic group demonstrated significant differences among the sentence types within several conditions. Similar to the control group, the aphasic group demonstrated no significant accuracy differences among the sentence types during the isolation condition. During the focused attention condition, however, differences did emerge: they responded to actives and passives significantly more accurately than to conjoined sentences. During each of the divided attention conditions, they judged active sentences most accurately (although the difference between passives and actives during DIV2 did not reach significance) and conjoined sentences least accurately, with the exception of DIV3, when they, like the control group, responded least accurately to passives.

3.2.2. Grammatical Sensitivity

Significant group, F(1,47) = 18.653, p < .001, condition, F(4,188) = 25.760, p < .001, and condition by group effects, F(4,188) = 10.571, p < .001, were identified following statistical analysis of A' data (see Table 4). Further examination of these effects indicated that regardless of condition, the control group demonstrated significantly greater grammatical sensitivity than the aphasic group. Whereas the only significant across-condition comparison for the control group was between the isolation and DIV3 conditions, the aphasic group’s grammatical sensitivity dropped significantly as they moved from the simpler to more demanding listening conditions; that is, they displayed greatest sensitivity during the isolation condition and also completed the focused attention condition with greater sensitivity than the DIV2 and DIV3 conditions. Similarities between the statistical analysis outcomes for the A' and accuracy data suggested that neither the aphasic nor control group was demonstrating systematic response bias; accordingly, to avoid redundancy, only the accuracy data are discussed in Section 4.

Table 4.

Proportions of hits and false alarms, and A' sensitivity scores for the sentence judgment task.

Aphasic Control
Condition Hits False Alarms A’ Hits False Alarms A’
Isolation M .942 .083 .961 .986 .022 .991
SD .077 .089 .040 .027 .034 .011
Range .8–1 0–.3 .89–1 .9–1 0–.13 .96–1
Focused M .888 .196 .894 .992 .037 .989
Attention SD .129 .167 .115 .014 .029 .008
Range .6–1 0–.57 .64–.99 .97–1 0–.1 .97–1
Divided Attention #1 M .890 .216 .888 .982 .038 .986
SD .141 .197 .124 .030 .046 .014
Range .5–1 .03–.7 .56–.98 .9–1 0–.13 .95–1
Divided Attention #2 M .867 .265 .862 .978 .073 .975
SD .144 .205 .132 .027 .053 .017
Range .47–1 .03–.8 .5–.98 .9–1 0–.2 .94–1
Divided Attention #3 M .880 .280 .884 .973 .129 .959
SD .152 .178 .056 .037 .057 .019
Range .4–1 .1–.7 .77–.95 .87–1 .07–.3 .92–.98

Note. Divided Attention #1 = 75/25% condition in which participants were asked to allot 75% of their attentional capacity to the sentence judgment task and 25% to the tone task; Divided Attention #2 = 50/50% priority condition in which participants were asked to distribute equally their attention to both tasks; Divided Attention #3 = 25/75% condition in which participants were instructed to allot 25% of their attentional capacity to the sentence judgment task and 75% to the tone task.

3.2.3. Reaction Time

Statistical analysis of the sentence judgment RT data yielded significant group, F(1,47) = 40.997, p < .001, condition, F(4,188) = 136.283, p < .001, and sentence main effects, F(2,94) = 252.355, p < .001. Whereas the condition by sentence type interaction also reached significance, F(8,376) = 11.016, p < .001, the condition by sentence by group interaction effect only approached significance, F(8,376) = 1.915, p = .055. Post-hoc testing indicated that for all sentence types and conditions, the control group made sentence judgments more quickly than the aphasic group. Across the listening conditions, the focused attention condition was completed most quickly, followed by the isolation condition and then the three divided attention conditions (see Table 3 and Figure 4). Across the divided attention conditions, DIV1 was performed more slowly than DIV2 and DIV3 when responding to actives or passives; for conjoined sentences, DIV1 was only performed more slowly than DIV3. Comparison of the different sentence types revealed the following speed hierarchy, from slowest to quickest: actives followed by conjoined followed by passives. Because the condition by sentence by group interaction approached significance, follow-up analyses explored where group differences might be occurring. These analyses revealed that the aphasic group, compared to the control group, demonstrated fewer significant RT differences among the three sentence types during the divided attention conditions. That is, for the aphasic group there was no significant RT difference between passives and conjoined sentences during DIV1 and DIV3 whereas there was for the control group; additionally, the aphasic group displayed no significant RT difference between actives and conjoined sentences during DIV2 even though the control group did.

Figure 4.

Figure 4.

Mean reaction times (and 95% confidence interval bars) on the grammaticality judgment task for each group across each attention condition. DIV1 = 75/25% condition prioritizing the sentence judgment task; DIV2 = 50/50% priority condition; DIV3 = 25/75% condition prioritizing the tone discrimination task.

3.3. Tone Discrimination Task

3.3.1. Accuracy

Group accuracy means, standard deviations, and ranges for the secondary, tone discrimination task are listed in Table 5 (see also Figure 5). Statistical analysis revealed significant group, F(1,47) = 1.656, p < .001, condition, F(3,141) = 11.520, p < .001, and condition by group effects, F(3,141) = 8.537, p < .001. Post-hoc analyses indicated that although there was no significant difference between the aphasic and control groups’ tone discrimination accuracy during the isolation condition, the aphasic group performed significantly less accurately during all of the divided attention conditions. Whereas the control group’s accuracy did not significantly vary across conditions, the aphasic group performed the isolation condition most accurately, with no significant differences among the three divided attention conditions.

Table 5.

Accuracy (% Correct) and Reaction Time (msec) Group Means, Standard Deviations, and Ranges for the Tone Discrimination Task

Condition Group Data Type
Accuracy Reaction Time
Isolation Aphasic M 98.0 748.6
SD 2.3 218.0
Range 93–100 446–1140
Control M 97.9 556.6
SD 3.0 116.1
Range 92–100 374–777
Divided Attention #1 Aphasic M 89.1 5176.2
SD 11.3 796.2
Range 53–100* 3526–6714
Control M 98.1 4352.5
SD 2.3 467.0
Range 90–100 3722–5205
Divided Attention #2 Aphasic M 89.5 5149.4
SD 10.7 838.4
Range 60–100 3761–6372
Control M 97.4 4288.0
SD 3.1 492.8
Range 88–100 3610–5153
Divided Attention #3 Aphasic M 91.3 4963.3
SD 10.0 919.4
Range 67–100 3200–6773
Control M 98.8 3993.7
SD 1.5 365.6
Range 95–100 3231–4731

Note. Divided Attention #1 = 75/25% condition in which participants were asked to allot 75% of their attentional capacity to the sentence judgment task and 25% to the tone task; Divided Attention #2 = 50/50% priority condition in which participants were asked to distribute equally their attention to both tasks; Divided Attention #3 = 25/75% condition in which participants were instructed to allot 25% of their attentional capacity to the sentence judgment task and 75% to the tone task.

Figure 5.

Figure 5.

Mean proportion of correct tone discriminations (and 95% confidence interval bars) for each group across each attention condition. DIV1 = 75/25% condition prioritizing the sentence judgment task; DIV2 = 50/50% priority condition; DIV3 = 25/75% condition prioritizing the tone discrimination task.

3.3.2. Reaction Time

As shown in Figure 6, the 95% confidence intervals of the aphasic group failed to overlap with those of the control group for all listening conditions (see also Table 5). This finding indicated that the aphasic group responded significantly more slowly to the tones than the control group, regardless of listening condition (Keppel, 1991). Accordingly, separate repeated measures ANOVAs were computed for each participant group to evaluate the effects of listening condition on their tone discrimination RTs. A significant condition effect was observed for both the aphasic, F(3,66) = 895.453, p < .001, and control groups, F(3,66) = 2319.929, p < .001. Post-hoc testing indicated that for the aphasic group, the isolation condition was performed significantly more quickly than all of the divided attention conditions, and there were no significant RT differences among the divided attention conditions. The control group also performed most quickly during the isolation condition; in addition, they achieved significantly faster RTs during DIV3 (i.e., attentional priority to the tone task) compared to DIV1 (i.e., attentional priority to the sentence task), and the comparison of DIV3 to DIV2 (i.e., equal priority) approached significance (i.e., p = .06).

Figure 6.
Mean reaction times (and 95% confidence interval bars) on the tone discrimination task for each group across each attention condition. DIV1 = 75/25% condition prioritizing the sentence judgment task; DIV2 = 50/50% priority condition; DIV3 = 25/75% condition prioritizing the tone discrimination task.

3.4. Correlates of Experimental Task Performances

Across the attention conditions, sentence judgment accuracy and speed for the aphasic group were significantly associated with language (ADP Aphasia Severity and Auditory Comprehension scores), attention (TEA subtests), and working memory (auditory-verbal working memory measure) test performances (see Table 6). Additionally, performance of the sentence judgment task during the isolation condition significantly correlated with performance during the focused and divided attention conditions: Aphasic participants who responded more accurately and more quickly during the isolation condition also responded more accurately and more quickly during the more demanding listening conditions. The negative correlation between sentence judgment accuracy and RT during the isolation condition indicated that aphasic participants who more accurately made sentence judgments also made their judgments more quickly.

Table 6.

Correlational Findings

                                       Listening Condition
                              Isolation          Focused Attention   Divided Attention*
Measure                       %        RT        %        RT         %        RT
Aphasic Group
ADP
    Auditory Comp.            .70***   −.56**    .57**    −.66***    .69***   −.53**
    Aphasia Severity          .64***   −.65***   .70***   −.74***    .56**    −.40*
ASHA FACS                     .29      −.55**    .31      −.67***    .32      −.38
Sentence Judgment¹
    %                         −.61**             .91***              .81***
    RT                        −.61**             .82***              .46*
TEA
    ECD                       .57**    −.37      .50*     −.38       .68***   −.26
    VE                        .47*     −.58**    .39      −.45*      .59**    −.18
    ECR                       .48*     −.54**    .49*     −.48*      .45*     −.23
    TSC                       .56**    −.45*     .39      −.54**     .42*     −.57**
BIT                           .34      −.39      .16      −.57**     .14      −.33
WMS-R
    VMS-F                     .11      −.38      .14      −.55**     .26      −.27
    VMS-B                     .26      −.11      .38      −.55**     .32      −.31
Auditory-Verbal Working Memory
    Recall Errors             −.75***  .47*      −.62**   .65**      −.67***  −.58**
Control Group
Sentence Judgment¹
    %                         −.10               .27                 .60**
    RT                        −.10               .64**               .28
TEA
    ECD                       .30      −.54**    .12      −.42*      .27      −.44*
    VE                        .21      −.09      .15      −.13       .21      −.46*
    ECR                       .04      −.32      .28      −.24       .24      −.29
    TSC                       .31      −.14      .10      −.18       .19      −.43*
WMS-R
    VMS-F                     .10      −.03      .02      −.06       .12      −.13
    VMS-B                     .23      −.26      .14      −.13       .15      −.11
BIT                           .05      −.32      .01      −.31       .29      −.28
Auditory-Verbal Working Memory
    Recall Errors             −.16     .33       −.13     .30        −.25     .48*

Note. Values are correlation coefficients (r). % = accuracy; RT = reaction time.

* p < .05; ** p < .01; *** p < .001.

¹ Sentence judgment data from the isolation condition.

For the control group, there were fewer significant correlations compared to the aphasic group (see Table 6). Only their sentence judgment RT significantly correlated with performance on select TEA subtests and the auditory-verbal working memory measure. There were also some significant relationships between performance of the sentence judgment task during the isolation condition and performance during the more demanding listening conditions: Isolation accuracy correlated with divided attention accuracy, and isolation RT correlated with focused attention RT.

4. Discussion

Findings from the current study complement those of earlier investigations regarding, more generally, resource accounts of aphasia (Hula & McNeil, 2008; Kurland, 2011; Mayer & Murray, 2012; Murray et al., 1997a) and, more specifically, the relationship between sentence processing and extra-linguistic cognitive abilities in individuals with aphasia (Caplan et al., 2013; Murray et al., 1997c; Sung et al., 2009). Regardless of listening condition and sentence type manipulations, the aphasic participants responded less accurately and more slowly than the control participants; however, performing a sentence judgment task focused on processing aspectual information under listening conditions with increased extra-linguistic cognitive demands proved more challenging for the aphasic versus control group. Furthermore, all sentence types became difficult for participants with aphasia during more demanding listening conditions, and significant associations were identified between the aphasic participants’ sentence judgment performances and their scores on tests of not only language ability, but also attention and memory. The following sections offer a discussion of these findings as well as their theoretical and clinical implications.

4.1. Listening Condition Effects

Consistent with earlier findings (e.g., Garraffa & Learmonth, 2013; Murray et al., 1997c), increasing the cognitive demands of the listening conditions was more detrimental to the sentence judgment performances of the aphasic versus control participants. For example, control participants showed significant decreases in their sentence judgment accuracy only during DIV2 (equal attention to both tasks) and DIV3 (attentional priority to the tone task); prior research has likewise documented that non-brain-damaged adults find making grammaticality judgments about similar types of agreement errors more challenging when extra-linguistic demands are increased (Blackwell & Bates, 1995; McDonald, 2008a). As in prior investigations of the sentence judgment abilities of adults with no brain damage (Boyle & Coltheart, 1996; Murray et al., 1997c; Smith, 2011), distraction (focused attention condition) had no negative effect upon the control group’s accuracy. In contrast, aphasic participants demonstrated significant decrements in their sentence judgment accuracy as soon as an auditory distracter was introduced (focused attention condition) and during each of the three divided attention conditions compared to the isolation condition. Difficulties completing listening as well as speaking tasks in the presence of distraction or a competing task, regardless of whether the stimuli were linguistic or nonlinguistic, have been previously documented among individuals with aphasia (Erickson, Goldinger, & LaPointe, 1996; Hula, McNeil, & Sung, 2007; Murray, 2000; Murray et al., 1997a, c; Villard & Kiran, 2015).

In concert with prior research involving cognitive task demand manipulations of either linguistic (Murray et al., 1997a, c) or nonlinguistic tasks (Villard & Kiran, 2015), as listening demands increased, so too, generally, did the sentence judgment RTs of the participants with aphasia as well as their non-brain-damaged peers. Interestingly, RTs for both groups were faster during the focused attention condition compared to the isolation condition. In prior studies (Goff et al., 2006; Smith, 2011), increased response speed in distracting conditions has been reported and attributed to additional resources being recruited to focus concentration on the target task, which in turn enables faster responses. In the current study, the increased response speed during the focused attention condition could also or instead reflect a practice effect given that the condition order was fixed, with the isolation condition always preceding the focused attention condition. Indeed, the control group maintained near ceiling-level sentence judgment accuracy despite faster RTs in the focused attention versus isolation condition. The aphasic group’s sentence judgments, however, significantly decreased in accuracy as well as RT during the focused attention versus isolation condition. Similar patterns were noted for the tone discrimination task: The control group appeared to utilize a speed/accuracy trade-off, whereas the aphasic group’s tone discrimination accuracy dropped during the more complex listening conditions despite their significantly slower RTs during the divided attention conditions. These ineffective speed/accuracy trade-offs observed for the aphasic group may suggest issues in terms of judging task demands, self-monitoring speed or overall performance, or both (Murray et al., 1997a, c; Petroi et al., 2014).
For example, if individuals with aphasia perceived that the isolation and focused attention conditions were of similar difficulty and/or that they were performing as accurately as they had during the first run of the sentence judgment task (i.e., isolation condition), they might have proceeded to respond more quickly during the second run of the task (i.e., focused attention condition).

The findings also indicated that the aphasic group had greater difficulty than the control group in adhering to the prescribed attention priority instructions. Among the three listening conditions in which responding to the sentence judgment task was the priority (i.e., isolation, focused attention, DIV1), the control group made sentence judgments with comparable accuracies. Only when instructed to equally divide their effort (DIV2) or to prioritize effort to the competing, tone discrimination task (DIV3), did the control group demonstrate decreases in their sentence judgment accuracy; they also performed DIV3 less accurately than DIV2, further indicating adherence to task instructions and consequently, appropriate resource allocation. Whereas the aphasic group also completed DIV1 significantly more accurately than the other divided attention conditions, following allocation instructions for DIV2 and DIV3 proved challenging to them, as they completed these conditions with similar sentence judgment accuracies. The aphasic group’s performance of the tone discrimination task across the listening conditions further evinced resource allocation difficulties. That is, they performed all three divided attention conditions with similar accuracies and RTs, despite the differential effort priority instructions for each of these listening conditions. In contrast, the control group’s RT pattern across the divided attention conditions more closely followed the stipulated attentional priorities (e.g., they achieved faster RTs in the divided attention condition prioritizing the tone task).

Collectively, the above results support processing accounts of aphasia and indicate that the participants with aphasia demonstrated insufficient resource capacity and allocation abilities (Hula et al., 2007; Murray et al., 1997a, c; Petroi et al., 2014). For example, capacity reductions are suggested by their poorer performance of the sentence judgment (accuracy and RT) and tone discrimination (RT) tasks in the isolation condition, as well as their greater vulnerability to increased extra-linguistic task demands (focused and divided attention conditions) compared to the control participants. Allocation impairments are suggested given that aphasic participants displayed difficulty adhering to the prescribed attention priorities during the three divided attention conditions. In future investigations, incorporating ratings of perceived task difficulty and performance could help delineate whether inaccurate judgment of task demands, self-monitoring issues, or both contribute to the allocation impairments observed during the divided attention conditions (Dean et al., 2017; Murray et al., 1997a, c).

4.2. Sentence Type Effects

Both similarities and differences were observed in terms of how the aphasic and control groups responded to judging aspectual information within the three sentence types (i.e., actives, passives, and conjoined) across the listening conditions. In terms of similarities, during the least demanding, isolation condition, both groups demonstrated no sentence effect for their response accuracy, and a similar RT pattern across sentence types (i.e., fastest RTs for passives and slowest RTs for actives). These similarities between the aphasic and control groups are consistent with prior research (Caplan et al., 2007; Hanne et al., 2011; Schumacher et al., 2015; Smith, 2011; Wilson et al., 2003; Wilson & Saygin, 2004). For instance, Wilson and colleagues (Wilson et al., 2003; Wilson & Saygin, 2004) found that although aphasic participants made grammaticality judgments less accurately than control participants, both groups showed the same performance pattern across sentence types (e.g., the sentence type that was most difficult for the controls was also most difficult for the aphasic participants).

An unexpected finding was that both groups responded more quickly to passive versus active sentences during the isolation condition, given that passives have increased syntactic processing demands (Ferreira, 2003). In reviewing prior sentence processing studies to determine if this RT pattern had been previously reported, it was noted that RT has been infrequently examined (e.g., Caplan et al., 1997; Wilson et al., 2003; Wilson & Saygin, 2004). When RT was examined, differences in the task demands of the sentence processing task (e.g., comprehension vs. judgment task; end-of-sentence response vs. allowing response as soon as the participant wanted), participants (e.g., only agrammatic vs. spectrum of aphasia types), and stimuli (e.g., German vs. English; different forms of passives) make comparing the current and prior findings (e.g., Caplan et al., 2007; Hanne et al., 2011; Schumacher et al., 2015) challenging. It is plausible that participants listened longer to the active sentences to ensure they were not conjoined (and thus that an aspect violation might occur on a second possible verb), whereas for the passive sentences they responded once they heard the “by.” Nonetheless, future research is needed to determine the reliability of this RT pattern and the validity of the hypothesized heuristic (e.g., investigators could ask participants about their strategy use after completing the sentence judgment task).

Another similarity for both participant groups was that the more complex sentence types, that is, passives (being more syntactically demanding) and conjoined sentences (being more propositionally demanding), were particularly vulnerable to the increased extra-linguistic demands of the divided attention conditions. For example, both aphasic and control participants responded to passive sentences less accurately and more slowly during divided attention conditions compared to the isolation and focused attention conditions. Other investigations in which extra-linguistic demands were increased (e.g., addition of a concomitant memory task) have similarly reported this greater sensitivity of more complex sentence structures to more challenging listening or reading conditions (Fallon et al., 2006; Garraffa & Learmonth, 2013). During DIV3 (the condition prioritizing the tone discrimination task), however, both groups were more accurate with conjoined versus passive sentences. Therefore, in contrast to a modular, syntax-specific resource theory, processing of at least aspectual information within passive constructions was vulnerable to the increased extra-linguistic task demands of this divided attention condition.

There were, however, also differences between the groups’ responses to the various sentence types across the listening conditions. For example, the aphasic participants’ judgment of active sentences was sensitive to listening condition manipulations, showing accuracy decrements in focused and divided attention conditions compared to the isolation condition; no such listening condition effect was identified for the control group’s responses to active sentences. That responding to even simple active sentences was difficult in the presence of distraction replicates the findings of Murray et al. (1997c), and further suggests that these participants with aphasia were demonstrating resource capacity limitations (as described in Section 4.1). An unexpected finding was that the aphasic group judged active sentences more accurately during DIV1 compared to the focused attention condition. It is possible that because condition order was fixed (with focused attention preceding DIV1), at least some aphasic participants were accommodating to listening to the sentences in the presence of the concurrent tone stimuli, but that this exposure effect only benefited their response accuracy for the simplest sentence type; however, given the small, albeit statistically significant, mean accuracy difference (i.e., 2.8%) between these two listening conditions for active sentences, additional research is warranted to establish the reliability of this finding.

The groups also demonstrated different response patterns across the divided attention conditions. As an example, for passive sentences, the control group’s judgment accuracy did not drop until DIV2 and DIV3, the two divided attention conditions in which the sentence judgment task was not prioritized; they further demonstrated an accuracy drop between DIV2 and DIV3. Accordingly, they were following the imposed attention priorities during the three divided attention conditions. In contrast, aphasic participants struggled to abide by the attentional priority instructions: Judging passives became significantly more difficult for them during the dual-versus single task conditions (i.e., DIV1 was less accurate than isolation and focused attention conditions), and there was no significant difference between their judgments of passives during DIV1 and DIV2. This finding again suggests resource allocation problems among the aphasic participants.

Finally, because significant interactions between sentence type and listening condition were observed in the accuracy and RT data, response selection did not appear to be the only source of dual-task interference. That is, if response selection (i.e., demands or a bottleneck related to making two responses vs. one) were the sole determinant of dual-task interference, similar degrees of interference would have been expected for all three sentence types; however, that was not the case. Instead, or additionally, competition for limited processing resources made completing the sentence judgment task during the divided attention conditions challenging for both participant groups.

Collectively, the sentence type effects accord more closely with sentence processing models that specify a non-modular resource pool for completing syntactic and other linguistic operations than with those that stipulate a sentence- or syntax-specific resource (e.g., McDonald, 2008a vs. Caplan & Waters, 1999, respectively): Given that both the syntactic manipulation (i.e., active vs. passive) and the number-of-propositions manipulation (i.e., active vs. conjoined) proved sensitive to the listening condition manipulations in the aphasic and control groups’ response patterns, post-interpretive as well as syntactic processes appear to draw, at least in part, or perhaps to a variable extent, from a general, non-modular resource system (Garraffa & Learmonth, 2013; Fallon et al., 2006; McDonald, 2008a; Wilson et al., 2003). It should be noted, however, that because the current sentence judgment task focused only on processing aspectual information in more of an off-line versus on-line manner, and because the sentence stimuli were not controlled for semantic irreversibility, the extent to which the current sentence type manipulations and findings inform how specific linguistic and non-linguistic cognitive components (e.g., use of semantic heuristics vs. syntactic knowledge to judge sentences) contribute to broader sentence processing and comprehension abilities is limited. For instance, whereas sentence stimuli unique to each listening condition were used in the present investigation to minimize fatigue, boredom, and exposure or learning effects across conditions, it would be of interest in future studies to use a closed set of semantically reversible sentences to control for some of these stimulus limitations and to examine the reliability of the sentence type effects described above.

4.3. Relationships Between Sentence Judgment and Cognitive Test Battery Performance

The exploratory correlational findings were in concert with models of sentence processing that acknowledge the role of extra-linguistic cognitive abilities (e.g., Caplan et al., 2007, 2013; Garraffa & Learmonth, 2013). That is, for both groups, associations were identified between sentence judgment and cognitive test performances, although fewer such associations were found for the control group. For the aphasic group, significant associations were identified between sentence judgment performance (accuracy and RT) and attention and memory measures, regardless of listening condition. The strong association between their performance of cognitive tests that tap WM (i.e., auditory-verbal working memory test, TEA Visual Elevator and Elevator Counting with Reversal) and their sentence judgment accuracy and RT during the isolation condition is consistent with research establishing a relationship between WM and sentence processing not only in aphasia (Caplan et al., 2013; Eom & Sung, 2016; Sung et al., 2009; Tompkins et al., 1994), but also in typical aging (DeCaro et al., 2016) and various neurological disorders (Colman et al., 2006; Small et al., 2000), including studies involving the discrimination of similar types of morphosyntactic errors (Blackwell & Bates, 1995; McDonald, 2008a, b). Likewise, relationships between sentence processing and other cognitive abilities, such as focused attention and, relatedly, inhibition (skills tapped by TEA subtests), have been previously identified in the aphasia literature (Wiener et al., 2004). In contrast to some prior research (Caplan et al., 2013; Pettigrew & Hillis, 2014), however, the simpler span measure (i.e., WMS-R Visual Span) was involved in only one significant correlation for the aphasic group (with sentence judgment speed during the focused attention condition) and none for the control group.
A likely explanation for this somewhat conflicting finding is that compared to the span measures used in earlier studies (e.g., auditory modality, digit and/or word stimuli, phonological STM; Caplan et al., 2013), there was less overlap between the task demands of the Visual Span subtests (e.g., visual modality, nonverbal stimuli, visuospatial STM) and the current sentence judgment task (McDonald, 2008a).

For the control group, few significant correlations were identified, with only their sentence judgment RTs showing a relationship with a small subset of cognitive measures. The absence of significant correlations between their sentence judgment accuracy and the cognitive measures likely reflects their near ceiling-level accuracies across the listening conditions, which limited variation and, in turn, correlations. Relatedly, even though both participant groups obtained sentence judgment accuracies that exceeded 90% during the isolation condition, there were many significant correlations between sentence judgments and cognitive test scores for the aphasic group but not the control group; this finding may indicate that whereas the aphasic participants had to recruit both attention and memory abilities to support their sentence judgment performance (i.e., the language task was challenging for them), the control group did not (Brownsett et al., 2014).

The exploratory correlational findings for both groups, albeit to a lesser extent for the control group, also acknowledged the contributions of material-specific abilities to sentence processing (Caplan et al., 1997, 2007; Slevc & Martin, 2016). That is, for the aphasic group, there were significant relationships between sentence judgment performance (across listening conditions) and language measures (ADP scores), indicating that the individuals with broader and more severe language impairments were more challenged by the current sentence judgment task that focused on processing aspectual information. Moreover, the aphasic group’s performance of the sentence judgment task during the isolation condition significantly correlated with performance of the sentence judgment task during the focused and divided attention conditions. Although Murray et al. (1997c) found no significant correlations between aphasia battery test scores and sentence judgment performance during a divided attention condition, they acknowledged the homogeneity of their sample in terms of aphasia profile (e.g., mild severity only) as well as demographic variables (e.g., limited age range).

4.4. Theoretical and Clinical Implications

The current study outcomes support resource or processing models of aphasia and sentence processing (e.g., Caplan et al., 2007; Patil et al., 2016) by further delineating interactions between sentence judgment performance and extra-linguistic cognitive abilities and task demands in both aphasic and healthy aging populations. Difficulties completing the sentence judgment task, particularly during conditions reflective of complex and demanding daily listening conditions, were identified among individuals whose language impairments reflected a range of aphasia profiles. Although lesion data were not available for all participants with aphasia, in the cases for whom this information was available, lesion location varied widely (e.g., primarily subcortical vs. primarily cortical; primarily frontal vs. primarily parietal; circumscribed within a single lobe vs. diffuse). That aphasic participants with diverse behavioral and structural profiles were challenged by the sentence-level task requiring discrimination of verb aspect violations meshes well with prior sentence processing studies in aphasia (e.g., Caplan et al., 1997, 2007), recent imaging findings pertaining to the distributed nature of sentence processing networks (e.g., Teichmann et al., 2015; Xing et al., 2017), and more generally, conceptualizations of the neurobiology of language (e.g., Meyer et al., 2014; Tremblay & Dick, 2016).

Different sentence constructions were included in the current study to examine whether increased syntactic versus propositional demands were differentially affected by the listening condition manipulations. The aphasic and control groups’ ability to judge the grammaticality of both passives and conjoined sentences was compromised by the increased cognitive demands of the dual-task conditions. This finding aligns with the proposition that sentence processing skills of the type tapped by the current grammaticality judgment task are supported, at least partially, by a domain-general resource (McDonald, 2008a, b) versus solely reliant upon a syntax-specific resource pool (Caplan & Waters, 1999). It is also important to keep in mind, however, that the current grammaticality judgment task only required discriminating verb aspect violations. Thus, it cannot be assumed that these conclusions relating to the nature of resources supporting performance of this sentence judgment task apply to other or broader aspects of sentence processing.

Clinically, the findings underscore the importance of viewing extra-linguistic cognitive testing as an integral component of aphasia assessment (Marinelli et al., 2017; Murray, 2012, 2017b; Simic, Rochon, Greco, & Martino, 2017; Villard & Kiran, 2015). That is, attention and memory impairments were common in the aphasic participants and, importantly, strongly related to their performance of the sentence task during both ideal and taxing listening conditions. The findings also suggest that there may be value in assessing language skills in contexts that vary in their extra-linguistic cognitive demands. For example, whereas many aphasic participants completed the sentence judgment task as accurately as the control participants during the isolation condition, with the introduction of distraction, their accuracy significantly dropped. Therefore, there appears to be the potential to underestimate the extent of language challenges that individuals with aphasia may face in daily communication activities and settings, and in turn, the amount of intervention and support they need, if aphasia assessment is restricted to the insulated and structured environments and tasks typical of most clinical settings.

5. Conclusion

In summary, complex listening conditions were found to negatively affect sentence judgment performances of individuals with aphasia as well as typical aging adults. Sentence processing accuracy and efficiency among the individuals with aphasia, however, were more vulnerable to task demand manipulations. Performance patterns across both the listening condition and sentence type manipulations were in concert with potent relationships between linguistic and general cognitive processes and thus, with resource and processing accounts of sentence processing, and more generally, linguistic symptoms in aphasia (Hula & McNeil, 2008; Moineau et al., 2005; Murray & Kean, 2004). Further refinement (e.g., incorporation of ratings of task difficulty; collection of lesion data; use of semantically reversible sentences) and replication of the current study’s approach are recommended to expand and consolidate its findings.

  • The sentence judgment performances of participants with aphasia were negatively affected by focused and divided attention conditions, whereas control participants were only challenged in divided attention conditions

  • Participants with aphasia demonstrated resource allocation difficulties during the divided attention conditions

  • Manipulations of sentence type (i.e., syntactic and propositional complexity) and correlational analyses suggested sentence processing draws from both material-specific and general cognitive resources

  • The findings accord well with resource accounts of not only aphasia but also sentence processing

Supplementary Material

Acknowledgments

Funding: Preparation of this article was supported, in part, by Grant DC03886 from the National Institute on Deafness and Other Communication Disorders.


References

  1. Baldo JV, Paulraj SR, Curran BC, & Dronkers NF (2015). Impaired reasoning and problem-solving in individuals with language impairment due to aphasia or language delay. Frontiers in Psychology, 6. doi: 10.3389/fpsyg.2015.01523 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Barona A, Reynolds C, & Chastain R (1984). A demographically based index of premorbid intelligence for the WAIS-R. Journal of Clinical and Consulting Psychology, 52, 885–887. [Google Scholar]
  3. Bayles KA, & Tomoeda CK (1991). Arizona Battery for Communication Disorders of Dementia. Tucson, AZ: Canyonlands Publishing. [DOI] [PubMed] [Google Scholar]
  4. Blackwell A & Bates E (1995). Inducing agrammatic profiles in normals: Evidence for the selective vulnerability of morphology under cognitive resource limitation. Journal of Cognitive Neuroscience, 7, 228–257. [DOI] [PubMed] [Google Scholar]
  5. Boyle R & Coltheart V (1996). Effects of irrelevant sounds on phonological coding in reading comprehension and short term memory. The Quarterly Journal of Experimental Psychology: Section A, 49(2), 398–416. [DOI] [PubMed] [Google Scholar]
  6. Briggs GG, & Nebes RD (1975). Patterns of hand preference in a student population. Cortex, 11, 230–238. [DOI] [PubMed] [Google Scholar]
  7. Brookshire RH (1976). Effects of task difficulty on sentence comprehension performance of aphasic subjects. Journal of Communication Disorders, 9, 167–173. [DOI] [PubMed] [Google Scholar]
  8. Brownsett SLE, Warren JE, Geranmayeh F, Woodhead Z, Leech R, & Wise RJS (2014). Cognitive control and its impact on recovery from aphasic stroke. Brain: A Journal of Neurology, 137(Pt. 1), 242–254. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Cahana-Amitay D, & Albert ML (2015). Redefining recovery from aphasia. New York: Oxford University Press. [Google Scholar]
  10. Cahana-Amitay D, Albert ML, Pyun SB, Westwood A, Jenkins T, Wolford S, & Finley M (2011). Language as a stressor in aphasia. Aphasiology, 25(5), 593–614. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Caplan D, & Waters GS (1999). Verbal working memory and sentence comprehension. Behavioral and Brain Sciences, 22, 77–94. [DOI] [PubMed] [Google Scholar]
  12. Caplan D, Waters GS, & Hildebrandt N (1997). Determinants of sentence comprehension in aphasic patients in sentence-picture matching tasks. Journal of Speech,Language, and Hearing Research, 40, 542–555. [DOI] [PubMed] [Google Scholar]
  13. Caplan D, Michaud J, & Hufford R (2013). Short term memory, working memory, and syntactic comprehension in aphasia. Cognitive Neuropsychology, 30(2), 77–109. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Caplan D, Waters G, DeDe G, Michaud J, & Reddy A (2007). A study of syntactic processing in aphasia I: Behavioral (psycholinguistic) aspect. Brain and Language, 101, 103–150. [DOI] [PubMed] [Google Scholar]
  15. Cohen J, & Cohen P (1983). Applied regression/correlation analysis for the behavioral sciences (2nd ed.). New York: John Wiley. [Google Scholar]
  16. Cohen J (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New York: Academic Press. [Google Scholar]
  17. Cohen JD, MacWhinney B, Flatt M, & Provost J (1993). PsyScope: A new graphic interactive environment for designing psychology experiments. Behavioral Research Methods, Instruments and Computers, 25, 257–271. [Google Scholar]
  18. Colman K, Koerts J, van Beilen M, Leenders KL, & Bastiaanse R (2006). The role of cognitive mechanisms in sentence comprehension in Dutch speaking Parkinson’s disease patients: Preliminary data. Brain and Language, 99, 120–121. [Google Scholar]
  19. Dabul B (2000). Apraxia Battery for Adults - 2. Austin, TX: Pro-Ed. [Google Scholar]
  20. Dickey MW, & Thompson CK (2006). Automatic processing of wh-and NP-movement in agrammatic aphasia: Evidence from eyetracking. Brain and Language, 99, 73–74. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Dean MP, Della Sala S, Beschin N, & Cocchini G (2017). Anosognosia and selfcorrection of naming errors in aphasia. Aphasiology, 31(7), 725–740. [Google Scholar]
  22. DeCaro R, Peelle JE, Grossman M, & Wingfield A (2016). The two sides of sensory-cognitive interactions: Effects of age, hearing acuity, and working memory span on sentence comprehension. Frontiers in Psychology, 7. doi: 10.3389/fpsyg.2016.00236 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Dick F, Bates E, Wulfeck B, Utman JA, Dronkers N, & Gernsbacher MA (2001). Language deficits, localization, and grammar: Evidence for a distributive model of language breakdown in aphasic patients and neurologically intact individuals. Psychological Review, 108(4), 759–88. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Dignam J, Copland D, O'Brien K, Burfein P, Khan A, & Rodriguez AD (2017). Influence of cognitive ability on therapy outcomes for anomia in adults with chronic poststroke aphasia. Journal of Speech, Language, and Hearing Research, 60(2), 406–421. [DOI] [PubMed] [Google Scholar]
  25. Eom B, & Sung JE (2016). The effects of sentence repetition-based working memory treatment on sentence comprehension abilities in individuals with aphasia. American Journal of Speech-Language Pathology, 25(4S), S823–S838. [DOI] [PubMed] [Google Scholar]
  26. Erickson RJ, Goldinger SD, & LaPointe LL (1996). Auditory vigilance in aphasic individuals: Detecting nonlinguistic stimuli with full or divided attention. Brain and Cognition, 30, 244–253. [DOI] [PubMed] [Google Scholar]
  27. Gibson E, Sandberg C, Fedorenko E, Bergen L, & Kiran S (2016). A rational inference approach to aphasic language comprehension. Aphasiology, 30(11), 1341–1360. [Google Scholar]
  29. Hula WD, McNeil MR, & Sung JE (2007). Is there an impairment of language-specific attentional processing in aphasia? Brain and Language, 103, 240–241. [Google Scholar]
  31. Fallon M, Peelle JE, & Wingfield A (2006). Spoken sentence processing in young and older adults modulated by task demands: Evidence from self-paced listening. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 61(1), P10–P17. [DOI] [PubMed] [Google Scholar]
  32. Ferreira F (2003). The misinterpretation of noncanonical sentences. Cognitive Psychology, 47(2), 164–203. [DOI] [PubMed] [Google Scholar]
  33. Folstein MF, Folstein SE, & McHugh PR (1975). “Mini-Mental State:” A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–198. [DOI] [PubMed] [Google Scholar]
  34. Frattali C, Thompson C, Holland A, Wohl A, & Ferketic M (1995). American Speech-Language-Hearing Association Functional Assessment of Communication Skills for Adults. Rockville, MD: ASHA. [Google Scholar]
  35. Friedmann N, & Gvion A (2007). As far as individuals with conduction aphasia understood these sentences were ungrammatical: Garden path in conduction aphasia. Aphasiology, 21(6/7/8), 570–586. [Google Scholar]
  36. Garraffa M, & Learmonth G (2013). Sentence comprehension and memory load in aphasia: The role of resource reduction. Procedia-Social and Behavioral Sciences, 94, 143–144. [Google Scholar]
  37. Gerritsen MJ, Berg IJ, Deelman BG, Visser-Keizer AC, & Jong BMD (2003). Speed of information processing after unilateral stroke. Journal of Clinical and Experimental Neuropsychology, 25(1), 1–13. [DOI] [PubMed] [Google Scholar]
  38. Goff RA, LaPointe LL, Hancock AB, Stierwalt JAG, & Heald G (2006, November). Quality and intensity of cognitive distraction: Cafeteria noise and babble in younger vs. older adults. Presentation at the annual convention of the American Speech-Language-Hearing Association, Miami, FL. [Google Scholar]
  39. Golper LA, Rau M, Erskine B, Langhans J, & Houlihan J (1987). Aphasic patients’ performance on a mental status examination. Clinical Aphasiology, 17, 124–135. [Google Scholar]
  40. Goral M, Clark-Cotton M, Spiro A, Obler LK, Verkuilen J, & Albert ML (2011). The contribution of set switching and working memory to sentence processing in older adults. Experimental Aging Research, 37(5), 516–538. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Grier JB (1971). Nonparametric indexes for sensitivity and bias: Computing formulas. Psychological Bulletin, 75, 424–429. [DOI] [PubMed] [Google Scholar]
  42. Grodzinsky Y (1984). The syntactic characterization of agrammatism. Cognition, 16, 99–120. [DOI] [PubMed] [Google Scholar]
  43. Grodzinsky Y (2000). The neurology of syntax: Language use without Broca’s area. Behavioral and Brain Sciences, 23, 1–71. [DOI] [PubMed] [Google Scholar]
  44. Grossman M, & White-Devine T (1998). Sentence comprehension in Alzheimer’s disease. Brain and Language, 62, 186–201. [DOI] [PubMed] [Google Scholar]
  45. Hanne S, Sekerina IA, Vasishth S, Burchert F, & De Bleser R (2011). Chance in agrammatic sentence comprehension: What does it really mean? Evidence from eye movements of German agrammatic aphasic patients. Aphasiology, 25(2), 221–244. [Google Scholar]
  46. Hula WD, & McNeil MR (2008). Models of attention and dual-task performance as explanatory constructs in aphasia. Seminars in Speech and Language, 29(3), 169–187. [DOI] [PubMed] [Google Scholar]
  47. Keppel G (1991). Design and analysis: A researcher's handbook. Englewood Cliffs, NJ: Prentice Hall. [Google Scholar]
  48. Kurland J (2011). The role that attention plays in language processing. Perspectives on Neurophysiology and Neurogenic Speech and Language Disorders, 21(2), 44–77. [Google Scholar]
  49. Lee B, & Pyun SB (2014). Characteristics of cognitive impairment in patients with post-stroke aphasia. Annals of Rehabilitation Medicine, 38(6), 759–765. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Linebarger MC, Schwartz M, & Saffran E (1983). Sensitivity to grammatical structure in so-called agrammatic aphasics. Cognition, 13, 361–392. [DOI] [PubMed] [Google Scholar]
  51. Marinelli CV, Spaccavento S, Craca A, Marangolo P, & Angelelli P (2017). Different cognitive profiles of patients with severe aphasia. Behavioural Neurology, 2017. doi: 10.1155/2017/3875954 [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Martin N, & Saffran EM (1999). Effects of word processing and short-term memory deficits on verbal learning: Evidence from aphasia. International Journal of Psychology, 34, 339–346. [Google Scholar]
  53. Mauner G, Fromkin VA, & Cornell TL (1993). Comprehension and acceptability judgments in agrammatism: Disruptions in the syntax of referential dependency. Brain and Language, 45(3), 340–370. [DOI] [PubMed] [Google Scholar]
  54. Mayer JF, & Murray LL (2012). Measuring working memory deficits in aphasia. Journal of Communication Disorders, 45, 325–339. [DOI] [PubMed] [Google Scholar]
  55. McDonald JL (2008a). Differences in the cognitive demands of word order, plural, and subject-verb agreement constructions. Psychonomic Bulletin & Review, 15(5), 980–984. [DOI] [PubMed] [Google Scholar]
  56. McDonald JL (2008b). Grammaticality judgments in children: The role of age, working memory and phonological ability. Journal of Child Language, 35(2), 247–268. [DOI] [PubMed] [Google Scholar]
  57. Meteyard L, Bruce C, Edmundson A, & Oakhill J (2015). Profiling text comprehension impairments in aphasia. Aphasiology, 29(1), 1–28. [Google Scholar]
  58. Meyer L, Cunitz K, Obleser J, & Friederici AD (2014). Sentence processing and verbal working memory in a white-matter-disconnection patient. Neuropsychologia, 61, 190–196. [DOI] [PubMed] [Google Scholar]
  59. Moineau S, Dronkers NF, & Bates E (2005). Exploring the processing continuum of single-word comprehension in aphasia. Journal of Speech, Language, and Hearing Research, 48(4), 884–896. [DOI] [PubMed] [Google Scholar]
  60. Montgomery JW, Gillam RB, & Evans JL (2016). Syntactic versus memory accounts of the sentence comprehension deficits of specific language impairment: Looking back, looking ahead. Journal of Speech, Language, and Hearing Research, 59(6), 1491–1504. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Murray LL (2000). The effects of varying attentional demands on the word-retrieval skills of adults with aphasia, right hemisphere brain-damage or no brain-damage. Brain and Language, 72, 40–72. [DOI] [PubMed] [Google Scholar]
  62. Murray LL (2004). Cognitive treatments for aphasia: Should we and can we help attention and working memory problems? Journal of Medical Speech-Language Pathology, 12, xxi–xxxviii. [Google Scholar]
  63. Murray LL (2012). Attention and other cognitive deficits in aphasia: Presence and relation to language and communication measures. American Journal of Speech-Language Pathology, 21, 167–179. [DOI] [PubMed] [Google Scholar]
  64. Murray LL (2017a). Design fluency subsequent to onset of aphasia: A distinct pattern of executive function difficulties? Aphasiology, 31(7), 793–818. [Google Scholar]
  65. Murray L (2017b). Focusing attention on executive functioning in aphasia. Aphasiology, 31(7), 721–724. [Google Scholar]
  66. Murray LL, & Kean J (2004). Resource theory and aphasia: Time to abandon or time to revise? Aphasiology, 18, 830–835. [Google Scholar]
  67. Murray LL, Holland AL, & Beeson PM (1997a). Accuracy monitoring and task demand evaluation in aphasia. Aphasiology, 11, 401–414. [Google Scholar]
  68. Murray LL, Holland AL, & Beeson PM (1997b). Auditory processing in individuals with mild aphasia: A study of resource allocation. Journal of Speech, Language, and Hearing Research, 40, 792–809. [DOI] [PubMed] [Google Scholar]
  69. Murray LL, Holland AL, & Beeson PM (1997c). Grammaticality judgements of mildly aphasic individuals under dual-task conditions. Aphasiology, 11(10), 993–1016. [Google Scholar]
  70. Oliveira FFD, Marin SDMC, & Bertolucci PHF (2017). Neurological impressions on the organization of language networks in the human brain. Brain Injury, 31(2), 140–150. [DOI] [PubMed] [Google Scholar]
  71. Paek EJ, & Murray LL (2015). Working memory treatment for an individual with chronic aphasia: A case study. eHearsay: Electronic Journal of the Ohio Speech-Language and Hearing Association, 5(1), 86–98. [Google Scholar]
  72. Pashler H (1994). Dual-task interference in simple tasks: Data and theory. Psychological Bulletin, 116, 220–244. [DOI] [PubMed] [Google Scholar]
  73. Pashler H, & Johnston JC (1998). Attentional limitations in dual-task performance In Pashler H (ed.), Attention (pp. 155–189). East Sussex, UK: Psychology Press. [Google Scholar]
  74. Patil U, Hanne S, Burchert F, De Bleser R, & Vasishth S (2016). A computational evaluation of sentence processing deficits in aphasia. Cognitive Science, 40(1), 5–50. [DOI] [PubMed] [Google Scholar]
  75. Penn C, Frankel T, Watermeyer J, & Russell N (2010). Executive function and conversational strategies in bilingual aphasia. Aphasiology, 24(2), 288–308. [Google Scholar]
  76. Petroi D, Koul RK, & Corwin M (2014). Effect of number of graphic symbols, levels, and listening conditions on symbol identification and latency in persons with aphasia. Augmentative and Alternative Communication, 30(1), 40–54. [DOI] [PubMed] [Google Scholar]
  77. Pettigrew C, & Hillis AE (2014). Role for memory capacity in sentence comprehension: Evidence from acute stroke. Aphasiology, 25(10), 1258–1280. [DOI] [PMC free article] [PubMed] [Google Scholar]
  78. Ponsford J, & Kinsella G (1991). The use of a rating scale of attentional behavior. Neuropsychological Rehabilitation, 1, 241–257. [Google Scholar]
  79. Ratcliff R, & McKoon G (2008). The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 20(4), 873–922. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Robertson IH, Ward T, Ridgeway V, & Nimmo-Smith I (1994). The Test of Everyday Attention. Gaylord, MI: Northern Speech Services. [Google Scholar]
  81. Rochon E, Waters GS, & Caplan D (2000). The relationship between measures of working memory and sentence comprehension in patients with Alzheimer’s disease. Journal of Speech, Language, and Hearing Research, 43(2), 395–413. [DOI] [PubMed] [Google Scholar]
  82. Salis C (2012). Short-term memory treatment: Patterns of learning and generalisation to sentence comprehension in a person with aphasia. Neuropsychological Rehabilitation, 22, 428–448. [DOI] [PubMed] [Google Scholar]
  83. Salis C, Hwang F, Howard D & Lallini N (2017) Short-term and working memory treatments for improving sentence comprehension in aphasia: A review and a replication study. Seminars in Speech and Language, 38(1), 29–39. [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Schumacher R, Cazzoli D, Eggenberger N, Preisig B, Nef T, Nyffeler T, ... Müri RM (2015). Cue recognition and integration-eye tracking evidence of processing differences in sentence comprehension in aphasia. PloS one, 10(11), e0142853. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Simic T, Rochon E, Greco E, & Martino R (2017). Baseline executive control ability and its relationship to language therapy improvements in post-stroke aphasia: A systematic review. Neuropsychological Rehabilitation, 1–45. doi: 10.1080/09602011.2017.1307768 [DOI] [PubMed] [Google Scholar]
  86. Slevc LR, & Martin RC (2016). Syntactic agreement attraction reflects working memory processes. Journal of Cognitive Psychology, 28(7), 773–790. [Google Scholar]
  87. Small AM, & Brandt JF (1963). Differential threshold for frequency. Journal of the Acoustical Society of America, 35, 785. [Google Scholar]
  88. Small JA, Kemper S, & Lyons K (2000). Sentence repetition and processing resources in Alzheimer's disease. Brain and Language, 75(2), 232–258. [DOI] [PubMed] [Google Scholar]
  89. Smith PA (2011). Attention, working memory, and grammaticality judgment in typical young adults. Journal of Speech, Language, and Hearing Research, 54(3), 918–931. [DOI] [PubMed] [Google Scholar]
  90. Stern RA (1998). Visual Analog Mood Scales. Odessa, FL: Psychological Assessment Resources. [Google Scholar]
  91. Sullivan N, Walenski M, Love T, & Shapiro LP (2017). The comprehension of sentences with unaccusative verbs in aphasia: A test of the intervener hypothesis. Aphasiology, 37(1), 67–81. [DOI] [PMC free article] [PubMed] [Google Scholar]
  92. Sung JE, McNeil MR, Pratt SR, Dickey MW, Hula WD, Szuminsky N, & Doyle PJ (2009). Verbal working memory and its relationship to sentence-level reading and listening comprehension in persons with aphasia. Aphasiology, 23, 1040–1052. [Google Scholar]
  93. Teichmann M, Rosso C, Martini JB, Bloch I, Brugières P, Duffau H, ... Bachoud-Lévi AC (2015). A cortical-subcortical syntax pathway linking Broca's area and the striatum. Human Brain Mapping, 36(6), 2270–2283. [DOI] [PMC free article] [PubMed] [Google Scholar]
  94. Tompkins CA, Bloise CGR, Timko ML, & Baumgaertner A (1994). Working memory and inference revision in brain-damaged and normally aging adults. Journal of Speech and Hearing Research, 37, 896–912. [DOI] [PubMed] [Google Scholar]
  95. Tremblay P, & Dick AS (2016). Broca and Wernicke are dead, or moving past the classic model of language neurobiology. Brain and Language, 162, 60–71. [DOI] [PubMed] [Google Scholar]
  96. Valilla-Rohter S & Kiran S (2013). Non-linguistic learning and aphasia: Evidence from a paired-associate and feedback-based task. Neuropsychologia, 51, 79–90. [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Verran JA, & Ferketich SL (1987). Testing linear model assumptions: Residual analysis. Nursing Research, 36, 127–130. [PubMed] [Google Scholar]
  98. Vigliecca NS, Penalva MC, Molina SC, Voos JA, & Vigliecca MR (2012). Is the Folstein's Mini-Mental test an aphasia test? Applied Neuropsychology: Adult, 19(3), 221–228. [DOI] [PubMed] [Google Scholar]
  99. Villard S, & Kiran S (2015). Between-session intra-individual variability in sustained, selective, and integrational non-linguistic attention in aphasia. Neuropsychologia, 66, 204–212. [DOI] [PubMed] [Google Scholar]
  100. Vukovic M, Vuksanovic J, & Vukovic I (2008). Comparison of recovery patterns of language and cognitive functions in patients with post-traumatic language processing deficits and in patients with aphasia following stroke. Journal of Communication Disorders, 41, 531–552. [DOI] [PubMed] [Google Scholar]
  101. Wechsler D (1987). Wechsler Memory Scale-Revised. San Antonio, TX: The Psychological Corporation. [Google Scholar]
  102. Wickens CD (1989). Attention and skilled performance In Holding D (Ed.), Human Skills (pp. 72–105). New York: John Wiley & Sons. [Google Scholar]
  103. Wiener D, Tabor Connor L, & Obler L (2004). Inhibition and auditory comprehension in Wernicke's aphasia. Aphasiology, 18(5–7), 599–609. [Google Scholar]
  104. Wilson BA, Cockburn J, & Halligan P (1987). The Behavioral Inattention Test. Bury St. Edmunds, Suffolk, England: Thames Valley Test Company. [Google Scholar]
  105. Wilson SM, & Saygin AP (2004). Grammaticality judgment in aphasia: Deficits are not specific to syntactic structures, aphasic syndromes, or lesion sites. Journal of Cognitive Neuroscience, 16(2), 238–252. [DOI] [PubMed] [Google Scholar]
  106. Wilson SM, Saygin AP, Schleicher E, Dick F, & Bates E (2003). Grammaticality judgment under non-optimal processing conditions: Deficits induced in normal participants resemble those observed in aphasic patients. Brain and Language, 87, 67–68. [Google Scholar]
  107. Wingfield A, McCoy SL, Peelle JE, Tun PA, & Cox LC (2006). Effects of adult aging and hearing loss on comprehension of rapid speech varying in syntactic complexity. Journal of the American Academy of Audiology, 17(7), 487–497. [DOI] [PubMed] [Google Scholar]
  108. Xing S, Lacey EH, Skipper-Kallal LM, Zeng J, & Turkeltaub PE (2017). White matter correlates of auditory comprehension outcomes in chronic post-stroke aphasia. Frontiers in Neurology, 8. doi: 10.3389/fneur.2017.00054 [DOI] [PMC free article] [PubMed] [Google Scholar]
  109. Zakariâs L, Keresztes A, Marton K, & Wartenburger I (2016). Positive effects of a computerised working memory and executive function training on sentence comprehension in aphasia. Neuropsychological Rehabilitation. doi: 10.1080/09602011.2016.1159579 [DOI] [PubMed] [Google Scholar]
  110. Ziegler W, Kerkhoff G, Cate D, Artinger F, & Zierdt A (2001). Spatial processing of spoken words in aphasia and in neglect. Cortex, 37, 754–756. [DOI] [PubMed] [Google Scholar]
