Abstract
Aphasia is an acquired language impairment affecting speaking, listening, reading, and writing. Aphasia occurs in about a third of patients who have ischemic stroke and significantly affects functional recovery and return to work. Stroke is more common in older individuals but also occurs in young adults and children. Because people experiencing a stroke are typically aged between 65 and 84 years, hearing loss is common and can potentially interfere with rehabilitation. There is some evidence for increased risk and greater severity of sensorineural hearing loss in the stroke population, and hence it has been recommended that all people surviving a stroke should have a hearing test. Auditory processing difficulties have also been reported poststroke. The International Classification of Functioning, Disability and Health (ICF) can be used as a basis for describing the effect of aphasia, hearing loss, and auditory processing difficulties on activities and participation. Effects include reduced participation in activities outside the home, such as work and recreation, and difficulty engaging in social interaction and communicating needs. A case example of a young man (M) in his 30s who experienced a left-hemisphere ischemic stroke is presented. M has normal hearing sensitivity but has aphasia and auditory processing difficulties based on behavioral and cortical evoked potential measures. His principal goal is to return to work. Although auditory processing difficulties (and hearing loss) are acknowledged in the literature, clinical protocols typically do not specify routine assessment. The literature and the case example presented here suggest a need for further research in this area and a possible change in practice toward more routine assessment of auditory function poststroke.
Keywords: Stroke, aphasia, auditory processing, ICF, hearing loss, cortical auditory evoked potentials
Learning Outcomes: As a result of this activity, the participant will be able to (1) list possible effects of having a stroke on hearing thresholds and auditory and language processing; (2) explain how language and auditory processing could be assessed in people who have had a stroke; and (3) describe how auditory and language difficulties can affect activities and participation.
This article discusses auditory and language processing after stroke and presents a single case study of an adult with difficulties in these areas. The International Classification of Functioning, Disability and Health (ICF) framework can be used to look beyond the information about impairment provided by the test results in this case example to consider the broader effects of auditory and language processing difficulties on participation in everyday life activities. The ICF framework can be used to better understand the complexity of the difficulties experienced by people with co-occurring auditory and language difficulties, but it is also useful for people experiencing difficulties in just one of these areas.
The American Speech-Language-Hearing Association describes ICF contextual factors and functioning and disability that affect people with a health condition such as hearing loss or aphasia as follows1:
Personal factors—race, gender, age, educational level, coping styles, among others (Personal factors are not specifically coded in the ICF because of the wide variability among cultures. They are included in the framework, however, because although they are independent of the health condition, they may have an influence on how a person functions.)
Environmental factors—factors that are not within the person's control, such as family, work, government agencies, laws, and cultural beliefs
Activity and participation—the person's functional status, including communication, mobility, interpersonal interactions, self-care, learning, applying knowledge, and so on
Body functions and structures—the actual anatomy and physiology/psychology of the human body
Despite increasing recognition of the relevance and clinical utility of the ICF in the aphasia and audiology fields, there are still few clinical instruments designed specifically to evaluate activity and participation. The Living with Aphasia: Framework for Outcome Measurement adaptation of the ICF for people with aphasia does not separate activities from the participation domain, as activities are viewed as an important component of real-life participation.2 3 Brandenburg et al conducted a systematic review of self-report instruments for aphasia developed for adults and published in English; they identified 29 instruments containing over 50% participation items, but few instruments that solely assessed participation.4 The most frequently included participation categories were education; paid employment; recreation; socializing; being a caregiver; relating with friends, family, and spouses; volunteer work; managing finances; community life; civic duties; human rights; and religion/spirituality.
In the current case example, there is extensive information from standardized tests and auditory evoked potentials regarding the impact of the stroke on body functions and structures in the auditory and language domains. Activity and participation information was obtained via interview with the client. The literature on auditory processing and hearing in people surviving a stroke is briefly reviewed, and a case study of an otherwise healthy adult (M) who experienced a stroke at age 35 years is presented. M's findings are discussed using the ICF framework because it highlights the broad impact of hearing loss, auditory processing difficulties, aphasia, and other poststroke disabilities on people's lives.5 6 7 8 9 10
Effects of Stroke on Communication and Auditory Processing
Stroke continues to be a major cause of disability internationally despite evidence for declining rates of intracerebral hemorrhage in some populations.11 12 13 14 15 Because stroke is more common in older patients,16 hearing professionals are likely to encounter clients with hearing difficulties who have experienced a stroke. Approximately two-thirds of stroke survivors have a communication disorder and about a third of stroke survivors have aphasia.17 People with stroke-related communication problems can have difficulties in all language modalities (listening, speaking, reading, writing).18 19 Communication problems in people with aphasia poststroke can include difficulty speaking or finding words, poor voice quality, imprecise articulation, and reduced perception and production of speech prosody (intonation, loudness, rate, emotional expression). Impaired prosody perception and production after left or right hemisphere stroke could, in part, reflect auditory processing changes after stroke.20 21 22
Robson et al found that 10 patients diagnosed with Wernicke's aphasia after stroke had impaired temporal processing for complex tonal stimuli compared with age- and hearing-matched controls.22 The participants all had word-finding, sentence repetition, and auditory comprehension problems. Temporal processing was measured using frequency modulation detection and detection of dynamic modulation in moving ripple stimuli. Consistent with the view that impaired auditory processing after stroke could contribute to the linguistic difficulties experienced by people with aphasia poststroke, there is evidence that temporal processing training can improve auditory comprehension in people with aphasia.23 Szelag and colleagues conducted a blinded randomized controlled study of training effects in people with aphasia (n = 18) randomly assigned to a temporal training group or a control (loudness adjustment) training group.23 The groups did not differ in age, gender, time since stroke, level of comprehension deficits, or pretraining rapid auditory processing deficits. Posttraining language scores (Token Test) and temporal processing improved significantly in the temporal training group but not in the control group.
Stroke and Hearing Loss
Because the risk of sensorineural hearing loss (SNHL) and stroke increases with age, people with aphasia after a stroke may also have SNHL.16 For adults aged 65 and over in the United States Health and Retirement Study (HRS) (n = 10,557), the prevalence of hearing loss and stroke in 2008 was 25.0% and 9.2%, respectively. Several studies have found high rates of untreated hearing loss in people with aphasia.24 25 26 Lassig et al reported that 21% (n = 15) of the 72 patients with aphasia following left-hemisphere ischemic stroke who were able to be tested using pure tone audiometry had a hearing loss and were not wearing hearing aids.25 Formby et al measured hearing levels in 243 people aged 30 to 103 years (mean age ≈ 70 years) who had had a single stroke and concluded that the rate of hearing loss was higher than that seen in healthy controls, but similar to rates of hearing loss for older people in residential care.27 These authors defined normal hearing as a pure tone threshold of 25-dB hearing level (HL) or better; only 8% of the 243 people tested had normal hearing at all audiometric frequencies (250 to 8,000 Hz). On average, the participants had mild low-frequency and moderate to severe high-frequency hearing loss, but there was considerable individual variability. There is some evidence for increased cardiovascular risk in adults with hearing loss, particularly low-frequency hearing loss,28 which could account in part for hearing loss being more common in people who have had a stroke than in healthy age-matched peers; however, mechanisms for the possible comorbidity of stroke and hearing loss are unknown. A study of a large cohort of patients hospitalized for sudden hearing loss (n = 1,423) found that the hazard of stroke during the 5-year follow-up was 1.64 times greater (p < 0.001) for patients with sudden hearing loss than for a control group of patients admitted for appendectomy (n = 5,692),29 highlighting a possible cardiovascular link between hearing loss and stroke risk. Bamiou noted that the disruption to the blood supply to the brain during stroke can affect all levels of the auditory pathway.30
Effects of Hearing Difficulties on Activity and Participation
The National Social Life, Health, and Aging Project survey of 3,005 community-dwelling older adults aged 57 to 85 found that 42% had trouble hearing whispered words; 18 to 20% experienced frustration when talking with family or when visiting with friends, relatives, and neighbors; but only 12% reported that hearing difficulties limited their personal or social life.31 Thus, the presence of hearing impairment may or may not be perceived as causing activity limitations or participation restrictions. A study of 380 veterans with adult-onset SNHL and no prior hearing aid experience found that self-rated hearing difficulties were moderately correlated (p < 0.01) with activities and participation measured using the World Health Organization's (WHO) Disability Assessment Scale II (WHO-DAS II), which examines health status using the WHO ICF approach.32 The activities and participation sections of the WHO-DAS II ask questions such as how difficult is it "getting all the work done that you need to" and "joining in community activities". Because the degree of hearing impairment alone cannot predict its impact, it is important that both impairment and impact are measured when determining rehabilitation needs. Communication difficulties isolate people from families and friends, and social isolation significantly increases the risk of depression after stroke.33 Both hearing loss and aphasia are associated with social isolation, depression, and poor quality of life.32 34 35 36 37 Quality of life in people with aphasia after stroke is inversely associated with the severity of the disability in communication, emotion, and activity levels.36
Auditory Processing and Cognition
The American Academy of Audiology guidelines classify memory and attention as higher-order cognitive functions, distinguishing them from auditory processes such as auditory discrimination and auditory temporal processing.38 The integral role of memory and attention in the perception of speech and nonspeech sounds is noted in the British Society of Audiology Position Statement, which states that perception "results from both sensory activation (via the ear) and neural processing that integrates this bottom-up information with activity in other brain systems (e.g., vision, attention, memory)".39 (p.3) There is evidence for auditory memory and central auditory processing difficulties as well as peripheral hearing impairment in people with aphasia. Perceptual functions (ICF code: b156) and memory functions (ICF code: b144) are included in the comprehensive ICF core set for stroke.40 Auditory memory difficulties and altered auditory processing compared with controls have been reported poststroke.41 42 43 44 45 46 47 48 Functional effects of poor working memory and auditory processing difficulties include difficulty perceiving speech in background noise or with multiple talkers,49 which can affect participation in many everyday life activities. It may be difficult to tease out cognitive and language impairments resulting from a stroke,50 51 just as it may be difficult to separate auditory and linguistic processing difficulties.
Objective Assessment of Auditory Processing
Because of the difficulty assessing auditory processing in people with language difficulties, several studies have used objective, evoked potential measures of auditory processing and have shown longer evoked potential latencies and/or reduced response amplitudes for nonspeech and speech stimuli in people with aphasia compared with age-matched controls.43 44 46 47 Participants in these studies had left-hemisphere strokes. In the current case study, language and auditory processing were assessed using standardized tests, and obligatory cortical auditory evoked potentials (CAEPs) were recorded to provide an objective measurement of cortical auditory processing.
Case Example
A case example of a person (M) with poststroke aphasia is discussed here with information provided on contextual factors (environmental, personal) and functioning and disability (activity and participation, body functions and structures).
Contextual (Personal and Environmental) Factors
M was a physically active, self-employed, tertiary-educated male aged 37 years who had a left middle cerebral artery stroke 2.5 years previously. He has a supportive partner, but other family members live overseas. Although his coping style was not measured using a standardized assessment, his ongoing engagement with therapy and other group activities poststroke indicates an active coping style.52 He has been actively seeking employment but has not been in paid employment since his stroke.
Activity and Participation
M is participating in group therapy to improve his confidence and competence in speaking publicly. M perceives the greatest effects of his aphasia in the ICF participation area "work and employment" (ICF codes: d840–d859).4 Communication confidence measured using the Communication Confidence Rating Scale for Aphasia has improved since the early days after the stroke, but M's aphasia makes it difficult for him to participate in job interviews, and he reports that it is difficult to read new information quickly, a skill that is required in his work.53 M noted concerns about processing auditory information and speaking at the same time: "My brain gets filled up with noise and I can't speak." He reports that it is harder for him to speak when other people are talking or when background noise is present. Thus "speaking" (ICF code: d330), "communicating with–receiving spoken messages" (ICF code: d310), and "conversation" (ICF code: d350) are affected. Difficulty ignoring extraneous sounds can occur in people who have poor frequency selectivity and other psychoacoustic deficits associated with SNHL and/or auditory processing/selective attention/auditory streaming difficulties.54 55 56
Body Functions and Structures
Hearing and Auditory Processing
An audiological assessment was undertaken that included a peripheral hearing test and tests of central auditory processing. M had a normal pure tone audiogram with hearing thresholds of 15-dB HL or better (Fig. 1) and type A tympanograms, consistent with normal middle ear function bilaterally. Acoustic reflexes were present ipsilaterally at 80- to 100-dB HL at 500, 1,000, 2,000, and 4,000 Hz. Thus, peripheral auditory function was intact. Because M is relatively young and had not been exposed to loud noise in his work environment prestroke, it was anticipated that he would have good peripheral hearing.
Figure 1. M's pure tone audiogram showing hearing sensitivity within normal limits bilaterally. Abbreviations: ANSI, American National Standards Institute; HL, hearing level.
Auditory processing measures were selected that did not require verbal responses. These included computer-based behavioral assessments of frequency discrimination and temporal processing (backward masking) from a test battery standardized in the United Kingdom and an objective measure of auditory processing (cortical evoked potentials).57 58 59 The computer-based assessments were three-interval forced-choice tests from the Institute of Hearing Research STAR2 (System for Testing Auditory Response) test battery. The STAR2 1,000-Hz frequency discrimination (FD) and backward masking with 0-millisecond gap between signal and masker (BM0) subtests each consist of two tracks of 20 trials. Each trial involves sequential presentation of three stimuli: two identical, standard stimuli and a different, randomly ordered target. The listener identifies (by mouse-click) the odd one out. Successive trials vary the difference between the standard and target tones using a three-down/one-up adaptive staircase paradigm (illustrated in the sketch below). The subtests include two familiarization tracks of six trials each to familiarize the listener with the response paradigm and task demands. For four of these trials, the target is clearly discriminable; the target for the other two trials is set so that the participant is forced to guess. The criterion for passing the familiarization phase is that all the easily discriminable targets are identified correctly. A demonstration track consisting of five trials is used to introduce each task. If a participant provides an incorrect response on any of the first three relatively easy trials of a task, this is deemed an early failure; the responses are discarded and the task is restarted. For the FD task, the 200-millisecond target tone had a higher frequency than the two standard tones. For the BM0 task, a 20-millisecond pulse tone target occurs immediately before a block of noise centered at 1,000 Hz with a bandwidth of 800 Hz. Stimuli were presented from the laptop through Sennheiser HD25-II (Wedemark, Germany) headphones, accompanied by animated visual stimuli.
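To make the adaptive logic concrete, the following Python sketch simulates a single three-down/one-up track for the FD task. It is a minimal illustration under our own assumptions: the starting difference, step factor, and simulated listener are illustrative choices, not STAR2 parameters.

```python
import random

def simulate_listener(delta_hz, true_threshold_hz=3.0):
    """Toy three-interval listener: the probability of picking the odd
    tone grows with the frequency difference; chance performance is 1/3."""
    p_correct = 1/3 + (2/3) * min(delta_hz / (2 * true_threshold_hz), 1.0)
    return random.random() < p_correct

def three_down_one_up(start_delta_hz=50.0, step_factor=1.5, n_trials=20):
    """Run one adaptive track: the standard-target difference shrinks
    after three consecutive correct responses and grows after any
    error. Direction changes are logged as reversals."""
    delta, run = start_delta_hz, 0
    going_down, reversals = None, []
    for _ in range(n_trials):
        if simulate_listener(delta):
            run += 1
            if run == 3:                      # three in a row: make it harder
                run = 0
                if going_down is False:
                    reversals.append(delta)   # direction change: reversal
                going_down = True
                delta /= step_factor
        else:                                 # any error: make it easier
            run = 0
            if going_down is True:
                reversals.append(delta)
            going_down = False
            delta *= step_factor
    return delta, reversals
```

A three-down/one-up rule converges on the stimulus difference a listener detects approximately 79% of the time, which is why the final reversals are used to estimate the threshold, as described next.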
Poor performance on an auditory processing task is defined as a threshold more than two standard deviations poorer than the mean for the normative sample.38 M's FD threshold of 2.77 Hz (geometric mean of the last three reversals) exceeds the FD cutoff (mean plus 2 standard deviations) of 1.65 Hz for 10- to 11-year-olds in the United Kingdom STAR2 normative sample. M's BM0 threshold of 81.67 dB (arithmetic mean of the last three reversals) is slightly above (i.e., poorer than) the normative cutoff of 80.76 dB (mean plus 2 standard deviations) for the United Kingdom sample of 10- to 11-year-olds. Normative data for these tests are not available for adults, but discrimination and other auditory processing abilities plateau in late childhood or continue to improve into adulthood, depending on the task.61 62 Hence, M has greater spectral and temporal discrimination difficulties than would be expected for adults in their 30s with normal audiometric thresholds.
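Continuing the staircase sketch above, the threshold estimates and normative comparison reduce to a short computation. The reversal tracks below are hypothetical; the cutoff values are those quoted from the UK 10- to 11-year-old normative sample.

```python
from math import prod
from statistics import mean

def fd_threshold(reversals_hz):
    """FD threshold: geometric mean of the last three reversals (Hz)."""
    last3 = reversals_hz[-3:]
    return prod(last3) ** (1 / len(last3))

def bm0_threshold(reversals_db):
    """BM0 threshold: arithmetic mean of the last three reversals (dB)."""
    return mean(reversals_db[-3:])

# Hypothetical reversal values; for both measures, higher = poorer.
fd = fd_threshold([4.1, 2.2, 3.5, 2.4, 3.1])
bm0 = bm0_threshold([83.0, 80.5, 82.0, 81.5])
print(f"FD  {fd:.2f} Hz:", "poor" if fd > 1.65 else "within norms")
print(f"BM0 {bm0:.2f} dB:", "poor" if bm0 > 80.76 else "within norms")
```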
CAEPs were tested on two occasions separated by 6 months using three speech syllables (246-millisecond duration), /di/, /gi/, and /ti/, recorded by a female native speaker of New Zealand English. The speech syllables were presented at 65-dB sound pressure level via a loudspeaker on the right or left side (45 degrees), with continuous multitalker babble presented at 60-dB sound pressure level from the opposite side. Stimulus presentation order was randomized, with two runs of 150 stimuli for each stimulus and condition. The Neuroscan STIM (Compumedics, Charlotte, NC) system was used to present the speech stimuli with a 920-millisecond interstimulus interval. Neuroscan SCAN software (version 4.3) was used to record CAEPs using one electroencephalogram (EEG) recording channel, with gold 10-mm disk electrodes placed at Cz referenced to M2. The ground electrode was located on the forehead, and eye blink activity was monitored using an electrode placed above the left eye, referenced to M2. Electrode impedances were kept under 3 to 5 kΩ. EEG was amplified with a gain of 50,000 and sampled at a rate of 1,000 Hz. EEG epochs with −100-millisecond prestimulus to 600-millisecond poststimulus time windows were extracted post hoc from the continuous file. Before averaging, responses were digitally low-pass filtered at 30 Hz and baseline corrected. Recordings with eye blink artifacts were corrected using the regression-based ocular artifact reduction function in the Neuroscan software. The artifact rejection threshold was set in the range ±50 to ±75 µV. Short breaks were given between testing conditions if needed. M was tested while seated in a comfortable reclining chair, watching a captioned movie in a double-walled sound-attenuating booth.
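The epoching, filtering, baseline correction, artifact handling, and averaging steps just described follow a standard evoked potential pipeline. The Python sketch below illustrates that pipeline with NumPy/SciPy under simplifying assumptions: it substitutes simple amplitude-based epoch rejection for the regression-based ocular correction, and all function and parameter names are ours, not the Neuroscan software's.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def average_caep(eeg_uv, onsets, fs=1000, pre_ms=100, post_ms=600,
                 lp_hz=30.0, reject_uv=75.0):
    """Average a CAEP from continuous single-channel EEG (µV).

    eeg_uv: 1-D array of continuous EEG; onsets: stimulus onset sample
    indices. The signal is low-pass filtered at 30 Hz, each epoch is
    baseline corrected to the 100-ms prestimulus window, and epochs
    with any sample exceeding ±reject_uv are discarded.
    """
    b, a = butter(4, lp_hz, btype="low", fs=fs)   # zero-phase 30-Hz low-pass
    eeg_uv = filtfilt(b, a, eeg_uv)

    pre, post = int(pre_ms * fs / 1000), int(post_ms * fs / 1000)
    kept = []
    for onset in onsets:
        epoch = eeg_uv[onset - pre:onset + post].copy()
        epoch -= epoch[:pre].mean()               # baseline correction
        if np.abs(epoch).max() <= reject_uv:      # artifact rejection
            kept.append(epoch)
    return np.mean(kept, axis=0), len(kept)       # average, n accepted epochs
```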
Amplitude and latency values for N1 peaks were determined for each condition. The amplitude of N1 was identified as the largest negative deflection between 80 and 160 milliseconds after stimulus onset. Latency of the peak was measured at the center of the peak. When the waveform contained a double peak of equal amplitude or a peak with a plateau, the latency was measured at the midpoint of the peak. Responses were determined with the agreement of two judges. CAEPs were compared with results for a normal hearing control group consisting of 11 adults (8 women, 3 men) aged 18 to 52 years (mean 31.3, standard deviation 9.7).
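Peak picking as described (largest negative deflection between 80 and 160 milliseconds) is a windowed minimum search, and the group comparison reported in the next paragraph uses a mean ± 2 standard deviation criterion. A minimal sketch of both follows, using the window bounds from the text and control values from Table 1:

```python
import numpy as np

def find_n1(evoked_uv, fs=1000, pre_ms=100, window_ms=(80, 160)):
    """Locate N1 as the largest negative deflection 80-160 ms after
    stimulus onset in a baseline-corrected average (the prestimulus
    samples are included at the start of the array)."""
    start = int((pre_ms + window_ms[0]) * fs / 1000)
    stop = int((pre_ms + window_ms[1]) * fs / 1000)
    idx = start + int(np.argmin(evoked_uv[start:stop]))
    return evoked_uv[idx], idx * 1000 / fs - pre_ms  # amplitude, latency (ms)

def outside_norms(value, norm_mean, norm_sd, n_sd=2):
    """True if a measure lies more than n_sd SDs from the control mean."""
    return abs(value - norm_mean) > n_sd * norm_sd

# M's /di/ N1 latency (right side, baseline) vs. control norms (Table 1):
print(outside_norms(156, norm_mean=135.6, norm_sd=6.2))  # True: >2 SD later
```

Note that this simple picker returns the single most negative sample; the double-peak and plateau cases described above require the midpoint rule applied by the two judges.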
Fig. 2 shows that M had robust P1-N1-P2 CAEP responses for the /gi/ and /di/ stimuli and a reduced amplitude response for /ti/ in the approximate latency range 85 to 250 milliseconds. P2 amplitude was smaller than the norm for all stimuli, but particularly for /ti/. P1, N1, and P2 latencies were delayed in M compared with the norm, by ∼15 to 20 milliseconds in the case of N1 (Table 1). N1 and P2 latencies are more than 2 standard deviations longer than the norm, and P2 amplitudes are more than 2 standard deviations smaller than the norm (Table 1). The findings of delayed N1 and reduced CAEP amplitudes are consistent with the results of Ofek et al for a group of participants with aphasia following left-hemisphere stroke.44 Later N1 and smaller CAEP amplitudes are associated with reduced speech perception in noise in the nonstroke population and are consistent with the behavioral findings of impaired auditory processing in this case.63 Amplitudes were quite stable between recordings (baseline and +6 months), but there was some enhancement of P2 amplitude when M was retested. In the 6 months between CAEP assessments, M participated in group communication therapy.
Figure 2. Cortical auditory evoked potential (CAEP) waveforms recorded at the vertex (Cz) for speech stimuli presented in babble noise at +5-dB signal-to-noise ratio for M (darker lines) and for n = 11 control participants with no hearing or neurologic impairment. The left-hand column shows CAEPs for right-side stimulus presentation; the right-hand column shows recordings for the left side. Recordings were made on two occasions for M, before and after 6 months of group therapy. Abbreviation: NH, normal hearing.
Table 1. N1 and P2 Amplitudes and Latencies for M and for Normal-Hearing (NH) Controls (n = 11) Recorded at Cz for Right- and Left-Side Stimulus Presentation.

| Stimulus/peak | M baseline: amplitude (μV) | M baseline: latency (ms) | M +6 mo: amplitude (μV) | M +6 mo: latency (ms) | NH controls: amplitude, μV (SD) | NH controls: latency, ms (SD) |
|---|---|---|---|---|---|---|
| Right side | | | | | | |
| /di/ N1 | −5.9 | 156 | −6.6 | 157 | −5.6 (2.3) | 135.6 (6.2) |
| /di/ P2 | −1.0 | 213 | −0.9 | 237 | 1.8 (1.1) | 215.4 (12.7) |
| /gi/ N1 | −5.1 | 162 | −5.1 | 163 | −4.3 (1.3) | 149.7 (6.1) |
| /gi/ P2 | −2.2 | 213 | −1.6 | 220 | 0.6 (1.2) | 227.9 (13.2) |
| /ti/ N1 | −3.2 | 148 | −3.7 | 137 | −3.5 (1.4) | 124.1 (11.5) |
| /ti/ P2 | −1.8 | 194 | −2.3 | 198 | 1.5 (1.0) | 201.7 (16.3) |
| Left side | | | | | | |
| /di/ N1 | −4.5 | 160 | −5.7 | 155 | −6.6 (2.2) | 137.7 (6.5) |
| /di/ P2 | −0.9 | 223 | −0.9 | 244 | 1.3 (1.9) | 215.0 (9.2) |
| /gi/ N1 | −3.8 | 161 | −4.4 | 167 | −4.3 (1.9) | 147.1 (8.1) |
| /gi/ P2 | 0.0 | 236 | −0.6 | 256 | 0.9 (1.3) | 231.8 (15.8) |
| /ti/ N1 | −3.2 | 160 | −2.7 | 158 | −3.2 (1.3) | 125.1 (12.8) |
| /ti/ P2 | −0.5 | 201 | −1.2 | 194 | 0.9 (1.7) | 198.8 (23.8) |

Abbreviations: NH, normal hearing; SD, standard deviation.
Table 2. Comparison of CAT Scores for M and Nonaphasic Subjects.

| CAT subtest | M score | Nonaphasic mean | Nonaphasic SD | Nonaphasic mean − 1 SD |
|---|---|---|---|---|
| Comprehension: spoken sentences | 25 | 30.17 | 1.85 | 28.32 |
| Comprehension: written sentences | 28 | 29.78 | 2.50 | 27.28 |
| Word fluency | 11 | 32.00 | 10.10 | 21.90 |

Abbreviations: CAT, Comprehensive Aphasia Test; SD, standard deviation.
Language
Language was assessed using two commonly used clinical tests: the Boston Naming Test (BNT), which assesses the ability to name pictured objects, and the Comprehension of Spoken Sentences subtest from the Comprehensive Aphasia Test (CAT). The 60-item BNT was used to obtain a detailed examination of naming abilities. The line drawings in the BNT depict objects that become less familiar as the test progresses.64 The test contains words that are high and low frequency in printed text.65 When the participant fails to produce a response, a stimulus cue is given (e.g., "Tell me the name of the picture" or "Tell me another name for that"). A cueing hierarchy is then used to help the participant produce the correct name for each picture: if the participant fails to produce the correct response spontaneously within the 20-second time frame, a semantic cue is provided (e.g., "a piece of furniture" for bed); if the semantic cue does not elicit the correct response, a phonemic cue based on the initial phoneme is provided (e.g., /b/ for bed). The multiple-choice cue is provided only at the end of the test, for items not named correctly with the other cues. The final score is the number of items named correctly without a cue plus the number named correctly following a semantic cue; full credit is given in both cases. Because M produced a response for every picture (although not always the correct name), the discontinuation criterion (eight consecutive failures) did not apply. M spontaneously provided the correct response for the first 28 items, except for item 9 (saw). When presented with the picture of the saw, M spontaneously but incorrectly named it as an axle; a semantic or phonemic cue should have been provided at this point but was inadvertently omitted.
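The cueing hierarchy and scoring rule can be restated schematically. The Python sketch below is our summary of the procedure described above, not part of the published test; response checking is left abstract as a dictionary of outcomes per cue level.

```python
def score_bnt_item(outcome):
    """Apply the BNT cueing hierarchy to one item.

    `outcome` maps cue level ("spontaneous", "semantic", "phonemic")
    to whether the item was then named correctly. Returns (credit,
    deepest cue level reached). Only spontaneous and semantic-cued
    correct responses earn credit toward the final score."""
    if outcome.get("spontaneous"):   # correct within the 20-second limit
        return 1, "spontaneous"
    if outcome.get("semantic"):      # e.g., "a piece of furniture"
        return 1, "semantic"
    if outcome.get("phonemic"):      # e.g., initial /b/; no credit
        return 0, "phonemic"
    return 0, "multiple-choice"      # deferred to end of test; no credit

items = [
    {"spontaneous": True},
    {"spontaneous": False, "semantic": True},
    {"spontaneous": False, "semantic": False, "phonemic": True},
]
total = sum(score_bnt_item(o)[0] for o in items)  # 2 of 3 items credited
```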
Some of the BNT items are culturally biased. Previous studies using the BNT with an older New Zealand sample showed errors with three BNT items: pretzel, beaver, and protractor.66 M spontaneously provided the correct response for two of these three culturally biased items (pretzel, protractor); he was able to choose the correct response for beaver only when the multiple-choice cue was provided. He had trouble naming low-frequency items but often knew what the items were and knew semantic information relating to them (e.g., "I don't know what it's called but I know doctors use it, normally around the neck" for stethoscope). He was able to name a few items with the help of phonemic cues. For the remaining items for which M failed to provide a correct response, the multiple-choice cue was used.
M frequently requested the phonemic cue or responded with "I don't know what it is/I don't remember it." He correctly identified the pretzel within 16 seconds by describing the item first: "It is bread and can't think of the name, you get it when you go to a concert or something like that pretzel." M failed to identify the compass when presented with the multiple-choice options and selected protractor instead; however, when presented with the picture of the protractor, he correctly named it. He spontaneously identified the beaver as an otter but was able to select the correct response when given the multiple-choice cue. When asked, "Could you think of another name?" M responded, "Umm, it's in Australia." He was given a semantic cue ("It's an animal") and a phonemic cue ("And it starts with /b/"), to which he replied, "No not a badger. Can you give me a little bit more clue?" At the end of testing, when M was given the multiple-choice picture with the options raccoon, squirrel, rat, and beaver, he correctly selected beaver. M identified the tripod as an easel even when the multiple-choice options were given. Only spontaneously named items and correct responses after a semantic cue are included in the final score. Overall, M spontaneously named 40/60 (66.7%) items on the BNT, and an additional 9 items (15%) were named with a phonemic cue. Eight items were identified when a multiple-choice cue was provided. Even if allowance is made for missing saw (administration error) and beaver (cultural bias), M's performance was below the norm for 35- to 44-year-olds (mean 55.5/60, standard deviation 3.9) and showed a mild naming deficit.67
The CAT Comprehension of Spoken Sentences subtest involves listening to a sentence and selecting the matching picture from a choice of four (three distractor pictures and one correct picture). M correctly selected 12 of 16 pictures within 5 seconds of the stimulus sentence and selected 1 further picture correctly, but with a delay. Most of the errors were made from sentence 10 onward, where the test uses noncanonical sentences with more complex structures (e.g., "The policeman is painted by the dancer").68 M scored 1 for the accurate but delayed response (over 5 seconds), giving a final score of 25/32, which is poorer than nonaphasic performance (Table 2). The CAT Comprehension of Written Sentences subtest is similar to spoken sentence comprehension, but the written subtest required M to read a written sentence and choose the correct picture from a choice of four. Performance for written comprehension (14/16, final score 28/32) was slightly better than for the spoken subtest and falls within 1 standard deviation of the result for nonaphasic controls (Table 2). M made errors on the noncanonical sentences.
A similar pattern of difficulty on the auditory and written sentence comprehension tasks is consistent with a central impairment of sentence comprehension.69 Performance on both subtests showed that M experienced difficulties comprehending noncanonical sentences, in which the syntactic structure of the sentence must be recognized before its meaning can be understood. M often chose the reversible distractor option for reversible target sentences in both the written and spoken tests (e.g., where the target sentence was "The nurse is chased by the butcher," he selected "The butcher is chased by the nurse").
Another measure of naming is verbal fluency. This was assessed using the two Word Fluency subtests from the CAT: "animals" and "words beginning with /s/." M was able to name 9 animals in 1 minute and 2 words beginning with /s/ in 1 minute. Results were more than 1 standard deviation below the nonaphasic mean (Table 2).
Discussion
The American Speech-Language-Hearing Association provides a useful framework based on the ICF for considering the effects of aphasia on body functions and structures, activities, and participation.76 Table 3 summarizes M's findings using this framework. Behavioral (FD, BM0) and electrophysiological (CAEP latencies and amplitudes) measures of auditory processing indicated that M had spectrotemporal discrimination difficulties for nonspeech sounds and that his cortical auditory processing of simple consonant–vowel speech stimuli was slow compared with nonaphasic control adults. M complained that he had difficulty speaking when there were other auditory distractions. Poor auditory selectivity can have a negative effect on memory function,70 71 72 which could contribute to M's difficulties with talking and listening in background noise. The BNT and CAT assessments indicated impaired confrontation naming, verbal fluency, and written and auditory sentence comprehension. These auditory and language difficulties likely play a role in M's perception that the greatest impact of his aphasia is in the ICF participation area work and employment, because obtaining and keeping a job typically involves conversations with multiple unfamiliar speakers, sometimes in challenging listening environments with background noise or competing talkers. The literature indicates that younger stroke patients (18 to 65 years of age) have fewer strokes than older patients but are more likely to live longer with disability, have dependents, and be engaged in employment when they have a stroke.60 Thus, rehabilitation for poststroke aphasia in younger people in particular needs to address vocational needs. The review by Graham et al (1,612 participants in total, 415 with aphasia) found that the average rate of successful return to work for young stroke survivors with aphasia was low, at only 28.4% compared with 44.7% across all young stroke survivors.60
Table 3. Summary of M's Findings Based on International Classification of Functioning, Disability and Health Categories.

| ICF component | Findings |
|---|---|
| Body functions and structures | • Reduced auditory discrimination ability and slower cortical auditory processing • Reduced word fluency • Impaired confrontation naming (Boston Naming Test) • Impaired comprehension of spoken sentences (Comprehensive Aphasia Test) |
| Activities (tasks engaged in by the individual) | • Slow rate of reading new written materials • Improving confidence in giving public speeches |
| Participation in life situations specific to the individual | • Difficulty communicating orally when competing noise/speech is present • Difficulty participating in job interviews |
| Environmental and personal factors | • High level of motivation • Supportive partner, strong work and education history • Engagement in weekly group communication therapy |
Clinical Implications and Future Directions
The results for this case example and previous studies suggest that it may be worthwhile to assess auditory processing more routinely in people with aphasia. A hearing test to check pure tone sensitivity is recommended for older people who have poststroke aphasia and who are likely to have age-related SNHL that could limit access to auditory information during aphasia therapy. Hearing loss may co-occur with auditory processing difficulties in older stroke survivors. In younger people experiencing a stroke, such as M, auditory processing difficulties could be present in the absence of peripheral hearing loss. An auditory processing assessment involving nonlinguistic stimuli and objective measures of cortical auditory processing, such as the assessments used with M, is not time-consuming (∼30 to 40 minutes in addition to the hearing test) and can clarify whether there are auditory perceptual difficulties for nonspeech sounds in addition to the speech processing difficulties associated with aphasia. A person with auditory processing difficulties as a result of a stroke may benefit from technological solutions that improve the signal-to-noise ratio, such as remote microphone hearing aids.46 73 74 75 There is also evidence from a randomized controlled trial that auditory temporal training improves auditory comprehension for adults who have had a left-hemisphere stroke.23 Further research is needed to establish the incidence and specific nature of auditory processing difficulties in people with poststroke aphasia and to determine the feasibility of routine clinical assessment of auditory processing in this population.
Acknowledgment
We are grateful to M for his participation in this research.
References
- 1. American Speech-Language-Hearing Association (ASHA). International Classification of Functioning, Disability, and Health (ICF). n.d. Available at: http://www.asha.org/slp/icf/. Accessed March 24, 2016.
- 2. Kagan A. A-FROM in action at the Aphasia Institute. Semin Speech Lang. 2011;32(3):216–228. doi:10.1055/s-0031-1286176.
- 3. Kagan A, Simmons-Mackie N, Rowland A, et al. Counting what counts: a framework for capturing real-life outcomes of aphasia intervention. Aphasiology. 2008;22(3):258–280.
- 4. Brandenburg C, Worrall L, Rodriguez A, Bagraith K. Crosswalk of participation self-report measures for aphasia to the ICF: what content is being measured? Disabil Rehabil. 2015;37(13):1113–1124. doi:10.3109/09638288.2014.955132.
- 5. Skolarus LE, Burke JF, Brown DL, Freedman VA. Understanding stroke survivorship: expanding the concept of poststroke disability. Stroke. 2014;45(1):224–230. doi:10.1161/STROKEAHA.113.002874.
- 6. Granberg S, Pronk M, Swanepoel W, et al. The ICF core sets for hearing loss project: functioning and disability from the patient perspective. Int J Audiol. 2014;53(11):777–786. doi:10.3109/14992027.2014.938370.
- 7. Granberg S, Swanepoel W, Englund U, Möller C, Danermark B. The ICF core sets for hearing loss project: international expert survey on functioning and disability of adults with hearing loss using the International Classification of Functioning, Disability, and Health (ICF). Int J Audiol. 2014;53(8):497–506. doi:10.3109/14992027.2014.900196.
- 8. Sharma M, Purdy SC. Management of auditory processing disorder for school-aged children: applying the ICF (International Classification of Functioning, Disability and Health) framework. San Diego, CA: Plural Publishing; 2013:495–530.
- 9. Grawburg M, Howe T, Worrall L, Scarinci N. Describing the impact of aphasia on close family members using the ICF framework. Disabil Rehabil. 2014;36(14):1184–1195. doi:10.3109/09638288.2013.834984.
- 10. Simmons-Mackie N, Kagan A. Application of the ICF in aphasia. Semin Speech Lang. 2007;28(4):244–253. doi:10.1055/s-2007-986521.
- 11. National Stroke Foundation. National Stroke Audit—Acute Services Clinical Audit Report. Melbourne, Australia; 2013.
- 12. Child N, Barber PA, Fink J, Jones S, Voges K, Vivian M. New Zealand National Acute Stroke Services Audit 2009: organisation of acute stroke services in New Zealand. N Z Med J. 2011;124(1340):13–20.
- 13. Zahuranec DB, Lisabeth LD, Sánchez BN, et al. Intracerebral hemorrhage mortality is not changing despite declining incidence. Neurology. 2014;82(24):2180–2186. doi:10.1212/WNL.0000000000000519.
- 14. Khaw KT. Epidemiological aspects of ageing. Philos Trans R Soc Lond B Biol Sci. 1997;352(1363):1829–1835. doi:10.1098/rstb.1997.0168.
- 15. Krishnamurthi RV, Moran AE, Feigin VL, et al. Stroke prevalence, mortality and disability-adjusted life years in adults aged 20-64 years in 1990-2013: data from the Global Burden of Disease 2013 Study. Neuroepidemiology. 2015;45(3):190–202. doi:10.1159/000441098.
- 16. Hung WW, Ross JS, Boockvar KS, Siu AL. Recent trends in chronic disease, impairment and disability among older adults in the United States. BMC Geriatr. 2011;11:47. doi:10.1186/1471-2318-11-47.
- 17. Engelter ST, Gostynski M, Papa S, et al. Epidemiology of aphasia attributable to first ischemic stroke: incidence, severity, fluency, etiology, and thrombolysis. Stroke. 2006;37(6):1379–1384. doi:10.1161/01.STR.0000221815.64093.8c.
- 18. Parr S, et al. Talking about Aphasia: Living with Loss of Language after Stroke. Buckingham, England: Open University Press; 1997.
- 19. Kadojić D, Bijelić BR, Radanović R, Porobić M, Rimac J, Dikanović M. Aphasia in patients with ischemic stroke. Acta Clin Croat. 2012;51(2):221–225.
- 20. Walker JP, Joseph L, Goodman J. The production of linguistic prosody in subjects with aphasia. Clin Linguist Phon. 2009;23(7):529–549. doi:10.1080/02699200902946944.
- 21. Nicholson KG, Baum S, Cuddy LL, Munhall KG. A case of impaired auditory and visual speech prosody perception after right hemisphere damage. Neurocase. 2002;8(4):314–322. doi:10.1076/neur.8.3.314.16195.
- 22. Robson H, Grube M, Lambon Ralph MA, Griffiths TD, Sage K. Fundamental deficits of auditory perception in Wernicke's aphasia. Cortex. 2013;49(7):1808–1822. doi:10.1016/j.cortex.2012.11.012.
- 23. Szelag E, Lewandowska M, Wolak T, et al. Training in rapid auditory processing ameliorates auditory comprehension in aphasic patients: a randomized controlled pilot study. J Neurol Sci. 2014;338(1–2):77–86. doi:10.1016/j.jns.2013.12.020.
- 24. Campbell P, Pollock A, Brady M. Should hearing be screened in the first 30 days after an acute stroke? A systematic review. Int J Stroke. 2014;9:38.
- 25. Läßig AK, Kreter S, Nospes S, Keilmann A. [Hearing disorders in aphasia]. Laryngorhinootologie. 2013;92(8):531–535. doi:10.1055/s-0033-1347201.
- 26. Onoue SS, Ortiz KZ, Minett TS, Borges AC. Audiological findings in aphasic patients after stroke. Einstein (Sao Paulo). 2014;12(4):433–439. doi:10.1590/S1679-45082014AO3119.
- 27. Formby C, Phillips DE, Thomas RG. Hearing loss among stroke patients. Ear Hear. 1987;8(6):326–332. doi:10.1097/00003446-198712000-00007.
- 28. Gates GA, Cobb JL, D'Agostino RB, Wolf PA. The relation of hearing in the elderly to the presence of cardiovascular disease and cardiovascular risk factors. Arch Otolaryngol Head Neck Surg. 1993;119(2):156–161. doi:10.1001/archotol.1993.01880140038006.
- 29. Lin HC, Chao PZ, Lee HC. Sudden sensorineural hearing loss increases the risk of stroke: a 5-year follow-up study. Stroke. 2008;39(10):2744–2748. doi:10.1161/STROKEAHA.108.519090.
- 30. Bamiou DE. Hearing disorders in stroke. Handb Clin Neurol. 2015;129:633–647. doi:10.1016/B978-0-444-62630-1.00035-4.
- 31. Pinto JM, Kern DW, Wroblewski KE, Chen RC, Schumm LP, McClintock MK. Sensory function: insights from Wave 2 of the National Social Life, Health, and Aging Project. J Gerontol B Psychol Sci Soc Sci. 2014;69(Suppl 2):S144–S153. doi:10.1093/geronb/gbu102.
- 32. Chisolm TH, Abrams HB, McArdle R, Wilson RH, Doyle PJ. The WHO-DAS II: psychometric properties in the measurement of functional health status in adults with acquired hearing loss. Trends Amplif. 2005;9(3):111–126. doi:10.1177/108471380500900303.
- 33. Haley WE, Roth DL, Kissela B, Perkins M, Howard G. Quality of life after stroke: a prospective longitudinal study. Qual Life Res. 2011;20(6):799–806. doi:10.1007/s11136-010-9810-6.
- 34. Bainbridge KE, Wallhagen MI. Hearing loss in an aging American population: extent, impact, and management. Annu Rev Public Health. 2014;35:139–152. doi:10.1146/annurev-publhealth-032013-182510.
- 35. Lee H, Lee Y, Choi H, Pyun SB. Community integration and quality of life in aphasia after stroke. Yonsei Med J. 2015;56(6):1694–1702. doi:10.3349/ymj.2015.56.6.1694.
- 36. Hilari K, Byng S. Health-related quality of life in people with severe aphasia. Int J Lang Commun Disord. 2009;44(2):193–205. doi:10.1080/13682820802008820.
- 37. Brown K, Davidson B, Worrall LE, Howe T. "Making a good time": the role of friendship in living successfully with aphasia. Int J Speech Lang Pathol. 2013;15(2):165–175. doi:10.3109/17549507.2012.692814.
- 38. American Academy of Audiology. Clinical Practice Guidelines: Diagnosis, Treatment and Management of Children and Adults with Central Auditory Processing Disorder. 2010. Available at: www.audiology.org/publications-resources/document-library/central-auditory-processing-disorder. Accessed June 27, 2016.
- 39. British Society of Audiology. Position Statement: Auditory Processing Disorder (APD). 2011. Available at: http://www.thebsa.org.uk/wp-content/uploads/2014/04/BSA_APD_PositionPaper_31March11_FINAL.pdf. Accessed June 27, 2016.
- 40. Geyh S, Cieza A, Schouten J, et al. ICF Core Sets for stroke. J Rehabil Med. 2004;(44 Suppl):135–141.
- 41. Salis C, Kelly H, Code C. Assessment and treatment of short-term and working memory impairments in stroke aphasia: a practical tutorial. Int J Lang Commun Disord. 2015;50(6):721–736. doi:10.1111/1460-6984.12172.
- 42. Wright HH, Shisler RJ. Working memory in aphasia: theory, measures, and clinical implications. Am J Speech Lang Pathol. 2005;14(2):107–118. doi:10.1044/1058-0360(2005/012).
- 43. Ilvonen T, Kujala T, Kozou H, et al. The processing of speech and non-speech sounds in aphasic patients as reflected by the mismatch negativity (MMN). Neurosci Lett. 2004;366(3):235–240. doi:10.1016/j.neulet.2004.05.024.
- 44. Ofek E, Purdy SC, Ali G, Webster T, Gharahdaghi N, McCann CM. Processing of emotional words after stroke: an electrophysiological study. Clin Neurophysiol. 2013;124(9):1771–1778. doi:10.1016/j.clinph.2013.03.005.
- 45. Oron A, Szymaszek A, Szelag E. Temporal information processing as a basis for auditory comprehension: clinical evidence from aphasic patients. Int J Lang Commun Disord. 2015;50(5):604–615. doi:10.1111/1460-6984.12160.
- 46. Talvitie SS, Matilainen LE, Pekkonen E, Alku P, May PJ, Tiitinen H. The effects of cortical ischemic stroke on auditory processing in humans as indexed by transient brain responses. Clin Neurophysiol. 2010;121(6):912–920. doi:10.1016/j.clinph.2010.03.003.
- 47. Alvarenga KdeF, Lamônica DC, Costa Filho OA, Banhara MR, Oliveira DT, Campo MA. Electrophysiological study of the central and peripheral hearing system of aphasic individuals. Arq Neuropsiquiatr. 2005;63(1):104–109. doi:10.1590/s0004-282x2005000100019.
- 48. Saygin AP, Dick F, Wilson SM, Dronkers NF, Bates E. Neural resources for processing language and environmental sounds: evidence from aphasia. Brain. 2003;126(Pt 4):928–945. doi:10.1093/brain/awg082.
- 49. Musiek FE, Chermak GD. Psychophysical and behavioral peripheral and central auditory tests. Amsterdam, the Netherlands: Elsevier; 2015:313–332.
- 50. Gibson L, MacLennan WJ, Gray C, Pentland B. Evaluation of a comprehensive assessment battery for stroke patients. Int J Rehabil Res. 1991;14(2):93–100. doi:10.1097/00004356-199106000-00001.
- 51. Lee B, Pyun SB. Characteristics of cognitive impairment in patients with post-stroke aphasia. Ann Rehabil Med. 2014;38(6):759–765. doi:10.5535/arm.2014.38.6.759.
- 52. Aben L, Busschbach JJ, Ponds RW, Ribbers GM. Memory self-efficacy and psychosocial factors in stroke. J Rehabil Med. 2008;40(8):681–683. doi:10.2340/16501977-0227.
- 53. Cherney LR, Babbitt EM, Semik P, Heinemann AW. Psychometric properties of the Communication Confidence Rating Scale for Aphasia (CCRSA): phase 1. Top Stroke Rehabil. 2011;18(4):352–360. doi:10.1310/tsr1804-352.
- 54. Kortlang S, Mauermann M, Ewert SD. Suprathreshold auditory processing deficits in noise: effects of hearing loss and age. Hear Res. 2016;331:27–40. doi:10.1016/j.heares.2015.10.004.
- 55. Münte TF, Spring DK, Szycik GR, Noesselt T. Electrophysiological attention effects in a virtual cocktail-party setting. Brain Res. 2010;1307:78–88. doi:10.1016/j.brainres.2009.10.044.
- 56. Lagacé J, Jutras B, Gagné JP. Auditory processing disorder and speech perception problems in noise: finding the underlying origin. Am J Audiol. 2010;19(1):17–25. doi:10.1044/1059-0889(2010/09-0022).
- 57. Barry JG, Ferguson MA, Moore DR. Making sense of listening: the IMAP test battery. J Vis Exp. 2010;(44):2139. doi:10.3791/2139.
- 58. Anderson S, Chandrasekaran B, Yi HG, Kraus N. Cortical-evoked potentials reflect speech-in-noise perception in children. Eur J Neurosci. 2010;32(8):1407–1413. doi:10.1111/j.1460-9568.2010.07409.x.
- 59. Sharma M, Purdy SC, Kelly AS. The contribution of speech-evoked cortical auditory evoked potentials to the diagnosis and measurement of intervention outcomes in children with auditory processing disorder. Semin Hear. 2014;35(1):51–64.
- 60. Graham JR, Pereira S, Teasell R. Aphasia and return to work in younger stroke survivors. Aphasiology. 2011;25(8).
- 61. Buss E, He S, Grose JH, Hall JW III. The monaural temporal window based on masking period pattern data in school-aged children and adults. J Acoust Soc Am. 2013;133(3):1586–1597. doi:10.1121/1.4788983.
- 62. Neijenhuis K, Snik A, Priester G, van Kordenoordt S, van den Broek P. Age effects and normative data on a Dutch test battery for auditory processing disorders. Int J Audiol. 2002;41(6):334–346. doi:10.3109/14992020209090408.
- 63. Billings CJ, McMillan GP, Penman TM, Gille SM. Predicting perception in noise using cortical auditory evoked potentials. J Assoc Res Otolaryngol. 2013;14(6):891–903. doi:10.1007/s10162-013-0415-y.
- 64. Kaplan E, Goodglass H, Weintraub S. Boston Naming Test-2 (BNT-2). 2nd ed. Austin, TX: Pro-Ed; 2001.
- 65. Brookshire RH, Nicholas LE. Relationship of word frequency in printed materials and judgments of word frequency in daily life to Boston Naming Test performance of aphasic adults. Clinical Aphasiology. 1995;23:107–119.
- 66. Barker-Collo S. Boston Naming Test performance of older New Zealand adults. Aphasiology. 2007;21(12).
- 67. Spreen O, Strauss E. A Compendium of Neuropsychological Tests: Administration, Norms, and Commentary. 2nd ed. New York, NY: Oxford University Press; 1998.
- 68. Hilpert M. Construction Grammar and Its Application to English. Edinburgh Textbooks on the English Language Advanced. Edinburgh, Scotland: Edinburgh University Press; 2014.
- 69. DeDe G. Effects of word frequency and modality on sentence comprehension impairments in people with aphasia. Am J Speech Lang Pathol. 2012;21(2):S103–S114. doi:10.1044/1058-0360(2012/11-0082).
- 70. Jones DM, Hughes RW, Macken WJ. Auditory distraction and serial memory: the avoidable and the ineluctable. Noise Health. 2010;12(49):201–209. doi:10.4103/1463-1741.70497.
- 71. Heinrich A, Schneider BA, Craik FI. Investigating the influence of continuous babble on auditory short-term memory performance. Q J Exp Psychol (Hove). 2008;61(5):735–751. doi:10.1080/17470210701402372.
- 72. Zeamer C, Fox Tree JE. The process of auditory distraction: disrupted attention and impaired recall in a simulated lecture environment. J Exp Psychol Learn Mem Cogn. 2013;39(5):1463–1472. doi:10.1037/a0032190.
- 73. Bamiou DE, Musiek FE, Stow I, et al. Auditory temporal processing deficits in patients with insular stroke. Neurology. 2006;67(4):614–619. doi:10.1212/01.wnl.0000230197.40410.db.
- 74. Johnston KN, John AB, Kreisman NV, Hall JW III, Crandell CC. Multiple benefits of personal FM system use by children with auditory processing disorder (APD). Int J Audiol. 2009;48(6):371–383. doi:10.1080/14992020802687516.
- 75. Chisolm TH, Noe CM, McArdle R, Abrams H. Evidence for the use of hearing assistive technology by adults: the role of the FM system. Trends Amplif. 2007;11(2):73–89. doi:10.1177/1084713807300879.
- 76. American Speech-Language-Hearing Association. Person-Centered Focus on Function: Aphasia. Available at: http://www.asha.org/uploadedFiles/ICF-Aphasia.pdf. Accessed May 23, 2016.
