Abstract
Object naming impairments, or anomias, are the most frequent symptom in aphasia, and can be caused by a variety of underlying neurocognitive mechanisms. Anomia in neurodegenerative or primary progressive aphasias (PPA) often appears to be based on taxonomic blurring of word meaning: words such as “dog” and “cat” are still recognized generically as referring to animals, but are no longer conceptually differentiated from each other, leading to coordinate errors in word-object matching. This blurring is the hallmark symptom of the “semantic variant” of PPA, in which individuals invariably show focal atrophy in the left anterior temporal lobe. In this study we used eye tracking to characterize information processing online (in real time) as non-aphasic controls, semantic and non-semantic PPA participants completed a word-to-object matching task. All participants (including controls) showed taxonomic capture of gaze, spending more time viewing foils that were from the same category as the target compared to unrelated foils, but capture was more extreme in the semantic PPA group. The semantic group showed heightened capture even on trials where they ultimately pointed to the correct target, demonstrating the superiority of eye movements over traditional testing methods in detecting subtle processing impairments. Heightened capture was primarily driven by a tendency to direct gaze back and forth, repeatedly, between a set of related foils on each trial, a behavior almost never shown by controls or non-semantic participants. This suggests semantic PPA participants were accumulating and weighing evidence for a probabilistic rather than definitive mapping between the noun and several candidate objects. Neurodegeneration in PPA thus appears to distort lexical concepts prior to extinguishing them altogether, causing uncertainty in recognition and word-object matching.
Keywords: Primary Progressive Aphasia, Eye tracking, Visual World Paradigm, Taxonomic, Semantic Interference
1. Introduction
1.1. Word-object linkage
We frequently employ the ability to link words with objects in everyday life. For example, we can utilize nouns (e.g. a shopping list) to sort through objects in visually complex environments (the grocery store), in order to accomplish our goals (preparing dinner). Although seemingly simple, from an information processing standpoint this ability is remarkable: humans are able to recognize a seemingly limitless number of common objects, and to map them onto their respective nouns with great precision. This capacity dwarfs that of non-human species (dogs, parrots, apes, etc), the most gifted of which have an “object vocabulary” in the hundreds to low thousands (Kaminski, Call, & Fischer, 2004; Lyn, 2007; Pepperberg, 2002; Pilley & Reid, 2011). In contrast, even conservative estimates place average human vocabulary in the tens of thousands of words (Zechmeister, Chronis, Cull, D’Anna, & Healy, 1995), with a significant proportion of these being object-referential nouns, particularly in early stages of language acquisition (Gentner & Boroditsky, 2001). Our unique facility with word-object linkage is supported by close coordination between two large-scale neurocognitive networks: the left perisylvian language network and the bilateral inferotemporal object recognition network. Given the sheer amount of neural real estate involved, it is perhaps unsurprising that disrupted word-object linkage is one of the most common consequences of brain injury, often manifest as an inability to name objects aloud (anomia).
Successful word-object linkage requires the crossmodal mapping between the visual form of an object and the auditory or visual form of a word (letters or phonemes). As characterized in cognitive models of word production and object recognition (Bauer, 2006; Dell, Schwartz, Martin, Saffran, & Gagnon, 1997; Ellis & Young, 1988; Farah & McClelland, 1991; Humphreys, Price, & Riddoch, 1999; Levelt, Praamstra, Meyer, Helenius, & Salmelin, 1998), successful linkage is based on the completion of a number of distinct processing stages, including both structural and conceptual stages of processing in the object recognition and language networks.
For example, when attempting to name an object aloud, the structure of the object (form and features) must first be encoded in early stages of the ventral visual stream. Identification of unique, diagnostic features allows the object to be differentiated from other visually-similar objects (Clarke, 2015; Hoffman & Logothetis, 2009; Humphreys, et al., 1999). Recognition unlocks conceptual knowledge of the object, which is based on a variety of learned associations, including crossmodal associations with the language network. This crossmodal interface allows the object concept to be connected to a corresponding verbal concept via shared meaning. Crossmodal associations are thought to contact the language lexicon (the theoretical storehouse of word knowledge) as a pattern of spreading activation. A lexical concept that corresponds to the object is chosen once it reaches an activation threshold. According to serial and interactive models of language (Dell & O’Seaghdha, 1991; Levelt, Roelofs, & Meyer, 1999), lexical concepts represent the meaning but not the sound of the word. Thus, in order to name an object aloud, the lexical concept must then be mapped onto a structural (phonological) representation, which in turn is converted into a motor speech program for vocal articulation. Failure at any of these stages (structural or conceptual stages of object or word processing) will result in the common symptom of anomia, requiring careful testing in order to reveal the underlying source of disruption.
1.2. Anomia in primary progressive aphasia
Anomia is the most common symptom of acquired language disorders (aphasias), whether they are caused by stroke (Laine, 2013), tumor resection (Davie, Hutcheson, Barringer, Weinberg, & Lewin, 2009), or neurodegenerative disease (Mesulam, Wieneke, et al., 2009), the latter known as primary progressive aphasias (PPA). Unlike vascular lesions, atrophy in PPA is equally likely to occur in any region of the language network, including areas not typically vulnerable to cerebrovascular incident (Gorno-Tempini, et al., 2004; Mesulam, Wieneke, et al., 2009). Consistent with this, individuals with PPA show lesions and corresponding forms of anomia not seen in stroke aphasias.
Individuals with the semantic variant of PPA (PPA-S) show a particularly severe anomia, apparently based on degradation of word knowledge; they are unable to match nouns with objects or to define those same nouns (Mesulam, Rogalski, et al., 2009; Mesulam, et al., 2013). Verbal comprehension deficits take a peculiar form in PPA-S: these individuals no longer differentiate words from the same category, such as “cat” and “dog” (taxonomic blurring). Loss of word meaning in PPA-S is gradual rather than absolute: although words can still be assigned to categories, indicating that a generic level of recognition is retained, they can no longer be differentiated from one another at a more specific level. Behavioral evidence of taxonomic blurring is provided by superordinate and coordinate errors in naming, coordinate errors in picture-word matching, and overly vague word definitions (Mesulam, Rogalski, et al., 2009; Mesulam, et al., 2013). Blurring is also evident in electrophysiology: during picture-word matching tasks, controls generate lower-amplitude N400 event-related potentials in response to objects’ names compared to related words, while PPA-S participants show equivalent responses to both types of words (Hurley, Paller, Rogalski, & Mesulam, 2012). Surprisingly, loss of word meaning in PPA-S consistently maps to the anterior temporal lobe (ATL) (Hurley, et al., 2012; Rogalski, et al., 2011b) rather than the temporoparietal junction (the sometime seat of “Wernicke’s area”) (Bogen & Bogen, 1976; Geschwind, 1965; Mesulam, Thompson, Weintraub, & Rogalski, 2015).
The two other most common presentations of PPA are the agrammatic and logopenic subtypes, characterized by deficits in syntax and verbal repetition, respectively (Gorno-Tempini, et al., 2011; Mesulam, Wieneke, et al., 2009). These “non-semantic” variants (PPA-NS) show peak atrophy in posterior and dorsal components of the language network, and are also frequently anomic (Gorno-Tempini, et al., 2004; Mesulam, Wieneke, et al., 2009). Naming impairments in PPA-NS are sometimes referred to as retrieval-based anomias, since these individuals are able to recognize words (i.e. define them and match them with objects) but unable to produce them by confrontation or in spontaneous speech. The mechanisms of retrieval-based anomias are currently enigmatic and likely heterogeneous, as they can plausibly arise from a number of structural or conceptual factors (phonology, motor speech, or lexical access).
1.3. Eye tracking with the visual world paradigm
Word-object linkage is usually assessed behaviorally via manual responses (button pressing, pointing, writing, drawing) and overt speech, but these approaches limit observation to inputs and manual/oral outputs, with processing in the intervening conceptual stages of word-object linkage interpreted via inference. Online techniques such as eye tracking, however, index processing in real-time with excellent temporal resolution. Eye movement recordings are particularly informative for the study of cognition: if you know where a person is looking then you know much of what they are thinking, for example which stimuli are being considered as candidates when attempting to identify a target.
The visual world paradigm (VWP) is a commonly employed eye tracking task, in which participants are provided with a verbal cue and tasked with identifying a target object embedded amongst an array of foils (Cooper, 1974; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). The VWP is therefore comparable to standardized word-to-object matching tests of single-word comprehension, such as the Peabody Picture Vocabulary Test (PPVT) (Dunn, 2007) and the auditory word recognition subtest of the Western Aphasia Battery (WAB) (Kertesz, 1982). Eye movements in the VWP, however, provide a great deal of information not captured by manual responses (accuracy) in standardized offline tests. The speed with which participants fixate the target object indexes the efficiency of word-object linkage. Consistent with this, individuals with stroke aphasia are slower to fixate targets than non-aphasic controls (Yee, Blumstein, & Sedivy, 2008), and non-aphasic adults are slower to fixate targets with low-frequency names (Magnuson, Dixon, Tanenhaus, & Aslin, 2007). Capture of gaze by foils reveals which factors are relevant for word-object linkage; non-aphasic adults spend more time viewing foils that are conceptually or perceptually similar to the target (Huettig, Mishra, & Olivers, 2011).
Non-aphasic adults consistently spend more time viewing related foils from the same category as the target object, relative to unrelated foils (Huettig & Altmann, 2005; Huettig & Hartsuiker, 2008; Magnuson, Tanenhaus, Aslin, & Dahan, 2003; Meyer, Belke, Telling, & Humphreys, 2007; Mirman & Graziano, 2012a, 2012b; Yee & Sedivy, 2006). This effect, which we will refer to as “taxonomic capture” of gaze, has also been observed in individuals with stroke-induced aphasia. Although people with and without stroke aphasia spend roughly the same overall percentage of time viewing related foils (Yee, et al., 2008), indicating that the overall amplitude of taxonomic capture does not differ between groups, time-course analyses show that people with aphasia take longer to resolve this capture while executing their visual search (Mirman & Graziano, 2012a).
1.4. Eye tracking in PPA
Word-object matching tasks are optimal for studying the mechanisms of anomia in PPA-S, as they probe conceptual stages of word-object linkage without the involvement of speech production. In addition, taxonomic blurring, which is hypothesized to underlie anomia in PPA-S, is most evident when participants must choose between multiple category coordinates. Eye movements generated during word-object matching tasks provide useful online indices of word-object linkage, which could be used to gain new insights into the mechanisms of anomia in PPA. In particular, the phenomenon of taxonomic capture of gaze provides an opportunity to further characterize blurring of word meaning in PPA-S via the novel approach of eye tracking. Although several groups have examined eye movements in PPA (Boxer, et al., 2006; Coppe, Orban de Xivry, Yuksel, Ivanoiu, & Lefevre, 2012; Garbutt, et al., 2008), these studies focused on low-level oculomotor functioning, so the current study represents the first time eye tracking has been used to investigate language in a group of participants with PPA.
In this study the eye movements of PPA participants and controls were examined while they completed a word-object matching task, which was modified in several ways from the standard VWP in order to place greater focus on conceptual stages of word-object linkage. Verbal cues were given 3 seconds prior to the onset of the object array rather than concurrently (as occurs in the standard VWP), minimizing the influence of verbal encoding on speed of visual search, and allowing time for participants to generate expectations as to the identity of the object target (i.e. predictive coding) (Federmeier & Kutas, 2002). We also increased the number of objects in the array from 4 to 16, making the task more difficult, eliciting more informative visual search patterns, and ideally amplifying taxonomic capture through a greater number of related foils (7 rather than the standard 1).
We hypothesized that the eye movements of PPA-S participants would show heightened levels of taxonomic capture during the word-object matching task, thus providing an eye movement “signature” of blurred word meaning. In contrast, individuals with PPA-NS (logopenic and agrammatic variants) tend to have preserved single-word comprehension and atrophy in areas outside ATL (Gorno-Tempini, et al., 2011; Rogalski, et al., 2011a). However, even these “nonsemantic” variants have shown abnormal interference from taxonomic competitors in certain behavioral paradigms (Rogalski, Rademaker, Mesulam, & Weintraub, 2008; Thompson, et al., 2012). Thus, a secondary goal of this study was to explore whether disproportionate levels of taxonomic capture would also be observed in PPA-NS (compared to controls).
2. Methods
2.1. Participants
Fifteen PPA participants (9 male, 6 female) and fourteen non-aphasic controls (10 male, 4 female) were tested. All participants were right-handed, native English speakers. The diagnosis of PPA was made using established guidelines, which require a progressive language impairment that remains the most salient feature of the clinical picture for at least the first 2 years of the disease (Gorno-Tempini, et al., 2011; Mesulam, Wieneke, et al., 2009). PPA participants were sorted into PPA-S (N=9) and PPA-NS (N=6) subgroups based on their scores on the PPVT, using a cutoff of 75% accuracy on a 36-item subset of moderately difficult items (Mesulam, Wieneke, Thompson, Rogalski, & Weintraub, 2012).
Demographic and neuropsychological characteristics of controls and both PPA groups are shown in Table 1. Scores on all variables were compared between groups via independent samples t-tests. By definition the PPA-S group showed lower PPVT scores than the other groups (vs controls t(19)=11.6, p<.001; vs PPA-NS t(11)=5.9, p<.001). They showed excellent performance on the Auditory Word Discrimination subtest of the Northwestern Naming Battery (Thompson & Weintraub, 2012), which requires participants to judge whether word pairs differ by a single phoneme, suggesting that lower PPVT scores in the PPA-S group did not arise from failures to perceive auditory word cues, but rather from conceptual failures in word-object linkage. The PPA-S group was also more anomic than the other groups, as evidenced by lower scores on the Boston Naming Test (Kaplan, Goodglass, & Weintraub, 1983) (vs controls t(20)=24.0, p<.001; vs PPA-NS t(12)=7.0, p<.001). All groups showed excellent regular word reading scores on the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA) (Kay, Lesser, & Coltheart, 1992), suggesting participants were well able to read verbal cues during the visual world test. Likewise, there were no group differences in Benton Facial Recognition scores (Benton, Hamsher, Varney, & Spreen, 1998) or delayed picture recognition on the Rivermead Behavioural Memory Test (Wilson, et al., 2008), suggesting that structural processing of objects was preserved and that an apperceptive agnosia was unlikely. Performance of both PPA groups was as good as or better than that of controls on tests of visuospatial functioning, including the Trail-Making tests (number of lines completed) (Reitan & Wolfson, 1993) and Judgment of Line Orientation (Randolph, 1998). All participants also completed the Visual Target Cancellation test (Mesulam, 2000; Weintraub & Mesulam, 1987). In this task, participants are shown a sheet covered with hundreds of geometric shapes, and are required to mark through 60 target shapes while ignoring over 300 foils. Many of the foils share features with the target, necessitating a serial (i.e. conjunctive) rather than parallel search (Treisman & Gelade, 1980). Both PPA groups showed performance equivalent to controls on Visual Target Cancellation (number of cancellations in 180 seconds), indicating that any differences from controls on the eye tracking task could not be explained by impairments in visual search ability due to spatial or executive dysfunction.
Table 1.
Demographics and neuropsychological characteristics of the experimental samples (M±SD). PPA participants were sorted into PPA-S and PPA-NS subgroups based on PPVT scores. All groups were matched in terms of demographics, reading ability, and visuospatial skills, differing only in ability to name objects and comprehend words.
| Variable (Max Score) | Controls (N=14) | PPA-NS (N=6) | PPA-S (N=9) |
|---|---|---|---|
| Age | 68±2 | 66±3 | 66.8±2 |
| Years of Education | 15.9±.6 | 16.2±1 | 14.5±.7 |
| Aphasia Quotient (100) | N/A | 80.9±4 | 67.9±9 |
| PPVT (36) | 35.4±.2 | 31±.6 a | 15.6±2 a,b |
| Auditory Word Discrimination (10) | N/A | 9.5±.5 | 9.6±.5 |
| Boston Naming Test (60) | 58.7±.4 | 43.7±4 a | 9.5±3 a,b |
| PALPA Regular Word Reading (10) | 10±0 | 10±0 | 9.4±.4 |
| Benton Facial Recognition (54) | 45.8±1 | 47.3±2 | 45±1 |
| Rivermead Picture Recognition (10) | 9.5±.8 | 10±0 | 9.7±.7 |
| Trail-Making Test part A (24) | 24±0 | 24±0 | 24±0 |
| Trail-Making Test part B (24) | 23.9±.1 | 22.7±1 | 24±0 |
| Visual Target Cancellation (60) | 59.1±.3 | 59.2±.7 | 59.1±.4 |
| Judgment of Line Orientation (20) | 15.9±.9 | 19±.4 a | 16.3±1 |

a: p<.05 vs controls; b: p<.05 vs PPA-NS.
2.2. Equipment
A 20.5×11.5″ touchscreen monitor was used to present visual stimuli (1920×1080 resolution) and collect touch responses, using the Presentation experimental software package (Neurobehavioral Systems, Inc., Albany, CA, US). Participants were seated approximately 22″ in front of the monitor. An EyeLink 1000 (SR Research, Mississauga, ON, Canada) remote tracking system was used to monitor eye movements. A nine-point calibration procedure was administered before the experiment to ensure accurate estimates of gaze position. Eye and head movements were simultaneously monitored and accounted for as part of the calibration procedure.
2.3. Stimuli
The object probes were composed of shaded gray scale drawings (Rossion & Pourtois, 2004) from the Snodgrass and Vanderwart (1980) image set, scaled to 122×122 pixels (visual angle 3.4°). Sixteen object probes were simultaneously displayed in an array (Fig. 1), equidistantly spaced along an iso-acuity ellipse with a horizontal axis of 1152 pixels (31.4°) and a vertical axis of 878 pixels (24.2°). This aspect ratio equates parafoveal acuity across positions when centrally-fixating (Iordanescu, Grabowecky, & Suzuki, 2011). Objects were thus located 12.1–15.7° from the center of the screen, depending on their location along the ellipse. At this degree of eccentricity typical adults are generally able to detect whether or not an object is present (Thorpe, Gegenfurtner, Fabre-Thorpe, & Bulthoff, 2001), but are unable to discern fine-grained featural differences between objects (Nelson & Loftus, 1980). Participants were therefore unlikely to complete the word-to-object task solely with covert attention, necessitating overt eye movements.
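The array geometry lends itself to a quick sanity check. The sketch below recovers the reported object size and eccentricity values from the stated screen specifications; the equal angular spacing of positions and the uniform pixel pitch are our simplifying assumptions, not details taken from the stimulus software.

```python
import numpy as np

# Illustrative check of the array geometry (assumptions noted in the text above).
SCREEN_PX_W, SCREEN_IN_W = 1920, 20.5      # horizontal resolution / physical width
VIEW_DIST_IN = 22.0                        # approximate viewing distance
PPI = SCREEN_PX_W / SCREEN_IN_W            # ~93.7 pixels per inch

def size_deg(px):
    """Visual angle (degrees) subtended by a stimulus of `px` pixels."""
    return np.degrees(2 * np.arctan((px / PPI) / (2 * VIEW_DIST_IN)))

# Iso-acuity ellipse: semi-axes of the 1152 x 878 pixel full axes.
a, b = 1152 / 2, 878 / 2
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
r_px = np.hypot(a * np.cos(theta), b * np.sin(theta))   # radius of each position

# Eccentricity of each object center from central fixation, in degrees.
ecc = np.degrees(np.arctan((r_px / PPI) / VIEW_DIST_IN))

print(round(size_deg(122), 1))                    # ~3.4 degrees per 122-pixel object
print(round(ecc.min(), 1), round(ecc.max(), 1))   # ~12.0 to ~15.6 degrees eccentricity
```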
Fig. 1.
Schematic of the eye tracking task. Written word cues were followed by an array of 16 object pictures, including the target, 7 taxonomically-related foils, and 8 unrelated foils. At the end of each trial participants indicated their confidence on a 4-point scale.
Forty-eight pictures were used as object probes, with equal numbers drawn from each of 4 taxonomic categories (animals, clothes, fruits-vegetables, and manipulable objects). Each probe array included a target, 7 related foils from the same category as the target, and 8 unrelated foils from other categories. Half of the 48 pictures (24) were designated as the object target on one of the 24 trials of the experiment. Pictures employed as targets also appeared 3 times as a related foil and 4 times as an unrelated foil on other trials. Pictures never employed as targets appeared 4 times as a related foil and 4 times as an unrelated foil. Thus all 48 pictures appeared 8 times in total throughout the experiment, controlling for effects of exposure (e.g. perceptual and conceptual priming).
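The exposure-balancing scheme can be verified with simple arithmetic; the snippet below is only a bookkeeping check of the counts described above, not a reconstruction of the actual trial lists.

```python
# Bookkeeping check of the exposure-balancing scheme described above.
n_trials = 24
target_pics, nontarget_pics = 24, 24          # 48 pictures in total

# Appearances per picture: as target + as related foil + as unrelated foil.
assert 1 + 3 + 4 == 8      # pictures that serve as a target on one trial
assert 0 + 4 + 4 == 8      # pictures that never serve as a target

# Foil slots available across the experiment match the assigned appearances.
assert n_trials * 7 == target_pics * 3 + nontarget_pics * 4   # related-foil slots
assert n_trials * 8 == target_pics * 4 + nontarget_pics * 4   # unrelated-foil slots
assert n_trials * 16 == (target_pics + nontarget_pics) * 8    # 384 presentations total
```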
The targets and foils on each trial were closely matched for psycholinguistic and visual characteristics, as may be expected given that the same pictures were employed repeatedly as both targets and foils. Repeated measures ANOVA contrasts showed that targets did not differ from the 15 foils on each trial in terms of visual complexity, according to both subjective complexity norms collected from human raters (Rossion & Pourtois, 2004) (F(1,23) = .12, p = .73) and objective complexity norms generated by computer algorithms (Szekely & Bates, 2000) (F(1,23) = .02, p = .89). Visual saliency of the objects in each array was modeled using the Saliency Toolbox (Walther & Koch, 2006). Screen-captured images of each array were input into the Saliency Toolbox and converted into estimated salience images. Salience values were averaged across the pixels within each 122×122 object in the array, and log-transformed to correct for non-normality. Results indicated that targets and foils did not differ in terms of salience (F(1,23) = .1, p = .77). The verbal labels corresponding to the targets and foils from each trial were also matched in terms of lexical frequency (Lund & Burgess, 1996) (F(1,23) = .01, p = .92), number of phonemes (F(1,23) = .2, p = .68), and phonological neighborhood density (Balota, et al., 2007) (F(1,23) = .1, p = .78). A second set of contrasts comparing the related versus unrelated foils from each trial showed that they were also matched on these metrics (subjective complexity F(1,23) = .004, p = .95; objective complexity F(1,23) = 1.5, p = .24; salience F(1,23) = .02, p = .88; lexical frequency F(1,23) = .1, p = .74; number of phonemes F(1,23) = .01, p = .95; phonological neighborhood density F(1,23) = .004, p = .95).
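The salience summary step could be reproduced along the following lines. The study used the MATLAB Saliency Toolbox to generate the salience maps themselves; the Python sketch below only illustrates how per-object values might be extracted from an already-computed map, with hypothetical inputs.

```python
import numpy as np

def object_salience(salience_map, centers, box=122):
    """Mean log-transformed salience within each object's bounding box.

    salience_map : 2D array of salience values, one per screen pixel
                   (e.g. exported from a salience model)
    centers      : list of (x, y) pixel coordinates of object centers
    """
    half = box // 2
    values = []
    for x, y in centers:
        patch = salience_map[y - half:y + half, x - half:x + half]
        # Average across the 122x122 region, then log-transform for normality.
        values.append(np.log(patch.mean() + 1e-12))
    return np.array(values)
```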
2.4. Experimental procedures
There were 24 trials in the word-object matching task. All stimuli were presented on a white background. On each trial, a lowercase written word cue was presented for 2.5 seconds in the center of the screen, followed by a fixation cross (.5 seconds) and then the object array, which included the target and fifteen foils (Fig. 1). Seven of the fifteen foils were taxonomically related to the target (from the same category, e.g. animals), and the remaining eight foils belonged to unrelated categories. Targets were balanced across the sixteen array positions, and were equally likely to appear in each vertical and horizontal hemifield. Likewise, the number of targets and foils from each category was balanced across locations and hemifields.
Participants were instructed to read each word and then point to the corresponding object presented on the screen. They were further instructed not to reach out to touch the screen until after they had identified the target, in order to minimize the possibility that their hands would block the eye tracking camera. After a touch response was detected and recorded, participants were asked to rate how confident they were that they had selected the correct object. Participants were asked to point to one of four colored boxes, with labels below each box reading “very unsure”, “unsure”, “sure”, and “very sure”, and their ratings were recorded on a 4-point scale. A fixation cross was then presented for 2 seconds in between trials.
2.5. Acquisition and coding of eye movements
Eye movements were recorded at a sampling rate of 500 Hz. Each epoch of eye movement data began with the onset of the object array and ended with a touch response. Continuous eye-movement records were transformed into a time series of fixations, saccades, and blinks using standard parameters. Motion (0.15 degrees), velocity (30 degrees per second), and acceleration (8000 degrees per second²) thresholds were used to identify saccades. Events in which the pupil was not detected were classified as blinks. Saccade- and blink-free periods lasting ≥40 ms were categorized as fixations.
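For readers unfamiliar with threshold-based event parsing, the sketch below illustrates the logic with the stated velocity and acceleration criteria. The study relied on the EyeLink parser itself; this simplified Python version assumes a gaze trace already converted to degrees of visual angle, and omits the 0.15° motion (displacement) criterion for brevity.

```python
import numpy as np

FS = 500.0            # sampling rate (Hz)
VEL_THRESH = 30.0     # degrees per second
ACC_THRESH = 8000.0   # degrees per second squared
MIN_FIX_MS = 40.0     # minimum fixation duration

def parse_fixations(x_deg, y_deg, pupil):
    """Return (start_sample, end_sample, duration_ms) for each fixation."""
    dt = 1.0 / FS
    vel = np.hypot(np.gradient(x_deg, dt), np.gradient(y_deg, dt))
    acc = np.abs(np.gradient(vel, dt))
    blink = np.asarray(pupil) <= 0                 # pupil lost -> blink sample
    saccade = (vel > VEL_THRESH) | (acc > ACC_THRESH)
    candidate = ~(saccade | blink)                 # saccade- and blink-free samples

    fixations, start = [], None
    for i, ok in enumerate(np.append(candidate, False)):   # sentinel closes last run
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            dur_ms = (i - start) * dt * 1000
            if dur_ms >= MIN_FIX_MS:
                fixations.append((start, i, dur_ms))
            start = None
    return fixations
```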
The space surrounding the object probes was divided into 16 trapezoidal areas of interest (AOIs). Fixations falling outside of these AOIs, including those falling in the center of the screen or in the extreme periphery, were excluded (masked out) from analysis. We used four metrics to quantify fixation patterns: 1) percentage of time spent viewing the targets and foils, 2) number of foils viewed, 3) duration of viewing time per foil, and 4) number of times gaze returned to a previously viewed foil. Percentage of time spent viewing targets was calculated by dividing the duration of gaze in the target object AOI by the total time spent viewing all object AOIs on that trial. Similar calculations were performed to find relative time spent viewing related and unrelated foils. The number of foils viewed was expressed as a percentage, in order to account for the fact that there were 7 related and 8 unrelated foils on each trial. Thus, if a participant viewed 5 related and 3 unrelated foils on a given trial, then they would have viewed 71% (5/7) of related and 38% (3/8) of unrelated foils. The mean duration of viewing time per foil was calculated by taking the amount of time spent viewing each individual object foil on that trial, and averaging those values. Finally, the number of times gaze returned to a previously viewed foil was counted separately for related versus unrelated foils, and then averaged across trials. The first time gaze entered an AOI was ignored, but each subsequent re-entry was added to the tally for that trial. This metric is therefore potentially orthogonal to the number of foils viewed: a participant could engage in multiple revisits despite only viewing a small number of foils, for example by repeatedly looking back and forth between two foils.
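To make the four metrics concrete, the sketch below computes them for a single trial from a list of AOI-labeled fixations. The data structure and label names are hypothetical conveniences for illustration, not the format produced by the tracker.

```python
def trial_metrics(fixations):
    """fixations: list of (aoi_label, aoi_type, duration_ms), in temporal order,
    where aoi_type is 'target', 'related', 'unrelated', or None (outside AOIs)."""
    in_aoi = [f for f in fixations if f[1] is not None]
    total = sum(d for _, _, d in in_aoi) or 1

    # 1) Percentage of viewing time on the target and each foil type.
    pct_time = {t: 100 * sum(d for _, typ, d in in_aoi if typ == t) / total
                for t in ('target', 'related', 'unrelated')}

    # 2) Number of foils viewed, as a percentage of those available (7 vs 8).
    viewed = {t: {lab for lab, typ, _ in in_aoi if typ == t}
              for t in ('related', 'unrelated')}
    pct_viewed = {'related': 100 * len(viewed['related']) / 7,
                  'unrelated': 100 * len(viewed['unrelated']) / 8}

    # 3) Mean viewing time per foil actually viewed.
    mean_dur = {t: (sum(d for _, typ, d in in_aoi if typ == t) / len(viewed[t])
                    if viewed[t] else 0.0) for t in ('related', 'unrelated')}

    # 4) Return of gaze: re-entries into a previously visited foil AOI
    #    (first entry ignored; consecutive same-AOI fixations count as one visit).
    returns = {'related': 0, 'unrelated': 0}
    seen, prev = set(), None
    for lab, typ, _ in in_aoi:
        if typ in returns and lab != prev and lab in seen:
            returns[typ] += 1
        seen.add(lab)
        prev = lab
    return pct_time, pct_viewed, mean_dur, returns
```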
2.6. Inferential analysis
In order to limit the experiment-wise type 1 error rate, all touch response and gaze metrics were first subjected to omnibus ANOVA tests including all three groups (control/PPA-NS/PPA-S). In cases where the omnibus test was significant, follow-up tests were then conducted to determine which pairs of groups differed from one another (control/PPA-NS, PPA-NS/PPA-S, and control/PPA-S). Speed of touch responses, accuracy of touch responses, and percent time viewing the target objects were subjected to omnibus one-way ANOVAs, which were followed up by two-tailed independent samples t-tests when significant. The remaining analyses were all designed to compare the magnitude of taxonomic capture in eye movements across the three experimental groups. The interaction term in each omnibus 2×3 (related/unrelated by control/PPA-NS/PPA-S) mixed-model ANOVA was first examined, and when significant was followed up by similar 2×2 interactions (related/unrelated by control/PPA-NS, etc.) to determine whether taxonomic capture differed between specific pairs of groups.
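As an illustration of this scheme, the sketch below tests the group-by-relatedness interaction for one gaze metric. Because the within-subject factor has only two levels, the interaction is equivalent to a one-way ANOVA on per-participant difference scores (related minus unrelated), which keeps the example within SciPy; the function and variable names are illustrative, not taken from the analysis code.

```python
import numpy as np
from scipy import stats

def taxonomic_capture_tests(related, unrelated, group, alpha=.05):
    """related/unrelated: per-participant scores on one gaze metric;
    group: labels in {'control', 'PPA-NS', 'PPA-S'}, one per participant."""
    capture = np.asarray(related) - np.asarray(unrelated)   # difference scores
    labels = ('control', 'PPA-NS', 'PPA-S')
    by_group = [capture[np.asarray(group) == g] for g in labels]

    # Omnibus test (stands in for the 2x3 mixed-model interaction).
    F, p = stats.f_oneway(*by_group)
    results = {'omnibus': (F, p)}

    # Pairwise follow-ups only when the omnibus test is significant.
    if p < alpha:
        for i in range(3):
            for j in range(i + 1, 3):
                t, p_ij = stats.ttest_ind(by_group[i], by_group[j])
                results[f'{labels[i]} vs {labels[j]}'] = (t, p_ij)
    return results
```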
3. Results
3.1. Manual responses
Accuracy in pointing responses for each group is shown in Fig. 2a. One-way ANOVA revealed that accuracy differed significantly across groups (F(2,26)=12.4, p<.001). Follow-up independent samples t-tests revealed that the PPA-NS group did not differ in accuracy from controls (t(18)=.5, p=.63); however, the PPA-S group was less accurate than the control (t(21)=4.3, p<.01) and PPA-NS groups (t(13)=2.7, p<.05). On 91% of inaccurate trials the PPA-S participants pointed to a related foil from the same category as the target rather than to an unrelated foil.
Fig. 2.

Manual responses in the eye tracking task. PPA participants were split into two subgroups based on single-word comprehension (PPVT) scores. A) The PPA-S group was less accurate than the other two groups, tending to select (via touchscreen) foils rather than the object target. B) The PPA-NS group showed slower touch responses than controls, and the PPA-S group was slower than both controls and PPA-NS. *: significantly different from the other groups
Mean reaction times of each group are shown in Fig. 2b. The one-way ANOVA was significant (F(2,26)=14.4, p<.001), indicating differences between groups. PPA-NS were slower than controls (t(18)=3.7, p<.01), and PPA-S were slower than both controls (t(21)=4.8, p<.001) and PPA-NS (t(13)=2.4, p<.05). Confidence ratings also differed between the control (M±SD = 3.9±.1), PPA-NS (M±SD = 4.0±.08), and PPA-S groups (M±SD = 3.2±.6) (F(2,22)=11.9, p<.001). Although ratings did not significantly differ between PPA-NS and controls (t(14) = .6, p=.56), PPA-S participants were less confident than both PPA-NS (t(13) = 3.1, p=.009) and controls (t(17) =3.9, p<.01).
3.2. Overall allocation of gaze
Fig. 3 shows the average proportion of time during the trial epoch spent viewing the target, related foils, and unrelated foils. Instances where gaze fell outside the object AOIs were excluded from analysis, so the percentage of time spent viewing targets and both foil types adds up to 100%. A one-way ANOVA revealed group differences in the percentage of time spent viewing the target (F(2,26)=15.2, p<.001). Follow-up independent samples t-tests showed that whereas PPA-NS participants and controls spent a similar percentage of time viewing targets (t(18)=.7, p=.50), PPA-S participants spent proportionately less time viewing targets than the other groups (versus controls t(21)=5.4, p<.001; versus PPA-NS t(13)=3.6, p<.01).
Fig. 3.
Percentage of time spent viewing the target object, related foils, and unrelated foils. The PPA-S group spent less time viewing the target, and conversely more time viewing foils, than the other two groups. Whereas all groups tended to spend more time viewing taxonomically-related foils than unrelated foils, the amplitude of “taxonomic capture” was disproportionate in the PPA-S group. *: significantly less time spent viewing the target than other groups, ╪: significantly greater taxonomic capture (related-vs-unrelated viewing time) than other groups
Another set of analyses was then conducted in order to examine the significance of taxonomic capture, whereby participants spent a greater proportion of time viewing related compared to unrelated foils. A 2×3 mixed-model ANOVA showed a significant interaction between relatedness (related/unrelated) and group (control/PPA-NS/PPA-S) (F(2,26)=6.1, p<.01), indicating that the magnitude of taxonomic capture differed between groups. Follow-up 2×2 ANOVAs showed that capture did not differ between PPA-NS and controls. PPA-S participants, however, showed greater taxonomic capture than the other groups (vs controls F(1,21)=10.1, p<.01; vs PPA-NS F(1,13)=4.9, p<.05).
3.3. Sources of taxonomic capture
The previous analysis (Fig. 3) showed that PPA-S participants spent a disproportionate amount of time viewing related foils. This effect could be produced if a) they viewed a greater number of the related foils on each trial, b) they spent an abnormally lengthy amount of time viewing each related foil, or c) some combination of the two. In other words, the average amount of time a participant spends viewing each object is potentially independent from the likelihood of viewing it in the first place. We examined these competing accounts with a pair of complementary analyses (Fig. 4).
Fig. 4.
Characteristics of viewing behavior on foils. A) Number of the related and unrelated foils viewed on each trial (expressed as a percentage). Taxonomic capture was equivalent across groups in terms of this metric. B) Duration of viewing time per foil. All groups viewed related foils longer than unrelated foils, but this effect was more extreme in the PPA-S group. ╪: significantly greater taxonomic capture than other groups
The average number of related and unrelated foils viewed on each trial is shown in Fig. 4a. The interpretation of raw numbers in this case is potentially clouded by an imbalance in the study design, as there were seven related and eight unrelated foils in the object array, so the odds of viewing an unrelated foil are slightly greater based on chance alone. In order to correct for this imbalance, the number of related and unrelated foils viewed is expressed as a ratio of the total number of foils instead. The omnibus 2×3 interaction was not significant (F(2,26)=1.3, p=.29), indicating no group differences in the propensity to view a related foil more so than an unrelated foil.
In contrast, the related/unrelated effect in viewing time per foil (Fig. 4b) differed significantly between groups (F(2,26)=13.1, p<.001). Follow-up 2×2 interactions showed that PPA-S participants spent a disproportionate amount of time viewing each related object compared to controls (F(1,21)=20.5, p<.001) and PPA-NS (F(1,13)=6.6, p<.05), who did not differ from one another (F(1,18)=2.2, p=.16). Taxonomic capture in PPA-S thus appears to be driven by an abnormal amount of time viewing each related foil (Fig. 4b), rather than merely viewing a greater number of them (Fig. 4a).
Fig. 5.

Return of gaze to a previously viewed foil. A) Representative scan paths from a single trial. The control participant briefly fixates a few foils before identifying the target object (cat). The PPA-S participant conducts a lengthier search, and his gaze often returns to foils he already viewed earlier in the trial. B) Quantified return of gaze in each group. The PPA-S group re-fixated objects often, especially when they were related to the target. This behavior is rarely seen in control and PPA-NS participants. ╪: significantly greater taxonomic capture than other groups.
3.4. Return of gaze on foils
In order to better understand why PPA-S participants spend so long viewing each related foil, we visually inspected their scan paths. Scan paths from representative trials of a control and of a PPA-S participant are plotted in Fig. 5a. The control participant briefly fixates on several foils before viewing and then pointing to the target. The PPA-S participant instead conducts a lengthy visual search, eventually viewing the majority of foils. His gaze often returns to previously viewed foils, and he ends the trial with a series of back and forth fixations between the target (cat) and a taxonomically-related competitor (lion), before finally touching the target. This “return of gaze” was all the more striking as it was not apparent in any of the inspected scan paths from controls.
In order to quantitatively analyze return of gaze, the number of times a participant’s gaze reentered a previously viewed object AOI was counted, and a separate tally was calculated for return to related versus unrelated foils (Fig. 5b). The omnibus 2×3 interaction was significant (F(2,26)=11.9, p<.001), indicating group differences in return of gaze to related compared to unrelated foils. Although the PPA-S group showed greater return of gaze even for unrelated foils (vs controls t(21)=4.8, p<.01; vs PPA-NS t(13)=1.98, p=.06), 2×2 interactions showed that the amplitude of taxonomic capture in gaze return (a greater tendency to revisit related than unrelated foils) was heightened in PPA-S (vs controls F(1,21)=18.6, p<.001; vs PPA-NS F(1,13)=6.3, p<.05), but did not differ between controls and PPA-NS (F(1,18)=1.0, p=.33).
Correlations were then examined between return of gaze and confidence ratings, in order to assess whether increased return of gaze in PPA-S was associated with uncertainty in the decision-making process. To do this, a correlation coefficient (r) was calculated for each of the 9 PPA-S participants, based on the confidence ratings and return of gaze to related foils on each trial. The group average of these r values indicates a strong correlation in the PPA-S group (r = −.67±.15), and a one-sample t-test revealed that this correlation is significantly different from zero (t(8)=−13.5, p<.001). Thus, on trials where PPA-S participants were less confident in matching the verbal cue to the target object, they were more likely to re-fixate related foils.
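A minimal sketch of this correlation analysis is shown below, assuming per-trial arrays for each participant; the report does not specify the correlation statistic, so Pearson’s r is used here for illustration.

```python
import numpy as np
from scipy import stats

def confidence_gaze_correlation(participants):
    """participants: list of (confidence_ratings, related_returns) pairs,
    one per PPA-S participant, each holding one value per trial."""
    r_values = [stats.pearsonr(conf, ret)[0] for conf, ret in participants]
    # One-sample t-test of the within-participant correlations against zero.
    t, p = stats.ttest_1samp(r_values, 0.0)
    return float(np.mean(r_values)), t, p
```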
3.5. Taxonomic capture on accurate and inaccurate trials
PPA-S participants showed heightened taxonomic capture across three of the four gaze metrics examined, compared to both controls and PPA-NS (Figs. 3, 4b, 5b). The PPA-S group was also less accurate than the other groups in pointing to the target object (Fig. 2a). This raises a question: is hyper-taxonomic capture in PPA-S driven solely by inaccurate trials where the target was not recognized? In order to address this, we replicated the omnibus 2×3 ANOVA interaction tests (related/unrelated by control/PPA-NS/PPA-S) on the three gaze metrics, this time based solely on trials where the participant correctly pointed to the target. Again, there were group differences in taxonomic capture in terms of viewing time per foil (F(2,26)=6.7, p<.01) and return of gaze (F(2,26)=8.8, p<.01), and there was a trend for group differences in overall percent viewing time (F(2,26)=2.8, p=.08). These results demonstrate that taxonomic capture in PPA-S was heightened even when manual responses were correct. Likewise, control and PPA-NS participants gave the highest possible confidence ratings on correct trials (4 out of 4), but ratings were significantly lower in the PPA-S group (M±SD = 3.6±.4; F(2,21) =10.9, p<.01).
Since pointing to the wrong object indicates a more severe disruption of word-object linkage, behavioral and eye movement indices of taxonomic blurring could be expected to be even greater on inaccurate trials. We were unable to conduct standard inferential analyses of return of gaze and confidence ratings based solely on inaccurate trials in the current study, as there were too few such trials to achieve adequate power (M±SD = 6.2±4.8 inaccurate trials from each PPA-S participant). Descriptive analyses, however, suggest lower confidence ratings and greater return of gaze on inaccurate trials. Four of the 9 PPA-S participants showed low enough accuracy (≤75% correct) to estimate average confidence on inaccurate trials (M±SD = 1.8±.4 out of 4), and these ratings were lower than on accurate trials (M±SD = 3.2±.3). A trial-level analysis, examining accuracy and return of gaze across all trials in which eye movements from PPA-S participants were available (N=216), further demonstrated the close relationship between return of gaze and accuracy in the PPA-S group. Gaze returned repeatedly (at least twice) to previously viewed objects on roughly half of the total trials (N=120), and inaccurate responses were made on roughly a quarter of the total trials (N=56). Cross tabulation revealed that gaze returned repeatedly on 98.2% of the inaccurate trials, but on only 40.6% of the accurate trials. Conversely, accurate responses were given on 99.0% of the trials on which gaze returned only once or not at all. This describes a relationship in which return of gaze is a necessary but not sufficient precondition for an inaccurate response.
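The trial-level cross tabulation amounts to the contingency summary sketched below (boolean inputs per trial; the actual trial-level data are not reproduced here).

```python
import numpy as np

def return_by_accuracy(repeated_return, accurate):
    """repeated_return, accurate: boolean arrays with one entry per trial."""
    rep = np.asarray(repeated_return, dtype=bool)
    acc = np.asarray(accurate, dtype=bool)
    pct_repeat_given_inaccurate = 100 * np.mean(rep[~acc])   # ~98% in the PPA-S data
    pct_repeat_given_accurate = 100 * np.mean(rep[acc])      # ~41% in the PPA-S data
    pct_accurate_given_no_repeat = 100 * np.mean(acc[~rep])  # ~99% in the PPA-S data
    return (pct_repeat_given_inaccurate,
            pct_repeat_given_accurate,
            pct_accurate_given_no_repeat)
```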
4. Discussion
4.1. Heightened taxonomic capture in PPA-S
The primary goal of the current study was to characterize blurring of word meaning in PPA via the examination of taxonomic capture in eye movements. Individuals with PPA and impaired single word comprehension (PPA-S), with relatively preserved comprehension (PPA-NS), and non-aphasic controls completed a word-object matching task while eye movements were recorded. All three groups showed taxonomic capture in both the temporal and spatial distribution of gaze on foils, but, as hypothesized, capture was abnormally heightened in the PPA-S group. When overall temporal distribution of gaze across the trial epoch was examined, PPA-S participants were shown to spend a disproportionate amount of time viewing related foils compared to the other groups. Having established this general finding, follow-up analyses were conducted in order to offer a more detailed account.
In trying to explain why PPA-S participants spend an abnormal percentage of their time viewing related foils, one possibility is that during the visual search, covert (parafoveal) attentional mechanisms drew their gaze preferentially toward those foils. According to this explanation, we would expect the PPA-S group to view a disproportionate number of related foils (compared to unrelated foils) on each trial. While this effect was observed in all groups, indicating that gaze does indeed tend to be drawn towards related objects, the amplitude of the effect did not differ between groups. Analysis of gaze duration per foil, however, showed that when PPA-S participants do view a related foil, they do so for an abnormal length of time. Further analysis showed that this effect was mainly driven by their directing gaze back and forth repeatedly between a set of related foils on each trial, a behavior almost never observed in control or PPA-NS participants, who in general do not re-fixate previously viewed objects. Importantly, the ratio of gaze return to related versus unrelated foils was more extreme in PPA-S compared to the other groups, and this propensity to selectively re-fixate related objects appears to account for the heightened taxonomic capture shown by the PPA-S group across multiple gaze metrics.
The eye tracking task we employed in this study entailed a more difficult visual search than the standard VWP, as the target was embedded within 15 foils rather than 3 as occurs in the VWP. The PPA-S group showed intact performance on ancillary tests with even greater visual search demands (the Visual Target Cancellation test), indicating that abnormal performance on the eye tracking task could not be explained as failures of visual search due to visuospatial or executive dysfunction. Furthermore, such deficits would not explain or confound the main finding of heightened taxonomic capture in PPA-S, as the presence of visuospatial or executive dysfunction would not differentially affect the viewing of related versus unrelated foils.
In summary, the primary goal of this study was accomplished: an eye movement signature of blurred word meaning in PPA was identified, in the form of heightened taxonomic capture. This signature appears to be specific to PPA-S: PPA-NS participants in this study were no more likely to re-view related objects than controls, and, although they were slower to initiate a manual response, they did so with extremely high accuracy and certainty (confidence ratings). Likewise, individuals with stroke-induced aphasia in previous studies showed normal levels of taxonomic capture (Mirman & Graziano, 2012a; Yee, et al., 2008), including those with impaired comprehension (Wernicke’s aphasia). Future studies may help to more directly relate taxonomic capture to the symptom of anomia in PPA-S, for example by using gaze metrics to predict the ability to name individual objects by confrontation.
4.2. Return of gaze and uncertainty in word-object linkage
Eye movements revealed aspects of conceptual processing in PPA-S participants that were not detectable in their manual responses. Return of gaze was strongly correlated with confidence ratings, suggesting that both measures reflect uncertainty in matching words with objects. Unlike the control and PPA-NS groups, the PPA-S group showed evidence of uncertainty even on trials where they ultimately pointed to the correct target. Gaze returned repeatedly to related foils on almost half of accurate trials, and the ratio of gaze return to related versus unrelated foils was heightened compared to the other groups, demonstrating elevated taxonomic capture even on accurate trials. Likewise, on accurate trials the control and PPA-NS groups gave the highest possible confidence ratings but ratings were lower and more variable in the PPA-S group. The examination of eye movements thus allows for the detection of subtle deficits in word-object linkage, which could be occurring in early or even prodromal stages of PPA when scores on standardized behavioral assessments appear relatively normal (Mesulam, et al., 2012).
As may be expected, uncertainty appears to be more extreme in situations where participants point to the wrong object altogether. Return of gaze was not only predictive of when PPA-S participants pointed to the wrong object, but appeared to be a necessary (but not sufficient) precondition: virtually all inaccurate responses were preceded by repeated return of gaze, and virtually all trials without substantial return of gaze ended with accurate responses. PPA-S participants also provided lower confidence ratings on inaccurate than on accurate trials. These findings reinforce the results from a recent case study, in which an individual with PPA and impaired verbal comprehension showed similar relationships between accuracy, confidence ratings, and return of gaze in a similar eye tracking design (Seckin, et al., 2015).
A theoretical account of the current results is schematized in Fig. 6. According to serial and interactive models of language (Dell, et al., 1997; Levelt, et al., 1998), object concepts and lexical concepts are thought to be interconnected via crossmodal associations, with connections between specific word-object pairs varying in strength. For example, the visual percept of a lion is strongly associated with its name, but is also associated with taxonomically-related concepts such as “cat”. When attempting to match a noun cue with an object target, non-aphasic controls can assume that the noun will be much more strongly associated with the target than with related objects (Fig. 6a). This gulf enables a clear associative threshold for a match, analogous to the threshold for lexical selection in serial/interactive models. Controls discontinue visual search when the first object surpasses threshold, as they can assume that only the target is capable of doing so. Gaze does not return to previously viewed objects, and the highest confidence ratings are given, in reflection of a word-object mapping that is absolutely certain.
Fig. 6.

A theoretical account of results in the current study. The perceived strength of association between various types of object probes and the verbal cue is schematized in arbitrary theoretical units. Controls are able to cancel visual search immediately after a pre-determined threshold is exceeded by an object (the target). Individuals with PPA-S are unable to set a threshold due to blurring of word meaning, instead identifying a candidate set of objects that could each plausibly match the verbal cue.
PPA-S participants are unable to set a steadfast threshold for recognition, as, due to blurring of word meaning, many objects from the same category now share a similar strength of association with the word cue (Fig. 6b). Note that blurring could either weaken the association with the target, strengthen the associations with related competitors, or a combination of both factors (as depicted). PPA-S participants are thus forced to engage in an alternative strategy, in order to accomplish a probabilistic rather than a definitive mapping between noun and object. Their scan paths seemed to indicate that PPA-S participants first “scout” the array, flagging any objects that could conceivably match the verbal cue for further consideration. After identifying this set of “suspects” (usually belonging to the same category), they then direct gaze back and forth between set members, evaluating and comparing their strength of association with the verbal cue, eventually selecting the object with the best fit. When engaging in this strategy participants have a subjective experience of uncertainty, and provide low confidence ratings afterwards. The participant’s final selection is probabilistic, and under conditions of maximal uncertainty when the cue is equally associated with both targets and competitors (Fig. 6b) the individual will show coordinate errors in pointing. Future studies may help to further establish the validity of this interpretation.
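To make the contrast between threshold-based and probabilistic matching explicit, the toy simulation below implements a simple version of this account. The association strengths, the threshold value, and the Luce-style choice rule are our own illustrative simplifications and are not fit to the data.

```python
import numpy as np

rng = np.random.default_rng(0)

def choose(associations, threshold=0.9):
    """associations: cue-object association strengths, target listed first."""
    a = np.asarray(associations, dtype=float)
    if (a >= threshold).sum() == 1:      # one clear winner: stop the search
        return int(np.argmax(a))
    # Otherwise weigh the candidate set and choose probabilistically (Luce rule).
    return int(rng.choice(len(a), p=a / a.sum()))

intact = [1.0, 0.3, 0.3, 0.3]            # control-like: the target stands out
blurred = [0.55, 0.50, 0.45, 0.40]       # PPA-S-like: taxonomic blurring

for profile in (intact, blurred):
    hit_rate = np.mean([choose(profile) == 0 for _ in range(1000)])
    print(hit_rate)   # ~1.0 for the intact profile, well below 1.0 when blurred
```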
Although PPA-S participants most often returned gaze to related foils, it should be noted that they also returned gaze to unrelated foils more often than the other groups did. This raises the question of whether these unrelated foils were being considered as candidates for a match with the verbal cue, which would suggest a more serious corruption of word-object linkage that transcends category boundaries. Degradation of word knowledge results in taxonomic blurring in mild to moderate stages of PPA-S, but in some severe cases word recognition is eventually lost entirely, at which point the preference for related competitors vanishes and word-object matching errors occur across category boundaries (Mesulam, Rogalski, et al., 2009; Seckin, et al., 2015). While it is possible that some PPA-S participants in this study may have “entirely” failed to recognize some of the verbal cues, in general this does not appear to be the case: they pointed to a related foil on 91% of inaccurate trials, even though more unrelated foils were present in the array. Thus, if unrelated foils were being considered as candidates they were not ultimately selected, presumably because they shared a lesser strength of association with the verbal cue (Fig. 6b).
More likely, repeated viewing of unrelated foils is a byproduct of the challenging visual search demands of the current paradigm. The array included 16 object locations, unlike the standard VWP with 4 locations. This many object identities and corresponding spatial locations cannot be maintained within the span of working memory (Huettig, Olivers, & Hartsuiker, 2011), so unintended fixations on unrelated foils are to be expected as participants attempt to relocate related objects. Future studies may be able to disentangle these competing interpretations by assessing verbal comprehension with ancillary testing, for example by requiring participants to define each verbal cue (Mesulam, Rogalski, et al., 2009; Mesulam, et al., 2013), or by requiring participants to explicitly judge the probabilities that related and unrelated foils match verbal cues.
4.3. Loci of taxonomic blurring
The current results show that eye movements are sensitive to the mechanisms of word-object mislinkage in PPA-S. As observed in previous behavioral and event-related potential paradigms, and now demonstrated through the analysis of eye movements, degradation of conceptual representations in PPA-S takes the form of taxonomic blurring. Further work is needed, however, to establish the nature of the concepts being degraded. Multistore models posit separate but interacting knowledge stores for words and for objects (Baddeley, 2012; Mesulam, et al., 2013; Paivio, 1986; Warrington, 1975; Warrington & McCarthy, 1994). The best evidence for this comes from the double dissociation of aphasia and visual agnosia, in which knowledge of words and of objects, respectively, is corrupted. As an aphasic syndrome, PPA-S is characterized by word comprehension that is more impaired than object recognition, especially during initial stages of the disease, when the diagnosis requires aphasia to be the most salient feature of the clinical picture (Gorno-Tempini, et al., 2011; Mesulam, Wieneke, et al., 2009). Individuals with PPA-S often provide definitions of words that are too vague to differentiate them from category coordinates, while object definitions tend to be more precise (Mesulam, Rogalski, et al., 2009; Mesulam, et al., 2013). Unlike controls and PPA-NS participants, PPA-S participants generate N400 responses to related words that are indistinguishable from those to target words, but this electrophysiological signature of blurring is not evident in control conditions with non-verbal object stimuli (Hurley, et al., 2012). These results suggest that taxonomic blurring takes place within the language network in PPA-S.
Individuals with an isolated associative visual agnosia, however, have also shown a greater propensity to commit related errors in object naming and word-object matching (Davidoff & Wilson, 1985; Fery & Morais, 2003), and are often able to identify objects at a generic category level but not at a specific item level (Carbonnel, Charnallet, David, & Pellat, 1997; Jankowiak, Kinsbourne, Shalev, & Bachman, 1992), suggesting that taxonomic blurring may also take place within the object recognition network. As the disease process in PPA-S unfolds, atrophy becomes more prominent in the right hemisphere (particularly in the ATL region) (Rogalski, et al., 2014), and when this occurs those individuals often develop a secondary associative agnosia in addition to deficits in single-word comprehension (Gefen, et al., 2013; Hurley, Rogalski, Mesulam, & Thompson, 2014). This raises the possibility that apparent blurring in the PPA-S group could arise in either the language or object recognition networks, with the relative contributions of each network differing on an individual basis.
To further complicate matters, there are also unitary store alternatives to the multistore viewpoint, which posit the existence of amodal (or panmodal) conceptual representations that are not proper constituents of either the language or object networks (Bright, Moss, & Tyler, 2004; Caramazza, Hillis, Rapp, & Romani, 1990; Lambon Ralph & Patterson, 2008; Riddoch, Humphreys, Coltheart, & Funnell, 1988). In this tradition, individuals who demonstrate impaired conceptual judgments of both words and objects are characterized as having a “semantic dementia” syndrome rather than concurrent aphasia and agnosia (Adlam, et al., 2006; Hodges, Patterson, Oxbury, & Funnell, 1992; Snowden, Goulding, & Neary, 1989). Taxonomic blurring in semantic dementia has been theorized to result from the gradual dissolution or “dimming” of amodal concepts, resulting from incremental disconnection between amodal and modality-specific representations (Lambon Ralph, Lowe, & Rogers, 2007; Rogers, et al., 2004).
Differential assessment of word and object processing will be key to distinguishing whether conceptual impairments reflect aphasic, agnosic, or amodal syndromes. We were unable to definitively disentangle word from object processing in the current study, as the eye tracking paradigm includes both verbal and object stimuli, and both must be held concurrently in working memory to judge whether they match. In future eye tracking studies it will be informative to observe whether taxonomic capture of gaze occurs in within-modal control conditions where verbal cues are matched to verbal probes, and non-verbal cues are matched to non-verbal probes. According to multistore theories, individuals with a relatively pure single-word comprehension deficit should fail to show taxonomic capture in a non-verbal control task, while an individual with visual agnosia should fail to show capture in an all-verbal control task. If taxonomic blurring results from the dimming of amodal concepts, then capture should occur regardless of stimulus modality (Lambon Ralph, 2014). Eye movement indices of taxonomic blurring, such as the return-of-gaze phenomenon described in the current study, will thus provide valuable new ways to evaluate theories of conceptual knowledge.
4.4. Normal taxonomic capture in PPA-NS
Our secondary goal was to examine taxonomic capture in the eye movements of PPA-NS participants, as PPA-NS variants have shown heightened interference from taxonomically-related competitors in previous behavioral (non-eye tracking) investigations (Rogalski, et al., 2008; Thompson, et al., 2012). The PPA-NS participants in the current study did not show evidence of taxonomic processing abnormalities, instead showing taxonomic capture equivalent to controls across all metrics examined. The PPA-NS group in this study showed aphasia quotients similar to those of participants included in the aforementioned behavioral investigations, ruling out the interpretation that language deficits in the current sample were too mild to create interference. Further investigation is needed to clarify the underlying factors that lead to taxonomic processing abnormalities in some experimental paradigms but not others, and to identify the eye movement signatures of retrieval anomia in PPA-NS.
Highlights
- Eye tracking is used to study blurred word meaning in primary progressive aphasia
- Patients disproportionately viewed foils that were related to the target
- Word-object linkage is probabilistic rather than definitive in these patients
Acknowledgments
This work was supported by NIH/NIA P30 AG13854, NIH/NINDS R01 NS075075 and NIH/NIDCD R01 DC008552. Additional support for M.S. was provided by the Turkish Education Foundation and World Federation of Neurology. Additional support for R.S.H. was provided by the Northwestern University Mechanisms of Aging and Dementia Training Grant; NIH/NIA T32 AG20506. We would like to thank Christina Wieneke and Danielle Barkema for help with coordinating assessments, and Arjuna Chatrathi for help with salience modeling.
References
- Adlam AL, Patterson K, Rogers TT, Nestor PJ, Salmond CH, Acosta-Cabronero J, Hodges JR. Semantic dementia and fluent primary progressive aphasia: two sides of the same coin? Brain. 2006;129:3066–3080. doi: 10.1093/brain/awl285.
- Baddeley A. Working memory: theories, models, and controversies. Annual Review of Psychology. 2012;63:1–29. doi: 10.1146/annurev-psych-120710-100422.
- Balota DA, Yap MJ, Cortese MJ, Hutchison KA, Kessler B, Loftis B, Neely JH, Nelson DL, Simpson GB, Treiman R. The English Lexicon Project. Behavior Research Methods. 2007;39:445–459. doi: 10.3758/bf03193014.
- Bauer RM. The agnosias. In: Snyder PJ, Nussbaum PD, Robins DL, editors. Clinical Neuropsychology: A Pocket Handbook for Assessment. Washington, DC: American Psychological Association; 2006. pp. 508–533.
- Benton A, Hamsher K, Varney N, Spreen O. Contributions to Neuropsychological Assessment. New York: Oxford University Press; 1998.
- Bogen JE, Bogen GM. Wernicke’s region--Where is it? Annals of the New York Academy of Sciences. 1976;280:834–843. doi: 10.1111/j.1749-6632.1976.tb25546.x.
- Boxer AL, Garbutt S, Rankin KP, Hellmuth J, Neuhaus J, Miller BL, Lisberger SG. Medial versus lateral frontal lobe contributions to voluntary saccade control as revealed by the study of patients with frontal lobe degeneration. Journal of Neuroscience. 2006;26:6354–6363. doi: 10.1523/JNEUROSCI.0549-06.2006.
- Bright P, Moss H, Tyler LK. Unitary vs multiple semantics: PET studies of word and picture processing. Brain and Language. 2004;89:417–432. doi: 10.1016/j.bandl.2004.01.010.
- Caramazza A, Hillis AE, Rapp BC, Romani C. The multiple semantics hypothesis: Multiple confusions? Cognitive Neuropsychology. 1990;7:161–189.
- Carbonnel S, Charnallet A, David D, Pellat J. One or several semantic system(s)? Maybe none: evidence from a case study of modality and category-specific “semantic” impairment. Cortex. 1997;33:391–417. doi: 10.1016/s0010-9452(08)70227-2.
- Clarke A. Dynamic information processing states revealed through neurocognitive models of object semantics. Language, Cognition and Neuroscience. 2015;30:409–419. doi: 10.1080/23273798.2014.970652.
- Cooper RM. The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology. 1974;6:84–107.
- Coppe S, Orban de Xivry JJ, Yuksel D, Ivanoiu A, Lefevre P. Dramatic impairment of prediction due to frontal lobe degeneration. Journal of Neurophysiology. 2012;108:2957–2966. doi: 10.1152/jn.00582.2012.
- Davidoff J, Wilson B. A case of visual agnosia showing a disorder of pre-semantic visual classification. Cortex. 1985;21:121–134. doi: 10.1016/s0010-9452(85)80020-4.
- Davie GL, Hutcheson KA, Barringer DA, Weinberg JS, Lewin JS. Aphasia in patients after brain tumour resection. Aphasiology. 2009;23:1196–1206.
- Dell GS, O’Seaghdha PG. Mediated and convergent lexical priming in language production: a comment on Levelt et al. (1991). Psychological Review. 1991;98:604–614. doi: 10.1037/0033-295x.98.4.604.
- Dell GS, Schwartz MF, Martin N, Saffran EM, Gagnon DA. Lexical access in aphasic and nonaphasic speakers. Psychological Review. 1997;104:801–838. doi: 10.1037/0033-295x.104.4.801.
- Dunn LM. Peabody Picture Vocabulary Test, Fourth Edition (PPVT-4). Pearson Assessments; 2007.
- Ellis AW, Young AW. Human cognitive neuropsychology. Hove, UK; Hillsdale, USA: Lawrence Erlbaum Associates; 1988.
- Farah MJ, McClelland JL. A computational model of semantic memory impairment: modality specificity and emergent category specificity. Journal of Experimental Psychology: General. 1991;120:339–357.
- Federmeier KD, Kutas M. Picture the difference: electrophysiological investigations of picture processing in the two cerebral hemispheres. Neuropsychologia. 2002;40:730–747. doi: 10.1016/s0028-3932(01)00193-2.
- Fery P, Morais J. A case study of visual agnosia without perceptual processing or structural descriptions impairment. Cognitive Neuropsychology. 2003;20:595–618. doi: 10.1080/02643290242000880.
- Garbutt S, Matlin A, Hellmuth J, Schenk AK, Johnson JK, Rosen H, Dean D, Kramer J, Neuhaus J, Miller BL, Lisberger SG, Boxer AL. Oculomotor function in frontotemporal lobar degeneration, related disorders and Alzheimer’s disease. Brain. 2008;131:1268–1281. doi: 10.1093/brain/awn047.
- Gefen T, Wieneke C, Martersteck A, Whitney K, Weintraub S, Mesulam MM, Rogalski E. Naming vs knowing faces in primary progressive aphasia: a tale of 2 hemispheres. Neurology. 2013;81:658–664. doi: 10.1212/WNL.0b013e3182a08f83.
- Gentner D, Boroditsky L. Individuation, relativity, and early word learning. In: Bowerman M, Levinson SC, editors. Language Acquisition and Conceptual Development. Cambridge, UK: Cambridge University Press; 2001. pp. 215–256.
- Geschwind N. Disconnexion syndromes in animals and man. I. Brain. 1965;88:237–294. doi: 10.1093/brain/88.2.237.
- Gorno-Tempini ML, Dronkers NF, Rankin KP, Ogar JM, Phengrasamy L, Rosen HJ, Johnson JK, Weiner MW, Miller BL. Cognition and anatomy in three variants of primary progressive aphasia. Annals of Neurology. 2004;55:335–346. doi: 10.1002/ana.10825.
- Gorno-Tempini ML, Hillis AE, Weintraub S, Kertesz A, Mendez M, Cappa SF, Ogar JM, Rohrer JD, Black S, Boeve BF, Manes F, Dronkers NF, Vandenberghe R, Rascovsky K, Patterson K, Miller BL, Knopman DS, Hodges JR, Mesulam MM, Grossman M. Classification of primary progressive aphasia and its variants. Neurology. 2011;76:1006–1014. doi: 10.1212/WNL.0b013e31821103e6.
- Hodges JR, Patterson K, Oxbury S, Funnell E. Semantic dementia. Progressive fluent aphasia with temporal lobe atrophy. Brain. 1992;115(Pt 6):1783–1806. doi: 10.1093/brain/115.6.1783.
- Hoffman KL, Logothetis NK. Cortical mechanisms of sensory learning and object recognition. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences. 2009;364:321–329. doi: 10.1098/rstb.2008.0271.
- Huettig F, Altmann GT. Word meaning and the control of eye fixation: semantic competitor effects and the visual world paradigm. Cognition. 2005;96:B23–32. doi: 10.1016/j.cognition.2004.10.003.
- Huettig F, Hartsuiker RJ. When you name the pizza you look at the coin and the bread: eye movements reveal semantic activation during word production. Memory and Cognition. 2008;36:341–360. doi: 10.3758/mc.36.2.341.
- Huettig F, Mishra RK, Olivers CN. Mechanisms and representations of language-mediated visual attention. Frontiers in Psychology. 2011;2:394. doi: 10.3389/fpsyg.2011.00394.
- Huettig F, Olivers CN, Hartsuiker RJ. Looking, language, and memory: bridging research from the visual world and visual search paradigms. Acta Psychologica. 2011;137:138–150. doi: 10.1016/j.actpsy.2010.07.013.
- Humphreys GW, Price CJ, Riddoch MJ. From objects to names: A cognitive neuroscience approach. Psychological Research/Psychologische Forschung. 1999;62:118–130. doi: 10.1007/s004260050046.
- Hurley RS, Paller KA, Rogalski EJ, Mesulam MM. Neural mechanisms of object naming and word comprehension in primary progressive aphasia. Journal of Neuroscience. 2012;32:4848–4855. doi: 10.1523/JNEUROSCI.5984-11.2012.
- Hurley RS, Rogalski EJ, Mesulam MM, Thompson CK. A dual-route account of object knowledge deficits in primary progressive aphasia. Annual Meeting of the Society for the Neurobiology of Language; Amsterdam, Netherlands; 2014.
- Iordanescu L, Grabowecky M, Suzuki S. Object-based auditory facilitation of visual search for pictures and words with frequent and rare targets. Acta Psychologica. 2011;137:252–259. doi: 10.1016/j.actpsy.2010.07.017.
- Jankowiak J, Kinsbourne M, Shalev RS, Bachman DL. Preserved visual imagery and categorization in a case of associative visual agnosia. Journal of Cognitive Neuroscience. 1992;4:119–131. doi: 10.1162/jocn.1992.4.2.119.
- Kaminski J, Call J, Fischer J. Word learning in a domestic dog: evidence for “fast mapping”. Science. 2004;304:1682–1683. doi: 10.1126/science.1097859.
- Kaplan E, Goodglass H, Weintraub S. Boston Naming Test. Philadelphia: Lea & Febiger; 1983.
- Kay J, Lesser R, Coltheart M. PALPA: Psycholinguistic Assessments of Language Processing in Aphasia. Hove: Lawrence Erlbaum Associates; 1992.
- Kertesz A. The Western Aphasia Battery. New York; London: Grune & Stratton; 1982.
- Laine M. Anomia: Theoretical and Clinical Aspects. Taylor & Francis; 2013.
- Lambon Ralph MA. Neurocognitive insights on conceptual knowledge and its breakdown. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences. 2014;369:20120392. doi: 10.1098/rstb.2012.0392.
- Lambon Ralph MA, Lowe C, Rogers TT. Neural basis of category-specific semantic deficits for living things: evidence from semantic dementia, HSVE and a neural network model. Brain. 2007;130:1127–1137. doi: 10.1093/brain/awm025.
- Lambon Ralph MA, Patterson K. Generalization and differentiation in semantic memory: insights from semantic dementia. Annals of the New York Academy of Sciences. 2008;1124:61–76. doi: 10.1196/annals.1440.006.
- Levelt WJ, Praamstra P, Meyer AS, Helenius P, Salmelin R. An MEG study of picture naming. Journal of Cognitive Neuroscience. 1998;10:553–567. doi: 10.1162/089892998562960.
- Levelt WJ, Roelofs A, Meyer AS. A theory of lexical access in speech production. Behavioral and Brain Sciences. 1999;22:1–38. doi: 10.1017/s0140525x99001776.
- Lund K, Burgess C. Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments & Computers. 1996;28:203–208.
- Lyn H. Mental representation of symbols as revealed by vocabulary errors in two bonobos (Pan paniscus). Animal Cognition. 2007;10:461–475. doi: 10.1007/s10071-007-0086-3.
- Magnuson JS, Dixon JA, Tanenhaus MK, Aslin RN. The dynamics of lexical competition during spoken word recognition. Cognitive Science. 2007;31:133–156. doi: 10.1080/03640210709336987.
- Magnuson JS, Tanenhaus MK, Aslin RN, Dahan D. The time course of spoken word learning and recognition: studies with artificial lexicons. Journal of Experimental Psychology: General. 2003;132:202–227. doi: 10.1037/0096-3445.132.2.202.
- Mesulam M. Principles of Behavioral and Cognitive Neurology. New York: Oxford University Press; 2000.
- Mesulam M, Rogalski E, Wieneke C, Cobia D, Rademaker A, Thompson C, Weintraub S. Neurology of anomia in the semantic variant of primary progressive aphasia. Brain. 2009;132:2553–2565. doi: 10.1093/brain/awp138.
- Mesulam M, Wieneke C, Rogalski E, Cobia D, Thompson C, Weintraub S. Quantitative template for subtyping primary progressive aphasia. Archives of Neurology. 2009;66:1545–1551. doi: 10.1001/archneurol.2009.288.
- Mesulam MM, Thompson CK, Weintraub S, Rogalski EJ. The Wernicke conundrum and the anatomy of language comprehension in primary progressive aphasia. Brain. 2015. doi: 10.1093/brain/awv154.
- Mesulam MM, Wieneke C, Hurley R, Rademaker A, Thompson CK, Weintraub S, Rogalski EJ. Words and objects at the tip of the left temporal lobe in primary progressive aphasia. Brain. 2013;136:601–618. doi: 10.1093/brain/aws336.
- Mesulam MM, Wieneke C, Thompson C, Rogalski E, Weintraub S. Quantitative classification of primary progressive aphasia at early and mild impairment stages. Brain. 2012;135:1537–1553. doi: 10.1093/brain/aws080.
- Meyer AS, Belke E, Telling AL, Humphreys GW. Early activation of object names in visual search. Psychonomic Bulletin & Review. 2007;14:710–716. doi: 10.3758/bf03196826.
- Mirman D, Graziano KM. Damage to temporo-parietal cortex decreases incidental activation of thematic relations during spoken word comprehension. Neuropsychologia. 2012a;50:1990–1997. doi: 10.1016/j.neuropsychologia.2012.04.024.
- Mirman D, Graziano KM. Individual differences in the strength of taxonomic versus thematic relations. Journal of Experimental Psychology: General. 2012b;141:601–609. doi: 10.1037/a0026451.
- Nelson WW, Loftus GR. The functional visual field during picture viewing. Journal of Experimental Psychology: Human Learning and Memory. 1980;6:391–399.
- Paivio A. Mental representations: A dual coding approach. New York: Oxford University Press; 1986.
- Pepperberg IM. In search of King Solomon’s ring: cognitive and communicative studies of Grey parrots (Psittacus erithacus). Brain, Behavior and Evolution. 2002;59:54–67. doi: 10.1159/000063733.
- Pilley JW, Reid AK. Border collie comprehends object names as verbal referents. Behavioural Processes. 2011;86:184–195. doi: 10.1016/j.beproc.2010.11.007.
- Randolph C. Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). San Antonio: The Psychological Corporation; 1998.
- Reitan R, Wolfson D. The Halstead-Reitan Neuropsychological Test Battery: Theory and Clinical Interpretation. 2nd ed. Tucson: Neuropsychology Press; 1993.
- Riddoch MJ, Humphreys GW, Coltheart M, Funnell E. Semantic systems or system? Neuropsychological evidence re-examined. Cognitive Neuropsychology. 1988;5:3–25.
- Rogalski E, Cobia D, Harrison TM, Wieneke C, Thompson CK, Weintraub S, Mesulam MM. Anatomy of language impairments in primary progressive aphasia. Journal of Neuroscience. 2011a;31:3344–3350. doi: 10.1523/JNEUROSCI.5544-10.2011.
- Rogalski E, Cobia D, Harrison TM, Wieneke C, Thompson CK, Weintraub S, Mesulam MM. Anatomy of language impairments in primary progressive aphasia. Journal of Neuroscience. 2011b;31:3344–3350. doi: 10.1523/JNEUROSCI.5544-10.2011.
- Rogalski E, Cobia D, Martersteck A, Rademaker A, Wieneke C, Weintraub S, Mesulam MM. Asymmetry of cortical decline in subtypes of primary progressive aphasia. Neurology. 2014;83:1184–1191. doi: 10.1212/WNL.0000000000000824.
- Rogalski E, Rademaker A, Mesulam M, Weintraub S. Covert processing of words and pictures in nonsemantic variants of primary progressive aphasia. Alzheimer Disease and Associated Disorders. 2008;22:343–351. doi: 10.1097/WAD.0b013e31816c92f7.
- Rogers TT, Lambon Ralph MA, Garrard P, Bozeat S, McClelland JL, Hodges JR, Patterson K. Structure and deterioration of semantic memory: a neuropsychological and computational investigation. Psychological Review. 2004;111:205–235. doi: 10.1037/0033-295X.111.1.205.
- Rossion B, Pourtois G. Revisiting Snodgrass and Vanderwart’s object pictorial set: the role of surface detail in basic-level object recognition. Perception. 2004;33:217–236. doi: 10.1068/p5117.
- Seckin M, Mesulam MM, Rademaker AW, Voss JL, Weintraub S, Rogalski EJ, Hurley RS. Eye movements as probes of lexico-semantic processing in a patient with primary progressive aphasia. Neurocase. 2015. doi: 10.1080/13554794.2015.1045523. Advance online publication.
- Snodgrass JG, Vanderwart M. A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory. 1980;6:174–215. doi: 10.1037//0278-7393.6.2.174.
- Snowden JS, Goulding PJ, Neary D. Semantic dementia: A form of circumscribed cerebral atrophy. Behavioural Neurology. 1989;2:167–182.
- Szekely A, Bates E. Objective complexity as a variable in studies of picture naming. Center for Research in Language Newsletter. 2000;12(2). La Jolla: University of California, San Diego. http://crl.ucsd.edu/newsletter/12-2/article.html
- Tanenhaus MK, Spivey-Knowlton MJ, Eberhard KM, Sedivy JC. Integration of visual and linguistic information in spoken language comprehension. Science. 1995;268:1632–1634. doi: 10.1126/science.7777863.
- Thompson CK, Cho S, Price C, Wieneke C, Bonakdarpour B, Rogalski E, Weintraub S, Mesulam MM. Semantic interference during object naming in agrammatic and logopenic primary progressive aphasia (PPA). Brain and Language. 2012;120:237–250. doi: 10.1016/j.bandl.2011.11.003.
- Thompson CK, Weintraub S. Northwestern Naming Battery. Evanston, IL: Northwestern University; 2012. Retrieved from http://northwestern.flintbox.com/public/project/9299/
- Thorpe SJ, Gegenfurtner KR, Fabre-Thorpe M, Bulthoff HH. Detection of animals in natural images using far peripheral vision. European Journal of Neuroscience. 2001;14:869–876. doi: 10.1046/j.0953-816x.2001.01717.x.
- Treisman AM, Gelade G. A feature-integration theory of attention. Cognitive Psychology. 1980;12:97–136. doi: 10.1016/0010-0285(80)90005-5.
- Walther D, Koch C. Modeling attention to salient proto-objects. Neural Networks. 2006;19:1395–1407. doi: 10.1016/j.neunet.2006.10.001.
- Warrington EK. The selective impairment of semantic memory. Quarterly Journal of Experimental Psychology. 1975;27:635–657. doi: 10.1080/14640747508400525.
- Warrington EK, McCarthy RA. Multiple meaning systems in the brain: a case for visual semantics. Neuropsychologia. 1994;32:1465–1473. doi: 10.1016/0028-3932(94)90118-x.
- Weintraub S, Mesulam MM. Right cerebral dominance in spatial attention. Further evidence based on ipsilateral neglect. Archives of Neurology. 1987;44:621–625. doi: 10.1001/archneur.1987.00520180043014.
- Wilson BA, Greenfield E, Clare L, Baddeley A, Cockburn J, Watson P, Tate R, Sopena S, Nannery R. Rivermead Behavioural Memory Test. 3rd ed. London: Pearson Assessment; 2008.
- Yee E, Blumstein SE, Sedivy JC. Lexical-semantic activation in Broca’s and Wernicke’s aphasia: evidence from eye movements. Journal of Cognitive Neuroscience. 2008;20:592–612. doi: 10.1162/jocn.2008.20056.
- Yee E, Sedivy JC. Eye movements to pictures reveal transient semantic activation during spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2006;32:1–14. doi: 10.1037/0278-7393.32.1.1.
- Zechmeister EB, Chronis AM, Cull WL, D’Anna CA, Healy NA. Growth of a functionally important lexicon. Journal of Literacy Research. 1995;27:201–212.



