Author manuscript; available in PMC: 2020 May 1.
Published in final edited form as: J Clin Exp Neuropsychol. 2019 Feb 6;41(4):411–431. doi: 10.1080/13803395.2019.1567692

Validity of an Eyetracking Method for Capturing Auditory-Visual Cross Format Semantic Priming

Javad Anjum 1, Brooke Hallowell 2
PMCID: PMC6428604  NIHMSID: NIHMS1518396  PMID: 30727826

Abstract

Introduction:

Semantic priming paradigms are important for understanding lexical-semantic processing and the nature of linguistic deficits accompanying language performance in neurologically impaired individuals such as people with aphasia. Reaction-time (RT) based traditional semantic priming tasks entail potential confounds that are especially problematic when applied to people with aphasia, who may have concomitant neurocognitive challenges that limit task performance. Some of these confounds include requirements of following complex instructions, making metalinguistic judgments, and using speech or limb-based motor actions to indicate overt responses. Eyetracking methods have great potential for avoiding some of these confounds. We tested the validity of an eyetracking method in capturing semantic priming in an auditory-visual cross-format priming paradigm (auditory word prime-visual image target).

Method:

A total of 72 neurologically unimpaired adults participated in two phases: a stimulus development phase using traditional priming (n = 32) and an experimental eyetracking phase (n = 40). Each phase included two conditions representing distinct levels of prime-target semantic relatedness: unrelated and related. Mean RT data from the traditional priming (stimulus development) phase guided image selection for the eyetracking experiment. Eyetracking indices of fixation duration and latency of fixation were recorded to capture semantic priming in the eyetracking experiment.

Results:

Eye fixation data indicated that images related to auditory primes were attended to earlier and attracted significantly greater visual attention than unrelated images. These results mirrored RT data from the traditional priming method, which showed shorter RTs and more accurate naming performance for related images compared to unrelated images.

Conclusions:

Results support the validity of eyetracking indices of semantic priming and offer a robust testing protocol for future studies in this line of research. Current clinical relevance for people with aphasia is highlighted. Further empirical testing of the psychometric properties of the eyetracking measures in various semantic priming contexts is recommended.

Keywords: Priming, Eyetracking, Assessment, Aphasia, Language Disorders

Introduction

Semantic priming is any modification of the speed or accuracy in the performance of a response as a consequence of prior exposure to a semantically related or unrelated stimulus (Balota, 1994; Meyer & Schvaneveldt, 1971; Neely, 1991; Odekar, Hallowell, Kruse, Moates, & Lee, 2009; Tabossi, 1996). The stimulus to which the responses are made is known as the target and the stimulus preceding the target is the prime (McNamara, 2005). The concept of semantic priming has received significant attention ever since Meyer and Schvaneveldt (1971) first examined the influence of semantic contexts on word recognition. In a lexical decision task that required participants to determine if two simultaneously presented letter strings were both words, they observed that participants were faster and more accurate while responding to semantically related words (e.g., BREAD-BUTTER), than those that were unrelated (e.g., BREAD-DOCTOR). Following the discovery of this phenomenon, semantic priming effects have emerged as the most robust, reliably observed, and well-established of all psycholinguistic priming effects studied to date within and across the disciplines of psychology, linguistics, speech-language science, and cognitive neuroscience (Balota & Duchek, 1988; Balota, 1994; McNamara, 2005; Meyer, 2014; Odekar et al., 2009; Rossell, Price, & Nobre, 2003; Yap, Hutchison, & Tan, 2017). Such an ardent multidisciplinary research focus and collaboration in studying the phenomenon of semantic priming stems from the idea that semantic priming is inherent to natural language processing and critical to language comprehension and formulation (Branigan, Pickering, Liversedge, Stewart, & Urbach, 1995; McNamara, 2005; Neely, 1991). Therefore, semantic priming paradigms have been widely used to understand the influence of semantic contexts in lexical access (Forster, 2004), lexical organization (Frost, Kugler, Deutsch, & Forster, 2005), lexical processing (Forster, 1981), word recognition (Hutchison, et al., 2013), and sentence comprehension (Angwin, Chenery, Copland, Murdoch, & Silburn, 2005). Recently, semantic priming paradigms have also been used to determine the role of nonlinguistic cognitive processes such as working memory and attention in language processing (Heyman, Van Rensbergen, Storms, Hutchison, & De Deyne, 2015; Hutchison, 2007; Murphy & Hunt, 2013).

In addition to studying language processing in neurologically unimpaired individuals, semantic priming paradigms have also been used in people with neurocognitive impairments such as aphasia. The term aphasia can be defined as “an acquired communication disorder caused by brain damage, characterized by an impairment of language modalities: speaking, listening, reading, and writing; it is not the result of a sensory deficit, a general intellectual deficit, or a psychiatric disorder” (Hallowell & Chapey, 2008, p. 3). Specifically, semantic priming paradigms have been helpful in enhancing our understanding of the nature of linguistic deficits accompanying language performance in aphasia (Bushell, 1996; Del Toro, 2000; Hagoort, 1997; Howells & Cardell, 2015; Milberg, Blumstein, Katz, Gershberg, & Brown, 1995; Silkes & Rogers, 2012; Siyambalapitiya, Chenery, & Copland, 2013; Soni, Lambon Ralph, & Woollams, 2012).

Semantic priming paradigms entail three important ways of manipulating prime-target relationships. The first is varying the temporal aspects of prime-target presentation. Inter-stimulus interval (ISI) and stimulus onset asynchrony (SOA) are the two main ways of varying the time interval between prime and target presentation: ISI denotes the interval between the offset of the prime and the onset of the target (Carter, Hough, Stuart, & Rastatter, 2011), whereas SOA denotes the interval between the onset of the prime and the onset of the target (Boyd, Patriciu, McKinnon, & Kiang, 2014). The second is using cross-format and cross-modal semantic priming contexts to control the format and modality of stimulus presentation. A cross-format priming condition includes differing forms of stimuli presented within the same modality (e.g., orthographic word prime and picture target; Lebreton, Baron, & Eustache, 2001), whereas a cross-modal priming condition entails stimulus presentation in a combination of different modalities (e.g., spoken word prime and picture target; Chen & Spence, 2011). The third is using the semantic distance between concepts to study the strength of priming effects. In the most common form, a sequence of triplet words is presented (prime 1 – prime 2 – target 1, target 2, or target 3), wherein the second word (prime 2) is always an ambiguous word (e.g., bark) with a minimum of two distinct meanings. The first word (prime 1) is related to the dominant associative meaning of the ambiguous word (target 1 – dog), is related to its subordinate associative meaning (target 2 – tree), or is unrelated to the ambiguous word (target 3 – car). These three semantic contexts are called congruent, incongruent, and unbiased, respectively, and are presented in a randomized and counterbalanced manner. Each triplet sequence includes serial presentation of the initial two words in quick succession (varying the SOA), followed by the target, which remains on the screen for a specified amount of time (e.g., 2 s). Participants are instructed to pronounce the name of the target as quickly and accurately as possible. Voice RT measures across these three conditions are compared to analyze semantic priming effects (Andreou, Veith, Bozikas, Lincoln, & Moritz, 2014; Kischka et al., 1996; Schvaneveldt, Meyer, & Becker, 1976).
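To make the ISI/SOA distinction concrete, the following Python sketch computes a target's onset time under each scheme; the function names and timing values are illustrative, not taken from any of the studies cited above.

    # Illustrative timing arithmetic for prime-target presentation.
    def target_onset_from_isi(prime_onset_ms, prime_duration_ms, isi_ms):
        """ISI is measured from prime OFFSET to target onset."""
        return prime_onset_ms + prime_duration_ms + isi_ms

    def target_onset_from_soa(prime_onset_ms, soa_ms):
        """SOA is measured from prime ONSET to target onset."""
        return prime_onset_ms + soa_ms

    # Example: a 300-ms prime starting at t = 0.
    print(target_onset_from_isi(0, 300, 100))  # 400 (ISI = 100 ms)
    print(target_onset_from_soa(0, 400))       # 400 (SOA = 400 ms)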

Traditional Semantic Priming Methods

The most commonly used response tasks in traditional semantic priming methods are lexical decision and naming (Chetail & Mathey, 2009; Ferrand, Segui, & Grainger, 1996; Liu, Bates, Powell, & Wulfeck, 1997). Dependent measures include accuracy and speed of overt participant responses to targets following exposure to primes. Mean differences in reaction time (RT) across related versus unrelated trials are calculated to determine semantic priming effects. A typical lexical decision task entails written or spoken stimuli and is constructed similarly to the Meyer and Schvaneveldt (1971) experiment. Participants are instructed to indicate, through spoken response or button pressing, whether target stimuli are words or nonwords. Naming tasks include exposure to an auditory or visual prime, immediately followed by a target stimulus. Participants are required to name the target as quickly and accurately as possible (McNamara, 2005; Neely, 1991).
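As a minimal sketch of how a priming effect is typically quantified from such RT data, the following computes the related-versus-unrelated mean difference; the RT values are invented for illustration.

    # Priming effect = mean RT (unrelated) - mean RT (related);
    # a positive value indicates facilitation by the related prime.
    from statistics import mean

    related_rt_ms = [612, 598, 634, 580]    # invented RTs after related primes
    unrelated_rt_ms = [688, 702, 671, 695]  # invented RTs after unrelated primes

    priming_effect_ms = mean(unrelated_rt_ms) - mean(related_rt_ms)
    print(f"Semantic priming effect: {priming_effect_ms:.1f} ms")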

A critical limitation in the use of RT-based traditional semantic priming tasks relates to potentially confounding factors, which may adversely influence the validity of the assessment method. Some of these confounds include requirements of following complex instructions, making metalinguistic judgments, and using limb-based motor actions to indicate responses. These task requirements compel participants to use abilities that, in and of themselves, are not the intended focus of priming research (Goldinger, 1996; Lively, Pisoni, & Goldinger, 1994) and are especially problematic when used with individuals with neurological impairments. Therefore, it is important to consider novel methods for indexing semantic priming in people without neurological disorders to ensure the validity of those methods applied during unimpaired language processing. Validated methods may then be tested and applied in people with aphasia to gain more clarity on underlying factors that influence varied types of linguistic deficits.

Using Eyetracking Methods to Address Limitations of Traditional Priming Methods

Eyetracking tasks offer a promising alternative set of priming research methods for studying language processing (Odekar et al., 2009; Schad, Risse, Slattery, & Rayner, 2014; Yee, Overton, & Thompson-Schill, 2009). In a typical eyetracking priming task, participants attend to visual stimuli by looking at the computer screen naturally, with no overt task to accomplish (Odekar et al., 2009; van der Laan, Papies, Hooge, & Smeets, 2017). Eyetracking methods therefore offer several advantages that address some of the limitations inherent to traditional priming methods.

First, traditional priming methods require participants to follow complex instructions, as well as attend to and remember stimuli for later recall or judgment. This is problematic when testing people with neurological deficits, who may have various language impairments or cognitive difficulties such as those related to memory, attention, or learning (Gordon, 1983; McNeil, Odell, & Tseng, 1991; Murray, 1999; Heuer & Hallowell, 2007; Ivanova & Hallowell, 2013). In eyetracking methods, participants may simply be encouraged to look at carefully designed stimulus displays in any way that comes naturally to them.

Second, traditional priming tasks require participants to use planned motor responses, including speech, button pressing, or pointing. Such a prerequisite is problematic for testing people with neurological impairments who may be affected by paralysis, paresis, problems of motor programming (including those affecting verbal expression), or other challenges leading to reduced speed or accuracy. In contrast, eyetracking methods do not require spoken or limb-motor responses. Rather, they involve measuring spontaneous eye movements; participants do not have to consciously plan to look at anything in particular (Hallowell et al., 2002; Heuer & Hallowell, 2009; Ivanova & Hallowell, 2012). This is especially important when testing people with potential ocular motor apraxia, which is characterized by deficits in eye-movement programming (Hallowell, 1999, 2008). Given that neuromuscular control of eye movements is generally robust, even in the face of severe brain injury and progression of neurodegenerative disease, the use of eyetracking indices is feasible in people with a wide array of neurological deficits (Anderson & MacAskill, 2013; Leigh & Zee, 1983; Ramat, Leigh, Zee, & Optican, 2007).

Third, traditional semantic priming paradigms require participants to make overt judgments and perform volitional responses towards targets. For example, lexical decision tasks require participants to classify stimuli as words and nonwords or to respond whether a word has been seen or heard previously within a session. This requirement may encourage participants to develop task-related strategies and poses a critical confound in measuring semantic priming, as it is not possible to ascertain whether priming effects are a consequence of the semantic context created by the prime word or are influenced by intentional and strategic processes involved in the selection of semantically related targets. Given that metalinguistic tasks have been noted to lack ecological validity, a known potential confound in assessment of people with brain injury (Chaytor & Schmitter-Edgecombe, 2003), the validity of results and conclusions derived from traditional priming methods is questionable for individuals who may have impaired metalinguistic abilities (often associated with brain injury). On the other hand, eyetracking methods do not require participants to make metalinguistic judgments (Tanenhaus, Magnuson, Dahan, & Chambers, 2000). Participants are simply encouraged to look at the visual targets in whichever way comes naturally to them. In such free-viewing conditions, participants can still develop strategies to improve task performance, such as expecting a semantically related target to be presented and directing their attention towards the onset of targets rather than completely processing primes. However, given that eyetracking priming methods do not entail any planned motor movements, participants have less incentive to engage in task-based strategies than in traditional priming methods, which require intentional responses towards related targets (Yee & Sedivy, 2006).

Fourth, traditional priming responses rely on the final interpretation of target stimuli, after the process of comprehension is complete. Due to this inherently offline nature of traditional priming tasks, results are mainly contingent upon the synthesis of the final product, as opposed to performance during the temporal course of lexical processing. The phenomenon of priming is an index of the relatively automatic and dynamic nature of online language processing, which includes activation, processing, integration, and retrieval of specific stored knowledge in real time (Shapiro, Swinney, & Borsky, 1997). Eyetracking methods enable continuous mapping of spoken language processing along with the processing task, without any interruption (Farris-Trimble & McMurray, 2013; Hallowell, Wertz, & Kruse, 2002; Yee, Blumstein, & Sedivy, 2008). Eye fixations can be time-locked to auditory or visual stimuli so that the cognitive-linguistic performance of participants during the task can be captured with enhanced temporal sensitivity (Altmann & Kamide, 1999; Mirman, Yee, Blumstein, & Magnuson, 2011; Yee & Sedivy, 2006). Additionally, given that no overt responses are required towards prime stimuli, it is possible to study semantic priming effects at SOAs that are shorter than participants’ reaction times (Neely, 1991).

In addition to these task-based advantages, eyetracking methods can provide additional and complementary information about lexical-semantic processing not afforded by some instrumentation-dependent techniques commonly used to measure priming effects, such as neuroimaging (e.g., positron emission tomography [PET] and functional magnetic resonance imaging [fMRI]; cf. Cabeza & Nyberg, 2000). Neuroimaging methods provide enhanced information regarding neural substrates involved in specific lexical-semantic processes. However, given that hemodynamic changes measured by neuroimaging occur over a time interval longer than the actual duration of most linguistic processing tasks (Osterhout, Kim, & Kuperberg, 2012), inadequate temporal resolution is a limitation innate to neuroimaging methods when studying real-time cognitive-linguistic processes, such as those underlying semantic priming. Eyetracking measures provide greater temporal resolution for capturing semantic priming because eye movements are recorded in real time while participants are engaged in the task. Although event-related potentials (ERPs) provide good temporal resolution (see Osterhout, McLaughlin, Kim, Greenwald, & Inoue, 2004; Rodriguez-Fornells, Münte, & Clahsen, 2002), they are highly sensitive to acoustic and electrical disturbances; small physical movements of participants trigger electrical discharges, reducing the reliability and accuracy of ERP results (Picton et al., 2000). Another challenge with instrumentation-based techniques is that minimal modifications to experimental design or stimuli can lead to changes in patterns of brain activation, which may be difficult to interpret (Marshall, 2001; cf. Uttal, 2003). In light of these potential advantages, several authors have successfully used eyetracking to study lexical-semantic processing in people without neurological problems (Carroll & Slowiaczek, 1986; Hyönä & Niemi, 1990; Rayner, 1998) and in people with aphasia (Dickey, Choy, & Thompson, 2007; Heuer & Hallowell, 2007; Ivanova & Hallowell, 2012; Mack, Ji, & Thompson, 2013; Meyer, Mack, & Thompson, 2012; Thiessen, Beukelman, Ullman, & Longenecker, 2014).

Eyetracking Methods for Semantic Priming

Despite the use of eyetracking methods to study varied cognitive-linguistic phenomena, few studies have addressed the need for eyetracking methods in capturing semantic priming. First, some eyetracking methods have used the Visual World Paradigm (VWP) to capture semantic priming. In this paradigm, eye movements are recorded while participants listen to spoken information and then point by hand or with a mouse-controlled cursor to specific objects on the computer screen (Allopenna et al., 1998; Mirman & Magnuson, 2009; Yee & Sedivy, 2006). The VWP is a sensitive tool offering good temporal resolution for detecting phonological competitor effects and semantic similarity, as participants listen to spoken information to guide their visual attention towards specific targets on the screen in real time. However, the requirement of overt responses using limb-based motor actions reintroduces some of the method-based confounds discussed earlier, including the influence of task-specific strategies and the requirement of planned motor responses, which are potentially confounding when testing people with aphasia. These limitations are addressed in the present study in that participants are not required to make overt responses or engage in tasks that can be considered as drawing on metalinguistic abilities.

Second, some eyetracking-based priming methods include recording eye movements using miniature video cameras mounted on a television screen on which visual stimuli are presented, requiring later offline frame-by-frame analyses of the direction and duration of eye fixations (Arias-Trejo & Plunkett, 2009; Arias-Trejo & Plunkett, 2013). Such methods are vulnerable to manual errors during analysis and coding (evoking concerns regarding intra- and inter-rater reliability) and lack sensitivity (temporal parameters of what constitutes an eye fixation are not defined, and the direction of fixations is simply coded as left, right, or other). Thus, they are not ideal for capturing such a temporally sensitive and dynamic linguistic phenomenon as semantic priming. The eyetracking method developed in the present study uses advanced image processing algorithms that automatically calculate eye fixations, thus capturing temporally dynamic and sensitive data and avoiding the manual and coding errors inherent to frame-by-frame analyses.

Third, most eyetracking methods use a single measure for studying semantic priming effects (e.g., proportion or average of fixations; Mirman & Magnuson, 2009; Yee et al., 2008; Yee et al., 2009). Given that semantic priming is a temporal phenomenon representing the complex interplay of several cognitive-linguistic processes in real time, there is great variability in behavioral performance within and across tasks. Performance also varies depending on the modality and duration of prime and target presentations and the nature of task requirements (Yee & Sedivy, 2006). The current eyetracking method includes careful examination of varied dependent measures (e.g., proportion of fixation duration, mean fixation duration, first-pass fixation duration, and latency of fixation) and their effectiveness in capturing semantic priming effects.

Fourth, images used in previously published eyetracking-based studies of semantic priming lack prior visual stimulus validation for use in eyetracking contexts. The selection criteria for semantically related prime-target candidates in some eyetracking methods are sometimes arbitrary (e.g., Yee & Sedivy, 2006). In other cases, selection is based on psycholinguistic criteria derived from other experimental studies. For example, Arias-Trejo and Plunkett (2009) used frequency and familiarity indices; Arias-Trejo and Plunkett (2013) used adult association norms. The selection criteria for visual stimuli are also based on experimenter judgments of category coordinates, shared functions (e.g., SAW-AXE and SCISSORS-KNIFE, as in Yee et al., 2008), or linguistic attributes (e.g., semantically related targets, rhyme competitors, cohort competitors, and semantically unrelated targets, as in Allopenna et al., 1998, and Farris-Trimble and McMurray, 2013). These issues are addressed in the current eyetracking method by conducting an initial validation of auditory prime words and their corresponding picture targets to ensure that prime-target pairs are highly effective in evoking priming effects before they are used as stimuli in the eyetracking-based method.

Odekar et al. (2009) addressed some of the limitations intrinsic to traditional semantic priming tasks and current eyetracking methods by developing a novel eyetracking method to index semantic priming effects in a visual cross-format priming context for single words. The method included testing the validity of a set of eyetracking measures to capture semantic priming effects with printed prime words and picture targets. The set of eyetracking measures was based on analyses of eye fixations recorded during presentation of picture targets following orthographic primes. Odekar et al. (2009) defined an eye fixation as a stable eye position lasting a minimum of 100 ms (a temporal threshold used to differentiate transitory eye movements from those involved in information processing; Manor & Gordon, 2003), with a minor degree of tolerance (4° vertically and 2° horizontally) for change in position. Such a high degree of temporal resolution afforded by eyetracking methods makes it possible to study semantic priming effects that emerge as early as 100 ms following the presentation of visual targets.

Purpose of the Current Study

The purpose of the current study was to develop and test the construct validity of an eyetracking method in capturing semantic priming in an auditory-visual, cross-modal priming context. The current study is similar to Odekar et al. (2009) in that a single-word semantic priming paradigm was used to capture semantic priming effects. In a single-word semantic priming paradigm, participants attend to a single string of letters (prime word) before responding to another word or image (target). This makes the task simpler to administer compared to procedures in which overt responses are required for two strings of target letters (Meyer & Schvaneveldt, 1971) or to prime words instead of targets (Meyer, Schvaneveldt, & Ruddy, 1975). The current study also expands upon and differs from Odekar et al. (2009) in that auditory prime words were used instead of orthographic prime words.

The decision to implement auditory priming in the present study was grounded in the following rationale. First, auditory priming is fundamental to lexical recognition and thus to auditory comprehension. Auditory skills emerge first during language development, whereas orthographic abilities are learned later in life. There is evidence that in people with aphasia, lexical access for auditory words is more stable and better facilitated compared to the same words presented orthographically (DeDe, 2012). In semantic priming research, it has been observed that words administered through the auditory modality lead to stronger priming effects than equivalent words presented visually (Anderson & Holcomb, 1995; Pilotti, Bergman, Gallo, Sommers, & Roediger, 2000). Second, spoken words unfold over a period of time, whereas written words remain available in an ongoing way; this prominent temporal difference between the two processes warrants deeper exploration for enhancing our understanding of language processing in people with and without neurological impairments. Given that auditorily presented primes in a single-word semantic priming context create “little interference with the on-going process of comprehension” (Tabossi, 1996, p. 573), a temporally sensitive method such as eyetracking, which requires no overt responses, may be ideal for capturing the online phenomenon of semantic activation produced by spoken stimuli as they unfold in real time. Third, semantic priming paradigms involving auditory priming have produced converging evidence that visual attention tends to be allocated towards semantically related image targets rather than unrelated images, even in the face of a mismatch along other dimensions (e.g., color or shape), highlighting the importance of linguistically influenced eye fixations as a sensitive index of conceptual overlap between spoken words and visual objects (Huettig & Altmann, 2004; Huettig & Altmann, 2005). Fourth, unlike orthographic priming methods, which require participants to comprehend printed words, auditory priming has potential applications for testing people with and without aphasia who are not literate, including bilinguals who may not be able to read or write in their second language.

Methods

The study included two experimental phases: traditional priming (picture naming) and eyetracking priming. Each phase included two conditions, each corresponding to a degree of semantic relation between prime and target: unrelated and related. For unrelated prime-target pairs, an auditorily presented prime word was semantically unrelated to the target image. For related pairs, an auditorily presented word was highly semantically related to the target image. The block of unrelated trials was always presented first in order to prevent obvious semantic relationships between prime-target pairs in the related condition from influencing participant performance on the picture targets in the unrelated condition (Odekar et al., 2009). This also meant that within each phase, participants saw the target images twice: first in the unrelated condition and again in the related condition.

The traditional priming experiment was conducted as part of the stimulus development (pair generation) phase to guide selection of the most suitable prime-target candidates for the eyetracking experiment. Each trial consisted of an auditory word prime presented via headphones, followed by a picture target on a computer screen. Participants named the image as quickly and accurately as possible, and the voice RT for each image was recorded. In the eyetracking priming experiment, each trial consisted of a spoken prime word plus a visual display containing a set of images, in which one image was highly associated with the auditory prime (target) and two other images were unrelated to the prime (nontarget foils). Participant eye movements in response to each array of images were monitored.

Participants

A total of 72 adult native speakers of American English participated, with 32 in the traditional priming phase (age range: 21 to 30 years; M = 22.75, SD = 2.13) and 40 in the eyetracking priming phase (age range: 21 to 27 years; M = 22.60, SD = 1.58). Participants were recruited via flyers and e-mails in the university community following Institutional Review Board (IRB) approval. No individual participated in both experiments. Participants with a self-reported history of any neurological impairment, learning disability, attention-deficit/hyperactivity disorder (ADHD), or attention-deficit disorder (ADD) were excluded. All participants had completed a minimum of a high school education.

Screening procedures included hearing and vision evaluations. Participants were required to pass a hearing screening for the frequencies of 500, 1000, and 2000 Hz at 30 dB SPL using pure-tone audiometry. Participants also underwent vision testing (Hallowell, 2008) to confirm (a) near-vision acuity sufficient to clearly identify 0.25-in.-square pictures and text presented on a hand-held chart from a 20-in. distance; (b) no signs of acute infection or inflammation such as drainage, swelling, or redness; (c) no fixation nystagmus; (d) no visual neglect or other visual attention deficits; and (e) peripheral vision within normal limits. The use of glasses or contact lenses was documented.

Materials

Developing prime-target pairs.

Visual stimuli.

The picture targets included a total of 129 gray-shaded images selected from the set of 260 object images originally developed by Snodgrass and Vanderwart (1980). Rossion and Pourtois (2004) later modified these images and established normative data for naming agreement, familiarity, complexity, and imagery judgments, enabling them to be used in a wide range of clinical studies of object recognition. These images were downloaded from the Tarr Laboratory Web site, Cognitive and Linguistic Sciences Department, Brown University. The 129 images were selected based on the results of a free-association task conducted in a previous eyetracking priming study (Odekar et al., 2009), which grouped the images into high-association target images and low-association nontarget foils.

Each image was individually processed using GIMP software (Version 2.6; GNU Image Manipulation Program, 2016) to remove color and to normalize brightness and contrast levels across the image set. A graphic template was used to maintain equal sizes across images. The rationale for using gray-shaded images was based on results from color diagnosticity and object categorization studies. Color diagnosticity represents the strength of association between an object and a specific color (Tanaka & Presnell, 1999). Naming performance is significantly facilitated for objects that are high in color diagnosticity (e.g., carrot or banana) compared to objects that are low in diagnosticity (e.g., book; Price & Humphreys, 1989; Therriault, Yaxley, & Zwaan, 2009). In addition, evidence from eyetracking studies suggests that viewers’ initial fixations are influenced by perceived physical attributes of the images in the visual display, including color and brightness (Heuer & Hallowell, 2007, 2009; Huettig & Altmann, 2011).
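The authors performed this preprocessing in GIMP; as a rough illustration, comparable steps could be scripted with the Pillow imaging library, as sketched below. The file names are placeholders, and the exact normalization settings used in the study are not specified.

    # Sketch of the preprocessing described above: remove color, normalize
    # contrast, and enforce a uniform size across the image set.
    from PIL import Image, ImageOps

    img = Image.open("snodgrass_item.png").convert("L")  # grayscale conversion
    img = ImageOps.autocontrast(img)                     # normalize contrast range
    img = img.resize((450, 450))                         # uniform template size
    img.save("snodgrass_item_gray.png")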

Auditory stimuli.

Auditory primes were derived from the high-frequency response words obtained from a free-association task in the Odekar et al. (2009) study. Given the inherent differences between the processing of printed versus auditory words, the orthographic word primes from Odekar et al. (2009) were not used in the current study. Instead, auditory prime words were developed by prerecording and digitizing high-frequency response words and testing their semantic relatedness to target images in a traditional picture naming task (pair generation phase). Auditory primes were recorded by a male native speaker of English in a soundproof booth using an Audio-Technica (AT 825) high-quality microphone connected directly to a PC. Each verbal stimulus was digitized (22 kHz, low-pass filtered at 10.5 kHz) and stored on the computer using Bliss software (Mertus, 2000). Adobe Audition (Version 3.0; Adobe Systems Inc., 2017) was used to edit each stimulus to (a) remove any noise at the start and end of the stimulus, (b) amplify, and (c) normalize for intensity.
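The editing steps above were performed in Adobe Audition; the sketch below shows comparable trimming and peak normalization with NumPy and SciPy, under the assumption of a mono 16-bit WAV file. The file names and the 2% silence threshold are placeholders.

    import numpy as np
    from scipy.io import wavfile

    rate, samples = wavfile.read("prime_word.wav")  # placeholder file name
    samples = samples.astype(np.float64)

    # (a) Trim leading/trailing samples below an assumed amplitude threshold.
    voiced = np.where(np.abs(samples) > 0.02 * np.abs(samples).max())[0]
    trimmed = samples[voiced[0]:voiced[-1] + 1]

    # (b, c) Amplify and normalize intensity via peak normalization
    # to 90% of 16-bit full scale.
    normalized = trimmed / np.abs(trimmed).max() * 0.9 * 32767
    wavfile.write("prime_word_edited.wav", rate, normalized.astype(np.int16))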

Traditional semantic priming (pair generation) phase.

Thirty-two adult native speakers of American English participated in a traditional priming task of picture naming. A familiarization phase preceded the naming task, based on a method developed by Allopenna, Magnuson, and Tanenhaus (1998) to measure the time course of spoken word recognition in an auditory-visual cross-format context. In the familiarization phase, participants were instructed to look at the image-name pairs and read the printed name aloud before proceeding to the next image-name display. Similar preview sessions have been used in cross-modal priming studies, albeit with a time constraint for each image-name display (e.g., 1,012 ms; Chen & Spence, 2010; Chen & Spence, 2011). In the current study, the familiarization phase was self-paced to allow participants sufficient time to look at each image and learn its name.

The naming task included the presentation of two blocks of trials. In the first block, each target image presented on the computer screen was preceded by an unrelated prime presented auditorily through headphones. In the next block, each target image was presented following a related prime, also presented auditorily. The order of image presentation was randomized within each block of trials using an online random-sequence generator but was held constant across the unrelated and related blocks of trials. For each trial, the auditory prime was presented first, immediately followed by the presentation of the image target. This step was undertaken to discourage participants from naming the target image without completely attending to the prime word. To minimize decrements in task performance due to cognitive fatigue or drifts in attentiveness over the course of the experiment, a five-minute break was included between unrelated and related trials.

DirectRT software was used for the presentation of the auditory primes through headphones and the visual stimuli on the computer screen. DirectRT is a suite of applications designed to administer a variety of tasks across randomized or fixed presentations of text, pictures, and sounds and to record voice RT in ms (Empirisoft, 2014). High-quality Sennheiser noise-canceling headphones were used to present the auditory stimuli. Each trial of the traditional priming condition included (a) presentation of a blank screen for 100 ms, (b) auditory presentation of an unrelated word (for unrelated trials) or a related word (for related trials), and (c) immediate presentation of the target image after the completion of the prime word. The image was presented in the center of the computer screen (size: 450 × 450 pixels, resolution: 300 pixels/in.) until a response was obtained. There was a pause of 1000 ms before the initiation of the next trial. Participants were instructed to name the target image as quickly as possible. A Sennheiser over-the-head noise-canceling boom microphone was used to record verbal responses. The microphone was connected to the computer through a phase amplifier to allow the recording of voice responses while minimizing environmental noise. Participants were instructed not to use overt fillers such as “um” or “er,” which would contaminate voice RT measures. DirectRT (Empirisoft Corporation, 2016) was programmed to capture and record RT data. During the testing session, the investigator recorded relevant observations, such as inaccurate verbal responses, response delays, and technical errors, on a separate sheet of paper. Trials corresponding to these events were excluded from the data analyses.
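The study used DirectRT for stimulus delivery and voice RT capture; the Python sketch below merely schematizes the trial timeline just described, with hypothetical stand-in helpers for the display, audio, and voice-key functionality.

    import time

    # Hypothetical I/O helpers standing in for DirectRT functionality.
    def show_blank_screen(): pass
    def play_audio(path): pass        # assumed to block until the prime ends
    def show_image(path): pass
    def wait_for_voice_onset(): pass  # voice key: returns at naming onset

    def run_trial(prime_wav, target_image):
        show_blank_screen()
        time.sleep(0.100)             # (a) blank screen for 100 ms
        play_audio(prime_wav)         # (b) auditory prime (related or unrelated)
        t0 = time.perf_counter()
        show_image(target_image)      # (c) target immediately after prime offset
        wait_for_voice_onset()        # participant names the image
        rt_ms = (time.perf_counter() - t0) * 1000.0
        time.sleep(1.000)             # 1000 ms pause before the next trial
        return rt_ms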

Of the 129 prime-target pairs included within each block of trials, results from 14 pairs were not included because they elicited ambiguous responses across participants. For instance, the image of lettuce was named as flower or rose by most participants. Furthermore, trials for which voice RTs were unusually delayed (more than 5000 ms) and trials for which responses were incorrect were excluded from analysis. Paired-samples t tests were conducted on the voice RT measures of the remaining 115 prime-target pairs, comparing unrelated and related trials. Item-wise mean RTs and their corresponding paired-samples t tests are summarized in Appendix A.
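A sketch of this item-wise analysis using SciPy is shown below; the RT arrays are randomly generated placeholders with the study's dimensions (32 participants × 115 items), and the alpha = .01 selection criterion anticipates the next section.

    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(0)
    # Rows = participants, columns = items; values are invented.
    rt_unrelated = rng.normal(700, 80, size=(32, 115))
    rt_related = rng.normal(640, 80, size=(32, 115))

    candidates = []
    for item in range(rt_unrelated.shape[1]):
        t, p = ttest_rel(rt_unrelated[:, item], rt_related[:, item])
        if p < .01 and t > 0:  # named significantly faster when related
            candidates.append(item)
    print(f"{len(candidates)} prime-target pairs meet the criterion")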

Selecting prime-target pairs.

The criterion for inclusion of any prime-target pair in the eyetracking experiment was that during the pair generation phase (traditional priming study), the high-association image was named significantly faster in the related compared to the unrelated condition (alpha = .01). A total of 42 prime-target pairs met this criterion (see Appendix A). Next, sets of triplet images were assembled, each set containing a high-association target image (taken from the 42 most significant prime-target pairs of the traditional priming condition) and two low-association images. The low-association images were designated as nontarget foils and bore a low-semantic relationship to the auditory prime corresponding to the target image within each triplet. The reason for assembling these triplets was that each trial in the eyetracking phase included a multiple-image array containing a visual target that shared a high semantic association with the auditory prime and two nontarget foils with no or low semantic associations. To determine which two images qualified to be low-association nontarget items corresponding to the prime, the following steps were taken, as consistent with Odekar et al. (2009):

  1. A random number table was used, with each number representing a picture stimulus. A set of five pictures was chosen as candidates for selecting the two low-association nontarget items for each prime word.

  2. Word association norms proposed by Palermo and Jenkins (1964) were used to ensure that the pictures randomly assigned as nontarget items were not related to the prime. If a word corresponding to any of the five images had been given as a response to the corresponding prime in the stimulus development process, new candidate images were selected from the random number table.

  3. To select the final two nontarget foils for each visual display and to determine the extent of their non-association with the corresponding primes, 20 additional adult native speakers of English with normal language abilities were recruited. These participants were instructed to rate the degree of association between the prime word and each of the five images selected as low-association nontarget foils. The rating was based on a 6-point rating scale, ranging from 0 (non-association) to 5 (moderate association). The two picture stimuli with the lowest ratings (2 or lower) were assigned as the final two nontarget low-association images for a particular prime word. For each prime word, the corresponding multiple-image array included one target image and two nontarget foils. These nontarget foils had not been used in the traditional priming condition; they were only used in the eyetracking phase to obtain a comparative index for eyetracking measures between fixations on visual targets versus fixations on nontarget foils.
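A minimal sketch of step 3, selecting the two lowest-rated candidates (mean rating of 2 or lower on the 0-5 scale) for a given prime, is shown below; the image names and ratings are invented.

    # image -> mean association rating with the prime (invented values)
    candidate_ratings = {
        "comb": 0.4, "drum": 1.1, "flag": 0.7, "vest": 2.8, "harp": 1.9,
    }
    eligible = {img: r for img, r in candidate_ratings.items() if r <= 2}
    foils = sorted(eligible, key=eligible.get)[:2]
    print(foils)  # the two lowest-rated images become the nontarget foils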

Of the 42 potential triplet sets selected as visual targets for the eyetracking phase, 22 sets were excluded because they shared common images with one or more of the other triplets. Based on previous eyetracking evidence that participants preferentially allocate initial visual attention towards objects that are phonological onset competitors of the target word compared to phonologically unrelated words, care was taken to ensure that the unrelated images within each triplet were not phonological onset competitors (e.g., beaker-beetle) of the target word (Allopenna et al., 1998; Yee & Sedivy, 2006). A total of 20 triplet sets were thus selected as regular trials for the eyetracking phase. An additional set of four trials (20% of the number of regular trials) was designated as sham trials and interspersed within the regular prime-target trials. Sham trials were built similarly to regular trials, with the exception that the presentation time for the image arrays in sham trials was set at 1000 ms, as opposed to 4000 ms in regular trials. This was done to prevent participants from developing strategies for object identification based on the anticipated duration for which the pictures would be presented in each trial. In total, 24 trials (20 regular trials and four sham trials) were presented in the eyetracking phase. Each trial was presented at two levels: the unrelated condition and the related condition. Within each condition, the ratio of regular to sham trials was 5:1, wherein a sequence of five regular trials was followed by a sham trial. A separate online random-sequence generator was used to determine the presentation order of regular and sham trials, ensuring that for a given 5:1 sequence, five regular trials and one sham trial were randomly selected from the set of 24 trials. The presentation order of trials was held constant across both conditions.
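The 5:1 sequencing just described could be generated as in the sketch below; the trial labels are placeholders, and the exact randomization procedure of the online generator used in the study is not specified.

    import random

    random.seed(0)  # fixed seed so the sequence is reproducible
    regular = [f"regular_{i:02d}" for i in range(1, 21)]  # 20 regular trials
    sham = [f"sham_{i}" for i in range(1, 5)]             # 4 sham trials
    random.shuffle(regular)
    random.shuffle(sham)

    sequence = []
    for block in range(4):  # four blocks: five regular trials, then one sham
        sequence.extend(regular[block * 5:(block + 1) * 5])
        sequence.append(sham[block])
    # The same order is then held constant across the two conditions.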

Before the eyetracking phase, a photometric luminance analysis was conducted on all 60 images constituting the set of 20 regular prime-target trials. This step was taken to prevent disproportionate visual allocation to images due to stimulus-driven aspects of images, such as luminance. The photometric luminance value (in cd/m²) for each of the images within a trial (one target item and two nontarget foils) was measured on three separate exposures using a Gossen Starlite 2 light meter. The mean luminance for each image was computed across the three exposures. Following this, the deviation of each image from the mean was calculated relative to the remaining images within the trial and across all the images used in regular prime-target trials. The luminance value of each image was within 1.5 standard deviations of the mean luminance value for all the images within the regular trial and within 2 standard deviations across all the trials, except for two outliers with standard deviations of 2.02 and 2.36, respectively. Results of the luminance analysis are provided in Appendix B.
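The screening logic can be sketched as follows; the light-meter readings are simulated, and flagging uses the 2-SD across-trials criterion described above.

    import numpy as np

    rng = np.random.default_rng(0)
    readings = rng.normal(55.0, 4.0, size=(60, 3))  # 60 images x 3 exposures (simulated)
    mean_lum = readings.mean(axis=1)                # mean luminance per image

    grand_mean = mean_lum.mean()
    grand_sd = mean_lum.std(ddof=1)
    z = (mean_lum - grand_mean) / grand_sd
    outliers = np.flatnonzero(np.abs(z) > 2)
    print(f"Images beyond 2 SD of the across-trial mean: {outliers}")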

Procedure

A total of 40 additional adult native speakers of American English participated in an eyetracking experiment. Each trial consisted of the presentation of an auditorily presented word prime, followed by a multiple-image display containing a set of three images and a blank space, one appearing in each of the four corners of the screen (size of pictures: 450 × 450 pixels, resolution: 300 pixels/in). For related trials, each image set consisted of one image target which was highly associated with the prime and the remaining two images (nontarget foils), which had a low association with the prime. For unrelated trials, none of the images were related to the auditory prime.

Each array of images in the eyetracking condition was arranged such that targets and nontarget foils appeared in all four corners of the display with equal frequency in each related/unrelated condition. The eyetracking phase included two conditions: unrelated and related. The unrelated condition was always presented first to avoid the influence of related trials on spontaneous eyetracking responses to the unrelated trials. The following instructions were given: “You will hear words through the headphones and see picture sets on a computer screen. Listen to the words and look at the pictures on the screen in whatever way comes naturally to you. You do not have to remember any of the words or pictures.”

The order of presentation of the stimulus items for each trial was: (a) presentation of a blank screen for 100 ms, (b) presentation of the auditory prime word, and (c) presentation of the visual array of images 400 ms after the onset of the auditory prime. The visual display within each trial lasted for a duration of 4 s. There is evidence that an SOA of less than 400 ms leads to automatic priming effects, whereas strategic priming processes are activated at SOAs above 400 ms (De Groot, 1984; Den Heyer, Briand, & Dannenbring, 1983; Neely, 1977). Since separating these processes was not a goal, the 400-ms SOA was intended to subsume the processes that lead to all priming effects. Although there is no consensus in the literature regarding the maximum duration of priming effects, evidence from ERP and eyetracking studies suggests that priming effects are sustained for a minimum of 2 s (Anderson & Holcomb, 1995; Odekar et al., 2009). Given this, and the fact that initial fixations on an image array are significantly determined by physical stimulus characteristics (such as contrast, brightness, and size) as opposed to semantic relationships (Henderson & Ferreira, 2004; Heuer & Hallowell, 2007), the multiple-image display within each trial was shown for 4 s to capture early and late priming effects. The trial sequence of the eyetracking phase is depicted in Figure 1. Items constituting the individual trial structure (auditory prime, target image, and nontarget foils) are provided in Appendix C.

Figure 1. Trial sequence in the eyetracking experiment.

Participants were given a 5-minute break between unrelated and related trials. A threshold of 100 ms and a tolerance of 6 degrees horizontally and 4 degrees vertically were used to determine eye fixations (Karsh & Breitenbach, 1983; Manor & Gordon, 2003). Data were analyzed for the duration of each trial (4 s). Similar to the naming task, the investigator manually recorded technical errors and participant behavior during task performance, including interruptions in auditory/visual presentations, loss of eye position (as seen on the investigator screen of the eyetracking machine), and trials in which participants may not have engaged in the task (e.g., appearing sleepy or tired). Trials corresponding to these observations were designated bad trials and excluded from the eyetracking analyses.
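As an illustration of how such thresholds can define fixations, the sketch below implements a simple dispersion-style filter: gaze must remain within the stated tolerances for at least 100 ms (six samples at the eyetracker's 60-Hz rate; see Instrumentation). The data format and algorithm are assumptions for illustration, not the Eyegaze system's proprietary procedure.

    def detect_fixations(gaze, rate_hz=60, min_ms=100, tol_x=6.0, tol_y=4.0):
        """gaze: list of (x_deg, y_deg) samples.
        Returns (start, end) sample-index pairs, end exclusive."""
        min_samples = int(min_ms / 1000 * rate_hz)  # 6 samples at 60 Hz
        fixations, start = [], 0
        for i in range(1, len(gaze) + 1):
            xs = [p[0] for p in gaze[start:i]]
            ys = [p[1] for p in gaze[start:i]]
            if max(xs) - min(xs) > tol_x or max(ys) - min(ys) > tol_y:
                # Dispersion exceeded: close the candidate window at i - 1.
                if (i - 1) - start >= min_samples:
                    fixations.append((start, i - 1))
                start = i - 1
        if len(gaze) - start >= min_samples:
            fixations.append((start, len(gaze)))
        return fixations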

Instrumentation.

Eye fixations were monitored using an LC Technologies Eyegaze pupil-center/corneal reflection system. The system uses an infrared light-emitting diode (LED) and an infrared-sensitive video camera to record pupillary and corneal points of reflection at a sampling rate of 60 Hz. For each sample, the system’s software performs a vector calculation of eye position using the relative positions of the pupil and corneal reflections with respect to images presented on a computer monitor (Hallowell et al., 2002; Odekar et al., 2009). According to the manufacturer, the typical eyetracking error for each fixation is less than 0.63 degrees of visual angle (LC Technologies, 2013). Advanced image processing algorithms accommodate several common sources of eyetracking errors, including pupil diameter variation and drooping eyelids. Even though the eyetracking system compensates for small physical movements, especially head movements, and therefore does not require head stabilization, a chin rest was used to ensure consistent positioning of participants’ eyes relative to the display. Participants first completed a calibration procedure for which they were instructed to focus on a small dot displayed on the screen and to follow it as it moved around the screen. This procedure took about 15 seconds.

Dependent measures.

Based on the Odekar et al. (2009) method, dependent measures in the current study included three types of fixation duration measures: (a) proportion of fixation duration, (b) mean fixation duration, and (c) first-pass fixation duration, plus a fourth measure, latency of fixation. All entail allocation of fixations to specific areas of interest within the visual display. A brief description of the measures is given in Table 1.

Table 1.

Brief Description of the Eyetracking Dependent Measures Used in the Current Study

1. Proportion of Fixation Duration (PFD): Fraction of time spent viewing an area of interest during a trial, calculated by dividing the fixation duration on a specific image by the total fixation time allocated towards viewing all the images within the display. The term “area of interest” corresponds to a specific image within the multiple-image display (e.g., picture target). (Hallowell et al., 2002; Ivanova & Hallowell, 2012; Knoblich, Ohlsson, & Raney, 2001; Odekar et al., 2009)

2. Mean Fixation Duration (MFD): Average time spent viewing an area of interest during a fixation, calculated by dividing the total fixation duration across all the fixations on a specific image by the total number of fixations allocated to the same image. (De Graef, Christiaens, & d’Ydewalle, 1990; Henderson, Weeks, & Hollingworth, 1999; Odekar et al., 2009)

3. First-Pass Fixation Duration (FPFD): Fixation time between the first fixation on and the first fixation away from an area of interest. (Kemper & McDowd, 2006; Odekar et al., 2009)

4. Latency of Fixation (LF): Total duration of looking anywhere within the multiple-image display before fixating on an area of interest for the first time. (De Graef et al., 1990; Henderson et al., 1999; Odekar et al., 2009)
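To make the definitions in Table 1 concrete, the sketch below computes all four measures from a single trial's fixation record; the record format (area of interest, onset in ms, duration in ms) and the sample values are assumptions for illustration.

    # One trial's fixations: (area_of_interest, onset_ms, duration_ms).
    fixations = [("target", 520, 310), ("foil_1", 860, 150), ("target", 1040, 420)]

    def pfd(fixs, aoi):  # proportion of fixation duration
        total = sum(d for _, _, d in fixs)
        return sum(d for a, _, d in fixs if a == aoi) / total

    def mfd(fixs, aoi):  # mean fixation duration
        durs = [d for a, _, d in fixs if a == aoi]
        return sum(durs) / len(durs) if durs else None

    def fpfd(fixs, aoi):  # first-pass fixation duration
        total, entered = 0, False
        for a, _, d in fixs:
            if a == aoi:
                total, entered = total + d, True
            elif entered:
                break  # gaze has left the area of interest
        return total if entered else None

    def lf(fixs, aoi):  # latency of fixation
        latency = 0
        for a, _, d in fixs:
            if a == aoi:
                return latency
            latency += d  # time spent looking elsewhere first
        return None

    print(pfd(fixations, "target"), mfd(fixations, "target"),
          fpfd(fixations, "target"), lf(fixations, "target"))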

Hypotheses

Hypotheses were based on well-known and robust patterns of traditional priming effects reported in the literature: namely, that there would be significant differences between unrelated and related conditions. We tested for this pattern with each of the four eyetracking-based measures in order to test the validity of the eyetracking method.

Hypothesis 1.

It was hypothesized that (a) the mean proportion of fixation duration (PFD) on the target image (highly associated with the auditory prime) would be significantly greater than the PFD on nontarget foils in the related condition, and (b) the PFD on the target would be significantly greater in the related compared to the unrelated condition. In free-viewing tasks, wherein participants are instructed not to search for or look at anything in particular but to look at the images naturally, there is evidence supporting greater visual attention allocation to images that are related to auditory stimuli compared to images that are unrelated (Allopenna et al., 1998; Altmann & Kamide, 1999; Dahan, Magnuson, & Tanenhaus, 2001; Hallowell, 1999; Hallowell et al., 2002; Heuer & Hallowell, 2007; Henderson et al., 1999). Odekar et al. (2009) found a similar result using written word primes with semantically related versus unrelated images.

Hypothesis 2.

It was hypothesized that the average mean fixation duration (MFD) on the target item would be (a) significantly greater compared to nontarget foils in the related condition, and (b) significantly greater in the related compared to the unrelated condition. Findings pertaining to this measure in other contexts have been inconsistent (De Graef et al., 1990; Henderson et al., 1999). In light of those inconsistencies, Odekar et al. (2009) posed a nondirectional hypothesis and then reported that average MFD indices were longer on picture targets that were related versus unrelated to orthographic primes; hence, the hypothesis for the current study is directional.

Hypothesis 3.

It was hypothesized that: (a) first-pass fixation duration (FPFD) values would be significantly greater on target compared to nontarget foils in the related condition, and (b) FPFD on the target would be significantly greater in the related compared to the unrelated condition. Odekar et al. (2009) found significant results for parallel hypotheses using their eyetracking-based method entailing written word primes to picture targets.

Hypothesis 4.

It was hypothesized that (a) latency of fixation (LF) would be significantly shorter for the target image than for nontarget images in the related condition, and (b) LF would be significantly shorter in the related compared to the unrelated condition. These hypotheses are analogous to the voice RT measure from the traditional semantic priming naming task and were based on evidence that latency to target fixation is shorter for semantically related items compared to unrelated foils (De Graef et al., 1990; Henderson et al., 1999) and longer for images sharing semantic interrelations with other images within a display (cohort competitors; Allopenna et al., 1998). Given that the images used in the visual displays were devoid of diagnostic color and controlled for luminance, it was reasoned that participants would spend less time looking elsewhere in the display before attending to semantically related images.

Results

Data Preparation

Data from the sham trials were excluded from the analyses. Of the 20 regular trials, eyetracking data from four trials were eliminated because the auditory prime words in these trials had an arguable semantic relationship with one of the nontarget images in the unrelated condition. This could have created a priming confound, as participants may have looked preferentially at those images. For instance, the prime word cook may have primed participants to attend preferentially to the image of lemon. Similarly, eyetracking data from an additional regular trial were excluded from the analyses, as the auditory prime word god was unintentionally used in both the unrelated condition (image: basket) and the related condition (image: church). Additionally, data from bad trials, as designated by the investigator’s experimental observations, were excluded from the analyses. Finally, eyetracking data were analyzed for 15 regular trials, which included a total of 45 target images and nontarget foils.

Analyses

Differences in mean voice RT between related and unrelated conditions for the 15 target items in the corresponding traditional priming trials were calculated to quantify the significant differences captured in that condition for that subset of items, t(31) = 9.42, p < .001 (one-tailed). The traditional priming voice RT results for those 15 items are shown in Figure 2.

Figure 2. Mean voice RT on target items across related and unrelated conditions.

Fixation duration measures on stimulus areas.

Participants allocated longer fixations towards target items compared to nontarget foils in the related condition. Furthermore, fixation duration measures were greater for related target items compared to unrelated items. Descriptive statistics for PFD, MFD, and FPFD in both the unrelated and related conditions are summarized in Table 2.

Table 2.

Descriptive Statistics for Fixation Duration Measures on Target Items

          Related Condition             Unrelated Condition
          PFD      MFD      FPFD       PFD      MFD      FPFD
M         0.67     658.41   1644.64    0.37     371.34   724.81
SD        0.18     252.96   687.41     0.05     48.18    218.07
SE        0.02     39.99    108.68     0.01     7.62     34.48

Note. N = 40. PFD = Proportion of Fixation Duration; MFD = Mean Fixation Duration; FPFD = First-Pass Fixation Duration. MFD and FPFD values are in milliseconds; PFD values are proportions.

Proportion of fixation duration (PFD).

The mean PFD for target items was significantly greater than that for the nontarget foils in the related condition, t(39) = 11.60, p < .001, d = 1.84. Additionally, the mean PFD was significantly greater for target items in the related condition compared to the unrelated condition, t(39) = 10.18, p < .001, d = 1.61.

Mean fixation duration (MFD).

In the related condition, the average MFD was significantly greater for target items than for the nontarget foils, t(39) = 7.73, p < .001, d = 1.22. The average MFD of target items was also significantly greater in the related condition than in the unrelated condition, t(39) = 7.05, p < .001, d = 1.11.

First pass fixation duration (FPFD).

The mean FPFD was significantly greater for target items than for the nontarget foils in the related condition, t(39) = 10.11, p < .001, d = 1.60. Furthermore, the average FPFD of target items was significantly greater in the related condition than in the unrelated condition, t(39) = 8.01, p < .001, d = 1.27.

Latency of fixation measure on stimulus areas.

The mean LF was significantly shorter for target items than for the nontarget foils in the related condition, t(39) = 2.03, p < .001, d = 0.32. Additionally, the mean LF was significantly shorter for target items in the related condition (M = 518.29 ms, SD = 137.00, SE = 21.66) than in the unrelated condition (M = 661.13 ms, SD = 193.07, SE = 30.53), t(39) = 5.51, p < .001, d = 0.87. LF results are depicted in Figure 4.
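To make the dependent measures concrete, the sketch below illustrates one way PFD, MFD, FPFD, and LF might be computed from a single trial's fixation sequence. The data structure and the operational definitions used here (PFD as area fixation time over total fixation time; FPFD as the first unbroken run of fixations on an area) follow common visual-world conventions and are assumptions, not the authors' documented implementation.

```python
# Sketch (assumed definitions, not the authors' code) of the four eyetracking
# measures computed from one trial's fixation sequence.
from dataclasses import dataclass

@dataclass
class Fixation:
    area: str        # image region fixated, e.g., "target", "foil1", "foil2"
    onset_ms: float  # fixation start, relative to display onset
    dur_ms: float    # fixation duration

def measures(fixations, area):
    on_area = [f for f in fixations if f.area == area]
    if not on_area:
        return None
    total_all = sum(f.dur_ms for f in fixations)
    pfd = sum(f.dur_ms for f in on_area) / total_all     # proportion of fixation duration
    mfd = sum(f.dur_ms for f in on_area) / len(on_area)  # mean fixation duration
    lf = on_area[0].onset_ms                             # latency to first fixation
    # First-pass fixation duration: the first unbroken run of fixations on the area.
    fpfd, started = 0.0, False
    for f in fixations:
        if f.area == area:
            fpfd += f.dur_ms
            started = True
        elif started:
            break
    return {"PFD": pfd, "MFD": mfd, "FPFD": fpfd, "LF": lf}

trial = [Fixation("foil1", 220, 180), Fixation("target", 510, 640),
         Fixation("target", 1160, 420), Fixation("foil2", 1610, 200)]
print(measures(trial, "target"))
# {'PFD': 0.736..., 'MFD': 530.0, 'FPFD': 1060.0, 'LF': 510}
```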

Figure 3.

Proportion of fixation duration on target items compared to non-target foils across related and unrelated conditions.

Discussion

Construct Validity: Semantic Priming Effects Indexed by Eyetracking Measures

Semantic priming effects were successfully captured by all four eyetracking measures: the fixation duration measures (PFD, MFD, and FPFD) and the latency measure (LF). In the related condition, a significantly greater proportion of fixation time was allocated to target images than to nontarget foils, and mean durations of individual fixations were greater for target images. Additionally, a greater proportion of fixation time was allocated to the target image in the related condition than in the unrelated condition, and mean durations of individual fixations were greater in the related condition. Therefore, all four eyetracking dependent measures may be considered valid indices for capturing semantic priming effects within the method tested. These results are consistent with those of Odekar et al. (2009), who concluded that such measures are effective in capturing orthographic-visual cross-format semantic priming. Unlike the Odekar et al. (2009) study, the intent of the traditional semantic priming phase in the current study was not to study priming effects in and of themselves; strong priming effects were expected based on the known degree of semantic relationship between spoken words and the content represented by images. Rather, the intent was to select the prime-target pairs that evoked the strongest semantic priming as candidate auditory-visual stimuli for the eyetracking priming study. Thus, no correlational analyses were conducted for dependent measures between the traditional priming and eyetracking phases.

Results from the present study demonstrate that the novel eyetracking method captures semantic priming effects in the auditory-visual cross-format context across all of the dependent measures tested. This implies that valid measures and a robust testing protocol are available for future studies in this line of research. Further empirical testing of the psychometric properties of each of these eyetracking measures in various priming contexts is needed.

Limitations

The main limitation of the current study is the potential influence of repetition priming on voice RT and eyetracking responses to targets in related trials as a consequence of prior exposure to the same targets in unrelated trials. This confound is inherent to within-participant designs and stems from carryover effects, wherein participants' responses in one experimental condition are influenced by prior exposure to a similar condition; in the cognitive literature it has also been referred to as "proactive interference" or "task-set inertia" (Altmann & Gray, 2008). In the current study, this confound makes it difficult to ascertain whether faster and more accurate responses to related targets were driven entirely by semantic priming or partly by repetition priming as well. We argue that, although the influence of repetition priming could not be ruled out, preventing prior exposure to the images within related pairs from influencing responses in the unrelated condition outweighed the risk of possible repetition priming effects. Given that the internal validity of within-participant designs does not rely on randomization (Charness, Gneezy, & Kuhn, 2012), presenting two separate conditions (unrelated followed by related) instead of counterbalancing unrelated and related prime-target pairs made it possible to examine the influence of the same pictures in different experimental conditions, as reflected by the set of dependent measures across both phases. Previous cross-modal semantic priming studies in people without neurological deficits (Chen & Spence, 2010; 2011) and auditory semantic priming studies in people with aphasia (Hagoort, 1993; Milberg, Blumstein, & Dworetzky, 1988) have used similar procedures to obtain within-participant indices of performance on related versus unrelated prime-target pairs.

In future studies, this confound can be addressed by balancing related and unrelated prime-target pairs within participants, with the order of presentation of related versus unrelated conditions counterbalanced between two groups of randomly assigned participants, as sketched below. However, such counterbalancing and randomization procedures may be best suited to priming studies requiring no overt responses (e.g., eyetracking), as opposed to traditional semantic priming tasks requiring consistent overt target responses throughout a given experimental session (e.g., naming). We argue that requiring overt responses to continually changing related and unrelated pairs amounts to a task-switching paradigm, wherein participants must continually monitor, anticipate, and respond to visual targets, engaging metalinguistic abilities of cognitive control and switch preparation (Altmann, 2004; Altmann & Gray, 2008), as opposed to responding to separate, consecutive blocks of unrelated and related prime-target pairs. An additional limitation is that the thorough selection criteria implemented to control for myriad potential confounds led to a relatively small set of stimuli that could be tested using the novel eyetracking method.
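A minimal sketch of such a counterbalancing scheme follows, assuming hypothetical participant IDs and two block orders; it illustrates the proposed fix, not a procedure used in the present study.

```python
# Sketch (illustrative only): randomly assign participants to one of two
# counterbalanced condition orders so that block order is balanced across the sample.
import random

def counterbalance(participant_ids, seed=1):
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)  # random assignment to groups
    orders = [("unrelated", "related"), ("related", "unrelated")]
    # Alternate through the shuffled list so each order receives half the sample.
    return {pid: orders[i % 2] for i, pid in enumerate(ids)}

assignment = counterbalance(range(1, 41))  # e.g., 40 participants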

Future Directions

In the future, developing new, well-controlled image sets for use in eyetracking studies (see criteria set forth by Heuer & Hallowell, 2007; 2009), rather than relying on existing image sets, would enable larger numbers of stimuli, and thus trials per condition, when testing eyetracking methods. To that end, prime-target stimuli can be developed by controlling for important psycholinguistic variables, including lexical frequency, familiarity, phonological length (number of syllables), age of acquisition, semantic category (e.g., objects, vehicles, foods), concreteness, and imageability (a screening sketch is given below). The target stimuli can then be converted into custom images for use in eyetracking and other visuo-cognitive experiments by controlling visual aspects of the images, such as luminance, size, orientation, and complexity.
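As an illustration of such screening, the sketch below filters hypothetical candidate words against invented cutoff values; real stimulus selection would draw on published norming databases, and none of the numbers here are from the present study.

```python
# Sketch (hypothetical norms and cutoffs): screening candidate target words
# against psycholinguistic criteria before image development.
candidates = {
    # word: (log_frequency, familiarity_1to7, n_syllables, age_of_acquisition)
    "anchor": (2.1, 5.8, 2, 6.5),
    "violin": (2.4, 6.1, 3, 7.2),
    "bed":    (3.9, 6.9, 1, 3.1),
}

def keep(word):
    freq, fam, syl, aoa = candidates[word]
    # Retain only frequent, familiar, short, early-acquired words (cutoffs invented).
    return freq >= 2.0 and fam >= 5.5 and syl <= 2 and aoa <= 7.0

selected = [w for w in candidates if keep(w)]
print(selected)  # ['anchor', 'bed']
```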

To minimize order effects, an ordering principle can be adopted in which the presentation list is divided into two experimental blocks, each containing equal numbers of unrelated and related items, rotated across semantic categories and presented in random order (see the sketch below). Given that semantic priming effects are influenced by a variety of factors, including task requirements (lexical decision, naming, etc.), semantic relatedness, modality, format, and the duration of prime and target items, we argue that using a single SOA provides only a snapshot of lexical-semantic processing at one point in time. It will therefore be important to use multiple prime-target time intervals to trace the time course of word activation (Yee & Sedivy, 2006) and to compare test performance in people with and without aphasia. The results of the present study complement previous work validating eyetracking methods for capturing semantic priming effects in cross-format contexts. Potential applications for people with aphasia are especially important to explore, given the method's strength of not requiring participants to follow complex instructions, use limb-based or spoken motor responses, or make overt judgments.
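The following sketch shows one way to build such a presentation list: items are split evenly by relatedness across two blocks, distributed across semantic categories, and shuffled within each block. The item tuples and category labels are illustrative placeholders, and multiple SOAs could be crossed with the resulting lists in the manner described above.

```python
# Sketch (illustrative only) of the ordering principle: two blocks with equal
# related/unrelated counts, items distributed across categories, random order.
import random
from itertools import cycle

def build_list(items, seed=7):
    """items: list of (prime, target, relatedness, category) tuples."""
    rng = random.Random(seed)
    blocks = [[], []]
    for rel in ("related", "unrelated"):
        pool = sorted((i for i in items if i[2] == rel), key=lambda i: i[3])
        # Alternate categories across the two blocks so each block samples all
        # categories and holds equal related/unrelated counts.
        for block, item in zip(cycle(blocks), pool):
            block.append(item)
    for block in blocks:
        rng.shuffle(block)  # random order within each block
    return blocks

items = [("bark", "dog", "related", "animals"),
         ("wool", "sheep", "related", "animals"),
         ("tea", "kettle", "unrelated", "objects"),
         ("left", "arrow", "unrelated", "objects")]
block1, block2 = build_list(items)
```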

Figure 4.

Mean latency of fixation on target items compared to non-target foils across related and unrelated conditions.

Acknowledgments:

This study was supported in part by an Ohio University Graduate Fellowship awarded to the first author and by funding from the National Institutes of Health/National Institute on Deafness and Other Communication Disorders (R43DC010079) and the National Science Foundation Biomedical Engineering Research to Aid Persons with Disabilities Program awarded to the second author. We thank Dr. Vanessa Shaw for assistance with data collection and Mr. Ken Dobo and Mr. James Herpy for help with stimulus development. We express our gratitude to Dr. James Montgomery, Dr. Chao-Yang Lee, Dr. Danny Moates, and Dr. Laura Chapman for valuable suggestions regarding experimental design and manuscript review.

Appendix A: Itemized Paired Comparisons of Target Picture Naming RTs

Trial Target Related Prime Unrelated Prime Related RT (ms) Unrelated RT (ms) t df p
1 anchor ship wool 916.71 966.42 −0.78 30 .441
2* arrow left autumn 723.26 802.71 −3.00 30 .005
3 axe wood toast 740.10 847.69 −2.99 28 .006
4 baby carriage baby drummer 752.17 1091.90 −3.61 28 .001
5 ball bounce smell 617.03 778.78 −4.44 31 .000
6* basket baseball god 679.00 875.47 −5.49 31 .000
7 bat picnic write 571.47 815.41 −5.28 31 .000
8 bed sleep love 651.97 957.19 −5.82 31 .000
9 bell ring turn 829.50 905.28 −1.61 31 .118
10 belt pants milk 801.10 812.90 −0.60 30 .556
11 bicycle ride smoke 767.06 829.84 −2.15 31 .040
12* book read halloween 676.72 879.94 −6.29 31 .000
13 bow tie picnic 893.29 1044.19 −2.33 20 .031
14* bowl cereal hat 731.03 890.74 −4.86 30 .000
15 broom sweep lion 789.24 1016.17 −1.97 28 .059
16* brush hair nuts 763.28 999.93 −5.67 28 .000
17 bus school spoon 788.11 947.19 −4.13 26 .000
18 butterfly wings stop 762.74 765.29 −0.11 30 .917
19* cake birthday feet 737.06 840.10 −2.67 30 .012
20 camel hump measure 888.90 995.47 −2.34 29 .026
21 cannon war pepper 813.10 1029.58 −4.52 30 .000
22 carrot rabbit baseball 810.06 820.42 −0.41 30 .685
23 chair sit salad 877.90 908.03 −0.86 30 .397
24 church god sew 767.61 991.16 −4.52 30 .000
25 clock time cook 733.59 942.44 −3.35 31 .002
26 clown scary night 851.06 856.29 −0.13 30 .896
27 coat warm bounce 907.80 1027.28 −1.46 24 .158
28 corn cob slow 882.93 1049.21 −2.26 28 .031
29 cow milk rain 862.90 1179.68 −3.82 30 .001
30 crown king ring 760.65 973.77 −5.30 30 .000
31* dog bark tall 651.16 879.34 −5.21 31 .000
32* door open jeans 831.56 1075.56 −3.98 31 .000
33 door knob turn baby 1002.50 1023.83 −0.48 29 .634
34* drum drummer sting 782.75 947.19 −4.06 31 .000
35 duck quack left 576.13 957.94 −6.58 30 .000
36 ear hear light 753.23 774.35 −0.50 30 .621
37 elephant big oink 820.07 1024.70 −5.36 26 .000
38 envelope letter home 848.77 836.17 0.23 29 .820
39 eye see play 656.03 767.42 −3.24 30 .003
40 fence white hoot 844.69 834.56 0.33 31 .742
41 finger point set 781.10 844.87 −1.31 30 .199
42 flag america ball 702.62 958.93 −4.74 28 .000
43 fork spoon chair 802.80 851.37 −0.69 29 .495
44 garbage can trash feathers 903.65 1079.23 −2.57 30 .016
45 giraffe tall pants 902.79 982.07 −1.13 27 .267
46 glasses eyes iron 769.20 739.43 1.28 29 .211
47 glove hand fruit 947.07 866.79 1.68 28 .105
48 gorilla monkey tie 775.73 912.53 −1.38 29 .177
49 grapes fruit key 773.34 852.56 −3.52 31 .001
50 guitar music watch 834.37 882.30 −1.20 29 .241
51 gun bullet thread 854.03 815.30 0.30 29 .764
52 hammer nail rabbit 868.09 802.53 2.44 31 .021
53 hand finger quack 872.15 802.37 1.20 26 .243
54 hanger clothes snow 765.32 795.13 −0.88 30 .383
55 hat head flowers 851.59 881.03 −0.58 31 .563
56 heart love point 672.00 807.09 −2.95 31 .006
57 house home bow 861.23 812.77 1.07 25 .293
58 ironing board iron jump 777.11 851.61 −1.55 27 .133
59 kangaroo jump wine 762.13 846.73 −4.11 29 .000
60 kettle tea hump 1016.32 1019.00 −0.05 24 .964
61 kite fly pan 781.63 808.31 −0.97 31 .342
62* ladder climb monkey 695.13 889.34 −4.20 31 .000
63 lamp light ocean 884.04 869.64 0.27 27 .791
64 leaf autumn king 710.15 844.96 −3.15 25 .004
65 lemon sour time 841.50 814.88 0.63 31 .535
66 leopard spots hammer 970.14 1058.29 −1.40 20 .176
67 lips kiss nail 685.52 807.87 −3.63 30 .001
68 lock key water 828.97 959.58 −2.95 30 .006
69 mitten glove sweep 976.28 774.28 2.19 24 .038
70 moon night teeth 878.84 783.58 2.81 30 .009
71 necklace pearls hear 672.00 818.10 −3.55 30 .001
72 nose smell climb 629.45 762.55 −3.54 30 .001
73 onion cry clothes 1013.96 862.11 2.14 26 .042
74 owl hoot hand 705.78 823.50 −2.51 31 .018
75* paintbrush paint ride 792.72 988.44 −3.50 17 .003
76 pants jeans see 806.97 766.97 0.92 29 .366
77 peacock feathers america 843.21 1144.93 −1.69 28 .102
78 pencil write fly 796.48 752.29 1.62 30 .116
79* pig oink read 649.54 894.58 −4.62 25 .000
80 pipe smoke bark 832.61 847.68 −0.61 30 .545
81 pliers tool cob 964.91 1090.17 −1.73 22 .097
82 pumpkin halloween cry 843.93 879.50 −0.62 29 .543
83 refrigerator food letter 832.42 842.39 −0.39 30 .702
84 rhinoceros horns ship 767.61 870.96 −2.62 27 .014
85 ring marriage stripes 792.44 910.56 −2.97 31 .006
86 rocking chair grandma sour 875.03 976.48 −2.12 28 .043
87 ruler measure head 840.80 853.07 −0.25 29 .801
88 scissors cut big 818.81 1106.58 −1.14 30 .264
89 sea horse ocean spin 1065.59 1050.93 0.16 28 .872
90* sheep wool sit 796.70 980.30 −3.61 26 .001
91 shoe feet eggs 819.61 845.65 −0.65 30 .520
92 skunk stink kiss 691.12 924.52 −5.47 24 .000
93 sled snow music 967.70 998.33 −0.76 29 .455
94 snail slow red 876.56 886.72 −0.21 31 .833
95 spoon fork sleep 809.03 883.48 −1.67 28 .105
96 squirrel nuts fork 976.53 911.03 0.78 29 .440
97 swing set horns 1014.07 1066.30 −0.93 26 .359
98 table chair stink 990.13 920.00 1.46 30 .156
99 television watch butter 906.84 932.77 −0.64 30 .525
100 tennis racket ball spots 951.40 1094.30 −2.61 19 .017
101 tie neck wood 810.00 854.47 −1.20 29 .241
102 tiger lion tea 1132.35 1026.61 1.02 22 .320
103 toaster toast eyes 887.83 1160.10 −2.71 28 .011
104 tomato red white 1006.39 1090.58 −1.23 30 .230
105 toothbrush teeth warm 834.43 932.65 −2.32 22 .030
106 top spin on/off 833.75 1194.54 −3.90 23 .001
107 umbrella rain pearls 847.61 903.23 −1.58 30 .124
108* vase flowers birthday 831.50 990.38 −3.70 31 .001
109 violin play bullet 1096.72 1313.11 −2.50 17 .023
110 watermelon seeds marriage 842.94 876.29 −0.80 30 .430
111 well water finger 943.10 1021.26 −1.31 30 .200
112 whistle bow open 986.00 833.25 2.84 27 .008
113 windmill wind bolt 929.27 1218.27 −2.43 25 .023
114 wineglass wine school 716.17 1015.10 −6.71 28 .000
115 zebra stripes glove 903.23 894.68 0.33 30 .744

Notes. alpha = 0.01.

* Items selected for the eyetracking phase.

Appendix B: Descriptive Statistics for the Photometric Luminance of Images in Regular Trials (Eyetracking Phase)

            Target                   Foil 1                   Foil 2
Trial No.   ML      SDLT    SDLO     ML      SDLT    SDLO     ML      SDLT    SDLO    Trial Mean
1 132.80 1.15 1.60 124.80 −0.58 1.04 124.80 −0.58 1.04 127.47
2 94.40 −0.98 −1.12 124.80 1.02 1.04 108.80 −0.04 −0.10 109.33
3 132.80 1.15 1.60 124.80 −0.58 1.04 124.80 −0.58 1.04 127.47
4 100.80 −0.07 −0.66 108.80 1.03 −0.10 94.40 −0.96 −1.12 101.33
5 88.00 −0.85 −1.57 108.80 1.10 −0.10 94.40 −0.25 −1.12 97.07
6 100.80 −0.80 −0.66 132.80 1.12 1.60 108.80 −0.32 −0.10 114.13
7 94.40 −1.15 −1.12 108.80 0.58 −0.10 108.80 0.58 −0.10 104.00
8 108.80 −1.13 −0.10 116.00 0.36 0.41 118.00 0.77 0.55 114.27
9* 76.80 −1.09 −2.36 108.80 0.22 −0.10 124.80 0.87 1.04 103.47
10* 81.60 −0.90 −2.02 116.00 1.07 0.41 94.40 −0.17 −1.12 97.33
11 108.80 −0.58 −0.10 108.80 −0.58 −0.10 124.80 1.15 1.04 114.13
12* 116.00 −0.07 0.41 124.80 1.03 1.04 108.80 −0.97 −0.10 116.53
13 108.80 −0.58 −0.10 108.80 −0.58 −0.10 132.80 1.15 1.60 116.80
14 124.80 0.58 1.04 124.80 0.58 1.04 116.00 −1.15 0.41 121.87
15 116.00 1.15 0.41 108.80 −0.58 −0.10 108.80 −0.58 −0.10 111.20
16* 88.00 −0.39 −1.57 81.60 −0.74 −2.02 116.00 1.14 0.41 95.20
17 116.00 0.98 0.41 108.80 0.04 −0.10 100.80 −1.02 −0.66 108.53
18 116.00 −0.58 0.41 116.00 −0.58 0.41 132.80 1.15 1.60 121.60
19 124.80 0.37 1.04 94.40 −1.13 −1.12 132.80 0.76 1.60 117.33
20 124.80 0.58 1.04 108.80 −1.15 −0.10 124.80 0.58 1.04 119.47

Notes.

* Excluded from the eyetracking analysis.

Overall mean = 110.17.

ML = mean luminance value across three observations; SDLT = standardized deviation (z-score) of the image's mean luminance relative to all images within its trial; SDLO = standardized deviation (z-score) of the image's mean luminance relative to all images in the eyetracking phase (negative values indicate luminance below the relevant mean).
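For illustration, the sketch below shows one plausible way to derive such values: mean luminance via a standard luma weighting of RGB channels, and within-trial z-scores, which reproduce the pattern of the SDLT column (e.g., trial 1: luminances 132.80, 124.80, 124.80 yield 1.15, −0.58, −0.58). The luma formula and file-handling details are assumptions, not the authors' documented procedure.

```python
# Sketch (assumed procedure, not the authors' code): mean luminance of an image
# and standardized (z-score) deviations of image luminances within a trial.
import numpy as np
from PIL import Image

def mean_luminance(path):
    """Mean luma of an image file, using the Rec. 601 weighting (an assumption)."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return luma.mean()

def z_scores(values):
    """Each value's deviation from the group mean, in SD units (ddof=1)."""
    v = np.asarray(values, dtype=float)
    return (v - v.mean()) / v.std(ddof=1)

print(np.round(z_scores([132.80, 124.80, 124.80]), 2))  # [ 1.15 -0.58 -0.58]
```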

Appendix C: Eyetracking Trials

Trial Trial type Auditory prime Target image Nontarget foil 1 Nontarget foil 2
1 regular baseball basket bow heart
2* regular cereal bowl light switch bee
3* regular paint paint brush salt shaker glasses
4* regular read book skunk tooth brush
5* regular birthday cake umbrella onion
6 regular god church bat squirrel
7* regular king crown leaf sea horse
8* regular bark dog kite guitar
9* regular open door whistle grapes
10* regular drummer drum carriage traffic light
11 regular quack duck hand wine glass
12* regular climb ladder nose ball
13* regular oink pig elephant necklace
14* regular wool sheep anchor violin
15* regular flowers vase hat spool of thread
16 regular time clock lemon leopard
17* regular hair brush frying pan peacock
18 regular marriage ring ear nail
19 regular night moon clown hanger
20* regular left arrow butterfly fork
21 sham trash garbage can nut television
22 sham nail hammer lips watermelon
23 sham glove mitten zebra toaster
24 sham cob corn swing owl

Notes. Trials were presented in random sequence.

* Trials included in the analyses.

Footnotes

1. Odekar et al. (2009) used the term "semantic (associative) priming" to indicate the cumulative context effects of pure semantic and associative relatedness between the prime and target. In the present study, we use the term "semantic priming," as it is believed to encapsulate both forms of priming for all practical purposes (Fischler, 1977; Lucas, 2000; McNamara, 2005).

Contributor Information

Javad Anjum, University of Mary.

Brooke Hallowell, Ohio University.

References

1. Adobe Systems Inc. [Computer software]. (2017). Retrieved from http://supportdownloads.adobe.com/detail.jsp?ftpID=4237
2. Allopenna PD, Magnuson JS, & Tanenhaus MK (1998). Tracking the time course of spoken word recognition using eye movements: Evidence for continuous mapping models. Journal of Memory and Language, 38(4), 419–439. 10.1006/jmla.1997.2558
3. Altmann EM (2004). The preparation effect in task switching: Carryover of SOA. Memory and Cognition, 32(1), 153–163. 10.3758/bf03195828
4. Altmann EM, & Gray WD (2008). An integrated model of cognitive control in task switching. Psychological Review, 115(3), 602–639. 10.1037/0033-295X.115.3.602
5. Altmann GTM, & Kamide Y (1999). Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3), 247–264. 10.1016/S0010-0277(99)00059-1
6. Anderson JE, & Holcomb PJ (1995). Auditory and visual semantic priming using different stimulus onset asynchronies: An event-related potential study. Psychophysiology, 32(2), 177–190. 10.1111/j.1469-8986.1995.tb03310.x
7. Anderson TJ, & MacAskill MR (2013). Eye movements in patients with neurodegenerative disorders. Nature Reviews Neurology, 9(2), 74–85. 10.1038/nrneurol.2012.273
8. Andreou C, Veith K, Bozikas VP, Lincoln TM, & Moritz S (2014). Effects of dopaminergic modulation on automatic semantic priming: A double-blind study. Journal of Psychiatry & Neuroscience, 39(2), 110–117. 10.1503/jpn.130035
9. Angwin AJ, Chenery HJ, Copland DA, Murdoch BE, & Silburn PA (2005). Summation of semantic priming and complex sentence comprehension in Parkinson's disease. Cognitive Brain Research, 25(1), 78–89. 10.1016/j.cogbrainres.2005.04.008
10. Arias-Trejo N, & Plunkett K (2009). Lexical-semantic priming effects during infancy. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1536), 3633–3647.
11. Arias-Trejo N, & Plunkett K (2013). What's in a link: Associative and taxonomic priming effects in the infant lexicon. Cognition, 128(2), 214–227. 10.1016/j.cognition.2013.03.008
12. Balota DA (1994). Visual word recognition: The journey from features to meaning. In Gernsbacher MA (Ed.), Handbook of psycholinguistics (pp. 303–348). San Diego, CA: Academic Press.
13. Balota DA, & Duchek JM (1988). Age-related differences in lexical access, spreading activation, and simple pronunciation. Psychology and Aging, 3(1), 84–93. 10.1037//0882-7974.3.1.84
14. Boyd JE, Patriciu I, McKinnon MC, & Kiang M (2014). Test-retest reliability of N400 event-related brain potential measures in a word-pair semantic priming paradigm in patients with schizophrenia. Schizophrenia Research, 158(1–3), 195–203. 10.1016/j.schres.2014.06.018
15. Branigan HP, Pickering MJ, Liversedge SP, Stewart AJ, & Urbach TP (1995). Syntactic priming: Investigating the mental representation of language. Journal of Psycholinguistic Research, 24(6), 489–506. 10.1007/BF02143163
16. Brodeur MB, Dionne-Dostie E, Montreuil T, & Lepage M (2010). The Bank of Standardized Stimuli (BOSS), a new set of 480 normative photos of objects to be used as visual stimuli in cognitive research. PLoS ONE, 5(5), e10773. 10.1371/journal.pone.0010773
17. Bushell C (1996). Dissociated identity and semantic priming in Broca's aphasia: How controlled processing produces inhibitory semantic priming. Brain and Language, 55(2), 264–288. 10.1006/brln.1996.0104
18. Cabeza R, & Nyberg L (2000). Imaging cognition II: An empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience, 12(1), 1–47. 10.1162/08989290051137585
19. Carroll P, & Slowiaczek ML (1986). Constraints on semantic priming in reading: A fixation time analysis. Memory and Cognition, 14(6), 509–522. 10.3758/BF03202522
20. Carter MD, Hough MS, Stuart A, & Rastatter MP (2011). The effects of inter-stimulus interval and prime modality in a semantic priming task. Aphasiology, 25(6–7), 761–773. 10.1080/02687038.2010.539697
21. Charness G, Gneezy U, & Kuhn M (2012). Experimental methods: Between-subject and within-subject design. Journal of Economic Behavior and Organization, 81(1), 1–8.
22. Chaytor N, & Schmitter-Edgecombe M (2003). The ecological validity of neuropsychological tests: A review of the literature on everyday cognitive skills. Neuropsychology Review, 13(4), 181–197. 10.1023/B:NERV.0000009483.91468.fb
23. Chen Y-C, & Spence C (2010). When hearing the bark helps to identify the dog: Semantically-congruent sounds modulate the identification of masked pictures. Cognition, 114(3), 389–404. 10.1016/j.cognition.2009.10.012
24. Chen Y-C, & Spence C (2011). Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity. Journal of Experimental Psychology: Human Perception and Performance, 37(5), 1554–1568. 10.1037/a0024329
25. Chetail F, & Mathey S (2009). Syllabic priming in lexical decision and naming tasks. Canadian Journal of Experimental Psychology, 63(1), 40–48. 10.1037/a0012944
26. Choy JJ, & Thompson CK (2005). Online comprehension of anaphor and pronoun constructions in Broca's aphasia: Evidence from eyetracking. Brain and Language, 95(1), 119–120. 10.1016/j.bandl.2005.07.064
27. Dahan D, Magnuson JS, & Tanenhaus MK (2001). Time course of frequency effects in spoken-word recognition: Evidence from eye movements. Cognitive Psychology, 42(4), 317–367. 10.1006/cogp.2001.0750
28. DeDe G (2012). Effects of word frequency and modality on sentence comprehension impairments in people with aphasia. American Journal of Speech-Language Pathology, 21(2), S103–S114. 10.1044/1058-0360(2012/11-0082)
29. De Graef P, Christiaens D, & d'Ydewalle G (1990). Perceptual effects of scene context on object identification. Psychological Research, 52(4), 317–329. 10.1007/BF00868064
30. DeGroot AM (1984). Primed lexical decision: Combined effects of the proportion of related prime-target pairs and the stimulus-onset asynchrony of prime and target. Quarterly Journal of Experimental Psychology, 36(2), 253–280. 10.1080/14640748408402158
31. Del Toro J (2000). An examination of automatic versus strategic semantic priming effects in Broca's aphasia. Aphasiology, 14(9), 925–947. 10.1080/02687030050127720
32. Den Heyer K, Briand K, & Dannenbring G (1983). Strategic factors in a lexical decision task: Evidence for automatic and attention-driven processes. Memory and Cognition, 11(4), 374–381. 10.3758/BF03202452
33. Eddy MD, & Holcomb PJ (2010). The temporal dynamics of masked repetition picture priming effects: Manipulations of stimulus-onset asynchrony (SOA) and prime duration. Brain Research, 1340, 24–39. 10.1016/j.brainres.2010.04.024
34. Empirisoft Corporation (2016). DirectRT [Computer software]. Available from http://www.empirisoft.com/directrt.aspx
35. Farris-Trimble A, & McMurray B (2013). Test-retest reliability of eye tracking in the visual world paradigm for the study of real-time spoken word recognition. Journal of Speech, Language, and Hearing Research, 56(4), 1328–1345. 10.1044/1092-4388(2012/12-0145)
36. Ferrand L, Segui J, & Grainger J (1996). Masked priming of word and picture naming: The role of syllabic units. Journal of Memory and Language, 35(5), 708–723. 10.1006/jmla.1996.0037
37. Fischler I (1977). Semantic facilitation without association in a lexical decision task. Memory and Cognition, 5(3), 335–339. 10.3758/BF03197580
38. Forster KI (1981). Priming and the effects of the sentence and lexical contexts on naming time: Evidence for autonomous lexical processing. Quarterly Journal of Experimental Psychology, 33A, 465–495. 10.1080/14640748108400804
39. Forster KI (2004). Category size effects revisited: Frequency and masked priming effects in semantic categorization. Brain and Language, 90(1–3), 276–286. 10.1016/S0093-934X(03)00440-1
40. Frost R, Kugler T, Deutsch A, & Forster K (2005). Orthographic structure versus morphological structure: Principles of lexical organization in a given language. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(6), 1293–1326. 10.1037/0278-7393.31.6.1293
41. GNU Image Manipulation Program [Computer software]. (2016). Retrieved from http://www.gimp.org/
42. Goldinger SD (1996). Auditory lexical decision. Language and Cognitive Processes, 11(6), 559–567. 10.1080/016909696386944
43. Gordon WP (1983). Memory disorders in aphasia—I. Auditory immediate recall. Neuropsychologia, 21(4), 325–339. 10.1016/0028-3932(83)90019-2
44. Hagoort P (1993). Impairments of lexical-semantic processing in aphasia: Evidence from the processing of lexical ambiguities. Brain and Language, 45(2), 189–232. 10.1006/brln.1993.1043
45. Hagoort P (1997). Semantic priming in Broca's aphasia at a short SOA: No support for an automatic access deficit. Brain and Language, 56(2), 287–300. 10.1006/brln.1997.1849
46. Hallowell B (1999). A new way of looking at linguistic comprehension. In Becker W, Deubel H, & Mergner T (Eds.), Current oculomotor research: Physiological and psychological aspects (pp. 287–291). New York, NY: Plenum Press.
47. Hallowell B (2008). Strategic design of protocols to evaluate vision in research on aphasia and related disorders. Aphasiology, 22(6), 600–617. 10.1080/02687030701429113
48. Hallowell B, & Chapey R (2008). Introduction to language intervention strategies in adult aphasia. In Chapey R (Ed.), Language intervention strategies in aphasia and related communication disorders (5th ed., pp. 3–19). Philadelphia: Lippincott Williams & Wilkins.
49. Hallowell B, Wertz RT, & Kruse H (2002). Using eye-movement responses to index auditory comprehension: An adaptation of the Revised Token Test. Aphasiology, 16(4–6), 587–594. 10.1080/02687030244000121
50. Henderson JM, & Ferreira F (2004). Scene perception for psycholinguists. In Henderson JM & Ferreira F (Eds.), The interface of language, vision, and action: Eye movements and the visual world (pp. 1–58). New York: Psychology Press.
51. Henderson JM, Weeks PA, & Hollingworth A (1999). The effects of semantic consistency on eye movements during complex scene viewing. Journal of Experimental Psychology: Human Perception and Performance, 25(1), 210–228. 10.1037/0096-1523.25.1.210
52. Heuer S, & Hallowell B (2007). An evaluation of test images for multiple-choice comprehension assessment in aphasia. Aphasiology, 21(9), 883–900. 10.1080/02687030600695194
53. Heuer S, & Hallowell B (2009). Visual attention in a multiple-choice task: Influences of image characteristics with and without presentation of a verbal stimulus. Aphasiology, 23(3), 351–363. 10.1080/02687030701770474
54. Heyman T, Van Rensbergen B, Storms G, Hutchison KA, & De Deyne S (2015). The influence of working memory load on semantic priming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(3), 911–920. 10.1037/xlm0000050
55. Howells SR, & Cardell EA (2015). Semantic priming in anomic aphasia: A focused investigation using cross-modal methodology. Aphasiology, 29(6), 744–761. 10.1080/02687038.2014.985184
56. Huettig F, & Altmann GTM (2004). The online processing of ambiguous and unambiguous words in context: Evidence from head-mounted eye-tracking. In Carreiras M & Clifton C (Eds.), The online study of sentence comprehension: Eyetracking, ERP and beyond (pp. 187–207). New York, NY: Psychology Press.
57. Huettig F, & Altmann GTM (2005). Word meaning and the control of eye fixation: Semantic competitor effects and the visual world paradigm. Cognition, 96(1), 23–32. 10.1016/j.cognition.2004.10.003
58. Huettig F, & Altmann GTM (2011). Looking at anything that is green when hearing "frog": How object surface colour and stored object colour knowledge influence language-mediated overt attention. The Quarterly Journal of Experimental Psychology, 64(1), 122–145. 10.1080/17470218.2010.481474
59. Hutchison KA (2007). Attentional control and the relatedness proportion effect in semantic priming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(4), 645–662. 10.1037/0278-7393.33.4.645
60. Hutchison KA, Balota DA, Neely JH, Cortese MJ, Cohen-Shikora ER, Tse C-S, . . . Buchanan E (2013). The semantic priming project. Behavior Research Methods, 45(4), 1099–1114. 10.3758/s13428-012-0304-z
61. Hyönä J, & Niemi P (1990). Eye movements during repeated reading of a text. Acta Psychologica, 73(3), 259–280. 10.1016/0001-6918(90)90026-C
62. Ivanova MV, & Hallowell B (2012). Validity of an eye-tracking method to index working memory in people with and without aphasia. Aphasiology, 26(3–4), 556–578. 10.1080/02687038.2011.618219
63. Ivanova MV, & Hallowell B (2013). A tutorial on aphasia test development in any language: Key substantive and psychometric considerations. Aphasiology, 27(8), 891–920. 10.1080/02687038.2013.805728
64. Jacob R, & Karn K (2003). Eyetracking in human-computer interaction and usability research: Ready to deliver the promises. In Radach R, Hyönä J, & Deubel H (Eds.), The mind's eye: Cognitive and applied aspects of eye-movement research (pp. 573–605). Holland: Elsevier Press.
65. Karsh R, & Breitenbach FW (1983). Looking at looking: The amorphous fixation measure. In Groner R, Menz C, Fisher D, & Monty R (Eds.), Eye movements and psychological functions: International views (pp. 53–64). Hillsdale, NJ: Lawrence Erlbaum.
66. Kemper S, & McDowd J (2006). Eye movements of young and older adults while reading with distraction. Psychology and Aging, 21(1), 32–39. 10.1037/0882-7974.21.1.32
67. Kischka U, Kammer T, Maier S, Weisbrod M, Thimm M, & Spitzer M (1996). Dopaminergic modulation of semantic network activation. Neuropsychologia, 34(11), 1107–1113. 10.1016/0028-3932(96)00024-3
68. LC Technologies (2015). Eyegaze [Computer software]. Available from http://www.eyegaze.com/
69. Lebreton K, Baron J, & Eustache F (2001). Visual priming within and across symbolic format using a tachistoscopic picture identification task: A PET study. Journal of Cognitive Neuroscience, 13(5), 670–686. 10.1162/089892901750363226
70. Liu H, Bates E, Powell T, & Wulfeck B (1997). Single-word shadowing and the study of lexical access. Applied Psycholinguistics, 18(2), 157–180. 10.1017/S0142716400009954
71. Lively SE, Pisoni DB, & Goldinger SD (1994). Spoken word recognition: Research and theory. In Gernsbacher MA (Ed.), Handbook of psycholinguistics (pp. 265–301). San Diego, CA: Academic Press.
72. Lucas M (2000). Semantic priming without association: A meta-analytic review. Psychonomic Bulletin & Review, 7(4), 618–630. 10.3758/BF03212999
73. Mack JE, Ji W, & Thompson CK (2013). Effects of verb meaning on lexical integration in agrammatic aphasia: Evidence from eyetracking. Journal of Neurolinguistics, 26(6), 619–636. 10.1016/j.jneuroling.2013.04.002
74. Manor B, & Gordon E (2003). Defining the temporal threshold for ocular fixation in free-viewing visuocognitive tasks. Journal of Neuroscience Methods, 128(1–2), 85–93. 10.1016/S0165-0270(03)00151-1
75. McNamara TP (2005). Semantic priming: Perspectives from memory and word recognition. New York, NY: Psychology Press.
76. Mertus JA (2000). The Brown Lab Interactive Speech System [Computer software]. Providence, RI: Brown University.
77. Meyer AM, Mack JE, & Thompson CK (2012). Tracking passive sentence comprehension in agrammatic aphasia. Journal of Neurolinguistics, 25(1), 31–43.
78. Meyer DE (2014). Semantic priming well established. Science, 345(6196), 523. 10.1126/science.345.6196.523-b
79. Meyer DE, & Schvaneveldt RW (1971). Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology, 90(2), 227–234. 10.1037/h0031564
80. Meyer DE, Schvaneveldt RW, & Ruddy MG (1975). Loci of contextual effects on visual word recognition. In Rabbitt PMA & Dornic S (Eds.), Attention and performance V (pp. 99–118). New York, NY: Academic Press.
81. Milberg W, Blumstein SE, & Dworetzky B (1988). Phonological processing and lexical access in aphasia. Brain and Language, 34(2), 279–293. 10.1016/0093-934X(88)90139-3
82. Milberg W, Blumstein SE, Katz D, Gershberg F, & Brown T (1995). Semantic facilitation in aphasia: Effects of time and expectancy. Journal of Cognitive Neuroscience, 7(1), 33–50. 10.1162/jocn.1995.7.1.33
83. Mirman D, Yee E, Blumstein SE, & Magnuson JS (2011). Theories of spoken word recognition deficits in aphasia: Evidence from eyetracking and computational modeling. Brain and Language, 117(2), 53–68. 10.1016/j.bandl.2011.01.004
84. Murphy K, & Hunt H (2013). The time course of semantic and associative priming effects is different in an attentional blink task. Cognitive Processing, 14(3), 283–292. 10.1007/s10339-013-0560-6
85. Neely JH (1977). Semantic priming and retrieval from lexical memory: Roles of inhibitionless spreading activation and limited-capacity attention. Journal of Experimental Psychology: General, 106(3), 226–254. 10.1037/0096-3445.106.3.226
86. Neely JH (1991). Semantic priming effects in visual word recognition: A selective review of current findings and theories. In Besner D & Humphreys GW (Eds.), Basic processes in reading: Visual word recognition (pp. 265–335). Hillsdale, NJ: Erlbaum.
87. Odekar A, Hallowell B, Kruse H, Moates D, & Lee C (2009). Validity of eye-movement methods and indices for capturing semantic (associative) priming effects. Journal of Speech, Language, and Hearing Research, 52(1), 31–48. 10.1044/1092-4388(2008/07-0100)
88. Osterhout L, Kim A, & Kuperberg GR (2012). The neurobiology of sentence comprehension. In Spivey M, Joanisse M, & McRae K (Eds.), The Cambridge handbook of psycholinguistics (pp. 365–389). Cambridge, MA: Cambridge University Press.
89. Palermo DS, & Jenkins JJ (1964). Word association norms: Grade school through college. Minneapolis: University of Minnesota Press.
90. Pilotti M, Bergman ET, Gallo DA, Sommers M, & Roediger HL (2000). Direct comparison of auditory implicit memory tests. Psychonomic Bulletin & Review, 7(2), 347–353. 10.3758/BF03212992
91. Price CJ, & Humphreys GW (1989). The effects of surface detail on object categorization and naming. The Quarterly Journal of Experimental Psychology, 41(4), 797–827. 10.1080/14640748908402394
92. Ramat S, Leigh RJ, Zee DS, & Optican LM (2007). What clinical disorders tell us about the neural control of saccadic eye movements. Brain, 130(1), 10–35. 10.1093/brain/awl309
93. Rayner K (2009). Eye movements and attention in reading, scene perception, and visual search. Quarterly Journal of Experimental Psychology, 62(8), 1457–1506. 10.1080/17470210902816461
94. Rossell SL, Price CJ, & Nobre AC (2003). The anatomy and time course of semantic priming investigated by fMRI and ERPs. Neuropsychologia, 41(5), 550–564. 10.1016/S0028-3932(02)00181-1
95. Rossion B, & Pourtois G (2004). Revisiting Snodgrass and Vanderwart's object pictorial set: The role of surface detail in basic-level object recognition. Perception, 33(2), 217–236. 10.1068/p5117
96. Schad DJ, Risse S, Slattery T, & Rayner K (2014). Word frequency in fast priming: Evidence for immediate cognitive control of eye movements during reading. Visual Cognition, 22(3), 390–414. 10.1080/13506285.2014.892041
97. Schvaneveldt RW, Meyer DE, & Becker CA (1976). Lexical ambiguity, semantic context, and visual word recognition. Journal of Experimental Psychology: Human Perception and Performance, 2(2), 243–256.
98. Shapiro L, Swinney D, & Borsky S (1998). Online examination of language performance in normal and neurologically impaired adults. American Journal of Speech-Language Pathology, 7, 49–60. 10.1044/1058-0360.0701.49
99. Silkes JP, & Rogers MA (2012). Masked priming effects in aphasia: Evidence of altered automatic spreading activation. Journal of Speech, Language, and Hearing Research, 55(6), 1613–1625. 10.1044/1092-4388(2012/10-0260)
100. Siyambalapitiya S, Chenery HJ, & Copland DA (2013). Lexical-semantic representation in bilingual aphasia: Findings from semantic priming and cognate repetition priming. Aphasiology, 27(11), 1302–1321. 10.1080/02687038.2013.817521
101. Snodgrass JG, & Vanderwart M (1980). A standardized set of 260 pictures: Norms for name agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 174–215. 10.1037/0278-7393.6.2.174
102. Soni M, Lambon Ralph MA, & Woollams AM (2012). Repetition priming of picture naming in semantic aphasia: The impact of intervening items. Aphasiology, 26(1), 44–63. 10.1080/02687038.2011.602302
103. Stevenage SV, Hale S, Morgan Y, & Neil GJ (2014). Recognition by association: Within- and cross-modality associative priming with faces and voices. British Journal of Psychology, 105(1), 1–16. 10.1111/bjop.12011
104. Tabossi P (1996). Cross-modal semantic priming. Language and Cognitive Processes, 11(6), 569–576. 10.1080/016909696386953
105. Tanaka JW, & Presnell LM (1999). Color diagnosticity in object recognition. Perception & Psychophysics, 61(6), 1140–1153. 10.3758/bf03207619
106. Tanenhaus MK, Magnuson JS, Dahan D, & Chambers C (2000). Eye movements and lexical access in spoken-language comprehension: Evaluating a linking hypothesis between fixations and linguistic processing. Journal of Psycholinguistic Research, 29(6), 557–580. 10.1023/A:1026464108329
107. Therriault DJ, Yaxley RH, & Zwaan RA (2009). The role of color diagnosticity in object recognition and representation. Cognitive Processing, 10(4), 335–342. 10.1007/s10339-009-0260-4
108. Thiessen A, Beukelman D, Ullman C, & Longenecker M (2014). Measurement of the visual attention patterns of people with aphasia: A preliminary investigation of two types of human engagement in photographic images. Augmentative and Alternative Communication, 30(2), 120–129. 10.3109/07434618.2014.905798
109. van der Laan LN, Papies EK, Hooge ITC, & Smeets PAM (2017). Goal-directed visual attention drives health goal priming: An eye-tracking experiment. Health Psychology, 36(1), 82–90. 10.1037/hea0000410
110. Yap MJ, Hutchison KA, & Tan LC (2017). Individual differences in semantic priming performance: Insights from the Semantic Priming Project. In Jones MN (Ed.), Big data in cognitive science: From methods to insights (pp. 1–42). New York, NY: Psychology Press.
111. Yee E, Overton E, & Thompson-Schill S (2009). Looking for meaning: Eye movements are sensitive to overlapping semantic features, not association. Psychonomic Bulletin & Review, 16(5), 869–874. 10.3758/PBR.16.5.869
112. Yee E, & Sedivy JC (2006). Eye movements to pictures reveal transient semantic activation during spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(1), 1–14.
