The Journal of Deaf Studies and Deaf Education. 2015 Jul 3;20(4):310–330. doi: 10.1093/deafed/env025

Understanding Language, Hearing Status, and Visual-Spatial Skills

Marc Marschark 1,*, Linda J Spencer 2, Andreana Durkin 1, Georgianna Borgna 1, Carol Convertino 1, Elizabeth Machmer 1, William G Kronenberger 3, Alexandra Trani 4
PMCID: PMC4836709  PMID: 26141071

Abstract

It is frequently assumed that deaf individuals have superior visual-spatial abilities relative to hearing peers and thus, in educational settings, they are often considered visual learners. There is some empirical evidence to support the former assumption, although it is inconsistent, and apparently none to support the latter. Three experiments examined visual-spatial and related cognitive abilities among deaf individuals who varied in their preferred language modality and use of cochlear implants (CIs) and hearing individuals who varied in their sign language skills. Sign language and spoken language assessments accompanied tasks involving visual-spatial processing, working memory, nonverbal logical reasoning, and executive function. Results were consistent with other recent studies indicating no generalized visual-spatial advantage for deaf individuals and suggested that their performance in that domain may be linked to the strength of their preferred language skills regardless of modality. Hearing individuals performed more strongly than deaf individuals on several visual-spatial and self-reported executive functioning measures, regardless of sign language skills or use of CIs. Findings are inconsistent with assumptions that deaf individuals are visual learners or are superior to hearing individuals across a broad range of visual-spatial tasks. Further, performance of deaf and hearing individuals on the same visual-spatial tasks was associated with differing cognitive abilities, suggesting that different cognitive processes may be involved in visual-spatial processing in these groups.


Among teachers of deaf learners, researchers involved in deaf education, sign language linguistics or socio-cultural Deaf studies, and deaf individuals themselves, it is not uncommon to encounter references to deaf people as having superior visual-spatial skills or being visual learners (e.g., Dowaliby & Lang, 1999; Hauser, Lukomski, & Hillman, 2008; Marschark & Hauser, 2012). Such suggestions can be taken (and intended) more or less literally. Certainly, any attenuation of the auditory sense will result in individuals becoming relatively more dependent on vision than on audition. The extent to which that relative dependence endows deaf people with better visual-spatial skills than hearing individuals or somehow results in their being visual learners rather than verbal learners is another matter, one that has significant practical as well as theoretical implications.

On the practical side, Marschark and Knoors (2012) and Knoors and Marschark (2014) argued that subtle and not-so-subtle cognitive, metacognitive, and knowledge differences between deaf and hearing learners are such that they may require somewhat different instructional methods and materials in order to benefit optimally in formal and informal educational settings. This is quite different from suggesting that deaf students are visual learners or that teachers of the deaf need “instruction on how to moderate a classroom of visual learners” (Hauser et al., 2008, p. 299; see also Marschark & Hauser, 2012, p. 82). Marschark, Morrison, Lukomski, Borgna, and Convertino (2013; see also López-Crespo, Daza, & Méndez-López, 2012) argued that such descriptions notwithstanding, there is no evidence to indicate that deaf students are any more likely to be visual learners than verbal learners or more likely to be visual learners than are hearing students. Deaf learners may be more dependent on vision than hearing peers, but the vast majority of children and youth referred to as deaf are not profoundly deaf; they have some amount of residual hearing (Gallaudet Research Institute, 2011), which may be augmented by hearing aids, and/or they have access to sound through cochlear implants (CIs). To date, it does not appear that there have been investigations of the extent to which such individuals balance the use of visual and auditory input in real-world situations, even if the enhancement to (auditory) speech perception gained through speechreading is a frequent part of speech and hearing assessments.

More complex (and potentially sensitive) is the extent to which deaf individuals who rely primarily on sign language or spoken language utilize input in the other modality. Blom and Marschark (2015), for example, found that simultaneous communication (speech and sign together) can lead to better comprehension than spoken language alone by deaf individuals using CIs, at least when the material is more difficult or complex. Other studies have demonstrated that, in the hands of a skilled user, simultaneous communication can be effective in the classroom for deaf learners deemed to rely primarily on sign language (Cokely, 1990; Convertino, Marschark, Sapere, Sarchet, & Zupan, 2009; Newell, 1978). How the two modes of input are balanced in everyday incidental communication and how that balance is affected by an individual’s fluencies in their signed and spoken languages remains to be determined.

This issue goes beyond communication to cognitive abilities at large, because there is likely to be an interaction between an individual’s signed and spoken language fluencies and the extent to which they can utilize auditory information. In the simplest terms, the human auditory system deals with sequential information better than the visual system, for example with greater temporal resolution (Krumbholz, Patterson, Nobbe, & Fastl, 2003), and the visual system deals with spatial information better than the auditory system, for example with greater spatial acuity (Bruce, Green, & Georgeson, 1996). Correspondingly, hearing individuals would be expected to outperform deaf individuals on tasks that depend on sequential and/or temporal processing and deaf individuals generally are expected to outperform hearing individuals on tasks that depend on visual-spatial processing. In the broadest terms, those expectations are often confirmed (Marschark & Knoors, 2012, but see below). The situation is seen to be more complex, however, when one considers that, as noted earlier, people who are referred to as being deaf frequently have some auditory ability. Further, as will be discussed later, although visual-spatial ability frequently is considered monolithically, visual and spatial abilities are demonstrably separable. It may well be that deaf individuals who utilize signed or spoken language to a greater or lesser extent gain additional benefit to their spatial and sequential abilities, respectively. Understanding the interplay among these factors and how that interplay can be affected by variability within each of the factors would require studies in which individuals vary, at minimum, in their hearing thresholds, their sign language fluencies, and their spoken language fluencies.

The above issues can be considered in the context of education as well as cognitive psychology, although investigations in carefully controlled laboratories and in classrooms may not be fully comparable. In any case, it is important to note that the use of sign language rather than spoken language by deaf students should not be equated with their being visual learners. Learning via sign language is a verbal-linguistic skill, as is reading, even if it depends on vision rather than audition. Marschark, Machmer, and Convertino (in press) thus suggested that the appropriateness of assuming that deaf signers are visual learners or teaching them as though they are (Hauser et al., 2008; Marschark & Hauser, 2012) remains to be demonstrated.

On the theoretical side, also, referring to someone as a visual or verbal learner is not as simple or straightforward as the frequent generalizations might suggest. Among other things, visual learning and verbal learning are not ends of a continuum but represent aspects of an individual’s thinking or learning style that are not mutually exclusive (Paivio & Harshman, 1983). Underlying the construct of learning styles is the assumption that teaching methods and materials will be most effective when they match the learning strategies of the student. Learning styles are multidimensional, however, and describing a particular student or group of students in terms of a single dimension is of questionable educational utility. Individuals’ learning styles typically are identified through the administration of standardized assessments, obtaining information about their mental habits, or determining how they deal with the presentation of information in different modalities. For example, individuals may prefer to acquire new content or skills through language (text or through the air) or diagrams or pictures (static or animated). The assumption that “visualizers” learn better with visual methods of instruction and “verbalizers” learn better with verbal methods is referred to as the attribute-treatment interaction (ATI) (Mayer & Massa, 2003; Sternberg & Zhang, 2001). Studies by Massa and Mayer (2006), Litzinger, Lee, Wise, and Felder (2007), and others, however, have indicated that demonstrations of ATIs are rare (for a review, see Pashler, McDaniel, Rohrer, & Bjork, 2008).

Beyond learning styles, it is also important to avoid broad generalizations like those of Hauser et al. (2008, p. 291) and Marschark and Hauser (2012, p. 68) that by virtue of either auditory deprivation or the use of a (visual-spatial) signed language, deaf individuals generally have better visual-spatial skills than hearing individuals. As noted earlier, the majority of individuals commonly referred to as “deaf” have some amount of residual hearing, and students who are considered hard of hearing (i.e., with mild to moderate hearing losses) outnumber those considered deaf (i.e., with severe to profound hearing losses) by at least 2 to 1 (e.g., Shaver, Marschark, Newman, & Marder, 2014).

A variety of studies, indeed, has provided support for deaf individuals’ having some advantages in the visual domain (e.g., Bettger, Emmorey, McCullough, & Bellugi, 1997; Hall & Bavelier, 2010; Hauser, Cohen, Dye, & Bavelier, 2007; Proksch & Bavelier, 2002; Rettenbach, Diller, & Sireteanu, 1999). To avoid confounds, however, most of those studies have involved profoundly deaf individuals who came from deaf families, are native users of sign language, and usually attended schools for the deaf. Bavelier, Dye, and Hauser (2006) concluded that visual-spatial advantages even among those individuals (approximately 5% of the deaf population; Mitchell & Karchmer, 2004) are not particularly generalized but are most evident in tasks that place high demands on spatial attention, including those that require sensitivity to events in the visual periphery (but see Chen, Zhang, & Zhou, 2006; Dye, Green, & Bavelier, 2009). Bavelier et al. (2006) attributed areas of “deficient visual cognition” among deaf individuals to “the complex etiology of deafness” (p. 512).

The literature with regard to the effects on visual-spatial cognition of auditory deprivation and the use of signed versus spoken languages is quite large, complex, and at times equivocal (for reviews, see, e.g., Emmorey, 2002; Hall & Bavelier, 2010; Marschark et al., in press; Mayberry, 2002). In large part, inconsistency in empirical findings with regard to deaf individuals’ visual-spatial abilities and functioning is a result of the considerable heterogeneity of the deaf population, not only due to the complex etiology of hearing loss, but also large individual differences resulting from diverse developmental histories, language abilities, and educational experiences. Studies that have involved deaf and hearing native users of sign language raised in deaf families have provided important theoretical insights, but they are less informative with regard to the approximately 95% of the deaf population that comes from more diverse backgrounds. Given our own interests and involvement with deaf children and young adults, the latter majority of deaf individuals is the population we focus on here. In particular, despite frequent claims about deaf students being visual learners and the belief among teachers that greater use of sign language and more visual materials will remedy deaf students’ chronic underachievement, there is little or no adequate research to support this assumption or to guide educational or other interventions. In fact, only a handful of studies have explored possible links between deaf individuals’ visual-spatial abilities and academic functioning, and those apparently only with regard to short-term mathematics performance and its foundations.

Zarfaty, Nunes, and Bryant (2004) found that deaf preschoolers were able to remember and reproduce spatial arrays better than hearing peers. Pagliaro (2015, p. 183) suggested that such findings raise the possibility that for deaf children, “geometry concepts and skills are developed sooner and/or more quickly than those of other areas, perhaps influenced by their visual access to information.” Blatto-Vallee, Kelly, Gaustad, Porter, and Fonzi (2007) examined visual-spatial abilities and mathematical problem solving among deaf and hearing students from Grade 7 through university. They found that deaf students at all ages were less likely than hearing peers to utilize the kinds of schematic, visual-spatial representations that support mathematics problem solving. Instead, they appeared to rely primarily on pictorial representations that included visual aspects of the problems but not relations important to problem solution. Blatto-Vallee et al. (2007) also evaluated students’ visual-spatial abilities through the Primary Mental Abilities Spatial Relations Test (Optometric Extension Program, 1995), in which participants were presented with drawings of incomplete squares and had to choose from five alternatives the missing part that would complete each, and the Revised Minnesota Paper Form Board Test (MPFB; Likert & Quasha, 1994), in which they had to choose one of five figures that would be created by the combination of several parts. Performance on the visual-spatial tasks and use of schematic representations were associated with greater mathematics performance for both deaf and hearing students. However, hearing students at all grade levels scored higher than the deaf students on the visual-spatial tasks (see Cockcroft & Dhana-Dullabh, 2013, for similar results with younger children).

Marschark et al. (2013) also examined relations between visual-spatial processing and mathematics performance among deaf learners. Their primary interest involved deaf students who used sign language as their primary mode of communication, the subgroup most often referred to as visual learners. They administered a brief mathematics test (word problems with diagrams drawn from the American College Test) and seven visual-spatial tasks. Five of the latter were drawn from the Woodcock-Johnson III Tests of Cognitive Abilities (WJ-III; Woodcock, McGrew, & Mather, 2001): Spatial Relations, Picture Recognition, Visual Matching, Decision Speed, and Pair Cancellation. The other two were an Embedded Figures (figure-ground) task and the Corsi Blocks, a visual-spatial working memory task. Stepwise multiple regression analyses indicated that when other scores were controlled, only performance on the Embedded Figures test predicted mathematics performance for the hearing students. For the deaf students, when visual-spatial scores and several aspects of (self-reported) expressive and receptive sign language abilities were controlled, only scores on the Spatial Relations test predicted mathematics performance. Consistent with the Blatto-Vallee et al. (2007) study, hearing students scored as well or better than the deaf students across all of the visual-spatial tasks. There was no difference in performance on any of the tasks between deaf students who learned to sign early (prior to age 2½) and those who learned to sign later. The investigators concluded that their results offered little support for the assumption that deaf students are visual learners and indicated that, at some level, commonly administered visual-spatial tasks tap somewhat different cognitive abilities in deaf and hearing individuals.

The purpose of the present study was to further examine the visual-spatial abilities of deaf learners and, in particular, to obtain a better understanding of relations among language skills and visual-spatial abilities. 1 In order to examine separate effects of hearing status and sign language ability, the study included two groups of deaf learners, one of which was comprised of CI users, and two groups of hearing students, one of which was comprised of sign language interpreting students.

Experiment 1

On the basis of the Blatto-Vallee et al. (2007) and Marschark et al. (2013) studies and previous literature (see Marschark et al., in press; Mayberry, 2002, for reviews), visual-spatial tasks for use in this experiment were selected so as to tap somewhat different aspects of nonverbal cognitive abilities. Three were chosen from those used by Marschark et al. (2013): Spatial Relations, Pair Cancellation, and Embedded Figures. Spatial relations (or perceptual-organizational) tasks entail perceptual and cognitive abilities involved in visualizing, orienting, and manipulating mental images of geometric or real-world figures; some spatial relations tasks also involve the analysis and synthesis of part-to-whole relationships in complex visual designs. Such tasks frequently are used in order to test the nonverbal, visuospatial component of intelligence without the confounding influences of fund of knowledge or language ability. Emmorey, Kosslyn, and Bellugi (1993) found that compared to nonsigning hearing individuals, both deaf and hearing native signers were faster in generating complex mental images and demonstrated faster response times in a mental rotation task. Talbot and Haude (1993) showed that mental rotation performance was influenced by sign language ability but not age of acquisition.

Van Dijk, Kappers, and Postma (2013a, 2013b) examined the effects of hearing status and sign language ability on spatial relations abilities using haptic rather than visual tasks. Both studies involved signing deaf individuals, hearing sign language interpreters (including native signers), and nonsigning hearing individuals. Van Dijk et al. (2013a) used a haptic parallel setting task in which blindfolded participants put one hand on a horizontal stylus placed between 0° and 150° from the left-right axis of the table and used the other hand to rotate a second stylus so as to be parallel to the first. Deaf individuals were significantly more accurate in the task than hearing individuals, both signers and nonsigners. Van Dijk et al. (2013b) used a tactual performance task in which the same participants, while blindfolded, were asked to fit 10 geometric shapes into a board containing 10 corresponding cutouts. In that task, deaf and hearing signers outperformed the hearing nonsigners. At face value, the most obvious difference between these two tasks is that the first is almost exclusively a spatial task whereas the second has a large visual component, an issue to be addressed later (see Della Sala, Gray, Baddeley, Allamano, & Wilson, 1999; López-Crespo et al., 2012).

Picture cancellation tasks require rapid scanning of large stimulus arrays in order to identify (and mark or cancel) only items that meet certain criteria, such as finding all instances of an object or class of objects within an array. These tasks, which are dependent on perceptual-organization, visual attention, controlled fluency-speed of cognitive processing, and focused-sustained mental efficiency, have been used to examine executive functioning (EF) and visual fluency, without language confounds. Prior research has demonstrated strong relations between picture cancellation tasks and measures of mental efficiency, controlled attention, and other components of EF (Roid & Miller, 1997; Wechsler, 2014; Woodcock et al., 2001). Performance on picture cancellation tasks is more dependent on visuospatial attention and visual processing than access to the meaning or label of an image; in fact, extensive verbal mediation of the visual image can slow the speed of processing during such tasks. Picture cancellation tasks therefore usually are described as tapping speed of visual processing or controlled visual or perceptual fluency.

Embedded figures tests are frequently used as measures of perceptual field dependence or independence, that is, the extent to which individuals are able to ignore background perceptual information. They also are taken as indicative of an associated cognitive/learning style. The assumption that deaf individuals are visually oriented makes the task potentially interesting at both levels. Generally, however, evaluations of field dependence/independence in studies involving deaf individuals have been inconclusive. Gibson (1985) found no relation of the variable to hearing thresholds, while Parasnis and Long (1979) found greater hearing thresholds to be a significant predictor of field dependence, but only among males. They also reported that deaf college students did not differ from hearing norms on the Spatial Relations subtest of the Differential Aptitude Tests, whereas Blatto-Vallee et al. (2007) and Marschark et al. (2013) found significantly better performance on spatial relations tasks by hearing than deaf individuals from middle school through college age.

This and the following experiments investigated visual-spatial performance and language abilities, both spoken and signed, among deaf individuals with and without CIs, a distinction intended to be consistent with the existing literature. It is not assumed that the former exclusively use spoken language and the latter exclusively use sign language. At least in a college-age population, many individuals use both forms of communication at one time or another (sometimes together). In the sample of first-year college students described below, for example, only 12 of the 51 (23%) CI users indicated that they did not know any sign language, while 10 of 55 (18%) students without CIs indicated that they did not know any sign language. Similarly, deaf students who use CIs do not necessarily use them 100% of the time or depend on them entirely even when they are using them. The current research program has indicated that the use (and utility) of CIs and the use (and fluency) of sign language or spoken language among deaf students is far more variable than is generally acknowledged in the literature. Given the present focus on evaluating the assumption that deaf individuals generally possess superior visual-spatial abilities or are more likely to be visual learners relative to hearing individuals (e.g., Hauser et al., 2008; Marschark & Hauser, 2012; cf. Marschark et al., 2013), the issue of spoken versus sign language use is considered statistically below and in subsequent experiments.

Method

Participants

The participants were all first-year university students paid for their participation. A brief recruiting questionnaire distributed during autumn registration included questions about hearing status, including CI use and age of implantation, and language abilities, including sign language skill and age of acquisition. For the purpose of examining the effects of both hearing status and sign language skill, four groups of students initially were recruited. Included among the 175 participants were 106 students receiving university services (e.g., audiological, interpreting, tutoring) because of hearing loss (hereafter, deaf), 51 of whom were current CI users. Of the 55 deaf students who did not use CIs, 33 used hearing aids. The 69 hearing participants included 14 who were sign language interpreting students. 2 The participants ranged in age from 17.7 years to 35.9 years with a mean of 19.1 years (SD = 1.74; see Table 1). The only significant age differences among the groups resulted from the hearing students’ being almost a year younger than the deaf students who did not use CIs and 2 years younger than the interpreting students.

Table 1.

Number of participants, administered tasks, and task means and SDs for age and measures of sign language, spoken language, and visual-spatial abilities in Experiment 1

| Measure | Deaf with CI | Deaf without CI | Hearing | Interpreting students |
| --- | --- | --- | --- | --- |
| Age (years) | 19.28 (1.28), n = 51 | 19.32 (1.24), n = 55 | 18.45 (0.35), n = 54 | 20.55 (4.78), n = 14 |
| Language measures | | | | |
| Age of sign acquisition (years) | 6.66 (6.38), n = 39 | 2.91 (3.42), n = 45 | — | — |
| Expressive sign language–SLPI (0–5) | 1.98 (1.49), n = 50 | 2.90 (1.87), n = 53 | 0.17 (0.57), n = 55 | 1.93 (0.76), n = 14 |
| Receptive sign language (% + %) | 0.88 (0.26), n = 42 | 0.95 (0.22), n = 47 | 0.80 (0.27), n = 18 | 1.16 (0.19), n = 14 |
| Speech production–phonemes (%) | 86.04 (14.74), n = 47 | 86.97 (17.92), n = 33 | — | — |
| Speech recognition–audiovisual (AV%) | 73.95 (30.96), n = 50 | 49.37 (39.60), n = 54 | 84.74 (11.97), n = 52 | 88.66 (7.56), n = 14 |
| Speech recognition–audio only (A%) | 56.49 (36.73), n = 50 | 32.47 (37.68), n = 54 | 66.03 (15.95), n = 52 | 69.60 (11.41), n = 14 |
| Audiovisual enhancement (AV% − A%) | 17.46 (15.61), n = 50 | 16.90 (18.30), n = 54 | 18.71 (12.62), n = 52 | 19.06 (10.45), n = 14 |
| Speech-to-noise ratio | — | — | 3.56 (1.51), n = 52 | 3.93 (1.14), n = 14 |
| Visual-spatial measures | | | | |
| Spatial relations (%) | 89.25 (8.06), n = 51 | 90.06 (6.51), n = 55 | 95.22 (4.00), n = 55 | 95.50 (2.25), n = 14 |
| Embedded figures (%) | 37.83 (13.18), n = 51 | 37.83 (13.18), n = 55 | 46.79 (10.93), n = 55 | 50.92 (12.96), n = 14 |
| Pair cancellation (%) | 91.02 (9.52), n = 51 | 92.72 (9.46), n = 55 | 95.94 (5.06), n = 55 | 95.76 (5.00), n = 14 |

Note. CI = cochlear implant; SLPI = Sign Language Proficiency Interview.

The CI users reported receiving their (first) implants between 1.4 and 20.0 years of age with a mean of 6.4 years (SD = 4.8); 17 reported receiving a second CI between 7.0 and 18.0 years of age with a mean of 14.5 years (SD = 2.8). The CI users’ aided, four-frequency pure tone average (PTA) hearing thresholds in the better ear ranged from 15 to 45 dB with a mean of 28.22 dB (SD = 7.59). The aided PTAs of the deaf students who did not use CIs ranged from 25 to 69 dB with a mean of 46.17 dB (SD = 10.92). The better ear unaided four-frequency PTAs for the CI users ranged from 79 to 125 dB with a mean of 111.08 dB (SD = 11.92), those of the deaf hearing aid users ranged from 25 to 106 dB with a mean of 77.62 dB (SD = 18.02), those of deaf students who used no amplification ranged from 7.5 to 123.75 dB with a mean of 95.06 dB (SD = 29.27), and those of the hearing students ranged from 0 to 16 dB with a mean of 4.66 dB (SD = 3.31). Because deaf participants used their devices during testing, only their aided PTAs are considered below.
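For readers unfamiliar with pure tone averages, the sketch below illustrates how a four-frequency PTA of the kind reported above is conventionally computed. The specific frequencies used (500, 1000, 2000, and 4000 Hz) are a common audiological convention and an assumption here; the article does not list them.

```python
# Hypothetical sketch: computing a four-frequency pure tone average (PTA).
# The frequencies below are an assumed standard convention, not stated in the text.

def four_frequency_pta(thresholds_db: dict[int, float]) -> float:
    """Average hearing thresholds (dB HL) over four standard frequencies."""
    frequencies = [500, 1000, 2000, 4000]  # Hz (assumed convention)
    return sum(thresholds_db[f] for f in frequencies) / len(frequencies)

# Example: an aided audiogram for one ear (placeholder values).
aided = {250: 30, 500: 25, 1000: 25, 2000: 30, 4000: 35, 6000: 40}
print(four_frequency_pta(aided))  # 28.75 dB HL
```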

Procedure

Sign language assessment

Expressive sign language skill. The Sign Language Proficiency Interview (SLPI) is a tool widely used in the United States for evaluating sign language skills. It consists of a one-to-one signed conversation between an interviewer and interviewee (https://www.rit.edu/ntid/slpi/). The three sign language interpreter-researchers involved in this study underwent formal SLPI training explicitly for the purpose of this project. During recruitment, all participants rated their sign language skills on a 6-point Likert scale from 0 to 5. Although Level 2 appears to be the lowest SLPI level at which one might be considered to know sign language (i.e., being able to “discuss basic social and school topics and respond usually with 1–3 sentences,” as opposed to knowing only some signs; see Appendix), participants who rated themselves 1 or higher were administered an SLPI. Those who rated themselves 0 were assigned an expressive sign language score of 0.

Administration of the SLPI involved each participant engaging in a 20-min one-on-one interview with the same certified interpreter. The interviews were recorded, with the students’ permission, using an HD camera. The interviewer asked a series of questions about each student’s family, schooling, and extracurricular activities in order to have a sample reflecting expressive and receptive sign language abilities, both form and function. The recordings subsequently were rated by the interviewer and the other two interpreter-researchers who had SLPI training. Ratings followed the standard 11-point SLPI rating scale, from “No Functional Skills” to “Superior Plus,” comprising six levels (0–5) with “plus” sublevels for Levels 1 through 5 (1.5, 2.5, …, 5.5). Viewing the recorded interviews for approximately 1 hr each, the raters independently evaluated students’ vocabulary knowledge, sign production, fluency, American Sign Language (ASL) grammatical features, and comprehension as well as documenting any errors. Each rater assigned an independent rating, after which the ratings were discussed. If initial ratings were not in agreement, the raters discussed their perspectives on the language sample and watched the interview again until they reached agreement, and an overall score was given to each student. Following the rating of all interviews, the raters re-reviewed scores, group by group, to assure consistency at each level.

Receptive sign language skill. Sign language reception was assessed by having all students who qualified for an SLPI watch a 3-min (3:15) presentation in ASL. The presentation consisted of a Grade 5 level narrative passage about Margaret Mead drawn from the Qualitative Reading Inventory—3 (QRI; Leslie & Caldwell, 2001). Immediately after the presentation, participants were asked to retell the story in as much detail as possible. When they were finished, they were given a multiple-choice test on the content. Because Marschark et al. (2009) found that deaf and hearing college students’ retellings of passages in sign language or spoken language, respectively, did not differ from their retellings in writing, written retelling was used in this study in order to simplify scoring. Retelling was scored according to the QRI instructions, assigning 1 point each for reproduction of 4 background/setting idea units, 7 goal idea units, 32 event idea units, and 3 resolution idea units, ignoring errors of spelling and grammar. The three interpreter-researchers scored each retelling together; any disagreements remaining after discussion were resolved by accepting the majority decision. The multiple-choice test consisted of 17 questions, each with 4 alternative responses, covering the same range of information as the 8 open-ended questions suggested by the QRI. Both tests yielded proportional scores that were added together to provide a composite passage comprehension score.
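Because the composite sums two proportions, scores can exceed 1.0 (hence the “Receptive sign language (% + %)” values above 1.0 in Table 1). The sketch below is illustrative only, not the authors’ scoring code; it simply applies the arithmetic described above.

```python
# Illustrative sketch of the composite passage comprehension score:
# the retelling proportion plus the multiple-choice proportion (max 2.0).

IDEA_UNITS = {"background": 4, "goal": 7, "event": 32, "resolution": 3}
TOTAL_UNITS = sum(IDEA_UNITS.values())  # 46 scorable idea units
MC_ITEMS = 17                           # multiple-choice questions

def composite_comprehension(units_reproduced: int, mc_correct: int) -> float:
    """Sum of the retelling proportion and multiple-choice proportion."""
    return units_reproduced / TOTAL_UNITS + mc_correct / MC_ITEMS

# Example with placeholder scores: 23 idea units and 14 correct items.
print(round(composite_comprehension(23, 14), 2))  # 0.50 + 0.82 -> 1.32
```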

Speech and hearing assessment

Assessments of speech and hearing were performed in a double-walled sound-treated booth using a GSI 61 audiometer, GSI 1761-9635 speakers, and TDH-50P supra-aural headphones. Equipment was calibrated in compliance with American National Standards Institute (ANSI) S3.6 Specification for Audiometers. Testing required a 1-hr session and was conducted by a licensed audiologist proficient in ASL. Participants who used hearing aids, CIs, or both were tested with their devices. Hearing participants received only a partial battery.

Hearing. Unaided pure-tone air-conduction thresholds were determined for all participants using headphones at octaves from 250 to 4000 Hz and 6000 Hz. Deaf participants who used hearing aids or CIs also completed aided, warble-tone threshold testing in the soundfield for the same frequencies. Only the aided hearing thresholds for deaf students are considered here.

Speech production. Speech production accuracy was assessed using the McGarr sentences to elicit speech samples from the deaf participants (McGarr, 1981, 1983). Deaf participants who expressed discomfort in using their voices were able to opt out of this assessment; speech production data were obtained for 80 of the 106 deaf participants. Test material consisted of 36 sentences including 12 each of 3, 5, and 7 syllables. Participants viewed the sentences on a monitor and read them aloud while positioned approximately 12 inches from a condenser microphone (Audio-Technica AT897). Input was recorded via PC in waveform audio file format. Two independent pairs of speech and hearing clinicians who were skilled in phonetics then transcribed the students’ speech samples using broad phonemic transcription. Correlation between the two pairs’ transcriptions was 0.83 using a Lambda analysis (Hays, 1973). The measure reported here is the proportion of phonemes correctly produced.

Speech recognition. Speech perception (i.e., recognition) was assessed via the open-set Iowa Sentence Test (Tyler, Preece, & Tye-Murray, 1986), using the Tye-Murray, Sommers, and Spehar (2007) adaptation. Stimuli consisted of 100 sentences, spoken by 10 female and 10 male adults, with vocabulary that would be familiar to children with hearing loss. Five lists of 20 sentences each were randomized across groups and conditions. Each sentence in a list was spoken by a different person and each list had a similar number of words. The test was administered in auditory, audiovisual, and visual conditions (i.e., with the speaker visible in the last two conditions); only the audiovisual and auditory conditions will be considered here. In addition to scores in those two conditions, the difference between them provided a measure of visual enhancement resulting from multimodal integration during speech recognition (e.g., Bergeson, Pisoni, & Davis, 2005; Kirk et al., 2012). Sentences were scored by the number of words in each sentence that were repeated correctly. Tests were performed in quiet for deaf participants and with 20-talker babble as background noise for hearing participants in all conditions. Babble level was individually set for each hearing participant to approximate 50% correct performance in the auditory condition, thus avoiding ceiling-level performance and allowing for visual enhancement (see Sommers, Tye-Murray, & Spehar, 2005). Hearing students’ speech-to-noise ratios (SNRs) therefore were included in subsequent analyses as a measure of hearing in noise (lower SNRs indicate better hearing in noise). Their raw audiovisual and auditory-only speech recognition scores were included for analysis with caution, because although the background noise was manipulated to elicit similar performance in the auditory condition, actual performance varied considerably.

The task was administered in the free field with the participant seated facing the loudspeaker. Stimuli were calibrated before each session using a calibration tone. Test stimuli were presented via PC (Dell Latitude E6430); babble was presented via CD (Auditec, Inc.), and both were routed through the audiometer to the soundfield. Sentences were presented at a constant 60 dB SPL for auditory and audiovisual conditions. Participants viewed visual stimuli consisting of the head and neck of the test talker on a 19-inch LCD monitor (HP L1945w) approximately 32 inches from their eyes. Participants repeated stimuli in their preferred modality, either speaking, signing, or writing. If a participant completely missed the first 10 items on any test list, testing was halted to reduce frustration, and a score of 0 was recorded.
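The scoring arithmetic for the speech recognition measures can be summarized in a short sketch: word-level percent correct in each condition and the visual enhancement difference score (AV% − A%). Function and variable names below are illustrative, not drawn from the authors’ materials.

```python
# Minimal sketch of the visual-enhancement measure described above.

def percent_words_correct(words_correct: int, words_total: int) -> float:
    """Word-level recognition score for one condition, as a percentage."""
    return 100.0 * words_correct / words_total

def audiovisual_enhancement(av_percent: float, a_percent: float) -> float:
    """Visual enhancement from multimodal integration: AV% - A%."""
    return av_percent - a_percent

av = percent_words_correct(83, 100)    # audiovisual condition (placeholder)
a = percent_words_correct(66, 100)     # auditory-only condition (placeholder)
print(audiovisual_enhancement(av, a))  # 17.0 percentage points
```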

Visual-spatial processing measures

As noted earlier, there were three visual-spatial tasks selected so as to obtain measures of different aspects of visual-spatial processing. The Spatial Relations task, drawn from the WJ-III, requires individuals to identify the two or three shapes (out of six) that can be combined to form a complex target shape. As a test of visual-spatial processing, the task requires visual feature detection, manipulation of mental images, visual-spatial matching, and visual-spatial construction skills. “Visual-spatial thinking” here refers to “the ability to perceive, analyze, synthesize, and think with visual patterns, including the ability to store and recall visual presentations” (Mather & Woodcock, 2001, p. 19).

The Pair Cancellation task, also drawn from the WJ-III, is a timed test that requires individuals to identify instances of a target pair of pictures (a ball followed by a dog) from a page containing hundreds of pictures of a ball, a cup, and a dog. The task taps EF (interference control), attention/concentration (sustained attention), visual fluency-speed, and the ability “to stay on task in a vigilant manner” in the visual-spatial domain (Mather & Woodcock, 2001, p. 16), another aspect of EF.

The Embedded Figures task required identification of objects hidden within a visually noisy background, that is, separating figure from ground. As visual-spatial measures of cognitive style, such tasks involve analytical problem solving, central coherence, and field dependence/independence (e.g., Hauptman & Eliot, 1986). Our task involved two embedded figures drawn from Highlights for Children, one containing 18 hidden figures and one containing 16 hidden figures. Pretesting in an earlier study indicated that imposing a time limit on the task made it sufficiently difficult for deaf and hearing college students, avoiding floor and ceiling effects.

The three visual-spatial tasks were administered to participants by one of two interpreter-researchers trained on the WJ-III tasks by a PhD-level psychologist who uses the battery regularly. The test instructions were communicated via spoken language, sign language, or both according to student hearing status and preference. In order to facilitate group testing for the large number of participants, the Embedded Figures and Pair Cancellation tasks were time-limited to 90 s and 3 min, respectively, rather than administered individually and timed. Scores were the percentages of correct responses. The Spatial Relations task was not time-limited, but for comparison purposes, it too was scored as the percentage of correct responses.

Total test time approached 3 hr, scheduled in three separate 1-hr sessions. Of the 175 total participants, 170 completed all three sessions. Discrepancies in participant numbers and degrees of freedom in later analyses reflect missing data (including unanswered questions), which were not interpolated.

Results and Discussion

Means (and SDs) for all measures are provided in Table 1. Unless otherwise indicated, results described in the present experiments were significant at the .05 level or beyond.

Several preliminary analyses were conducted prior to examining effects of hearing status, sign language skill, and visual-spatial abilities. First, within the sample of 55 deaf participants who did not use CIs, t-tests indicated no significant differences between the 33 who reported using hearing aids and the 22 who reported not using them on any of the visual-spatial tasks, all ts(53) ≤ 1.0, and they were considered as a single group for the purposes of further analyses. Second, of the deaf participants, 24 indicated that they were native signers. As a group, the native signers demonstrated better expressive sign language skills (SLPI) than later learners of sign language, t(79) = 4.32, p < .01, although their advantage in receptive (passage comprehension) skills did not reach significance, t(80) = 1.31, p = .19, likely due to a ceiling effect. Nevertheless, analyses indicated no advantages for the native signers on any of the three visual-spatial tasks, −1.10 < t(82) < 0.50, and they are not considered separately in this experiment. Third, the visual-spatial scores of 12 CI users who had received an implant early (for their cohort), prior to age 3, were compared to those of the 39 who had received them later. The early implantees scored significantly higher on the Spatial Relations task, t(49) = 4.32; later implantees scored slightly (not significantly) higher on the Embedded Figures and Pair Cancellation tasks. Similar results were obtained when the analyses compared the 25 participants who received CIs prior to age 5 and the 26 who received them later.

Visual-spatial performance and hearing status. Because the three visual-spatial tasks were selected so as to tap different aspects of cognitive functioning, they were analyzed separately using one-way analyses of variance (ANOVAs) in which group (deaf participants with CIs, deaf participants without CIs, hearing sign language interpreting students, other hearing participants) was a between-subjects factor. Analyses of scores on all three of the tasks yielded significant main effects of group: Spatial Relations, F(3,171) = 11.75, mean squared error (MSE) = 38.00; Embedded Figures, F(3,171) = 9.49, MSE = 159.99; Pair Cancellation, F(3,171) = 3.84, MSE = 64.76. Bonferroni-corrected comparisons indicated that on both the Spatial Relations and Embedded Figures tasks, neither the two groups of deaf participants nor the two groups of hearing participants differed significantly from each other, but as can be seen in Table 1, the performance of both groups of hearing participants was significantly better than the performance of both groups of deaf participants. The same pattern can be seen for the Pair Cancellation task, although the only significant paired comparison was between the hearing (noninterpreter) participants and the deaf participants with CIs. Overall, however, the Pair Cancellation scores of the hearing participants were significantly higher than deaf participants’ scores, t(173) = 3.22.

The above ANOVAs were repeated including only those deaf participants with CIs (38) and without CIs (46) who assigned themselves a self-rated SLPI score of 2 or higher (i.e., thought they knew sign language, as opposed to some signs). Analyses yielded significant main effects of group for Spatial Relations, F(3,148) = 14.21, MSE = 36.38; Embedded Figures, F(3,148) = 12.22, MSE = 139.46; and Pair Cancellation, F(3,148) = 3.26, MSE = 64.66. Bonferroni-corrected comparisons again indicated that the performance of both groups of hearing participants was significantly better than the performance of both groups of deaf participants.
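As an illustration of this style of analysis (a minimal sketch, not the authors’ code), the block below runs a one-way ANOVA over four groups followed by Bonferroni-corrected pairwise comparisons, using SciPy with placeholder data in place of the study’s raw scores.

```python
# Sketch: one-way ANOVA with Bonferroni-corrected pairwise comparisons.
from itertools import combinations
from scipy import stats

groups = {
    "deaf_ci": [88.0, 91.5, 85.0],     # placeholder scores, not study data
    "deaf_no_ci": [90.0, 89.0, 92.5],
    "hearing": [95.0, 96.5, 94.0],
    "interpreting": [95.5, 96.0, 95.0],
}

# Omnibus one-way ANOVA across the four groups.
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")

# Pairwise t-tests with Bonferroni adjustment (6 comparisons for 4 groups).
pairs = list(combinations(groups, 2))
for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    p_adj = min(1.0, p * len(pairs))  # Bonferroni correction
    print(f"{g1} vs {g2}: t = {t:.2f}, adjusted p = {p_adj:.3f}")
```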

In short, the performance of deaf and hearing college students on tasks tapping three different domains of visual-spatial functioning indicated not only that deaf individuals did not demonstrate any generalized advantage, but that hearing participants performed as well as or better than they did. The lack of differences between deaf participants with and without CIs or between hearing participants with and without sign language skill further suggests that the observed differences were a function of hearing status (although not in the direction typically expected) rather than participants’ generally preferred language modality. This issue can be addressed further by examining relations between scores on the visual-spatial tasks and participants’ sign language and spoken language skills.

Visual-spatial performance and language skills

Table 2 provides the results of correlational analyses examining associations between participants’ visual-spatial task scores and their expressive and receptive language skills. 3 The coefficients indicate that for the deaf participants with CIs, the only significant association of their visual-spatial processing skills with their sign language skills was the negative correlation between their scores on the Spatial Relations task and their expressive skills. The association of visual-spatial skills with their speech skills appeared somewhat stronger, as there were positive correlations between their scores on the Spatial Relations task and their speech recognition (both auditory only and audiovisual) and a negative correlation with their age of implantation (earlier implantation associated with higher scores). Embedded Figures and Pair Cancellation scores were negatively associated with age of implantation, although the coefficients were not statistically significant. There also was a significant negative correlation between Pair Cancellation and Audiovisual Enhancement in speech recognition. For the deaf participants without CIs, the only significant correlation between their visual-spatial scores and their language scores was a positive association between their Spatial Relations scores and their sign language reception (passage comprehension) scores. There were no significant correlations between visual-spatial scores and language scores for the interpreting students, and in the other group of hearing students, only the correlation between their audiovisual speech recognition and Embedded Figures scores was significant. 4

Table 2.

Correlation coefficients between sign language measures and spoken language measures with visual-spatial ability measures in Experiment 1

| Measure | Spatial relations | Embedded figures | Pair cancellation |
| --- | --- | --- | --- |
| Deaf with cochlear implants | | | |
| Age of sign acquisition | .16 | .03 | −.10 |
| Expressive sign language | −.49** | −.06 | −.09 |
| Receptive sign language | −.01 | −.07 | .03 |
| Speech production–phonemes | .22 | .15 | .15 |
| Speech recognition–audiovisual | .32* | −.07 | .06 |
| Speech recognition–audio only | .34* | .02 | .24 |
| Audiovisual enhancement | −.16 | −.18 | −.44** |
| Age of implantation | −.41** | −.25 | −.21 |
| Deaf without cochlear implants | | | |
| Age of sign acquisition | −.19 | −.02 | −.19 |
| Expressive sign language | .06 | −.06 | .11 |
| Receptive sign language | .35* | .10 | .15 |
| Speech production–phonemes | −.15 | .08 | −.13 |
| Speech recognition–audiovisual | −.23 | −.01 | −.08 |
| Speech recognition–audio only | −.18 | .11 | −.01 |
| Audiovisual enhancement | −.14 | −.24 | −.13 |
| Hearing | | | |
| Expressive sign language | −.11 | .14 | −.10 |
| Receptive sign language | −.02 | −.10 | −.40 |
| Speech-to-noise ratio | .06 | .24 | .14 |
| Speech recognition–audiovisual | .21 | .35** | .22 |
| Speech recognition–audio only | .13 | .26 | .05 |
| Audiovisual enhancement | .04 | .01 | .14 |
| Hearing interpreting students | | | |
| Expressive sign language | .27 | .17 | .08 |
| Receptive sign language | .44 | .06 | .05 |
| Speech-to-noise ratio | .31 | −.13 | .29 |
| Speech recognition–audiovisual | .03 | −.14 | .19 |
| Speech recognition–audio only | .13 | .18 | .24 |
| Audiovisual enhancement | −.12 | −.30 | −.12 |

Note. *p < .05, **p < .01.

Neither the CI users nor the nonusers showed significant correlations between any of their visual-spatial scores and aided hearing thresholds, −.22 ≤ r(48) ≤ .18 and −.16 ≤ r(29) ≤ .12, respectively (cf. Hauptman & Eliot, 1986; Marschark et al., 2013).
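The within-group correlational analyses summarized in Table 2 can be illustrated with a short sketch. Pearson coefficients are assumed here (the article does not specify the coefficient type), and the paired scores are placeholders.

```python
# Illustrative sketch of a within-group correlation between a language
# measure and a visual-spatial score, of the kind reported in Table 2.
from scipy import stats

# Placeholder paired scores for one group (e.g., CI users):
spatial_relations = [82.0, 95.0, 88.0, 91.0, 79.0, 93.0]
speech_recognition_a = [20.0, 85.0, 55.0, 70.0, 15.0, 90.0]

r, p = stats.pearsonr(speech_recognition_a, spatial_relations)
print(f"r = {r:.2f}, p = {p:.3f}")  # compare with the Table 2 coefficients
```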

In summary, the correlational analyses suggest that for deaf participants, better visual-spatial ability as tapped by the Spatial Relations task generally is associated with better skills in their preferred language modality rather than being a function of the modality of those language skills. That is, the use of sign language did not appear to bestow any particular benefit to performance on that task (or any other). In fact, among deaf participants who indicated that they knew sign language (i.e., rating themselves 2 or higher on the SLPI), those with CIs scored slightly higher than their deaf peers without CIs on two of the three visual-spatial tasks (see Table 1) despite reporting that they learned to sign significantly later, t(82) = 3.42. Further, the 24 deaf participants who claimed to be native signers did not score higher on any of the three tasks than the remaining deaf participants who knew sign language.

The observed association between deaf participants’ language skills (regardless of modality) and Spatial Relations scores, but not performance on the other visual-spatial tasks, appears to add to the complexity of findings in the literature reflecting (inconsistent) interactions among language, language modality, and visual-spatial functioning among deaf individuals. A related visual-spatial domain in which results have been variable, but potentially enlightening both theoretically and practically, is visual-spatial working memory. In an effort to further clarify possible associations among hearing status, sign language ability, language modality, and visual-spatial ability, Experiment 2 involved a nonverbal working memory task, the Corsi Blocks, with a group of participants from Experiment 1 for whom the battery of language measures was available.

Experiment 2

Visual-spatial working memory is one of several domains in which deaf individuals sometimes have been reported to have an advantage (see Hall & Bavelier, 2010; Mayberry, 2002). More than just a short-term memory store for retaining series of items, working memory is centrally involved in language comprehension, problem solving, and learning (Baddeley & Logie, 1999). Among deaf learners, working memory has been found to be a significant predictor of spoken language abilities (Cleary, Pisoni, & Geers, 2001), reading abilities (Garrison, Long, & Dowaliby, 1997; Geers, 2003), and mathematics achievement (Gottardis, Nunes, & Lunt, 2011; Lang & Pagliaro, 2007). Marschark et al. (in press) emphasized the centrality of language for working memory insofar as hearing individuals typically outperform deaf individuals in tasks involving stimuli amenable to verbal coding, and native or near-native-signing deaf individuals have been found to outperform hearing peers on working memory tasks involving the Corsi Blocks (Romero Lauro, Crespi, Papagno, & Cecchetto, 2014; Wilson, Bettger, Niculae, & Klima, 1997) and other stimuli that are not easily verbally coded (e.g., Campbell & Wright, 1990; Dawson, Busby, McKay, & Clark, 2002; see Hamilton, 2011). That is, better language skills typically are associated with better working memory regardless of the nature of the materials. Stiles, McGregor, and Bentler (2012), however, found hearing children to score higher on the Corsi Blocks than children with mild to moderately severe hearing losses who did not sign, and deaf children and adults who are less-skilled signers have been found not to differ from hearing peers on the task (Alamargot, Lambert, Thebault, & Dansac, 2007; Logan, Mayberry, & Fletcher, 1996; Marschark et al., 2013).

Although frequently referred to as a visual-spatial working memory task, the Corsi Blocks is primarily a spatial memory task involving a set of nine identical, randomly placed blocks. The task involves tapping sequences of blocks of increasing length in the same order as demonstrated by an experimenter. Della Sala et al. (1999) showed that Corsi Blocks performance was disrupted more by spatial than visual interference, whereas performance on a visual task parallel to the Corsi Blocks was disrupted more by visual than spatial interference. This distinction between visual and spatial components of working memory is rare in the literature with regard to deaf individuals. López-Crespo et al. (2012), however, argued that there is no evidence for deaf individuals’ having better visual memory than hearing individuals. They used a delayed matching-to-sample task in which deaf and hearing children saw Kanji characters and had to indicate whether a comparison character was identical to one they had just seen, either immediately or after a 4-s delay. The hearing children and deaf bilingual children were more accurate than deaf peers who used either spoken language or sign language only. All three deaf groups had significantly longer response times than the hearing group.

Marschark et al. (in press) interpreted the López-Crespo et al. (2012) results as consistent with the haptic tasks of Van Dijk et al. (2013a, 2013b), described earlier, insofar as the Van Dijk et al. (2013a) haptic parallel setting task was predominantly a spatial rather than a visual task (and deaf participants performed better) while their (2013b) tactual performance task entailed a visual imagery component (and both deaf and hearing signers performed better). Marschark et al. (in press) suggested that those results indicated that neither sign language nor hearing loss alone can explain differences in nonverbal working memory in the relevant literature. Further, the finding of significant associations between receptive vocabulary size and recall even when the to-be-remembered items are nonverbal stimuli such as Corsi Blocks (Stiles et al., 2012) suggests that EF or some more global cognitive ability may be involved beyond language modality and hearing status. Stiles et al. (2012), for example, found a significant difference in Corsi Blocks performance between children with and without hearing loss who were low in EF but not those high in EF.

This experiment involved administration of the Corsi Blocks to subgroups of deaf and hearing individuals who had participated in Experiment 1. It offered the opportunity to examine nonverbal (spatial) working memory performance in terms of hearing status, spoken language ability, and sign language ability.

Method

Participants

One hundred and twenty of the participants in Experiment 1 agreed to return and participate in this experiment. They again were paid for their participation. Of the 75 deaf participants, 33 were CI users who received their (first) CI at a mean age of 7.2 years (SD = 5.7); 13 reported receiving a second CI between 7 and 16 years of age. Among the CI users, 24 indicated that they knew sufficient sign language to rate themselves at 2 on the SLPI in Experiment 1 (see Appendix), but only 4 of them considered themselves fluent (i.e., assigned themselves 5 on the SLPI). Of the 42 deaf students who did not use CIs, all but 6 indicated that they knew sign language, and 19 of them assigned themselves 5 on the SLPI. Of the 45 hearing participants who agreed to return, 8 were sign language interpreting students, none of whom considered themselves fluent signers.

Procedure

An automated version of the Corsi Blocks (Cornoldi & Mammarella, 2008), written using E-Prime software, was used in this experiment. Instructions appearing on the computer screen informed participants that they would see displays of nine gray squares, some of which would turn black one at a time, and their goal was to remember the squares in sequence. Following each sequence, the gray squares all appeared with red rectangles around them for a 500-ms delay, after which participants used a mouse to click on the squares in the order in which they were presented. The order of selection appeared on each block, but no feedback was given. On each trial, participants had the option of restarting their recall, skipping blocks they could not recall, or indicating they were finished. Three trials were presented at each sequence length from 2 to 8, but the experiment was halted after three consecutive incorrect trials. The task thus yields two performance measures: the total number of correct trials and the highest span reached (with at least one correct trial). A sign language interpreter-researcher was in the room with each student, who was tested individually, to ensure that all participants understood the task.
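A minimal sketch of the trial structure and the two derived measures described above follows; `is_correct` is a hypothetical stand-in for comparing a participant’s response sequence with the presented one, and the stopping rule is the one stated in the text.

```python
# Sketch of the Corsi Blocks scoring and stopping rule: three trials per
# sequence length from 2 to 8, halting after three consecutive errors.

def run_corsi(is_correct) -> tuple[int, int]:
    """Return (total correct trials, highest span with >= 1 correct trial)."""
    total_correct, highest_span, consecutive_errors = 0, 0, 0
    for length in range(2, 9):       # sequence lengths 2..8
        for trial in range(3):       # three trials per length
            if is_correct(length, trial):
                total_correct += 1
                highest_span = length
                consecutive_errors = 0
            else:
                consecutive_errors += 1
                if consecutive_errors == 3:  # stop rule
                    return total_correct, highest_span
    return total_correct, highest_span

# Example: a participant correct through length 6, then failing at length 7.
responses = lambda length, trial: length <= 6
print(run_corsi(responses))  # (15, 6)
```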

Results and Discussion

Preliminary analyses indicated no differences on either of the measures between the 8 sign language interpreting students and the remaining 37 hearing participants (with a slight advantage for the noninterpreters on both measures). Given the sizes of the hearing groups and the lack of significant differences between them in Experiment 1, they were combined for the purposes of further analyses.

Working memory performance and hearing status

Two one-way ANOVAs were conducted in which group (deaf participants with CIs, deaf participants without CIs, and hearing participants) was the between-groups factor. In one analysis, the highest memory span achieved was the dependent variable, and in the other, the total number of correct trials was the dependent variable. Neither analysis indicated a significant effect of group, F(2,117) = 0.49, MSE = 1.08 and F(2,117) = 0.53, MSE = 7.28, respectively (see Table 3). As in Experiment 1, the two analyses were repeated including only those deaf participants who indicated that they knew sign language well enough to rate themselves at 2 on the SLPI (24 with CIs and 36 without CIs). Neither of those analyses yielded a significant main effect of group, F(2,101) = 0.73, MSE = 1.09 and F(2,101) = 0.64, MSE = 7.28, respectively. Comparisons of the scores of 19 deaf participants who indicated that they were native signers and 39 who learned to sign later also indicated no significant difference on either the highest span achieved or the number of correct trials, t(56) = −0.78 and t(56) = −0.60, respectively. Similar analyses comparing the 13 deaf participants who reported having at least one deaf parent and the 63 who reported having only hearing parents also failed to yield a significant difference on either measure, t(74) = 0.36 and t(74) = 0.14, respectively. Comparisons of 10 deaf participants who received CIs prior to age 3 and 23 who received them later indicated no differences between them on either Corsi Blocks measure, both ts(31) < 1.0. Similar results were obtained in comparisons of 15 participants who received their CIs prior to age 5 and 18 who received them later.

Table 3.

Means and SD for working memory performance in Experiment 2

Corsi block measures     Deaf with CIs    Deaf without CIs    Hearing
                         Mean     SD      Mean     SD         Mean     SD
Highest span reached     6.52     1.25    6.44     0.96       6.68     0.93
Total # trials correct   13.39    3.35    13.10    2.47       13.40    2.69

Note. CI = cochlear implant.

Working memory performance and language skills

Despite there being no main effects of group in analyses of the Corsi Blocks measures, correlational analyses indicated rather different patterns of association across the three groups between the Corsi Blocks scores and the language measures collected in Experiment 1. As can be seen in Table 4, the only significant correlations for the deaf participants with CIs were between their speech recognition scores (audiovisual and auditory) and the two Corsi Blocks measures. For the deaf participants without CIs, their receptive sign language scores were significantly correlated with both Corsi Blocks measures, and there was a nonsignificant trend for both Corsi measures to be associated with earlier ages of sign language acquisition (ps = .06). There were no significant correlations between language measures and Corsi Blocks measures for the hearing participants.
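The form of these correlational analyses can be sketched as follows; the two arrays are invented placeholder scores for one group, not study data, and SciPy's pearsonr stands in for whatever statistical package was actually used.

```python
# A sketch of the correlational analyses summarized in Table 4: Pearson r
# between a language measure and a Corsi Blocks measure within one group.
from scipy import stats

receptive_sign = [45, 60, 52, 70, 66, 58, 61, 49]  # hypothetical receptive sign scores
corsi_span = [5, 7, 6, 8, 7, 6, 7, 5]              # hypothetical highest spans

r, p = stats.pearsonr(receptive_sign, corsi_span)
print(f"r({len(corsi_span) - 2}) = {r:.2f}, p = {p:.3f}")  # df = n - 2
```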

Table 4.

Correlation coefficients between sign language measures and spoken language measures with working memory performance in Experiment 2

                                 Corsi span    Corsi trials
Deaf with cochlear implants
 Age of sign acquisition         .21           .18
 Expressive sign language        −.28          −.26
 Receptive sign language         .07           .05
 Speech production–phonemes      .29           .23
 Speech recognition–audiovisual  .40*          .43*
 Speech recognition–audio only   .36*          .37*
 Audiovisual enhancement         −.09          −.06
 Age of implantation             −.06          .02
Deaf without cochlear implants
 Age of sign acquisition         −.32          −.32
 Expressive sign language        .25           .22
 Receptive sign language         .43**         .46**
 Speech production–phonemes      −.04          −.10
 Speech recognition–audiovisual  −.02          −.02
 Speech recognition–audio only   .01           −.01
 Audiovisual enhancement         −.06          −.06
Hearing
 Expressive sign language        −.10          −.13
 Receptive sign language         −.06          −.12
 Speech-to-noise ratio           .14           .14
 Speech recognition–audiovisual  .04           .04
 Speech recognition–audio only   .09           .09
 Audiovisual enhancement         −.12          −.08

Note. *p < .05, **p < .01.

The results thus are fully in accord with those of Experiment 1 in suggesting that, for deaf participants, the spatial abilities tapped by the Corsi Blocks and Spatial Relations tasks, but not the more visual abilities tapped by the Pair Cancellation and Embedded Figures tasks in Experiment 1, are associated with stronger language skills in general rather than with sign language per se. In fact, none of the tasks employed thus far, typically referred to as visual-spatial, has been associated exclusively with sign language ability, age of sign language acquisition, or auditory deprivation rather than spoken language. At the very least, these results indicate that for both theoretical and practical (e.g., instructional) purposes, the spatial and visual abilities of deaf individuals may need to be dissociated and their relations to language abilities, not just sign language, considered in more depth.

Previous studies have demonstrated working memory span to be related to language abilities and vocabulary knowledge in deaf children both with and without CIs (e.g., Macsweeney, Campbell, & Donlan, 1996; Pisoni & Geers, 2000; Tang, 2002) as well as hearing children (e.g., Gathercole, Willis, Emslie, & Baddeley, 1992). Demonstrations that deaf children who use sign language, as well as those with CIs who use spoken language, are less likely to utilize verbal rehearsal (Bebko & McKinnon, 1990; Burkholder & Pisoni, 2006; Pisoni, Conway, Kronenberger, Henning, & Anaya, 2010) suggest that observed differences in verbal working memory performance might be linked to differences in language fluency and EF rather than language modality. Similar findings with nonverbal stimuli and the Corsi Blocks task, however, suggest that other cognitive factors, such as EF and general cognitive ability, also are at play. Experiment 3 examined this possibility and the locus of findings indicating performance of hearing individuals to be as good as or better than that of deaf individuals on "visual-spatial" processing tasks of the sort used in Experiments 1 and 2. In particular, as described earlier, the working memory task used here and the three visual-spatial tasks used in Experiment 1 (as well as most visual-spatial processing in the laboratory and the real world) require the coordination of several dimensions of cognitive ability. Experiment 3 allowed examination of this issue through the administration of tasks that assess nonverbal cognitive abilities and EF.

Experiment 3

Mayberry (2002), Hall and Bavelier (2010), and Marschark et al. (in press) discussed the complexity of understanding visual-spatial and other cognitive skills in the deaf population, which varies widely in language fluencies in signed, spoken, and written language. Those reviews and our earlier discussion pointed out that there are some visual-spatial and haptic-spatial tasks in which deaf (usually signing) people tend to score higher than hearing people and others in which skilled (usually native) signers, both deaf and hearing, score higher than nonsigners. The issue of relations among various cognitive abilities and preferred language modality becomes more complex when studies involve deaf individuals who are more representative of that heterogeneous population (e.g., nonnative signers, bimodal bilinguals). The complexity added by the inclusion of CI users, many of whom use sign language to some extent, makes this situation even more interesting and potentially more revealing, if somewhat more difficult to study (Marschark et al., in press). The (intentional) diversity of the participants in Experiment 1 and the availability of expressive and receptive language measures suggested that those individuals would provide excellent samples for further exploration of cognitive abilities associated with visual-spatial processing. The participants from Experiment 1 therefore were invited to participate in an additional experiment involving the administration of two pencil and paper tests tapping several cognitive domains: the General Ability Measure for Adults (GAMA; Naglieri & Bardos, 1997) and the Learning, Executive, and Attention Functioning (LEAF) scale (Kronenberger, Beer, Castellanos, Pisoni, & Miyamoto, 2014; Kronenberger & Pisoni, 2009).

There is still considerable debate about the nature of the EF construct, but there is broad agreement that EF includes shifting, inhibition, and working memory as essential components. The latter two have been demonstrated to be related to academic performance in reading and mathematics in hearing children and adolescents (Best, Miller, & Naglieri, 2011). Kronenberger, Pisoni, Henning, and Colson (2013) found that, despite matching on nonverbal IQ, long-term CI users (7 or more years), aged 7–25 and implanted prior to age 7, scored significantly below hearing peers in several aspects of EF, including verbal working memory, inhibition, visual matching, and concentration. Kronenberger, Colson, Henning, and Pisoni (2014) conducted a study with a similar group examining relations between spoken language and EF. They found a stronger association of spoken language ability with the verbal working memory and fluency-speed components of EF among CI users than among a hearing comparison group. Spatial working memory and inhibition-concentration, in contrast, were associated with spoken language skills in the hearing group but not the CI group. These results indicated that the cognitive abilities underlying spoken language are rather different for CI users and hearing individuals; that is, deaf learners are not simply hearing learners who cannot hear (Marschark & Knoors, 2012). Neither Kronenberger et al. (2013) nor Kronenberger, Colson, et al. (2014) included a comparison group of deaf individuals who did not use CIs. Hauser, Lukomski, and Isquith (2007), however, found no significant differences in self-reported EF between deaf and hearing college students on the BRIEF-A (Behavior Rating Inventory of Executive Function–Adult Version).

Method

Participants

Ninety-two of the participants from Experiment 1 agreed to participate in this experiment and again were paid for their participation. Of the 63 deaf participants, 32 were CI users who had received their (first) CI at a mean age of 6.9 years (SD = 5.1); nine reported receiving a second CI at a mean age of 13.2 years (SD = 3.2). Of the 30 hearing participants, 9 were sign language interpreting students. Because of the small number of interpreting students and the lack of any significant differences between the two groups of hearing participants in Experiment 1, the hearing participants were treated as a single group in this experiment.

Measures

In addition to the language and visual-spatial measures available from Experiments 1 and 2, several dimensions of cognitive ability were assessed using the GAMA (Naglieri & Bardos, 1997) and the LEAF (Kronenberger, Beer, et al., 2014). The GAMA is a nonverbal intelligence test that “evaluates an individual’s overall general ability with items that require the application of reasoning and logic to solve problems that exclusively use abstract designs and shapes” (Naglieri & Bardos, 1997, p. 1). It contains 66 items comprising four subscales. As described by Naglieri and Bardos (1997), the Matching subtest involves selection of the one of six figures that is identical to a target in color, shape, and configuration; it requires examination and comparison of shapes and colors as well as analysis of specific details. The Analogy subtest requires identification of the relationship between two figures and selection of the one of six figures that bears the same relationship to a target, requiring recognition of parallel conceptual relationships between different pairs of figures. The Sequences subtest involves selection of the one of six figures that completes a logical sequence of geometric designs varying in shape, color, and location; it requires analysis of the interrelationships among designs as they change across a sequence, emphasizing attention to spatial and sequential arrangements of the geometric figures. The Construction subtest is similar to the Spatial Relations task described earlier in that it involves selection of the one of six figures showing how provided shapes would appear if assembled; Naglieri and Bardos (1997, p. 25) emphasized that it involves the analysis, synthesis, and rotation of the component shapes to construct the target figure. The four GAMA subscales (norm-based scaled scores with a mean of 10 and SD of 3) can be combined to yield a nonverbal IQ score with a mean of 100 and an SD of 15. Naglieri and Bardos (1997) obtained comparable GAMA IQ scores for a sample of deaf adults and a hearing comparison group.

The LEAF (Kronenberger, Beer, et al., 2014; Kronenberger & Pisoni, 2009) is a questionnaire-based measure of EF behaviors in daily life that has been used extensively in studies involving deaf children and young adults with CIs as well as their hearing peers (e.g., Kronenberger et al., 2013; Kronenberger, Colson, et al. 2014). The adult version of the LEAF is based on self-report and includes 40 questions about individuals’ recent experiences and behaviors reflecting EF-related cognitive abilities. Each item is rated on a 4-point Likert scale from “never” to “very often.” The LEAF includes eight subscales: Comprehension and Conceptual Learning, Factual Memory, Attention, Processing Speed, Visual-Spatial Organization, Sustained Sequential Processing, Working Memory, and Novel Problem Solving.

The GAMA was administered one-on-one or in small groups by one of two sign language interpreter-researchers. Participants read the instructions, which were also read and/or signed to them according to student preference. After finishing the GAMA, participants completed the LEAF.

Results and Discussion

Nonverbal reasoning, EF, and hearing status

Table 5 provides the means and SDs for the GAMA and the LEAF. Because of the number of subscales involved, each instrument was analyzed using a multivariate analysis of variance in which group (deaf participants with CIs, deaf participants without CIs, hearing participants) was the between-subjects variable. Analysis of the five GAMA scores yielded a main effect of group (Wilks’ λ), F(10,172) = 2.04, reflected in the pattern of significant effects seen in Table 5. Bonferroni-corrected comparisons indicated no significant differences between the CI users and the other deaf participants on any of the scales. Significant differences were obtained, however, between the CI group and the hearing group on Matching, Construction, and IQ scores, all in favor of the hearing group, and between the deaf group without CIs and the hearing group on Construction and IQ scores, also in favor of the hearing group. A similar analysis involving only those 51 deaf participants (26 with CIs) who indicated that they knew sign language well enough to rate themselves at 2 or higher on the SLPI yielded a marginal main effect of group (Wilks’ λ), F(10,146) = 2.17, and significant between-groups effects on all five GAMA scales. Bonferroni-corrected comparisons of the hearing group with the signers in the two deaf groups yielded significant differences between the deaf group with CIs and the hearing group on Matching, Analogy, Construction, and GAMA IQ and between the deaf group without CIs and the hearing group on Sequences and GAMA IQ, all in favor of the hearing group. Comparisons of the 8 deaf participants who received CIs prior to age 3 and the 24 who received them later indicated no difference between them on any of the GAMA measures, all ts(30) < 1.70. Similar results were obtained in comparisons of the 14 participants who received their CIs prior to age 5 and the 18 who received them later.
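As a hedged illustration of the multivariate approach described above (not the authors' analysis code), a one-way MANOVA reporting Wilks' lambda can be computed with statsmodels; the scores below are random placeholders generated near the GAMA subscale norms, and the column names are ours.

```python
# A sketch of a one-way MANOVA on the four GAMA subscales, with Wilks' lambda
# among the reported statistics. All data are random placeholders.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 92  # roughly the Experiment 3 sample size
df = pd.DataFrame({
    "group": rng.choice(["deaf_ci", "deaf_no_ci", "hearing"], size=n),
    "matching": rng.normal(10, 3, n),      # subscale norms: mean 10, SD 3
    "analogy": rng.normal(11, 3, n),
    "sequences": rng.normal(12, 3, n),
    "construction": rng.normal(10, 3, n),
})
manova = MANOVA.from_formula(
    "matching + analogy + sequences + construction ~ group", data=df)
print(manova.mv_test())  # includes Wilks' lambda for the group effect
```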

Table 5.

Means and SD for GAMA and LEAF scores in Experiment 3

                         Deaf with CIs     Deaf without CIs   Hearing            Between-Ss
                         Mean      SD      Mean      SD       Mean      SD       F        p
GAMA
 Matching                9.88      2.94    10.71     3.16     11.83     2.41     3.65     .03
 Analogy                 11.31     3.13    11.31     3.12     13.13     2.24     3.66     .03
 Sequences               11.84     2.90    11.84     2.90     13.17     2.53     2.81     .06
 Construction            9.94      2.85    9.94      2.85     12.53     3.28     7.27     .00
 GAMA IQ                 104.41    13.15   105.35    16.55    116.13    12.38    6.47     .00
LEAF
 Comprehension           4.22      2.61    3.97      3.02     1.17      1.62     14.07    .00
 Factual memory          4.16      2.57    4.19      3.05     2.53      2.21     3.95     .02
 Attention               4.34      2.80    4.52      3.33     3.63      3.80     0.75     .48
 Processing speed        4.66      2.84    4.53      3.25     2.17      2.53     7.18     .00
 V-S organization        2.91      2.45    3.50      3.44     2.23      1.92     1.74     .18
 Sequential processing   3.53      2.16    3.39      2.78     1.85      2.59     4.19     .02
 Working memory          4.92      3.02    4.16      3.20     2.43      2.06     6.29     .00
 Problem solving         4.09      2.66    3.13      3.00     1.47      1.61     8.69     .00

Note. CI = cochlear implant; GAMA = General Ability Measure for Adults; LEAF = Learning, Executive, and Attention Functioning.

Multivariate analysis of the LEAF yielded a main effect of group (Wilks’ λ), F(16,166) = 2.20, and the pattern of effects seen in Table 5. Bonferroni-corrected comparisons indicated significantly lower scores (better EF functioning) among the hearing participants than the deaf CI users on the Comprehension, Processing Speed, Sequential Processing, Working Memory, and Problem Solving LEAF scales (Kronenberger et al., 2013). Those analyses also indicated significantly lower scores (better EF functioning) among hearing participants than deaf participants without CIs on Comprehension, Factual Memory, Processing Speed, and Problem Solving. No significant paired comparisons were found between the CI users and the other deaf participants on any of the scales. A similar analysis involving the 51 deaf participants (26 with CIs) who reported knowing sign language yielded a main effect of group (Wilks’ λ), F(16,140) = 2.40, with significant effects, in favor of the hearing participants, on each of the LEAF subscales except Attention and Visual-Spatial Organization. There were no significant comparisons between the two groups of deaf participants. Comparisons of the 8 deaf participants who received CIs prior to age 3 and the 24 who received them later indicated no differences between them on any of the LEAF scales, all ts(30) < 1.0. Similar results were obtained in comparisons of the 14 participants who received their CIs prior to age 5 and the 18 who received them later, all ts(30) ≤ 1.7.

Considering LEAF subscale scores from the perspective of clinical experience, summing across the five items in each subscale, scores less than 5 generally suggest that the individual is average in that area, with no significant problems. Subscale scores between 5 and 9 suggest that the individual may have mild or somewhat elevated problems in the area but likely does not have very significant problems. Subscale scores of 10 or greater indicate that the individual may have frequent and significant problems in that area, possibly of clinical significance. Totaling scores in the present study across the eight LEAF subscales indicated that 40 (63%) of the 63 deaf participants, including 21 (66%) of those with CIs, scored in the “average” range, with the remaining 23 (36%), including 11 (34%) of those with CIs, scoring in the “mildly elevated” range. Of the 30 hearing participants, 27 (90%) scored in the “average” range and the remaining 3 in the “mildly elevated” range. Consistent with that pattern of effects, an ANOVA using the total of the LEAF scales as the dependent variable yielded a main effect of group, F(2,90) = 8.21, MSE = 267.23, and Bonferroni-corrected comparisons indicated significant differences between the hearing students and both groups of deaf students, in favor of the hearing students, but no significant difference between the latter two.
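Those interpretive ranges amount to a simple classification rule, sketched below. The function name is ours, and the assumption that a subscale score is the sum of its item ratings (each rated 0–3, from "never" to "very often") follows the scoring described above; this is an illustration, not part of the LEAF instrument.

```python
# A small sketch of the clinical interpretation rule described above for a
# single LEAF subscale score.
def classify_leaf_subscale(score: int) -> str:
    """Map a LEAF subscale score onto the interpretive ranges described above."""
    if score < 5:
        return "average"          # no significant problems suggested
    if score <= 9:
        return "mildly elevated"  # possible mild problems
    return "elevated"             # frequent, possibly clinically significant problems

# Example: three hypothetical subscale scores and their classifications
scores = {"Working memory": 6, "Attention": 3, "Problem solving": 11}
print({scale: classify_leaf_subscale(s) for scale, s in scores.items()})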

The only significant correlation between aided hearing thresholds and scores on the GAMA and LEAF was a positive correlation, r(31) = .43, indicating that higher thresholds were associated with greater difficulties in novel problem solving among participants with CIs. Age of cochlear implantation was not related to any of the cognitive abilities tapped by the GAMA and LEAF.

Nonverbal reasoning, EF, and language skills

Results of analyses examining relations between the language measures for these participants from Experiment 1 and their GAMA and LEAF scores are presented in Tables 6 and 7 and are easily summarized. Among deaf participants who used CIs, significant correlation coefficients indicated that GAMA IQ scores were negatively related to their expressive sign language skills and positively related to their auditory speech recognition skills. Consistently, their Sequences subscores, reflecting nonverbal logical reasoning with sequential information, were negatively related to their expressive sign language skills and positively related to their auditory and audiovisual speech recognition skills; earlier ages of cochlear implantation also were associated with higher Sequences scores, and higher Construction scores were associated with less visual enhancement in speech recognition. In a similar analysis involving the 25 CI users who self-rated their sign language skills on the SLPI at Level 2 or higher, expressive sign language skills were significantly associated with all GAMA scores, and receptive skills were significantly related to all but the Sequences score. Among deaf participants who did not use CIs, all GAMA scores except Sequences were positively related to their receptive sign language skills, as reflected in passage comprehension scores, but to none of the speech measures. There were no significant correlations for the hearing participants. Taken together, these results replicate and extend those of Experiment 1, indicating that the cognitive abilities reflected in deaf individuals’ performance on tasks heavily dependent on spatial processing are associated with language ability in their preferred communication modality and are not specific to the use of sign language.

Table 6.

Correlation coefficients between Experiment 3 GAMA scores and language measures from Experiment 1

                                 Matching  Analogy  Sequences  Construction  GAMA IQ
Deaf with cochlear implants
 Age of sign acquisition         −.20      −.02     .08        .18           .05
 Expressive sign language        −.21      −.20     −.48**     −.37          −.41*
 Receptive sign language         .19       .16      .01        −.01          .11
 Speech production–phonemes      .14       −.06     .29        .15           .17
 Speech recognition–audiovisual  .12       .11      .46**      .16           .29
 Speech recognition–audio only   .06       .25      .46**      .34           .38*
 Audiovisual enhancement         .10       −.34     −.18       −.46**        −.30
 Age of implantation             −.14      −.13     −.42*      −.17          −.28
Deaf without cochlear implants
 Age of sign acquisition         .14       −.17     −.20       −.21          −.12
 Expressive sign language        .06       .10      .19        .18           .13
 Receptive sign language         .40*      .41*     .31        .50**         .48*
 Speech production–phonemes      .01       −.16     −.47*      −.31          −.26
 Speech recognition–audiovisual  .13       .01      −.15       .01           .01
 Speech recognition–audio only   .10       .05      −.10       −.03          .04
 Audiovisual enhancement         .06       −.10     −.11       −.04          −.06
Hearing
 Age of sign acquisition
 Expressive sign language        −.18      .21      .21        .03           .11
 Receptive sign language         −.04      .34      .32        .16           .26
 Speech-to-noise ratio           −.18      .18      .25        .15           .14
 Speech production–phonemes
 Speech recognition–audiovisual  −.04      .09      .33        .18           .20
 Speech recognition–audio only   −.06      .24      .36        .13           .23
 Audiovisual enhancement         .04       −.22     −.16       −.01          −.12

Notes. GAMA = General Ability Measure for Adults.

*p < .05, **p < .01.

Table 7.

Correlation coefficients among Experiment 3 LEAF scores and language measures from Experiment 1

                               Comprehension  Factual memory  Attention  Processing speed  V-S organization  Sequential processing  Working memory  Problem solving
Deaf with CIs
 Age of sign acquisition       .60**          .16             −.10       .06               −.32              −.13                   .10             −.20
 Expressive sign language      .10            .35             −.01       .35               .13               .13                    .21             .22
 Signed comprehension          −.01           .30             .33        .39*              .17               .25                    .23             .25
 Speech production–phonemes    −.24           −.23            .23        −.07              .01               .09                    −.05            −.15
 Speech recognition–AV         .04            −.13            .27        −.01              .04               −.02                   .02             −.19
 Speech recognition–A          −.03           −.18            .30        −.03              −.06              −.02                   −.04            −.27
 Audiovisual enhancement       .14            .15             −.18       .04               .20               −.01                   .11             .24
 Age of implantation           −.06           .04             −.16       .09               .01               −.05                   .10             .16
Deaf without CIs
 Age of sign acquisition       .39            .12             .46*       .65**             .46*              .63**                  .48*            .44*
 Expressive sign language      −.26           .22             −.16       −.12              −.12              −.16                   −.10            −.31
 Signed comprehension          −.46*          −.31            −.28       −.34              .08               −.36                   −.36            −.50**
 Speech production–phonemes    .29            −.11            .24        .19               .25               −.11                   .30             .39
 Speech recognition–AV         .11            −.25            .17        .12               .31               .12                    .24             .12
 Speech recognition–A          .03            −.32            .07        .03               .16               .04                    .11             .06
 Audiovisual enhancement       .15            .13             .22        .20               .31               .16                    .27             .14
Hearing
 Expressive sign language      .08            −.11            −.30       −.01              .04               −.09                   −.04            −.05
 Signed comprehension          .18            −.11            −.08       −.02              −.27              −.18                   −.13            .16
 Speech-to-noise ratio         −.12           .02             .32        .13               −.11              −.15                   −.15            −.10
 Speech production–phonemes
 Speech recognition–AV         −.14           −.29            .01        −.22              −.12              −.17                   −.21            −.04
 Speech recognition–A          −.05           −.15            .27        .01               −.09              −.19                   .02             .06
 Audiovisual enhancement       −.06           −.06            −.34       −.18              .01               −.15                   −.15            −.11

Notes. CI = cochlear implant; LEAF = Learning, Executive, and Attention Functioning.

*p < .05, **p < .01.

Examination of relations between language scores and EF, as reflected in the LEAF subscales, indicated that among deaf participants who used CIs, greater difficulty in Comprehension-related EF was associated with learning sign language at later ages, and greater difficulty in Processing Speed was associated with better comprehension of sign language (see Table 7). Among those who did not use CIs, earlier sign language acquisition was associated with fewer difficulties in EF across most of the domains tapped by the LEAF. In short, better language skills were associated with fewer EF difficulties, but the language modality involved differed for the deaf participants with and without CIs. There were no significant correlations between EF and language measures among the hearing participants.

As can be seen in Table 8, GAMA IQ scores (as well as all subscale scores) were significantly correlated with Spatial Relations scores in both deaf groups and nearly so in the hearing group (p = .07). That link is consistent with the Blatto-Vallee et al. (2007) and Marschark et al. (2013) findings that Spatial Relations scores were strongly related to mathematics problem solving among deaf and hearing students, although more strongly for the former. In contrast, GAMA Matching and IQ scores were significantly correlated with the Corsi Block measures for the hearing participants and the deaf participants who did not use CIs but not deaf participants in the CI group, suggesting that spatial working memory is more strongly related to nonverbal IQ in the former groups than in the latter group, which might have depended more exclusively on visual-spatial skills. At the very least, these results suggest that the hearing individuals, deaf individuals, and to some extent deaf individuals with CIs were dealing with the visual-spatial tasks in different ways, an issue that can be addressed further by examining relations between scores on the visual-spatial tasks and subscales of the LEAF.

Table 8.

Correlation coefficients between Experiment 3 GAMA scores and scores on visual-spatial tasks from Experiments 2 and 3

                               Spatial     Embedded   Pair           Corsi    Corsi
                               relations   figures    cancellation   span     # trials
Deaf with cochlear implants
 Matching                      .55**       .03        −.12           −.21     −.26
 Analogy                       .55**       .19        .17            −.07     −.01
 Sequences                     .65**       −.01       .11            .38      .36
 Construction                  .53**       .39*       .32            .28      .22
 GAMA IQ                       .76**       .20        .16            .12      .12
Deaf without cochlear implants
 Matching                      .43*        .34        .18            .45*     .44*
 Analogy                       .40*        .43*       .32            .25      .26
 Sequences                     .40*        .26        .22            .38*     .41*
 Construction                  .53**       .24        .12            .50**    .58**
 GAMA IQ                       .52**       .37*       .24            .46*     .49**
Hearing
 Matching                      .30         .27        .20            .59**    .53*
 Analogy                       .27         .16        .18            .18      .34
 Sequences                     .35         .39*       .31            .32      .43
 Construction                  .16         .30        .17            .34      .39
 GAMA IQ                       .34         .38*       .27            .44*     .53*

Notes. GAMA = General Ability Measure for Adults.

*p < .05, **p < .01.

Results of the correlational analyses involving the visual-spatial tasks and LEAF subscales are presented in Table 9. For deaf participants with CIs, the only significant association between visual-spatial scores and LEAF scores was that between Embedded Figures scores and the LEAF Working Memory subscale, the negative coefficient indicating that fewer self-reported working memory difficulties were associated with better performance in separating figures from ground. Table 9 also reveals several significant inverse correlations between LEAF subscores and Spatial Relations and Embedded Figures scores for deaf students who did not use CIs. Those results indicate that participants who reported better EF scored higher on the visual-spatial tasks, consistent with the Stiles et al. (2012) finding that children with hearing loss who were higher in EF demonstrated better performance on the Corsi Blocks than children who were lower in EF.

Table 9.

Correlation coefficients among Experiment 3 LEAF scores and scores on visual-spatial tasks from Experiments 2 and 3

                         Spatial     Embedded   Pair           Corsi    Corsi
                         relations   figures    cancellation   span     # trials
Deaf with cochlear implants
 Comprehension           .08         −.11       −.01           .01      .11
 Factual memory          −.06        .01        .16            .15      .12
 Attention               .01         −.21       .15            .33      .27
 Processing speed        −.04        −.19       .01            .22      .30
 V-S organization        −.27        −.18       −.17           .02      .08
 Sequential processing   −.19        −.12       −.23           .08      −.02
 Working memory          −.23        −.35*      .01            .11      .14
 Problem solving         −.22        −.06       .07            .18      .26
Deaf without cochlear implants
 Comprehension           −.14        .08        −.19           −.15     −.17
 Factual memory          −.15        −.07       .13            .07      .06
 Attention               −.32        −.38*      −.17           −.18     −.20
 Processing speed        −.34        −.27       −.11           −.24     −.26
 V-S organization        −.30        −.45*      .02            −.06     −.12
 Sequential processing   −.46**      −.42*      −.07           −.14     −.18
 Working memory          −.52**      −.36*      .12            −.16     −.21
 Problem solving         −.30        −.25       −.14           −.27     −.27
Hearing
 Comprehension           .21         −.08       .06            −.22     −.16
 Factual memory          .17         −.16       .17            −.43     −.52*
 Attention               .34         .09        .19            −.07     −.06
 Processing speed        .08         −.03       .18            −.48*    −.60**
 V-S organization        .04         .18        .06            −.62**   −.55**
 Sequential processing   .05         −.01       −.05           .03      −.07
 Working memory          .12         .15        .07            −.17     −.29
 Problem solving         .24         −.06       .10            −.13     −.19

Notes. LEAF = Learning, Executive, and Attention Functioning.

*p < .05, **p < .01.

There were no significant correlations between LEAF subscores and the three visual-spatial scores for the hearing participants, lending weight to the suggestion that the visual-spatial processing tasks used in Experiment 1 tap somewhat different cognitive abilities in deaf and hearing individuals. As can be seen in Table 9, however, Corsi Blocks scores were significantly related to the LEAF Factual Memory, Processing Speed, and Visual-Spatial Organization subscales for the hearing participants. Because the LEAF is a “real-world” measure of EF based on self-report, these findings suggest that the performance of hearing individuals on a visual working memory task may be more reflective of real-world EF behaviors (about which the individual is aware) than is the case for deaf individuals. This could indicate that the processes used by deaf individuals in the Corsi Blocks task differ from those they use in daily EF, or that the deaf participants were less aware of their actual real-world EF behaviors, and therefore their correlations with the Corsi Blocks measures were not significant. The latter alternative is consistent with findings indicating that, despite their belief to the contrary, deaf adolescents and young adults learn no more from sign language than they do from text, and that deaf college students overestimate their comprehension and learning to a significantly greater extent than their hearing peers (Borgna, Convertino, Marschark, Morrison, & Rizzolo, 2011).

The finding of very similar LEAF scores among deaf students who use CIs and those who do not suggests that the groups do not differ in self-reported everyday EF (Hauser et al., 2008; Pisoni et al., 2010). Finally, it may be noteworthy that Corsi Blocks performance was not significantly related to the LEAF Working Memory subscale in any group. That subscale reflects the likelihood of being overwhelmed by the volume of information, being able to do only one thing at a time, or forgetting or losing track of things during learning, none of which appears specifically related to spatial processing ability in our samples.

The availability of GAMA IQ scores and scores on the three visual-spatial tasks, Spatial Relations in particular, allows one other analysis relevant to the failure to find any generalized visual-spatial advantage for deaf participants. Blatto-Vallee et al. (2007, p. 446) suggested that the “seeming contradiction” of not finding deaf individuals to score higher than hearing individuals on their Spatial Relations and Minnesota Paper Form Board tasks might have been the result of a confound: “Since no tests were conducted for nonverbal reasoning, effects of overall intelligence on these results cannot be ruled out.” To evaluate that possibility, the ANOVAs with Spatial Relations, Embedded Figures, and Pair Cancellation scores as dependent variables were repeated using GAMA IQ as a covariate. The results were essentially the same as those described earlier for the first two tasks, F(3,89) = 6.21, MSE = 28.02 and F(3,89) = 6.01, MSE = 146.65, as hearing participants scored significantly higher than the two groups of deaf participants, which did not differ according to Bonferroni-corrected comparisons. Analysis of Pair Cancellation scores controlling for GAMA IQ failed to yield a significant effect of group, F(3,89) = 1.49, MSE = 84.22, although the pattern was exactly the same as in the other analyses, with scores of 91.06, 91.55, and 95.03 for the deaf participants with CIs, deaf participants without CIs, and hearing participants, respectively. Taken together, these results reinforce the “seemingly contradictory” results of Blatto-Vallee et al. (2007) and Marschark et al. (2013), indicating that if there are generalized visual-spatial benefits among deaf individuals, they appear to be quite small and perhaps limited to spatial functioning (see López-Crespo et al., 2012; Marschark et al., in press).
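The covariate analysis just described has the form of an ANCOVA, which can be sketched with an ordinary least squares model in statsmodels; all data below are random placeholders, not study data, and the column names are ours.

```python
# A sketch of an ANCOVA: group differences in a visual-spatial score with
# GAMA IQ entered as a covariate. All data are random placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 92
df = pd.DataFrame({
    "group": rng.choice(["deaf_ci", "deaf_no_ci", "hearing"], size=n),
    "gama_iq": rng.normal(108, 15, n),
    "spatial_relations": rng.normal(20, 6, n),
})
model = smf.ols("spatial_relations ~ C(group) + gama_iq", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F test for group, controlling for GAMA IQ
```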

General Discussion

Perhaps stemming from assumptions about sensory compensation, it is frequently assumed that deaf individuals, and particularly those who use sign language, are visual learners and have better visual-spatial skills than hearing individuals. The available literature is silent with regard to deaf individuals’ being visual learners, however, at least in the sense in which “visual learner” is used in the empirical literature, in reference to learning styles. Alternatively, demonstrations that deaf individuals perform better on some tasks that apparently involve visual-spatial skills than they do on tasks that involve verbal skills or that benefit from verbal coding, and that they may perform better on those tasks than hearing nonsigners, are often taken as evidence of generally superior visual-spatial skills. The literature with regard to deaf individuals’ visual-spatial skills, however, is complex and sometimes inconsistent (see Mayberry, 2002). Some advantages in visual-spatial tasks initially ascribed to deaf individuals (i.e., the result of auditory deprivation) subsequently have been linked to sign language ability and also are found among hearing individuals who are skilled signers. As described earlier, some visual-spatial advantages have been found to be limited to native signers (e.g., enhanced spatial working memory), while others (e.g., sensitivity to change in the visual periphery, at least on near-transfer tasks) can be acquired by hearing nonsigners through real-world experience (e.g., video games) or experimental manipulations (e.g., the proportion of peripheral detection trials).

Findings identifying superior visual-spatial or other abilities among deaf individuals or native signers, whether deaf or hearing, are of theoretical importance. The present study, however, was aimed at understanding relations between language use and visual-spatial abilities among deaf individuals more representative of the deaf population than the 5% or so who have deaf parents. In addition to a sample of deaf individuals who varied in their abilities and preferences for sign language and spoken language, the three experiments in this study included a targeted sample of deaf individuals who used CIs. Not only are CI users a growing segment of the deaf population, but a variety of studies has indicated that the cognitive, neurobehavioral, and psychosocial functioning of CI users differs to some extent from both other deaf individuals and from hearing individuals. Insofar as CIs do not provide the same auditory input as that available either to hearing individuals or to hearing aid users, such findings perhaps should not be surprising. Nevertheless, for methodological as well as theoretical reasons, research into cognitive abilities rarely includes “naturally diverse” samples of deaf individuals whose sign language and spoken language abilities are documented or samples of deaf individuals (beyond childhood) with and without CIs.

Beyond seeking to disentangle relationships among hearing status, language modality, and visual-spatial abilities, the present study also was undertaken in the interests of ameliorating or at least better understanding the academic underachievement observed among many deaf learners. Three experiments examined language, visual-spatial, nonverbal reasoning, and EF abilities among deaf and hearing individuals. To provide further insights into one domain of relevant research, the deaf CI users were compared to the deaf nonusers. Insights into another domain of relevant research were provided by including samples of hearing individuals with and without sign language skills. Finally, the assessment of sign language and spoken language abilities, as appropriate, in relatively large samples from the same cohort of first-year university students—not yet immersed in what is a rather large signing Deaf community—allowed additional analyses involving subgroups of deaf individuals who were native signers and/or had deaf parents.

In Experiment 1, deaf and hearing participants were given a battery of language assessments and three tasks tapping different aspects of visual-spatial functioning. Three findings were of primary interest. First, on all three of the visual-spatial tasks (Spatial Relations, Pair Cancellation, and Embedded Figures), no advantages were observed as a function of being deaf, using sign language, or even being a native signer. Rather, consistent with other studies involving deaf and hearing college students (e.g., Blatto-Vallee et al., 2007; Marschark et al., 2013), hearing participants demonstrated better performance on the visual-spatial tasks than did deaf participants, and whether or not deaf participants used hearing aids or CIs did not alter that pattern. Second, performance on the Spatial Relations task was associated with deaf participants’ language ability in their preferred modality, whatever it was. Thus, for those with CIs, better performance was associated with better speech perception ability and earlier ages of implantation, and was negatively related to their expressive sign language skills. For deaf participants who did not use CIs, Spatial Relations scores were positively associated with their receptive sign language abilities but not related to their speech reception abilities (see Table 2). The hearing interpreting students also showed a strong association between their Spatial Relations scores and their receptive sign language abilities. For the other hearing participants, Embedded Figures performance was associated with speech recognition in noise, two tasks requiring the ability to separate signals from noise. Third, following from the foregoing, deaf and hearing individuals apparently dealt with the demands of the visual-spatial tasks very differently, as did signers and nonsigners.

The results of Experiment 1 also bear on the observation of López-Crespo et al. (2012) that there is more empirical evidence supporting an advantage for deaf individuals in the spatial domain than in the visual domain. Although the performance of the deaf participants did not surpass that of hearing participants, the only significant correlations between the deaf participants’ performance on the visual-spatial tasks and their language assessments involved Spatial Relations, the most spatial of the three tasks; the only significant correlations for hearing participants involved Embedded Figures, the most visual of the three tasks. The present results are thus consistent in some ways and complex in others, well representing the larger body of literature in the area involving both deaf and hearing individuals (Della Sala et al., 1999; Mayberry, 2002).

Experiment 2 sought to extend the initial investigation by examining hearing status, language abilities, and working memory using a task that is not conducive to verbal coding. Working memory tasks involving verbal materials, as well as nonverbal materials that are easy to code verbally in spoken language or sign language (e.g., colors, shapes), typically result in better performance by hearing individuals. The most popular nonverbal working memory task that precludes verbal coding is the Corsi Blocks. Results in the literature involving that task are inconsistent, but, as described earlier, studies that have involved deaf native signers have found them to outperform hearing nonsigners, while studies involving nonnative signers who are more variable in their sign language skills have yielded no differences or advantages for hearing individuals. The present study involved primarily nonnative signers, and differences in Corsi Blocks performance across deaf participants with CIs, deaf participants without CIs, and hearing participants were negligible. However, no advantages were observed for deaf participants who indicated that they were native signers or had deaf parents, nor were there differences in Corsi scores between the small sample of interpreting students and the other hearing participants. Consistent with the results of Experiment 1, performance on both of the Corsi Blocks measures was associated with greater abilities in the deaf participants’ stronger language modality, whatever it was: spoken language receptive ability for those with CIs (Edwards & Anderson, 2014) and sign language receptive and expressive ability for those without CIs. Although not statistically significant, performance on the Corsi Blocks also was related to earlier sign language acquisition among participants who primarily used sign language and later acquisition among participants who primarily used spoken language. Neither sign language nor spoken language abilities predicted performance among the hearing participants.

Working memory tasks involve the active management and coordination of lower-level cognitive abilities including, in the case of the Corsi Blocks, visual attention, visual-spatial processing, retention of sequential information, and eye-hand coordination. As indicated earlier, previous studies involving deaf children with and without CIs accordingly have found working memory performance to be associated with EF as well as language abilities. Experiment 3 explored deaf and hearing individuals’ nonverbal cognitive functioning (GAMA) and self-reported EF (LEAF) as they related to signed and spoken language abilities and to performance on the visual-spatial tasks in Experiments 1 and 2. The GAMA involves visual-spatial tasks similar to those used in Experiment 1, and GAMA IQ scores were found to be associated with Spatial Relations scores for both deaf and hearing participants. Consistent with Experiment 1 and the previous studies described earlier, no visual-spatial advantage was observed for deaf participants on the GAMA, as hearing participants’ GAMA subscores and overall nonverbal IQ scores surpassed those of their deaf peers with and without CIs. Similar results were obtained when only those deaf participants with sign language skills were considered. Also consistent with the results of Experiment 1, GAMA IQ and subtest scores were negatively related to sign language skill but positively related to spoken language skill among CI users. GAMA IQ and all subtest scores except Sequences were positively related to receptive sign language skill among deaf participants who did not use CIs, and there were no significant correlations between GAMA scores and language scores for hearing participants.

Self-reported EF in the context of daily life, as indicated by the LEAF, appeared to favor the hearing participants, 90% of whom scored within the “average” range, while about 36% of the deaf participants, about half of whom (48%) used CIs, indicated some EF-related difficulties. The finding that about half of the deaf participants who fell into that “mildly elevated” range of difficulties used CIs is noteworthy, because previous studies reported by Pisoni et al. (2010) documenting EF difficulties among deaf children who used CIs did not include peers without CIs for comparison purposes. The present findings thus suggest that EF difficulties among individuals with CIs are the result of more than just auditory deprivation (Pisoni et al., 2010); language delay also may factor into EF difficulties reported by CI users (Kronenberger, Colson, et al., 2014). On the other hand, the finding of similar LEAF scores among deaf participants who use CIs and those who do not also suggests that EF delays previously found in children and adolescents with CIs (Kronenberger, Beer, et al., 2014) are not the result of using a CI device. Rather, in the context of prior studies of EF in children and adolescents with varying degrees of hearing loss and use of assistive devices (Figueras, Edwards, & Langdon, 2008; Kronenberger et al., 2013), the results of the current study suggest that auditory deprivation and language delay act together to influence EF delays. Clearly, additional research is needed to better understand the magnitude of these contributions and the processes by which they operate. Further support for the contribution of language skills to EF in deaf individuals was found in correlational analyses of LEAF scores and language assessments: relations observed between LEAF scores and the language measures were consistent with those observed between GAMA scores and the language measures, and among deaf participants who did not use CIs, earlier-acquired and better sign language skills were related to fewer EF difficulties across domains.

Taken together, the results of the present experiments consistently point to three general conclusions. First, consistent with the conclusions of Bavelier et al. (2006), visual-spatial advantages among deaf individuals, even those who are native signers, were not particularly generalized. Hearing participants outperformed deaf peers with and without CIs in the visual-spatial domains tapped by tasks in Experiments 1 and 3 (see also Blatto-Vallee et al., 2007; Marschark et al., 2013). Although the question was not addressed explicitly, none of the findings from the present study is consistent with the notion that deaf students are visual learners. Second, better performance on visual-spatial tasks among deaf participants in this study was not specifically linked to their sign language abilities but was related more to their abilities in their preferred language modality, spoken or signed. Deaf participants who relied primarily on sign language, many from an early age, did not demonstrate any advantage over deaf peers who relied primarily on spoken language, whether or not they used CIs. Relatedly, the third general conclusion is that hearing and deaf individuals, as well as deaf individuals with and without CIs, may utilize somewhat different cognitive abilities in dealing with the same (apparently) visual-spatial tasks. Knoors and Marschark (2014) emphasized that in educational settings it should not be assumed that deaf learners are essentially hearing learners who cannot hear. The present findings reinforce that point insofar as different cognitive abilities among deaf participants and between deaf and hearing participants yielded the same levels of performance on some of the present tasks, and different cognitive abilities among deaf individuals with and without CIs yielded comparable performance even when it fell below that of hearing peers.

A strength of the present study, that all participants were drawn from the same cohort of first-year university students, is also a limitation, insofar as deaf college students may not be representative of the deaf population at large. Deaf students at this institution, enrolled at both the associate and baccalaureate degree levels, are more likely to persist through their first year and to graduate than deaf students at other institutions in the United States, and deaf baccalaureate students graduate at a somewhat higher rate than their hearing peers. If this study is limited by “overqualified” participants, however, the results of Experiment 3 suggest that the sample was not sufficiently advantaged to demonstrate logical reasoning and EF abilities fully comparable to those of hearing peers.

This study also was limited by the relatively small number of hearing participants who had sign language skills and by that sample’s relatively low skill level compared to previous studies involving hearing native signers. The present sample of sign language interpreting students, however, likely was more representative of hearing sign language users than are hearing native signers, most of whom have had experience, professionally or at home, in the complex cognitive task of sign language interpreting, which is heavily dependent on working memory. They thus represent a more appropriate comparison group for the samples of nonnative signers of primary interest here.

Finally, the range of visual-spatial tasks involved in the present experiments was quite limited compared to the variety of tasks described in the relevant literature. This study was intended, however, only as a single step toward a better understanding of relations between language and cognition in the heterogeneous population of deaf learners. The diversity of the participant samples and of the measures and tasks involved was greater than in many previous studies, even if less than might be desired. Taken together, the consistency of the results, both internally and with regard to other recent studies, suggests that the study has moved us toward a better understanding of relations among hearing status, language, and visual-spatial functioning.

Funding

National Institute on Deafness and Other Communication Disorders (R01DC012317). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of NIDCD.

Conflicts of Interest

No conflicts of interest were reported.

Acknowledgment

Consideration of this manuscript was handled by Co-Editor Susan Easterbrooks and Associate Editor Elizabeth Fitzpatrick with the assistance of three anonymous reviewers. The authors thank Irene Mammarella and Cesare Cornoldi for sharing their technology for Experiment 2.

Appendix

Primary scoring levels for the Sign Language Proficiency Interview (half levels also assigned to levels 1 through 5 in scoring)

How well can you have a conversation using sign language? Please circle which number applies (only one!).

5

  • I am able to have a very comfortable, in-depth conversation about social and school topics.

  • I have a very large sign language vocabulary.

  • I am a highly skilled signer and can easily understand someone signing to me.

4

  • I can have a natural conversation for social and school topics.

  • I have a large sign language vocabulary, and I can sign and fingerspell clearly.

  • I may sign something wrong occasionally, but it doesn’t interrupt the flow of conversation.

  • I can easily understand someone signing to me.

3

  • I can discuss social and school topics with some details.

  • I can generally sign three to five sentences accurately using basic sign language, but I do make some errors.

  • I have a fairly clear signing style.

  • I can understand someone signing but I may ask for a few signs to be repeated or ask that something be signed a different way.

2

  • I can discuss basic social and school topics and respond usually with one to three sentences.

  • I know some basic signs but I often sign them incorrectly.

  • I can understand some signing, but I often ask for signs to be repeated or ask that something be signed a different way.

1

  • I know some signs or short phrases, and I can respond to basic questions signed to me, but I very often have to ask for signs to be repeated or ask that something be signed a different way.

  • I know vocabulary related to everyday signs like family/colors/numbers and names of weekdays.

  • I very often respond using fingerspelling or sign incorrectly with many pauses.

0

  • I either do not know any sign, or I know very few basic signs and have to fingerspell most of my responses to basic questions signed to me.

  • I need to ask for many signs to be repeated and often ask that things be signed in a different way.

Footnotes

Notes

The language measures described in this paper were administered as part of a battery in a longitudinal project examining relations among various aspects of language, cognition, learning, and psychosocial functioning among deaf individuals with and without CIs.

The interpreting students, like the other participants, were all in their first year at the university. Their sign language skills (analyzed below) were variable, but entry into the program required, at minimum, “the skill equivalent to a typical semester-long ASL I course.”

Although correcting alpha levels for multiple correlations is becoming common in medical studies (e.g., genetics, epidemiology), which can contain hundreds or thousands of correlation coefficients, such corrections generally are not found in educational or psychological research. Further, while reducing the possibility of Type I errors, such adjustments increase the possibility of Type II errors. Although the results of the correlational and other analyses in the present study are quite consistent within and between experiments, given the number of correlations conducted, some caution should be taken in generalizing from these results, particularly with regard to coefficients reported as significant at the .05 level.

Of tangential interest, American College Test (ACT) entrance scores on the English, Reading Comprehension, and Mathematics subtests as well as Composite scores were available for 46–50 deaf students with CIs, 47–55 deaf students without CIs, and 23–50 hearing students. In both groups of deaf students, their Spatial Relations scores were significantly correlated with their Mathematics scores, r(44) = .35, and r(45) = .40, and their Composite scores, r(48) = .28, and r(53) = .31, respectively, and Reading Comprehension scores were significantly correlated with Spatial Relations scores for the CI group, r(44) = .33. There were no other significant correlations among ACT and visual-spatial test scores for any of the groups. ACT scores were not significantly related to aided or unaided better ear PTAs for either group of deaf students. These results replicate findings of Blatto-Vallee et al. (2007) and Marschark et al. (2013).

References

  1. Alamargot D. Lambert E. Thebault C., & Dansac C (2007). Text composition by deaf and hearing middle-school students: The role of working memory. Reading and Writing, 20, 333–360. doi:10.1007/s11145-006-9033-y [Google Scholar]
  2. Baddeley A. D., Logie R. H. (1999). Working memory: The multiple component model. In A., Miyake, P., Shah (Eds.), Models of working memory: Mechanisms of active maintenance and executive control (pp. 28–61). New York, NY: Cambridge University Press. [Google Scholar]
  3. Bavelier D., Dye M. W., Hauser P. C. (2006). Do deaf individuals see better? Trends in Cognitive Sciences, 10, 512–518. doi:10.1016/j.tics.2006.09.006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Bebko J. M., McKinnon E. E. (1990). The language experience of deaf children: its relation to spontaneous rehearsal in a memory task. Child Development, 61, 1744–1752. [PubMed] [Google Scholar]
  5. Bergeson T. Pisoni D. B., & Davis R. A. O (2005). Development of audiovisual comprehension skills in prelingually deaf children with cochlear implants. Ear & Hearing, 26, 149–164. doi:0196/0202/05/2602-0149/0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Best J. R. Miller P. H., & Naglieri J. A (2011). Relations between executive function and academic achievement from ages 5 to 17 in a large, representative national sample. Learning and Individual Differences, 21, 327–336. doi:10.1111/j.1467-8624.2010.01499.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bettger, J., Emmorey, K., McCullough, S., & Bellugi, U. (1997). Enhanced facial discrimination: Effects of experience with American Sign Language. Journal of Deaf Studies and Deaf Education, 2, 223–233.
  8. Blatto-Vallee, G., Kelly, R. R., Gaustad, M. G., Porter, J., & Fonzi, J. (2007). Spatial-relational representation in mathematical problem-solving by deaf and hearing students. Journal of Deaf Studies and Deaf Education, 12, 432–448. doi:10.1093/deafed/enm022
  9. Blom, H., & Marschark, M. (2015). Simultaneous communication and cochlear implants in the classroom? Deafness and Education International, 17, 123–131. doi:10.1179/1557069X14Y.0000000045
  10. Borgna, G., Convertino, C., Marschark, M., Morrison, C., & Rizzolo, K. (2011). Enhancing deaf students’ learning from sign language and text: Metacognition, modality, and the effectiveness of content scaffolding. Journal of Deaf Studies and Deaf Education, 16, 79–100. doi:10.1093/deafed/enq036
  11. Bruce, V., Green, P. R., & Georgeson, M. A. (1996). Visual perception: Physiology, psychology, and ecology (3rd ed.). East Sussex, UK: Psychology Press.
  12. Burkholder, R. A., & Pisoni, D. B. (2006). Working memory capacity, verbal rehearsal speed, and scanning in deaf children with cochlear implants. In P. E. Spencer & M. Marschark (Eds.), Advances in the spoken language development of deaf and hard-of-hearing children (pp. 328–357). New York, NY: Oxford University Press.
  13. Campbell, R., & Wright, H. (1990). Deafness and immediate memory for pictures: Dissociations between “inner speech” and the “inner ear”? Journal of Experimental Child Psychology, 50, 259–286.
  14. Chen, Q., Zhang, M., & Zhou, X. (2006). Effects of spatial distribution of attention during inhibition of return (IOR) on flanker interference in hearing and congenitally deaf people. Brain Research, 1109, 117–127. doi:10.1016/j.brainres.2006.06.043
  15. Cleary, M., Pisoni, D. B., & Geers, A. E. (2001). Some measures of verbal and spatial working memory in eight- and nine-year-old hearing-impaired children with cochlear implants. Ear and Hearing, 22, 395–411.
  16. Cockcroft, K., & Dhana-Dullabh, H. (2013). Deaf children and children with ADHD in the inclusive classroom: Working memory matters. International Journal of Inclusive Education, 17, 1023–1039. doi:10.1080/13603116.2012.728252
  17. Cokely, D. (1990). The effectiveness of three means of communication in the college classroom. Sign Language Studies, 69, 415–439.
  18. Convertino, C. M., Marschark, M., Sapere, P., Sarchet, T., & Zupan, M. (2009). Predicting academic success among deaf college students. Journal of Deaf Studies and Deaf Education, 14, 324–343. doi:10.1093/deafed/enp005
  19. Cornoldi, C., & Mammarella, I. C. (2008). A comparison of backward and forward spatial spans. The Quarterly Journal of Experimental Psychology, 61, 674–682. doi:10.1080/17470210701774200
  20. Dawson, P., Busby, P., McKay, C., & Clark, G. (2002). Short-term auditory memory in children using cochlear implants and its relevance to receptive language. Journal of Speech, Language, and Hearing Research, 45, 789–802.
  21. Della Sala, S., Gray, C., Baddeley, A., Allamano, N., & Wilson, L. (1999). Pattern span: A tool for unwelding visuo-spatial memory. Neuropsychologia, 37, 1189–1199.
  22. Dowaliby, F., & Lang, H. (1999). Adjunct aids in instructional prose: A multimedia study with deaf college students. Journal of Deaf Studies and Deaf Education, 4, 270–282. doi:10.1093/deafed/4.4.270
  23. Dye, M. W., Green, C. S., & Bavelier, D. (2009). Increasing speed of processing with action video games. Current Directions in Psychological Science, 18, 321–326. doi:10.1111/j.1467-8721.2009.01660.x
  24. Edwards, L., & Anderson, S. (2014). The association between visual, nonverbal cognitive abilities and speech, phonological processing, vocabulary and reading outcomes in children with cochlear implants. Ear and Hearing, 35, 366–374. doi:10.1097/AUD.0000000000000012
  25. Emmorey, K. (2002). Language, cognition, and the brain: Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum Associates.
  26. Emmorey, K., Kosslyn, S. M., & Bellugi, U. (1993). Visual imagery and visual-spatial language: Enhanced imagery abilities in deaf and hearing ASL signers. Cognition, 46, 139–181.
  27. Figueras, B., Edwards, L., & Langdon, D. (2008). Executive function and language in deaf children. Journal of Deaf Studies and Deaf Education, 13, 362–377. doi:10.1093/deafed/enm067
  28. Gallaudet Research Institute. (2011, April). Regional and national summary report of data from the 2009–10 Annual Survey of Deaf and Hard of Hearing Children and Youth. Washington, DC: GRI, Gallaudet University.
  29. Garrison, W., Long, G., & Dowaliby, F. (1997). Working memory capacity and comprehension processes in deaf readers. Journal of Deaf Studies and Deaf Education, 2, 78–94.
  30. Gathercole, S., Willis, C., Emslie, H., & Baddeley, A. (1992). Phonological memory and vocabulary development during the early school years: A longitudinal study. Developmental Psychology, 28, 887–898. doi:10.1037/0012-1649.28.5.887
  31. Geers, A. E. (2003). Predictors of reading skill development in children with early cochlear implantation. Ear and Hearing, 24, 59S–68S. doi:10.1097/01.AUD.0000051690.43989.5D
  32. Gibson, J. M. (1985). Field dependence of deaf students: Implications for education. In D. Martin (Ed.), Cognition, education, and deafness: Directions for research and instruction (pp. 50–54). Washington, DC: Gallaudet College Press.
  33. Gottardis, L., Nunes, T., & Lunt, I. (2011). A synthesis of research on deaf and hearing children’s mathematical achievement. Deafness and Education International, 13, 131–150.
  34. Hall, M. L., & Bavelier, D. (2010). Working memory, deafness, and sign language. In M. Marschark & P. E. Spencer (Eds.), The Oxford handbook of deaf studies, language, and education (Vol. 2, pp. 458–471). New York, NY: Oxford University Press.
  35. Hamilton, H. (2011). Memory skills of deaf learners: Implications and applications. American Annals of the Deaf, 156, 402–423. doi:10.1353/aad.2011.0034
  36. Hauptman, A., & Eliot, J. (1986). Contribution of figural proportion, figural memory, figure-ground perception and severity of hearing loss to performance on spatial tests. Perceptual and Motor Skills, 63, 187–190.
  37. Hauser, P. C., Cohen, J., Dye, M. W., & Bavelier, D. (2007). Visual constructive and visual-motor skills in deaf native signers. Journal of Deaf Studies and Deaf Education, 12, 148–157. doi:10.1093/deafed/enl030
  38. Hauser, P. C., Lukomski, J., & Hillman, T. (2008). Development of deaf and hard-of-hearing students’ executive function. In M. Marschark & P. C. Hauser (Eds.), Deaf cognition: Foundations and outcomes (pp. 286–308). New York, NY: Oxford University Press.
  39. Hauser, P. C., Lukomski, J., & Isquith, P. (2007). Deaf college students’ performance on the BRIEF-A. Unpublished manuscript.
  40. Hays, W. L. (1973). A measure of predictive association. In W. L. Hays (Ed.), Statistics for the social sciences (2nd ed., pp. 745–749). New York, NY: Holt, Rinehart and Winston.
  41. Kirk, K., Prusick, L., French, B., Gotch, C., Eisenberg, L. S., & Young, N. (2012). Assessing spoken word recognition in children who are deaf or hard of hearing: A translational approach. Journal of the American Academy of Audiology, 23, 464–475. doi:10.3766/jaaa.23.6.8
  42. Knoors, H., & Marschark, M. (2014). Teaching deaf learners: Psychological and developmental foundations. New York, NY: Oxford University Press.
  43. Kronenberger, W. G., Beer, J., Castellanos, I., Pisoni, D. B., & Miyamoto, R. T. (2014). Neurocognitive risk in children with cochlear implants. JAMA Otolaryngology–Head & Neck Surgery, 140, 608–615. doi:10.1001/jamaoto.2014.757
  44. Kronenberger, W. G., Colson, B. G., Henning, S. C., & Pisoni, D. B. (2014). Executive functioning and speech-language skills following long-term use of cochlear implants. Journal of Deaf Studies and Deaf Education, 19, 456–470. doi:10.1093/deafed/enu011
  45. Kronenberger, W. G., & Pisoni, D. B. (2009). Measuring learning-related executive functioning: Development of the LEAF scale. Paper presented at the 117th Annual Convention of the American Psychological Association, Toronto, Canada.
  46. Kronenberger, W. G., Pisoni, D. B., Henning, S. C., & Colson, B. G. (2013). Executive functioning skills in long-term users of cochlear implants: A case control study. Journal of Pediatric Psychology, 38, 902–914. doi:10.1093/jpepsy/jst034
  47. Krumbholz, K., Patterson, R., Nobbe, A., & Fastl, H. (2003). Microsecond temporal resolution in monaural hearing without spectral cues? Journal of the Acoustical Society of America, 113, 2790–2800. doi:10.1121/1.1547438
  48. Lang, H. G., & Pagliaro, C. (2007). Factors predicting recall of mathematics terms by deaf students: Implications for teaching. Journal of Deaf Studies and Deaf Education, 12, 449–460. doi:10.1093/deafed/enm021
  49. Leslie, L., & Caldwell, J. (2001). Qualitative Reading Inventory–3. New York, NY: Addison Wesley Longman.
  50. Likert, R., & Quasha, W. H. (1994). Revised Minnesota Paper Form Board Test (2nd ed.). San Antonio, TX: The Psychological Corporation, Harcourt Brace & Company.
  51. Litzinger, T. A., Lee, S. H., Wise, J. C., & Felder, R. M. (2007). A psychometric study of the Index of Learning Styles. Journal of Engineering Education, 96, 309–319. doi:10.1002/j.2168-9830.2011.tb00006.x
  52. Logan, K., Mayberry, M., & Fletcher, J. (1996). The short-term memory of profoundly deaf people for words, signs, and abstract spatial stimuli. Applied Cognitive Psychology, 10, 105–119. doi:10.1002/(SICI)1099-0720(199604)10:2<105::AID-ACP367>3.0.CO;2-4
  53. López-Crespo, G., Daza, M. T., & Méndez-López, M. (2012). Visual working memory in deaf children with diverse communication modes: Improvement by differential outcomes. Research in Developmental Disabilities, 33, 362–368. doi:10.1016/j.ridd.2011.10.022
  54. MacSweeney, M., Campbell, R., & Donlan, C. (1996). Varieties of short-term memory coding in deaf teenagers. Journal of Deaf Studies and Deaf Education, 1, 249–262.
  55. Marschark, M., & Hauser, P. C. (2012). How deaf children learn. New York, NY: Oxford University Press.
  56. Marschark, M., & Knoors, H. (2012). Educating deaf children: Language, cognition, and learning. Deafness and Education International, 14, 137–161. doi:10.1179/1557069X12Y.0000000010
  57. Marschark, M., Machmer, E., & Convertino, C. (in press). Understanding language in the real world. In M. Marschark & P. E. Spencer (Eds.), The Oxford handbook of deaf studies in language. New York, NY: Oxford University Press.
  58. Marschark, M., Morrison, C., Lukomski, J., Borgna, G., & Convertino, C. (2013). Are deaf students visual learners? Learning and Individual Differences, 25, 156–162. doi:10.1016/j.lindif.2013.02.006
  59. Marschark, M., Sapere, P., Convertino, C. M., Mayer, C., Wauters, L., & Sarchet, T. (2009). Are deaf students’ reading challenges really about reading? American Annals of the Deaf, 154, 357–370.
  60. Massa, L. J., & Mayer, R. E. (2006). Testing the ATI hypothesis: Should multimedia instruction accommodate verbalizer-visualizer cognitive style? Learning and Individual Differences, 16, 321–335. doi:10.1016/j.lindif.2006.10.001
  61. Mather, N., & Woodcock, R. W. (2001). Examiner’s manual: Woodcock-Johnson III Tests of Cognitive Abilities. Rolling Meadows, IL: Riverside Publishing Company.
  62. Mayberry, R. I. (2002). Cognitive development in deaf children: The interface of language and perception in neuropsychology. In S. J. Segalowitz & I. Rapin (Eds.), Handbook of neuropsychology (2nd ed., Vol. 8, Part II, pp. 71–107). Philadelphia, PA: Elsevier.
  63. Mayer, R. E., & Massa, L. (2003). Three facets of visual and verbal learners: Cognitive ability, cognitive style, and learning preference. Journal of Educational Psychology, 95, 833–841. doi:10.1037/0022-0663.95.4.833
  64. McGarr, N. S. (1981). The effect of context on the intelligibility of hearing and deaf children’s speech. Language and Speech, 24, 255–264.
  65. McGarr, N. S. (1983). The intelligibility of deaf speech to experienced and inexperienced listeners. Journal of Speech and Hearing Research, 26, 451–458.
  66. Mitchell, R. E., & Karchmer, M. A. (2004). Chasing the mythical ten percent: Parental hearing status of deaf and hard of hearing students in the United States. Sign Language Studies, 4, 138–163.
  67. Naglieri, J. A., & Bardos, A. N. (1997). General Ability Measure for Adults. San Antonio, TX: Pearson.
  68. Newell, W. (1978). A study of the ability of day-class deaf adolescents to comprehend factual information using four communication modalities. American Annals of the Deaf, 123, 558–562.
  69. Optometric Extension Program. (1995). Primary mental abilities: Spatial relations and perceptual speed (Adapted for OEP by S. Groffman & H. Solan). Santa Ana, CA: Optometric Extension Program Foundation. Reprinted by permission of Macmillan/McGraw-Hill School Publishing Co.
  70. Pagliaro, C. M. (2015). Developing numeracy in individuals who are deaf and hard of hearing. In H. Knoors & M. Marschark (Eds.), Educating deaf learners: Creating a global evidence base (pp. 173–195). New York, NY: Oxford University Press.
  71. Paivio, A., & Harshman, R. A. (1983). Factor analysis of a questionnaire on imagery and verbal habits and skills. Canadian Journal of Psychology, 37, 461–483.
  72. Parasnis, I., & Long, G. (1979). Relationships among spatial skills, communication skills, and field independence in deaf students. Perceptual and Motor Skills, 49, 879–887.
  73. Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9, 106–119.
  74. Pisoni, D. B., Conway, C. M., Kronenberger, W., Henning, S., & Anaya, E. (2010). Executive function, cognitive control and sequence learning in deaf children with cochlear implants. In M. Marschark & P. E. Spencer (Eds.), The Oxford handbook of deaf studies, language, and education (Vol. 2, pp. 439–457). New York, NY: Oxford University Press.
  75. Pisoni, D., & Geers, A. (2000). Working memory in deaf children with cochlear implants: Correlations between digit span and measures of spoken language processing. Annals of Otology, Rhinology, and Laryngology, 109(Suppl. 185), 92–93.
  76. Proksch, J., & Bavelier, D. (2002). Changes in the spatial distribution of visual attention after early deafness. Journal of Cognitive Neuroscience, 14, 687–701. doi:10.1162/08989290260138591
  77. Rettenbach, R., Diller, G., & Sireteanu, R. (1999). Do deaf people see better? Texture segmentation and visual search compensate in adult but not in juvenile subjects. Journal of Cognitive Neuroscience, 11, 560–583.
  78. Roid, G. H., & Miller, L. J. (1997). Leiter International Performance Scale–Revised. Wood Dale, IL: Stoelting.
  79. Romero Lauro, L. J., Crespi, M., Papagno, C., & Cecchetto, C. (2014). Making sense of an unexpected detrimental effect of sign language use in a visual task. Journal of Deaf Studies and Deaf Education, 19, 358–365. doi:10.1093/deafed/enu001
  80. Shaver, D. M., Marschark, M., Newman, L., & Marder, C. (2014). Who is where? Characteristics of deaf and hard-of-hearing students in regular and special schools. Journal of Deaf Studies and Deaf Education, 19, 203–219. doi:10.1093/deafed/ent056
  81. Sommers, M. S., Tye-Murray, N., & Spehar, B. (2005). Auditory-visual speech perception and auditory-visual enhancement in normal-hearing younger and older adults. Ear and Hearing, 26, 263–275.
  82. Sternberg, R. J., & Zhang, L. (Eds.). (2001). Perspectives on thinking, learning, and cognitive styles. Mahwah, NJ: Erlbaum.
  83. Stiles, D. J., McGregor, K. K., & Bentler, R. A. (2012). Vocabulary and working memory in children fit with hearing aids. Journal of Speech, Language, and Hearing Research, 55, 154–167. doi:10.1044/1092-4388(2011/11-0021)
  84. Talbot, K. F., & Haude, R. H. (1993). The relationship between sign language skill and spatial visualization ability: Mental rotation of three-dimensional objects. Perceptual and Motor Skills, 77, 1387–1391.
  85. Tang, S.-J. (2002). Working memory, language production rate, and reading comprehension of Chinese deaf readers. Bulletin of Special Education, 22, 155–169.
  86. Tye-Murray, N., Sommers, M. S., & Spehar, B. (2007). Audiovisual integration and lipreading abilities of older adults with normal and impaired hearing. Ear and Hearing, 28, 656–668. doi:10.1097/AUD.0b013e31812f7185
  87. Tyler, R. S., Preece, J., & Tye-Murray, N. (1986). The Iowa laser videodisk tests. Iowa City, IA: University of Iowa Hospitals.
  88. Van Dijk, R., Kappers, A. M., & Postma, A. (2013a). Superior spatial touch: Improved haptic orientation processing in deaf individuals. Experimental Brain Research, 230, 283–289. doi:10.1007/s00221-013-3653-7
  89. Van Dijk, R., Kappers, A. M. L., & Postma, A. (2013b). Haptic spatial configuration learning in deaf and hearing individuals. PLoS ONE, 8, e61336. doi:10.1371/journal.pone.0062374
  90. Wechsler, D. (2014). WISC-V technical and interpretive manual. Bloomington, MN: NCS Pearson.
  91. Wilson, M., Bettger, J. G., Niculae, I., & Klima, E. S. (1997). Modality of language shapes working memory: Evidence from digit span and spatial span in ASL signers. Journal of Deaf Studies and Deaf Education, 2, 152–162.
  92. Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III Tests of Cognitive Abilities. Rolling Meadows, IL: Riverside.
  93. Zarfaty, Y., Nunes, T., & Bryant, P. (2004). The performance of young deaf children in spatial and temporal number tasks. Journal of Deaf Studies and Deaf Education, 9, 315–326. doi:10.1093/deafed/enh034
