Published in final edited form as: Cogn Sci. 2018 Oct 15;42(8):3177–3190. doi: 10.1111/cogs.12691

Visual statistical learning with stimuli presented sequentially across space and time in deaf and hearing adults

Beatrice Giustolisi and Karen Emmorey

Abstract

This study investigated visual statistical learning (VSL) in 24 deaf signers and 24 hearing non-signers. Previous research with hearing individuals suggests that SL mechanisms support literacy. Our first goal was to assess whether VSL was associated with reading ability in deaf individuals, and whether this relation was sustained by a link between VSL and sign language skill. Our second goal was to test the Auditory Scaffolding Hypothesis, which predicts that deaf people should be impaired in sequential processing tasks. For the VSL task, we adopted a modified version of the triplet learning paradigm, with stimuli presented sequentially across space and time. Results revealed that measures of sign language skill (sentence comprehension/repetition) did not correlate with VSL scores, possibly due to the sequential nature of our VSL task. Reading comprehension scores (PIAT-R) were a significant predictor of VSL accuracy in hearing but not deaf people. This finding might be due to the sequential nature of the VSL task and to a less salient role of the sequential orthography-to-phonology mapping in deaf readers compared to hearing readers. The two groups did not differ in VSL scores. However, when reading ability was taken into account, VSL scores were higher for the deaf group than the hearing group. Overall, this evidence is inconsistent with the Auditory Scaffolding Hypothesis, suggesting that humans can develop efficient sequencing abilities even in the absence of sound.

Keywords: Statistical Learning, Auditory Scaffolding Hypothesis, Deafness, American Sign Language, Reading

1. Introduction

Statistical learning (SL) involves a set of mechanisms that work in different modalities and through which we can encode regularities across space and time (Frost, Armstrong, Siegelman, & Christiansen, 2015). These mechanisms operate implicitly (Arciuli, 2017). Studies assessing implicit SL are typically composed of two phases: familiarization and testing. Without receiving any explicit information, participants are exposed to some kind of stimulus regularity (familiarization phase). Then, learning of the (unmentioned) regularity is assessed (testing phase). If SL has occurred, participants should be able to discriminate between familiar and non-familiar stimuli (e.g., familiar vs. non-familiar triplets of stimuli; see Arciuli & Simpson, 2012; Saffran, Aslin, & Newport, 1996), and they should be faster or more accurate when performing actions on familiar compared to non-familiar stimuli (e.g., the serial reaction time paradigm; Nissen & Bullemer, 1987).

In recent decades, studies on SL have grown exponentially, a growth justified by the recognized role of SL as a learning mechanism involved in almost every cognitive process (Perruchet & Pacton, 2006), but particularly in language. For example, SL is considered crucial in first language (L1) acquisition, supporting infants in segmenting words from the speech stream based on its statistical properties (Saffran et al., 1996). Moreover, a growing body of evidence suggests that some SL mechanisms support literacy as well. Such SL mechanisms are argued to sustain the recognition of the probabilistic patterns of association within orthographic representations and between graphemes and phonemes, building a fundamental scaffolding for the development of reading and spelling skills (Treiman & Kessler, 2006).

Various studies have reported a relationship between SL and reading ability in both children and adults (Arciuli & Simpson, 2012; Spencer, Kaschak, Jones, & Lonigan, 2015) and in both first (L1) and second (L2) languages (Frost, Siegelman, Narkiss, & Afek, 2013). For example, Arciuli and Simpson (2012) showed a positive correlation between SL and L1 reading proficiency as measured by the reading subtest of the Wide Range Achievement Test 4 (Wilkinson & Robertson, 2006), which assesses the ability to read aloud different orthographic strings. Frost et al. (2013) found that, in English L1 speakers, SL positively correlated with the ability to learn the structural properties of Hebrew (L2), a Semitic language that follows different statistical properties than those usually found in Indo-European languages. In addition to these studies, the link between SL and literacy is supported by the finding that individuals with dyslexia seem to show SL impairments (e.g. Gabay, Thiessen, & Holt, 2015; Sigurdardottir et al., 2017; but see Rüsseler, Gerth, & Münte, 2006).

In the present study, we focused on the link between SL and literacy considering a population that, to the best of our knowledge, has never been considered before in this regard: congenitally deaf adult signers. First, we were guided by the following consideration: computational models taking a SL approach to literacy have been developed to represent the behavior of hearing individuals, and studies linking SL abilities and written language proficiency have only involved hearing readers. With this in mind, one might wonder if the same relation between SL and reading/writing occurs in deaf people. This question is interesting for several reasons. In hearing people, orthographic processes assume a mapping between sound-based phonological representations and orthographic representations. From a developmental perspective, this mapping develops from preexisting phonological representations of spoken language to not-yet-known orthographic representations. Considering the case of deaf readers/writers means considering the case of people who have partial phonological representations of speech that must be mapped onto orthographic words (Goldin-Meadow & Mayberry, 2001). This difference in mapping between orthography and phonology might result in difficulties recognizing statistical regularities between phonemes and graphemes.

We hypothesized that variation in SL could partially account for the high variability in reading proficiency in the deaf population (Qi & Mitchell, 2012), and one goal of this study was to test this hypothesis. To do so, we ran a visual SL (VSL) experiment, and we collected several measures assessing reading, spelling, American Sign Language (ASL), and cognitive skills in order to perform a correlational analysis. In particular, we investigated whether a possible association between SL and reading ability might be mediated by a more general relationship between SL and natural language ability (Arciuli & Simpson, 2012), in this case ASL skill.

Moreover, studying SL in deaf adults is of great interest in light of the recent debate concerning the Auditory Scaffolding Hypothesis (Conway, Pisoni, & Kronenberger, 2009; Hall, Eigsti, Bortfeld, & Lillo-Martin, 2017; von Koss Torkildsen, Arciuli, Haukedal, & Wie, 2018). According to this hypothesis, learning and producing sequential information (i.e., cognitive functions related to time and serial order) might be impaired in deaf individuals, because the development of those abilities is sustained by hearing experience. The first experimental evidence in favor of this hypothesis came from a series of studies demonstrating sequence learning/processing deficits and motor sequencing disturbances in deaf children with cochlear implants (CIs; Conway, Pisoni, Anaya, Karpicke, & Henning, 2011; Conway, Karpicke, Anaya, Henning, Kronenberger, & Pisoni, 2011). Deaf children with CIs have also been reported to show reduced sequential processing capacity (Ulanet, Carson, Mellon, Niparko, & Ouellette, 2014) and difficulties in tasks tapping sequential memory and visuo-motor sequencing (Bharadwaj & Mehta, 2016). Furthermore, statistical learning impairments have been observed in children with peripheral hearing loss (Studer-Eichenberger, Studer-Eichenberger, & Koenig, 2016) and in deaf adults using either hearing aids or CIs (Lévesque, Théoret, & Champoux, 2014). However, a major problem with these studies is that the populations under investigation underwent both a period of auditory deprivation and a period of language deprivation; any observed temporal/sequential impairments might therefore be the result of either or both factors, rather than being caused by auditory deprivation alone (see Hall et al., 2017 for discussion).

A first challenge for the Auditory Scaffolding Hypothesis came from a study in which deaf adult signers outperformed hearing adults in a visual rhythmic task that was highly sequential in nature (Iversen, Patel, Nicodemus, & Emmorey, 2015). In addition, Hall et al. (2017) failed to replicate Conway, Pisoni et al. (2011). Specifically, Hall et al. (2017) found no evidence of sequence learning in either deaf or hearing children using Conway’s task, and they found evidence of similar sequence learning in both groups using a different task (a serial reaction time paradigm). Recently, von Koss Torkildsen et al. (2018) also reported similar VSL in deaf children with CIs and hearing children (aged 7–12 years). A further goal of the present study was to test the Auditory Scaffolding Hypothesis for congenitally deaf adults who did not receive CIs but who had early sign exposure. If there is an important link between sequencing ability and auditory input, as proposed by the Auditory Scaffolding Hypothesis, then deaf adults should perform worse than hearing adults in a sequential statistical learning task because of their lifelong lack of hearing experience.

2. Methods

2.1. Participants

Twenty-five deaf ASL signers and 27 hearing non-signers (native English speakers) participated. In a background questionnaire, all participants reported no history of language impairment. Deaf participants were all native signers (born into deaf signing families) or early signers (ASL acquired before age 6 years); they all used ASL as their primary means of communication and written English as an alternative means of communication. They were all born deaf, with either severe (71–90 dB) or profound (90–120 dB) hearing loss. One deaf participant and two hearing participants were excluded from the analysis because of lack of attention during the SL familiarization phase, and one hearing participant was excluded because he was unwilling to complete the spelling assessment. The final sample consisted of 24 deaf participants (mean age = 32.5, SD = 8.3; mean years of education = 16.5, SD = 3.0; 13 females) and 24 hearing participants (mean age = 30.9, SD = 13.2; mean years of education = 15.6, SD = 1.9; 13 females). The two groups did not differ significantly in age (t(38.77) = 0.48, p = .63) or level of education (t(38.80) = 1.44, p = .16).

The experiment took place at San Diego State University. All deaf participants received a monetary reimbursement for their participation. Hearing participants received either course credit or a monetary reimbursement for their participation. The Institutional Review Board of San Diego State University approved this study.

All participants underwent an assessment battery that measured print exposure, English reading and spelling skills, nonverbal IQ, and ASL skills (deaf participants only). The battery included the following tests:

Author Recognition Test (ART; Acheson, Wells, & MacDonald, 2008).

This test measures print exposure by asking participants to recognize names of authors presented in written form. Scores are computed as the number of hits (correctly identified authors) minus false alarms (incorrect identifications). Maximum score is 65.

Peabody Individual Achievement Test Revised (PIAT-R; Markwardt, 1989) – Reading comprehension subtest.

This task measures reading comprehension by asking participants to silently read a sentence and choose, among four pictures, the one that best matches the sentence. The sentence is not visible while the participant makes the decision. Vocabulary difficulty increases progressively across items. Maximum score is 100.

Spelling recognition test (S-rec, Andrews & Hersch, 2010).

This test measures spelling skills by asking participants to identify incorrectly spelled words from a list of 88 words (half correctly spelled and half misspelled). The test score is calculated as the number of correctly classified items, both hits and correct rejections. Maximum score is 88.

Spelling Production Task (S-pro).

This task measures spelling abilities by asking participants to type words using a Cloze procedure in which a sentence context is provided and the first letter of the target word is presented (e.g., In the US, temperature is measured in degrees F_________ ). Maximum score is 30.

Kaufman Brief Intelligence Test – Matrices (KBIT-2; Kaufman & Kaufman, 2004).

This task assesses non-verbal intelligence through a visual pattern completion task. Maximum score is 46.

ASL Comprehension Task (ASL-CT; Hauser et al., 2015).

This task assesses ASL comprehension skills through a 30-item multiple-choice task (matching between four drawings/videos and a signed stimulus or vice versa).

ASL Sentence Repetition Task (ASL-SRT; Supalla, Hauser, & Bavelier, 2014).

This test assesses ASL fluency by asking participants to repeat pre-recorded ASL sentences of increasing complexity. The maximum score is 35.

2.2. Materials

The present paradigm is a modification of the one used in Siegelman, Bogaerts, and Frost (2017). Specifically, we used the same methods to construct the triplets (see 2.2.1 Stimuli) and for the testing phase (see 2.2.3 Testing), but we modified the familiarization phase (location and timing of the triplets, as reported in 2.2.2 Familiarization) in order to stress the temporal and sequential dimension of the stimuli.

2.2.1. Stimuli

Stimuli were 16 visual shapes taken from Fiser and Aslin (2001). They were organized into eight triplets following Siegelman et al. (2017). Specifically, labelling the shapes with numbers from 1 to 16, four triplets were constructed from only four shapes (e.g., 1-2-3; 2-1-4; 4-3-1; 3-4-2), yielding between-shape transitional probabilities (TPs) of .33, whereas four triplets were constructed from the remaining twelve shapes (e.g., 5-6-7; 8-9-10; 11-12-13; 14-15-16), yielding TPs of 1.
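To make this design concrete, the following minimal Python sketch (our illustration, not the authors’ or Siegelman et al.’s code) encodes the eight triplets and recovers the two TP levels, counting a transition as predictive only when it occurs inside a triplet (a triplet-final shape is followed by the unpredictable first shape of the next triplet):

    # Minimal sketch of the triplet design (shape labels 1-16 are arbitrary
    # stand-ins for the Fiser & Aslin shapes).
    LOW_TP = [(1, 2, 3), (2, 1, 4), (4, 3, 1), (3, 4, 2)]          # built from 4 shapes
    HIGH_TP = [(5, 6, 7), (8, 9, 10), (11, 12, 13), (14, 15, 16)]  # built from 12 shapes

    def design_tp(x, y, triplets):
        """TP(y|x): within-triplet transitions x -> y, out of all occurrences of x.

        Each triplet is presented equally often, so occurrences of x are
        equiprobable; triplet-final occurrences of x have unpredictable successors.
        """
        occurrences = sum(t.count(x) for t in triplets)
        followed = sum(1 for t in triplets for i in (0, 1) if t[i] == x and t[i + 1] == y)
        return followed / occurrences

    print(design_tp(1, 2, LOW_TP))    # 0.33...: shape 1 predicts shape 2 on 1/3 of occurrences
    print(design_tp(5, 6, HIGH_TP))   # 1.0: shape 5 is always followed by shape 6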

2.2.2. Familiarization

During the familiarization phase, the eight triplets appeared on the screen in a pseudo-random order (the same triplet never appeared twice in a row). Each triplet was repeated 30 times, for a total duration of about 10 minutes. Each shape appeared on the screen for 400 ms, with an inter-stimulus interval of 250 ms (see Footnote 1). Within triplets, the shapes appeared at three different screen locations, as shown in Fig. 1. This modification of the Siegelman et al. (2017) paradigm was made in order to emphasize the temporal and sequential dimension of our stimuli, and to resemble the process of words being typed.
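As a sanity check on the timing (our arithmetic), the reported parameters imply the following stimulus-plus-ISI duration; any pause between triplets is not reported, so this is a lower bound on the stated duration of about 10 minutes:

    # Familiarization duration implied by the reported parameters (lower
    # bound: any inter-triplet pause is not reported and is omitted here).
    n_triplets, repetitions, shapes_per_triplet = 8, 30, 3
    stimulus_ms, isi_ms = 400, 250

    total_min = n_triplets * repetitions * shapes_per_triplet * (stimulus_ms + isi_ms) / 60000
    print(f"{total_min:.1f} min")   # ~7.8 min; inter-triplet pauses would add to this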

Fig. 1 – Presentation modality: 1, 2, 3 correspond to the sequential locations of the shapes. The three shapes making up a triplet were never all visible together on the screen: each shape appeared on the screen for 400 ms, and the following shape appeared 250 ms later.

2.2.3. Testing

The testing phase followed Siegelman et al. (2017). In short, two sub-scores composed the final score: a pattern recognition score and a pattern completion score. Pattern recognition comprised 34 trials, presented to each participant in random order. In each trial, participants had to select the familiar sequence from a set of two or four sequences presented simultaneously. The position of the target item was randomized across trials. Non-familiar sequences (foils) varied in difficulty, considering both the TPs between shapes and the shapes’ positions. For example, given 1-2-3, 4-5-6, and 7-8-9 as familiar triplets, there were both foils with shapes in the same positions as in the familiar triplets (e.g., 1-5-9) and foils with shapes in different positions (e.g., 2-9-5). Pattern completion comprised eight trials, presented to each participant in random order. In each trial, participants had to select which of three shapes completed an incomplete pattern. The maximum VSL score was 42 (34 pattern recognition + 8 pattern completion). Individual above-chance performance was defined as success on 23 or more trials (Siegelman et al., 2017).

2.2.4. Overall procedure

Deaf participants completed the VSL Task and the assessment tests in separate sessions on different days. Hearing participants completed the VSL Task and the assessment tests in one session lasting about 60 minutes. The VSL Task was always administered first.

3. Results

Table 1 reports the assessment scores. Crucially, deaf and hearing participants did not differ significantly on nonverbal IQ. Spelling recognition scores also did not differ significantly between groups, but the hearing group outperformed the deaf group on the reading comprehension task (PIAT-R), on the Author Recognition Test (ART), and on the spelling production task. Performance on the ART was highly correlated with age in the hearing group (r=.60, p=.002), suggesting that this test might not be an appropriate measure of print exposure in adult participants who vary in age. Therefore, we considered this measure only for the deaf group in the correlation analysis.

Table 1 – Means and SDs for assessment scores for deaf and hearing participants

N=48 Deaf M(SD) Hearing M(SD) t
Print exposure (ART) 10.9 (7.8) 16.7 (11.8) −2.04*
Reading comprehension (PIAT-R) 79.3 (12.3) 91.0 (4.7) −4.36***
Spelling recognition 72.9 (6.8) 74.8 (6.8) −0.97
Spelling production 0.70 (0.20) 0.82 (0.12) −2.61*
Nonverbal IQ (KBIT) 105.7 (11.4) 108.8 (13.0) −0.88
ASL comprehension (CT) 0.88 (0.09) — —
ASL repetition (SRT) 22.7 (5.0) — —

Note. * p < .05; *** p < .001.

The mean VSL score for the deaf participants was 26.5/42 correct (SD=5.3) and for the hearing participants it was 24.5/42 correct (SD=4.7). This difference was not significant (t(46)=1.38, p=.17, Cohen’s d=.40). Individual scores are shown in Fig. 2.
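The reported effect size follows directly from these descriptives (our arithmetic, using the pooled-SD formula for equal group sizes):

    # Cohen's d from the reported group means and SDs (equal n = 24 per group).
    m_deaf, sd_deaf = 26.5, 5.3
    m_hearing, sd_hearing = 24.5, 4.7

    pooled_sd = ((sd_deaf ** 2 + sd_hearing ** 2) / 2) ** 0.5
    d = (m_deaf - m_hearing) / pooled_sd
    print(round(d, 2))   # 0.4, matching the reported Cohen's d = .40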

Fig. 2 – VSL score distribution in deaf (left) and hearing (right) participants. The dashed black line indicates the chance-level cutoff (scores below 23 do not differ from chance).

Individual chance level was set at 23 correct trials, following the criterion proposed by Siegelman et al. (2017), which was obtained through a computer simulation. In both groups, the majority of participants performed above chance (deaf participants: 18/24; hearing participants: 15/24).
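The logic of deriving such a cutoff by simulation can be sketched as follows (our illustration only: the split between two- and four-alternative recognition trials and the exact criterion used by Siegelman et al. are assumptions here, so the printed value need not reproduce 23):

    # Monte Carlo sketch of a chance-level cutoff for the 42-trial test.
    # Hypothetical trial mix: 17 two-alternative and 17 four-alternative
    # recognition trials, plus 8 three-alternative completion trials.
    import numpy as np

    rng = np.random.default_rng(0)
    p_chance = np.array([1 / 2] * 17 + [1 / 4] * 17 + [1 / 3] * 8)

    # Simulate 100,000 guessing participants; take the 95th percentile score.
    scores = (rng.random((100_000, p_chance.size)) < p_chance).sum(axis=1)
    print(np.quantile(scores, 0.95))   # scores above this are unlikely under guessing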

To examine the relationships between VSL scores, demographic characteristics (age and education), and assessment scores, we performed an exploratory correlation analysis. Table 2 shows correlation coefficients (r), uncorrected p-values, and upper and lower 95% confidence intervals (CIs). The correlation between VSL scores and demographic information was not significant for either group. Similarly, for both groups there were no significant correlations between VSL scores and spelling scores. The correlation between VSL scores and KBIT scores was positive in both groups, but not significant. In the deaf group, VSL scores positively correlated with ART scores, but the correlation was not significant (r=.37, p=.07). For the deaf participants, the correlation between VSL scores and ASL proficiency (ASL-SRT or ASL-CT scores) was not significant. Finally, VSL scores positively correlated with reading comprehension scores in the hearing group (r=.44, p=.03); in the deaf group this correlation was also positive but not significant (r=.30, p=.16).

Table 2 – Correlations between VSL scores and demographic characteristics and assessment scores in the deaf (D) and hearing (H) groups.

Age Edu PIAT-R S-rec S-pro KBIT ART ASL-CT ASL-SRT
D r −0.23 −0.11 0.30 0.12 0.04 0.20 0.37 0.13 0.13
p 0.28 0.62 0.16 0.44 0.84 0.34 0.07 0.55 0.54
Upper 95% CI 0.19 0.31 0.63 0.50 0.53 0.51 0.68 0.50 0.51
Lower 95% CI −0.58 −0.49 −0.12 −0.30 −0.25 −0.28 −0.03 −0.29 −0.29
H r −0.09 −0.16 0.44 −0.02 0.04 0.37
p 0.68 0.45 0.03 0.92 0.84 0.08
Upper 95% CI 0.32 0.26 0.71 0.39 0.63 0.68
Lower 95% CI −0.47 −0.53 0.05 −0.42 −0.11 −0.02
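Several of the reported intervals (e.g., the hearing group’s VSL–PIAT-R correlation) are consistent with the standard Fisher z-transform interval; here is a minimal sketch (our illustration, not necessarily the authors’ exact method):

    # 95% CI for a Pearson r via the Fisher z-transform (n = 24 per group).
    import numpy as np
    from scipy import stats

    def r_confidence_interval(r, n, alpha=0.05):
        z = np.arctanh(r)                    # Fisher transform of r
        se = 1 / np.sqrt(n - 3)              # standard error of z
        crit = stats.norm.ppf(1 - alpha / 2)
        return np.tanh(z - crit * se), np.tanh(z + crit * se)

    print(r_confidence_interval(0.44, 24))   # ~ (0.05, 0.71), as in Table 2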

The relationship between VSL scores and PIAT-R scores in both groups is depicted in Fig. 3. As the left panel shows, one deaf participant had much lower reading scores than the other deaf participants, which had a clear impact on the correlation analysis (see Footnote 2). To further examine the relationship between VSL scores and reading comprehension scores in the two groups, controlling for the variance associated with each participant, we performed a mixed-effects logistic regression analysis. The deaf participant with the very low PIAT-R score was not included in this analysis. As the dependent variable, we considered accuracy (dichotomously coded) on every trial; thus, we had 42 data points for each participant. As fixed factors, we had group (deaf vs. hearing), the continuous predictor PIAT-R score (mean-centered), and their interaction. As random effects, we entered into the model intercepts for subjects and items, as well as by-subject and by-item random slopes for the effects of group and PIAT-R and their interaction. The effect of group was significant, with higher accuracy for the deaf participants than for the hearing participants when the PIAT-R score was taken into account (β=−0.50, SE=0.17, z=−2.904, p=.004). The main effect of PIAT-R was not significant (β=−0.01, SE=0.02, z=−0.445, p=.66); however, the PIAT-R by group interaction was significant (β=0.06, SE=0.03, z=2.168, p=.03), indicating that PIAT-R was a significant predictor of VSL accuracy in the hearing group but not in the deaf group (see Fig. 4).
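For readers who want to see the model structure, here is a sketch in Python (our illustration; the analysis just described appears to be an lme4-style model, which we simplify to random intercepts because statsmodels’ Bayesian mixed GLMM supports variance components but not correlated random slopes; the file and column names are hypothetical):

    # Trial-level logistic mixed model: accuracy ~ group * PIAT-R (centered),
    # with random intercepts for subjects and items (a simplification of the
    # full random-slope structure described above).
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    df = pd.read_csv("vsl_trials.csv")             # hypothetical: one row per trial
    df["piat_c"] = df["piat"] - df["piat"].mean()  # mean-center the PIAT-R score

    model = BinomialBayesMixedGLM.from_formula(
        "correct ~ group * piat_c",                            # fixed effects
        {"subject": "0 + C(subject)", "item": "0 + C(item)"},  # random intercepts
        df,
    )
    result = model.fit_vb()                        # variational Bayes fit
    print(result.summary())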

Fig. 3 – Scatter plots showing VSL scores as a function of PIAT-R scores in the deaf (left) and hearing (right) groups. These plots represent the raw relationship between VSL and PIAT-R scores, without taking other factors into account (and include the outlier participant in the deaf group).

Fig. 4 – Mixed-effects logistic regression results: no relationship between PIAT-R scores and VSL accuracy in the deaf group (D, left panel) vs. a positive relation between PIAT-R scores and VSL accuracy in the hearing group (H, right panel). In contrast to Fig. 3, these graphs represent the relationship between VSL and PIAT-R scores when other factors (i.e., participants and items) are taken into account (and the outlier participant in the deaf group is excluded).

4. Discussion

In this study, we used a VSL task with stimuli presented sequentially across time and space to assess SL skills in deaf and hearing adults. The first goal was to examine whether SL abilities correlated with reading ability in deaf adults. Interestingly, reading comprehension scores were a significant predictor of VSL accuracy in the hearing group, but not in the deaf group. Our results are consistent with those of Arciuli and Simpson (2012), who found a significant correlation between SL and oral reading skill in hearing children and adults. Measuring the ability to read aloud is not appropriate for deaf individuals who do not use speech, and we therefore used a reading comprehension task (PIAT-R) to assess reading skill. Performance on this task is tightly linked to vocabulary knowledge, as vocabulary difficulty increases trial by trial. Therefore, the present results with hearing adults are also in line with previous research reporting higher SL abilities in children with greater vocabulary knowledge (Evans, Saffran, & Robe-Torres, 2009).

The intriguing question is why the same pattern did not occur for the deaf participants. One possibility is that the present results arise from the sequential nature of our VSL task and the different role that sequential SL mechanisms might play in detecting statistical regularities between orthographic and phonological representations for deaf compared to hearing readers. Previous work has shown that phonological codes are less stable for deaf than for hearing individuals (Krakow & Hanson, 1985), and it has been hypothesized that deaf individuals might learn how to read by mapping orthography directly onto meaning (Harris & Moreno, 2004). In addition, it has been shown that deaf readers – regardless of reading proficiency – do not seem to rely on phonological codes when performing reading tasks, in contrast to hearing participants (Bélanger, Baum, & Mayberry, 2012). Finally, there is evidence suggesting that, although both signed and spoken languages convey simultaneous and sequential information, serial encoding may be less critical in comprehending signed compared to spoken language (Emmorey, Giezen, Petrich, Spurgeon, & Farnady, 2017). Therefore, it might be that deaf signers adopt a more holistic word processing strategy while reading, because of less stable phonological codes and the less sequential nature of processing their L1, a signed language. With this in mind, and considering the abovementioned literature, it is perhaps not surprising that we found no correlation between a sequential SL task and reading measures in the deaf readers.

Similarly, we suspect that the absence of a correlation between ASL skill and VSL scores might be a consequence of the sequential presentation modality of the present paradigm. Previous research has indicated that the strategies for segmenting a signed language differ from those used to segment spoken languages (Brentari, 2006). We hypothesize that scores on a SL task with stimuli presented simultaneously rather than sequentially might correlate with ASL skill, because the SL mechanism involved in extracting simultaneous visual patterns might be particularly relevant for parsing sign language input. Further, SL with simultaneously presented stimuli might also correlate with reading skills in deaf signers. Further research is needed to test these hypotheses.

An interesting finding is that, despite the fact that our group of deaf participants had lower reading skills than the hearing participants, the two groups did not differ in VSL scores. As indicated by the mixed models logistic regression analysis, when reading comprehension scores are taken into account, VSL scores are higher for the deaf than the hearing group. This finding might indicate that deaf people with high SL abilities may be more likely to become proficient readers, even though the general impact of SL abilities on reading seemed to be weak for the deaf participants. This hypothesis needs further investigation, especially because it is possible that training SL skills could have an impact on reading skills (although this hypothesis has not yet been tested to our knowledge). A longitudinal study assessing SL abilities before and after literacy instruction might provide evidence for this line of reasoning. In addition, it would be interesting to compare groups of highly skilled and poor deaf readers to determine whether the groups differ in their SL abilities.

Our second goal was to test predictions that follow from the Auditory Scaffolding Hypothesis (Conway et al., 2009). According to this hypothesis, sound experience plays a fundamental role in developing the ability to encode, learn, and manipulate sequential stimuli. Using a sequential VSL task, we found no overall difference between the performance of deaf and hearing participants, and when reading ability was taken into account, the deaf participants actually exhibited better VSL performance. The prediction of the Auditory Scaffolding Hypothesis is that deaf individuals should perform worse than hearing individuals in tasks tapping temporal and sequential order, and this prediction was not met. We interpret these results as indicating that there is no need for auditory scaffolding to develop efficient sequencing skills. However, another possibility is that deaf adults have had enough time throughout their lives to develop a set of strategies that compensate for early deficits in temporal sequencing skills, as has been proposed for individuals with dyslexia (Lum, Ullman, & Conti-Ramsden, 2014). With regard to the deaf population, this second hypothesis is weakened by the results of Hall et al. (2017) showing comparable visual statistical learning skills between hearing children and deaf children who were native signers (and thus were not language-deprived).

With respect to limitations of the present study, we acknowledge that the task that we used, which was designed to detect individual differences (Siegelman et al., 2017), might not be the best choice for investigating group-level effects (see Hedge, Powell, & Sumner, 2017, for discussion of the difficulty of designing tasks that tap both individual and group-level differences). Thus, further studies using online measures of SL, which are better suited for between-group comparisons, are warranted (see Siegelman, Bogaerts, Kronenfeld, & Frost, 2017). Nonetheless, our results demonstrated that the majority of deaf participants (as well as the majority of hearing participants) did not perform at chance on the VSL task, which is inconsistent with the Auditory Scaffolding Hypothesis.

As an additional future line of research into the sequence learning abilities of deaf individuals, studies could extend the present results by employing artificial grammar learning paradigms that assess the acquisition of rules and their generalization to new stimuli (see Fitch & Friederici, 2012). In particular, it is possible that the present results were influenced by the spatial component of the VSL task, which could have boosted the performance of signers. However, the evidence regarding better performance of signers in spatial serial recall tasks is mixed. For example, Geraci, Gozzi, Papagno, and Cecchetto (2008) reported that deaf signers outperformed hearing non-signers in the Corsi block tapping task, while Emmorey et al. (2017) found no significant difference between deaf and hearing groups on the same task. Nonetheless, although we do not favor this hypothesis, it cannot be ruled out by the present study.

In summary, the results of our study indicated that VSL capacity for stimuli presented across space and time is comparable between deaf and hearing individuals matched in age, education, and IQ. Moreover, we showed that when reading scores are taken into account, deaf participants performed better than hearing participants. Overall, the current results are inconsistent with the Auditory Scaffolding Hypothesis. Our results, together with evidence that 1) the ability to synchronize with a visual flash is higher in deaf compared to hearing adults (Iversen et al., 2015) and 2) deaf native signing children without cochlear implants show no difference in sequence learning compared to hearing children (Hall et al., 2017), indicate that humans can develop efficient sequencing abilities even in the absence of sound.

Acknowledgments

This research was supported in part by NIH grants DC014246 and DC010997. We would like to thank Lucinda O’Grady Farnady for assistance with deaf participant recruitment and testing. We would also like to thank Chris Brozdowski, Soren Mickelsen, Israel Montano, and Blaise Pfaffmann for help with hearing participant recruitment. We also thank Noam Siegelman for sharing the experimental stimuli with us. Moreover, we are grateful to all the deaf and hearing participants who made this research possible. Finally, we thank the Editor, Padraic Monaghan, and the two anonymous reviewers for their detailed and helpful comments on the first version of the manuscript.

Footnotes

1. Pilot data suggested that participants preferred shorter presentation times (400 ms) to longer presentation times (800 ms). The final setting was chosen in order to yield a familiarization phase of about 10 minutes, as in Siegelman et al. (2017).

2. If that participant is excluded, the correlation between VSL scores and PIAT-R scores in the deaf group is r = 0.11 (p = .60).

References

1. Acheson DJ, Wells JB, & MacDonald MC (2008). New and updated tests of print exposure and reading abilities in college students. Behavior Research Methods, 40(1), 278–289.
2. Andrews S, & Hersch J (2010). Lexical precision in skilled readers: Individual differences in masked neighbor priming. Journal of Experimental Psychology: General, 139(2), 299.
3. Arciuli J (2017). The multi-component nature of statistical learning. Philosophical Transactions of the Royal Society B, 372(1711), 20160058.
4. Arciuli J, & Simpson IC (2012). Statistical learning is related to reading ability in children and adults. Cognitive Science, 36(2), 286–304.
5. Bélanger NN, Baum SR, & Mayberry RI (2012). Reading difficulties in adult deaf readers of French: Phonological codes, not guilty! Scientific Studies of Reading, 16(3), 263–285.
6. Bharadwaj SV, & Mehta JA (2016). An exploratory study of visual sequential processing in children with cochlear implants. International Journal of Pediatric Otorhinolaryngology, 85, 158–165.
7. Brentari D (2006). Effects of language modality on word segmentation: An experimental study of phonological factors in a sign language. Papers in Laboratory Phonology, 8, 155–164.
8. Conway CM, Pisoni DB, & Kronenberger WG (2009). The importance of sound for cognitive sequencing abilities: The auditory scaffolding hypothesis. Current Directions in Psychological Science, 18(5), 275–279.
9. Conway CM, Pisoni DB, Anaya EM, Karpicke J, & Henning SC (2011). Implicit sequence learning in deaf children with cochlear implants. Developmental Science, 14(1), 69–82.
10. Conway CM, Karpicke J, Anaya EM, Henning SC, Kronenberger WG, & Pisoni DB (2011). Nonverbal cognition in deaf children following cochlear implantation: Motor sequencing disturbances mediate language delays. Developmental Neuropsychology, 36(2), 237–254.
11. Emmorey K, Giezen MR, Petrich JA, Spurgeon E, & Farnady LOG (2017). The relation between working memory and language comprehension in signers and speakers. Acta Psychologica, 177, 69–77.
12. Evans JL, Saffran JR, & Robe-Torres K (2009). Statistical learning in children with specific language impairment. Journal of Speech, Language, and Hearing Research, 52(2), 321–335.
13. Fiser J, & Aslin RN (2001). Unsupervised statistical learning of higher-order spatial structures from visual scenes. Psychological Science, 12(6), 499–504.
14. Fitch WT, & Friederici AD (2012). Artificial grammar learning meets formal language theory: An overview. Philosophical Transactions of the Royal Society B, 367(1598), 1933–1955.
15. Frost R, Siegelman N, Narkiss A, & Afek L (2013). What predicts successful literacy acquisition in a second language? Psychological Science, 24(7), 1243–1252.
16. Frost R, Armstrong BC, Siegelman N, & Christiansen MH (2015). Domain generality versus modality specificity: The paradox of statistical learning. Trends in Cognitive Sciences, 19(3), 117–125.
17. Gabay Y, Thiessen ED, & Holt LL (2015). Impaired statistical learning in developmental dyslexia. Journal of Speech, Language, and Hearing Research, 58(3), 934–945.
18. Geraci C, Gozzi M, Papagno C, & Cecchetto C (2008). How grammar can cope with limited short-term memory: Simultaneity and seriality in sign languages. Cognition, 106(2), 780–804.
19. Goldin-Meadow S, & Mayberry RI (2001). How do profoundly deaf children learn to read? Learning Disabilities Research & Practice, 16(4), 222–229.
20. Hall ML, Eigsti IM, Bortfeld H, & Lillo-Martin D (2017). Auditory access, language access, and implicit sequence learning in deaf children. Developmental Science.
21. Hauser PC, Paludneviciene R, Riddle W, Kurz KB, Emmorey K, & Contreras J (2015). American Sign Language Comprehension Test: A tool for sign language researchers. Journal of Deaf Studies and Deaf Education, 21(1), 64–69.
22. Hedge C, Powell G, & Sumner P (2017). The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behavior Research Methods, 1–21.
23. Iversen JR, Patel AD, Nicodemus B, & Emmorey K (2015). Synchronization to auditory and visual rhythms in hearing and deaf individuals. Cognition, 134, 232–244.
24. Kaufman AS, & Kaufman NL (2004). Kaufman Brief Intelligence Test (KBIT-2) (2nd ed.). Bloomington, MN: Pearson, Inc.
25. Lévesque J, Théoret H, & Champoux F (2014). Reduced procedural motor learning in deaf individuals. Frontiers in Human Neuroscience, 8.
26. Markwardt FD (1989). Peabody Individual Achievement Test–Revised (PIAT-R). Circle Pines, MN: American Guidance Service.
27. Marschark M, Spencer LJ, Durkin A, Borgna G, Convertino C, Machmer E, … & Trani A (2015). Understanding language, hearing status, and visual-spatial skills. Journal of Deaf Studies and Deaf Education, 20(4), 310–330.
28. Nissen MJ, & Bullemer P (1987). Attentional requirements of learning: Evidence from performance measures. Cognitive Psychology, 19(1), 1–32.
29. Perruchet P, & Pacton S (2006). Implicit learning and statistical learning: One phenomenon, two approaches. Trends in Cognitive Sciences, 10(5), 233–238.
30. Qi S, & Mitchell RE (2012). Large-scale academic achievement testing of deaf and hard-of-hearing students: Past, present, and future. Journal of Deaf Studies and Deaf Education, 17, 1–18.
31. Rüsseler J, Gerth I, & Münte TF (2006). Implicit learning is intact in adult developmental dyslexic readers: Evidence from the serial reaction time task and artificial grammar learning. Journal of Clinical and Experimental Neuropsychology, 28(5), 808–827.
32. Saffran JR, Aslin RN, & Newport EL (1996). Statistical learning by 8-month-old infants. Science, 274, 1926–1928.
33. Saffran JR, Newport EL, & Aslin RN (1996). Word segmentation: The role of distributional cues. Journal of Memory and Language, 35(4), 606–621.
34. Siegelman N, Bogaerts L, & Frost R (2017). Measuring individual differences in statistical learning: Current pitfalls and possible solutions. Behavior Research Methods, 49(2), 418–432.
35. Siegelman N, Bogaerts L, Kronenfeld O, & Frost R (2017). Redefining “learning” in statistical learning: What does an online measure reveal about the assimilation of visual regularities? Cognitive Science.
36. Sigurdardottir HM, Danielsdottir HB, Gudmundsdottir M, Hjartarson KH, Thorarinsdottir EA, & Kristjánsson Á (2017). Problems with visual statistical learning in developmental dyslexia. Scientific Reports, 7(1), 606.
37. Spencer M, Kaschak MP, Jones JL, & Lonigan CJ (2015). Statistical learning is related to early literacy-related skills. Reading and Writing, 28(4), 467–490.
38. Studer-Eichenberger E, Studer-Eichenberger F, & Koenig T (2016). Statistical learning, syllable processing, and speech production in healthy hearing and hearing-impaired preschool children: A mismatch negativity study. Ear & Hearing, 37(1), e57–e71.
39. Supalla T, Hauser PC, & Bavelier D (2014). Reproducing American Sign Language sentences: Cognitive scaffolding in working memory. Frontiers in Psychology, 5, 859.
40. Treiman R, & Kessler B (2006). Spelling as statistical learning: Using consonantal context to spell vowels. Journal of Educational Psychology, 98(3), 642.
41. Ulanet PG, Carson CM, Mellon NK, Niparko JK, & Ouellette M (2014). Correlation of neurocognitive processing subtypes with language performance in young children with cochlear implants. Cochlear Implants International, 15(4), 230–240.
42. von Koss Torkildsen J, Arciuli J, Haukedal CL, & Wie OB (2018). Does a lack of auditory experience affect sequential learning? Cognition, 170, 123–129.
43. Wilkinson GS, & Robertson GJ (2006). Wide Range Achievement Test 4. Psychological Assessment Resources.
