Author manuscript; available in PMC: 2020 Nov 1.
Published in final edited form as: J Commun Disord. 2019 Jul 20;82:105921. doi: 10.1016/j.jcomdis.2019.105921

Visual and Verbal Semantic Productions in Children with ASD, DLD, and Typical Language

Allison Gladfelter 1, Kacy L Barron 2, Erik Johnson 3
PMCID: PMC6842699  NIHMSID: NIHMS1535656  PMID: 31351344

Abstract

Purpose:

Associations between visual and verbal input allow children to form, augment, and refine their semantic representations within their mental lexicons. However, children with autism spectrum disorder (ASD) and with developmental language disorder (DLD; also known as specific language impairment) process visual and verbal information differently than their typically developing peers, which may impact how they incorporate visual and verbal features into their semantic representations. The purpose of this exploratory study was to investigate how children with ASD and DLD use visually and verbally presented input to produce semantic representations of newly learned words.

Method:

Semantic features produced by 36 school-aged children (12 with ASD, 12 with DLD, and 12 with typical language development) were extracted from previously collected novel word definitions and coded based on their initial presentation modality (either visual, verbal, or both in combination) during an extended novel word learning paradigm. These features were then analyzed to explore group differences in the use of visual and verbal input.

Results:

The children with ASD and DLD produced significantly more visually-presented semantic features than their typical peers in their novel word definitions. There were no differences between groups in the proportion of semantic features presented verbally or via both modalities in combination. Also, the children increased their production of semantic features presented via both modalities combined across the sessions; this same increase in production was not observed for the semantic features taught in either the visual or verbal modality alone.

Conclusion:

Children with ASD and DLD benefit from visually presented semantic information, either in isolation or combined with verbal input, during tasks of word learning. Also, the reinforcement of combined visual-verbal input appears to enhance semantic learning over time.

Keywords: autism spectrum disorder, specific language impairment, developmental language disorder, semantics, word learning, visual processing, verbal processing


Visual information is often an infant’s earliest facilitative cue to begin the word learning process, serving as the bridge to connect the word form to its associated referent (Clerkin, Hart, Rehg, Yu, & Smith, 2017). Although visual information is beneficial for establishing initial word-referent pairings, visual input is not always forthcoming about the deeper aspects of a word’s full meaning, such as figurative uses, alternative interpretations, or multiple meanings. Verbal input, such as a spoken word label, a definition read aloud, or the telling of a story, allows us to assign additional semantic features to the words in our lexicons, enhancing our depth of word knowledge (Gupta & MacWhinney, 1997). As children age, they learn to access semantic information through the visual and verbal modalities equally well (Kamio & Toichi, 2000). Both input modalities, visual and verbal, play an integral role in learning the nuanced meanings of new words.

Children with autism spectrum disorder (ASD; Erdodi, Lajiness-O’Neill, & Schmitt, 2013; McCleery et al., 2010) and with developmental language disorder (DLD; Alt, 2013; Alt & Plante, 2006; Archibald & Gathercole, 2006; Leclercq, Maillart, Pauquay, & Majerus, 2012; Marton, 2008; Norrix, Plante, Vance, & Boliek, 2007; Schul, Stiles, Wulfeck, & Townsend, 2004) process visual and verbal information differently than their typically developing peers. For both groups of children, these differences manifest as either slowed or less accurate processing of visual or verbal information in isolation or when processing input from both modalities in combination.

Given these differences, children with ASD and DLD might be expected to have difficulty learning new words. Indeed, children with ASD (Kostyuk et al., 2010; Landa, Gross, Stuart, & Faherty, 2013) and DLD (Trauner, Wulfeck, Tallal, & Hesselink, 2000) produce their first words at later ages than their typical peers. As they grow, they go on to acquire smaller vocabularies (for ASD, see Loucas et al., 2008; for DLD, see McGregor, Oleson, Bahnsen, & Duff, 2013), and their depth of word knowledge is less robust than their typical peers (for ASD, see Kostyuk et al., 2010; for DLD, see Marinellie & Johnson, 2002; McGregor et al., 2013). Without intervention, these vocabulary deficits persist over time (McGregor et al., 2013). Because of these visual and verbal processing differences, it is unclear which input cues children with ASD and DLD rely on as they learn to produce new words. The purpose of this exploratory study was to analyze novel word definitions produced by children with ASD, DLD, and typical language development (TLD) to determine which input modality (visual, verbal, or both in combination) these children draw from when learning to say new words.

Visual and Verbal Processing of Semantic Information in Children with ASD

For the young child with ASD, visual processing differences affect how semantic information is acquired, even in the earliest years. For example, 24- to 54-month-old children with ASD fail to show a shape bias, the assumption that objects with a similar shape are represented by the same word (Potrzeba, Fein, & Naigles, 2015). This facilitative visual processing cue is well established in typically developing two-year-old children (Landau, Smith, & Jones, 1988). Because the shape bias is one of the strongest word learning cues typically mastered in early childhood, this finding highlights how differences in recognizing pertinent semantic information when processing visual input have direct consequences for how children with ASD learn semantic aspects of new words.

Even though young children with ASD do not take advantage of the shape bias, by adolescence they appear to rely more heavily on visually presented semantic cues than verbally presented cues when retrieving words (Kamio & Toichi, 2000). This finding has led some researchers to speculate that pictorial access is superior to verbal access for semantic information in individuals with ASD (Kamio & Toichi, 2000), which may be an indication that visually presented semantic cues would be more facilitative than verbal cues when teaching children with ASD new words. In fact, when children with ASD are given visual cues of semantically related words (such as a picture of a bathroom to cue the word toothbrush), rather than verbal cues, they demonstrate recall of the semantically related words to the same degree as their typically developing peers (Farrant, Boucher, & Blades, 1999). This preference for visual over verbal stimuli in processing word meanings is in stark contrast to the verbal (word) preference observed in typical language learners as early as six months of age (Fulkerson & Waxman, 2007). This ‘auditory dominance’ during cross-modal visual and verbal processing has been well-established in typically developing infants and young children (e.g., Robinson & Sloutsky, 2019).

A learning curve analysis study by Erdodi, Lajiness-O’Neill, and Schmitt (2013) offers some insight into this apparent developmental shift, in which visual inputs, such as the shape bias, are not initially beneficial during language tasks, but a visual-over-verbal learning advantage emerges as children with ASD continue to grow. These researchers investigated visual and verbal learning curves in children with ASD and typically developing children between six and 16 years of age by comparing their performance on the Visual and Word Selective Reminding tasks of the Test of Memory and Learning (Reynolds & Bigler, 1994). During initial learning trials, the participants with ASD showed a steeper learning curve for the verbal learning trials than for the visual learning trials, which would point to weaker visual than verbal learning. However, when extended learning was assessed, the participants with ASD improved their performance on the visual learning task. The authors concluded that individuals with ASD may be more efficient at consolidating information presented visually than verbally over time. However, because the authors did not directly compare the visual and verbal learning curves to each other, it is difficult to determine whether visual or verbal cues were truly more beneficial for the children with ASD.

How well children with ASD process visual input is likely influenced by the duration of the exposure. For example, in one neural activation study, when processing times were short, individuals with ASD responded differently to moving visual stimuli in their primary visual cortex and middle temporal area compared to typically developing individuals (Robertson et al., 2014). However, when given more time, the same individuals with ASD performed similarly to their typically developing peers. This finding highlights the potential utility, if given sufficient time and exposure, of incorporating visual cues to facilitate semantic learning in children with ASD.

Although children with ASD may benefit from visual input more than verbal input, it is important to note that not all auditory information is processed differently by children with ASD. For example, in one event-related potential (ERP) study directly comparing the semantic relationships between matching and mismatching picture-word and picture-environmental sound pairings, the children with ASD showed the expected N400 (semantic incongruence) effect in the environmental sound (auditory nonverbal) condition, but not in the word (auditory verbal) condition (McCleery et al., 2010). Processing of verbal, rather than auditory nonverbal, semantic information thus seems especially difficult for children with ASD.

Visual and Verbal Processing of Semantic Information in Children with DLD

As with children with ASD, children with DLD also do not demonstrate a shape bias in an object naming context with novel words (Collisson, Grela, Spaulding, Rueckl, & Magnuson, 2015), hindering their ability to benefit from this early facilitative word learning cue. Individuals with DLD also have shown slowed processing on visuospatial tasks (Schul et al., 2004). Schul and colleagues (2004) studied visual processing in children with DLD and typically developing peers using digital graphic displays. On this task, the children with DLD demonstrated slower visual processing than their peers. However, this study did not determine whether the children with DLD would have been able to perform at the same level as their typical peers if given more time or exposure to the visual stimuli.

Even though visual information is processed more slowly in DLD, it remains unclear whether it is a relative strength compared to verbal processing. To compare processing of incongruent auditory-visual information between a group of preschool children with DLD and their typically developing peers, Norrix and colleagues (2007) utilized the McGurk effect, a perceptual phenomenon that relies on a person’s ability to integrate audio-visual information (McGurk & Macdonald, 1976). The children with DLD demonstrated a weaker McGurk effect than their typical peers, who were more influenced by the incongruent visual and auditory stimuli. The authors concluded that the presence of a reduced McGurk effect indicated that the children’s challenges with speech perception may not be limited to the processing of auditory information but may also extend to the visual modality. Although this study included visual and verbal input, the design does not permit the researchers to tease apart the relative contributions of each modality individually because they were always combined during the presentation of the targeted stimuli.

A later study by Cummings and Čeponienė (2010) specifically attributed weaker verbal processing to observed semantic integration deficits in children with DLD. In an ERP study similar to the one described above, children with DLD were compared to their age-matched peers during a picture-sound matching task. In one condition, the picture-sound matches included semantically matching or mismatching environmental sounds (auditory nonverbal input). In another condition, the picture-sound pairings included semantically matching or mismatching words (auditory verbal input). Although the children with DLD did demonstrate the expected N400 effect for the picture-environmental sound (nonverbal) condition, in the picture-word (verbal) condition the N400 effect was significantly delayed and their accuracy was lower than that of their typical peers.

To explore how these differences in visual and verbal processing affected the ability of children with DLD to acquire lexical and semantic information when learning new words, Alt and Plante (2006) implemented a semantic fast-mapping task. Children with DLD and typical peers were taught novel words in a visual-input-only condition, a visual input paired with auditory nonverbal information (i.e., environmental sounds) condition, and a visual input paired with auditory verbal information (i.e., novel words) condition. The novel words condition was further divided into high and low verbal processing demand tasks, with half of the novel words possessing low phonotactic probability (high verbal processing demands) and half possessing high phonotactic probability (low verbal processing demands). Unsurprisingly, the children with DLD showed poorer semantic learning overall than the children with typical language. However, they showed the most striking weaknesses in the visual-only condition and in the high verbal processing demands (low phonotactic probability) condition. Based on these findings, it is possible that children with DLD would benefit from a combination of visual and verbal input during word learning tasks, but only when the verbal processing demands are low.

The aforementioned studies all emphasize the role of visual and verbal processing on the comprehension of words and semantic features in children with ASD and DLD, but little research has explored how these processing differences impact which words and semantic features these children ultimately use and produce. Although children can successfully learn to comprehend new words with only a few exposures in a process sometimes called fast-mapping (Carey & Bartlett, 1978; Dollaghan, 1985), this learning is often incomplete, fragile, and poorly retained (Bion, Borovsky, & Fernald, 2013; Horst & Samuelson, 2008; Kucker, McMurray, & Samuelson, 2015; McMurray, Horst, & Samuelson, 2012; Munro, Baker, McGregor, Docking, & Arciuli, 2012). The successful production of words requires deeper knowledge. Knowing how verbal and visual processing impacts productions during semantic learning in these children over time is an integral component in developing individualized interventions to best facilitate their ability to more effectively produce language and form more age-appropriate vocabularies. This exploratory study aims to better understand how these visual and verbal processing differences influence how children with ASD and DLD use visual and verbal input when learning to produce new words, and how these factors may shift over time.

Research Questions

This investigation explored which presentation modality (visual, verbal, or a combination of the two) children with ASD, DLD, and typical language development (TLD) primarily use when forming and producing semantic representations of newly learned words by analyzing the input modality of the semantic features the children produced in a definition task. Because children with ASD have been speculated to be more visual learners (Farrant et al., 1999; Kamio & Toichi, 2000; Quill, 1997), it was predicted that they may rely on the visual modality more heavily to learn and produce semantic information. In contrast, although children with DLD often perform in ways similar to children with ASD on visual and verbal processing tasks, they show weaker use of visual information alone on semantic learning tasks (Alt & Plante, 2006), so it was predicted that the additional reinforcement of the semantic information presented through both modalities simultaneously would enhance semantic learning for the children with DLD. Finally, because children with TLD do not show processing deficits, they were expected to benefit from the combination of semantic information provided through both modalities, as this is the most information-rich way children acquire new words.

Methods

Previously collected novel word definitions from children with ASD, DLD, and TLD provided the data for this study. These previous novel word learning studies explored whether children with ASD and DLD respond to semantically enhanced input as well as their typical peers. The current study is designed to explore which modality of semantic input the children were responding to during this initial word learning task. All experimental, recruitment, and analytical procedures performed in the original word learning studies and in this follow-up exploratory study were in accordance with the ethical standards of each university’s respective ethical review committees.

Participants

Using G*Power statistical software (Buchner, Erdfelder, Faul, & Lang, 2017; Faul, Erdfelder, Lang, & Buchner, 2007), a power analysis was conducted using an alpha level of .05, power of .80, and a moderate effect size of .25 for the planned 3 X 3 repeated measures ANOVA with the within (input modality) by between (groups) interaction as the set parameters. Based on this power analysis, a sample size of 36 participants was considered appropriate for this follow-up exploratory study. The definitions from participants with ASD were acquired from a study investigating semantic richness and word learning in children with ASD (Gladfelter & Goffman, 2017). Data from participants with DLD and TLD were acquired from a longitudinal study investigating language and motor relationships in children with DLD and TLD (Gladfelter, Goffman, Benham, & Steeb, in preparation). Initial inclusionary testing for the DLD and TLD participants in the longitudinal study occurred 1-2 years prior to the collection of the novel word definitions analyzed in the current study. All participants were recruited from several nearby Midwestern counties in the United States.

To be included in the ASD group, each participant had an independent diagnosis of and received services for ASD per parent report and medical records. To confirm the medical diagnosis, participants with ASD were also required to meet the cutoff scores for autism or autism spectrum on the Autism Diagnostic Observation Schedule-Second Edition (Lord et al., 2012) as administered by a trained clinician. Although it was not required for inclusion in the original investigation, eleven children with ASD were also given either the Structured Photographic Expressive Language Test-Third Edition (SPELT-3; Dawson, Stout, & Eyer, 2003) or the core battery of the Clinical Evaluation of Language Fundamentals-4 (CELF-4; Semel, Wiig, & Secord, 2003) to get a broad idea of their expressive language skills (one child with ASD was not given either test due to time constraints).

Participants in the DLD group were included based on the criteria for DLD outlined by Leonard (2014): these participants achieved standardized nonverbal IQ scores above 85, demonstrated normal hearing and oral-mechanism functioning, and had no history of a neurological disorder. To be included in the original longitudinal study, each participant with DLD obtained a standard score of 87 or lower on the Structured Photographic Expressive Language Test – Preschool, 2nd Edition (SPELT-P2; Dawson et al., 2007; for SPELT-P2 inclusion criteria, see Greenslade, Plante, & Vance, 2009), which has been shown to have good sensitivity and specificity when diagnosing DLD (Plante & Vance, 1994). Additionally, all children with DLD scored within the “Minimal-to-No symptoms” range on the Childhood Autism Rating Scale, 2nd Edition (Schopler, Van Bourgondien, Wellman, & Love, 2010) to rule out a possible ASD diagnosis.

For inclusion in the TLD group, the children had no history of language delays (per parent report) and had to achieve age appropriate scores (a standard score of 85 or higher) on the Structured Photographic Expressive Language Test-Third Edition (Dawson, Stout, & Eyer, 2003) or the core battery of the Clinical Evaluation of Language Fundamentals-4 (Semel, Wiig, & Secord, 2003), whichever was age-appropriate during the initial year of participation in the longitudinal study (Gladfelter, Goffman, Benham, & Steeb, in preparation). As with the children with DLD, participants in the TLD group scored within the “Minimal-to-No symptoms” range on the Childhood Autism Rating Scale – Second Edition (Schopler, Van Bourgondien, Wellman, & Love, 2010) to rule out an ASD.

Finally, to be included in the original word learning studies, the children in all three groups had to be monolingual English speakers. They also had to pass an oral-mechanism examination, a bilateral pure tone hearing screening, and a nonverbal IQ test. Specifically, all children obtained a nonverbal IQ score of 85 or higher on the Primary Test of Nonverbal Intelligence (Ehrler & McGhee, 2008), the Columbia Mental Maturity Scale (Burgemeister, Blum, & Lorge, 1972), or the Test of Nonverbal Intelligence, Fourth Edition (Brown, Sherbenou, & Johnsen, 2010), apart from one participant (ASD1) who could not be trained to the task due to behavioral rigidity. Her data were still included because nonverbal IQ has not been shown to consistently correlate with performance on visual processing tasks (e.g., Alt, 2013). Furthermore, she successfully participated in the experimental study, and her expressive vocabulary score was matched to that of a TLD and a DLD participant.

Following these inclusionary criteria and in accordance with the power analysis, the novel word definitions from 12 school-aged children with ASD (M = 7;9 (years;months), range 4;6 – 11;3, three females), 12 with DLD (M = 7;1, range 5;9 – 8;4, three females), and 12 with TLD (M = 5;10, range 4;3 – 7;3, six females) were analyzed for this exploratory study (see Table 1). Because these original studies focused on the production of semantic information, the participants were matched using raw scores from the Expressive Vocabulary Test, 2nd Edition (Williams, 2007). Expressive vocabulary has been shown to be an area of deficit in children with ASD (Loucas et al., 2008) and with DLD (McGregor et al., 2013); as such, this matching procedure led to a significantly younger group with TLD than the ASD and DLD groups.

Table 1:

Participant Characteristics

| Measure | DLD (n = 12) M (Range) | ASD (n = 12) M (Range) | TLD (n = 12) M (Range) | F value | p value |
| --- | --- | --- | --- | --- | --- |
| Age | 7;1 (5;9 – 8;4) | 7;9 (4;6 – 11;3) | 5;10 (4;3 – 7;3) | 6.39 | 0.01 |
| Sex | 3 F, 9 M | 3 F, 9 M | 6 F, 6 M | 1.10 | 0.34 |
| EVT-2 Raw Score | 82.00 (67 – 97) | 88.67 (53 – 120) | 94.50 (68 – 128) | 1.41 | 0.26 |
| EVT-2 Standard Score | 94.17 (78 – 106) | 95.75 (79 – 112) | 114.83 (91 – 135) | 15.66 | < 0.01 |
| Nonverbal IQ Standard Score | 104.08 (91 – 125) | 96.60 (85 – 106)* | 121.50 (96 – 149) | 12.88 | < 0.01 |
| Language Standard Score | 73.67 (42 – 87) | 86.18 (58 – 111)* | 112.09 (90 – 125) | 21.63 | < 0.01 |

Note. EVT-2 = Expressive Vocabulary Test, 2nd Edition; F = female, M = male. Nonverbal IQ standard scores were from the Primary Test of Nonverbal Intelligence, the Columbia Mental Maturity Scale, or the Test of Nonverbal Intelligence; language standard scores were from the Structured Photographic Expressive Language Test – Preschool, 2nd ed., the Structured Photographic Expressive Language Test, 3rd ed., or the Clinical Evaluation of Language Fundamentals, 4th ed.

* Only includes scores from 11 participants with ASD. One-way ANOVAs with equal variance assumed were used for statistical comparisons.

The Extended Word Learning Paradigms and Original Data Collection

To determine which input modality, visual, verbal, or both, the children with ASD, DLD, and TLD primarily utilized as they established semantic representations of new words, semantic features extracted from previously collected novel word definitions were coded for their original input modality during a novel word learning paradigm (Gladfelter & Goffman, 2017; Gladfelter, Goffman, Benham, & Steeb, in preparation). This prior work manipulated the semantic richness of novel word referents, which were presented with no semantic cues, sparse semantic cues, or rich semantic cues over an extended learning period. In three identical experimental sessions over the course of three separate days roughly one week apart, the researchers presented the novel words using Microsoft PowerPoint seven times in each semantic learning condition (for 21 total exposures to each word-referent pairing). For the current analysis, only the novel words taught in the rich semantic cues condition were analyzed because these words were embedded within a children’s story, which included visually and verbally presented semantic features. After being shown the children’s story during each session, the participants defined the newly-learned words. To elicit the novel word definitions, the examiners asked, “What does ____ mean?” followed by “What else can you tell me about ____ ?” (McGregor, Sheng, & Ball, 2007).

Verbal Stimuli

The novel words used in the original studies included /fʌʃpəm/, /pʌvgəb/, /bʌpkəv/, /mʌfpəm/, /fʌspəb/, and /pʌbtəm/. The phonotactic probability and neighborhood density for all novel words were controlled to be low. Recordings of the novel words and story scripts read by a female native-English speaker were loaded into the acoustic analysis software Praat (Boersma & Weenink, 2012) and equated for intensity (70 dB HL). Four of the six novel words were simultaneously presented with visual-referents on the computer screen and auditorily through a set of external speakers placed in front of the participants. The remaining two words were never paired with visual-referents but were used in the original word learning studies to compare the productions of words given semantic cues to those taught without any semantic information. Only the two novel words taught within the semantically rich story condition were included in the current study because they included visual and verbal semantic information. The number of verbally presented semantic features was equated between the two novel words within the story. All of the novel word-referent pairings were randomized and counterbalanced across participants and groups.

Visual Stimuli

Child-friendly cartoon-like visual images drawn by a professional illustrator were used as visual referents for the novel words and for the story illustrations (Gladfelter & Goffman, 2017; Gladfelter, Goffman, Benham, & Steeb, in preparation). Each visual-referent resided in a unique superordinate semantic category: an instrument, a tool, a vehicle, and an animal. The visual stimuli were delivered to participants using Microsoft PowerPoint from a laptop connected to a separate, larger monitor placed in front of the participants. A full description of the novel words, the children’s story script, and the visual images are available in Gladfelter and Goffman (2017). The current study examined all the semantic features produced by the children for a total of 216 definitions (36 participants X 2 words X 3 sessions).

Extraction of the semantic features from the novel word definitions.

The children’s definitions were orthographically transcribed and scored for the number of semantic features deemed accurate based on the method used by McGregor, Sheng, and Ball (2007). For example, one participant with ASD defined the novel animal target as follows: “It’s small (1). It’s fuzzy (2). It has a long tail (3). It has ears (4) and that. And feet (5).” This child’s definition was judged to contain five accurate semantic features. To determine reliability of the originally selected semantic features, a second, blind coder was trained to score the definitions. The first coder randomly selected one participant from each diagnostic group for training purposes, and both coders scored the definitions independently. The coders then discussed disagreements and reached consensus. Following training, reliability was calculated using definitions from a new set of randomly selected participants for 25% of all sessions, distributed equally across groups. The first coder identified 270 semantic features and the second coder 284, with an overlap of 269. Reliability was judged to be between 94.7% (269/284) and 99.6% (269/270) (Gladfelter & Goffman, 2017; Gladfelter, Goffman, Benham, & Steeb, in preparation).
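The point-to-point agreement range reported above can be reproduced in a few lines. This is an illustrative sketch, not the authors' actual analysis script; the function name is ours, and the counts are those reported in the text:

```python
def agreement_range(coder1_count: int, coder2_count: int, overlap: int):
    """Return (lower, upper) agreement bounds: the number of features
    both coders identified, divided by each coder's own total count."""
    bounds = sorted((overlap / coder1_count, overlap / coder2_count))
    return bounds[0], bounds[1]

# Counts reported in the study: 270 and 284 features, 269 in common.
low, high = agreement_range(270, 284, 269)
print(f"{low:.1%} - {high:.1%}")  # 94.7% - 99.6%
```

Reporting the range against both coders' totals avoids privileging either coder as the reference standard.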

Coding of Semantic Features Based on Modality of Presentation

The semantic features identified in the original word learning studies were examined for modality of presentation in the current investigation. To explore the contributions of each modality, the semantic features were coded per the guidelines established in a coding manual developed by the primary and secondary authors of the current study and recorded in a coding tool (a Microsoft Excel spreadsheet). Semantic features were coded as “Visual” if the semantic feature was shown in the target referent’s image. If the semantic feature was auditorily presented in the story describing the referent, then it was coded as “Verbal.” Finally, a semantic feature was coded as “Both” if it occurred both in the visual images and auditorily in the story. To prevent any coding biases, the coder (the second author) was blinded to the diagnostic category of each participant using a de-identifying alphanumeric protocol developed by the first author.
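The three-way coding rule amounts to a simple membership lookup. The sketch below is illustrative only; the function name and the example feature sets are hypothetical, not the study's stimuli:

```python
def code_modality(feature: str, visual_features: set, verbal_features: set) -> str:
    """Code a produced semantic feature by its original input modality."""
    in_visual = feature in visual_features
    in_verbal = feature in verbal_features
    if in_visual and in_verbal:
        return "Both"
    if in_visual:
        return "Visual"
    if in_verbal:
        return "Verbal"
    return "Uncoded"  # feature was not presented in either modality

# Hypothetical feature sets for one referent:
visual = {"has a long tail", "is small"}
verbal = {"is small", "lives in trees"}
assert code_modality("has a long tail", visual, verbal) == "Visual"
assert code_modality("is small", visual, verbal) == "Both"
assert code_modality("lives in trees", visual, verbal) == "Verbal"
```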

Reliability and training.

An undergraduate research assistant majoring in Communicative Disorders was recruited to conduct reliability coding of the semantic features. Following initial training, the undergraduate assistant coded data from nine randomly selected de-identified participants (three from each group), or 25% of the total data, for inter-rater reliability. Using the same alphanumeric coding system to de-identify the participants, the undergraduate assistant was also blind to the diagnostic category of the participants to prevent biases. Cohen’s kappa was calculated and interpreted following the guidelines described by Hallgren (2012). The inter-rater reliability results indicated perfect agreement (κ = 1.00) for modality (visual, verbal, or both). This extremely high inter-rater reliability is likely due to the coders’ convenient access to the original children’s story during the coding process, which allowed them to confirm the initial presentation modality.
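Cohen's kappa corrects raw percent agreement for the agreement two raters would reach by chance given their marginal category frequencies. A minimal sketch of the statistic (illustrative only; not the study's analysis code):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters' categorical codes of the same items."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    counts_a, counts_b = Counter(codes_a), Counter(codes_b)
    # Chance agreement: product of the raters' marginal proportions, summed.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Identical codes across mixed categories yield kappa = 1.0,
# as in the perfect agreement reported in the study.
identical = ["Visual", "Both", "Visual", "Verbal", "Both"]
print(cohens_kappa(identical, identical))  # 1.0
```

Unlike raw percent agreement, kappa would stay near zero if two raters happened to agree only as often as their category base rates predict.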

Statistical Analyses

A three (group: ASD vs. DLD vs. TLD) by three (modality: visual vs. verbal vs. both) by three (session: 1 vs. 2 vs. 3) mixed-model ANOVA was used to determine how the modality of presentation contributed to the newly learned semantic representations of words for participants in each diagnostic group. For this ANOVA, group served as the between-subjects variable, and session and modality served as within-subjects variables. From the original 216 definitions, a total of 493 semantic features were coded: 151 from the children with ASD, 196 from the children with DLD, and 146 from the children with TLD. A summary of the means, standard deviations, and totals of the raw number of semantic features for each modality for each group within each session is displayed in Table 2. To account for the varying number of semantic features produced within each child’s definitions, an overall proportion of responses for each coding category (visual, verbal, or both) was calculated for each definition for each participant. For example, during the second session, participant ASD7 produced 10 total semantic features; seven were coded under the Visual category and three under Both. Thus, 70% of his features were originally presented via visual input, 0% via verbal input, and 30% through a combination of both modalities. These derived proportions served as the modality-of-presentation within-subjects data. A 0.05 alpha level was considered significant. Effect sizes were interpreted using Cohen’s (1969) benchmarks for ηp2 (Richardson, 2011): an ηp2 of .0099 reflected a small effect, .0588 indicated a medium effect, and .1379 represented a large effect.
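The proportion derivation for the ASD7 example above can be sketched as follows (illustrative Python; the function name is ours, and the handling of definitions with zero accurate features is our assumption, as the study does not specify it):

```python
def modality_proportions(feature_codes):
    """Proportion of a definition's features first presented in each modality."""
    categories = ("Visual", "Verbal", "Both")
    total = len(feature_codes)
    if total == 0:
        # Assumed handling for definitions with no accurate features.
        return {c: 0.0 for c in categories}
    return {c: feature_codes.count(c) / total for c in categories}

# Participant ASD7, session 2: 7 Visual + 3 Both = 10 features total.
props = modality_proportions(["Visual"] * 7 + ["Both"] * 3)
print(props)  # {'Visual': 0.7, 'Verbal': 0.0, 'Both': 0.3}
```

Normalizing each definition to proportions keeps a child who produces many features from dominating the within-subjects modality estimates.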

Table 2:

Semantic feature means, standard deviations, and totals for each session for each modality by group

Session 1 Session 2 Session 3
Modality Group Mean SD Total Mean SD Total Mean SD Total
Visual ASD 1.83 1.85 22 3.17 2.37 38 2.08 2.11 25
DLD 1.50 1.62 18 3.42 2.11 41 2.25 1.66 27
TLD 1.33 2.90 16 1.75 3.14 21 2.17 2.85 26
Verbal ASD 0.00 0.00 0 0.00 0.00 0 0.00 0.00 0
DLD 0.08 0.29 1 0.00 0.00 0 0.00 0.00 0
TLD 0.08 0.29 1 0.08 0.29 1 0.08 0.29 1
Both ASD 0.83 1.03 10 2.08 2.19 25 2.58 2.47 31
DLD 1.42 1.73 17 4.00 2.45 48 3.67 2.87 44
TLD 1.17 1.99 14 3.00 2.66 36 2.50 2.58 30

Results

This study explored which input modality (visual, verbal, or both) children with ASD, DLD, and TLD relied on as they produced semantic features of new words over time. A summary of the ANOVA results is presented in Table 3. The mixed-model ANOVA revealed a significant effect of modality of presentation, p < 0.01, ηp² = 0.54. Follow-up pairwise comparisons (Fisher's Least Significant Difference [LSD] tests) indicated that larger proportions of semantic features came from the visual input (p < .01) and the combination of input modalities (p < .01) than from the verbal input alone; the relative proportions of semantic features from the visual input and from the combination of input modalities did not significantly differ (p = .86). Although there was no significant effect of group, p = 0.16, ηp² = 0.10, an interaction between diagnostic group and modality of presentation approached significance, p = 0.06, ηp² = 0.13. Follow-up pairwise comparisons (Fisher's LSD tests) indicated that the groups with ASD (M = .44, p = 0.01) and DLD (M = .39, p = 0.04) produced significantly greater proportions of visually presented semantic features than the group with TLD (M = .20). There was no significant difference between the groups with ASD and DLD (p = 0.56). There were also no significant differences among the groups (ASD, DLD, and TLD) in the verbal modality alone, nor in their proportions of semantic features presented with a combination of input (all p values greater than 0.05). A summary of the key modality findings is presented in Figure 1.

Table 3.

Summary of ANOVA for Group, Modality of Presentation, and Session

Source df F p ηp²
Group 2 1.91 0.16 0.10
Error (Group) 33
Modality 2 38.27 < 0.01** 0.54
Error (Modality) 66
Session 2 8.18 < 0.01** 0.20
Error (Session) 66
Modality*Group 4 2.40 0.06 0.13
Session*Group 4 0.90 0.47 0.05
Modality*Session 4 3.66 < 0.01** 0.10
Modality*Session*Group 8 1.54 0.15 0.08
Error (Modality*Session) 132

Note.

**

p < .01.
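As a consistency check on Table 3, partial eta squared can be recovered from each F ratio and its degrees of freedom via ηp² = (F × df_effect) / (F × df_effect + df_error). A short Python sketch (our own verification exercise, not part of the original analysis) reproduces the reported values:

```python
def partial_eta_squared(f, df_effect, df_error):
    """Partial eta squared recovered from an F ratio and its degrees of freedom."""
    return (f * df_effect) / (f * df_effect + df_error)

# Reproduce the Table 3 effect sizes from the reported F values and dfs.
print(f"{partial_eta_squared(38.27, 2, 66):.2f}")  # 0.54 (Modality)
print(f"{partial_eta_squared(8.18, 2, 66):.2f}")   # 0.20 (Session)
print(f"{partial_eta_squared(1.91, 2, 33):.2f}")   # 0.10 (Group)
```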

Figure 1.


The modality proportion means for each group. Error bars reflect standard error.

The mixed-model ANOVA also revealed a significant effect of session, p < 0.01, ηp² = 0.20. Follow-up pairwise comparisons (Fisher's LSD tests) indicated that the children's overall proportions of responses were higher in Sessions 2 (M = .27, p < .01) and 3 (M = .25, p < .01) than in Session 1 (M = .17). Sessions 2 and 3 did not significantly differ (p = .32). In other words, the participants produced more semantic features in their definitions in Session 2 than in Session 1, and then maintained their proportion of responses in Session 3. There was also a significant interaction between session and modality of input, p < 0.01, ηp² = 0.10. Follow-up pairwise comparisons (Fisher's LSD tests) indicated that the proportion of semantic features learned from both modalities combined significantly increased from Session 1 to Session 2 (p < .01) and remained higher in Session 3 (p < .01). There were no significant differences across sessions in the proportion of semantic features produced from either modality in isolation (all p values > .05). In other words, the children increased their proportion of semantic features that were reinforced across both input modalities over time but did not significantly change their proportion of features learned via the visual or verbal modality alone.

Discussion

The results of this exploratory study offer insights into how processing differences in children with ASD and DLD affect which semantic features they learn and produce when acquiring new words. We initially hypothesized that the children with ASD would produce more semantic features presented visually, and that the children with DLD and TLD would benefit from the combination of semantic information provided through both modalities. Instead, both the children with ASD and DLD produced relatively more visually-presented semantic features in their definitions than the children with TLD, and there were no differences between the three groups in the use of semantic information presented only verbally or presented via both modalities in combination. Based on our small sample, it is possible that children with ASD and DLD depend more heavily than their typical peers on visual input when acquiring semantic information during production tasks of word learning.

Also, the findings of this study contribute to a growing body of work emphasizing the beneficial effects of semantically-rich contexts on extended learning (Angwin, Phua, & Copland, 2014; Capone & McGregor, 2005; Gladfelter & Goffman, 2017; Henderson, Weighall, & Gaskell, 2013; Justice, Meier, & Walpole, 2005; McGregor et al., 2007; Rabovsky, Schad, & Rahman, 2016; Rabovsky, Sommer, & Rahman, 2012). The children in the current study increased their relative use of semantic features taught via both modalities across sessions, indicating that reinforcing semantic information across modalities may play a facilitative role in semantic learning over time.

Before we discuss the implications of these findings, four methodological limitations should be borne in mind. The first is the small sample size. As it stands, the group differences observed in this exploratory study only trended toward significance, indicating that future studies must include a larger sample to more fully investigate the impact of visual and verbal processing on the production of semantic features during word learning tasks.

Second, because production, rather than comprehension, of newly learned words and semantic features was the primary focus of the original word learning studies, expressive vocabulary was used for participant matching purposes. Although the groups did not significantly differ on expressive vocabulary, this matching procedure did lead to a significantly younger group with TLD than the ASD and DLD groups. This is not surprising given that expressive vocabulary has been shown to be an area of deficit in children with ASD (Loucas et al., 2008) and with DLD (McGregor et al., 2013). Future research with a chronological age-matched control group would better determine whether age influences which visual or verbal aspects of semantic learning children learn to produce.

Third, our use of a more traditional statistical approach (i.e., an ANOVA) to compare the relative proportions of responses from the three groups of children may not be as sensitive as more advanced statistical models at uncovering findings beyond group differences. Because the participants in two of our groups came from atypically developing populations, using more advanced, mixed-effects models could be particularly fruitful. Future studies should employ more sophisticated statistical models that take more of this variability into account.

Finally, all three groups of children produced very few semantic features in their novel word definitions that were presented via the verbal modality alone. Because the semantic feature productions were collected via an open-ended definition task, the children were free to produce whichever features they recalled from the story; the experimenters did not provide any feedback or cues to elicit any specific type of semantic feature. It is possible that differences between the groups on their production of verbally presented semantic features would have emerged if the experimenter had explicitly prompted for semantic features from each input modality. In the future, an experimental design that better balances the amounts of visually and verbally presented semantic features is necessary. Because of this limitation, we will primarily focus our discussion on the visual and combined input findings from this exploratory study.

Children with ASD and DLD Benefit from Visual Cues when Building Semantic Representations

In the current study, the children with ASD and DLD both produced larger proportions of semantic features that were introduced via the visual modality in their definitions than the children with TLD, illustrating the facilitative effect of visual input when learning new words in these populations. This finding is consistent with previous research demonstrating that picture cues are more beneficial than word cues (i.e., a pictorial-word superiority) during tasks of word retrieval in children with ASD (Kamio & Toichi, 2000), and that children with ASD demonstrate enhanced performance on language tasks when visual supports are provided (Trembath, Vivanti, Iacono, & Dissanayake, 2015). Although this finding was expected for the children with ASD, the visual input was not initially anticipated to be as beneficial for the children with DLD based on previous work by Alt and Plante (2006) demonstrating that children with DLD struggled to acquire semantic information presented only through the visual modality. However, as Alt and Plante initially intuited, the children with DLD likely responded more poorly to the visual-only condition because it differed from the expected auditory and visual presentation pattern of the other word learning tasks in their study. Also, children with ASD (Robertson et al., 2014) and DLD (Schul et al., 2004) demonstrate improved visual processing when provided more time to study the visual stimuli. In the original word learning paradigms used to extract data for this study, the visual referents were shown to the children in a child-friendly story at a comfortable pace over three sessions, which may have allowed the children with ASD and DLD to overcome slowed visual processing skills. Further research beyond this exploratory study is needed to more stringently determine the influence of visual processing speed on the acquisition of visually presented semantic features in children with ASD and DLD.

Because of the early developmental dependence on visual input to learn new semantic information (Clerkin et al., 2017; Landau et al., 1988), it is plausible that the children with ASD and DLD in the current study presented with a developmentally immature semantic learning strategy. Typically developing infants as young as six months of age show a preference for verbal cues when processing semantic information (Fulkerson & Waxman, 2007), yet the children with ASD and DLD in the current study, who were also significantly older than the vocabulary-matched controls, produced proportionally more visually-learned semantic features. Future research that tracks visual and verbal semantic learning in children with ASD and DLD over a much longer developmental timespan is warranted. But, given their developmentally early benefits and the preference for visual inputs by the children with ASD and DLD in the current study, it is perhaps unsurprising that visual supports in interventions for children with ASD (Quill, 1997) and DLD (Washington & Warr-Leeper, 2013) are often found to be facilitative. Even though atypical visual processing is widely documented in these populations, it remains a relative strength during tasks of semantic learning.

Children with ASD and DLD Produce Semantic Information Presented through Dual-Modalities Similarly to their Peers with TLD

The children with ASD, DLD, and TLD did not differ in their use of semantic features presented via both modalities. In a thoughtful review article on beneficial learning strategies for children with language impairments, Alt and her colleagues (2012) discussed how intervention for populations with language disorders should reflect the principles used in the teaching of typical learners. In this review, the authors stated that "stripping away" complexity and context "may not allow a child's full linguistic knowledge to emerge" (p. 489). An example of complexity could be using sentences instead of words in isolation when introducing a new word or concept. In the present study, the definitions were collected from novel words taught in a rich, naturalistic children's story, embedded in full sentences. Perhaps this more linguistically complex approach facilitated the children with ASD and DLD in demonstrating learning that resembled that of their typical peers. The story format used in the current study could have supported the children with ASD and DLD by pairing the visual and verbal information together in synchrony, a pairing that commonly occurs in more naturalistic learning contexts. This more naturalistic (and congruent) pairing of visual and verbal information is unlike previous studies that were designed to test the participants' abilities to recognize the incongruence of auditory and visual information (Cummings & Ceponiene, 2010; McCleery et al., 2010; Norrix et al., 2007). Based on the current findings, clinicians should continue to feel comfortable introducing new concepts in multi-modality, naturalistic learning contexts to children with ASD and DLD.

Semantic Information Presented Across Modalities Enhances Learning Over Time

As summarized by an account of word learning put forth by Kucker, McMurray, and Samuelson (2015), semantic aspects of words are augmented and refined through principles of associative learning across multiple experiences with new words. Although aspects of word learning can be demonstrated following only a few experiences with the word-referent pairing (e.g., Carey & Bartlett, 1978; Dollaghan, 1985), this initial learning is fragile. With each new and semantically distinct experience with the word and its referent, the learner develops more nuanced meanings of words. In the current study, the semantic aspects of the words that were presented via both modalities in combination grew in their relative proportion across sessions. It is possible that this combined input served to semantically reinforce information across modalities, allowing the children to more efficiently strengthen the associative mappings between these semantic features than the features only presented through a single modality.

Conclusions

When acquiring new words, children with ASD and DLD benefit from visual input to create semantic representations. Also, they produce semantic information taught through the verbal and visual modalities combined as well as do their typically developing peers who have comparable expressive vocabulary sizes. This similarity between groups in the use of dual modalities reflects the sentiments of Alt and colleagues (2012) that children with language impairment benefit from the same cues provided to their peers with TLD, even if they are more complex. This study also indicates that children with processing differences can demonstrate abilities that closely resemble those of their typically developing peers when they are provided the opportunity to strengthen their semantic representations of new words during naturalistic, extended learning tasks. Enriching the semantic learning contexts by reinforcing the input across modalities can lead to the production of more robust semantic representations during tasks of new word learning, but this benefit is most apparent over time.

Highlights.

  • Children with ASD and DLD utilize visual input to produce semantic representations.

  • They produce information taught through both modalities comparably to their peers.

  • Children with ASD and DLD benefit from the same learning cues as their peers.

  • Input across modalities leads to more robust semantic representations over time.

Acknowledgments

We would like to extend our gratitude to Janet Olson, Patricia Tattersall, Alycia Tyler, Jessica Knapp, and Alyssa Bavaro for their feedback and assistance throughout this project. The current research analyses were supported by the Center for the Interdisciplinary Study of Language and Literacy (CISLL) at Northern Illinois University. Data collection for the original word-learning studies was supported by the National Institute on Deafness and other Communication Disorders (NIDCD) grants R01DC004826 (PI: Lisa Goffman) and 2T32DC000030 (PI: Laurence Leonard). Finally, we’d like to thank the participants and their families for making our research possible.


Contributor Information

Allison Gladfelter, Speech-Language Pathology, School of Allied Health & Communicative Disorders, Northern Illinois University, DeKalb, IL 60115.

Kacy L. Barron, Speech-Language Pathology, School of Allied Health & Communicative Disorders, Northern Illinois University

Erik Johnson, Speech-Language Pathology, School of Allied Health & Communicative Disorders, Northern Illinois University.

References

  1. Alt M (2013). Visual fast mapping in school-aged children with specific language impairment. Topics in Language Disorders, 33(4), 328–346.
  2. Alt M, & Plante E (2006). Factors that influence lexical and semantic fast mapping of young children with specific language impairment. Journal of Speech Language and Hearing Research, 49(5), 941–954.
  3. Angwin AJ, Phua B, & Copland DA (2014). Using semantics to enhance new word learning: An ERP investigation. Neuropsychologia, 59, 169–178.
  4. Archibald LMD, & Gathercole SE (2006). Short-term and working memory in specific language impairment. International Journal of Language & Communication Disorders, 41(6), 675–693.
  5. Bion RAH, Borovsky A, & Fernald A (2013). Fast mapping, slow learning: Disambiguation of novel word-object mappings in relation to vocabulary learning at 18, 24, and 30 months. Cognition, 126(1), 39–53.
  6. Boersma P, & Weenink D (2012). Praat: Doing phonetics by computer (Version 5.1.29). Retrieved from http://www.praat.org/
  7. Brown L, Sherbenou RJ, & Johnsen SK (2010). Test of Nonverbal Intelligence, Fourth Edition. Pearson.
  8. Buchner A, Erdfelder E, Faul F, & Lang A-G (2017). G*Power: Statistical power analyses for Windows and Mac (Version 3.1.9.2 for Windows). Heinrich-Heine-Universität Düsseldorf.
  9. Burgemeister BB, Blum LH, & Lorge I (1972). Columbia Mental Maturity Scale. New York: Harcourt Brace Jovanovich.
  10. Capone NC, & McGregor KK (2005). The effect of semantic representation on toddlers’ word retrieval. Journal of Speech Language and Hearing Research, 48(6).
  11. Carey S, & Bartlett E (1978). Acquiring a single new word. Papers and Reports on Child Language Development, 15, 17–29.
  12. Clerkin EM, Hart E, Rehg JM, Yu C, & Smith LB (2017). Real-world visual statistics and infants’ first-learned object names. Philosophical Transactions of the Royal Society B: Biological Sciences, 372(1711).
  13. Cohen J (1969). Statistical power analysis for the behavioral sciences. New York: Academic Press.
  14. Collisson BA, Grela B, Spaulding T, Rueckl JG, & Magnuson JS (2015). Individual differences in the shape bias in preschool children with specific language impairment and typical language development: Theoretical and clinical implications. Developmental Science, 18(3), 373–388.
  15. Cummings A, & Ceponiene R (2010). Verbal and nonverbal semantic processing in children with developmental language impairment. Neuropsychologia, 48(1), 77–85.
  16. Dawson J, Stout C, & Eyer J (2003). Structured Photographic Expressive Language Test (3rd ed.). DeKalb, IL: Janelle Publications.
  17. Dawson J, Stout C, Eyer J, Tattersall P, Fonkalsrud J, & Croley K (2007). Structured Photographic Expressive Language Test-Preschool: Second Edition. DeKalb, IL: Janelle Publications.
  18. Dollaghan C (1985). Child meets word: Fast mapping in preschool children. Journal of Speech and Hearing Research, 28(3).
  19. Ehrler DJ, & McGhee RL (2008). Primary Test of Nonverbal Intelligence. Austin, TX: Pro-Ed.
  20. Erdodi L, Lajiness-O’Neill R, & Schmitt TA (2013). Learning curve analyses in neurodevelopmental disorders: Are children with autism spectrum disorder truly visual learners? Journal of Autism and Developmental Disorders, 43(4), 880–890.
  21. Farrant A, Boucher J, & Blades M (1999). Metamemory in children with autism. Child Development, 70(1), 107–131.
  22. Faul F, Erdfelder E, Lang A-G, & Buchner A (2007). G*Power: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175–191.
  23. Fulkerson AL, & Waxman SR (2007). Words (but not tones) facilitate object categorization: Evidence from 6- and 12-month-olds. Cognition, 105(1), 218–228.
  24. Gladfelter A, & Goffman L (2017). Semantic richness and word learning in children with autism spectrum disorder. Developmental Science.
  25. Gladfelter A, Goffman L, Benham S, & Steeb A (in preparation). Extended word learning and the procedural deficit hypothesis in children with developmental language disorder.
  26. Greenslade KJ, Plante E, & Vance R (2009). The diagnostic accuracy and construct validity of the Structured Photographic Expressive Language Test-Preschool: Second Edition. Language Speech and Hearing Services in Schools, 40(2).
  27. Gupta P, & MacWhinney B (1997). Vocabulary acquisition and verbal short-term memory: Computational and neural bases. Brain and Language, 59(2), 267–333.
  28. Henderson L, Weighall A, & Gaskell G (2013). Learning new vocabulary during childhood: Effects of semantic training on lexical consolidation and integration. Journal of Experimental Child Psychology, 116(3), 572–592.
  29. Horst JS, & Samuelson LK (2008). Fast mapping but poor retention by 24-month-old infants. Infancy, 13(2), 128–157.
  30. Justice LM, Meier J, & Walpole S (2005). Learning new words from storybooks: An efficacy study with at-risk kindergartners. Language Speech and Hearing Services in Schools, 36(1).
  31. Kamio Y, & Toichi M (2000). Dual access to semantics in autism: Is pictorial access superior to verbal access? Journal of Child Psychology and Psychiatry and Allied Disciplines, 41(7), 859–867.
  32. Kostyuk N, Isokpehi RD, Rajnarayanan RV, Oyeleye TO, Bell TP, & Cohly HHP (2010). Areas of language impairment in autism. Autism Insights, 2, 31–38.
  33. Kucker SC, McMurray B, & Samuelson LK (2015). Slowing down fast mapping: Redefining the dynamics of word learning. Child Development Perspectives, 9(2), 74–78.
  34. Landa RJ, Gross AL, Stuart EA, & Faherty A (2013). Developmental trajectories in children with and without autism spectrum disorders: The first 3 years. Child Development, 84(2), 429–442.
  35. Landau B, Smith LB, & Jones SS (1988). The importance of shape in early lexical learning. Cognitive Development, 3(3), 299–321.
  36. Leclercq AL, Maillart C, Pauquay S, & Majerus S (2012). The impact of visual complexity on visual short-term memory in children with specific language impairment. Journal of the International Neuropsychological Society, 18(3), 501–510.
  37. Leonard LB (2014). Children with Specific Language Impairment (2nd ed.). Cambridge, MA: MIT Press.
  38. Lord C, Rutter M, DiLavore PC, Risi S, Gotham K, & Bishop SL (2012). Autism Diagnostic Observation Schedule, Second Edition (ADOS-2). Torrance, CA: Western Psychological Services.
  39. Loucas T, Charman T, Pickles A, Simonoff E, Chandler S, Meldrum D, & Baird G (2008). Autistic symptomatology and language ability in autism spectrum disorder and specific language impairment. Journal of Child Psychology and Psychiatry, 49(11), 1184–1192.
  40. Marinellie SA, & Johnson CJ (2002). Definitional skill in school-age children with specific language impairment. Journal of Communication Disorders, 35(3), 241–259.
  41. Marton K (2008). Visuo-spatial processing and executive functions in children with specific language impairment. International Journal of Language & Communication Disorders, 43(2), 181–200.
  42. McCleery JP, Ceponiene R, Burner KM, Townsend J, Kinnear M, & Schreibman L (2010). Neural correlates of verbal and nonverbal semantic integration in children with autism spectrum disorders. Journal of Child Psychology and Psychiatry, 51(3), 277–286.
  43. McGregor KK, Oleson J, Bahnsen A, & Duff D (2013). Children with developmental language impairment have vocabulary deficits characterized by limited breadth and depth. International Journal of Language & Communication Disorders, 48(3), 307–319.
  44. McGregor KK, Sheng L, & Ball T (2007). Complexities of expressive word learning over time. Language Speech and Hearing Services in Schools, 38(4).
  45. McGurk H, & Macdonald J (1976). Hearing lips and seeing voices. Nature, 264(5588), 746–748.
  46. McMurray B, Horst JS, & Samuelson LK (2012). Word learning emerges from the interaction of online referent selection and slow associative learning. Psychological Review, 119(4), 831–877.
  47. Munro N, Baker E, McGregor K, Docking K, & Arciuli J (2012). Why word learning is not fast. Frontiers in Psychology, 3.
  48. Norrix LW, Plante E, Vance R, & Boliek CA (2007). Auditory-visual integration for speech by children with and without specific language impairment. Journal of Speech Language and Hearing Research, 50(6), 1639–1651.
  49. Potrzeba ER, Fein D, & Naigles L (2015). Investigating the shape bias in typically developing children and children with autism spectrum disorders. Frontiers in Psychology, 6.
  50. Quill KA (1997). Instructional considerations for young children with autism: The rationale for visually cued instruction. Journal of Autism and Developmental Disorders, 27(6), 697–714.
  51. Rabovsky M, Schad DJ, & Rahman RA (2016). Language production is facilitated by semantic richness but inhibited by semantic density: Evidence from picture naming. Cognition, 146, 240–244.
  52. Rabovsky M, Sommer W, & Rahman RA (2012). Implicit word learning benefits from semantic richness: Electrophysiological and behavioral evidence. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(4).
  53. Reynolds CR, & Bigler ED (1994). Test of Memory and Learning: Examiner’s manual. Pro-Ed.
  54. Richardson JTE (2011). Eta squared and partial eta squared as measures of effect size in educational research. Educational Research Review, 6, 135–147.
  55. Robertson CE, Thomas C, Kravitz DI, Wallace GL, Baron-Cohen S, Martin A, & Baker CI (2014). Global motion perception deficits in autism are reflected as early as primary visual cortex. Brain, 137, 2588–2599.
  56. Robinson CW, & Sloutsky VM (2019). Two mechanisms underlying auditory dominance: Overshadowing and response competition. Journal of Experimental Child Psychology, 178, 317–340.
  57. Schopler E, Van Bourgondien M, Wellman GJ, & Love SR (2010). Childhood Autism Rating Scale (2nd ed.). Western Psychological Services.
  58. Schul R, Stiles J, Wulfeck B, & Townsend J (2004). How ‘generalized’ is the ‘slowed processing’ in SLI? The case of visuospatial attentional orienting. Neuropsychologia, 42(5), 661–671.
  59. Semel E, Wiig EH, & Secord WA (2003). Clinical Evaluation of Language Fundamentals, Fourth Edition. San Antonio, TX: The Psychological Corporation.
  60. Trauner D, Wulfeck B, Tallal P, & Hesselink J (2000). Neurological and MRI profiles of children with developmental language impairment. Developmental Medicine and Child Neurology, 42(7), 470–475.
  61. Williams K (2007). Expressive Vocabulary Test-II. Circle Pines, MN: American Guidance Service.
