Author manuscript; available in PMC: 2026 Apr 10.
Published before final editing as: J Autism Dev Disord. 2025 Jul 26:10.1007/s10803-025-06960-3. doi: 10.1007/s10803-025-06960-3

Associations Between Audiovisual Integration and Reading Comprehension in Autistic and Non-autistic School-Aged Children

Grace Pulliam 1,2,3,4, Jacob I Feldman 3,4, Mark T Wallace 2,4,6, Laurie E Cutting 2,5,7, Tiffany G Woynaroski 2,5,8
PMCID: PMC13063400  NIHMSID: NIHMS2162432  PMID: 40715978

Abstract

Although reading difficulties are not considered a core feature of autism, autistic children often present with difficulties in reading comprehension, a multisensory process involving translation of print to speech sounds (i.e., decoding) and interpretation of words in context (i.e., language comprehension). This study tested the hypothesis that audiovisual integration may explain individual differences in reading comprehension, through its relations with decoding and language comprehension, in autistic and non-autistic children. To test this hypothesis, we conducted a concurrent correlational study involving 50 autistic and 50 non-autistic school-aged children (8–17 years of age) matched at the group level on biological sex and chronological age. Participants completed a battery of tests probing their reading comprehension, decoding, and language comprehension, as well as a psychophysical task assessing audiovisual integration as indexed by susceptibility to the McGurk illusion. A series of regression analyses was carried out to test relations of interest. Audiovisual integration was significantly associated with reading comprehension, decoding, and language comprehension, with moderate-to-large effect sizes. Mediation analyses revealed that the relation between audiovisual integration and reading comprehension was completely mediated by decoding and language comprehension, with standardized indirect effects indicating significant mediation through both pathways. These associations did not vary according to diagnostic group. This work highlights the potential role of audiovisual integration in language and literacy development and underscores the promise of multisensory-based interventions for improving reading outcomes in autistic and non-autistic children. Future research should employ longitudinal designs and more diverse samples to replicate and extend these findings.

Keywords: Autism, Reading, Audiovisual Integration, Literacy, Language, Mechanisms

Introduction

Autism is a neurodevelopmental condition defined by differences in social communication and by the presence of restricted interests and repetitive behaviors, as well as sensory processing differences (American Psychiatric Association [APA], 2013). Although not considered amongst the core features of the condition, impairments in language and literacy commonly co-occur with autism (e.g., Baixauli et al., 2021; Vogindroukas et al., 2022). School-aged autistic children are at risk for disproportionate difficulties with reading comprehension, which could impact their academic and future vocational success (e.g., McIntyre et al., 2017; Nation et al., 2006; Norbury & Nation, 2010). Thus, there is a pressing need to identify factors and mechanisms that may help to explain variance in reading comprehension in autism.

Decoding and Language Comprehension Contribute To Reading Comprehension

According to the simple view of reading, reading comprehension is theorized to result from the combined contributions of decoding and language comprehension (Gough & Tunmer, 1986). A large extant literature has provided empirical support for this theory in non-autistic children (e.g., Cutting & Scarborough, 2006; Foorman et al., 2018; Hoover & Gough, 1990). While numerous factors have been hypothesized or suggested to covary with reading comprehension in autistic children, a growing body of research suggests that the simple view of reading may hold for this population, in whom decoding and language comprehension have also been found to account for the majority of the variance in reading comprehension (e.g., Brown et al., 2013; Davidson et al., 2018).
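Gough and Tunmer (1986) express this relation multiplicatively, which captures the claim that both components are necessary: if either decoding or language comprehension is nil, reading comprehension is nil as well:

```latex
\text{Reading Comprehension} = \text{Decoding} \times \text{Language Comprehension}
```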

Audiovisual Integration Likely Influences Decoding and Language Comprehension

One factor that may influence decoding and language comprehension and translate to difficulties with reading comprehension is audiovisual integration, the ability to combine information from the auditory and visual modalities in order to benefit behavior and perception. This is important to consider, given that reading is inherently a multisensory process, which requires one to rapidly and accurately convert visual information (i.e., graphemes) into auditory information (i.e., phonemes) to decode text (Squires, 2018) and to access the semantic meaning of such orthographic information to comprehend what has been read (Torppa et al., 2010). It has been theorized that audiovisual integration is foundational for a number of higher-order processes, and that altered audiovisual integration may cascade onto development across domains, including language and literacy acquisition (Cascio et al., 2016; Wallace et al., 2020).

Audiovisual Integration Is Often Altered in Autistic Children

Children on the autism spectrum often present with differences in the processing and integration of audiovisual stimuli, in particular audiovisual speech, in comparison to non-autistic children (e.g., Stevenson et al., 2014b; Woynaroski et al., 2013; Zhou et al., 2022; see Feldman et al., 2018; Jertberg et al., 2024; Wallace et al., 2020 for reviews). Several studies have shown, for example, a reduced magnitude of audiovisual integration in response to audiovisual speech for autistic children relative to non-autistic children, when assessed by a psychophysical illusion called the McGurk effect, wherein the presentation of incongruent auditory and visual speech information (e.g., an auditory “pa” and visual “ka”) induces a fused percept (e.g., “ta” or “ha”) believed to represent integration of the cues presented across sensory modalities (e.g., Iarocci et al., 2010; Irwin et al., 2011; Mongillo et al., 2008; Stevenson et al., 2014b; see Jertberg et al., 2024 and Zhang et al., 2019 for reviews). There is mounting evidence that audiovisual integration as measured by susceptibility to the McGurk effect is linked with language comprehension in autistic children, as well as broader language and literacy skill in a range of clinical populations (e.g., Feldman et al., 2022; Pulliam et al., 2023). However, no study to date has comprehensively evaluated whether audiovisual integration is associated with reading comprehension through decoding and language comprehension in autistic and non-autistic children at school age (see Fig. 1).

Fig. 1.

Conceptual Figure of Models Tested. Note. Depiction of the model tested in analyses. Audiovisual integration was indexed as the proportion of trials wherein participants reported perception of the McGurk illusion. Reading comprehension was indexed as an aggregate from the comprehension scaled score from the Gray Oral Reading Test, 5th edition (GORT; Wiederholt & Bryant, 2000) and the text comprehension scaled score of the Test of Reading Comprehension, 4th edition (TORC; Brown et al., 1978), following z-score transformation. Decoding was measured via an aggregate of (a) the fluency scaled score from the GORT, (b) the contextual fluency scaled score from the TORC, and (c) the sight word efficiency and phonemic decoding efficiency scaled scores from the Test of Word Reading Efficiency, 2nd edition (Torgesen et al., 1999) following z-score transformation. Language comprehension was indexed as an aggregate of the receptive language standard score from the Clinical Evaluation of Language Fundamentals, 5th edition (Wiig et al., 2013), the standard score from the Receptive One-Word Picture Vocabulary Test, 4th edition (Martin & Brownell, 2011), and the receptive scaled score from the Vineland Adaptive Behavior Scales, 2nd edition (Sparrow et al., 2005) following z-score transformation

The Possibility that Diagnostic Group May Influence Relations of Interest

At the same time, it is possible that the factors contributing to reading comprehension may vary according to diagnostic group. For example, our team has previously shown that audiovisual integration as indexed by the McGurk effect may be more strongly associated with language comprehension in autistic relative to non-autistic children (Feldman et al., 2022). Other studies have shown that decoding may be less strongly associated with reading comprehension in autistic compared to non-autistic children (Henderson et al., 2014). Therefore, it is critical to consider group as a potential moderator of the relations of interest.

Purpose

The present study, therefore, sought to evaluate whether audiovisual integration as measured by the McGurk illusion explains individual differences in reading comprehension, through its relations with decoding and/or language comprehension, in school-aged autistic and non-autistic children, with consideration of the possibility that associations of interest may vary according to diagnostic status. Our research questions were as follows:

  1. Is audiovisual integration, as measured by susceptibility to the McGurk effect, associated with reading comprehension in school-aged autistic and non-autistic children? We hypothesized that increased audiovisual integration would covary with better reading comprehension.

  2. Is audiovisual integration, as measured by susceptibility to the McGurk effect, also associated with decoding and language comprehension in school-aged autistic and non-autistic children? We hypothesized that increased audiovisual integration would be associated with better decoding and language comprehension.

  3. Is the relation between audiovisual integration and reading comprehension mediated, or explained at least in part, by (a) decoding and/or (b) language comprehension? We hypothesized that increased audiovisual integration would be associated with better reading comprehension via increased decoding and language comprehension.

  4. Are the aforementioned relations (i.e., relations between audiovisual integration and reading comprehension, decoding, and language comprehension; the full mediation model) moderated by diagnostic group? We hypothesized that some relations of interest may be moderated by group, given prior findings for differential associations between audiovisual integration, decoding, language comprehension, and/or reading comprehension for autistic versus non-autistic children.

Methods

This study was completed at Vanderbilt University Medical Center with procedures approved by the Vanderbilt University Institutional Review Board.

Participants

Participants were 50 autistic children (Mage = 13.1 years; 37 male, 13 female) and 50 non-autistic children (Mage = 13.0 years; 37 male, 13 female) drawn from a larger NIH-funded study of sensory functioning (e.g., Feldman et al., 2020, 2024) and matched at the group level on both chronological age and biological sex. See Table 1 for a summary of participant characteristics according to group.

Table 1.

Summary of participant characteristics at study entry according to group

Autism (n = 50)
M (SD)
Non-autism (n = 50)
M (SD)
Chronological Age (Years) 13.1 (3.1) 13.0 (2.7)
Nonverbal IQ* 109.0 (16.2) 118.0 (13.0)
Biological Sex n (%) n (%)
Male 37 (74%) 37 (74%)
Female 13 (26%) 13 (26%)
Race n (%) n (%)
Asian 3 (6%) 1 (2%)
Black/African American 2 (4%) 4 (8%)
White 35 (70%) 40 (80%)
Multiple Races 7 (14%) 5 (10%)
Not Reported 3 (6%) 0 (0%)
Ethnicity n (%) n (%)
Hispanic or Latino 5 (10%) 5 (10%)
Not Hispanic or Latino 43 (86%) 44 (88%)
Not Reported 2 (4%) 1 (2%)

Nonverbal IQ = Nonverbal intelligence as measured by the Leiter International Performance Scale, 3rd edition or the Test of Nonverbal Intelligence, 4th edition (Brown et al., 2010; Roid et al., 2013)

*

Denotes groups significantly differed, p <.01

Inclusion criteria for this study were: (a) chronological age between 8;0 and 17;11 years; (b) normal or corrected-to-normal vision and normal hearing, as confirmed by screening at entry to the study; (c) no history of seizure disorders; and (d) no diagnosed genetic disorders, such as Fragile X or tuberous sclerosis, per caregiver report. An additional inclusion criterion for autistic children was a diagnosis of autism spectrum disorder according to DSM-5 criteria, as confirmed by a research-reliable administration of the Autism Diagnostic Observation Schedule, 2nd edition (ADOS-2; Lord et al., 2012) and the judgement of a licensed clinician on the research team. Additional inclusion criteria for non-autistic children were: (a) scores below the screening threshold for autism concern on the Social Communication Questionnaire (SCQ; Rutter et al., 2003); (b) no immediate family members with a diagnosis of autism; (c) nonverbal cognitive ability (NVIQ), as measured by standard scores on either the Leiter International Performance Scale, 3rd edition (Leiter–3; Roid et al., 2013) or the Test of Nonverbal Intelligence, 4th edition (TONI–4; Brown et al., 2010), ≥ 85; and (d) no prior history or present indicators of psychiatric conditions or learning disorders.

Measurement of Audiovisual Integration

Audiovisual integration was measured via a psychophysical task assessing susceptibility to the McGurk illusion (McGurk & MacDonald, 1976). This specific task was chosen because previous work from our lab showed that it yielded indices of audiovisual integration that were stable and valid for predicting core and related features of autism in school-aged autistic and non-autistic children (e.g., Dunham et al., 2020; Feldman et al., 2024). Stimulus presentation for all tasks was managed by E-Prime software.

Stimuli

The stimuli presented in the psychophysical task were videos of a female speaker saying the syllables “pa” and “ka” at a natural rate and volume, with neutral affect. These videos were recorded against a grey background with the speaker’s face and neck visible, in a similar manner to other stimuli used in the extant literature (e.g., Woynaroski et al., 2013). The auditory and visual tracks were separated and manipulated in Adobe Premiere to create the stimulus conditions as summarized below. Each stimulus was 1.85 s long.

Procedure

Participants completed the psychophysical task in a WhisperRoom (WhisperRoom Inc., Morristown, TN, USA) using a Samsung Syncmaster 2233RZ 22-inch PC monitor and Sennheiser HD550 series supra-aural headphones. Speech stimuli were presented in four conditions: auditory-only, visual-only, congruent audiovisual speech syllables (both “pa” and “ka”), and incongruent audiovisual speech syllables (auditory “pa” and visual “ka”; i.e., McGurk stimuli). Following each stimulus presentation, participants reported what they perceived (i.e., “pa,” “ka,” “ta,” or “ha”) on a keyboard or a button box. The task consisted of 10 trials of each stimulus presented in a random order. Audiovisual integration was defined as the proportion of incongruent audiovisual speech trials in which participants reported perceiving the McGurk illusion (i.e., reported the fused percept of “ta” or “ha”). Higher values are interpreted as reflecting greater audiovisual integration and are considered more adaptive.
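The scoring rule described above can be sketched as follows. This is an illustrative reconstruction, not the authors' E-Prime or analysis code; the function and variable names are invented for the example.

```python
# Scoring McGurk susceptibility from trial-level responses (illustrative sketch).

FUSED_PERCEPTS = {"ta", "ha"}  # responses scored as the McGurk illusion

def mcgurk_susceptibility(responses):
    """Proportion of incongruent audiovisual trials on which a fused percept
    ("ta" or "ha") was reported; higher values reflect greater integration."""
    if not responses:
        raise ValueError("no incongruent audiovisual trials provided")
    fused = sum(1 for r in responses if r.lower() in FUSED_PERCEPTS)
    return fused / len(responses)

# Example: 10 incongruent trials (auditory "pa" + visual "ka")
trials = ["ta", "ta", "ha", "pa", "ta", "ta", "ka", "ta", "ha", "ta"]
print(mcgurk_susceptibility(trials))  # 0.8
```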

Measurement of Reading Comprehension

Reading comprehension was measured via the Gray Oral Reading Test, 5th edition (GORT; Wiederholt & Bryant, 2000) and the Test of Reading Comprehension, 4th edition (TORC; Brown et al., 1978). We derived (a) the comprehension scaled score from the GORT, and (b) the text comprehension scaled score of the TORC as component variables for use in analyses.

Measurement of Decoding

Decoding was measured via the GORT, the TORC, and the Test of Word Reading Efficiency, 2nd edition (TOWRE; Torgesen et al., 1999). We derived (a) the fluency scaled score from the GORT, (b) the contextual fluency scaled score from the TORC, and (c) the sight word efficiency and phonemic decoding efficiency scaled scores from the TOWRE as component variables for use in analyses.

Measures of Language Comprehension

Language comprehension was measured via the Clinical Evaluation of Language Fundamentals, 5th edition (CELF; Wiig et al., 2013), the Receptive One-Word Picture Vocabulary Test, 4th edition (ROWPVT; Martin & Brownell, 2011), and the Vineland Adaptive Behavior Scales, 2nd edition (Vineland; Sparrow et al., 2005). We derived the receptive language standard score from the CELF, the standard score from the ROWPVT, and the receptive scaled score from the Vineland for use in analyses (see Table 2 for list of constructs, measures, and variables related to each research question).

Table 2.

Summary of constructs and variables used in analyses

Construct Variable Used in Analyses Role
Reading Comprehension Average of z-scores for: Dependent Variable (RQ1,3,4)
(a) GORT comprehension scaled score
(b) TORC text comprehension scaled score
Decoding Average of z-scores for: Dependent Variable (RQ2); Mediator (RQ3&4)
(a) GORT fluency scaled score
(b) TORC contextual fluency scaled score
(c) TOWRE sight word efficiency scaled score
(d) TOWRE phonemic decoding efficiency scaled score
Language Comprehension Average of z-scores for: Dependent Variable (RQ2); Mediator (RQ3&4)
(a) CELF receptive language standard score
(b) ROWPVT standard score
(c) Vineland receptive scaled score
Audiovisual Integration Proportion of incongruent audiovisual speech trials in which the participant perceived the McGurk Illusion and reported a fused percept Independent Variable (RQ1,2,3,4)
Diagnostic Group Presence/absence of autism, dichotomized according to ADOS-2 and clinical interview Putative Moderator (RQ4)

RQ research question, GORT Gray Oral Reading Test, 5th edition (Wiederholt & Bryant, 2000), TORC Test of Reading Comprehension, 4th edition (Brown et al., 1978), TOWRE Test of Word Reading Efficiency, 2nd edition (Torgesen et al., 1999), CELF Clinical Evaluation of Language Fundamentals, 5th edition (Wiig et al., 2013), ROWPVT Receptive One-Word Picture Vocabulary Test, 4th edition (Martin & Brownell, 2011), Vineland Vineland Adaptive Behavior Scales, 2nd edition (Sparrow et al., 2005), ADOS-2 Autism Diagnostic Observation Schedule, 2nd edition (Lord et al., 2012)

Analytic Plan

Prior to conducting analyses, all variables were evaluated for normality, specifically for skewness >|1.0| and kurtosis >|3.0|. Subsequently, missing data (ranging from 0 to 11% across variables derived for use in analyses) were imputed using the missForest package (Stekhoven & Bühlmann, 2012) in R (R Core Team, 2023). We then evaluated whether component variables purported to tap the same construct (e.g., for decoding, language comprehension, and reading comprehension) were sufficiently intercorrelated (i.e., r ≥ .4) to support generation of aggregates tapping those abilities, following z-score transformation. Scores indexing audiovisual integration were raised to the 4th power to address detected violations of normality.
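The aggregation and transformation steps just described can be sketched in a few lines. The analyses were actually run in R; this NumPy version, with invented variable names and toy scores, is for illustration only (imputation omitted).

```python
# Sketch of z-score aggregation and the quartic transform described above.
import numpy as np

def zscore(x):
    """Standardize a vector of component scores (sample SD)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

def aggregate(*component_scores):
    """Average of z-transformed component scores tapping one construct."""
    return np.mean([zscore(c) for c in component_scores], axis=0)

# e.g., a reading comprehension aggregate from two toy component scores
gort_comp = np.array([8.0, 10.0, 12.0, 9.0])
torc_text = np.array([7.0, 11.0, 13.0, 9.0])
reading_comp = aggregate(gort_comp, torc_text)

# Quartic transform of the McGurk proportion to address non-normality;
# for values in [0, 1], x**4 is monotone, so participants keep their ordering
av_integration = np.array([0.9, 0.5, 1.0, 0.8]) ** 4
```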

A series of regression analyses was then carried out to test hypothesized associations. The PROCESS macro in R was utilized to evaluate whether the relation between audiovisual integration and reading comprehension was mediated by language comprehension and/or decoding and to test whether any associations of interest varied according to diagnostic group (in models including the putative predictor [audiovisual integration], the putative moderator [group], and the product term [audiovisual integration*group]; Hayes, 2022). Cook’s D was utilized to monitor for undue influence across analyses (Cook, 1977).
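The moderation test amounts to regressing the outcome on the predictor, the group indicator, and their product term, and examining the product term's coefficient. A minimal statsmodels analogue of that PROCESS moderation model, on simulated data with illustrative variable names, might look like:

```python
# Sketch of the moderation test: outcome ~ predictor + group + product term.
# Simulated data; "avi", "group", and "reading" are illustrative names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "avi": rng.uniform(0, 1, n),         # audiovisual integration
    "group": np.repeat([0, 1], n // 2),  # 0 = non-autistic, 1 = autistic
})
df["reading"] = 0.5 * df["avi"] + rng.normal(0, 1, n)  # no true interaction

# "avi * group" expands to avi + group + avi:group; moderation is the
# test of the avi:group product term
model = smf.ols("reading ~ avi * group", data=df).fit()
print(model.pvalues["avi:group"])
```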

Power Analyses

Power analyses were conducted with G*Power 3 and Monte Carlo power analysis simulation, given the plan for imputation of discrete missing data (Faul et al., 2009; Schoemann et al., 2017). Sensitivity analyses run in G*Power 3 specified with alpha = 0.05 and power = 0.80 revealed that we would be powered to detect unconditional and conditional relations that were small to moderate per Cohen’s (1988) criteria (i.e., f2 ≥ 0.11) with up to three predictors inclusive of putative predictor, putative moderator, and product terms, assuming that retention of all was warranted in final regression models, with a sample size of 100. Monte Carlo simulations specified with 1000 replications, 20,000 Monte Carlo draws per rep, and CI set to 95% indicated that we would be powered at > 0.80 (actual power ≥ 0.82) to detect parallel indirect effects assuming that paths comprising the indirect effects (a1b1 and a2b2) were at least moderate (≥ 0.3) in magnitude at n = 100. We were confident that we would observe effects of this magnitude based on our prior work estimating associations between audiovisual integration of speech stimuli and core and related features of autism (Feldman et al., 2018, 2022).
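The G*Power-style sensitivity computation can be reproduced from the noncentral F distribution: find the smallest f2 whose noncentrality parameter yields 80% power for an F test with 3 numerator degrees of freedom at n = 100. This sketch uses SciPy and assumes the G*Power convention lambda = f2 * n:

```python
# Minimal sensitivity analysis: smallest detectable f2 at alpha = .05,
# power = .80, n = 100, 3 predictors (cf. the reported f2 >= 0.11).
from scipy.stats import f as f_dist, ncf
from scipy.optimize import brentq

n, k, alpha, target_power = 100, 3, 0.05, 0.80
df1, df2 = k, n - k - 1
crit = f_dist.ppf(1 - alpha, df1, df2)  # critical F under the null

def power(f2):
    # noncentrality parameter lambda = f2 * n (G*Power convention)
    return 1 - ncf.cdf(crit, df1, df2, f2 * n)

# solve power(f2) = target_power by root finding
f2_min = brentq(lambda f2: power(f2) - target_power, 1e-6, 1.0)
print(round(f2_min, 2))  # approximately 0.11
```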

Results

Preliminary Results

Prior to running primary analyses, we ran a series of t-tests to confirm that autistic and non-autistic children displayed the expected pattern of between-group differences in response to the stimuli presented in the context of the psychophysical task. Consistent with extant studies, autistic children (M = 0.791, SD = 0.329) displayed reduced audiovisual integration in response to incongruent audiovisual speech stimuli (i.e., McGurk stimuli) in comparison to non-autistic children (M = 0.941, SD = 0.121), t(77.212) = −5.830, p <.001. This effect was moderate in magnitude (Cohen’s d = 0.60). Autistic children (M = 0.494, SD = 0.173) also displayed reduced accuracy in response to auditory-only stimuli relative to non-autistic children (M = 0.614, SD = 0.148), t(95.167) = −3.985, p <.001. This effect was moderate in magnitude (Cohen’s d = 0.75). Groups did not differ in their accuracy in response to visual-only stimuli (autistic children: M = 0.787, SD = 0.124; non-autistic children: M = 0.771, SD = 0.134; t(97.186) = 0.857, p =.394) or congruent audiovisual speech stimuli (autistic children: M = 0.951, SD = 0.058; non-autistic children: M = 0.970, SD = 0.053; t(97.014) = −1.579, p =.118). These effects were negligible to small in magnitude (Cohen’s d = 0.12 and 0.32, respectively).
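The statistics reported above can be computed from summary data alone. These generic helpers (not the authors' code) implement Welch's t with Welch-Satterthwaite degrees of freedom and Cohen's d with the equal-n pooled SD, applied here to the McGurk-condition values:

```python
# Welch's t-test and Cohen's d from summary statistics (illustrative sketch).
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t and Welch-Satterthwaite degrees of freedom."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

def cohens_d(m1, s1, m2, s2):
    """Cohen's d using the equal-n pooled standard deviation."""
    return (m1 - m2) / math.sqrt((s1**2 + s2**2) / 2)

# McGurk-condition means/SDs reported above (n = 50 per group),
# non-autistic minus autistic
d = cohens_d(0.941, 0.121, 0.791, 0.329)
print(round(d, 2))  # close to the reported d = 0.60
```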

Generation of Aggregate Scores

All component variables used to tap reading comprehension, decoding, and language comprehension were sufficiently intercorrelated to support aggregation (i.e., with all r values ≥ 0.5, p <.001; see Table 3).

Table 3.

Intercorrelations of component variables used to generate aggregate scores

Component Variable 1 2 3 4 5 6 7 8 9
Variables Purported to Tap Reading Comprehension
1. GORT Comprehension
2. TORC Text Comprehension 0.79***
Variables Purported to Tap Decoding
3. GORT Fluency
4. TORC Contextual Fluency 0.69***
5. TOWRE Sight Word Efficiency 0.77*** 0.70***
6. TOWRE Phonemic Decoding Efficiency 0.86*** 0.66*** 0.74***
Variables Purported to Tap Language Comprehension
7. CELF Receptive Language
8. ROWPVT Standard 0.83***
9. Vineland Receptive 0.57*** 0.50***

GORT Gray Oral Reading Test, 5th edition (Wiederholt & Bryant, 2000); TORC Test of Reading Comprehension, 4th edition (Brown et al., 1978); TOWRE Test of Word Reading Efficiency, 2nd edition (Torgesen et al., 1999), CELF Clinical Evaluation of Language Fundamentals, 5th edition (Wiig et al., 2013), ROWPVT Receptive One-Word Picture Vocabulary Test, 4th edition (Martin & Brownell, 2011), Vineland Vineland Adaptive Behavior Scales, 2nd edition (Sparrow et al., 2005)

***

p<.001

RQ1: The Association Between Audiovisual Integration and Reading Comprehension

The association between audiovisual integration and reading comprehension was statistically significant (β = 0.458, p <.001; see Fig. 2). Specifically, increased audiovisual integration (i.e., increased reported perception of the fused percept of “ta” or “ha” in response to incongruent audiovisual speech trials) covaried with better reading comprehension abilities across groups. This association was moderate in magnitude.

Fig. 2.

Scatterplot Depicting the Relation Between Audiovisual Integration and Reading Comprehension. Audiovisual integration as indexed by the proportion of trials wherein participants reported perception of the McGurk illusion was significantly associated with reading comprehension, with a moderate effect size. Audiovisual integration was transformed using a quartic transformation. This association did not vary according to diagnostic group

RQ2: Associations for Audiovisual Integration with Decoding and Language Comprehension

Audiovisual integration was also significantly associated with decoding (β = 0.431, p <.001; see Fig. 3) and with language comprehension (β = 0.536, p <.001; see Fig. 4), such that increased audiovisual integration covaried with better decoding and language comprehension across groups. These associations were moderate and large in magnitude, respectively.

Fig. 3.

Scatterplot Depicting the Association Between Audiovisual Integration and Decoding. Audiovisual integration as indexed by the proportion of trials wherein participants reported perception of the McGurk illusion was significantly associated with decoding, with a moderate effect size. Audiovisual integration was transformed using a quartic transformation. This association did not vary according to diagnostic group

Fig. 4.

Scatterplot Depicting the Association Between Audiovisual Integration and Language Comprehension. Audiovisual integration as indexed by the proportion of trials wherein participants reported perception of the McGurk illusion was significantly associated with language comprehension, with a large effect size. Audiovisual integration was transformed using a quartic transformation. This association did not vary according to diagnostic group

RQ3: Tests of Mediation

We subsequently tested whether decoding and language comprehension mediated the relation between audiovisual integration and reading comprehension via a parallel mediation model, which comprises the indirect relation between audiovisual integration and reading comprehension through decoding, the indirect relation between audiovisual integration and reading comprehension through language comprehension, and the indirect relation between audiovisual integration and reading comprehension through both decoding and language comprehension in parallel. The results of this parallel mediation model are summarized in Table 4 and depicted in Fig. 5.

Table 4.

Summary of significant parallel mediation model

Model/Variable B (SE) β t p f2
Model 1: Decoding (a1 path)
 1. Constant − 0.677(0.165) − 4.11 < 0.001***
 2. Audiovisual Integration 1.080(0.229) 0.431 4.72 < 0.001*** 0.237
Model 2: Language Comprehension (a2 path)
 1. Constant − 1.933(0.354) − 5.46 < 0.001***
 2. Audiovisual Integration 3.082(0.491) 0.536 6.28 < 0.001*** 0.403
Model 3: Reading Comprehension (b and c’ paths)
 1. Constant − 0.006(0.112) − 0.054 0.958
 2. Audiovisual Integration 0.010(0.162) 0.004 0.059 0.953 < 0.001
 3. Decoding 0.317(0.086) 0.300 3.706 < 0.001*** 0.099
 4. Language Comprehension 0.280(0.040) 0.609 7.040 < 0.001*** 0.593

Coefficients, p values, and f2 values for regression analyses. f2 ≥ 0.02 indicates a small effect size, f2 ≥ 0.15 indicates a moderate effect size, f2 ≥ 0.35 indicates a large effect size (Cohen, 1988)

***

p value for effect < 0.001

Fig. 5.

Depiction of Parallel Mediation Model. The indirect effect of audiovisual integration on reading comprehension via decoding and language comprehension. All values are standardized coefficients. a1 = the relation between audiovisual integration and decoding, not controlling for any other factors; b1 = the relation between decoding and reading comprehension, controlling for audiovisual integration and language comprehension; a2 = the relation between audiovisual integration and language comprehension, not controlling for any other factors; b2 = the relation between language comprehension and reading comprehension, controlling for audiovisual integration and decoding; c’ = the direct relation between audiovisual integration and reading comprehension, controlling for decoding and language comprehension. Both indirect effects comprising this model, as well as the parallel indirect effect, were statistically significant, meaning that the total relation between audiovisual integration and reading comprehension was significantly reduced when controlling for the putative mediators. In this case, the direct relation was statistically non-significant when controlling for the mediators of interest, meaning that the relation between audiovisual integration and reading comprehension is completely mediated, or explained, by decoding and language comprehension. ***p <.001, ns nonsignificant

The indirect relation between audiovisual integration and reading comprehension through decoding includes (a) the relation between audiovisual integration and decoding, not controlling for any other factors (the a1 path in this model); and (b) the relation between decoding and reading comprehension, controlling for audiovisual integration and language comprehension (the b1 path in this model). As reported above, audiovisual integration was significantly associated with decoding (β = 0.431, p <.001). Additionally, decoding was significantly associated with reading comprehension, controlling for audiovisual integration and language comprehension (β = 0.300, p <.001). The completely standardized indirect relation of audiovisual integration on reading comprehension through decoding was statistically significant, 95% CI: [0.068, 0.224].

The indirect relation between audiovisual integration and reading comprehension through language comprehension includes (a) the relation between audiovisual integration and language comprehension, not controlling for any other factors (the a2 path in this model); and (b) the relation between language comprehension and reading comprehension, controlling for audiovisual integration and decoding (the b2 path in this model). As reported above, audiovisual integration was significantly associated with language comprehension (β = 0.536, p <.001). Additionally, language comprehension was significantly associated with reading comprehension, controlling for audiovisual integration and decoding (β = 0.609, p <.001). The completely standardized indirect effect of audiovisual integration on reading comprehension through language comprehension was statistically significant, 95% CI: [0.209, 0.453].

The indirect effect of audiovisual integration on reading comprehension via decoding and language comprehension in parallel was also statistically significant, 95% CI: [0.314, 0.588]. The direct effect of audiovisual integration on reading comprehension, controlling for decoding and language comprehension, was non-significant (β = 0.004, p =.953), meaning that the association between audiovisual integration and reading comprehension was completely mediated, or explained, by decoding and language comprehension across groups.
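PROCESS estimates these indirect effects by regression and judges significance by whether a bootstrap confidence interval excludes zero. A minimal NumPy analogue of that percentile-bootstrap procedure for the two indirect paths (a1*b1 and a2*b2), on simulated data with illustrative variable names rather than the study data, might look like:

```python
# Percentile-bootstrap sketch of a parallel mediation model (illustrative).
import numpy as np

rng = np.random.default_rng(1)
n = 100
avi = rng.normal(size=n)                             # predictor (X)
dec = 0.4 * avi + rng.normal(size=n)                 # mediator 1 (M1)
lang = 0.5 * avi + rng.normal(size=n)                # mediator 2 (M2)
read = 0.3 * dec + 0.6 * lang + rng.normal(size=n)   # outcome (Y)

def indirect_effects(x, m1, m2, y):
    """Return (a1*b1, a2*b2) from the three mediation regressions."""
    X1 = np.column_stack([np.ones_like(x), x])
    a1 = np.linalg.lstsq(X1, m1, rcond=None)[0][1]   # a1 path: M1 ~ X
    a2 = np.linalg.lstsq(X1, m2, rcond=None)[0][1]   # a2 path: M2 ~ X
    Xb = np.column_stack([np.ones_like(x), x, m1, m2])
    _, _, b1, b2 = np.linalg.lstsq(Xb, y, rcond=None)[0]  # b paths: Y ~ X+M1+M2
    return a1 * b1, a2 * b2

# resample cases with replacement and re-estimate both indirect effects
boot = np.array([
    indirect_effects(avi[idx], dec[idx], lang[idx], read[idx])
    for idx in (rng.integers(0, n, n) for _ in range(1000))
])
ci_m1 = np.percentile(boot[:, 0], [2.5, 97.5])
ci_m2 = np.percentile(boot[:, 1], [2.5, 97.5])
print(ci_m1, ci_m2)  # an indirect effect is "significant" if its CI excludes 0
```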

RQ4: Planned Tests of Moderation According to Diagnostic Group

None of the relations or indirect effects of interest significantly varied according to diagnostic group. The p values for product terms were all > 0.05 in regression models testing moderated associations, and confidence intervals testing moderated mediation relations all included 0.

Post-Hoc Analyses

As our sample of autistic and non-autistic children was heterogeneous, we ran a series of post-hoc analyses to explore whether the associations and indirect effects of interest here were robust to controlling for chronological age, NVIQ, and diagnostic group. All of the associations of interest and indirect effects remained significant when controlling for these variables. The results of these models are summarized in Tables S1, S2, and S3 (see Supplemental Materials).

We also considered whether the relations of interest here were specific to audiovisual integration or whether they might also hold for accuracy in perceiving the auditory-only, visual-only, and congruent audiovisual stimuli that were also presented in the context of our psychophysical task. Associations between perceptual accuracy in these other stimulus conditions and the criterion variables of interest to this study (i.e., decoding, language comprehension, and reading comprehension) were all negligible to small in magnitude. The results of all zero-order associations across and within groups are summarized in Table S4 to facilitate meta-analyses and planning for future primary studies that may be conducted with autistic and/or non-autistic children.

Discussion

The present study investigated the degree to which and the mechanisms by which audiovisual integration is related to reading comprehension in autistic and non-autistic children. Our results indicate that increased audiovisual integration is associated with better reading comprehension through its relations with greater decoding abilities and language comprehension. These associations did not vary according to diagnostic group but rather were observed across autistic and non-autistic children.

Implications for Research, Theory, and Practice

Findings from this investigation extend the large and growing body of literature highlighting potential links between audiovisual integration, language, and literacy skills, and their neural substrates, in autistic children and other clinical and at-risk populations, including children with or at elevated likelihood for developmental language disorder and dyslexia (e.g., Bastien-Toniazzo et al., 2010; Calabrich et al., 2021; Edwards et al., 2018; Frei et al., 2025; Harrar et al., 2014; Meronen et al., 2013; Pulliam et al., 2023; Wang et al., 2020). The current results also lend further empirical support to the simple view of reading (Hoover & Gough, 1990), as well as the theory that altered audiovisual integration can cascade onto the development of higher-order skills in both autistic and non-autistic children (Wallace et al., 2020). This work additionally has implications for clinical practice, suggesting that measuring multisensory integration may prove useful for predicting future difficulties in language and literacy acquisition and that multisensory approaches to intervention may prove efficacious for facilitating optimal language and literacy outcomes (e.g., Kujala et al., 2001; Magnan, 2006; Veuillet et al., 2007), though these hypotheses must be rigorously tested (Hazaymeh & Khasawneh, 2025; Schlesinger & Gray, 2017).

Summary of Strengths

This work is the first to test associations between audiovisual integration and reading comprehension, through decoding and language comprehension, and to consider whether these associations differ based on diagnostic group in autistic and non-autistic children. There are several notable strengths of our study. First, we recruited a sample of autistic and non-autistic children that was well-matched on several characteristics, including biological sex and chronological age. This sample was adequate in size for us to test relations of interest using advanced analytic approaches and complex (mediation and moderation) models, unveiling the mechanisms by which audiovisual integration is linked with reading comprehension across our populations of interest. We additionally employed multiple measures to tap constructs of interest to the extent possible (i.e., for reading comprehension, decoding, and language comprehension), which increased the stability, and thus the predictive validity, of the aggregate scores utilized in analyses, and limited the number of statistical tests to be run and, thereby, the family-wise error of the study (Rushton et al., 1983).

Limitations and Future Directions

There are, however, several limitations to our study. First, our use of a concurrent correlational design limits our ability to draw firm conclusions regarding the direction or causal nature of the relations we observed. Future work using longitudinal correlational and/or experimental designs is much needed to increase our confidence in the directionality or causality of links between audiovisual integration and later language comprehension, decoding, and reading comprehension.

We also leveraged only one measure of audiovisual integration, the McGurk effect. Future research could consider additional measures of audiovisual processing and integration (e.g., other psychophysical or neural measures) as putative predictors of language and literacy skill (Stevenson et al., 2014a). Employing such paradigms in future work may help us to determine whether these associations are specific to the McGurk task, or whether they generalize to other aspects of processing, binding, and integration of syllables or speech presented in a more natural and continuous manner (e.g., Van Engen et al., 2022). Further, the specific McGurk task that we used involved a closed response set, wherein participants were limited to reporting percepts presented on the button box. Going forward, researchers should consider alternative response options that permit reporting a range of potential percepts while remaining mindful of challenges in reporting that could arise due to speech, language, imitation, and/or fine motor impairments in autistic children (e.g., avoiding response formats that require autistic children to repeat or type extensively on a keyboard to report their perception).

Given known differences in visual attention in autism and the high rates of co-occurring attention-deficit/hyperactivity disorder (ADHD) in autistic children, it is also important to consider broader attentional factors that could have contributed to the individual differences in performance on the McGurk task that we observed (e.g., Ioannou et al., 2020; Seernani et al., 2021). In the present study, the presence of co-occurring ADHD was not evaluated, and eye gaze was not monitored beyond having a member of the research team watch to ensure that participants were looking towards the computer monitor throughout the psychophysical task. Investigators should plan a priori for the assessment of ADHD symptomatology, ideally in a dimensional manner, and work to incorporate eye tracking technology into their tasks tapping audiovisual integration.

Finally, these findings may not generalize to all children on the entire autism spectrum, as the participants in our study were predominantly White and non-Hispanic, and all participants were able to complete psychophysical testing and were relatively cognitively and linguistically able. Subsequent studies should increase purposive recruitment efforts and employ measurement approaches that are lower in demand (e.g., passive biobehavioral measures of audiovisual integration) to boost sample diversity and ensure the generalizability of these findings to the broader population of autistic and non-autistic children.

Conclusion

We found a significant relation between audiovisual integration and reading comprehension that was fully explained by decoding and language comprehension across autistic and non-autistic children. These results advance our understanding of the intricate relations between audiovisual integration, language, and reading, underscoring the need to consider multisensory processes in language and literacy development and to conduct rigorous research evaluating multisensory approaches as a potential means to promote more optimal language and literacy outcomes for children who are and are not on the autism spectrum.

Supplementary Material


Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s10803-025-06960-3.

Acknowledgements

This research was funded by NIH U54HD083211 and P50HD103537 (PI: Neul), NIH/NCATS KL2TR000446 (PI: Woynaroski), NIH/NIDCD R21DC016144 and R01DC020186 (PI: Woynaroski), NSF NRT grant DGE 19-22697 (PI: Wallace), NIH/NIMH grant T32MH064913 (PI: Winder; training support for Pulliam), NSF NRT grant 19-22697 (PI: Stassun; training support for Pulliam), NIH/NCATS TL1TR002244 (PI: Bastarache; training support for Feldman), and NIH/NIDCD K99DC021501 (PI: Feldman).

Conflict of interest

JIF and TW are employed by the Department of Hearing and Speech Sciences at Vanderbilt University Medical Center, which offers communication assessment and intervention services for autistic children through their outpatient clinics and trains clinical students in the provision of assessments and treatments delivered over the course of early childhood. JIF and TW are parents of autistic children. TW has received internal and external funding to support her research focused on children with or at high likelihood for a diagnosis of autism as well as speaker fees to summarize the findings from this research. All other authors have no conflicts of interest to declare.

References

  1. American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders (5th ed.). 10.1176/appi.books.9780890425596
  2. Baixauli I, Rosello B, Berenguer C, Téllez de Meneses M, & Miranda A (2021). Reading and writing skills in adolescents with autism spectrum disorder without intellectual disability. Frontiers in Psychology, 12, Article 646849. 10.3389/fpsyg.2021.646849
  3. Bastien-Toniazzo M, Stroumza A, & Cavé C (2010). Audio-visual perception and integration in developmental dyslexia: An exploratory study using the McGurk effect. Current Psychology Letters: Behaviour, Brain & Cognition, 25(3), 1–25. 10.4000/cpl.4928
  4. Bottema-Beutel K, Kapp SK, Lester JN, Sasson NJ, & Hand BN (2021). Avoiding ableist language: Suggestions for autism researchers. Autism in Adulthood, 3(1), 18–29. 10.1089/aut.2020.0014
  5. Brown V, Hammill D, & Wiederholt JL (1978). Test of reading comprehension (4th ed.). Pro-Ed.
  6. Brown L, Sherbenou RJ, & Johnsen SK (2010). Test of nonverbal intelligence (4th ed.). Pro-Ed.
  7. Brown HM, Oram-Cardy J, & Johnson A (2013). A meta-analysis of the reading comprehension skills of individuals on the autism spectrum. Journal of Autism and Developmental Disorders, 43, 932–955. 10.1007/s10803-012-1638-1
  8. Calabrich SL, Oppenheim GM, & Jones MW (2021). Audiovisual learning in dyslexic and typical adults: Modulating influences of location and context consistency. Frontiers in Psychology, 12, Article 754610. 10.3389/fpsyg.2021.754610
  9. Cascio CJ, Woynaroski TG, Baranek GT, & Wallace MT (2016). Toward an interdisciplinary approach to understanding sensory function in autism spectrum disorder. Autism Research, 9(9), 920–925. 10.1002/aur.1612
  10. Cohen J (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.
  11. Cook RD (1977). Detection of influential observation in linear regression. Technometrics, 19, 15–18. 10.1080/00401706.1977.10489493
  12. R Core Team (2023). R: A language and environment for statistical computing (Version 4.3.2). R Foundation for Statistical Computing. https://www.R-project.org/
  13. Cutting LE, & Scarborough HS (2006). Prediction of reading comprehension: Relative contributions of word recognition, language proficiency, and other cognitive skills can depend on how comprehension is measured. Scientific Studies of Reading, 10(3), 277–299. 10.1207/s1532799xssr1003_5
  14. Davidson MM, Kaushanskaya M, & Weismer SE (2018). Reading comprehension in children with and without ASD: The role of word reading, oral language, and working memory. Journal of Autism and Developmental Disorders, 48(10), 3524–3541. 10.1007/s10803-018-3617-7
  15. Dunham K, Feldman JI, Liu Y, Cassidy M, Conrad JG, Santapuram P, Suzman E, Tu A, Butera I, Simon DM, Broderick N, Wallace MT, Lewkowicz DJ, & Woynaroski TG (2020). Stability of variables derived from measures of multisensory function in children with autism spectrum disorder. American Journal on Intellectual and Developmental Disabilities, 125(4), 287–303. 10.1352/1944-7558-125.4.287
  16. Edwards ES, Burke K, Booth JR, & McNorgan C (2018). Dyslexia on a continuum: A complex network approach. PLoS One, 13(12), Article e0208923.
  17. Faul F, Erdfelder E, Buchner A, & Lang AG (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. 10.3758/BRM.41.4.1149
  18. Feldman JI, Dunham K, Cassidy M, Wallace MT, Liu Y, & Woynaroski TG (2018). Audiovisual multisensory integration in individuals with autism spectrum disorder: A systematic review and meta-analysis. Neuroscience & Biobehavioral Reviews, 95, 220–234. 10.1016/j.neubiorev.2018.09.020
  19. Feldman JI, Dunham K, Conrad JG, Simon DM, Cassidy M, Liu Y, Tu A, Broderick N, Wallace MT, & Woynaroski TG (2020). Plasticity of temporal binding in children with autism spectrum disorder: A single case experimental design perceptual training study. Research in Autism Spectrum Disorders, 74, Article 101555. 10.1016/j.rasd.2020.101555
  20. Feldman JI, Conrad JG, Kuang W, Tu A, Liu Y, Simon DM, Wallace MT, & Woynaroski TG (2022). Relations between the McGurk effect, social and communication skill, and autistic features in children with and without autism. Journal of Autism and Developmental Disorders, 52(5), 1920–1928. 10.1007/s10803-021-05074-w
  21. Feldman JI, Dunham K, DiCarlo GE, Cassidy M, Liu Y, Suzman E, Williams ZJ, Pulliam G, Kaiser S, Wallace MT, & Woynaroski TG (2024). A randomized controlled trial for audiovisual multisensory perception in autistic youth. Journal of Autism and Developmental Disorders, 53(11), 4318–4335. 10.1007/s10803-022-05709-6
  22. Foorman BR, Petscher Y, & Herrera S (2018). Unique and common effects of decoding and language in predicting reading comprehension in grades 1–10. Learning and Individual Differences, 63, 12–23. 10.1016/j.lindif.2018.02.011
  23. Frei N, Willinger D, Haller P, Fraga-González G, Pamplona GS, Haugg A, Lutz C, Coraj S, Hefti E, & Brem S (2025). Toward a mechanistic understanding of reading difficulties: Deviant audiovisual learning dynamics and network connectivity in children with poor reading skills. Journal of Neuroscience, 45(17), Article e1119242025. 10.1523/JNEUROSCI.1119-24.2025
  24. Gough PB, & Tunmer WE (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7(1), 6–10. 10.1177/074193258600700104
  25. Harrar V, Tammam J, Pérez-Bellido A, Pitt A, Stein J, & Spence C (2014). Multisensory integration and attention in developmental dyslexia. Current Biology, 24(5), 531–535. 10.1016/j.cub.2014.01.029
  26. Hayes AF (2022). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (3rd ed.). Guilford Press.
  27. Hazaymeh WA, & Khasawneh MAS (2025). Exploring the efficacy of multisensory techniques in enhancing reading fluency for dyslexic English language learners. World Journal of English Language, 15(1), 146–159. 10.5430/wjel.v15n1p146
  28. Henderson LM, Clarke PJ, & Snowling MJ (2014). Reading comprehension impairments in autism spectrum disorders. L'Année Psychologique, 114(4), 779–797. 10.3917/anpsy.144.0779
  29. Hoover WA, & Gough PB (1990). The simple view of reading. Reading and Writing, 2, 127–160. 10.1007/BF00401799
  30. Iarocci G, Rombough A, Yager J, Weeks DJ, & Chua R (2010). Visual influences on speech perception in children with autism. Autism, 14(4), 305–320. 10.1177/1362361309353615
  31. Ioannou C, Seernani D, Stefanou ME, Riedel A, Tebartz van Elst L, Smyrnis N, Fleischhaker C, Biscaldi-Schaefer M, & Boccignone G (2020). Comorbidity matters: Social visual attention in a comparative study of autism spectrum disorder, attention-deficit/hyperactivity disorder and their comorbidity. Frontiers in Psychiatry, 11, Article 545567. 10.3389/fpsyt.2020.545567
  32. Irwin JR, Tornatore LA, Brancazio L, & Whalen DH (2011). Can children with autism spectrum disorders hear a speaking face? Child Development, 82(5), 1397–1403. 10.1111/j.1467-8624.2011.01619.x
  33. Jertberg RM, Wienicke FJ, Andruszkiewicz K, Begeer S, Chakrabarti B, Geurts HM, de Vries R, & Van der Burg E (2024). Differences between autistic and non-autistic individuals in audiovisual speech integration: A systematic review and meta-analysis. Neuroscience & Biobehavioral Reviews, 164, Article 105787. 10.1016/j.neubiorev.2024.105787
  34. Kujala T, Karma K, Ceponiene R, Belitz S, Turkkila P, Tervaniemi M, & Näätänen R (2001). Plastic neural changes and reading improvement caused by audiovisual training in reading-impaired children. Proceedings of the National Academy of Sciences, 98(18), 10509–10514. 10.1073/pnas.181589198
  35. Lord C, Rutter M, DiLavore P, Risi S, Gotham K, & Bishop SL (2012). Autism diagnostic observation schedule (2nd ed.). Western Psychological Services.
  36. Magnan A, & Ecalle J (2006). Audio-visual training in children with reading disabilities. Computers & Education, 46(4), 407–425. 10.1016/j.compedu.2004.08.008
  37. Martin NA, & Brownell R (2011). Receptive one-word picture vocabulary test (4th ed.). Pro-Ed.
  38. McGurk H, & MacDonald J (1976). Hearing lips and seeing voices. Nature, 264, 746–748. 10.1038/264746a0
  39. McIntyre NS, Solari EJ, Gonzales JE, Solomon M, Lerro LE, Novotny S, Oswald TM, & Mundy PC (2017). The scope and nature of reading comprehension impairments in school-aged children with higher-functioning autism spectrum disorder. Journal of Autism and Developmental Disorders, 47, 2838–2860. 10.1007/s10803-017-3209-y
  40. Meronen A, Tiippana K, Westerholm J, & Ahonen T (2013). Audiovisual speech perception in children with developmental language disorder in degraded listening conditions. Journal of Speech, Language, and Hearing Research, 56(1), 211–221. 10.1044/1092-4388(2012/11-0270)
  41. Mongillo EA, Irwin JR, Whalen DH, Klaiman C, Carter AS, & Schultz RT (2008). Audiovisual processing in children with and without autism spectrum disorders. Journal of Autism and Developmental Disorders, 38, 1349–1358. 10.1007/s10803-007-0521-y
  42. Nation K, Clarke P, Wright B, & Williams C (2006). Patterns of reading ability in children with autism spectrum disorder. Journal of Autism and Developmental Disorders, 36, 911–919. 10.1007/s10803-006-0130-1
  43. Norbury C, & Nation K (2010). Understanding variability in reading comprehension in adolescents with autism spectrum disorders: Interactions with language and decoding. Scientific Studies of Reading, 15(3), 191–210. 10.1080/10888431003623553
  44. Pulliam G, Feldman JI, & Woynaroski TG (2023). Audiovisual multisensory integration in individuals with reading and language impairments: A systematic review and meta-analysis. Neuroscience & Biobehavioral Reviews, 149, Article 105130. 10.1016/j.neubiorev.2023.105130
  45. Roid GH, Miller LJ, Pomplun M, & Koch C (2013). Leiter international performance scale (3rd ed.). Western Psychological Services.
  46. Rushton JP, Brainerd CJ, & Pressley M (1983). Behavioral development and construct validity: The principle of aggregation. Psychological Bulletin, 94(1), 18–38. 10.1037/0033-2909.94.1.18
  47. Rutter M, Bailey A, & Lord C (2003). Social communication questionnaire. Western Psychological Services.
  48. Schlesinger NW, & Gray S (2017). The impact of multisensory instruction on learning letter names and sounds, word reading, and spelling. Annals of Dyslexia, 67(3), 219–258. 10.1007/s11881-017-0140-z
  49. Schoemann AM, Boulton AJ, & Short SD (2017). Determining power and sample size for simple and complex mediation models. Social Psychological and Personality Science, 8(4), 379–386. 10.1177/1948550617715068
  50. Seernani D, Damania K, Ioannou C, Penkalla N, Jill H, Foulsham T, Kingstone A, Anderson N, Boccignone G, Bender S, Smyrnis N, Biscaldi M, Ebner-Priemer U, & Klein C (2021). Visual search in ADHD, ASD and ASD + ADHD: Overlapping or dissociating disorders? European Child and Adolescent Psychiatry, 30, 549–562. 10.1007/s00787-020-01535-2
  51. Sparrow SS, Cicchetti D, & Balla DA (2005). Vineland adaptive behavior scales (2nd ed.). Pearson.
  52. Squires KE (2018). Decoding: It's not all about the letters. Language, Speech, and Hearing Services in Schools, 49(3), 395–408. 10.1044/2018_LSHSS-17-0104
  53. Stekhoven DJ, & Bühlmann P (2012). missForest—Non-parametric missing value imputation for mixed-type data. Bioinformatics, 28, 112–118. 10.1093/bioinformatics/btr597
  54. Stevenson RA, Ghose D, Fister JK, Sarko DK, Altieri NA, Nidiffer AR, Kurela LR, Siemann JK, James TW, & Wallace MT (2014a). Identifying and quantifying multisensory integration: A tutorial review. Brain Topography, 27, 707–730. 10.1007/s10548-014-0365-7
  55. Stevenson RA, Siemann JK, Schneider BC, Eberly HE, Woynaroski TG, Camarata SM, & Wallace MT (2014b). Multisensory temporal integration in autism spectrum disorders. Journal of Neuroscience, 34(3), 691–697. 10.1523/JNEUROSCI.3615-13.2014
  56. Torgesen JK, Wagner RK, & Rashotte CA (1999). Test of word reading efficiency (2nd ed.). Pro-Ed.
  57. Torppa M, Lyytinen P, Erskine J, Eklund K, & Lyytinen H (2010). Language development, literacy skills, and predictive connections to reading in Finnish children with and without familial risk for dyslexia. Journal of Learning Disabilities, 43(4), 308–321. 10.1177/0022219410369096
  58. Van Engen KJ, Dey A, Sommers MS, & Peelle JE (2022). Audiovisual speech perception: Moving beyond McGurk. Journal of the Acoustical Society of America, 152(6), 3216–3225. 10.1121/10.0015262
  59. Veuillet E, Magnan A, Ecalle J, Thai-Van H, & Collet L (2007). Auditory processing disorder in children with reading disabilities: Effect of audiovisual training. Brain, 130(11), 2915–2928. 10.1093/brain/awm235
  60. Vogindroukas I, Stankova M, Chelas EN, & Proedrou A (2022). Language and speech characteristics in autism. Neuropsychiatric Disease and Treatment, 18, 2367–2377. 10.2147/NDT.S331987
  61. Wallace MT, Woynaroski TG, & Stevenson RA (2020). Multisensory integration as a window into orderly and disrupted cognition and communication. Annual Review of Psychology, 71, 193–219. 10.1146/annurev-psych-010419-051112
  62. Wang F, Karipidis II, Pleisch G, Fraga-González G, & Brem S (2020). Development of print-speech integration in the brain of beginning readers with varying reading skills. Frontiers in Human Neuroscience, 14, 289. 10.3389/fnhum.2020.00289
  63. Wiederholt L, & Bryant B (2000). Gray oral reading test (4th ed.). Pro-Ed.
  64. Wiig EH, Semel E, & Secord WA (2013). Clinical evaluation of language fundamentals (5th ed.). NCS Pearson.
  65. Woynaroski TG, Kwakye LD, Foss-Feig JH, Stevenson RA, Stone WL, & Wallace MT (2013). Multisensory speech perception in children with autism spectrum disorders. Journal of Autism and Developmental Disorders, 43, 2891–2902. 10.1007/s10803-013-1836-5
  66. Zhang J, Meng Y, He J, Xiang Y, Wu C, Wang S, & Yuan Z (2019). McGurk effect by individuals with autism spectrum disorder and typically developing controls: A systematic review and meta-analysis. Journal of Autism and Developmental Disorders, 49(1), 34–43. 10.1007/s10803-018-3680-0
  67. Zhou HY, Cui XL, Yang BR, Shi LJ, Luo XR, Cheung EF, Liu S, & Chan RC (2022). Audiovisual temporal processing in children and adolescents with schizophrenia and children and adolescents with autism: Evidence from simultaneity-judgment tasks and eye-tracking data. Clinical Psychological Science, 10(3), 482–498. 10.1177/21677026211031543
