Abstract
Differences in sensory function have been documented for a number of neurodevelopmental conditions, including reading and language impairments. Prior studies have measured audiovisual multisensory integration (i.e., the ability to combine inputs from the auditory and visual modalities) in these populations. The present study sought to systematically review and quantitatively synthesize the extant literature on audiovisual multisensory integration in individuals with reading and language impairments. A comprehensive search strategy yielded 56 reports, from which 38 were used to extract 109 group difference and 68 correlational effect sizes. There was an overall difference between individuals with reading and language impairments and comparison participants on audiovisual integration. There were nonsignificant trends towards moderation according to sample type (i.e., reading versus language) and towards publication/small-study bias for this model. Overall, there was a small but nonsignificant correlation between metrics of audiovisual integration and reading or language ability; this model was not moderated by sample or study characteristics, nor was there evidence of publication/small-study bias. Limitations and future directions for primary and meta-analytic research are discussed.
Keywords: sensory, audiovisual, multisensory integration, meta-analysis, systematic review, reading impairment, language impairment, developmental dyslexia, specific language impairment, developmental language disorder
Differences in sensory function have been documented for a number of neurodevelopmental conditions. These conditions include specific reading impairments, or conditions in which individuals show difficulties with word recognition, spelling, and/or decoding of printed text (hereafter referred to as reading impairments; e.g., dyslexia; Eden et al., 1996; Hämäläinen et al., 2012), and developmental language impairments, or differences in language use and/or understanding (e.g., phonology, morphology, syntax) not attributable to other conditions (hereafter referred to as language impairments; e.g., specific language impairment, developmental language disorder [DLD]; see Paul, 2020 for a discussion of terminology; e.g., Elliott & Hammer, 1988; Tallal et al., 1985). These two impairments commonly co-occur, given that they both involve difficulties with aspects of language, but they do have distinguishable features (e.g., Bishop & Snowling, 2004; Catts et al., 2005; McArthur et al., 2000).
Multisensory integration, or the ability to combine and respond to sensory input from multiple modalities (Murray et al., 2016), may play a key role in the development of cognitive-linguistic skills (Wallace et al., 2020). One aspect of multisensory integration that may be particularly important is the audiovisual integration of speech and language, which has both an auditory component (i.e., the voice) and a corresponding visual component (i.e., the face).
Within the context of other neurodevelopmental conditions (e.g., autism), it has been hypothesized that foundational differences in sensory function may have implications for the acquisition of “higher-level” abilities, such as language, communication, and social interaction (Cascio et al., 2016). To evaluate these “sensory first” hypotheses of autism (i.e., those that posit sensory differences occur early/first in development and are foundational; Robertson & Baron-Cohen, 2017), much work has been done to characterize audiovisual integration in autistic samples and to link those abilities with language and communication skills (see Feldman et al., 2018 for a review). However, these hypotheses have not yet been comprehensively evaluated in or applied to other populations; thus, at present it is unclear whether such sensory first hypotheses are specific to autism or perhaps a more general feature of neurodevelopmental conditions writ large.
Although a sensory first hypothesis has not been specifically posed for developmental language and reading impairments, many researchers have posited that there is a strong sensory or multisensory component to these conditions (e.g., Birch & Belmont, 1964; Morgan, 1896; Tallal, 1984). Given that language is a multisensory process and that reading relies on linking visual features (i.e., orthography) to auditory features (i.e., phonology), it is logical to propose that disruptions in audiovisual integration may underlie, or be present in, these populations. To that end, a growing body of work has assessed audiovisual integration abilities in individuals with developmental language and reading impairments.
To date, reviews on audiovisual multisensory integration in individuals with reading and language impairments have been published; however, they were selective in nature and did not include quantitative syntheses of summary effects (e.g., Dionne-Dostie et al., 2015; Hahn et al., 2014; Wallace & Stevenson, 2014). Given the diverse terms used to describe these conditions, the multidisciplinary nature of this work, and the piecemeal and discrepant nature of extant findings, there is a pressing need for a systematic review and quantitative synthesis. A systematic review of this literature will allow us to derive more precise estimates of summary-level effects and to carry out tests of moderation, in order to describe the specific multisensory differences associated with reading and language impairments and to allow researchers to better understand and treat such conditions. The present study thus sought to systematically review and meta-analyze the extant literature on audiovisual multisensory integration in individuals with reading and language impairments. The first aim of this study was to evaluate the extent to which audiovisual multisensory integration in individuals with reading and language impairments differs from that of their non-impaired peers in the extant literature. The second aim was to evaluate the extent to which audiovisual multisensory integration is associated with language and reading skills in these populations.
Methods
This study was carried out in accordance with recommended procedures for conducting systematic reviews and meta-analyses (i.e., the Preferred Reporting Items for Systematic Reviews and Meta-Analyses [PRISMA] guidelines; Moher et al., 2009; Page et al., 2021).
Eligibility Criteria
The goal of the present study was to synthesize information from all existing studies on the subject of audiovisual integration in individuals with developmental language and reading impairments. Thus, there were two primary inclusion criteria for eligible studies. First, participants had to have a diagnosis of either (a) a developmental language impairment (e.g., DLD, specific language impairment, primary language impairment) or (b) a developmental reading impairment (e.g., dyslexia). We chose to include studies with both disorders in our review because they commonly co-occur (e.g., Bishop & Snowling, 2004; Catts et al., 2005; McArthur et al., 2000) and because none of the studies included in the review utilized mutually exclusive diagnostic categories. Second, studies had to include either a behavioral or neural measure of audiovisual integration. We utilized a list of tasks tapping audiovisual integration and eligible effect sizes from Feldman et al. (2018) to ensure that included studies were properly assessing audiovisual multisensory integration; studies that observed the integration of other senses and studies that used parent report measures of sensory behavior (e.g., Sensory Experiences Questionnaire; Baranek et al., 2006; Sensory Profile; Dunn, 1999) were not considered. No exclusion criteria were implemented on the basis of participants’ spoken language, country of origin, or publication status. However, studies were excluded if they were published in a language other than English.
Search Strategy
To identify all eligible studies, a comprehensive search strategy was devised using the PsycINFO, PsycARTICLES, and ProQuest Dissertations & Theses Global databases available through the ProQuest search engine and the PubMed database. Following our previously developed methods (see Feldman et al., 2018), two blocks of search terms were developed: one block focused on multisensory integration and synonyms thereof, and one block focused on the specific tasks that assess multisensory integration (for the exact terms used in each search, see Table 1). Given the lack of consistently used terminology to describe the populations of interest, the search was not limited by terminology used to describe participant characteristics. The final primary literature search was carried out through April 8, 2022.
Table 1.
Search Terms Used in ProQuest Database Search
Term Block | Terms Used |
---|---|
Task | ti,ab((sensory OR multisensory OR temporal OR multimodal OR bimodal OR intermodal OR crossmodal OR scanning OR gaze OR eyegaze OR attention) AND (integration OR perception OR processing OR binding OR synchrony OR asynchrony OR speech OR face)) |
Audiovisual | ((visu* AND audi*) OR McGurk OR "flash-beep" OR (flash NEAR/5 beep) OR (incongruent NEAR/5 speech) OR (sound-induced flash illusion) OR audiovisual OR ((simultaneity OR temporal) NEAR/5 judg*) OR (pip NEAR/5 pop)) |
Grey literature searches included forward and backward citation searching of included studies and previously published review articles on this topic (i.e., Wallace et al., 2020) and contacting the first authors and corresponding authors of studies included in our qualitative review published within the past 15 years (i.e., since 2007).
Study Selection
Results of the searches were exported into abstrackr (Wallace et al., 2012), duplicate records were removed, and titles and abstracts were screened by two independent undergraduate student researchers from a team of seven to determine whether the studies met inclusion criteria. In cases where members of the research team disagreed, the studies in question were independently reviewed by the supervising research fellow.
Records considered for full-text review were imported into REDCap (Harris et al., 2009). During full-text review, exclusion criteria were considered in the following order:
1. duplicate not previously removed;
2. manuscript not written in English;
3. manuscript did not describe a quantitative study;
4. no eligible participants (i.e., did not report on participants with developmental reading and/or language impairments); and
5. no task assessing audiovisual multisensory integration.
Additionally, all dissertations that were later published and did not report additional effect sizes of interest were also excluded. All studies that met the aforementioned inclusion and exclusion criteria were included in the qualitative synthesis, regardless of whether an effect size of interest could be extracted.
Data Extraction
Articles that reported a group difference on a behavioral or neural metric of multisensory integration and/or a correlation between multisensory integration and either language/communication skill or reading skill were included in the quantitative analysis.
Extraction of Group Differences
Data were extracted from all eligible studies that reported a group difference between individuals with a language or reading impairment and comparison participants without reading or language impairments. Extracted effect sizes had to index an aspect of audiovisual multisensory integration (i.e., extracted effect sizes could not solely be considered a measure of audiovisual processing, such as reaction times to audiovisual stimuli or correct identification of audiovisual syllables without a unisensory contrast).
Some studies reported multiple effect sizes of interest for a single sample. Similarly, data from some samples were reported in multiple records. To handle statistically dependent effect sizes, robust variance estimation procedures were used (see Analyses). All nonoverlapping eligible effect sizes (i.e., effect sizes which differed in more than just the level at which they analyzed the data) were extracted from each study. Group differences were calculated as the effect size metric Cohen’s d (d) and then converted to the effect size metric used for analyses, Hedges’ g (g), to better account for differences in sample size and reduce bias (see Borenstein et al., 2009).
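The conversion from Cohen’s d to Hedges’ g described above can be sketched as follows. This is a Python illustration of the standard formulas (see Borenstein et al., 2009), not the authors’ actual analysis code; the function names are ours.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(d, n1, n2):
    """Shrink d by the small-sample correction factor J, which reduces
    the upward bias of d in small samples."""
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    return j * d
```

Because J is always slightly less than 1, g is always slightly smaller in magnitude than d, with the correction mattering most for the small samples (often n < 20 per group) common in this literature.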
Extraction of Correlations
Data were extracted from all studies that reported a correlation coefficient (r) between an index of audiovisual multisensory integration and a concurrent metric of language, communication, or reading ability. As robust variance estimation procedures were again used to handle statistically dependent effect sizes, all correlations of interest were extracted from each eligible study. Correlations were coded such that positive values indicated that better multisensory integration was associated with better outcomes (i.e., better language, communication, or reading skill) and negative values indicated that better multisensory integration was associated with worse outcomes (i.e., worse language, communication, or reading skill). Fisher transformations were then used to convert correlation coefficients (r) to the effect size metric Fisher’s z (z) using the Metafor package in R (Viechtbauer, 2010) to better account for differences in sample size and reduce bias (see Borenstein et al., 2009).
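The Fisher transformation referenced above is a standard variance-stabilizing transform; a minimal Python sketch (illustrative only — the authors used the metafor package in R, and the function names here are ours) is:

```python
import math

def fisher_z(r):
    """Fisher's r-to-z transformation: z = 0.5 * ln((1 + r) / (1 - r)),
    equivalent to atanh(r)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def fisher_z_variance(n):
    """The sampling variance of z depends only on sample size: 1 / (n - 3)."""
    return 1 / (n - 3)

def z_to_r(z):
    """Back-transform a summary z to the r scale for interpretation."""
    return math.tanh(z)
```

The back-transform is useful for reading results: a pooled Fisher’s z can be mapped back onto the familiar correlation scale before applying interpretive guidelines such as Cohen’s (1992).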
Missing Data
Data could not be extracted from all eligible reports (e.g., in instances wherein authors did not report statistics relevant to non-significant group differences or utilized reporting styles for effects of interest that did not permit extraction of the necessary effect size information). When this occurred for articles published within the last 15 years, we asked for the information we would need to derive an effect size when contacting authors as a part of our grey literature search (see Search Strategy). In some cases, scatterplots were used to extract effect sizes using WebPlotDigitizer (Rohatgi, 2020).
Data Coding
Each study was additionally coded for participant demographic and eligibility variables. The following demographic variables were extracted from each study, by group, whenever available: (a) the number of participants, (b) mean age, (c) percent of male participants, (d) average verbal IQ, (e) average nonverbal IQ, (f) average mental age equivalent, and (g) average language age equivalent. These variables were intended to be tested as moderators in meta-regression models if enough studies reported this information. Eligibility variables included (a) whether the sample was made up of individuals with reading impairments, language impairments, or both and (b) the reported IQ cutoff for study participants.
In addition to the participant variables, each study was coded for several aspects of task difficulty and task stimuli. These coding variables included: (a) the principle of multisensory integration assessed (i.e., temporal, spatial, inverse effectiveness; see Stein & Stanford, 2008); (b) the data collection method used (i.e., eyetracking, EEG/ERP, fMRI, psychophysics, other behavioral observation); (c) the task demands imposed (i.e., passive viewing, explicit measure of temporal perception, report of audiovisual illusion; adapted from Stevenson et al., 2016); (d) the specific task used; and (e) several dimensions of visual stimulus complexity (i.e., static visual vs. dynamic visual, face absent vs. present, other body part absent vs. present) and auditory stimulus complexity (i.e., speech absent vs. present, natural speech vs. synthesized speech, and whether speech stimuli were syllables, words, non-/pseudo-words, or sentences).
Analytic Plan
All analyses were conducted in R (R Core Team, 2022). As summarized above, Cohen’s d was extracted from all eligible studies that included a group difference and then converted to Hedges’ g. Correlational effect sizes were extracted as Pearson’s r and similarly converted to Fisher’s z. As several studies reported multiple effects of interest, effect sizes were analyzed using robust variance estimation (RVE) procedures according to the latest recommendations from Pustejovsky and Tipton (2022). Specifically, correlated and hierarchical effects (CHE) models were implemented using the clubSandwich (Pustejovsky, 2020) and metafor (Viechtbauer, 2010) packages to assess overall effect sizes and moderators of effect sizes. To assess possible publication bias across the extant literature, we conducted Egger’s regression test with cluster-robust variance estimation methods using the rma.mv function in the metafor package (i.e., the Egger MLMA approach described in Rodgers & Pustejovsky, 2021).
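As a conceptual illustration of the random-effects pooling underlying these models, the sketch below implements the classic univariate DerSimonian–Laird estimator in Python. This is a deliberate simplification: it ignores the within-study effect-size dependencies that the CHE + RVE approach is designed to handle, and the function name is ours.

```python
import math

def dersimonian_laird(effects, variances):
    """Univariate random-effects pooling: estimate between-study
    heterogeneity (tau^2) by the method of moments, then compute an
    inverse-variance weighted mean with weights 1 / (v_i + tau^2)."""
    k = len(effects)
    w = [1 / v for v in variances]
    fixed_mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q statistic measures observed dispersion around the mean
    q = sum(wi * (e - fixed_mean) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # truncate negative estimates at 0
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2
```

In the actual analyses, the CHE working model additionally assumes correlated sampling errors within study clusters, and cluster-robust (sandwich) standard errors protect inferences against misspecification of that correlation structure.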
Results
Study Selection
Of the 14,173 abstracts screened via primary and grey literature searches, 162 were selected for full text review (see Figure 1). Studies were excluded during full text review for the following reasons: (a) studies were not published in English (n = 1), (b) studies were duplicates not removed at an earlier stage of the review (n = 2), (c) studies did not have a quantitative analysis (n = 6), (d) studies did not include data on individuals with developmental language and/or reading impairments (n = 16), and (e) studies did not report findings for an eligible multisensory task (n = 81).
Figure 1.
Flow Chart of Search Results.
A total of 57 studies published in 56 reports met inclusionary criteria for our qualitative synthesis. Of those reports, we were able to extract effect sizes from 30 reports without any difficulties. When we contacted the authors of 19 reports included in our qualitative review but not our quantitative review as a part of our grey literature search (i.e., because they were published after 2007), we requested information that we could use to extract effect sizes. We received sufficient information to include an additional three reports in our quantitative analysis (Harrar et al., 2014; Kaganovich et al., 2016; van Laarhoven et al., 2018), along with additional effect sizes we could not initially extract from another report (Schaadt et al., 2019). Finally, we used WebPlotDigitizer (Rohatgi, 2020) to extract effect sizes from four other reports that otherwise would not have been included in our quantitative analysis (Blau et al., 2009, 2010; Kaganovich et al., 2014; Yang et al., 2020) and additional effect sizes from two other studies that were already included in the quantitative analysis (Chen et al., 2016; Kast et al., 2011). Thus, our final quantitative analyses included 39 studies published in 38 reports and grouped into 35 clusters based on overlapping samples (see Tables 2 and S1 for information about studies included in the quantitative and qualitative reviews, respectively).
Table 2.
Characteristics of Studies Included in Quantitative Analyses
Study | #ES (g) | #ES (r) | Type | Clinical n | Clinical prop male | Clinical M age | Comparison n | Comparison prop male | Comparison M age | Task(s) |
---|---|---|---|---|---|---|---|---|---|---|
Bastien-Toniazzo et al. (2010) | 3 | 0 | L | 10 | 0.67 | 9.98 | 20 | 0.25 | 9.99 | McGurk Effect |
Blau et al. (2009) † | 2 | 6 | R | 13 | 0.92 | 23.50 | 13 | 0.69 | 26.8 | Directed Watching (fMRI) |
Blau et al. (2010) † | 2 | 4 | R | 18 | 0.94 | 9.39 | 16 | 0.75 | 9.43 | Directed Watching (fMRI) |
Boliek et al. (2010) | 0 | 4 | C | 20 | NR | 10.13 | 20 | NR | 9.50 | McGurk Effect |
Chen et al. (2016) † | 2 | 4 | R | 17 | 0.35 | 9.96 | 20 | 0.40 | 10.16 | Temporal Order Judgement |
de Gelder and Vroomen (1998) | 1 | 0 | R | 14 | 0.79 | 11.40 | 28 | 0.64 | NR | McGurk Effect |
Drader (1973) | 1 | 0 | R | 20 | 1.00 | 11.04 | 20 | 1.00 | 10.83 | Multisensory Identification |
Francisco, Groen, et al. (2017) | 1 | 0 | R | 60 | 0.21 | 22.62 | 54 | 0.25 | 22.08 | McGurk Effect |
Froyen et al. (2011) | 0 | 8 | R | 16 | 0.75 | 11.05 | ?? | NA | NA | Directed Watching (EEG) |
Gelfand (2015) | 9 | 0 | L | 12 | 0.00 | 22.74 | 12 | 0.00 | 23.49 | Oddball Detection |
Gori et al. (2020) | 1 | 0 | R | 32 | 0.47 | 10.33 | 32 | 0.47 | 10.33 | Temporal Order Judgement |
Groen and Jesse (2013) | 4 | 0 | R | 20 | 0.80 | 13.87 | 18 | 0.83 | 13.59 | McGurk Effect |
Grondin et al. (2007) | 4 | 0 | L | 23 | 0.59 | 7.49 | 19 | 0.53 | 7.24 | Temporal Order Judgement |
Hairston et al. (2005) | 1 | 1 | R | 36 | 0.64 | NR | 29 | 0.55 | NR | Temporal Order Judgement |
Harrar et al. (2014) ‡ | 1 | 0 | R | 17 | 0.47 | 21.21 | 17 | 0.47 | 21.47 | Reaction Time |
Kaganovich Cluster | ||||||||||
Kaganovich et al. (2014)† | 2 | 2 | L | 15 | 0.80 | 9.00 | 15 | 0.73 | 9.08 | Simultaneity Judgement |
Kaganovich et al. (2016)‡ | 3 | 6 | L | 19 | 0.74 | 10.00 | 19 | 0.63 | 10.00 | Listening in Noise |
Kaganovich (2017) | 2 | 0 | L | 34 | 0.79 | 9.65 | 34 | 0.68 | 9.78 | Simultaneity Judgement |
Kaganovich et al. (2015) | 1 | 0 | L | 15 | 0.81 | 9.41 | 16 | 0.81 | 9.58 | McGurk Effect |
Kaltenbacher (2018) | 5 | 1 | R | 33 | 0.77 | 19.42 | 31 | 0.23 | 20.58 | McGurk Effect |
Kast et al. (2011) † | 1 | 1 | R | 12 | 0.75 | 26.10 | 13 | 0.77 | 26.30 | Directed Watching (fMRI) |
Laasonen et al. (2000) | 1 | 0 | R | 13 | 0.38 | 10.46 | 26 | 0.58 | 10.24 | Simultaneity Judgement |
Laasonen et al. (2002) | 2 | 0 | R | 16 | 0.25 | 26.94 | 16 | 0.44 | 25.13 | Temporal Order Judgement |
Leybaert et al. (2014) - Exp1 | 4 | 0 | R | 14 | 0.50 | 11.50 | 14 | 0.50 | 11.75 | Listening in Noise, McGurk Effect |
Leybaert Cluster | ||||||||||
Huyse et al. (2015) | 6 | 0 | L | 14 | 0.50 | 9.58 | 14 | 0.50 | 9.83 | McGurk Effect |
Leybaert et al. (2014) - Exp2 | 4 | 0 | R | 27 | 0.63 | 10.90 | 27 | 0.48 | 10.20 | Listening in Noise, McGurk Effect |
Liu et al. (2019) | 1 | 3 | R | 63 | 0.65 | 9.82 | 63 | 0.65 | 9.66 | Temporal Order Judgement |
McNorgan et al. (2013) | 0 | 4 | R | 13 | 0.54 | 11.00 | 13 | 0.54 | 11.00 | Directed Watching (fMRI) |
Megnin-Viggars and Goswami (2013) | 4 | 0 | R | 23 | 0.13 | 23.10 | 22 | 0.33 | 25.20 | Multisensory Identification |
Meronen et al. (2013) | 6 | 0 | L | 25 | 0.72 | 8.75 | 30 | 0.60 | 8.42 | Listening in Noise, McGurk Effect |
Meyler and Breznitz (2005) | 2 | 0 | R | 18 | 0.55 | NR | 19 | 0.53 | NR | Multisensory Identification |
Norrix et al. (2006) | 1 | 0 | L | 18 | 0.33 | 20.50 | 18 | 0.28 | 19.08 | McGurk Effect |
Norrix et al. (2007) | 1 | 0 | L | 28 | 0.57 | 4.66 | 28 | 0.57 | 4.66 | McGurk Effect |
Pons et al. (2013) | 8 | 0 | L | 20 | NR | 6.69 | 20 | NR | 6.72 | Undirected Watching |
Romanovska et al. (2021) | 1 | 0 | R | 22 | 0.43 | 9.40 | 22 | 0.43 | 9.10 | Multisensory Identification |
Rüsseler et al. (2018) | 1 | 0 | R | 11 | 0.85 | 24.40 | 13 | 0.85 | 27.40 | Incongruent AV Speech |
Schaadt et al. (2019) ‡ | 8 | 10 | R | 16 | 0.69 | 9.62 | 19 | 0.42 | 9.57 | Directed Watching |
van Laarhoven et al. (2018) ‡ | 8 | 4 | R | 15 | 0.53 | 10.20 | 15 | 0.53 | 10.20 | Listening in Noise |
Yang et al. (2020) † | 6 | 10 | R | 12 | 0.71 | 10.99 | 11 | 0.63 | 11.30 | Multisensory Identification |
Notes. #ES = number of effect sizes; Type = whether the clinical sample comprised individuals with reading impairments (R), language impairments (L), or both combined (C); Prop male = proportion of the sample that was male; NR = not reported.
† At least some effect sizes were derived using WebPlotDigitizer (Rohatgi, 2020). ‡ At least some effect sizes were obtained directly from the authors.
Qualitative Synthesis
Most of the studies included in the present review reported on general properties of multisensory integration (n = 31), including multisensory gain in accuracy and reaction times. The second largest group of studies reported on the temporal principle of multisensory integration (n = 22). The least studied principle was the inverse effectiveness principle of multisensory integration (n = 11). These groups are not mutually exclusive, as some studies reported on multiple properties of multisensory integration. Similar to our previous review on audiovisual integration in autism (Feldman et al., 2018), no studies included in this review reported on the spatial property of multisensory integration.
Most studies reported on samples from non-English speaking European countries, including the Netherlands (n = 11), Finland (n = 5), Germany (n = 4), Belgium (n = 3 French speaking samples), Switzerland (n = 2 German speaking samples), Austria (n = 1), France (n = 1), Norway (n = 1), Poland (n = 1), and Spain (n = 1). A large number of studies reported on English-speaking samples from North America (n = 14 United States; n = 3 Canada; n = 1 Unclear) and the United Kingdom (n = 3). The remaining studies focused on speakers of Chinese (n = 3) and Hebrew (n = 3).
This review indicates that a broad range of tasks have been used to assess audiovisual multisensory integration in individuals with reading and language impairments. The most frequently utilized tasks for assessing audiovisual multisensory integration were the McGurk effect (n = 15; i.e., Bastien-Toniazzo et al., 2010; Boliek et al., 2010; de Gelder & Vroomen, 1998; Francisco, Groen, et al., 2017; Francisco, Jesse, et al., 2017; Groen & Jesse, 2013; Kaganovich et al., 2015; Kaltenbacher, 2018; Leybaert et al., 2014; Meronen et al., 2013; Norrix et al., 2006, 2007; Ramirez, 2010) and listening-in-noise tasks (n = 8; Kaganovich et al., 2016; Leybaert et al., 2014; Meland, 2017; Meronen et al., 2013; Ramirez & Mann, 2005; Ramirez, 2010; Rüsseler et al., 2015; van Laarhoven et al., 2018; see Tables 2 and S1).
Notably, there were 10 studies that utilized fMRI to evaluate differences in multisensory integration that met our inclusionary criteria; all samples comprised individuals with reading impairments. However, due to the wide heterogeneity of their methods, it was not possible to conduct a meta-analysis of fMRI results (Costafreda, 2009). Three of the fMRI studies (Kast et al., 2011; Kronschnabel et al., 2014; Ye et al., 2017) evaluated differences in activation in response to audiovisual versus unisensory stimuli, and eight of the fMRI studies evaluated effects of audiovisual congruency (i.e., congruent versus incongruent written language and auditory pronunciations), including four (Kronschnabel et al., 2014; McNorgan et al., 2013; Rüsseler et al., 2015; Yang et al., 2020) that focused on words, two (Blau et al., 2009, 2010) that focused on syllables, one (Pekkola et al., 2006) that focused on vowels, and one (Yang et al., 2020) that focused on Chinese characters and pseudocharacters.
Quantitative Review
Synthesis of Group Differences
Thirty-five reports (38 studies grouped into 32 clusters based on overlapping samples) met inclusion criteria for our first research question (see Figure 1 and Table 2). Of these 32 clusters, 23 focused on individuals with reading impairments and nine focused on individuals with language impairments. As there were fewer than 10 clusters with effect sizes focused on individuals with language impairments, we aggregated all of the effect sizes together (Tanner-Smith & Tipton, 2014).
Overall, there was a significant summary effect, g = −0.74, 95% CI = [−1.04, −0.45], p < .001 (see Figures 2 and S1). This indicates a moderate to large difference in audiovisual multisensory integration in individuals with reading and language impairments and comparison participants in the extant literature. Samples of individuals with reading or language impairments presented with worse audiovisual multisensory integration, on average, compared to controls.
Figure 2.
Forest Plot of Group Difference Effect Sizes by Cluster. Note. #ES = number of effect sizes in the cluster. 95% CI represents the minimum and maximum values for the 95% confidence intervals of each effect size in the cluster. For a forest plot of all effect sizes, see Figure S1.
There was not a significant difference in effect sizes between samples of reading impaired individuals versus samples of language impaired individuals, B = 0.35, p = .278. There was, however, a non-significant trend towards greater differences in the language impaired samples (g = −0.95, p < .001) compared to the reading impaired samples (g = −0.60, p = .001). This trend may be in part due to the greater number of studies with extreme outliers focused on language impaired samples (n = 2, contributing 8 effect sizes) compared to the number of studies with extreme outliers focused on reading impaired samples (n = 1; contributing 2 effect sizes; see Figure S1).
Although there was significant heterogeneity in the model (Q = 1018.2, p < .001), we were only able to run a limited number of meta-regression analyses due to excessive missingness for moderators related to participant characteristics. Specifically, sample age and percent of male participants were the only coded putative moderators missing data for < 50% of clusters in the analysis. Neither of these participant characteristics significantly moderated group difference effect sizes (B = 0.01, p = .339 for mean sample age; B = −0.25, p = .757 for percent of male participants). There was a significant effect of whether the visual stimuli were static (e.g., flashes, letters, words) or dynamic (e.g., talking faces; B = −0.63, p = .048); effect sizes tended to be larger when stimuli were static. Results were not moderated by whether the audiovisual stimuli contained speech in each experiment (B = 0.51, p = .18). We were not sufficiently powered to run meta-regression models with our remaining categorical variables, as they were not binary.
The Egger MLMA test (Rodgers & Pustejovsky, 2021) indicated that there was limited evidence for publication bias across the extant literature, B = 1.54, p = .168 (see Figure 3). Although there are 10 effect sizes from 3 study clusters that appear to be extreme negative outliers (i.e., favoring comparisons) in our current study, these are at least somewhat balanced by the large number of effect sizes that are null or positive (i.e., favor the individuals with reading or language impairments).
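The logic of this funnel-plot asymmetry test can be sketched in its classic univariate form (a simplified stand-in for the cluster-robust Egger MLMA actually used; the function name is ours): regress the standardized effect on precision, and treat an intercept far from zero as a sign of small-study effects.

```python
def egger_regression(effects, ses):
    """Classic Egger test: regress standardized effects (effect / SE) on
    precision (1 / SE) via ordinary least squares. The intercept estimates
    funnel-plot asymmetry; the slope estimates the underlying effect."""
    y = [e / s for e, s in zip(effects, ses)]  # standardized effects
    x = [1.0 / s for s in ses]                 # precision
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope
```

When the observed effect does not vary systematically with study precision, the intercept is near zero, consistent with the limited evidence of bias reported here; the full MLMA approach additionally accounts for dependent effect sizes nested within study clusters.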
Figure 3.
Funnel Plot of Group Difference Effect Sizes.
Synthesis of Correlations
Sixty-eight effect sizes from 15 reports (14 clusters) were included in our quantitative synthesis of correlations. Of those 14 clusters, 11 comprised samples of individuals with reading impairments, two comprised samples of individuals with language impairments, and one was a combined sample. Overall, there was not a significant correlation between measures of multisensory integration and clinical outcomes across the samples, z = 0.134, p = .16 (see Figures 4 and S2). This represents a small overall effect size per Cohen’s (1992) guidelines for correlation interpretations.
Figure 4.
Forest Plot of Correlation Effect Sizes by Cluster. Note. #ES = number of effect sizes in the cluster. 95% CI represents the minimum and maximum values for the 95% confidence intervals of each effect size in the cluster. For a forest plot of all effect sizes, see Figure S2.
Although this number of effect sizes and clusters is sufficient to estimate the average effect size using RVE meta-analytic techniques, it is well below the recommended number of clusters and effect sizes for conducting meta-regression using RVE (Tanner-Smith & Tipton, 2014). Thus, for these moderation analyses we adjusted our alpha level (i.e., α = .01). Effect sizes for correlations of interest were not moderated by the type of sample (B = 0.06, p = .78), mean sample age (B = 0.06, p = .55), percent of male participants (B = −0.00, p = .86), or whether the stimuli utilized speech (B = 0.00, p = .98). There was no evidence for publication bias, B = −0.01, p = .50 (see Figure 5).
Figure 5.
Funnel Plot of Correlation Effect Sizes.
Discussion
This study comprehensively and quantitatively synthesized the literature on multisensory integration in individuals with reading and language impairments. The results indicate that (a) audiovisual multisensory integration is reduced, on average, in individuals with language and reading impairments relative to controls and (b) there is a small but nonsignificant association between audiovisual integration abilities and language, broader communication, and literacy skills in these clinical populations.
Individuals with Language and/or Reading Impairments Present with Differences in Audiovisual Integration
Results indicate a moderate group difference between individuals with reading and language impairments and the contrast groups included in past studies, such that samples of individuals with reading or language impairments presented with worse audiovisual multisensory integration, on average, compared to their peers. This summary effect did not differ based on whether the clinical samples in case-control studies comprised individuals with reading impairments versus individuals with language impairments. Results were moderated according to whether studies utilized dynamic or static visual stimuli, though it should be noted that many of the studies using static visual stimuli included letters, words, and/or pseudowords as their visual stimuli, and it is unclear how to reconcile this finding with the lack of moderation according to stimulus type (speech versus non-speech). Further, there was no moderation by samples’ mean age or biological sex (i.e., percent male).
These between-group differences are evident across a wide range of audiovisual multisensory integration tasks, in keeping with the data extraction methods employed in our previous review of audiovisual integration in autism (i.e., Feldman et al., 2018). Notably, though, many studies that were eligible for inclusion in this systematic review and synthesis and that reported group differences could not be included in the quantitative synthesis because effect sizes could not be extracted. Though our secondary search strategy of contacting authors and using WebPlotDigitizer (Rohatgi, 2020) increased the number of effect sizes included in our model and may have reduced the amount of publication and/or small study bias, it is possible that failing to include all of these reports in the meta-analysis may have influenced our results.
Audiovisual Integration is Not Significantly Correlated with Language and Literacy Skill
Findings indicate that audiovisual integration is not significantly associated with clinical features of language and reading impairments in the literature. However, the overall estimate (z = 0.134) did indicate that there was a small summary effect across primary studies in the literature, suggesting that our synthesis was possibly underpowered to detect an overall effect.
None of the tested study characteristics (i.e., average age of the sample, percent of male participants, whether the study utilized speech stimuli) significantly moderated the results relevant to correlations. Notably, though these results are likely not influenced by publication and/or small study bias, we had limited power to evaluate putative moderators of these effect sizes due to the very small number of clusters included in this analysis (Tanner-Smith & Tipton, 2014).
Future Directions and Limitations Related to the Primary Literature
This systematic review and meta-analysis points towards many directions for future research. Our qualitative review revealed a focus predominantly on the general benefits of multisensory integration (as indexed, for example, via faster reaction times and boosts in perceptual accuracy) and, to a lesser extent, on the temporal principle and principle of inverse effectiveness in individuals with reading and language impairments. As the included studies were all non-experimental in nature (reflecting results from case-control and/or concurrent correlational studies), further research should seek to evaluate whether the observed relations between aspects of audiovisual integration and reading or language ability are causal in nature. Furthermore, additional primary research is needed to evaluate the degree to which individuals with reading and language impairments display differences in their application of the spatial principle of multisensory integration and, if so, whether individual differences in spatial integration are associated features of reading and language impairments.
Our meta-analysis was notably limited by the number of effect sizes we could extract from eligible reports. To facilitate meta-analytic research, authors should report means and standard deviations for all variables and raw correlations between all variables, both within and across diagnostic groups. Additionally, we request that researchers report sample characteristics and study design elements in greater detail. Doing so will allow for more robust analyses of data in future syntheses, thereby advancing our understanding of audiovisual integration in individuals with reading and language impairments. In addition, authors should consider making data available to other researchers via a repository to facilitate meta-analytic research.
Though a large age range was represented across the primary studies included in this report, with participant ages ranging from childhood to adulthood, most samples to date have consisted of school-aged participants. Further work is thus necessary to classify, compare, and explore theorized correlations in individuals with reading and language impairments across a broader range of chronological ages. Particular emphasis should be placed on extending paradigms downward to assess children in infancy, toddlerhood, and the preschool period, when early language and emergent literacy skills may be most malleable and targeted interventions may be most effective. Additional work focused on older individuals with reading and language impairments is also needed; the oldest samples in the literature primarily consisted of young adults (i.e., individuals in their mid-twenties).
In regard to functional neuroimaging studies, there was great heterogeneity in the methods used to evaluate audiovisual integration in individuals with reading impairments, making it difficult to synthesize this literature. Future work should attempt to replicate and extend prior fMRI studies to allow for a quantitative synthesis of this literature. Additionally, no functional neuroimaging work to date has been done to assess audiovisual integration in individuals with language impairments, which would be necessary to evaluate the neural basis of alterations in response to multisensory stimuli in this population. This represents a promising avenue for future research.
Implications and Future Directions for Foundational Differences in Multisensory Integration
Given that audiovisual integration abilities differ in language and reading impairments, with a moderate to large effect across primary studies conducted to date, our results point towards differences in audiovisual integration as a feature of these neurodevelopmental conditions. However, more research is needed to evaluate the hypothesis that audiovisual integration difficulties are foundational to reading and language impairments. First, longitudinal research is needed to determine when differences in audiovisual integration emerge, whether these differences precede early markers of language and reading impairments, and whether these differences early in life predict language and reading abilities later in life. Although such longitudinal research may be challenging, given that reading and language impairments are not often diagnosed in the earliest stages of life, such work could possibly borrow from ongoing studies focused on infants at increased familial likelihood for autism (see Ozonoff et al., 2011) and prospectively follow infants and toddlers with first-degree relatives with language and reading impairments, given the high heritability of these conditions (Andreola et al., 2021; Hart et al., 2013; Rice et al., 1998, 2020; Tomblin, 1989).
Additionally, these results point towards multisensory integration as a potential general feature of neurodevelopmental conditions, rather than an autism-specific mechanism. It has been hypothesized that several developmental (e.g., Down syndrome, fragile X syndrome; D’Souza et al., 2016) and psychological disorders (e.g., schizophrenia; Ross et al., 2007; Zhou et al., 2018) may arise from foundational differences in multisensory integration. However, further primary and meta-analytic research is needed to firmly establish these links, as well as the precedence of multisensory integration difficulties.
Future Directions and Limitations Related to the Present Review
In addition to the limitations associated with the primary literature, there are several limitations specific to our meta-analysis. Although the use of robust variance estimation techniques allowed all effect sizes extracted from each sample cluster to be included in the present synthesis, neither model had the optimal number of clusters or mean outcomes per cluster for ideal meta-regression models (i.e., 40 clusters with 5 effect sizes per cluster; Tanner-Smith & Tipton, 2014). This suggested “ideal” does not reflect a minimum number necessary to employ this analytic approach; rather, it serves as an estimate of the number of clusters and effect sizes desired to capture and model the heterogeneity across populations of interest (in this case, reading and language impaired groups). The small-sample corrections and CHE models used in these analyses may have mitigated this limitation. It is also possible that a more focused meta-analysis of only a subset of multisensory tasks, as opposed to more varied measures of multisensory integration, could yield different results. Notably, such an approach would be challenging to implement at present, given the relatively limited primary literature on this topic that we were able to identify. It may also be, as is the case with all meta-analyses, that some relevant studies were not found as a part of the specified search strategy.
Further, it is possible that individual differences, both those tested in this report via meta-regressions using proxy factors (i.e., mean chronological age of the study sample as opposed to chronological age of individual participants) and those not tested in this manner, could be influencing findings across the extant literature. Our meta-regressions were limited by the information available in the published reports and our ability to contact authors. For example, we could not assess the influence of several putative moderators (e.g., IQ, language and/or mental age) because that information was not sufficiently reported; in fact, the mean age of participants and the proportion of male participants were the only participant characteristics reported in more than 50% of the included studies. An individual participant data meta-analysis may permit a more comprehensive assessment of the influence of participant-level factors on audiovisual multisensory integration.
Finally, it is notable that the majority of the studies in this review focused on non-English speakers. As no members of the study team spoke multiple languages with the fluency required to reliably extract relevant effect size and other study-level information from non-English reports, we could not systematically search for and evaluate that literature. It is, therefore, possible that additional reports published in languages other than English would have been eligible for this synthesis and helpful for estimating summary effects of interest.
Conclusion
Strong differences in audiovisual integration were evident in the literature on individuals with reading and language impairments, such that individuals with language and reading impairments presented with worse audiovisual integration abilities relative to comparisons. However, these differences in audiovisual integration were not significantly correlated with the clinical features evaluated, limiting our ability to draw conclusions regarding precisely how differences in audiovisual integration relate to language, communication, and literacy skill in these populations. Results indicate that differences in audiovisual integration are a feature of reading and language impairments, and perhaps point towards differences in audiovisual integration as a feature of neurodevelopmental disorders writ large. However, further primary and meta-analytic research is needed to more fully evaluate whether these differences in audiovisual integration may be causal in nature.
Supplementary Material
Highlights.
Our meta-analysis of audiovisual integration in individuals with reading and language impairments included 109 group difference and 68 correlational effect sizes from 38 reports.
Individuals with language and reading impairments, on average, demonstrate reduced audiovisual multisensory integration relative to comparison participants.
There was not a significant correlation between audiovisual integration and reading and language abilities across the extant literature.
Acknowledgments
The authors would like to thank Peter Abdelmessih, Margaret Cassidy, Yupeng Liu, and Emily Terrebonne for their assistance in screening abstracts for this review and all the authors of primary literature who responded to our emails and requests for information. This work was supported by the Vanderbilt Undergraduate Summer Research Program (awarded to Grace Pulliam), the Frist Center for Autism and Innovation at Vanderbilt University, NIH/NCATS TL1TR002244 (PI: Hartmann), and NIH/NIDCD R01DC020186 (PI: Woynaroski).
Data Availability
Data and analysis materials are available at https://osf.io/jtzr7/.
References
*Denotes study included in the qualitative review
- Andreola C, Mascheretti S, Belotti R, Ogliari A, Marino C, Battaglia M, & Scaini S (2021). The heritability of reading and reading-related neurocognitive components: A multi-level meta-analysis. Neuroscience & Biobehavioral Reviews, 121, 175–200. 10.1016/j.neubiorev.2020.11.016 [DOI] [PubMed] [Google Scholar]
- Baranek GT, David FJ, Poe MD, Stone WL, & Watson LR (2006). Sensory Experiences Questionnaire: Discriminating sensory features in young children with autism, developmental delays, and typical development. Journal of Child Psychology and Psychiatry, 47, 591–601. 10.1111/j.1469-7610.2005.01546.x [DOI] [PubMed] [Google Scholar]
- *Bastien-Toniazzo M, Stroumza A, & Cavé C (2010). Audio-visual perception and integration in developmental dyslexia: An exploratory study using the McGurk effect. Current Psychology Letters: Behaviour, Brain & Cognition, 25(3), 1–15. 10.4000/cpl.4928 [DOI] [Google Scholar]
- Birch HG, & Belmont L (1964). Auditory-visual integration in normal and retarded readers. American Journal of Orthopsychiatry, 34(5), 852–861. 10.1111/j.1939-0025.1964.tb02240.x [DOI] [PubMed] [Google Scholar]
- Bishop DVM, & Snowling MJ (2004). Developmental dyslexia and specific language impairment: Same or different? Psychological Bulletin, 130, 858–886. 10.1037/0033-2909.130.6.858 [DOI] [PubMed] [Google Scholar]
- *Blau V, Reithler J, van Atteveldt N, Seitz J, Gerretsen P, Goebel R, & Blomert L (2010). Deviant processing of letters and speech sounds as proximate cause of reading failure: A functional magnetic resonance imaging study of dyslexic children. Brain, 133(Pt 3), 868–879. 10.1093/brain/awp308 [DOI] [PubMed] [Google Scholar]
- *Blau V, van Atteveldt N, Ekkebus M, Goebel R, & Blomert L (2009). Reduced neural integration of letters and speech sounds links phonological and reading deficits in adult dyslexia. Current Biology, 19(6), 503–508. 10.1016/j.cub.2009.01.065 [DOI] [PubMed] [Google Scholar]
- *Boliek C, Keintz C, Norrix L, & Obrzut J (2010). Auditory-visual perception of speech in children with learning disabilities: The McGurk effect. Canadian Journal of Speech-Language Pathology and Audiology, 34(2), 124–131. https://www.cjslpa.ca/detail.php?ID=1020&lang=en [Google Scholar]
- Borenstein M, Hedges LV, Higgins JPT, & Rothstein HR (2009). Introduction to metaanalysis. Wiley. [Google Scholar]
- Cascio CJ, Woynaroski T, Baranek GT, & Wallace MT (2016). Toward an interdisciplinary approach to understanding sensory function in autism spectrum disorder. Autism Research, 9, 920–925. 10.1002/aur.1612 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Catts H,W, Adlof SM, Hogan TP, & Ellis Weismer S (2005). Are specific language impairment and dyslexia distinct disorders? Journal of Speech, Language, and Hearing Research, 48(6), 1378–1396. 10.1044/1092-4388(2005/096) [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Chen L, Zhang M, Ai F, Xie W, & Meng X (2016). Crossmodal synesthetic congruency improves visual timing in dyslexic children. Research in Developmental Disabilities, 55, 14–26. 10.1016/j.ridd.2016.03.010 [DOI] [PubMed] [Google Scholar]
- Cohen J (1992). A power primer. Psychological Bulletin, 112(1), 155–159. 10.1037/0033-2909.112.1.155 [DOI] [PubMed] [Google Scholar]
- Costafreda S (2009). Pooling fMRI data: Meta-analysis, mega-analysis and multi-center studies. Frontiers in Neuroinformatics, 3, Article 33. 10.3389/neuro.11.033.2009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- D’Souza D, D’Souza H, Johnson MH, & Karmiloff-Smith A (2016). Audio-visual speech perception in infants and toddlers with Down syndrome, fragile X syndrome, and Williams syndrome. Infant Behavior and Development, 44, 249–262. 10.1016/j.infbeh.2016.07.002 [DOI] [PubMed] [Google Scholar]
- *de Boer-Schellekens L, & Vroomen J (2014). Multisensory integration compensates loss of sensitivity of visual temporal order in the elderly. Experimental Brain Research, 232(1), 253–262. 10.1007/s00221-013-3736-5 [DOI] [PubMed] [Google Scholar]
- *de Gelder B, & Vroomen J (1998). Impaired speech perception in poor readers: Evidence from hearing and speech reading. Brain and Language, 64(3), 269–281. 10.1006/brln.1998.1973 [DOI] [PubMed] [Google Scholar]
- *Demark J (2004). Awareness of auditory-visual temporal synchrony by young children with autism or language delays [Doctoral dissertation, York University]. Toronto, Ontario, Canada. https://bac-lac.on.worldcat.org/oclc/66890989 [Google Scholar]
- Dionne-Dostie E, Paquette N, Lassonde M, & Gallagher A (2015). Multisensory integration and child neurodevelopment. Brain Sciences, 5(1), 32–57. [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Drader DL (1973). Multimodal sensory integration as a function of age and reading ability [Doctoral dissertation, Carleton University]. ProQuest Dissertations & Theses Global. https://www.proquest.com/docview/302631794?accountid=14816 [Google Scholar]
- Dunn W (1999). The Sensory Profile: User’s manual. Psychological Corporation. [Google Scholar]
- Eden GF, VanMeter JW, Rumsey JM, & Zeffiro TA (1996). The visual deficit theory of developmental dyslexia. NeuroImage, 4(3), S108–S117. 10.1006/nimg.1996.0061 [DOI] [PubMed] [Google Scholar]
- Elliott LL, & Hammer MA (1988). Longitudinal changes in auditory discrimination in normal children and children with language-learning problems. Journal of Speech and Hearing Disorders, 53(4), 467–474. 10.1044/jshd.5304.467 [DOI] [PubMed] [Google Scholar]
- Feldman JI, Dunham K, Cassidy M, Wallace MT, Liu Y, & Woynaroski TG (2018). Audiovisual multisensory integration in individuals with autism spectrum disorder: A systematic review and meta-analysis. Neuroscience & Biobehavioral Reviews, 95, 220–234. 10.1016/j.neubiorev.2018.09.020 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Francisco AA, Groen MA, Jesse A, & McQueen JM (2017). Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability. Learning and Individual Differences, 54, 60–72. 10.1016/j.lindif.2017.01.003 [DOI] [Google Scholar]
- *Francisco AA, Jesse A, Groen M, & McQueen JM (2017). A general audiovisual temporal processing deficit in adult readers with dyslexia. Journal of Speech, Language, and Hearing Research, 60(1), 144–158. 10.1044/2016_jslhr-h-15-0375 [DOI] [PubMed] [Google Scholar]
- *Froyen D, Willems G, & Blomert L (2011). Evidence for a specific cross-modal association deficit in dyslexia: An electrophysiological study of letter-speech sound processing. Developmental Science, 14(4), 635–648. 10.1111/j.1467-7687.2010.01007.x [DOI] [PubMed] [Google Scholar]
- *Gelfand HM (2015). Intersensory redundancy processing in adults with and without SLI University of California, San Diego; San Diego State University]. ProQuest Dissertations & Theses Global. https://search.proquest.com/docview/1753117846?accountid=14816&bdid=6869&_bd=0T0CtJ8IKNRRsuXhiz6sdUqobew%3D [Google Scholar]
- *González GF, Žarić G, Tijms J, Bonte M, & van der Molen MW (2017). Contributions of letter-speech sound learning and visual print tuning to reading improvement: Evidence from brain potential and dyslexia training studies. Brain Sciences, 7(1), Article 10. 10.3390/brainsci7010010 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Gori M, Ober KM, Tinelli F, & Coubard OA (2020). Temporal representation impairment in developmental dyslexia for unisensory and multisensory stimuli. Developmental Science, 23(5), Article e12977. 10.1111/desc.12977 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Groen M, & Jesse A (2013). Audiovisual speech perception in children and adolescents with developmental dyslexia: No deficit with Mcgurk stimuli. Proceedings of the International Conference of Audiovisual Speech Processing, 77–80. http://hdl.handle.net/2066/167372 [Google Scholar]
- *Grondin S, Dionne G, Malenfant N, Plourde M, Cloutier ME, & Jean C (2007). Temporal processing skills of children with and without specific language impairment. Canadian Journal of Speech-Language Pathology and Audiology, 31(1), 38–46. [Google Scholar]
- Hahn N, Foxe JJ, & Molholm S (2014). Impairments of multisensory integration and crosssensory learning as pathways to dyslexia. Neuroscience & Biobehavioral Reviews, 47, 384–392. 10.1016/j.neubiorev.2014.09.007 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Hairston WD, Burdette JH, Flowers DL, Wood FB, & Wallace MT (2005). Altered temporal profile of visual-auditory multisensory interactions in dyslexia. Experimental Brain Research, 166(3–4), 474–480. 10.1007/s00221-005-2387-6 [DOI] [PubMed] [Google Scholar]
- Hämäläinen JA, Salminen HK, & Leppänen PHT (2012). Basic auditory processing deficits in dyslexia: Systematic review of the behavioral and event-related potential/field evidence. Journal of Learning Disabilities, 46(5), 413–427. 10.1177/0022219411436213 [DOI] [PubMed] [Google Scholar]
- *Harrar V, Tammam J, Pérez-Bellido A, Pitt A, Stein J, & Spence C (2014). Multisensory integration and attention in developmental dyslexia. Current Biology, 24(5), 531–535. 10.1016/j.cub.2014.01.029 [DOI] [PubMed] [Google Scholar]
- Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, & Conde JG (2009). Research electronic data capture (REDCap) - A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics, 42(2), 377 – 381. 10.1016/j.jbi.2008.08.010 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hart SA, Logan JA, Soden-Hensler B, Kershaw S, Taylor J, & Schatschneider C (2013). Exploring how nature and nurture affect the development of reading: An analysis of the Florida Twin Project on reading. Developmental Psychology, 49(10), 1971–1981. 10.1037/a0031348 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Hayes EA, Tiippana K, Nicol TG, Sams M, & Kraus N (2003). Integration of heard and seen speech: A factor in learning disabilities in children. Neuroscience Letters, 351(1), 46–50. 10.1016/s0304-3940(03)00971-6 [DOI] [PubMed] [Google Scholar]
- *Huyse A, Berthommier F, & Leybaert J (2015). I don’t see what you are saying: Reduced visual influence on audiovisual speech integration in children with specific language impairment Joint Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing, Vienna, Austria. https://www.isca-speech.org/archive_v0/avsp15/papers/av15_022.pdf [Google Scholar]
- *Kaganovich N (2017). Sensitivity to audiovisual temporal asynchrony in children with a history of specific language impairment and their peers with typical development: A replication and follow-up study. Journal of Speech, Language, and Hearing Research, 60(8), 2259–2270. 10.1044/2017_jslhr-l-16-0327 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Kaganovich N, Schumaker J, & Christ S (2021). Impaired audiovisual representation of phonemes in children with developmental language disorder. Brain Sciences, 11(4), Article 507. 10.3390/brainsci11040507 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Kaganovich N, Schumaker J, Leonard LB, Gustafson D, & Macias D (2014). Children with a history of SLI show reduced sensitivity to audiovisual temporal asynchrony: An ERP study. Journal of Speech, Language, and Hearing Research, 57(4), 1480–1502. 10.1044/2014_JSLHR-L-13-0192 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Kaganovich N, Schumaker J, Macias D, & Gustafson D (2015). Processing of audiovisually congruent and incongruent speech in school-age children with a history of specific language impairment: A behavioral and event-related potentials study. Developmental Science, 18(5), 751–770. 10.1111/desc.12263 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Kaganovich N, Schumaker J, & Rowland C (2016). Atypical audiovisual word processing in school-age children with a history of specific language impairment: An event-related potential study. Journal of Neurodevelopmental Disorders, 8(1), Article 33. 10.1186/s11689-016-9168-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Kaltenbacher T (2018). Audiovisual speech perception in dyslexia. Salzburg Institute of Reading Research Publishing (SIRR). http://shorturl.at/jltEX [Google Scholar]
- *Kast M, Bezzola L, Jäncke L, & Meyer M (2011). Multi- and unisensory decoding of words and nonwords result in differential brain responses in dyslexic and nondyslexic adults. Brain and Language, 119(3), 136–148. 10.1016/j.bandl.2011.04.002 [DOI] [PubMed] [Google Scholar]
- *Kronschnabel J, Brem S, Maurer U, & Brandeis D (2014). The level of audiovisual print-speech integration deficits in dyslexia. Neuropsychologia, 62, 245–261. 10.1016/j.neuropsychologia.2014.07.024 [DOI] [PubMed] [Google Scholar]
- *Laasonen M, Service E, & Virsu V (2002). Crossmodal temporal order and processing acuity in developmentally dyslexic young adults. Brain and Language, 80(3), 340–354. 10.1006/brln.2001.2593 [DOI] [PubMed] [Google Scholar]
- *Laasonen M, Tomma-Halme J, Lahti-Nuuttila P, Service E, & Virsu V (2000). Rate of information segregation in developmentally dyslexic children. Brain and Language, 75(1), 66–81. 10.1006/brln.2000.2326 [DOI] [PubMed] [Google Scholar]
- *Lehn ER (1981). Cross-modal integration in a retarded reading population: A refinement [Doctoral dissertation, Brandeis University]. ProQuest Dissertations & Theses Global. https://www.proquest.com/docview/616662313?accountid=14816 [Google Scholar]
- *Leybaert J, Macchi L, Huyse A, Champoux F, Bayard C, Colin C, & Berthommier F (2014). Atypical audio-visual speech perception and McGurk effects in children with specific language impairment. Frontiers in Psychology, 5, Article 422. 10.3389/fpsyg.2014.00422 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Liu S, Wang L-C, & Liu D (2019). Auditory, visual, and cross-modal temporal processing skills among Chinese children with developmental dyslexia. Journal of Learning Disabilities, 52(6), 431–441. 10.1177/0022219419863766 [DOI] [PubMed] [Google Scholar]
- McArthur GM, Hogben JH, Edwards VT, Heath SM, & Mengler ED (2000). On the “specifics” of specific reading disability and specific language impairment. The Journal of Child Psychology and Psychiatry and Allied Disciplines, 41(7), 869–874. 10.1111/1469-7610.00674 [DOI] [PubMed] [Google Scholar]
- *McNorgan C, Randazzo-Wagner M, & Booth JR (2013). Cross-modal integration in the brain is related to phonological awareness only in typical readers, not in those with reading difficulty. Frontiers in Human Neuroscience, 7, Article 388. 10.3389/fnhum.2013.00388 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Megnin-Viggars O, & Goswami U (2013). Audiovisual perception of noise vocoded speech in dyslexic and non-dyslexic adults: the role of low-frequency visual modulations. Brain and Language, 124(2), 165–173. 10.1016/j.bandl.2012.12.002 [DOI] [PubMed] [Google Scholar]
- *Meland Z (2017). Eye movements during audiovisual speech perception with dyslexia [Master’s thesis, Norwegian University of Science and Technology]. https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/2449996 [Google Scholar]
- *Meronen A, Tiippana K, Westerholm J, & Ahonen T (2013). Audiovisual speech perception in children with developmental language disorder in degraded listening conditions. Journal of Speech, Language, and Hearing Research, 56(1), 211–221. 10.1044/1092-4388(2012/11-0270) [DOI] [PubMed] [Google Scholar]
- *Meyler A, & Breznitz Z (2005, May). Visual, auditory and cross-modal processing of linguistic and nonlinguistic temporal patterns among adult dyslexic readers. Dyslexia, 11(2), 93–115. 10.1002/dys.294 [DOI] [PubMed] [Google Scholar]
- Moher D, Liberati A, Tetzlaff J, Altman DG, & the PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. BMJ, 339, 1–8. 10.1136/bmj.b2535 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Morgan WP (1896). A case of congenital word blindness. British Medical Journal, 2(1871), 1378. 10.1136/bmj.2.1871.1378 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Murray MM, Lewkowicz DJ, Amedi A, & Wallace MT (2016). Multisensory processes: A balancing act across the lifespan. Trends in Neurosciences, 39, 567–579. 10.1016/j.tins.2016.05.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
- *Norrix LW, Plante E, & Vance R (2006). Auditory-visual speech integration by adults with and without language-learning disabilities. Journal of Communication Disorders, 39(1), 22–36. 10.1016/j.jcomdis.2005.05.003 [DOI] [PubMed] [Google Scholar]
- *Norrix LW, Plante E, Vance R, & Boliek CA (2007). Auditory-visual integration for speech by children with and without specific language impairment. Journal of Speech, Language, and Hearing Research, 50(6), 1639–1651. 10.1044/1092-4388(2007/111) [DOI] [PubMed] [Google Scholar]
- Ozonoff S, Young GS, Carter A, Messinger D, Yirmiya N, Zwaigenbaum L, Bryson S, Carver L, Constantino JN, Dobkins K, Hutman T, Iverson JM, Landa R, Rogers S, Sigman M, & Stone W (2011). Recurrence risk for autism spectrum disorders: A baby siblings research consortium study. Pediatrics, 128(3), e488–e495. 10.1542/peds.2010-2825 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R, Glanville J, Grimshaw JM, Hróbjartsson A, Lalu MM, Li T, Loder EW, Mayo-Wilson E, McDonald S, McGuinness LA, Stewart LA, Thomas J, Tricco AC, Welch VA, Whiting P, & Moher D (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. PLOS Medicine, 18(3), Article e1003583. 10.1371/journal.pmed.1003583 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Paul R (2020). Children’s language disorders: What’s in a name? Perspectives of the ASHA Special Interest Groups, 5(1), 30–37. 10.1044/2019_PERS-SIG1-2019-0012
- *Pekkola J, Laasonen M, Ojanen V, Autti T, Jääskeläinen IP, Kujala T, & Sams M (2006). Perception of matching and conflicting audiovisual speech in dyslexic and fluent readers: An fMRI study at 3T. NeuroImage, 29(3), 797–807. 10.1016/j.neuroimage.2005.09.069
- *Pons F, Andreu L, Sanz-Torrent M, Buil-Legaz L, & Lewkowicz DJ (2013). Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment. Journal of Child Language, 40(3), 687–700. 10.1017/S0305000912000189
- *Power AJ, Mead N, Barnes L, & Goswami U (2013). Neural entrainment to rhythmic speech in children with developmental dyslexia. Frontiers in Human Neuroscience, 7, Article 777. 10.3389/fnhum.2013.00777
- Pustejovsky JE (2020). clubSandwich: Cluster-robust (sandwich) variance estimators with small-sample corrections (Version 0.4.2) [R package]. https://github.com/jepusto/clubSandwich
- Pustejovsky JE, & Tipton E (2022). Meta-analysis with robust variance estimation: Expanding the range of working models. Prevention Science, 23(3), 425–438. 10.1007/s11121-021-01246-3
- R Core Team (2022). R: A language and environment for statistical computing (Version 4.2.1) [Computer software]. Vienna, Austria. https://www.R-project.org/
- *Ramirez J, & Mann V (2005). Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy. Journal of the Acoustical Society of America, 118(2), 1122–1133. 10.1121/1.1940509
- *Ramirez JC (2010). Towards a phonetic feature-based account of dyslexic speech perception deficits [Doctoral dissertation, University of California, Irvine]. https://www.proquest.com/docview/305188102?accountid=14816
- Rice ML, Haney KR, & Wexler K (1998). Family histories of children with SLI who show extended optional infinitives. Journal of Speech, Language, and Hearing Research, 41(2), 419–432. 10.1044/jslhr.4102.419
- Rice ML, Taylor CL, Zubrick SR, Hoffman L, & Earnest KK (2020). Heritability of specific language impairment and nonspecific language impairment at ages 4 and 6 years across phenotypes of speech, language, and nonverbal cognition. Journal of Speech, Language, and Hearing Research, 63(3), 793–813. 10.1044/2019_JSLHR-19-00012
- Robertson CE, & Baron-Cohen S (2017). Sensory perception in autism. Nature Reviews Neuroscience, 18, 671–684. 10.1038/nrn.2017.112
- Rodgers MA, & Pustejovsky JE (2021). Evaluating meta-analytic methods to detect selective reporting in the presence of dependent effect sizes. Psychological Methods, 26(2), 141–160. 10.1037/met0000300
- Rohatgi A (2020). WebPlotDigitizer (Version 4.3) [Computer software]. https://apps.automeris.io/wpd/
- *Romanovska L, Janssen R, & Bonte M (2021). Cortical responses to letters and ambiguous speech vary with reading skills in dyslexic and typically reading children. NeuroImage: Clinical, 30, Article 102588. 10.1016/j.nicl.2021.102588
- Ross LA, Saint-Amour D, Leavitt VM, Molholm S, Javitt DC, & Foxe JJ (2007). Impaired multisensory processing in schizophrenia: Deficits in the visual enhancement of speech comprehension under noisy environmental conditions. Schizophrenia Research, 97(1), 173–183. 10.1016/j.schres.2007.08.008
- *Rüsseler J, Gerth I, Heldmann M, & Münte TF (2015). Audiovisual perception of natural speech is impaired in adult dyslexics: An ERP study. Neuroscience, 287, 55–65. 10.1016/j.neuroscience.2014.12.023
- *Rüsseler J, Ye Z, Gerth I, Szycik GR, & Münte TF (2018). Audio-visual speech perception in adult readers with dyslexia: An fMRI study. Brain Imaging and Behavior, 12(2), 357–368. 10.1007/s11682-017-9694-y
- *Schaadt G, van der Meer E, Pannekamp A, Oberecker R, & Männel C (2019). Children with dyslexia show a reduced processing benefit from bimodal speech information compared to their typically developing peers. Neuropsychologia, 126, 147–158. 10.1016/j.neuropsychologia.2018.01.013
- *Sela I (2014). Visual and auditory synchronization deficits among dyslexic readers as compared to non-impaired readers: A cross-correlation algorithm analysis. Frontiers in Human Neuroscience, 8, Article 364. 10.3389/fnhum.2014.00364
- *Shaul S (2014). Visual, auditory and cross modal lexical decision: A comparison between dyslexic and typical readers. Psychology, 5(16), 1855–1869. 10.4236/psych.2014.516191
- Stein BE, & Stanford TR (2008). Multisensory integration: Current issues from the perspective of the single neuron. Nature Reviews Neuroscience, 9, 255–266. 10.1038/nrn2331
- Stevenson RA, Segers M, Ferber S, Barense MD, Camarata SM, & Wallace MT (2016). Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing. Autism Research, 9, 720–738. 10.1002/aur.1566
- Tallal P (1984). Temporal or phonetic processing deficit in dyslexia? That is the question. Applied Psycholinguistics, 5(2), 167–169. 10.1017/S0142716400004963
- Tallal P, Stark RE, & Mellits ED (1985). Identification of language-impaired children on the basis of rapid perception and production skills. Brain and Language, 25(2), 314–322. 10.1016/0093-934X(85)90087-2
- Tanner-Smith EE, & Tipton E (2014). Robust variance estimation with dependent effect sizes: Practical considerations including a software tutorial in Stata and SPSS. Research Synthesis Methods, 5(1), 13–30. 10.1002/jrsm.1091
- Tomblin JB (1989). Familial concentration of developmental language impairment. Journal of Speech and Hearing Disorders, 54(2), 287–295. 10.1044/jshd.5402.287
- *van Laarhoven T, Keetels M, Schakel L, & Vroomen J (2018). Audio-visual speech in noise perception in dyslexia. Developmental Science, 21(1), Article e12504. 10.1111/desc.12504
- Viechtbauer W (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36, 1–48. 10.18637/jss.v036.i03
- *Virsu V, Lahti-Nuuttila P, & Laasonen M (2003). Crossmodal temporal processing acuity impairment aggravates with age in developmental dyslexia. Neuroscience Letters, 336(3), 151–154. 10.1016/s0304-3940(02)01253-3
- Wallace BC, Small K, Brodley CE, Lau J, & Trikalinos TA (2012). Deploying an interactive machine learning system in an evidence-based practice center: abstrackr. Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium, 819–824. 10.1145/2110363.2110464
- Wallace MT, & Stevenson RA (2014). The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia, 64, 105–123. 10.1016/j.neuropsychologia.2014.08.005
- Wallace MT, Woynaroski TG, & Stevenson RA (2020). Multisensory integration as a window into orderly and disrupted cognition and communication. Annual Review of Psychology, 71, 193–219. 10.1146/annurev-psych-010419-051112
- *Yang Y, Yang YH, Li J, Xu M, & Bi H-Y (2020). An audiovisual integration deficit underlies reading failure in nontransparent writing systems: An fMRI study of Chinese children with dyslexia. Journal of Neurolinguistics, 54, Article 100884. 10.1016/j.jneuroling.2019.100884
- *Ye Z, Rüsseler J, Gerth I, & Münte TF (2017). Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia. Neuroscience, 356, 1–10. 10.1016/j.neuroscience.2017.05.017
- Zhou HY, Cai XL, Weigl M, Bang P, Cheung EFC, & Chan RCK (2018). Multisensory temporal binding window in autism spectrum disorders and schizophrenia spectrum disorders: A systematic review and meta-analysis. Neuroscience & Biobehavioral Reviews, 86, 66–76. 10.1016/j.neubiorev.2017.12.013
Data Availability Statement
Data and analysis materials are available at https://osf.io/jtzr7/.