Language, Speech, and Hearing Services in Schools. 2022 Jan 10;53(2):454–465. doi: 10.1044/2021_LSHSS-21-00054

Is Bilingual Receptive Vocabulary Assessment via Telepractice Comparable to Face-to-Face?

Anny Castilla-Earls,a Juliana Ronderos,b Autumn McIlraith,c and Damaris Martineza
PMCID: PMC9549969  PMID: 35007430

Abstract

Purpose:

This study investigated the effect of delivery method (face-to-face or telepractice), time, home language, and language ability on bilingual children's receptive vocabulary scores in Spanish and English.

Method:

Participants included bilingual children with (n = 32) and without (n = 57) developmental language disorders (DLD) who were assessed at two time points approximately 1 year apart. All children participated in face-to-face assessment at Time 1. At Time 2, 41 children were assessed face-to-face and 48 children were assessed via telepractice.

Results:

Delivery method was not a significant predictor of receptive scores in either Spanish or English. Spanish and English receptive vocabulary increased over time in both children with and without DLD. Children with DLD had lower receptive vocabulary raw scores than children with typical development. Children who spoke English-only at home had significantly higher English receptive scores than children who spoke Spanish-only or both Spanish and English at home.

Conclusions:

Face-to-face and telepractice assessments appear to be comparable methods for the assessment of Spanish and English receptive vocabulary skills. Spanish and English receptive vocabulary skills increased over time in children with and without DLD.

Supplemental Material:

https://doi.org/10.23641/asha.17912297


Bilingual children in the United States are a heterogeneous group with diverse cultural and linguistic backgrounds. Most of these bilingual children primarily speak Spanish at home and learn English at school (United States Census Bureau, 2015). Bilingual families often encounter barriers to accessing face-to-face speech-language services for their children. One of these barriers is the shortage of bilingual language service providers: approximately 6% of all speech-language pathologists (SLPs) meet the American Speech-Language-Hearing Association (ASHA) requirements to be a bilingual service provider (ASHA, 2019; United States Census Bureau, 2015). A second barrier is accessibility. For example, minority groups, including those who identify their ethnicity as Hispanic or Latino and who speak Spanish at home, disproportionately live in low-income areas with limited access to speech and language clinics (ASHA, 2019; United States Census Bureau, 2015). For bilingual families who live in urban areas, access to transportation and child care also represents a barrier to accessing services. Consequently, telepractice has been proposed as a model to increase accessibility to speech and language services for rural and underserved communities and to allow the inclusion of minority populations in research studies (ASHA, 2011).

Telepractice is the use of telecommunication technology (audio and/or video) to deliver speech-language pathology services (assessment, intervention, and/or consultation) remotely when the clinician and client are in two different physical locations (ASHA, 2020). ASHA has adopted the term "telepractice" rather than related terms such as "telemedicine" and "telehealth" to avoid the implication that these services are available only in health care and medical settings (ASHA, 2020). Telepractice has been promoted as a model with the potential to improve access to services, but authorization under Medicaid and private insurance has been slow, limiting widespread adoption of telepractice models in the United States (Cusack et al., 2008).

Telepractice also has its own barriers. For example, a barrier for low-income communities is access to broadband and technology. Although 85% of Americans own a smartphone and 77% have home broadband subscriptions, these numbers are significantly lower for low-income families (Perrin, 2021). Other barriers for access to telepractice include the quality of the broadband and the equipment, training available for SLPs, electronic illiteracy, and preference for face-to-face services among others (Hodge et al., 2019; May & Erikson, 2014; Moffat & Eley, 2011; Sutherland et al., 2016). For bilingual communities, barriers similar to those of monolingual communities exist. For example, access to broadband and technology and hesitancy about the use of technology were reported as the most common barriers to telepractice by Spanish–English families (Khoong et al., 2020). However, a barrier that seems to be unique to bilingual communities is the potential need to communicate in English to complete a telepractice session. In Khoong et al. (2020), adults who preferred to communicate in Spanish were more likely to need help during telepractice sessions than adults who preferred either Spanish or English. Importantly, the limited evidence of telepractice efficacy is also documented as a challenge for telepractice implementation in both monolingual and bilingual populations (Mashima & Doarn, 2008).

Over the past decade, evidence supporting the use of telepractice as an approach comparable to face-to-face services has started to emerge in the literature. For example, Sutherland et al. (2016, 2017) examined the use of telepractice services as a way to administer standardized language assessments in rural and remote areas of Australia. In a study with 23 children ages 8–12 years, children were administered four subtests of the Clinical Evaluation of Language Fundamentals–Fourth Edition (CELF-4; Semel et al., 2003) via telepractice. The subtests selected from the CELF-4 included Concepts and Following Directions, Word Structure, Recalling Sentences, and Formulated Sentences, which evaluate both expressive and receptive skills. Children's responses to the telepractice assessment were scored by the telepractice SLP and by a second SLP who was present (face-to-face) during the telepractice evaluation. All of the sessions were successfully completed, with overall audio and video quality rated as "good" for over 70% of the sessions. Furthermore, there were strong correlations (.96–1.0) between the scoring of the subtests by the telepractice and face-to-face SLPs. Similarly, Waite et al. (2010) administered subtests of the CELF-4 via telepractice and face-to-face to 25 children ages 5–9 years. In this study, sessions were set up between two rooms: one examiner sat in the room with the child (face-to-face SLP), and another sat in an adjoining room connected via real-time videoconferencing technology (telepractice SLP). Each participant was assigned to either a face-to-face-led or a telepractice-led condition, in which either the face-to-face SLP or the telepractice SLP administered and scored the assessments while the other SLP simultaneously scored them as an observer. Waite et al. (2010) found no significant differences between telepractice and face-to-face raw and scaled scores, with very good intra- and interrater reliability (intraclass correlation [ICC] range: .84–1.0). In the United States, Ciccia et al. (2011) administered the Screening Kit of Language Development (Bliss & Allen, 1983), which screens both expressive and receptive language skills, to 10 children in inner-city areas of Cleveland. This study included participants from minority and low-income families. The screening was simultaneously scored by a face-to-face and a telepractice SLP, with 100% agreement in screening outcomes (pass/fail) between the two conditions. These results suggest that conducting standardized assessments, such as the CELF-4, via telepractice is feasible, reliable, and comparable to conducting them in person. However, little research exists to provide evidence of the equivalence of standardized assessments delivered via telepractice in populations with cultural and linguistic differences.

As a result of the COVID-19 pandemic, there has been increased interest in the use of telepractice to provide speech-language services and for research purposes. For example, before March 2020, all research studies conducted in our laboratory at the University of Houston were conducted face-to-face, either at the children's school or home or in our laboratory. The lockdowns resulting from the COVID-19 pandemic forced us to quickly adapt to the use of telepractice for assessment purposes. One of the biggest challenges for our laboratory during the transition from face-to-face to telepractice was the standardized assessment of receptive skills, particularly vocabulary.

The standardized assessment of receptive skills often requires the child to respond by pointing to the correct answer among several options. For example, during the administration of the Peabody Picture Vocabulary Test–Fourth Edition (PPVT-4; Dunn & Dunn, 2007), the examiner says a word, and the child points to the picture that best represents this word among four options on an easel. During face-to-face assessment, the examiner observes the child pointing and records the response. During telepractice, the process of pointing to the picture corresponding to the word said by the examiner is more complicated because the camera captures the child and not the location where the child is pointing. Although some computers have touch screens and/or allow cursor (pointer) sharing, these are not default options. Therefore, assessment tasks that require pointing pose an additional barrier for the assessment of receptive vocabulary using telepractice (Waite et al., 2010). Other barriers to the administration of standardized assessments to children using telepractice include potential difficulties with behavior management and the child's level of engagement, as well as the potential effects of modifying standard procedures (Haaf et al., 1999; Hodge et al., 2019).

One of the components of child language assessment is the evaluation of vocabulary knowledge. Vocabulary knowledge is an important predictor of reading comprehension and academic achievement (Davison et al., 2011; Snow, 1991). Importantly for this study, children with developmental language disorders (DLD) often have vocabulary delays and show consistent deficits in word learning compared to their typically developing peers (Bishop et al., 2017; Gray, 2004; Gray et al., 1999). Although standardized receptive vocabulary tests on their own are not appropriate for the identification of language disorders (see Anaya et al. [2018] and Gray et al. [1999]), they can provide valuable information about receptive vocabulary knowledge. For English monolingual children with language disorders, vocabulary growth tends to follow a trajectory similar to that of their typically developing peers, although they do not catch up with those peers over time (Rice & Hoffman, 2015). In bilingual children, vocabulary knowledge is distributed across languages: they might know the words for some concepts in one of their languages and for other concepts in the other, depending on context (Bialystok et al., 2010; Patterson, 1998). Bilingual children's combined vocabulary may be equivalent to or greater than the vocabulary of monolinguals, but it may appear significantly smaller when only one language is assessed in a specific context. It is therefore crucial that the evaluation of vocabulary in bilingual children include both of the child's languages to avoid underestimating vocabulary knowledge.

There is considerable variation in vocabulary outcomes in English and in Spanish for bilingual children. Bilingual children in the United States tend to live in low-income areas, and their parents tend to have low levels of education, both of which are factors associated with low vocabulary outcomes in children (e.g., Calvo & Bialystok, 2014; Pace et al., 2017). Other factors, such as language use at home and home literacy resources, have also been found to have an effect on vocabulary outcomes. For example, increased use of English at home was found to have no significant impact on bilingual children's English vocabulary in preschool but was found to slow the growth of Spanish vocabulary (Hammer et al., 2009). In another study, English vocabulary was positively related to mothers' English literacy practices, but these same practices were negatively associated with Spanish vocabulary scores (Quiroz et al., 2010). Furthermore, Spanish use at home was positively related to Spanish vocabulary outcomes. Similarly, mothers' English use was positively related to children's English language outcomes and negatively related to their Spanish language outcomes in a study with more than 1,000 bilingual children (Branum-Martin et al., 2014). These findings suggest that language practices at home directly contribute to bilingual children's vocabulary skills in both of their languages but that they impact each language differently.

This Study

The COVID-19 pandemic significantly increased the use of telepractice for language assessments, creating a potential paradigm shift in how we deliver assessment services to children for both clinical and research purposes. However, little is known about potential differences between the results of receptive vocabulary assessments conducted virtually via telepractice and those conducted face-to-face. Furthermore, although a handful of telepractice studies have included underserved and remote communities, there is a gap in studies evaluating bilingual children in both of their languages via telepractice. The purpose of this study is to investigate the potential effect of delivery method (face-to-face or telepractice) on children's receptive vocabulary scores in Spanish and English. This study also examines the potential impact of time, home language, and language ability on bilingual children's receptive vocabulary scores in Spanish and English. We ask: What is the effect of delivery method (face-to-face vs. telepractice), time, home language (Spanish, English, or Both), and diagnosis classification (typical language development [TD] vs. DLD) on bilingual children's Spanish and English receptive vocabulary scores?

Method

Participants

Children in this study participated in a bilingual longitudinal study of Spanish and English skills in children with and without DLD. We selected children for inclusion in this study if (a) they completed Spanish and English receptive vocabulary assessments at two points in time (Time 1 and Time 2), (b) they completed the Bilingual English-Spanish Assessment (BESA; Peña et al., 2018) or the Bilingual English-Spanish Assessment–Middle Extension (BESA-ME; Peña et al., 2020), (c) they passed a hearing screening, (d) they obtained a score of 70 or above on the nonverbal scale of the Kaufman Brief Intelligence Test–Second Edition (Kaufman & Kaufman, 2004), and (e) their parents reported the language spoken at home using a parent questionnaire. Participants in this study therefore included 89 bilingual children (49 boys, 40 girls) who were on average 5 years 9 months of age (SD = 12 months; range: 47–98 months) at the onset of the study. Thirty-two of these children were identified as having DLD using the standard score in the best language on the BESA or BESA-ME Morphosyntax Subtest and age-derived cutoff scores. The Morphosyntax Subtest of the BESA/BESA-ME uses a cloze task and a sentence repetition task to target a variety of grammatical morphemes and sentence structures. This subtest has been shown to discriminate children with DLD in English and Spanish with appropriate diagnostic accuracy using the best language score (BESA: 90%–96% sensitivity and 83%–90% specificity depending on age, Peña et al., 2018; BESA-ME: preliminary information suggests 89% specificity and 90% sensitivity for second graders, Peña et al., 2020). All children participated in face-to-face data collection at Time 1. At Time 2, 41 children participated in face-to-face data collection, and 48 children completed telepractice assessments. Parents reported that all children in this study were exposed to both Spanish and English. At home, 45 children spoke Spanish-only, eight spoke English-only, and 36 spoke both Spanish and English. At school, 78 children were in Spanish-English bilingual/dual language classrooms, five were in English-only classrooms, five attended English-only classrooms during the week and Spanish Saturday school, and one was in Spanish immersion. This study was approved by the University of Houston's Institutional Review Board (IRB). Parents provided informed consent to participate in the longitudinal study at the beginning of Year 1, and children provided verbal assent before participating in any of the study sessions.

Measures

Two measures of receptive vocabulary were used in this study: the PPVT-4 (Dunn & Dunn, 2007) and the Test de Vocabulario en Imágenes Peabody (TVIP; Dunn et al., 1986). The purpose of the PPVT-4 is to examine the comprehension of spoken English words. This test was normed with an English monolingual sample that provided a close representation of the U.S. population with respect to age, gender, race/ethnicity, socioeconomic status (education level), and geographic region. The norming sample for the PPVT-4 was also administered the Expressive Vocabulary Test, Second Edition (Williams, 2007). These two tests were significantly correlated, providing support for the construct validity of the PPVT-4 in assessing vocabulary. Test–retest reliability for the PPVT-4 was .93 (range: r = .92–.96). The PPVT-4 was also administered to children with language disorders, and these children obtained significantly lower receptive vocabulary scores. The TVIP is a Spanish version of the PPVT, developed by the same group of researchers, using items and a methodology similar to those used to construct the PPVT. The purpose of the TVIP, like the PPVT, is to measure comprehension of spoken Spanish words. This test was standardized with Puerto Rican and Mexican Spanish monolingual children. For the report of standard scores, we used the Hispanic American composite norm that combines data from both groups. Because both of these measures were normed with monolingual children, neither is a perfect measure of vocabulary ability in bilingual children. However, we use these measures in our studies to allow comparisons to other studies in our laboratory that include monolingual and bilingual children with and without DLD (e.g., Castilla-Earls et al., 2020, 2021). For all analyses in this study, we use raw scores instead of standard scores to estimate growth in receptive vocabulary skills.

A parent questionnaire was also used in this study to gather information about the children and their parents, the language practices at home in each language, and language concerns about their children. This questionnaire included the questions in Restrepo (1998) and questions about language practices at home (Ronderos et al., 2021). In this study, we used the information provided in this questionnaire about language use at home and the level of education of the mother. Parents are asked to identify the language use pattern that is the closest to their family among three options: (a) At home, we speak English-only; (b) at home, we speak Spanish-only; and (c) at home, we speak Spanish and English. Parents are also asked to identify the highest level of education of the mother among six options: (a) elementary school, (b) high school, (c) some college, (d) associate degree, (e) bachelor's degree, and (f) graduate degree.

Procedure

At the onset of the study (Time 1), all children participated in face-to-face assessment. Data collection for Year 2 was ongoing when the COVID-19 pandemic forced research studies to transition to telepractice. Therefore, at Time 2, some children completed face-to-face assessment (pre-COVID-19) and some children completed telepractice assessment (post-COVID-19). Although the interval between Time 1 and Time 2 was originally designed to be approximately 1 year, there is a time lag of about 5 months on average for children tested using telepractice. This time lag was due to the process of moving all testing online, including modifications to IRB protocols, procurement of necessary hardware and software, training of research assistants, and contacting participants.

For all face-to-face assessments, testing sessions took place at the children's home or school or at our research laboratory at the University of Houston. Prior to testing, parents completed the parent questionnaire. To test children's Spanish and English language abilities, we assessed them in two sessions, one Spanish-only and one English-only. The assessments were administered separately by two trained research assistants who were native speakers of the target session language. The Spanish testing session included the assessment of language ability (BESA or BESA-ME), receptive vocabulary (TVIP), a language sample, a morphological development elicitation task, a child questionnaire, and a sentence repetition task. The English testing session included the administration of the BESA or BESA-ME, receptive vocabulary (PPVT), a language sample, and a sentence repetition task. The language of the testing sessions and the tasks within a session were administered in randomized order. The telepractice procedures were identical to the face-to-face procedures with two exceptions: (a) the assessments were delivered via Zoom (Zoom Video Communications, 2020) using a University of Houston-approved account, and (b) instead of pointing, children were asked to say the number of the image/word they selected. Identifying the number of the picture is one of the acceptable responses for the administration of the PPVT, but in face-to-face assessments, most children used pointing. Both the PPVT and TVIP include small numbers below each picture in the quadrants, but to make them more visible to the children, bigger numbers were added next to each picture. We used an IPEVO ultra-high-definition document camera to show the images on the PPVT and TVIP, to make the telepractice experience as close as possible to the face-to-face experience (e.g., turning pages) and to follow copyright guidelines. Because some children did not have access to hardware and/or Internet service, we created a loan program in which we delivered computers equipped with headphones/microphones and hotspots to families who needed these resources to participate in the study. For this study, we only conducted analyses on the raw scores of the PPVT and TVIP.

Analytic Approach

To examine the effect of delivery method (face-to-face or telepractice), home language, language ability, and time on bilingual children's receptive vocabulary scores in Spanish and English, we estimated a series of linear mixed models for Spanish and English vocabulary (raw scores) separately. Models were estimated in R (Version 4.0.3; R Core Team, 2020) using the packages lme4 (Bates et al., 2015) and lmerTest (Kuznetsova et al., 2017). Observations (Level 1) were nested within subjects (Level 2). Restricted maximum likelihood estimation was used to obtain the reported model estimates. To obtain Akaike information criterion (AIC) and Bayesian information criterion (BIC) values for model comparison, the models were re-estimated using maximum likelihood estimation. Although random intercepts were estimated for all models, random slopes were not estimated due to the small number of data points per subject. Chronological age in months, centered at 48 months, was used as the metric for time. First, we estimated unconditional (Model 1) and unconditional growth (Model 2) models. Second, we added predictors individually and in combination (Models 3–7). Predictors included delivery method (Time 1 and Time 2 face-to-face, or Time 1 face-to-face and Time 2 telepractice, with the first value used as the reference group in Models 3, 5, 6, and 7), home language (English-only, Spanish-only, or both English and Spanish, with both English and Spanish used as the reference group in Models 4, 5, and 7; to provide an estimate of the difference between English-only and Spanish-only, we recoded the home language variable using English-only as the reference group), and diagnosis classification (TD or DLD, with TD used as the reference group in Models 6 and 7). Third, we tested interactions between delivery method and diagnosis classification, delivery method and home language, delivery method and time, and diagnosis classification and time.
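To make the model specification concrete, the sketch below shows how models of this form can be fit with lme4 and lmerTest in R. The data frame and variable names (d, score, age_months, delivery, home_lang, dx, child) are illustrative placeholders rather than the variables used in the actual analysis.

```r
# Minimal sketch of the mixed models described above (placeholder variable names).
library(lme4)
library(lmerTest)  # adds p values for fixed effects to lmer() summaries

d$age_c <- d$age_months - 48                               # age in months, centered at 48
d$home_lang <- relevel(factor(d$home_lang), ref = "Both")  # "Both" as reference group

m1 <- lmer(score ~ 1     + (1 | child), data = d, REML = TRUE)  # Model 1: unconditional
m2 <- lmer(score ~ age_c + (1 | child), data = d, REML = TRUE)  # Model 2: unconditional growth

# Conditional models add predictors individually and in combination (Models 3-7),
# e.g., delivery method alone, and the full model:
m3 <- lmer(score ~ age_c + delivery + (1 | child), data = d, REML = TRUE)
m7 <- lmer(score ~ age_c + delivery + home_lang + dx + (1 | child), data = d, REML = TRUE)

# Interactions are tested by adding product terms, e.g., delivery x time:
m3_int <- lmer(score ~ age_c * delivery + (1 | child), data = d, REML = TRUE)

# The English-only vs. Spanish-only contrast is obtained by recoding the
# reference group and re-estimating:
d$home_lang2 <- relevel(d$home_lang, ref = "English-only")

# For AIC/BIC comparison, models are re-estimated with maximum likelihood:
AIC(update(m2, REML = FALSE), update(m7, REML = FALSE))
BIC(update(m2, REML = FALSE), update(m7, REML = FALSE))
```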

Results

Descriptive statistics for Spanish and English receptive vocabulary by delivery method, time, home language, and diagnosis classification are presented in Table 1. For Spanish vocabulary, in the unconditional growth model, the ICC was 0.63 after accounting for the effect of time. This indicated that a substantial amount of variance was between subjects and that the random intercept term for subject should be retained in the model. There was a significant fixed effect of time, such that Spanish vocabulary was predicted to increase by 0.74 points (raw score) for every 1-month increase in age. A series of conditional models was estimated next (see Table 2). No statistically significant effect was found for delivery method or for home language, either alone or in models with any other predictors. Diagnosis classification was a significant predictor, such that children with DLD were predicted to score 10.5 points lower (raw score) than TD children. Delivery method did not significantly interact with time (γ = −0.03, SE = 0.16, p = .86, added to Model 3), diagnosis classification (γ = −3.05, SE = 4.96, p = .54, added to Model 6), or home language (delivery by home language: Both vs. English, γ = 15.30, SE = 12.61, p = .23; delivery by home language: Both vs. Spanish, γ = 12.00, SE = 6.46, p = .07, added to Model 5). Diagnosis classification did not significantly interact with time either (γ = 0.08, SE = 0.15, p = .59, added to Model 6).
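For reference, the reported ICC can be reproduced directly from the variance components of the unconditional growth model in Table 2 (Model 2); the short sketch below illustrates the calculation.

```r
# Intraclass correlation from the variance components of the unconditional
# growth model (numbers taken from Table 2, Model 2, Spanish vocabulary).
var_child    <- 162.32  # random intercept (between-child) variance
var_residual <- 95.62   # residual (within-child) variance
var_child / (var_child + var_residual)  # ~0.63
# The analogous calculation for English (Table 3, Model 2):
# 853.20 / (853.20 + 119.30)            # ~0.88

# With a fitted lmer model (e.g., m2 from the sketch above), the same value
# can be obtained from its variance components:
# vc <- as.data.frame(VarCorr(m2))
# vc$vcov[vc$grp == "child"] / sum(vc$vcov)
```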

Table 1.

Descriptive statistics by group.

Variable                 PPVT raw M (SD)    PPVT SS M (SD)    TVIP raw M (SD)    TVIP SS M (SD)
TD
 Time 1
  Spanish (n = 29)       80.83 (35.46)      84.55 (19.91)     44.59 (13.74)      94.97 (15.67)
  English (n = 8)        120.13 (17.03)     109.63 (13.11)    32.38 (13.87)      78.13 (16.88)
  Both (n = 20)          84.35 (27.52)      93.65 (19.29)     37.80 (15.84)      93.15 (18.25)
 Time 2
  Telepractice
   Spanish (n = 20)      109.15 (40.00)     90.85 (24.32)     60.60 (17.24)      98.15 (20.39)
   English (n = 2)       136.00 (24.04)     103.50 (23.34)    57.00 (19.80)      85.50 (31.82)
   Both (n = 11)         119.09 (30.40)     98.82 (22.95)     43.27 (19.73)      81.09 (18.75)
  Face-to-face
   Spanish (n = 9)       91.56 (27.52)      84.78 (14.65)     48.67 (15.36)      91.11 (19.54)
   English (n = 6)       134.17 (19.20)     112.17 (11.36)    40.17 (7.03)       77.17 (10.53)
   Both (n = 9)          90.67 (28.17)      90.78 (18.06)     50.67 (15.35)      100.78 (21.00)
DLD
 Time 1
  Spanish (n = 16)       43.25 (22.82)      69.63 (16.59)     21.00 (13.72)      79.06 (16.78)
  Both (n = 16)          47.31 (17.75)      74.94 (10.87)     17.12 (11.68)      76.25 (11.68)
 Time 2
  Telepractice
   Spanish (n = 7)       56.71 (18.06)      66.14 (17.27)     29.71 (13.93)      73.43 (16.66)
   Both (n = 8)          68.75 (27.09)      75.25 (15.15)     26.25 (19.58)      71.50 (18.02)
  Face-to-face
   Spanish (n = 9)       64.56 (37.12)      77.11 (25.90)     34.00 (21.49)      84.00 (23.51)
   Both (n = 8)          67.25 (17.93)      80.25 (7.57)      29.75 (17.92)      81.13 (19.06)

Note. Values are reported as M (SD). TD = typical language development; DLD = developmental language disorder; PPVT = Peabody Picture Vocabulary Test; TVIP = Test de Vocabulario en Imágenes Peabody; SS = standard score.

Table 2.

Spanish vocabulary models.

Fixed effects                           Model 1        Model 2        Model 3       Model 4       Model 5       Model 6         Model 7
 Intercept                              38.3* (1.88)   16.89* (2.66)  17.05* (2.96) 15.10* (3.06) 15.67* (3.33) 22.64* (3.09)   21.92* (3.33)
 Age in months                          —              0.74* (0.07)   0.74* (0.08)  0.73* (0.08)  0.74* (0.08)  0.65* (0.08)    0.66* (0.08)
 Delivery                               —              —              −0.38 (3.16)  —             −1.38 (3.21)  0.10 (2.84)     −1.15 (2.83)
 Home language: Both vs. English-only   —              —              —             −3.71 (5.70)  −4.17 (5.82)  —               −6.65 (5.16)
 Both vs. Spanish-only                  —              —              —             4.37 (3.25)   4.43 (3.27)   —               4.31 (2.87)
 English-only vs. Spanish-only          —              —              —             8.08 (5.54)   8.60 (5.70)   —               10.96 (5.04)
 DLD                                    —              —              —             —             —             −10.55* (2.57)  −11.31* (2.56)

Random effects (variance)
 Random intercept (child)               240.20         162.32         164.61        159.60        161.34        119.10          109.60
 Residual                               151.60         95.62          95.66         95.61         95.71         102.20          103.40
 Intraclass correlation                 0.61           0.63           —             —             —             —               —

Model fit
 Number of parameters                   3              4              5             6             7             6               8
 AIC                                    1531           1454           1457          1455          1457          1444            1442
 BIC                                    1541           1468           1473          1475          1480          1463            1467

Note. Fixed effects are reported as estimate (SE). For models that include home language as a predictor, the reported intercept estimate represents the predicted value for a subject in the Both group, which was used as the reference group. The estimate of the difference between the English-only and Spanish-only groups was obtained by recoding home language using English-only as the reference group and re-estimating the models. Estimates other than the intercept are not affected by recoding the reference group. SE = standard error; DLD = developmental language disorder; Est. = estimate; AIC = Akaike information criterion; BIC = Bayesian information criterion.

* p < .01.

For English vocabulary, the ICC was 0.88 after accounting for the effect of time, indicating that a substantial amount of variance was between subjects and that the random intercept for subject should be retained in the model. There was a significant fixed effect of time, such that vocabulary was predicted to increase by 1.38 points (raw score) for every 1-month increase in age (see Table 3). No significant effect was found for delivery method, either alone or in models with any other predictors. Home language was a significant predictor of English vocabulary, such that subjects whose home language was English-only were predicted to score 32.70 points higher (raw score) than subjects who spoke both English and Spanish at home. There was no significant difference between children who spoke both English and Spanish at home and those who spoke Spanish-only at home, but children who spoke Spanish-only at home were predicted to score 41.31 points lower than those who spoke English-only at home. This significant effect of home language was also present in a model including diagnosis classification. Diagnosis classification was significant after accounting for home language, such that children with DLD were predicted to score 10.02 points lower (raw score) than children without DLD. Delivery method did not significantly interact with time (γ = −0.32, SE = 0.21, p = .13, added to Model 3), diagnosis classification (γ = 1.00, SE = 7.21, p = .89, added to Model 6), or home language (delivery by home language: Both vs. English, γ = −17.51, SE = 25.23, p = .49; delivery by home language: Both vs. Spanish, γ = −1.0, SE = 12.92, p = .94, added to Model 5). Diagnosis classification did not significantly interact with time either (γ = −0.03, SE = 0.19, p = .87, added to Model 6).

Table 3.

English vocabulary models.

Fixed effects                           Model 1        Model 2        Model 3       Model 4          Model 5          Model 6         Model 7
 Intercept                              82.68* (3.82)  42.5* (4.29)   42.85* (5.35) 44.28* (5.31)    42.33* (6.08)    47.62* (5.42)   47.54* (6.07)
 Age in months                          —              1.38* (0.10)   1.39* (0.10)  1.37* (0.10)     1.36* (0.10)     1.33* (0.10)    1.30* (0.10)
 Delivery                               —              —              −0.70 (6.51)  —                4.12 (6.20)      −0.42 (6.15)    4.18 (5.86)
 Home language: Both vs. English-only   —              —              —             32.70* (11.10)   33.95* (11.30)   —               31.53* (10.70)
 Both vs. Spanish-only                  —              —              —             −8.61 (6.34)     −8.86 (6.38)     —               −9.04 (6.01)
 English-only vs. Spanish-only          —              —              —             −41.31* (10.86)  −42.81* (11.12)  —               −40.57* (10.52)
 DLD                                    —              —              —             —                —                −10.14* (3.64)  −10.02* (3.59)

Random effects (variance)
 Random intercept (child)               1135.20        853.20         863.40        738.70           743.90           759.30          651.60
 Residual                               332.20         119.30         119.30        119.20           119.20           124.90          124.70
 Intraclass correlation                 0.77           0.88           —             —                —                —               —

Model fit
 Number of parameters                   3              4              5             6                7                6               8
 AIC                                    1727           1605           1607          1595             1597             1602            1591
 BIC                                    1736           1618           1623          1614             1619             1621            1617

Note. Fixed effects are reported as estimate (SE). For models that include home language as a predictor, the reported intercept estimate represents the predicted value for a subject in the Both group, which was used as the reference group. The estimate of the difference between the English-only and Spanish-only groups was obtained by recoding home language using English-only as the reference group and re-estimating the models. Estimates other than the intercept are not affected by recoding the reference group. SE = standard error; DLD = developmental language disorder; Est. = estimate; AIC = Akaike information criterion; BIC = Bayesian information criterion.

* p < .01.

AIC and BIC values are also presented in Tables 2 and 3. Based on these values, there is not a clear best-fitting model, since AIC favors one model and BIC another for both outcomes. In terms of variance explained, Model 7 explained the most child-level variance (random intercept variance) for both the TVIP and the PPVT. However, in terms of BIC and the number of significant effects relative to the total number of effects, the most parsimonious model could be argued to be Model 4 for English and Model 6 for Spanish.
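One common way to express how much child-level variance a conditional model accounts for is the proportional reduction in the random intercept variance relative to the unconditional growth model; the sketch below illustrates this calculation with the variance estimates reported in Tables 2 and 3. It is provided for illustration and is not a statistic reported in the tables.

```r
# Proportional reduction in child-level (random intercept) variance for Model 7
# relative to the unconditional growth model (Model 2), using the variance
# estimates reported in Tables 2 and 3.
spanish_m2 <- 162.32; spanish_m7 <- 109.60   # TVIP (Table 2)
english_m2 <- 853.20; english_m7 <- 651.60   # PPVT (Table 3)

(spanish_m2 - spanish_m7) / spanish_m2   # ~0.32 of child-level variance explained
(english_m2 - english_m7) / english_m2   # ~0.24 of child-level variance explained
```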

Discussion

The purpose of this study was to examine the effect of delivery method (face-to-face or telepractice), home language, and language ability on bilingual children's receptive vocabulary scores in Spanish and English over time. As expected, our results suggested that both Spanish and English receptive vocabulary skills increased over time for all children in the study. Language ability significantly predicted children's receptive vocabulary in both Spanish and English, whereas home language significantly predicted English vocabulary only. Importantly, delivery method did not significantly predict receptive vocabulary in either language.

In this study, children with language disorders showed lower receptive vocabulary skills than children with typical language development. These results are in agreement with previous studies documenting the vocabulary difficulties of children with DLD (Gray, 2004; Gray et al., 1999; Rice & Hoffman, 2015) and highlight the importance of considering vocabulary deficits when assessing children with DLD, as suggested by broader definitions of DLD (Bishop et al., 2017). For example, although children with DLD in this study were identified with the Morphosyntax Subtest of the BESA or BESA-ME, they also struggled with vocabulary in comparison to typically developing children. Interestingly, although children with DLD showed lower receptive vocabulary in both languages overall, children with and without DLD showed similar growth patterns. This is in agreement with the conceptualization that the vocabulary skills of children with DLD lag behind those of their peers but continue to grow over time (Rice & Hoffman, 2015).

The results regarding the impact of home language on receptive vocabulary scores in this study suggest that children in English-only homes have higher English vocabularies than children in either bilingual or Spanish-only homes. Interestingly, children in Spanish-only homes did not show an advantage in Spanish vocabulary. This is in agreement with previous studies that found that the use of English at home supports the development of English but that this is not the case for Spanish (Branum-Martin et al., 2014; Hammer et al., 2009; Quiroz et al., 2010). Three potential reasons for these findings in our data are the impact of English as the dominant language in the community, the context dependence of vocabulary, and the impact of socioeconomic differences. English is the majority language of the community even for children who are being raised in bilingual or Spanish-only households. Thus, the combined effect of English-only at home plus the language of the community provides an advantage for English vocabulary for those children (DeCapua & Wintergerst, 2009; Francis & Ryan, 1998). Also, given that bilingual vocabulary development is context dependent, the test stimuli in the PPVT could potentially be biased toward words learned in the school environment, words that are not likely to be encountered in Spanish home environments (Anaya et al., 2018; Patterson & Pearson, 2004). Lastly, lower socioeconomic status and lower levels of education have been associated with lower vocabulary outcomes in children (Calvo & Bialystok, 2014; Pace et al., 2017). In our study, all mothers of children in the English-only homes reported that they had college or graduate degrees, compared to 23.3% of mothers from Spanish-only homes and 31.4% of mothers from bilingual homes. In fact, 60.5% of the mothers from Spanish-only homes and 48.6% of mothers from bilingual homes had a high school education or below. However, socioeconomic level and language outcomes are highly confounded in our study because our sample does not include any mothers from English-only homes with a high school education or below.1 It is important to note that the relationship between maternal education, bilingual outcomes, and home language is complex. For example, some research evidence suggests that the effect of maternal education on bilingual outcomes depends on whether the maternal education was completed in a context where Spanish or English was the majority language (Hoff et al., 2018; Sorenson Duncan & Paradis, 2020). Therefore, it is not possible to disentangle the relationships among mothers' education, home language, and bilingual outcomes in this study.

An important finding of this study is that we did not see an effect of delivery method on receptive vocabulary assessment when we compared children who were tested twice in person with children who were tested once in person and once using telepractice. Children who were tested face-to-face at both time points did not differ significantly from children who were tested face-to-face at Time 1 and using telepractice at Time 2. Therefore, we believe that testing receptive vocabulary skills through telepractice is a comparable approach to testing children face-to-face. When our complete research assessment battery was adapted to telepractice, our biggest concern was the assessment of receptive skills that necessitated pointing to complete the task, as is the case for the PPVT and TVIP. Recall that instead of pointing, children identified the number of the quadrant containing the picture that corresponded to the word said by the examiner. Although using numbers to identify quadrants is an approach suggested in the examiner's manual of the PPVT, the experience of doing this through telepractice had not been formally examined. One strategy that we believe helped make the telepractice assessment close to the face-to-face experience was the use of a document camera to simulate the face-to-face experience (e.g., turning pages) as closely as possible. In addition, the strategy of adding big numbers next to each quadrant so that children could easily identify the number might have helped make the assessment approaches comparable.

Our results for face-to-face and telepractice administration of standardized language assessments are in agreement with the results of previous studies that found no difference between telepractice and face-to-face scoring of telepractice-administered assessments (Sutherland et al., 2016, 2017; Waite et al., 2010). It is important to note that our study design is different from these previous studies. In Sutherland et al. (2016, 2017) and Waite et al. (2010), scoring was conducted both face-to-face and via telepractice within a single assessment session. In our study, both administration and scoring were completed either face-to-face or using telepractice; therefore, our results are more in line with telepractice assessment as currently practiced in the United States under COVID-19 pandemic restrictions and provide support for current practices.

The results of this study have important clinical implications for service delivery for bilingual children across the United States who may have difficulty accessing speech-language pathology services because they live in underserved areas and/or do not have access to a bilingual SLP. Telepractice has been shown to have a positive impact on rural and underserved areas (Cusack et al., 2008). Furthermore, while the number of bilingual SLPs remains low, telepractice provides a way for bilingual SLPs to assess and provide services in both languages more widely for this population. This study provides additional evidence supporting the administration of standardized assessments via telepractice as a feasible and reliable alternative to face-to-face administration and scoring.

There are limitations to this study that are important to acknowledge. First, this longitudinal study was not designed a priori to test the differences between face-to-face and telepractice assessments in bilingual children with different home language backgrounds. Instead, this study was a result of COVID-19 restrictions on in-person research. Therefore, crucial design features for investigating the impact of telepractice on the assessment of vocabulary skills, such as a comparison of children with and without language disorders who complete testing in both face-to-face and telepractice conditions, were not included in this study. Second, our participants came from families with different language practices at home; however, the group of children who only spoke English at home was much smaller than the groups from Spanish-only or bilingual households and had different characteristics from the rest of the sample (e.g., it included no children with DLD and had higher maternal education levels on average). Third, we only collected information about home language use at the onset of the study. Changes in the home use of Spanish and English might have occurred during the lockdowns. Although the restrictions on in-person learning in Texas were relatively short (e.g., Texas was open for in-person learning as early as August of 2020), the exposure to English at school for these children might also have affected our results. Future studies designed to test the impact of assessment methodology on receptive vocabulary are needed to confirm the results of this study.

Conclusions

The assessment of Spanish and English receptive language skills using telepractice seems to be comparable to face-to-face assessment in bilingual children with and without DLD. Children with DLD performed significantly lower than typically developing children in both Spanish and English. Receptive vocabulary in Spanish and English increased over time for children with and without DLD. The assessment of receptive vocabulary using standardized assessment via telepractice can provide important information about the language ability and language growth of bilingual children with and without language disorders.

Supplementary Material

Supplemental Material S1. Analysis including maternal education.

Acknowledgments

Research reported in this publication was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under Award Number K23DC015835 granted to Anny Castilla-Earls. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Funding Statement

Research reported in this publication was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under Award Number K23DC015835 granted to Anny Castilla-Earls.

Footnote

1

In addition to the analyses previously presented, we also ran analyses including mother's education as suggested by a reviewer. To do so, we created a dummy variable to examine the difference between children whose mother had education levels up to high school and those children whose mothers had some college and above. These results are presented in Supplemental Material S1.

References

  1. American Speech-Language-Hearing Association. (2011). Scope of practice. http://www.asha.org/policy/SP2016-00343/
  2. American Speech-Language-Hearing Association. (2019). ASHA summary membership and affiliation counts, year-end 2018. http://www.asha.org
  3. American Speech-Language-Hearing Association. (2020). Telepractice as a service delivery model [ASHA Practice Portal]. https://www.asha.org/practice-portal/professional-issues/telepractice/
  4. Anaya, J. B., Peña, E. D., & Bedore, L. M. (2018). Conceptual scoring and classification accuracy of vocabulary testing in bilingual children. Language, Speech, and Hearing Services in Schools, 49(1), 85–97. https://doi.org/10.1044/2017_LSHSS-16-0081
  5. Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48.
  6. Bialystok, E., Luk, G., Peets, K. F., & Yang, S. (2010). Receptive vocabulary differences in monolingual and bilingual children. In J. Abutalebi (Ed.), Bilingualism: Language and cognition (pp. 525–531). Cambridge University Press. https://doi.org/10.1017/S1366728909990423
  7. Bishop, D. V. M., Snowling, M. J., Thompson, P. A., Greenhalgh, T., & CATALISE-2 Consortium. (2017). Phase 2 of CATALISE: A multinational and multidisciplinary Delphi consensus study of problems with language development: Terminology. Journal of Child Psychology and Psychiatry, 58(10), 1068–1080. https://doi.org/10.1111/jcpp.12721
  8. Bliss, L., & Allen, D. (1983). Screening Kit of Language Development (SKOLD). University Park Press.
  9. Branum-Martin, L., Mehta, P. D., Carlson, C. D., Francis, D. J., & Goldenberg, C. (2014). The nature of Spanish versus English language use at home. Journal of Educational Psychology, 106(1), 181–199. https://doi.org/10.1037/a0033931
  10. Calvo, A., & Bialystok, E. (2014). Independent effects of bilingualism and socioeconomic status on language ability and executive functioning. Cognition, 130(3), 278–288. https://doi.org/10.1016/j.cognition.2013.11.015
  11. Castilla-Earls, A., Auza, A., Pérez-Leroux, A. T., Fulcher-Rood, K., & Barr, C. (2020). Morphological errors in monolingual Spanish-speaking children with and without developmental language disorders. Language, Speech, and Hearing Services in Schools, 51(2), 270–281. https://doi.org/10.1044/2019_LSHSS-19-00022
  12. Castilla-Earls, A., Pérez-Leroux, A. T., Fulcher-Rood, K., & Barr, C. (2021). Morphological errors in Spanish-speaking bilingual children with and without developmental language disorders. Language, Speech, and Hearing Services in Schools, 52(2), 497–511. https://doi.org/10.1044/2020_LSHSS-20-00017
  13. Ciccia, A. H., Whitford, B., Krumm, M., & McNeal, K. (2011). Improving the access of young urban children to speech, language and hearing screening via telehealth. Journal of Telemedicine and Telecare, 17(5), 240–244. https://doi.org/10.1258/jtt.2011.100810
  14. Cusack, C. M., Pan, E., Hook, J. M., Vincent, A., Kaelber, D. C., & Middleton, B. (2008). The value proposition in the widespread use of telehealth. Journal of Telemedicine and Telecare, 14(4), 167–168. https://doi.org/10.1258/jtt.2007.007043
  15. Davison, M. D., Hammer, C., & Lawrence, F. R. (2011). Associations between preschool language and first grade reading outcomes in bilingual children. Journal of Communication Disorders, 44(4), 444–458. https://doi.org/10.1016/j.jcomdis.2011.02.003
  16. DeCapua, A., & Wintergerst, A. C. (2009). Second-generation language maintenance and identity: A case study. Bilingual Research Journal, 32(1), 5–24. https://doi.org/10.1080/15235880902965672
  17. Dunn, L. M., & Dunn, D. M. (2007). Peabody Picture Vocabulary Test–Fourth Edition. Pearson Assessments. https://doi.org/10.1037/t15144-000
  18. Dunn, L. M., Padilla, E. R., Lugo, D. E., & Dunn, L. M. (1986). TVIP: Test de Vocabulario en Imágenes Peabody: Adaptacion Hispanoamericana = Peabody Picture Vocabulary Test: Hispanic-American adaptation. AGS. https://search.library.wisc.edu/catalog/999767172102121
  19. Francis, N., & Ryan, P. M. (1998). English as an international language of prestige: Conflicting cultural perspectives and shifting ethnolinguistic loyalties. Anthropology & Education Quarterly, 29(1), 25–43. https://doi.org/10.1525/aeq.1998.29.1.25
  20. Gray, S. (2004). Word learning by preschoolers with specific language impairment: Predictors and poor learners. Journal of Speech, Language, and Hearing Research, 47(5), 1117–1132. https://doi.org/10.1044/1092-4388(2004/083)
  21. Gray, S., Plante, E., Vance, R., & Henrichsen, M. (1999). The diagnostic accuracy of four vocabulary tests administered to preschool-age children. Language, Speech, and Hearing Services in Schools, 30(2), 196–206. https://doi.org/10.1044/0161-1461.3002.196
  22. Haaf, R., Duncan, B., Skarakis-Doyle, E., Carew, M., & Kapitan, P. (1999). Computer-based language assessment software: The effects of presentation and response format. Language, Speech, and Hearing Services in Schools, 30(1), 68–74. https://doi.org/10.1044/0161-1461.3001.68
  23. Hammer, C. S., Davison, M. D., Lawrence, F. R., & Miccio, A. W. (2009). The effect of maternal language on bilingual children's vocabulary and emergent literacy development during Head Start and kindergarten. Scientific Studies of Reading, 13(2), 99–121. https://doi.org/10.1080/10888430902769541
  24. Hodge, M. A., Sutherland, R., Jeng, K., Bale, G., Batta, P., Cambridge, A., Detheridge, J., Drevensek, S., Edwards, L., Everett, M., Ganesalingam, C., Geier, P., Kass, C., Mathieson, S., McCabe, M., Micallef, K., Molomby, K., Pfeiffer, S., Pope, S., … Silove, N. (2019). Literacy assessment via telepractice is comparable to face-to-face assessment in children with reading difficulties living in rural Australia. Telemedicine and e-Health, 25(4), 279–287. https://doi.org/10.1089/tmj.2018.0049
  25. Hoff, E., Burridge, A., Ribot, K. M., & Giguere, D. (2018). Language specificity in the relation of maternal education to bilingual children's vocabulary growth. Developmental Psychology, 54(6), 1011–1019. https://doi.org/10.1037/dev0000492
  26. Kaufman, A. S., & Kaufman, N. L. (2004). Kaufman Brief Intelligence Test (2nd ed.). Pearson.
  27. Khoong, E., Butler, B., Mesina, O., Su, G., DeFries, T. B., Nijagal, M., & Lyles, C. R. (2020). Patient interest in and barriers to telemedicine video visits in a multi-lingual urban safety-net system. Journal of the American Medical Informatics Association, 28(2), 349–353. https://doi.org/10.1093/jamia/ocaa234
  28. Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2017). lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82(13), 1–26.
  29. Mashima, P. A., & Doarn, C. A. (2008). Overview of telehealth activities in speech-language pathology. Telemedicine and e-Health, 14(10), 1101–1119. https://doi.org/10.1089/tmj.2008.0080
  30. May, J., & Erikson, S. (2014). Telehealth: Why not? Perspectives of speech pathologists not engaging in telehealth. Journal of Clinical Practice in Speech-Language Pathology, 16(3), 147–151.
  31. Moffat, J. J., & Eley, D. S. (2011). Barriers to the up-take of telemedicine in Australia—A view from providers. Rural and Remote Health, 11(2), 1581. https://doi.org/10.22605/RRH1581
  32. Pace, A., Luo, R., Hirsh-Pasek, K., & Michnick Golinkoff, R. (2017). Identifying pathways between socioeconomic status and language development. Annual Review of Linguistics, 3, 285–308. https://doi.org/10.1146/annurev-linguistics-011516-034226
  33. Patterson, J. L. (1998). Expressive vocabulary development and word combinations of Spanish-English bilingual toddlers. American Journal of Speech-Language Pathology, 7(4), 46–56. https://doi.org/10.1044/1058-0360.0704.46
  34. Patterson, J. L., & Pearson, B. Z. (2004). Bilingual lexical development: Influences, contexts, and processes. In B. A. Goldstein (Ed.), Bilingual language development and disorders in Spanish-English speakers (pp. 77–104). Brookes.
  35. Peña, E. D., Bedore, L. M., Lugo-Neris, M. J., & Albudoor, N. (2020). Identifying developmental language disorder in school age bilinguals: Semantics, grammar, and narratives. Language Assessment Quarterly, 17(5), 541–558. https://doi.org/10.1080/15434303.2020.1827258
  36. Peña, E. D., Gutierrez-Clellen, V. F., Iglesias, A., Goldstein, B., & Bedore, L. M. (2018). Bilingual English-Spanish Assessment (BESA). Brookes.
  37. Perrin, A. (2021, June 3). Mobile technology and home broadband 2021. Pew Research Center.
  38. Quiroz, B. G., Snow, C. E., & Jing, Z. (2010). Vocabulary skills of Spanish–English bilinguals: Impact of mother–child language interactions and home language and literacy support. International Journal of Bilingualism, 14(4), 379–399. https://doi.org/10.1177/1367006910370919
  39. R Core Team. (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/
  40. Restrepo, M. A. (1998). Identifiers of predominantly Spanish-speaking children with language impairment. Journal of Speech, Language, and Hearing Research, 41(6), 1398–1411. https://doi.org/10.1044/jslhr.4106.1398
  41. Rice, M. L., & Hoffman, L. (2015). Predicting vocabulary growth in children with and without specific language impairment: A longitudinal study from 2;6 to 21 years of age. Journal of Speech, Language, and Hearing Research, 58(2), 345–359. https://doi.org/10.1044/2015_JSLHR-L-14-0150
  42. Ronderos, J., Castilla-Earls, A., & Ramos, G. M. (2021). Parental beliefs, language practices and language outcomes in Spanish-English bilingual children. International Journal of Bilingual Education and Bilingualism. Advance online publication. https://doi.org/10.1080/13670050.2021.1935439
  43. Semel, E., Wiig, E. H., & Secord, W. A. (2003). Clinical Evaluation of Language Fundamentals, Fourth Edition (CELF-4). The Psychological Corporation/A Harcourt Assessment Company.
  44. Snow, C. E. (1991). The theoretical basis for relationships between language and literacy in development. Journal of Research in Childhood Education, 6(1), 5–10. https://doi.org/10.1080/02568549109594817
  45. Sorenson Duncan, T., & Paradis, J. (2020). Home language environment and children's second language acquisition: The special status of input from older siblings. Journal of Child Language, 47(5), 982–1005. https://doi.org/10.1017/S0305000919000977
  46. Sutherland, R., Hodge, A., Trembath, D., Drevensek, S., & Roberts, J. (2016). Overcoming barriers to using telehealth for standardized language assessments. Perspectives of the ASHA Special Interest Groups, 1(18), 41–50. https://doi.org/10.1044/persp1.SIG18.41
  47. Sutherland, R., Trembath, D., Hodge, A., Drevensek, S., Lee, S., Silove, N., & Roberts, J. (2017). Telehealth language assessments using consumer grade equipment in rural and urban settings: Feasible, reliable and well tolerated. Journal of Telemedicine and Telecare, 23(1), 106–115. https://doi.org/10.1177/1357633X15623921
  48. United States Census Bureau. (2015). Detailed languages spoken at home and ability to speak English for the population 5 years and over: 2009–2013. https://www.census.gov/data/tables/2013/demo/2009-2013-lang-tables.html
  49. Waite, M. C., Theodoros, D. G., Russell, T. G., & Cahill, L. M. (2010). Internet-based telehealth assessment of language using the CELF-4. Language, Speech, and Hearing Services in Schools, 41(4), 445–458. https://doi.org/10.1044/0161-1461(2009/08-0131)
  50. Williams, K. T. (2007). Expressive Vocabulary Test, Second Edition (EVT-2). AGS. https://doi.org/10.1037/t15094-000
  51. Zoom Video Communications. (2020). Zoom (Version 5.4.7). https://zoom.us/
