Language, Speech, and Hearing Services in Schools
2022 Feb 16;53(2):445–453. doi: 10.1044/2021_LSHSS-21-00057

Feasibility of Assessing Expressive and Receptive Vocabulary via Telepractice for Early Elementary-Age Children With Language Impairment

Mary Beth Schmitt, Sherine Tambyraja, Megan Thibodeaux, and Jen Filipkowski
PMCID: PMC9549970  PMID: 35171709

Abstract

Purpose:

Due to the COVID-19 pandemic, millions of school-age children with language impairment (LI) and their speech-language pathologists (SLPs) relied on telepractice service delivery models. Unfortunately, the dearth of evidence and procedural guidance available to SLPs has made this transition challenging at best.

Method:

The current study utilized a sample of 20 young children with LI to determine the feasibility of procedures necessary for conducting vocabulary assessments via telepractice platforms and the reliability of scoring participant responses using standardized assessments.

Results:

Study findings yielded numerous practical suggestions for SLPs working with young children with LI via telepractice. Results suggest that these adaptations support strong interrater reliability for scoring participant responses in an online format.

Conclusion:

Study findings suggest that conducting telepractice assessments can be a useful and reliable tool for school-based SLPs, with implications reaching beyond the pandemic era.


In the spring of 2020, the majority of public schools closed due to the COVID-19 pandemic, leaving millions of school children across the country to engage in virtual, at-home learning. Among those most critically impacted by school closures were children with language impairment (LI) receiving school-based special education services, such as speech-language therapy, who had to quickly adapt to a new way of receiving therapy—via telepractice. Indeed, the idea of telepractice as a method of service delivery for school-age children with LI is not novel; however, studies examining the feasibility and use of telepractice have primarily focused on understanding how to reach and serve children living in remote or rural areas (e.g., Fairweather et al., 2016) and thus on its utility as an alternative method of service delivery.

Although the majority of studies concerning telepractice for school-age children have focused on the treatment aspect of service delivery (e.g., Grogan-Johnson et al., 2011, 2013), a smaller body of work has sought to determine the validity of assessment practices via telepractice models, with consistently positive results indicating high levels of agreement between assessment items administered in person versus via telepractice (e.g., Hodge et al., 2019; Sutherland et al., 2016; Waite et al., 2010). Waite et al. (2010), for example, compared the validity and reliability of administered items from the Core Language subtests of the Clinical Evaluation of Language Fundamentals–Fourth Edition (Semel et al., 2003) when administered face-to-face versus via virtual assessment for 25 school-age children with LI. Results showed no significant differences in scores between the two conditions, and both interrater and intrarater reliability agreements were high for all items.

Despite the promising outcomes from these studies regarding the validity of telepractice for both assessment and treatment of young children with LI, research suggests that many speech-language pathologists (SLPs) have overall low levels of comfort and experience with using telepractice to serve their students. For example, Tucker (2012) surveyed 170 SLPs in a southeastern state regarding their use of telepractice. Of the 170 SLPs who responded, only seven (6%) had any experience using teletherapy. Of those seven, only three were providing teletherapy at the time of the study. Moreover, the majority of responding SLPs (68%) either disagreed or strongly disagreed with the statement that assessment practices could be as effective via telepractice compared to face-to-face contexts. Comparatively, only 36% of SLPs in that study disagreed or strongly disagreed with the statement that therapy could be similarly effective in either setting. These results indicate that even though research suggests both assessment and treatment procedures are feasible and effective in telepractice contexts, SLPs perceive important differences in how telepractice can be used for assessment and treatment.

More recently, however, a survey of SLPs who regularly use telepractice as a service delivery option (n = 305) indicated that they (a) are predominantly self-employed (55%), (b) use telepractice to serve clients with LI (73%), and (c) were somewhat prepared to provide telepractice services when they first began (American Speech-Language-Hearing Association, 2016). These data, offering an alternative perspective to SLPs in the Tucker (2012) study, highlight that there may be an uptick in the use of telepractice for children with LI, but in specific contexts (e.g., private practice vs. school settings). Moreover, a recent scoping review of telepractice studies highlights the lack of implementation data in the majority of studies (Campbell et al., 2020). In the current study, we address this research-to-practice gap and describe practical and applicable procedures for administering both receptive and expressive language assessments via telepractice.

In large part, findings from the Tucker (2012) survey align with research suggesting that many SLPs are relatively unfamiliar and inexperienced with telepractice as a service delivery model (but see American Speech-Language-Hearing Association, 2016). Although the extent to which telepractice is regularly used by school-based SLPs is not widely reported in the literature, accumulating data do suggest that the majority of SLPs do not feel prepared to engage in telepractice and/or did not receive explicit clinical training regarding telepractice in their graduate programs or clinical fellowships (Akamoglu et al., 2018; Hines et al., 2015).

In sum, as effects from the COVID-19 pandemic endure and some children with LI continue to receive therapy in virtual learning environments, more robust and comprehensive research on the procedural details of administering assessments via telepractice is warranted for two main reasons. First, as indicated above, research suggests that telepractice is comparably effective for both assessing and treating children in need of speech-language supports. These are encouraging findings that support the continued use of telepractice to reach children who are medically fragile, who attend school districts that may not have an on-site SLP, or who live in low-resource environments and school districts, enabling a more inclusive model of school-based services. Second, however, research on current clinical practice and perspectives also suggests that most SLPs lack the training and experience to provide services virtually. That is, although there is solid evidence to suggest that language assessments can be effectively administered virtually in researcher-controlled and laboratory contexts, practical information regarding the training, materials, and protocols needed to do so in business-as-usual settings, including children's home environments, is currently lacking. To that end, this study reports on effective methodological considerations for administering both receptive and expressive vocabulary assessments for children with LI that may guide ongoing and future clinical practice in the area of language assessment.

Specifically, we address the following research questions: (a) What procedures are needed to conduct receptive and expressive vocabulary assessments via telepractice for children with LI? (b) What is the reliability of scoring standardized receptive and expressive vocabulary assessments conducted virtually?

Method

Project Overview

The current study is a component of a parent study of treatment intensity for children with LI. The parent project was designed for in-person experiences; however, due to the COVID-19 pandemic, the authors applied for and received supplemental funding from the National Institutes of Health to convert the project to an online format. As a component of the parent project, all interested participants completed a series of orientation sessions and testing sessions to determine eligibility for participation. All children who met eligibility criteria then completed two additional testing sessions to establish baseline skills. For the current investigation, we include the first 20 consented children with LI who completed Testing Session 1. All procedures included were approved by the institutional review board of the primary site for the parent project, including conducting sessions with participants online.

General Procedures

All study participants met the following eligibility criteria: (a) had an existing primary diagnosis of LI, (b) had no comorbid diagnoses that would explain the language concerns, (c) were currently receiving speech-language therapy services in schools, (d) were between the ages of 5;0 (years;months) and 6;11, and (e) spoke English as their primary language.

Participants were recruited using multiple methods, adapted in response to the pandemic. One method was via school-based SLPs. Study personnel presented the study to SLPs via online seminars and staff development; SLPs were provided flyers in PDF format to share with caregivers of children on their caseloads. Second, flyers were sent to schools via PeachJar, an online digital communication system through which schools approve flyers and e-mail them to families in their district. Third, social media outlets were used to share study information with both SLPs and families. Fourth, study personnel contacted groups specifically associated with families of children with LI (e.g., DLD and Me: Developmental Language Disorder and Me: https://dldandme.org; RADLD: Raising Awareness of Developmental Language Disorder: https://radld.org), who advertised the study on their websites.

Families of children with LI who were interested in learning more contacted study personnel directly via e-mail or phone. Families who were still interested after hearing more details of the study completed a consent form for participation. Once consent was received, the family was then scheduled for an orientation session (see Orientation Session in Results section). Following the orientation session, the family was scheduled for Test Session 1 at a time that was convenient for them.

Technology

All study procedures occurred in home environments for both the study personnel and the families. Study personnel conducting telepractice sessions were trained to find a quiet, private room without distractions or background noise. Prior to beginning the telepractice session, study personnel completed a systems check to ensure that their video, audio, and Wi-Fi were functioning properly. The families chose whether they wanted to use a home tablet or computer and Wi-Fi or whether they wanted to borrow a device with cellular data from the study team. All 20 participants elected to use their own devices. To ensure confidentiality on an online platform, the study team purchased HIPAA-compliant Zoom licenses for all study personnel who would be conducting orientation or testing sessions.

Participants

Participants for the current study included 20 children with LI (eight girls, 12 boys). All children had an Individualized Education Program for language from their public school and were between 5;0 and 6;11 (M = 71 months). Six participating families did not provide demographic data. Of the 14 participants reporting demographic data, the majority identified as Not Hispanic/Latinx (n = 13) and White (n = 9). Maternal education ranged from some college but no degree to a master's degree, with the majority of maternal caregivers having a master's degree (n = 8). Total household income ranged from $35,000 to $85,000 or more, with the majority of families earning $85,000 or more (n = 9); two of the 14 families chose not to report familial income. See Table 1 for complete demographic data.

Table 1.

Demographic data for study participants (n = 14).

Demographic Count (% of sample)
Ethnicity
 Hispanic/Latinx 1 (5%)
 Not Hispanic/Latinx 13 (65%)
Race
 Black/African American 0
 American Indian or Alaska Native 0
 White/Caucasian 9 (45%)
 Native Hawaiian 0
 Other Pacific Islander 0
 Asian 3 (15%)
 Other 2 (14%)
Maternal education
 Some college no degree 1 (5%)
 AA/AS 2-year degree 1 (5%)
 Bachelor's degree 4 (20%)
 Master's degree 8 (40%)
Total household income
 $35,000–$45,000 1 (5%)
 $55,000–$65,000 1 (5%)
 $75,000–$85,000 1 (5%)
 > $85,000 9 (45%)
 Not reported 2 (14%)

Note. AA/AS = Associate of Arts/Associate of Science. Six of the 20 families did not complete the demographic questionnaire. An additional two families did not report total household income.

Children's LI status in the public schools was confirmed at several points during the enrollment process, including recruitment, consent, and orientation sessions. This study was particularly interested in children diagnosed with and treated for LI in the public schools. Because a diagnosis of LI in the schools is determined by several factors, including performance in the general education classroom, scores on a standardized test were not used to qualify children as LI for this study. Overall, the mean standardized score on all study measures was below average, with scores ranging from > 2 SDs below the mean to > 1 SD above the mean (see Table 2 for complete data).

Table 2.

Raw and scaled scores for measures of vocabulary.

Measure & subtests Expressive or receptive task Raw score mean (range) Standard score mean (range)
DELV-NR
 Verb Contrast Expressive 4.15 (0–9)
 Preposition Contrast Expressive 3.35 (1–6)
 Quantifiers Both 6.40 (3–9)
 Fast Mapping: Real Verbs Receptive 4.25 (2–8)
 Fast Mapping: Novel Words Receptive 4.90 (1–12)
 Semantics Scaled Score Both 23.05 (12–42) 7.35 (2–19)
CREVT-3
 Expressive Vocabulary Expressive 5.45 (0–13) 97.75 (76–120)
 Receptive Vocabulary Receptive 17.95 (4–29) 99.20 (68–118)
TOLD-P:5
 Picture Vocabulary Receptive 12.30 (7–26) 8.55 (5–15)
 Relational Vocabulary Expressive 7.55 (3–14) 7.55 (3–14)
 Oral Vocabulary Expressive 6.35 (0–14) 6.40 (2–12)
 Semantics Index Score Both 84.7 (63–112)

Note. Scaled scores for the DELV-NR and TOLD-P:5 have a mean of 10, standard deviation (SD) of 3. Standard scores for the CREVT-3 and TOLD-P:5 index score have a mean of 100 and SD of 15. DELV-NR = Diagnostic Evaluation of Language Variation–Norm Referenced; CREVT-3 = Comprehensive Receptive and Expressive Vocabulary Test–Third Edition; TOLD-P:5 = Test of Language Development (Primary)–Fifth Edition.
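Because Table 2 mixes two normative metrics, a quick conversion to common standard-deviation units can help when comparing subtests. The following minimal sketch (illustrative arithmetic only, not part of the study analyses) applies the normative means and SDs given in the note above to a few of the reported scores.

```python
# Illustrative only: convert scaled and standard scores from Table 2 into
# standard-deviation (z) units using the normative parameters in the note.

def z_units(score: float, mean: float, sd: float) -> float:
    """Distance of a score from the normative mean, in SD units."""
    return (score - mean) / sd

# Scaled scores (DELV-NR, TOLD-P:5 subtests): M = 10, SD = 3.
print(round(z_units(2, 10, 3), 2))      # lowest DELV-NR Semantics scaled score     -> -2.67
# Standard/index scores (CREVT-3, TOLD-P:5 index): M = 100, SD = 15.
print(round(z_units(63, 100, 15), 2))   # lowest TOLD-P:5 Semantics Index score     -> -2.47
print(round(z_units(120, 100, 15), 2))  # highest CREVT-3 Expressive standard score ->  1.33
```

These values are consistent with the range reported in the text (more than 2 SDs below the mean to more than 1 SD above the mean).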

Measures

For the parent project, participants completed three testing sessions to establish eligibility and baseline functioning. For the purpose of the current study, only data from Test Session 1 (eligibility testing) were used. Online versions of each test were purchased through the appropriate publishing companies. Additionally, for the purposes of the larger parent study, licenses were purchased to convert paper protocols to Qualtrics, minimizing the chance of data loss and breaches of confidentiality associated with paper protocols given that all study personnel were working remotely. Sessions lasted an average of 97 min (range: 70–127 min).

Measures used in Testing Session 1 included three norm-referenced measures of language, specifically subtests evaluating children's vocabulary skills: the Diagnostic Evaluation of Language Variation–Norm Referenced (DELV-NR; Seymour et al., 2005), the Comprehensive Receptive and Expressive Vocabulary Test–Third Edition (CREVT-3; Wallace & Hammill, 2013), and the Test of Language Development (Primary)–Fifth Edition (TOLD-P:5; Newcomer & Hammill, 2019). Four subtests of the DELV-NR were used: Verb Contrast, Preposition Contrast, Quantifiers, and Fast Mapping. These subtests measure the ability to contrast verbs and prepositions, understand quantifiers, and learn new meanings from sentence contexts. Reported internal consistency of the semantics subtests is .65–.80 for ages 5;0–6;11, and interscorer reliability is .93, suggesting strong reliability. Two subtests of the CREVT-3 were administered: Receptive Vocabulary and Expressive Vocabulary. Reported internal consistency for these subtests of the CREVT-3 is .86–.91, and interscorer reliability is .99, suggesting strong reliability. Three subtests of the TOLD-P:5 were administered: Picture Vocabulary, Relational Vocabulary, and Oral Vocabulary. Reported internal consistency for these subtests ranged from .84 to .88, and interscorer reliability is .97–.99, suggesting strong reliability.

For each of the standardized measures, the publishing company offered an online version. Purchase of the online version of each measure came with access to its respective online platform: the CREVT-3 and TOLD-P:5 used RedShelf, and the DELV-NR used Ventris Learning. We used the measures as intended by the publishers; the only adaptation to the instructions was that, instead of being asked to "Point to ______" for receptive tasks, participants were asked to "Place your stamp on _______" to align with the annotation features of online administration (see Results for more information). Additionally, because these standardized measures were available in an online format, we relied on the publishers' reliability and validity data, as reported above.

All measures were administered by research assistants (RAs) who were trained using a three-step process. First, RAs completed three training modules (one for each measure) that described critical components of each measure (e.g., administration rules, ceiling and basal, and repetition of prompts). In this phase, RAs also watched a gold-standard administration of each assessment by senior staff. After reviewing each module, the RA completed a quiz with a minimum of 80% accuracy before moving on to the next module. Second, each RA completed in-depth training with senior staff. During this time, the RAs participated in activities that allowed them to practice administering each task, ask questions, and develop a deeper understanding of each measure. The RAs then met with each other separately to practice administering each measure. Third, the RAs completed two mock sessions via telepractice, one with a senior staff member and one with a neurotypical child volunteer, and received feedback based on a fidelity checklist. During the mock sessions, the RA had to score at least 80% on the fidelity checklist and hit critical markers for success to be considered a reliable assessor.
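To make the basal and ceiling concept concrete, the sketch below applies a hypothetical ceiling rule of three consecutive incorrect responses when tallying a raw score. The actual basal and ceiling rules differ across the DELV-NR, CREVT-3, and TOLD-P:5 and are specified in each test manual; this is an illustration only, not the publishers' rules.

```python
# Illustration only: tally a raw score under a HYPOTHETICAL ceiling rule
# (testing stops after three consecutive incorrect responses). The real
# rules are defined in each test manual.
from typing import List

def raw_score_with_ceiling(item_scores: List[int], ceiling_run: int = 3) -> int:
    """Sum item scores (1 = correct, 0 = incorrect) until the ceiling is hit;
    items after the ceiling are not administered and do not count."""
    total, consecutive_errors = 0, 0
    for score in item_scores:
        if consecutive_errors >= ceiling_run:  # ceiling reached; stop administering
            break
        total += score
        consecutive_errors = consecutive_errors + 1 if score == 0 else 0
    return total

# Hypothetical response pattern: the child misses items 6-8, so testing stops
# and items 9-10 are never administered.
print(raw_score_with_ceiling([1, 1, 0, 1, 1, 0, 0, 0, 1, 1]))  # -> 4
```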

Ongoing fidelity checks. After training was completed, senior staff conducted random fidelity checks and provided feedback to each RA during live telepractice testing sessions. Additionally, each month, senior staff created at least two master scores of a videotaped test session. During the first week of each month, RAs were assigned to watch a test session recording and complete score forms based on it. During the second week, RAs met with senior staff to review their scores and compare them with the master scores. To promote consistency in future scoring, all discrepancies were tracked and discussed with the entire assessment team. All RAs earned 80% or higher on all fidelity checks.
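In practice, the monthly master-score comparison amounts to an item-level agreement check. The following minimal sketch uses hypothetical data (not study records) to show how an RA's score form could be compared against a master score form, flagging discrepant items for discussion and checking the 80% criterion.

```python
# Hypothetical example: compare an RA's item scores with the master scores,
# compute percent agreement, and list items to discuss with the team.

def fidelity_check(master: dict, ra: dict, criterion: float = 0.80):
    """Return (percent agreement, whether the criterion is met, discrepant items)."""
    discrepancies = [item for item, score in master.items() if ra.get(item) != score]
    agreement = 1 - len(discrepancies) / len(master)
    return agreement, agreement >= criterion, discrepancies

# Item-level scores (1 = correct, 0 = incorrect) for one hypothetical subtest.
master_scores = {f"item_{i}": s for i, s in enumerate([1, 0, 1, 1, 0, 1, 1, 1, 0, 1], start=1)}
ra_scores     = {f"item_{i}": s for i, s in enumerate([1, 0, 1, 1, 1, 1, 1, 1, 0, 1], start=1)}

agreement, passed, flagged = fidelity_check(master_scores, ra_scores)
print(f"{agreement:.0%} agreement, criterion met: {passed}, discuss: {flagged}")
# -> 90% agreement, criterion met: True, discuss: ['item_5']
```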

Reliability

To determine how reliably assessors could score responses from participants via telepractice, RAs were randomly assigned to rescore Test Session 1 for our 20 participants. Four trained RAs conducted the original test sessions for our participants (M = 5 test sessions; range = 1–9 sessions). Five trained RAs (the original four RAs plus one additional team member) were randomly assigned to rescore Test Session 1 for participants they did not conduct live (M = 4; range = 3–5 sessions). The RAs viewed recordings of the full assessment and rescored each item using a new record form. They were blind to the scores from the original assessor. The original scores were compared with the reliability scores to calculate intraclass correlation coefficients (ICCs).
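The key constraint in the rescoring assignment was that no RA rescored a session they had administered live. A minimal sketch of that constraint is shown below; the participant IDs and RA labels are hypothetical, and the sketch does not reproduce the study's exact balancing of rescoring workloads.

```python
# Hypothetical sketch: randomly assign rescorers so that no RA rescores a
# session they originally administered (participant IDs and RA names invented).
import random

original_assessor = {f"P{i:02d}": f"RA{(i % 4) + 1}" for i in range(1, 21)}  # 4 original RAs
rescorers = ["RA1", "RA2", "RA3", "RA4", "RA5"]                              # plus one extra RA

random.seed(1)
assignments = {
    participant: random.choice([ra for ra in rescorers if ra != assessor])
    for participant, assessor in original_assessor.items()
}

print(assignments["P01"], "rescored P01; original assessor was", original_assessor["P01"])
```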

Results

Research Aim 1: What Procedures Are Needed to Conduct Receptive and Expressive Vocabulary Assessments via Telepractice?

To address the first aim of determining procedures for conducting receptive and expressive vocabulary assessments via telepractice, all information from our review of the literature, consultation with experts, and discussions with publishing companies was gathered and processed. The study team determined a preliminary set of procedures and then tested each procedure to answer this question. Procedures that facilitated reliable and valid assessments and were feasible to conduct in home environments were adopted for use and are described in the following paragraphs.

Adaptations for Receptive Language

First, the most apparent need was to find a reliable way to evaluate receptive language skills. To maintain the reliability and validity of each measure, it was important that the evaluation of receptive language skills not rely on caregiver report (e.g., "My child pointed to the horse") or on the child's expressive language skills (e.g., "Box A" or "the green one"). Additionally, it was not feasible to send video cameras to each participant's home to record their selections on the screen for later scoring. However, two features of Zoom allowed for reliable and valid assessment: share screen and annotation.

Share screen and annotation. The RAs utilized several Zoom features to administer the assessments. The "share screen" feature was used to present assessment stimuli to the participant; it allows one Zoom user (i.e., the RA) to share a portion of their screen or sound with another Zoom user (i.e., the participant). The "annotation" feature allows the participant to select their responses on receptive tasks. After the RA shares their screen with the stimuli, the participant can then annotate the screen with a stamp or with the drawing tool. The participant selects their response by placing the stamp or drawing on their answer choice. Both the participant and the RA are able to view the shared image and the annotated response.

The participant's annotated response remains on the shared screen until the RA clears the annotation or stops sharing the screen. This allows the RA time to accurately score the response. After each item is administered, the RA must clear the participant's annotated response using the "delete" icon on the annotation bar.

A challenge with annotation. The annotation feature needed to be set up at the beginning of each receptive task, which was a challenge for many of the participants to complete independently. If the caregiver was unavailable to set up the annotation feature throughout the session, then the “remote control” feature of Zoom was used. This feature places the Zoom functions in the hands of the RA instead of the participant or caregiver. By using “remote control,” the RA was able to set up the annotation feature on their screen and then give the participant access to the annotation features remotely.

Orientation Sessions

Once the procedures for conducting the actual assessment were determined, it became evident that another significant need was training the caregivers and participants in these new procedures, separate from the testing session itself. Because participants and their caregivers joined the telepractice sessions from their own residences, it was critical to set expectations and clarify any questions or concerns before conducting assessment sessions. Specifically, the study team needed to train participating children and families on the technology, including the Zoom features described above, and on expectations for caregiver involvement to ensure valid assessment.

Technology. After greeting the family, the study team member introduced them to how Zoom would be used to conduct study activities. First, the staff member confirmed that the Zoom video boxes were not overlapping the shared PowerPoint screen. If the family indicated that the video boxes were overlapping the shared screen, the staff member walked them through step-by-step instructions to change the Zoom settings. The purpose of this was to ensure that the child would be able to see entire stimulus pages during study sessions. Next, the staff member confirmed that the child could see and identify a picture of a simple animal on the shared screen. This was an opportunity for the child to practice answering a question based on what was displayed on the screen. A sound check was also conducted by asking the family to indicate whether the staff member sounded too quiet, just right, or too loud. If the family or child indicated that the volume was too quiet or too loud, the staff member walked the family through step-by-step instructions to adjust Zoom and/or device settings. The staff member then walked the family through step-by-step instructions to set up stamps through Zoom annotations and explained how the stamps would be used during the testing session. Once stamps were activated in Zoom settings, the child was asked to view a screen displaying simple pictures of various animals. The staff member then instructed the child to place their stamp on a specific animal. This was repeated with different animals as many times as necessary to give the child an opportunity to practice using stamps in a way similar to how they would be used during testing sessions. Note that the actual assessment measures were not used in any form during the orientation session. Study personnel created an animal slide to practice annotations with the participant and caregivers (see Figure 1).

Figure 1. Slide used with participants to practice annotations during orientation session.

Caregiver involvement. After practicing the technology, the staff member explicitly described the role of the caregiver. The staff member explained the caregiver's roles in the following tasks: (a) to make sure the child can hear and see the staff member and shared screens, (b) to remind the child to look at the screen and listen to the staff member, and (c) to tell the staff member if the video buffers, causing the child to not be able to hear. Additionally, to the extent possible, the staff member asked the caregiver to have the child use the bathroom before sessions, to have a water bottle handy, to have their device charged and/or a charger handy, and to minimize distractions such as background television noise. The staff member also explained what the caregiver should not do in the context of a research study. Specifically, the caregiver was asked not to (a) help the child give a correct response, (b) tell the child if any of their responses were correct or incorrect, or (c) practice any of the questions outside of the telepractice sessions. The RA then reminded the caregiver of these roles at the beginning of the assessment session. If a caregiver gave any type of coaching or feedback, the RA immediately reminded the caregiver of the expectation (e.g., "Please remember the helper shouldn't let the learner know if their response is right or wrong.") before moving on. Any needed corrections were noted by RAs at the end of the assessment session. For this particular study, a gentle reminder was sufficient for caregivers to remember their role during the assessment session.

Copyright Regulations and Ethical Considerations

It is important to understand the copyright regulations and ethical considerations for utilizing assessment materials in a remote capacity. The study team purchased electronic versions of each assessment manual and stimulus book. For each telepractice testing session, the RA assigned to that participant accessed the assessment materials remotely on secure online platforms provided by each publisher. Only one RA accessed each version of the remote materials at any given time, ensuring that the materials were used in a manner consistent with the terms agreed upon with the publisher at the time of purchase.

These procedures were a critical step in determining effective processes for conducting virtual testing sessions and were used by all 20 participants and their families without incident. In cases of technology glitches or Internet connectivity problems, the session was rescheduled.

Research Aim 2: What Is the Reliability of Scoring Receptive and Expressive Vocabulary Assessments Conducted Virtually?

To address the second research aim of determining the reliability of scoring receptive and expressive vocabulary assessments conducted virtually, ICCs were calculated for each measure following Shrout and Fleiss (1979). Total raw scores for each rater were entered into a two-way random effects model using absolute agreement; single measures are reported (see Table 3). All ICCs were strong for each of the three norm-referenced measures across all subtests, ranging from .937 to .998. According to Nunnally and Bernstein (1994), ICCs greater than .80 are strong for research and greater than .90 are strong for clinical decision making. These values suggest that our RAs scored vocabulary assessments administered via telepractice reliably by both research and clinical standards (see Table 3).
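For readers wishing to replicate this analysis, the following is a minimal sketch (not the study's analysis code) of the model described above: a two-way random-effects, absolute-agreement, single-measures ICC, that is, ICC(2,1) in Shrout and Fleiss's (1979) notation, applied here to hypothetical paired raw scores from an original assessor and a rescorer.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measures
    (Shrout & Fleiss, 1979). `scores` is an n_targets x k_raters matrix of
    total raw scores (one row per participant, one column per rater)."""
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-participant means
    col_means = scores.mean(axis=0)   # per-rater means

    # Two-way ANOVA decomposition of the score matrix.
    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_error = np.sum((scores - grand_mean) ** 2) - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical data: 20 participants scored by the original assessor and an
# independent rescorer whose scores differ only slightly.
rng = np.random.default_rng(0)
original = rng.integers(0, 30, size=20).astype(float)
rescored = original + rng.normal(0, 1.0, size=20)
print(round(icc_2_1(np.column_stack([original, rescored])), 3))  # high ICC expected
```

Confidence intervals such as those reported in Table 3 can be obtained from the same mean squares or from standard statistical packages.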

Table 3.

Intraclass correlation coefficients (ICCs) for study measures.

Measure ICC Lower–upper bounds
DELV-NR
 Verb Contrast .960 .903, .984
 Preposition Contrast .963 .911, .985
 Quantifiers .972 .931, .989
 Fast Mapping: Real Verbs .993 .982, .997
 Fast Mapping: Novel Verbs .994 .984, .997
CREVT-3
 Expressive Vocabulary .937 .849, .974
 Receptive Vocabulary .998 .996, .999
TOLD-P:5
 Picture Vocabulary .997 .992, .999
 Relational Vocabulary .993 .983, .997
 Oral Vocabulary .980 .952, .992

Note. ICCs were calculated on total raw scores for each measure. DELV-NR = Diagnostic Evaluation of Language Variation–Norm Referenced; CREVT-3 = Comprehensive Receptive and Expressive Vocabulary Test–Third Edition; TOLD-P:5 = Test of Language Development (Primary)–Fifth Edition.

Discussion

Telepractice as a means of service delivery for speech-language pathology has been increasingly studied over the past decade, but because of the COVID-19 pandemic, telepractice quickly became the primary method of service delivery for many SLPs (e.g., Tambyraja et al., 2021). Unfortunately, research concerning feasible and reliable implementation of telepractice procedures, particularly for language assessment, has been lacking (Campbell et al., 2020). The current study addressed this critical gap in clinical practice by describing methods for conducting both receptive and expressive vocabulary assessments that are generalizable across multiple contexts (e.g., from SLPs' homes). We explore the findings from this work and discuss the clinical implications below.

Our first research aim sought to describe the steps needed to conduct receptive and expressive vocabulary assessments via telepractice. As indicated above, although previous research has suggested that telepractice methods for administering language assessments have reliability and validity comparable to face-to-face methods, parallel research indicates that many SLPs do not agree (Tucker, 2012). Although this study did not compare telepractice and face-to-face assessment practices directly, our results extend previous findings in important ways regarding the reliability of telepractice assessments and can better support SLPs' use of virtual methods of service.

First, the final procedures described here in detail, which emerged from an extensive and iterative process of trial and error and fine-tuning, can be widely and easily applied across and within multiple contexts and environments. For example, our procedures were successful regardless of the device or technology used by the RAs who were administering the assessments and by the participating families. Whereas previous research has investigated the validity of telepractice with the SLP, and sometimes the therapy recipient, using researcher-defined and controlled technological devices with high-grade infrastructure (e.g., Grogan-Johnson et al., 2011), our results indicate that these levels of technological support are not necessary for either party. In sum, SLPs can feasibly conduct telepractice assessments using home computers or tablets, Internet bandwidth that is typical for home use, and inexpensive, readily available videoconferencing software (e.g., Zoom) with children whose families are using similar technological capabilities.

Second, beyond the technological considerations, our resulting procedures also considered the "soft skills" needed to use telepractice in a clinically meaningful and viable way. For example, previous work has identified issues such as building rapport as a potential challenge in telepractice, with some preliminary evidence suggesting no differences between face-to-face and online contexts (Freckmann et al., 2017) in terms of building and ensuring trust between the SLP and the patient/client. However, similar to studies examining the internal validity and reliability of assessment administration, there has been a lack of information as to how to use the telepractice medium in an engaging manner to build relationships, particularly with children. Our results highlighted some simple ways in which this essential aspect of service delivery can be attained. For example, conducting an orientation with a visual aid, such as PowerPoint, to explain the procedures increases the accessibility of information about the process. Incorporating interactive and hands-on activities in initial sessions can also build rapport with the child and family while ensuring the child is comfortable with the technology. Including these steps helps ensure that the assessments run smoothly from a technological standpoint, that the child and family feel confident in their roles and expectations, and that the SLP has multiple opportunities to engage with the child in an informal way prior to assessments.

It is important to note that some procedures described here may vary depending on the clinical context. For example, for our purposes, converting the paper record forms of the language assessments to electronic (Qualtrics) forms was deemed necessary for data management; however, in different settings, and/or if conducting fewer assessments, continuing to use paper forms may be best practice. Overall, however, the results reported here regarding effective methods for administering language assessments represent a much-needed step forward with respect to the practical application of telepractice services.

Our second research aim sought to substantiate the resulting procedures by reporting on the reliability of scoring participant responses on receptive and expressive vocabulary assessments conducted virtually. Overwhelmingly, the reliability data supported the feasibility of the procedures for capturing responses accurately, especially when using standardized tests adapted for online use by the publishers. Our work strengthens previous findings, however, in that this study examined the reliability of scoring receptive and expressive measures of the same language construct: vocabulary. Previous work has determined the reliability of administering both expressive and receptive measures of language, but not of the same domain (e.g., Waite et al., 2010). As such, these findings further support the use of telepractice for assessment purposes, regardless of the construct or modality.

In addition, this study expands upon earlier findings in that we report high reliability for scoring standardized assessments administered in representative environments and contexts, as detailed above. That is, although previous work suggests assessment via telepractice can be conducted with reliability similar to face-to-face assessment, those studies included contexts that may not always be replicable in terms of the technological infrastructure involved. In this way, our findings are very encouraging and suggest that the clinical application of the procedures outlined in this work may extend beyond pandemic-era constraints. Telepractice as a means of service delivery is often considered an alternative for reaching students in rural areas that may experience SLP shortages or homebound students who are medically fragile. The present findings offer strong support for using telepractice to address both of these needs and suggest that it could become a more routine option for serving even more students and SLPs.

Limitations

Limitations to the current project are as follows. First, this was a feasibility study, including only 20 participants. Additional research with a larger participant pool is warranted to continue identifying procedural components important to effective telepractice sessions. Second, this study did not directly compare telepractice with face-to-face modalities. This was not the intent of the study; however, in future studies, when in-person assessment can more safely be utilized, it would be important to measure the reliability of these procedures relative to traditional assessment. Third, all 20 participants self-selected into this study understanding that it would utilize a telepractice model. Future studies should investigate whether additional considerations or procedures are needed for students and/or families who are more reluctant or have limited resources. Fourth, this study did not address the social validity of using telepractice platforms for assessments from the caregiver and participant perspective. Future studies should gather information on how this modality is received by those being served.

In sum, our work adds to a sparse yet growing body of work substantiating the feasibility of telepractice service delivery models for children with LI. It is our hope that the current study not only adds critical data for future researchers looking to employ similar methodologies but also provides immediate, specific resources to SLPs who need effective strategies for conducting virtual assessments in the current pandemic.

Acknowledgments

This study was funded by the National Institutes of Health Grant 3R01DC016272-01A1S1 REVISED to Mary Beth Schmitt and Sherine Tambyraja. The authors would like to thank the children and families who have participated in this study, as well as the research assistants for their time, attention to detail, and rapport with the families.


References

1. Akamoglu, Y., Meadan, H., Pearson, J. N., & Cummings, K. (2018). Getting connected: Speech and language pathologists' perceptions of building rapport via telepractice. Journal of Developmental and Physical Disabilities, 30(4), 569–585. https://doi.org/10.1007/s10882-018-9603-3
2. American Speech-Language-Hearing Association. (2016). 2016 SIG 18 telepractice survey results. http://www.asha.org
3. Campbell, J., Theodoros, D., Hartley, N., Russell, T., & Gillespie, N. (2020). Implementation factors are neglected in research investigating telehealth delivery of allied health services to rural children: A scoping review. Journal of Telemedicine and Telecare, 26(10), 590–606. https://doi.org/10.1177/1357633X19856472
4. Fairweather, G. C., Lincoln, M. A., & Ramsden, R. (2016). Speech-language pathology teletherapy in rural and remote educational settings: Decreasing service inequities. International Journal of Speech-Language Pathology, 18(6), 592–602. https://doi.org/10.3109/17549507.2016.1143973
5. Freckmann, A., Hines, M., & Lincoln, M. (2017). Clinicians' perspectives of therapeutic alliance in face-to-face and telepractice speech-language pathology sessions. International Journal of Speech-Language Pathology, 19(3), 287–296. https://doi.org/10.1080/17549507.2017.1292547
6. Grogan-Johnson, S., Gabel, R. M., Taylor, J., Rowan, L. E., Alvares, R., & Schenker, J. (2011). A pilot exploration of speech sound disorder intervention delivered by telehealth to school-age children. International Journal of Telerehabilitation, 3(1), 31–42. https://doi.org/10.5195/ijt.2011.6064
7. Grogan-Johnson, S., Schmidt, A. M., Schenker, J., Alvares, R., Rowan, L. E., & Taylor, J. (2013). A comparison of speech sound intervention delivered by telepractice and side-by-side service delivery models. Communication Disorders Quarterly, 34(4), 210–220. https://doi.org/10.1177/1525740113484965
8. Hines, M., Lincoln, M., Ramsden, R., Martinovich, J., & Fairweather, C. (2015). Speech pathologists' perspectives on transitioning to telepractice: What factors promote acceptance. Journal of Telemedicine and Telecare, 21(8), 469–473. https://doi.org/10.1177/1357633X15604555
9. Hodge, M. A., Sutherland, R., Jeng, K., Bale, G., Batta, P., Cambridge, A., Detheridge, J., Drevensek, S., Edwards, L., Everett, M., Ganesalingam, C., Geier, P., Kass, C., Mathieson, S., McCabe, M., Micallef, K., Molomby, K., Pfeiffer, S., Pope, S., … Silove, N. (2019). Literacy assessment via telepractice is comparable to face-to-face assessment in children with reading difficulties living in rural Australia. Telemedicine and e-Health, 25(4), 279–287. https://doi.org/10.1089/tmj.2018.0049
10. Newcomer, P. L., & Hammill, D. D. (2019). TOLD-P:5: Test of Language Development, Primary (5th ed.). Pro-Ed.
11. Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). Tata McGraw-Hill Education.
12. Semel, E., Wiig, E. H., & Secord, W. A. (2003). Clinical Evaluation of Language Fundamentals–Fourth Edition. The Psychological Corporation.
13. Seymour, H. N., Roeper, T., & de Villiers, J. G. (2005). Diagnostic Evaluation of Language Variation–Norm Referenced. The Psychological Corporation.
14. Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2), 420–428. https://doi.org/10.1037/0033-2909.86.2.420
15. Sutherland, R., Hodge, A., Trembath, D., Drevensek, S., & Roberts, J. (2016). Overcoming barriers to using telehealth for standardized language assessments. Perspectives of the ASHA Special Interest Groups, 1(18), 41–50. https://doi.org/10.1044/persp1.SIG18.41
16. Tambyraja, S. R., Farquharson, K., & Coleman, J. (2021). Speech-language teletherapy services for school-aged children in the United States during the COVID-19 pandemic. Journal of Education for Students Placed at Risk, 26(2), 91–111. https://doi.org/10.1080/10824669.2021.1906249
17. Tucker, J. K. (2012). Perspectives of speech-language pathologists on the use of telepractice in schools: Quantitative survey results. International Journal of Telerehabilitation, 4(2), 61–72. https://doi.org/10.5195/ijt.2012.6100
18. Waite, M. C., Theodoros, D. G., Russell, T. G., & Cahill, L. M. (2010). Internet-based telehealth assessment of language using the CELF–4. Language, Speech, and Hearing Services in Schools, 41(4), 445–458. https://doi.org/10.1044/0161-1461(2009/08-0131)
19. Wallace, G., & Hammill, D. D. (2013). Comprehensive Receptive and Expressive Vocabulary Test–Third Edition (CREVT-3). Pro-Ed.
