Journal of Speech, Language, and Hearing Research (JSLHR)
. 2020 Feb 27;63(3):827–833. doi: 10.1044/2019_JSLHR-19-00336

A Randomized Controlled Trial Investigating Online Training for Prelinguistic Communication

Julie L. Feuerstein, Lesley B. Olswang
PMCID: PMC7229705  PMID: 32109176

Abstract

Purpose

This study explored the utility of online training as a platform for teaching early intervention speech-language pathologists to recognize potentially communicative, prelinguistic behaviors in young children with physical disabilities and complex communication needs.

Method

Using a randomized controlled trial, 45 early intervention speech-language pathologists were randomly assigned to one of three conditions within an online training: practice with implicit problem-solving (identification condition), practice with explicit problem-solving (reflection condition), or no practice (control condition). Knowledge about early communication, skill at recognizing prelinguistic behaviors, time taken to complete the training, and perceptions of the training experience were examined.

Results

Participants in the no-practice control condition took significantly less time to complete the training, achieved the same positive outcomes on the knowledge and skill assessments, and rated the training as equally appealing, compared with participants assigned to the more time-intensive identification and reflection practice conditions.

Conclusions

Results suggest the importance of considering efficiency and appeal when designing successful trainings for moving evidence into practice.


Integrating research evidence into clinical practice is a major challenge facing clinicians and researchers alike. The challenge for researchers is to design interventions that are both effective in promoting change (internally valid) and feasible to use in authentic practice settings (externally valid). The challenge for clinicians is to seek out, interpret, and apply the evidence to their own clinical caseloads. Research suggests that simply disseminating information about evidence-based protocols via traditional strategies, such as manuals or journal publications, may increase knowledge, but may not be sufficient to support new skill acquisition or to foster longer term behavior change (Beidas et al., 2011). Clinician training is required to accomplish these goals and, ultimately, support sustained use of evidence-based protocols in clinical practice (Fixsen et al., 2005). Given the demands placed on both clinicians' and researchers' time, training must be effective, efficient, and appealing. Online training may be one platform to accomplish these goals (Brown & Woods, 2012; Hamad et al., 2010; Kyzar et al., 2014). For clinicians, online training offers an opportunity to self-pace through materials at a convenient time and place (Cook et al., 2008; Dimeff et al., 2009). For researchers, online training offers an opportunity to reach a larger number of participants across a wider range of geographical areas, in a more cost-effective and time-efficient manner than face-to-face training permits (Brouwers et al., 2011; Cucciare et al., 2008; Dimeff et al., 2009). Evaluating training techniques that maximize learner outcomes is a critical step in implementation research and contributes to knowledge about effective implementation strategies (Becker & Stirman, 2011; Fixsen et al., 2005).

The current study examined the effectiveness of three different practice techniques within an online, self-guided training designed to teach early intervention speech-language pathologists (EI SLPs) to recognize potentially communicative prelinguistic behaviors produced by young children with physical disabilities and complex communication needs (CCN). Specifically, these behaviors included gaze with or without gestures and vocalizations. Recognizing the prelinguistic behaviors these children produce can be especially challenging: Some children's behaviors are unconventional or idiosyncratic, whereas others are subtle and fleeting, making them difficult to identify as potentially communicative (Arens et al., 2005; Cress et al., 2000; Sigafoos et al., 2000). Triadic Gaze Intervention (TGI) is an evidence-based protocol designed to help clinicians accurately recognize these often subtle and difficult-to-identify behaviors, shape them into intentional signals, and build communication competence early in life (Olswang et al., 2013, 2014). We created an online training for recognizing child behaviors, one core component of TGI, because this component previously has proved difficult for clinicians to learn (Feuerstein et al., 2017). The aim was to explore the utility of online training for this specific component and to examine training techniques that may be relevant to a broader program of implementation research.

To accomplish this aim, we examined different conditions under which practice and problem-solving may be integrated into online training: either implicitly, via close-ended (i.e., multiple-choice) identification questions, or explicitly, via open-ended (i.e., short-answer) reflection questions, each compared against a no-practice control condition. We hypothesized that the reflection condition would take more time to complete but ultimately would prove a more effective form of training practice. This hypothesis was based on the extant literature supporting practice plus explicit problem-solving through reflection as a means of consolidating skill (e.g., Mann et al., 2009). By experimentally examining practice techniques within training, results from this study contribute to the growing body of implementation research in our discipline.

Research Questions:

  1. Is there a significant between-groups difference observed in training effectiveness, as measured by (a) knowledge scores on pretest and posttest assessment and (b) skill in identifying prelinguistic communication behaviors at posttest using video exemplars?

  2. Is there a significant between-groups difference observed in training efficiency, as measured by time taken to complete the training?

  3. Is there a significant between-groups difference observed in training appeal, as measured by participants' ratings on a posttraining survey?

Method

Design

The study utilized a randomized controlled design. Sample size was determined based on feasibility, given project resources. This research was approved by the institutional review board at the University of Washington.

Participants

Recruitment and Enrollment

EI SLPs were recruited from six states in the Pacific Northwest, via an informational e-mail and recruitment flyer sent to regional speech-language-hearing associations, EI agencies, direct service providers with publicly available contact information, and academic colleagues from local universities. The e-mail and recruitment flyer provided potential participants with contact information for the lead investigator (first author). The lead investigator responded to all inquiries by conducting a telephone screening to obtain informed consent and determine eligibility for study participation. Eligible participants met the following criteria: (a) were state licensed to practice speech-language pathology; (b) held active membership with the American Speech-Language-Hearing Association; (c) were fluent in written and spoken English; (d) had access to a laptop, desktop, or tablet computer with an Internet connection; and (e) were currently treating at least one child with physical disabilities, defined as any child on the clinician's current caseload who demonstrated a delay in gross motor development sufficient to qualify for EI services. Individuals who participated in past TGI training were excluded. Figure 1 presents a CONSORT flow diagram, which displays participant information from enrollment through data analysis (Schulz et al., 2010).

Figure 1. Participant recruitment and randomization flow diagram, from enrollment through data analysis. TGI = Triadic Gaze Intervention; SLP = speech-language pathologist.

Of the 54 eligible participants, 49 (91%) completed a demographic questionnaire and were randomly allocated to one of three training conditions by the first author, who was blind to the questionnaire results prior to randomization. Randomization was conducted to ensure relatively equal numbers of participants per group: 17 were allocated to the identification condition, 15 to the reflection condition, and 17 to the control condition. Three of the 49 participants never initiated the training; therefore, no pretest or posttest data were collected for them. Because only one of the remaining 46 participants deviated from the protocol and was excluded from analysis, a per-protocol (PP) rather than intent-to-treat (ITT) analysis was employed. Given that this one participant represented a minimal amount of missingness (2%), the exclusion likely did not bias the results. Thus, the final study sample consisted of 45 participants (15 per condition).

Participant Characteristics

Table 1 presents demographics for the final sample (N = 45). All were female and held a master's degree in speech-language pathology. The majority had either less than 2 or more than 11 years of work experience (40.0% and 28.9%, respectively). There were no significant differences in any of the baseline variables, by condition.

Table 1.

Participant demographic characteristics by training condition.

Variable | Identification (n = 15) | Reflection (n = 15) | Control (n = 15) | Total (N = 45) | χ² or F | p value
Female, n (%) | 15 (100.0) | 15 (100.0) | 15 (100.0) | 45 (100.0) | ++ |
Master's degree, n (%) | 15 (100.0) | 15 (100.0) | 15 (100.0) | 45 (100.0) | ++ |
Age, n (%) | | | | | 5.77 | .67
 18–24 | 0 (0.0) | 0 (0.0) | 1 (6.7) | 1 (2.2) | |
 25–34 | 5 (33.3) | 8 (53.3) | 7 (46.7) | 20 (44.4) | |
 35–44 | 4 (26.7) | 4 (26.7) | 2 (13.3) | 10 (22.2) | |
 45–54 | 3 (20.0) | 2 (13.3) | 4 (26.7) | 9 (20.0) | |
 55–64 | 3 (20.0) | 1 (6.7) | 1 (6.7) | 5 (11.1) | |
Ethnicity, n (%) | | | | | 6.78 | .56
 White or Caucasian | 11 (73.3) | 14 (93.3) | 12 (80.0) | 37 (82.2) | |
 Hispanic or Latino | 0 (0.0) | 0 (0.0) | 1 (6.7) | 1 (2.2) | |
 Asian/Pacific Islander | 2 (13.3) | 1 (6.7) | 2 (13.3) | 5 (11.1) | |
 Other | 1 (6.7) | 0 (0.0) | 0 (0.0) | 1 (2.2) | |
 Multiple | 1 (6.7) | 0 (0.0) | 0 (0.0) | 1 (2.2) | |
Certification, n (%) | | | | | 0.55 | .76
 CF-SLP | 1 (6.7) | 2 (13.3) | 1 (6.7) | 4 (8.9) | |
 CCC-SLP | 14 (93.3) | 13 (86.7) | 14 (93.3) | 41 (91.1) | |
Work status, n (%) | | | | | 5.23 | .48
 Part time | 4 (26.7) | 4 (26.7) | 2 (13.3) | 10 (22.2) | |
 Full time | 11 (73.3) | 9 (60.0) | 13 (86.7) | 33 (73.3) | |
 Contractor | 0 (0.0) | 1 (6.7) | 0 (0.0) | 1 (2.2) | |
 Other | 0 (0.0) | 1 (6.7) | 0 (0.0) | 1 (2.2) | |
Current caseload, M (SD) | 5.5 (9.60) | 4.6 (3.54) | 5.5 (4.12) | 5.2 (6.24) | 0.10 | .91
Experience in years, n (%) | | | | | 0.67 | .99
 0–2 | 5 (33.3) | 6 (40.0) | 7 (46.7) | 18 (40.0) | |
 3–5 | 4 (26.7) | 4 (26.7) | 3 (20.0) | 11 (24.4) | |
 6–10 | 1 (6.7) | 1 (6.7) | 1 (6.7) | 3 (6.7) | |
 11+ | 5 (33.3) | 4 (26.7) | 4 (26.7) | 13 (28.9) | |
Work location, n (%) | | | | | 11.27 | .08
 Urban | 6 (40.0) | 8 (53.3) | 5 (33.3) | 19 (42.2) | |
 Suburban | 6 (40.0) | 6 (40.0) | 2 (13.3) | 14 (31.1) | |
 Rural | 3 (20.0) | 0 (0.0) | 5 (33.3) | 8 (17.8) | |
 More than one | 0 (0.0) | 1 (6.7) | 3 (20.0) | 4 (8.9) | |

Note. ++ = Statistic not available because variable is a constant; CF-SLP = Clinical Fellow in Speech-Language Pathology; CCC-SLP = Certificate of Clinical Competence in Speech-Language Pathology; Urban = population > 50,000; Suburban = population 10,000–50,000; Rural = population < 10,000; Current caseload = number of children on clinician's current caseload who have physical disabilities.

Training Procedures

The online training included three modules, developed and delivered on a secure site using Canvas Learning Management Software (Instructure, Inc., 2019). Training modules were completed sequentially. Participants were asked to complete all modules plus the pretraining and posttraining assessments in one sitting, taking breaks between modules as needed.

Training Module 1: Instruction

All participants viewed a 21-min PowerPoint presentation recorded by the first author; the content and organization resulted from presentations delivered by members of our research laboratory at peer-reviewed conferences over the previous decade. Module 1 reviewed prelinguistic communication development, described communication and motor characteristics of young children with physical disabilities and CCN, and presented a communication continuum (see Figure 2) that introduced nine gaze behaviors representing single, dual, and triadic focus, each with and without gestures and vocalizations (Brady et al., 2012).

Figure 2. Communication continuum (adapted from Brady et al., 2012).

Training Module 2: Demonstration

Module 2 provided an operational definition and corresponding video exemplar for each of the nine behaviors along the communication continuum. Videos of young children (ages 10–24 months) with physical disabilities and CCN secondary to a variety of etiologies (e.g., cerebral palsy, Down syndrome) were selected from a larger bank of videos collected as part of previous research and consented for such use (Olswang et al., 2014). Video clips were de-identified beyond what could be ascertained by visual recognition alone. All videos were previously coded for child behaviors by two independent coders; only videos for which 100% agreement in behavioral coding was documented were used as exemplars. A third expert coder reviewed and verified child behaviors in each of the nine videos.

Training Module 3: Practice + Feedback

Module 3 presented a case study video of one expert EI SLP (a member of our research team) implementing TGI. The case study video was divided into three 2-min segments. Participants viewed each segment and then practiced recognizing the child's potentially communicative behavior(s) in different ways, depending upon the condition to which they were assigned. Participants assigned to the identification condition practiced with implicit problem-solving by answering close-ended, multiple-choice practice questions. After viewing a video segment, these participants identified the child's behavior from a multiple-choice list of possible behaviors along the communication continuum. Participants assigned to the reflection condition practiced with explicit problem-solving by responding in writing to three open-ended questions designed to target different components of reflective practice (e.g., observing, hypothesizing, and integrating past experiences with new knowledge). Participants assigned to the control condition passively viewed the same video segments but did not overtly practice recognizing child behaviors.

Feedback was provided to each participant, regardless of condition. Following each video segment and practice condition, the video was replayed with annotations that labeled components of the TGI protocol delivered by the clinician (e.g., provide opportunity, wait for child response). At the end of the annotated video replay, a prompt appeared on the screen, “What was the child's most sophisticated behavior?” followed by a 5-s pause. Following this pause, a still image of the child's most sophisticated behavior appeared, with a caption that labeled the behavior from the communication continuum.

Data Collection and Description of Measures

Training Effectiveness: Pretraining/Posttraining Knowledge Assessment

The knowledge assessment consisted of 20 items targeting (a) early communication development (seven items), (b) characteristics of children with physical disabilities and CCN (seven items), and (c) communication behaviors from the continuum described previously (six items). Items were piloted with SLP graduate students; only one item was altered significantly following pilot testing.

Items consisted of open-ended (e.g., short answer) and close-ended (e.g., true/false, multiple choice) questions and included a balanced number of item types per content area. Items were presented in random order, and for multiple-choice questions, response options were also presented in random order at pretest and posttest. Participants were required to progress through the knowledge assessment one item at a time and were not permitted to go back and change their answers. Responses were scored as correct or incorrect (1, 0) by Canvas, and a total score was calculated (maximum score = 20).

Training Effectiveness: Posttraining Skill Assessment

The skill assessment examined participants' abilities to apply the knowledge learned in the training to a novel set of videos following completion of all three training modules. This assessment was administered at posttest only and thus served as a measure of immediate generalization. Each of the nine behaviors from the communication continuum was represented in three different videos, for a total of 27 videos (nine behaviors × three different, novel video exemplars = 27 total items). Each of the 27 videos was selected from the pool of videos described above and reviewed by three expert coders previously trained to reliability. Only videos on which two of three coders agreed on the child behaviors were used. These codes served as the gold standard against which the participants' responses were compared. Videos were presented in random order. For each video, participants selected the child's most sophisticated behavior from the communication continuum. Responses were scored as correct or incorrect (1, 0), a total score was calculated (maximum raw score = 27), and percent correct was obtained (total number correct divided by total number possible × 100).
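The scoring arithmetic described above (each item marked correct or incorrect, a raw total out of 27, and a percent-correct score) can be sketched as follows. The behavior labels, answer key, and error positions below are hypothetical examples, not the study's coding scheme or data.

```python
# Sketch of the skill-assessment scoring described above: each of the 27
# video items is scored correct/incorrect (1, 0) against the gold-standard
# behavior code, then a raw total and percent correct are computed.
# The labels and responses are hypothetical examples.

def score_skill_assessment(responses, gold_standard):
    """Return (raw_total, percent_correct) for one participant."""
    if len(responses) != len(gold_standard):
        raise ValueError("one response is required per video item")
    item_scores = [1 if r == g else 0 for r, g in zip(responses, gold_standard)]
    raw_total = sum(item_scores)                      # maximum raw score = 27
    percent_correct = raw_total / len(gold_standard) * 100
    return raw_total, percent_correct

# Hypothetical gold-standard codes: nine continuum behaviors x three videos each.
gold = [f"behavior_{b}" for b in range(9) for _ in range(3)]
responses = list(gold)
for i in (0, 5, 10, 15, 20):                          # five hypothetical errors
    responses[i] = "unfocused"

total, pct = score_skill_assessment(responses, gold)
print(total, round(pct, 1))  # 22 81.5
```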

Training Efficiency

Training efficiency was assessed by examining the mean number of minutes taken to complete all three training modules, as logged by the Canvas training site.

Training Appeal

A 20-item posttraining survey was administered immediately following the skill assessment. Survey items were adopted from two measures: the Workshop Evaluation Form described by Bartholomew et al. (2007) and the Satisfaction Survey described by Kyzar et al. (2014). Survey items included statements about the training's usability, practicality, and acceptability; perceptions of self-efficacy following training; mental effort required to complete the training; and overall satisfaction with their training experiences. Participants rated how strongly they agreed with each statement, using a 5-point, Likert-type rating scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree), and a mean rating for total appeal was calculated (range: 1–5).

Results

Effectiveness

Knowledge

Using a PP analysis, results of the 2 (time) × 3 (condition) repeated measures analysis of variance (ANOVA) indicated a statistically significant main effect of time, F(1) = 280.46, p < .001, η² = .87. On average, participants scored significantly higher on the knowledge assessment at posttest (M = 16.62, SD = 1.45) than at pretest (M = 10.71, SD = 1.94). There was no statistically significant Time × Condition interaction, F(2) = 0.19, p = .84.

Skill

Agreement between participants' responses and expert coders' responses for recognizing single versus dual versus triadic focus behaviors produced by children in the videos was examined using a one-way ANOVA. All participants achieved a high degree of agreement with expert coders (M = 80%, SD = 9%); no statistically significant effect of condition on percent correct was observed, F(2) = 0.72, p = .50.

Efficiency

Five participants were excluded from the efficiency analysis (two from the identification condition, one from the control condition, and two from the reflection condition), due to an error in the Canvas training logs' reporting of time spent on one or more of the training modules. Results from a one-way ANOVA indicated a statistically significant effect of condition on mean number of minutes taken to complete the training modules, F(2) = 6.31, p < .01. Post hoc analysis using Tukey's honestly significant difference indicated that the control condition participants took significantly less time (19.64 fewer min, on average) to complete the training modules (M = 52.08, SD = 13.42) than the reflection condition participants (M = 71.71, SD = 14.62), p < .01.
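The efficiency analysis above (a one-way ANOVA on completion times across the three conditions, followed by Tukey's honestly significant difference post hoc test) can be sketched with SciPy. The per-participant completion times below are simulated for illustration only; the group sizes reflect the usable logs after exclusions, but the values are not the study's data.

```python
# Sketch of the efficiency analysis: one-way ANOVA on minutes taken to
# complete the training modules, followed by Tukey's HSD post hoc test.
# Completion times are simulated for illustration (not the study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
identification = rng.normal(60, 14, size=13)  # minutes; 13 usable logs
reflection = rng.normal(72, 15, size=13)
control = rng.normal(52, 13, size=14)

# Omnibus test: does mean completion time differ by condition?
f_stat, p_value = stats.f_oneway(identification, reflection, control)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# Pairwise post hoc comparisons (requires a recent SciPy release).
posthoc = stats.tukey_hsd(identification, reflection, control)
print(posthoc)  # mean differences, confidence intervals, adjusted p values
```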

Appeal

A Kruskal–Wallis H test (a nonparametric analysis) was conducted to determine whether there were statistically significant differences among the three conditions in mean ratings for overall training appeal on the posttraining survey. Mean ranks were highest for participants in the control condition (24.90), followed by participants in the reflection condition (23.83), and lowest for participants in the identification condition (20.27). However, these differences were not statistically significant, χ²(2) = 1.03, p = .60.
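The appeal analysis (a Kruskal–Wallis H test comparing mean survey ratings across the three conditions) can be sketched the same way. The per-participant mean ratings below are simulated for illustration only, not the study's survey data.

```python
# Sketch of the appeal analysis: Kruskal-Wallis H test comparing mean
# posttraining survey ratings (5-point Likert scale) across conditions.
# Ratings are simulated for illustration (not the study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# One mean appeal rating per participant (averaged over the 20 survey items),
# clipped to the 1-5 rating scale.
identification = np.clip(rng.normal(3.9, 0.4, size=15), 1, 5)
reflection = np.clip(rng.normal(4.0, 0.4, size=15), 1, 5)
control = np.clip(rng.normal(4.1, 0.4, size=15), 1, 5)

h_stat, p_value = stats.kruskal(identification, reflection, control)
print(f"H(2) = {h_stat:.2f}, p = {p_value:.2f}")
```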

Discussion

The successful translation of evidence-based protocols into clinical practice relies on feasible and effective strategies for training clinicians in their use. Training must be considered as part of the clinical research process if we wish to fully understand and evaluate how to best implement evidence into practice. This study moves one step forward in tackling a formative question for our discipline: How do we begin to integrate training into clinical research?

To begin addressing this question, this exploratory study evaluated different practice conditions within an easily accessible, online, self-guided training for EI SLPs learning one evidence-based component of the TGI protocol: recognizing potentially communicative behaviors from an established continuum of expected prelinguistic behaviors. Results provided preliminary evidence for the training's effectiveness in helping clinicians to accurately identify broad categories of child prelinguistic gaze behaviors. Specifically, results suggested that the most efficient version of this training (no practice, control condition) was as effective and appealing as the more time-intensive practice versions of the training. Thus, our hypothesis that the reflection condition would support greater skill acquisition was not supported by the data from this study. One explanation for this finding is that the training conditions, although designed to target different kinds of practice plus problem-solving (explicit vs. implicit), were not maximally different from each other or the control condition. All conditions included feedback with time for the participants to potentially engage in their own type of implicit problem-solving. These results carry implications for future TGI research and have application for considering training of other clinical interventions.

These results suggest the value of online training as a major strategy for teaching one of the TGI components to clinicians who have busy schedules yet are committed to learning an evidence-based protocol. These results are encouraging for planning and researching how an online approach might be situated within a comprehensive TGI training package. Undoubtedly, not all elements of an evidence-based protocol may be appropriately taught through online training. However, results from this study suggest the potential value of online training for at least some protocols or aspects of protocols. For example, online training might be useful when multiple exemplars are needed to illustrate any variations of a target behavior as part of an intervention protocol. Perhaps most importantly, this study emphasizes the importance of considering clinicians' needs and preferences that may best support learning and ultimate implementation.

Limitations and Future Directions

Our limited ability to assess the validity of the knowledge and skills assessments for predicting whether clinicians will be better able to implement TGI posttraining remains a limitation of the current study. Assessing clinicians' abilities to successfully implement TGI in practice, following training, and evaluating child outcomes were beyond the scope of this study but are obvious and important endeavors for future research.

There are some deviations from CONSORT reporting guidelines that should be acknowledged. First, a PP (versus ITT) analysis was employed. This analytic approach was chosen as only one of the 46 participants who initiated the online training deviated from the protocol. Given that one subject represents a minimal amount of missingness in this sample (2%), the exclusion of this subject's data likely did not introduce potential bias in the results. However, use of ITT analysis in the future would guard against such bias. Finally, this trial was not preregistered, and there are likely power limitations given the small sample size.

Conclusions

Considering when and how training is designed, delivered, and evaluated is relevant to both clinicians and researchers. Although clinicians likely are committed to implementing protocols with high fidelity to achieve optimal client outcomes, many have limited time to participate in training. The burden of learning evidence-based protocols should not fall solely on clinicians. Rather, examining the effectiveness of training strategies must be considered a critical step in clinical research. Furthermore, clinicians' opinions about the type of practice and problem-solving experiences that support their learning are important to consider. In future research, participants' experiences with and perceptions about training should be more carefully documented as researchers begin to pull training into the clinical research process. Hybrid designs, which combine clinical effectiveness and implementation research methodology, offer one practical approach for addressing such questions (Curran et al., 2012).

Ultimately, bringing clinicians and researchers together to tackle these challenges will advance our discipline by providing evidence to guide the training of critical stakeholders. Such efforts will serve to close the research–practice gap and ensure high-quality services to individuals with communication disorders.

Acknowledgments

This work was supported by an institutional training grant through the National Center for Advancing Translational Sciences of the National Institutes of Health Award TL1TR000422, awarded to the first author, and the University of Washington Gatzert Child Welfare Fellowship, also awarded to the first author.

We would like to thank the children, families, and clinicians who made this research possible. Special thanks to Patricia Dowden and Gay Lloyd Pinder, for their unending support of and contributions to this work.


References

  1. Arens K., Cress C., & Marvin C. (2005). Gaze-shift patterns of young children with developmental disabilities who are at risk for being nonspeaking. Education and Training in Developmental Disabilities, 40(2), 158–170. [Google Scholar]
  2. Bartholomew N. G., Joe G. W., Rowan-Szal G. A., & Simpson D. D. (2007). Counselor assessments of training and adoption barriers. Journal of Substance Abuse Treatment, 33(2), 193–199. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Becker K. D., & Stirman S. W. (2011). The science of training in evidence-based treatments in the context of implementation programs: Current status and prospects for the future. Administration and Policy in Mental Health and Mental Health Services Research, 38(4), 217–222. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Beidas R. S., Koerner K., Weingardt K. R., & Kendall P. C. (2011). Training research: Practical recommendations for maximum impact. Administration and Policy in Mental Health and Mental Health Services Research, 38(4), 223–237. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Brady N. C., Fleming K., Thiemann-Bourque K., Olswang L., Dowden P., Saunders M. D., & Marquis J. (2012). Development of the communication complexity scale. American Journal of Speech-Language Pathology, 21(1), 16–28. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Brouwers M. C., Makarski J., Durocher L. D., & Levinson A. J. (2011). E-learning interventions are comparable to user's manual in a randomized trial of training strategies for the AGREE II. Implementation Science, 6(1), 81. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Brown J. A., & Woods J. J. (2012). Evaluation of a multicomponent online communication professional development program for early interventionists. Journal of Early Intervention, 34(4), 222–242. [Google Scholar]
  8. Cook D. A., Levinson A. J., Garside S., Dupras D. M., Erwin P. J., & Montori V. M. (2008). Internet-based learning in the health professions: A meta-analysis. JAMA, 300(10), 1181–1196. [DOI] [PubMed] [Google Scholar]
  9. Cress C., Shapley K., Linke M., Havelka S., Dietrich C., Elliott J., & Clark J. (2000). Characteristics of intentional communication in young children with physical impairments. Presented at the 9th Biennial Conference of the International Society for Augmentative and Alternative Communication, Washington, D.C., United States. [Google Scholar]
  10. Cucciare M. A., Weingardt K. R., & Villafranca S. (2008). Using blended learning to implement evidence-based psychotherapies. Clinical Psychology: Science and Practice, 15(4), 299–307. [Google Scholar]
  11. Curran G. M., Bauer M., Mittman B., Pyne J. M., & Stetler C. (2012). Effectiveness-implementation hybrid designs: Combining elements of clinical effectiveness and implementation research to enhance public health impact. Medical Care, 50(3), 217–226. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Dimeff L. A., Koerner K., Woodcock E. A., Beadnell B., Brown M. Z., Skutch J. M., Paves A. P., Bazinet A., & Harned M. S. (2009). Which training method works best? A randomized controlled trial comparing three methods of training clinicians in dialectical behavior therapy skills. Behaviour Research and Therapy, 47(11), 921–930. [DOI] [PubMed] [Google Scholar]
  13. Feuerstein J., Olswang L. B., Greenslade K., Pinder G. L., Dowden P., & Madden J. (2017). Moving triadic gaze intervention into practice: Measuring clinician attitude and implementation fidelity. Journal of Speech, Language, and Hearing Research, 60(5), 1285–1298. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Fixsen D. L., Naoom S. F., Blase K. A., Friedman R. M., & Wallace F. (2005). Implementation research: A synthesis of the literature. Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network. [Google Scholar]
  15. Hamad C. D., Serna R. W., Morrison L., & Fleming R. (2010). Extending the reach of early intervention training for practitioners: A preliminary investigation of an online curriculum for teaching behavioral intervention knowledge in autism to families and service providers. Infants and Young Children, 23(3), 195–208. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Instructure Inc. (2019). Canvas learning management software [Computer software]. https://www.canvaslms.com/ [Google Scholar]
  17. Kyzar K. B., Chiu C., Kemp P., Aldersey H. M., Turnbull A. P., & Lindeman D. P. (2014). Feasibility of an online professional development program for early intervention practitioners. Infants and Young Children, 27(2), 174–191. [Google Scholar]
  18. Mann K., Gordon J., & MacLeod A. (2009). Reflection and reflective practice in health professions education: A systematic review. Advances in Health Sciences Education, 14(4), 595–621. [DOI] [PubMed] [Google Scholar]
  19. Olswang L., Dowden P., Feuerstein J., Greenslade K., Pinder G. L., & Fleming K. (2014). Triadic gaze intervention for young children with physical disabilities. Journal of Speech, Language, and Hearing Research, 57(5), 1740–1753. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Olswang L., Feuerstein J., Pinder G. L., & Dowden P. (2013). Validating dynamic assessment of triadic gaze for young children with severe disabilities. American Journal of Speech-Language Pathology, 22(3), 449–462. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Schulz K. F., Altman D. G., Moher D., & CONSORT Group. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. BMJ, 340, C332. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Sigafoos J., Woodyatt G., Keen D., Tait K., Tucker M., Roberts-Pennell D., & Pittendreigh N. (2000). Identifying potential communicative acts in children with developmental and physical disabilities. Communication Disorders Quarterly, 21(2), 77–86. [Google Scholar]
