Telemedicine Journal and e-Health. 2011 Oct;17(8):640–644. doi: 10.1089/tmj.2011.0029

Web-Based Training in Early Autism Screening: Results from a Pilot Study

Kenneth A Kobak,1 Wendy L Stone,2 Opal Y Ousley,3 Amy Swanson4
PMCID: PMC3179612  PMID: 21939382

Abstract

Background

Lack of familiarity with early signs of autism by community service providers has resulted in significant delays in children receiving early intervention services necessary to improve long-term outcomes. The Screening Tool for Autism in Toddlers and Young Children (STAT) was specifically developed to identify early behavioral features of autism. Although STAT training has been available for years, access has been limited by the small number of STAT trainers and by geographic barriers. This study evaluated the efficacy and acceptability of Web-based training of the STAT as a means of increasing accessibility to this training.

Materials and Methods

Thirty professionals from three geographic areas participated. Roughly 1 in 4 had little or no training in autism assessment. The tutorial contains a general overview, administration and scoring conventions, and item-specific content and concepts. Participants completed a pretest, then completed the STAT tutorial at their own pace, followed by a post-test and a user satisfaction questionnaire.

Results

Mean scores on STAT concepts significantly improved after taking the tutorial (p < 0.001). At pretest, only 1 person (3%) obtained correct scores on at least 80% of the items (a priori cutoff for a “pass”), compared with 22 (73%) at post-test (p < 0.001). The majority of trainees enjoyed taking the tutorial, thought it was well organized, relevant, interesting, and useful, and felt it was easy to understand and operate.

Discussion

Results support Web-based training as a promising method for promoting early identification of autism and may help overcome problems associated with the critical shortage of autism-screening professionals.

Key words: autism, diagnosis, training, Internet, computer-assisted instruction

Introduction

In recent years, the early identification of children with autism has become an important priority for research as well as clinical endeavors. A significant impetus for this movement has been the recognition that autism-specialized intervention can lead to significant gains in social, communication, behavioral, and cognitive functioning when implemented at young ages.1–3 Unfortunately, there are often long delays between the time a child is suspected of having autism and the time he or she receives a definitive diagnosis. Research over the years has consistently found that parents begin to have concerns about their child's development at an average age of 16–20 months4–8; however, many children fail to receive a definitive diagnosis of autism until the age of 4 or 4½, or even older.9–12 This delay can prevent children from receiving appropriately specialized early intervention and may ultimately increase the lifetime cost and service system demands associated with providing care and support to individuals with autism and their families.

Because early identification is the first step toward providing children and families with specialized intervention, there has been a dual focus on identifying early behavioral signs of autism as well as developing effective methods and measures for early screening. Several professional groups, including the American Academy of Neurology13 and the American Academy of Pediatrics (AAP),14 have developed practice guidelines to promote early identification and intervention for children with autism. Reports of increasing prevalence of autism spectrum disorders, currently estimated to be 1 in 110 children,15 led to the AAP's recommendation that all children receive routine autism-specific screening twice before the age of 30 months.14 However, many obstacles to conducting autism screening in primary care settings have been identified, including time constraints and a lack of familiarity or training in the early behavioral manifestations of autism, and previous studies have indicated that autism screening by primary healthcare providers is not a routine practice.16,17

An additional, and complementary, approach for increasing early detection is the use of second-stage autism-specific screeners in referral settings.17,18 After developmental concerns have been identified, second-stage autism screeners can be used by Birth to Three personnel and other community providers to direct high-risk children toward autism-specialized assessments or interventions. The Screening Tool for Autism in Toddlers and Young Children (STAT)19,20 is an interactive, play-based screening measure that was developed for this purpose. It consists of 12 activities assessing imitation, play, and communication that take about 20 min to administer. Although originally designed for children between 24 and 36 months of age, the STAT has also shown potential utility for children as young as 14 months.21 The STAT has good sensitivity and specificity, high concurrent validity with both the Autism Diagnostic Observation Schedule (ADOS) and Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) diagnoses, and good inter-rater reliability when used by professionals trained in its administration and scoring.19,20

Because of the interactive nature of the STAT items and the subtlety of the early social-communication markers of autism (i.e., negative symptoms), training on the STAT is required to ensure appropriate administration and behavioral interpretation. For over 10 years, community service providers have received STAT training through participation in formal face-to-face training workshops. More recently, a Web-based training tutorial for service providers was developed to increase access to STAT training. The purpose of this study was to provide a preliminary assessment of the acceptability and usability of the Web-based STAT tutorial as well as its effectiveness in increasing the user's knowledge of STAT administration procedures and scoring conventions.

Methods

Participants

Thirty community healthcare professionals (29 female and 1 male) from three geographic areas (Madison, WI; Nashville, TN; and Atlanta, GA) participated in the evaluation of the Web-based STAT tutorial. The sample included a range of professionals in community and academic settings who self-identified as psychologists (n=6), researchers or professors (n=5), research staff (n=5), speech-language pathologists (n=4), educators (n=3), consultants (n=3), pediatricians (n=2), a nurse, and an occupational therapist. Twelve participants (40%) had an M.D. or Ph.D., 14 (47%) had a master's degree, and 4 (13%) had a bachelor's degree. Their experience working with children with autism ranged from <1 year to 35 years (mean=12.0 years; SD=10.7; mode=4 years). About 25% of the sample (n=8) reported having little or no training in autism assessment. Twenty-five participants (83%) were Caucasian, 2 (6%) were African American, 2 (6%) were Asian, and 1 (3%) was Hispanic. Inclusion criteria were broad, to enroll professionals from a wide variety of educational backgrounds and professions, and consisted of having a college degree and currently working with children with autism spectrum disorder (ASD).

For the purpose of obtaining a benchmark, results obtained by participants completing the Web-based tutorial were compared with those obtained by individuals (n=27) participating in two face-to-face STAT training workshops before the Web-based tutorial was developed. STAT training workshops are conducted in small groups (usually 12–18 participants) in 1 day. The face-to-face training parallels the online training in terms of content and topics covered. Participants begin with a general overview of the STAT, a review of general administration and scoring guidelines, and video illustrations of scores on the STAT items, followed by ratings of videos and discussion. Although lack of randomization to the different training formats limits the inferences that can be made, this comparison sample provides a reference point for expected gains in knowledge. Participant demographics for the workshop sample were similar to those of the Web-based training cohort. The workshop sample was 70% Caucasian and 85% female. Participants included four clinical psychologists, one pediatrician, two social workers, nine speech-language or behavioral analysts/interventionists, and 11 from a variety of other related professions. Six participants (22%) had an M.D. or Ph.D., 17 (63%) had a master's degree, and 4 (15%) had a bachelor's degree.

STAT Training Tutorial

The Web-based STAT training tutorial covers the same content areas that are provided in the workshop training format and was designed using principles of adult learning and instructional design, such as use of multimodal input,22 high levels of interactivity,23 appropriate “chunking” and sequencing of information to increase retention,24 and use of “test-enhanced learning” to improve long-term retention.25 Interactivity was extensively utilized to engage learners and improve retention. The tutorial contains three content sections: a general overview of the structure of the STAT; a description of general administration procedures; and a description of item-specific content, concepts, and scoring conventions. The Overview describes the 12 STAT items that make up the domains of play (2 items), motor imitation (4 items), requesting (2 items), and directing attention (4 items). The General Administration section reviews rules such as presenting the materials exactly as specified, scoring each item before proceeding to the next item, praising the child's efforts as well as his or her successes, and remaining flexible in terms of the location and order of presenting the materials. An interactive, informal self-test is incorporated at the end of the section to reinforce learning. The Individual Item section describes the content and scoring for each of the 12 STAT items. Information for each item includes a description of what the item measures, the specific administration procedures, video examples of a child who passes and fails the item, and a final video that the trainee scores. The correct answer is presented after the trainee's response as a means of reinforcing learning.
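Purely as an illustration of this organization (the article does not describe the tutorial's software at the code level, so every name below is hypothetical), a minimal Python sketch of the item-by-domain breakdown and the per-item learning sequence might look like this:

# Hypothetical sketch of the tutorial's content structure as described above;
# not the authors' actual implementation.
STAT_DOMAINS = {
    "play": 2,                   # items per domain, per the Overview section
    "motor_imitation": 4,
    "requesting": 2,
    "directing_attention": 4,
}
assert sum(STAT_DOMAINS.values()) == 12  # the STAT comprises 12 items

# Learning sequence presented for each item in the Individual Item section.
ITEM_SEQUENCE = [
    "what the item measures",
    "specific administration procedures",
    "video example: child who passes",
    "video example: child who fails",
    "final video scored by the trainee, followed by the correct answer",
]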

In addition, two forms of evaluation were used to assess training uptake and the training experience: a pre- and post-test assessment of knowledge, and a post-test assessment of feasibility of use. The Knowledge Questionnaire comprises 25 items: 10 multiple-choice questions and 15 video vignettes. The multiple-choice questions assess the participant's knowledge of administration procedures (e.g., the use of verbal instructions and the order and presentation of items) as well as scoring conventions (e.g., coding refusals and behaviors directed to a parent instead of the examiner). The video vignettes consist of segments that the participant scores as “pass,” “fail,” or “emerge.” The Feasibility Questionnaire consists of 13 items assessing different dimensions of the training experience, such as organization of the tutorial, utility of video examples, and time allotment. Each item consists of a statement that is rated on a four-point scale (4, strongly agree; 3, agree; 2, disagree; 1, strongly disagree); scores can range from 13 to 52, with higher scores indicating greater satisfaction.
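To make these scoring conventions concrete, the following minimal Python sketch (hypothetical helper names; not the study's actual scoring code) computes a Knowledge Questionnaire percent-correct score against the 80% pass cutoff described under Procedures, and a Feasibility Questionnaire total on the 13–52 scale:

from typing import List

PASS_CUTOFF = 0.80  # a priori passing cutoff described under Procedures

def score_knowledge(correct: List[bool]) -> dict:
    """Score the 25-item Knowledge Questionnaire (10 multiple-choice questions
    plus 15 video vignettes); True means the item was answered correctly."""
    assert len(correct) == 25
    pct = sum(correct) / len(correct)
    return {"percent_correct": pct, "passed": pct >= PASS_CUTOFF}

def score_feasibility(ratings: List[int]) -> int:
    """Sum the 13 Feasibility Questionnaire items, each rated 1 (strongly
    disagree) to 4 (strongly agree); totals range from 13 to 52."""
    assert len(ratings) == 13 and all(1 <= r <= 4 for r in ratings)
    return sum(ratings)

# Example: 21 of 25 items correct (84%) passes; thirteen "agree" ratings total 39.
print(score_knowledge([True] * 21 + [False] * 4))  # {'percent_correct': 0.84, 'passed': True}
print(score_feasibility([3] * 13))                 # 39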

Procedures

The Web-based training sample received instructions on how to access the Web site; all subsequent training procedures occurred through the Web interface. Participants first completed the Knowledge Questionnaire, then completed the STAT tutorial at their own pace, and then completed the Knowledge Questionnaire again. Following completion of the post-test Knowledge Questionnaire, the correct answers and their justifications were provided to trainees, so that the assessment would serve a teaching function as well as an evaluative one. Knowledge Questionnaire results were automatically scored, stored in a database, and sent to the first author via e-mail. A score of 80% correct or better was set a priori as a passing score. After completing the post-test Knowledge Questionnaire, trainees completed the paper-and-pencil Feasibility Questionnaire and mailed it back to the first author. The workshop training sample completed a hard copy of the Knowledge Questionnaire before and after the STAT workshop. This study was reviewed by the Allendale Institutional Review Board, and participants in both the Web-based and workshop training formats were aware that their responses would be used for the purpose of program evaluation.

Results

The average time to complete the tutorial was 3.17 h (SD=0.89; range: 2–5 h). About 70% of the participants completed the tutorial in 3 h or less. Scores on the Knowledge Questionnaire showed significant improvement after completing the STAT tutorial, from a mean of 15.7 (SD=2.2) at pretest to a mean of 21.4 (SD=1.8) at post-test (t(29)=11.5, p<0.001). The average improvement was 5.7 points (SD=2.7). At pretest, only one person (3%) scored at or above 80% (our a priori cutoff for a “pass”), compared with 22 (73%) at post-test (χ2(1)=31.09, p<0.001). This improvement is comparable to that demonstrated by the reference sample of STAT workshop attendees, whose scores increased from a pretest mean of 17.1 (SD=3.2) to a post-test mean of 20.3 (SD=2.6). The average improvement from pre- to post-test was 3.2 points (SD=2.4). At pretest, 6 trainees (22%) scored at or above 80%, compared with 18 (66%) at post-test.
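For readers who wish to reproduce this style of analysis on their own training data, the sketch below applies the same comparisons (a paired t-test on pre- versus post-test totals and pass rates against the 80% cutoff) using scipy; the score vectors are simulated for illustration and are not the study's raw data, so the printed statistics will not match the values reported above exactly.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated pre/post Knowledge Questionnaire totals (out of 25) for 30 trainees;
# means and SDs loosely follow the figures reported above, but the data are made up.
pre = np.clip(np.round(rng.normal(15.7, 2.2, 30)), 0, 25)
post = np.clip(pre + np.round(rng.normal(5.7, 2.7, 30)), 0, 25)

# Paired t-test on pre- vs. post-test totals (the article reports t(29) = 11.5, p < 0.001).
t_stat, p_val = stats.ttest_rel(post, pre)
print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_val:.2e}")
print(f"mean improvement = {np.mean(post - pre):.1f} points")

# Proportion passing the a priori cutoff of 80% correct (20 of 25 items).
cutoff = 0.80 * 25
print(f"pass rate: pretest {np.mean(pre >= cutoff):.0%}, post-test {np.mean(post >= cutoff):.0%}")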

The mean score on the Feasibility Questionnaire completed only by the tutorial participants was 46.2 (SD=3.8), indicating that participants enjoyed taking the tutorial, thought it was well organized, relevant, interesting, and useful, thought the time allotted to each section was about right, and felt it was easy to understand and navigate (Table 1).

Table 1.

Results of Feasibility Questionnaire

Statement | Strongly disagree | Disagree | Agree | Strongly agree
The objectives of the tutorial were stated clearly | 0 (0%) | 0 (0%) | 6 (20%) | 24 (80%)
The tutorial was well organized and easy to follow | 0 (0%) | 0 (0%) | 1 (3%) | 29 (97%)
The tutorial was comprehensive and covered all the important information | 0 (0%) | 1 (3%) | 7 (23%) | 22 (73%)
Information was presented in an interesting manner | 0 (0%) | 0 (0%) | 11 (37%) | 19 (63%)
The video examples were helpful in illustrating the scoring of the STAT | 0 (0%) | 0 (0%) | 6 (20%) | 24 (80%)
The time allotted to cover each STAT item was about the right length | 0 (0%) | 1 (3%) | 15 (52%) | 13 (45%)
The time allotted for the general guidelines was about the right length | 0 (0%) | 1 (3%) | 12 (41%) | 16 (55%)
I feel capable of administering the STAT | 0 (0%) | 5 (17%) | 12 (40%) | 13 (43%)
I feel capable of scoring each STAT item | 0 (0%) | 3 (10%) | 18 (60%) | 9 (30%)
I feel capable of interpreting and explaining STAT results to a child's parents | 0 (0%) | 4 (13%) | 14 (47%) | 12 (40%)
It was easy to understand how to start and navigate through the CD tutorial | 0 (0%) | 2 (7%) | 8 (27%) | 20 (67%)
The CD tutorial is an effective tool for learning the STAT across ethnic groups | 0 (0%) | 1 (3%) | 15 (50%) | 14 (47%)

STAT, Screening Tool for Autism in Toddlers and Young Children.

Discussion

Results from this pilot study demonstrate that a Web-based training format can be effective for increasing knowledge about early autism screening among community providers with diverse levels of education and experience. Because the STAT involves observation of key behavioral features of young children with autism, this training tutorial may serve not only to identify children's risk status, but also to enhance recognition of early behavioral features of autism and lead to earlier referral and treatment. The dearth of trained autism experts, and the long waits for diagnostic evaluations, make the identification of signs and symptoms by community professionals who are not necessarily autism experts a critical factor in facilitating early identification of this disorder.

Lack of randomization limits the inferences that can be drawn between the two training formats. With that caveat, however, exploratory comparisons can be made regarding the types of gains to be expected in an online format compared with those found in face-to-face training. After training, both groups had similar post-test scores and similar pass rates. Mean improvement was slightly higher in the online training group, largely a function of slightly lower pretest scores. Time to complete the online tutorial was about 3 h, substantially less than the face-to-face training, which typically takes a full day. These positive results suggest that further research beyond this pilot study is warranted, using a fully randomized design to compare online with face-to-face training more rigorously.

Web-based e-learning is rapidly becoming an effective tool for mental health education and training. Similar online programs have been shown to be effective in training clinicians on the identification and assessment of the symptoms of other disorders, such as depression26 and schizophrenia.27 However, the extent to which increased knowledge resulting from this training tutorial will translate into clinical competence in administering and interpreting the STAT in clinical or community settings is not yet known. Several studies have shown that didactic and applied skills are independent skill sets and that a didactic knowledge of how to administer a measure does not necessarily imply clinical competence in its administration.28,29 Thus, future work will be needed to determine the reliability with which the STAT is administered, scored, and interpreted within community settings.

One promising new approach in telemedicine is augmenting Web-based didactic training with videoconferencing, so that the trainee can be observed in real time translating the conceptual knowledge learned in the Web-based training into actual clinical skills. This two-stage approach of combining Web-based tutorials with live remote observation via videoconference has been used successfully in training clinicians on the assessment of other psychiatric disorders,26,27 and combining face-to-face didactic training with remote observation via videoconference has recently been reported in parent training for autism.30 This combined approach has the potential to augment future Web-based STAT training and to help ensure that the training results in clinical competence in the use of the scale in community settings.

Acknowledgments

This project has been funded in whole or in part by Federal Funds from the National Institute of Mental Health, National Institutes of Health, Department of Health and Human Services, under Contract No. N44MH54080 (PI: K.A. Kobak).

Disclosure Statement

Drs. W.L. Stone and O. Ousley have a commercial interest in the STAT. No other conflicts of interest exist.

References

1. Cohen H, Amerine-Dickens M, Smith T. Early intensive behavioral treatment: Replication of the UCLA model in a community setting. J Dev Behav Pediatr. 2006;27(2 Suppl):S145–S155. doi: 10.1097/00004703-200604002-00013.
2. Dawson G. Early behavioral intervention, brain plasticity, and the prevention of autism spectrum disorder. Dev Psychopathol. 2008;20:775–803. doi: 10.1017/S0954579408000370.
3. Rogers SJ, Vismara LA. Evidence-based comprehensive treatments for early autism. J Clin Child Adolesc Psychol. 2008;37:8–38. doi: 10.1080/15374410701817808.
4. De Giacomo A, Fombonne E. Parental recognition of developmental abnormalities in autism. Eur Child Adolesc Psychiatry. 1998;7:131–136. doi: 10.1007/s007870050058.
5. Short AB, Schopler E. Factors relating to age of onset in autism. J Autism Dev Disord. 1988;18:207–216. doi: 10.1007/BF02211947.
6. Spitzer RL, Siegel B. The DSM-III-R field trial of pervasive developmental disorders. J Am Acad Child Adolesc Psychiatry. 1990;29:855–862. doi: 10.1097/00004583-199011000-00003.
7. Rutter M. Autism: Its recognition, early diagnosis, and service implications. J Dev Behav Pediatr. 2006;27(2 Suppl):S54–S58. doi: 10.1097/00004703-200604002-00002.
8. Coonrod EE, Stone WL. Early concerns of parents of children with autistic and nonautistic disorders. Infants Young Child. 2004;17:258–268.
9. Mandell DS, Listerud J, Levy SE, Pinto-Martin JA. Race differences in the age at diagnosis among Medicaid-eligible children with autism. J Am Acad Child Adolesc Psychiatry. 2002;41:1447–1453. doi: 10.1097/00004583-200212000-00016.
10. Shattuck PT, Durkin M, Maenner M, et al. Timing of identification among children with an autism spectrum disorder: Findings from a population-based surveillance study. J Am Acad Child Adolesc Psychiatry. 2009;48:474–483. doi: 10.1097/CHI.0b013e31819b3848.
11. Croen LA, Grether JK, Selvin S. Descriptive epidemiology of autism in a California population: Who is at risk? J Autism Dev Disord. 2002;32:217–224. doi: 10.1023/a:1015405914950.
12. Yeargin-Allsopp M, Rice C, Karapurkar T, Doernberg N, Boyle C, Murphy C. Prevalence of autism in a US metropolitan area. JAMA. 2003;289:49–55. doi: 10.1001/jama.289.1.49.
13. Filipek PA, Accardo PJ, Ashwal S, et al. Practice parameter: Screening and diagnosis of autism: Report of the Quality Standards Subcommittee of the American Academy of Neurology and the Child Neurology Society. Neurology. 2000;55:468–479. doi: 10.1212/wnl.55.4.468.
14. Johnson CP, Myers SM. Identification and evaluation of children with autism spectrum disorders. Pediatrics. 2007;120:1183–1215. doi: 10.1542/peds.2007-2361.
15. Rice C. Prevalence of autism spectrum disorders—Autism and Developmental Disabilities Monitoring Network, United States, 2006. Vol. 58. Atlanta, GA: Centers for Disease Control and Prevention; 2009.
16. Dosreis S, Weiner CL, Johnson L, Newschaffer CJ. Autism spectrum disorder screening and management practices among general pediatric providers. J Dev Behav Pediatr. 2006;27(2 Suppl):S88–S94. doi: 10.1097/00004703-200604002-00006.
17. Zwaigenbaum L, Stone WL. Early screening for autism spectrum disorder in clinical practice settings. In: Charman T, Stone WL, editors. Social and communication development in autism spectrum disorders. New York: Guilford; 2006. pp. 88–113.
18. Warren Z, Stone W, Humberd Q. A training model for the diagnosis of autism in community pediatric practice. J Dev Behav Pediatr. 2009;30:442–446. doi: 10.1097/DBP.0b013e3181ba0e4e.
19. Stone WL, Coonrod EE, Ousley OY. Brief report: Screening Tool for Autism in Two-Year-Olds (STAT): Development and preliminary data. J Autism Dev Disord. 2000;30:607–612. doi: 10.1023/a:1005647629002.
20. Stone WL, Coonrod EE, Turner LM, Pozdol SL. Psychometric properties of the STAT for early autism screening. J Autism Dev Disord. 2004;34:691–701. doi: 10.1007/s10803-004-5289-8.
21. Stone WL, McMahon CR, Henderson LM. Use of the Screening Tool for Autism in Two-Year-Olds (STAT) for children under 24 months: An exploratory study. Autism. 2008;12:557–573. doi: 10.1177/1362361308096403.
22. Gardner H. Multiple intelligences: The theory in practice. New York: Basic Books; 1993.
23. Williams SW. Instructional design factors and the effectiveness of web-based training/instruction. In: Cervero RM, Courtenay BC, Monaghan CH, editors. The Cyril O. Houle Scholars in Adult and Continuing Education Program global research perspectives. Athens, GA: University of Georgia Department of Adult Education; 2002. pp. 132–145.
24. Good TE, Brophy JE. Educational psychology: A realistic approach. 3rd ed. New York: Longman Publishing; 1986.
25. Roediger HL, Karpicke JD. Test-enhanced learning: Taking memory tests improves long-term retention. Psychol Sci. 2006;17:249–254. doi: 10.1111/j.1467-9280.2006.01693.x.
26. Kobak KA, Lipsitz JD, Feiger A. Development of a standardized training program for the Hamilton Depression Scale using internet-based technologies: Results from a pilot study. J Psychiatr Res. 2003;37:509–515. doi: 10.1016/s0022-3956(03)00056-6.
27. Kobak KA, Opler MG, Engelhardt N. PANSS rater training using Internet and videoconference: Results from a pilot study. Schizophr Res. 2007;92:63–67. doi: 10.1016/j.schres.2007.01.011.
28. Kobak KA, Lipsitz JD, Williams JB, Engelhardt N, Bellew KM. A new approach to rater training and certification in a multicenter clinical trial. J Clin Psychopharmacol. 2005;25:407–412. doi: 10.1097/01.jcp.0000177666.35016.a0.
29. Kobak KA, Engelhardt N, Lipsitz JD. Enriched rater training using Internet based technologies: A comparison to traditional rater training in a multi-site depression trial. J Psychiatr Res. 2006;40:192–199. doi: 10.1016/j.jpsychires.2005.07.012.
30. Baharav E, Reiser C. Using telepractice in parent training in early autism. Telemed J E Health. 2010;16:727–731. doi: 10.1089/tmj.2010.0029.
