Abstract
The purpose of this study was to assess the validity of an autism e-screener, Paisley, when utilized in a clinical research setting via a tablet application. The Paisley application used a series of play-based activities, all of which incorporated varying aspects of the ASD-PEDS. Participants included children (18–36 months; n = 198) referred for evaluation of autism spectrum disorder (ASD) and community providers (n = 66) with differing levels of familiarity with ASD. Community providers administered the Paisley application to children who then completed a comprehensive psychological evaluation. Based on comprehensive evaluation, 75% of children met diagnostic criteria for ASD. Paisley scores were significantly higher for children diagnosed with ASD (15.06) versus those not diagnosed (9.34). The newly determined ASD-PEDS cutoff score of 13 had significantly higher specificity and positive predictive value than the originally proposed cutoff of 11. Results support the use of Paisley by community providers to identify autism risk in toddlers. Limitations and strengths of the work, as well as opportunities for future clinical validation, are described.
Keywords: autism spectrum disorder, paisley project, clinical, research, e-screener, early identification
Lay Summary
Prior research has shown the importance of early screening for autism spectrum disorder (ASD) in children under the age of three to improve long-term outcomes. In this paper, we discuss the validity of a tablet-based autism screener (“Paisley”) deployed in a clinical research setting to detect early signs of ASD in children. Each child seen through this research study received a diagnostic evaluation to assess the presence or absence of ASD following their interaction with the provider using the Paisley application. Results showed that the application successfully detected ASD and that its users were satisfied with their experience using it.
I. Introduction
Although Autism Spectrum Disorder (ASD) is very common (Maenner et al., 2021), obtaining a diagnosis can be a difficult journey (Constantino et al., 2020; Wiggins et al., 2020). Screening for ASD within community settings has grown increasingly common (Carbone et al., 2020) and reflects best practice as recommended by the American Academy of Pediatrics (Johnson & Myers, 2007). However, screening and referring all children, regardless of symptom severity, onward to tertiary diagnostic centers creates diagnostic bottlenecks that delay crucial early intervention services and maintain an average age of diagnosis of over four years in many parts of the United States (Maenner et al., 2020). Promising work indicates that community providers can learn to recognize ASD, and that children with a high likelihood of having ASD can be identified within community settings via enhanced interaction protocols (Bellesheim et al., 2020; Hine et al., 2021; Juarez et al., 2018; Mazurek et al., 2021; McNally Keehn et al., 2020; Stainbrook et al., 2018), particularly in the presence of high levels of parental and provider concern. Novel technological supports could facilitate this decision making, enabling trained providers to identify young children with a high likelihood of meeting DSM-5 criteria for ASD within-practice via a series of guided, data-driven activities, prompts, and risk assessment procedures.
Many toddlers eventually diagnosed with ASD are first flagged as showing non-specific developmental differences by community care systems that lack specialized ASD diagnostic providers, including primary care clinics (Hyman et al., 2020) and state early intervention programs (Sheldrick et al., 2022). Attendance at a 24-month well-child visit has been associated with a significantly earlier diagnostic age compared with children who do not attend that visit (DeGuzman et al., 2022). However, after being flagged, it can take extended periods of time (months to over a year) for families to get appointments for diagnostic confirmation; these appointments also present barriers related to geography and urbanicity, point-in-time care without follow-up, lengthy visit times that may not be covered by insurance, and limited familiarity with a specific family’s language or cultural values (Antezana et al., 2017; Dababnah et al., 2018; Guerrero & Sobotka, 2022; Smith-Young et al., 2020).
These delays disproportionately affect minoritized children and families (Constantino et al., 2020; Kuhn et al., 2021a) with cascading effects across the lifespan. Increasingly, the field is calling on community providers to assume responsibility for, and coordination of, developmental care to provide more equitable outcomes for children with ASD, including diagnostic outcomes (Hine et al., 2018; Hyman & Johnson, 2012; Nolan et al., 2016; Zuckerman et al., 2021). Unfortunately, even when they suspect ASD, many primary care providers (PCPs) and other community screening providers, including early interventionists or pediatric community psychological providers, report low levels of training and comfort related to talking about symptoms and next steps with families (Carbone et al., 2016; Hine et al., 2021; Penner et al., 2017; Soares et al., 2014).
Existing training programs to support community providers in within-practice ASD identification increase clinical accuracy, but can present intensive time, training, supervision, and billing requirements that undermine stability and broad utilization (Hine et al., 2021; McNally Keehn et al., 2020; Swanson et al., 2014). Level 2 screening tools such as the Screening Tool for Autism in Toddlers (Stone et al., 2000) and the Rapid Interactive Screening Test for Autism in Toddlers (Lemay et al., 2020) can facilitate more rapid identification in medical and early intervention settings while also increasing provider confidence (Juarez et al., 2018; Stainbrook et al., 2018; Swanson et al., 2014). However, these existing training models, although impactful, offer only time-limited consultative support without real-time prompts for task administration.
To increase the feasibility of ongoing use in community settings without intensive resource burdens, novel decision-making technologies have been developed to identify young children likely to have ASD prior to, or in place of, intensive evaluation at tertiary care centers. Each of these requires uploading information for offline analysis. They include two commercially available options, CanvasDX (Megerian et al., 2022) and NODA (Solutions, 2019), both of which involve uploading parent- and provider-questionnaires as well as home videos for offline coding by human experts, as well as primarily research-focused applications of machine learning and artificial intelligence to analyze home videos and questionnaires (Abbas et al., 2018; Tariq et al., 2018; Wall et al., 2012). Although promising, none of these options facilitate real-time diagnostic decision-making on the part of a concerned provider. That is, there is not yet a risk assessment technology for use when a provider has a high level of concern for ASD within the context of developmental monitoring and wants a guided methodology for immediately eliciting ASD symptoms to facilitate conversations with parents and subsequent referrals.
To address this gap, we developed a tablet application to facilitate identification of likely ASD in community settings. We leveraged procedures used in the previous development of novel risk assessment tools and processes (Corona et al., 2021; Wagner et al., 2021) to identify behaviors that best discriminated young children with ASD from those without. We then consulted with clinical experts to create prompts to elicit those behaviors, a rating scale to code items and gestalt diagnostic impressions, and an application interface with real-time coaching capacity. Finally, we conducted a pilot feasibility study in a clinical research laboratory with a multidisciplinary sample of community providers with varied levels of ASD expertise to examine application functionality, utility, accuracy relative to blinded traditional psychological evaluation, and acceptability to community providers.
II. Methods
Participants
Participants included community providers as well as children referred for ASD evaluation and their primary caregivers. All participants (providers, caregivers, and children) completed informed consent and assent procedures and were compensated for completing study activities. All activities, which took place in a research laboratory, were approved by the Institutional Review Board.
Providers.
Sixty-six community pediatric providers with varying levels of familiarity with ASD participated at the research center (Licensed Psychological Providers [clinical psychologists, licensed psychological examiners], n = 8; Early Interventionists [developmental therapists, speech pathologists], n = 10; Medical Providers [nurse practitioners, medical residents], n = 47; unknown = 1; total n = 66). Providers were recruited from community partner practices and training networks.
Children.
Children (n = 206; mean age = 28.66 months, SD = 4.76 months) and primary caregivers were recruited from community referral sources (primary care providers; state early intervention system) and clinical research pathways associated with the affiliated tertiary diagnostic center and associated research programs. Eligibility criteria included ages 18–36 months (aligning with recommended ages for ASD screening in practice and the targeted Paisley age group), English speaking (due to the pilot nature of this preliminary evaluation), and adequate vision, hearing, and motor skills to complete interactive play-based tasks.
Apparatus
The Paisley application is an e-screener designed to help providers detect signs of ASD in children aged 18–36 months. This tablet-friendly web application presents the ASD-PEDS item-by-item, with embedded instructions, videos, and visual aids for real-time task administration.
Paisley (formerly known as “Autoscreen”) was designed in two phases. In the initial development phase, the design team consulted with clinical experts in the early diagnosis of autism (6 Licensed Clinical Psychologists with clinical and research reliability on validated assessment tools and 2 Developmental Behavioral Pediatricians, with a range of 4–15 years of diagnostic experience) to develop potential interactive tasks and items. In the second phase, the team engaged in an iterative design process that included pilot evaluation of procedures, feedback from participating clinicians and families, and refinement of the protocol and the interface based upon that feedback (Sarkar et al., 2018). The refined version of Paisley was used in the current work.
Paisley is a dynamic web application capable of running in the browser on any mobile device (Sarkar et al., 2018). The back end follows a microservice architecture pattern hosted in Amazon Web Services in order to take advantage of dynamic scaling features such as load balancing and automated database backups. The stack included CloudFormation (for portability of the cloud stack), EC2, RDS, and Lambda endpoints accessible via the API Gateway service. Once deployed for use in the clinical study, there were no observed instances of platform downtime, resource unavailability, or data loss over a period of more than 18 months.
The Paisley application presents the ASD-PEDS in tablet format. It is designed for use with commonly available, low-cost toys and provides guidance on what types of materials providers should have nearby. A pilot development study with community providers indicated that sessions took an average of 17.29 minutes (SD = 3.26 minutes). The ASD-PEDS items, in order of administration, are 1) free play with a suggested assortment of developmentally appropriate toys, 2) response to name, in which the provider calls the child’s name to gauge level of response, 3) turn-taking, in which the provider rolls a ball or a car to the child to elicit reciprocal play, 4) bubble play, in which the provider blows bubbles to gauge requesting strategies and body movements, 5) cause-and-effect toy, in which the provider activates a pop rocket, 6) snack, in which the provider offers a snack in a closed container that the child cannot open, 7) sound machine, in which the provider activates a noisemaking toy or an audio clip on the tablet interface, and 8) parent play, an optional activity in which a parent can show the provider how they interact with their child. Please see Corona et al. (2021) for more information about ASD-PEDS development.
When providers are first learning to use the Paisley app, it presents a mock run-through that allows them to move through the app to review activities and understand scoring criteria. Once logged in, Paisley gives providers a list of recommended toys and materials and allows them to start the session when they are ready. Once launched, Paisley presents the first task with its recommended materials and simple text-based instructions, along with the option for earpiece-delivered voice prompts if providers desire. Each task also provides a video model clip of what to do, if needed (see Figure 1). Providers then click the app screen to move to the next activity.
Figure 1.
Screenshot of Paisley instructions.
As in the ASD-PEDS, providers do not score each item as they administer it. Rather, they complete all of the items and then score 7 post-assessment behaviors that indicate elevated autism risk (Corona et al., 2021). Each behavior is scored from 1–3 (1 = most typical, 3 = most indicative of autism) and equally weighted in the final “risk assessment” score, which ranges from 7–21 (higher scores are more indicative of autism symptoms). This final score is designed to help providers gauge the likelihood of ASD within the context of ongoing developmental monitoring and parent concern. Importantly, providers did not make diagnostic decisions in this study (see Methods below).
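For illustration, the brief sketch below (written in R, the language used for the study analyses) shows how such a total score is computed and compared to a cutoff; the ratings and variable names are hypothetical and are not drawn from the Paisley code base or study data.

```r
# Hypothetical ratings for one child on the seven post-assessment behaviors (each 1-3)
behavior_ratings <- c(2, 3, 2, 1, 3, 2, 2)

# Total "risk assessment" score; possible range is 7 (all typical) to 21 (all atypical)
total_score <- sum(behavior_ratings)

# Compare the total to a chosen cutoff (e.g., the original >= 11 or the revised >= 13)
cutoff <- 13
screen_positive <- total_score >= cutoff

total_score      # 15
screen_positive  # TRUE
```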
Procedures
Following assent and consent, research staff instructed providers to spend approximately 10 minutes reviewing the Paisley application and procedures (the “mock run-through” described above). This allowed providers to ask questions of the research assistant and troubleshoot technology, such as Bluetooth earpiece connectivity, if needed. As providers reviewed the application, caregivers (who sat with children in a separate room) completed two clinical screening measures, the Social Responsiveness Scale – Second Edition (SRS-2; Constantino & Gruber, 2005) and the Modified Checklist for Autism in Toddlers – Revised (M-CHAT-R; Robins et al., 2014). A sample of 20 children returned for a second visit and repeated the Paisley administration with different providers to examine its test-retest reliability.
After the provider reviewed the administration instructions and the family completed the questionnaires, the provider and family came together in a clinical research space. The provider administered the Paisley application to the child with the parent watching nearby. Following Paisley administration, providers completed the application’s guided ASD risk assessment questions. Each provider administered the Paisley application up to five times with different children in the clinical research space.
Following the Paisley administration, the parent and child participated in a traditional diagnostic evaluation with a licensed psychological provider who was blinded to the results of the Paisley interaction. These children had not previously been diagnosed with, or evaluated for, ASD. Evaluations occurred either immediately following Paisley (n = 166) or on another day (n = 34; mean days between sessions = 19.67, SD = 14.96) as part of the research protocol. These evaluations included a clinical interview, the Autism Diagnostic Observation Schedule – Second Edition (Lord et al., 2000), the Mullen Scales of Early Learning (Mullen, 1995), and the Vineland Adaptive Behavior Scales-Third Edition (Sparrow & Cicchetti, 1985). At the end of this visit, children received diagnoses where appropriate and families were given recommendations based on best practice clinical care. Information from the Paisley administration was not incorporated into this diagnostic decision-making.
Analytic Approach
Characteristics of the study sample were analyzed using descriptive statistics. Differences between characteristics of children with and without a clinical diagnosis of ASD and their caregivers were examined using t-tests or Welch’s t-test for continuous variables and Chi-square tests for categorical variables. Welch’s t-test was used as necessary to account for unequal variances across the groups as indicated by Levene’s test.
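To make this analytic step concrete, the following R sketch applies Levene’s test and then a Student’s or Welch’s t-test for a continuous outcome, and a chi-square test for a categorical outcome, using simulated data; it illustrates the general approach rather than reproducing the study scripts, and it assumes the car package for Levene’s test (the manuscript does not name the package used).

```r
library(car)  # leveneTest(); assumed package choice, not named in the manuscript

set.seed(1)
# Simulated data loosely shaped like the study groups (148 ASD, 50 non-ASD)
dat <- data.frame(
  group = factor(rep(c("ASD", "Non-ASD"), times = c(148, 50))),
  score = c(rnorm(148, mean = 15.1, sd = 3.7), rnorm(50, mean = 9.3, sd = 2.7)),
  male  = c(rbinom(148, 1, 0.73), rbinom(50, 1, 0.50))
)

# Levene's test for equality of variances across groups
lev <- leveneTest(score ~ group, data = dat)

# Welch's t-test when variances differ; Student's t-test otherwise
equal_var <- lev[["Pr(>F)"]][1] > 0.05
t.test(score ~ group, data = dat, var.equal = equal_var)

# Chi-square test for a categorical variable (e.g., sex by diagnostic group)
chisq.test(table(dat$group, dat$male))
```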
Cronbach’s alpha statistic was calculated to assess the internal consistency of Paisley (Cronbach, 1951). A subset of 20 children was re-administered Paisley 1.5 to 13 weeks after their initial assessment to measure the test-retest reliability of Paisley. The current trial was conducted during the COVID-19 pandemic, with significant shutdowns affecting certain study activities, including the prioritization of test-retest evaluation. Poisson regression suggested that time in days was related to the difference in test scores (p = .01). The intraclass correlation coefficient (ICC) based on a single-rating, absolute-agreement, two-way mixed-effects model was also calculated to measure the test-retest reliability of Paisley total scores (Shrout & Fleiss, 1979; Koo & Li, 2016). ICC values between 0.75 and 0.90 indicate good reliability and values greater than 0.90 indicate excellent reliability.
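A minimal sketch of these reliability computations is shown below, assuming the psych and irr packages (the manuscript does not state which packages were used for these statistics); the item-level and retest data are simulated solely to demonstrate the function calls.

```r
library(psych)  # alpha() for Cronbach's alpha; assumed package choice
library(irr)    # icc() for the intraclass correlation; assumed package choice

set.seed(2)
# Simulated item-level data: 198 children x 7 behaviors, each scored 1-3
items <- as.data.frame(matrix(sample(1:3, 198 * 7, replace = TRUE), ncol = 7))

# Internal consistency across the seven scored behaviors
psych::alpha(items)$total$raw_alpha

# Simulated test-retest totals for 20 children (time 1 and time 2)
retest <- data.frame(t1 = sample(7:21, 20, replace = TRUE))
retest$t2 <- pmin(pmax(retest$t1 + sample(-3:3, 20, replace = TRUE), 7), 21)

# Single-rating, absolute-agreement, two-way model ICC
irr::icc(retest, model = "twoway", type = "agreement", unit = "single")
```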
Known-group validity was assessed by examining whether Paisley total scores were higher in children with a clinical diagnosis of ASD than in children without a clinical diagnosis of ASD using t-tests or Welch’s t-tests. Pearson’s correlations between Paisley and other measures of ASD risk were calculated to assess the convergent validity of Paisley.
We examined the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and Youden’s index of Paisley using the original published scoring cutoff for the ASD-PEDS (i.e., ≥ 11; Wagner et al., 2021; Corona et al., 2021) relative to the best estimate clinical diagnosis. In this context, sensitivity is defined as the proportion of participants with a clinical diagnosis of ASD who screened positive for ASD on Paisley (i.e., ≥ 11), and specificity is the proportion of children without a clinical diagnosis of ASD who screened negative for ASD on Paisley (i.e., < 11). The PPV represents the proportion of children who screened positive for ASD on Paisley who truly have a clinical diagnosis of ASD, and the NPV is the proportion of children who screened negative for ASD on Paisley who did not have a clinical diagnosis of ASD. Youden’s Index (J; Youden, 1950) is the sum of the sensitivity and specificity minus 1 and represents the difference in the likelihood of a positive screen among children with versus without ASD (Zhou et al., 2011). Youden’s Index can range from 0 to 1 and is independent of the absolute and relative sizes of the affected and unaffected groups (Youden, 1950). When J = 1, the underlying distributions of the scores for the affected and unaffected populations are completely separated, and the test perfectly discriminates between the two populations of patients. However, when J = 0, the distributions of the scores completely overlap, and the test is completely ineffective in discriminating between individuals with and without the target diagnosis.
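These definitions reduce to simple ratios over a two-by-two table of screening result by clinical diagnosis. The short R function below restates them directly; the counts in the example call are hypothetical and are included only to show the calculation.

```r
# Screening metrics from a 2x2 table: tp/fp/fn/tn are counts of true positives,
# false positives, false negatives, and true negatives relative to clinical diagnosis
screen_metrics <- function(tp, fp, fn, tn) {
  sens <- tp / (tp + fn)   # positive screens among children with ASD
  spec <- tn / (tn + fp)   # negative screens among children without ASD
  ppv  <- tp / (tp + fp)   # clinical ASD among positive screens
  npv  <- tn / (tn + fn)   # no clinical ASD among negative screens
  c(sensitivity = sens, specificity = spec, PPV = ppv, NPV = npv,
    youden_J = sens + spec - 1)
}

# Example with hypothetical counts
screen_metrics(tp = 123, fp = 11, fn = 25, tn = 39)
```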
Receiver operating characteristic (ROC) curve analysis was used to estimate the corresponding area under the curve (AUC) for Paisley (Zhou et al., 2011). The AUC measures the instrument’s ability to discriminate between clinical and non-clinical cases (i.e., ASD vs. non-ASD) (Hanley & McNeil, 1982; Zhou et al., 2011). The AUC can be interpreted as the probability that a randomly chosen child with a clinical diagnosis of ASD would score higher on Paisley than a randomly chosen child without a clinical diagnosis of ASD. Confidence intervals for the AUC were calculated using DeLong’s method (DeLong et al., 1988). According to consensual criteria for psychological testing, AUC values between 0.7 and 0.8 are considered “acceptable”, values between 0.8 and 0.9 are “excellent”, and values ≥ 0.9 are “outstanding” (Hosmer & Lemeshow, 2000). The ability of Paisley to distinguish between children with and without ASD was also compared with the discriminative capacity of the ADOS-2 Module T, ADOS-2 Module 1, M-CHAT, and SRS-2 using DeLong’s test for paired ROC curves (DeLong, DeLong, & Clarke-Pearson, 1988).
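The sketch below illustrates this ROC workflow with the pROC package (named in the software paragraph at the end of this section), using simulated scores; the calls to roc(), ci.auc(), and roc.test() reflect standard pROC usage rather than the exact study code.

```r
library(pROC)

set.seed(3)
# Simulated data: clinical diagnosis (1 = ASD) and two continuous measures per child
dx      <- rbinom(198, 1, 0.75)
paisley <- 9 + 5 * dx + rnorm(198, sd = 3)    # hypothetical Paisley totals
other   <- 4 + 10 * dx + rnorm(198, sd = 4)   # hypothetical comparison measure

# ROC curve and AUC for the Paisley-style score, with a DeLong confidence interval
roc_paisley <- roc(response = dx, predictor = paisley)
auc(roc_paisley)
ci.auc(roc_paisley, method = "delong")

# Paired DeLong test comparing two ROC curves estimated in the same children
roc_other <- roc(response = dx, predictor = other)
roc.test(roc_paisley, roc_other, method = "delong", paired = TRUE)
```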
Alternative scoring cutoffs for Paisley were also examined and compared. To determine the effectiveness of these potential cutoffs, sensitivity, specificity, PPV, NPV, and Youden’s Index were calculated. Given that the goal of Paisley is to enhance the ability of providers to identify children with a higher likelihood of having ASD and reduce the number of referrals to tertiary diagnostic centers regardless of symptom severity, greater weight was placed on specificity than sensitivity to minimize the occurrence of false positives. A cutoff score with a sensitivity of at least 0.70 and a specificity of at least 0.80 is considered favorable for diagnostic utility (Hong & Comer, 2015; Matthey & Petrovski, 2002). Differences between the sensitivity and specificity of the original and newly identified cutoff scores were analyzed using McNemar’s test, while differences in PPV and NPV were analyzed using the method proposed by Moskowitz and Pepe (Pepe, 2006; Zhou et al., 2011). Differences in Youden’s Index were analyzed using a two-sample t-test as detailed by Youden (1950).
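As a schematic illustration (not the study analysis), the sketch below uses the cutpointr package, named in the following paragraph, to search all cutoffs while maximizing Youden’s index on simulated data, and base R’s mcnemar.test() to compare sensitivity between two cutoffs among the children with a clinical diagnosis.

```r
library(cutpointr)

set.seed(4)
# Simulated diagnosis and Paisley-style totals
dx      <- rbinom(198, 1, 0.75)
paisley <- round(9 + 5 * dx + rnorm(198, sd = 3))
dat     <- data.frame(dx = dx, paisley = paisley)

# Search candidate cutoffs, maximizing Youden's index (sensitivity + specificity - 1)
cp <- cutpointr(dat, x = paisley, class = dx, pos_class = 1,
                method = maximize_metric, metric = youden)
summary(cp)

# McNemar's test comparing sensitivity at two cutoffs: paired positive/negative
# classifications at each cutoff, restricted to children with a clinical diagnosis
cases <- dat[dat$dx == 1, ]
mcnemar.test(table(cases$paisley >= 11, cases$paisley >= 13))
```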
Data were analyzed using R version 4.0.2 (R Core Team, 2020). The pROC package was used to conduct the ROC curve analyses (Robin et al., 2011). The cutpointr package was used to calculate indices used to select the optimal cutoff points (Thiele & Hirschfeld, 2021). The DTComPair package was used to comparatively evaluate the sensitivity, specificity, PPV, and NPV associated with different cutoff points on Paisley (Stock & Hielscher, 2014).
III. Results
Of the 206 children who participated, 8 had incomplete data on either the ASD-PEDS or the best estimate clinical diagnosis. Data for those individuals were not included in the present analyses, yielding a final sample of 198 children. Of this clinically referred sample, 148 children (75%) met diagnostic criteria for ASD. Characteristics of the final sample are presented in Table 1. Group comparisons between children with and without a clinical diagnosis of ASD are presented in Table 2.
Table 1.
Sample Characteristics.
Characteristics | Mean (SD) [range] or N (%) |
---|---|
Children (n = 198) | |
Age, months | 28.68 (4.77) [18–37] |
Sex, male | 133 (67.00%) |
Clinical diagnosis of ASD | 148 (75.00%) |
ASD-PEDS | 13.62 (4.23) [7–21] |
ADOS-2: Module T (n = 109) | 16.90 (7.67) [0–28] |
ADOS-2: Module T CSS (n = 108) | 7.39 (3.06) [1–10] |
ADOS-2: Module 1 (n = 66) | 16.59 (7.42) [0–28] |
ADOS-2: Module 1 CSS (n = 66) | 6.62 (2.75) [1–10] |
ADOS-2: Module 2 (n = 15) | 8.53 (6.63) [1–20] |
ADOS-2: Module 2 CSS (n = 15) | 4.60 (3.60) [1–10] |
M-CHAT (n = 196) | 5.74 (4.76) [0–18] |
SRS-2 (n = 181) | 82.97 (40.60) [5–167] |
Ethnicity | |
African American | 34 (17.17%) |
Asian | 1 (0.51%) |
Asian/Native American | 3 (1.52%) |
Indian American | 5 (2.53%) |
White | 141 (71.21%) |
White (Albinism) | 1 (0.51%) |
White (Arabic) | 1 (0.51%) |
White (Middle Eastern) | 1 (0.51%) |
White/African American | 7 (3.54%) |
White/Asian | 1 (0.51%) |
Unknown | 3 (1.52%) |
Hispanic | 22 (11.11%) |
Test-Retest Sample (Children n = 20) | |
Age, months | 28.25 (5.63) [19–35] |
Sex, male | 11 (55.00%) |
Interval for test-retest, days | 50.40 (25.52) [13–97] |
Clinical diagnosis of ASD | 11 (55.00%) |
Providers (n = 66) | |
Provider Profession | |
Early Interventionist | 10 (15.15%) |
Clinical Psychologist | 8 (12.12%) |
Medical Provider | 47 (71.21%) |
Unknown | 1 (1.52%) |
Certainty in Paisley impression | 3.98 (0.86) [1–5] |
Provider confidence during Paisley administration | 4.32 (0.66) [2–5] |
Table 2.
Group differences between children with and without a clinical diagnosis of ASD.
Variable | Group (n) | Mean (SD) or N (%) | Test Statistic | p-value | Effect Size |
---|---|---|---|---|---|
Age | ASD (n = 148) | 28.93 (4.26) | t*(66.34) = 1.07 | 0.29 | d = 0.19 |
Non-ASD (n = 50) | 27.94 (6.02) | ||||
Sex, male | ASD (n = 148) | 108 (54.55%) | χ²(1) = 7.93 | 0.01 | V = 0.20 |
Non-ASD (n = 50) | 25 (12.63%) | ||||
Provider Certainty | ASD (n = 129) | 3.96 (0.87) | t(169) = −0.41 | 0.68 | d = 0.07 |
Non-ASD (n = 42) | 4.02 (0.84) | ||||
Provider Confidence | ASD (n = 129) | 4.33 (0.65) | t(169) = 0.60 | 0.55 | d = 0.107 |
Non-ASD (n = 42) | 4.26 (0.70) | ||||
ASD-PEDS | ASD (n = 148) | 15.06 (3.65) | t*(113.95) = 11.81 | < 0.001 | d = 1.78 |
Non-ASD (n = 50) | 9.34 (2.69) | ||||
ADOS-2 Module T Raw Scores | ASD (n = 84) | 20.25 (4.78) | t(107) = 14.02 | < 0.001 | d = 3.19 |
Non-ASD (n = 25) | 5.64 (3.80) | ||||
ADOS-2 Module T Comparison Scores | ASD (n = 83) | 8.82 (1.58) | t(106) = 17.03 | < 0.001 | d = 3.89 |
Non-ASD (n = 25) | 2.64 (1.63) | ||||
ADOS-2 Module 1 Raw Scores | ASD (n = 55) | 18.64 (5.71) | t(64) = 6.34 | < 0.001 | d = 2.09 |
Non-ASD (n = 11) | 6.36 (6.61) | ||||
ADOS-2 Module 1 Comparison Scores | ASD (n = 55) | 7.44 (2.02) | t(64) = 7.18 | < 0.001 | d = 2.37 |
Non-ASD (n = 11) | 2.55 (2.30) | ||||
ADOS-2 Module 2 Raw Scores | ASD (n = 5) | 16.80 (2.59) | t(13) = 8.04 | < 0.001 | d = 4.40 |
Non-ASD (n = 10) | 4.40 (2.91) | ||||
ADOS-2 Module 2 Comparison Scores | ASD (n = 5) | 9.20 (0.84) | t(13) = 9.49 | < 0.001 | d = 5.20 |
Non-ASD (n = 10) | 2.30 (1.49) | ||||
M-CHAT | ASD (n = 146) | 7.12 (4.43) | t*(119.37) = 9.37 | < 0.001 | d = 1.41 |
Non-ASD (n = 50) | 1.72 (3.15) | ||||
SRS-2 | ASD (n = 135) | 95.56 (33.44) | t(179) = 8.43 | < 0.001 | d = 1.44 |
Non-ASD (n = 46) | 46.00 (37.26) |
* Welch’s t-test
Internal Consistency and Reliability
Cronbach’s alpha for Paisley was 0.88 (95% CI: 0.86 – 0.90), indicating good internal consistency. Paisley demonstrated good test-retest reliability (ICC = 0.85, 95% CI = 0.67 – 0.94, p < 0.001). The average duration of the follow-up period for test-retest was 50.4 days (SD = 25.42, range = 13 – 97).
Validity
Children with ASD had significantly higher Paisley scores than children without ASD (see Table 2). Convergent validity of Paisley was assessed by comparing children’s ASD-PEDS scores with their scores on the ADOS-2 Module T, ADOS-2 Module 1, M-CHAT, and SRS-2. Table 3 presents the Pearson correlation coefficients between Paisley and the other measures of ASD risk.
Table 3.
Pearson correlations between Paisley and other measures of ASD risk.
Variable | r | 95% CI | p |
---|---|---|---|
ADOS-2 Module T Raw Scores (n = 109) | 0.76 | 0.67 – 0.83 | < 0.001 |
ADOS-2 Module T Comparison Scores (n = 108) | 0.70 | 0.59 – 0.79 | < 0.001 |
ADOS-2 Module 1 Raw Scores (n = 66) | 0.66 | 0.50 – 0.78 | < 0.001 |
ADOS-2 Module 1 Comparison Scores (n = 66) | 0.59 | 0.41 – 0.73 | < 0.001 |
ADOS-2 Module 2 Raw Scores (n = 15) | 0.78 | 0.45 – 0.92 | < 0.001 |
ADOS-2 Module 2 Comparison Scores (n = 15) | 0.79 | 0.46 – 0.93 | < 0.001 |
M-CHAT (n = 196) | 0.49 | 0.38 – 0.59 | < 0.001 |
SRS-2 (n = 181) | 0.42 | 0.29 – 0.53 | < 0.001 |
Diagnostic Accuracy
ASD-PEDS scores were dichotomized to indicate high or low risk for ASD based on the previously established cut-off, where scores greater than or equal to 11 represent high risk for ASD. Paisley identified 134 children as high risk for ASD and 64 children as low risk. The sensitivity and specificity were 0.83 (95% CI = 0.77 – 0.90) and 0.78 (95% CI = 0.67 – 0.90), respectively. The PPV was 0.92 (95% CI = 0.87 – 0.96) and the NPV was 0.61 (95% CI = 0.49 – 0.73). The ROC curve in Figure 2 shows sensitivity against 1 − specificity for the total ASD-PEDS score derived from Paisley. Paisley performed well, resulting in an AUC of 0.89 (95% CI = 0.83 – 0.94). An AUC of 0.89 indicates that a randomly chosen child with a clinical diagnosis of ASD has an 88.6% probability of scoring higher on Paisley than a randomly chosen child without ASD.
Figure 2.
Receiver operating characteristics (ROC) curve for Paisley (AUC = 0.89, 95% CI = 0.83–0.94).
ROC analyses were also used to examine the performance of Paisley when administered by different classes of providers. The AUC of Paisley was 0.88 (95% CI = 0.74 – 1.00) when administered by early interventionists, 0.97 (95% CI = 0.92 – 1.00) when administered by psychology providers, and 0.88 (95% CI = 0.81 – 0.95) when administered by medical providers.
ROC curves comparing the diagnostic accuracy of the ASD-PEDS administered within Paisley with the ADOS-2 Module T, ADOS-2 Module 1, M-CHAT, and SRS-2 are presented in Figure 3. In the sample of children who completed Paisley and the ADOS-2 Module T (n = 109), the AUC of the ADOS-2 Module T (0.99, 95% CI = 0.97 – 1.00) was significantly higher than the AUC of Paisley (0.91, 95% CI = 0.86 – 0.96; Z = −2.78, p = 0.005). The AUC of the ADOS-2 Module T comparison scores (0.99, 95% CI = 0.97 – 1.00) was also significantly higher than the AUC of Paisley (0.91, 95% CI = 0.86 – 0.96; Z = −2.75, p = 0.01). In the sample of children who completed Paisley and the ADOS-2 Module 1 (n = 66), the AUC of the ADOS-2 Module 1 raw scores (0.92, 95% CI = 0.81 – 1.00) was significantly higher than the AUC of Paisley (0.73, 95% CI = 0.55 – 0.91; Z = −2.41, p = 0.016). Similarly, the AUC of the ADOS-2 Module 1 comparison scores (0.93, 95% CI = 0.83 – 1.00) was significantly higher than the AUC of Paisley (0.73, 95% CI = 0.55 – 0.91; Z = −2.34, p = 0.02). In the sample of children who completed Paisley and the M-CHAT (n = 196), the AUC of the M-CHAT (0.86, 95% CI = 0.80 – 0.92) was not significantly different from the AUC of Paisley (0.89, 95% CI = 0.83 – 0.94; Z = 0.64, p = 0.53). In the sample of children who completed Paisley and the SRS-2 (n = 181), the AUC of the SRS-2 (0.84, 95% CI = 0.76 – 0.92) was not significantly different from the AUC of Paisley (0.88, 95% CI = 0.82 – 0.93; Z = 0.84, p = 0.40).
Figure 3.
ROC curves comparing Paisley with (a) ADOS-2 Module T Raw Scores, (b) ADOS-2 Module T Comparison Scores, (c) ADOS-2 Module 1 Raw Scores, (d) ADOS-2 Module 1 Comparison Scores, (e) M-CHAT, and (f) SRS-2.
Optimal Cutoff Score Analysis
Table 4 presents the number of children identified as high risk and low risk, sensitivity, specificity, PPV, NPV, and Youden’s Index for all possible cutoff scores of the ASD-PEDS. A cutoff score of ≥ 13 was found to be the optimal cutoff score for Paisley given that our goal was to establish a cutoff that would minimize false positives (maximize specificity) while maintaining a sensitivity of at least 0.70. Notably, Youden’s Index also identified ≥ 13 as the optimal cutoff score. Youden’s Index is routinely used to identify optimal cutoff scores as it represents the best tradeoff between sensitivity and specificity. The specificity and positive predictive value of the ASD-PEDS using a cutoff score of ≥ 13 were significantly higher than those associated with the original cutoff score of ≥ 11 (p = 0.01 and p = 0.02, respectively).
Table 4.
Operating characteristics of Paisley for identifying ASD according to the best estimate clinical diagnosis at selected cut-offs.
Cutoff | ASD + | ASD − | Sensitivity | Specificity | PPV | NPV | Youden Index |
---|---|---|---|---|---|---|---|
≥ 8 | 184 | 14 | 0.99 | 0.26 | 0.80 | 0.93 | 0.25 |
≥ 9 | 169 | 29 | 0.97 | 0.48 | 0.85 | 0.83 | 0.45 |
≥ 10 | 145 | 53 | 0.88 | 0.70 | 0.90 | 0.66 | 0.58 |
≥ 11 | 134 | 64 | 0.83 | 0.78 | 0.92 | 0.61 | 0.61 |
≥ 12 | 126 | 72 | 0.80 | 0.84 | 0.94 | 0.58 | 0.64 |
≥ 13 | 118 | 80 | 0.76 | 0.90 | 0.96 | 0.56 | 0.66 |
≥ 14 | 106 | 92 | 0.68 | 0.90 | 0.95 | 0.49 | 0.58 |
≥ 15 | 97 | 101 | 0.63 | 0.92 | 0.96 | 0.46 | 0.55 |
≥ 16 | 77 | 121 | 0.50 | 0.94 | 0.96 | 0.39 | 0.44 |
≥ 17 | 62 | 136 | 0.41 | 0.96 | 0.97 | 0.35 | 0.37 |
≥ 18 | 42 | 156 | 0.28 | 0.98 | 0.98 | 0.31 | 0.26 |
≥ 19 | 26 | 172 | 0.18 | 1.00 | 1.00 | 0.29 | 0.18 |
≥ 20 | 17 | 181 | 0.11 | 1.00 | 1.00 | 0.28 | 0.11 |
≥ 21 | 7 | 191 | 0.05 | 1.00 | 1.00 | 0.26 | 0.05 |
IV. Discussion
This work presents a preliminary evaluation of a novel, technologically supported approach to assessing autism risk within community settings, called Paisley. Our results offer initial support for the ability of providers to use Paisley to discriminate between children who did and did not receive a formal autism diagnosis. Based on the ASD-PEDS (Corona et al., 2021), the Paisley application supports providers in actionable screening within practice, which has been identified across multiple works as a challenge to prompt, equitable care access (Constantino et al., 2020; Hamp et al., 2023; Kuhn et al., 2021b; Mazurek et al., 2021; McClure et al., 2021; Wiggins et al., 2020; Zuckerman et al., 2021). By walking providers through audio and visual prompts in real time, Paisley may address ongoing provider-reported challenges regarding low levels of training, competence, and support related to autism identification, particularly administration of interactive screening tools (Carbone et al., 2020; Mazurek et al., 2021; McCormack et al., 2020; Voigt, 2022).
Overall, the results indicate that the discriminative capacity of Paisley to detect ASD is strong (i.e., AUCs of 0.8 and above are considered “excellent”). The AUCs of the M-CHAT and SRS-2 were not significantly different from the AUCs of Paisley in the respective samples, indicating that the measures have similar diagnostic performance in isolation (when utilized independently, and not as part of a comprehensive assessment process). However, we note that Paisley provides the opportunity for providers to interact directly with their patients in the presence of their guardians, engaging in structured activities to elicit autism symptoms. Given that many parents and providers identify a need for pathways of enhanced communication and shared decision-making as part of the screening process (Locke et al., 2020; Weitlauf et al., 2023), and that providers report a need for increased competence and capacity for serving these families (Hamp et al., 2023), tools such as Paisley may offer clinical benefit by giving parents the opportunity to observe and comment on their children’s behavior while talking with their providers.
Our findings also provided new information about optimal scoring cut-offs for the ASD-PEDS. Although previous work identified a benchmark of 11 as indicative of ASD risk (Corona et al., 2021), a cutoff of 13 had significantly higher specificity and positive predictive value in this sample. This higher cut-off aligns with our goal of identifying children with high likelihood to meet DSM-5 criteria for ASD within the community, reserving specialized tertiary care evaluation – with its longer wait times, increased cost, and heavier burden on provider time – for children with less well-defined phenotypes. This aligns with calls for innovative approaches to recognizing autism symptoms (Zwaigenbaum & Warren, 2021), particularly given that within-practice assessment of autism and autism risk within primary care may be associated with more rapid and equitable access to intervention (McNally Keehn et al., 2020; Schrader et al., 2020; Zuckerman et al., 2021). Importantly, like any risk assessment tool, Paisley is not designed for standalone utilization. Although evaluated in that format for the sake of this validation study, Paisley, like approved Level 2 autism screeners for primary care (Stone et al., 2004; American Academy of Pediatrics, 2021), is designed for use by knowledgeable clinical providers with a comprehensive understanding of a child’s profile and a family’s needs.
Although promising, this study had several limitations. It included providers from different backgrounds but did not document provider familiarity with ASD assessment prior to using the app. Future work should control for ASD knowledge when interpreting ratings of provider acceptability. This study also deployed Paisley in a research laboratory rather than in clinical settings. It is unknown how it would function in a community pediatric or early intervention setting, without a research assistant to explain it and troubleshoot any technology issues. Paisley providers rated diagnostic impressions and certainty as part of this visit but did not then give feedback to families. It will be important for future studies to evaluate Paisley as part of real-time clinical workflow and provider-patient interactions during developmental monitoring. Although our clinically referred sample provided important preliminary data, future work should also enlist a larger, more phenotypically diverse sample of children to better understand the adaptive, cognitive, and language characteristics of children for whom Paisley does or does not enable risk identification. Lastly, while each provider administered the Paisley application up to five times with different children, we did not account for this lack of independence among observations.
Despite these limitations, this study offers important support for the use of e-screeners to aid clinical decision-making regarding autism risk. Understanding how technology can support providers in real time, as part of direct patient care, likely represents an important avenue for future research to continue the mission of providing more equitable outcomes for children with ASD (Hine et al., 2018; Hyman & Johnson, 2012; Nolan et al., 2016; Zuckerman et al., 2021).
Funding:
This research was supported by the National Institute of Mental Health (NIMH) and the National Institutes of Health (NIH) under Award Numbers R43MH115528 and R44MH115528. Further support was received through the Vanderbilt Kennedy Center and the Vanderbilt Institute for Clinical and Translational Research. The Vanderbilt Kennedy Center is funded by a Eunice Kennedy Shriver National Institute of Child Health and Human Development Grant under Award Number 1P50HD103537-0. The Vanderbilt Institute for Clinical and Translational Research (VICTR) is funded by the National Center for Advancing Translational Sciences (NCATS) Clinical Translational Science Award (CTSA) Program, Award Number 5UL1TR002243-03.
Footnotes
Conflicts of Interest: Amy Weitlauf and Zachary Warren received consultant fees from Adaptive Technology Consulting from December 2018 through March 2019. The authors have no additional financial relationships or conflicts of interest relevant to this article to disclose.
Compliance with Ethical Standards: All procedures were approved and overseen by the Vanderbilt University Medical Center Institutional Review Board.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request. Joshua Wade was the principal investigator for this work and confirms that he had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
References
- Abbas H, Garberson F, Glover E, & Wall DP (2018). Machine learning approach for early detection of autism by combining questionnaire and home video screening. J Am Med Inform Assoc. 10.1093/jamia/ocy039
- American Academy of Pediatrics (2021). Autism Spectrum Disorder: Links to Commonly Used Screening Instruments and Tools. AAP Toolkits.
- Antezana L, Scarpa A, Valdespino A, Albright J, & Richey JA (2017). Rural Trends in Diagnosis and Services for Autism Spectrum Disorder. Frontiers in Psychology, 8, 590. 10.3389/fpsyg.2017.00590
- Bellesheim KR, Kizzee RL, Curran A, & Sohl K (2020). ECHO Autism: Integrating Maintenance of Certification with Extension for Community Healthcare Outcomes Improves Developmental Screening. J Dev Behav Pediatr, 41(6), 420–427. 10.1097/dbp.0000000000000796
- Carbone PS, Campbell K, Wilkes J, Stoddard GJ, Huynh K, Young PC, & Gabrielsen TP (2020). Primary Care Autism Screening and Later Autism Diagnosis. Pediatrics, 146(2). 10.1542/peds.2019-2314
- Carbone PS, Norlin C, & Young PC (2016). Improving Early Identification and Ongoing Care of Children With Autism Spectrum Disorder. Pediatrics, 137(6). 10.1542/peds.2015-1850
- Constantino J, & Gruber C (2005). Social Responsiveness Scale (SRS). Western Psychological Services.
- Constantino JN, Abbacchi AM, Saulnier C, Klaiman C, Mandell DS, Zhang Y, Hawks Z, Bates J, Klin A, Shattuck P, Molholm S, Fitzgerald R, Roux A, Lowe JK, & Geschwind DH (2020). Timing of the Diagnosis of Autism in African American Children. Pediatrics, 146(3). 10.1542/peds.2019-3629
- Corona LL, Wagner L, Wade J, Weitlauf AS, Hine J, Nicholson A, Stone C, Vehorn A, & Warren Z (2021). Toward Novel Tools for Autism Identification: Fusing Computational and Clinical Expertise. Journal of Autism and Developmental Disorders. 10.1007/s10803-020-04857-x
- Dababnah S, Shaia WE, Campion K, & Nichols HM (2018). “We Had to Keep Pushing”: Caregivers’ Perspectives on Autism Screening and Referral Practices of Black Children in Primary Care. Intellectual and Developmental Disabilities, 56(5), 321–336. 10.1352/1934-9556-56.5.321
- DeGuzman PB, Lyons G, Huang G, Keim-Malpass J, & Mazurek MO (2022). Statewide Analysis Reveals Period of Well-Child Visit Attendance for Earlier Diagnosis of Autism Spectrum Disorder. The Journal of Pediatrics, 241, 181–187.e181. 10.1016/j.jpeds.2021.09.028
- Guerrero MGB, & Sobotka SA (2022). Understanding the Barriers to Receiving Autism Diagnoses for Hispanic and Latinx Families. Pediatr Ann, 51(4), e167–e171. 10.3928/19382359-20220322-03
- Hamp N, DeHaan SL, Cerf CM, & Radesky JS (2023). Primary Care Pediatricians’ Perspectives on Autism Care. Pediatrics, 151(1). 10.1542/peds.2022-057712
- Hine JF, Herrington CG, Rothman AM, Mace RL, Patterson BL, Carlson KL, & Warren ZE (2018). Embedding Autism Spectrum Disorder Diagnosis Within the Medical Home: Decreasing Wait Times Through Streamlined Assessment. J Autism Dev Disord, 48(8), 2846–2853. 10.1007/s10803-018-3548-3
- Hine JF, Wagner L, Goode R, Rodrigues V, Taylor JL, Weitlauf A, & Warren ZE (2021). Enhancing developmental-behavioral pediatric rotations by teaching residents how to evaluate autism in primary care. Autism: the international journal of research and practice, 1362361320984313. 10.1177/1362361320984313
- Hyman SL, & Johnson JK (2012). Autism and pediatric practice: toward a medical home. J Autism Dev Disord, 42(6), 1156–1164. 10.1007/s10803-012-1474-3
- Hyman SL, Levy SE, & Myers SM (2020). Identification, Evaluation, and Management of Children With Autism Spectrum Disorder. Pediatrics, 145(1). 10.1542/peds.2019-3447
- Johnson CP, & Myers SM (2007). Identification and evaluation of children with autism spectrum disorders. Pediatrics, 120(5), 1183–1215. 10.1542/peds.2007-2361
- Juarez AP, Weitlauf AS, Nicholson A, Pasternak A, Broderick N, Hine J, Stainbrook JA, & Warren Z (2018). Early Identification of ASD Through Telemedicine: Potential Value for Underserved Populations. J Autism Dev Disord. 10.1007/s10803-018-3524-y
- Kuhn J, Levinson J, Udhnani MD, Wallis K, Hickey E, Bennett A, Fenick AM, Feinberg E, & Broder-Fingert S (2021a). What Happens After a Positive Primary Care Autism Screen Among Historically Underserved Families? Predictors of Evaluation and Autism Diagnosis. J Dev Behav Pediatr, 42(7), 515–523. 10.1097/dbp.0000000000000928
- Kuhn J, Levinson J, Udhnani MD, Wallis K, Hickey E, Bennett A, Fenick AM, Feinberg E, & Broder-Fingert S (2021b). What Happens After a Positive Primary Care Autism Screen Among Historically Underserved Families? Predictors of Evaluation and Autism Diagnosis. Journal of Developmental & Behavioral Pediatrics, 42(7), 515–523. 10.1097/dbp.0000000000000928
- Lemay JF, Amin P, Langenberger S, & McLeod S (2020). Experience with the Rapid Interactive Test for Autism in Toddlers in an Autism Spectrum Disorder Diagnostic Clinic. J Dev Behav Pediatr, 41(2), 95–103. 10.1097/dbp.0000000000000730
- Locke J, Ibanez LV, Posner E, Frederick L, Carpentier P, & Stone WL (2020). Parent Perceptions About Communicating With Providers Regarding Early Autism Concerns. Pediatrics, 145(Suppl 1), S72–S80. 10.1542/peds.2019-1895J
- Lord C, Risi S, Lambrecht L, Cook EH, Leventhal BL, DiLavore PC, Pickles A, & Rutter M (2000). The Autism Diagnostic Observation Schedule-Generic: A standard measure of social and communication deficits associated with the spectrum of autism. Journal of Autism and Developmental Disorders, 30(3), 205–223. 10.1023/A:1005592401947
- Maenner MJ, Shaw KA, Baio J, Washington A, Patrick M, DiRienzo M, Christensen DL, Wiggins LD, Pettygrove S, Andrews JG, Lopez M, Hudson A, Baroud T, Schwenk Y, White T, Rosenberg CR, Lee LC, Harrington RA, Huston M, … Dietz PM (2020). Prevalence of Autism Spectrum Disorder Among Children Aged 8 Years - Autism and Developmental Disabilities Monitoring Network, 11 Sites, United States, 2016. MMWR Surveill Summ, 69(4), 1–12. 10.15585/mmwr.ss6904a1
- Maenner MJ, Shaw KA, Bakian AV, Bilder DA, Durkin MS, Esler A, Furnier SM, Hallas L, Hall-Lande J, Hudson A, Hughes MM, Patrick M, Pierce K, Poynter JN, Salinas A, Shenouda J, Vehorn A, Warren Z, Constantino JN, … Cogswell ME (2021). Prevalence and Characteristics of Autism Spectrum Disorder Among Children Aged 8 Years - Autism and Developmental Disabilities Monitoring Network, 11 Sites, United States, 2018. MMWR Surveill Summ, 70(11), 1–16. 10.15585/mmwr.ss7011a1
- Mazurek MO, Kuhlthau K, Parker RA, Chan J, & Sohl K (2021). Autism and General Developmental Screening Practices Among Primary Care Providers. J Dev Behav Pediatr, 42(5), 355–362. 10.1097/dbp.0000000000000909
- McClure LA, Lee NL, Sand K, Vivanti G, Fein D, Stahmer A, & Robins DL (2021). Connecting the Dots: a cluster-randomized clinical trial integrating standardized autism spectrum disorders screening, high-quality treatment, and long-term outcomes. Trials, 22(1), 319. 10.1186/s13063-021-05286-6
- McCormack G, Dillon A, Healy O, Walsh C, & Lydon S (2020). Primary Care Physicians’ Knowledge of Autism and Evidence-Based Interventions for Autism: A Systematic Review. Review Journal of Autism and Developmental Disorders, 7. 10.1007/s40489-019-00189-4
- McNally Keehn R, Ciccarelli M, Szczepaniak D, Tomlin A, Lock T, & Swigonski N (2020). A Statewide Tiered System for Screening and Diagnosis of Autism Spectrum Disorder. Pediatrics, 146(2). 10.1542/peds.2019-3876
- Megerian JT, Dey S, Melmed RD, Coury DL, Lerner M, Nicholls CJ, Sohl K, Rouhbakhsh R, Narasimhan A, Romain J, Golla S, Shareef S, Ostrovsky A, Shannon J, Kraft C, Liu-Mayo S, Abbas H, Gal-Szabo DE, Wall DP, & Taraman S (2022). Evaluation of an artificial intelligence-based medical device for diagnosis of autism spectrum disorder. NPJ Digit Med, 5(1), 57. 10.1038/s41746-022-00598-6
- Mullen EL (1995). Mullen Scales of Early Learning. American Guidance Service.
- Nolan R, Walker T, Hanson JL, & Friedman S (2016). Developmental Behavioral Pediatrician Support of the Medical Home for Children with Autism Spectrum Disorders. J Dev Behav Pediatr, 37(9), 687–693. 10.1097/DBP.0000000000000348
- Penner M, King GA, Hartman L, Anagnostou E, Shouldice M, & Hepburn CM (2017). Community General Pediatricians’ Perspectives on Providing Autism Diagnoses in Ontario, Canada: A Qualitative Study. J Dev Behav Pediatr, 38(8), 593–602. 10.1097/DBP.0000000000000483
- Robins DL, Casagrande K, Barton M, Chen CM, Dumont-Mathieu T, & Fein D (2014). Validation of the modified checklist for Autism in toddlers, revised with follow-up (M-CHAT-R/F). Pediatrics, 133(1), 37–45. 10.1542/peds.2013-1813
- Sarkar A, Wade J, Swanson A, Weitlauf A, Warren Z, & Sarkar N (2018). A Data-Driven Mobile Application for Efficient, Engaging, and Accurate Screening of ASD in Toddlers. In (pp. 560–570). Springer International Publishing. 10.1007/978-3-319-92049-8_41
- Schrader E, Delehanty AD, Casler A, Petrie E, Rivera A, Harrison K, Paterniti T, Sebastiany L, Nottke C, Sohl K, Levy SE, & Wetherby AM (2020). Integrating a New Online Autism Screening Tool in Primary Care to Lower the Age of Referral. Clinical Pediatrics, 59(3), 305–309. 10.1177/0009922819900947
- Sheldrick RC, Carter AS, Eisenhower A, Mackie TI, Cole MB, Hoch N, Brunt S, & Pedraza FM (2022). Effectiveness of Screening in Early Intervention Settings to Improve Diagnosis of Autism and Reduce Health Disparities. JAMA Pediatr, 176(3), 262–269. 10.1001/jamapediatrics.2021.5380
- Smith-Young J, Chafe R, & Audas R (2020). “Managing the Wait”: Parents’ Experiences in Accessing Diagnostic and Treatment Services for Children and Adolescents Diagnosed With Autism Spectrum Disorder. Health Service Insights, 13, 1178632920902141. 10.1177/1178632920902141
- Soares N, Wu Q, & Kanungo S (2014). Developmental-behavioral pediatric teaching of medical students: a national COMSEP survey. Teach Learn Med, 26(4), 366–372. 10.1080/10401334.2014.945392
- Solutions, B. I. (2019). NODA.
- Sparrow SS, & Cicchetti DV (1985). Diagnostic Uses of the Vineland Adaptive-Behavior Scales. Journal of Pediatric Psychology, 10(2), 215–225. 10.1093/jpepsy/10.2.215
- Stainbrook JA, Weitlauf AS, Juarez AP, Taylor JL, Hine J, Broderick N, Nicholson A, & Warren Z (2018). Measuring the service system impact of a novel telediagnostic service program for young children with autism spectrum disorder. Autism, 1362361318787797. 10.1177/1362361318787797
- Stone WL, Coonrod EE, & Ousley OY (2000). Brief report: screening tool for autism in two-year-olds (STAT): development and preliminary data. J Autism Dev Disord, 30(6), 607–612. https://www.ncbi.nlm.nih.gov/pubmed/11261472
- Stone WL, Coonrod EE, Turner LM, & Pozdol SL (2004). Psychometric properties of the STAT for early autism screening. Journal of Autism and Developmental Disorders, 34(6), 691–701. 10.1007/s10803-004-5289-8
- Swanson AR, Warren ZE, Stone WL, Vehorn AC, Dohrmann E, & Humberd Q (2014). The diagnosis of autism in community pediatric settings: does advanced training facilitate practice change? Autism, 18(5), 555–561. 10.1177/1362361313481507
- Tariq Q, Daniels J, Schwartz JN, Washington P, Kalantarian H, & Wall DP (2018). Mobile detection of autism through machine learning on home video: A development and prospective validation study. PLoS Med, 15(11), e1002705. 10.1371/journal.pmed.1002705
- Voigt RG (2022). Going Back to the Future: A First Step in Addressing the DBP Workforce Crisis? AAP Section on Developmental and Behavioral Pediatrics Newsletter.
- Wagner L, Corona LL, Weitlauf AS, Marsh KL, Berman AF, Broderick NA, Francis S, Hine J, Nicholson A, Stone C, & Warren Z (2021). Use of the TELE-ASD-PEDS for Autism Evaluations in Response to COVID-19: Preliminary Outcomes and Clinician Acceptability. J Autism Dev Disord, 51(9), 3063–3072. 10.1007/s10803-020-04767-y
- Wall DP, Kosmicki J, Deluca TF, Harstad E, & Fusaro VA (2012). Use of machine learning to shorten observation-based screening and diagnosis of autism. Transl Psychiatry, 2, e100. 10.1038/tp.2012.10
- Weitlauf AS, Miceli A, Vehorn A, Dada Y, Pinnock T, Harris JW, Hine J, & Warren Z (2023). Screening, Diagnosis, and Intervention for Autism: Experiences of Black and Multiracial Families Seeking Care. Journal of Autism and Developmental Disorders. Advance online publication. 10.1007/s10803-022-05861-z
- Wiggins LD, Durkin M, Esler A, Lee LC, Zahorodny W, Rice C, Yeargin-Allsopp M, Dowling NF, Hall-Lande J, Morrier MJ, Christensen D, Shenouda J, & Baio J (2020). Disparities in Documented Diagnoses of Autism Spectrum Disorder Based on Demographic, Individual, and Service Factors. Autism Res, 13(3), 464–473. 10.1002/aur.2255
- Youden WJ (1950). Index for rating diagnostic tests. Cancer, 3(1), 32–35.
- Zuckerman KE, Broder-Fingert S, & Sheldrick RC (2021). To reduce the average age of autism diagnosis, screen preschoolers in primary care. Autism, 25(2), 593–596. 10.1177/1362361320968974
- Zwaigenbaum L, & Warren Z (2021). Commentary: Embracing innovation is necessary to improve assessment and care for individuals with ASD: a reflection on Kanne and Bishop (2020). J Child Psychol Psychiatry, 62(2), 143–145. 10.1111/jcpp.13271