Abstract
A wide array of digital supports (such as apps) have been developed for the autism community, many of which have little or no evidence to support their effectiveness. A Delphi study methodology was used to develop a consensus on what constitutes good evidence for digital supports among the broader autism community, including autistic people and their families, as well as autism-related professionals and researchers. A four-round Delphi consultation with 27 panel members resulted in agreement on three categories for which evidence is required: reliability, engagement and effectiveness of the technology. Consensus was also reached on four key sources of evidence for these three categories: hands-on experience, academic sources, expert views and online reviews. These were differentially weighted as sources of evidence within these three categories.
Lay abstract
Digital supports are any type of technology that has been intentionally developed to improve daily living in some way. A wide array of digital supports (such as apps) have been developed specifically for the autism community, but there is little or no evidence of whether they work. This study sought to identify what types of evidence the autistic community valued and wanted to see provided, to enable an informed choice to be made regarding digital supports. A consensus was developed among autistic people and their families, practitioners (such as therapists and teachers) and researchers, to identify the core aspects of evidence that everyone agreed were useful. In all, 27 people reached agreement on three categories for which evidence is required: reliability, engagement and the effectiveness of the technology. Consensus was also reached on four key sources of evidence for these three categories: hands-on experience, academic sources, expert views and online reviews. The resulting framework allows the level of evidence behind any technology to be evaluated. The framework can be used by autistic people, their families, practitioners and researchers to ensure that decisions concerning the provision of support for autistic people are informed by evidence, that is, ‘evidence-based practice’.
Keywords: autism, co-development, Delphi study, digital support, evidence-based practice
Autism spectrum disorder (hereafter autism) is a life-long condition characterised by persistent differences in social communication and interaction alongside restricted and repetitive patterns of behaviour, interests or activities (American Psychiatric Association, 2013). Studies in Asia, Europe and North America have reported an average autism prevalence of between 1% and 2%, with three to four times as many males as females diagnosed (Baio et al., 2018). The impact on the economy from intervention costs, lost earnings, and care and support for children and adults with autism is estimated at £32 billion per year in the United Kingdom and $175 billion per year in the United States (Buescher et al., 2014). Supports developed for autism have the potential to help members of the autistic community achieve better life quality, greater autonomy and inclusion (Brosnan et al., 2019). This is best achieved through participatory research approaches which critically reflect on the current status of support with the autistic and broader autism communities, in order to identify how the field can develop more inclusively (Fletcher-Watson et al., 2019; Parsons et al., 2019).
Reviews of the academic literature have shown that there is a growing number of digital supports for autistic individuals (Chia et al., 2018; Grynszpan et al., 2014; Odom et al., 2015; Pennington, 2010; Ploog et al., 2013; Ramdoss et al., 2011; Virnes et al., 2015; Wainer & Ingersoll, 2011; Wong et al., 2015; Zervogianni et al., in press). Digital supports are defined as ‘any electronic item/equipment/application/or virtual network that is used intentionally to increase/maintain, and/or improve daily living, work/productivity, and recreation/leisure capabilities’ (Odom et al., 2015, p. 3806). In the past decade, touchscreen, tangible and immersive digital technologies have become increasingly popular and accessible, with many autistic people and their parents/carers reporting high levels of digital technology use for supporting both leisure and academic pursuits (Knight et al., 2013; Laurie et al., 2018; MacMullin et al., 2016; Pennington, 2010; Shane & Albert, 2008). Digital technologies have been developed to support autism in areas such as social skills and social interaction (for reviews see Camargo et al., 2014; Grossard et al., 2017; Ramdoss et al., 2011; Schlosser & Wendt, 2008) and emotion recognition (for a review see Berggren et al., 2018). Digital technology, in both school and home settings, is being used in a variety of supportive ways, such as increasing autonomy, reducing anxiety and increasing social opportunities for autistic people (Hedges et al., 2018), encapsulated by the term ‘digital support’. To illustrate the scale of development in this area, one curated database of autism apps (http://www.appyautism.com/en/) lists over 400 apps for the iOS format alone.
Digital supports aim to facilitate a wide range of outcomes for autistic people, across a wide variety of ages (Wong et al., 2015). Despite the extensive use of digital supports for autism, most digital supports available to the autistic community have little or no evidence to support their effectiveness (Constantin et al., 2017; Kim et al., 2018). Studies reporting on the effects of digital supports are often in-depth case-study reports (e.g. De Leo et al., 2011; Hagiwara & Smith Myles, 1999; Herrera et al., 2008; Mechling et al., 2009; Mechling & Savidge, 2011; Parsons et al., 2006). Beginning with 29,105 potential articles, Wong et al. (2015) identified 27 focused intervention practices for autism that met their inclusion criteria. An intervention practice met the level of research evidence necessary to be included if it was supported by (1) two high-quality experimental or quasi-experimental design studies conducted by two different research groups, (2) five high-quality single case design studies conducted by three different research groups and involving a total of 20 participants across studies or (3) a combination of research designs including at least one high-quality experimental/quasi-experimental design and three high-quality single case designs, conducted by more than one researcher or research group (Wong et al., 2015). Applying similar inclusion criteria for research evidence, Knight et al. (2013) identified 29 studies of technology-based interventions; of these, however, only three single-subject studies and no group studies met the criteria for quality of research evidence. Grynszpan et al. (2014) conducted a meta-analysis of digital technology supports for autism and identified only 22 out of 379 studies (6%) using pre-post group designs. Studies were included based on criteria for quality of evidence that took into account participants’ diagnoses, outcome measures and interaction with digital technology. Of the 22 pre-post group studies, only 10 followed a randomised controlled design (2.6% of the initial sample). The analysis of efficacy of these 10 studies provided evidence for a beneficial effect of technology-based training for autistic children overall, irrespective of age and intelligence quotient (IQ). The effect size was in the small-to-medium range, with significant heterogeneity among studies. Together, these reviews indicate that digital supports can be effective for autism, but only a very small proportion of the research evidence from group or single case designs is of sufficient quality to permit an informed decision on whether to use the digital support.
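To make this inclusion rule concrete, the following is a minimal Python sketch of the Wong et al. (2015) decision procedure. It is purely illustrative: the review applied these criteria through manual screening, and the `Study` fields here are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Study:
    design: str           # "group" (experimental/quasi-experimental) or "single_case"
    high_quality: bool    # met the review's quality indicators
    research_group: str   # identifier of the research group that ran the study
    n_participants: int

def meets_inclusion_criteria(studies: list[Study]) -> bool:
    """Apply the Wong et al. (2015) rule to the studies supporting one practice."""
    hq = [s for s in studies if s.high_quality]
    group = [s for s in hq if s.design == "group"]
    single = [s for s in hq if s.design == "single_case"]

    # (1) Two high-quality group-design studies from two different research groups.
    if len(group) >= 2 and len({s.research_group for s in group}) >= 2:
        return True
    # (2) Five high-quality single case studies from three different research
    #     groups, involving at least 20 participants across studies.
    if (len(single) >= 5
            and len({s.research_group for s in single}) >= 3
            and sum(s.n_participants for s in single) >= 20):
        return True
    # (3) A combination: at least one group design and three single case
    #     designs, conducted by more than one researcher or research group.
    if (len(group) >= 1 and len(single) >= 3
            and len({s.research_group for s in hq}) > 1):
        return True
    return False
```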
Under these circumstances, using evidence to make an informed choice about supports for members of the autistic community and broader autism community (including professionals and researchers) is challenging. The digital supports that do have published evidence of effectiveness are frequently developed in research projects, and are rarely made available to the autistic consumer (Constantin et al., 2017). Dijkers et al. (2012) argue that research findings and related expert opinions represent only one source of potential information influencing support-related decisions in health professions and sciences in general. Other sources of potential evidence include (1) personal (e.g. own experience and expertise as well as recommendations from peers), (2) professional practice guidelines (e.g. clinical recommendations) and (3) personal values and preferences (alongside societal values and norms). In addition, online information, from a product website to digital store reviews, may be a potential source of evidence for digital supports, especially when these are commercially available. Such sources of evidence have varying degrees of perceived independence, and relevance to the consumer’s priorities. The abundance of digital supports for autism and the lack of research evidence make it difficult for the autistic and broader autism communities to select the most appropriate digital support for their needs. A framework to evaluate the effectiveness of digital supports from multiple sources of potential evidence is urgently required to support evidence-based practice (EBP); developing such a framework is the aim of this study.
EBP describes the integration of the best available research, clinical expertise, patient values and circumstances and healthcare system policies (Dijkers, 2011; Sackett et al., 1996). EBP has its origins within medicine, and clinical expertise refers to both the clinician’s individual knowledge acquired through clinical experience and practice, as well as external clinical evidence based on relevant scientific research using the best available methodologies, with meta-analyses and randomised controlled trials (RCTs) being considered ‘gold standard’ methods (Sackett et al., 1996). This is integrated with the values and circumstances within the context of broader healthcare policy to ensure that practice is evidence-based. If a practice is not evidence-based, it risks being inapplicable to a specific patient, out of date or even potentially harmful to the patient.
Efforts have been made to extend EBP to primary care and specialised clinics for autism (Anagnostou et al., 2014), but defining EBP is not straightforward in this context (Mesibov & Shea, 2011). Dijkers et al. (2012) outline issues faced by professionals when trying to apply EBP. For instance, their routine clinical practice is often remote from the controlled circumstances in which an RCT is conducted. Specifically, patients often have comorbidities (e.g. attention deficit hyperactivity disorder (ADHD), intellectual disabilities) that do not match the inclusion criteria for an RCT. Other factors include unexpected life events occurring during the evaluation period that affect the outcome of the support and, specifically for autism, the heterogeneity of the condition (Mesibov & Shea, 2011). Criteria for EBP in autism support have been proposed by Reichow et al. (2008) which take into account both group designs and single case designs. Strong, adequate or weak judgements can be made concerning the rigour of the underlying research based on common primary quality indicators, including clear and reproducible accounts of participant characteristics, dependent measures and independent variables (in addition to secondary quality indicators including inter-observer agreement, blind raters, procedural fidelity, generalisation and maintenance and social validity). Depending on the quality and quantity of the research (e.g. see the inclusion criteria described by Wong et al., 2015, above), EBP for a support for autism can then be classified as established or promising (see Reichow & Volkmar, 2010; Reichow et al., 2008). Such criteria are particularly valuable in the context of digital supports as they are more able to take account of the range of possible uses of technology, the multitude of ways technology can be personalised and the variety of potential outcome measures for digital supports. Evaluating rapidly developing technology-based supports in RCTs is difficult, given the mismatch between the timelines of commercial and academic progress (Fletcher-Watson, 2015).
EBP is the integration of best available research and clinical expertise with patient values. Integrating the values and opinions of the autism community within the consideration of the research evidence is therefore essential to EBP (see also Fletcher-Watson et al., 2019; Parsons et al., 2019). This study aimed to co-develop a framework for evaluating the evidence base for digital supports for autism, through better understanding of what constitutes evidence for the autistic and broader autism communities and what sources are being used to obtain that knowledge when considering digital supports for autism. We used an online, four-round Delphi study methodology, ideal for integrating the perspectives of multiple stakeholders (Hasson et al., 2000), with feedback managed by a moderator at all stages (Trevelyan & Robinson, 2015). The Delphi study methodology was selected as it has been proposed to be more effective for group-based judgement and decision-making than traditional group meetings by both increasing a group’s access to multiple interpretations and views and decreasing the negative features of group discussions such as domineering individuals and opinions (Belton et al., 2019; Hasson et al., 2000; Rowe & Wright, 2001; see also Humphrey-Murto & de Wit, 2019). The Delphi methodology was therefore chosen as an ideal format for systematically capturing and integrating opinion from a diverse group of experts, who were not co-located and remained anonymous from each other (Goodman, 1987; Hsu & Sandford, 2007). Since the method allows each individual to contribute anonymously and in their own time, the study allowed us to accommodate different communication preferences that do not include face-to-face communication and to avoid direct confrontation between people of differing opinions. Allowing participants to contribute at their own pace without having to manage live group discussions therefore made it easier to include autistic individuals.
Methods
Panel members
Four key groups of stakeholders were identified: (1) autistic people, (2) families of autistic people, (3) professionals who support autistic people and (4) researchers – all with experience of using or developing digital supports for autism and advising others on the topic (see Table 1). The literature recommends between 15 and 30 panel members (Hasson et al., 2000; Paliwoda, 1983), and we aimed for 10 participants from each of our stakeholder subgroups. We contacted members of our networks directly and invited them to take part or to recommend another expert if they were unable to participate. We only contacted people who met our inclusion criteria as an ‘expert’, and we asked those who were referred by other people to confirm they met these criteria. We defined ‘experts’ as people with the necessary experience with technology for autism to advise others. All potential participants completed a brief questionnaire detailing their experience with digital interventions for autism. Experts were therefore those who reported that they had experience using and advising others on technologies for autism, and could be researchers, practitioners and/or members of the autism community. As the needs of the autistic community and their immediate providers of support were of primary importance to this study, researchers were not included in the first two Delphi study rounds (see Table 1), but joined at the mid-point to help refine statements on evidence. Panel members were recruited on the basis of recommendations from autism networks and associations internationally, especially those relevant to digital technology for supporting autism (e.g. www.asdtech.ed.ac.uk), and through personal invitations to experts from autism-related networks in different countries: Asociación Española de Profesionales del Autismo AETAPI (Spain), Autism Speaks (United States), Research Autism (United Kingdom) and Centres Ressources Autisme (France). The inclusion criteria were that panel members were adults, fluent in English and that autistic panel members had formal evidence of diagnosis. As a screener, we asked all potential panel members to discuss their knowledge and experience with digital technology including (but not limited to) touchscreen tablets, smartphones, gaming devices, computers, robots and augmentative and alternative communication (AAC) devices.
Table 1. Panel members in each round (autistic people, family members and professionals formed the community group).

| Round | Autistic people | Family members | Professionals | Researchers | Total per round |
|---|---|---|---|---|---|
| 1 | 6 | 8 | 13 | – | 27 |
| 2 | 6 | 7 | 11 | – | 24 |
| 3 | 5 | 2 | 6 | 12 | 25 |
| 4 | 5 | 2 | 5 | 11 | 23 |
The age range of the community group was between 22 and 72 years (mean = 38.76, SD = 12.35), including 8 males and 17 females. In the researchers’ group there were 9 males and 3 females; no information on age was given. The sample was recruited from the United Kingdom (21), France (6), United States (3), Spain (3), Israel (3) and one each from Germany, Austria and Ireland. Some panel members fulfilled the criteria for multiple groups (e.g. autistic practitioners) but are listed in only one group here, as selected by themselves.
Procedure
The study was conducted using online survey software (www.qualtrics.com) over four rounds (Table 2). A literature review was conducted on EBP for digital supports for autism (Zervogianni et al., in press), providing information about the goals of existing digital supports. This informed the design of the first round of the Delphi study, providing context for panel members to consider how they may seek sources of potential evidence. Panel members’ comments and ratings in each round were collected and analysed by the moderator (first author), and used to create content for the following round.
Table 2. The four rounds of the Delphi study.

| Round | Description | Panel members |
|---|---|---|
| 1 | Brainstorming: open enquiry about the reasons for using digital supports and the kind of information used to select digital supports | Family members, professionals–practitioners, autistic people |
| 2 | Categorisation of evidence: organise evidence into categories, locate new types of evidence | Family members, professionals–practitioners, autistic people |
| 3 | Drafting the framework: ranking and editing lists of statements about evidence | Family members, professionals–practitioners, autistic people, researchers |
| 4 | Finalisation: ranking a selection of the statements and final modifications in wording | Family members, professionals–practitioners, autistic people, researchers |
Round 1 – Brainstorming
In Round 1 the panel answered open-ended questions (see Supplemental material, Appendix I) about their goals and sources of evidence when selecting a digital support (as defined above). They were asked to think about recent experiences when choosing or recommending a digital support intended for an autistic person (potentially including themselves). A thematic analysis was performed on panel responses, and illustrative quotes were selected (Braun & Clarke, 2006). This identified recurrent themes pertaining to the purpose of digital supports and the outcomes which are sought when using digital supports. We also identified potential sources of information that panel members detailed in regard to these purposes and outcomes.
Round 2 – Categorisation
The panel was asked to rate potential sources of information identified during round 1 using 5-point Likert-type scales for the following dimensions:
Relevance: whether information from this source is likely to relate to their situation;
Importance: whether information from this source is likely to be of high quality;
Usefulness: whether information from this source is likely to make a difference to their decisions/actions;
Accessibility: whether information from this source is likely to be easy to find and understand.
We also aimed to refine the list of features and outcomes of digital support the panel may want evidence for and to match sources of evidence to these features/outcomes. The list of features/outcomes was derived from comments and illustrative quotes made during Round 1. The panel was presented with features/outcomes beginning with the phrase ‘You want to know whether . . .’ (see Table 3) and asked to list the sources of information they would use to find out specifically about those features/outcomes.
Table 3. Features and outcomes presented to the panel in Round 2, each prefaced with ‘You want to know whether . . .’.

- The product has ongoing tech support from the development team
- Special interests of autistic people are taken into consideration in the product design
- The product encourages original creations
- The product is aesthetically pleasant
- The product is easy to use
- The product is customisable
- The product is easy to find and order/buy
- The product can be used autonomously by the autistic person
- The product helps the autistic person to be more autonomous in their life
- The product contributes to a better life quality for the family/carers of the autistic person
- The product is amusing/entertaining
- The product helps the autistic person develop new skills/improve existing skills
- The product encourages social interaction between the autistic user and other people
- The effects of its usage are long-lasting
- The autistic user can generalise the skills they acquired via the technology in different contexts
- The product achieves better results than similar products that are not technology-based
- The product matches the abilities of the autistic user
- The product matches the needs of the autistic user
- The product is age-appropriate for the autistic user
- There are opportunities to try out the product before buying it
Third, in an open commentary the panel members were asked to discuss whether their personal experience was similar to specific quotes from the panel’s responses in the previous round. These quotes were selected to match the sources of information proposed in the first round (Table 4).
Table 4. Illustrative quotes from Round 1 presented back to the panel in Round 2.

- ‘I need a piece of technology to help me keep track of anxiety and offer suggestions and tips based on my experiences’
- ‘I would appreciate being able to buy harder levels or aspects of a game’
- ‘I’d look whether this technology is approved by several scientific communities specialised in autism’
- ‘I’d only rely on my personal judgement resulting from hands on experience’
- ‘As there were bugs in the app the person got a bit angry with it and stopped using it’
- ‘The technology we currently have does keep him entertained and occupied’
- ‘He uses elsewhere the things he has learnt with the app’
Mean ratings for relevance, importance, usefulness and accessibility for each source of information were computed. Using thematic analysis, codes representing desired features and outcomes of digital supports were clustered into sub-themes and then top-level themes by two raters independently (first two authors). The themes were reviewed, validated and, if necessary, revised by two other independent raters (last two authors). In the first round, to gather input from the community, we had made an open-ended enquiry regarding the kind of evidence they seek when considering whether to use, or recommend that someone else use, a new technology; specific examples of evidence were requested as illustrations. Analysis of these responses culminated in three high-level categories of evidence: ‘engagement’ (how the user experiences the product itself, its ease of use and attractiveness), ‘effectiveness’ (outcomes reached/directly observed changes) and ‘reliability’ (the technology is functional). The resulting output was composed of statements on potential sources of evidence grouped into these three high-level categories: evidence for reliability, evidence for engagement and evidence for effectiveness. This constituted the basis of the first draft of the framework created in Round 3.
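The aggregation behind these ratings is simple descriptive statistics. As a minimal sketch of how the per-source mean (SD) values reported later in Table 6 could be computed (the scores below are illustrative placeholders, not the panel’s actual ratings):

```python
from statistics import mean, stdev

# Illustrative placeholder scores: one 5-point Likert rating per panel member,
# per source of information and per dimension.
ratings = {
    "Academic research": {
        "relevance": [4, 3, 5, 4, 3], "importance": [4, 4, 3, 4, 4],
        "usefulness": [4, 4, 3, 4, 4], "accessibility": [3, 4, 3, 4, 3],
    },
    # ... one entry per source of information
}

for source, dims in ratings.items():
    # stdev is the sample standard deviation; whether the authors used the
    # sample or population formula is not stated in the text.
    summary = {dim: f"{mean(vals):.2f} ({stdev(vals):.2f})" for dim, vals in dims.items()}
    print(source, summary)
```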
Round 3 – Refinement
Round 3 integrated the perspectives of the autistic community, families and professionals with those of researchers. The expanded panel was asked to rank and edit the statements that had emerged from Round 2 regarding what constitutes evidence. They were given the opportunity to remove statements that they thought were inappropriate or irrelevant. They were told that not all statements would make it to the final framework and that they should prioritise statements that they would want to see appear in the final framework. The moderator merged revisions that were similar, yielding a list of ranked statements for each category of evidence. For the framework to be easy to use, the number of statements per category was restricted. Hence, only the five most highly ranked statements in each category were maintained.
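This selection step can be expressed compactly. A minimal sketch, assuming (as in Round 4) that rank 1 marks the most important statement; the example statements and ranks are illustrative:

```python
from statistics import mean

def top_statements(rankings: dict[str, list[int]], keep: int = 5) -> list[str]:
    """Keep the `keep` statements with the best (lowest) mean rank; each
    value list holds one rank per panel member (1 = most important)."""
    return sorted(rankings, key=lambda s: mean(rankings[s]))[:keep]

# Hypothetical example for one category of evidence:
print(top_statements({
    "Try it out": [1, 2, 1],
    "Get an expert opinion": [2, 1, 3],
    "Read online reviews": [3, 3, 2],
}, keep=2))  # -> ['Try it out', 'Get an expert opinion']
```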
Round 4 – Finalisation
The panel was required to review and, if necessary, revise each statement in a draft framework. They were given three possibilities for each statement: (1) accept it as is, (2) make adjustments and (3) remove it from the framework. They were required to justify their edits when they chose to make adjustments or remove a statement. The panel was also asked to signal any ‘words of caution’ concerning the finalised framework. They ranked the top five statements from 1 to 5, with 1 being the most important source of evidence for them.
The moderator merged the edits suggested by the panel when they were similar and then classified them and responded as listed below. The classification was reviewed by two independent coders (last two authors). In case of disagreement between them, consensus was achieved through discussion.
1. Amendment: This is clarification or expansion of the scope of the statement without fundamentally changing it. For each amendment, two independent coders gave a score from 1 to 3: (1) should be integrated into the statement; (2) neutral stance regarding integration in the statement; (3) need not be integrated. To be integrated, an amendment had to have a mean score of less than 2 (this rule and the rejection threshold below are sketched in code after this list).
2. Words of caution: These are important risks or constraints associated with the statement to an extent that they should be acknowledged in conjunction with the statement. These were adjoined to the statements or category of evidence they were associated with (see Supplemental material, Appendix II).
3. Rejections: This is when a panel member opposed a statement, or criticised major aspects of it. A threshold of 90% agreement (i.e. fewer than 10% of the panel rejected a specific statement) was set for inclusion of statements in the final list, following the emerging convention in Delphi studies (Ager et al., 2010).
4. Misunderstandings: Comments that appeared to be unrelated to the statement. The statement was double-checked and reworded for clarity if needed.
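The two numeric decision rules above (amendment integration and statement rejection) can be sketched as follows; the function names are illustrative rather than taken from any analysis code:

```python
from statistics import mean

def integrate_amendment(coder_scores: list[int]) -> bool:
    """Integrate an amendment when the coders' mean score is below 2
    (1 = should be integrated, 2 = neutral, 3 = need not be integrated)."""
    return mean(coder_scores) < 2

def keep_statement(n_rejections: int, panel_size: int) -> bool:
    """Keep a statement when fewer than 10% of the panel reject it,
    i.e. the 90% agreement threshold is met."""
    return n_rejections / panel_size < 0.10

# With the Round 4 numbers, 3 rejections out of 23 panel members
# (about 13%) exceed the threshold, so the statement is removed.
assert not keep_statement(3, 23)
assert integrate_amendment([1, 2])  # mean 1.5 < 2 -> integrate
```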
Results
Round 1
The desired outcomes of digital support for the autistic and autism communities that emerged from the panel’s responses primarily related to autonomy, time awareness and management, enhanced quality of life for family/carers, better communication, social participation, fun/leisure, learning support, creativity and enhanced cognitive skills. The features of digital support sought by the autistic and autism communities revolved mainly around the products being reliable (bug-free, tech support and battery life) and how the products look and feel (clear instructions, scaffolding and progress awareness, customisability, attractive design, user control and reward system). Table 5 summarises the desirable features and outcomes that were derived from the thematic analysis.
Table 5. Desirable features and outcomes of digital supports derived from the thematic analysis.

| Features | Outcomes |
|---|---|
| Technical support | Encouraging original creativity |
| Bug-free | Encouraging social interactions |
| Aesthetically pleasing | Amusing/entertaining |
| Ease of use | Autonomy |
| Customisation | Better quality of life for family/carers |
| Accessible and affordable | Generalisability of learnt skills |
| Adapted to autistic user’s special interests and needs | Long-term effectiveness |
| Age-appropriate | |
| Progressive levels of difficulty | |
Sources of information with regard to those features and outcomes were reviews and recommendations specifically from the autism community, personal hands-on experience and direct observation, expertise of the design team, involvement of autistic users in the design, scientific evidence and non-specific online reviews.
Round 2
Six potential sources of information relevant to choosing and using technologies were rated for relevance, importance, usefulness and accessibility (see Table 6).
Table 6. Mean (SD) ratings of potential sources of information on 5-point scales.

| Source of information | Relevance | Importance | Usefulness | Accessibility |
|---|---|---|---|---|
| Positive online reviews specifically from the autistic community | 4.40 (0.70) | 4.50 (0.76) | 4.46 (0.71) | 4.29 (1.02) |
| Expertise of the product’s development/design team | 3.25 (1.48) | 3.75 (1.13) | 3.71 (1.31) | 3.96 (1.17) |
| Observable positive changes in the autistic user’s behaviour | 4.25 (1.05) | 4.21 (1.08) | 4.13 (1.05) | 4.25 (0.88) |
| The product’s development/design team specifically includes autistic people | 4.04 (1.27) | 4.13 (0.88) | 4.08 (0.95) | 3.88 (1.17) |
| Academic research | 3.88 (0.93) | 3.75 (0.92) | 3.79 (0.82) | 3.50 (1.00) |
| Positive online reviews (e.g. Amazon stars, comments on product’s Facebook page) | 3.00 (1.15) | 3.08 (1.04) | 3.04 (0.89) | 3.33 (1.11) |
Thematic analysis of panel responses produced three high-level categories for which evidence might be required, defined as follows:
The product is reliable: The efficacy of a product at the level of engineering. Is it technically sound/functional? How well does it work? For example, does the face recognition functionality actually work? Does the app crash often?
The product is engaging: The user perception of the technology. How usable, agreeable, pleasant and accessible is the product for its specific users? Its ease of use and look and feel.
The product is effective: The outcome of using the product. How much impact does it have for the people using it? Does it make an observable difference in the user’s life/behaviour?
Round 3
The sources of evidence used for these three categories are summarised in Table 7, with descriptions that summarise comments recurring across panel members and groups.
Table 7. Sources of evidence for the three categories following Round 3.

Reliability

- Try it out. You might request a trial version from the developer, or borrow a copy/device from a friend. Take your time to explore all the functionalities. Bear in mind that a trial version might differ from the full version.
- Get an expert opinion. Ask people you know who have skills and experience with technology, or read official documentation provided by agencies such as a government council on technology standards.
- Read online reviews. Look on app review sites and social media. Include reviews from autistic users and their families and pay attention to long-time users. Keep in mind that reviewers’ circumstances (e.g. their needs, goals or budget) may not be the same as yours.
- Seek academic opinions. Read an academic article evaluating a product’s design. You might also see scientists writing in the mainstream media or find a talk given online. Check the academic’s relevant qualifications, affiliations and potential conflicts of interest when you decide how much trust to put in them.
- Consult the company’s website. While this information does not constitute independent evidence, the technical description of the product and the kind of technical support may be informative. You can also look for tech industry accreditations such as kite marks, badges, ISO norms and so on.

Engagement

- Try it out. You might request a trial version from the developer, or borrow a copy/device from a friend. Take your time to explore all the functionalities. Bear in mind that a trial version might differ from the full version.
- Read online reviews. Include reviews from autistic users and their families and pay attention to long-time users. Keep in mind that reviewers’ circumstances (e.g. their needs, goals or budget) may not be the same as yours.
- Get an expert opinion. Ask people you know who have skills and experience with technology. Talk to relevant professionals such as a teacher or speech and language therapist.
- Consult review websites. Look for sites that compare different technologies and search for case studies.
- Seek academic opinions. Read an academic article evaluating a product’s design. You might also see scientists writing in the mainstream media or find a talk given online. Check the academic’s relevant qualifications, affiliations and potential conflicts of interest when you decide how much trust to put in them.

Effectiveness

- Read an academic paper. Ideally look for a review that systematically combines the results from multiple independent studies. It may be worth checking the quality of the original studies too and the journals where they were published.
- Get an expert opinion. Talk to relevant professionals such as a teacher or speech and language therapist. Ask people you know who have skills and experience with technology, or take the advice of reference centres. Read official documentation provided by agencies such as a government council on technology standards.
- Read online reviews. Include reviews from autistic users and their families and pay attention to long-time users. Keep in mind that reviewers’ circumstances (e.g. their needs, goals or budget) may not be the same as yours.
- Try it out. You might request a trial version from the developer, or borrow a copy/device from a friend. Take your time to explore all the functionalities. Bear in mind that a trial version might differ from the full version.
- Search online for expert perspectives. Read a media article on the technology written by a scientist or listen to a talk given by an academic. Check the academic’s relevant qualifications, affiliations and potential conflicts of interest when you decide how much trust to put in them. Consult websites that review and compare technologies, and search for relevant case studies.
Round 4
In this final round, the panel edited and ranked the statements in each category of sources of evidence. The most highly ranked statements (i.e. those with the lowest mean ranks, 1 being the most important) were similar across the three categories (Table 8).
Table 8. Mean (SD) rankings of statements in Round 4 (1 = most important).

| Statement | Reliability ranking | Engagement ranking | Effectiveness ranking |
|---|---|---|---|
| Try it out | 1.43 (0.82) | 1.39 (0.87) | 3.39 (1.69) |
| Get an expert opinion | 2.57 (0.88) | 2.87 (1.08) | 2.48 (1.02) |
| Read online reviews | 2.87 (1.19) | 2.74 (0.85) | 3.09 (1.14) |
| Seek academic opinions | 3.43 (0.97) | 4.13 (1.12) | Not featured |
| Consult the company’s website | 4.70 (0.69) | Not featured | Not featured |
| Consult review websites | Not featured | 3.87 (1.19) | Not featured |
| Read an academic paper | Not featured | Not featured | 2.26 (1.45) |
| Search online for expert perspectives | Not featured | Not featured | 3.78 (1.06) |
Statements rejected by 3 of the 23 panel members (i.e. more than 10%) were excluded; these are shown in Table 9. The final framework statements reaching consensus for inclusion by the autistic and autism communities as well as researchers are listed in Table 10, with the agreed explanatory text.
Table 9. Statements removed following Round 4 rejections.

| Category of desirable evidence | Removed statements |
|---|---|
| Reliability | Consult the company’s website |
| Engagement | Consult review websites; Seek academic opinions |
| Effectiveness | Search online for expert perspectives |
Table 10. How to select digital supports for autistic users: an evidence-based framework.

Is it reliable?

1. Try it out. You might request a trial version from the developer, or borrow a copy/device from a friend. Take your time to explore all the functionalities. Ask how the trial version differs from the full version.
2. Get an expert opinion. Talk to relevant professionals (e.g. a specialist teacher, speech and language therapist, specialist psychologist, etc.). Ask (autistic) people, organisations or agencies you know who have specialist skills and relevant experience with technology.
3. Read online reviews. Look on app review websites and social media. Include reviews from autistic users and their families and pay attention to people that have been using the product for a (relatively) long time. Read and compare as many reviews as possible to improve objectivity. Keep in mind that reviewers’ circumstances (e.g. their needs, age, goals or budget) may not be the same as yours and individual experiences may not be generalisable.
4. Seek academic opinions. Read an academic article evaluating the product, or find an article/online talk in the mainstream media by a qualified scientist. Check the academic’s relevant qualifications, affiliations and potential conflicts of interest when you decide how much trust to put in them.

Is it engaging?

1. Try it out. You might request a trial version from the developer, or borrow a copy/device from a friend. Explore all the functionalities and see if it might be motivating to keep using it in the medium and long term, as well as the short term. Ask how the trial version differs from the full version.
2. Read online reviews. Include reviews from autistic users and their families and pay attention to people that have been using the product for a (relatively) long time. Keep in mind that reviewers’ circumstances (e.g. their needs, age, goals or budget) may not be the same as yours and individual experiences may not be generalisable.
3. Get an expert opinion. Ask people you know who have skills and experience with this technology and/or autistic users. Talk to relevant professionals such as a teacher, therapist or support worker. Ideally look for someone who also knows you, as your personality has a key role in how engaging you will find it.

Is it effective?

1. Read an academic paper. Ideally look for a review that systematically combines the results from multiple independent studies. It may be worth checking the quality and potential affiliations/bias of the original studies too, and the journals where they were published.
2. Get an expert opinion. Talk to relevant professionals (e.g. a specialist teacher, speech and language therapist, specialist psychologist, etc.). Ask (autistic) people, organisations or agencies you know who have specialist skills and relevant experience with technology.
3. Read online reviews. Include reviews from autistic users and their families and pay attention to people that have been using the product for a (relatively) long time. Keep in mind that reviewers’ circumstances (e.g. their needs, age, goals or budget) may not be the same as yours and individual experiences may not be generalisable.
4. Try it out. You might request a trial version from the developer, or borrow a copy/device from a friend. Take your time to explore all the functionalities. Ask how the trial version differs from the full version.
Discussion
There is a plethora of highly accessible digital supports purporting to support the autistic community (Chia et al., 2018; Grynszpan et al., 2014; Odom et al., 2015; Pennington, 2010; Ploog et al., 2013; Ramdoss et al., 2011; Virnes et al., 2015; Wainer & Ingersoll, 2011; Wong et al., 2015; Zervogianni et al., in press) but no mechanism by which consumers, practitioners or researchers can gauge the level of evidence supporting their use. This is the first study to generate a consensus from an international group made up of the autistic and broader autism communities as well as researchers as to what constitutes good evidence for digital supports for autism. Through a Delphi study methodology, consensus was achieved on a detailed framework providing the parameters for which evidence is sought and the sources of evidence perceived to be important. This novel framework allows users of digital supports to incorporate evidence into their decision-making regarding the selection and use of digital support, for themselves, or their autistic family members, pupils, clients, participants and so on. The framework can also inform those developing digital supports for the autistic community, highlighting what types of evidence are considered important. For the first time, the autistic and autism communities can incorporate EBP into the development, application and use of digital supports. Importantly, this framework has been co-developed through a participatory research approach which connects researchers with relevant autistic and broader autism communities to achieve shared goals. These methods can deliver results that are relevant to people’s lives and thus likely to have a positive impact (Fletcher-Watson et al., 2019; Parsons et al., 2019).
The study revealed that academic evidence obtained with carefully conducted empirical research was just one of the aspects that may inform the autistic and broader autism communities when selecting appropriate digital support. This was clearly expressed by the panel from the very beginning of the study, when the importance of reliability, engagement and effectiveness emerged. Clinical research methodologies, such as RCTs, need to be augmented with other sources of empirical evidence, as well as hands-on experience or other users’ feedback, to identify the extent to which a digital support is reliable, engaging or effective. Reliability and engagement may be particularly pertinent as features of digital support which are not present in the same way for non-technology-based supports (Mesibov & Shea, 2011). EBP for digital supports therefore departs from non-technology-based EBP for autism, highlighting the need for a specific EBP framework.
There are important similarities and differences between the proposed EBP framework for digital supports for autism and other general models of evidence provision in the field of system and software engineering. For example, the ISO/IEC 25010:2011 standard (International Organisation for Standardisation/International Electrotechnical Commission) defines a ‘quality in use’ model composed of five characteristics: effectiveness, efficiency, satisfaction, freedom from risk and context coverage (Bevan et al., 2016). ‘Effectiveness’ is a common theme between the framework co-developed with the autistic and broader autism communities and this standard, and ‘engagement’ maps closely to ‘satisfaction’. ‘Reliability’, however, represents a different perspective, potentially related to (but distinct from) ‘efficiency’, which also takes into account task time, time efficiency, cost-effectiveness, productive time ratio, unnecessary actions and fatigue. ‘Reliability’ focuses more upon, for example, ‘will the app crash?’, reflecting the end-user’s experience of technology, which is not captured separately by the ISO/IEC model but is embedded within ‘satisfaction’, which incorporates ‘proportion of users complaining, proportion of user complaints about a particular feature and user trust’. Thus, there are parallels between the requirements of the autistic and broader autism communities and international standards, but the participatory research approach ensures the relevance of the framework to those who the digital supports are being developed for. In addition, words of caution associated with the framework (see Supplemental material, Appendix II) emphasised potential downsides of technology and thus introduced the notion of freedom from risk. Indeed, risks for health and social status were acknowledged in the word of caution related to ‘engagement’, which warned about possible over-engagement with technology that would monopolise the child’s time, and the framework needs to be interpreted with reference to the words of caution.
There were also similarities and differences in which sources of evidence were perceived to be most salient for reliability, engagement and effectiveness. While trying out the product was identified as the best source of evidence for informing reliability and engagement, academic research was viewed as the best source of evidence for effectiveness. This highlights that EBP, as informed by the broader autism community, requires multiple sources of information. Online reviews and expert opinions were also identified as key sources of evidence in all three domains. Recent accounts of fictitious online reviews (Morris, 2017) and the independence of the expert are important considerations when evaluating these sources of evidence, and this is highlighted in the ‘words of caution’ (see Supplemental material, Appendix II). Thus while similar sources of evidence are identified for reliability, engagement and effectiveness, they are weighted differently for each category.
As noted above, EBP is informed by integrating best available evidence along with practitioner expertise and the values of recipients of the practice. Co-developing the proposed EBP framework with researchers, technology developers, practitioners and the autism community helps ensure that it will be useful for these communities (see Fletcher-Watson et al., 2019; Parsons et al., 2019). However, different participant groups may have different levels of access to different kinds of evidence, which may lead to differences in which sources of information are actually used by different types of potential users. For example, researchers may have greater access to and expertise in interpreting academic papers, while educators and caregivers may have more experience supporting day-to-day use and evaluating the long-term utility of digital technologies. In addition, while the framework identified commonalities in what constitutes evidence, it is important to note that there may be additional sources of evidence that are significant for only one of the participant groups. Finally, we found the Delphi study methodology to be an effective method for integrating potentially divergent perspectives into an agreed-upon framework. However, despite our number of participants being consistent with those proposed from previous research (Hasson et al., 2000; Paliwoda, 1983), our sample was relatively small given the heterogeneity of autism, and of stakeholder perspectives in the broader autism community. This needs to be borne in mind when considering if the framework is suitable for the entire autism community.
Future work will apply this framework to digital supports for the autistic community, to identify the level of evidence available (complete, adequate, limited, none) from each source for reliability, engagement and effectiveness, and to highlight whether the available evidence is strong, adequate or weak (after Reichow et al., 2008). An online version of the framework that enables researchers, developers and the autism community to evaluate the evidence base for any digital supports they are interested in is freely available at beta-project.org. Importantly, this framework identifies the strength (i.e. availability, quality) of the evidence, not the outcome of the evidence. It is possible, for example, that there could be strong evidence that an app is not engaging. For instance, de Vries et al. (2015) conducted an RCT of a computerised support for training executive functions that yielded non-significant changes and a high attrition rate in autistic participants, thus discouraging continued use. The framework developed here supports the sourcing and consideration of evidence into best practice, not necessarily what that best practice should be.
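To make the intended use concrete, here is a minimal sketch of how one product’s evidence profile under the framework could be recorded. The schema, names and example ratings are illustrative assumptions for this sketch, not the actual data model of the beta-project.org tool:

```python
from enum import Enum

class EvidenceLevel(Enum):
    """Levels of available evidence named in the text."""
    NONE = 0
    LIMITED = 1
    ADEQUATE = 2
    COMPLETE = 3

# Hypothetical profile for one digital support: for each framework category,
# the level of evidence available from each consensus source (Table 10).
profile = {
    "reliability": {"try it out": EvidenceLevel.COMPLETE,
                    "expert opinion": EvidenceLevel.ADEQUATE,
                    "online reviews": EvidenceLevel.LIMITED,
                    "academic opinions": EvidenceLevel.NONE},
    "engagement": {"try it out": EvidenceLevel.ADEQUATE,
                   "online reviews": EvidenceLevel.LIMITED,
                   "expert opinion": EvidenceLevel.NONE},
    "effectiveness": {"academic paper": EvidenceLevel.NONE,
                      "expert opinion": EvidenceLevel.LIMITED,
                      "online reviews": EvidenceLevel.LIMITED,
                      "try it out": EvidenceLevel.LIMITED},
}

def weakest_categories(profile: dict) -> list[str]:
    """Flag categories where no source offers more than limited evidence;
    note this rates the strength of evidence, not its outcome."""
    return [category for category, sources in profile.items()
            if all(level.value <= EvidenceLevel.LIMITED.value
                   for level in sources.values())]

print(weakest_categories(profile))  # -> ['effectiveness']
```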
Supplemental Material
Supplemental material, AUT898331_Supplemental_material_Appendix_I for A framework of evidence-based practice for digital support, co-developed with and for the autism community by Vanessa Zervogianni, Sue Fletcher-Watson, Gerardo Herrera, Matthew Goodwin, Patricia Pérez-Fuster, Mark Brosnan and Ouriel Grynszpan in Autism
Supplemental material, AUT898331_Supplemental_material_Appendix_II for A framework of evidence-based practice for digital support, co-developed with and for the autism community by Vanessa Zervogianni, Sue Fletcher-Watson, Gerardo Herrera, Matthew Goodwin, Patricia Pérez-Fuster, Mark Brosnan and Ouriel Grynszpan in Autism
Acknowledgments
The authors thank all the panel members who participated in the Delphi study. Without their input, the project would not have been possible.
Footnotes
Declaration of conflicting interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: We are grateful to FIRAH who funded the study (Grant: APa2016_026: https://www.firah.org/).
ORCID iDs: Vanessa Zervogianni https://orcid.org/0000-0003-0638-4087
Sue Fletcher-Watson https://orcid.org/0000-0003-2688-1734
Mark Brosnan https://orcid.org/0000-0002-0683-1492
Supplemental material: Supplemental material for this article is available online.
References
- Ager A., Stark L., Akesson B., Boothby N. (2010). Defining best practice in care and protection of children in crisis-affected settings: A Delphi study. Child Development, 81(4), 1271–1286.
- American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). American Psychiatric Publishing.
- Anagnostou E., Zwaigenbaum L., Szatmari P., Fombonne E., Fernandez B. A., Woodbury-Smith M., Brian J., Bryson S., Smith I. M., Drmic I., Buchanan J. A. (2014). Autism spectrum disorder: Advances in evidence-based practice. Canadian Medical Association Journal, 186(7), 509–519.
- Baio J., Wiggins L., Christensen D. L., Maenner M. J., Daniels J., Warren Z., Kurzius-Spencer M., Zahorodny W., Rosenberg C. R., White T., Durkin M. S. (2018). Prevalence of autism spectrum disorder among children aged 8 years – autism and developmental disabilities monitoring network, 11 sites, United States, 2014. MMWR Surveillance Summaries, 67(6), 1–23.
- Belton I., MacDonald A., Wright G., Hamlin I. (2019). Improving the practical application of the Delphi method in group-based judgment: A six-step prescription for a well-founded and defensible process. Technological Forecasting and Social Change, 147, 72–82.
- Berggren S., Fletcher-Watson S., Milenkovic N., Marschik P. B., Bölte S., Jonsson U. (2018). Emotion recognition training in autism spectrum disorder: A systematic review of challenges related to generalizability. Developmental Neurorehabilitation, 21(3), 141–154.
- Bevan N., Carter J., Earthy J., Geis T., Harker S. (2016). New ISO standards for usability, usability reports and usability measures. In Kurosu M. (Ed.), International conference on human-computer interaction (pp. 268–278). Springer.
- Braun V., Clarke V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
- Brosnan M., Good J., Parsons S., Yuill N. (2019). Look up! Digital technologies for autistic people to support interaction and embodiment in the real world. Research in Autism Spectrum Disorders, 58, 52–53.
- Buescher A. V., Cidav Z., Knapp M., Mandell D. S. (2014). Costs of autism spectrum disorders in the United Kingdom and the United States. JAMA Pediatrics, 168(8), 721–728.
- Camargo S. P. H., Rispoli M., Ganz J., Hong E. R., Davis H., Mason R. (2014). A review of the quality of behaviorally-based intervention research to improve social interaction skills of children with ASD in inclusive settings. Journal of Autism and Developmental Disorders, 44(9), 2096–2116.
- Chia G. L. C., Anderson A., McLean L. A. (2018). Use of technology to support self-management in individuals with autism: Systematic review. Review Journal of Autism and Developmental Disorders, 5(2), 142–155.
- Constantin A., Johnson H., Smith E., Lengyel D., Brosnan M. (2017). Designing computer-based rewards with and for children with autism spectrum disorder and/or intellectual disability. Computers in Human Behavior, 75, 404–414.
- De Leo G., Gonzales C. H., Battagiri P., Leroy G. (2011). A smart-phone application and a companion website for the improvement of the communication skills of children with autism: Clinical rationale, technical development and preliminary results. Journal of Medical Systems, 35(4), 703–711.
- de Vries M., Prins P. J., Schmand B. A., Geurts H. M. (2015). Working memory and cognitive flexibility-training for children with an autism spectrum disorder: A randomized controlled trial. Journal of Child Psychology and Psychiatry, 56(5), 566–576.
- Dijkers M. P. (2011). External validity in research on rehabilitative interventions: Issues for knowledge translation. FOCUS Technical Brief, 33, 1–23.
- Dijkers M. P., Murphy S. L., Krellman J. (2012). Evidence-based practice for rehabilitation professionals: Concepts and controversies. Archives of Physical Medicine and Rehabilitation, 93(8), S164–S176.
- Fletcher-Watson S. (2015). Evidence-based technology design and commercialisation: Recommendations derived from research in education and autism. TechTrends, 59(1), 84–88.
- Fletcher-Watson S., Adams J., Brook K., Charman T., Crane L., Cusack J., Pellicano E. (2019). Making the future together: Shaping autism research through meaningful participation. Autism, 23, 943–953.
- Goodman C. M. (1987). The Delphi technique: A critique. Journal of Advanced Nursing, 12(6), 729–734.
- Grossard C., Grynszpan O., Serret S., Jouen A.-L., Bailly K., Cohen D. (2017). Serious games to teach social interactions and emotions to individuals with autism spectrum disorders (ASD). Computers & Education, 113, 195–211. https://doi.org/10.1016/j.compedu.2017.05.002
- Grynszpan O., Weiss P. L., Perez-Diaz F., Gal E. (2014). Innovative technology-based interventions for autism spectrum disorders: A meta-analysis. Autism, 18(4), 346–361.
- Hagiwara T., Smith Myles B. (1999). A multimedia social story intervention: Teaching skills to children with autism. Focus on Autism and Other Developmental Disabilities, 14(2), 82–95.
- Hasson F., Keeney S., McKenna H. (2000). Research guidelines for the Delphi survey technique. Journal of Advanced Nursing, 32(4), 1008–1015.
- Hedges S. H., Odom S. L., Hume K., Sam A. (2018). Technology use as a support tool by secondary students with autism. Autism, 22(1), 70–79.
- Herrera G., Alcantud F., Jordan R., Blanquer A., Labajo G., De Pablo C. (2008). Development of symbolic play through the use of virtual reality tools in children with autistic spectrum disorders: Two case studies. Autism, 12(2), 143–157.
- Hsu C. C., Sandford B. A. (2007). The Delphi technique: Making sense of consensus. Practical Assessment, Research & Evaluation, 12(10), 1–8.
- Humphrey-Murto S., de Wit M. (2019). The Delphi method – More research please. Journal of Clinical Epidemiology, 106, 136–139.
- Kim J. W., Nguyen T.-Q., Gipson Y.-M. T., Shin A. L., Torous J. (2018). Smartphone apps for autism spectrum disorder – Understanding the evidence. Journal of Technology in Behavioral Science, 3(1), 1–4. https://doi.org/10.1007/s41347-017-0040-4
- Knight V., McKissick B. R., Saunders A. (2013). A review of technology-based interventions to teach academic skills to students with autism spectrum disorder. Journal of Autism and Developmental Disorders, 43(11), 2628–2648.
- Laurie M. H., Warreyn P., Uriarte B. V., Boonen C., Fletcher-Watson S. (2018). An international survey of parental attitudes to technology use by their autistic children at home. Journal of Autism and Developmental Disorders, 49, 1517–1530.
- MacMullin J. A., Lunsky Y., Weiss J. A. (2016). Plugged in: Electronics use in youth and young adults with autism spectrum disorder. Autism, 20(1), 45–54.
- Mechling L. C., Gast D. L., Seid N. H. (2009). Using a personal digital assistant to increase independent task completion by students with autism spectrum disorder. Journal of Autism and Developmental Disorders, 39(10), 1420–1434.
- Mechling L. C., Savidge E. J. (2011). Using a personal digital assistant to increase completion of novel tasks and independent transitioning by students with autism spectrum disorder. Journal of Autism and Developmental Disorders, 41(6), 687–704.
- Mesibov G. B., Shea V. (2011). Evidence-based practices and autism. Autism, 15(1), 114–133.
- Morris D. Z. (2017, December 10). How an entirely fake restaurant became London’s hottest reservation. Fortune. http://fortune.com/2017/12/10/tripadvisor-london-shed-fake-restaurant/
- Odom S. L., Thompson J. L., Hedges S., Boyd B. A., Dykstra J. R., Duda M. A., Szidon K. L., Smith L. E., Bord A. (2015). Technology-aided interventions and instruction for adolescents with autism spectrum disorder. Journal of Autism and Developmental Disorders, 45(12), 3805–3819.
- Paliwoda S. J. (1983). Predicting the future using Delphi. Management Decision, 21(1), 31–38.
- Parsons S., Leonard A., Mitchell P. (2006). Virtual environments for social skills training: Comments from two adolescents with autistic spectrum disorder. Computers & Education, 47(2), 186–206.
- Parsons S., Yuill N., Good J., Brosnan M. (2019). ‘Whose agenda? Who knows best? Whose voice?’ Co-creating a technology research roadmap with autism stakeholders. Disability & Society. Advance online publication. https://doi.org/10.1080/09687599.2019.1624152
- Pennington R. C. (2010). Computer-assisted instruction for teaching academic skills to students with autism spectrum disorders: A review of literature. Focus on Autism and Other Developmental Disabilities, 25(4), 239–248.
- Ploog B. O., Scharf A., Nelson D., Brooks P. J. (2013). Use of computer-assisted technologies (CAT) to enhance social, communicative, and language development in children with autism spectrum disorders. Journal of Autism and Developmental Disorders, 43(2), 301–322.
- Ramdoss S., Lang R., Mulloy A., Franco J., O’Reilly M., Didden R., Lancioni G. (2011). Use of computer-based interventions to teach communication skills to children with autism spectrum disorders: A systematic review. Journal of Behavioral Education, 20(1), 55–76.
- Reichow B., Volkmar F. R. (2010). Social skills interventions for individuals with autism: Evaluation for evidence-based practices within a best evidence synthesis framework. Journal of Autism and Developmental Disorders, 40(2), 149–166.
- Reichow B., Volkmar F. R., Cicchetti D. V. (2008). Development of the evaluative method for evaluating and determining evidence-based practices in autism. Journal of Autism and Developmental Disorders, 38(7), 1311–1319.
- Rowe G., Wright G. (2001). Expert opinions in forecasting: The role of the Delphi technique. In Armstrong J. S. (Ed.), Principles of forecasting (pp. 125–144). Springer.
- Sackett D. L., Rosenberg W. M., Gray J. M., Haynes R. B., Richardson W. S. (1996). Evidence based medicine. British Medical Journal, 313(7050), 170–171.
- Schlosser R. W., Wendt O. (2008). Effects of augmentative and alternative communication intervention on speech production in children with autism: A systematic review. American Journal of Speech-Language Pathology, 17, 212–230.
- Shane H. C., Albert P. D. (2008). Electronic screen media for persons with autism spectrum disorders: Results of a survey. Journal of Autism and Developmental Disorders, 38(8), 1499–1508. https://doi.org/10.1007/s10803-007-0527-5
- Trevelyan E. G., Robinson N. (2015). Delphi methodology in health research: How to do it? European Journal of Integrative Medicine, 7(4), 423–428.
- Virnes M., Kärnä E., Vellonen V. (2015). Review of research on children with autism spectrum disorder and the use of technology. Journal of Special Education Technology, 30(1), 13–27. https://doi.org/10.1177/016264341503000102
- Wainer A. L., Ingersoll B. R. (2011). The use of innovative computer technology for teaching social communication to individuals with autism spectrum disorders. Research in Autism Spectrum Disorders, 5(1), 96–107.
- Wong C., Odom S. L., Hume K. A., Cox A. W., Fettig A., Kucharczyk S., Brock M. E., Plavnick J. B., Fleury V. P., Schultz T. R. (2015). Evidence-based practices for children, youth, and young adults with autism spectrum disorder: A comprehensive review. Journal of Autism and Developmental Disorders, 45(7), 1951–1966.
- Zervogianni V., Fletcher-Watson S., Herrera G., Goodwin M., Triquell E., Pérez-Fuster P., Brosnan M., Grynszpan O. (in press). Assessing evidence-based practice and user-centered design for technology-based supports for autistic users. PLoS ONE.