PLOS ONE. 2020 Aug 21;15(8):e0236939. doi: 10.1371/journal.pone.0236939

The DREAM Dataset: Supporting a data-driven study of autism spectrum disorder and robot enhanced therapy

Erik Billing 1,*, Tony Belpaeme 2,10, Haibin Cai 3, Hoang-Long Cao 4,8, Anamaria Ciocan 5, Cristina Costescu 5, Daniel David 5, Robert Homewood 1, Daniel Hernandez Garcia 2, Pablo Gómez Esteban 4,8, Honghai Liu 3, Vipul Nair 1, Silviu Matu 5, Alexandre Mazel 6, Mihaela Selescu 5, Emmanuel Senft 2, Serge Thill 1,9, Bram Vanderborght 4,8, David Vernon 1, Tom Ziemke 1,7
Editor: Lucia Billeci
PMCID: PMC7444515  PMID: 32823270

Abstract

We present a dataset of behavioral data recorded from 61 children diagnosed with Autism Spectrum Disorder (ASD). The data was collected during a large-scale evaluation of Robot Enhanced Therapy (RET). The dataset covers over 3000 therapy sessions and more than 300 hours of therapy. Half of the children interacted with the social robot NAO supervised by a therapist. The other half, constituting a control group, interacted directly with a therapist. Both groups followed the Applied Behavior Analysis (ABA) protocol. Each session was recorded with three RGB cameras and two RGBD (Kinect) cameras, providing detailed information of children’s behavior during therapy. This public release of the dataset comprises body motion, head position and orientation, and eye gaze variables, all specified as 3D data in a joint frame of reference. In addition, metadata including participant age, gender, and autism diagnosis (ADOS) variables are included. We release this data with the hope of supporting further data-driven studies towards improved therapy methods as well as a better understanding of ASD in general.

1 Introduction

Children diagnosed with Autism Spectrum Disorder (ASD) typically experience widespread difficulties in social interaction and communication, and they exhibit restricted interests and repetitive behavior [1]. ASD is referred to as a spectrum disorder because the type and severity of the symptoms vary significantly between individuals. At one pole of the spectrum are mild difficulties in social interaction and communication, such as problems initiating and maintaining a conversation, integrating verbal and nonverbal communication, and adapting behavior to various contexts, together with some behavioral rigidity. The opposite pole is characterized by severe deficits in verbal and nonverbal communication, a low level of social initiation, absence of peer interest, strong behavioral inflexibility, and restricted/repetitive behaviors [1].

Behavioral and psychosocial interventions are the main approach to the treatment of ASD, while medication is sometimes prescribed to control associated symptoms or other comorbid problems [2]. Behavioral and psychosocial interventions try to facilitate the development and adaptation of children by teaching them appropriate social and communication skills, according to their developmental age. These interventions vary in terms of how structured the therapeutic activities are (e.g., naturalistic play vs. a pre-established activity), who delivers the intervention (e.g., a trained therapist or a parent), and the degree to which the child is required to follow a set curriculum or the curriculum is developed around the child’s preferences and interests [3, 4].

The therapeutic intervention that currently has the most consistent empirical support is Applied Behavior Analysis (ABA) [5]. ABA is a structured intervention following behavioral learning principles, in which reinforcements are manipulated in order to increase the frequency of desired behaviors and decrease the frequency of maladaptive ones. Discrete trial training (DTT) is a common method employed in ABA treatments: the child is presented with a discriminative stimulus for a specific behavior (e.g., an instruction from the therapist) and receives a reward if he or she performs the expected behavior. If not, the therapist might correct the behavior by offering a demonstration or a prompt [6]. In order to be effective, ABA therapies need to be both intensive and extensive, and are thus associated with significant effort from both patients and the therapists providing the treatment.

An alternative form of therapy that has received much attention over the last decade is Robot Assisted Therapy (RAT) [7–11], sometimes referred to as Robot Enhanced Therapy (RET) [12–16]. While RAT refers to a wide spectrum of approaches to autism therapy involving robots in one way or another, the notion of RET is used in a narrower sense and refers to therapies following an ABA protocol in which a humanoid robot constitutes an interaction partner. Both RAT and RET typically involve triadic interactions comprising the child, a robot, and an adult, e.g., a therapist.

In RET interventions built on ABA principles, the robot guides the child through a game-like activity in order to develop a behavior that is relevant for social communication, while the therapist supervises the interaction. The robot acts as a model by performing the desired behavior, or as a discriminative stimulus by giving verbal or non-verbal instructions. The robot also acts as a source of social reinforcement by providing positive or negative feedback on the performance of the child. The justification for using a robot in this form of treatment rests on empirical findings indicating that children with ASD learn social behaviors from these interactions and might be more motivated to participate in the intervention as a result of the presence of the robot [17, 18].

Robots have also been proposed as a means for screening, diagnosis, and improved understanding of ASD [19, 20]. This potential is still not fully exploited, as the majority of research on RAT and RET takes the form of small-scale or single-case studies without the methodological rigor required to make the data applicable in clinical domains [12, 21].

Within the European research project DREAM (Development of Robot-Enhanced therapy for children with Autism spectrum disorders) [22], we have conducted a large-scale clinical evaluation of RET, involving 61 children (9 female) between 3 and 6 years of age. 30 of the children interacted with the humanoid robot NAO [23] (RET group), and the remaining 31 participants received standard human treatment (SHT group). The clinical efficacy of RET was tested in a randomized clinical trial design, with a study protocol consisting of an initial assessment, eight bi-weekly personalized behavioral interventions, and a final assessment. Each intervention targeted three social skills: imitation, joint attention, and turn-taking.

All therapies were recorded using a sensorized intervention table able to record and interpret the child’s behavior during the intervention (analyzing, for example, eye gaze, body movement, and facial expressions) [24]. The table was developed to inform the control of the supervised-autonomous robot used in RET [13], but was also used to support assessment and analysis of SHT. A total of 3121 therapy sessions were recorded, covering 306 hours of therapy.

While the clinical results from the evaluation are in the process of being published elsewhere, we here present a public release of the DREAM dataset, made available for download by the Swedish National Data Service, https://doi.org/10.5878/17p8-6k13. Following the ethical approval and agreements with caregivers, this public release does not comprise any primary data from the study. Primary data refers to direct measurements, e.g., video and audio recordings, of children in therapy. Instead, this public release comprises secondary data not revealing the identity of the children. Secondary data refers to processed measurements from primary data, including 3D skeleton reconstructions and eye-gaze vectors.

Further background on data-driven studies of autism and relevant datasets is presented in Section 2, followed by a presentation of the clinical evaluation from which this dataset was gathered (Section 3). Details of the DREAM dataset are provided in Section 4. Finally, the paper concludes with a discussion in Section 5.

2 Background

Diagnosis of ASD involves the assessment of the child’s behavior, considering their social initiations and responses, their joint attention episodes, their social play, and their repetitive and stereotypic movements [1]. This involves, for example, attention to the patient’s eye gaze, facial expressions, and hand movements at specific points in time. While this is very difficult for a novice, therapists who know the protocol well are trained to observe and identify these behaviors.

Considering the large effort involved in the diagnosis and treatment of ASD, there is an urgent need to better understand the autism spectrum and to develop new methods and tools to support patients, caregivers, and therapists [25]. One initiative was made by Thabtah [26], who developed a mobile application for screening of ASD based on DSM-5 [1] and two questionnaire-based screening methods, AQ and Q-CHAT [27, 28]. While this is far from the only mobile application for ASD screening, we believe this initiative stands out because, in contrast to several other applications, it is supported by published research and parts of the underlying databases are shared publicly [29]. Such datasets, covering for example traits, characteristics, diagnoses, and prognoses of individuals diagnosed with ASD, could be important assets, but are still very rare.

Mobile applications can be excellent tools, for example during screening, not least because they are very accessible to the broader population. However, complete diagnosis and treatment require more information, and other forms of interaction, than what can be achieved with a mobile application. For example, coverage of the patient’s behavior and social interactions is a critical component for both an improved understanding of ASD and the development of new tools.

One example that clearly demonstrates the value of data-driven analysis of ASD is the work by Anzulewicz et al. [30]. The authors report a computational analysis of movement kinematics and gesture forces recorded from 82 children between 3 and 6 years old, 37 of whom were diagnosed with autism. The analysis revealed systematic differences in force dynamics within the ASD group, compared to the typically developing children included in the study. Unfortunately, this dataset has not been released publicly.

Moving outside the autism spectrum, there are a couple of relevant datasets focusing on social interaction. One such example is the Tower Game Dataset [31], comprising multi-modal recordings from 39 adults engaged in a tower-building game. A total of 112 annotated sessions were collected, with an average length of three minutes. The dataset focuses specifically on rich dyadic social interactions. Similar to the dataset presented here, the Tower Game Dataset contains body skeleton and eye-gaze estimates. Additionally, it is manually annotated with so-called Essential Social Interaction Predicates (ESIPs). The authors promise that “a dataset visualization software […] is available and will be released with the dataset” [31], but unfortunately the dataset does not appear to be publicly available online.

Another dataset covering social interaction is the Multimodal Dyadic Behavior Dataset (MMDB) [32, 33]. This dataset comprises audio and video recordings of semi-structured play between one adult and one child aged 1 to 2 years. To date, 160 sessions of 5-minute interactions from 121 children have been released. Videos are annotated automatically for gaze shifts, smiling, play gestures, and engagement. An attractive aspect of this dataset is that the raw data streams are provided, including a rich set of 13 RGB cameras, one Kinect (RGBD) camera, 3 microphones, and 4 Affectiva Q-sensors for electrodermal activity and accelerometry, worn by both the adult and the child.

Focusing instead on human-robot interaction, the UE-HRI dataset [34] is a recent example. It includes audio and video recordings of 54 adult participants engaged in spontaneous dialogue with the social robot Pepper. The interactions took place in a public space, and include both one-to-one and multi-party interactions.

To our knowledge, the only public dataset covering children’s interaction with robots is the PInSoRo dataset [35]. This dataset concerns typically developing children not associated with autism, but shares a similar ambition to support a data-driven study of interaction. PInSoRo covers 45 hours of RGB video recordings, 3D recordings of the faces, skeletal information, audio recordings, as well as game interactions. In addition, the dataset comprises manual annotations of, for example, task engagement, social engagement, and attitude.

In sum, several datasets related to the study of social interaction, human-robot interaction, and autism can be found in the literature. Some are also released publicly, but none of them reach the same size as the dataset we present here. While there are other datasets with a similar, or even richer, set of features, none of these cover children diagnosed with ASD. Under the label Behavior Imaging, Rehg et al. [36, p. 87] explicitly argue for the need for such a dataset:

We believe this approach can lead to novel, technology-based methods for screening children for developmental conditions such as autism, techniques for monitoring changes in behavior and assessing the effectiveness of intervention, and the real-time measurement of health-related behaviors from on-body sensors that can enable just-in-time interventions.

Since children diagnosed with ASD are often sensitive to new clothing and wearable equipment, we consciously avoided on-body sensors. However, in other respects, we hope that the present work constitutes one important step towards a data-driven study of autism outlined by Rehg et al. [36].

3 Clinical evaluation and protocol

The clinical evaluation of RET, from which the present dataset is gathered, was conducted between March 2017 and August 2018 at three different locations in Romania. 76 children, aged 3 to 6 years, were recruited to the study, out of which 70 met the inclusion criteria and were randomly assigned to one of two conditions, RET or SHT. Participants in both groups went through a protocol of initial assessment, eight interventions, and a final assessment. The effect of the treatment was assessed using the Autism Diagnostic Observation Schedule (ADOS), in terms of the difference between the initial and final assessments [37]. Nine children did not continue the treatment beyond initial diagnosis, e.g., as a result of high skill performance, leaving 61 children with an initial ADOS score between 7 and 20 in the study (RET n = 30, SHT n = 31). A letter of consent was signed by at least one parent before initiating the study, expressing their consent to record the assessment and intervention sessions and to use the data and recordings for scientific purposes in an anonymous fashion. The clinical study in which this data was collected received prior ethical approval from the Scientific Council of Babes-Bolyai University in Cluj-Napoca, Romania, where the trial was conducted (record no. 30664/February 10th, 2017). The clinical trial was pre-registered in the U.S. National Library of Medicine database (ClinicalTrials.gov) under the number NCT03323931.

The therapy environment followed two configurations illustrated in Fig 1. The two configurations (RET and SHT) were designed to be as similar as possible, with the interaction partner constituting the primary difference. A therapist was present during both conditions, seated at the side of the table. A picture from the RET condition is shown in Fig 2. The exact setup varied slightly between different tasks. Some tasks made use of a touch screen placed between the child and the interaction partner, referred to as a sandtray [38]. Other tasks had a table as illustrated in Fig 1.

Fig 1. Configuration of the therapy environment during the two conditions used.

Fig 1

The child interacts with either a humanoid robot (RET, left) or a therapist (SHT, right).

Fig 2. Example of the therapy environment.

Fig 2

Red axes describe the orientation for the joint coordinate system for all data in the DREAM dataset.

Each intervention targeted three basic social skills that have been previously shown to be affected in individuals on the spectrum, namely imitation [39], joint-attention [40] and turn-taking in collaborative play [41]. The intervention had the same structure across all skills and was employed during both RET and SHT:

  1. the interaction partner (robot or human) provided a discriminative stimulus (i.e., an instruction to perform a behavior that is relevant for a particular skill);

  2. the interaction partner waited for the response of the child;

  3. the interaction partner offered feedback that was contingent on the behavior of the child, namely a positive feedback if the behavior matched the one that was expected, or an indication to try again if the performance was below the expectation.

For each discriminative stimulus, the child had three attempts to perform the behavior, each trial following the same sequence from above. If the child failed to perform the behavior at the last attempt, then the therapist offered a behavioral prompt.
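The three-step trial structure above, with up to three attempts per discriminative stimulus and a final therapist prompt, can be summarized as a small control loop. The sketch below is a hypothetical illustration only: `give_stimulus`, `observe_child`, `give_feedback`, and `therapist_prompt` are placeholder callbacks, not part of the DREAM system's actual software interface.

```python
# Hypothetical sketch of the discrete-trial loop described above.
# All four callbacks are placeholders for illustration.

def run_trial_block(give_stimulus, observe_child, give_feedback,
                    therapist_prompt, max_attempts=3):
    """Run up to `max_attempts` trials for one discriminative stimulus."""
    for _ in range(max_attempts):
        give_stimulus()            # step 1: present the discriminative stimulus
        success = observe_child()  # step 2: wait for the child's response
        give_feedback(success)     # step 3: feedback contingent on the behavior
        if success:
            return True
    therapist_prompt()             # behavioral prompt after the last failed attempt
    return False
```

The loop returns as soon as the expected behavior is observed, so positive feedback is given at most once per stimulus, matching the protocol described above.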

Each intervention was divided into three to six parts, following a task script. This script specifies the task used during the intervention, the instructions given to the child by the interaction partner (human or robot), as well as the actions made by the interaction partner. In the RET condition, the robot follows the script automatically, supervised by a second therapist (the supervisor) sitting behind the child (cf. Fig 1). The supervisor’s role is to monitor the automatic interpretation of the child’s behavior and to adjust the robot’s responses if necessary. In the SHT condition, the supervisor is not present and the human interaction partner follows the script manually. Twelve unique intervention scripts were used, specifying different exercises and three difficulty levels. As the child reached maximum performance on one level, he/she moved to the next one.

For imitation there were three scripts: 1) imitation of objects (e.g., the child had to imitate a common way of playing with a toy car); 2) imitation of common gestures (e.g., waving hand and saying goodbye); and 3) imitation of gestures without a particular meaning (e.g., moving hand in a position that does not have any common reference). In each imitation script, the interaction partner performed the move first and asked the child to do the same. The child received positive feedback if he/she accurately imitated the behavior of the interaction partner.

For joint-attention there were also three levels of difficulty, varying by the number of cues offered by the interaction partner: 1) pointing and looking at an object placed in front of the child while also giving a verbal cue (i.e., “look”); 2) pointing and looking at an object without verbal cues; and 3) just looking at an object. The objects for this task were displayed as pictures on the sandtray placed in front of the child. The child received a positive feedback if he/she followed the cues and looked at the object indicated by the interaction partner.

For turn-taking there were three different types of tasks, each with two levels of difficulty. One task was focused on sharing information about what one likes most, by choosing from five pictograms that were displayed at once. The two levels differed by the complexity of the pictograms (e.g., a simple color vs. an activity). Another task was focused on categorizing objects. In the first level of difficulty, only one object that had to be categorized was displayed at a time, while in the second level eight such objects were displayed, and the child had to choose one and move it into the correct category. The third task consisted of completing a series of pictures arranged in a pattern. In level one the child had to choose, from two pictures, the one that continues the pattern, while in the second level the child had to continue the pattern by choosing from four pictures. All turn-taking tasks were performed using the sandtray. The interaction partner and the child took turns in performing moves on the sandtray (e.g., choosing a favorite color). The child received a good performance rating and positive feedback if he/she waited without touching the screen while the interaction partner performed a move.

As mentioned above, the clinical protocol included an initial assessment, eight interventions, and a final assessment. The first and last assessments combined an ADOS evaluation with an evaluation of pre- and post-test performance in imitation, joint-attention and turn-taking. In the pre- and post-tests, the interaction partner did not provide any feedback, and the therapist did not offer any prompt (behavioral performance was only measured).

3.1 Sensors and setting

All therapy sessions were recorded using the same sensorized therapy table [24]. The table was equipped with three high-resolution RGB cameras and two RGBD (Kinect) cameras that, in combination with state-of-the-art sensor interpretation methods, provide information about the child’s position, motion, eye gaze, facial expressions, and verbal utterances. In addition, the table captured the presence and location of objects used in the therapy. A range of different algorithms was employed to compute these perceptions. A complete list of sensor primitives and associated methods is provided in Table 1. Note that only a subset of these features is included in the public dataset; see Section 4 for details.

Table 1. Sensor primitives extracted by the sensorized intervention table.

Sensor primitive Interpretation method
Relative eye-gaze Two-eye model-based gaze estimation based on RGBD [42]
Head pose Pose from Orthography and Scaling with ITerations (POSIT) [43]
Gaze estimation A 3D gaze vector is achieved by combining the relative eye-gaze with calculated head pose [24]
Face detection Boosted cascade face detector [44]
Facial features Supervised descent method proposed by [45]
Face expressions Frontalised Local Binary Patterns (LBP) classified using SVM [46]
3D skeleton Microsoft Kinect SDK
Action recognition 3D joints Moving Trend method based on skeleton data [47]
Object tracking GM-PHD Tracker [48]
Sound direction Microsoft Kinect SDK

Performance evaluations of each sensor primitive are available in Cai et al. [24].
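The gaze-estimation step in Table 1 combines the relative eye-gaze with the computed head pose to obtain a 3D gaze vector. A minimal sketch of such a fusion, assuming the head pose is available as a rotation matrix, might look as follows; the function and variable names are illustrative stand-ins, and the actual method is the one described in Cai et al. [24].

```python
import numpy as np

def world_gaze(relative_gaze, head_rotation):
    """Express a unit eye-gaze vector, given in the head frame, in the
    shared table coordinate system via the head rotation matrix.
    Both inputs are illustrative stand-ins for the quantities produced
    by the methods listed in Table 1."""
    v = np.asarray(relative_gaze, dtype=float)
    v = v / np.linalg.norm(v)             # normalize the gaze direction
    return np.asarray(head_rotation) @ v  # rotate into the world frame

# Example: a head turned 90 degrees about the vertical (z) axis maps a
# straight-ahead gaze along x onto the world y axis.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
```

Because the rotation preserves length, the resulting world-frame gaze vector remains a unit vector.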

The data gathered by the sensorized table was collected in real time with a temporal resolution of 25 Hz and was used to guide the behavior of the robot in the RET condition, following the session script. In addition, recorded data was used off-line to support analysis and assessment of the child’s progress through therapy, in both RET and SHT conditions. The table works without sensors placed on the child and without individual calibration, both of which are problematic to employ in therapy with autistic children.

While sensor data was used to guide the robot’s behavior on-line, robot responses were kept consistent throughout the study, i.e., the robot did not learn from previous interactions with the child. Instead, suitable task difficulty was achieved through the session scripts as described above, combined with supervised autonomy ensuring reliable robot behavior even in cases when the system failed to correctly assess the child’s actions [13]. While this architecture could effectively be combined with robot learning [49], here we chose a static system in order to increase validity of the clinical study.

4 Open dataset

The dataset resulting from the clinical evaluation presented above comprises a total of 306 hours of therapy. 41 out of 61 children finished all 8 interventions and the final diagnosis. The remaining participants finished an average of 4.5 interventions, i.e., just above half of the complete protocol. The total length of each intervention varied from 3 to 87 minutes as a result of script length and child behavior, with a median duration of 32 minutes. Average intervention durations for each of the two conditions (RET, SHT) are presented in Fig 3. To our knowledge, this is the largest dataset of autism therapy involving robots, and probably also the largest recorded dataset of children interacting with robots in general.

Fig 3. Average duration of each intervention in the two conditions over the complete protocol, comprising initial assessment, 8 interventions, and a final assessment.

Fig 3

Envelopes represent the 95% confidence interval of the mean.

As mentioned in the introduction, this dataset does not comprise any direct measurements, i.e., raw data, from the conducted therapies. However, given that a RET framework involves processing of sensory data into higher-level perceptions (see Section 3.1), we have access to comprehensive secondary data from these recordings. Variables related to the clinical evaluation, concerning for example the child’s performance during therapy, cannot yet be released; this type of information may be included in a later release of the dataset. Other variables, such as facial expressions, have relatively low reliability and were excluded from the dataset for this reason. Finally, any variables that may reveal the child’s identity have been excluded from the dataset.

The following data has been selected for inclusion in the public release of the DREAM dataset:

  1. Child ID (numerical index),

  2. Child’s gender,

  3. Child’s age in months,

  4. 3D skeleton comprising joint positions for upper body,

  5. 3D head position and orientation,

  6. 3D eye gaze vectors,

  7. Therapy condition (RET or SHT),

  8. Therapy task (Joint attention, Imitation, or Turn-taking),

  9. Date and time of recording,

  10. Initial ADOS scores.

With the ambition of releasing the dataset in an easily accessible, well-specified, and commonly used file format, JavaScript Object Notation (JSON) was selected. JSON is a lightweight data-interchange format derived from the JavaScript programming language (https://www.json.org). JSON has many of the attractive attributes found in XML, including standard libraries for most programming languages, validation patterns, and human readability. However, JSON is less verbose than XML and includes standard notation for arrays, making it much more suitable for storing numeric data.

An example structure of the DREAM dataset JSON format is included in Table 2. The complete format is specified by the JSON Schema found in Appendix A. An important property of this dataset is that all attributes are defined in a common frame of reference using a Cartesian coordinate system. The orientation of the Cartesian space in relation to the therapy environment is visualized in Fig 2.
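Because head positions and gaze vectors share one Cartesian frame, a downstream analysis can, for example, estimate where on the table a child is looking by intersecting the gaze ray with the table plane. The sketch below is a hypothetical illustration that assumes the vertical axis is z and the table surface lies at z = 0; the actual axis orientation for the dataset is the one shown in Fig 2.

```python
def gaze_point_on_table(head_pos, gaze_dir, table_z=0.0):
    """Intersect the gaze ray head_pos + t * gaze_dir with the plane
    z = table_z. Assumes z is the vertical axis (an assumption for this
    sketch; see Fig 2 for the dataset's actual axes). Returns None when
    the gaze never reaches the plane."""
    if abs(gaze_dir[2]) < 1e-9:
        return None  # gaze parallel to the table plane
    t = (table_z - head_pos[2]) / gaze_dir[2]
    if t < 0:
        return None  # gaze points away from the table
    return [head_pos[i] + t * gaze_dir[i] for i in range(3)]
```

For example, a head 0.5 m above the table looking forward and down at 45 degrees yields a gaze point 0.5 m in front of the head on the table surface.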

Table 2. Example structure of the open DREAM dataset, in JSON format.

“[…]” corresponds to numeric arrays that are too long to include here.

[Table 2 is rendered as an image (pone.0236939.e001.jpg) in the original publication.]
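As an illustration of the structure described above, a session record might look like the following sketch. All key names and values here are invented for illustration; the authoritative field definitions are given by the JSON Schema in Appendix A and the samples at https://github.com/dream2020/data.

```python
import json

# Hypothetical session record illustrating the kind of structure the
# DREAM JSON format uses. Key names and values are invented.
example_session = {
    "child_id": 7,
    "gender": "male",
    "age_months": 52,
    "condition": "RET",
    "task": "JointAttention",
    "datetime": "2017-06-14T10:30:00",
    "ados": {"initial_total": 12},
    "head_position": [[0.01, 0.25, 0.47], [0.01, 0.26, 0.47]],
    "eye_gaze": [[0.05, 0.70, -0.71], [0.06, 0.69, -0.72]],
}

text = json.dumps(example_session, indent=2)  # serialize to JSON text
restored = json.loads(text)                   # parse it back
```

The round trip through `dumps`/`loads` preserves the record exactly, which is one of the properties that makes JSON convenient for numeric session data.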

4.1 Licence

The Open DREAM dataset is licensed under

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence [50].

This licence permits copying and redistribution of the material in any medium or format and states that the licensor cannot revoke these freedoms as long as you follow the licence terms. The material here refers to all secondary data and metadata included in this public release of the dataset, excluding its underlying recordings or direct measurements. This freedom is given under the following terms:

  1. Attribution—You must give appropriate credit, provide a link to the licence, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

  2. NonCommercial—You may not use the material for commercial purposes.

  3. ShareAlike—If you remix, transform, or build upon the material, you must distribute your contributions under the same licence as the original.

  4. No additional restrictions—You may not apply legal terms or technological measures that legally restrict others from doing anything the licence permits.

Note:

  • You do not have to comply with the licence for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.

  • No warranties are given. The licence may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.

4.2 Data visualization

An important aspect of any public dataset is making it accessible and understandable for a larger audience. In addition to the technical specification provided in Appendix A, we release a visualization tool named DREAM Data Visualizer. This tool runs directly in the web browser and comprises a 3D environment where a user can play back each intervention and view the child’s movements in relation to the interaction partner.

The DREAM Data Visualizer is released as open source under the GNU GPL and is available for download at github.com/dream2020/DREAM-data-visualizer. An example screenshot from the DREAM Data Visualizer is presented in Fig 4.

Fig 4. Visualization of the DREAM dataset, including 3D skeleton of upper body and head rotations.

Fig 4

5 Discussion

Although the global prevalence of autism is difficult to assess [51] and its exact origin has not been determined [5, 52], autism is becoming an increasingly common diagnosis worldwide. This affects not only all the people receiving diagnosis, but also constitutes a significant cost for society [30]. RAT/RET has been put forward as a potentially cost-effective treatment, but still lacks large-scale clinical trials and longitudinal studies in order to assess its effects [12, 15, 17].

In the present work, we present a dataset covering behavioral data recorded during therapeutic interventions with 61 children. The dataset comprises a rich set of features which we believe are essential for understanding and assessing children’s behavior during therapy. By providing a large set of data, comprising 61 children with varying degrees of ASD taking part in more than 3000 therapy sessions, we hope that the DREAM Dataset can constitute an important asset for future studies in the field.

The dataset may for example be used by studies employing machine learning or artificial intelligence to find patterns in behavioral data. Such patterns may guide further clinical studies by providing new insights into how to appropriately select between RET and traditional ABA therapies or constitute input to new therapeutic methods.

Data Availability

The complete dataset is available for download through the Swedish National Data Service, https://doi.org/10.5878/17p8-6k13. Sample data and usage instructions can be accessed at https://github.com/dream2020/data. In addition, source code for the DREAM RET System [24], with which the present dataset was gathered, is made available at https://www.doi.org/10.5281/zenodo.3571992.

Appendix A: Dataset specification

This is a JSON schema for the open DREAM dataset presented in Section 4. The schema presented here specifies the format for a single therapy session with one child under therapy, including definitions of all mandatory attributes of the data. A JSON database file may however comprise additional attributes not defined here. The complete dataset comprises a large set of these sessions.

[The JSON Schema is rendered as an image (pone.0236939.e002.jpg) in the original publication.]
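To illustrate the kind of structural constraints such a schema expresses, the following stdlib-only sketch mimics two toy checks on a session record. The field names and bounds are stand-ins, not the actual DREAM schema; in practice one would validate against the published schema with a JSON Schema library such as `jsonschema`.

```python
def check_session(record):
    """Return a list of problems found in a session dict (empty = valid).
    The constraints below are toy stand-ins for schema rules: the
    condition must be one of the two study arms, and the age must fall
    in the study's 3-6 year recruitment range, expressed in months."""
    problems = []
    if record.get("condition") not in ("RET", "SHT"):
        problems.append("condition must be RET or SHT")
    age = record.get("age_months")
    if not isinstance(age, int) or not 36 <= age <= 83:
        problems.append("age_months must be an integer in [36, 83]")
    return problems
```

A real JSON Schema validator additionally checks nested array shapes and required keys, but the principle is the same: every session file either conforms to the declared structure or is rejected with a diagnostic.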


Funding Statement

This work was funded by the EU, under the Seventh Framework Programme grant #611391: Development of Robot-Enhanced therapy for children with AutisM spectrum disorders (DREAM). The commercial company SoftBank Robotics provided support in the form of salaries for author A.M., but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.

References

  • 1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders (DSM-5®). Washington, D.C.: American Psychiatric Publishing; 2013. [Google Scholar]
  • 2. De Filippis M, Wagner KD. Treatment of autism spectrum disorder in children and adolescents. Psychopharmacology Bulletin. 2016;46(2):18–41. [PMC free article] [PubMed] [Google Scholar]
  • 3. Lord C, Elsabbagh M, Baird G, Veenstra-Vanderweele J. Autism spectrum disorder. The Lancet. 2018;392(10146):508–520. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Narzisi A, Costanza C, Umberto B, Filippo M. Non-Pharmacological Treatments in Autism Spectrum Disorders: An Overview on Early Interventions for Pre-Schoolers. Current clinical pharmacology. 2014;9(1). [DOI] [PubMed] [Google Scholar]
  • 5. Smith T, Iadarola S. Evidence Base Update for Autism Spectrum Disorder. Journal of Clinical Child and Adolescent Psychology. 2015;44(6):897–922. [DOI] [PubMed] [Google Scholar]
  • 6. Roane HS, Fisher WW, Carr JE. Applied Behavior Analysis as Treatment for Autism Spectrum Disorder. Journal of Pediatrics. 2016;175:27–32. [DOI] [PubMed] [Google Scholar]
  • 7. Dautenhahn K, Werry I. Towards interactive robots in autism therapy: Background, motivation and challenges. Pragmatics & Cognition. 2004;12(1):1–35. [Google Scholar]
  • 8. Diehl JJ, Schmitt LM, Villano M, Crowell CR. The clinical use of robots for individuals with Autism Spectrum Disorders: A critical review. Research in Autism Spectrum Disorders. 2012;6(1):249–262. 10.1016/j.rasd.2011.05.006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Thill S, Pop CA, Belpaeme T, Ziemke T, Vanderborght B. Robot-Assisted Therapy for Autism Spectrum Disorders with (Partially) Autonomous Control: Challenges and Outlook. Paladyn, Journal of Behavioral Robotics. 2012;3(4):209–217. [Google Scholar]
  • 10. Mengoni SE, Irvine K, Thakur D, Barton G, Dautenhahn K, Guldberg K, et al. Feasibility study of a randomised controlled trial to investigate the effectiveness of using a humanoid robot to improve the social skills of children with autism spectrum disorder (Kaspar RCT): a study protocol. BMJ open. 2017;7(6):e017376 10.1136/bmjopen-2017-017376 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Scassellati B, Boccanfuso L, Huang CM, Mademtzi M, Qin M, Salomons N, et al. Improving social skills in children with ASD using a long-term, in-home social robot. Science Robotics. 2018;3(21). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Costescu CA, Vanderborght B, David DO. The effects of robot-enhanced psychotherapy: A meta-analysis. Review of General Psychology. 2014;18(2):127–136. [Google Scholar]
  • 13. Esteban PG, Baxter P, Belpaeme T, Billing EA, Cai H, Cao H, et al. How to Build a Supervised Autonomous System for Robot-Enhanced Therapy for Children with Autism Spectrum Disorder. Paladyn, Journal of Behavioral Robotics. 2017;8(1):18–38. [Google Scholar]
  • 14. Cao HL, Esteban P, Bartlett M, Baxter PE, Belpaeme T, Billing E, et al. Robot-Enhanced Therapy: Development and Validation of a Supervised Autonomous Robotic System for Autism Spectrum Disorders Therapy. IEEE Robotics and Automation Magazine. 2019. [Google Scholar]
  • 15. David D, Costescu CA, Matu S, Szentagotai A, Dobrean A. Effects of a Robot-Enhanced Intervention for Children With ASD on Teaching Turn-Taking Skills. Journal of Educational Computing Research. 2019; p. 073563311983034. [Google Scholar]
  • 16. Hernández García D, Esteban PG, Lee HR, Romeo M, Senft E, Billing EA. Social Robots in Therapy and Care. In: Workshop at the ACM/IEEE International Conference on Human Robot Interaction (HRI). Daegu, South Korea; 2019.
  • 17. Pennisi P, Tonacci A, Tartarisco G, Billeci L, Ruta L, Gangemi S, et al. Autism and social robotics: A systematic review; 2016. [DOI] [PubMed]
  • 18. Rabbitt SM, Kazdin AE, Scassellati B. Integrating socially assistive robotics into mental healthcare interventions: Applications and recommendations for expanded use; 2015. [DOI] [PubMed]
  • 19. Scassellati B. How Social Robots Will Help Us to Diagnose, Treat, and Understand Autism In: Thrun S, Brooks R, Durrant-Whyte H, editors. Robotics Research. Berlin, Heidelberg: Springer Berlin Heidelberg; 2007. p. 552–563. [Google Scholar]
  • 20. Ramírez-Duque AA, Frizera-Neto A, Bastes TF. Robot-Assisted Diagnosis for Children with Autism Spectrum Disorder Based on Automated Analysis of Nonverbal Cues. Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics. 2018; p. 456–461.
  • 21. Begum M, Serna RW, Yanco HA. Are Robots Ready to Deliver Autism Interventions? A Comprehensive Review. International Journal of Social Robotics. 2016;8(2):157–181. [Google Scholar]
  • 22. DREAM. Development of Robot-Enhanced Therapy for Children with Autism Spectrum Disorders; 2020. Available from: https://www.dream2020.eu/.
  • 23. Gouaillier D, Hugel V, Blazevic P, Kilner C, Monceaux J, Lafourcade P, et al. Mechatronic design of NAO humanoid. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Institute of Electrical and Electronics Engineers (IEEE); 2009. p. 769–774.
  • 24. Cai H, Fang Y, Ju Z, Costescu CA, David D, Billing EA, et al. Sensing-enhanced Therapy System for Assessing Children with Autism Spectrum Disorders: A Feasibility Study. IEEE Sensors Journal. 2019;9(4):1508–1518. [Google Scholar]
  • 25. Thabtah F. An accessible and efficient autism screening method for behavioural data and predictive analyses. Health Informatics Journal. 2018; p. 146045821879663. [DOI] [PubMed] [Google Scholar]
  • 26. Thabtah F. Autism Spectrum Disorder screening: Machine learning adaptation and DSM-5 fulfillment. Proceedings of the 1st International Conference on Medical and Health Informatics—ICMHI’17. 2017; p. 1–6.
  • 27. Kleinman JM, Robins DL, Ventola PE, Pandey J, Boorstein HC, Esser EL, et al. The modified checklist for autism in toddlers: a follow-up study investigating the early detection of autism spectrum disorders. Journal of Autism and Developmental Disorders. 2008;38(5):827–39. 10.1007/s10803-007-0450-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28. Allison C, Auyeung B, Baron-Cohen S. Toward Brief “Red Flags” for Autism Screening: The Short Autism Spectrum Quotient and the Short Quantitative Checklist in 1,000 Cases and 3,000 Controls. Journal of the American Academy of Child & Adolescent Psychiatry. 2012;51(2):202–212. [DOI] [PubMed] [Google Scholar]
  • 29. Thabtah F. Autism Datasets; 2019. Available from: http://fadifayez.com/autism-datasets/.
  • 30. Anzulewicz A, Sobota K, Delafield-Butt JT. Toward the Autism Motor Signature: Gesture patterns during smart tablet gameplay identify children with autism. Scientific Reports. 2016;6(July):1–13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31. Salter DA, Tamrakar A, Siddiquie B, Amer MR, Divakaran A, Lande B, et al. The Tower Game Dataset: A multimodal dataset for analyzing social interaction predicates. 2015 International Conference on Affective Computing and Intelligent Interaction, ACII 2015. 2015; p. 656–662.
  • 32. Rehg JM, Abowd GD, Rozga A, Romero M, Clements MA, Sclaroff S, et al. Decoding children’s social behavior. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2013; p. 3414–3421.
  • 33. Georgia Tech. The Multimodal Dyadic Behavior Dataset; 2019. Available from: http://www.cbi.gatech.edu/mmdb/.
  • 34. Ben-Youssef A, Clavel C, Essid S, Bilac M, Chamoux M, Lim A. UE-HRI: A new dataset for the study of user engagement in spontaneous human-robot interactions. ICMI 2017—Proceedings of the 19th ACM International Conference on Multimodal Interaction. 2017; p. 464–472.
  • 35. Lemaignan S, Edmunds CER, Senft E, Belpaeme T. The PInSoRo dataset: Supporting the data-driven study of child-child and child-robot social dynamics. PLOS ONE. 2018;13(10):e0205999 10.1371/journal.pone.0205999 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Rehg JM, Rozga A, Abowd GD, Goodwin MS. Behavioral Imaging and Autism. IEEE Pervasive Computing. 2014;13(2):84–87. [Google Scholar]
  • 37. Gotham K, Pickles A, Lord C. Standardizing ADOS scores for a measure of severity in autism spectrum disorders. Journal of Autism and Developmental Disorders. 2009;39(5):693–705. 10.1007/s10803-008-0674-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38. Baxter P, Wood R, Belpaeme T. A touchscreen-based ’sandtray’ to facilitate, mediate and contextualise human-robot social interaction. HRI’12—Proceedings of the 7th Annual ACM/IEEE International Conference on Human-Robot Interaction. 2012; p. 105–106.
  • 39. Ingersoll B. The Social Role of Imitation in Autism. Infants & Young Children. 2008;21(2):107–119. [Google Scholar]
  • 40. Dawson G, Toth K, Abbott R, Osterling J, Munson J, Estes A, et al. Early Social Attention Impairments in Autism: Social Orienting, Joint Attention, and Attention to Distress. Developmental Psychology. 2004;40(2):271–283. [DOI] [PubMed] [Google Scholar]
  • 41. Wimpory DC, Hobson RP, Williams JM, Nash S. Are infants with autism socially engaged? A study of recent retrospective parental reports. Journal of autism and developmental disorders. 2000;30(6):525–36. [DOI] [PubMed] [Google Scholar]
  • 42. Zhou X, Cai H, Li Y, Liu H. Two-eye model-based gaze estimation from a Kinect sensor. In: IEEE International Conference on Robotics and Automation. IEEE; 2017. p. 1646–1653.
  • 43. Dementhon DF, Davis LS. Model-based object pose in 25 lines of code. International Journal of Computer Vision. 1995;15(1-2):123–141. [Google Scholar]
  • 44. Viola P, Jones MJ. Robust Real-Time Face Detection. International Journal of Computer Vision. 2004;57(2):137–154. [Google Scholar]
  • 45. Xiong X, De La Torre F. Supervised descent method and its applications to face alignment. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; 2013. p. 532–539.
  • 46. Wang Y, Yu H, Dong J, Stevens B, Liu H. Facial expression-aware face frontalization In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). vol. 10113 LNCS. Springer Verlag; 2017. p. 375–388. [Google Scholar]
  • 47. Liu B, Yu H, Zhou X, Tang D, Liu H. Combining 3D joints Moving Trend and Geometry property for human action recognition. In: 2016 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2016—Conference Proceedings. Institute of Electrical and Electronics Engineers Inc.; 2017. p. 332–337.
  • 48. Zhou X, Yu H, Liu H, Li Y. Tracking Multiple Video Targets with an Improved GM-PHD Tracker. Sensors. 2015;15(12):30240–30260. 10.3390/s151229794 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49. Senft E, Lemaignan S, Baxter PE, Belpaeme T. SPARC: an efficient way to combine reinforcement learning and supervised autonomy. In: Future of Interactive Learning Machines Workshop at NIPS’16. NIPS 2016; 2016. p. 1–5.
  • 50. Creative Commons. Attribution-NonCommercial-ShareAlike 4.0 International—CC BY-NC-SA 4.0; 2020. Available from: https://creativecommons.org/licenses/by-nc-sa/4.0/.
  • 51. Poovathinal SA, Anitha A, Thomas R, Kaniamattam M, Melempatt N, Meena M. Global Prevalence of Autism: A Mini-Review. SciFed Journal of Autism. 2018;2(1):1–9. [Google Scholar]
  • 52. Volkmar FR, Paul R, Rogers SJ, Pelphrey KA, editors. Handbook of Autism and Pervasive Developmental Disorders. 4th ed New York, NY: Wiley: John Wiley & Sons, Ltd; 2014. [Google Scholar]

Decision Letter 0

Lucia Billeci

19 May 2020

PONE-D-20-08566

The DREAM Dataset: Supporting a data-driven study of autism spectrum disorder and robot enhanced therapy

PLOS ONE

Dear Dr. Billing,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by Jun 29 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Lucia Billeci

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

3. Thank you for stating the following in the Financial Disclosure section:

'EU FP7 Grant #611391: Development of Robot-Enhanced therapy for children with AutisM spectrum disorders (DREAM).'

We note that one or more of the authors are employed by a commercial company: SoftBank Robotics

a. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form.

Please also include the following statement within your amended Funding Statement.

“The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.”

If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

b. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc.  

Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

c. Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: N/A

Reviewer #3: N/A

Reviewer #4: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for the opportunity to review this very interesting manuscript entitled “The DREAM Dataset: Supporting a data-driven study of autism spectrum disorder and robot enhanced therapy”, which has been submitted for consideration to PLOS ONE. Robot Enhanced Therapy has attracted great attention. I agree on the importance of Robot Enhanced Therapy and of a dataset supporting further data-driven studies toward improved therapy methods as well as a better understanding of ASD in general. However, I could not find any results in this article. In addition, the Discussion is very short.

Reviewer #2: The MS describes a database/set corresponding to a study involving a robot (robot enhanced therapy) in children with ASD. Authors have recorded many features during 3000 sessions and they offer a data visualizer that is very welcome. Given the unique dataset they are describing, authors should be congratulated for such a commitment.

However, there are many imprecisions and writing issues that make the manuscript inadequate for publication. The authors need to work on it a bit more (it reads like the conference papers we often see in the field).

1. The intro does not stand as it is. It should be reorganized and some points need to be added or modified.

I suggest:

- start by ASD treatment principles (see Narzisi et al. 2015).

- Don’t start by medication!! It is not the treatment of autism.

- The claim that medication has strong evidence is wrong! The statements on medication are useless.

- Then be more specific with ABA (because) it inspired your robot enhanced therapy.

- Add a brief paragraph on robot and ASD (they are several recent reviews).

- Then present RET and indicate in the intro the exact design (randomized?, duration?)

- Page 2 correct form for from

2. The clinical evaluation

- This section should be renamed “clinical evaluation and protocol”

- The most important revision should be done when authors describe the setting in this section. To me, it should be a separate section called “sensors and setting”.

- In this new section, the authors should detail more how the robot is used (at least roughly, as I understand that details will be available in the clinical paper). They should also offer a table of the extracted features made available in the dataset (and the corresponding algorithms they used). Finally, they should indicate whether any raw data are available.

3. The “open dataset” section needs a 4.1 “dataset variables”. Also, the 4.2 “licence” seems to be a very general statement found on websites! Please, if you include a note, be specific to the case of the DREAM dataset. There is no need for general legal statements!

4. The discussion is minimalist, and why not, I am OK with that. But please edit the section, as there are at least two sentences that I did not understand.

Also, please cite more recent reviews instead of [11, 14]. Finally, the very general statement that the dataset will offer an opportunity to develop new screening methods makes no sense to me. You don’t have typically developing controls!

Reviewer #3: This is really impressive work and I commend the authors on what they have achieved in this project. Such a thorough dataset is a valuable contribution to the field and will undoubtedly be useful to other researchers. My only suggestion is that the authors provide more details on the duration of the therapy sessions in their overview. Currently the authors say that sessions lasted anywhere between a few minutes and 40 minutes. It would be useful to have a graph batching the sessions and providing an indication of how long the average session might last, and also whether these session durations increased or decreased over the course of the study.

Overall great work!

Reviewer #4: In this paper, the authors present a dataset of behavioral data recorded through a Robot Enhanced Therapy (RET) with 61 children diagnosed with Autism Spectrum Disorders (ASD). Half of the children interacted with the social robot NAO supervised by a therapist. The other half, constituting a control group, interacted directly with a therapist. Both groups followed the Applied Behavior Analysis (ABA) protocol. The dataset comprises body motion, head position and orientation, and eye gaze variables recorded with three RGB cameras and two RGBD (Kinect) cameras. It also includes metadata such as participant age, gender, and autism diagnosis (ADOS) variables. Participants in both groups went through a protocol of initial diagnosis, eight interventions, and a final diagnosis, targeting three social skills: Turn-Taking (TT), Imitation (IM) and Joint Attention (JA). The effect of the treatment was assessed using ADOS, in terms of the difference between the initial and final diagnosis. The clinical study where this data was collected received prior ethical approval from the Scientific Council of Babes-Bolyai University in Cluj-Napoca, Romania, where the trial was conducted (record no. 30664/February 10th, 2017). A letter of consent was signed by at least one parent before initiating the study.

I am very happy that this type of dataset is being publicly released. This is a very important step toward helping future robot-assisted therapies for autism. I would like to congratulate the authors on this work.

I have only a few comments/questions and minor corrections suggested below.

Minor points

About the content of the dataset itself: Page 8, in the list of included data, I would replace ‘date’ by ‘time’, so that it is clear that not only the day is specified. Shouldn’t the success of the current trial also be indicated? This could help users of the dataset distinguish correct from incorrect movements, and maybe better understand cases of hesitation for instance. Moreover, the authors put an emphasis in the introduction on the different levels of difficulties/deficits of people within the ASD spectrum. Shouldn’t the dataset provide information distinguishing these different levels?

Lines 19:23, the sentence is ill-formulated: ‘[5] labeled two interventions as well-established: individual, comprehensive Applied Behavior Analysis (ABA) and teacher-implemented, focused, ABA+ developmental social-pragmatic (DSP), an intervention that combines ABA and DSP strategies.’ When I read the sentence, it seems to me that there are three interventions: ABA, DSP and ABA+. While actually there are ABA and ABA+DSP. And the way the sentence is written could suggest that DSP is an intervention that combines ABA and DSP. I suggest reformulating and separating the interventions with semicolons. Also, rather than following the way they are presented in the abstract of Smith & Iadarola 2015, it seems to me that it would be better to first list ABA and DSP, with a short explanation of what is the difference, and then explain that a combination of the two called ABA+DSP also exists. Then the authors could explain that Smith & Iadarola 2015 have emphasized their difference in efficacy. I am not sure it is really useful to mention the distinction between focused and comprehensive, individual vs. teacher-implemented, since this is not further explained nor used in the present manuscript.

Lines 33-34: ‘an ABA protocol where a humanoid robot constitutes the interaction partner.’ I think ‘the’ is not appropriate here, because it suggests at first glance that the robot is the unique interaction partner. I would replace it with ‘an’. At the end of the paragraph, I think it would be great to emphasize that in RET the goal is not to replace the human therapist by a robot, but instead to assist the therapist, the robot being only a mediator (of the therapy, as opposed to the therapist being a mediator of the interaction with the robot) or a tool.

I think it is important to state that the robot’s behavior is preprogrammed, not allowing any on-the-fly learning while interacting with the child. The strengths of doing so could be emphasized, such as stability, predictability, perhaps easier acceptability by children with ASD, and making sure that the behavior of the robot during the experiment is perfectly controlled. In contrast, it would be interesting to mention that alternative studies enable the robot to learn on-the-fly while interacting with children with autism. The strengths and weaknesses of doing so could also be discussed in comparison with the present method. I think this would be very useful, first so that potential users of the dataset know clearly what was the robot’s behavior and its abilities, and second to provide some insights to the community about the pros and cons of enabling robots to learn or not during RAT/RET.

Page 3/16, the authors should not forget to remove the last three sentences of footnote 2 before publication.

On the one hand, the background section stresses the potential of this kind of dataset to contribute to the diagnosis of ASD. On the other hand, the introduction mentions only therapies, but not the use of robots in diagnosis. I think the objectives should be more clearly stated, and the potential contribution to therapy, diagnosis or both should be discussed. This is of particular interest for the social robotics community, which is currently wondering whether there is a potential for social robots to contribute to therapy only, or also to diagnosis.

Line 98, ‘It focus specifically’ -> It focuses /or/ Its focus is.

Figure 1 seems to suggest that no supervising human was involved in the SHT configuration. Could the authors confirm? Did the supervisor have an active role (like intervention), in addition to controlling the robot in case of problem, or only a passive role (monitoring)? In the former case, was it a problem not to have a supervisor during SHT? Was there a difference between RET and SHT in terms of interventions by the supervisor?

Figure 2 does not explicitly refer to any touchscreen between the child and the robot. The sandtray is actually not clearly visible in Figure 2, in contradiction with what is written (lines 164-166).

Line 170: ‘were employed’ -> was employed.

Line 206, objected -> object.

Lines 231-232: ‘state of the art sensor interpretation algorithms’. Could the authors more explicitly state which algorithms were used? I think it is important for people using the dataset to know.

What happened when children only spent ‘a few minutes’ in a session? Did he/she complete only a single intervention script? Was the script completed? Was the data still included in the dataset?

Line 313, ‘screen shoot’ -> screenshot.

Line 327, ‘be used studies’ -> be used by studies.

Line 352, comprise -> comprises.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

Reviewer #4: Yes: Mehdi Khamassi

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Aug 21;15(8):e0236939. doi: 10.1371/journal.pone.0236939.r002

Author response to Decision Letter 0


25 Jun 2020

Dear Prof Kovacs,

Thanks for looking over the revised manuscript. I've now removed all figures from the manuscript but left the figure labels/texts in order to keep in-text figure references consistent. I hope this update is satisfactory.

Please find updated statements regarding financial support and competing interests in the cover letter, and a detailed response to all reviewer comments in the attached rebuttal letter. We've attached the revised manuscript in two versions, with and without change tracking.

Thanks again for your valuable input on the article, and I wish you a good summer!

Kind regards, Dr. Erik Billing on behalf of all the authors.

Attachment

Submitted filename: Rebutal letter.pdf

Decision Letter 1

Lucia Billeci

17 Jul 2020

The DREAM Dataset: Supporting a data-driven study of autism spectrum disorder and robot enhanced therapy

PONE-D-20-08566R1

Dear Dr. Billing,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Lucia Billeci

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: (No Response)

Reviewer #3: All comments have been addressed

Reviewer #4: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: (No Response)

Reviewer #3: Yes

Reviewer #4: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: (No Response)

Reviewer #3: Yes

Reviewer #4: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: (No Response)

Reviewer #3: Yes

Reviewer #4: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: (No Response)

Reviewer #3: Yes

Reviewer #4: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: (No Response)

Reviewer #3: After reviewing the paper I am happy that the authors have made the necessary changes to the manuscript and it is acceptable to publish in its current form.

Reviewer #4: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Reviewer #3: No

Reviewer #4: Yes: Mehdi Khamassi

Acceptance letter

Lucia Billeci

28 Jul 2020

PONE-D-20-08566R1

The DREAM Dataset: Supporting a data-driven study of autism spectrum disorder and robot enhanced therapy

Dear Dr. Billing:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Lucia Billeci

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Attachment

    Submitted filename: Rebutal letter.pdf

    Data Availability Statement

    The complete dataset is available for download through the Swedish National Data Service, https://doi.org/10.5878/17p8-6k13. Sample data and usage instructions can be accessed at https://github.com/dream2020/data. In addition, source code for the DREAM RET System [24], with which the present dataset was gathered, is made available at https://www.doi.org/10.5281/zenodo.3571992.


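    As a minimal sketch of getting started with the released data: the sample repository distributes recordings as one file per therapy session, which can be parsed with standard tools. The snippet below assumes a plain JSON layout and uses only placeholder paths; the dataset's actual file format and field names are documented in the usage instructions at https://github.com/dream2020/data and should be consulted before use.

    ```python
    import json
    from pathlib import Path

    def load_session(path):
        """Parse a single session file (assumed JSON) into a Python dict.

        The schema of the returned dict depends on the dataset release;
        this helper makes no assumptions about field names.
        """
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)

    def iter_sessions(data_dir):
        """Yield (filename, parsed_session) pairs for every JSON file
        found in data_dir, in sorted filename order."""
        for p in sorted(Path(data_dir).glob("*.json")):
            yield p.name, load_session(p)
    ```

    A typical usage pattern would be `for name, session in iter_sessions("data/"): ...`, iterating over all downloaded session files.
    
    
    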
