PLOS One. 2021 Jun 4;16(6):e0252245. doi: 10.1371/journal.pone.0252245

Rationale and validation of a novel mobile application probing motor inhibition: Proof of concept of CALM-IT

Elise M Cardinale 1,*, Reut Naim 1, Simone P Haller 1, Ramaris German 1, Christian Botz-Zapp 1, Jessica Bezek 1, David C Jangraw 2, Melissa A Brotman 1
Editor: Veena Kumari
PMCID: PMC8177631  PMID: 34086728

Abstract

Identification of behavioral mechanisms underlying psychopathology is essential for the development of novel targeted therapeutics. However, this work relies on rigorous, time-intensive, clinic-based laboratory research, making it difficult to translate research paradigms into tools that can be used by clinicians in the community. The broad adoption of smartphone technology provides a promising opportunity to bridge the gap between the mechanisms identified in the laboratory and the clinical interventions targeting them in the community. The goal of the current study is to develop a developmentally appropriate, engaging, novel mobile application called CALM-IT that probes a narrow biologically informed process, inhibitory control. We aim to leverage the rigorous and robust methods traditionally used in laboratory settings to validate this novel mechanism-driven but easily disseminatable tool that can be used by clinicians to probe inhibitory control in the community. The development of CALM-IT has significant implications for the ability to screen for inhibitory control deficits in the community by both clinicians and researchers. By facilitating assessment of inhibitory control outside of the laboratory setting, researchers could have access to larger and more diverse samples. Additionally, in the clinical setting, CALM-IT represents a novel clinical screening measure that could be used to determine personalized courses of treatment based on the presence of inhibitory control deficits.

Introduction

Precise identification of behavioral mechanisms underlying psychopathology is essential to the assessment and development of novel, targeted therapeutics [1–4]. Despite efforts by clinicians to make use of research in their practice [5,6], challenges persist in the dissemination of evidence-based tools that directly probe mechanisms underlying psychopathology [7–10]. Traditionally, identification of behavioral mechanisms underlying psychopathology has relied on rigorous, time-intensive, clinic-based laboratory research [11], which depends on tools that are largely inaccessible to the clinical community. Advances in technology have resulted in promising developments in more easily disseminatable and accessible platforms, including the use of mobile technology to measure clinical symptoms during daily life [12]. However, the majority of this work does not directly assess precise underlying behavioral mechanisms, such as inhibitory control. Furthermore, little work has focused on the efficacy and feasibility of mobile applications as assessments of mechanisms underlying psychopathology specifically in pediatric samples. The current study introduces a mobile application, “CALM-IT,” an engaging, accessible, and disseminatable cognitive measure that is user-friendly for both patients and clinicians and probes inhibitory control, a well-validated behavioral construct that has been posited as one critical mechanism underlying childhood psychopathology broadly [10,13,14]. In developing CALM-IT, we aim to flexibly deploy the rigorous and robust methods traditionally used in laboratory settings to validate a mechanism-driven but easily disseminatable tool that can be used by clinicians to probe inhibitory control in the community.

Inhibitory control is a well-defined and brain-related construct capturing an individual’s ability to resist or modulate impulses and suppress prepotent or automatic behavioral responses [15]. Inhibitory control has been studied extensively across species [16–18], developmental periods [19–21], and research modalities [22–25]. This rich body of literature has led to the development of a number of canonical laboratory-based tasks. It has also led to the identification of an evolutionarily conserved underlying neural circuitry involving the ventrolateral, ventromedial, and dorsolateral prefrontal cortices; the orbitofrontal cortex; the inferior frontal gyrus; and the dorsal, posterior, and anterior cingulate cortices [16,20,26,27]. As such, researchers are well equipped with neuroscience-informed tools to study inhibitory control within the laboratory setting through the use of standardized paradigms. However, these tasks tend to be time intensive, repetitive, and expensive, making them difficult to deploy effectively in a community setting. For example, some tasks rely on eye-tracking technology [28] that requires specialized equipment and specific environmental controls (e.g., the luminance of the room and the participant’s head position relative to the presented stimuli), while others involve large numbers of repetitive trials presenting single simplistic stimuli one at a time (i.e., letters or shapes) that require long periods of sustained attention [29–32].

Inhibitory control also has broad clinical relevance. Research indicates that youth with psychopathology, including mood and attentional difficulties, exhibit difficulty inhibiting responses and show aberrant neural activity when attempting to inhibit a motor response [7–10]. Emotion dysregulation broadly is posited to reflect compromised top-down control [33,34] resulting from dysfunctional ventrolateral and dorsolateral prefrontal cortical regulation of subcortical regions involved in lower-level emotional processing [35,36]. These findings are consistent with the behavior clinicians observe in youth with severe irritability and attentional difficulties, who often fail to inhibit a response, which can manifest clinically as temper outbursts, increased motor activity, and problems engaging in goal-directed actions [37]. While the literature examining treatment of youth psychopathology, such as attention deficit hyperactivity disorder (ADHD) and irritability, using interventions that specifically target inhibitory control is limited, previous work shows a reduction of mood symptoms, including anxiety, following stimulant medication treatment for co-occurring attentional difficulties [38,39]. These findings provide some evidence suggesting that inhibitory control may reflect a key behavioral mechanism underlying the emergence of mood symptoms and treatment response. The proposed work could be of particular importance in relation to anxiety, irritability, and ADHD symptoms, as these are common, co-occurring symptoms that are present across a wide range of childhood psychopathology [37,40–42] and are implicated in the later development of mood disorders, including depression [43–46]. Furthermore, aberrant cognitive control more broadly has been implicated as a potential mechanism associated with the presentation of anxiety [47–49], irritability [8,50,51], and ADHD [19,52] symptoms. However, any innovations targeting inhibitory control will need to be both clinically valuable and accessible if they are to be embraced by the broader community.

There is a profound need for mental health treatment research efforts to probe underlying mechanisms to inform diagnosis and treatment decisions [1–4]. Clinical researchers are well poised to harness new technologies to increase access, enhance developmental approaches, and assess transdiagnostic behavioral mechanisms underlying psychopathology [53,54]. Efforts to translate this work to clinical settings are crucial, as providing clinicians with tools to directly probe behavioral mechanisms could lead to increased precision medicine, patient engagement, and cost-effectiveness [55–57]. However, application of these innovations in clinical settings is lagging [53,54]. Few clinical approaches attempt to directly probe the aberrant brain-based behavioral mechanisms that have been linked to psychopathology [58] while also providing practical use in clinical settings. A significant contributing factor is the difficulty researchers face in creating neuroscience-informed paradigms that are easily disseminated, suitable, and practical for clinical use.

The goal of the present work is to leverage the strong psychometric properties of a canonical inhibitory control task in the creation of CALM-IT. By targeting a precise, pathophysiologically informed mechanism, and by using a platform that is highly engaging and easily disseminated in pediatric samples, the mobile application CALM-IT was designed to bridge the gap between precise, mechanism-driven basic science research and community-based assessment of childhood psychopathology. CALM-IT is designed to leverage the strong methodological design of well-established laboratory-based inhibitory control tasks while increasing participant engagement through gamification of the tasks. By using a mobile platform, participant interaction with CALM-IT mirrors interaction with other mobile games, with the goal of increasing participant engagement via dynamic stimuli and in-game incentives. The development of CALM-IT would allow clinicians and researchers to screen for and track inhibitory control deficits in the community. This could extend the investigation of inhibitory control beyond the confines of the laboratory setting, thereby facilitating access to a larger and more diverse sample of patients. Ultimately, CALM-IT could lead to novel clinical screening measures for inhibitory control deficits that could be used to inform a personalized course of treatment.

Study aims

Aim 1: Assessment of feasibility and consistency of CALM-IT

Aim 1a

Our first aim is to assess the feasibility of CALM-IT as a mobile paradigm. We will assess the ease of dissemination, mobile application engagement, and ability to extract mobile application-based behavioral measures of inhibitory control within a large transdiagnostic pediatric sample.

Aim 1b

Next, we will assess the reliability of mobile application-based behavioral measures of inhibitory control across two sessions of play, spaced one week apart.

Aim 2: Validation of mobile application-based behavioral metrics as measures of inhibitory control

Our second aim is to validate mobile application-based measures of inhibitory control against measures of inhibitory control extracted from four canonical laboratory-based tasks: the Antisaccade Task [28], the AX Continuous Performance Task [32], the Flanker Task [30], and the Stop Signal Task [31].

Aim 3: Identify clinical correlates of mobile application-based behavioral metrics

Finally, our third aim is to identify clinical correlates of mobile application-based measures of inhibitory control. Specifically, we will examine associations between mobile application-based behavioral measures of inhibitory control with shared vs. unique variance of three symptoms present across a wide range of pediatric psychopathologies: ADHD, anxiety, and irritability symptoms.

Methods

Participants

Youth aged 8–18 years old will be recruited from the community to participate in research at the National Institute of Mental Health (NIMH). Participants will be recruited as part of a larger protocol that aims to specifically recruit youth with a primary diagnosis of an anxiety disorder, Disruptive Mood Dysregulation Disorder (DMDD), or ADHD, as well as youth with no psychiatric diagnosis. Past work within this sample has found that recruiting participants characterized by these clinical diagnoses yields the full range of irritability, anxiety, and ADHD symptoms [51,59–62]. All diagnoses will be assessed by a board-certified psychiatrist or licensed psychologist using the Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime version (KSADS-PL; [63]). Exclusionary criteria for participation will include an IQ < 70; a diagnosis of autism spectrum disorder; past and/or current post-traumatic stress disorder, schizophrenia, or depression; or a neurological disorder.

Prior to participation, parents will provide written informed consent and youth will provide written assent. Participants will complete up to three different testing sessions. Administration of CALM-IT will occur outside of the laboratory setting. Families will be sent instructions on how to download CALM-IT onto their mobile device. As such, CALM-IT testing sessions will be completed within a community setting (e.g., at home) at whatever time is most convenient for the family. Parents will be instructed that CALM-IT should be played only by the child, with no assistance. Adherence to this procedure will be assessed in a follow-up interview. Families will also have the option of coming in for in-laboratory testing, where children will complete four canonical inhibitory control tasks (detailed below). Families will receive monetary compensation for completion of CALM-IT. All study procedures have been approved by the NIMH Institutional Review Board.

To investigate the feasibility of CALM-IT (Aim 1a), we will recruit 200 youth. For investigation of the stability of CALM-IT (Aim 1b), we will invite 75 youth out of the overall sample of 200 to play CALM-IT twice. For validation of mobile application-based behaviors of inhibitory control (Aim 2), we will invite 100 youth out of the overall sample of 200 to complete a battery of in-laboratory inhibitory control paradigms. We evaluated each of these proposed sample sizes using the pwr package in R to determine the size of the effect that the sample would allow us to detect using a significance level of 0.05 and a power level of 0.80 [64]. For the feasibility of CALM-IT (Aim 1a), our proposed sample size is sufficient to detect effects of r≥0.20. For the stability of CALM-IT (Aim 1b), our proposed sample size is sufficient to detect effects of r≥0.32. Finally, for the validation of application-based behaviors of inhibitory control (Aim 2), the proposed sample size is sufficient to detect effects of r≥0.28. Thus all proposed samples would allow us sufficient power to detect small to medium effect sizes that are similar to those found within comparable samples [7,47,65,66].
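
As an illustration, the sensitivity analysis described above can be reproduced with the pwr package cited in the protocol [64]. The sketch below is a minimal example (not the authors' analysis script) that solves for the smallest detectable correlation at each proposed sample size, which should approximate the reported values of r ≥ 0.20, 0.32, and 0.28.

```r
# Minimal sketch of the sensitivity analysis using the pwr package [64]:
# solve for the smallest detectable correlation r at alpha = .05, power = .80
# for each proposed sample size.
library(pwr)

pwr.r.test(n = 200, sig.level = 0.05, power = 0.80)  # Aim 1a: r ~ .20
pwr.r.test(n = 75,  sig.level = 0.05, power = 0.80)  # Aim 1b: r ~ .32
pwr.r.test(n = 100, sig.level = 0.05, power = 0.80)  # Aim 2:  r ~ .28
```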

CALM-IT mobile application

In collaboration with a2 Group, we have developed a mobile application assessing motor inhibition using an engaging, user-friendly game called CALM-IT. Participants are invited to play a game set in space where they are an astronaut exploring different galaxies. Each galaxy represents a distinct level (i.e., block of trials). While exploring the galaxy, participants are instructed via visual cue that their goal is to destroy space objects by swiping their finger across the screen and hitting each space object, or target. These targets include comets, space rocks, and asteroids that appear in red, purple, or blue. At the same time, participants are instructed to avoid hitting stars, which appear in yellow (Fig 1, Panel A). Multiple space objects and stars can appear on the screen at any given time, thus requiring participants to swipe at the targets while avoiding the stars (Fig 1, Panel B). If participants hit a star, an explosion animation appears on the screen (Fig 1, Panel C). Participants earn points based on the number of targets they hit (two points per target) and lose points based on the number of stars they hit (one point per star). A running tally of the score is presented in the top left corner of the screen throughout gameplay, and an indicator is displayed when points are lost (Fig 1, Panels B and C). At the end of each level, participants are presented with their cumulative score (Fig 1, Panel D).

Fig 1. Screenshots depicting gameplay during CALM-IT.

(A) Tutorial screen with the instructions to swipe targets (red, purple, and blue objects) and avoid swiping stars (yellow). (B) Screen capture from in-level play depicting targets and stars. (C) Explosion and negative point indicator feedback in response to incorrectly swiping a star. (D) Display of end-of-level cumulative score.

CALM-IT comprises 10 separate galaxies (i.e., levels). Each galaxy includes 13 stars and 39 targets and takes approximately 1 minute to complete, for a total of approximately 10 minutes of gameplay. Gameplay in the first five galaxies was modeled after the Go/No-Go paradigm [29]; targets represent go-stimuli and stars represent no-go stimuli. The last five galaxies were modeled after the Stop Signal Delay paradigm [31] such that 25% of the targets (i.e., go-stimuli) turn into stars (i.e., no-go stimuli) at a random time within two seconds of appearing on the screen. If participants hit a go-turned-no-go stimulus, they receive the same explosion feedback image and accompanying sound as when they hit a star. Unlike standard Stop Signal Delay paradigms, for the go-turned-no-go stimuli, the time delay between appearing on the screen and turning into a star is fully random and not tied to participant performance.

A subset of participants (n = 75) will complete all 10 levels of the mobile application a second time approximately 1 week after their first time playing. These data will be used to assess the stability of mobile application-derived behavioral measures.

Mobile application-based behavior

For each galaxy, there are four variables collected corresponding to the presentation of each target and star: (1) whether or not the object was hit, (2) the reaction time of motor response, (3) the duration of time the object was present on the screen, and (4) whether or not the object turns into another object (go-turned-no-go stimuli on galaxies 6–10). All data will be processed separately for galaxies 1–5 and 6–10. Measures of accuracy and reaction time will be calculated for stars and targets separately.

Inhibitory control will be operationalized using a signal detection theory (SDT) approach, which can be used to measure sensitivity, or an individual’s ability to discriminate signal from noise. Targets represent signal trials, whereas stars represent noise trials. Thus, correct swipes at targets are hit trials, unswiped targets are miss trials, incorrect swipes at stars are false alarm trials, and unswiped stars are correct rejection trials (Fig 2).
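
A minimal scoring sketch in R follows, illustrating the SDT categorization above for a single participant; the column names (object, swiped, rt, galaxy) are assumptions for illustration and do not reflect the actual CALM-IT data export.

```r
# Sketch: compute hit rate (targets) and false alarm rate (stars) for one
# participant, separately for galaxies 1-5 and 6-10 as described above.
# Column names are hypothetical: object ("target"/"star"), swiped (0/1),
# rt (ms), galaxy (1-10).
score_block <- function(trials) {
  targets <- subset(trials, object == "target")
  stars   <- subset(trials, object == "star")
  data.frame(
    hit_rate         = mean(targets$swiped),  # hits / (hits + misses)
    false_alarm_rate = mean(stars$swiped),    # false alarms / (false alarms + correct rejections)
    mean_target_rt   = mean(targets$rt[targets$swiped == 1], na.rm = TRUE)
  )
}

# go_block   <- score_block(subset(app_data, galaxy <= 5))   # Go/No-Go galaxies
# stop_block <- score_block(subset(app_data, galaxy >= 6))   # Stop Signal Delay galaxies
```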

Fig 2. Signal detection theory operationalization of mobile application-based behavior.

In-laboratory measures of inhibitory control

Antisaccade task

The antisaccade task measures motor inhibition in the context of automatic visual saccades towards novel stimuli [28]. Using a mixed-event version of the task, participants engage in either prosaccade trials, in which they are instructed to direct their eye gaze towards a visual target, or antisaccade trials, in which they are instructed to direct their eye gaze in the direction opposite a visual target. The order of prosaccade and antisaccade trials is fully randomized within each block. Each trial begins with a preparatory period during which participants are presented with either a green or red instructional fixation cross indicating that the next trial will be a prosaccade or antisaccade trial, respectively. During prosaccade and antisaccade trials, a yellow visual target is presented for 1 second in a pseudorandomized location 630 pixels or 315 pixels to the left or right of the center of the screen. The number of trials with the visual target in each location is equal across prosaccade and antisaccade trials. The testing session begins with a practice block. After completing the practice, participants will complete 3 experimental blocks, each with 16 antisaccade and 16 prosaccade trials. The EyeLink 1000 Plus eye-tracking system will be used to collect and process eye gaze data (a full description of the eye-tracking setup and eye gaze processing can be found in [57]). For each participant, the percentage of correct antisaccade trials will be computed as a measure of successful motor inhibition.

AX Continuous Performance Task (AXCPT)

The AXCPT is a type of continuous performance task in which children are continuously presented with a series of letters and instructed to press a button in response to each presented letter [32]. Children are instructed to press buttons depending on the sequence of letters presented. Specifically, when a child sees the letter A followed by X, they are instructed to press 2 in response to the A and 3 in response to the X. For all other letter pairs, the child is instructed to press 2 for both letters. Critically, the majority of the time an X appears on the screen, it is preceded by an A, meaning that pressing 3 in response to the X quickly becomes a learned prepotent response. Trials can be categorized based on the letter pairings, with trials that contain an A cue with an X probe categorized as “AX” trials and trials containing a non-A cue with an X probe categorized as “BX” trials. On these BX trials, participants must inhibit the prepotent response of pressing 3 and instead press 2 in response to the X. Following a practice phase, during which children receive feedback based on their responses, participants will complete three experimental blocks comprising a total of 150 trials. For each participant, d’ context (% correct AX trials minus % incorrect BX trials) will be calculated as a measure of inhibitory control.

Flanker task

The flanker task [30] is a well-established task measuring interference effects on cognitive control. In this task, participants are asked to inhibit interfering, task-irrelevant information in order to engage in the task goal. Participants will be instructed to press the left or right arrow button to indicate the direction of the central arrow in a series of five side-by-side arrows centered on the screen. The trial will terminate upon response, and participants will be instructed to respond as quickly as they can. Trials are categorized as either congruent or incongruent based on the direction of the flanking arrows. Congruent trials are those in which the flanking arrows all point in the same direction as the central arrow; the congruency of the visual stimuli therefore facilitates the correct motor response. In contrast, incongruent trials are those in which the flanking arrows all point in the opposite direction of the central arrow; the incongruency of the visual stimuli therefore interferes with the execution of the correct motor response. Participants first will complete a practice block in which they receive feedback regarding the accuracy of their responses. The experimental task will consist of four blocks, each containing 30 congruent and 30 incongruent trials, presented in randomized order. For each participant, we will extract reaction time metrics for correct responses to congruent and incongruent trials, with the reaction time difference between the two trial types serving as a measure of inhibitory control efficiency.

Stop signal task

During the stop-signal delay task [31], participants are instructed to press the right arrow key when presented with an X and the left arrow key when presented with an O. These trials are called “go-trials”. On 25% of trials, the X or O will be accompanied by a 1000 Hz auditory “stop cue”. When the stop cue is presented, participants are instructed to withhold their motor response to the X or O. These are called “stop-trials”. The time elapsed between the presentation of the go-stimulus and the stop cue (stop-signal delay; SSD) varies as a function of performance to calibrate successful inhibition of the go-response to 50% of stop-trials. Initially, the SSD is set to 250 ms; it is increased by 50 ms following correct responses to stop-trials and decreased by 50 ms following incorrect responses to stop-trials. Following completion of two 16-trial practice blocks, participants will complete 5 experimental blocks with 88 trials each, resulting in a total of 330 go-trials and 110 stop-trials. For each participant, we will extract the stop-signal reaction time (SSRT), the difference between the average SSD and the average reaction time to go-trials, as a measure of inhibitory control.
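
To make the staircase concrete, the sketch below illustrates in R (as a simplified illustration, not the actual task code) the 50 ms up/down SSD adjustment and the SSRT computation described above.

```r
# Illustrative sketch of the SSD staircase: SSD starts at 250 ms, moves up
# 50 ms after a successfully inhibited stop-trial and down 50 ms after a
# failed stop-trial, targeting ~50% successful stopping.
update_ssd <- function(ssd, stop_success) {
  if (stop_success) ssd + 50 else max(ssd - 50, 0)
}

# SSRT = mean go-trial reaction time minus mean SSD across stop-trials.
ssrt <- function(go_rt, ssd_on_stop_trials) {
  mean(go_rt, na.rm = TRUE) - mean(ssd_on_stop_trials)
}
```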

Symptom measures

Irritability

Irritability symptoms will be measured using the parent- and youth-report versions of the Affective Reactivity Index (ARI, [67]). In addition to item-level responses to the six parent-report and six youth-report items, a total parent-report ARI and child-report ARI score will be calculated as the sum of the six items.

Anxiety

Anxiety symptoms will be measured using the parent- and youth-report versions of the Screen for Anxiety Related Emotional Disorders (SCARED; [68]). Subscale scores will be calculated for each of the five subscales (Generalized Anxiety, Panic, School Anxiety, Separation Anxiety, and Social Anxiety) separately for the parent- and youth-report SCARED. Parent- and child-report SCARED total scores will also be calculated as the sum across all items.

ADHD

ADHD symptoms will be measured using the parent-report Conners Comprehensive Behavior Ratings Scale (CBRS; [69]). ADHD symptoms will be measured using the DSM-IV ADHD Total raw score. Additionally, using the results of a confirmatory factor analysis in which each individual item from the DSM-IV ADHD Total Score loads on a single latent factor, we have selected the six items with the highest factor loadings (S1 File).

Secondary analyses will be conducted examining associations with the two components of ADHD: hyperactive-impulsivity and inattentiveness. For these analyses, the CBRS DSM-IV Hyperactive-Impulsive and CBRS DSM-IV Inattentive subscales will be used.

Depression

For secondary analyses, depression symptoms will be measured using the parent-report Mood and Feelings Questionnaire (MFQ; [70]). Total scores on the MFQ will be calculated for each participant as a measure of depressive symptoms.

Bifactor model of symptoms

Latent factor scores for shared vs. unique variance of irritability, anxiety, and ADHD symptoms will be estimated using confirmatory factor analyses of an extended bifactor model (Fig 3; [59,61]). A general factor will be estimated using the six parent-report ARI items, five parent-report SCARED subscale scores, and the six CBRS items identified by the CFA. Irritability-unique, anxiety-unique, and ADHD-unique factors will be estimated using the ARI items, SCARED subscale scores, and six CBRS items, respectively. Consistent with past work, model fit will be assessed using the following criteria: Tucker-Lewis Index (TLI) and Comparative Fit Index (CFI) values > .950, and a Root Mean-Square Error of Approximation (RMSEA) value < .08. Factor scores on each of the resulting four latent factors will be extracted for each participant.
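
A sketch of this model specification is shown below using the lavaan package in R; the package choice and indicator variable names (e.g., ari1–ari6, scared_gad, cbrs1–cbrs6) are assumptions for illustration, as the protocol does not specify the CFA software.

```r
# Sketch of the extended bifactor model: a general factor loading on all
# indicators plus orthogonal irritability-, anxiety-, and ADHD-specific
# factors. Indicator names are hypothetical.
library(lavaan)

bifactor_model <- '
  general      =~ ari1 + ari2 + ari3 + ari4 + ari5 + ari6 +
                  scared_gad + scared_panic + scared_school + scared_sep + scared_social +
                  cbrs1 + cbrs2 + cbrs3 + cbrs4 + cbrs5 + cbrs6
  irritability =~ ari1 + ari2 + ari3 + ari4 + ari5 + ari6
  anxiety      =~ scared_gad + scared_panic + scared_school + scared_sep + scared_social
  adhd         =~ cbrs1 + cbrs2 + cbrs3 + cbrs4 + cbrs5 + cbrs6
'

fit <- cfa(bifactor_model, data = symptom_data, orthogonal = TRUE, std.lv = TRUE)
fitMeasures(fit, c("cfi", "tli", "rmsea"))  # compared against the cutoffs above
factor_scores <- lavPredict(fit)            # one score per participant on each factor
```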

Fig 3. Bifactor model for clinical symptoms.

Note: ARI P = Parent-Report Affective Reactivity Index, SCARED P = Parent-Report Screen for Anxiety Related Emotional Disorders, CBRS = Conners Comprehensive Behavior Ratings Scale.

Analytic plan

Aim 1: Assessment of feasibility and consistency of CALM-IT

CALM-IT adoption and completion

First, we will assess mobile application adoption by calculating the percentage of participants who agreed to play the mobile application out of those who were contacted to assess interest. Second, we will assess mobile application completion by calculating the percentage of levels with complete and usable data for each participant who agreed to play the app.

CALM-IT engagement

Performance on the mobile application will be evaluated to assess whether participants understand the visual instructions and are engaged in the task. The percentage of targets hit and the percentage of stars hit will be assessed as measures of correct and incorrect engagement in task behavior, respectively. Average percent targets hit and percent stars hit will be reported across all participants. Average percent targets hit and percent stars hit will also be examined for each level to assess maintenance of attention and engagement across all levels of the task. Finally, a follow-up interview will be conducted to collect a qualitative assessment of CALM-IT engagement from each participant.

Extracting mobile application-based measures of inhibitory control

Using SDT, we will extract a measure of sensitivity, or the discrimination between noise and signal, as a measure of inhibitory control from mobile application behavior. The variable d-prime (d’) assesses the degree of overlap between the standardized probability of hit trials (i.e., Hit Rate) and the standardized probability of false-alarm trials (i.e., False Alarm Rate; Eq 1; [71,72]). A value of 0 represents an inability to discriminate signal from noise, with higher values representing better discrimination between signal and noise. Values for d’ will be calculated for each participant using the psycho package in R [73].

d’ = z(Hit Rate) − z(False Alarm Rate) (1)

Using d’, we will assess both individual differences in inhibitory control and average discrimination on the task. An average d’ value will be computed across all participants to assess whether, on average, participants are able to discriminate between noise and signal on the task. Both the average d’ value and the receiver operating characteristic (ROC) curve will be reported to provide a graphical visualization of how often false alarms versus hits occur at any level of sensitivity.
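
The snippet below is a direct translation of Eq 1 into R; the protocol’s analyses will use the psycho package [73], so this stand-alone version (including a log-linear correction for extreme rates, which is an assumption not stated in the protocol) is provided only to make the computation explicit.

```r
# d' from hit/false-alarm counts (Eq 1). A log-linear correction (add 0.5 to
# each count, 1 to each total) is applied so rates of exactly 0 or 1 do not
# yield infinite z-scores; this adjustment is an illustrative assumption.
dprime <- function(n_hit, n_miss, n_fa, n_cr) {
  hit_rate <- (n_hit + 0.5) / (n_hit + n_miss + 1)
  fa_rate  <- (n_fa + 0.5) / (n_fa + n_cr + 1)
  qnorm(hit_rate) - qnorm(fa_rate)
}

# Hypothetical counts for one block of five galaxies (195 targets, 65 stars):
dprime(n_hit = 170, n_miss = 25, n_fa = 15, n_cr = 50)
```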

Stability of mobile application-based behaviors

In the subset of participants who complete the mobile application twice, one week apart, the temporal stability of mobile application behaviors will be assessed using test-retest reliability. Specifically, we will examine test-retest reliability through the correlation between time-one and time-two measures of percentage of targets hit, percentage of stars hit, and d’. Consistency will also be reported using the intraclass correlation coefficient (ICC). In contrast to the Pearson correlation coefficient, the ICC captures not only the degree of correlation, but also the agreement across measurements [74,75]. Because we are investigating test-retest reliability, or stability across measurements, we will use ICC(3,1), a two-way mixed-effects assessment of consistency based on a single measurement. An ICC value of less than 0.40 would indicate poor consistency, an ICC value of 0.40–0.70 would indicate moderate consistency, an ICC value of 0.70–0.90 would indicate good consistency, and an ICC value greater than 0.90 would indicate excellent consistency.
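
The sketch below illustrates one way to compute these statistics in R; the irr package and the variable names are assumptions, since the protocol specifies ICC(3,1) but not a software implementation.

```r
# Test-retest sketch: Pearson correlation plus ICC(3,1), i.e., a two-way
# mixed-effects, consistency, single-measurement ICC. dprime_t1 and dprime_t2
# are hypothetical vectors of d' at time one and time two.
library(irr)

ratings <- data.frame(t1 = dprime_t1, t2 = dprime_t2)

cor(ratings$t1, ratings$t2)                          # test-retest correlation
icc(ratings, model = "twoway", type = "consistency",
    unit = "single")                                 # ICC(3,1)
```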

Aim 2: Validation of mobile application-based behavioral metrics as measures of inhibitory control

In order to robustly measure inhibitory control, we will employ a framework through which a latent factor of inhibitory control is computationally derived from measurements of motor inhibition across four canonical inhibitory control laboratory-based tasks described above [59]. This latent factor will be computed using confirmatory factor analysis with a measure of inhibitory control from each of the four tasks loading onto a single latent factor. By leveraging latent variable analysis across a range of tasks that differ in their behavioral goals, response modality, and context, we can quantify a relatively pure measure of inhibitory control that is immune from task-specific impurities and thus increase statistical power. Next, we will extract individual-level inhibitory control latent factor scores for each participant to examine associations with mobile application-based measures of inhibitory control. We will conduct a multiple regression analysis with CALM-IT d’ as a predictor of inhibitory control latent factor scores, with age, sex, order of task completion (e.g., CALM-IT or laboratory testing administered first), and IQ entered as covariates.
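
A minimal sketch of this two-step analysis is given below (again assuming lavaan; the task-level indicator and covariate names are hypothetical).

```r
# Step 1: single latent inhibitory control factor from the four laboratory
# task metrics. Step 2: regress extracted factor scores on CALM-IT d' with
# covariates. Variable names are hypothetical.
library(lavaan)

ic_model <- 'inhibitory_control =~ antisaccade_acc + axcpt_d_context +
                                    flanker_rt_diff + ssrt'
ic_fit <- cfa(ic_model, data = lab_data, std.lv = TRUE)

lab_data$ic_score <- as.numeric(lavPredict(ic_fit))

summary(lm(ic_score ~ calmit_dprime + age + sex + task_order + iq,
           data = lab_data))
```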

Aim 3: Identify clinical correlates of mobile application-based behavioral metrics

First, bivariate correlations between raw total scores on the five symptom measures and CALM-IT d’ will be examined. Next, associations with shared vs. unique variances of anxiety, irritability, and ADHD symptoms will be examined using multiple regression analyses. All four latent factor scores from the bifactor model will be entered as concurrent predictors of CALM-IT d’. For all models, any demographic variable that varies as a function of either symptoms and/or CALM-IT d’ will be included as a covariate (i.e., age, sex, and IQ). Finally, secondary analyses will examine bivariate correlations between CALM-IT d’ and measures of hyperactive-impulsivity, inattentiveness, and depression.
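
As a sketch (with hypothetical variable names), the bifactor regression described above could take the following form in R, with covariates retained only if they relate to symptoms and/or CALM-IT d’ as stated.

```r
# Aim 3 sketch: the four bifactor factor scores entered as concurrent
# predictors of CALM-IT d'. Covariates (age, sex, IQ) are shown for
# illustration and would be included only per the criterion above.
summary(lm(calmit_dprime ~ general + irritability_unique + anxiety_unique +
             adhd_unique + age + sex + iq,
           data = analysis_data))
```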

Summary

The present protocol introduces a novel, user-friendly mobile application aimed at assessing inhibitory control, a precise and well-defined construct that has been posited as one critical mechanism underlying psychopathology in youth. There is a profound need for mental health treatment research efforts to use neuroscience-based assessments to inform treatment decisions. However, such assessments are not accessible to community providers. The goal of this work is to bridge that gap by creating and facilitating the use of an accessible and engaging novel mobile application that probes a validated behavioral construct. The present protocol would lay the groundwork for an important line of future work that could provide researchers and clinicians a multifaceted tool to measure multiple aspects of inhibitory control. For example, future versions of the app could manipulate the presented stimuli such that participants are required to update which stimulus represents the stop-stimulus, thus targeting one’s ability to flexibly deploy inhibitory control. Furthermore, an adaptive version of CALM-IT that increases difficulty based on participants’ individual performance could function as an intervention aimed at improving inhibitory control. Validation of this neuroscience-informed mobile application represents a critical first step forward in bridging the gap between precise, mechanism-driven basic science research and community-based assessment and treatment of childhood psychopathology. CALM-IT is poised to provide an accessible tool for clinicians in the community to make clinical and treatment decisions based on neuroscience-based mechanisms.

Supporting information

S1 File. Identification of ADHD items for inclusion in bifactor model of symptoms.

Description of methods and results of a confirmatory factor analysis of the 18 items measuring ADHD symptoms from the Conners Comprehensive Behavior Ratings Scale.

(PDF)

Data Availability

All relevant data from this study will be made available upon study completion.

Funding Statement

This work was supported by the NIMH Intramural Research Program.

References

1. Gordon J. Medicine and the Mind. New England Journal of Medicine. 2020; 382: 878–880. doi: 10.1056/NEJMc1916446
2. Hyman SE. Revolution Stalled. Science Translational Medicine. 2012; 4. doi: 10.1126/scitranslmed.3003142
3. Insel TR. The NIMH experimental medicine initiative. World Psychiatry. 2015; 14: 151–153. doi: 10.1002/wps.20227
4. Insel TR, Gogtay N. National Institute of Mental Health Clinical Trials: New Opportunities, New Expectations. JAMA Psychiatry. 2014; 71: 745. doi: 10.1001/jamapsychiatry.2014.426
5. Field TA, Beeson ET, Jones LK. The New ABCs: A Practitioner’s Guide to Neuroscience-Informed Cognitive-Behavior Therapy. Journal of Mental Health Counseling. 2015; 37: 206–220.
6. Miller R. Neuroeducation: Integrating Brain-Based Psychoeducation into Clinical Practice. Journal of Mental Health Counseling. 2016; 38: 103–115.
7. Derakshan N, Ansari TL, Hansard M, et al. Anxiety, Inhibition, Efficiency, and Effectiveness: An Investigation Using the Antisaccade Task. Experimental Psychology. 2009; 56: 48–55. doi: 10.1027/1618-3169.56.1.48
8. Deveney CM, Briggs-Gowan MJ, Pagliaccio D, et al. Temporally sensitive neural measures of inhibition in preschool children across a spectrum of irritability. Developmental Psychobiology. 2018. doi: 10.1002/dev.21792
9. Joormann J. Cognitive Inhibition and Emotion Regulation in Depression. Current Directions in Psychological Science. 2010; 19: 161–166. doi: 10.1080/02699930903407948
10. Nigg JT. On inhibition/disinhibition in developmental psychopathology: Views from cognitive and personality psychology and a working inhibition taxonomy. Psychological Bulletin. 2000; 126: 220. doi: 10.1037/0033-2909.126.2.220
11. Smith AR, Kircanski K, Brotman MA, et al. Advancing clinical neuroscience through enhanced tools: Pediatric social anxiety as an example. Depression and Anxiety. 2019; 36: 701–711. doi: 10.1002/da.22937
12. Shiffman S, Stone AA, Hufford MR. Ecological Momentary Assessment. Annual Review of Clinical Psychology. 2008; 4: 1–32. doi: 10.1146/annurev.clinpsy.3.022806.091415
13. Luna B, Garver KE, Urban TA, et al. Maturation of Cognitive Processes From Late Childhood to Adulthood. Child Development. 2004; 75: 1357–1372. doi: 10.1111/j.1467-8624.2004.00745.x
14. McTeague LM, Huemer J, Carreon DM, et al. Identification of Common Neural Circuit Disruptions in Cognitive Control Across Psychiatric Disorders. American Journal of Psychiatry. 2017; 174: 676–685. doi: 10.1176/appi.ajp.2017.16040400
15. Friedman NP, Miyake A. Unity and diversity of executive functions: Individual differences as a window on cognitive structure. Cortex. 2017; 86: 186–204. doi: 10.1016/j.cortex.2016.04.023
16. Aron AR, Robbins TW, Poldrack RA. Inhibition and the right inferior frontal cortex: one decade on. Trends in Cognitive Sciences. 2014; 18: 177–185. doi: 10.1016/j.tics.2013.12.003
17. Eagle DM, Bari A, Robbins TW. The neuropsychopharmacology of action inhibition: cross-species translation of the stop-signal and go/no-go tasks. Psychopharmacology. 2008; 199: 439–456. doi: 10.1007/s00213-008-1127-6
18. Roberts AC. Inhibitory Control and Affective Processing in the Prefrontal Cortex: Neuropsychological Studies in the Common Marmoset. Cerebral Cortex. 2000; 10: 252–262. doi: 10.1093/cercor/10.3.252
19. Durston S, Thomas KM, Yang Y, et al. A neural basis for the development of inhibitory control. Developmental Science. 2002; 5: F9–F16.
20. Luna B, Marek S, Larsen B, et al. An Integrative Model of the Maturation of Cognitive Control. Annual Review of Neuroscience. 2015; 38: 151–170. doi: 10.1146/annurev-neuro-071714-034054
21. Williams BR, Ponesse JS, Schachar RJ, et al. Development of Inhibitory Control Across the Life Span. Developmental Psychology. 1999; 35: 205–213. doi: 10.1037//0012-1649.35.1.205
22. Enticott PG, Ogloff JRP, Bradshaw JL. Associations between laboratory measures of executive inhibitory control and self-reported impulsivity. Personality and Individual Differences. 2006; 41: 285–294.
23. Hutton SB, Ettinger U. The antisaccade task as a research tool in psychopathology: A critical review. Psychophysiology. 2006; 43: 302–313. doi: 10.1111/j.1469-8986.2006.00403.x
24. Logue SF, Gould TJ. The neural and genetic basis of executive function: Attention, cognitive flexibility, and response inhibition. Pharmacology Biochemistry and Behavior. 2014; 123: 45–54.
25. Venables NC, Foell J, Yancey JR, et al. Quantifying Inhibitory Control as Externalizing Proneness: A Cross-Domain Model. Clinical Psychological Science. 2018; 1–20.
26. Aron AR, Poldrack RA. The Cognitive Neuroscience of Response Inhibition: Relevance for Genetic Research in Attention-Deficit/Hyperactivity Disorder. Biological Psychiatry. 2005; 57: 1285–1292. doi: 10.1016/j.biopsych.2004.10.026
27. Ridderinkhof KR, van den Wildenberg WPM, Segalowitz SJ, et al. Neurocognitive mechanisms of cognitive control: The role of prefrontal cortex in action selection, response inhibition, performance monitoring, and reward-based learning. Brain and Cognition. 2004; 56: 129–140. doi: 10.1016/j.bandc.2004.09.016
28. Hallett PE. Primary and secondary saccades to goals defined by instructions. Vision Research. 1978; 18: 1279–1296. doi: 10.1016/0042-6989(78)90218-3
29. Donders FC. On the speed of mental processes. Acta Psychologica. 1969; 30: 412–431. doi: 10.1016/0001-6918(69)90065-1
30. Eriksen CW. The flankers task and response competition: A useful tool for investigating a variety of cognitive problems. Visual Cognition. 1995; 2: 101–118.
31. Logan GD, Cowan WB. On the Ability to Inhibit Simple and Choice Reaction Time Responses: A Model and a Method. Journal of Experimental Psychology: Human Perception and Performance. 1984; 10: 276–291. doi: 10.1037//0096-1523.10.2.276
32. Rutschmann J. Sustained Attention in Children at Risk for Schizophrenia: Report on a Continuous Performance Test. Archives of General Psychiatry. 1977; 34: 571. doi: 10.1001/archpsyc.1977.01770170081007
33. Beauchaine TP. Future Directions in Emotion Dysregulation and Youth Psychopathology. Journal of Clinical Child & Adolescent Psychology. 2015; 44: 875–896. doi: 10.1080/15374416.2015.1038827
34. Beauchaine TP, Cicchetti D. Emotion dysregulation and emerging psychopathology: A transdiagnostic, transdisciplinary perspective. Development and Psychopathology. 2019; 31: 799–804. doi: 10.1017/S0954579419000671
35. Kohn N, Eickhoff SB, Scheller M, et al. Neural network of cognitive emotion regulation—An ALE meta-analysis and MACM analysis. NeuroImage. 2014; 87: 345–355. doi: 10.1016/j.neuroimage.2013.11.001
36. Ochsner KN, Silvers JA, Buhle JT. Functional imaging studies of emotion regulation: a synthetic review and evolving model of the cognitive control of emotion. Annals of the New York Academy of Sciences. 2012; 1251: E1–E24. doi: 10.1111/j.1749-6632.2012.06751.x
37. Brotman MA, Kircanski K, Leibenluft E. Irritability in Children and Adolescents. Annual Review of Clinical Psychology. 2017; 13: 317–341. doi: 10.1146/annurev-clinpsy-032816-044941
38. Posner J, Kass E, Hulvershorn L. Using Stimulants to Treat ADHD-Related Emotional Lability. Current Psychiatry Reports. 2014. doi: 10.1007/s11920-014-0478-4
39. Towbin K, Vidal-Ribas P, Brotman MA, et al. A Double-Blind Randomized Placebo-Controlled Trial of Citalopram Adjunctive to Stimulant Medication in Youth With Chronic Severe Irritability. Journal of the American Academy of Child & Adolescent Psychiatry. 2019. doi: 10.1016/j.jaac.2019.05.015
40. Karalunas SL, Gustafsson HC, Fair D, et al. Do we need an irritable subtype of ADHD? Replication and extension of a promising temperament profile approach to ADHD subtyping. Psychological Assessment. 2019; 31: 236–247. doi: 10.1037/pas0000664
41. Leibenluft E. Pediatric Irritability: A Systems Neuroscience Approach. Trends in Cognitive Sciences. 2017; 21: 277–289. doi: 10.1016/j.tics.2017.02.002
42. Pine DS. Research Review: A neuroscience framework for pediatric anxiety disorders. Journal of Child Psychology and Psychiatry. 2007; 48: 631–648. doi: 10.1111/j.1469-7610.2007.01751.x
43. Brotman MA, Schmajuk M, Rich BA, et al. Prevalence, Clinical Correlates, and Longitudinal Course of Severe Mood Dysregulation in Children. Biological Psychiatry. 2006; 60: 991–997. doi: 10.1016/j.biopsych.2006.08.042
44. Copeland WE, Shanahan L, Egger H, et al. Adult Diagnostic and Functional Outcomes of DSM-5 Disruptive Mood Dysregulation Disorder. American Journal of Psychiatry. 2014; 171: 668–674. doi: 10.1176/appi.ajp.2014.13091213
45. Eyre O, Riglin L, Leibenluft E, et al. Irritability in ADHD: association with later depression symptoms. European Child & Adolescent Psychiatry. 2019. doi: 10.1007/s00787-019-01303-x
46. Pine DS, Cohen P, Gurley D, et al. The Risk for Early-Adulthood Anxiety and Depressive Disorders in Adolescents With Anxiety and Depressive Disorders. Archives of General Psychiatry. 1998; 55: 56. doi: 10.1001/archpsyc.55.1.56
47. Cardinale EM, Subar AR, Brotman MA, et al. Inhibitory control and emotion dysregulation: A framework for research on anxiety. Development and Psychopathology. 2019; 31: 859–869. doi: 10.1017/S0954579419000300
48. Eysenck MW, Derakshan N, Santos R, et al. Anxiety and cognitive performance: Attentional control theory. Emotion. 2007; 7: 336–353. doi: 10.1037/1528-3542.7.2.336
49. Moser JS, Moran TP, Schroder HS, et al. On the relationship between anxiety and error monitoring: a meta-analysis and conceptual framework. Frontiers in Human Neuroscience. 2013; 7. doi: 10.3389/fnhum.2013.00466
50. Chaarani B, Kan K-J, Mackey S, et al. Neural Correlates of Adolescent Irritability and Its Comorbidity With Psychiatric Disorders. Journal of the American Academy of Child & Adolescent Psychiatry. 2020; 59: 1371–1379. doi: 10.1016/j.jaac.2019.11.028
51. Tseng W-L, Deveney CM, Stoddard J, et al. Brain Mechanisms of Attention Orienting Following Frustration: Associations With Irritability and Age in Youths. American Journal of Psychiatry. 2018; 1. doi: 10.1176/appi.ajp.2018.18040491
52. Willcutt EG, Doyle AE, Nigg JT, et al. Validity of the Executive Function Theory of Attention-Deficit/Hyperactivity Disorder: A Meta-Analytic Review. Biological Psychiatry. 2005; 57: 1336–1346. doi: 10.1016/j.biopsych.2005.02.006
53. Holmes EA, Ghaderi A, Harmer CJ, et al. The Lancet Psychiatry Commission on psychological treatments research in tomorrow’s science. The Lancet Psychiatry. 2018; 5: 237–286. doi: 10.1016/S2215-0366(17)30513-8
54. Peverill M, McLaughlin KA. Harnessing the Neuroscience Revolution to Enhance Child and Adolescent Psychotherapy. In: Evidence Based Psychotherapies for Children and Adolescents. Guilford Press, 2017, pp. 520–536.
55. Cuijpers P, Ebert DD, Reijnders M, et al. Technology-assisted treatments for mental health problems in children and adolescents. In: Evidence-based psychotherapies for children and adolescents. The Guilford Press, 2018, pp. 555–574.
56. Ferrari M, McIlwaine SV, Reynolds JA, et al. Digital Game Interventions for Youth Mental Health Services (Gaming My Way to Recovery): Protocol for a Scoping Review. JMIR Research Protocols. 2020; 9: e13834. doi: 10.2196/13834
57. Powers MB, Carlbring P. Technology: Bridging the Gap from Research to Practice. Cognitive Behaviour Therapy. 2016; 45: 1–4. doi: 10.1080/16506073.2016.1143201
58. Emmelkamp PMG, David D, Beckers T, et al. Advancing psychotherapy and evidence-based psychological interventions. International Journal of Methods in Psychiatric Research. 2014; 23: 58–91. doi: 10.1002/mpr.1411
59. Cardinale EM, Kircanski K, Brooks J, et al. Parsing neurodevelopmental features of irritability and anxiety: Replication and validation of a latent variable approach. Development and Psychopathology. 2019; 1–13. doi: 10.1017/S095457941900035X
60. Kircanski K, Zhang S, Stringaris A, et al. Empirically derived patterns of psychiatric symptoms in youth: A latent profile analysis. Journal of Affective Disorders. 2017; 216: 109–116. doi: 10.1016/j.jad.2016.09.016
61. Kircanski K, White LK, Tseng W-L, et al. A Latent Variable Approach to Differentiating Neural Mechanisms of Irritability and Anxiety in Youth. JAMA Psychiatry. 2018; 75: 631. doi: 10.1001/jamapsychiatry.2018.0468
62. Stoddard J, Tseng W-L, Kim P, et al. Association of Irritability and Anxiety With the Neural Mechanisms of Implicit Face Emotion Processing in Youths With Psychopathology. JAMA Psychiatry. 2017; 74: 95. doi: 10.1001/jamapsychiatry.2016.3282
63. Kaufman J, Birmaher B, Brent D, et al. Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime Version (K-SADS-PL): Initial Reliability and Validity Data. Journal of the American Academy of Child & Adolescent Psychiatry. 1997; 36: 980–988. doi: 10.1097/00004583-199707000-00021
64. Champely S. pwr: Basic Functions for Power Analysis, https://CRAN.R-project.org/package=pwr (2020).
65. Bari A, Robbins TW. Inhibition and impulsivity: Behavioral and neural basis of response control. Progress in Neurobiology. 2013; 108: 44–79. doi: 10.1016/j.pneurobio.2013.06.005
66. Haller SP, Stoddard J, Pagliaccio D, et al. Computational Modeling of Attentional Impairments in Disruptive Mood Dysregulation and Attention Deficit/Hyperactivity Disorder. Journal of the American Academy of Child & Adolescent Psychiatry. 2020. doi: 10.1016/j.jaac.2020.08.468
67. Stringaris A, Goodman R, Ferdinando S, et al. The Affective Reactivity Index: a concise irritability scale for clinical and research settings. Journal of Child Psychology and Psychiatry. 2012; 53: 1109–1117. doi: 10.1111/j.1469-7610.2012.02561.x
68. Birmaher B, Khetarpal S, Brent D, et al. The Screen for Child Anxiety Related Emotional Disorders (SCARED): Scale Construction and Psychometric Characteristics. Journal of the American Academy of Child & Adolescent Psychiatry. 1997; 36: 545–553. doi: 10.1097/00004583-199704000-00018
69. Conners C, Pitkanen J, Rzepa S. Conners comprehensive behavior rating scale. Encyclopedia of clinical neuropsychology. 2011; 678–680.
70. Angold A, Costello E, Messer S, et al. Development of a short questionnaire for use in epidemiological studies of depression in children and adolescents. International Journal of Methods in Psychiatric Research. 1995; 237–249.
71. Snodgrass JG, Corwin J. Pragmatics of Measuring Recognition Memory: Applications to Dementia and Amnesia. Journal of Experimental Psychology: General. 1988; 117: 34–50.
72. Stanislaw H, Todorov N. Calculation of signal detection theory measures. Behavior Research Methods, Instruments, & Computers. 1999; 31: 137–149. doi: 10.3758/bf03207704
73. Makowski D. The psycho Package: an Efficient and Publishing-Oriented Workflow for Psychological Science. The Journal of Open Source Software. 2018; 3: 470.
74. Koo TK, Li MY. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research. Journal of Chiropractic Medicine. 2016; 15: 155–163. doi: 10.1016/j.jcm.2016.02.012
75. Shrout PE, Fleiss JL. Intraclass Correlations: Uses in Assessing Rater Reliability. Psychological Bulletin. 1979; 86: 420–428. doi: 10.1037//0033-2909.86.2.420

Decision Letter 0

Veena Kumari

3 Mar 2021

PONE-D-20-39885

Rationale and validation of a novel mobile application probing motor inhibition: Proof of concept of CALM-IT

PLOS ONE

Dear Dr. Cardinale,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 25 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Veena Kumari

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on software sharing (http://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-software) for manuscripts whose main purpose is the description of a new software or software package. In this case, new software must conform to the Open Source Definition (https://opensource.org/docs/osd) and be deposited in an open software archive. Please see http://journals.plos.org/plosone/s/materials-and-software-sharing#loc-depositing-software for more information on depositing your software.

3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

4. We noticed you have some minor occurrences of overlapping text with the following previous publication(s), which need to be addressed:

https://www.cambridge.org/core/journals/development-and-psychopathology/article/abs/inhibitory-control-and-emotion-dysregulation-a-framework-for-research-on-anxiety/3DB40FE1CD1D6293778A9BC0272F3005

In your revision, ensure you cite all your sources (including your own works), and quote or rephrase any duplicated text outside the methods section. Further consideration is dependent on these concerns being addressed.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #1: Partly

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.

Reviewer #1: Partly

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Reviewer #1: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics.

You may also provide optional suggestions and comments to authors that they might find helpful in planning their study.

(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This manuscript is a protocol for a validation study of a mobile application for inhibitory control evaluation in young people. It presents an interesting idea of measuring inhibitory control via a mobile application, which is presumably more feasible than other experimental approaches when used in community settings. The protocol summarises the main details of the proposed validation study. Several important details in the methods are missing and should be clarified. I recommend this protocol for publication after major revision.

Abstract:

The abstract could be more specific and include the importance of this research and the possibility of future application of the CALM-IT in clinical practice.

Introduction:

Page 4: “However, these tasks tend to be time-intensive, repetitive, and expensive, making them infeasible in a community setting”. Can the authors add an example of the tasks which are not feasible to conduct in a community setting and why? In what sense is the new application CALM-IT less repetitive? My understanding is that it comes from well-established paradigms (Go/No-Go and Stop-Signal Task), which are repetitive, but their use in the community is more about the use of an appropriate device than about creating a whole new paradigm or measuring instrument.

Page 5: “While literature examining treatment of youth psychopathology using interventions specifically targeting inhibitory control is limited, previous work shows a reduction of mood symptoms following simulant medication treatment for co-occurring attentional (Posner et al., 2014; Towbin et al., 2019).”. What is exactly meant by youth psychopathology? Diagnoses of mental illness or behavioural problems (e.g., at school, with peers, etc.)? Could you please provide examples of specific psychopathology conditions in youth treated in inhibitory control and how are these findings connected with the present study?

Methods: This part is described into considerable details, however, to make the methodology replicable several details are missing.

Participants: The protocol does not clearly state the age range of participants the authors plan to recruit for their study and the process of their recruitment and selection (randomly approached participants, all participants from a certain clinic will be offered the pre-screening and participation, etc.). How will the authors ensure that the participants will represent a whole range of symptoms? Moreover, if the participants are underage, I would recommend clarifying the role of parents or legal guardians in the process: whether they will be present or not during the use of the app and additional evaluations, and how the authors plan to handle potential interference from parents in the app testing (e.g., using the app instead of the recruited participant). It is unclear if the app will be tested/used at home or in a controlled environment (e.g., a laboratory or a clinic).

What is the rationale behind selecting the particular diagnoses of anxiety (but not depression), ADHD (including ADD?), and DMDD? The topic of inhibitory control problems in children with these selected diagnoses could also be elaborated further in the introduction. Summarising the existing evidence on this problem may give a better understanding of the importance of this study and the CALM-IT application.

Stimuli: The application seems to present a limited number of trials and stimulus types. Did the authors consider using different colours and shapes of stimuli for each application testing session to avoid habituation to specific stimuli? For example, the no-go stimuli are always yellow stars; changing the colour or type of the no-go stimulus for each level may reflect more on the flexibility of each participant to inhibit an inappropriate response in a variety of different scenarios.

Effect size calculation: What is the rationale behind selecting effects of r between .20 and .32? Could the authors give examples of similar studies or otherwise justify the selected effects and/or the effect size calculation?

Symptom measures: I would recommend that the authors also consider measuring participants’ impulsivity using standardised questionnaires to investigate the different facets of impulsive behaviour. Some established measures, such as the UPPS-P, can be used in children.

Assessing the engagement: This seems to be assessed only based on the performance metrics from the app. I would recommend that the authors consider running a focus group or using a user-experience questionnaire to gain more information on the engagement and usability of the app. This can be an important source of information for adjusting apps in development based on real user-experience feedback. If app engagement is assessed only based on the performance metrics, how will the authors distinguish between poor performance in the app due to boredom or lack of engagement versus poor performance caused by very poor inhibitory control abilities?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Martina Vanova

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Jun 4;16(6):e0252245. doi: 10.1371/journal.pone.0252245.r002

Author response to Decision Letter 0


16 Apr 2021

Editor’s Comments:

1.Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

We have reviewed the PLOS ONE style templates and ensured that our manuscript meets PLOS ONE’s style requirements.

2.Please note that PLOS ONE has specific guidelines on software sharing (http://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-software) for manuscripts whose main purpose is the description of a new software or software package. In this case, new software must conform to the Open Source Definition (https://opensource.org/docs/osd) and be deposited in an open software archive. Please see http://journals.plos.org/plosone/s/materials-and-software-sharing#loc-depositing-software for more information on depositing your software.

CALM-IT is a mobile behavioral task and not specifically software or a software package. We would happily share access to the mobile application once this pilot is complete and app development is finalized.

3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

We now include captions at the end of our manuscript for the supplementary materials.

4. We noticed a minor occurrence of overlapping text with the following previous publication(s), which needs to be addressed: https://www.cambridge.org/core/journals/development-and-psychopathology/article/abs/inhibitory-control-and-emotion-dysregulation-a-framework-for-research-on-anxiety/3DB40FE1CD1D6293778A9BC0272F3005 In your revision, ensure you cite all your sources (including your own works), and quote or rephrase any duplicated text outside the methods section. Further consideration is dependent on these concerns being addressed.

We apologize for any overlapping text between the submitted manuscript and our prior work. After reviewing the two manuscripts closely, it appears that the overlapping text is in the description of the antisaccade and flanker in-laboratory tasks. We have revised the language to no longer overlap while maintaining accuracy and clarity:

(p. 12): “Each trial begins with a preparatory period during which participants are presented with either a green or red instructional fixation cross indicating that the next trial will be either a prosaccade or antisaccade trial, respectively. During the prosaccade and antisaccade trials, a yellow visual target is presented for 1 second in a pseudorandomized location 630 pixels or 315 pixels to the left or right of the center of the screen. The number of trials with the visual target in each location is equal across prosaccade and antisaccade trials. The testing session consists first of a practice block. After completing the practice, participants will complete 3 experimental blocks, each with 16 antisaccade and 16 prosaccade trials. The EyeLink 1000 Plus eye tracking system will be used to collect and process eye gaze data (a full description of the eye-tracking setup and eye gaze processing can be found in Cardinale et al., 2019).”

(p. 13): “Participants will be instructed to press the left or right arrow button to indicate the direction of the central arrow in a series of five side-by-side arrows centered on the screen. The trial will terminate upon response and participants will be instructed to respond as quickly as they can. Trials are categorized as either congruent or incongruent trials based on the direction of the flanking arrows. Congruent trials correspond to trials where the flanking arrows all point in the same direction as the central arrow. The congruency of the visual stimuli therefore facilitates the correct motor response. In contrast, incongruent trials correspond to trials where the flanking arrows all point in the opposite direction of the central arrow. The incongruency of the visual stimuli therefore interferes with the execution of the correct motor response.”

Reviewer #1:

This manuscript is a protocol for a validation study of a mobile application for inhibitory control evaluation in young people. It presents an interesting idea of measuring inhibitory control via a mobile application, which is presumably more feasible than other experimental approaches when used in community settings. The protocol summarises the main details of the proposed validation study. Several important details in the methods are missing and should be clarified. I recommend this protocol for publication after major revision.

1. The abstract could be more specific and include the importance of this research and the possibility of future application of the CALM-IT in clinical practice.

We have expanded the abstract to now include more specific language and a description of the importance of this research to future application in clinical practice (p. 2):

“The development of CALM-IT has significant implications for the ability to screen for inhibitory control deficits in the community by both clinicians and researchers. By facilitating assessment of inhibitory control outside of the laboratory setting, researchers could have access to larger and more diverse samples. Additionally, in the clinical setting, CALM-IT represents a novel clinical screening measure that could be used to determine personalized courses of treatment based on the presence of inhibitory control deficits.”

2. Page 4: “However, these tasks tend to be time-intensive, repetitive, and expensive, making them infeasible in a community setting”. Can the authors add an example of the tasks which are not feasible to conduct in a community setting and why? In what sense is the new application CALM-IT less repetitive? My understanding is that it comes from well-established paradigms (Go/No-Go and Stop-Signal Task), which are repetitive, but their use in the community is more about the use of an appropriate device than about creating a whole new paradigm or measuring instrument.

We appreciate the opportunity to provide more clarity to this statement. We have revised the above referenced sentence (p. 4):

“However, these tasks tend to be time intensive, repetitive, and expensive, making them difficult to effectively deploy within a community setting.”

Additionally, we believe that there are two critical aspects that need to be more effectively communicated in our introduction. First, we now provide an example of the types of demands of traditional in-laboratory tasks that make them difficult to translate to the community setting (p. 4):

“For example, some tasks rely on eye-tracking technology (Hallett, 1978) that requires equipment and specific environmental controls (i.e., the luminance in the room and participant head position relative to the presented stimuli) while others involve large numbers of repetitive trials presenting single simplistic stimuli one at a time (i.e., letters or shapes) that require long periods of sustained attention (Donders, 1969; Eriksen, 1995; Logan & Cowan, 1984; Rutschmann, 1977).”

Second, we now more explicitly discuss the aspects of CALM-IT that allow it to be both more easily accessible and engaging while still leveraging the methodological designs from well-established paradigms (p. 6-7):

“CALM-IT is designed to leverage the strong methodological design of well-established laboratory-based inhibitory control tasks while increasing participant engagement through gamification of the tasks. By using a mobile platform, participant interactions with CALM-IT mirror those with other mobile-based games, with the goal of increasing participant engagement via dynamic stimuli and in-game incentives.”

3. “While literature examining treatment of youth psychopathology using interventions specifically targeting inhibitory control is limited, previous work shows a reduction of mood symptoms following simulant medication treatment for co-occurring attentional (Posner et al., 2014; Towbin et al., 2019).”. What is exactly meant by youth psychopathology? Diagnoses of mental illness or behavioural problems (e.g., at school, with peers, etc.)? Could you please provide examples of specific psychopathology conditions in youth treated in inhibitory control and how are these findings connected with the present study?

We apologize for the omission. We now specifically state the youth psychopathology referenced in the cited work (p. 5):

“While literature examining treatment of youth psychopathology, such as attention deficit hyperactivity disorder (ADHD) and irritability, using interventions specifically targeting inhibitory control is limited, previous work shows a reduction of mood symptoms, including anxiety, following stimulant medication treatment for co-occurring attentional symptoms (Posner et al., 2014; Towbin et al., 2019). These findings provide some evidence suggesting that inhibitory control may reflect a key behavioral mechanism underlying the emergence of mood symptoms and treatment response.”

4. The protocol does not clearly state the age range of participants the authors plan to recruit for their study and the process of their recruitment and selection (randomly approached participants, all participants from a certain clinic will be offered the pre-screening and participation, etc.). How will the authors ensure that the participants will represent a whole range of symptoms? Moreover, if the participants are underage, I would recommend clarifying the role of parents or legal guardians in the process: whether they will be present or not during the use of the app and additional evaluations, and how the authors plan to handle potential interference from parents in the app testing (e.g., using the app instead of the recruited participant). It is unclear if the app will be tested/used at home or in a controlled environment (e.g., a laboratory or a clinic).

The participants section has been updated to provide more information regarding recruitment procedures and the targeted age range and clinical psychopathology (p. 8):

“Youth aged 8 – 18 years old will be recruited from the community to participate in research at the National Institute of Mental Health (NIMH). Participants will be recruited as part of a larger protocol that aims to specifically recruit youth with a primary diagnosis of an Anxiety Disorder, Disruptive Mood Dysregulation Disorder (DMDD), ADHD, and youth with no psychiatric diagnosis. Past work within this sample has found that recruiting samples characterized by these clinical diagnoses has resulted in the full range of irritability, anxiety, and ADHD symptoms (Cardinale, Kircanski, et al., 2019; Kircanski et al., 2017, 2018; Stoddard et al., 2017; Tseng et al., 2018).”

We also now include a discussion of the role of parents or legal guardians and of the testing environment in the description of our procedures (p. 9):

“Prior to participation, parents will provide written informed consent and youth will provide written assent. Participants will complete up to three different testing sessions. Administration of CALM-IT will occur outside of the laboratory setting. Families will be sent instructions on how to download CALM-IT onto their mobile device. As such, CALM-IT testing sessions will be completed within a community setting (e.g., at home) at whatever time is most convenient for the family. Parents will be instructed that CALM-IT should be played only by the child with no assistance. Adherence to this procedure will be assessed in a follow-up interview. Families will also have the option of coming in for in-laboratory testing, where children will complete four canonical inhibitory control tasks (detailed below). Families will receive monetary compensation for completion of CALM-IT. All study procedures have been approved by the NIMH Institutional Review Board.”

5. What is the rationale behind selecting the particular diagnoses of anxiety (but not depression), ADHD (including ADD?), and DMDD? The topic of inhibitory control problems in children with these selected diagnoses could also be elaborated further in the introduction. Summarising the existing evidence on this problem may give a better understanding of the importance of this study and the CALM-IT application.

The reviewer raises an important point about the relevance of depression. Epidemiological studies demonstrate that youth with anxiety and DMDD often develop depression (Brotman et al., 2006; Copeland et al., 2014; Eyre et al., 2019; Pine et al., 1998). As such, depression may be a relevant construct within the proposed sample. We now include a secondary analysis of depression using the Mood and Feelings Questionnaire:

(p. 15): “For secondary analyses, depression symptoms will be measured using the parent-report Mood and Feelings Questionnaire (MFQ; Angold et al., 1995). Total scores on the MFQ will be calculated for each participant as a measure of depressive symptoms.”

(p. 19): “Finally, secondary analyses will be run examining bivariate correlations between measures of hyperactive-impulsivity, inattentiveness, and depression.”

We now also include additional discussion of the prevalence and comorbidity of anxiety, ADHD, and DMDD in childhood and adolescence and associations with inhibitory control in the introduction (p. 4-5):

“The proposed work could be of particular importance in relation to anxiety, irritability, and ADHD symptoms as these symptoms are common, co-occurring symptoms that are present across a wide range of childhood psychopathology (Brotman et al., 2017; Karalunas et al., 2019; Leibenluft, 2017; Pine, 2007), and are implicated in the later development of mood disorders including depression (Brotman et al., 2006; Copeland et al., 2014; Eyre et al., 2019; Pine et al., 1998). Furthermore, aberrant cognitive control more broadly has been implicated as a potential mechanism associated with the presentation of anxiety (Cardinale, Subar, et al., 2019; Eysenck et al., 2007; Moser et al., 2013), irritability (Chaarani et al., 2020; Deveney et al., 2018; Tseng et al., 2018), and ADHD (Durston et al., 2002; Willcutt et al., 2005) symptoms.”

6. Stimuli: The application seems to present a limited number of trials and stimulus types. Did the authors consider using different colours and shapes of stimuli for each application testing session to avoid habituation to specific stimuli? For example, the no-go stimuli are always yellow stars; changing the colour or type of the no-go stimulus for each level may reflect more on the flexibility of each participant to inhibit an inappropriate response in a variety of different scenarios.

The reviewer brings up a very interesting recommendation regarding the study of the flexibility of inhibitory control in the context of changing task demands and stimulus characteristics. We believe that the critical first step is to establish the validity of CALM-IT as a measure of baseline inhibitory control, and we have thus limited the game to two versions, one modeled after go/no-go and one modeled after stop-signal delay paradigms. This also allows us to keep the game play to a reasonable length of time while maximizing the amount of data collected.

However, the reviewer’s suggestion would be a very important avenue for future work, as would other manipulations of the stimuli that could be leveraged to train inhibitory control. While this is out of the scope of the currently proposed project, we now include additional discussion of potential future versions of CALM-IT, should the proposed study confirm valid measurement of inhibitory control using the current version (p. 19):

“The present protocol would lay the groundwork for an important line of future work that could provide researchers and clinicians with a multifaceted tool to measure multiple aspects of inhibitory control. For example, future versions of the app could include manipulation of the stimuli presented such that participants are required to update which stimulus represents the stop stimulus, thus targeting one’s ability to flexibly deploy inhibitory control. Furthermore, an adaptive version of CALM-IT that increases difficulty based on participants’ individual performance could function as an intervention aimed at improving inhibitory control.”

7. Effect size calculation: What is the rationale behind selecting effects of r between .20 and .32? Could the authors give examples of similar studies or otherwise justify the selected effects and/or the effect size calculation?

We realize that the language used in the description of the power analyses suggests that we began with a priori effect sizes of r between .20 and .32; however, those effects were reported as a function of the sample size to provide the reader with context on the size of effects that could be detected with a power of .80 at a significance level of alpha = .05. We have revised the language in this section to provide more clarity (p. 9-10):

“To investigate the feasibility of CALM-IT (Aim 1a), we will recruit 200 youth. For investigation of the stability of CALM-IT (Aim 1b), we will invite 75 youth out of the overall sample of 200 to play CALM-IT twice. For validation of mobile application-based behaviors of inhibitory control (Aim 2), we will invite 100 youth out of the overall sample of 200 to complete a battery of in-laboratory inhibitory control paradigms. We evaluated each of these proposed sample sizes using the pwr package in R to determine the size of the effect that the sample would allow us to detect using a significance level of 0.05 and a power level of 0.80 (Stephane, 2020). For the feasibility of CALM-IT (Aim 1a), our proposed sample size is sufficient to detect effects of r≥0.20. For the stability of CALM-IT (Aim 1b), our proposed sample size is sufficient to detect effects of r≥0.32. Finally, for the validation of application-based behaviors of inhibitory control (Aim 2), the proposed sample size is sufficient to detect effects of r≥0.28. Thus all proposed samples would allow us sufficient power to detect small to medium effect sizes that are similar to those found within comparable samples (Bari & Robbins, 2013; Cardinale, Subar, et al., 2019; Derakshan et al., 2009; Haller et al., 2020).”
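A minimal sketch of this sensitivity analysis in R is given below; it assumes the pwr.r.test function from the pwr package (the revised text names only the package, so the specific call is an assumption, not the authors’ exact script):

library(pwr)  # power-analysis package named in the revised text

# Solve for the smallest detectable correlation r at alpha = .05 and power = .80
# for each proposed sample size (Aim 1a: n = 200; Aim 1b: n = 75; Aim 2: n = 100).
for (n in c(200, 75, 100)) {
  res <- pwr.r.test(n = n, sig.level = 0.05, power = 0.80)
  cat(sprintf("n = %3d -> detectable r >= %.2f\n", n, res$r))
}
# Yields approximately r >= 0.20 (n = 200), 0.32 (n = 75), and 0.28 (n = 100),
# consistent with the values reported in the revised text.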

8. Symptom measures: I would recommend that the authors also consider measuring participants’ impulsivity using standardised questionnaires to investigate the different facets of impulsive behaviour. Some established measures, such as the UPPS-P, can be used in children.

Unfortunately, due to demands placed on our pediatric patients with severe clinical impairment within the current protocol, we are hesitant to add additional measures. However, we appreciate the important point made by the reviewer regarding impulsivity specifically and have added secondary analyses in which we will specifically examine associations with the two components of ADHD (hyperactive-impulsivity and inattentiveness) as measured by the Conners Comprehensive Behavior Ratings Scale. Moreover, we will look specifically at the clinician-endorsed items on the ADHD module of the Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime version.

(p. 15): “Secondary analyses will be conducted examining associations with the two components of ADHD: hyperactive-impulsivity and inattentiveness. For these analyses, the CBRS DSM-IV Hyperactive-Impulsive and CBRS DSM-IV Inattentive subscales will be used.”

(p. 19): “Finally, secondary analyses will be run examining bivariate correlations between measures of hyperactive-impulsivity, inattentiveness, and depression.”

9. Assessing the engagement: This seems to be assessed only based on the performance metrics from the app. I would recommend that the authors consider running a focus group or using a user-experience questionnaire to gain more information on the engagement and usability of the app. This can be an important source of information for adjusting apps in development based on real user-experience feedback. If app engagement is assessed only based on the performance metrics, how will the authors distinguish between poor performance in the app due to boredom or lack of engagement versus poor performance caused by very poor inhibitory control abilities?

The degree to which participants engage in experimental tasks is a persistent issue across measurements and constructs. Unfortunately, this issue is present even for the most canonical in-laboratory measures of inhibitory control. One of the central aims of CALM-IT is to create a more engaging measurement of inhibitory control that could limit confounding issues of loss of attention or limited motivation. Thus, we agree with the reviewer that it is critical that we evaluate the degree to which CALM-IT is in fact engaging. To improve our evaluation of engagement, we now additionally propose to examine our measure of app engagement across levels (p. 16):

“Average percent targets hit and percent stars hit will also be examined across each level to assess maintenance of attention and engagement across all levels of the task.”

Additionally, we will now include a follow-up interview as part of our procedures to gather feedback from participants regarding their experience playing the game and, specifically, the degree to which they report being engaged (p. 16):

“Finally, a follow-up interview will be conducted to collect a qualitative assessment of CALM-IT engagement from each participant.”

Attachment

Submitted filename: ResponseToReviews_CALMIT.docx

Decision Letter 1

Veena Kumari

12 May 2021

Rationale and validation of a novel mobile application probing motor inhibition: Proof of concept of CALM-IT

PONE-D-20-39885R1

Dear Dr. Cardinale,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Veena Kumari

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Veena Kumari

25 May 2021

PONE-D-20-39885R1

Rationale and validation of a novel mobile application probing motor inhibition: Proof of concept of CALM-IT

Dear Dr. Cardinale:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Veena Kumari

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Identification of ADHD items for inclusion in bifactor model of symptoms.

    Description of methods and results of a confirmatory factor analysis of the 18 items measuring ADHD symptoms from the Conners Comprehensive Behavior Ratings Scale.

    (PDF)

    Attachment

    Submitted filename: ResponseToReviews_CALMIT.docx

    Data Availability Statement

    All relevant data from this study will be made available upon study completion.

