Table 4.
Summarized findings from a systematic review of 27 randomized controlled trials involving smartphones in the field of psychiatry. The Consolidated Standards of Reporting Trials (CONSORT) electronic health checklist is used as a guideline to systematically display trial design, methodology, and reporting of the identified trials.
| CONSORT item | Summarized findings according to CONSORT item, with references to relevant articles |
| --- | --- |
| Title and abstract (1a and 1b) | All but 6 titles [22,35,37,38,41,42] described the mode of delivery, components of treatment, target group, and trial design according to the CONSORT electronic health guidelines [20]. Often, only broad terms were used to describe components, such as “mobile” or “mHealth.” |
| Introduction (2a and 2b) | The trials were published from 2013 to 2019, with an even distribution from 2013 to 2017 and increasing numbers from 2018 onward [33-43]. Trials were mainly from Western countries, especially Scandinavia [23,25,27-30,34,40]. |
| Trial design (3) | A total of 19 trials were classic RCTs [23,25-27,29-34,38-41,43-45,48], 7 were pilot RCTs [22,35-37,42,46,47], and 1 was a noninferiority RCT [28]. |
| Participants (4a and 4b) | A total of 22 trials used research-based diagnoses: 5 were based on questionnaires [44-48], 8 used phone interviews [22,25,28-30,34,42], 1 used a FaceTime or Skype interview [38], and 8 used personal interviews [26,27,31,32,36,39,43], mostly based on either the MINI or DSM-IV; 5 trials based their diagnoses on clinical information only [24,33,35,37,41]; 13 trials excluded patients with varying degrees of suicidal ideation [22,23,25,28-32,34,36,39,42,47]; 3 trials excluded patients with overly severe symptomatology within the diagnosis of interest [22,26,27]. Most trials excluded patients with severe psychiatric comorbidity from lower International Statistical Classification of Diseases and Related Health Problems, 10th revision chapters. In addition, 12 trials supplied participants with smartphones, on either a voluntary or mandatory basis [24,26,27,31,33,35,38,40,41,46-48]; 12 trials compensated participation in assessments with money or gift cards [26,33-38,42,43,46-48]. |
| Interventions (5) | Intervention length varied substantially, from 3 weeks [36] to 52 weeks [41]; most interventions lasted between 4 and 12 weeks. Overall, 8 trials used unaffected standard treatment alongside the intervention [24,27,35,37,38,40,44,45]; 8 trials tested the app alone [35-37,44-48], and the remaining trials used variations of blended treatment; 1 trial tested blended therapy against the app alone [47]; 18 trials used prompts to engage users, either from the app or by investigators [22,26-28,30,32-44]; 1 trial compared with an inactive “placebo” version of the app [41]; 2 trials compared with a placebo training module [43,44]. In 1 trial, participants in the control group received a “placebo” smartphone without the app system [27]; 5 trials used standard treatment as the comparator [24,27,35,38,40], 11 trials used a waitlist control [29,30,34,38,39,42-46,48], 4 trials used a clinical intervention [26,28,31,33], and 5 trials compared with another app [23,25,35,36,42]; 1 trial collected automatically generated data [40], and a further 8 trials collected data on app usage [24,33,34,42,43,45,47,48]; in 3 trials, the intervention was available only for iPhone [32,34,45] and in 3 only for Android [27,42,43]. The remaining trials either had a Web-based version available or an app for both platforms. Only 1 article mentioned information about updates to the app or intervention [27]. |
| Outcomes (6a and 6b) | Overall, 8 trials did not use a predefined hierarchy of outcome measures [22,37,41-44,46,47]; 1 trial used levels of drug use detected in urine as a specific outcome measure [37]; 1 trial used objective measurements of feasibility, use, and attrition as the primary outcome [35]; 1 trial tested a specific task [38]; 1 trial used video call–based clinical ratings [41]. Only 4 trials used clinical ratings as the primary outcome [26,27,31,40]; the remaining trials used patient-reported outcome measures. A total of 12 trials used an internet platform for data collection [23,25,28-30,34,38,39,42-46,48], with 6 of these mentioning validation of the questionnaires for online use [23,25,28,30,34,46]; 2 trials used the app for outcome measurement [31,41]. |
| Sample size (7a and 7b) | Sample size varied from 20 participants [47] to 429 participants [44]; 11 trials included more than 100 participants [24,30,32-34,39-41,44,45,48]. Pilot trials were smaller. |
| Randomization (8, 9, and 10) | Overall, 8 trials did not supply information about randomization [29,31,36-38,45-47]; 1 used Excel [43], and 1 used the app itself for randomization [41]. The remaining trials mainly used online software. |
| Blinding (11a and 11b) | Overall, 2 trials claimed to be double-blinded with no further explanation of how blinding was ensured [43,44]; 1 trial blinded app allocation for the patients (2 different apps were tested) [45]. The remaining trials had no blinding of patients. In 12 trials, the authors explicitly wrote that they used blinded assessments for outcome measures [25-29,31-33,35,38,40,44]. Within these 12 trials, 5 used patient-reported outcome measures as the primary outcome with nonblinded patients [25,28,29,32,33]; 1 trial tested for the success of blinding [32]. |
| Statistical methods (12a and 12b) | A total of 11 trials based their sample size on power calculations [24,27,28,30-33,39,40,45,48]. All but 1 of these managed to recruit at least the desired number of participants [31]. No trials took changes and updates of software or technical problems into account in their statistical methods. |
| Participant flow (13a and 13b) | All but 2 trials [43,44] presented a trial flow chart of eligible subjects, although with varying detail on reasons for nonparticipation and dropout. Completion rates were reported very differently, varying from 163/164 completing the primary outcome [32] to 74/283 completing posttreatment assessments (the primary outcome) [45]. All but 2 trials reported on adherence to treatment [36,43]. |
| Recruitment (14a and 14b) | Recruitment length was reported in 16 trials and varied from a few months to several years [22-24,27,30,32,33,35,37,41,42,44-48]; 10 trials used closed recruitment, with referral from clinicians or researchers seeking out participants from a well-defined patient population [24,27,31-33,35,37,40,41,47]; 1 trial gave no information on recruitment [43]. The remaining trials used open recruitment, mainly via Craigslist.org, traditional advertising, or social media. |
| Baseline data (15) | Only 2 trials included technology-specific baseline data or information about participant technological abilities [32,33]. |
| Numbers analyzed (16) | All but 6 trials [22,36,37,41-43] used the intention-to-treat principle in the primary analysis. |
| Outcome and estimation (17) | A total of 17 trials presented intensity of use or user data, either in the article or in supplementary data, with significant variations in usage among subjects and between trials [22-25,27,32-35,37-40,42,43,45,47]. |
| Harms (19) | Overall, 5 trials prospectively measured harms or adverse events and reported them directly in the paper [32,33,35,41,42]. These trials found no harm from the smartphone treatment used. One of these trials [42] had a safety protocol with clear, standardized instructions on how to react to suicidal ideation; 1 trial found a negative effect of treatment in a secondary analysis, indicating less improvement in symptoms in a subgroup with a higher baseline score on the Hamilton Rating Scale compared with controls [27]. No trials mentioned privacy breaches. Three trials mentioned technical problems and how these affected the intervention [23,40,48]. |
| Generalizability (21) | Trials were heterogeneous. Some had strict criteria on diagnosis, comorbidity, and ongoing treatment, whereas others were pragmatic trials with few exclusion criteria. Trial populations varied from patients recruited from the general population, who might not have sought help in the regular treatment system [22,23,25,28,30,34,39,42,44], to patients recruited from specialized clinical settings [24,27,31-33,35,37,40,41,47]. |
| Registration (23) | A total of 14 articles included information about trial registration [22,24-27,29-33,35,39-41]. |
| Protocol (24) | A total of 5 trials published their trial protocol [27,30,32,35,40]; 1 trial had the protocol attached to the publication [28]. |
| Funding (25) | Most authors came from universities; 15 trials reported information regarding funding [24-28,31,32,35-38,40,41,43,47], with funding mainly coming from public funds and institutions. |
| Competing interest (X27)a | A total of 9 articles declared various degrees of affiliation with private technology companies or a close relation to the app they tested [25,30-34,40,42,45]; 8 trials did not include conflicts of interest in the printed article [23,28,29,35,37,39,46,48]. |
aNot an original CONSORT item but included in the CONSORT electronic health checklist as X27.