It has been over a decade since Espie first proposed, here at SLEEP, a stepped care model for delivering cognitive behavioral therapy for insomnia (CBT-I), with the foundational step involving self-administered therapy [1]. The CBT-I programs Sleepio and SHUTi (now Somryst) were the pioneers in digital therapeutics for the chronic insomnia population [2]. These programs use a combination of behavioral and cognitive modules that sequentially deliver therapy over weeks, closely resembling clinician-based therapy, and they are probably the most thoroughly tested of the digital insomnia therapy programs. There has been a welcome plethora of digital CBT-I randomized controlled trials testing not only Sleepio and SHUTi but also other programs [3–5], demonstrating clinical efficacy [6]. Collectively, these digital CBT-I programs can be used as the “first step” in the stepped care CBT-I model, which has proved effective for triaging, delivering, and allocating resources for therapeutic providers [7]. However, these digital CBT-I programs were designed for insomnia populations, whereas less severe or prodromal sleep disturbance is reported by a large percentage of the population [8], and formalized CBT-I may be therapeutically redundant for these people. Some of them may nevertheless go on to develop insomnia disorder later.
Simpler behavioral approaches may be effective on a wide scale for people like this who have not yet developed insomnia disorder. One promising approach has been the introduction of noise into the sleeping environment. This may initially seem counterintuitive given general sleep hygiene recommendations, but randomly generated noise may help drown out other, less predictable environmental noises that can disturb sleep. Some formal trials and in-laboratory experiments delivering white, pink, or brown noise have indeed been conducted, although it is still not clear from the evidence base whether these are helpful, harmful, or neither [9]. Nevertheless, the widespread use of devices that incidentally deliver white noise, such as fans or air conditioners, does imply that people find this noise acceptable and at least benign for their sleep. The interventions that are the subject of this editorial take a very different approach. Instead of random noise, they use sound encoded with information: music with nature sounds, and narrated stories.
Many people would find listening to music or narrated stories an intrusive and unwanted addition to their sleep routine, and readers may already warn their patients against it. Much like the difficult-to-interpret data around white and other colors of noise [9], the potential benefits or harms of environmental noise at intermediate volumes may be mediated by feelings of annoyance. Different people can interpret the same noise as soothing or annoying, and that interpretation could powerfully determine their response. What might be important is the locus of control: did the person get to choose the noise, or was it imposed on them? That is why some people can find the sound of a highway in the middle distance soothing when they chose to live there, but a sleep stressor when the highway was built near their home. Similarly, some people choose to sleep with fans or air-conditioning running in their rooms, yet air-conditioning run overnight by neighbors is a common source of noise complaints in urban areas.
The current issue of SLEEP reports an interesting three-arm trial by Economides and colleagues testing a new mHealth tool designed to reach people with poor sleep rather than clinically diagnosed patient groups [10]. This sponsor-initiated study investigated the feasibility and acceptability of two interventions delivered within the proprietary Unmind digital mental health app: one consisted of simple audio-based ambient music and nature sounds (Nightwaves), the other of narrated sleep stories (Sleep Tales). The study randomized 300 participants in a 1:1:1 ratio to the two treatment arms and a wait-listed control group. Participants were working adults with self-reported sleep disturbance who were recruited online and were not receiving formal therapy for sleep problems.
The results showed that feasibility and acceptability were high based on recruitment, uptake, and retention: more than 90% of participants were retained at 4 weeks and completed at least one intervention session. The secondary sleep outcomes showed significant improvements in self-reported sleep disturbance (Hedges’ g effect size 0.92 [0.63–1.22]) and sleep-related impairment (0.80 [0.51–1.09]). These effect sizes are in line with those from digital CBT-I studies.
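For readers who want a reminder, Hedges’ g is the between-group standardized mean difference with a small-sample bias correction; the sketch below gives the standard textbook form (a general definition, not necessarily the exact computation reported in the trial):

$$
g \;=\; J \cdot \frac{M_1 - M_2}{s_p},
\qquad
s_p \;=\; \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}},
\qquad
J \;\approx\; 1 - \frac{3}{4(n_1 + n_2) - 9}
$$

where $M_i$, $s_i$, and $n_i$ are the group means, standard deviations, and sample sizes; values near 0.8 or above are conventionally read as large effects.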
Interestingly, as this participant population was recruited from the internet on the basis of nonspecific sleep disturbance, it is likely to overlap somewhat, but not completely, with the patient populations in digital insomnia trials. These participants may represent an earlier point on the trajectory of patients who subsequently develop insomnia, being predominantly female (71.7%) but somewhat younger (35.5 years) than the insomnia population [11]. As such, these results may also be viewed in light of whether such interventions could reduce the incidence of insomnia.
This pilot study had an impressive sample size for a feasibility and acceptability trial (n = 300), and the authors appropriately followed their own preregistration plan for the trial [12] (ICTRN 12614821). The logical next step is a fully powered efficacy trial with sleep disturbance as the primary outcome and follow-up extended beyond 4 weeks to ensure the effects are lasting. To demonstrate sustained effects on sleep disturbance, trials should be extended to 6 or 12 months to determine whether relapse rates are low and whether sleep disturbance has remitted. Recent digital CBT-I trials [13, 14] using similar methods of internet enrollment of community participants provide a blueprint for probable sample sizes (n ~ 1700). It should be noted that dropout rates for digital CBT-I trials are higher than for well-conducted face-to-face clinical trials. Nevertheless, large sample sizes for digital sleep health interventions are likely to become the norm to improve generalizability and demonstrate wide access for digitally health-literate populations.
Any criticisms we might have of this study are common to all studies of this kind. All of these trials are biased toward finding more positive results because they are unavoidably unblinded (i.e. open label: everybody can tell which intervention they are getting) and rely on subjectively self-reported endpoints. Another very important bias comes from employing a wait-listed control group. Wait-listed control arms in this context are probably biased toward finding that interventions are more effective than they really are: participants know what they were hoping to receive, what they actually received, and what the intended answer is (the treatment given now is better than waiting for the treatment later). So some of these impressive effect sizes are probably inflated. Having such a control group is certainly an improvement on having no control group, but in these circumstances it introduces powerful biases that operate even after randomization [15].
Our new world of digital therapeutics is certainly exciting and is moving very fast. We applaud all of the efforts being made to rigorously test these new interventions. But it is worth remembering that testing them fairly is very difficult to do well, and that the biases described above are likely to lead to overestimates of true effectiveness.
Contributor Information
Christopher J Gordon, CIRUS Centre for Sleep and Chronobiology, Woolcock Institute of Medical Research, Sydney, NSW, Australia; Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia.
Nathaniel S Marshall, CIRUS Centre for Sleep and Chronobiology, Woolcock Institute of Medical Research, Sydney, NSW, Australia; Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, NSW, Australia.
Disclosure Statement
Financial disclosure: CG and NM report no financial disclosures. Nonfinancial disclosure: CG has patents in a digital sleep intervention, SleepFix. NM reports no conflicts.
References
- 1. Espie CA. “Stepped care”: a health technology solution for delivering cognitive behavioral therapy as a first line insomnia treatment. Sleep. 2009;32(12):1549–1558. doi: 10.1093/sleep/32.12.1549
- 2. Manber R, Alcántara C, Bei B, Morin CM, van Straten AA. Integrating technology to increase the reach of CBT-I: state of the science and challenges ahead. Sleep. 2023;46(1). doi: 10.1093/sleep/zsac252
- 3. Koffel E, Kuhn E, Petsoulis N, et al. A randomized controlled pilot study of CBT-I Coach: feasibility, acceptability, and potential impact of a mobile phone application for patients in cognitive behavioral therapy for insomnia. Health Informatics J. 2018;24(1):3–13. doi: 10.1177/1460458216656472
- 4. Lancee J, van den Bout J, van Straten A, Spoormaker VI. Internet-delivered or mailed self-help treatment for insomnia? A randomized waiting-list controlled trial. Behav Res Ther. 2012;50(1):22–29.
- 5. Watanabe Y, Kuroki T, Ichikawa D, Ozone M, Uchimura N, Ueno T. Effect of smartphone-based cognitive behavioral therapy app on insomnia: a randomized, double-blind study. Sleep. 2022;46(3). doi: 10.1093/sleep/zsac270
- 6. Zachariae R, Lyby MS, Ritterband LM, O'Toole MS. Efficacy of internet-delivered cognitive-behavioral therapy for insomnia - a systematic review and meta-analysis of randomized controlled trials. Sleep Med Rev. 2016;30:1–10. doi: 10.1016/j.smrv.2015.10.004
- 7. Kalmbach DA, Cheng P. Embracing telemedicine and digital delivery of cognitive-behavioral therapy for insomnia: where do we come from and where are we going? Sleep. 2023;46(1). doi: 10.1093/sleep/zsac291
- 8. Grandner MA, Jackson NJ, Pigeon WR, Gooneratne NS, Patel NP. State and regional prevalence of sleep disturbance and daytime fatigue. J Clin Sleep Med. 2012;8(1):77–86. doi: 10.5664/jcsm.1668
- 9. Riedy SM, Rocha S, Basner M. Noise as a sleep aid: a systematic review. Sleep Med Rev. 2021;55:101385. doi: 10.1016/j.smrv.2020.101385
- 10. Economides M, Bolton H, Cavanagh K. Feasibility and preliminary efficacy of app-based audio tools to improve sleep health in working adults experiencing poor sleep: a multi-arm randomized pilot trial. Sleep. 2023;46(7):zsad053. doi: 10.1093/sleep/zsad053
- 11. Miller CB, Valenti L, Harrison CM. Time trends in the family physician management of insomnia: the Australian experience (2000–2015). J Clin Sleep Med. 2017;13(6):785–790. doi: 10.5664/jcsm.6616
- 12. Loffler KA, Patel SR. Reporting findings in sleep medicine: is it time for some spin control? Sleep. 2023:zsad045. doi: 10.1093/sleep/zsad045
- 13. Vedaa O, Kallestad H, Scott J, et al. Effects of digital cognitive behavioural therapy for insomnia on insomnia severity: a large-scale randomised controlled trial. Lancet Digit Health. 2020;2(8):e397–e406. doi: 10.1016/S2589-7500(20)30135-7
- 14. Espie CA, Emsley R, Kyle SD. Effect of digital cognitive behavioral therapy for insomnia on health, psychological well-being, and sleep-related quality of life: a randomized clinical trial. JAMA Psychiatry. 2019;76(1):21–30. doi: 10.1001/jamapsychiatry.2018.2745
- 15. Altman DG, Bland JM. Statistics notes. Treatment allocation in controlled trials: why randomise? BMJ. 1999;318(7192):1209.