. Author manuscript; available in PMC: 2023 Mar 1.
Published in final edited form as: Int J Clin Exp Hypn. 2021 Mar 16;69(2):277–295. doi: 10.1080/00207144.2021.1883988

Efficacy of a self-hypnotic relaxation app on pain and anxiety in a randomized clinical trial: Results and considerations on the design of active and control apps

Elvira V Lang *, William Jackson, Paul Senn *, Aroni Donavon-Khosrow *, Matthew D Finkelman , Thomas Corino *, Graham Conway *, Ronald J Kulich
PMCID: PMC9976960  NIHMSID: NIHMS1870647  PMID: 33724898

Abstract

Despite an explosion of mobile offerings for management of pain and anxiety, the evidence for effectiveness is scarce. Placebo-controlled trials are the most desirable, but designing inactive placebo apps can be challenging. For a prospective randomized clinical trial with 72 patients in a craniofacial pain center, we created one app with self-hypnotic relaxation (SHR) for use with iOS and Android systems. A placebo background audio (BA) app was built with the same look and functionality. Both SHR apps significantly reduced pain and anxiety during the waiting room time, both by themselves and in comparison to the BA group. The Android BA app significantly reduced anxiety, but not pain. The iOS BA app affected neither pain nor anxiety, functioning as an ideal placebo. Usage analysis revealed that different default approaches of the iOS and Android devices accounted for the difference in results.

Keywords: Pain, anxiety, app, hypnosis, placebo app


As smartphone technology has become more prevalent, there has been an explosion of medical-related mobile offerings (mHealth apps) promising a variety of benefits, ranging from relaxation to psychological betterment, diagnosis and even cures (Grand View Research, 2017). The reduction of pain is a particular target as one of the most common and distressing symptoms for patients in all clinical settings (Gregory & McGowan, 2016; Johannes, Le, Zhou, Johnston, & Dworkin, 2010; Klepstad, Kaasa, Cherny, Hanks, & de Conno, 2005; Zhao, Yoo, Lancey, & Varghese, 2019). Management of anxiety not only complicates the pain experience under regular conditions (Woo, 2010), but also is even more urgent as a prevalent symptom in times of health scares, such as the current COVID-19 pandemic (Wang et al., 2020).

By now, there are well over 103,000 unique apps and >1,500 hypnosis apps alone (Research2Guidance, 2015; Sucala et al., 2013). Nevertheless, there is a dearth of evidence for the effectiveness of these apps (Lalloo et al., 2017; Sucala et al., 2013; Thurnheer, Gravestock, Pichierri, Steurer, & Burgstaller, 2018; Zhao et al., 2019). Assessment of apps faces challenges in terms of content, usability, feasibility of use by the target population, and clinical efficacy (Reynoldson et al., 2014; Salazar, de Sola, Failde, & Moral-Munoz, 2018; Zhao et al., 2019). Clinical outcome assessments, when available at all, typically rely on comparisons of pain experiences before and after app use (Guetin et al., 2016; Jamison, Mei, & Ross, 2018; Jibb et al., 2017; Lee et al., 2017; Oldenmenger, Baan, & van der Rijt, 2018; Raj, Brunelli, Klepstad, & Kaasa, 2017) or randomization among users versus non-users of an app (Blodt et al., 2018; Schatz et al., 2015; Skrepnik et al., 2017; Sun et al., 2017). As self-care in the form of engagement with any app, however, may affect outcome, comparison to a placebo control is preferable and is considered to provide a higher level of scientific evidence of efficacy (Goldet & Howick, 2013).

Prior randomized clinical trials in procedure and imaging settings showed that reading a short self-hypnotic relaxation script upfront favorably shaped the patients’ pain and overall experience (Lang et al., 2000; Lang et al., 2006; Lang et al., 2008). Throughout these and subsequent trials focusing on staff education in advanced communication, it was possible to validate specific verbal content and suggestions in terms of pain and anxiety reduction (Ajam, Nguyen, Kelly, Ladapo, & Lang, 2017; Norbash et al., 2016). While these word sequences were delivered live by personnel in response to patients’ prevailing mood state, we hypothesize that pain and anxiety reduction should also be achievable when the same suggestions are provided in an app that respects patients’ preferences. Towards this end, we designed a self-hypnosis app containing these previously tested word sequences.

When assessing the effect of audio content on the state of mind in clinical trials, the choice of the control condition becomes challenging. Blank tapes have been used for comparison in trials assessing effectiveness of audiotapes (Mandle et al., 1990). A blank app with no audio content, however, might upset the user for lack of the choices usually expected with an app. No sound may also make the user wonder whether the device works correctly and result in extra interactions with the research team, which in turn introduce bias. Background audio thus appears the best option for the “placebo” in an audio trial. Consequently, we designed a placebo app with the same internal architecture and looks as the self-hypnosis app, using various background audio selections as audio output. Since it was not known whether the operating systems, such as Google’s Android or Apple’s iOS as the two most commonly used operating systems, would affect the clinical outcomes, we constructed versions of the test and placebo apps on both platforms.

For testing, we chose the waiting room of a craniofacial pain center because of the high likelihood of preexisting pain exacerbated by the acute stressor of a clinic visit in a dental facility. Dental office settings, even in the general population, instill acute anticipatory anxiety in 9–42% of patients, and severe anxiety in 3–30% (Heyman et al., 2016; Hill, Chadwick, Freeman, O’Sullivan, & Murray, 2013).

The objective of this work was to demonstrate feasibility of app use at a time of immediate need and to assess its effect on patients’ pain and anxiety. Inclusion of a control app allowed for a randomized placebo-controlled design and for gauging the impact of app use in general. The trial further helped elucidate potential pitfalls in design and use of placebo and control apps.

Methods

Trial Design

This HIPAA-compliant trial was based on a parallel-group, placebo-controlled, single-blind, intent-to-treat design at the Craniofacial Pain Center of a university-based dental school. It was approved by the site IRB and the Office of Clinical Research Affairs (OCRA) at the National Institutes of Health/National Center for Complementary and Integrative Health (NIH/NCCIH) within a Small Business Innovation Research (SBIR) grant. All study data were controlled by the authors at the academic site, which had no relationships or conflicts of interest with the small business.

Participants

Eligible participants were male or female patients age 18 years and older who were scheduled to undergo procedures for relief of craniofacial pain or sleep apnea; were able to hear, write, and read in English, as the audio recordings and study scales were in English; were able to operate a standard smart tablet or smartphone to enable later download at home; had access to a smart tablet, smartphone, or computer-based app download; and were willing and able to give informed consent. Exclusion criteria included the presence of major psychiatric disorders or cognitive disorders that would prevent the patient from following the procedures. Participants were not screened for hypnotizability.

The consent invited patients to take part in a research study that would evaluate the effects of a Comfort Talk® patient app on their comfort levels because they were scheduled for a visit at the Craniofacial Pain Center, assess how use of the app affected their anxiety and pain before, during, and after their dental visit, and assess the overall feasibility of the study design before a large-scale study.

Interventions

App Design

Content of the test app derived from segments of self-hypnotic relaxation scripts that had been used successfully in prior clinical trials (Lang et al., 2000; Lang et al., 2006; Lang et al., 2008). When spoken live by staff immediately preceding invasive medical procedures, use of these scripts was associated with significant reductions in pain and anxiety. Training and observation of medical staff in use of calmative language provided additional language snippets which, when spoken live, were associated with fewer no-shows, incompletions of scans, and disruptive patient motion in clinical trials in magnetic resonance imaging facilities (Ajam et al., 2017; Ladapo, Spritzer, Nguyen, Pool, & Lang, 2018; Norbash et al., 2016).

To enable personalization while keeping the design simple, a self-hypnotic relaxation (SHR) app was built with four main topics (relaxation, confidence, comfort, and peace), four “extras” (balanced body, decisions, letting go, and surgery), two voice options (male and female), and long and short versions of each topic (Fig. 1a).

Fig. 1a.

Fig. 1a

iPad (iOS) version of the SHR-app with the opening screen (left top), selection menu of main themes (right top), choice of voice, listening time, and addition of extras (bottom left), which are spelled out on the extra screen (bottom right).

The majority of the text derived from the study script used in the prior clinical trials (reprinted in Lang et al., 2006) and a book we used for training of medical teams (Lang & Laser, 2009). The app starts with a general explanation of self-hypnotic relaxation, how an eye-roll with counting from 1 to 3 is used for entry into a relaxed state, and how patients can return to their natural state of awareness by counting backward from 3 to 1. Then follows an eye-roll induction, an invitation to float, to breathe in strength and let go of whatever there is to let go of, and to choose a preferred place, to associate all senses with it, and to use it as an anchor of comfort and safety. This basic text, used for relaxation, then concludes with a reorientation. Based on the main topic chosen, different inserts are then added to the basic text. One insert invites patients to associate with a state of confidence and to bring this feeling to the current situation. For pain management, use of an ice pack, warmth, or a delicious sense of tingling, as well as rubbing the fingertips together, is suggested in a shorter version; in a longer version, the metaphor of a floating stone for deepening of the hypnotic state and the mantra of “strong, numb, and courageous” are added. For anxiety management, a pile-of-sand analogy invites the patient to pile up all anxieties and worries as high as desired and then have them carried away by gentle waves; the longer version presents the imagery of a balloon that carries all worries away in its basket.

Extras were added based on issues we had noted to come up for patients in conjunction with invasive medical procedures over the years. They include how to make “Decisions” by ideomotor signals, how to achieve a “Balanced Body” with nongeneric suggestions for having all body systems work together as needed, and how to experience “Surgery” safely and heal. A metaphorical tale about “Letting Go” of something that was once helpful but has become a burden was included, since we often found patients grieve over the loss of disease states, infected material, and even tumors that had become part of their identities.

The app was published in the Apple Store (Comfort Talk®), initially with 2 voices, one speed, and no extras. Based on user feedback, two more voices (male and female) as well as the extras were added, and further personalization was enabled by allowing users to choose listening times. Behavior Driven Development (Smart, 2015) was used to finalize playtime, options, and voices, resulting in the active test app (Comfort Talk® Pro, SHR-app) (Fig 1a). The XCode development environment was used for the Apple iOS app and the Android Studio development environment for the Android app, using the libraries and simulation/testing tools available in these environments.

A similar development process was followed for development of a background audio control app (“BA-app”), using the SHR-app as a base and branching the code to create a new version with similar functionality but with audio segments that provided a choice of neutral themes with no spoken language (Fig 1b). The timings on the BA-app mirrored those on the active SHR-app. Care was taken to ensure that opening screens were the same and that choice buttons all had a similar look to minimize selection bias and to minimize risk for personnel and research assistants to identify en passant the type of app a patient was using.

Fig. 1b.

Fig. 1b

iPad version of the BA-app which was designed to resemble the SHR-app showing the opening screen (left top), selection menu of main themes (right top), choice of voice, listening time, and addition of extras (bottom left), which are spelled out on the extra screen (bottom right). Exterior A refers to nature sounds, Exterior B to urban sounds, Exterior C to sounds from the waterfront and interior to indoor sounds. Note that on the Android BA the sequence of themes from top to bottom was Exterior A, B, C, Interior, not as on the iOS Interior, Exterior A, C, B.

For the choice of the background audio sounds, we opted for unedited background sounds people might hear in their daily lives. Exterior sounds A, B, and C were recorded from a window or from the street, the interior sound in front of a front-loading washing machine.

App Use

After clinical screening for inclusion and obtaining consent and baseline data, the research assistant (a psychologist not involved in the treatment of the patient) handed the participant a tablet containing, depending on group attribution, either the SHR-app or BA-app in the iOS or Android version, and the participant was shown how to operate the tablet to the opening screen which was identical for the test and control app. All patients also obtained the identical glossy description pamphlet with sample screens on how to operate the tablet. The participants then returned to the clinic waiting area with the tablet and waited for their scheduled appointment. The participants were at liberty to choose when, for how long, and which segment(s) to listen to.

Outcomes

The pre-specified primary feasibility criterion was to obtain complete on-site data sets from at least 90% of enrolled patients as a measure of patient acceptance.

Clinical outcome measures were changes in pain and anxiety from the time before listening to the app to the end of the waiting room time. At these time points, patients were asked to rate their comfort level on 0–10 scales, first on a scale with 0=no pain at all and 10=worst pain possible, and then on a scale with 0=no anxiety at all and 10=worst anxiety possible. These verbal analog scales had been previously validated for use in the medical setting (Benotsch, Lutgendorf, Watson, Fick, & Lang, 2000; Murphy, McDonald, Power, Unwin, & MacSullivan, 1988; Paice & Cohen, 1997). The more neutral query about comfort levels as an introduction, rather than asking solely about the severity of pain and anxiety, was used to avoid setting expectations of pain and anxiety as well as the potential enhancement of pain perception that has been associated with use of negative suggestions (Cyna & Lang, 2011; Lang et al., 2005).

Having ≥90% of patients in the app group listen to the active SHR app ≥5 min (300 sec.) was another secondary pre-determined outcome measure. Therefore, anonymized app usage patterns were to be based on background capture of usage data of the SHR apps and then uploaded to a database server via the hospital and/or tablet internet connection. This required that the local investigator completely powered the Android device down before enrolling the next patient, not just turned it off. For the iOS device, a return to the home screen sufficed. To maintain blinding of the research team, all tablets had large stickers, “Power down after use.”

An automatically generated data file in comma-separated value file format (csv) displayed variables such as options chosen, audio segments played and time spent listening to each segment. Retrieving the BA app listening data was initially not an objective. It was done post hoc by use of software, which allowed access to internal app database storage, and selective searches to isolate log records describing user behavior.
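A session log of this kind can be summarized with a few lines of code. The sketch below is illustrative only: the actual column names in the study's csv files are not published, so `segment` and `seconds_played` are assumed names, and the sample values are hypothetical.

```python
import csv
import io

# Hypothetical one-session usage log; real column names and values may differ.
sample_log = """segment,seconds_played
Relaxation,313
Comfort,120
Peace,99
"""

def total_listening_seconds(csv_text):
    """Sum the time spent across all audio segments in one session log."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(int(row["seconds_played"]) for row in reader)

total = total_listening_seconds(sample_log)      # 532 sec for this sample
met_criterion = total >= 300                     # the 5-min secondary outcome threshold
```

A per-patient summary like this is all that is needed to check the ≥300-second listening criterion described above.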

Sample Size Determination

Since preoperative anxiety had proved a key predictor of the procedural pain experience in our past experience and data were available (Schupp, Berbaum, Berbaum, & Lang, 2005), we based sample size determination on anxiety reduction in the waiting room. Based on a prior clinical trial (Fig 4, time 0 of the procedure, in Lang et al., 2000), we assumed an average anxiety rating of 4 at the end of the waiting room time without prior guidance in relaxation or an app. Based on Schupp et al., one should expect improved subsequent outcomes with a 35% reduction in anxiety to 2.6 (33 vs. 53 on the STAI; Schupp et al., 2005). Using this value and a standard deviation of 2, 66 patients would be needed for a two-sided test with a power of 0.80 and an alpha of 0.05. With 66 patients as a guide to determine a target study size for assessment of the feasibility parameters, an enrollment of 72 patients seemed compatible with an early drop-out of 6 patients, resulting in an expected 91.7% on-site data return rate.
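The figure of 66 patients follows from the standard normal-approximation formula for a two-sample comparison of means. A minimal sketch of that arithmetic, using the values stated above (means 4 vs. 2.6, SD 2, alpha 0.05, power 0.80):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(mean_control, mean_active, sd, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison of means
    (normal-approximation formula: n = 2 * ((z_a + z_b) * sd / delta)^2)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for power = 0.80
    delta = abs(mean_control - mean_active)
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

per_group = n_per_group(4.0, 2.6, 2.0)   # 33 per group
total = 2 * per_group                    # 66 patients, as in the text
```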

Statistical Methods

Due to non-normality of data, Wilcoxon Signed Rank tests were used to assess the change in pain and anxiety from the start of waiting room time to the end of waiting room time. For the between-group comparisons, Mann-Whitney U tests were used. All tests were two-sided with significance levels of P<0.05 and were executed with SPSS software Version 25. For illustration purposes, compliance with CONSORT 2010 criteria, and to facilitate use in meta-analyses, results are given in terms of means, ranges, standard deviations (STD), and 95% confidence intervals (Schulz, Altman, & Moher, 2010).
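For illustration of the within-group comparison, a Wilcoxon signed-rank test can be computed on paired ratings. The sketch below uses only the standard library and the large-sample normal approximation (the study used SPSS, and exact methods are preferable for very small samples); the before/after ratings are toy values, not study data.

```python
from statistics import NormalDist

def wilcoxon_signed_rank_p(before, after):
    """Two-sided Wilcoxon signed-rank test via the normal approximation."""
    diffs = [b - a for b, a in zip(before, after) if b != a]  # drop zero differences
    n = len(diffs)
    abs_sorted = sorted(abs(d) for d in diffs)
    # Assign average ranks to tied absolute differences
    rank_of = {}
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs_sorted[j + 1] == abs_sorted[i]:
            j += 1
        rank_of[abs_sorted[i]] = (i + j) / 2 + 1
        i = j + 1
    w_plus = sum(rank_of[abs(d)] for d in diffs if d > 0)  # rank sum of decreases
    mu = n * (n + 1) / 4
    sigma = (n * (n + 1) * (2 * n + 1) / 24) ** 0.5
    z = (w_plus - mu) / sigma
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical 0-10 anxiety ratings at start vs. end of waiting room time
before = [5, 6, 4, 7, 3, 8, 5, 6, 4, 7]
after  = [3, 4, 3, 5, 2, 6, 4, 4, 3, 5]
p = wilcoxon_signed_rank_p(before, after)  # significant decrease for this toy data
```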

Randomization

Two iPads with iOS systems and two Android tablets, labeled A, B, C, D, were preloaded with the respective SHR-app or BA-app content in a sequence obtained by a computer-generated randomization list at the small business, unknown to the site investigators. The site statistician randomized the subjects to one of four devices using a block randomization plan generated by SAS version 9.4 PROC PLAN. The blocking factor was time of entry to the study, and the block size was four. The site statistician delivered the randomization plan to the company. There, cards with the respective tablet designation of A, B, C, or D based on the statistician’s sequence were placed in sequentially numbered opaque envelopes, which were sealed and placed in their corresponding sequentially labeled patient folders. Folders and tablets were then delivered to the site. Treating personnel and the research assistant at the site were blinded to group assignment.

After the research assistant had consented the patients and had entered the baseline data in the patient’s file folder, he opened a sealed envelope revealing the patient’s tablet attribution. He then left to retrieve the tablet written on the card. The assistant queried and recorded the trial data but did not analyze them to maintain blinding of evaluation.
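The blocked allocation described above can be sketched in a few lines. This is not the study's SAS PROC PLAN code, merely an illustration of the scheme: within each block of four consecutive enrollees, each device appears exactly once in random order (the seed is arbitrary).

```python
import random

def block_randomization(n_patients, devices=("A", "B", "C", "D"), seed=42):
    """Blocked randomization with block size equal to the number of devices:
    each block is a random permutation of the device labels."""
    rng = random.Random(seed)
    plan = []
    while len(plan) < n_patients:
        block = list(devices)
        rng.shuffle(block)       # one random permutation per block of four
        plan.extend(block)
    return plan[:n_patients]

plan = block_randomization(72)   # 18 blocks of 4; each device used 18 times
```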

Results

Participants

Enrollment lasted from November 2017 to May 2018. The flow of participants is shown in Fig. 2. Seventy-five patients were consented. One participant was withdrawn after consent by the research assistant when the participant realized that she did not actually have a smartphone (part of the inclusion criteria) or know how to use a tablet; she had thought her flip phone was a smartphone, which it was not. Two subjects who initially consented declined to participate when the research assistant returned with their assigned tablet, before the app was turned on. One mentioned time constraints; the other followed the input of his accompanying mother. Thus, seventy-two patients were analyzed: twenty-two were male, fifty female. Further baseline data are shown in Table 1.

Fig. 2.

Fig. 2

Participant Flow Chart. * One participant was withdrawn after consent by the research assistant when the participant realized that she did not actually have a smartphone (part of the inclusion criteria) or know how to use a tablet. ** Two participants withdrew after consent, before listening.

Table 1.

Baseline Characteristics

SHR App BA App Total

Number 37 35 72
Age (mean, range) 46.6 (18–78) 46.0 (18–69) 46.3 (18–78)
Gender
 Male 8 (22%) 14 (40%) 22 (31%)
 Female 29 (78%) 21 (60%) 50 (69%)
Racial Category
 Asian 6 (16%) 1 (3%) 7 (10%)
 Black 0 2 (6%) 2 (3%)
 White 31 (84%) 30 (86%) 61 (85%)
 More than one race 0 1 (3%) 1 (1%)
 Not provided 0 1 (3%) 1 (1%)
Ethnic Category
 Not Hispanic 37 (100%) 34 (97%) 71 (99%)
 Hispanic 0 0 0
 Not given 0 1 (3%) 1 (1%)
Procedure
 Initial evaluation 8 (22%) 2 (6%) 10 (14%)
 Follow-up evaluation 20 (54%) 22 (63%) 42 (58%)
 Appliance insertion 6 (16%) 8 (23%) 14 (19%)
 Trigger point injection 2 (5%) 1 (3%) 3 (4%)
 Botox 1 (3%) 2 (6%) 3 (4%)
Pain (mean, range) 3.4 (0–9) 2.8 (0–9) 3.1 (0–9)
Anxiety (mean, range) 2.6 (0–7) 2.5 (0–9) 2.6 (0–9)
Waiting Time (mean, range) 16 (6–33) min 16 (11–40) min 16 (6–40) min

Primary Feasibility Parameters

The primary feasibility parameter to obtain complete on-site data sets from at least 90% of patients enrolled was met: 72 of 75 eligible patients consented (96%), and all 72 patients who did not withdraw and were not withdrawn delivered on-site pain, anxiety, and questionnaire data.

There were no adverse effects observed or reported.

Pain and Anxiety

Pain in the SHR app group (iOS and Android combined) decreased significantly from the beginning to the end of the waiting room time by a mean of −0.76 (95% CI −1.14 to −0.37; p<0.001), but did not change significantly in the BA group with a mean −0.25 (95% CI −0.56 to +0.05; p= 0.141). This difference in the change of pain between the combined groups was significant (p=0.038). Subgroup analyses by device type also showed a significant reduction in pain for the active SHR apps both in the iOS and Android configurations, and lack thereof for the BA apps both in the iOS and Android configurations (Table 2 and Fig. 3).

Table 2.

Change in anxiety and pain from the baseline at the beginning of the waiting room to the end of the waiting room time.

Tablet A (iPad BA) (n=15) C (Android BA) (n=20) D (iPad App) (n=17) B (Android App) (n=20)

Change in pain
 Mean (95% CI) −0.07 (−0.46 to 0.32) −0.40 (−0.87 to 0.07) −0.88 (−1.45 to −0.31) −0.65 (−1.22 to −0.08)
 STD 0.70 0.99 1.11 1.23
 Range −1.00 to +2.00 −4.00 to 0 −3.00 to +1.00 −4.00 to 0
 Wilcoxon Signed Rank Test (two-tailed) P=1.000 P=0.125 P=0.008 * P=0.031 *

Change in anxiety
 Mean (95% CI) −0.13 (−0.85 to 0.59) −1.05 (−1.74 to −0.36) −0.88 (−1.48 to −0.28) −0.75 (−1.34 to −0.16)
 STD 1.30 1.47 1.17 1.25
 Range −2.00 to +3.00 −5.00 to 0 −4.00 to +1.00 −4.00 to +1.00
 Wilcoxon Signed Rank Test (two-tailed) P=0.656 P=0.002 * P=0.008 * P=0.020 *

CI – Confidence interval of the mean; STD = standard deviation.

* Significant results.

Fig. 3.

Fig. 3

Change (Delta) in anxiety and pain from start of the waiting room time to the end of the waiting room time. Scale 0–10 with 0=no anxiety/no pain at all and 10=worst anxiety/pain possible. P-values based on Wilcoxon Signed Rank tests. ★ Significant difference of the delta pain or anxiety respectively.

Anxiety in the SHR app group (iOS and Android combined) decreased significantly from the beginning to the end of the waiting room time by a mean of −0.81 (95% CI −1.21 to −0.41; p<0.001). Anxiety also decreased in the combined BA group with a mean of −0.66 (95% CI −1.16 to −0.16; p= 0.008), resulting in an overall non-significant difference among the combined groups (p=0.724). Subanalyses demonstrated that the lack of group differences was driven by the Android BA app reducing anxiety, whereas the iOS app did not do so (Table 2 and Fig. 3).

These results leave us with four apps associated with three different clinical performance profiles: the SHR iOS and SHR Android apps, both reducing pain and anxiety; the Android BA app reducing anxiety, but not pain; and the iOS BA app reducing neither and thus serving as a true control app.

App Use Patterns and Listening Times

The tablets had been set up to electronically send csv files of app usage every time the system returned to the app’s home screen. The iOS tablets were more reliable in doing so, transmitting 15 of 17 (88%) sessions for the SHR app and 8 of 15 (53%) sessions for the BA app. The Android SHR app transmitted only 3 of 20 (15%) sessions and the Android BA app 4 of 20 (20%) sessions.

For the active SHR apps, listening times ranged from 313 to 764 sec (mean, 532 sec) for the iOS SHR app and from 419 to 876 sec (mean, 582 sec) for the Android app, meeting the secondary outcome criterion of listening times over 5 min (300 sec) in the samples. Listening times for the BA apps were 507–1,500 sec (mean, 615 sec) for the iOS app and 263–650 sec (mean, 425 sec) for the Android app.

For the iOS SHR app, selections of topics were distributed relatively evenly among the possible themes (next to the yellow buttons with blue centers in Figure 1a), with four patients choosing “Relaxation,” four “Peace,” three “Comfort,” and two “Confidence.” In all three samples of the SHR app from the Android tablets, patients played “Confidence.” With the iOS BA app, themes were again more evenly distributed, with two patients choosing to start with the top choice “Interior” on the iOS (Fig. 1b), two with “Exterior A,” one with “Exterior B,” and three with “Exterior C.” BA Android users favored “Exterior A” (n=3), which was the top choice on their tablets, and they stayed with it; one user started with “Exterior B” and then cycled through the three other themes.

When exploring the discrepancy in selection choices, it turned out that defaults of what the prior patient had selected remained active on the Android tablets, unless the device was completely powered down (not just turned off) at the end of the patient’s visit. This issue was not operative with the iOS tablets.

Discussion

This study demonstrated that app use for immediate pain and anxiety relief is feasible, as tested in the medical/dental waiting room, and that effectiveness depends on app content and design. Both the active iOS and Android SHR apps significantly reduced pain and anxiety, the Android BA app reduced anxiety but not pain, and the iOS BA app reduced neither. Thus the iOS BA app came the closest to a placebo control as one might wish for a clinical trial.

Patients presented with a mean pain rating of 3.1. While a mean reduction of −0.76 on a 0–10 scale in the SHR group may appear small, it is comparable to the most potent drugs. A systematic review and meta-analysis of opioid use for chronic non-cancer pain showed only a −0.69 cm (95% CI −0.82 to −0.56) difference on a 10-cm visual analog scale for opioids as compared to placebo (Busse et al., 2018).

The mean anxiety reduction of −0.81 from a baseline of 2.6 on a scale of 0–10 represents an improvement at a critical range. Prior research showed that, for example, patients who entered their surgical/medical procedure room with mean anxiety levels of 3.6 on a scale of 0–10, as compared to those with anxiety levels around 1.9, had significantly worse pain and anxiety during their procedure, needed more opioid analgesics and sedatives, and had longer procedures (Schupp et al, 2005). Such factors directly affect procedure safety and throughput.

The feasibility of anxiety reduction by smartphone was also reported in a meta-analysis of 11 randomized trials showing a significant, but small effect in patients with chronic conditions (Firth et al., 2017). The interventions tested in these trials were mainly focused on mindfulness with lengthier protocols as compared to the short listening times in this trial. A game-based app about the dental visit required children to watch 15 min twice a day for 2 weeks (Meshki, Basir, Alidadi, Behbudi, & Rakhshan, 2018). In contrast to the above approaches, the current app offers a cost-effective advantage with an average listening time of 8.3 min while patients are in the waiting room without prior preparation.

The fact that the iPad BA app did not significantly reduce anxiety makes it unlikely that just being in the waiting room may have decreased anxiety. In medical settings, anxiety also tends to increase rather than decrease over time (Lang et al, 2014). The ability of the Android BA app to reduce anxiety while failing to affect pain mirrors the experience of providing patients structured attention during medical procedures (Lang et al., 2006). A reduction in anxiety, but not pain, was also observed with use of a music app in which patients awaiting coronary angiography could choose a standardized musical sequence of adjustable length in a preferred style of music (e.g., classic rock or folk music) (Guetin et al., 2016).

The findings suggest that verbal content of a self-hypnotic and relaxing nature that reduces pain and anxiety during live patient-provider interactions (Lang et al., 2000; Lang et al., 2006; Lang et al., 2008) can also effectively reduce pain and anxiety when delivered via an app. The findings further suggest that construction of a placebo control app can be fraught with pitfalls. The nature of the device turned out to be a major factor, at least for the control BA app. An initial concern that patients would not listen to the control apps proved unfounded. Distraction by background audio, even with extended listening times, does not necessarily result in changes in pain or anxiety, as the experience with the BA iOS app showed.

The findings demonstrate that the operating system can matter for audio apps: the same content does not necessarily produce the same outcomes. Just because a sound is soothing on one system does not mean it will be so when delivered on another, and untested assumptions of equivalence may lead to faulty acceptance of efficacy. Conversely, one cannot assume background audio to be a placebo in a clinical trial unless it has been tested. Reliable pain and anxiety reduction requires targeted and tested suggestions that are immune to system variability. This can likely be best achieved, as in this trial, by having each selection on the menu act as a stand-alone self-hypnotic relaxation.

Google’s Android and Apple’s iOS systems are the predominant operating systems for mobile devices, such as smartphones and tablets. Android is Linux-based and more PC-like than iOS, which may have advantages for developers, but it may be seen as less user-friendly than iOS (Diffen, 2020), and this can affect use in research. For example, the need for the user or research assistant to power the Android app down completely to transmit and store usage data turned out to be more of an impediment and potential confounder than anticipated. Even though all tablets had stickers attached urging complete shutdown, one can see how this might not happen in a busy clinical setting. Researchers may keep that in mind when selecting device types. The Android default of retaining the prior patient’s preferences is of even more concern. We speculate that this default setting in the Android tablets favored the selection of “Confidence” on the Android SHR app. The same reason, and/or possibly that “Exterior A” was the top choice on the Android BA app, may have contributed to the predominant selection of that theme with the Android BA app, but not the iOS app, on which “Interior” was the top choice (Fig. 1 a+b). “Exterior A” contained nature sounds that may have elicited associations with pleasant or familiar scenes, as compared to the other scenarios, and thereby may have exerted a self-hypnotic effect. This also highlights the difficulty in designing control apps: in the exterior scenarios, we attempted but were not entirely able to avoid environmental sounds such as waves, birds, wind, cars, etc., that may have stirred the memory of the listeners and created internal imagery that may have reduced anxiety. In contradistinction, the interior sound was the most neutral and most likely unfamiliar to the listeners (a German front-loading washing machine).

Familiarity with a tablet device, or lack thereof, may also have played a role, and results might have differed had patients been allowed to download the apps to their own devices. That step, however, would have raised privacy concerns, because we might have been able to identify the users’ smartphone addresses, and would also have run the risk that commercial downloads made outside the study during the same period entered the analysis. We also wanted to prevent patients from using the devices to surf the web; the tablets were therefore programmed so that users could access only the app and no other internet content.

Another issue may derive from the naming of the options. In an effort to enhance possible positive (therapeutic) placebo effects, some investigators might give background audio options the same names used in their active apps; in our case, these would have been suggestions of relaxation, confidence, comfort, or peace. How much such naming affects outcomes requires further research. We were concerned that such naming might annoy listeners, who would quickly discover that the recordings were rather nonspecific. Investigators who choose suggestive names for the control conditions in their trials, however, might want to add a no-treatment control group, which would quickly enlarge sample size and trial complexity. In that regard, the availability of an iPad control app that has been tested and shown not to alter pain and anxiety can be of considerable value in future clinical trials assessing auditory content.

Limitations

This study was conducted at a university-based facial pain center. Hence, the patient population may not be representative of the general population outside the waiting room setting, or even of a general dentistry or orofacial pain management office. The preponderance of female participants may also have affected the outcome. A further limitation was the lack of a no-app, standard-care control group, which would have allowed a more comprehensive assessment of effect size. Finally, technical issues, most pronounced with the Android app, interfered with the full offering of the BA control app. On the other hand, this information may help other app developers avoid our pitfalls.

Conclusion

An app with self-hypnotic and relaxing content can successfully reduce pain and anxiety in the moment, and its use is feasible in the medical/dental waiting room. The benefit can be achieved without prior preparation and with brief listening times, making the app suitable for a busy practice. Reduction of pain cannot be achieved solely through a reduction of anxiety, as seen with the Android BA app, but appears to require specific hypnotic guidance as provided through the SHR app. The ability of background audio to affect anxiety is variable and may depend on which associations it evokes for the listener, making the design of audio placebo apps challenging. The nature of the device used for app delivery, the content, and the sequence of options in a theme list may all affect outcomes. The unexpected design pitfall in the Android BA app now provides the unique situation of having apps with three different clinical profiles for pain and anxiety management. This can support subsequent randomized clinical trials in which inclusion of a control app is desirable. It also offers the opportunity to test biomarkers concurrently with app use to identify what kind of app content contributes to which outcomes.

Acknowledgements

The trial is registered at ClinicalTrials.gov, number NCT03328208, where the protocol is also available for download. This study was funded by Small Business Innovation Research (SBIR) grant 1 R43 AT009517 from the National Center for Complementary and Integrative Health (NCCIH) at the National Institutes of Health. The grant was awarded to Hypnalgesics, LLC, of which the author EVL is the CEO and also the principal investigator of the grant. The Tufts University School of Dental Medicine site received funding through a subcontract tied to the overall grant. Execution of the clinical trial, data collection, and analysis were under the control of the site investigators. The small business designed the SHR and BA apps, supplied the loaded tablets to the site, and retrieved the app usage data.

The content of the manuscript is solely the responsibility of the authors and does not necessarily represent the official views of NCCIH.

We are deeply grateful to Prof. Noshir Mehta, DMD, for his help in planning and executing this trial, and to Alexis Vasciannie for her assistance with data quality assurance. We also greatly appreciate the expert assistance of Tamar Roomian, MS, MPH, in the statistical planning, and of Nicolette Demetria Kafasis and Amada Gozzi.

The work was supported by NIH/NCCIH 1 R43 AT009517

Footnotes

CLINICAL TRIALS REGISTRATION: NCT03328208

Declaration of Interest Statement

Author EVL is the owner of the company that designed the app; authors PS, TC, and GC were/are employees of the company. The other authors have no conflicts of interest.

Contributor Information

Aroni Donavon-Khosrow, Department of Diagnostic Sciences, Tufts University School of Dental Medicine.

Matthew D. Finkelman, Department of Public Health and Community Service at the Tufts University School of Dental Medicine.

Ronald J. Kulich, Department of Diagnostic Sciences, Tufts University School of Dental Medicine and Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital/Harvard Medical School.

References

  1. Ajam AA, Nguyen XV, Kelly RA, Ladapo JA, & Lang EV (2017). Effects of Interpersonal Skills Training on MRI Operations in a Saturated Market: A Randomized Trial. J Am Coll Radiol, 14(7), 963–970. doi: 10.1016/j.jacr.2017.03.015
  2. Benotsch E, Lutgendorf SK, Watson DW, Fick LJ, & Lang EV (2000). Rapid anxiety assessment in medical patients: evidence for the validity of verbal anxiety ratings. Ann Behav Med, 22, 199–203.
  3. Blodt S, Pach D, Eisenhart-Rothe SV, Lotz F, Roll S, Icke K, & Witt CM (2018). Effectiveness of app-based self-acupressure for women with menstrual pain compared to usual care: a randomized pragmatic trial. Am J Obstet Gynecol, 218(2), 227.e1–227.e9. doi: 10.1016/j.ajog.2017.11.570
  4. Busse JW, Wang L, Kamaleldin M, Craigie S, Riva JJ, Montoya L, . . . Guyatt GH (2018). Opioids for Chronic Noncancer Pain: A Systematic Review and Meta-analysis. JAMA, 320(23), 2448–2460. doi: 10.1001/jama.2018.18472
  5. Cyna MA, & Lang EV (2011). How words hurt. In Cyna M, Andrew MI, Tan SGM, & Smith F (Eds.), Handbook of communication in anaesthesia and critical care (pp. 30–37). Oxford; New York: Oxford University Press.
  6. Diffen.com (2020). Android vs iOS. Diffen LLC. Retrieved May 22, 2020, from https://www.diffen.com/difference/Android_vs_iOS
  7. Firth J, Torous J, Nicholas J, Carney R, Rosenbaum S, & Sarris J (2017). Can smartphone mental health interventions reduce symptoms of anxiety? A meta-analysis of randomized controlled trials. J Affect Disord, 218, 15–22. doi: 10.1016/j.jad.2017.04.046
  8. Goldet G, & Howick J (2013). Understanding GRADE: an introduction. J Evid Based Med, 6(1), 50–54. doi: 10.1111/jebm.12018
  9. Grand View Research. (2017). mHealth Apps Market Size Worth $111.8 Billion By 2025 | CAGR: 44.2%. Retrieved from https://www.grandviewresearch.com/press-release/global-mhealth-app-market
  10. Gregory J, & McGowan L (2016). An examination of the prevalence of acute pain for hospitalised adult patients: a systematic review. J Clin Nurs, 25(5–6), 583–598. doi: 10.1111/jocn.13094
  11. Guetin S, Brun L, Deniaud M, Clerc JM, Thayer JF, & Koenig J (2016). Smartphone-based Music Listening to Reduce Pain and Anxiety Before Coronarography: A Focus on Sex Differences. Altern Ther Health Med, 22(4), 60–63.
  12. Heyman RE, Slep AM, White-Ajmani M, Bulling L, Zickgraf HF, Franklin ME, & Wolff MS (2016). Dental Fear and Avoidance in Treatment Seekers at a Large, Urban Dental Clinic. Oral Health Prev Dent, 14(4), 315–320. doi: 10.3290/j.ohpd.a36468
  13. Hill KB, Chadwick B, Freeman R, O’Sullivan I, & Murray JJ (2013). Adult Dental Health Survey 2009: relationships between dental attendance patterns, oral health behaviour and the current barriers to dental care. Br Dent J, 214(1), 25–32. doi: 10.1038/sj.bdj.2012.1176
  14. Jamison RN, Mei A, & Ross EL (2018). Longitudinal trial of a smartphone pain application for chronic pain patients: Predictors of compliance and satisfaction. J Telemed Telecare, 24(2), 93–100.
  15. Jibb LA, Stevens BJ, Nathan PC, Seto E, Cafazzo JA, Johnston DL, . . . Stinson JN (2017). Implementation and preliminary effectiveness of a real-time pain management smartphone app for adolescents with cancer: A multicenter pilot clinical study. Pediatr Blood Cancer, 64(10). doi: 10.1002/pbc.26554
  16. Johannes CB, Le TK, Zhou X, Johnston JA, & Dworkin RH (2010). The prevalence of chronic pain in United States adults: results of an Internet-based survey. J Pain, 11(11), 1230–1239. doi: 10.1016/j.jpain.2010.07.002
  17. Klepstad P, Kaasa S, Cherny N, Hanks G, & de Conno F (2005). Pain and pain treatments in European palliative care units. A cross sectional survey from the European Association for Palliative Care Research Network. Palliat Med, 19(6), 477–484. doi: 10.1191/0269216305pm1054oa
  18. Ladapo JA, Spritzer CE, Nguyen XV, Pool J, & Lang E (2018). Economics of MRI Operations After Implementation of Interpersonal Skills Training. J Am Coll Radiol, 15(12), 1775–1783. doi: 10.1016/j.jacr.2018.01.017
  19. Lalloo C, Shah U, Birnie KA, Davies-Chalmers C, Rivera J, Stinson J, & Campbell F (2017). Commercially Available Smartphone Apps to Support Postoperative Pain Self-Management: Scoping Review. JMIR Mhealth Uhealth, 5(10), e162. doi: 10.2196/mhealth.8230
  20. Lang EV, Benotsch EG, Fick LJ, Lutgendorf S, Berbaum ML, Berbaum KS, . . . Spiegel D (2000). Adjunctive non-pharmacological analgesia for invasive medical procedures: a randomised trial. Lancet, 355(9214), 1486–1490. doi: 10.1016/s0140-6736(00)02162-0
  21. Lang EV, Berbaum KS, Faintuch S, Hatsiopoulou O, Halsey N, Li X, . . . Baum J (2006). Adjunctive self-hypnotic relaxation for outpatient medical procedures: a prospective randomized trial with women undergoing large core breast biopsy. Pain, 126(1–3), 155–164. doi: 10.1016/j.pain.2006.06.035
  22. Lang EV, Berbaum KS, Pauker SG, Faintuch S, Salazar GM, Lutgendorf S, . . . Spiegel D (2008). Beneficial effects of hypnosis and adverse effects of empathic attention during percutaneous tumor treatment: when being nice does not suffice. J Vasc Interv Radiol, 19(6), 897–905. doi: 10.1016/j.jvir.2008.01.027
  23. Lang EV, Hatsiopoulou O, Koch T, Berbaum K, Lutgendorf S, Kettenmann E, . . . Kaptchuk TJ (2005). Can words hurt? Patient-provider interactions during invasive procedures. Pain, 114(1–2), 303–309. doi: 10.1016/j.pain.2004.12.028
  24. Lang EV, & Laser E (2009). Patient sedation without medication: Rapid rapport and quick hypnotic techniques. A resource guide for doctors, nurses, and technologists. Raleigh, NC: Lulu.
  25. Lang EV, Tan G, Amihai I, & Jensen MP (2014). Analyzing acute procedural pain in clinical trials. Pain, 155(7), 1365–1373. doi: 10.1016/j.pain.2014.04.013
  26. Lee M, Lee SH, Kim T, Yoo HJ, Kim SH, Suh DW, . . . Yoon B (2017). Feasibility of a Smartphone-Based Exercise Program for Office Workers With Neck Pain: An Individualized Approach Using a Self-Classification Algorithm. Arch Phys Med Rehabil, 98(1), 80–87. doi: 10.1016/j.apmr.2016.09.002
  27. Mandle CL, Domar AD, Harrington DP, Leserman J, Bozadjian EM, Friedman R, & Benson H (1990). Relaxation response in femoral angiography. Radiology, 174, 737–739.
  28. Meshki R, Basir L, Alidadi F, Behbudi A, & Rakhshan V (2018). Effects of Pretreatment Exposure to Dental Practice Using a Smartphone Dental Simulation Game on Children’s Pain and Anxiety: A Preliminary Double-Blind Randomized Clinical Trial. J Dent (Tehran), 15(4), 250–258.
  29. Murphy D, McDonald A, Power A, Unwin A, & MacSullivan R (1988). Measurement of pain: A comparison of the visual analogue with a nonvisual analogue scale. J Clin Pain, 3, 197–199.
  30. Norbash A, Yucel K, Yuh W, Doros G, Ajam A, Lang EV, . . . Mayr N (2016). Effect of team training on improving MRI study completion rates and no show rates. J Magn Reson Imaging, 44(4), 1040–1047.
  31. Oldenmenger WH, Baan MAG, & van der Rijt CCD (2018). Development and feasibility of a web application to monitor patients’ cancer-related pain. Support Care Cancer, 26(2), 635–642. doi: 10.1007/s00520-017-3877-3
  32. Paice JA, & Cohen FL (1997). Validity of a verbally administered numeric rating scale to measure cancer pain intensity. Cancer Nurs, 20, 88–93.
  33. Raj SX, Brunelli C, Klepstad P, & Kaasa S (2017). COMBAT study - Computer based assessment and treatment - A clinical trial evaluating impact of a computerized clinical decision support tool on pain in cancer patients. Scand J Pain, 17, 99–106. doi: 10.1016/j.sjpain.2017.07.016
  34. Research2Guidance. (2015). mHealth App Market Sizing 2015–2020. Retrieved from http://research2guidance.com/r2g-shop/
  35. Reynoldson C, Stones C, Allsop M, Gardner P, Bennett MI, Closs SJ, . . . Knapp P (2014). Assessing the quality and usability of smartphone apps for pain self-management. Pain Med, 15(6), 898–909. doi: 10.1111/pme.12327
  36. Salazar A, de Sola H, Failde I, & Moral-Munoz JA (2018). Measuring the Quality of Mobile Apps for the Management of Pain: Systematic Search and Evaluation Using the Mobile App Rating Scale. JMIR Mhealth Uhealth, 6(10), e10718. doi: 10.2196/10718
  37. Schatz J, Schlenz AM, McClellan CB, Puffer ES, Hardy S, Pfeiffer M, & Roberts CW (2015). Changes in coping, pain, and activity after cognitive-behavioral training: a randomized clinical trial for pediatric sickle cell disease using smartphones. Clin J Pain, 31(6), 536–547. doi: 10.1097/ajp.0000000000000183
  38. Schulz KF, Altman DG, & Moher D (2010). CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMC Med, 8, 18. doi: 10.1186/1741-7015-8-18
  39. Schupp CJ, Berbaum K, Berbaum M, & Lang EV (2005). Pain and anxiety during interventional radiologic procedures: effect of patients’ state anxiety at baseline and modulation by nonpharmacologic analgesia adjuncts. J Vasc Interv Radiol, 16(12), 1585–1592. doi: 10.1097/01.rvi.0000185418.82287.72
  40. Skrepnik N, Spitzer A, Altman R, Hoekstra J, Stewart J, & Toselli R (2017). Assessing the Impact of a Novel Smartphone Application Compared With Standard Follow-Up on Mobility of Patients With Knee Osteoarthritis Following Treatment With Hylan G-F 20: A Randomized Controlled Trial. JMIR Mhealth Uhealth, 5(5), e64. doi: 10.2196/mhealth.7179
  41. Smart JF (2015). BDD in action: Behavior-Driven Development for the whole software lifecycle. Shelter Island, NY: Manning Publications.
  42. Sucala M, Schnur JB, Glazier K, Miller SJ, Green JP, & Montgomery GH (2013). Hypnosis--there’s an app for that: a systematic review of hypnosis apps. Int J Clin Exp Hypn, 61(4), 463–474. doi: 10.1080/00207144.2013.810482
  43. Sun Y, Jiang F, Gu JJ, Wang YK, Hua H, Li J, . . . Ding G (2017). Development and Testing of an Intelligent Pain Management System (IPMS) on Mobile Phones Through a Randomized Trial Among Chinese Cancer Patients: A New Approach in Cancer Pain Management. JMIR Mhealth Uhealth, 5(7), e108. doi: 10.2196/mhealth.7178
  44. Thurnheer SE, Gravestock I, Pichierri G, Steurer J, & Burgstaller JM (2018). Benefits of Mobile Apps in Pain Management: Systematic Review. JMIR Mhealth Uhealth, 6(10), e11231. doi: 10.2196/11231
  45. Wang C, Pan R, Wan X, Tan Y, Xu L, Ho CS, & Ho RC (2020). Immediate Psychological Responses and Associated Factors during the Initial Stage of the 2019 Coronavirus Disease (COVID-19) Epidemic among the General Population in China. Int J Environ Res Public Health, 17(5). doi: 10.3390/ijerph17051729
  46. Woo AK (2010). Depression and Anxiety in Pain. Rev Pain, 4(1), 8–12. doi: 10.1177/204946371000400103
  47. Zhao P, Yoo I, Lancey R, & Varghese E (2019). Mobile applications for pain management: an app analysis for clinical usage. BMC Med Inform Decis Mak, 19(1), 106. doi: 10.1186/s12911-019-0827-7