Published in final edited form as: Behav Ther. 2020 Apr 30;52(2):365–378. doi: 10.1016/j.beth.2020.04.013

HabitWorks: Development of a CBM-I smartphone app to augment and extend acute treatment

Courtney Beard 1, Ramya Ramadurai 2, R Kathryn McHugh 3, JP Pollak 4, Thröstur Björgvinsson 5

Abstract

The month following discharge from acute psychiatric care is associated with increased risk of relapse, rehospitalization, and suicide. Effective and accessible interventions tailored to this critical transition are urgently needed. Cognitive bias modification for interpretation (CBM-I) is a low-intensity intervention that targets interpretation bias, a transdiagnostic process implicated in the development and maintenance of emotional disorders. We describe the development of a CBM-I smartphone app called HabitWorks as an augmentation to acute care that extends through the high-risk month post-discharge. We first obtained input from various stakeholders including adults who had completed partial hospital treatment (Patient Advisory Board), providers, CBM experts, and clinic program directors. We then iteratively tested versions of the app, incorporating feedback over three waves of users. Participants were recruited from a partial hospital program and completed CBM-I sessions via the HabitWorks app while attending the hospital program and during the month post-discharge. In this Stage 1A treatment development work, we obtained preliminary data regarding feasibility and acceptability, adherence during acute care, and target engagement. Pilot data met our a priori benchmarks. While adherence during acute treatment was good, it decreased during the post-acute period. Qualitative feedback was generally positive and revealed themes of usability and helpfulness of app features. Participants varied in their perception of skill generalization to real life situations. The feasibility and acceptability data suggest that a controlled trial of HabitWorks is warranted.

Keywords: cognitive bias, interpretation bias, smartphone, treatment, mHealth


There is an urgent need for interventions that facilitate the transition from acute psychiatric care to outpatient treatment. People are at highest risk of relapse, rehospitalization, and suicide in the weeks immediately following discharge (Vigod, 2013; Woo, 2006). However, very few post-acute interventions are available to patients, and those that exist require intensive resources (e.g., Miller et al., 2016). Smartphone technology may provide a low-cost and scalable solution to help bridge this critical transition. Smartphone apps offer on-demand support and can be used to promote therapeutic skill use outside the therapy session. In the United States, smartphone ownership is high (77%; Pew, 2018), as is interest in using smartphones for mental health (56% to 84% of people with psychiatric disorders; Beard, Silverman, Forgeard, Wilmer, Torous, & Björgvinsson, 2019; Di Matteo et al., 2018; Torous et al., 2014; Torous & Friedman, 2014). The present study capitalized on this technology to develop a skills-based app to augment acute psychiatric care and ease the transition following discharge. We describe the rationale for developing this tool and the iterative development process.

Rationale for HabitWorks

Decades of empirical studies demonstrate that interpretation bias, the tendency to resolve ambiguity in a negative manner (or to lack a positive interpretation bias), plays an important role in maintaining emotional disorders (see Hirsch et al., 2016 for a review). Psychological treatments have long targeted this cognitive bias, most notably Cognitive Behavioral Therapy (CBT). CBT-based approaches typically focus on the identification of biased thinking (e.g., overestimating a negative outcome) and using verbal strategies to modify this thinking, such as examining the evidence for and against a particular thought. CBT techniques commonly require elaborative processing of a single ambiguous situation over an entire therapy session.

In contrast, cognitive bias modification for interpretation (CBM-I) targets interpretation bias via training tasks that provide quick, repeated practice (e.g., 100 situations per 10-minute session) making benign interpretations. Numerous studies have demonstrated that these brief, simple tasks effectively shift interpretation bias, and that doing so may reduce negative mood states (see meta-analyses Jones & Sharpe, 2017; Menne-Lothmann et al., 2014). For example, Menne-Lothmann and colleagues’ meta-analysis found that CBM-I significantly increased benign interpretations (d = 0.43), whereas various comparison conditions (negative interpretation induction, neutral comparison training, no training) did not. Additionally, change in interpretation bias was significantly correlated with improvements in negative mood state (Menne-Lothmann et al., 2014). Moderator analyses suggest that CBM-I effects may increase with more sessions. In contrast, meta-analyses that have combined CBM-I with other types of CBM (e.g., attention bias modification) have not supported CBM’s efficacy (Cristea, Kok, & Cuijpers, 2015). These mixed results may be due to the combination of heterogeneous interventions targeting different mechanisms. Indeed, the Cristea et al. (2015) study found that CBM-I has larger effects on anxiety and depression compared to attention bias modification.

Although most studies have examined CBM-I as a stand-alone intervention, preliminary evidence suggests that CBM-I may be a useful adjunct to CBT (Amir et al., 2015; Beard et al., 2019; Salemink, Wolters, & de Haan, 2015; Schmidt, Norr, Allan, Raines, & Capron, 2017; Williams, Blackwell, Mackenzie, Holmes, & Andrews, 2013; Williams, O’Moore, Blackwell, Smith, Holmes, & Andrews, 2015). Building on these studies, we adapted a lab-based CBM-I intervention to augment treatment as usual at a partial hospital program (Beard et al., 2019). The CBM-I task relied on the Word-Sentence Association Paradigm (WSAP) to reinforce an adaptive interpretation style. Like most prior studies of interpretation bias, the CBM-I task presented ambiguous situations paired with negative, neutral, or positive interpretations. Specifically, this task presented a word representing a negative (“criticize”), neutral (“appointment”), or positive (“praise”) interpretation of an ambiguous sentence that followed (“Your boss wants to meet with you”). People then pressed a button to indicate whether or not they thought the word was related to the sentence. To induce the positive bias observed in people without anxiety or depression, the task presented corrective feedback (“Correct!”) if participants endorsed benign (neutral or positive) interpretations or rejected negative ones (see Supplemental Figure). Participants completed 100 trials of CBM-I training daily via a computer while attending the partial hospital. Augmenting psychiatric treatment with CBM-I was highly feasible due to its brevity (< 10-minute sessions), computerized delivery, and the minimal staff time required. Participants stated that they experienced CBM-I as simple and motivating. Several themes emerged from participants about potential mechanisms of action, including increased awareness of cognitive bias and increased cognitive flexibility. Additionally, several participants suggested that we create an app version so that more patients could access the intervention at one time and so that they could continue using the app after discharge. This suggestion is consistent with guidelines for online treatments (Hitchcock et al., 2016) that recommend a standardized, concentrated intervention dose followed by spaced-out booster sessions.

There are hundreds of smartphone apps that purportedly offer CBT. However, a review found that 90% of these apps do not follow CBT principles, lack privacy policies, and lack efficacy data (Huguet et al., 2016). Additionally, to our knowledge, there are currently no smartphone apps that deliver CBM-I using the WSAP. Thus, we developed our own CBM-I smartphone app. Based on our previous pilot study of a computerized version of CBM-I using the WSAP (Beard et al., 2019), we identified several areas in need of further development to create a CBM-I app for use in the transition from acute psychiatric care to outpatient care. First, hospital populations are characterized by comorbidity and clinical heterogeneity. Thus, the app should allow for personalization to create personally relevant CBM-I situations. Second, the app should be easy to self-administer so that patients can use it independently during acute care and following discharge. Finally, the app should include features that facilitate continued engagement during the post-acute period.

Current study

Consistent with the Stage Model of Behavioral Therapies Research (Rounsaville, Carroll, & Onken, 2001; Onken et al., 2014), we used an iterative process to develop a CBM-I app called “HabitWorks.” The current paper describes Stage 1A research, which includes intervention generation and refinement. We developed the intervention incorporating stakeholder feedback and adapted existing CBM-I protocols to the acute psychiatric treatment population. We then modified the intervention over three waves of users and collected preliminary feasibility data from these participants. In this Stage 1A work, we also selected and piloted the measures which will be used in a subsequent Stage 1B feasibility and pilot trial in which we will compare HabitWorks to a symptom monitoring active comparison condition.

Feasibility and acceptability testing and the iterative development of an mHealth app are critical because very few apps developed in research settings are ever tested in clinical populations or sustained after the research is completed (Firth et al., 2017). Thus, the specific aims of this treatment development project were to: 1) Develop a personalized, transdiagnostic, smartphone-delivered intervention to augment acute psychiatric treatment and refine it based on stakeholder feedback and 2) Conduct a Stage 1A open trial to (a) confirm target engagement (improvement in interpretation bias) and (b) compare feasibility and acceptability of hospital and home delivery outcomes to a priori benchmarks. We expected that most participants’ interpretation bias scores would shift into a healthy range, that HabitWorks would be acceptable to partial hospital patients, and that HabitWorks would be feasible to deliver during acute treatment. The results will be used to inform the final intervention and protocol for the subsequent Stage 1B feasibility and pilot randomized controlled trial (RCT).

Method

Iterative Development with Advisory Board

We formed an advisory board of several groups of stakeholders to guide the development of the app and methods for linking the app to acute care. The first group included the partial hospital’s Patient Advisory Board. This ongoing group is chaired by the first author and meets quarterly. During the app development phase, members included five people who recently received treatment in the partial hospital and five clinic staff members. During initial meetings, we sought patient feedback about instructional video scripts, the personalization process, how to visualize performance over time, ideas for enhancing retention, and preferences regarding the logistics of app use in daily life (e.g., scheduling sessions and receiving reminders). This feedback guided the development of a functional prototype. Subsequently, advisory board members downloaded the prototype on their personal smartphones and provided feedback during one group testing session. Members were asked to continue using the app for the next month and to provide any additional feedback via email.

The second group of advisory board members included three experts in CBM-I and mHealth. These experts reviewed a written summary of the planned app flow and features and downloaded a prototype of the app on their personal smartphones. Via email correspondence, we solicited general feedback and suggestions, as well as responses to specific prompts (technical bugs, any non-intuitive features, the accuracy threshold for progressing to the next level, how to enhance engagement). The first author compiled all email responses in a written summary and incorporated suggestions into subsequent iterations of the app.

Finally, to inform future implementation methods, a third group of stakeholders included four program directors of other acute care clinics (drug and alcohol partial hospital, general inpatient unit, residential obsessive compulsive disorder unit, intensive outpatient adolescent anxiety program). The first author met individually, in person, with each director to describe the app and show a prototype. These semi-structured interviews specifically prompted feedback about considerations for their specific population (e.g., substance use) and potential implementation methods for their treatment setting. The first author took notes during these meetings and created a written summary immediately following each interview.

Iterative app versions

The initial version of the app was created for the iOS mobile phone platform via Apple’s ResearchKit and distributed to testers via TestFlight. We first created a prototype with limited functionality. This initial version was used to demonstrate the core app features in meetings with Advisory Board Members. We then developed a full prototype with data storage on the user’s phone (not linked to a research database). We piloted this version again with the Patient Advisory Board and the first wave of open trial participants (January to April of 2019). We made refinements based on user feedback and piloted again in the second wave of participants (April to early May 2019). Finally, we created a fully functional version with data storage on our research database and tested this in the third wave of participants (May 2019 to September 2019). In the next section, we describe the final version of the app.

HabitWorks app description

App flow and features.

Users first set a passcode and scanned a unique URL linking them to the research database. Next, they watched instructional videos, completed a personalization checklist, and completed their first weekly check-in (all described below). The dashboard included the following features: weekly check-in, exercises, bonus mini-round, mood check-in, habit diary, and progress status (see descriptions below). When users were due for a check-in or an exercise, these buttons were highlighted on the dashboard. In settings, users could update their notification reminder times and view the consent form, open source documents, and FAQs. The FAQs included information about the empirical evidence supporting the task in the app, suggestions for dose (three times per week for a month), and common responses to the task.

HabitWorks included two phases - daily and long-term - which corresponded respectively to acute treatment and the transition period from hospital discharge to outpatient treatment. In the ‘daily’ phase, users were prompted in person to complete exercise sessions daily. On their discharge day, they switched the app to ‘long-term’, which prompted the user to schedule days and times to complete exercises. Users were encouraged to schedule three sessions per week separated by at least one day and received notifications on scheduled days.
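
To make the long-term scheduling rule concrete, the sketch below (in Swift, the app’s iOS platform language; function and variable names are illustrative, not drawn from the HabitWorks codebase) checks a proposed weekly schedule against the two constraints described above. Wrap-around adjacency between weeks is ignored for simplicity.

```swift
// Sketch of the long-term scheduling rule: three sessions per week,
// separated by at least one day. Weekday indices run 0 (Sunday) to 6 (Saturday).
func isValidWeeklySchedule(_ weekdays: [Int]) -> Bool {
    guard weekdays.count == 3 else { return false }  // three sessions per week
    let days = weekdays.sorted()
    // "Separated by at least one day" means no two chosen days are adjacent.
    return zip(days, days.dropFirst()).allSatisfy { $0.1 - $0.0 >= 2 }
}

print(isValidWeeklySchedule([1, 3, 5]))  // Mon/Wed/Fri: true
print(isValidWeeklySchedule([1, 2, 5]))  // Mon/Tue/Fri: false (consecutive days)
```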

Data security and privacy.

Patient privacy and data security were crucial. The HabitWorks app collected minimal personal information. We took several steps to prevent others from gaining access to app data. First, the app was protected by a passcode and available biometrics (FaceID or TouchID). Any time the user exited the app, the data were fully encrypted on the device, and the passcode or biometric unlock had to be completed to resume. Second, the app only pushed data to the REDCap database; it did not retrieve data from it. As such, the database was configured to permit access only to the research team, further reducing the risk of leakage. Data were transmitted to the REDCap database using 128-bit encryption.
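
The sketch below illustrates this push-only data flow under stated assumptions: the endpoint URL, token handling, and record field contents are illustrative stand-ins rather than the study’s actual REDCap configuration, and encryption in transit is provided by HTTPS.

```swift
import Foundation

// A minimal sketch of a write-only upload to a REDCap-style endpoint.
// The URL and parameter values here are hypothetical examples.
func pushSessionRecord(recordJSON: String, apiToken: String) {
    var request = URLRequest(url: URL(string: "https://redcap.example.org/api/")!)
    request.httpMethod = "POST"  // push only: the app issues no read requests
    request.setValue("application/x-www-form-urlencoded",
                     forHTTPHeaderField: "Content-Type")

    // REDCap's API takes form-encoded parameters; `data` carries the record JSON.
    var components = URLComponents()
    components.queryItems = [
        URLQueryItem(name: "token", value: apiToken),
        URLQueryItem(name: "content", value: "record"),
        URLQueryItem(name: "format", value: "json"),
        URLQueryItem(name: "data", value: recordJSON),
    ]
    request.httpBody = components.percentEncodedQuery?.data(using: .utf8)

    URLSession.shared.dataTask(with: request) { _, _, error in
        // In a production app, failed uploads would be queued and retried.
        if let error = error { print("Upload failed: \(error)") }
    }.resume()
}
```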

Instructional videos.

Based on treatment rationales developed in prior CBM studies (Beard et al., 2011; Beard et al., 2019), we developed scripts for three instructional videos. We obtained feedback on written scripts from the Patient Advisory Board and made several wording changes to make the rationale and instructions friendlier to a consumer audience (e.g., instead of referring to an interpretation bias, we referred to mental habits and jumping to conclusions). Video 1 introduced the concept of interpretation bias and how it maintains emotional disorders. Video 2 oriented users to what HabitWorks was designed to do based on scientific research and described the first step of personalization. Video 3 presented instructions for completing the CBM-I task. These videos were embedded within the app and appeared in order as users proceeded. They were also available in the FAQ section of the app settings. Please contact the corresponding author to request copies of the video scripts.

Personalization and CBM-I content.

We designed HabitWorks to be a personally relevant, transdiagnostic CBM-I intervention. To this end, we compiled word-sentence pairs used in previous studies using the Word Sentence Association Paradigm (WSAP; see Gonsalves et al., 2019). This initial pool included over 1,200 word-sentence pairs. The PI and two research assistants examined each word-sentence pair and decided whether to keep, modify, or exclude it. The research assistants independently reviewed all 1,200 word-sentence pairs and suggested edits. The PI then reviewed the stimuli and the suggestions, and we discussed any discrepancies. Any modifications were made after 100% consensus was reached. We made modifications primarily to enhance the generalizability of situations to all people, e.g., changing “brother” to “sibling,” “Christmas” to “holiday,” and “outfit” to “clothes.” We excluded pairs that were redundant or confusing. We excluded groups of pairs that had insufficient numbers to create their own domain (e.g., OCD, trauma). We included 800 word-sentence pairs in our final set.

Next, we coded each word-sentence pair on three dimensions: interpretation type (negative, neutral, positive), worry domain (e.g., individual social interaction, hostility, perfectionism, heart racing), and demographic category (e.g., work, kids, partner). A research assistant applied codes to all stimuli. The codes were then checked by the first author, inconsistencies were resolved, and finally the codes were checked a third time by a different research assistant. Any remaining inconsistencies were resolved until we reached 100% agreement.

Users completed brief checklists (see Supplemental Materials) assessing the types of situations they worried about and demographic characteristics. We developed an algorithm to assign word-sentence pairs to each user based on these checklists. A subset of word-sentence pairs was generated by matching the list of each user’s situational and demographic responses to the codes assigned to each word-sentence pair as described above. For example, if the user reported worrying about individual social interactions and finances, and indicated that they have children, a custom subset of word-sentence pairs was generated to include all pairs with combinations of those codes. Each time the user performed the CBM-I task going forward, word-sentence pairs were randomly chosen from the personalized subset.
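
A minimal sketch of this matching step follows. The data structure, code labels, and the exact matching rule (here, a pair qualifies if any of its codes match the user’s responses) are assumptions for illustration; the paper does not specify the algorithm at this level of detail.

```swift
// Each word-sentence pair carries the worry-domain and demographic codes
// assigned during stimulus development (labels are hypothetical examples).
struct CodedPair {
    let word: String
    let sentence: String
    let worryDomains: Set<String>   // e.g., ["social", "finances"]
    let demographics: Set<String>   // e.g., ["kids", "work"]
}

// Select the subset of pairs whose codes overlap the user's checklist responses.
func personalizedPool(from pool: [CodedPair],
                      worries: Set<String>,
                      demographics: Set<String>) -> [CodedPair] {
    pool.filter { pair in
        !pair.worryDomains.isDisjoint(with: worries) ||
        !pair.demographics.isDisjoint(with: demographics)
    }
}

// Each exercise session then draws its trials at random from that subset.
func sessionTrials(from pool: [CodedPair], count: Int) -> [CodedPair] {
    Array(pool.shuffled().prefix(count))
}
```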

CBM-I task.

Similar to the computerized version, we designed the app to deliver a CBM-I intervention based on the WSAP (Beard & Amir, 2008). The WSAP is well-suited to smartphone app delivery as it is simple, requires very little text on screen, and has inherent gamification. A session began with a start page reminding users to imagine themselves in each situation. A WSAP trial began with a fixation cross that appeared for 500ms. Second, a word appeared for 750ms that represented either the “incorrect” negative interpretation (“embarrassing”) or a “correct” benign interpretation (neutral - “chuckle” or positive - “funny”) of the ambiguous sentence to follow. Third, the ambiguous sentence appeared (“People laugh after something you said”). A “Yes” button appeared in the lower left corner of the phone screen, and a “No” button appeared on the lower right corner. Users were instructed to press ‘Yes’ if they thought the word and sentence were related or to press ‘No’ if the word and sentence were not related. Sessions during the acute phase of treatment comprised 100 trials; sessions during the long-term phase comprised 50 trials. Users could also complete a 30-trial brief bonus exercise whenever desired. All sessions, regardless of trial number, comprised 50% negative interpretations and 50% benign interpretations (neutral or positive).
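
For illustration, the sketch below models one training trial and the reinforcement rule described above and in the prior computerized study; the types and names are ours, not the app’s actual implementation.

```swift
// Interpretation type for the probe word in a WSAP trial.
enum Interpretation {
    case negative, neutral, positive
    var isBenign: Bool { self != .negative }  // neutral and positive count as benign
}

// One WSAP training trial: a probe word, its interpretation type, and the
// ambiguous sentence it may (or may not) relate to.
struct WSAPTrial {
    let word: String              // flashed briefly (750ms at level 1)
    let interpretation: Interpretation
    let sentence: String          // ambiguous; remains until the user responds
}

// Reinforcement rule: endorsing a benign word, or rejecting a negative one,
// earns "Correct!"; the reverse earns "Let's try another".
func trialFeedback(_ trial: WSAPTrial, endorsed: Bool) -> String {
    let reinforced = trial.interpretation.isBenign ? endorsed : !endorsed
    return reinforced ? "Correct!" : "Let's try another"
}

// Example trial drawn from the text.
let trial = WSAPTrial(word: "chuckle", interpretation: .neutral,
                      sentence: "People laugh after something you said")
print(trialFeedback(trial, endorsed: true))  // Correct!
```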

Performance feedback.

There were no objectively correct or incorrect responses in the WSAP; words reflected different types of interpretations, all of which could be true. However, users received corrective feedback about their responses to encourage a healthier interpretive style. Specifically, users received immediate feedback after each trial in the form of a green check mark and the text “Correct!” if they endorsed a benign interpretation or rejected a negative interpretation. They received a red ‘X’ and the text “Let’s try another” if they endorsed a negative interpretation or rejected a benign interpretation. At the end of an exercise session, the app provided the user’s mean reaction time. Unlike our prior trial, which provided accuracy feedback after each session, we decided to only present accuracy following sessions in which the user achieved at least 80% accuracy. If users’ accuracy was below this threshold, they either did not see any accuracy feedback or they were presented with their percent improvement if they had improved. We treated accuracy feedback in this manner based on early testing with our Patient Advisory Board. Members first engaging with the app experienced a sense of failure when their initial accuracy was low. Thus, we developed this solution because we expected most users’ accuracy to be low (reflecting the presence of an interpretation bias) and because all users would be patients with acute symptoms for whom failure feedback may be counterproductive. Post-exercise feedback was presented alongside an encouraging GIF (e.g., a famous actor giving a thumbs up). We selected 17 GIFs that represented people with diverse ages and ethnoracial backgrounds. Users could access their lifetime statistics through the dashboard and see a visual representation of their accuracy and speed over time.
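
This accuracy-display rule reduces to a few lines of logic. The sketch below is an illustrative reading of the behavior described above (the 80% threshold and fallbacks follow the text; the names are hypothetical).

```swift
// Post-session feedback: accuracy is shown only at or above 80%; below that,
// the user sees percent improvement if they improved, otherwise nothing.
enum SessionFeedback {
    case accuracy(Double)     // shown when accuracy >= 0.80
    case improvement(Double)  // shown when below threshold but improved
    case none                 // below threshold and no improvement
}

func postSessionFeedback(accuracy: Double, previousAccuracy: Double?) -> SessionFeedback {
    if accuracy >= 0.80 { return .accuracy(accuracy) }
    if let prior = previousAccuracy, accuracy > prior {
        return .improvement(accuracy - prior)
    }
    return .none
}
```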

Level progression.

During the long-term phase, participants progressed through levels each time their accuracy exceeded 80%, for a total of 10 levels. Levels differed in the ratio of neutral to positive interpretations, task instructions, and task speed. All levels included sentences paired once with a negative interpretation (50% of trials) and once with a benign interpretation (50%). The benign interpretation gradually shifted from neutral to positive across the levels. Specifically, the frequency of positive (versus neutral) interpretations presented increased according to the following ratio: Level 1 consisted of 100% neutral and 0% positive; level 2 consisted of 90% neutral and 10% positive; and so on until level 10, which was 10% neutral and 90% positive. The frequency of negative interpretations remained the same. The task instructions also switched at levels 2, 4, 6, 8, and 10 to a reverse-order task in which the sentence appeared first. Participants pressed a “next” button after reading the sentence. The word then flashed for 500ms, and participants were instructed to respond. The speed also changed across levels; the words flashed on screen for 750ms (level 1), 650ms (level 2), 550ms (level 3), and 500ms (level 4 and up). Sentence duration was not modified; sentences remained until users responded.
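
The level parameters described above reduce to a simple lookup. The sketch below encodes them (our naming; the per-level word durations follow the speeds stated in this paragraph).

```swift
// Parameters for the 10 long-term levels. The positive share of benign trials
// rises from 0% (level 1) to 90% (level 10); negative trials stay at 50% overall.
struct LevelParameters {
    let positiveShareOfBenign: Double  // 0.0 ... 0.9
    let reversedOrder: Bool            // sentence shown before the word
    let wordDurationMs: Int
}

func parameters(forLevel level: Int) -> LevelParameters {
    precondition((1...10).contains(level))
    let positiveShare = Double(level - 1) * 0.10   // 0%, 10%, ..., 90%
    let reversed = level % 2 == 0                  // levels 2, 4, 6, 8, 10
    let durations = [750, 650, 550, 500]           // levels 1-3, then 500ms onward
    let duration = durations[min(level, 4) - 1]
    return LevelParameters(positiveShareOfBenign: positiveShare,
                           reversedOrder: reversed,
                           wordDurationMs: duration)
}
```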

Weekly check-in.

Participants were prompted to complete a weekly self-assessment that included re-personalization, a 50-trial WSAP task with no corrective feedback after each trial, a mood check-in, and a habit diary. The mood check-in included the PHQ-9 (Kroenke & Spitzer, 2002) and GAD-7 (Spitzer, Kroenke, Williams, & Löwe, 2006) so that users could self-monitor their symptoms. We selected these measures for self-monitoring because they are brief, publicly available scales with excellent psychometric properties (Kroenke & Spitzer, 2002; Kroenke, Spitzer, Williams, Monahan, & Löwe, 2007; Löwe et al., 2008; Spitzer et al., 2006) and are commonly used for routine clinical monitoring in real-world treatment settings (e.g., Ewbank et al., 2020). The habit diary prompted users to think about their week and write free text about instances in which they caught themselves jumping to conclusions. In the final version of the app, both the habit diary and mood check-in were available on-demand on the dashboard for more frequent self-monitoring. Finally, users completed the personalization checklists, and a new personally relevant stimuli pool was generated for the following week’s exercises. At the end of the weekly check-in, the app presented their weekly statistics: number of exercises and bonus rounds completed. Within lifetime statistics, participants could view their weekly check-in exercise accuracy and reaction time, as well as the progression of their anxiety and depression symptoms.

Open trial of HabitWorks

As we iteratively refined the app over three waves of pilot participants, we also obtained preliminary feasibility, acceptability, safety, and target engagement data from these participants.

Participants and Setting.

Participants included patients attending the Behavioral Health Partial Hospital Program at McLean Hospital. Patients received intensive, brief CBT-based treatment from 9:00 am to 2:50 pm every weekday from a multi-disciplinary staff (e.g., mental health counselors, nurses, psychologists, social workers, psychiatrists, psychology trainees). See Forgeard et al. (2018) for more details about the partial hospital.

People attending the partial hospital were required to be age 18 or older, speak English well enough to participate in group therapy, and to abstain from substance use (not including nicotine products) while in treatment. Additional inclusion criteria for the open trial included: access to an Apple smartphone, at least moderate symptom severity (PHQ-9 or GAD-7 ≥ 10), at least a minimal level of interpretation bias (below 80% accuracy on the WSAP), and willingness to sign a release of information for their outpatient providers upon discharge. Exclusion criteria included active mania or psychosis symptoms that would prevent informed consent or completion of research procedures, severe current suicidal ideation (PHQ-9 item 9 score ≥ 2), and significant cognitive impairment or clinical acuity as judged by clinical staff.

Of the 26 patients offered participation, 15 were eligible and consented (see Consort Flowchart, Figure 1). One participant was withdrawn from the study after being dismissed from the partial hospital for inappropriate behavior. Thus, 14 total patients enrolled in the study (64% female) and ranged in age from 19 to 64 years (M = 35, SD = 15.4). Participants reported the following ethnoracial backgrounds: Latinx (n = 1, 7.1%), Black (n = 1, 7.1%), Non-Latinx White (n = 12, 85.7%). Regarding highest educational attainment, five participants (35.7%) reported a high school degree or some college; three (21.4%) reported a college degree, and six (42.9%) reported some graduate school. Participants reported the following sexual identities: heterosexual/straight (n = 11; 78.6%), bisexual (n = 2; 14.3%), and queer (n = 1; 7.1%). Primary diagnoses per program psychiatrists included Major Depressive Disorder (n = 6), Bipolar Disorder (n = 6), Panic Disorder (n = 1), and Posttraumatic Stress Disorder (n = 1).

Figure 1. CONSORT Flow Chart

Measures.

Our primary outcomes for the open trial included indicators of feasibility, acceptability, and adherence. We specified a priori benchmarks for outcomes based on prior studies (Beard et al., 2019) and what we believed would be clinically useful for a low-intensity, smartphone-delivered augmentation to acute psychiatric care (see Table 1). Participants completed an Exit Questionnaire, similar to the prior study, assessing helpfulness, relevance, ease of use, satisfaction, and whether patients would recommend the HabitWorks app, each rated on a scale of 1 (“Completely Disagree”) to 7 (“Completely Agree”).

Table 1.

Open Trial Benchmarks for Feasibility and Acceptability

Outcome | Benchmark | Actual outcome
Feasibility/acceptability (% provided consent) | 50% | 71%
Retention (% completed assessments) | 75% | Post-acute: 11/14 (78.6%); 1-month: 10/14 (71.4%); 3-month: 8/14 (57.1%)
Adherence (% completed HabitWorks daily during acute treatment [≥ 5 sessions]) | 75% | 11/14 (78.6%)
Satisfaction (average rating on exit questionnaire items) | 5 | Helpfulness: M = 5.3, SD = 1.9; Relevance: M = 5.6, SD = 0.8; Ease of use: M = 5.9, SD = 2.1; Satisfaction: M = 5.8, SD = 2.0; Recommendation: M = 5.3, SD = 1.9
Credibility (CEQ item “How logical?”) | 5 | M = 7.6, SD = 1.3
Expectancy (CEQ item “How much improvement?”) | 50% | M = 47%, SD = 19.7
Benign interpretation (% of participants moving into healthy range [70% accuracy] for benign trials) | 75% | 100%
Negative interpretation (% of participants moving into healthy range [70% accuracy] for negative trials) | 75% | 80%

Qualitative feedback.

We obtained qualitative feedback in order to quickly identify (1) features of the app and aspects of our implementation methods that worked well, and (2) actionable suggestions for the app or its implementation. We utilized three sources of patient feedback to inform refinements. First, the first author conducted exit interviews during the 1-month follow-up assessment. These semi-structured interviews included prompts about the following topics: impressions of the HabitWorks app, study protocol, hospital care linkage, user friendliness, engagement, app content, credibility of treatment, adverse effects, benefits, overall satisfaction, cultural acceptability, and suggestions for improvement. Second, the Exit Questionnaire included four open text items with prompts asking what was most helpful and unhelpful, whether there were any perceived changes due to app usage, and whether participants had any suggestions for improvement. Third, staff recorded all spontaneous feedback provided during acute care sessions in a tracking spreadsheet.

Our treatment development process involved making modifications to the app and implementation methods immediately after each wave of open trial participants; thus, we did not have time to conduct traditional qualitative analyses. When efficiency is paramount, a rapid evaluation process can produce findings similar to those of more time-intensive qualitative analyses (Gale et al., 2019). In brief, we compiled all participant comments that pertained to barriers and potential actionable items in a spreadsheet. We grouped similar suggestions and created a list of requests for our programmers. The first author and programmers discussed each potential modification. We made final decisions about changes based on the ease of modification, budget, and consistency of the recommendation across participants. Finally, the first and second author reviewed all participant comments for themes regarding facilitators of use. The first author identified initial themes, and the second author identified exemplar quotations for each theme.

The Credibility/Expectancy Questionnaire (CEQ; Devilly & Borkovec, 2000) is a 6-item measure assessing treatment credibility and expectancy. The three credibility items (e.g., “At this point, how logical does the therapy offered to you seem?”) were rated from 1 to 9, with higher scores indicating greater perceived treatment credibility. The three expectancy items (e.g., “How much improvement in your symptoms do you think will occur?”) were rated from 0 to 10, corresponding to 0% to 100%.

Participants completed an assessment version of the WSAP at baseline, post-acute treatment, and one-month follow-up. A research assistant administered the WSAP on a laptop in a private room in the hospital. The WSAP is a commonly used measure of interpretation bias with good internal consistency and test-retest reliability across clinical populations (see Gonsalves et al., 2019 for a review of psychometric properties). The WSAP has demonstrated adequate split-half reliability in a similar partial hospital sample (Beard et al., 2019).

In the assessment version of the WSAP, no feedback was provided about responses, and stimuli were not personalized to the individual. Thus, all participants saw the same 50 word-sentence pairs. The word-sentence pairs were drawn from a prior study of interpretation bias in this same setting (Beard, Rifkin, & Björgvinsson, 2017). Following prior studies, we calculated two interpretation bias outcomes: (1) average accuracy for benign trials and (2) average accuracy for negative trials. Accurate responses were ‘yes’ to benign trials and ‘no’ to negative trials. Thus, for both trial types, lower accuracy reflects greater interpretation bias.
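
These two outcomes reduce to simple proportions. Below is an illustrative scoring sketch (type and function names are ours, not from the study’s analysis code).

```swift
// One response in the 50-trial WSAP assessment.
struct AssessmentResponse {
    let isBenignWord: Bool  // true for neutral or positive words, false for negative
    let endorsed: Bool      // user pressed "Yes"
}

// Lower accuracy on either trial type reflects greater interpretation bias.
func biasOutcomes(_ responses: [AssessmentResponse]) -> (benign: Double, negative: Double) {
    func accuracy(_ trials: [AssessmentResponse],
                  accurate: (AssessmentResponse) -> Bool) -> Double {
        trials.isEmpty ? 0 : Double(trials.filter(accurate).count) / Double(trials.count)
    }
    let benign = responses.filter { $0.isBenignWord }
    let negative = responses.filter { !$0.isBenignWord }
    // Accurate = "Yes" to a benign word; "No" to a negative word.
    return (accuracy(benign, accurate: { $0.endorsed }),
            accuracy(negative, accurate: { !$0.endorsed }))
}
```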

Finally, we included measures of depression and anxiety symptom severity (PHQ-9, Kroenke & Spitzer, 2002; GAD-7, Spitzer, Kroenke, Williams, & Löwe, 2006). Patients were asked how often in the past two weeks they had been bothered by specific symptoms according to a 4-point Likert-type scale from 0 (“not at all”) to 3 (“nearly every day”). At discharge, the timeframe in the instructions was the past 24 hours so that admission and discharge scores did not overlap in time period (Beard & Björgvinsson, 2014).

Procedure.

The Partners Healthcare Institutional Review Board approved all procedures. Patients were invited to participate on their second day of acute treatment. Interested individuals provided informed consent and completed the eligibility WSAP assessment. Once participants were deemed eligible, research staff administered baseline surveys via Research Electronic Data Capture (REDCap; Harris et al., 2009), which was used for the collection and management of all study data.

The following day, patients completed a 40-minute orientation session during which they downloaded HabitWorks, watched the instructional videos, completed the personalization process, and completed the first CBM-I session within the app. For all remaining days in the partial hospital program, participants met with research staff for approximately 10 minutes to complete a daily session and a brief check-in about how they felt the study/task was going. At discharge, participants configured the app to send push notifications at their desired session times. During the first month post-discharge, they were asked to complete HabitWorks sessions three times a week, bonus sessions as desired, and weekly “check-in” sessions. During the second and third months following discharge, participants were instructed to use the HabitWorks app as much or as little as they desired.

Compensation.

Participants were compensated for completing assessments: $20 for the pre-treatment/eligibility assessment, $30 for each of the post-treatment, 1-month, and 3-month assessments, and $20 for the qualitative exit interview.

Participants completed additional assessments in order to pilot research procedures for a future Stage 1B RCT, in which HabitWorks will be compared to a control condition in a larger pilot trial. Participants were compensated for completing these additional surveys, receiving $30 per week for completing at least 80% of them. We do not present results from these additional measures here because they were not part of the planned analyses and a priori feasibility and acceptability benchmarks for the Stage 1A trial.

Results

A priori benchmarks and obtained data are presented in Table 1. See Table 2 for themes from the qualitative feedback and direct quotes.

Table 2.

Qualitative feedback from patients

Themes and example quotations:

Facilitators
  Ease of use: “well organized and user friendly”; “beautifully simple”; “very convenient”
  App features: “liked the notifications to complete my exercises”; “GIFs were awesome!”
  Human connection: “liked having someone to check in with about app”; “I was having a bad day [in treatment] but have been really looking forward to checking in and doing a daily session”
  CBM-I task features: “situations were relevant to me”; “liked this instant feedback after the exercises”
  Effect of CBM-I: “simple at first, but crept in”; “I can already see this re-training my mind”; “wow, this is just showing me how warped my interpretations are”; “I am approaching certain situations in my life differently”; “helped normalize my anxiety in daily life”
  Overall usefulness: “very helpful”; “nice bridge to outpatient”; “eased the abrupt transition [to outpatient]”; “another tool in the toolbox”; “I have so many friends who would benefit from this app”; “this is going to make my transition out of here, from groups all day to nothing, so much easier.”

Barriers
  CBM-I task features: “having the same phrases or words repeated [was not helpful], it was more helpful when I started to get new scenarios”; “Being told ‘right’ or ‘wrong’ after each trial instead of addressing the gray areas of life”; “[exercises] took a lot of time”
  App features: “notifications should reappear and remain highlighted in app when exercises are due”; “make status bar during exercise optional”; “include symptom surveys in HabitWorks”
  Flexibility in scheduling: “allow us to complete exercises at any time during the day while [in treatment]”; “allow people to schedule exercises on consecutive days”

Feasibility.

Of the 21 patients potentially eligible for the study, 16 (76%) were interested. One individual was hospitalized before the consent session, resulting in 15 (71%) who consented.

Credibility and expectancy.

On average, patients reported that HabitWorks was moderately credible (M = 7.6, SD = 1.3, Range = 5 to 9). Patients’ expectation for percent improvement in their mental health symptoms was 47% (SD = 19.69, Range = 10% to 80%).

Acceptability.

Acceptability ratings met our a priori benchmarks for satisfaction, perceived helpfulness, personal relevance, and user-friendliness (Table 1). Patients expressed positive perceptions of ease of use, simplicity, and aesthetic appearance (Table 2). They commented on the task’s ability to increase awareness of their cognitive biases in their daily lives. Patients viewed the app as a useful tool to ease the transition into daily life. Negative feedback centered on the rigidity of the corrective feedback during the exercises (correct vs. incorrect), the length of the exercises, and desires for more variability in the situations presented, more app features or customization of existing features, and greater flexibility in the study protocol (i.e., daily session timing and scheduling of sessions).

Adherence during acute treatment.

Twelve of the 14 participants (86%) completed at least one session during acute care; 11 (79%) completed at least five daily sessions (M = 6.75, SD = 1.29, Range = 4 to 8). Reasons for not completing daily sessions included inpatient hospitalization (n = 1), early discharge (n = 1), and lack of interest (n = 1).

Adherence during one-month post-acute period.

The first two iterations of the app were not linked to a database; thus, we only obtained user statistics from the last six participants. In weeks 1 through 3, these six participants completed two sessions per week on average (SD = 1.18, range 0 to 4); 33% (n = 2) completed 3 sessions per week as prescribed. In week 4, app use dropped to 0.83 sessions on average (SD = 0.98, range 0 to 2); 0% completed 3 sessions.

Target engagement.

Of the 14 participants enrolled, 13 completed the baseline WSAP, and 10 completed the post-treatment and 1-month follow-up WSAP. Our a priori benchmark for target engagement was met. At the one-month follow-up assessment, 100% (10/10) of participants’ interpretation bias scores on the WSAP task were in the healthy range for benign trials; 80% (8/10) of scores were in the healthy range for negative trials. Paired samples t-tests (n = 10) revealed that WSAP accuracy significantly improved from baseline to post-treatment for both benign, t(9) = −2.58, p = .03, d = 1.00, and negative trials, t(9) = −4.19, p = .002, d = 1.32. WSAP accuracy also improved from baseline to one-month follow-up for both benign, t(9) = −5.02, p < .001, d = 1.99, and negative trials, t(9) = −8.36, p < .001, d = 2.65 (see Figure 2).

Figure 2. Accuracy on WSAP interpretation assessment at baseline, post-acute treatment, and 1-month follow-up

Symptom severity.

Of the 14 participants enrolled, 13 completed the baseline symptom measures, and 10 completed the post-treatment and 1-month follow-up symptom measures. Paired samples t-tests revealed that depression symptoms significantly improved from baseline (M = 12.90, SD = 5.78) to post-treatment (M = 8.20, SD = 4.34), t(9) = 3.2, p = .011, d = 1.01. Anxiety symptoms also improved from baseline (M = 13.60, SD = 3.57) to post-treatment (M = 6.90, SD = 3.73), t(9) = 4.14, p = .003, d = 1.31. However, at one-month follow-up, depression scores (M = 18.20, SD = 5.16) were significantly worse than baseline, t(9) = −3.68, p = .005, d = 1.08. Anxiety scores did not change significantly from baseline to one-month (M = 15.90, SD = 4.23), t(9) = −1.55, p = .155, d = 0.58.

Discussion

We described the development of a smartphone app designed to augment acute psychiatric care and facilitate skills practice in patients’ daily lives during the challenging post-acute period. Building upon prior work in the field of CBM-I and input from various stakeholder groups, we designed the HabitWorks app to deliver a personalized, transdiagnostic intervention targeting interpretation bias. We then refined the intervention iteratively and obtained feasibility and acceptability data via three waves of recruitment in an open trial.

Stakeholder and user feedback informed the development of the HabitWorks app and implementation methods. Overall, positive feedback from patients in the open trial converged around common themes. Most patients noted the user-friendly design and ease of use. Several patients appreciated specific features we included to enhance user engagement, such as GIFs and notification reminders. The personalization feature of HabitWorks appeared to successfully create personally relevant CBM-I stimuli, as most patients noted that the situations depicted in the exercises were relevant. Patient comments reflected the intent of the app to shift interpretation bias and provide a bridge from acute care to outpatient treatment.

Patients were mixed in their experiences with checking in with staff during acute treatment. Some patients found this aspect of the intervention to be helpful and crucial to their engagement, whereas others thought it was unnecessary and preferred to complete the app independently. Similarly, patients diverged in their opinion of our original scheduling protocol. Following discharge, we initially required users to schedule three sessions per week in the app, and these sessions were required to be separated by at least one day (per previously established CBM-I protocols). Some patients had no problems adhering to this protocol, whereas others suggested we make the app more flexible to accommodate people with different schedules. In general, we observed that people varied greatly in their preferences, and an overarching theme was to make features of the app and implementation methods flexible and personalizable.

To establish preliminary feasibility and acceptability, we selected a priori benchmarks based on prior studies and on clinical judgment of appropriate benchmarks for implementation in an acute treatment setting. HabitWorks met most of these benchmarks. Most of the eligible patients (71%) consented to the open trial. Adherence was good during acute treatment: 79% completed daily sessions. Reasons for non-adherence during acute treatment were primarily expected and unavoidable in psychiatric hospital populations (e.g., discharged unexpectedly). Engagement with the app following discharge was good during the first three weeks, with an average of two sessions per week completed. This frequency of use is promising given the noted challenges patients face during the post-acute period and that participants received no further contact from research staff during this period. However, only one-third of participants adhered to the prescribed three sessions per week during the first 3 weeks following discharge, and no participants completed all 3 sessions in the final week. Study retention was good for the post-treatment and 1-month follow-up assessments, but drop-out by the 3-month follow-up was high. Poor adherence and high drop-out are known barriers to implementing digital interventions (see Torous et al., 2020 for review). These challenges may be exacerbated when implementing a smartphone app intervention immediately following discharge from psychiatric hospital care, which is a stressful transition period. The final version of HabitWorks included in-app mood monitoring, which may enhance engagement in the subsequent pilot RCT (Torous et al., 2020).

Satisfaction benchmarks were met for every domain including helpfulness, relevance, ease of use, satisfaction, and recommendation to a friend. After the first HabitWorks session, patients rated it as credible, but the average expected improvement was slightly lower than our benchmark (47% vs 50%). This may be due to skepticism about how a smartphone app could benefit someone with the severe symptoms characteristic of this sample. It could also be due to lack of understanding of the treatment rationale. We delivered the treatment rationale via the app’s instructional videos developed with the input of a patient advisory board. Future studies might assess participants’ comprehension of the rationale after watching the videos.

We also examined preliminary evidence of target engagement. Patients evidenced improvement in their WSAP accuracy with large effect sizes at the post-acute treatment and follow-up assessment. These effect sizes are consistent with a prior CBM-I study in this same acute psychiatric setting (Beard et al., 2019) and provide preliminary evidence of further gains post-discharge. Symptom outcomes revealed an initial decrease in depression and anxiety during acute treatment, followed by an increase in symptoms by the 1-month follow-up. These results should be interpreted cautiously in light of the preliminary nature of this study, including a small sample size and the absence of a control condition. The increase in symptoms following discharge is expected because the current study focused on a time period characterized by high rates of relapse. Future larger trials are needed that compare HabitWorks to a control group.

In addition to a randomized controlled trial to test the effectiveness of this intervention, future work is needed to evaluate potential implementation methods. The appropriate implementation methods and outcomes depend upon how we categorize HabitWorks as a behavioral intervention technology (BIT). Hermes and colleagues (2019) define BITs as technology designed to assist a user in changing their behaviors, cognitions, or emotions. They suggest that BITs fall on a continuum of support encompassing completely human-delivered interventions, adjunctive BIT, guided BIT, and fully automated BIT. During acute care, HabitWorks was clearly an adjunctive BIT, as the primary interventions - CBT and pharmacotherapy - were delivered by hospital providers, and the app supported these interventions through skills practice. During the transition period following discharge, HabitWorks continued being an adjunctive BIT to some patients’ outpatient care. However, for many patients, it may have become a fully automated BIT either because they used it independently with no explicit role for providers or because they did not have outpatient providers. Future work is also needed to determine the appropriate metrics for adherence and engagement during the transition period. We instructed participants to complete the CBM-I exercises three times per week for four weeks. However, it is likely that some people would benefit from more regular practice. It is also likely that some people may stop using the app because they are feeling better. User data from larger trials will provide more information about individual differences in use patterns and associations with outcomes.

This open trial was intended to guide the development of HabitWorks; thus, the evaluation is preliminary and limited. First, it is limited by the small pilot sample size and lack of a control condition. Without a control group and a larger sample, the current treatment development study cannot test whether HabitWorks led to the theorized improvement in interpretation bias and subsequent improvement in clinical symptoms. Such mediational hypotheses will be crucial to test in future larger studies. In the open trial, WSAP interpretation bias scores improved from baseline to post-treatment and follow-up, whereas clinical symptoms deteriorated after post-treatment. Thus, improved WSAP scores may not necessarily translate to improved mood for all people requiring acute psychiatric treatment. We are currently conducting a Stage 1B randomized controlled trial comparing HabitWorks to an active self-monitoring app condition. Data from that larger pilot trial will provide better information about the feasibility and acceptability of HabitWorks, use of the app during the post-acute phase, and preliminary data about clinical outcomes. It is unlikely that such a low-intensity intervention will be effective for the entire acute psychiatric population. Thus, larger trials are also needed to test hypothesized moderators of CBM-I response, such as baseline interpretation bias and symptom severity. Second, while the current pilot sample was diverse in diagnoses, age, and sexual identity, it lacked ethnoracial and educational diversity. This is primarily due to the demographic characteristics of the psychiatric hospital and surrounding area. It will be important to evaluate HabitWorks in a sample with a range of sociocultural identities.

The current pilot data extend prior CBM-I work to a sample of people with acute mental health symptoms, suicidal ideation and risk, comorbidity, and a range of primary diagnoses, most commonly Major Depressive Disorder and Bipolar Disorder. This sample differs substantially from most CBM-I studies in severity and primary diagnosis. Overall, these preliminary findings suggest that further testing is warranted. Results also support the handful of studies that have examined CBM-I as an augmentation to CBT (Goldin et al., 2012; Teachman, Marker, & Clerkin, 2010). Smartphone-delivered skills-based apps are one tool for meeting the huge unmet treatment need of people discharging from acute care. Given the personalization features and transdiagnostic relevance, HabitWorks may also be an appropriate intervention in non-acute treatment settings or as a low-intensity self-help intervention.

Supplementary Material


Highlights.

  • Developed a smartphone-delivered Cognitive Bias Modification (CBM) intervention (“HabitWorks”)

  • HabitWorks was feasible and acceptable as an augmentation to acute psychiatric care

  • Participants’ interpretation bias shifted into the healthy range at post-treatment

  • Pilot study results support next phase of testing in a randomized controlled trial

Acknowledgments

This research was supported by the National Institute of Mental Health (MH113600) awarded to the first author and was registered on ClinicalTrials.gov (Identifier: NCT03509181). HabitWorks was developed by the first author. Curiosity Health programmed the app and assisted with all technological issues. We are grateful to members of our Advisory Board: Nader Amir, Bethany Teachman, John Torous, Risa Weisberg, Jaqueline Sperling, Eve Lewandowski, Hilary Connery, Andrew Kuller, Andrew Peckham, Kirsten Christensen, Ivar Snorrason. We are grateful to the entire staff and patients at the Behavioral Health Partial Hospital Program who made this work possible, especially members of the Patient Advisory Board. We thank McLean Hospital’s Institute for Technology in Psychiatry for providing helpful guidance at various stages of the project. We thank Lauren Page Wadsworth Photography for recording the instructional videos in the HabitWorks app and CBTeam, LLC for providing office space to record the videos. We thank Meghan O’Brien for assistance with coding the word-sentence pairs in the CBM task.


Contributor Information

Courtney Beard, Department of Psychiatry, McLean Hospital/Harvard Medical School.

Ramya Ramadurai, Department of Psychiatry, McLean Hospital.

R. Kathryn McHugh, Department of Psychiatry, McLean Hospital/Harvard Medical School.

JP Pollak, Curiosity Health and Cornell Tech.

Thröstur Björgvinsson, Department of Psychiatry, McLean Hospital/Harvard Medical School.

References

1. Amir N, Kuckertz JM, Najmi S, & Conley SL (2015). Preliminary evidence for the enhancement of self-conducted exposures for OCD using cognitive bias modification. Cognitive Therapy and Research, 39(4), 424–440. https://doi.org/10.1007/s10608-015-9675-7
2. Beard C, & Amir N (2008). A multi-session interpretation modification program: Changes in interpretation and social anxiety symptoms. Behaviour Research and Therapy, 46(10), 1135–1141. https://doi.org/10.1016/j.brat.2008.05.012
3. Beard C, & Amir N (2009). Interpretation in social anxiety: When meaning precedes ambiguity. Cognitive Therapy and Research, 33(4), 406–415. https://doi.org/10.1007/s10608-009-9235-0
4. Beard C, Weisberg RB, & Amir N (2011). Combined cognitive bias modification treatment for social anxiety disorder: A pilot trial. Depression and Anxiety, 28(11), 981–988. https://doi.org/10.1002/da.20873
5. Beard C, Rifkin LS, & Björgvinsson T (2017). Characteristics of interpretation bias and relationship with suicidality in a psychiatric hospital sample. Journal of Affective Disorders, 207, 321–326. https://doi.org/10.1016/j.jad.2016.09.021
6. Beard C, Silverman AL, Forgeard M, Wilmer MT, Torous J, & Björgvinsson T (2019). Smartphone, social media, and mental health app use in an acute transdiagnostic psychiatric sample. JMIR mHealth and uHealth, 7(6), e13364. https://doi.org/10.2196/13364
7. Beard C, Rifkin LS, Silverman AL, & Björgvinsson T (2019). Translating CBM-I into real-world settings: Augmenting a CBT-based psychiatric hospital program. Behavior Therapy, 50(3), 515–530. https://doi.org/10.1016/j.beth.2018.09.002
8. Cristea IA, Kok RN, & Cuijpers P (2015). Efficacy of cognitive bias modification interventions in anxiety and depression: Meta-analysis. The British Journal of Psychiatry, 206(1), 7–16. https://doi.org/10.1192/bjp.bp.114.146761
9. Demographics of mobile device ownership and adoption in the United States. (2019, June 12). Pew Research Center. Retrieved from https://www.pewinternet.org/fact-sheet/mobile/
10. Devilly GJ, & Borkovec TD (2000). Psychometric properties of the Credibility/Expectancy Questionnaire. Journal of Behavior Therapy and Experimental Psychiatry, 31(2), 73–86. https://doi.org/10.1016/S0005-7916(00)00012-4
11. Di Matteo D, Fine A, Fotinos K, Rose J, & Katzman M (2018). Patient willingness to consent to mobile phone data collection for mental health apps: Structured questionnaire. JMIR Mental Health, 5(3), e56. https://doi.org/10.2196/mental.9539
12. Ewbank MP, Cummins R, Tablan V, Bateup S, Catarino A, Martin AJ, & Blackwell AD (2020). Quantifying the association between psychotherapy content and clinical outcomes using deep learning. JAMA Psychiatry, 77(1), 35–43. https://doi.org/10.1001/jamapsychiatry.2019.2664
13. Forgeard M, Beard C, Kirakosian N, & Björgvinsson T (2018). Research in partial hospital settings. In Codd RT (Ed.), Practice-Based Research (1st ed., pp. 212–240). New York: Routledge. https://doi.org/10.4324/9781315524610
14. Gale RC, Wu J, Erhardt T, Bounthavong M, Reardon CM, Damschroder LJ, & Midboe AM (2019). Comparison of rapid vs in-depth qualitative analytic methods from a process evaluation of academic detailing in the Veterans Health Administration. Implementation Science, 14(1), 1–11. https://doi.org/10.1186/s13012-019-0853-y
15. Goldin PR, Ziv M, Jazaieri H, Werner K, Kraemer H, Heimberg RG, & Gross JJ (2012). Cognitive reappraisal self-efficacy mediates the effects of individual cognitive-behavioral therapy for social anxiety disorder. Journal of Consulting and Clinical Psychology, 80(6), 1034–1040. https://doi.org/10.1037/a0028555
16. Gonsalves M, Whittles RL, Weisberg RB, & Beard C (2019). A systematic review of the word sentence association paradigm (WSAP). Journal of Behavior Therapy and Experimental Psychiatry, 64, 133–148. https://doi.org/10.1016/j.jbtep.2019.04.003
17. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, & Conde JG (2009). Research electronic data capture (REDCap)—A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics, 42(2), 377–381. https://doi.org/10.1016/j.jbi.2008.08.010
18. Hermes ED, Lyon AR, Schueller SM, & Glass JE (2019). Measuring the implementation of behavioral intervention technologies: Recharacterization of established outcomes. Journal of Medical Internet Research, 21(1), e11752. https://doi.org/10.2196/11752
19. Hirsch CR, Meeten F, Krahé C, & Reeder C (2016). Resolving ambiguity in emotional disorders: The nature and role of interpretation biases. Annual Review of Clinical Psychology, 12, 281–305. https://doi.org/10.1146/annurev-clinpsy-021815-093436
20. Hitchcock PF, Forman EM, & Herbert JD (2016). Best learning practices for Internet treatments. The Behavior Therapist, 39, 51–55.
21. Jones EB, & Sharpe L (2017). Cognitive bias modification: A review of meta-analyses. Journal of Affective Disorders, 223, 175–183. https://doi.org/10.1016/j.jad.2017.07.034
22. Kroenke K, & Spitzer RL (2002). The PHQ-9: A new depression diagnostic and severity measure. Psychiatric Annals, 32, 509–515. https://doi.org/10.3928/0048-5713-20020901-06
23. Kroenke K, Spitzer RL, Williams JBW, Monahan PO, & Löwe B (2007). Anxiety disorders in primary care: Prevalence, impairment, comorbidity, and detection. Annals of Internal Medicine, 146(5), 317–325. https://doi.org/10.7326/0003-4819-146-5-200703060-00004
24. Löwe B, Decker O, Müller S, Brähler E, Schellberg D, Herzog W, & Herzberg PY (2008). Validation and standardization of the Generalized Anxiety Disorder Screener (GAD-7) in the general population. Medical Care, 46(3), 266–274. https://doi.org/10.1097/MLR.0b013e318160d093
25. Menne-Lothmann C, Viechtbauer W, Höhn P, Kasanova Z, Haller SP, Drukker M, … & Lau JY (2014). How to boost positive interpretations? A meta-analysis of the effectiveness of cognitive bias modification for interpretation. PLoS ONE, 9(6), e100925. https://doi.org/10.1371/journal.pone.0100925
26. Miller IW, Gaudiano BA, & Weinstock LM (2016). The Coping Long Term with Active Suicide Program: Description and pilot data. Suicide and Life-Threatening Behavior, 46(6), 752–761. https://doi.org/10.1111/sltb.12247
27. Onken L, Carroll K, Shoham V, Cuthbert B, & Riddle M (2014). Reenvisioning clinical science: Unifying the discipline to improve the public health. Clinical Psychological Science, 2, 22–34. https://doi.org/10.1177/2167702613497932
28. Rounsaville BJ, Carroll KM, & Onken LS (2001). A stage model of behavioral therapies research: Getting started and moving on from Stage 1. Clinical Psychology: Science and Practice, 8(2), 133–142. https://doi.org/10.1093/clipsy.8.2.133
29. Salemink E, Wolters L, & de Haan E (2015). Augmentation of treatment as usual with online cognitive bias modification of interpretation training in adolescents with obsessive compulsive disorder: A pilot study. Journal of Behavior Therapy and Experimental Psychiatry, 49, 112–119. https://doi.org/10.1016/j.jbtep.2015.02.003
30. Schmidt NB, Norr AM, Allan NP, Raines AM, & Capron DW (2017). A randomized clinical trial targeting anxiety sensitivity for patients with suicidal ideation. Journal of Consulting and Clinical Psychology, 85(6), 596. https://doi.org/10.1037/ccp0000195
31. Spitzer RL, Kroenke K, Williams JB, & Löwe B (2006). A brief measure for assessing generalized anxiety disorder: The GAD-7. Archives of Internal Medicine, 166(10), 1092–1097. https://doi.org/10.1001/archinte.166.10.1092
32. Teachman BA, Marker CD, & Clerkin EM (2010). Catastrophic misinterpretations as a predictor of symptom change during treatment for panic disorder. Journal of Consulting and Clinical Psychology, 78(6), 964. https://doi.org/10.1037/a0021067
33. Torous J, Friedman R, & Keshavan M (2014). Smartphone ownership and interest in mobile applications to monitor symptoms of mental health conditions. JMIR mHealth and uHealth, 2(1), e2. https://doi.org/10.2196/mhealth.2994
34. Torous J, & Friedman R (2014). Smartphone use among patients age greater than 60 with mental health conditions and willingness to use smartphone applications to monitor their mental health conditions. The American Journal of Geriatric Psychiatry, 22(3), S128–S129.
35. Torous J, Lipschitz J, Ng M, & Firth J (2020). Dropout rates in clinical trials of smartphone apps for depressive symptoms: A systematic review and meta-analysis. Journal of Affective Disorders, 263, 413–419. https://doi.org/10.1016/j.jad.2019.11.167
36. Vigod SN, Kurdyak PA, Dennis CL, Leszcz T, Taylor VH, Blumberger DM, & Seitz DP (2013). Transitional interventions to reduce early psychiatric readmissions in adults: Systematic review. The British Journal of Psychiatry, 202(3), 187–194. https://doi.org/10.1192/bjp.bp.112.115030
37. Williams AD, Blackwell SE, Mackenzie A, Holmes EA, & Andrews G (2013). Combining imagination and reason in the treatment of depression: A randomized controlled trial of internet-based cognitive-bias modification and internet-CBT for depression. Journal of Consulting and Clinical Psychology, 81(5), 793. https://doi.org/10.1037/a0033247
38. Williams AD, O’Moore K, Blackwell SE, Smith J, Holmes EA, & Andrews G (2015). Positive imagery cognitive bias modification (CBM) and internet-based cognitive behavioral therapy (iCBT): A randomized controlled trial. Journal of Affective Disorders, 178, 131–141. https://doi.org/10.1016/j.jad.2015.02.026
39. Woo BK, Golshan S, Allen EC, Daly JW, Jeste DV, & Sewell DD (2006). Factors associated with frequent admissions to an acute geriatric psychiatric inpatient unit. Journal of Geriatric Psychiatry and Neurology, 19(4), 226–230. https://doi.org/10.1177/0891988706292754
