Abstract
Objective
To assess the feasibility and acceptability of a mobile health platform supporting Collaborative Care.
Method
Collaborative Care patients (n = 17) used a smartphone app to transmit PHQ-9 and GAD-7 scores and sensor data to a dashboard used by one care manager. Patients completed usability and satisfaction surveys and qualitative interviews at 4 weeks and the care manager completed a qualitative interview. Mobile metadata on app usage was obtained.
Results
All patients used the app for 4 weeks, but only 35% (n = 6) sustained use at 8 weeks. Prior to discontinuing use, 88% (n = 15) completed all PHQ-9 and GAD-7 measures, with lower response rates for daily measures. Four themes emerged from interviews: understanding the purpose; care manager’s role in supporting use; benefits of daily monitoring; and privacy / security concerns. Two themes were user-specific: patients’ desire for personalization; and care manager burden.
Conclusions
The feasibility and acceptability of the mobile platform are supported by the high early response rate; however, attrition was steep. Our qualitative findings revealed nuanced participant experiences and uncovered some concerns about mobile health. To encourage retention, attention may need to be directed toward promoting patient understanding and provider engagement, and offering personalized patient experiences.
Keywords: Depression, Primary health care, Mental health services, Telemedicine, Smartphone, Patient reported outcome measures
1. Introduction
Mobile health tools have generated considerable enthusiasm among researchers and clinical leaders, as they offer features that may support a range of activities that contribute to healthcare delivery for chronic health conditions, including common mental disorders [1–4]. However, technology-based interventions deployed as standalone tools have low uptake and may be less effective than those paired with human support [5–9], and are thus unlikely to fulfill their potential to transform healthcare delivery. To maximize impact on care delivery and patient outcomes, mobile tools need to be embedded into effective clinical care models, such as the Collaborative Care model [10].
Collaborative Care is an approach to delivering care for depressive and anxiety disorders using a team-based care model. This approach, supported by > 80 randomized trials, is twice as effective as usual depression care and has now been widely disseminated [11,12]. Essential principles of Collaborative Care include a patient-centered, population-based approach, and the delivery of measurement-based care [13,14]. Health information technologies that support these principles, such as a patient registry, are integral to the delivery of Collaborative Care, and recently, automated symptom monitoring by interactive voice response systems has been investigated [15]. To date, the technologies typically have consisted of clinician-facing tools [10,16]. Because Collaborative Care is a patient-centered approach that seeks to inform and activate patients to improve self-management, the use of a patient-facing mobile tool is a logical extension of the Collaborative Care model [10,17].
Research on mobile tools to support depression care has occurred in a variety of settings; however, little is known about the experiences of patients and care providers using these tools, and these studies have not deployed mobile tools within Collaborative Care [8,18,19]. Potential benefits include improving patient engagement through education and automated reminders and improving patient satisfaction with a convenient, asynchronous method for patient-provider communication. Patients and providers may benefit from timely remote symptom monitoring to drive measurement-based care, thus improving quality of care. Providers may benefit from reduced time spent obtaining and documenting symptom measures and from less reliance on time-consuming synchronous telephone outreach. However, new technologies also may be disruptive to clinicians’ workflows and could increase clinician cognitive load and time burden from accessing, reviewing and responding to patient-generated data.
We conducted a pilot study of a mobile health system that consisted of a patient-facing smartphone application (“app”) that transmitted patient-reported data to a depression care manager via an online dashboard for patients in a Collaborative Care program. The purpose of the study was to assess the feasibility, acceptability, and fit of the mobile health platform with the Collaborative Care workflow.
2. Methods
2.1. Site and participants
The study was conducted in a primary care clinic affiliated with the University of Washington that offers Collaborative Care services for patients with depression and anxiety. The Collaborative Care program, described previously [20], was operational for nearly three years prior to the study. English-speaking adults receiving treatment for a depressive or anxiety disorder from one care manager employed by the University of Washington clinic were eligible for the study. Exclusion criteria included active suicidality or a current diagnosis of dementia, substance dependence, bipolar disorder, or a psychotic disorder.
2.2. Mobile platform
The mobile health platform was furnished by Ginger.io and included a smartphone app (available for iPhone or Android devices) for patients and a web-based provider dashboard. The mobile app provided patients with notifications to complete regular clinical surveys, occasional satisfaction surveys, and health tips approximately 3–4 times per week. The health tips were selected from tips used in a recent trial of depression apps [8,21] and included suggestions for managing depressed mood such as self-care activities (e.g., healthy eating, pleasant activities) or managing challenges (e.g., meditation, finding balance). Table 1 lists the survey schedule for the clinical measures and satisfaction surveys. Smartphone sensor data was collected passively to assess movement (all participants) and communication patterns (Android users only). The provider dashboard offered several views, which included a list of all patients using the app and an individual patient view with all data submitted via the app and a graphing feature to visualize responses to measures over time. The platform flagged participants who were persistently symptomatic based on patient self-report, were isolated based on movement and communication patterns, reported thoughts of self-harm, reported medication concerns or ran out of medications, or requested an outreach call from the care manager.
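The vendor's exact alerting rules were not available to us. Purely as an illustrative sketch, the snippet below shows how flagging logic of this general kind could be expressed; every field name and threshold is a hypothetical assumption for illustration, not a description of the Ginger.io implementation.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DailyRecord:
    """One day of patient data (hypothetical schema, for illustration only)."""
    phq2: Optional[int] = None                 # modified PHQ-2 daily mood score
    reported_self_harm: bool = False
    medication_concern: bool = False
    out_of_medication: bool = False
    requested_outreach: bool = False
    mobility_minutes: Optional[float] = None   # passive movement estimate
    outgoing_contacts: Optional[int] = None    # Android communication data


def flag_patient(history: List[DailyRecord]) -> List[str]:
    """Return dashboard alert labels; thresholds are illustrative assumptions."""
    flags = []
    latest = history[-1]
    if latest.reported_self_harm:
        flags.append("self-harm")
    if latest.medication_concern or latest.out_of_medication:
        flags.append("medication")
    if latest.requested_outreach:
        flags.append("outreach-request")
    # Persistently symptomatic: several consecutive elevated daily mood scores.
    recent = [d.phq2 for d in history[-3:] if d.phq2 is not None]
    if len(recent) == 3 and all(score >= 4 for score in recent):
        flags.append("persistent-symptoms")
    # Isolation: little movement combined with little outgoing communication.
    if (latest.mobility_minutes is not None
            and latest.mobility_minutes < 15
            and (latest.outgoing_contacts or 0) == 0):
        flags.append("isolation")
    return flags
```

In the study platform, alerts of this kind surfaced on the care manager's dashboard; a production implementation would also need to handle missing days, time zones, and sensor dropout.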
Table 1.

Administration schedule | Measures |
---|---|
Baseline | Age; gender; race/ethnicity; education; employment |
Daily | Modified PHQ-2; Subjective Units of Distress Scale; medication use; outreach request |
Weekly | PHQ-9; GAD-7 |
Week 4 [8 or 12]a | Technology obtrusiveness |
Week 4, 8, 12b | Developer product feedback survey |

a This survey was originally scheduled at Weeks 4 and 12. When the study timeline was truncated, the Week 12 survey was rescheduled to Week 8.
b The Week 12 survey was not administered to participants who had access to the app for fewer than 12 weeks.
2.3. Procedure
All study procedures were conducted remotely. The study was approved by the University of Washington Institutional Review Board. At the start of recruitment, the care manager reviewed all patients on her active caseload to identify patients who were ineligible based on the clinical exclusion criteria described above. Weekly during the 6-week recruitment period, she also reviewed patients newly enrolled in Collaborative Care for potential eligibility. All patients who did not meet clinical exclusion criteria (n = 54) received a letter describing the study and were offered the opportunity to opt out of contact. The opt-out method yields higher enrollment and less sampling bias than an opt-in strategy [22]. Recruitment activities were conducted by the research team, who attempted to contact all individuals who did not opt out (n = 53) and were successful in reaching most (n = 38) to inform them about the study, answer questions, and obtain informed consent (Supplementary figure).

Interested participants received an email with highlights of the informed consent and, once they had agreed to participate, the act of downloading and installing the phone app signified their consent to participate in the project. Due to the remote nature of the study, a waiver of written consent was obtained. Participants received a brief description of the app and contact information for the study team should they experience any technical difficulties. After installing the app, participants completed a brief demographic survey (e.g., age group, gender, race/ethnicity, education, employment). An open-ended semi-structured telephone interview was conducted 4 weeks after the participant installed the app. At that time, participants were encouraged to continue using the app for 8 to 12 weeks total. A semi-structured interview with the care manager was conducted following completion of patient data collection. Interviews assessed participants’ general experiences using the mobile system, their perceptions of its contribution to their care, and satisfaction with specific features of the system. No compensation was provided to participants for using the system; however, a $50 gift card was provided following completion of the research interview.

After the study was underway, the platform was scheduled to undergo changes in the features on the mobile app and the provider dashboard was reconfigured; thus, the follow-up interval was truncated. The earliest enrolled participants had access for 12 weeks, and those who enrolled later had access for 8 to 12 weeks based on enrollment date. Data were also obtained from the University of Washington’s Care Management Tracking System, a patient registry that tracks individuals’ treatment history and includes the dates of all care management contacts and the associated symptom scores on validated measures (the PHQ-9 [23] for depressive symptoms and the GAD-7 [24] for anxiety symptoms). This information was used to characterize the study population by determining how long participants had been engaged in Collaborative Care prior to enrolling in this study and describing the severity of participants’ depressive and anxiety symptoms at the initiation of treatment.
2.4. Study outcomes
We employed a concurrent triangulation design comprising quantitative and qualitative methods to assess patients’ use of and experience with the mobile app, as well as the care manager’s experience with the system [25]. This approach allowed us to compare and integrate the results of our quantitative and qualitative analyses to generate complementary data about feasibility and acceptability for patients and for the care manager.
2.4.1. Quantitative
All responses that participants submitted through the patient app were time-stamped. Passive data on location and communication were aggregated daily. To assess overall use of the app, we determined the date of the last PHQ-9 or GAD-7 response and the date of the last passive data submission, and defined the last day of app use as the later of the two dates. We calculated the proportion of surveys returned for each type of measure: daily surveys of mood and medication (for patients taking psychotropic medications) and weekly PHQ-9 and GAD-7. Participants completed the developer’s product feedback survey rated on a 6-point Likert scale and a measure of technology obtrusiveness rated on a 7-point Likert scale (see Table 3 for item wording).
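These use metrics are straightforward to derive from time-stamped submission logs. As a rough sketch only, the snippet below computes a last-use date and a weekly PHQ-9 response proportion for a toy data frame; the column names and example values are assumptions for illustration, not the study's actual export format or analysis code.

```python
import pandas as pd

# Hypothetical time-stamped submission log: one row per survey response or
# passive-data upload (columns assumed for illustration).
submissions = pd.DataFrame({
    "participant": ["P01", "P01", "P01", "P02", "P02"],
    "record_type": ["PHQ-9", "GAD-7", "passive", "PHQ-9", "passive"],
    "timestamp": pd.to_datetime(
        ["2016-03-01", "2016-03-08", "2016-03-20", "2016-03-02", "2016-03-15"]),
})

# Last day of app use: the later of the last PHQ-9/GAD-7 response and the
# last passive data submission for each participant.
surveys = submissions[submissions["record_type"].isin(["PHQ-9", "GAD-7"])]
passive = submissions[submissions["record_type"] == "passive"]
last_use = pd.concat(
    [surveys.groupby("participant")["timestamp"].max(),
     passive.groupby("participant")["timestamp"].max()],
    axis=1).max(axis=1)

# Response proportion for the weekly PHQ-9: surveys returned divided by the
# number of weeks of app use prior to discontinuation (at least one week).
first_use = submissions.groupby("participant")["timestamp"].min()
weeks_in_use = ((last_use - first_use).dt.days // 7).clip(lower=1)
phq9_returned = surveys[surveys["record_type"] == "PHQ-9"].groupby("participant").size()
phq9_response_rate = (phq9_returned / weeks_in_use).round(2)

print(last_use)
print(phq9_response_rate)
```

In the study itself the denominator was the number of surveys scheduled before each participant's discontinuation date, so an analysis of the real data would substitute that schedule for the crude week count used here.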
Table 3.

Item | Strongly disagree/disagree/somewhat disagree, n (%) | Neutral, n (%) | Strongly agree/agree/somewhat agree, n (%) |
---|---|---|---|
This technology requires little effort to use | 0 (0%) | 0 (0%) | 16 (100%) |
This technology was easy to learn how to use | 0 (0%) | 0 (0%) | 16 (100%) |
This technology is reliable | 0 (0%) | 2 (13%) | 14 (88%) |
This technology is useful | 1 (6%) | 4 (25%) | 11 (69%) |
This technology keeps my information private | 0 (0%) | 9 (56%) | 7 (44%) |
This technology violates my personal space | 15 (94%) | 1 (6%) | 0 (0%) |
This technology fits into my daily activities | 1 (6%) | 4 (25%) | 11 (69%) |
This technology requires me to learn a new routine | 9 (56%) | 5 (31%) | 2 (13%) |
If my health declines I can no longer use this technology | 13 (81%) | 3 (19%) | 0 (0%) |
This technology reduces the number of visits I have with healthcare providers | 5 (31%) | 9 (56%) | 2 (13%) |
This technology positively impacts my relationships with family and friends | 4 (25%) | 9 (56%) | 3 (19%) |
Using this technology means that I am no longer independent | 15 (94%) | 1 (6%) | 0 (0%) |
Using this technology causes me embarrassment | 14 (88%) | 0 (0%) | 2 (13%) |
The Ginger.io app is easy to use | 0 (0%) | | 13 (100%) |
The time required to answer questions in the Ginger.io app is reasonable | 0 (0%) | | 13 (100%) |
Ginger.io helps me feel more connected to my doctor or care team | 7 (54%) | | 6 (46%) |
Ginger.io helps me feel like my care team understands and responds to my unique needs | 7 (54%) | | 6 (46%) |
Ginger.io helps me feel more confident that I am able to manage my mental health condition | 7 (54%) | | 6 (46%) |
Ginger.io helps me feel I am able to be more open and honest about how I am feeling | 5 (38%) | | 8 (62%) |
I am satisfied with the overall Ginger.io experience | 3 (23%) | | 10 (77%) |

Technology obtrusiveness items (top rows; n = 16) were rated on a 7-point scale; developer product feedback items about Ginger.io (bottom rows; n = 13) were rated on a 6-point scale with no neutral option.
2.4.2. Qualitative
Interviews were audio recorded, professionally transcribed and checked for accuracy. Using directed content analysis [26], three members of the team (AMB, MIS, RHG) identified a priori codes and refined them during initial coding of a subset of patient interviews (n = 3). These codes focused on understanding patients’ overall experience using the app, including perspectives on sharing mental health data through mobile devices, patients’ understanding of the app’s purpose, and their perceptions of the app’s impact on their mental health and on their healthcare. An analogous set of codes was developed for the care manager interview. A codebook was generated, two team members (MIS, RHG) then coded all transcripts, and any discrepancies were identified and resolved in team meetings.
3. Results
3.1. Participants
Of 38 individuals contacted by the research team, 6 people were ineligible because they did not have an Android or iPhone and 14 declined participation, most commonly citing being too busy, although 2 individuals declined due to concerns about the passive data collection. Overall, 18 individuals consented, downloaded, and used the app; however, one of these individuals completed treatment concurrently with study enrollment and therefore was determined to be ineligible due to discontinuation of services in the Collaborative Care program (Supplementary figure). Among the final sample of 17 participants, most were female (n = 10; 59%), white (n = 16; 94%), and had received Collaborative Care services for > 180 days (n = 11; 65%; see Table 2).
Table 2.
Characteristic | n | % |
---|---|---|
Age | ||
18–24 | 3 | 18% |
25–34 | 6 | 35% |
35–44 | 3 | 18% |
45–54 | 3 | 18% |
55–64 | 1 | 6% |
65 + | 1 | 6% |
Gender | ||
Male | 7 | 41% |
Female | 10 | 59% |
Race | ||
White or Caucasian | 16 | 94% |
Black or African American | 1 | 6% |
Asian/Pacific Islander/Native American/other | 0 | 0% |
Ethnicity | ||
Not Hispanic or Latino | 17 | 100% |
Hispanic or Latino | 0 | 0% |
Employment status | ||
Employed | 11 | 65% |
Student | 3 | 18% |
Retired or homemaker | 0 | 0% |
Unemployed or unable to work | 3 | 18% |
Education | ||
Less than high school | 0 | 0% |
High school or GED | 1 | 6% |
Some college | 3 | 18% |
Bachelor’s degree | 10 | 59% |
Graduate or professional degree | 3 | 18% |
Phone type | ||
Android | 7 | 41% |
iPhone | 10 | 59% |
Time in treatment prior to study start | ||
0–30 days | 2 | 12% |
31–60 days | 1 | 6% |
60–90 days | 0 | 0% |
90–180 days | 3 | 18% |
180–365 days | 4 | 24% |
> 365 days | 7 | 41% |
PHQ-9 score at start of treatment | ||
0–4 | 5 | 29% |
5–9 | 7 | 41% |
10+ | 5 | 29% |
GAD-7 score at start of treatment | ||
0–4 | 6 | 35% |
5–9 | 0 | 0% |
10+ | 11 | 65% |
PHQ-9 score at study start | ||
0–4 | 10 | 59% |
5–9 | 6 | 35% |
10+ | 1 | 6% |
GAD-7 score at study start | ||
0–4 | 11 | 65% |
5–9 | 6 | 35% |
10+ | 0 | 0% |
Any psychotropic medication at study start | ||
No | 11 | 65% |
Yes | 6 | 35% |
3.2. Patient app use
All participants used the app for the first 4 weeks; however, only 6 participants (35%) continued use through 8 weeks (Supplementary table). Participants responded to most self-report measures during the time they used the app. Prior to discontinuing use, the response rate for the weekly PHQ-9 and GAD-7 scales ranged from 86 to 100%, with 88% of participants (n = 15) completing all measures. Compared to the weekly measures, the response rate was more variable for the daily measures of mood. Before discontinuing use, the rate of completion was 61–100% for the modified PHQ-2 and 18–96% for the Subjective Units of Distress Scale, with 15 participants (88%) completing more than half of the latter measure. Among the 6 participants taking psychotropic medications, the response rate to the medication survey ranged from 30 to 67%.
3.3. Patient app usability, acceptability and satisfaction
Due to the small number of participants using the app at Weeks 8 and 12, we report results from Week 4 only. All participants who responded reported that the app was easy to use and the amount of time was reasonable (Table 3). Perceptions of the impact of the app varied. The majority of participants reported overall satisfaction with the app (n = 10/13; 77%) and thought the app was useful (n = 11/16; 69%). Nearly half of participants reported feeling more connected to their doctor (n = 6/13; 46%) or more confident in managing their mental health (n = 6/13; 46%). Only a few participants endorsed overtly negative views of the app, such as feeling embarrassed (n = 2/16; 13%), but many (n = 9/16; 56%) were neutral on whether the app kept their information private.
3.4. Qualitative feedback
Four major themes emerged from both patients and the care manager: 1) understanding the purpose of the system; 2) benefits of daily monitoring; 3) the care manager’s role in reinforcing app use; and 4) privacy, confidentiality and security concerns. Two additional themes were user-specific: 5) patients’ desire for more personalized features; and 6) the care manager’s burden using the system (Table 4).
Table 4.
ID | Days of app use/days app available | Quote |
---|---|---|
Theme 1. Understanding the purpose of the system | ||
CM | N/A | It gives us a daily insight into how our patients are doing so that when we see them, we can really focus our efforts on what’s really coming up as important or what really are the outstanding symptoms that we need to be targeting in such a brief follow-up visit with them. ‘Cause most patients I only see for half an hour every two weeks which is so short. So I think it’s designed to help focus those interactions, but also I think to give patients a sense over time of how they’re actually doing objectively, ‘cause so much of the time we ask them, it’s coming out of whatever the current emotional state is versus what’s really the whole picture. |
P14 | 65/72 | My understanding of it is that it’s supposed to number one, be a tool for me to kind of keep track of that information myself. It helps me think about that day to day when normally I would not be thinking about it. It also helps my care providers to have some of the information that they may not be able to get on a daily basis. I’m assuming that it sends it to them, and then it kind of creates a chart for them or a graph that helps them keep track of my responses so that they have a better idea of any ups and downs in the time that we’re not initially meeting. |
P01 | 71/84 | I was thinking that it was tracking the correlation between my social media use and email and all of those things and how that behavior affects my feelings kind of and how much I’m maybe using my phone and how that - I don’t know - the number of text messages I’m sending or receiving and how that affects my attitude and my quality of life |
P17 | 53/61 | So I don’t know what all the goals of the app are. But if it were a goal of the app to provide immediate help in terms of some crisis, then you probably want to make that a little bit more clear. |
P05 | 83/84 | I don’t know… I don’t have any understanding of it. It seems like it’s some kind of long term monitoring of mood which helps interventions and solution. But I don’t really know the purpose of it. |
Theme 2. Benefits of daily monitoring | ||
P07 | 45/84 | I feel like that’s a good reminder to be more mindful. |
P04 | 56/84 | whether you like it or not, it keeps it current and in your face and it makes you do things or think about things - actually also do things |
CM | N/A | < Our registry > does the PHQ-9 but it is on a two week basis mostly or whenever they come in. So I mean I love that patients can get - now that I see that it doesn’t end up being like they need to talk to us every single day, I love that they could tell us regularly how they were doing |
CM | N/A | I remember I had one patient who had really high PHQ when they came in but I didn’t get any alerts for them. And I was like, “Well, what’s going on?” And I went back and looked and it turns out he had pretty low or moderate daily PHQs or daily mood tracks. And I asked him about that and he’s like, “Oh, I’m just having a bad day today when I filled it out. “ So it’s like, okay, this gives a better picture. It’s not that the whole two weeks were terrible between when I saw you - I see you now and I saw you last but - so that kind of helped I think kind of to have people do better reporting on a daily basis than on a - when I asked them, I asked them about how they did for the last two weeks and it can be pretty skewed |
CM | N/A | it can alert if they’ve had a low daily mood tracker for a couple days in a row or something so you don’t have to wait as long. So I like the faster alert. So no, I think, gosh, I think it was helpful |
CM | N/A | ‘Cause that actually came up as a few alerts of times I would call people, that they’d run out of medication. And then we call them and say, “Looks like you’re out, do you need a refill?” “Oh yes, I do.” And then we wouldn’t have to skip several days of not being on medication so that was a super helpful alert with < this platform > that we don’t have otherwise. |
Theme 3. Care manager reinforcing use | ||
P03 | 84/84 | if my care provider was definitely keeping up with the data I think, yeah, I would definitely use it. If this was something that was just like I downloaded from the app store and it was something that, you know, it wasn’t really keeping track of anything, like it wasn’t actionable for my care provider no, I definitely wouldn’t use it. |
P03 | 84/84 | I got a phone call right away the next day when I said that yeah, I wanted to talk to my care provider, whatever. So yeah, so that was really helpful. |
P14 | 65/72 | If there was something that I personally had a hard time saying about my feelings or the week or whatever, she already had an idea because I had already put in some input for that. |
P13 | 37/72 | It could make me a little more lazy in scheduling the next appointment because I feel like, “Oh I have the app. I’m checking in. I feel like I’m doing good.” or, “I’m having a great week. I don’t need to go see someone today or this week. I’m doing fine.” So I don’t know if that’s a good or a bad thing. |
P07 | 45/84 | I don’t know if they even knew I was using it |
CM | N/A | The overall impression I got from our patients is they felt really well-cared for. They felt remembered and that we are really invested in wanting to know how they’re doing regularly. That even if we can only offer brief amounts of visits per month, we care about how they’re doing |
Theme 4. Privacy, confidentiality, and security | ||
P17 | 53/61 | Well, I’m pretty leery of having personal information out there medically speaking, but I mean, I don’t feel like this really particularly infringed on something that felt like I didn’t want it out there. |
P03 | 84/84 | All the information is being run by my care provider I think that’s - kind of outweighs any privacy issues, I think. I definitely wouldn’t use this if it wasn’t part of my care. |
P11 | 42/76 | Sometimes I worry what are the repercussions if I say that I’m feeling particular bad today. What happens if I admit to feeling really bad? Is somebody going to - are the paramedics going to show up at my house? I don’t know. [laughs] I guess I wasn’t really sure what would happen on the other end, what I was feeding data into and what the response might be |
P16 | 41/64 | I would need a lot more assurances that I had very clear indicators as to what was being shared and with who, explicitly who had access to it. … I mean, as much as Facebook is an oversharing society or oversharing technology, there’s still pretty granular controls around who has access to what and you have a lot of ability to go in and look at that and make sure that what’s being shared is under your control. And I didn’t feel like the app had that kind of transparency. … So I think as a product it would need a lot of transparency about the access to information and potentially the ability to shut it off. You know, once I’ve - it would be nice for me to say, “No, this information is in escrow essentially and I own it and I have the ability to shut it off.” |
P06 | 44/83 | I’d have - well I don’t know how to do this but I would say less intrusive notifications. Sometimes I had my phone out and then the screen will wake up saying, “Oh you received some survey,” and I don’t like that. |
CM | N/A | I think one thing that stands out about how maybe it could be better is I heard from a lot of people that they worried it was draining their battery pretty quickly or that the surveys, that the pop-ups were too invasive, meaning people would have their phones out and there’d be an alert saying, “Oh, your mood tracker’s here.” And then they’d be like, “Oh my gosh, I don’t want everybody to see that,” and they’d be scrambling to cover up their phones. So I think in that way, it being a little bit more discreet could be a huge plus while still somehow reminding people about it. |
Theme 5. Patients’ desire for more personalized features | ||
P03 | 84/84 | I guess if you could have set the time when those reminders came I think that would have been a little bit more helpful. … I mean it would hit like 7:00 and I’m more of a late night person. So it would hit me when my day is kind of not even half over yet because I - well I’m going to night school so I’m more active in the evening. |
P02 | 29/84 | I guess for me, my specific case, those questions aren’t very penetrating. They’re just very superficial to me. I think it doesn’t capture what my particular predicament is. So I just don’t know that it helps me to try to think about these things at all times. It doesn’t move me in any direction. … my particular problems it takes more than just discussing those kinds of symptoms like - yeah, that’s sort of - I feel like it feels very shallow to me or very unsatisfactory. |
P12 | 32/74 | What kept me motivated? … Honestly, probably the, “Oh, congratulations! You have done X number of things in a row.” |
P02 | 29/84 | Actually there’s one thing that I wanted to say something about that I didn’t like and that was even though it’s so well-intended it was that when I had answered a certain number of - or even just two things in a row it would say, “Awesome,” or like - I think it would have been fine if it said like, “Great,” or - yeah, maybe in all sorts. But then it said like, “Stupendous,” and I felt a little patronizing. [laughs] I don’t know. “It’s really not that stupendous, guys. It is not that stupendous. It just clicked some buttons.” But maybe someone else would feel encouraged by that. |
P16 | 41/72 | I kind of wanted to see my historical responses and that wasn’t available. |
P11 | 42/76 | I kinda wish I could put a little note in and be like, “This is why I put this number.” I think, yeah. I guess that’s actually - it would have been really nice to have some sort of journaly type feature where I could make notes like that |
P02 | 29/84 | I do have this thing called UP which is monitoring like how much I walk and how much I sleep and things like that and what kinds of sleep. Then it comes with - it actually - it looks at your data and then it tailors suggestions and advice based on that data. So that’s maybe what my frame of reference was. I think this app did less of that. I felt like it wasn’t as customized as I might have expected. |
P16 | 41/64 | I only went into the app when I was prompted. So I didn’t perceive any value in the app outside of the check-in. |
Theme 6. Care manager’s burden | ||
CM | N/A | I think more information, the better, as long as it doesn’t add too much time so - I think it’s a good mix of I got good information but only if I need it or wanted to access it… |
CM | N/A | I did wonder, but I think this was initially immediately resolved, if it would take a lot of time to look through the app or look through the answers from patients, but because they have alerts when they’re scoring really negatively in the app, it made it pretty easy. So I wouldn’t take more than a few minutes to look at the < system’s > dashboard per day. |
CM | N/A | In some ways I found it maybe even saved time in some of the patient interactions ‘cause I kind of could go back and look and see how they’ve been doing and, “Oh, your sleep has resolved or your mood has been pretty good,” or kind of reference it briefly if I wanted to. So, yeah, I think if they’d seen that for themselves that doesn’t take a long time to check the dashboard. It’s not like we’re being told to call our patients every single day. Maybe that’s something to back up. I worried about it a little bit. Is this gonna mean I’m gonna have daily contact with our patients, yeah. [laughs] But I didn’t, I really didn’t and it maybe because it only alerts when there’s something pretty significant or there’s a pattern that needs to be resolved. Yeah. Yeah, thankfully I think that isn’t - yeah, it doesn’t add a lot more time [laughs] which I think the time is something everybody would be most concerned about… |
CM | N/A | Most of our patients want more services rather than less. And I think this was a good way of providing that without it actually taking a lot of time or additional energy from the clinic. |
CM | N/A | I think one thing that would very much help in the future, if it was naturally part of the medical record ‘cause it is an additional software to use. |
3.4.1. Theme 1. Understanding the purpose of the system
The care manager readily saw the value the mobile platform offered to her and the patients she cared for. In contrast, patients expressed varying levels of understanding of the app’s purpose. They had a general understanding that the app collected self-report and passive data and that this information was shared with the clinician. However, most patients could not accurately recall specific information about the type of passive data collected. One patient noted that it was unclear whether the app was intended to help people in crisis. Limited patient understanding did not translate into shorter duration of app use (Table 4).
3.4.2. Theme 2. Benefits of daily monitoring
Despite variable understanding of the app, patients believed that using the app to monitor symptoms caused them to become more mindful of their symptoms (Table 4). This increased awareness of mood allowed them to be more proactive in their coping, although some received this awareness with ambivalence. The care manager also felt that monitoring mood more frequently than once or twice a month was beneficial. The platform’s alerts, generated mainly in response to daily surveys, were viewed favorably by the care manager in comparison to the existing registry system, which identifies people who have not improved after 10 weeks of treatment.
3.4.3. Theme 3. The care manager’s role in reinforcing app use
For some patients, increased awareness of their mental health alone would not have been sufficient to sustain app use in the absence of care manager involvement (Table 4). Many patients felt that the care manager’s response to the data they submitted enhanced their care, although a few noted that the data were not well-integrated into their care. The care manager echoed patients’ statements that the app enhanced the care she provides. Patients speculated about how the app might detract from care, although notably nobody indicated that these concerns were realized.
3.4.4. Theme 4. Privacy, confidentiality, and security
Patients felt the data they submitted was not entirely secure (Table 4). However, they did not believe that the information reported in the app was too personal, and therefore the potential for a data breach was not a major concern. Patients were comfortable sharing information about their mental health symptoms through the app, and the care manager’s access to their information was frequently cited as promoting this comfort. Some patients, however, wished to have a better understanding about who else had access to their health information, as well as the ability to control such access. The care manager recalled that some patients felt that the notifications the app sent were insufficiently discreet.
3.4.5. Theme 5. Patients’ desire for more personalized features
Patients wanted to customize the app to meet their individual needs, for example, by adjusting the timing of prompts, the types of symptoms they were reporting on, or the frequency or content of health tips. Some participants appreciated the badges and reinforcements they received when they completed their check-in surveys, whereas others felt patronized by the motivational language, again suggesting a need to tailor the language to individuals’ tastes. Participants desired a more individualized app experience that included directly visualizing their own data, having tailored interventions based on their current states, i.e., just-in-time adaptive interventions, or annotating standardized scores with personal diary-style notes.
3.4.6. Theme 6. Care manager’s burden
For the care manager, the benefits of the mobile platform were balanced against its burdensomeness. Potential burdens included time and the cognitive demands of filtering patient-reported data to identify the information she needed. Neither of these potential burdens was realized. The care manager found some efficiencies in having access to patient-reported data paired with clinically-useful alerts which allowed her to meet patients’ desire for more services with minimal investment of her time or effort. Nevertheless, the need to access a separate system was burdensome.
4. Discussion
Our findings support the feasibility and acceptability of a mobile health platform as an adjunct to team-based Collaborative Care for primary care patients with depression and anxiety. We observed high levels of patient satisfaction in the quantitative evaluation and a high initial response rate, with 88% of people completing all weekly symptom measures and good response to daily mood measures prior to discontinuing use of the app. This level of patient engagement is promising for supporting effective measurement-based depression care and is comparable to, or higher than, that reported in some other studies of app-based symptom monitoring in specialty care settings with patients with schizophrenia and bipolar disorder [27,28]. Similar to these tools, and in contrast to numerous smartphone-based mood monitoring tools [8], this platform facilitated symptom reporting to care providers. Our results are in line with the literature suggesting that patient engagement in mobile tool use is greater and attrition is lower for tools that are supported by healthcare providers [6]. The care manager’s relationship with the patient and support for the use of the mobile platform emerged as key facilitators of patients’ use of the tool. For patients, sharing symptom data with their care manager was valued highly.
The high response rate achieved in the initial weeks was not maintained over time. Due to a change in the platform that the developers introduced, the duration of patient access to the app was shortened from 12 weeks to 8 weeks, which appeared to affect ongoing use. Our qualitative findings also revealed some ambivalence and nuance in patients’ experiences that may in part account for the observed patterns of use and which provide important insights into areas for improvement. For developers and researchers, this highlights the value of a mixed methods approach to evaluation of mobile health tools as brief surveys of usability and satisfaction may not detect more complex responses that are important for sustaining use over time.
Lack of personalization emerged as an important factor for many patients, who expressed a strong interest in a more individualized experience. Some of the desired customizations are relatively straightforward (e.g., selecting the timing of notifications, choosing which symptoms to report, or receiving a summary or graph of one’s own symptoms). Some patients were also interested in annotating their symptom scores with explanatory comments or notes, or overlaying multiple data streams (to look concurrently at symptoms and adherence). However, other personalized features require the development of novel algorithms (e.g., targeting health tips based on symptom profile, offering real-time adaptive interventions based on symptoms in the moment, or suggesting self-management strategies or micro-interventions that are responsive to patient preferences). Our results demonstrate that patient interest in such advanced tools matches the enthusiasm of scientists for developing novel behavioral intervention methods [29].
Few patients in this study considered privacy and data security to be significant issues, although it is plausible that these concerns are more important as barriers to initial use than to ongoing use among people who initiate use. Our results are consistent with prior research demonstrating that patients with negative views of their health are more comfortable sharing personal health information [30], which has led to calls for more coordinated regulations to protect patient privacy [31]. Past research has revealed that patients may withhold information due to concerns about electronic transmission of health data [30]; however, our results did not bear this out. Some patients did express uncertainty about privacy and security, including what types of data were collected, who had access to them, and how the data were used.
Despite receiving consistent written and verbal information about the app from the research team and accepting the user license, after 4 weeks of use, patients varied considerably in their understanding of the app’s functions and how it fit into their healthcare. While many participants had a good understanding of the system and the care manager’s use of their information, others had misconceptions about the data collected or were uncertain about its purpose. For example, one of our participants noted it was unclear whether this app was intended for crisis management. This finding underscores the importance of providing thorough education about technology-enabled services when introducing them and also following up to monitor patients’ understanding over time. Certain features may also promote patients’ understanding, such as providing patients with access to summaries or graphs that mirror the information that clinicians receive. Patient education about health technologies is important for addressing the ‘digital divide’ in healthcare and this will be particularly relevant when implementing tools in routine care settings with patients who are likely to be less motivated and have lower educational attainment than our study participants [32–35].
Patients and the care manager valued the ability to aggregate data in ways that were clinically meaningful and felt that the system’s ability to track patient-reported outcomes served this purpose, although patients desired improvements in visualizing their data. The value of passively collected data was viewed more speculatively, although both patients and the care manager expressed openness to incorporating such data into care in the future. To realize such potential, developers will need to address concerns about transparency in data privacy and security as well as generate more advanced analytics in collaboration with patients and clinicians.
Although revealing, the findings of this pilot trial should be viewed within the context of certain limitations. Study participants were recruited from a single clinic site over a 6-week period. Participants were primarily white, well-educated, employed urban dwellers who were in or nearing remission of their symptoms of depression and anxiety, and many had been receiving Collaborative Care services for over 6 months. While this affords an important perspective on the potential for the app over an episode of care for patients in this Collaborative Care program, patient experiences may differ among acutely depressed patients, patients who are less engaged in care, or underserved patients who may have less trust in healthcare providers or in the role for technology. Although our study does not address questions of how to incorporate mobile tools into the care of underserved patient groups, our approach and findings can help inform future research in this important area. The context of the research study may have promoted greater use of the app than would be true in a naturalistic setting, although unlike some studies of smartphone tools, our study did not provide study devices to participants or offer financial incentives for app use. Although initial uptake and use of the app was high in the first four weeks, attrition was steep thereafter. Because the qualitative interviews were conducted at Week 4, we do not know participants’ reasons for discontinuing app use. Our data on use should be viewed with caution given that the duration of participants’ access to the app was not constant across the study. Our qualitative findings suggest several domains that may have contributed to discontinuation, including variability in participants’ understanding of the purpose of the app, lack of personalization or graphs of progress, and uncertainty about data security. Understanding reasons for discontinuation is an important area for future research given that retention over time is crucial for longitudinal management of chronic conditions, yet high attrition is common with many technology-based depression interventions [6,9,21]. Another critical area for future research is assessment of the impact of mobile health tools on the outcomes of patients who are acutely depressed. Because most patients in this pilot study were at or nearing remission, we were unable to address this key area.
Our findings point to the need to identify strategies to educate patients and providers on mobile and patient-facing tools and to develop methods to aggregate and summarize information in ways that are responsive to the needs of both patients and clinicians. Whereas emerging research is revealing that electronic medical records may have unintended effects on patient-provider communication and relationships [36,37], our research provides an example of how a mobile platform that includes a patient-facing tool may enhance and extend the therapeutic relationship between patients and their provider. Developing a better understanding of how digital health tools can support effective patient-provider communication is an important area for future research.
Mobile health tools are acceptable to patients and providers and can be paired with effective clinical care models, as we demonstrated for Collaborative Care. To support effective depression care, a mobile health tool will need to sustain the high level of engagement we achieved in the initial weeks of this pilot project. To optimize such uptake, attention needs to be directed to the design of the technology, how it is introduced to patients and embedded into the service delivery model, and how care providers can integrate the tool into ongoing care and reinforce its use.
Supplementary Material
Acknowledgements
Portions of the research presented in this manuscript were presented at the NIMH Conference on Mental Health Services Research in Washington DC, August 2016. We are deeply grateful to each of the participants who shared valuable insights with us in this project and to the University of Washington Neighborhood Clinics for hosting this project. We would also like to thank Jane Edelson for her assistance. This research would not have been possible without the exceptional mentorship of Wayne J. Katon MD. Dr. Katon was the primary mentor for the career development award to the first author that funded this research. Thank you, Wayne, for your support and guidance.
Funding
This project was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health (KL2TR000421) through the Clinical and Translational Science Awards Program (CTSA). The technology platform was provided in kind by Ginger.io.
Footnotes
Conflicts of interest
Ms. Kincler is employed by Ginger.io. Ms. Miller was employed by the University of Washington Neighborhood Clinics at the time the study was conducted. None of the other authors have any conflicts of interest to disclose.
Appendix A. Supplementary data
Supplementary data to this article can be found online at https://doi.org/10.1016/j.genhosppsych.2017.11.010.
References
- [1]. Steinhubl SR, Muse ED, Topol EJ. Can mobile health technologies transform health care? JAMA 2013;310(22):2395–6.
- [2]. Steinhubl SR, Muse ED, Topol EJ. The emerging field of mobile health. Sci Transl Med 2015;7(283):283rv3.
- [3]. Ben-Zeev D. Mobile health for all: public-private partnerships can create a new mental health landscape. JMIR Ment Health 2016;3(2):e26.
- [4]. Torous J, Baker JT. Why psychiatry needs data science and data science needs psychiatry: connecting with technology. JAMA Psychiat 2016;73(1):3–4.
- [5]. Andersson G, Cuijpers P. Internet-based and other computerized psychological treatments for adult depression: a meta-analysis. Cogn Behav Ther 2009;38(4):196–205.
- [6]. Christensen H, Griffiths KM, Farrer L. Adherence in internet interventions for anxiety and depression. J Med Internet Res 2009;11(2):e13.
- [7]. Mohr DC, Cuijpers P, Lehman K. Supportive accountability: a model for providing human support to enhance adherence to eHealth interventions. J Med Internet Res 2011;13(1):e30.
- [8]. Arean PA, Hallgren KA, Jordan JT, Gazzaley A, Atkins DC, Heagerty PJ, et al. The use and effectiveness of mobile apps for depression: results from a fully remote clinical trial. J Med Internet Res 2016;18(12):e330.
- [9]. Gilbody S, Brabyn S, Lovell K, Kessler D, Devlin T, Smith L, et al. Telephone-supported computerised cognitive-behavioural therapy: REEACT-2 large-scale pragmatic randomised controlled trial. Br J Psychiatry 2017;210(5):362–7.
- [10]. Bauer AM, Thielke SM, Katon W, Unutzer J, Arean P. Aligning health information technologies with effective service delivery models to improve chronic disease care. Prev Med 2014;66:167–72.
- [11]. Archer J, Bower P, Gilbody S, et al. Collaborative care for depression and anxiety problems. Cochrane Database Syst Rev 2012;10:CD006525.
- [12]. Katon WJ, Unutzer J. Health reform and the Affordable Care Act: the importance of mental health treatment to achieving the triple aim. J Psychosom Res 2013;74(6):533–7.
- [13]. University of Washington AIMS Center. Patient-centered integrated behavioral health care principles & tasks. http://aims.uw.edu/collaborative-care/principles-collaborative-care, Accessed date: 12 May 2014.
- [14]. American Psychiatric Association and Academy of Psychosomatic Medicine. Dissemination of integrated care within adult primary care settings: the Collaborative Care Model. https://www.psychiatry.org/psychiatrists/practice/professional-interests/integrated-care/collaborative-care-model; 2016, Accessed date: 15 July 2017.
- [15]. Ramirez M, Wu S, Jin H, Ell K, Gross-Schulman S, Myerchin Sklaroff L, et al. Automated remote monitoring of depression: acceptance among low-income patients in diabetes disease management. JMIR Ment Health 2016;3(1):e6.
- [16]. Unutzer J, Choi Y, Cook IA, Oishi S. A web-based data management system to improve care for depression in a multicenter clinical trial. Psychiatr Serv 2002;53(6):671–3, 678.
- [17]. Hallgren KA, Bauer AM, Atkins DC. Digital technology and clinical decision making in depression treatment: current findings and future opportunities. Depress Anxiety 2017;34(6):494–501.
- [18]. Hantsoo L, Criniti S, Khan A, Moseley M, Kincler N, Faherty LJ, et al. A mobile application for monitoring and management of depressed mood in a vulnerable pregnant population. Psychiatr Serv 2017. https://doi.org/10.1176/appi.ps.201600582.
- [19]. Mohr DC, Tomasino KN, Lattie EG, Palac HL, Kwasny MJ, Weingardt K, et al. IntelliCare: an eclectic, skills-based app suite for the treatment of depression and anxiety. J Med Internet Res 2017;19(1):e10.
- [20]. McGough PM, Bauer AM, Collins L, Dugdale DC. Integrating behavioral health into primary care. Popul Health Manag 2016;19(2):81–7.
- [21]. Anguera JA, Jordan JT, Castaneda D, Gazzaley A, Arean PA. Conducting a fully mobile and randomised clinical trial for depression: access, engagement and expense. BMJ Innov 2016;2(1):14–21.
- [22]. Junghans C, Feder G, Hemingway H, Timmis A, Jones M. Recruiting patients to medical research: double blind randomised trial of “opt-in” versus “opt-out” strategies. BMJ 2005;331(7522):940.
- [23]. Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med 2001;16(9):606–13.
- [24]. Spitzer RL, Kroenke K, Williams JB, Lowe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med 2006;166(10):1092–7.
- [25]. Creswell JW, Fetters MD, Ivankova NV. Designing a mixed methods study in primary care. Ann Fam Med 2004;2(1):7–12.
- [26]. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res 2005;15(9):1277–88.
- [27]. Ben-Zeev D, Brenner CJ, Begale M, Duffecy J, Mohr DC, Mueser KT. Feasibility, acceptability, and preliminary efficacy of a smartphone intervention for schizophrenia. Schizophr Bull 2014;40(6):1244–53.
- [28]. Beiwinkel T, Kindermann S, Maier A, Kerl C, Moock J, Barbian G, et al. Using smartphones to monitor bipolar disorder symptoms: a pilot study. JMIR Ment Health 2016;3(1):e2.
- [29]. Nahum-Shani I, Smith SN, Spring BJ, Collins LM, Witkiewitz K, Tewari A, et al. Just-in-Time Adaptive Interventions (JITAIs) in mobile health: key components and design principles for ongoing health behavior support. Ann Behav Med 2016. https://doi.org/10.1007/s12160-016-9830-8.
- [30]. Anderson CL, Agarwal R. The digitization of healthcare: boundary risks, emotion, and consumer willingness to disclose personal health information. Inf Syst Res 2011;22(3):469–90.
- [31]. Hall JL, McGraw D. For telehealth to succeed, privacy and security risks must be identified and addressed. Health Aff 2014;33(2):216–21.
- [32]. Goel MS, Brown TL, Williams A, Cooper AJ, Hasnain-Wynia R, Baker DW. Patient reported barriers to enrolling in a patient portal. J Am Med Inform Assoc 2011;18(Suppl. 1):i8–12.
- [33]. Amante DJ, Hogan TP, Pagoto SL, English TM. A systematic review of electronic portal usage among patients with diabetes. Diabetes Technol Ther 2014;16(11):784–93.
- [34]. Ronda MC, Dijkhorst-Oei LT, Rutten GE. Reasons and barriers for using a patient portal: survey among patients with diabetes mellitus. J Med Internet Res 2014;16(11):e263.
- [35]. Bauer AM, Rue T, Munson SA, Ghomi RH, Keppel GA, Cole AM, et al. Patient-oriented health technologies: patients’ perspectives and use. J Mob Technol Med 2017;6(2):1–10.
- [36]. Alkureishi MA, Lee WW, Lyons M, Press VG, Imam S, Nkansah-Amankra A, et al. Impact of electronic medical record use on the patient-doctor relationship and communication: a systematic review. J Gen Intern Med 2016;31(5):548–60.
- [37]. Sinsky C, Colligan L, Li L, Prgomet M, Reynolds S, Goeders L, et al. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Ann Intern Med 2016;165(11):753–60.