NIHPA Author Manuscripts
. Author manuscript; available in PMC: 2022 Oct 26.
Published in final edited form as: Proc ACM Hum Comput Interact. 2021 Apr 22;5(CSCW1):99. doi: 10.1145/3449173

Automated vs. Human Health Coaching: Exploring Participant and Practitioner Experiences

ELLIOT G MITCHELL 1, ROSA MAIMONE 2, ANDREA CASSELLS 3, JONATHAN N TOBIN 4, PATRICIA DAVIDSON 5, ARLENE M SMALDONE 6, LENA MAMYKINA 7
PMCID: PMC9605038  NIHMSID: NIHMS1792263  PMID: 36304916

Abstract

Health coaching can be an effective intervention to support self-management of chronic conditions like diabetes, but there are not enough coaching practitioners to reach the growing population in need of support. Conversational technology, like chatbots, presents an opportunity to extend health coaching support to broader and more diverse populations. However, some have suggested that the human element is essential to health coaching and cannot be replicated with technology. In this research, we examine automated health coaching using a theory-grounded, wizard-of-oz chatbot, in comparison with text-based virtual coaching from human practitioners who start with the same protocol as the chatbot but have the freedom to embellish and adjust as needed. We found that even a scripted chatbot can create a coach-like experience for participants. While human coaches displayed advantages expressing empathy and using probing questions to tailor their support, they also encountered tremendous barriers and frustrations adapting to text-based virtual coaching. The chatbot coach had advantages in being persistent, as well as more consistently giving choices and options to foster client autonomy. We discuss implications for the design of virtual health coaching interventions.

Keywords: Health coaching, chatbots, conversational agents, wizard-of-oz, type 2 diabetes, self-management

1. INTRODUCTION

Interpersonal communication is foundational to the practice of healthcare [4]. Effective communication between clinicians and patients can support patient-centered care and shared decision-making [4,98]. Research in computer supported collaborative work (CSCW) has sought to investigate new ways to use computing technology to improve communication between healthcare stakeholders [14,29,80]. Much focus has been placed on communication between stakeholders inside the walls of the healthcare system, for example during rounds or outpatient visits [29,91] and technology mediation of communication between patients and their healthcare providers [14,24,29,80,91].

The growing prevalence of chronic diseases like type 2 diabetes (T2D) creates an imperative to extend healthcare beyond traditional patient-provider encounters and help individuals self-manage the myriad daily decisions that influence their health [8,89]. In-person health coaching is a common and effective approach to promote self-management [23,79,95,103]. Health coaching seeks to help individuals achieve personal health goals in a collaborative conversation that emphasizes autonomy, motivation, and tailored support [78], and can be effective in supporting sustained lifestyle behavior changes [23,95]. However, there are challenges to scaling in-person practice, particularly in low resource communities. Perhaps most significantly, there are not enough trained coaching practitioners to provide adequate education and support to the growing population of individuals with T2D [28,86]. In addition, there are barriers and disparities in access to in-person coaching, including transportation, community resources, and cost [28,82]. Individuals with low socio-economic status and ethnic minorities are disproportionately affected by chronic conditions, and the continued failure to identify effective interventions to reach these communities has the potential to deepen existing disparities [99]. Even for those with access, the quality of communication itself can be poorer for racial and ethnic minorities due to implicit bias as well as disparities in language and health literacy [2].

Technology has the potential to address these limitations and reach a broader and more diverse population of individuals in their day-to-day lives. One approach is to introduce technology-mediated coaching, which seeks to connect clients with practitioners via telecommunication [86]; in recent years technology-mediated coaching has become increasingly common in practice [9,68,86]. A complementary approach is to replicate the support provided by human coaches with computing technologies, for example conversational agents. Conversational agents have long been explored in many areas of health and healthcare [5,54,102]. Bickmore and colleagues have argued for the advantages of conversational agents to overcome some of the challenges and biases of interpersonal communication in health, while still being able to establish a form of social relationship between the agent and user [5]. There has been a recent increase in the number of conversational interventions in health, often taking the form of scripted, text-based chatbots, which have seen positive results in areas like telemonitoring, cognitive behavioral therapy, and medication management [30,54]. Furthermore, new advances in Artificial Intelligence (AI) pave the way for more advanced conversational agents, capable of more fluid, human-like interactions [1,87]. However, in the context of health coaching, questions remain as to whether technology can ever truly replicate health coaching practice. For example, Rutjes [88] argued that coaching’s emphasis on building personal connections and adapting support to situation-specific contexts make coaching an essentially human activity that cannot be replicated by technology.

In this research, we sought to examine these open questions and explore tensions of humanness in health coaching. Given the challenges in access to in-person coaches for low-resource minority communities, we focused specifically on these populations, and on the experience of remote, virtual coaching, where the coach and client interact via text message at a daily cadence. Our study examined the following research questions:

  • Can a scripted, rule-based chatbot create a positive coaching experience, comparable to that created by a human coach using the same medium (text messaging)?

  • What aspects of the coaching experience, if any, are uniquely human and do not lend themselves to automated approaches?

  • What are the potential advantages, if any, of chatbots for virtual coaching?

To begin addressing these questions, we conducted a two-week study in which we compared coaching experiences with a conversational agent and with a human coach both delivered via text messaging. Individuals with T2D recruited from economically disadvantaged communities were assigned to one of the two study arms. In the first arm, participants interacted with a scripted chatbot, designed based on an established protocol for goal setting (Brief Action Planning, BAP [37]); we used a wizard-of-oz technique [46,66] to imitate the behavior of a chatbot that was under development by our research team. In the second group, participants interacted with a human health coach; all interactions were exclusively over text messaging with no in-person meetings. To reduce variability in coaching styles, coaches were asked to follow the same BAP protocol as the chatbot, but were encouraged to deviate from the protocol when they deemed it necessary in order to improve the coaching experience for their clients. We then examined the participant coaching experience with an emphasis on differences between the two groups.

We found that participants in both the chatbot and human coaching groups reported generally positive experiences consistent with many of the qualities of successful coaching described in prior studies [78]. We also found some support for the previous suggestion that human coaches have advantages that may be challenging to replicate with automated systems [88]. In our study, human coaches were able to go off-script to express empathy or ask additional questions to tailor their support and expand its focus from health to other aspects of individuals’ lives. At the same time, we found that translating human coaching into the medium of text messaging presented many challenges that often led to negative experiences for both participants and coaches. Coaches often struggled to achieve the common ground needed to tailor their support and had difficulties gauging how invested participants were and finding appropriate times for interaction. The study also showed that the chatbot had unique advantages as compared to human coaches. For example, while human coaches often felt compelled to shortcut the BAP script and jump to recommendations, the chatbot’s consistency in providing participants with multiple options contributed to their sense of autonomy.

These findings tap into fundamental questions within CSCW regarding the role of computing mediation in supporting communications between human stakeholders [13,29,80]. They lend further support to previous research highlighting the importance of common ground [15,18] and the challenges of achieving grounding with short text communications [18]. They also further highlight the importance of mutual awareness among participants during social encounters, a characteristic referred to as social translucence [27]. Our study further expands existing knowledge of these phenomena and illustrates how they impact computer-mediated coaching in health.

Furthermore, they contribute to the ongoing discussion regarding the need to expand the scope of CSCW to include questions of cooperation and communication between humans and intelligent systems [29,58,81]. The continuing increase in the adoption and influence of systems powered by computational intelligence presents a pressing need for additional research in order to reach a deeper understanding of such cooperation and to generate design guidelines for future intelligent systems, including intelligent coaching systems in health. Our study showed that intelligent systems have unique strengths in consistency and patience; understanding and leveraging these strengths may help to develop effective intelligent coaching systems that complement coaching by humans.

2. RELATED WORK

2.1. Health coaching and chronic disease self-management

Care for chronic conditions like T2D emphasizes changes to daily lifestyle behaviors, like diet, exercise, sleep, and medication adherence [8]. Effective self-management is challenging because it requires motivation, knowledge, and self-efficacy [8]. Health coaching is an effective intervention to support healthy lifestyle changes [23]. While there are many definitions of health coaching, a core feature is its focus on goal-setting, where the coach and client work collaboratively to set a health goal and make plans towards achieving that goal. The coaching process places an emphasis on cultivating a relationship to increase motivation, and facilitate learning and education [78,103]. A key consideration of health coaching is the conversational style and tone of the coach, which follows the spirit of motivational interviewing, emphasizing collaboration and respecting the client’s autonomy, as opposed to imposing or prescribing directions for change [37,69].

Recent work has argued for the importance of personal and human skills to the success of health coaching [88]. For example, Rutjes argued that successful coaching goes beyond achieving goals, and includes building relationships, implicitly adapting to different contextual factors, and cultivating motivation. These characteristics are consistent with other definitions of coaching [78,103], but place additional emphasis on human involvement that may be difficult to replicate with technology. However, in-person health coaching interventions are limited in their ability to reach everyone in need of support because there are not enough coaching practitioners, resources are not always available in underserved communities, reimbursement varies, and there are barriers for individuals in accessing coaching support, including access to transportation or time off from work [26,86,89].

2.2. Virtual health coaching

In the face of the challenges posed by in-person coaching, there has been increased focus on virtual coaching interventions, using telecommunication and mobile applications to bring coaching support directly to individuals [86]. Sometimes virtual coaching takes the form of audio or video sessions between a coach and their client. Another, related body of research seeks to use artificial intelligence (AI) to create automated coaching interventions [44]. However, the virtual coaching literature is sometimes vague in its definitions and discusses human-human and human-AI interventions interchangeably [86,88]. There is growing evidence that virtual coaching can be effective as a part of a hybrid approach in combination with other health apps, particularly apps for self-tracking [34,86].

Technology-mediated coaching is related to the large body of research in telemedicine which suggests that virtual visits can help improve access for rural and underserved populations [65]. Technologies that rely on human coaches are still inherently limited in their ability to reach all those in need, considering the inadequate number of trained practitioners and rapidly growing population with chronic conditions like diabetes or prediabetes [26,63,89]. This merits the exploration of fully or partially automated coaching systems. While there is research reporting provider perceptions of telehealth programs, little work has explored the perspectives of virtual coaching practitioners [9,68]. This is an important gap in understanding the virtual care ecosystem.

In contrast with technology-mediated coaching, automated coaching seeks to replicate the success of in-person coaching through patient-facing digital tools. While this falls within the large scope of digital interventions to support self-management and wellness, automated coaching interventions emphasize health coaching principles. Coaching interventions are meant to be participant-centered, and respect autonomy by being neither completely user driven, nor fully guided by the app [69]. This contrasts with personal informatics, which includes tools for individuals to explore their self-tracking data for self-discovery and knowledge [56,57]. These tools are often user-driven and exploratory, and rely on participant initiative and engagement. On the other extreme, automated coaching also contrasts with remote monitoring interventions, or behavioral interventions that primarily push information to users [74,100]. In remote monitoring, the user enters data, but the process is guided by the needs of the health system or provider. Similarly, in push-based behavioral interventions, alerts from the application are the driving force guiding the user through behavior change. Automated coaching interventions exist within this continuum.

Automated approaches to health coaching have been an increasing focus in the research community. In particular, because of the back-and-forth nature of coaching, conversational agents are a common interaction approach. Conversational agents for health coaching have a range of focuses and functionality, from strictly goal setting [11], to various forms of personalization and tailored feedback [17], to agents designed to respond with empathy [59]. Unfortunately, there is a lack of specificity and consistency in applications described as automated coaches, which makes comparing them difficult, and very few automated coaching interventions reference specific theories or background literature on health coaching [88].

2.3. Conversational artificial intelligence in health

In addition to automated coaching, there has been an increased focus on conversational agents in healthcare. In health, conversational agents are often scripted or rule-based [50,54]. A common strategy is to build off of an established clinical protocol, for example brief interventions, or cognitive behavioral therapy [30,54]. Text-based agents, or chatbots, are more common in practice than voice interfaces because of security and usability concerns with currently available spoken word systems, especially among populations with diverse language preferences, proficiencies, and accents [45].

In contrast, the ML community has been striving for ever-more human-like chatbots. While the quest for general conversational intelligence is advancing [1,87], most human-emulating bots still include a human-in-the-loop to handle more difficult messages [35]. However, there is also reason to question the approach of seeking convincingly human-like chatbots. People may not prefer human-like conversation with their AIs [16], and most common consumer chatbots do not converse in a way that humans do with each other [83]. There is also mounting fear and concern about the potential of AI and automation to displace or replace human workers, with potentially deep and profound social and economic consequences [25,64]. Specifically, in the case of health coaching, it is not clear whether conversational AI could replicate or supplement the work of health coaches, or which approach is preferable for individuals living with chronic conditions like T2D.

Somewhere on the continuum between completely scripted chatbots and open-domain AI, researchers in CSCW have argued that conversational agents can take advantage of the affordances of dialogs to engage users in novel ways for behavior change, for example discussing self-tracking data or negotiation around goals [51,84]. The work described here explores coaching as a domain for conversational interaction, and asks which aspects of health coaching might lend themselves to automated approaches.

3. METHODS

3.1. Overview of the study design

In this study, we recruited participants with T2D from low-resource communities and assigned them to one of two groups. In the first group (“chatbot”), participants interacted with a scripted chatbot called t2.coach. The study used a wizard-of-oz approach, common to evaluation studies of chatbots, to simulate the fully-functioning system, with a member of the research team sending messages and processing responses from participants [46,66]. In the second group (“human coaching”), participants interacted with an actual human health coach. To reduce potential variability in approaches to coaching, both the chatbot and the human coaches followed the same clinical protocol for structuring the dialogs; however, human coaches were actively encouraged to deviate from the protocol to provide the best support for their clients.

In this section, we describe the design of t2.coach, the wizard-of-oz implementation, as well as the study procedures and analysis methods.

3.2. t2.coach

We iteratively designed t2.coach, a scripted chatbot to support individuals with type 2 diabetes (T2D) in choosing and achieving health goals related to nutrition and physical activity. In line with the predominant approach for designing chatbots in health, the t2.coach conversational flow was based on a protocol for goal-setting and coaching, Brief Action Planning (BAP; [37]). BAP defines a set of steps for health practitioners to guide an individual towards choosing a health goal and making a specific plan to achieve it. The first step of the protocol is helping the client determine a behavioral goal they would like to work towards, offering a Behavioral Menu of suggestions. Next, the coach helps the user determine the specifics of their plan (what, when, where), assess their confidence in the plan, and establish accountability by setting a follow-up time.

Inspired by BAP, the t2.coach script included two primary dialogs: 1) a longer exchange to set a health goal, an excerpt of which is shown in Table 1, as well as 2) a shorter, daily, follow-up exchange to check in on goal progress, discuss barriers to meeting goals, and adjust the daily plan if desired. Dialogs drew directly from example dialogs in BAP and followed the spirit of motivational interviewing to the extent possible. Additional sample dialogs are included in Appendix A.

Table 1.

An excerpt from the t2.coach chatbot script. The right-hand column indicates which step of Brief Action Planning (BAP; [37]) each message corresponds to.

Sender Message BAP Step/Skill

t2.coach Hello, welcome to the t2.coach study! Here you can set goals for improving your self-management and work on achieving these goals. What would you like to work on in the next week?
 A. Improve my diet
 B. Increase physical activity
Opening Question

User A

t2.coach Okay, here are some things you can do to improve your diet. Which would you like to try?
 A. Make ½ my meal fruits and/or vegetables
 B. Choose meals without added sugar
 C. Choose meals with lean proteins
 M. See more
Behavioral Menu

User C

t2.coach Great! Some people find it helps to get really specific about their plan. Here are a few ideas to help you meet that goal. Which would you like to try?
 A. Put lean turkey or smoked salmon on your toast instead of jam
 B. Add some skinless turkey or chicken to your meal
 C. Buy 95% lean (or higher) ground meats
 M. See more
SMART Behavioral Plan

...

3.2.1. Technical Implementation and Wizard-of-Oz.

We designed t2.coach to be implemented as a finite-state chatbot with deterministic, menu-based responses. Content for the chatbot including health goals and specific action plans were drawn from a knowledge base [19]. t2.coach was designed for text-messaging exchanges because of the ubiquity of SMS in the population, and because of potential privacy and security issues with speech-to-text software [49].
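The finite-state, menu-based design described above can be sketched as a small state table with deterministic transitions. The following is an illustrative sketch only, with hypothetical state names and abbreviated prompts drawn from Table 1; it is not the study's actual implementation.

```python
# Illustrative finite-state, menu-based dialog step (hypothetical states;
# prompts abbreviated from Table 1). Not the study's actual code.
STATES = {
    "opening": {
        "prompt": ("What would you like to work on in the next week?\n"
                   "A. Improve my diet\nB. Increase physical activity"),
        "transitions": {"A": "diet_menu", "B": "activity_menu"},
    },
    "diet_menu": {
        "prompt": ("Okay, here are some things you can do to improve your "
                   "diet. Which would you like to try?\nA. ...\nB. ...\n"
                   "M. See more"),
        "transitions": {"A": "smart_plan", "B": "smart_plan", "M": "diet_more"},
    },
}

FALLBACK = "I'm sorry, I didn't understand."

def step(state: str, reply: str) -> tuple[str, str]:
    """Return (next_state, outgoing_message) for a user reply."""
    transitions = STATES[state]["transitions"]
    choice = reply.strip().upper()
    if choice in transitions:
        nxt = transitions[choice]
        # Prompt from the new state (or a terminal message in a full script).
        prompt = STATES.get(nxt, {}).get("prompt", "Great! Let's get specific.")
        return nxt, prompt
    # Off-script utterance: send the fallback and re-prompt the same question.
    return state, FALLBACK + "\n" + STATES[state]["prompt"]
```

The fallback branch mirrors the wizard's rule described in Section 3.2.1: any utterance outside the menu triggers "I'm sorry, I didn't understand" followed by a re-prompt.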

This study utilized an early wizard-of-oz prototype of t2.coach, where a member of the research team sent messages on behalf of the chatbot. The “wizard” strictly followed the same rules that the fully implemented chatbot would use to respond, including the fallback response “I’m sorry, I didn’t understand” and re-prompting the question for off-script utterances.

If participants asked a question outside of an active dialog, the wizard followed a question-answering protocol meant to replicate the question answering of modern, commercially available agents (e.g. Siri, Alexa) [70]. A classifier was trained to categorize queries and route them to an appropriate source – for example, responses to recipe requests came from an online American Diabetes Association (ADA) resource. General informational questions were searched verbatim in Google, with the additional keyword “diabetes,” and responses from Google snippets were sent after being approved by a clinician on the research team. If the question was unclassifiable, or the response was deemed inappropriate, the chatbot responded that it was unable to answer the question.
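The routing rules above can be summarized as a simple categorization function. This is a deliberately simplified keyword-based sketch of the protocol's decision logic; the study used a trained classifier, and the function name and matching rules here are illustrative assumptions.

```python
# Hypothetical sketch of the wizard's question-routing rules; the study
# used a trained classifier, not keyword matching as shown here.
def route_question(question: str) -> str:
    """Return the category used to handle an out-of-dialog question."""
    q = question.strip().lower()
    if "recipe" in q:
        # Recipe requests were answered from an online ADA resource.
        return "recipe"
    if q.endswith("?") or q.startswith(("what", "how", "why", "can", "is", "does")):
        # General questions: searched verbatim plus the keyword "diabetes",
        # with snippet responses clinician-approved before sending.
        return "informational"
    # Unclassifiable: the chatbot replies that it cannot answer.
    return "unclassifiable"
```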

To relay messages between participants and the wizard while protecting participant privacy, we implemented an SMS-forwarding proxy using Twilio [104] such that the wizard and participant sent messages to a Twilio-hosted number. When the wizard was not online, Twilio sent an automatic reply to participants letting them know that their message had been received, and that they should expect a response within 24 hours.
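The proxy's routing behavior can be sketched as follows. This is an illustrative reconstruction (class and variable names are hypothetical, and the real proxy ran behind Twilio webhooks, which are omitted here); it shows only the forwarding and offline auto-reply logic described above.

```python
# Sketch of the SMS-forwarding proxy logic (hypothetical reconstruction;
# the study's implementation ran behind Twilio webhooks, not shown).
AUTO_REPLY = ("Your message has been received. "
              "You should expect a response within 24 hours.")

class SmsProxy:
    def __init__(self, pairs, wizard_online=True):
        # pairs: {participant_number: wizard_or_coach_number}. Both sides
        # only ever see the Twilio-hosted number, never each other's.
        self.to_wizard = dict(pairs)
        self.to_participant = {w: p for p, w in pairs.items()}
        self.wizard_online = wizard_online

    def route(self, sender, body):
        """Return a list of (recipient, body) messages to send via Twilio."""
        if sender in self.to_wizard:                   # participant -> wizard
            out = [(self.to_wizard[sender], body)]
            if not self.wizard_online:                 # offline auto-reply
                out.append((sender, AUTO_REPLY))
            return out
        if sender in self.to_participant:              # wizard -> participant
            return [(self.to_participant[sender], body)]
        return []                                      # unknown sender: drop
```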

3.2.2. Iterative Design.

This study was part of the iterative design of t2.coach. As such, we made refinements to the chatbot’s script over the course of the study based on feedback from the initial wizard-of-oz participants. The most substantial change was to remove some components of the BAP protocol in order to shorten the initial goal-setting dialog. While BAP was designed for human-to-human conversations, we found that text message exchanges needed to be shorter for users to complete them in a single sitting. BAP includes a question to assess confidence in the chosen plan, but this step became superfluous as most participants said they were highly confident (10/10). We removed this step to shorten the conversational flow so that more participants could complete goal setting on the first day.

3.2.3. Messaging with Human Coaches.

The health coaches involved in this study exchanged text messages with participants from their own phones using the same SMS-forwarding proxy as the wizard. Human coaches were given a set of tools to make it easier for them to serve as coaches and follow the BAP protocol. First, to reduce the need for repetitive typing of prompts, coaches were provided with a mobile phone keyboard app with shortcuts to quickly send messages written following the protocol (Supplementary Figure A). In addition, coaches were given access to a dashboard with resources including the complete set of BAP messages, and all of the goals and action plans in the chatbot’s knowledge base. Furthermore, coaches could use the dashboard to access pages with all of the meals and blood glucose readings recorded by each of their participants during the study. As a part of training, each coach practiced following the protocol with a member of the research team to help ensure that deviations from the protocol were intentional, and not due to lack of awareness of the steps or technical difficulties following it.

3.2.4. Personalization and Self-Tracking.

Coaching support strives to be personalized to an individual’s behaviors and preferences, and many virtual coaching interventions are employed alongside self-tracking apps [34,86]. In light of this, t2.coach included a progressive web application for logging meals and blood glucose (BG) readings (Figure 1). In the app, users capture a photo of their meal and enter a free-text description. Next, they enter a pre-meal BG. Two hours after the meal, users are prompted with a text message reminder to enter a post-meal BG. These data were made available to the human health coaches to help them personalize their support. While these data were not used directly by the chatbot, we included self-tracking with the app in both study groups for parity and to examine participants’ attitudes towards self-tracking as part of the coaching experience.

Figure 1.

Figure 1.

Screens from the progressive web application for recording meals and blood glucose readings.

3.3. Participants

3.3.1. Individuals with diabetes.

Participants with T2D were recruited from two Federally Qualified Health Centers (FQHC) in a major United States metropolitan area. Patients served by these sites are predominantly minority (37% are African American and 60% are Latino) and low income, with 64% insured through Medicaid and 16% uninsured. To be included in the study, participants needed to have a self-reported diagnosis of T2D, be between 18 and 65 years old, and own a working smartphone.

3.3.2. Health coaches.

We recruited practicing health coaches to serve as the virtual coaches in this study. Coaches were recruited through researchers’ professional networks, including message board postings. To be included, coaches had to be Certified Diabetes Educators (CDEs) and/or be a practicing health coach working with diabetes patients at one of the community health centers.

3.4. Procedure

After collecting informed consent, a study coordinator administered baseline demographics and a measure of nutrition literacy [101]. Participants began the study in small groups of 1–4 individuals with a 1.5-hour focus group on the design and content of t2.coach. All participants in a given focus group were assigned to the same study condition.

During the initial session, researchers helped set up t2.coach on participants’ phones, and participants were asked to use t2.coach for two weeks. The intervention was described to participants as a partially automated system in which an actual person would be reviewing their messages to help t2.coach respond appropriately. In both groups, participants were encouraged to respond to prompts from the coach and to ask free-form questions whenever they had them. In the human coach group, participants were told that the person reviewing messages was a health coach, while in the chatbot group, participants were told that the person was not a healthcare provider, and therefore would not be able to answer all of their questions.

Because coaches and the wizard could not be available at all hours of the day, participants were asked to indicate a set of times they would be available to exchange messages. Participants and human coaches were paired based on how their available times aligned. Each participant was assigned to a single coach, while each coach was assigned multiple participants, which we refer to as their “clients”.

Within 24 hours of enrollment, participants received a message to begin a goal-setting session. Each day at the agreed upon time, the coach initiated the daily check-in conversation. After the first week, the coach initiated another longer, goal setting conversation, giving participants the option to update their goal.

After two weeks, participants joined 30–60 minute debrief interviews over the phone. The interview guide included general questions about participants’ background, their prior experience with health coaching and self-tracking apps, and their overall experience in the study. The second part of each interview was grounded in transcripts of individuals’ exchanges with their coaches (human or chatbot); the interviewers asked questions based on transcript excerpts and asked participants to explain and contextualize their experiences within those exchanges. The interviewer also asked a targeted question about whether participants felt that they were working with a health coach. At the end of the interview, the researcher administered two post-measures, described below. Participants received $30 for their data plan, $20 for the initial visit, and 25 blood glucose test strips to use for testing during the study. The research protocol was approved by the Western Institutional Review Board (a single IRB for multi-center studies) and the local institutions’ IRB.

At the close of the study, we invited the health coaches for 1-hour debrief interviews. We asked about their experiences in the study and how they compared to prior virtual and in-person coaching experiences. To better understand the coach’s intention when they deviated from BAP, we reviewed transcripts of their exchanges with participants, to probe how they reacted to participant responses, why they responded the way they did, and if they would have done anything differently in retrospect.

3.5. Post-Measures

During the interview, we administered two adapted questionnaires. First, to assess the perceived usability of the text message interactions, we administered 10 items from the Subjective Assessment of Speech Systems Interfaces (SASSI), which has good coverage of broad usability domains [3,41]. Second, to assess the degree of collaboration and shared decision-making, we adapted a 9-item shared decision-making questionnaire (SDM-Q-9 [53]); instead of asking about shared decision-making in a clinical context, the adapted measure asked about shared decision-making related to choosing a health goal. See Supplementary Table C and D for the complete set of questions in the post-measures.

3.6. Data Analysis

For the quantitative analysis, we first calculated descriptive statistics of demographics and baseline measures. Our quantitative analysis aimed to answer the following questions: 1) Was there a difference in perceptions of usability or shared decision-making between human coaching experience and the chatbot? 2) Were there differences in conversational patterns between the two groups? and 3) Were there differences in goal attainment between participants in the two groups?

To answer these questions, we used the following methods: 1) To assess differences in perceived usability and shared decision-making, we compared differences in post-measures with an unpaired t-test. 2) To characterize differences in conversational patterns between the two groups, we calculated the length of conversations as measured by the number of conversational turns (a switch from one speaker to the other) per day. 3) To explore how successful participants were in achieving their chosen goals, we did not directly measure changes in behavior [47]. Instead, we created a measure of self-reported goal attainment by analyzing text message transcripts to identify exchanges where the coach asked their client if they achieved their goal and the client replied with a clearly affirmative or negative response. Goal attainment was calculated for each week of the study and overall, and we compared attainment between the two groups using Fisher’s exact test.
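The conversational-turn measure can be sketched directly from its definition: a turn is a switch from one speaker to the other within a day's transcript. This is an illustrative implementation of that definition, not the study's analysis code; the input format is assumed.

```python
# Sketch of the conversational-turn measure: count speaker switches per day.
# Input format (chronologically ordered (day, sender) pairs) is assumed.
from collections import defaultdict

def turns_per_day(messages):
    """messages: iterable of (day, sender) tuples in chronological order.

    Returns {day: number_of_turns}, where a turn is a switch from one
    speaker (e.g. coach) to the other (e.g. client) within that day.
    """
    turns = defaultdict(int)
    last_sender = {}
    for day, sender in messages:
        if day in last_sender and last_sender[day] != sender:
            turns[day] += 1
        last_sender[day] = sender
    return dict(turns)
```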

For qualitative analysis, participant and coach interviews were analyzed with inductive thematic analysis [10]. The lead author and senior author coded 10% of transcripts collaboratively, with the lead author continuing to code the remaining transcripts while keeping a detailed audit trail on the code book. The senior author independently coded an additional 20% of transcripts for periodic check-in sessions to compare codes and resolve discrepancies through discussion, followed by additional interpretation sessions for axial coding as themes emerged. After coding was complete, we examined data saturation and theme comprehensiveness across participants [32,36]. To compare the prevalence of themes between the two study groups, we tagged each transcript with the corresponding study condition and used the crosstabs feature of NVivo to compare prevalence between groups. Themes were considered equally prevalent if the share of participants who reported that theme in one group was within 20% of the share in the second group.
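The 20% prevalence rule can be stated as a simple predicate. This is a minimal sketch; the counts are hypothetical inputs rather than our NVivo crosstabs output.

```python
def equally_prevalent(reported_a, total_a, reported_b, total_b,
                      margin=0.20):
    """Return True if a theme counts as 'equally prevalent': the share
    of participants reporting it in one group is within `margin`
    (here 20 percentage points) of the share in the other group.
    All counts are hypothetical illustrative inputs."""
    share_a = reported_a / total_a
    share_b = reported_b / total_b
    return abs(share_a - share_b) <= margin
```

For example, a theme reported by 5 of 6 participants in one group and 10 of 12 in the other (identical shares) would count as equally prevalent, while 2 of 6 versus 9 of 12 would not.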

4. RESULTS

4.1. Participants

4.1.1. Individuals with diabetes.

A total of 23 participants were enrolled in the two-week study, of whom 18 participated in debrief interviews. As shown in Table 2, participants were predominantly female and majority Black or Hispanic, with a low median income for a major US city. There were no differences in baseline demographics between groups.

Table 2.

Demographics and baseline measures

N Enrolled 23
Age 54.92 ± 7.16
Gender 75% Female
Race 55% Black
5% White
5% Asian
5% Native American
30% Other/Refused
Ethnicity 30% Hispanic
Median Income < $10k
Nutrition Literacy [101] 4.05 ± 1.61 (out of 6)
20% possibly limited literacy

4.1.2. Health coaches.

Four health coaches facilitated messaging in the Human Coaching group. They had 10 to 18 years of experience working with patients with diabetes, and 3 of the 4 were Certified Diabetes Educators (CDEs). All four self-identified as health coaches, worked for a health system or in private practice, and felt that the coaching approach should be commonplace in healthcare.

4.2. Post-measures and usage statistics

13 participants were assigned to the chatbot group and 10 to the Human Coaching (HC) group. Five participants (1 in the chatbot group and 4 in the HC group) were either lost to follow-up or had to drop out of the study because of a family emergency. Dropout was disproportionately higher in the HC group, such that only 6 of the 10 HC participants completed the study and took part in the post-study interview.

Regarding possible differences in perceived usability, as shown in Table 3, there were no differences in reported usability, as measured by the Subjective Assessment of Speech System Interfaces (SASSI; [41]), or in shared decision-making of goal setting, as measured by the 9-item shared decision-making questionnaire (SDM-Q-9 [53]).

Table 3.

Comparison of post measures between the two study groups

Human Coaching Chatbot

N Enrolled (N Interviewed) 10 (6) 13 (12)
SASSI (n.s. p = 0.94) 4.20 (± 0.74) 4.23 (± 0.55)
Adapted SDM-Q-9 (n.s. p = 0.73) 86% (± 20%) 83% (± 17%)

Regarding possible differences in conversational patterns, daily conversations tended to be significantly longer in the human coach group (Table 4). The median conversation was 3 turns in the chatbot group (approximately the length of the daily check-in script) compared to 5 turns in the HC group, with one conversation continuing for 51 turns.

Table 4.

Length of conversations between participants and their coach in the two study groups

Conversational turns per day Human Coaching Chatbot

Mean (SD)* (p < 0.001) 7.89 (10.11) 4.22 (4.26)
Median (Range)* (p < 0.01) 5 (1 to 51) 3 (1 to 20)

Regarding differences in goal attainment, self-reported goal attainment was consistently higher in the chatbot group, averaging above 80%, compared with 36.4% on average in the human coaching group (Table 5), a difference that was statistically significant by Fisher’s exact test. While attainment was relatively stable in the chatbot group, in the human coaching group it increased from 25% in the first week of the study to over 40% in the second. Although this increase was not statistically significant given the small sample size, the difference between the chatbot and human coaching groups was no longer statistically significant in the second week of the study.
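Fisher’s exact test on a 2x2 table of goals-met versus goals-missed counts by group can be computed directly from the hypergeometric distribution. The sketch below is illustrative only: the example cell counts are from Fisher’s classic tea-tasting example, not from our study data, and a statistics library would normally be used in practice.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]
    (e.g. goals met / not met in each study group). Sums the
    hypergeometric probabilities of all tables with fixed margins that
    are no more probable than the observed table."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def p(x):
        # probability of a table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p(a)
    lo = max(0, col1 - row2)   # smallest feasible top-left cell
    hi = min(row1, col1)       # largest feasible top-left cell
    # small tolerance so tables tied with the observed one are included
    return sum(p(x) for x in range(lo, hi + 1)
               if p(x) <= p_obs * (1 + 1e-9))

# Fisher's tea-tasting table [[3, 1], [1, 3]] gives p ~= 0.4857
```

Because every table probability is enumerated exactly, this approach is well suited to the small cell counts typical of a study of this size, where chi-squared approximations would be unreliable.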

Table 5.

Self-reported goal attainment between the two study groups

Human Coaching Chatbot

Overall (p < 0.01) 36.4% 80.8%
First week (p < 0.05) 25.0% 80.4%
Second week (n.s. p = 0.06) 42.9% 81.4%

4.3. Qualitative Themes

In this section, we identify and describe the main qualitative findings from interviews with participants with diabetes and health coaches. After briefly describing participants and coaches in the study, we report 4 main themes, summarized in Table 6. As shown in Supplementary Tables E & F, themes 1 & 4 were prevalent across participants, while themes 2 and 3 were prevalent in the human coaching and chatbot groups, respectively.

Table 6.

Summary of qualitative themes

Theme 1 Participants in both groups felt like they were working with a health coach
Theme 2 Human-powered coaching had the advantages of empathy and deeper engagement, but encountered multiple challenges with communication via text messaging
Theme 3 The consistency and predictability of the chatbot helped participants persevere in achieving their goals and promoted their autonomy
Theme 4 The directness and intimacy of text messaging created expectations for personalized and continuous support

Quotes from participants with diabetes are labeled with a participant number (e.g. P10), followed by their group in parentheses – Chatbot or HC (Human Coaching). Quotes from the 4 health coaches are labeled with Coach and a number (e.g. Coach #3). Excerpts from text message transcripts are included in monospace font.

4.3.1. Characterizing Participants.

During their interviews, many participants described challenges they had experienced in their prior efforts to self-manage their T2D. Many were in the habit of checking blood glucose (BG) at least once a day, but also described challenges in interpreting and acting on BG readings, especially unexpected or high readings. Participants also described a number of limiting circumstances that hindered their efforts towards self-management, for example inadequate food budgets, food allergies, disabilities, or other physical impediments.

Participants described a range of familiarity with and use of technology. Many were comfortable with text messaging; for example, P10 described regularly messaging with family and friends:

“All the time… I hit the messenger send my message, I message my daughter, my son, I have friends that I text with them too” P10 (Chatbot)

However, others were less familiar with texting, and in some cases adamantly disliked it, preferring to return texts with voice messages, or send voice memos:

“The only time I text to my children is this day at work and I have to tell them something… I don’t like texting, I don’t.” P5 (Chatbot)

In terms of using technology to support their self-management, none had ever used an application on their phone for self-tracking before this study, though some tech-savvy users had set up medication reminders on their phone, or used YouTube to find exercise videos or recipes.

4.3.2. Characterizing Health Coaches.

During interviews, each coach described their health coaching philosophy. Overall, the coaches’ philosophies aligned closely with BAP [37], which was used to guide both the chatbot and the human coaches. Coaches described the importance of being patient-centric, respecting autonomy, and letting clients drive the process.

“I would say my general philosophy is very patient, participant driven, so I really am very much of a coach in the true sense of the word that I work with them on their goals, and I never really, at least at this point in my practice, tell a patient what to do. I just try to get it to come from them.” Coach #2

With goal setting, coaches emphasized helping their clients arrive at goals that are specific and actionable, and advocated for working on only one or two goals at a time. Coaches felt that goals were meant to be cumulative, focusing on small, incremental changes to participants’ current practices, to help them build up healthy habits over time. Lastly, coaches described the importance of asking questions to learn about participants’ current practices and help drive them towards practical goals.

4.4. Theme 1: Participants in both groups felt like they were working with a health coach

Notably, nearly all participants in both the human coach and chatbot groups stated that they felt they were working with a health coach and used words like “coach” or “teacher” to describe the system. When describing their experience in the study, participants mentioned a number of phenomena consistent with health coaching. For example, setting actionable goals was the focal point of conversations with t2.coach; participants in both groups chose goals to work on, and most recounted examples of behaviors they changed to meet their goals. Many participants described their experiences acting on specific suggestions from their coaches; for example, P8 followed the recommendation to find a friend to go on walks with.

“I did find a friend. She does walk with me… We did from here to ████, walking.” P8 (Chatbot)

In the process of working towards their goals, participants in both groups described how working with the coach increased their motivation to pursue healthy behavior changes.

“I like it’s better than you know, you try to work on the key issue like motivate it for me. It gives me motivation and teach me” P15 (HC)

In addition to following goals and experiencing increased motivation, participants in both groups recounted other experiences related to different aspects of coaching. In both groups, participants commented that their exchanges with the coach felt like a conversation, appreciating the communication and the back-and-forth exchange. It was perhaps unsurprising that participants in the human coach group described how they felt they were building a relationship with their coach over time. More notably, participants in the Chatbot group similarly described that they appreciated the relationship with their coach especially when they did not always feel that they were able to talk about their diabetes with others in their life.

“Yes, it was nice to talk to somebody, you know, about diabetes because I don’t even want to talk to stuff like this to somebody else. Who wants to know what the goals were and what I would like to do and what I would like to accomplish?” P6 (Chatbot)

For many participants, the regular contact from their coach promoted mindfulness of the goals they had chosen and helped them determine a plan to meet those goals. The regularity of communication with their coach also helped participants feel that there was an entity keeping an eye on them. This feeling of someone looking out for them promoted a sense of external accountability to work towards meeting stated goals. Through the process of working with their coach, participants described learning and gaining knowledge of new approaches for their self-management, whether altogether new behaviors or new ways to accomplish longstanding goals. Participants in both groups described a number of other experiences consistent with health coaching, which we summarize with illustrative quotes across both groups in Supplementary Table G. These themes were prevalent across participants in both study groups; while not every single participant experienced all positive aspects, each theme was found across both study groups approximately equally (within 20%).

The exchange between the chatbot and P5 in Table 7 exemplifies the manner in which even the fully-scripted chatbot created an opportunity for a conversation that promoted mindfulness and reflection over the participant’s experience pursuing personal goals. P5 responded thoughtfully to open-ended prompts from the chatbot, to reflect on the barriers they encountered meeting their goal and formulate a plan to address those barriers.

Table 7.

An exchange between P5 and the chatbot coach discussing strategies to overcome barriers in goal attainment.

Coach (Chatbot) Good morning! How did it go with your plan yesterday? Did you meet your goal to “Make ½ my meal fruits and/or vegetables”? (Yes/No)
P5 Not well,I missed lunch and I ate dinner late. I forgot to take my metformin last night.
Coach (Chatbot) I’m sorry, I didn’t understand that. Please respond with “yes” or “no”
P5 No
Coach (Chatbot) That’s okay, it can happen. What got in the way of meeting your goal?
P5 I accompanied a relative to a doctor’s appointment. Then we went to 2nd appointment and I skipped lunch. My glucose level started to drop so I ate 6 short bread cookies. I didn’t get home until after 8:00pm from 10:30 am. There was no place I could eat in the area.
Coach (Chatbot) How do you think you can overcome that challenge tomorrow?
P5 Packing sandwich in my bag just in case I’m out without access to the right kind of foids.
Coach (Chatbot) Thank you for sharing that with me
Coach (Chatbot) Would you like to choose a specific plan to meet your goal tomorrow?
...

The interview with P5 provided additional context for how communication with t2.coach helped participants in achieving their goals, including by promoting mindfulness and accountability.

“I think the questions made me pay attention to more of what was going on with myself, instead of being distracted because of what I was doing and what I was going through emotionally with situations with my relative and it made me concentrate on what I need to do for myself.” P5 (Chatbot)

In this exchange, P5 describes how they were going through a challenging time caring for a relative, and were not prioritizing their own health. The questions and messages from the chatbot prompted P5 to step back and reflect in the moment on their health goals, helping them continue to pursue those goals despite stressful daily circumstances and to maintain accountability for meeting them.

4.5. Theme 2: Human coaching had advantages, but encountered barriers with text messaging

While there were surprising similarities between the coaching experiences of participants in both groups, there were numerous ways in which the daily exchanges varied between groups. While human coaches started off following the BAP protocol, they ended up embellishing it, and eventually went completely off-script. There were several notable situations when coaches went off-script: to provide empathy and appear more human, or to respond to their clients’ broader needs beyond the protocol. We discuss these below.

4.5.1. The advantages of empathy and an expanded scope of support.

One of the prominent places where human coaches went off-script was to express empathy and display their humanness. Table 8 shows an exchange between P17 and their coach (#4) where P17 says that they have not been feeling well, and the coach responds empathetically, in a way that might appear second nature for human conversation.

Table 8.

A brief exchange between P17 and Coach #4, where the coach responds with empathy after the participant shares that they are still not feeling well.

Coach (Coach #4) Hi ██, how are you feeling? How has it been going with your meals?
P17 Still not feeling well and my eating is not good right now but I am working on getting better. Thank you for asking. Appetite is not good.
Coach (Coach #4) Ok, I’m sorry to hear that. Feel better. I will check in with you again tomorrow at this time. Do you have any questions for me now?
P17 Not yet but waiting on feeling better and then I will have questions.
Coach (Coach #4) Ok sounds good. Take care. Talk to you tomorrow

In their interview, P17 described the appreciation they felt for their coach, who they believed was truly concerned about them and their wellbeing.

“Even though I don’t know whether that person was human or was it, you know, automated, I felt like is it like human and has to be concerned about my health. Because on those days I wasn’t feeling well… even though she didn’t say I’m disappointed I felt like I can’t let her down.” P17 (HC)

In addition to expressions of empathy, coaches were also able to follow their human instincts and go off-script to ask questions that expanded the scope of their coaching support. Many participants were experiencing deeper challenges that were preventing them from fully pursuing nutrition-related goals. This was the case, for example, with participants who experienced unstable housing. When one of these participants brought it up during an exchange, the coach pursued it with further questions and eventually shared additional information about a homelessness resource and also shared information about an upcoming job fair.

“There were a lot of barriers there… she is going through homelessness basically… And that’s why you see… I try to give her some help with the housing and stuff like that which wasn’t really like you know the normal track that we would do. You have to deal with some of that first.” Coach #1

In a minority of cases, coaches’ questioning led to very fruitful exchanges, enabling much more personalized suggestions and support. For example, in Table 9, we see an excerpt of a conversation between P13 and their coach (#2), where the coach learns about the participant’s temporary housing status, and asks a series of questions to learn about their situation and preferences, and help them arrive at healthier breakfast options. This conversation was one of the longest, with 68 utterances and 32 conversational turns over 35 minutes.

Table 9.

An in-depth exchange between P13 and Coach #2

Coach (Coach #2) Hi ██, first, great job adding the green beans
Coach (Coach #2) I reviewed your food logs, as well as your blood glucose levels. I’d love to continue to help you to set goals.
Coach (Coach #2) Let me know and we can chat about them
P13 I am at a disadvantage I am not home I’m in transitional housing
P13 this point. I try to work with what is offered to me. I’m not making
P13 excuses I’m trying to live on a budget that I am not always in
P13 I can eat because I have no way to cook here
Coach (Coach #2) Thanks for sharing this with me, ███. That’s totally understandable that you’re limited in your choices. But not to worry, I have some ideas of what we can do
...
Coach (Coach #2) What other veggies did they serve, that you have tried?
P13 I like the protein idea yes I can do that
Coach (Coach #2) Liked “I like the protein idea yes I can do that”
Coach (Coach #2) Wonderful!
Coach (Coach #2) Tell me about the veggies so I can help you with that part
Coach (Coach #2) All these changes can help your blood sugars to get in better control.
P13 Today for breakfast I had two oatmeal cookies and that’s all I ate
Coach (Coach #2) Okay – let me help you with breakfast.
Coach (Coach #2) I noticed on your logs that you sometimes eat a banana
Coach (Coach #2) What is available at the housing for breakfast?

This in-depth exchange helped the participant to arrive at many reasonable options for healthier breakfasts, which they described trying and enjoying. This type of in-depth exchange was unique to the human coaching group and exemplifies the advantages of the human-driven approach.

4.5.2. Text messaging created barriers to effective communication.

Despite these successes, health coaches expressed overwhelming frustration with text messaging as a medium for coaching, and found it to be much more difficult than in-person or telephone coaching.

“Putting them in a hierarchy [in-person] would be the easiest and then more recently I’ve been doing a lot more phone calls which is harder in certain ways. Text messaging was even harder. There was no ability to pull out nuances.” Coach #3

Participants often replied with short responses, which coaches had difficulty interpreting, and sometimes resulted in miscommunications.

“We’re talking about do you want to keep the same plan tomorrow, “yes,” but what does yes mean? Does that mean literally you’re going to have the same dinner like yesterday?... There was no embellishment from her at any point” Coach #3

Furthermore, the lack of non-verbal cues made nuanced communication difficult, impeding the coaches’ ability to build rapport with their clients, as they would in an in-person setting.

“I just find that difficult to establish a rapport… How can we ask you to establish a rapport with someone through just text message? It’s pretty hard to convey who you are.” Coach #2

Overall, text messaging as a medium limited coaches’ ability to engage in the types of in-depth exchanges they were used to, and created barriers to effective communication that sometimes resulted in miscommunication and misunderstanding.

4.5.3. Coaching without nuance or context.

In addition to challenges communicating via text message, coaches described difficulties developing a coaching relationship without any context about their client.

“I don’t know if she likes apples, I don’t know if she likes peanut butter and that she could be allergic to peanuts for all I know.” Coach #3

In particular, coaches struggled to determine how engaged their clients were in the coaching process based on the short and ambiguous responses they often received, often after a considerable delay.

“I don’t know if I should’ve taken it like, “I don’t want to talk anymore,” or “I’m tired right now,” or it’s you know, she wasn’t welcoming to be pushed… the whole time I was getting mixed messages” Coach #4

When they perceived hesitation, coaches were uncertain how to strike a balance between proactively pushing participants by continuing to message them, and giving their clients space.

“As a coach you are like should I reach out to her again or should I just leave her alone because there may be a reason why… As a coach you got to be sensitive when you are texting, you don’t want it to be like a drag… You really don’t know what the balance is” Coach #1

Reflecting on their exchanges, coaches often viewed these interactions as missed opportunities to engage or push their clients further, and regretted it when they felt they were too hesitant.

4.5.4. Attempts at deeper engagement sometimes backfired.

To try to combat the lack of context, coaches took up their tried-and-true strategy of asking their clients questions. Coaches probed participants to uncover more fundamental challenges they were encountering, or to find a jumping-off point to drive the conversation forward.

“Sometimes you wait for that like little piece of information that’s the entry into a bigger conversation. So maybe they would drop a little tidbit about money being tight, and now you have an opening to talk about budget and planning and frozen vegetables.” Coach #3

As discussed at the beginning of this section, there were a handful of circumstances where question-asking was fruitful in leading to in-depth coaching exchanges; however, there were many other situations when it was less successful, and it occasionally even backfired. When participants were not as engaged, coaches continued to ask questions multiple times in multiple ways.

“So, I would ask the same question in different ways… you could see my maneuvering and trying to get her to focus.” Coach #1

This repetitive questioning occasionally led to annoyance among participants, and a feeling that the coach was not actually listening. In one instance, Coach #3 included some additional clarifying questions to spur the conversation during a daily check-in with P14; P14, however, interpreted these questions to mean that the coach did not remember the goal they had set together the day before, a misunderstanding that put the participant off.

“I thought we were talking about it the other day what my goal was, we want to do it for the whole week… she has the same thing every day we talk it was like somebody was not listening” P14 (HC)

This example highlights the disconnect between coaches’ and clients’ perceptions of their exchanges, which sometimes led to dissatisfaction and frustration.

4.5.5. Coaches want a rewarding experience, too, but rarely received it.

Overall, the health coaches in this study did their best to provide a positive coaching experience for participants, and took pride when their clients showed signs of success. In particular, coaches found satisfaction in the instances when participants engaged for more in-depth exchanges, like the exchange between Coach #2 and P13 in Table 9:

“He really opened up and was like very receptive to coaching. That was really cool. And it kind of worked out that like, the timing was good, too, like he and I were both online.” Coach #2

While these were the highlight of the experience for coaches, they were also quite rare. The more common experience was frustration due to the challenges with text messaging and a lack of context, described above. Coaches disliked receiving short responses from their clients, and in response some coaches went out of their way to embellish their messages in an attempt to convey that there was a human on the other end.

“I tried to lighten the mood a little bit… One of the things that I wanted to avoid was it sounding like I was just a computer. I wanted her to build up there was an actual person on the other end.” Coach #3

The disconnects that resulted from the challenges of text messaging, discussed above, led the coaches to feel frustrated and dissatisfied that their clients were not fully committed to the coaching process, and were not stretching or challenging themselves.

“At this point is when I realized that she [P14] chose the half a plate of vegetable goal because it was easy. This is like something that she did all the time.” Coach #3

Ironically, however, Coach #3’s dissatisfaction was the result of a fundamental miscommunication. In their interview, P14 discussed how they had completely changed their eating habits during the study, and were in uncharted territory with their nutrition goal.

“I am not normally eating salad, you know every day with my meals, I don’t.” P14

Because of the challenges of text messaging and difficulty perceiving how engaged their clients were, coaches received little direct feedback on how they were doing in their role as virtual coach. Even when participants were having highly positive experiences, coaches were not able to see this or share in this satisfaction until the very end of the study, if at all.

4.6. Theme 3: The consistency and predictability of the chatbot helped participants persevere in pursuing their goals and promoted their autonomy

While text messaging presented considerable barriers for human coaches, it also gave unique advantages to the chatbot. Specifically, its consistent, if annoying, behaviors helped individuals to persist in pursuing their goals. Furthermore, its strict adherence to the BAP script mandated consistency in including choices for goals and behaviors; these choices helped promote participants’ autonomy and sense of agency.

4.6.1. Perseverance in pursuing goals.

Many participants found that the chatbot demonstrated patience in its responses, always allowing users to make a choice, and re-prompting with the question if it did not understand the user’s response.

“It always gave me an answer. It never cut me off like it gave me what you call feedback, computer talk or whatever… but it left it open so I could continue to think on it.” P9 (Chatbot)

A common comment among participants in both groups was to describe their coaching experience as “annoying, but helpful.” The “annoyance” was particularly salient for participants who were not frequent users of text messaging and who preferred talking on the phone. Part of the annoyance stemmed from the fact that text message notifications would sometimes arrive at inopportune times, for example during a doctor’s appointment, at church, or when the participant was with friends. Poor timing of messages was compounded by the fact that participants felt obligated to respond to messages soon after they arrived.

“But it’s just really annoying when it’s just not giving the person a chance to think, understand. It’s just fast, you have to answer fast… But when a patient or someone is doing something, we can stop what we are doing just because we need to answer this fast” P8 (Chatbot)

However, when discussing the aspects of the coach that they found annoying, participants in the Chatbot group often described them as a double-edged sword, acknowledging the pushiness of the Chatbot as a necessary evil in achieving desired changes in their self-management.

“There’s a positive message and we have to believe each message that comes through is for a reason, is to keep us to maintain us healthy… even though it’s from an automated service, you know, so the power of positivity is there… Hey, you don’t get rid of your mom because she is annoying.” P8 (Chatbot)

The features that contributed to annoyance, like the persistence and consistency of messaging, were also tied to participants’ perceptions of what made the intervention helpful: increasing motivation and keeping behavior change intentions salient throughout the day to spur positive behaviors.

4.6.2. Choice and autonomy.

One of the predominant themes unique to interviews of participants in the chatbot group was regarding their appreciation for choices and options presented to them by the coach. BAP provides an opportunity for participants to select a preferred option for setting goals. While human coaches followed this part of the protocol initially, they eventually wound up suggesting individual goals rather than sharing a menu of choices. One human coach participant (P13) said that the coach “gave” them goals “to be ascribed to.”

Coach #4 to P17: For this coming week, I want to make sure that you make 1/2 of my plate fruits and/or vegetables. Is that something you can do?

In contrast, the Chatbot was consistent in following the protocol and offering menus each step of the way. As a result, participants in the Chatbot group appreciated the freedom to pursue options that mattered to them.

“Not only did it give me the options. But then if I didn’t appreciate those options it gave me the chance to request another set of options, you know, I found that to be helpful as well.” P11 (Chatbot)

P6 poignantly contrasted their experience with the chatbot coach to receiving a prescription from a doctor’s office: in the case of the doctor, it was an instruction, but with the chatbot, it was a choice.

“He gave you a variety of choices. It’s not like if you go to a doctor he tells you, we have to put this medicine if you wanted to heal, you know, he gave you more choices… and see what works better for you, so I think that was better” P6 (Chatbot)

4.7. Theme 4: The directness and intimacy of text messaging created expectations for personalized and continuous support

While there were a number of differences in participants’ experiences with human coaches in contrast to the chatbot, there were also a number of notable similarities, particularly in regards to their expectations from virtual coaching delivered through text messaging. Most notably, the participants saw the daily availability as a key advantage that also presented a stark contrast with their previous in-person coaching experiences.

“The thing that it kept track with you… it was constantly there for you almost every day. So, you’re never really alone.” P13 (HC)

However, daily engagement also raised expectations for a level of support connected to participants’ daily activities. This heightened expectation went unrealized in both the Chatbot and Human Coaching groups, with all participants wishing for suggestions that were more related to the specifics of the meals they were logging.

“I have salad with tomatoes, onions, and I put sunflower seeds, and I put dressing, you know, so was it okay or was it not okay, what should I not put in my salad?” P14 (HC)

The daily nature of virtual coaching, combined with the fact that the study included meal and blood glucose logging, may have contributed to expectations for more direct feedback on the meals participants had entered. For example, P8 felt that the experience of logging was disconnected from their coaching experience, and wished to receive more feedback based on what they were eating, like what kinds of additions to oatmeal would be best for their BG.

“They are not connected because I took a picture of my food but you didn’t say that is good. So its separate. Because if you eat a little bit of oatmeal and it raises your sugar 50% when it is supposed to be more healthy than there is something wrong, right? Is it the milk that I am using?” P8 (Chatbot)

This desire for more specific feedback and suggestions based on participants’ meal logs was the most commonly expressed recommendation from participants in both groups. Along these lines, participants also asked for more actionable and varied suggestions like recipe ideas, workout videos, or lists of healthy food items to buy at the grocery store.

5. DISCUSSION

In this research, we aimed to unpack tensions of humanness in virtual health coaching. While there has been an increased focus on conversational technologies in healthcare, some have argued that the human element is irreplaceable in health coaching [54,88]. We completed a two-week study with two versions of a virtual health coaching intervention. In one group, participants interacted with a scripted, wizard-of-oz chatbot based on Brief Action Planning (BAP; [37]). In the other group, participants interacted with an actual health coach, who started with the same protocol as the chatbot for consistency, but could embellish as necessary. We sought to compare and contrast the experience of coaching in these two groups, to explore 1) whether automated chatbots have the potential to serve as virtual health coaches, 2) whether there are any aspects of coaching that are uniquely human, and 3) potential advantages of automated conversational approaches for health coaching in a virtual setting. Below we discuss the main results of the study and their implications for future research in CSCW and for the design of virtual coaching interventions in health.

5.1. A comparable coaching experience with a chatbot

One of the overarching questions in this study was whether fully automated coaching systems are capable of creating positive coaching experiences. Overall, we found that participants from both groups reported generally positive experiences, and described their time in the study as working with a coach. Many of the themes observed in the accounts of their coaching experience — like increased motivation, learning and education, and accountability — align well with the description of positive coaching experiences by Olsen and others [78,103]. While it was not surprising that experienced health coaches were able to create a positive coaching experience, it was notable how similar experiences were between the two groups, despite divergent conversational patterns; human coaches had longer conversations that covered broader topics than the chatbot. Moreover, there were no differences in post-test assessments of usability or shared decision-making between the two groups. Interestingly, self-reported goal attainment over the study period was higher in the Chatbot group (over 80%) than the Human Coaching group (less than 50%). These results support the potential of even relatively simple automated approaches to cultivate a coach-like experience to support self-management.

On one hand, these findings challenge previous arguments that coaching is a uniquely human domain and that creating a positive coaching experience inevitably requires the involvement of human coaches [88]. On the other hand, this finding is consistent with multiple previous investigations that showed the efficacy of conversational agents in creating positive experiences in many areas related to individuals’ health [6,7,30,54,60]. Our study further supports these previous observations and extends them into the context of health coaching. However, our study also showed that while both human coaches and chatbots can create positive coaching experiences, they each have their unique advantages and limitations. We discuss these below.

5.2. Advantages and challenges of human-powered coaching via text message

Previous research argued that human coaches have characteristics that are uniquely human and cannot be replicated with automated systems [88]. Our study provided some support to this claim; human coaches were unmatched in their ability to express empathy and to flexibly expand the scope of support based on their understanding of individuals’ needs. However, it also showed that text messaging as a medium for coaching had several important limitations, often leading to negative experiences for both coaches and participants. We discuss these below.

5.2.1. Empathy, expanded scope of support, and accountability.

In this study, coaches demonstrated several important characteristics that had a positive impact on coaching experiences. First, they were unmatched in their ability to express empathy and build a human connection with their clients. Many participants in the human coaching group felt that their coaches really cared about them, which was both motivational and encouraging. Second, coaches were able to use their intuition and experience to identify their clients’ unmet needs, and used these cues to provide context-sensitive support for other aspects of participants’ lives, including housing and employment. Both of these advantages in the Human Coaching group are consistent with Rutjes’ account of health coaching, which emphasized interpersonal human connection and coaches’ ability to adapt support to situation-specific contexts [88]. In particular, the expanded scope of support and ability to adapt to multiple contextual factors is a substantial unsolved problem and an area of ongoing work in conversational AI [33,54], but is critically important to supporting the complexity of self-care practices [77,85]. Similarly, while previous research has explored imbuing chatbots with empathy [59], other studies suggest that individuals can distinguish automated empathy from the real thing, and prefer empathic responses from actual humans [71]. Pursuing automated approaches to empathy also has ethical implications, as there is potential for deceptive applications or unintended consequences on mental health and social interaction [71].

Another possible advantage of human coaching, less explored in the previous literature, is the sense of accountability inherent in relationships with human coaches. In the previous section, we suggested that high self-reported goal attainment in the chatbot group indicated that individuals in this group met their goals more often. One explanation of this finding is that the chatbot was more effective than human coaches in helping participants achieve their chosen goals. More plausibly, however, participants may have opted for goals that were easier to achieve with the chatbot, while human coaches encouraged them to take on more challenging goals. If that was indeed the case, chatbots could take concrete steps to encourage participants to set more challenging goals that are more likely to lead to improvements in health. For example, they could suggest incorporating a secondary, challenging goal alongside a primary, attainable goal [73], or setting adaptive goals that change over time based on the user’s behaviors [52].
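As a concrete illustration of the adaptive-goal idea cited above [52], a chatbot could raise or lower a weekly target based on how often the client recently met it. This is a minimal sketch; the thresholds, step size, and bounds are hypothetical and not taken from the study.

```python
def adapt_goal(current_target, attainment_rate, step=1, floor=1, ceiling=7):
    """Adjust a weekly goal target (e.g., days per week of an activity)
    based on the fraction of recent check-ins where the goal was met.
    All thresholds below are illustrative assumptions."""
    if attainment_rate >= 0.8:
        # Consistently met: nudge toward a more challenging goal.
        return min(current_target + step, ceiling)
    if attainment_rate < 0.4:
        # Rarely met: scale back so the goal stays attainable.
        return max(current_target - step, floor)
    # Otherwise, hold the goal steady.
    return current_target
```

A rule this simple would pair a primary, attainable target with gradual increases, rather than letting clients settle permanently on easy goals.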

Furthermore, it is possible that the perception of social commitment and accountability varied between the two groups, which also contributed to the discrepancy in self-reported goal attainment. Many factors can influence goal choice and attainment, including social commitment and accountability [52,73]. It is possible that the perception of a human on the receiving end of messages in the human coaching group fostered accountability and honesty, while participants in the chatbot group felt less social accountability and were more comfortable over-reporting their accomplishments [72]. This explanation contrasts with findings in mental health treatment suggesting individuals may be more forthcoming with an agent than a human counselor [61]. However, a key difference here is that the health coach could view the participant’s meal logs and objectively assess goal attainment, which may have fostered accountability. Future work could further explore perceptions of accountability with human and non-human conversational partners in coaching.

5.2.2. Precariousness of in-depth conversations without common ground.

Despite these advantages, health coaches described significant challenges communicating with clients via text messaging. Because text messaging has low information bandwidth [20] and lacks nuance, coaches and clients described a number of disconnects and misunderstandings. These at times led to frustration for both coaches and clients, thus negatively impacting the coaching experience.

We relate these findings to the notion of common ground [15], widely used in the CSCW literature. Common ground describes the shared understanding between individuals that is necessary to facilitate effective conversation. It is built over time through collaboration and discussion, for example between colleagues in a workplace. Coiera [18] described how common ground is relevant to understanding not just human-human but also human-computer interaction. Establishing common ground can be challenging even for relatively straightforward tasks like scheduling, let alone complex tasks like health coaching [48]. Coaches in our study had substantial difficulty establishing common ground with participants over text message, despite their repeated attempts to engage in more in-depth conversations as they would in an in-person setting. The lack of common ground in these conversations may have contributed to the disconnect and dissatisfaction some participants reported. Furthermore, consistent with prior research [76], we found that delays in responses between coaches and participants further challenged understanding and satisfaction with conversations.

Notably, some of the successful in-depth conversations were preceded by the coach reviewing the participant’s meal logs, which Coiera describes as “pre-emptive grounding” [18]. While grounding may be more challenging via text message, clients’ self-tracking data offers an avenue to build common ground before coaching exchanges begin. Research in CSCW has explored the use of self-tracking data artifacts and visualizations to improve common ground in patient-provider communication [13,91]. These approaches have so far been applied to in-person discussions, but the principles can apply to remote conversation as well, for example the need for both parties to view similar visualizations [40,63]. Self-tracking data also offers a path to grounding in automated systems, by incorporating user-tracked data to inform dialogs, for example offering feedback on specific meals, or tailoring suggestions based on the user’s recent logs.
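One way to sketch this kind of pre-emptive grounding in an automated system is to seed a coaching exchange with a reference to the client’s most recent log entry. The log field names and message phrasing below are invented for illustration, not drawn from t2.coach.

```python
def grounding_prompt(meal_logs):
    """Build an opening message grounded in the latest self-tracked meal,
    falling back to a generic opener when no logs exist. The dictionary
    keys ("timestamp", "description") are hypothetical."""
    if not meal_logs:
        return "Hi! How did your meals go today?"
    latest = max(meal_logs, key=lambda log: log["timestamp"])
    return (f"I saw you logged {latest['description']}. "
            "How did that meal fit with your goal this week?")
```

Opening with the client’s own data, rather than a generic prompt, mirrors what coaches did manually when they reviewed meal logs before a conversation.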

In addition to challenges in establishing common ground, our study highlighted multiple challenges related to coordinating and organizing conversations. Nardi et al. [75] showed that a significant portion of instant message (IM) exchanges in the workplace focused on organizing the appropriate context for information exchange, including negotiating availability and maintaining a sense of connectedness. This was also the case in our study, where coaches struggled to find appropriate times for engagement. Prior work has shown that some individuals can engage in in-depth exchanges with multiple threads in a single conversation, thus covering both coordination and information exchange [43]; however, these exchanges were rare in our study. Furthermore, in a workplace setting, individuals who engaged in simpler conversations addressed other communication needs, like logistics and social ties, outside of the IM conversation, which was not possible with text-only coaching [43]. Our results underscore the difficulty of establishing common ground and coordinating engagement via text message alone, and suggest that text-based coaching interventions ought to either favor simpler exchanges with a lower cost of establishing common ground through conversation, or provide other means of establishing common ground beyond brief text messages.

5.2.3. Difficulty assessing engagement without social translucence.

In addition to challenges establishing common ground via text message, there were also a number of important social cues missing. Challenges related to coordinating opportunities for engagement via text messaging, described in the prior section, were further exacerbated by the lack of cues that could help coaches and their clients to maintain awareness of each other’s actions and context.

Previous research in CSCW used the notion of social translucence [27] to identify important characteristics of digital systems for fluid social interactions, including visibility of the other party’s status and availability. In our study, social translucence was lacking because coaches could not see when their clients were available to receive messages, or otherwise occupied, and had no additional information with which to interpret a lack of responsiveness or curt replies. Our attempts to impose external structure to promote coordination, for example allowing coaches and clients to list the times they would be available, were only partially successful. Previous research has explored conversational coaching via other text-based platforms that offer more visibility of the user’s status to better enable social translucence [67,97]; however, further research is required to support the fluid social interactions at the heart of health coaching.

One potential direction is to utilize the sensing capabilities of mobile platforms to infer a user’s state and status [22]. Contemporary smartphones are able to capture considerable amounts of data about their users, and many contextual factors are relevant for chronic disease self-management [77,85]. In our study, this awareness of clients’ context could have helped human coaches tailor their support, and offered more visibility into when they should persist with messaging or pull back. For automated coaching systems, contextual data could help determine the times when clients will be most receptive to messaging [55,74]. However, prioritizing the visibility of a user’s state is in tension with preserving their privacy [27]. Health coaching is deeply personal, and future work could explore this tradeoff in coaching contexts, which may differ from the workplace.
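A toy version of using contextual data to time messages [55,74] could simply estimate, from past messaging history, the hour of day at which a client is most likely to reply. This stands in for much richer context sensing; the data format is an assumption for illustration.

```python
from collections import defaultdict

def most_receptive_hour(response_history):
    """Estimate the hour of day when a client is most likely to reply,
    given (hour, replied) pairs from past messaging history. A toy
    stand-in for richer mobile context sensing."""
    replies, totals = defaultdict(int), defaultdict(int)
    for hour, replied in response_history:
        totals[hour] += 1
        replies[hour] += int(replied)
    # Pick the hour with the highest observed reply rate.
    return max(totals, key=lambda h: replies[h] / totals[h])
```

Even this crude signal illustrates the privacy tension noted above: improving timing requires retaining behavioral data about the client.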

5.2.4. Heightened and unmet expectations for personalization.

Our study showed that text messaging as a coaching medium had several important limitations. At the same time, coaching over text messaging, combined with the integration of self-tracking data, left participants with specific expectations for the type of support they would receive. In contrast with in-person coaching, participants appreciated that t2.coach was available every day to offer support and answer questions. Furthermore, because the study included self-tracking of meals and BG levels, participants expected the content of coaching to be highly specific to their own meals and behaviors, and wished for more specific suggestions about how to modify their common meals, or for other recipes to try. Yet this expectation for personalized support grounded in their self-tracking data was largely unmet in both groups. While half of the coaches attempted to review participants’ records, they found this process inconsistent with their typical practice of relying on in-depth conversations with clients to obtain needed information. Furthermore, reviewing data collected by multiple clients would require a considerable investment of time and effort from the coaches [62].

In contrast with t2.coach, which focused on holistic coaching via goal setting and action planning, other prior coaching interventions have focused on feedback related to specific meals, images, or behaviors [17,42,63]. Given the positive coaching experience observed with t2.coach, it’s possible that these approaches could be combined. In addition, data-driven systems can take advantage of machine learning to reduce the need for data analysis by human experts to personalize support. For example, many automated “coaching” interventions focus primarily on automatically generating targeted feedback on behaviors, captured with self-tracking and mobile sensing [17,44]. In addition, there is a growing body of research on nutrition-grounded and conversational recommendation systems, to make healthy recommendations based on what individuals have logged [12,90,96]. Indeed, because computational systems are able to process large amounts of data and make statistical inferences, automated systems may be especially well equipped to make certain kinds of recommendations, for example based on patterns in self-tracking data.

5.3. Unique advantages of chatbots as virtual coaches

While the virtual setting for coaching presented barriers for human coaches, it gave the chatbot a number of unique advantages. In particular, participants appreciated the “patience” and consistency of the chatbot, as well as the choice and autonomy it offered. While human coaches often went off-script to narrow their suggestion to a single goal, the chatbot consistently followed the protocol and always offered a menu of choices, which participants highly appreciated. Furthermore, the tone of its messages, carefully scripted by a team of experts in health behavior communication, was perceived as supportive and motivational. This is consistent with the argument by Bickmore and colleagues that automated conversational agents may not only be effective clinical communicators, but can be superior to human practitioners in some ways, because they can consistently follow clinical communication guidelines and best practices [5]. The chatbot’s insistence on always offering a menu of options helped create a sense of autonomy for participants, who appreciated the freedom to find a goal that suited them. This emphasizes the importance of tone, style, and personality in chatbot design [21], and reaffirms for future interventions the common practice of designing conversational agents based on clinically validated protocols [30,54,59,92].
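The always-offer-a-menu behavior, including the option to request “another set of options” that P11 described, can be sketched as a simple scripted dialog step. The example goals and batching logic here are hypothetical, not the actual t2.coach script.

```python
def offer_goal_menu(goal_pool, batch_size=3):
    """Yield successive menus of goal options, so the user can always ask
    for another set of options instead of being steered to a single goal.
    The goals themselves are invented for illustration."""
    for start in range(0, len(goal_pool), batch_size):
        yield goal_pool[start:start + batch_size]

goals = ["Fill half your plate with vegetables",
         "Take a 15-minute walk after dinner",
         "Swap sugary drinks for water",
         "Log your blood glucose before breakfast"]

# Each item of `menus` is one batch of choices the chatbot could present.
menus = list(offer_goal_menu(goals, batch_size=2))
```

Encoding the menu in the script, rather than leaving it to coach discretion, is what guaranteed every participant a choice at every step.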

A second potential advantage was the chatbot’s persistent nature, which allowed it to be “annoying but helpful,” a quality many participants viewed as necessary for achieving desired behavior change. While human coaches were sometimes hesitant to appear too “pushy”, the chatbot’s insistence was appreciated by the participants, who admitted, however begrudgingly, that the persistence was necessary to help them go through with the changes and meet their goals. In some ways, this is consistent with research in text-messaging interventions for health promotion, suggesting that consistent messages at a regular cadence can effectively spur behavior change, even with little interactive engagement from participants [31,38,39]. This suggests that brevity and consistency may be important design goals for chatbots in health.

5.4. Future directions for virtual health coaching

While t2.coach was designed as a scripted chatbot, our results also have implications for the design of AI-based chatbots in health coaching. Often, the approach to building more realistically human chatbots in the AI community relies on learning from large data sets of example dialog between humans to train a dialog model [33,94]. Our results problematize this approach for virtual health coaching. In this study, both participants and coaches reported notable differences with their prior in-person experiences, and human coaches encountered substantial difficulties translating their expert approach to a text-based virtual setting. This suggests that a dialog model trained on in-person exchanges is unlikely to generalize to a virtual setting. Even a corpus of virtual coaching exchanges may be fraught, because of the messiness we observed as coaches adjusted to the text-based medium. Meanwhile, the chatbot was able to cultivate a similar experience without completely human-like dialog, suggesting multiple paths to effective conversational health coaching interventions. Together, these results suggest that pursuing more human-like coaching chatbots by learning from human-human exchanges may not be the right approach, and more research is necessary to first characterize what works for virtual coaching conversations, perhaps looking to the content of commercially successful coaching platforms [9,68].

In addition, our results suggest opportunities for systems that combine human health coaches with automated systems like chatbots. Notably, there was a synergy between the areas where each excelled: the human coaches outperformed the chatbot in longer, more in-depth exchanges, while the chatbot was effective at daily, brief interactions. However, there are open questions about different ways to conceptualize this combination. On one hand, the “humbots” described by Grudin and Jacques use humans quietly behind the scenes and often do not disclose human involvement to their users [35]. On the other hand, Seering and colleagues envisioned a way for chatbots to be embedded within social settings like forums or message threads [93]. In this vision, chatbots do not masquerade as humans, but exist within a conversational thread, explicitly labeled as bots, and serve a supportive role for the humans in the interaction. In the case of coaching, the human might engage in less frequent, more in-depth conversations, while one or many chatbots engage in daily, automated check-ins. However, how to handle the handoff of information between bots and humans in the loop, as well as how to balance the two, are still open questions. Furthermore, the comparative benefits and limitations of these different approaches have not been studied directly and require further research, as does unpacking the ethical implications of how to label AI agents in human-AI collaboration [25,58].

5.5. Limitations

One notable limitation of this study was that the t2.coach chatbot was a wizard-of-oz prototype, not a fully automated system, which meant that responses from the chatbot were delayed. This created parity in experience between the two study groups, because messages from human coaches would necessarily be delayed, but may limit the generalizability of some of our findings to fully automated systems. In addition, the study took place over a two-week period, and engagement with and perceptions of the system may have changed with extended use. As a pilot study, the sample size was small, and disproportionately more participants were lost to follow-up in the human coaching group, which led to an uneven distribution of participants across the two groups and could limit the validity of our findings.

5.6. Conclusion

Translating an intervention that works in an in-person setting to a virtual one is not always straightforward. In a text-based virtual coaching setting, an automated chatbot was able to cultivate an equivalently coach-like experience, and was advantaged by sticking to the script to offer choices, and persistently checking in. Human coaches offered empathy and were able to engage in deeper discussion, but encountered frustrations and barriers establishing common ground and coordinating engagement with clients over text messaging. Future virtual coaching interventions can incorporate more data-driven personalization and consider novel ways to combine automated and human expertise.

Supplementary Material


CCS Concepts:

Applied computing~Life and medical sciences~Consumer health • Computing methodologies~Artificial intelligence~Distributed artificial intelligence~Intelligent agents • Human-centered computing~Ubiquitous and mobile computing

ACKNOWLEDGMENTS

This research was funded by the National Institute of Diabetes and Digestive and Kidney Diseases award number R01DK113189 and the National Library of Medicine award number T15LM007079.

Contributor Information

ELLIOT G. MITCHELL, Columbia University, Department of Biomedical Informatics, USA

ROSA MAIMONE, Fondazione Bruno Kessler, Trento, Italy.

ANDREA CASSELLS, Clinical Directors Network (CDN), USA.

JONATHAN N TOBIN, Clinical Directors Network (CDN) and The Rockefeller University, USA.

PATRICIA DAVIDSON, West Chester University, USA.

ARLENE M SMALDONE, School of Nursing, Columbia University, USA.

LENA MAMYKINA, Columbia University, Department of Biomedical Informatics, USA.

REFERENCES

  • [1].Adiwardana Daniel, Luong Minh-Thang, So David R Jamie, Fiedel Noah, Thoppilan Romal, Yang Zi, Kulshreshtha Apoorv, Nemade Gaurav, Lu Yifeng, and Le Quoc V.. 2020. Towards a Human-like Open-Domain Chatbot. Retrieved February 6, 2020 from https://www.msxiaobing.com/
  • [2].Ashton Carol M., Haidet Paul, Paterniti Debora A., Collins Tracie C., Gordon Howard S., O’Malley Kimberly Laura A., Sharf Barbara F., Suarez-Almazor Maria E., Wray Nelda P., and Street Richard L. 2003. Racial and ethnic disparities in the use of health services. Journal of General Internal Medicine 18, 2: 146–152. 10.1046/j.1525-1497.2003.20532.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [3].Kocaballi A Baki, Laranjo Liliana, and Coiera Enrico. 2018. Measuring User Experience in Conversational Interfaces : A Comparison of Six Questionnaires. Proc. of British Computer Society Human Computer Interaction Conference (BCS HCI ‘18): 1–12. Retrieved July 24, 2018 from https://www.researchgate.net/publication/326188575 [Google Scholar]
  • [4].Barry Michael J. and Edgman-Levitan Susan. 2012. Shared Decision Making — The Pinnacle of Patient-Centered Care. New England Journal of Medicine 366, 9: 780–781. 10.1056/NEJMp1109283 [DOI] [PubMed] [Google Scholar]
  • [5].Bickmore Timothy, Gruber Amanda, and Picard Rosalind. 2005. Establishing the computer–patient working alliance in automated health behavior change interventions. Patient Education and Counseling 59, 1: 21–30. 10.1016/J.PEC.2004.09.008 [DOI] [PubMed] [Google Scholar]
  • [6].Bickmore Timothy W., Pfeifer Laura M., and Jack Brian W.. 2009. Taking the time to care: empowering low health literacy hospital patients with virtual nurse agents. Proceedings of the 27th international conference on Human factors in computing systems: 1265–1274. 10.1145/1518701.1518891 [DOI] [Google Scholar]
  • [7].Bickmore Timothy W., Silliman Rebecca A., Nelson Kerrie, Cheng Debbie M., Winter Michael, Henault Lori, and Paasche-Orlow Michael K. 2013. A Randomized Controlled Trial of an Automated Exercise Coach for Older Adults. Journal of the American Geriatrics Society 61, 10: 1676–1683. 10.1111/jgs.12449 [DOI] [PubMed] [Google Scholar]
  • [8].Bodenheimer Thomas, Lorig Kate, Holman Halsted, and Grumbach Kevin. 2002. Patient Self-management of Chronic Disease in Primary Care. JAMA 288, 19: 2469. 10.1001/jama.288.19.2469 [DOI] [PubMed] [Google Scholar]
  • [9].Bollyky Jennifer B., Bravata Dena, Yang Jason, Williamson Mark, and Schneider Jennifer. 2018. Remote Lifestyle Coaching Plus a Connected Glucose Meter with Certified Diabetes Educator Support Improves Glucose and Weight Loss for People with Type 2 Diabetes. Journal of Diabetes Research 2018. 10.1155/2018/3961730 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [10].Braun Virginia and Clarke Victoria. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2: 77–101. 10.1191/1478088706qp063oa [DOI] [Google Scholar]
  • [11].Casas Jacky, Mugellini Elena, and Abou Khaled Omar. 2018. Food Diary Coaching Chatbot. In Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers (UbiComp ‘18), 1676–1680. 10.1145/3267305.3274191 [DOI] [Google Scholar]
  • [12].Christakopoulou Konstantina, Radlinski Filip, and Hofmann Katja. 2016. Towards Conversational Recommender Systems. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ‘16, 815–824. 10.1145/2939672.2939746 [DOI] [Google Scholar]
  • [13].Chung Chia-Fang, Dew Kristin, Cole Allison M, Zia Jasmine, Fogarty James A, Kientz Julie A, and Munson Sean A. 2016. Boundary Negotiating Artifacts in Personal Informatics: Patient-Provider Collaboration with Patient-Generated Data. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing - CSCW ‘16, 768–784. 10.1145/2818048.2819926 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [14].Chung Chia Fang. 2017. Supporting patient-provider communication and engagement with personal informatics data. In UbiComp/ISWC 2017 - Adjunct Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers, 335–338. 10.1145/3123024.3123197 [DOI] [Google Scholar]
  • [15].Clark Herbert H. and Brennan Susan E.. 2004. Grounding in communication. In Perspectives on socially shared cognition. American Psychological Association, 127–149. 10.1037/10096-006 [DOI] [Google Scholar]
  • [16].Clark Leigh, Pantidi Nadia, Cooney Orla, Doyle Philip, Garaialde Diego, Edwards Justin, Spillane Brendan, Gilmartin Emer, Murad Christine, Munteanu Cosmin, Wade Vincent, and Cowan Benjamin R.. 2019. What makes a good conversation? Challenges in designing truly conversational agents. In Conference on Human Factors in Computing Systems - Proceedings, 1–12. 10.1145/3290605.3300705 [DOI] [Google Scholar]
  • [17].Clavel Céline, Whittaker Steve, Anaïs Ana\”\is Blacodon, and Martin Jean-Claude. 2018. WEnner: A Theoretically Motivated Approach for Tailored Coaching About Physical Activity. In Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers - UbiComp ‘18 (UbiComp ‘18), 1669–1675. 10.1145/3267305.3274190 [DOI] [Google Scholar]
  • [18].Coiera Enrico. 2000. When conversation is better than computation. Journal of the American Medical Informatics Association 7, 277–286. 10.1136/jamia.2000.0070277 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [19].Cole-Lewis Heather J., Smaldone Arlene M., Davidson Patricia R., Kukafka Rita, Tobin Jonathan N., Cassells Andrea, Mynatt Elizabeth D., Hripcsak George, and Mamykina Lena 2016. Participatory approach to the development of a knowledge base for problem-solving in diabetes self-management. International Journal of Medical Informatics 85, 1: 96–103. 10.1016/J.IJMEDINF.2015.08.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [20].Daft Richard L. and Lengel Robert H.. 1983. Information Richness. A New Approach to Managerial Behavior and Organization Design. Research in Organizational Behavior. 10.1128/jb.125.2.608-615.1976 [DOI] [Google Scholar]
  • [21].Danielescu Andreea and Christian Gwen. 2018. A Bot is Not a Polyglot. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ‘18, 1–9. 10.1145/3170427.3174366 [DOI] [Google Scholar]
  • [22].Dey Anind K.. 2001. Understanding and using context. Personal and Ubiquitous Computing 5, 1: 4–7. 10.1007/s007790170019 [DOI] [Google Scholar]
  • [23].Diabetes Prevention Program Research Group. 2009. 10-year follow-up of diabetes incidence and weight loss in the Diabetes Prevention Program Outcomes Study. The Lancet 374, 9702: 1677–1686. 10.1016/S01406736(09)61457-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [24].Ding Xianghua, Chen Yunan, Ding Zhaofei, and Xu Yiwen. 2019. Boundary Negotiation for Patient-Provider Communication via WeChat in China. Proceedings of the ACM on Human-Computer Interaction, CSCW: 1–24. 10.1145/3359259 [DOI] [Google Scholar]
  • [25].Dubhashi Devdatt and Lappin Shalom. 2017. AI dangers: Imagined and real. Communications of the ACM 60, 43–45. 10.1145/2953876 [DOI] [Google Scholar]
  • [26].Eakin Elizabeth G, Winkler Elisabeth A, Dunstan David W, Healy Genevieve N, Owen Neville, Marshall Alison M, Graves Nicholas, and Reeves Marina M. 2014. Living well with diabetes: 24-month outcomes from a randomized trial of telephone-delivered weight loss and physical activity intervention to improve glycemic control. Diabetes care 37, 8: 2177–85. 10.2337/dc13-2427 [DOI] [PubMed] [Google Scholar]
  • [27].Erickson Thomas and Kellogg Wendy A.. 2000. Social Translucence: An Approach to Designing Systems that Support Social Processes. ACM Transactions on Computer-Human Interaction 7, 1: 59–83. 10.1145/344949.345004 [DOI] [Google Scholar]
  • [28].Evert Alison B., Dennison Michelle, Gardner Christopher D., Garvey W. Timothy, Lau Ka Hei Karen, MacLeod Janice, Mitri Joanna, Pereira Raquel F., Rawlings Kelly, Robinson Shamera, Saslow Laura, Uelmen Sacha, Urbanski Patricia B., and Yancy William S.. 2019. Nutrition therapy for adults with diabetes or prediabetes: A consensus report. 10.2337/dci19-0014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [29].Fitzpatrick Geraldine and Ellingsen Gunnar. 2013. A review of 25 years of CSCW research in healthcare: Contributions, challenges and future agendas. Computer Supported Cooperative Work: CSCW: An International Journal 22, 609–665. 10.1007/s10606-012-9168-0 [DOI] [Google Scholar]
  • [30].Fitzpatrick Kathleen Kara, Darcy Alison, and Vierhile Molly. 2017. Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR mental health 4, 2: e19. 10.2196/mental.7785 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [31].Franklin Victoria Louise, Greene Alexandra, Waller Annalu, Greene Stephen Alan, and Pagliari Claudia. 2008. Patients’ Engagement With “Sweet Talk” - A Text Messaging Support System for Young People With Diabetes. Journal of Medical Internet Research 10, 2. 10.2196/jmir.962 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [32].Fusch Patricia and Ness Lawrence. 2015. Are We There Yet? Data Saturation in Qualitative Research. The Qualitative Report 20, 9. Retrieved September 15, 2020 from https://nsuworks.nova.edu/tqr/vol20/iss9/3 [Google Scholar]
  • [33].Gao Jianfeng, Galley Michel, and Li Lihong. 2018. Neural Approaches to Conversational AI. 10.1145/3209978.3210183 [DOI] [Google Scholar]
  • [34].Greenwood Deborah A, Gee Perry M, Fatkin Kathy J, and Peeples Malinda. 2017. A Systematic Review of Reviews Evaluating Technology-Enabled Diabetes Self-Management Education and Support. Journal of diabetes science and technology 11, 5: 1015–1027. 10.1177/1932296817713506 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [35].Grudin Jonathan and Jacques Richard. 2019. Chatbots, Humbots, and the Quest for Artificial General Intelligence. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ‘19, 1–11. 10.1145/3290605.3300439 [DOI] [Google Scholar]
  • [36].Guest Greg, Bunce Arwen, and Johnson Laura. 2006. How Many Interviews Are Enough? Field Methods 18, 1: 59–82. 10.1177/1525822X05279903 [DOI] [Google Scholar]
  • [37].Gutnick Damara, Reims Kathy, Davis Connie, Gainforth Heather, Jay Melanie, and Cole Steven. 2014. Brief Action Planning to Facilitate Behavior Change and Support Patient Self-Management. JCOM 21, 1. [Google Scholar]
  • [38].Hall Amanda K, Cole-Lewis Heather, and Bernhardt Jay M 2015. Mobile Text Messaging for Health: A Systematic Review of Reviews. Annu. Rev. Public Health 36: 393–415. 10.1146/annurev-publhealth-031914-122855 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [39].Head Katharine J., Noar Seth M., Iannarino Nicholas T., and Harrington Nancy Grant 2013. Efficacy of text messaging-based interventions for health promotion: A meta-analysis. Social Science and Medicine 97, 41–48. 10.1016/j.socscimed.2013.08.003 [DOI] [PubMed] [Google Scholar]
  • [40].Heer Jeffrey and Agrawala Maneesh. 2008. Design considerations for collaborative visual analytics. Information Visualization 7: 49–62. 10.1057/palgrave.ivs.9500167 [DOI] [Google Scholar]
  • [41].Hone Kate S. and Graham Robert. 2000. Towards a tool for the Subjective Assessment of Speech System Interfaces (SASSI). Natural Language Engineering 6, 3–4. 10.1017/S1351324900002497 [DOI] [Google Scholar]
  • [42].Hoque Mohammed (Ehsan), Courgeon Matthieu, Martin Jean-Claude, Mutlu Bilge, and Picard Rosalind W. 2013. MACH: My Automated Conversation coacH. Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing - UbiComp ‘13: 697. 10.1145/2493432.2493502 [DOI] [Google Scholar]
  • [43].Isaacs Ellen, Walendowski Alan, Whittaker Steve, Schiano Diane J., and Kamm Candace. 2002. The character, functions, and styles of instant messaging in the workplace. In Proceedings of the 2002 ACM conference on Computer supported cooperative work - CSCW ‘02, 11. 10.1145/587078.587081 [DOI] [Google Scholar]
  • [44].Kamphorst Bart A.. 2017. E-coaching systems: What they are, and what they aren’t. Personal and Ubiquitous Computing 21, 4: 625–632. 10.1007/s00779-017-1020-6 [DOI] [Google Scholar]
  • [45].Kaye Joseph Jofish, Fischer Joel, Hong Jason, Bentley Frank R., Munteanu Cosmin, Hiniker Alexis, Tsai Janice Y., and Ammari Tawfiq. 2018. Panel: Voice assistants, UX design and research. In Conference on Human Factors in Computing Systems - Proceedings, 1–5. 10.1145/3170427.3186323 [DOI] [Google Scholar]
  • [46].Kelley JF. 1984. An iterative design methodology for user-friendly natural language office information applications. ACM Transactions on Information Systems (TOIS) 2, 1: 26–41. 10.1145/357417.357420 [DOI] [Google Scholar]
  • [47].Klasnja Predrag, Consolvo Sunny, and Pratt Wanda. 2011. How to evaluate technologies for health behavior change in HCI research. In Proceedings of the 2011 annual conference on Human factors in computing systems - CHI ‘11, 3063. 10.1145/1978942.1979396 [DOI] [Google Scholar]
  • [48].Klein Gary, Feltovich Paul J, Bradshaw Jeffrey M, and Woods David D. 2004. Common Ground and Coordination in Joint Activity. [Google Scholar]
  • [49].Kocaballi Ahmet Baki, Quiroz Juan C., Laranjo Liliana, Rezazadegan Dana, Kocielnik Rafal, Clark Leigh, Liao Q. Vera, Park Sun Young, Moore Robert J., and Miner Adam. 2020. Conversational agents for health and wellbeing. In Conference on Human Factors in Computing Systems - Proceedings, 1–8. 10.1145/3334480.3375154 [DOI] [Google Scholar]
  • [50].Kocaballi Ahmet Baki, Berkovsky Shlomo, Quiroz Juan C, Laranjo Liliana, Tong Huong Ly, Rezazadegan Dana, Briatore Agustina, and Coiera Enrico. 2019. The Personalization of Conversational Agents in Health Care: Systematic Review. Journal of Medical Internet Research 21, 11: e15360. 10.2196/15360 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [51].Kocielnik Rafal and Hsieh Gary. 2017. New Opportunities for Dialogue-based Interaction in Behavior Change Domain. In CSCW 2017 workshop on Talking with Conversational Agents in Collaborative Action. Retrieved October 30, 2019 from https://talkingwithagents.files.wordpress.com/2017/02/7-kocielnik1.pdf [Google Scholar]
  • [52].Korinek Elizabeth V., Phatak Sayali S., Martin Cesar A., Freigoun Mohammad T., Rivera Daniel E., Adams Marc A., Klasnja Pedja, Buman Matthew P., and Hekler Eric B.. 2018. Adaptive step goals and rewards: a longitudinal growth model of daily steps for a smartphone-based walking intervention. Journal of Behavioral Medicine 41, 1: 74–86. 10.1007/s10865-017-9878-3 [DOI] [PubMed] [Google Scholar]
  • [53].Kriston Levente, Scholl Isabelle, Hölzel Lars, Simon Daniela, Loh Andreas, and Härter Martin. 2010. The 9-item Shared Decision Making Questionnaire (SDM-Q-9). Development and psychometric properties in a primary care sample. Patient Education and Counseling 80, 1: 94–99. 10.1016/J.PEC.2009.09.034 [DOI] [PubMed] [Google Scholar]
  • [54].Laranjo Liliana, Dunn Adam G, Tong Huong Ly, Kocaballi Ahmet Baki, Chen Jessica, Bashir Rabia, Surian Didi, Gallego Blanca, Magrabi Farah, Lau Annie YS, and Coiera Enrico. 2018. Conversational agents in healthcare: a systematic review. Journal of the American Medical Informatics Association 25, 9: 1248–1258. 10.1093/jamia/ocy072 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [55].Lei Huitian, Tewari Ambuj, and Murphy Susan A.. 2017. An Actor-Critic Contextual Bandit Algorithm for Personalized Mobile Health Interventions. Retrieved January 31, 2019 from http://arxiv.org/abs/1706.09090
  • [56].Li Ian, Dey Anind, and Forlizzi Jodi. 2010. A stage-based model of personal informatics systems. Proceedings of the 28th international conference on Human factors in computing systems CHI 10: 557. 10.1145/1753326.1753409 [DOI] [Google Scholar]
  • [57].Li Ian, Dey Anind K., and Forlizzi Jodi. 2011. Understanding my data, myself: supporting self-reflection with ubicomp technologies. In Proceedings of the 13th international conference on Ubiquitous computing - UbiComp ‘11, 405. 10.1145/2030112.2030166 [DOI] [Google Scholar]
  • [58].Liao Q. Vera, Wang Yi Chia, Bickmore Timothy, Fung Pascale, Grudin Jonathan, Yu Zhou, and Zhou Michelle. 2019. Human-agent communication: Connecting research and development in HCI and AI. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW, 122–126. 10.1145/3311957.3358607 [DOI] [Google Scholar]
  • [59].Lisetti Christine, Amini Reza, Yasavur Ugan, and Rishe Naphtali. 2013. I Can Help You Change! An Empathic Virtual Agent Delivers Behavior Change Health Interventions. ACM Transactions on Management Information Systems 4, 4: 1–28. 10.1145/2544103 [DOI] [Google Scholar]
  • [60].Lisetti Christine, Amini Reza, Yasavur Ugan, and Rishe Naphtali. 2013. I Can Help You Change! An Empathic Virtual Agent Delivers Behavior Change Health Interventions. ACM Transactions on Management Information Systems 4, 4: 1–28. 10.1145/2544103 [DOI] [Google Scholar]
  • [61].Lucas Gale M., Rizzo Albert, Gratch Jonathan, Scherer Stefan, Stratou Giota, Boberg Jill, and Morency Louis-Philippe. 2017. Reporting Mental Health Symptoms: Breaking Down Barriers to Care with Virtual Human Interviewers. Frontiers in Robotics and AI 4: 51. 10.3389/frobt.2017.00051 [DOI] [Google Scholar]
  • [62].Mamykina Lena, Levine Matthew E, Davidson Patricia G, Smaldone Arlene M, Elhadad Noemie, and Albers David J 2016. Data-driven health management: reasoning about personally generated data in diabetes with information technologies. Journal of the American Medical Informatics Association 23, 3: 526–531. 10.1093/jamia/ocv187 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [63].Mamykina Lena, Mynatt Elizabeth, Davidson Patricia, and Greenblatt David. 2008. MAHI: Investigation of social scaffolding for reflective thinking in diabetes management. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ‘08), 477–486. 10.1145/1357054.1357131 [DOI] [Google Scholar]
  • [64].Manyika James and Sneader Kevin. 2018. AI, Automation, and the Future of Work: Ten Things To Solve for. Retrieved October 15, 2020 from https://www.mckinsey.com/featured-insights/future-of-work/ai-automation-and-the-futureof-work-ten-things-to-solve-for
  • [65].Marcin James P., Shaikh Ulfat, and Steinhorn Robin H.. 2016. Addressing health disparities in rural communities using telehealth. Pediatric Research 79, 169–176. 10.1038/pr.2015.192 [DOI] [PubMed] [Google Scholar]
  • [66].Maulsby David, Greenberg Saul, and Mander Richard. 1993. Prototyping an intelligent agent through Wizard of Oz. In Conference on Human Factors in Computing Systems - Proceedings, 277–284. 10.1145/169059.169215 [DOI] [Google Scholar]
  • [67].McDonald David W., Gokhman Stephanie, and Zachry Mark 2012. Building for social translucence: A domain analysis and prototype system. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW, 637–646. 10.1145/2145204.2145301 [DOI] [Google Scholar]
  • [68].Michaelides Andreas, Major Jennifer, Pienkosz Edmund, Wood Meghan, Kim Youngin, and Toro-Ramos Tatiana 2018. Usefulness of a novel mobile diabetes prevention program delivery platform with human coaching: 65-week observational follow-up. JMIR mHealth and uHealth 6, 5: e93. 10.2196/mhealth.9161 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [69].Miller William R. and Rollnick Stephen. 2013. Motivational Interviewing: Helping People Change. Retrieved July 15, 2019 from https://www.guilford.com/books/Motivational-Interviewing/Miller-Rollnick/9781609182274
  • [70].Miner Adam S., Milstein Arnold, Schueller Stephen, Hegde Roshini, Mangurian Christina, and Linos Eleni. 2016. Smartphone-Based Conversational Agents and Responses to Questions About Mental Health, Interpersonal Violence, and Physical Health. JAMA Internal Medicine 176, 5: 619. 10.1001/jamainternmed.2016.0400 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [71].Morris Robert R., Kouddous Kareem, Kshirsagar Rohan, and Schueller Stephen M.. 2018. Towards an Artificially Empathic Conversational Agent for Mental Health Applications: System Design and User Perceptions. Journal of Medical Internet Research 20, 6: e10148. 10.2196/10148 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [72].Munson Sean A., Krupka Erin, Richardson Caroline, and Resnick Paul. 2015. Effects of public commitments and accountability in a technology-supported physical activity intervention. In Conference on Human Factors in Computing Systems - Proceedings, 1135–1144. 10.1145/2702123.2702524 [DOI] [Google Scholar]
  • [73].Munson Sean and Consolvo Sunny. 2012. Exploring Goal-setting, Rewards, Self-monitoring, and Sharing to Motivate Physical Activity. In Proceedings of the 6th International Conference on Pervasive Computing Technologies for Healthcare. 10.4108/icst.pervasivehealth.2012.248691 [DOI] [Google Scholar]
  • [74].Nahum-Shani Inbal, Smith Shawna N., Spring Bonnie J., Collins Linda M., Witkiewitz Katie, Tewari Ambuj, and Murphy Susan A.. 2016. Just-in-Time Adaptive Interventions (JITAIs) in Mobile Health: Key Components and Design Principles for Ongoing Health Behavior Support. Annals of Behavioral Medicine 52, 6: 446–462. 10.1007/s12160-016-9830-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [75].Nardi Bonnie A., Whittaker Steve, and Bradner Erin. 2000. Interaction and outeraction. In Proceedings of the 2000 ACM conference on Computer supported cooperative work - CSCW ‘00, 79–88. 10.1145/358916.358975 [DOI] [Google Scholar]
  • [76].Nguyen Duyen T. and Fussell Susan R.. 2016. Effects of Conversational Involvement Cues on Understanding and Emotions in Instant Messaging Conversations. Journal of Language and Social Psychology 35, 1: 28–55. 10.1177/0261927X15571538 [DOI] [Google Scholar]
  • [77].Nunes Francisco and Fitzpatrick Geraldine. 2018. Understanding the Mundane Nature of Self-care: Ethnographic Accounts of People Living with Parkinson’s. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ‘18. 10.1145/3173574.3173976 [DOI] [Google Scholar]
  • [78].Olsen Jeanette M.. 2014. Health Coaching: A Concept Analysis. Nursing Forum 49, 1: 18–29. 10.1111/nuf.12042 [DOI] [PubMed] [Google Scholar]
  • [79].Olsen Jeanette M and Nesbitt Bonnie J 2010. Health Coaching to Improve Healthy Lifestyle Behaviors: An Integrative Review. American Journal of Health Promotion 25, 1: e1–e12. 10.4278/ajhp.090313-lit-101 [DOI] [PubMed] [Google Scholar]
  • [80].Park Sun Young, Chen Yunan, and Raj Shriti. 2017. Beyond health literacy: Supporting patient-provider communication during an emergency visit. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW, 2179–2192. 10.1145/2998181.2998357 [DOI] [Google Scholar]
  • [81].Park Sun Young, Kuo Pei-Yi, Barbarin Andrea, Kaziunas Elizabeth, Chow Astrid, Singh Karandeep, Wilcox Lauren, and Lasecki Walter S.. 2019. Identifying challenges and opportunities in human-AI collaboration in healthcare. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW, 506–510. 10.1145/3311957.3359433 [DOI] [Google Scholar]
  • [82].Peyrot Mark, Rubin Richard R, Funnell Martha M, and Siminerio Linda M. 2009. Access to diabetes self-management education: results of national surveys of patients, educators, and physicians. The Diabetes educator 35, 2: 246–8, 252– 6, 258–63. 10.1177/0145721708329546 [DOI] [PubMed] [Google Scholar]
  • [83].Porcheron Martin, Fischer Joel E., Reeves Stuart, and Sharples Sarah. 2018. Voice Interfaces in Everyday Life. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ‘18, 1–12. 10.1145/3173574.3174214 [DOI] [Google Scholar]
  • [84].Porcheron Martin, Fischer Joel E, McGregor Moira, Brown Barry, Luger Ewa, Candello Heloisa, and O’Hara Kenton. 2017. Talking with conversational agents in collaborative action. In CSCW 2017 - Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 431–436. 10.1145/3022198.3022666 [DOI] [Google Scholar]
  • [85].Raj Shriti, Toporski Kelsey, Garrity Ashley, Lee Joyce M., and Newman Mark W.. 2019. “My blood sugar is higher on the weekends”: Finding a role for context and context-awareness in the design of health self-management technology. In Conference on Human Factors in Computing Systems - Proceedings, 1–13. 10.1145/3290605.3300349 [DOI] [Google Scholar]
  • [86].Ramchandani Neesha. 2019. Virtual Coaching to Enhance Diabetes Care. Diabetes Technology and Therapeutics 21, S2: S2–48-S2–51. 10.1089/dia.2019.0016 [DOI] [PubMed] [Google Scholar]
  • [87].Roller Stephen, Dinan Emily, Goyal Naman, Ju Da, Williamson Mary, Liu Yinhan, Xu Jing, Ott Myle, Shuster Kurt, Smith Eric M., Boureau Y-Lan, and Weston Jason. 2020. Recipes for building an open-domain chatbot. Retrieved April 30, 2020 from http://arxiv.org/abs/2004.13637
  • [88].Rutjes Heleen, Willemsen Martijn C., and IJsselsteijn Wijnand A.. 2019. Beyond Behavior: The Coach’s Perspective on Technology in Health Coaching. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ‘19, 1–14. 10.1145/3290605.3300900 [DOI] [Google Scholar]
  • [89].Saeedi Pouya, Petersohn Inga, Salpea Paraskevi, Malanda Belma, Karuranga Suvi, Unwin Nigel, Colagiuri Stephen, Guariguata Leonor, Motala Ayesha A., Ogurtsova Katherine, Shaw Jonathan E., Bright Dominic, and Williams Rhys. 2019. Global and regional diabetes prevalence estimates for 2019 and projections for 2030 and 2045: Results from the International Diabetes Federation Diabetes Atlas, 9th edition. Diabetes Research and Clinical Practice 157: 107843. 10.1016/j.diabres.2019.107843 [DOI] [PubMed] [Google Scholar]
  • [90].Schäfer Hanna, Hors-Fraile Santiago, Karumur Raghav Pavan, Calero Valdez André, Said Alan, Torkamaan Helma, Ulmer Tom, and Trattner Christoph. 2017. Towards Health (Aware) Recommender Systems. In Proceedings of the 2017 International Conference on Digital Health - DH ‘17, 157–161. 10.1145/3079452.3079499 [DOI] [Google Scholar]
  • [91].Schroeder Jessica, Hoffswell Jane, Chung Chia-Fang, Fogarty James, Munson Sean, and Zia Jasmine. 2017. Supporting Patient-Provider Collaboration to Identify Individual Triggers using Food and Symptom Journals. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing - CSCW ‘17, 1726–1739. 10.1145/2998181.2998276 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [92].Schroeder Jessica, Wilks Chelsey, Rowan Kael, Toledo Arturo, Paradiso Ann, Czerwinski Mary, Mark Gloria, and Linehan Marsha M.. 2018. Pocket Skills: A Conversational Mobile Web App To Support Dialectical Behavioral Therapy. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2018): 1–15. 10.1145/3173574.3173972 [DOI] [Google Scholar]
  • [93].Seering Joseph, Luria Michal, Kaufman Geoff, and Hammer Jessica. 2019. Beyond dyadic interactions: Considering chatbots as community members. In Conference on Human Factors in Computing Systems - Proceedings, 1–13. 10.1145/3290605.3300680 [DOI] [Google Scholar]
  • [94].Serban Iulian Vlad, Lowe Ryan, Henderson Peter, Charlin Laurent, and Pineau Joelle. 2015. A Survey of Available Corpora for Building Data-Driven Dialogue Systems. arXiv preprint. 10.5087/dad [DOI] [Google Scholar]
  • [95].Sherifali Diana, Viscardi Virginia, Bai Johnny Wei, and Ali R. Muhammad Usman. 2016. Evaluating the Effect of a Diabetes Health Coach in Individuals with Type 2 Diabetes. Canadian Journal of Diabetes 40, 84–94. 10.1016/j.jcjd.2015.10.006 [DOI] [PubMed] [Google Scholar]
  • [96].Sun Yueming and Zhang Yi. 2018. Conversational Recommender System. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval - SIGIR ‘18, 235–244. 10.1145/3209978.3210002 [DOI] [Google Scholar]
  • [97].Matysiak Szostek Agnieszka, Karapanos Evangelos, Eggen Berry, and Holenderski Mike. 2008. Understanding the implications of Social Translucence for systems supporting communication at work. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW, 649–658. 10.1145/1460563.1460664 [DOI] [Google Scholar]
  • [98].Tamhane Shrikant, Rodriguez-Gutierrez Rene, Hargraves Ian, and Montori Victor M.. 2015. Shared Decision-Making in Diabetes Care. Current Diabetes Reports 15, 112. 10.1007/s11892-015-0688-0 [DOI] [PubMed] [Google Scholar]
  • [99].Veinot Tiffany C, Mitchell Hannah, and Ancker Jessica S. 2018. Good intentions are not enough: how informatics interventions can worsen inequality. Journal of the American Medical Informatics Association. 10.1093/jamia/ocy052 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [100].Verhoeven Fenne, Tanja-Dijkstra Karin, Nijland Nicol, Eysenbach Gunther, and Van Gemert-Pijnen Lisette. 2010. Asynchronous and Synchronous Teleconsultation for Diabetes Care: A Systematic Literature Review. Journal of Diabetes Science and Technology 4, 3: 666–684. 10.1177/193229681000400323 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [101].Weiss Barry D, Mays Mary Z, Martz William, Castro Kelley Merriam, DeWalt Darren A, Pignone Michael P, Mockbee Joy, and Hale Frank A. 2005. Quick assessment of literacy in primary care: the newest vital sign. Annals of family medicine 3, 6: 514–22. 10.1370/afm.405 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [102].Weizenbaum Joseph. 1966. ELIZA – a computer program for the study of natural language communication between man and machine. Communications of the ACM 9, 1: 36–45. 10.1145/365153.365168 [DOI] [Google Scholar]
  • [103].Wolever Ruth Q., Simmons Leigh Ann, Sforzo Gary A., Dill Diana, Kaye Miranda, Bechard Elizabeth M., Southard Mary Elaine, Kennedy Mary, Vosloo Justine, and Yang Nancy. 2013. A Systematic Review of the Literature on Health and Wellness Coaching: Defining a Key Behavioral Intervention in Healthcare. Global Advances in Health and Medicine 2, 4: 38–57. 10.7453/gahmj.2013.042 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [104].Twilio Studio. Retrieved October 14, 2020 from https://www.twilio.com/studio
  • [105].WordBoard Keyboard for iPhone & iPad. Retrieved October 14, 2020 from https://www.bytesizeapps.net/wordboard_keyboard/
