The transition towards personalized health management requires public awareness about management strategies of self-monitoring, self-appraisal, and self-management, eventually paving the way to more timely interventions and higher quality patient–clinician interactions.1 A key enabler is patient generated health data, fueled in good part by the growth in wearable devices, including smart watches and other Internet-of-Things (IoT) devices for health tracking (http://bit.ly/smart-wearables). These tracking devices provide “low-level” monitoring signals indicating health conditions such as sleep apnea and heart rhythm disorders. However, to make more sense of IoT data, it is imperative that we develop cognitive approaches that mine, interlink, and abstract diverse IoT data. These cognitive approaches often need to keep the user closely engaged to acquire more information, obtain feedback, collect verbally reported health conditions, and provide intervention and management actions.
Chatbot technology was initially introduced as an artificial conversational agent to simulate conversations with a user using voice or text interactions (http://bit.ly/chatbot-communication).2 Its market is projected to reach $1.23 billion by 2025 (http://bit.ly/chatbot-market). If this technology is equipped with cognitive capabilities and additionally fed by a continuous stream of IoT data, it can accelerate the use of personalized health management applications with improved clinical outcomes. Recently, the combination of knowledge representation and machine learning has been at the center of attention as a path towards more explainable cognitive computing.3,4 For a specific domain such as healthcare, chatbot technology will require advanced cognitive capabilities relying on the representation of background medical knowledge (context) and the specific health conditions of patients (personalized knowledge). The incorporation of data collected from IoT and mobile computing (which are often personalized data) into chatbot technology will enable constant tracking of a patient’s health condition. Furthermore, it will advance current conversational AI capabilities for managing and mining conversations to collect evidence about patients and to generate personalized, contextualized inferences complemented by knowledge extracted from multiple sources.
In this article, we share our perspective on how contemporary chatbot technology can be extended towards a more intelligent, engaging, context-aware, and personalized agent. Furthermore, we underline the importance of contextualization, personalization, and abstraction1 with the use of domain-specific as well as patient-specific knowledge, and present examples of three healthcare applications.
CONTEXTUAL HEALTH KNOWLEDGE GRAPH AND EVOLVING PERSONALIZED KNOWLEDGE GRAPH
A knowledge graph is a structured representation of the concepts, relations, and entities involved in a given domain. One large public knowledge source is the Web of Data, which surpasses 149 billion facts collected from 9960 data sets across diverse domains (observed on October 28, 2018, at http://stats.lod2.eu/). AI technologies can take advantage of this large body of interlinked knowledge. In the following, we first present the motivations and then discuss two key challenges faced by current health systems. We describe how to augment existing health strategies by extending the patient–chatbot experience with three types of input knowledge (see Figure 1): (i) a background Health Knowledge Graph (HKG) (see Figure 1A) that comprises domain- and disease-specific knowledge, which may be manually developed or extracted from the Web of Data, a rich source of structured medical and life science data; (ii) an evolving Patient Health Knowledge Graph (PHKG) (see Figure 1B) that incorporates Patient Generated Health Data (PGHD) from sensors and IoT devices, structured knowledge extracted from the patient’s Electronic Medical Record (EMR), and environmental data (e.g., pollen, air quality) from public web services, and that continues to grow by incorporating informative pieces of knowledge from the patient’s continuous interactions with the chatbot; and (iii) the healthcare provider’s feedback (see Figure 1C) on predictions and analytics, which refines the PHKG.
Figure 1.

A healthcare assistant bot interacts with the patient via various conversational interfaces (voice, text, and visual) to disseminate information and provide recommendations (validated by a physician). The core functionalities of the chatbot (Component C in the blue box) are extended with a background HKG (Component A in the green box) and an evolving PHKG (Component B in the orange box).
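The division of labor between the two graphs can be illustrated with a minimal sketch: the HKG holds domain facts while the PHKG holds patient-specific facts, and combining them yields personalized inferences. All entity names, relation names, and the patient identifier below are hypothetical illustrations, not part of any actual kHealth vocabulary.

```python
# Minimal sketch of the two knowledge graphs as sets of
# (subject, predicate, object) triples. All names are hypothetical.

hkg = {  # background Health Knowledge Graph: domain/disease knowledge
    ("ragweed_pollen", "triggers", "asthma"),
    ("albuterol", "treats", "asthma"),
}

phkg = {  # evolving Patient Health Knowledge Graph: patient-specific facts
    ("patient_42", "diagnosed_with", "asthma"),
    ("patient_42", "sensor_reading", "pollen_high"),
}

def relevant_triggers(patient, hkg, phkg):
    """Join the two graphs: find domain triggers linked to the
    conditions this particular patient is diagnosed with."""
    conditions = {o for s, p, o in phkg
                  if s == patient and p == "diagnosed_with"}
    return {s for s, p, o in hkg if p == "triggers" and o in conditions}

print(relevant_triggers("patient_42", hkg, phkg))  # {'ragweed_pollen'}
```

Neither graph alone supports this conclusion: the trigger knowledge is generic (HKG) and the diagnosis is personal (PHKG); only their join is both contextualized and personalized.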
CURRENT HEALTHCARE CHALLENGES AND PROPOSED SOLUTIONS
Contextualization and Personalization of Patient’s Data.
The first challenge for developing a personal health agent is the need to contextualize and personalize healthcare treatments and decisions. The current healthcare system lacks contextual and personalized knowledge about its patients3 due to the limited patient–physician time spent during clinical visits, the patient’s limited ability to recall prior events, and a clinic-centric system that captures only a part of the relevant patient data. Contextual factors in this instance refer to the in-depth health management and clinical protocol knowledge that a physician may utilize, whereas personalized factors include a patient’s health history, data capturing the patient’s health condition (e.g., lab results or BMI), ongoing activities, and lifestyle choices. A survey presented in the article by Linder et al.5 reports several notable barriers to the effective use of clinical decision support systems during patient visits, including physicians losing direct eye contact with patients, falling behind schedule, being unable to type quickly enough, and feeling that using the computer in front of the patient is rude. It concludes that EMRs have mixed effectiveness for supporting physicians’ decision-making, since exploring them is not agile enough to derive actionable knowledge.4 These factors can lead to missing patient data and are likely to affect other healthcare professionals who utilize these data. On the bright side, patients are increasingly using technology (e.g., wearables) and mobile applications to generate what is termed PGHD. Incorporating such data into better health management is likely to become more important, and chatbots can further make it easier to collect some patient data, such as symptoms or how a patient feels.
Contemporary implementations of chatbot technology do not understand conversation narrative and demonstrate very limited cognitive capabilities and commonsense reasoning. Overcoming these limitations for a broad domain might take years, but in a specific domain such as healthcare, and even in narrower applications such as a specific disease, they can be alleviated by extending the chatbot with domain- and disease-specific health background knowledge (i.e., contextual and personalized knowledge). There are publicly available generic knowledge graphs (e.g., DBpedia and Freebase) as well as healthcare-specific knowledge sources, e.g., the Unified Medical Language System (UMLS), PubMed, the Systematized Nomenclature of Medicine–Clinical Terms (SNOMED CT), and the International Classification of Diseases (ICD). Chatbot technology can acquire a context-aware (i.e., attuned to the patient’s context), domain-specific (i.e., health domain) knowledge graph, termed the HKG, extracted and integrated from external sources such as the Web of Data. The HKG can be updated and synchronized with the evolution of the Web of Data or other relevant knowledge sources. It provides the essential facts (background knowledge) needed by the response generation, reasoning, and inference components of the chatbot engine. The other obstacle to a holistic overview of a patient’s circumstances is the lack of a unified, semantics-based approach for publishing and integrating an individual patient’s data. This gap hinders the healthcare system from providing a comprehensive history of, and insight into, its patients. To tackle this deficiency, we propose to publish a knowledge graph out of anonymized patient data collected from various sources (EMRs, IoT devices, and external web services). The PHKG further integrates knowledge extracted from the patient’s previous conversations with the chatbot.
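The unification step described above, publishing one patient graph from heterogeneous anonymized sources, can be sketched as flattening each source record into provenance-tagged triples. The source labels, field names, and patient identifier are hypothetical, chosen only to mirror the three source types named in the text (EMR, IoT, web services).

```python
# Illustrative sketch (all field names hypothetical) of publishing a
# unified, anonymized patient knowledge graph from heterogeneous sources.

def to_triples(patient_id, source, record):
    """Flatten one source record into (subject, predicate, object)
    triples, tagging each predicate with its provenance."""
    return {(patient_id, f"{source}:{key}", str(value))
            for key, value in record.items()}

phkg = set()
phkg |= to_triples("anon_007", "emr", {"diagnosis": "asthma"})
phkg |= to_triples("anon_007", "iot", {"peak_flow": 310})
phkg |= to_triples("anon_007", "web", {"pollen": "high"})

# All three sources are now queryable through one uniform interface:
for triple in sorted(phkg):
    print(triple)
```

The provenance prefix (`emr:`, `iot:`, `web:`) preserves where each fact came from, which matters when facts conflict or when a physician wants to audit a recommendation.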
To sum up, having two background knowledge graphs (see Figure 1) to feed the core chatbot engine will enhance reasoning and prediction in support of improving health decision making.
LIMITED PATIENT HEALTH DATA DUE TO EPISODIC VISITS AND TIME CONSTRAINTS
The American Academy of Family Physicians (AAFP) defines primary care as promoting effective communication and encouraging the role of the patient as a partner in healthcare. During a clinical visit, the primary care physician serves as the primary contact for patients, diagnosing a wide range of illnesses and injuries, providing counseling and education, and initiating preventive care. They are also responsible for making referrals to specialists according to the patient’s condition. This is a task of significant responsibility, since a patient may endure prolonged suffering in the case of a wrong referral. However, with increasing societal demand on healthcare resources, a significant percentage of physicians report that they run out of consultation time to converse with patients and accurately diagnose the root causes of their conditions (http://bit.ly/clinical-challenges). Consequently, some patients are being deprived of education about their health conditions, causes, available treatments, and recommended lifestyle changes. This indicates a worrisome gap in collecting, managing, and analyzing patients’ health data, as well as in proper mechanisms for educating, advising, and referring patients.
Mobile devices and IoTs are increasingly prevalent, with overall improved technology literacy among populations. They can hence be leveraged for continuous, real-time tracking of patient health signals. These signals can help bridge the information gap between hospital visits and provide just-in-time adaptive interventions.6 For example, a joint project between Kno.e.sis and Dayton Children’s Hospital has developed a knowledge-enabled, semantic, multisensory approach for personalized pediatric asthma management (kHealth, http://bit.ly/kHealth-Asthma).7 The kHealth-Asthma kit represented in Figure 2 consists of an Android application that administers a contextual questionnaire (tailored to the specific conditions of the user) to capture symptoms and medication usage. It also uses IoT devices and web services to collect patient and patient-relevant data, including (a) physiological data captured via Fitbit (activity and sleep) and a peak flow meter (PEF/FEV1 values); (b) indoor environmental data (particulate matter, volatile organic compounds, CO2, humidity, and temperature) using Foobot, an indoor air quality monitor; (c) outdoor allergens and air quality recorded using web services (ozone, pollen, and air quality); and (d) selected data semi-automatically extracted (with human validation and strict anonymization) from the patient’s clinical notes (from EMRs). A total of 110 evaluations have been completed in this pediatric asthma patient cohort study (150+ planned), each lasting one or three months of participation. A compliance rate of 89% (defined as providing over 75% of the data requiring active patient participation) shows user acceptance of such a technology. The total number of data points collected per patient per day is up to 1852, over 29 types of parameters. All data are anonymized and securely backed up on the Kno.e.sis cloud. These data are integrated using a visualization and analysis platform, kHealthDash (http://bit.ly/kHealthDash).
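The compliance definition above (a patient is compliant if they supplied over 75% of the data requiring active participation) can be sketched as a simple computation. The per-patient counts below are made-up illustrations, not the actual kHealth study data; only the 75% threshold comes from the text.

```python
# Sketch of the compliance definition described above. The (answered,
# requested) counts per patient are hypothetical illustrations.

def is_compliant(answered, requested, threshold=0.75):
    """A patient is compliant if they provided more than `threshold`
    of the data points requiring active participation."""
    return requested > 0 and answered / requested > threshold

def cohort_compliance(patients):
    """Fraction of patients in the cohort meeting the threshold."""
    return sum(is_compliant(a, r) for a, r in patients) / len(patients)

patients = [(80, 90), (60, 90), (85, 90), (88, 90)]
print(cohort_compliance(patients))  # 0.75: three of four exceed 75%
```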
Figure 2.

The kHealth framework with the kHealth-Asthma kit, kHealth cloud (D), and kHealth Dashboard (E), showing the frequency of data collection, the number of parameters collected, and the total number of data points collected per day per patient (shown in dark blue). The kHealth kit components that are given to patients and collect PGHD are shown in light blue, and the outdoor environmental parameters with their sources are shown in green. All data are anonymized and associated with respective randomly assigned patient IDs.
ARCHITECTURE OVERVIEW OF A HEALTH CHATBOT
Content, user interface, and user feedback are three major components that go hand-in-hand in creating a positive user experience, which is critical for defining the relationship a user has with a chatbot. Extending the chatbot’s core functionalities with the HKG and PHKG helps contextualize and personalize conversations. However, without an equally strong front-end communication system to (a) receive user input and (b) articulate smart responses by (c) making intelligent inferences and predictions, user interest and experience may decline and diminish over time (http://bit.ly/why-chatbots-fail). The six core components of the chatbot (see Figure 1C) each represent a research problem: conversation management, natural language (narrative) understanding, response generation, knowledge extraction and discovery, the reasoning and inference engine, and the prediction module. The following are the proposed extensions to current state-of-the-art approaches to improve the patient experience of using a chatbot.
Receive and understand user input: A chatbot should be sufficiently dynamic to communicate with patients via multiple input and output modalities, including voice, text, and smart displays. The chatbot should provide feedback to the user and affirm its understanding to avoid conflicts and knowledge misrepresentation.
Generating smart responses: The responses articulated by the chatbot are reasoned from the underlying HKG and PHKG to guarantee domain specificity, contextualization, and personalization. The “smart” attribute derives from the following properties:
Comprehensible and concise. Conciseness and comprehensibility of answers profoundly matter as a slight flaw could compromise reliability.
Context-awareness and coherence. The chatbot should consider the patient’s context in terms of space and time in addition to the input provided. For example, if an asthmatic patient asks for the weather condition, a generic answer would be “Today is fairly sunny,” versus a personalized answer with respect to the patient’s disease: “Today is fairly sunny. However, the ragweed pollen is a little high, which does not look too good for your health. Do remain indoors as much as possible.” The latter illustrates context awareness.
Dynamicity and evolution. The more the patient interacts with the chatbot, the more knowledge it discovers about the patient. In addition, knowledge evolves over time, and these changes should be reflected in the knowledge bases (HKG and PHKG).
Balancing response granularity and volume. The complexity of traversing the graphs, reasoning, and formulating a response, whether by visualization or verbalization, increases dramatically with the volume of data. Retrieving a balanced yet sufficient amount of data for a reasonable response is critical to communicating in a timely and effective manner.
Inference, reasoning, and prediction. As knowledge evolves, both the HKG and PHKG should be continuously updated to infer new insights (http://bit.ly/PHKG-evolution). The prediction module relies on both new and historical knowledge about the patient to infer, reason, and make reasonable recommendations that assist the patient in self-management and self-appraisal. The predictions are also continuously presented to the corresponding physicians to create situational awareness, and in the case of an emergency, the physician can be notified immediately to intervene.
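The context-aware weather example given earlier can be sketched as a simple personalization rule over the two graphs: the generic answer is produced for everyone, and a condition-specific warning is appended only when the PHKG records a matching diagnosis and the environmental context is adverse. The function name, trigger encoding, and phrasing are hypothetical; a real system would reason over far richer graphs.

```python
# Sketch of the context-aware weather response described in the
# "Context-awareness and coherence" property. Names are hypothetical.

def weather_reply(forecast, pollen_level, patient_conditions):
    """Return a generic reply, personalized when the patient's
    conditions (from the PHKG) interact with the context (pollen)."""
    reply = f"Today is {forecast}."
    # Personalization rule: warn asthmatic patients about high pollen.
    if "asthma" in patient_conditions and pollen_level == "high":
        reply += (" However, the ragweed pollen is a little high,"
                  " which does not look too good for your health."
                  " Do remain indoors as much as possible.")
    return reply

print(weather_reply("fairly sunny", "high", {"asthma"}))  # personalized
print(weather_reply("fairly sunny", "high", set()))       # generic
```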
CASE STUDIES WITH HEALTHCARE APPLICATIONS
The first use case is major depressive disorder. Depression is highly prevalent in the U.S., with an estimated prevalence rate of 10.5%, affecting millions of U.S. adults (http://bit.ly/major-depression). Successful early identification and intervention, albeit challenging, can lead to positive health and behavioral improvements. Routine screening for depression by a clinician involves administering the Patient Health Questionnaire (PHQ-9, http://bit.ly/PHQ-9), which relies heavily on the patient’s ability to recall events that occurred over the span of the last two weeks. Instead, a chatbot can directly converse with the patient to collect relevant data on a continuous, real-time basis; as an added option, a patient can consent to the chatbot using his or her social media conversations to indirectly assess some components of the PHQ-9 and directly conversing with the patient for the remaining information needed for an assessment. The patient’s PHKG can represent past encounters and behavioral manifestations (optionally on social media) over a substantial period of time for a more accurate prognosis. In addition, a chatbot contextualized with domain knowledge can understand slang terms commonly used on social media, such as “bupe,” which refers to the medical term “buprenorphine.” This provides a viable entry point for a chatbot to deliver tailored psychotherapy based on cognitive-behavioral therapy8 and to initiate treatment interventions conforming to medical protocols.
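Once the nine item responses have been gathered, whether conversationally or inferred, scoring follows the standard PHQ-9 rules: each item is rated 0-3, the total ranges 0-27, and the conventional severity bands are minimal (0-4), mild (5-9), moderate (10-14), moderately severe (15-19), and severe (20-27). A minimal sketch of the scoring a chatbot could apply (the example item scores are invented):

```python
# Sketch of standard PHQ-9 scoring applied to conversationally
# collected answers. Each of the nine items is scored 0-3.

def phq9_severity(item_scores):
    """Return (total, severity band) using standard PHQ-9 cut-points."""
    assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)
    for cutoff, label in [(4, "minimal"), (9, "mild"), (14, "moderate"),
                          (19, "moderately severe"), (27, "severe")]:
        if total <= cutoff:
            return total, label

print(phq9_severity([1, 1, 2, 1, 0, 1, 2, 1, 2]))  # (11, 'moderate')
```

A screening chatbot would hand such a score to the clinician as a signal, not a diagnosis; the continuous PHKG history lets the clinician see the trajectory rather than a single two-week snapshot.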
The second use case is asthma. More than 20.4 million people in the U.S. were diagnosed with asthma in 2016, and asthma-related healthcare costs alone are around $50 billion a year (http://bit.ly/asthma-facts). In an attempt to bridge the informational gap between episodic patient–doctor visits, a chatbot can combine active and passive sensing using a variety of low-cost sensors and IoT devices for continuous monitoring and collection of multimodal data. These longitudinal measurements can then be leveraged and transformed into practical, actionable information for both the patient and the healthcare provider. Specifically, the patient can conveniently access information about his or her asthma control level, based on symptoms, severity, and triggers, for self-monitoring, simply by conversing with the chatbot.
The third prominent use case is elderly care. With improved healthcare services and amenities, older adults are becoming one of the fastest growing cohorts.9 These older adults, however, are at the highest risk of developing chronic diseases such as heart failure and chronic obstructive pulmonary disease. As the technology matures, a chatbot with consented access to patient and doctor profiles and social information can be delegated to match patient–doctor preferences, organize telehealth sessions, and schedule appointments by looking up the doctor’s calendar. Extended with IoT devices such as a pill bottle sensor, the chatbot can be made smarter, reminding and nudging the patient toward timely medication intake as well as adherence to the clinician-prescribed management plan. By incorporating background geospatial and gazetteer knowledge sources, it is also feasible to coordinate and arrange transportation services for elderly people with physical disabilities and transportation barriers, especially in congested cities.
In conclusion, while chatbot technology is not new, we discussed how its potential can be extended with IoTs and knowledge graphs. We further illustrated, using three disease-specific use cases, the possible health services in which a chatbot can intervene. To sum up, chatbot technology can (i) be empowered with multisensory capabilities through IoTs and sensors, (ii) provide contextualized and personalized reasoning capabilities grounded in domain-specific knowledge, and (iii) assist in situations requiring a high cognitive load. These diverse potentials hold great promise for the near future of improved healthcare.
Biography
Amit Sheth works on semantic-cognitive-perceptual computing and knowledge-enhanced learning with applications in healthcare and social good. He is a fellow of IEEE, AAAI, and AAAS. He is the corresponding author of this article and can be reached at amit.shet@wright.edu and http://knoeiss.org/amit.
Hong Yung (Joey) Yip is currently working toward the Ph.D. degree on topics in knowledge graph, deep learning, and conversational AI. Contact him at joey@knoesis.org.
Saeedeh Shekarpour is an Assistant Professor of computer science with the University of Dayton, Dayton, OH, USA. She works on knowledge graphs and cognitive computing in question answering and chatbot technologies. Contact her at sshekarpour1@udayton.edu.
Contributor Information
Amit Sheth, Kno.e.sis-Wright State University.
Saeedeh Shekarpour, University of Dayton.
Hong Yung Yip, Kno.e.sis-Wright State University.
REFERENCES
1. Sheth A, Jaimini U, and Yip HY, “How will the internet of things enable augmented personalized health?,” IEEE Intell. Syst., vol. 33, no. 1, pp. 89–97, Jan./Feb. 2018.
2. Pichponreay L, Kim JH, Choi CH, Lee KH, and Cho WS, “Smart answering chatbot based on OCR and overgenerating transformations and ranking,” in Proc. 8th Int. Conf. Ubiquitous Future Netw., pp. 1002–1005, Jul. 2016.
3. Holzinger A, Biemann C, Pattichis CS, and Kell DB, “What do we need to build explainable AI systems for the medical domain?,” arXiv:1712.09923, 2017.
4. Samek W, Wiegand T, and Müller KR, “Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models,” arXiv:1708.08296, 2017.
5. Linder JA, Schnipper JL, Tsurikova R, Melnikas AJ, Volk LA, and Middleton B, “Barriers to electronic health record use during patient visits,” in Proc. AMIA Annu. Symp., 2006, vol. 2006, pp. 499–503.
6. Nahum-Shani I et al., “Just-in-time adaptive interventions (JITAIs) in mobile health: Key components and design principles for ongoing health behavior support,” Ann. Behav. Med., vol. 52, no. 6, pp. 446–462, 2017.
7. Jaimini U, Thirunarayan K, Kalra M, Venkataraman R, Kadariya D, and Sheth A, “‘How is my child’s asthma?’ Digital phenotype and actionable insights for pediatric asthma,” JMIR Pediatrics and Parenting, vol. 1, no. 2, 2018, Art. no. e11988.
8. Fitzpatrick KK, Darcy A, and Vierhile M, “Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial,” JMIR Mental Health, vol. 4, no. 2, 2017.
9. Schneider AE, Ralph N, Olson C, Flatley AM, and Thorpe L, “Predictors of senior center use among older adults in New York City public housing,” J. Urban Health, vol. 91, no. 6, pp. 1033–1047, 2014.
