JMIR Human Factors. 2022 Jun 9;9(2):e36831. doi: 10.2196/36831

Use of Health Care Chatbots Among Young People in China During the Omicron Wave of COVID-19: Evaluation of the User Experience of and Satisfaction With the Technology

Yi Shan 1,, Meng Ji 2, Wenxiu Xie 3, Xiaomin Zhang 4, Xiaobo Qian 5, Rongying Li 6, Tianyong Hao 5
Editor: Andre Kushniruk
Reviewed by: Dengshan Xia, Hiroki Tanaka
PMCID: PMC9186498  PMID: 35576058

Abstract

Background

Long before the outbreak of COVID-19, chatbots had been playing an increasingly crucial role and gaining popularity in health care. During the current omicron waves of the pandemic, when even the most resilient health care systems are increasingly overburdened, these conversational agents (CAs) are being turned to as preferred alternatives for health care information. For many people, especially adolescents and the middle-aged, mobile phones are the most favored source of information. It is therefore more important than ever to investigate the user experience of and satisfaction with chatbots on mobile phones.

Objective

The objective of this study was twofold: (1) Informed by Deneche and Warren’s evaluation framework, Zhu et al’s measures of variables, and the theory of consumption values (TCV), we designed a new assessment model for evaluating the user experience of and satisfaction with chatbots on mobile phones, and (2) we aimed to validate the newly developed model and use it to understand the user experience of and satisfaction with popular health care chatbots available to young people aged 17-35 years in southeast China for self-diagnosis and for acquiring information about COVID-19 and the currently spreading virus variants.

Methods

First, to assess user experience and satisfaction, we established an assessment model based on the relevant literature and TCV. Second, the chatbots were prescreened and selected for investigation. Subsequently, 413 informants were recruited from Nantong University, China. This was followed by a questionnaire survey soliciting the participants’ experience of and satisfaction with the selected health care chatbots via wenjuanxing, an online questionnaire survey platform. Finally, quantitative and qualitative analyses were conducted to determine the informants’ perceptions.

Results

The data collected were highly reliable (Cronbach α=.986) and valid: communalities=0.632-0.823, Kaiser-Meyer-Olkin (KMO)=0.980, and percentage of cumulative variance (rotated)=75.257% (P<.001). The findings of this study suggest a considerable positive impact of functional, epistemic, emotional, social, and conditional values on the participants’ overall user experience and satisfaction and a positive correlation between these values and user experience and satisfaction (Pearson correlation P<.001). The functional values (mean 1.762, SD 0.630) and epistemic values (mean 1.834, SD 0.654) of the selected chatbots were relatively more important contributors to the students’ positive experience and overall satisfaction than the emotional values (mean 1.993, SD 0.683), conditional values (mean 1.995, SD 0.718), and social values (mean 1.998, SD 0.696). All the participants (n=413, 100%) had a positive experience and were thus satisfied with the selected health care chatbots. The 5 grade categories of participants showed different degrees of user experience and satisfaction: Seniors (mean 1.853, SD 0.108) were the most receptive to health care chatbots for COVID-19 self-diagnosis and information, and second-year graduate candidates (mean 2.069, SD 0.133) were the least receptive; freshmen (mean 1.883, SD 0.114) and juniors (mean 1.925, SD 0.087) felt slightly more positive than sophomores (mean 1.989, SD 0.092) and first-year graduate candidates (mean 1.992, SD 0.116) when engaged in conversations with the chatbots. In addition, female informants (mean 1.931, SD 0.098) showed a relatively more receptive attitude toward the selected chatbots than male respondents (mean 1.999, SD 0.051).

Conclusions

This study investigated the use of health care chatbots among young people (aged 17-35 years) in China, focusing on their user experience and satisfaction as examined through a new assessment framework. The findings show that all 5 domains in the new assessment model have a positive impact on the participants’ user experience and satisfaction. In this paper, we examined the usability of health care chatbots as well as of chatbots used for other purposes, enriching the literature on the subject. This study also provides practical implications for designers and developers, as well as for governments, particularly in the critical period of the omicron waves of COVID-19 and in future public health crises.

Keywords: health care chatbots, COVID-19, user experience, user satisfaction, theory of consumption values, chatbots, adolescent, youth, digital health, health care, omicron wave, omicron, health care system, conversational agent

Introduction

Background

Regrettably, more than 95% of the population suffers from particular health problems [1], and about 60% of them visit a doctor even when affected by merely minor illnesses, such as a cold, headache, or stomachache. In fact, 80% of these conditions can be cured with home remedies, without the intervention of a doctor [2]. In this scenario, health care chatbots are capable of monitoring people’s health [1] by providing timely, useful health care information, especially during the omicron waves of COVID-19. These conversational agents (CAs) play a crucial role in health care in a fast-paced world, where the public is more absorbed in social media than concerned about their health [3] and mobile phones are becoming the primary source of information. Meanwhile, chatbots are substantially alleviating the pressure on the already overloaded health care systems of various countries. Hence, an upsurge in the development and application of health care chatbots has been witnessed since the advent of ELIZA in 1966, which served as a psychotherapist promoting communication with patients [4]. It inspired the design and application of other health care chatbots [5], including Casper [2], MedChat [2], PARRY [6], Watson Health [7], Endurance [7], OneRemission [8], Youper [9], Florence [10], Your.Md [11], AdaHealth [12], Sensely [13], and Buoy Health [14]. These leading chatbots offer patients tailored health and therapy information, recommended products and services, and personalized diagnoses and treatments based on confirmed symptoms [15]. Facing the repeated daunting waves of COVID-19, many people crave information to respond to the incessantly mutating coronavirus [16]. This sudden surge in the demand for information is increasingly overtaxing health care resources [17], including various health care hotlines and clinic services, so health care chatbots appear to be the only viable solution [17,18].
Given the status quo, the user experience of and satisfaction with chatbots are more important now than ever before. Relevant studies in several countries have investigated the effectiveness [19], usability [20], and acceptability [21] of chatbots. Drawing on technology acceptance theories (TATs), these studies on the use of health care chatbots focused on improving user experience and satisfaction through personalization [22], enjoyment [19], and novelty [23]. However, almost no investigation has been conducted in this respect among people in China from the perspective of the theory of consumption values (TCV).

Chatbots display unmatched advantages over other health care alternatives: alleviating the pressure on contact centers [24] and reducing contact-induced risks, satisfying unprecedented needs for health care information given the shortage of qualified human agents [25], providing cost-effective 24/7 service [25], offering consistent service quality [26], and making no moral judgement on undesirable information provided by users [27]. The enhancement of these qualities motivates their increased use for health care purposes. This trend is accelerating in the repeated outbreak waves of COVID-19, during which chatbots are being used to screen potentially infected cases [28], to help call centers triage patients [29], and to recommend the most appropriate solutions to patients [29].

These selling points will help popularize health care chatbots only if the public is willing to use them and adopt their recommendations [30,31] in the face of the rampant COVID-19 pandemic. To promote adoption and adherence, many studies on the use of chatbots during this global health emergency have explored user reactions [32], probed user experience and design considerations [33], focused on usage purposes [34], identified differences in chatbot feature use by gender, race, and age [35], improved bot response accuracy [36], investigated people’s behavior when seeking COVID-19 information [37], and introduced newly developed COVID-19–specific chatbots [38,39]. However, few investigations [32,33] have examined users’ perceptions of these chatbots, and extant studies predominantly focus on technology acceptance [40,41], neglecting user experience and user satisfaction. Both are crucial: good user experience is a prerequisite for user adoption of information systems (IS) [42,43], and user satisfaction is a key factor in IS acceptance intention [44,45].

To fight against the COVID-19 pandemic, chatbots have been used to provide psychological service for medical professionals and the general public in China [46]. Unfortunately, only 1 study, based on Deneche and Warren [47], investigated the user experience of and satisfaction with chatbots addressing COVID-19–related mental health in Wuhan and Chongqing, China [48]. However, this study focused on the determinants influencing user experience and satisfaction rather than on user experience and satisfaction per se. This gap in the literature needs to be filled.

Objective

The objective of this study was twofold: (1) Informed by Deneche and Warren’s [47] evaluation framework, Zhu et al’s [48] measures of variables, and the TCV [49], we designed a new assessment model for the user experience of and satisfaction with chatbots on mobile phones, and (2) we aimed to validate the newly developed model and use it to investigate the user experience of and satisfaction with popular Chinese and English language chatbots for timely self-diagnosis and general information concerning COVID-19 and the latest virus variants among young people (aged 17-35 years) in China, in order to provide evidence for potential improvements and developments of chatbots that sustain adherence and adoption, an inevitable worldwide trend.

Based on the twofold research aim, we proposed the following hypotheses:

  • Hypothesis 1: Explaining user behaviors in terms of diverse value-oriented factors (function, emotion, social influence, and environment), the newly developed comprehensive assessment model will have a high degree of reliability and validity and can better evaluate the user experience of and satisfaction with chatbots on mobile phones.

  • Hypothesis 2: The informants will generally be satisfied with their experience of using popular health care chatbots.

Two facts justify the necessity of this research: Young people (aged 17-35 years), who constitute a large portion of the population in China, rely on mobile health care apps more heavily than other age groups, and sustainable user adoption of and adherence to chatbots in this population can considerably unburden clinicians, enabling them to focus on more complex tasks and enhancing the availability of qualified health care services to the general public in China.

Methods

Overall Procedures

We followed 5 steps to reveal the user experience of and satisfaction with chatbots in young people (aged 17-35 years) in China. First, we established an assessment model evaluating user experience and satisfaction based on the related literature and TCV and designed a questionnaire according to the assessment model. Second, we screened and selected the chatbots to be investigated. Third, we recruited 413 students from Nantong University, China, as informants of this study. Fourth, we collected the informants’ demographic information, tested their health literacy, and solicited their experience of and satisfaction with the selected health care chatbots via a questionnaire survey. Finally, we conducted quantitative and qualitative analyses based on the data collected through the questionnaire.

Recruitment of Informants

Participants were recruited from among the students of Nantong University, China. This university enrolls around 8000 students annually, with a total enrollment exceeding 30,000. On-campus psychological tests and students’ counselors reported that a large percentage of students suffered from psychological problems of varying degrees during the repeated COVID-19 outbreaks. These students urgently need intelligent CAs for self-diagnosis and general information on the pandemic and the latest virus variants to ease their psychological strain during the public health emergency. Their experience of and satisfaction with health care chatbots is, on the whole, representative and characteristic of the adolescent and middle-aged population in China. The questionnaire survey was approved and supported by the school authority in charge of students’ affairs and by the student participants themselves. It was launched on the online questionnaire survey platform wenjuanxing [50] on January 8, 2022, and remained open until no additional questionnaire had been submitted for 2 consecutive days (January 12, 2022). Over this period, the survey was announced to the entire student body of over 1000 at the School of Foreign Studies, Nantong University, through emails and WeChat groups. Informants were recruited from among these students because only these English majors have the English proficiency needed to experience the use of English language chatbots. As is characteristic of schools of foreign studies at colleges and universities across China, the overwhelming majority of these students are female.

Selection of Health Care Chatbots

First, we took the world’s top 12 most popular health chatbots as the pool from which to select English language chatbots. These chatbots were reviewed by name, description, function, and hands-on experience, and only 2 (16.7%) of them, Buoy Health [14] and Healthily [11], were finally chosen (Figure 1).

Figure 1.

Figure 1

Flowchart of selecting Chinese and English language health care chatbots. Of the top 12 English chatbots, 3 (25%) were not accessible due to technical errors, requirement of enterprise/school identification, or difficult application for a demo.

Subsequently, we selected leading Chinese language chatbots from the dominant Android app markets, including 360 Mobile Assistant, Baidu Mobile Assistant, and Tencent MyApp, and the iOS App Store. The keywords health care chatbot (医疗保健聊天机器人), health care bot (医疗保健机器人), health care app (医疗保健应用软件), health care applet (医疗保健小程序), psychological health chatbot (心理健康聊天机器人), psychological health bot (心理健康机器人), psychological health app (心理健康应用软件), and psychological health applet (心理健康小程序) were searched in Chinese on January 8, 2022. The selection followed 2 steps: (1) A total of 18 apps were identified by the search words, and (2) a further review revealed that only 4 (22.2%) of these 18 apps—zuoshouyisheng (左手医生), adachina (爱达健康), zhinengyuwenzhen IPC (智能预问诊IPC), and xiaojiuzhinengwenzhenjiqiren (小九智能问诊机器人)—have the chatbot function, while 2 (11.1%; zhinengyuwenzhen IPC and xiaojiuzhinengwenzhenjiqiren) are still in development and provide no demos for experience and merely 2 (zuoshouyisheng and adachina) can truly function as chatbots. The selection process is illustrated in Figure 1.

It was arranged that the informants would use both the Chinese and English language chatbots for around 2 weeks before answering the questionnaire. This 2-week experience was intended to guarantee the validity and reliability of the questionnaire survey.

Assessment Model and Questionnaire

Informed by Deneche and Warren’s [47] evaluation framework, Zhu et al’s [48] measures of variables, and TCV [49], the assessment model designed for this study included 5 evaluation dimensions (functional, emotional, epistemic, social, and conditional) consisting of 18 variables (Table 1). These variables are supposed to contribute to user experience and user satisfaction. The questionnaire included 36 measures (Multimedia Appendix 1). Measures 1-26 were designed in light of the variables listed in Table 1. To solicit sufficient information, some variables may have corresponded to more than 1 measure. For example, “performance” was related to 6 measures (15-20) in the questionnaire. Measures 27-36 were intended to display the informants’ overall experience and satisfaction.

Table 1.

Assessment model of user experience and user satisfaction.

Dimension Variables
Functional
  • Context awareness

  • Language suitability

  • Customized service

  • User-friendliness

  • Performance

Emotional
  • Enjoyment

  • Relief from mental disorders

Epistemic
  • Novelty

  • Desire for knowledge

  • Knowledge enrichment

Social
  • Engagement

  • Empathy

  • Human likeness

  • Privacy

Conditional
  • Time

  • Place

  • Technological context

  • Mental state

Data Collection

The survey was conducted through wenjuanxing [50], an online questionnaire platform that is most popular in China. Three categories of data were collected via the online questionnaire: demographic information about the informants, their health literacy, and their experience of and satisfaction with the selected Chinese and English language chatbots. The demographic section collected data on the informants’ age, gender, grade, English proficiency, and ways of obtaining health care information during the COVID-19 pandemic. The health literacy part tested the informants’ basic medical vocabulary. The user experience and satisfaction module elicited the respondents’ ratings of the 36 measures. The score of each measure was rated between 1 and 4 points (1: totally agree; 2: basically agree; 3: basically disagree; 4: totally disagree).

Data Analysis

Quantitative analyses were performed using SPSS Statistics version 22.0 (IBM Corp) and R version 4.0.2 (The R Foundation). First, the demographic data and health literacy of the participants were briefly described as the background information of the analysis. Afterward, the reliability and validity of the data concerning user experience and satisfaction were confirmed. Finally, the minimum, maximum, and mean scores, as well as SD, were calculated for each of the 36 measures, and the percentages of informants falling into each of the 4 ratings of the 36 measures were computed. Inspection of the data and residual plots for mean scores of the 36 measures did not indicate any violation of assumptions of normality, independence, and homogeneity of variance, so the correlation between measures 1-26 and measures 27-36 was tested and confirmed.
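The response matrix itself is not published; as a rough sketch of the descriptive step, assuming a 413 × 36 matrix of integer ratings (1-4), the per-measure statistics and rating percentages could be computed as follows (the simulated data here are purely illustrative):

```python
import numpy as np

# Hypothetical response matrix: 413 respondents x 36 measures, each an
# integer rating from 1 (totally agree) to 4 (totally disagree).
rng = np.random.default_rng(0)
responses = rng.integers(1, 5, size=(413, 36))

# Per-measure minimum, maximum, mean, and sample SD (cf. Table 3).
minima = responses.min(axis=0)
maxima = responses.max(axis=0)
means = responses.mean(axis=0)
sds = responses.std(axis=0, ddof=1)

# Percentage of informants choosing each of the 4 ratings, per measure.
rating_pct = np.stack(
    [(responses == r).mean(axis=0) * 100 for r in (1, 2, 3, 4)]
)
```

Because every response falls into exactly one rating, the four percentages for each measure sum to 100.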

Ethical Considerations

Nantong University approved this study. It is official practice at this university to ask the Students’ Affairs Department for approval before collecting data from students, and we followed this practice. In addition, there is no ethics review board at Nantong University, so a review number or code for this study could not be provided.

Results

Informant Demographics

A total of 413 questionnaires were collected, including 358 (86.68%) from female respondents. This can be explained by the fact that over 80% of students studying in the School of Foreign Studies, Nantong University, are female. The age of the participants ranged from 17 to 33 years (mean 20.96, SD 2.18). The overwhelming majority (n=402, 96.86%) of them were aged between 18 and 25 years. The informants included freshmen (n=66, 15.98%), sophomores (n=72, 17.43%), juniors (n=110, 26.63%), seniors (n=68, 16.46%), first-year graduate candidates (n=52, 12.59%), and second-year graduate candidates (n=45, 10.90%), all studying in the School of Foreign Studies. Most of them (n=259, 62.71%) scored more than 100 in English in the entrance examinations for colleges and universities. Most of them (n=267, 65.65%) passed College English Test Band 6 (CET 6), Test for English Majors Band 4 (TEM 4), or TEM 8. Their English proficiency well enabled them to experience the use of English language chatbots. The majority of the informants (n=355, 85.96%) obtained COVID-19–related health care information by visiting a doctor or logging on to the internet. Table 2 shows the informants’ demographics, including grade, age, gender, and English proficiency, as well as the health care information sources they drew on.

Table 2.

Informant demographics (N=413).

Categories Participants, n (%) Cumulative percentage (%)
I’m a____.

Freshman 66 (15.98) 15.98

Sophomore 72 (17.43) 33.41

Junior 110 (26.63) 60.05

Senior 68 (16.46) 76.51

First-year graduate candidates 52 (12.59) 89.10

Second-year graduate candidates 45 (10.90) 100.00
I’m ____ years old.

17 2 (0.48) 0.48
18 27 (6.54) 7.02
19 56 (13.56) 20.58
20 89 (21.55) 42.13
21 79 (19.13) 61.26
22 65 (15.74) 77.00
23 47 (11.38) 88.38
24 19 (4.60) 92.98
25 18 (4.36) 97.34
26 3 (0.73) 98.06
27 2 (0.48) 98.55
29 1 (0.24) 98.79
32 3 (0.73) 99.52
33 2 (0.48) 100.00
I’m ____.

Male 55 (13.32) 13.32
Female 358 (86.68) 100.00
I scored ____ in English in the entrance examinations for colleges and universities.

>90 154 (37.29) 37.29
>100 79 (19.13) 56.42
>110 41 (9.93) 66.34
>120 75 (18.16) 84.50
>130 57 (13.80) 98.31
>140 7 (1.69) 100.00
I passed ____.

CETa 3 52 (12.59) 12.59
CET 4 94 (22.76) 35.35
CET 6 47 (11.38) 46.73
TEMb 4 150 (36.32) 83.05
TEM 8 70 (16.95) 100.00
Facing COVID-19, I mainly obtain health care information through ____.

Visiting a doctor 94 (22.76) 22.76
Logging on to the internet 261 (63.20) 85.96
Reading books, papers, and journals 14 (3.39) 89.35
Families, friends, and classmates 31 (7.51) 96.85
Health care hotlines 4 (0.97) 97.82
Health care chatbots 9 (2.18) 100.00

aCET: College English Test.

bTEM: Test for English Majors.

Data Reliability and Validity

As shown in Table S1 in Multimedia Appendix 2, Cronbach α for all 36 items (measures) rated by all 413 respondents was .986, well above .9. If item 4 were deleted, Cronbach α would increase by merely .001, so it was retained for the analysis. This indicates that the data collected for each measure in the questionnaire are highly reliable. The corrected item-total correlation for each measure was well above 0.4, implying that the 36 measures are closely correlated.
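Cronbach α, the reliability criterion used here, can be computed directly from the item-score matrix. A minimal numpy sketch (the toy matrix below is illustrative; the function, not the data, is the point):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Sanity check: perfectly correlated items yield alpha = 1.
base = np.arange(10, dtype=float)
perfect = np.column_stack([base, base, base])
print(round(cronbach_alpha(perfect), 3))  # prints 1.0
```

Applied to the real 413 × 36 matrix, this computation would reproduce the reported α of .986; the "α if item deleted" statistic is obtained by rerunning the function with one column removed.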

The data were highly valid (Table S2 in Multimedia Appendix 2). The communalities for all 36 items ranged from 0.632 to 0.823, well above 0.4, indicating that all these items are reasonable and should be included in the analysis. The Kaiser-Meyer-Olkin (KMO) value of 0.980 was substantially above 0.6, showing that the data for all 36 items are suitable for factor extraction. The percentages of variance (rotated) for factors 1-3 were 30.428%, 28.077%, and 16.752%, respectively, and the percentage of cumulative variance (rotated) for the 3 factors was 75.257%, considerably above 50%. This means that the data on all the items can be extracted validly.
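The KMO statistic compares raw correlations with partial (anti-image) correlations: values near 1 mean the items share enough common variance for factor analysis. A minimal numpy implementation, applied to simulated single-factor data for illustration (the real item matrix is not public):

```python
import numpy as np

def kmo(data: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin measure for an (n_respondents, n_items) matrix."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    # Anti-image (partial) correlations from the inverse correlation matrix.
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    r2, p2 = (corr ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + p2)

# Simulated items driven by one strong common factor: KMO should be near 1.
rng = np.random.default_rng(1)
factor = rng.normal(size=(413, 1))
items = factor + 0.3 * rng.normal(size=(413, 6))
print(round(kmo(items), 3))
```

A KMO of 0.980, as reported above, indicates that the partial correlations are negligible relative to the raw ones, which is exactly the condition under which extraction is meaningful.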

User Experience and Satisfaction

Table 3 displays the results of the descriptive analysis of user experience and satisfaction. The minimum, maximum, and mean scores were based on the rating scale of each measure (1: totally agree; 2: basically agree; 3: basically disagree; and 4: totally disagree). The mean scores of the 36 measures were lower than or slightly over 2, indicating that the respondents were inclined to totally or basically agree with these measures. In other words, they found the chatbots pleasurable and satisfactory in terms of the functional, emotional, epistemic, social, and conditional domains.

Table 3.

Descriptive analysis of user experience and satisfaction. Items 1-36 represent the 36 measures in the questionnaire (N=413 for each item).

Item Minimum score Maximum score Mean score (SD) Median score
Conditional domain (mean 1.995, SD 0.718)

1 1.000 4.000 1.908 (0.666) 2.000

2 1.000 4.000 1.971 (0.686) 2.000

3 1.000 4.000 2.048 (0.777) 2.000

4 1.000 4.000 2.051 (0.731) 2.000
Epistemic domain (mean 1.834, SD 0.654)

5 1.000 4.000 1.738 (0.646) 2.000

6 1.000 4.000 2.000 (0.690) 2.000

7 1.000 4.000 1.765 (0.627) 2.000
Functional domain (mean 1.762, SD 0.630)

8 1.000 4.000 1.978 (0.648) 2.000

9 1.000 4.000 1.891 (0.639) 2.000

10 1.000 4.000 1.881 (0.606) 2.000

11 1.000 4.000 1.881 (0.602) 2.000

12 1.000 4.000 1.942 (0.647) 2.000

13 1.000 4.000 1.927 (0.627) 2.000

14 1.000 4.000 1.862 (0.629) 2.000

15 1.000 4.000 1.794 (0.602) 2.000

16 1.000 4.000 1.932 (0.631) 2.000

17 1.000 4.000 2.046 (0.700) 2.000

18 1.000 4.000 1.872 (0.596) 2.000

19 1.000 4.000 1.896 (0.620) 2.000

20 1.000 4.000 1.891 (0.639) 2.000
Social domain (mean 1.998, SD 0.696)

21 1.000 4.000 1.915 (0.639) 2.000

22 1.000 4.000 1.998 (0.695) 2.000

23 1.000 4.000 2.133 (0.775) 2.000

24 1.000 4.000 1.944 (0.675) 2.000
Emotional domain (mean 1.993, SD 0.683)

25 1.000 4.000 1.976 (0.679) 2.000

26 1.000 4.000 2.010 (0.686) 2.000
Experience domain (mean 1.978, SD 0.639)

27 1.000 4.000 1.913 (0.617) 2.000

28 1.000 4.000 2.155 (0.751) 2.000

29 1.000 4.000 2.019 (0.653) 2.000

30 1.000 4.000 1.913 (0.593) 2.000

31 1.000 4.000 1.891 (0.583) 2.000
Satisfaction domain (mean 1.894, SD 0.617)

32 1.000 4.000 1.901 (0.593) 2.000

33 1.000 4.000 1.939 (0.634) 2.000

34 1.000 4.000 1.947 (0.648) 2.000

35 1.000 4.000 1.792 (0.595) 2.000

36 1.000 4.000 1.889 (0.617) 2.000

The functional domain displayed the lowest mean score (1.762, SD 0.630), closely followed by the epistemic domain (mean 1.834, SD 0.654). This indicates that the respondents were overall satisfied with the function of the selected chatbots when seeking self-diagnosis and general knowledge about COVID-19 and the latest virus variants and that they had enriched their COVID-19–related knowledge through this novel way of communicating with the chatbots. The conditional, social, and emotional domains showed similar mean scores of slightly below 2. It follows that the participants found it necessary and technologically feasible to obtain health care information by communicating with the chatbots via a mobile phone anytime and anywhere in the face of the rampant COVID-19 pandemic, which imposes mental stress on them in varying degrees. Additionally, they believed that seeking COVID-19–related health care information by communicating with the chosen chatbots was generally enjoyable and mentally relaxing and that the somewhat humanlike, empathetic chatbots made them socially and emotionally engaged in human-machine conversations. Furthermore, they basically trusted that the personal information revealed in communication with the chatbots would be used for medical or research purposes rather than for unreasonable or even illegal ends. Overall, they had a pleasant and satisfactory experience when communicating with the chatbots for COVID-19–related self-diagnosis and health care information, as shown by the mean scores for experience (1.978, SD 0.639) and satisfaction (1.894, SD 0.617) in Table 3.

Table S3 in Multimedia Appendix 2 shows the proportion of informants falling into each of the 4 ratings of the 36 measures. Over 80% (n=330) of the respondents totally or basically agreed with all measures, except measures 3, 4, 17, 23, and 28. Strikingly, more than 90% of the respondents totally or basically agreed with measures 5 (n=381, 92.25%), 7 (n=385, 93.22%), 11 (n=372, 90.07%), 14 (n=372, 90.08%), 15 (n=386, 93.46%), 18 (n=379, 91.77%), 31 (n=375, 90.80%), and 35 (n=388, 93.95%). Even for measures 3, 4, 17, 23, and 28, 312 (75.54%), 322 (77.97%), 320 (77.48%), 298 (72.15%), and 286 (69.25%) of participants totally or basically agreed, respectively. Specifically, the number of students totally agreeing with each of the 36 measures ranged from 76 (18.40%) to 147 (35.59%), and the number basically agreeing varied between 210 (50.85%) and 286 (69.25%). This means that most of the participating students showed a positive attitude toward their experience of the use of chatbots.

Correlation Between the 5 Domains and User Experience and Satisfaction

Table S4 in Multimedia Appendix 2 also demonstrates that the 5 domains are intimately correlated with user experience and satisfaction, that is, the former considerably contributes to the latter. Pearson correlation was used to determine the correlation between each of the 26 measures (1-26) in the 5 domains and each of the 10 measures (27-36) in overall user experience and satisfaction. Statistics showed that each of the former 26 measures is positively correlated with each of the latter 10 measures, with P<.001 for each correlation and all correlation coefficients varying from 0.459 to 0.844. This indicates that the functional, epistemic, emotional, social, and conditional values of health care chatbots contribute positively to overall user experience and satisfaction, as far as the 413 informants of this study are concerned.
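The 26 × 10 block of Pearson coefficients described here can be read directly off the full correlation matrix of the 36 measures. A sketch with simulated data (a shared latent attitude stands in for the real responses, which are not public):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical ratings: one shared latent attitude drives all 36 measures,
# mimicking the uniformly positive inter-measure correlations reported above.
attitude = rng.normal(size=(413, 1))
ratings = attitude + 0.8 * rng.normal(size=(413, 36))

# Pearson r between the value measures (1-26, columns 0-25) and the
# experience/satisfaction measures (27-36, columns 26-35).
r_block = np.corrcoef(ratings, rowvar=False)[:26, 26:]
print(r_block.shape, round(r_block.min(), 3), round(r_block.max(), 3))
```

With the real data, each of these 260 coefficients would be tested for significance (eg, with `scipy.stats.pearsonr`), matching the reported range of 0.459-0.844 with P<.001 throughout.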

Differences in User Experience and Satisfaction by Gender and Grade

Table 4 lists the mean scores of all 36 measures rated by males and females. The t test revealed a significant difference between male and female ratings, with the former significantly higher than the latter (P<.001), as shown in Table 5. Since lower scores indicate stronger agreement, this implies that female participants were more positive in their experience of and satisfaction with health care chatbots than their male counterparts.
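The paper does not state which t test variant was used; the sketch below implements Welch's unequal-variance t statistic and applies it to simulated per-measure mean scores. The group means and SDs are taken from Table 5, but assuming the test compared the 36 per-measure means of the two genders (and assuming normality) is ours:

```python
import numpy as np

def welch_t(a: np.ndarray, b: np.ndarray) -> tuple[float, float]:
    """Welch's two-sample t statistic and degrees of freedom."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Simulated per-measure mean ratings for each gender, matching the summary
# statistics in Table 5 (means 1.999 vs 1.931; SDs 0.051 vs 0.098).
rng = np.random.default_rng(3)
male_means = rng.normal(1.999, 0.051, size=36)
female_means = rng.normal(1.931, 0.098, size=36)

t, df = welch_t(male_means, female_means)
# Compare |t| with the critical value (about 2.0 at P=.05 for these df).
print(f"t={t:.2f}, df={df:.1f}")
```

The same function, applied pairwise to the six grade categories, would produce the P-value matrix of Table 6 (eg, via `scipy.stats.ttest_ind` with `equal_var=False`).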

Table 4.

Mean scores of the 36 measures by gender.

Item Mean score (SD) by males Mean score (SD) by females
1 1.945 (0.803) 1.903 (0.643)
2 2.012 (0.782) 1.967 (0.671)
3 2.091 (0.867) 2.044 (0.764)
4 2.018 (0.805) 2.058 (0.720)
5 1.818 (0.772) 1.731 (0,624)
6 2.018 (0.828) 2.000 (0.667)
7 1.927 (0.813) 1.744 (0.591)
8 2.036 (0.769) 1.972 (0.628)
9 2.055 (0.826) 1.869 (0.602)
10 2.000 (0.816) 1.867 (0.566)
11 2.000 (0.754) 1.867 (0.576)
12 2.036 (0.838) 1.931 (0.613)
13 2.036 (0.838) 1.914 (0.587)
14 1.945 (0.780) 1.853 (0.603)
15 1.964 (0.860) 1.772 (0.549)
16 2.055 (0.848) 1.917 (0.590)
17 2.127 (0.862) 2.036 (0.673)
18 2.000 (0.793) 1.856 (0.558)
19 1.982 (0.871) 1.886 (0.573)
20 1.964 (0.816) 1.883 (0.608)
21 2.018 (0.871) 1.903 (0.595)
22 2.036 (0.881) 1.994 (0.663)
23 2.255 (0.927) 2.117 (0.749)
24 2.036 (0.793) 1.933 (0.655)
25 2.055 (0.826) 1.967 (0.654)
26 2.073 (0.836) 2.003 (0.661)
27 1.964 (0.769) 1.908 (0.591)
28 2.164 (0.918) 2.156 (0.723)
29 2.036 (0.793) 2.019 (0.630)
30 1.945 (0.756) 1.911 (0.565)
31 2.036 (0.769) 1.872 (0.547)
32 1.964 (0.816) 1.894 (0.552)
33 1.927 (0.790) 1.944 (0.607)
34 1.982 (0.805) 1.944 (0.621)
35 1.873 (0.795) 1.783 (0.559)
36 1.982 (0.805) 1.878 (0.583)

Table 5.

Results of the t test of mean scores of the 36 measures by gender (t test P<.001).

Classification Participants, n (%) Minimum score Maximum score Mean score (SD)
Male 55 (13.32) 1.000 4.000 1.999 (0.051)
Female 358 (86.68) 1.000 4.000 1.931 (0.098)

According to the t test (Table 6), there was a significant difference between freshmen’s ratings and sophomores’ ratings (P<.001), between freshmen’s ratings and first-year graduate candidates’ ratings (P<.001), between freshmen’s ratings and second-year graduate candidates’ ratings (P<.001), between sophomores’ ratings and juniors’ ratings (P=.004), between sophomores’ ratings and seniors’ ratings (P<.001), between sophomores’ ratings and second-year graduate candidates’ ratings (P<.001), between juniors’ ratings and seniors’ ratings (P<.001), between juniors’ ratings and first-year graduate candidates’ ratings (P=.01), between juniors’ ratings and second-year graduate candidates’ ratings (P<.001), between seniors’ ratings and first-year graduate candidates’ ratings (P<.001), between seniors’ ratings and second-year graduate candidates’ ratings (P<.001), and between first- and second-year graduate candidates’ ratings (P=.002). This indicates that freshmen had a better experience and greater satisfaction than sophomores, first-year graduate candidates, and second-year graduate candidates when communicating with health care chatbots for COVID-19–related information. Sophomores had a better experience and greater satisfaction than second-year graduate candidates but a less positive experience and lesser satisfaction than juniors and seniors. Juniors felt more positive than first- and second-year graduate candidates but less positive than seniors in their experience and satisfaction. Seniors had a better experience and greater satisfaction than first- and second-year graduate candidates. First-year graduate candidates felt more positive than second-year graduate candidates when engaged in conversations with the health care chatbots.

Table 6.

Results of the t test of mean scores of the 36 measures by grade.

Classification Participants, n (%) Minimum score Maximum score Mean score (SD) Freshman P value Sophomore P value Junior P value Senior P value First-year graduate candidate P value Second-year graduate candidate P value
Freshman 66 (15.98) 1.000 4.000 1.883 (0.114) N/Aa <.001 .24 .08 <.001 <.001
Sophomore 72 (17.43) 1.000 4.000 1.989 (0.092) <.001 N/A .004 <.001 .81 <.001
Junior 110 (26.63) 1.000 4.000 1.925 (0.087) .24 .004 N/A .001 .01 <.001
Senior 68 (16.46) 1.000 4.000 1.853 (0.108) .08 <.001 .001 N/A <.001 <.001
First-year graduate candidate 52 (12.59) 1.000 4.000 1.992 (0.116) <.001 .81 .001 <.001 N/A .002
Second-year graduate candidate 45 (10.90) 1.000 4.000 2.069 (0.133) <.001 <.001 <.001 <.001 .002 N/A

aN/A: not applicable.

Overall, seniors were the most positive when expressing their experience of and satisfaction with health care chatbots, closely followed by freshmen and juniors. Sophomores and first-year graduate candidates reported similar, slightly less positive experience and satisfaction. Second-year graduate candidates were less positive than the other 5 grade categories.
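The grade-wise and gender-wise comparisons above are pairwise two-sample t tests on groups of unequal size and variance. As a minimal illustrative sketch (the rating samples below are hypothetical, not the study's data, and Welch's unequal-variance variant is assumed here, since the article does not specify which t test variant was used), the test statistic and degrees of freedom can be computed as follows:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees
    of freedom for independent samples of unequal size and variance
    (the setting of the grade-wise and gender-wise comparisons)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances
    se2 = va / na + vb / nb             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical mean-score samples for two grade groups (lower score = more positive)
group_a = [1.85, 1.90, 1.88, 1.92, 1.86]
group_b = [2.05, 2.10, 2.02, 2.08, 2.12]
t, df = welch_t(group_a, group_b)
```

A P value is then obtained from the t distribution with the resulting degrees of freedom; standard statistical packages report it directly.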

Discussion

Principal Findings

Young people aged 17-35 years constitute a population considered particularly receptive to health care chatbots during the omicron waves of COVID-19 for self-diagnosis and information about the latest virus variants. The findings of this study bring into focus the effect of the functional, epistemic, emotional, social, and conditional values of health care chatbots on the user experience and satisfaction of this specific population. Our findings suggest a considerable positive impact of these values on overall user experience and satisfaction and a positive correlation between these values and user experience and satisfaction. Through an online questionnaire survey conducted in the midst of the repeated outbreaks of the COVID-19 pandemic, we found that the participants generally had a positive experience with, and were thus satisfied with, the selected health care chatbots owing to their generally satisfactory services. The statistical results also showed different degrees of experience and satisfaction across the 6 grade categories of participants: Seniors were the most receptive to health care chatbots for COVID-19 self-diagnoses and information, while second-year graduate candidates were the least receptive; freshmen and juniors felt slightly more positive than sophomores and first-year graduate candidates when engaged in conversations with the chatbots. In addition, female informants showed a relatively more receptive attitude toward the selected chatbots than male respondents. One possible reason for the relatively low reception among second-year graduate candidates is that they largely belonged to the oldest age group and were comparatively less willing to accept this novel way of obtaining information through communicating with chatbots. Although no studies have been devoted to age-related differences in user experience and satisfaction, this aspect deserves further investigation.

In addition to the chatbots’ advantages, such as accessibility, cost-effectiveness, and flexibility [51], the functional, epistemic, emotional, social, and conditional values contributed to the overall pleasant experience and general satisfaction among the 413 respondents. According to the statistical results, the functional and epistemic values of the selected chatbots were the most important contributors to the students’ positive experience and overall satisfaction. Functional values are concerned with functional and utilitarian performance [52]. In this study, the informants believed that the chatbots could perceive the consulting context and use suitable language to provide personalized services based on their specific needs [53]. Personalization is a crucial function of artificial intelligence–based applications [54]. The chatbots selected for this study provided the survey participants with personalized services such as feedback, health reports, alerts, and recommendations [22], addressing the diverse mental health issues troubling different people during the repeated resurgences of COVID-19 [46] and leading to a higher level of user experience and satisfaction [22,55]. In addition, we found that other functional values, including user-friendliness, ease of use, and performance (eg, timely, precise, accurate, and effective answering; error-handling capacity) [47], also contributed to the participants’ generally positive experience and overall satisfaction. Communicating with the health care chatbots also offered the student informants novelty and satisfied their desire for knowledge [49]. This novel way of learning about self-diagnosis and general information concerning COVID-19 and the latest virus variants led to a largely positive experience of and overall satisfaction with the health care chatbots on the part of the respondents. This is consistent with extant studies [49,52,56].

The conditional, emotional, and social values played similar roles in providing the informants with a good experience and general satisfaction. Worldwide, COVID-19 has left people suffering from stress, anxiety, depression, and other psychological problems [57]. As such, chatbots have been launched in many countries to psychologically assist people during COVID-19 [58]. Such particular conditions and situations of time, place, technology, and people’s mental state [59,60] promoted the informants’ decision [61] to resort to health care chatbots for self-diagnosis and general information about COVID-19 and the latest virus variants. The survey participants found that the health care chatbots were available almost anytime and anywhere, providing faster health care services and reducing contact-induced risks. Thus, informed by Lee et al [62], we concluded that the conditional values of chatbots perceived by the participants in the face of the worldwide health emergency of COVID-19 positively influenced the user experience of and satisfaction with the health care chatbots. This finding is in line with recent studies [48,52].

Enjoyment is both an emotional value [48] and an important element [40] of chatbots. The respondents of this study considered that communicating with the chatbots gave them an enjoyable feeling, as found in recent studies [62,63]. The impact of enjoyment and delight on the user experience of chatbots [64], user adoption [65], and user satisfaction [19,66] has been demonstrated in several studies. This feeling helped relieve the stress, depression, and anxiety [66] of the informants of this study during the critical period of repeated outbreaks of COVID-19, contributing to their positive experience of and overall satisfaction with the health care chatbots chosen for this research.

User experience during human-product interaction results from all aspects of user feelings (functional, emotional, social, etc) [67], each of which brings about a particular evaluation of the product or service concerned [68]. In this study, the participants also perceived the social values of the health care chatbots. They believed that the selected chatbots could fully engage them in conversations for self-diagnosis and the acquisition of general information concerning COVID-19 and the latest virus variants, thus satisfying their needs for communication, affection, and social belonging [69]. They felt the chatbots’ empathetic tone when conversing about COVID-19–related health care information and trusted that their personal information would not be misused. Such humanlike empathy and privacy protection led to a more positive outlook, a feeling of emotional backup, and a sense of social belonging on the part of the informants, establishing trust and an emotional connection between them and the chatbots [69].

Implications

Informed by Denecke and Warren’s [47] evaluation framework, Zhu et al’s [48] measures of variables, and TCV [49,70], this study established a new assessment framework to investigate the informants’ user experience of and satisfaction with the selected health care chatbots. It advances the theory regarding the user experience of and satisfaction with health chatbots from the perspective of TCV, enriching previous studies that pay little attention to this aspect [48]. Although previous studies have examined the user experience of and satisfaction with health chatbots in terms of effectiveness, usability, acceptability, personalization, enjoyment, and novelty, they have explored this topic drawing on technology acceptance theories (TAT) [19-23,40,41,63], such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology Model (UTAUT). TAM and UTAUT are primarily concerned with the relationship between user behavior and the quality and function of technology-empowered products, so these theories fail to provide a full account of the utilization of health care chatbots in various human-machine interaction settings, particularly in the context of COVID-19–induced social distancing and even lockdowns [48]. Comparing TAT with TCV, we found that the latter has greater explanatory power: TCV comprehensively integrates a variety of value-oriented factors (functional, emotional, epistemic, social, and conditional) into its account of user behavior when engaging in communication with chatbots. Therefore, the user experience and satisfaction assessment model we established based on TCV is well suited to capturing the user experience of and satisfaction with health care chatbots during the public health emergency of COVID-19 as well as other public health crises and natural disasters.
In addition, the newly developed assessment scale of 36 items and 5 dimensions is more comprehensive than Denecke and Warren’s [47] evaluation framework and Zhu et al’s [48] measures of variables and showed high reliability (Cronbach α=.986) and validity (KMO=0.980). Although many countries have provided chatbots to psychologically assist the public during the COVID-19–induced health emergency [58], almost no research has studied the user experience of and satisfaction with mental health chatbots during this pandemic [54]. This paper fills that gap in the extant literature.
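The reported reliability coefficient, Cronbach α, depends only on the number of scale items and on the variances of the item scores and of the respondents' total scores. A minimal sketch of the computation (the item scores below are hypothetical, not the survey's responses):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.
    items: one list of scores per item, all over the same respondents."""
    k = len(items)
    sum_item_var = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Hypothetical 4-point ratings from 5 respondents on 3 items;
# values closer to 1 indicate stronger internal consistency.
items = [
    [1, 2, 2, 3, 1],
    [1, 2, 3, 3, 2],
    [2, 2, 2, 3, 1],
]
alpha = cronbach_alpha(items)
```

An α of .986, as reported for the 36-item scale, indicates that the items are highly internally consistent; KMO, in contrast, is a sampling-adequacy statistic computed from the item correlation matrix and is not shown here.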

On the practical side, the new assessment framework of this research and the related findings can inspire artificial intelligence (AI) companies and scientific institutions to better design health care chatbots by giving top priority to the functional and epistemic values of these CAs while not neglecting their emotional, social, and conditional values. Health care chatbots integrating these 5 domains of values can enhance user experience and satisfaction. This paper also provides governments with guidelines for choosing and popularizing health care chatbots in times of public health emergencies such as COVID-19. As the first generation living with AI, we have the responsibility to design chatbots and make them ubiquitous and helpful to society as a whole [69].

Limitations

Several limitations may affect the generalizability of the findings reported in this paper. Most importantly, some of our findings may be biased by the selection of respondents. In particular, the slightly higher level of user experience of and satisfaction with the selected health care chatbots may be attributable to the much higher percentage of female respondents (86.68%). Additionally, we did not ask whether respondents had prior experience with health care chatbots, so we could not determine whether our findings were biased by a mixture of respondents with and without such experience. Finally, the survey was cross-sectional, lacked comparison with a period unaffected by the COVID-19 pandemic or with a different time of the year, and drew its data from only 1 university. We were therefore unable to ascertain whether the findings of this study can be generalized to the same age group in other regions or countries. The generalizability and validity of the findings and the assessment framework of this study need to be examined in further studies.

Conclusion

Government agencies worldwide have been providing the public with chatbots to psychologically assist them [58] in coping with the plethora of mental disorders caused by COVID-19 [57]. However, the literature pays little attention to the user experience of and satisfaction with health care chatbots among young people. This study examined the use of health care chatbots among young people (aged 17-35 years) in China, mainly investigating their user experience and satisfaction through a newly designed assessment framework. The findings show that the functional, epistemic, emotional, social, and conditional domains in the new assessment framework all have a positive impact on the participants’ user experience and satisfaction. This paper advances the theory regarding the usability of health care chatbots, and of chatbots for other purposes, enriching the literature. It also provides practical implications for chatbot designers and developers as well as for governments, especially in the critical period of the omicron waves of COVID-19 and future public health crises.

Abbreviations

AI

artificial intelligence

CA

conversational agents

CET

College English Test

IS

information systems

KMO

Kaiser-Meyer-Olkin

TAM

Technology Acceptance Model

TAT

technology acceptance theories

TCV

theory of consumption values

TEM

Test for English Majors

UTAUT

Unified Theory of Acceptance and Use of Technology Model

Multimedia Appendix 1

Survey questionnaire.

Multimedia Appendix 2

Supplementary tables.

Footnotes

Conflicts of Interest: None declared.

References

  • 1.Rahman M, Amin R, Liton M, Hossain N. Disha: an implementation of machine learning based Bangla healthcare chatbot. 22nd International Conference of Computer and Information Technology; December 18-20, 2019; Dhaka, Bangladesh. 2019. pp. 18–20. [DOI] [Google Scholar]
  • 2.Bhirud N, Tataale S, Randive S, Nahar S. A literature review on chatbots in healthcare domain. Int J Sci Technol Res. 2019;8(7):225–231. [Google Scholar]
  • 3.Mathew R, Varghese S, Joy S, Alex S. 2019 Chatbot for disease prediction and treatment recommendation using machine learning. 3rd International Conference on Trends in Electronics and Informatics; April 23-25, 2019; Tirunelveli, India. 2019. pp. 23–25. [DOI] [Google Scholar]
  • 4.Weizenbaum J. ELIZA—a computer program for the study of natural language communication between man and machine. Commun ACM. 1966 Jan;9(1):36–45. doi: 10.1145/365153.365168. [DOI] [Google Scholar]
  • 5.Klopfenstein L, Delpriori S, Malatini S, Bogliolo A. The rise of bots: a survey of conversational interfaces, patterns, and paradigms. Conference on Designing Interactive Systems; June 10-14, 2017; Edinburgh, UK. 2017. pp. 10–14. [DOI] [Google Scholar]
  • 6.Colby KM, Weber S, Hilf FD. Artificial paranoia. Artif Intell. 1971;2(1):1–25. doi: 10.1016/0004-3702(71)90002-6. [DOI] [Google Scholar]
  • 7.Adamopoulou E, Moussiades L. Chatbots: history, technology, and applications. Mach Learn Appl. 2020 Dec;2:100006. doi: 10.1016/j.mlwa.2020.100006. [DOI] [Google Scholar]
  • 8.Oneremission: Making the Lives of Cancer Survivors Easier. [2022-01-06]. https://keenethics.com/project-one-remission .
  • 9.Youper Expert Care for Anxiety and Depression. [2022-01-06]. https://www.youper.ai/
  • 10.Florence: Your Health Assistant. [2022-01-06]. https://florence.chat/
  • 11.Healthily: Your Trusted Guide to Health. [2022-06-03]. https://www.livehealthily.com/
  • 12.Health. Powered by Ada. [2022-06-03]. https://ada.com/
  • 13.Sensely: Increasing Access. Lowering Costs. Improving Health. [2022-06-03]. https://www.sensely.com/
  • 14.When Something Feels Off, Buoy It. [2022-06-03]. https://www.buoyhealth.com .
  • 15.The Medical Futurist The Top 12 Health Chatbots. [2022-06-03]. https://medicalfuturist.com/top-12-health-chatbots .
  • 16.Drees J. Led by COVID-19 Surge, Virtual Visits Will Surpass 1B in 2020: Report. [2022-06-04]. https://www.beckershospitalreview.com/telehealth/led-by-covid-19-surge-virtual-visits-will-surpass-1b-in-2020-report.html .
  • 17.Judson T, Odisho A, Young J, Bigazzi O, Steuer D, Gonzales R, Neinstein AB. Implementation of a digital chatbot to screen health system employees during the COVID-19 pandemic. J Am Med Inform Assoc. 2020 Jul 01;27(9):1450–1455. doi: 10.1093/jamia/ocaa130. http://europepmc.org/abstract/MED/32531066 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Miner AS, Laranjo L, Kocaballi AB. Chatbots in the fight against the COVID-19 pandemic. NPJ Digit Med. 2020 May 04;3(1):65. doi: 10.1038/s41746-020-0280-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health. 2017 Jun 06;4(2):e19. doi: 10.2196/mental.7785. https://mental.jmir.org/2017/2/e19/ [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Cameron G, Cameron D, Megaw G, Bond R, Mulvenna M, O'Neill S, Armour C, McTear M. Assessing the usability of a chatbot for mental health care. International Workshop on Internet Science; April 2019; St. Petersburg, Russia. 2019. [DOI] [Google Scholar]
  • 21.Ail MR, Rasazi Z, Mamun AA, Langevin R, Rawassizadeh R, Schubert L, Hoque ME. A virtual conversational agent for teens with autism: experimental results and design lessons. Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (IVA'20); October 2020; Scotland, UK. 2020. [DOI] [Google Scholar]
  • 22.Kocaballi AB, Berkovsky S, Quiroz JC, Laranjo L, Tong HL, Rezazadegan D, Briatore A, Coiera E. The personalization of conversational agents in health care: systematic review. J Med Internet Res. 2019 Nov 07;21(11):e15360. doi: 10.2196/15360. https://www.jmir.org/2019/11/e15360/ [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Nelson RR, Consoli D. An evolutionary theory of household consumption behavior. J Evol Econ. 2010 Feb 6;20(5):665–687. doi: 10.1007/s00191-010-0171-7. [DOI] [Google Scholar]
  • 24.Hao K. The pandemic is emptying call centers. AI chatbots are swooping in. MIT Technology Review. [2022-06-04]. https://www.technologyreview.com/2020/05/14/1001716/ai-chatbots-take-call-center-jobs-during-coronavirus-pandemic/
  • 25.Mittal A, Agrawal A, Chouksey A, Shriwas R, Agrawal S. A comparative study of chatbots and humans. Int J Adv Res Comput Commun Eng. 2016;5(3):1055. [Google Scholar]
  • 26.AbuShawar B, Atwell E. Usefulness, localizability, humanness, and language-benefit: additional evaluation criteria for natural language dialogue systems. Int J Speech Technol. 2016 Jan 4;19(2):373–383. doi: 10.1007/s10772-015-9330-4. [DOI] [Google Scholar]
  • 27.Følstad A, Skjuve M. Chatbots for customer service: user experience and motivation. Proceedings of the 1st International Conference on Conversational User Interfaces; 2019; New York, NY. 2019. pp. 1–9. [DOI] [Google Scholar]
  • 28.Ross C. I asked eight chatbots whether I had Covid-19. The answers ranged from ‘low’ risk to ‘start home isolation’. [2022-06-04]. https://www.statnews.com/2020/03/23/coronavirus-i-asked-eight-chatbots-whether-i-had-covid-19/
  • 29.Ghosh S, Bhatia S, Bhatia A. Quro: facilitating user symptom check using a personalised chatbot-oriented dialogue system. Stud Health Technol Inform. 2018;252:51–56. [PubMed] [Google Scholar]
  • 30.Nadarzynski T, Miles O, Cowie A, Ridge D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: a mixed-methods study. Digit Health. 2019;5:2055207619871808. doi: 10.1177/2055207619871808. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Gefen D, Karahanna E, Straub DW. Trust and TAM in online shopping: an integrated model. MIS Quarterly. 2003;27(1):51. doi: 10.2307/30036519. [DOI] [Google Scholar]
  • 32.Dennis AR, Kim A, Rahimi M, Ayabakan S. User reactions to COVID-19 screening chatbots from reputable providers. J Am Med Inform Assoc. 2020 Nov 01;27(11):1727–1731. doi: 10.1093/jamia/ocaa167. http://europepmc.org/abstract/MED/32984890 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.You Y, Gui X. Self-diagnosis through AI-enabled chatbot-based symptom checkers: user experiences and design considerations. AMIA Annu Symp Proc. 2020;2020:1354–1363. http://europepmc.org/abstract/MED/33936512 [PMC free article] [PubMed] [Google Scholar]
  • 34.Chu SY, Kang S, Yoo S-C. The influences of perceived value of AI medical counseling chatbot service on the use intention: focused on the usage purpose of chatbot counseling of obstetrics and gynecology. Health Serv Manag Rev. 2021;15(3):41–59. doi: 10.18014/hsmr.2021.15.3.41. [DOI] [Google Scholar]
  • 35.Schubel LC, Wesley DB, Booker E, Lock J, Ratwani RM. Population subgroup differences in the use of a COVID-19 chatbot. NPJ Digit Med. 2021 Feb 19;4(1):30. doi: 10.1038/s41746-021-00405-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Ghalebl M, Almurtadha Y, Algarni F, Abdullah M, Felemban E, Alsharafi A, Othman M, Ghilan K. Mining the chatbot brain to improve COVID-19 bot response accuracy. Comput Mater Continua. 2022;70(2):2619. doi: 10.32604/cmc.2022.020358. [DOI] [Google Scholar]
  • 37.Skarpa PE, Garoufallou E. Information seeking behavior and COVID-19 pandemic: a snapshot of young, middle aged and senior individuals in Greece. Int J Med Inform. 2021 Jun;150:104465. doi: 10.1016/j.ijmedinf.2021.104465. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Sweidan S, Laban S, Alnaimat N, Darabkh K. SEG-COVID: a student electronic guide within Covid-19 pandemic. 9th International Conference on Information and Education Technology (ICIET); March 2021; Okayama, Japan. 2021. pp. 27–29. [DOI] [Google Scholar]
  • 39.Sweidan SZ, Abu Laban SS, Alnaimat NA, Darabkh KA. SIAAA‐C: a student interactive assistant android application with chatbot during COVID‐19 pandemic. Comput Appl Eng Educ. 2021 Apr 16;29(6):1718–1742. doi: 10.1002/cae.22419. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Ashfaq M, Yun J, Yu S, Loureiro SMC. I, Chatbot: modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telemat Inform. 2020 Nov;54:101473. doi: 10.1016/j.tele.2020.101473. [DOI] [Google Scholar]
  • 41.Luo X, Tong S, Fang Z, Qu Z. Frontiers: machines vs. humans: the impact of artificial intelligence chatbot disclosure on customer purchases. Mark Sci. 2019 Sep 20;38(6):937–947. doi: 10.1287/mksc.2019.1192. [DOI] [Google Scholar]
  • 42.Deng L, Turner DE, Gehling R, Prince B. User experience, satisfaction, and continual usage intention of IT. Eur J Inf Syst. 2017 Dec 19;19(1):60–75. doi: 10.1057/ejis.2009.50. [DOI] [Google Scholar]
  • 43.Zhang B, Zhu Y. Comparing attitudes towards adoption of e-government between urban users and rural users: an empirical study in Chongqing municipality, China. Behav Inf Technol. 2020 Mar 18;40(11):1154–1168. doi: 10.1080/0144929x.2020.1743361. [DOI] [Google Scholar]
  • 44.Li C, Fang Y. Predicting continuance intention toward mobile branded apps through satisfaction and attachment. Telemat Inform. 2019 Oct;43:101248. doi: 10.1016/j.tele.2019.101248. [DOI] [Google Scholar]
  • 45.Jung K, Lee D. Reciprocal effect of the factors influencing the satisfaction of is users. APJIS. 1995;5(2):199–226. [Google Scholar]
  • 46.Liu S, Yang L, Zhang C, Xiang Y, Liu Z, Hu S, Zhang B. Online mental health services in China during the COVID-19 outbreak. Lancet Psychiatry. 2020 Apr;7(4):e17–e18. doi: 10.1016/S2215-0366(20)30077-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Denecke K, Warren J. How to evaluate health applications with conversational user interface? In: Pape-Haugaard LB, Lovis C, Madsen IC, Weber P, Nielsen PH, Scott P, editors. Digital Personalized Health and Medicine. Amsterdam: European Federation for Medical Informatics (EFMI) and IOS Press; 2020. pp. 978–980. [DOI] [PubMed] [Google Scholar]
  • 48.Zhu Y, Janssen M, Wang R, Liu Y. It is me, chatbot: working to address the COVID-19 outbreak-related mental health issues in China. User experience, satisfaction, and influencing factors. Int J Hum–Comput Interact. 2021 Nov 01;:1–13. doi: 10.1080/10447318.2021.1988236. [DOI] [Google Scholar]
  • 49.Sheth JN, Newman BI, Gross BL. Why we buy and what we buy: a theory of consumption values. J Bus Res. 1991 Mar;22(2):159–170. doi: 10.1016/0148-2963(91)90050-8. [DOI] [Google Scholar]
  • 50.wenjuanxing More Than Questionnaires/Online Exams. [2022-06-03]. https://www.wjx.cn/
  • 51.Przegalinska A, Ciechanowski L, Stroz A, Gloor P, Mazurek G. In bot we trust: a new methodology of chatbot performance measures. Bus Horiz. 2019 Nov;62(6):785–797. doi: 10.1016/j.bushor.2019.08.005. [DOI] [Google Scholar]
  • 52.Teng C. Look to the future: enhancing online gamer loyalty from the perspective of the theory of consumption values. Decis Support Syst. 2018 Oct;114:49–60. doi: 10.1016/j.dss.2018.08.007. [DOI] [Google Scholar]
  • 53.Xiao B, Benbasat I. E-commerce product recommendation agents: use, characteristics, and impact. MIS Quarterly. 2007;31(1):137. doi: 10.2307/25148784. [DOI] [Google Scholar]
  • 54.Chen T, Guo W, Gao X, Liang Z. AI-based self-service technology in public service delivery: user experience and influencing factors. Gov Inf Q. 2021 Oct;38(4):101520. doi: 10.1016/j.giq.2020.101520. [DOI] [Google Scholar]
  • 55.Shi S, Wang Y, Chen X, Zhang Q. Conceptualization of omnichannel customer experience and its impact on shopping intention: a mixed-method approach. Int J Inf Manag. 2020 Feb;50(4):325–336. doi: 10.1016/j.ijinfomgt.2019.09.001. [DOI] [Google Scholar]
  • 56.El Qaoumi K, Le Masson P, Weil B, Ün A. Testing evolutionary theory of household consumption behavior in the case of novelty - a product characteristics approach. J Evol Econ. 2017 Aug 26;28(2):437–460. doi: 10.1007/s00191-017-0521-9. [DOI] [Google Scholar]
  • 57.Ransing R, Nagendrappa S, Patil A, Shoib S, Sarkar D. Potential role of artificial intelligence to address the COVID-19 outbreak-related mental health issues in India. Psychiatry Res. 2020 Aug;290:113176. doi: 10.1016/j.psychres.2020.113176. [DOI] [PubMed] [Google Scholar]
  • 58.Smith AC, Thomas E, Snoswell CL, Haydon H, Mehrotra A, Clemensen J, Caffery LJ. Telehealth for global emergencies: implications for coronavirus disease 2019 (COVID-19) J Telemed Telecare. 2020 Mar 20;26(5):309–313. doi: 10.1177/1357633x20916567. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Omigie NO, Zo H, Rho JJ, Ciganek AP. Customer pre-adoption choice behavior for M-PESA mobile financial services. IMDS. 2017 Jun 12;117(5):910–926. doi: 10.1108/imds-06-2016-0228. [DOI] [Google Scholar]
  • 60.Pihlström M, Brush GJ. Comparing the perceived value of information and entertainment mobile services. Psychol Mark. 2008 Aug;25(8):732–755. doi: 10.1002/mar.20236. [DOI] [Google Scholar]
  • 61.Hung C, Hsieh C. Searching the fit pattern between cultural dimensions and consumption values of mobile commerce in Taiwan. Asia Pacific Manag Rev. 2010;15(2):147–165. https://www.proquest.com/docview/1115696143?pq-origsite= [Google Scholar]
  • 62.Lee S, Lee J, Kim H. A customer value theory approach to the engagement with a brand: the case of KakaoTalk Plus in Korea. APJIS. 2018 Mar 30;28(1):36–60. doi: 10.14329/apjis.2018.28.1.36. [DOI] [Google Scholar]
  • 63.Cheng Y, Jiang H. AI‐powered mental health chatbots: examining users’ motivations, active communicative action and engagement after mass‐shooting disasters. J Contingencies Crisis Manag. 2020 Sep 29;28(3):339–354. doi: 10.1111/1468-5973.12319. [DOI] [Google Scholar]
  • 64.Rese A, Ganster L, Baier D. Chatbots in retailers’ customer communication: how to measure their acceptance? J Retail Consum Serv. 2020 Sep;56(3):102176. doi: 10.1016/j.jretconser.2020.102176. [DOI] [Google Scholar]
  • 65.Kasilingam DL. Understanding the attitude and intention to use smartphone chatbots for shopping. Technol Soc. 2020 Aug;62:101280. doi: 10.1016/j.techsoc.2020.101280. [DOI] [Google Scholar]
  • 66.Abd-Alrazaq AA, Alajlani M, Alalwan AA, Bewick BM, Gardner P, Househ M. An overview of the features of chatbots in mental health: a scoping review. Int J Med Inform. 2019 Dec;132:103978. doi: 10.1016/j.ijmedinf.2019.103978. [DOI] [PubMed] [Google Scholar]
  • 67.Lallemand C, Gronier G, Koenig V. User experience: a concept without consensus? Exploring practitioners’ perspectives through an international survey. Comput Hum Behav. 2015 Feb;43:35–48. doi: 10.1016/j.chb.2014.10.048. [DOI] [Google Scholar]
  • 68.Yu M, Zhou R, Cai Z, Tan C, Wang H. Unravelling the relationship between response time and user experience in mobile applications. Internet Res. 2020 May 15;30(5):1353–1382. doi: 10.1108/intr-05-2019-0223. [DOI] [Google Scholar]
  • 69.Shum H, He X, Li D. From Eliza to XiaoIce: challenges and opportunities with social chatbots. Front Inf Technol Electronic Eng. 2018 Jan 8;19(1):10–26. doi: 10.1631/fitee.1700826. [DOI] [Google Scholar]
  • 70.Sweeney JC, Soutar GN. Consumer perceived value: the development of a multiple item scale. J Retail. 2001 Jun;77(2):203–220. doi: 10.1016/s0022-4359(01)00041-0. [DOI] [Google Scholar]
