Computers, Informatics, Nursing. 2022 Mar 2;40(11):779–785. doi: 10.1097/CIN.0000000000000884

Content and Usability Validation of an Intelligent Virtual Conversation Assistant Used for Virtual Triage During the COVID-19 Pandemic in Brazil

Amadeu Sá de Campos Filho 1, José Ricardo Vasconcelos Cursino 1, José William Araújo do Nascimento 1, Rafael Roque de Souza 1, Geicianfran da Silva Lima Roque 1, Andréia Roque de Souza Cavalcanti 1
PMCID: PMC9707853  PMID: 35234699

Abstract

This study aimed to describe the development process, content validation, and usability of a COVID-19 screening system incorporated into a chatbot-type intelligent virtual assistant (CoronaBot). This methodological research was carried out in three phases: the first covered the development of the flowchart and content of the virtual assistant, the second the implementation of the content in the chatbot, and the third its content validation and usability evaluation. Data were analyzed using the agreement rate, the content validity index, and the kappa statistic. Also in the third phase, the chatbot's usability was assessed by 10 users with the System Usability Scale. The CoronaBot content presented domains with agreement rates above 87.5%, and its items referring to symptomatological scores and interface screens had a mean content validity index of 0.96, kappa values from 0.70 to 0.76, and interrater agreement of 1.00, demonstrating the excellence of the prototype content. The global usability score was 80.1. The script developed and incorporated into the chatbot prototype achieved a satisfactory level of content validity, and the usability of the chatbot was considered good, adding to the credibility of the device.

KEY WORDS: Artificial intelligence, COVID, Pandemic, Telemedicine


COVID-19, the disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has had a great global impact, with contagion numbers that remain alarming worldwide and particularly in Brazil.1 The World Health Organization declared COVID-19 a pandemic on March 11, 2020; by February 2021, the virus had already infected more than 100 million people worldwide and caused more than 2 million deaths, with 10 million of the cases and over 220,000 of the deaths in Brazil.2-4

Due to the rapid spread of SARS-CoV-2, the accelerated increase in the number of infections in Brazil, and the large flow of people seeking health units, SARS-CoV-2 became a concern in urgency and emergency networks, especially when suspected cases with nonspecific symptoms arrive in the same period,5 overloading the care capacity of health units. Thus, strategic resources based on technology have shown great potential in the screening, diagnosis, and management of infections; in particular, the screening of suspected cases can serve as an effective instrument for controlling diseases with high potential for dissemination.6

From this perspective, technological systems based on artificial intelligence and natural language processing offer great advantages to the entire population: they process data rapidly in search of more accurate answers for decision making, expedite the acquisition of answers to health questions, and even guide treatments. Moreover, these resources understand the text entered by the patient, anticipate common doubts (often removing the need to go to a health unit), and talk to the patient by text or audio in a humanized way.7

One of these technologies is the chatbot, or intelligent conversational agent: a software robot that mimics human interaction and can replicate behaviors and perform various health actions, such as providing information to patients, assessing health status, and reviewing care plans, while being available 24 hours a day.8 Using chatbots to screen users with symptoms of COVID-19 can help health professionals and managers control the demand of patients seeking hospital care, avoiding an unnecessary collapse of the health system and decreasing the risk of contagion for people who do not present symptoms of the infection.7 This study aims to describe the process of content development and validation of a classification system for COVID-19 screening incorporated in a chatbot called CoronaBot.

METHODS

Study Design

Initially, to develop the approach method to be used by CoronaBot, it was necessary to establish the classification method to be adopted. For this purpose, the Manchester protocol was chosen, whose categorization is based on the symptoms the patient presents.9 According to its authors, protocols that use a 5-point classification stand out from 3-point protocols for providing a better and more reliable evaluation.9

Since the Manchester protocol did not meet the symptom screening scenarios of COVID-19, it was necessary to adapt it to the classification system used by the Brazilian Ministry of Health. After analyzing the available symptoms for each stage of the classification, it was possible to notice that the symptomatological scoring (SS) scheme would be the best method for the association with CoronaBot.

As for the theoretical aspects of information technology used in the construction of CoronaBot, natural language processing was used to understand text dialogues with artificial intelligence algorithms. In this way, the conversation agent was developed on Google's Dialogflow platform and its structure was based on dialog flows that can lead to a specific or branched path of questions and answers, similar to a flowchart.10 To feed the flow, questions and answers were added to perform the virtual screening of COVID-19 symptoms among users.
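The branched question-and-answer structure described above can be illustrated with a minimal state-machine sketch. The node names, questions, and transitions below are hypothetical: the real CoronaBot flow is configured in the Dialogflow console, not written in code.

```python
# Illustrative branched screening flow (hypothetical nodes and questions).
# Each node holds a question and maps each possible answer to the next node.
FLOW = {
    "start": {
        "question": "Do you have a fever?",
        "branches": {"yes": "cough", "no": "risk_factors"},
    },
    "cough": {
        "question": "Do you have a dry cough?",
        "branches": {"yes": "risk_factors", "no": "risk_factors"},
    },
    "risk_factors": {
        "question": "Do you have any chronic disease?",
        "branches": {"yes": "end", "no": "end"},
    },
    "end": {"question": None, "branches": {}},
}

def traverse(flow, answers):
    """Walk the flow from 'start', consuming one answer per question,
    and return the list of questions that were asked."""
    node, asked = "start", []
    for answer in answers:
        question = flow[node]["question"]
        if question is None:
            break
        asked.append(question)
        node = flow[node]["branches"][answer]
    return asked
```

A Dialogflow agent expresses the same idea with intents and contexts rather than an explicit dictionary; the sketch only shows how a branched path of questions and answers behaves like a flowchart.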

Steps for CoronaBot's Content Validation

After implementing the content in CoronaBot, its content validation was performed by a panel of 15 health professionals: six physicians and nine nurses with recognized experience in scientific research and clinical practice, directly or indirectly linked to the treatment of COVID-19. These professionals were selected by applying the expert selection instrument used by Teles et al11 (2014), based on degree, research, and clinical practice.

The CoronaBot evaluation questionnaire was based on the model proposed by Coluci et al12 (2015), adapted by the researchers, with a 4-point Likert response scale, namely, (1) not relevant, (2) little relevant, (3) relevant, and (4) very relevant. It was divided into two sections: (1) evaluation of domains and (2) evaluation of the items. The domains of CoronaBot refer to the thematic axes present in the virtual assistant, whereas the items of the domains are its functional requirements, which are the functions that the system should provide, that is, how it should react to the user's inputs.13

CoronaBot content analysis consisted of the evaluation of three major domains: (1) score scale; (2) total symptom score for COVID-19 virtual screening; and (3) conversation agent interface (screens). Domain 1 had one item evaluated by the specialists: the scale of the score of symptoms and risk factors of COVID-19. Domain 2, in turn, was evaluated through its six items, namely, item 1: non-suspected asymptomatic cases; item 2: non-suspected symptomatic cases; item 3: suspected cases with mild symptoms; item 4: suspected cases with moderate symptoms; item 5: suspected cases with alarming symptoms; and item 6: suspected cases with severe symptoms. Domain 3 was evaluated through its nine items, which represent CoronaBot's screens as exemplified in Figure 1.

FIGURE 1.


CoronaBot interface. A, Home screen. B, Screening for symptoms of COVID-19. C, Screening for COVID-19 risk factors.

After the specialists' analysis, the agreement rate of the evaluations was calculated, with results greater than or equal to 90% considered adequate. In the second stage, the experts evaluated each item and screen individually for clarity, relevance, and representativeness with respect to the construct underlying the study, according to the theoretical definitions of the construct itself and its domains.

Usability Test

For the usability test, 10 Brazilian users from the northeast region of the country with suspected new coronavirus infection were randomly recruited. Individuals aged 18 years or older with access to digital technology were included. Users with marked functional dependence, or with cognitive deficits or difficulties that made it impossible to handle a smartphone or computer, were excluded. After a brief videoconference demonstration of the system, each participant received an e-mail containing the CoronaBot access link and a Google Forms link with the consent form; a questionnaire on personal characteristics (age, sex, and education); and the System Usability Scale (SUS) questionnaire.

The data were collected using the SUS, an instrument with 10 questions that measures the usability of various products and services. The SUS produces a single number: each item contributes on a scale of 0 to 4, where for items 1, 3, 5, 7, and 9 the contribution is the score received minus 1, and for items 2, 4, 6, 8, and 10 it is 5 minus the score received. The sum of the 10 contributions is multiplied by 2.5 to obtain the total SUS value.14 After calculating the score, the evaluated system can be classified: up to 20.5, worst imaginable; 21 to 38.5, poor; 39 to 52.5, average; 53 to 73.5, good; 74 to 85.5, excellent; and 86 to 100, best imaginable.15 Participants were also asked how long they took to interact with CoronaBot and what their level of satisfaction with the system was, using an analog scale from 0 to 10.
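The SUS arithmetic above (odd items contribute score minus 1, even items contribute 5 minus the score, and the sum is scaled by 2.5) can be sketched as a small function:

```python
def sus_score(responses):
    """Compute the SUS score (0-100) from ten 1-5 Likert responses.

    Odd-numbered items (1, 3, 5, 7, 9) contribute (score - 1);
    even-numbered items (2, 4, 6, 8, 10) contribute (5 - score);
    the sum of the ten contributions is multiplied by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0, 2, ... = items 1, 3, ...
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5
```

For example, a fully neutral respondent (all 3s) scores 50, and the best possible answer pattern (5 on odd items, 1 on even items) scores 100.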

Data Analysis

To analyze the data from the content validation of CoronaBot, the content validity index (CVI) and the kappa (κ) index were used after the experts judged the domains and items of the virtual assistant. The CVI measures the proportion of judges who agree about certain aspects of the virtual assistant and its items, and allows each item to be analyzed individually as well as the prototype as a whole.16 To evaluate the whole prototype, the average form of the CVI was used: the CVIs calculated separately for each item were summed and divided by the number of items of the virtual assistant.17

To stipulate the acceptable level of agreement among the experts, the recommended CVI value of at least 0.75 was established, and an agreement of at least 80% among the experts was considered to serve as a criterion for deciding on the relevance and/or acceptance of the CoronaBot items.16 In addition to this analysis, the interrater agreement was verified, which analyzes the agreement of the experts among the items evaluated through the ratio between the number of items with CVI values above 0.80 by the total number of CoronaBot items.18
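The CVI and interrater agreement (IRA) computations described above can be sketched directly from their definitions: an item-level CVI is the proportion of experts rating the item relevant (3 or 4 on the 4-point scale), the scale-level CVI is the average of the item CVIs, and the IRA is the fraction of items whose CVI exceeds 0.80.

```python
def item_cvi(ratings):
    """I-CVI: proportion of experts rating the item 3 or 4
    on the 4-point relevance scale."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def scale_cvi_average(all_ratings):
    """S-CVI/Ave: mean of the item-level CVIs."""
    cvis = [item_cvi(r) for r in all_ratings]
    return sum(cvis) / len(cvis)

def interrater_agreement(all_ratings, threshold=0.80):
    """IRA as defined above: number of items with CVI above the
    threshold divided by the total number of items."""
    cvis = [item_cvi(r) for r in all_ratings]
    return sum(c > threshold for c in cvis) / len(cvis)
```

With 15 experts per item, an I-CVI of 0.87 corresponds to 13 of the 15 judging the item relevant, which matches the values reported in Table 1.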

To verify the experts' level of agreement and consistency level regarding the CoronaBot items, the κ index was calculated using the Online Kappa Calculator.19 κ Values above 0.74 were considered excellent, those between 0.60 and 0.74 were considered good, those from 0.40 to 0.59 were considered regular, and those below 0.40 were considered insufficient to be adopted.20
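The Online Kappa Calculator cited above computes Randolph's free-marginal multirater kappa; assuming that statistic, the calculation can be sketched as follows (observed pairwise agreement across raters, with chance agreement fixed at 1/k for k categories):

```python
def free_marginal_kappa(item_category_counts, n_categories):
    """Randolph's free-marginal multirater kappa.

    `item_category_counts` is one dict per item mapping
    category -> number of raters who chose it; every item must be
    rated by the same number of raters.
    """
    n_items = len(item_category_counts)
    n_raters = sum(item_category_counts[0].values())
    # Observed agreement: proportion of agreeing rater pairs, averaged over items.
    p_o = sum(
        sum(c * (c - 1) for c in counts.values())
        for counts in item_category_counts
    ) / (n_items * n_raters * (n_raters - 1))
    p_e = 1.0 / n_categories  # chance agreement under free marginals
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement yields κ = 1, and an even split of raters across categories yields a negative value, consistent with the interpretation bands (excellent above 0.74, good 0.60 to 0.74, and so on) adopted in this study.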

Regarding the data analysis of usability with users, the data obtained through the SUS questionnaire were described in two sections: (1) socio-demographic variables of the participants and (2) questions regarding the SUS, with evaluation of each of the 10 questions. The Kruskal-Wallis test (95% confidence interval and 5% significance, P < .05) was used to assess the effect of user type on quality indicators, using IBM SPSS Statistics version 23.0 (IBM Inc., Armonk, NY, USA).

Ethical Criteria

This study was approved by the institutional review board of the Federal University of Pernambuco, each participant voluntarily decided to participate in this research, and written consent was obtained.

RESULTS

Content Development of CoronaBot Script

CoronaBot was designed to virtually screen users who believe they have symptoms of COVID-19 and provide them with the appropriate procedures for each case. For this screening, the SS scheme was used, which is the sum of the values attributed to each symptom the patient presents and to each risk factor the patient has (Symptomatological Score = Symptoms + Risk Factors).

The scale of symptom and risk-factor scores was defined by combinatorial analysis of several scenarios in which various symptoms and risk situations were simulated. First, the patient's profile was defined as a combination of two to four factors (age, systemic arterial hypertension [SAH], diabetes mellitus [DM], obesity, and presence of chronic disease); from this combination, each patient obtains a value referring to his or her health condition (Risk Factors). The second stage was the preparation of the base score, according to the three defined forms of presentation of COVID-19.21
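The combinatorial enumeration of patient profiles can be sketched with `itertools.combinations`. The weights below are hypothetical placeholders, not the values used by CoronaBot, which were set during the reassessment described next.

```python
from itertools import combinations

# Hypothetical risk-factor weights for illustration only; the actual
# CoronaBot values were defined and later reassessed by the authors.
RISK_WEIGHTS = {
    "age_60_plus": 3,
    "hypertension": 2,   # SAH
    "diabetes": 2,       # DM
    "obesity": 2,
    "chronic_disease": 3,
}

def simulate_profiles(weights, min_factors=2, max_factors=4):
    """Enumerate every patient profile built from 2 to 4 risk factors
    and return each profile together with its risk-factor score."""
    profiles = []
    for k in range(min_factors, max_factors + 1):
        for combo in combinations(sorted(weights), k):
            profiles.append((combo, sum(weights[f] for f in combo)))
    return profiles
```

With five factors this yields 10 + 10 + 5 = 25 distinct profiles, each scored by summing its factor weights, which is the kind of scenario sweep the simulation stage performed.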

To obtain results more consistent with reality, the scores for both risk factors and symptoms were reevaluated. In this stage, other chatbots operating in the same area were examined, revealing the lag in the scores. Simulations were then performed with different scores but the same patient profiles, leading to the discovery of an ideal score. Figure 2 exemplifies the difference between pre- and post-reassessment scores.

FIGURE 2.


Comparison of values referring to symptoms and comorbidities before and after reassessment.

With the reassessment of the scores, the total SS reported by the patient when interacting with CoronaBot is expected to be more coherent and reliable, giving rise to the patient's classification, which in turn determines the message and action presented at the end of the dialog with the conversational agent. For a better understanding, Figure 3 describes this method.

FIGURE 3.


Scheme for obtaining the CoronaBot symptom score and classification method.

CoronaBot interacts with the user through natural language processing, establishing a conversation with the user and performing the screening according to the symptoms presented, risk factors, and contact with suspected or confirmed cases of COVID-19. Below are the main guidelines provided to users based on the SS and the presentation of the text on CoronaBot:

  1. Non-suspected asymptomatic (SS: 0–1) or symptomatic (SS: 2–5): orientation of preventive methods of COVID-19 and maintenance of social distancing.

  2. Suspected with mild symptoms (SS: 6–10): social isolation and other protective measures are oriented.

  3. Suspected with moderate symptoms (SS: 11–15) or with alarming symptoms (SS: 16–20): social isolation is oriented, and the user is directed to a teleconsultation with health professionals of the Telehealth Center of the Clinical Hospital of the Federal University of Pernambuco (Recife, Brazil).

  4. Suspected with severe symptoms (SS: ≥21): the user is advised to seek an emergency medical service immediately, with an option offered for direct connection to the mobile emergency care service (SAMU-192).
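The guidance list above is a simple mapping from SS ranges to a classification and an action, which can be sketched as:

```python
def classify(ss):
    """Map a symptomatological score (SS) to the CoronaBot category
    and guidance, using the SS ranges listed above."""
    if ss <= 1:
        return "non-suspected asymptomatic", "prevention and social distancing"
    if ss <= 5:
        return "non-suspected symptomatic", "prevention and social distancing"
    if ss <= 10:
        return "suspected, mild symptoms", "social isolation and protective measures"
    if ss <= 15:
        return "suspected, moderate symptoms", "social isolation and teleconsultation"
    if ss <= 20:
        return "suspected, alarming symptoms", "social isolation and teleconsultation"
    return "suspected, severe symptoms", "seek emergency care (SAMU-192)"
```

The guidance strings are abbreviations of the orientations in the list, not the exact CoronaBot dialog text.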

CoronaBot's Content Validation

The participants in this step were 15 specialists (physicians and nurses) working in COVID-19 research and clinical care; they were, on average, 45 years old, and 75% had over 11 years of clinical and research experience in their area of practice.

The first evaluation stage consisted of the analysis of the three major CoronaBot domains, revealing that all were considered adequate, relevant, and representative; domains 1 and 2 presented an agreement rate of 87.5%, with suggestions for improvements to the content of their items.

Some experts also suggested using more common words in the chatbot, since the conversational agent will be used by people of all social classes. Another point raised by the evaluators concerned some of the comorbidities: they suggested changes to some terms and scores and to the puerperium time, as well as the addition of close contact with a COVID-positive person. Regarding the agreement rate of the CoronaBot domains, domain 3 obtained 100%, without any suggestions for improvement.

In the second evaluation stage, the experts analyzed the items in domain 1 (scoring scale), domain 2 (symptomatological score for virtual screening of COVID-19), and domain 3 (CoronaBot screens). As shown in Table 1, 12 of the 16 CoronaBot items were considered "excellent," and only four items, all in domain 2, had a CVI of 0.87 and a κ of 0.70, interpreted as "good" content.

Table 1.

Content Validity Index per Item, by Scale and Agreement Between CoronaBot, Brazil, 2021

Items/Domain S-CVI κ Interpretation
Domain 1
 Item 1: COVID-19 symptom scoring scale and risk factors 1.00 0.76 Excellent
Domain 2
 Item 1: asymptomatic non-suspected case (conduct) 0.87 0.70 Good
 Item 2: symptomatic non-suspected case (conduct) 0.87 0.70 Good
 Item 3: suspected case with mild symptoms (conduct) 0.87 0.70 Good
 Item 4: suspected case with moderate symptoms (conduct) 1.00 0.76 Excellent
 Item 5: suspected case with alarming symptoms (conduct) 0.87 0.70 Good
 Item 6: suspected case with severe symptoms (conduct) 1.00 0.76 Excellent
Domain 3
 Item 1: home screen 1.00 0.76 Excellent
 Item 2: home menu 1.00 0.76 Excellent
 Item 3: explanation of CoronaBot's objectives 1.00 0.76 Excellent
 Item 4: initial symptom assessment 1.00 0.76 Excellent
 Item 5: screening for various symptoms 1.00 0.76 Excellent
 Item 6: other screening criteria 1.00 0.76 Excellent
 Item 7: screening for risk factors 1.00 0.76 Excellent
 Item 8: identification of contact with suspected or diagnosed persons 1.00 0.76 Excellent
 Item 9: CoronaBot guidelines through symptomatic scoring 1.00 0.76 Excellent
S-CVI/Ave(a) 0.96
IRA 1.00

Abbreviation: IRA, interrater agreement.

(a) Content validity index by scale, average level.

When analyzing content validity by the average method, an excellent value (0.96) was verified, and the interrater agreement, that is, the reliability agreement among the experts, reached 1.00, indicating high agreement.

Usability Test

Ten users with suspected new coronavirus infection participated in the CoronaBot usability tests. The most frequent age group was 20 to 30 years, with a minimum age of 20 years and a maximum of 65 years (mean [SD], 27.5 [15.2] years). Seven of the 10 respondents (70%) were male, and eight users had a high level of education. All participants took up to 5 minutes, on average, to interact with CoronaBot. When asked about overall satisfaction with the chatbot on an analog scale from 0 to 10, the average was 9.13, indicating excellent satisfaction.

Regarding the usability questionnaire, SUS, all participants completed the questions. The average total score was 80.1, with a standard deviation of 10.2, a minimum value of 45.5, and a maximum value of 100. When analyzed by the Kruskal-Wallis test, the association of the SUS score with the sociodemographic variables was not statistically significant (Table 2).

Table 2.

Quality Indicators and SUS Global Score

Quality Indicators Average ± SD H(a) P
Learning 14.2 ± 2.16 0.66 .118
Efficiency 12.1 ± 1.26 1.05 .017
Memory capacity 3.86 ± 1.25 0.91 .045
Error minimization 2.24 ± 0.63 0.66 .167
Satisfaction 8.87 ± 1.88 0.37 .766
SUS global score 80.1 ± 10.2 0.74 .077

(a) Kruskal-Wallis test.

DISCUSSION

COVID-19 has challenged health professionals and researchers in various aspects, from early and accurate diagnosis to the most suitable and effective treatment. This has caused strain in several spheres, especially in Brazil, where the disease continues with alarming numbers, mainly due to the lack of efficient public policies and the low level of social isolation of the population.22-24

The evaluation of CoronaBot content was essential for developing the device, because it verified the association between abstract concepts (COVID-19 screening) and observable indicators (screening of symptoms and of risk factors), analyzing the extent to which the created items represent the construct, that is, their relevance.

Regarding the selection of appropriate experts for the panel, the experience and qualification of its members should be considered. The expert classification system used allowed the selection of a sample of professionals qualified in the subject, lending greater precision, accuracy, and quality to the evaluations, as observed in the high correlation between the results.

Additionally, to verify the consistency of the experts' evaluations, the interrater agreement was calculated, which showed very strong agreement for the evaluated instrument.

Concordance statistics between evaluators on Likert scales are widely applied in research and practice, including the evaluation of panel interviews and other approaches for collecting systematic observations.25

The assessment of agreement among specialists, whether as a primary or secondary component of the study, is common in several health disciplines where the use of evaluators or observers as a measurement method is prevalent. The literature stresses that the concept of interrater agreement is fundamental for both the design and evaluation of research instruments, since it measures the extent to which different judges assign the same precise value to each item being judged.26

In this way, the CoronaBot content was considered excellent by the experts, making the virtual assistant an additional public-utility instrument that complements current contact identification and tracking applications to improve the tracking of COVID-19 transmission in the community.

To identify possible biases in the interface of the developed prototype, a usability test was performed. Initially used in software engineering, the concept of usability is fully applicable to the development of interactive health instruments: it is not enough to create and validate an instrument without evaluating its user interface. This evaluation is essential; otherwise, the instrument cannot be used in clinical practice because of difficulties such as learning its methods, memorizing its functioning, or interpreting its results.

Such system evaluation tests are becoming increasingly essential before making a prototype available to the end user. Performed before applicability checks in a real-life context, they provide a technical foundation on which users become familiar with the potential of mobile technology, allowing them to give richer feedback on functional requirements and use cases.27

The SUS instrument used in this research was efficient for assessing the usability of CoronaBot from the user's perspective, with the system classified as having good usability according to the SUS score. Considering the representative quality values, it was possible to verify that CoronaBot presents the five usability components: ease of learning, efficiency of use, ease of recall, low error rate, and subjective satisfaction. Similar usability data can be observed in the development of other prototypes in the healthcare field.28

In the scientific literature, intelligent virtual assistants such as chatbots have been employed to aid during the COVID-19 pandemic. These actions performed by chatbots are extremely important, as they help health professionals in decision making and relieve stressed health systems.29 Researchers in Japan developed a chatbot-based healthcare system called COvid-19: Operation for Personalized Empowerment to Render smart prevention And care seeking (COOPERA), built on the LINE application; it is the first real-time system used to survey and monitor COVID-19 in Japan.30 The chatbot asks participants about personal information, preventive actions, and nonspecific symptoms related to COVID-19 and their duration.30

CONCLUSION

CoronaBot, a chatbot, was developed to perform virtual screening of users with COVID-19 symptoms and to classify suspected individuals. Its content obtained a satisfactory level of content validity and good usability, giving greater credibility to the device.

Thus, this chatbot can be used to screen patients with suspected COVID-19 and to guide users to the most appropriate approach for their case. Its use in conjunction with online service platforms is ideal because of factors such as low maintenance, high precision of results, and easy handling by users.

Footnotes

The authors have disclosed that they have no significant relationships with, or financial interest in, any commercial companies pertaining to this article.

Contributor Information

Amadeu Sá de Campos Filho, Email: amadeu.campos@ufpe.br.

José Ricardo Vasconcelos Cursino, Email: jose.cursino@nutes.ufpe.br.

Rafael Roque de Souza, Email: rrs4@cin.ufpe.br.

Geicianfran da Silva Lima Roque, Email: gslr@cin.ufpe.br.

Andréia Roque de Souza Cavalcanti, Email: andreia@imunekids.com.br.

References

  • 1.World Health Organization (WHO) . Coronavirus Disease 2019 (COVID-19): Situation Report-78. Geneva, Switzerland: WHO; 2020. https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200407-sitrep-78-covid-19.pdf?sfvrsn=bc43e1b_2 [Google Scholar]
  • 2.World Health Organization (WHO) . Clinical Management of Severe Acute Respiratory Infection (SARI) When Covid-19 Disease Is Suspected. Interim Guidance. Geneva, Switzerland: WHO; 2020. https://apps.who.int/iris/handle/10665/331446 [Google Scholar]
  • 3.World Health Organization (WHO) . Painel do WHO coronavirus disease (COVID-19). 2021. https://covid19.who.int/
  • 4.Brazil Ministry of Health . Coronavirus panel. 2021. https://covid.saude.gov.br/
  • 5.Levenfus I, Ullmann E, Battegay E, Schuurmans MM. Triage tool for suspected COVID-19 patients in the emergency room: AIFELL score. The Brazilian Journal of Infectious Diseases. 2020;24(5): 458–461. doi: 10.1016/j.bjid.2020.07.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Ossemane EB, Moon TD, Were MC, Heitman E. Ethical issues in the use of SMS messaging in HIV care and treatment in low- and middle-income countries: case examples from Mozambique. Journal of the American Medical Informatics Association. 2018;25(4): 423–427. doi: 10.1093/jamia/ocx123. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Munsch N Martin A Gruarin S, et al. Diagnostic accuracy of web-based COVID-19 symptom checkers: comparison study. Journal of Medical Internet Research. 2020;22(10): e21299. doi: 10.2196/21299. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Zhou S, Bickmore T, Paasche-Orlow M, Jack B. Agent-user concordance and satisfaction with a virtual hospital discharge nurse. Paper presented at: International Conference on Intelligent Virtual Agents; 2014: 528–541. doi: 10.1007/978-3-319-09767-1_63. [DOI] [Google Scholar]
  • 9.Júnior WC, Torres BLB, Rausch MCP. Manchester risk classification system: comparing models. 2014. Brazilian Risk Classification Group Web site. http://gbcr.org.br/public/uploads/filemanager/source/53457bf080903.pdf
  • 10.Dialogflow . Dialogflow: build natural and rich conversational experiences. 2019. https://dialogflow.com
  • 11.Teles LM Oliveira AS Campos FC, et al. Development and validating an educational booklet for childbirth companions. Revista da Escola de Enfermagem da U.S.P. 2014;48(6): 977–984. doi: 10.1590/S0080-623420140000700003. [DOI] [PubMed] [Google Scholar]
  • 12.Coluci MZ, Alexandre NM, Milani D. Construction of measurement instruments in the area of health. Ciênca & Saúde Coletiva. 2015;20(3): 925–936. doi: 10.1590/1413-81232015203.04332013. [DOI] [PubMed] [Google Scholar]
  • 13.Pressman R. Software Engineering: A Professional Approach. 8th ed. McGraw-Hill/Bookman; 2016:940. [Google Scholar]
  • 14.Bangor A, Kortum P, Miller J. An empirical evaluation of the system usability scale. International Journal of Human Computer Interaction. 2008;24(6): 574–594. doi: 10.1080/10447310802205776. [DOI] [Google Scholar]
  • 15.Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. Journal of Usability Studies. 2009;4: 114–123. [Google Scholar]
  • 16.Alexandre NM, Coluci MZ. Content validity in the development and adaptation processes of measurement instruments. Ciênca & Saúde Coletiva. 2011;16(7): 3061–3068. doi: 10.1590/S1413-81232011000800006. [DOI] [PubMed] [Google Scholar]
  • 17.Polit DF, Beck CT. The content validity index: are you sure you know what's being reported? Critique and recommendations. Research in Nursing & Health. 2006;29(5): 489–497. doi: 10.1002/nur.20147. [DOI] [PubMed] [Google Scholar]
  • 18.Gisev N, Bell JS, Chen TF. Interrater agreement and interrater reliability: key concepts, approaches, and applications. Research in Social & Administrative Pharmacy. 2013;9(3): 330–338. doi: 10.1016/j.sapharm.2012.04.004. [DOI] [PubMed] [Google Scholar]
  • 19.Randolph JJ. Online kappa calculator. 2008. http://justusrandolph.net/kappa/
  • 20.Perroca MG, Gaidzinski RR. Assessing the inter-rater reliability of an instrument for the classification of patients—kappa's quotient. Revista da Escola de Enfermagem da U.S.P. 2003;37(1): 72–80. doi: 10.1590/S0080-62342003000100009. [DOI] [PubMed] [Google Scholar]
  • 21.Rockwell KL, Gilroy AS. Incorporating telemedicine as part of COVID-19 outbreak response systems. The American Journal of Managed Care. 2020;26(4): 147–148. doi: 10.37765/ajmc.2020.42784. [DOI] [PubMed] [Google Scholar]
  • 22.Judson TJ Odisho AY Young JJ, et al. Implementation of a digital chatbot to screen health system employees during the COVID-19 pandemic. Journal of the American Medical Informatics Association. 2020;27(9): 1450–1455. doi: 10.1093/jamia/ocaa130. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Ohannessian R, Duong TA, Odone A. Global telemedicine implementation and integration within health systems to fight the COVID-19 pandemic: a call to action. JMIR Public Health and Surveillance. 2020;6(2): e18810. doi: 10.2196/18810. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Brazil. Ministry of Health . Guidelines for Managing Patients With COVID-19. Brasilia, Brazil; 2020. https://portalarquivos.saude.gov.br/images/pdf/2020/June/18/Covid19-Orientac--o--esManejoPacientes.pdf [Google Scholar]
  • 25.O'Neill TA. An overview of interrater agreement on Likert scales for researchers and practitioners. Frontiers in Psychology. 2017;8: 777. doi: 10.3389/fpsyg.2017.00777. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Shweta B, Bajpai R, Chaturvedi HK. Evaluation of inter-rater agreement and inter-rater reliability for observational data: an overview of concepts and methods. Journal of the Indian Academy of Applied Psychology. 2015;41(3): 20–27. http://www.jiaap.org.in/Listing_Detail/Logo/273614a1-a76f-4d64-9221-3609cafcc24a.pdf [Google Scholar]
  • 27.Epalte K, Tomsone S, Vetra A, Bērziņa G. Patient experience using digital therapy “Vigo” for stroke patient recovery: a qualitative descriptive study. Disability and Rehabilitation. Assistive Technology. 2020;15: 1–10. doi: 10.1080/17483107.2020.1839794. [DOI] [PubMed] [Google Scholar]
  • 28.da Silva Lima Roque G, Roque de Souza R, Araújo do Nascimento JW, de Campos Filho AS, de Melo Queiroz SR, Ramos Vieira Santos IC. Content validation and usability of a chatbot of guidelines for wound dressing. International Journal of Medical Informatics. 2021;151: 104473. doi: 10.1016/j.ijmedinf.2021.104473. [DOI] [PubMed] [Google Scholar]
  • 29.Battineni G, Chintalapudi N, Amenta F. AI chatbot design during an epidemic like the novel coronavirus. Healthcare. 2020;8(2): 154. doi: 10.3390/healthcare8020154. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Yoneoka D Kawashima T Tanoue Y, et al. Early SNS-based monitoring system for the covid-19 outbreak in Japan: a population-level observational study. Journal of Epidemiology. 2020;30(8): 362–370. doi: 10.2188/jea.je20200150. [DOI] [PMC free article] [PubMed] [Google Scholar]

Articles from Computers, Informatics, Nursing are provided here courtesy of Wolters Kluwer Health
