Author manuscript; available in PMC: 2024 Jan 1.
Published in final edited form as: Qual Manag Health Care. 2023 Jan-Mar;32(Suppl 1):S35–S44. doi: 10.1097/QMH.0000000000000396

Testing the Acceptability and Usability of an AI-Enabled COVID-19 Diagnostic Tool among Diverse Adult Populations in the US

Josh Schilling 1, F Gerard Moeller 3, Rachele Peterson 1, Brandon Beltz 1, Deepti Joshi 1, Danielle Gartner 1, Jee Vang 2, Praduman Jain 1
PMCID: PMC9811483  NIHMSID: NIHMS1829034  PMID: 36579707

Abstract

Background and Objectives:

Although at-home COVID-19 testing offers several benefits in a relatively cost-effective and less risky manner, evidence suggests that at-home COVID-19 test kits have a high rate of false negatives. One way to improve the accuracy and acceptance of COVID-19 screening is to combine existing at-home physical test kits with an easily accessible, electronic, self-diagnostic tool. The objective of the current study was to test the acceptability and usability of an AI-enabled COVID-19 testing tool, combining a web-based symptom diagnostic screening survey with a physical at-home test kit, and to test for differences across races, ages, genders, and educational and income levels among adults in the US.

Methods:

A total of 822 people from Richmond, Virginia were included in the study. Data were collected from employees and patients of the Virginia Commonwealth University Health Center, as well as the surrounding community, from June through October 2021. Data were weighted to reflect the demographic distribution of patients in the United States. Descriptive statistics and repeated independent t-tests were run to evaluate differences in the acceptability and usability of the AI-enabled COVID-19 testing tool.

Results:

Across all participants, the AI-enabled COVID-19 testing tool, comprising a physical test kit and a symptom screening website, demonstrated good acceptability and usability across race, age, gender, and educational background. Notably, participants preferred both components of the AI-enabled COVID-19 testing tool to in-clinic testing.

Conclusion:

Overall, these findings suggest that our AI-enabled COVID-19 testing approach has great potential to improve the quality of remote COVID-19 testing at low cost and with high accessibility for diverse demographic populations in the US.

Keywords: COVID-19, web-based screening, testing, usability, acceptability

Introduction

At-home COVID-19 testing offers several benefits in a relatively cost-effective and low-risk manner. Specifically, at-home COVID-19 testing can be more accessible for people with limited ability to travel or limited access to clinical locations that offer testing by trained professionals; for instance, people in rural locations or people without reliable health coverage. Additionally, testing at home can offer greater convenience and flexibility to anyone wishing to get tested for COVID-19. Finally, at-home testing can alleviate some of the burden on healthcare providers by avoiding direct contact between healthcare workers and potentially exposed people, thereby minimizing the need for the personal protective equipment required by medical workers1. Despite the numerous advantages of at-home COVID-19 testing, growing evidence suggests differences in their accuracy relative to clinic-administered tests2. One way to improve the accuracy of COVID-19 screening is to combine existing at-home COVID-19 test kits with an easily accessible self-diagnostic symptom screening survey.

Symptom screening via questionnaires and online surveys would allow researchers to generate a predictive algorithm to inform individuals and providers in combination with tests. In fact, in a previously conducted, IRB-approved study, Vibrent and partners developed logistic regression models to predict the probability of COVID-19 using enhanced symptom screening; these models achieved an area under the receiver operating characteristic curve (AROC) of 92%, indicating high accuracy comparable to in-home laboratory tests (3; see images 1, 2 in Appendix 1). This web-based symptom screening survey is built on technology that uses algorithms to predict the likelihood of COVID-19 infection based on self-reported symptoms (Alemi et al., under review)4. Furthermore, the symptom screening survey, with its integrated public health data, can provide patients and providers, employees and employers, and students and universities with timely information to support testing requests. Although still early in its diagnostic utility, we consider symptom self-reporting via electronic means an important step in enhancing the accuracy of COVID-19 screening, surveillance, and reporting, given its many advantages.
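The AROC cited above can be illustrated with a short sketch: the area under the ROC curve equals the probability that a randomly chosen positive case is ranked above a randomly chosen negative case (the Mann-Whitney formulation). The study's models were built in R; the Python function and the labels and scores below are purely hypothetical, not the study's data or code.

```python
def auc_roc(labels, scores):
    """Rank-based AUC: the probability that a randomly chosen positive
    case receives a higher predicted risk than a randomly chosen
    negative case; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative cases")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted COVID-19 probabilities from a symptom model
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_score = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1, 0.5]
print(auc_roc(y_true, y_score))  # 0.9375
```

An AROC of 0.5 corresponds to chance-level ranking; a value near 0.92, as reported above, means the model ranks a symptomatic positive above a negative about 92% of the time.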

One advantage of this web-based diagnostic symptom screening survey is that self-reported symptoms give healthcare providers another data point to enhance their screening and reporting process. Another advantage is that clinically validated recommendations can be shared with patients immediately after they submit their symptoms via electronic tools. To ascertain its utility before widespread use, the current study tested the acceptability and usability of this web-based diagnostic symptom screening survey, used with at-home COVID-19 test kits, among a diverse adult population in the United States.

Although recent research on the usability of COVID-19 at-home test kits is encouraging5,6, we seek to better understand whether this unique AI-enabled COVID-19 testing tool that combines the at-home COVID-19 test kits and a web-based diagnostic symptom screening survey is acceptable and usable. One reason for this study was to assess whether the AI-enabled COVID-19 testing tool has low acceptance, i.e., individuals are resistant to using it. In case of resistance, individuals are more likely to pivot towards alternatives such as not getting screened or opting for in-person testing. This leads us to our first research question (RQ):

RQ1: How did the acceptability of the at-home COVID-19 test kit, web-based diagnostic symptom screening survey, and in-clinic screening compare among participants?

The second reason this study was conducted was to assess whether this AI-enabled COVID-19 testing tool is challenging for individuals to complete successfully without errors. If so, incorrect usage would likely result in inaccurate results.6 This leads us to our second research question (RQ):

RQ2: How usable was the web-based diagnostic symptom screening survey and the at-home COVID-19 test kit?

One of the main strengths of this work is the use of a diverse population and the inclusion of underrepresented minorities, who have lower access to healthcare7–9 and show higher rates of COVID-related death and hospitalization.10,11 AI-enabled digital diagnosis tools can provide immediate and accurate diagnosis to patients, particularly to underserved populations, who benefit the most from their low cost and self-management. However, despite increased internet use, the digital divide continues to constitute a key barrier to the adoption of digital health informatics by underserved populations.12 Racial and ethnic disparities can persist in remote screening and data collection tools such as telemedicine13 and web-based surveys.14 Moreover, the adoption of web-based surveys can vary across different groups of aging adults based on their demographic, financial, and health-related variables.15

Although at-home COVID-19 testing can reduce the logistical burden and stigma associated with in-clinic testing, people with low health literacy may misinterpret the test instructions.16 There is increased interest in understanding the specific barriers faced by underserved communities, including rural ethnic minorities, in using home-based COVID-19 tests.17 Funded by the National Institutes of Health (NIH), the RADx Underserved Populations (RADx-UP) Consortium was created to study COVID-19 testing patterns in communities across the United States. Recent RADx-UP studies show that individuals with low socioeconomic status report lower motivation to use COVID-19 self-tests.18

In the United States, it is well documented that racial and ethnic groups experience differences in access to health care.7–9 The COVID-19 pandemic, although novel in its onset, was not unique in this regard: an excess burden fell on Black Americans.10 Among non-Hispanic Blacks in the United States, the rate of hospitalization for COVID-19 is 2.5x and the rate of death 1.7x that of non-Hispanic Whites, although the rate of infection is equivalent, according to CDC data through February 1, 2022.11

In comparison, the rate of death for Hispanics or Latinos is 1.1x that of Whites, or nearly equivalent.11 Further, differences among Black Americans are observed in public health beliefs, awareness, and practices, both broadly19–22 and in relation to testing23,24 and vaccination25,26 for COVID-19.

To address the issues discussed above related to existing racial and ethnic disparities in web-based surveys and at-home test kits, we posed the following questions to assess differences in the acceptability and usability of these tools across races:

RQ3: How did the acceptability of the at-home test kit, web-based diagnostic symptom screening survey, and in-clinic screening compare among races?

RQ4: How did the usability of the web-based diagnostic symptom screening survey and the at-home COVID-19 test kit compare among races?

We extended RQ3 and RQ4 to other demographic variables, including age and gender, to address the acceptability and usability of the web-based screening and at-home screening tools in these groups. All three variables (age, gender, and race/ethnicity) are included in the CDC Human Infection Case Report Form.27 Age is a particularly important demographic variable, as the adoption of web-based surveys can vary among elderly adults.15 RQ5 and RQ6 pose age-specific acceptability and usability questions.

RQ5: Were there age-related differences in the acceptability of at-home test kit, web-based diagnostic symptom screening survey, and in-clinic screening?

RQ6: Were there age-related differences in the usability of the web-based diagnostic symptom screening survey and the at-home COVID-19 test kit?

RQ7 and RQ8 address sex-specific acceptability and usability questions, as previous research suggests sex differences in web survey participation, although results are mixed.28,29

RQ7: Were there gender-related differences in the acceptability of at-home test kit, web-based symptom screening survey, or in-clinic screening?

RQ8: Were there gender-related differences in the usability of the web-based symptom screening survey and the at-home COVID-19 test kit?

Because of the impact of health literacy on the interpretation of at-home test kits,16 education was included as an additional demographic variable in our research questions (RQ9 and RQ10). Individuals with low education may have more difficulty following the instructions of the at-home test kits and understanding the web-based health questions about their symptoms.

RQ9: Were there differences in the acceptability of the at-home test kit, web-based symptom screening survey, and in-clinic screening across individuals with different educational backgrounds?

RQ10: Were there differences in the usability of the web-based symptom screening survey and the at-home COVID-19 test kit across individuals with different educational backgrounds?

The AI-enabled COVID-19 testing tool is an innovative approach with great potential to improve the quality of remote COVID-19 testing, especially in underserved communities that can benefit from its low cost and easy-to-access features. It is therefore imperative to show that our diagnostic solution does not exacerbate existing healthcare disparities. Hence, we examined its acceptability and usability across several demographic variables, namely race, age, gender, and education. For example, Blacks and African Americans have been particularly affected by the COVID-19 pandemic.10,11 Race, gender, and socioeconomic status tend to relate to disparities in COVID-19 screening, testing, and prevention.20,21,23,24 As yet another example, people aged 65 and older have died from COVID-19 at a much higher rate than expected.30,31

Methods

Study recruitment.

Prospective participants were recruited through advertisements on Virginia Commonwealth University’s (VCU) website, email lists, and flyers on the VCU campus. Participants were eligible if they were 18 years or older and located within the Richmond metropolitan area for the 10-day study period. If interested and eligible, participants joined the study through the research study website, where they provided electronic consent and were guided through the study tasks. Research staff were available by phone and in person to answer any questions. All data were collected between June and October 2021.

Study Design and Procedures.

The study used a case/control design in which each study group ceased enrollment once the number of participants who had completed all study-related procedures reached the enrollment target (as shown in Supplemental Table 1). Depending on the group selected, participants were asked to complete up to two at-home COVID-19 tests and an in-clinic PCR test in the Richmond, VA area, as well as a series of surveys throughout the study (see Supplemental Table 1 for the study design). The web-based diagnostic symptom screening survey asked questions about demographics, basic health information, and current COVID-19 and flu symptoms (if any). At the end of the study, participants were given a survey asking about the acceptability and usability of the at-home COVID-19 test kits and the study website (including the web-based diagnostic symptom screening survey). Participants received two at-home rapid antigen COVID-19 tests and a clinic-administered PCR COVID-19 test at no cost. The test kits used in this study were FDA authorized and available off the shelf in grocery stores and pharmacies. Participants completed these activities on their own schedule, using their own computers or mobile devices, and scheduled their in-clinic test in Richmond, VA. The at-home COVID-19 test used for the study was the QuickVue At-Home OTC COVID-19 Test, which uses a self-administered nasal swab and has been approved for use under emergency authorization.5 Detailed instructions and demonstration videos can be found at https://quickvueathome.com/.

Compensation to all participants was pro-rated according to the study procedures completed. Participants could choose from a variety of electronic gift cards; payment was provided at the end of the study, or immediately upon voluntary withdrawal. The maximum compensation was $175. The study was approved by the Western Institutional Review Board (IRB) (study 1309332). The data collection effort was carried out by Virginia Commonwealth University; its IRB (HM20022035) deferred to the Western IRB. The analysis of de-identified data was approved by the George Mason University IRB (1743684–1). The Vibrent Research Platform, which hosted the website and survey forms, is a Federal Information Security Management Act (FISMA) certified and Federal Risk and Authorization Management Program (FedRAMP) ready system.

Measures.

Participants reported demographic information, including gender (Female or Male), race (White, Black or African American, Asian, American Indian or Alaska Native, Other), ethnicity (Hispanic/Latino or Non-Hispanic/Latino), education (Grades 9 through 11, Grade 12 or GED, 1 to 3 years after high school, College 4 years or more, Advanced degree), and age (18–20, 21–44, 45–64, 65 and older) via surveys on the research website. Participants also reported any COVID-19 and flu symptoms (e.g., “In the last 14 days, did you experience any of the following gastrointestinal symptoms?”) via the web-based diagnostic symptom screening survey. The web-based diagnostic symptom screening survey was an important input to the COVID-19 prediction software algorithms that we tested as part of a larger effort beyond the scope of this paper5 (see Appendix 1B for the complete symptom screening survey).

The acceptability of the AI-enabled COVID-19 testing tool components (the at-home test kit and symptom survey) was evaluated with six survey items.32 These items covered screening preference, likelihood of future use, and perceived accuracy of the COVID-19 test (see Appendix 2). With regard to screening preferences, although our primary objective was to identify whether participants preferred to screen using the at-home COVID-19 test or on an app or website (i.e., the web-based diagnostic symptom screening survey), we also asked participants their preference for in-clinic screening, as a benchmark for traditional screening methods. The usability of the AI-enabled COVID-19 testing tool components (the at-home test kit and the web-based diagnostic symptom screening survey) was determined by two survey items: the overall ease of use of the COVID-19 test kit, and the overall ease of use of the website that hosted the web-based diagnostic symptom screening survey.

COVID screening preference.

The first acceptability item asked, “If you need to screen yourself for COVID-19 or a similar disease in the future, which of the following options would you prefer?” (a) At home with a physical test kit, (b) Reporting my symptoms on an electronic app or website, (c) At a clinic administered by health care professionals. Participants responded to this question on a 5-point scale ranging from Prefer a great deal (1) to Do not prefer (5).

Likelihood of future use.

The second acceptability item asked, “If given the opportunity, would you use the same COVID-19 test kit again?” Participants responded to this item with Yes (1), Maybe (2), No (3), I’m not sure (4).

Perceived accuracy.

The third acceptability item asked, “Do you feel like your COVID-19 test results were accurate?” Participants responded with Yes (1), No (2), I’m not sure (3).

The usability of the AI-enabled COVID-19 testing tool components (the at-home test kit and the web-based diagnostic symptom screening survey) was determined by the two survey items detailed below. There was an overall rating of ease of use, as well as specific ratings for the major steps of using the test kit and the website, including the web-based diagnostic symptom screening survey.

Test kit ease of use.

One usability item asked, “How easy or difficult was it to use the COVID-19 test kit?” Participants responded to this item using a 5-point scale, Extremely easy (1) to Extremely difficult (5).

Website ease of use.

The second usability item asked, “How easy or difficult was it to use the website?” Participants responded using a 5-point scale, Extremely easy (1) to Extremely difficult (5).

Data Analysis Plan.

Participants who knowingly provided incomplete or inaccurate information, and those who had signed up for the study multiple times, were excluded from analyses (N = 18). The raw sample size for analyses was N = 822. Research questions were evaluated in R v4.0.2 with direct package dependencies on tidyverse6 v1.3.0 and survey33 v4.0.

Weighted Data Analysis

Data were weighted to reflect the distribution of demographic variables in the United States. The raw data of 822 participants were resampled to 5,000 using rake34 to match national percentages along the dimensions of gender, age, race, and education. The (normalized) marginal percentages35 were as follows: for gender, 50.8% female and 49.2% male; for age, 4.78% for 18–20, 41.2% for 21–44, 32.9% for 45–64, and 21.1% for 65 and older; for race, 85.1% White and 14.9% Black; and for education, 36.2% for grades 9 through 12 (including a diploma equivalent to grade 12), 31.8% for junior college (or equivalent), 20.1% for college (4-year degree), and 11.9% for advanced degrees beyond college.
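The raking step above can be sketched as iterative proportional fitting: case weights are repeatedly rescaled until the weighted margins match the target percentages for each variable in turn. The study used R's survey package; the Python function and the tiny two-variable example below (gender and race only, with made-up cases) are an illustrative sketch, not the study's code.

```python
from collections import defaultdict

def rake(cases, targets, iters=50):
    """Iterative proportional fitting (raking).
    cases: list of dicts mapping variable -> level for each respondent.
    targets: {variable: {level: target_proportion}}.
    Returns one weight per case; weights sum to len(cases)."""
    n = len(cases)
    w = [1.0] * n
    for _ in range(iters):
        for var, margins in targets.items():
            # current weighted total for each level of this variable
            level_tot = defaultdict(float)
            for wi, c in zip(w, cases):
                level_tot[c[var]] += wi
            # rescale so this variable's weighted margins hit the targets
            for i, c in enumerate(cases):
                w[i] *= margins[c[var]] * n / level_tot[c[var]]
    return w

# Hypothetical mini-sample raked to two of the margins reported above
cases = [
    {"gender": "F", "race": "White"},
    {"gender": "F", "race": "Black"},
    {"gender": "M", "race": "White"},
    {"gender": "M", "race": "White"},
]
targets = {
    "gender": {"F": 0.508, "M": 0.492},
    "race": {"White": 0.851, "Black": 0.149},
}
weights = rake(cases, targets)
total = sum(weights)
print(sum(w_ for w_, c in zip(weights, cases) if c["race"] == "White") / total)
```

After raking, the weighted share of each gender and race level matches its target proportion, even though the raw sample (here 65.5% female, 45.6% White) does not.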

Ten research questions (RQ1–RQ10) were examined in this study. RQ1 was tested by running three separate independent t-tests, one for each of the acceptability variables (i.e., current screening preference, likelihood of future use, perceived accuracy). Thus, we conducted independent t-tests (α = 0.05, 95% confidence level) to compare (a) in-clinic vs. at-home, (b) on-app vs. at-home, and (c) on-app vs. in-clinic. RQ2 was addressed with descriptive statistics, reported as mean (M) and standard deviation (SD).
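As an illustration of the RQ1 comparisons (the analyses themselves were run in R), an independent-samples t-test on two sets of 5-point preference ratings might look like the following sketch; the rating vectors are hypothetical, with lower scores indicating stronger preference.

```python
from scipy import stats

# Hypothetical 5-point preference ratings (1 = prefer a great deal,
# 5 = do not prefer) for two screening modes
at_home = [1, 2, 1, 1, 3, 2, 1, 2, 1, 1]
in_clinic = [3, 4, 2, 5, 3, 4, 2, 3, 5, 2]

# Independent-samples t-test at alpha = 0.05
t_stat, p_value = stats.ttest_ind(at_home, in_clinic)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("at-home ratings differ significantly from in-clinic ratings")
```

A negative t here means the at-home group's mean rating is lower, i.e., the at-home option is preferred more strongly on this reverse-coded scale.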

RQ3 through RQ6, as well as RQ9 and RQ10, were tested by running separate univariate analyses for each of the acceptability (i.e., current screening preference, likelihood of future use, perceived accuracy) and usability (i.e., test kit ease of use, symptom screening ease of use) variables for each of the independent variables (race, age, and education). The independent variable was entered as the fixed factor and the acceptability or usability variable as the dependent variable. Research questions were answered by whether the overall F-test statistic reached statistical significance (p < .05) at the 95% confidence level. If the overall test statistic reached statistical significance, Tukey’s post hoc analysis was conducted to identify which pairwise comparisons differed significantly.
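The univariate analysis pattern described above, a demographic variable as the fixed factor and a rating as the dependent variable, can be sketched with a one-way ANOVA. The three age-group rating vectors below are hypothetical, not the study's data, and the study's own analyses were run in R.

```python
from scipy import stats

# Hypothetical 5-point ratings by age group (lower = stronger preference)
ages_18_20 = [2, 3, 2, 3, 2, 2]
ages_21_44 = [2, 1, 2, 2, 1, 2]
ages_45_64 = [1, 1, 2, 1, 1, 1]

# Overall F-test: do mean ratings differ across the three groups?
f_stat, p_value = stats.f_oneway(ages_18_20, ages_21_44, ages_45_64)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A significant omnibus F (p < .05) would then be followed by Tukey's HSD
# pairwise comparisons (available as scipy.stats.tukey_hsd in SciPy >= 1.11)
# to identify which specific group pairs differ.
```

The omnibus F-test only says that at least one group mean differs; the Tukey step controls the family-wise error rate across all pairwise comparisons, which is why it is run only after a significant F.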

RQ7 and RQ8 were tested by running five separate independent-samples t-tests, one for each of the acceptability (i.e., current screening preference, likelihood of future use, perceived accuracy) and usability (i.e., at-home COVID-19 test kit ease of use, web-based symptom screening survey ease of use) variables, with one predictor variable (gender). Gender was entered as the grouping variable (Male = 1, Female = 0) and the acceptability and usability variables as the test variables. The research questions were answered by whether the overall t-test statistic reached statistical significance (p < .05) at the 95% confidence level.

Results

Demographic Statistics.

General demographics are shown in Supplemental Table 2. The majority of participants were female (65.5%). The sample consisted mostly of White (45.6%) and Black or African American (42.5%) participants, followed by Asian (3.4%), American Indian or Alaska Native (1.6%), and Other (0.2%). The average age of the sample was M = 38.9 years (SD = 13.97; age range = 18–92). Furthermore, a little over half of the sample had at least a college degree and earned an income of at least $50,000.

RQ1: How did the acceptability of the at-home COVID-19 test kit, web-based diagnostic symptom screening survey, and in-clinic screening compare among participants?

Across all participants, there was a stronger preference, t(8237) = 33.24, p < .05, for screening at home with a COVID-19 test kit (M = 1.93, SD = 1.14) compared to in-clinic testing (M = 2.85, SD = 1.46). Likewise, there was a stronger preference, t(8341) = 26.76, p < .05, for screening at home with a COVID-19 test kit (M = 1.93, SD = 1.14) compared to reporting symptoms on the web-based diagnostic symptom screening survey on a website or an app (M = 2.65, SD = 1.39). No other statistically significant differences were noted in screening preferences (ns).

Furthermore, in terms of likelihood of future use of the at-home COVID-19 test kit, participants were, on average, willing to take the COVID-19 test kit again (M = 1.07, SD =.42), and were confident in the accuracy of the at-home COVID-19 test kit (M = 1.09, SD = .35).

RQ2: How usable was the web-based diagnostic symptom screening survey and the at-home COVID-19 test kit?

On average, the participants in the current sample found it extremely easy to use the at-home COVID-19 test kit (M = 1.11, SD = .33) as well as the web-based symptom screening survey (M = 1.27, SD = .56).

RQ3: How did the acceptability of the at-home test kit, web-based diagnostic symptom screening survey, and in-clinic screening compare among races?

At-home test kit screening preferences did not show statistically significant differences across racial groups (ns). However, with regard to the web-based diagnostic symptom screening survey, Black participants (M = 2.52, SD = 1.46) had a statistically significantly stronger preference for the web-based diagnostic symptom screening than White participants (M = 2.67, SD = 1.38, p < .05). With regard to in-clinic screening, Black participants (M = 2.26, SD = 1.35) had a statistically significantly stronger preference for in-clinic screening than White participants (M = 2.96, SD = 1.45, p < .001).

RQ4: How did the usability of the web-based diagnostic symptom screening survey and the at-home COVID-19 test kit compare among races?

Usability of the web-based diagnostic symptom screening survey statistically significantly differed across races F(1, 4379) = 12.23, p < .001, such that White participants (M = 1.26, SD = .52) found the web-based diagnostic symptom screening survey to have higher usability compared to Black participants (M = 1.34, SD = .70, p < .001). Similarly, usability of the at-home COVID-19 test kit statistically significantly differed across races F(1, 4376) = 78.95, p < .001, such that White participants (M = 1.09, SD = .30) found the at-home COVID-19 test kit to have higher usability compared to Black participants (M = 1.22, SD = .46, p < .001).

RQ5: Were there age-related differences in the acceptability of at-home test kit, web-based diagnostic symptom screening survey, and in-clinic screening?

At-home test kit screening preferences statistically significantly differed across individuals of different age groups F(3, 4377) = 32.55, p < .001. Specifically, Tukey’s post hoc analyses demonstrated that those between the ages of 45 through 64 years (M = 1.75, SD = 1.04) and 21 through 44 years of age (M = 1.89, SD = 1.15) had a statistically significantly stronger preference for at-home testing compared to participants 18 through 20 years of age (M = 2.32, SD = .95, p < .001). Likewise, those between 45 through 64 years (M = 1.74, SD = 1.04) and 65 years and older (M = 2.13, SD = 1.22) had a statistically significantly stronger preference for at-home testing compared to those between 21 through 44 years of age (M = 2.32, SD = .95, p < .001). Finally, those between 45 through 64 years of age (M = 1.74, SD = 1.04) had a statistically significantly stronger preference for at-home testing compared to those between 65 years and older (M = 2.13, SD = 1.22; p <.001). No other statistically significant differences were noted.

Similarly, web-based symptom screening preferences statistically significantly differed across individuals of different age groups. Specifically, Tukey’s post hoc analyses demonstrated that those between the ages of 21 through 44 years (M = 2.77, SD = 1.46) and 45 through 64 years (M = 2.40, SD = 1.43) had a stronger preference for web-based diagnostic symptom screening compared to those between 18 through 20 years of age (M = 3.06, SD = 1.09, p <.05). Furthermore, those between 45 through 64 years (M = 2.40, SD = 1.43) had a statistically significantly stronger preference for web-based diagnostic symptom screening compared to those between 21 through 44 years of age (M = 2.77, SD = 1.46) and those between 65 years and older (M = 2.71, SD = 1.21, p <.001). No other statistically significant differences were noted.

Finally, in-clinic screening preferences also statistically significantly differed across age groups. Specifically, Tukey’s post hoc analyses demonstrated that those between the ages of 18 through 20 years (M = 2.70, SD = 1.38) demonstrated statistically significantly stronger preferences for in-clinic screening compared to those between 21 through 44 years of age (M = 3.11, SD = 1.38, p < .001). Furthermore, those between 45 through 64 years (M = 2.79, SD = 1.48) had a statistically significantly stronger preference for in-clinic screening compared to those between 21 through 44 years (M = 3.22, SD = 1.38, p < .001). Finally, those 65 years and older (M = 2.52, SD = 1.52) had a statistically significantly stronger preference for in-clinic screening compared to those between 21 through 44 years (M = 3.22, SD = 1.38) and those between 45 through 64 years of age (M = 2.79, SD = 1.48, p < .001). No other statistically significant differences were noted.

RQ6: Were there age-related differences in the usability of the web-based diagnostic symptom screening survey and the at-home COVID-19 test kit?

Usability of the web-based diagnostic symptom screening survey differed statistically significantly across age groups, F(3, 4377) = 65.72, p < .001. Specifically, Tukey’s post hoc analyses demonstrated that those between 18 through 20 years (M = 1.04, SD = .39) reported statistically significantly higher usability of the web-based diagnostic symptom screening survey compared to those between 21 through 44 years (M = 1.21, SD = .48), those between 45 through 64 years (M = 1.24, SD = .63), and those 65 years or older (M = 1.47, SD = .54, p < .001). Furthermore, those between 21 through 44 years (M = 1.21, SD = .48) and those between 45 through 64 years (M = 1.24, SD = .63) reported statistically significantly higher usability of the web-based diagnostic symptom screening survey compared to those 65 years or older (M = 1.47, SD = .54, p < .001). No other statistically significant differences were noted.

Usability of the at-home COVID-19 test kit differed statistically significantly across age groups, F(3, 4374) = 12.36, p < .001. Specifically, Tukey’s post hoc analyses demonstrated that those 65 years and older (M = 1.17, SD = .37) reported statistically significantly lower usability compared to those between 21 through 44 years of age (M = 1.11, SD = .32) and those between 45 through 64 years of age (M = 1.08, SD = .29, p < .001). No other statistically significant differences were noted.

RQ7: Were there gender-related differences in the acceptability of at-home test kit, web-based symptom screening survey, or in-clinic screening?

At-home test kit screening preferences differed statistically significantly across genders such that males (M = 1.83, SD = 1.06) had a stronger preference for at-home kits compared to females (M = 2.02, SD = 1.19, p < .001). Similarly, in-clinic screening preferences differed statistically significantly across genders such that females (M = 2.68, SD = 1.47) had a stronger preference for in-clinic screening compared to males (M = 3.03, SD = 1.43, p < .001). No other statistically significant differences were noted.

RQ8: Were there gender-related differences in the usability of the web-based symptom screening survey and the at-home COVID-19 test kit?

Usability of the web-based symptom screening survey differed statistically significantly across genders F(1, 4379) = 7.79, p < .001 such that female participants (M = 1.23, SD = .49) found the web-based diagnostic symptom screening survey to have higher usability compared to male participants (M = 1.31, SD = .61).

Additionally, usability of the at-home COVID-19 test kit differed statistically significantly across genders, F(1, 4376) = 6.83, p < .05, such that male participants (M = 1.10, SD = .30) found the at-home COVID-19 test kit to have higher usability compared to female participants (M = 1.12, SD = .35).

RQ9: Were there differences in the acceptability of the at-home test kit, web-based symptom screening survey, and in-clinic screening across individuals with different educational backgrounds?

At-home COVID-19 test kit preference differed statistically significantly across individuals with different levels of education, F(3, 4377) = 36.40, p < .001. Specifically, Tukey’s post hoc analyses demonstrated that those with a college degree (M = 1.72, SD = 1.03) had a stronger preference for at-home COVID-19 tests compared to those with an educational attainment of grades 9 through 12 (M = 2.05, SD = 1.12), those with an educational attainment of junior college (M = 1.83, SD = 1.09), and those with an advanced degree (M = 2.18, SD = 1.37, p < .001). Furthermore, those with an educational attainment of junior college (M = 1.83, SD = 1.09) had a stronger preference for at-home COVID-19 tests compared to those with an advanced degree (M = 2.18, SD = 1.12, p < .001). No other statistically significant differences were noted.

Web-based diagnostic symptom screening survey preference differed statistically significantly across individuals with different levels of education, F(3, 4329) = 27.51, p < .001. Specifically, Tukey’s post hoc analyses demonstrated that those in grades 9 to 12 (M = 2.58, SD = 1.26) had a stronger preference for the web-based diagnostic symptom screening survey compared to those with an advanced degree (M = 3.17, SD = 1.51, p < .001). Furthermore, those in junior college (M = 2.56, SD = 1.39) demonstrated a stronger preference for the web-based diagnostic symptom screening survey compared to those with a college degree (M = 2.62, SD = 1.37) or those with an advanced degree (M = 3.17, SD = 1.51, p < .001). No other statistically significant differences were noted.

In-clinic screening preferences differed significantly across individuals with different levels of education F(3, 4364) = 111.6, p < .001. Specifically, Tukey’s post hoc analyses demonstrated that those in grades 9 through 12 (M = 2.34, SD = 1.45) had a stronger preference for in-clinic testing as compared to those in college (M = 3.18, SD = 1.40), those in junior college (M = 3.09, SD = 1.34) or those with advanced degrees (M = 3.21, SD = 1.45, p < .001). No other statistically significant differences were noted.

RQ10: Were there differences in the usability of the web-based symptom screening survey and the at-home COVID-19 test kit across individuals with different educational backgrounds?

Usability of the web-based diagnostic symptom screening survey differed across individuals with different educational backgrounds, F(3, 4377) = 13.43, p < .001. Specifically, Tukey’s post hoc analyses demonstrated that those in grades 9 through 12 (M = 1.33, SD = .55) reported lower usability for the web-based diagnostic symptom screening survey compared to those with advanced degrees (M = 1.16, SD = .40), those in college (M = 1.26, SD = .56), and those in junior college (M = 1.25, SD = .59, p < .05). Furthermore, those with advanced degrees (M = 1.15, SD = .40) reported higher usability for the web-based diagnostic symptom screening survey compared to those in junior college (M = 1.25, SD = .59, p < .05). No other statistically significant differences were noted.

Usability of the at-home COVID-19 test kit also differed across individuals with different educational backgrounds, F(3, 4374) = 25.4, p < .05. Specifically, Tukey’s post hoc analyses demonstrated that those in college (M = 1.11, SD = .32) reported higher usability for at-home COVID-19 test kits compared to those in grades 9 through 12 (M = 1.15, SD = .38). Similarly, those in junior college (M = 1.05, SD = .22) reported higher usability for at-home COVID-19 tests compared to those in grades 9 through 12 (M = 1.15, SD = .38, p < .001). Finally, those in junior college (M = 1.05, SD = .22) reported higher usability of the at-home COVID-19 tests compared to those in college (M = 1.11, SD = .32) and those with advanced degrees (M = 1.14, SD = .37, p < .001).
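
The multi-group comparisons reported above (an omnibus F-test followed by Tukey's post hoc contrasts) can be outlined as follows. This is a sketch on synthetic data: the four education groups, their sizes, means, and SDs are illustrative assumptions loosely modeled on the reported values, and SciPy's `f_oneway` stands in for the study's actual analysis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic 5-point acceptability ratings (1 = most preferred).
# Group parameters are illustrative only, not the study data.
grades_9_12 = rng.normal(2.05, 1.12, 300).clip(1, 5)
junior_college = rng.normal(1.83, 1.09, 300).clip(1, 5)
college = rng.normal(1.72, 1.03, 300).clip(1, 5)
advanced = rng.normal(2.18, 1.37, 300).clip(1, 5)

# One-way ANOVA: does mean preference differ across education levels?
f_stat, p_value = stats.f_oneway(grades_9_12, junior_college, college, advanced)
print(f"F(3, {4 * 300 - 4}) = {f_stat:.2f}, p = {p_value:.4g}")

# A significant omnibus F would then be followed by Tukey's HSD
# (e.g., statsmodels' pairwise_tukeyhsd) to identify which specific
# pairs of education groups differ, as in the post hoc results above.
```

The degrees of freedom are k − 1 = 3 between groups and N − k within groups, matching the F(3, …) form used throughout the Results.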

Discussion

Study Objectives:

The primary objective of the current study was to examine the acceptability and usability of an AI-enabled COVID-19 testing tool that combined a web-based diagnostic symptom screening survey and an at-home COVID-19 test kit. A secondary objective was to examine whether there were any significant differences in the acceptability and usability of the AI-enabled COVID-19 testing tool across racial groups (in particular, Black and White participants), educational backgrounds, age groups, and genders.

Summary of the Results:

One key observation from this research was that all participants found the at-home COVID-19 test kit and the web-based screening survey easy to use. The AI-enabled COVID-19 testing tool, consisting of an at-home COVID-19 test kit and a web-based diagnostic symptom screening survey, demonstrated good acceptability as a whole. A second observation was that, in terms of acceptability, participants regardless of race, age, gender, or educational background preferred both components of the AI-enabled COVID-19 testing tool (i.e., the at-home test kit and the web-based diagnostic symptom screening survey) over in-clinic testing. Finally, participants also found both components of the AI-enabled COVID-19 testing tool to be extremely easy to use. Acceptability and usability scores by demographics are described in Supplemental Table 3.

Finding Implications (Acceptability):

These findings should be interpreted in the context of three considerations. The first is the versatility of the AI-enabled COVID-19 testing tool (i.e., the web-based symptom diagnostic screening survey and a physical at-home test kit): any FDA-approved COVID-19 test kit can be combined with the web-based symptom screening survey, thereby expanding the scope of application of the tool itself. The second is that although at-home screening using the COVID-19 test kit was the most preferred screening method, participants nonetheless demonstrated a moderate preference for screening via the web-based diagnostic symptom screening survey, as evidenced by a score around the midpoint of the scale. In fact, screening via the web-based diagnostic symptom screening survey was preferred far more than the traditional and familiar method of in-clinic screening, suggesting that individuals may be willing and open to trying the AI-enabled tool for COVID-19 screening. The final consideration is that, given the novelty of web-based diagnostic symptom screening, moderate acceptability may reflect less on the tool and more on participants’ perceptions of or attitudes toward the tool (e.g., lack of trust or confidence) relative to familiar and tangible alternatives such as the physical, self-administered, at-home COVID-19 test kit. Beyond the initial adoption period, with more clinical validation, education, and awareness of the potential merits of a web-based diagnostic symptom screening survey, individuals’ perceptions, and importantly their intentions to use the AI-enabled COVID-19 testing tool, are likely to become more favorable over time.
These findings, together with results indicating that participants were willing to re-take the at-home COVID-19 test kit in the future and were confident in the results they received, suggest that participants are open to using the tool as a whole and are willing to choose it over traditional screening methods such as in-person screening.

Finding Implications (Usability):

With regard to the usability of the AI-enabled COVID-19 testing tool, results clearly demonstrated that participants found the at-home COVID-19 test kit and the web-based symptom screening survey to have good usability. These results are consistent with existing literature that suggests high ease of use with at-home test kits.1,215 The results therefore suggest that the AI-enabled COVID-19 testing tool may be a highly usable option in a COVID-19 testing landscape in which several alternatives are less usable, less accessible, or fraught with other limitations. This is especially notable given that the likely accuracy of the AI-enabled COVID-19 testing tool could reduce the need for, and the costs associated with, repeated COVID-19 tests.

Finding Implications (Differences Across Demographic Groups):

Furthermore, results of the current study suggested that there were several differences in the acceptability and usability of the COVID-19 testing tool across racial groups, age groups, genders, and educational backgrounds. For instance, White participants preferred using the at-home COVID-19 tests more than Black participants did. These findings are consistent with prior research showing that Non-Hispanic White participants demonstrated greater acceptability of at-home COVID-19 test kits compared to Non-Hispanic Black participants.6 A less surprising finding in the current study was the weaker preference for the AI-enabled COVID-19 testing tool (i.e., the at-home COVID-19 test kit and the web-based symptom survey) among older participants relative to younger participants; considerable research has documented the barriers that older individuals face in technology adoption.2,36 Yet another finding was that male participants reported slightly higher usability for the at-home COVID-19 test kit than female participants, whereas female participants reported higher usability for the web-based symptom screening survey. More research needs to be conducted to identify whether the gender differences found in the current study are broadly generalizable in ways that may be used to improve testing. Finally, the current study also found that individuals with advanced educational backgrounds demonstrated greater acceptability of the at-home COVID-19 test kit compared to those with less advanced educational backgrounds.
These results are consistent with other studies showing greater acceptability of at-home COVID-19 tests among those with at least a college degree than among those with some college or a high school degree or less.9 For instance, a US clinical trial (NCT04502056) reported that a physician-delivered intervention explaining COVID-19 knowledge resulted in smaller knowledge gaps compared to no intervention, with no significant effects on self-reported safety behavior by race for Black or White individuals without a college degree.37 In other words, providing instructions or clarifying information to participants can offset some of the differences observed among those with varying educational backgrounds. Another clinical trial (NCT04371419) demonstrated that knowledge gaps narrowed for Black participants who viewed physician-delivered video messages about COVID-19, with race-matched providers also increasing information-seeking in these participants.37 Overall, these findings suggest that, with a few exceptions, the AI-enabled COVID-19 testing tool is a viable option in terms of its acceptability among users and their ability to carry out the necessary steps.

User confidence and technological literacy affect attitudes toward web-based screening tools relative to at-home COVID-19 test kits, and additional work is needed to close this gap in acceptability. Consideration should be given to the ways in which instructional materials for the at-home COVID-19 testing steps are developed and provided to users, and those materials should accommodate a variety of testing scenarios.38 For example, providing live coaching or recorded demonstrations for completing the at-home COVID-19 test to people with access to an internet-enabled device may result in higher reported rates of understanding, acceptability, and trust.

Supplementary Material

SDC Table 3
SDC Table 2
SDC Table 1
Appendices

Acknowledgement:

Tiffany Pignatello, Shelly Orr, Amanda Adams, and Lori Keyser-Marcus assisted in data collection and management of research participants.

Footnotes

Conflicts of Interest and Source of Funding: This project was funded by National Cancer Institute contract number 75N91020C00038 to Vibrent Health, Praduman Jain (Principal Investigator). All listed authors and acknowledged individuals were paid by the contract and had no conflicts of interest to declare.

References

  • 1.WT, Hsu MY, Shen CF, Hung KF, Cheng CM. Home Sample Self-Collection for COVID-19 Patients. Advanced Biosystems. 2020;4(11):e2000150. doi: 10.1002/adbi.202000150. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Guglielmi G Rapid coronavirus tests: a guide for the perplexed. Nature. 2021. 590(7845): 202–205. doi: 10.1038/d41586-021-00332-4. [DOI] [PubMed] [Google Scholar]
  • 3.Hanly & McNeil
  • 4.Alemi F, Vang J, Guralnik E, Wojtusiak J, Moeller FG, Schilling J, et al. Combined Symptom Screening and Home Tests for COVID-19. In review, 2022. [DOI] [PMC free article] [PubMed]
  • 5.Shuren JE. Coronavirus (COVID-19) Update: Authorization for Quidel QuickVue At-home COVID-19 Test. 2021. U.S. Food and Drug Administration. [Google Scholar]
  • 6.Wickham H, Averick M, Bryan J, et al. Welcome to the tidyverse. 2019. Journal of Open Source Software,4(43), 1686. doi: 10.21105/joss.01686. [DOI] [Google Scholar]
  • 7.Aleshire ME, Adegboyega A, Escontrías OA, Edward J, Hatcher J. Access to Care as a Barrier to Mammography for Black Women. Policy Polit Nurs Pract. 2021. 22(1):28–40. doi: 10.1177/1527154420965537. Epub 2020 Oct 19. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Feagin J, Bennefield Z. Systemic racism and U.S. health care. Soc Sci Med 2014. Feb;103:7–14. doi: 10.1016/j.socscimed.2013.09.006. [DOI] [PubMed] [Google Scholar]
  • 9.Cook BL, Trinh NH, Li Z, Hou SS, Progovac AM. Trends in Racial-Ethnic Disparities in Access to Mental Health Care, 2004–2012. Psychiatr Serv. 2017;68(1):9–16. doi: 10.1176/appi.ps.201500453. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Rentsch CT, Kidwai-Khan F, Tate JP, et al. Patterns of COVID-19 testing and mortality by race and ethnicity among United States veterans: A nationwide cohort study. PLoS Med. 2020;17(9):e1003379. Published 2020 Sep 22. doi: 10.1371/journal.pmed.1003379. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Centers for Disease Control and Prevention. Risk for COVID-19 Infection, Hospitalization, and Death By Race/Ethnicity. United States Department of Health and Human Services. Accessed March 7, 2022. https://www.cdc.gov/coronavirus/2019-ncov/covid-data/investigations-discovery/hospitalization-death-by-race-ethnicity.html. [Google Scholar]
  • 12.Huh J, Koola J, Contreras A, Castillo AK, Ruiz M, Tedone KG, Yakuta M, Schiaffino MK. Consumer Health Informatics Adoption among Underserved Populations: Thinking beyond the Digital Divide. Yearb Med Inform 2018. Aug;27(1):146–155. doi: 10.1055/s-0038-1641217. Epub 2018 Aug 29. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Adepoju OE, Chae M, Ojinnaka CO, Shetty S, Angelocci T. Utilization Gaps During the COVID-19 Pandemic: Racial and Ethnic Disparities in Telemedicine Uptake in Federally Qualified Health Center Clinics. J Gen Intern Med. 2022. Apr;37(5):1191–1197. doi: 10.1007/s11606-021-07304-4. Epub 2022 Feb 2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Jang M, Vorderstrasse A. Socioeconomic Status and Racial or Ethnic Differences in Participation: Web-Based Survey. JMIR Res Protoc. 2019. Apr 10;8(4):e11865. doi: 10.2196/11865. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Couper MP, Kapteyn A, Schonlau M, Winter J. Noncoverage and nonresponse in an Internet survey. Social Science Research. 2007. Mar 1;36(1):131–48. [Google Scholar]
  • 16.Woloshin S, Dewitt B, Krishnamurti T, Fischhoff B. Assessing How Consumers Interpret and Act on Results From At-Home COVID-19 Self-test Kits: A Randomized Clinical Trial. JAMA Intern Med. 2022;182(3):332–341. doi: 10.1001/jamainternmed.2021.8075 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Thompson MJ, Drain PK, Gregor CE, Hassell LA, Ko LK, Lyon V, Ahmed S, Bishop S, Dupuis V, Garza L, Lambert AA, Rowe C, Warne T, Webber E, Westbroek W, Adams AK. A pragmatic randomized trial of home-based testing for COVID-19 in rural Native American and Latino communities: Protocol for the “Protecting our Communities” study. Contemp Clin Trials. 2022. Jun 9;119:106820. doi: 10.1016/j.cct.2022.106820. Epub ahead of print. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Bien-Gund C, Dugosh K, Acri T, Brady K, Thirumurthy H, Fishman J, Gross R. Factors Associated With US Public Motivation to Use and Distribute COVID-19 Self-tests. JAMA Netw Open. 2021. Jan 4;4(1):e2034001. doi: 10.1001/jamanetworkopen.2020.34001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Chandler R, Guillaume D, Parker AG, et al. The impact of COVID-19 among Black women: evaluating perspectives and sources of information. Ethn Health. 2021;26(1):80–93. doi: 10.1080/13557858.2020.1841120. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Jimenez ME, Rivera-Núñez Z, Crabtree BF, Hill D, Pellerano MB, et al. Black and Latinx Community Perspectives on COVID-19 Mitigation Behaviors, Testing, and Vaccines. JAMA Netw Open. 2021;4(7):e2117074. doi: 10.1001/jamanetworkopen.2021.17074. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Jones J, Sullivan PS, Sanchez TH, Guest JL, Hall EW, et al. Similarities and Differences in COVID-19 Awareness, Concern, and Symptoms by Race and Ethnicity in the United States: Cross-Sectional Survey. J Med Internet Res. 2020;22(7):e20001. doi: 10.2196/20001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Torres C, Ogbu-Nwobodo L, Alsan M, Stanford FC, Banerjee A, et al.; COVID-19 Working Group. Effect of Physician-Delivered COVID-19 Public Health Messages and Messages Acknowledging Racial Inequity on Black and White Adults’ Knowledge, Beliefs, and Practices Related to COVID-19: A Randomized Clinical Trial. JAMA Netw Open. 2021;4(7):e2117115. doi: 10.1001/jamanetworkopen.2021.17115. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Mody A, Pfeifauf K, Bradley C, Fox B, Hlatshwayo MG, et al. Understanding Drivers of Coronavirus Disease 2019 (COVID-19) Racial Disparities: A Population-Level Analysis of COVID-19 Testing Among Black and White Populations. Clin Infect Dis. 2021;73(9):e2921–e2931. doi: 10.1093/cid/ciaa1848. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Schaffer DeRoo S, Torres RG, Ben-Maimon S, Jiggetts J, Fu LY. Attitudes about COVID-19 Testing among Black Adults in the United States. Ethn Dis. 2021. Oct 21;31(4):519–526. doi: 10.18865/ed.31.4.519. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Nguyen LH, Joshi AD, Drew DA, et al. Racial and ethnic differences in COVID-19 vaccine hesitancy and uptake. Preprint. medRxiv. 2021;2021.02.25.21252402. Published 2021 Feb 28. doi: 10.1101/2021.02.25.21252402. [DOI] [Google Scholar]
  • 26.Willis DE, Andersen JA, Bryant-Moore K, Selig JP, Long CR, et al. COVID-19 vaccine hesitancy: Race/ethnicity, trust, and fear. Clin Transl Sci. 2021. Nov;14(6):2200–2207. doi: 10.1111/cts.13077. Epub 2021 Jul 2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.CDC Human Infection with 2019 Novel Coronavirus Case Report Form. 2019
  • 28.Kwak N, Radler B. A comparison between mail and web surveys: Response pattern, respondent profile, and data quality. Journal of official statistics. 2002. Jun 1;18(2):257 [Google Scholar]
  • 29.Smith G Does gender influence online survey participation?: A record-linkage analysis of university faculty online survey response behavior. ERIC document reproduction service no. ED 501717. 2008. [Google Scholar]
  • 30.Centers for Disease Control. (n.d.) COVID Data Tracker. https://covid.cdc.gov/covid-data-tracker/#demographics. Accessed Jan 6, 2022.
  • 31.Wölfel R, Corman VM, Guggemos W, Seilmaier M, Zange S, et al. Virological assessment of hospitalized patients with COVID-2019. Nature. 2020;581(7809):465–469. doi: 10.1038/s41586-020-2196-x. [DOI] [PubMed] [Google Scholar]
  • 32.Alemi F, Guralnik E, Vang J, Wojtusiak J, Wilson A, Peterson R, Roess A. Guidelines for Triage of COVID-19 Patients Presenting with Non-respiratory Symptoms. Supplement to Healthcare Quality Management, in review for 2022 [Google Scholar]
  • 33.Lumley T Survey: Analysis of complex survey. 2021. Webpage: http://r-survey.r-forge.r-project.org/survey/.
  • 34.Deming WE, Stephan FF. “On a Least Squares Adjustment of a Sampled Frequency Table When the Expected Marginal Totals are Known.” The Annals of Mathematical Statistics, 11(4) 427–444. December,1940. 10.1214/aoms/1177731829 [DOI] [Google Scholar]
  • 35.U.S. Census Bureau quickfacts: United States [Internet]. [cited 2022 Mar 14]. Available from: https://www.census.gov/quickfacts/fact/table/US/LFE046219
  • 36.O’Brien MA, Rogers WA, Fisk AD. Understanding age and technology experience differences in use of prior knowledge for everyday technology interactions. ACM Transactions on Accessible Computing (TACCESS). 2012;4(2):1–27. doi: 10.1145/2141943.2141947. [DOI] [Google Scholar]
  • 37.Cassuto NG, Gravier A, Colin M, Theillay A, Pires-Roteira D, et al. Evaluation of a SARS-CoV-2 antigen-detecting rapid diagnostic test as a self-test: Diagnostic performance and usability. Journal of Medical Virology. 2021. 93(10). doi: 10.1002/jmv.27249. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.European Centre for Disease Prevention and Control. Considerations on the use of self-tests for COVID-19 in the EU/EEA – 17 March 2021. ECDC: Stockholm; 2021. [Google Scholar]
