Abstract
Background
One of the World Health Organization (WHO) recommendations to achieve its global targets for sexually transmitted infections (STIs) is the increased use of digital technologies. Melbourne Sexual Health Centre (MSHC) has developed an AI-assisted screening application (app) called AiSTi for the detection of common STI-related anogenital skin conditions. This study aims to understand the community’s preferences for using the AiSTi app.
Methods
We used a discrete choice experiment (DCE) to understand community preferences regarding the attributes of the AiSTi app for checking anogenital skin lesions. The DCE design included the attributes: data type; AI accuracy; verification of result by clinician; details of result; speed; professional support; and cost. The anonymous DCE survey was distributed to clients attending MSHC and through social media channels in Australia between January and March 2024. Participant preferences on various app attributes were examined using random parameters logit (RPL) and latent class analysis (LCA) models.
Results
The median age of 411 participants was 32 years (interquartile range 26–40 years), with 64% assigned male at birth. Of the participants, 177 (43.1%) identified as same-sex attracted and 137 (33.3%) as heterosexual. In the RPL model, the most influential attribute was the cost of using the app (24.1%), followed by the clinician’s verification of results (20.4%), the AI accuracy (19.5%) and the speed of receiving the result (19.1%). The LCA identified two distinct groups: ‘all-rounders’ (88%), who considered every attribute as important, and a ‘cost-focussed’ group (12%), who mainly focussed on the price. On the basis of the currently available app attributes, the predicted uptake was 72%. In the short term, a more feasible scenario of improving AI accuracy to 80–89% with clinician verification at a $5 cost could increase uptake to 90%. A long-term optimistic scenario with AI accuracy over 95%, no clinician verification and no cost could increase it to 95%.
Conclusions
The preferred AI-assisted screening app for STI-related anogenital skin lesions is one that is low-cost, clinician-verified, highly accurate and provides results rapidly. An app with these key qualities would substantially improve user uptake.
Supplementary Information
The online version contains supplementary material available at 10.1007/s40271-024-00720-8.
Key Points for Decision Makers
| AI-assisted apps for STI screening are a new and promising approach, but more needs to be known about what potential users want from these apps. |
| People prefer an affordable STI screening app with results checked by a clinician, which is highly accurate and provides quick results. |
| Prioritising these preferred features during the development of such apps could potentially increase users’ uptake of this screening app. |
Introduction
Sexually transmitted infections (STIs) continue to pose a significant global health challenge [1–3]. In 2020, the World Health Organization (WHO) estimated 374 million new cases of chlamydia, gonorrhoea, syphilis and trichomoniasis worldwide. More than 500 million people were living with genital herpes. The congenital syphilis rate has increased dramatically since 2012 [4]. Despite these alarming figures, progress in addressing these infections fell short of the global 2020 targets, leaving many STIs undiagnosed and untreated [5]. To achieve 2030 global control targets, the WHO recommends fostering innovation, including using digital technologies, raising awareness about STIs and their symptoms, encouraging early healthcare seeking and improving the quality of healthcare services.
Access to healthcare is crucial for the effective control of STIs [6] and the early identification and management of symptomatic individuals [7]. Innovative digital health services (mobile health [mHealth] apps or web-based apps) have been shown to be acceptable, feasible and impactful in enhancing access to healthcare services and improving health outcomes related to human immunodeficiency virus (HIV)/STI services, such as treatment adherence, clinical attendance rates and self-care [8–10]. The use of artificial intelligence (AI)-assisted self-assessment apps is a relatively new approach for the early detection of STIs and has shown promising results in clinical settings [11–15]. Melbourne Sexual Health Centre (MSHC), the largest public sexual health clinic in Australia, developed an AI-assisted self-assessment app that detects common STIs and other related anogenital skin conditions with reasonable accuracy [16–18]. We want to use this web-based app to encourage individuals with STIs to present for review and treatment earlier than they might otherwise. This assumption, and the acceptability of the proposed app to potential target populations, was supported by a developmental qualitative study (currently available only as a non-peer-reviewed preprint) [19].
We conducted a discrete choice experiment (DCE) study to understand potential users’ preferences for an AI-assisted app (AiSTi) for checking their anogenital skin conditions. The DCE is a method originating from marketing and economics that is now used in health research to understand people’s preferences in a range of settings, including HIV/STI interventions and digital health research [20, 21]. For example, DCE studies have been conducted on HIV self-testing [22, 23], COVID-19 tracing apps and sun protection apps [24–26]. In a DCE, participants are presented with a series of hypothetical alternatives defined by the attributes of a product (e.g. cost and accuracy), and for each combination of attributes they are asked to choose their preferred option [27]. To date, no DCE study assessing community preferences for AI apps of this nature has been published.
Methods
Design of the DCE
Prior to the DCE, we conducted three focus group discussions (FGDs) with 12 clients from MSHC to explore their opinions on using a hypothetical AiSTi app for assessing STI-related anogenital skin conditions [19]. We excluded some attributes from the discussion because we considered them mandatory (e.g. data privacy, compliance with data storage policies and inclusion of the MSHC logo). Through these discussions, we identified possible attributes that might influence the uptake of the app for potential users. We have outlined the detailed steps for defining the attributes and levels in our experimental design, including the key findings from the FGDs (see Electronic Supplementary Material, Appendix 1 for details). We also considered expert opinions and the relevant literature to finalise the list of attributes and levels (Table 1). We used Ngene software (version 1.2.1, ChoiceMetrics) to construct a D-efficient experimental design (D-error of 0.466) using the MNL estimator with zero priors, optimising the statistical efficiency of the parameter estimates while maintaining the orthogonality and balance of the design. We applied the following constraints to ensure realistic scenarios: an AI result with clinician verification could not be instant, and could not be free or cost only AUD 1. Our DCE survey contained 24 choice sets divided into four blocks, so that each participant was presented with six choice sets to reduce respondent fatigue [28]. In each choice set, we presented three unlabelled alternatives (i.e. option 1, option 2 and option 3), with option 3 as the opt-out option (Fig. 1). While the attribute levels for options 1 and 2 varied systematically across choice sets according to the experimental design, the opt-out option was understood by participants as not using the app.
We subsequently conducted think-aloud pre-testing [29] with six potential end users recruited from MSHC to check the comprehensibility of the survey and identify any potential issues with the attributes, levels or choice set configurations. During the pre-testing, each participant completed the survey while verbally expressing their thoughts and interpretation of the questions. Afterwards, the research team reviewed the feedback and discussed potential modifications to the survey. As no major issues were identified, no modifications to the final attributes and levels were required.
Table 1.
Attributes and levels of the discrete choice experiment
| Attributes | Levels |
|---|---|
| Data you need to provide | Image only^a |
| | Image + total 5 questions (on symptoms) |
| | Image + total 10 questions (on symptoms and sexual behaviours) |
| | Image + total 15 questions (on symptoms, sexual behaviours and basic personal details) |
| AI accuracy (What proportion of tests are correct) | Fair: 70–79%^a |
| | Moderate: 80–89% |
| | Good: 90–95% |
| | Very good: > 95% |
| Result checked by | Artificial intelligence only^a |
| | Artificial intelligence + clinician |
| STI result provides | Only if STI is likely or not^a |
| | If STI is likely or not + possible specific diagnoses (e.g. syphilis) |
| Speed of receiving results | Instant^a |
| | 1 day |
| | Up to 3 days |
| Professional support with result | Report only^a |
| | Report + helpline |
| | Report + appointment booking to nearest clinic |
| | Report + pathology form sent to you for STI testing |
| Cost per usage (AUD) | 0 (Free)^a |
| | $1 |
| | $5 |
| | $10 |
^a Reference levels used for the effect coding
Fig. 1.
A total of 24 discrete choice sets were generated. An example of a discrete choice set is as follows. Imagine that you suspect a genital skin lesion you have might be a sexually transmitted infection. You decide to use an online tool to evaluate your skin lesion image. Of the options presented below, which would you choose?
Survey Instrument
We created an anonymous online survey using the Qualtrics platform (Qualtrics, Provo, UT). The first section of the survey provided a brief introduction to the hypothetical AiSTi app and described its functionality for users. We then asked for participants’ consent to participate in the study and verified eligibility criteria, including being 18 years or older and having been sexually active within the past 12 months. The following section collected sociodemographic data such as participants’ gender, sexual orientation, education and employment status and travel distance to the nearest healthcare service. Participants were also asked about any previous experience with genital lesions and HIV/STI testing, and their opinions on using the AiSTi app, such as their intention to seek care and anticipated anxiety after seeing a positive result. When responding to the DCE survey questions, participants were asked to imagine having a genital skin lesion and choosing whether to use the AiSTi app to check if a photograph of their lesion indicated a possible STI.
Study Recruitment
We estimated the minimum sample size to be 167 on the basis of Johnson and Orme’s rule of thumb [30]. We recruited participants between January and March 2024 using a non-probability sampling strategy. We sent a short message service (SMS) invitation with the survey link to MSHC clients who consented to receive text messages during their computer-assisted self-interview check-in process. We also promoted the survey through the MSHC website and social media channels (Facebook and X, formerly Twitter) and displayed a poster with a QR code at two general practice clinics. As an incentive for completing the survey, participants had the option to provide their contact details via a separate link, and ten participants were randomly selected to receive a $50 gift voucher.
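For readers unfamiliar with Johnson and Orme’s rule of thumb, the calculation can be sketched as N ≥ 500c/(t × a). The specific inputs below, c = 4 (largest number of levels across attributes), t = 6 (choice sets per participant) and a = 2 (analysable alternatives per set, excluding the opt-out), are our reading of this study’s design, so treat this as an illustrative sketch rather than the authors’ exact computation.

```python
import math

def johnson_orme_min_n(c: int, t: int, a: int) -> int:
    """Johnson and Orme's rule of thumb for DCE sample size:
    N >= 500 * c / (t * a), where
      c = largest number of levels across attributes,
      t = choice tasks per respondent,
      a = alternatives per task (assumed here to exclude the opt-out).
    """
    return math.ceil(500 * c / (t * a))

# Assumed values from this study's design: 4 levels max, 6 tasks, 2 alternatives.
print(johnson_orme_min_n(c=4, t=6, a=2))  # → 167
```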
Statistical Analysis
We used the pandas Python package (version 2.2.1) to calculate descriptive statistics for the sociodemographic characteristics of our sample, and NLOGIT (version 6, Econometric Software Inc.) for the analyses of choice data. Linearity in the cost attribute was not tested because predetermined cost categories were used.
We used the random parameters logit (RPL) model due to the panel nature of our data, where each participant provided multiple choice responses. The RPL model accounts for potential correlations among repeated choices made by the same individual and accommodates preference heterogeneity across the sample population. We estimated the model parameters using 1000 Halton draws, and all parameters were specified to have a normal distribution. To assess model fit, we calculated the log-likelihood and Akaike information criterion (AIC) values. We used effects coding to model the attribute levels, with the first level in each attribute as the base level (Table 1). In the RPL results, the magnitude of a coefficient reflects the relative strength of preference for an attribute level within its attribute: a larger positive coefficient indicates a more preferred level, while a larger negative coefficient indicates a less preferred level. A significant standard deviation (SD) indicates unobserved heterogeneity in preferences for that specific attribute level across the sample. The relative importance of each attribute was calculated as the range between its highest and lowest coefficient values, expressed as a percentage of the sum of all attribute ranges. A larger relative importance indicates a greater influence of the attribute on respondents’ preferences and choices.
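The relative-importance calculation described above can be illustrated with a short sketch that applies it to the rounded point estimates later reported in Table 4. The attribute names are shortened for readability, and because the published percentages were computed from unrounded coefficients, small discrepancies from the published figures are expected.

```python
# Mean coefficients per attribute level, as rounded in Table 4.
coefficients = {
    "Data provided":        [-0.38, -0.07, 0.21, 0.23],
    "AI accuracy":          [-1.24, 0.48, -0.20, 0.96],
    "Result checked by":    [-1.15, 1.15],
    "Result details":       [-0.27, 0.27],
    "Speed of results":     [1.15, -0.13, -1.02],
    "Professional support": [-0.46, 0.10, 0.05, 0.31],
    "Cost per usage":       [1.64, 0.28, -0.84, -1.09],
}

# Range = highest minus lowest coefficient within each attribute.
ranges = {attr: max(c) - min(c) for attr, c in coefficients.items()}
total = sum(ranges.values())

# Relative importance = each range as a percentage of the sum of all ranges;
# cost comes out largest, at roughly 24%.
importance = {attr: 100 * r / total for attr, r in ranges.items()}

for attr, pct in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {pct:.1f}%")
```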
We conducted a latent class analysis (LCA) to identify clusters of users with similar preferences related to the use of the AiSTi app. We hypothesised that these clusters could be influenced by the following observable characteristics: recent arrivals to Australia (< 5 years), young age (< 26 years), men who have sex with men, higher education (graduate and above), residing far from health services (> 1 h travel time) and previous experience with genital lesions (≥ 1 occurrence). We determined statistical significance based on a p value < 0.05. To estimate the probabilities of individuals using the AiSTi app for checking anogenital lesions under different attribute scenarios, we simulated best-case, worst-case and realistic scenarios with the RPL model using NLOGIT software.
Results
Study Populations
We sent a total of 4300 SMS invites to eligible clients at MSHC between January and March 2024. Of these, 402 (9.4%) provided consent and initiated the survey. We also received 36 survey responses through advertising on other platforms (Facebook, X, MSHC website). The median duration to complete the survey was 6 min (IQR 4.5–9.1 min). After excluding 27 ineligible survey responses from people who were younger than 18 years and those without sexual contact within the past 12 months, the final analysis included 411 participants.
The detailed sociodemographic characteristics of the study participants are reported in Table 2. The median age of participants was 32 years (IQR 26–40 years) (Table 2). Most participants were assigned male at birth (264, 64.2%). Regarding sexual identity, 177 participants (43.1%) identified as same-sex attracted and 137 (33.3%) identified as straight/heterosexual. Most participants, 382 (92.9%), had previously tested for HIV and STIs, and 242 (58.9%) reported experiencing an anogenital skin problem at least once.
Table 2.
Sociodemographic characteristics of the study participants (N = 411)
| | Total (N = 411) (n, %) | Non-MSHC (N = 34) (n, %) | MSHC (N = 377) (n, %) | p value |
|---|---|---|---|---|
| Age, median (IQR) | 32 (26–40) | 31 (25–37) | 32 (26–40) | 0.0228 |
| Sex assigned at birth | ||||
| Male | 264 (64.2%) | 15 (44.1%) | 249 (66.1%) | < 0.001 |
| Female | 145 (35.3%) | 18 (52.9%) | 127 (33.7%) | |
| Intersex | 1 (0.2%) | 1 (2.9%) | 0 (0.0%) | |
| Prefer not to answer | 1 (0.2%) | 0 (0.00%) | 1 (0.3%) | |
| Gender | ||||
| Male | 262 (63.8%) | 16 (47.1%) | 246 (65.3%) | 0.074 |
| Female | 130 (31.6%) | 14 (41.2%) | 116 (30.8%) | |
| Non-binary/gender-fluid | 13 (3.2%) | 2 (5.9%) | 11 (2.9%) | |
| Another gender | 3 (0.7%) | 1 (2.9%) | 2 (0.5%) | |
| Prefer not to answer | 3 (0.7%) | 1 (2.9%) | 2 (0.5%) | |
| Sexual identity | ||||
| Lesbian/gay/homosexual | 177 (43.1%) | 11 (32.4%) | 166 (44.0%) | 0.137 |
| Bisexual | 68 (16.6%) | 4 (11.8%) | 64 (17.0%) | |
| Straight/heterosexual | 137 (33.3%) | 16 (47.1%) | 121 (32.1%) | |
| Queer | 25 (6.1%) | 2 (5.9%) | 23 (6.1%) | |
| Other | 2 (0.5%) | 1 (2.9%) | 1 (0.3%) | |
| Prefer not to answer | 2 (0.5%) | 0 (0.0%) | 2 (0.5%) | |
| Country of origin | ||||
| Australia | 210 (51.1%) | 20 (58.8%) | 190 (50.4%) | < 0.001 |
| Other country | 198 (48.2%) | 11 (32.4%) | 187 (49.6%) | |
| Prefer not to answer | 3 (0.7%) | 3 (8.8%) | 0 (0.0%) | |
| Highest level of education | ||||
| Primary school | 1 (0.2%) | 0 (0.00%) | 1 (0.3%) | 0.882 |
| High school | 51 (12.4%) | 4 (11.8%) | 47 (12.5%) | |
| Bachelor level | 144 (35.0%) | 11 (32.4%) | 133 (35.3%) | |
| Certificate level | 37 (9.0%) | 3 (8.8%) | 34 (9.0%) | |
| Diploma level | 44 (10.7%) | 3 (8.8%) | 41 (10.9%) | |
| Postgraduate level | 130 (31.6%) | 12 (35.3%) | 118 (31.3%) | |
| Other | 3 (0.7%) | 1 (2.9%) | 2 (0.5%) | |
| Prefer not to answer | 1 (0.2%) | 0 (0.0%) | 1 (0.3%) | |
| Employment status | ||||
| Full-time/self-employed | 215 (52.3%) | 16 (47.1%) | 199 (52.8%) | 0.971 |
| Part-time/casual employment | 124 (30.2%) | 10 (29.4%) | 114 (30.2%) | |
| Retired | 9 (2.2%) | 1 (2.9%) | 8 (2.1%) | |
| Unable to work | 8 (2.0%) | 1 (2.9%) | 7 (1.9%) | |
| Unemployed/not working | 44 (10.7%) | 5 (14.7%) | 39 (10.3%) | |
| Other | 9 (2.2%) | 1 (2.9%) | 8 (2.1%) | |
| Prefer not to answer | 2 (0.5%) | 0 (0.0%) | 2 (0.5%) | |
| Travel time to healthcare provider | ||||
| Less than 30 min | 279 (67.9%) | 22 (64.7%) | 257 (68.2%) | < 0.001 |
| 30–60 min | 111 (27.0%) | 8 (23.5%) | 103 (27.3%) | |
| 1–2 h | 17 (4.1%) | 1 (2.9%) | 16 (4.2%) | |
| More than 2 h | 4 (1.0%) | 3 (8.8%) | 1 (0.3%) | |
| Previously tested for HIV/STIs | ||||
| Yes | 382 (92.9%) | 23 (67.7%) | 359 (95.2%) | < 0.001 |
| No | 27 (6.6%) | 11 (32.4%) | 16 (4.2%) | |
| Prefer not to answer | 2 (0.5%) | 0 (0.0%) | 2 (0.5%) | |
| Experienced genital lesion(s) | ||||
| No, never | 167 (40.6%) | 15 (44.1%) | 152 (40.3%) | 0.817 |
| Yes, once | 110 (26.8%) | 7 (20.6%) | 103 (27.3%) | |
| Yes, more than once | 132 (32.1%) | 12 (35.3%) | 120 (31.8%) | |
| Prefer not to answer | 2 (0.5%) | 0 (0.0%) | 2 (0.5%) | |
%, percentage; IQR, interquartile range; MSHC, participants recruited by sending direct SMS to the clients of Melbourne Sexual Health Centre (MSHC); non-MSHC, participants recruited by online social media platforms (Facebook, X, MSHC website). p value: calculated with chi-squared test for the significant difference between MSHC and non-MSHC participants where p value < 0.05 is significant.
The participants’ perspectives on using the hypothetical AiSTi app are reported in Table 3. In total, 259 (63.1%) reported being comfortable with uploading anonymous genital lesion images to the app to check their symptoms, whereas 30 (7.3%) reported feeling very uncomfortable with this process. In a hypothetical scenario where the app indicated a potential STI result, 368 (89.5%) reported they were likely to seek healthcare, while 34 (8.3%) reported they were unlikely to do so. In total, 177 (43.1%) reported that they would feel very (n = 108) or extremely (n = 69) anxious if the app indicated a potential STI result. There was no significant difference in responses between participants recruited from MSHC and those from other sources (p value > 0.05), nor between participants with and without prior experience of genital lesions (p value > 0.05, Table S1).
Table 3.
User perspective on using AiSTi application
| | Total (N = 411) (n, %) | Non-MSHC (N = 34) (n, %) | MSHC (N = 377) (n, %) | p value |
|---|---|---|---|---|
| Comfort level with uploading anonymous genital lesion image to the app | ||||
| Very comfortable | 119 (29.0%) | 11 (32.4%) | 108 (28.7%) | 0.920 |
| Somewhat comfortable | 140 (34.1%) | 13 (38.2%) | 127 (33.7%) | |
| Neutral | 49 (11.9%) | 3 (8.8%) | 46 (12.2%) | |
| Somewhat uncomfortable | 73 (17.8%) | 5 (14.7%) | 68 (18.0%) | |
| Very uncomfortable | 30 (7.3%) | 2 (5.9%) | 28 (7.4%) | |
| Likelihood of care-seeking if AiSTi app indicates a potential STI | ||||
| Extremely likely | 320 (77.9%) | 24 (70.6%) | 296 (78.5%) | 0.189 |
| Somewhat likely | 48 (11.7%) | 8 (23.5%) | 40 (10.6%) | |
| Neither likely or unlikely | 9 (2.2%) | 1 (2.9%) | 8 (2.1%) | |
| Somewhat unlikely | 5 (1.2%) | 0 (0.0%) | 5 (1.3%) | |
| Extremely unlikely | 29 (7.1%) | 1 (2.9%) | 28 (7.4%) | |
| Anticipated anxious level for potential STI result indicated by the app | ||||
| Not anxious | 34 (8.3%) | 2 (5.9%) | 32 (8.5%) | 0.427 |
| Slightly anxious | 74 (18.0%) | 6 (17.7%) | 68 (18.0%) | |
| Moderately anxious | 126 (30.7%) | 13 (38.2%) | 113 (30.0%) | |
| Very anxious | 108 (26.3%) | 5 (14.7%) | 103 (27.3%) | |
| Extremely anxious | 69 (16.8%) | 8 (23.5%) | 61 (16.2%) | |
Preferences by Random Parameter Logit (RPL) Model
The most influential attribute was the cost of using the app (24.1%), followed by clinician verification of the result (20.4%), the accuracy of the AI result (19.5%), the speed of receiving the result (19.1%), additional professional services accompanying the result (6.8%), the types of data required (5.3%) and the level of detail in the result report (4.8%) (Fig. 2). The most preferred scenario was a free app with an AI accuracy of over 95%, clinician verification of results and instant results. The least preferred app cost $10, had an AI accuracy below 80%, lacked clinician verification of results and took up to 3 days to return results. The detailed RPL results are presented in Table 4 and Fig. S1.
Fig. 2.
Relative importance of attributes by random parameter logit model
Table 4.
Random parameter logit model of preferences for using AiSTi app to check anogenital skin lesions
| Attributes | Coefficient (SE) | Standard deviation (SE) |
|---|---|---|
| Data you need to provide | ||
| Image only | −0.38 (0.10)*** | 0.62 (0.34) |
| Image + total 5 questions | −0.07 (0.09) | 0.05 (0.18) |
| Image + total 10 questions | 0.21 (0.09)** | 0.60 (0.14)*** |
| Image + total 15 questions | 0.23 (0.09)*** | 0.14 (0.24) |
| AI accuracy | ||
| Fair: 70–79% | −1.24 (0.14)*** | 0.99 (0.29) |
| Moderate: 80–89% | 0.48 (0.10)*** | 0.74 (0.16)*** |
| Good: 90–95% | −0.20 (0.09)** | 0.02 (0.19) |
| Very good: > 95% | 0.96 (0.11)*** | 0.67 (0.14)*** |
| Result checked by | ||
| Artificial intelligence only | −1.15 (0.18)*** | 0.36 (0.7)** |
| Artificial intelligence + clinician | 1.15 (0.18)*** | 0.36 (0.7)** |
| STI result provides | ||
| Only if STI is likely or not | −0.27 (0.04)*** | 0.18 (0.12) |
| If STI is likely or not + possible specific diagnoses | 0.27 (0.04)*** | 0.18 (0.12) |
| Speed of receiving results | ||
| Instant | 1.15 (0.22)*** | 1.26 (0.21)*** |
| 1 day | −0.13 (0.13) | 0.41 (0.18)** |
| Up to 3 days | −1.02 (0.18)*** | 1.20 (0.14)*** |
| Professional support with result | ||
| Report only | −0.46 (0.10)*** | 0.41 (0.39) |
| Report + helpline | 0.10 (0.09) | 0.29 (0.22) |
| Report + appointment booking to nearest clinic | 0.05 (0.08) | 0.23 (0.28) |
| Report + pathology form sent to you for STI testing | 0.31 (0.08)*** | 0.19 (0.18) |
| Cost per usage (AUD) | ||
| 0 (Free) | 1.64 (0.16)*** | 1.41 (0.36) |
| $1 | 0.28 (0.09)*** | 0.32 (0.23) |
| $5 | −0.84 (0.11)*** | 0.52 (0.20)*** |
| $10 | −1.09 (0.13)*** | 1.14 (0.14)*** |
| Opt-out | −1.14 (0.09)*** |
AIC/N = 1.67, log-likelihood = −2044.06. Opt-out refers to the scenario in which participants chose neither option 1 nor option 2
AIC Akaike information criteria, AUD Australian dollars, SE standard error, STI sexually transmitted infection
*p value < 0.10, **p value < 0.05, ***p value < 0.01
Preference Heterogeneity by Latent Class Analysis (LCA)
The LCA identified two distinct groups of participants, which we named ‘all-rounders’ (88.1%, n = 362) and ‘cost-focussed’ (11.9%, n = 49) (Table 5 and Fig. S2). The majority group, ‘all-rounders’, considered all attributes and preferred an app that allowed them to provide comprehensive data and receive fast, accurate and clinician-verified results at a low cost. They also preferred to receive additional professional services, such as a pathology form for STI testing. They did not prefer app options with an AI accuracy below 80%, no clinician verification or waits of up to 3 days for results. The ‘cost-focussed’ group was influenced mostly by the cost (45.0%), followed by clinician verification of the result (21.8%). If the app was free to use, they were willing to wait up to 1 day for the result and to receive a report without additional professional services. The relative importance of attributes in each class is reported in Fig. 3.
Table 5.
Latent class analysis of preferences for using AiSTi app for checking anogenital skin lesions
| | ‘All-rounders: consider all aspects’ (88.1%) | | ‘Cost-focussed group’ (11.9%) | |
|---|---|---|---|---|
| Attributes | Coefficient | SE | Coefficient | SE |
| Data you need to provide | ||||
| Image only | −0.25*** | 0.07 | 0.19 | 0.30 |
| Image + total 5 questions | −0.19*** | 0.06 | 0.48 | 0.32 |
| Image + total 10 questions | 0.24*** | 0.06 | −0.08 | 0.33 |
| Image + total 15 questions | 0.21*** | 0.07 | −0.59 | 0.38 |
| AI accuracy | ||||
| Fair: 70–79% | −0.83*** | 0.07 | −0.46 | 0.41 |
| Moderate: 80–89% | 0.19*** | 0.07 | 0.34 | 0.31 |
| Good: 90–95% | 0.00 | 0.06 | −0.19 | 0.35 |
| Very good: > 95% | 0.64*** | 0.06 | 0.32 | 0.31 |
| Result checked by | ||||
| Artificial intelligence only | −1.00*** | 0.14 | −1.09* | 0.62 |
| Artificial intelligence + clinician | 1.00*** | 0.14 | 1.09* | 0.62 |
| STI result provides | ||||
| Only if STI is likely or not | −0.21*** | 0.03 | 0.01 | 0.17 |
| If STI is likely or not + possible specific diagnoses | 0.21*** | 0.03 | −0.01 | 0.17 |
| Speed of receiving results | ||||
| Instant | 0.89*** | 0.16 | −0.29 | 0.60 |
| 1 day | −0.16 | 0.10 | 0.17 | 0.45 |
| Up to 3 days | −0.73*** | 0.12 | 0.12 | 0.43 |
| Professional support with result | ||||
| Report only | −0.23*** | 0.06 | 0.55* | 0.30 |
| Report + helpline | −0.07 | 0.06 | −0.39 | 0.36 |
| Report + appointment booking to nearest clinic | 0.09 | 0.06 | −0.30 | 0.34 |
| Report + pathology form sent to you for STI testing | 0.21*** | 0.06 | 0.14 | 0.29 |
| Cost per usage (AUD) | ||||
| 0 (Free) | 1.01*** | 0.08 | 2.60*** | 0.46 |
| $1 | 0.22*** | 0.06 | 0.36 | 0.55 |
| $5 | −0.57*** | 0.07 | −1.08** | 0.48 |
| $10 | −0.66*** | 0.06 | −1.88*** | 0.62 |
| Opt-out | −2.72*** | 0.14 | 2.01*** | 0.27 |
AIC/N = 1.41, log-likelihood = −1698.42. Opt-out refers to the scenario in which participants chose neither option 1 nor option 2
AIC Akaike information criteria, AUD Australian dollars, SE standard error, STI sexually transmitted infection
*p value < 0.10, **p value < 0.05, ***p value < 0.01
Fig. 3.
Relative importance of attributes in two groups by latent class analysis
Predicted Uptake of the App
When the attribute levels were chosen to reflect the current MSHC prototype (scenario A: uploaded image only, AI accuracy of 70–79%, no clinician verification of the result, instant result, report only and free of charge), the predicted uptake was 71.8%. The descriptions of the different scenarios are shown in Fig. 4. The most preferred combination of attribute levels (scenario B) increased the uptake to 99.6%, while the least preferred combination (scenario C) decreased the uptake to 9.3%. Assuming the AI accuracy could be improved to 80–89% and clinician verification of the result included at a reasonable cost of $5 (scenario D), the uptake would increase to 90.4%. Further, assuming a future scenario E with an AI accuracy of over 95%, no clinician verification and no cost, the uptake would increase to 95.4%.
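The scenario simulations rest on summing the effect-coded utilities of the chosen attribute levels and converting them into a choice probability against the opt-out. A simplified point-estimate sketch using the mean coefficients from Table 4 is shown below; the published figures were simulated with the full random-parameters model in NLOGIT, so the numbers produced here are only indicative (around 61% for scenario A versus the published 71.8%, and over 99% for scenario B).

```python
import math

OPT_OUT = -1.14  # opt-out constant from Table 4

def uptake(level_utilities):
    """Logit probability of choosing the app over the opt-out, given the
    effect-coded mean utilities of the selected attribute levels."""
    v_app = sum(level_utilities)
    return math.exp(v_app) / (math.exp(v_app) + math.exp(OPT_OUT))

# Scenario A (current prototype): image only, 70-79% accuracy, AI-only
# verification, likely-or-not result, instant, report only, free.
scenario_a = [-0.38, -1.24, -1.15, -0.27, 1.15, -0.46, 1.64]

# Scenario B: the most preferred level of every attribute, as reported.
scenario_b = [0.23, 0.96, 1.15, 0.27, 1.15, 0.31, 1.64]

print(f"Scenario A: {uptake(scenario_a):.1%}")
print(f"Scenario B: {uptake(scenario_b):.1%}")
```

The gap between this sketch and the published scenario A figure illustrates why the authors simulated over the full distribution of random parameters rather than using point estimates alone.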
Fig. 4.
Simulation scenarios and predicted uptake. Scenario D assumes the app with 80–89% AI accuracy, clinician verification and a $5 cost. Scenario E assumes the app with more than 95% AI accuracy, no clinician verification and no cost
Discussion
In our study, we found that a hypothetical patient-facing application, which screens for potential STI-related anogenital lesions that individuals are concerned about, would be most likely to be used when it combines key attributes: being accurate, fast and inexpensive, and incorporating a clinician’s verification of the results. An application with these features would be likely to be used by more than 90% of users. Our participants generally anticipated they would be comfortable using the app and very likely to attend healthcare if it indicated they were likely to have an STI, but a considerable proportion would be anxious. To the best of our knowledge, our study is the first DCE examining preferences for using an app for the screening of STI-related skin lesions. The LCA identified two groups of participants: 88.1% considered every attribute of the app as important, and 11.9% focussed only on the cost and clinician-verified results. Given the importance of individuals at high risk of STIs attending healthcare early for treatment, and the rising rates of STIs, future studies should explore whether our hypothetical findings translate into improved STI control.
Our study findings highlight the most influential attributes affecting user preferences for adopting an app to check STI-related skin lesions. Cost was the most influential attribute, indicating that participants were highly sensitive to the cost of using the AiSTi app, with a preference for a cost of $1 or less. This finding is consistent with previous studies that have shown cost to be a significant factor in the adoption of digital health apps [25, 31, 32]. A potential explanation for cost being the most important attribute in our study is that most participants were recruited from MSHC, where they could access in-person consultations free of charge. Given the unlikely sustainability of a free app, other potential funding models need to be explored, including advertising on the site or a community-wide subscription paid by the government or health services. To encourage governments or health services to fund the app so that it is free for users, it would be important to evaluate the cost-effectiveness of this approach.
The second most important attribute was the clinician verification of the results. This suggests that potential users acknowledge the possible limitations of AI and highlights the need for clinician oversight and validation of AI results to foster user confidence in the app. This finding aligns with the study by Liu et al. [25], where participants stated that AI screening results could potentially outperform human diagnosis in the future, but they still preferred to receive a combined assessment from both AI and a clinician.
The third most important attribute was higher AI accuracy. Our previous studies [15, 33] demonstrated that AI could be reasonably accurate in detecting STI-related anogenital skin lesions, though the accuracy varied across disease types. To improve the accuracy of the app and its inclusiveness, researchers will need more images from diverse populations and the inclusion of other data, such as clinical history, to achieve a higher validated AI accuracy for checking STI-related skin lesions. If an AI accuracy of over 95% can be achieved for the important STI lesions, AI results without clinician verification might become acceptable for self-assessment.
The speed of receiving results was the fourth most important attribute, indicating that participants preferred to receive results instantly and found waiting up to 3 days unacceptable. This preference for instant results could be explained by the fact that potential users of this app are assumed to already have anogenital symptoms and would therefore be anxious to know the result. This finding was supported by feedback provided in the focus group discussions [19].
The findings from our DCE and user perspectives on using the app provide insights into how the AiSTi app could be integrated with existing sexual health services. Nearly two-thirds (67%) of participants responded that they were comfortable uploading anonymous anogenital lesion images to the app to check their condition. Nearly 85% responded that they would seek healthcare if the app detected a potential STI, demonstrating a high intent to be tested. Participants also preferred to receive a pathology slip delivered to their home for laboratory tests if the app indicated a likely STI. This integrated approach, combining the app with pathology slip delivery, might help sexual health services reach the community more effectively and potentially reduce the burden on sexual health clinics. However, a notable proportion of participants (43.1%) reported they would be very or extremely anxious about a potential STI screening result, so careful consideration of how to present this information to the public would be required to avoid unnecessary harm.
Our choice data predicted the app's uptake with its current attributes to be 71.8%. The best case was scenario B, which achieved a 99.6% uptake, but this may not be realistically achievable in the near future. A more feasible approach is scenario D, in which various attributes are improved, though not necessarily to their optimal levels. For instance, achieving an AI accuracy of over 95%, if it is possible at all, would require substantial research investment, such as collecting diverse training images and intensive AI model development. Given the scarcity of anogenital lesion images, reaching this level of accuracy may not currently be feasible. A more realistic goal is to improve AI accuracy to a reasonably high level (80–89%) and have the results verified by clinicians at low or no cost to the user; this scenario D is projected to increase uptake to 90.4%. The long-term goal should be to develop an app with an AI accuracy of over 95%, which may not require clinician verification, provided at no cost to the user; this hypothetical scenario E could potentially increase uptake to 95.4%. However, the 5% improvement over scenario D would require substantial time and resources, and the increase in uptake should be weighed against the investment required.
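Such scenario projections follow from the choice probabilities implied by a fitted logit model: for a given attribute configuration, predicted uptake is the logit share choosing the app over opting out. The sketch below uses entirely hypothetical part-worth utilities and an assumed opt-out utility; the actual RPL estimates and scenario definitions are those reported in our results.

```python
import math

# Hypothetical part-worth utilities, for illustration only
PART_WORTHS = {
    "accuracy_80_89": 0.9,
    "accuracy_95_plus": 1.4,
    "clinician_verified": 0.8,
    "instant_result": 0.7,
    "cost_free": 1.0,
    "cost_5_dollars": 0.4,
}
OPT_OUT_UTILITY = 1.2  # hypothetical utility of not using the app

def predicted_uptake(features):
    """Binary-logit share choosing the app over the opt-out:
    P(app) = exp(V_app) / (exp(V_app) + exp(V_opt_out))."""
    v_app = sum(PART_WORTHS[f] for f in features)
    return math.exp(v_app) / (math.exp(v_app) + math.exp(OPT_OUT_UTILITY))

# Analogue of a scenario-D-style configuration: good accuracy,
# clinician verification, small user cost
uptake = predicted_uptake(["accuracy_80_89", "clinician_verified", "cost_5_dollars"])
```

Improving any attribute raises the app's utility and hence its predicted share, which is how scenarios B, D and E translate into the uptake figures above.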
This study has some limitations. First, we only included respondents with recent sexual contact, so the findings may not fully reflect the preferences of the general population. Most participants were recruited from MSHC, and the findings on user preferences might therefore not reflect the broader Australian population. However, we found no significant demographic differences between participants who had attended MSHC and those recruited by other means in terms of gender, education, employment status or history of anogenital lesions. Moreover, our latent class analysis with interactions did not identify any observable demographic variables that significantly differentiated these two groups. Second, because the number of attributes in the DCE design had to be limited to minimise participant burden, we did not include all potential attributes, such as data privacy [24], an organisation logo in the app and anonymity. The exclusion of these attributes might have influenced participants' perspectives on the relative importance of those that were included. However, on the basis of our previous qualitative findings [19] on the AiSTi app, we assumed these attributes to be minimum requirements for app development; as a health service, and particularly a sexual health service, we need to maximise data privacy regardless of user preferences. Third, as over 90% of responses came from MSHC clients through SMS invites, we did not incorporate reCAPTCHA scores from Qualtrics to identify potential bot responses. There could also be a social desirability bias, whereby participants alter their preferences to align with what they believe to be more socially acceptable; however, our survey was completely anonymous, and the wording of the questions was refined through pre-testing to minimise this bias. Fourth, our experimental design resulted in some attribute-level combinations appearing infrequently. This might have influenced respondents' trade-offs and led to a disordering of preferences for AI accuracy; alternatively, there may have been insufficient distance between what moderate and good accuracy meant to participants. Finally, our study used brief attribute descriptions without additional explanatory hover text or visuals for the DCE. Although specific attribute levels could potentially have been misinterpreted, we mitigated this by including a warm-up scenario with thorough explanations of the attributes and levels, and all users during pre-testing demonstrated a clear understanding of the attribute levels.
Conclusions
This DCE study provides valuable insights into user preferences for an AI-assisted self-assessment app targeting STI-related anogenital skin lesions. The findings highlight cost, clinician verification, AI accuracy and speed of receiving results as the most influential attributes. The simulation findings also point to short- and long-term strategies for optimising user uptake of the app; an app with the preferred key qualities would substantially improve uptake.
Supplementary Information
Below is the link to the electronic supplementary material.
Acknowledgements
The authors thank Monash University and MSHC for a PhD scholarship for author N.S. We also thank all contributors to this study. We thank Mark Chung for graphic design and help with online recruitment and Rohit Sasidharan for helping identify eligible participants from the MSHC database. We also thank the GP clinics for their help with participant recruitment.
Author contributions
J.O., L.Z., C.K.F. and N.S. conceived the study. J.O. and N.S. designed and developed the DCE design. A.K. and D.L. assisted in the study design. J.O., N.S. and P.L. conducted data collection, data analysis and drafted the initial manuscript. All authors provided feedback and improved the final manuscript.
Declarations
Funding
C.K.F. is supported by an Australian National Health and Medical Research Council (NHMRC) Leadership Investigator Grant (GNT1172900). J.O. is supported by an NHMRC Emerging Leader Investigator Grant (GNT1193955).
Conflict of interest
None declared.
Data availability
Data are not publicly available.
Ethics approval
Ethical review was approved by the Alfred Hospital Ethics Committee (Project number: 642/23). The study was conducted in accordance with the ethical regulations and guidelines.
Footnotes
Lei Zhang and Jason J. Ong contributed equally to supervision.
Contributor Information
Nyi Nyi Soe, Email: drnyinyisoe1989@gmail.com.
Lei Zhang, Email: lei.zhang1@monash.edu.
Jason J. Ong, Email: Jason.ong@monash.edu, Email: Jason.Ong@lshtm.ac.uk
References
- 1.Kirby Institute. HIV, viral hepatitis and sexually transmissible infections in Australia. Annual surveillance report 2022. https://kirby.unsw.edu.au/sites/default/files/kirby/report/Annual-Surveillance-Report-2022_HIV.pdf.
- 2.World Health Organization. Sexually transmitted infections (STIs) 2022. https://www.who.int/news-room/fact-sheets/detail/sexually-transmitted-infections-(stis).
- 3.Centers for Disease Control and Prevention (CDC). Incidence, prevalence, and cost of sexually transmitted infections in the United States 2021. https://www.cdc.gov/nchhstp/newsroom/fact-sheets/std/STI-Incidence-Prevalence-Cost-Factsheet.html.
- 4.Centers for Disease Control and Prevention (CDC). STI Treatment Guidelines 2021. https://www.cdc.gov/std/treatment-guidelines/congenital-syphilis.htm. Accessed 22 July 2021.
- 5.World Health Organization; Global HIV Hepatitis and STIs Programmes. Global health sector strategies 2022–2030. 2022.
- 6.Fairley CK, Chow EPF, Simms I, Hocking JS, Ong JJ. Accessible health care is critical to the effective control of sexually transmitted infections. Sex Health. 2022;19(4):255–64. [DOI] [PubMed] [Google Scholar]
- 7.Fairley CK, Chow EPF, Hocking JS. Early presentation of symptomatic individuals is critical in controlling sexually transmissible infections. Sex Health. 2015;12(3):181. [DOI] [PubMed] [Google Scholar]
- 8.Daher J, Vijh R, Linthwaite B, Dave S, Kim J, Dheda K, et al. Do digital innovations for HIV and sexually transmitted infections work? Results from a systematic review (1996–2017). BMJ Open. 2017;7(11): e017604. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Cao B, Bao H, Oppong E, Feng S, Smith KM, Tucker JD, et al. Digital health for sexually transmitted infection and HIV services. Curr Opin Infect Dis. 2020;33(1):44–50. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Abraham E, Chow EP, Fairley CK, Lee D, Kong FY, Mao L, et al. eSexualHealth: preferences to use technology to promote sexual health among men who have sex with men and trans and gender diverse people. 2022. [DOI] [PMC free article] [PubMed]
- 11.Latt PM, Soe NN, Xu X, Ong JJ, Chow EPF, Fairley CK, et al. Identifying individuals at high risk for HIV and sexually transmitted infections with an artificial intelligence-based risk assessment tool. Open Forum Infect Dis. 2024. 10.1093/ofid/ofae011. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Xu X, Ge Z, Chow EPF, Yu Z, Lee D, Wu J, et al. A machine-learning-based risk-prediction tool for HIV and sexually transmitted infections acquisition over the next 12 months. J Clin Med. 2022;11(7):1818. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Melbourne Sexual Health Centre. MySTIRisk, a web-based AI tool for risk prediction of HIV/STIs. 2023. https://mystirisk.mshc.org.au/.
- 14.Bao Y, Medland NA, Fairley CK, Wu J, Shang X, Chow EPF, et al. Predicting the diagnosis of HIV and sexually transmitted infections among men who have sex with men using machine learning approaches. J Infect. 2021;82(1):48–59. [DOI] [PubMed] [Google Scholar]
- 15.Soe NN, Latt PM, Lee D, Yu Z, Schmidt M, Bissessor M, et al. Using deep learning systems for diagnosing common skin lesions in sexual health. Preprint. 2024.
- 16.Soe NN, Yu Z, Latt PM, Lee D, Ong JJ, Ge Z, et al. Image capture: AI-assisted sexually transmitted infection diagnosis tool for clinicians in a clinical setting [Conference presentation]. Australasian Sexual and Reproductive Health Conference, Sydney. September 2023.
- 17.Soe NN, Yu Z, Latt PM, Lee D, Samra RS, Ge Z, et al. Using AI to differentiate Mpox from common skin lesions in a sexual health clinic: algorithm development and validation study. J Med Internet Res. 2024;26: e52490. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Soe NN, Yu Z, Latt PM, Lee D, Ong JJ, Ge Z, et al. Evaluation of artificial intelligence-powered screening for sexually transmitted infections-related skin lesions using clinical images and metadata. BMC Med. 2024;22(1):296. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.King AJ, Soe NN, Latt PM, Zhang L, Temple-Smith M, Maddaford K, et al. Sexual health service users’ perspectives on artificial intelligence applications for identification of lesions associated with sexually transmissible infections: a qualitative study. Sexual Reprod Health Matters. 2024.
- 20.Clark MD, Determann D, Petrou S, Moro D, De Bekker-Grob EW. Discrete choice experiments in health economics: a review of the literature. Pharmacoeconomics. 2014;32(9):883–902. [DOI] [PubMed] [Google Scholar]
- 21.Wulandari LPL, He SY, Fairley CK, Bavinton BR, Schmidt H-M, Wiseman V, et al. Preferences for pre-exposure prophylaxis for HIV: a systematic review of discrete choice experiments. eClinicalMedicine. 2022;51: 101507. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Ong JJ, De Abreu LR, Street D, Smith K, Jamil MS, Terris-Prestholt F, et al. The preferred qualities of human immunodeficiency virus testing and self-testing among men who have sex with men: a discrete choice experiment. Value Health. 2020;23(7):870–9. [DOI] [PubMed] [Google Scholar]
- 23.Ung M, Martin S, Terris-Prestholt F, Quaife M, Tieosapjaroen W, Phillips T, et al. Preferences for HIV prevention strategies among newly arrived Asian-born men who have sex with men living in Australia: a discrete choice experiment. Front Public Health. 2023;11:1018983. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Folkvord F, Peschke L, Gümüş Ağca Y, Van Houten K, Stazi G, Roca-Umbert A, et al. Preferences in the intention to download a COVID tracing app: a discrete choice experiment study in the Netherlands and Turkey. Front Commun. 2022. 10.3389/fcomm.2022.900066. [Google Scholar]
- 25.Liu T, Tsang W, Xie Y, Tian K, Huang F, Chen Y, et al. Preferences for artificial intelligence clinicians before and during the COVID-19 pandemic: discrete choice experiment and propensity score matching study. J Med Internet Res. 2021;23(3): e26997. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Szinay D, Cameron R, Naughton F, Whitty JA, Brown J, Jones A. Understanding uptake of digital health products: methodology tutorial for a discrete choice experiment using the Bayesian efficient design. J Med Internet Res. 2021;23(10): e32365. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Merlo G, Van Driel M, Hall L. Systematic review and validity assessment of methods used in discrete choice experiments of primary healthcare professionals. Health Econ Rev. 2020. 10.1186/s13561-020-00295-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Ryan MK, Julie R, Rockers PC, Dolea C. How to conduct a discrete choice experiment for health workforce recruitment and retention in remote and rural areas: a user guide with case studies (English). World Bank Group; 2012.
- 29.Campoamor NB, Guerrini CJ, Brooks WB, Bridges JFP, Crossnohere NL. Pretesting discrete-choice experiments: a guide for researchers. Patient Patient Centered Outcomes Res. 2024;17(2):109–20. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Orme B. Sample size issues for conjoint analysis studies. Sequim: Sawtooth software technical paper. 1998.
- 31.Consumer Technology Association. Driving consumer adoption of digital health solutions. PR Newswire. 2023.
- 32.Savira F, Robinson S, Toll K, Spark L, Thomas E, Nesbitt J, et al. Consumer preferences for telehealth in Australia: a discrete choice experiment. PLoS One. 2023;18(3): e0283821. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Soe NN, Latt PM, Yu Z, Lee D, Kim CM, Tran D, et al. Clinical features-based machine learning models to separate sexually transmitted infections from other skin diagnoses. J Infect. 2024;88(4): 106128. [DOI] [PubMed] [Google Scholar]