Abstract
Objective
To compare the acceptability of 2 artificial intelligence (AI) use cases in the English National Health Service Breast Screening Program.
Patients and Methods
From February 7 to March 14, 2024, we conducted an online survey, randomizing participants to information about using AI either as the second mammogram reader or to triage mammograms. In the triage scenario, only higher-risk images would be reviewed by a human reader. The survey was completed by 3419 women aged 45 to 70 years, recruited from an online panel. The primary outcome was acceptability of the presented AI use case. We assessed a range of psychological and demographic factors. Regression modeling examined predictors of acceptability.
Results
Using AI as a second reader was rated as more acceptable (P<.001), less concerning (P<.001), and less likely to put people off screening (P=.001) than using it as a triage tool. In both groups, most women said AI would not affect their breast screening attendance (1251/1710 [73%] and 1195/1709 [70%] in the second reader and triage groups, respectively). Nevertheless, 15% (498/3419) of participants stated that the use of AI would make them less likely to attend. After adjusting for AI use case, acceptability was higher in respondents of older age, White ethnicity, higher education, greater AI knowledge, and with more positive attitudes toward both AI and breast screening.
Conclusion
Artificial intelligence in breast screening was rated as more acceptable if used alongside, rather than instead of, a human reader. Ongoing careful evaluation is needed to ensure its roll-out does not widen existing social inequalities and that the risk-benefit profile of screening is maintained.
The National Health Service (NHS) Breast Screening Program (BSP) in England invites women aged 50-70 years for 3-yearly mammography, preventing an estimated 1300 breast cancer deaths annually.1 At most centers, 2 radiologists review each mammogram, with a third reader arbitrating disagreement. This labor-intensive process reduces but does not eliminate human error.2,3 Artificial intelligence (AI) is being actively researched owing to its potential to improve diagnostic accuracy, address staff shortages, and reduce workload.2,4,5 However, it is also important to ensure the use of AI in mammography is acceptable among the screening-eligible public to maintain confidence and engagement with breast screening.
International and UK studies suggest that the public are generally open to AI in health care.6, 7, 8 Although a recent review suggests such attitudes may extend to AI-assisted mammography, a nonnegligible proportion of the population is skeptical.9 In England, a survey in a nonprobability sample of women working at NHS trusts in the East Midlands found 47% were in favor of AI-assisted mammography but notable proportions lacked understanding and trust in AI.10 More recently, a qualitative study found women in England appreciated the potential benefits of AI-assisted mammography but expressed concerns about reliability, loss of human oversight, transparency, and data security.11 Although these studies offer valuable insights, it is important to investigate public attitudes toward AI-assisted mammography using a broader population-based sample. Identifying and addressing sociodemographic and psychosocial correlates of acceptability can help ensure that AI implementation does not exacerbate existing breast screening disparities.
Possible use cases for AI in the NHSBSP include AI as a second reader, a triage tool, a radiologists’ aid, or a standalone reader.12 Although research has explored women’s perceptions of different applications of AI in breast screening, experimental studies directly comparing their acceptability are lacking.10,13, 14, 15, 16, 17
Acceptability of health care relates to the approval and adoption of health systems and interventions. On an individual level, acceptability is a multifaceted construct, encompassing cognitive, affective, and behavioral factors.18 To date, studies have used a range of measures to explore women’s perceptions of AI in breast screening,9 but none have explored specific dimensions of acceptability.
To address these gaps, this study compared how women in England anticipate accepting, experiencing, and attending breast screening under 2 AI use cases: (1) AI as a second reader, and (2) AI as a triage tool (with only higher-risk images reviewed by a human reader). These 2 use cases were selected because they are likely candidates for implementation and are relatively polarized in terms of human involvement.19 Variations in acceptability by sociodemographic and psychosocial characteristics were also explored.
Patients and Methods
Study Design and Population
We conducted an online survey, randomizing participants to 1 of 2 AI use cases (Figure 1). The protocol was preregistered on the Open Science Framework (OSF; https://osf.io/zayhb/overview).
Figure 1.
Artificial intelligence (AI) use cases presented to participants within the survey.
We recruited women aged 45-70 years in England, eligible or approaching eligibility for NHS breast screening. Women with a current or previous diagnosis of breast cancer and/or under annual surveillance for elevated risk were excluded. Participants were recruited by Dynata Global Ltd, with target quotas for minoritized ethnic groups (25%) and no formal educational qualifications (18%), based on 2021 census data.20 Age-group targets were as follows: 20% (45-49 years), 43% (50-59 years), and 37% (60-70 years).
Survey Procedure and Materials
The survey was developed for this study with input from patient and public contributors (N=16) and hosted on Qualtrics XM (version 2024). Further details about the development and piloting of the survey are available on OSF (https://osf.io/zayhb/files/osfstorage). On average, the survey took 12.5 minutes to complete. A minimum completion time of 7 minutes was set to filter out speeders.
Our study received institutional approval by the Queen Mary Ethics of Research Committee (ref: QME23.0161), and all participants gave informed consent at the start of the survey. Participants who did not consent or meet the eligibility and quota criteria were directed to an exit page (Figure 2). Following baseline questions, participants were randomly assigned (1:1) to 1 of 2 exposure groups: (A) AI as second reader or (B) AI as triage tool (Figure 1).
Figure 2.
Flowchart of participant inclusion.
All participants were shown a graphic and text description of current breast screening and its benefits and harms. Women in the AI-second reader group were presented with information explaining how this use of AI could be integrated within the mammogram-reading pathway. By comparison, participants in the AI-triage group were shown how AI could be deployed to prioritize mammograms for human double reading. Participants rated the acceptability of their respective AI use cases and indicated related concerns and expectations. They were asked about their familiarity with AI technologies, and information needs if invited to AI-assisted mammography (not reported in this study). Before exiting the survey, participants were given the option to complete items to monitor diversity and inclusion (Supplemental Table 1, available online at https://www.mcpdigitalhealth.org/). Full survey templates and study protocol outlining measures and coding are available on OSF.
Primary Outcomes
Acceptability was evaluated across 3 domains. Anticipated acceptance was measured by the item "How acceptable is this scenario to you?", with 5 response options (completely unacceptable to completely acceptable). Anticipated experience was measured with the question "Would you feel concerned about the fact AI was used?", with 4 response options (very concerned to not at all concerned). Anticipated attendance was assessed with the question "What impact would it have on you if the NHS started using AI in breast screening?", with 3 response options: I'd be more likely to take part; I'd be less likely to take part; and It wouldn't affect whether I took part.
Secondary Outcomes
Participants were asked to rate concerns and expectations about their respective AI use case. Concerns were identified from the literature,10,13 our focus group study,11 and the Theoretical Framework of Acceptability.18 Items used 4-point Likert scales (very concerned to not at all concerned). Participants' expectations of AI-assisted mammography, including effectiveness, fairness and equity, and resource allocation,15,18 were assessed with 4-point Likert scales (very likely to completely unlikely).
Baseline Sociodemographic and Psychosocial Measures
Participants reported their age, ethnicity, highest educational qualification, and digital literacy (Table 1). Knowledge about breast screening was assessed with 3 items asking participants to correctly identify the asymptomatic nature of a screening mammogram, the nondefinitive nature of mammogram results, and the risk of overtreatment.21 We also assessed awareness of current usual practice of double reading mammograms in the NHSBSP. A summed knowledge score of 0 to 4 was calculated.
Table 1.
Baseline Sociodemographic and Psychosocial Characteristics (N=3419)
| Characteristic | All (N=3419) | AI-second reader (n=1710 [50.1%]) | AI-triage (n=1709 [49.9%]) |
|---|---|---|---|
| Age (y) | |||
| 45-49 | 527 (15.4) | 279 (16.3) | 248 (14.5) |
| 50-54 | 800 (23.4) | 393 (23.0) | 407 (23.8) |
| 55-59 | 776 (22.7) | 371 (21.7) | 405 (23.7) |
| 60-64 | 673 (19.7) | 339 (19.8) | 334 (19.5) |
| 65-70 | 643 (18.8) | 328 (19.2) | 315 (18.4) |
| Ethnic background (missing = 15) | |||
| White | 2602 (76.4) | 1300 (76.0) | 1302 (76.2) |
| Mixed/multiple | 169 (5.0) | 78 (4.6) | 91 (5.3) |
| Asian or Asian British | 289 (8.5) | 155 (9.1) | 134 (7.8) |
| Black, Black British, Caribbean, or African | 315 (9.3) | 154 (9.0) | 161 (9.4) |
| Any other background not yet described | 29 (0.9) | 16 (0.9) | 13 (0.8) |
| Educational level (missing = 59) | |||
| No qualifications to GCSE level or equivalent | 1134 (33.8) | 562 (32.9) | 573 (35.5) |
| AS level to below degree or equivalent | 901 (26.8) | 461 (27.0) | 440 (25.7) |
| Degree and above or equivalent | 1325 (38.8) | 661 (38.7) | 664 (38.9) |
| Breast screening experience | |||
| Have never been invited | 553 (16.2) | 280 (16.4) | 273 (16.0) |
| Have been invited but have never attended | 328 (9.6) | 164 (9.6) | 164 (9.6) |
| Have attended before but have sometimes delayed or missed my screening appointment | 286 (8.4) | 133 (7.8) | 153 (9.0) |
| Have always attended when invited | 2244 (65.6) | 1129 (66.0) | 1115 (65.2) |
| Prefer not to say | 8 (0.2) | 4 (0.2) | 4 (0.2) |
| Breast screening knowledge | |||
| What is a screening mammogram? | |||
| A mammogram you have when you are apparently healthy (correct) | 790 (23.1) | 413 (24.2) | 377 (22.1) |
| A mammogram you have if you notice a change or lump in your breast | 302 (8.8) | 165 (9.6) | 137 (8.0) |
| Both the above | 2286 (66.9) | 1114 (65.1) | 1172 (68.6) |
| Not sure/do not know | 41 (1.2) | 18 (1.1) | 23 (1.3) |
| Do you think a screening mammogram will find every cancer? | |||
| Yes | 875 (25.6) | 438 (25.6) | 437 (25.6) |
| No | 1816 (53.1) | 908 (53.1) | 908 (53.1) |
| Not sure/do not know | 728 (21.3) | 374 (21.3) | 364 (21.3) |
| Do you think breast screening leads some women to get treatment for a cancer that would never have caused them harm if left untreated? | |||
| Yes | 1847 (54.0) | 924 (54.0) | 923 (54.0) |
| No | 754 (22.1) | 369 (21.6) | 385 (22.5) |
| Not sure/do not know | 818 (23.9) | 417 (24.4) | 401 (23.5) |
| Do you know how many people usually look at each breast x-ray in breast screening? | |||
| No | 2349 (68.7) | 1161 (67.9) | 1188 (69.5) |
| Not sure/do not know | 743 (21.7) | 374 (21.9) | 369 (21.6) |
| Yes (if yes, how many?) | 327 (9.6) | 175 (10.2) | 152 (8.9) |
| 1 | 45 (1.3) | 24 (1.4) | 21 (1.2) |
| At least 2 | 209 (6.1) | 110 (6.4) | 99 (5.8) |
| At least 3 | 58 (1.7) | 34 (2.0) | 24 (1.4) |
| 4 or more | 15 (0.4) | 7 (0.4) | 8 (0.5) |
| Self-rated knowledge of AI | |||
| Very little | 707 (20.7) | 358 (20.9) | 349 (20.4) |
| Some | 1681 (49.2) | 822 (48.1) | 859 (50.3) |
| Moderate to expert | 1031 (30.2) | 530 (31.0) | 501 (29.3) |
Values are n (%).
AI, artificial intelligence.
Attitudes toward breast screening included assessment of perceived benefits of screening,22 severity of breast cancer,23 anticipated coping with positive screening results,24 and overall enthusiasm for screening25 (Supplemental Table 2, available online at https://www.mcpdigitalhealth.org/). For analysis, responses were summed (range 4-16; Cronbach α=.72) with higher scores indicating more positive attitudes. Breast screening experience was measured (Table 1) and dichotomized into never or ever attended.
Participants were asked to self-rate their knowledge of AI and were shown 3 true/false statements about it (Supplemental Table 2; summed score range, 0-3). Attitudes toward AI were measured by rating agreement with statements about expectations of, and trust in, AI technology,26 and its social impact10 (Supplemental Table 2). Exploratory factor analysis identified 2 subscales reflecting positive or negative attitudes (Cronbach α=.84 and α=.72, respectively). Total subscale scores were calculated (higher scores reflected more positive and more negative attitudes, respectively). To assess familiarity with AI, participants were asked to indicate any items they recognized from a list of AI vendors and products (Supplemental Table 2).
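The internal-consistency coefficients reported above follow Cronbach's standard formula, α = k/(k−1) × (1 − Σ item variances / variance of summed scores). A minimal sketch on made-up Likert responses (the data below are illustrative only, not study data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item response lists (same length)."""
    k = len(items)

    def variance(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Summed scale score per respondent (columns of the item matrix)
    totals = [sum(vals) for vals in zip(*items)]
    item_var_sum = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Illustrative 4-item scale, 5 respondents, 1-4 Likert responses
items = [
    [4, 3, 2, 4, 1],
    [4, 3, 2, 3, 1],
    [3, 4, 2, 4, 2],
    [4, 2, 1, 4, 1],
]
alpha = cronbach_alpha(items)  # ≈0.94 for these made-up responses
```

In practice a package routine (e.g., from a psychometrics library) would be used; the point is only that α rises as items covary more strongly relative to their individual variances.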
Statistical Analysis
Analysis followed a preregistered statistical analysis plan available on OSF (https://osf.io/zayhb/overview). Based on a conservative estimated AI acceptance rate of 50% and a 95% confidence level, an a priori power analysis indicated that a target sample size of 3200 would allow us to estimate the percentage of women who find AI acceptable with a precision of ±1.77% (full sample) and ±2.50% (per AI use case group).
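This precision target can be checked against the standard normal-approximation half-width for a proportion, 1.96 × √(p(1−p)/n). A minimal sketch (the simple formula yields margins slightly narrower than those stated, consistent with an additional conservative adjustment in the original calculation):

```python
import math

def ci_half_width(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width, in percentage points, of a normal-approximation CI for a proportion."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(ci_half_width(3200), 2))  # full target sample: ≈1.73 percentage points
print(round(ci_half_width(1600), 2))  # per use-case group: ≈2.45 percentage points
```

The assumed p = 0.5 maximizes p(1−p), which is why a 50% acceptance rate is the conservative choice for sample-size planning.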
Descriptive statistics were used to summarize participant characteristics and AI acceptability. Differences in acceptability outcomes across the 2 AI use cases were evaluated with χ2 tests. Regression modeling investigated variations in acceptability by sociodemographic and psychosocial factors. As proportional odds could not be assumed, responses were dichotomized for bivariate logistic regression modeling, adjusted for use case. Factors showing bivariate associations with acceptability were included in stepwise models: the first included use case and sociodemographic factors; the second added psychosocial variables. Although stepwise selection has known limitations,27 we had an adequate sample size, checked multicollinearity using variance inflation factors, and applied an inclusive preliminary threshold (P<.10). Variations in concerns and expectations of AI-assisted mammography by use case were analyzed with χ2 tests.
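As an illustration of these χ2 tests, the anticipated-attendance counts reported in Table 2 reproduce the published statistic with a minimal pure-Python Pearson χ2 sketch (in practice a statistics package such as scipy would be used):

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Anticipated attendance by use case (Table 2):
# columns are less likely / no effect / more likely
observed = [
    [210, 1251, 249],  # AI-second reader (n=1710)
    [288, 1195, 226],  # AI-triage (n=1709)
]
print(round(chi_square(observed), 2))  # 14.61, matching the reported value
```

With 2 degrees of freedom, a statistic of 14.61 corresponds to P<.001.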
Results
Of 3918 respondents completing the survey, 3419 (87.3%) were included in the analysis (Figure 2). Most exclusions (395/499; 79.2%) were due to speeding. The sample was representative of the female population in England with regard to age and ethnicity (Table 1) but slightly skewed toward higher levels of education, especially among those with ethnic minority backgrounds.20 Participants were also representative in terms of length of UK residency, marital status, and disability status (Supplemental Table 1).
Most participants had high levels of digital literacy, with only 1.4% (48/3419) reporting being unable to complete any of the presented tasks (Supplemental Table 2). Almost half (1681; 49.2%) reported having some knowledge of AI (Table 1), and most identified the correct definition of AI (2427; 71.0%) and recognized it as a type of computer software (2220; 64.3%). Over half of participants would trust AI tools (1929; 56.4%), and 68.4% (2338) acknowledged these would not perform at 100%. However, approximately 40% demonstrated all-or-nothing beliefs about AI (Supplemental Table 2), where high expectations of reliability may be offset by zero tolerance for errors.26 Almost two-thirds (2109; 61.7%) believed AI would positively impact society, and half (1713; 50.1%) recognized more than 1 AI product or vendor, whereas 37.6% (1285) had heard of none.
Most participants who reported being invited (2530/2858; 88.5%) had previously attended breast screening, and most had positive screening intentions (3181/3419; 93.0%) (Supplemental Table 2). However, knowledge about breast screening varied, with 27.3% (934) scoring 0 and 43.9% (1502) scoring 1 (of a possible 4). Only 9.6% (327) of participants thought they knew how many people usually look at each screening mammogram, of whom 63.9% (209/327) selected at least 2 (Table 1).
Acceptability of Using AI in Breast Screening and Differences by Use Case
More than 70% of participants either found AI-assisted mammography somewhat acceptable (1388; 40.6%) or completely acceptable (653; 19.1%) or had no opinion (414; 12.1%) (Table 2). However, almost half would be very concerned (439; 12.8%) or somewhat concerned (1146; 33.5%) by its use. By contrast, most participants anticipated no negative impact on their screening behavior: 71.5% (2446) reported that AI would have no effect on their attendance, and 13.9% (475) that it would make attendance more likely.
Table 2.
Acceptability Outcomes by AI Use Case (N=3419)
| Outcome | All (N=3419) | AI-second reader (n=1710) | AI-triage (n=1709) | χ2; P |
|---|---|---|---|---|
| Anticipated acceptancea | ||||
| Completely unacceptable | 286 (8.4) | 109 (6.4) | 177 (10.4) | χ2(4, N=3419)=82.55; <.001 |
| Somewhat unacceptable | 678 (19.8) | 260 (15.2) | 418 (24.5) | |
| No opinion | 414 (12.1) | 230 (13.5) | 184 (10.8) | |
| Somewhat acceptable | 1388 (40.6) | 725 (42.4) | 663 (38.8) | |
| Completely acceptable | 653 (19.1) | 386 (22.6) | 267 (15.6) | |
| Anticipated experienceb | ||||
| Very concerned | 439 (12.8) | 179 (10.5) | 260 (15.2) | χ2(3, N=3197)=60.31; <.001 |
| Somewhat concerned | 1146 (33.5) | 502 (29.4) | 644 (37.7) | |
| Not very concerned | 1062 (31.1) | 590 (34.5) | 472 (27.6) | |
| Not at all concerned | 550 (16.1) | 320 (18.7) | 230 (13.5) | |
| Do not know/unsure | 222 (6.3) | 119 (7.0) | 103 (6.0) | |
| Anticipated attendancec | ||||
| Less likely | 498 (14.6) | 210 (12.3) | 288 (16.9) | χ2(2, N=3419)=14.61; <.001 |
| No effect | 2446 (71.5) | 1251 (73.2) | 1195 (69.9) | |
| More likely | 475 (13.9) | 249 (14.6) | 226 (13.2) | |
Values are n (%).
AI, artificial intelligence; NHS, National Health Service.
a Anticipated acceptance: "How acceptable is this scenario to you?"
b Anticipated experience: "Imagine you have recently been for breast screening and AI was used in the way shown in the picture. Would you feel concerned about the fact that AI was used?"
c Anticipated attendance: "What impact would it have on you if the NHS started using AI in breast screening (in the way shown in this picture)?"
Participants were significantly more likely to perceive AI-triage as somewhat or completely unacceptable than AI-second reader (P<.001). Furthermore, more participants found AI-triage somewhat or very concerning than AI-second reader (P<.001). More participants in the AI-triage group reported being less likely to take part in breast screening than those in the AI-second reader group (P<.001).
In fully adjusted analyses, multivariable regression showed lower acceptability of AI-triage on all 3 outcomes: adjusted odds ratio (aOR), 0.46 (95% CI, 0.39-0.55) for acceptance; aOR, 0.52 (95% CI, 0.45-0.61) for experience; and aOR, 0.61 (95% CI, 0.50-0.76) for attendance. Tabulated regression results are detailed in Supplemental Tables 3 to 5 (available online at https://www.mcpdigitalhealth.org/).
Acceptability of AI-Assisted Mammography by Sociodemographic and Psychosocial Characteristics
Older participants were significantly more likely to accept AI than those aged 45-49 years (Supplemental Table 3): aOR, 1.53 (95% CI, 1.10-2.12) for 60-64 years; and aOR, 1.48 (95% CI, 1.05-2.08) for 65-70 years. Age also independently predicted anticipated experience of AI-assisted mammography (Supplemental Table 4), with both older age groups (60-64 and 65-70 years) having higher odds of being unconcerned about AI than those aged 45 to 49 years (aOR, 2.33; 95% CI, 1.77-3.07, and aOR, 2.32; 95% CI, 1.75-3.08, respectively). Similarly, older age groups would be significantly more likely to attend screening irrespective of AI use, especially participants aged 65 to 70 years (aOR, 3.08; 95% CI, 1.94-4.89) (Supplemental Table 5).
Although ethnicity was not a predictor of anticipated acceptance of AI (Supplemental Table 3), women with Black ethnic backgrounds were more likely to be concerned about AI-assisted mammography (aOR, 0.64; 95% CI, 0.48-0.85) (Supplemental Table 4) and had lower likelihood of attendance (aOR, 0.42; 95% CI, 0.30-0.58) than those with White ethnic backgrounds (Supplemental Table 5).
Similarly, educational attainment did not predict anticipated acceptance (Supplemental Table 3), but those educated to degree level or higher were likely to be less concerned about AI (aOR, 1.23; 95% CI, 1.01-1.50) (Supplemental Table 4) and had higher intentions of attending such screening (aOR, 1.41; 95% CI, 1.08-1.84) (Supplemental Table 5) than those with lower levels of education.
Attitudes toward breast screening were associated with all 3 acceptability outcomes (Supplemental Tables 3 to 5). Participants with positive attitudes had higher anticipated acceptance (aOR, 1.07; 95% CI, 1.01-1.13), marginally higher odds of being unconcerned about AI (aOR, 1.06; 95% CI, 1.00-1.12), and greater likelihoods of attending such screening (aOR, 1.26; 95% CI, 1.18-1.35). Breast screening experience was only associated with anticipated attendance, with screening attenders more likely than never attenders to report that the use of AI would not impact attendance (aOR, 1.80; 95% CI, 1.37-2.36).
Self-rated AI knowledge was associated with anticipated experience only (Supplemental Tables 3 to 5). Compared with participants who rated their knowledge of AI as low or negligible, those with some or moderate-expert knowledge were likely to be less concerned about AI-assisted mammography: aOR, 1.30 (95% CI, 1.05-1.61) and aOR, 1.39 (95% CI 1.09-1.78), respectively. In contrast, attitudes to AI were associated with all 3 acceptability outcomes (Supplemental Tables 3 to 5), with positive attitudes predicting increased acceptance (aOR, 1.34; 95% CI, 1.30-1.39), lower concern (aOR, 1.42, 95% CI, 1.37-1.48), and higher odds of attending (aOR, 1.36; 95% CI, 1.30-1.43).
AI Concerns and Expectations by AI Use Case
Participants in both use case groups had concerns about AI-assisted mammography (Table 3), especially regarding its reliability. There were greater concerns about AI missing cancers among participants in the AI-triage group than those in the AI-second reader group (1208/1709; 70.7% vs 1014/1710; 59.3%; P<.001). Additionally, more participants were concerned by AI as a triage tool lacking transparency (1149/1709; 67.2% vs 951/1710; 55.6%; P<.001) and not being in women’s best interests (1010/1709; 59.1% vs 931/1710; 54.4%; P<.05).
Table 3.
Concerns and Expectations About Using AI in Breast Screening (N=3419)
| Concerns and expectations | AI-second reader (n=1710) | AI-triage (n=1709) |
|---|---|---|
| Concerns about AI-assisted mammography | ||
| There will be no way of knowing what AI is doing | ||
| Concerned | 951 (55.6)a | 1149 (67.2)a |
| Unconcerned | 759 (44.4)a | 560 (32.8)a |
| There will come a time when no humans are involved in the process | ||
| Concerned | 1420 (83.0) | 1399 (81.9) |
| Unconcerned | 290 (17.0) | 310 (18.1) |
| AI would increase risks to people’s data (either in how data are stored, used, shared, or owned or in issues with keeping it private and protected) | ||
| Concerned | 1037 (60.6) | 1035 (60.6) |
| Unconcerned | 673 (39.4) | 674 (39.4) |
| There will be unethical consequences | ||
| Concerned | 964 (56.4) | 951 (55.6) |
| Unconcerned | 746 (43.6) | 758 (44.4) |
| Patients would not have a say in when or how AI tools might be used | ||
| Concerned | 1204 (70.4) | 1209 (70.7) |
| Unconcerned | 506 (29.6) | 500 (29.3) |
| It will be more likely that cancers/signs of cancers are missed | ||
| Concerned | 1014 (59.3)a | 1208 (70.7)a |
| Unconcerned | 696 (40.7)a | 501 (29.3)a |
| AI developers will have motivations that go against my values | ||
| Concerned | 841 (49.2) | 834 (48.8) |
| Unconcerned | 869 (50.8) | 875 (51.2) |
| The decision to use AI in breast screening will not be based on the interests of women | ||
| Concerned | 931 (54.4)b | 1010 (59.1)b |
| Unconcerned | 779 (45.6)b | 699 (40.9)b |
| Expectations of AI-assisted mammography | ||
| Help relieve an overwhelmed health care system | ||
| Likely | 1317 (77.0) | 1318 (77.1) |
| Unlikely | 295 (17.3) | 288 (16.9) |
| Do not know/not sure | 98 (5.7) | 103 (6.0) |
| Free up radiologists' time | ||
| Likely | 1390 (81.3) | 1378 (80.6) |
| Unlikely | 235 (13.7) | 243 (14.2) |
| Do not know/not sure | 85 (5.0) | 88 (5.1) |
| Save money that would go to other uses in breast screening and health care | ||
| Likely | 1141 (66.7) | 1149 (67.2) |
| Unlikely | 412 (24.1) | 399 (23.3) |
| Do not know/not sure | 155 (9.1) | 161 (9.4) |
| Make breast screening safer | ||
| Likely | 792 (46.3)a | 685 (40.1)a |
| Unlikely | 529 (30.9)a | 658 (38.5)a |
| Do not know/not sure | 388 (22.7)a | 366 (21.4)a |
| Make breast screening better at finding cancers that need treatment | ||
| Likely | 1013 (59.2)b | 946 (55.4)b |
| Unlikely | 354 (20.7)b | 422 (24.7)b |
| Do not know/not sure | 343 (20.1)b | 341 (20.0)b |
| Improve quality and fairness of the breast screening program | ||
| Likely | 827 (48.4) | 791 (46.3) |
| Unlikely | 485 (28.4) | 500 (29.3) |
| Do not know/not sure | 397 (23.2) | 418 (24.5) |
| Reduce wait times for screening results | ||
| Likely | 1247 (72.9) | 1283 (75.1) |
| Unlikely | 304 (17.8) | 264 (15.4) |
| Do not know/not sure | 158 (9.2) | 162 (9.5) |
Concerned: very/somewhat concerned; unconcerned: not very/not at all concerned; likely: very/somewhat likely; unlikely: not very/completely unlikely.
a P<.001.
b P<.05.
Similarly, there were between-group differences in expectations related to the reliability of AI (Table 3). Specifically, fewer participants in the AI-triage group expected AI to make breast screening safer than those in the AI-second reader group (685/1709; 40.1% vs 792/1710; 46.3%; P<.001). They were also less likely to think AI would improve detection outcomes (946/1709; 55.4% vs 1013/1710; 59.2%; P<.05).
Discussion
This first nationwide survey of women’s attitudes toward AI in breast screening in England found generally high acceptability across the 2 use cases, with almost 60% finding the concept somewhat or completely acceptable. Although 71.5% anticipated no impact on their breast screening attendance, 14.6% stated that the use of AI would make them less likely to attend, potentially translating to 440,000 women in a screening program inviting over 3 million women annually.28 Acceptability was lower for those in younger age groups, from Black ethnic backgrounds, and with lower educational attainment. Such disparities may challenge the equitable implementation of AI in the NHSBSP. Our study is the first to randomize participants to 2 AI use cases in mammography. Consistent with previous research, AI was viewed more favorably when used as a second reader than as a standalone reader or triage tool,13, 14, 15, 16, 17 offering further evidence that acceptability in England varies by use case.
Almost half the participants had concerns about AI-read mammograms, with those in the AI-triage group reporting significantly greater concern about the risk of false negatives than the AI-second-reader group. Likewise, significantly fewer participants expected AI as a triage tool to improve detection of cancers needing treatment. Similar concerns and expectations were reported among women in Norway, where 25.4% cited the risk of false negatives and only 19.5% thought AI tools might improve breast cancer detection.15 These findings highlight the importance of informing the screening-eligible public about AI’s potential to enhance screening accuracy once performance data are available.2,4,5
Notwithstanding these concerns, over 80% of participants anticipated that AI would not negatively impact their screening attendance. Although not directly comparable, this proportion exceeds the 64.0% and 54.9% of women within BreastScreen Norway willing to participate with AI as a second reader or triage tool, respectively.15 However, approximately 15% of our sample indicated they may not attend breast screening under these conditions. It is possible this might have been lower had we included accuracy metrics showing AI would match or surpass human readers. Nevertheless, this finding warrants careful appraisal before implementation because failure to address concerns risks alienating women from the BSP and other AI-based health care services. As AI is trialed and implemented in the NHSBSP and other cancer screening programs, monitoring and evaluating uptake will be critical to ensure the risk-benefit profile of screening is maintained.
Although younger people may be more open to AI in general,29 multivariable analyses revealed that older age groups found AI-assisted mammography more acceptable. Such age-related differences align with previous studies showing greater acceptance of AI among women eligible for breast screening.9,30
Importantly, participants from Black ethnic backgrounds reported significantly higher AI-related concern and were more likely to be deterred from this type of screening than those from White ethnic backgrounds. Consistent with previous research,13, 14, 15 degree-educated participants reported lower anticipated concern about AI and were less likely to be put off screening than those with lower levels of education. Although our study was not designed to explore the interaction of education and ethnicity, intersectional effects are likely31,32 and adequately powered studies are needed to explore this in more detail. Alongside validating AI algorithms for diverse ethnic groups,33 AI communication strategies must meet the needs of individuals across all ethnic and educational backgrounds.
Negative attitudes toward breast screening predicted lower anticipated acceptance of AI, greater concern about its use, and a decreased attendance likelihood, suggesting concerns about the NHSBSP extend beyond AI itself. Breast screening knowledge did not independently predict acceptability, so it is unclear whether participants’ understanding of screening mammography contributed to AI-related concerns.10,16 Nonetheless, the preference for AI as a second reader suggests that participants value human involvement, even if unaware of its extent or inherent limitations. Furthermore, participants were significantly more concerned about the transparency of AI (not knowing what AI is doing) when used as a triage tool than as a second reader, indicating that the involvement of a trusted human can alleviate concerns.
Participants with positive attitudes toward AI were more likely to accept AI-assisted mammography, in line with previous studies.9,30 Similarly, those with high self-rated AI knowledge demonstrated greater acceptance and lower anticipated concern. Although our data do not prove AI knowledge drives acceptability, this aligns with previous research. In particular, a study in Scotland found that breast screening participants’ approval of AI-assisted mammography was positively associated with technological understanding.17
Unlike previous research,10,13, 14, 15, 16, 17 our sample was representative of the age and ethnicity of women in England, although women with higher education were overrepresented. However, the use of an online recruitment panel introduces several limitations, including self-selection bias, exclusion of demographic groups such as those with low digital literacy,34 and inability to determine response rates. It is likely that participants had more positive screening attitudes, as reflected by 78% of age-eligible participants regularly attending breast screening, which is higher than the 70% uptake reported by the NHS for 2022-2023.28
There are also several statistical limitations. Because the a priori power analysis covered only the primary acceptability outcomes, conclusions drawn from our exploratory analyses should be interpreted with caution. Additionally, we may have lost nuance by dichotomizing acceptability outcomes for multivariable analysis. Our findings indicate the need to further investigate the interplay between acceptability outcomes, sociodemographic characteristics such as ethnicity and education, and attitudes toward breast screening and AI technologies.
A significant limitation of this work is that we did not give participants information on the sensitivity and specificity of AI compared with human readers because trials are ongoing. Because AI performance metrics will very likely have a strong association with acceptability, future studies are needed to test the effect of such information, particularly on screening intentions.
Conclusion
This study shows that the acceptability of AI in breast screening varies by use case, with greater support for AI assisting rather than replacing human readers, although such attitudes may shift with the provision of accuracy metrics. Acceptability also varied by sociodemographic characteristics and by specific concerns and expectations about AI in this context. For the NHSBSP to implement AI equitably and safely, tailored communication strategies and public engagement initiatives may be required.
Potential Competing Interests
Dr Kehagia reports participation in the data safety board of AI-PROGNOSIS (ai-prognosis.eu). Dr Waller reports a leadership role in the UK National Screening Committee Research & Methodology Group. The other authors report no competing interests.
Acknowledgments
Drs Gatting and Kelley Jones contributed equally to this work.
Ethics Statement
The study received institutional approval from the Queen Mary Ethics of Research Committee (ref: QME23.0161), and all participants gave informed consent at the start of the survey.
Footnotes
Grant Support: This study was conducted as part of an independent evaluation of Kheiron’s Mia AI tool funded by an Artificial Intelligence in Health and Care Award, one of the NHS AI Lab programs, to King’s Technology Evaluation Centre (ref C52511). The competitive award scheme is run by the Department of Health and Social Care. The AI Award has made funding available to accelerate the testing and evaluation of artificial intelligence technologies that meet the aims set out in the NHS Long Term Plan.
Data Previously Presented: These findings were presented as a poster presentation at the 2025 International Cancer Screening Network conference, Aarhus, Denmark.
Supplemental material can be found online at https://www.mcpdigitalhealth.org/. Supplemental material attached to journal articles has not been edited, and the authors take responsibility for the accuracy of all data.
References
- 1.Marmot M., Altman D., Cameron D.A., Dewar J., Thompson S., Wilcox M. The benefits and harms of breast cancer screening: an independent review. Br J Cancer. 2013;108(11):2205–2240. doi: 10.1038/bjc.2013.177. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Dembrower K., Crippa A., Colón E., Eklund M., Strand F., ScreenTrustCAD Trial Consortium. Artificial intelligence for breast cancer detection in screening mammography in Sweden: a prospective, population-based, paired-reader, non-inferiority study. Lancet Digit Health. 2023;5(10):e703–e711. doi: 10.1016/S2589-7500(23)00153-X. [DOI] [PubMed] [Google Scholar]
- 3.Gulland A. Staff shortages are putting UK breast cancer screening “at risk,” survey finds. BMJ. 2016;353 doi: 10.1136/bmj.i2350. [DOI] [PubMed] [Google Scholar]
- 4.Eisemann N., Bunk S., Mukama T., et al. Nationwide real-world implementation of AI for cancer detection in population-based mammography screening. Nat Med. 2025;31(3):917–924. doi: 10.1038/s41591-024-03408-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.McKinney S.M., Sieniek M., Godbole V., et al. International evaluation of an AI system for breast cancer screening. Nature. 2020;577(7788):89–94. doi: 10.1038/s41586-019-1799-6. [DOI] [PubMed] [Google Scholar]
- 6.Wu C., Xu H., Bai D., Chen X., Gao J., Jiang X. Public perceptions on the application of artificial intelligence in healthcare: a qualitative meta-synthesis. BMJ Open. 2023;13(1) doi: 10.1136/bmjopen-2022-066322. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Young A.T., Amara D., Bhattacharya A., Wei M.L. Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review. Lancet Digit Health. 2021;3(9):e599–e611. doi: 10.1016/S2589-7500(21)00132-1. [DOI] [PubMed] [Google Scholar]
- 8.Hemphill S., Jackson K., Bradley S., Bhartia B. The implementation of artificial intelligence in radiology: a narrative review of patient perspectives. Future Healthc J. 2023;10(1):63–68. doi: 10.7861/fhj.2022-0097. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Carter S.M., Popic D., Marinovich M.L., Carolan L., Houssami N. Women’s views on using artificial intelligence in breast cancer screening: a review and qualitative study to guide breast screening services. Breast. 2024;77 doi: 10.1016/j.breast.2024.103783. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Lennox-Chhugani N., Chen Y., Pearson V., Trzcinski B., James J. Women’s attitudes to the use of AI image readers: a case study from a national breast screening programme. BMJ Health Care Inform. 2021;28(1) doi: 10.1136/bmjhci-2020-100293. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Gatting L., Ahmed S., Meccheri P., Newlands R., Kehagia A.A., Waller J. Acceptability of artificial intelligence in breast screening: focus groups with the screening-eligible population in England. BMJ Public Health. 2024;2(2) doi: 10.1136/bmjph-2024-000892. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Freeman K., Geppert J., Stinton C., et al. Use of artificial intelligence for mammographic image analysis in breast cancer screening: rapid review and evidence map. UK National Screening Committee Blog. 2022. [Google Scholar]
- 13.Ongena Y.P., Yakar D., Haan M., Kwee T.C. Artificial intelligence in screening mammography: a population survey of women’s preferences. J Am Coll Radiol. 2021;18(1):79–86. doi: 10.1016/j.jacr.2020.09.042. [DOI] [PubMed] [Google Scholar]
- 14.Jonmarker O., Strand F., Brandberg Y., Lindholm P. The future of breast cancer screening: what do participants in a breast cancer screening program think about automation using artificial intelligence? Acta Radiol Open. 2019;8(12) doi: 10.1177/2058460119880315. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Holen Å.S., Martiniussen M.A., Bergan M.B., et al. Women’s attitudes and perspectives on the use of artificial intelligence in the assessment of screening mammograms. Eur J Radiol. 2024;175 doi: 10.1016/j.ejrad.2024.111431. [DOI] [PubMed] [Google Scholar]
- 16.Pesapane F., Rotili A., Valconi E., et al. Women’s perceptions and attitudes to the use of AI in breast cancer screening: a survey in a cancer referral centre. Br J Radiol. 2023;96(1141) doi: 10.1259/bjr.20220569. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.de Vries C.F., Morrissey B.E., Duggan D., Staff R.T., Lip G. Screening participants’ attitudes to the introduction of artificial intelligence in breast screening. J Med Screen. 2021;28(3):221–222. doi: 10.1177/09691413211001405. [DOI] [PubMed] [Google Scholar]
- 18.Sekhon M., Cartwright M., Francis J.J. Acceptability of healthcare interventions: An overview of reviews and development of a theoretical framework. BMC Health Serv Res. 2017;17(1):88. doi: 10.1186/s12913-017-2031-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Elhakim M.T., Stougaard S.W., Graumann O., et al. AI-integrated screening to replace double reading of mammograms: a population-wide accuracy and feasibility study. Radiol Artif Intell. 2024;6(6) doi: 10.1148/ryai.230529. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Office for National Statistics. Census 2021. 2023. https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates
- 21.Hersch J., Barratt A., Jansen J., et al. Use of a decision aid including information on overdetection to support informed choice about breast cancer screening: a randomised controlled trial. Lancet. 2015;385(9978):1642–1652. doi: 10.1016/S0140-6736(15)60123-4. [DOI] [PubMed] [Google Scholar]
- 22.Robb K., Stubbings S., Ramirez A., et al. Public awareness of cancer in Britain: a population-based survey of adults. Br J Cancer. 2009;101(2):S18–S23. doi: 10.1038/sj.bjc.6605386. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Moss-Morris R., Weinman J., Petrie K., Horne R., Cameron L., Buick D. The Revised Illness Perception Questionnaire (IPQ-R). Psychol Health. 2002;17(1):1–16. doi: 10.1080/08870440290001494. [DOI] [Google Scholar]
- 24.Quaife S.L., Waller J., Dickson J.L., et al. Psychological targets for lung cancer screening uptake: a prospective longitudinal cohort study. J Thorac Oncol. 2021;16(12):2016–2028. doi: 10.1016/j.jtho.2021.07.025. [DOI] [PubMed] [Google Scholar]
- 25.Waller J., Osborne K., Wardle J. Enthusiasm for cancer screening in Great Britain: a general population survey. Br J Cancer. 2015;112(3):562–566. doi: 10.1038/bjc.2014.643. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Merritt S.M., Unnerstall J.L., Lee D., Huber K. Measuring individual differences in the perfect automation schema. Hum Factors. 2015;57(5):740–753. doi: 10.1177/0018720815581247. [DOI] [PubMed] [Google Scholar]
- 27.Heinze G., Wallisch C., Dunkler D. Variable selection—a review and recommendations for the practicing statistician. Biom J. 2018;60(3):431–449. doi: 10.1002/bimj.201700067. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Breast Screening Programme, England, 2022-23. NHS Digital. https://digital.nhs.uk/data-and-information/publications/statistical/breastscreening-programme/england---2022-23#
- 29.Chu C.H., Nyrup R., Leslie K., et al. Digital ageism: challenges and opportunities in artificial intelligence for older adults. Gerontologist. 2022;62(7):947–955. doi: 10.1093/geront/gnab167. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Pesapane F., Giambersio E., Capetti B., et al. Patients’ perceptions and attitudes to the use of artificial intelligence in breast cancer diagnosis: a narrative review. Life (Basel) 2024;14(4):454. doi: 10.3390/life14040454. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Baird J., Yogeswaran G., Oni G., Wilson E.E. What can be done to encourage women from Black, Asian and minority ethnic backgrounds to attend breast screening? A qualitative synthesis of barriers and facilitators. Public Health. 2021;190:152–159. doi: 10.1016/j.puhe.2020.10.013. [DOI] [PubMed] [Google Scholar]
- 32.Robb K., Wardle J., Stubbings S., et al. Ethnic disparities in knowledge of Cancer Screening Programmes in the UK. J Med Screen. 2010;17(3):125–131. doi: 10.1258/jms.2010.009112. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Obermeyer Z., Powers B., Vogeli C., Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447–453. doi: 10.1126/science.aax2342. [DOI] [PubMed] [Google Scholar]
- 34.Exploring the UK’s digital divide. Office for National Statistics. https://www.ons.gov.uk/releases/exploringtheuksdigitaldivide