Abstract
Introduction
Artificial intelligence (AI) systems for age‐related macular degeneration (AMD) diagnosis abound but are not yet widely implemented. AI implementation is complex, requiring the involvement of multiple, diverse stakeholders including technology developers, clinicians, patients, health networks, public hospitals, private providers and payers. There is a pressing need to investigate how AI might be adopted to improve patient outcomes. The purpose of this first study of its kind was to use the AI translation extended version of the non‐adoption, abandonment, scale‐up, spread and sustainability of healthcare technologies framework to explore stakeholder experiences, attitudes, enablers, barriers and possible futures of digital diagnosis using AI for AMD and eyecare in Australia.
Methods
Semi‐structured, online interviews were conducted with 37 stakeholders (12 clinicians, 10 healthcare leaders, 8 patients and 7 developers) from September 2022 to March 2023. The interviews were audio‐recorded, transcribed and analysed using directed and summative content analysis.
Results
Technological features influencing implementation were most frequently discussed, followed by the context or wider system, value proposition, adopters, organisations, the condition and finally embedding and adaptation. Patients preferred to focus on the condition, while healthcare leaders elaborated on organisation factors. Overall, stakeholders supported a portable, device‐independent clinical decision support tool that could be integrated with existing diagnostic equipment and patient management systems. Opportunities for AI to drive new models of healthcare, patient education and outreach, and the importance of maintaining equity across population groups were consistently emphasised.
Conclusions
This is the first investigation to report numerous, interacting perspectives on the adoption of digital diagnosis for AMD in Australia, incorporating an intentionally diverse stakeholder group and the patient voice. It provides a series of practical considerations for the implementation of AI and digital diagnosis into existing care for people with AMD.
Keywords: artificial intelligence; decision support systems, clinical; diagnostic imaging; diffusion of innovation; macular degeneration; stakeholder participation
Key points.
Tailoring solutions to the unique concerns of patients, clinicians, developers and leaders may lead to more effective and widescale adoption.
Participating stakeholders collectively foresee a fee for service model for remote detection and monitoring of neovascular age‐related macular degeneration in Australia.
Caution integrating artificial intelligence into diverse real‐world clinical settings is recommended due to concerns about accuracy, fairness and the importance of human oversight in clinical decision‐making.
INTRODUCTION
Age‐related macular degeneration (AMD) is a leading cause of vision loss among individuals older than 55 years. It has an estimated global prevalence of 8.7% 1 and is classified clinically into three stages: early, intermediate and late. 2 Late AMD is characterised by central vision loss and presents in two forms, known colloquially as either ‘dry’ (non‐neovascular) or ‘wet’ (neovascular) AMD. Conversion to neovascular AMD typically presents with symptoms of sudden central vision loss due to ingrowth of new vessels into the central retina, significant vision‐related disability, reduced quality of life, increased risk of falls and a high incidence of depression. 3 If identified and managed early, vision‐related disability can be minimised through repeated intravitreal injections of anti‐vascular endothelial growth factor.
Unfortunately, AMD is commonly misdiagnosed or its diagnosis delayed. 4 , 5 Systems for digital diagnosis using artificial intelligence (AI) and related diagnostic decision support abound, whether for the initial detection of AMD and its distinction from healthy ageing, or for the early detection of possible macular neovascularisation in AMD. These may leverage one or multiple modalities, typically colour fundus photography or optical coherence tomography. 6 , 7 A recent systematic review and meta‐analysis described the performance of such AI algorithms for the detection of AMD in fundus images as ‘almost comparable’ to that of retinal specialists. 8 Despite the many promising proof‐of‐concept examples, widespread implementation across Australia and other countries is not yet a reality.
Known reasons for this absence of widespread implementation include concerns surrounding cost, convenience, patient satisfaction, liability, trust and data privacy. 9 , 10 , 11 Although barriers for real‐world implementation of AI in eyecare have been described generally, the specific factors influencing the design, development, implementation, scale‐up, spread and sustainability of digital diagnosis for AMD (including possible futures) are not well understood. We conducted a qualitative interview study with key stakeholders from various backgrounds to describe the experiences, attitudes, enablers, barriers and possible futures of digital diagnosis for AMD and eyecare in Australia.
To structure the analysis and ensure broad coverage across different domains, we used the non‐adoption, abandonment, scale‐up, spread and sustainability of healthcare technologies (NASSS) framework 12 and additional AI translation subdomains by Gama et al. 13 The NASSS framework is a practical framework designed to predict the success of technology‐supported healthcare programmes. 12 It helps users identify and address challenges to adoption and their interactions by probing seven socio‐technical domains (22 sub‐domains) relevant to the disease, technology, adopters, wider healthcare and political or policy framework (Figure 1). The framework offers a conceptual understanding of the complexities of implementing healthcare technologies and draws on empirical research, knowledge and experience within the implementation science community, considering interactions of the technology beyond the immediate user at micro, meso and macro levels. Subsequent work by Gama et al., 13 finding that no single framework comprehensively addresses the unique challenges of AI translation into healthcare, identified seven additional subdomains relating to data dependency, human oversight, shared decision‐making and population ethics.
FIGURE 1.

Non‐adoption, abandonment, scale‐up, spread and sustainability of healthcare technologies framework, by Greenhalgh et al. 12 ©Trisha Greenhalgh, Joseph Wherton, Chrysanthi Papoutsi, Jennifer Lynch, Gemma Hughes, Christine A'Court, Susan Hinder, Nick Fahy, Rob Procter, Sara Shaw. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 01.11.2017. This is an open‐access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.
METHODS
Study design
A qualitative study based on data from a series of semi‐structured individual interviews with key stakeholders and patients was designed and conducted by researchers at the University of New South Wales (UNSW) Sydney School of Optometry and Vision Science and School of Business. The research protocol was approved by a UNSW Sydney Human Research Ethics Advisory Committee (HC210986; February 2022). Informed consent was obtained from all participants. Data were anonymised and personal identifiers removed. This study reports on the interview data and analysis using the consolidated criteria for reporting qualitative research (Appendix S1). 14
Sampling and recruitment
Purposive criterion sampling 15 and professional contact lists of the authors were used to identify key stakeholders from a range of professional backgrounds with intragroup diversity (patients with a range of disease severity, geographically distributed clinicians with different qualifications, developers including researchers or technologists, healthcare leaders including senior health managers and professional organisation representatives). All participants in the study were 18 years or older with an interest in or experience of macular disease. Potential participants were contacted by email. Demographic data (age, gender, job title) and informed consent were obtained using an online questionnaire (Qualtrics XM, qualtrics.com).
Data collection
All interviews were conducted in English online via Zoom Video Communications (Zoom Communications, zoom.com) by AL or SH. AL is a female postdoctoral senior lecturer and optometrist with prior qualitative research experience. SH is a female postdoctoral researcher and psychology graduate with experience in social robotics. A single standard interview guide was developed by the investigators through a literature review, with key questions identified for each stakeholder group (Appendix S2). Interview questions were designed to encourage open exploration of AI and digital diagnostic technologies for AMD and eyecare in Australia, ordered in five sections: (1) experiences, (2) attitudes, (3) enablers, (4) barriers and (5) possible futures. The interviewers were encouraged to re‐word, re‐order or clarify the questions to facilitate the discussions.
Participants were sent a copy of the interview guide in advance of the session and were asked to complete the interview once only. At the start of each interview, participants were advised that digital diagnosis was defined as ‘the process of identifying disease from its signs or symptoms using computer technology, including artificial intelligence’ (not excluding telehealth) and that the purpose of the interview was to hear their experience, views and attitude towards digital diagnosis, including AI, in eyecare, especially for AMD. Participation was voluntary and no financial incentives were offered for the completion of the study. Image, software or equipment prompts were not provided within the interviews. Data collection relied on participants' own verbal descriptions of their experiences with digital diagnosis.
Data analysis
All interviews were audio‐recorded, transcribed verbatim and analysed in NVivo (Lumivero, lumivero.com/products/nvivo/) using directed content analysis. 16 An a priori codebook including definitions, inclusion criteria and examples (Appendix S3) was used by SH to deductively code relevant comments into pre‐established categories—29 sub‐domains, comprising the 22 from the Greenhalgh et al. NASSS framework 12 and the seven AI translation sub‐domains proposed by Gama et al. 13 The NASSS framework was chosen for its ability to address gaps in technology implementation, importantly including adoption, non‐adoption, abandonment, scaling and sustainability. Developed from a systematic review and case studies, it synthesises insights from 28 frameworks, emphasising complex, dynamic, sociotechnical interactions and contextual adaptation over time. Descriptions of the data coded under each sub‐domain were developed by AL. The team met regularly throughout the analytic process to review the coding and emergent descriptions for reliability and consistency. Finally, to quantify the relative importance of factors influencing adoption, summative content analysis was performed, in which the frequency of comments coded under each domain was compared across stakeholder types.
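To illustrate the summative step, the sketch below shows one way the coding counts could be tallied and ranked once coded references are exported from NVivo. It is a minimal illustration only, not the authors' analysis code; the record structure and labels are hypothetical.

```python
from collections import Counter

# Hypothetical export of coded references: one record per coded comment,
# giving the NASSS domain it was coded under and the stakeholder group.
coded_refs = [
    ("2. Technology", "Clinician"),
    ("6. Wider system", "Developer"),
    ("1. Condition", "Patient"),
    # ... one tuple per coded reference
]

# Summative content analysis: frequency of coded comments per domain,
# overall and within each stakeholder group.
overall = Counter(domain for domain, _ in coded_refs)
by_group: dict[str, Counter] = {}
for domain, group in coded_refs:
    by_group.setdefault(group, Counter())[domain] += 1

# Rank domains from most to least frequently discussed (cf. Table 1).
for domain, n in overall.most_common():
    print(f"{domain}: {n} coded references")
```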
RESULTS
Thirty‐seven of 147 invited stakeholders participated in the study from September 2022 to March 2023, a response rate of 25%. Interviews ranged in length from 17 to 42 min (mean 30 min). Nineteen (51%) of the participants were female and the mean (SD) age of the participants was 50.1 (15.8) years. Of the 37 participants, there were 12 (32%) clinicians, 10 (27%) healthcare leaders including senior health managers and professional organisation representatives, 8 (22%) patients with AMD and 7 (19%) developers—researchers or technologists. Further demographic details, including professional titles, are provided in Appendix S4.
Stakeholder names were replaced with numbered codes, with C, L, P and D denoting clinicians, healthcare leaders, patients and developers, respectively; gender and age are also indicated. Table 1 summarises the domains in order of coding count frequency across stakeholder groups. Technological features were discussed most frequently, followed by the context or wider system, value proposition, adopters, organisations, the condition and finally embedding and adaptation. All stakeholder types commented most frequently on the technology and value proposition, except for patients, who tended to discuss the condition rather than the technology. Organisation factors were more frequently elaborated on by healthcare leaders. Figure 2 shows the coding of the interview responses to domains and sub‐domains of the NASSS framework.
TABLE 1.
List of domains in order of highest to lowest frequency by coding count across stakeholder groups.
| Combined total | Clinicians | Leaders | Patients | Developers |
|---|---|---|---|---|
| 2. Technology | 2. Technology | 2. Technology | 1. Condition | 2. Technology |
| 6. Wider system | 6. Wider system | 5. Organisation | 3. Value proposition | 6. Wider system |
| 3. Value proposition | 3. Value proposition | 6. Wider system | 4. Adopters | 3. Value proposition |
| 4. Adopters | 5. Organisation | 3. Value proposition | 2. Technology | 4. Adopters |
| 5. Organisation | 4. Adopters | 4. Adopters | 6. Wider system | 5. Organisation |
| 1. Condition | 1. Condition | 1. Condition | 5. Organisation | 1. Condition |
| 7. Embedding and adaptation | 7. Embedding and adaptation | 7. Embedding and adaptation | 7. Embedding and adaptation | 7. Embedding and adaptation |
FIGURE 2.

Illustration of the data coding process of implementation factors to the 29 sub‐domains and seven domains of the AI‐extended non‐adoption, abandonment, scale‐up, spread and sustainability of healthcare technologies framework. Numbers correspond to the number of references per sub‐domain, left to right: the combined total, followed by clinicians, healthcare leaders, patients and developers.
The condition
The purpose of this category was to describe clinical aspects of the condition that make it suitable, or otherwise, for AI diagnostic technology. AMD as a condition (especially the neovascular subtype) was viewed as amenable to AI diagnostic technology by most stakeholders. Reasons included diagnostic uncertainty, complex disease subtypes, a lack of effective monitoring options, the risk of second eye involvement and the limited capacity of existing change analysis.
The first ophthalmologist I saw wasn't able to diagnose it. So maybe if he had AI… that could have diagnosed it. [P‐08, F, 59]
The only thing a consumer has got is our Amsler grid. And it's reliant on the patient continuing to use it. If there was access to some AI technology that was a lot more simpler, then by all means, I'd be 100% behind that. [P‐04, F, 56]
Barriers cited include variable and unclear referral pathways, false positives, societal cost and the slow progressive nature of AMD, which may limit the value of early diagnosis.
It hasn't progressed that rapidly and I would have just had longer to worry about it. [P‐06, F, 67]
Opportunities include integrating the AI system with the referral process and providing a more holistic view of health and wellness.
AI can be involved in looking at your whole holistic profile, you know how old are you, have you got parents with disease, are you eating healthily, are you exercising appropriately, do you have diabetes? [L‐02, M, 54]
The technology
The purpose of this domain was to understand features of the technology perceived to influence usability and appropriateness. Desirable features differed distinctly by stakeholder group in this domain (Table 2). A key feature would be a portable, device‐agnostic clinician decision aid integrated into existing equipment and patient management systems.
Best case scenario for the clinicians would be having it built in. So, you do your optical coherence tomography scan and it just automatically comes up with a risk rating, for example, this patient has intermediate AMD, this is what you need to do … I'd like to see it just rolled out across the board. [D‐01, F, 39]
TABLE 2.
Material features of the technology perceived to influence usability and appropriateness, categorised by stakeholder group.
| Key technological features | Supporting statement/s |
|---|---|
| Clinicians | |
| Patient management aid | It's really exciting to know that technology can be used to assist clinicians with patient management… diagnosis or treatment. [C‐04, M, 71] |
| Make work easier by ‘handing over’ tasks presently done by people | Hand over tasks that could be done by AI that are now done by people. So, people could focus on tasks that could be done only by people, not by AI. [C‐11, F, 58] |
| Leaders | |
| Able to detect new, formerly unobserved patterns in big data | The AI may be able to detect new patterns, new observations that you know from the data… and all of those things could then help to inform how do you intervene and how do you support those people. [L‐02, M, 54] |
| Used to promote practitioner upskilling | A means of being able to position optometrists in a better position to be able to handle and diagnose tricky cases and work like co‐manage better as well. [L‐04, F, 31] |
| Management tool for large volumes of data arising during change analysis of follow up examinations | I do foresee that they'd be much more accepting of something like where it's involved in the monitoring of the disease because um by that point you've got at least two data sets that you've got and you've got to look for change so um over time so it becomes much more time consuming in the data so I think that's where that's probably the biggest application is to talk about AI. [L‐03, F, 46] |
| Patients | |
| Informative | What comes after the diagnosis? Do you refer them? Do you send them information? … you can get the information directly, virtually, as you walk out of an appointment. So and then it could be something that could then incorporate a referral process in there as well. So, you know, it's much more, instead of relying on an individual to give you that information, which is usually the optometrist and the optometrist generally doesn't have time to be able to do that. [P‐04, F, 56] |
| Developers | |
| Used to understand disease causality better, to develop targeted treatments | What we're trying to do is to use this algorithm across a large cohort, lots of different cohorts around the world … run them through a genome‐wide association study … If you can do that, you understand the disease and you can develop targeted treatments for that, that's the hope. [D‐02, M, 30] |
Abbreviation: AI, artificial intelligence.
The technology should be accurate (high specificity and sensitivity), consistent, valid, generalisable, clear and trustworthy, and should allow the practitioner to interrogate the system to support ongoing trust and upskilling in the use of the technology. A preferred output would provide a probability estimate of normal or disease and could be used to educate and inform patients without diminishing the role and importance of the clinician–patient connection.
Recommendations rather than providing an outright diagnosis and so ultimately you still rely on a practitioner to give you the final diagnosis and treatment. [C‐01, M, 35]
I think you'd want sort of a system that would … give us some sort of probability of choroidal neovascularisation being present. You know and then I guess a recommendation to refer to your local ophthalmologist, I think would be great. As I sort of touched on before, you know, things like dry AMD, you know, being able to identify geographic atrophy and maybe suggest this patient may meet the criteria for size of geographic atrophy to access a treatment. [C‐12, M, 42]
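For reference, the accuracy attributes stakeholders named have standard definitions (added here for clarity; they were not part of the interviews). With TP, FP, TN and FN denoting true positives, false positives, true negatives and false negatives:

\[
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP}, \qquad
\text{PPV} = \frac{TP}{TP + FP}.
\]

Because the positive predictive value (PPV) falls as disease prevalence falls, even a highly specific system will return some false positives in low‐prevalence screening settings, a point that links the probability‐style outputs preferred here to the false‐positive concerns raised under ‘The condition’.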
To maximise the benefits of AI technology, many stakeholders commented on the importance of understanding both the limitations of the technology and of upskilling practitioners to facilitate adoption.
Practitioners need to appreciate both the strength of an AI diagnosis as well as the shortcomings and where the tool might be at its weakest. [L‐05, M, 65]
Therefore, an opportunity may be to use the technology to facilitate practitioner education and self‐audit.
It's very common for ophthalmologists to self‐audit and look at their surgical outcomes and they accept fully that there's a learning curve … but optometrists don't potentially do that very often. [L‐03, F, 46]
Possible supply cost models included upfront hardware or subscription fees. These would be charged as a fee for service ($20–50), most likely through corporate providers in the first instance. Medicare (publicly funded universal health insurance) and private health insurance were deemed less feasible reimbursement pathways.
We could use retinal photography costs, you know, as a sort of an analogy … it has to be something that's borne privately by the patient… in the realm of $50 per test session. [C‐04, M, 71]
If you just get one or two corporate providers to roll out a particular approach … it rather quickly becomes the predominant way that people practice within the profession. So, uptake happens quite quickly. [L‐06, F, 43]
I can't see a possibility that we would get a Medicare item in optometry. [L‐06, F, 43]
Critical enablers include good digital processes to transmit images; secure, encrypted data storage solutions; and explicit data ownership and informed consent processes.
Good digital communications between primary care and optometry… would be a really critical enabler. [L‐06, F, 43]
It's not just the infrastructure that's there, it's also about development of the platform itself and using the tools that are available to plug some of these cyber security gaps. [C‐01, M, 35]
One of my biggest concerns about a service delivery model is actually the appropriate implementation of A, opt‐out models and B, information for patients, because technically they own their data, right? It's been routinely collected for the purposes of providing their care, but they should have the ability to understand what else that information is being used for and to be able to have a say in how that is used to give them health advice, right? [D‐03, F, 36]
The value proposition
The value proposition category was applied to explore, from different perspectives, whether the technology is worth developing. Opportunities for AI include driving new models of healthcare, such as telehealth and remote monitoring. These could improve both the quality and accessibility (appropriateness) of care, particularly in regional and rural communities.
If you're layering in‐person appointments with telehealth and asynchronous touch points, you start to be able to actually have a more comprehensive like model of delivering care … ability to unlock that different care model for chronic and complex disease patients. [D‐06, F, 29]
I think in a place like Australia where you may not necessarily have access to an ophthalmologist, particularly in remote and rural communities, the use of AI and also platforms to help share a lot of the information actually serves as a very useful tool. [C‐01, M, 35]
Clinicians viewed value as being able to see more patients at risk, greater confidence in diagnosis and better patient outcomes.
The role of AI in increasing clinicians' confidence in themselves – as a confirming diagnostic tool. [L‐06, F, 43]
Patients most likely to benefit were those at risk of fellow eye involvement.
They didn't know how to monitor their other eye … so their treatment can be given early is a huge one. [L‐09, F, 36]
The adopter system
Potential barriers for clinicians include mistrust, fear of jobs being lost to AI and diminishing professional judgement. Expressed barriers for patients include mistrust, concern for those with complex disease and comorbidities and the impact of incorrect diagnoses.
Optometrists are going to feel that their jobs are being taken… That's going to be the biggest hurdle. [L‐04, F, 31]
Practitioners might see it as an admission of not being able to manage these diagnoses yourself… I'm not sure how I would explain to a patient that the computer's told me that without taking away value of what I'm doing sitting there. [L‐01, M, 36]
I'd be quite happy to just, you know, sit down like I'm doing now and access the system and run through the test. But I would want backup so that if I didn't feel that was enough, there was a clinician I could contact to review it with me. [P‐05, F, 77]
Processes to support communication and shared decision‐making among clinicians, patients and families are likely to be key enablers to adoption.
I think the real key though is to understand how primary care and optometry would respect the new AI in diagnosis … Do you go with the AI's instinct or do you go with your own instinct? [L‐02, M, 54]
The organisation
Key organisations with the capacity to innovate in the AMD AI diagnostic space were characterised by stakeholders as early‐adopting, forward‐thinking, opportunistic and disruptive entrants with an interest in commercialisation and multidisciplinary leadership.
Crafted ultimately out of a commercialisation incentive. And that produces a series of behaviours for people who are involved in the development practice. [L‐05, M, 65]
The core of what they needed, the business … being deeply technical and having the domain knowledge. [D‐07, M, 41]
Organisational barriers involve time, workload and staff training.
Time is the issue. It's going to add time to session. You know, we're already ramming so much in. And again, with the costs of, you know, the rebates from Medicare and whatnot, if it means, you know, lengthening the consult time, then it's probably practically not going to happen. [C‐06, F, 45]
Keeping staff trained and updated on this equipment. As the equipment gets updated, then that would be a barrier you need to keep training. [L‐08, F, 53]
Misalignment in readiness for the technology between managers and clinicians is also a barrier.
They're constantly trying to push this out to, again, you know, provide more point of difference, more things to say, look what we have, fancy, fancy stuff. But I feel like it's coming way before the actual accuracy. So just a little bit of concerning there. [C‐05, M, 37]
The wider system
This domain considered the status of wider institutional support and contextual requirements, including support, or the lack thereof, from key professional bodies and lay stakeholders. Equity across population groups, socioeconomic status, culturally and linguistically diverse groups, level of health literacy and location was considered important in both the development and implementation of AI technology.
Would prefer to see it as sort of included … otherwise what's going to happen is you kind of end up with this two tier health system where, you know, people that have more disposable income get a different service. And I don't personally think that's fair. [D‐01, F, 39]
That raises a whole bunch of things to do with culturally and linguistically diverse people, priority populations that might have reduced health literacy, etc. And they don't always have that level of understanding about what happens with their data. [D‐03, F, 36]
Normative databases must be generalisable.
We know that the healthcare system is not evenly distributed and so a lot of the fundus images are coming from the developed countries … AI performance will be compromised in terms of the accuracy for example accuracy and feasibilities in other deprived areas. [D‐04, F, 31]
Embedding and adaptation over time
This category was used to capture the anticipated evolution of the technology over time, within which an opportunity arose to extend the disease area to include screening for other ocular and systemic diseases (i.e., becoming disease agnostic) across different devices.
In the beginning, it started off with just four … it's sort of progressed to now … vitreous haemorrhages, retinal detachments, optic nerve oedema. [C‐05, M, 37]
DISCUSSION
The findings from this study align with previous research in the field describing significant challenges to the real‐world clinical implementation of AI. 11 , 17 Previous work, including a systematic review of clinical trials, 18 has already emphasised the importance of computer‐based, automatic, integrated recommendations within the clinician workflow at the time and location of decision‐making, as well as the need for AI diagnosis of AMD especially in remote communities, where skilled image readers may not be available. 8 Moreover, the inherent ‘black box’ nature of AI algorithms has been widely acknowledged and supports existing efforts, including visualisation techniques, to promote the interpretability, acceptance and adoption of AI algorithms. 17 , 19 Overall, the alignment between this study and prior work underscores the importance of ensuring that AI is implemented primarily as a clinical decision support tool to enhance practitioner–patient communication and education.
The unique contribution of this study lies in its comprehensive stakeholder analysis, incorporating diverse perspectives to provide a nuanced understanding of the practical considerations for implementing AI in AMD care in the real‐world context of Australia. The findings highlight the feasibility of a fee‐for‐service model for remote detection and monitoring of neovascular AMD, offering a concrete vision for the future. Although the study was not designed to examine differences by stakeholder type, the interviews revealed that patients focused primarily on the condition and their personal experience, including the potential added value of digital diagnosis and their interactions with it. In contrast, clinicians and developers were the most similarly inclined, tending to focus on local unmet needs and on the evolving features of digital diagnosis that could further societal benefits in healthcare and research. Finally, leaders focused on articulating population‐, industry‐ or organisation‐wide aspects, enablers and challenges. These differences imply that, by addressing the specific needs and concerns of each group, more effective, equitable and ultimately successful AI solutions that enhance clinical practice and drive future research may be implemented.
Stakeholders in this study collectively recommended caution when applying AI to real‐world populations, particularly given the natural range of settings, equipment, photograph quality and patient presentations inherent in the community. At this critical moment before widespread implementation, they emphasised the importance of trying the algorithm for themselves and of maintaining oversight over patients' final management, which is reasonable given the limited ‘transportability’ and known variations in algorithm performance. 8 , 17 , 20 Equity across population groups, socioeconomic status, culturally and linguistically diverse groups, level of health literacy and location was considered important in both the development and implementation of AI technology. Comments on this theme favour approaches such as that of Ting et al., 21 who utilised large‐scale, well‐labelled, multi‐ethnic, population‐based training data sets in the development of the AI. Further discussion of the data infrastructure required to successfully deploy AI in AMD is available elsewhere. 7 , 20 Solutions for debiasing AI, such as those based on skin tone or fundus pigmentation, 22 are emerging and may also play a role in building consumer and clinician confidence in AI for AMD.
Interestingly, one point of concern expressed by the clinicians was ‘over‐riding’ the AI recommendation in the case of false positives. 8 False positives and false negatives are problematic because they lead to commission errors (where users act on incorrect advice, potentially leading to unnecessary and invasive testing) and omission errors (failure to detect disease), respectively. Automation bias and over‐reliance on computer‐aided decisions have been investigated using a clinical vignettes approach that assumes clinicians are the users. 23 , 24 Such studies are highly informative for understanding human–algorithm interactions, including the potential negative effects of relying on imperfect AI systems, but will ideally extend in the future to explore the broader challenge of understanding the impact of AI‐assisted clinical decisions in real‐world conditions, such as where patients, carers and practitioners actively share the decision‐making.
Strengths and limitations
A major strength of this study was the use of purposive criterion sampling to engage a group of stakeholders who were particularly knowledgeable and experienced with digital diagnosis. The number and role of the participants were explicitly selected to represent the diverse characteristics of the end‐user and stakeholder groups, allowing for comprehensive, broad exploration of the topic. A first limitation of this study is the absence of participating carers, regulators and policy makers, which limited the breadth of the results relating to overall user experience, accountability or the implementation interdependencies between policy and practice. 25 Regulatory agency approval of these systems is necessary before they may be introduced into clinical practice. This will directly impact the feasibility and timeline of real‐world adoption. 11 The study attempted to recruit a sample of participants across stakeholder groups, including patients with a range of disease severity and geographically distributed clinicians with different qualifications (Appendix S4); however, the participants who consented were mostly from metropolitan areas. This may have resulted in a more homogeneous group of stakeholders. All interviews were also conducted in English, and intra‐group variability in ethnicity, remoteness of residence/healthcare service provision, cultural and linguistic diversity and Aboriginal or Torres Strait Islander background were not specifically ascertained. The use of Zoom videoconferencing for data collection may have also inadvertently excluded participants (especially patients) with poor digital literacy or limited access to a device or reliable internet connection. Such participants may represent an important stakeholder group, especially given a known association between computer use and better visual functioning in AMD. 26
Data saturation was not pursued. Defining data saturation in qualitative research is challenging due to the lack of a universally accepted and contextually specific standard. 27 Attempting to apply data saturation to this study was further complicated by the different key interview questions per stakeholder group (Appendix S2), which may have also influenced the frequency of stakeholder comments (Figure 2) and breadth of discussion.
The study also focused on the implementation of AI firstly in Australia and secondly in eye care and AMD. Consequently, the results may have limited generalisability outside Australia and to other healthcare contexts and conditions. However, they are consistent with prior work, 9 , 28 , 29 , 30 suggesting that findings such as the importance of a feasible business model aligned with the healthcare system in which the AI is destined to be deployed, and the complex professional, ethical, legal and social implications of AI deployment, may be universal and not prone to selection bias.
This is the first study to report numerous, interacting perspectives on the adoption of digital diagnosis for AMD in Australia based on a broad stakeholder group. It introduces a feasible ‘possible future’ for AI‐supported AMD care, importantly informed both by the patient voice and an established model of AI adoption in healthcare.
AUTHOR CONTRIBUTIONS
Angelica Ly: Conceptualization (supporting); data curation (lead); formal analysis (lead); investigation (lead); methodology (lead); project administration (lead); writing – original draft (lead); writing – review and editing (lead). Sarita Herse: Data curation (supporting); formal analysis (supporting); investigation (supporting); project administration (supporting); validation (equal). Mary‐Anne Williams: Conceptualization (equal); funding acquisition (equal); resources (equal); supervision (equal); validation (equal); writing – review and editing (supporting). Fiona Stapleton: Conceptualization (equal); funding acquisition (equal); methodology (supporting); resources (equal); supervision (equal); validation (equal); writing – review and editing (supporting).
FUNDING INFORMATION
This work was carried out by the UNSW‐Roche Digital Diagnostics Project and funded by Roche Products, Australia. The funder had no role in study design, data collection, data analysis, data interpretation, writing of the report or the decision to submit the paper for publication.
CONFLICT OF INTEREST STATEMENT
The authors declare no commercial/competing relationships.
Supporting information
Appendix S1.
ACKNOWLEDGEMENTS
The authors thank the participants who generously provided their time and expertise. Outside the submitted work, AL is a consultant for Apellis Australia and reports financial support from Novartis Pharmaceuticals Australia and the Future Vision Foundation and speakers' honoraria from Optometry Australia and Optometry NSW/ACT; SH reports financial support from the UNSW‐Roche Digital Project; MAW is a member of the South Western Sydney Local Area Health District Innovation Reference Group; FS is part of the advisory board for Alcon Laboratories Inc., Mentholatum Australia, Novartis Pharmaceuticals Australia, Novartis Pharmaceuticals Corporation and Seqirus and in the past 3 years, UNSW Sydney has received research grant support from Alcon Laboratories, Azura Ophthalmics, Exonate, IOLYX, Menicon, Novartis, nthalmic, Roche and Rodenstock. Open access publishing facilitated by University of New South Wales, as part of the Wiley ‐ University of New South Wales agreement via the Council of Australian University Librarians.
Ly A, Herse S, Williams M‐A, Stapleton F. Artificial intelligence for age‐related macular degeneration diagnosis in Australia: a novel qualitative interview study. Ophthalmic Physiol Opt. 2025;45:1282–1292. 10.1111/opo.13542
REFERENCES
- 1. Wong WL, Su X, Li X, Cheung CM, Klein R, Cheng CY, et al. Global prevalence of age‐related macular degeneration and disease burden projection for 2020 and 2040: a systematic review and meta‐analysis. Lancet Glob Health. 2014;2:e106–e116.
- 2. Ferris FL 3rd, Wilkinson CP, Bird A, Chakravarthy U, Chew E, Csaky K, et al. Clinical classification of age‐related macular degeneration. Ophthalmology. 2013;120:844–851.
- 3. Guymer RH, Campbell TG. Age‐related macular degeneration. Lancet. 2023;401:1459–1472.
- 4. Ly A, Nivison‐Smith L, Zangerl B, Assaad N, Kalloniatis M. Advanced imaging for the diagnosis of age‐related macular degeneration: a case vignettes study. Clin Exp Optom. 2018;101:243–254.
- 5. Neely DC, Bray KJ, Huisingh CE, Clark ME, McGwin G Jr, Owsley C. Prevalence of undiagnosed age‐related macular degeneration in primary eye care. JAMA Ophthalmol. 2017;135:570–575.
- 6. Kumar H, Goh KL, Guymer RH, Wu Z. A clinical perspective on the expanding role of artificial intelligence in age‐related macular degeneration. Clin Exp Optom. 2022;105:674–679.
- 7. Ferrara D, Newton EM, Lee AY. Artificial intelligence‐based predictions in neovascular age‐related macular degeneration. Curr Opin Ophthalmol. 2021;32:389–396.
- 8. Dong L, Yang Q, Zhang RH, Wei WB. Artificial intelligence for the detection of age‐related macular degeneration in color fundus photographs: a systematic review and meta‐analysis. EClinicalMedicine. 2021;35:100875. 10.1016/j.eclinm.2021.100875
- 9. Tseng RMWW, Gunasekeran DV, Tan SSH, Rim TH, Lum E, Tan GSW, et al. Considerations for artificial intelligence real‐world implementation in ophthalmology: providers' and patients' perspectives. 2021;10:299–306.
- 10. Singh RP, Hom GL, Abramoff MD, Campbell JP, Chiang MF. Current challenges and barriers to real‐world artificial intelligence adoption for the healthcare system, provider and the patient. Transl Vis Sci Technol. 2020;9:45. 10.1167/tvst.9.2.45
- 11. Li J, Liu H, Ting DSJ, Jeon S, Chan RVP, Kim JE, et al. Digital technology, tele‐medicine and artificial intelligence in ophthalmology: a global perspective. Prog Retin Eye Res. 2021;82:100900. 10.1016/j.preteyeres.2020.100900
- 12. Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A'Court C, et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment and challenges to the scale‐up, spread and sustainability of health and care technologies. J Med Internet Res. 2017;19:e367. 10.2196/jmir.8775
- 13. Gama F, Tyskbo D, Nygren J, Barlow J, Reed J, Svedberg P. Implementation frameworks for artificial intelligence translation into health care practice: scoping review. J Med Internet Res. 2022;24:e32215. 10.2196/32215
- 14. Tong A, Sainsbury P, Craig J. Consolidated Criteria for Reporting Qualitative Research (COREQ): a 32‐item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19:349–357.
- 15. Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health. 2015;42:533–544.
- 16. Ayton D, Tsindos T, Berkovic D. Qualitative research – a practical guide for health and social care researchers and practitioners. Melbourne: Monash University; 2023.
- 17. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019;17:195. 10.1186/s12916-019-1426-2
- 18. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330:765. 10.1136/bmj.38398.500764.8F
- 19. Keel S, Wu J, Lee PY, Scheetz J, He M. Visualizing deep learning models for the detection of referable diabetic retinopathy and glaucoma. JAMA Ophthalmol. 2019;137:288–292.
- 20. Dow ER, Keenan TDL, Lad EM, Lee AY, Lee CS, Loewenstein A, et al. From data to deployment: the collaborative community on ophthalmic imaging roadmap for artificial intelligence in age‐related macular degeneration. Ophthalmology. 2022;129:e43–e59.
- 21. Ting DSW, Cheung CY, Lim G, Tan GSW, Quang ND, Gan A, et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA. 2017;318:2211–2223.
- 22. Burlina P, Joshi N, Paul W, Pacheco KD, Bressler NM. Addressing artificial intelligence bias in retinal diagnostics. Transl Vis Sci Technol. 2021;10:13. 10.1167/tvst.10.2.13
- 23. Jabbour S, Fouhey D, Shepard S, Valley TS, Kazerooni EA, Banovic N, et al. Measuring the impact of AI in the diagnosis of hospitalized patients: a randomized clinical vignette survey study. JAMA. 2023;330:2275–2284.
- 24. Carmichael J, Costanza E, Balaskas K, Keane PA, Blandford A. The influence of automated support on optometrists' interpretation of retinal OCT scans. Invest Ophthalmol Vis Sci. 2022;63:ARVO E‐Abstract 191.
- 25. Hogg HDJ, Al‐Zubaidy M, Talks J, Denniston AK, Kelly CJ, Malawana J, et al. Stakeholder perspectives of clinical artificial intelligence implementation: systematic review of qualitative evidence. J Med Internet Res. 2023;25:e39742. 10.2196/39742
- 26. Brody BL, Anne‐Catherine R‐L, Colin D, Lilit M. Computer use among patients with age‐related macular degeneration. Ophthalmic Epidemiol. 2012;19:190–195.
- 27. Younas A, Masih Y, Sundus A. Alternatives to ‘saturation’ for greater transparency in reporting of sample size decision‐making in qualitative research. Evid Based Nurs. 2025;28:77–80.
- 28. González‐Gonzalo C, Thee EF, Klaver CCW, Lee AY, Schlingemann RO, Tufail A, et al. Trustworthy AI: closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res. 2022;90:101034. 10.1016/j.preteyeres.2021.101034
- 29. Constantin A, Atkinson M, Bernabeu MO, Buckmaster F, Dhillon B, McTrusty A, et al. Optometrists' perspectives regarding artificial intelligence aids and contributing retinal images to a repository: web‐based interview study. JMIR Hum Factors. 2023;10:e40887. 10.2196/40887
- 30. Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, et al. Artificial intelligence in ophthalmology: the path to the real‐world clinic. Cell Rep Med. 2023;4:101095. 10.1016/j.xcrm.2023.101095