2025 Dec 17;6(1):71. doi: 10.1007/s43681-025-00921-3

Fostering an enabling environment for health AI innovation and scale: The need for tailored ethics training for innovators in low- and middle-income countries

Raffaele Joseph 1,2, Liya Wassie 1,2, Hailemichael Getachew 1,2, Doris Wangari 3, Evelyn Gitau 3, Claude Pirmez 4, Dominique Laforest 5, Richa Vashishtha 6, Ndeya M Samb 7, Zoleka Ngcete 8, Mosoka P Fallah 9, Rosa Tsegaye Aga 10,11, Michael Mihut 12, Andreas Alois Reis 13, Abraham Aseffa 1,2, Alemseged Abdissa 1,11
PMCID: PMC12711939  PMID: 41426945

Abstract

Artificial intelligence (AI) is a rapidly evolving technology with transformative potential across many fields, including health. Interest in AI is growing globally, including in low- and middle-income countries (LMICs), but few studies have assessed the knowledge, experiences, and ethical practices of innovators in this space. This study aimed to assess the knowledge, experiences, challenges, and ethical practices of researchers and innovators involved in AI-driven health research, to support the responsible integration of AI into health innovation. A cross-sectional, semi-quantitative online survey was conducted between February and May 2025 among health AI innovators in LMICs, primarily through the Grand Challenges network and its partners. Fifty respondents from 13 countries participated; the majority (92%) emphasized the importance of ethical principles, and 80% perceived AI-related ethical risks as greater than moderate. However, most respondents (74%) were unsure of their competence in identifying and addressing the risks associated with AI practices, and roughly the same proportion reported having received no formal training in AI ethics. Respondents identified key challenges such as weak governance, limited ethics capacity, and risks of bias and data misuse. There was strong demand for training on ethics, bias mitigation, explainability, data governance, and privacy. The findings reveal an encouraging level of awareness but underscore the need for formal training, clearer guidelines, and context-specific frameworks to ensure ethical AI development and deployment in health research.

Keywords: Artificial intelligence, Ethics, Health research, LMIC

Introduction

Artificial intelligence (AI) is an emerging and rapidly evolving discipline [1]. It is a transformative technological science at the forefront of innovation, driving advances in fields such as data analysis, robotics, healthcare, and autonomous systems [2]. Central to these advances are machine learning and deep learning models, which can also be applied to disease identification and pharmaceutical development [3]. Moreover, improved diagnostic accuracy, clinical decision-making support, and accelerated drug discovery are a few of the benefits through which AI can influence the healthcare and research environment [4].

As AI reshapes industries and challenges traditional paradigms, it raises important societal, ethical, and policy questions, and interest in the ethics of AI has consequently grown worldwide [5]. The World Health Organization (WHO) recently issued guidance on the ethical and governance challenges of large multi-modal models (LMMs) in health, highlighting risks such as bias, misinformation, and misuse [6]. The guidance calls for pre-deployment evaluation, ongoing monitoring, and international cooperation to ensure LMMs benefit health systems. The 2025 WHO guidance builds on the 2021 executive summary, which presents six core ethical principles for AI in health and urges investment in regulation and public trust, while warning against dependence on AI without human oversight and cautioning that digital divides could exacerbate inequity [7].

According to 2022 World Bank data, approximately 80% of the world's population lives in LMICs, regions that have historically been left out of past technological revolutions [8]. With rapidly growing economies and youthful workforces, these countries are well positioned to play a meaningful role in advancing the ethical and equitable use of AI during the fourth industrial revolution [9]. AI presents a timely opportunity to address health disparities and improve care through genomic analysis and disease prediction [10].

However, LMICs face significant obstacles to realizing AI's potential, including constrained digital infrastructure, shortages of skilled personnel, data fragmentation, and still-emerging governance frameworks [11]. These barriers are compounded by social inequalities, which may further hinder equitable AI adoption [12]. AI carries significant risks if used irresponsibly, especially concerning public trust, patient safety, and data integrity. Key challenges include ensuring transparency, explainability, and accountability in AI-driven decisions, as well as obtaining meaningful informed consent in research settings [13, 14]. While AI's complexity is not inherently problematic, it introduces ethical dilemmas, including risks to patient confidentiality, inadequate data anonymization, and algorithmic bias, the latter contributing to the exacerbation of health disparities through unrepresentative data or flawed models [15, 16]. Responsible AI deployment has become increasingly difficult in LMICs owing to scientific and regulatory challenges, as well as limited capacity to assess the safety and effectiveness of new technologies [17, 18].

Upholding universal ethical standards while developing context-sensitive governance frameworks is essential. At present, most ethical principles guiding the development and use of AI originate from high-income countries, which remain the primary developers of the technology. This imbalance can undermine fairness, trust, and equity, particularly when coupled with systemic bias and limited algorithmic explainability. To move beyond a dominant narrative that often overlooks local sociocultural values and priorities, co-creating AI tools with local stakeholders and investing in robust data infrastructure are critical steps toward bridging the gap between the Global North and LMICs [5, 19, 20].

Empirical work on AI ethics readiness is still in its infancy [21], particularly in LMICs, where AI holds immense promise; more research is therefore needed to ensure ethical compliance and effective implementation [22, 23]. This study aimed to assess the knowledge, experiences, challenges, and ethical practices of researchers and innovators involved in AI-driven health research, with a view to supporting the responsible and effective integration of AI into health innovation. Specifically, it evaluates their familiarity with AI ethics, identifies perceived key ethical challenges, and examines existing AI governance frameworks and perceptions.

Methods

Study setting

This cross-sectional, semi-quantitative study was conducted using a structured online survey among health AI innovators. The survey ran between February and May 2025, primarily through the Grand Challenges (GC) network (https://gcgh.grandchallenges.org/). GC partners (GC Brazil, GC Ethiopia, GC India, GC Senegal, GC South Africa, and the pan-African GC Africa) worked together with the Gates Foundation, GC Canada, the Patrick J. McGovern Foundation, and the Pasteur Network. Collectively, these diverse and critical stakeholders recognized the need for an equitable and responsible approach to the use of AI, particularly large language models (LLMs), in LMICs. In response, the network launched a joint call in 2023 that supported 57 health innovation projects as a second cohort of AI innovators. Grand Challenges Ethiopia (GCE), a member of the Global Grand Challenges Network hosted at the Armauer Hansen Research Institute (AHRI), conducted the survey in collaboration with the Pan-African Bioethics Initiative (PABIN), also based at AHRI, and the Special Programme for Research and Training in Tropical Diseases (TDR) at WHO. The study was performed in accordance with the principles of the Declaration of Helsinki and received ethical approval from the AHRI/ALERT Research Ethics Review Committee (protocol approval number PO. # 049-25). Potential respondents, all adults, were invited to provide electronic consent, included with the online questionnaire, prior to participation.

Data collection and analysis

Health-focused AI innovators across 13 LMICs were engaged through GC partners by GCE and PABIN, who distributed a survey tool to collect data on ethical considerations in AI. The survey consisted of both closed and open-ended questions covering domains such as AI applications in health, perceived ethical challenges, institutional support, and capacity-building needs. The questionnaire was developed based on a review of current literature and expert consultations. The survey was circulated through national GC partners and shared within the GC AI community WhatsApp group, whose members, drawn from both the first and second cohorts of grantees, comprised researchers, innovators, policymakers, and other relevant stakeholders. Additionally, outreach extended to non-GC innovators through broader network channels, including multiple digital media platforms, to ensure wider participation. Although it was not possible to determine the exact number of recipients or calculate a formal response rate, participation was voluntary, and submission of the questionnaire implied consent. The survey data were captured in a Research Electronic Data Capture (REDCap) database.

Given the small sample size, quantitative data were summarized descriptively, using frequencies, percentages, and measures of central tendency, to identify trends, recurrent themes, knowledge gaps, and training needs. Where appropriate, group comparisons were used to illustrate differences among participant categories but were not intended for statistical inference. For qualitative data, an inductive thematic analysis was undertaken, in which open-ended responses were grouped into larger themes. Coding and interpretation were developed iteratively through team discussions among the primary authors, allowing for collective reflection and refinement of emerging themes. Coded and thematized responses were first captured in Microsoft Excel and then exported to SPSS 27 for frequency counts and further statistical analyses. Infrequent themes that did not align with the dominant ones were grouped under an 'Other' category to reflect the breadth of participants' perspectives. This collaborative process helped ensure analytical transparency and strengthen the credibility of the findings. Additionally, qualitative responses were graded on a five-point Likert scale (1–5), where a score of 1 indicated low or poor, scores of 2–4 represented moderate, and a score of 5 indicated high or good.
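The descriptive workflow described above (collapsing five-point Likert scores into low/moderate/high bands and tabulating frequencies and percentages) can be sketched as follows. This is an illustrative example with hypothetical scores, not the study's actual analysis code (the study used Excel and SPSS 27):

```python
from collections import Counter

def band(score: int) -> str:
    """Collapse a five-point Likert score into the three bands used in
    the analysis: 1 = low/poor, 2-4 = moderate, 5 = high/good."""
    if score == 1:
        return "low"
    if score in (2, 3, 4):
        return "moderate"
    if score == 5:
        return "high"
    raise ValueError(f"Likert score out of range: {score}")

def summarize(scores):
    """Return frequency and percentage per band (descriptive only,
    no statistical inference)."""
    counts = Counter(band(s) for s in scores)
    n = len(scores)
    return {b: (c, round(100 * c / n, 1)) for b, c in counts.items()}

# Hypothetical responses for illustration (not the study's data)
example = [2, 3, 5, 1, 4, 4, 2, 5, 3, 3]
print(summarize(example))
```

The same banding rule applies to any of the self-rated items in the survey; only the raw score list changes.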

Results

Characteristics of respondents

A total of 50 respondents from 13 countries participated in the survey. Brazil contributed the highest number (n = 12), followed by Ethiopia (n = 7), with additional respondents from nine other African countries as well as India and Bhutan (Table 1). Notably, 82% of those who volunteered to participate were GC grantees (Table 1); the others received funding for their AI projects from other sources. One additional response was excluded because the respondent was from a high-income country.

Table 1.

Distribution of respondents by country and affiliation to grand challenges network, February–May 2025

Country  No (not a GC project grantee)  Yes (GC project grantee)  Total
Bhutan 0 1 1
Brazil 1 11 12
Cameroon 1 0 1
Ethiopia 3 4 7
Ghana 1 0 1
India 0 4 4
Ivory Coast 0 2 2
Kenya 0 5 5
Mozambique 1 0 1
Nigeria 0 1 1
Senegal 1 3 4
South Africa 1 5 6
Uganda 0 5 5
Total 9 41 50

The participants had diverse professional backgrounds, including health researchers (n = 21), professionals in computer science and development (n = 12), clinicians (n = 9), data scientists (n = 4), and others (n = 4) (Table 2). This diversity also mirrored the wide range of their current roles: researchers (n = 19), academics (n = 15), entrepreneurs (n = 10), engineers (n = 2), and other professionals (n = 4). Levels of AI experience varied: 16% (8/50) had less than one year of experience, 42% (21/50) had 1–3 years, 24% (12/50) had 4–6 years, and the remaining 18% (9/50) had more than 6 years (Table 2).

Table 2.

Characteristics of respondents, in terms of their professional backgrounds and levels of AI experience, February–May 2025

Professional background Number (N) %
Clinical practice 9 18%
Data science/AI development 4 8%
Computer science 12 24%
Health research 21 42%
Others * 4 8%
Years of AI experience
Less than a year 8 16%
1–3 years 21 42%
4 years and above 21 42%
Current profession
Academic/professor 15 30%
Developer/engineer 2 4%
Innovator/entrepreneur 10 20%
Researcher/scientist 19 38%
Other** 4 8%

Others*: Monitoring and evaluation (1), health and behavioral sciences and data sciences (1), disease ecology (1), and product management (1)

Other**: Entrepreneurial academic (1), researcher, innovator, entrepreneur and academic (1), coordinator (1), and all of the above-mentioned roles (1)

AI knowledge

Respondents were asked to self-assess their knowledge of four core themes in AI ethics. Overall, the majority reported at least some level of understanding across all topics (Fig. 1). Bias and fairness in AI models was the most widely recognized theme, with 74% (37/50) of respondents indicating some familiarity. However, among those who reported knowing a topic well, a greater proportion (38%, 19/50) demonstrated a deeper understanding of AI/ML algorithms and their applications in health research than of data privacy and security regulations (34%, 17/50). In contrast, only a small number of participants reported a complete lack of awareness, while a few others had only superficial familiarity with the ethical and technical themes explored in this survey: two were unaware of AI/ML applications in health research, four of bias and fairness, five of explainability and transparency, and three of data privacy and security regulations (Fig. 1).

Fig. 1.


Self-reported knowledge on AI concepts and core ethical issues among participants

When asked whether their knowledge was sufficient for their work, 26 (52%) respondents believed it was adequate, 11 (22%) were unsure, and 13 (26%) felt their knowledge was insufficient.

About a quarter of participants, 27% (13/49), had received formal training on the ethical principles of AI (Table 3). Respondents' self-assessed confidence in their ability to recognize and address ethical risks revealed varying degrees of preparedness. Of the 50 respondents, the majority (74%, 37/50) were not fully confident in their competence to identify and address ethical risks associated with AI practices: 62% (31/50) reported a moderate level of confidence and 12% (6/50) were not confident at all, while a smaller yet notable proportion, 26% (13/50), reported high confidence (Table 3).

Table 3.

Relationship between formal training, perceived knowledge sufficiency, and confidence levels in recognizing and addressing ethical risks, February–May 2025

Formal training received  Knowledge sufficiency*  Low confidence  Moderate confidence  High confidence  Total
No (n = 36) Not sufficient 4 4 3 11
Uncertain 0 7 1 8
Sufficient 1 8 8 17
Yes (n = 13) Not sufficient 0 2 0 2
Uncertain 0 2 0 2
Sufficient 1 7 1 9
Non response (n = 1) Not sufficient 0 0 0 0
Uncertain 0 1 0 1
Sufficient 0 0 0 0
Total 6 31 13 50

*Refers to self-rated perception of the adequacy of knowledge and understanding of ethical issues related to AI use in health research; values represent the number of respondents (n) in each category
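As an illustrative cross-check, the marginal percentages cited in the surrounding text can be recomputed from the counts in Table 3. This is a minimal sketch, not part of the study's analysis pipeline; the counts below are transcribed from the table above (summed across the knowledge-sufficiency rows):

```python
# Counts transcribed from Table 3, summed over knowledge-sufficiency rows.
# Keys: confidence level; rows: formal training status.
table3 = {
    "trained":   {"low": 1, "moderate": 11, "high": 1},   # n = 13
    "untrained": {"low": 5, "moderate": 19, "high": 12},  # n = 36
}

def pct_moderate_or_high(row: dict) -> int:
    """Percentage of a group reporting moderate or high confidence."""
    n = sum(row.values())
    return round(100 * (row["moderate"] + row["high"]) / n)

print(pct_moderate_or_high(table3["trained"]))    # 12/13 -> 92
print(pct_moderate_or_high(table3["untrained"]))  # 31/36 -> 86
```

These figures match the 92% and 86% reported in the text for the trained and untrained groups, respectively.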

We also observed some inconsistencies between respondents' familiarity with AI ethics and their training background. While many reported moderate assurance in their understanding of ethical issues, this was primarily attributed to self-directed learning rather than structured formal training (Table 3), highlighting a potential mismatch between actual knowledge and formal educational exposure. Although the sample was too small to support a strong comparison, the proportion of participants lacking confidence was broadly similar among trained respondents (8%, 1/13) and untrained ones (14%, 5/36). In terms of perceived knowledge adequacy, 69% (9/13) of trained participants reported sufficient understanding, compared to 47% (17/36) of those who were not trained. Nevertheless, both the trained (n = 13) and untrained (n = 36) groups showed moderate to high confidence in identifying ethical risks: 92% (12/13) and 86% (31/36), respectively (Table 3).

Based on the survey, the majority of the respondents (92%, 46/50) acknowledged the importance of ethical principles in AI, with a strong emphasis placed on universal principles. Justice and equity emerged as the most frequently mentioned principles, closely followed by autonomy and beneficence.

AI governance policy awareness

Respondents' awareness of governance policies related to AI projects and activities was assessed. The results revealed a range of awareness levels across countries: 40% (20/50) of respondents indicated that a policy exists, 54% (27/50) reported that no policy exists, and 6% (3/50) were unsure (Fig. 2). Awareness varied not only between countries but also, notably, within them. In countries with only one respondent (such as Mozambique and Bhutan), the data were too sparse to draw meaningful conclusions. In countries with multiple respondents, however, inconsistency was evident: Brazil had the highest number of respondents affirming the presence of a governance framework, yet several others from the same country reported that no such framework exists.

Fig. 2.


Knowledge of AI governance policies by country

Collaboration with ethics experts

A notable proportion of respondents, 64% (32/50), reported seeking advice from ethicists or related professionals at some stage of their AI projects. However, only 40% (20/50) indicated that they engaged with communities as part of their ethical practices. Reasons for limited collaboration with ethicists, social scientists, or legal experts included lack of access to ethics experts (7 respondents), perceptions that such engagement was unnecessary (4 respondents), and lack of awareness of the role or importance of these experts (4 respondents). One respondent indicated that no such experts were available to them, and another stated that they were already aware of ethics in AI research. Additionally, one respondent noted that they sought ethical guidance only after the validation phase, reflecting a reactive rather than proactive approach to addressing ethical considerations.

To understand the concerns associated with AI development and deployment, respondents were asked to identify potential risks and rate their severity on a scale from 1 (low risk) to 5 (high risk) (Table 4). Participants emphasized the risks of biased or incomplete training data, lack of representativeness for local contexts, and the potential for AI tools to reinforce existing inequalities related to gender, race, or social status. They also highlighted the lack of transparency and explainability in how AI makes decisions, which could undermine trust and fairness. When coded and thematized, most survey responses related to bias, discrimination, and fairness, making this the most frequently mentioned concern (22 responses). Broader ethical and governance concerns surfaced in 20 responses: participants emphasized accountability for AI-driven decisions, the protection of individual autonomy, disparities in access linked to digital literacy, and challenges in integrating AI tools within local cultural and policy contexts.

Table 4.

Summary of thematic analysis: anticipated potential risks in development and deployment of AI tools

Theme Frequency (responses)
Bias, discrimination, and fairness 22
Ethical and governance concerns 20
Data privacy and security 19
Patient and clinical safety risks 18
Trust, acceptance, and human oversight 11
Sustainability and implementation challenges 10
Others* 8

*Social risks to society, self-medication, unintended use, not representing traditional medicine correctly, job-loss fears, misconceptions about AI in society, and the running cost of using AI

Clinical and patient safety risks emerged as another dominant theme, with many highlighting the potential harm from AI-generated misinformation, misdiagnoses, or a false sense of security that could lead to delayed or inappropriate treatment (Table 4). A significant number of respondents also raised data privacy and security as an urgent ethical issue. They pointed out that current safeguards such as anonymization may be insufficient, raising risks of re-identification, data breaches, and unauthorized sharing of sensitive information with third parties. Malicious attacks, such as model inversion or extraction, were also noted as growing threats.

Concerns about data privacy and confidentiality, encompassing data breaches, autonomy, and the handling of personally identifiable information, were raised 19 times (Table 4). Concerns around trust, acceptance, and human oversight were also voiced, including fears that clinicians and health authorities might hesitate to rely on AI outputs without sufficient evidence of accuracy and interpretability (Table 4). Respondents noted that slower adoption among older generations and the risk of overdependence on AI tools without adequate human checks could further complicate responsible use. Sustainability and implementation challenges were cited as critical practical barriers, including limited infrastructure, equipment downtime, insufficient technical capacity, bureaucratic approval processes, and the difficulty of ensuring the long-term sustainability of AI initiatives. Respondents also raised distinctive risks such as function creep, profiling, and improper medical use without professional oversight.

When respondents were asked about their familiarity with representational bias, used here to describe the unequal representation of populations, contexts, or datasets in the design and application of AI systems, a significant majority, 71% (35/49), answered "Yes", 29% (14/49) answered "No", and one did not respond.

Challenges and solutions

Beyond anticipated risks, respondents were also asked to identify challenges in applying AI in health research. When their responses were coded, thematized, and analyzed, several key themes were identified. Bias, fairness, and equity was the most common theme, highlighting widespread concerns about representational fairness and the risks of discrimination (Table 5). Issues related to data privacy, confidentiality, and ownership formed the second recurring theme, alluding to the potential misuse or unauthorized sharing of sensitive information. Accountability, governance, and regulation were raised in relation to the need for clear frameworks, effective oversight, and stronger AI literacy. Similarly, issues of infrastructure, capacity, and local readiness were raised in relation to the structural foundations needed for the responsible deployment of AI (Table 5).

Table 5.

Ethical concerns identified by respondents and frequency of responses by theme

Theme Frequency of issues raised
Bias, fairness, equity 20
Data privacy, confidentiality, ownership 18
Accountability, governance, regulation 11
Transparency, explainability, literacy 8
Evidence-based application, responsible use 7
Harm mitigation, safety, beneficence 6
Infrastructure, capacity, local readiness 4

In response to a follow-up question about which tools or resources would best help address ethical challenges, respondents identified several needs. Governance and access to reference materials, including policies, guidelines, frameworks, and legal instruments, were the most frequently cited, followed by capacity building through training, education, and public awareness (Table 6). In addition, bias detection and mitigation tools and practical aids such as templates, checklists, and roadmaps were highlighted as important resources. Respondents also pointed to the value of expert guidance, infrastructure for fairness (such as datasets and funding), and transparency tools supporting explainability (Table 6).

Table 6.

Thematized frequency distribution of tools and resources identified by respondents to address ethical challenges in AI health research

Theme Frequency of issue raised
Governance / reference materials (policies, guidelines, frameworks, legal) 18
Capacity building / literacy (training, education, public awareness) 14
Bias mitigation (detection, fairness tools) 7
Practical tools (templates, checklists, roadmaps) 7
Expert guidance (consultations, meetings, SME input) 5
Infrastructure for fairness (datasets, funding, scaling) 3
Transparency (explainability tools) 3
No specific tool mentioned* 3

*Denotes responses in which participants referred generally to AI tools or methods without specifying a particular example

Discussion

This study explored the main ethical challenges of AI in health research by drawing on the perspectives of researchers and innovators from thirteen countries, including ten in Africa. It provides a multi-country snapshot of the intersection of AI, health, and ethics. The findings highlight the need for a structured approach to AI ethics in terms of legal frameworks, training needs, and multi-stakeholder collaboration. Respondents demonstrated a moderate level of familiarity with core AI ethics topics, including bias, fairness, and explainability, as well as related regulatory topics such as data privacy laws. However, they appeared to lack the depth of reliable knowledge necessary to support informed and effective practice.

While many AI practitioners demonstrated a conceptual understanding of AI ethics, they often lacked actionable knowledge and confidence to implement ethical principles in practice. This finding aligns with an empirical study involving 100 AI practitioners, which revealed that only a few participants expressed confidence in applying AI ethics in their work [24]. Nonetheless, certain good practices were found to be either partially adopted or at least understood. These included adherence to organizational ethics policies, recognition and mitigation of bias, reference to formal ethical guidelines, advocacy for high-quality training data, critical reflection on both technical and human limitations, and appreciation for structured education in ethics [25].

The survey revealed a gap between awareness of AI ethics and formal training; while most respondents recognized ethical issues, only about a quarter felt confident addressing them. This aligns with prior research, which emphasizes the critical need to integrate ethics education into fields like AI, particularly in LMICs, to equip professionals with the skills to navigate complex ethical challenges [26, 27]. The literature further highlights the importance of sustained support, innovative strategies, and embedding ethics training within scientific programs to strengthen research ethics capacity and promote equitable collaboration in global health [28].

Moreover, inconsistencies emerged between respondents' familiarity with AI ethics and their training background. While many reported moderate confidence in understanding ethical issues, this was largely based on self-directed learning rather than structured formal training, suggesting a mismatch between perceived knowledge and formal preparation. While 75% of respondents endorsed core AI ethics principles, only 30% felt confident identifying and addressing them in their projects, a gap attributed to limited training, complexity, and unclear implementation steps. This echoes previous studies showing that ethical principles, although widely accepted, remain difficult to apply in practice, particularly in resource-constrained environments [29, 30]. Ethics was often introduced late in project cycles rather than embedded from the outset, undermining its preventive role and increasing risk; such reactive ethics misses early opportunities for harm mitigation [31–33].

A striking intra-country inconsistency was observed with respect to national AI policies and frameworks, indicating that the mere existence of a policy does not guarantee its dissemination or practical implementation. These findings align with earlier research on challenges related to policy visibility, enforcement, and the translation of high-level principles into operational realities in LMIC contexts. The principal challenge lies not only in devising AI governance frameworks but also in making them visible, accessible, and understood by frontline personnel, as highlighted in a case study on health data governance in Zanzibar [34], where stakeholders 'lacked awareness' of policies and faced significant gaps between policy formation and actual practice. Although analyses of the existence of national AI policy frameworks could be relevant and informative, an individual-level analytical focus helps explore differences in awareness, accessibility, and implementation of these existing policies among surveyed respondents. The observed individual-level differences in awareness underscore the need for targeted training and capacity-building initiatives to bridge gaps in AI ethics and to support effective implementation of national frameworks and ethical AI practices in health.

Knowledge availability and awareness also remain uneven, underscoring that the presence of a policy and its execution are not inherently intertwined; this highlights the importance of targeted capacity-building and training initiatives to bridge the gap [35–37]. The absence or low visibility of regulatory frameworks for AI ethics, along with limited awareness among practitioners, was specifically highlighted by some respondents. This is consistent with findings reported by Nganyewou Tidjon and Khomh, who further emphasize that such implementation gaps undermine ethical governance and stress the urgent need for AI regulatory models that are not only ethically grounded and legally robust but also culturally responsive, context-specific, accessible, and operationalizable [38]. Scholars and policymakers widely caution against stand-alone universalist approaches, stressing that effective oversight demands tailoring to the socio-political realities of different contexts. One such effort is reflected in the EU AI Act, which exemplifies risk-sensitive regulation while promoting fairness, safety, and rights protection [39].

There is a growing consensus on the core ethical considerations for AI development and deployment worldwide that can inform training and good practice [5, 37]. The difference between professional and community engagement in practice (64% and 40%, respectively) suggests a lack of participatory ethics, a model increasingly endorsed in African and global discourses [40]. Encouragingly, nearly 70% of respondents indicated some form of community involvement, suggesting a growing but uneven adoption of participatory practices.

Our data highlight persistent barriers to effective community engagement in health AI projects, including lack of awareness, limited access to experts, and the undervaluing of community perspectives. These gaps point to an urgent need for clearer guidance on how to operationalize community engagement and integrate participatory approaches in practice [41–43]. Respondents also emphasized the importance of greater access to locally relevant health data, bias detection tools, explainability mechanisms, and national guidelines that reflect local contexts. While online training courses on AI and ethics for health exist, such as the WHO Academy course [44], effectively addressing these challenges will require targeted capacity building and practical training in AI ethics, as well as efforts to promote equitable collaboration. Encouraging examples, such as Adetiba's hybrid training bootcamp, which significantly improved AI competencies, show the positive impact of focused, hands-on training initiatives [28, 36, 45].

Although this study lacks an adequate sample size to generalize to wider populations and does not capture the gender dimension of AI and AI ethics, its findings nonetheless indicate a complex and uneven landscape of AI ethics readiness among researchers and institutions. While knowledge about AI and its ethical dimensions is present, it is often acquired passively and remains fragmented in the absence of structured, actionable training. This lack of deep understanding and confidence, compounded by low awareness of existing policies, signals major gaps in institutional and national governance strategies. The predominance of self-directed learning, though commendable, highlights the urgent need for formalized, context-sensitive ethics education. The ethical challenges posed by AI, particularly data governance, algorithmic fairness, and cultural translation, cannot be addressed through voluntary measures alone; the high-stakes nature of AI's impact demands enforceable standards that go beyond "soft law" principles. Importantly, ethical AI development is not merely a technical pursuit but a social imperative, deeply entangled with issues of justice, cultural sensitivity, and local relevance. Moving forward, frameworks and tools must reflect this complexity: they must be adaptable, inclusive, and grounded in real-world application.

Along the same lines, another challenge highlighted in the survey relates to data privacy and ownership. Beyond traditional data governance models, new frameworks such as the global patient co-owned cloud (GPOC) have recently emerged, reflecting growing recognition of patients’ ownership and co-ownership of their health data as a foundational human rights issue [46]. In addition, the GPOC Necessity Survey, which engaged national health ministries and major global health organizations, further underscores the feasibility of, and global momentum for, shared, transparent, and ethically grounded health data stewardship [47]. Incorporating such co-ownership frameworks into AI governance structures could strengthen accountability, patient trust, and equity in future health data systems and digital health research. Our findings also resonate with the WHO’s guidance on the ethics and governance of AI for health [7], particularly its emphasis on accountability, inclusiveness, and data stewardship.

In conclusion, we urge institutions in LMICs to move beyond passive knowledge acquisition by providing structured, context-specific training that enhances both competence and confidence in AI ethics. National and institutional policies should be clearly communicated and made accessible through targeted, user-friendly platforms. Institutionalized, formal ethics education also ensures consistent capacity-building by reducing reliance on self-directed learning. Moreover, enforceable standards are needed for high-risk areas such as data governance and algorithmic fairness, while remaining adaptable to emerging technologies. Ethical frameworks must be modular, culturally responsive, locally grounded, and cognizant of the diverse contexts in which AI is applied. To complement this work, similar surveys targeting institutional review boards (IRBs) are timely; conducted through participatory engagement of experts in ethics and digital health, including AI, they could inform the development of frameworks, policies, and training programs and provide valuable insights into IRBs' challenges and capacity needs in the age of AI. The findings of this study should be interpreted with caution, as the number of respondents was relatively small and predominantly from Africa, limiting the generalizability of the results beyond this regional context. Nevertheless, the insights offer valuable perspectives on region-specific challenges and opportunities for responsible AI deployment in health research, which may also be relevant to other LMIC settings. Overall, fostering interoperable, patient-centered, and ethically guided frameworks will be crucial to advancing responsible innovation and equitable digital health outcomes globally.

Acknowledgements

The authors gratefully acknowledge the Gates Foundation for funding the first cohort of AI innovators. Support for the second cohort was generously provided by Grand Challenges (GC) Africa, Brazil, Canada, India, Senegal, and South Africa. The collaboration and encouragement of all GC partners were instrumental in motivating innovators to complete the survey. The authors also extend their sincere appreciation to the Gates Foundation’s Grand Challenges and AI teams for their invaluable assistance in facilitating the survey process. The Pan-African Bioethics Initiative (PABIN) is supported by the Special Programme for Research and Training in Tropical Diseases (TDR).

Author contributions

Al.Ab., Ab.As., L.W., H.G., and R.J. developed the study design and survey tool and contributed to the data analysis. R.J. drafted the manuscript. All authors reviewed and approved the final version of the manuscript.

Funding

United Nations Children’s Fund (UNICEF)/United Nations Development Programme (UNDP)/World Bank/World Health Organization (WHO) Special Programme for Research and Training in Tropical Diseases (TDR), P24-01456

Data availability

No datasets were generated or analysed during the current study.

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Liya Wassie, Email: liya.wassie@ahri.gov.et.

Alemseged Abdissa, Email: alemseged.abdissa@ahri.gov.et.

References

  • 1.Puranik, Dr.: The role of artificial intelligence on emerging technologies & society. Int. J. Res. Appl. Sci. Eng. Technol. 12, 1657–1659 (2024). 10.22214/ijraset.2024.58713 [Google Scholar]
  • 2.Gupta, P.: The potential of AI in healthcare. Int. J. Adv. Res. 11, 1112–1115 (2023). 10.21474/IJAR01/17624 [Google Scholar]
  • 3.Esteva, A., Robicquet, A., Ramsundar, B., Kuleshov, V., DePristo, M., Chou, K., Cui, C., Corrado, G., Thrun, S., Dean, J.: A guide to deep learning in healthcare. Nat. Med. 25(1), 24–29 (2019). 10.1038/s41591-018-0316-z [DOI] [PubMed] [Google Scholar]
  • 4.Ahmed, M.: AI as emerging technology: opportunities. Vulnerabilities Future Direct. (2025). 10.2139/ssrn.5068671 [Google Scholar]
  • 5.Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1, 389–399 (2019). 10.1038/s42256-019-0088-2 [Google Scholar]
  • 6.Ethics and governance of artificial intelligence for health. Guidance on large multi-modal models. Geneva: World Health Organization; 2024. Licence: CC BY-NC-SA 3.0 IGO.
  • 7.Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021. Licence: CC BY-NC-SA 3.0 IGO.
  • 8.The World Bank. (n.d.). Population, total – Lower middle income. Retrieved July 14, 2025, from https://data.worldbank.org/indicator/SP.POP.TOTL?locations=XN
  • 9.Kenny, C.: The internet and economic growth in less-developed countries: a case of managing expectations? 1. Oxf. Dev. Stud. 31, 99–113 (2003). 10.1080/1360081032000047212 [Google Scholar]
  • 10.Unanah, O., Aidoo, E.: The potential of AI technologies to address and reduce disparities within the healthcare system by enabling more personalized and efficient patient engagement and care management. World J. Adv. Res. Rev. 25, 2643–2664 (2025). 10.30574/wjarr.2025.25.2.0641 [Google Scholar]
  • 11.Pillai, A.S.: Artificial intelligence in healthcare systems of low- and middle-income countries: requirements, gaps, challenges, and potential strategies. Int. J. Appl. Health Care Anal. 8(3), 19–33 (2023) [Google Scholar]
  • 12.Lainjo, B.: The global social dynamics and inequalities of artificial intelligence. 4966–4974 (2023)
  • 13.Abujaber, A.A., Nashwan, A.J.: Ethical framework for artificial intelligence in healthcare research: a path to integrity. World J. Methodol. 14(3), 94071 (2024). 10.5662/wjm.v14.i3.94071 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Shaw, J., Ali, J., Atuire, C.A., Cheah, P.Y., Español, A.G., Gichoya, J.W., Hunt, A., Jjingo, D., Littler, K., Paolotti, D., Vayena, E.: Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research. BMC Med. Ethics 25(1), 46 (2024). 10.1186/s12910-024-01044-w [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Weiner, E.B., Dankwa-Mullan, I., Nelson, W.A., Hassanpour, S.: Ethical challenges and evolving strategies in the integration of artificial intelligence into clinical practice. PLOS Digit Health. 4(4), e0000810 (2025). 10.1371/journal.pdig.0000810 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Obermeyer, Z., Powers, B., Vogeli, C., Mullainathan, S.: Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464), 447–453 (2019). 10.1126/science.aax2342 [DOI] [PubMed] [Google Scholar]
  • 17.Alami, H., Lehoux, P., Auclair, Y., de Guise, M., Gagnon, M.P., Shaw, J., Roy, D., Fleet, R., Ag Ahmed, M.A., Fortin, J.P.: Artificial intelligence and health technology assessment: anticipating a new level of complexity. J. Med. Intern. Res. 22(7), e17707 (2020). 10.2196/17707 [Google Scholar]
  • 18.Ho, C.W., Malpani, R.: Scaling up the research ethics framework for healthcare machine learning as global health ethics and governance. Am. J. Bioeth. 22(5), 36–38 (2022). 10.1080/15265161.2022.2055209 [Google Scholar]
  • 19.Williamson, S.M., Prybutok, V.: Balancing privacy and progress: a review of privacy challenges, systemic oversight, and patient perceptions in AI-driven healthcare. Appl. Sci. 14(2), 675 (2024). 10.3390/app14020675 [Google Scholar]
  • 20.Yu, L., Zhai, X.: Ethical and regulatory challenges of generative artificial intelligence in healthcare: a Chinese perspective. J. Clin. Nurs. (2024). 10.1111/jocn.17493 [Google Scholar]
  • 21.Morley, J., Floridi, L., Kinsey, L., Elhalal, A.: From what to how: an overview of AI ethics tools, methods and research to translate principles into practices (2019). 10.48550/arXiv.1905.06876
  • 22.Ciecierski-Holmes, T., Singh, R., Axt, M., Brenner, S., Barteit, S.: Artificial intelligence for strengthening healthcare systems in low- and middle-income countries: a systematic scoping review. NPJ Digit. Med. 5(1), 162 (2022). 10.1038/s41746-022-00700-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Boudi, A.L., Boudi, M., Chan, C., et al.: Ethical challenges of artificial intelligence in medicine. Cureus 16(11), e74495 (2024). 10.7759/cureus.74495 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Pant, A., Hoda, R., Spiegler, S., Tantithamthavorn, C., Turhan, B.: Ethics in the age of AI: an analysis of AI practitioners’ awareness and challenges. In: ACM transactions on software engineering and methodology. 33 (2023). 10.1145/3635715.
  • 25.Kerasidou, A.: Ethics of artificial intelligence in global health: explainability, algorithmic bias and trust. J. Oral. Biol. Craniofac. Res. 11(4), 612–614 (2021). 10.1016/j.jobcr.2021.09.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Millum, J., Sina, B., Glass, R.: International research ethics education. JAMA 313, 461–462 (2015). 10.1001/jama.2015.203 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Borenstein, J., Howard, A.: Emerging challenges in AI and the need for AI ethics education. AI Ethics (2020). 10.1007/s43681-020-00002-7 [Google Scholar]
  • 28.Kiemde, S., Kora, A.D.: Towards an ethics of AI in Africa: rule of education. AI Ethics (2021). 10.1007/s43681-021-00106-8 [Google Scholar]
  • 29.Morley, J., Machado, C.C.V., Burr, C., Cowls, J., Joshi, I., Taddeo, M., Floridi, L.: The ethics of AI in health care: a mapping review. Soc Sci Med 260, 113172 (2020). 10.1016/j.socscimed.2020.113172 [DOI] [PubMed] [Google Scholar]
  • 30.Karimian, G., Petelos, E., Evers, S.M.A.A.: The ethical issues of the application of artificial intelligence in healthcare: a systematic scoping review. AI Ethics 2, 539–551 (2022). 10.1007/s43681-021-00131-7 [Google Scholar]
  • 31.Giarmoleo, F., Ferrero, I., Rocchi, M., Pellegrini, M.: What ethics can say on artificial intelligence: insights from a systematic literature review. Bus. Soc. Rev. (2024). 10.1111/basr.12336 [Google Scholar]
  • 32.Afolabi, A.: Ethical issues in artificial intelligence adoption in African higher education institutions in Nigeria. Afr. J. Inf. Knowl. Manag. 3, 22–33 (2024). 10.47604/ajikm.2735 [Google Scholar]
  • 33.Shukla, S.: Principles governing ethical development and deployment of AI. Int. J. Eng. Bus. Manag. 8, 26–46 (2024). 10.22161/ijebm.8.2.5 [Google Scholar]
  • 34.Li, T., Wandella, A., Gomer, R., Al-Mafazy, M.: Operationalizing health data governance for AI innovation in low-resource government health systems: a practical implementation perspective from Zanzibar. Data Policy (2024). 10.1017/dap.2024.65 [Google Scholar]
  • 35.Pham, T.: Ethical and legal considerations in healthcare AI: innovation and policy for safe and fair use. R. Soc. Open Sci. 12(5), 241873 (2025). 10.1098/rsos.241873 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Wong, A.: Ethics and regulation of artificial intelligence, (2021). 10.1007/978-3-030-80847-1_1
  • 37.Zhang, J., Li, W., Wang, L.: Exploring the governance of artificial intelligence ethics: current issues and challenges. Int. J. Glob. Econ. Manag. 4, 59–64 (2024). 10.62051/ijgem.v4n2.08 [Google Scholar]
  • 38.Nganyewou Tidjon, L., Khomh, F.: The different faces of AI ethics across the world: a principle-implementation gap analysis, (2022). 10.48550/arXiv.2206.03225.
  • 39.European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. 2024 Jun 13. Available from: https://eur-lex.europa.eu/legal-content/EN/HIS/?uri=CELEX:32024R1689
  • 40.Birhane, A., Isaac, W., Prabhakaran, V., Díaz, M., Elish, M., Gabriel, I., Mohamed, S.: Power to the people? Opportunities and challenges for participatory AI, (2022). 10.48550/arXiv.2209.07572
  • 41.Hossain, S., Ahmed, S. I.: Towards a new participatory approach for designing artificial intelligence and data-driven technologies, (2021). 10.48550/arXiv.2104.04072
  • 42.Crockett, K.A., Colyer, E., Coulman, L., Nunn, C., Linn, S.: PEAs in PODs: co-production of community based public engagement for data and AI research. Int. Joint Conf. Neural Netw. (IJCNN) 2024, 1–10 (2024) [Google Scholar]
  • 43.Soaad, Q. H., Syed, I. A.: Towards a new participatory approach for designing artificial intelligence and data-driven technologies. In . ACM, New York, NY, USA
  • 44.WHO Academy [Internet]. [cited 2025 Jul 31]. Available from: https://whoacademy.org/coursewares/course-v1:WHOAcademy-Hosted+H0061EN+H0061EN_Q4_2024?source=edX
  • 45.Adetiba, E., Wejin, J.S., et al.: Bridging the artificial intelligence knowledge and skill gaps in Africa: a case of the 3rd Google TensorFlow bootcamp and FEDGEN mini-workshop, 1–7 (2024). 10.1109/SEB4SDG60871.2024.10629895
  • 46.Lidströmer, N., Kanters, J.J., Herlenius, E.: Systematic review of ethics and legislation of a global patient co-Owned cloud (GPOC) [version 3; peer review: 4 approved]. Open Res. 2, 3 (2025). 10.12688/bioethopenres.17693.3 [Google Scholar]
  • 47.Lidströmer, N., Herlenius, E., Davids, J.: (2023) Necessity of a global patient co-owned cloud (GPOC). 10.21203/rs.3.rs-3004727/v1


Articles from AI and Ethics are provided here courtesy of Springer