BMJ Health & Care Informatics. 2025 Apr 10;32(1):e101112. doi: 10.1136/bmjhci-2024-101112

Cracking the code: a scoping review to unite disciplines in tackling legal issues in health artificial intelligence

Sophie Nunnelley 1, Colleen M Flood 2, Michael Da Silva 3, Tanya Horsley 4, Sarathy Kanathasan 5, Bryan Thomas 2, Emily Ann Da Silva 6, Valentina Ly 6, Ryan C Daniel 7, Mohsen Sheikh Hassani 8, Devin Singh 9
PMCID: PMC11987151  PMID: 40216451

Abstract

Objectives

The rapid integration of artificial intelligence (AI) in healthcare requires robust legal safeguards to ensure safety, privacy and non-discrimination, all crucial for maintaining trust. Yet unaddressed differences in disciplinary perspectives and priorities risk impeding effective reform. This study uncovers convergences and divergences in how disciplines comprehend, prioritise and propose solutions to the legal issues raised by health AI, providing guidance for law and policymaking.

Methods

Employing a scoping review methodology, we searched MEDLINE (Ovid), EMBASE (Ovid), HeinOnline Law Journal Library, Index to Foreign Legal Periodicals (HeinOnline), Index to Legal Periodicals and Books (EBSCOhost), Web of Science (Core Collection), Scopus and IEEE Xplore, identifying discussions of legal issues published in English or French from January 2012 to July 2021. Of 18 168 screened studies, 432 were included for data extraction and analysis. We mapped the legal concerns and solutions discussed by authors in medicine, law, nursing, pharmacy, other healthcare professions, public health, computer science and engineering, revealing where they agree and disagree in their understanding, prioritisation and response to legal concerns.

Results

Critical disciplinary differences were evident in both the frequency and nature of discussions of legal issues and potential solutions. Notably, innovators in computer science and engineering exhibited minimal engagement with legal issues. Authors in law and medicine frequently contributed but prioritised different legal issues and proposed different solutions.

Discussion and conclusion

Differing perspectives regarding law reform priorities and solutions jeopardise the progress of health AI development. We need inclusive, interdisciplinary dialogues concerning the risks and trade-offs associated with various solutions to ensure optimal law and policy reform.

Keywords: Artificial intelligence, Delivery of Health Care, Global Health, Health Equity, Machine Learning


WHAT IS ALREADY KNOWN ON THIS TOPIC

  • There has been no systematic examination of the multidisciplinary literature discussing legal challenges posed by health artificial intelligence (AI). Prior efforts have addressed ethical concerns or limited subsets of legal issues or technologies and therefore do not establish the comprehensive groundwork essential for fostering meaningful cross-disciplinary dialogue on health AI regulation.

WHAT THIS STUDY ADDS

  • Our study uncovers a shared interdisciplinary apprehension regarding the effective regulation of health AI. However, distinct stakeholders such as physicians, innovators and legal scholars hold divergent perspectives on these issues and their relative significance. Notably, certain critical voices are conspicuously absent, such as clinicians’ perspectives in discussions of informed consent, hindering the prospects of effective reform.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

  • The findings underscore the imperative for governments to facilitate inclusive dialogue and reconcile disparate disciplinary viewpoints. Effective regulation is pivotal to fostering responsible innovation and ensuring the safe deployment of health AI for the public good. This study presents essential entry points for the much-needed discourse on this challenge facing governments around the world.

Introduction

Artificial intelligence (AI) is transforming healthcare with the promise of more accurate diagnoses, improved treatment options and a restoration of humanised care through the automation of administrative tasks.1 But a roadblock is uncertainty about how to manage its risks, for instance, those relating to patient privacy, blurred responsibility for mistakes made by AI and the potential for patient harm from algorithmic bias. There is growing recognition of the urgent need for regulation to ensure health AI is developed responsibly.2–5 This need is amplified by the current generative AI arms race between behemoth technology companies like OpenAI, Microsoft and Google, and the active integration of these tools into healthcare delivery.6

But what is the pathway to effective law reform? Many agree that this will require ‘multidisciplinary, international effort’.2 Yet, disciplines too often talk only among themselves, impeding the joint conversations and analyses that are essential for understanding both the nature of the problems and how to resolve them. Addressing the urgent need for cross-disciplinary understanding, we provide a first-of-its-kind systematic examination of which legal concerns are raised and how they are discussed by different disciplines. We find a shared concern for better health AI regulation. Yet, understandings of key legal issues and solutions remain fractured. Multidisciplinary work is essential to ensure law reform incorporates a genuine understanding of AI, including its effects on patients and the clinicians tasked with employing AI at the bedside.

Methods

Over the last decade, the health AI literature has surged from a trickle to a torrent. Employing a scoping review, we systematically mapped the legal concerns about health AI raised in the published literature by different disciplines, including medicine, law, nursing, pharmacy, other healthcare professions (dentistry, nutrition, etc), public health, computer science and engineering. We aimed to assess which legal concerns were raised, how they were characterised and what solutions were proposed by these disciplines.7 Our review was guided by an a priori protocol and conducted in accordance with the Arksey and O’Malley framework as extended by Levac et al.7–9 Reporting was informed by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR), including the six Arksey and O’Malley stages.8

Stage 1: identifying the research question(s)

The primary research question was: (1) What is known from the literature regarding legal concerns in health-related AI? Secondary questions were: (2) Are the legal concerns identified explicitly prioritised? (3) Do different disciplines identify, represent or prioritise legal concerns differently?

Stage 2: search strategy and selection criteria

With guidance from two trained librarians, we conducted a preliminary search of MEDLINE and HeinOnline to pilot test a highly sensitive search strategy for its ability to identify key articles. Refinements led to a final MEDLINE search strategy, which was peer-reviewed using the Peer Review of Electronic Search Strategies checklist and adapted to other databases.9

The following electronic databases were searched on 21 July 2021 for eligible records published on or after 1 January 2012: MEDLINE (Ovid), EMBASE (Ovid), HeinOnline Law Journal Library, Index to Foreign Legal Periodicals (HeinOnline), Index to Legal Periodicals and Books (EBSCOhost), Web of Science (Core Collection), Scopus and IEEE Xplore (online supplemental e-Table 1). For full search terms (used for MEDLINE and adapted to other databases), see online supplemental e-Table 2 or the Protocol.7 Searches were augmented by hand-searching reference lists of relevant full-text records.10 All records were imported into a proprietary review software programme (Covidence) for duplicate removal and eligibility assessment.

Stage 3: study selection

All English and French-language records discussing legal concerns or solutions regarding health AI were selected. For definitions of ‘legal concern’, ‘artificial intelligence’ and ‘health-related’, see online supplemental e-Table 3. We excluded records that raised issues solely in ethical terms, without legal import or analysis, as well as abstracts, conference proceedings and secondary syntheses. Systematic reviews were tracked to ensure inclusion of relevant primary sources.7

Decisions regarding record inclusion were made by two authors with guidance from a pilot-tested eligibility assessment form and using record management software. Agreement was assessed and reported using a Kappa statistic.11 Subject matter expert authors resolved any conflicting decisions. Of 18 168 identified records, 432 studies were included for analysis. A summary of inclusion decisions at each stage is provided in the PRISMA flow diagram (figure 1).
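
For illustration, inter-rater agreement of this kind is typically computed as Cohen’s kappa, which corrects raw per cent agreement for chance. The following minimal sketch is not the study’s own tooling, and the decision vectors are hypothetical; it simply shows the calculation using scikit-learn.

```python
# Minimal sketch: chance-corrected agreement between two screeners.
# The decision vectors below are hypothetical, not study data.
from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 1, 0, 1, 0, 0, 1, 1]  # 1 = include, 0 = exclude
reviewer_b = [1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa: {kappa:.2f}")
```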

Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses diagram.


Stage 4: extracting and charting the data

We developed, pilot-tested and refined a standardised data extraction tool until it was deemed to support data extraction at a high level of consistency. Law students under author supervision extracted (1) record-level demographic information and (2) text-based expressions of legal concerns, expressed prioritisations and proposed solutions. Information was extracted verbatim without any attempt at interpretation. Extracted demographic information included the faculty of the corresponding author, which was deemed the author’s ‘discipline’ for analysis purposes (online supplemental e-Table 4).

Discussions of legal issues were extracted using a list of 10 legal issues and an ‘other’ category. Where an issue could be categorised under two headings (eg, data leaks could be described as a privacy or cybersecurity issue), extraction followed the characterisation in the text. For issues that could be discussed in a legal or non-legal way (eg, safety), extraction occurred only where the issue was discussed as a legal one. Where an article proposed law reform or another solution to one or more of the problems it identified, the data were extracted and categorised by solution type (online supplemental e-Table 4).

Stage 5: collating, summarising and reporting results

Quantitative data were extracted, analysed and visually represented with the aid of custom software written in Python. To generate qualitative data, we collated the extracted data by legal concern or solution type, further stratifying it by author discipline. We then closely reviewed the collated data to identify themes and prepared summary analyses for each legal issue, identifying key similarities and differences between disciplines. Issue coding and summary analyses were reviewed and confirmed by a second author.
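
As an illustration of the quantitative side of this stage, the sketch below shows the kind of stratified counting and plotting such custom Python software might perform. It is a hypothetical reconstruction, not the authors’ actual code; the file name and column names (‘discipline’, ‘legal_issue’) are assumptions.

```python
# Hypothetical sketch: legal-issue counts stratified by author discipline.
import matplotlib.pyplot as plt
import pandas as pd

# Assumed schema: one row per extracted discussion of a legal issue.
records = pd.read_csv("extracted_records.csv")  # hypothetical file

# Cross-tabulate how often each discipline discusses each legal issue.
counts = pd.crosstab(records["discipline"], records["legal_issue"])

# Within-discipline shares, eg, what fraction of law's discussions
# concern privacy versus liability.
shares = counts.div(counts.sum(axis=1), axis=0)

counts.plot(kind="bar", stacked=True)
plt.ylabel("Number of discussions")
plt.tight_layout()
plt.savefig("issues_by_discipline.png")
print(shares.round(2))
```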

Stage 6: stakeholder consultation

In March 2023, we conducted a consultation process with an International Advisory Board composed of multidisciplinary experts.12 Members reviewed the face validity of initial findings and confirmed that our results align with their understanding of the legal landscape of health AI.

Patient and public involvement

The International Advisory Board for the project includes a member who is a patient partner, caregiver and advocate for the co-design of research and healthcare. We consulted with this member during our Stakeholder Consultation. Our understanding of the issues was also informed by a series of workshops that we developed and co-hosted with the Canadian Institute for Advanced Research (CIFAR), which brought together multidisciplinary experts including patient partners to discuss the legal and ethical issues raised by specific health AI technologies.12

Results

What is the frequency and distribution of legal issues discussions?

We found exponential growth in the literature raising legal issues with health AI. Rates of discussion grew by 950% between 2012 and 2016 and by 2914% between 2016 and 2020 (figure 2). The geographic distribution of published legal concerns was USA-led (38%), followed by the UK (9%), Canada (7%) and Australia (5%). Many countries were marginally represented or unrepresented (figure 3). Authors raising legal issues most frequently were in medicine (36%) followed by law (28%) (figure 4). AI developers (represented by authors in computer science and engineering) were minimally represented in the literature, with 4% for each of those disciplines.
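
These growth figures are simple percentage changes in yearly publication counts, with the 2021 count annualised because the search ended on 21 July 2021. The sketch below shows the arithmetic; the yearly counts are hypothetical values chosen only to reproduce the reported percentages, not the study’s data.

```python
# Hypothetical yearly counts illustrating the reported growth rates.
counts = {2012: 2, 2016: 21, 2020: 633}

def pct_growth(old: int, new: int) -> float:
    """Percentage growth from old to new."""
    return (new - old) / old * 100

print(f"2012-2016: {pct_growth(counts[2012], counts[2016]):.0f}%")  # 950%
print(f"2016-2020: {pct_growth(counts[2016], counts[2020]):.0f}%")  # 2914%

# Prorating 2021: 21 July is day 202 of 365, so an annualised
# estimate scales the observed partial-year count.
observed_2021 = 250  # hypothetical partial-year count
prorated_2021 = observed_2021 * 365 / 202
```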

Figure 2. Growth in discussions of legal issues with health artificial intelligence (AI) from 2012 to 2021*. *2021 values prorated. There has been exponential growth in the multidisciplinary literature discussing legal issues with AI, with especially strong growth beginning in 2018. Top-ranking issues include efficiency of regulation, privacy and safety/quality.


Figure 3. Geographical distribution of publications discussing legal issues with health artificial intelligence (AI) (2012–2021). Authors with English or French-language publications discussing legal issues with health AI are predominantly located in the USA, followed by the UK, Canada and Australia. Many countries, especially in the Global South, are not represented in this literature.


Figure 4. Disciplinary distributions of discussions of legal issues with health artificial intelligence (AI) (2012–2021). Writers in medicine produced the most discussions of legal issues (36%), followed by law (28%); other (11%); unknown (no discipline indicated, 10%); engineering (4%); computer science (4%); other health professions (dentistry, nutrition, etc, 1%); pharmacy (1%); and nursing (less than 1%). Disciplines were united in writing most frequently about the efficiency of regulation but went on to prioritise different legal issues.


Overall, the most frequently discussed legal issue was a concern to ensure the efficiency of regulation, for instance, the worry that unclear, overzealous or inconsistent regulation will make compliance difficult or impede innovation. Concerns over regulatory efficiency accounted for 17% of legal issues discussions, and this issue ranked first for authors in both medicine and law. After this issue, authors in medicine most often discussed privacy, followed by safety/quality, while legal authors more often discussed liability, followed by privacy.

The most frequently discussed solutions to legal issues were new legislation (28%) and voluntary improvements (ie, non-legal measures; 26%), with calls to reform existing laws comprising 14% of solutions discussions. Authors in medicine and law again dominated these discussions (figure 5). Authors in medicine were most likely to discuss voluntary improvements (33%), followed by new legislation (23%). They also discussed other non-legal instruments for promoting responsible health AI adoption (eg, mandatory training and professional guidelines) more frequently than legal writers. Legal authors more often discussed new legislation (such as dedicated AI legislation; 34%), followed by reform of existing law (for instance, to strengthen privacy protections; 22%).

Figure 5. Disciplinary distribution of references to solutions to legal issues with health artificial intelligence (AI) (2012–2021). Writers in medicine and law produced the most discussions (by far) of possible solutions to legal problems with health AI. However, they proposed different solutions. Legal writers were most likely to propose new legislation, followed by reform of existing law. Writers in medicine were most likely to discuss calls (by industry, clinicians or others) for voluntarily improving standards, followed by new legislation. Writers in medicine were also more likely to call for mandatory training as a response to legal problems.


How do different disciplines characterise and prioritise legal issues?

We identified themes in how disciplines represent legal issues, noting similarities and differences across disciplines (see online supplemental e-Table 5). On some issues, we found significant cross-disciplinary agreement, including:

  • The need for efficient regulation, with consensus that existing safety and quality regulations are inadequate, inconsistent or otherwise not ‘fit for purpose’ for health AI.

  • The lack of clarity as to who, as between developers, healthcare institutions or clinicians, should bear liability when AI use results in patient harm.

  • The critical need for improved cybersecurity where health AI is employed.

  • The importance of addressing the risk of algorithmic bias, which can pose safety risks to patients and exacerbate existing inequities.

On other legal issues, we found distinct disciplinary characterisations and approaches. For example:

  • While few authors overall discussed whether AI use must be disclosed to the patient as part of informed consent to treatment, those who discussed the issue were usually in law.13 When writers in medicine discussed ‘consent’, they were usually referring to the collection, use and disclosure of personal information, that is, consent as it relates to patient privacy rights.

  • Disciplines made different predictions about the likely allocation of liability where health AI use results in patient harm. Legal authors more often predicted reduced physician liability due to greater overall accuracy and the eventual incorporation of AI into the standard of care, while writers in medicine worried that clinical reliance on AI will be deemed negligent.14 15

  • Authors from different disciplines emphasised varying concerns about access and equity. For instance, legal writers were more likely to note risks from third parties like insurers, who may refuse to cover AI-based care or harms relating to AI use, or who may use AI to deny coverage for healthcare more broadly, leading to inequity. Authors in medicine more frequently raised the prospect of a ‘digital divide’ based on ability to pay for medical AI.

Discussion

Overview

AI promises to revolutionise healthcare by ushering in a new era of medicine and alleviating the strain on overburdened healthcare systems. However, there is a growing consensus that realising AI’s potential requires adequate legal governance. Given the rapid evolution of AI technology, delivering optimal regulation presents a significant challenge. Addressing this challenge necessitates convergence across disciplines to identify the risks posed by AI in healthcare and determine the best regulatory responses.

The situation resembles the tale of The Blind Men and the Elephant, where each discipline perceives only a fragment of the intersecting issues, hindering a comprehensive understanding. Regulatory efforts based on incomplete disciplinary perspectives risk distortion or failure. For instance, the Canadian government’s introduction of the Artificial Intelligence and Data Act in 2022 faced criticism for its vagueness, prompting calls for stakeholder consultation and consensus building.16 17 This study supports the necessary conversations by showing us who is most discussing different legal risks, which voices are missing and how disciplines are characterising the risks and possible solutions.

Missing voices impede effective governance

The WHO has called for dialogue among all stakeholders in the ‘AI for health ecosystem’, including ‘developers, manufacturers, regulators, users and patients’.2 Yet, our study reveals significant gaps in the voices discussing health AI’s legal risks. We found an underrepresentation of AI developers in particular, as evidenced by minimal engagement from authors in computer science and engineering (figure 4). The apparent lack of engagement is consistent with previous findings of minimal innovator discussion of the legal and ethical dimensions of mental health AI technologies.18

While innovators may be informed about legal issues without actively publishing on such matters, our findings point to a concerning gap. Innovator participation in legal discussions is crucial both for AI development (for instance, to ensure privacy ‘by design’) and to ensure the appropriateness of legal and regulatory reform. Developer involvement can help ensure law reform responds to the true state of AI innovation rather than the hypothetical or fanciful, and that any newly implemented requirements are feasible. For instance, calls for innovators to do comprehensive bias testing will be ineffective if the data for doing so are unavailable, a perspective that innovators can provide. Similarly, policy arguments that algorithms must be ‘explainable’ can only be evaluated with developer input on what is technically feasible. Moreover, some regulators do rely heavily on industry voices; if these voices are not informed by an understanding of legal concerns, we risk an imbalance between enthusiasm for innovation and critical interests like privacy and non-discrimination.19 At the same time, developer participation can help ensure that regulation achieves its objectives in an appropriately flexible way without unduly impeding beneficial innovation.

There is also a notable absence of clinician-driven literature on the complexities of informed consent in AI-assisted treatments. The lack of careful deliberation on this topic risks encouraging blunt solutions. (One article suggests the ‘[u]se of non-explainable AI should arguably be prohibited in healthcare’.20) Cross-disciplinary conversations are essential for defining informed consent standards, especially given physicians’ lack of training in AI’s risks and their crucial role in translating medical information for patients.4 5

Voices from the Global South are also noticeably absent from these discussions, indicating a need for increased inclusion of authors from low- and middle-income countries (LMICs).21 22 This finding may be partly attributable to our having searched only articles published in English and French. However, given known barriers to the full inclusion of academic voices from the Global South, and the disproportionate effects of some legal issues on LMICs (eg, the risk of biased algorithms perpetuating existing inequalities and compromising patient safety), meaningful engagement from LMIC stakeholders is crucial for realising the global potential of health AI.

The importance of multidisciplinary analysis on key issues

Our analysis reveals divergent disciplinary perspectives on key issues, such as liability and equity, which risk undermining effective AI governance if they are not understood and reconciled. A collaborative approach is essential for ensuring that regulation is fair, appropriately balancing competing interests, perspectives and concerns, and that it is effective, able to achieve its intended goals. Such an approach will also help engender public, patient and provider trust in AI and its regulation.

An example is the allocation of responsibility when AI leads to patient harm. Some jurisdictions give lighter regulatory scrutiny to health innovations where physicians remain in the loop (ie, decision support).23 This dynamic shifts responsibility to physicians who may not have the information or training to evaluate AI processes or outcomes, which could embed bias or produce difficult-to-detect AI ‘hallucinations’ that cause harm. We need interdisciplinary cooperation to ensure baseline quality and safety, to fairly allocate responsibility where harms occur and to build trust. Another key area of concern is the prevention of inequities in AI access. There are tensions between incentivising beneficial innovation through patent protection and ensuring equitable access to technology and data for public interest research and care. Ensuring nuanced, interdisciplinary analysis can aid in our understanding and balancing of these competing concerns.

Collaboration between disciplines, with their different expertise and perspectives, will also help ensure that regulation is effective, achieving its intended aims. For example, while many emphasise the need for stronger privacy protections, this need may collide with the need for data, including data relating to race and socioeconomic status, to train algorithms so that they are generalisable to different populations. As one author puts it, unless we address biased AI, ‘patients that have historically not benefited from the healthcare industry will continue to face discrimination’—our biases will simply ‘become solidified, automated ones’.24 Interdisciplinary discussion can help us to understand where well-intentioned legal developments (eg, to strengthen privacy) might have unintended effects (eg, undermining equity).

Collaboration is also essential to ensuring access and equity in the context of direct-to-consumer health AI tools like mental health apps, care robots and mobility devices.12 Some argue that these are important tools for filling troubling gaps in healthcare service provision. Yet, others observe that insufficient regulatory oversight could undermine that aim and harm vulnerable users (‘bots could be programmed to infiltrate people’s homes and lives en masse, befriending children and teens, influencing lonely seniors or harassing confused individuals until they finally agree to services’).25 26 These debates also underscore the need for input from those whose lives are affected by health AI. Interdisciplinary collaboration can assist in this regard by requiring that we limit disciplinary jargon, avoiding AI ‘techno-speak’ and ‘legalese’ in favour of a common language that brings everyone, including patients, policymakers and the public, to the table, fostering trust.

Our approach aligns with the research suggesting that interdisciplinarity enhances problem-solving relating to complex social challenges and has a positive impact on policy formation.27 There are several models for interdisciplinary teamwork generally and for AI-related collaboration specifically, as well as calls for governments, research institutions and funders to better support interdisciplinarity through policy.28–30 Our approach to interdisciplinarity is also informed by experience. In our own research community, we have deliberately created multiple opportunities for structured engagement through workshops, discussions and collaborations, fostering an environment of mutual understanding. Our community includes clinicians, legal scholars, AI innovators, patients, caregivers, ethicists, regulators and others. For several years, we have been incrementally learning from each other, developing a common understanding of the challenges and opportunities of health AI. This scoping review, itself an interdisciplinary effort, exemplifies this approach.

Conclusion

National and international leaders increasingly advocate for interdisciplinary collaboration on AI regulation, yet current discussions remain unduly siloed. Our study supports calls for more meaningful interdisciplinarity to strike the right balance between competing values and to respond effectively to rising concerns. Governments must facilitate these discussions to guide health AI governance and ensure equitable, safe and responsible AI advancements for all.

Supplementary material

online supplemental file 1
bmjhci-32-1-s001.docx (33.9KB, docx)
DOI: 10.1136/bmjhci-2024-101112

Acknowledgements

We thank Karni A Chagal-Feferkorn, Nicole Davidson, Samantha Iantomasi, Arianne Kent, Kelli White, Caroline Mercer, Angie Ortiz-Romero and Saly Sadek for assistance, including with research, article review and data extraction. We also thank the Canadian Institutes of Health Research, the Hospital for Sick Children Research Institute and the Alex Trebek Forum for Dialogue for funding.

Footnotes

Funding: This study was funded by CIHR (grant number 452650), the Alex Trebek Forum for Dialogue (grant number N/A) and the Hospital for Sick Children Research Institute (grant number N/A). The funders had no other role in this study.

Provenance and peer review: Not commissioned; externally peer-reviewed.

Patient consent for publication: Not applicable.

Ethics approval: Not applicable.

Data availability: The data supporting the findings of this study will be available upon reasonable request.

Map disclaimer: The depiction of boundaries on this map does not imply the expression of any opinion whatsoever on the part of BMJ (or any member of its group) concerning the legal status of any country, territory, jurisdiction or area or of its authorities. This map is provided without any warranty of any kind, either express or implied.

Data availability statement

Data are available upon reasonable request.

References

  • 1. Rajpurkar P, Chen E, Banerjee O, et al. AI in health and medicine. Nat Med. 2022;28:31–8. doi:10.1038/s41591-021-01614-0
  • 2. World Health Organization. Regulatory considerations on artificial intelligence for health. Geneva: World Health Organization; 2023. Licence: CC BY-NC-SA 3.0 IGO.
  • 3. American Medical Association. Principles for augmented intelligence development, deployment, and use. 2023. Available: https://www.ama-assn.org/press-center/press-releases/ama-issues-new-principles-ai-development-deployment-use [Accessed 12 Dec 2023].
  • 4. Royal College of Physicians and Surgeons of Canada. Task force report on artificial intelligence and emerging digital technologies. 2020. Available: https://cdn.dal.ca/content/dam/dalhousie/pdf/faculty/medicine/departments/department-sites/psychiatry/rc-ai-task-force-e.pdf [Accessed 26 Jun 2023].
  • 5. Civaner MM, Uncu Y, Bulut F, et al. Artificial intelligence in medical education: a cross-sectional needs assessment. BMC Med Educ. 2022;22:772. doi:10.1186/s12909-022-03852-3
  • 6. Raza MM, Venkatesh KP, Kvedar JC. Generative AI and large language models in health care: pathways to implementation. NPJ Digit Med. 2024;7:62. doi:10.1038/s41746-023-00988-4
  • 7. Da Silva M, Horsley T, Singh D, et al. Legal concerns in health-related artificial intelligence: a scoping review protocol. Syst Rev. 2022;11:123. doi:10.1186/s13643-022-01939-y
  • 8. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8:19–32. doi:10.1080/1364557032000119616
  • 9. McGowan J, Sampson M, Salzwedel DM, et al. PRESS Peer Review of Electronic Search Strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40–6. doi:10.1016/j.jclinepi.2016.01.021
  • 10. Horsley T, Dingwall O, Sampson M. Checking reference lists to find additional studies for systematic reviews. Cochrane Database Syst Rev. 2011;2011:MR000026. doi:10.1002/14651858.MR000026.pub2
  • 11. Kundel HL, Polansky M. Measurement of observer agreement. Radiology. 2003;228:303–8. doi:10.1148/radiol.2282011860
  • 12. Queen’s University Faculty of Law. Machine M.D. Available: https://law.queensu.ca/research/machine-md [Accessed 10 Apr 2024].
  • 13. Cohen IG. Informed consent and medical artificial intelligence: what to tell the patient? Geo L J. 2020;108:1425–70. doi:10.2139/ssrn.3529576
  • 14. Harvey HB, Gowda V. Clinical applications of AI in MSK imaging: a liability perspective. Skeletal Radiol. 2022;51:235–8. doi:10.1007/s00256-021-03782-z
  • 15. Froomkin AM, Kerr I, Pineau J. When AIs outperform doctors: confronting the challenges of a tort-induced over-reliance on machine learning. Ariz Law Rev. 2019;61:33–99. doi:10.2139/ssrn.3114347
  • 16. Minister of Innovation, Science and Industry. Bill C-27: an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other acts. 2022. Available: https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading [Accessed 15 Oct 2023].
  • 17. Scassa T. Regulating AI in Canada: a critical look at the proposed Artificial Intelligence and Data Act. Can Bar Rev. 2023;101:1–30.
  • 18. Gooding P, Kariotis T. Ethics and law in research on algorithmic and data-driven technology in mental health care: scoping review. JMIR Ment Health. 2021;8:e24668. doi:10.2196/24668
  • 19. Castaldo J. Canadian AI experts issue letter in support of draft law aimed at curbing technology’s risks. The Globe and Mail. 2023. Available: https://www.theglobeandmail.com/business/article-ai-legislation-open-letter/ [Accessed 14 Oct 2023].
  • 20. Carter SM, Rogers W, Win KT, et al. The ethical, legal and social implications of using artificial intelligence systems in breast cancer care. Breast. 2020;49:25–32. doi:10.1016/j.breast.2019.10.001
  • 21. Nakamura G, Soares BE, Pillar VD, et al. Three pathways to better recognize the expertise of Global South researchers. NPJ Biodivers. 2023;2:17. doi:10.1038/s44185-023-00021-7
  • 22. Collyer FM. Global patterns in the publishing of academic knowledge: Global North, global South. Curr Sociol. 2018;66:56–73. doi:10.1177/0011392116680020
  • 23. Da Silva M, Flood CM, Goldenberg A, et al. Regulating the safety of health-related artificial intelligence. Healthc Policy. 2022;17:63–77. doi:10.12927/hcpol.2022.26824
  • 24. Takshi S. Unexpected inequality: disparate-impact from artificial intelligence in healthcare decisions. J Law Health. 2021;34:215–51.
  • 25. Blake V. Regulating care robots. Temp L Rev. 2020;92:551–94 (quoting Kerr IR. Babes and the Californication of Commerce. U Ottawa L & Tech J. 2004;1:288–89).
  • 26. Abd-Alrazaq A, Alhuwail D, Schneider J, et al. The performance of artificial intelligence-driven technologies in diagnosing mental disorders: an umbrella review. NPJ Digit Med. 2022;5:87. doi:10.1038/s41746-022-00631-8
  • 27. Pinheiro H, Vignola-Gagné E, Campbell D. A large-scale validation of the relationship between cross-disciplinary research and its uptake in policy-related documents, using the novel Overton altmetrics database. Quant Sci Stud. 2021;2:1–27. doi:10.1162/qss_a_00137
  • 28. Bisconti P, Orsitto D, Fedorczyk F, et al. Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology. AI Soc. 2023;38:1443–52. doi:10.1007/s00146-022-01518-8
  • 29. Georgia Tech. Effective Team Dynamics Initiative. Available: https://etd.gatech.edu/ [Accessed 28 Nov 2024].
  • 30. van Helden DP, Levine D, Guiry E, et al. Seven recommendations for scientists, universities, and funders to embrace interdisciplinarity: practical guidelines to enabling interdisciplinarity. EMBO Rep. 2024;25:2832–6. doi:10.1038/s44319-024-00173-y


