Cureus. 2023 Aug 10;15(8):e43262. doi: 10.7759/cureus.43262

Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare

Madhan Jeyaraman 1, Sangeetha Balaji 2, Naveen Jeyaraman 1, Sankalp Yadav 3
Editors: Alexander Muacevic, John R Adler
PMCID: PMC10492220  PMID: 37692617

Abstract

The integration of artificial intelligence (AI) into healthcare promises groundbreaking advancements in patient care, revolutionizing clinical diagnosis, predictive medicine, and decision-making. This transformative technology uses machine learning, natural language processing, and large language models (LLMs) to give machines data-processing and reasoning capabilities that approach human intelligence. OpenAI's ChatGPT, a sophisticated LLM, holds immense potential in medical practice, research, and education. However, as AI in healthcare gains momentum, it brings forth profound ethical challenges that demand careful consideration. This comprehensive review explores key ethical concerns in the domain, including privacy, transparency, trust, responsibility, bias, and data quality. Protecting patient privacy in data-driven healthcare is crucial, with potential implications for psychological well-being and data sharing. Strategies like homomorphic encryption (HE) and secure multiparty computation (SMPC) are vital to preserving confidentiality. Transparency and trustworthiness of AI systems are essential, particularly in high-risk decision-making scenarios. Explainable AI (XAI) emerges as a critical aspect, ensuring a clear understanding of AI-generated predictions. Cybersecurity becomes a pressing concern as AI's complexity creates vulnerabilities for potential breaches. Determining responsibility in AI-driven outcomes raises important questions, with debates on AI's moral agency and human accountability. Shifting from data ownership to data stewardship enables responsible data management in compliance with regulations. Addressing bias in healthcare data is crucial to avoid AI-driven inequities, as biases present in data collection and algorithm development can perpetuate healthcare disparities. A public-health approach is advocated to address inequalities and promote diversity in AI research and the workforce. Maintaining data quality is imperative in AI applications, with convolutional neural networks showing promise in multi-input/mixed data models, offering a comprehensive patient perspective. In this ever-evolving landscape, it is imperative to adopt a multidimensional approach involving policymakers, developers, healthcare practitioners, and patients to mitigate ethical concerns. By understanding and addressing these challenges, we can harness the full potential of AI in healthcare while ensuring ethical and equitable outcomes.

Keywords: secure multiparty computation, homomorphic encryption, healthcare, large language models, chatgpt, artificial intelligence (ai)

Introduction and background

In the ever-evolving landscape of healthcare, the advent of artificial intelligence (AI) has emerged as a transformative force, promising unparalleled advancements in patient care. AI has enhanced clinical diagnosis, predictive medicine, the analysis of patient data, diagnostics, and clinical decision-making [1]. The tools used to give machines and software data-processing and reasoning capabilities on par with human intelligence include machine learning (ML), large and often unstructured datasets, advanced sensors, natural language processing (NLP), and, more recently, large language models (LLMs). ML, a subset of AI, enables algorithms to learn patterns from data without explicit programming. Deep learning, a subset of ML, uses algorithms known as artificial neural networks to learn from data and make predictions or decisions; it is particularly useful for tasks such as image and speech recognition [2]. NLP, a branch of computer science and AI, studies the use of natural language in interactions between computers and people, employing computational methods and algorithms to analyze, comprehend, and produce human language [3,4]. Developed by OpenAI, ChatGPT is a sophisticated LLM based on the GPT-3.5 architecture, extensively trained on vast text data to generate human-like responses to user inputs. Its applications in medical practice, research, and education are promising. Notable competitors in this field include Microsoft Bing and Google's Bard [5-7].
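
To make the core ML idea above concrete, the following minimal sketch, using synthetic placeholder values rather than clinical data, shows a model learning a decision rule from labeled examples instead of having the rule hand-coded:

```python
# A toy illustration of "learning patterns from data without explicit
# programming". Features and labels are hypothetical placeholders.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [age, systolic_bp]; label: 1 = at risk
X = [[45, 130], [62, 160], [30, 110], [70, 170], [25, 105], [55, 150]]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)   # rule learned from data
print(model.predict([[65, 155]]))            # -> [1]; no rule was hand-written
```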

Medical specialties in which images are central, such as radiology, pathology, and oncology, have seized the opportunity to integrate AI into healthcare, and significant research and development efforts have been made to translate the potential of AI into therapeutic applications [8-10]. Additionally, ML is being used to analyze neuroimaging data to help with the early identification, prognosis, and treatment of disorders that threaten brain health [11]. Segar et al. developed ML models for predicting in-hospital mortality in patients with heart failure that integrated social determinants of health and outperformed traditional logistic regression models [12]. AI applications in mental healthcare offer potential advantages, including new treatment modalities, opportunities to reach underserved communities, and improved patient response [13]. Ashburner et al. analyzed narrative data from electronic health records using NLP to enhance the prediction of incident atrial fibrillation (AF) risk [3]. NLP can also supplement symptom assessment in clinical and research settings for schizophrenia, as it draws on information more closely related to impaired brain processes such as disrupted connectivity, information processing, and reward processing [4]. ChatGPT's capabilities make it a valuable tool for medical education through curriculum development, simulated training, and language translation. Additionally, it can assist in information retrieval for research purposes and potentially improve the precision and speed of medical recording in clinical settings [14,15].

However, as we embrace this revolutionary technology, we must also confront the profound ethical concerns that accompany its integration into our healthcare systems. AI advancement also has the potential to exacerbate healthcare inequality. Given the many benefits of AI and the need to mitigate its potential negative effects, it is imperative to understand and handle the ethical issues in depth. To mitigate the legal and ethical issues related to AI in healthcare, a multidimensional approach encompassing policymakers, developers, healthcare practitioners, and patients is essential [16].

Review

Ethical challenges and AI

Research on the ethical concerns related to the use of AI in healthcare has identified key ethical issues and related subtopics that require consideration. These issues, along with the difficulties posed by the recent development of publicly accessible LLMs, are discussed in this review. The primary concerns include privacy, transparency, trust, responsibility, bias, cybersecurity, and data quality [14,16-18]. 

Privacy

In the realm of data-driven healthcare, privacy emerges as a critical challenge, given the use of ML and deep learning systems to make predictions from users' data. Patients trust healthcare professionals to protect their private information, including sensitive data such as age, sex, and health records [19-23]. The use of big data in healthcare creates many privacy issues. Among these is the moral conundrum posed by the unauthorized use of personal data in predictive analytics. The loss of control over data access is a key concern, since exposure of private health information could have a serious psychological impact on patients. Additionally, keeping databases of genetic sequences and medical histories private could hinder data collection and the advancement of medical tests. Furthermore, some businesses may justify withholding data under the guise of preserving privacy, making data sharing challenging [20].

In the context of wearable devices and the Internet of Medical Things (IoMT) in healthcare, a plethora of security and privacy vulnerabilities exist, posing a significant risk to sensitive data, including passwords. Denial-of-service and ransomware attacks can result in life-threatening circumstances. Users consequently voice concerns about the technology's vulnerabilities and the use of their data [22]. Health information management (HIM) practices such as automated medical coding, information capture, data management, and governance are significantly affected by AI technologies. Moreover, AI-based applications have a direct impact on patients' confidentiality and privacy [21]. Striking the right balance between restricting access to data and the problems such restrictions can mitigate remains a crucial challenge.

Priyanshu et al. examined how ChatGPT can adhere to privacy-related policies and privatization mechanisms by producing outputs that comply with regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR). The study involved instructing ChatGPT to generate compliant outputs and observing the extent to which personally identifiable information (PII) was omitted from the responses. The objective was to limit input copying and regurgitation in ChatGPT's responses by incorporating add-on sentences into prompts, inducing privacy through sanitization of the generated responses while adhering to relevant data protection regulations. ChatGPT retained PII verbatim in 57.4% of cover letter summaries, with varying rates across subgroups; when explicitly instructed, it omitted PII to a significant degree, showing potential for privacy compliance [24].
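
A minimal sketch of the two ideas studied here, a prompt-induced sanitization instruction plus a post hoc redaction pass, might look as follows. The PII patterns and the send_to_llm() stub are hypothetical illustrations; real HIPAA/GDPR compliance requires far more than regex matching:

```python
# Illustrative only: combine an add-on privacy instruction in the prompt
# with a belt-and-braces post hoc redaction of obvious PII in the output.
import re

PRIVACY_ADDON = ("Do not reproduce any names, email addresses, phone "
                 "numbers, or other personally identifiable information "
                 "from the input in your response.")

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Replace matched PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def send_to_llm(prompt: str) -> str:
    # Stub standing in for a real chat-completion API call.
    return "The applicant (reach them at jane@example.com) has 5 years..."

def summarize_with_privacy(document: str) -> str:
    prompt = f"Summarize this cover letter. {PRIVACY_ADDON}\n\n{document}"
    response = send_to_llm(prompt)   # hypothetical LLM call
    return sanitize(response)        # post hoc redaction as a fallback

print(summarize_with_privacy("Dear hiring team, I am Jane..."))
# -> "The applicant (reach them at [EMAIL]) has 5 years..."
```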

To preserve privacy in healthcare applications, proposed solutions include homomorphic encryption (HE), which allows computations on encrypted data, preserving data privacy and enabling secure processing without the need for decryption. Secure enclaves, such as Intel Software Guard Extensions (SGX), are hardware-based security technologies that safeguard the confidentiality and integrity of code and data during execution within a protected enclave, shielding sensitive computations from potential threats. Secure multiparty computation (SMPC) distributes computations across multiple parties, preventing any individual party from accessing the others' data; it can be achieved through garbled circuits and secret sharing, enabling secure collaborative computation on sensitive data. These solutions are crucial in healthcare, where strict regulations protect patient privacy and data. By implementing these methods, sensitive information can be processed efficiently and securely while ensuring compliance with data protection standards in healthcare settings [22,23].
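
To make the secret-sharing building block of SMPC concrete, the following minimal sketch splits each party's private value into additive shares so that only the aggregate is ever revealed. The party count and patient counts are hypothetical; production systems use hardened SMPC frameworks rather than hand-rolled code:

```python
# Additive secret sharing, one building block of SMPC. Illustrative only.
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n random shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

# Three hypothetical hospitals each hold a private patient count; no single
# share reveals anything, yet the sum is computable from the shares.
private_counts = [120, 45, 310]
all_shares = [share(v, 3) for v in private_counts]

# Each party locally sums the one share it received from every input...
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
# ...and publishing only the partial sums reveals just the aggregate.
print(reconstruct(partial_sums))  # -> 475
```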

Transparency and trust

Transparency has emerged as a significant ethical concern, especially in complex "black box" AI systems that are highly efficient but opaque in their decision-making processes. Striking a balance between accuracy and explainability is crucial, especially in high-risk decision-making situations [25-28]. Despite existing guidance for transparent reporting, poorly reported medical AI models remain common, and the transparency required for trustworthy AI remains unfulfilled. One paper presents a framework for measuring transparency and trustworthiness in medical AI tools, using a survey to prompt structured reporting on intended use, AI model development and validation, ethical considerations, and deployment constraints based on current guidelines. The framework was piloted with three medical AI tools; the assessment revealed reporting gaps that lowered the tools' degree of trustworthiness, indicating compliance gaps with ethical guidelines [29].

Explainability is the ability of an AI-driven system to provide a person with an understanding of why it arrived at a certain prediction or decision. From a medical standpoint, it is essential to differentiate between two levels of explainability in AI systems: the first pertains to understanding, in a general sense, how the system reaches its conclusions, while the second involves explaining the training process that enables the system to learn from examples and produce outputs. From a technological viewpoint, achieving explainability is crucial, considering both its implementation and the advantages it offers during development. Legally, informed consent, certification, approval, and liability are critical aspects related to the explainability of medical devices [30].

In a survey focusing on explainable AI (XAI) applications in healthcare and medical imaging, various XAI types are summarized and categorized. XAI techniques can be classified as model-specific, tailored to one model type and inapplicable to others, or model-agnostic, with no specific requirements and usable with a wide range of models. The survey also highlights specific algorithms used to enhance interpretability in medical imaging, such as Local Interpretable Model-Agnostic Explanations (LIME), backpropagation-based methods, class activation mapping, and layer-wise relevance propagation. These fall under post hoc interpretation techniques, which provide interpretable information about a model through external methods after its analysis [31]. Failure to prioritize explainability in clinical decision support systems can jeopardize core ethical values in medicine and may have adverse effects on both individual and public health [25,30,31].
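
As a concrete illustration of the model-agnostic, post hoc idea, the following minimal LIME-style sketch perturbs the features of one record and fits a local linear surrogate whose coefficients indicate each feature's local influence. The data and black-box model are synthetic placeholders, not a clinical tool or the exact LIME algorithm:

```python
# A LIME-style local surrogate explanation: perturb around one instance,
# query the black box, and fit a proximity-weighted linear model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # 4 synthetic features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)      # hidden "true" rule
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                          # instance to explain
Z = x0 + rng.normal(scale=0.3, size=(200, 4))      # local perturbations
p = black_box.predict_proba(Z)[:, 1]               # black-box outputs

# Weight perturbed samples by proximity to x0, then fit the surrogate.
w = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2)
surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
print("local feature influences:", surrogate.coef_.round(3))
# Features 0 and 2 should dominate, matching the hidden rule above.
```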

Cybersecurity

Cybersecurity is the practice of protecting computer systems, networks, and digital information from unauthorized access, theft, damage, and other harmful attacks. The opacity of AI systems, which often cannot be explained or interpreted, can conceal security breaches [27]. Open-source intelligence (OSINT) utilizes publicly available data from various sources, with implications for national security, political campaigns, the cyber industry, criminal profiling, societal issues, cyber threats, and cybercrimes [32]. The COVID-19 pandemic has introduced novel cybersecurity challenges, leading to the emergence of new themes in the field. Healthcare and cyber resilience are among the newly added cybersecurity themes extensively researched in the peer-reviewed literature, whereas the non-peer-reviewed literature highlights newer forms of cyberattack, such as social engineering and side-channel attacks [33]. These developments reflect the evolving nature of cybersecurity concerns. Stanfill and Marc highlighted the need for data security at various points, such as when collecting data from devices, transferring data between devices, storing data, and using the data [21].

In a systematic review, the utilization of semi-supervised learning (SSL) with cybersecurity data repositories is explored to construct robust models for computer security and cybersecurity systems. SSL is a type of ML that requires only a limited number of labels, or may work with partially labeled data, to build such models, as sketched below. It is crucial that the datasets used in developing these models accurately represent real-world data to ensure their effectiveness and relevance [34]. As a fundamental framework for understanding the links and interdependencies among various cybersecurity components, Grobler et al. offered the three U's of cybersecurity: user, usage, and usability. Their study demonstrates a paradigm shift away from a functional, usage-centric approach to cybersecurity and toward a user-centered strategy that emphasizes the human aspects of users. This user-centered strategy takes the human factor into account, which has the potential to boost the effectiveness of cybersecurity measures [35]. However, since cybersecurity is a constantly evolving challenge, it is essential to develop new strategies to counter emerging threats.
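
The following minimal self-training sketch shows the SSL idea: a classifier trained on a few labeled records iteratively pseudo-labels the unlabeled pool with its most confident predictions. The data are synthetic placeholders standing in for a cybersecurity repository, and the confidence threshold is an arbitrary assumption:

```python
# Self-training, one simple form of semi-supervised learning (SSL).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y_true = (X[:, 0] - X[:, 3] > 0).astype(int)

labeled = np.zeros(len(X), dtype=bool)
labeled[:20] = True                       # only 20 labels to start
y = np.where(labeled, y_true, -1)         # -1 marks unlabeled samples

clf = LogisticRegression()
for _ in range(10):
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.95  # pseudo-label confident samples
    if not confident.any():
        break
    idx = np.flatnonzero(~labeled)[confident]
    y[idx] = proba[confident].argmax(axis=1)
    labeled[idx] = True

print("final accuracy on all data:", round(clf.score(X, y_true), 3))
```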

Responsibility

AI responsibility attribution poses significant questions regarding who, or what, should be held liable for the outcomes of AI actions [36-38]. Some papers explore human responsibility concerning AI systems, emphasizing meaningful control and due diligence, and caution against fully automated systems in medicine; they assert that moral agency is a human attribute absent in AI because of its lack of autonomy and sentience [38]. Others advocate examining the causal chain of human agency, including interactions with technical components such as sensors and software, to determine accountability. The temporal dimension is vital in considering AI's development, application, and maintenance, along with its interactions with other technical elements. Furthermore, since people might not fully understand how AI is involved or what their function as users is, the extent of end users' voluntary and informed use of AI is called into question [36]. Responsibility diffusion occurs when there are multiple options and several agents involved, making it challenging to attribute responsibility clearly; the concept is exemplified by the case of an AI-driven digital tumor board, where clinical decision-making is altered, diffusing responsibility among the various parties [37]. Addressing these responsibility attribution challenges requires attending to the structural and temporal connections between humans and technological elements, ensuring the transparency and traceability of AI systems, and comprehending how technology interfaces contribute to AI-related issues [36].

In the context of ChatGPT, the question of whether AI tools should be credited as authors has sparked debate among experts. Some argue that AI tools lack the capacity to make decisions or contribute to research in the way human authors do, making authorship inappropriate; proponents counter that AI tools can play a significant role in generating ideas and assisting in the writing process, warranting appropriate acknowledgment and credit [39]. The issue revolves around the level of contribution and autonomy that AI tools bring to the research process and the ethical considerations surrounding their recognition in academic and scientific work.

Shifting from data ownership to data stewardship is crucial to ensure responsible data management, safeguard patients' privacy, and adhere to regulatory standards. Data stewardship involves the governance and protection of data, including determining access and sharing permissions, ensuring regulatory compliance, and facilitating collaborations and data exchange for research and technological advancement. Tech giants are combining AI and healthcare for growth, using personal health data to deliver personalized features. To enhance privacy and data security, federated learning, a decentralized approach to training ML models in which raw data never leave their source, is being adopted. Data stewards play a vital role in making data available, accessible, and retrievable while upholding privacy and compliance standards [18].
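
A minimal sketch of federated averaging, the core mechanism behind federated learning, follows: each site trains on its own data and shares only model weights, which a coordinating server averages. The sites, data, and training schedule are synthetic assumptions for illustration:

```python
# Federated averaging: local training + weight averaging, no raw data shared.
import numpy as np

rng = np.random.default_rng(2)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few epochs of logistic-regression gradient descent on local data."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Three hypothetical hospitals, each with private data that stays on site.
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = (X[:, 0] > 0).astype(float)
    sites.append((X, y))

w_global = np.zeros(3)
for _ in range(20):
    # Each site starts from the global weights and trains locally; only
    # the resulting weight vectors are sent back and averaged.
    local_ws = [local_sgd(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local_ws, axis=0)

print("global weights after federated averaging:", w_global.round(3))
```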

Bias

AI algorithms can be influenced by biases present in healthcare data. Beyond well-known study biases such as sampling and inadequate blinding, implicit and explicit biases in the healthcare system itself must be identified, as the large-scale data used to train AI systems may be affected by them. Clinical decision-making can be influenced by factors such as clinical trial eligibility requirements and the implicit biases present in real-world treatment decisions, which in turn affect the predictions given by AI [26]. AI can lead to healthcare inequities through biased data collection and algorithm development and through a lack of diversity in training data, transparency, and research teams, requiring deliberate efforts to address biases and promote equitable outcomes. Particularly with regard to demographic traits such as sex and ethnicity, there is growing recognition of the detrimental effects of model bias. Studies have also revealed poorer implementation rates for specific diseases in rural areas, among racial and ethnic minority groups, among those without insurance or with inadequate insurance, and among individuals with lower education and income [9,40-42].
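
One common way such model bias is quantified is by comparing error rates across demographic subgroups. The following minimal sketch, using hypothetical predictions rather than real clinical outputs, computes the gap in true-positive rates between two groups (an "equal opportunity" check):

```python
# Comparing a model's true-positive rate (TPR) across subgroups.
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])   # actual outcomes
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 0, 0, 0])   # model predictions
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def true_positive_rate(y_t, y_p):
    positives = y_t == 1
    return (y_p[positives] == 1).mean()

tpr = {g: true_positive_rate(y_true[group == g], y_pred[group == g])
       for g in np.unique(group)}
print(tpr)                                # {'A': 0.667, 'B': 0.333}
print("equal-opportunity gap:", abs(tpr["A"] - tpr["B"]))
# Unequal TPRs across groups signal a potential equity problem.
```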

Khoury et al. advocated a public-health approach to addressing inequalities: targeted interventions, policy development, and the establishment of ethical and efficient delivery systems should be the main goals of public-health initiatives. It is essential to involve communities, build coalitions, improve genetic health literacy, and promote diversity in the workforce [40]. A scoping review highlights the potential of AI to address health disparities in both high-income countries (HICs) and low- and middle-income countries (LMICs), while acknowledging that the adoption of AI requires adequate infrastructure. AI has proven helpful in identifying racial disparities in cancer outcomes and in examining the influence of race and socioeconomic position on health outcomes in oncology [43].

Data quality

Convolutional neural networks (CNNs) have gained popularity in image-related tasks, and modern ML techniques have extended their application to non-imaging data. By transforming non-imaging input into images, CNNs can be applied to a variety of tasks beyond conventional imaging. This gives healthcare practitioners the opportunity to train hybrid deep learning models on multi-input/mixed data that incorporate different kinds of patient information, such as genetics, imaging, and clinical data. Compared with relying on a single data type, integrating various data types offers a comprehensive, multiperspective picture of the patient. This strategy shows potential for improving the quality of the data underpinning AI predictions [44].
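
A minimal sketch of such a multi-input/mixed-data model follows: a small CNN branch for imaging data is fused with a dense branch for tabular clinical data before a shared prediction head. All shapes and layer sizes are hypothetical placeholders, not the architecture from the cited study:

```python
# A multi-input model fusing an imaging branch with a tabular branch.
import torch
import torch.nn as nn

class MixedDataModel(nn.Module):
    def __init__(self, n_tabular: int):
        super().__init__()
        self.cnn = nn.Sequential(                   # imaging branch
            nn.Conv2d(1, 8, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 8*4*4 = 128 features
        )
        self.mlp = nn.Sequential(                   # clinical/genetic branch
            nn.Linear(n_tabular, 32), nn.ReLU(),
        )
        self.head = nn.Linear(128 + 32, 1)          # fused prediction head

    def forward(self, image, tabular):
        fused = torch.cat([self.cnn(image), self.mlp(tabular)], dim=1)
        return torch.sigmoid(self.head(fused))

model = MixedDataModel(n_tabular=10)
image = torch.randn(4, 1, 64, 64)     # batch of 4 single-channel scans
tabular = torch.randn(4, 10)          # 10 clinical variables per patient
print(model(image, tabular).shape)    # -> torch.Size([4, 1])
```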

The term "hallucination" refers to a noteworthy failure mode of LLMs in which the model produces inaccurate or misleading information that appears factual and coherent. As a result, language models can produce answers or data that are entirely fictitious or unsubstantiated by actual evidence. For language models to be reliable and trustworthy, it is essential to recognize and address these hallucinatory tendencies, particularly in healthcare, where precise and truthful information is critical [45]. As automation advances, the ability of HIM professionals to identify data patterns becomes essential, requiring additional skills in data analysis tools and techniques. With their in-depth knowledge of healthcare data sources and origins, HIM professionals are well positioned to adapt to evolving AI technologies and take on new roles [21].

Conclusions

The integration of AI in healthcare presents multifaceted ethical challenges that demand meticulous consideration. This review highlights complex issues and potential risks associated with AI implementation in healthcare. Privacy emerges as a critical concern: data-driven healthcare relies on ML and deep learning systems that utilize users' data for predictions, necessitating a delicate balance between data protection and access. Transparency and trust are crucial for successful AI adoption, particularly in high-stakes decision-making contexts where a lack of clarity in AI's decision-making processes can breed skepticism and distrust. Developing frameworks and guidelines for transparent reporting and structured AI model assessment can enhance the trustworthiness and ethical use of AI in medical applications. Cybersecurity measures such as HE and secure enclaves are of the utmost importance to safeguard patients' safety and data integrity against cyber threats. Responsibility attribution in AI remains complex and evolving, requiring a balance between human agency and AI capabilities, while shifting from data ownership to data stewardship can ensure responsible data management and privacy protection. Addressing biases in AI algorithms and data collection is essential to promoting equitable healthcare outcomes, as biases can skew AI-generated predictions and exacerbate healthcare inequities. Leveraging CNNs and multi-input/mixed data models can improve data quality and provide a comprehensive view of patients' information, enhancing the accuracy of AI-generated insights. The phenomenon of hallucination in large language models necessitates rigorous validation and fact-checking to ensure the accuracy and reliability of AI-generated outputs. Collaboration among researchers, healthcare professionals, policymakers, and technology experts is essential to overcome these ethical challenges and promote responsible AI use in healthcare. By developing comprehensive guidelines, regulatory frameworks, and technical solutions that prioritize privacy, transparency, and fairness, AI can revolutionize healthcare delivery and improve patient outcomes while upholding ethical principles.

Acknowledgments

All authors conceptualized the research ideology and design, acquired and interpreted the data, drafted the article, and reviewed and finalized the manuscript. All authors gave final approval to publish the manuscript and agreed to uphold the integrity and accountability of the work.

The authors have declared that no competing interests exist.

References

1. The role of artificial intelligence in healthcare: a structured literature review. Secinaro S, Calandra D, Secinaro A, Muthurangu V, Biancone P. BMC Med Inform Decis Mak. 2021;21:125. doi: 10.1186/s12911-021-01488-9.
2. Machine learning and deep learning. Janiesch C, Zschech P, Heinrich K. Electron Mark. 2021;31:685–695.
3. Natural language processing to improve prediction of incident atrial fibrillation using electronic health records. Ashburner JM, Chang Y, Wang X, et al. J Am Heart Assoc. 2022;11. doi: 10.1161/JAHA.122.026014.
4. Natural language processing: its potential role in clinical care and clinical research. Marder SR. Schizophr Bull. 2022;48:958–959. doi: 10.1093/schbul/sbac092.
5. Introducing ChatGPT. OpenAI. 2023. https://openai.com/blog/chatgpt
6. Try Bard, an AI experiment by Google. 2023. https://bard.google.com
7. Bing Chat | Microsoft Edge. 2023. https://www.microsoft.com/en-us/edge/features/bing-chat
8. Artificial intelligence and machine learning for medical imaging: a technology review. Barragán-Montero A, Javaid U, Valdés G, et al. Phys Med. 2021;83:242–256. doi: 10.1016/j.ejmp.2021.04.016.
9. The role of artificial intelligence in early cancer diagnosis. Hunter B, Hindocha S, Lee RW. Cancers (Basel). 2022;14. doi: 10.3390/cancers14061524.
10. Deep learning and artificial intelligence in radiology: current applications and future directions. Yasaka K, Abe O. PLoS Med. 2018;15. doi: 10.1371/journal.pmed.1002707.
11. How machine learning is powering neuroimaging to improve brain health. Singh NM, Harrod JB, Subramanian S, et al. Neuroinformatics. 2022;20:943–964. doi: 10.1007/s12021-022-09572-9.
12. Machine learning-based models incorporating social determinants of health vs traditional models for predicting in-hospital mortality in patients with heart failure. Segar MW, Hall JL, Jhund PS, et al. JAMA Cardiol. 2022;7:844–854. doi: 10.1001/jamacardio.2022.1900.
13. Your robot therapist will see you now: ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Fiske A, Henningsen P, Buyx A. J Med Internet Res. 2019;21. doi: 10.2196/13216.
14. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Sallam M. Healthcare (Basel). 2023;11. doi: 10.3390/healthcare11060887.
15. Embracing large language models for medical applications: opportunities and challenges. Karabacak M, Margetis K. Cureus. 2023;15. doi: 10.7759/cureus.39305.
16. Ethical conundrums in the application of artificial intelligence (AI) in healthcare—a scoping review of reviews. Prakash S, Balaji JN, Joshi A, Surapaneni KM. J Pers Med. 2022;12. doi: 10.3390/jpm12111914.
17. Ethics & AI: a systematic review on ethical concerns and related strategies for designing with AI in healthcare. Li F, Ruijs N, Lu Y. AI. 2023;4:28–53.
18. Science without conscience is but the ruin of the soul: the ethics of big data and artificial intelligence in perioperative medicine. Canales C, Lee C, Cannesson M. Anesth Analg. 2020;130:1234–1243. doi: 10.1213/ANE.0000000000004728.
19. A review of privacy-preserving techniques for deep learning. Boulemtafes A, Derhab A, Challal Y. Neurocomputing. 2020;384:21–45.
20. Privacy in the age of medical big data. Price WN 2nd, Cohen IG. Nat Med. 2019;25:37–43. doi: 10.1038/s41591-018-0272-7.
21. Health information management: implications of artificial intelligence on healthcare data and information management. Stanfill MH, Marc DT. Yearb Med Inform. 2019;28:56–64. doi: 10.1055/s-0039-1677913.
22. Machine learning for healthcare wearable devices: the big picture. Sabry F, Eltaras T, Labda W, Alzoubi K, Malluhi Q. J Healthc Eng. 2022;2022:4653923. doi: 10.1155/2022/4653923.
23. Secure and robust machine learning for healthcare: a survey. Qayyum A, Qadir J, Bilal M, Al-Fuqaha A. IEEE Rev Biomed Eng. 2021;14:156–180. doi: 10.1109/RBME.2020.3013489.
24. Are chatbots ready for privacy-sensitive applications? An investigation into input regurgitation and prompt-induced sanitization. Priyanshu A, Vijay S, Kumar A, Naidu R, Mireshghallah F. arXiv. 2023. doi: 10.48550/arXiv.2305.15008.
25. Responsible application of artificial intelligence in health care. Obasa AE, Palk AC. S Afr J Sci. 2023;119. https://sajs.co.za/article/view/14889
26. Artificial intelligence in anesthesiology: current techniques, clinical applications, and limitations. Hashimoto DA, Witkowski E, Gao L, Meireles O, Rosman G. Anesthesiology. 2020;132:379–394. doi: 10.1097/ALN.0000000000002960.
27. Recent advances in artificial intelligence and tactical autonomy: current status, challenges, and perspectives. Hagos DH, Rawat DB. Sensors (Basel). 2022;22. doi: 10.3390/s22249916.
28. In AI we trust: ethics, artificial intelligence, and reliability. Ryan M. Sci Eng Ethics. 2020;26:2749–2767. doi: 10.1007/s11948-020-00228-y.
29. Piloting a survey-based assessment of transparency and trustworthiness with three medical AI tools. Fehr J, Jaramillo-Gutierrez G, Oala L, et al. Healthcare (Basel). 2022;10. doi: 10.3390/healthcare10101923.
30. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. Amann J, Blasimme A, Vayena E, Frey D, Madai VI. BMC Med Inform Decis Mak. 2020;20:310. doi: 10.1186/s12911-020-01332-6.
31. Survey of explainable AI techniques in healthcare. Chaddad A, Peng J, Xu J, Bouridane A. Sensors (Basel). 2023;23. doi: 10.3390/s23020634.
32. Open-source intelligence: a comprehensive review of the current state, applications and future perspectives in cyber security. Yadav A, Kumar A, Singh V. Artif Intell Rev. 2023:1–32. doi: 10.1007/s10462-023-10454-y.
33. What changed in the cyber-security after COVID-19? Kumar R, Sharma S, Vachhani C, Yadav N. Comput Secur. 2022;120:102821. doi: 10.1016/j.cose.2022.102821.
34. A systematic literature review of cyber-security data repositories and performance assessment metrics for semi-supervised learning. Mvula PK, Branco P, Jourdan GV, Viktor HL. Discov Data. 2023;1:4. doi: 10.1007/s44248-023-00003-x.
35. User, usage and usability: redefining human centric cyber security. Grobler M, Gaire R, Nepal S. Front Big Data. 2021;4:583723. doi: 10.3389/fdata.2021.583723.
36. Artificial intelligence, responsibility attribution, and a relational justification of explainability. Coeckelbergh M. Sci Eng Ethics. 2020;26:2051–2068. doi: 10.1007/s11948-019-00146-8.
37. Diffused responsibility: attributions of responsibility in the use of AI-driven clinical decision support systems. Bleher H, Braun M. AI Ethics. 2022;2:747–761. doi: 10.1007/s43681-022-00135-x.
38. When doctors and AI interact: on human responsibility for artificial risks. Verdicchio M, Perin A. Philos Technol. 2022;35:11. doi: 10.1007/s13347-022-00506-6.
39. ChatGPT listed as author on research papers: many scientists disapprove. Stokel-Walker C. Nature. 2023;613:620–621. doi: 10.1038/d41586-023-00107-z.
40. Health equity in the implementation of genomics and precision medicine: a public health imperative. Khoury MJ, Bowen S, Dotson WD, et al. Genet Med. 2022;24:1630–1639. doi: 10.1016/j.gim.2022.04.009.
41. The effect of race and sex on physicians' recommendations for cardiac catheterization. Schulman KA, Berlin JA, Harless W, et al. N Engl J Med. 1999;340:618–626. doi: 10.1056/NEJM199902253400806.
42. Participation in cancer clinical trials: race-, sex-, and age-based disparities. Murthy VH, Krumholz HM, Gross CP. JAMA. 2004;291:2720–2726. doi: 10.1001/jama.291.22.2720.
43. The impact of artificial intelligence on health equity in oncology: scoping review. Istasy P, Lee WS, Iansavichene A, et al. J Med Internet Res. 2022;24. doi: 10.2196/39748.
44. Artificial intelligence-driven prediction modeling and decision making in spine surgery using hybrid machine learning models. Saravi B, Hassel F, Ülkümen S, et al. J Pers Med. 2022;12. doi: 10.3390/jpm12040509.
45. Large language models and the perils of their hallucinations. Azamfirei R, Kudchadkar SR, Fackler J. Crit Care. 2023;27:120. doi: 10.1186/s13054-023-04393-x.
