Journal of Research in Nursing. 2024 Feb 29;29(2):143–153. doi: 10.1177/17449871231215696

Artificial Intelligence in nursing: trustworthy or reliable?

Oliver Higgins1,2, Stephan K Chalup3, Rhonda L Wilson4
PMCID: PMC11271667  PMID: 39070561

Abstract

Background:

Trustworthiness in Artificial Intelligence (AI) innovation is a priority for governments, researchers and clinicians; however, clinicians have highlighted trust and confidence as barriers to their acceptance of AI within a clinical application. While there is a call to design and develop AI that is considered trustworthy, AI still lacks the emotional capability to facilitate the reciprocal nature of trust.

Aim:

This paper aims, firstly, to highlight and discuss the enigma of seeking or expecting trust attributes from a machine and, secondly, to reframe the interpretation of trustworthiness for AI by evaluating its reliability and validity, consistent with the use of other clinical instruments.

Results:

AI interventions should be described in terms of competence, reliability and validity, as expected of other clinical tools where quality and safety are a priority. Nurses should be presented with treatment recommendations that describe the validity and confidence of each prediction, with the final decision for care made by nurses. Future research should be framed to better understand how AI is used to deliver care. Finally, there is a responsibility for developers and researchers to influence the conversation about AI and to direct its power towards improving outcomes.

Conclusion:

The sole focus on demonstrating trust rather than the business-as-usual requirement for reliability and validity attributes during implementation phases may result in negative experiences for nurses and clinical users.

Implications for practice:

This research will have significant implications for the way in which future nursing is practised. As AI-based systems become a part of routine practice, nurses will be faced with an increasing number of interventions that require complex trust systems to operate. For AI researchers and developers, understanding the complexity of trust and credibility in the use of AI in nursing will be crucial for successful implementation. This research will contribute to understanding nurses' role in this change.

Keywords: artificial intelligence, machine learning, mental health, nursing, psychosis, technology

Introduction

The role of Artificial Intelligence (AI) in nursing, especially in the mental health setting, is in the early stages of design and development. Increasingly, nurses are required to be active participants in contributing to the design, development and deployment of AI in clinical settings (Ronquillo et al., 2021) and to ensure that any AI product used in clinical practice is used safely and securely. Nursing AI research should focus on establishing programmes of practice that further develop the field if it is to be used to augment and support nursing practice and workflows (Ronquillo et al., 2021). For this to occur, the incorporation of trust and empathy is a fundamental element within the design phases of building meaningful AI-informed clinical decision-making tools. This contrasts with business as usual in the clinical context, whereby reliability and validity are usually the favoured indicators of quality for 'analogue' decisional support tools used in clinical settings. This point of difference presents a quandary in establishing quality and safety discourse in regard to innovation. Should the reliability and validity of decision-making tools be sufficient alone as indicators of quality and safety, and/or should trust and empathy be incorporated or standardised as indicators of quality and safety? This paper discusses several key issues at the cutting edge of AI and Nursing. It provides an encouraging yet cautiously optimistic viewpoint to better understand the role AI will play in Nursing's future. The paper examines mechanisms of trust, the expectations of AI, the role of empathy, anthropomorphism of AI and the importance of appraising AI through an evidence-based lens rather than simply expecting it to demonstrate trust. The aim of this paper is to highlight an inconsistency in the approach to assessing the quality and safety of supportive clinical decision-making tools and to provide some guidance to inform a balanced approach to the assessment of new innovation in such tools.

Background

Clinical Decision Support Systems (CDSS) are one area of particular potential for mental health nurses, especially for complex tasks such as the assessment and prediction of suicidal behaviours in critical settings. A recent integrative review investigated the role of decision support systems based on AI and machine learning in contemporary mental health care practice (Higgins et al., 2023). It found limited literature on the subject; however, it identified a compelling problem: clinician trust in AI systems emerged as a significant theme (Higgins et al., 2023).

Scientific reliability: Inconsistent expectations for nursing implementation

Established nursing practice relies upon many different instruments to augment and guide practice. One example from the general nursing field is the Waterlow Score: a tool routinely used to assess pressure ulcer risk (Charalambous et al., 2018). This tool has been widely incorporated into international practice and is supported by extensive research that demonstrates its reliability (Charalambous et al., 2018). In the mental health context, a further example is found in the Health of the Nation Outcome Scale (HoNOS), which is also used internationally (e.g. UK, Australia and New Zealand (James et al., 2018)) and has demonstrated consistent interrater reliability through an extensive body of research findings (James et al., 2018; Wing et al., 1998). Interestingly, the literature for these two well-established examples describes the reliability and validity of both tools; however, the attribute of trustworthiness is not mentioned as a requirement for implementation (Charalambous et al., 2018; James et al., 2018). This highlights an inconsistency whereby reliability and validity appear to be the preferred attributes for some tools, yet in contrast, trust has arisen as the preferred attribute for the adoption of a tool such as AI in the clinical context. The nature of this inconsistency in expectations about tools that support clinical decision making in nursing contexts requires further investigation and is the subject of this paper.
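
The reliability evidence cited for instruments such as the Waterlow Score and HoNOS typically includes interrater agreement statistics. As a minimal illustration of what such evidence involves (the ratings below are hypothetical, not data from the cited studies), the following Python sketch computes Cohen's kappa for two raters who have scored the same patients:

```python
# Illustrative only: interrater reliability computed as Cohen's kappa
# on hypothetical paired ratings of the same ten patients.
from sklearn.metrics import cohen_kappa_score

# Hypothetical 0-4 item ratings assigned independently by two nurses.
rater_a = [0, 1, 2, 2, 3, 4, 1, 0, 2, 3]
rater_b = [0, 1, 2, 3, 3, 4, 1, 1, 2, 3]

# Raw agreement ignores agreement expected by chance; kappa corrects
# for it (1.0 = perfect agreement, 0.0 = chance-level agreement).
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

A tool is typically reported with such statistics across many raters, sites and samples before it is adopted into routine practice; the point here is only that reliability is quantifiable in a way that trustworthiness is not.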

The ethics of black boxes and anthropomorphism

AI is increasingly evident in all parts of life; however, the resulting complex machines are not always understood and can often be black-box in nature, a characteristic that raises significant concerns for implementation (Rai, 2019). The term black-box denotes that researchers may understand the system's inputs and outputs, but the process or method the model uses to produce the output remains unknown (Rai, 2019). Significantly, this unknown element, a perplexing feature of the black-box inherent in the nature of AI, combined with the human tendency to anthropomorphise machines and robots (Airenti, 2015), can result in humans attributing moral characteristics and behaviour to the AI (Ryan, 2020).

Additionally, the general media perpetuate popular perspectives about what these complex machines might do if left to their 'human-like devices', such as depictions of sentient robots (Liang and Lee, 2017). This has resulted in an association and categorisation of what 'intelligent' machines are capable of by inferring human definitions and contexts (Liang and Lee, 2017). As such, the public expectation that AI should deliver trustworthy solutions has become increasingly influential. In response, government entities have formed organisations such as the European Commission High-Level Expert Group (HLEG) on AI to assist in the review and critique of AI use and to promote the Ethics Guidelines for Trustworthy AI (HLEG AI, 2019). Likewise, countries such as Singapore have undertaken a series of AI governance and ethics initiatives designed to build an ecosystem of trust that supports the adoption of AI (Singapore: InfoComm Media Development Authority, 2019; Singapore: Singapore Computer Society, 2020). As public concern has increased, the World Health Organization released Ethics and Governance of Artificial Intelligence for Health in 2021, which calls upon organisations and governments to demonstrate trustworthiness with providers and patients for the ethical use of AI interventions in health care (World Health Organization, 2021). However, viewing AI as trustworthy may undermine the value of interpersonal trust, thereby further reinforcing the notion of anthropomorphising the AI and ultimately diverting accountability from the organisations responsible for its design and development (Ryan, 2020).

Can AI be trusted?

The notion that AI has the capacity to be considered trustworthy is complex, as trust is an integral and essential part of human relationships (Ryan, 2020). HLEG proposes that the main characteristics of trusting AI can be divided into three components: the AI technology itself; the design and development organisations behind its conception and use; and the socio-technical elements of the AI life cycle (HLEG AI, 2019). AI is a machine, albeit a sophisticated machine, but a machine nonetheless, and it is important to note that trustworthiness is not a value traditionally attributed to machines (HLEG AI, 2019). Therefore, while AI may meet the rational account of trust, it cannot be held accountable for its actions, as it does not possess the requisite emotional states to be held responsible for them (Ryan, 2020). As such, it should not be considered trustworthy in the clinical setting, but rather considered in terms of reliability and validity (Ryan, 2020), and the tool should be appraised within an Evidence-Based Decision-Making framework (Sevy Majers and Warshawsky, 2020).

Many nurses have expressed reluctance about the use of AI, stating that the nature and complexity of learning algorithms can obscure the rationale or clinical reasoning for a recommendation (Rai, 2019). This can lead to hesitancy or anxiety for some nurses when they do not understand the process or logic that the AI system undertook to provide a particular recommendation (Brown et al., 2020). Nurses are less likely to engage with or trust AI black-box recommendations (Brown et al., 2020), preferring an explicit understanding of the underlying AI mechanisms and logic. AI-based innovations should be expected to demonstrate the rigour, reliability and validity of any other supportive clinical decisional intervention or instrument (Brown et al., 2020). Recent findings indicate that trust and confidence in AI are significant barriers to successfully implementing any intervention using AI CDSS in the mental health setting (Higgins et al., 2023). Imparting trust is an inherently human emotional construct (McLeod, 2021); as such, the concept of AI trustworthiness may be an additional barrier preventing nurses from accepting AI within their practice. Without improvements in transparency about the logic and process within AI, it remains challenging for nurses to trust AI innovations despite the quality and validity indicators that may be apparent. Researchers and developers should note that there is more work to be done to build nurses' confidence in AI CDSS in clinical practice.

Trust and confidence with the unknown

Trust can be defined as a confident relationship with the unknown (Botsman, 2018). It is the divide that exists between two points of change: from where a person currently 'is' to where the change requires them to be. Crossing the change divide requires a leap of trust, and events such as the COVID-19 pandemic revealed that clinicians are required to take trust leaps faster and wider than ever before (Dodgson, 2022). Qualitative feedback during times of rapid change echoes with examples such as: 'I liked the old system much better', 'Why do we have to do this?' or 'I have so much to do, and now I'm being asked to change X as well!' (Botsman, 2018). AI developers and researchers should not underestimate the investment in trust required as the frequency of change increases, as the change may be perceived by clinicians to directly challenge their highly valued clinical judgement and competence.

Typically, four fundamental domains, divided into two paired subsets, are used when deciding whether to attribute trust to someone or something. Firstly, competence and reliability relate to 'How we do things'. Secondly, integrity and empathy represent 'Why we do things' (Botsman, 2018). Applied together, the how and the why of what we do assist in formulating our willingness to trust. Daily transactions commonly require people to undertake a leap of trust evoking the four trust domains. For example, competence might be assessed by asking: Do they have the skills, knowledge, time and resources to do a particular task or job? Reliability can be assessed by asking: Can people depend on you to keep your promises and commitments? Integrity can be judged by considering: Do you say what you mean and mean what you say? Empathy can be demonstrated by deciding: Do you care about the other person's interests as well as your own? Do you think about how your decisions and actions affect others? (Botsman, 2018). It is possible to seek answers to these questions with relative confidence when they are asked of another human; however, it becomes more complicated when the same questions are asked of a non-sentient AI, and therefore trust is more difficult to establish.

Competence, reliability and integrity

Competence and reliability pertain to ‘How we do things’ and align with questions that are straightforward in application for an AI intervention. Competence is evaluated by asking: Does the AI have the skills, knowledge, time, and resources to do the task? Are we open about what it is capable of? Reliability is measured through examination of past performance: Is it consistent in the way it behaves from one day to the next? Is it dependable? Thus, the aforementioned questions can be considered with a logic that offers sound rationale for the concept of trust (Botsman, 2018). This same logic can be equally applied to both AI and clinical tool implementation.

AI integrity is a developing field of research, and at present it overlaps with the first two components of competence and reliability. Integrity is more complex than notions of competence and reliability, articulating as an attribute of 'Why we do things'. To assess whether someone, or something, has integrity, the following questions need to be considered: Do you say what you mean, and mean what you say? Do your words align with your actions? Are you honest about your intentions and motives towards others?

Integrity in AI further comprises four components: health, explainability, security and reproducibility (Talagala, 2019). The AI model, including its production and deployment system, must be healthy; for example, AI predictions behave in production as expected and within norms specified by the developer. Explainability is necessary to determine why the algorithm behaved as it did, revealing the factors that led to a prediction. Security is established when the algorithm maintains the truth of its health and explainability even when it encounters malicious or non-malicious attacks that aim to change or manipulate its behaviour.
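
To make the health component concrete, the following Python sketch (an illustration of the idea described above, not code from the cited sources) checks that a deployed model's predictions remain within norms specified by the developer; the values are hypothetical placeholders:

```python
# Illustrative 'health' check: flag a deployed model when its share of
# high-risk predictions drifts outside developer-specified norms.
import numpy as np

def health_check(predicted_risks: np.ndarray,
                 expected_positive_rate: float = 0.05,
                 tolerance: float = 0.05) -> bool:
    """Return False (unhealthy) if the observed rate of high-risk
    predictions departs from the norm the developer specified."""
    positive_rate = float(np.mean(predicted_risks >= 0.5))
    healthy = abs(positive_rate - expected_positive_rate) <= tolerance
    if not healthy:
        print(f"ALERT: positive rate {positive_rate:.2f} is outside "
              f"{expected_positive_rate:.2f} +/- {tolerance:.2f}")
    return healthy

# A hypothetical day of risk scores from a deployed model; a fixed
# seed keeps the example reproducible.
rng = np.random.default_rng(42)
todays_scores = rng.beta(2, 8, size=500)
print("Model healthy today:", health_check(todays_scores))
```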

As the science evolves, explainable AI continues to address black-box limitations through innovative research described as glass-box technology. The glass-box acts to reveal, explain and investigate the inner workings of the model, improving transparency (Nori et al., 2019). Lastly, an important scientific principle is that all predictions must derive from a chain of logic that is reproducible. If an outcome cannot be replicated, there is no reliable way to understand what led to the outcome, or to debug issues if, and where, problems arise (Talagala, 2019). In summary, competence, reliability and integrity are logical scientific requirements that characterise the traditional scientific representation of trustworthiness and enable confidence with the unknown.

The number of countries worldwide upskilling their workforce with AI skills is not known; however, a number of nursing bodies are actively investing in AI-powered nursing education and training programmes. In the United States, the National League for Nursing (NLN) has developed a competency framework for AI-enabled nursing practice (Lebo and Brown, 2022). The framework identifies the knowledge, skills and abilities that nurses need to use AI effectively in their practice. The NLN is also working to develop AI-powered educational resources for nurses (Lebo and Brown, 2022). In the United Kingdom, the National Health Service (NHS) is investing in AI-powered tools to help nurses with tasks such as clinical decision-making, patient monitoring and medication administration. It is also working to develop AI-powered training programmes for nurses with nursing input (NHS AI Lab & Health Education England, 2022).

Empathy

Another factor that contributes to an exploration of trust in the context of AI is the notion of empathetic engagement. Empathy demonstrates an ability to understand and share the feelings of another, coupled with the ability to imagine what someone else might be thinking or feeling (Botsman, 2018). For example, ‘Do you care about the other person’s interests as well as your own? Do you think about how your decisions and actions affect others?’ (Botsman, 2018). AI is capable of meeting the first three trust components of competence, reliability and integrity. However, addressing empathy attribution is much more complicated for developers and researchers to achieve because it is not possible for a non-sentient machine to actually ‘feel’ empathy, although person-like emulation is a design possibility with significant non-human limitations for the human user to navigate.

The concept of empathy and AI can be illustrated with a case explanation. Consider Pixar's 2008 animated fictional film WALL-E (Stanton, 2008), which follows the story of a little robot programmed to clean up the earth long after humanity has left. There is no dialogue in the first 30 minutes, with the animators applying various techniques throughout this opening stanza to evoke the viewer's empathy towards the little robot. One viewer described WALL-E 'as a character with dimension, personality, and heart' (Srivastava, 2016). Examining WALL-E reveals a vital clue: WALL-E depends on the human viewer to engage their human empathy to give 'him' personality. WALL-E underscores the importance of empathetic engagement, for without empathy, WALL-E is not perceived as real (Srivastava, 2016). Herwix et al. (2022) observed a relationship between perceived technical competence and both affect-based and cognitive-based trust, reinforcing the importance of empathy in human-computer interaction (HCI) when planning digital health interventions (Søgaard Neilsen and Wilson, 2019). Just as empathetic engagement occurs for the viewer, so too is it important that clinicians, researchers and developers recognise the empathetic components necessary to convey holistic trust in AI within the clinical setting.

As we have discussed, all four trust domains must be actively engaged for trust to arise in an AI context. The 'How we do things' facet of competence and reliability is relatively straightforward; however, the 'Why we do things' characteristics related to integrity and empathy are more complex. Complexity in the context of rapid development of AI innovations applied to clinical settings will require clinicians to take wider leaps of trust with increased frequency. Researchers and developers should recognise that an investment of trust is required when people are asked to leap into unknown phenomena, and should consider ways to facilitate the development of a confident relationship with the unknown to establish prerequisite trust formation (Botsman, 2018). Understanding the mechanisms that inform the components of trust helps to identify the best-fit alignment of AI implementation within the clinical environment.

Designed to empathise

Researchers and developers must account for the aforementioned four trust domains as they design and implement AI innovations. A further link in this logic is the alignment of empathy. The theoretical approach of Design Thinking promotes an important consideration of empathy in the context of AI. Design thinking employs an iterative process in which the researcher seeks to understand the requirements of all users, challenge assumptions, redefine problems and create innovative solutions that can be prototyped and tested systematically (Köppen and Meinel, 2015). Crucially, design thinking requires a consideration of empathy as a first step, employing a cognitive and emotional technique that aids the development process. The Empathise domain is focused on exploring the nature of the problem and understanding the users and their needs, by seeking user perspectives directly through focus groups or one-to-one sessions with end users (Köppen and Meinel, 2015). The synthesis of this phase gives rise to a categorisation step, which defines the main findings and acts as a 'persona' (an ideal user) to validate decisions later in the process. The design thinking process accelerates development while including clinicians, designers, researchers and people with lived experience in the empathetic design and application of AI interventions. While the literature on the topic is limited, a recent review revealed evidence supporting the importance of including clinical staff in the final decision-making process (Higgins et al., 2023); in doing so, it is possible to generate requisite clinician empathy within the AI environment. Developers and researchers of AI innovations have an ethical responsibility for the AI solutions they invent, building on strengths while recognising and compensating for weaknesses to mitigate risks and promote safety and quality for the end user (Ryan, 2020). Empathetic AI is not about teaching machines to feel emotion in the way a person might, but instead about using AI coding with regard to ethics, thereby facilitating sufficient empathy to determine the next best action to take for a patient.

As researchers and developers look towards the future, the empathic components of their innovations will grow ever more crucial for success. Consider the future presented in the film Robot and Frank, in which a robot designed with empathy provides care and companionship to those who might not otherwise have that luxury (Schreier, 2012). How developers and researchers deal with empathy and anthropomorphism will be crucial to adoption and value for the end user. However, they must consider anthropomorphism and its role in non-human entities being given human characteristics and feelings (Airenti, 2015). To better understand anthropomorphism, consider the 'Braitenberg vehicle': a thought experiment in which the straightforward behaviour of simple vehicles (robots) appears deliberate and motivated by emotions, rather than merely governed by preprogrammed reactions (Braitenberg, 1986). For instance, a Braitenberg vehicle might perform in a way that appears 'fearful' or 'curious' due to how it responds to environmental cues, which could cause a bystander to anthropomorphise the vehicle and attribute these emotions to it. In Braitenberg vehicles, a minor increase in the complexity of the system can quickly transform a relatively basic system into a black box for humans (Rai, 2019). As a result, people may begin to lose faith in the system as they come to believe that they cannot influence the vehicle's behaviour (Braitenberg, 1986). The resulting sophisticated system's lack of transparency in its behaviour can lead people to become less empathetic towards it, since they are less likely to perceive the system as a thinking being with its own emotions and goals (Braitenberg, 1986). The importance of empathy in acceptance can be evidenced in ChatGPT (a large language model), which was recently launched (OpenAI, 2022). ChatGPT has been trained on so much data (almost all of the internet) that its abilities can exceed those of humans while providing human-like answers (Aljanabi et al., 2023). ChatGPT has been tested using the Emotion Twenty Questions (EMO20Q) game, with results demonstrating that it can engage with a human and identify human emotion in a manner similar to the way medical students are trained (Noever and McKee, 2023). Future innovations will require the empathetic component to be clearly understood for the innovation to succeed in adoption.
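
Braitenberg's point can be made concrete with a short simulation. The sketch below (an illustration under simplified assumptions, not code from the cited work) models a vehicle with two light sensors, each driving a wheel; merely crossing the wiring between sensors and wheels changes behaviour that an observer reads as 'fear' into behaviour that reads as 'aggression':

```python
# Minimal simulation of Braitenberg's vehicles 2a and 2b: each light
# sensor drives a wheel, and crossing the wiring flips the apparent
# 'emotion' from fear (avoids the light) to aggression (charges it).
import math

def simulate(crossed: bool, steps: int = 200):
    x, y, heading = 0.0, 0.0, 0.0   # start at origin, facing +x
    light = (4.0, 3.0)              # fixed light source, ahead-left
    for _ in range(steps):
        def sensor(offset: float) -> float:
            # Sensor sits slightly off the heading axis; its reading
            # falls off with squared distance to the light.
            sx = x + 0.2 * math.cos(heading + offset)
            sy = y + 0.2 * math.sin(heading + offset)
            d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
            return 1.0 / (1.0 + d2)
        left, right = sensor(0.5), sensor(-0.5)
        # Uncrossed (2a): each sensor excites its own wheel, so the
        # brighter side spins faster and the vehicle turns away.
        # Crossed (2b): sensors excite the opposite wheels, so the
        # vehicle turns towards the light and speeds up approaching it.
        lw, rw = (right, left) if crossed else (left, right)
        heading += 0.5 * (rw - lw)  # a faster right wheel turns it left
        speed = 0.5 * (lw + rw)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return round(x, 1), round(y, 1)

print("crossed wiring ('aggression') ends near", simulate(True))
print("uncrossed wiring ('fear') ends near", simulate(False))
```

Nothing in either vehicle senses fear or aggression; the emotion exists only in the observer, which is precisely the anthropomorphism at issue.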

Implications for incorporation of AI CDSS in nursing practice

Four implications for practice have been identified in this discussion. Firstly, AI interventions should be described in terms of competence, reliability and validity, as expected of other clinical tools where quality and safety are a priority. This consistency in the narrative supports the incorporation of AI augmentation within clinical practice. Acceptance of an intervention based on trust alone does not rest on a consistent chain of logic and risks compromising the quality and safety of its implementation. It is also apparent that a detailed description of the decision logic is required to support the implementation of AI-based interventions. This must include clear descriptions of how clinical features contributed to the outcome, communication of multiple recommendations, and the confidence attached to each recommendation. In Australia, the Australian Alliance for Artificial Intelligence in Healthcare (AAAiH) has published A Roadmap for Artificial Intelligence in Healthcare for Australia, offering a high-level view of the use of AI in care and recommending the acceleration of AI training for the health workforce by developing foundational AI frameworks for health professionals and accredited programmes to train specialist AI health professionals (Australian Alliance for Artificial Intelligence in Healthcare, 2021). It is imperative that nursing leadership bodies develop AI strategies to ensure Nursing has a clear direction and voice in this rapid change.

Secondly, nurses should be presented with treatment recommendations that describe the validity and confidence of each prediction and show the chain of logic, with the final decision on the appropriate course of treatment resting with nurses. Tools that utilise glass-box design, such as InterpretML (Nori et al., 2019), which allow the developer and researcher to understand the model's inner workings, should be encouraged and incorporated into clinical interfaces.
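
A minimal sketch of how such an interface element might be produced is given below, using InterpretML's Explainable Boosting Machine; the feature names, data and model here are hypothetical placeholders rather than a validated clinical instrument:

```python
# Glass-box sketch with InterpretML's Explainable Boosting Machine.
# The features and data are hypothetical, not a validated model.
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier

# Hypothetical training data: made-up clinical features and outcomes.
rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "prior_presentations": rng.integers(0, 6, n),
    "sleep_hours": rng.integers(3, 9, n),
    "phq9_score": rng.integers(0, 27, n),
})
y = ((X["phq9_score"] + 2 * X["prior_presentations"]
      - X["sleep_hours"] + rng.normal(0, 3, n)) > 10).astype(int)

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# What a clinical interface could surface for one patient: the
# predicted probability (the confidence) alongside each feature's
# additive contribution to that prediction (the chain of logic).
patient = X.iloc[[0]]
risk = ebm.predict_proba(patient)[0, 1]
explanation = ebm.explain_local(patient).data(0)
print(f"Predicted risk: {risk:.2f}")
for name, score in zip(explanation["names"], explanation["scores"]):
    print(f"  {name}: {score:+.2f}")
```

Because the EBM is additive, the per-feature contributions shown here sum (with an intercept) to the model's raw score, so a recommendation can be audited rather than trusted blindly.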

Thirdly, future research should be framed to better understand how AI is used to deliver care and how it will affect those with a mental illness, especially where psychosis is a targeted condition for AI interventions, as research has previously demonstrated (Higgins et al., 2023). Further, design thinking concepts should be applied to research to reveal the AI attributes required to deliver effective care for those with mental illness. Importantly, this will identify where empathy should be located within the intervention, how it is to be applied and how this is communicated. Finally, developers and researchers have a responsibility to influence the conversation about AI, taking control and proving its value through responsible applications and directing its power towards improving outcomes (PEGA, 2019).

Conclusion

Future AI research will have significant implications for the way in which nursing is practised. As AI-based systems become a part of routine practice, nurses will be faced with an increasing number of interventions that require complex trust systems to operate. For AI researchers and developers, understanding the complexity of trust and credibility in the use of AI in nursing will be crucial for successful implementation. Trustworthiness is a complex concept and, when considered from an HCI perspective, such as in the case of AI CDSS in the nursing context, it must also be considered in conjunction with competence, reliability, integrity and empathy: competence and reliability to validate the capability and dependability of the system, integrity to ensure explainability, and empathy to determine the next best action to take for a patient. To discuss AI in terms of its trustworthiness alone risks the illogical endorsement of human attributes in a machine. This paper has presented an overview of the components of trust in the context of AI applied to the nursing context and clinical setting. Understanding the role of trust should be considered an important step towards the successful acceptance of AI into clinical practice. Communicating a system's validity and reliability while creating transparent innovations will form a significant component of support for the implementation of AI-based clinical instruments in routine clinical nursing practice.

Key points for policy, practice and/or research

  • AI strategies need to be developed by Nursing leadership bodies to ensure Nursing has a clear direction and voice in this rapid change.

  • Nurses need to partner with developers and researchers to influence the conversation about AI, taking control and proving its value through responsible applications and directing its power towards improving outcomes.

  • Nurses should be presented with treatment recommendations that describe the validity and confidence of each prediction and show the chain of logic, with the final decision on the appropriate course of treatment resting with nurses.

  • AI evaluation needs to model evidence-based practice guidelines; interventions should be described in terms of competence, reliability and validity, as expected of other clinical tools where quality and safety are a priority.

  • Future research should be framed to better understand how AI is used to deliver care and how it will affect those with a mental illness.

Acknowledgments

The authors would like to acknowledge the support of Central Coast Local Health District.

Biography

Oliver Higgins is a mental health nursing scientist and computing scientist with a research focus on Artificial Intelligence and Machine Learning in Mental Health, specifically Emergency Department presentations.

Stephan K Chalup’s work is built on an in-depth understanding of the human brain and how it works as a complex and powerful processing system. Through his work, Stephan aims to explain how human thoughts are represented and processed in our brains – and provide tantalising insights into what makes us tick.

Rhonda L Wilson is a Wiradjuri woman, experienced nurse and an internationally recognised mental health nursing scientist. Her work in e-health is paving the way for new digital therapeutic interventions that promote and support patient-centred care and increased wellbeing.

Footnotes

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Partial financial support was received from NSW Ministry of Health as part of the Towards Zero Suicides initiative.

Ethical approval: Ethical clearance is not required as this review is based on published literature.

ORCID iD: Oliver Higgins https://orcid.org/0000-0001-6545-145X

Contributor Information

Oliver Higgins, PhD Candidate, School of Nursing and Midwifery, University of Newcastle, Gosford, NSW, Australia; Manager, Population, Data & Outcome Measures, Mental Health, Central Coast Local Health District, Gosford, NSW, Australia.

Stephan K Chalup, Professor in Data Science, School of Information and Physical Sciences (Computer Science and Software Engineering), University of Newcastle, Newcastle, NSW, Australia.

Rhonda L Wilson, Professor of Nursing, School of Nursing and Midwifery, University of Newcastle, Gosford, NSW, Australia.

References

  1. Airenti G. (2015) The cognitive bases of anthropomorphism: from relatedness to empathy. International Journal of Social Robotics 7: 117–127. [Google Scholar]
  2. Aljanabi M, Ghazi M, Ali AH, et al. (2023) ChatGpt: Open Possibilities. Iraqi Journal For Computer Science and Mathematics 4: 62–64. [Google Scholar]
  3. Australian Alliance for Artificial Intelligence in Healthcare (2021) A Roadmap for AI in Healthcare for Australia. AAAIH, Macquarie University. [Google Scholar]
  4. Botsman R. (2018) Who Can You Trust?: How Technology Brought Us Together and Why It Might Drive Us Apart. London: Penguin. [Google Scholar]
  5. Braitenberg V. (1986) Vehicles: Experiments in Synthetic Psychology. Cambridge, MA: MIT Press. [Google Scholar]
  6. Brown LA, Benhamou K, May AM, et al. (2020) Machine learning algorithms in suicide prevention: Clinician interpretations as barriers to implementation. Journal of Clinical Psychiatry 81: 19m12970. DOI: 10.4088/JCP.19m12970. [DOI] [PubMed] [Google Scholar]
  7. Charalambous C, Koulori A, Vasilopoulos A, et al. (2018) Evaluation of the validity and reliability of the Waterlow Pressure Ulcer Risk Assessment Scale. Medical Archives 72: 141–144. DOI: 10.5455/medarh.2018.72.141-144. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Dodgson JE. (2022) The pandemic has brought too much change: Too many preprints; too many retractions. Journal of Human Lactation 38: 207–208. DOI: 10.1177/08903344221082571. [DOI] [PubMed] [Google Scholar]
  9. Herwix A, Haj-Bolouri A, Rossi M, et al. (2022) Ethics in information systems and design science research: Five perspectives. Communications of the Association for Information Systems 50: 34. [Google Scholar]
  10. Higgins O, Short BL, Chalup SK, et al. (2023) Artificial intelligence (AI) and machine learning (ML) based decision support systems in mental health: An integrative review. International Journal of Mental Health Nursing 32: 966–978. DOI: 10.1111/inm.13114. [DOI] [PubMed] [Google Scholar]
  11. HLEG AI. (2019) Ethics Guidelines for Trustworthy AI. Available at: https://digital-strategy.ec.europa.eu/en/policies/expert-group-ai [Google Scholar]
  12. James M, Painter J, Buckingham B, et al. (2018) A review and update of the Health of the Nation Outcome Scales (HoNOS). BJPsych Bulletin 42: 63–68. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Köppen E, Meinel C. (2015) Empathy via design thinking: Creation of sense and knowledge. In: Plattner H, Meinel C, Leifer L. (eds) Design Thinking Research. Cham: Springer, pp. 15–28. [Google Scholar]
  14. Lebo C, Brown N. (2022) Integrating Artificial Intelligence (AI) simulations into undergraduate nursing education: An evolving AI patient. Nursing Education Perspectives. Epub ahead of print. 21 December 2022. DOI: 10.1097/01.Nep.0000000000001081. [DOI] [PubMed] [Google Scholar]
  15. Liang Y, Lee SA. (2017) Fear of autonomous robots and artificial intelligence: Evidence from National Representative data with probability sampling. International Journal of Social Robotics 9: 379–384. DOI: 10.1007/s12369-017-0401-3 [DOI] [Google Scholar]
  16. McLeod C. (2021) Trust. In: The Stanford Encyclopedia of Philosophy, Fall 2021 edn. Stanford, CA: Metaphysics Research Lab, Stanford University. [Google Scholar]
  17. NHS AI Lab & Health Education England (2022) Developing Healthcare Workers’ Confidence in AI. Available at: https://digital-transformation.hee.nhs.uk/binaries/content/assets/digital-transformation/dart-ed/understandingconfidenceinai-may22.pdf (accessed 07 February 2023). [Google Scholar]
  18. Noever D, McKee F. (2023) Chatbots as problem solvers: Playing twenty questions with role reversals. arXiv preprint arXiv:2301.01743 [Google Scholar]
  19. Nori H, Jenkins S, Koch P, et al. (2019) InterpretML: A unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223. [Google Scholar]
  20. OpenAI (2022) ChatGPT: Optimizing Language Models for Dialogue. Available at: https://openai.com/blog/chatgpt/ (accessed 07 February 2023).
  21. PEGA (2019) AI and Empathy: Combining Artificial Intelligence With Human Ethics for Better Engagement. Available at: https://www.pega.com/insights/resources/ai-and-empathy-combining-artificial-intelligence-human-ethics-better-engagement (accessed 07 February 2023). [Google Scholar]
  22. Rai A. (2019) Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science 48: 137–141. [Google Scholar]
  23. Ronquillo CE, Peltonen LM, Pruinelli L, et al. (2021) Artificial intelligence in nursing: Priorities and opportunities from an international invitational think-tank of the Nursing and Artificial Intelligence Leadership Collaborative. Journal of Advanced Nursing 77: 3707–3717. DOI: 10.1111/jan.14855. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Ryan M. (2020) In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics 26: 2749–2767. DOI: 10.1007/s11948-020-00228-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Schreier J. (director) Acord L, Bisbee S, Kelman-Bisbee J, Niederhoffer G. (producer) (2012) Robot and Frank. Stage 6 Films. [Google Scholar]
  26. Sevy Majers J, Warshawsky N. (2020) Evidence-based decision-making for nurse leaders. Nurse Lead 18: 471–475. DOI: 10.1016/j.mnl.2020.06.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Singapore: InfoComm Media Development Authority (2019) Singapore Wins International Award for its Artificial Intelligence Governance and Ethics Initiatives. Available at: https://www.imda.gov.sg/news-and-events/Media-Room/Media-Releases/2019/singapore-wins-international-award-for-its-artificial-intelligence-governance-and-ethics-initiatives (accessed 07 February 2023). [Google Scholar]
  28. Singapore: Singapore Computer Society (2020) Singapore Computer Society, InfoComm Media Development Authority. AI Ethics & Governance Body of Knowledge. Available at: https://ai-ethics-bok.scs.org.sg/ (accessed 07 February 2023). [Google Scholar]
  29. Søgaard Neilsen A, Wilson RL. (2019) Combining e-mental health intervention development with human computer interaction (HCI) design to enhance technology-facilitated recovery for people with depression and/or anxiety conditions: An integrative literature review. International Journal of Mental Health Nursing 28: 22–39. DOI: 10.1111/inm.12527. [DOI] [PubMed] [Google Scholar]
  30. Srivastava MB. (2016) The computational and aesthetic foundations of artificial empathy. Intersect: The Stanford Journal of Science, Technology, and Society 10: 8. [Google Scholar]
  31. Stanton A. (director) (2008) WALL-E. Pixar. [Google Scholar]
  32. Talagala N. (2019) ML Integrity: Four Production Pillars For Trustworthy AI. Available at: https://www.forbes.com/sites/cognitiveworld/2019/01/29/ml-integrity-four-production-pillars-for-trustworthy-ai/?sh=b6dce0a5e6fe (accessed 25 May 2022).
  33. Wing JK, Beevor AS, Curtis RH, et al. (1998) Health of the Nation Outcome Scales (HoNOS): Research and development. British Journal of Psychiatry 172: 11–18. DOI: 10.1192/bjp.172.1.11. [DOI] [PubMed] [Google Scholar]
  34. World Health Organization (2021) Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva: WHO. [Google Scholar]
