Introduction
The rapid convergence of Artificial Intelligence (AI) technologies has the potential to reshape organisations across both the private and public sectors (Daugherty et al., 2019). In particular, there is emerging anecdotal evidence that AI is influencing patient journeys and medical practices, with the potential to revolutionise the healthcare landscape. While AI promises to unlock patient data and deliver more personalised, evidence-based medicine (He et al., 2019), it also raises significant concerns, leading to patient distrust and ethical quandaries.
The collection and use of personal data by AI and analytical algorithms give rise to serious issues, including privacy invasion, fraud, lack of transparency, misuse of algorithms, information leakage, and identity theft (Sivarajah et al., 2017; Wearn et al., 2019). In fact, a survey indicated that 63% of UK adults are uncomfortable allowing AI to replace doctors and nurses for some tasks, such as suggesting treatments, and 49% are not willing to share their personal health data to develop algorithms that might improve the quality of care (Fenech et al., 2018).
AI could pose potential risks to care delivery by devaluing physicians’ skills, failing to meet transparency standards, underestimating algorithmic biases, and neglecting the fairness of clinical deployment (Vayena et al., 2018). Such ethical dilemmas and concerns, if not adequately addressed when implementing AI for digital health and medical analytics, can not only harm patients but also tarnish the reputation of healthcare organisations (Wang et al., 2018). In response to these ethical challenges, many countries have implemented data protection regulations, such as the UK’s Data Protection Act 2018, which is in line with the General Data Protection Regulation (GDPR) formulated by the European Union. These regulations aim to improve individuals’ confidence in sharing personal information with healthcare organisations, and they have prompted a scholarly and practical focus on the responsible use of AI.
Responsible AI refers to the integration of ethical and responsible AI use into strategic implementation and organisational planning processes (Wang et al., 2023). It aims to design and implement ethical, transparent, and accountable AI solutions that help organisations maintain trust and minimise privacy invasion. Responsible AI places humans (e.g., patients) at the centre and aligns with stakeholder expectations as well as applicable regulations and laws. The ultimate goal of responsible AI is to strike a balance between satisfying patient needs through responsible AI use and attaining long-term economic value for healthcare organisations. Despite its importance for organisational prosperity and the significant attention devoted to it, responsible AI use in healthcare is still in its nascent stages.
Original Research in this Special Issue
This special issue includes nine research studies that address a wide range of topics within our theme of Responsible Artificial Intelligence for Digital Health and Medical Analytics.
Two studies in the special issue (Fosso Wamba & Queiroz, 2023; Trocin et al., 2023) provide holistic reviews of how AI is being used for digital health, an area where such reviews remain scarce. Fosso Wamba and Queiroz (2023) present a bibliometric analysis exploring the dynamics of the interplay between AI and digital health approaches, considering the responsible AI and ethical aspects of scientific production over the years. The research identifies four distinct periods in the publication dynamics and the most popular AI approaches in the healthcare field. In terms of contributions, this work provides a framework integrating AI technologies, approaches, and applications while discussing several barriers and benefits of AI-based health, and five insightful propositions emerge from the main findings. The study’s originality stems from providing a framework with a set of propositions that considers responsible AI and ethical issues in digital health. Trocin et al. (2023) provide a comprehensive analysis of health AI using responsible AI concepts as a structural lens. The study presents a systematic literature review that supports the data collection and sampling procedure, the corresponding analysis, and the extraction of research themes, providing an evidence-based foundation. The research contributes a systematic description and explanation of the intellectual structure of responsible AI in digital health and develops an agenda for future research.
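To make one building block of such bibliometric work concrete, the short sketch below tallies publications per year to reveal shifts in output over time, the kind of publication-dynamics evidence from which distinct periods can be identified. This is a minimal, generic illustration with made-up counts; it is not Fosso Wamba and Queiroz’s (2023) actual procedure, which analyses 14,128 papers with dedicated bibliometric tooling.

```python
# Minimal sketch of publication-dynamics analysis: papers per year.
# The year values below are illustrative stand-ins for a bibliographic export.
from collections import Counter

years = [2015, 2016, 2016, 2018, 2019, 2019, 2019, 2020, 2020, 2020, 2020, 2021]

counts = Counter(years)
for year in sorted(counts):
    # A simple text histogram makes growth phases easy to eyeball
    print(f"{year}: {'#' * counts[year]} ({counts[year]} papers)")
```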
The remaining seven studies in the special issue are based on empirical data. Al-Dhaen et al. (2023) examine healthcare professionals’ continuous intention to use the Internet of Medical Things (IoMT) in combination with responsible AI. The study is underpinned by the theory of Diffusion of Innovation (DOI) and presents a model developed to determine continuous intention to use IoMT, taking into account the risks and complexity involved in using AI. Data were gathered from 276 healthcare professionals through a survey questionnaire across hospitals in Bahrain. The findings show that, despite contradictions associated with AI, continuous intention to use behaviour can be predicted during the diffusion of IoMT.
Johnson et al. (2023) examine the use of responsible AI in healthcare to predict and prevent insurance denials for economic and social wellbeing. The study adopts the Design Science Research (DSR) paradigm and develops a Responsible Artificial Intelligence (RAI) solution that helps hospital administrators identify potentially denied claims. Guided by five principles, the framework utilises six AI algorithms, classified as white-box and glass-box, and employs cross-validation to tune hyperparameters and determine the best model. The results show that a white-box algorithm (AdaBoost) yields an AUC of 0.83, outperforming all other models. The research’s primary implications are to (1) help providers reduce operational costs and increase the efficiency of insurance claim processes and (2) help patients focus on their recovery instead of appealing denied claims.
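To illustrate the model-selection step the paper describes, the following sketch tunes an AdaBoost classifier with cross-validation and scores it by AUC using scikit-learn. The synthetic dataset, class labels, and parameter grid are assumptions made for illustration; they are not drawn from Johnson et al.’s (2023) actual pipeline.

```python
# Minimal sketch: cross-validated hyperparameter tuning of AdaBoost, scored by AUC.
# Synthetic data stands in for claim records labelled denied (1) / paid (0).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Hypothetical search space; the paper's actual grid is not reported here
param_grid = {"n_estimators": [100, 200, 400], "learning_rate": [0.05, 0.1, 0.5]}
search = GridSearchCV(AdaBoostClassifier(random_state=0), param_grid,
                      scoring="roc_auc", cv=5)
search.fit(X_train, y_train)

# Evaluate the tuned model on held-out data by AUC
test_auc = roc_auc_score(y_test, search.best_estimator_.predict_proba(X_test)[:, 1])
print(f"best CV AUC: {search.best_score_:.2f}, test AUC: {test_auc:.2f}")
```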
Kumar et al. (2023) conduct a mixed-method study to identify the constituents of responsible AI in the healthcare sector and investigate its role in value formation and market performance. The study context is India, where AI technologies are in the developing phase. The results from 12 in-depth interviews provide a more nuanced understanding of how different facets of responsible AI guide healthcare firms in evidence-based medicine and improved patient-centred care. PLS-SEM analysis of 290 survey responses validates the theoretical framework and establishes responsible AI as a third-order factor. Findings from 174 dyadic responses further confirm the mediation mechanism whereby patients’ cognitive engagement with responsible AI solutions and perceived value lead to market performance.
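The mediation logic tested here can be illustrated with a simple regression-based sketch. Note that this is not the PLS-SEM procedure Kumar et al. (2023) actually employ; it is a plain ordinary-least-squares illustration with synthetic variables, included only to make the indirect-effect idea concrete.

```python
# Illustrative regression-based mediation check on synthetic data.
# Conceptual path: responsible AI (X) -> cognitive engagement (M) -> market performance (Y).
# Kumar et al. (2023) use PLS-SEM; this simpler sketch only conveys the logic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 290
x = rng.normal(size=n)                                   # responsible AI (composite score)
m = 0.6 * x + rng.normal(scale=0.8, size=n)              # engagement, partly driven by X
y = 0.5 * m + 0.1 * x + rng.normal(scale=0.8, size=n)    # market performance

# Path a: X -> M; paths b and c': M and X jointly -> Y
path_a = sm.OLS(m, sm.add_constant(x)).fit()
path_by = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()

indirect = path_a.params[1] * path_by.params[1]          # a * b indirect effect
print(f"indirect effect (a*b): {indirect:.2f}, direct effect (c'): {path_by.params[2]:.2f}")
```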
El-Haddadeh et al. (2023) examine considerations of responsible AI in the deployment of AI-based COVID-19 digital proximity tracking and tracing applications in two countries: the State of Qatar and the United Kingdom. Based on an analysis of alignment with the Good AI Society framework and sentiment analysis of official tweets, the diagnostic analysis yields contrasting findings for the two applications. While the EHTERAZ application (Arabic for precaution) in Qatar fell short in adhering to responsible AI requirements, it contributed significantly to controlling the pandemic. On the other hand, the UK’s NHS COVID-19 application exhibited limited success in fighting the virus despite relatively abiding by these requirements. This underlines the need for a practical and contextual view in a comprehensive discourse on responsible AI in healthcare, thereby offering guidance for striking a balance between responsible AI requirements and the pressures of fighting the pandemic.
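As an illustration of the tweet-level sentiment component of such an analysis, the sketch below scores a handful of example texts with the off-the-shelf VADER analyser. Both the tool choice and the example tweets are assumptions made for illustration; the paper does not necessarily use this library.

```python
# Minimal sketch of scoring sentiment in official app-related tweets.
# VADER is an illustrative choice; El-Haddadeh et al. (2023) may use other tooling.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

tweets = [  # hypothetical examples of official contact-tracing app tweets
    "Download the app today to help protect your community.",
    "We are aware of login issues affecting some users and are working on a fix.",
]

analyzer = SentimentIntensityAnalyzer()
for tweet in tweets:
    scores = analyzer.polarity_scores(tweet)  # returns neg/neu/pos/compound scores
    print(f"{scores['compound']:+.2f}  {tweet}")
```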
Wang et al. (2023) investigate how signals of AI responsibility affect healthcare practitioners’ attitudes toward AI, satisfaction with AI, and AI usage intentions, as well as the underlying mechanisms. The study outlines autonomy, beneficence, explainability, justice, and non-maleficence as the five key signals of AI responsibility for healthcare practitioners. The findings reveal that these five signals significantly increase healthcare practitioners’ engagement, which subsequently leads to more favourable attitudes, greater satisfaction, and higher usage intentions with AI technology. Moreover, ‘techno-overload’, a primary ‘techno-stressor’, moderates the mediating effect of engagement on the relationship between AI justice and behavioural and attitudinal outcomes. The study argues that when healthcare practitioners perceive AI technology as adding extra workload, such techno-overload undermines the importance of the justice signal and subsequently affects their attitudes, satisfaction, and usage intentions with AI technology.
Gupta et al. (2023) examine whether AI risks in digital healthcare are positively associated with responsible AI, along with the moderating effects of perceived trust and perceived privacy risk. The theoretical model is based on perceived risk theory, which is important in the context of this study, as risks related to uneasiness and uncertainty can be expected in the development of responsible AI due to the volatile nature of intelligent applications.
Finally, Liu et al. (2023) examine the impact of responsible AI on businesses, drawing on an analysis of 25 in-depth interviews with healthcare professionals. Their exploratory analysis reveals that abiding by responsible AI principles allows healthcare businesses to improve the effectiveness of their social media marketing initiatives with their users.
We present a summary of the nine papers in our special issue in Table 1. This table captures the diversity and depth of the papers, indicating the methodology, key contributions, and the dataset used. The methods employed range from systematic literature reviews and bibliometric analyses to empirical studies involving surveys and interviews. Each paper explores a unique aspect of the overarching theme, providing fresh insights into the interplay between artificial intelligence, digital health, and medical analytics. From uncovering the dynamics between AI and digital health to investigating the impact of AI on businesses, these studies offer a comprehensive view of the current landscape. A variety of datasets have been used, including surveys from healthcare professionals, data from AI-based COVID-19 tracking applications, and in-depth interviews. The findings from these studies extend our understanding of responsible AI in the healthcare sector, highlighting the potential benefits and challenges, ethical considerations, and the future directions of this rapidly evolving field. By collectively examining these papers, readers can gain a holistic understanding of the role of responsible AI in shaping digital health and medical analytics.
Table 1. Summary of the papers in this special issue

| Authors | Method | Dataset Used | Key Contributions |
|---|---|---|---|
| Fosso Wamba and Queiroz (2023) | Bibliometric analysis | 14,128 papers from 51,458 authors | • Identified four distinct periods in the publication dynamics and the most popular AI approaches in healthcare. • Presented a framework integrating AI technologies and applications with responsible AI and ethical considerations. |
| Trocin et al. (2023) | Systematic literature review | 34 papers included | • Presented a systematic description and explanation of the intellectual structure of responsible AI in digital health and proposed an agenda for future research. |
| Al-Dhaen et al. (2023) | Empirical study (survey) | Survey data from 276 healthcare professionals in Bahrain | • Despite contradictions associated with AI, continuous intention to use behaviour can be predicted during the diffusion of IoMT. |
| Johnson et al. (2023) | Design Science Research (DSR) | - | • Developed an RAI solution for identifying potentially denied claims, leading to reduced operational costs and improved efficiency of insurance claim processes. |
| Kumar et al. (2023) | Mixed method | 12 in-depth interviews and 290 survey responses | • Identified facets of responsible AI guiding healthcare firms in evidence-based medicine and improved patient-centred care. • Established responsible AI as a third-order factor. |
| El-Haddadeh et al. (2023) | Empirical study (analysis of applications) | Data from two AI-based COVID-19 tracking and tracing applications | • Highlighted the need for a practical and contextual view for a comprehensive discourse on responsible AI in healthcare. |
| Wang et al. (2023) | Empirical study (survey) | 404 valid responses from healthcare professionals | • Five signals of AI responsibility significantly increase healthcare practitioners’ engagement, leading to more favourable attitudes, greater satisfaction, and higher usage intentions with AI technology. |
| Gupta et al. (2023) | Empirical study (survey) | 246 respondents in India | • Explored the positive association between AI risks in digital healthcare and responsible AI, with trust and privacy risks as moderating factors. |
| Liu et al. (2023) | Empirical study (interviews) | 25 in-depth interviews with healthcare professionals | • Abiding by responsible AI principles can improve the effectiveness of social media marketing initiatives in healthcare businesses. |
Pathways for Further Research
Current research on the use of AI in healthcare primarily focuses on the technological understanding of its implementation and the exploration of the economic value of AI applications. However, there is a notable lack of comprehensive studies on the practices, mechanisms, infrastructure, and ecosystem supporting responsible AI use in this context. This creates an urgent need for research on AI in healthcare from a social responsibility perspective. By pursuing it, we can transform ethical considerations from potential barriers into opportunities that enhance trust and engagement among patients.
Understanding the role of responsible AI use in creating value in healthcare not only contributes to an emerging field in Information Systems (IS) research but also provides practical recommendations for healthcare practitioners. The papers in this special issue have begun to address these areas, but there is much more to be explored. Potential pathways for further research include the following. First, while several papers have explored the application of responsible AI in various health scenarios, more in-depth research could be conducted to understand the specific contexts in which these AI applications excel or face challenges. Second, the papers by Fosso Wamba and Queiroz (2023) and Trocin et al. (2023) have identified different periods and themes in the evolution of AI in healthcare. Longitudinal studies could track this evolution, examine changes over time, and predict future trends. Third, the role of regulations and policies deserves attention. Given the ethical considerations surrounding responsible AI, more research could be dedicated to studying the impact of different regulations and policies on the development and deployment of AI in healthcare. Fourth, as Al-Dhaen et al. (2023) have shown, user behaviour plays a crucial role in the adoption of AI technology. Further research could delve into the factors that influence user behaviour, especially in the face of potential risks and uncertainties. In addition, the research by Wang et al. (2023) reveals that “techno-overload” can affect healthcare practitioners’ attitudes and usage intentions; more research could explore how such “techno-stressors” can be mitigated.
Finally, the paper by El-Haddadeh et al. (2023) compares AI-based COVID-19 tracking and tracing applications in two countries. Cross-cultural studies could be carried out to understand the cultural factors influencing the deployment and acceptance of AI in healthcare. We recommend that future research study responsible AI in medical tourism, as cross-country healthcare systems may add another layer of complexity that needs to be unpacked (Olya & Nia, 2021). Based on Siala and Wang’s (2022) SHIFT framework (Sustainability, Human-centredness, Inclusiveness, Fairness, and Transparency), which suggests a pathway for shifting AI towards responsible use in healthcare, we recommend five themes for future research on the development and application of AI for digital health and medical analytics (see Table 2).
Table 2. Suggested directions for future research on responsible AI in healthcare

| Research themes | Suggested future directions |
|---|---|
| Sustainable AI | • Policies and practices to leverage AI for socially responsible purposes. • Development of curricula and training programs to educate both current and future practitioners. • Regulation and legislation to prevent irresponsible application of AI in healthcare services and operations. • Agile AI response strategies for future medical crises. • Exploring the role of AI in supporting sustainable healthcare systems, e.g., through resource optimisation or predictive analytics for preventive healthcare. • Evaluating the long-term sustainability of AI applications in healthcare. |
| Human-centric AI | • Mechanisms to enable user involvement in the design and application of AI solutions. • Frameworks for multi-actor engagement (including AI agents and healthcare professionals) for co-design and evaluation of AI initiatives. • Exploration of the paradoxical nature of bias from the socio-materiality view of algorithmic operation. • Developing a human-centric approach in which healthcare professionals are kept in the loop of AI-enabled clinical decisions. • Understanding patient experiences and psychological impacts associated with AI-powered care. |
| Inclusive AI | • Investigation of enablers for inclusive healthcare services using AI. • Tactical changes (e.g., role specifications) to employ AI toward inclusive medical services. • Integrated communication plans that enhance the experiences of both patients and healthcare professionals. • Investigating the role of AI in promoting cultural competence in healthcare, e.g., through language translation or cultural sensitivity algorithms. • Assessing potential barriers to AI adoption among diverse populations and developing strategies to overcome them. |
| Fair AI | • Analysis of algorithmic attributes to minimise potential biases in the application of AI, including medical analytics. • AI systems that are perceived as trustworthy and fair by key actor groups, primarily patients. • Affordable AI solutions to address health disparities among disadvantaged patients. • Addressing potential sources of bias in healthcare AI algorithms, such as datasets that lack diversity. • Evaluating how AI can contribute to health equity, e.g., by improving access to care or addressing social determinants of health. |
| Transparent AI | • Data management plans to reduce risks of data loss and cyberattacks. • Transparent mechanisms for advising evidence-based treatments using AI approaches. • AI solutions that make health-related outputs and procedures more accessible to patients. |
Acknowledgements
We sincerely thank the authors who submitted their research to this special issue. We also thank all the reviewers of this special issue for their time and effort.
Biographies
Uthayasankar Sivarajah
is a Chair in Technology Management and Circular Economy at the School of Management, University of Bradford. He enjoys interdisciplinary research and teaching focusing on the use of emerging digital technology by businesses and organisations for the betterment of society. He is the Deputy Editor of the Journal of Enterprise Information Management and has published over 70 scientific articles in leading peer-reviewed journals and conferences. His research tackling societal challenges such as energy-efficient digital currencies, consumer behaviour, and the reduction of waste has featured in reputable media outlets such as the World Economic Forum, BBC Yorkshire, and The Conversation. To date, he has a successful track record as Principal and Co-Investigator on over £3 million worth of R&I and consultancy projects funded by national and international funding bodies and commercial organisations. Notable funders include the European Commission (FP7, H2020, Marie Curie), the Qatar National Research Fund (QNRF), Innovate UK/DEFRA, and the British Council, with projects addressing business and societal challenges surrounding themes such as blockchain use in financial services, smart waste and cities, energy-efficient data centres, social innovation, and participatory budgeting.
Dr. Yichuan Wang
is an Associate Professor/Senior Lecturer in Digital Marketing at Sheffield University Management School, University of Sheffield, UK. He holds a Ph.D. in Business & Information Systems from the Raymond J. Harbert College of Business, Auburn University (USA). His research focuses on examining the role of digital technologies and information systems (e.g., big data analytics, AI, and social media) in influencing business practices. His research has been published in the British Journal of Management, Information & Management, IEEE Transactions on Engineering Management, Annals of Tourism Research, Journal of Travel Research, Journal of Business Research, Industrial Marketing Management, International Journal of Production Economics, and Technological Forecasting and Social Change. He has edited several special issues on digitalisation-related topics and chaired mini-tracks at leading IS conferences such as HICSS, AMCIS, and ECIS. He sits on the editorial board of Enterprise Information Systems.
Dr. Hossein Olya
is a Professor of Cultural and Creative Industries at the University of Sheffield, UK. Professor Olya joined Sheffield University Management School in 2018 and served as Research Development Director of the Marketing and CCI group until 2021. Prior to joining Sheffield University Management School, he worked at Oxford Brookes Business School (Oxford, UK) and Sejong University (Seoul, South Korea). Throughout his career, he has taken an active approach to interdisciplinary and multidisciplinary research, investigating complex social problems with the aim of developing impactful and innovative conclusions. He is actively leading research projects in the areas of cultural consumption and sustainable management. He currently serves as an associate editor of the International Journal of Consumer Studies and previously served in the same role for the Service Industries Journal. He has delivered keynotes at many prestigious international conferences in the USA, UK, Italy, South Korea, Cyprus, Turkey, Kazakhstan, and Africa.
Dr. Sherin Mathew
is the Founder and CEO of AI Tech UK, the largest AI ecosystem in the north of the UK, built with a mission to bridge the divide and to evangelise and democratise AI across the Northern Powerhouse. He was previously an AI Lead at IBM and worked at Microsoft GBS, UK. Sherin specialises in transforming businesses by turning challenges into opportunities, turning businesses into cognitive enterprises, and scaling startups to more than 3X their valuation with cutting-edge solutions. He helps businesses get creative and strategise growth, delivering multi-million returns by leveraging his experience with elite Fortune 500 brands. As an accomplished public speaker, he promotes technical knowledge, standards, and best practices through keynotes, lectures, articles, user demonstrations, and AI projects for hackathons and bootcamp events held in the UK.
Contributor Information
Uthayasankar Sivarajah, Email: u.sivarajah@bradford.ac.uk.
Yichuan Wang, Email: Yichuan.Wang@sheffield.ac.uk.
Hossein Olya, Email: h.olya@sheffield.ac.uk.
Sherin Mathew, Email: sherin@ai-tech.uk.
References
- Al-Dhaen, F., Hou, J., Rana, N. P., & Weerakkody, V. (2023). Advancing the understanding of the role of responsible AI in the continued use of IoMT in healthcare. Information Systems Frontiers, 25(6). https://doi.org/10.1007/s10796-021-10193-x
- Daugherty, P. R., Wilson, H. J., & Chowdhury, R. (2019). Using artificial intelligence to promote diversity. MIT Sloan Management Review, 60(2), 1.
- El-Haddadeh, R., Fadlalla, A., & Hindi, N. M. (2023). Is there a place for responsible artificial intelligence in pandemics? A tale of two countries. Information Systems Frontiers, 25(6). https://doi.org/10.1007/s10796-021-10140-w
- Fenech, M., Strukelj, N., & Buston, O. (2018). Ethical, social and political challenges of artificial intelligence in health. Available at: https://wellcome.ac.uk/sites/default/files/ai-in-health-ethical-social-political-challenges.pdf
- Fosso Wamba, S., & Queiroz, M. M. (2023). Responsible artificial intelligence as a secret ingredient for digital health: Bibliometric analysis, insights, and research directions. Information Systems Frontiers, 25(6). https://doi.org/10.1007/s10796-021-10142-8
- Gupta, S., Kamboj, S., & Bag, S. (2023). Role of risks in the development of responsible artificial intelligence in the digital healthcare domain. Information Systems Frontiers, 25(6). https://doi.org/10.1007/s10796-021-10174-0
- He, J., Baxter, S. L., Xu, J., Xu, J., Zhou, X., & Zhang, K. (2019). The practical implementation of artificial intelligence technologies in medicine. Nature Medicine, 25(1), 30–36. https://doi.org/10.1038/s41591-018-0307-0
- Johnson, M., Albizri, A., & Harfouche, A. (2023). Responsible artificial intelligence in healthcare: Predicting and preventing insurance claim denials for economic and social wellbeing. Information Systems Frontiers, 25(6). https://doi.org/10.1007/s10796-021-10137-5
- Kumar, P., Dwivedi, Y. K., & Anand, A. (2023). Responsible artificial intelligence (AI) for value formation and market performance in healthcare: The mediating role of patient’s cognitive engagement. Information Systems Frontiers, 25(6). https://doi.org/10.1007/s10796-021-10136-6
- Liu, R., Gupta, S., & Patel, P. (2023). The application of the principles of responsible AI on social media marketing for digital health. Information Systems Frontiers, 25(6). https://doi.org/10.1007/s10796-021-10191-z
- Olya, H., & Nia, T. H. (2021). The medical tourism index and behavioral responses of medical travelers: A mixed-method study. Journal of Travel Research, 60(4), 779–798. https://doi.org/10.1177/0047287520915278
- Siala, H., & Wang, Y. (2022). SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Social Science & Medicine, 296, 114782. https://doi.org/10.1016/j.socscimed.2022.114782
- Sivarajah, U., Kamal, M. M., Irani, Z., & Weerakkody, V. (2017). Critical analysis of big data challenges and analytical methods. Journal of Business Research, 70, 263–286. https://doi.org/10.1016/j.jbusres.2016.08.001
- Trocin, C., Mikalef, P., Papamitsiou, Z., & Conboy, K. (2023). Responsible AI for digital health: A synthesis and a research agenda. Information Systems Frontiers, 25(6). https://doi.org/10.1007/s10796-021-10146-4
- Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLoS Medicine, 15(11), e1002689.
- Wang, W., Chen, L., Xiong, M., & Wang, Y. (2023). Accelerating AI adoption with responsible AI signals and employee engagement mechanisms in health care. Information Systems Frontiers, 25(6). https://doi.org/10.1007/s10796-021-10154-4
- Wang, Y., Kung, L., & Byrd, T. A. (2018). Big data analytics: Understanding its capabilities and potential benefits for healthcare organizations. Technological Forecasting and Social Change, 126, 3–13. https://doi.org/10.1016/j.techfore.2015.12.019