Health Research Policy and Systems. 2025 Sep 26;23:115. doi: 10.1186/s12961-025-01390-0

Artificial-intelligence-driven governance: addressing emerging risks with a comprehensive risk-prevention-centred model for public health crisis management

Ching-Hung Lee 1,#, Zhichao Wang 1,#, Dianni Wang 1, Shupeng Lyu 1,, Chun-Hsien Chen 2
PMCID: PMC12465421  PMID: 41013547

Abstract

Background

In response to the coronavirus disease 2019 (COVID-19) pandemic, an emerging public health crisis with global impact, various artificial intelligence (AI)-enabled pandemic-prevention devices emerged, highlighting the urgent need to understand how the public leverages AI-enabled digital technologies.

Methods

This study constructs a comprehensive model, the Risk Prevention-centred and AI-enabled Anti-pandemic Technology Acceptance Model (RPAA-TAM), to elucidate public adoption of anti-pandemic digital tools, contributing to innovative governance. Integrating TAM, social influence theory and risk perception theory, RPAA-TAM analyses technology development and explores factors influencing public acceptance of AI in pandemic prevention.

Results

The study identifies seven key factors impacting public acceptance: external variables, public trust, perceived benefit, perceived risk, attitude toward use, behavioural intention to use and system usage, offering insights into the integration of AI in managing emerging public health crises. On the basis of the RPAA-TAM, the study derives seven novel propositions from a literature review.

Conclusions

The Risk Prevention-centred and AI-enabled Anti-pandemic Technology Acceptance Model (RPAA-TAM) offers a comprehensive framework for understanding public acceptance of AI in pandemic prevention. Identifying seven key factors impacting acceptance, our study provides novel propositions on the basis of a literature review. RPAA-TAM contributes to innovative governance strategies, guiding the ethical and socially acceptable integration of AI in managing public health crises.

Keywords: COVID-19 pandemic, Artificial intelligence (AI), Evolutionary technology acceptance model, Emerging public health crisis

Introduction

An “emerging public health crisis” refers to a novel and rapidly evolving situation that poses a significant threat to the health of a population, often due to an infectious disease outbreak or a similar health event. Such crises are characterized by their sudden onset, potential for rapid spread and the need for urgent and coordinated responses from governments, health organizations and communities. Examples include the coronavirus disease 2019 (COVID-19) pandemic, which began in early 2020; future crises could involve new or re-emerging infectious diseases, bioterrorism events or pandemics resulting from viral mutations. The COVID-19 pandemic exemplifies an emerging public health crisis, as it was caused by a novel coronavirus that spread globally, leading to widespread illness and death and significant disruption to social and economic systems. The pandemic highlighted the importance of preparedness, surveillance and the need for innovative solutions to manage such crises effectively.

The COVID-19 pandemic, which began in early 2020, is notable as an unprecedented worldwide health emergency with significant effects on public health and the global economy [29, 30, 35, 58, 59]. As an emerging public health crisis, it required a swift and comprehensive response from various sectors of society. In terms of pandemic prevention and control, the emergence of digital technologies such as cloud computing, Internet of things (IoT), big data analytics, blockchain, and AI has drastically changed people’s lifestyles and ushered in a “new normal” [5, 29, 30, 60]. These technologies have played a crucial role in managing the crisis, from tracking the spread of the virus to facilitating remote work and learning.

The transition from an offline to an integrated online and offline lifestyle has been accelerated by the ways in which governments, society, enterprises and individuals have adapted to face pandemic threats [10, 11, 31–34, 39]. This adaptation was necessary to maintain social and economic functioning whilst minimizing the spread of the virus. The government’s normalized pandemic policies and evolving social needs have prompted enterprises to expedite digital transformation [7, 21, 29–34], giving rise to the rapid emergence of AI-driven pandemic prevention technologies [2, 3, 42]. These technologies have been critical in addressing the challenges posed by the emerging public health crisis, showcasing the importance of innovation and adaptability in the face of unprecedented health emergencies.

TAM-based and AI-based studies for preventing and combating emerging public health crises

An emerging public health crisis, such as the COVID-19 pandemic, is a situation where a new or previously controlled health threat rapidly evolves, threatening to cause significant harm to a population. These crises often require swift and innovative solutions to manage their spread and impact, making the integration of technology and public health responses critical. Amidst the COVID-19 pandemic, the rapid growth of the telemedicine sector, with telehealth at its forefront, aligns with the technology acceptance model (TAM) principles. This model helps us understand how users accept and integrate new technologies into their behaviours and routines, which is particularly relevant during a crisis where the adoption of new health technologies can be a matter of life and death.

Anderson et al. [2] highlight a surge in telemedicine usage, consistent with TAM’s predictions, as people sought medical care whilst maintaining social distancing measures. Investigating patterns and factors influencing telemedicine adoption during the pandemic, Al Meslamani et al. [3] focus on the United Arab Emirates, revealing a favourable attitude and broad acceptance amongst the population. This acceptance is crucial in an emerging public health crisis, as it can significantly impact the effectiveness of telemedicine as a tool for managing the crisis. Torp et al. explore general practitioners’ acceptance of video consultations for managing type 2 diabetes, finding positive impacts on attitude and perceived usefulness. Their study emphasizes the significant role of perceived usefulness in practitioners’ favourable attitudes, contributing insights into the potential use of video consultations for chronic illnesses during the pandemic. In Poland, a modified TAM supports the emergence of telehealth as a popular method for remote primary care during the crisis, demonstrating the adaptability of TAM in different cultural and healthcare contexts.

In the ongoing technological revolution, AI stands out as a transformative force shaping production, daily life and learning [31, 32, 38, 59]. Amidst pandemic control, AI has proven effective in various scenarios, aiding in curbing the virus’s spread [10, 29, 47]. AI-based technologies have been used for contact tracing, predictive modelling of disease spread and even in the development of vaccines, showcasing their potential in managing emerging public health crises. Despite its positive impact, AI’s use sparks public controversies due to issues such as algorithmic subjectivity and opacity, potentially leading to adverse outcomes. The AI revolution has mitigated ideological conflicts at the social level but has also introduced new risks and complex challenges [10, 22, 37, 38]. At the public level, AI’s influence on production relations could result in unemployment and decision-making failures, posing social risks [26, 28, 34]. As AI spearheads technological and economic transformation, there is an imperative for the public to comprehensively strengthen its scientific response to social risks. This involves integrated and synergistic risk governance, addressing both technological aspects and societal dimensions [10, 23, 29]. In the context of an emerging public health crisis, this means that AI and other digital technologies must be developed and implemented with consideration for their broader social implications, ensuring that they are used ethically and effectively to protect public health.

State of the art of digital technology, social influence and emerging public health crisis

The development of emerging technologies, such as artificial intelligence (AI), extends beyond technological breakthroughs; it encompasses social issues tied to societal development and public needs [10, 23]. In the context of emerging public health crises, such as the unprecedented COVID-19 pandemic, these technologies have been thrust into the spotlight as critical tools for managing and mitigating the spread of infectious diseases. The rapid deployment of AI in contact tracing, diagnostics, vaccine development and telemedicine has highlighted the importance of technology in addressing the immediate and long-term challenges posed by such crises [30, 32].

Social risks associated with these technologies, marked by high uncertainty and correlation, present challenges in understanding their development paths. The emergence of a public health crisis exacerbates these risks, as the urgency to respond can lead to hasty implementations without fully considering the social and ethical implications. As technology spreads, risks such as moral dilemmas, security concerns, ethical issues, privacy challenges, algorithmic black boxes and biases amplify [26, 37] (Lou et al. 2010). These heightened risks contribute to public concerns, potentially leading to psychological imbalances and biases – a critical social issue in technological development during a crisis [5, 35, 53, 59].

Therefore, the study of emerging technologies, particularly artificial intelligence, transcends mere technology acceptance metrics such as perceived usability and ease of use. It urgently necessitates an examination of the intricate relationship between technology and social or public acceptance, especially in the context of a public health emergency. The challenges posed by public health crises demand effective emergency response efforts. In the new normal and digital transformation era, advanced digital technologies, including artificial intelligence and big data analytics, are crucial for managing risks such as the COVID-19 pandemic [31].

Integrating these technologies with safety management practices and consciousness is a pressing research issue at the intersection of safety science, emergency management, public health, technology management and public policy [16, 24]. This paper focusses on the relationship between pandemic prevention technology development and public perception, expectations and acceptance in the context of artificial intelligence application during the pandemic. By incorporating “social factors” (social influence and facilitating conditions) into the existing technology acceptance model, the study enriches the model, enhancing the mutual adaptability between emerging technologies and society.

The central research theme is aiding the public in comprehending and accepting the use of developing technologies, specifically AI, during public crises, balancing benefits and risks. This is particularly relevant as the world grapples with the ongoing impacts of the COVID-19 pandemic and prepares for future public health emergencies, where the role of AI and other digital technologies will be increasingly significant in shaping the response and recovery strategies.

Research gaps of the extended technology acceptance model (TAM) in the AI era when facing emerging public health crises

Whilst the technology acceptance model (TAM) and its extended models have been instrumental in understanding the adoption of digital technologies, there are significant research gaps when it comes to applying these models to the context of a comprehensive risk-prevention-centred model for public health crisis management in the new normal era and the AI era. The traditional TAM focusses on perceived usefulness and ease of use, yet it falls short in addressing the multifaceted challenges posed by emerging public health crises such as pandemics. In the current landscape, where AI and other advanced technologies are being rapidly integrated into public health responses, there is a need for a more nuanced understanding of how these technologies are perceived and adopted within the context of crisis management.

Research gaps include the lack of comprehensive models that consider the social, ethical and psychological dimensions of technology acceptance during a public health crisis. Current models do not sufficiently account for the rapid changes in user behaviour and the accelerated adoption of technologies that occur in response to emergencies. Additionally, there is a need for research that explores the role of trust, privacy concerns and the impact of information overload in the context of AI-driven health technologies. The integration of social influence theory and risk perception theory into TAM is a step towards addressing these gaps, but more work is needed to fully understand how these factors interact and influence public acceptance of AI in managing public health crises.

Furthermore, there is a research gap in understanding the long-term effects of technology adoption during crises and how these technologies can be sustainably integrated into public health systems beyond the immediate crisis. The new normal era, characterized by remote work, digital surveillance and AI-driven solutions, requires an extended TAM model that can accommodate rapid technological advancements and evolving societal norms and expectations. Future research should focus on developing a more holistic model that not only predicts technology acceptance, but also evaluates the broader implications of these technologies on public health policy, societal wellbeing and ethical standards in the AI era.

To address these gaps, we propose the following research questions (RQs):

  1. RQ1 – Theoretical Integration

In what ways do social-influence dynamics and public risk-perception mechanisms deepen our understanding of citizens’ willingness to adopt AI-driven tools during emerging public-health crises?

  2. RQ2 – Ethical and Societal Embedding

How can an extended technology acceptance model (TAM) be architected to transparently embed, and systematically evaluate, the ethical and societal ramifications (privacy, equity, employment) of AI-infused public-health technologies?

  3. RQ3 – Empirical Validation via Case Study

Does a real-world deployment – specifically, the nationwide roll-out of AI-enabled medical-service robots in China’s COVID-19 response – empirically corroborate the predictive and explanatory power of the proposed and extended TAM framework?

The COVID-19 pandemic accelerated the deployment of AI-driven health technologies at unprecedented speed and scale. Yet, public resistance – from privacy protests over contact-tracing apps to vaccine-chatbot boycotts – revealed that “crisis urgency” alone does not guarantee acceptance. Traditional TAM-based studies neither capture the heightened risk calculus of citizens during a health emergency nor integrate the latest ethical debates surrounding AI governance. RPAA-TAM fills this gap by simultaneously extending TAM with social-influence and risk-perception lenses, and embedding AI-driven factors to yield actionable guidance for crisis-time decision-makers.

In the conclusion of our study, we will provide insights that address these research questions, offering a comprehensive analysis of the factors influencing the acceptance of AI in public health crisis management and the development of a more holistic model that considers the broader implications of these technologies.

The remaining sections are organized as follows. The foundations and applications of the technology acceptance model (TAM), social influence theory, risk perception theory and TAM-based studies during COVID-19 are reviewed in the literature review of the Methods and the construction of the refined conceptual model based on technology acceptance model, social influence theory and risk perception theory section. Next, a conceptual framework for an AI-based, risk-prevention-centred anti-pandemic technology acceptance model (RPAA-TAM) is presented in the Propositions of the RPAA-TAM section. The propositions of the proposed model are examined in the Case illustration and explanation based on RPAA-TAM section, where the case illustration is used to validate the conceptual model. Lastly, we discuss and summarize conclusions, contributions and implications in the Conclusions and future directions section.

Methods and the construction of the refined conceptual model based on technology acceptance model, social influence theory and risk perception theory

The usage behaviour of anti-pandemic technology is shaped by a complex set of factors, including consumer literacy, income level, perception of the product and risk perception. This paper integrates the technology acceptance model (TAM), social influence theory and risk perception theory to analyse the key factors driving usage of emerging anti-pandemic digital technology.

Foundation and applications of technology acceptance model

The technology acceptance model (TAM) was first proposed by Professor Davis in 1989 and is based on the theory of reasoned action (TRA) and the theory of planned behavior (TPB), which explain the determinants of widespread public acceptance of information systems mainly from a cognitive perspective [13, 55]. From this cognitive perspective, perceived usefulness and perceived ease of use influence individuals’ final decision-making behaviour by influencing public attitudes; the original technology acceptance model is shown in Fig. 1.

Fig. 1 Original technology acceptance model (TAM)

The technology acceptance model (TAM) posits that external variables directly impact perceived ease of use and perceived usefulness. However, the model lacks detailed explanations of these external variables. Scholars have modified and extended the model by introducing various external variables (system design characteristics, public characteristics, task characteristics, R&D essential characteristics, policy influence, organizational structure, etc.) to moderate perceived ease of use and perceived usefulness, ultimately influencing public acceptance of information systems [8, 62].
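The causal chain just described (external variables → perceived ease of use and perceived usefulness → attitude → intention) can be sketched as a tiny linear path model. The path weights below are hypothetical placeholders chosen purely for illustration, not estimates from this or any other study.

```python
# Illustrative sketch of the original TAM causal chain.
# All path weights are hypothetical placeholders for demonstration only;
# they are not estimated from any dataset.

def tam_scores(external: float) -> dict:
    """Propagate an external-variable score through the TAM paths."""
    peou = 0.6 * external                  # external variables -> perceived ease of use
    pu = 0.5 * external + 0.3 * peou       # ease of use also feeds perceived usefulness
    attitude = 0.5 * pu + 0.4 * peou       # both perceptions shape attitude toward use
    intention = 0.6 * attitude + 0.2 * pu  # usefulness also acts on intention directly
    return {"peou": peou, "pu": pu, "attitude": attitude, "intention": intention}

if __name__ == "__main__":
    for x in (0.2, 0.8):
        print(x, tam_scores(x))
```

Under these illustrative weights, a stronger external-variable signal raises ease of use and usefulness and, through attitude, behavioural intention, which is the qualitative pattern the model asserts.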

In the early twenty-first century, scholars such as Venkatesh and Davis [55] integrated classical theories of technology acceptance, considering subjective factors. They proposed an integrated technology acceptance model that includes social influence (SI), facilitating conditions and other potential variables, known as the unified theory of acceptance and use of technology (UTAUT).

The extended technology acceptance model is primarily employed in studying the acceptance of new technologies. Escobar-Rodríguez et al. adjusted the TAM by adding variables such as perceived compatibility, perceived risk and training to study physicians’ and nurses’ acceptance of e-prescribing and automated drug management systems. Jing et al. used an extended TAM to explore key factors influencing local consumers’ acceptance of and intention to use self-driving cars.

Moreover, Hu et al. [19] conducted a study evaluating the TAM’s efficacy in analysing physicians’ decisions to adopt telemedicine technology in healthcare settings. Various theoretical models have been employed to understand and manage the adoption process for new technologies, with TAM receiving significant attention [1]. The theory of planned behavior (TPB), TAM and a decomposed TPB model were compared for their suitability in the healthcare context. Through an experimental exploration and comparison using survey results from more than 400 doctors in public tertiary institutions in Hong Kong, the study delves into doctors’ utilization of telemedicine [9].

Venkatesh et al. [56] discuss and compare eight well-known models and their extensions, aiming to create a unified model incorporating components from all eight models and testing it empirically. Pavlou [40] predicts consumer acceptance of e-commerce by outlining crucial factors influencing customer participation in online transactions. Lu et al. [27] construct a TAM for wireless internet via mobile devices (WIMD) to explain variables affecting user acceptability. In the context of e-banking studies, Pikkarainen et al. [43] establish a model suggesting online banking acceptability on the basis of a focus group interview with banking experts.

Yi et al. [64] integrate TAM, the theory of planned behavior and the innovation diffusion theory to create a comprehensive model tested in the adoption of personal digital assistants (PDAs) in healthcare contexts. Raaij et al. [57] build a conceptual model explaining variations in students’ acceptance and use of a virtual learning environment (VLE) after critically evaluating models such as TAM, TAM2 and the unified theory of acceptance and use of technology (UTAUT).

However, the technology acceptance model itself has certain limitations; in particular, it lacks attention to factors other than technology perception. Therefore, this paper introduces social influence and facilitating conditions as external variables on the basis of the technology acceptance model and incorporates social influence theory and risk perception theory to explore the factors influencing public acceptance of AI-enabled pandemic prevention technology. The following sections discuss each factor in detail.

Social influence theory

Human interactions are inherently social, shaping thoughts, attitudes and behaviours through external influences from individuals or organizations. Social influence, defined as the alteration of an individual’s thoughts, attitudes or behaviours due to external factors, has broad applications in understanding human behaviour. Rational behaviour theory posits that social norms influence behavioural intentions, and technology diffusion theory emphasizes the impact of social systems on user adoption behaviour. Social influence encompasses social norms and subordination, creating a comprehensive framework in social psychology. Davis [13], when introducing the technology acceptance model, underscored the role of social influence, laying the groundwork for subsequent research. Scholars, such as Schierz, Schilke and Wirtz [48], have integrated social influence theory with technology acceptance models, revealing its impact on public behavioural choices. This paper integrates social influence theory with the technology acceptance model, as illustrated in Fig. 2.

Fig. 2 Refined technology acceptance model based on social influence theory

Risk perception theory

Risk perception theory, a psychological concept, focusses on an individual’s awareness and perception of objective risks in the external environment. It underscores the impact of personal experiences, intuitive judgments and subjective feelings on cognitive processes. The public’s perception of objective realities significantly shapes their attitudes towards these realities. In decision-making involving the public, gaining public understanding and support is crucial. Given that the existence of risk is often a sensitive topic, decision-makers must pay attention to public risk perception, as it plays a pivotal role in shaping public attitudes.

Public perceived risk encompasses judgments about the unknown, novelty and potential harmful long-term effects. Perceived benefit, arising from interactions between the public and product suppliers in specific contexts, significantly influences public purchasing behaviour. In studies such as that of Siegrist, Cvetkovich and Roth [49] and their examination of public acceptance of gene technology, the perceived risk associated with a technology plays a vital role in its development. For risky technologies, therefore, acceptance is contingent on both perceived benefits and perceived risks, whilst trust exerts an indirect impact on acceptance. The framework underpinning risk perception theory is depicted in Fig. 3.

Fig. 3 Refined technology acceptance model based on risk perception theory

Whilst the technology acceptance model effectively explains public technology acceptance behaviour, it falls short in accounting for the risk factor, particularly in the context of risk-prevention technologies. To enhance our understanding of the factors shaping public behavioural intentions and their interplay, researchers have incorporated trust and perceived risk into the technology acceptance model. Pioneering work by Slovic [50] and others [28, 37, 41] integrates risk perception theory to scrutinize public responses to potentially risky activities or technologies, exploring deeper influences on acceptance at both risk and benefit perception levels. Siegrist [49] proposes a public acceptance model for gene technology, asserting that public technology acceptance is determined by risk and benefit perceptions, with trust indirectly impacting acceptance through its influence on these perceptions.

Methods and the construction of the refined conceptual model: risk-prevention-centred and artificial-intelligence-enabled anti-pandemic technology acceptance (RPAA-TAM) model

To construct the anti-pandemic technology acceptance model (RPAA-TAM), this study initially gauges the current state of anti-pandemic technology development by examining cases of technology companies in China, categorizing four major types and delineating their characteristics (2021a). Building on existing TAM literature and aligning with anti-pandemic technology specifics, the study proposes the RPAA-TAM model.

We also grounded RPAA-TAM in the most recent evidence on AI ethics, real-time pandemic analytics and global adoption benchmarks. Core sources include:

  • Ethical governance frameworks: WHO [63] on large multimodal models; Camilleri [12] on AI corporate social responsibility; and Al-kfairy et al. [4] on generative-AI ethics.

  • AI-driven crisis analytics: Kumar et al. [25] on smart-city governance during COVID-19 and Bello y Villarino and Bronitt [6] on regulatory responses.

  • Post-2020 empirical uptake studies: WHO guidance [63] global survey data and cross-national policy comparisons from Springer’s Digital Transformation, AI and Society series.

Integrating these works ensures that RPAA-TAM speaks to the current technological and normative landscape rather than to pre-pandemic assumptions.

This novel conceptual model integrates technology acceptance, social influence and risk perception theories, aiming to measure public acceptance, identify influencing factors and present research propositions. Utilizing social influence and facilitating conditions as external variables in conjunction with the technology acceptance model, the study investigates their impact on public perceived usefulness and ease of use, subsequently influencing acceptance. Incorporating risk perception theory, the study introduces external variables that influence public perceptions of benefits and risks, mediated by trust, ultimately shaping users’ behavioural intentions (Fig. 4).
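The seven constructs and their signed relationships can be captured compactly as a small directed graph. The edge set below is one illustrative reading of the model as described in the text (signs mirror the propositions), not the authors’ formal specification.

```python
# A minimal graph representation of the RPAA-TAM constructs and the
# signed relationships sketched in the text. Edge signs follow the
# propositions ("+" positive, "-" negative); the exact edge set is an
# illustrative reading of the model, not a formal specification.

RPAA_TAM_EDGES = {
    ("external variables", "public trust"): "+",
    ("public trust", "perceived benefit"): "+",          # P3
    ("public trust", "perceived risk"): "-",             # P4
    ("perceived benefit", "attitude toward use"): "+",   # P1
    ("perceived risk", "attitude toward use"): "-",      # P2
    ("attitude toward use", "behavioural intention to use"): "+",
    ("behavioural intention to use", "system usage"): "+",
}

def influences(construct: str) -> list:
    """List the downstream constructs a given construct feeds, with signs."""
    return [(dst, sign) for (src, dst), sign in RPAA_TAM_EDGES.items() if src == construct]

if __name__ == "__main__":
    print(influences("public trust"))
```

A representation like this makes the model’s structure queryable, e.g. tracing every path from external variables to system usage when deriving or checking propositions.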

Fig. 4 Proposed risk-prevention-centred and AI-based anti-pandemic technology acceptance model (RPAA-TAM)

Propositions of the RPAA-TAM

Drawing insights from relevant literature, during the formulation of the AI risk technology acceptance model, we adapted the TAM by integrating risk perception theory and social influence theory, resulting in the creation of RPAA-TAM. This novel model, depicted in Fig. 5, refines the TAM by analysing factors influencing public adoption of anti-pandemic technology through external and internal perception variables.

Fig. 5 Propositions amongst RPAA-TAM

Perceived benefit

The perceived benefit concept, originating in product marketing, plays a crucial role in shaping consumer behaviour. In the realm of technology acceptance, scholars have decomposed perceived benefit into two dimensions: perceived usefulness and perceived ease of use [1, 8, 15]. Perceived usefulness gauges how individuals believe information systems enhance their performance, reflecting the public’s perception of pandemic prevention technology in safeguarding lives and maintaining normalcy. Meanwhile, perceived ease of use assesses the simplicity of using a specific information system, capturing the public’s perception of the ease of using pandemic prevention technology [2, 54]. Perceived benefits, instrumental in emerging technology acceptance studies, have demonstrated positive impacts in various domains, such as mobile banking, car sharing and fitness apps, influencing users’ acceptance behaviour [22].

The technology acceptance model efficiently gauges factors influencing public acceptance of new technologies. In the context of pandemic prevention technology, akin to general consumption behaviour, merging the technology acceptance model with perceived benefits is apt. As intelligent technology development is foundational for pandemic prevention, public perceptions of its efficacy, convenience and real-time nature, particularly its contribution to protecting public life and health, its ease of use and its preservation of regular public life, profoundly shape overall public assessment and attitude. Therefore, we propose the following proposition.

Proposition 1: Public perceived benefit is positively (+) correlated with public attitude towards use of prevention technology (P1).

Perceived risk

Perceived risk is considered to be an important factor influencing consumer behavioural decisions in risky technology use contexts [44]. Zeithaml [65] proposed that if consumers perceive a higher risk of a product or service, the acceptance of that product or service will decrease. For emerging technologies, the relationship between perceived risk and public acceptance behaviour has been studied in academia. For example, Luo, Li, Zhang and Shim [28], amongst others, confirmed that perceived risk negatively influences users’ acceptance of mobile banking. Similarly, perceived risk has a significant negative effect on consumers’ continuous sharing intention and behaviour [37].

In this pandemic, it is important to pay attention both to the risks associated with integrating AI into social life and to the positive effects of AI on public crisis management. We should consider the relationship between the technology itself and its acceptance by society: the greater the public’s perceived risk, the lower the acceptance of the technology. Thus, even though we affirm that AI can be used to manage major pandemics very effectively, we should still weigh the balance between maintaining the data value that promotes AI and protecting personal privacy.

Proposition 2: Public perceived risk is negatively (−) related to public attitude towards use of prevention technology (P2).

Public trust

The public, lacking sufficient experience with technology and the corresponding intellectual and technical background, is often unable to assess accurately the perceived benefits and risks of technology use directly, and this is where public trust needs to intervene. Previous studies have recognized the importance of trust as a mediating variable when examining the relationship between public perception variables and behavioural decisions. Studies such as Siegrist et al. [52] confirm that consumer trust is a key factor in increasing consumer perceived benefits and reducing consumer perceived risks in the context of new technology use. Moreover, the mediating effect of consumer trust between perceived benefits and behavioural intentions, and between perceived risks and behavioural intentions, has been confirmed by previous studies. Park et al. [41] confirmed the effect of trust as a mediating variable on service adoption behaviour, and Garbarino and Strahilevitz [17] and others argued that, amongst the factors influencing public purchasing behaviour, trust is positively related to perceived benefits and negatively related to perceived risks. Meanwhile, public trust is mainly influenced by external variables such as the social environment and the technological environment.

Whilst the age of AI has conferred authority on data, it has also led to a crisis in public confidence. The growth of artificial intelligence in the social and technological environment has increased public anxiety and concern, which negatively affects public trust in technological advancement. The pandemic has accelerated the emergence of AI technologies, and it has also heightened public concerns about privacy and security, financial risk and psychological risk. These concerns in turn affect public acceptance of risky technologies. Therefore, the following propositions are suggested:

Proposition 3: Public trust is positively (+) related to public perceived benefits (P3).

Proposition 4: Public trust is negatively (−) related to public perceived risk (P4).

Proposition 5: There is a mediating effect of public trust between external environment and perceived risks and perceived benefits (P5).
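Proposition 5 is a mediation claim, and mediation is commonly examined with a product-of-coefficients approach in empirical follow-up work. The sketch below is a minimal, hypothetical illustration on synthetic data (the path coefficients 0.6 and −0.5 are invented for illustration, not estimates from this study):

```python
# Minimal product-of-coefficients mediation sketch on synthetic data.
# Illustrative only: the data-generating coefficients are invented, not
# results from the paper.
import random

random.seed(0)
n = 500
external = [random.gauss(0, 1) for _ in range(n)]
# Path a: public trust is partly driven by the external environment
trust = [0.6 * x + random.gauss(0, 1) for x in external]
# Path b: perceived risk is reduced by trust (negative sign, as in P4)
risk = [-0.5 * t + random.gauss(0, 1) for t in trust]

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

a = slope(external, trust)   # external environment -> trust
b = slope(trust, risk)       # trust -> perceived risk
indirect = a * b             # mediated effect; negative: trust transmits
print(round(indirect, 2))    # a risk-reducing influence
```

The product a × b is the indirect (mediated) effect of the external environment on perceived risk through trust, the quantity Proposition 5 says is non-zero.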

Social influence

Human behaviour is social by nature and is influenced by other people. Social psychologists Deutsch and Gerard [14] distinguished two categories of social influence: normative and informational. Informational influence occurs when people draw on group activities to acquire information for comprehension and judgement; normative influence occurs when people adopt the conduct of the group to blend in and fulfil social expectations. The parallelism, timeliness and broad reach of information in the age of artificial intelligence and cutting-edge technologies increase the role of social influence in group decision-making. Venkatesh and Davis [55] defined social influence as the extent to which people believe that those who are relatively significant to them think they should use an information system. Moon and Kim [36] found a favourable correlation between social influence and behavioural intention. According to Lu, Yao and Yu [26], amongst others, social influence affects behavioural intention indirectly through perceived usefulness and ease of use. On the basis of the technology acceptance model, research on factors influencing the adoption of government microblogging found a substantial relationship between social influence and behavioural intention.

The foregoing discussion demonstrates that social influence will shape the public’s behavioural intention to adopt preventive technology in two key ways. First, the government, technicians and specialists provide information to the public through technology publicity. Second, a favourable user experience with the systems and with service delivery enhances the public’s opinion of the technology, and the thoughts, words and actions of close friends, relatives and neighbours further shape public impressions. This leads to the following propositions:

Proposition 6a: Social influence is positively (+) related to the public’s perceived usefulness of using pandemic prevention technology (P6a).

Proposition 6b: Social influence is positively (+) related to the public’s perceived ease of use of anti-pandemic technology (P6b).

Outcome demonstration

Outcome demonstration refers to the extent to which the effective results of a new technology can be shown to potential users. Rogers [46] argued that the more potential adopters can observe the improved results of a new technology, the more likely they are to become actual users. Outcome demonstration is considered an important factor influencing potential users’ perceptions of new technologies. Kamal et al. [23] found that outcome demonstration influences users’ perceived usefulness in a study of the factors shaping behaviour towards telemedicine services. Walczak et al. [61] found that outcome demonstration had a significant effect on general practitioners’ perceived ease of use and usefulness of telemedicine technology. Setiawan and Oktaviani [51] also found that outcome demonstration of the health information system named the “narcotic precursor reporting system” is critical for the technology’s acceptance.

In this study, outcome demonstration gauges the public’s ability to observe and quantify the performance results of digital prevention technology. Initial outcomes of AI-based solutions in pandemic prevention enhance the public’s perception of usefulness, usability and social acceptance. On this basis, the following propositions are formulated:

Proposition 6c: Outcome demonstration is positively (+) related to the public’s perceived usefulness of using pandemic prevention technology (P6c).

Proposition 6d: Outcome demonstration is positively (+) related to the public’s perceived ease of use of pandemic prevention technology (P6d).

Facilitating conditions

In certain situations, the ease of access to technical resources or supporting infrastructure needed to complete a task is referred to as a facilitating condition. Venkatesh, Morris and Davis [56] defined facilitating conditions as the degree to which an individual believes that an organizational and technical infrastructure exists to support use of the system. The construct integrates facilitating conditions as found in earlier technology-use models with perceptions of behavioural control, and its primary sources can differ greatly depending on the domain. According to Venkatesh, Morris and Davis [56] and others, perceived usefulness and ease of use of a product or service are likely to be influenced by facilitating conditions. The technology acceptance of cloud service systems for smart manufacturing [33, 45], telemedicine healthcare information systems [23, 51, 61] and electronic commerce [48] improves with more accessible, stable and solid technological infrastructure. An empirical investigation of the willingness to use online medical services likewise found that favourable facilitating conditions directly increase willingness to use. As a result, improving the efficiency of public services and the availability of technical support will increase public adoption of the technology.

In this study, “facilitating conditions” refers mainly to the technology and software that enable the general public to employ these digital anti-pandemic solutions during the prevention phase. The foregoing analysis supports the following proposition:

Proposition 7: Facilitating conditions are positively (+) related to the behavioural intention to use pandemic prevention technology (P7).

All propositions are summarized in Table 1.

Table 1.

Summary of RPAA-TAM factors and propositions

RPAA-TAM factor | Definition | Direction of influence (proposition)
Perceived benefit (PB) | Usefulness + ease of use of the AI anti-pandemic tool | PB → + Attitude (P1)
Perceived risk (PR) | Privacy, security, financial, psychological risks | PR → − Attitude (P2)
Public trust (PT) | Confidence in institutions and technology that deliver the tool | PT → + PB (P3); PT → − PR (P4); PT mediates external variables (P5)
Social influence (SI) | Normative + informational pressure from government, media, peers | SI → + PB (P6a, P6b)
Outcome demonstration (OD) | Observable evidence that the tool works (case counts drop, faster diagnosis, etc.) | OD → + PB (P6c, P6d)
Facilitating conditions (FC) | Technical and policy infrastructure enabling use (apps, bandwidth, regulations) | FC → + Behavioural intention (P7)
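For readers who prefer an operational view, the signed relationships in Table 1 can be encoded as a small directed graph. This is purely an illustrative sketch: the variable names and the helper function are ours, not part of the model’s formal specification, and the mediation proposition P5 is omitted because it is not a single signed edge.

```python
# Illustrative sketch: RPAA-TAM propositions as a signed directed graph.
# Node names are our own shorthand; P5 (a mediation effect) is omitted
# because it cannot be expressed as one signed edge.
PROPOSITIONS = {
    "P1": ("perceived_benefit", "attitude", +1),
    "P2": ("perceived_risk", "attitude", -1),
    "P3": ("public_trust", "perceived_benefit", +1),
    "P4": ("public_trust", "perceived_risk", -1),
    "P6a": ("social_influence", "perceived_usefulness", +1),
    "P6b": ("social_influence", "perceived_ease_of_use", +1),
    "P6c": ("outcome_demonstration", "perceived_usefulness", +1),
    "P6d": ("outcome_demonstration", "perceived_ease_of_use", +1),
    "P7": ("facilitating_conditions", "behavioural_intention", +1),
}

def determinants(node):
    """Return the propositions whose dependent variable is `node`."""
    return {p: (src, sign)
            for p, (src, dst, sign) in PROPOSITIONS.items() if dst == node}

print(determinants("attitude"))
```

Querying `determinants("attitude")` recovers P1 and P2, the two opposite-signed drivers of attitude that sit at the centre of the model.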

Case illustration and explanation based on RPAA-TAM

In the context of the COVID-19 pandemic, cities face heightened vulnerability. However, the rapid integration of internet-enabled technologies presents opportunities for urban development and disease resistance. Governments globally emphasize the role of science and technology in combating the pandemic. For example, the Japanese government prioritizes ethical evaluations for COVID-19 research and increases funding. The White House recognizes the contributions of internet platforms and technology firms. In Russia, a new COVID-19 vaccine received funding from the Russian Direct Investment Fund. The German Academy of Sciences advocates using mobile phone data to track infection rates and contacts. China, in a white paper titled “Fighting COVID-19: China in Action”, emphasizes the role of modern technologies [29]. In the post-pandemic era, smart-enabled technology becomes integral to daily life, fostering a cashless society, remote work and distance learning. Governments and society increasingly rely on data-rich services, utilizing intelligent positioning and instant messaging for big data analytics to track pandemic trends and visualize the activity paths of confirmed cases. The internet plays a crucial role in disseminating knowledge and providing early warning signals. AI significantly contributes to medical diagnosis and healthcare provision. The empirical anti-pandemic uses of digital technology are illustrated in Fig. 6 [29], which shows the four major classifications of digital technologies for anti-pandemic use in public healthcare crises (as listed in Table 2).

Fig. 6. Major classifications of digital technologies for anti-pandemic use in public healthcare crises

Table 2.

Summary of RPAA-TAM factors and case evidence

RPAA-TAM factor | Empirical evidence from the case
Perceived benefit (PB) | Robots replaced staff in high-risk wards, saving PPE and cutting infection; the public saw clear usefulness and ease of use (voice-interactive chatbots)
Public trust (PT) | Central and local government policy white papers endorsed the robots, raising public trust, boosting perceived benefits and lowering perceived risks
Social influence (SI) | Positive social media stories and neighbour testimonials amplified adoption
Outcome demonstration (OD) | Daily TV news showed robots disinfecting and delivering medicines, providing tangible proof and raising PB
Facilitating conditions (FC) | Hospitals already had 5G and cloud infrastructure; the government fast-tracked robot deployment, smoothing adoption

Drawing on the identified categories, we validated the proposed TAM evolution through a case study of a prominent AI-enabled healthcare systems provider, specifically focussing on the acceptance of pandemic prevention technology during the COVID-19 pandemic. The case study centred on D company, a provider of “medical service robots” in China offering unmanned intelligent robots designed for special hospital applications. These robots performed high-risk tasks such as temperature measurement, medicine delivery and disinfection, minimizing the need for human intervention in contaminated areas. The voice-interactive service chatbot, a type of medical service robot, played a crucial role in disseminating pandemic prevention knowledge and reducing the workload of human workers. The case study exemplifies the applicability of the proposed conceptual model for understanding and explaining the factors influencing public acceptance of pandemic prevention technology.

Firstly, intelligent medical service robots have significantly enhanced the quality and efficiency of medical care, particularly in temporary admission hospitals. Developed during the COVID-19 outbreak, these robots replace healthcare workers in high-risk tasks, saving masks, protective clothing and other essential supplies. Voice-interactive service chatbots using AI-based speech recognition allow cases to be queried by voice, reducing the risk of virus contamination, increasing case-entry efficiency and improving public perception of service usability. Consequently, public acceptance of intelligent healthcare information systems increases. This supports proposition P1, which posits a positive relationship between perceived benefit and attitude towards use. As the public uses and comes to understand these mature technologies, trust rises, which in turn increases perceived usefulness and ease of use, illustrating proposition P3. In summary, perceived benefits are positively correlated with public trust.

Secondly, social influence significantly impacts the acceptance of pandemic prevention technologies. National policies, spearheaded by China’s health and wellness authorities, emphasize the role of information technology in pandemic research, diagnosis and treatment innovation. Policies such as the 2020 Notice on “The Use of New Generation of Intelligent Information Technology to Support Services for Pandemic Prevention and Control and Resumption of Work and Production” highlight government support for technologies such as the internet, big data, cloud computing and AI in pandemic monitoring and control. These policies positively shape public perception and enhance trust in pandemic prevention technologies. Moreover, social influences from family, relatives, neighbours and friends play a crucial role: positive experiences shared within these networks enhance trust (P5) and contribute to perceived usefulness and ease of use (P6a and P6b). The public’s perception of the value of adopting pandemic prevention technologies is directly linked to social influence, whilst the perceived ease of use of these technologies is positively associated with social influence. Additionally, outcome demonstrations positively affect the perceived usefulness and ease of use of anti-pandemic technologies (P6c and P6d).

Thirdly, the rapid evolution of smart healthcare has revolutionized the healthcare industry, altering patients’ access patterns and simplifying processes through anti-pandemic technologies. In pandemic areas such as Hubei, the high-intensity workloads of local and external medical teams are relieved by service-oriented robots that undertake tasks such as temperature measurement, disinfection and medical waste removal. This minimizes contact between medical staff and confirmed cases, enhancing the safety of frontline workers. The availability of artificial intelligence as a facilitating condition thus positively influences public acceptance of pandemic prevention technology, supporting proposition P7.

Fourthly, as of October 2022, China had transitioned into a normalized phase of COVID-19 prevention and control, resuming normal economic activities. Government measures, outlined in the 2020 white paper “China’s Actions to Combat the COVID-19 Pandemic”, highlighted the pivotal role of science and technology. Focussed on areas such as clinical treatment, drug development, vaccine research, detection technology, viral pathogenesis and animal model development, these efforts provided robust scientific support. The utilization of intelligent digital technologies, including AI-based service-oriented robots, demonstrated tangible pandemic prevention outcomes. This visibility and effectiveness gradually enhance public perceptions of pandemic-prevention-centred digital technologies, supporting propositions P6c and P6d, which link outcome demonstration to the public’s perceptions of the usefulness and ease of use of prevention-focussed digital technology. Furthermore, as trust in government prevention and control rises, perceived risk falls (P4), which in turn improves the public’s attitude towards using the technology (P2).

Conclusions and future directions

This paper introduces a novel conceptual model, the Risk Prevention-centred and AI-enabled Anti-pandemic Technology Acceptance Model (RPAA-TAM), tailored for public health risk response. The RPAA-TAM construct integrates the refined technology acceptance model (TAM), social influence theory and risk perception theory to address the complexities of public acceptance of AI-driven technologies during emerging public health crises. This model enhances our understanding by considering not only the perceived usefulness and ease of use, but also the social and risk-related factors that influence technology adoption in the context of public health emergencies.

Responding to RQ1 – Theoretical Integration: By weaving social-influence theory and risk-perception theory into the fabric of the classic technology acceptance model, RPAA-TAM reveals how normative cues from governments, media and peers intersect with citizens’ personal risk–benefit calculus. This dual lens clarifies why identical AI tools may be welcomed in one community yet resisted in another, thereby deepening our understanding of the socio-cognitive levers that shape uptake during public health crises.

Responding to RQ2 – Ethical & Societal Embedding: RPAA-TAM transcends utilitarian metrics of “usefulness” and “ease of use” by embedding ethical and societal variables – data privacy, algorithmic fairness, labour market effects and digital equity – directly into its causal pathways. The model thus becomes a moral compass as well as a predictive instrument, enabling policymakers and developers to anticipate and mitigate unintended societal harms whilst steering AI innovations towards ethically defensible and publicly acceptable outcomes.

Responding to RQ3 – Empirical Validation: The nationwide deployment of AI-driven medical-service robots in China’s COVID-19 response functions as a living laboratory. Observed patterns of adoption – accelerated by visible outcome demonstrations, reinforced by government endorsements and tempered by privacy concerns – align closely with RPAA-TAM’s propositions. This convergence offers empirical support for the model’s ability not only to explain but also to anticipate public acceptance trajectories in real-world crisis settings.

The RPAA-TAM’s exploration of factors influencing public acceptance of pandemic prevention technologies sheds light on how AI-driven digital technologies are embraced during significant public health crises. The model holds empirical implications for governments and AI companies, facilitating the identification of the balance and influential factors between innovative services/products and public acceptance. This insight guides companies in their digital transformation strategies, ensuring alignment with industry trends, public perception and customer expectations.

Turning insight into actionable policy suggestions, the RPAA-TAM delivers a ready-to-deploy playbook for crisis managers and technology providers. Governments should (strategy 1) mandate algorithmic-impact statements [47, 63] for every pandemic-time AI tool, anchoring public trust in transparent governance, and pair this with (strategy 2) differential-privacy protocols and open-source data-handling audits [12, 18] to neutralize privacy fears before they crystallize into resistance. Behavioural adoption can be accelerated through (strategy 3) “positive-deviance” storytelling campaigns that amplify neighbour testimonials about service robots or chatbot triage successes [20, 47, 59, 66], leveraging normative social influence exactly where RPAA-TAM predicts it matters most. To make benefits visible in real time, agencies should launch (strategy 4) public outcome dashboards that continuously map infection-rate reductions to specific AI interventions, turning abstract utility into tangible proof. Finally, (strategy 5) pre-crisis memoranda of understanding with telecom and cloud providers – guaranteeing bandwidth and edge-compute capacity within 72 h of an outbreak declaration – ensure that facilitating conditions are locked in before panic peaks. Together, these five evidence-based actions operationalize RPAA-TAM’s theoretical levers, transforming the model from an explanatory framework into a living policy instrument for ethically grounded, publicly accepted AI deployment in the next health emergency.

The strategy–proposition mapping is as follows: algorithmic-impact statements lift trust (P3 → PB↑), differential-privacy audits cut perceived risk (P4 → PR↓), “positive-deviance” storytelling mobilizes social influence (P6a↑, P6b↑), real-time dashboards furnish observable outcomes (P6c↑, P6d↑) and 72-h cloud-connect MOUs secure facilitating conditions (P7↑), compressing all RPAA-TAM levers into a single, executable crisis-response checklist.
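As an illustrative convenience only, the five strategies and the propositions they target can be held in a simple lookup table; the strategy labels below are our own paraphrases, not official terminology from the paper:

```python
# Hypothetical checklist sketch mapping the five policy strategies to the
# RPAA-TAM propositions they activate. Labels are our paraphrases.
STRATEGY_MAP = {
    "algorithmic-impact statements": ["P3"],       # lift trust
    "differential-privacy audits": ["P4"],         # cut perceived risk
    "positive-deviance storytelling": ["P6a", "P6b"],  # social influence
    "real-time outcome dashboards": ["P6c", "P6d"],    # outcome demonstration
    "pre-crisis cloud/telecom MOUs": ["P7"],       # facilitating conditions
}

def covered_propositions():
    """List every proposition touched by at least one strategy."""
    return sorted({p for props in STRATEGY_MAP.values() for p in props})

print(covered_propositions())
```

A coverage check of this kind makes explicit which levers a crisis-response plan leaves unaddressed (here, only the mediation proposition P5 and the attitude-level propositions P1/P2 are reached indirectly rather than targeted directly).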

The qualitative analysis in this study provides a conceptual foundation, and as digital pandemic prevention technology evolves amidst dynamic crisis scenarios and emerging technologies, the factors influencing adoption and RPAA-TAM findings can be adapted. This adaptive approach enriches TAM and UTAUT theory, offering valuable insights for the application of emerging technologies, such as artificial intelligence, in public crisis response. By answering the research questions and providing a comprehensive model, the RPAA-TAM contributes to the fields of technology management, public administration and public health risk management, providing a roadmap for navigating the challenges of the AI era in the context of emerging public health crises.

Acknowledgements

Comments by the anonymous reviewers and editors are gratefully acknowledged.

Author contributions

Ching-Hung Lee: writing – review & editing, writing – original draft, visualization, validation, supervision, project administration, methodology, data curation and conceptualization. Zhichao Wang: writing – review & editing and project administration; Dianni Wang: writing – review & editing and visualization; Shupeng Lyu: supervision, formal analysis and conceptualization; and Chun-Hsien Chen: supervision, project administration and methodology.

Funding

Not applicable.

Data availability

No datasets were generated or analysed during the current study.

Declarations

Ethics approval and consent to participate

Our research, based on publicly available online data, does not involve human participants, data or tissue, and thus is not subject to ethical approval or participant consent. It complies with national regulations, which deem ethical approval unnecessary. All data used are freely accessible and in line with open access policies, with no privacy or confidentiality concerns.

Consent for publication

All authors consent to the publication of this paper.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Ching-Hung Lee and Zhichao Wang contributed equally to this work.

References

  • 1.Agarwal R, Prasad J. Are individual differences germane to the acceptance of new information technologies? Decis Sci. 1999;30(2):361–91. [Google Scholar]
  • 2.Anderson JTL, et al. Telehealth adoption during the COVID-19 pandemic: a social media textual and network analysis. Digit Health. 2022;8:205520762210900. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Al Meslamani AZ, Aldulaymi R, El Sharu H, Alwarawrah Z, Ibrahim OM, Al Mazrouei N. The patterns and determinants of telemedicine use during the COVID-19 crisis: a nationwide study. J Am Pharm Assoc. 2022;62(6):1778–85. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Al-kfairy M, Mustafa D, Kshetri N, Insiew M, Alfandi O. Ethical challenges and solutions of generative AI: an interdisciplinary perspective. Informatics. 2024;11(3):58. [Google Scholar]
  • 5.Brammer S, Branicki L, Linnenluecke MK. COVID-19, societalization, and the future of business in society. Acad Manage Perspect. 2020;34(4):493–507. [Google Scholar]
  • 6.Bello y Villarino JM, Bronitt S. AI-driven corporate governance: a regulatory perspective. Griffith Law Rev. 2024;33(4):355–74. [Google Scholar]
  • 7.Battula ST. Artificial intelligence-driven risk management for fintech enterprises: enhancing decision-making through predictive analytics. IJSAT Int J Sci Technol. 2025. 10.71097/IJSAT.v16.i1.2804. [Google Scholar]
  • 8.Chang A. UTAUT and UTAUT 2: a review and agenda for future research. Winners. 2012;13(2):10. [Google Scholar]
  • 9.Chau PYK, Hu PJ-H. Investigating healthcare professionals’ decisions to accept telemedicine technology: an empirical test of competing theories. Inf Manag. 2002;39(4):297–311. [Google Scholar]
  • 10.Chamola V, et al. A comprehensive review of the COVID-19 pandemic and the role of IOT, drones, AI, Blockchain, and 5G in managing its impact. IEEE Access. 2020;8:90225–65. [Google Scholar]
  • 11.Cosimato S, Di Paola N, Vona R. Digital social innovation: how healthcare ecosystems face Covid-19 challenges. Technol Anal Strateg Manage. 2022. 10.1080/09537325.2022.2111117. [Google Scholar]
  • 12.Camilleri MA. Artificial intelligence governance: ethical considerations and implications for social responsibility. Expert Syst. 2024;41(7):e13406. [Google Scholar]
  • 13.Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13(3):319. [Google Scholar]
  • 14.Deutsch M, Gerard HB. A study of normative and informational social influences upon individual judgment. J Abnorm Soc Psychol. 1955;51(3):629. [DOI] [PubMed] [Google Scholar]
  • 15.Escobar-Rodríguez T, Monge-Lozano P, Romero-Alonso MM. Acceptance of e-prescriptions and automated medication-management systems in hospitals: an extension of the technology acceptance model. J Inf Syst. 2012;26(1):77–96. [Google Scholar]
  • 16.Fan B, Liu R, Huang K, Zhu Y. Embeddedness in cross-agency collaboration and emergency management capability: evidence from Shanghai’s urban contingency plans. Gov Inf Q. 2019;36(4):101395. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Garbarino E, Strahilevitz M. Gender differences in the perceived risk of buying online and the effects of receiving a site recommendation. J Bus Res. 2004;57(7):768–75. [Google Scholar]
  • 18.Hastie R. Social inference. Annu Rev Psychol. 1983;34(1):511–42. [Google Scholar]
  • 19.Hu PJ, et al. Examining the technology acceptance model using physician acceptance of telemedicine technology. J Manage Inf Syst. 1999;16(2):91–112. [Google Scholar]
  • 20.Jing P, et al. Exploring the factors affecting mode choice intention of autonomous vehicle based on an extended theory of planned behavior—A case study in China. Sustainability. 2019;11(4):1155. [Google Scholar]
  • 21.Jo H, Park S. Success factors of untact lecture system in COVID-19: TAM, benefits, and privacy concerns. Technol Anal Strateg Manag. 2022. 10.1080/09537325.2022.2093709. [Google Scholar]
  • 22.Kim C, Tao W, Shin N. An empirical study of customers’ perceptions of security and trust in e-payment systems. Electron Commerce Res Appl. 2010;9(1–6):84–95. [Google Scholar]
  • 23.Kamal SA, Shafiq M, Kakria P. Investigating acceptance of telemedicine services through an extended technology acceptance model (TAM). Technol Soc. 2020;60:101212. [Google Scholar]
  • 24.Kwee-Meier ST, Bützler JE, Schlick C. Development and validation of a technology acceptance model for safety-enhancing, wearable locating systems. Behav Inform Technol. 2016;35(5):394–409. [Google Scholar]
  • 25.Kumar S, Verma AK, Mirza A. Artificial intelligence-driven governance systems: smart cities and smart governance. In: Chakravorty A, Verma AK, Bhattacharya P, Pant M, Ghosh S, editors. Digital transformation, artificial intelligence and society: opportunities and challenges. Singapore: Springer Nature Singapore; 2024. p. 73–90. [Google Scholar]
  • 26.Lu J, Yao JE, Yu C-S. Personal innovativeness, social influences and adoption of wireless internet services via mobile technology. J Strateg Inf Syst. 2005;14(3):245–68. [Google Scholar]
  • 27.Lu J, et al. Technology acceptance model for wireless internet. Internet Res. 2003;13(3):206–22. [Google Scholar]
  • 28.Luo X, et al. Examining multi-dimensional trust and multi-faceted risk in initial acceptance of emerging technologies: an empirical study of mobile banking services. Decis Support Syst. 2010;49(2):222–34. [Google Scholar]
  • 29.Lee C-H, et al. Digital transformation and the new normal in China: how can enterprises use digital technologies to respond to COVID-19? Sustainability. 2021;13(18):10195. [Google Scholar]
  • 30.Lee CH, Liu CL, Trappey AJ, Mo JP, Desouza KC. Understanding digital transformation in advanced manufacturing and engineering: a bibliometric analysis, topic modeling and research trend discovery. Adv Eng Inform. 2021;50:101428. [Google Scholar]
  • 31.Lee C-H, et al. A digital transformation-enabled framework and strategies for public health risk response and governance: China’s experience. Ind Manag Data Syst. 2022;123(1):133–54. [Google Scholar]
  • 32.Lee CH, Liu CL, Trappey AJ, Mo JP, Desouza KC. Design and management of digital transformations for value creation. Adv Eng Inform. 2022;52:101547. [Google Scholar]
  • 33.Lee C-H, et al. Strategic servitization design method for industry 4.0-based smart intralogistics and production. Expert Syst Appl. 2022;204:117480. [Google Scholar]
  • 34.Lee C-H, et al. Requirement-driven evolution and strategy-enabled service design for new customized quick-response product order fulfillment process. Technol Forecast Soc Change. 2022;176:121464. [Google Scholar]
  • 35.Lyu S, Qian C, McIntyre A, Lee CH. One pandemic, two solutions: comparing the US-China response and health priorities to COVID-19 from the perspective of “two types of control”. Healthcare. 2023;11(13):1848. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Moon J-W, Kim Y-G. Extending the TAM for a world-wide-web context. Inf Manag. 2001;38(4):217–30. [Google Scholar]
  • 37.Malazizi N, Alipour H, Olya H. Risk perceptions of Airbnb hosts: evidence from a Mediterranean island. Sustainability. 2018;10(5):1349. [Google Scholar]
  • 38.Misra SK, Sharma SK, Gupta S, Das S. A framework to overcome challenges to the adoption of artificial intelligence in Indian government organizations. Technol Forecast Soc Change. 2023;194:122721. [Google Scholar]
  • 39.Margherita A, Nasiri M, Papadopoulos T. The application of digital technologies in company responses to COVID-19: an integrative framework. Technol Anal Strateg Manag. 2023;35(8):979–92. [Google Scholar]
  • 40.Pavlou PA. Consumer acceptance of electronic commerce: integrating trust and risk with the technology acceptance model. Int J Electron Commer. 2003;7(3):101–34. [Google Scholar]
  • 41.Park J, Amendah E, Lee Y. M-payment service: interplay of perceived risk, benefit, and trust in service adoption. Hum Factors Ergon Manuf Serv Ind. 2019;29(1):31–43. [Google Scholar]
  • 42.Pal D, Patra S. University students’ perception of video-based learning in times of COVID-19: a TAM/TTF perspective. Int J Hum Comput Interact. 2021;37(10):903–21. [Google Scholar]
  • 43.Pikkarainen T, Pikkarainen K, Karjaluoto H, Pahnila S. Consumer acceptance of online banking: an extension of the technology acceptance model. Internet Res. 2004;14(3):224–35.
  • 44.Rianthong N, Dumrongsiri A, Kohda Y. Optimizing customer searching experience of online hotel booking by sequencing hotel choices and selecting online reviews: a mathematical model approach. Tour Manage Perspect. 2016;20:55–65. [Google Scholar]
  • 45.Rahi SB, Bisui S, Misra SC. Identifying the moderating effect of trust on the adoption of cloud-based services. Int J Commun Syst. 2017;30(11):e3253. [Google Scholar]
  • 46.Rogers EM. Lessons for guidelines from the diffusion of innovations. Jt Comm J Qual Improv. 1995;21(7):324–8. [DOI] [PubMed] [Google Scholar]
  • 47.Reuschl AJ, Deist MK, Maalaoui A. Digital transformation during a pandemic: stretching the organizational elasticity. J Bus Res. 2022;144:1320–32. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Schierz PG, Schilke O, Wirtz BW. Understanding consumer acceptance of mobile payment services: an empirical analysis. Electron Commerce Res Appl. 2010;9(3):209–16. [Google Scholar]
  • 49.Siegrist M, Cvetkovich G, Roth C. Salient value similarity, social trust, and risk/benefit perception. Risk Anal. 2000;20(3):353–62. [DOI] [PubMed] [Google Scholar]
  • 50.Slovic P. Perception of risk. Science. 1987;236(4799):280–5. [DOI] [PubMed] [Google Scholar]
  • 51.Setiawan RA, Oktaviani P. Examining the technology acceptance model in the adoption of Narcotic Precursor Reporting System (SIPPRE). J TAM (Technol Accept Model). 2021;12(2):158. [Google Scholar]
  • 52.Siegrist M. The influence of trust and perceptions of risks and benefits on the acceptance of gene technology. Risk Anal. 2000;20(2):195–204. [DOI] [PubMed] [Google Scholar]
  • 53.Schallmo D, Williams CA, Boardman L. Digital transformation of business models—Best practice, enablers, and roadmap. Int J Innov Manag. 2017;21(08):1740014. [Google Scholar]
  • 54.Torp DC, Sandbæk A, Prætorius T. Technology acceptance of video consultations for type 2 diabetes care in general practice: a cross-sectional survey of Danish general practitioners. J Med Internet Res. 2022;24(8):e37223. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Venkatesh V, Davis FD. A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage Sci. 2000;46(2):186–204. [Google Scholar]
  • 56.Venkatesh, et al. User acceptance of information technology: toward a unified view. MIS Q. 2003;27(3):425. [Google Scholar]
  • 57.van Raaij EM, Schepers JJL. The acceptance and use of a virtual learning environment in China. Comput Educ. 2008;50(3):838–52. [Google Scholar]
  • 58.van Elsland SL, O’Hare RM, McCabe R, Laydon DJ, Ferguson NM, Cori A, Christen P. Policy impact of the Imperial College COVID-19 Response Team: global perspective and United Kingdom case study. Health Res Policy Syst. 2024;22(1):153. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Vial G. Understanding digital transformation: a review and a research agenda. J Strateg Inf Syst. 2019;28(2):118–44. [Google Scholar]
  • 60.Voke D, Perry A, Bardach SH, Kapadia NS, Barnato AE. Innovation pathways to preserve: rapid healthcare innovation and dissemination during the COVID-19 pandemic. Healthcare. 2022;10(4):100660. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Walczak R, Kludacz-Alessandri M, Hawrysz L. Use of telemedicine technology among general practitioners during COVID-19: a modified technology acceptance model study in Poland. Int J Environ Res Public Health. 2022;19(17):10937. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Williams MD, Rana NP, Dwivedi YK. The unified theory of acceptance and use of technology (UTAUT): a literature review. J Enterp Inf Manag. 2015;28(3):443–88. [Google Scholar]
  • 63.World Health Organization. Ethics and governance of artificial intelligence for health: guidance on large multi-modal models. Geneva: World Health Organization; 2024. [Google Scholar]
  • 64.Yi MY, et al. Understanding information technology acceptance by individual professionals: toward an integrative view. Inf Manag. 2006;43(3):350–63. [Google Scholar]
  • 65.Zeithaml VA. Consumer perceptions of price, quality, and value: a means-end model and synthesis of evidence. J Mark. 1988;52(3):2. [Google Scholar]
  • 66.Zhang H. What has China learnt from disasters? Evolution of the emergency management system after SARS, Southern Snowstorm, and Wenchuan Earthquake. J Comp Policy Anal. 2012;14(3):234–44. [Google Scholar]


Data Availability Statement

No datasets were generated or analysed during the current study.


Articles from Health Research Policy and Systems are provided here courtesy of BMC

RESOURCES