Scientific Reports. 2025 Aug 6;15:28698. doi: 10.1038/s41598-025-14698-2

Invulnerability bias in perceptions of artificial intelligence’s future impact on employment

Felipe Barrera-Jimenez 1, Jose Luis Arroyo-Barrigüete 2, Eduardo C Garrido-Merchán 3,4, Gonzalo Grinda-Luna 3
PMCID: PMC12328832  PMID: 40770226

Abstract

The adoption of Artificial Intelligence (AI) is reshaping the labor market; however, individuals’ perceptions of its impact remain inconsistent. This study investigates the presence of the Invulnerability Bias (IB), where workers perceive that AI will have a greater impact on others’ jobs than on their own, and Optimism Bias by Type of Impact (OBTI), where individuals perceive AI’s future impact on their own job as more positive than on others’. The study analyzes survey data collected from 201 participants, recruited through social media using convenience sampling. The data were analyzed using a combination of statistical and machine learning methods, including the Wilcoxon test, ordinary least squares regression, clustering, random forests, and decision trees. Results confirm a significant IB, but not OBTI; only 31.8% perceived AI’s future impact on their own job as more positive than on others’. Analysis shows that greater knowledge of AI correlates with lower IB, suggesting that familiarity with AI reduces the tendency to externalize perceived risk. Furthermore, bias levels vary across professional sectors: healthcare, law, and public administration exhibit the highest IB, while technology-related professions show lower levels. These findings highlight the need for interventions to improve workers’ awareness of AI’s potential future impact on employment.

Supplementary Information

The online version contains supplementary material available at 10.1038/s41598-025-14698-2.

Keywords: Artificial intelligence, Invulnerability bias, Optimism bias, AI biases, Unrealistic optimism, Future of work

Subject terms: Environmental social sciences, Psychology and behaviour

Introduction

In recent years, particularly with the advent of Large Language Models (LLMs), Artificial Intelligence (AI) has transformed daily life and business operations1. AI adoption has surged, driven by its ability to enhance critical processes in sectors such as finance, marketing, and manufacturing13. To remain competitive, organizations increasingly adopt AI to capitalize on emerging opportunities2,4.

Concerns about AI-driven job displacement are increasing, and recent studies show that high-skill occupations, once considered immune, now face the risk of automation5,6. Estimates suggest that 46–55% of U.S. jobs are exposed to automation by LLMs7. However, many workers may perceive AI as less threatening to their own jobs or organizations than to those of others8,9. For example, a Pew Research survey10 found that 62% of U.S. adults believe that AI will have a major impact on workers generally, but only 28% think it will significantly impact their own position. This pattern reflects the Invulnerability Bias (IB), a form of unrealistic optimism, in which individuals perceive others as more likely to be affected by negative events11. Given that workers’ perceptions of AI’s impact shape their readiness to adapt12,13, understanding the extent of this bias and its underlying causes becomes particularly relevant.

Similarly, slightly more individuals (16%) expect AI to benefit themselves than to benefit workers generally (13%), suggesting an additional bias10. This study refers to this as Optimism Bias by Type of Impact (OBTI), defined as the expectation that, if affected, one’s own outcome will be more positive than others’ outcomes, in line with affective forecasting biases14. These biases can coexist but are independent (cf. 15); thus, this article addresses IB and OBTI separately.

Despite growing interest in social perceptions of AI, many studies focus on general attitudes, typically emphasizing either its benefits or risks16–18. However, empirical research directly addressing IB remains limited, particularly at the individual level, although some studies have examined it in organizational contexts9. Other studies report general findings without systematically comparing self-perceptions to perceptions of others. These studies are typically descriptive in nature, rely on limited qualitative scales, and lack statistical tests to validate the prevalence of the bias10,19,20. While some studies include sociodemographic variables, they do not analyze them as predictors of bias or identify occupationally grouped bias profiles.

The aim of this study is to fill this gap by empirically examining whether individuals exhibit signs of IB and OBTI, and to investigate whether these biases systematically vary across sociodemographic groups and occupational categories. The goal is to quantitatively test the presence and distribution of the aforementioned biases within the workforce.

This study argues that workers tend to perceive AI as having a greater impact on other people’s jobs than on their own, indicating a comparative IB. Additionally, it is hypothesized that this perception may be accompanied by optimism regarding the expected valence of the impact (OBTI). In other words, individuals not only believe that the impact will be less severe for them, but also that if affected, the outcome will be more positive for themselves than for others. Finally, it is proposed that this pattern is not uniform across the population; rather, it varies by job category.

This article contributes to the existing literature by extending IB theory to individual perceptions of AI’s impact in the workplace. It measures both IB strength and OBTI, considering demographic variables, job category, and self-reported AI knowledge. Methodologically, the study uses recent survey data from the Spanish labor market. Biases are evaluated using paired questionnaire items and numerical scales, with analysis conducted through inferential statistics and interpretable machine learning techniques (e.g., Ordinary Least Squares (OLS) regression, clustering, and random forest). This approach moves beyond purely descriptive analysis and captures both the prevalence of the biases and their variation patterns.

Should the study confirm that workers systematically underestimate AI’s impact on their own jobs, the findings would signal a need for targeted interventions to counteract this complacency and strengthen workforce resilience in an increasingly automated labor market. If workers do not anticipate the potential impacts of AI on their roles, organizational change may be met with resistance, slowing responsiveness and affecting competitiveness [cf. 21]. This is particularly relevant in sectors where AI knowledge is low. Research shows that improving AI literacy reduces uncertainty and enhances realistic risk perception13,22,23. Addressing these biases early is essential for institutional resilience amid accelerating technological change12,16.

The remainder of this paper is organized as follows. Section 2 presents a review of the literature and the research hypotheses. Section 3 describes the methodology used for data collection and analysis. Section 4 reports the results, and Section 5 discusses the findings and presents the conclusions.

Literature review

The transformation caused by AI in the labor market has raised concerns about job displacement, increasing interest in research on public perceptions of its social impact24,25. As in other risk contexts26,27, people may not evaluate these impacts uniformly, and an IB may arise in this scenario. According to Helweg-Larsen and Shepperd28, this bias is modulated by factors such as perceived control, severity, proximity, comparative target, and affect-related responses to risk. These factors are also relevant when assessing AI’s potential effects on employment.

However, explicit measurement of this bias at the personal level is rare in the literature. It appears implicitly in some surveys, although without formal statistical testing. For instance, the Pew Research survey10 asked about the expected impact of AI on jobs in general versus one’s own job, differentiating the level and valence of the impact. The gap between the two assessments points to a clear bias, but it was not statistically analyzed. Other large-scale surveys corroborate this pattern. For example, the Ipsos survey19 found that 60% of respondents believe AI will affect employment in the next five years, yet only 36% fear their own job could be replaced. Similarly, the Eurobarometer Special 460 survey20 reported that while 74% of Europeans think AI will eliminate more jobs than it creates, 53% believe their own position would “not at all” be affected by automation.

Weber9 explicitly measures both dimensions of the bias, though from an organizational rather than a personal perspective, focusing on how human resources professionals anticipate a lower impact of AI on their own companies compared to others. The findings reveal a consistent pattern of bias. Overall, the literature suggests a tendency to perceive a greater impact of AI on others’ jobs than on one’s own, pointing to the presence of IB. Accordingly, the first hypothesis is:

Hypothesis 1

There is an invulnerability bias in perceptions of AI’s impact, such that individuals believe AI will have a significantly greater impact on others’ jobs than on their own.

In addition to these findings on the level of expected impact, manifestations of personal optimism regarding the effect of AI have also been documented. In the university context, a qualitative study in Hungary explored the perceptions of young students in non-technical fields regarding automation and employment, revealing an optimism bias. The results indicated that at a general level, moderate optimism prevailed regarding future changes in automation, while at an individual level, perceptions tended to be significantly more optimistic29. A similar perception exists in journalism education, where students largely believe AI will not threaten job stability30.

This optimistic trend can also be observed in the professional sphere. Woodruff et al.8 point out that white-collar workers perceive Generative AI (GAI) mainly as a support tool for routine tasks. In the communication sector, a qualitative study suggests that optimism bias shapes professionals’ views on AI. While they acknowledge the rise of AI-generated content, they still feel relatively secure in their jobs, perceiving AI as a tool for efficiency rather than a disruptive force31. Likewise, in another professional field, a global survey of 791 psychiatrists32 revealed that most expected AI to impact tasks like updating medical records (75%) and synthesizing information (54%). Yet only 3.8% thought AI could render their jobs obsolete, and just 17% believed it was likely to replace human clinicians in providing empathic care.

Survey data reinforce this optimistic perception, which is also observed in professional environments. In addition to the findings of Pew Research10, which hint at an optimistic bias, the Ipsos report19 also reveals optimism, with 37% of respondents believing that their job will get better thanks to AI, while only 16% believe it will get worse. However, the surveys indicate that concern and nervousness are also on the rise33,34.

Taken together, the literature suggests that the valence attributed to the impact of AI is generally favorable when individuals assess their own employment, supporting the second hypothesis:

Hypothesis 2

Individuals believe that if an impact occurs, it will be more positive for their own jobs than for others’ jobs, thereby creating a personal optimism bias.

Although optimism about AI’s role in employment is widespread, some studies suggest that this perception may underestimate its true impact. Earlier research identified less-educated or low-income workers as most vulnerable35,36. Frey and Osborne37 found that jobs in management, education, law, engineering, and computing were less susceptible to automation. However, as AI advances, this view is changing and new groups are increasingly at risk. Orchard and Tasiemski6 warn that GAI could partially or fully replace traditional professions, as it is most efficient in tasks like content creation, customer service, or software engineering. Frank5 also notes that GAI may automate specific cognitive and creative tasks. Eloundou et al.7 estimate that with full integration of LLM-powered software in the United States, 46–55% of jobs may experience significant exposure to automation. Even partial disruption could especially affect high-skilled occupations once considered immune.

This pattern is reflected in workers’ perceptions: educational background appears to shape how individuals view the risks of automation. Data from the Spanish Foundation for Science and Technology38 indicate that 32.3% of respondents without primary education regard robotization as a serious threat, compared to 16.6% among those with university degrees. In contrast, Ghimire et al.39 found that in Atlanta, Hispanics and individuals with lower educational attainment did not view automation as a major danger. This divergence, also evident in the Ipsos19 survey, suggests that educational attainment may shape optimism toward automation; however, the direction of this relationship remains unclear, underscoring the need for further research.

These findings suggest that IB and OBTI may differ by occupational group, since perceived proximity to risk and the comparison group selected28 are not evenly distributed across the labor market. Workers in repetitive roles may consider the risk as more immediate and therefore minimize it less, yet they also expect fewer benefits. In contrast, those in prestigious or expertise-based occupations may regard the threat as distant and compare themselves with less-qualified groups, reinforcing their sense of security. This reasoning supports the third hypothesis:

Hypothesis 3

The biases in AI-impact perceptions vary by job category, revealing specific differences among occupational groups.

Methodology

Study design and data collection

A non-probability convenience sampling method was employed, which is appropriate for the exploratory nature of the research. This approach aimed to reach a diverse sample of working adults across various professional sectors. No incentives were offered, and all responses were anonymous. Although no geographical identifiers were collected to ensure participant anonymity, the survey language and recruitment methods strongly suggest that most participants are Spanish-speaking individuals, likely residing in Spain. The questionnaire, a 13-item survey developed in Google Forms, included Likert-type scales (Q8–Q13), multiple-choice questions (Q1–Q3, Q6, Q7), and open-ended responses (Q4, Q5). It covered sociodemographic data (Q2–Q7), knowledge of AI (Q13), and the perceived impact of AI on employment, both personally (Q10, Q11) and on others (Q8, Q9). A consent question (Q1) ensured voluntary participation and verified that respondents were legal adults, while a control question (Q12) assessed response consistency. Sociodemographic questions (Q2–Q7) used standard labor-market terminology.

Perception items (Q8–Q11) were adapted primarily from Kochhar10 and cross-checked against major AI surveys20,29,38. Kochhar’s10 original time horizon was shortened from 20 to 15 years. The original three-option response scale was also replaced with an 11-point scale (0–10), mirroring the grading system used throughout Spain’s educational system and therefore familiar to respondents in Spain. Additionally, this scale provides balanced anchors (0 = none/very negative, 10 = complete/very positive) and a true midpoint (5). AI knowledge (Q13) was self-assessed on a 0–10 scale. A five-participant pilot study led to minor edits. The final survey was distributed via LinkedIn posts and e-mail lists (Oct–Nov 2024). Further details are provided in Appendix A of the Supplementary Material.

This study did not require ethical approval as per the guidelines of the Ethics Committee of the Universidad Pontificia Comillas, which waives the need for approval in cases involving the voluntary collection of anonymized data from adult participants. All methods were conducted in accordance with relevant guidelines and regulations. The survey was anonymous, and no personally identifiable data were collected. All participants provided informed consent for the processing and analysis of their responses. Additionally, all participants confirmed that they were of legal age.

The required sample size was 179 for the Wilcoxon test, assuming a small effect (0.25), α = 0.05, and power = 0.9. After data collection and cleaning, the final sample included 201 participants. Analyses were conducted in R40.
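The sample-size figure can be reproduced approximately by solving for a paired t-test and applying the asymptotic relative efficiency correction (ARE = 3/π under normality) for the Wilcoxon signed-rank test. The sketch below is an illustrative Python translation under that standard ARE assumption, not the authors' code (the study's analyses were run in R):

```python
import math
from statsmodels.stats.power import TTestPower

# Required n for a paired/one-sample t-test: d = 0.25, two-sided alpha = 0.05, power = 0.90
n_t = TTestPower().solve_power(effect_size=0.25, alpha=0.05, power=0.90,
                               alternative="two-sided")

# Wilcoxon signed-rank adjustment: divide by the ARE under normality (3/pi ~ 0.955)
n_w = math.ceil(n_t / (3 / math.pi))
print(round(n_t), n_w)  # close to the 179 respondents reported above
```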

Job positions (Q4) were grouped into professions for analysis, and educational level (Q7) was recoded into three categories: basic education (secondary, high school, vocational training), bachelor’s degree, and graduate degree (master’s or doctorate).

Two bias indicators were developed using the indirect method28, which estimates personal and others’ perceptions separately. IB was defined as the difference between the perceived impact on others’ jobs (Q8) and the impact on one’s own job (Q10), and OBTI as the difference between the valence of perceived impact on one’s own job (Q11) and on others’ jobs (Q9). Both metrics were subsequently converted into binary variables: BIB, equal to 1 if IB > 0, and BOBTI, equal to 1 if OBTI > 0.
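Under these definitions, the four derived variables follow directly from the paired items. A minimal Python sketch with hypothetical responses (column names Q8–Q11 mirror the questionnaire items; the values are invented, not study data):

```python
import pandas as pd

# Hypothetical answers on the 0-10 scales (illustrative values only)
df = pd.DataFrame({
    "Q8":  [8, 6, 7],   # perceived impact of AI on others' jobs
    "Q9":  [4, 5, 6],   # valence of AI's impact on others' jobs
    "Q10": [5, 6, 3],   # perceived impact of AI on one's own job
    "Q11": [6, 5, 4],   # valence of AI's impact on one's own job
})

df["IB"]    = df["Q8"]  - df["Q10"]          # invulnerability bias
df["OBTI"]  = df["Q11"] - df["Q9"]           # optimism bias by type of impact
df["BIB"]   = (df["IB"]   > 0).astype(int)   # 1 if IB > 0
df["BOBTI"] = (df["OBTI"] > 0).astype(int)   # 1 if OBTI > 0
print(df[["IB", "OBTI", "BIB", "BOBTI"]])
```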

Data analysis and statistical methods

Data analysis followed a sequential, multi-method approach appropriate for the exploratory nature of the study and the structure of the dataset. First, descriptive analyses were conducted to examine the distributions and summary statistics of all variables. Second, four derived variables were computed: IB, OBTI, and their corresponding binary indicators, BIB and BOBTI. Third, to assess the statistical significance of the observed biases, inferential tests were applied to both the continuous and binary versions of these variables. Fourth, sociodemographic predictors of bias magnitude were tested using OLS regression models with standardized variables and robust standard errors; job category clusters were later included as predictors in a second modeling step. Fifth, hierarchical clustering of job categories was performed based on IB and OBTI scores, excluding 23 heterogeneous “Other” responses. Differences between clusters were examined using the Kruskal–Wallis test and Dunn’s post hoc test with the Holm adjustment. Sixth, to further explore potential nonlinear relationships and interactions among predictors, two machine learning methods were applied: random forest analysis with Shapley value interpretation and a Bayesian-optimized decision tree.

This analytical strategy was deliberately designed to address key gaps in prior research on IB in the context of AI-driven employment change. Existing large-scale studies such as Kochhar10 and Ipsos19 rely almost exclusively on aggregate descriptive statistics that either do not explicitly differentiate between personal and comparative perceptions or do not statistically test the presence of IB. The few works that move beyond description (for instance, Weber9) do so at the organizational level, using classical tests and leaving personal-level perceptions and advanced analytics unexplored. This study addresses these gaps by (i) constructing individual-level IB metrics that contrast self-assessments with assessments of workers in general, and (ii) combining classical inference with machine-learning models (e.g., random forests with Shapley values and a Bayesian-optimized decision tree) to uncover non-linear patterns and heterogeneity in bias.

Results

Descriptive and prevalence analysis

After data preprocessing, 201 valid responses remained for analysis. Of these, 53.2% identified as women and 46.8% as men. Regarding educational attainment, 41.3% of participants held a postgraduate degree, 39.8% had completed a university degree, and 18.9% reported a basic education level. The age distribution was relatively balanced, with 24.4% aged 20–29, 16.9% aged 30–39, 17.9% aged 40–49, 23.4% aged 50–59, 15.9% aged 60–69, and 1.5% aged 70 or older. Although the proportion of respondents aged 60 years or older was comparatively small, this does not detract from the study’s relevance. Younger cohorts are precisely the segments of the workforce that will have to confront and adapt to forthcoming AI-driven transformations, whereas many older workers are likely to retire before these changes fully unfold. The survey language and recruitment channels indicate that most participants are likely based in Spain. More details are available in Appendix B.

The mean IB value was 1.39, indicating that participants perceived AI’s impact as higher for others’ jobs than their own (Wilcoxon test, p < 0.001). Additionally, 59.2% of participants exhibited this bias (χ² = 6.4, p = 0.011).

The mean OBTI was − 0.02, not significantly different from zero (Wilcoxon test, p = 0.905). However, only 31.8% of respondents exhibited an optimism bias, significantly lower than 50% (χ² = 25.8, p < 0.001). Thus, most participants (68.2%) did not show an optimism bias, indicating a general absence of personal optimism.
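The two prevalence tests can be illustrated on simulated ratings. The sketch below (synthetic data, not the study's responses) pairs a Wilcoxon signed-rank test on the paired impact ratings with a chi-square test of the BIB proportion against 50%:

```python
import numpy as np
from scipy.stats import wilcoxon, chisquare

rng = np.random.default_rng(0)
n = 201
own = rng.integers(0, 11, n)                          # Q10: impact on own job
others = np.clip(own + rng.integers(0, 4, n), 0, 10)  # Q8: impact on others, shifted upward

# Paired Wilcoxon signed-rank test (zero differences are dropped by default)
stat, p_wilcoxon = wilcoxon(others, own)

# Chi-square test of the observed BIB proportion against 50%
bib = int((others - own > 0).sum())
chi2, p_prop = chisquare([bib, n - bib])
print(p_wilcoxon < 0.001, p_prop < 0.05)
```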

Regression analysis

Table 1 reports the OLS regression results for IB and OBTI. Predictors include age, gender, educational level, self-assessed AI knowledge, and an age-by-gender interaction term. The IB model is globally significant (F(6,194) = 5.631, p < 0.001). Both holding a graduate degree (p = 0.005) and higher self-assessed AI knowledge (p = 0.028) are associated with a significantly lower IB. The age-by-gender interaction is also significant (p = 0.014), indicating that the effect of age on IB is stronger for women.

Table 1.

OLS models for IB and OBTI.

                          IB                            OBTI
                          Coef.   Robust SE  P-value    Coef.   Robust SE  P-value
Intercept                  0.087   0.142     0.541       0.049   0.161     0.760
Age                       −0.069   0.085     0.416      −0.037   0.096     0.701
Female                     0.238   0.140     0.090      −0.122   0.149     0.414
Bachelor’s degree         −0.091   0.183     0.620       0.043   0.202     0.833
Graduate degree           −0.479   0.170     0.005       0.010   0.201     0.959
AI knowledge              −0.149   0.067     0.028       0.166   0.082     0.043
Interaction: female-age    0.327   0.132     0.014      −0.085   0.137     0.533
Sample size                201                           201
R²/R² adjusted             0.148/0.122                   0.051/0.022

In the OBTI model, self-assessed AI knowledge (Q13) shows a significant positive effect (p = 0.043), indicating that higher AI knowledge is associated with a greater optimistic bias regarding the impact of AI on one’s own job compared to others’. No other predictors were significant.
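A model of this form, with standardized predictors, an age-by-gender interaction, and heteroskedasticity-robust standard errors, can be sketched with statsmodels. The data below are simulated with invented coefficients that only mimic the direction of the reported effects:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 201
d = pd.DataFrame({
    "age":     rng.normal(size=n),       # standardized age
    "female":  rng.integers(0, 2, n),    # gender dummy
    "ai_know": rng.normal(size=n),       # standardized self-assessed AI knowledge
})
# Simulated IB with a negative AI-knowledge effect and an age-by-gender
# interaction (coefficients are invented, not the paper's estimates)
d["IB"] = 0.1 - 0.15 * d["ai_know"] + 0.3 * d["female"] * d["age"] \
          + rng.normal(scale=1.0, size=n)

# OLS with HC3 heteroskedasticity-robust standard errors
model = smf.ols("IB ~ age * female + ai_know", data=d).fit(cov_type="HC3")
print(model.summary().tables[1])
```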

Job clustering analysis

Eight job categories were consolidated. Twenty-three unclassifiable responses were excluded due to their heterogeneity. Figure 1(a) displays a heatmap with the dendrogram from hierarchical clustering (Ward.D2 method, Euclidean distance). Based on the majority rule in the NbClust package, the optimal number of clusters was three. Figure 1(b) shows the job categories distributed into three clusters: Cluster 1 (C1) with high IB and low OBTI, Cluster 3 (C3) with high OBTI and low IB, and Cluster 2 (C2) with intermediate values for both indicators.

Fig. 1. Clustering analysis of job categories based on IB and OBTI. (a) Heatmap of clustering. (b) Cluster distribution.

C1 includes Health, Government/Public Administration, and Legal/Law; C2 includes Administration, Education, Finance, and Sales/Marketing; C3 consists of IT/Engineering/Architecture. Table 2 presents the mean scores for each category. Levene’s test indicated no significant variance differences in IB or OBTI among groups (p = 0.207 and p = 0.921, respectively). However, significant differences in IB and OBTI were observed among clusters (Kruskal–Wallis test: p < 0.001 for IB; p = 0.020 for OBTI). Post hoc analysis (Dunn’s test) showed that C1 had the highest IB score, significantly higher than both C2 (p = 0.004) and C3 (p < 0.001), with no significant difference between C2 and C3 (p = 0.185). For OBTI, differences were mainly due to C3’s higher optimism bias, which was significantly greater than that of C1 (Dunn’s test, p = 0.015) but not of C2 (p = 0.140); no significant difference was observed between C1 and C2 (p = 0.162).
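The clustering and group-comparison steps can be sketched as follows. This illustration uses hypothetical (IB, OBTI) profiles for eight job categories and simulated individual-level scores, and substitutes pairwise Mann–Whitney tests with a Holm adjustment for the Dunn's post hoc test reported above:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import kruskal, mannwhitneyu
from statsmodels.stats.multitest import multipletests

# Hypothetical (IB, OBTI) profiles for eight job categories (not the study's values)
X = np.array([[2.2, -0.4], [2.1, -0.3], [2.0, -0.4],            # high-IB group
              [1.1, 0.0], [1.0, 0.1], [1.1, -0.1], [0.9, 0.0],  # intermediate group
              [0.6, 0.8]])                                       # high-OBTI category

Z = linkage(X, method="ward")                    # Ward linkage, Euclidean distance
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the dendrogram at 3 clusters

# Kruskal-Wallis on simulated individual-level IB scores (cluster sizes as in Table 2)
rng = np.random.default_rng(2)
g1, g2, g3 = rng.normal(2.2, 1, 61), rng.normal(1.1, 1, 85), rng.normal(0.6, 1, 32)
h, p_kw = kruskal(g1, g2, g3)

# Pairwise Mann-Whitney tests with a Holm adjustment (a stand-in for Dunn's test)
raw = [mannwhitneyu(a, b).pvalue for a, b in [(g1, g2), (g1, g3), (g2, g3)]]
adj = multipletests(raw, method="holm")[1]
print(labels, p_kw < 0.001, adj.round(4))
```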

Table 2.

Bias values by cluster.

             Bias indicators                Other characteristics
             IB     OBTI   BIB   BOBTI      AI knowledge  Age   Female (%)  Graduate degree (%)
C1 (n = 61)  2.18  −0.36   0.74  0.28       4.25          47.4  75.4        36.1
C2 (n = 85)  1.07   0.02   0.56  0.32       5.24          42.3  48.2        47.1
C3 (n = 32)  0.59   0.84   0.34  0.41       6.47          38.9  25.0        46.9

Significant differences in BIB proportions were found between C1 and C3 (p = 0.002), with marginally significant differences observed between C1 and C2 and between C2 and C3 (p = 0.098 for both). Nevertheless, BOBTI showed no significant differences (chi-square test, p = 0.455), indicating that the optimistic bias proportion is similar across clusters despite variations in its average magnitude. Moreover, the prevalence of this bias was significantly below 50% in both C1 and C2 (p = 0.001 for both).
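Using approximate counts implied by Table 2 (e.g., BIB ≈ 0.74 × 61 ≈ 45 in C1 and ≈ 0.34 × 32 ≈ 11 in C3), the proportion comparisons can be sketched with scipy. The exact p-values depend on the true counts, so this is illustrative only:

```python
from scipy.stats import chi2_contingency, binomtest

# Approximate counts implied by Table 2 (rounded; not the exact study counts)
c1_bib, c1_n = 45, 61
c3_bib, c3_n = 11, 32

# Two-proportion comparison of BIB between C1 and C3 via a 2x2 chi-square test
table = [[c1_bib, c1_n - c1_bib], [c3_bib, c3_n - c3_bib]]
chi2, p_c1_c3, dof, _ = chi2_contingency(table)

# BOBTI prevalence in C1 (~0.28 * 61 ~ 17) tested against 50% with an exact binomial test
p_bobti = binomtest(17, c1_n, 0.5).pvalue
print(round(p_c1_c3, 4), round(p_bobti, 4))
```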

Table 2 summarizes the bias indicators and the sociodemographic profile of each cluster. IB and OBTI are shown as mean scores, whereas BIB and BOBTI appear as the proportions of respondents exhibiting the bias. The table also reports each cluster’s mean age and self-assessed AI knowledge, together with the percentages of female participants and respondents holding a graduate degree. These results indicate that C1, with the highest IB, has a greater proportion of women and lower average AI knowledge, while C3, with no significant IB, has fewer women, higher AI knowledge, and a slightly younger population. Some of these differences in IB indicators may be attributable to the clusters’ sociodemographic composition.

To further assess the contribution of job category to IB, a regression model was estimated with job category cluster as a predictor. The results, summarized in Table 3, confirmed that individuals in C2 and C3 had significantly lower IB than those in C1 (p = 0.010 and p = 0.018, respectively), indicating that job category influences IB, with C3 showing the lowest levels. Additionally, gender (p = 0.043), postgraduate-level education (p = 0.006), and the age-by-gender interaction (p = 0.015) remained significant predictors of IB. Thus, while sociodemographic factors influence bias, job category-based clustering provides an additional explanatory dimension.

Table 3.

OLS model including job clusters as a predictor of IB.

                          Coef.   Robust SE  P-value
Intercept                  3.637   0.753     0.000
Age                       −0.016   0.014     0.259
Female                    −1.868   0.917     0.043
Bachelor’s degree         −0.554   0.443     0.213
Graduate degree           −1.067   0.385     0.006
AI knowledge              −0.091   0.067     0.180
C2                        −0.940   0.360     0.010
C3                        −1.177   0.491     0.018
Interaction: female-age    0.050   0.020     0.015
Sample size                178
R²/R² adjusted             0.172/0.133

The same procedure was repeated for the OBTI model, incorporating the cluster variable as a predictor. However, none of the predictors reached statistical significance.

Random forest and decision tree analysis

Machine learning methods were applied to capture potential non-linear relationships and to complement the explanation of IB. Job variables were specified at a more granular level to enable a finer-grained analysis of their effect on IB; instead of using the clusters derived in the previous stage, the original occupational categories were retained. First, a random forest model was trained on the dataset, which was split into training and testing sets, and Shapley values were calculated to assess each variable’s influence on IB. Figure 2 presents the Shapley values for all variables explaining IB. Larger absolute Shapley values indicate greater importance in explaining the bias, with positive (red) and negative (blue) contributions. Notably, the AI knowledge variable exhibits the highest Shapley values, predominantly negative. This suggests that individuals with greater AI knowledge tend to hold more extreme opinions, generally showing lower IB, although some cases indicate higher bias.

Fig. 2. Shapley values associated with invulnerability bias for each of the explanatory variables’ values.

A similar pattern is observed for the valence of the perceived impact of AI on one’s own job (impact_type_person, Q11). Individuals who perceive AI as having a more negative impact on their own job (blue points) tend to exhibit higher IB. This indicates that perceiving AI as a significant threat to one’s position may reinforce the bias rather than reduce it. This result is coherent with the (modest) negative correlation observed between IB and OBTI (r = −0.14, p = 0.042).
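A random-forest analysis of this kind can be sketched in Python. The paper interprets the model with Shapley values (e.g., via the shap package); the stand-in below uses scikit-learn's permutation importance on simulated data in which IB falls with AI knowledge and own-job impact valence, so it is a sketch of the approach rather than a reproduction:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 201
ai_know     = rng.uniform(0, 10, n)   # self-assessed AI knowledge (Q13)
valence_own = rng.uniform(0, 10, n)   # valence of AI's impact on own job (Q11)
age         = rng.uniform(20, 70, n)  # an irrelevant feature in this simulation

# Simulated IB that decreases with AI knowledge and own-job valence
ib = 3 - 0.25 * ai_know - 0.15 * valence_own + rng.normal(scale=0.5, size=n)

X = np.column_stack([ai_know, valence_own, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, ib, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
print(dict(zip(["ai_know", "valence_own", "age"], imp.importances_mean.round(3))))
```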

A decision tree model was trained to predict IB using 10-fold cross-validation repeated three times to reduce estimation bias and variance. Due to the sensitivity of decision trees to hyperparameters, Bayesian optimization (20 iterations) adjusted three key parameters: maximum tree depth, minimum instances for a split, and minimum instances in leaf nodes. The resulting optimal tree is shown in Fig. 3.
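The tuning setup can be sketched with scikit-learn: a grid search over the same three hyperparameters under 10-fold cross-validation repeated three times stands in for the Bayesian optimization used in the paper, on simulated data with invented grid values:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import GridSearchCV, RepeatedKFold

rng = np.random.default_rng(4)
X = rng.uniform(0, 10, size=(201, 3))
y = 3 - 0.25 * X[:, 0] - 0.15 * X[:, 1] + rng.normal(scale=0.5, size=201)

# 10-fold cross-validation repeated three times, as described above
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=0)

# Search over the same three hyperparameters (grid search stands in for
# the Bayesian optimization; the candidate values are invented)
grid = {"max_depth":         [2, 3, 4, 5],
        "min_samples_split": [5, 10, 20],
        "min_samples_leaf":  [2, 5, 10]}
search = GridSearchCV(DecisionTreeRegressor(random_state=0), grid,
                      cv=cv, scoring="neg_mean_squared_error").fit(X, y)
print(search.best_params_)
```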

Fig. 3. Optimal decision tree to explain IB.

Figure 4 displays the importance of each variable. The first split in the tree is based on AI knowledge, consistent with the Shapley analysis and highlighting its strong association with IB (Table 1). The second split is determined by the perceived impact valence of AI on one’s own job, which also ranks as the second most important variable in the Shapley analysis. Age is selected as the third split, consistent with its fourth-place ranking in the Shapley value analysis.

Fig. 4. Variable importance in the decision tree.

The consistency between the Shapley value analysis and the decision tree results provides additional empirical support for the non-linear explanatory power of these variables with respect to IB.

Discussion and conclusions

This study investigates the presence of biases in perceptions of AI’s future impact on employment. The theoretical basis for applying IB to AI-driven labor displacement is as follows. Although IB originates in health psychology, its essence lies in comparative optimism, so it applies naturally to AI and job displacement. In both health and employment contexts, individuals may ask themselves, “Will I be the one affected?” and often answer, “Probably not; others more than me.” IB is therefore not limited to health contexts but represents a general psychological mechanism for distancing oneself from perceived threats, in this case job substitution by AI, an event that clearly poses a threat because it would result in unemployment. Applied to AI, the bias reflects the same mechanism, reducing perceived personal risk by shifting it onto others, and thus exemplifies comparative optimism regarding a threat to one’s job security.

The results confirm Hypothesis 1, revealing an average IB in judgments of impact: AI is perceived as having a greater effect on others’ jobs than on one’s own (p < 0.001). This bias was present in the majority of respondents (59.2%), a significantly predominant proportion (p = 0.011). This finding confirms previous results9,10,29, while providing new empirical evidence from a different sociocultural context and employing a direct, robust measurement of IB. Specifically, this study moves beyond the descriptive approach of prior surveys10 and qualitative studies29, as neither explicitly quantified this bias. In contrast to Weber9, who examined IB at the organizational level, this research analyzes the phenomenon at the individual level in a heterogeneous sample of Spanish-speaking professionals. Beyond traditional statistical methods, machine learning was used to uncover nonlinear relationships and to profile bias subgroups that may be overlooked in linear analyses. Thus, this approach provides a more detailed and contextualized understanding of IB in a distinct sociocultural environment.

Firstly, OLS analysis indicates that both AI knowledge and postgraduate education influence IB, consistent with previous research identifying these variables as key differentiators in opinions on automation20,38,39. Higher AI knowledge and education levels are associated with a lower degree of IB, suggesting a greater ability to recognize AI’s impact on one’s own job and a reduced tendency to externalize risk. However, it is important to note that AI knowledge in this study is self-assessed, so its effect may reflect not only actual knowledge but also greater confidence and cognitive openness.

Recent research in technological contexts suggests that individuals with higher levels of digital literacy are less likely to underestimate risk18. Similarly, meta-analyses on emerging technologies have shown that technical knowledge is associated with a lower subjective perception of risk, shaping how individuals interpret and evaluate potential technological impacts17. Specifically, in the context of AI, greater literacy not only reduces uncertainty and strengthens self-efficacy but also promotes more objective risk assessments23 and a more receptive emotional attitude towards technological change13,22. Taken together, these findings support the idea that individuals with higher education and greater AI knowledge exhibit less bias, as their judgments are informed by a more objective understanding rather than by general assumptions.

Secondly, a significant interaction between age and gender was observed, such that IB increased with age among women; however, this gender-by-age pattern requires clarification. In the baseline model (Table 1, without occupational controls), gender is not significant, but the age-by-gender interaction is positive, indicating that IB rises with age among women. Once job-cluster dummies are included (Table 3), the gender main effect turns negative and significant (β = − 1.868, p = 0.043) while the interaction remains positive but smaller (β = 0.050, p = 0.015). This shift suggests that the initial gender pattern was partly confounded by occupational concentration, as many older women work in public-sector and legal professions, the cluster with the highest IB. After controlling for occupation, women display a substantially lower IB than men across the observed age range. Although the age-by-gender slope decreases, it does not reverse the ordering, so the gender gap narrows only modestly at older ages. Future research should verify this interaction with stratified samples and sector-specific analyses.
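How occupational composition can flip a gender main effect while the age-by-gender interaction stays positive is easiest to see in a regression sketch. Everything below is hypothetical: the data-generating coefficients merely mimic the signs reported here, and the bare normal-equations solver is a stand-in for the R tooling cited in the references, not the study’s actual pipeline.

```python
import random

def ols(X, y):
    """Least-squares coefficients via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination and partial pivoting."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for c in range(k):                      # forward elimination
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):            # back substitution
        beta[c] = (b[c] - sum(A[c][j] * beta[j]
                              for j in range(c + 1, k))) / A[c][c]
    return beta

# Hypothetical sample: negative female main effect, positive
# age-by-female interaction (signs only; magnitudes are invented).
random.seed(0)
X, y = [], []
for _ in range(300):
    age = random.randint(20, 69)
    female = random.randint(0, 1)
    ib = 2.0 - 1.8 * female + 0.05 * age * female + random.gauss(0, 1)
    X.append([1.0, float(age), float(female), float(age * female)])
    y.append(ib)

beta = ols(X, y)  # [intercept, age, female, age x female]
```

With an interaction term present, the gender coefficient is the gap at age zero, so its sign and size must be read jointly with the interaction slope, which is why the main effect can change once controls enter the model.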

Regarding Hypothesis 2, the results show no evidence of OBTI. In fact, only 31.8% of participants perceived a more positive impact on their own job than on others’, representing a statistically significant minority (p < 0.001). Contrary to expectations, this provides sufficient evidence to reject Hypothesis 2. More specifically, the results indicate a lack of personal optimism about the type of impact that AI could have on one’s own job.

Although prior studies reported generally optimistic responses to automation10,19,32, this study finds no evidence of OBTI. Unlike previous research, optimism bias was explicitly measured here, prompting participants to consider personal impact, potentially increasing risk awareness by reducing psychological distance [cf. 28]. Sustained media exposure can further reduce psychological distance41, and in contexts with polarized narratives about AI42, perceived threats may appear more immediate and personal. When coverage emphasizes job replacement without offering coping strategies, it tends to heighten labor-related risk sensitivity and trigger more defensive responses13,43,44. According to Slovic21, difficult-to-control or catastrophic risks foster more negative evaluations, which reduces perceived control and, consequently, optimistic bias28. This pattern is echoed in public perceptions of emerging technologies17 and in recent surveys showing increased concern and nervousness19,33,34.

Hypothesis 3 is supported by the clustering analysis and further confirmed by OLS regression. Significant differences in IB were identified among the clusters, with C1 (comprising the health, law, and public administration sectors) exhibiting the highest values. This is particularly notable since AI has demonstrated high efficiency in tasks such as disease diagnosis45,46, information processing, and legal decision-making47,48. Moreover, studies such as Eloundou et al.7 suggest that these fields are among the most exposed to automation, raising questions about potential overconfidence in these professions. One possible explanation is that, because these roles are traditionally regarded as secure and may involve lower AI knowledge, individuals in these sectors may perceive their positions as less vulnerable to technological disruption. In contrast, only 34% of individuals in C3, which includes technology, engineering, and architecture professions, exhibit IB, potentially due to greater familiarity with AI in their daily work environments49.

At the aggregate level, workers who foresee a greater impact of AI on their own jobs (lower IB) tend to expect a more positive outcome for themselves (higher OBTI), a pattern especially pronounced in C3 (see Table 2), whereas C1 displays higher IB alongside lower OBTI. This modest inverse relationship between IB and OBTI suggests a dual-pathway pattern. Well-informed and highly educated respondents, who score lower on IB, also register higher OBTI, suggesting a competence-based optimism: they judge the magnitude of AI’s impact on themselves more realistically yet remain confident that they can turn that impact to their advantage. In contrast, respondents with limited education or AI knowledge exhibit the mirror image, higher IB but lower OBTI, consistent with a defensive mechanism: they minimize the threat by projecting it onto others’ jobs, while anticipating a more negative personal outcome when pressed to specify valence.

Recent research indicates that technology can foster current job optimism but also create uncertainty about future prospects16, especially when individuals doubt their ability to adapt and perceive AI as a direct threat to professional continuity12,50. This response is likely intensified by a broader emotional climate of growing nervousness about AI1. According to the affect heuristic, emotional appraisals of risk can further amplify perceived severity, even when personal risk is downplayed51. Thus, the combination of IB and the absence of OBTI may function both as a defensive shield and as a catalyst for coping with potential adverse outcomes. This competence-versus-defense dynamic reconciles the modest negative IB-OBTI correlation (r = −0.14, p = 0.042) with the cluster pattern (C3 vs. C1). However, larger, stratified studies will be needed to test these mechanisms formally.
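The modest inverse IB-OBTI association can be reproduced in miniature. The data-generating step below is an assumption (a weak negative linear dependence plus noise, chosen only to echo the reported sign); the Pearson formula itself is standard.

```python
import random

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [a - mx for a in x]
    dy = [b - my for b in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = (sum(a * a for a in dx) * sum(b * b for b in dy)) ** 0.5
    return num / den

# Synthetic scores: higher IB weakly predicts lower OBTI (assumed DGP).
random.seed(7)
ib = [random.gauss(1.4, 1.5) for _ in range(201)]
obti = [-0.2 * d + random.gauss(0, 1.2) for d in ib]

r = pearson_r(ib, obti)  # a modest negative correlation under this DGP
```

A correlation of this size explains only a small fraction of variance, which is why the text treats the dual-pathway interpretation as tentative pending larger samples.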

Although preliminary and derived from a convenience sample, these results highlight two complementary policy levers: (i) AI-literacy initiatives to reduce inflated invulnerability and (ii) targeted reskilling support to attenuate defensive pessimism among less-prepared groups. Such initiatives should address not only technical abilities but also human skills, including communication, problem-solving, and decision-making, which are essential for preparing workers52. In addition, emotional well-being should be considered, as the pressure to acquire new skills can create cognitive challenges13,53. Timely identification of IB and OBTI among employees and managers is crucial for all major departments within an organization. Managers, in particular, must recognize the implications of these biases, as top management plays a central role in effective AI innovation strategies54,55. As a first step, organizations should implement a brief and focused mandatory training course on cognitive biases and their consequences, ensuring that all employees, not just managers, are aware of these issues. Given the diversity of job roles, each employee should then decide whether further upskilling in AI is appropriate for their position56. Fostering a culture of continuous learning in AI and technology, while addressing both cognitive biases and technological anxiety, is a critical strategic factor for successful and rapid AI adoption57,58.

The development of an AI-oriented culture within organizations also depends on the skills and attitudes acquired during formal education. To maximize the effectiveness of these initiatives, it is important to consider how learning environments shape AI knowledge and risk perception. Exposing students to professional perspectives, such as lectures by industry experts, not only enhances career orientation59 but also clarifies how AI is applied across sectors. This bridges academic content with practical applications, fostering a realistic view of risk. Combining these experiential opportunities with classroom instruction further reinforces understanding and communication skills60, supporting a balanced perspective in rapidly changing technological contexts.

Limitations and directions of future research

The main limitation of this study is that the sample is primarily drawn from Spain, which may limit generalizability due to sociocultural influences. Additionally, the study relies on a voluntary, open-call sample. Although anonymity and the absence of incentives help reduce social desirability bias, this recruitment strategy can introduce self-selection bias, likely over-representing individuals more interested in AI or with stronger opinions. The sample is also skewed toward highly educated, white-collar respondents (81% with a university degree), shows a slight bias toward younger adults (ages 20–29), and includes very few participants over age 69. Prior literature61 indicates that higher educational attainment is linked to a greater perception of risk, which could contribute to a lower observed IB. Our results indicate that higher levels of education and greater AI knowledge are associated with lower IB, whereas the effects of age appear to be weaker. However, because younger, highly educated respondents often display greater digital skills62, this sample profile may have contributed to a lower observed IB, possibly due to higher self-perceived familiarity with AI. Overall, these sample characteristics likely lead to an underestimation of population-level IB; thus, the mean value of 1.39 should be interpreted as a conservative lower bound. Future studies should employ probability or stratified sampling to enhance representativeness and replicate this design across broader and more diverse populations, covering a wider age range and greater diversity in country of residence, occupation, and education, to test the robustness and generalizability of the observed biases.

While Common Method Variance (CMV) is a potential concern in single-source survey designs, IB and OBTI were computed as difference scores between parallel items, which helps minimize the impact of uniform response tendencies. Although residual method bias cannot be fully ruled out, this approach reduces its influence on the main variables. Future research should incorporate multi-source data and formally test for CMV.

Moreover, future studies could complement this quantitative approach by incorporating qualitative methods such as interviews or focus groups. These techniques may help uncover the underlying motivations, beliefs, and contextual factors that drive the emergence of bias. Exploring individual narratives could provide a deeper understanding of how workers interpret AI-related risks and why certain groups are more prone to biased perceptions.

Another promising direction for future research involves intra-sector comparisons. This would involve adding another level of disaggregation to the self-versus-others comparison by asking respondents to assess the impact of AI on other workers within their own sector or professional group. While the present study was based on a general and inter-sectoral comparison, capturing broad perceptions across the labor market, this more specific approach could offer complementary insights that enrich the interpretation of IB in concrete professional contexts.

Finally, incorporating objective measures of AI knowledge would allow for a more precise assessment of its relationship with the identified biases.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary Material 1 (27.3KB, xlsx)
Supplementary Material 2 (19.1KB, docx)

Author contributions

F.B.-J.: Designed the study, analyzed data, interpreted results, and drafted the manuscript. J.L.A.-B.: Conceptualized the study, designed the study, interpreted results, and revised the manuscript. E.C.G.-M.: Designed the study, collected and analyzed data, interpreted results, and drafted the manuscript. G.G.-L.: Collected data, conducted the literature review, and drafted the manuscript. The manuscript was approved by all authors.

Funding

This research was partially funded by the Santalucía Chair of Analytics for Education, and by the NORIA research project (‘The Impact of Artificial Intelligence on the Legal Framework: Special Consideration of Its Effect on Legal Liability’; Grant number: PP2023_1, Universidad Pontificia Comillas).

Data availability

The survey dataset generated and analyzed in this study is available as supplementary material.

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Maslej, N. et al. The AI Index 2025 Annual Report. Stanford University (2025). https://hai.stanford.edu/ai-index/2025-ai-index-report
  • 2.Soomro, R. B. et al. A SEM–ANN analysis to examine impact of artificial intelligence technologies on sustainable performance of SMEs. Sci. Rep.15, 5438. 10.1038/s41598-025-86464-3 (2025). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Dabija, D. C. & Frau, M. Leveraging artificial intelligence in marketing: trend or the new normal? Oeconomia Copernicana. 15, 1173–1181. 10.24136/oc.3390 (2024). [Google Scholar]
  • 4.Michael, O. Maximising the potentials of small and medium scale business enterprises in developing nations through the use of artificial intelligence: AI adoption by SMEs in the developing nations. In The Future of Small Business in Industry 5.0 (eds Olubiyi, T. O., Suppiah, S. D. & Chidoko, C.) 215–246 (IGI Global, 2025).
  • 5.Frank, M. R. Brief for the Canada House of Commons Study on the Implications of Artificial Intelligence Technologies for the Canadian Labor Force: Generative artificial intelligence shatters models of AI and labor. Preprint at 10.48550/arXiv.2311.03595 (2023).
  • 6.Orchard, T. & Tasiemski, L. The rise of generative AI and possible effects on the economy. Econ. Bus. Rev. 9. 10.18559/ebr.2023.2.732 (2023).
  • 7.Eloundou, T., Manning, S., Mishkin, P. & Rock, D. GPTs are gpts: labor market impact potential of LLMs. Science384, 1306–1308. 10.1126/science.adj0998 (2024). [DOI] [PubMed] [Google Scholar]
  • 8.Woodruff, A. et al. How knowledge workers think generative AI will (not) transform their industries. Assoc. Comput. Mach.10.1145/3613904.3642700 (2024). [Google Scholar]
  • 9.Weber, P. Unrealistic optimism regarding artificial intelligence opportunities in human resource management. Int. J. Knowl. Manag. 1–19. 10.4018/IJKM.317217 (2023).
  • 10.Kochhar, R. Which U.S. workers are more exposed to AI on their jobs? Pew Research Center (2023). https://www.pewresearch.org/social-trends/2023/07/26/which-u-s-workers-are-more-exposed-to-ai-on-their-jobs/
  • 11.Weinstein, N. D. Optimistic biases about personal risks. Science246, 1232–1233. 10.1126/science.2686031 (1989). [DOI] [PubMed] [Google Scholar]
  • 12.Ye, D. et al. Employee work engagement in the digital transformation of enterprises: a fuzzy-set qualitative comparative analysis. Humanit. Soc. Sci. Commun.11, 35. 10.1057/s41599-023-02418-y (2024). [Google Scholar]
  • 13.Jin, G., Jiang, J. & Liao, H. The work affective well-being under the impact of AI. Sci. Rep. 14. 10.1038/s41598-024-75113-w (2024). [DOI] [PMC free article] [PubMed]
  • 14.Liu, L., Sun, W., Fang, P., Jiang, Y. & Tian, L. Be optimistic or be cautious? Affective forecasting bias in allocation decisions and its effect. Front. Psychol.13, 1026557. 10.3389/fpsyg.2022.1026557 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Shepperd, J. A., Waters, E., Weinstein, N. D. & Klein, W. M. A primer on unrealistic optimism. Curr. Dir. Psychol. Sci.24, 232–237. 10.1177/0963721414568341 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Gödöllei, A. F. & Beck, J. W. Insecure or optimistic? Employees’ diverging appraisals of automation and consequences for job attitudes. Comput. Hum. Behav. Rep. 10.1016/j.chbr.2023.100342 (2023).
  • 17.Li, C. & Li, Y. Factors influencing public risk perception of emerging technologies: A meta-analysis. Sustainability15, 3939. 10.3390/su15053939 (2023). [Google Scholar]
  • 18.Fatoki, J. G., Shen, Z. & Mora-Monge, C. A. Optimism amid risk: how non-IT employees’ beliefs affect cybersecurity behavior. Comput. Secur.141, 103812. 10.1016/j.cose.2024.103812 (2024). [Google Scholar]
  • 19.Ipsos The Ipsos AI Monitor 2024: A 32-country Ipsos Global Advisor Survey. (2024). https://www.ipsos.com/sites/default/files/ct/news/documents/2024-06/Ipsos-AI-Monitor-2024-final-APAC.pdf
  • 20.European Commission, Directorate-General for Communications Networks, Content and Technology & TNS Opinion & Social. Attitudes towards the impact of digitisation and automation on daily life: report. Eur. Comm. 10.2759/835661 (2017). [Google Scholar]
  • 21.Slovic, P. The Perception of Risk (Routledge, 2016). 10.4324/9781315661773
  • 22.Wu, H., Li, D. & Mo, X. Understanding GAI risk awareness among higher vocational education students: an AI literacy perspective. Educ. Inf. Technol. 10.1007/s10639-024-13312-8 (2025). [Google Scholar]
  • 23.Biagini, G. Towards an AI-literate future: A systematic literature review exploring education, ethics, and applications. Int. J. Artif. Intell. Educ. 10.1007/s40593-025-00466-w (2025). [Google Scholar]
  • 24.Kelley, P. G. et al. Mixture of amazement at the potential of this technology and concern about possible pitfalls: public sentiment towards AI in 15 countries. Bull. IEEE Comput. Soc. Tech. Comm. Data Eng. 28, 1–19 (2021). [Google Scholar]
  • 25.Solaiman, I. et al. Evaluating the social impact of generative AI systems in systems and society. Preprint at https://arxiv.org/abs/2306.05949v4 (2024).
  • 26.Salgado, S. & Berntsen, D. It won’t happen to us: unrealistic optimism affects COVID-19 risk assessments and attitudes regarding protective behaviour. J. Appl. Res. Mem. Cogn.10, 368–380. 10.1016/j.jarmac.2021.07.006 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Mills, L., Freeman, J., Truelove, V., Davey, J. & Delhomme, P. Comparative judgements of crash risk and driving ability for speeding behaviours. J. Saf. Res.79, 68–75. 10.1016/j.jsr.2021.08.006 (2021). [DOI] [PubMed] [Google Scholar]
  • 28.Helweg-Larsen, M. & Shepperd, J. A. Do moderators of the optimistic bias affect personal or target risk estimates? Pers. Soc. Psychol. Rev.5, 74–95. 10.1207/S15327957PSPR0501_5 (2001). [Google Scholar]
  • 29.Vicsek, L., Bokor, T. & Pataki, G. Younger generations’ expectations regarding artificial intelligence in the job market: mapping accounts about the future relationship of automation and work. J. Sociol.60, 21–38. 10.1177/14407833221089365 (2024). [Google Scholar]
  • 30.Calvo-Rubio, L. M. & Ufarte-Ruiz, M. J. Percepción de docentes universitarios, estudiantes, responsables de innovación y periodistas sobre el uso de inteligencia artificial en periodismo [Perception of university teachers, students, innovation managers, and journalists regarding the use of artificial intelligence in journalism]. Prof. Inf. 29. 10.3145/epi.2020.ene.09 (2020).
  • 31.Vicsek, L., Pinter, R. & Bauer, Z. Shifting job expectations in the era of generative AI hype – perspectives of journalists and copywriters. Int. J. Sociol. Soc. Policy. 45, 1–16. 10.1108/IJSSP-05-2024-0231 (2025). [Google Scholar]
  • 32.Doraiswamy, P. M., Blease, C. & Bodner, K. Artificial intelligence and the future of psychiatry: insights from a global physician survey. Artif. Intell. Med.102, 101753. 10.1016/j.artmed.2019.101753 (2020). [DOI] [PubMed] [Google Scholar]
  • 33.Lin, L. & Parker, K. U.S. Workers Are More Worried Than Hopeful About Future AI Use in the Workplace. Pew Research Center (2025). https://www.pewresearch.org/social-trends/2025/02/25/u-s-workers-are-more-worried-than-hopeful-about-future-ai-use-in-the-workplace/
  • 34.Centro de Investigaciones Sociológicas de España. Estudio nº 3495. Inteligencia artificial [Study no. 3495. Artificial intelligence]. (2025). https://www.cis.es/cis/export/sites/default/-Archivos/Marginales/3490_3509/3495/es3495mar.pdf
  • 35.Acemoglu, D. Technical change, inequality, and the labor market. J. Econ. Lit.40, 7–72. 10.1257/0022051026976 (2002). [Google Scholar]
  • 36.McClure, P. K. You’re fired, says the robot: the rise of automation in the workplace, technophobes, and fears of unemployment. Soc. Sci. Comput. Rev.37, 3–20. 10.1177/0894439317698637 (2017). [Google Scholar]
  • 37.Frey, C. B. & Osborne, M. A. The future of employment: how susceptible are jobs to computerisation? Technol. Forecast. Soc. Change. 114, 254–280. 10.1016/j.techfore.2016.08.019 (2017). [Google Scholar]
  • 38.FECYT. Percepción social de la ciencia y la tecnología 2020: Informe completo [Social perception of science and technology 2020: Full report]. Fundación Española para la Ciencia y la Tecnología (2020, revised 2022). https://www.fecyt.es/sites/default/files/users/user378/percepcion_social_de_la_ciencia_y_la_tecnologia_2020_informe_completo_2.pdf
  • 39.Ghimire, R., Skinner, J. & Carnathan, M. Who perceived automation as a threat to their jobs in metro atlanta: results from the 2019 metro Atlanta speaks survey. Technol. Soc.63, 101368. 10.1016/j.techsoc.2020.101368 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.R Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, 2025). https://www.R-project.org/
  • 41.Kirkpatrick, L. O., Brown, D. & Singh, L. Who shares about AI? Media exposure, psychological proximity, performance expectancy, and information sharing about artificial intelligence online. AI Soc. 10.1007/s00146-024-01997-x (2024). [Google Scholar]
  • 42.Vicsek, L. Artificial intelligence and the future of work – lessons from the sociology of expectations. Int. J. Sociol. Soc. Policy. 41, 842–861. 10.1108/IJSSP-05-2020-0174 (2021). [Google Scholar]
  • 43.Qi, W., Pan, J., Lyu, H. & Luo, J. Excitements and concerns in the post-ChatGPT era: Deciphering public perception of AI through social media analysis. Telemat. Inf. 92, 102158. 10.1016/j.tele.2024.102158 (2024). [Google Scholar]
  • 44.Schwarz, A. The mediated amplification of societal risk and risk governance of artificial intelligence: technological risk frames on YouTube and their impact before and after ChatGPT. J. Risk Res.10.1080/13669877.2024.2437629 (2024). [Google Scholar]
  • 45.Ardila, D. et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat. Med.25, 954–961. 10.1038/s41591-019-0447-x (2019). [DOI] [PubMed] [Google Scholar]
  • 46.Wulaningsih, W. et al. Deep learning models for predicting malignancy risk in CT-detected pulmonary nodules: A systematic review and meta-analysis. Lung202, 625–636. 10.1007/s00408-024-00706-1 (2024). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Branting, L. K. et al. Scalable and explainable legal prediction. Artif. Intell. Law. 29, 213–238. 10.1007/s10506-020-09273-1 (2021). [Google Scholar]
  • 48.Yalcin, G. et al. Perceptions of justice by algorithms. Artif. Intell. Law. 31, 269–292. 10.1007/s10506-022-09312-z (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Emaminejad, N. & Akhavian, R. Trustworthy AI and robotics: implications for the AEC industry. Autom. Constr.139, 104298. 10.1016/j.autcon.2022.104298 (2022). [Google Scholar]
  • 50.Kim, B. J. & Kim, M. J. How artificial intelligence-induced job insecurity shapes knowledge dynamics: the mitigating role of artificial intelligence self-efficacy. J. Innov. Knowl.9, 100590. 10.1016/j.jik.2024.100590 (2024). [Google Scholar]
  • 51.Finucane, M. L., Alhakami, A., Slovic, P. & Johnson, S. M. The affect heuristic in judgments of risks and benefits. J Behav. Decis. Mak.13, 1–17 (2000).
  • 52.Phan, T. A. AI integration for communication skills: A conceptual framework in education and business. Bus. Prof. Commun. Q. 10.1177/23294906241302000 (2024).
  • 53.Cramarenco, R. E., Burcă-Voicu, M. I. & Dabija, D. C. The impact of artificial intelligence (AI) on employees’ skills and well-being in global labor markets: A systematic review. Oeconomia Copernicana. 14, 731–767. 10.24136/oc.2023.022 (2023). [Google Scholar]
  • 54.Bevilacqua, S., Masárová, J., Perotti, F. A. & Ferraris, A. Enhancing top managers’ leadership with artificial intelligence: insights from a systematic literature review. Rev. Manag. Sci. 1–37. 10.1007/s11846-025-00836-7 (2025).
  • 55.Agnese, P., Arduino, F. R. & Di Prisco, D. The era of artificial intelligence: what implications for the board of directors? Corp. Gov. 25, 272–287. 10.1108/CG-06-2023-0259 (2025). [Google Scholar]
  • 56.Morandini, S. et al. The impact of artificial intelligence on workers’ skills: upskilling and reskilling in organisations. Informing Sci.26, 39–68. 10.28945/5078 (2023). [Google Scholar]
  • 57.Emma, L. Adoption of Artificial Intelligence in Business: Challenges and Strategic Implementation (2025).
  • 58.Steiber, A. & Alvarez, D. Culture and technology in digital transformations: how large companies could renew and change into ecosystem businesses. Eur. J. Innov. Manag. 28, 806–824. 10.1108/EJIM-04-2023-0272 (2025). [Google Scholar]
  • 59.Phan, T. A., Vu, T. H. N., Vo, N. T. N. & Le, T. H. Enhancing educational outcomes through strategic guest speaker selection: A comparative study of alumni and industry experts in university settings. Bus. Prof. Commun. Q. 10.1177/23294906241263035 (2024).
  • 60.Phan, T. A. & Ninh, T. T. D. Enhancing business communication: comparing teaching styles in supply chain education. Bus. Prof. Commun. Q. 10.1177/23294906251315836 (2025).
  • 61.Wang, K. H. & Lu, W. C. AI-induced job impact: complementary or substitution? Empirical insights and sustainable technology considerations. Sustain. Technol. Entrep. 4, 100085. 10.1016/j.stae.2024.100085 (2025). [Google Scholar]
  • 62.Eurostat. Individuals’ level of digital skills (from 2021 onwards). Eurostat. 10.2908/isoc_sk_dskl_i21 (2024). [Google Scholar]
