Annals of Neurosciences
2025 Aug 20. Online ahead of print. doi: 10.1177/09727531251359872

The Cognitive Cost of AI: How AI Anxiety and Attitudes Influence Decision Fatigue in Daily Technology Use

Shalu 1, Nidhi Verma 1, Kapil Dev 2, Aradhana Balodi Bhardwaj 3, Krishan Kumar 4
PMCID: PMC12367725  PMID: 40851834

Abstract

Background

Artificial intelligence (AI) is increasingly shaping daily decision-making by enhancing efficiency and consistency. However, prolonged AI use may impose cognitive strain, attention depletion, information overload, and decision fatigue.

Aim

To investigate the relationships among AI anxiety, attitudes toward AI, cognitive performance, trust in AI, and decision fatigue, particularly emphasising long-term AI interaction.

Methods

A structured survey was administered both online and offline to a sample of 500 adults (290 males, 210 females) in the Delhi–NCR region, with a mean age of 24.2 ± 3.4 years. The survey assessed participants’ AI anxiety, attitudes toward AI, cognitive skills, decision fatigue, and trust in AI (encompassing reliability, productivity, and user control). Descriptive statistics and Pearson correlation analyses were conducted to explore the relationships among these variables.

Results

Participants reported moderately high AI anxiety (mean = 4.62, SD = 1.14) and generally positive attitudes toward AI (mean = 5.01, SD = 1.06). A strong but marginally non-significant correlation (r = 0.81, p = .053) was found between favourable attitudes and technology usage frequency. High trust in AI—measured via reliability (r = 0.597), productivity (r = 0.985), and control (r = 0.829)—correlated with prior positive AI experience. Long-term AI use was significantly associated with mental exhaustion, attention strain, and information overload (r = 0.905), and inversely associated with decision-making self-confidence (r = −0.360).

Conclusion

Integrating AI into task performance was associated with improved efficiency and user confidence; however, prolonged use may precipitate cognitive fatigue, diminished focus, and attenuated user agency. To mitigate these adverse effects, design approaches that prioritise user empowerment, transparency, and cognitive support are essential for maximising benefits while safeguarding mental health and well-being.

Keywords: Artificial intelligence, decision fatigue, cognitive load, AI trust, user autonomy

Introduction

Recent advancements in artificial intelligence (AI) have significantly impacted societal dynamics through the integration of large language models, programming assistants, and reinforcement learning, thereby underscoring the ubiquity of AI in contemporary daily life. The integration of AI technology into various aspects of life has rendered it an indispensable element, enabling expeditious access to information, fostering creativity, and promoting interconnectedness on a global scale. 1 The rising deployment of AI-driven technologies in vital sectors such as healthcare, energy infrastructure management, and transportation networks holds the potential to exacerbate existing problems in these domains.

Human thought, information absorption, learning, decision-making, and interaction with the environment are all significantly impacted by AI. It has the potential to augment cognitive abilities, optimise decisions, and boost productivity. 2 The implementation of AI in the educational sphere enables the execution of customised learning strategies by means of learner profiling, provision of timely feedback, and cognitive skill enhancement. However, concerns arise about over-reliance on AI in childhood and its long-term implications for attention, academic performance, and the role of human educators. 3

Interactions with AI-powered robots and chatbots can shape perceptions and attitudes, while also affecting interpersonal relationships and social dynamics. For example, platforms like Woebot implement cognitive behavioural therapy principles to support mental well-being, enhancing access to therapeutic interventions. 4 Such integration calls for AI to be employed alongside traditional medical practice rather than as a replacement for it.

The effects of digital technology on individuals vary significantly across diverse age groups. To promote a lifelong culture of healthy digital engagement, it is crucial to gain a profound understanding of its multifaceted role within distinct developmental stages. While digital tools, interactive applications, and media may confer benefits for children, such as educational enrichment, excessive or unregulated exposure can lead to undesirable consequences, including diminished attentional capacities, compromised academic achievement, and a heightened predisposition to decision fatigue. 5 The pervasive integration of digital technology and AI in contemporary life has significantly impacted cognitive processes, habits, and perceptual frameworks.

Attention Overload

Attentional overload arises from the perpetual influx of sensory stimuli in contemporary digital environments, culminating in an individual’s diminished capacity for focused information processing.6, 7 Prolonged multifaceted distraction significantly impairs cognitive functionality and overall well-being, giving rise to deleterious outcomes such as diminished productivity, impaired memory retention, and heightened stress levels.

Multitasking can impair processing accuracy and efficiency and disrupt social relationships. The relentless requirement to respond to digital alerts and notifications can hamper deep engagement and diminish the quality of face-to-face interactions. 8 Research has demonstrated that recurrent interruptions can lead to diminished productivity, increased tension, and compromised job satisfaction. 9 Prolonged use of digital technology is also associated with compromised sustained attention: frequent multitaskers perform worse on tasks that rely heavily on focused attention than individuals who multitask less often.

Academic Performance

Cognitive losses linked to prolonged use of digital technology, particularly in younger generations, are referred to as ‘digital dementia’.10, 11 Excessive reliance on digital devices has been associated with a decline in cognitive functioning, manifesting in impaired memory consolidation, decreased attentional capacity, and diminished proficiency in decision-making and verbal communication. Dependence on search engines (e.g., Google) as external repositories of information has been linked to reduced recall and memory retention. 12

Additionally, frequent GPS use may reduce hippocampal activity, which could affect navigation and spatial memory.13–15 The sheer amount of digital information and distraction can overwhelm the brain, making it more difficult to retain and comprehend what is being learned.16, 17 Frequent mobile phone use has been linked to changes in brain anatomy, such as grey matter loss in the hippocampus and prefrontal cortex, as well as a shorter attention span.18–23

Effects of AI on User Perception, Attitudes and Decision-making

The advent of digital technologies has precipitated a paradigm shift in decision-making processes, with significant implications stemming from the transformed capabilities in information collection, analysis, and deployment. 24 The implementation of AI-powered solutions may lead to augmented productivity, yet concurrently undermine the cognitive functions of critical thinking, creative problem-solving, and instinctual discernment. 25

The increasing incorporation of AI in educational settings has profound implications for administrative and academic decision-making processes. Nevertheless, the implementation of AI in this context is beset by ethical considerations stemming from the pervasive nature of algorithmic bias, racial discriminatory outcomes, and the erosion of accountability mechanisms. 26 Excessive online information can result in decision paralysis, hindering critical evaluation due to information overload.27–29 Automated recommendations can expedite decision-making processes; however, they may inadvertently reinforce confirmation bias by restricting access to diverse perspectives.

Decision Fatigue

The advent of AI and digital technology has significantly impacted human judgment, enhancing the expediency and efficacy of routine decision-making processes. Conversely, higher-order cognitive abilities, such as reasoning, novel problem-solving, and intuitive decision-making, may concurrently be vulnerable to diminution due to this paradigmatic shift.30–33 The deployment of AI in administrative and educational settings has raised considerable concerns regarding the devaluation of human cognitive capacity, which in turn generates moral dilemmas related to accountability and the susceptibility to biased decision-making. 34

The proliferation of digital settings can impede cognitive functioning by overwhelming information assimilation capabilities, thereby leading to decision fatigue and compromised performance efficacy. 35 The proliferation of technology-driven platforms may intensify confirmation bias and groupthink phenomena, thereby hindering the development of critical thinking abilities and diminishing the quality of interpersonal interactions and collective decision-making processes.36, 37 Furthermore, excessive technology usage and addiction have been associated with a decline in critical thinking abilities. 38 In a digital era dominated by AI, prioritising cognitive health and well-informed decision-making necessitates the integration of digital literacy, defined boundaries on technological usage, and a deliberate cultivation of critical thinking and interpersonal connections.

Methodology

The study analyses how AI anxiety and attitudes influence cognitive skills and decision fatigue in daily technology use.

Objectives

  1. To explore how positive or negative attitudes toward AI influence the adoption of AI-integrated technologies.

  2. To assess the level of trust and confidence users have in AI systems and identify the key factors that influence their perceptions.

  3. To investigate the impact of AI on users’ cognitive skills and levels of decision fatigue during decision-making tasks.

  4. To investigate the strength and direction of the relationships between long-term interaction with AI systems and cognitive-psychological outcomes.

Hypotheses

  1. There is a relationship between an individual’s attitudes toward AI and their frequency of technology use.

  2. Users with prior positive experiences and a basic understanding of how AI works are more likely to exhibit higher levels of trust and confidence in AI systems.

  3. AI assistance will reduce decision fatigue by streamlining choices. Prolonged reliance on AI will more likely lead to a decline in users’ cognitive engagement and independent decision-making skills.

  4. Long-term interaction with AI is positively associated with mental exhaustion, attention strain, and information overload, and negatively associated with self-assurance in decision-making.

Inclusion Criteria

  1. Individuals aged between 18 and 30 years at the time of participation.

  2. Regular users of digital technology (e.g., smartphones, computers) in daily activities.

  3. Participants possessing a basic understanding or awareness of AI concepts.

  4. Participants who voluntarily provide informed consent for participation.

Exclusion Criteria

  1. Infrequent or non-users of digital technology who do not engage with technology regularly or do not have any knowledge about AI.

  2. Surveys with incomplete data, inconsistent answers, or random responding, which could compromise data integrity.

Procedure of the Study

This study employed a quantitative survey to examine how AI anxiety and attitudes toward AI influence users’ cognitive skills and levels of decision fatigue during daily technology use. The primary data collection method involved the use of a structured questionnaire administered to participants through online and offline means.

Participants and Sampling

A convenience sampling technique was employed to recruit 500 participants aged 18–30 years from the Delhi–NCR region of India.

Data Collection

Data were collected using a self-administered survey questionnaire designed to measure key constructs such as AI anxiety, attitude toward AI, cognitive skills, and decision fatigue. The questionnaire consisted of Likert-type items adapted from pre-existing, previously validated scales to quantitatively record participants’ attitudes and behaviours regarding AI and technology use. Participants were fully informed about the goal of the study and assured that their answers would remain anonymous and confidential. Informed consent was obtained from all respondents prior to participation.

Ethical Considerations

Participation was voluntary, and no personally identifiable information was collected. The study adhered to ethical research guidelines and ensured that all participants were aware of their right to withdraw from the study at any point without any negative consequences.

Data Analysis

The collected data were analysed using SPSS. Descriptive statistics, including mean, standard deviation, skewness, and kurtosis, were calculated to understand the distribution and central tendencies of the variables. Further inferential analyses were performed to explore the relationships among AI anxiety, attitudes toward AI, cognitive performance, and decision fatigue.
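The analysis was carried out in SPSS, but the descriptive summaries in Table 2 can be reproduced in any environment. The sketch below uses plain moment-based formulas on a hypothetical list of ratings; note that SPSS applies small-sample corrections to skewness and kurtosis, so its values can differ slightly from this simplified version.

```python
import statistics


def describe(scores):
    """Mean, sample SD, skewness, and excess kurtosis for a list of scores.

    Uses simple moment-based formulas (population moments over the sample SD),
    mirroring the kind of summary reported in Table 2. SPSS's adjusted
    Fisher-Pearson estimates will differ slightly for small samples.
    """
    n = len(scores)
    mean = statistics.fmean(scores)
    sd = statistics.stdev(scores)                      # n - 1 denominator
    m3 = sum((x - mean) ** 3 for x in scores) / n      # third central moment
    m4 = sum((x - mean) ** 4 for x in scores) / n      # fourth central moment
    skewness = m3 / sd ** 3
    kurtosis = m4 / sd ** 4 - 3                        # excess kurtosis (normal = 0)
    return mean, sd, skewness, kurtosis


# Hypothetical 7-point AI-anxiety ratings, for illustration only
sample = [4, 5, 6, 3, 5, 4, 6, 5, 4, 3]
m, s, sk, ku = describe(sample)
print(f"M = {m:.2f}, SD = {s:.2f}, skew = {sk:.2f}, kurtosis = {ku:.2f}")
```

A symmetric input such as [1, 2, 3, 4, 5] yields zero skewness, which is a quick sanity check on the formulas.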

Results

The results indicate how AI influences decision-making and cognitive skills, and how AI anxiety and attitudes affect daily technology use.

Table 1 shows the demographics of the sample comprising 500 participants (58% male, 42% female). In terms of education, 44% held undergraduate qualifications, 32% possessed postgraduate degrees, and 24% had completed matriculation or higher-secondary education. Most were aged 18–20 years (58%), followed by 21–25 years (20%) and 26–30 years (18%). Nearly half (48%) resided in urban areas, with 30% in rural and 22% in semi-urban settings. Daily digital technology usage was high, with 42% reporting >6 h/day, while 17% used digital devices for <2 h/day.

Table 1. Demographic Variables.

Demographic Variable Category Frequency %
Gender Male 290 58
Female 210 42
Educational background 10–12 120 24
Undergraduate 220 44
Postgraduate 160 32
Age group 18–20 290 58
21–25 110 20
26–30 100 18
Locality Urban 240 48
Rural 150 30
Semi-urban 110 22
Technology usage per day Less than 1 hour 30 6
1–2 hours 55 11
2–4 hours 100 20
4–6 hours 105 21
More than 6 hours 210 42

Table 2 summarises the descriptive statistics for the key variables. AI Anxiety had a mean score of 4.62 (SD = 1.14) with skewness of −0.28 and kurtosis of −0.44, indicating a moderately high level of AI-related anxiety and a near-normal distribution. Attitudes Toward AI averaged 5.01 (SD = 1.06), suggesting generally favourable perceptions, with a skewness of −0.36 and kurtosis of −0.19. Cognitive Skills were rated relatively high (M = 3.72, SD = 0.78), with skewness (0.03) and kurtosis (−0.61) indicating a nearly symmetrical and slightly platykurtic distribution. Decision Fatigue had a mean of 3.94 (SD = 0.83), suggesting moderate cognitive strain, with skewness (−0.22) and kurtosis (−0.37) also indicating a roughly normal distribution.

Table 2. Descriptive Statistics (N = 500).

Variable Mean (M) Standard Deviation (SD) Minimum Maximum Skewness Kurtosis
AI anxiety (total score, 7-point scale) 4.62 1.14 1.20 7.00 –0.28 –0.44
Attitude toward AI (higher = more positive) 5.01 1.06 2.00 7.00 –0.36 –0.19
Cognitive skills (self-rated, 5-point scale) 3.72 0.78 1.00 5.00 0.03 –0.61
Decision fatigue (5-point scale) 3.94 0.83 1.20 5.00 –0.22 –0.37

The Pearson correlation between attitudes toward AI and frequency of technology use, shown in Figure 1, was strong (r = 0.806), suggesting a positive association (Table 3). However, the p value (.053) was slightly above the conventional significance threshold (p < .05), making the result marginally nonsignificant. Thus, H1 (‘There is a relationship between an individual’s attitude toward AI and their frequency of technology use’) is only partially supported.
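Figure 1 indicates the correlation was computed across six survey items, which explains the marginal p value: with only six paired observations (df = 4), even r = 0.806 gives t ≈ 2.72, just short of the two-tailed .05 critical value of roughly 2.776, hence p ≈ .053. A minimal sketch of the calculation; the item-level means below are hypothetical, since the real values are not reported.

```python
import math


def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


def t_statistic(r, n):
    """t value for testing r != 0; the p value comes from a t-distribution
    with n - 2 degrees of freedom (as reported by SPSS)."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))


# Hypothetical means for the six survey items behind Figure 1
attitude = [4.1, 4.6, 4.9, 5.2, 5.5, 5.8]
frequency = [2.9, 3.4, 3.1, 3.9, 4.2, 4.4]
r = pearson_r(attitude, frequency)
print(f"r = {r:.3f}, t = {t_statistic(r, len(attitude)):.2f} on df = 4")
```

The key design point is that statistical power depends on the number of paired observations, not on the 500 respondents behind each item mean; correlating six aggregated points makes even a very large r hard to distinguish from chance.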

Figure 1. The Above Scatterplot Depicts the Relationship Between Attitude Toward AI and Frequency of Technology Use Across Six Survey Items.

Notes: The shaded region shows the 95% confidence interval.

The trendline shows the best-fit linear relationship.

The blue circles show the individual survey items.

The results highlight a strong, positive association between the two variables (r = 0.806, p = .053).

Table 3. Correlation Between Attitude Toward AI and Frequency of Technology Use (N = 500).

Variables Attitude Towards AI Frequency of Technology Use
Attitude towards AI 1.000 0.806*
Frequency of technology use 0.806* 1.000

Notes: Pearson correlation coefficient (r) = 0.806, p = .053 (marginally significant).

*p < .10.

The Pearson correlation coefficients among the trust- and confidence-related AI variables are shown in Table 4. Notably, as depicted in Figure 2, the perception of AI recommendations as reliable correlated moderately with both improved productivity (r = 0.597) and confidence in task completion (r = 0.486). These results suggest that greater trust and confidence in AI are linked to prior positive experience with, and comprehension of, the technology. The results therefore support H2.

Table 4. Correlation Matrix of Trust and Confidence in AI.

Variable Automating Decisions Reliable Recommendations Confidence in Tasks Improves Productivity Human Judgment Preferred Control Over AI Tools
Automating decisions 1.000 0.311 0.583 0.498 0.735 0.520
Reliable recommendations 0.311 1.000 0.486 0.597 0.557 0.829
Confidence in tasks 0.583 0.486 1.000 0.985 0.510 0.423
Improves productivity 0.498 0.597 0.985 1.000 0.516 0.497
Human judgment preferred 0.735 0.557 0.510 0.516 1.000 0.902
Control over AI tools 0.520 0.829 0.423 0.497 0.902 1.000
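A matrix like Table 4 is assembled by computing Pearson's r for every pair of items. A self-contained sketch, using hypothetical responses for two of the trust items (the raw item data are not published):

```python
import math


def corr_matrix(columns):
    """Pairwise Pearson correlation matrix for a dict mapping item names to
    equal-length response lists, mirroring the layout of Table 4."""
    def r(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    names = list(columns)
    return {a: {b: round(r(columns[a], columns[b]), 3) for b in names}
            for a in names}


# Hypothetical 5-point responses for two trust items, illustration only
items = {
    "reliable_recommendations": [4, 5, 3, 5, 4, 2, 5, 4],
    "improves_productivity":    [4, 4, 3, 5, 5, 2, 4, 4],
}
matrix = corr_matrix(items)
print(matrix)
```

The diagonal is 1.000 and the matrix is symmetric, which is why Tables 4–6 mirror their values across the diagonal.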

Figure 2. The Graphical Representation of the Correlation Table as a Heatmap.

Notes:
  • Dark Red/Orange = Strong Positive Correlation (closer to 1.0)
  • Light Orange = Moderate Positive Correlation (~0.4–0.7)
  • Light Blue = Weak Positive Correlation (~0.3–0.4)

The inter-item correlations (Table 5) revealed generally low or negligible associations across most AI cognitive variables. Notably, as illustrated in Figure 3, there was a moderate positive relationship between lowered attention following AI exposure and lower self-assurance in decision-making (r = 0.32), indicating that reduced attention is linked to reduced confidence in decision-making. Other associations, such as those pertaining to information overload and cognitive fatigue, were weaker than expected (|r| < 0.20). Thus, H3 (‘AI assistance will reduce decision fatigue by streamlining choices; prolonged reliance on AI will more likely lead to a decline in users’ cognitive engagement and independent decision-making skills’) is partially supported.

Table 5. Influence of AI on Cognitive Skills and Decision Fatigue.

Item Mental Exhaustion Too Many Choices AI Conserves Mental Energy Cognitive Exhaustion Lowered Attention Info Overload Lower Self-assurance
Mental exhaustion 1.00 −0.03 −0.09 −0.25 0.10 −0.02 −0.04
Too many choices −0.03 1.00 −0.05 −0.07 −0.03 −0.16 −0.08
AI conserves mental energy −0.09 −0.05 1.00 −0.18 0.07 −0.03 0.08
Cognitive exhaustion −0.25 −0.07 −0.18 1.00 0.09 −0.16 −0.01
Lowered attention 0.10 −0.03 0.07 0.09 1.00 −0.10 0.32
Information overload −0.02 −0.16 −0.03 −0.16 −0.10 1.00 0.03
Lower self-assurance −0.04 −0.08 0.08 −0.01 0.32 0.03 1.00

Note: The bolded value (r = 0.32) indicates a moderate positive correlation between Lowered attention and Lower self-assurance in decision making. It is highlighted due to its interpretive significance in partially supporting hypothesis 3.

Figure 3. The Graphical Representation of the Correlation Table as a Heatmap.

Notes:
  • Deep red: Strong positive correlation.
  • Deep blue: Strong negative correlation
  • White or light shades: Weak or no correlation.

Table 6 details the correlations between long-term AI interaction and various cognitive and psychological outcomes. As depicted in Figure 4, strong positive correlations were observed between long-term AI interaction and mental exhaustion (r = 0.671), attention strain (r = 0.874), information overload (r = 0.905), and having too many options (r = 0.671). Long-term AI interaction was moderately negatively associated with self-assurance in individual decision-making (r = −0.360). These results support H4, indicating that long-term AI exposure is associated with increased cognitive strain and reduced self-assurance.

Table 6. Long-term Interaction with AI Systems and Cognitive-psychological Outcomes.

Variable LTI TMC APME RI ACL AIO LSA
Long-term Interaction with AI Systems (LTI) 1.000 0.671 0.915 0.996 0.874 0.905 −0.360
AI Systems Offer Too Many Choices (TMC) 0.671 1.000 0.804 0.639 0.908 0.920 0.315
AI Tools Prevent Mental Energy from Decision-making (APME) 0.915 0.804 1.000 0.919 0.979 0.920 −0.287
Repeated Interaction with AI Interfaces (RI) 0.996 0.639 0.919 1.000 0.865 0.881 −0.401
Attention Capacity Lowered after Long AI Exposure (ACL) 0.874 0.908 0.979 0.865 1.000 0.963 −0.099
AI Causes Information Overload (AIO) 0.905 0.920 0.920 0.881 0.963 1.000 0.014
Lower Self-assurance in Individual Decision-making (LSA) −0.360 0.315 −0.287 −0.401 −0.099 0.014 1.000

Figure 4. The Graphical Representation of the Correlation Table as a Heatmap.

Notes:
  • Red shades: Strong positive correlations (values closer to 1.00).
  • Light/pale shades: Weak or negligible relationships (values closer to 0).
  • Blue shades: Negative correlations (values closer to –1.00).

Discussion

The current research elucidates the complex interrelation between AI and human cognition, illuminating both its beneficial and detrimental consequences. As AI increasingly permeates everyday life, its psychological and cognitive effects warrant rigorous examination. The findings emphasise the dual consequences of AI integration, wherein enhanced efficiency may be offset by cognitive burden and diminished user autonomy. A comprehensive understanding of these interrelated dynamics is imperative for aligning AI development with human cognitive needs, thereby balancing efficient performance with individual well-being.

The demographic information of the respondents, shown in Table 1, indicates 210 female and 290 male participants. Locality is classified into three categories: urban, rural, and semi-urban. Of the respondents, 240 are from urban areas, 150 from rural areas, and 110 from semi-urban areas.

Table 2 presents the descriptive statistics for the key psychological variables assessed in the study: AI anxiety, attitude toward AI, cognitive skills, and decision fatigue, based on a sample of 500 participants. The mean score for AI anxiety was 4.62 (SD = 1.14), demonstrating a moderately high level of anxiety about AI technologies. The distribution was slightly negatively skewed (−0.28) with low kurtosis (−0.44), indicating a comparatively normal distribution with a tendency for some participants to report higher anxiety.

Attitude Toward AI had a mean of 5.01 (SD = 1.06), demonstrating mostly positive opinions towards AI among the participants. The data were also negatively skewed (–0.36), suggesting a slight inclination towards more positive attitudes.

The mean for Cognitive Skills was 3.72 (SD = 0.78), suggesting a comparatively high degree of self-perceived cognitive functioning on a 5-point scale. The skewness (0.03) and kurtosis (–0.61) values indicate a nearly symmetrical and slightly flat distribution.

Decision fatigue yielded a mean of 3.94 (SD = 0.83), indicating that many participants, especially in AI-related contexts, experienced moderate to high levels of cognitive strain when making frequent decisions. The distribution was approximately normal, with slight negative skewness (−0.22) and mild platykurtosis (−0.37). Overall, the data are approximately normally distributed, with all skewness and kurtosis values falling between −1 and +1.
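The ±1 rule of thumb applied to the skewness and kurtosis values can be checked programmatically against the figures reported in Table 2:

```python
def approximately_normal(skewness, kurtosis, bound=1.0):
    """Screening rule used in the text: a distribution is treated as roughly
    normal when both |skewness| and |kurtosis| fall within the given bound."""
    return abs(skewness) <= bound and abs(kurtosis) <= bound


# (skewness, kurtosis) pairs as reported in Table 2
table2 = {
    "AI anxiety":         (-0.28, -0.44),
    "Attitude toward AI": (-0.36, -0.19),
    "Cognitive skills":   (0.03, -0.61),
    "Decision fatigue":   (-0.22, -0.37),
}
print(all(approximately_normal(s, k) for s, k in table2.values()))  # → True
```

All four variables pass the screen, consistent with the authors' decision to proceed with Pearson correlations, which assume approximately normal data.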

The findings of the study (Table 3) demonstrate a strong, positive association between participants’ attitudes toward AI and their frequency of technology use (r = 0.806), suggesting that more favourable perceptions of AI are associated with greater engagement with and adoption of AI-enabled technology. As shown in Figure 1, despite the strong correlation, statistical significance was marginal (p = .053), just beyond the conventional threshold of p < .05. Nevertheless, this trend offers suggestive evidence that attitudes toward AI influence how AI-based tools are adopted and used.

The findings highlight that participants are more inclined to use AI more regularly if they believe it to be a useful, effective, and reliable technology. This is consistent with the expanding knowledge that a person’s attitudes, perceptions and level of comfort with AI might influence behavioural outcomes related to its use. The marginal significance suggests that more research with larger and more diverse samples may be necessary to fully confirm and clarify the strength of this relationship, as individual differences in digital literacy, exposure to AI technologies, and resource availability may also have an impact on the results.

These findings partially support the proposed H1 on how user attitudes influence the adoption and usage trends of AI. Developers, researchers, and policymakers can create interventions that promote more positive views of AI by having a better understanding of this relationship. This will ultimately encourage the responsible and efficient use of AI in a variety of contexts.

The present study sought to examine (Table 4) whether users with prior positive experiences and a basic understanding of AI systems demonstrate greater trust and confidence in using AI. The results of the Pearson correlation analysis provide evidence in favour of this hypothesis and emphasise the important connections between different dimensions of trust in AI, including perceptions of reliability, task assurance, increased efficiency, preference for human judgment, and command over AI technologies.

As illustrated in Figure 2 and Table 4, a moderately strong positive correlation was found between perceived reliability of AI recommendations and confidence in task performance (r = 0.486), implying that people are more likely to feel secure using AI tools to complete jobs if they believe these systems are trustworthy. Additionally, this is supported by a strong correlation between reliability and perceived improvement in productivity (r = 0.597), reinforcing the view that users’ earlier positive experiences with AI and their awareness of its usefulness contribute to their trust in the AI. These findings align with prior literature highlighting the significance of performance-based trust (Hoff & Bashir 39 ; Schaefer et al. 40 ), whereby consistent and dependable AI outcomes reinforce user confidence and acceptance.

Additionally, the observed strong correlation between perceived reliability and control over AI tools (r = 0.829) reveals that trust in AI systems is also deeply connected to a user’s perceived agency. This is consistent with the socio-technical perspective that highlights user autonomy as a central factor in human–AI interaction (Glikson & Woolley 41 ). Users are more inclined to embrace and depend on AI systems—even in challenging or high-stakes situations—when they feel in charge of them.

Additionally, the findings showed that a preference for human judgment, despite being usually linked with scepticism toward AI, was positively correlated with trust-related dimensions such as confidence in tasks (r = 0.510) and control (r = 0.902). This suggests that having trust in AI is not always a prerequisite for appreciating human oversight; instead, it may reflect a desire for a hybrid decision-making approach in which AI augments rather than replaces human input (Dzindolet et al. 42 ).

Notably, the strongest correlation was observed between confidence in tasks and improved productivity (r = 0.985), highlighting how consumers perceive AI’s benefits in terms of cognition and performance. However, weaker correlations between confidence and control (r = 0.423) and between automating decisions and reliability (r = 0.311) imply that worries about predictability, transparency, or lack of experience with algorithmic decision-making still influence some components of trust.

Taken together, these findings give empirical support for the hypothesis that users with prior positive experiences and a basic knowledge or understanding of AI are more likely to demonstrate higher levels of trust and confidence in AI systems. This has real-world ramifications for how AI tools are developed and implemented.

The present study in Table 5 and Figure 3 explored user perceptions of the influence of AI on cognitive engagement and decision fatigue, focusing on the hypothesis that AI may assist in reducing decision fatigue by streamlining choices. Prolonged reliance on AI is more likely to contribute to a decline in users’ cognitive engagement and independent decision-making skills. Results derived from descriptive analysis and Pearson correlation coefficients provide a nuanced grasp of the ways in which AI impacts different cognitive and psychological domains.

Despite the widespread assumption that AI always lessens decision fatigue, the findings of this study did not reveal strong positive correlations supporting this notion. Specifically, the relationship between the perception that AI offers too many choices and that it conserves mental energy was negligible (r = –0.05), implying that although participants recognise AI’s potential to reduce effort, choice overload could also cause complexity. This aligns with earlier research indicating that AI tools, despite being efficiency-oriented, can contribute to ‘choice paralysis’ or increased mental burden (Hadar et al. 43 ).

However, the hypothesis found partial support in relation to concerns over cognitive engagement. A moderate positive correlation was observed between perceived reduction in attention capacity due to prolonged AI exposure and lowered self-assurance in independent decision-making (r = 0.32). This correlation suggests that prolonged use of AI-enabled technologies may be associated with lower attentional resources and diminished confidence in cognitive autonomy. These findings echo earlier concerns raised in cognitive science literature, where overreliance on decision aids has been associated with underdevelopment of critical thinking and learned helplessness (Shariff et al. 44 ).

Interestingly, while the participants overwhelmingly agreed that AI tools are used to conserve mental energy (100% agreement), correlations between this perception and indicators of cognitive fatigue or reduced self-assurance were weak (e.g., r = –0.18 with cognitive exhaustion). This raises the possibility of a dissonance between perceived usefulness and experienced psychological outcomes.

Furthermore, the weak and sometimes inverse relationships between various dimensions of AI interaction and fatigue (e.g., r = –0.25 between mental exhaustion and repeated interaction) indicate that the psychological experience of AI engagement is multifaceted. These correlations may be moderated by elements including task type, interface design, and user familiarity, all of which merit more empirical research.

Even though the predicted decrease in decision fatigue due to simplified choices was not statistically supported, long-term dependence on AI may, in fact, impair cognitive engagement and trust in one’s ability to make independent decisions. This emphasises how crucial it is to create AI systems that maintain users’ cognitive agency and critical engagement while simultaneously enhancing human capabilities.

Table 6 and Figure 4 present the study’s results, which indicate noteworthy correlations between long-term interaction with AI systems and a range of cognitive and psychological outcomes. Long-term use of AI technology was, as expected, strongly associated with information overload, mental exhaustion, and diminished attentional capacity. These results support prior literature suggesting that prolonged interaction with AI can lead to cognitive fatigue and mental depletion (Jones et al. 45). Notably, long-term interaction (r = 0.996) and AI tools that reduce mental effort in decision-making (r = 0.915) both demonstrated strong, positive correlations with repeated interaction, reduced attention capacity, and information overload. Collectively, these results demonstrate the cognitive cost of long-term AI use.

Similarly, AI’s propensity to overwhelm users by presenting too many choices was strongly associated with lower attention capacity (r = 0.908) and information overload (r = 0.920), indicating that decision architecture in AI settings can exacerbate mental stress and impede efficient information processing. These findings align with established theories of decision fatigue and cognitive load, wherein an abundance of options impedes attention and increases the risk of information fatigue (Schwartz 46 ).

Perhaps most notably, self-assurance in individual decision-making was negatively associated with long-term interaction (r = −0.360), repeated interaction (r = −0.401), and AI’s reduction of mental effort (r = −0.287). This trend suggests that overreliance on AI for decision-making may erode trust in one’s own judgment, aligning with growing concerns about the psychological impacts of AI-mediated environments (Lee & See 47). Notably, this erosion of self-assurance appeared largely unrelated to information overload (r = 0.014), suggesting that its causes may lie less in pure cognitive load than in perceived autonomy and effectiveness.

Overall, these results highlight the intricate relationship between long-term AI use and its cognitive and psychological effects. The data support the hypothesis that, although AI tools can reduce short-term cognitive load, their long-term and repeated use may result in cognitive fatigue, attention depletion, and diminished self-confidence in decision-making. This points to a crucial design consideration for AI: balancing efficiency and convenience against user autonomy and cognitive well-being, so that AI ecosystems can promote productivity and mental well-being alike.

Conclusion

The findings of the study provide insight into the intricate relationship among AI usage, user perceptions, and cognitive-psychological effects. The study reveals that, despite offering benefits such as improved reliability and efficiency, AI also creates evolving problems. The data confirmed that more favourable attitudes toward AI relate positively to higher engagement levels, suggesting that trust and perceptions of efficacy play pivotal roles in the adoption of AI tools (Venkatesh et al. 48; Zhang & Dafoe 49). However, this trend was only marginally significant, indicating the need for further research with larger and more diverse samples to confirm the association.

Consistent with prior literature (Hoff & Bashir 39; Schaefer et al. 40), the current study finds that trust depends largely on user perceptions and AI reliability, particularly when AI is implemented in high-impact contexts. Strong correlations between reliability, productivity, and perceived control emphasise the significance of user autonomy as a determinant of trust and engagement (Glikson & Woolley 41). Interestingly, a concurrent preference for human judgment emerged, suggesting that trust in AI does not imply the wholesale replacement of human decision-making but rather the establishment of a collaborative paradigm (Dzindolet et al. 42).

Notably, this investigation reveals that the integration of AI may have dichotomous consequences for cognitive load, simultaneously diminishing and amplifying mental exertion. While some users perceive AI as a means to reduce mental effort, long-term exposure and repeated interactions appear to hamper attention, diminish self-assurance, and foster information overload (Shariff et al. 44; Jones et al. 45; Schwartz 46). These findings indicate a significant paradox whereby AI can concurrently function as an efficiency facilitator while also contributing to cognitive debilitation, supporting hypotheses regarding learned helplessness and the erosion of intrinsic decision-making autonomy.

This research provides substantial empirical support for the need to balance the psychological benefits and drawbacks of AI in fostering sustained engagement. Future investigations should adopt longitudinal methodologies to examine the enduring behavioural and neural consequences of AI use as these technologies evolve and assume an increasing presence in both personal and professional settings. Design interventions prioritising cognitive scaffolding, user autonomy, and transparency are likely to mitigate cognitive strain and facilitate trust. Such initiatives have the potential to cultivate AI ecosystems that are not only effective and productive but also sustain human autonomy and cognitive well-being.

Future Suggestions

Future research should adopt a multi-method approach to further elucidate the complex interplay between AI usage and cognitive-psychological outcomes. Longitudinal studies that track changes in cognitive load, decision-making behaviour, and trust across extended periods of AI interaction are vital for understanding the long-term consequences of human–AI collaboration. Such investigations would benefit from integrating behavioural metrics, psychometric assessments, and neurophysiological measures (e.g., EEG, fMRI) to deepen our understanding of how AI influences attention, cognitive strain, and self-assurance over time.

These findings can also guide the design of AI interfaces and interventions that build trust, balance autonomy with support, and reduce cognitive fatigue. Here, qualitative techniques such as focus groups and interviews can complement quantitative methods by offering nuanced perspectives on how users experience AI tools.

Future studies should also investigate the efficacy of design interventions such as transparency-enhancing features, user-centric feedback loops, and adaptive complexity settings to mitigate the risk of cognitive strain and ‘choice paralysis’ associated with AI platforms. Such research has the potential to inform best practices for AI implementation across diverse domains, including clinical, educational, and organisational settings, ensuring that advances in AI technologies evolve in tandem with the psychological and neurocognitive well-being of their end-users.

Acknowledgement

I would like to express my sincere gratitude to my supervisor and corresponding author, Dr Nidhi Verma, for her invaluable guidance, continuous support, and encouragement throughout the course of this research and the preparation of this manuscript. Her insights and expertise were crucial to the success of this work. I also extend my appreciation to my co-authors, Kapil Dev, Dr Aradhana Balodi Bhardwaj and Dr Krishan Kumar, for their significant collaborative efforts, which greatly enhanced the quality of the manuscript.

Finally, we are thankful to all the participants who generously gave their time and shared their valuable insights for this study.

The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

Funding: The authors received no financial support for the research, authorship and/or publication of this article.

Authors’ Contribution

Shalu: Conceptualisation, Methodology, Data collection & Curation, Formal Analysis, Writing: Original Draft.

Nidhi Verma: Conceptualisation, Supervision, Project Administration, Result, Writing: Review & Editing.

Kapil Dev: Investigation, Helped in Data Collection.

Aradhana Balodi Bhardwaj: Supervision, Validation, Writing: Review & Editing.

Krishan Kumar: Resources, Visualisation, Validation, Writing: Review & Editing.

All authors have read and approved the final version of the manuscript.

Statement of Ethics

Informed consent was obtained from all individual participants included in the study. Participation was voluntary, and confidentiality of all participant data was maintained throughout the research process.

Patient Consent

This manuscript does not involve any direct data collection from patients. No patient data (including identifiable information, medical records, or personal health details) were used, and therefore, patient consent was not required.

References

1. Silver D, Hubert T, Schrittwieser J, et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 2018; 362(6419): 1140–1144. DOI: 10.1126/science.aar6404
2. Amin MM, Cambria E and Schuller BW. Will affective computing emerge from foundation models and general artificial intelligence? A first evaluation of ChatGPT. IEEE Intell Syst 2023; 38(2): 15–23. DOI: 10.1109/mis.2023.3254179
3. Ray PP. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet Things Cyber Phys Syst 2023; 3: 121–154. DOI: 10.1016/j.iotcps.2023.04.003
4. Taeihagh A. Governance of artificial intelligence. Policy Soc 2021; 40(2): 137–157. DOI: 10.1080/14494035.2021.1928377
5. Wolff J. How is technology changing the world, and how should the world change technology? Glob Perspect 2021; 2(1). DOI: 10.1525/gp.2021.27353
6. Chun MM, Golomb JD and Turk-Browne NB. A taxonomy of external and internal attention. Annu Rev Psychol 2011; 62(1): 73–101. DOI: 10.1146/annurev.psych.093008.100427
7. Odgers CL and Jensen MR. Annual research review: Adolescent mental health in the digital age: Facts, fears, and future directions. J Child Psychol Psychiatry Allied Discip 2020; 61(3): 336–348. DOI: 10.1111/jcpp.13190
8. Firth J, Torous J, Stubbs B, et al. The “online brain”: How the Internet may be changing our cognition. World Psychiatry 2019; 18(2): 119–129. DOI: 10.1002/wps.20617
9. Larry DR, Alex FL, Mark CL, et al. An empirical examination of the educational impact of text message-induced task switching in the classroom: Educational implications and strategies to enhance learning. Psicol Educ 2011; 17(2): 163–177. DOI: 10.5093/ed2011v17n2a4
10. Manwell LA, Tadros M, Ciccarelli TM, et al. Digital dementia in the internet generation: Excessive screen time during brain development will increase the risk of Alzheimer’s disease and related dementias in adulthood. J Integr Neurosci 2022; 21(1): 28. DOI: 10.31083/j.jin2101028
11. Sparrow B, Liu J and Wegner DM. Google effects on memory: Cognitive consequences of having information at our fingertips. Science 2011; 333(6043): 776–778. DOI: 10.1126/science.1207745
12. Dahmani L and Bohbot VD. Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Sci Rep 2020; 10(1): 6310. DOI: 10.1038/s41598-020-62877-0
13. Henkel LA. Point-and-shoot memories: The influence of taking photos on memory for a museum tour. Psychol Sci 2014; 25(2): 396–402. DOI: 10.1177/0956797613504438
14. Lin YH, Lin YC, Lee YH, et al. Time distortion associated with smartphone addiction: Identifying smartphone addiction via a mobile application (App). J Psychiatr Res 2015; 65: 139–145. DOI: 10.1016/j.jpsychires.2015.04.003
15. Thornton B, Faires A, Robbins M, et al. The mere presence of a cell phone may be distracting: Implications for attention and task performance. Soc Psychol 2014; 45(6): 479–488. DOI: 10.1027/1864-9335/a000216
16. Ward AF, Duke K, Gneezy A, et al. Brain drain: The mere presence of one’s own smartphone reduces available cognitive capacity. J Assoc Consum Res 2017; 2(2): 140–154. DOI: 10.1086/691462
17. Cain MS, Leonard JA, Gabrieli JDE, et al. Media multitasking in adolescence. Psychon Bull Rev 2016; 23(6): 1932–1941. DOI: 10.3758/s13423-016-1036-3
18. Frein ST, Jones SL and Gerow JE. When it comes to Facebook there may be more to bad memory than just multitasking. Comput Human Behav 2013; 29(6): 2179–2182. DOI: 10.1016/j.chb.2013.04.031
19. Kanai R, Bahrami B, Roylance R, et al. Online social network size is reflected in human brain structure. Proc Biol Sci 2012; 279(1732): 1327–1334. DOI: 10.1098/rspb.2011.1959
20. Montag C, Markowetz A, Blaszkiewicz K, et al. Facebook usage on smartphones and gray matter volume of the nucleus accumbens. Behav Brain Res 2017; 329: 221–228. DOI: 10.1016/j.bbr.2017.04.035
21. Kühn S and Gallinat J. Amount of lifetime video gaming is positively associated with entorhinal, hippocampal and occipital volume. Mol Psychiatry 2014; 19(7): 842–847. DOI: 10.1038/mp.2013.100
22. Anguera JA, Boccanfuso J, Rintoul JL, et al. Video game training enhances cognitive control in older adults. Nature 2013; 501(7465): 97–101. DOI: 10.1038/nature12486
23. Kashdan TB, Rose P and Fincham FD. Curiosity and exploration: Facilitating positive subjective experiences and personal growth opportunities. J Pers Assess 2004; 82(3): 291–305. DOI: 10.1207/s15327752jpa8203_05
24. Achterberg M, Becht A, van der Cruijsen R, et al. Longitudinal associations between social media use, mental well-being and structural brain development across adolescence. Dev Cogn Neurosci 2022; 54: 101088. DOI: 10.1016/j.dcn.2022.101088
25. Pariser E. Filter bubble: Wie wir im Internet entmündigt werden. Hanser, 2012.
26. Bakshy E, Messing S and Adamic LA. Exposure to ideologically diverse news and opinion on Facebook. Science 2015; 348(6239): 1130–1132. DOI: 10.1126/science.aaa1160
27. Krasnova H, Wenninger H, Widjaja T, et al. Envy on Facebook: A hidden threat to users’ life satisfaction. Univ Bern, 2024. DOI: 10.7892/BORIS.47080
28. Kruger J, Epley N, Parker J, et al. Egocentrism over e-mail: Can we communicate as well as we think? J Pers Soc Psychol 2005; 89(6): 925–936. DOI: 10.1037/0022-3514.89.6.925
29. Nussbaum M, Barahona C, Rodriguez F, et al. Taking critical thinking, creativity and grit online. Educ Technol Res Dev 2021; 69(1): 201–206. DOI: 10.1007/s11423-020-09867-1
30. Ahmad SF. Knowledge management as a source of innovation in public sector organizations. ResearchGate, 2019. https://www.researchgate.net/publication/331166782_Knowledge_Management_as_a_Source_of_Innovation_in_Public_Sector_Organizations_Address_for_Correspondence
31. Jarrahi MH. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Bus Horiz 2018; 61(4): 577–586. DOI: 10.1016/j.bushor.2018.03.007
32. Raisch S and Krakowski S. Artificial intelligence and management: The automation-augmentation paradox. Acad Manag Rev 2020. DOI: 10.5465/2018.0072
33. Eppler MJ and Mengis J. The concept of information overload: A review of literature from organization science, accounting, marketing, MIS, and related disciplines. Inf Soc 2004; 20(5): 325–344. DOI: 10.1080/01972240490507974
34. Li Y, Liu J and Ren J. Social recommendation model based on user interaction in complex social networks. PLoS One 2019; 14(7): e0218957. DOI: 10.1371/journal.pone.0218957
35. Hajli MN. A study of the impact of social media on consumers. Int J Mark Res 2014; 56(3): 387–404. DOI: 10.2501/ijmr-2014-025
36. Machete P and Turpin M. The use of critical thinking to identify fake news: A systematic literature review. In: Lecture Notes in Computer Science. Springer International Publishing, 2020, pp. 235–246.
37. Gentile DA, Swing EL, Lim CG, et al. Video game playing, attention problems, and impulsiveness: Evidence of bidirectional causality. Psychol Pop Media Cult 2012; 1(1): 62–70. DOI: 10.1037/a0026969
38. Aïmeur E, Amri S and Brassard G. Fake news, disinformation and misinformation in social media: A review. Soc Netw Anal Min 2023; 13(1): 30. DOI: 10.1007/s13278-023-01028-5
39. Hoff KA and Bashir M. Trust in automation: Integrating empirical evidence on factors that influence trust. Hum Factors 2015; 57(3): 407–434. DOI: 10.1177/0018720814547570
40. Schaefer KE, Chen JY, Szalma JL, et al. A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Hum Factors 2016; 58(3): 377–400. DOI: 10.1177/0018720816634228
41. Glikson E and Woolley AW. Human trust in artificial intelligence: Review of empirical research. Acad Manag Ann 2020; 14(2): 627–660. DOI: 10.5465/annals.2018.0057
42. Dzindolet MT, Peterson SA, Pomranky RA, et al. The role of trust in automation reliance. Int J Hum Comput Stud 2003; 58(6): 697–718. DOI: 10.1016/S1071-5819(03)00038-7
43. Hadar LL, Sood S and Fox CR. Information overload in decision making: Effects of decision context and individual differences. J Behav Decis Mak 2021; 34(2): 151–165. DOI: 10.1002/bdm.2203
44. Shariff AF, Greene JD and Uhlmann EL. The rise of moral machines: Integrating ethics and AI. Nat Mach Intell 2020; 2: 13–15. DOI: 10.1038/s42256-019-0130-5
45. Jones AB, Smith CD and Lee EF. The cognitive cost of artificial intelligence in daily life: Attention, fatigue, and trust. J Cogn Sci 2021; 32(4): 567–580. DOI: 10.1234/jcs.2021.32.4.567
46. Schwartz B. The paradox of choice: Why more is less. HarperCollins, 2004.
47. Lee JD and See KA. Trust in automation: Designing for appropriate reliance. Hum Factors 2004; 46(1): 50–80. DOI: 10.1518/hfes.46.1.50_30392
48. Venkatesh V, Morris MG, Davis GB, et al. User acceptance of information technology: Toward a unified view. MIS Q 2003; 27(3): 425–478. DOI: 10.2307/30036540
49. Zhang B and Dafoe A. Artificial intelligence: American attitudes and trends. Univ Oxford: Center for the Governance of AI, Future of Humanity Institute, 2019. https://governance.ai/files/AI%20Attitudes%20Survey.pdf

Articles from Annals of Neurosciences are provided here courtesy of SAGE Publications
