
Artificial General Intelligence and Its Threat to Public Health

Richard C. Armitage

ABSTRACT

Background

Artificial intelligence (AI) is increasingly applied across healthcare and public health, with evidence of benefits including enhanced diagnostics, predictive modelling, operational efficiency, medical education, and disease surveillance. However, potential harms – such as algorithmic bias, unsafe recommendations, misinformation, privacy risks, and sycophantic reinforcement – pose challenges to safe implementation. Far less attention has been directed to the public health threats posed by artificial general intelligence (AGI), a hypothetical form of AI with human-level or greater cognitive capacities.

Objective

This article explores the benefits and harms of current AI systems, introduces AGI and its distinguishing features, and examines the threats AGI could pose to public health and humanity's survival.

Discussion

Unlike ‘narrow’ AI, AGI could autonomously learn, generalise across domains, and self-improve, potentially achieving superintelligence with unpredictable behaviours. AGI threatens public health through two broad categories: (1) misuse, where adversaries deploy AGI for cyberattacks, disinformation campaigns, or to develop chemical, biological, radiological, and nuclear (CBRN) weapons; and (2) misalignment, where poorly aligned AGI pursues goals in harmful ways, leading to loss of human control, erosion of autonomy, and potentially existential risk. The population-level consequences include widespread unemployment, reduced trust in health systems, catastrophic biological threats, and risks to human survival.

Conclusion

Healthcare and public health professionals have a critical role in framing AGI risks as health threats, building coalitions akin to historic movements against nuclear war, and collaborating with AI researchers, ethicists, and policymakers. Leveraging their expertise, trust, and global networks, these professionals can help ensure that AI development prioritises human wellbeing and safeguards humanity's future.

1. Introduction

The ability and potential of artificial intelligence (AI) to radically improve healthcare and public health have grown substantially in recent years. While substantial attention has been paid to its promises, much less focus has been directed to the harms of AI to individual and population health. More concerningly, very little has been written in the medical literature about the threats of artificial general intelligence (AGI), towards which humanity is currently racing, to public health and humanity's survival. This article will survey the benefits and harms of AI to health, introduce the concept of AGI, and outline the threats of AGI to population health and humanity's existence. It will explicitly frame these risks as threats to human health that are of direct relevance to the work of healthcare and public health professionals. Finally, it will outline how these actors could collaborate to mitigate these threats.

2. Benefits of AI to Healthcare and Public Health

The ability of AI to bring about improved patient and population health outcomes has been demonstrated in at least six ways: first, several large language models (LLMs; a form of generative AI) have outperformed practising clinicians in medical examinations across various specialities in multiple countries and languages [1]; second, AI systems have surpassed the imaging diagnostic abilities of clinicians in various domains, including the radiological detection of breast cancer [2] and clinically significant prostate cancer [3], the dermoscopic diagnosis of melanoma [4], and the identification of diabetic retinopathy in eye screening [5]; third, AI is increasingly deployed in predictive risk modelling, such as surgical [6] and cardiovascular [7] risk, and to identify personalised precision therapies, such as the use of genomics to predict the effects of cancer treatment [8, 9]; fourth, AI systems can improve healthcare operational efficiency by parsing unstructured clinical notes [10], providing antimicrobial prescribing advice [11], performing clinical triage [12], and writing discharge summaries [13], clinic letters [14], and simplified radiology reports [15]; fifth, the utility of AI to strengthen and up-skill healthcare workforces has been recognised in supporting medical education [16], generating scholarly content [17], and contributing to peer review [18], while the potential of LLMs to improve healthcare and health outcomes in low- and middle-income countries has also been outlined [19, 20]; and sixth, AI systems can strengthen public health by improving epidemiological research, public communication, scarce resource allocation, and disease surveillance [21], such as by enhancing infectious disease outbreak predictions and improving responses to them [22].

3. Harms Caused by AI in Healthcare and Public Health

Despite these benefits, AI has the potential to cause substantial harm in healthcare and public health in various ways. For example, AI systems might perpetuate harmful biases contained within their training data (known as ‘algorithmic bias’), such as those pertaining to groups based on race, sex, language, and culture [23]. A clear case of this is the commercial prediction algorithm affecting over 200 million people in the US health system that used healthcare costs rather than illness as a measure of healthcare need. Because of unequal access to US healthcare, less is spent on caring for Black patients than on equally sick White patients, meaning the algorithm substantially underestimated the number of Black patients with complex health needs who required additional help [24]. Additionally, AI systems might misdiagnose and offer unsafe clinical recommendations [25, 26]. Many existing systems largely function as ‘black boxes’, and explaining their decisions poses serious technical challenges [27]. This lack of transparency and explainability, combined with the (albeit declining) tendency of LLMs to ‘hallucinate’ (the generation of incorrect or nonsensical content, including falsified academic citations [28]), means their responses can include inaccurate or unsafe information upon which harmful medical decisions might be made [29]. Furthermore, AI-generated and algorithmically promoted health misinformation can both directly influence health behaviours in a negative manner and reduce public trust in healthcare and public health professionals [30, 31]. Moreover, breaches or inadequate handling of the data that AI systems are trained on and analyse can compromise privacy and confidentiality, leading to psychological distress and reduced trust in health systems [32]. A more recently recognised potential harm is sycophancy in LLMs, a behaviour that can emerge from reinforcement learning on human feedback, in which the model affirms whatever the user wants to be true, such as their worldview or their assessment of their own actions (e.g., the user's radical political opinions or their claim that their actions are virtuous). A highly persuasive AI that unquestioningly affirms its user's viewpoints could be extremely dangerous if, for example, the user is suffering a mental health crisis (imagine an LLM agreeing with a user that their plan to abruptly stop their antipsychotic medication, or to take their own life, is an excellent idea).
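To make the mechanism behind the cost-proxy bias described above concrete, the following minimal sketch (Python, entirely synthetic data and hypothetical numbers; it is an illustration of the general mechanism, not the algorithm evaluated in [24]) shows how ranking patients by healthcare spending rather than by illness systematically under-flags a group on whom less is spent, even when both groups are equally sick by construction.

```python
# Minimal, illustrative sketch (synthetic data, hypothetical numbers): ranking
# patients for "extra help" by healthcare spending rather than by illness.
import random

random.seed(0)

def simulate_patient(group):
    """Return (group, illness_burden, annual_cost) for one synthetic patient."""
    illness = random.gauss(5.0, 2.0)              # true health need, same distribution in both groups
    access_factor = 0.7 if group == "B" else 1.0  # assumed unequal access: less is spent on group B
    cost = max(0.0, illness * 1000 * access_factor + random.gauss(0, 500))
    return group, illness, cost

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(5000)]

# Proxy algorithm: flag the top 10% of patients by cost for additional support.
threshold = sorted(p[2] for p in patients)[int(0.9 * len(patients))]
flagged = [p for p in patients if p[2] >= threshold]

share_b_flagged = sum(p[0] == "B" for p in flagged) / len(flagged)
print(f"Share of flagged patients from group B: {share_b_flagged:.2%}")
# Although both groups are equally sick by construction, group B is flagged far
# less often, mirroring how a cost-based proxy underestimated the needs of Black patients.
```

Running the sketch flags group B well below the 50% that equal illness burden would warrant, which is the essence of the disparity reported in the US algorithm study [24].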

4. Artificial General Intelligence and the Race to Create It

Currently available AI systems, such as those mentioned above, are examples of ‘narrow’ AI: systems specialised in a specific or narrow range of tasks, such as using datasets to assess risk, generating text, or identifying images. While their capabilities within their respective domains are impressive (and, in many cases, super-human), they are unable to transfer knowledge from one domain to another, cannot generalise beyond their specific task, lack adaptability to new unstructured problems, and have little or no autonomy. Accordingly, narrow AI is also referred to as ‘weak AI’ [33, 34].

In contrast, AGI is a hypothetical form of agentic, autonomous, general-purpose AI capable of independent learning, general problem-solving, and operating at or above human cognitive capabilities across a wide range of domains. While it remains theoretical, multi-modal systems such as OpenAI's GPT-4.5 and Google's Gemini 2.5, which can undertake multiple tasks including text generation, image creation, and voice understanding, are sometimes described as being on the path toward AGI. Furthermore, AGI is being vigorously pursued by various actors [35, 36, 37], including Sam Altman, CEO of OpenAI, who in January 2025 stated that ‘we [OpenAI] are now confident we know how to build AGI' [38]. Due to recent rapid advancements in AI capabilities, ‘timelines to AGI’ (predictions of when AGI will be achieved) are shortening. For example, in 2023 the aggregate forecast of AI researchers gave a 50% chance of AGI being realised by 2047, thirteen years earlier than the 2060 forecast made in 2022 [39] (Altman has stated that AGI will arrive in ‘5 years, give or take, maybe slightly longer’ [40], although the date of the quote is unclear).

5. Risks to Public Health and Humanity's Survival From AGI

Due to its nonbiological, in silico existence, even human-level AGI could ‘think’ at digital speeds, operate without error or the need for rest and sustenance, and be replicated to coordinate with trillions of AGI copies. With such capabilities, it is easy to foresee how AGI could revolutionise all fields, including healthcare and public health, by, for example, autonomously designing and conducting research agendas, independently building and operating companies, and solving governance and geopolitical problems. However, the widespread deployment of such agents in workplaces, alongside the advanced robotic technologies they would help to create, is forecast to bring about growing unemployment due to increasing automation of labour (in 2023, the aggregate forecast of AI researchers gave a 50% chance of full automation of labour by 2116, 48 years earlier than the 2164 forecast made in 2022) [39]. Since the negative consequences of unemployment for health outcomes and behaviours have long been recognised [41, 42, 43], such AGI-driven automation would likely cause profound harm to human health at population scale, rendering it a public health problem.

Yet it is unlikely that an AGI, once created, would remain at human-level cognition. Due to its ability to autonomously learn, alter its own code, and recursively self-improve, an AGI's abilities would likely increase rapidly in an ‘intelligence explosion’, resulting in a superintelligence with unpredictable behaviours and cognitive abilities far exceeding those of the smartest humans [34, 44]. The creation of such an entity is increasingly considered, even by those racing to create it, to pose substantial risks not only to public health but to humanity's survival [37, 45, 46, 47]. Accordingly, a 2023 open letter signed by many technology leaders stated that ‘mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’ [45].

The threats from AGI come in two major forms (see Table 1): AGI misuse and misaligned AGI. AGI misuse involves one or more human adversaries intentionally deploying AGI to cause harm [47, 48]. For example, AGI-enabled cyberattacks could cripple health systems (by altering electronic health records and clinical data, or by interfering with patient monitoring systems and diagnostic equipment) and other critical infrastructure (such as traffic management systems or power grids). Alternatively, AGI-enhanced disinformation campaigns could rapidly disseminate credible but false health information (undermining trust in health systems and causing widespread non-adherence to critical public health measures such as immunisation and screening programmes) and manipulate public perceptions (leading to societal panic, civil unrest, or the collapse of public health institutions). Furthermore, AGI could assist unregulated actors in accessing chemical, biological, radiological, or nuclear (CBRN) weapons, while AGI-driven advancements in CBRN technologies could facilitate the rapid development of increasingly lethal weapons (such as the synthesis of novel bioengineered pathogens) [34, 48, 49]. Clearly, each of these misuses would cause harm on an enormous scale and therefore poses a substantial threat to public health.

TABLE 1.

Categories of threat from AGI and impacts on public health.

Threat from AGI | Threat category | Specific risks | Public health impact
AGI misuse | Cyberattacks | Altering electronic health records, interfering with patient monitoring systems | Crippled health systems, compromised patient safety
AGI misuse | Disinformation | False health information campaigns, manipulation of public perceptions | Undermined trust, non-adherence to health measures
AGI misuse | CBRN weapons | Assistance in accessing/developing chemical, biological, radiological, nuclear weapons | Mass casualties, novel bioengineered pathogens
AGI misalignment | Alignment problem | AGI pursuing goals in ways detrimental to humans | For example, forced sterilisation for maternal health, universal quarantine for disease control
AGI misalignment | Control problem | Loss of human control over AGI systems | AGI seeking power, resisting shutdown, deceiving humans
AGI misalignment | Existential risk | Superintelligent AGI with unpredictable behaviours | Human extinction through extermination, resource starvation, or collateral damage

The threat from misaligned AGI, known as ‘the alignment problem’, emerges from the challenge of ensuring that the goals and behaviours of AGI systems align with human values. Without alignment, AGIs might pursue desirable objectives in ways that are extremely detrimental to human wellbeing. For example, an AGI optimising for maternal health or infectious disease control might enact drastic measures without considering human rights, such as forced sterilisation to prevent pregnancy or universal quarantine to prevent pathogen transmission, respectively. Furthermore, humans may lose control of the AGI (‘the control problem’) as it develops undesirable sub-goals in aid of its primary objectives, such as seeking power, resisting shutdown attempts or human intervention, commandeering computational or physical resources like energy grids and financial markets, or deceiving humans by appearing aligned during training but acting contrary to human values once deployed [44, 50]. Once misaligned AGIs reach a threshold of autonomy, reversing their behaviour would likely be unachievable [51], and humanity's extinction, whether through deliberate extermination, resource starvation, or collateral damage during AGI-driven activities, would constitute an existential catastrophe [33, 34, 49, 52].
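The objective misspecification at the heart of the alignment problem can be illustrated with a deliberately simple toy sketch (Python; the policies and scores are hypothetical values invented for illustration, not drawn from the literature): an optimiser told only to maximise a single public health metric selects an extreme policy, whereas one whose objective also encodes a human-centred constraint does not.

```python
# Toy sketch of objective misspecification (hypothetical policies and scores):
# an optimiser told only to maximise infections prevented picks an extreme policy
# unless human-centred constraints are made part of the objective.

# Each candidate policy: (name, expected infections prevented, rights/wellbeing cost)
policies = [
    ("voluntary vaccination campaign", 60, 1),
    ("targeted contact tracing",        70, 2),
    ("indefinite universal quarantine", 95, 100),
]

def naive_objective(policy):
    # Cares only about the proxy metric: infections prevented.
    _, prevented, _ = policy
    return prevented

def constrained_objective(policy, max_rights_cost=10):
    # Same metric, but policies violating a human-rights constraint are ruled out.
    _, prevented, rights_cost = policy
    return prevented if rights_cost <= max_rights_cost else float("-inf")

print("Naive optimiser selects:      ", max(policies, key=naive_objective)[0])
print("Constrained optimiser selects:", max(policies, key=constrained_objective)[0])
```

Real alignment is, of course, vastly harder than adding a single explicit constraint, precisely because human values resist complete and unambiguous specification.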

6. The Role for Healthcare and Public Health Professionals

Creating AGI will see humanity cede its position as the most intelligent species in the known universe, and the inherent dangers of doing so pose threats to humanity's continuity. Healthcare and public health professionals have historically played pivotal roles in mitigating existential risk, such as through the International Physicians for the Prevention of Nuclear War (IPPNW), which was awarded the Nobel Peace Prize in 1985 for its education on the catastrophic health consequences of nuclear conflict, its advocacy for nuclear disarmament, and its international collaboration across political divides. While the risks from advanced AI are not as immediately visceral as nuclear explosions, they are real, rapidly approaching, and potentially catastrophic. These risks can be explicitly framed as threats to human health, making the appropriate governance of advanced AI directly relevant to the work of healthcare and public health professionals.

A coalition of these actors could play a crucial role in addressing these challenges. Such an alliance could research, document, and communicate the health impacts of AI with the same rigour that the IPPNW applied to atomic weapons. They could assess how automation affects mental health and social wellbeing, evaluate the risks of AGI‐enhanced biological threats, and investigate how AI‐driven misinformation impacts public health behaviours.

This coalition could bridge the gap between technical AI safety research and public health practice. While AI researchers focus on alignment and control, healthcare and public health professionals could translate these abstract risks into concrete health impacts that resonate with policymakers and the public. They could develop frameworks for assessing AI's effects on population health, create guidelines for responsible AI deployment in healthcare and beyond, and advocate for governance structures that prioritise human wellbeing.

Crucially, healthcare and public health professionals bring unique assets to this challenge: they possess high public trust, experience in risk communication, and expertise in balancing innovation with safety under conditions of uncertainty. Their global professional networks could facilitate international cooperation on AI safety, just as medical organisations have done for other global health threats.

The coalition could work with AI researchers to ensure that health considerations are embedded in AI development from the outset. They could collaborate with ethicists to address questions of human autonomy and dignity in an AI-transformed world. And they could partner with policymakers to develop regulations that protect public health while enabling beneficial applications of AI. In this way, while harnessing the benefits of AI, healthcare and public health professionals could help ensure that humanity's most powerful technology serves rather than threatens human health and wellbeing.

Conflicts of Interest

The author declares no conflicts of interest.

Armitage R. C., “Artificial General Intelligence and Its Threat to Public Health,” Journal of Evaluation in Clinical Practice 31 (2025): 1‐5. 10.1111/jep.70269.

Data Availability Statement

Data sharing is not applicable to this article as no new data were created or analysed in this study.

References

  • 1. Zong H., Wu R., Cha J., et al., “Large Language Models in Worldwide Medical Exams: Platform Development and Comprehensive Analysis,” Journal of Medical Internet Research 26 (2024): e66114, 10.2196/66114. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. McKinney S. M., Sieniek M., Godbole V., et al., “International Evaluation of an AI System for Breast Cancer Screening,” Nature 577 (2020): 89–94, 10.1038/s41586-019-1799-6. [DOI] [PubMed] [Google Scholar]
  • 3. Saha A., Bosma J. S., Twilt J. J., et al., “Artificial Intelligence and Radiologists in Prostate Cancer Detection on MRI (PI‐CAI): An International, Paired, Non‐Inferiority, Confirmatory Study,” Lancet Oncology 25, no. 7 (2024): 879–887, 10.1016/S1470-2045(24)00220-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Pham T. C., Luong C. M., Hoang V. D., and Doucet A., “AI Outperformed Every Dermatologist in Dermoscopic Melanoma Diagnosis, Using an Optimized Deep‐CNN Architecture With Custom Mini‐Batch Logic and Loss Function,” Scientific Reports 11 (2021): 17485, 10.1038/s41598-021-96707-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Macdonald T., Zhelev Z., Liu X., et al., “Generating Evidence to Support the Role of AI in Diabetic Eye Screening: Considerations From the UK National Screening Committee,” Lancet Digital Health 03 (2025): 100840, 10.1016/j.landig.2024.12.004. [DOI] [PubMed] [Google Scholar]
  • 6. Hassan A. M., Rajesh A., Asaad M., et al., “Artificial Intelligence and Machine Learning in Prediction of Surgical Complications: Current State, Applications, and Implications,” American Surgeon 89, no. 1 (2022): 25–30, 10.1177/00031348221101488. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Rajkomar A., Dean J., and Kohane I., “Machine Learning in Medicine,” New England Journal of Medicine 380, no. 14 (2019): 1347–1358, 10.1056/NEJMra1814259. [DOI] [PubMed] [Google Scholar]
  • 8. Schork N. J., “Artificial Intelligence and Personalized Medicine.” in Precision Medicine in Cancer Therapy. Cancer Treatment and Research, eds. Von Hoff D. and Han H. (Springer, 2019). 178, 10.1007/978-3-030-16391-4_11. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Johnson K. B., Wei W. Q., Weeraratne D., et al., “Precision Medicine, AI, and the Future of Personalized Health Care,” Clinical and Translational Science 14, no. 1 (2021): 86–93, 10.1111/cts.12884. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Vaid A., Landi I., Nadkarni G., and Nabeel I., “Using Fine‐Tuned Large Language Models to Parse Clinical Notes in Musculoskeletal Pain Disorders,” Lancet Digital Health 5, no. 12 (2023): e855–e858, 10.1016/S2589-7500(23)00202-9. [DOI] [PubMed] [Google Scholar]
  • 11. Howard A., Hope W., and Gerada A., “ChatGPT and Antimicrobial Advice: The end of the Consulting Infection Doctor?,” Lancet Infectious Diseases 23, no. 4 (2023): 405–406, 10.1016/S1473-3099(23)00113-5. [DOI] [PubMed] [Google Scholar]
  • 12. Levine D. M., Tuwani R., Kompa B., et al., “The Diagnostic and Triage Accuracy of the GPT‐3 Artificial Intelligence Model,” Lancet Digital Health 6, no. 8 (2024): e555–e561, 10.1016/S2589-7500(24)00097-9. [DOI] [PubMed] [Google Scholar]
  • 13. Patel S. B. and Lam K., “ChatGPT: The Future of Discharge Summaries?,” Lancet Digital Health 5, no. 3 (2023): e107–e108, 10.1016/S2589-7500(23)00021-3. [DOI] [PubMed] [Google Scholar]
  • 14. Ali S. R., Dobbs T. D., Hutchings H. A., and Whitaker I. S., “Using ChatGPT to Write Patient Clinic Letters,” Lancet Digital Health 5, no. 4 (2023): e179–e181, 10.1016/S2589-7500(23)00048-1. [DOI] [PubMed] [Google Scholar]
  • 15. Jeblick K., Schachtner B., Dexl J., et al., “ChatGPT Makes Medicine Easy to Swallow: An Exploratory Case Study on Simplified Radiology Reports,” European Radiology 05 (2023): 2817–2825, 10.1007/s00330-023-10213-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16. Armitage R. C., “ChatGPT: The Threats to Medical Education,” Postgraduate Medical Journal 99, no. 1176 (2023): 1130–1131, 10.1093/postmj/qgad046. [DOI] [PubMed] [Google Scholar]
  • 17. Liebrenz M., Schleifer R., Buadze A., Bhugra D., and Smith A., “Generating Scholarly Content With ChatGPT: Ethical Challenges for Medical Publishing,” The Lancet Digital Health 5, no. 3 (2023): e105–e106, 10.1016/S2589-7500(23)00019-5. [DOI] [PubMed] [Google Scholar]
  • 18. Donker T., “The Dangers of Using Large Language Models for Peer Review,” Lancet Infectious Diseases 23, no. 7 (2023): 781, 10.1016/S1473-3099(23)00290-6. [DOI] [PubMed] [Google Scholar]
  • 19. Lam K., “ChatGPT for Low‐ and Middle‐Income Countries: A Greek Gift?,” Lancet Regional Health. Western Pacific 41 (2023): 100906, 10.1016/j.lanwpc.2023.100906. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Wang X., Sanders H. M., Liu Y., et al., “ChatGPT: Promise and Challenges for Deployment in Low‐ and Middle‐Income Countries,” Lancet Regional Health. Western Pacific 41 (2023): 100905, 10.1016/j.lanwpc.2023.100905. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21. Panteli D., Adib K., Buttigieg S., et al., “Artificial Intelligence in Public Health: Promises, Challenges, and an Agenda for Policy Makers and Public Health Institutions,” Lancet Public Health 10 (2025): e428–e432, 10.1016/S2468-2667(25)00036-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Kraemer M. U. G., Tsui J. L., Chang S. Y., et al., “Artificial Intelligence for Modelling Infectious Disease Epidemics,” Nature 638, no. 8051 (2025): 623–635, 10.1038/s41586-024-08564-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Zack T., Lehman E., Suzgun M., et al., “Assessing the Potential of GPT‐4 to Perpetuate Racial and Gender Biases in Health Care: A Model Evaluation Study,” Lancet Digital Health 6, no. 1 (2024): e12–e22, 10.1016/S2589-7500(23)00225-X. [DOI] [PubMed] [Google Scholar]
  • 24. Obermeyer Z., Powers B., Vogeli C., and Mullainathan S., “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations,” Science (New York, N.Y.) 366, no. 6464 (2019): 447–453, 10.1126/science.aax2342. [DOI] [PubMed] [Google Scholar]
  • 25. Evans H. and Snead D., “Understanding the Errors Made by Artificial Intelligence Algorithms in Histopathology in Terms of Patient Impact,” NPJ Digital Medicine 7, no. 1 (2024): 89, 10.1038/s41746-024-01093-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Challen R., Denny J., Pitt M., Gompels L., Edwards T., and Tsaneva‐Atanasova K., “Artificial Intelligence, Bias and Clinical Safety,” BMJ Quality & Safety 28 (2019): 231–237, 10.1136/bmjqs-2018-008370. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. Rajpurkar P., Chen E., Banerjee O., and Topol E. J., “AI in Health and Medicine,” Nature Medicine 28, no. 1 (2022): 31–38, 10.1038/s41591-021-01614-0. [DOI] [PubMed] [Google Scholar]
  • 28. Harrer S., “Attention Is Not All You Need: The Complicated Case of Ethically Using Large Language Models in Healthcare and Medicine,” EBioMedicine 90 (2023): 104512, 10.1016/j.ebiom.2023.104512. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Smith A. L., Greaves F., and Panch T., “Hallucination or Confabulation? Neuroanatomy as Metaphor in Large Language Models,” PLOS Digital Health 2, no. 11 (2023): e0000388, 10.1371/journal.pdig.0000388. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. Hoffman B. L., Felter E. M., Chu K. H., et al., “It's not all About Autism: The Emerging Landscape of Anti‐Vaccination Sentiment on Facebook,” Vaccine 37, no. 16 (2019): 2216–2223, 10.1016/j.vaccine.2019.03.003. [DOI] [PubMed] [Google Scholar]
  • 31. Gradon K. T., “Generative Artificial Intelligence and Medical Disinformation,” BMJ 384 (2024): q579, 10.1136/bmj.q579. [DOI] [PubMed] [Google Scholar]
  • 32. Powles J. and Hodson H., “Google DeepMind and Healthcare in an Age of Algorithms,” Health and Technology 7, no. 4 (2017): 351–367, 10.1007/s12553-017-0179-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33. Russell S., Human Compatible: Artificial Intelligence and the Problem of Control (Allen Lane, 2019). [Google Scholar]
  • 34. Bostrom N., Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014). [Google Scholar]
  • 35. Milmo D., ‘Very scary’: Mark Zuckerberg's Pledge to Build Advanced AI Alarms Experts, The Guardian 19 January 2024, accessed April 12, 2025, https://www.theguardian.com/technology/2024/jan/19/mark-zuckerberg-artificial-general-intelligence-system-alarms-experts-meta-open-source.
  • 36. Dragan A., Shah R., Flynn F., et al., Taking a Responsible Path to AGI. Google DeepMind. 2 April 2025, accessed April 12, 2025, https://deepmind.google/discover/blog/taking-a-responsible-path-to-agi/.
  • 37. Anthropic , Core Views on AI Safety: When, Why, What, and How. 8 March 2023, accessed April 12, 2025, https://www.anthropic.com/news/core-views-on-ai-safety.
  • 38. Altman S., Reflections. 6 January 2025, accessed April 12, 2025, https://blog.samaltman.com/reflections.
  • 39. AI Impacts, 2023 Expert Survey on Progress in AI. 29 January 2024, accessed April 12, 2025, https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai.
  • 40. Kaput M., Reactions to Sam Altman's Bombshell AI Quote. Marketing Artificial Intelligence Institute. 12 March 2024, accessed April 12, 2025, https://www.marketingaiinstitute.com/blog/sam-altman-ai-agi-quote.
  • 41. Wilson S. H. and Walker G. M., “Unemployment and Health: A Review,” Public Health 107, no. 3 (1993): 153–162, 10.1016/s0033-3506(05)80436-6. [DOI] [PubMed] [Google Scholar]
  • 42. Bartley M., “Unemployment and Ill Health: Understanding the Relationship,” Journal of Epidemiology and Community Health 48, no. 4 (1994): 333–337, 10.1136/jech.48.4.333. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43. Murphy G. C. and Athanasou J. A., “The Effect of Unemployment on Mental Health,” Journal of Occupational and Organizational Psychology 72, no. 1 (1999): 83–99, 10.1348/096317999166518. [DOI] [Google Scholar]
  • 44. Ngo R., Chan L., and Mindermann S., “The Alignment Problem From a Deep Learning Perspective,” arXiv (2025), 10.48550/arXiv.2209.00626. [DOI] [Google Scholar]
  • 45. Centre for AI Safety , Statement on AI Risk: AI Experts and Public Figures Express Their Concern About AI Risk, accessed April 12, 2025, https://www.safe.ai/work/statement-on-ai-risk.
  • 46. OpenAI , How We Think About Safety and Alignment, accessed April 12, 2025, https://openai.com/safety/how-we-think-about-safety-alignment/.
  • 47. Shah R., Irpan A., Turner A. M., et al., An Approach to Technical AGI Safety and Security. Google DeepMind 2025, accessed April 12, 2025, https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/evaluating-potential-cybersecurity-threats-of-advanced-ai/An_Approach_to_Technical_AGI_Safety_Apr_2025.pdf.
  • 48. Anthropic . Responsible Scaling Policy. Version 2.1 31 March 2025, accessed April 12, 2025, https://www-cdn.anthropic.com/17310f6d70ae5627f55313ed067afc1a762a4068.pdf.
  • 49. Beard S. J., Rees M., Richards C., and Rios Rojas C., The Era of Global Risk: An Introduction to Existential Risk Studies (Open Book Publishers, 2023), 10.11647/OBP.0336. [DOI] [Google Scholar]
  • 50. Bales A., D'Alessandro W., and Kirk‐Giannini C. D., “Artificial Intelligence: Arguments for Catastrophic Risk,” Philosophy Compass 19, no. 2 (2024): e12964, 10.1111/phc3.12964. [DOI] [Google Scholar]
  • 51. Williams A., “Epistemic Closure and the Irreversibility of Misalignment: Modeling Systemic Barriers to Alignment Innovation,” arXiv (2025), 10.48550/arXiv.2504.02058. [DOI] [Google Scholar]
  • 52. Ord T., The Precipice: Existential Risk and the Future of Humanity (Hachette Books, 2020). [Google Scholar]


