Abstract
The release of ChatGPT-4, an internet-based computer program that simulates conversation with human users using Artificial Intelligence (AI), on March 14, 2023, set off a flurry of debates regarding the role and impact of AI on human life. Influential leaders and thinkers from diverse fields have chimed in to offer their views, admonitions, and/or recommendations. There is great diversity in viewpoints and visions regarding how AI will affect human destiny, ranging from confident optimism to stark doomsaying and everything in between. However, not much attention is being paid to the insidious long-term effects on human societies, many of them unintended consequences, that AI has the potential to create over a short period of time. Perhaps the greatest threat of AI is the potential for loss of meaning in life and for human-technology-created enfeeblement in a large section of humanity. All other threats, including those of current AI, are mere epiphenomena of this basic threat. Given that the genie of AI is out of the bottle and cannot be put back in, the first order of business for technologists, policymakers, and governments is to allocate resources and attention to addressing the problem of meaning in life and mitigating the sentiment of overwhelming and universal helplessness. Lastly, failing to be optimistic about AI, while remaining cautious and pragmatic, is not an option.
Keywords: artificial intelligence, ChatGPT-4, unintended consequences, AI, meaning in life, enfeeblement, mental health, violence, gender/sex dynamics, deaths of despair
The release of ChatGPT-4, an internet-based computer program that simulates conversation with human users using Artificial Intelligence (AI), on March 14, 2023, set off a flurry of debates regarding the role and impact of AI on human life. Influential leaders and thinkers from diverse fields have chimed in to offer their views, admonitions, and/or recommendations. There is great diversity in viewpoints and visions regarding how AI will affect human destiny, ranging from confident optimism1 to stark doomsaying2 and everything in between.3
Although the debate is new, AI technology is not. One can easily trace its roots back more than half a century, to the late 1960s.4 Even concerns about AI’s existential threat to humanity are not new. Stephen Hawking, the great physicist, spoke about it in 2014,5 and science fiction writer Samuel Butler had aired it as early as 1863 in an article he wrote on the topic.6 However, unlike prior future-looking trepidations, the timbre and tone of the current nervousness distinctly betray a sense of imminence and urgency, probably because, as Bill Gates pointed out in a recent interview,7 the cat is now out of the bag.
Popular media, semi-academic discussions,8,9 and even scholarly discourses10 reveal that the discussion of AI’s effects on human life, whether beneficial or harmful, focuses on issues that are somewhat obvious and direct, where cause and effect (strong correlation) is relatively easy to establish: for example, loss of jobs, spread of misinformation, identity theft, racial biases, and privacy loss on one side,9 or better drug discovery, improved productivity, and economic growth on the other,1,3,11 just to name a few.
However, not much attention is being paid to the insidious long-term effects on human societies, many of them unintended consequences, that AI has the potential to create over a short period of time. These effects may include a rapid shift in gender/sexual relational dynamics12,13 in humans with unpredictable outcomes; a further deepening of the mental health crisis among women, particularly young women,14 and an escalation of deaths of despair among men15; a surge in random mass-shooting violence in the U.S.16; and an overall decline in democratic systems of government.17 Such effects are seldom attributed explicitly to technologies like AI; nevertheless, they are a realistic expectation.
The reasons behind this phenomenon are threefold. One, amid the infodemic, data abundance, and availability of highly sophisticated analytical tools, decision makers are increasingly relying on “reductionist” but outdated linear predictive science (models) to chart their strategies and policies; this approach falls short given the accelerating complexity and dynamics of the current global milieu. Two, setting aside the gift of intuition that evolution endowed humans with to address the complexities of life, which may now be considered untrustworthy for decision making, a “holistic,” non-linear, dynamical complexity science capable of fully reproducing the human mind is yet to be developed. And three, in the absence of an explicitly non-human, AI-like external observing agency, or alternatively the passage of an enormous amount of time, most of humanity cannot see the big picture quickly and clearly.
While current diagnostics fall short regarding the spectrum of AI’s effects, the scenario is not much different in the landscape of treatment, that is, in handling the arrival of the age of AI. Solutions such as universal basic income (UBI) are being proposed to cope with job displacement by AI, but they seem suboptimal at best.18-20 To use the metaphor of a circus trapeze act: the current debate is mostly about whether the trapeze artist, high up in the air, will make it from the trapeze she has left behind to the swinging, yet-to-be-caught trapeze ahead of her. Current solutions pay scant attention to discussing or ensuring that the safety net beneath, meant to insure against a fatal fall, is intact, robust, and strong enough.
Two phenomena, one, the recent decline in life expectancy in the USA, and two, the indirect ripple effects of the COVID-19 pandemic on societies, aptly and lucidly illuminate the forest behind the trees. Historically, lower life expectancy has been associated mostly with factors like infectious diseases and/or famine and starvation. The phenomena of deaths of despair15 and the COVID-19 pandemic’s aftermath21,22 challenge this conventional wisdom and, in doing so, also point to a possible correct diagnosis-treatment pair for the challenge of the advent of AI.
Perhaps the greatest threat of AI is the potential for loss of meaning in life and for human-technology-created enfeeblement in a large section of humanity.9,23-26 All other threats, including those of current AI, are mere epiphenomena of this basic threat.
Given that the genie of AI is out of the bottle and cannot be put back in, the first order of business for technologists, policymakers, and governments is to allocate resources and attention to addressing the problem of meaning in life and mitigating the sentiment of overwhelming and universal helplessness. A discussion of the nitty-gritty details of how to untangle the meaning-in-life or human-enfeeblement problems is beyond the scope of this article.
However, a caveat is in order here. The first step in that direction is not simply to grant more resources to manage the issues of mental health but, even before doing that, to go back to square one and define what (good) mental health is. At the dawn of the age of AI, it behooves all stakeholders to remember what Richard Feynman (Nobel, Physics, 1965) said in the last sentence of Appendix F of the Rogers Commission report on the space shuttle Challenger tragedy: “. . . nature cannot be fooled.”27
Lastly, failing to be optimistic about AI, while remaining cautious and pragmatic, is not an option. The wise words of Epicurus, “When we exist, death is not; and when death exists, we are not,” or those of Isaac B. Singer (Nobel, Literature, 1978), who, when asked whether he believed in free will or determinism, replied, “Of course free will, there is no choice,” can provide us the needed courage.
Footnotes
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The author received no financial support for the research, authorship, and/or publication of this article.
ORCID iD: Avinash Patwardhan https://orcid.org/0000-0002-5101-6809
References
1. Semuels A. Microsoft’s Satya Nadella responds to concerns about AI. Time.com. Published May 9, 2023. Accessed May 12, 2023. https://time.com/6278179/satya-nadella-ai-concerns/
2. Matthews D. This Oxford professor thinks artificial intelligence will destroy us all. Vox.com. Published August 19, 2014. Accessed May 12, 2023. https://www.vox.com/2014/8/19/6031367/oxford-nick-bostrom-artificial-intelligence-superintelligence
3. Goldman Sachs. Generative AI could raise global GDP by 7%. GoldmanSachs.com. Published April 25, 2023. Accessed May 12, 2023. https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
4. Mendel JM. Applications of artificial intelligence techniques to a spacecraft control problem. NASA NTRS. Published May 1, 1967. Accessed May 12, 2023. https://ntrs.nasa.gov/api/citations/19670019698/downloads/19670019698.pdf
5. Cellan-Jones R. Stephen Hawking warns artificial intelligence could end mankind. BBC News. Published December 2, 2014. Accessed May 12, 2023. https://www.bbc.com/news/technology-30290540
6. Adami C. A brief history of artificial intelligence research. MIT Press Direct. Published May 2, 2021. Accessed May 12, 2023. https://direct.mit.edu/artl/article-abstract/27/2/131/107881/A-Brief-History-of-Artificial-Intelligence
7. Rigby J. Bill Gates says calls to pause AI won’t ‘solve challenges’. Reuters. Published April 4, 2023. Accessed May 12, 2023. https://www.reuters.com/technology/bill-gates-says-calls-pause-ai-wont-solve-challenges-2023-04-04/
8. Future of Life Institute. Pause giant AI experiments: an open letter. Future of Life Institute. Published March 22, 2023. Accessed May 12, 2023. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
9. Tegmark M. The ‘don’t look up’ thinking that could doom us with AI. Time. Published April 25, 2023. Accessed May 12, 2023. https://time.com/6273743/thinking-that-could-doom-us-with-ai/
10. Haupt CE, Marks M. AI-generated medical advice: GPT and beyond. JAMA. 2023;329(16):1349-1350. doi:10.1001/jama.2023.5321
11. Goldman Sachs. How artificial intelligence is accelerating innovation in healthcare. GoldmanSachs.com. Published April 26, 2023. Accessed May 12, 2023. www.goldmansachs.com/intelligence/pages/how-artificial-intelligence-is-accelerating-innovation-in-healthcare.html
12. Kilander G. Professor warns of a ‘mating’ crisis in US as fewer men go to college. The Independent. Published September 27, 2021. Accessed May 12, 2023. www.independent.co.uk/news/world/americas/mating-crisis-us-women-college-nyu-b1927704.html
13. Jenney A, Exner-Cortens D. Toxic masculinity and mental health in young women. Affilia. 2018;33(3):410-417. doi:10.1177/0886109918762492
14. Centers for Disease Control and Prevention. YRBSS data summary & trends. Adolescent and School Health. Published April 27, 2023. Accessed May 12, 2023. www.cdc.gov/healthyyouth/data/yrbs/yrbs_data_summary_and_trends.htm
15. Case A, Deaton A. Rising morbidity and mortality in midlife among white non-Hispanic Americans in the 21st century. Proc Natl Acad Sci. 2015;112(49):15078-15083.
16. Gun Violence Archive. Mass shootings in 2023. Gun Violence Archive. Published May 11, 2023. Accessed May 12, 2023. www.gunviolencearchive.org/query/0484b316-f676-44bc-97ed-ecefeabae077/map
17. Gorokhovskaia Y, Shahbaz A, Slipowitz A. Freedom in the World 2023: Marking 50 years in the struggle for democracy. Freedom House. Published March 2023. Accessed May 12, 2023. https://freedomhouse.org/sites/default/files/2023-03/FIW_World_2023_DigtalPDF.pdf
18. Furman J, Seamans R. AI and the economy. Innov Policy Econ. 2019;19(1):161-191.
19. Kostick-Quenet KM, Cohen IG, Gerke S, et al. Mitigating racial bias in machine learning. J Law Med Ethics. 2022;50(1):92-100. doi:10.1017/jme.2022.13
20. Caduff L, Diana G, Kutterer C, Papasotiriou S. Privacy issues in healthcare and their mitigation through privacy preserving technologies. In: Cirillo D, Catuara-Solarz S, Guney E, eds. Sex and Gender Bias in Technology and Artificial Intelligence. Academic Press; 2022:205-218.
21. de Jong EM, Ziegler N, Schippers MC. From shattered goals to meaning in life: life crafting in times of the COVID-19 pandemic. Front Psychol. 2020;11:577708. doi:10.3389/fpsyg.2020.577708
22. Costanza A, Di Marco S, Burroni M, et al. Meaning in life and demoralization: a mental-health reading perspective of suicidality in the time of COVID-19. Acta Biomed. 2020;91(4):e2020163. doi:10.23750/abm.v91i4.10515
23. Thill S, Houssemand C, Pignault A. Effects of meaning in life and of work on health in unemployment. Health Psychol Open. 2020;7(2):2055102920967258.
24. Dursun P, Alyagut P, Yılmaz I. Meaning in life, psychological hardiness and death anxiety: individuals with or without generalized anxiety disorder (GAD). Curr Psychol. 2022;41(6):3299-3317. doi:10.1007/s12144-021-02695-3
25. Laukyte M. Averting enfeeblement and fostering empowerment: algorithmic rights and the right to good administration. Comput Law Secur Rev. 2022;46:105718.
26. Doya K, Ema A, Kitano H, Sakagami M, Russell S. Social impact and governance of AI and neurotechnologies. Neural Netw. 2022;152:542-554.
27. Feynman RP. Appendix F: personal observations on the reliability of the shuttle. Report of the Presidential Commission on the Space Shuttle Challenger Accident. 1986:F1-F5.