Health Science Reports. 2024 Feb 15;7(2):e1912. doi: 10.1002/hsr2.1912

ChatGPT and mental health: Friends or foes?

Khondoker Tashya Kalam 1, Jannatul Mabia Rahman 2, Md Rabiul Islam 3, Syed Masudur Rahman Dewan 1
PMCID: PMC10867692  PMID: 38361805

Abstract

Background

ChatGPT is an artificial intelligence (AI) language model that has gained popularity as a virtual assistant because of its exceptional capacity to solve problems and make decisions. However, misuse of the technology and misinterpretation of its output can have potentially hazardous consequences for a user's mental health.

Discussion

Because it lacks real‐time fact‐checking capabilities, ChatGPT may create misleading or erroneous information. Given that AI technology has the potential to influence a person's thinking, we anticipate ChatGPT's future repercussions on mental health by examining instances in which inappropriate use may lead to mental disorders. While several studies have demonstrated how the AI model may transform mental health care and therapy, certain drawbacks, including bias and privacy violations, have also been identified.

Conclusion

Educating people and organizing workshops on the use of AI technology, strengthening privacy measures, and updating ethical standards are crucial initiatives to prevent misuse and its resultant dire impacts on mental health. Future longitudinal research on the potential of these platforms to affect a variety of mental health problems is recommended.

Keywords: artificial intelligence, ChatGPT, depression, suicide, mental health

1. BACKGROUND

ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI and first launched on November 30, 2022. It enables users to mold and steer a conversation toward the desired level of specificity, organization, style, and language. 1 ChatGPT has significantly streamlined our daily lives since its introduction. The AI tool can respond to any command promptly, and consequently, people are growing increasingly dependent on it. Excessive dependency has been hypothesized to be both a risk factor for and a complication of mental health problems such as depression. 2 Dependency, whether as a trait or a state, can be closely related to depression. Theories explaining this correlation broadly describe two groups of people: those who tend to self‐blame and hold a constantly pessimistic attitude toward life, and those who place excessive expectations on affection and become dissatisfied when those expectations are not met 3 ; the resulting depression can lead to suicide attempts at a later stage. In today's world, technology pervades society and is widely employed in applications related to mental health. 4 It can help psychiatrists with everyday duties, including completing medical records, facilitating communication between doctors and patients, editing academic papers and presentations, and planning and carrying out research studies. 5 However, little research has been conducted to evaluate the obstacles, such as technological challenges and unethical use of the platform, in mental health services.

In this article, we aimed to identify the possible deleterious impacts ChatGPT may have on people's general mental health, based on the current scenario and existing research.

1.1. What is ChatGPT?

ChatGPT is based on OpenAI's proprietary GPT‐3.5 and GPT‐4 family of GPT models. These large language models (LLMs) have been fine‐tuned for conversational applications using supervised and reinforcement learning strategies. LLM experts claim that this training has improved ChatGPT's handling of “hallucinations” compared with its predecessor, GPT‐3, but ChatGPT is still infamous for confidently presenting false information. 6

Initially made accessible as a free research preview, ChatGPT is now offered by OpenAI as a freemium service owing to its popularity. The GPT‐3.5‐based version is accessible to free‐tier users, while the more sophisticated GPT‐4‐based version and early access to upcoming features are available to premium subscribers under the brand name “ChatGPT Plus.”

Nowadays, ChatGPT is frequently used in customer service interactions, virtual assistants, and chatbots. By providing more engaging and fluid interactions, it is intended to enhance user experiences. Moreover, ChatGPT excels at producing a variety of pertinent, contextual replies, increasing its adaptability. It can handle a wide variety of queries, making it suitable for tasks like information retrieval and customer service. Pretraining the model on huge datasets helps it capture linguistic patterns and produce grammatically sound replies.
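As a minimal sketch of how such an integration might look, the following example sends a customer‐service‐style query to a GPT model through the OpenAI Python SDK; the model name, prompts, and setup shown here are illustrative assumptions rather than details taken from this article.

    from openai import OpenAI

    # Illustrative sketch only: the model name and prompts are assumptions.
    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # a model in the GPT-3.5 family behind ChatGPT's free tier
        messages=[
            {"role": "system", "content": "You are a helpful customer-support assistant."},
            {"role": "user", "content": "My order has not arrived. What should I do?"},
        ],
    )
    print(response.choices[0].message.content)  # the model's conversational reply

In a deployed chatbot, the conversation history would be appended to the messages list on each turn so the model retains context across the exchange.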

ChatGPT may generate misleading or incorrect information, as it lacks real‐time fact‐checking capabilities. The model can provide replies that appear plausible but are contextually nonsensical. Moreover, because ChatGPT learns from potentially skewed or contentious training data, there are concerns about biased or disrespectful replies.

1.2. ChatGPT and possible mental health problems

Examining the relationship between AI use and mental health is urgently necessary, as conversational agents like ChatGPT have become highly popular in recent times. Even though this field of study is still in its early stages, several crucial factors highlight the possible connection. 7

A variety of social or relational interactions or events can trigger mental health problems such as depression. 8 Common examples include the loss of a job, financial problems or poverty leading to homelessness, deterioration of social relationships, and dysfunction in the family. Moreover, abusive relationships and deteriorating self‐confidence, experiences of helplessness that may lead one to believe they have lost control of life, trauma such as abuse, neglect, and rape, and social isolation may also cause mental health problems.

These factors can cause depression, abnormal hormone release, suicidal thoughts, hypertension, diabetes, heart disease, and a few other conditions. Some situations involving the use of ChatGPT, including the following, can cause mental health issues in individuals.

  • Generative AI has an outstanding capacity to produce human‐like text. However, using it as a replacement for human interaction and social communication, and becoming heavily involved with it, may increase social isolation, which is a known risk factor for depression. Research has found that social isolation and loneliness are associated with depressive symptoms. 9

  • The job market is changing fast due to AI, and these changes may continue for an extended period. After all, the development of AI technology has led to computers carrying out jobs that formerly required human intervention, and people may therefore lose hope of finding employment and become discouraged about their careers. In addition, if generative AI is used without integrity, tampering with training data can result in erroneous financial data and market forecasts. Furthermore, the spread of incorrect information can cause financial markets to panic, resulting in stock market crashes, heightened exchange‐rate risk, and disorderly capital movements, posing a substantial danger to financial stability and harming financial markets and economies. 10

  • Students may utilize ChatGPT both to study and to pass exams. The latter practice may seriously impair their educational progress and productivity, and the very foundation of the educational system may be in danger. A student's future may be bleak if he or she falls behind in knowledge; previous research has found school difficulties to be correlated with mental disorders and substance abuse. 11 Moreover, there may not be any open positions if ChatGPT‐like AI begins to dominate the workforce. If adequately qualified candidates cannot find employment because of the dominance of AI, it may leave them in despair, and people who cannot obtain employment frequently become depressed.

ChatGPT aims to enhance human‐computer interaction by simulating human‐like dialog, creating AI technology for natural language conversation, and supporting various applications such as chatbots, virtual assistants, and customer service engagements. 12 It is widely used on online platforms for automated conversational interactions and is accessible to individuals from diverse backgrounds, having been integrated into applications ranging from customer‐support chatbots to conversational AI services comparable to virtual assistants such as Siri (Apple) and Alexa (Amazon). However, large language models have the capacity to dramatically influence an individual's cognition and worldview. Continued dependence on models like ChatGPT may result in the construction of an information cocoon in which users are restricted to a limited range of generated material. Because individuals are exposed to such a narrow band of information, this dependence can promote self‐isolation. In susceptible groups, such as people suffering from depression, improper statements produced by these models can have disastrous implications, potentially leading to self‐harm or suicide. 10 According to a daily newspaper, after 6 weeks of interacting with an AI‐powered chatbot like ChatGPT, a Belgian man died by suicide after growing worried about the impacts of global warming, a condition some call “eco‐anxiety.” 13

Chatbots for mental health care are booming, but there is little proof that they help. 14 AI‐based software such as ChatGPT, when used for mental health treatment and therapy, will inevitably require users to provide sensitive personal information about themselves and their families, leaving them open to possible privacy violations and data breaches. 15 Financial, social, physical, or psychological harm might arise from an app's violation of data privacy. 16 Moreover, disastrous consequences might result from ChatGPT's inability to recognize nonverbal cues or subtle crisis signals from adolescents and children dealing with mental health problems, including suicidal and homicidal thoughts. Although ChatGPT can give evidence‐based information, it cannot diagnose or provide a care plan specifically customized to the requirements of a child or adolescent. 17 In contrast to mental health care specialists, ChatGPT appears to underestimate the risk of suicide attempts, which is problematic, especially in the most severe cases, according to a research study. 18 There are thus worries over ChatGPT's possible effects on mental health, including despair and suicide, even though its main goals are to improve user experiences and offer convenience. For people with depression or otherwise vulnerable mental health, the absence of real human empathy and emotional support, together with potential exposure to false or harmful content through AI interactions, can have detrimental impacts.

2. DISCUSSION

Good mental health is considered an asset for a healthy and happy life. Although AI such as ChatGPT is quite useful in daily life, it also has the potential to increase cases of depression and other mental health issues. Educating people so that AI technology is not misused, and advancing knowledge by organizing workshops, seminars, training sessions, and counseling sessions on its proper utilization, would be timely initiatives for making good use of manpower and building a stronger community. People would thus be able to put their skills and imagination to effective use, and if this AI technology interferes with anyone's ability to work, they can turn to entrepreneurship. Ultimately, we do not want to be dependent on AI technology; we want to use it to save time and produce high‐quality work. To ward off the possibility of AI negatively influencing mental health, certain ethical guidelines and protocols can be established.

3. CONCLUSION

To ensure the effective and safe application of ChatGPT, it is essential to take into consideration the potential dire effects of AI on mental health. While ChatGPT and other AI‐based chatbots are gaining popularity in the field of mental health for diagnostic and therapeutic purposes, we should also be aware of potential bias, privacy breaches, and the dissemination of false information through the platform. Longitudinal studies are recommended for a more in‐depth understanding of the impact of these AI models on the field of mental health.

AUTHOR CONTRIBUTIONS

Khondoker Tashya Kalam: Conceptualization; writing—original draft. Jannatul Mabia Rahman: Conceptualization; writing—review and editing. Md. Rabiul Islam: Writing—review and editing. Syed Masudur Rahman Dewan: Conceptualization; supervision; writing—review and editing.

CONFLICT OF INTEREST STATEMENT

The authors declare no conflict of interest.

TRANSPARENCY STATEMENT

The lead author Syed Masudur Rahman Dewan affirms that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

ACKNOWLEDGMENTS

The authors would like to express their gratitude to the healthcare scientists.

Kalam KT, Rahman JM, Islam MR, Dewan SMR. ChatGPT and mental health: friends or foes? Health Sci Rep. 2024;7:e1912. 10.1002/hsr2.1912

DATA AVAILABILITY STATEMENT

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

REFERENCES
