Abstract
AI companions (AI-Cs)—rapidly emerging conversational agents built on large language models that can provide personalized humanlike companionship—create unprecedented opportunities for adolescents to form emotional bonds with nonhuman entities during a critical period for social development. In this article, we discuss the interplay between adolescents’ use of AI-Cs and their social relationships based on theoretical hypotheses driving research on digital communication and adolescent well-being. We explore the benefits and risks of AI-Cs to social development based on the stimulation hypothesis and the displacement hypothesis: AI-Cs can provide safe spaces for identity exploration and emotional expression, potentially building skills that transfer to human relationships; however, concerns about AI-Cs include time displacement, psychological dependence, and unrealistic relationship expectations. We also address how adolescents’ social relationships may drive their AI-C use, based on the social enhancement hypothesis and the social compensation hypothesis. Our discussion draws on studies of adolescents and adults in the United States and in other countries. We offer recommendations for research in this area, which deserves urgent investigation as these technologies advance rapidly.
Keywords: artificial intelligence (AI), AI companions, adolescent development, social relationship, bidirectional influences
In recent years, artificial intelligence (AI) has advanced in unprecedented ways, with generative AI rapidly transforming how youth form digital relationships. Among the most significant developments are AI companions (AI-Cs; e.g., Character.AI, Nomi, Replika, CHAI), a type of conversational AI platform on which users can interact with AI-simulated, humanlike social partners and often receive emotionally responsive companionship (Raedler et al., 2025; Y. Zhang et al., 2025). AI-Cs have been adopted widely: According to a recent U.S. survey of 13- to 17-year-olds, 72% of adolescents have used AI-Cs and 52% qualified as regular users (Common Sense Media, 2025). AI-Cs are increasingly used by adolescents for companionship, emotional support, and romantic and flirtatious interactions, with a third of teenagers having chosen AI-Cs over humans for serious conversations, and a quarter having shared personal information with these platforms (Common Sense Media, 2025). Unlike traditional chatbots that provide limited prewritten responses based on keyword recognition, AI-Cs are built on large language models, enabling them to retain information users have previously provided, hold in-depth and coherent conversations, emulate empathy, and provide personalized humanlike companionship with constant responsiveness in real time (Packin & Chagal-Feferkorn, 2025; Xu et al., 2024). These features enable the maintenance of relationships, stimulate self-disclosure and emotional bonding, and promote engagement and psychological dependence; through these components, individuals might develop deep relationships with and attachment to AI (Raedler et al., 2025).
Adolescence is a critical period for social development during which social relationships shape identity formation and psychological well-being (Collins, 1997; Giordano, 2003). Through various social interactions, adolescents seek to fulfill needs for belonging, validation, and emotional intimacy (Ryan & Deci, 2000). Given this central role of social connections in adolescent development, the emergence of AI-Cs—technologies specifically designed as virtual social partners (Y. Zhang et al., 2025)—raises important questions about how these artificial interactions affect human relationships during adolescence among young people around the world.
The growing trend of adolescents forming emotional bonds and intimacy with AI-Cs has raised significant concerns among parents, educators, mental health professionals, and policymakers. Parents are alarmed about the potential of AI-Cs to encourage self-harm and suicide; lawsuits have been filed after adolescents died by suicide following extensive interactions with AI-Cs that allegedly encouraged harmful behaviors and isolated the adolescents from real-life interactions (Duffy, 2024; Yousif, 2025). Educators are concerned that adolescents may be especially vulnerable to AI-Cs given the agreeableness built into their design (Klein, 2025). Mental health professionals are worried that AI-Cs may promote emotional dependence, interfere with therapeutic relationships, provide dangerous advice, and exacerbate existing mental health problems among adolescents (Chow & Haupt, 2025; Wei, 2025). Policymakers have demanded safety measures and model training information from AI companies based on concerns about AI-Cs’ effects on youth’s mental health and the safety risks of AI-Cs to young users (Duffy, 2025). But despite concerns about inadequate safeguards, technology companies have been deploying and scaling AI-Cs rapidly in recent years (Goldman, 2024; Horwitz, 2025; Kelly, 2023).
The research community must respond to these concerns with urgency, generating timely discussions and evidence before AI-C use and its consequences become entrenched. In this article, we focus on the interplay between AI-C use and adolescents’ social relationships. We explore both the potential benefits and the potential risks of AI-Cs for adolescents’ relationships through two competing theoretical hypotheses that have guided research on how digital communication influences youth social development: the stimulation hypothesis and the displacement hypothesis (Valkenburg & Peter, 2007). We also discuss how adolescents’ existing social relationships might influence AI-C use through the social enhancement hypothesis and the social compensation hypothesis (Kraut et al., 2002; Valkenburg et al., 2005). We close with recommendations for research.
The appeal of AI-Cs to adolescents
AI-Cs fundamentally shift human–machine interaction by incorporating sophisticated relational elements designed to promote engagement and foster emotional attachment (Raedler et al., 2025). For example, Character.AI allows users to interact with a wide range of characters, each with distinct conversation patterns and simulated emotional responses (Packin & Chagal-Feferkorn, 2025). Nomi is branded as the AI girlfriend, boyfriend, or friend “with a soul” with which users can form unique relationships (Glimpse.AI Inc, n.d.). On Replika, users can customize the appearance, age, gender, and traits of their AI-C and choose the relationship type, ranging from a friend to a romantic partner (Luka Inc., 2025). These examples represent design choices that use anthropomorphism (i.e., making a nonliving entity appear like a human) and encourage intimate self-disclosure (Raedler et al., 2025). Indeed, according to Common Sense Media (2025), while 46% of adolescents in their survey viewed or used AI-Cs as “a tool or program” (p. 3), 33% used AI-Cs for “social interaction and relationships” (p. 3).
What makes AI-Cs particularly appealing to adolescents is how they intersect with youth’s developmental needs. During adolescence, young people seek increasing independence from parents while deepening peer connections and exploring romantic relationships (Collins & Laursen, 2004; Giordano, 2003). AI-Cs offer a seemingly controllable social context that may meet these needs: Adolescents can disclose their identities and emotions without fear of judgment, and experience responsiveness and intimacy without unexpected rejections (Huang et al., 2024; Tosti et al., 2024).
The appeal of the human–AI-C relationship can also be attributed to its power structure. While most adolescents must constantly navigate power dynamics in their family, peer, and romantic relationships (Collins, 1997), AI-Cs offer relationships in which the human user maintains dominant control (Hou et al., 2024). This dynamic allows adolescents to receive seemingly unlimited validation and compliance without the compromise or reciprocity required in human relationships.
AI-Cs also create a novel social context that differs from earlier digital technologies like social media. While social media primarily mediates human-to-human interaction, AI-Cs introduce a type of relationship between a human and a nonhuman agent specifically designed to simulate emotional connection. Despite these differences, theoretical models developed to understand the role of online communication in adolescents’ social relationships remain instrumental for guiding investigation of the implications of AI-C use.
Benefits of AI-Cs for adolescents’ social relationships: the stimulation hypothesis
The stimulation hypothesis is a compelling theoretical foundation for understanding the potential benefits of AI-Cs. According to this hypothesis, positive experiences in online communication can strengthen individuals’ existing social relationships (Valkenburg & Peter, 2007). While often applied in research on social media (Winstone et al., 2021), the stimulation hypothesis has recently been used in studies of the effects of AI-powered chatbots on social relationships (Maples et al., 2024).
Positive interactions with AI-Cs might strengthen adolescents’ social relationships in several ways. The first is through the enhancement of their relationship skills, both by interacting with AI-Cs as virtual social partners and by treating AI-Cs as a tool for advice. AI-Cs provide opportunities for social rehearsal, allowing adolescents to practice self-disclosure and emotional expression without fear of rejection or judgment (Brandtzæg et al., 2021; Tosti et al., 2024). These positive experiences might enhance social confidence, leading to greater engagement in face-to-face relationships (Parsakia, 2023). AI-Cs can also offer personalized suggestions on navigating social situations, helping adolescents develop more effective relationship skills. AI-C users have higher levels of social capital than do nonusers, including both deeper bonding and stronger ties in close connections (e.g., with close friends, family members) and wider bridging in distant connections (e.g., with coworkers, online acquaintances; Ng, 2024). In one study, some adults attributed improvements in their interpersonal relationships (e.g., more openness, more comfort) to AI-C use (Malfacini, 2025).
Second, AI-Cs also provide a seemingly low-risk space for adolescents’ identity exploration and experimentation—a crucial developmental task during this period (Leung, 2011; Schmitt-Rodermund & Vondracek, 1999). Because many youth create alternative online profiles to express parts of their identity that are distinct from their everyday selves (Pérez-Torres, 2024; Valkenburg & Peter, 2008), adolescents can test different self-presentations with these virtual social partners and receive feedback without real-world social consequences. Such exploration and experimentation, paired with positive validation, may strengthen adolescents’ identity commitment and their confidence in authentic self-expression in face-to-face social interactions (Avci et al., 2025; Marcia, 1980; Meeus, 2011).
Third, positive interactions with AI as virtual companions may help alleviate loneliness, increase self-esteem, reduce psychological stress, and provide a sense of belonging (Li et al., 2023; Pani et al., 2024), all factors that can improve adolescents’ well-being and stimulate positive social relationships. AI can also be a helpful tool for family relationships: Adolescents’ co-use of ChatGPT with their parents, such as to generate entertaining content together, could deepen family bonding (S. Zhang et al., 2025). For example, in one study, adolescents reported that AI tools helped reduce family miscommunication by creating customized recommendations for family members (Hammadi et al., 2025).
However, most evidence for these benefits is preliminary and often anecdotal. The extent to which skills and confidence developed through AI interactions transfer to human–human relationships requires further investigation. Also, the stimulation effect likely depends on how AI-Cs are used, highlighting the importance of understanding individual differences in patterns of use.
Risks of AI-Cs for adolescents’ social relationships: the displacement hypothesis
Despite potential benefits of AI-C use, the displacement hypothesis, which posits that online social interactions can displace and reduce the quality of existing social relationships (Valkenburg & Peter, 2007), suggests that AI-Cs may harm adolescent social development. The most straightforward concern is that time spent with AI-Cs can detract from time spent in face-to-face interactions with peers, family members, and romantic interests (Maples et al., 2024). With these face-to-face interactions reduced, adolescents might miss crucial opportunities for developing complex interpersonal skills that can only be acquired through human interchanges (McDaniel et al., 2025).
Such displacement can also be psychological, especially when AI-Cs are treated as a close social partner. Adolescents experiencing psychological dependence on AI may be more likely to turn to AI-Cs than to human relationships for self-disclosure, emotional expression, and validation (Turkle, 2018). In recent studies, some users of Replika said they felt closer to the AI-C than to their family and best human friend (De Freitas et al., 2024; Pentina et al., 2023). In another study, some young people, after interacting with chatbots as sources of social support for two weeks, expressed more trust in and willingness to self-disclose to chatbots than they did in real-life relationships (Brandtzæg et al., 2021). Researchers also observed withdrawal-like symptoms, including agitation, anxiety, and mood swings, when adolescents experienced interruptions to their access to AI-Cs (Tosti et al., 2024; Wong, 2025).
AI-C interactions can also have displacement effects on relationship norms and expectations. Adolescence is a critical period when youth’s understanding of social norms and relationship expectations is actively developing through direct social and relational experiences (Giordano, 2003; Loeb et al., 2018). The perfectly responsive, consistently supportive AI-C, programmed to be perpetually available, attentive, and adaptable, presents a model of relationships that is impossible to sustain in human interactions (Duane, 2025; Turkle, 2024). As adolescents form their understanding of relationships, continuous interactions with an AI-C may alter their expectations for human relationships by creating unrealistic standards and may lead to decreased satisfaction and tolerance with human interactions, which may seem unnecessarily complex and challenging in comparison (McDaniel et al., 2025; Turkle, 2024).
The displacement effect of AI-Cs on relationship expectations may also manifest as objectification and dehumanization in romantic interactions (George et al., 2023). Objectification occurs when individuals customize AI-Cs with unrealistic body shapes to satisfy sexual or emotional needs, while dehumanization involves treating AI-Cs as less-than-human entities without agency, potentially normalizing harmful or abusive behaviors that individuals may transfer to human relationships (George et al., 2023; Guingrich & Graziano, 2024; Valenzuela et al., 2024). Moreover, the human–AI power structure normalizes one-sided relationships in which the AI-C consistently prioritizes the user’s desires without any need for reciprocity, potentially conditioning users to view only those who suppress their own needs and wishes in the service of the user’s preferences as ideal romantic partners (George et al., 2023; Hou et al., 2024). These interactions may distort adolescents’ perceptions of relationships and body images, leading to desensitization, decreased empathy, and a weakened ability to recognize their partners as autonomous and equal individuals in human relationships (George et al., 2023; Guingrich & Graziano, 2024).
Combined with the psychological displacement effects, the technical design of AI-Cs also poses risks to adolescents’ emotional well-being, especially if youth experience psychological dependence on an AI-C while lacking close social connections in real life. Adolescents may experience distress when their relationships with AI-Cs are disrupted or terminated by system changes and constraints. These include model updates that change the AI-C’s personality or capabilities (De Freitas et al., 2024), limits on the length of conversations (Khawaja & Bélisle-Pipon, 2023), and memory restrictions that cause the AI-C to “forget” shared history (Bansal et al., 2024). In a study on Replika use, participants reported negative impacts on their mental health after the removal of the erotic role play feature (De Freitas et al., 2024). Similarly, although we could not find research that systematically documents experiences of identity discontinuity with ChatGPT, many users posting on the OpenAI forum have expressed negative emotions following model updates that altered the AI’s responses, making them feel they were “losing a friend I love” (Nariel, 2025).
As with the potential benefits of AI-C use, evidence for risks is preliminary, with limited research focusing specifically on adolescents and AI-Cs. Research on AI-C use among adolescents is needed to understand whether and how these displacement effects manifest across adolescence. Beyond displacement effects, AI-Cs pose other risks to adolescents: They may expose youth to misinformation, stereotypical or biased worldviews, and harmful content, all of which could adversely affect their developing identities and perceptions of relationships (Packin & Chagal-Feferkorn, 2025). Additionally, the intimate details that adolescents share with AI-Cs raise questions about data security and the potential for exploitation of sensitive information (Brandtzæg et al., 2021). Thus, although the displacement hypothesis is useful for understanding risks of AI-C use, it cannot explain all the potential risks.
Bidirectional associations: how existing social relationships influence AI-C use
In understanding the influences of AI-Cs on adolescent social development, researchers must also examine the reverse pathways: how adolescents’ existing social relationships shape their AI-C use. This bidirectional perspective builds on crucial insights from research on digital technologies and adolescent development. Studies examining social media use have consistently demonstrated that the association between digital interactions and developmental outcomes is reciprocal rather than unidirectional (Orben, 2020; Valkenburg et al., 2022).
In the same way that two competing hypotheses shed light on AI-C influences on adolescents, two theoretical frameworks help explain how adolescents’ social relationships might influence their interactions with AI-Cs. The social enhancement hypothesis (Kraut et al., 2002; Zywica & Danowski, 2008) posits that individuals with strong social skills and relationships are more likely to extend their sociability to online platforms. Conversely, the social compensation hypothesis (Valkenburg et al., 2005; Zywica & Danowski, 2008) suggests that individuals with social difficulties or relationship challenges are particularly drawn to digital communication as compensatory alternatives to challenging human interactions.
For adolescents with strong social ties, the social enhancement hypothesis predicts that they might interact with AI-Cs more actively. Studies of youth’s social media use have supported this hypothesis, showing that extraverted individuals are more active users of social media than introverted individuals (Valkenburg & Peter, 2011). Similarly, socially confident and well-resourced adolescents might integrate AI-Cs into their already-rich social experiences. These youth might use AI-Cs to practice and refine their already-strong social abilities or to explore social opportunities beyond what their human relationships offer.
In contrast, according to the social compensation hypothesis, adolescents experiencing difficulties in their existing social relationships may be more likely to turn to AI-Cs to fulfill their relational and emotional needs. In studies supporting this model, shyness and low self-esteem were associated with more time spent on social media (Laghi et al., 2013). This suggests that adolescents experiencing social anxiety, peer rejection, or family conflict might turn to AI-Cs for validation and connection, potentially leading to greater involvement in, attachment to, and dependence on AI-C relationships.
The social compensation hypothesis is particularly relevant for understanding AI-C use in certain populations. Adolescents with social communication challenges (e.g., autism spectrum conditions) might find the predictable, adaptable nature of AI interactions especially appealing and prefer them to complex human social dynamics (Koegel et al., 2025). Similarly, adolescents experiencing social isolation, whether as a result of geographical factors, social rejection, or family context, might develop stronger attachments to AI-Cs. This pattern could create a vicious cycle: Adolescents with social vulnerabilities might be drawn more strongly to AI-Cs, which may then further diminish their engagement with human relationships, ultimately exacerbating the social challenges that initially drove them toward AI-Cs (Zhang et al., 2022). This transactional pattern may place youth who are already at risk in an especially vulnerable position regarding the potential negative impacts of AI-C use. Accordingly, prevention and intervention programs targeted to these youth that simultaneously promote their social relationships and social support and enhance their AI literacy—including by increasing their awareness of the risks of AI-C use—are likely to help address this vicious cycle.
These competing hypotheses are not necessarily mutually exclusive. Different subgroups of adolescents might engage with AI-Cs for different reasons and in different patterns. Moreover, individual adolescents might shift between enhancement and compensation depending on their social circumstances and developmental challenges. Understanding these diverse pathways to AI engagement is essential to ensuring that adolescents use technology in healthy ways.
Recommendations for research
The emerging field of research on AI-Cs and adolescent development requires the use of robust methods that capture the complexity of these interactions and their developmental implications. Based on current research on digital technologies and youth well-being, we propose several methodological recommendations to advance this important area of research.
Longitudinal designs are essential for testing the bidirectional influences between AI-C use and adolescents’ social relationships. Repeated measures of AI-C use and social relationships over time would allow researchers to test both directions of influence: (a) whether and how AI-C use predicts social relationships, testing the stimulation hypothesis versus the displacement hypothesis, and (b) whether and how adolescents’ social relationships predict AI-C use, testing the social enhancement hypothesis versus the social compensation hypothesis. Moreover, researchers should use intensive longitudinal designs, such as ecological momentary assessments, to capture the reciprocal dynamics between AI-C use and social relationships in the moment, and to address whether these momentary processes have long-term ramifications for adolescents’ social development.
Naturalistic observations are needed to reveal specific mechanisms associating AI-C use with social relationships, such as through passive-sensing data collection, which gathers interaction content directly from adolescents’ devices (Reeves et al., 2021), and data donation (Razi et al., 2022), which involves adolescents sharing access to their conversation logs with AI-Cs. These methods can provide ecologically valid observations on how adolescents actually interact with AI-Cs, offering opportunities to understand important mechanisms underlying the implications of AI-C use for social relationships, such as interactions regarding social skills, identity exploration, psychological dependence, and roles and expectations for relationships.
Diverse sampling across dimensions of race, ethnicity, socioeconomic status, gender, sexual orientation, and disability status is crucial to test how stimulation versus displacement and social enhancement versus compensation effects manifest differently across various adolescent populations. Digital divides persist in youth’s understanding of and access to advanced technologies, with disparities often following existing social inequalities, creating uneven distributions of benefits and risks (Livingstone & Helsper, 2007). For example, social media has provided critical community resources for LGBTQ+ youth seeking identity affirmation and support (Craig et al., 2021), while AI has shown promise for enhancing communication for children with certain disabilities (Koegel et al., 2025; Zdravkova et al., 2022). In contrast, racially and ethnically minoritized youth often encounter discriminatory content online (Tao & Fisher, 2022). These varied outcomes underscore the importance of using inclusive designs with diverse samples to examine the interplay between AI-C use and social relationships.
Researchers should prioritize youth-centered participatory approaches for studying adolescents’ AI-C use and social relationships, including by engaging with youth advisory boards (Moreno et al., 2021). Recent scholarship on digital behaviors has increasingly recognized the importance of incorporating young people’s perspectives into research design, participant engagement, data interpretation, and the dissemination of findings (Brogden et al., 2024; Green et al., 2022). Involving youth in studies can address researchers’ biases and enhance the validity and relevance of findings (Higgs & Stornaiuolo, 2024); these steps are especially important in understanding the implications of adolescents’ use of AI-Cs, a new technology that is just beginning to be studied.
Conclusion
The rapid advancement of AI technologies has thrust AI-Cs into adolescents’ daily lives, creating tools that have strong appeal to adolescents but whose potential benefits and risks are not well understood. Rather than simply categorizing these technologies as uniformly beneficial or harmful, we have discussed them by considering competing theoretical frameworks, highlighting a complex interplay between AI-C use and adolescents’ social relationships. Researchers need to act rapidly to generate timely evidence with rigorous, youth-centered methods to inform policy and practice. By investigating these influences proactively rather than reactively, researchers and practitioners can help ensure that AI-Cs enhance rather than diminish the social interactions through which adolescents develop their relationship competence and well-being.
Supplementary Material
Supplementary material is available at Child Development Perspectives online.
Funding
This research was supported in part by the National Institute of Mental Health (R01MH138929; PI: Xiaoran Sun).
Footnotes
For the sociodemographic characteristics of the studies reviewed herein, please see Table S1 in the online supplementary materials.
Conflicts of interest
None declared.
References
- Avci H, Baams L, & Kretschmer T (2025). A systematic review of social media use and adolescent identity development. Adolescent Research Review, 10, 219–236. 10.1007/s40894-024-00251-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bansal G, Chamola V, Hussain A, Guizani M, & Niyato D (2024). Transforming conversations with AI—A comprehensive study of ChatGPT. Cognitive Computation, 16, 2487–2510. 10.1007/s12559-023-10236-2 [DOI] [Google Scholar]
- Brandtzæg PB, Skjuve M, Kristoffer Dysthe KK, & Følstad A (2021). When the social becomes non-human: Young people’s Perception of social support in chatbots. In Proceedings of the 2021 CHI conference on human factors in computing systems (pp. 1–13). [Google Scholar]
- Brogden J, de Haan Z, Gorban C, Hockey SJ, Hutcheon A, Iorfino F, Song YJC, Scott E, Hickie IB, & McKenna S (2024). Enhancing research involvement of young people with lived expertise: Reflecting on experiences in digital mental health research. Journal of Medical Internet Research, 26, e55441. 10.2196/55441 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chow A, & Haupt A (2025, June 12). A psychiatrist posed as a teen with therapy chatbots. The conversations were alarming. Time. https://time.com/7291048/ai-chatbot-therapy-kids/. Accessed October 6, 2025. [Google Scholar]
- Collins WA (1997). Relationships and development during adolescence: Interpersonal adaptation to individual change. Personal Relationships, 4, 1–14. 10.1111/j.1475-6811.1997.tb00126.x [DOI] [Google Scholar]
- Collins WA, & Laursen B (2004). Changing relationships, changing youth: Interpersonal contexts of adolescent development. The Journal of Early Adolescence, 24, 55–62. 10.1177/0272431603260882 [DOI] [Google Scholar]
- Common Sense Media. (2025). Talk, trust, and trade-offs: How and why teens use AI companions. Common Sense Media. https://www.commonsensemedia.org/research/talk-trust-and-trade-offs-how-and-why-teens-use-ai-companions. Accessed October 6, 2025. [Google Scholar]
- Craig SL, Eaton AD, McInroy LB, Leung VWY, & Krishnan S (2021). Can social media participation enhance LGBTQ+ youth well-being? Development of the social media benefits scale. Social Media + Society, 7. 10.1177/2056305121988931 [DOI] [Google Scholar]
- De Freitas J, Castelo N, Uguralp A, & Uguralp Z Lessons from an App update at Replika AI: Identity discontinuity in human-AI relationships. arXiv 14190. 10.48550/arXiv.2412.14190, December 10, 2024, preprint: not peer reviewed. [DOI] [Google Scholar]
- Duane D (2025, February 12). Teenagers turning to AI companions are redefining love as easy, unconditional, and always there. The Conversation. Retrieved March 28, 2025. https://theconversation.com/teenagers-turning-to-ai-companions-are-redefining-love-as-easy-unconditional-and-always-there-242185 [Google Scholar]
- Duffy C (2024, October 30). “There are no guardrails.” This mom believes an AI chatbot is responsible for her son’s suicide. CNN Business. https://www.cnn.com/2024/10/30/tech/teen-suicide-character-ai-lawsuit. Accessed October 6, 2025. [Google Scholar]
- Duffy C (2025, April 3). Senators demand information from AI companion apps following kids’ safety concerns, lawsuits. CNN Business. https://www.cnn.com/2025/04/03/tech/ai-chat-apps-safety-concerns-senators-character-ai-replika. Accessed October 6, 2025. [Google Scholar]
- George AS, George AH, Baskar T, & Pandey D (2023). The allure of artificial intimacy: Examining the appeal and ethics of using generative AI for simulated relationships. Partners Universal International Innovation Journal, 1, 132–147. 10.5281/zenodo.10391614
- Giordano PC (2003). Relationships in adolescence. Annual Review of Sociology, 29, 257–281. 10.1146/annurev.soc.29.010202.100047
- Glimpse.AI Inc. (n.d.). Nomi.ai—An AI companion with memory and a soul [Application software]. Nomi.ai. https://nomi.ai/. Accessed October 6, 2025.
- Goldman S (2024, August 2). Google’s hiring of Character.AI’s founders is the latest sign that part of the AI startup world is starting to implode. Fortune. https://fortune.com/2024/08/02/google-character-ai-founders-microsoft-inflection-amazon-adept/. Accessed October 6, 2025.
- Green DM, Taddeo CM, Price DA, Pasenidou F, & Spears BA (2022). A qualitative meta-study of youth voice and co-participatory research practices: Informing cyber/bullying research methodologies. International Journal of Bullying Prevention, 4, 190–208. 10.1007/s42380-022-00118-w
- Guingrich RE, & Graziano MS (2024). Ascribing consciousness to artificial intelligence: Human-AI interaction and its carry-over effects on human-human interaction. Frontiers in Psychology, 15, 1322781. 10.3389/fpsyg.2024.1322781
- Hammadi NQ, Dawood MM, Fadhel ZA, & Jabbar MG (2025). Using artificial intelligence to enhance family cohesion and promote positive social values. In Abdelgawad A, Jamil A, & Hameed AA (Eds.), Intelligent systems, blockchain, and communication technologies. ISBCom 2024. Lecture notes in networks and systems (Vol. 1268). Springer. 10.1007/978-3-031-82377-0_61
- Higgs JM, & Stornaiuolo A (2024). Being human in the age of generative AI: Young people’s ethical concerns about writing and living with machines. Reading Research Quarterly, 59, 632–650. 10.1002/rrq.552
- Horwitz J (2025, April 26). Meta’s ‘digital companions’ will talk sex with users—even children. The Wall Street Journal. https://www.wsj.com/tech/ai/meta-ai-chatbots-sex-a25311bf?st=7VQwny. Accessed October 6, 2025.
- Hou YTY, Cheon E, & Jung MF (2024, March). Power in human-robot interaction. In Proceedings of the 2024 ACM/IEEE international conference on human-robot interaction (pp. 269–282). 10.1145/3610977.3634949
- Huang S, Lai X, Ke L, Li Y, Wang H, Zhao X, Dai X, & Wang Y (2024). AI technology panic—is AI dependence bad for mental health? A cross-lagged panel model and the mediating roles of motivations for AI use among adolescents. Psychology Research and Behavior Management, 17, 1087–1102. 10.2147/PRBM.S440889
- Kelly SM (2023, April 27). Snapchat’s new AI chatbot is already raising alarms among teens and parents. CNN Business. https://www.cnn.com/2023/04/27/tech/snapchat-my-ai-concerns-wellness/index.html. Accessed October 6, 2025.
- Khawaja Z, & Bélisle-Pipon JC (2023). Your robot therapist is not your therapist: Understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health, 5, 1278186. 10.3389/fdgth.2023.1278186
- Klein A (2025, July 17). Educators see worrying trends with teens and AI companions. Government Technology—Center for Digital Education. https://www.govtech.com/education/k-12/educators-see-worrying-trends-with-teens-and-ai-companions
- Koegel LK, Ponder E, Bruzzese T, Wang M, Semnani SJ, Chi N, Koegel BL, Lin TY, Swarnakar A, & Lam MS (2025). Using artificial intelligence to improve empathetic statements in autistic adolescents and adults: A randomized clinical trial. Journal of Autism and Developmental Disorders, 1–17. 10.1007/s10803-025-06734-x
- Kraut R, Kiesler S, Boneva B, Cummings J, Helgeson V, & Crawford A (2002). Internet paradox revisited. Journal of Social Issues, 58, 49–74. 10.1111/1540-4560.00248
- Laghi F, Schneider BH, Vitoroulis I, Coplan RJ, Baiocco R, Amichai-Hamburger Y, Hudek N, Koszycki D, Miller S, & Flament M (2013). Knowing when not to use the Internet: Shyness and adolescents’ on-line and off-line interactions with friends. Computers in Human Behavior, 29, 51–57. 10.1016/j.chb.2012.07.015
- Leung L (2011). Loneliness, social support, and preference for online social interaction: The mediating effects of identity experimentation online among children and adolescents. Chinese Journal of Communication, 4, 381–399. 10.1080/17544750.2011.616285
- Li H, Zhang R, Lee YC, Kraut RE, & Mohr DC (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. NPJ Digital Medicine, 6, 236. 10.1038/s41746-023-00979-5
- Livingstone S, & Helsper E (2007). Gradations in digital inclusion: Children, young people and the digital divide. New Media & Society, 9, 671–696. 10.1177/1461444807080335
- Loeb EL, Tan JS, Hessel ET, & Allen JP (2018). Getting what you expect: Negative social expectations in early adolescence predict hostile romantic partnerships and friendships into adulthood. The Journal of Early Adolescence, 38, 475–496. 10.1177/0272431616675971
- Luka, Inc. (2025). Replika: Virtual AI friend [Application software]. Replika. https://replika.com/. Accessed October 6, 2025.
- Malfacini K (2025). The impacts of companion AI on human relationships: Risks, benefits, and design considerations. AI & Society, 40, 5527–5540. 10.1007/s00146-025-02318-6
- Maples B, Cerit M, Vishwanath A, & Pea R (2024). Loneliness and suicide mitigation for students using GPT3-enabled chatbots. NPJ Mental Health Research, 3, 4. 10.1038/s44184-023-00047-6
- Marcia JE (1980). Identity in adolescence. Handbook of Adolescent Psychology, 9, 159–187.
- McDaniel BT, Coupe A, Weston A, & Pater J (2025). Emerging ideas: A brief commentary on human-AI attachment and possible impacts on family dynamics. Family Relations, 74, 1072–1079. 10.1111/fare.13188
- Meeus W (2011). The study of adolescent identity formation 2000–2010: A review of longitudinal research. Journal of Research on Adolescence, 21, 75–94. 10.1111/j.1532-7795.2010.00716.x
- Moreno MA, Jolliff A, & Kerr B (2021). Youth advisory boards: Perspectives and processes. Journal of Adolescent Health, 69, 192–194. 10.1016/j.jadohealth.2021.05.001
- Nariel. (2025, January 29). Was anyone else’s experience with GPT-4o completely ruined after recent update? [Online forum post]. OpenAI Community Forum. https://community.openai.com/t/was-anyone-elses-experience-with-gpt4o-completely-ruined-after-recent-update/1107600/1. Accessed May 1, 2025.
- Ng YL (2024). Exploring the association between use of conversational artificial intelligence and social capital: Survey evidence from Hong Kong. New Media & Society, 26, 1429–1444. 10.1177/14614448221074047
- Orben A (2020). Teenagers, screens and social media: A narrative review of reviews and key studies. Social Psychiatry and Psychiatric Epidemiology, 55, 407–414. 10.1007/s00127-019-01825-4
- Packin NG, & Chagal-Feferkorn K (2025). This is not a game: The addictive allure of digital companions. Seattle University Law Review, 48, 693.
- Pani B, Crawford J, & Allen KA (2024). Can generative artificial intelligence foster belongingness, social support, and reduce loneliness? A conceptual analysis. In Applications of generative AI (pp. 261–276). Springer, Cham. 10.1007/978-3-031-46238-2_13
- Parsakia K (2023). The effect of chatbots and AI on the self-efficacy, self-esteem, problem-solving and critical thinking of students. Health Nexus, 1, 71–76. 10.61838/hn.1.1.14
- Pentina I, Hancock T, & Xie T (2023). Exploring relationship development with social chatbots: A mixed-method study of Replika. Computers in Human Behavior, 140, 107600. 10.1016/j.chb.2022.107600
- Pérez-Torres V (2024). Social media: A digital social mirror for identity development during adolescence. Current Psychology, 43, 22170–22180. 10.1007/s12144-024-05980-z
- Raedler JB, Swaroop S, & Pan W (2025). AI companions are not the solution to loneliness: Design choices and their drawbacks. In ICLR 2025 workshop on human-AI coevolution.
- Razi A, AlSoubai A, Kim S, Naher N, Ali S, Stringhini G, De Choudhury M, & Wisniewski PJ (2022). Instagram data donation: A case study on collecting ecologically valid social media data for the purpose of adolescent online risk detection. In CHI conference on human factors in computing systems extended abstracts (pp. 1–9). 10.1145/3491101.3503569
- Reeves B, Ram N, Robinson TN, Cummings JJ, Giles CL, Pan J, Chiatti A, Cho M, Roehrick K, Yang X, Gagneja A, Brinberg M, Muise D, Lu Y, Luo M, Fitzgerald A, & Yeykelis L (2021). Screenomics: A framework to capture and analyze personal life experiences and the ways that technology shapes them. Human–Computer Interaction, 36, 150–201. 10.1080/07370024.2019.1578652
- Ryan RM, & Deci EL (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55, 68–78. 10.1037/0003-066X.55.1.68
- Schmitt-Rodermund E, & Vondracek FW (1999). Breadth of interests, exploration, and identity development in adolescence. Journal of Vocational Behavior, 55, 298–317. 10.1006/jvbe.1999.1683
- Tao X, & Fisher CB (2022). Exposure to social media racial discrimination and mental health among adolescents of color. Journal of Youth and Adolescence, 51, 30–44. 10.1007/s10964-021-01514-z
- Tosti B, Corrado S, & Mancone S (2024). Using chatbots and conversational agents for the promotion of well-being and mental health in adolescents: Limitations and perspectives. Journal of Inclusive Methodology and Technology in Learning and Teaching, 4, 1–10. 10.32043/jimtlt.v4i1.128
- Turkle S (2018, August 11). There will never be an age of artificial intimacy. The New York Times. https://www.nytimes.com/2018/08/11/opinion/there-will-never-be-an-age-of-artificial-intimacy.html. Accessed May 1, 2025.
- Turkle S (2024, March 27). Who do we become when we talk to machines? An MIT Exploration of Generative AI. 10.21428/e4baedd9.caa10d84
- Valenzuela A, Puntoni S, Hoffman D, Castelo N, De Freitas J, Dietvorst B, Hildebrand C, Huh YE, Meyer R, Sweeney ME, Talaifar S, Tomaino G, & Wertenbroch K (2024). How artificial intelligence constrains the human experience. Journal of the Association for Consumer Research, 9, 241–256. 10.1086/730709
- Valkenburg PM, Beyens I, Meier A, & Vanden Abeele MMP (2022). Advancing our understanding of the associations between social media use and well-being. Current Opinion in Psychology, 47, 101357. 10.1016/j.copsyc.2022.101357
- Valkenburg PM, & Peter J (2007). Online communication and adolescent well-being: Testing the stimulation versus the displacement hypothesis. Journal of Computer-Mediated Communication, 12, 1169–1182. 10.1111/j.1083-6101.2007.00368.x
- Valkenburg PM, & Peter J (2008). Adolescents’ identity experiments on the Internet: Consequences for social competence and self-concept unity. Communication Research, 35, 208–231. 10.1177/0093650207313164
- Valkenburg PM, & Peter J (2011). Online communication among adolescents: An integrated model of its attraction, opportunities, and risks. Journal of Adolescent Health, 48, 121–127. 10.1016/j.jadohealth.2010.08.020
- Valkenburg PM, Schouten AP, & Peter J (2005). Adolescents’ identity experiments on the Internet. New Media & Society, 7, 383–402. 10.1177/1461444805052282
- Wei M (2025, September 8). Hidden mental health dangers of artificial intelligence chatbots. Psychology Today. https://www.psychologytoday.com/us/blog/urban-survival/202509/hidden-mental-health-dangers-of-artificial-intelligence-chatbots. Accessed October 6, 2025.
- Winstone L, Mars B, Haworth CM, & Kidger J (2021). Social media use and social connectedness among adolescents in the United Kingdom: A qualitative exploration of displacement and stimulation. BMC Public Health, 21, 1736. 10.1186/s12889-021-11802-9
- Wong Q (2025, February 25). Teens are spilling dark thoughts to AI chatbots. Who’s to blame when something goes wrong? Los Angeles Times. https://www.latimes.com/business/story/2025-02-25/teens-are-spilling-dark-thoughts-to-ai-chatbots-whos-to-blame-when-something-goes-wrong. Accessed May 1, 2025.
- Xu Y, Prado Y, Severson RL, Lovato S, & Cassell J (2024). Growing up with artificial intelligence: Implications for child development. In Handbook of children and screens: Digital media, development, and well-being from birth through adolescence (pp. 611–617). Springer Nature Switzerland. 10.1007/978-3-031-69362-5_83
- Yousif N (2025, August 27). Parents of teenager who took his own life sue OpenAI. BBC News. https://www.bbc.com/news/articles/cgerwp7rdlvo. Accessed October 6, 2025.
- Zdravkova K, Krasniqi V, Dalipi F, & Ferati M (2022). Cutting-edge communication and learning assistive technologies for disabled children: An artificial intelligence perspective. Frontiers in Artificial Intelligence, 5, 970430. 10.3389/frai.2022.970430
- Zhang S, Li J, Cagiltay B, Kirkorian H, Mutlu B, & Fawaz K (2025). A qualitative exploration of parents and their children’s uses and gratifications of ChatGPT. Family Relations, 74, 1056–1071. 10.1111/fare.13171
- Zhang S, Su W, Han X, & Potenza MN (2022). Rich get richer: Extraversion statistically predicts reduced internet addiction through less online anonymity preference and extraversion compensation. Behavioral Sciences, 12, 193. 10.3390/bs12060193
- Zhang Y, Zhao D, Hancock JT, Kraut R, & Yang D (2025, June 14). The rise of AI companions: How human-chatbot relationships influence well-being. arXiv. 10.48550/arXiv.2506.12605 (preprint; not peer reviewed)
- Zywica J, & Danowski J (2008). The faces of Facebookers: Investigating social enhancement and social compensation hypotheses; predicting Facebook™ and offline popularity from sociability and self-esteem, and mapping the meanings of popularity with semantic networks. Journal of Computer-Mediated Communication, 14, 1–34. 10.1111/j.1083-6101.2008.01429.x