Abstract
Loneliness, social isolation, and anxiety affect millions of people across the world. Communication technologies, including artificial intelligence (AI) chatbots, can potentially support those experiencing mental health challenges by providing companionship and emotional support. Specifically, social AI can mimic human interaction, which may help alleviate loneliness and anxiety through person‐centered messaging. Despite growing AI usage, there is limited research on the effectiveness of specific message types in this context. Thus, this study employed a 2 (person‐centered message: high vs. low) × 2 (context: loneliness vs. anxiety) between‐subjects design to test how different supportive messages from social AI chatbots impact subsequent outcomes. Results revealed that high person‐centered messages are associated with increased emotional validation. Furthermore, the quality of social support and interpersonal warmth (IW) mediated the relationship between high person‐centered messages and emotional validation. Finally, the mediation effect between high person‐centered messages and emotional validation via the quality of emotional support was moderated by social presence, but not the mediation effect via IW. These results demonstrate the importance of developing social AI chatbots that employ messages high in person‐centeredness, as these messages are most important for addressing mental health concerns.
Keywords: anxiety, artificial intelligence (AI), emotional validation, interpersonal warmth (IW), loneliness, person‐centered messages, social support
INTRODUCTION
Loneliness and anxiety are common mental health challenges for many people. Indeed, loneliness is a health epidemic that affects many US Americans, with 17% or 44 million experiencing significant loneliness daily, 1 and many more experiencing some symptoms of loneliness periodically. Similarly, 19% of US Americans have an anxiety disorder, 2 and one‐third of US Americans report feeling anxious or depressed. 3
Increasingly, communication technologies are considered useful tools for remedying experiences of loneliness and anxiety. From the perspective of human–machine communication (HMC), researchers have examined the use of artificial intelligence (AI) chatbots and companions and their overall effects on mental health concerns. In recent years, technological advancements have made it possible for AI to meaningfully interact with humans in a social manner, 4 provide companionship, 5 and offer guidance in difficult times. 6 These AI technologies are often referred to as social AI or social robots. 7 Unlike traditional AI systems that focus on specific tasks or problem‐solving, social AI can engage in natural and meaningful interactions with people, often mimicking human social behaviors and communication.
Researchers have already acknowledged the positive role that social AI can play in addressing loneliness 5 , 8 and anxiety, 9 as they can provide emotionally supportive messages to humans and contribute to a sense of connection and belonging. Though social AI can provide emotional support generally, it is also important to investigate the types of messages that AI can provide to humans. A meta‐analysis found that messages higher in person‐centeredness were perceived to be more supportive than messages lower in person‐centeredness. 10 That is, messages that are more tailored to focus on the person's needs, perspectives, and preferences are perceived as more supportive than messages that are not. As AI continues to advance, it is important for developers to understand the impact that person‐centered messages can have on recipients.
Although past research has extensively examined the role of person‐centered support messages in a human–human communication context, it is unclear whether those findings replicate in an HMC context. Moreover, although loneliness, anxiety, and social support are extensively studied in the context of social AI, most of those studies rely either on qualitative data 11 or indirect data through analysis of forum posts 12 and focus on emotional support generally rather than on specific message characteristics, 13 or a combination thereof. 5 Thus, the current study investigates how humans respond to different types of supportive messages from social AI in regard to loneliness and anxiety. The purpose of this study is to examine ways in which social AI can provide social support comparable to that of human providers in mental health–related contexts and unpack the impact message content may have on subsequent responses and perceptions of those messages.
Mitigating the loneliness epidemic and rise of anxiety
US and international organizations have described loneliness and its impact on general well‐being as a public health crisis 14 and an epidemic. 15 This loneliness epidemic and the rise of anxiety in the United States and globally continue to impact people. In fact, reports document how 58% of US adults consider themselves lonely, 16 and problematic levels of loneliness are a persistent experience for a substantial portion of the population in more than 100 countries. 17 The effects of anxiety and loneliness are well documented and striking. 18 Studies have observed that people experiencing loneliness or anxiety face an increased risk of heart disease, stroke, and dementia, as well as generally higher mortality. 19 , 20 Loneliness is also a specific risk factor for certain depressive symptoms, 21 underscoring the need for holistic and comprehensive interventions to address loneliness. 22
Conversely, social connections and relationships with others have been most strongly associated with reducing loneliness, in addition to a variety of other behavioral, structural, demographic, and cognitive factors. 15 , 22 , 23 As AI increasingly impacts every aspect of everyday life, technology proponents and industry members emphasize social AI's potential to mitigate the loneliness epidemic and the rise of anxiety. Others warn of the potential ethical repercussions, offer policy recommendations, and are generally more skeptical about whether AI will reduce people's anxiety and loneliness. 24 A recent systematic literature review documents the exponential growth of research on social AI, highlighting avenues for research examining the positive effects of those technologies on humans. 25 One such way that AI can mitigate the loneliness epidemic and the rise of anxiety is by providing social support to those in need.
Social support
Social support is “the perception or experience that one is cared for, esteemed, and part of a mutually supportive social network” (p. 192). 26 This can be enacted in various ways, whether tangible, such as financial aid or practical assistance, or intangible, such as emotional comfort and reassurance. Overall, social support is vital in helping people cope with stress, navigate challenges, and maintain both physical and mental well‐being. 27 , 28 , 29 , 30
Among the various types of support, emotional support is well‐studied in communication research. Indeed, researchers state that emotional support is one of the most valuable 31 and most common types of social support. 32 Emotional support refers to any expression of affect and empathy. 33 , 34 This type of social support helps people feel valued and understood by fostering a sense of belonging and reducing feelings of anxiety, loneliness, or isolation. It can play a critical role in buffering against the negative effects of stress, enhancing psychological resilience, and promoting overall well‐being. 35 There are various ways to provide emotional support, including listening to the support seeker, validating their feelings, and being empathetic to their situation.
Social AI as a source of social support
Although a variety of actors can function as a source of social support, recent advancements in technology have made it possible for social AI to do so as well. AI tools and systems can specifically function as a source of emotional support by providing users with immediate, personalized assistance and emotional comfort, especially in situations where human support may not be readily available. 36 Through advanced algorithms, machine learning (ML), and natural language processing, AI systems can engage in conversations that mimic human interaction, offering empathy, encouragement, and practical advice. 37 Indeed, previous research has already documented the potential impact of AI companions and chatbots serving as a source of social support, 5 , 6 , 8 , 9 , 13 and participants in one study rated humans, social robots, and AI similarly on impressions of social support. 38
AI‐driven interactions can also be valuable for people facing mental health challenges, offering a sense of companionship that helps alleviate anxiety and loneliness. 8 , 9 , 37 Longitudinal studies further support the effectiveness of AI companions in reducing feelings of loneliness among users. 39 Additionally, perceived supportiveness significantly mediates the relationship between an AI chatbot providing emotional support and reductions in both stress and worry. 6
Although AI cannot replace the depth of human relationships, it serves as a complementary support system, filling gaps in social support networks and making emotional assistance more accessible to a broader audience. As AI technologies continue to evolve, their role as a source of social support is likely to expand, offering increasingly sophisticated and responsive forms of assistance. Thus, it is important to focus on the messages they will exchange with support seekers, as these messages will ultimately impact the quality of support social AI actors can provide.
Person‐centeredness
Person‐centered messages are a form of social support that emphasizes understanding, empathy, and validation of the other person's feelings, needs, and perspectives. 10 , 40 These messages are characterized by active listening, non‐judgmentalism, and a focus on providing emotional and/or practical support. Person‐centered messages are particularly important for the provision of effective emotional support, 10 as the degree of person‐centeredness shapes how effective emotional support messages are. 41
Hierarchies and conceptualizations of person‐centered messages distinguish between low, moderate, and high person‐centered messages. 42 , 43 Low person‐centered messages are those that may ignore, invalidate, or question the support seeker's feelings and emotions. 42 , 43 By failing to acknowledge the emotional needs of the support seeker, these messages often leave them misunderstood or dismissed, exacerbating feelings of distress and hindering the effectiveness of emotional support. Moderate person‐centered messages attempt to reframe a stressful situation or to divert the support seeker's attention away from the stressor. 42 , 43 Although these messages acknowledge the support seeker's feelings, they do not allow them to understand or elaborate upon their emotions further, nor do they offer assistance on how to cope with those feelings. High person‐centered messages acknowledge, validate, elaborate, and explore the support seeker's feelings and emotions. 42 , 43 Thus, these messages assist support seekers in gaining perspective on their emotions, and they are important for effective emotional support.
The impact of person‐centeredness on outcome variables is significant. Higher person‐centered messages are rated as more comforting than lower person‐centered messages. 44 Additionally, one meta‐analysis found that person‐centeredness was positively associated with both perceived and actual effectiveness across 23 studies. 10 Furthermore, people who receive person‐centered messages are more likely to experience improved emotional well‐being, as they feel understood and supported. 45 This can lead to reduced anxiety, increased satisfaction with the interaction, and a stronger sense of connection with the person providing the support. Research has also found longitudinal support for the impact that person‐centered messages have on recipients. 46 Thus, the positive effects of person‐centered communication extend to various relational and psychological outcomes, making it a crucial element in effective and compassionate interaction. Based on the aforementioned literature, we propose the following hypothesis:
H1. Participants exposed to high person‐centered messages will report greater emotional validation than participants exposed to low person‐centered messages.
Quality of social support
Although the role of person‐centered messages has been well studied in the literature on social support in a human–human context, the mechanisms driving their influence require additional investigation, especially in a human–machine context. In this study, we also examine the quality of social support (QSS) and interpersonal warmth (IW) as mediating variables. The QSS refers to the effectiveness and adequacy of messages designed to assist someone. 47 , 48 The effectiveness of social support depends not only on its availability but also on how well it matches the recipient's needs and preferences. A well‐provided support system can alleviate stress, improve mental health, and foster resilience, enabling people to navigate challenges more effectively. However, the provision of support must be mindful of the recipient's context and emotional state, as inappropriate or unsolicited help can sometimes lead to feelings of inadequacy. Therefore, the thoughtful and sensitive provision of social support is essential in promoting well‐being and sustaining healthy relationships.
The QSS is important in determining the impact that messages have on well‐being. 49 High QSS is characterized by support that is appropriate to the person's needs, delivered in a timely manner, and emotionally attuned on the part of the support provider. 48 Indeed, high QSS is associated with positive effects on physical and mental health and well‐being, enhanced coping mechanisms, reduced stress levels, and an increase in feelings of intimacy. 27 , 44 , 49 , 50 , 51 Furthermore, researchers found that more supportive comments led participants to perceive the support provider more positively. 52 Conversely, low QSS, which might be inconsistent, inappropriate, or lacking in empathy, can lead to feelings of frustration, increased stress, and even strain on relationships. Thus, the quality of support plays a critical role in its effectiveness and the positive outcomes associated with it.
Person‐centered messages are closely tied to the QSS provided, as they directly impact how well the support seeker feels understood and validated. High person‐centered messages tend to enhance the effectiveness of social support by fostering a deeper connection and trust between the parties involved. Furthermore, high QSS and emotional validation are ultimately important for improved psychological well‐being. 40 Conversely, low person‐centered messages can undermine the QSS, leaving the person feeling neglected or invalidated, which can diminish the overall impact of the support offered. Thus, QSS may function as a mechanism that explains the relationship between person‐centered messages and emotional validation.
Interpersonal warmth
IW is the quality of expressing genuine kindness, empathy, and care in interactions with others. 53 It involves behaviors and communications that make others feel valued, respected, and understood. IW is conveyed through verbal and nonverbal cues, such as a friendly tone of voice, attentive listening, supportive language, and open body language. The presence of IW in relationships fosters trust and connection, making people feel safe and comfortable in expressing themselves. 54
Various characteristics of AI chatbots can impact how warm users perceive them to be. According to some researchers, perceptions of IW among AI actors depend on human–AI interdependence. 55 Specifically, when AI actors optimize for interests that align with those of humans, the AI actors are perceived as warmer. Indeed, AI actors that express appropriate emotions are perceived as warmer and more believable than AI actors that express inappropriate emotions. 56 Thus, social AI that uses high person‐centered messages, which generally have the support seeker's best interests in mind by validating their feelings and emotions, is likely to be perceived as warmer than social AI that uses low person‐centered messages.
IW also predicts various outcomes related to AI chatbots. Generally speaking, IW is positively related to overall receptivity to AI. 57 AI chatbots perceived as greater in IW generate more user engagement and greater levels of persuasiveness than AI chatbots lower in IW. 58 However, one study found no significant mediation effects for IW between various sources of social support (human, expert bot, and novice bot) and emotional validation. 13 Instead, the authors found that IW significantly mediated the relationship between a human versus a novice bot and a reduction in emotional stress. Given that greater levels of IW are associated with more positive perceptions of social AI, 8 it is possible that IW can still impact the support seeker's perceptions of emotional validation. Based on the aforementioned literature, we propose the following hypothesis:
H2. The effect of high person‐centered messages on emotional validation will be mediated by (a) the quality of social support and (b) interpersonal warmth.
Social presence as a moderator
Social presence is “a psychological state in which virtual (para‐authentic or artificial) social actors are experienced as actual social actors in either sensory or nonsensory ways” (p. 45). 59 Generally speaking, social presence is the feeling of being connected to another social actor. Social presence is crucial in fostering meaningful and effective communication, as it enhances the sense of closeness and immediacy between people. Greater levels of social presence can lead to stronger relationships, increased trust, and more productive interactions, as people feel more connected and engaged. 60 , 61
Social presence can be induced by several factors, including technology‐related factors, user factors, and social factors. 62 , 63 Thus, it is likely that the characteristics of social AI can induce different levels of social presence. Given that high person‐centered messages are meant to connect with the support seeker and validate their feelings, 42 , 43 it is likely that these messages will induce greater levels of social presence than low person‐centered messages.
Furthermore, several studies have investigated the effects of social presence as a moderating variable between various technologies and outcome variables. For example, one study found that social presence significantly moderated the relationship between feelings of loneliness and online parasocial relationships with celebrities. 64 More germane to the current study, researchers found that social presence was a significant moderator between the embodiment of an AI companion and both the perceived usefulness of an AI companion and the willingness to recommend an AI companion to a lonely person. 8 Therefore, we propose this final hypothesis:
H3. The mediated effect of high person‐centered messages on emotional validation via (a) the quality of social support and (b) interpersonal warmth will be moderated by social presence.
MATERIALS AND METHODS
To test the proposed hypotheses, we conducted a 2 (person‐centeredness: low or high) × 2 (context: anxiety or loneliness) between‐subjects experiment. We utilized a stimulus sampling approach by addressing two different mental health contexts, anxiety and loneliness, to ensure that our findings are generalizable across health topics. All participants interacted with a custom‐built AI chatbot.
Chatbot design
In this study, we used Voiceflow, a service that enables the custom development of AI agents through visual scripting, following either a rule‐based or an ML design. Under an ML design, chatbots’ responses to user prompts tend to vary across participants, as the responses are tailored to each user, intention, and context. 65 This variation in messages can raise concerns about internal validity, as the messages are not consistent across experimental conditions. In light of this concern, we designed the chatbot using a rule‐based approach, where the chatbot responses are strictly drawn from a predefined script, 65 ensuring consistent responses across all conditions and variation only in the intended manipulations.
The conversation alternated between the participants and the chatbot, for a total of eight turns. The person‐centered message manipulations appeared during the sixth, seventh, and eighth turns, whereas the context manipulations appeared during the first and fifth turns. We developed four different versions of the chatbot following the 2 × 2 design. All scripts are available on our OSF page [https://osf.io/gz2a4/].
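To illustrate the rule‐based turn logic, a minimal sketch in R is below. The actual chatbot was configured in Voiceflow's visual scripting rather than in code, so this is purely illustrative: the function names are ours, and the message text is abridged or replaced with placeholders (full scripts are on the OSF page).

```r
# Minimal sketch of the rule-based design (illustrative only).
fixed_turns <- c("[turn 2: rapport question]",
                 "[turn 3: follow-up question]",
                 "[turn 4: prompt to describe the experience]")

context_line <- function(context) {
  # Appears at turns 1 and 5; carries the context manipulation.
  sprintf("I'm Charlie, an AI chatbot designed to assist with individuals experiencing %s.",
          context)
}

pc_turns <- list(  # Turns 6-8 carry the person-centeredness manipulation.
  low  = c("It's really stupid to feel bad...", "[low-PC turn 7]", "[low-PC turn 8]"),
  high = c("I'm terribly sorry you experienced this...", "[high-PC turn 7]", "[high-PC turn 8]")
)

bot_reply <- function(turn, pc = c("low", "high"),
                      context = c("loneliness", "anxiety")) {
  pc <- match.arg(pc)
  context <- match.arg(context)
  if (turn %in% c(1, 5)) context_line(context)    # context manipulation
  else if (turn >= 6)    pc_turns[[pc]][turn - 5] # person-centeredness manipulation
  else                   fixed_turns[turn - 1]    # identical across conditions
}

bot_reply(6, pc = "high", context = "loneliness")
```

Because every turn is drawn from this fixed lookup, two participants in the same condition always see identical bot messages, which is the internal-validity rationale for the rule-based design.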
Manipulations
Person‐centeredness
Informed by prior research on crafting person‐centered messages that comfort and validate the emotional states of vulnerable individuals, 42 , 66 we developed two scripts that reflect low person‐centeredness (Level 1) and high person‐centeredness (Level 9). Level 1 messages trivialize and invalidate individuals’ feelings and experiences (e.g., “It's really stupid to feel bad. You're an adult now and should realize these things happen.”). 43 Level 9 messages, on the other hand, recognize and affirm their feelings by elaborating on their situation (e.g., “I'm terribly sorry you experienced this. It's very upsetting to hear about!”). 43 Flesch–Kincaid readability scores for the two scripts were 79.4 (grade level 4.2) for the low person‐centeredness messages and 75.0 (grade level 4.7) for the high person‐centeredness messages. AI responses and queries across both conditions were of similar length (∼300 words).a
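For reference, the scores above follow the standard Flesch reading‐ease and Flesch–Kincaid grade‐level formulas:

$$\text{Reading Ease} = 206.835 - 1.015\left(\frac{\text{total words}}{\text{total sentences}}\right) - 84.6\left(\frac{\text{total syllables}}{\text{total words}}\right)$$

$$\text{Grade Level} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59$$

Higher reading‐ease values (and lower grade levels) indicate simpler text, so both scripts sit comfortably in the easy‐to‐read range.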
Context
Context was manipulated by having the chatbot identify itself as either “an AI chatbot designed to assist with individuals experiencing loneliness” or “an AI chatbot designed to assist with individuals experiencing anxiety.”
Pilot study
To ensure that our manipulations were successful, we conducted a pilot test using a sample recruited from Prolific, an online service for participant recruitment. Participants (N = 96), ranging in age from 19 to 69 years (M age = 35.31, SD age = 11.04) and comprising 57% women, 35% men, and 7% who identified with another gender identity,b were informed that they would interact with an AI chatbot named Charlie that was developed by the University of Cincinnati. The person‐centered manipulation was successful (t(94) = 11.15, p < 0.001, d = 1.43), with messages in the high condition perceived to be more supportive (M = 5.38, SD = 1.25) than those in the low condition (M = 2.11, SD = 1.62). Similarly, the context manipulation was successful, with all but one participant correctly identifying the health context, χ²(1, N = 96) = 92.07, p < 0.001, V = 0.98. In light of these results, no modifications were made, and we proceeded with the main study.
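The pilot checks can be reproduced with standard R tests. The sketch below assumes a data frame `pilot` with hypothetical variable names; it is not the authors' published script.

```r
# Person-centeredness check: perceived supportiveness by condition
# (reported above as t(94) = 11.15, d = 1.43).
t.test(supportive ~ pc_condition, data = pilot, var.equal = TRUE)

# Context check: assigned vs. identified health context, with Cramer's V
# computed from the chi-square statistic.
tab <- table(pilot$assigned_context, pilot$identified_context)
chi <- chisq.test(tab)
sqrt(unname(chi$statistic) / (sum(tab) * (min(dim(tab)) - 1)))  # Cramer's V
```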
Main study
Sample
Participants were recruited from Prolific, all residing in the United States. Originally, 256 participants responded to our survey. After removing participants for failing at least one attention check, failing the manipulation check, or not completing the survey in full, the dataset had 185 participants remaining. Then, we removed 45 participants who did not have meaningful interactions with the chatbot. We defined a meaningful interaction as one in which participants appropriately responded to all questions in a traditional turn‐based manner (one person after another) and adhered to all provided instructions. The authors of this paper independently evaluated each interaction to determine whether it met these criteria. Any discrepancies in coding among the authors were resolved through group discussion, with the final determination of a meaningful interaction being based on majority consensus.
The final sample (N = 140) ranged in age from 18 to 78 years (M age = 44.56, SD age = 15.88), with the majority identifying as women (n = 73, 51%) and men (n = 64, 45%). Racially, the sample included participants identifying as White/Caucasian (n = 102, 73%), Black/African American (n = 17, 12%), and other racial/ethnic identities (n = 38, 27%) (see footnote b). Most participants held an associate degree or higher (n = 80, 57%) and reported a household income of $50,000 or higher (n = 92, 65%). Additional demographics are available on our OSF page as previously listed.
Procedure
Data collection for the main study followed the pilot test. After clicking on the study link, participants were directed to the experiment hosted on Qualtrics. Akin to the pilot test, participants were informed that they would be connected to an AI chatbot named Charlie that was developed by the University of Cincinnati. Then, participants were randomly assigned to one of four chatbots (n high/anxiety = 40, n high/loneliness = 31, n low/anxiety = 34, n low/loneliness = 35). The chatbot was embedded within Qualtrics, and once assigned to the condition, participants were directed to a page featuring a chatbot interface positioned at the bottom right of the screen in a pop‐up. After participants initiated a conversation with the chatbot, the first message required them to say “hello” to begin the conversation. An if/else condition was set, such that participants needed to type “hello” to proceed to the main conversation. If participants failed twice to type hello—or variations of hello (e.g., Hello!)—the chat ended, and no response was recorded. However, participants had the opportunity to refresh the chatbot and try again. When successful, participants began the conversation with the chatbot, which lasted an average of approximately 4 min (M = 251.55 s, Mdn = 221.79 s). Subsequently, they responded to the questionnaire containing the dependent measures. All scale items were presented in randomized order. At the conclusion of the survey, participants were thanked, debriefed, and received monetary compensation.
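As a rough illustration of the “hello” gate (the actual check was an if/else block configured in Voiceflow, so this R sketch and its function names are hypothetical):

```r
# Participants get two attempts to type "hello" (or a variation such as
# "Hello!") before the chat ends; refreshing the chatbot restarts the gate.
is_hello <- function(msg) grepl("^hello\\W*$", trimws(tolower(msg)))

start_chat <- function(messages) {
  for (attempt in 1:2) {
    if (is_hello(messages[attempt])) return("proceed to main conversation")
  }
  "chat ended; no response recorded"
}

start_chat(c("hi there", "Hello!"))  # second attempt succeeds
```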
Two steps were taken to ensure high‐quality data. First, four close‐ended attention checks were integrated at various stages of the survey. An example attention check item read, “Please select ‘Agree’ for this response item.” Any participant who failed (i.e., did not select the specified response item) even a single attention check was removed from the data analysis. Second, all participants were assigned a random 5‐digit numerical ID, which they needed to enter during the chat session when prompted. Using chat transcripts, these IDs were later used to filter out participants who provided nonsensical responses or those who did not engage with the chatbot in any meaningful manner.
Measures
All items and R scripts used in this study are available on our OSF page [https://osf.io/gz2a4/].
Quality of social support
QSS was measured using the relational assurance subscale from the original QSS scale. 67 Although the original 12‐item scale assesses multiple dimensions of QSS (i.e., problem solving, emotional awareness, and relational assurance), we focused solely on relational assurance, as our interest lay in evaluating the chatbot's role as a supportive partner. The final semantic differential scale, ranging from 1 to 7, consisted of four items that measured the chatbot's ability to provide effective relational support (e.g., comforting–distressing).
Interpersonal warmth
IW was measured via a scale adapted from a previous study. 53 The 5‐item scale asked the participants to rate the chatbot on a scale of 1 (not at all) to 7 (very) on different characteristics of IW (e.g., polite and generous).
Social presence
Social presence was measured using a scale adopted from a previous study. 68 The 8‐item semantic differential scale assessed the extent to which participants felt they were psychologically involved with the chatbot (e.g., impersonal–personal, insensitive–sensitive).
Emotional validation
Emotional validation was measured using a scale adopted from a previous study. 69 The 2‐item scale asked participants to indicate their agreement on a scale of 1 (strongly disagree) to 7 (strongly agree) and captured the extent to which participants felt the chatbot acknowledged their feelings (e.g., “My conversation with the chatbot has made me feel like my concerns are legitimate.”).
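For instance, scale composites and the reliabilities reported in Table 1 could be computed as follows. This is a sketch with hypothetical item names, using Cronbach's alpha from the psych package; the authors' actual scoring script is on their OSF page.

```r
library(psych)

# Reliability for the four relational assurance (QSS) items.
alpha(dat[, paste0("qss", 1:4)])

# Composite scores: mean across each scale's items.
dat$qss <- rowMeans(dat[, paste0("qss", 1:4)])
dat$iw  <- rowMeans(dat[, paste0("iw", 1:5)])
dat$sp  <- rowMeans(dat[, paste0("sp", 1:8)])
dat$ev  <- rowMeans(dat[, paste0("ev", 1:2)])
```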
Covariates
For all statistical analyses reported below, we controlled for participants’ pre‐experimental self‐reported anxiety and loneliness levels. Additional analyses, controlling for participants’ gender and race, are reported on our OSF page [https://osf.io/gz2a4/].
RESULTS
Data analysis
All analyses were performed in R, including mediation analyses using the R variant of PROCESS Macro v. 4.3.1. For transparency, all R scripts and analysis output are publicly available on our OSF repository [https://osf.io/gz2a4/].
Descriptive statistics, zero‐order correlations, and scale reliabilities are reported in Table 1. To assess potential concerns of multicollinearity between QSS and IW, we conducted diagnostic analyses (see Supporting Information). The results are also documented in our OSF repository.
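Given the strong QSS–IW correlation (r = 0.89; Table 1), one standard diagnostic of this kind is the variance inflation factor (VIF). The sketch below uses hypothetical variable names and the car package; we do not claim it reproduces the exact diagnostics in the Supporting Information.

```r
library(car)

# VIFs for a model containing both mediators and the covariates; values
# well above roughly 5-10 are commonly read as problematic collinearity.
vif(lm(ev ~ qss + iw + anx_pre + lone_pre, data = dat))
```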
TABLE 1.
Measurement‐scale descriptive statistics.
| | M | SD | α | EV | QSS | IW | SP |
|---|---|---|---|---|---|---|---|
| EV | 3.33 | 1.97 | 0.92 | 1 | 0.80** | 0.83** | 0.79** |
| QSS | 3.59 | 1.98 | 0.96 | | 1 | 0.89** | 0.80** |
| IW | 3.33 | 1.97 | 0.96 | | | 1 | 0.87** |
| SP | 3.72 | 1.66 | 0.93 | | | | 1 |
Abbreviations: EV, emotional validation; IW, interpersonal warmth; QSS, quality of social support; SP, social presence.
**Significant at the p < 0.01 level.
Manipulation check
Before proceeding to hypothesis testing, we assessed whether there were significant differences between the anxiety and loneliness conditions to validate our stimulus sampling approach. Independent‐samples t‐tests showed that the health context (i.e., loneliness or anxiety) did not affect the dependent variables of interest: emotional validation (t(138) = 0.97, p = 0.33), QSS (t(138) = 0.15, p = 0.88), perceived IW (t(138) = 1.06, p = 0.29), and social presence (t(138) = 1.19, p = 0.24). Furthermore, the interaction between person‐centeredness and context was also not statistically significant across all outcome variables: emotional validation (F(1, 136) = 0.08, p = 0.78, η²p < 0.01), QSS (F(1, 136) = 1.03, p = 0.31, η²p < 0.01), IW (F(1, 136) = 1.18, p = 0.28, η²p < 0.01), and social presence (F(1, 136) = 0.60, p = 0.44, η²p < 0.01). The nonsignificant differences between loneliness and anxiety suggest that participants’ responses to the outcomes of interest in this study do not meaningfully differ across these two distinct mental health conditions. As such, we collapsed the context manipulation and treated this study as a one‐factor between‐subjects experiment.
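These checks can be expressed in R as follows (a sketch with hypothetical variable names):

```r
# Independent-samples t-tests for each outcome across the two contexts.
for (dv in c("ev", "qss", "iw", "sp")) {
  print(t.test(dat[[dv]] ~ dat$context, var.equal = TRUE))
}

# Person-centeredness x context interaction for one outcome; the same
# model would be run for each dependent variable in turn.
summary(aov(ev ~ pc * context, data = dat))
```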
Hypotheses testing
H1 (i.e., high person‐centered messages are associated with greater emotional validation) was tested using a linear regression. Results show that the overall model is statistically significant (F(3, 136) = 32.14, p < 0.001, R²adj = 0.40). Participants in the high person‐centered message condition reported significantly higher emotional validation than those in the low person‐centered message condition (b = 2.46, SE = 0.26, p < 0.001). As such, H1 was supported.
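A sketch of the H1 model is below; variable names are hypothetical, and the covariates are the pre‐experimental anxiety and loneliness scores described under Covariates.

```r
# Emotional validation regressed on condition plus the two covariates.
# With three predictors and N = 140, the residual df is 136, matching
# the reported F(3, 136).
m1 <- lm(ev ~ pc + anx_pre + lone_pre, data = dat)
summary(m1)
```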
H2 (i.e., the effect of high person‐centered messages on emotional validation will be mediated by (a) QSS and (b) IW) was tested with a parallel mediation analysis using PROCESS Model 4 and a bootstrap sample of 5000 with 95% bias‐corrected confidence intervals (CIs). 70 The mediation of person‐centered messages on emotional validation via QSS was significant (b = 0.79, boot‐SE = 0.41, 95% CI = [0.08, 1.71]). High person‐centered messages increased QSS (b = 2.91, t = 13.10, p < 0.001), which was, in turn, positively related to emotional validation (b = 0.27, t = 2.45, p = 0.01). Similarly, the mediation of person‐centered messages on emotional validation through IW was also significant (b = 1.56, boot‐SE = 0.40, 95% CI = [0.77, 2.35]). High person‐centered messages significantly enhanced perceptions of IW (b = 2.73, t = 11.31, p < 0.001), which, in turn, heightened emotional validation (b = 0.57, t = 5.63, p < 0.001). Therefore, H2a and H2b were supported.
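A sketch of the H2 analysis, assuming Hayes's process.R script has been sourced into the session (variable names are hypothetical):

```r
source("process.R")  # PROCESS macro for R (Hayes), v4.3.1

# Parallel mediation: QSS and IW entered as simultaneous mediators
# (Model 4), with 5000 bootstrap samples.
process(data = dat, y = "ev", x = "pc", m = c("qss", "iw"),
        cov = c("anx_pre", "lone_pre"), model = 4,
        boot = 5000, seed = 2025)
```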
H3 (i.e., the mediated effect of high person‐centered messages on emotional validation via (a) QSS and (b) IW will be moderated by social presence) was tested with a parallel moderated‐mediation analysis using PROCESS Model 7 and a bootstrap sample of 5000 with 95% bias‐corrected CIs. 70 When QSS was entered as the mediator, the index of moderated mediation was significant (b = −0.13, boot‐SE = 0.07, 95% CI = [−0.27, −0.01]). Specifically, at lower levels of social presence (−1 SD, value = 2.06), the positive mediated effect of high person‐centered messages on emotional validation was strongest (b = 0.68, boot‐SE = 0.34, 95% CI = [0.07, 1.42]). This effect weakened at the mean level of social presence (value = 3.72; b = 0.47, boot‐SE = 0.25, 95% CI = [0.05, 1.04]) and at higher levels of social presence (+1 SD, value = 5.39; b = 0.26, boot‐SE = 0.19, 95% CI = [0.02, 0.74]). When IW was entered as a mediator, the index of moderated mediation was not significant (b = −0.05, boot‐SE = 0.07, 95% CI = [−0.18, 0.10]). As such, H3a was supported, whereas H3b was not.
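The H3 analysis extends the same sketch with social presence moderating the paths from condition to each mediator (PROCESS Model 7); again, variable names are hypothetical.

```r
# Moderated mediation: social presence (w) moderates the a-paths; the
# PROCESS output reports an index of moderated mediation per mediator.
process(data = dat, y = "ev", x = "pc", m = c("qss", "iw"), w = "sp",
        cov = c("anx_pre", "lone_pre"), model = 7,
        boot = 5000, seed = 2025)
```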
DISCUSSION
Primary findings
The current study examined the impact that person‐centered messages from an AI chatbot have on emotional validation. The study also investigated the effect of the QSS provided, IW, and social presence on this relationship. Results demonstrate that high person‐centered messages are associated with greater emotional validation. This finding aligns with previous research in the human–human context. 44 High person‐centered messages acknowledge, validate, elaborate, and explore the support seeker's feelings and emotions. 42 , 43 Thus, these messages should lead the support seeker to feel greater emotional validation.
The current study also found that the effect of high person‐centered messages on emotional validation was significantly mediated by the QSS. This finding likely reflects the fact that high person‐centered messages directly impact how well the support seeker feels understood and validated. 42 , 43 Thus, high person‐centered messages can enhance the perceived effectiveness of social support, 10 thereby fostering a deeper connection between the support provider and the support seeker. The support seeker then feels greater levels of emotional validation because the high‐quality emotional support provided was effective and appropriate. The results of the current study also demonstrate that the mediated effect of high person‐centered messages on emotional validation via the QSS was significantly moderated by social presence. Specifically, at lower levels of social presence, the positive mediated effect of high person‐centered messages on emotional validation via the QSS was strongest, and this effect lessened at greater levels of social presence. Ultimately, this finding suggests that, when the QSS is already high, social presence has less room to add value because the message is already perceived as positive and validating. This finding makes sense, as social presence is generally understood as the feeling of being connected to another social actor. 59 Thus, when the QSS is high, the support receiver may already feel some connection to the support provider, as the accompanying messages validate and center their needs. 47 , 48
Finally, though the effect of high person‐centered messages on emotional validation was significantly mediated by IW, this mediated effect was not significantly moderated by social presence. Although previous research documents both that characteristics of social AI can positively impact IW 55 , 56 and that IW is associated with more positive outcomes and perceptions of social AI, 8 , 57 , 58 that is not always the case (for an exception, see Ref. 13). Generally speaking, high person‐centered messages validate the support seeker's feelings, 42 , 43 which may naturally lead the support seeker to perceive the support provider as greater in warmth. Furthermore, greater perceptions of warmth equate to greater care in interactions and increased feelings of helpfulness, 53 ultimately impacting feelings of emotional validation. Thus, the mediated effect of IW makes sense. However, the nonsignificant finding of social presence as a moderator is surprising. Social presence is concerned with the feeling of being connected to another social actor. 59 Thus, it appears that feeling connected to another social actor does not alter the role that IW plays in the relationship between high person‐centered messages and emotional validation.
Theoretical implications
The current study provides several theoretical implications for research on social support. Indeed, the findings highlight the potential for AI‐driven support models to effectively simulate human‐like social support by incorporating high person‐centered messages. Furthermore, the findings demonstrate that the QSS and IW function as mechanisms that explain why person‐centered messages impact emotional validation. These findings contribute to the evolving understanding of how social support can be provided in increasingly digital environments, offering new insights into the mechanisms that make AI an effective tool for emotional support.
Theoretical implications for research on social presence are also observed. Specifically, the moderating role of social presence in the mediation effect of high person‐centered messages on emotional validation via the QSS indicates that the perceived engagement of the AI significantly shaped the impact of person‐centered messages on the QSS, highlighting the importance of both content and relational dynamics in digital support systems. Limited research investigates the use of social presence as a moderating variable (for exceptions, see Refs. 8, 64). Thus, the current study provides further theoretical understanding about the varying ways in which social presence can induce positive experiences, especially in human–AI interactions.
Practical implications
From a practical standpoint, our study shows that machine communicators such as AI chatbots can provide social support messages across two different mental health contexts. Although technology's effects on people experiencing mental health challenges continue to be debated, as critics note that technology furthers social isolation and is a primary driver toward an experience of being “alone together,” 71 proponents stress that technology can not only facilitate connection between people but may also offer alternative ways of addressing loneliness and anxiety. In particular, the results from this study indicate that supportive messages from chatbots operate comparably to social support messages sent by humans. Given the decreasing effect of high person‐centered messages on emotional validation as social presence increases, designers of socially supportive AI systems should lean into the machine‐likeness of their chatbots rather than aiming to invoke human‐likeness too strongly. It may be precisely because chatbots are machine‐driven entities rather than humans that their person‐centered support messages result in positive outcomes.
As is typical for research examining chatbots designed to address mental health issues, we emphasize that social AI is best understood as a complement to human mental healthcare providers rather than a replacement for them. As such, chatbots providing socially supportive messages can potentially complement healthcare providers and serve as an initial point of contact for those seeking support and other services. In doing so, chatbots may lessen the load of an already understaffed and overworked mental healthcare system. 72 In fact, there is considerable diversity among health‐related chatbots, with a focus not only on mental health support but also on behavior change, physical activity promotion, and more, 73 suggesting a breadth of applications for chatbots across healthcare sectors.
Ethical implications
Nonetheless, the ethical challenges common to many chatbots and communicative AI apply here, too, especially given the focus on addressing mental health challenges. Of course, this use should not come at the expense of protecting patient data, as sensitive health data are vulnerable to misuse or privacy breaches. 74 Therefore, developers and practitioners must prioritize data security and ethical safeguards, ensuring that AI‐driven mental health tools are both effective and trustworthy. Striking this balance is essential to maintaining user trust and promoting equitable access to care.
Furthermore, AI systems often reflect the biases inherent in the datasets used to train them, potentially perpetuating disparities in care among marginalized demographics. 75 These biases can exacerbate existing inequalities, such as underdiagnosis in certain racial or socioeconomic groups or overreliance on AI recommendations that lack cultural sensitivity. 76 Thus, vulnerable populations, such as marginalized people, minors, and those with mental health concerns, among others, need to be protected when communicative AI systems are designed to serve as support providers. 77 Practitioners should prioritize developing culturally competent AI systems by incorporating diverse and representative datasets and engaging stakeholders from underrepresented communities throughout the design process. 74
Private developers of communicative AI also need to ethically approach the question of whether monetization via subscription services, which is increasingly common in the AI industry, might unduly harm customers by making the social support services provided by AI less accessible.
Finally, as users grow dependent on these tools, there is a risk of fostering harmful behavioral patterns or overreliance. 78 Researchers should investigate the long‐term impacts of AI‐driven interventions on health outcomes, ensuring that these systems enhance care without replacing the critical human elements of empathy and understanding.
Limitations and future directions
Although our findings advance prior theoretical understandings of human–AI interaction, specifically in the context of social support, our study comes with several limitations. First, our operationalization of person‐centeredness included only Levels 1 and 9, the most extreme forms of supportive messages. 66 A moderate level of person‐centered messaging may distinctly shape user evaluations, as these messages tend to acknowledge individuals’ concerns but do not explicitly help individuals cope with them (e.g., Level 5 43 ). Therefore, future research should operationalize varying levels of person‐centered messages to offer a more fine‐grained study design, resulting in nuanced insights regarding supportive message design in human–AI interaction.
Second, our findings may also be limited by our chatbot design, which was equipped to strictly follow a script (i.e., a rule‐based design). Although we chose this design to increase internal validity, this design element may have stripped the chatbot's ability to tailor its responses to each participant, a key characteristic of current chatbots. Chatbots built on the principles of ML are trained on a massive corpus of data, thus enabling conversations that are much more reflective of natural language and tailored to each user. Future research should examine how these design features affect social support processes.
Third, regardless of their current mental state, participants were forced to reflect upon instances when they felt anxious or lonely. Typically, social support would be actively sought by individuals who face these mental health issues at a given moment. Therefore, it is likely that participants who intended to actively seek support would differ from those who did not, as the latter are likely less involved in the interaction.
Fourth, social support could also manifest via nonverbal behaviors or tone in addition to verbal descriptors. 44 Future research should develop chatbots that are visually represented with variations in nonverbal communication (e.g., facial expressions) and voice (e.g., pitch and tone) to gain a deeper understanding of AI's role in providing social support.
Finally, the current study did not include a control group in which participants would virtually chat with a real person. We intended to do so, but the source manipulation was not successful. This omission limits the ability to directly compare the persuasive power and engagement levels of social AI against human interaction, which could provide a more comprehensive understanding of how AI's effectiveness differs from human–human communication. Future research should either compare effects between an AI chatbot and a virtual human interaction or use a stronger source manipulation.
CONCLUSION
We conducted a 2 (person‐centeredness: low or high) × 2 (context: anxiety or loneliness) between‐subjects experiment to examine how social AI systems can provide social support messages. Our study indicates that higher person‐centered messages result in greater emotional validation. This relationship was mediated by the QSS (moderated by social presence) and IW. Collectively, our results are in line with the social support literature in the context of human–human communication, showcasing how social AI actors may be comparable to human social support providers. Given the complexity of loneliness and anxiety, 22 social AI may complement existing healthcare services in addressing the loneliness epidemic and the rise of anxiety.
AUTHOR CONTRIBUTIONS
Kelly Merrill Jr. conceptualized the study and was responsible for obtaining IRB approval. Kelly Merrill Jr. also created the written stimuli and co‐developed the instrument/questionnaire with Marco Dehnert. Kelly Merrill Jr. and Marco Dehnert collaboratively wrote the introduction and background information, including the review of literature that informed the study hypotheses. All authors contributed to the materials and methods section. Sai Datta Mikkilineni created the chatbots in Voiceflow, conducted all analyses, and wrote the results section. Kelly Merrill Jr. and Marco Dehnert collaboratively wrote the discussion section. Kelly Merrill Jr. prepared the first draft of the manuscript with input from Sai Datta Mikkilineni and Marco Dehnert. The manuscript and revisions were edited by all authors, with Kelly Merrill Jr. taking the lead in coordinating revisions. Kelly Merrill Jr. had full access to all data and had final responsibility for the decision to submit the manuscript for publication.
CONFLICT OF INTEREST STATEMENT
The authors declare no conflicts of interest.
Supporting information
Supporting Information
ACKNOWLEDGMENTS
This work was financially supported by the Charles Phelps Taft Research Center.
Merrill Jr., K., Mikkilineni, S. D., & Dehnert, M. (2025). Artificial intelligence chatbots as a source of virtual social support: Implications for loneliness and anxiety management. Ann NY Acad Sci., 1549, 148–159. 10.1111/nyas.15400
Footnotes
Flesch–Kincaid scores evaluate the readability and complexity of textual materials. Higher scores indicate that the text is more accessible to a broader audience, with less reliance on the reader's level of education or specialized knowledge. The grade level score reflects the minimum educational level required to understand the text. 79 This score corresponds to the US Grade Level system. All conversation scripts are available on our OSF page [https://osf.io/gz2a4/].
The participants were free to choose multiple options for gender and race/ethnicity, so the percentages may exceed 100.
REFERENCES
- 1. Witters, D . (2023). Loneliness in U.S. subsides from pandemic high. Gallup. https://news.gallup.com/poll/473057/loneliness‐subsides‐pandemic‐high.aspx [Google Scholar]
- 2.Booth, J. (2023). Anxiety statistics and facts. Forbes. https://www.forbes.com/health/mind/anxiety‐statistics/ [Google Scholar]
- 3. Kaiser Family Foundation . (2023). Latest federal data show that young people are more likely than older adults to be experiencing symptoms of anxiety or depression. Kaiser Family Foundation. https://www.kff.org/mental‐health/press‐release/latest‐federal‐data‐show‐that‐young‐people‐are‐more‐likely‐than‐older‐adults‐to‐be‐experiencing‐symptoms‐of‐anxiety‐or‐depression/ [Google Scholar]
- 4. Kim, J. , Merrill, K. Jr. , & Collins, C. (2021). AI as a friend or assistant: The mediating role of perceived usefulness in social AI vs. functional AI. Telematics and Informatics, 64, 101694. 10.1016/j.tele.2021.101694 [DOI] [Google Scholar]
- 5. Ta, V. , Griffith, C. , Boatfield, C. , Wang, X. , Civitello, M. , Bader, H. , DeCero, E. , & Loggarakis, A. (2020). User experiences of social support from companion chatbots in everyday contexts: Thematic analysis. Journal of Medical Internet Research, 22(3), e16235. 10.2196/16235 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6. Meng, J. , & Dai, Y. (2021). Emotional support from AI chatbots: Should a supportive partner self‐disclose or not? Journal of Computer‐Mediated Communication, 26(4), 207–222. 10.1093/jcmc/zmab005 [DOI] [Google Scholar]
- 7. Westerman, D. , Edwards, A. , Edwards, C. , Luo, Z. , & Spence, P. R. (2020). I‐it, I‐thou, I‐robot: The perceived humanness of AI in human‐machine communication. Communication Studies, 71(3), 393–408. [Google Scholar]
- 8. Merrill, K. Jr. , Kim, J. , & Collins, C. (2022). AI companions for lonely individuals and the role of social presence. Communication Research Reports, 39(2), 93–103. [Google Scholar]
- 9. Klos, M. C. , Escoredo, M. , Joerin, A. , Lemos, V. N. , Rauws, M. , & Bunge, E. L. (2021). Artificial intelligence–based chatbot for anxiety and depression in university students: Pilot randomized controlled trial. JMIR Formative Research, 5(8), e20678. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10. High, A. C. , & Dillard, J. P. (2012). A review and meta‐analysis of person‐centered messages and social support outcomes. Communication Studies, 63(1), 99–118. [Google Scholar]
- 11. Skjuve, M. , Følstad, A. , Fostervold, K. I. , & Brandtzaeg, P. B. (2022). A longitudinal study of human‐chatbot relationships. International Journal of Human‐Computer Studies, 168, 102903. 10.1016/j.ijhcs.2022.102903 [DOI] [Google Scholar]
- 12. Depounti, I. , Saukko, P. , & Natale, S. (2023). Ideal technologies, ideal women: AI and gender imaginaries in Redditors’ discussions on the Replika bot girlfriend. Media, Culture & Society, 45(4), 720–736. 10.1177/01634437221119021 [DOI] [Google Scholar]
- 13. Meng, J. , Rheu, M. , Zhang, Y. , Dai, Y. , & Peng, W. (2023). Mediated social support for distress reduction: AI chatbots vs. human. Proceedings of the ACM on Human‐Computer Interaction, 7, 1–25. [Google Scholar]
- 14. World Health Organization . Social isolation and loneliness. World Health Organization. https://www.who.int/teams/social‐determinants‐of‐health/demographic‐change‐and‐healthy‐ageing/social‐isolation‐and‐loneliness [Google Scholar]
- 15. U.S. Department of Health and Human Services . (2023). Our epidemic of loneliness and isolation: The U.S. surgeon general's advisory on the healing effects of social connection and community . U.S. Department of Health and Human Services. https://www.hhs.gov/sites/default/files/surgeon‐general‐social‐connection‐advisory.pdf [PubMed] [Google Scholar]
- 16. Cigna . (2021). The loneliness epidemic persists: A post‐pandemic look at the state of loneliness among U.S. adults. The Cigna Group Newsroom. https://newsroom.thecignagroup.com/loneliness‐epidemic‐persists‐post‐pandemic‐look [Google Scholar]
- 17. Surkalim, D. L. , Luo, M. , Eres, R. , Gebel, K. , van Buskirk, J. , Bauman, A. , & Ding, D. (2022). The prevalence of loneliness across 113 countries: Systematic review and meta‐analysis. British Medical Journal, 376, e067068. 10.1136/bmj-2021-067068 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18. National Academies of Sciences, Engineering., and Medicine . (2020). Social isolation and loneliness in older adults: Opportunities for the health care system. The National Academies Press. 10.17226/25663 [DOI] [PubMed] [Google Scholar]
- 19. Holt‐Lunstad, J. , Smith, T. B. , & Layton, J. B. (2010). Social relationships and mortality risk: A meta‐analytic review. PLoS Medicine, 7(7), e000316. 10.1371/journal.pmed.1000316 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20. Holt‐Lunstad, J. , Smith, T. B. , Baker, M. , Harris, T. , & Stephenson, D. (2015). Loneliness and social isolation as risk factors for mortality: A meta‐analytic review. Perspectives on Psychological Science, 10(2), 227–237. 10.1177/1745691614568352 [DOI] [PubMed] [Google Scholar]
- 21. Cacioppo, J. T. , Hughes, M. E. , Waite, L. J. , Hawkley, L. C. , & Thisted, R. A. (2006). Loneliness as a specific risk factor for depressive symptoms: Cross‐sectional and longitudinal analyses. Psychology and Aging, 21(1), 140–151. 10.1037/0882-7974.21.1.140 [DOI] [PubMed] [Google Scholar]
- 22. Hawkley, L. C. , & Cacioppo, J. T. (2010). Loneliness matters: A theoretical and empirical review of consequences and mechanisms. Annals of Behavioral Medicine, 40(2), 218–227. 10.1007/s12160-010-9210-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23. Bruce, L. D. , Wu, J. S. , Lustig, S. L. , Russell, D. W. , & Nemecek, D. A. (2019). Loneliness in the United States: A 2018 national panel survey of demographic, structural, cognitive, and behavioral characteristics. American Journal of Health Promotion, 33(8), 1123–1133. 10.1177/0890117119856551 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Jecker, N. S. , Sparrow, R. , Lederman, Z. , & Ho, A. (2024). Digital humans to combat loneliness and social isolation: Ethics concerns and policy recommendations. Hastings Center Report, 54(1), 7–12. 10.1002/hast.1562 [DOI] [PubMed] [Google Scholar]
- 25. Chaturvedi, R. , Verma, S. , Das, R. , & Dwivedi, Y. K. (2023). Social companionship with artificial intelligence: Recent trends and future avenues. . Technological Forecasting and Social Change, 193, 122634. 10.1016/j.techfore.2023.122634 [DOI] [Google Scholar]
- 26. Taylor, S. E. (2011). Social support: A review. In Friedman H. S. (Ed.), The Oxford handbook of health psychology (pp. 189–214). Oxford University Press. [Google Scholar]
- 27. Cohen, S. , & Syme, S. L. (1985). Issues in the study and application of social support. In Cohen S., & Syme S. L. (Eds.), Social support and health (pp. 3–22). Academic Press. [Google Scholar]
- 28. Lee, S. , Chung, J. E. , & Park, N. (2018). Network environments and well‐being: An examination of personal network structure, social capital, and perceived social support. Health Communication, 33(1), 22–31. 10.1080/10410236.2016.1242032 [DOI] [PubMed] [Google Scholar]
- 29. Siceloff, E. R. , Wilson, D. K. , & Van Horn, L. (2014). A longitudinal study of the effects of instrumental and emotional social support on physical activity in underserved adolescents in the ACT trial. Annals of Behavioral Medicine, 48(1), 71–79. 10.1007/s12160-013-9571-x [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30. Uchino, B. N. , Cacioppo, J. T. , & Kiecolt‐Glaser, J. K. (1996). The relationship between social support and physiological processes: A review with emphasis on underlying mechanisms and implications for health. Psychological Bulletin, 119(3), 488–531. 10.1037/0033-2909.119.3.488 [DOI] [PubMed] [Google Scholar]
- 31. Burleson, B. R. , & MacGeorge, E. L. (2002). Supportive communication. In Knapp M. L., & Daly J. A. (Eds.), Handbook of interpersonal communication (3rd ed. pp. 374–424). Sage. [Google Scholar]
- 32. Mejova, Y. , & Hommadova Lu, A. (2022). I feel you: Mixed‐methods study of social support of loneliness on twitter. Computers in Human Behavior, 136, 107389. 10.1016/j.chb.2022.107389 [DOI] [Google Scholar]
- 33. Hirsch, J. K., & Barton, A. L. (2011). Positive social support, negative social exchanges, and suicidal behavior in college students. Journal of American College Health, 59(5), 393–398. 10.1080/07448481.2010.515635
- 34. Uchino, B. N. (2006). Social support and health: A review of physiological processes potentially underlying links to disease outcomes. Journal of Behavioral Medicine, 29(4), 377–387. 10.1007/s10865-006-9056-5
- 35. Cohen, S., & Wills, T. A. (1985). Stress, social support, and the buffering hypothesis. Psychological Bulletin, 98(2), 310–357. 10.1037/0033-2909.98.2.310
- 36. Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2021). My chatbot companion—A study of human‐chatbot relationships. International Journal of Human‐Computer Studies, 149, 102601.
- 37. Alotaibi, J. O., & Alshahre, A. S. (2024). The role of conversational AI agents in providing support and social care for isolated individuals. Alexandria Engineering Journal, 108, 273–284. 10.1016/j.aej.2024.07.098
- 38. Abendschein, B., Edwards, C., & Edwards, A. (2021). The influence of agent and message type on perceptions of social support in human‐machine communication. Communication Research Reports, 38(5), 304–314. 10.1080/08824096.2021.1966405
- 39. De Freitas, J., Uguralp, A. K., Uguralp, Z. O., & Puntoni, S. (2024). AI companions reduce loneliness. arXiv, 2407.19096. 10.48550/arXiv.2407.19096
- 40. Burleson, B. R., & Goldsmith, D. J. (1998). How the comforting process works: Alleviating emotional distress through conversationally induced reappraisals. In Andersen P. A., & Guerrero L. K. (Eds.), Handbook of communication and emotion: Research, theory, applications, and contexts (pp. 245–280). Academic Press.
- 41. Burleson, B. R. (2008). What counts as effective emotional support: Explorations of individual and situational differences. In Motley M. T. (Ed.), Studies in applied interpersonal communication (pp. 207–227). Sage.
- 42. Burleson, B. R. (1982). The development of comforting communication skills in childhood and adolescence. Child Development, 53(6), 1578–1588. 10.2307/1130086
- 43. Samter, W., & MacGeorge, E. L. (2016). Coding comforting behavior for verbal person centeredness. In Canary D. J., & VanLear A. (Eds.), Researching communication interaction behavior: A sourcebook of methods and measures (pp. 107–128). Sage.
- 44. Jones, S. M., & Guerrero, L. K. (2001). The effects of nonverbal immediacy and verbal person centeredness in the emotional support process. Human Communication Research, 27(4), 567–596. 10.1111/j.1468-2958.2001.tb00793.x
- 45. Rains, S. A., & High, A. C. (2021). The effects of person‐centered social support messages on recipient distress over time within a conversation. Journal of Communication, 71(3), 380–402.
- 46. High, A. C., & Solomon, D. H. (2014). Communication channel, sex, and the immediate and longitudinal outcomes of verbal person‐centered support. Communication Monographs, 81(4), 439–468.
- 47. Goldsmith, D. J. (2004). Communicating social support. Cambridge University Press.
- 48. MacGeorge, E. L., Feng, B., & Burleson, B. R. (2011). Supportive communication. In Knapp M. L., & Daly J. A. (Eds.), Handbook of interpersonal communication (pp. 317–354). Sage.
- 49. Merrill, K. Jr. (2022). Disparities in social support processes: Investigating differences in ingroup and outgroup sources of social support among gay men [Doctoral dissertation]. The Ohio State University.
- 50. Albrecht, T. L., Burleson, B. R., & Sarason, I. (1992). Meaning and method in the study of communication and social support: An introduction. Communication Research, 19(2), 149–153. 10.1177/009365092019002001
- 51. Kneavel, M. (2021). Relationship between gender, stress, and quality of social support. Psychological Reports, 124(4), 1481–1501. 10.1177/0033294120939844
- 52. Li, S., & Feng, B. (2015). What to say to an online support‐seeker? The influence of others’ responses and support‐seekers’ replies. Human Communication Research, 41(3), 303–326. 10.1111/hcre.12055
- 53. Fiske, S. T., Cuddy, A. J., & Glick, P. (2007). Universal dimensions of social cognition: Warmth and competence. Trends in Cognitive Sciences, 11(2), 77–83.
- 54. Andersen, P. A., & Guerrero, L. K. (1996). The bright side of relational communication: Interpersonal warmth as a social emotion. In Andersen P. A., & Guerrero L. K. (Eds.), Handbook of communication and emotion (pp. 303–329). Academic Press.
- 55. McKee, K. R., Bai, X., & Fiske, S. T. (2023). Humans perceive warmth and competence in artificial intelligence. iScience, 26(8), 107256. 10.1016/j.isci.2023.107256
- 56. Demeure, V., Niewiadomski, R., & Pelachaud, C. (2011). How is believability of a virtual agent related to warmth, competence, personification, and embodiment? Presence, 20(5), 431–448. 10.1162/PRES_a_00065
- 57. Harris‐Watson, A. M., Larson, L. E., Lauharatanahirun, N., DeChurch, L. A., & Contractor, N. S. (2023). Social perception in human‐AI teams: Warmth and competence predict receptivity to AI teammates. Computers in Human Behavior, 145, 107765. 10.1016/j.chb.2023.107765
- 58. Shi, W., Wang, X., Oh, Y. J., Zhang, J., Sahay, S., & Yu, Z. (2020). Effects of persuasive dialogues: Testing bot identities and inquiry strategies. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–13). 10.1145/3313831.3376843
- 59. Lee, K. M. (2004). Presence, explicated. Communication Theory, 14(1), 27–50. 10.1111/j.1468-2885.2004.tb00302.x
- 60. Kim, J., Song, H., & Lee, S. (2018). Extrovert and lonely individuals’ social TV viewing experiences: A mediating and moderating role of social presence. Mass Communication and Society, 21(1), 50–70. 10.1080/15205436.2017.1350715
- 61. Kim, J., Merrill, K. Jr., Xu, K., & Kelly, S. (2022). Perceived credibility of an AI instructor in online education: The role of social presence and voice features. Computers in Human Behavior, 136, 107383. 10.1016/j.chb.2022.107383
- 62. Lee, K. M., & Nass, C. (2005). Social‐psychological origins of feelings of presence: Creating social presence with machine‐generated voices. Media Psychology, 7(1), 31–45. 10.1207/S1532785XMEP0701_2
- 63. Lombard, M., & Ditton, T. (1997). At the heart of it all: The concept of presence. Journal of Computer‐Mediated Communication, 3(2), JCMC321. 10.1111/j.1083-6101.1997.tb00072.x
- 64. Kim, J., Kim, J., & Yang, H. (2019). Loneliness and the use of social media to follow celebrities: A moderating role of social presence. The Social Science Journal, 56(1), 21–29. 10.1016/j.soscij.2018.12.007
- 65. Adamopoulou, E., & Moussiades, L. (2020). Chatbots: History, technology, and applications. Machine Learning With Applications, 2, 100006. 10.1016/j.mlwa.2020.100006
- 66. Applegate, J. L., & Delia, J. G. (1982). Person‐centered speech, psychological development, and the contexts of language usage. In Clair R. S., & Giles H. (Eds.), The social and psychological contexts of language (pp. 245–282). Erlbaum.
- 67. Goldsmith, D. J., McDermott, V. M., & Alexander, S. C. (2000). Helpful, supportive and sensitive: Measuring the evaluation of enacted social support in personal relationships. Journal of Social and Personal Relationships, 17(3), 369–391. 10.1177/0265407500173004
- 68. Lombard, M., Ditton, T. B., & Weinstein, L. (2009). Measuring presence: The Temple Presence Inventory. In Proceedings of the 12th Annual International Workshop on Presence (pp. 1–15).
- 69. Rheu, M., Dai, Y., Meng, J., & Peng, W. (2024). When a chatbot disappoints you: Expectancy violation in human‐chatbot interaction in a social support context. Communication Research, 51, 782–814.
- 70. Hayes, A. F. (2022). Introduction to mediation, moderation, and conditional process analysis: A regression‐based approach (3rd ed.). Guilford Press.
- 71. Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.
- 72. Boucher, E. M., Harake, N. R., Ward, H. E., Stoeckl, S. E., Vargas, J., Minkel, J., Parks, A. C., & Zilca, R. (2021). Artificially intelligent chatbots in digital mental health interventions: A review. Expert Review of Medical Devices, 18(S1), 37–49. 10.1080/17434440.2021.2013200
- 73. Xue, J., Zhang, B., Zhao, Y., Zhang, Q., Zheng, C., Jiang, J., Li, H., Liu, N., Li, Z., Fu, W., Peng, Y., Logan, J., Zhang, J., & Xiang, X. (2023). Evaluation of the current state of chatbots for digital health: Scoping review. Journal of Medical Internet Research, 25, e47217. 10.2196/47217
- 74. Merrill, K. Jr. (2024). Using artificial intelligence to address health disparities: Challenges and solutions. In Ramasubramanian S., & Banjo O. (Eds.), The Oxford Handbook of Media and Social Justice (pp. 234–240). Wiley.
- 75. Ibrahim, H., Liu, X., Zariffa, N., Morris, A. D., & Denniston, A. K. (2021). Health data poverty: An assailable barrier to equitable digital health care. Lancet Digital Health, 3(4), e260–e265. 10.1016/S2589-7500(20)30317-4
- 76. Adamson, A. S., & Smith, A. (2018). Machine learning and health care disparities in dermatology. JAMA Dermatology, 154(11), 1247–1248. 10.1001/jamadermatol.2018.2348
- 77. Pozzi, G., & de Proost, M. (2024). Keeping an AI on the mental health of vulnerable populations: Reflections on the potential for participatory injustice. AI and Ethics, 5, 2281–2291. 10.1007/s43681-024-00523-5
- 78. Laestadius, L., Bishop, A., Gonzalez, M., Illenčík, D., & Campos‐Castillo, C. (2022). Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media & Society, 26(10), 5923–5941. 10.1177/14614448221142007
- 79. Jindal, P., & MacDermid, J. (2017). Assessing reading levels of health information: Uses and limitations of Flesch formula. Education for Health, 30(1), 84–88. 10.4103/1357-6283.210517