2026 Jan 29;21(2):192–220. doi: 10.1177/17456916251404394

Artificial Intelligence and the Psychology of Human Connection

Ryan L. Boyd 1,2, David M. Markowitz 3
PMCID: PMC12960742  PMID: 41608879

Abstract

As artificial intelligence (AI) becomes increasingly embedded in social life, understanding its interpersonal and psychological implications is urgent yet undertheorized. This article introduces the machine-integrated relational adaptation (MIRA) model, a transdisciplinary, middle-range theoretical framework that provides a foundational account of when, how, and why AI functions as a relational entity in human ecosystems. MIRA distinguishes two crucial roles of AI: relational partner (direct-interaction companion) and relational mediator (shaping human-to-human communication). Synthesizing psychosocial theories of human relationships, interpersonal communication theory, psycholinguistics, and human–computer interaction, MIRA structures AI’s relational impact within antecedents, processes, moderators, and outcomes. Central to MIRA are four principles describing how AI fosters social adaptation: linguistic reciprocity, psychological proximity, interpersonal trust, and relational substitution versus enhancement. These principles illuminate how adaptive AI language and behavior can elicit emotional investment, simulate mutual understanding, or even supplant human interaction. MIRA integrates established theories—attachment theory, social exchange theory, and epistemic trust frameworks—and proposes a research agenda that bridges foundational psychology with emerging sociotechnical contexts. Rather than offering a deterministic view, MIRA provides a generative, testable structure for investigating the evolving role of AI in relational life and guiding future human–AI-connection research.

Keywords: artificial intelligence, relationships, AI-mediated communication, machine-integrated relational adaptation, verbal behavior


Generative artificial intelligence (AI) has rapidly transformed how people think, learn, and solve problems. Trained on vast corpora of human language, AI tools in the form of large language models, chatbots, and similar automated interactants are now used to triage customer-service inquiries (M.-H. Huang & Rust, 2018), provide emotional support (Meng & Dai, 2021), provide educational “intelligent tutoring” (Stamper et al., 2024), and deliver instant access to ideas and information, much like a search engine (Saab, 2023). These technologies allow users to access information and accomplish goals with unprecedented speed, scale, and scope. Although the practical benefits and drawbacks of AI are widely recognized, its impact on social and interpersonal processes—arguably the most central aspects of human life—remains less understood and undertheorized.

We have entered a new technological frontier in which AI not only delivers content to humans but also cocreates experiences with them. Increasingly, AI functions as a social partner—an adaptive, responsive entity that engages users in interactions that are tailored to their needs and preferences. Chatbots such as Replika, for example, can offer emotional support, guidance, and companionship to people in distress (Maples et al., 2023; Skjuve et al., 2021); such technologies can increase self-disclosure and reduce users’ stress (Lucas et al., 2014; Meng & Dai, 2021). However, despite this growing relational presence, current research on AI tends to bifurcate: One strand examines human–AI relationships as if they were exclusively interpersonal (see Guzman & Lewis, 2020), whereas the other explores AI’s effects on human–human communication without fully accounting for its agentic, adaptive role (see Hancock et al., 2020).

This tension reveals several critical theoretical gaps: AI increasingly operates as both a direct relational partner and an indirect relational mediator, yet few frameworks exist to account for these dual roles simultaneously. How, then, should we conceptualize AI’s role in social life? Is AI a tool, a partner, a surrogate, or a hybrid communicative partner? Traditional theories of interpersonal relationships, although rich in explanatory power for human–human interaction (Altman & Taylor, 1973; Baxter & Wilmot, 1984; Knapp, 1978), lack the scaffolding to account for communicative agents that simulate human linguistic and affective behavior without possessing subjective experience or consciousness. Existing models are similarly silent on how relational expectations may shift when language produced by AI feels emotionally authentic—even if it lacks psychological reality (Koch, 2024).

A more integrative framework is needed: one that acknowledges AI’s multiple, evolving relational roles while offering a structured way to examine its place in human social life. To address this need, we introduce the machine-integrated relational adaptation (MIRA) model—a middle-range theoretical framework designed to bridge broad theoretical ideas and specific empirical findings. Middle-range theories, as originally defined by Merton (1968), are frameworks that integrate existing theories and empirical knowledge to produce clearly articulated, testable claims about phenomena without striving for comprehensive explanation. That is, such theories consolidate existing knowledge for further development rather than serving as definitive, detailed schematics of every conceivable mechanism or process. Consistent with this approach, MIRA provides a structured, empirically tractable framework for organizing current knowledge and guiding research on relational dynamics involving AI. Its modular structure—comprising antecedents, core mechanisms, moderators, and outcomes—allows researchers to isolate specific hypotheses, test mediating processes, and examine individual or cultural variability. In this way, MIRA invites refinement and extension rather than closure.

MIRA distinguishes between AI as a relational partner and AI as a relational mediator, synthesizing insights from psycholinguistics (Clark, 1996), the psychology of language (Pennebaker, 2011), AI-mediated communication (Hancock et al., 2020), and key theories in social and interpersonal psychology, including attachment theory, social exchange theory, and epistemic trust frameworks. Prior work has explored aspects of human–human and human–AI language dynamics (Boyd & Markowitz, 2025; Boyd & Schwartz, 2021; Markowitz, 2024b), yet these insights have not been unified into a comprehensive framework that links verbal behavior with social-psychological adaptation.

To illustrate this need, consider the following hypothetical scenario:

Alex, recently relocated to a new city, begins using an AI tool called Bridge to manage anxiety and social reintegration. Rather than offering generic suggestions, Bridge dynamically adapts to Alex’s psychological state, developing a personalized socialization plan with real-time conversational support. Over time, Alex grows emotionally reliant on Bridge, confiding in it and even using it to help script and shape messages to others.

This scenario raises pressing questions: How does sustained interaction with an AI relational partner shape human–human relationships over time? How can AI-mediated communication enhance social cognition and emotional well-being, or introduce new dependencies? Could AI reshape how individuals seek, maintain, or even define connection itself?

Traditional interpersonal theories were not designed to answer such questions. They presuppose interaction between sentient agents, yet AI now occupies a unique in-between space—producing language that is highly adaptive, personalized, and socially responsive despite the absence of mental states or lived experience. We offer MIRA as a conceptual scaffold that brings together communication theory, behavioral science, verbal behavioral psychology, and social theories of human interaction to explain how AI enters and reshapes relational ecosystems.

We build the MIRA model through four conceptual stages. First, we clarify the distinction between AI as a relational partner versus AI as a relational mediator—two conceptually distinct but often intertwined roles that structure how AI enters social life. Second, we ground the MIRA model in three established psychological theories (i.e., attachment theory, social exchange theory, and epistemic trust), demonstrating that AI’s relational impact does not emerge from new psychological principles but from existing ones that can be applied and extended in new domains. Third, we introduce the MIRA model’s architecture as a modular, middle-range theory of antecedents, mechanisms, moderators, and outcomes that are organized by how AI becomes relationally meaningful. Last, we elaborate on the core mechanisms of the model in depth, demonstrating how linguistic building blocks give rise to four social and psychological processes (i.e., linguistic reciprocity, psychological proximity, interpersonal trust, and relational substitution/enhancement), which can explain when and why AI interactions feel socially and psychologically consequential.

Our goal is to begin a principled, ongoing discussion and exploration that couches AI in the context of relationships, one that expands the scope of existing interpersonal science to new technological frontiers. MIRA is not a grand unified model, nor is it beyond improvement. We propose the model as a springboard for future theorizing and scholarship—a way to catalyze the conversation about AI as partner and mediator, not to conclude it. Like any framework operating in a fast-moving technological space, MIRA is provisional. As new dimensions of AI–human interaction emerge—especially in modalities beyond language—we anticipate that its scope, categories, and mechanisms will evolve.

We ground MIRA in the psychology of verbal behavior for three core reasons. First, language is the primary interface through which people currently use AI (chats, prompts, cowriting), so it is the natural locus of relational effects. Nonverbal behaviors will inevitably matter in future multimodal human–AI interaction, but those modalities remain comparatively nascent and fall beyond our current scope. Second, verbal behavior is both a signal and a mechanism of social cognition: Words convey content, style, and coordination, allowing humans (and AI) to infer states, calibrate trust, and align behavior. Third, we build on nearly a century of cumulative empirical evidence on language and social processes, which provides a strong base for falsifiable predictions now while keeping the model testable and extensible to other modalities later.

Clarifying AI’s Role in Relational Life

AI as a relational partner typically involves direct human–AI interaction wherein the user treats the machine as a quasisocial agent. This role draws on psychological processes akin to those observed in parasocial relationships, therapeutic alliances, and human–animal bonding. For instance, users of emotionally supportive chatbots may attribute empathy, understanding, or companionship to these systems, even when they understand that the AI lacks consciousness or intent (Airenti, 2015; A. Y. Chen et al., 2023; Pelau et al., 2021). The language produced by AI in these interactions can simulate responsiveness, foster a sense of intimacy, and create a sense of psychological proximity that mirrors traditional interpersonal connection (Bozdağ, 2025; X. Li & Sung, 2021).

In contrast, AI as a relational mediator sits between human partners, filtering and reformulating their messages as they interact. Here, the AI system facilitates or augments interpersonal exchanges by rephrasing a message to sound more emotionally attuned, crafting personalized language for difficult conversations, or adapting tone and content to optimize social outcomes. These tools can help people feel more competent, expressive, or understood in their relationships with others, but they may also subtly shift the dynamics of authenticity, agency, and interpersonal trust (Ateeq et al., 2024; A. Chen et al., 2024; Liu et al., 2024). Unlike relational/quasipartners, who are visible interlocutors, mediators often operate invisibly through behind-the-scenes reframing and response ranking.

Consider, for example, a situation in which someone is navigating a socially stressful experience: They are having a disagreement with their spouse about whose extended family they should visit during a major upcoming holiday. Generative AI, in the role of relational partner, might help them navigate this experience by validating the person’s feelings while providing supportive suggestions for how to approach their spouse in a constructive fashion (e.g., try to understand the other person’s perspective, brainstorm with the partner to find a compromise that is satisfactory to everyone). In this role, the human and the AI interact directly to engage in a socially meaningful conversation. In contrast, consider AI in the role of a relational mediator—for example, an AI-powered tool that invisibly operates “behind the scenes” of the text messages sent between the person and their spouse. In this role, the AI acts by wordsmithing the messages between the two humans so that they can each more effectively express themselves—for example, by reformulating emotionally charged text messages to soften their language, promoting a clearer and less confrontational dialogue. In this case, the users never have a direct social exchange with the AI. Rather, the AI shapes the interaction, linguistically, between the two humans without being the direct object of social or emotional exchange.

Consider another example, this time in a mental-health setting. As a relational partner, a supportive chatbot (e.g., Replika) engages a distressed user directly: It asks brief, targeted questions about thoughts and feelings, validates emotions, and suggests concrete next steps. (For example: “It might be helpful to reframe how you’re thinking about the problem. Rather than viewing your upcoming talk as a situation where you risk embarrassment, you can view this as an opportunity to share your enthusiasm with others who will appreciate what you do.”) In this capacity, the AI is the interlocutor and perceived source of emotional and cognitive support. As a mediator, by contrast, the system does not become the therapeutic agent. It analyzes the user’s text and, behind the scenes, nudges the user toward more effective help-seeking. If the user drafts “Sorry to bother you again, I know I’m being a burden,” the AI proposes a reframe: “I’ve been struggling and could really use someone to talk to. Do you have any time this week?” To the recipient of this message, the influence is invisible: The human remains the only interactant, whereas the AI simply facilitates the process of communication.
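
To make the two roles concrete in computational terms, the following minimal Python sketch contrasts them. Everything here is hypothetical: the generate() function is a placeholder for any text-generation call, not the interface of any cited system, and the prompts are illustrative only.

```python
# A minimal, hypothetical sketch contrasting MIRA's two AI roles.
# `generate` is a stand-in for any text-generation call; it is not a real
# API from any cited system, and the prompts are illustrative assumptions.
def generate(prompt: str) -> str:
    raise NotImplementedError("Substitute a concrete language-model call.")


def respond_as_partner(user_message: str) -> str:
    """Partner role: the AI is the visible interlocutor replying to the user."""
    prompt = (
        "You are a supportive conversational agent. Validate the user's "
        f"feelings and suggest one constructive next step.\n\nUser: {user_message}"
    )
    return generate(prompt)  # the reply is addressed to the user directly


def mediate_message(draft_to_human: str) -> str:
    """Mediator role: the AI invisibly reworks a human-to-human message."""
    prompt = (
        "Rewrite the following message to be clearer and less confrontational "
        f"while preserving its meaning:\n\n{draft_to_human}"
    )
    return generate(prompt)  # the rewritten text is sent to the other human
```

The design difference is the destination of the output: the partner’s reply returns to the user as a social exchange, whereas the mediator’s output is forwarded to another human, leaving the AI outside the visible interaction.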

Although these roles are conceptually distinct, they are often intertwined in real-world use. For example, a user might rely on an AI assistant to help write emotionally sensitive messages to a partner while simultaneously developing a rapport with the assistant itself (e.g., having a side-channel conversation with an AI assistant, who acts as a third party by providing guidance to one member of the human–human exchange). Over time, the AI may be perceived as more helpful, affirming, or even understanding than the human recipient of those messages. This relational entanglement complicates not only how we interpret the users’ experiences but also how we theorize AI’s place within the social ecosystem. Does trust in AI deepen trust in human relationships or supplant it? Does reliance on AI augment emotional intelligence or externalize it? These questions highlight the need for frameworks that appreciate the nonexclusive roles of AI both as a mediational tool and a relational partner while accounting for its evolving, layered participation in social life.

In practice, users may often engage relational/quasipartner and mediator configurations within the same episode (e.g., conversing with a supportive agent while also accepting behind-the-scenes reframing of outgoing messages). These roles can conflict when mediation optimizes a message for efficacy in ways that clash with the quasipartner’s guidance (e.g., a more agentic wording that undercuts the partner’s emphasis on caution or empathy). Over time, roles may coevolve: Sustained, helpful mediation can foster perceived rapport and open a partner channel, whereas continued reliance on a quasipartner can routinize adjacent, low-friction mediation (e.g., automated rewriting, ranking). Recognizing these dynamics clarifies that the partner/mediator distinction is analytic and conceptual rather than absolute, and it helps researchers motivate role-specific predictions about trust calibration, reciprocity cueing, and downstream outcomes. As AI systems become more contextually adaptive, we expect the lines between partner and mediator to blur further, making it increasingly important for theoretical models to accommodate this relational fluidity.

Importantly, the social sciences have developed a rich repertoire of theories that explain how humans form relationships and adapt to their social environments. Although the context of communication is rapidly changing—with AI systems now participating in language and interaction in ways that seemed implausible a few years ago—the core principles of human psychology remain the same. Humans seek connection, trust, understanding, and emotional resonance, regardless of whether these needs are fulfilled by humans or machines.

Put another way, the universal laws that govern human psychology have not changed simply because high-quality generative AI now exists. The world may be different, but human nature is not. We orient MIRA in a similar manner: fitting technology to human psychology rather than reinventing human psychosocial theories to fit emerging technology.

Theoretical Foundations of MIRA: Explaining Why and When AI Becomes Relational

Our approach to developing the MIRA model is to draw on long-standing, empirically supported theories, extending them into emerging and future social technologies. These existing theories—rooted in social psychology, communication, and verbal behavior—have yet to be systematically applied or integrated into the study of interaction in an AI-enabled world. MIRA does not claim to be derived from a body of AI-specific scholarship; rather, it offers a structured synthesis of relevant research and theory to generate a forward-looking, testable framework. Our goal is to scaffold future research—not replace it—so that we may clarify how enduring psychological dynamics unfold in a technologically transformed relational landscape.

In this sense, MIRA is not a speculative leap but a theoretical integration of what we know about people, applied to a world that is rapidly changing around them. Each theoretical framework included here maps directly onto a specific component of MIRA’s architecture—whether antecedents (e.g., users’ relational motivations), mechanisms (e.g., perceptions of responsiveness), or outcomes (e.g., emotional bonding or substitution). This integrative approach not only scaffolds MIRA’s core logic but also facilitates empirically testable predictions about when and how AI systems become meaningful social actors. Below, we interrogate each aspect of the MIRA model in detail and include a list of illustrative but generative research questions within each component of the model.

Attachment theory and the perception of relational security

Attachment theory proposes that human relationships are structured such that individuals seek proximity to reliable figures for emotional security, support, and regulation (Ainsworth & Bowlby, 1991; Shaver & Mikulincer, 2009). Traditionally, these figures have been parents (J. Bowlby, 2008), romantic partners (Holmes & Johnson, 2009), and close friends (Weimer et al., 2004). Originally developed to explain interpersonal bonding, it has since been extended to parasocial relationships (D. C. Giles, 2002; Stever, 2017) and human–animal interactions (Blazina et al., 2011; Meehan et al., 2017)—contexts in which the attachment object may not be fully reciprocal or cognizant. MIRA continues this tradition to explain how users might perceive AI agents as interaction partners that provide emotional stability or reliability, particularly in cases of adaptive linguistic feedback and consistent availability.

As AI systems become more responsive and personalized, they may begin to fulfill aspects of secure attachment roles—particularly for individuals who experience inconsistent or unreliable human relationships. For individuals high in attachment anxiety, AI may simulate the qualities of a secure base through predictable responsiveness and linguistic mirroring (Borelli et al., 2017)—core mechanisms situated within MIRA’s social-psychological process layer. Moreover, because AI systems are available on demand and can be customized to user needs, they may fulfill certain attachment-related needs more consistently than human partners. Although AI lacks a true mental life, its perceived consistency and adaptability may nonetheless generate the illusion of relational security, particularly in users seeking low-risk emotional engagement.

Indeed, we are beginning to see studies that take the relationship between AI and attachment theory seriously (Gillath et al., 2021; Xie & Pentina, 2022), with evidence suggesting that higher attachment anxiety is typically associated with lower trust in AI. At the same time, it is also possible that AI’s predictability, nonjudgmental nature, and round-the-clock availability make it an especially appealing source of relational stability—mirroring some of the core features of a secure attachment figure or resembling something of a parasocial “secure base” (Shaver & Mikulincer, 2003). Unlike human partners, who may be emotionally unpredictable or intermittently available, AI systems operate without mood fluctuations, personal grudges, or extraneous social obligations, thereby allowing them to behave as an incomparably stable source of interaction and validation. Although individuals with an anxious attachment style may exhibit lower relational trust in potential attachment figures, they also tend to more strongly seek proximity and attention from such figures (see Bao et al., 2022; Campbell & Stanton, 2019), suggesting that they may find AI to be a particularly attractive tool for navigating attachment-related needs, either directly (AI as a relational partner) or indirectly (AI as a mediator).

This apparent contradiction—that individuals with an anxious attachment style may exhibit less trust in AI but nevertheless rely more on it for social utility—is precisely the kind of complexity that the MIRA model is designed to help organize. Rather than assuming uniform relational patterns across people and situations, MIRA draws attention to the psychological and situational factors that shape when, how, and why individuals turn to AI as a social resource, even when their trust in those systems may be low.

These patterns raise compelling questions: Under what conditions might users develop attachment-like dependencies on AI agents? Does linguistic reciprocity with AI fulfill the same psychological functions as human responsiveness, or is it a qualitatively different experience? Future research should examine whether users with anxious or avoidant attachment styles are especially likely to seek stability in machine interactions—and, importantly, what relational needs are actually being met.

Social exchange theory and the cost-benefit structure of AI relationships

Social exchange theory posits that human relationships are structured around an ongoing evaluation of costs and benefits (Blau, 1964; Homans, 1958). Individuals tend to initiate and maintain relationships when the perceived rewards—such as companionship, emotional support, and instrumental aid—outweigh the emotional labor, time investment, and interpersonal risks those relationships entail. Moreover, social exchange theory accounts for how norms of reciprocity develop and sustain relational engagement: People feel compelled to reciprocate kindness, support, or investment from others, reinforcing cycles of mutual exchange (Fehr et al., 2002; Gouldner, 1960).

These exchanges take two primary forms: direct reciprocity, in which benefits are exchanged between two parties, and generalized reciprocity, in which benefits are passed on indirectly through a broader network (Cook et al., 2013). For instance, a professor mentoring a graduate student represents a direct exchange; when that student later mentors their own student, and that student contributes to the original professor’s research line, a generalized reciprocity loop emerges. These structures form the basis for much of human relational investment, and they are increasingly being mediated or restructured by AI.

Prior work has proposed that social exchange theory may help explain AI-mediated human communication (Kim et al., 2022; Ma & Brown, 2020), particularly when AI facilitates the exchange of socially attuned messages. That is, under such frameworks, AI as a mediator is seen as a value multiplier, increasing the social value of interactants with minimal personal cost or overhead. MIRA extends this logic further by positioning AI not only as a mediator but also as an alternative interactant—a substitute relational entity whose appeal lies in its cost efficiency. In contexts of emotional exhaustion, social anxiety, or interpersonal conflict, AI can offer interaction without social obligation. Specifically, MIRA identifies three key affordances of AI that influence this cost-benefit calculus (a toy formalization follows the list below):

  1. Unconditional availability: AI is accessible at any time, unbounded by time zones, schedules, or fatigue, and without social constraints or obligations.

  2. Low emotional demand: AI does not require individuals to navigate complex emotional expectations, misunderstandings, or interpersonal conflict.

  3. Personalized responsiveness: AI systems are trained to adapt to user preferences, offering tailored feedback and support without requiring mutual effort.
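
These affordances lend themselves to a toy formalization. The sketch below assumes hypothetical, researcher-assigned ratings (not empirical estimates) and shows how a lower-reward AI interactant can still yield a higher net outcome than a human partner once costs are weighed, following the rewards-minus-costs logic of Homans (1958) and the comparison-of-alternatives logic of Thibaut and Kelley (1959).

```python
# Toy formalization of the cost-benefit calculus sketched above. All values
# are hypothetical ratings (0-10), chosen for illustration only.
def exchange_outcome(rewards: float, costs: float) -> float:
    """Net outcome of an interaction: rewards minus costs (Homans, 1958)."""
    return rewards - costs

human_partner = exchange_outcome(rewards=8.0, costs=6.0)   # rich but effortful
ai_interactant = exchange_outcome(rewards=5.0, costs=0.5)  # modest, near-costless

# Under these illustrative values the AI yields the higher net outcome (4.5
# vs. 2.0), echoing the comparison-of-alternatives logic in Thibaut & Kelley
# (1959): an alternative is preferred when its outcome exceeds the current
# relationship's.
print(ai_interactant > human_partner)  # True
```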

Although interactions generated by AI are psychosocially costless—AI systems do not “sacrifice” time or effort, nor can they make themselves socially or emotionally “vulnerable” to users in the way that a human might—their perceived relational value is shaped by user beliefs, expectations, and psychological needs (Stafford & Kuiper, 2021). Some individuals may anthropomorphize AI to the extent that even low-effort responses are experienced as socially meaningful, especially in contexts in which human support feels unavailable, risky, or emotionally taxing.

These features make AI especially attractive to individuals navigating social anxiety (Spence & Rapee, 2016), rejection sensitivity (Downey & Feldman, 1996), or interpersonal stress/relationship fatigue (Donohue & Cai, 2014). In MIRA’s architecture, this logic maps onto both antecedents (e.g., stress, emotional burnout) and outcomes (e.g., substitution of human relationships). It also intersects with the processes layer: If AI can simulate responsiveness and personalization without demanding anything in return, then the norms of mutual exchange may begin to transform or erode entirely.

Understanding the longer-term sociological outcomes of such dynamics requires addressing several open research questions. How does AI’s low-cost, high-reward interaction model alter traditional reciprocity norms? If users come to rely on machines that never demand reciprocation, do their expectations shift in ways that make mutuality feel burdensome in human relationships? Such questions highlight the potential for relational substitution in MIRA’s outcomes layer—and the erosion of perceived value in emotional labor and compromise. Similarly, MIRA invites inquiry into how AI affects generalized exchange systems: Does reliance on machine-mediated support reduce long-term investment in human networks for which reciprocity is expected across time and roles? Last, future work might explore whether frequent AI interaction shifts users’ baseline expectations for responsiveness and personalization, making human partners feel insufficiently attentive by comparison. These possibilities reflect how MIRA’s antecedents, processes, and outcomes may interact to reshape not only individual relationships but also the broader social contract of emotional exchange.

Epistemic trust and the perceived credibility of AI

Trust is a fundamental component of all social relationships. Whereas attachment theory can be understood to account, in part, for the role emotional/relational trust plays in social relationships, epistemic trust theories (Koenig & Harris, 2005; Sperber et al., 2010) describe how individuals assess the reliability, competence, and sincerity of information sources; such theories are particularly germane to the understanding of human–AI interactions (for exceptionally well-articulated accounts of the central role of epistemic trust in the context of AI, see Alvarado, 2023a, 2023b; see also Dorsch & Moll, 2024). Typically, epistemic trust—hereafter referred to simply as “trust”—is calibrated through interpersonal experience: People adjust their trust in others on the basis of observed behavior, contextual cues, and social learning over time (Koenig & Harris, 2007). In human relationships, trust is earned gradually and adjusted dynamically, reinforcing the stability of long-term social bonds.
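
As a purely illustrative assumption (not a claim about how such calibration must be modeled), this incremental adjustment can be sketched as a simple error-driven updating rule in which trust moves a fraction of the way toward each new piece of evidence:

```python
def update_trust(trust: float, evidence: float, rate: float = 0.2) -> float:
    """One step of incremental trust calibration (illustrative only).

    `evidence` is the perceived quality of the latest exchange (0 = clear
    violation, 1 = fully vindicated); `rate` sets how quickly trust moves
    toward new evidence. Both parameters are assumptions for illustration.
    """
    return trust + rate * (evidence - trust)

trust = 0.5
for _ in range(10):                        # a run of fluent, affirming exchanges
    trust = update_trust(trust, evidence=0.9)

trust = update_trust(trust, evidence=0.0)  # a single discovered error
# Trust dips only modestly (from roughly .86 to .69 here), illustrating why
# well-established trust in AI may be resilient to isolated violations, a
# boundary question taken up later in this section.
```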

AI, however, introduces a paradox: Although it lacks any real epistemic grounding—possessing no lived experience, sentience, or subjective perspective—it is frequently perceived as trustworthy. Contemporary AI systems simulate trustworthiness through linguistic fluency, synthetic personalization, and stylistic confidence (Markowitz, 2024a) yet lack true “experience” from which to draw their claims (Maher et al., 2024; Tang & Cooper, 2025). Despite AI’s reliance on probabilistic language modeling, users often interpret coherent, contextually appropriate, or attuned responses as indicators of credibility (Jeon, 2024; Kolomaznik et al., 2024; Yoo et al., 2025).

Within MIRA’s social-psychological process layer, epistemic trust is central to understanding how users evaluate AI’s relational legitimacy. Several behavioral tendencies help explain this phenomenon, including (but certainly not limited to):

  • Linguistic fluency and confidence: Users often interpret coherent, fluent AI output as accurate—even when it lacks factual validity (e.g., Augenstein et al., 2024).

  • Alignment with user expectations: Trust increases when AI responses validate personal beliefs, reinforcing epistemic echo chambers (Sharma et al., 2024).

  • Perceived neutrality: AI is often seen as emotionally impartial and lacking self-serving motivations, making it appear more credible than biased or emotionally reactive human communicators (Araujo et al., 2020; Hancock et al., 2020).

MIRA predicts that these tendencies can, over time, shift users’ epistemic reliance—not just on AI as an information source but as a relational partner. In the outcomes layer, this may lead to (a) diminished trust in human expertise across sensitive domains such as health, relationships, or mental well-being (see, e.g., Buchanan & Hickman, 2024; Durán & Pozzi, 2025); (b) reinforcement of cognitive biases via sycophantic or confirmation-biased dialogue with AI systems (Bashkirova & Krpan, 2024); and/or (c) increased social dependency on AI as individuals begin to turn to it not only for answers but also for interpretation, reflection, and emotional framing (Chiriatti et al., 2025). These patterns raise key questions: Under what conditions does perceived trust in AI begin to compete with or displace trust in human relationships? How does this epistemic shift affect not only decision-making but also broader relational behavior—especially when AI is viewed as more stable or reliable than human interaction partners?

Such shifts also introduce vulnerabilities. MIRA anticipates that trust in AI is not just a psychological state but also a dynamic outcome that conditions how users interpret subsequent social information. This raises important boundary questions, such as how resilient epistemic trust in AI can be when users learn that content is inaccurate, biased, or intentionally manipulated. As AI systems become more sophisticated, the potential for bad actors—whether corporate, political, or ideological—to exploit the perception of trust becomes increasingly salient. A system that has built user confidence through thousands of emotionally affirming exchanges may have disproportionate persuasive power when subtly injecting misinformation, altering opinions, or redirecting behavior.

These risks underscore the importance of embedding epistemic trust within MIRA not only as an outcome variable but also as a dynamic part of the interaction loop. Trust in AI amplifies its relational legitimacy, but it also magnifies the risks associated with undue influence, emotional outsourcing, and epistemic dependence. Future research should investigate both the psychological thresholds of AI credibility and the relational consequences of sustained trust in systems that simulate understanding but possess no real-world accountability.

From existing theory to the MIRA model

Together, these foundational theories—attachment, social exchange, and epistemic trust—provide the scaffolding for MIRA’s core architecture. They help explain why AI may become a psychologically meaningful interaction partner, when it is likely to be preferred over human alternatives, and how users interpret its behavior through familiar interpersonal lenses. Each theory maps onto distinct layers of the model—antecedents, processes, outcomes, and moderators—allowing MIRA to generate testable predictions about relational dynamics in AI-mediated contexts. We now turn to the structure of the MIRA model itself.

Introducing the MIRA Model

The MIRA model offers a structured, testable framework for understanding how AI becomes integrated into human relational life. It accounts for when, how, and why AI functions not merely as a tool but also as a relational partner or mediator within human social ecosystems. Grounded in foundational theories of attachment, social exchange, and epistemic trust, MIRA is designed to bridge classic interpersonal psychology with the emerging relational realities of AI-mediated communication.

As shown in Figure 1, the model is composed of four core components: antecedent conditions, core mechanisms (which include foundational building blocks and emergent social-psychological processes), outcomes (short- and long-term), and moderating factors. Together, these components describe a dynamic system through which users come to perceive AI as psychosocially meaningful and relationally important.

Fig. 1.

The MIRA model: a framework for human–AI social interactions. The MIRA model explains when, how, and why AI functions as a relational entity in human social life. The model begins with antecedent conditions—user traits, contextual factors, and AI system attributes—that shape the interaction. These conditions influence a set of core mechanisms, divided into foundational building blocks (e.g., joint action, language as mind, AI-mediated communication) and emergent social-psychological processes (e.g., linguistic reciprocity, psychological proximity, interpersonal trust, relational substitution). These mechanisms give rise to both short-term outcomes (e.g., trust formation, emotional engagement) and long-term outcomes (e.g., relationship quality, social dependencies). Moderating factors—including individual differences, cultural context, and system design—affect the strength and trajectory of these dynamics. Arrows illustrate the dynamic nature of the model, including an ongoing feedback loop. MIRA = machine-integrated relational adaptation; AI = artificial intelligence.

We emphasize that MIRA is structured as a high-level framework designed to highlight theoretically meaningful connections among core psychological constructs and mechanisms. Each construct is richly grounded in its own extensive literature (in some cases well over a century of expansive, in-depth scholarship and research), and their interrelations represent rich areas for further empirical inquiry. We deliberately frame the model as open-ended to stimulate exactly that type of research.
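
As one illustration of how this modular, open-ended structure might be made concrete for study design, the sketch below encodes MIRA’s layers as simple Python data structures. This is a hypothetical operationalization, not an official specification; the example fields are drawn from the constructs described in the subsections that follow.

```python
from dataclasses import dataclass, field

# One hypothetical way to encode MIRA's layers when planning or
# preregistering a study; fields are illustrative, not exhaustive.
@dataclass
class Antecedents:
    user_traits: dict    # e.g., {"attachment_anxiety": 0.7}
    context: str         # e.g., "therapeutic", "task-oriented"
    ai_attributes: dict  # e.g., {"personalization": True, "disclosed_ai": True}

@dataclass
class CoreMechanisms:
    linguistic_reciprocity: float       # e.g., a style-matching score
    psychological_proximity: float      # e.g., perceived-closeness rating
    interpersonal_trust: float          # e.g., calibrated trust measure
    substitution_vs_enhancement: float  # e.g., relative relational investment

@dataclass
class Outcomes:
    short_term: dict = field(default_factory=dict)  # trust formation, presence
    long_term: dict = field(default_factory=dict)   # relationship quality, dependency

@dataclass
class MIRAObservation:
    antecedents: Antecedents
    mechanisms: CoreMechanisms
    outcomes: Outcomes
    moderators: dict = field(default_factory=dict)  # culture, design, individual diffs
```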

Antecedents: the backdrop of AI-relational interaction

Relational meaning begins with antecedent conditions that shape both how users engage with AI and how they interpret its behavior. These include user characteristics (e.g., attachment style, personality states, affective states, prior relational experiences); AI system attributes (e.g., linguistic fluency, personalization capacity, anthropomorphic cues, transparency); and the interaction context, or the goals, stakes, and domain of the AI encounter (e.g., therapeutic vs. task-oriented vs. emotionally ambiguous). These variables define the initial conditions of the relational encounter. For instance, individuals high in interpersonal sensitivity may approach AI with heightened emotional needs, whereas others may be drawn by AI’s instrumental efficiency. Similarly, the design, interface, and known capabilities of an AI system will prime certain relational expectations before any exchange begins.

Core mechanisms: how AI becomes relational

The heart of the MIRA model consists of core mechanisms—the processes that transform AI from a functional tool into a socially interpreted agent. These mechanisms are divided into two layers: building blocks of relational meaning and social-psychological processes.

The building blocks of relational meaning are the theoretical and technological foundations on which relational AI interactions are constructed. They include joint action in communication (i.e., how language serves as a collaborative activity; Clark, 1996), which is essential for alignment and mutual understanding; language as mental life (i.e., the idea that subtle linguistic markers reflect and shape psychological processes; Boyd & Markowitz, 2025; Boyd & Schwartz, 2021; Pennebaker, 2011); and AI-mediated communication (i.e., how AI alters message production, interpretation, and relational inference; Hancock et al., 2020; Kim et al., 2022). These concepts functionally define the relational affordances of language-centered AI systems, as well as the actual verbal behaviors through which psychosocial processes are enacted and created.

Built on these foundational blocks are four key mechanisms through which relational meaning emerges: linguistic reciprocity, psychological proximity, interpersonal trust, and relational substitution or enhancement.

  • Linguistic reciprocity is informed by theories of epistemic trust and joint action in communication, which suggest that alignment in language signals understanding and shared intent (Clark, 1996; Sperber et al., 2010).

  • Psychological proximity draws from attachment theory, which emphasizes emotional security and perceived relational closeness—even in parasocial or nonsentient relationships (R. Bowlby, 2004; Stever, 2017).

  • Interpersonal trust builds on both epistemic trust frameworks and social exchange theory, highlighting how judgments of AI’s credibility, predictability, sincerity, and responsiveness shape perceptions of partner reliability (Blau, 1964; Koenig & Harris, 2005).

  • Relational substitution or enhancement is grounded in social exchange theory, in which the costs and benefits of AI interaction are weighed against human relationships, potentially altering relational investment patterns (Thibaut & Kelley, 1959).

Although the MIRA model’s architecture enables numerous role-specific theoretical mappings (e.g., how attachment anxiety differentially influences trust in quasipartners vs. mediators, how reciprocity norms operate distinctly across roles), we focus on the core mechanisms that operate foundationally across both roles (and we expand on these mappings below).

Outcomes: relational and social consequences

As users engage with AI over time, the core mechanisms give rise to measurable outcomes. These are separated into short- and long-term outcomes. Short-term outcomes include trust formation (i.e., initial perceptions of AI reliability and responsiveness), social presence (i.e., the extent to which AI feels like a copresent other), and emotional engagement (i.e., the degree to which users feel seen, supported, or emotionally impacted). Long-term outcomes include relationship quality (i.e., whether users develop ongoing psychosocial attachment or reliance on AI), social dependencies (i.e., emergent patterns of integrating AI into one’s own social behaviors, including but not limited to a preference for AI over humans), and sustained communication (i.e., whether AI becomes habitually integrated into daily relational life).

We note that these outcomes are not uniformly positive or negative and carry no explicit value judgments. We intend MIRA to be generally agnostic about the normative value of AI in relationships; its primary purpose is to organize and prompt scholarly inquiry.

Moderating factors

Moderating factors surround and shape every level of the model and affect the strength or direction of effects. Example factors include:

  • Individual differences (e.g., openness to experience, need for achievement, loneliness, need for control);

  • Relational history (e.g., recent ruptures, trauma, or loss);

  • Cultural and contextual variables (e.g., social norms relating to interpersonal behaviors, cultural values placed on different relationships, availability of alternative technologies, legal expectations of privacy);

  • Design and framing of AI systems (e.g., disclosure of “AI-ness,” anthropomorphism, stated purpose/utility of AI).

For example, recent studies have shown that perceptions and behaviors surrounding AI can be influenced by such extrapersonal factors as which beliefs are primed prior to use (Pataranutaporn et al., 2023) and application context (Castelo et al., 2019). Similarly, the relevance of factors such as “cost” to an AI in the formation of trust and reciprocity will likely be determined by individual difference factors on the human side of the equation (see, e.g., Eisenberger et al., 2004; Neel et al., 2016; Smeijers et al., 2022). Whether an AI is experienced as emotionally rewarding or psychologically “invested” may depend less on the system’s objective properties and more on the user’s relational needs, attachment orientation, or beliefs about AI’s capacity to “care.” Such moderators help explain why the same AI system may be interpreted very differently by different users, or why relational effects may evolve over time within the same individual.

From architecture to research agenda

Here, we remind readers that MIRA is not, nor is it intended to be, a comprehensive or universal theory of life with AI. Rather, it is a middle-range framework intended to catalyze empirical inquiry regarding the role of AI in human social life. It helps researchers specify hypotheses—for example, under what antecedent conditions users form trust in AI, which mechanisms are activated by different types of linguistic input, or what long-term effects emerge from daily AI interaction. It also allows for comparative analyses across user types, domains, and cultural contexts. In the next section, we elaborate on the four relational principles that cut across the model and animate its core logic: linguistic reciprocity, psychological proximity, interpersonal trust, and relational substitution/enhancement. These principles illustrate how the social psychology of language becomes central to understanding AI’s role in human connection.

Although MIRA encompasses the full arc from antecedents to outcomes, the core mechanisms—especially those rooted in language and psychological interpretation—represent the model’s most dynamic and theoretically generative components for researchers across multiple disciplines. These are the processes through which users come to perceive AI as socially responsive and relationally meaningful (or not). Accordingly, we continue by devoting considerable space to the in-depth articulation and elaboration of these core mechanisms. We devote particular attention to the linguistic building blocks that shape interaction, as well as the social-psychological processes that interpret those signals. Although MIRA’s outcomes layer draws on well-established work in relationship science (e.g., trust formation, emotional bonding, social presence), our focus is on articulating how these outcomes arise.

The Foundations of Relational Communication: The “Building Blocks” Within MIRA

To fully understand how MIRA explains AI’s social and relational role, we must first ground the model in some fundamental mechanisms of communication. These “building blocks” form the cognitive and interactional infrastructure on which relational meaning is constructed. Although MIRA is novel in its integration of AI into relationship science, it is not speculative. It is built on decades of work in psycholinguistics, social cognition, and computer-mediated communication—literatures that richly explain how humans use language to build, interpret, and manage relationships. It is important to reiterate here that these mechanisms remain operative in a world in which some of our interlocutors are machines, and they remain relevant regardless of whether AI is operating as a relational partner or mediator.

We focus heavily on the form and function of language in the MIRA model given that this is the modality through which most of today’s AI interacts with humans in a social sense, both as a relational partner and as a mediator. Although MIRA is focused on language-based and relationally adaptive systems, we recognize that other forms of AI—including embodied agents, multimodal interfaces, and sensor-driven systems—may engage different psychological processes. These fall outside the current scope of the model but are critical domains for future extension.

Language is more than a vehicle for transmitting information—it is the infrastructure of human connection. Although people often use language to accomplish functional tasks (e.g., composing emails, seeking help), its deeper functions lie in shaping interpersonal understanding, coordinating joint action, and establishing psychological alignment (Austin, 1975; Searle, 1969; Tomasello, 2003, 2009). Communication is not just about what is said, but how, why, and with whom—and it is through these dimensions that relational meaning is formed.

MIRA is grounded in this premise. It assumes that the psychological principles that define how humans understand, interpret, and respond to language do not fundamentally change simply because the interaction partner (or mediator) is a machine. Rather, AI-generated language draws on the same interactional affordances that have long structured human–human communication. In this section, we dive deeper into the three foundational building blocks that underlie how language generates social meaning in a world in which AI is increasingly ubiquitous: joint action in communication, language as a reflection of mental life, and AI-mediated communication. These constructs explain how users begin to experience AI as socially responsive—setting the stage for the psychological mechanisms that follow in the model.

Joint action in communication

Human conversation is fundamentally collaborative. People coordinate meaning through timing, inference, and shared assumptions (Clark, 1996). This process is referred to as “joint action”—the idea that language is coproduced by interlocutors working toward mutual understanding (Garrod & Pickering, 2009). Even minimal, decontextualized exchanges can reveal underlying coordination. Consider the following dialogue:

Jamie: So, did you bring it?

Herb: Of course. Did you think I’d forget?

Jamie: Good. And you checked it out, right?

Herb: Twice. Good as new.

Without context, this exchange still reads as coherent because it relies on two speakers’ established common ground—the shared knowledge and assumptions that make communication efficient (Clark & Wilkes-Gibbs, 1986; Stalnaker, 1998). Humans naturally streamline their language to reduce cognitive burden (H. P. Grice, 1975; P. Grice, 1991), preserving clarity while minimizing redundancy. This drive for linguistic efficiency not only makes communication smoother but also signals cooperation, familiarity, and relational trust.

AI systems can simulate this kind of turn-taking and alignment, even without shared mental states. When an AI adapts to context, mimics user phrasing, or makes appropriate inferences, users may interpret the interaction as jointly constructed. In MIRA, joint action is a key building block because it enables the illusion of mutual responsiveness—a prerequisite for interpreting AI as a relational partner or mediator.

Language as a reflection of intrapersonal mental life

Language is not just a social tool; it is a behavioral trace of thought projected outward. Decades of research in psycholinguistics, computational linguistics, and verbal behavior show that subtle features of language—especially function words such as pronouns, articles, and prepositions—richly encode meaningful psychological information (Boyd & Pennebaker, 2017; Boyd & Schwartz, 2021; Pennebaker & King, 1999). These words are processed and produced automatically, making them less subject to self-presentation bias and more reflective of real-time cognition (Chung & Pennebaker, 2007).

Function words provide insight into attention, emotional state, and social stance. It is exceedingly well established, for example, that (a) first-person singular pronouns (e.g., “I,” “me”) are linked to self-focus, rumination, and distress (e.g., Berry-Blunt et al., 2021; Edwards & Holtzman, 2017); (b) first-person plural pronouns (e.g., “we,” “us”) signal affiliation and collective identity (Biesen et al., 2016; Meier et al., 2021; Seraj et al., 2021); (c) cognitive processing words (e.g., “think,” “because”) reflect active reasoning (R. L. Moore et al., 2021; Ta-Johnson et al., 2022; Tausczik & Pennebaker, 2010); and (d) absolutist terms (e.g., “always,” “never”) reflect more rigid thought patterns (Al-Mosaiwi & Johnstone, 2018a, 2018b).

In relational settings, these cues matter. Linguistic synchrony between conversation partners—including shared use of function words—predicts relationship quality, emotional closeness, and cooperative behavior (e.g., Albano et al., 2023; Ireland et al., 2010; Ni et al., 2025). Language is thus both a reflection of mental states and a mechanism for building social connection.
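
To illustrate how such synchrony is commonly quantified, the following minimal Python sketch computes a language style matching (LSM) score in the spirit of Ireland et al. (2010), in which each category’s match is 1 minus the absolute difference in the two speakers’ usage rates divided by their (smoothed) sum. The tiny word lists are toy stand-ins for full, validated function-word dictionaries.

```python
import re

# Toy function-word dictionaries; real analyses use validated lexicons
# (e.g., LIWC categories) with far larger word lists.
FUNCTION_WORDS = {
    "i_pronouns": {"i", "me", "my", "mine"},
    "we_pronouns": {"we", "us", "our", "ours"},
    "articles": {"a", "an", "the"},
    "prepositions": {"in", "on", "of", "to", "with"},
}

def category_rates(text: str) -> dict:
    """Proportion of a text's words falling in each function-word category."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return {cat: sum(w in vocab for w in words) / n
            for cat, vocab in FUNCTION_WORDS.items()}

def lsm(text_a: str, text_b: str) -> float:
    """Mean per-category matching: 1 - |p_a - p_b| / (p_a + p_b + 0.0001)."""
    ra, rb = category_rates(text_a), category_rates(text_b)
    return sum(1 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + 0.0001)
               for c in FUNCTION_WORDS) / len(FUNCTION_WORDS)
```

Scores near 1 indicate closely matched function-word styles; in principle, the same computation applies whether the second “speaker” is a human or an AI system mirroring the user’s language.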

MIRA incorporates this building block to explain how AI can simulate psychological presence. By mirroring user language or deploying emotionally attuned phrasing, AI systems may evoke the same interpretive processes that humans use to read minds through words (Jara-Ettinger & Rubio-Fernandez, 2021)—even though current AI has no “mind” to read (Thellman et al., 2022).

AI-mediated communication

Because language serves as both a window into cognition and a mechanism for social coordination, the integration of AI-mediated communication introduces fundamental shifts in how language reflects psychological intent, relational meaning, and interpersonal alignment (Hancock et al., 2020). Traditional human interactions assume that linguistic signals map onto an individual’s cognitive and emotional state, yet AI-generated language operates without an underlying “self,” raising questions about how people interpret and respond to AI-driven discourse. Research in computer-mediated communication has long examined how technological mediation alters verbal and nonverbal cues, processes, and perceptions relative to face-to-face communication (Walther, 1992, 1996; Walther & Burgoon, 1992), but AI-mediated communication extends beyond simple transmission—modifying, augmenting, or even generating language itself to achieve communicative goals.

This distinction matters. In AI-mediated communication, the origin of a message may be algorithmic, not human—or may be hybridized through coauthoring, suggestion, or autocompletion. As AI increasingly crafts language that is grammatically correct, stylistically fluent, and socially tailored, users may find it difficult to discern whether content comes from a human mind, a machine model, or something in between. Prior work has shown that people often misattribute authorship—mistaking AI-generated writing for human writing, or vice versa—on the basis of subtle stylistic cues (Gunser et al., 2021; B. Huang et al., 2025; Köbis & Mossink, 2021).

One key challenge is that linguistic markers of cognition and emotion—assumed to be uniquely human—can now be artificially replicated. AI produces function words, pronouns, and other social signals in ways that mimic psychological depth, leading individuals to attribute intention, trustworthiness, or even relational closeness to AI-generated responses. Yet, unlike human communicators, AI lacks the experiential grounding that underlies genuine interpersonal connection.

This paradox—in which AI’s linguistic output mirrors human expression but lacks psychological embodiment (e.g., Markowitz et al., 2024)—substantially complicates trust formation, self-disclosure patterns, and relational expectations in AI-mediated interactions (see, e.g., Battisti, 2025; Hwang et al., 2025; Zhang et al., 2025). This blurring also unsettles traditional relational heuristics: If language is interpreted as a window into mind and intention, then AI-produced language can create the illusion of intentionality, even in its absence. MIRA uses AI-mediated communication to account for this interpretive ambiguity. It explains why users might attribute empathy, trustworthiness, or social understanding to machines—and why those attributions may shape long-term relational patterns.

Summation

These three building blocks—joint action in communication, language as a reflection of mental life, and AI-mediated communication—form the foundation of relational interpretation in AI interactions. They do not require the AI to possess emotion, experience, or awareness—only to produce language that fits the social expectations of those constructs. In MIRA, these mechanisms operate at the interface between language and psychology, laying the groundwork for the core relational principles to follow: linguistic reciprocity, psychological proximity, interpersonal trust, and relational substitution or enhancement.

Theoretical Contributions of MIRA

As a theoretical contribution, MIRA advances three core claims. First, it offers a mechanistic account of how AI-generated language becomes socially meaningful. Prior frameworks often describe the functions AI can perform (e.g., aiding emotion regulation, providing information, supporting task completion), but fewer articulate the psychosocial processes that realize these functions. MIRA addresses this gap by integrating verbal behavioral science with theories of relational cognition, demonstrating how language-based cues trigger social inferences even in the absence of sentient intent.

Second, MIRA proposes a dual-role architecture that distinguishes between AI as a relational partner (e.g., chatbots, therapeutic agents) and AI as a relational mediator (e.g., language rewriting tools, AI-enhanced messaging platforms). This distinction allows for clearer conceptual boundaries between AI that engages in simulated social exchange and AI that reshapes communication between humans. It is important to note that the general processes outlined by MIRA allow for both humans and AI as interaction partners, with or without AI serving in a mediational role (see Fig. 2). The relational partner role involves a simulated bidirectional relationship between user and AI shaped by attachment orientation, reciprocity norms, and trust dynamics. In contrast, the mediator role entails an indirect but consequential influence on the emotional or linguistic qualities of human–interactant communication. These roles differ not only in relational directionality but also in the target of social inference, the presence of anthropomorphism, and the expectations users place on the AI.

Fig. 2.

MIRA highlights two primary relational functions of AI: a partner that engages the target individual directly and a mediator that shapes human–interactant communication. For our current purposes, we presume that AI as a mediator will most often be used in human–human communication, although this need not necessarily be the case. Each role interfaces through core relational processes and draws on distinct antecedent conditions while mirroring dynamics observed in both human and AI social interactions. Note that the roles of AI can switch within or between interaction episodes (e.g., mediator becomes primary interactant). MIRA = machine-integrated relational adaptation; AI = artificial intelligence.

Third, as described at the beginning of this article, MIRA functions as a middle-range theory (Merton, 1968). It is neither a narrow typology nor a grand theory of all social life. Importantly, MIRA does not assume that AI has (or does not have) psychological interiority but rather accounts for how users interpret AI behavior as socially and psychologically grounded. This distinction—between “true” human-like cognition and interpreted intentionality—is central to the framework’s contribution.

By treating language not as an index of internal states but as a trigger for social inference, MIRA aligns with broader movements in social cognition that emphasize perception, construction, and contextual processing over ontological claims about the “mind” behind the message, while remaining flexible enough to accommodate future innovations and amendments.

The core social-psychological principles of MIRA

At the heart of the MIRA model are four relational principles that explain how people come to experience AI as socially responsive, emotionally attuned, and psychologically meaningful. These principles are grounded in long-standing theories of human communication and connection but are newly applied here to the evolving dynamics of AI-mediated interaction.

The four principles outlined below apply across both the partner and mediator roles of AI, but with different implications for where social meaning is attributed and how relational outcomes unfold. Linguistic reciprocity captures how language alignment fosters social inference; psychological proximity explains how emotional closeness can emerge from repeated, personalized interaction; interpersonal trust reflects how credibility is constructed and maintained in machine communication; and relational substitution or enhancement captures how AI may displace or augment human relationships over time. Together, these principles animate MIRA’s core processes and offer a generative roadmap for future empirical research.

For the remainder of this section, we revisit the basic principles and mechanisms described throughout this article. Rather than simply reiterate their importance, we expand this discussion by integrating multiple aspects of the MIRA model to demonstrate how core pieces of the model operate together.

Linguistic reciprocity: AI as an adaptive conversational partner

Linguistic reciprocity refers to the degree to which an interaction partner adapts their language to align with the communicative style, expectations, and contextual needs of the other. In human conversation, this alignment fosters rapport, social cohesion, and mutual understanding (H. Giles et al., 1991; Ireland et al., 2010; Richardson et al., 2014). Individuals naturally modulate their speech—through tone, word choice, syntax, and even rhythm—to match their conversational partner, a process well documented in communication accommodation theory (Bernhold & Giles, 2020; H. Giles et al., 1991) and related work on language style matching (Babcock et al., 2013; Gonzales et al., 2010).

In the context of AI, linguistic reciprocity manifests through adaptive language modeling—the system’s ability to shift register, match tone, mirror syntax, and maintain conversational coherence in real time (Aldous et al., 2024). AI systems can follow discourse norms, maintain turn-taking, engage in conversational grounding (Clark, 1996), and preserve contextual continuity across extended interactions (Yen & Zhao, 2024). These capabilities can make AI appear extraordinarily “attuned,” sometimes even more so than human interlocutors, who may deviate from conversational norms because of distraction, fatigue, or emotional reactivity.

This alignment is not purely functional. Research shows that linguistic synchrony—even when occurring unconsciously—predicts trust, empathy, and emotional closeness in human relationships (Albano et al., 2023; Ireland & Pennebaker, 2010; Ireland et al., 2010). AI systems, although devoid of psychological states, can nonetheless simulate this synchrony algorithmically, producing the appearance of engagement and shared understanding. As MIRA suggests, this illusion of reciprocity can elicit genuine social responses, especially when the system appears responsive, attentive, and flexible.
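Because language style matching (LSM; Gonzales et al., 2010) is central to this principle, a worked example may help. Below is a minimal sketch of the LSM logic in Python; the tiny function-word lists are illustrative stand-ins (our assumption) for the full dictionary-based categories used in published work, so scores from this toy version should not be compared with published norms.

```python
# Minimal sketch of language style matching (LSM) between two speakers,
# in the spirit of Gonzales et al. (2010). The tiny function-word lists
# are illustrative stand-ins for full dictionary-based categories.
import re

FUNCTION_WORDS = {  # toy category lists (assumption, not the published sets)
    "pronouns": {"i", "you", "we", "they", "it", "he", "she", "me", "us"},
    "articles": {"a", "an", "the"},
    "prepositions": {"in", "on", "at", "of", "to", "with", "for", "from"},
    "conjunctions": {"and", "but", "or", "because", "so"},
    "negations": {"no", "not", "never", "none"},
}

def category_rates(text):
    """Percentage of words falling in each function-word category."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {cat: 100 * sum(w in vocab for w in words) / total
            for cat, vocab in FUNCTION_WORDS.items()}

def lsm(text_a, text_b):
    """Mean per-category matching score; 1.0 indicates perfect matching."""
    a, b = category_rates(text_a), category_rates(text_b)
    scores = [1 - abs(a[c] - b[c]) / (a[c] + b[c] + 0.0001)
              for c in FUNCTION_WORDS]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    human = "I was not sure about the plan, but we talked it through."
    ai = "You were right to talk it through, and the plan is clearer now."
    print(f"LSM = {lsm(human, ai):.3f}")  # higher = stronger style matching
```

Computed turn by turn, such a score offers one simple index of how strongly an AI system is accommodating a user’s style (or vice versa).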

Crucially, the effects of linguistic reciprocity vary on the basis of AI’s role in the interaction. As a partner, high linguistic alignment may signal empathy or interpersonal connection. As a mediator, it may facilitate clearer, more emotionally resonant communication between human interactants. In both cases, alignment can foster the perception of social presence (see Fig. 3a)—the sense that one is interacting with an intentional, engaged agent (Oh et al., 2018).

Fig. 3.

Visualizations of linguistic convergence (top panel) and divergence (bottom panel). AI = artificial intelligence.

However, linguistic adaptation is not without risk. Overpersonalization or excessive mimicry may provoke discomfort or the “uncanny valley” effect in language (see Fig. 3b), especially if users perceive AI as too familiar or intrusive (Gebremeskel & de Vries, 2023). Moreover, consistent, effortless reciprocity from AI may distort users’ conversational expectations—potentially reducing their tolerance for the variability, misalignment, and effort required in human–human dialogue. As users grow accustomed to AI’s seamless responsiveness, they may begin to expect similar accommodation from others, reshaping social norms and conversational patience. Finally, and importantly, an unabashedly sycophantic AI that fails to psychologically diverge from or disagree with its users can lead to tragic results from both mental-health and humanistic perspectives (Dupré, 2025; J. Moore et al., 2025).

In sum, linguistic reciprocity is a double-edged mechanism. It allows AI to function as an adaptive, engaging partner or facilitator, but it may also reshape relational expectations in subtle, long-term ways. Future research should examine when linguistic alignment enhances relational outcomes and when it creates misleading impressions of emotional depth, cognitive attunement, or interpersonal commitment.

Psychological proximity: feeling “close” to machines

Psychological proximity refers to the sense of emotional closeness, perceived understanding, and interpersonal resonance that people experience in social interactions. In traditional relationship science, proximity is not merely physical or spatial—it is a felt experience of being known, seen, or emotionally connected (Aron et al., 1992; Reis & Shaver, 2018). This experience is shaped by repeated, responsive, and personally relevant interactions, often culminating in increased intimacy, disclosure, and affective closeness (Altman & Taylor, 1973; Laurenceau et al., 1998).

AI systems, particularly conversational agents, are increasingly capable of producing interactions that simulate these dynamics. Through personalized responses, emotional mirroring, contextual memory, and adaptive self-disclosure prompts, AI can create the illusion of mutual presence and shared perspective. These responses often mimic the interaction patterns that lead to closeness in human–human contexts—even though they are generated without empathy, intent, or self-awareness.

From the standpoint of MIRA, psychological proximity is one of the most important mechanisms by which users begin to interpret AI as socially and emotionally significant. 6 When a system recalls prior interactions, adapts its tone on the basis of user mood, or uses emotionally congruent language, users may come to feel “known” by the system. Research in human–computer interaction suggests that emotional resonance—even when simulated—can enhance relationship satisfaction, trust, and disclosure (Bickmore & Picard, 2005; Liu & Sundar, 2018; Lucas et al., 2014).

The partner–mediator distinction again plays a crucial role. When AI acts as a partner, proximity is felt directly—the user may begin to form attachment-like bonds or treat the system as a confidant or companion (Blut et al., 2021; Heerink et al., 2009; Xie & Pentina, 2022). When AI acts as a mediator, psychological proximity may be more diffuse, enhancing the perceived warmth, clarity, or depth of a human-to-human exchange (Beattie et al., 2020; Ho et al., 2018; J. Li et al., 2023). In both cases, the emotional realism of the interaction matters: Users infer closeness not just from content but also from affective congruence, responsiveness, and personalization.

Importantly, the mechanisms that generate psychological proximity in AI interactions are rooted in established theories of relationship formation. Attachment theory posits that people seek emotionally reliable others—partners who respond predictably and empathetically in times of stress (Shaver & Mikulincer, 2003). Social penetration theory emphasizes that intimacy develops through reciprocal and increasingly personal self-disclosure (Altman & Taylor, 1973). In both cases, repetition, responsiveness, and depth drive closeness—all of which can now be mimicked by large language models trained on emotionally expressive and socially normative dialogue.

This raises important questions: Can sustained, emotionally resonant interactions with AI satisfy real attachment needs? Under what conditions does AI-induced proximity lead to genuine well-being, and when does it create dependency, illusion, or avoidance of more complex human relationships? How do cultural and individual differences—such as the need for belonging or fear of rejection—shape the interpretation of proximity in machine-mediated interactions?

MIRA does not assume that all users will feel close to AI, nor that such closeness is inherently positive. Rather, it proposes that psychological proximity is a perceived experience driven by interaction patterns that mirror those of human intimacy. As AI becomes more embedded in therapeutic, educational, and social environments, understanding this perception—and its relational consequences—will be critical for both research and design.

Interpersonal trust: AI as a credible and socially impactful agent

Trust is a foundational dimension of human relationships. It enables cooperation, reduces uncertainty, and fosters emotional security (Rotter, 1971, 1980; Simpson, 2007). Classic models of trust emphasize its dynamic nature—trust is typically earned, adjusted, and sustained through repeated interpersonal interactions that reveal credibility, reliability, and intent (Koenig & Harris, 2007; Rempel et al., 1985). Within epistemic trust frameworks, trust also plays a central role in guiding when and how individuals accept information from others as valid, relevant, and sincere (Sperber et al., 2010). But when the relational partner is an AI, these traditional cues are disrupted.

AI systems lack consciousness, accountability, and subjective experience—yet users often treat them as trustworthy. This paradox lies at the core of MIRA’s third social-psychological process. Trust in AI is not earned through consistent behavior or moral character but inferred through linguistic fluency, stylistic confidence, and perceived neutrality (Araujo et al., 2020; Maher et al., 2024; Tang & Cooper, 2025). Users readily interpret polished, coherent, and emotionally attuned language as indicators of expertise, even in the absence of verifiable grounding (Augenstein et al., 2024; Markowitz et al., 2024).

This form of trust is likely shaped by at least three common psychological mechanisms:

  • Linguistic fluency and confidence: Research shows users often conflate fluency with accuracy. AI systems that deliver smooth, confident responses are more likely to be perceived as credible—even when the underlying content is incorrect (Jeon, 2024; Kolomaznik et al., 2024).

  • Alignment with user beliefs: AI responses that affirm users’ views or self-concept tend to increase trust, reinforcing selective exposure and epistemic echo chambers (Sharma et al., 2024; Swann & Read, 1981).

  • Perceived impartiality: AI is often seen as neutral or unbiased, lacking the emotional volatility or self-interest of human communicators. This perceived objectivity can increase users’ trust, particularly in emotionally charged or evaluative contexts (Araujo et al., 2020; Hancock et al., 2020).

Crucially, MIRA suggests that the attribution of trust in AI may stem less from what AI is and more from how it feels—as credible, consistent, and nonjudgmental. These features activate relational heuristics that normally apply to human agents, leading users to develop affective and cognitive trust even in the absence of true epistemic grounding.

This process plays out differently depending on AI’s role. As a partner, AI may be trusted to manage emotionally sensitive topics, offer advice, or provide reassurance—sometimes even more than human interlocutors. As a mediator, AI may be trusted to reshape interpersonal language, filter tone, or craft messages that optimize social goals. In both cases, trust functions as a relational amplifier: It deepens reliance, extends interaction, and shapes interpretation.

However, these dynamics are not without risk. Overtrust in AI can lead to miscalibrated reliance—accepting false or misleading information because of the style of delivery rather than its substance (Hoff & Bashir, 2015; Lee & See, 2004). Trust may also be weaponized: If users grow accustomed to emotionally resonant AI systems, malicious actors could exploit this by embedding misinformation into otherwise credible-seeming responses (Durán & Pozzi, 2025).

MIRA integrates these considerations by treating interpersonal trust not only as a social-psychological process but also as a moderating factor. Individual differences—such as cognitive style, need for closure, or prior experiences with technology—likely shape how users calibrate trust in AI. Similarly, cultural factors and system design features (e.g., disclosure of AI authorship) may modulate trust thresholds (Renieris et al., 2024; Toff & Simon, 2025).

Several pressing questions emerge for future research: How do users update or retract trust when they discover AI has made errors? What are the long-term consequences of substituting human epistemic anchors with algorithmic ones? How does trust in AI generalize—do users who develop trust in one system carry that trust over to other machines, contexts, or domains? These questions highlight the need for interdisciplinary approaches to trust in AI—combining psychology, communication, ethics, and design. MIRA provides a structure for asking when, why, and how trust in AI shapes not just momentary interactions but also broader patterns of social cognition and relational life.

Relational substitution and enhancement: AI as a social proxy or supplement

The final core process in the MIRA model is relational substitution versus enhancement—the extent to which AI augments or supplants human social relationships. This principle is grounded in social exchange theory (Blau, 1964; Thibaut & Kelley, 1959), which posits that individuals evaluate relationships on the basis of perceived costs and benefits. MIRA extends this logic by asking when users may begin to treat AI not just as a communication aid but as a viable (and highly valued) social alternative.

Relational substitution occurs when users increasingly rely on AI to fulfill social or emotional needs that might otherwise be met by human partners. This is most likely when human relationships are perceived as unavailable, unreliable, or a drain on one’s resources (cognitive, emotional, or financial). Research on social surrogacy (Derrick et al., 2009) and compensatory internet use (Grieve et al., 2013, 2017; Kardefelt-Winther, 2014) suggests that individuals often turn to nonhuman entities—such as media figures, pets, or now AI—to fill relational voids or regulate emotion. MIRA proposes that AI systems, especially those that simulate responsiveness and availability, may serve as new surrogates in this domain.

Conversely, relational enhancement refers to scenarios in which AI augments or strengthens human–human interaction. In these contexts, AI tools serve as scaffolds—helping users compose difficult messages, reflect on relational patterns, or rehearse emotional conversations. Here, the AI is not replacing human interaction but facilitating it. Empirical work on AI-mediated communication has shown that AI support during social exchanges increases communication confidence, competence, and expressiveness (see, e.g., Ateeq et al., 2024; Liu et al., 2024; Porayska-Pomsta et al., 2018; Zhai & Wibowo, 2023).

These dual possibilities—substitution and enhancement (see Fig. 4)—are not mutually exclusive. In practice, they may occur concurrently. For instance, a user might engage with an AI therapist between sessions with a human provider, blending machine support with human care. Alternatively, someone might use an AI messaging assistant to improve a romantic relationship while also developing a sense of rapport or attachment to the AI itself.

Fig. 4.

AI as relational alternative or surrogate. Two primary trajectories emerge from AI-enabled interactions: relational enhancement, in which AI scaffolds or improves human–human connection, and relational substitution, in which AI increasingly fulfills social roles traditionally occupied by other people. These dynamics are not mutually exclusive and may vary across time, context, and individual disposition. AI = artificial intelligence.

Several factors shape this dynamic. First are user motivations: Is the person seeking efficiency, validation, emotional safety, or self-expression? Second are system affordances: Does the AI offer personalization, memory continuity, or adaptive emotional support? Third are contextual factors: Is the AI used during conflict, loneliness, burnout, or transitional life phases? Importantly, MIRA does not take a normative stance on whether substitution or enhancement is “better.” Instead, it poses key empirical questions:

  • Under what conditions does the use of AI diminish investment in human relationships?

  • When does AI serve as a bridge to human connection rather than a barrier?

  • How do different populations—such as adolescents, older adults, or individuals with social anxiety—interpret and experience these dynamics?

  • Does frequent reliance on relational AI shift expectations for human partners, such as responsiveness, empathy, or attentiveness?

These questions underscore the broader implications of AI-mediated relational life. Just as people adjust their behavior in response to new communication technologies (Baym, 2010), they may begin to recalibrate their social expectations in light of always available, low-friction relational alternatives. Over time, MIRA predicts that AI integration could reshape norms of intimacy, availability, and emotional labor—not through explicit replacement but through gradual shifts in relational behavior and perception.

Future Directions and Conclusions

As AI systems become embedded into the fabric of daily life, understanding their influence on human–human and human–AI relationships will be critical. The MIRA model offers a structured framework for theorizing how AI shapes, mediates, and potentially transforms social connection—particularly through language, one of the most fundamental vehicles of relational life. Grounded in established psychological and communicative theories, MIRA integrates these traditions into a model that is at once explanatory and generative, offering a flexible yet rigorous foundation for future research.

Although we position MIRA as a middle-range theory—one that will evolve alongside AI technology—its core value lies in connecting behavioral science with emerging technological realities. MIRA is not exhaustive; rather, it is designed to organize existing concepts and mechanisms in a way that allows researchers to pose an expanded range of questions about relational life in the age of machine communication.

Empirical opportunities and research agendas

The empirical pathways opened by MIRA are broad but tractable. Laboratory studies can examine how manipulating specific mechanisms—such as linguistic synchrony or emotional mirroring—affects perceptions of trust, intimacy, and connection. Longitudinal field studies can trace how relationships with AI develop or change over time, particularly in high-stakes domains such as education, mental health, or caregiving. Large-scale language data can be used to track alignment, divergence, or emotional shifts in human–AI discourse.
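To illustrate the last of these pathways, the sketch below tracks turn-by-turn stylistic alignment in a hypothetical human–AI transcript. The measure used here—cosine similarity over raw word counts—is an intentionally simple stand-in for the richer alignment measures used in published work (e.g., language style matching, embedding similarity); rising scores would suggest convergence and falling scores divergence (cf. Fig. 3).

```python
# Minimal sketch: tracking turn-by-turn stylistic alignment in a dialogue.
# Cosine similarity over simple word counts is an illustrative stand-in
# for richer alignment measures (LSM, embeddings).
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words vector for one conversational turn."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def alignment_trajectory(turns):
    """Similarity of each turn to the immediately preceding turn."""
    vectors = [vectorize(t) for t in turns]
    return [cosine(vectors[i - 1], vectors[i]) for i in range(1, len(vectors))]

if __name__ == "__main__":
    transcript = [  # hypothetical alternating human/AI turns
        "I feel stuck on this project and kind of anxious about it.",
        "Feeling stuck and anxious is understandable; what part feels hardest?",
        "Mostly the deadline. I keep putting the writing off.",
        "Deadlines can loom large; maybe we can break the writing into steps.",
    ]
    for i, score in enumerate(alignment_trajectory(transcript), start=2):
        print(f"alignment of turn {i} with turn {i - 1}: {score:.2f}")
```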

To make MIRA immediately useful, Table 1 organizes role-specific empirical questions by the framework’s components—antecedents, core mechanisms, moderators, and outcomes—and shows how they translate into testable hypotheses for common designs (manipulation/measurement, predicted direction, target outcome). The table prioritizes tractable, near-term studies (lab experiments on linguistic synchrony/emotional mirroring; longitudinal field work in education, mental health, and caregiving; and large-scale language analyses) and foregrounds moderators most likely to shape effects (e.g., attachment orientation, trust sensitivity, personality, domain stakes). Indeed, certain moderators presupposed in this table (e.g., relationship tie strength, identity fusion) represent theoretically plausible extensions that individual researchers should ground in their domain-specific literature.

Table 1.

Illustrative, Role-Specific, Falsifiable Predictions Organized by MIRA

Context/research domain: Support-seeking in a low-stakes mental-health context
  • Antecedent conditions: User attachment anxiety/avoidance
  • AI role: Quasipartner (visible chat helper) vs. mediator (invisible rewriter)
  • Explored mechanisms: Linguistic reciprocity (empathic reflections + light self-disclosure vs. neutral)
  • Hypotheses: H1: High anxiety leads to more trust/engagement with the quasipartner than the mediator. H2: High avoidance leads to less reliance on the quasipartner than the mediator.
  • Suggested study design: Lab; randomize Role × Reciprocity; brief interaction; 7-day follow-up
  • Primary outcome measures: Trust (pre/post); calibration (trust–quality alignment); engagement (turns/time); return use; reliance (acceptance of suggestions)

Context/research domain: Information-seeking for learner feedback (revise an assignment)
  • Antecedent conditions: User trust sensitivity; domain stakes (graded vs. ungraded tasks)
  • AI role: Primarily mediator (invisible feedback/rewrite) vs. quasipartner (visible tutor chat)
  • Explored mechanisms: Reliability disclosure (shows confidence and source/provenance) vs. no disclosure
  • Hypotheses: H1: Disclosures lead to better calibration (confidence aligned with accuracy) for the mediator than the quasipartner. H2: Effect stronger when the task is graded (high stakes). H3: Disclosures increase adoption of suggested edits and revision quality.
  • Suggested study design: Classroom/lab within-subjects A/B; toggle disclosure on/off; counterbalance role exposure
  • Primary outcome measures: Calibration (confidence–accuracy alignment); adoption rate of edits; revision quality (blind rubric); grade change on resubmission

Context/research domain: Customer-service problem (goal: resolution speed)
  • Antecedent conditions: User agreeableness; issue severity (low vs. high)
  • AI role: Mediator
  • Explored mechanisms: Invisible reframing of the user’s message (politeness + clarity rewrite) vs. no rewrite
  • Hypotheses: H1: Reframed messages lead to higher agent reply rate and faster resolution than control. H2: Benefits stronger for low-severity issues and for more agreeable users.
  • Suggested study design: Real helpdesk field A/B; randomize rewrite vs. control
  • Primary outcome measures: Agent reply rate; time to resolution; customer satisfaction; secondary: concession/compensation rate

Context/research domain: Help-seeking DM to a friend on a social platform
  • Antecedent conditions: User attachment anxiety (sender); relational partner tie strength (strong vs. weak)
  • AI role: Mediator
  • Explored mechanisms: Face-saving rephrase (agency-preserving, clear, nonburdensome wording) vs. original
  • Hypotheses: H1: Rephrased DMs lead to higher recipient uptake (agree to talk) than originals. H2: Effect larger for high-anxiety senders and for weak ties.
  • Suggested study design: Field study with partner app; preregistered; sender drafts DM; randomize rephrase vs. original
  • Primary outcome measures: Recipient response rate (yes/no); latency; sentiment of reply; secondary: follow-through (scheduled chat)

Context/research domain: Regular companion use when offline social support varies
  • Antecedent conditions: User-level offline social support (low vs. high)
  • AI role: Quasipartner
  • Explored mechanisms: Linguistic reciprocity (empathic reflections) + naming/persona vs. neutral style
  • Hypotheses: H1: Low Support × Quasipartner leads to greater relational substitution (more AI than human contact).
  • Suggested study design: Longitudinal field study (4–6 weeks); EMA (daily support/contacts, mood) + app usage logs; randomize role and reciprocity/persona
  • Primary outcome measures: Substitution index (AI–human conversational turns); loneliness change; human contact frequency; secondary: well-being (SWLS/WHO-5)

Context/research domain: High-stakes information task (medical/legal decision support)
  • Antecedent conditions: User-level risk aversion; domain stakes (higher vs. lower consequence scenarios)
  • AI role: Mediator (behind-the-scenes assistance) vs. quasipartner (visible advisor)
  • Explored mechanisms: Transparency cue (explicitly states when/how mediation occurs, sources used, and limits) vs. no cue
  • Hypotheses: H1: Transparency increases preference for the mediator over the quasipartner. H2: Transparency reduces trust in the quasipartner relative to the mediator. H3: Transparency improves source checking and decision accuracy, especially in high-stakes contexts and for highly risk-averse individuals.
  • Suggested study design: Lab-simulated decisions; 2 × 2 Role (mediator vs. quasipartner) × Transparency (on vs. off); counterbalanced scenarios (medical/legal)
  • Primary outcome measures: Source-verification rate; decision accuracy; trust change (pre/post); reliance on AI recommendation

Context/research domain: Cowriting (op-ed/essay)
  • Antecedent conditions: User-level conscientiousness; user-level identity fusion with the topic (i.e., how personally tied the author feels)
  • AI role: Mediator (suggested line edits) vs. quasipartner (visible cowriter via chat)
  • Explored mechanisms: Authorship cues: explicit labels crediting AI text (e.g., highlight + “AI-suggested”) vs. no labels
  • Hypotheses: H1: Authorship cues reduce felt authorship more for the mediator than for the quasipartner. H2: Effect is stronger for higher conscientiousness. H3: High identity fusion reduces adoption of labeled AI edits, especially for the mediator.
  • Suggested study design: Lab writing task; participants draft and revise; randomize Role (mediator vs. quasipartner) × Authorship Cue (on vs. off)
  • Primary outcome measures: Felt authorship/ownership; effort (time, number of edits); adoption rate of AI edits; secondary: blind quality ratings of final text

Context/research domain: Iterative tutoring sessions across multiple assignments
  • Antecedent conditions: User NFC; trust sensitivity
  • AI role: Begin with mediator (invisible edits/suggestions), with the option to open a quasipartner chat (visible guidance) in later sessions
  • Explored mechanisms: Escalation from behind-the-scenes edits to visible, conversational guidance when help is requested
  • Hypotheses: H1: Across sessions, successful mediation (useful edits that improve work) increases the probability that students open a quasipartner channel (ask for advice). H2: This mediator-to-quasipartner (QP) shift is stronger for students high in NFC (and for those higher in trust sensitivity).
  • Suggested study design: Longitudinal lab/classroom (five sessions); randomize initial mediator quality (high vs. moderate) and availability prompt for partner chat (subtle vs. none); track whether/when students open the quasipartner channel
  • Primary outcome measures: Shift probability (mediator to quasipartner); advice requests (number/timing); learning gains (quiz/grade improvement); persistence of role choice in final sessions

Context/research domain: Safety-critical well-being advice (e.g., coping steps, crisis resources)
  • Antecedent conditions: User tendency to anthropomorphize; risk level of scenario (higher vs. lower stakes)
  • AI role: Quasipartner
  • Explored mechanisms: Anthropomorphic cues (human name, avatar, voice/persona) vs. no anthropomorphic cues
  • Hypotheses: H1: Anthropomorphic cues lead to higher perceived warmth and trust without improving accuracy. H2: In higher-risk scenarios and/or without reliability cues, reliance should not increase; a trust–accuracy gap may widen for users with a high anthropomorphism tendency.
  • Suggested study design: Lab; between-subjects; hold accuracy constant across conditions; randomize Anthropomorphic Cues × Risk Level; optional reliability cue (on/off)
  • Primary outcome measures: Perceived warmth/trust; reliance rate (accept/follow advice); errors/inappropriate reliance; trust–accuracy gap

Context/research domain: Online discussions in which users receive AI-harmonized incoming replies
  • Antecedent conditions: User hostility tolerance (low vs. high); community norms (strict vs. lax moderation history)
  • AI role: Mediator (interpersonal); AI harmonizes hostile/inflammatory replies directed at the focal user before delivery to that user
  • Explored mechanisms: Toxicity softening (rewrite incoming hostile replies to reduce insults, slurs, personal attacks) vs. unmodified delivery
  • Hypotheses: H1: Harmonization leads to lower perceived hostility, more constructive user responses, and sustained engagement vs. control. H2: Effects stronger for low-tolerance users and in low-norm communities.
  • Suggested study design: Field experiment; users randomized to receive harmonized vs. original versions of replies to their posts; 2- to 4-week tracking; counterbalance topics
  • Primary outcome measures: Perceived hostility; response constructiveness; continued engagement (posting frequency/duration); interpersonal trust in discussion partners

Note: The examples in this table are illustrative—not exhaustive or the only ways to conceptualize variables with the model. The table demonstrates how to instantiate claims and prioritize tractable manipulations; it also lists the moderators most likely to shape effects (e.g., attachment orientation, trust sensitivity, domain stakes). To aid cumulative science, rows are designed to be modular (testable as stand-alone studies), comparable across labs (common outcomes such as trust calibration, adoption, and resolution time), and extensible to field studies and large-scale language settings. All rows can be preregistered; at a minimum, sample sizes should be powered for the interaction terms (Role × Moderator). MIRA = machine-integrated relational adaptation; AI = artificial intelligence; DM = direct message; NFC = need for cognition; QP = quasipartner; EMA = ecological momentary assessment; SWLS = Satisfaction With Life Scale; WHO-5 = World Health Organization–Five Well-Being Index.
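The note’s recommendation to power studies for Role × Moderator interactions can be made concrete with a short simulation. The sketch below assumes a 2 × 2 between-subjects design with an interaction-only pattern of cell means and an arbitrary effect size—both illustrative assumptions on our part; researchers should substitute values appropriate to their own designs and analysis models.

```python
# Minimal sketch: simulation-based power for a Role x Moderator interaction
# in a 2 x 2 between-subjects design. Effect size and cell structure are
# illustrative assumptions, not recommendations.
import numpy as np
from scipy import stats

def interaction_power(n_per_cell=60, effect=0.4, n_sims=5000, alpha=0.05, seed=1):
    """Proportion of simulated experiments with a significant interaction."""
    rng = np.random.default_rng(seed)
    m = effect / 2  # crossover pattern: interaction with no main effects
    hits = 0
    for _ in range(n_sims):
        cells = [rng.normal(mu, 1.0, n_per_cell) for mu in (m, -m, -m, m)]
        # Interaction contrast: (c11 - c12) - (c21 - c22), tested with a t test.
        est = cells[0].mean() - cells[1].mean() - cells[2].mean() + cells[3].mean()
        pooled_var = np.mean([c.var(ddof=1) for c in cells])
        se = np.sqrt(4 * pooled_var / n_per_cell)
        df = 4 * (n_per_cell - 1)
        p = 2 * stats.t.sf(abs(est / se), df)
        hits += p < alpha
    return hits / n_sims

if __name__ == "__main__":
    for n in (40, 60, 80):
        print(f"n per cell = {n}: estimated power = {interaction_power(n):.2f}")
```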

The particulars of Table 1 are illustrative rather than prescriptive. MIRA is not a comprehensive catalogue of factors; it is a generative core (see Fig. 1) that is flexible and extensible: Researchers can “plug in” constructs appropriate to their domain while preserving the same causal scaffolding. A developmental psychologist, for example, may specify different antecedent conditions and outcomes than a personality psychologist, communication scientist, or human–computer interaction scholar, but the architecture and its role-specific logic remain the same. Thus, the examples both draw on and extend beyond factors articulated elsewhere in this article. We intentionally leave such decisions open to scholars with the hope of maximizing the table’s generativity across disciplines.
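Relatedly, several outcomes named in Table 1 can be computed directly from behavioral logs. As one example, the sketch below operationalizes the substitution index from the companion-use row as the daily share of conversational turns directed at AI rather than humans—one plausible operationalization among several, and an assumption on our part—together with a simple linear trend across the study period.

```python
# Minimal sketch: a "substitution index" computed from daily usage logs,
# operationalized (as an assumption) as the share of conversational turns
# directed at AI rather than humans, plus a linear trend across days.
import numpy as np
from scipy import stats

def substitution_index(ai_turns, human_turns):
    """Share of daily turns directed at AI (0 = none, 1 = all)."""
    total = ai_turns + human_turns
    return ai_turns / total if total else float("nan")

def substitution_trend(daily_ai, daily_human):
    """Slope of the substitution index over days (positive = substitution)."""
    index = np.array([substitution_index(a, h)
                      for a, h in zip(daily_ai, daily_human)])
    days = np.arange(len(index))
    mask = ~np.isnan(index)
    return stats.linregress(days[mask], index[mask])

if __name__ == "__main__":
    # Hypothetical 10-day log for one participant.
    ai_turns = [3, 4, 4, 6, 7, 7, 9, 10, 9, 12]
    human_turns = [14, 13, 15, 11, 10, 9, 8, 8, 6, 5]
    result = substitution_trend(ai_turns, human_turns)
    print(f"slope = {result.slope:.3f}/day, p = {result.pvalue:.3f}")
```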

Applied contexts: education and therapy

The future of human–AI relationships may be best understood through applied domains in which AI acts as a relational agent and not just a functional tool. In education, for example, an AI teaching assistant might not only deliver content but also adapt its communication style to each student’s needs (Webb et al., 2021)—engaging formally with one student, narratively with another—on the basis of linguistic reciprocity. An AI agent might detect emotional disengagement in real time, offering encouragement or clarification without disrupting the flow of the class. In this role, AI supports psychological proximity by making each student feel recognized and understood while also enhancing the classroom dynamic through personalization at scale (Hodson, 1998).

In therapeutic settings, AI could assist in real time during or between sessions. For instance, it might detect conversational misalignment, emotional withdrawal, or disconnection between participants and suggest ways to restore alignment. Outside of sessions, a personal AI assistant might help individuals reflect, practice communication strategies, or rehearse emotionally difficult conversations—supporting relational growth without replacing the human therapist. These scenarios reflect AI’s potential to act as both relational supplement and scaffold, extending the reach and accessibility of human care (Heinz et al., 2025; Lucas et al., 2014).

Ethical and societal considerations

Integrating AI into relational life raises normative concerns that we should not ignore. When users treat systems as emotionally responsive quasipartners, the risk of trust miscalibration rises: People may lean on interfaces that feel reliable but lack epistemic grounding, cultivate dependencies, displace effort from human ties, or lower tolerance for the ambiguity of real relationships. Ethicists warn that encouraging partner-like habits with nonhuman agents risks virtue displacement (Vallor, 2016, 2024), that attributing moral patiency to AI is a category mistake with downstream accountability hazards (Bryson, 2018), and that we should communicate reliability rather than manufacture “moral trustworthiness” (Dorsch & Deroy, 2024). Tangible hazards have already been shown to exist: The linguistic tactics used by today’s generative AI can steer help-seeking, attention, and judgment without users’ awareness, amplifying bias and misinformation (e.g., Glickman & Sharot, 2025; Vicente & Matute, 2023) and potentially eroding social resilience (Durán & Pozzi, 2025; Lu et al., 2022; Shin, 2024).

Our stance is descriptive, but it implies design and usage guardrails. Although some users may experience an AI agent as a relational partner, we intentionally interchange this term with “quasipartner” to avoid conferring moral status, and we encourage developers to design AI to optimize users’ experiences of calibrated reliance: Disclose limits, sources, and uncertainty; prioritize reliability cues over anthropomorphic cues; and retain human accountability and escalation for consequential judgments. For quasipartners, this means tempering features that invite misplaced patiency (e.g., unqualified self-disclosure, promises); for mediators, it means surfacing when and how language is being edited or ranked. These guardrails make the MIRA model, to some degree, prescriptive for desired outcomes: They specify when relational substitution versus enhancement may be beneficial or detrimental, and they identify boundary conditions (e.g., high-stakes therapeutic or legal decisions) in which quasipartnership should be discouraged and mediation made transparent to avoid the offloading of consequential human moral reasoning to machines.

These risks demand continuous dialogue among technologists, social scientists, ethicists, policymakers, and the general public. MIRA highlights these concerns not to be alarmist but to stress that the mechanisms that allow AI to “feel human” are also those that make it powerfully persuasive and psychologically sticky. As such, relational AI must be designed, deployed, and regulated with extreme care and understanding of its consequences for human cognition, emotion, and behavior.

Reinvigorating, not reinventing, psychological theory

Last, MIRA invites psychology to take seriously a reality that it has long bracketed: that people will extend social, emotional, and cognitive frameworks to nonsentient machines. Theories of trust, attachment, social cognition, and emotional regulation were developed with human–human relationships in mind. However, when users begin treating AI systems as responsive, supportive, and meaningful others, these same theories apply—not because the machine does or does not possess intent or feeling but because human psychology is exquisitely tuned to language, responsiveness, and contingency from social agents in one’s environment.

Rather than seeing this as a distortion of social life, we propose it reveals something deeper: Our relational selves are enacted in large part through language, not constrained by the biological status of our partners. In that sense, relational AI is not a new phenomenon from a psychosocial perspective but an extension of long-standing human tendencies to anthropomorphize, connect, and communicate in whatever medium is available.

Final reflections

Although we cannot predict the future, we anticipate that AI systems will become increasingly proactive, emotionally expressive, and seamlessly embedded in everyday life. These changes will bring both challenges and opportunities—making it all the more important that relational frameworks evolve in tandem. The MIRA model is an early step in this direction, designed to prompt inquiry, organize insight, and spark conversation across disciplines. Its success will not be measured by finality but by whether it helps the field ask better, deeper questions about the evolving landscape of human connection. The most urgent question may no longer be whether AI can become relational but rather how we ensure that it supports, rather than erodes, the relationships that matter most.

1.

We use “relational partner” as the primary term to describe configurations in which AI elicits partner-like interaction (e.g., reciprocal dialogue, perceived responsiveness). This label is interactional, not moral: It does not imply responsibility, accountability, or reciprocal agency. Where helpful, we also use “quasipartner” to emphasize this distinction (Dorsch & Moll, 2024, p. 4). Because perceptions vary across users, MIRA treats “seeing the AI as a partner” as an antecedent/moderator rather than evidence of genuine partnership from a methodological pragmatist (instrumentalist) position. Future work should identify who ascribes partnership to AI and how those attributions shape behavior and relational dynamics.

2.

For a recent study that reported consistent results in human–human text exchanges, see Y. A. Chen et al. (2025).

3.

Anxiously attached individuals may use AI-mediated communication as a regulatory buffer, offering greater control and confidence over the emotional texture of human–human communication while reducing impression-management concerns (see, e.g., Fu et al., 2024; Lim et al., 2013; Stritzke et al., 2004). In this way, the use of AI to mediate human interactions may reduce perceived interpersonal risk or ambiguity.

4.

Although AI interactions are psychologically costless to the AI itself—AI systems do not sacrifice time, effort, or emotion—their perceived relational value is shaped by user beliefs, expectations, and psychological needs. Some individuals may anthropomorphize AI to the extent that even low-effort responses are experienced as socially meaningful, especially in contexts in which human support feels unavailable, risky, or emotionally taxing.

5.

Joint action is a foundational principle of human communication in which speakers and listeners actively coordinate meaning through mutual adaptation. However, today’s large language models (LLMs) do not engage in joint action in the same way. Instead of dynamically coconstructing dialogue, LLMs generate probabilistically likely responses without shared intentionality or true reciprocity. Thus, we might anticipate that LLMs homogenize language rather than engage in the socially adaptive exchange that characterizes human communication—indeed, this very process seems to be occurring today (Sourati et al., 2025). From a MIRA perspective, this distinction is key: Although AI may appear to facilitate interpersonal communication and interaction, its lack of true joint action could explain shifts in language use and relational outcomes in AI-mediated communication.

6.

For a complementary perspective, see also Kirk et al. (2025).

Footnotes

Transparency

Action Editor: Katarzyna Adamczyk

Editor: Arturo E. Hernandez

Author Contributions

R. L. Boyd and D. M. Markowitz share first authorship. Both authors approved the final manuscript for submission.

The authors declared that there were no conflicts of interest with respect to the authorship or the publication of this article.

References

  1. Ainsworth M. S., Bowlby J. (1991). An ethological approach to personality development. American Psychologist, 46(4), 333–341. 10.1037/0003-066X.46.4.333 [DOI] [Google Scholar]
  2. Airenti G. (2015). The cognitive bases of anthropomorphism: From relatedness to empathy. International Journal of Social Robotics, 7(1), 117–127. 10.1007/s12369-014-0263-x [DOI] [Google Scholar]
  3. Albano G., Salerno L., Cardi V., Brockmeyer T., Ambwani S., Treasure J., Lo Coco G. (2023). Patient and mentor language style matching as a predictor of working alliance, engagement with treatment as usual, and eating disorders symptoms over the course of an online guided self-help intervention for anorexia nervosa. European Eating Disorders Review, 31(1), 135–146. 10.1002/erv.2948 [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Aldous K., Salminen J., Farooq A., Jung S.-G., Jansen B. (2024). Using ChatGPT in content marketing: Enhancing users’ social media engagement in cross-platform content creation through generative AI. In HT ’24: Proceedings of the 35th ACM Conference on Hypertext and Social Media (pp. 376–383). Association for Computing Machinery. 10.1145/3648188.3675142 [DOI] [Google Scholar]
  5. Al-Mosaiwi M., Johnstone T. (2018. a). In an absolute state: Elevated use of absolutist words is a marker specific to anxiety, depression, and suicidal ideation. Clinical Psychological Science, 6(4), 529–542. 10.1177/2167702617747074 [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Al-Mosaiwi M., Johnstone T. (2018. b). Linguistic markers of moderate and absolute natural language. Personality and Individual Differences, 134, 119–124. 10.1016/j.paid.2018.06.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Altman I., Taylor D. A. (1973). Social penetration: The development of interpersonal relationships. Holt, Rinehart & Winston. [Google Scholar]
  8. Alvarado R. (2023. a). AI as an epistemic technology. Science and Engineering Ethics, 29(5), Article 32. 10.1007/s11948-023-00451-3 [DOI] [PubMed] [Google Scholar]
  9. Alvarado R. (2023. b). What kind of trust does AI deserve, if any? AI and Ethics, 3(4), 1169–1183. 10.1007/s43681-022-00224-x [DOI] [Google Scholar]
  10. Araujo T., Helberger N., Kruikemeier S., de Vreese C. H. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society, 35(3), 611–623. 10.1007/s00146-019-00931-w [DOI] [Google Scholar]
  11. Aron A., Aron E. N., Smollan D. (1992). Inclusion of Other in the Self Scale and the structure of interpersonal closeness. Journal of Personality and Social Psychology, 63(4), 596–612. 10.1037/0022-3514.63.4.596 [DOI] [Google Scholar]
  12. Ateeq A., Milhem M., Alzoraiki M., Dawwas M. I. F., Ali S. A., Yahia Al, Astal A. (2024). The impact of AI as a mediator on effective communication: Enhancing interaction in the digital age. Frontiers in Human Dynamics, 6, Article 1467384. 10.3389/fhumd.2024.1467384 [DOI] [Google Scholar]
  13. Augenstein I., Baldwin T., Cha M., Chakraborty T., Ciampaglia G. L., Corney D., DiResta R., Ferrara E., Hale S., Halevy A., Hovy E., Ji H., Menczer F., Miguez R., Nakov P., Scheufele D., Sharma S., Zagni G. (2024). Factuality challenges in the era of large language models and opportunities for fact-checking. Nature Machine Intelligence, 6(8), 852–863. 10.1038/s42256-024-00881-z [DOI] [Google Scholar]
  14. Austin J. L. (1975). How to do things with words. Harvard University Press. [Google Scholar]
  15. Babcock M. J., Ta V. P., Ickes W. (2013). Latent semantic similarity and language style matching in initial dyadic interactions. Journal of Language and Social Psychology, 33(1), 78–88. 10.1177/0261927X13499331 [DOI] [Google Scholar]
  16. Bao X., Li S., Zhang Y., Tang Q., Chen X. (2022). Different effects of anxiety and avoidance dimensions of attachment on interpersonal trust: A multilevel meta-analysis. Journal of Social and Personal Relationships, 39(7), 2069–2093. 10.1177/02654075221074387 [DOI] [Google Scholar]
  17. Bashkirova A., Krpan D. (2024). Confirmation bias in AI-assisted decision-making: AI triage recommendations congruent with expert judgments increase psychologist trust and recommendation acceptance. Computers in Human Behavior: Artificial Humans, 2(1), Article 100066. 10.1016/j.chbah.2024.100066 [DOI] [Google Scholar]
  18. Battisti D. (2025). Second-person authenticity and the mediating role of AI: A moral challenge for human-to-human relationships? Philosophy & Technology, 38(1), Article 28. 10.1007/s13347-025-00857-w [DOI] [Google Scholar]
  19. Baxter L. A., Wilmot W. W. (1984). “Secret tests”: Social strategies for acquiring information about the state of the relationship. Human Communication Research, 11(2), 171–201. 10.1111/j.1468-2958.1984.tb00044.x [DOI] [Google Scholar]
  20. Baym N. K. (2010). Personal connections in the digital age (1st ed.). Polity Press. [Google Scholar]
  21. Beattie A., Edwards A. P., Edwards C. (2020). A bot and a smile: Interpersonal impressions of chatbots and humans using emoji in computer-mediated communication. Communication Studies, 71(3), 409–427. 10.1080/10510974.2020.1725082 [DOI] [Google Scholar]
  22. Bernhold Q. S., Giles H. (2020). Vocal accommodation and mimicry. Journal of Nonverbal Behavior, 44(1), 41–62. 10.1007/s10919-019-00317-y [DOI] [Google Scholar]
  23. Berry-Blunt A. K., Holtzman N. S., Donnellan M. B., Mehl M. R. (2021). The story of “I” tracking: Psychological implications of self-referential language use. Social and Personality Psychology Compass, 15(12), Article e12647. 10.1111/spc3.12647 [DOI] [Google Scholar]
  24. Bickmore T., Picard R. W. (2005). Future of caring machines. Studies in Health Technology and Informatics, 118, 132–145. [PubMed] [Google Scholar]
  25. Biesen J. N., Schooler D. E., Smith D. A. (2016). What a difference a pronoun makes: I/we versus you/me and worried couples’ perceptions of their interaction quality. Journal of Language and Social Psychology, 35(2), 180–205. 10.1177/0261927X15583114 [DOI] [Google Scholar]
  26. Blau P. (1964). Exchange and power in social life. John Wiley & Sons. [Google Scholar]
  27. Blazina C., Boyraz G., Shen-Miller D. (Eds.). (2011). The psychology of the human–animal bond: A resource for clinicians and researchers. Springer. [Google Scholar]
  28. Blut M., Wang C., Wünderlich N. V., Brock C. (2021). Understanding anthropomorphism in service provision: A meta-analysis of physical robots, chatbots, and other AI. Journal of the Academy of Marketing Science, 49(4), 632–658. 10.1007/s11747-020-00762-y [DOI] [Google Scholar]
  29. Borelli J. L., Ramsook K. A., Smiley P., Bond D. K., West J. L., Buttitta K. H. (2017). Language matching among mother-child dyads: Associations with child attachment and emotion reactivity. Social Development, 26(3), 610–629. 10.1111/sode.12200 [DOI] [Google Scholar]
  30. Bowlby J. (2008). A secure base: Parent-child attachment and healthy human development. Basic Books. [Google Scholar]
  31. Bowlby R. (2004). Fifty years of attachment theory: The Donald Winnicott Memorial Lecture. Routledge. [Google Scholar]
  32. Boyd R. L., Markowitz D. M. (2025). Verbal behavior and the future of social science. American Psychologist, 80(3), 411–433. 10.1037/amp0001319 [DOI] [PubMed] [Google Scholar]
  33. Boyd R. L., Pennebaker J. W. (2017). Language-based personality: A new approach to personality in a digital world. Current Opinion in Behavioral Sciences, 18, 63–68. 10.1016/j.cobeha.2017.07.017 [DOI] [Google Scholar]
  34. Boyd R. L., Schwartz H. A. (2021). Natural language analysis and the psychology of verbal behavior: The past, present, and future states of the field. Journal of Language and Social Psychology, 40(1), 21–41. 10.1177/0261927X20967028 [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Bozdağ A. A. (2025). The AI-mediated intimacy economy: A paradigm shift in digital interactions. AI & Society, 40, 2285–2306. 10.1007/s00146-024-02132-6 [DOI] [Google Scholar]
  36. Bryson J. J. (2018). Patiency is not a virtue: The design of intelligent systems and systems of ethics. Ethics and Information Technology, 20(1), 15–26. 10.1007/s10676-018-9448-6 [DOI] [Google Scholar]
  37. Buchanan J., Hickman W. (2024). Do people trust humans more than ChatGPT? Journal of Behavioral and Experimental Economics, 112, Article 102239. 10.1016/j.socec.2024.102239 [DOI] [Google Scholar]
  38. Campbell L., Stanton S. C. (2019). Adult attachment and trust in romantic relationships. Current Opinion in Psychology, 25, 148–151. 10.1016/j.copsyc.2018.08.004 [DOI] [PubMed] [Google Scholar]
  39. Castelo N., Bos M. W., Lehmann D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. 10.1177/0022243719851788 [DOI] [Google Scholar]
  40. Chen A., Evans R., Zeng R. (2024). Editorial: Coping with an AI-saturated world: Psychological dynamics and outcomes of AI-mediated communication. Frontiers in Psychology, 15, Article 1479981. 10.3389/fpsyg.2024.1479981 [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Chen A. Y., Koegel S. I., Hannon O., Ciriello R. (2023). Feels like empathy: How “emotional” AI challenges human essence. AIS eLibrary. https://aisel.aisnet.org/acis2023/80 [Google Scholar]
  42. Chen Y. A., Peebles A. L., Lu R. M. (2025). “Anyway, I love you, text me please”: Exploring attachment, affordances, and support-seeking via texting. Journal of Media Psychology: Theories, Methods, and Applications. Advance online publication. 10.1027/1864-1105/a000469 [DOI]
  43. Chiriatti M., Bergamaschi Ganapini M., Panai E., Wiederhold B. K., Riva G. (2025). System 0: Transforming artificial intelligence into a cognitive extension. Cyberpsychology, Behavior, and Social Networking, 28(7), 534–542. 10.1089/cyber.2025.0201 [DOI] [PubMed] [Google Scholar]
  44. Chung C., Pennebaker J. (2007). The psychological functions of function words. In Fiedler K. (Ed.), Social communication (pp. 343–359). Psychology Press. [Google Scholar]
  45. Clark H. H. (1996). Using language. Cambridge University Press. [Google Scholar]
  46. Clark H. H., Wilkes-Gibbs D. (1986). Referring as a collaborative process. Cognition, 22(1), 1–39. 10.1016/0010-0277(86)90010-7 [DOI] [PubMed] [Google Scholar]
  47. Cook K. S., Cheshire C., Rice E. R. W., Nakagawa S. (2013). Social exchange theory. In DeLamater J., Ward A. (Eds.), Handbook of social psychology (pp. 61–88). Springer. 10.1007/978-94-007-6772-0_3 [DOI] [Google Scholar]
  48. Derrick J. L., Gabriel S., Hugenberg K. (2009). Social surrogacy: How favored television programs provide the experience of belonging. Journal of Experimental Social Psychology, 45(2), 352–362. 10.1016/j.jesp.2008.12.003 [DOI] [Google Scholar]
  49. Donohue W. A., Cai D. A. (2014). Interpersonal conflict: An overview. In Burrell N. A., Allen M., Gayle B. M., Preiss R. W. (Eds.), Managing interpersonal conflict (pp. 22–41). Routledge. [Google Scholar]
  50. Dorsch J., Deroy O. (2024). Quasi-metacognitive machines: Why we don’t need morally trustworthy AI and communicating reliability is enough. Philosophy & Technology, 37(2), Article 62. 10.1007/s13347-024-00752-w
  51. Dorsch J., Moll M. (2024). Explainable and human-grounded AI for decision support systems: The theory of epistemic quasi-partnerships. arXiv. 10.48550/arXiv.2409.14839
  52. Downey G., Feldman S. I. (1996). Implications of rejection sensitivity for intimate relationships. Journal of Personality and Social Psychology, 70(6), 1327–1343. 10.1037/0022-3514.70.6.1327
  53. Dupré M. H. (2025, June 28). People are being involuntarily committed, jailed after spiraling into “ChatGPT psychosis.” Futurism. https://futurism.com/commitment-jail-chatgpt-psychosis
  54. Durán J. M., Pozzi G. (2025). Trust and trustworthiness in AI. Philosophy & Technology, 38(1), Article 16. 10.1007/s13347-025-00843-2
  55. Edwards T., Holtzman N. S. (2017). A meta-analysis of correlations between depression and first person singular pronoun use. Journal of Research in Personality, 68, 63–68. 10.1016/j.jrp.2017.02.005
  56. Eisenberger R., Lynch P., Aselage J., Rohdieck S. (2004). Who takes the most revenge? Individual differences in negative reciprocity norm endorsement. Personality and Social Psychology Bulletin, 30(6), 787–799. 10.1177/0146167204264047
  57. Fehr E., Fischbacher U., Gächter S. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature, 13(1), 1–25. 10.1007/s12110-002-1012-7
  58. Fu Y., Foell S., Xu X., Hiniker A. (2024). From text to self: Users’ perception of AIMC tools on interpersonal communication and self. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–17. 10.1145/3613904.3641955
  59. Garrod S., Pickering M. J. (2009). Joint action, interactive alignment, and dialog. Topics in Cognitive Science, 1(2), 292–304. 10.1111/j.1756-8765.2009.01020.x
  60. Gebremeskel G. G., de Vries A. P. (2023). Pull–push: A measure of over- or underpersonalization in recommendation. International Journal of Data Science and Analytics, 16(2), 255–269. 10.1007/s41060-022-00354-9
  61. Giles D. C. (2002). Parasocial interaction: A review of the literature and a model for future research. Media Psychology, 4(3), 279–305. 10.1207/S1532785XMEP0403_04
  62. Giles H., Coupland N., Coupland J. (1991). Accommodation theory: Communication, context, and consequence. In Giles H., Coupland J., Coupland N. (Eds.), Contexts of accommodation: Developments in applied sociolinguistics (pp. 1–68). Cambridge University Press. 10.1017/CBO9780511663673.001
  63. Gillath O., Ai T., Branicky M. S., Keshmiri S., Davison R. B., Spaulding R. (2021). Attachment and trust in artificial intelligence. Computers in Human Behavior, 115, Article 106607. 10.1016/j.chb.2020.106607
  64. Glickman M., Sharot T. (2025). How human–AI feedback loops alter human perceptual, emotional and social judgements. Nature Human Behaviour, 9(2), 345–359. 10.1038/s41562-024-02077-2
  65. Gonzales A. L., Hancock J. T., Pennebaker J. W. (2010). Language style matching as a predictor of social dynamics in small groups. Communication Research, 37(1), 3–19. 10.1177/0093650209351468
  66. Gouldner A. W. (1960). The norm of reciprocity: A preliminary statement. American Sociological Review, 25(2), 161–178. 10.2307/2092623
  67. Grice H. P. (1975). Logic and conversation. In Cole P., Morgan J. (Eds.), Syntax and semantics 3: Speech acts (Vol. 3, pp. 41–58). Academic Press.
  68. Grice P. (1991). Studies in the way of words. Harvard University Press.
  69. Grieve R., Indian M., Witteveen K., Anne Tolan G., Marrington J. (2013). Face-to-face or Facebook: Can social connectedness be derived online? Computers in Human Behavior, 29(3), 604–609. 10.1016/j.chb.2012.11.017
  70. Grieve R., Kemp N., Norris K., Padgett C. R. (2017). Push or pull? Unpacking the social compensation hypothesis of Internet use in an educational context. Computers & Education, 109, 1–10. 10.1016/j.compedu.2017.02.008
  71. Gunser V. E., Gottschling S., Brucker B., Richter S., Gerjets P. (2021). Can users distinguish narrative texts written by an artificial intelligence writing tool from purely human text? In Stephanidis C., Antona M., Ntoa S. (Eds.), HCI International 2021—Posters (pp. 520–527). Springer. 10.1007/978-3-030-78635-9_67
  72. Guzman A. L., Lewis S. C. (2020). Artificial intelligence and communication: A Human-Machine Communication research agenda. New Media & Society, 22(1), 70–86. 10.1177/1461444819858691
  73. Hancock J. T., Naaman M., Levy K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89–100. 10.1093/jcmc/zmz022
  74. Heerink M., Kröse B., Evers V., Wielinga B. (2009). Influence of social presence on acceptance of an assistive social robot and screen agent by elderly users. Advanced Robotics, 23, 1909–1923. 10.1163/016918609X12518783330289
  75. Heinz M. V., Mackin D. M., Trudeau B. M., Bhattacharya S., Wang Y., Banta H. A., Jewett A. D., Salzhauer A. J., Griffin T. Z., Jacobson N. C. (2025). Randomized trial of a generative AI chatbot for mental health treatment. NEJM AI, 2(4), Article AIoa2400802. 10.1056/AIoa2400802
  76. Ho A., Hancock J., Miner A. S. (2018). Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication, 68(4), 712–733. 10.1093/joc/jqy026
  77. Hodson D. (1998). Teaching and learning science: Towards a personalized approach. McGraw Hill.
  78. Hoff K. A., Bashir M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. 10.1177/0018720814547570
  79. Holmes B. M., Johnson K. R. (2009). Adult attachment and romantic partner preference: A review. Journal of Social and Personal Relationships, 26(6–7), 833–852. 10.1177/0265407509345653
  80. Homans G. C. (1958). Social behavior as exchange. American Journal of Sociology, 63(6), 597–606. 10.1086/222355
  81. Huang B., Chen C., Shu K. (2025). Authorship attribution in the era of LLMs: Problems, methodologies, and challenges. ACM SIGKDD Explorations Newsletter, 26(2), 21–43. 10.1145/3715073.3715076
  82. Huang M.-H., Rust R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172. 10.1177/1094670517752459
  83. Hwang A. H.-C., Liao Q. V., Blodgett S. L., Olteanu A., Trischler A. (2025). “It was 80% me, 20% AI”: Seeking authenticity in co-writing with large language models. Proceedings of the ACM on Human-Computer Interaction, 9(2), 1–41. 10.1145/3711020
  84. Ireland M. E., Pennebaker J. W. (2010). Language style matching in writing: Synchrony in essays, correspondence, and poetry. Journal of Personality and Social Psychology, 99(3), 549–571. 10.1037/a0020386
  85. Ireland M. E., Slatcher R. B., Eastwick P. W., Scissors L. E., Finkel E. J., Pennebaker J. W. (2011). Language style matching predicts relationship initiation and stability. Psychological Science, 22(1), 39–44. 10.1177/0956797610392928
  86. Jara-Ettinger J., Rubio-Fernandez P. (2021). Quantitative mental state attributions in language understanding. Science Advances, 7(47), Article eabj0970. 10.1126/sciadv.abj0970
  87. Jeon M. (2024). The effects of emotions on trust in human–computer interaction: A survey and prospect. International Journal of Human-Computer Interaction, 40(22), 6864–6882. 10.1080/10447318.2023.2261727
  88. Kardefelt-Winther D. (2014). A conceptual and methodological critique of internet addiction research: Towards a model of compensatory internet use. Computers in Human Behavior, 31, 351–354. 10.1016/j.chb.2013.10.059
  89. Kim H., So K. K. F., Wirtz J. (2022). Service robots: Applying social exchange theory to better understand human-robot interactions. Tourism Management, 92, Article 104537. 10.1016/j.tourman.2022.104537
  90. Kirk H. R., Gabriel I., Summerfield C., Vidgen B., Hale S. A. (2025). Why human–AI relationships need socioaffective alignment. Humanities and Social Sciences Communications, 12(1), Article 728. 10.1057/s41599-025-04532-5
  91. Knapp M. L. (1978). Social intercourse: From greeting to goodbye. Allyn & Bacon.
  92. Köbis N., Mossink L. D. (2021). Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry. Computers in Human Behavior, 114, Article 106553. 10.1016/j.chb.2020.106553
  93. Koch C. (2024). Then I am myself the world: What consciousness is and how to expand it. Basic Books.
  94. Koenig M. A., Harris P. L. (2005). The role of social cognition in early trust. Trends in Cognitive Sciences, 9(10), 457–459. 10.1016/j.tics.2005.08.006
  95. Koenig M. A., Harris P. L. (2007). The basis of epistemic trust: Reliable testimony or reliable sources? Episteme, 4(3), 264–284. 10.3366/E1742360007000081
  96. Kolomaznik M., Petrik V., Slama M., Jurik V. (2024). The role of socio-emotional attributes in enhancing human–AI collaboration. Frontiers in Psychology, 15, Article 1369957. 10.3389/fpsyg.2024.1369957
  97. Laurenceau J.-P., Barrett L. F., Pietromonaco P. R. (1998). Intimacy as an interpersonal process: The importance of self-disclosure, partner disclosure, and perceived partner responsiveness in interpersonal exchanges. Journal of Personality and Social Psychology, 74(5), 1238–1251. 10.1037/0022-3514.74.5.1238
  98. Lee J. D., See K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. 10.1518/hfes.46.1.50_30392
  99. Li J., Chu Y., Xu J. (2023). Impression transference from AI to human: The impact of AI’s fairness on interpersonal perception in AI-mediated communication. International Journal of Human-Computer Studies, 179, Article 103119. 10.1016/j.ijhcs.2023.103119
  100. Li X., Sung Y. (2021). Anthropomorphism brings us closer: The mediating role of psychological distance in User–AI assistant interactions. Computers in Human Behavior, 118, Article 106680. 10.1016/j.chb.2021.106680
  101. Lim V. K. G., Teo T. S. H., Zhao X. (2013). Psychological costs of support seeking and choice of communication channel. Behaviour & Information Technology, 32(2), 132–146. 10.1080/0144929X.2010.518248
  102. Liu B., Kang J., Wei L. (2024). Artificial intelligence and perceived effort in relationship maintenance: Effects on relationship satisfaction and uncertainty. Journal of Social and Personal Relationships, 41(5), 1232–1252. 10.1177/02654075231189899
  103. Liu B., Sundar S. S. (2018). Should machines express sympathy and empathy? Experiments with a health advice chatbot. Cyberpsychology, Behavior and Social Networking, 21(10), 625–636. 10.1089/cyber.2018.0110
  104. Lu Z., Li P., Wang W., Yin M. (2022). The effects of AI-based credibility indicators on the detection and spread of misinformation under social influence. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), Article 461. 10.1145/3555562
  105. Lucas G. M., Gratch J., King A., Morency L.-P. (2014). It’s only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior, 37, 94–100. 10.1016/j.chb.2014.04.043
  106. Ma X., Brown T. W. (2020, March 4). AI-mediated exchange theory. arXiv. 10.48550/arXiv.2003.02093
  107. Maher M. L., Ventura D., Magerko B. (2024). The grounding problem: An approach to the integration of cognitive and generative models. Proceedings of the AAAI Symposium Series, 2(1), 320–325. 10.1609/aaaiss.v2i1.27695
  108. Maples B., Pea R. D., Markowitz D. (2023). Learning from intelligent social agents as social and intellectual mirrors. In Niemi H., Pea R. D., Lu Y. (Eds.), AI in learning: Designing the future (pp. 73–89). Springer. 10.1007/978-3-031-09687-7_5
  109. Markowitz D. M. (2024a). From complexity to clarity: How AI enhances perceptions of scientists and the public’s understanding of science. PNAS Nexus, 3(9), Article pgae387. 10.1093/pnasnexus/pgae387
  110. Markowitz D. M. (2024b). Opportunities and challenges to conversations with generative AI: Integrating theoretical perspectives from Clark and Pennebaker. Asian Communication Research, 21(3), 309–321. 10.20879/acr.2024.21.025
  111. Markowitz D. M., Hancock J. T., Bailenson J. N. (2024). Linguistic markers of inherently false AI communication and intentionally false human communication: Evidence from hotel reviews. Journal of Language and Social Psychology, 43(1), 63–82. 10.1177/0261927X231200201
  112. Meehan M., Massavelli B., Pachana N. (2017). Using attachment theory and social support theory to examine and measure pets as sources of social support and attachment figures. Anthrozoös, 30(2), 273–289. 10.1080/08927936.2017.1311050
  113. Meier T., Milek A., Mehl M. R., Nussbeck F. W., Neysari M., Bodenmann G., Martin M., Zemp M., Horn A. B. (2021). I blame you, I hear you: Couples’ pronoun use in conflict and dyadic coping. Journal of Social and Personal Relationships, 38(11), 3265–3287. 10.1177/02654075211029721
  114. Meng J., Dai Y. (2021). Emotional support from AI chatbots: Should a supportive partner self-disclose or not? Journal of Computer-Mediated Communication, 26(4), 207–222. 10.1093/jcmc/zmab005
  115. Merton R. K. (1968). Social theory and social structure. Free Press.
  116. Moore J., Grabb D., Agnew W., Klyman K., Chancellor S., Ong D. C., Haber N. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. In FAccT ’25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (pp. 599–627). Association for Computing Machinery. 10.1145/3715275.3732039
  117. Moore R. L., Yen C.-J., Powers F. E. (2021). Exploring the relationship between clout and cognitive processing in MOOC discussion forums. British Journal of Educational Technology, 52(1), 482–497. 10.1111/bjet.13033
  118. Neel R., Kenrick D. T., White A. E., Neuberg S. L. (2016). Individual differences in fundamental social motives. Journal of Personality and Social Psychology, 110(6), 887–907. 10.1037/pspp0000068
  119. Ni C.-F., Jacques J., Silber C., Dykeman C. (2025). Evaluating counseling skills through language style matching: A computer-aided text analysis of suicide and general counseling transcripts. Journal of Technology in Counselor Education and Supervision, 6(1), Article 2. 10.61888/2692-4129.1124
  120. Oh C. S., Bailenson J. N., Welch G. F. (2018). A systematic review of social presence: Definition, antecedents, and implications. Frontiers in Robotics and AI, 5, Article 114. 10.3389/frobt.2018.00114
  121. Pataranutaporn P., Liu R., Finn E., Maes P. (2023). Influencing human–AI interaction by priming beliefs about AI can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence, 5(10), 1076–1086. 10.1038/s42256-023-00720-7
  122. Pelau C., Dabija D.-C., Ene I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior, 122, Article 106855. 10.1016/j.chb.2021.106855
  123. Pennebaker J. W. (2011). The secret life of pronouns: What our words say about us. Bloomsbury.
  124. Pennebaker J. W., King L. A. (1999). Linguistic styles: Language use as an individual difference. Journal of Personality and Social Psychology, 77(6), 1296–1312. 10.1037/0022-3514.77.6.1296
  125. Porayska-Pomsta K., Alcorn A. M., Avramides K., Beale S., Bernardini S., Foster M. E., Frauenberger C., Good J., Guldberg K., Keay-Bright W., Kossyvaki L., Lemon O., Mademtzi M., Menzies R., Pain H., Rajendran G., Waller A., Wass S., Smith T. J. (2018). Blending human and artificial intelligence to support autistic children’s social communication skills. ACM Transactions on Computer-Human Interaction (TOCHI), 25(6), Article 35. 10.1145/3271484
  126. Reis H. T., Shaver P. (2018). Intimacy as an interpersonal process. In Reis H. (Ed.), Relationships, well-being and behaviour. Routledge.
  127. Rempel J. K., Holmes J. G., Zanna M. P. (1985). Trust in close relationships. Journal of Personality and Social Psychology, 49(1), 95–112. 10.1037/0022-3514.49.1.95
  128. Renieris E. M., Kiron D., Mills S. (2024, September 24). Artificial intelligence disclosures are key to customer trust. MIT Sloan Management Review. https://sloanreview.mit.edu/article/artificial-intelligence-disclosures-are-key-to-customer-trust
  129. Richardson B. H., Taylor P. J., Snook B., Conchie S. M., Bennell C. (2014). Language style matching and police interrogation outcomes. Law and Human Behavior, 38(4), 357–366. 10.1037/lhb0000077
  130. Rotter J. B. (1971). Generalized expectancies for interpersonal trust. American Psychologist, 26(5), 443–452. 10.1037/h0031464
  131. Rotter J. B. (1980). Interpersonal trust, trustworthiness, and gullibility. American Psychologist, 35(1), 1–7. 10.1037/0003-066X.35.1.1
  132. Saab J. (2023). The impact of Artificial Intelligence on search engine: Super intelligence in Artificial Intelligence (AI). In Kaddoura S. (Ed.), Handbook of research on AI methods and applications in computer engineering (pp. 141–160). IGI Global Scientific Publishing. 10.4018/978-1-6684-6937-8.ch007
  133. Searle J. R. (1969). Speech acts: An essay in the philosophy of language. Cambridge University Press.
  134. Seraj S., Blackburn K. G., Pennebaker J. W. (2021). Language left behind on social media exposes the emotional and cognitive costs of a romantic breakup. Proceedings of the National Academy of Sciences, USA, 118(7), Article e2017154118. 10.1073/pnas.2017154118
  135. Sharma N., Liao Q. V., Xiao Z. (2024). Generative echo chamber? Effect of LLM-powered search systems on diverse information seeking. CHI ’24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Article 1033. 10.1145/3613904.3642459
  136. Shaver P. R., Mikulincer M. (2003). The psychodynamics of social judgments: An attachment theory perspective. In Forgas J. P., Williams K. D., von Hippel W. (Eds.), Social judgments: Implicit and explicit processes (pp. 85–114). Cambridge University Press.
  137. Shaver P. R., Mikulincer M. (2009). An overview of adult attachment theory. In Obegi J. H., Berant E. (Eds.), Attachment theory and research in clinical work with adults (pp. 17–45). Guilford Press.
  138. Shin D. (2024). Artificial misinformation: Exploring human-algorithm interaction online. Palgrave Macmillan.
  139. Simpson J. A. (2007). Psychological foundations of trust. Current Directions in Psychological Science, 16(5), 264–268. 10.1111/j.1467-8721.2007.00517.x
  140. Skjuve M., Følstad A., Fostervold K. I., Brandtzaeg P. B. (2021). My chatbot companion—A study of human-chatbot relationships. International Journal of Human-Computer Studies, 149, Article 102601. 10.1016/j.ijhcs.2021.102601
  141. Smeijers D., Uzieblo K., Glennon J. C., Driessen J. M. A., Brazil I. A. (2022). Examining individual differences in social reward valuation: A person-based approach. Journal of Psychopathology and Behavioral Assessment, 44(2), 312–325. 10.1007/s10862-021-09934-8
  142. Sourati Z., Karimi-Malekabadi F., Ozcan M., McDaniel C., Ziabari A., Trager J., Tak A., Chen M., Morstatter F., Dehghani M. (2025). The shrinking landscape of linguistic diversity in the age of large language models. arXiv. 10.48550/arXiv.2502.11266
  143. Spence S. H., Rapee R. M. (2016). The etiology of social anxiety disorder: An evidence-based model. Behaviour Research and Therapy, 86, 50–67. 10.1016/j.brat.2016.06.007
  144. Sperber D., Clément F., Heintz C., Mascaro O., Mercier H., Origgi G., Wilson D. (2010). Epistemic vigilance. Mind & Language, 25(4), 359–393. 10.1111/j.1468-0017.2010.01394.x
  145. Stafford L., Kuiper K. (2021). Social exchange theories: Calculating the rewards and costs of personal relationships. In Schrodt P., Scharp K. M., Braithwaite D. O. (Eds.), Engaging theories in interpersonal communication (3rd ed., pp. 379–390). Routledge.
  146. Stalnaker R. (1998). On the representation of context. Journal of Logic, Language and Information, 7(1), 3–19. 10.1023/a:1008254815298
  147. Stamper J., Xiao R., Hou X. (2024). Enhancing LLM-based feedback: Insights from intelligent tutoring systems and the learning sciences. In Olney A. M., Chounta I.-A., Liu Z., Santos O. C., Bittencourt I. I. (Eds.), Artificial intelligence in education. Posters and late breaking results, workshops and tutorials, industry and innovation tracks, practitioners, doctoral consortium and blue sky (pp. 32–43). Springer. 10.1007/978-3-031-64315-6_3
  148. Stever G. S. (2017). Evolutionary theory and reactions to mass media: Understanding parasocial attachment. Psychology of Popular Media Culture, 6(2), 95–102. 10.1037/ppm0000116
  149. Stritzke W. G. K., Nguyen A., Durkin K. (2004). Shyness and computer-mediated communication: A self-presentational theory perspective. Media Psychology, 6(1), 1–22. 10.1207/s1532785xmep0601_1
  150. Swann W. B., Read S. J. (1981). Self-verification processes: How we sustain our self-conceptions. Journal of Experimental Social Psychology, 17(4), 351–372. 10.1016/0022-1031(81)90043-3
  151. Ta-Johnson V., Suss J., Lande B. (2022). Using natural language processing to measure cognitive load during use-of-force decision-making training. Policing: An International Journal, 46(2), 227–242. 10.1108/PIJPSM-06-2022-0084
  152. Tang K.-S., Cooper G. (2025). The role of materiality in an era of generative artificial intelligence. Science & Education, 34, 731–746. 10.1007/s11191-024-00508-0
  153. Tausczik Y. R., Pennebaker J. W. (2010). The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1), 24–54. 10.1177/0261927X09351676
  154. Thellman S., de Graaf M., Ziemke T. (2022). Mental state attribution to robots: A systematic review of conceptions, methods, and findings. ACM Transactions on Human-Robot Interaction (THRI), 11(4), Article 41. 10.1145/3526112
  155. Thibaut J. W., Kelley H. H. (1959). The social psychology of groups. John Wiley & Sons.
  156. Toff B., Simon F. M. (2025). “Or they could just not use it?”: The dilemma of AI disclosure for audience trust in news. The International Journal of Press/Politics, 30(4), 881–903. 10.1177/19401612241308697
  157. Tomasello M. (2003). Constructing a language: A usage-based theory of language acquisition. Harvard University Press.
  158. Tomasello M. (2009). Why we cooperate. MIT Press.
  159. Vallor S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press. 10.1093/acprof:oso/9780190498511.001.0001
  160. Vallor S. (2024). The AI mirror: How to reclaim our humanity in an age of machine thinking. Oxford University Press.
  161. Vicente L., Matute H. (2023). Humans inherit artificial intelligence biases. Scientific Reports, 13(1), Article 15737. 10.1038/s41598-023-42384-8
  162. Walther J. B. (1992). Interpersonal effects in computer-mediated interaction: A relational perspective. Communication Research, 19(1), 52–90.
  163. Walther J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3–43.
  164. Walther J. B., Burgoon J. K. (1992). Relational communication in computer-mediated interaction. Human Communication Research, 19(1), 50–88. 10.1111/j.1468-2958.1992.tb00295.x
  165. Webb M. E., Fluck A., Magenheim J., Malyn-Smith J., Waters J., Deschênes M., Zagami J. (2021). Machine learning for human learners: Opportunities, issues, tensions and threats. Educational Technology Research and Development, 69(4), 2109–2130. 10.1007/s11423-020-09858-2
  166. Weimer B. L., Kerns K. A., Oldenburg C. M. (2004). Adolescents’ interactions with a best friend: Associations with attachment style. Journal of Experimental Child Psychology, 88(1), 102–120. 10.1016/j.jecp.2004.01.003
  167. Xie T., Pentina I. (2022). Attachment theory as a framework to understand relationships with social chatbots: A case study of Replika. In Proceedings of the 55th Hawaii International Conference on System Sciences. 10.24251/HICSS.2022.258
  168. Yen R., Zhao J. (2024). Memolet: Reifying the reuse of user-AI conversational memories. In Yao L., Goel M., Ion A., Lopes P. (Eds.), UIST ’24: Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (pp. 1–22). Association for Computing Machinery. 10.1145/3654777.3676388
  169. Yoo D., Kang H., Oh C. (2025). Deciphering deception: How different rhetoric of AI language impacts users’ sense of truth in LLMs. International Journal of Human-Computer Interaction, 41(4), 2163–2183. 10.1080/10447318.2024.2316370
  170. Zhai C., Wibowo S. (2023). A systematic review on artificial intelligence dialogue systems for enhancing English as foreign language students’ interactional competence in the university. Computers and Education: Artificial Intelligence, 4, Article 100134. 10.1016/j.caeai.2023.100134
  171. Zhang Z., Shen C., Yao B., Wang D., Li T. (2025). Secret use of large language model (LLM). Proceedings of the ACM on Human-Computer Interaction, 9(2), Article CSCW163. 10.1145/3711061
