International Journal of Psychology. 2025 Sep 18;60(5):e70111. doi: 10.1002/ijop.70111

The Influence of Perceived Fairness on Trust in Human‐Computer Interaction

Rui Chen 1, Yating Jin 1, Lincang Yu 1, Tobias Tempel 2, Peng Li 1, Shi Zhang 3, Anqi Li 1, Weijie He 1
PMCID: PMC12445249  PMID: 40965285

ABSTRACT

Fairness is a fundamental principle in human social interactions that influences subsequent behavioural decisions. As artificial intelligence (AI) becomes more prevalent, human‐computer interactions have emerged as a new mode of social interaction. This study investigates the differences in fairness perceptions and their impact on trust decisions in human‐human and human‐AI contexts using a mixed experimental design of 2 (proposer identity: AI/human) × 2 (offer: fair/unfair) × 2 (trustee identity: AI/human). A total of 128 university students participated in the experimental study employing both the Ultimatum Game and the Trust Game paradigms. The results showed that participants who received fair offers had higher trust investment rates and amounts than those who received unfair offers. When offers were unfair, the AI proposer group elicited greater investment willingness, leading to higher trust investment rates than the human proposer group. Conversely, under fair conditions, participants displayed greater risk aversion towards human trustees, investing at lower rates and amounts than with AI trustees. The findings suggest that fairness perceptions in human‐computer interactions have a stronger impact on trust decisions than those in human‐human interactions.

Keywords: artificial intelligence, decision‐making behaviours, fairness perception, trust

1. Introduction

Fairness has long been a concern for economists, sociologists and psychologists. As a fundamental principle of human social interaction, the perception of fairness in interpersonal relationships influences subsequent behavioural decisions. As artificial intelligence (AI) becomes increasingly integrated into daily life, human‐computer interaction (HCI) has emerged as a new mode of social interaction. Researchers have extensively studied AI in fairness contexts, often highlighting its perceived objectivity and consistency. However, as the breadth and depth of AI applications continue to expand, risks and hidden dangers have gradually emerged. Issues such as algorithm security, privacy protection, data discrimination and data abuse challenge trust in AI. Two diametrically opposed attitudes may thereby be strengthened: algorithm appreciation and algorithm aversion. On the one hand, people generally believe that AI systems are based on mathematical logic and should therefore treat all people and objects equally, making the results of AI decisions trustworthy. This trust was evident in the response to COVID‐19: studies show that in China, AI decision‐making reduced bias in medical resource allocation, improved pathological testing efficiency and enhanced patient trust in algorithms compared with human decisions (Zhao and Cao 2020). Conversely, the opaque use of personal data by companies (e.g., for pricing) can erode trust in corporate fairness. For example, if an e‐commerce platform analysed consumers' purchase history, browsing behaviour and other data to personalise prices for different users, consumers might find that the price of the same product differed when they logged in with different accounts, and thus perceive unfair treatment (Xiao and Gong 2022).

The present study examined the effects of fairness experiences in HCI on trust decisions. We explored the link between fairness perception and trust in AI by analysing the variables that affect fairness perception and trust. Fairness perception refers to subjective evaluations of whether the treatment received by oneself or others is fair. Research on fairness perception began with Adams' (1965) study on the rationality and fairness of wage distribution and its impact on employee motivation. He proposed that employee motivation depends on the degree of perceived fairness in distribution: work motivation is related not only to the actual reward but also to whether the employee feels that the reward is fair.

Fairness perception was widely recognised as a pivotal dimension in interpersonal interaction research, encompassing evaluations of interactional fairness, along with preferences and responses to fairness principles. Empirical evidence indicated that social status (Weiß et al. 2020) and personality traits (Weiß et al. 2021) substantially shape fairness perception and decision‐making in economic games. Individuals hold elevated expectations for high‐status actors, and perceived fairness violations by such individuals elicit contempt and subsequent altruistic punishment—a phenomenon wherein group members sanction norm violators despite personal cost (Fehr and Gächter 2002).

Previous research showed that subsequent behavioural decisions triggered by experiencing unfairness might be influenced by both subjective emotional experiences and the decision‐making environment. People are highly sensitive to fairness in human‐human interactions and show a strong tendency towards inequity aversion. When perceiving unfair treatment, they experience dissatisfaction and resentment, often sacrificing economic gains to punish transgressors (Chen et al. 2023). In recent years, a growing number of scholars have integrated psychological and economic theories to analyse the irrational behaviours exhibited by individuals following unfair treatment. According to Inequity‐aversion Theory, people experience aversion when faced with an unfair distribution plan (Fehr and Schmidt 1999). Studies showed that this emotional response increases the probability of rejecting unfair distributions (Sanfey et al. 2003). This evidence suggested that people deviate from complete rationality in economic decision‐making, which influences subsequent behaviours. Moreover, Halali et al. (2014) proposed the automatic negative‐reciprocity hypothesis, suggesting that rejecting unfair offers may be an automatic response to negative emotions.

The availability of information fundamentally constrains decision‐making quality, with incomplete data increasing outcome uncertainty. The Ultimatum Game paradigm (Güth et al. 1982) effectively captures this dynamic, allowing participants to influence outcomes while operating under information constraints. Researchers adopted this paradigm to model social interactions and illuminate fairness perceptions in resource distribution contexts. According to Fairness Heuristic Theory (Lind 2001), players in such scenarios integrate sparse contextual cues with existing beliefs to generate rapid fairness assessments, demonstrating the adaptive nature of social decision‐making under uncertainty.

Social interaction in the above studies was mostly based on fairness perception in human‐human interactions; however, with the development of AI, research on fairness perception related to new modes of HCI is gradually increasing. HCI encompasses technology‐mediated exchanges between humans and intelligent systems. As electronic technology advanced, the term ‘machine’ evolved from traditional mechanical devices to computers and, more recently, to AI. AI offered a new pathway to promote social equity (Jiang et al. 2022), and Cao (2020) optimistically proposed it could contribute to reducing subjective bias and unfairness in human decision‐making processes.

Although AI algorithms can eliminate human bias in decision‐making, not everyone believes that they are fair. The simplicity and decontextualisation inherent in AI decision‐making mean that these systems do not incorporate certain qualitative information or contextual factors, which can lead to outcomes perceived as partial or incomplete. In organisational human resource decision‐making, employees perceived AI algorithmic decisions as lacking interactive capacity and being unable to resolve unexpected problems, compared with decisions made by supervisors, resulting in lower perceived fairness (Langer et al. 2020). In recruitment tasks, employees tended to believe that algorithms did not incorporate human intuitive reasoning or subjective assessment mechanisms (Newman et al. 2020). However, Moretti and di Pellegrino (2010) found that, for identical allocation options provided by a human and a computer, respondents were more likely to reject unfair proposals from a human, because the malicious intent behind an unfair allocation appeared more obvious when it came from a human. These conflicting findings demonstrate that perceptions of AI fairness are context‐dependent: whether AI decisions are seen as fairer than human decisions varies across situations.

Divergent perceptions of AI fairness can be attributed to two key psychological mechanisms. First, information ambiguity in AI systems (Acikgoz et al. 2020) created cognitive uncertainty, causing people to rely on heuristic processing to simplify fairness evaluations. Second, perceived algorithmic objectivity (Schildt 2017) led individuals to view AI decisions as systematic data processing rather than subjective bias. Consequently, even when outcomes are biased, AI decisions may still be judged as fair due to the following three process characteristics: (1) perceived controllability, (2) procedural consistency and (3) the absence of human prejudice.

At present, findings on fairness perception towards AI are inconsistent, which is related to the types of fairness that various studies focus on. From the perspective of process fairness, AI decision processes are comparatively transparent and, in the same situation, their results do not vary from one person to another. From the perspective of outcome fairness, AI systems are developed through training on data sets; if the data set itself lacks representativeness, outcomes may not reflect reality, and the fairness of AI decision‐making results needs to be investigated. Critically, existing work primarily focused on AI's inherent algorithmic fairness while neglecting two key aspects: behavioural responses to AI‐imposed unfairness and the downstream consequences of such unfair treatment.

Trust decision‐making refers to the process of assessing whether others are trustworthy and deciding whether to trust them. It rests mainly on three conditions: the trustor and the trustee have consistent interests, the trustor holds positive expectations of the trustee's behaviour, and the trustor is willing to accept certain risks. Social exchange theory posits that individuals establish trust through mutual exchange and reciprocity during interpersonal interactions (Deng et al. 2020). In human‐human interactions, individuals engage in reciprocal exchange behaviours that establish trust relationships. When perceiving social support, they develop heightened trust in others and subsequently exhibit more trust‐related behaviours. Furthermore, during exchanges, people systematically evaluate the fairness of the exchange. Before trust is established, equitable distributions enhance interpersonal trust and promote resource sharing. Conversely, perceived distribution inequity triggers dissatisfaction and inhibits the formation of reciprocal exchange.

Trust constitutes a critical determinant in human–AI interaction, shaping decision‐making processes and willingness to cooperate. Trust‐related decision‐making in human–machine interactions is often studied using the Trust Game (TG), in which investors' choices to trust (invest money) or not (retain funds) reflect their trust decision‐making process, and investors' trust is indicated by the frequency and amount of investment (Thielmann and Hilbig 2015). Weiß et al. (2021) found that self‐rated personality traits interact with prototypical personality faces of interaction partners in shaping economic decisions. However, AI systems simulate human behaviour without possessing true personality traits, resulting in substantial differences in trust decisions between human–human and human–machine interactions. For example, individuals more frequently accepted unfair offers from AI than from humans, as AI's lack of personality traits reduced the perceived malicious intent (Moretti and di Pellegrino 2010). Moreover, Hoff and Bashir's three‐tier model describes human–machine trust as dispositional, situational and learned trust (Hoff and Bashir 2015). Dispositional trust is shaped by experiences prior to the actual interaction, situational trust is affected by current task characteristics, environmental conditions and system performance, and learned trust deepens over the course of interaction. Therefore, when examining trust in human–machine interactions, it is essential to consider both situational factors and historical experiences from previous interactions.

Human survival and development in social situations cannot be separated from social interactions, and the initiation and maintenance of social interactions require adaptive trust decision‐making. For families, groups, organisations and societies, an important issue is how resources and costs (time, money, attention, effort, etc.) should be allocated among members. Fairness can be regarded as a core social norm in human societies. Accordingly, the establishment of trust during social interactions depends on whether the result and process of resource allocation are fair.

Beliefs and emotional experiences (Singer et al. 2006), as well as general trust decisions during interpersonal interactions, are largely related to fairness perception. Research demonstrated that perceived unfairness experienced in interpersonal interactions reduces trust in subsequent interactions (Yuan et al. 2023). However, in these studies, unfair behaviour occurred between humans; the identity of the unfair actor was 'real'. When AI initiates unfair actions, fairness perception during HCI may differ. Moreover, there are certain differences in how trust is established in human–machine versus human–human interactions, with prior interactive experiences also influencing human–machine trust (Dreyfus 1972; Thielmann and Hilbig 2015). Therefore, it remains unclear how an unfair distribution plan by AI in the Ultimatum Game affects fairness perception and subsequent trust levels. Consequently, this study aimed to explore whether there are differences in fairness perception between human‐human interactions and HCIs, and whether fairness perception in HCIs influences subsequent trust tasks, comparing fair and unfair offers. Based on the above, the following hypotheses were proposed:

  1. There are differences in fairness perception between human and AI proposers.
    1. Fair offers produce higher fairness perception than unfair offers;
    2. Participants show higher fairness perception towards AI proposers than towards human proposers.
  2. Fairness perception leads to differences in trust levels towards trustees.
    1. Fair offers produce higher trust levels than unfair offers;
    2. After receiving unfair offers, participants in the AI‐proposer group show higher trust levels than those in the human‐proposer group;
    3. After receiving unfair offers, participants show higher trust levels towards AI trustees than towards human trustees.

2. Method

2.1. Participants

A total of 128 undergraduate participants (43 males; mean age = 21.08 years, SD = 1.62) were recruited via an online platform from a university in Yunnan, China. Participants were randomly assigned to four experimental groups (n = 31–33 per group). The sample consisted of right‐handed students with normal or corrected‐to‐normal vision, no colour vision deficiencies and no history of mental disorders. All participants provided informed consent and came from similar sociodemographic backgrounds (predominantly urban middle‐class families pursuing bachelor's degrees).

During the data preprocessing phase, systematic screening and exclusion of invalid data were conducted based on a priori criteria aligned with the experimental task characteristics and research design requirements. The implementation protocols comprised: (1) abnormal behavioural pattern screening: considering the dynamic decision‐making nature of both the Trust Investment Game and Ultimatum Game, participants demonstrating mechanical response repetition (e.g., persistent investment/non‐investment choices across all trials) were identified as exhibiting decision‐making rigidity. This criterion excluded 3 participants (1 from the human unfair condition group, 2 from the AI unfair condition group). (2) Task compliance screening: to preserve interactive validity in game‐theoretic tasks, exclusion thresholds were set at ≥ 3 non‐response trials (30% of total trials) in the Ultimatum Game or ≥ 7 non‐response trials (30% of total trials) in the Trust Investment Game. This protocol resulted in the exclusion of 5 participants (2 from the human unfair condition group, 1 from the human fair condition group, and 2 from the AI fair condition group). The combined screening procedures excluded 8 participants in total, representing 6.25% of the initial sample (N = 128). All exclusion protocols were implemented prior to the formal data analysis to ensure dataset integrity.
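For illustration, the screening logic described above can be sketched in Python as follows; the data‐frame structure and column names are assumptions for this sketch, not the authors' actual preprocessing code.

```python
import pandas as pd

# Hypothetical trial-level data frames; the column names are illustrative only.
# ug: one row per Ultimatum Game trial with columns subject, response
#     ('accept', 'reject', or NaN for a non-response).
# tg: one row per Trust Game trial with columns subject, invested
#     (True/False, or NaN for a non-response).

def flag_exclusions(ug: pd.DataFrame, tg: pd.DataFrame) -> set:
    """Return subject IDs meeting the a priori exclusion criteria."""
    excluded = set()
    for sid, trials in tg.groupby("subject"):
        # (1) decision-making rigidity: the same investment choice on every trial
        if trials["invested"].dropna().nunique() <= 1:
            excluded.add(sid)
        # (2b) >= 7 non-response trials in the 20-trial Trust Game
        if trials["invested"].isna().sum() >= 7:
            excluded.add(sid)
    for sid, trials in ug.groupby("subject"):
        # (2a) >= 3 non-response trials in the 10-trial Ultimatum Game
        if trials["response"].isna().sum() >= 3:
            excluded.add(sid)
    return excluded
```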

2.2. Design and Materials

A mixed experimental design of 2 (proposer: AI/human) × 2 (offer: fair/unfair) × 2 (trustee: AI/human) was adopted. The proposer and the offer were between‐subjects variables, and the trustee was a within‐subjects variable. The dependent variables were the participants' investment rate and investment amount, indicating their level of trust.

The Ultimatum Game was used to manipulate participants' experiences of fairness or unfairness and thereby influence the trust subsequently established between participants and their (AI or human) interaction partners. The game involves two roles: the proposer (AI or human) and the responder (the participant). As the responder, the participant received a series of suggestions on how to allocate a certain amount of money between the proposer and themselves. In every trial, they could choose to accept or reject the suggestion. If they accepted it, the money would be allocated according to the proposed plan; if they rejected it, neither player would receive any money.

Half of the participants received predominantly fair offers, and the other half received predominantly unfair offers. Participants in the fair condition were randomly assigned to receive 10 allocation proposals from the proposer (AI or human), among which (5, 5) and (4, 6) were repeated 9 times each, and (3, 7) was repeated twice. Each trial involved 100 RMB. Participants in the unfair condition received (1, 9) and (2, 8) nine times each, as well as (3, 7) twice. The instructions either emphasised that the proposal would be randomly selected from a pool of money allocation proposals provided by 400 humans or that it would be randomly selected from an AI‐generated pool of 400 money allocation proposals. After the Ultimatum Game, participants rated its fairness level on a scale from 1 to 9 (1 = completely fair, 9 = completely unfair). For the analyses, the fairness‐perception scores were reverse‐scored, so that 1 represented completely unfair and 9 represented completely fair.
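The payoff rule and the reverse scoring of the fairness ratings can be summarised in a brief Python sketch. The ordering of shares within each offer pair and the mapping of shares to tenths of the 100‐RMB stake are our assumptions, and the function names are illustrative.

```python
# Offer pairs follow the paper's (x, y) notation; we assume the first element is
# the responder's share and that each unit is one tenth of the 100-RMB stake.

def ug_payoff(offer, accepted, stake=100):
    """Payoff of one Ultimatum Game trial; rejection leaves both players with nothing."""
    responder_tenths, proposer_tenths = offer
    if not accepted:
        return {"responder": 0, "proposer": 0}
    return {"responder": responder_tenths * stake // 10,
            "proposer": proposer_tenths * stake // 10}

def reverse_score(rating, scale_min=1, scale_max=9):
    """Recode ratings (1 = completely fair ... 9 = completely unfair) so that
    higher values indicate higher perceived fairness, as used in the analyses."""
    return scale_min + scale_max - rating

print(ug_payoff((2, 8), accepted=True))   # an unfair offer that was accepted
print(reverse_score(2))                   # rating 2 becomes 8 after recoding
```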

This study used the standard TG paradigm to measure participants' trust levels towards AI or human interaction partners. The task involved two roles: investors (trustors) and trustees (those who are trusted). Participants assumed the role of investors and were instructed to allocate tokens to anonymous trustees across 20 rounds. At the beginning of each round, both the investor and the trustee received a predetermined endowment of tokens. The investor then decided whether to invest in the matched trustee and, if so, how much to transfer. If the investor chose to invest, the invested amount was tripled for the trustee; if the investor chose not to invest, the round ended and the tokens of both parties remained unchanged. After an investment, the trustee could decide whether to return some of the tokens (usually half of the tripled invested amount in previous studies) to the investor. If the trustee chose to return, the investor received half of the tripled invested amount; if not, the investor lost the invested tokens. To ensure that participants made their choices carefully, they were told at the beginning of the game that their final reward depended on their investment returns, with 100 tokens converted to 1 yuan.
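A minimal sketch of one TG round, using the tripling factor, the 50% return rule and the 100‐tokens‐per‐yuan conversion described above, is given below; the endowment value and function names are illustrative assumptions.

```python
def tg_round(endowment, invest, amount, trustee_returns):
    """Investor's tokens after one Trust Game round (illustrative endowment)."""
    if not invest:
        return endowment                 # no investment: tokens stay unchanged
    tokens = endowment - amount          # the transferred amount leaves the investor
    tripled = amount * 3                 # the trustee receives triple the investment
    if trustee_returns:
        tokens += tripled / 2            # half of the tripled amount is returned
    return tokens

def tokens_to_yuan(tokens):
    return tokens / 100                  # conversion ratio: 100 tokens = 1 yuan

# Example: invest 40 of a 100-token endowment and receive a return (1.5x the investment).
print(tg_round(endowment=100, invest=True, amount=40, trustee_returns=True))  # 120.0
print(tokens_to_yuan(120))                                                    # 1.2
```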

2.3. Experimental Tools

The Ultimatum Game and TG tasks were programmed using jsPsych 2021, a JavaScript framework for web‐based behavioural experiments. Experimental procedures were deployed as interactive web applications accessible through standard browsers. All behavioural data were collected and managed via the Naodao platform (www.naodao.com) for subsequent analysis.

2.4. Procedure

The experiment consisted of three stages of tasks (Figure 1). The first stage used the Ultimatum Game to induce experiences of fairness or unfairness in interactions with either AI or human beings. At the beginning of the task, the instructions informed participants that the tokens they received in the Ultimatum Game would determine their initial investment tokens in a later investment game, and that the investment profit was related to their final reward. First, a fixation cross (‘+’) was presented for 1000 ms to signal trial onset. Second, a message ‘This round's game proposer is human (or AI), searching for an allocation plan from the subject pool (database)…’ appeared at the centre of the screen for 3000 ms. Then, the allocation plan was displayed, upon which the participant could choose whether to accept or reject the allocation plan by pressing the ‘A’ key for acceptance or the ‘L’ key for rejection. After the participant's response, a blank screen appeared for 2000 ms. After 10 trial rounds were completed, the total earnings of the game were displayed (Figure 2).

FIGURE 1. Overview of the experimental procedure.

FIGURE 2. Temporal sequence of one trial in the Ultimatum Game.

In the second stage, participants rated how fair they perceived the offers given in the previous Ultimatum Game to be. First, a fixation cross was presented for 1000 ms; then the question ‘How fair do you think the overall allocation scheme was?’ was presented, and participants responded by pressing the corresponding key. After a short break (≤ 3000 ms), participants were informed that the income earned in the first stage would be converted into a corresponding amount of tokens to be used in the third stage (Figure 3).

FIGURE 3. Temporal sequence of the fairness‐perception evaluation task.

The third stage comprised a trust‐game task consisting of 20 trials, 10 with human trustees and 10 with AI trustees. For each trustee type, the trustee returned 1.5 times the invested amount in 5 of the 10 trials and returned nothing in the other 5. To prevent feedback effects from contaminating decision‐making processes, participants received cumulative earnings information only after all experimental trials had concluded.
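The predetermined trustee behaviour can be illustrated with a short sketch that builds the 20‐trial schedule (5 reciprocating and 5 non‐reciprocating trials per trustee type); the randomisation details are our assumptions.

```python
import random

def build_tg_schedule(seed=None):
    """Build the 20-trial trustee schedule: for each trustee type, 5 trials return
    1.5x the investment and 5 return nothing (randomisation is an assumption)."""
    rng = random.Random(seed)
    schedule = []
    for trustee in ("human", "AI"):
        returns = [True] * 5 + [False] * 5
        rng.shuffle(returns)
        schedule += [{"trustee": trustee, "returns": r} for r in returns]
    rng.shuffle(schedule)                # interleave human and AI trustee trials
    return schedule

print(build_tg_schedule(seed=0)[:3])
```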

First, a fixation cross was presented for 1000 ms; then, the screen displayed ‘Selecting the trustee for this round…’ for 3000 ms. Next, the identity of the trustee was presented, and the participant chose whether to invest by pressing the ‘F’ key for yes or the ‘J’ key for no. If the participant chose to invest, a dialogue box appeared on the screen: ‘Please enter the amount you want to invest’. The participant entered the investment amount and confirmed it by pressing the ‘Enter’ key. Subsequently, the words ‘This round ends’ were displayed for 1000 ms, followed by the next trial. If the participant chose not to invest, ‘This round ends’ was displayed immediately for 1000 ms (Figure 4).

FIGURE 4. Temporal sequence of one trial in the Trust Game. (a) The participant chooses to invest; (b) the participant chooses not to invest.

2.5. Statistical Analysis

To investigate whether the aversion to unfairness observed in human–human interactions also extends to human–machine interactions, a 2 (proposer identity: AI vs. human) × 2 (offer: fair vs. unfair) analysis of variance (ANOVA) was used to examine rejection rates in the Ultimatum Game. The same analytical approach was applied to the fairness‐perception scores, serving two purposes: first, to verify whether unfair proposals effectively elicited perceptions of unfairness; second, to examine the impact of proposer identity on participants' fairness perception. Finally, a correlation analysis was conducted between rejection rates and fairness‐perception scores to verify that higher fairness perception was associated with lower rejection rates.
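As an illustration of this analysis pipeline, the following Python sketch runs the 2 × 2 between‐subjects ANOVA and the correlation on synthetic per‐participant data; it is not the authors' analysis script, and the column names are assumed.

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Synthetic per-participant data purely for illustration (see Section 2 for
# the real data); column names are assumptions.
rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "proposer": rng.choice(["human", "AI"], n),
    "offer": rng.choice(["fair", "unfair"], n),
})
df["rejection_rate"] = rng.uniform(0, 1, n)
df["fairness_score"] = rng.integers(1, 10, n)

# 2 (proposer identity) x 2 (offer) between-subjects ANOVA on rejection rates
model = ols("rejection_rate ~ C(proposer) * C(offer)", data=df).fit()
print(anova_lm(model, typ=2))

# Correlation between fairness-perception scores and rejection rates
print(df["fairness_score"].corr(df["rejection_rate"]))
```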

Regarding trust levels, a 2 (offers: fair vs. unfair) × 2 (proposer identity: human vs. AI) × 2 (trustee identity: human vs. AI) mixed‐model ANOVA with repeated measures on the third factor was conducted to examine investment rates and investment amounts. This study aimed to investigate whether fairness perceptions in prior human–AI interactions would influence subsequent trust decisions. Additionally, to validate the internal consistency of trust levels, a correlation analysis was performed between the investment rates and investment amounts.
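Because the design combines two between‐subjects factors with one within‐subjects factor, one practical way to approximate the mixed ANOVA is a linear mixed‐effects regression with a random intercept per participant, sketched below on synthetic long‐format data. This is an illustrative alternative specification, not the authors' exact analysis, and all column names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Long-format synthetic data: one row per participant x trustee identity.
rng = np.random.default_rng(1)
n = 120
long = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "proposer": np.repeat(rng.choice(["human", "AI"], n), 2),
    "offer": np.repeat(rng.choice(["fair", "unfair"], n), 2),
    "trustee": np.tile(["human", "AI"], n),
})
long["invest_rate"] = rng.uniform(0, 1, 2 * n)

# A random intercept per participant captures the repeated (within-subject)
# trustee factor; the fixed effects give the 2 x 2 x 2 factorial structure.
m = smf.mixedlm("invest_rate ~ C(offer) * C(proposer) * C(trustee)",
                data=long, groups=long["subject"]).fit()
print(m.summary())
```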

3. Results

3.1. Rejection Rate in the Ultimatum Game

There was an effect of offers (F[1, 116] = 100.96, p < 0.001, ηp² = 0.47): the rejection rate of unfair offers (M = 0.68, SD = 0.28) was higher than that of fair offers (M = 0.23, SD = 0.20; Figure 5a). There was no effect of proposer identity (F[1, 116] = 0.61, p = 0.437), nor an interaction (F[1, 116] = 0.60, p = 0.441).

FIGURE 5. Rejection rate (a) and fairness‐perception score (b) as a function of offers in the Ultimatum Game and proposer identity. *p < 0.05, **p < 0.01. Error bars represent standard errors.

3.2. Fairness Perception Score

There was an effect of offers (F[1, 116] = 31.36, p < 0.001, ηp² = 0.21). Unsurprisingly, fair offers were rated as fairer (M = 5.38, SD = 2.08) than unfair offers (M = 3.23, SD = 2.14; Figure 5b). There was no effect of proposer identity (F[1, 116] = 0.02, p = 0.965) and no interaction (F[1, 116] = 3.17, p = 0.078). In addition, fairness‐perception scores were negatively correlated with rejection rates (r = −0.35, p < 0.001).

3.3. Trust Levels

3.3.1. Investment Rate

There was an effect of offers (F[1, 116] = 9.26, p = 0.003, ηp² = 0.074) and an interaction between offers and proposer identity (F[1, 116] = 4.49, p = 0.036, ηp² = 0.037; Figure 6a). Simple effects analysis revealed that after receiving unfair offers, investment rates were higher with AI proposers than with human proposers (p = 0.005). After fair offers, investment rates did not differ between proposer types (p = 0.907).

FIGURE 6. Investment rates as a function of offers in the Ultimatum Game and proposer identity (a), or as a function of offers and trustee identity (b). *p < 0.05, **p < 0.01. Error bars represent standard errors.

There was an interaction between offers and trustee identity (F[1, 116] = 4.79, p = 0.031, ηp² = 0.040; Figure 6b). Simple effects analysis showed that after fair offers, participants had higher investment rates with AI trustees than with human trustees (p = 0.005). After unfair offers, investment rates did not differ between trustee types (p = 0.797). There was no effect of proposer identity (F[1, 116] = 3.82, p = 0.053) or trustee identity (F[1, 116] = 3.32, p = 0.071), no interaction between proposer and trustee identity (F[1, 116] = 3.40, p = 0.068), and no three‐way interaction of offers, proposer identity and trustee identity (F[1, 116] = 1.34, p = 0.249).

3.3.2. Investment Amount

There was an effect of offers (F[1, 116] = 12.95, p < 0.001, ηp² = 0.10) and an interaction between offers and trustee identity (F[1, 116] = 7.12, p = 0.009, ηp² = 0.06; Figure 7). Simple effects analysis showed that after fair offers, participants invested more with AI trustees than with human trustees (p = 0.004). After unfair offers, investment amounts did not differ (p = 0.386). There was no effect of proposer identity (F[1, 116] = 0.09, p = 0.764) or trustee identity (F[1, 116] = 2.07, p = 0.153), and no other interactions: proposer × offer (F[1, 116] = 0.02, p = 0.876), proposer × trustee (F[1, 116] = 0.60, p = 0.442), or offer × proposer × trustee (F[1, 116] = 1.42, p = 0.236). Investment rates and amounts were strongly correlated for both AI (r = 0.621, p < 0.001) and human partners (r = 0.702, p < 0.001).

FIGURE 7. Investment amount as a function of offers in the Ultimatum Game and trustee identity. *p < 0.05, **p < 0.01. Error bars represent standard errors.

4. Discussion

This study investigated fairness perception in interpersonal and human‐computer interaction contexts. Regardless of the identity of the proposer, the fair group's trust investment rates and amounts for both AI and human trustees were higher than those of the unfair group. Both interpersonal interactions and HCIs thus exhibited a pattern in which unfairness perceived during previous interactions reduced trust in subsequent interactions.

Peysakhovich and Rand (2016) suggested that people's cooperative norms and expectations of others stem from emotions in daily interactions. Experiences formed during previous interactions can therefore generalise to subsequent social activities and affect behavioural responses to them, producing spillover effects, especially when those experiences are rooted in negative emotions. This study provides supporting and extending evidence for such spillover effects, showing that they exist both among human beings and between human beings and AI in social interactions.

In addition, the experimental results showed that the proposer's identity affected participants' investment rates in the TG. Specifically, in the unfair group, participants invested more frequently with an AI proposer than with a human proposer, whereas investment amounts remained similar between proposer types. Perhaps the investment rate mainly reflected the willingness to invest and positive expectations about the trustees' returns, and the choice of whether to invest did not directly affect participants' interests. The investment amount, by contrast, was related to factors such as the investor's risk tolerance and expected return, so participants were more cautious in choosing investment amounts.

With the rapid development of technology and the continuous optimisation of algorithms, algorithm appreciation has become typical (Logg et al. 2019), characterised by a high degree of reliance on algorithms and more positive attitudes and behaviours towards them. Because AI systems operate through algorithms, participants might perceive AI proposers as more reliable than human proposers, especially when considering factors such as algorithmic performance, personal experience with algorithms, and perceptions of objectivity and transparency. This positive impression led participants to hold higher expectations and to accept certain risks in interactions with AI.

Participants in the fair group invested more frequently and with larger amounts in AI trustees than in human trustees, whereas trustee type did not affect investments in the unfair condition. The trust decision‐making process is influenced by the source of risk, and betrayal aversion can undermine trust (Humphrey and Mondorf 2021). After being treated unfairly, participants were more likely to avoid risky investments. If one trusts others and entrusts them with decision‐making power, the result may be mutual benefit, or the trusted party may betray that trust. In this study, participants in the fair condition invested at higher rates and with larger amounts in AI trustees than in human trustees. Apparently, they regarded the probability of the AI trustee returning tokens as a random outcome generated by the AI system; consequently, the investment risk associated with AI under fair conditions was attributed more to objective factors.

Participants in the fair group rated fairness higher than those in the unfair group, and human and AI proposers did not elicit different fairness ratings, suggesting that proposer identity had only a minimal effect on fairness perception. These patterns indicate that fairness judgements depended primarily on the decision‐making context. In the Ultimatum Game, participants had access to only limited and ambiguous information; therefore, they tended to use fairness‐related information obtained from the situation, combined with existing knowledge, to quickly form an overall fairness perception (Fang and Chen 2022). Consequently, when evaluating identical allocation proposals under limited information, proposer identity exerted only a marginal influence on fairness perceptions.

In the Ultimatum Game, participants rejected unfair offers more frequently than fair offers, supporting the automatic negative‐reciprocity hypothesis. Our results extended this pattern from interpersonal to HCI contexts: unfair proposals eroded trust in AI partners during subsequent TGs.

5. Limitations

This study simulated repeated social interactions via multi‐round tasks, yet these lab‐based interactions differed fundamentally from real‐world dynamics. First, unlike naturalistic settings in which trust develops through long‐term relationships, our paradigm involved short‐term, monetarily incentivised interactions with a limited number of rounds. This design excluded real‐world factors such as reputation accumulation and moral constraints, potentially attenuating socioemotional influences. Second, proposers and trustees were presented only textually, without interactive interfaces, compromising authentic perception of AI counterparts. Future research should incorporate realistic scenarios (e.g., AI customer service) or multimodal interactions (e.g., voice and facial expressions).

Furthermore, the study's findings might be influenced by gender‐related factors. The present study therefore examined the moderating role of gender in decision‐making behaviours using mixed‐effects models with bootstrap resampling (1000 iterations). Results indicated that gender did not moderate the effects of proposer type (χ²(1) = 1.11, p = 0.29) or scheme fairness (χ²(1) = 0.07, p = 0.79) on rejection rates, as the 95% CIs for gender differences in proposer type (−0.06, 0.02) and trustee type (−0.12, 0.11) included zero. Similarly, for investment rates, gender showed no moderating effect on trustee type (χ²(1) = 0.19, p = 0.665) or scheme fairness (χ²(1) = 0.08, p = 0.782), with the 95% CIs for gender differences (proposer type: [−0.14, 0.09]; trustee type: [−0.11, 0.17]) spanning zero. However, for investment amounts, gender moderated the effect of trustee type (χ²(3) = 10.04, p = 0.018): male participants invested more in AI trustees than in human trustees, while female participants showed the opposite pattern. These findings suggest that gender plays a contextual, domain‐specific moderating role in social decision‐making (e.g., in HCI). Future studies should refine experimental designs to improve the ecological validity of the findings.
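The moderation test reported above can be approximated with a likelihood‐ratio comparison of mixed‐effects models fitted with and without the gender × trustee interaction, as sketched below on synthetic data. The exact model specification and the bootstrap procedure (1000 resamples around such fits) used in the study are not reproduced here; the column names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic long-format data for illustration only.
rng = np.random.default_rng(2)
n = 120
long = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "gender": np.repeat(rng.choice(["male", "female"], n), 2),
    "trustee": np.tile(["human", "AI"], n),
})
long["invest_amount"] = rng.uniform(0, 100, 2 * n)

def lrt_moderation(data, dv):
    """Likelihood-ratio test for a gender x trustee interaction (ML fits)."""
    full = smf.mixedlm(f"{dv} ~ C(trustee) * C(gender)", data,
                       groups=data["subject"]).fit(reml=False)
    reduced = smf.mixedlm(f"{dv} ~ C(trustee) + C(gender)", data,
                          groups=data["subject"]).fit(reml=False)
    chi2 = 2 * (full.llf - reduced.llf)                    # LRT statistic
    df_diff = len(full.fe_params) - len(reduced.fe_params)  # extra fixed effects
    return chi2, df_diff, stats.chi2.sf(chi2, df_diff)

print(lrt_moderation(long, "invest_amount"))
```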

Finally, the study's exclusive focus on Chinese participants warrants careful consideration of cultural influences. China's collectivist orientation, which emphasises guanxi (relational harmony) and mianzi (face maintenance), may have systematically shaped response patterns in ways that are distinct from individualistic cultures. Specifically, these cultural norms could underlie participants' greater acceptance of nominally fair outcomes and higher tolerance towards AI's non‐social nature, behaviours potentially motivated by harmony preservation rather than genuine trust. This cultural lens suggests that our findings regarding AI trust under unfair conditions may not generalise to other cultural contexts, highlighting the need for cross‐cultural replications using equivalent methodological frameworks to disentangle cultural versus universal patterns in HCI.

6. Conclusions

This study established that fairness perceptions in HCIs exerted a stronger influence on trust‐based decision‐making than fairness perceptions in interpersonal contexts. This distinction likely originates from reduced anthropomorphic expectations towards AI systems, which enable objective fairness evaluations to dominate decisions without interference from social biases. Consequently, these findings refine trust formation mechanisms across interaction modalities and highlight critical pathways for developing trustworthy AI systems.

Author Contributions

Rui Chen: Conceptualization (equal); Methodology (lead); Formal analysis (equal); Writing – original draft (lead); Writing – review & editing (equal). Yating Jin: Conceptualization (equal); Formal analysis (lead); Investigation (lead); Writing – original draft (equal); Writing – review & editing (equal). Lincang Yu: Conceptualization (equal); Formal analysis (supporting); Investigation (equal); Writing – original draft (supporting); Writing – review & editing (equal). Tobias Tempel: Conceptualization (equal); Methodology (equal); Writing – review & editing (equal). Peng Li: Conceptualization (equal); Methodology (equal); Project administration (lead); Supervision (lead); Writing – review & editing (equal). Shi Zhang: Conceptualization (equal); Investigation (supporting); Data curation (equal). Anqi Li: Conceptualization (equal); Investigation (supporting); Data curation (equal). Weijie He: Conceptualization (equal); Investigation (supporting); Data curation (equal).

Ethics Statement

All procedures performed in studies involving human participants were in accordance with the ethical standards of Yunnan Normal University (China) and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Consent

Informed consent was obtained from participants.

Conflicts of Interest

The authors declare no conflicts of interest.

Chen, R. , Jin Y., Yu L., et al. 2025. “The Influence of Perceived Fairness on Trust in Human‐Computer Interaction.” International Journal of Psychology 60, no. 5: e70111. 10.1002/ijop.70111.

Funding: The authors received no specific funding for this work.

Yating Jin and Rui Chen contributed equally to this work as joint first authors.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

  1. Acikgoz, Y., Davison K. H., Compagnone M., and Laske M. 2020. “Justice Perceptions of Artificial Intelligence in Selection.” International Journal of Selection and Assessment 28, no. 4: 399–416. 10.1111/ijsa.12306.
  2. Adams, J. S. 1965. “Inequity in Social Exchange.” Advances in Experimental Social Psychology 2: 267–299. 10.1016/S0065-2601(08)60108-2.
  3. Cao, P. 2020. “Three Realms of the Reform of Artificial Intelligence in Education.” Educational Research 41, no. 2: 143–150.
  4. Chen, T., Tang R., Yang X., Peng M., and Cai M. 2023. “Moral Transgression Modulates Fairness Considerations in the Ultimatum Game: Evidence From ERP and EEG Data.” International Journal of Psychophysiology 188: 1–11. 10.1016/j.ijpsycho.2023.03.001.
  5. Deng, Y., Wang C. S., Aime F., et al. 2020. “Culture and Patterns of Reciprocity: The Role of Exchange Type, Regulatory Focus, and Emotions.” Personality and Social Psychology Bulletin 47, no. 1: 20–41. 10.1177/0146167220913694.
  6. Dreyfus, H. L. 1972. What Computers Can't Do: The Limits of Artificial Intelligence. MIT Press.
  7. Fang, X., and Chen S. 2022. “Effects of Uncertainty and Emotion on Justice Judgment.” Journal of Psychological Science 35, no. 3: 711–717. 10.16719/j.cnki.1671-6981.2012.03.033.
  8. Fehr, E., and Gächter S. 2002. “Altruistic Punishment in Humans.” Nature 415, no. 6868: 137–140. 10.1038/415137a.
  9. Fehr, E., and Schmidt K. M. 1999. “A Theory of Fairness, Competition, and Cooperation.” Quarterly Journal of Economics 114, no. 3: 817–868. 10.1162/003355399556151.
  10. Güth, W., Schmittberger R. W., and Schwarze B. 1982. “An Experimental Analysis of Ultimatum Bargaining.” Journal of Economic Behavior & Organization 3: 367–388. 10.1016/0167-2681(82)90011-7.
  11. Halali, E., Bereby‐Meyer Y., and Meiran N. 2014. “Between Self‐Interest and Reciprocity: The Social Bright Side of Self‐Control Failure.” Journal of Experimental Psychology: General 143, no. 2: 745–754. 10.1037/a0033824.
  12. Hoff, K., and Bashir M. 2015. “Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust.” Human Factors: The Journal of the Human Factors and Ergonomics Society 57: 407–434. 10.1177/0018720814547570.
  13. Humphrey, S. J., and Mondorf S. 2021. “Testing the Causes of Betrayal Aversion.” Economics Letters 198: 109663. 10.1016/j.econlet.2020.109663.
  14. Jiang, L., Cao L., Qin X., Tan L., Chen C., and Peng X. 2022. “Fairness Perceptions of Artificial Intelligence Decision‐Making.” Advances in Psychological Science 30, no. 5: 1078–1092. 10.3724/SP.J.1042.2022.01078.
  15. Langer, M., König C. J., Sanchez D. R., and Samadi S. 2020. “Highly Automated Interviews: Applicant Reactions and the Organizational Context.” Journal of Managerial Psychology 35, no. 4: 301–314. 10.1108/JMP-09-2018-0402.
  16. Lind, E. A. 2001. “Fairness Heuristic Theory: Justice Judgments as Pivotal Cognitions in Organizational Relations.” In Advances in Organizational Justice, edited by Greenberg J. and Cropanzano R., 56–88. Stanford University Press.
  17. Logg, J. M., Minson J. A., and Moore D. A. 2019. “Algorithm Appreciation: People Prefer Algorithmic to Human Judgment.” Organizational Behavior and Human Decision Processes 151: 90–103. 10.1016/j.obhdp.2018.12.005.
  18. Moretti, L., and di Pellegrino G. 2010. “Disgust Selectively Modulates Reciprocal Fairness in Economic Interactions.” Emotion 10, no. 2: 169–180. 10.1037/a0017826.
  19. Newman, D. T., Fast N. J., and Harmon D. J. 2020. “When Eliminating Bias Isn't Fair: Algorithmic Reductionism and Procedural Justice in Human Resource Decisions.” Organizational Behavior and Human Decision Processes 160: 149–167. 10.1016/j.obhdp.2020.03.008.
  20. Peysakhovich, A., and Rand D. G. 2016. “Habits of Virtue: Creating Norms of Cooperation and Defection in the Laboratory.” Management Science 62, no. 3: 631–647. 10.1287/mnsc.2015.2168.
  21. Sanfey, A. G., Rilling J. K., Aronson J. A., Nystrom L. E., and Cohen J. D. 2003. “The Neural Basis of Economic Decision‐Making in the Ultimatum Game.” Science 300, no. 5626: 1755–1758. 10.1126/science.1082976.
  22. Schildt, H. A. 2017. “Big Data and Organizational Design – The Brave New World of Algorithmic Management and Computer Augmented Transparency.” Innovation 19: 23–30. 10.1080/14479338.2016.1252043.
  23. Singer, T., Seymour B., O'Doherty J. P., Stephan K. E., Dolan R. J., and Frith C. D. 2006. “Empathic Neural Responses Are Modulated by the Perceived Fairness of Others.” Nature 439, no. 7075: 466–469. 10.1038/nature04271.
  24. Thielmann, I., and Hilbig B. 2015. “Trust: An Integrative Review From a Person–Situation Perspective.” Review of General Psychology 19: 249–277. 10.1037/gpr0000046.
  25. Weiß, M., Paelecke M., and Hewig J. 2021. “In Your Face(t)—Personality Traits Interact With Prototypical Personality Faces in Economic Decision Making.” Frontiers in Psychology 12: 652506. 10.3389/fpsyg.2021.652506.
  26. Weiß, M., Rodrigues J., Paelecke M., and Hewig J. 2020. “We, Them, and It: Dictator Game Offers Depend on Hierarchical Social Status, Artificial Intelligence, and Social Dominance.” Frontiers in Psychology 11: 541756. 10.3389/fpsyg.2020.541756.
  27. Xiao, M., and Gong D. 2022. “Supervision Strategy Analysis on Price Discrimination of E‐Commerce Company in the Context of Big Data Based on Four‐Party Evolutionary Game.” Computational Intelligence and Neuroscience 2022: 2900286. 10.1155/2022/2900286.
  28. Yuan, B., Chen S., Dong Y., and Li W. 2023. “How Does Unfairness Perception Influence Generalized Trust?” Journal of Psychological Science 46, no. 3: 693–703. 10.16719/j.cnki.1671-6981.20230323.
  29. Zhao, Y., and Cao W. 2020. “The Inspiration of Artificial Intelligence Application in the Prevention and Control of COVID‐19 Virus.” Journal of Information Resources Management 10, no. 6: 20–27. 10.13365/j.jirm.2020.06.020.
