PLOS ONE. 2023 Jul 17;18(7):e0284840. doi: 10.1371/journal.pone.0284840

Who should be first? How and when AI-human order influences procedural justice in a multistage decision-making process

Luyuan Jiang1,#, Xin Qin1,*,#, Kai Chi Yam2, Xiaowei Dong1, Wanqi Liao1, Chen Chen1,#
Editor: José Manuel Santos Jaén3
PMCID: PMC10351705  PMID: 37459307

Abstract

Artificial intelligence (AI) has fundamentally changed the way people live and has largely reshaped organizational decision-making processes. In particular, AI decision making has become involved in almost every aspect of human resource management, including recruiting, selecting, motivating, and retaining employees. However, existing research only considers single-stage decision-making processes and overlooks the more common multistage decision-making processes. Drawing upon person-environment fit theory and the algorithmic reductionism perspective, we explore how and when the order of decision makers (i.e., AI-human order vs. human-AI order) affects procedural justice in a multistage decision-making process involving AI and humans. We proposed and found that individuals perceived a decision-making process arranged in human-AI order as having less AI ability-power fit (i.e., the fit between the abilities of AI and the power it is granted) than a process arranged in AI-human order, which led to less procedural justice. Furthermore, perceived AI ability buffered the indirect effect of the order of decision makers (i.e., AI-human order vs. human-AI order) on procedural justice via AI ability-power fit. Together, our findings suggest that the position of AI in collaborations with humans has profound impacts on individuals’ justice perceptions regarding decision making.

Introduction

In recent years, artificial intelligence (AI), which refers to “a highly capable and complex technology that aims to simulate human intelligence” [1], has been widely used to help make decisions in various areas, such as human resource management (HRM) [2, 3], tax management [4], sales management [5, 6], and customer service [7]. In particular, given its high efficiency and performance [8–10], AI decision making has become involved in almost every aspect of HRM, including recruiting, selecting, motivating, and retaining employees, and has broad impacts on both employees and organizations [2, 3, 11, 12]. In this vein, one of the most important issues scholars have focused on is employees’ perceptions of the fairness of such decision-making procedures [13–15]. This is because fairness perceptions are strongly associated with employees’ attitudes and behaviors toward organizations [16, 17], and candidates are less likely to accept offers from organizations they deem to have a low level of organizational fairness [18–20]. Some emerging research has also provided important insights into how AI characteristics (e.g., accuracy, transparency) and individual differences (e.g., race, gender, education) affect fairness perceptions of AI decision making [21, 22]. However, several studies have shown that although AI has been demonstrated to have high efficiency and to outperform humans [23, 24], individuals tend to feel it is less fair when AI makes personnel decisions than when humans make personnel decisions, as they perceive that the AI decision-making process does not, or is unable to, consider the qualitative factors or unique social contexts of candidates [14, 25–27].

While previous studies provide a first look into the effects of AI on individuals’ fairness perceptions during decision-making processes, they mainly consider single-stage processes in which either AI or a human makes a decision independently, largely overlooking the complexity of managerial decision making in the real world (e.g., multistage decision-making processes). Indeed, in organizations, a final decision often unfolds in a multistage process [28–31], which involves a step-by-step approach whereby later available choices are based on early-stage decisions [28]. For example, top technology companies typically require at least five rounds of interviews before an offer is made [32]. These include a series of interrelated stages, including initial screening interviews and later targeted interviews [33]. This process allows humans to make decisions in some stages, while AI makes decisions in others.

Although collaboration between humans and AI is ubiquitous in multistage organizational decision making [34–36], we have limited knowledge about how the order arrangement of AI and humans as decision makers affects individuals’ fairness perceptions. In multistage decision making, decision makers eliminate unacceptable alternatives and reduce the number of alternatives in the earlier stages and make the final choice from the remaining alternatives in the last stage [28, 31]. The order of decision makers (i.e., AI-human order vs. human-AI order) is a specific procedural design in AI-human joint decision-making contexts, reflecting who makes decisions in the earlier stages and who makes them in the final stage. Since individuals usually cannot immediately learn the results of personnel decision making, the order of decision makers provides direct information about, and shapes impressions of, the decision-making process [37]. Thus, in this research, we focus on a typical two-stage personnel selection process and explore (1) whether the order of decision makers (i.e., AI-human order vs. human-AI order) affects individuals’ procedural justice, which is among the most intuitive fairness perceptions and one that individuals care about most in personnel decision making [37], and (2) if so, how and when this happens. Exploring these questions is theoretically important because doing so expands our knowledge of the neglected but ubiquitous phenomenon of multistage decision making involving both AI and humans in HRM. Practically, by understanding the influence of the order of decision makers (i.e., AI-human order vs. human-AI order), organizations can better design decision-making procedures when introducing AI to help make personnel decisions.

To answer these questions, we draw upon person-environment fit theory (P-E fit theory; [38, 39]) and the algorithmic reductionism perspective [14]. P-E fit theory suggests that outcomes are optimal when the attributes of actors (e.g., the abilities of AI and humans) and the surrounding environment (e.g., the specific stage of the decision-making process) are compatible [38, 39], while the algorithmic reductionism perspective suggests that individuals tend to perceive that AI has lower levels of ability than humans due to its inability to consider certain qualitative information and to contextualize this information [14]. Integrating P-E fit theory and the algorithmic reductionism perspective, we posit that in a multistage decision-making process, individuals perceive a lower level of AI ability-power fit (i.e., the fit between the abilities of AI and the power it is granted) and consequently decreased procedural justice (i.e., the perception of the appropriateness of decision-making procedures) when a human makes a decision in the first stage and AI makes a decision in the second stage (i.e., the human-AI order condition) than in the reversed condition (i.e., the AI-human order condition), since individuals tend to believe that decision makers in the second stage possess more power and, in turn, are required to have a higher level of ability compared with those in the first stage. Furthermore, as individuals’ perceptions of AI ability vary [40], we examine the moderating role of AI ability to provide a more complete understanding of when the order of decision makers has a stronger influence on procedural justice in a multistage decision-making process. To test our theoretical model (see Fig 1), we conducted two experimental studies.

Fig 1. Theoretical model of the current research.


This research makes several primary theoretical contributions. First, our research contributes to the AI decision-making literature by showing the importance of the order of decision makers (i.e., AI-human order vs. human-AI order) in a multistage decision-making process. Specifically, while previous studies typically consider a single-stage process by exploring how AI characteristics (e.g., accuracy, transparency) and individual differences (e.g., race, gender, education) affect perceptions of AI decision making, or by comparing decisions made by AI or a human independently [13–15, 41], this research considers a largely overlooked decision-making situation—that is, a multistage decision-making process—and explores the effect of the order of decision makers (i.e., AI-human order vs. human-AI order), an important and specific managerial design choice neglected in previous studies, on individuals’ fairness perceptions. In doing so, this work expands our knowledge of individuals’ reactions in a multistage decision-making process that includes both AI and human decision makers. In addition, we identify an important boundary condition (i.e., AI ability), which helps us understand when the order of decision makers (i.e., AI-human order vs. human-AI order) has a stronger impact on procedural justice via AI ability-power fit.

Second, our research contributes to the literature on individuals’ fairness perceptions of AI decision making. Previous studies mostly consider how the attributes of AI (e.g., transparency, explainability, and accuracy) or the environment (e.g., task complexity) affect individuals’ fairness perceptions [42–46]. In this research, we suggest that individuals consider AI to be a “fit” for making decisions at a certain stage even when they perceive that AI has a lower level of ability than humans. Specifically, we reveal that individuals’ perceived procedural justice largely depends on congruence between AI ability and the power it is granted (i.e., AI ability-power fit).

Theoretical grounding and hypothesis development

Integrating person-environment fit and reductionism theories

P-E fit theory suggests that employees’ attitudes and behaviors are optimal when the attributes of actors and the surrounding environment are compatible [38, 39]. In the organizational context, P-E fit is often considered in terms of person-job fit, which refers to the fit between the attributes of a person and his or her job, including demands-abilities fit (i.e., the person’s abilities meet the requirements of the job he or she performs) and needs-supplies fit (i.e., the person’s needs are met by the job he or she performs) [18–20, 38, 47]. Moreover, job attributes can be conceptualized as different characteristics of a job, such as job autonomy, workload, job insecurity, role ambiguity, and (lack of) supervisor support [39].

Since we aim to explore how to arrange AI and humans to meet the requirements of a decision-making process at different stages, we conceptualize AI ability-power fit from the notion of demands-abilities fit and define it as the fit between the abilities of AI and the power it is granted. Since power is often viewed as a form of disproportionate control over other social actors [48–50], individuals tend to perceive that a position granted more power has more control over their goals, which leads them to require the one who takes this position to have a higher level of ability so as to maximize their goal achievement [51–53]. Thus, when decision makers have a low level of ability in a position granted a high level of power, they are more likely to be viewed as less capable of fulfilling the demands of such a position (i.e., misfit with this position) in the decision-making process. Applied to personnel selection, Gilliland suggests that a decision-making process should satisfy three main requirements: characteristics of the selection system, explanations offered during the selection process, and interpersonal treatment [54]. Thus, we propose that AI ability-power fit reflects individuals’ perceptions of whether AI possesses enough ability to provide appropriate evaluation (e.g., job relatedness), explanations (e.g., feedback), and interpersonal treatment (e.g., two-way communication) when making decisions at different positions.

Furthermore, to fully understand the effects of the order of decision makers in a multistage decision-making process, we integrate P-E fit theory [38, 39] with the algorithmic reductionism perspective [14]. The algorithmic reductionism perspective suggests that individuals tend to believe that AI does not have abilities comparable to humans’ in making wise decisions because they think AI neglects “(1) the qualitative characteristics of human nature and, by extension, (2) the contextualized circumstances in which they occur” [14]. In a multistage decision-making process, the ability demands at different stages are quite different [55]. Thus, individuals may perceive a misfit when AI, rather than humans, makes decisions at a stage granted higher power.

The order of decision makers and AI ability-power fit

In a multistage decision-making process, in particular a two-stage process, decision makers have different tasks and responsibilities at different stages: they (1) eliminate unacceptable alternatives and reduce the number of alternatives in the first stage and (2) make the final choice from the remaining alternatives in the second stage [28, 31]. Decision makers in the second stage are typically granted more power and required to have higher levels of ability than those in the first stage because they have more control over individuals’ goal achievement. Specifically, decision makers’ power arises from the extent to which individuals’ goal achievement and gratification depend on them [49]. In our context, an individual’s goal is to be accepted as the final personnel choice. The decision makers in the first stage can only partly affect an individual’s goal achievement by deciding who will pass the first selection round and move into the next round, whereas the decision makers in the second stage have more direct and critical impacts on the individual’s goal achievement by deciding who will be accepted as the final personnel choice [28, 56]. Thus, decision makers in the second stage have more control over individuals’ goal achievement and thus possess more power.

We posit that individuals perceive a lower level of AI ability-power fit when the decision-making process is arranged in human-AI order (vs. AI-human order). Specifically, when the decision-making process is arranged in human-AI order (the human-AI condition), the decisions made by AI are considered to have more impact on interviewees’ success in personnel selection than the decisions made by humans, leading individuals to perceive that AI possesses more power than the human decision makers and, consequently, to require AI to have higher ability than humans to occupy the decision-making position in the second stage. However, the algorithmic reductionism perspective [14] suggests that individuals tend to perceive that AI has a lower level of ability in making wise decisions than humans. Thus, individuals may view AI’s abilities as less likely to meet the requirements of the task (i.e., AI ability-power fit is low) in the human-AI order condition. Conversely, individuals likely perceive a high level of AI ability-power fit when the decision-making process is arranged in AI-human order, since it meets their expectation that the decision makers in the second stage (i.e., humans) have a higher level of ability. Thus, we propose the following hypothesis:

Hypothesis 1: Individuals perceive a lower level of AI ability-power fit in the human-AI order condition than in the AI-human order condition.

The effect of AI ability-power fit on procedural justice

Based on P-E fit theory [38, 39], we further propose that higher AI ability-power fit leads to higher perceptions of procedural justice, which reflects individuals’ perceptions of the appropriateness of decision-making procedures [57, 58] and is among the most intuitive fairness perceptions and one that individuals care about most in personnel decision making [37]. Gilliland suggests that individuals may feel lower procedural justice when the rules of a decision-making process are violated along three dimensions: characteristics of the selection system (e.g., job relatedness), explanations offered during the selection process (e.g., feedback), and interpersonal treatment (e.g., two-way communication) [54]. Since these three dimensions can greatly affect individuals’ perceptions of procedural justice, decision makers should have ability that matches their power (i.e., ability-power fit). Specifically, the above three aspects form the backbone of the abilities that decision makers should have.

Based on Gilliland’s theoretical model [54], we suggest that AI ability-power fit is positively related to procedural justice through these three elements. Specifically, first, when individuals perceive a low level of AI ability-power fit, they may think that AI does not have the abilities to accurately measure the content relevant to the job situation as required by its position in the decision-making process, which is a violation of the accuracy rule of formal characteristics [54, 59, 60]. Second, when individuals perceive a low level of AI ability-power fit, they may believe AI cannot give timely and informative feedback when making decisions, which violates the rule of explanation [54, 61]. Third, they may also consider that the social skills required for decision making are beyond AI’s abilities and will thus lead to limited communication, which is important for interpersonal treatment [37, 54, 59]. Accordingly, when individuals perceive a low level of AI ability-power fit, they may believe that AI does not have the abilities to make accurate decisions at its position, to give timely and informative feedback, and to enable two-way communication, thus leading these individuals to think the decision-making process is less fair [54]. Combining these arguments with Hypothesis 1, we propose the following hypotheses:

Hypothesis 2: AI ability-power fit is positively related to procedural justice.

Hypothesis 3: The order of decision makers (i.e., AI-human order vs. human-AI order) has an indirect effect on procedural justice via AI ability-power fit.

The moderating role of AI ability

Thus far, we have proposed the indirect effect of the order of decision makers (i.e., AI-human order vs. human-AI order) on procedural justice via AI ability-power fit. We further argue that AI ability is an important boundary condition for this relationship. Based on P-E fit theory [38, 39] and the algorithmic reductionism perspective [14], when a human is the decision maker in the first stage and AI is the decision maker in the second stage, individuals tend to feel a lower level of AI ability-power fit, since people tend to believe AI’s abilities are lower than a human’s abilities. Thus, we suggest that this relationship is buffered when AI ability is high.

Specifically, when AI ability is low, individuals are more likely to consider AI less efficient and of lower quality, which means they tend to perceive a greater gap between the abilities of AI and humans. Consequently, individuals are more likely to think that only humans, rather than AI, can meet the requirements of the second stage in the two-stage decision-making process discussed above. Thus, when AI ability is low, individuals tend to feel a lower level of AI ability-power fit in the human-AI order condition than in the AI-human order condition. However, when AI ability is high, individuals may perceive a smaller gap between the abilities of AI and humans and may even believe that both AI and humans are capable of being decision makers in the second stage. As such, they will likely feel there are only small differences between the decision-making processes in the human-AI order condition and the AI-human order condition. Combining this rationale with Hypothesis 3, we further suggest that perceived AI ability is an important boundary condition under which the order of decision makers (i.e., AI-human order vs. human-AI order) has a stronger or weaker impact on individuals’ procedural justice perceptions. In sum, we propose the following hypotheses:

Hypothesis 4: AI ability moderates the relationship between the order of decision makers (i.e., AI-human order vs. human-AI order) and AI ability-power fit such that individuals perceive a lower level of AI ability-power fit in the human-AI order condition than in the AI-human order condition when AI ability is low (vs. high).

Hypothesis 5: AI ability moderates the indirect effect of the order of decision makers (i.e., AI-human order vs. human-AI order) on procedural justice via AI ability-power fit such that individuals perceive a lower level of procedural justice in the human-AI order condition than in the AI-human order condition via AI ability-power fit when AI ability is low (vs. high).

Overview of current research

To test our theoretical model, we conducted two experimental studies. In Study 1, we manipulated the order of decision makers (i.e., AI-human order vs. human-AI order) and examined the indirect effect of the order of decision makers on procedural justice via AI ability-power fit in an interview decision-making context. In Study 2, we manipulated the order of decision makers and examined the indirect effect of the order of decision makers on procedural justice via AI ability-power fit in a promotion decision-making context. In addition, we manipulated AI ability and tested the entire theoretical model in Study 2. Full data of all studies can be found online at https://osf.io/bu67v/?view_only=e4e3ace663674982aa71f8bfc455377b.

Study 1 method

Participants

We recruited 134 employees from the United States via Prolific, a widely used online survey platform [62–64]. In both studies, we complied with the ethical standards of the Declaration of Helsinki and common institutional review board (IRB) regulations regarding data collection procedures, even though the Chinese institutions that employ the authors in charge of data collection did not have an IRB. In particular, participants’ confidentiality was guaranteed throughout the entire data collection process. We obtained written informed consent from all participants, who were asked whether they would like to participate and were allowed to withdraw from the study at any time. This sample size ensured a power level of 0.80 to detect a medium effect (f = 0.25), assuming an α level of 0.05 [65]. Each participant received 0.5 USD as compensation. Among the participants, 78.4% were female and 81.3% were White. Their average age was 29.4 years (SD = 8.3), average education was 16.5 years (SD = 3.0), and average organizational tenure was 4.0 years (SD = 3.9). Participants worked in a variety of industries, including healthcare (18.7%), education (14.2%), information technology (12.7%), service (8.1%), and others (46.3%). They were also from different departments, including technology related (23.9%), administration related (23.9%), finance related (4.9%), marketing related (4.9%), and others (43.3%).
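For readers who wish to reproduce this power calculation, the sketch below shows one way to do it in Python with statsmodels. This is our illustration rather than the authors’ procedure: the paper does not report which power-analysis tool was used, but any standard calculator given Cohen’s f = 0.25, α = .05, and power = .80 should agree.

```python
# A priori power analysis for a two-group between-subjects design,
# matching the parameters reported above (f = 0.25, alpha = .05, power = .80).
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,  # Cohen's f ("medium" by convention)
    alpha=0.05,
    power=0.80,
    k_groups=2,        # AI-human order vs. human-AI order
)
print(round(n_total))  # roughly 128 participants in total for these inputs
```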

Procedure and experiment design

We manipulated the order of decision makers (i.e., interviewers) in an employee-hiring decision-making context, resulting in two conditions (i.e., AI-human order vs. human-AI order). We randomly assigned participants to one of the two scenarios: a two-round interview process arranged in either AI-human order (n = 67) or human-AI order (n = 67). After reading the scenario, participants completed measurements of AI ability-power fit and procedural justice, completed a manipulation check, and reported their demographic information.

Manipulation of the order of decision makers

We manipulated the order of decision makers by instructing participants to read a statement describing a two-round interview process including both AI and human interviewers (for similar research designs, see Longoni et al.’s [66] and Newman et al.’s [14] research). Each scenario began with the following background of the employee-hiring interview context:

Company X is switching from a traditional interview process for employee hiring to including an artificial intelligence (AI) interview. Specifically, in the interview procedure of Company X, there are two rounds of interviews.

Then, we presented the two-round design of the interview process to manipulate the order of decision makers. Specifically, in the AI-human order condition, participants read the following statement:

In the first round, AI interviewers (i.e., algorithm-based decision-making agents) will interview applicants and decide which applicants will pass this interview round. In the second round, human interviewers (i.e., the company’s division managers) will further interview applicants and decide which applicants will get offers.

In the human-AI order condition, participants read another statement:

In the first round, human interviewers (i.e., the company’s division managers) will interview applicants and decide which applicants will pass this interview round. In the second round, AI interviewers (i.e., algorithm-based decision-making agents) will further interview applicants and decide which applicants will get offers.

Measures

Unless otherwise specified, all measures for the two studies used a five-point Likert-type format ranging from 1 = “Strongly disagree” to 5 = “Strongly agree.” All items are available in S1 Appendix.

Perceived AI ability-power fit

We measured perceived AI ability-power fit using a four-item scale adapted from Cable and DeRue’s person-job fit scale [67]. A sample item is “The match is very good between AI’s abilities and the power it has been granted in this interview process” (α = .96).

Procedural justice

We measured procedural justice using Newman et al.’s four-item scale [14], which was adapted from Conlon et al.’s research [68]. A sample item is “The way this interview process determines which candidates receive job opportunities seems fair” (α = .95).

Manipulation check

We asked participants to recall and rate the extent to which they agreed with two statements about the order of the interviewers in the scenario they read. The two items are “In the two interview rounds presented in the scenario above, AI interviewers (rather than humans) are the first-round interviewers” and “In the two interview rounds presented in the scenario above, humans (rather than AI interviewers) are the second-round interviewers” (α = .93).

Analytic strategy

To test our hypotheses, we conducted two-sample t-tests by condition and ordinary least squares (OLS) regressions. To test the mediating effect, we employed the PROCESS macro (v3.5; Model 4) [69] to estimate the indirect effect of the order of decision makers on procedural justice via AI ability-power fit using 5,000 bootstrapped resamples.
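As an illustration, the core of this analysis can be sketched in a few lines of Python. This is a minimal re-implementation of the percentile-bootstrap indirect effect that PROCESS Model 4 estimates, not the authors’ code; the data frame and the column names order, fit, and justice are hypothetical placeholders.

```python
# Percentile-bootstrap test of the indirect effect a*b
# (order -> AI ability-power fit -> procedural justice).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df: pd.DataFrame) -> float:
    a = smf.ols("fit ~ order", data=df).fit().params["order"]          # path a
    b = smf.ols("justice ~ fit + order", data=df).fit().params["fit"]  # path b
    return a * b

def bootstrap_ci(df: pd.DataFrame, reps: int = 5000, seed: int = 1):
    rng = np.random.default_rng(seed)
    n = len(df)
    boots = [indirect_effect(df.iloc[rng.integers(0, n, n)]) for _ in range(reps)]
    return indirect_effect(df), np.percentile(boots, [2.5, 97.5])  # estimate, 95% CI
```

The indirect effect is deemed significant when the 95% bootstrap confidence interval excludes zero, which is the criterion applied in the results below.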

Study 1 results and discussion

Manipulation check

Participants in the AI-human condition agreed that the decision-making process was arranged in AI-human order (M = 4.82, SD = .54) significantly more than those in the human-AI order condition (M = 1.32, SD = .88), t(132) = 27.68, p < .001, d = 4.78. These results suggested that our manipulation was successful.
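For completeness, the statistics reported here (t, p, and Cohen’s d) can be computed as in the sketch below. The arrays are illustrative synthetic stand-ins only, not the study’s responses; the actual data are available in the OSF repository linked above.

```python
# Two-sample t-test plus Cohen's d based on the pooled standard deviation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ai_human = rng.normal(4.8, 0.5, 67)   # illustrative stand-ins for the two
human_ai = rng.normal(1.3, 0.9, 67)   # conditions' manipulation-check scores

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    sd_pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / sd_pooled

t_stat, p_value = stats.ttest_ind(ai_human, human_ai)  # equal variances assumed
print(t_stat, p_value, cohens_d(ai_human, human_ai))
```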

Tests of hypotheses

The descriptive statistics and correlations are presented in Table 1.

Table 1. Descriptive statistics and correlation coefficients of variables in Study 1.

Variables                          Mean    SD    1        2
1. The order of decision makers     .50    .50
2. AI ability-power fit            2.88   1.00   -.25**
3. Procedural justice              3.04    .98   -.24**   .85***

Note. n = 134. For the order of decision makers, AI-human condition = 0, human-AI condition = 1.

*p < .05,

** p < .01,

***p < .001.

Hypothesis 1 proposes that individuals perceive a lower level of AI ability-power fit in the human-AI order condition than in the AI-human order condition. The two-sample t-test results showed that participants perceived a higher level of AI ability-power fit in the AI-human order condition (M = 3.13, SD = .97) than in the human-AI order condition (M = 2.63, SD = .96), t(132) = 2.97, p = .004, d = .51. The result of OLS regression also showed that the order of decision makers had a significant and negative relationship with AI ability-power fit (Model 1, Table 2; b = -.50, p = .004). Thus, Hypothesis 1 was supported.

Table 2. The main effect of the order of decision makers on procedural justice in Study 1.

                               AI ability-power fit        Procedural justice
                               Model 1                     Model 2                     Model 3
Variables                      b      SE    t              b      SE    t              b      SE    t
The order of decision makers   -.50   .17   -2.96**        -.47   .17   -2.87**        -.06   .10   -0.66
AI ability-power fit                                                                   .83    .05   17.94***
Constant                       3.13   .12   26.36***       3.28   .12   28.03***       .68    .16   4.30***
R²                             .06**                       .06**                       .73***

Note. n = 134; n = 67 in the AI-human condition; n = 67 in the human-AI condition. For the order of decision makers, AI-human condition = 0, human-AI condition = 1.

*p < .05,

** p < .01,

***p < .001.

Hypothesis 2 proposes that AI ability-power fit is positively related to procedural justice. As shown in Model 3 of Table 2, AI ability-power fit was positively related to procedural justice (b = .83, p < .001). Thus, Hypothesis 2 was supported.

Hypothesis 3 proposes that the order of decision makers has an indirect effect on procedural justice via AI ability-power fit. The two-sample t-test result showed that participants perceived a higher level of procedural justice in the AI-human order condition (M = 3.28, SD = .96) than in the human-AI order condition (M = 2.80, SD = .95), t(132) = 2.87, p = .005, d = .50. The result of OLS regression also showed that the order of decision makers had a significant and negative relationship with procedural justice (Model 2, Table 2; b = -.47, p = .005). The bootstrapping results revealed a significant indirect effect of the order of decision makers on procedural justice via AI ability-power fit (estimate = -.41, SE = .14, 95% CI = [-.69, -.14]). Thus, Hypothesis 3 was supported.

These results offered initial support for Hypotheses 1, 2, and 3. Study 1 demonstrated that the order of decision makers (i.e., AI-human order vs. human-AI order) affects procedural justice via AI ability-power fit. Specifically, compared with interviews arranged in AI-human order, individuals in interviews arranged in human-AI order perceive less AI ability-power fit, which decreases their procedural justice. Although we conducted this study in a typical multistage decision-making process—namely, for hiring decisions—we did not know whether our findings would hold in other situations. Therefore, we conducted Study 2 to extend our findings to a promotion decision context. In addition, we included the moderator (i.e., AI ability) to test our entire theoretical model in Study 2.

Study 2 method

Participants

We recruited 183 full-time employees from China via the authors’ alumni networks. As in Study 1, informed consent was obtained from all participants. This sample size ensured a power level of 0.80 to detect a medium effect size (f = 0.25), assuming an α level of 0.05 for our 2 × 2 factorial design [65]. Each participant received 0.5 USD as compensation. Among the participants, 59.6% were female. Their average age was 29.1 years (SD = 6.6), average education was 16.9 years (SD = 1.7), and average organizational tenure was 4.0 years (SD = 5.3). Participants worked in a variety of industries, including banking (25.7%), information technology (12.0%), education (11.5%), manufacturing (10.9%), and others (39.9%). Participants were from different departments, including technology related (27.9%), administration related (19.1%), finance related (16.4%), marketing related (10.4%), and others (26.2%).

Procedure and experiment design

Using a 2 (the order of decision makers: human-AI order vs. AI-human order) × 2 (AI ability: low vs. high) factorial design, we manipulated both the order of decision makers and AI ability in a promotion decision context. We randomly assigned participants to one of these four conditions. To manipulate the order of decision makers, we adapted the scenario in Study 1 to a promotion decision context. To manipulate AI ability, we gave participants detailed information about the abilities of the AI decision makers based on the experimental scenarios of Langer et al. [70] and Newman et al. [14]. In line with Study 1, participants then completed measurements of AI ability-power fit, procedural justice, and manipulation checks, and reported their demographic information.

Manipulation of AI ability

We first instructed participants to read the background information of a two-round promotion decision process, which was adapted from Study 1. After that, to manipulate AI ability, we supplied detailed information about the abilities of the AI used in this process. Specifically, in the high AI ability condition, participants were asked to read the following:

Company X is switching from a traditional decision-making process for employee promotion to including artificial intelligence (AI) agents. The AI agents that Company X currently employs can recognize the abilities and personal qualities of candidates, can evaluate candidates systematically, and rarely miss information related to future performance.

In the low AI ability condition, participants were asked to read the following:

Company X is switching from a traditional decision-making process for employee promotion to including artificial intelligence (AI) agents. The AI agents that Company X currently employs cannot fully recognize the abilities and personal qualities of candidates, can evaluate candidates on only a few aspects, and may miss information related to future performance.

Manipulation of the order of decision makers

Adapted from Study 1, we presented a two-round promotion decision scenario to manipulate the order of decision makers. Specifically, in the AI-human order condition, participants read the following statement:

Specifically, in the decision-making process of Company X, there are two rounds of assessments. In the first round, AI (i.e., algorithm-based decision-making agents) will evaluate candidates and decide which candidates will pass this assessment round. In the second round, humans (i.e., the company’s division managers) will further evaluate candidates and decide which candidates will get promoted.

In the human-AI order condition, participants read another statement:

Specifically, in the decision-making process of Company X, there are two rounds of assessments. In the first round, humans (i.e., the company’s division managers) will evaluate candidates and decide which candidates will pass this assessment round. In the second round, AI (i.e., algorithm-based decision-making agents) will further evaluate candidates and decide which candidates will get promoted.

Measures

All scales in Study 2 were translated into Mandarin Chinese following a translation-back translation procedure based on Brislin’s suggestions [71].

Perceived AI ability-power fit

We measured perceived AI ability-power fit using the same scale as in Study 1 (α = .90).

Procedural justice

We measured procedural justice using the same scale as in Study 1 (α = .92).

Manipulation check

Similar to Study 1, we asked participants to recall and rate the extent to which they agreed with two statements about the order of decision makers in the scenario they read. A sample item is “In the two rounds of promotion presented in the scenario above, AI interviewers (rather than humans) are the first round decision makers” (α = .93). To check the manipulation of AI ability, we asked participants to rate the abilities of AI in the presented scenario using a three-item scale adapted from Newman et al. [14]. A sample item is “In this promotion decision-making process, AI could put all factors related to future performance into context when evaluating candidates” (α = .78).

Analytic strategy

In line with Study 1, we conducted two-sample t-tests and OLS regressions to test our hypotheses and estimated the indirect effects using the PROCESS macro (Model 4). To test the moderated mediation effect, we conducted bootstrapping-based mediation analyses using the PROCESS macro (Model 7) with 5,000 bootstrapped resamples and estimated the conditional indirect effects at high and low levels of the moderator [72, 73].
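The first-stage moderated mediation logic behind PROCESS Model 7 can be sketched as follows. This sketch reuses the hypothetical column names from the Study 1 sketch plus a binary ability moderator, and omits the bootstrap loop that PROCESS wraps around these quantities.

```python
# PROCESS Model 7 logic: the first-stage path depends on the moderator W,
# so the conditional indirect effect at W = w is (a1 + a3 * w) * b.
import pandas as pd
import statsmodels.formula.api as smf

def conditional_indirect(df: pd.DataFrame, w: float) -> float:
    stage1 = smf.ols("fit ~ order * ability", data=df).fit()   # yields a1, a3
    stage2 = smf.ols("justice ~ fit + order", data=df).fit()   # yields b
    a = stage1.params["order"] + stage1.params["order:ability"] * w
    return a * stage2.params["fit"]

# With a 0/1 moderator, conditional_indirect(df, 1) - conditional_indirect(df, 0)
# is the index of moderated mediation; bootstrapping it gives the CI in Table 5.
```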

Study 2 results

Manipulation check

Regarding the manipulation check for the order of decision makers, participants in the AI-human condition agreed that the decision-making process was arranged in AI-human order (M = 4.09, SD = .68) significantly more than those in the human-AI condition (M = 2.46, SD = 1.36), t(181) = 10.28, p < .001, d = 1.52. Regarding the manipulation check for AI ability, participants in the high AI ability condition rated the AI as having a higher level of ability (M = 3.33, SD = .73) than those in the low AI ability condition (M = 2.99, SD = .89), t(181) = -2.83, p = .005, d = .42. These results suggested that our manipulations were successful.

Tests of hypotheses

Table 3 shows the descriptive statistics and correlations.

Table 3. Descriptive statistics and correlation coefficients of variables in Study 2.

Variables                            Mean    SD    1         2      3
1. The order of decision makers       .49    .50
2. AI ability                         .50    .50   -.01
3. AI ability-power fit              3.24    .88   -.33***   .15*
4. Procedural justice perceptions    3.31    .84   -.29***   .11    .72***

Note. n = 183. For the order of decision makers, AI-human condition = 0, human-AI condition = 1. For AI ability, low AI ability condition = 0, high AI ability condition = 1.

*p < .05,

** p < .01,

***p < .001.

Hypothesis 1 proposes that individuals perceive a lower level of AI ability-power fit in the human-AI order condition than in the AI-human order condition. The two-sample t-test results showed that participants in the human-AI condition perceived a lower level of AI ability-power fit (M = 2.94, SD = .96) than those in the AI-human condition (M = 3.52, SD = .70), t(181) = 4.63, p < .001, d = .68. The result of OLS regression also showed that the order of decision makers had a significant and negative relationship with AI ability-power fit (Model 1, Table 4; b = -.57, p < .001). Thus, Hypothesis 1 was supported.

Table 4. The main and interactive effects of the order of decision makers and AI ability on procedural justice in Study 2.

                                            AI ability-power fit                       Procedural justice
                                            Model 1              Model 2               Model 3              Model 4               Model 5
Variables                                   b     SE   t         b     SE   t          b     SE   t         b     SE   t          b     SE   t
The order of decision makers                -.57  .12  -4.67***  -.95  .17  -5.67***   -.48  .12  -4.02***  -.10  .09  -1.07      -.18  .13  -1.36
AI ability                                  .27   .12  2.19*     -.11  .17  -0.67      .18   .12  1.50      .00   .09  .01        -.07  .12  -0.59
The order of decision makers × AI ability                        .77   .24  3.24**                                                .16   .18  .86
AI ability-power fit                                                                                        .67   .05  12.55***   .66   .05  11.99***
Constant                                    3.38  .11  32.01***  3.57  .12  30.09***   3.45  .10  33.44***  1.19  .20  6.10***    1.27  .21  5.91***
R²                                          .13***               .18***                .09**                .52***                .52***

Note. n = 183; n = 47 in the AI-human order and high AI ability condition; n = 46 in the AI-human order and low AI ability condition; n = 45 in the human-AI order and high AI ability condition; n = 45 in the human-AI order and low AI ability condition. For the order of decision makers, AI-human condition = 0, human-AI condition = 1. For AI ability, low AI ability condition = 0, high AI ability condition = 1.

*p < .05,

** p < .01,

***p < .001.

Hypothesis 2 proposes that AI ability-power fit is positively related to procedural justice, while Hypothesis 3 proposes that the order of decision makers has an indirect effect on procedural justice via AI ability-power fit. As shown in Table 4, AI ability-power fit was positively related to procedural justice (b = .67, p < .001). The two-sample t-test results showed that participants perceived a lower level of procedural justice in the human-AI order condition (M = 3.06, SD = .61) than in the AI-human order condition (M = 3.55, SD = .98), t(181) = 4.02, p < .001, d = .60. The results of the OLS regressions also showed that the order of decision makers had a significant and negative relationship with procedural justice (Model 3, Table 4; b = -.48, p < .001). As shown in Table 5, the order of decision makers had a significant indirect effect on procedural justice via AI ability-power fit (estimate = -.38, SE = .10, 95% CI = [-.59, -.21]). Thus, Hypotheses 2 and 3 were supported.

Table 5. Analyses of conditional indirect effects in Study 2.

Paths and effects                                                          Estimate   SE    95% confidence interval
The order of decision makers → AI ability-power fit → Procedural justice
  Simple indirect effect                                                   -.38       .10   [-.59, -.21]
  Moderated mediation
    Lower AI ability                                                       -.64       .15   [-.95, -.37]
    Higher AI ability                                                      -.13       .11   [-.34, .08]
    Difference                                                              .52       .17   [.19, .87]

Note. n = 183.

To test Hypotheses 4 and 5, we used the PROCESS macro (Model 7) to test the moderated mediation model. As shown in Table 4, the interaction effect of the order of decision makers (i.e., AI-human order vs. human-AI order) and AI ability in predicting AI ability-power fit was significant (b = .77, p = .001). As depicted in Fig 2, simple slope tests indicated that the relationship between the order of decision makers and AI ability-power fit was significant and negative when AI ability was low (b = -.96, p < .001) but was not significant when AI ability was high (b = -.19, p = .25). Furthermore, the results in Table 5 revealed that the indirect effect of the order of decision makers on procedural justice via AI ability-power fit was significant when AI ability was low (estimate = -.64, SE = .15, 95% CI = [-.95, -.37]) but was not significant when AI ability was high (estimate = -.13, SE = .11, 95% CI = [-.34, .08]). The difference between these indirect effects was also significant (Δestimate = .52, SE = .17, 95% CI = [.19, .87]). Thus, Hypotheses 4 and 5 were supported.

Fig 2. The moderating effect of AI ability.


General discussion

Previous research has typically compared humans and AI in making single-stage decisions [14, 74]. However, in organizations, a final decision often unfolds in a multistage process [28–31]. Drawing upon P-E fit theory [38, 39] and the algorithmic reductionism perspective [14], we focus on two-stage personnel decision-making processes and propose how and when the order of decision makers (i.e., AI-human order vs. human-AI order) affects individuals’ perceptions of procedural justice. Across two experimental studies, we found that, compared to those in the AI-human order condition, individuals in the human-AI order condition perceived a lower level of AI ability-power fit, which led to a lower level of perceived procedural justice. Furthermore, AI ability attenuated the effect of the order of decision makers on procedural justice via AI ability-power fit.

Implications for theory

Our study makes several important theoretical contributions to the literature on AI decision making by exploring a multistage decision-making process. First, previous studies only explored how AI characteristics (e.g., accuracy, transparency) and individual differences (e.g., race, gender, education) affect perceptions of AI decision making, or compared decisions made by AI with those made by a human, largely based on situations in which AI makes decisions independently [14, 41, 75, 76]. However, in organizations, a final decision often unfolds in a multistage decision-making process involving both AI and humans [30, 31, 77]. This research explores a specific managerial design in the AI decision-making process by identifying an important antecedent of individuals’ justice perceptions—the order of decision makers (i.e., AI-human order vs. human-AI order). By doing so, our study contributes to the AI decision-making literature by considering a neglected but ubiquitous context—multistage decision making—which is more common in actual workplaces and thus enables organizations to better understand the impacts of AI decision making. Furthermore, our study also provides some insights into joint human-AI decision making by considering a specific managerial design in personnel selection.

Second, our study contributes to the literature on AI fairness perceptions by exploring an important but neglected antecedent—AI ability-power fit. Our study emphasizes that the perceived fairness of AI is affected by the congruence between the attributes of AI (i.e., AI’s abilities) and its position (i.e., different stages in a decision-making process). However, existing research on AI fairness perceptions views the effects of AI characteristics (e.g., transparency, explainability, accuracy) and environmental characteristics (e.g., task complexity) on AI fairness perceptions separately [42–46]. Based on P-E fit theory [38, 39] and the algorithmic reductionism perspective [14], in this research, we found that AI ability-power fit was positively related to procedural justice. Thus, our research contributes to the literature on AI fairness perceptions by arguing that individuals’ fairness perceptions are determined not only by AI or the environment separately, but also by their fit or congruence with each other (i.e., AI ability-power fit). We also found that AI ability is a boundary condition such that AI ability-power fit and procedural justice are much lower in the human-AI order condition than in the AI-human order condition when AI ability is low (vs. high).

Third, our study contributes to P-E fit theory [38] by exploring an emerging social actor—AI. Indeed, existing research on P-E fit provides deep insights into the fit between people and their surrounding environments, such as person-organization fit, person-job fit, person-group fit, and person-supervisor fit [18, 78]. Since AI has become an important social actor in organizations [5, 79], our study extends P-E fit theory by exploring the fit between AI’s abilities and the power it is granted (i.e., AI ability-power fit). In particular, our study shows that when a two-stage decision-making process is arranged in human-AI order (vs. AI-human order), individuals tend to perceive a lower level of AI ability-power fit because they perceive that AI’s abilities are lower than humans’ abilities and that AI is not competent enough to match the power needed to make final decisions in the second stage. This finding provides some insights regarding the fit between AI and its surrounding environments. In sum, our study extends P-E fit theory by viewing AI as a social actor and exploring the fit between AI’s abilities and the power it is granted, which is important for understanding the phenomenon of AI becoming an increasingly important decision maker in HRM.

Implications for practice

Our findings also provide several important managerial implications. First, our work may provide important guidelines for practitioners seeking to understand the position of AI in multistage decision making. We found that a decision-making process arranged in human-AI order (vs. AI-human order) led individuals to perceive lower AI ability-power fit and consequently less procedural justice. Although previous studies suggest there is a problem with AI making personnel-selection decisions (i.e., a lack of perceived fairness), our findings show that an appropriately arranged decision-making process—namely, one in which AI makes decisions in the first stage and humans make decisions in the second stage—may help reduce this potential cost in multistage decision-making processes. That is, AI should be located in the earlier stage(s) of a decision-making process, while humans should be located in the later stage(s).

Second, we recommend that managers reconsider AI’s roles and positions carefully so as to ensure that AI’s abilities and power are reasonably matched in collaborations with humans. Our findings reveal that individuals’ attitudes (i.e., fairness perceptions) are largely affected by the compatibility between the attributes of AI and humans (i.e., their abilities) and the respective tasks and responsibilities they take on. Thus, managers should be cautious when placing AI in higher positions than human decision makers due to the possible mismatch between AI’s abilities and the power it is granted.

Third, our findings help organizations make full use of both high-ability and low-ability AI. Since our findings indicate that AI ability moderates the effect of the order of decision makers on individuals’ perceived procedural justice, we recognize that AI of both high and low ability has its own uses. In particular, we suggest that when AI ability is low, AI is a suitable choice for initial screening and for assisting human decision makers. When AI ability is high, AI can also take charge of some important tasks in collaboration with humans. Thus, we encourage managers to consider the differences and advantages of the two types of AI, which means neither blindly chasing AI with the best ability nor placing too much confidence in low-ability AI.

Strengths, limitations, and future directions

While our research has a variety of strengths (e.g., multiple studies conducted in both the United States and China), some limitations should be discussed and addressed by future research. First, although we conducted two scenario-based experiments in different contexts to enhance the external validity of our findings, our studies are still limited in capturing the richness of real organizational settings. In other words, all our findings are based on simulated decision-making processes in personnel selection, and individuals might feel differently when they are actually seeking a job. Thus, we encourage future research to conduct field experiments and surveys to replicate these findings. For example, future studies could collaborate with companies that apply AI to personnel selection and randomly place applicants into the AI-human order condition or the human-AI order condition in recruitment activities to explore their fairness perceptions, thus examining and extending our findings in actual organizational settings. In addition, to make the measure easy for participants to understand, we used a general scale to measure AI ability-power fit. Future research might also develop and use a specific AI ability-power fit scale that captures the specific abilities required in actual field settings.

Second, while we tested our hypotheses in a typical two-stage decision-making process, more complicated decision-making processes exist in actual organizational settings [80, 81]. For example, in personnel selection, individuals may also have to complete an evaluation at an assessment center after their initial screening interview and later targeted interview [82, 83]. Furthermore, not all decision tasks are the same at each stage. Thus, we encourage future research to explore decision-making processes with more than two stages and more than one kind of task.

Third, our research only focused on the perceptions of applicants. However, human resource decision making involves other important actors, such as HR specialists and division managers. Future research can examine whether the order of decision makers (i.e., AI-human order vs. human-AI order) affects HR specialists’ or division managers’ perceptions and behaviors. For example, HR specialists may perceive less respect when they make decisions in the first stage and AI makes final decisions because this order may convey that organizations affirm the value of AI and deny their contributions, which may consequently lead to less organizational commitment in such employees [84]. Thus, it would be promising for future research to pay attention to other relevant actors and explore how AI decision making affects their perceptions and behaviors.

Conclusion

This research concentrated on a neglected but ubiquitous context—a multistage decision-making process involving AI and humans. Drawing upon P-E fit theory [38, 39] and the algorithmic reductionism perspective [14], we proposed and found that a decision-making process arranged in human-AI order (vs. AI-human order) leads individuals to perceive less AI ability-power fit and less procedural justice, especially when AI ability is low. By identifying the roles and positions of AI in collaborations with humans, we hope our findings encourage future scholars to reconsider AI decision making in more complex and dynamic social environments.

Supporting information

S1 Appendix. Scale items used in Studies 1 and 2.

(DOCX)

Data Availability

Data relevant to this paper are available from OSF at https://osf.io/bu67v/.

Funding Statement

This research was supported by the National Natural Science Foundation of China in the form of grants to XQ [grant nos. 72272155 and 71872190], by the National Natural Science Foundation of China in the form of a grant [grant no. 71702202], and by the Fundamental Research Funds for the Central Universities in the form of a grant to CC [grant no. 19wkpy17].

References

  • 1.Glikson E, Woolley AW. Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals. 2020; 14(2):627–660. doi: 10.5465/annals.2018.0057 [DOI] [Google Scholar]
  • 2.Cheng MM, Hackett RD. A critical review of algorithms in HRM: Definition, theory, and practice. Human Resource Management Review. 2021; 31(1):100698. doi: 10.1016/j.hrmr.2019.100698 [DOI] [Google Scholar]
  • 3.Kellogg KC, Valentine MA, Christin A. Algorithms at work: The new contested terrain of control. Academy of Management Annals. 2020; 14(1):366–410. doi: 10.5465/annals.2018.0174 [DOI] [Google Scholar]
  • 4.Shanmuganathan M. Behavioural finance in an era of artificial intelligence: Longitudinal case study of robo-advisors in investment decisions. Journal of Behavioral and Experimental Finance. 2020; 27:100297. doi: 10.1016/j.jbef.2020.100297 [DOI] [Google Scholar]
  • 5.Davenport T, Guha A, Grewal D, Bressgott T. How artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science. 2020; 48:24–42. doi: 10.1007/s11747-019-00696-0 [DOI] [Google Scholar]
  • 6.Paschen J, Wilson M, Ferreira JJ. Collaborative intelligence: How human and artificial intelligence create value along the B2B sales funnel. Business Horizons. 2020; 63(3):403–414. doi: 10.1016/j.bushor.2020.01.003 [DOI] [Google Scholar]
  • 7.Yam KC, Bigman YE, Tang PM, Ilies R, De Cremer D, Soh H, et al. Robots at work: People prefer-and forgive-service robots with perceived feelings. Journal of Applied Psychology. 2020; 106(10):1557–1572. doi: 10.1037/apl0000834 [DOI] [PubMed] [Google Scholar]
  • 8.Cowgill B. Automating judgement and decision-making: Theory and evidence from résumé screening. Columbia University, 2015 Empirical Management Conference; 2017.
  • 9.Kuncel NR, Klieger DM, Connelly BS, Ones DS. Mechanical versus clinical data combination in selection and admissions decisions: A meta-analysis. Journal of Applied Psychology. 2013; 98(6):1060–1072. doi: 10.1037/a0034156 [DOI] [PubMed] [Google Scholar]
  • 10.Wilson HJ, Alter A, Shukla P. Companies are reimagining business processes with algorithms. Harvard Business Review. 2016. https://hbr.org/2016/02/companies-are-reimagining-business-processes-with-algorithms
  • 11.Black JS, van Esch P. AI-enabled recruiting: What is it and how should a manager use it? Business Horizons. 2020; 63(2):215–226. doi: 10.1016/j.bushor.2019.12.001 [DOI] [Google Scholar]
  • 12.Raveendhran R, Fast NJ. Humans judge, algorithms nudge: The psychology of behavior tracking acceptance. Organizational Behavior and Human Decision Processes. 2021; 164:11–26. doi: 10.1016/j.obhdp.2021.01.001 [DOI] [Google Scholar]
  • 13.Höddinghaus M, Sondern D, Hertel G. The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior. 2021; 116:106635. doi: 10.1016/j.chb.2020.106635 [DOI] [Google Scholar]
  • 14.Newman DT, Fast NJ, Harmon DJ. When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes. 2020; 160:149–167. doi: 10.1016/j.obhdp.2020.03.008 [DOI] [Google Scholar]
  • 15.Ötting SK, Maier GW. The importance of procedural justice in human-machine interactions: Intelligent systems as new decision agents in organizations. Computers in Human Behavior. 2018; 89:27–39. doi: 10.1016/j.chb.2018.07.022 [DOI] [Google Scholar]
  • 16.Colquitt JA, Conlon DE, Wesson MJ, Porter CO, Ng KY. Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology. 2001; 86(3):425–445. doi: 10.1037/0021-9010.86.3.425 [DOI] [PubMed] [Google Scholar]
17. Colquitt JA, Scott BA, Rodell JB, Long DM, Zapata CP, Conlon DE, et al. Justice at the millennium, a decade later: A meta-analytic test of social exchange and affect-based perspectives. Journal of Applied Psychology. 2013; 98(2):199–236. doi: 10.1037/a0031757
18. Kristof-Brown AL, Zimmerman RD, Johnson EC. Consequences of individuals’ fit at work: A meta-analysis of person-job, person-organization, person-group, and person-supervisor fit. Personnel Psychology. 2005; 58(2):281–342. doi: 10.1111/j.1744-6570.2005.00672.x
19. Schneider B, Smith DB, Taylor S, Fleenor J. Personality and organizations: A test of the homogeneity of personality hypothesis. Journal of Applied Psychology. 1998; 83(3):462–470. doi: 10.1037/0021-9010.83.3.462
20. Yam KC, Reynolds SJ, Wiltermuth SS, Zhang Y. The benefits and perils of job candidates’ signaling their morality in selection decisions. Personnel Psychology. 2021; 74(3):477–503. doi: 10.1111/peps.12416
21. Starke C, Baleis J, Keller B, Marcinkowski F. Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature. arXiv preprint arXiv:2103.12016. 2021. doi: 10.48550/arXiv.2103.12016
22. Wang R, Harper FM, Zhu H. Factors influencing perceived fairness in algorithmic decision-making: Algorithm outcomes, development procedures, and individual differences. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 2020; 14. doi: 10.1145/331383
23. Grace K, Salvatier J, Dafoe A, Zhang B, Evans O. When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research. 2018; 62:729–754. doi: 10.1613/jair.1.11222
24. Woods T. Live Longer with AI: How artificial intelligence is helping us extend our healthspan and live better too. Birmingham, UK: Packt Publishing Ltd; 2020.
25. Burton JW, Stein MK, Jensen TB. A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making. 2020; 33(2):220–239. doi: 10.1002/bdm.2155
26. Jussupow E, Benbasat I, Heinzl A. Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. Proceedings of the Twenty-Eighth European Conference on Information Systems (ECIS 2020), A Virtual AIS Conference. 2020; RP-168. https://aisel.aisnet.org/ecis2020_rp/168
27. Kaibel C, Koch-Bayram I, Biemann T, Mühlenbock M. Applicant perceptions of hiring algorithms-uniqueness and discrimination experiences as moderators. Proceedings of the Academy of Management. 2019. USA. doi: 10.5465/AMBPP.2019.210
28. Beach LR, Mitchell TR. A contingency model for the selection of decision strategies. Academy of Management Review. 1978; 3(3):439–449. doi: 10.2307/257535
29. Beswick CA, Cravens DW. A multistage decision model for salesforce management. Journal of Marketing Research. 1977; 14(2):135–144.
30. Huber O. Complex problem solving as multistage decision making. In: Frensch PA, Funke J, editors. Complex problem solving: The European Perspective. New York: Psychology Press; 2014.
31. Payne JW. Task complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior and Human Performance. 1976; 16(2):366–387. doi: 10.1016/0030-5073(76)90022-2
32. Parfenenkov B. FAANG: 3 interview and 3 offers. Medium. 2020 December 15. https://medium.com/@idlerboris
33. Barber AE. Recruiting employees: Individual and organizational perspectives. Zeitschrift für Arbeits- und Organisationspsychologie. 1998; 44(2):102–103. doi: 10.1026//0932-4089.44.2.102
34. Cai CJ, Winter S, Steiner D, Wilcox L, Terry M. "Hello AI": Uncovering the onboarding needs of medical practitioners for human-AI collaborative decision-making. Proceedings of the ACM on Human-Computer Interaction. 2019; 3(CSCW):1–24. doi: 10.1145/3359206
35. Jarrahi MH. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons. 2018; 61(4):577–586. doi: 10.1016/j.bushor.2018.03.007
36. Shrestha YR, Ben-Menahem SM, von Krogh G. Organizational decision-making structures in the age of artificial intelligence. California Management Review. 2019; 61(4):66–83. doi: 10.1177/0008125619862257
37. Bye HH, Sandal GM. Applicant personality and procedural justice perceptions of group selection interviews. Journal of Business and Psychology. 2016; 31(4):569–582. doi: 10.1007/s10869-015-9430-9
38. Edwards JR, Caplan RD, Harrison RV. Person-environment fit theory. Theories of Organizational Stress. 1998; 28(1):67–94.
39. van Vianen AE. Person–environment fit: A review of its basic tenets. Annual Review of Organizational Psychology and Organizational Behavior. 2018; 5:75–101. doi: 10.1146/annurev-orgpsych-032117-104702
40. Edison SW, Geissler GL. Measuring attitudes towards general technology: Antecedents, hypotheses and scale development. Journal of Targeting, Measurement and Analysis for Marketing. 2003; 12(2):137–156. doi: 10.1057/palgrave.jt.5740104
41. Helberger N, Araujo T, de Vreese CH. Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making. Computer Law & Security Review. 2020; 39:105456. doi: 10.1016/j.clsr.2020.105456
42. Dodge J, Liao QV, Zhang YF, Bellamy RKE, Dugan C. Explaining models: An empirical study of how explanations impact fairness judgment. Proceedings of the International Conference on Intelligent User Interfaces. 2019; 275–285. Marina del Rey, CA. doi: 10.1145/3301275.3302310
43. Kasinidou M, Kleanthous S, Barlas P, Otterbacher J. I agree with the decision, but they didn’t deserve this: Future developers’ perception of fairness in algorithmic decisions. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 2021; 690–700. Virtual Event, Canada. doi: 10.1145/3442188.3445931
44. Lee MK, Jain A, Cha HJ, Ojha S, Kusbit D. Procedural justice in algorithmic fairness: Leveraging transparency and outcome control for fair algorithmic mediation. Proceedings of the ACM on Human-Computer Interaction. 2019; 3:182–208. doi: 10.1145/3359284
45. Nagtegaal R. The impact of using algorithms for managerial decisions on public employees’ procedural justice. Government Information Quarterly. 2021; 38:101536. doi: 10.1016/j.giq.2020.101536
46. van Berkel N, Goncalves J, Russo D, Hosio S, Skov MB. Effect of information presentation on fairness perceptions of machine learning predictors. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021; 1–13. USA. doi: 10.1145/3411764.3445365
47. Edwards JR, Cooper CL. The person-environment fit approach to stress: Recurring problems and some suggested solutions. Journal of Organizational Behavior. 1990; 11(4):293–307. doi: 10.1002/job.4030110405
48. Dahl RA. The concept of power. Behavioral Science. 1957; 2(3):201–215. doi: 10.1002/bs.3830020303
49. Emerson RM. Power-dependence relations. American Sociological Review. 1962; 27(1):31–41.
50. Magee JC, Galinsky AD. Social hierarchy: The self-reinforcing nature of power and status. The Academy of Management Annals. 2008; 2(1):351–398. doi: 10.5465/19416520802211628
51. Anderson C, Kilduff GJ. The pursuit of status in social groups. Current Directions in Psychological Science. 2009; 18(5):295–298. doi: 10.1111/j.1467-8721.2009.01655.x
52. Boukarras S, Era V, Aglioti SM, Candidi M. Modulation of preference for abstract stimuli following competence-based social status primes. Experimental Brain Research. 2020; 238(1):193–204. doi: 10.1007/s00221-019-05702-z
53. Fast NJ, Chen S. When the boss feels inadequate: Power, incompetence, and aggression. Psychological Science. 2009; 20(11):1406–1413. doi: 10.1111/j.1467-9280.2009.02452.x
54. Gilliland SW. The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review. 1993; 18(4):694–734. doi: 10.2307/258595
55. Bettman JR, Park CW. Effects of prior knowledge and experience and phase of the choice process on consumer decision processes: A protocol analysis. Journal of Consumer Research. 1980; 7(3):234–248. doi: 10.1086/208812
56. Svenson O. Differentiation and consolidation theory of human decision making: A frame of reference for the study of pre- and post-decision processes. Acta Psychologica. 1992; 80(1–3):143–168. doi: 10.1016/0001-6918(92)90044-E
57. Colquitt JA, Zipay KP. Justice, fairness, and employee reactions. Annual Review of Organizational Psychology and Organizational Behavior. 2015; 2(1):75–99. doi: 10.1146/annurev-orgpsych-032414-111457
58. Leventhal GS. What should be done with equity theory? New approaches to the study of fairness in social relationships. In: Gergen K, Greenberg M, Willis R, editors. Social exchange: Advances in theory and research. New York, NY: Plenum Press; 1980. p. 27–55. doi: 10.1007/978-1-4613-3087-5_2
59. Bauer TN, Truxillo DM, Sanchez RJ, Craig JM, Ferrara P, Campion MA. Applicant reactions to selection: Development of the selection procedural justice scale (SPJS). Personnel Psychology. 2001; 54(2):387–419. doi: 10.1111/j.1744-6570.2001.tb00097.x
60. Huffcutt A. Intelligence is not a panacea in personnel selection. Industrial Organizational Psychologist. 1990; 27(3):66–67.
61. Truxillo DM, Bodner TE, Bertolino M, Bauer TN, Yonce CA. Effects of explanations on applicant reactions: A meta-analytic review. International Journal of Selection and Assessment. 2009; 17(4):346–361. doi: 10.1111/j.1468-2389.2009.00478.x
62. Castelo N, Bos MW, Lehmann DR. Task-dependent algorithm aversion. Journal of Marketing Research. 2019; 56(5):809–825. doi: 10.1177/0022243719851788
63. Palan S, Schitter C. Prolific.ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance. 2018; 17:22–27. doi: 10.1016/j.jbef.2017.12.004
64. Peer E, Brandimarte L, Samat S, Acquisti A. Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology. 2017; 70:153–163. doi: 10.1016/j.jesp.2017.01.006
65. Faul F, Erdfelder E, Buchner A, Lang A-G. Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods. 2009; 41(4):1149–1160. doi: 10.3758/BRM.41.4.1149
66. Longoni C, Bonezzi A, Morewedge CK. Resistance to medical artificial intelligence. Journal of Consumer Research. 2019; 46(4):629–650. doi: 10.1093/jcr/ucz013
67. Cable DM, DeRue DS. The convergent and discriminant validity of subjective fit perceptions. Journal of Applied Psychology. 2002; 87(5):875–884. doi: 10.1037/0021-9010.87.5.875
68. Conlon DE, Porter COLH, Parks JM. The fairness of decision rules. Journal of Management. 2004; 30(3):329–349. doi: 10.1016/j.jm.2003.04.001
69. Preacher KJ, Hayes AF. SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior Research Methods, Instruments, & Computers. 2004; 36(4):717–731. doi: 10.3758/bf03206553
70. Langer M, König CJ, Papathanasiou M. Highly automated job interviews: Acceptance under the influence of stakes. International Journal of Selection and Assessment. 2019; 27(3):217–234. doi: 10.1111/ijsa.12246
71. Brislin RW. Translation and content analysis of oral and written material. In: Triandis HC, Berry JW, editors. Handbook of cross-cultural psychology. Boston, MA: Allyn & Bacon; 1980.
72. Edwards JR, Lambert LS. Methods for integrating moderation and mediation: A general analytical framework using moderated path analysis. Psychological Methods. 2007; 12(1):1–22. doi: 10.1037/1082-989X.12.1.1
73. Preacher KJ, Rucker DD, Hayes AF. Addressing moderated mediation hypotheses: Theory, methods, and prescriptions. Multivariate Behavioral Research. 2007; 42(1):185–227. doi: 10.1080/00273170701341316
74. Schlicker N, Langer M, Ötting S, Baum K, König CJ, Wallach D. What to expect from opening up ‘Black Boxes’? Comparing perceptions of justice between human and automated agents. Computers in Human Behavior. 2021; 122:106837. doi: 10.1016/j.chb.2021.106837
75. Lindebaum D, Vesa M, den Hond F. Insights from “The Machine Stops” to better understand rational assumptions in algorithmic decision making and its implications for organizations. Academy of Management Review. 2020; 45(1):247–263. doi: 10.5465/amr.2018.0181
76. Miller SM, Keiser LR. Representative bureaucracy and attitudes toward automated decision making. Journal of Public Administration Research and Theory. 2021; 31(1):150–165. doi: 10.1093/jopart/muaa019
77. Høyland K, Wallace SW. Generating scenario trees for multistage decision problems. Management Science. 2001; 47(2):295–307. https://www.jstor.org/stable/2661576
78. Verquer ML, Beehr TA, Wagner SH. A meta-analysis of relations between person-organization fit and work attitudes. Journal of Vocational Behavior. 2003; 63(3):473–489. doi: 10.1016/S0001-8791(02)00036-2
79. Larson L, DeChurch LA. Leading teams in the digital age: Four perspectives on technology and what they mean for leading teams. The Leadership Quarterly. 2020; 31(1):101377. doi: 10.1016/j.leaqua.2019.101377
80. Snowden DJ, Boone ME. A leader’s framework for decision making: Wise executives tailor their approach to fit the complexity of the circumstances they face. Harvard Business Review. 2007; 85(11):68–76. https://hbr.org/2007/11/a-leaders-framework-for-decision-making
81. Tamošaitienė J, Zavadskas EK. The multi-stage decision making system for complicated problems. Procedia-Social and Behavioral Sciences. 2013; 82:215–219. doi: 10.1016/j.sbspro.2013.06.248
82. Lievens F, van Dam K, Anderson N. Recent trends and challenges in personnel selection. Personnel Review. 2002; 31(5):580–601. doi: 10.1108/00483480210438771
83. Robertson IT, Smith M. Personnel selection. Journal of Occupational and Organizational Psychology. 2001; 74(4):441–472. doi: 10.1348/096317901167479
84. Allen NJ, Meyer JP. The measurement and antecedents of affective, continuance and normative commitment to the organization. Journal of Occupational Psychology. 1990; 63(1):1–18. doi: 10.1111/j.2044-8325.1990.tb00506.x

Supplementary Materials

S1 Appendix. Scale items used in Studies 1 and 2.

(DOCX)

Data Availability Statement

Data relevant to this paper are available from OSF at https://osf.io/bu67v/.

