PLOS One. 2026 Jan 23;21(1):e0340083. doi: 10.1371/journal.pone.0340083

Intuitive or deliberative dishonesty: The effect of abstract versus concrete victim

Jiayu Cheng 1, Haoran Wang 1, Yue Liu 1, Chongxiang Wang 1, Qingzhou Sun 2,*, Bruno Verschuere 3, Liyang Sai 1,4,*
Editor: Tobias Otterbring
PMCID: PMC12829808  PMID: 41576063

Abstract

There has been ongoing debate over whether people are intuitively honest or intuitively dishonest. A recent social harm account was proposed to address this debate: dishonesty is intuitive when cheating inflicts harm on an abstract other while honesty is intuitive when cheating inflicts harm on a concrete other. This pre-registered and well-powered study (n = 764) aims to directly test this account by using a time pressure manipulation. Specifically, we examined whether time pressure (versus self-paced conditions) would lead to increased cheating depending on whether the harmed party was concrete or abstract. The results showed no significant effect of time pressure on cheating behavior. However, the harm-type manipulation produced findings that contradicted those reported in previous studies. Given the low replication rates and reliance on controversial experimental manipulations in this area, our findings underscore the importance of further pre-registered research to rigorously evaluate the roles of time pressure and social harm in shaping intuitive (dis)honesty.

1. Introduction

Although honesty is valued across countries, dishonesty is ubiquitous in everyday life, from financial fraud in large corporations and tax evasion by institutions or celebrities to academic cheating by individuals. Some researchers have even been accused of cheating in their own research on dishonesty [1]. These behaviors carry significant economic costs and erode trust in relationships and institutions [2]. Researchers from different disciplines have therefore paid much attention to how and why individuals cheat. One interesting question is whether honesty or dishonesty is the intuitive response when dishonesty is profitable. Yet, there are opposing theoretical perspectives and conflicting empirical data.

The Grace theory proposes that honesty is the intuitive behavior, and that individuals need cognitive control to inhibit their honest tendency when making dishonest decisions. This theory is supported by a number of studies. For example, by putting participants’ decision-making under time pressure [3,4] or increasing participants’ cognitive load with a distracting task [5], researchers found that people are more likely to be honest when their cognitive resources are limited [5,6]. These findings suggest that dishonesty is cognitively demanding, while honesty is the more intuitive response. In contrast, the Will theory proposes that dishonesty is intuitive, and that individuals need cognitive control to resist the temptation to cheat [7,8]. For example, Shalvi et al. [9] found that time pressure increased cheating (but see [10]). Other experimental work has similarly revealed that cognitive load [11–13], mental or physical depletion [14–16], and priming of intuition concepts [17] may increase self-serving dishonesty. These findings suggest that dishonesty comes naturally, whereas honesty requires overcoming the initial tendency to cheat.

The social harm theory [18] can integratively explain these seemingly opposing research findings. It proposes that dishonesty is intuitive when it harms an abstract, vague entity (e.g., the experimental budget), while dishonesty is deliberative when it harms a concrete other (e.g., a participant). The study by Pitesa et al. [19] provided direct evidence for this theory. In their study, a subset of participants first completed a cognitive control task and then took part in a cheating task in which they had the opportunity to cheat for a monetary reward, with participants randomly assigned to either a concrete harm group or an abstract harm group. The results showed that among these cognitively depleted participants, cheating decreased when it caused harm to another participant and increased when it did not harm a concrete other person (i.e., their behaviour would not affect the other participant’s reward). No such difference was observed in the non-depleted group. Consistent with the findings of Pitesa et al. [19], the meta-analysis by Köbis et al. [18] found that when dishonesty harms abstract others, promoting intuition causes more people to cheat, whereas when dishonesty inflicts harm on concrete others, promoting intuition has no significant effect on dishonesty.

Thus, the social consequences of dishonesty could be a promising key to the riddle of intuition’s role in honesty. However, the empirical evidence supporting this theory is limited. First, Pitesa et al. [19] provided evidence for this theory, but their study was not pre-registered, and their main findings were not well powered (observed effect sizes (ηp2) of 0.073 to 0.094). Thus, it is important to test the theory again in well-powered, pre-registered studies [20]. Second, while meta-analyses are a valuable form of research synthesis, an inherent limitation is that they rely on unbiased input, and there are indications of publication bias in this literature [21]. Moreover, many studies in the Köbis et al. meta-analysis rely on manipulations (e.g., ego depletion, behavioral priming) that are at the heart of the replication crisis [22,23].

Given these limited findings, the present pre-registered study aims to directly test the social-harm account of intuitive dishonesty using a time-pressure manipulation. We examined whether time pressure (as compared to a self-paced condition) leads to more cheating when there is an abstract victim versus when there is a concrete victim. We employed an online dice-rolling task modified from Shalvi et al. [9] (Experiment 2). Participants rolled a die three times and reported the first outcome to determine their compensation: the higher the roll, the greater the payment. We manipulated harm type following the approach of Pitesa et al. [19]. In the concrete victim condition, participants were told that another participant would be paid from the same pot (a fixed amount), so that their earnings would decrease the other participant’s share of that fixed amount. In the abstract victim condition, no other participant would be paid from the same pot. (These two conditions correspond to the interpersonal-impact-salient and not-salient conditions of Pitesa et al. [19], Experiment 2, but we adopt the terminology of the social harm theory proposed by Köbis et al. [18].) To manipulate time pressure, we followed the procedure used in Experiment 2 of Shalvi et al. [9]. Participants in the time pressure condition had 13 seconds to complete the task and report their initial dice roll: the 8-second reporting window used by Shalvi et al. [9] plus 5 seconds needed for dice rolling in our online setting. In the self-paced condition, participants had unlimited time to complete the task and submit their response.

As exploratory analyses, we also considered the influence of individual traits on cheating behavior. For instance, Xu and Ma observed that individuals with a high moral identity typically exhibit intuitive honesty, while those with a low moral identity tend towards dishonesty [24]. Furthermore, Bacon et al. [25] reported that reward sensitivity traits are inversely correlated with academic dishonesty in intrinsically motivated students, yet positively correlated in those motivated by grade achievement. Additionally, empirical evidence suggests that highly Machiavellian individuals are more prone to lying and experience less guilt after deceit [26]. Accordingly, the second aim of the present study is to explore whether traits such as Moral Identity, Reward Sensitivity, and Machiavellianism influence an individual’s intuitive inclination towards honesty or dishonesty.

Based on the social-harm account of intuitive dishonesty [9,27], we expected an interaction between time pressure and harm type. Specifically, we expected that time pressure would make participants more likely to cheat in the abstract-victim condition, but more likely to be honest in the concrete-victim condition. Additionally, we explored how individual traits such as Moral Identity, Reward Sensitivity, and Machiavellianism influence participants’ intuitive tendencies toward honesty or dishonesty.

2. Methods

The study was preregistered (https://aspredicted.org/M9Q_8NF). This study was approved by the Ethics Committee of Hangzhou Normal University (NO. 20220301). All methods were performed in accordance with the relevant guidelines and regulations. All participants read and agreed to the informed consent form before beginning the online task and were informed that they could withdraw at any time. The research was conducted in accordance with the Declaration of Helsinki. Data were collected between July 2022 and October 2022.

2.1. Participants

G*Power 3.1 [28] was used for an a priori power analysis with power (1 − β) set at 0.8 and α = 0.05, which showed that 763 participants would be required to detect a significant interaction effect in a binary logistic regression with a medium effect size (OR = 1.506, computed from our pilot data), as preregistered. Participants were recruited from Hangzhou Normal University and Zhejiang University through both electronic and printed advertisements, and from Shenzhen University and Southwest University via online subject pools. Recruitment information was also disseminated via WeChat Moments. Interested individuals could access the study by scanning a QR code provided in the digital or printed posters, which directed them to the online game platform. A total of 1151 participants were recruited online in China, and exclusions were monitored until a minimum of 763 valid participants was reached. Following the pre-registration, we excluded 186 participants (16.15%) with missing data in the die-rolling task, 11 participants (0.96%) who reported smaller outcomes than they actually rolled, and 190 participants (16.51%) who did not complete the task and were automatically excluded. This left 764 valid datasets (66.38%), with 191 participants in each group (time pressure with abstract victim: mean age = 23.45, SD = 4.70, 58.64% female; time pressure with concrete victim: mean age = 23.47, SD = 4.56, 50.79% female; self-paced with abstract victim: mean age = 23.19, SD = 4.48, 53.93% female; self-paced with concrete victim: mean age = 23.40, SD = 4.57, 55.50% female). There were no significant differences in age or gender among the four groups.
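
The authors computed the required sample size analytically with G*Power. As a rough cross-check, the same question can be approached by simulation: generate 2 × 2 experiments under assumed odds ratios, fit the logistic model, and count how often the interaction Wald test reaches p < .05. This is a sketch, not the authors’ procedure; the baseline cheating rate and main-effect odds ratios below are illustrative assumptions, so the estimate need not match G*Power’s figure.

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Logistic-regression MLE via Newton-Raphson; returns (beta, standard errors)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1.0 - p))[:, None])   # observed information matrix
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    H = X.T @ (X * (p * (1.0 - p))[:, None])
    return beta, np.sqrt(np.diag(np.linalg.inv(H)))

def simulate_power(n_per_cell=191, or_interaction=1.506, n_sims=200, seed=1,
                   base_rate=0.25, or_time=1.0, or_harm=1.5):
    """Fraction of simulated 2x2 experiments whose interaction Wald test hits p < .05.

    base_rate, or_time, and or_harm are illustrative assumptions, not study values.
    """
    rng = np.random.default_rng(seed)
    tp = np.repeat([0, 0, 1, 1], n_per_cell)        # time-pressure dummy
    ht = np.tile(np.repeat([0, 1], n_per_cell), 2)  # harm-type dummy
    X = np.column_stack([np.ones(tp.size), tp, ht, tp * ht])
    b_true = np.array([np.log(base_rate / (1 - base_rate)),
                       np.log(or_time), np.log(or_harm), np.log(or_interaction)])
    p_true = 1.0 / (1.0 + np.exp(-X @ b_true))
    hits = 0
    for _ in range(n_sims):
        y = rng.binomial(1, p_true)
        beta, se = fit_logit(X, y)
        hits += abs(beta[3] / se[3]) > 1.96         # Wald z test at alpha = .05
    return hits / n_sims
```

Because the power of an interaction test depends on the assumed baseline rate and predictor coding, a simulation under different assumptions can legitimately disagree with an analytic calculation; varying `base_rate` in the sketch makes that sensitivity visible.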

2.2. Procedures

Die-rolling Task. A die-rolling task was used to measure participants’ cheating behavior. This task was adapted from the study by Shalvi et al. [9], with the only difference being that it was implemented online. Participants were instructed to roll three dice sequentially by pressing a button and then report the outcome of the first roll by pressing a number between 1 and 6. They were told that their payment would depend on the reported number—the higher the number, the greater the reward.

The experiments were conducted on participants’ smartphones, using the WeChat Mini Program for stimulus presentation. After entering the program, participants were first asked to fill in their personal information such as gender and age. Then, participants were told that there was another participant who would take part in the experiment with them simultaneously. However, unknown to the participants, the other participant was not real.

Participants were randomly assigned to one of four conditions (time pressure with concrete victim, time pressure with abstract victim, self-paced with concrete victim, and self-paced with abstract victim). In each condition, participants played the die-rolling task. Specifically, participants were asked to roll three dice one by one and to report only the outcome of the first roll (to ensure that all participants had the opportunity to lie, the first die could not show a 6). Their payoff depended on what they reported, with higher reported numbers resulting in greater payments (e.g., reporting “1” earned ¥1, while reporting “6” earned ¥6). In the time pressure condition, participants had only 13 seconds in total to complete the three dice rolls and report their outcome, whereas the self-paced condition had no time limit. In the concrete victim condition, participants were informed that they would share a ¥7 bonus with another participant, meaning that the more they received, the less the other participant would obtain. Conversely, in the abstract victim condition, participants were informed that they were playing the game independently and would not share the bonus with anyone else. Rule-check questions were used to ensure participants understood the instructions, and participants could not proceed until they answered correctly (see S1 Appendix for details). After the game, participants completed post-game check questions assessing the effectiveness of our manipulations of time pressure and harm (see S1 Appendix for details). The experimental flow diagram is shown in Fig 1.

Fig 1. The flow diagram of the experiment.


In the matching phase, participants were randomly assigned to one of four conditions (time pressure with concrete victim, time pressure with abstract victim, self-paced with concrete victim, and self-paced with abstract victim). A) Participants could complete the task and report the first die’s outcome with no time limit. B) Participants had to complete the task and report the first die’s outcome within 13 seconds.

2.3. Measures

Moral Identity Measure (MIM). The MIM [29] consists of two subscales, internalization and symbolization. Whereas internalization captures the self-importance of moral identity as a personal striving (e.g., “I strongly desire to have these characteristics”), symbolization focuses on overtly demonstrating these characteristics to others (e.g., “I am actively involved in activities that communicate to others that I have these characteristics”). Internalization is the more commonly used scale and has been found to generate more consistent research findings [30], so our study used only the internalization scale. It consists of 10 statements rated on a 5-point scale (1 = strongly disagree, 5 = strongly agree). The scale demonstrated good reliability (α = 0.77) in the current sample.

Machiavellianism (Mach-IV). The Mach-IV [31] is a 20-item self-report measure of Machiavellian personality traits. Participants provided ratings on a Likert scale from 1 (strongly disagree) to 5 (strongly agree) for statements about various opinions and strategies for dealing with other people, such as “The best way to handle people is to tell them what they want to hear” and “It is wise to flatter important people”. Ten items were reverse scored, such that higher scores represent higher Machiavellianism, with total scores used in the analysis. The Cronbach’s alpha coefficient of the Mach-IV was 0.84 in the current sample.

The Sensitivity to Punishment and Sensitivity to Reward Questionnaire (SPSRQ). The SPSRQ [32] is a 48-item Yes/No self-report measure designed to load on two factors, Sensitivity to Punishment (SP) and Sensitivity to Reward (SR), so that the behavioral inhibition and behavioral activation systems can be measured independently. We used the Chinese version introduced by Guo et al. [33], which was revised for the Chinese context and shows adequate reliability (0.66–0.76) and validity. It contains two independent dimensions, punishment sensitivity (19 items) and reward sensitivity (12 items), both scored dichotomously (Yes/No). Only the 12-item reward sensitivity dimension was used in this study, and it showed good reliability (α = 0.71) in the current sample.

2.4. Data analysis

Data were coded and analyzed using IBM SPSS Statistics 29 (IBM Corp., Armonk, NY, USA) with a significance set at p < 0.05. Below, we distinguish preregistered from non-preregistered exploratory analyses.

2.4.1. Preregistered analysis.

According to our preregistration, the primary analysis was a binary logistic regression to examine the main research question regarding how time pressure and harm type influence cheating behavior. Cheating behavior (0 = cheat, 1 = honesty) served as the dependent variable, with time pressure, harm type, and their interaction as predictors.
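
With the coding described above, the preregistered model can be written as a standard two-way logistic specification (a sketch of the stated design; TP and HT denote 0/1 indicators for time pressure and harm type):

```latex
\operatorname{logit} P(y_i = 1) = \beta_0 + \beta_1\,\mathrm{TP}_i + \beta_2\,\mathrm{HT}_i + \beta_3\,(\mathrm{TP}_i \times \mathrm{HT}_i)
```

The central hypothesis is thus a test of \(\beta_3\), whose odds ratio is \(e^{\beta_3}\); the main-effect terms \(\beta_1\) and \(\beta_2\) carry the separate time-pressure and harm-type contrasts.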

In addition, t-tests assessed the effects of the preregistered manipulations on participants’ subjective experience of time pressure and on whether participants understood that their results in the game would (or would not) affect another participant’s earnings.

2.4.2. Non-preregistered analyses.

Beyond the preregistered plan, we conducted additional exploratory analyses to further investigate potential mechanisms. Specifically, hierarchical multiple logistic and linear regression analyses were employed to examine the influence of individual traits (moral identity, Machiavellianism, and reward sensitivity, which we mentioned in the preregistered report) on cheating behavior and on the magnitude of cheating, respectively. For these analyses, time pressure and harm type were included as binary predictors, individual trait measures as continuous predictors, and the first toss point was entered as a covariate, given its role in constraining opportunities for dishonesty. Interaction terms were entered in a subsequent step to test for moderation effects.

In addition to classical statistical inference, and although not preregistered, we also used Jeffreys-Zellner-Siow (JZS) Bayes factors (BFs) (scale r = 0.707; Rouder et al., 2009) as a supplementary statistical method. The BF is an important tool for model comparison and hypothesis testing in Bayesian statistics: it quantifies the relative evidence for the null and alternative hypotheses for each contrast [34,35], reflecting the likelihood ratio between the two. In this study, we report BF10 when the evidence favors the alternative hypothesis and BF01 when it favors the null. The BFs were calculated with the open-source software JASP (Version 0.18.3, https://jasp-stats.org/, JASP Team, 2024). This is particularly important in our study, where null or counterintuitive findings are central: while p-values only indicate a failure to reject the null hypothesis, Bayes factors quantify whether the evidence actually supports the null or whether the results simply reflect insufficient evidence for the alternative, thereby providing a more nuanced interpretation of our results. Although not preregistered, we believe that including Bayes factors enhances the transparency and informativeness of our findings.
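
The default JZS Bayes factor for a t-test places a Cauchy(0, r) prior on the standardized effect size under H1 and integrates it out via its inverse-gamma mixture representation (Rouder et al., 2009). A minimal numerical sketch of that formula is below; it is not the JASP implementation, and its output may differ slightly from JASP’s.

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n1, n2=None, r=0.707):
    """JZS Bayes factor BF10 for a one-sample (n2=None) or two-sample t statistic,
    with a Cauchy(0, r) prior on effect size (Rouder et al., 2009)."""
    if n2 is None:                                  # one-sample design
        n_eff, nu = n1, n1 - 1
    else:                                           # two-sample design
        n_eff, nu = n1 * n2 / (n1 + n2), n1 + n2 - 2
    # Marginal likelihood of t under H0 (shared constants cancel in the ratio)
    h0 = (1.0 + t**2 / nu) ** (-(nu + 1) / 2.0)
    # Under H1, integrate over g, the inverse-gamma mixing variable of the Cauchy prior
    def integrand(g):
        a = 1.0 + n_eff * g * r**2
        return (a ** -0.5
                * (1.0 + t**2 / (a * nu)) ** (-(nu + 1) / 2.0)
                * (2.0 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1.0 / (2.0 * g)))
    h1, _ = integrate.quad(integrand, 0.0, np.inf)
    return h1 / h0
```

As a sanity check, `jzs_bf10(0.0, 50)` falls below 1 (evidence for the null), `jzs_bf10(5.0, 50)` is far above 1, and `BF01` is simply the reciprocal of `BF10`.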

3. Results

3.1. Preregistered analyses

3.1.1. Manipulation check.

Participants in the time-pressure condition took less time to complete the task (M = 10.44 s, SD = 1.76 s) than those in the self-paced condition (M = 14.09 s, SD = 4.85 s), t (762) = –13.82, p < .001, Cohen’s d = 1.00, 95% C.I. = [0.85, 1.15], BF10 = 3.10 × 10^36. Fig 2C shows the distribution of the total time in each stage of the task. Furthermore, self-report results showed that participants felt more time pressure (M = 2.65, SD = 1.18) in the time pressure condition than participants in the self-paced group (M = 2.39, SD = 1.09), t (757.51) = 3.19, p = 0.001, Cohen’s d = 0.23, 95% C.I. = [0.09, 0.37], BF10 = 11.73. These results indicated that the time-pressure manipulation was successful. Fig 2A shows the distribution of participants’ post-check question scores for the time-pressure and self-paced conditions.

Fig 2. A) Violin plot of rating scores (1 = not affected at all; to 5 = extremely affected) on manipulation check questions for time-pressure and self-paced conditions, concrete harm and abstract harm conditions.


Rectangular boxes represent the interquartile range of the distribution; the horizontal line in the middle represents the mean. The width of each plot shows the density of the data. B) The bar chart of the cheating rates in the four conditions. C) Distribution histogram of time used by time pressure group and self-paced group across different stages of the task.

In addition, participants in the concrete victim group (M = 2.92, SD = 1.29) believed more strongly than participants in the abstract victim group (M = 2.31, SD = 1.28) that their performance harmed other participants’ benefits, t (761.99) = 6.54, p < 0.001, Cohen’s d = 0.47, 95% C.I. = [0.33, 0.62], BF10 = 6.34, indicating that the harm manipulation was also successful. The distribution of participants’ post-check question scores in the concrete-harm and abstract-harm conditions is shown in Fig 2A.

3.1.2. Effect of time pressure and harm type on cheating.

As shown in Fig 2B, the cheating rates were 30.89% in the time pressure with concrete victim group, 23.04% in the time pressure with abstract victim group, 29.32% in the self-paced with concrete victim group, and 20.42% in the self-paced with abstract victim group. Fig 2C presents the distribution histogram of the time used by the time pressure group and the self-paced group across different stages of the task. Our central interest was how time pressure and harm type influence participants’ cheating behavior, so a logistic regression analysis was conducted with cheating behavior as the predicted variable (0 = cheat, 1 = honesty) and time pressure, harm type, and their interaction as predictors. The full model was not significant, χ2(760) = 7.50, Nagelkerke R2 = .014, p = .058, BF01 = 4.32. Further inspection showed that the time-pressure condition (M = 0.27, SD = 0.44) did not lead to more cheating than the self-paced condition (M = 0.25, SD = 0.43), B = 0.15, SE = 0.25, Wald (1) = 0.38, p = .535, OR = 1.17, 95% C.I. = [0.72, 1.90]. Interestingly, participants cheated more in the concrete harm condition (M = 0.30, SD = 0.46) than in the abstract harm condition (M = 0.22, SD = 0.41), B = 0.48, SE = 0.24, Wald (1) = 4.02, p = .045, OR = 1.62, 95% C.I. = [1.01, 2.59]. Importantly, and inconsistent with our prediction, the interaction between time pressure and harm type was not significant, B = –0.08, SE = 0.33, Wald (1) = 0.06, p = .812, OR = 0.92, 95% C.I. = [0.48, 1.78].

3.2. Non-preregistered analyses

3.2.1. The influence of individual traits on time pressure and harm type in cheating behavior.

To explore the influence of individual traits on the effects of time pressure and harm type on cheating behavior, we conducted a hierarchical logistic regression with cheating behavior as the predicted variable (0 = cheat, 1 = honesty). The first toss point was entered in the first step (the initial die roll was critical in shaping participants’ opportunities for dishonest reporting: a roll of 1 permitted over-reporting of up to 5 points, whereas a roll of 5 restricted dishonesty to just 1 point; consequently, we included the first die value as a covariate). Time pressure, harm type, and the three scales (MIM, Mach-IV, SPSRQ) were entered in the second step, and the interactions were entered in the third step.
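
The step χ² values in a hierarchical (blockwise) logistic regression of this kind are likelihood-ratio tests between nested models: twice the gain in maximized log-likelihood when a block of predictors is added, referred to a χ² distribution with degrees of freedom equal to the number of added terms. A self-contained sketch on toy data (the variable names and toy coefficients are illustrative, not the study’s):

```python
import numpy as np
from scipy import stats

def logit_loglik(X, y, n_iter=25):
    """Fit logistic regression by Newton-Raphson; return the maximized log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1.0 - p))[:, None])
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def step_test(X_reduced, X_full, y):
    """Likelihood-ratio chi-square for the block of predictors added in one step."""
    chi2 = 2.0 * (logit_loglik(X_full, y) - logit_loglik(X_reduced, y))
    df = X_full.shape[1] - X_reduced.shape[1]
    return chi2, stats.chi2.sf(chi2, df)

# Toy example: one genuine predictor entered after an intercept-only step
rng = np.random.default_rng(0)
x = rng.normal(size=400)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x))))
X0 = np.ones((400, 1))                    # step 0: intercept only
X1 = np.column_stack([np.ones(400), x])   # step 1: add the predictor
chi2, p = step_test(X0, X1, y)
```

In the study’s design, `X_reduced` for Step 2 would hold the first toss point and `X_full` would add time pressure, harm type, and the three trait scales; Step 3 would add the interaction terms on top of that.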

The results revealed that Model 1 was significant, χ2(1) = 54.73, Nagelkerke R2 = 0.11, p < .001: the first toss point negatively predicted cheating behavior in the toss task, B = –0.47, SE = 0.07, Wald (1) = 46.05, p < .001, OR = 0.63, 95% C.I. = [0.55, 0.72].

As Fig 3A shows, the higher the first-roll outcome, the less likely participants were to cheat, while the smaller the first-roll outcome, the larger the lies participants told. Model 2 was also significant, χ2(1) = 93.49, Nagelkerke R2 = .17, p < .001. We found that harm type positively predicted cheating behavior, B = 0.22, SE = 0.09, Wald (1) = 6.21, p = .013, OR = 1.25, 95% C.I. = [1.05, 1.48]: participants were more likely to cheat when there was a concrete harm target than when the harm was abstract. Meanwhile, reward sensitivity (B = 0.25, SE = 0.10, Wald (1) = 5.98, p = .014, OR = 1.28, 95% C.I. = [1.05, 1.56]) and the Mach-IV (B = 0.43, SE = 0.13, Wald (1) = 11.06, p = .001, OR = 1.54, 95% C.I. = [1.19, 1.98]) positively predicted cheating behavior. Model 3 was also significant, χ2(1) = 114.09, Nagelkerke R2 = .20, p < .001. Only the interaction of moral identity with time pressure positively predicted cheating behavior, B = 0.27, SE = 0.14, Wald (1) = 3.98, p = .046, OR = 1.31, 95% C.I. = [1.01, 1.71]. Hence, we categorized the data into high and low moral identity groups based on scores one standard deviation above or below the mean and conducted separate logistic regression analyses for each group to explore the impact of moral identity and time pressure on cheating behavior. The results showed that people with high moral identity were more likely to cheat under the time pressure condition (B = 1.22, SE = 0.42, Wald (1) = 8.40, p = .004, OR = 3.38, 95% C.I. = [1.48, 7.69]), but no difference between the time pressure and self-paced conditions was found for those with low moral identity (B = –0.26, SE = 0.46, Wald (1) = 0.31, p = .581, OR = 0.78, 95% C.I. = [0.31, 1.92]). As Fig 3B shows, the cheating rate among participants in the time pressure group tended to increase as moral identity increased, while the opposite trend was observed in the self-paced group. There were no other significant results (all ps > 0.1); see Table 1 for details.

Fig 3. A) The heatmap of the first toss point and the report toss point.


B) The ratio of cheating to honesty across the participants over morality identity under time pressure and self-paced conditions. C) The bar chart of the cheating magnitude in the four conditions.

Table 1. Hierarchical logistic regression model for predictors of cheating behavior.
Predictor B SE Wald p Odds Ratio [95% C.I.] χ 2 p
Step 1 54.73 <.001
First toss point – 0.47 0.07 46.05 <.001 0.63 [0.55, 0.72]
Step 2 93.49 <.001
Time pressure 0.08 0.09 0.84 0.359 1.09 [0.91, 1.29]
Harm type 0.22 0.09 6.21 0.013 1.25 [1.05, 1.48]
Moral identity – 0.14 0.13 1.26 0.262 0.87 [0.67, 1.11]
Reward sensitivity 0.25 0.10 5.98 0.014 1.28 [1.05, 1.56]
Machiavellianism 0.43 0.13 11.06 0.001 1.54 [1.19, 1.98]
Step 3 114.09 <.001
Time pressure × Harm type –0.07 0.09 0.58 0.448 0.93 [0.78, 1.12]
Moral identity × Time pressure 0.27 0.14 3.98 0.046 1.31 [1.01, 1.71]
Moral identity × Harm type 0.03 0.14 0.04 0.844 1.03 [0.79, 1.34]
Moral identity × Time pressure × Harm type – 0.16 0.14 1.40 0.237 0.85 [0.65, 1.11]
Reward sensitivity × Time pressure – 0.14 0.11 1.71 0.191 0.87 [0.71, 1.07]
Reward sensitivity × Harm type – 0.05 0.11 0.20 0.657 0.96 [0.78, 1.17]
Reward sensitivity × Time pressure × Harm type – 0.04 0.11 0.16 0.693 0.96 [0.78, 1.18]
Machiavellianism × Time pressure 0.15 0.13 1.26 0.261 1.16 [0.89, 1.51]
Machiavellianism × Harm type – 0.04 0.13 0.07 0.794 0.97 [0.74, 1.26]
Machiavellianism × Time pressure × Harm type 0.16 0.13 1.42 0.233 1.17 [0.90, 1.53]

Note. Step 1: Nagelkerke R2 = 0.10, Hosmer and Lemeshow Test χ2(3) = 2.79, p = .426; Step 2: Nagelkerke R2 = 0.17, Hosmer and Lemeshow Test χ2(8) = 13.73, p = .089; Step 3: Nagelkerke R2 = 0.20; Hosmer and Lemeshow Test χ2(8) = 1.56, p = .992.

3.2.2. The influence of individual traits on time pressure and harm type in cheating magnitude.

This model aimed to explore whether time pressure and harm type affect cheating magnitude. Cheating magnitude, which represents how large a lie participants told, was computed as the reported points minus the actual first-roll points. As shown in Fig 3C, the cheating magnitudes were: time pressure with concrete victim (M ± SE = 1.14 ± 0.13), time pressure with abstract victim (M ± SE = 0.79 ± 0.12), self-paced with concrete victim (M ± SE = 1.00 ± 0.12), and self-paced with abstract victim (M ± SE = 0.70 ± 0.11). To further explore how time pressure and harm type influence cheating magnitude, an exploratory hierarchical linear regression analysis was conducted with cheating magnitude as the dependent variable. The first toss point was entered in the first step; time pressure, harm type, and the three scales (MIM, Mach-IV, SPSRQ) were entered in the second step; and the interactions were entered in the third step.

The results showed that the first model was significant, ΔF (1, 762) = 95.00, ΔR2 = 0.11, p < 0.001 (block 1), suggesting that the first toss point negatively predicted cheating magnitude in the toss task (β = –0.34, p < .001). As shown in Fig 3A, people tended to tell a bigger lie when the outcome of the die they rolled was smaller. The second model was significant, ΔF (5, 757) = 6.38, ΔR2 = 0.14, p < 0.001; further inspection showed that harm type positively predicted cheating magnitude (β = 0.08, p = .013): participants were more likely to tell a bigger lie when there was a concrete harm target than when there was an abstract one. Meanwhile, the Mach-IV also played a positive predictive role in the toss task (β = 0.15, p = .002), suggesting that highly Machiavellian individuals are more likely to tell big lies. In addition, the third model was significant, F (10, 747) = 2.14, ΔR2 = 0.17, p = 0.019, but further inspection revealed that none of the interactions significantly predicted cheating magnitude; see Table 2 for details.

Table 2. Hierarchical linear regression model for predictors of cheating magnitude.
Predictor Beta t p ΔF ΔR2 df p
Block1 95.00 0.11 (1, 762) <.001
First toss point – 0.33 – 9.75 <.001
Block 2 6.38 0.14 (5, 757) <.001
Time pressure 0.05 1.40 0.163
Harm type 0.08 2.47 0.014
Moral identity – 0.07 – 1.41 0.160
Machiavellianism 0.06 1.70 0.089
Reward sensitivity 0.17 3.33 0.001
Block 3 2.14 0.15 (10, 747) .019
Time pressure × Harm type – 0.02 – 0.50 0.614
MIM × Time pressure 0.09 1.73 0.084
MIM × Harm type – 0.02 – 0.43 0.669
MIM × Time pressure × Harm type – 0.07 – 1.36 0.173
Mach IV × Time pressure – 0.04 – 1.16 0.247
Mach IV × Harm type – 0.01 – 0.29 0.773
Mach IV × Time pressure × Harm type – 0.01 – 0.35 0.726
SPSRQ × Time pressure 0.08 1.59 0.113
SPSRQ × Harm type 0 – 0.003 0.997
SPSRQ × Time pressure × Harm type 0.09 1.76 0.079

4. Discussion

Are we intuitively inclined to cheat, or rather to be honest? A series of studies put this question to the empirical test but found inconsistent results [5,6,10]. The social harm theory was proposed as a possible explanation, putting forward social harm as a moderator of the divergent findings [18], but empirical evidence supporting this theory is lacking. This pre-registered study aimed to provide well-powered evidence by directly testing the theory with a time pressure manipulation in a total of 764 valid participants. Results showed no significant interaction between time pressure and harm type on cheating behavior, with Bayesian analyses showing that the data are about 4 times more likely under the null hypothesis than under the model with the interaction. This finding does not support the proposed social harm theory. Two potential explanations can be considered: (a) limitations of the methodology employed in the current study, and (b) the volatile empirical basis of the theoretical framework.

The first possibility is that methodological changes in the current study produced these results. Our study used an online experiment for data collection, which may have diminished the interpersonal impact on cheating behavior. Previous studies suggest that the salience of interpersonal impact significantly influences whether dominant impulses are expressed in socially desirable or undesirable behavior [18,37–39]. Specifically, individuals are more reluctant to inflict harm on identified victims than on unidentified victims, primarily because harming the former is more likely to induce profound emotional distress [40,41]. In the present study, participants were matched with an online “real” player to engage in the game, and the concrete harm group was informed that their monetary gains would inversely affect the other party’s earnings. In Pitesa’s study [19], by contrast, participants were allowed to directly obtain rewards from a jar. They were informed that all participants would receive money from the jar, implying that taking too much could reduce resources for others. Results from the manipulation check in our study indicated that participants were aware that their behavior would harm the other player, but the harm effect was not as strong as the effect reported by Pitesa et al. [19] (effect size ηp² = 0.403). Therefore, the weakened salience of interpersonal impact in our study may be one methodological reason why we did not find the interaction between time pressure and harm type.

A further consideration is that the issue of lying without being detected is particularly salient in online experiments. In classic offline paradigms, participants could observe the die outcome privately and thus plausibly misreport without fear of being caught [10]. By contrast, in our online setting, participants may have suspected that we recorded the actual roll, heightening the sense that cheating could be detected. Moreover, prior multi-lab replication work has shown that online participants often display lower engagement and a diminished sense of “experimental realism,” which can weaken the effectiveness of manipulations and attenuate observed effects [42]. Online experiments inevitably raise concerns about being monitored and about lower task engagement. While no solution is perfect, several reasonable approaches have been proposed and adopted, including the methods used in our study. In recent years, online experiments have increasingly incorporated methodological safeguards such as attention checks [43,44] and comprehension questions [45]. Empirical evidence further supports that, when such measures are in place, online studies can achieve high levels of validity and reliability [46,47]. In offline settings, however, participants may fear that their decisions are not anonymous, such that lying could threaten their public image of honesty in front of the experimenters and thereby increase the psychological cost of dishonesty [21]. By contrast, the heightened anonymity of online settings may reduce such concerns, making it easier for participants to cheat without inhibition.

The second possible explanation is the volatile empirical basis of the theory. For instance, previous studies in the Köbis meta-analysis often used behavior priming or ego depletion [48,49], but whether such manipulations have a meaningful impact on behavior is now highly contested, with effect sizes often close to zero (e.g., [22]). This casts doubt on their ability to influence cheating behavior. The exception is time pressure, which seems uncontroversial as a manipulation for inducing more automatic versus more strategic behavior [11,18,50]. Yet the impact of time pressure on cheating behavior shows inconsistent results [9,10]. Given the uncertainty surrounding these manipulations, it is imperative to adopt preregistered procedures to directly test and replicate their effects. This approach is paramount for strengthening the integrity and reliability of research findings in this area [51,52].

Another possibility is that participants in the concrete victim condition perceived the task more as a competitive or game-like context, in which “trying to win” became normatively acceptable. In contrast, the abstract victim condition may not have evoked the same framing, leading to relatively less cheating. Relatedly, Moore et al. [53] highlighted that many everyday moral dilemmas can be understood as conflicts between benevolence (compassion for a specific individual [54]) and integrity (adherence to impartial rules [54,55]). Within this framework, cheating in the abstract victim condition may be interpreted as a failure of integrity (violating a general rule), whereas cheating in the concrete victim condition may reflect a lack of benevolence (disregarding the welfare of an identifiable other). This perspective helps clarify why the same dishonest behavior can be evaluated differently across conditions.

We observed that certain personality traits may influence individuals’ propensity to engage in dishonest behavior. In our exploratory analyses, we found an interaction between moral identity and time pressure: in the time pressure group, cheating tended to increase with moral identity, whereas the opposite trend was observed in the self-paced group. This finding contradicts previous research by Xu and Ma [24], who found that moral identity moderated the effect of ego depletion on cheating behavior. One possible explanation is that time pressure alters how individuals with high moral identity process the situation. Although moral identity is generally expected to constitute a “moral default” [56,57] and reduce dishonest behavior, under time pressure participants may rely more on intuitive or heuristic responses than on deliberate moral reasoning. In our task, the 13-second limit may have prompted even those with high moral identity to fall back on such heuristics, resulting in more cheating. Thus, moral identity may buffer dishonesty only when sufficient time is available for reflection. These findings should be interpreted with caution, and further research is needed to clarify the mechanisms underlying this counterintuitive interaction. Relatedly, Campos and colleagues demonstrated that time pressure affects dishonest behavior differently depending on the level of time pressure evaluated [58]. Additionally, consistent with previous research, individuals with high reward sensitivity had difficulty resisting temptation, whereas those with low reward sensitivity resisted high-reward incentives more easily. Furthermore, we found a positive correlation between Machiavellianism and cheating behavior, although this association did not reach statistical significance for cheating magnitude. It is possible that the limited payout offered in this study (up to 6 RMB) constrained our ability to observe a distinct difference in the magnitude of cheating behavior.

Finally, it is important to recognize that dishonesty can take multiple forms, including self-serving, prosocial, and altruistic lies. The present research focused only on lying for self-profit, in line with previous research. Future studies should examine whether prosocial lies are intuitive or deliberative. At the same time, we acknowledge that the broader literature emphasizes the diversity of lying motives and their links to (pro)social behavior [59,60]. Future research would benefit from developing improved paradigms that examine prosocial and altruistic lies more directly, with a particular focus on the motivations underlying dishonesty and on how its consequences (whether positive, negative, or neutral) shape behavior.

In sum, our findings question the social harm theory and its empirical basis. To understand whether and when people are intuitively (dis-)honest, we require more robust empirical research.

Supporting information

S1 Appendix. The pre- and post-game questions.

(DOCX)

pone.0340083.s001.docx (15.8KB, docx)

Data Availability

All data and materials associated with this study are publicly available at Open Science Framework (https://osf.io/vgmsp/overview).

Funding Statement

This work was supported by the National Natural Science Foundation of China (32271111, U1736125 to L. Sai); the Science and Technology Innovation 2030-“Brain Science and Brain-like Research” Major Project (Grant/Award Number: 2022ZD0210800). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. There was no additional external funding received for this study.

References

  • 1.Lewis-Kraus G. They Studied Dishonesty. Was Their Work a Lie? The New Yorker. 30 Sept 2023. [Accessed 27 Apr 2024]. Available: https://www.newyorker.com/magazine/2023/10/09/they-studied-dishonesty-was-their-work-a-lie [Google Scholar]
  • 2.Sherchan W, Nepal S, Paris C. A survey of trust in social networks. ACM Comput Surv. 2013;45(4):1–33. doi: 10.1145/2501654.2501661 [DOI] [Google Scholar]
  • 3.Capraro V. Does the truth come naturally? Time pressure increases honesty in one-shot deception games. Economics Letters. 2017;158:54–7. doi: 10.1016/j.econlet.2017.06.015 [DOI] [Google Scholar]
  • 4.Capraro V, Schulz J, Rand DG. Time pressure and honesty in a deception game. Journal of Behavioral and Experimental Economics. 2019;79:93–9. doi: 10.1016/j.socec.2019.01.007 [DOI] [Google Scholar]
  • 5.Reis M, Pfister R, Foerster A. Cognitive load promotes honesty. Psychol Res. 2023;87(3):826–44. doi: 10.1007/s00426-022-01686-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Bereby-Meyer Y, Hayakawa S, Shalvi S, Corey JD, Costa A, Keysar B. Honesty Speaks a Second Language. Top Cogn Sci. 2020;12(2):632–43. doi: 10.1111/tops.12360 [DOI] [PubMed] [Google Scholar]
  • 7.Hausladen CI, Nikolaychuk O. Color me honest! Time pressure and (dis)honest behavior. Front Behav Econ. 2024;2. doi: 10.3389/frbhe.2023.1337312 [DOI] [Google Scholar]
  • 8.Tabatabaeian M, Dale R, Duran ND. Self-serving dishonest decisions can show facilitated cognitive dynamics. Cogn Process. 2015;16(3):291–300. doi: 10.1007/s10339-015-0660-6 [DOI] [PubMed] [Google Scholar]
  • 9.Shalvi S, Eldar O, Bereby-Meyer Y. Honesty requires time (and lack of justifications). Psychol Sci. 2012;23(10):1264–70. doi: 10.1177/0956797612443835 [DOI] [PubMed] [Google Scholar]
  • 10.Van der Cruyssen I, D’hondt J, Meijer E, Verschuere B. Does Honesty Require Time? Two Preregistered Direct Replications of Experiment 2 of Shalvi, Eldar, and Bereby-Meyer (2012). Psychol Sci. 2020;31(4):460–7. doi: 10.1177/0956797620903716 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Fan W, Yang Y, Zhang W, Zhong Y. Ego Depletion and Time Pressure Promote Spontaneous Deception:An Event-Related Potential Study. Adv Cogn Psychol. 2021;17(3):239–49. doi: 10.5709/acp-0333-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Jiang Q, Zhang Y, Zhu Z, Ding K, Zhang J, Gong P, et al. Does time of day affect dishonest behavior? Evidence from correlational analyses, experiments, and meta-analysis. Personality and Individual Differences. 2023;214:112330. doi: 10.1016/j.paid.2023.112330 [DOI] [Google Scholar]
  • 13.Welsh DT, Ordóñez LD. The dark side of consecutive high performance goals: Linking goal setting, depletion, and unethical behavior. Organizational Behavior and Human Decision Processes. 2014;123(2):79–89. doi: 10.1016/j.obhdp.2013.07.006 [DOI] [Google Scholar]
  • 14.Dickinson DL, Masclet D. Unethical Decision Making and Sleep Restriction: Experimental Evidence. 2021.
  • 15.Kouchaki M, Smith IH. The morning morality effect: the influence of time of day on unethical behavior. Psychol Sci. 2014;25(1):95–102. doi: 10.1177/0956797613498099 [DOI] [PubMed] [Google Scholar]
  • 16.Ling A, Loh KK, Kurniawan IT. No increased tendencies for money-incentivized cheating after 24-hr total sleep deprivation. Journal of Neuroscience, Psychology, and Economics. 2023;16(3):111–23. doi: 10.1037/npe0000177 [DOI] [Google Scholar]
  • 17.Zhong C-B. The Ethical Dangers of Deliberative Decision Making. Administrative Science Quarterly. 2011;56(1):1–25. doi: 10.2189/asqu.2011.56.1.001 [DOI] [Google Scholar]
  • 18.Köbis NC, Verschuere B, Bereby-Meyer Y, Rand D, Shalvi S. Intuitive Honesty Versus Dishonesty: Meta-Analytic Evidence. Perspect Psychol Sci. 2019;14(5):778–96. doi: 10.1177/1745691619851778 [DOI] [PubMed] [Google Scholar]
  • 19.Pitesa M, Thau S, Pillutla MM. Cognitive control and socially desirable behavior: The role of interpersonal impact. Organizational Behavior and Human Decision Processes. 2013;122(2):232–43. doi: 10.1016/j.obhdp.2013.08.003 [DOI] [Google Scholar]
  • 20.Reproducibility and Replicability in Science. Washington, D.C.: National Academies Press; 2019. doi: 10.17226/25303 [DOI] [PubMed] [Google Scholar]
  • 21.Gerlach P, Teodorescu K, Hertwig R. The truth about lies: A meta-analysis on dishonest behavior. Psychol Bull. 2019;145(1):1–44. doi: 10.1037/bul0000174 [DOI] [PubMed] [Google Scholar]
  • 22.Vohs KD, Schmeichel BJ, Lohmann S, Gronau QF, Finley AJ, Ainsworth SE, et al. A Multisite Preregistered Paradigmatic Test of the Ego-Depletion Effect. Psychol Sci. 2021;32(10):1566–81. doi: 10.1177/0956797621989733 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Yong E. Nobel laureate challenges psychologists to clean up their act. Nature. 2012. doi: 10.1038/nature.2012.11535 [DOI] [Google Scholar]
  • 24.Xu ZX, Ma HK. Does Honesty Result from Moral Will or Moral Grace? Why Moral Identity Matters. J Bus Ethics. 2014;127(2):371–84. doi: 10.1007/s10551-014-2050-x [DOI] [Google Scholar]
  • 25.Bacon AM, McDaid C, Williams N, Corr PJ. What motivates academic dishonesty in students? A reinforcement sensitivity theory explanation. Br J Educ Psychol. 2020;90(1):152–66. doi: 10.1111/bjep.12269 [DOI] [PubMed] [Google Scholar]
  • 26.Murphy PR. Attitude, Machiavellianism and the rationalization of misreporting. Accounting, Organizations and Society. 2012;37(4):242–59. doi: 10.1016/j.aos.2012.04.002 [DOI] [Google Scholar]
  • 27.Bereby-Meyer Y, Shalvi S. Deliberate honesty. Current Opinion in Psychology. 2015;6:195–8. doi: 10.1016/j.copsyc.2015.09.004 [DOI] [Google Scholar]
  • 28.Faul F, Erdfelder E, Buchner A, Lang A-G. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behav Res Methods. 2009;41(4):1149–60. doi: 10.3758/BRM.41.4.1149 [DOI] [PubMed] [Google Scholar]
  • 29.Aquino K, Reed A 2nd. The self-importance of moral identity. J Pers Soc Psychol. 2002;83(6):1423–40. doi: 10.1037//0022-3514.83.6.1423 [DOI] [PubMed] [Google Scholar]
  • 30.Jennings PL, Mitchell MS, Hannah ST. The moral self: A review and integration of the literature. J Organiz Behav. 2014;36(S1):S104–68. doi: 10.1002/job.1919 [DOI] [Google Scholar]
  • 31.Christie R, Geis FL. Studies in machiavellianism. Academic Press; 2013. [Google Scholar]
  • 32.Torrubia R, Ávila C, Moltó J, Caseras X. The Sensitivity to Punishment and Sensitivity to Reward Questionnaire (SPSRQ) as a measure of Gray’s anxiety and impulsivity dimensions. Personality and Individual Differences. 2001;31(6):837–62. doi: 10.1016/s0191-8869(00)00183-5 [DOI] [Google Scholar]
  • 33.Guo Y, Song G, Zhao P, Ma Y. A revised and translated version of the sensitivity to punishment and sensitivity to reward questionnaire (SPSRQ) for college students. Jinan Vocat Coll. 2011:91–4. [Google Scholar]
  • 34.Jeffreys H. The theory of probability. Oxford: OUP; 1998. [Google Scholar]
  • 35.Kass RE, Raftery AE. Bayes Factors. Journal of the American Statistical Association. 1995;90(430):773–95. doi: 10.1080/01621459.1995.10476572 [DOI] [Google Scholar]
  • 36.Rouder JN, Speckman PL, Sun D, Morey RD, Iverson G. Bayesian t tests for accepting and rejecting the null hypothesis. Psychon Bull Rev. 2009;16(2):225–37. doi: 10.3758/PBR.16.2.225 [DOI] [PubMed] [Google Scholar]
  • 37.Gneezy U. Deception: The Role of Consequences. American Economic Review. 2005;95(1):384–94. doi: 10.1257/0002828053828662 [DOI] [Google Scholar]
  • 38.Jones TM. Ethical Decision Making by Individuals in Organizations: An Issue-Contingent Model. The Academy of Management Review. 1991;16(2):366. doi: 10.2307/258867 [DOI] [Google Scholar]
  • 39.Mok A, De Cremer D. Too Tired to Focus on Others? Reminders of Money Promote Considerate Responses in the Face of Depletion. J Bus Psychol. 2017;33(3):405–21. doi: 10.1007/s10869-017-9497-6 [DOI] [Google Scholar]
  • 40.Kogut T, Ritov I. The “identified victim” effect: an identified group, or just a single individual? J Behav Decis Making. 2005;18(3):157–67. doi: 10.1002/bdm.492 [DOI] [Google Scholar]
  • 41.Milgram S. Some Conditions of Obedience and Disobedience to Authority. Human Relations. 1965;18(1):57–76. doi: 10.1177/001872676501800105 [DOI] [Google Scholar]
  • 42.Baumeister RF, Tice DM, Bushman BJ. A Review of Multisite Replication Projects in Social Psychology: Is It Viable to Sustain Any Confidence in Social Psychology’s Knowledge Base? Perspect Psychol Sci. 2023;18(4):912–35. doi: 10.1177/17456916221121815 [DOI] [PubMed] [Google Scholar]
  • 43.Hauser DJ, Schwarz N. Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants. Behav Res Methods. 2016;48(1):400–7. doi: 10.3758/s13428-015-0578-z [DOI] [PubMed] [Google Scholar]
  • 44.Zickfeld JH, Gonzalez ASR, Mitkidis P. Investigating the morning morality effect and its mediating and moderating factors. Journal of Experimental Social Psychology. 2025;118:104698. doi: 10.1016/j.jesp.2024.104698 [DOI] [Google Scholar]
  • 45.Parra D. Eliciting dishonesty in online experiments: The observed vs. mind cheating game. Journal of Economic Psychology. 2024;102:102715. doi: 10.1016/j.joep.2024.102715 [DOI] [Google Scholar]
  • 46.Douglas BD, Ewell PJ, Brauer M. Data quality in online human-subjects research: Comparisons between MTurk, Prolific, CloudResearch, Qualtrics, and SONA. PLoS One. 2023;18(3):e0279720. doi: 10.1371/journal.pone.0279720 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Horton JJ, Rand DG, Zeckhauser RJ. The online laboratory: conducting experiments in a real labor market. Exp econ. 2011;14(3):399–425. doi: 10.1007/s10683-011-9273-9 [DOI] [Google Scholar]
  • 48.Baumeister RF. Ego Depletion and Self-Control Failure: An Energy Model of the Self’s Executive Function. Self and Identity. 2002;1(2):129–36. doi: 10.1080/152988602317319302 [DOI] [Google Scholar]
  • 49.Gailliot MT, Plant EA, Butz DA, Baumeister RF. Increasing self-regulatory strength can reduce the depleting effect of suppressing stereotypes. Pers Soc Psychol Bull. 2007;33(2):281–94. doi: 10.1177/0146167206296101 [DOI] [PubMed] [Google Scholar]
  • 50.Rand DG, Newman GE, Wurzbacher OM. Social Context and the Dynamics of Cooperative Choice. Behavioral Decision Making. 2014;28(2):159–66. doi: 10.1002/bdm.1837 [DOI] [Google Scholar]
  • 51.Open Science Collaboration. PSYCHOLOGY. Estimating the reproducibility of psychological science. Science. 2015;349(6251):aac4716. doi: 10.1126/science.aac4716 [DOI] [PubMed] [Google Scholar]
  • 52.Youyou W, Yang Y, Uzzi B. A discipline-wide investigation of the replicability of Psychology papers over the past two decades. Proc Natl Acad Sci U S A. 2023;120(6):e2208863120. doi: 10.1073/pnas.2208863120 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Moore AK, Munguia Gomez DM, Levine EE. Everyday dilemmas: New directions on the judgment and resolution of benevolence–integrity dilemmas. Social Personality Psych. 2019;13(7). doi: 10.1111/spc3.12472 [DOI] [Google Scholar]
  • 54.Mayer RC, Davis JH, Schoorman FD. An integrative model of organizational trust. Acad Manage Rev. 1995;20:709–34. [Google Scholar]
  • 55.McFall L. Integrity. Ethics. 1987;98:5–20. [Google Scholar]
  • 56.Shalvi S, Levine E, Thielmann I, Jayawickreme E, Van Rooij B, Teodorescu K. The science of honesty: A review and research agenda. 2025.
  • 57.Speer SPH, Smidts A, Boksem MAS. Cognitive control increases honesty in cheaters but cheating in those who are honest. Proc Natl Acad Sci U S A. 2020;117(32):19080–91. doi: 10.1073/pnas.2003480117 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Campos VF, Valle MA, Bueno JLO. Cheating Modulated by Time Pressure in the Matrix Task. Trends in Psychol. 2022;32(4):1201–14. doi: 10.1007/s43076-022-00148-9 [DOI] [Google Scholar]
  • 59.Rand DG, Greene JD, Nowak MA. Spontaneous giving and calculated greed. Nature. 2012;489(7416):427–30. doi: 10.1038/nature11467 [DOI] [PubMed] [Google Scholar]
  • 60.Tinghög G, Andersson D, Bonn C, Böttiger H, Josephson C, Lundgren G, et al. Intuition and cooperation reconsidered. Nature. 2013;498(7452):E1–2; discussion E2-3. doi: 10.1038/nature12194 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Tobias Otterbring

12 Aug 2025

I have now received the reports from two reviewers with considerable knowledge and expertise in your topic domain (their detailed feedback appears below or as separate files). On the positive side, both the reviewers find your manuscript easy to follow and believe that you have interesting results based on a rigorous and preregistered empirical investigation. On the more critical side, however, they also raise a series of substantial concerns, which largely focus on lacking recruitment details, potential differential attrition across conditions, insufficient level of detail in the preregistered analyses, and clarity issues with respect to what was and was not preregistered. Moreover, they also note construct validity/confusion concerns, a failure to clearly discuss certain seemingly counterintuitive findings (e.g., more cheating in the concrete victim condition), and a lack of elaboration pertaining to whether, why, and how you believe that the study setting (online vs. real life) might have influenced your results, potentially due to lacking engagement among online participants (Baumeister et al., 2023). Further, the reviewers mention some uncited work which might bolster your theorizing and storyline.

Based on the constructive comments from the reviewers and my own reading of your paper, I am willing to move this manuscript into a second round of reviews. Given the magnitude of some of the issues identified by the reviewers, this will be a major revision, although a revision with a relatively clear path toward publication as long as you meticulously address all the substantive concerns flagged in the separate reviewer reports. Please make sure to reply to all comments made by the reviewers, incorporate needed changes in the manuscript, and then send an updated version of it along with your revision notes at your earliest convenience. Try to do this within the next two months. If you need additional time, feel free to let me know (at tobias.otterbring@uia.no) and I am happy to extend your revision window.

Reference

Baumeister, R. F., Tice, D. M., & Bushman, B. J. (2023). A review of multisite replication projects in social psychology: is it viable to sustain any confidence in social psychology’s knowledge base? Perspectives on Psychological Science, 18(4), 912-935.

Please include the following items when submitting your revised manuscript:

   • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

   • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes

   • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Tobias Otterbring

Handling Editor, PLOS One

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating in your Funding Statement:

[This work was supported by the National Natural Science Foundation of China (32271111, U1736125 to L. Sai); the Science and Technology Innovation 2030-“Brain Science and Brain-like Research” Major Project (Grant/Award Number: 2022ZD0210800).].

Please provide an amended statement that declares *all* the funding or sources of support (whether external or internal to your organization) received during this study, as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now. Please also include the statement “There was no additional external funding received for this study.” in your updated Funding Statement.

3. Thank you for stating the following in the Acknowledgments Section of your manuscript:

[This work was supported by the National Natural Science Foundation of China (32271111, U1736125 to L. Sai); the Science and Technology Innovation 2030-“Brain Science and Brain-like Research” Major Project (Grant/Award Number: 2022ZD0210800).]

We note that you have provided funding information that is currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

[This work was supported by the National Natural Science Foundation of China (32271111, U1736125 to L. Sai); the Science and Technology Innovation 2030-“Brain Science and Brain-like Research” Major Project (Grant/Award Number: 2022ZD0210800).]

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

Please include your amended Funding Statement within your cover letter. We will change the online submission form on your behalf.

4. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?


Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes

Reviewer #2: Yes

**********

Reviewer #1: Hello

I have read your manuscript "Intuitive or Deliberative Dishonesty: The effect of abstract versus concrete victim" submitted to Plos One. Below is my review.

First, I applaud the fact that you conducted a preregistered well-powered study that aims to replicate earlier findings. This is surely a merit.

Second, for the most part I think the paper is well written and easy to follow. You cite plenty of relevant research. Perhaps you want to look at Josh Greene's work on dual-process morality, where it is proposed that intuition leads to deontological decision making (following rules such as not harming) whereas deliberation leads to utilitarian decision making (maximizing good consequences). See Greene, J. D. (2008). The secret joke of Kant's soul. In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol 3: The neuroscience of morality: Emotion, brain disorders, and development. (pp. 35-80). Cambridge: MIT Press. for an overview.

I wonder whether you measure dishonesty or selfishness. In your study, lying is always selfish as it increases the participants’ payoff, so dishonesty and selfishness are difficult to disentangle. Still, it is important to remember that both lies and truth telling can have positive, negative, or no consequences both for the person deciding whether to lie and for the person being lied to. There are white lies, prosocial lies and altruistic lies. This should be highlighted and you should discuss the related literature investigating whether prosociality is intuitive or deliberate. For instance Rand, D. G., Greene, J. D., & Nowak, M. A. (2012). Spontaneous giving and calculated greed [10.1038/nature11467]. Nature, 489(7416), 427-430. Tinghög, G., Andersson, D., Bonn, C., Böttiger, H., Josephson, C., Lundgren, G., Västfjäll, D., Kirchler, M., & Johannesson, M. (2013). Intuition and cooperation reconsidered. Nature, 498(7452), E1-E2.

I am not perfectly updated on the literature, but I feel that the fact that you did the die roll and reporting online rather than in real life can influence the believability of being able to lie without being detected. In the original die rolling studies, participants rolled a die in private and then reported the results on a separate paper (correct me if I am misremembering). This approach made it believable that one could cheat without being detected. In your online study, I feel that I would assume that you recorded the result of the die roll and could easily “prove that I lied” if you wanted to. You might not agree, but I feel you should discuss whether this rather big methodological difference could explain diverging results.

Relatedly, could it even be that what you measure is not dishonesty (lying in the belief that no one notices) but rather how much participants care about being caught lying?

I like your method, but did you consider manipulating the die roll results rather than just observing them? In an online paradigm, this could easily be done I suppose, and it would give you more control. Not saying that it must be done, but that it could be reflected upon.

I believe the most surprising result is that people cheated more in the concrete victim conditions. This seems to be an effect in the opposite direction of what could be expected and against the identifiable victim effect. I strongly suggest that you discuss and try to make sense of this result. Not only a null effect but an effect in the opposite direction. Could it be that participants in the concrete victim conditions perceived it more like a game where the normative response was to “try to win” whereas participants in the abstract condition did not. Not sure, but in your version it seems like you want to hide this significant effect.

I do not really understand panel C of Figure 2. The x-axis displays the two manipulated variables, but what does “score” on the y-axis stand for?

The weak interaction effect of moral identity and time pressure condition is interesting but hard to make sense of. It makes sense that low moral identity and no time pressure lead to much cheating, but it does not make much sense that time pressure leads to much cheating among those with high moral identity. Or does it? I feel that you report results in a good way, but you could do more to at least offer possible reasons for the obtained results.

The study below seems relevant. I feel that cheating in the abstract victim condition is to act with low integrity (breaking a rule and acting unfairly), whereas cheating in the concrete victim condition is to act with low benevolence.

Moore, A. K., Munguia Gomez, D. M., & Levine, E. E. (2019). Everyday dilemmas: New directions on the judgment and resolution of benevolence–integrity dilemmas. Social and Personality Psychology Compass, 13(7), e12472.

Reviewer #2: This paper reports a preregistered experiment investigating whether time pressure (intuition) increases dishonesty (cheating behavior) when dishonesty harms an abstract (but not concrete) other. Results do not support this prediction, as there is no significant interaction effect between time pressure (vs. no time pressure) and harm type (abstract vs. concrete victim). However, there was a main effect of harm type such that participants cheated more when the victim was concrete than abstract, opposite of expectations. The experiment also explores individual traits.

The study addresses an interesting and relevant question. I think the paper is well organized and clear, and the conclusions appear warranted by the data. I hope my comments below can help improve the paper further.

1. Some more detail about how/where participants were recruited would be desirable (e.g. was it using a recruitment platform, social media, etc.).

2. I am a bit puzzled by the results from the manipulation check for the harm type manipulation (concrete vs. abstract victim). The question asks: “How much does your task performance affect another subject’s earnings?” It seems to me that participants in the “abstract” condition should answer 1 (“not at all”) if they correctly understood the instructions, yet the mean rating is 2.31. Furthermore, one of the pre-game rule-check questions asks how much the opponent could obtain if the participant reports 5 points; given that participants in the abstract condition are not playing with an opponent, it is unclear what they should answer here. I wonder if there might be some selection effect because of this that affects the two conditions differently, as participants are screened out if they answer incorrectly. Also, how many participants in each condition failed the rule-check questions?

3. I would suggest reporting the full regression results from the main hypothesis test (section 3.1.2), considering that it is the main (and preregistered) hypothesis test.

4. In some places it could be made clearer what was and what was not preregistered among the analyses. In particular, it could be mentioned in the analysis section that the use of Bayes factors was not preregistered.

5. Figure 2c (manipulation checks) could be modified slightly to improve interpretability on its own (without having to read the text). E.g. add the manipulation check questions into the figure as subheadings or similar.

Minor:

1. Look over the grammar in this sentence: “Other experimental works have similarly revealed … will increase self-serving dishonesty.” (p 3-4)

2. “cognitively impaired” (p 4) has a different connotation than what I think the authors intend – change to “cognitively depleted” or “temporarily cognitively impaired”?

3. Preregistrated (p 5) -> preregistered

4. Mofied -> modified (p 5)

5. Pista et al./Pistea et al. -> Pitesa et al. (p 5)

6. ”Exploratory” -> ”As exploratory analyses” or similar (p 6)

7. present study -> the present study (p 6)

8. preregistratrion study -> preregistered study (p 23)

9. P. 11: “t-tests assessed the manipulations' effects on time pressure and harm types” – I believe this refers to the manipulation checks? I suggest a slight rephrasing for clarity, e.g. “t-tests assessed the manipulations’ effects on participants’ subjective experience of time pressure and whether participants understood that their results in the game would (or would not) affect another participant’s earnings” (or whatever the authors find more appropriate).

**********

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.

Reviewer #1: Yes: Arvid Erlandsson

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/ . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org . Please note that Supporting Information files do not need this step.

PLoS One. 2026 Jan 23;21(1):e0340083. doi: 10.1371/journal.pone.0340083.r002

Author response to Decision Letter 1


13 Oct 2025

Response Letter

Review PONE-D-25-29808

Title: Intuitive or Deliberative Dishonesty: The effect of abstract versus concrete victim

Editor comment:

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I have now received the reports from two reviewers with considerable knowledge and expertise in your topic domain (their detailed feedback appears below or as separate files). On the positive side, both the reviewers find your manuscript easy to follow and believe that you have interesting results based on a rigorous and preregistered empirical investigation. On the more critical side, however, they also raise a series of substantial concerns, which largely focus on lacking recruitment details, potential differential attrition across conditions, insufficient level of detail in the preregistered analyses, and clarity issues with respect to what was and was not preregistered. Moreover, they also note construct validity/confusion concerns, a failure to clearly discuss certain seemingly counterintuitive findings (e.g., more cheating in the concrete victim condition), and a lack of elaboration pertaining to whether, why, and how you believe that the study setting (online vs. real life) might have influenced your results, potentially due to lacking engagement among online participants (Baumeister et al., 2023). Further, the reviewers mention some uncited work which might bolster your theorizing and storyline.

Based on the constructive comments from the reviewers and my own reading of your paper, I am willing to move this manuscript into a second round of reviews. Given the magnitude of some of the issues identified by the reviewers, this will be a major revision, although a revision with a relatively clear path toward publication as long as you meticulously address all the substantive concerns flagged in the separate reviewer reports. Please make sure to reply to all comments made by the reviewers, incorporate needed changes in the manuscript, and then send an updated version of it along with your revision notes at your earliest convenience. Try to do this within the next two months. If you need additional time, feel free to let me know (at tobias.otterbring@uia.no) and I am happy to extend your revision window.

Reference

Baumeister, R. F., Tice, D. M., & Bushman, B. J. (2023). A review of multisite replication projects in social psychology: is it viable to sustain any confidence in social psychology’s knowledge base? Perspectives on Psychological Science, 18(4), 912-935.

Response: Thank you very much for your comment and for considering our manuscript for publication in PLOS ONE. We are grateful for the constructive comments provided by you and the reviewers. We truly appreciate your recognition of the strengths of our study, as well as the detailed feedback that has helped us to improve the manuscript substantially.

As will become clear in our response to the reviewers, we carefully considered all of the issues raised, including those concerning recruitment details (see page 7 in our revision draft), preregistration transparency (see page 12 in our revision draft), potential attrition differences across conditions (see our response to Reviewer #2, Comment 2), discussion of counterintuitive findings (see pages 25-27 in our revision draft), and possible implications of conducting the study online (see pages 24-25 in our revision draft). We have revised the manuscript accordingly and clarified these points in detail. We have also incorporated additional citations suggested by the reviewers to strengthen the theoretical framing.

In addition, we read Baumeister et al. (2023) and carefully considered the possible reasons for large-scale replication failures discussed by the authors, including that the original hypothesis was wrong; that the hypothesis was not properly tested because of operational failure; low engagement of participants; and bias toward failure. Although prior multi-lab replication work has found that participants in online settings often display lower engagement and a diminished sense of “experimental realism,” which can weaken the effectiveness of manipulations and attenuate observed effects (Baumeister et al., 2023), it is also noteworthy that many online experiments have increasingly incorporated attention checks (Hauser & Schwarz, 2016; Zickfeld et al., 2025), comprehension questions (Parra et al., 2024), and other methodological safeguards, thereby gradually improving data quality (Douglas et al., 2023; Horton et al., 2011). Thus, we think the assumption that lab participants are attentive and engaged while online participants are not is not that straightforward. Nevertheless, this issue underscores the importance of further refining experimental paradigms and strengthening experimental control in online settings to ensure the collection of higher-quality data. We have added a corresponding discussion of these issues in the revised manuscript (please see pages 23-25).

In the text below, we provide a point-by-point response to all reviewer comments. We sincerely thank you and the reviewers for your constructive guidance and for giving us the opportunity to revise our work. We look forward to our revised manuscript moving to the second round of reviews.

Reviewer #1:

I have read your manuscript "Intuitive or Deliberative Dishonesty: The effect of abstract versus concrete victim" submitted to Plos One. Below is my review.

Comment 1: First, I applaud the fact that you conducted a preregistered well-powered study that aims to replicate earlier findings. This is surely a merit.

Response: Thank you for this positive comment.

Comment 2: Second, for the most part I think the paper is well written and easy to follow. You cite plenty of relevant research. Perhaps you want to look on Josh Greenes work on dual process morality where it is proposed that intuition leads to deontological decision making (following rules such as not harming) whereas deliberation leads to utilitarian decision making (maximizing good consequences). See Greene, J. D. (2008). The secret joke of Kant's soul. In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol 3: The neuroscience of morality: Emotion, brain disorders, and development. (pp. 35-80). Cambridge: MIT Press. for an overview.

Response: Thank you very much for your kind feedback and the helpful suggestion regarding Greene’s work on dual process morality. We agree that his proposal—that intuitive processes tend to drive deontological decisions while deliberative processes support utilitarian reasoning—is highly influential and insightful. We did review Greene (2008) and found the perspective compelling. However, given that our study focuses specifically on (dis)honesty behavior in the context of abstract vs. concrete victims with time pressure rather than moral dilemma judgments per se, we felt that the dual-process morality framework was not the most fitting for our research aims. Still, we truly appreciate the suggestion and will consider referencing it in future related work.

I wonder whether you measure dishonesty or selfishness. In your study, lying is always selfish as it increases the participants’ payoff, so dishonesty and selfishness are difficult to disentangle. Still, it is important to remember that both lies and truth telling can have positive and negative (or no) consequences both for the person deciding whether to lie and for the person being lied to. There are white lies, prosocial lies and altruistic lies. This should be highlighted, and you should discuss the related literature investigating whether prosociality is intuitive or deliberate. For instance: Rand, D. G., Greene, J. D., & Nowak, M. A. (2012). Spontaneous giving and calculated greed. Nature, 489(7416), 427-430. Tinghög, G., Andersson, D., Bonn, C., Böttiger, H., Josephson, C., Lundgren, G., Västfjäll, D., Kirchler, M., & Johannesson, M. (2013). Intuition and cooperation reconsidered. Nature, 498(7452), E1-E2.

Response: Thank you for this comment. Our research focuses on lying for self-profit. This is because the theoretical debates focus on situations where lying could bring benefits (Shalvi et al., 2012). However, we agree with the reviewer that there are other kinds of lies, such as white lies, prosocial lies and altruistic lies, and they are also worth examining. We have added a paragraph to discuss that future studies should also examine whether prosocial lies are intuitive or deliberative. See page 27:

“Finally, it is important to recognize that dishonesty can take multiple forms, including self-serving, prosocial, and altruistic lies. The present research focuses only on lying for self-profit, in line with previous research. Future studies should examine whether prosocial lies are intuitive or deliberative. At the same time, we acknowledge that the broader literature emphasizes the diversity of lying motives and their links to (pro)social behavior (Rand et al., 2012; Tinghög et al., 2013). Future research would benefit from developing improved paradigms that examine prosocial and altruistic lies more directly, with a particular focus on the motivations underlying dishonesty and on how its consequences—whether positive, negative, or neutral—shape behavior.”

I am not perfectly updated on the literature, but I feel that the fact that you conducted the die roll and reporting online rather than in person may influence the believability of being able to lie without being detected. In the original die-rolling studies, participants rolled a die in private and then reported the results on a separate paper (correct me if I am misremembering). This approach made it believable that one could cheat without being detected. In your online study, I feel that I would assume that you recorded the result of the die roll and could easily “prove that I lied” if you wanted to. You might not agree, but I feel you should discuss whether this rather big methodological difference could explain diverging results.

Response: Thank you for this valuable comment. We agree with the reviewer’s consideration that participants may think that they will be detected if they lied. In the typical laboratory version of this paradigm (see e.g., Van der Cruyssen et al., 2020), participants indeed rolled a die privately under a cup and observed the outcome through a small hole (visible only to themselves). They then reported the result, with higher reported outcomes yielding larger rewards (e.g., reporting a six earned €12). Because the outcome was known only to the participant, they had a credible opportunity to misreport without the risk of being caught. This procedure indeed allows for cheating without detection. Online implementations of this paradigm inevitably raise concerns about how anonymity is ensured. While no solution is perfect, several reasonable approaches have been proposed and adopted, including the method used in our study. Numerous related studies (e.g., Alfonso et al., 2022; Zickfeld et al., 2025) have successfully conducted this paradigm online, and meta-analyses suggest that, with appropriate controls, online results can be as reliable as those obtained in laboratory settings. In recent years, online experiments have increasingly incorporated methodological safeguards such as attention checks (Hauser & Schwarz, 2016; Zickfeld et al., 2025) and comprehension questions (Parra et al., 2024). Empirical evidence further supports that, when such measures are in place, online studies can achieve high levels of validity and reliability (Douglas et al., 2023; Horton et al., 2011). At the same time, it is important to note that online studies also offer a greater degree of anonymity and less social presence compared to laboratory settings. 
Previous studies suggest that the salience of interpersonal impact significantly influences the manifestation of dominant impulses in socially desirable or undesirable behavior (Gneezy, 2005; Jones, 1991; Köbis et al., 2019; Mok & De Cremer, 2018). Specifically, individuals exhibit a greater reluctance to inflict harm on identified victims than on unidentified victims, primarily due to the heightened potential for inducing profound emotional distress in the former case (Kogut & Ritov, 2005; Milgram, 1965). This also highlights that online experiments offer many factors for further exploration. We call for future research to conduct comparative studies and develop new paradigms to better separate these factors. In the revised manuscript, we have expanded our Discussion of this issue (please see pages 23-25). We sincerely appreciate your suggestion, which helped us clarify this important aspect.

“The first possibility is that methodological changes in the current study produced these results…

A further consideration is that the issue of lying without being detected is particularly salient in online experiments. In classic offline paradigms, participants could observe the die outcome privately and thus plausibly misreport without fear of being caught (Van der Cruyssen et al., 2020). By contrast, in our online setting, participants may have suspected that we recorded the actual roll, thereby enhancing the sense that cheating could be detected. Moreover, prior multi-lab replication work has shown that participants in online settings often display lower engagement and a diminished sense of “experimental realism,” which can weaken the effectiveness of manipulations and attenuate observed effects (Baumeister et al., 2023). Online experiments inevitably raise concerns about being monitored/detected and feeling less engaged in the task. While no solution is perfect, several reasonable approaches have been proposed and adopted, including the methods used in our study. In recent years, online experiments have increasingly incorporated methodological safeguards such as attention checks (Hauser & Schwarz, 2016; Zickfeld et al., 2025) and comprehension questions (Parra, 2024). Empirical evidence further supports that, when such measures are in place, online studies can achieve high levels of validity and reliability (Douglas et al., 2023; Horton et al., 2011). However, in offline settings, participants may fear that their decisions are not anonymous, such that lying could threaten their public image of honesty toward the experimenters and thereby increase the psychological cost of dishonesty (Gerlach et al., 2019). By contrast, the heightened anonymity of online settings may reduce such concerns, making it easier for participants to cheat without inhibition.”

Relatedly, could it even be that what you measure is not dishonesty (lying in the belief that no one notices) but rather how much participants care about being caught lying?

Response: Thank you for raising this important point. We agree that, in principle, there is a distinction between measuring dishonesty per se and measuring how much participants care about being caught lying. Methodologically, our die-roll paradigm is designed to capture dishonesty by comparing self-reported outcomes with the actual probabilities of dice results. Participants always had an opportunity to increase their payoff by inflating their report, and since their individual rolls were private, the only way for them to gain extra points was through dishonesty. While we cannot completely rule out that some participants refrained from lying because they worried about being detected, the paradigm and incentive structure strongly support the interpretation that the primary behavior we measured was honesty versus dishonesty.

Comment 3: I like your method, but did you consider manipulating the die roll results rather than just observing them? In an online paradigm this could easily be done, I suppose, and it would give you more control. I am not saying that it must be done, but it could be reflected upon.

Response: In online settings, there are indeed two ways to handle the die roll outcome: manipulating it (Mazar et al., 2008) versus observing it (Shalvi et al., 2011; Van der Cruyssen et al., 2020). We c

Attachment

Submitted filename: Response_Letter.docx

pone.0340083.s003.docx (667.4KB, docx)

Decision Letter 1

Tobias Otterbring

16 Dec 2025

Intuitive or Deliberative Dishonesty: The effect of abstract versus concrete victim

PONE-D-25-29808R1

Dear Dr. Sai,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Tobias Otterbring

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Dear authors,

Thank you for delivering a responsive revision. Based on your material revisions in the manuscript and your detailed replies to the reviewers, both of whom are positive toward the current version of the manuscript, I am happy to recommend acceptance of your paper in its current form. That said, I strongly recommend you to consider the remaining minor suggestions from the reviewers in the proof process. Congratulations!

Kind regards,

Tobias Otterbring

Associate Editor, PLOS One

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes

Reviewer #2: Yes

**********

Reviewer #1: Hello

This is reviewer 1 from the earlier round. I have read your revised manuscript "Intuitive or Deliberative Dishonesty: The effect of abstract versus concrete victim" submitted to Plos One. Below is my review.

Thank you for the good revision. I feel you addressed most of my concerns in a satisfactory way. I have some final suggestions, but after they are fixed, I am willing to recommend publication.

Page 6. I feel you repeat yourself regarding analyses about individual differences.

Page 8-9: I feel that the die-rolling task could be explained in more detail at this point (it becomes clearer when you go through the results). When reading about it here, it is not clear (unless one has read Shalvi et al.) that the die roll was semi-random (you controlled it so that it never landed on 6, but it was random whether it landed on 1-5). Please make it clear how you operationalize cheating and honesty, and I would also mention cheating magnitude at this point.

Page 15, Fig 2: Here you refer to the conditions as harm vs. no harm. I think it is better to refer to them as concrete vs. abstract harm.

Page 24. Last rows: “inflict [harm to] the identified victims”

The discussion is much improved, but I think you can separate “lack of expected results” (which can be due to many different factors) from “presence of unexpected results” even more explicitly. Again, I am surprised that cheating was higher in the concrete harm conditions. The discussion you have is good, but I would also argue that your “concrete harm” condition could be even more concrete (e.g., participants learn more individualizing aspects about the player they are paired with, and perhaps even anticipate being able to observe the reaction when that player receives his/her pay).

Reviewer #2: Thank you for responding to my comments. I think the authors have answered the points raised.

Regarding my comment about potential selection effects - thank you for clarifying that participants are not screened out based on their answers to the pre-game rule-check questions. I would suggest adjusting the wording in the paper slightly to clarify this, specifically on p. 9 where it is stated that “only those who passed the rule-check questions could continue to the formal experiment” (to something along the lines of "participants could not proceed until they answered correctly")

**********

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.

Reviewer #1: Yes: Arvid Erlandsson

Reviewer #2: No

**********

Acceptance letter

Tobias Otterbring

PONE-D-25-29808R1

PLOS One

Dear Dr. Sai,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS One. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Tobias Otterbring

Academic Editor

PLOS One

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. The pre- and post-game questions.

    (DOCX)

    pone.0340083.s001.docx (15.8KB, docx)
    Attachment

    Submitted filename: Response_Letter.docx

    pone.0340083.s003.docx (667.4KB, docx)

    Data Availability Statement

    All data and materials associated with this study are publicly available at Open Science Framework (https://osf.io/vgmsp/overview).


    Articles from PLOS One are provided here courtesy of PLOS