PLoS One. 2021 Dec 1;16(12):e0260141. doi: 10.1371/journal.pone.0260141

Do exhausted primary school students cheat more? A randomized field experiment

Tamás Keller 1,2,3,*, Hubert János Kiss 2,4
Editor: Valerio Capraro5
PMCID: PMC8635394  PMID: 34851960

Abstract

Motivated by the two-decade-long scientific debate over the existence of the ego-depletion effect, our paper contributes to exploring the scope conditions of ego-depletion theory. Specifically, in a randomized experiment, we depleted students’ self-control with a cognitively demanding task that required students’ effort. We measured the effect of depleted self-control on a subsequent task that required self-control to not engage in fraudulent cheating behavior—measured with an incentivized dice-roll task—and tested ego-depletion in a large-scale preregistered field experiment that was similar to real-life situations. We hypothesized that treated students would cheat more. The data confirms the hypothesis and provides causal evidence of the ego-depletion effect. Our results provide new insights into the scope conditions of ego-depletion theory, contribute methodological information for future research, and offer practical guidance for educational policy.

1. Introduction

The idea that people can consume their stock of self-control in situations that require self-control, resulting in a reduction in self-control in a subsequent self-control task, is referred to as ego-depletion. The ego-depletion theory is an elegant theory in social psychology that has led to a blossoming of empirical research in this field [1]. However, after more than two decades of research, there is still no conclusive agreement between skeptics and proponents of the theory that the ego-depletion effect is real [2].

Early meta-analyses on ego depletion reveal medium-to-large effect sizes, suggesting that exhausted self-control leads to impaired performance in subsequent self-control tasks [3]. Later, the theory faced serious threats, as a subsequent meta-analysis showed that the effect does not differ empirically from zero [4]. A large multi-lab registered replication report likewise found a null ego-depletion effect in an inference task requiring cognitive control and attention [5]. More recently, however, Friese et al. [2] concluded that while these criticisms seriously challenge ego-depletion theory, they are not sufficient to conclusively debunk it. Based on these considerations, some features of prior empirical research require revision, so as to explore the specific conditions under which the ego-depletion effect might occur.

First, some previous research might have employed weak experimental manipulation due to the adopted sequential task paradigm, which deploys two consecutive tasks. While both tasks require self-control in the treatment group, only the second task requires it in the control group. Within the sequential task paradigm, the depletion of self-control hinges on the assumption that participants exert sufficient effort to deplete their self-control resources. Nevertheless, experimental manipulations are often abstract and tiresome cognitive tasks. They may either prompt respondents to control automatic reactions beyond individual awareness (suppress thoughts) or require effort to carry out some tedious exercises. Treated people might not be sufficiently motivated to engage with the manipulation task, which can cause weak experimental manipulation [6]. Therefore, new research should explore the ego-depletion effect by employing experimental manipulation beyond the sequential task paradigm.

Second, previous research employed various outcome variables, each of which might incorporate additional aspects besides self-control. The reliability of these outcome variables as measures of self-control is often not explained, and the possible interplay between the additional aspects and self-control is usually not discussed. This shortcoming calls for (new) outcome variables with an intuitive, direct link to self-control.

Third, most prior research has tested ego depletion in lab experiments that provide artificial circumstances that are sometimes quite different from real-life situations. By contrast, various real-life situations offer tangible examples of how depletion triggers loss of self-control. Therefore, there is a need to move from lab to field experiments when testing the ego-depletion effect, thereby testing depletion in respondents’ real-life contexts.

To address these issues, we conducted a large-scale online experiment including 1,143 primary school students from 126 classrooms in rural Hungarian primary schools. Our paper contributes to exploring the scope conditions of ego-depletion theory in the following ways.

First, we go beyond the sequential task paradigm. We depleted treated students’ self-control using an approximately 20-minute questionnaire. This contained a 10-minute grade-specific math test and subsequent questions challenging students’ ability to delay gratification and altruism. Our experimental manipulation is based on the suggestion by Baumeister and Vohs [7] that solving complex logic problems consumes energy and leads to impaired self-control. In particular, math problems have been used in prior studies to deplete self-control [4]. After the 20-minute-long questionnaire (depletion task), students in the treatment group received the second task (outcome measure) requiring self-control. Students in the control group received only the second task, and they did not solve the 20-minute-long questionnaire beforehand.

Second, our outcome measure is cheating to achieve a more valuable outcome. In our experimental situation, students could gain a more valuable gift if they behaved dishonestly, that is, if they did not exert self-control and obtained a more appealing gift by not telling the truth. Cheating is a policy-relevant outcome that might be of interest to educational practitioners. Specifically, depleted students might cheat in school—to attain a desired grade, for instance [8]. Yet little research has examined the link between depletion and cheating, and what exists concerns mainly university rather than compulsory education [9, 10]. Prior experimental economic literature involving primary school students has almost exclusively explored the associations between socioeconomic factors and cheating [11, 12], with little attention to the social-psychological foundations of cheating. In sum, cheating as an outcome variable offers the opportunity to test students’ ability to exert self-control. In particular, it generates new substantive results among primary school students that might interest educational policymakers.

Third, we test our hypothesis with a field experiment that explores ego depletion in a real rather than artificial situation, as the exam situation (solving math problems) is familiar to all students. Moreover, we deployed a large-scale and preregistered study. These features are important, since prior empirical results of ego-depletion were achieved with small sample sizes, and therefore, skeptics of the ego-depletion theory condemn these results as driven by specification-search (p-hacking) and small-study effects [13].

Our findings support the prediction of the ego-depletion effect. The exhausted (that is, treated) students, whose stock of self-control had been depleted by solving the 20-minute-long questionnaire, cheated by approximately 4.4 percentage points more than the control students. Our results suggest that our manipulation, which resembled the depletion that students might face in their everyday school context, consumes students’ self-control and leaves them unable to resist the temptation to behave dishonestly.

Our results contribute to determining the scope conditions of the ego-depletion effect. Ego depletion occurs when primary school students’ self-control is depleted by a cognitively demanding task (1), when dishonest behavior—i.e., cheating—is examined (2), and when ego-depletion is observed in the real-life context instead of in artificial laboratory circumstances (3). The practical consequence emerging from our work is that if primary school students are depleted during cognitively demanding exercises, they cheat more—an important finding that practitioners of education should consider when they plan students’ daily school schedules.

The results imply that dishonesty, not honesty, is intuitive, as resisting the temptation to be dishonest requires resources that might be consumed in tasks requiring self-control. Nevertheless, our paper has a narrow focus on the ego-depletion effect. We acknowledge, but do not directly focus on, the more general scientific debates on intuitive honesty/dishonesty [14] and the related tests on time pressure [15, 16] or cognitive load [17] that bring (sometimes conflicting) experimental evidence to this debate.

2. Materials and methods

2.1 Research transparency

We followed our detailed preanalysis plan, which we archived at the Open Science Forum (OSF) before receiving the endline data (https://osf.io/jhms2/). The data and analytic scripts to reproduce the analyses are available on the OSF page of the project: https://osf.io/2ykp8/.

2.2 Study overview

We conducted a large-scale online experiment among Hungarian primary school students between May 18 and June 8, 2020. Primary education in Hungary is compulsory and encompasses the primary and lower-secondary levels (ISCED 1 and ISCED 2), comparable to elementary and middle school in the United States.

We recruited our respondents from an ongoing research program with 2,898 students (grades 4–8) in 29 Hungarian primary schools. We contacted and asked students via their schools to participate in a voluntary online experiment.

We reached a self-selected sample of students. Students participated in the experiment at home (students were undertaking online home-based education in response to the Covid-19 pandemic) under unsupervised conditions. On average, students who participated in our survey had better grades and better school behavior than their classmates—features typical of motivated students (see S1 Table).

The self-selected students who participated in our survey may have differed from their nonrespondent classmates in motivation—that is, in the motivation that prompted them to engage with the purpose of the study and self-select into the sample. Motivation might confer immunity to ego depletion, but only if the experienced depletion is mild—suggesting that there is a limit to the influence of motivation on ego depletion [18]. Therefore, our reliance on a sample of self-selected, motivated primary school students might have introduced only limited bias in estimating the ego-depletion effect.

The analytical sample contains 1,143 students in 126 4th- through 8th-grade classrooms at 28 schools. Descriptive statistics for the main variables in the paper are presented in Table 1. Half of the students (49.9%) were female (N = 570). Students’ age ranged from 10 to 16 (mean = 12.82, standard deviation = 1.43). The high maximum age reflects students who had to repeat a grade.

Table 1. Descriptive statistics.

Variable                                Mean    SD      Min     Max     N      Missing %
Baseline variables measured before the treatment
  Girl                                  0.499   0.500   0       1       1143   0
  Age (a)                               12.82   1.432   9.781   16.32   1143   0
  N of books (b)                        0       1       -0.724  3.226   1106   3.24
  GPA (c)                               3.737   0.983   1       5       1127   1.4
  Disruptive school beh. (d)            1.331   0.382   1       3.375   1025   10.32
  Math test                             0.676   0.266   0       1       983    14
  DG (e)                                0.814   0.389   0       1       983    14
  Altruism (f)                          0.852   0.355   0       1       983    14
Treatment
  Depleted students                     0.690   0.463   0       1       1143   0
Outcome variables: Cheating
  All misreports                        0.126   0.332   0       1       1143   0
  More valuable gift (g)                0.089   0.284   0       1       1096   0.0885
  Less valuable gift [fatigue] (g)      0.045   0.207   0       1       1046   0.045

(a) Students’ age refers to their age at the experiment.

(b) Assessed by the following question in the students’ questionnaire: “How many books do you have? You should count the number of books you and your parents possess together. Please do not include your coursebooks and newspapers.” Answer categories: less than one shelf (0–50 books); one shelf (ca. 50); 2–3 shelves (ca. 150); 4–6 shelves (ca. 300); 2 bookcases (ca. 300–600 books); 3 bookcases (ca. 600–1000 books); more than 1000 books. The variable was z-standardized to a mean of 0 and a standard deviation of 1.

(c) Grades are integers between 1 (worst) and 5 (best). Grades reported in this table are teacher-awarded grades reported by the teachers. The source of grades is students’ mid-term reporting cards issued in January 2020.

(d) An index was calculated from the mean of the following eight disruptive school behaviors: teasing others, playing or reading something, being noisy, walking around, eating or chewing gum, sending letters, talking or laughing, being late. We asked about the frequency of these behaviors by using the following scale: 1 = “Never”; 2 = “Sometimes”; 3 = “Frequently”; 4 = “Almost always.” We measured this variable in a baseline teacher survey in February 2020, when homeroom teachers answered this question for all students in their class.

(e) DG refers to delay of gratification. It was measured in a non-incentivized, real-choice situation in which students chose between a more valuable future outcome and a less valuable immediate outcome. Students saw a picture of colorful wristbands and were asked the following question: “Do you want to have one wristband now, or two wristbands tomorrow?” Immediate gratification was coded as 0; delayed gratification was coded as 1. We measured this variable in a baseline online survey in April 2020.

(f) We measured altruism with the following question: “Imagine that you are going to the zoo with some of your classmates. One of your classmates has forgotten to bring money for the entrance ticket. You have enough money for two entrance tickets. Would you lend your classmate the money for the entrance ticket?” Altruism is a binary variable equal to 1 if the student would lend the money and 0 otherwise. The category “I do not know” was coded as zero. We measured this variable in a baseline online survey in April 2020.

(g) Missingness is due to the preregistered decision rule under which students who opted for a more/less valuable gift on their individualized preference list than the one they rolled were excluded from the respective outcome.

The participating students were not representative of the Hungarian student population in terms of students’ academic achievement and social background. On average, students’ test scores and socioeconomic status are lower in our sample than the Hungarian average (see S2 Table). Corresponding with the participating students’ relatively disadvantaged social background, half of them (51%, N = 582) are from small rural settlements with fewer than 3,000 inhabitants.

The demographic composition of the analytical sample leads to an over-representation of students with weaker academic achievement and from poorer social backgrounds. Thus, students in our sample might have lower initial self-control, since low academic achievement translates into low self-control [19]. Nevertheless, the ego-depletion effect is weaker among low self-control students who have experience in resisting acute temptation and have gained experience in how to resist temptation (in contrast to high self-control students who might lack these experiences) [20]. Therefore, our estimates of the ego-depletion effect in our sample are conservative.

2.3 The measurement and coding of the outcome variable: Cheating

We measured our outcome (cheating) with a dice-roll task (a modified version of the standard dice-roll experiment [21]) to collect individual data about students’ dishonest behavior. Our procedure consisted of the following steps:

  1. We asked students to create their preference list by ranking six different objects of different monetary values according to the subjective value they attached to each object. We told the students that the rank of objects on the preference list should correspond to their desire for the object. For example, the first object on the preference list should be the object they most desire and would most like to receive as a gift. The six objects were a pencil case, a pouch, a mug, a pen, a keyring, and a badge, as shown in Fig 1.

  2. We asked students to roll a virtual six-sided die; the rolled number was revealed instantly.

  3. We showed students their subjective preference list and informed them how each object on their list would correspond to specific numbers between 1 and 6. For example, students were informed that 1 corresponded to their most desired object; 2 corresponded to the second object on the preference list, and 6 corresponded to the least desired object.

  4. We asked students to report the rolled number. We indicated that they would receive the object based on the reported number. Therefore, once we asked for the rolled number, students could report any number between 1 and 6 and the number students reported could differ from the rolled number.

Fig 1. The objects (gifts) used in the dice-roll exercise as incentives.


In sum, we tempted students to report a number that corresponds to the most desired object, which might differ from the object they had actually rolled. The original survey procedure of the dice-roll task can be seen in the following short video: https://osf.io/v49tq/.

Since students could report either a more or a less valuable object from their subjective preference list relative to the number they rolled, there were multiple ways of misreporting the rolled number. Consequently, there were several ways to empirically define students’ misreports (a short coding sketch follows the list below):

  1. Misreport = 1 if the rolled object’s rank differed from the reported object’s rank (regardless of whether students chose a more or a less valuable object); otherwise, misreport = 0. The classification that results from this coding is referred to as overall cheating, which was our preregistered primary outcome.

  2. Misreport = 1 if students indicated a more valuable object; misreport = 0 if students chose the rolled object; the variable has missing values if students indicated a less valuable object. The classification that results from this coding is referred to as cheating to obtain a more valuable object, which is our preregistered secondary outcome.

  3. Misreport = 1 if students indicated a less valuable object; misreport = 0 if students chose the rolled object; the variable has missing values if students indicated a more valuable object. Since choosing a subjectively less valuable object is irrational, the variable resulting from this classification most likely measures inattention rather than cheating.
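The three coding rules can be summarized in a short script. The sketch below is our own illustration rather than the authors’ analysis code (which is available on OSF); the column names rolled_rank and reported_rank are hypothetical and denote the preference-list rank (1 = most desired object) implied by the rolled and the reported number.

```python
import numpy as np
import pandas as pd

def code_misreports(df: pd.DataFrame) -> pd.DataFrame:
    """Derive the three misreport codings from hypothetical 'rolled_rank' and
    'reported_rank' columns, where 1 denotes the most desired object."""
    out = df.copy()
    same = out["reported_rank"] == out["rolled_rank"]
    more_valuable = out["reported_rank"] < out["rolled_rank"]   # lower rank = more desired
    less_valuable = out["reported_rank"] > out["rolled_rank"]

    # 1. Overall cheating (primary outcome): any discrepancy counts as a misreport.
    out["misreport_any"] = (~same).astype(int)
    # 2. Cheating for a more valuable object (secondary outcome):
    #    students who reported a less valuable object are set to missing.
    out["misreport_up"] = np.select([more_valuable, same], [1.0, 0.0], default=np.nan)
    # 3. Reporting a less valuable object (interpreted as inattention):
    #    students who reported a more valuable object are set to missing.
    out["misreport_down"] = np.select([less_valuable, same], [1.0, 0.0], default=np.nan)
    return out
```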

Fig 2 shows examples of (mis)reporting the rolled numbers.

Fig 2. Examples of (mis)reporting the rolled number.


In our data, the rolled number was misreported by 12.6% of students (N = 144). Most of the misreporters (N = 97) indicated a more valuable object, a sign of purposeful cheating. However, a smaller share of students (N = 47) indicated a less valuable object, possibly due to inattention.

2.4 The treatment exposure

The treatment exposure consisted of a 20-minute-long questionnaire designed to deplete the students. The questionnaire depleted students’ resources in various ways. First, the math test required cognitive effort [7], which consumed students’ self-control in a similar way to a low-stakes assignment (see the S1 Appendix for sample questions used in the math test). Second, the questionnaire required students to concentrate for 20 minutes, challenging their self-control. In sum, the treatment exposure exhausted students similarly to the depletion they experience in school.

2.5 Randomization and balance

Based on the value of a randomly generated number, we experimentally manipulated when students answered the dice-roll task, i.e., at the beginning or at the end of the 20-minute-long questionnaire. Thus, in the control group, students first completed the dice-roll task and then the questionnaire; these students were therefore not depleted. In the treatment group, we reversed the order: students first completed the questionnaire and then the dice-roll task, and were therefore depleted.

We purposefully designed the treatment group to be larger (N = 789, 69%) than the control group (N = 354, 31%), because the collected data were also used for a different study in which we investigated short-term changes in students’ attitudes [22]. That research question required a stable ordering of questionnaire items.

Randomization was fairly successful (see S3 Table). There was no significant difference between the control and treated groups in the baseline variables we collected, such as age, number of books at home, GPA (January 2020), teacher-reported disruptive school behavior (February 2020), math test score (April 2020), delay of gratification (April 2020), or altruism (April 2020). However, there were significantly fewer girls in the treated group (by 8.4 percentage points). Therefore, we control for gender in all of our later estimations.
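A minimal sketch of such a balance check (Welch t-tests of baseline variables by treatment status) is shown below. It is our own illustration under assumed column names (treated, girl, age, n_books, gpa, disruptive, math_test, dg, altruism); the actual variable names in the OSF dataset may differ, and the preregistered balance table in S3 Table remains the authoritative source.

```python
import pandas as pd
from scipy import stats

BASELINE_VARS = ["girl", "age", "n_books", "gpa", "disruptive", "math_test", "dg", "altruism"]

def balance_table(df: pd.DataFrame) -> pd.DataFrame:
    """Compare baseline means between treated and control students."""
    rows = []
    for var in BASELINE_VARS:
        treated = df.loc[df["treated"] == 1, var].dropna()
        control = df.loc[df["treated"] == 0, var].dropna()
        t_stat, p_val = stats.ttest_ind(treated, control, equal_var=False)  # Welch t-test
        rows.append({"variable": var,
                     "mean_treated": treated.mean(),
                     "mean_control": control.mean(),
                     "p_value": p_val})
    return pd.DataFrame(rows)
```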

2.6 Hypotheses

We preregistered the hypothesis that (exhausted) students would cheat more in the treated group (in which students received the dice-roll task after the 20-minute-long questionnaire) than in the control group (in which students received the dice-roll task before the 20-minute-long questionnaire).

2.7 Statistical analysis

To test our hypothesis, we estimated the following classroom-fixed-effect linear probability model:

$\text{Cheat}_{sc} = \beta_0 + \beta_1 T_{sc} + \beta_2 X_{sc} + \theta_c + \varepsilon_{sc}$

where $\text{Cheat}_{sc}$ is the binary measure of cheating for student $s$ in classroom $c$, and $T_{sc}$ is a binary variable equal to 1 if student $s$ from classroom $c$ is in the treated group. $X_{sc}$ is a set of control variables, $\theta_c$ denotes classroom fixed effects, and $\varepsilon_{sc}$ is the error term. The coefficient $\beta_1$ is the causal effect of the treatment.

As a robustness check, we re-estimated all models using logistic regression (see S4 Table). Results are qualitatively similar.
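For illustration, the specification could be estimated in Python roughly as in the sketch below (classroom fixed effects via classroom dummies, standard errors clustered at the school level). This is not the authors’ original script, which is available on OSF, and the column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names: cheat (0/1), treated (0/1), girl, age,
# classroom_id, and school_id; the OSF dataset may use different names.
df = pd.read_csv("experiment_data.csv")
df = df.dropna(subset=["cheat", "treated", "girl", "age", "classroom_id", "school_id"])

# Linear probability model with classroom fixed effects (C(classroom_id) adds
# one dummy per classroom) and the preregistered controls gender and age.
model = smf.ols("cheat ~ treated + girl + age + C(classroom_id)", data=df)

# Cluster standard errors at the school level, as reported in Table 2.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})
print(result.params["treated"], result.bse["treated"])
```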

2.8 IRB approval and consent

The study was reviewed and approved by the IRB office at the Center for Social Sciences, Budapest. We obtained consent at multiple points. First, school principals and teachers provided written consent to participate in the study. Second, parents provided written active consent for the retrieval of administrative records via teachers and for their children’s participation in the survey. Students received their reported object at the end of the study. The anonymized data file does not allow the researcher to trace individual students’ dishonest behavior. Teachers and schools had no access to students’ online survey inputs.

3. Results

Fig 3 shows the share of those who misreported the rolled number in the treated and control groups.

Fig 3. Distribution of cheating behavior as a function of treatment with 95% confidence interval.


In Table 2, we report two specifications of each model. The first specification includes the preregistered control variables of gender and age. The second specification includes further controls measured at the baseline (number of books at home, GPA, teacher-reported disruptive school behavior).

Table 2. Results of regression analyses with linear probability models, unstandardized regression coefficients.

                              Misreported dice roll             Opted for a more valuable object    Opted for a less valuable object
                              (1)              (2)              (3)              (4)                (5)              (6)
Controls                      preregistered    full             preregistered    full               preregistered    full
Treated                       0.044+ (0.024)   0.045+ (0.024)   0.044* (0.016)   0.045** (0.016)    0.002 (0.019)    0.002 (0.019)
Observations                  1,143            1,143            1,096            1,096              1,046            1,046
R-squared                     0.144            0.153            0.158            0.165              0.130            0.141
Cohen’s d effect size         0.132            0.135            0.155            0.160              0.010            0.009
Mean in the control group     0.088            0.088            0.053            0.053              0.039            0.039

All models contain a constant, classroom fixed effects, and the preregistered control variables: gender and age. Standard errors were clustered at the school level.

The list of baseline control variables in full specifications is as follows: gender, age, N of books, GPA, teacher-reported disruptive school behavior, math test, delay of gratification (DG), baseline altruism. Missing values in baseline control variables have been replaced with 0, and separate dummy variables control for missing status. Descriptive statistics and the coding of baseline control variables are shown in Table 1.

Two-sided t-tests are used. Robust standard errors in parentheses.

** p<0.01

* p<0.05, + p<0.1.

In Model 3, the p-value of the treatment coefficient is 0.0119. Thus, the coefficient remains significant after correcting the significance level for multiple testing, since 0.0119 < 0.05/3 [= 0.0167].

Employing a two-sided t-test, we observe a marginally significant difference of about 4.4 percentage points when considering all misreporting (Columns 1 and 2 in Table 2). This coefficient is only marginally significant, possibly because of the noise generated by those who opted for a less desired object, and should therefore be interpreted with caution.

When we focus only on those who opted for a more valuable object (Columns 3 and 4 in Table 2), we find that the size of the treatment effect remains essentially the same but becomes significant at the 5% level (Model 3) and the 1% level (Model 4). By contrast, there is no significant difference between the treated and control students’ outcomes for students who opted for a less valuable object (Columns 5 and 6 in Table 2). This suggests that students’ inattention is not associated with the treatment.
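As a back-of-the-envelope check on the reported magnitudes (our own approximation, not necessarily the formula the authors used), the roughly 4.4-percentage-point treatment effect can be converted into a Cohen’s d by dividing it by the standard deviation of the binary outcome, and the Bonferroni comparison from the note below Table 2 can be reproduced directly:

```python
import numpy as np

# Treatment effect and outcome distribution for "all misreports" (text and Table 2).
effect = 0.044        # estimated treatment effect (4.4 percentage points, as a proportion)
p_overall = 0.126     # overall share of misreporting students reported in the text
sd_binary = np.sqrt(p_overall * (1 - p_overall))

print(round(effect / sd_binary, 3))   # ~0.133, close to the Cohen's d of 0.132-0.135 in Table 2

# Bonferroni-style correction for three outcomes (note below Table 2):
print(0.0119 < 0.05 / 3)              # True: Model 3 stays significant after correction
```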

4. Discussion and conclusion

We conducted a large-scale preregistered field experiment to explore the scope conditions of the ego-depletion theory [1]. We found that primary school students whose self-control had been depleted (by doing cognitively demanding exercises) cheated more to achieve the desired outcome in a subsequent task.

Our results have substantive, methodological, and practical consequences. Substantively, we explored some specific conditions under which ego depletion occurs. Primary school students’ self-control can be depleted with simple assignments, and students with depleted self-control engage in fraudulent behavior.

Methodologically, our experiment informs subsequent research that the ego-depletion effect can be explored beyond the sequential task paradigm and in a real-life situation such as school assignments. Nevertheless, the Cohen’s d effect size of the ego-depletion effect we observed (between 0.134 and 0.157) was approximately a quarter of the effect size reported in the first meta-analysis by Hagger et al. [3].

Finally, our findings have practical consequences. Teacher-written assignments often test students at the end of the school day after four or five 45-minute classes. Our results suggest that students might be tempted to cheat during these tests as they are depleted. Teachers should consider scheduling assignments at the beginning of the school day, when students are not exhausted.

Our online experiment is not free from limitations. We had limited scope to control for environmental factors (e.g., internet speed, IT device, home context), potentially resulting in an increase in noise that may have led to less accurate estimates. Nevertheless, our study should reassure the skeptics of ego-depletion theory since it was based on a large-scale and preregistered experiment. Thus, small sample size and specification search—which are often considered sources of bias in prior empirical works—do not impact our results. The depletion of students’ self-control in schools is an issue that requires more future research.

Supporting information

S1 Table. Mean differences in students’ baseline grades between those who answered/did not answer the online survey.

(DOCX)

S2 Table. Differences between students in schools that are/are not in our experiment based on 6th-grade students’ data in a nationwide administrative dataset.

(DOCX)

S3 Table. Balance in the sample: Mean of baseline variables in the control group and treated group relative to the control group.

(DOCX)

S4 Table. Results of regression analysis with conditional logit model, logit coefficients.

(DOCX)

S1 Appendix. Sample questions used in the math test.

(DOCX)

Data Availability

The data are available at the OSF platform: https://osf.io/2ykp8/wiki/home/.

Funding Statement

T.K. and H.J.K. acknowledge support from the Institute of Economics' internal grant that supported innovative research ideas during the COVID-19 epidemic. T.K. acknowledges support from the Hungarian National Research, Development and Innovation Office NKFIH (grant number K-135766), from the János Bolyai Research Scholarship of the Hungarian Academy of Sciences (BO/00569/21/9) and from the ÚNKP-21-5 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund. H.J.K. acknowledges support from the Hungarian National Research, Development and Innovation Office NKFIH (grant number K-119683).

References

  • 1. Baumeister RF, Bratslavsky E, Muraven M, Tice DM. Ego depletion: Is the active self a limited resource? J Pers Soc Psychol. 1998;74(5):1252–65.
  • 2. Friese M, Loschelder DD, Gieseler K, Frankenbach J, Inzlicht M. Is Ego Depletion Real? An Analysis of Arguments. Pers Soc Psychol Rev. 2019;23(2):107–31. doi: 10.1177/1088868318762183
  • 3. Hagger MS, Wood C, Stiff C, Chatzisarantis NLD. Ego Depletion and the Strength Model of Self-Control: A Meta-Analysis. Psychol Bull. 2010;136(4):495–525. doi: 10.1037/a0019486
  • 4. Carter EC, Kofler LM, Forster DE, McCullough ME. A Series of Meta-Analytic Tests of the Depletion Effect: Self-Control Does Not Seem to Rely on a Limited Resource. J Exp Psychol Gen. 2015;144(4):796–815. doi: 10.1037/xge0000083
  • 5. Hagger MS, Chatzisarantis NLD, Alberts H, Anggono CO, Batailler C, Birt AR, et al. A Multilab Preregistered Replication of the Ego-Depletion Effect. Perspect Psychol Sci. 2016;11(4):546–73. doi: 10.1177/1745691616652873
  • 6. Lee N, Chatzisarantis N, Hagger MS. Adequacy of the Sequential-Task Paradigm in Evoking Ego-Depletion and How to Improve Detection of Ego-Depleting Phenomena. Front Psychol. 2016;7:136. doi: 10.3389/fpsyg.2016.00136
  • 7. Baumeister RF, Vohs KD. Strength model of self-regulation as limited resource: Assessment, controversies, update. Adv Exp Soc Psychol. 2016;54:67–127. doi: 10.1016/bs.aesp.2016.04.001
  • 8. Orosz G, Farkas D, Roland-Lévy C. Are Competition and Extrinsic Motivation Reliable Predictors of Academic Cheating? Front Psychol. 2013;4:87. doi: 10.3389/fpsyg.2013.00087
  • 9. Mead NL, Baumeister RF, Gino F, Schweitzer ME, Ariely D. Too tired to tell the truth: Self-control resource depletion and dishonesty. J Exp Soc Psychol. 2009;45(3):594–7. doi: 10.1016/j.jesp.2009.02.004
  • 10. Gino F, Schweitzer ME, Mead NL, Ariely D. Unable to resist temptation: How self-control depletion promotes unethical behavior. Organ Behav Hum Decis Process. 2011;115(2):191–203. doi: 10.1016/j.obhdp.2011.03.001
  • 11. Bucciol A, Piovesan M. Luck or cheating? A field experiment on honesty with children. J Econ Psychol. 2011;32(1):73–8. doi: 10.1016/j.joep.2010.12.001
  • 12. Glätzle-Rützler D, Lergetporer P. Lying and age: An experimental study. J Econ Psychol. 2015;46:12–25.
  • 13. Lurquin JH, Michaelson LE, Barker JE, Gustavson DE, von Bastian CC, Carruth NP, et al. No evidence of the ego-depletion effect across task characteristics and individual differences: A pre-registered study. PLoS One. 2016;11(2):1–20. doi: 10.1371/journal.pone.0147770
  • 14. Köbis NC, Verschuere B, Bereby-Meyer Y, Rand D, Shalvi S. Intuitive Honesty Versus Dishonesty: Meta-Analytic Evidence. Perspect Psychol Sci. 2019;14(5):778–96. doi: 10.1177/1745691619851778
  • 15. Capraro V, Schulz J, Rand DG. Time pressure and honesty in a deception game. J Behav Exp Econ. 2019;79:93–9. doi: 10.1016/j.socec.2019.01.007
  • 16. Shalvi S, Eldar O, Bereby-Meyer Y. Honesty Requires Time (and Lack of Justifications). Psychol Sci. 2012;23(10):1264–70. doi: 10.1177/0956797612443835
  • 17. Welsh DT, Ordóñez LD. The dark side of consecutive high performance goals: Linking goal setting, depletion, and unethical behavior. Organ Behav Hum Decis Process. 2014;123(2):79–89. doi: 10.1016/j.obhdp.2013.07.006
  • 18. Vohs KD, Baumeister RF, Schmeichel BJ. Motivation, personal beliefs, and limited resources all contribute to self-control. J Exp Soc Psychol. 2012;48(4):943–7. doi: 10.1016/j.jesp.2012.03.002
  • 19. Duckworth AL, Taxer JL, Eskreis-Winkler L, Galla BM, Gross JJ. Self-Control and Academic Achievement. Annu Rev Psychol. 2019;70(1):373–99. doi: 10.1146/annurev-psych-010418-103230
  • 20. Imhoff R, Schmidt AF, Gerstenberg F. Exploring the Interplay of Trait Self-Control and Ego Depletion: Empirical Evidence for Ironic Effects. Eur J Pers. 2014;28(5):413–24. doi: 10.1002/per.1899
  • 21. Fischbacher U, Föllmi-Heusi F. Lies in disguise—an experimental study on cheating. J Eur Econ Assoc. 2013;11(3):525–47.
  • 22. Kiss HJ, Keller T. The short-term effect of COVID-19 on schoolchildren’s generosity. Appl Econ Lett. 2021. doi: 10.1080/13504851.2021.1893892

Decision Letter 0

Valerio Capraro

18 Aug 2021

PONE-D-21-23313

Do exhausted primary school students cheat more? A randomized field experiment

PLOS ONE

Dear Dr. Keller,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please find below the reviewer's comments, as well as those of mine.

Please submit your revised manuscript by Oct 02 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Valerio Capraro

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

Additional Editor Comments:

I have now collected one review from one expert on the field, whom I thank for their detailed and thoughtful feedback. I was unable to find a second reviewer. However the review I could collect is very detailed and I am myself familiar with the topic of this manuscript. Therefore, I feel confident in making a decision with only one review. The reviewer thinks that the paper has potential, but suggests a major revision. I agree with the reviewer, therefore I would like to invite you to revise your work for Plos One. Apart from the reviewer's comments, I would like to add two additional comments. I have noticed that you do not review the literature on time pressure and cheating, which I think is very relevant (Gunia et al. 2012; Shalvi et al. 2012; Capraro, 2017; Lohse, Simon & Konrad, 2018; Capraro, Schulz & Rand, 2019). Moreover, there is one review article (Capraro, 2019) and one meta-analysis (Kobis et al. 2020) on the role of System 1/System 2 on cheating and other social behaviours, which I think are relevant too.

I am looking forward for the revision.

Capraro, V. (2017). Does the truth come naturally? Time pressure increases honesty in one-shot deception games. Economics Letters, 158, 54-57.

Capraro, V. (2019). The dual-process approach to human sociality: A review. Available at SSRN 3409146.

Capraro, V., Schulz, J., & Rand, D. G. (2019). Time pressure and honesty in a deception game. Journal of Behavioral and Experimental Economics, 79, 93-99.

Gunia, B. C., Wang, L., Huang, L. I., Wang, J., & Murnighan, J. K. (2012). Contemplation and conversation: Subtle influences on moral decision making. Academy of Management Journal, 55(1), 13-33.

Köbis, N. C., Verschuere, B., Bereby-Meyer, Y., Rand, D., & Shalvi, S. (2019). Intuitive honesty versus dishonesty: Meta-analytic evidence. Perspectives on Psychological Science, 14(5), 778-796.

Lohse, T., Simon, S. A., & Konrad, K. A. (2018). Deception under time pressure: Conscious decision or a problem of awareness?. Journal of Economic Behavior & Organization, 146, 31-42.

Shalvi, S., Eldar, O., & Bereby-Meyer, Y. (2012). Honesty requires time (and lack of justifications). Psychological science, 23(10), 1264-1270.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper studies the effects of ego depletion on cheating using an online experiment involving Hungarian students aged 9-16. Self-control is depleted using a cognitively demanding task; cheating is measured on a variant of the dice-roll task.

The authors find that there is an ego depletion effect, as those who depleted self-control are more likely to cheat.

There is no consensus in the literature on the ego depletion effect. The authors contribute to the literature in three main directions: 1) they design a task that goes beyond the common sequential task paradigm; 2) they focus on a cheating outcome; 3) they exploit a large-scale experiment involving more than 1,000 students.

I generally like the research question and the experimental setting. The empirical analysis is very simple but neat. I report below some comments, in no particular order, that I hope can help to improve further versions of the paper.

- I learn from Table A4 that the age range of the subjects is 9-16. Since the text repeatedly refers to "primary school students", I expected the subjects to be younger. I think some comments should be made on the Hungarian education system that, as far as I understand, is quite peculiar. Moreover, I believe that information on the age range is more relevant than the school level. For this reason, I would like the text to refer to the age range from time to time. I realized what the age range is only looking at Table A4.

- Three different definitions of cheating are considered. The third one (cheating toward less valuable objects), however, is irrational and probably measures mistakes rather than cheating. I suggest you to completely remove the third definition from the analysis.

- Two sentences in the text raise concerns on the representativeness of the sample. First, the pool of potential subjects perform worse at school and has lower status than the average Hungarian ("On average, students’ test scores and socioeconomic status is lower in our sample than the Hungarian average (see Table A1 in the Appendix)." Second, those who agreed to participate to the experiment perform better at school than those who did not agree to participate ("On average, students who participated in our survey had better grades and better school behavior than their classmates (see Table A2 in the Appendix)." I would like to read comments on how the non-representativeness of the pool of potential subjects and the self-selection into the sample may affect the results.

- You write "In sum, we tempted students to report a higher number than they had rolled to receive a more desirable object". However, if I understand correctly the task, students are tempted to report a lower number (the lower number, the better the prize).

- Is there a reason why you chose to have the treatment group twice as large as the control group?

- Below Table 1 you write that "Descriptive statistics and the coding of baseline control variables is shown in Table A2 in the Appendix". I think you refer to Table A4. However, I do not see there a definition of each control variable. In particular, could you describe the variable "Number of books at home"? I expected this variable to take natural values (0, 1, 2, ...), but I realize it also takes negative ones (see Table A4). Based on the table, it seems you standardized the variable. Why?

- Table 1, columns 1-2. The coefficient is significant only at 10%. Many researchers would tell that a 10% significance is not a significance, especially if you consider the large sample size you have. From the comparison of all the columns, it seems you cannot get significant results here because of the noise brought by those who cheated toward less valuable objects.

- You write "We modified the standard dice-roll experiment to collect individual data about students’ dishonest behavior while keeping their identities hidden.", but you do not go more in detail. I hope you did not deceive subjects, as deception in experimental economics is (as a matter of fact) banned. I would like to read a comment on this in Section II.

- Why did you use a linear OLS model rather than a non-linear logit/probit model, which is more appropriate when you deal with binary dependent variables?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No


While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Dec 1;16(12):e0260141. doi: 10.1371/journal.pone.0260141.r002

Author response to Decision Letter 0


23 Sep 2021


Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE’s style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Thank you for your guidance. We have checked the documents and formatted the manuscript accordingly. Most importantly, we use “Vancouver” style of reference, line numbers, and double-spaced layout.

2. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

Data and analytical scripts are publicly available at the project’s OSF homepage:

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

We have included the following section in the manuscript:

IRB approval and consent

The study was reviewed and approved by the IRB office at the Center for Social Sciences, Budapest. We obtained consent at multiple points. First, school principals and teachers provided written consent to participate in the study. Second, parents provided written active consent for the retrieval of administrative records via teachers and for their children’s participation in the survey. Students received their reported object at the end of the study. The anonymized data file does not allow the researcher to trace individual students’ dishonest behavior. Teachers and schools had no access to students’ online survey inputs.

Additional Editor Comments:

I have now collected one review from one expert on the field, whom I thank for their detailed and thoughtful feedback. I was unable to find a second reviewer. However the review I could collect is very detailed and I am myself familiar with the topic of this manuscript. Therefore, I feel confident in making a decision with only one review. The reviewer thinks that the paper has potential, but suggests a major revision. I agree with the reviewer, therefore I would like to invite you to revise your work for Plos One. Apart from the reviewer’s comments, I would like to add two additional comments. I have noticed that you do not review the literature on time pressure and cheating, which I think is very relevant (Gunia et al. 2012; Shalvi et al. 2012; Capraro, 2017; Lohse, Simon & Konrad, 2018; Capraro, Schulz & Rand, 2019). Moreover, there is one review article (Capraro, 2019) and one meta-analysis (Kobis et al. 2020) on the role of System 1/System 2 on cheating and other social behaviours, which I think are relevant too.

I am looking forward for the revision.

Capraro, V. (2017). Does the truth come naturally? Time pressure increases honesty in one-shot deception games. Economics Letters, 158, 54-57.

Capraro, V. (2019). The dual-process approach to human sociality: A review. Available at SSRN 3409146.

Capraro, V., Schulz, J., & Rand, D. G. (2019). Time pressure and honesty in a deception game. Journal of Behavioral and Experimental Economics, 79, 93-99.

Gunia, B. C., Wang, L., Huang, L. I., Wang, J., & Murnighan, J. K. (2012). Contemplation and conversation: Subtle influences on moral decision making. Academy of Management Journal, 55(1), 13-33.

Köbis, N. C., Verschuere, B., Bereby-Meyer, Y., Rand, D., & Shalvi, S. (2019). Intuitive honesty versus dishonesty: Meta-analytic evidence. Perspectives on Psychological Science, 14(5), 778-796.

Lohse, T., Simon, S. A., & Konrad, K. A. (2018). Deception under time pressure: Conscious decision or a problem of awareness?. Journal of Economic Behavior & Organization, 146, 31-42.

Shalvi, S., Eldar, O., & Bereby-Meyer, Y. (2012). Honesty requires time (and lack of justifications). Psychological science, 23(10), 1264-1270.

Thank you very much for the overall positive evaluation of the manuscript. We have taken your advice and cite this relevant body of literature at the end of the introduction. We say that:

“The results imply that dishonesty but not honesty is intuitive since resisting the temptation of dishonesty needs resources that might be consumed in tasks requiring self-control. Nevertheless, our paper has a narrow focus on the ego-depletion effect. We acknowledge but leave apart from our focus the more general scientific debates on intuitive honesty/dishonesty (Köbis et al. 2019) and the related tests on time pressure (Capraro, Schulz, and Rand 2019; Shalvi, Eldar, and Bereby-Meyer 2012) or cognitive load (Welsh and Ordóñez 2014) that bring (conflicting) experimental evidence on this debate. “

5. Review Comments to the Author

Reviewer #1: The paper studies the effects of ego depletion on cheating using an online experiment involving Hungarian students aged 9-16. Self-control is depleted using a cognitively demanding task; cheating is measured on a variant of the dice-roll task.

The authors find that there is an ego depletion effect, as those who depleted self-control are more likely to cheat.

There is no consensus in the literature on the ego depletion effect. The authors contribute to the literature in three main directions: 1) they design a task that goes beyond the common sequential task paradigm; 2) they focus on a cheating outcome; 3) they exploit a large-scale experiment involving more than 1,000 students.

I generally like the research question and the experimental setting. The empirical analysis is very simple but neat. I report below some comments, in no particular order, that I hope can help to improve further versions of the paper.

- I learn from Table A4 that the age range of the subjects is 9-16. Since the text repeatedly refers to “primary school students”, I expected the subjects to be younger. I think some comments should be made on the Hungarian education system that, as far as I understand, is quite peculiar. Moreover, I believe that information on the age range is more relevant than the school level. For this reason, I would like the text to refer to the age range from time to time. I realized what the age range is only looking at Table A4.

Thank you very much for this comment which motivated us to move Table A4 to the main body of the manuscript (the updated Table A4 is referred to as Table 1 in the recent manuscript). We briefly refer to the Hungarian educational system and provide information on the age range of students. We say in the updated manuscript that

“Primary education in Hungary is compulsory and encompasses the primary and lower-secondary ISCED 1 and ISCED 2 levels, comparable to elementary and middle school in the United States.”

Furthermore:

“The age of students ranged from 10 to 16 years (mean = 12.82, standard deviation = 1.43). The high maximum age is due to students who had to repeat classes.”

- Three different definitions of cheating are considered. The third one (cheating toward less valuable objects), however, is irrational and probably measures mistakes rather than cheating. I suggest you to completely remove the third definition from the analysis.

Motivated by your valuable insight, we do not call it cheating when students choose a less valuable object; instead, we refer to this as inattention. Nevertheless, since this outcome was preregistered, we show the result. However, we have toned down the language used to discuss the results on this outcome.

- Two sentences in the text raise concerns on the representativeness of the sample. First, the pool of potential subjects perform worse at school and has lower status than the average Hungarian (“On average, students’ test scores and socioeconomic status is lower in our sample than the Hungarian average (see Table A1 in the Appendix).” Second, those who agreed to participate to the experiment perform better at school than those who did not agree to participate (“On average, students who participated in our survey had better grades and better school behavior than their classmates (see Table A2 in the Appendix).” I would like to read comments on how the non-representativeness of the pool of potential subjects and the self-selection into the sample may affect the results.

We have elaborated on the issues of self-selection and non-representativeness in the updated manuscript. We write that:

“The self-selected students who participated in our survey may have differed from their nonrespondent classmates in motivation—that is, in the motivation that may have prompted them to engage with the purpose of the study and self-select into the sample. Motivation might confer immunity to ego depletion, but only if the experienced depletion is mild—suggesting that there is a limit to the influence of motivation on ego depletion (Vohs, Baumeister, and Schmeichel 2012). Therefore, our reliance on a sample of self-selected, motivated primary school students might have introduced only limited bias in estimating the ego-depletion effect.

The demographic composition of the analytical sample leads to an over-representation of students with weaker academic achievement and from poorer social backgrounds. Thus, students in our sample might have lower initial self-control, since low academic achievement translates into low self-control (Duckworth et al. 2019). Nevertheless, the ego-depletion effect is weaker among low self-control students, who have experience in resisting acute temptation and have learned how to do so (in contrast to high self-control students, who might lack this experience) (Imhoff, Schmidt, and Gerstenberg 2014). Therefore, the composition of our sample, too, might have introduced only limited bias in estimating the ego-depletion effect.”

- You write “In sum, we tempted students to report a higher number than they had rolled to receive a more desirable object”. However, if I understand the task correctly, students are tempted to report a lower number (the lower the number, the better the prize).

Thank you for your careful reading! We have corrected the inconsistency and elaborated further in the section describing the measurement of cheating.
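
For illustration only, the short Python sketch below shows one way the rolled and reported numbers could be classified into the outcomes discussed in this exchange, given that a lower number yields a more desirable object. It is a hypothetical reconstruction, not the authors' actual coding; the function name and the treatment of equal numbers are assumptions.

    # Hypothetical sketch: classify a report in the modified dice-roll task,
    # assuming a LOWER number maps to a MORE desirable object.
    # This is not the authors' code; the equal-numbers case is treated as a
    # consistent (non-cheating) report for simplicity.
    def classify_report(rolled: int, reported: int) -> str:
        if reported < rolled:
            return "cheating toward a more desirable object"
        if reported > rolled:
            return "inattention (less desirable object chosen)"
        return "consistent report"

    print(classify_report(rolled=5, reported=2))  # cheating toward a more desirable object
    print(classify_report(rolled=2, reported=5))  # inattention
    print(classify_report(rolled=3, reported=3))  # consistent report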

- Is there a reason why you chose to have the treatment group twice as large as the control group?

We purposefully designed the treatment group to be larger (N = 789; 69%) than the control group (N = 354; 31%), because the data collected were also used for a different study, in which we investigated short-term changes in students’ attitudes (Kiss and Keller 2021). That research question required a stable ordering of questionnaire items.

- Below Table 1 you write that “Descriptive statistics and the coding of baseline control variables is shown in Table A2 in the Appendix”. I think you refer to Table A4. However, I do not see a definition of each control variable there. In particular, could you describe the variable “Number of books at home”? I expected this variable to take natural values (0, 1, 2, ...), but I realize it also takes negative ones (see Table A4). Based on the table, it seems you standardized the variable. Why?

We moved Table A4 to the main body of the manuscript (the updated Table A4 is referred to as Table 1 in the revised manuscript), and we provide a detailed description of the deployed variables.
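
To illustrate why a standardized control such as “Number of books at home” can show negative values in a descriptive table, here is a minimal Python sketch of z-standardization; the category codes are hypothetical and not taken from the authors' questionnaire.

    # Minimal z-standardization sketch: values below the sample mean become
    # negative, which is why a standardized "Number of books at home" variable
    # can take negative values. The raw ordinal codes below are hypothetical.
    import numpy as np

    raw = np.array([0, 1, 1, 2, 3, 3, 4, 5])       # hypothetical category codes
    z = (raw - raw.mean()) / raw.std(ddof=1)       # rescaled to mean 0, standard deviation 1

    print(np.round(z, 2))                          # codes below the mean are negative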

- Table 1, columns 1-2. The coefficient is significant only at the 10% level. Many researchers would say that significance at the 10% level is not significance at all, especially considering the large sample size you have. From the comparison of all the columns, it seems you cannot get significant results here because of the noise introduced by those who cheated toward less valuable objects.

Motivated by your comment, we have changed the language used to discuss the main results. We now say that “The marginally significant coefficient might be due to noise generated by those who opted for a less desired object and thus might be insignificant.”

- You write “We modified the standard dice-roll experiment to collect individual data about students’ dishonest behavior while keeping their identities hidden.”, but you do not go into more detail. I hope you did not deceive subjects, as deception in experimental economics is (as a matter of fact) banned. I would like to read a comment on this in Section II.

We added a section that describes the parental consent we obtained and provides information about the IRB approval we received.

- Why did you use a linear OLS model rather than a non-linear logit/probit model, which is more appropriate when you deal with binary dependent variables?

As a robustness check, we re-estimated all models using logistic regression (see S4 Table). Results are qualitatively similar.
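
For readers who wish to run a similar robustness check, a minimal Python sketch on simulated data follows; it compares a linear probability model with a logistic regression. The variable names, the simulated effect size, and the use of statsmodels are illustrative assumptions, and the authors' actual specification (a conditional logit, S4 Table) includes elements not reproduced here.

    # Illustrative robustness-check sketch on simulated data: a linear probability
    # model (OLS on a binary outcome) versus a logistic regression. Variable names
    # and effect sizes are hypothetical; this is not the authors' specification.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1143                                           # 789 treated + 354 control, as above
    df = pd.DataFrame({"treated": rng.binomial(1, 0.69, n)})
    df["cheated"] = rng.binomial(1, 0.10 + 0.05 * df["treated"])   # simulated binary outcome

    lpm = smf.ols("cheated ~ treated", data=df).fit(cov_type="HC1")   # linear probability model
    logit = smf.logit("cheated ~ treated", data=df).fit(disp=False)   # logistic regression

    print(lpm.params["treated"])                       # difference in cheating rates
    print(logit.get_margeff().summary())               # average marginal effect, comparable to the LPM slope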

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Valerio Capraro

6 Oct 2021

PONE-D-21-23313R1

Do exhausted primary school students cheat more? A randomized field experiment

PLOS ONE

Dear Dr. Keller,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Nov 20 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Valerio Capraro

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments (if provided):

The reviewer suggests one final comment before publication. Please address this comment at your earliest convenience. I am looking forward to receiving the final version.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors paid attention to my previous comments and did a good job in revising the paper.

Please notice that on line 311 you refer to Table 1, whereas the correct reference is now Table 2.

I have no further comments.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Dec 1;16(12):e0260141. doi: 10.1371/journal.pone.0260141.r004

Author response to Decision Letter 1


2 Nov 2021

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


We do not cite Ariely’s retracted paper (Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end), published in PNAS and retracted in September 2021. The retracted paper concerns cheating from the perspective of self-concept maintenance theory, not ego-depletion theory. Accordingly, we have not cited this research in any earlier version of this manuscript either.

Reviewer’s comment:

The authors paid attention to my previous comments and did a good job in revising the paper.

Please notice that on line 311 you refer to Table 1, whereas the correct reference is now Table 2.

Thank you very much for your careful reading. We have corrected our mistake in line 311.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 2

Valerio Capraro

4 Nov 2021

Do exhausted primary school students cheat more? A randomized field experiment

PONE-D-21-23313R2

Dear Dr. Keller,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Valerio Capraro

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Valerio Capraro

8 Nov 2021

PONE-D-21-23313R2

Do exhausted primary school students cheat more? A randomized field experiment

Dear Dr. Keller:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Valerio Capraro

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Mean differences in students’ baseline grades between those who answered/did not answer the online survey.

    (DOCX)

    S2 Table. Differences between students in schools that are/are not in our experiment based on 6th-grade students’ data in a nationwide administrative dataset.

    (DOCX)

    S3 Table. Balance in the sample: Mean of baseline variables in the control group and treated group relative to the control group.

    (DOCX)

    S4 Table. Results of regression analysis with conditional logit model, logit coefficients.

    (DOCX)

    S1 Appendix. Sample questions used in the math test.

    (DOCX)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    The data are available at the OSF platform: https://osf.io/2ykp8/wiki/home/.

