Deception has long been assumed to conjure diverse affective experiences (Trovillo, 1938). Liars, more than truth-tellers, are theorized to feel guilt, fear, and nervousness (e.g., Ekman, 1985; Zuckerman et al., 1981). Additionally, deception has been proposed to elicit positive affect. Ekman (1985) broadly defined duping delight as any positive affective experience that occurs in anticipation of, during, or following a lie. Empirical evidence for this definition of duping delight has primarily come from studies of affective cues during deceptive acts. For example, in a study of emotional, high-stakes lies in which people pleaded for help to find a person they had recently murdered, smiles were described by ten Brinke and Porter (2012) as a sign of duping delight. However, research suggests that smiles may occur for multiple reasons (e.g., to signal affiliation or dominance; Martin et al., 2017). Given the difficulty in inferring affective states from facial expressions (Barrett et al., 2019), a more direct approach would ask liars and truth-tellers to report on their affect.
Narrowing Focus on Duping Delight
Previous research on duping delight has also been hampered by the broad definition offered by Ekman (1985). More recently, scholars have narrowed the definition of duping delight, focusing on the pleasure that may come from successful deception (e.g., Gonza et al., 2001; Spidel et al., 2011). This narrower definition allows for targeted empirical investigation of affective experiences following successful (vs. unsuccessful) lies. Yet DePaulo et al. (2003) pointed out that the “duping delight hypothesis has not yet been tested” (p. 75), and, to our knowledge, it has evaded direct experimental testing in the intervening years.
It is worth noting, however, that the conceptually similar cheater’s high has received some empirical attention. Ruedy et al. (2013) demonstrated that unethical behavior without obvious harm elicits positive affect. This occurred regardless of whether cheating (on a problem-solving task) was self-selected or sanctioned by experimental manipulation. Moreover, positive affect was unrelated to the size of the financial incentive achieved by cheating, suggesting that the cheater’s high may come merely from the thrill of “getting away with it.” Actual or perceived success, however, was not directly manipulated.
Potential Consequences of Duping Delight
Previous research has established that unethical behavior is affected by situational variables, which can result in high rates of cheating and lying. For example, cheating may increase when it benefits others (vs. the self; Gino et al., 2013), when lying is seen as socially normative (Gino et al., 2009), or when self-control is depleted (Gino et al., 2011). Although situations can increase or decrease the likelihood of unethical behavior, research also suggests that there may be some dispositional influences on the use of deception. Most lies are told by relatively few prolific liars (Daiku et al., 2021; Halevy et al., 2014; Park et al., 2021; Serota & Levine, 2015; Serota et al., 2010; Serota et al., 2021) and “dark” personality traits (e.g., psychopathy, Machiavellianism, narcissism) are positively correlated with self-reported frequency of lying (Jonason et al., 2014; see also Makowski et al., 2021). While most people are motivated to lie only when the truth poses an obstacle to their goals (Levine et al., 2010), positive affect after successful deception may positively reinforce this behavior, providing an affective mechanism for why some individuals lie more than others.
The Current Research
Given the paucity of experimental work on duping delight, the present studies explore the affective consequences of successful lying. Specifically, we conducted two experiments wherein participants lied or told the truth in a mock insurance claim, received feedback about whether their claim was believable or not, and subsequently reported on their affect. The studies reported in this article were not preregistered. However, deidentified data, analysis scripts, and materials for both studies are publicly available via OSF: https://osf.io/cjw3r/. The study was IRB-approved, and we report all data exclusions (if any), manipulations, and measures.
Study 1
Method
Participants and Design
One hundred and sixty-seven participants from the University of Denver (123 females, 40 males, 4 non-binary) volunteered through an online participation pool and were given course credit for their involvement in the study. The average age of participants was 19.29 years (SD = 1.50). The majority of participants were White (86.8%). Participants also included a small proportion of Hispanic (10.2%), Asian (7.8%), Black (3.6%), and Native American individuals (2.4%), as well as some who preferred not to provide race/ethnicity information (1.2%).1
We conducted a priori power analyses to detect a small-to-medium-sized interaction (η² = .05) in a 2 (veracity: truth, lie) × 2 (feedback: positive, negative) between-subjects ANOVA. Setting 1 − β = .80 and α = .05, the required sample size was estimated at N = 152. Ultimately, we collected data from N = 167 participants, with 41 participants in the truth-believable feedback condition; 44 in the truth-not believable feedback condition; 43 in the lie-believable feedback condition; and 39 in the lie-not believable feedback condition.2
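As a point of reference, this sample-size estimate can be reproduced with the noncentral F distribution. The Python sketch below is an illustrative reconstruction under standard assumptions (a 1-df interaction test in a four-cell design), not the tool actually used for the reported analysis; exact results may differ from dedicated power software by a participant or two.

```python
# Illustrative a priori power calculation for the 2 x 2 interaction (not the authors' original code).
from scipy.stats import f as f_dist, ncf

eta_sq, alpha, target_power, cells = 0.05, 0.05, 0.80, 4
f_sq = eta_sq / (1 - eta_sq)  # convert eta squared to Cohen's f^2

def interaction_power(n_total: int) -> float:
    """Power of the 1-df interaction F test in a 2 x 2 between-subjects ANOVA with n_total participants."""
    df_num, df_denom = 1, n_total - cells
    f_crit = f_dist.ppf(1 - alpha, df_num, df_denom)               # critical F under the null
    return 1 - ncf.cdf(f_crit, df_num, df_denom, f_sq * n_total)   # noncentrality = f^2 * N

# Smallest total N reaching the target power (approximately N = 152 under these assumptions)
n = next(n for n in range(8, 1000) if interaction_power(n) >= target_power)
print(n, round(interaction_power(n), 3))
```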
Materials and Procedure
Phase One
Participants provided informed consent and proceeded directly to study procedures. Specifically, all participants were presented with an image of an office desk with several items (e.g., electronics, books, keys) scattered across the table. They were asked to imagine that the desk and its items belonged to them. Participants were required to spend 30 s (or more) on this “pre-theft” image before they were allowed to proceed. Participants were then instructed to imagine a thief had entered and stolen items from their fictional office, and were shown a new “post-theft” image in which the desk was in disarray and four items were now missing. The four missing items were a desktop computer, a laptop computer, a wallet, and a set of car keys. Participants spent at least another 30 s on this image before proceeding.
Veracity Manipulation
Participants were randomly assigned either to the truth condition (i.e., accurately report the four missing items) or lie condition (i.e., falsely report that six items were stolen). In the truth condition, participants were presented with both pre- and post-theft images of the office simultaneously. They were then asked to correctly specify which items were stolen (i.e., missing in the “post-theft” image) by clicking on each item as it appeared in the “pre-theft” image. Participants were given a total of 10 possible choices/items to select, including the desktop computer, laptop computer, car keys, wallet, cell phone, sunglasses, mouse, journal, speaker, and office keys. Participants in the truth condition were only able to proceed when they had correctly chosen the four items that had actually been stolen.
In the lie condition, participants completed the same procedure as participants in the truth condition by accurately identifying the four missing items. They were then instructed to select an additional two items from the remaining six options; these were the items that they would falsely claim were also stolen in the theft (in addition to the four items that were actually taken). Participants could not proceed until they had chosen the four items that had originally been stolen, and then picked two additional non-stolen items.
Video-Recorded Interview
All participants were then video recorded in a short interview regarding items that had been stolen from their fictional office. Critically, participants in the truth condition reported that four items had been stolen from their office, while participants in the lie condition claimed that six items had been stolen, consistent with their selections described above. Participants were also told that during the interview, an algorithmic computer program would code their facial expressions and verbal and nonverbal behavior, and combine these data to determine whether their statements were believable or not. In reality, there was no algorithm used to evaluate their believability.
To prepare participants for the video recording, each was presented in advance with all the questions that would be asked during the interview (e.g., “What and how many items from your desk were stolen?”). Participants were given as much time as they needed to familiarize themselves with the interview questions before manually starting the video recording. During the recording session, each question appeared on the screen for 20 to 30 s while participants provided their answers. Using this process, participants recorded a 3-min video about the office, responding to questions about the identity and number of the missing items.
FaceReaderOnline (Den Uyl & van Kuilenburg, 2005) was used to record the participants’ interview statements through the computer webcam as they responded aloud. For the purposes of this study, the video and audio data were not analyzed. Recordings were in place primarily to bolster the plausibility of a fictional algorithmic computer program that could determine participants’ believability.
Feedback Manipulation
Following the interview, participants provided demographic information, including age, gender, sexual orientation, and race/ethnicity, while they ostensibly waited for feedback from the algorithmic computer program (i.e., believable or not believable). To reinforce the plausibility of the feedback manipulation, participants were informed that it would take 2 to 5 min for the computer program to evaluate the veracity of their interview statements. In the interim, participants saw a gif image of a computer loading wheel to indicate that the computer was processing their data. Participants were then informed that the algorithm’s calculation had been completed. In the believable feedback condition, participants were told, “You are believable,” whereas in the not believable feedback condition, participants were told, “You are not believable.” In reality, this believability feedback was randomly assigned.
Immediately following this feedback, participants completed the Positive and Negative Affect Schedule (PANAS; Watson et al., 1988) to assess their positive (i.e., duping delight; α = .86) and negative (α = .86) affect in response to telling a successful (or unsuccessful) lie (or truth). The completion of Phase One took approximately 15 min.
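The reliability coefficients reported here and in Study 2 are Cronbach’s alphas. For reference, a minimal computation, assuming item responses arranged as the columns of an array, looks like the following; this is an illustration only, not the analysis code used for the study (SPSS syntax is available on OSF).

```python
# Minimal Cronbach's alpha for one PANAS subscale; `items` is an (n_participants x n_items)
# array of item responses. Illustrative sketch only.
import numpy as np

def cronbach_alpha(items) -> float:
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of the k item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of participants' total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)
```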
Phase Two
Twelve days after Phase One, an email containing an online memory test was sent to the participants to determine whether deception and feedback about believability affected memory for the details of the theft. These responses were not of interest to the current investigation, but are described in detail in Vo et al. (2021).
Results
Positive Affect
A 2 (veracity: truth, lie) × 2 (feedback: believable, not believable) between-subjects ANOVA was performed on self-reported positive affect. There was a significant main effect of feedback, F(1, 163) = 6.44, p = .012, ηp² = .038; participants who received believable feedback (M = 2.47, SD = .77) reported more positive affect than participants who received feedback that they were not believable (M = 2.19, SD = .63; see Fig. 1A). However, there was no main effect of veracity, F(1, 163) = 1.35, p = .25, ηp² = .01, nor was there a significant veracity × feedback interaction, F(1, 163) = .01, p = .93, ηp² = .00. Critically, there was no significant difference in positive affect reported by participants in the lie-believable condition (M = 2.53, SD = .81) and the lie-not believable condition (M = 2.26, SD = .71), t(80) = 1.60, p = .114, d = .35.
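Analyses were run in SPSS (syntax available on OSF). For readers working in other environments, a minimal Python analogue of this 2 × 2 ANOVA and the follow-up contrast is sketched below; the file and column names (study1.csv, veracity, feedback, pos_affect) are illustrative assumptions, not the authors’.

```python
# Sketch of the 2 x 2 between-subjects ANOVA on positive affect (Study 1).
# File and column names are hypothetical; the deidentified data are available on OSF.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import ttest_ind

df = pd.read_csv("study1.csv")

# Sum-to-zero contrasts yield Type III tests comparable to SPSS defaults.
model = ols("pos_affect ~ C(veracity, Sum) * C(feedback, Sum)", data=df).fit()
print(sm.stats.anova_lm(model, typ=3))

# Follow-up comparison among liars only: believable vs. not believable feedback
liars = df[df["veracity"] == "lie"]
print(ttest_ind(liars.loc[liars["feedback"] == "believable", "pos_affect"],
                liars.loc[liars["feedback"] == "not believable", "pos_affect"]))
```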
Fig. 1.
A The effect of veracity (liar, truth-teller) and feedback (believable, not believable) on positive affect reported by participants in Study 1. B The effect of veracity (liar, truth-teller) and feedback (believable, not believable) on negative affect reported by participants in Study 1. Error bars represent ± 1 standard error
Negative Affect
A 2 (veracity: truth, lie) × 2 (feedback: believable, not believable) between-subjects ANOVA was also performed on self-reported negative affect. Analyses revealed a significant main effect of veracity, F(1, 163) = 5.58, p = .017, ηp² = .034; participants who lied (M = 1.73, SD = .65) reported more negative affect than participants who told the truth (M = 1.53, SD = .54). A significant main effect of feedback, F(1, 163) = 10.74, p = .001, ηp² = .062, also indicated that participants who received feedback that they were not believable (M = 1.77, SD = .63) experienced more negative affect than participants who received feedback that they were believable (M = 1.49, SD = .54). However, these main effects were qualified by a significant veracity × feedback interaction, F(1, 163) = 5.45, p = .021, ηp² = .032. Among liars, feedback did not appear to impact negative affect, t(80) = .577, p = .565, d = .128. However, among truth-tellers, receiving not believable feedback (M = 1.78, SD = .61) led to more negative affect than receiving believable feedback (M = 1.27, SD = .28), t(83) = 4.79, p < .001, d = 1.04 (see Fig. 1B).
Study 2
In Study 2, we sought to replicate and extend the findings of Study 1 with a larger and more representative sample. Additionally, since lies are generally told to achieve some goal (Levine et al., 2010) and positive affective experiences are most likely to emerge when situations are appraised as goal conducive (e.g., Ellsworth & Scherer, 2003), we included an incentive manipulation. Lastly, we measured participants’ level of Machiavellianism, narcissism, psychopathy, and sadism to explore whether the experience of duping delight may be moderated by these personality traits.
Method
Participants and Design
An a priori power analysis indicated that we needed a total of 387 participants to detect a small-to-medium effect of ηp² = .02 in a 2 (veracity: truth, lie) × 2 (incentive: present, absent) × 2 (feedback: believable, not believable) between-subjects design, setting 1 − β = .95 and α = .05. A representative U.S. sample of 394 participants was recruited from Prolific; participants were compensated $3.00 CAD. This sample included 198 females, 189 males, and 7 non-binary individuals with a mean age of 44.41 years (SD = 15.76, range = 18–79). Most participants identified as White (74.18%). Participants also identified as Black (14.94%), Latin American (7.09%), South Asian (2.78%), and Chinese (2.53%). One percent or less of participants identified as Japanese, Korean, Aboriginal, Southeast Asian, West Asian, Filipino, Arab, or other. Nine participants failed an attention check (described below), leaving 385 participants for analysis. A sensitivity power analysis indicated that with n = 385, we had 95% power to detect an effect as small as η² = .03 with α = .05 in a 2 × 2 × 2 between-subjects design.
Materials and Procedure
As in Study 1, participants were presented with an image of an office desk with several items, were asked to imagine it was theirs, and were required to spend 30 s or more on the “pre-theft” image before proceeding.
Veracity Manipulation
Participants were then randomly assigned to either tell the truth or lie about the number of missing items. As in Study 1, participants in the truth condition had to correctly choose the four items that had actually been stolen, whereas participants in the lie condition had to select two non-stolen items in addition to the four items that were actually stolen.
Written Interview and Incentive Manipulation
All participants were then informed that they would be completing an interview about the office theft. This interview was the same as the interview in Study 1, except that participant responses were written rather than video recorded. Specifically, participants were told that an algorithmic computer program would code their written responses to determine whether their answers were believable or not. In reality, there was no algorithm used to evaluate their believability. Immediately prior to the written interview, participants were randomly assigned to either receive or not receive an incentive for believability. Specifically, in the incentive present condition, participants were told, “If the algorithm determines your answers are believable, you will receive a $1.00 CAD bonus. When you are ready to begin the written interview, please click the next button.” In contrast, in the incentive absent condition, participants were simply told, “When you are ready to begin the written interview, please click the next button.” Participants then provided written responses to the same questions from Study 1. Participants could not proceed to the next part of the study until at least 2 min had elapsed.
Motivation and Affect
Immediately following the written interview, participants indicated how motivated they were to provide believable responses on a scale from 1 (not at all) to 7 (highly; M = 6.11, SD = 1.09, range = 1–7). Next, participants completed the PANAS (Watson et al., 1988) to assess their positive (α = .89) and negative (α = .90) affect immediately following the veracity manipulation (but before receiving feedback).
Feedback Manipulation
As in Study 1, participants were told that the algorithmic computer program would take 2–5 min to determine the believability of their written responses. While waiting, participants completed the Short Dark Tetrad (SD4; Paulhus et al., 2021). The SD4 is a four-subscale inventory that measures Machiavellianism, psychopathy, narcissism, and sadism with seven items per subscale. In the current study, each of the SD4 subscales was reliable: Machiavellianism (α = .67), narcissism (α = .76), psychopathy (α = .72), and sadism (α = .77). As an attention check, participants were presented with a scale that ranged from 1 (Disagree Strongly) to 5 (Strongly Agree) and were told, “Choose disagree to indicate you are paying attention.” Nine participants (2.28%) failed this attention check and were excluded from analyses. Next, participants completed demographic information (age, gender, and ethnicity) while they ostensibly waited for the algorithmic computer program to determine whether their written responses were believable or not. After completing the SD4 and demographics, participants were randomly assigned believability feedback: they were told either “You are believable” or “You are not believable.” Participants then immediately completed the PANAS to assess their positive (α = .93) and negative (α = .91) affect after receiving the feedback manipulation. Lastly, participants were debriefed and thanked for their time. On average, the study took 18 min to complete.
Results
Affect After Veracity Manipulation
Positive Affect
A 2 (veracity: truth, lie) × 2 (incentive: present, absent) between-subjects ANOVA was conducted on positive affect directly after lying or telling the truth. Results revealed no main effect of veracity, F(1, 381) = 1.97, p = .161, ηp² = .01, no main effect of incentive, F(1, 381) = 0.13, p = .724, ηp² = .00, and no veracity × incentive interaction, F(1, 381) = 0.02, p = .904, ηp² = .00.
Negative Affect
A 2 (veracity: truth, lie) × 2 (incentive: present, absent) between-subjects ANOVA was conducted on negative affect directly after lying or telling the truth. Results revealed a significant main effect of veracity, F(1, 381) = 9.50, p = .002, ηp² = .02. Specifically, liars (M = 1.71, SD = 0.77) reported more negative affect than truth-tellers (M = 1.49, SD = 0.66). There was no main effect of incentive, F(1, 381) = 0.25, p = .620, ηp² = .001, and no veracity × incentive interaction, F(1, 381) = 0.91, p = .341, ηp² = .002.
Motivation To Be Believed
Motivation
A 2 (veracity: lie, truth) × 2 (incentive: present, absent) between-subjects ANOVA3 was conducted on motivation to be believed. Results revealed a main effect of veracity, F(1, 381) = 7.00, p = .008, ηp² = .02. Specifically, truth-tellers (M = 6.25, SD = 0.99) reported more motivation to be believed than liars (M = 5.96, SD = 1.15). Although those in the incentive present condition (M = 6.21, SD = 1.05) reported nominally more motivation than those in the incentive absent condition (M = 6.00, SD = 1.10), this effect was not statistically significant, F(1, 381) = 3.03, p = .083, ηp² = .008. Finally, there was no veracity × incentive interaction, F(1, 381) = 0.20, p = .657, ηp² = .001.
Affect After Feedback Manipulation
Positive Affect
A 2 (veracity: truth, lie) × 2 (incentive: present, absent) × 2 (feedback: believable, not believable) between-subjects ANOVA was conducted on positive affect directly after receiving feedback. Results revealed no main effect of veracity, F(1, 377) = 0.34, p = .558, ηp² = .001, nor a main effect of incentive, F(1, 377) = 0.13, p = .719, ηp² = .00. The main effect of feedback, however, was significant, F(1, 377) = 54.47, p < .001, ηp² = .126. Replicating Study 1 findings, participants who received believable feedback (M = 3.01, SD = 0.99) reported more positive affect than those who received not believable feedback (M = 2.26, SD = 1.00). There were no significant interactions: veracity × incentive, F(1, 377) = 0.15, p = .701, ηp² = .00; veracity × feedback, F(1, 377) = 0.34, p = .561, ηp² = .001; incentive × feedback, F(1, 377) = 2.25, p = .135, ηp² = .006; and veracity × incentive × feedback, F(1, 377) = 0.50, p = .479, ηp² = .001 (see Fig. 2A).
Fig. 2.
A The effect of veracity (liar, truth-teller) and feedback (believable, not believable) on positive affect reported by participants in Study 2. B The effect of veracity (liar, truth-teller) and feedback (believable, not believable) on negative affect reported by participants in Study 2. Error bars represent ± 1 standard error
Negative Affect
A 2 (veracity: truth, lie) × 2 (incentive: present, absent) × 2 (feedback: believable, not believable) between-subjects ANOVA was conducted on negative affect directly after receiving feedback. Replicating the results of Study 1, there was a significant main effect of veracity, F(1, 377) = 5.91, p = .016, ηp² = .015. Liars reported more negative affect (M = 1.56, SD = 0.80) than truth-tellers (M = 1.39, SD = 0.57). As in Study 1, results also revealed a main effect of feedback, F(1, 377) = 28.82, p < .001, ηp² = .066. Participants who received not believable feedback (M = 1.65, SD = 0.68) reported more negative affect than those who received believable feedback (M = 1.29, SD = 0.68). There was no main effect of incentive, F(1, 377) = 0.30, p = .862, ηp² = .000, and no significant interactions: veracity × incentive, F(1, 377) = 0.56, p = .455, ηp² = .001; incentive × feedback, F(1, 377) = 0.36, p = .547, ηp² = .001; and veracity × incentive × feedback, F(1, 377) = 0.01, p = .905, ηp² = .00. Additionally, in contrast to Study 1, there was no veracity × feedback interaction, F(1, 377) = 0.26, p = .609, ηp² = .001 (see Fig. 2B).
Personality Moderators
Although we did not find an interaction between veracity and feedback on positive affect, we explored whether Dark Tetrad (Machiavellianism, narcissism, psychopathy, sadism) personality traits may serve as potential moderators of duping delight. Specifically, using the PROCESS macro (Model 1; Hayes, 2018) with 5,000 bootstrap resamples, we examined whether, among liars (n = 190), the effect of receiving believable (vs. not believable) feedback on positive affect was moderated by each Dark Tetrad trait. We found that Machiavellianism interacted with believability feedback to predict positive affect after deception (b = .06, t = 1.97, p = .05, 95% CI [.000, .13]). Specifically, participants with high (b = .90, p < .001) and moderate (b = .64, p < .001) levels of Machiavellianism reported more positive affect after receiving believable (vs. not believable) feedback, whereas participants with low levels of Machiavellianism (b = .33, p = .11) did not (see Fig. 3). We also found that narcissism interacted with believability feedback (b = .06, t = 2.13, p = .035, 95% CI [.004, .11]). Specifically, participants with high (b = .95, p < .001) and moderate (b = .66, p < .001) levels of narcissism reported more positive affect after receiving believable (vs. not believable) feedback, whereas participants with low levels of narcissism (b = .38, p = .053) did not (see Fig. 4). The interactions between feedback and psychopathy (b = −.004, p = .86) and between feedback and sadism (b = .02, p = .40) were not significant (see Table 1 for correlations).
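This moderation test can also be reproduced outside of PROCESS with an ordinary moderated regression. The Python sketch below is an illustrative analogue of Model 1 for the Machiavellianism analysis; the file and variable names are assumptions, and it reports point estimates of the simple slopes without PROCESS’s bootstrap confidence intervals.

```python
# Illustrative analogue of the PROCESS Model 1 analysis among liars (Study 2);
# file/column names are hypothetical and bootstrap CIs are omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

liars = pd.read_csv("study2.csv").query("veracity == 'lie'").copy()    # hypothetical file name
liars["believable"] = (liars["feedback"] == "believable").astype(int)  # 1 = believable feedback

# Does Machiavellianism moderate the effect of feedback on post-feedback positive affect?
model = smf.ols("pos_affect ~ believable * machiavellianism", data=liars).fit()
print(model.summary())

# Simple slopes of feedback at low, moderate, and high Machiavellianism (16th/50th/84th percentiles)
for pct in (16, 50, 84):
    m = np.percentile(liars["machiavellianism"], pct)
    slope = model.params["believable"] + m * model.params["believable:machiavellianism"]
    print(f"{pct}th percentile (Mach = {m:.2f}): simple slope of feedback = {slope:.2f}")
```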
Fig. 3.

The effect of Machiavellianism on positive affect for liars who received believable (vs. not believable) feedback in Study 2. Low, moderate, and high levels of Machiavellianism represent the 16th, 50th, and 84th percentiles, respectively, using the Johnson–Neyman technique
Fig. 4.

The effect of narcissism on positive affect for liars who received believable (vs. not believable) feedback in Study 2. Low, moderate, and high levels of narcissism represent the 16th, 50th, and 84th percentiles, respectively, using the Johnson–Neyman technique
Table 1.
Pearson correlations between positive and negative affect after lying or telling the truth (1, 2), positive and negative affect after receiving believability feedback (3, 4), and dark tetrad personality traits (5–8)
| Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| 1. Veracity positive affect | — | | | | | | | |
| 2. Veracity negative affect | 0.072 | — | | | | | | |
| 3. Feedback positive affect | 0.715*** | 0.016 | — | | | | | |
| 4. Feedback negative affect | 0.164** | 0.696*** | −0.031 | — | | | | |
| 5. Machiavellianism | 0.191*** | 0.074 | 0.215*** | 0.117* | — | | | |
| 6. Narcissism | 0.318*** | 0.147** | 0.271*** | 0.189*** | 0.344*** | — | | |
| 7. Psychopathy | 0.065 | 0.206*** | 0.019 | 0.204*** | 0.401*** | 0.417*** | — | |
| 8. Sadism | 0.131** | 0.096 | 0.070 | 0.146** | 0.466*** | 0.325*** | 0.530*** | — |
Note: n = 385. *p < .05, **p < .01, ***p < .001
Discussion
Across two studies, we sought experimental evidence that successful deception would result in duping delight. In both studies, receiving affirming feedback about one’s believability increased positive affect. Believability feedback, however, did not interact with veracity to predict positive affect: Successful (vs. unsuccessful) liars did not report greater positive affect. However, we did find that liars with moderate and high (but not low) levels of Machiavellianism and narcissism reported more positive affect after receiving affirming feedback, suggesting that personality variables may be an important predictor of who experiences duping delight.
Although these findings suggest that duping delight may be a less common response to successful deception than previously theorized, it should be noted that Ekman and Frank (1993) proposed additional conditions for producing duping delight that were not part of our paradigm. While our paradigm involved a lie that caused no harm to others (Ruedy et al., 2013), it lacked an audience to witness the lie and a “victim” with a reputation for being hard to trick (Ekman & Frank, 1993).
That said, in Study 2 we did directly test one potential moderator of the experience of duping delight: whether the lie was goal conducive. We attempted to manipulate goal conduciveness by including an incentive condition, which provided a tangible reward for successful lying (or truth-telling). However, we found no effect of incentive. Although this may suggest that our incentive was not large enough to impact affective experiences, the result is consistent with previous research on conceptually similar cheating behavior: positive affect was elicited whether cheating was self-selected or sanctioned by experimental manipulation and was unrelated to the size of the financial incentive gained by cheating (Ruedy et al., 2013).
Alternatively, it is possible that duping delight is only experienced by a subset of the population. For example, individuals with high levels of “dark” personality traits have been observed to lie more and to report duping delight as a motivator for their deception (Jonason et al., 2014; Spidel et al., 2011). Indeed, duping delight might serve as positive reinforcement for these individuals, resulting in their prolific use of deception. The results of Study 2 indicate that Machiavellianism and narcissism moderated the effect of believability feedback on positive affect after lying. This is consistent with previous work suggesting that “dark” personality traits are positively associated with lying and unethical behavior across various situations (Azizli et al., 2016; Baughman et al., 2014; Elaad et al., 2020) and with positive attitudes about deceptive communication (Oliveira & Levine, 2008). Future research should continue to explore the effects of personality and situational variables (e.g., Markowitz & Levine, 2021), with consideration for a typology of lies (Cantarero et al., 2018) that may elicit different affective experiences while lying.
To date, much of the research on duping delight in the deception literature has been focused on how this affective experience might give rise to behavioral cues to deception (e.g., Ekman, 1985; ten Brinke & Porter, 2012). The current research advances theorizing about duping delight by testing some of the proposed moderators of this experience and considering how this affective experience may reinforce and exacerbate the use of deception in social life. Additional research on duping delight will allow for a richer understanding of how and why people choose to lie, and whether this affective experience acts as an affective “reward” that affects deception frequency over time.
Conclusion
Successful deception did not result in robust positive affective responses, casting doubt on the magnitude and frequency of duping delight. Overall, liars who received believable feedback did not report greater positive affect than liars who received not believable feedback. However, the findings suggest that “dark” personalities may be more likely than others to experience duping delight, providing a potential mechanism for why they lie so frequently. Future research is necessary to understand the conditions and personalities impacting affective experiences following successful deception.
Author Contributions
C.G. identified the research question, conducted initial analyses, and provided critical revisions to the manuscript; T.V.A.V. designed the experiment and collected all data; B.H. and C.K. assisted with drafting the manuscript; L.t.B. assisted with study design, gained ethical approval, conducted additional analyses, and drafted and provided critical revisions to the manuscript.
Additional Information
Funding
Research was supported by funding provided to L. ten Brinke by the Department of Psychology, University of Denver.
Data availability
Data generated and analyzed for the current study are available in the Open Science Framework repository: https://osf.io/cjw3r/
Ethics approval
Ethics approval was provided by the University of Denver IRB (989739-17) and University of British Columbia Okanagan BREB (H21-01400).
Conflicts of interest
The authors declare no competing interests.
Informed consent
Informed consent was obtained from all participants prior to study procedures.
Consent for publication
No individual participants are identified in this publication; participants provided consent for their data to be analyzed and reported in aggregate.
Code availability
SPSS syntax replicating the results of the current study are available in the Open Science Framework repository: https://osf.io/cjw3r/
Footnotes
1. Note that racial/ethnic percentages add to greater than 100% since participants were encouraged to select all that apply.
2. Our sample size exceeded our planned a priori sample size. Specifically, data collection occurred over two quarters; we opened more time slots for participation than necessary, expecting some no-show appointments, but unexpectedly exceeded our goal. A chi-square analysis suggests that although cell sizes for our design differ slightly, these differences are not significant, χ²(1, N = 167) = 0.30, p = .59.
3. The distribution of the motivation responses was negatively skewed (Shapiro-Wilk p < .001). Accordingly, a Kruskal-Wallis H test was conducted and revealed, as expected, a significant effect of veracity, χ²(1) = 8.45, p = .004, and incentive, χ²(1) = 3.98, p = .05.
References
- Azizli, N., Atkinson, B. E., Baughman, H. M., Chin, K., Vernon, P. A., Harris, E., & Veselka, L. (2016). Lies and crimes: Dark Triad, misconduct, and high-stakes deception. Personality and Individual Differences, 89, 34–39. https://doi.org/10.1016/j.paid.2015.09.034
- Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest, 20, 1–68. https://doi.org/10.1177/1529100619832930
- Baughman, H. M., Jonason, P. K., Lyons, M., & Vernon, P. A. (2014). Liar liar pants on fire: Cheater strategies linked to the Dark Triad. Personality and Individual Differences, 71, 35–38. https://doi.org/10.1016/j.paid.2014.07.019
- Cantarero, K., Van Tilburg, W. A. P., & Szarota, P. (2018). Differentiating everyday lies: A typology of lies based on beneficiary and motivation. Personality and Individual Differences, 134, 252–260. https://doi.org/10.1016/j.paid.2018.05.013
- Daiku, Y., Serota, K. B., & Levine, T. R. (2021). A few prolific liars in Japan: Replication and the effects of Dark Triad personality traits. PLOS ONE, 16(4), e0249815. https://doi.org/10.1371/journal.pone.0249815
- Den Uyl, M. J., & van Kuilenburg, H. (2005). The FaceReader: Online facial expression recognition. Proceedings of Measuring Behavior, 30(2), 589–590.
- DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129(1), 74–118. https://doi.org/10.1037/0033-2909.129.1.74
- Ekman, P. (1985). Telling lies: Clues to deceit in the marketplace, marriage, and politics. Norton.
- Ekman, P., & Frank, M. G. (1993). Lies that fail. In M. Lewis & C. Saarni (Eds.), Lying and deception in everyday life (pp. 184–200). The Guilford Press.
- Elaad, E., Hanania, S. B., Mazor, S., & Zvi, L. (2020). The relations between deception, narcissism and self-assessed lie- and truth-related abilities. Psychiatry, Psychology and Law, 27(5), 880–893. https://doi.org/10.1080/13218719.2020.1751328
- Ellsworth, P. C., & Scherer, K. R. (2003). Appraisal processes in emotion. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 572–595). Oxford University Press.
- Gino, F., Ayal, S., & Ariely, D. (2009). Contagion and differentiation in unethical behavior: The effect of one bad apple on the barrel. Psychological Science, 20(3), 393–398. https://doi.org/10.1111/j.1467-9280.2009.02306.x
- Gino, F., Schweitzer, M. E., Mead, N. L., & Ariely, D. (2011). Unable to resist temptation: How self-control depletion promotes unethical behavior. Organizational Behavior and Human Decision Processes, 115(2), 191–203. https://doi.org/10.1016/j.obhdp.2011.03.001
- Gino, F., Ayal, S., & Ariely, D. (2013). Self-serving altruism? The lure of unethical actions that benefit others. Journal of Economic Behavior & Organization, 93, 285–292. https://doi.org/10.1016/j.jebo.2013.04.005
- Gonza, L. F., Vrij, A., & Bull, R. (2001). The impact of individual differences on perceptions of lying in everyday life and in a high stake situation. Personality and Individual Differences, 31(7), 1203–1216. https://doi.org/10.1016/S0191-8869(00)00219-1
- Halevy, R., Shalvi, S., & Verschuere, B. (2014). Being honest about dishonesty: Correlating self-reports and actual lying. Human Communication Research, 40(1), 54–72. https://doi.org/10.1111/hcre.12019
- Hayes, A. F. (2018). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (2nd ed.). Guilford Press.
- Jonason, P. K., Lyons, M., Baughman, H. M., & Vernon, P. A. (2014). What a tangled web we weave: The Dark Triad traits and deception. Personality and Individual Differences, 70, 117–119. https://doi.org/10.1016/j.paid.2014.06.038
- Levine, T. R., Kim, R. K., & Hamel, L. M. (2010). People lie for a reason: Three experiments documenting the principle of veracity. Communication Research Reports, 27(4), 271–285. https://doi.org/10.1080/08824096.2010.496334
- Makowski, D., Pham, T., Lau, Z. J., Raine, A., & Chen, S. H. A. (2021). The structure of deception: Validation of the lying profile questionnaire. Current Psychology. https://doi.org/10.1007/s12144-021-01760-1
- Markowitz, D. M., & Levine, T. R. (2021). It’s the situation and your disposition: A test of two honesty hypotheses. Social Psychological and Personality Science, 12(2), 213–224. https://doi.org/10.1177/1948550619898976
- Martin, J., Rychlowska, M., Wood, A., & Niedenthal, P. (2017). Smiles as multipurpose social signals. Trends in Cognitive Sciences, 21(11), 864–877. https://doi.org/10.1016/j.tics.2017.08.007
- Oliveira, C. M., & Levine, T. R. (2008). Lie acceptability: A construct and measure. Communication Research Reports, 25(4), 282–288. https://doi.org/10.1080/08824090802440170
- Park, H. S., Serota, K. B., & Levine, T. R. (2021). In search of Korean outliars: “A few prolific liars” in South Korea. Communication Research Reports, 38(3), 206–215. https://doi.org/10.1080/08824096.2021.1922374
- Paulhus, D. L., Buckels, E. E., Trapnell, P. D., & Jones, D. N. (2021). Screening for dark personalities: The Short Dark Tetrad (SD4). European Journal of Psychological Assessment, 37(3), 208–222. https://doi.org/10.1027/1015-5759/a000602
- Ruedy, N. E., Moore, C., Gino, F., & Schweitzer, M. E. (2013). The cheater’s high: The unexpected affective benefits of unethical behavior. Journal of Personality and Social Psychology, 105(4), 531–548. https://doi.org/10.1037/a0034231
- Serota, K. B., & Levine, T. R. (2015). A few prolific liars: Variation in the prevalence of lying. Journal of Language and Social Psychology, 34(2), 138–157. https://doi.org/10.1177/0261927X14528804
- Serota, K. B., Levine, T. R., & Boster, F. J. (2010). The prevalence of lying in America: Three studies of self-reported lies. Human Communication Research, 36(1), 2–25. https://doi.org/10.1111/j.1468-2958.2009.01366.x
- Serota, K. B., Levine, T. R., & Docan-Morgan, T. (2021). Unpacking variation in lie prevalence: Prolific liars, bad lie days, or both? Communication Monographs, 1–25. https://doi.org/10.1080/03637751.2021.1985153
- Spidel, A., Hervé, H., Greaves, C., & Yuille, J. C. (2011). “Wasn’t me!” A field study of the relationship between deceptive motivations and psychopathic traits in young offenders. Legal and Criminological Psychology, 16(2), 335–347. https://doi.org/10.1348/135532510X518722
- ten Brinke, L., & Porter, S. (2012). Cry me a river: Identifying the behavioral consequences of extremely high-stakes interpersonal deception. Law and Human Behavior, 36(6), 469–477. https://doi.org/10.1037/h0093929
- Trovillo, P. V. (1938). History of lie detection. Journal of Criminal Law and Criminology, 29, 848–881.
- Vo, T. V. A., Gunderson, C. A., & ten Brinke, L. (2021). How deception and believability feedback affect recall. Memory, 1–9. https://doi.org/10.1080/09658211.2021.1883064
- Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6), 1063–1070. https://doi.org/10.1037/0022-3514.54.6.1063
- Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and nonverbal communication of deception. Advances in Experimental Social Psychology, 14, 1–59. https://doi.org/10.1016/S0065-2601(08)60369-X