Abstract
Escalation of commitment—the tendency to remain committed to a course of action, often despite negative prospects—is common. Why does it persist? Across three preregistered experiments (N=3,888), we tested the hypothesis that escalating commitment signals trustworthiness. Experiments 1–2 respectively revealed that decision makers who escalated commitment were perceived as more trustworthy and entrusted with 29% more money by third-party observers. Experiment 3 revealed that decision makers who escalated commitment subsequently made more trustworthy choices, returning 15% more money than those who deescalated. Decision makers were equally likely to escalate commitment in public versus in private, possibly because they previously internalized how others would evaluate them. Complementing research examining cognitive factors driving escalation of commitment, the present work reveals that accounting for the reputational causes and consequences of decisions to escalate enhances understanding of why escalation is so common and suggests how organizations might reduce it.
Keywords: Decision making, trust, escalation of commitment, signaling, reputation, person-perception
Escalation of commitment—specifically, the tendency to maintain commitment to a course of action, often despite negative prospects (Brockner, 1992; Staw, 1976)—is quite common. Decision makers persist at fatally flawed business ventures and continue unwinnable wars, despite the widely held prescription that selecting options based solely on future net expected value achieves better material outcomes (Mankiw, 2020, p. 261).
Why does costly escalation persist? Prior research has identified numerous cognitive factors that drive this robust phenomenon (for meta-analysis, see Sleesman et al., 2012). In addition, conceptual papers have theorized that social incentives play a role (Kanodia, Bushman, & Dickhaut, 1989; Staw, 1981; Tetlock, 2000). Yet, as Sleesman and colleagues' meta-analysis concluded, relatively little work has experimentally tested the role of social incentives, such as reputation management, in driving escalation.1
Drawing on the foregoing theorizing concerning social incentives, we hypothesized that escalation would signal trustworthiness, an attribute essential for successful professional relationships (Arrow, 1974; Dirks & Ferrin, 2002; Kramer, 1999; Mayer, Davis, & Schoorman, 1995). Moreover, this reputational benefit (signaling trustworthiness) may partially offset the material costs of escalation, contributing to its prevalence. This hypothesis yields a set of testable questions.
(1). Do third-party observers trust decision makers who escalate more than they trust decision makers who de-escalate?
Drawing on Mayer et al.'s (1995) tripartite model of trust, we hypothesized that escalation would signal integrity- and benevolence-based trust, but that this benefit would not extend equally to ability-based trust (Experiment 1). We drew on three lines of reasoning. First, regarding integrity, Mayer et al. (1995, p. 719) argue that "…the consistency of the party's past actions, credible communications… and the extent to which the party's actions are congruent with his or her words all affect the degree to which the party is judged to have integrity." Thus, if escalation reinforces a decision maker's prior choices, it should increase perceived integrity. Second, Mayer and colleagues suggest that perceived benevolence is inversely related to a perceived motivation to misrepresent one's position. If escalation indicates a decision maker's unwillingness to say one thing and do another, it should increase perceived benevolence. Third, escalation should exert less influence on ability-based trust because there is no clear "correct choice" in uncertain escalation decisions. Finally, to rule out a potential halo effect, we also predicted that escalation would influence other positive traits for which evaluative criteria are lacking (e.g., humorousness) less than it would influence integrity- and benevolence-based trust.
Importantly, because escalators (versus de-escalators) should be perceived as higher in integrity- and benevolence-based trust, we predicted observers would translate perceived trust into financial trust (Experiment 2).
(2). Does escalation of commitment predict actual trustworthiness?
Sleesman et al.'s (2012) meta-analysis makes clear that multiple distinct factors contribute to escalation, including evidence that individuals who escalate are more likely to care about their reputation in the eyes of others (Brockner, Rubin, & Lang, 1981; Fox & Staw, 1979). Outside the escalation domain, scholars have found that individuals who care about their reputation return more money in strategic interactions involving trust (e.g., Jordan, Hoffman, Nowak, & Rand, 2016). Thus, we reasoned that individuals who escalate commitment would be more trustworthy and return more money in a trust game.
(3). Are decision makers more likely to escalate commitment in public versus in private?
If strategic decision makers can intuit the potential benefits of signaling trustworthiness through escalation, they should be more likely to escalate when observed by others (Brockner, Rubin, & Lang, 1981; Fox & Staw, 1979). Alternatively, based on social-interactionist schools of thought (Goffman, 1983; Mead, 1930), decision makers may already have internalized how others would evaluate them (for discussion, see Lerner & Tetlock, 1999; Schlenker & Weigold, 1992), in which case, observation would have no discernible influence on the tendency to escalate.
Current Research
We address the preceding questions in three preregistered experiments (N=3,888). Experiments 1–2 respectively test whether escalation (a) is perceived as a signal of trustworthiness and (b) engenders trusting financial behavior by observers. Experiment 3 tests whether decision makers who escalate make more trustworthy choices in strategic interactions and whether they are more likely to escalate in public versus in private.
Open Science Statement
All data, materials, and preregistrations are publicly available via the Open Science Framework (tinyurl.com/OSFescalation). We report how we determined sample sizes, preregistered data exclusions, all manipulations, and all measures in all experiments (Simmons et al., 2012).
Experiment 1: Does Escalation of Commitment Increase Perceptions of Trust?
Overview and Method
Experiment 1 tested the hypothesis that escalation of commitment would signal integrity- and benevolence-based trust, and that these reputational benefits would not extend equally to ability-based trust. Participants in this experiment evaluated a decision maker who escalated versus de-escalated commitment to a failing course of action. Due to space constraints, methodological details and secondary exploratory analyses for all experiments are available in the Supplement. All experiments received Institutional Review Board approval.
Participants
We recruited 441 adult respondents from the United States. In this and all experiments, we used Amazon's Mechanical Turk (MTurk) and CloudResearch to recruit participants. Based on pilot data, we pre-determined our target sample size to ensure 80% power to detect a Cohen's d=.30, assuming a 10% loss due to exclusions. The final sample for analyses consisted of N=429 after preregistered exclusions for failing comprehension checks (45% female; Mage=39.6). In all experiments, all reported results are qualitatively consistent regardless of exclusions (see Supplement).
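As an illustration of this kind of a priori power analysis, a minimal sketch in Python follows (our own illustration, not the authors' analysis code; assumptions we have filled in, such as a two-sided test at alpha = .05, are our own, so the resulting target need not match the recruited sample exactly):

```python
# Illustrative a priori power analysis for a two-sample t-test
# (a sketch; assumes a two-sided test with alpha = .05).
from math import ceil
from statsmodels.stats.power import TTestIndPower

effect_size = 0.30   # target Cohen's d
power = 0.80
alpha = 0.05
attrition = 0.10     # anticipated loss due to preregistered exclusions

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, power=power, alpha=alpha
)
total_n = ceil(2 * n_per_group / (1 - attrition))
print(f"n per group ≈ {ceil(n_per_group)}; recruit ≈ {total_n} in total")
```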
Procedure
Participants read a frequently used scenario (e.g., Arkes & Blumer, 1985; Olivola, 2018) in which the CEO of a company, after completing 90% of product development, learned that a competitor had launched a superior product. Participants were randomly assigned to a condition in which the CEO decided to spend the money to finish developing the inferior product (i.e., escalation condition) or to cut their losses (i.e., de-escalation condition). In prior experiments using nearly identical scenarios in which no prior investment had been made, participants overwhelmingly declined to invest (e.g., Arkes & Blumer, 1985, in which over 85% of participants chose not to invest).
Participants then evaluated the CEO on the tripartite trust scale (Mayer et al., 1995; Zlatev, 2019) and on humorousness. We predicted that the CEO who escalated commitment would be perceived as higher in integrity- and benevolence-based trust, but less so in ability-based trust or humorousness. Ability, benevolence, and integrity were each measured with three-item scales; humorousness was scored as the average of two items. Reliability for all scales was high (alphas > .90). Finally, participants responded to exploratory and demographic questions (see Supplement).
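For readers who wish to see how such scale reliabilities can be computed, the following is a minimal sketch (hypothetical file and column names, not the authors' code) using the pingouin package:

```python
# Illustrative reliability check for a three-item scale
# (hypothetical column names; not the authors' analysis code).
import pandas as pd
import pingouin as pg

df = pd.read_csv("experiment1.csv")  # hypothetical data file
integrity_items = df[["integrity_1", "integrity_2", "integrity_3"]]
alpha, ci = pg.cronbach_alpha(data=integrity_items)
print(f"Cronbach's alpha = {alpha:.2f}, 95% CI {ci}")
```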
Results and Discussion
Table 1 presents means and effect sizes for all dependent variables. As predicted, unpaired t-tests revealed that escalation signaled both integrity and benevolence (Cohen's ds ≥ .40 for both indices, ps < .001). Notably, two 2×2 mixed ANOVAs crossing Escalation Choice (between subjects) and Type of Trust (within subjects) revealed that the effects of escalation on integrity and benevolence, respectively, were significantly larger than the effect of escalation on perceived ability (ps = .025 and .040 for the respective interactions; an unpaired t-test revealed that the effect of escalation on ability itself was marginally significant, p = .064). Finally, two additional mixed ANOVAs replacing ability with humorousness revealed that the effects of escalation on integrity and benevolence were directionally, but not significantly, larger than the effect on perceived humorousness (ps = .18 and .07 for the respective interactions; effect on humorousness: d = .29, p = .002).
Table 1.
Descriptive and Inferential Statistics in Experiment 1
| | Integrity-based trust | Benevolence-based trust | Ability-based trust | Humor |
|---|---|---|---|---|
| Escalator | 4.10 | 3.65 | 4.13 | 2.97 |
| De-escalator | 3.77 | 3.35 | 3.97 | 2.76 |
| Mean difference | 0.33 | 0.30 | 0.16 | 0.21 |
| t-test | p < .001 | p < .001 | p = .064 | p = .002 |
| Cohen’s d [95% CI] | 0.41 [0.21, 0.60] | 0.40 [0.21, 0.59] | 0.18 [−0.01, 0.37] | 0.29 [0.10, 0.49] |
Note: Results from Experiment 1: Observers perceived Escalators (versus De-escalators) as higher in integrity- and benevolence-based trust. These benefits did not extend equally to ability-based trust. Observers also perceived Escalators as higher in humor.
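To illustrate the structure of these analyses (a sketch with hypothetical file and column names, not the authors' original code), the unpaired tests and mixed ANOVAs could be run with the pingouin package roughly as follows:

```python
# Illustrative analysis sketch for Experiment 1 (hypothetical column
# names; long-format data with one row per participant x trust facet).
import pandas as pd
import pingouin as pg

df = pd.read_csv("experiment1_long.csv")  # hypothetical data file

# Unpaired (Welch) t-test on a single facet, e.g., integrity
integrity = df[df["facet"] == "integrity"]
esc = integrity.loc[integrity["condition"] == "escalate", "rating"]
de = integrity.loc[integrity["condition"] == "de-escalate", "rating"]
print(pg.ttest(esc, de, correction=True))  # output includes Cohen's d and 95% CI

# 2x2 mixed ANOVA: Escalation Choice (between) x Type of Trust (within),
# e.g., integrity vs. ability, testing the condition-by-facet interaction
sub = df[df["facet"].isin(["integrity", "ability"])]
print(pg.mixed_anova(data=sub, dv="rating", within="facet",
                     subject="participant", between="condition"))
```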
Taken together, the results revealed two main findings. First, observers perceived decision makers who escalated (versus de-escalated) as more trustworthy. Second, insofar as escalation did not signal ability-based trust as strongly as it signaled integrity- and benevolence-based trust, the results indicate some differentiation in ratings rather than a pure halo effect. That said, a partial halo effect cannot be fully ruled out.
Experiment 2: Does Escalation of Commitment Increase Financial Trust?
Overview and Method
Following standard procedures from the prior literature (e.g., Berg et al., 1995; Jordan et al., 2016), we employed a two-stage experimental design with two players: an Actor and an Observer. The participant of interest played the role of Observer. In the first stage, the Actor chose to escalate or de-escalate commitment. Unbeknownst to the Observer, the Actor was fictitious and their choice was randomly assigned: We randomly paired Observers with Escalators (i.e., Actors who escalated) or De-Escalators (i.e., Actors who de-escalated). In the second stage, the Observer decided how much money to allocate to the Actor in a Trust Game (TG), described below. We predicted that Observers would trust Escalators more than De-Escalators.
Participants
We recruited 660 adult respondents from the United States. Based on pilot data, we pre-determined our target sample size to ensure 80% power to detect a Cohen’s d=.25 assuming a 10% loss due to exclusions. The final sample for analyses consisted of N=602 after preregistered exclusions for failing comprehension checks (50% female; Mage=37.3).
Procedure
Experiment 2 borrowed procedures from Jordan and colleagues (2016). Participants learned that the experiment would have two stages. In Stage 1, Observers read the same scenario from Experiment 1. Following two comprehension questions, Observers learned of the Actor’s choice to either escalate or de-escalate (as randomly assigned).
Based on Berg et al. (1995), in Stage 2 Observers received a bonus allocation and could transfer any amount (in cents, including none) to the Actor, allegedly another MTurk worker. This transfer constituted the primary dependent variable. We asked two comprehension questions to ensure participants understood their incentives. Observers received 50% of the tripled amount sent if they correctly answered all comprehension questions. Finally, participants responded to several exploratory and demographic questions (see Supplement).
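For readers unfamiliar with the trust game, the sketch below illustrates the standard Berg et al. (1995) payoff structure on which this design builds (a simplified illustration with parameter names of our own choosing; it is not the exact payment rule used in this experiment, in which the Actor was fictitious):

```python
# Simplified Berg et al. (1995) trust game payoffs (illustration only;
# parameter names are ours, not the study's exact payment rule).
def trust_game_payoffs(endowment_cents: int, sent_cents: int,
                       return_fraction: float, multiplier: int = 3):
    """Observer sends `sent_cents`; the amount is multiplied; the Actor
    returns `return_fraction` of the multiplied amount."""
    received = multiplier * sent_cents
    returned = return_fraction * received
    observer_payoff = endowment_cents - sent_cents + returned
    actor_payoff = received - returned
    return observer_payoff, actor_payoff

# Example: a 100-cent endowment, 60 cents sent, half of the tripled amount returned
print(trust_game_payoffs(endowment_cents=100, sent_cents=60, return_fraction=0.5))
# -> (130.0, 90.0)
```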
Results and Discussion
As predicted, escalation increased financial trust. Observers entrusted 29% more of their endowment to Escalators (M=.60, SD=.39) than to De-Escalators (M=.47, SD=.41), t(599.2) = 4.18, p < .001, d=0.34, CI95 [.18, .50] (see Figure 1). The Supplement contains preregistered robustness checks, which support the main findings. The Supplement also contains analyses addressing (but finding inconsistent evidence for) a potential alternative explanation: that observers would be more likely to trust an actor whose choice matched their own preference (Tajfel, 1970).
Figure 1.
Changes in Trusting Behavior as a Function of Escalation of Commitment
Note. Results from Experiment 2: Observers sent 29% more money in a Trust Game to Actors who escalated commitment. Shaded plots display the distributions; horizontal bars indicate the means; rectangles show 95% confidence intervals.
Experiment 3: Are Escalators (versus De-Escalators) More Trustworthy?
Overview and Method
Given that Observers trust Escalators more than De-Escalators, Experiment 3 tested whether Escalators deserve such trust. Further, it tested whether participants would be more likely to escalate commitment when their choice was public rather than private.
Participants
We recruited 2,787 adult respondents from the United States. The final sample after preregistered exclusions for failing comprehension checks consisted of N=1,589 (54% female; Mage=36.9). We initially collected 803 respondents. Although we detected a relationship between escalation and trustworthiness, we did not detect a significant effect of observation on escalation. Because the exclusion rate under our preregistered exclusion plan was higher than expected (42%), the resulting estimate was imprecise. We therefore collected an additional 1,984 respondents and report results from the combined sample to obtain a more precise estimate.
Procedure
Drawing on Jordan et al. (2016), we employed a two-stage design similar to Experiment 2, except that participants played the role of Actor. We manipulated the observability of the choice they made in Stage 1. In the unobserved condition, Actors read that the Observer would be unable to see their choice and that their choice could not affect how much the Observer would send in the TG. In the observed condition, Actors read that the Observer could see the Actor's choice, which could affect how much the Observer chose to send. After choosing whether to escalate, Actors decided what percentage of the tripled amount they would return. This choice was made without knowledge of how much the Observer would send to them. Finally, participants completed demographic and exploratory questions, including trait-level reasonableness and rationality (see Supplement). To determine bonus payments, we randomly matched Actors with purported Observers from Experiment 2 (see Supplement).
Results
Results supported the hypothesis: Escalators returned 15% more money (M=.35, SD=.23) than did De-Escalators (M=.31, SD=.25), t(695.4) = 3.30, p=.001, d=0.19, CI95 [0.08, 0.31] (Figure 2). However, whether a decision maker was explicitly observed had no discernible impact on escalation (b=0.053, SE=0.11, z=0.46, p=.64, OR=1.05, CI95 [0.844, 1.317]). Additionally, there was no effect of observation on the size of the relationship between escalation and trustworthiness (interaction p=.67).
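To clarify how the observation effect is quantified, the logistic regression could be specified as sketched below (hypothetical variable names, not the authors' analysis code), with the odds ratio obtained by exponentiating the coefficient:

```python
# Illustrative logistic regression of escalation (0/1) on observation (0/1);
# hypothetical variable names, not the authors' analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment3.csv")  # hypothetical data file
model = smf.logit("escalated ~ observed", data=df).fit()
print(model.summary())               # reports b, SE, z, and p

b = model.params["observed"]
ci_low, ci_high = model.conf_int().loc["observed"]
print(f"OR = {np.exp(b):.2f}, 95% CI [{np.exp(ci_low):.3f}, {np.exp(ci_high):.3f}]")
```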
Figure 2.
Trustworthiness as a Function of Escalation of Commitment
Note. Results from Experiment 3: Actors who chose to escalate commitment returned 15% more money in a Trust Game. Shaded plots display the distributions; horizontal bars indicate the means; rectangles show 95% confidence intervals.
General Discussion
The present research examined the reputational causes and consequences of escalating commitment to failing courses of action. Three key findings emerged. First, Observers trust decision makers who escalate commitment, even in the face of a potentially failing course of action. Specifically, Observers attribute higher integrity- and benevolence-based trust to decision makers who escalate (versus de-escalate) commitment; Observers also entrust 29% more money to these Escalators. Second, consistent with Observers' expectations, Escalators actually behave in more trustworthy ways, returning 15% more money than De-Escalators. Third, whether decision makers escalated was unaffected by whether the decision was observed.
Theoretical Implications
The present work adds empirical content to an emerging conclusion that choices that seem costly in terms of material outcomes can be rewarding in strategic interactions involving trust (for related work, see Everett et al., 2016; Jordan, Hoffman, Bloom, et al., 2016; Tenney et al., 2019). To the extent that researchers seek to contrast prescriptive and descriptive models of behavior (i.e., how individuals should versus actually behave), such models will need to account not only for traditional economic incentives but also for key reputational incentives for signaling trustworthiness.
Our findings also pertain to the mature literature on the sunk-cost bias (Arkes & Blumer, 1985; Jessup et al., 2018). Specifically, they add empirical content to recent research efforts aimed at elucidating distinctly social underlying mechanisms (Sleesman et al., 2012). For example, the findings cohere with research showing that escalation is underpinned by neural activity in the insula (Fujino et al., 2016), a region that may be associated with the feeling of trust (Kang et al., 2011). Moreover, the sunk-cost bias persists when investments are incurred by others (Olivola, 2018). Our work converges with this recent body of work in suggesting that a social-institutional focus is necessary to understand decisions to escalate commitment.
Inasmuch as the present work sheds light on the social causes and consequences of escalation, it highlights a novel implication for de-biasing. Whereas de-biasing research has typically focused on the individual level of analysis, harnessing approaches such as education (e.g., Sellier, Scopelliti, & Morewedge, 2019), the present work suggests it could be crucial to focus on the social/organizational level of analysis. Specifically, the present studies imply that reducing excessive escalation requires a social/organizational context in which decision makers can de-escalate commitment without incurring reputational costs. Ideally, organizations can build a culture in which decision makers explicitly earn trust when they engage in rational cost-benefit analysis at key decision junctures.
Limitations and Future Directions
A few limitations merit note. First, while Experiment 1 revealed that decision makers who escalate are more trusted even when no opportunity for reciprocity is available, Experiment 2 cannot fully rule out whether reciprocity plays a role. Future research should consider alternative explanations, including whether escalators are perceived as more rational or as more likely to reciprocate, among other alternatives. In addition, future research should explore more deeply the extent to which escalation predicts trust specifically versus positive attributes more generally.
Second, although the present studies included real monetary transactions, a potential limitation of this work is that the monetary sums were small. Thus, we cannot conclude whether the patterns would change with larger sums (but see Simmons & Massey, 2012). In addition, we cannot conclude how such effects may change in repeated interactions.
Finally, it would also be valuable to explore why participants in Experiment 3 were not sensitive to explicit observation. Following social-interactionist schools of thought (Goffman, 1983; Mead, 1930), decision makers may already have internalized how others would evaluate them. Consistent with this idea, Jordan and Rand (2020) built on diverse strands of theory (e.g., Lerner & Tetlock, 1999; Schlenker & Weigold, 1992) to propose that individuals use a reputation heuristic, engaging in signaling behavior even without an explicit audience (see also Chaudhry & Loewenstein, 2019). Whether participants in our experiments had similarly internalized concerns about evaluation, even lacking an explicit audience, is a question for future research.
Conclusion
Most choices in life arrive not as isolated problems but as the next in a series of decisions within particular social or institutional contexts. Thus, understanding how prior choices impact future choices, especially seemingly irrational choices, merits attention. Examining why escalation of commitment to failing courses of action persists, we found a novel answer: Decisions to escalate engender trust and increase financial benefits from strategic partners. Whether those who de-escalate do so because they are unaware of the reputational benefit they will miss is an important area for future research.
Context
The central hypothesis (that escalation engenders trust) arose from discussions with our accomplished executive education students, who vigorously defended their escalation choices against prescriptions for “rational decision making.” They argued that, especially in global diplomacy, abandoning commitments creates political costs in ways our models fail to capture. The present data support this intuition. Collectively, these ideas spawned a broader program of research on the reputational costs of economic rationality.
Acknowledgments
All data, stimuli, analysis code, and preregistrations are publicly available on the Open Science Framework: https://tinyurl.com/OSFescalation. Grants from the National Science Foundation (1559511), the National Institutes of Health (1R01CA224545-01A1), the Harvard Program on Negotiation, and the Harvard Mind-Brain-Behavior Initiative supported the project, which was presented at the 2020 meeting of the Society for Judgment and Decision Making. Led by CKU and CAD, all coauthors designed the studies and analysis plans. CKU and CAD collected data and conducted analyses. All coauthors wrote the manuscript and approved the final version.
Footnotes
1. Sleesman and colleagues (2012, p. 557) concluded that “Researchers have emphasized project and psychological determinants at the expense of social and structural factors.” For a notable exception, see Fox & Staw (1979).
The authors declare no competing interests or conflicts.
References
- Arkes HR, & Blumer C. (1985). The psychology of sunk cost. Organizational Behavior and Human Decision Processes, 35(1), 124–140. 10.1016/0749-5978(85)90049-4
- Arrow KJ (1974). The limits of organization. W. W. Norton & Company.
- Ashraf N, & Bandiera O. (2018). Social incentives in organizations. Annual Review of Economics, 10(1), 439–463. 10.1146/annurev-economics-063016-104324
- Bazerman MH, & Moore DA (2013). Judgment in managerial decision making (8th ed.). John Wiley & Sons. https://digitalcommons.usu.edu/unf_research/44/
- Berg J, Dickhaut J, & McCabe K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10(1), 122–142. 10.1006/game.1995.1027
- Brockner J. (1992). The escalation of commitment to a failing course of action: Toward theoretical progress. The Academy of Management Review, 17(1), 39–61. 10.2307/258647
- Dirks KT, & Ferrin DL (2002). Trust in leadership: Meta-analytic findings and implications for research and practice. Journal of Applied Psychology, 87(4), 611–628. 10.1037/0021-9010.87.4.611
- Dorison CA, DeWees B, Rahwan Z, Robichaud C, & Lerner JS (2020). Inefficient (but seemingly fair) resource allocations are used to signal trustworthiness [Working paper]. Harvard University.
- Edwards W. (1954). The theory of decision making. Psychological Bulletin, 51(4), 380–417. 10.1037/h0053870
- Everett JAC, Pizarro DA, & Crockett MJ (2016). Inference of trustworthiness from intuitive moral judgments. Journal of Experimental Psychology: General, 145(6), 772–787. 10.1037/xge0000165
- Gelfand MJ, Raver JL, Nishii L, Leslie LM, Lun J, Lim BC, Duan L, Almaliach A, Ang S, Arnadottir J, Aycan Z, Boehnke K, Boski P, Cabecinhas R, Chan D, Chhokar J, D’Amato A, Ferrer M, Fischlmayr IC, … Yamaguchi S. (2011). Differences between tight and loose cultures: A 33-nation study. Science, 332(6033), 1100–1104. 10.1126/science.1197754
- Gneezy U, Meier S, & Rey-Biel P. (2011). When and why incentives (don’t) work to modify behavior. Journal of Economic Perspectives, 25(4), 191–210. 10.1257/jep.25.4.191
- Goffman E. (1983). The interaction order: American Sociological Association, 1982 presidential address. American Sociological Review, 48(1), 1–17. 10.2307/2095141
- Grossmann I, Eibach RP, Koyama J, & Sahi QB (2020). Folk standards of sound judgment: Rationality versus reasonableness. Science Advances, 6(2), eaaz0289. 10.1126/sciadv.aaz0289
- Jessup RK, Assaad LB, & Wick K. (2018). Why choose wisely if you have already paid? Sunk costs elicit stochastic dominance violations. Judgment and Decision Making, 13(6), 575–586.
- Jordan JJ, Hoffman M, Bloom P, & Rand DG (2016). Third-party punishment as a costly signal of trustworthiness. Nature, 530(7591), 473–476. 10.1038/nature16981
- Jordan JJ, Hoffman M, Nowak MA, & Rand DG (2016). Uncalculating cooperation is used to signal trustworthiness. Proceedings of the National Academy of Sciences, 113(31), 8658–8663. 10.1073/pnas.1601280113
- Jordan JJ, & Rand DG (2020). Signaling when no one is watching: A reputation heuristics account of outrage and punishment in one-shot anonymous interactions. Journal of Personality and Social Psychology, 118(1), 57–88. 10.1037/pspi0000186
- Kang Y, Williams LE, Clark MS, Gray JR, & Bargh JA (2011). Physical temperature effects on trust behavior: The role of insula. Social Cognitive and Affective Neuroscience, 6(4), 507–515.
- Kanodia C, Bushman R, & Dickhaut J. (1989). Escalation errors and the sunk cost effect: An explanation based on reputation and information asymmetries. Journal of Accounting Research, 27(1), 59–77.
- Kramer RM (1999). Trust and distrust in organizations: Emerging perspectives, enduring questions. Annual Review of Psychology, 50(1), 569–598. 10.1146/annurev.psych.50.1.569
- Lerner JS, & Tetlock PE (1999). Accounting for the effects of accountability. Psychological Bulletin, 125(2), 255–275.
- Mankiw NG (2020). Principles of economics (9th ed.). Cengage Learning.
- Mayer RC, Davis JH, & Schoorman FD (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734. 10.5465/amr.1995.9508080335
- Mead GH (1930). Cooley’s contribution to American social thought. American Journal of Sociology, 35(5), 693–706. 10.1086/215189
- Olivola CY (2018). The interpersonal sunk-cost effect. Psychological Science, 29(7), 1072–1083. 10.1177/0956797617752641
- Schlenker BR, & Weigold MF (1992). Interpersonal processes involving impression regulation and management. Annual Review of Psychology, 43(1), 133–168.
- Simmons JP, Nelson LD, & Simonsohn U. (2012). A 21 word solution.
- Sleesman D, Conlon DE, McNamara G, & Miles J. (2012). Cleaning up the big muddy: A meta-analytic review of the determinants of escalation of commitment. The Academy of Management Journal, 55(3), 541–562. 10.5465/amj.2010.0696
- Staw BM (1976). Knee-deep in the big muddy: A study of escalating commitment to a chosen course of action. Organizational Behavior and Human Performance, 16(1), 27–44. 10.1016/0030-5073(76)90005-2
- Staw BM (1981). The escalation of commitment to a course of action. The Academy of Management Review, 6(4), 577–587. 10.2307/257636
- Tajfel H. (1970). Experiments in intergroup discrimination. Scientific American, 223(5), 96–103.
- Tenney ER, Meikle NL, Hunsaker D, Moore DA, & Anderson C. (2019). Is overconfidence a social liability? The effect of verbal versus nonverbal expressions of confidence. Journal of Personality and Social Psychology, 116(3), 396–415. 10.1037/pspi0000150
- Tetlock PE (2000). Cognitive biases and organizational correctives: Do both disease and cure depend on the politics of the beholder? Administrative Science Quarterly, 45(2), 293–326. 10.2307/2667073