PLOS One. 2020 Aug 14;15(8):e0237644. doi: 10.1371/journal.pone.0237644

Dispositional optimism weakly predicts upward, rather than downward, counterfactual thinking: A prospective correlational study using episodic recall

Jessica Gamlin, Rachel Smallman, Kai Epstude, Neal J. Roese
Editor: Peter Karl Jonason
PMCID: PMC7428155; PMID: 32797102

Abstract

Counterfactual thoughts center on how the past could have been different. Such thoughts may be differentiated in terms of direction of comparison, such that upward counterfactuals focus on how the past could have been better, whereas downward counterfactuals focus on how the past could have been worse. A key question is how such past-oriented thoughts connect to future-oriented individual differences such as optimism. Ambiguities surround a series of past studies in which optimism predicted relatively greater downward counterfactual thinking. Our main study (N = 1,150) and six supplementary studies (N = 1,901) re-examined this link to reveal a different result: a weak relation between optimism and upward (rather than downward) counterfactual thinking. These results offer an important correction to the counterfactual literature and are informative for theory on individual differences in optimism.

Introduction

Goal of the present research

Looking back on one’s past to compare what actually transpired to what might have been (i.e., counterfactual thinking) is a common feature of mental experience [1–3]. Further, counterfactual thoughts may be differentiated in terms of their direction of comparison, where upward counterfactuals center on how an outcome could have been better than actuality and downward counterfactuals center on how an outcome could have been worse than actuality. Direction of comparison has been widely used to parse the content of counterfactual thinking. Such counterfactual thoughts about past outcomes may also connect to future-oriented individual differences such as dispositional optimism, which is defined as domain-general beliefs that future outcomes will be positive [4, 5].

A subset of the counterfactual literature (comprising nine studies in six papers [6–11]; see S1 Appendix) has examined the relationship between dispositional optimism and counterfactual direction of comparison, finding that optimism predicts downward (vs. upward) counterfactual thinking. That is, people who tend to hold positive expectancies about future outcomes tend to think about how things in the past could have been worse (rather than better). Although past studies indicate that optimism predicts downward counterfactual thinking [6–11], we identify theoretical and methodological reasons to question those prior results.

Recent trends in scholarly research, particularly within the field of psychology, illuminate the need to consider existing findings in light of new research practices and/or revised theorizing [12–15]. In particular, bias toward the publication of only “clean,” significant findings has introduced a gap between reproducible effects and the refinement of existing (i.e., published) knowledge. Proposed solutions to bridge this gap include (a) greater publication of incremental and, where appropriate, non-significant findings; (b) the use of increased power and greater sample sizes; (c) preregistration; (d) single-paper meta-analyses; (e) reporting all, rather than only significant, studies conducted; and (f) data transparency—among many other recommendations [15–20]. Following such recommendations, studies retesting published findings and employing one or a combination of the proposed solutions are becoming commonplace (cf. [21–23]). The present paper draws on such recommendations to reexamine the relation between optimism and counterfactual direction of comparison.

The goal of the present research is to conduct a new test of the relation between optimism and counterfactual direction of comparison using a methodological approach superior to what appears in the literature. We report the results of a large-sample, pre-registered main study, followed by a single-paper meta-analysis that includes six preliminary studies (see S1 File). Our meta-analytic summary of these data sets (7 total) provides a robust estimate of the effect size relating optimism to counterfactual direction of comparison.

Theoretical background

Counterfactual thoughts play a key role in a range of emotions, judgments, and behaviors and have been applied in studies from moral judgment [24] to mental health [25] and from the neurocognitive underpinnings of choice [26] to the developmental progression of causal reasoning [27]. Counterfactuals are thoughts about the past, specifically how particular facets of the past may have been different from actuality. Such thoughts may contain kernels of insight that suggest new courses of action in the future. Indeed, the functional theory of counterfactual thinking asserts that much of counterfactual thinking is oriented toward the formation of intentions that embody learning and promote improvement [3, 28] and may therefore be useful for goal pursuit [3, 28–31].

According to the functional theory of counterfactual thinking, individuals tend to generate counterfactuals when there is a discrepancy between their actual state and desired end-state. More specifically, counterfactual direction of comparison characterizes the sorts of counterfactuals that arise spontaneously in light of such a discrepancy. Upward counterfactuals, because they specify improvement to the status quo, may be useful in articulating means by which future performance might be improved. Indeed, upward counterfactuals predominate in response to recognition of an actual-ideal discrepancy. Conversely, downward counterfactuals are related to affect-regulatory goals. By way of a contrast effect, consideration of inferior outcomes can make factual outcomes seem more positive. As a result, positive emotions such as relief are evoked by downward counterfactual thoughts; such thoughts may thus be generated strategically to repair mood. Although downward counterfactuals are generated less frequently overall than upward counterfactuals [32–35], they arise when individuals feel a need to compensate for negative emotional states [36].

The link between goals and counterfactual direction of comparison is further illuminated by consideration of the antecedents to counterfactual thinking. At the most basic level, outcome valence is a key antecedent, such that negative more than positive outcomes activate upward counterfactual thinking [37–39]. Additionally, situationally active performance goals differentially influence counterfactual direction, such that when goals remain active (e.g., for tasks that involve a repeating sequence) versus inactive (e.g., completed tasks), upward counterfactuals are generated more frequently [40]. In addition, individual differences in chronic goals may also be antecedents to counterfactual direction [41–43]. For example, incremental (vs. entity) theorists, who see human behavior as more variable and hence improvable, are more likely to generate upward (vs. downward) counterfactuals [43]. This same link is reflected in the “opportunity principle,” whereby the opportunity to change or amend prior outcomes is associated with the generation of more upward (vs. downward) counterfactuals [3, 44]. In these varying ways, goal cognition constitutes a key determinant of counterfactual direction of comparison.

A fundamental question, then, centers on how individuals form cognitions of the past and future when it comes to counterfactual thinking, and whether there is meaningful variation across individuals at this intersection. One way that prior theory has addressed this question is via the relation between dispositional optimism and counterfactual direction of comparison [6–11]. Optimists expect that good things will occur, resulting in confidence and persistence in the face of challenges [45]. Moreover, optimism connects to models of behavioral self-regulation, in that people engage in goal-congruent efforts to the extent that they expect their efforts will eventually result in success [46, 47].

Dispositional optimism may therefore predict counterfactual direction. One way to define optimism is at a domain-general (vs. domain-specific) level, as instantiated by the Life Orientation Test (LOT and LOT-R; [5, 47]). Defined in this way, optimism is associated with superior coping outcomes [46, 48–51]. For example, coronary bypass patients who scored higher in optimism recovered more quickly and reported higher quality of life six months post-surgery [50]. Although optimism may engender positive coping outcomes partly via affect-regulatory processes (e.g., a self-serving attributional bias that mitigates the affective sting of negative outcomes [52]), the bulk of recent evidence suggests that optimism brings beneficial outcomes via performance improvement goals. For example, Scheier et al. [47] showed that optimism more often involves performance improvement processes (e.g., active coping, planning, seeking social support) than affect regulation (e.g., denying or disengaging). Nes and Segerstrom’s [53] meta-analysis (N = 11,629) indicated that optimism is associated more strongly with performance improvement goals (defined in terms of approach goals and active coping) than with affect regulation (defined in terms of avoidance goals and passive coping). To be sure, performance improvement is not the same as an approach goal, nor is affect regulation synonymous with avoidance; nevertheless, the degree of conceptual overlap indicates an overarching connection between optimism and performance improvement. Upward counterfactuals connect to performance improvement goals, whereas downward counterfactuals connect to affect-regulatory goals (e.g., [29, 32, 36, 40, 54, 55]). Thus, the optimism literature provides a basis for predicting, through a shared conceptual emphasis on performance improvement, that optimism may predict upward (vs. downward) counterfactual thinking.

Extant findings

In contrast to the theoretically derived prediction described above, the counterfactual literature indicates that optimism predicts downward (vs. upward) counterfactual thinking [6–11]. For example, Kasimatis and Wells [8] operationalized optimism with the LOT scale to predict counterfactual thoughts collected in a thought-listing task. Those higher in optimism generated more downward than upward counterfactuals, but Kasimatis and Wells’ book chapter report omitted many procedural and statistical details. Sanna [10] operationalized optimism with the Defensive Pessimism Questionnaire (DPQ; [52]) to predict students’ course-related counterfactual thoughts collected via thought-listing—again, those higher in optimism generated more downward than upward counterfactuals. However, in this as well as a follow-up study with similar findings [11], important statistical details were omitted, such as those centering on the main effect of optimism on counterfactual direction. Further, the use of tertile splits and the exclusion of the middle third of participants from analyses raise questions of statistical precision in light of emerging data standards (e.g., [56]). Issues with dichotomizing the optimism measure [7] as well as omitted statistical analyses [6, 9] further impact the conclusions of the remaining papers to have shown that optimism predicts downward (vs. upward) counterfactual thinking. A final point of concern is that, to date, eight of Sanna’s papers have been retracted because of data fraud [57]. Although the two relevant papers cited here have not been retracted, a prudent reading of the literature suggests the need for a new look at the relation between optimism and counterfactual direction of comparison.

Main study

To that end, our main study features four key improvements over the literature. First, we focus on episodic counterfactuals that participants report regarding their own autobiographical experiences [58]. Soliciting memories of episodic counterfactuals is superior to two other methods of eliciting counterfactual thinking: (a) hypothetical scenarios (used in [8]), which are vulnerable to the critique that participant responses are speculative rather than genuine, and (b) laboratory tasks (used in [10]), which yield more genuine responses but lack cross-domain generality. Second, participants self-code the direction of each counterfactual they generate using scale ratings, avoiding the risk that independent coders (as in [8, 9, 10]) misinterpret ambiguous counterfactuals (e.g., “what if I had taken a ‘different’ train” could represent either a better or a worse alternative). Moreover, although much prior research has dichotomized direction into upward versus downward, some prior research has used scales (e.g., [59]), with the advantage of capturing greater variability in the way individuals report on counterfactual thoughts. Third, we assessed optimism using the LOT-R [47], perhaps the best validated and most widely used of available optimism measures. Fourth, the main study solicited four episodic counterfactuals from each participant to enhance reliability of measurement via an entirely within-subject design.

The main study assessed the magnitude and direction of the association between optimism and counterfactual direction of comparison. We report all conditions, measures, and any data exclusions. The IRB of Northwestern University, Kellogg School of Management approved this study for Human Subject Research. Written informed consent was obtained from all participants prior to their commencing the study and before any data collection. The study was pre-registered on 10-9-2018, and all materials and de-identified data are available at: https://osf.io/wudjs/.

We hypothesized that optimism would not predict greater downward (vs. upward) counterfactual thinking. Further, based on evidence from the counterfactual and optimism literatures connecting both upward counterfactuals and more optimistic individuals to improved performance on goals, we theorized the opposite pattern may emerge—that is, that optimism may predict greater upward (vs. downward) counterfactual thinking. Thus, our aim was to reexamine the link between optimism and counterfactual direction of comparison that has received prior research attention.

Method

Sample size and power

An a priori power analysis was conducted using G*Power v3.1.9.4 to determine the minimum sample size required to find significance based on an effect size of dz = .08, alpha = .05, power = .8, and a two-tailed t-test, using a within-subject design. This analysis resulted in a desired sample of 1,229 participants. Although this power calculation was based on a t-test, our analysis relies primarily on a mixed effects model (MEM). Given continued debate on how best to calculate power for MEMs, and given that such models should be more sensitive than t-tests, we believe that this power calculation is a reasonable way to determine sample size in our study. Specifically, sample size requirements for the analyses of the predictors in our model (optimism, context, and opportunity) would fall within this minimum sample size of 1,229 [60–62]. From our prior experience with attrition rates, we assumed a completion rate of .6 from Time 1 (T1) to Time 2 (T2) and on this basis set the desired sample size for T1 at N = 2,050.
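As a rough check on this figure, the required N for a two-tailed paired t-test can be approximated with the standard normal-approximation formula, n ≈ ((z₁₋α/₂ + z_power) / dz)². This is a sketch, not the paper's actual G*Power procedure; G*Power's exact noncentral-t computation gives the slightly larger N = 1,229.

```python
from math import ceil
from statistics import NormalDist

def paired_t_sample_size(dz: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate N for a two-tailed paired t-test (normal approximation).

    n = ((z_{1-alpha/2} + z_{power}) / dz)^2, rounded up.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = .05
    z_power = NormalDist().inv_cdf(power)           # 0.84 for power = .80
    return ceil(((z_alpha + z_power) / dz) ** 2)

n = paired_t_sample_size(dz=0.08)  # ~1,227; close to G*Power's exact 1,229
```

Dividing the target by the assumed .6 completion rate (1,229 / 0.6 ≈ 2,048) recovers the T1 target of roughly 2,050.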

The final sample (comprising participants who completed both T1 and T2) consisted of 1,150 adults drawn from MTurk (59% female; Mage = 36, SDage = 12). The T1 sample consisted of 2,059 adults; out of this number 1,287 (63%) returned at T2. Participants not following instructions (e.g., answered “na” or gibberish, n = 119) or failing attention checks (n = 18), as determined by the first author, were excluded from analyses (total exclusions, n = 137). T1 data collection took place between 10-4-2018 and 10-5-2018; T2 data collection took place between 10-11-2018 and 10-14-2018.

Research design and measures

The T1 assessment focused on optimism, measured using the 6-item LOT-R with 5-point response options, averaged to create the optimism index (α = .88). Participants responded to demographic questions (e.g., age, gender, race, and ethnicity) and an open-ended attention check (“Please confirm you are human by describing the weather outside right now where you are”).

The T2 assessment centered on counterfactual thinking. To introduce variability in the tendency to report upward and downward counterfactuals, participants reported counterfactual alternatives to four autobiographical events with prompts that varied according to a 2 (context: personal vs. professional) × 2 (opportunity: low vs. high) factorial design (fully within-subject). Participants read the instructions: “This survey asks you about four separate events from your recent past. Your job is to answer brief questions about what you remember.” Participants then responded to four prompts: 1) low opportunity professional, 2) low opportunity personal, 3) high opportunity professional, and 4) high opportunity personal (with order of presentation randomized). For each, participants recalled and shared details about a recent negative experience (see Table 1). The choice of personal versus professional context was based on the observation that life regrets, which derive from counterfactual thinking, tend to focus on these contexts more frequently than others [44, 63]. The opportunity manipulation derived from past demonstrations that upward counterfactuals are more common under conditions of high (vs. low) opportunity [3]. The manipulation of opportunity thus afforded an internal, theoretically-based check on our measurement technique. Replication of this known effect lends credibility to our methods.

Table 1. Main study prompts by condition.

High opportunity, professional: “Think back to a recent NEGATIVE experience you had at WORK or SCHOOL that there are POSSIBLE SOLUTIONS TO.”
High opportunity, personal: “Think back to a recent NEGATIVE experience you had with FRIENDS or FAMILY that there are POSSIBLE SOLUTIONS TO.”
Low opportunity, professional: “Think back to a recent NEGATIVE experience you had at WORK or SCHOOL that there is NO WAY TO RESOLVE.”
Low opportunity, personal: “Think back to a recent NEGATIVE experience you had with FRIENDS or FAMILY that there is NO WAY TO RESOLVE.”

Exact prompts used in Main Study assigning participants to the 2 (context: personal vs. professional) × 2 (opportunity: low vs. high) factorial design (within-subject) conditions.

Next, we solicited counterfactual thoughts with a prompt intended to be neutral with regard to upward or downward direction of comparison: “After having experiences like this, sometimes people have thoughts like ‘what if’—in that they think about how things could have gone differently. In the space below, please share one ‘what if’ thought.”

Our dependent measure, counterfactual direction of comparison, was assessed using a self-report rating scale. Specifically, after each counterfactual prompt, participants responded to: (1) “Does your ‘what if’ thought focus more on how things could have gone better or how things could have gone worse?”; (2) “In general when you look back at this experience, do you tend to ponder more about how things could have gone better or how things could have gone worse?”; and, (3) “When you look back at this experience, does it make more sense to you to reflect on how the outcome could have been better or how the outcome could have been worse?” on 5-point scales anchored at [-2] = Definitely Worse to [2] = Definitely Better. We averaged these three items to create an index of counterfactual direction for each of the four prompts (α1 = .87; α2 = .88; α3 = .90; α4 = .90; overall α = .85). Participants responded to the same demographic measures and attention check as in T1.

Results

Analyses were conducted using JMP Pro v14.1.0. In the counterfactual direction index, positive values indicate an upward direction of comparison and negative values indicate a downward direction of comparison. Overall, the mean counterfactual direction score was greater than zero, M = 0.98, SD = 0.75; t(1149) = 44.33, p < .001, 95% CI [0.96, 1.00], revealing a general tendency toward generating more upward than downward counterfactuals, an effect consistent with the prior literature (e.g., [32–35]).
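The reported one-sample t statistic can be recovered from the summary statistics alone, via t = M / (SD / √N); a minimal sketch (small rounding differences from the reported t(1149) = 44.33 are expected, since M and SD are given to two decimals):

```python
from math import sqrt

def one_sample_t(mean: float, sd: float, n: int, mu0: float = 0.0) -> float:
    """t statistic for a one-sample t-test against mu0, from summary statistics."""
    return (mean - mu0) / (sd / sqrt(n))

t = one_sample_t(mean=0.98, sd=0.75, n=1150)  # close to the reported 44.33
```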

We ran a mixed regression model with mean-centered optimism (M = 2.28, SD = 0.88), context (professional = 0; personal = 1), opportunity (low = 0; high = 1), and all 2 and 3-way interactions between these factors, as predictors of counterfactual direction, with participant-level variation accounted for as a random effect. This overall model was significant, AICc = 13495.93, p < .001. Critically, this regression showed a significant effect of optimism, b = .06, SE = .03, β = .05, t(1152.5) = 2.29, p = .022, 95% CI [.008, .11], such that greater optimism predicted more upward counterfactuals (see Fig 1). The main effect of context was not significant, b = -.01, SE = .01, β = -.02, t(3447.9) = -1.08, p = .26, 95% CI [-0.04, 0.01]. However, we did observe a significant main effect of opportunity, b = .07, SE = .01, β = .07, t(3447.9) = 5.32, p < .001, 95% CI [0.05, 0.10], such that events high (vs. low) in opportunity predicted more upward counterfactuals. Because this opportunity effect replicates prior research [3], we gain confidence in the fidelity of the method. There were no significant 2-way or 3-way interactions (ps > .23).
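For concreteness, the fitted model can be written out as follows. The notation is ours, not the paper's: i indexes participants, j indexes the four prompts, and u_i is the participant-level random intercept.

```latex
\text{direction}_{ij} = \beta_0
  + \beta_1\,\text{optimism}_i
  + \beta_2\,\text{context}_{ij}
  + \beta_3\,\text{opportunity}_{ij}
  + \beta_4\,\text{optimism}_i \times \text{context}_{ij}
  + \beta_5\,\text{optimism}_i \times \text{opportunity}_{ij}
  + \beta_6\,\text{context}_{ij} \times \text{opportunity}_{ij}
  + \beta_7\,\text{optimism}_i \times \text{context}_{ij} \times \text{opportunity}_{ij}
  + u_i + \varepsilon_{ij},
\qquad u_i \sim \mathcal{N}(0,\sigma_u^2),\quad
\varepsilon_{ij} \sim \mathcal{N}(0,\sigma^2)
```

Here optimism is mean-centered, and context and opportunity are the 0/1 dummy codes described above; the focal result is the positive β₁.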

Fig 1. Dispositional optimism predicts counterfactual direction of comparison, moderated by context and opportunity.


In support of our hypothesis, the main study reveals that optimism does not predict greater downward counterfactual thinking, as prior research had found. Instead, this study shows that optimism weakly predicts greater upward than downward counterfactual thinking and that this effect is consistent across two different contexts of life, personal and professional.

Statistical summary of studies

We ran six preliminary studies between February 2017 and March 2018. We summarize these prior studies here to avoid the potential bias of omitting unpublished studies and to illuminate the rationale for the main study’s sample size. We report all methodological details of the preliminary studies in the S1 File. Further, Table 2 summarizes key methodological details (measures and manipulations) as well as the results, focusing on the effect (β) of optimism on counterfactual direction of comparison.

Table 2. Summary of methodologies and results across all studies.

Study | N | Optimism Measure(s) (IV) | Counterfactual Direction Measure (DV) | Moderators (Manipulated) | Effect (β)
P1 | 197 | LOT-R | Three-item scale | Outcome Valence | β = 0.07, p = .46
P2 | 494 | Same as P1 | Dichotomous | Same as P1 | β = 0.15, p = .29
P3 | 199 | Same as P1 | Same as P1 | — | β = 0.02, p = .80
P4 | 290 | Same as P1 | Same as P1 | Same as P1 | β = 0.10, p = .09
P5 | 196 | Same as P1, and DPQ | Same as P1 | Same as P1 | LOT-R: β = 0.07, p = .43; DPQ: β = -0.07, p = .47
P6 | 525 | Same as P1, and DPQ | Same as P1 | Same as P1 | LOT-R: β = 0.06, p = .15; DPQ: β = 0.06, p = .17
Main | 1,150 | Same as P1 | Same as P1 | Context, Opportunity | β = 0.05, p = .02

Table 3 summarizes the focal relation between optimism and counterfactual direction of comparison as estimated across all studies conducted. We conducted a meta-analysis as per McShane and Böckenholt [19] by (1) averaging across all conditions within a study, (2) computing the correlation between the two key variables within each study, (3) converting that correlation to the Fisher z scale, and (4) analyzing via the basic random-effects meta-analytic model (see S1 File for R code). From this analysis (N = 3,051), we noted a mean effect size of r = 0.06, SE = .02, Z = 3.38, p = .0007, 95% CI [.03, .10]. Importantly, although this effect is weak, it supports our initial hypothesis that optimism does not predict greater downward counterfactual thinking. Instead, this meta-analysis suggests optimism is more clearly, albeit weakly, linked to an upward counterfactual direction of comparison.
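The meta-analytic steps above can be sketched in pure Python using the per-study Ns and rs from Table 3. We use the DerSimonian–Laird estimator as a standard instance of the basic random-effects model; the paper's own R code may differ in implementation details.

```python
from math import atanh, tanh, sqrt

# (N, r) pairs from Table 3: six preliminary studies plus the main study
studies = [(197, 0.10), (494, 0.04), (199, -0.03), (290, 0.06),
           (196, 0.13), (525, 0.06), (1150, 0.07)]

def random_effects_meta(studies):
    """DerSimonian-Laird random-effects meta-analysis on the Fisher-z scale."""
    z = [atanh(r) for _, r in studies]            # Fisher z transform
    v = [1 / (n - 3) for n, _ in studies]         # sampling variance of z
    w = [1 / vi for vi in v]
    z_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
    q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, z))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)  # between-study variance
    w_star = [1 / (vi + tau2) for vi in v]
    z_pooled = sum(wi * zi for wi, zi in zip(w_star, z)) / sum(w_star)
    se = 1 / sqrt(sum(w_star))                     # SE on the Fisher-z scale
    return tanh(z_pooled), se                      # back-transform to r

r_pooled, se = random_effects_meta(studies)  # close to the reported r = 0.06, SE = .02
```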

Table 3. Statistical summary of studies.

Study | N | r | p
Preliminary 1 | 197 | 0.10 | 0.17
Preliminary 2 | 494 | 0.04 | 0.35
Preliminary 3 | 199 | -0.03 | 0.64
Preliminary 4 | 290 | 0.06 | 0.28
Preliminary 5 | 196 | 0.13 | 0.08
Preliminary 6 | 525 | 0.06 | 0.15
Main Study | 1,150 | 0.07 | 0.02
Meta-analysis | 3,051 | 0.06 | 0.0007

Overall mean effect of optimism on counterfactual direction, as estimated across the six preliminary studies and the main study.

Conclusions

Is an optimist more likely to see counterfactual alternatives that specify a better (upward) or worse (downward) state of affairs, relative to actuality? The answer hinges on the optimist’s underlying goals, which might center on either performance improvement or affect regulation. If performance improvement goals dominate, the optimist will generate counterfactuals that help them to improve in the future (upward counterfactuals). But if affect-regulatory goals dominate, the optimist will generate counterfactuals that help them to feel better in the moment (downward counterfactuals). The prior counterfactual literature indicates the latter answer, that optimism predicts downward counterfactual thinking. However, the theoretical consensus in the optimism literature suggests a different pattern, that optimism predicts upward counterfactual thinking. Given uncertainty surrounding the counterfactual literature (methodological, statistical, data reporting), we conducted new research to examine this relation, and we provide evidence that optimism weakly predicts upward counterfactual thinking. Thus, our current result is consistent with the optimism literature (generally speaking) but not with the counterfactual literature (as it pertains to optimism). Future research might explore moderators that could reconcile our result with prior findings, including the methodological differences between the present and earlier studies. Furthermore, although we found a weak relation between optimism and an upward counterfactual direction of comparison, future research may examine the role of performance improvement goals in linking these constructs.

Our key result does connect to a broader theme in the counterfactual literature, namely that “counterfactual thoughts often reflect goals and the varying means to reach those goals … Imagining alternative pathways by which past goals might have been achieved provides insights that comprise blueprints for future action” ([3], p. 5). Our results thus speak to the intersection of past-focused versus future-focused thinking. As individuals look to the past to imagine alternatives to factual events, they likely rely upon the same brain system (e.g., [26]) as when they look to the future to imagine those possibilities that may yet come to pass. Optimism, as an enduring individual difference, plays a role in thoughts of both the past and future.

Supporting information

S1 Appendix. Summary table of prior research.

A summary table of nine studies across six papers published between 1995 and 2015, which indicate that optimism predicts downward (vs. upward) counterfactual thinking. This summary table reports the authors and year of publication; the study number within the publication (if applicable); the total sample size of the study (if reported); the design of that study including the conditions participants were assigned to or whether the design was correlational; how counterfactuals were elicited (i.e., in response to what prompts or events); the scale that was used to capture trait optimism; what optimism was compared to (if applicable); and, how counterfactuals were classified as downward or upward.

(DOCX)

S1 File

(DOCX)

Acknowledgments

We thank Richard Robins and Suzanne Segerstrom for their comments on an early manuscript draft; and Daniel Jung, Yiyun Lan, Jue Wu, and Michelle Zhou for assistance in data collection and library search.

Data Availability

All deidentified data files are available at OSF at https://osf.io/wudjs/.

Funding Statement

The authors received no specific funding for this work.

References

1. Byrne RMJ. Counterfactual thought. Annu Rev Psychol. 2016;67:135–157. doi: 10.1146/annurev-psych-122414-033249
2. Kahneman D, Miller DT. Norm theory: Comparing reality to its alternatives. Psychol Rev. 1986;93(2):136–153. doi: 10.1037/0033-295X.93.2.136
3. Roese NJ, Epstude K. The functional theory of counterfactual thinking: New evidence, new controversies, new insights. Adv Exp Soc Psychol. 2017;56:1–79. doi: 10.1016/bs.aesp.2017.02.001
4. Carver CS, Scheier MF, Segerstrom SC. Optimism. Clin Psychol Rev. 2010;30(7):879–889. doi: 10.1016/j.cpr.2010.01.006
5. Scheier MF, Carver CS. Optimism, coping, and health: Assessment and implications of generalized outcome expectancies. Health Psychol. 1985;4(3):219–247. doi: 10.1037/0278-6133.4.3.219
6. Barnett MD, Martinez B. Optimists: It could have been worse; Pessimists: It could have been better. Pers Individ Dif. 2015;86:122–125. doi: 10.1016/j.paid.2015.06.010
7. del Valle CHC, Mateos PM. Dispositional pessimism, defensive pessimism and optimism: The effect of induced mood on prefactual and counterfactual thinking and performance. Cogn Emot. 2008;22(8):1600–1612. doi: 10.1080/02699930801940289
8. Kasimatis M, Wells GL. Individual differences in counterfactual thinking. In: Roese NJ, Olson JM, editors. What might have been: The social psychology of counterfactual thinking. Mahwah, NJ: Erlbaum; 1995. pp. 81–101.
9. Rye MS, Cahoon MB, Ali RS, Daftary T. Development and validation of the counterfactual thinking for negative events scale. J Pers Assess. 2008;90(3):261–269. doi: 10.1080/00223890701884996
10. Sanna LJ. Defensive pessimism, optimism, and simulating alternatives: Some ups and downs of prefactual and counterfactual thinking. J Pers Soc Psychol. 1996;71(5):1020–1036. doi: 10.1037/0022-3514.71.5.1020
11. Sanna LJ. Defensive pessimism and optimism: The bitter-sweet influence of mood on performance and prefactual and counterfactual thinking. Cogn Emot. 1998;12(5):635–665. doi: 10.1080/026999398379484
12. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2(8):e124. doi: 10.1371/journal.pmed.0020124
13. Lakens D, Etz AJ. Too true to be bad: When sets of studies with significant and nonsignificant findings are probably true. Soc Psychol Personal Sci. 2017;8(8):875–881. doi: 10.1177/1948550617693058
14. Nosek BA, Ebersole CR, DeHaven AC, Mellor DT. The preregistration revolution. Proc Natl Acad Sci. 2018;115(11):2600–2606. doi: 10.1073/pnas.1708274114
15. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22(11):1359–1366. doi: 10.1177/0956797611417632
16. Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, Robinson ES, et al. Power failure: Why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci. 2013;14(5):365–376. doi: 10.1038/nrn3475
17. Fraley RC, Vazire S. The N-pact factor: Evaluating the quality of empirical journals with respect to sample size and statistical power. PLoS One. 2014;9(10):e109019. doi: 10.1371/journal.pone.0109019
18. LeBel EP, Vanpaemel W, Cheung I, Campbell L. A brief guide to evaluate replications. Meta-Psychology. 2019;3.
19. McShane BB, Böckenholt U. Single paper meta-analysis: Benefits for study summary, theory-testing, and replicability. J Consum Res. 2017;43(6):1048–1063. doi: 10.1093/jcr/ucw085
20. VanVoorhis CW, Morgan BL. Understanding power and rules of thumb for determining sample sizes. Tutor Quant Methods Psychol. 2007;3(2):43–50. doi: 10.20982/tqmp.03.2.p043
21. Aknin LB, Dunn EW, Proulx J, Lok I, Norton MI. Does spending money on others promote happiness?: A registered replication report. J Pers Soc Psychol. 2020;119(2):e15–e26. doi: 10.1037/pspa0000191
22. Burns DM, Fox EL, Greenstein M, Olbright G, Montgomery D. An old task in new clothes: A preregistered direct replication attempt of enclothed cognition effects on Stroop performance. J Exp Soc Psychol. 2019;83:150–156. doi: 10.1016/j.jesp.2018.10.001
23. Gervais WM, McKee SE, Malik S. Do religious primes increase risk taking? Evidence against “Anticipating Divine Protection” in two preregistered direct replications of Kupor, Laurin, and Levav (2015). Psychol Sci. 2020;31(7):858–864. doi: 10.1177/0956797620922477
24. Effron DA, Miller DT, Monin B. Inventing racist roads not taken: The licensing effect of immoral counterfactual behaviors. J Pers Soc Psychol. 2012;103(6):916–932. doi: 10.1037/a0030008
25. Broomhall AG, Phillips WJ, Hine DW, Loi NM. Upward counterfactual thinking and depression: A meta-analysis. Clin Psychol Rev. 2017;55:56–73. doi: 10.1016/j.cpr.2017.04.010
26. Schacter DL, Benoit RG, De Brigard F, Szpunar KK. Episodic future thinking and episodic counterfactual thinking: Intersections between memory and decisions. Neurobiol Learn Mem. 2015;117:14–21. doi: 10.1016/j.nlm.2013.12.008
  • 27.Beck SR, Guthrie C. Almost thinking counterfactually: Children’s understanding of close counterfactuals. Child Dev. 2011; 82(4):1189–1198. 10.1111/j.1467-8624.2011.01590.x [DOI] [PubMed] [Google Scholar]
  • 28.Epstude K, Roese NJ. The functional theory of counterfactual thinking. Pers Soc Psychol Rev. 2008;12(2):168–192. 10.1177/1088868308316091 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Roese NJ. Counterfactual thinking. Psychol Bull. 1997;121(1):133–148. 10.1037/0033-2909.121.1.133 [DOI] [PubMed] [Google Scholar]
  • 30.Roese NJ, Smallman R, Epstude K. Do episodic counterfactual thoughts focus on personally controllable action?: The role of self-initiation. J Exp Soc Psychol. 2017;73:14–23. 10.1016/j.jesp.2017.05.006 [DOI] [Google Scholar]
  • 31.Smallman R, Summerville A. Counterfactual thought in reasoning and performance. Soc Personal Psychol Compass. 2018;12(4):e12376 10.1111/spc3.12376 [DOI] [Google Scholar]
  • 32.Nasco SA, Marsh KL. Gaining control through counterfactual thinking. Pers Soc Psychol Bull. 1999;25(5):557–569. 10.1177/0146167299025005002 [DOI] [Google Scholar]
  • 33.Alquist JL, Ainsworth SE, Baumeister RF, Daly M, Stillman TF. The making of might-have-beens. Pers Soc Psychol Bull. 2015;41(2):268–283. 10.1177/0146167214563673 [DOI] [PubMed] [Google Scholar]
  • 34.De Brigard F, Addis DR, Ford JH, Schacter DL, Giovanello KS. Remembering what could have happened: Neural correlates of episodic counterfactual thinking. Neuropsychologia. 2013;51(12):2401–2414. 10.1016/j.neuropsychologia.2013.01.015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Petrocelli JV, Seta CE, Seta JJ, Prince LB. “If only I could stop generating counterfactual thoughts”: When counterfactual thinking interferes with academic performance. J Exp Soc Psychol. 2012;48(5):1117–1123. 10.1016/j.jesp.2012.03.017 [DOI] [Google Scholar]
  • 36.White K, Lehman DR. Looking on the bright side: Downward counterfactual thinking in response to negative life events. Pers Soc Psychol Bull. 2005;31(10):1413–1424. 10.1177/0146167205276064 [DOI] [PubMed] [Google Scholar]
  • 37.McEleney A, Byrne RM. Spontaneous counterfactual thoughts and causal explanations. Think Reason. 2006;12(2):235–255. 10.1080/13546780500317897 [DOI] [Google Scholar]
  • 38.Roese NJ, Hur T. Affective determinants of counterfactual thinking. Soc Cogn. 1997;15(4):274–290. 10.1521/soco.1997.15.4.274 [DOI] [Google Scholar]
  • 39.Roese NJ, Olson JM. Counterfactual thinking: The intersection of affect and function. Adv Exp Soc Psychol. 1997;29:1–59. 10.1016/S0065-2601(08)60015-5 [DOI] [Google Scholar]
  • 40.Markman KD, Gavanski I, Sherman SJ, McMullen MN. The mental simulation of better and worse possible worlds. J Exp Soc Psychol. 1993;29(1):87–109. 10.1006/jesp.1993.1005 [DOI] [Google Scholar]
  • 41.Pierro A, Leder S, Mannetti L, Higgins ET, Kruglanski AW, Aiello A. Regulatory mode effects on counterfactual thinking and regret. J Exp Soc Psychol. 2008;44(2):321–329. 10.1016/j.jesp.2007.06.002 [DOI] [Google Scholar]
  • 42.Sirois FM, Monforton J, Simpson M. “If only I had done better”: Perfectionism and the functionality of counterfactual thinking. Pers Soc Psychol Bull. 2010;36(12):1675–1692. 10.1177/0146167210387614 [DOI] [PubMed] [Google Scholar]
  • 43.Wong EM, Haselhuhn MP, Kray LJ. Improving the future by considering the past: The impact of upward counterfactual reflection and implicit beliefs on negotiation performance. J Exp Soc Psychol. 2012;48(1):403–406. 10.1016/j.jesp.2011.07.014 [DOI] [Google Scholar]
  • 44.Roese NJ, Summerville A. What we regret most and why. Pers Soc Psychol Bull. 2005;31(9):1273–1285. 10.1177/0146167205274693 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Segerstrom SC, Castañeda JO, Spencer TE. Optimism effects on cellular immunity: Testing the affective and persistence models. Pers Individ Dif. 2003;35(7):1615–1624. 10.1016/S0191-8869(02)00384-7 [DOI] [Google Scholar]
  • 46.Chopik WJ, Kim ES, Smith J. Changes in optimism are associated with changes in health over time among older adults. Soc Psychol Personal Sci. 2015;6(7):814–822. 10.1177/1948550615590199 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Scheier MF, Carver CS, Bridges MW. Distinguishing optimism from neuroticism (and trait anxiety, self-mastery, and self-esteem): A reevaluation of the Life Orientation Test. J Pers Soc Psychol. 1994;67(6):1063–1078. 10.1037//0022-3514.67.6.1063 [DOI] [PubMed] [Google Scholar]
  • 48.Aspinwall LG, Taylor SE. Modeling cognitive adaptation: A longitudinal investigation of the impact of individual differences and coping on college adjustment and performance. J Pers Soc Psychol. 1992;63(6):989–1003. 10.1037//0022-3514.63.6.989 [DOI] [PubMed] [Google Scholar]
  • 49.Litt MD, Tennen H, Affleck G, Klock S. Coping and cognitive factors in adaptation to in vitro fertilization failure. J Behav Med. 1992;15(2):171–187. 10.1007/BF00848324 [DOI] [PubMed] [Google Scholar]
  • 50.Scheier MF, Matthews KA, Owens JF, Magovern GJ, Lefebvre RC, Abbott RA, et al. Dispositional optimism and recovery from coronary artery bypass surgery: the beneficial effects on physical and psychological well-being. J Pers Soc Psychol. 1989;57(6):1024–1040. 10.1037//0022-3514.57.6.1024 [DOI] [PubMed] [Google Scholar]
  • 51.Taber JM, Klien WMP, Ferrer RA, Kent EE, Harris PR. Optimism and spontaneous self-affirmation are associated with lower likelihood of cognitive impairment and greater positive affect among cancer survivors. Annals Behav Med. 2015;50(2):198–209. 10.1007/s12160-015-9745-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Norem JK, Cantor N. Anticipatory and post hoc cushioning strategies: Optimism and defensive pessimism in “risky” situations. Cognit Ther Res. 1986;10(3):347–362. 10.1007/BF01173471 [DOI] [Google Scholar]
  • 53.Nes LS, Segerstrom SC. Dispositional optimism and coping: A meta-analytic review. Pers Soc Psychol Rev. 2006;10(3):235–251. 10.1207/s15327957pspr1003_3 [DOI] [PubMed] [Google Scholar]
  • 54.Markman KD, McMullen MN. A reflection and evaluation model of comparative thinking. Pers Soc Psychol Rev. 2003;7(3):244–267. 10.1207/S15327957PSPR0703_04 [DOI] [PubMed] [Google Scholar]
  • 55.Roese NJ. The functional basis of counterfactual thinking. J Pers Soc Psychol. 1994;66(5):805–818. 10.1037/0022-3514.66.5.805 [DOI] [Google Scholar]
  • 56.Rucker DD, McShane BB, Preacher KJ. A researcher’s guide to regression, discretization, and median splits of continuous variables. J Consum Psychol. 2015;25(4):666–678. 10.1016/j.jcps.2015.04.004 [DOI] [Google Scholar]
  • 57.Oransky I. Retraction eight appears for social psychologist Lawrence Sanna. 2013 Jan 11. In: Retraction Watch. http://retractionwatch.com/2013/01/11/retraction-eight-appears-for-social-psychologist-lawrence-sanna/
  • 58.De Brigard F, Parikh N. Episodic counterfactual thinking. Curr Dir Psychol Sci. 2018;28(1):59–66. 10.1177/0963721418806512 [DOI] [Google Scholar]
  • 59.Sumerville A, Roese NJ. Dare to compare: Fact-based versus simulation-based comparison in daily life. J Exp Soc Psychol. 2008;44(3):664–673. 10.1016/j.jesp.2007.04.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Cohen J. Statistical power analysis for the behavioral sciences, 2nd ed Hillsdale, NJ: Lawrence Erlbaum Associates; 1988. [Google Scholar]
  • 61.Erdfelder E, Faul F, Buchner A. G*Power: A general power analysis program. Behavior Research Methods, Instruments, & Computers. 1996;28:1–11. Available from: http://link.springer.com/article/10.3758/BF03203630 [Google Scholar]
  • 62.Faul F, Erdfelder E, Lang AG, Buchner A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods. 2007;39(2):175–191. Available from: http://www.ncbi.nlm.nih.gov/pubmed/17695343 [DOI] [PubMed] [Google Scholar]
  • 63.Morrison M, Roese NJ. Regrets of the typical American: Findings from a nationally representative sample. Soc Psychol Personal Sci. 2011;2(6):576–583. 10.1177/1948550611401756 [DOI] [Google Scholar]

Decision Letter 0

Peter Karl Jonason

3 Apr 2020

PONE-D-20-00918

Does dispositional optimism predict counterfactual direction of comparison?

PLOS ONE

Dear Dr. Gamlin,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by May 18 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Peter Karl Jonason

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements:

1.    Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.plosone.org/attachments/PLOSOne_formatting_sample_main_body.pdf and http://www.plosone.org/attachments/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please consider changing the title so as to meet our title format requirement (https://journals.plos.org/plosone/s/submission-guidelines). In particular, the title should be "Specific, descriptive, concise, and comprehensible to readers outside the field" and in this case it is not informative and specific about your study's scope and methodology.

3. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified whether consent was informed.

4. Thank you for stating the following financial disclosure:

"The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

At this time, please address the following queries:

a)    Please clarify the sources of funding (financial or material support) for your study. List the grants or organizations that supported your study, including funding received from your institution.

b)    State what role the funders took in the study. If the funders had no role in your study, please state: “The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

c)     If any authors received a salary from any of your funders, please state which authors and which funders.

d)     If you did not receive any funding for this study, please state: “The authors received no specific funding for this work.”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors present a large-scale study, along with pilot studies, aiming to assess the relation between trait optimism and counterfactual thinking, focusing on upward vs. downward comparisons. This is clearly a considerable research effort, testing many participants over time, and it should hence be published so that it is available to interested researchers. However, I have several major concerns regarding the methods and statistics, which call into question the appropriateness and relevance of the conclusions. The problems with the methods used, and the apparently very small size of the effect of interest, seem to significantly question the relevance, generalizability, and replicability of such effects. The authors should recognise and discuss this clearly in the article; further considering and discussing how this work fits more broadly in the literature would increase its contribution to the field.

- The statistical analyses across the paper are confusing, often insufficiently described, and/or seem inappropriate. The variability in the statistical analyses (i.e. regressions, correlations) used within and across studies to test the same hypothesis is confusing, and hinders an overall understanding of the strength of evidence and of the effect sizes for the hypotheses of interest.

- Power calculation – the rationale for this is not clear, and does not seem appropriate. It was calculated for a t-test, based on an unspecified and unjustified effect size estimate, without clarifying which hypothesis it concerned. More importantly, this is not related to the actual statistical tests used to assess the relevant hypotheses, i.e. regression models, nor to the reported effect sizes. Hence, it is not pertinent to assessing the power of the analyses actually used (and standard power calculators do offer the possibility to calculate power for a regression model or a correlation). This is a common issue, and the past cannot be changed; clearly the most important thing is that some method was used to estimate a target sample size in advance of analysis. Nonetheless, the authors should be clear about what they actually did and what their rationale was, as well as account for the divergence in the analyses.

- The description of the “single-factor” regression and “mixed regression” models (p.11-12) is insufficient. What software and methods exactly were used to estimate the models and the reported parameters (e.g. CIs, degrees of freedom, p-values)? What is the rationale and meaning of inconsistently reporting different effect sizes, e.g. eta squared, d? It is unclear whether the predictor optimism was mean centred, which is recommended so the remaining parameters are estimated at that average level. It is also unclear why the authors ran a single-factor, and then a mixed regression. If the other predictors are plausible modulators, that single-factor model seems pretty meaningless, and it also does not serve the purpose of replicating a similar analysis from other studies, since correlations are used later. It is further unclear to me how the “mixed regression model” was specified. The authors state their DV is the ratings across 3 questions, so if that average was used in the model, and all other effects are between subjects, then it is unclear what data would be nested within “subject” (and hence why mixed regression was used). If they include the 3 measurements separately for each participant (which would be best), then in fact the role of the question itself should be modelled as a crossed design, with a separate “random” effect (e.g. often called “item” effects, cf. Baayen et al 2008; Bates et al 2015; Barr et al 2013). The full results table of the tested models should also be included in the article. [Note that, to avoid the inconsistency in tests within and across studies, the authors could use mixed effects models for a meta-analysis of all the data, using the study as a higher-level, nesting factor. There obviously can be good reasons for choosing other methods, but it might be worth thinking about whether the relevant tests and statistics should focus on correlations or on regression.]

- In fact, I’m not clear on the rationale for averaging across the 3 questions about counterfactuals, which concern different issues. Whether trait optimism is related to beliefs about counterfactual thinking vs. to actual behaviour while engaging in it are two different questions, but the subjective ratings on the 3 questions posed currently conflate those two aspects. The authors could in fact analyse them separately. The authors also did not comment on whether the counterfactuals produced by the participants, if rated by an external observer, would indeed be in line with the subjective reports. In other words, the trait vs. state production of counterfactuals seems conflated by the design and the measurements, and the article does not clearly address that point.

- Looking at the reported meta-analysis does not yield much confidence in the overall conclusions, since only the final large study would seem to robustly show the hypothesised effect. While the much larger sample in the final study obviously yields more reliable evidence, hence likely yielding an overall effect, this also seems to question what the relevance of such a small effect is, if it can’t be easily reproduced, even when the previous studies had relatively large samples (N>200).

- The authors highlight throughout how one’s goals are key mediators of whether up/downward counterfactuals will be produced, but their main analyses do not actually address that moderator. The predictions about the role of goals also seem to predict interactions in quite opposing directions, e.g. more upward counterfactuals if aiming to improve performance and having the opportunity to change future outcomes, but downward counterfactuals to regulate mood when there is nothing you can do about it. While they find that opportunity is related to higher ratings (more likely upward) than “no opportunity”, both are quite clearly on the upward side of the scale. As the interaction with outcome valence is described, one might predict that this is likely because they chose to use only scenarios with negative outcomes, but then that seems to limit the scope of the conclusions about the relation between optimism and counterfactuals, if the scenarios are already likely to yield upward comparisons?

- The supplemental materials describing the previous studies should be improved to more clearly summarise what was varied across the studies and what the various results were (maybe akin to the Appendix table, but more focused on summarising the relevant points). Including figures/tables summarising the effects in the regression models would also help the reader.

- Appendix – Should clarify the methods involved in collecting this list, the meaning of the columns, etc, and its purpose here… How were the sample size estimates obtained? Is there actually no info on the N per group in any of the studies? Were all manipulations always done between participants?

- The authors vaguely state that there has been controversy in the literature, and that there are flaws and limitations in the previous work, which is understandably a good argument for making a robust new study. But they could more clearly address what that should actually imply for how to interpret the previous literature, and where/whether there might be reasonable methodological differences, e.g. related to the moderators mentioned, or whether the measures target beliefs vs. behaviour, which could explain opposing patterns of relations with optimism. Such a more detailed discussion could well be moved to the appendix, which could currently be mostly puzzling for someone who might not already be familiar with the details of that work.

- The 4th paragraph seems to basically repeat what was said in the 2nd, while possibly expanding on some points, but this should be combined to avoid repetition.

Reviewer #2: The manuscript explores an interesting question about the relationship between dispositional optimism and the direction of comparison of counterfactual thoughts. I find the topic very important, because the forms of counterfactual thoughts might have consequences for emotion regulation. The introduction is generally well-written and the justification of the study is sufficient. However, I do not understand why the authors decided to report fully only one study out of seven. I believe the article can make a much stronger contribution if the authors would decide to develop it into a full paper. I present my concerns below.

p. 9. Method - I am not sure for what kind of analysis the sample size has been determined. Is this the sample needed to detect an effect of .08 in a t-test comparison?

Unfortunately, the content of the part "Statistical summary of studies" is unclear. First, this should be presented before the main study, because these are "preliminary" studies, as the authors wrote. Second, I am not sure what is actually presented here. What do the authors mean by trying to "avoid file drawer issues"? As I understand it, the authors conducted six other studies before the one reported here. In that case I would encourage them to report those studies in the current manuscript; this would strengthen their main finding. In the current version it is really confusing what all those effects are, etc. At the least, I would suggest presenting the summary of the previous studies more clearly and earlier in the manuscript.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Aug 14;15(8):e0237644. doi: 10.1371/journal.pone.0237644.r002

Author response to Decision Letter 0


23 Jun 2020

Response to Reviewer #1 (R1)

1. The bulk of R1’s comments centered on the statistical analysis, which we have substantially revised to provide additional clarity. Specifically:

a. We report the software and methods used and clarify how the mixed regression model was specified (page 12).

b. We mean-center the predictor, optimism, in our Main Study, as we had in our Preliminary Studies.

c. Rather than reporting effect sizes, we report standardized betas and 95% confidence intervals for analyses pertaining to our central relation of interest (i.e., the effect of optimism on counterfactual direction of comparison).

d. We eliminate our single-factor regression in the main study.

e. We add further rationale supporting our decisions regarding the elicitation of our dependent measure (pages 7–8). We also explain why we decided to average across the three questions about counterfactuals (rather than running a random effects model assessing item effects to capture these separately). In short, prior research has used scales, which have the advantage of capturing greater variability in the way individuals report on counterfactual thoughts. Moreover, our central aim with this measure was to capture a broad tendency (whether behavioral or belief-based) to think about a past outcome as it could have gone better or worse. Thus, averaging the three items, we feel, best captures our intended construct of counterfactual direction of comparison.

f. We enhanced our description of the original power analysis to better conform to reporting standards (page 9).

2. R1 also requested independent coders for the counterfactuals (rather than using participants’ self-reported coding of upward or downward). As we now elucidate in the manuscript (page 8), independent coding suffers when ambiguous counterfactuals arise. Self-coding prevents misinterpretation by independent coders, e.g.:

a. One participant discussed their negative professional experience, “We're now forced to use an electronic medical record, but it is not made to be used in my particular field,” and generated a counterfactual as follows, “What if they had considered my field when shopping and found a different system that met our needs?” It is impossible to know whether the “different system” referred to constitutes a better or worse system, and thus it is impossible for an independent coder to reliably code the direction of this counterfactual.

b. Similarly, another participant noted, “My boss is an idiot, doesn't understand English, and is going deaf. What if I worked for a different company?” Again, an independent coder cannot reliably code the direction for such a counterfactual.

Such examples are sufficiently common in this dataset to make independent coding an unreliable method for assessing the directionality of counterfactuals, suggesting that self-coding (which is also standard practice in the counterfactual literature) is a robust method for capturing counterfactual direction of comparison.

3. R1 suggested that our meta-analytic conclusion “does not yield much confidence in the overall conclusions.” We agree that, on the one hand, a meta-analysis showing a weak correlation between optimism and upward counterfactuals, and only significant when sample sizes are large (i.e., N > 1,000), does not provide evidence of a strong link between optimism and upward counterfactual direction of comparison. On the other hand, this is precisely why our meta-analytic approach is appropriate. Our goal in this paper, as is now stated earlier in the text for enhanced clarity (pages 3, and 8-9), is to correct a long-held belief in the literature that optimism is robustly linked to downward counterfactual direction of comparison. The purpose of our meta-analysis is to support a revised conclusion that there is not a robust link between optimism and upward counterfactual direction of comparison and only “a weak relation” between optimism and upward counterfactual thinking (page 14). This is our critical finding and the primary contribution of our paper.

4. Relatedly, R1 suggests we explore mediation by goals in more depth. While we agree that this is a worthwhile pursuit, we believe it is outside the scope of this initial investigation. Specifically, we emphasize that the main finding and primary contribution of our paper is to correct a conclusion that has persisted in the literature for decades. We more carefully emphasize this in the introduction (page 3) and suggest future directions pertaining to this suggestion by R1 (page 15).

5. Also relatedly, R1 suggests adding more clarity around the limitations of the previous work, what this implies for how to interpret the previous literature, and where/whether there might be reasonable methodological differences that account for the discrepant findings. We have edited the text to enhance clarity on this matter (pages 7-8) and additionally suggest future work that might investigate this in more depth (page 15).

6. R1 notes that restricting episodic recall in our main study to include only negative events could limit our conclusions on the relation between optimism and counterfactual direction of comparison. To clarify, the prior literature shows that counterfactual thoughts are much more likely to be spontaneously considered after a negative than after a positive event, and indeed the frequency of counterfactual thoughts after positive events is so low that meaningful variation is absent. Moreover, the literature also shows a robust main effect of more upward (vs. downward) counterfactuals in response to negative outcomes. However, there is nothing to suggest that optimists recall positive and negative events in a way that would differentially affect the generation of counterfactuals. Thus, omitting the recall of positive events has minimal impact on the broader correlation that we find between optimism and upward counterfactual direction of comparison, and we feel that limiting our main study to negative events, rather than including valence as an additional factor, is warranted.

7. Finally, R1 had some minor comments, which we address as follows:

a. More clearly summarizing the information of what varied across the preliminary studies and the results (see new Table 2).

b. Streamlining the table in the appendix and adding clarification for each of the columns’ meaning.

c. Revising the writing throughout so as to avoid redundancies.

Response to Reviewer #2 (R2)

R2 had two main points:

1. Questioning our rationale for reporting only 1 out of 7 studies and, relatedly, suggesting we present the statistical summary of the studies conducted prior to our main study.

a. With respect to including all studies in the main text, we felt that this would not be appropriate given that we intended them to be preliminary studies. Our rationale for including them in the Supplemental Materials was primarily full data transparency, but also to provide a clear summary of the different methods we used that might potentially replicate the effect in the prior literature whereby optimism was associated with more downward counterfactuals. We also included them to show that this effect was not replicated, providing important evidence to justify our need to run a sufficiently powered main study examining the relationship between optimism and counterfactual direction of comparison. However, we now include a new table (Table 2) summarizing exactly what varied methodologically across the preliminary studies and the results (focusing on β to facilitate comparison across studies).

2. Asking for more clarity on how we determined sample size and on the accompanying power analysis.

a. We enhanced our description of the original power analysis to better conform to reporting standards (page 9).

Decision Letter 1

Peter Karl Jonason

16 Jul 2020

PONE-D-20-00918R1

Dispositional optimism weakly predicts upward, rather than downward, counterfactual thinking: A prospective correlational study using episodic recall.

PLOS ONE

Dear Dr. Gamlin,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. As you will read below, both reviewers were in favor of your paper being published, congratulations. Despite this, there is room for improvement that I hope you can address easily and quickly to get your paper published in PLOS ONE. Some areas of concern are (1) clarifying the power calculations, (2) ensuring the OSF site is populated and prepared properly, and (3) better highlighting the importance of this study. After addressing these issues, I suspect the paper will be ready to be accepted.

Please submit your revised manuscript by Aug 30 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Peter Karl Jonason

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have made a clear effort to address the reviewers’ comments, which I believe has greatly improved the manuscript. But I believe the following minor points should be revised before publication.

The information provided for the power calculation is too vague to allow others to replicate it (e.g., “weak effect”?). There’s still much debate concerning how to calculate power for mixed effects models, plus mentioning “paths” when running MEMs (and not SEMs) seems quite confusing. As the authors linked to their OSF page, despite the unclear labelling of document names, I found that the justification provided there in “Sample Size 2.rtf” is more precise, and clarifies that it was run for a t-test. Since that’s probably the actual basis for their sample size, I suggest they copy that text to the manuscript, but acknowledge/explain that it is not the power of the statistics actually used. I’d think that’s a plausible way of deciding on sample size, given the challenges with assessing power for MEMs, and MEMs should actually be more sensitive and robust than t-tests. I just think it’s important to be accurate and transparent about what was/is done.

The OSF page linked currently doesn’t show any files in the “dataset” folder. There also seem to be some things implying that there was a 7th study and that the current main study would be number 8. Adding some clarification notes on OSF about the data/studies included there and pointing people to the relevant files would help to avoid confusion.

Regarding the MEM analyses/results, the authors should report the method used for calculating degrees of freedom, and those should be reported alongside the remaining test statistics (i.e., the associated p values).

Reviewer #2: All comments have been addressed.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Aug 14;15(8):e0237644. doi: 10.1371/journal.pone.0237644.r004

Author response to Decision Letter 1


29 Jul 2020

Response to The Editor

The Editor has asked us to better highlight the importance of this study. We now include a clear summary justifying the importance of this study in the “Goal of the present research” section of the paper. [See: pages 3-4]

Response to Reviewer #1 (R1)

We are thankful for R1’s careful attention to detail and for guidance that has improved our paper. We have updated our manuscript to address the following concerns raised by R1:

1. Clarify the power calculation. We updated the manuscript to be not only consistent with the OSF preregistration, but also more transparent vis-à-vis the general debate on how best to calculate power for MEMs (which R1 rightly highlighted). [See: pages 9-10]

2. Ensure the OSF site is prepared properly. Our OSF page (https://osf.io/wudjs/?view_only=389862820cdb43bdaaa6bda477202a48) now includes (a) all data files, (b) clearer file naming and folder organization, and (c) a folder entitled “0. Information (read first)” containing a text file explaining the components included in the OSF page. Specifically, that file reads:

“This OSF project contains the following documents:

1. STUDY DESIGN.

- Basic information about how Preliminary Study 6 and the Main Study were designed and planned is included in text files.

- Study Design information was uploaded prior to data collection being completed or any data analysis taking place.*

2. SAMPLE SIZE.

- Basic information on how the sample size was justified and calculated (Preliminary Study 6 and Main Study) is included.

- Sample Size justification information was uploaded prior to data collection being completed or any data analysis taking place.*

3. SURVEYS.

- Surveys have been added in both .qsf (Qualtrics) and .docx (Microsoft Word) format. The .qsf files can be uploaded directly to Qualtrics for replication. Qualtrics surveys can also be shared directly (i.e., from Qualtrics account to Qualtrics account) by the first author upon request; surveys from all studies are available from the first author in either .qsf or .docx format.

- Surveys were uploaded after data collection and analysis were complete.

4. DATA.

- All data files: Preliminary Study 1-6 and the Main Study.

- Data files were uploaded after data collection and analysis were complete.

*Due to an error in the timing of the launch of the Main Study - Time 1 survey, the sample size justification and study design files were uploaded after that survey had been launched, but before looking at, downloading, or analyzing any data. Note that these files were, however, uploaded prior to the launch of the Main Study - Time 2 survey, and therefore prior to any data merging, cleaning, or analysis taking place.”

3. Include t-test and degrees of freedom in Main Study. In our second-round submission, we had replaced the t-value and corresponding degrees of freedom with the standardized beta and corresponding confidence interval. We are happy to also report the t-test and corresponding degrees of freedom, and have added this information back into the results of the Main Study. [See: page 13]

Response to Reviewer #2 (R2)

We are thankful for R2’s support of our manuscript. As R2 did not raise any specific points for further clarification in this round of revision, we have not made any specific changes for R2.

Decision Letter 2

Peter Karl Jonason

31 Jul 2020

Dispositional optimism weakly predicts upward, rather than downward, counterfactual thinking: A prospective correlational study using episodic recall.

PONE-D-20-00918R2

Dear Dr. Gamlin,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Peter Karl Jonason

Academic Editor

PLOS ONE


Acceptance letter

Peter Karl Jonason

6 Aug 2020

PONE-D-20-00918R2

Dispositional optimism weakly predicts upward, rather than downward, counterfactual thinking: A prospective correlational study using episodic recall.

Dear Dr. Gamlin:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Peter Karl Jonason

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. Summary table of prior research.

    A summary table of nine studies across six papers published between 1995 and 2015 that indicate that optimism predicts downward (vs. upward) counterfactual thinking. This summary table reports the authors and year of publication; the study number within the publication (if applicable); the total sample size of the study (if reported); the design of that study, including the conditions participants were assigned to or whether the design was correlational; how counterfactuals were elicited (i.e., in response to what prompts or events); the scale that was used to capture trait optimism; what optimism was compared to (if applicable); and how counterfactuals were classified as downward or upward.

    (DOCX)

    S1 File

    (DOCX)

    Data Availability Statement

    All deidentified data files are available at OSF at https://osf.io/wudjs/.


    Articles from PLoS ONE are provided here courtesy of PLOS
