Scientific Reports. 2025 Sep 29;15:33436. doi: 10.1038/s41598-025-19051-1

The effect of probability and framing on the default effect in decision making under risk

Joshua Lanier, Di Wang, Yusha Xie
PMCID: PMC12480121  PMID: 41023088

Abstract

This study examines how probability and outcome framing modulate the default effect in risky decision-making using two controlled experiments with probabilistically equivalent lotteries. Participants repeatedly chose among four equivalent betting options, with one highlighted as a default. Across both studies (N = 317), we document a robust default effect, with 38–39% choosing defaults versus a 25% random benchmark. Crucially, low winning probability (25% vs. 75%) consistently amplified default reliance. However, loss framing increased default choices significantly only in Study 1 (where task descriptions differed across probabilities) but not in Study 2 (where task framing was standardized). This suggests winning probability is a more reliable moderator than framing. Post-experiment surveys indicate cognitive ease and responsibility avoidance are key psychological mechanisms: low probability heightens the difficulty of winning, increasing default acceptance, while loss framing exacerbates responsibility aversion. The study advances understanding of the economic and contextual determinants of the default effect and highlights its implications for designing choice architectures in real-world applications where uncertainty is prominent.

Supplementary Information

The online version contains supplementary material available at 10.1038/s41598-025-19051-1.

Keywords: Default effect, Decision under risk, Cognitive ease, Utility of control, Responsibility avoidance

Subject terms: Psychology, Human behaviour

Introduction

The default effect, a well-documented cognitive bias, refers to the tendency of individuals to avoid making an active choice by sticking with a pre-selected default option. It has emerged as one of the most robust and policy-relevant findings in behavioral science1. This phenomenon is often viewed as reflecting the human inclination to conserve cognitive effort by accepting defaults, even when opting out incurs minimal financial cost. Decades of research have demonstrated its pervasive influence across various domains. For instance, automatic enrollment in retirement plans increases savings rates by over 40%2, opt-out organ donation systems boost donor registrations by 50–90%3, and default settings in energy consumption significantly reduce carbon footprints4.

While many studies have shown the prevalence and robustness of the default effect, it is not well understood which features of the choice problem faced by the decision maker are likely to amplify or diminish the effect. Notably, the decision-making environments in which people are prone to the default effect are often not riskless. For instance, in deciding whether to donate organs3, how much to save for retirement2, or what investments to make5, people face substantial uncertainty about the future consequences of their choices and thus may be attracted to default options, perceiving these defaults as a kind of guidance6. But few studies have varied the features of a risky decision-making context to see how the default effect is influenced. For example, would people be more inclined to accept a default when considering losses (for example, when buying insurance) than when considering gains (for example, when investing for retirement)? Such questions are important because the default effect is now one of the main tools in the behavioral scientist’s toolkit, and understanding the conditions under which the effect is stronger has notable policy implications.

Motivated to fill this gap, we investigate the following research question: How do the properties of the lotteries that decision makers face modulate the strength of the default effect? To address this question, we run experiments in which we ask subjects to select an option from a list of lotteries, one of which is preset as the default. A novel feature of our experiment is that, in each choice problem, the options are consequentially equivalent lotteries that yield the same payoffs with the same probability distribution. This allows us to investigate how decisions in risky environments are affected by the presence of default options without worrying about heterogeneity in individual risk preferences. In the experiments, we manipulate two treatment variables—the probability of winning the bets (75% vs. 25%) and the framing of payoffs (gain vs. loss)—to see how individuals’ propensity to follow the default choice varies across conditions. These treatment variables were selected primarily because they reflect real-world variation in the probability and framing of risky decision environments.

We conducted two waves of studies, which differed in how the experimental task in the high-winning-probability treatments was described. In Study 1, the task in the low-probability condition was described as a treasure-seeking game in which a treasure is randomly hidden in one of four directions and participants must search one direction for the treasure. In contrast, the task in the high-probability condition was a treasure-hiding game in which participants choose one of the four directions to hide their treasure, which is destroyed if the computer’s random pick of a direction matches the participant’s choice. Noticeably, while this design naturally gives rise to a difference in the probability of obtaining the treasure (25% vs. 75%), it also introduces a difference in the framing of the choice task (i.e., the seeking game vs. the hiding game). This potentially leads to different ways of using a default option, confounding the treatment effect of probability, as insightfully pointed out by an anonymous referee. Therefore, we introduced Study 2, in which participants played the treasure-seeking game in all treatments, avoiding this confound. In this sense, Study 2 can also be viewed as a robustness check on Study 1.

What expectations do we have for the treatment effects? To answer this question, we need to review the relevant psychological mechanisms underlying the default effect. Prior research (e.g.,7) has identified two main types of psychological drivers of the default effect: the demand for cognitive ease8–10 and the perception that the default is an implicit endorsement made by a policymaker3,6,11. The former provides a cognitive account for the default effect as well as other closely related phenomena such as the status quo bias5,7 and decision inertia12,13. Defaults can save cognitive effort by offering an opportunity to make passive decisions, reducing the need to evaluate other alternatives. As a result, the default effect is often observed to be stronger in more complex environments14,15. Yet changing the winning odds or payoff framing does not seem to change the complexity of the decision task, because the alternatives are always consequentially equivalent in our experiments and this is made clear to participants. Therefore, we expect the cognitive-effort mechanism to play a limited role in generating treatment effects. Similarly, the implicit-endorsement mechanism should also be expected to be unimportant in our setup because the options are consequentially equivalent.

Besides these two established mechanisms, we propose two additional psychological mechanisms which, we feel, have been largely overlooked in the default-effects literature: the utility of control and responsibility avoidance. The former refers to the idea that the process of making an active decision can itself generate utility, independent of the outcome16,17. When options are equivalent, actively choosing allows individuals to express agency, even if the outcome is probabilistically identical. When desirable outcomes are expected, this utility may be enhanced by making active decisions, because individuals can then attribute those outcomes to internal factors such as ability or skill, reflecting the self-serving attribution bias18,19. Therefore, a higher chance of winning may weaken the default effect through an enhanced utility of control.

The other motivational factor of the default effect, responsibility avoidance, concerns a benefit of passive decision-making. It is a widely found psychological trait investigated in various decision contexts20–24. Generally, it refers to the tendency to make decisions in ways that minimize accountability for unfavorable outcomes. In uncertain environments, adhering to the default allows individuals to attribute negative outcomes to the system rather than to their own choices, reducing blame or regret25,26. This dynamic is expected to be particularly salient in gambling, financial trading, and strategic environments, where choices are risky and losses may occur. We expect that a loss framing of the payoffs amplifies the default effect by exacerbating avoidance of responsibility for losses. In general, we hypothesize that increasing the probability of winning in risky choices is likely to weaken the default effect, whereas loss framing of outcomes can lead to a greater default effect than gain framing due to an amplified inclination to avoid responsibility for negative outcomes.

Although the literature on the effect of contextual factors on the default effect in risky decision-making is limited, some recent related studies are worth mentioning. For example, Giuliani et al.27 investigated the interaction between the default effect and framing effects, including gain-loss framing, finding that default options enhance risk propensity but do not interact with framing. A major difference is that they are concerned with how the way default options are framed affects risk attitude, while we are concerned with how aspects of the choice problem (including framing) affect the default effect. Meunier et al.28 also examined the effect of framing on the default effect in risk-taking. They varied framing in terms of whether the decision involved investment or saving and whether the action was to accept or change the default choice. While their experiment did not involve gain-loss differentiation or consequentially equivalent choices, the authors suggested that framing the decision as accepting or changing a default action matters because active choosing is associated with greater control and responsibility29,30, implying that the default effect results from the interplay of these opposing psychological forces. Our study provides additional insights into the relative roles of these mechanisms.

Unlike the two studies mentioned above, our experiment and Couto et al.31 both investigated the default effect in repeated risky choices under different payoff framings (gain vs. loss). However, Couto et al.31 focused on endogenous or internal default options and did not involve consequentially equivalent choices. They found that in repeated decision-making, individuals’ choices tend to correlate over time, with previous choices serving as endogenous defaults for subsequent decisions. While this is less relevant to our design, where participants make probabilistically equivalent betting decisions, their study highlights the importance of considering serial correlation in default adherence when analyzing repeated choices. Following this implication, such correlation will be addressed in our data analysis.

In summary, the significance of this study lies in its real-world relevance and theoretical contributions to behavioral science. Firstly, by controlling for potential confounding factors, our study helps us understand how probabilities and the framing of consequences—two important factors defining the riskiness of the decision problem—influence the strength of the default effect, providing insights into its determinants and offering guidance for designing choice architectures in real-world applications. Secondly, investigating the default effect in a choice-neutral setting can help advance behavioral decision theories by uncovering hidden mechanisms, such as the motivational factors of the utility of control and responsibility avoidance, that are obscured when options differ in expected utility. In these ways, we contribute to the literature on the default effect and deepen our understanding of this well-documented phenomenon.

Methods

Participants

The two studies were conducted using oTree32 at the laboratory of Southwestern University of Finance and Economics in May 2024 and June 2025 respectively. For Study 1, 160 undergraduate subjects were recruited and 159 participated in the experiment. For Study 2, 160 undergraduate subjects were recruited and 158 participated. In addition, Study 2 was pre-registered at AsPredicted.org (#232745). Details about our treatments and participants are shown in Table 1. Because the majority of students at our university are female and we did not impose gender quotas (so as to preserve randomization in recruitment), the majority of our participants are also female.

Table 1.

Treatment design and participant information.

Gain frame, high winning chance (75%): Treatment 1 (Study 1: N = 39, Mean age = 19.64, Males 15.4%; Study 2: N = 38, Mean age = 19.66, Males 18.4%)

Gain frame, low winning chance (25%): Treatment 2 (Study 1: N = 40, Mean age = 19.75, Males 22.5%; Study 2: N = 40, Mean age = 19.88, Males 22.5%)

Loss frame, high winning chance (75%): Treatment 3 (Study 1: N = 40, Mean age = 19.83, Males 20%; Study 2: N = 40, Mean age = 19.88, Males 22.5%)

Loss frame, low winning chance (25%): Treatment 4 (Study 1: N = 40, Mean age = 19.95, Males 12.5%; Study 2: N = 40, Mean age = 19.88, Males 22.5%)

Each participant was randomly assigned to one of the four treatments. Participants received a show-up fee of CN¥5 and could earn additional payoffs based on their choices during the experiment. On average, each experimental session lasted approximately 30 min, with an expected payoff of around CN¥26 per participant. Ethical approval for the study was granted by the institutional review board at Southwestern University of Finance and Economics. All methods were performed in accordance with the relevant guidelines and regulations, and informed consent was obtained from all participants prior to their involvement.

Experimental design

Study 1

The experiment consisted of four between-subject treatments, as detailed in Table 1. Each treatment included two parts: a decision task and a task-related survey. The decision task comprised two training rounds and 21 formal rounds of a chance game. In the experiment, participants play a treasure-seeking or treasure-hiding game (see Fig. 1). In the High-Winning-Chance treatments, participants act as treasure hiders. They choose one of four directions (Up, Down, Left, or Right) to hide a treasure, and the computer randomly chooses a direction in which to destroy the treasure. If the participant’s choice matches the computer’s selection, they lose the treasure; hence they have a 75% chance of successfully retaining it. The corresponding lottery for participants is (X, 75%; Y, 25%), where X and Y represent payoffs in experimental tokens, with X > Y. In the Low-Winning-Chance treatments, participants act as treasure seekers. They choose a direction to search for a treasure randomly hidden by the computer, resulting in a 25% chance of success. The corresponding lottery is (X, 25%; Y, 75%).
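As a sanity check, the 25% and 75% winning chances implied by the two game descriptions can be confirmed with a brief Monte-Carlo sketch (illustrative only; the function names are ours, and this code is not part of the experimental software):

```python
import random

def seek_wins(rng):
    # Low-chance game: the computer hides one treasure; the seeker
    # wins only if their direction matches the computer's.
    return rng.choice("UDLR") == rng.choice("UDLR")

def hide_wins(rng):
    # High-chance game: the hider keeps the treasure unless the
    # computer's randomly picked direction matches the hiding spot.
    return rng.choice("UDLR") != rng.choice("UDLR")

rng = random.Random(2024)
n = 100_000
seek_rate = sum(seek_wins(rng) for _ in range(n)) / n
hide_rate = sum(hide_wins(rng) for _ in range(n)) / n
print(round(seek_rate, 2), round(hide_rate, 2))  # approximately 0.25 and 0.75
```

Because the computer's pick is uniform over four directions, both simulated rates converge to the stated 25% and 75% probabilities.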

Fig. 1. Descriptions of choice tasks for each treatment (original in Chinese).

Under the gain frame, X ∈ {20, 40, 60, 80, 100, 120} and Y ∈ {0, 20, 40, 60, 80, 100}. Under the loss frame, participants were endowed with 120 tokens, and X ∈ {0, − 20, − 40, − 60, − 80, − 100}, while Y ∈ {− 20, − 40, − 60, − 80, − 100, − 120}. In addition, under the loss frame the stakes were described as a reduced loss if participants successfully obtain a treasure (see Fig. 1). The payoffs were carefully designed so that the chance games were identical under both frames in terms of final payoff distributions. For example, when X = 80 and Y = 20 under the gain frame, the corresponding task under the loss frame has X = − 40 and Y = − 100, with an endowment of 120 tokens. Payoff sizes varied across rounds but were consistent for all participants within a given treatment.
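The equivalence of the two frames can be illustrated in a few lines (a sketch; `final_payoffs` is a hypothetical helper, not part of the experiment):

```python
def final_payoffs(x, y, endowment=0):
    # (final tokens if the treasure is obtained, final tokens otherwise).
    return endowment + x, endowment + y

# Illustrative round from the text: gain frame X = 80, Y = 20 versus
# loss frame X = -40, Y = -100 with a 120-token endowment.
gain_frame = final_payoffs(80, 20)
loss_frame = final_payoffs(-40, -100, endowment=120)
print(gain_frame, loss_frame)  # both (80, 20)
```

Since the winning probability is also held fixed within a treatment, the two frames induce exactly the same lottery over final payoffs.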

In each round of the chance game, a default option is highlighted in light blue but not pre-selected (see Fig. 2). Unlike the typical implementation of a pre-selected default option in the literature, we intentionally avoid pre-selection primarily to eliminate the incentive to minimize physical effort. By merely highlighting the default without pre-selecting it, we ensure that participants incur identical effort to choose any option (a single click), isolating the psychological mechanisms underlying default choice from physical-effort minimization. Any observed default effect can then be attributed only to psychological motives.

Fig. 2. Interface of the choice options for which “down” is preset (Studies 1 and 2).

Arguably, although in the literature a default option is typically implemented as a pre-selected option, the issue of making decisions with default options is fundamentally an issue of active vs. passive decision-making33,34. A widely shared perspective is that defaults matter because they provide an opportunity for people (or their brains) to appreciate the benefits of passive decision-making relative to active decision-making (e.g., 2,35–38). These benefits include cognitive ease8, implicit endorsement6, responsibility avoidance20,21, etc. In other words, it is the status of having the opportunity to make passive decisions that creates the feeling (and effectiveness) of the “default”. Therefore, any cue that is salient enough to generate a focal point on one of the alternatives works as a (cognitively) default option, because people can make a (cognitively) passive decision by simply following this cue. In fact, many studies produce default effects without pre-selection of options. For instance, imagining owning an option or simply labeling an option as “status quo” without selecting it39 significantly increased choice of that option, replicating classic default effects. The effectiveness of highlighted options has also been confirmed in healthcare settings where pre-selection raises ethical problems40. Therefore, we implemented default options as highlighted or preset options, facilitating the opportunity to make passive decisions while eliminating noise from non-psychological motives.

The default option in our experiment was randomly and independently generated for each participant in each round. Following the convention in the literature, we did not inform participants of how the default option was determined. Furthermore, since all options in the chance game are probabilistically equivalent, a control treatment is not necessary: the benchmark choice pattern without defaults is obviously to select each option 25% of the time.

In each round, participants received feedback about the outcomes for that round, including where the treasures were and whether they won the treasure-seeking game. This makes the randomness of the game more transparent, the incentives more salient, and participants more serious about each of their choices. Without any feedback, they might simply rush through the decisions from the first round to the last. While the presence of feedback may trigger serial correlation of choices across rounds due to, for example, the gambler’s fallacy41,42, such correlation does not undermine our design because our focus is on inter-treatment comparisons. Participants were incentivized through a random payment system, in which one of the formal rounds was randomly selected for real payment. The tokens earned (or remaining after losses) in that round were converted to money at a rate of 1 token = CN¥0.33. To mitigate potential income effects, the expected payoff magnitudes were similar across treatments with different winning probabilities.

Study 2

The second study differs from the first only in the framing of the choice tasks in the high-chance treatments (see the two middle boxes of Fig. 1). Instead of being described as a treasure-hiding game, the choice tasks in Study 2 were consistently described as a treasure-seeking game in all treatments. Now, in the high-chance treatments, the computer randomly places treasures in three of the four directions. If a participant chooses a direction that holds a treasure, they win the game and receive a payoff of X (and otherwise Y), resulting in the same lottery (X, 75%; Y, 25%) as in Study 1. As mentioned, the main purpose of this design is to control for the potential noise from the task description (treasure hiding vs. treasure seeking) that may confound the comparison between the high-chance and low-chance groups.
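The 75% winning chance of the Study 2 high-chance game can likewise be checked by simulation (an illustrative sketch, not part of the experimental software):

```python
import random

rng = random.Random(7)
n = 100_000
# Study 2 high-chance game: the computer places treasures in three of the
# four directions; the seeker wins if their chosen direction holds one.
wins = sum(rng.choice("UDLR") in rng.sample("UDLR", 3) for _ in range(n))
print(round(wins / n, 2))  # approximately 0.75
```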

Data analysis

Our primary objective, for both Studies 1 and 2, is to detect how the strength of the default effect under risky decision-making varies with winning probability and payoff framing. Hence, we adopt the same data-analysis methods for both studies. The key hypotheses we aim to test are: (a) the probability of winning a lottery (75% vs. 25%) affects individuals’ tendency to follow the default; (b) the framing of payoffs (gain vs. loss) affects individuals’ tendency to follow the default; (c) there is an interaction effect between these two factors.

At the individual level, we conduct pairwise parametric tests to identify inter-group differences, correcting the significance threshold for multiple comparisons using the Holm-Bonferroni method. Specifically, we measure how likely an individual is to choose the default option as the proportion of choices (or rounds) in which the default option is chosen. Furthermore, we analyze individual-round level data with Logit estimations to directly examine the impact of the treatment variables on the propensity to choose the default option in each single choice. These data are organized in a panel format, with independent observations at the participant-round level. The main econometric model is specified as follows:

$$\operatorname{logit}\left[\Pr(DE_{it}=1)\right]=\beta_0+\beta_1\,Chance_i+\beta_2\,Frame_i+\beta_3\,(Chance_i\times Frame_i)+\gamma' X_{it}+u_i$$

where $X_{it}$ collects the control variables and $u_i$ is an individual-specific random effect.

In this model, i indexes individuals and t indexes experimental rounds. DE is a binary variable, with DE_it = 1 denoting that decision maker i chooses the default option in round t, and DE_it = 0 otherwise. Chance and Frame are the binary treatment variables. Control variables included the experimental round number, participants’ gender, age, and decision time for each round. This model was estimated using a random-effect panel Logit with individual-clustered standard errors, allowing for intra-individual correlation of the data. This approach has been widely used in experimental economics studies with datasets of a similar panel structure43,44. The estimation results are reported in Table 3 under Model (1).

Table 3.

Logit estimation: treatment effect on default propensity (Study 2).

Independent Var.                 Dependent Var.: DE
                                 Model (1)             Model (2)
Chance (1 = 75%, 0 = 25%)        − 0.525* (0.265)      − 0.525* (0.265)
Frame (1 = Gain, 0 = Loss)       − 0.152 (0.224)       − 0.152 (0.224)
Chance × Frame                   0.181 (0.357)         0.181 (0.357)
Round                            − 0.031*** (0.008)    − 0.031*** (0.008)
Gender (1 = Male, 0 = Female)    0.247 (0.297)         0.247 (0.297)
Age                              − 0.211* (0.095)      − 0.211* (0.095)
Decision time                    0.000 (0.000)         0.000 (0.000)
Constant                         4.111* (1.810)        4.111* (1.809)
Constant(id)                                           0.968*** (0.147)
Observations                     3318                  3318
Number of individuals            158                   158
σ_u                              0.984
ρ                                0.227

The estimation is based on 3318 (158 × 21) individual-round level independent observations. For other table notes please refer to Table 2.

We used a random-effect model for two main reasons. First, the random-effects model assumes that the individual-specific effects are uncorrelated with the explanatory variables. Since our key explanatory variables are treatment variables and participants were randomly assigned to treatments, these variables are unlikely to be correlated with individual-specific effects. Second, our focus is not on intra-individual variation, so fixed-effect models do not suit our purpose; such variation can nevertheless be controlled for by using clustered standard errors. In fact, since the treatment variables are time-invariant (round-invariant in our case), a fixed-effect estimation cannot produce estimates for them.

To check the robustness of the random-effect estimation, we also included a mixed-effect model estimation (Model (2) of Table 3). This model allows for both individual-specific random intercepts and clustered variances. This approach is also widely used in the analysis of panel-structure experimental datasets45,46. By comparing Model (1) and Model (2), we can assess how robust the Model (1) results are to unobserved round-invariant individual heterogeneity.

Results

Study 1

Our analysis revealed significant evidence of a default effect and notable treatment effects. Across all treatments, the average proportion of default choices was 38.81%, significantly higher than the 25% benchmark expected under random selection (t-test, p < 0.001). This indicates that participants did not choose randomly but exhibited a tendency to adhere to the default option. Within each treatment, the default effect was also significant, though its strength varied across conditions. Participants in the high-chance treatments exhibited an average default propensity of 33.45%, much lower than the 44.11% of the low-chance treatments (t-test, p = 0.006, D.F. = 157). Participants in the gain-framed treatments exhibited an average default propensity of 33.76%, lower than the 43.81% of the loss-framed treatments (t-test, p = 0.011, D.F. = 157).

In Fig. 3 we compare the four conditions to check for interactions between the two treatment variables. Specifically, we find that the winning probability had an insignificant effect under the gain frame but a strong and significant effect under the loss frame. A low winning probability combined with loss framing leads to a mean default propensity of 52.86%, much larger than the 34.76% of the high-winning-chance case under the same framing (t-test, p = 0.003; D.F. = 78). Additionally, payoff framing has a significant effect only when the winning probability is low: in that case, switching from the gain frame to the loss frame increases the default propensity from 35.36% to 52.86% (t-test, p = 0.002; D.F. = 78). In fact, the default propensity is outstandingly high under low winning probability and loss framing, while it is similar across the other three cases.

Fig. 3. Individual propensity to choose the default option across treatments (Study 1). The individual default propensity is the individual-level proportion of choices (or rounds) in which the default option is chosen; there are thus 159 independent observations. The figure compares the group-level distributions of this statistic across the 2 × 2 conditions. The black dashed line denotes the 25% level, the benchmark proportion if participants always choose randomly. Each box represents the interquartile range (IQR), spanning from the 25th percentile (Q1) to the 75th percentile (Q3); the line inside the box is the median. The whiskers extend to the minimum and maximum values within 1.5 × IQR of Q1 and Q3 respectively, and any data points beyond the whiskers are plotted individually as dots or markers. Group-level differences are tested using two-sided t-tests; *, ** and *** denote p < 0.05, p < 0.01 and p < 0.001 respectively.

As we have made six pairwise tests (high-chance vs. low-chance, gain vs. loss, and the 2 × 2 comparisons), our t-test results are subject to the multiple comparison problem. To mitigate this problem, we adopted the widely used Holm-Bonferroni method47 to check whether these effects remain significant after correction. It is a stepwise multiple comparison correction that controls the Family-Wise Error Rate while being less conservative than the classic Bonferroni correction. Applying a 5% significance criterion, we find that all the treatment effects we have observed are still significant under the Holm-Bonferroni correction. In other words, each of the treatment variables independently affects individual default propensity, and their effects also interact whereby the impact of probability is significant under loss framing but not gain framing.
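For readers who wish to reproduce the correction, a minimal sketch of the Holm-Bonferroni step-down procedure follows (the function is our own illustration; the p-values shown are the four main-effect and interaction p-values reported above, not the full set of six):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm-Bonferroni step-down: returns a reject (True) flag per p-value.

    Sort p-values ascending; compare the k-th smallest (0-indexed) to
    alpha / (m - k); stop rejecting at the first comparison that fails.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break
    return reject

# Study 1 p-values from the text: chance main effect, frame main effect,
# chance under loss framing, frame under low winning probability.
print(holm_bonferroni([0.006, 0.011, 0.003, 0.002]))  # all True
```

Consistent with the text, all four effects survive the correction at the 5% level.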

So far, the analysis has been based on individual-level data. To increase statistical power and examine whether the treatment variables affect individuals’ propensity in each single choice, we organized the data in a panel format and conducted the panel Logit estimation. The results are detailed in Table 2. Generally, these results corroborate the findings from our t-tests. In particular, Model (1) confirms a significant negative effect of both high winning probability and gain framing. Participants in the high-winning-chance conditions were 62.51% (i.e., 1 − e^(−0.981)) less likely to choose the default compared to a low-chance condition (p = 0.001), while those under gain framing were 60.43% (i.e., 1 − e^(−0.927)) less likely to choose the default compared to loss framing (p = 0.001). At the individual-round level, there is also a significant interaction effect between the two treatment variables, indicated by the coefficient of the term Chance × Frame (p = 0.032).

Table 2.

Logit estimation: treatment effect on default propensity (Study 1).

Independent Var.                 Dependent Var.: DE
                                 Model (1)             Model (2)
Chance (1 = 75%, 0 = 25%)        − 0.981** (0.307)     − 0.982** (0.307)
Frame (1 = Gain, 0 = Loss)       − 0.927** (0.290)     − 0.927** (0.289)
Chance × Frame                   0.845* (0.395)        0.846* (0.394)
Round                            − 0.033*** (0.009)    − 0.033*** (0.009)
Gender (1 = Male, 0 = Female)    0.274 (0.287)         0.274 (0.287)
Age                              − 0.014 (0.066)       − 0.014 (0.066)
Decision Time                    − 2.085 (1.302)       − 2.085 (1.302)
Constant                         0.922 (1.378)         0.922 (1.377)
Constant(id)                                           1.289*** (0.264)
Observations                     3339                  3339
Number of individuals            159                   159
σ_u                              1.135
ρ                                0.281

This table reports the panel Logit estimation results for two model specifications: Model (1) uses a random-effect model and Model (2) a mixed-effect model. Both models assume individual-clustered standard errors. The estimation is based on 3339 (159 × 21) individual-round level independent observations. σ_u is the standard deviation of the inter-individual differences. ρ is the intra-class correlation coefficient, which captures the fraction of the variance of the dependent variable that is due to inter-individual differences. Constant(id) is a special output of the mixed-effect estimation, indicating the variance of the random intercepts across individuals. Standard errors are in parentheses. *, ** and *** denote significance at p < 0.05, p < 0.01 and p < 0.001 respectively.
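The percentage figures reported for Model (1) can be reproduced from the Table 2 coefficients (a sketch; note that 1 − e^β is a proportional reduction in the odds of choosing the default):

```python
import math

def odds_reduction(beta):
    # A logit coefficient beta multiplies the odds of choosing the default
    # by exp(beta); for beta < 0, the proportional reduction is 1 - exp(beta).
    return 1 - math.exp(beta)

print(round(100 * odds_reduction(-0.981), 2))  # Chance: approximately 62.51
print(round(100 * odds_reduction(-0.927), 2))  # Frame:  approximately 60.43
```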

Furthermore, incorporating round-level data revealed a significant temporal pattern, consistent with studies involving repeated risky choices. The variable Round had a significant, though modest, effect on default choices (p < 0.001), suggesting that participants were less likely to choose the default option in later rounds. This serial pattern in repeated risky choices points to a learning effect, even though the computer’s random picks are independent across rounds. However, this within-individual learning effect is not our focus, and it does not undermine the validity of our treatment effects because the latter are replicated in the individual-level analysis, where round-level dynamics are omitted.

In Table 2, the column for Model (2) presents the results of a mixed-effects logistic estimation with individual-clustered standard errors. Compared to Model (1), this model allows for individual-specific random intercepts to account for unobserved individual heterogeneity. The variance of these random intercepts, reported as Constant(id), is significant at p < 0.001, indicating substantial individual heterogeneity. Nevertheless, the two models produced nearly identical estimates. This suggests that, although individual heterogeneity is substantial, it does not affect the slopes of the key explanatory variables, which are group-level treatment variables, further justifying our estimation approach based on Model (1).

Study 2

In this study, where the confounding factor of task description differences is eliminated, we find qualitatively similar results with subtle differences. Across all treatments, the average proportion of default choices was 39.33%, significantly higher than the 25% benchmark expected under random selection (t-test, p < 0.001), indicating the existence of a default effect. Participants in the high-chance treatments exhibited an average default propensity of 34.19%, significantly lower than the 44.35% observed in the low-chance treatments (t-test, p = 0.005, D.F. = 157). Participants under gain framing exhibited an average default propensity of 38.16%, not significantly different from the 40.48% observed under loss framing (t-test, p = 0.527, D.F. = 157).
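As a sketch of the benchmark comparison, the one-sample t statistic contrasts individual default propensities with the 25% chance level implied by four equivalent options. The propensity values below are hypothetical illustrations, not the experimental data:

```python
import statistics

# Hypothetical per-participant default propensities (the real sample has
# 158 individuals; these eight values are only for illustration).
props = [0.38, 0.45, 0.29, 0.52, 0.33, 0.41, 0.36, 0.48]
benchmark = 0.25  # chance level with four probabilistically equivalent options

n = len(props)
mean = statistics.fmean(props)
se = statistics.stdev(props) / n**0.5  # standard error of the mean
t_stat = (mean - benchmark) / se

# Compare with the two-sided 5% critical value of about 2.36 for 7 d.f.
print(round(t_stat, 2))
```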

Figure 4 shows the individual default propensity for the four conditions. Specifically, we find that, as in Study 1, the winning probability had an insignificant effect under the gain frame but a significant effect under the loss frame. A lower winning probability increased the average default propensity from 34.17% to 46.79% under loss framing (t-test, p = 0.018, D.F. = 78). Payoff framing no longer has a significant effect even when the winning probability is low (t-test, p = 0.298, D.F. = 78). Nevertheless, the default propensity is still highest under low winning probability and loss framing. Using the Holm–Bonferroni method with a 5% significance criterion, we find that, while the effect of winning probability remains significant, the interaction effect shown in Fig. 4 becomes insignificant. In other words, unlike in Study 1, the effect of probability no longer depends on whether payoffs are framed as gains or losses.
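The Holm–Bonferroni procedure referenced above is a step-down correction for multiple comparisons: p-values are ordered, and the j-th smallest is compared against α / (m − j + 1), stopping at the first non-rejection. A minimal sketch (the p-values shown are illustrative, not those of the study):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Return a list of booleans: reject H0 for each p-value (Holm, 1979)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest first
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):  # step-down threshold
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values also fail
    return reject

print(holm_bonferroni([0.012, 0.034, 0.041]))  # → [True, False, False]
```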

Fig. 4. Individual propensity to choose the default option across treatments (Study 2). It consists of 158 independent observations. For other figure notes, please refer to Fig. 3.

The individual-choice data are analyzed using the same estimation approach as in Study 1, and the results are shown in Table 3. Generally, these results are consistent with the findings of the t-tests. Compared to Study 1, payoff framing no longer has a significant effect on single choice-making, and there is no longer an interaction effect between the two treatment variables. Meanwhile, winning probability is still influential: participants in the high-winning-chance conditions were 40.84% (i.e., 1 − e^β, where β is the estimated coefficient of Chance) less likely to choose the default compared to the low-chance conditions (p = 0.048). The mixed-effects estimation (Model (2) of Table 3) again generates nearly identical results to the random-effects estimation, confirming the robustness of the results to typical panel model specifications.
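The 40.84% figure is the odds-ratio interpretation of a logit coefficient: a coefficient β on the high-chance dummy multiplies the odds of choosing the default by e^β. The coefficient value below is hypothetical (chosen to reproduce the reported reduction; the actual estimate is in Table 3):

```python
import math

beta = -0.525  # hypothetical logit coefficient on the high-chance dummy

odds_ratio = math.exp(beta)  # multiplicative change in the odds of default
reduction = 1 - odds_ratio   # proportional reduction in the odds

print(f"{100 * reduction:.2f}%")  # → 40.84%
```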

Comparing study 1 and study 2

As the main purpose of Study 2 is to eliminate the task description variation that potentially confounds the effect of winning probability on default choices, we test whether this variation indeed affects individual behavior. We carried out pairwise t-tests between the two studies for each of the four conditions separately. Since the samples of each condition do not overlap, these tests do not suffer from the multiple comparison problem. For the first and third conditions (i.e., the high-chance groups), where Study 1 and Study 2 differ in task descriptions, the mean default propensities are very similar, although the variance of individual choices appears larger in Study 2. This suggests that the hider-seeker differentiation in task description does not play a role in our experiments, as can be seen more clearly in Fig. 5. Interestingly, however, for the low-chance conditions, where Study 1 and Study 2 have identical designs, particularly the second condition (gain & low chance), there are larger gaps between the two samples. It is likely that these gaps lead to the disappearance of the effect of the gain-loss differentiation in Study 2.

Fig. 5. Individual propensity to choose the default option (Study 1 vs. Study 2). The numbers of observations for the four conditions are N = 77, N = 80, N = 80, and N = 80, respectively. The p-value of each pairwise t-test is shown in the top right corner of its panel. For other figure notes, please refer to Fig. 3.

In summary, our two studies generated four key findings: (1) a significant default effect exists in risky decision-making with probabilistically equivalent alternatives (on average, 38.81% and 39.33% default choices in Study 1 and Study 2, respectively); (2) the probability of winning significantly influenced the strength of the default effect in both studies, with participants more likely to adhere to the default under low winning probabilities; (3) loss framing of payoffs significantly increased individual default choices (compared to gain framing) in Study 1 but not in Study 2; (4) the interaction between the two treatment variables was likewise found only in Study 1, where the effect of winning probability was greatly amplified under loss framing.

Discussion

Through two controlled laboratory experiments, we investigated how the economic factor of probability (75% vs. 25%) and the contextual factor of payoff framing (gain vs. loss) modulate the default effect in a risky decision-making environment where all options had identical probabilistic outcomes. In both studies, we document a robust default effect as well as a significant treatment effect of probability, with low winning chances strengthening default reliance. This pattern is consistent with our theoretical prediction. Regarding the effect of the gain-loss differentiation, the two studies generated inconsistent results that cannot be attributed to the difference in their experimental designs. The inconsistency could instead be due to various reasons, such as sampling differences, underpowered sample sizes, or temporal factors (May 2024 vs. June 2025). Alternatively, as the choice of the default option can involve various considerations such as cognitive ease, perceived endorsement, utility of control, and responsibility avoidance, the default effect might genuinely fluctuate in an environment where these considerations are well balanced (e.g., under the low-chance-gain condition). Therefore, the question of how the gain-loss differentiation influences the default effect cannot be fully understood until the key underlying mechanisms associated with this differentiation are identified.

While the investigation of the psychological mechanisms underlying the treatment manipulation is beyond the focus of the current study, we probed this question with a post-experiment survey. The survey questions, identical in Studies 1 and 2, measure, on a five-point Likert scale48, participants’ attitudes toward statements related to four psychological factors: three that promote choosing the default (cognitive ease, implicit endorsement, and responsibility avoidance) and one that motivates not choosing it (the utility of control). However, the results and implications of the survey data should be treated with caution, for two reasons. First, the design of the survey questions is mainly based on introspection and lacks precedents in the literature, so these questions may not have properly captured the intended psychological elements. Second, the feedback participants obtained in each round of gameplay may have affected their post-task responses to the attitudinal questions, potentially biasing the responses. We therefore treat these data only as auxiliary evidence and provide the corresponding details in our supplementary materials (see File S1) for interested readers.

In brief, the survey data yield mixed results regarding the roles of these psychological factors. In Study 1, participants’ appreciation of cognitive ease and utility of control were significantly correlated with their decisions (see Table S1 of File S1). Specifically, greater appreciation of cognitive ease is associated with higher default propensity, while the perceived utility of control (of active decision-making) is negatively related to default propensity, consistent with our theoretical analysis. Yet, through the Sobel-Goodman mediation analysis49, we find that responsibility avoidance is the most important mediator of the treatment effects on individual default propensity (see Fig. S1 and Table S2 of File S1). It explains 20.9% of the treatment effect of probability and 20.8% of the treatment effect of payoff framing.
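The Sobel test underlying this mediation analysis asks whether the indirect path a·b (treatment → mediator → outcome) differs from zero, using z = ab / √(b²s_a² + a²s_b²). A minimal sketch with made-up path estimates (not values from the study):

```python
import math

def sobel_z(a, s_a, b, s_b):
    """Sobel (1982) z statistic for the indirect effect a * b."""
    return (a * b) / math.sqrt(b**2 * s_a**2 + a**2 * s_b**2)

# Hypothetical path estimates: a is the treatment -> mediator effect and
# b the mediator -> outcome effect, each with its standard error.
z = sobel_z(a=0.50, s_a=0.10, b=0.40, s_b=0.10)

print(round(z, 2))  # |z| > 1.96 would indicate a significant indirect effect
```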

In Study 2, implicit endorsement is also positively correlated with decisions (see Table S3 of File S1), consistent with evidence in the literature. But mediation analysis revealed that, to our surprise, cognitive ease plays the key role in mediating the significant effect of winning probability on default choices (see Fig. S2 and Table S4 of File S1), explaining about 46.6% of the treatment effect. We speculate that this is because, in a risky betting environment, the smaller the odds of winning, the more challenging it is to win the bet (although the complexity of the tasks does not vary with the probabilities). This could make the default more attractive when the winning chance is low, as a mental shortcut saves more effort when the decision is more challenging.

Assuming that our survey measurements have reliably captured the four psychological considerations, this analysis shows the critical roles of cognitive ease and responsibility avoidance driving the treatment effects in our experiment. Specifically, on one hand, a low-winning-chance context may be more cognitively demanding for choice making because a lower winning probability entails greater risk which instinctively invokes more attention and caution. Hence the need for cognitive ease may become more important in low-winning-chance treatments. On the other hand, the motive to avoid responsibility could be amplified in scenarios where possible outcomes are negative, leading to higher propensity of passive decision-making under loss framing compared to gain framing.

Generally, our study redefines the boundaries of the default effect by demonstrating its sensitivity to economic and contextual factors in risky environments and by uncovering the psychological mechanisms that drive it. While prior work emphasizes cognitive ease and perceived endorsement, the standard experimental framework, in which options differ in expected desirability, cannot reveal hidden motivational explanations such as the utility of control and the avoidance of accountability. In this sense, our study expands the default effect paradigm beyond cognition-based explanations. This mechanistic insight bridges behavioral economics with moral psychology and agency theory, offering a richer, more human-centric understanding of how and why defaults influence decisions. Practically, our findings offer actionable insights for designing ethical and effective choice architectures in real-world applications, especially for the design of defaults in digital interfaces under high-uncertainty, low-control circumstances (e.g., gambling and investment platforms and healthcare directives).

Despite its contributions, this study has several limitations. First, the sample comprised Chinese undergraduates with an imbalanced gender ratio, limiting cultural and demographic generalizability. Cross-cultural studies suggest that responsibility norms vary significantly50, which may moderate our findings. Second, each of our two studies risks being underpowered, leading to inconclusive evidence regarding the effect of payoff framing; we recommend larger samples in experimental research on the default effect. Third, we did not pin down the psychological mechanisms behind the observed effects. A more rigorous and reliable method should be used to measure the psychological factors, and the association between these mechanisms and the default effect in choice-making should be identified more clearly. Finally, we recognize that the default effect is a complex phenomenon. While we controlled for individual risk preference to study how the default effect depends on economic and contextual elements in risky decision-making, it remains unclear how risk preference is associated with the propensity to accept the default. We therefore propose that future research address the relationship between individual risk preference and the default effect.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary Material 1 (244.9KB, docx)

Acknowledgements

The authors thank Sen Tian, Dalin Sheng, and other colleagues at China Center for Behavioral Economics and Finance for their helpful comments on this paper.

Author contributions

All authors contributed to the conceptualization of the research question and design of the experiment. Y.X. programmed and organized the experiments and collected the data. Y.X. and D.W. analyzed and visualized the data. J.L. and D.W. wrote up, reviewed, and revised the paper.

Data availability

The data that support the findings of this study and codes for all analyses are available from the corresponding authors upon request.

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Joshua Lanier, Email: jlanier84@gmail.com.

Di Wang, Email: wangd@swufe.edu.cn.

References

  • 1. Mertens, S., Herberz, M., Hahnel, U. J. & Brosch, T. The effectiveness of nudging: A meta-analysis of choice architecture interventions across behavioral domains. Proc. Natl. Acad. Sci. 119(1), e2107346118 (2022).
  • 2. Madrian, B. C. & Shea, D. F. The power of suggestion: Inertia in 401(k) participation and savings behavior. Q. J. Econ. 116(4), 1149–1187 (2001).
  • 3. Johnson, E. J. & Goldstein, D. Do defaults save lives? Science 302(5649), 1338–1339 (2003).
  • 4. Pichert, D. & Katsikopoulos, K. V. Green defaults: Information presentation and pro-environmental behaviour. J. Environ. Psychol. 28(1), 63–73 (2008).
  • 5. Samuelson, W. & Zeckhauser, R. Status quo bias in decision making. J. Risk Uncertain. 1, 7–59 (1988).
  • 6. McKenzie, C. R., Liersch, M. J. & Finkelstein, S. R. Recommendations implicit in policy defaults. Psychol. Sci. 17(5), 414–420 (2006).
  • 7. Jachimowicz, J. M., Duncan, S., Weber, E. U. & Johnson, E. J. When and why defaults influence decisions: A meta-analysis of default effects. Behav. Public Policy 3(2), 159–186 (2019).
  • 8. Brown, C. L. & Krishna, A. The skeptical shopper: A metacognitive account for the effects of default options on choice. J. Consum. Res. 31(3), 529–539 (2004).
  • 9. Johnson, E. J. et al. Beyond nudges: Tools of a choice architecture. Mark. Lett. 23, 487–504 (2012).
  • 10. Sunstein, C. R. Nudging: A very short guide. J. Consum. Policy 37, 583–588 (2014).
  • 11. Everett, J. A., Caviola, L., Kahane, G., Savulescu, J. & Faber, N. S. Doing good by doing nothing? The role of social norms in explaining default effects in altruistic contexts. Eur. J. Soc. Psychol. 45(2), 230–241 (2015).
  • 12. Alison, L. et al. Decision inertia: Deciding between least worst outcomes in emergency responses to disasters. J. Occup. Organ. Psychol. 88(2), 295–321 (2015).
  • 13. Alós-Ferrer, C., Hügelschäfer, S. & Li, J. Inertia and decision making. Front. Psychol. 7, 169 (2016).
  • 14. Johnson, E. J., Bellman, S. & Lohse, G. L. Defaults, framing and privacy: Why opting in–opting out. Mark. Lett. 13(1), 5–15 (2002).
  • 15. Ortmann, A., Ryvkin, D., Wilkening, T. & Zhang, J. Defaults and cognitive effort. J. Econ. Behav. Organ. 212, 1–9 (2023).
  • 16. Deci, E. L. & Ryan, R. M. Intrinsic Motivation and Self-Determination in Human Behavior (Springer Science & Business Media, 2013).
  • 17. Botti, S. & Iyengar, S. S. The psychological pleasure and pain of choosing: When people prefer choosing at the cost of subsequent outcome satisfaction. J. Pers. Soc. Psychol. 87(3), 312 (2004).
  • 18. Miller, D. T. & Ross, M. Self-serving biases in the attribution of causality: Fact or fiction? Psychol. Bull. 82(2), 213 (1975).
  • 19. Weiner, B. An attributional theory of achievement motivation and emotion. Psychol. Rev. 92(4), 548 (1985).
  • 20. Tetlock, P. E. Accountability: A social check on the fundamental attribution error. Soc. Psychol. Q. 48, 227–236 (1985).
  • 21. Charness, G. & Jackson, M. O. The role of responsibility in strategic risk-taking. J. Econ. Behav. Organ. 69(3), 241–247 (2009).
  • 22. Leonhardt, J. M., Keller, L. R. & Pechmann, C. Avoiding the risk of responsibility by seeking uncertainty: Responsibility aversion and preference for indirect agency when choosing for others. J. Consum. Psychol. 21(4), 405–413 (2011).
  • 23. Bartling, B. & Fischbacher, U. Shifting the blame: On delegation and responsibility. Rev. Econ. Stud. 79(1), 67–87 (2012).
  • 24. Möbius, M. M., Niederle, M., Niehaus, P. & Rosenblat, T. S. Managing self-confidence: Theory and experimental evidence. Manag. Sci. 68(11), 7793–7817 (2022).
  • 25. Loomes, G. & Sugden, R. Regret theory: An alternative theory of rational choice under uncertainty. Econ. J. 92(368), 805–824 (1982).
  • 26. Zeelenberg, M., Beattie, J., Van der Pligt, J. & De Vries, N. K. Consequences of regret aversion: Effects of expected feedback on risky decision making. Organ. Behav. Hum. Decis. Process. 65(2), 148–158 (1996).
  • 27. Giuliani, F. et al. The joint effect of framing and defaults on choice behavior. Psychol. Res. 87(4), 1114–1128 (2023).
  • 28. Meunier, L., Bashirzadeh, Y. & Ohadi, S. Framing the default option right. J. Behav. Decis. Mak. 37(3), e2395 (2024).
  • 29. Langer, E. J. The illusion of control. J. Pers. Soc. Psychol. 32(2), 311 (1975).
  • 30. Botti, S. & McGill, A. L. When choosing is not deciding: The effect of perceived responsibility on satisfaction. J. Consum. Res. 33(2), 211–219 (2006).
  • 31. Couto, J., Van Maanen, L. & Lebreton, M. Investigating the origin and consequences of endogenous default options in repeated economic choices. PLoS One 15(8), e0232385 (2020).
  • 32. Chen, D. L., Schonger, M. & Wickens, C. oTree—An open-source platform for laboratory, online, and field experiments. J. Behav. Exp. Finance 9, 88–97 (2016).
  • 33. Carroll, G. D., Choi, J. J., Laibson, D., Madrian, B. C. & Metrick, A. Optimal defaults and active decisions. Q. J. Econ. 124(4), 1639–1674 (2009).
  • 34. Chetty, R., Friedman, J. N., Leth-Petersen, S., Nielsen, T. H. & Olsen, T. Active vs. passive decisions and crowd-out in retirement savings accounts: Evidence from Denmark. Q. J. Econ. 129(3), 1141–1219 (2014).
  • 35. Choi, J. J., Laibson, D., Madrian, B. C. & Metrick, A. Optimal defaults. Am. Econ. Rev. 93(2), 180–185 (2003).
  • 36. Thaler, R. H. & Sunstein, C. R. Nudge: Improving Decisions about Health, Wealth, and Happiness (Penguin, 2009).
  • 37. Dinner, I., Johnson, E. J., Goldstein, D. G. & Liu, K. Partitioning default effects: Why people choose not to choose. J. Exp. Psychol. Appl. 17(4), 332 (2011).
  • 38. Anderson, C. J. The psychology of doing nothing: Forms of decision avoidance result from reason and emotion. Psychol. Bull. 129(1), 139 (2003).
  • 39. Moshinsky, A. & Bar-Hillel, M. Loss aversion and status quo label bias. Soc. Cogn. 28(2), 191–204 (2010).
  • 40. Keller, P. A., Harlam, B., Loewenstein, G. & Volpp, K. G. Enhanced active choice: A new method to motivate behavior change. J. Consum. Psychol. 21(4), 376–383 (2011).
  • 41. Jarvik, M. E. Probability learning and a negative recency effect in the serial anticipation of alternative symbols. J. Exp. Psychol. 41(4), 291 (1951).
  • 42. Wang, D. & Li, Y. The number of available sample observations modulates gambler’s fallacy in betting behaviors. Sci. Rep. 15(1), 1205 (2025).
  • 43. Charness, G. & Gneezy, U. Incentives to exercise. Econometrica 77(3), 909–931 (2009).
  • 44. Karlan, D. & List, J. A. Does price matter in charitable giving? Evidence from a large-scale natural field experiment. Am. Econ. Rev. 97(5), 1774–1793 (2007).
  • 45. Falk, A. & Ichino, A. Clean evidence on peer effects. J. Labor Econ. 24(1), 39–57 (2006).
  • 46. Fehr, E. & Leibbrandt, A. A field study on cooperativeness and impatience in the tragedy of the commons. J. Public Econ. 95(9–10), 1144–1155 (2011).
  • 47. Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 6, 65–70 (1979).
  • 48. Likert, R. A technique for the measurement of attitudes. Arch. Psychol. (1932).
  • 49. Sobel, M. E. Asymptotic confidence intervals for indirect effects in structural equation models. Sociol. Methodol. 13, 290–312 (1982).
  • 50. Hsee, C. K. & Weber, E. U. Cross-national differences in risk preference and lay predictions. J. Behav. Decis. Mak. 12(2), 165–179 (1999).
