PLOS One
. 2020 Apr 14;15(4):e0231203. doi: 10.1371/journal.pone.0231203

The effect of anchors and social information on behaviour

Tanya O’Garra 1,2,*,#, Matthew R Sisco 2,#
Editor: Joanna Tyrowicz
PMCID: PMC7156041  PMID: 32287302

Abstract

We use a ‘multi-player dictator game’ (MDG) with ‘social information’ about the monetary transfer made by a previous dictator to a recipient to examine whether the first amount presented (the ‘anchor’) affects both average contributions and the behavioural strategy adopted, using a sequential strategy elicitation method. We find that average contributions are positively affected by the anchor. The anchor also influences the behavioural strategy that individuals adopt: low anchors significantly increase the likelihood that players will adopt unconditional self-interested strategies, whereas high anchors increase the likelihood of adopting giving strategies. The distribution of strategies, and hence the distribution of behavioural ‘types’, is therefore affected by the initial conditions of play, lending support to the notion that behavioural strategies are context dependent.

Introduction

This paper reports results of an experiment that examines the impact of an initial piece of information—or ‘anchor’—on redistribution choices in response to social information.

Anchoring is a well-established cognitive phenomenon describing the tendency of individuals to make judgments that are biased towards the first piece of information they receive [1,2]. Most anchoring studies examine the impact of anchors on numerical judgments (e.g. [1]), beliefs (e.g. [3,4]), and elicited preferences (e.g. [5]). Anchors are also found to affect actual behaviour, including consumer bidding in auctions (e.g. [6,7,8]), consumer purchases [9], and valuations of consumer goods with probabilistically binding choices (e.g. [10,11]), although Fudenberg et al (2012) [12] do not replicate the findings in [11].

There has been much less research on the effects of anchors on pro-social behaviour, such as cooperation and redistribution, and the little evidence that exists is mixed. For example, Cappelletti, Güth & Ploner (2011) [13] and Luccasen (2012) [14] both fail to find evidence of anchoring effects on cooperation behaviour using public goods games, whereas Fosgaard & Piovesan (2015) [15] find that subjects playing a public goods game with default options (using the strategy method) anchor their subsequent decisions to the default. The evidence is similarly inconclusive with respect to anchoring effects on redistribution behaviour. Raihani & McAuliffe (2014) [16] find that numerical anchors (based on players’ ages) have no effect on monetary transfers in a dictator game, although we note that the treatments analysed for anchoring in [16] were intended as control treatments and were not designed to elicit an anchoring effect. On the other hand, Dhingra et al (2012) [17] find evidence that choices in a dictator game with default options are anchored to the defaults; they term this a “default pull”, although it corresponds essentially to an anchoring effect.

We aim to add to this limited literature, by asking the following questions: firstly, can an initial piece of information alter the amount that an individual redistributes? By ‘redistribution’, we refer to decisions to share wealth with others, with no expectation or possibility of benefitting materially from redistribution. Secondly, if anchors do affect the amount that individuals redistribute, might they also affect how the individual perceives the situation and hence, their behavioural strategy?

The behavioural strategies that people adopt in economic experiments are often used to classify people into social ‘types’, such as ‘conditional co-operators’ or ‘free riders’ (e.g. [18, 19]). The general understanding is that these different behavioural strategies reflect underlying social preferences, such as ‘altruism’, ‘reciprocity’ or ‘warm glow’. For example, redistribution behaviour in economic experiments is often considered indicative of altruistic preferences [20], while contributions to the public good are considered to reflect reciprocity or conformity [21].

However, a growing number of studies are finding that the specific behavioural strategies that individuals adopt, and hence the distribution of ‘types’, are susceptible to contextual factors, such as the frame (e.g. [22]) and how choices are elicited (e.g. [23,15]). For example, Dariel (2018) [23] finds that changing the way in which conditional strategies are elicited in a public goods game radically changes the proportions of conditional co-operators and free riders. This suggests that the behavioural strategies that individuals adopt may be context-dependent [24,25].

To address this question, we examine the behavioural strategies that individuals adopt in response to social information. The effect of social information on redistribution decisions has been extensively explored (e.g. [26–28]), and the general finding is that on average, people positively condition the amounts they give to the amounts given by others. However, there is heterogeneity in how individuals respond to social information, with some people positively conditioning their choices to those of others, some negatively conditioning their choices and others unaffected [29,30]. We ask whether the distribution of behavioural types in this context is sensitive to anchoring effects. This is exploratory research, and as such, we have no expectations about the size or direction of anchoring effects on the distribution of ‘types’ in the population under study. Our aim is mainly to identify whether the choice of behavioural strategy is affected by normatively irrelevant contextual factors, such as anchors.

To this end, we use a ‘multi-player dictator game’ (MDG), in which a first mover (FM) makes an initial visible monetary transfer to recipients in the group, and second movers (SMs) make transfer choices in response to all possible FM choices using a sequential strategy method. The strategy method involves players providing contingent responses to a range of possible actions by a peer. Individual ‘types’ are classified based on the full vector of responses to FM transfers as ‘conformists’ (positive relationship), ‘compensators’ (negative relationship), ‘self-interested’ (fixed zero transfer) or ‘unconditional givers’ (fixed positive transfer). The impact of anchors on the distribution of types is ascertained by randomly presenting different SMs with different starting values in the sequential strategy elicitation exercise and examining whether this initial amount affects the distribution of SM types. To the best of our knowledge, this is the first study to examine anchoring effects using the sequential strategy method.

Overall, we show evidence of an anchoring effect, with average transfers influenced by the initial amount that SMs must respond to using the sequential strategy method. We also find that anchors affect the distribution of ‘types’, such that the likelihood of choosing an unconditional self-interested strategy is greater in response to low value anchors than high value anchors. This suggests that the adoption of self-interested strategies may be at least partly determined by contextual factors, such as anchors.

We consider this to be an important investigation for several reasons. Firstly, individuals are regularly faced with new redistribution decisions, for example in the form of charitable appeals. If the initial piece of information determines the entire strategy adopted by potential donors, this suggests that initial information has an inordinate influence on all the related decisions that follow. The practical implications are significant, as anchoring effects could potentially be harnessed not only to ‘nudge’ individuals towards single instances of fair sharing, but towards the adoption of more persistent redistributive behaviour. Additionally, from a theoretical perspective, the behavioural strategy that an individual adopts is expected to reflect preferences. Assuming preferences to be stable and well-defined, if anchors cause a change in the distribution of behavioural strategies, this may suggest that such strategies (such as ‘self-interest’ or ‘conformity’) are not fixed and may instead reflect different psychological processes and motivations interacting with contextual factors, such as anchors [24,25,31].

We note that this study also complements the literature examining ‘default’ effects on redistribution choices. Defaults are pre-determined choices that will be implemented unless an individual actively changes them [32]. They are related to anchors in that a default option can also act as an anchor. As noted earlier, Dhingra et al (2012) [17] find evidence of what they term a “default pull” on choices in a dictator game with default options. Similar findings are reported in [15,23], albeit with respect to cooperation behaviour in a public goods game. Also related is the literature on ‘reference points’, which people often use to evaluate gains and losses [33], and which have been found to influence bidding behaviour in auctions (e.g. [6]). With regards to impacts on redistribution choices, Charite, Fisman & Kuziemko (2015) [34] find that people’s choices are impacted by other people’s reference points.

The rest of the paper is organised as follows: in the next section we present our research questions and hypotheses; this is followed by the Materials and Methods, after which we present the Results, and finally, the Discussion and Conclusions.

Identifying anchoring effects

To identify anchoring effects with respect to the initial amount presented to second movers, the order in which the hypothetical first mover transfers were presented to SMs was randomized. Hence, we obtained vectors of responses (SM strategies) for each possible initial amount, or ‘anchor’ (experimental details are provided in the Experimental Design section). Based on general findings in the literature on anchoring, we hypothesize that SM transfers will be biased towards the anchor (e.g. [11]). We do not aim to identify the precise psychological or cognitive mechanism underlying this anticipated anchoring effect. There are different explanations for anchoring, including ‘anchoring-and-adjustment’ [1], ‘selective accessibility’ [35,36] and a close variant of this, ‘query theory’ [37]. The first of these proposes that individuals use the initial information provided as a starting point (anchor) and reach their final judgment through a process of marginal but insufficient adjustments from this anchor. ‘Selective accessibility’ and ‘query theory’ models however suggest that when individuals receive an initial piece of information, they engage in an internal assessment of the validity of this information. Greater weight is placed on the initial information provided, resulting in judgments converging on this initial piece of information.

However, we do not propose to identify whether these (or indeed other) explanations account for our findings. The main purpose of the present study is, firstly, to assess whether the initial piece of information impacts the redistribution behaviour of individuals in response to social information; and secondly, to identify whether the behavioural strategies that individuals adopt are affected by anchors. The first question has only been addressed by two other studies, as noted in the introduction [16,17]. The second question is novel and has not been addressed previously.

On the one hand, it is possible that all we observe is a magnitude effect, by which subsequent choices are simply adjusted upwards or downwards in response to the initial decision, with no change in actual strategy. If this were the case, players classified as ‘conformists’ would still positively condition their choices on the social information provided, albeit with an upward (downward) shift in overall transfers in response to a higher (lower) anchor. Similarly, players classified as ‘compensators’ and ‘unconditional givers’ would be expected to continue behaving in line with their type, but with similar upward (downward) adjustments. Self-interested contributors, however, would not be expected to adjust, assuming that they have purely self-interested preferences. On the other hand, it is possible that the reasoning an individual engages in when faced with different anchors affects how they perceive the decision, which could potentially lead to changes in the adopted strategy. As noted, this is an exploratory question, and we have no expectations about the pattern of any effect in this regard.

We also consider it possible that the entire order in which FM transfers are presented to SMs may have an effect on choices beyond the effect of the initial amount. To assess possible order effects, we ran a series of tests which are reported in the S1 Appendix. We found no evidence of order effects beyond the impact of the initial amount on SM transfers.

Finally, we acknowledge that there are other contextual factors—such as how the decision is framed—that may influence decisions. Framing effects occur when information is presented in different ways, leading to different interpretations of the context and decision. In our study, it is possible that the first piece of information received (what we term the ‘anchor’) actually affects choices through a ‘framing effect’–i.e. by changing the perception of what the decision context involves. This would be in line with the ‘selective accessibility’ and ‘query theory’ models, which propose heavy reliance on the first piece of information to shape one’s decision–hence, in this context, the anchoring effect could be akin to a ‘framing effect’ whereby the frame is provided by the initial information, or ‘anchor’.

Materials and methods

Experimental design

To explore the influence of first mover (FM) monetary transfers on second mover (SM) redistribution behaviour, we used a ‘multi-player dictator game’ (MDG) in which SMs could condition their choices on the possible choices of a first mover. At the beginning of the game, participants were randomly assigned to groups of eight players. Within these groups, half of the players were randomly assigned to the role of allocator (i.e. ‘dictator’) and half to the role of recipient. Allocators received an endowment of $2 per person; recipients did not receive this endowment.

The next set of instructions informed allocators that one of them would be randomly selected “by the computer” to make the first transfer and that this amount would be communicated to the other allocators in the group. The instructions specifically read:

“the computer will now randomly select one of you to make a transfer before anyone else. This person will be referred to as the ‘first mover’. The transfer made by the first mover will be made visible to all the other participants. Please move to the next page to determine whether you have been selected to be the ‘first mover’”

When allocators moved to the next page of the experiment, one of them was informed that s/he had been selected to be the ‘first mover’. The FM was then given the option to transfer one of the following amounts from their endowment to the recipients: [$0, $0.10, $0.25, $0.50, $0.75, $1]. These amounts were presented simultaneously on the same page.

Meanwhile, the remaining three allocators moved to another page where they were informed that they had not been selected to be the first mover. These ‘second movers’ (SMs) were then informed that the FM had been given the choice of transferring one of the six aforementioned amounts to the recipients. SMs were then asked to indicate how much they would contribute conditional on each of these possible FM transfers. Fig 1 shows a screenshot of the page that SMs were presented with, outlining these instructions.

Fig 1. Screenshot showing transfer instructions for SMs.

Each possible FM transfer was presented to SMs sequentially on separate screens, and in random order, thus implementing our anchoring treatments. SMs indicated their preferred transfers sequentially in response to each of the six possible FM transfers. Hence, we obtained vectors of responses (SM strategies) for each of these six possible anchors. Table 1 shows the sample size for each anchor.

Table 1. Summary sample size by anchor.

Anchor value (IA) Sample Size
$0 55
$0.10 40
$0.25 60
$0.50 51
$0.75 64
$1 54

SM transfers were elicited using an open-ended format, such that they could transfer any amount between $0 and $2. Fig 2 shows a sample screenshot of one of these choices offered to SMs.

Fig 2. Screenshot example—Elicitation of SM transfer in response to FM transfer of $0.

Each time a SM clicked on “Next” after indicating their preferred transfer, a new screen appeared with another FM transfer. Once the SMs had provided a full vector of responses to each possible FM transfer, the FM’s choice was communicated to the SMs.

As a side note, we mention that the strategy method is usually used non-sequentially, i.e. subjects view all possible choices by another subject/other subjects and provide their conditional choices simultaneously. Thus, in the standard approach, subjects make their choices under a scenario of “advance disclosure”. Given our interest in identifying whether subjects would anchor their decisions to the first amount they were presented with, we used a sequential approach. However, to keep our design as close as possible to the standard approach, we retained advance disclosure of the FM’s possible choices; only the choices themselves were made sequentially.

After completing the MDG, allocators (FMs and SMs) were asked to provide an open-ended explanation for their decision; specifically, the question read: “How did you decide on the amount that you contributed?” Although this qualitative data lacks the clarity of quantitative measures of social influence on redistribution, it can be used to assess the robustness of the SM classification process. Participants then indicated how much they expected other SMs in their group to contribute on average. Finally, they were asked to provide basic socio-economic information, including their gender, age and income. We expect that redistribution behaviour will be positively influenced by female gender (e.g. [38,39]) and income (due to the income effect).

A custom web application was used to allow participants to play the game interactively with the other members of their group at the same time. The web application was developed specifically for this experiment, primarily using PHP, HTML and Javascript, and was hosted on Amazon EC2 while the experiment was running. This is a fairly novel development in studies using Mechanical Turk subjects (other examples include [40]). Typically, group-based studies using MTurk subjects do not provide interactive platforms for players to play simultaneously with each other. The design in the present study adds realism and urgency to the players’ actions, which enhances the validity of group-based decisions.

The experimental instructions can be found in the S1 Data under ‘Experimental Instructions’. In addition, a recording of the interactive platform can be found in the following link: http://www.columbia.edu/~ms4403/dictator_game/Dictator%20Game%20Screencast.mp4.

Analysis procedure

To identify anchoring effects on conditional transfer amounts, we first compare overall contributions by anchor using a Kruskal-Wallis (KW) test, a rank-based nonparametric test used to compare two or more groups, and considered the nonparametric equivalent of the one-way ANOVA. We use the KW test because our examination of residuals (using standardised and quantile normal probability plots) suggests that the residuals are not normally distributed; additionally, the Shapiro-Wilk test for normality confirms that the raw data are not normally distributed.
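The mechanics of this test sequence can be sketched in a few lines. The following is a minimal Python illustration using scipy, on simulated stand-in data (the group sizes, scale parameters and variable names are ours; the real analysis uses the experimental transfers, one sample per anchor value):

```python
import numpy as np
from scipy.stats import kruskal, shapiro

rng = np.random.default_rng(0)

# Simulated stand-in data: SM transfers (in cents) for three anchor groups.
# Transfers are typically right-skewed, which is why a rank-based test is
# preferred here over a one-way ANOVA.
low_anchor = rng.exponential(scale=15, size=50)
mid_anchor = rng.exponential(scale=25, size=50)
high_anchor = rng.exponential(scale=30, size=50)

# Shapiro-Wilk test for normality on the pooled raw data.
_, p_norm = shapiro(np.concatenate([low_anchor, mid_anchor, high_anchor]))

# Kruskal-Wallis H-test across the anchor groups.
h, p_kw = kruskal(low_anchor, mid_anchor, high_anchor)
print(f"Shapiro-Wilk p = {p_norm:.4f}; KW H = {h:.2f}, p = {p_kw:.4f}")
```

In the paper's setting there is one sample per anchor value (see Table 1), so six samples would be passed to `kruskal` together.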

Then, given that we have repeated observations (six) per SM, we assume that observations from the same individual are correlated and hence use mixed effects regression analyses on the full data set of SM strategy-method transfers, with standard errors clustered at the individual level. Mixed effects regression, in which SMs are treated as random effects, is appropriate for modelling anchoring effects; we do not cluster at the group level because there is no interaction between group members during the strategy data collection stage, so there is no reason to expect group-level effects. The regression models include dummies for all possible anchors (with the IA of $0.50 as the reference) so as to identify the specific impacts of each anchor on transfers, and any non-linearities. We also run regressions using a dichotomous version of the anchoring variable (where 1 = IA ≥ $0.50 and 0 = IA < $0.50). We tested the assumption that observations by individual SMs are correlated, as required by mixed effects models: estimation of the intraclass correlation (ICC), which indicates the correlation among observations within the same ‘level’ (in this case, the individual), suggests that approximately 85% of the total residual variance in our dependent variable can be accounted for by clustering at the individual level. Wald χ2 tests and likelihood ratio tests comparing the mixed effects model with a linear model confirm that the mixed effects model is suitable for our data.
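A random-intercept model of this kind, together with the ICC computation, can be sketched as follows. This is a Python/statsmodels illustration on simulated data; the variable names, sample sizes and simulated effect sizes are ours, chosen only to mimic the structure of six repeated choices per SM with a dominant individual-level variance component:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated stand-in data: 60 SMs x 6 repeated conditional choices each,
# with a strong individual-level random intercept (sd 30) relative to the
# observation-level noise (sd 10).
n_sm, n_rep = 60, 6
sm_id = np.repeat(np.arange(n_sm), n_rep)
fm_transfer = np.tile([0, 10, 25, 50, 75, 100], n_sm)  # FM transfer, cents
u = rng.normal(0, 30, n_sm)  # individual random effects
transfer = 20 + 0.05 * fm_transfer + u[sm_id] + rng.normal(0, 10, n_sm * n_rep)

df = pd.DataFrame({"sm_id": sm_id, "fm_transfer": fm_transfer,
                   "transfer": transfer})

# Random-intercept mixed model: repeated observations clustered within SMs.
model = smf.mixedlm("transfer ~ fm_transfer", df, groups=df["sm_id"]).fit()

# ICC = between-SM variance / (between-SM variance + residual variance).
var_sm = model.cov_re.iloc[0, 0]
icc = var_sm / (var_sm + model.scale)
print(f"slope = {model.params['fm_transfer']:.3f}, ICC = {icc:.2f}")
```

With individual-level variance dominating (as in the simulation), the estimated ICC is high, analogous to the roughly 85% figure reported for the experimental data.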

To identify the impact of anchors on the behavioural strategy that SMs adopt, we first categorize SMs by fitting a linear model (using ordinary least squares) predicting the SM strategy transfer amount from the FM transfer (similar to the approach used in [41,30]). The linear model fitted for each subject was simply:

transfer_amount_i = β0 + β1 · FM_transfer_amount_i + ε_i
FM_transfer_amount_i ∈ {0.0, 0.1, 0.25, 0.50, 0.75, 1.0}

The estimated intercept term, β0, and the slope term, β1, were used to categorize SMs into four main groups (details can be found in Table 2). To explore whether the adoption of different strategies is affected by the initial information or ‘anchor’, we conduct a multinomial logistic regression on the different player ‘types’, as well as a binary logistic regression specifically aimed at addressing whether anchors influence the adoption of a ‘self-interested’ strategy. Our motivation for focusing on the ‘self-interested’ type is based on our finding that this particular behavioural strategy appears to be most susceptible to anchors.

Table 2. Classification scheme.

Type Classification Quantitative criteria
1 Conformist β1 significantly positive; y-intercept (β0) irrelevant.
2 Compensator β1 significantly negative; β0 irrelevant.
3 Unconditional giver β1 not significant; β0 significantly positive; average transfer>$0.05.
4 Self-interested β1 not significant; β0 not significantly different from zero; average transfer<$0.05
5 All other R2 less than or equal to 0.20
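The per-subject classification in Table 2 can be sketched as a small routine. The following is a hypothetical Python/scipy implementation: the handling of flat strategy vectors (where no slope test is possible) is our own addition, and the R² cutoff for the residual ‘all other’ category is folded into a generic fall-through for brevity:

```python
import numpy as np
from scipy.stats import linregress, t

FM = np.array([0.0, 0.10, 0.25, 0.50, 0.75, 1.0])  # possible FM transfers ($)

def classify_sm(transfers, alpha=0.05):
    """Classify one SM's six-point strategy vector using the Table 2 criteria."""
    y = np.asarray(transfers, dtype=float)
    mean_t = y.mean()
    if np.ptp(y) == 0:  # flat vector: slope is zero by construction
        return "self-interested" if mean_t < 0.05 else "unconditional giver"
    res = linregress(FM, y)
    if res.pvalue < alpha:  # beta_1 significant
        return "conformist" if res.slope > 0 else "compensator"
    # beta_1 not significant: t-test on the intercept (n - 2 df)
    t_crit = t.ppf(1 - alpha / 2, len(y) - 2)
    intercept_sig = abs(res.intercept / res.intercept_stderr) > t_crit
    if intercept_sig and mean_t > 0.05:
        return "unconditional giver"
    if not intercept_sig and mean_t < 0.05:
        return "self-interested"
    return "other"

print(classify_sm([0.0, 0.05, 0.12, 0.25, 0.38, 0.50]))  # conformist
```

A strategy vector that rises roughly linearly with the FM transfer yields a significantly positive slope and is classed as ‘conformist’; an all-zero vector falls through to ‘self-interested’.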

As noted earlier, given the focus of this paper on anchoring effects, all results and analyses pertain solely to SM decisions elicited using the strategy method. Data on FM transfers are not analysed here; however, they are available upon request. The analyses in this paper were conducted using the statistical packages Stata 15 and R.

Participants

We used Amazon Mechanical Turk (MTurk) to recruit participants for this experiment. MTurk experiments generally involve low stakes, as participants play from their computers or smartphones and sessions usually take less than ten minutes; this allows experimenters to use lower stakes without compromising the results. Several studies confirm that data collected using MTurk (with low stakes) are of similar quality to data gathered in the standard laboratory [42,43,44].

Consent was obtained at the beginning of the study; participants read a page of text summarising the study and their rights, and if they consented to participate, they could choose to continue or discontinue the study. After providing informed consent, participants were presented with the experimental instructions, followed by two questions testing comprehension. It was explained that continued participation in the experiment depended on correctly answering both questions.

Data was collected from a total of 118 groups of subjects, with eight in each group (four allocators and four recipients). Due to dropouts (n = 39), the final sample consists of 433 allocators (109 FMs, 324 SMs) distributed unevenly among groups. As we are interested in individual SM decisions rather than aggregate group decisions, we use the full SM dataset rather than exclude incomplete groups (n = 24 incomplete groups); group members did not interact in any way other than by viewing the FM’s decision, so dropouts were not observed by SMs when providing their conditional redistribution choices via the strategy method. For the final payouts, we always divided the sum of all transfers made among the actual number of recipients in the group, regardless of the number of dropouts. The sample was 43% female; the average age was 33 years and median annual income was $45,000.

This research was approved by Columbia University’s Institutional Review Board, approval number IRB-AAAM5961.

Results

Overview of data

We start by examining the data at the aggregate level, presenting an overview of the effect of social information on redistribution decisions. As noted previously, the experiment elicited SM transfers in response to each of six possible FM transfers that were presented sequentially [$0, $0.10, $0.25, $0.50, $0.75, $1]. The distribution of SM contributions in response to each possible FM transfer can be found in S2 Appendix, in addition to a line graph showing mean SM transfers in response to each of these FM transfers.

Overall, mean SM transfers are found to increase modestly with FM transfers. Results of a Friedman test (the non-parametric equivalent of a repeated measures ANOVA) suggest that FM transfers have no significant influence on SM transfers overall (Friedman’s χ2 = 8.598, p = 0.1262; Kendall’s W effect size = 0.005). However, additional pairwise paired t-tests and non-parametric Wilcoxon signed-rank tests between mean SM responses (with Bonferroni adjustments to account for multiple testing) suggest that there are some significant pairwise differences in SM transfers in response to some FM transfers. For example, there is a significant difference between SM responses to $1 and SM responses to $0, $0.10 and $0.25 (p<0.05 for all tests). Results of these pairwise tests can be found in S3 Appendix.
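For concreteness, the Friedman test and the Bonferroni-adjusted pairwise comparisons can be sketched as follows. This Python/scipy illustration runs on simulated repeated-measures data; the simulated column shifts and noise levels are ours and merely illustrate the mechanics:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(2)

# Simulated stand-in data: each row is one SM's six conditional transfers
# (cents), one column per FM transfer level, with a modest upward trend.
n_sm = 40
base = rng.exponential(20, size=(n_sm, 1))
sm_responses = base + np.array([0, 1, 2, 4, 6, 8]) + rng.normal(0, 3, (n_sm, 6))

# Friedman test: nonparametric repeated-measures comparison across the six
# FM transfer levels (columns).
chi2, p_friedman = friedmanchisquare(*sm_responses.T)

# Pairwise Wilcoxon signed-rank tests with a Bonferroni adjustment
# (15 pairwise comparisons among 6 levels).
n_pairs = 6 * 5 // 2
p_adj = {}
for i in range(6):
    for j in range(i + 1, 6):
        _, p = wilcoxon(sm_responses[:, i], sm_responses[:, j])
        p_adj[(i, j)] = min(p * n_pairs, 1.0)

print(f"Friedman chi2 = {chi2:.2f}, p = {p_friedman:.4f}")
```

The paired structure matters here: each SM contributes one observation per FM transfer level, so the Wilcoxon tests operate on within-subject differences.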

Anchoring effects on average SM transfers

Fig 3 presents mean SM contributions at each possible FM transfer level disaggregated by IA. Thus, each line in Fig 3 represents mean SM transfers at each FM transfer level for the different anchor treatments; for example, the bottom line with triangular markers shows mean contributions at each possible FM transfer level only for SMs who were presented with an IA of $0.25.

Fig 3. SM responses to FM transfers disaggregated by initial amount.

From Fig 3, it appears that mean SM transfers differ by anchor, especially in response to lower FM transfers. For example, SMs in the $0.25 anchor treatment transfer an average of $0.18, while SMs in the $0.75 anchor treatment transfer $0.32. To identify whether SM transfers in response to each FM transfer differ significantly by anchor, we use a Kruskal-Wallis (KW) test to compare the anchoring groups. Results indicate that there is a statistically significant difference between SM transfers by anchor treatment in response to FM transfers of $0 (KW: p = 0.0371, η2 = 0.036) and $0.10 (KW: p = 0.0329, η2 = 0.036). Anchoring treatments have no statistically significant effect on average SM transfers in response to FM transfers of $0.25, $0.50, $0.75 and $1. This suggests that anchoring only significantly affects decisions made in response to selfish (low) FM transfers.

However, it can also be observed that mean SM transfers do not increase linearly with the size of the anchor. For example, the $0.50 anchor appears to elicit the highest mean transfers in response to most FM transfer amounts. Interestingly, contributions in response to the most extreme anchors ($0 and $1) converge in the middle; this suggests that the extreme anchors may lead to more moderate responses. In their review of anchoring studies, Furnham and Boo (2011) [45] report mixed findings regarding the impact of extreme anchors, with some studies finding that extreme anchors generate strong anchoring effects (e.g. [35]) while others find exactly the opposite (e.g. [46]). Our results agree with the latter findings that extreme anchors have weaker anchoring effects. There also appears to be a modest interaction between the IA and the FM transfer amount, with the IA of $0.50 producing a fairly flat relationship between SM transfers and the FM amount, while other IAs suggest positive relationships.

To verify if the differences in average responses to FM transfers by anchor are meaningful, we carry out mixed effects regression analyses on the full data set of individual SM strategy-method transfers. In the models, we include the FM transfer amounts that SMs provided responses to, as well as key socio-economic influences on behaviour (age, gender and income). Additionally, we include a variable representing the order in which FM transfers were presented (first through sixth), to account for possible effects of time or repetition on stated contributions. Studies have shown [47,48] that individuals playing sequential dictator games decrease their contributions round by round, hence we wish to control for this possible source of variation here.

We examine the potential anchoring of transfers to the initial amount (IA) presented to SMs, using dummies for all possible anchors (with the IA of $0.50 as the reference) so as to identify specific impacts of each anchor on transfers and non-linearities. We also use a dichotomous version of the anchoring variable (where 1 = IA ≥ $0.50 and 0 = IA < $0.50). This reflects the apparent dichotomised response to the anchors, which we report in the S4 Appendix. Finally, given the apparent interaction between anchor and FM transfer, we also present models with interaction effects. Regression results are presented in Table 3. In S5 Appendix, we report results of similar regressions using only those choices made by SMs in groups without dropouts, to assess whether there are systematic differences in results when excluding groups with dropouts. As noted previously, dropouts were not observed by SMs when providing their conditional redistribution choices, hence there should be no effect of dropouts on choices. Results of these additional regressions confirm that there is no systematic difference in results.
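The dummy and interaction specifications (the analogues of models 1 and 3 in Table 3) can be sketched with a formula interface. The following Python/statsmodels illustration uses simulated stand-in data, with treatment coding and the $0.50 anchor (coded 50, in cents) as the reference level; all variable names and simulated effects are ours:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Simulated stand-in data: 120 SMs, each assigned one of six anchors
# (between-subject) and responding to all six FM transfer levels (cents).
anchors = [0, 10, 25, 50, 75, 100]
n_sm = 120
sm_id = np.repeat(np.arange(n_sm), 6)
anchor = np.repeat(rng.choice(anchors, n_sm), 6)
fm = np.tile([0, 10, 25, 50, 75, 100], n_sm)
u = rng.normal(0, 25, n_sm)  # individual random effects
transfer = 15 + 0.1 * anchor + 0.05 * fm + u[sm_id] + rng.normal(0, 8, n_sm * 6)

df = pd.DataFrame({"sm_id": sm_id, "anchor": anchor, "fm": fm,
                   "transfer": transfer})

# Model 1 analogue: anchor dummies with the $0.50 anchor as reference,
# plus the FM transfer; random intercepts for SMs.
m1 = smf.mixedlm("transfer ~ C(anchor, Treatment(50)) + fm",
                 df, groups=df["sm_id"]).fit()

# Model 3 analogue: add anchor x FM-transfer interactions, allowing a
# different slope for each anchor treatment.
m3 = smf.mixedlm("transfer ~ C(anchor, Treatment(50)) * fm",
                 df, groups=df["sm_id"]).fit()
print(f"FM slope (model 1 analogue) = {m1.params['fm']:.3f}")
```

The `C(anchor, Treatment(50))` term reproduces the dummy coding described above, and the `*` operator expands to main effects plus all anchor-by-FM interaction terms.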

Table 3. Regressions on second mover transfers.

The dependent variable is cents transferred per second mover to the recipients. The reference initial amount (IA) level is $0.50.

                                        (1)                 (2)                 (3)                 (4)
IA = $0                           -11.092 (7.116)            –            -14.323* (7.446)           –
IA = $0.10                        -12.392* (7.498)           –            -16.645** (7.904)          –
IA = $0.25                        -15.420** (6.719)          –            -19.021*** (7.077)         –
IA = $0.75                         -3.122 (7.042)            –             -6.852 (7.319)            –
IA = $1                            -8.165 (7.152)            –            -13.214* (7.460)           –
IA dichotomous (1 = IA ≥ $0.50)          –             9.284** (3.861)           –             9.944** (4.051)
Order in which FM transfer presented  -0.283 (0.200)   -0.283 (0.200)     -0.285 (0.189)      -0.319* (0.189)
FM transfer (cents)                0.051*** (0.015)   0.051*** (0.015)    -0.026 (0.037)      0.059*** (0.021)
Female                            12.999*** (3.977)  12.960*** (3.999)  12.999*** (3.977)   12.960*** (3.999)
Age                                0.477*** (0.170)   0.458*** (0.171)   0.477*** (0.170)    0.458*** (0.171)
Income (divided by 1000)           -0.084 (0.054)     -0.094* (0.055)     -0.084 (0.054)      -0.094 (0.055)
Interactions
IA = $0 × FM transfer                    –                   –             0.075 (0.051)            –
IA = $0.10 × FM transfer                 –                   –             0.098* (0.059)           –
IA = $0.25 × FM transfer                 –                   –             0.083* (0.049)           –
IA = $0.75 × FM transfer                 –                   –             0.086* (0.051)           –
IA = $1 × FM transfer                    –                   –             0.117** (0.051)          –
IA dichotomous × FM transfer             –                   –                   –             -0.015 (0.030)
Constant                          19.257** (8.645)    7.382 (7.522)     22.567** (8.787)      7.166 (7.581)
Number of observations                 1884                1884                1884                1884
Number of groups (i.e. SMs)             313                 313                 313                 313
Wald χ2                              56.37***            55.65***            62.28***            56.27***
Likelihood ratio test: mixed versus linear model   ***                 ***                 ***                 ***

Missing data from 10 respondents on income, age and gender (refusal to answer). Standard errors (clustered at individual level) are shown in parentheses; * p < 0.1, ** p < 0.05, *** p < 0.01.

Results in model 1 in Table 3 show that, compared to the reference IA of $0.50 (representing 25% of the endowment), IAs (anchors) of $0.10 and $0.25 have negative influences on overall transfers, whereas IAs over $0.50 (as well as the IA of $0) do not lead to significantly different SM transfers. In model 2, the dichotomous version of the IA variable has a positive influence on SM transfers, broadly confirming the results of model 1. In addition, results show that SMs condition their contributions positively on those of FMs. However, the slope is quite modest: for each additional cent transferred by the FM, SMs increase the amount they transfer by only about 0.05 cents.

Models 3 and 4 include additional terms for interactions between FM transfer and anchors (hence allowing for different slopes). Interaction terms in model 3 show that the slopes associated with anchors of $0.10, $0.25, $0.75 and $1 are positive and significantly different from the slope for the anchor of $0.50 (although only weakly so for $0.10, $0.25 and $0.75), partly confirming what can be observed in Fig 3. Ex post tests of the equality of slopes also confirm that all the slopes, except the one associated with the anchor of $0.50, are not significantly different from one another. When the anchor is modelled as dichotomous (model 4), there is no interaction effect. This can be observed quite clearly in the figure in S6 Appendix, which shows SM contributions disaggregated by the dichotomous IA variable. In terms of effect sizes, estimates indicate that the smaller-value anchors lead to a reduction of around $0.10-$0.20 in the average amount transferred by SMs (corresponding to about 10-20% of the fair donation amount of $1), depending on the model specification.

Finally, being female and being older both positively influence SM transfers. The positive effect of female gender on donations has been found in numerous studies (e.g. [38,39]).

Overall, results indicate that average SM transfers are influenced by the initial FM choice presented to them under the sequential strategy method, indicating the presence of an anchoring effect. In addition, the general pattern of SM responses (to the first amount seen and to all possible FM transfers) suggests a positive relationship between FM contributions and SM contributions (which could be indicative of conformity), although we do not observe this for the IA of $0.50, for which we observe no relationship between FM and SM contributions. In the following section, we examine the extent to which the anchor influences the response strategy selected by individual SMs.

Influence of anchor on individual strategies

SMs were categorized by fitting a linear model (using ordinary least squares) predicting the SM strategy transfer amount by the FM transfer (outlined in the Analysis Procedure section). After fitting a linear model to each participant's data, we categorized participants into four main groups, as outlined in Table 2. The classification was guided by theoretical expectations regarding the potential response of individuals to redistribution choices made by others [27]. Briefly, these expectations derive from two broad classes of social preference model: in the first type, contributions by others are perceived as complements to one's own contributions due to a desire to conform [49,50]; in the second type, contributions by others are seen as substitutes for one's own contributions because one mainly cares about recipients' final earnings [51].

Thus, SMs were classed into four main categories: SMs whose transfers are positively correlated with FM transfers are termed ‘conformists’, whilst those whose transfers are negatively related to those of FMs are termed ‘compensators’. We recognize that a positive association between others' contributions and one's own may be attributed to other motivations, such as reciprocity, but here we use the definition of conformity as “the act of changing one’s behaviour to match the responses of others” [52, p. 606]. This definition accounts for any positive conditioning of one's behaviour on the behaviour of others.

In addition, taking into account that SMs may not condition their responses to FM choices, SMs may also be ‘self-interested’ (zero contribution over all possible FM transfers) or ‘unconditional givers’ (positive contribution, no relationship with FM transfers).
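This classification logic can be sketched as follows; the slope threshold is illustrative and stands in for the formal tests on the fitted OLS coefficients described in the Analysis Procedure section:

```python
import numpy as np

FM = np.array([0, 20, 40, 60, 80, 100], float)  # possible FM transfers (cents)

def classify_sm(sm, slope_tol=0.05):
    """Classify one SM's strategy vector of six conditional transfers.

    Illustrative threshold rule, not the paper's exact significance-based one.
    """
    sm = np.asarray(sm, float)
    if np.all(sm == 0):
        return "self-interested"        # zero transfer at every FM level
    slope, intercept = np.polyfit(FM, sm, 1)
    if slope > slope_tol:
        return "conformist"             # transfers rise with FM transfer
    if slope < -slope_tol:
        return "compensator"            # transfers fall with FM transfer
    return "unconditional giver"        # positive but flat transfers

examples = {
    "conformist":          [0, 10, 20, 30, 40, 50],
    "compensator":         [50, 40, 30, 20, 10, 0],
    "self-interested":     [0, 0, 0, 0, 0, 0],
    "unconditional giver": [25, 25, 25, 25, 25, 25],
}
labels = {name: classify_sm(vec) for name, vec in examples.items()}
```

Each stylized strategy vector above is recovered as the intended type.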

The distribution of SM types by each of the six anchors can be found in Table 4. A Pearson Chi2 test of the difference in proportions confirms that the proportions of SM types differ significantly between anchors (p = 0.026). This suggests that there are players whose redistribution strategies are susceptible to the anchor. Given the small sub-samples of SM types responding to each anchor, we also present distributions of SM types according to ‘low’ and ‘high’ anchors (Fig 4), where ‘low’ anchors are those IAs with a value of less than $0.50 and ‘high’ anchors have a value of $0.50 or more. This figure is intended to complement Table 4 by providing a visual overview of the impact of anchors on the distribution of SM types.

Table 4. Percentage distribution of SM types by anchor.

                          Initial Amount (anchor)
SM Type                 $0      $0.10   $0.25   $0.50   $0.75   $1      Overall
Conformists             10.91   17.50   11.67   11.76   18.75   20.37   15.12
Compensators             1.82    7.50    0.00   15.69    3.13    3.70    4.94
Unconditional givers    23.64   15.00   25.00   29.41   32.81   22.22   25.31
Self-interested         47.27   55.00   48.33   31.37   39.06   37.04   42.59
Other                   16.36    5.00   15.00   11.76    6.25   16.67   12.04
Sample size             55      40      60      51      64      54      324

Fig 4. Distribution of SM types by ‘high’ or ‘low’ anchor.

Fig 4

Results in Fig 4 clearly show a higher proportion of self-interested players (49.7%) when the IA is ‘low’, compared to the proportion of such players (36%) when the IA is ‘high’; a two-sample test of proportions indicates that the difference is statistically significant (p = 0.0135). At the same time, the proportions of conformists, unconditional givers and compensators are marginally higher under high anchors, though not significantly so (although a Pearson Chi2 test indicates a weakly significant increase in the proportion of ‘compensators’ (p = 0.061)).
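The two-sample test of proportions can be reproduced in plain Python; the counts (77 of 155 self-interested SMs under low anchors, 61 of 169 under high anchors) are back-calculated from the percentages and sample sizes in Table 4:

```python
from math import sqrt, erf

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided pooled z-test for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))  # standard normal CDF
    return z, 2 * (1 - phi(abs(z)))

# Self-interested SMs: low anchors (IA < $0.50) vs high anchors (IA >= $0.50)
z, p = two_prop_ztest(77, 155, 61, 169)
```

With these counts the test returns z ≈ 2.47 and p ≈ 0.0135, matching the value reported above.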

To assess whether the apparent impact of the IA on the likelihood of adopting self-interested versus all other behavioural strategies can still be observed when controlling for socio-economic characteristics, we ran a logistic regression on ‘self-interested’ (where 1 = self-interested player type, and 0 = all other). In Table 5, we report the results of two models: the first uses the dichotomous version of the IA variable (where 1 = IA≥$0.50 and 0 = IA<$0.50), and the second uses individual dummies representing the different anchoring amounts (with $0.50 as the reference). Results of additional multinomial regressions on the individual SM types can be found in S7 Appendix, as complements to the logistic regressions in Table 5. We do not present these results in the main text, as subsample sizes for each player type are below the recommended 10 observations per independent variable [53], leading to potentially biased results. However, results in the multinomial logit models confirm findings in the logistic regression models.
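A minimal sketch of such a logistic estimation, fitted by Newton-Raphson on synthetic data (the data-generating coefficients below are illustrative, not the study's estimates):

```python
import numpy as np

def logit_fit(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                    # observation weights
        H = X.T @ (X * W[:, None])         # Hessian of the log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

# Synthetic SMs: a high anchor lowers the odds of a self-interested strategy
rng = np.random.default_rng(1)
n = 2000
high_anchor = rng.integers(0, 2, n)        # 1 = IA >= $0.50
logits = 0.3 - 0.6 * high_anchor           # assumed true coefficients
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)
X = np.column_stack([np.ones(n), high_anchor])
beta = logit_fit(X, y)
```

The fitted `beta[1]` recovers the assumed negative anchor coefficient, i.e. a reduced likelihood of the self-interested type under high anchors.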

Table 5. Logistic regression models of determinants of ‘self-interested’ SM type.

                                                   (1)          (2)
IA dichotomous (1 = IA≥$0.50, 0 = IA<$0.50)     -0.577**
                                                 (0.236)
IA = $0                                                       0.709*
                                                             (0.416)
IA = $0.10                                                    1.051**
                                                             (0.457)
IA = $0.25                                                    0.653
                                                             (0.413)
IA = $0.75                                                    0.340
                                                             (0.406)
IA = $1                                                       0.208
                                                             (0.433)
Income (divided by 1000)                          0.002       0.002
                                                 (0.004)     (0.003)
Age                                              -0.011      -0.011
                                                 (0.011)     (0.011)
Female                                           -0.776***   -0.776***
                                                 (0.245)     (0.244)
Constant                                          0.580      -2.165
                                                 (0.444)     (0.499)
N                                                 314         314
chi2                                             19.63***    321.29***
                                                (d.f. = 4)  (d.f. = 8)

Standard errors in parentheses; * p < 0.1, ** p < 0.05, *** p < 0.01.

a Missing data from 10 respondents on income, age and gender (refusal to answer)

Results in Model 1 in Table 5 show that, when controlling for the socio-economic characteristics of SMs, the higher anchors ($0.50 and above) significantly reduce the likelihood of adopting a self-interested strategy. Model 2 confirms this, with the lowest IAs (namely, $0 and $0.10) having a positive effect on the likelihood of self-interested strategies, compared to the reference of $0.50. This influence of lower anchors appears to be limited to the very lowest values, as there is no significant relationship between an IA of $0.25 and the likelihood of adopting a self-interested strategy.

With regards to socio-economic characteristics, we observe that females are much less likely to adopt self-interested strategies, compared to all other strategies; this confirms findings that women are more altruistic (e.g. [39]), and adds to the mixed evidence on how conformity relates to gender (e.g. [54,55]).

Comment: SM expectations

Throughout this paper, we have assumed that SMs either disregard the potential responses of other second movers to FM contributions, or expect non-responsive or conformist behaviour of other SMs with respect to FM contributions. If, however, the expected behaviour of other SMs were negatively correlated with FM contributions, and if SMs mainly conditioned their responses on their expectations of how other SMs will behave, this could complicate the interpretation of SM responses and the classification of redistribution strategies. Our analysis of expectations, however, shows that, broadly, SMs consider other SMs to positively condition their contributions on FM contributions. This is true across all SM types. In other words, all SM types expect other SMs to ‘conform’ to FM contributions, regardless of whether this is the strategy they themselves use. We also note that if we control for expectations in the regressions in Table 3, results are unchanged, except that expectations are positively and significantly correlated with SM contributions in all models. However, we do not include these models in the main text because the expectations question was not incentivised. As a result, we cannot be sure whether stated expectations influenced contributions, or whether players answered the expectations question so as to justify the contribution choices they made in the game. Given this potential problem, and the fact that expectations do not affect the other variables' influences, we opt to omit the expectations variable from the analyses presented in this paper (results are available upon request).

Discussion & conclusions

In this study, we used a multiplayer dictator game to identify how redistribution behaviour is influenced by what others do. Specifically, we examined how second movers (SMs) responded to contributions by first movers (FMs) to passive recipients, using a strategy game, in which SMs provided a vector of responses to a range of possible FM decisions, ranging from selfish (zero contributions by FM) to a fair split (half of the endowment).

We found that, at the aggregate level, SM redistribution choices elicited via the sequential strategy method were positively influenced by the initial amount presented (the anchor). Analysis of SM redistribution choices thus confirms that SMs condition their transfer amounts on the initial FM transfer presented to them in the strategy experiment. The size of the effect was found to be small but meaningful. Specifically, smaller-value anchors ($0.00, $0.10, and $0.25) were estimated to reduce the amount transferred on average by around $0.10–$0.20 (10–20% of the fair donation amount of $1). While anchoring effects are well-established and have been extensively documented in the empirical literature (see [45] for a review), there is rather less evidence of anchoring effects with regard to monetary transfer decisions. The past literature on anchoring and adjustment has mostly focused on the effect of anchors on judgments, beliefs, and bids for consumer goods, with only a few studies examining how anchors (or related concepts, such as defaults) can affect redistribution or ‘fair sharing’ behaviour (e.g. [16,17]). Hence, our finding that anchoring effects extend to redistribution decisions is an important contribution to this limited literature. Future studies might explore whether anchors influence other types of pro-social behaviour, such as cooperation.

We also found that the size of the anchor influenced the distribution of behavioural ‘types’ in our experiment. The impact on the distribution of self-interested individuals appears to be most evident, with higher anchors leading to significantly fewer self-interested players. This adds to the literature showing that the distribution of ‘types’ may be context-dependent; our focus on how anchors in particular influence behavioural strategies is novel and thus a major contribution.

Overall, these findings imply that ‘types’ may be malleable, and the adoption of a behavioural strategy may be context dependent [24,25]. In particular, we note that self-interested types become less frequent with higher anchors. This suggests there may be more than one ‘type’ of self-interested agent. Ubeda (2014) [56] notes that there are two motivations underlying observed self-interested behaviour: on the one hand, a purely self-interested motivation, in which only one’s own earnings influence choices; on the other, more complex, self-serving motivations, in which there is a tension between pure self-interest and the desire to maintain a positive self-image. An individual of the second type might seek self-justification for selfish behaviour; this justification may be provided in the form of a low IA observed during the initial stages of play. However, if the initial conditions of play involve high anchors, then such a player might struggle to justify a selfish strategy if they also seek to maintain a positive self-image.

Indeed, analysis of open-ended explanations (see S8 Appendix) shows that fewer SMs with self-interested strategies explain their decisions in terms of greed/self-interest under a high anchor (IA ≥ $0.50) (52.46% of the self-interested subsample) than under a low anchor (66.23% of the self-interested subsample). This difference in proportions has a small effect size (h = 0.28), and a test of two proportions indicates it is not statistically significant (p = 0.1008). Nevertheless, these findings can be taken as broadly indicating the possibility that positive self-image is less of a concern among self-interested SMs who received a low anchor.

Further research could examine this apparent switching behaviour among those classed as having self-interested strategies and confirm whether this is only induced by the size of the anchor or whether this occurs in response to other factors. Additionally, it would be valuable to explore in greater detail the cognitive mechanisms underlying self-interested strategies.

We note that Gunnthorsdottir et al. (2007) [57] find that initial cooperative disposition is a good indicator of subsequent behaviour in an experimental setting. In our case, we observe that initial contextual factors may influence an individual's initial disposition as well as their subsequent redistribution strategies. Thus, not only is individual redistribution behaviour observed to be path dependent, but initial conditions strongly determine the path. If this is indeed the case, it suggests a very fruitful avenue for future research, in which the path dependency of different behaviours in a range of collective decision settings is examined as a function of the initial conditions of play. The outputs from this research may provide critical input into the understanding of how people choose to behave, and the types of citizen that individuals choose to be. It also holds some promise with regard to the potential for self-interested individuals to be ‘nudged’ towards positive redistribution strategies at critical junctures in time.

Supporting information

S1 Appendix. Testing for order effects.

(DOCX)

S2 Appendix. SM responses to FM Contributions.

(DOCX)

S3 Appendix. Comparing mean SM transfers in response to different FM transfers.

(DOCX)

S4 Appendix. SM responses to the initial amount (the anchor).

(DOCX)

S5 Appendix. Regressions on second-mover transfers, using only SMs in groups without SM dropouts (n = 279 SMs).

The dependent variable is cents transferred per second mover to the recipients.

(DOCX)

S6 Appendix. SM responses to FM contributions disaggregated by IA (dichotomous).

(DOCX)

S7 Appendix. Multinomial logit model of determinants of SM Type.

Individuals dummies for each anchor (reference category: self-interested).

(DOCX)

S8 Appendix. Analysing open-ended explanations for transfer decision.

(DOCX)

S1 Data. Experimental instructions.

(DOCX)

Acknowledgments

Many thanks to Roger Fouquet, Praveen Kujal, Daniele Nosenzo, Natalia Jimenez and Valerio Capraro for valuable comments on this paper. We also wish to thank colleagues at Middlesex Behavioural Economics Group as well as members of the LSE Behavioural Economics group for providing useful feedback on an earlier version of this paper. Finally, we acknowledge the valuable comments and suggestions from two anonymous reviewers.

Data Availability

All relevant data is within the paper and its Supporting Information files.

Funding Statement

The author(s) received no specific funding for this work. It was funded through the first author’s personal research funds by the Earth Institute Postdoctoral Fellowship programme at Columbia University.

References

  • 1.Tversky A., & Kahneman D. Judgment under uncertainty: Heuristics and biases. Science. 1974, 185(4157), 1124–1131. 10.1126/science.185.4157.1124 [DOI] [PubMed] [Google Scholar]
  • 2.Mussweiler T., & Strack F. The semantics of anchoring. Organizational Behavior and Human Decision Processes. 2001, 86(2), 234–255. [Google Scholar]
  • 3.Chapman G. B., & Johnson E. J. Anchoring, activation, and the construction of values. Organizational Behavior and Human Decision Processes. 1999. 79(2), 115–153. 10.1006/obhd.1999.2841 [DOI] [PubMed] [Google Scholar]
  • 4.Joireman J., Truelove H. B., & Duell B. Effect of outdoor temperature, heat primes and anchoring on belief in global warming. Journal of Environmental Psychology. 2010. 30(4), 358–367. [Google Scholar]
  • 5.Green D, Jacowitz KE, Kahneman D, McFadden D. Referendum contingent valuation, anchoring, and willingness to pay for public goods. Resource and energy economics. 1998. June 1;20(2):85–116. [Google Scholar]
  • 6.Holst GS, Hermann D, Musshoff O. Anchoring effects in an experimental auction–Are farmers anchored?. Journal of Economic Psychology. 2015. June 1;48:106–17. [Google Scholar]
  • 7.Dholakia UM, Simonson I. The effect of explicit reference points on consumer choice and online bidding behavior. Marketing Science. 2005. May;24(2):206–17. [Google Scholar]
  • 8.Wolk A, Spann M. The effects of reference prices on bidding behavior in interactive pricing mechanisms. Journal of Interactive Marketing. 2008. January 1;22(4):2–18. [Google Scholar]
  • 9.Wansink B., Kent R. J., & Hoch S. J. (1998). An anchoring and adjustment model of purchase quantity decisions. Journal of Marketing Research, 35(1), 71–81. [Google Scholar]
  • 10.Bergman O., Ellingsen T., Johannesson M., & Svensson C. (2010). Anchoring and cognitive ability. Economics Letters, 107(1), 66–68. [Google Scholar]
  • 11.Ariely D., Loewenstein G., & Prelec D. (2003). “Coherent arbitrariness”: Stable demand curves without stable preferences. The Quarterly Journal of Economics, 118(1), 73–106. [Google Scholar]
  • 12.Fudenberg D., Levine D. K., and Maniadis Z. (2012). On the robustness of anchoring effects in WTP and WTA experiments. American Economic Journal: Microeconomics, 4(2), 131–45. [Google Scholar]
  • 13.Cappelletti D., Güth W., & Ploner M. (2011). Unravelling conditional cooperation. Jena Economic Research Papers, 2011, 047. [Google Scholar]
  • 14.Luccasen R. A. (2012). Anchoring effects and participant numbers: Evidence from a public good game. Social Science Quarterly, 93(3), 858–865. [Google Scholar]
  • 15.Fosgaard T. R., & Piovesan M. (2015). Nudge for (the public) good: how defaults can affect cooperation. PloS One, 10(12), e0145488 10.1371/journal.pone.0145488 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Raihani N. J., & McAuliffe K. (2014). Dictator game giving: The importance of descriptive versus injunctive norms. PloS One, 9(12), e113826 10.1371/journal.pone.0113826 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Dhingra N., Gorn Z., Kener A., & Dana J. (2012). The default pull: An experimental demonstration of subtle default effects on preferences. Judgment and Decision Making, 7(1), 69. [Google Scholar]
  • 18.Burlando R. M., & Guala F. (2005). Heterogeneous agents in public goods experiments. Experimental Economics, 8(1), 35–54. [Google Scholar]
  • 19.Fischbacher U., Gächter S., & Fehr E. (2001). Are people conditionally cooperative? Evidence from a public goods experiment. Economics Letters, 71(3), 397–404. [Google Scholar]
  • 20.Eckel C. C., & Grossman P. J. (1996). Altruism in anonymous dictator games. Games and Economic Behavior 16(2), 181–191. [Google Scholar]
  • 21.Bardsley N., and Sausgruber R., (2005). Conformity and reciprocity in public good provision. Journal of Economic Psychology, 26, 664–681. [Google Scholar]
  • 22.Gächter S., Kölle F., & Quercia S. (2017). Reciprocity and the tragedies of maintaining and providing the commons. Nature Human Behaviour, 1(9), 650 10.1038/s41562-017-0191-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Dariel A. (2018). Conditional Cooperation and Framing Effects. Games, 9(2), 37. [Google Scholar]
  • 24.Konow J. (2001). Fair and square: the four sides of distributive justice. Journal of Economic Behavior & Organization, 46(2), 137–164. [Google Scholar]
  • 25.Crusius J., van Horen F., & Mussweiler T. (2012). Why process matters: A social cognition perspective on economic behavior. Journal of Economic Psychology, 33(3), 677–685. [Google Scholar]
  • 26.Krupka E., & Weber R. A. (2009). The focusing and informational effects of norms on pro-social behavior. Journal of Economic Psychology, 30(3), 307–320 [Google Scholar]
  • 27.Shang J., & Croson R. (2009). A field experiment in charitable contribution: The impact of social information on the voluntary provision of public goods. The Economic Journal, 119(540), 1422–1439. [Google Scholar]
  • 28.Frey B. S., & Meier S. (2004). ‘Social comparisons and pro-social behavior: Testing "conditional cooperation" in a field experiment’. American Economic Review, 94(5), 1717–1722. [Google Scholar]
  • 29.Gächter S., Gerhards L., & Nosenzo D. (2017). The importance of peers for compliance with norms of fair sharing. European Economic Review, 97, 72–86. [Google Scholar]
  • 30.Panchanathan K., Frankenhuis W. E., & Silk J. B. (2013). The bystander effect in an N-person dictator game. Organizational Behavior and Human Decision Processes, 120(2), 285–297. [Google Scholar]
  • 31.Blanco M., Engelmann D., & Normann H. T. (2011). A within-subject analysis of other-regarding preferences. Games and Economic Behavior, 72(2), 321–338. [Google Scholar]
  • 32.Thaler R.H. & Sunstein C.S. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. Yale University Press; 2008. [Google Scholar]
  • 33.Kahneman D. and Tversky A. Prospect theory: An analysis of decision under risk. Econometrica. 1979;47(2):263–91. [Google Scholar]
  • 34.Charité J, Fisman R, Kuziemko I. Reference points and redistributive preferences: Experimental evidence. National Bureau of Economic Research; 2015 Mar 5.
  • 35.Mussweiler T., & Strack F. (1999a). Comparing is believing: A selective accessibility model of judgmental anchoring In Stroebe W. & Hewstone M. (Eds.), European Review of Social Psychology (Vol. 10, pp. 135–167). Chichester, England: Wiley. [Google Scholar]
  • 36.Mussweiler T., Strack F., (1999b). Hypothesis-consistent testing and semantic priming in the anchoring paradigm: a selective accessibility model. Journal of Experimental Social Psychology 35, 136–164. [Google Scholar]
  • 37.Weber EU, Johnson EJ. Query theory: Knowing what we want by arguing with ourselves. Behavioral and Brain Sciences. 2011. April;34(2):91–2. [Google Scholar]
  • 38.Mesch D. J., Brown M. S., Moore Z. I., & Hayat A. D. (2011). Gender differences in charitable giving. International Journal of Nonprofit and Voluntary Sector Marketing, 16(4), 342–355. [Google Scholar]
  • 39.Eckel C. C., & Grossman P. J. (1998). Are women less selfish than men?: Evidence from dictator experiments. The Economic Journal, 108(448), 726–735. [Google Scholar]
  • 40.Arechar A. A., Gächter S., & Molleman L. (2018). Conducting interactive experiments online. Experimental Economics, 21(1), 99–131. 10.1007/s10683-017-9527-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Fischbacher U, Gachter S. Social preferences, beliefs, and the dynamics of free riding in public goods experiments. American Economic Review. 2010. March;100(1):541–6. [Google Scholar]
  • 42.Paolacci G., & Chandler J. (2014). Inside the Turk: Understanding Mechanical Turk as a participant pool. Current Directions Psychological Science, 23, 184–188. [Google Scholar]
  • 43.Horton J.J., Rand D.G., Zeckhauser R.J. (2011). The online laboratory: conducting experiments in a real labor market. Experimental Economics. 14, 399–425. [Google Scholar]
  • 44.Berinski A. J., Huber G. A., & Lenz G. S. (2012). Evaluating online labor markets for experimental research: Amazon.com’s Mechanical Turk. Political Analysis, 20, 351–368. [Google Scholar]
  • 45.Furnham A., & Boo H. C. (2011). A literature review of the anchoring effect. The Journal of Socio-Economics, 40(1), 35–42. [Google Scholar]
  • 46.Wegener D.T., Petty R.E., Detweiler-Bedell B., Jarvis W.B.G., (2001) Implications of attitude change theories for numerical anchoring: anchor plausibility and the limits of anchor effectiveness. Journal of Experimental Social Psychology 37, 62–69. [Google Scholar]
  • 47.Brañas-Garza P., Bucheli M., Espinosa M. P., & García-Muñoz T. (2013). Moral cleansing and moral licenses: experimental evidence. Economics & Philosophy, 29(2), 199–212. [Google Scholar]
  • 48.Brosig, J., Riechmann, T., & Weimann, J. (2007). Selfish in the end?: An investigation of consistency and stability of individual behavior. FEMM Working Paper Series No. 05/2007 (February 2007): pp. 1–33.
  • 49.Andreoni J., & Bernheim B. D. (2009). Social image and the 50–50 norm: A theoretical and experimental analysis of audience effects. Econometrica, 77(5), 1607–1636. [Google Scholar]
  • 50.Bernheim B. D. (1994). A Theory of Conformity. Journal of Political Economy, 841–877. [Google Scholar]
  • 51.Fehr E., & Schmidt K. M. (1999). A theory of fairness, competition, and cooperation. Quarterly Journal of Economics, 114(3), 817–868. [Google Scholar]
  • 52.Cialdini R. B., & Goldstein N. J. (2004). Social influence: Compliance and conformity. Annual Review Psychology, 55, 591–621. [DOI] [PubMed] [Google Scholar]
  • 53.Schwab, J. A. (2002). Multinomial logistic regression: Basic relationships and complete problems. http://www.utexas.edu/courses/schwab/sw388r7/SolvingProblems/
  • 54.Fosgaard T. R., Hansen L. G., & Piovesan M. (2013). Separating Will from Grace: An experiment on conformity and awareness in cheating. Journal of Economic Behavior & Organization, 93, 279–284. [Google Scholar]
  • 55.Eagly A. H. (1983). Gender and social influence: A social psychological analysis. American Psychologist, 38(9), 971. [Google Scholar]
  • 56.Ubeda P. (2014). The consistency of fairness rules: An experimental study. Journal of Economic Psychology, 41, 88–100. [Google Scholar]
  • 57.Gunnthorsdottir A., Houser D., & McCabe K. (2007). Disposition, history and contributions in public goods experiments. Journal of Economic Behavior & Organization, 62(2), 304–315. [Google Scholar]

Decision Letter 0

Joanna Tyrowicz

21 Oct 2019

PONE-D-19-24674

The Effect of Anchors and Social Information on Behaviour

PLOS ONE

Dear PhD O'Garra,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by Dec 05 2019 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Joanna Tyrowicz

Academic Editor

PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Additional Editor Comments (if provided):

Dear Miss O'Garra,

I have now heard from the referees, and I am happy to invite you to revise your paper and resubmit it to PLOS ONE. While Referee 1 is less critical, Referee 2 raises some important points. I believe that all the raised concerns can be addressed in your revised manuscript, so I encourage you to take them in good spirit and attempt to address them.

If you decide to resubmit, please explain in detail in a cover letter how you addressed the concerns of the referees.

I am looking forward to receiving your submission.

Kind regards,

Joanna Tyrowicz


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I think this is a very well-written paper, presenting the results of an interesting economic experiment on anchoring effects in the context of monetary transfers. I do not have any strong objections, only some comments and questions that I hope can provide some inspiration for how to improve the paper.

Practical importance of the study

I understand that one of the main messages of the paper is that “the choice of behavioural strategies is … affected by normatively irrelevant contextual factors, such as anchors” (p. 4). There is a good discussion of this issue in the paper, but what I am missing is an answer to the questions: Why is this outcome important? For what, and in what contexts, can it be useful? I mean conveying the practical meaning and importance of the study in a broader sense. I see there is a bit of such a discussion in the last paragraph on p. 5. Nevertheless, upon reading the paper, I am left with the feeling that a reader might not really be convinced of the practical importance of this study and the application of its results. To me, this could be made more apparent in the paper, which would make its contribution more pronounced.

Focus on anchoring or maybe potential ordering effects?

The authors focus the analysis on the first amount displayed to second-movers in the sequence of considered first-mover transfer amounts. I wonder, however, whether the entire order of questions with different first-mover transfer amounts was not an important factor in the decisions of second-movers. For instance, having seen $1 in the first question, a second-mover could answer differently (adopt a different strategy) when faced in the second question with a value of $0 than with a value of $0.75. Furthermore, the behavior could differ depending on whether the sequence was monotonic or not (at least throughout several, e.g., three, initial values). This aspect receives almost no attention in the paper, except for the inclusion of a variable “Order in which FM [first-mover] transfer presented” in the models in Table 2. I could not find a definition of the levels of this variable in the paper, so I do not understand how this single variable controls for the various possible combinations of displaying first-mover transfer amounts. At least in the context of preference elicitation, there is a good part of the literature devoted to ordering effects, and this seems closely related to the potential role of the order in which the sequence of first-mover amounts is displayed.

Literature references

It seems to me that a little more caution might be needed when describing the current state of the literature. Two specific examples are below.

The authors write: “Most anchoring studies have examined the impact of anchors on … elicited preferences (e.g. Ariely, Loewenstein & Prelec, 2003; Green et al., 1998). Far less research has been conducted on how anchors affect actual behaviour; …” (p. 2)

I wonder about this distinction: in particular, I think that studies concerning elicited preferences can also involve actual behavior. In the context of anchoring, I think that an example of such a study could be Fudenberg et al. (2012). They investigate the role of anchoring when eliciting preferences, and their study involves actual payments (probabilistically binding choices). This is just one quick example; I think there are other similarly related studies.

The authors write: “this is the first study to show evidence of anchoring effects influencing monetary transfer decisions” (p. 24). Further in the same paragraph, the authors use the phrase “monetary contribution decisions”. I wonder whether, for example, the study by Luccasen (2012) fits this description. If it does, that would suggest that the reviewed study was not the first one in the specified research area.

Other questions and comments

$2 seems to be a small amount. Could the results be affected by that? If so, how?

It is clear what the possible choices for first movers were (the six discrete values between $0 and $1). What was the set of possible choices for second movers? Was it also a closed-ended question, or an open-ended one? I have not found explicit information on that in the paper.

“All transfers by FMs and SMs were divided up equally among the recipients in the group.” (p. 8) What happened upon dropouts? Was the entire sum divided by 8 or by a reduced number of recipients?

The question above is related to the dependent variable in the models in Table 2. What exactly is the amount: per recipient or total? How does it account for dropouts of second-movers or recipients? Does it include first-movers’ transfers or only those of second-movers? This clarification could also make the description of the results clearer: for example, “for each unit increase in the FM’s [first-mover’s] transfer, SMs [second-movers] increase the amount they transfer by about 5% of the FM’s transfer” – is the 5% for the entire group of four second-movers? If so, this would imply an even smaller individual effect based on the model: a one-cent increase in the first-mover transfer increases the transfer to recipients by 0.05 cents, which would be a 0.05 / 4 transfer increase per second-mover, if I understand the notation correctly.

The last paragraph on p. 18 describes models briefly summarized in Table 3. I think the paragraph could be more explicit, e.g., provide a formula for the models, to make it clear what is meant in Table 3 by beta and y-intercept, and whether any other controls were considered in this modelling.

There is a bit of discussion of the compensating behavioral strategy, on p. 22 among other places. I wonder, however, about referring to the group of compensating second-movers, as this is a very small group. If I recovered the numbers of individuals correctly based on Table 4, the number of compensators is 16, out of which 4 faced a low anchor (below $0.5) and 12 a high anchor. This means that some coefficient estimates in the model in Table 5 are based on very few responses, such as 4 (or even fewer if any of those individuals did not report some of their socio-demographics). I wonder whether it is justified to make any comparisons with such an under-represented group.

“lower anchors also positively (but only weakly) influence the likelihood of adopting conformist and unconditionally giving strategies” (p. 22) Should it not be “higher” instead of “lower” in the first word of this quotation?

A few small language corrections might be needed: e.g., “at aggregate level” on p. 11 (an article might be missing); “we’ve” on p. 23 looks informal.

References

Fudenberg, D., Levine, D. K., and Maniadis, Z. (2012). On the robustness of anchoring effects in WTP and WTA experiments. American Economic Journal: Microeconomics, 4(2), 131-45.

Luccasen, R. A. (2012). Anchoring effects and participant numbers: Evidence from a public good game. Social Science Quarterly, 93(3), 858-865.

Reviewer #2: On the introduction: Overall, the idea of testing the effect of anchoring on monetary transfers is interesting. The authors take on the challenge of bringing together a similar concept that has been discussed in different fields using different terms (p. 2). In this line of thought, the authors’ statement that this study would be the first one on this path (p. 24 para 2) may need stronger support. This could be done, for example, by adding a more thorough description of the conceptual definitions that differentiate anchoring, framing, and social norms. Others may use different terms, such as “default” and “reference points” (Charité, Fisman, & Kuziemko, 2015; Dhingra, Gorn, Kener, & Dana, 2012), with quite similar intention.

In addition, it is perhaps important to provide the reader with a richer understanding of the meaning of the anchoring effect in this modified dictator game, by explaining how the monetary transfer decision used in this study differs from other monetary transfer forms in other contexts that have also been found to suffer from anchoring effects, such as auctions (Holst, Hermann, & Musshoff, 2015) and donations (Martin & Randal, 2007). If the difference lies in the motivation behind such behaviour, comparing the use of the dictator game (context-free) and charity (context-dependent), then this study could also contribute to the attempt to explain why anchoring effects occur in context-dependent situations or vice versa (p. 10 para 2 and p. 25 para 1). Further, the authors’ attempt to explain the behavioural strategies may also draw on studies that have identified the cognitive mechanisms underlying individual redistribution decisions (e.g. Crusius, Horen, & Mussweiler, 2012).

On method: The rationale of this study should be presented before the experimental method is explained, so that we can tell whether the selected procedure has been properly designed and justified. It should also be mentioned whether the experiment was originally designed by the authors or is an adoption/modification of other studies.

The authors could also add more information on how the information on the FM’s transfer is made visible to other players, by inserting a screenshot (p. 7 para 2). As there are 6 possible FM transfers and 6 possible SM responses, and all those possibilities were presented sequentially to both of them (or was the order randomized only for the SM [p. 9, para 2]?), there is actually a chance of an order effect (or perhaps an anchoring effect) for the FM too. It is still unclear in this report how the authors anticipated this issue, e.g. whether the choices were presented in random order, etc.

Information regarding the step-by-step instructions during the experiment might be better written chronologically rather than inserted in the middle (p. 8 para 1 and 3), so as to make the flow of the experiment easier for a more general reader to understand. Stating in the text the web application used to run this experiment may also be considered important when reporting a computer-based experimental study.

On the data report: Descriptive data analyses should be added, along with tests of the assumptions underlying the analyses. More importantly, information on the goodness of fit of the models for all analyses (e.g. [partial] eta squared for ANOVA) should be added, apart from the p-values. Also, the statistical package used for the analyses should be named. The authors are advised to see the statistical reporting guidelines for authors provided on the PLOS ONE website for more detail.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Apr 14;15(4):e0231203. doi: 10.1371/journal.pone.0231203.r002

Author response to Decision Letter 0


20 Dec 2019

PONE-D-19-24674

The Effect of Anchors and Social Information on Behaviour

PLOS ONE


Dear reviewers,

We greatly appreciate the thoughtful comments and suggestions, which we have endeavoured to address to the best of our ability. Any page references are made in relation to the unmarked Manuscript.

Kind regards.

Reviewer #1: I think this is a very well-written paper, presenting the results of an interesting economic experiment on anchoring effects in the context of monetary transfers. I do not have any strong objections, only some comments and questions that I hope can provide some inspiration for how to improve the paper.

Practical importance of the study

I understand that one of the main messages of the paper is that “the choice of behavioural strategies is … affected by normatively irrelevant contextual factors, such as anchors” (p. 4). There is a good discussion of this issue in the paper, but what I am missing is an answer to the questions: Why is this outcome important? For what, and in what contexts, can it be useful? I mean conveying the practical meaning and importance of the study in a broader sense. I see there is a bit of such a discussion in the last paragraph on p. 5. Nevertheless, upon reading the paper, I am left with the feeling that a reader might not really be convinced of the practical importance of this study and the application of its results. To me, this could be made more apparent in the paper, which would make its contribution more pronounced.

Response: We have added the following sentence to para 2 on page 5, highlighting the practical importance of the findings of this study:

Added text:

The practical value of this finding is highly significant, as anchoring effects could potentially be harnessed not only to ‘nudge’ individuals towards single instances of fair sharing, but towards the adoption of more persistent redistributive behaviour.

Focus on anchoring or maybe potential ordering effects?

The authors focus the analysis on the first amount displayed to second-movers in the sequence of considered first-mover transfer amounts. I wonder, however, whether the entire order of questions with different first-mover transfer amounts was not an important factor in the decisions of second-movers. For instance, having seen $1 in the first question, a second-mover could answer differently (adopt a different strategy) when faced in the second question with a value of $0 than with a value of $0.75. Furthermore, the behavior could differ depending on whether the sequence was monotonic or not (at least throughout several, e.g., three, initial values). This aspect receives almost no attention in the paper, except for the inclusion of a variable “Order in which FM [first-mover] transfer presented” in the models in Table 2. I could not find a definition of the levels of this variable in the paper, so I do not understand how this single variable controls for the various possible combinations of displaying first-mover transfer amounts. At least in the context of preference elicitation, there is a good part of the literature devoted to ordering effects, and this seems closely related to the potential role of the order in which the sequence of first-mover amounts is displayed.

Response: We have conducted a series of additional analyses on the data to identify whether there are order effects beyond those related to the impact of the initial amount. These are reported in Online Appendix S1. We found no evidence of order effects beyond the impact of the initial amount on SM transfers. This is noted on page 12, para 2.

The ‘order’ variable in the regressions in Table 2 is an indicator of the order in which FM transfers were presented (first through sixth), as described on page 17, para 1. This controls for possible effects of time or repetition on stated contributions but is not intended to identify order effects, which are tested as described in Online Appendix S1.

Added text:

We also consider it possible that the entire order in which FM transfers are presented to SMs may have an effect on choices beyond the effect of the initial amount. To assess possible order effects, we ran a series of tests which are reported in the Online Appendix S1. We found no evidence of order effects beyond the impact of the initial amount on SM transfers.
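The order-effect tests themselves live in Online Appendix S1 and are not reproduced in this letter. Purely as an illustration of the kind of check involved (simulated data and hypothetical variable names only; this is not the paper's actual analysis), one can regress SM transfers on both the initial amount shown and each amount's position in the displayed sequence, and verify that only the former carries weight:

```python
import numpy as np

# Illustrative sketch only: we simulate SM transfers that depend only on the
# first FM amount displayed (the anchor), then check that a regression
# attributes nothing to later presentation order.
rng = np.random.default_rng(0)

n_subjects, n_rounds = 200, 6
fm_amounts = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])  # hypothetical levels

rows = []
for _ in range(n_subjects):
    order = rng.permutation(fm_amounts)   # random display sequence per subject
    anchor = order[0]                     # first amount shown
    for pos in range(n_rounds):
        # True data-generating process: only the anchor matters
        transfer = 0.5 + 0.3 * anchor + rng.normal(0, 0.05)
        rows.append((transfer, anchor, pos))

y = np.array([r[0] for r in rows])
X = np.column_stack([
    np.ones(len(rows)),        # intercept
    [r[1] for r in rows],      # anchor (initial amount displayed)
    [r[2] for r in rows],      # position in the presentation sequence
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # approximately [0.5, 0.3, 0.0]: no effect of position
```

In the simulation the position coefficient comes out near zero while the anchor coefficient recovers its true value, mirroring the reported finding of no order effects beyond the initial amount.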

Literature references

It seems to me that a little more caution might be needed when describing the current state of the literature. Two specific examples are below.

The authors write: “Most anchoring studies have examined the impact of anchors on … elicited preferences (e.g. Ariely, Loewenstein & Prelec, 2003; Green et al., 1998). Far less research has been conducted on how anchors affect actual behaviour; …” (p. 2)

I wonder about this distinction: in particular, I think that studies concerning elicited preferences can also involve actual behavior. In the context of anchoring, I think that an example of such a study could be Fudenberg et al. (2012). They investigate the role of anchoring when eliciting preferences, and their study involves actual payments (probabilistically binding choices). This is just one quick example; I think there are other similarly related studies.

Response: Many thanks for the suggested reference. We have addressed this missing literature by amending paragraph 2 of the introduction and acknowledging that there are additional studies that examine anchoring effects on valuations with probabilistically binding choices.

Added references:

Fudenberg, D., Levine, D. K., and Maniadis, Z. (2012). On the robustness of anchoring effects in WTP and WTA experiments. American Economic Journal: Microeconomics, 4(2), 131-45.

Bergman, O., Ellingsen, T., Johannesson, M., & Svensson, C. (2010). Anchoring and cognitive ability. Economics Letters, 107(1), 66-68.

The authors write: “this is the first study to show evidence of anchoring effects influencing monetary transfer decisions” (p. 24). Further in the same paragraph, the authors use the phrase “monetary contribution decisions”. I wonder whether, for example, the study by Luccasen (2012) fits this description. If it does, that would suggest that the reviewed study was not the first one in the specified research area.

Response: Luccasen (2012) examines anchoring effects on contributions to a public good (reflecting a willingness to cooperate, an interdependent decision); in contrast, we examine how anchors affect pure redistribution choices (often considered to reflect fairness concerns).

To address this extra literature, we have amended paragraph 3 in the introduction to introduce a broader (yet limited) literature addressing anchoring effects on pro-social behaviour, of which Luccasen (2012) is one paper.

We have also amended the wording to be more precise – by referring to “redistribution choices” instead of “monetary contribution decisions”.

Added references:

Cappelletti, D., Güth, W., & Ploner, M. (2011). Unravelling conditional cooperation. Jena Economic Research Papers, 2011, 047.

Luccasen, R. A. (2012). Anchoring effects and participant numbers: Evidence from a public good game. Social Science Quarterly, 93(3), 858-865.

Fosgaard, T. R., & Piovesan, M. (2015). Nudge for (the public) good: how defaults can affect cooperation. PloS One, 10(12), e0145488.

Dhingra, N., Gorn, Z., Kener, A., & Dana, J. (2012). The default pull: An experimental demonstration of subtle default effects on preferences. Judgment and Decision Making, 7(1), 69.

Other questions and comments

$2 seems to be a small amount. Could the results be affected by that? If so, how?

Response: In fact, $2 is quite standard in Amazon Mechanical Turk experiments, and a number of studies have found that data collected using MTurk (with low stakes) are of similar quality to those gathered in the standard laboratory. We therefore assume that the size of the stake had no significant impact on behaviour. We have added the following text addressing this issue (on page 6):

Added text:

MTurk experiments generally involve low stakes, as participants play from their computers or smartphones, which usually takes less than ten minutes. This allows experimenters to decrease the stakes without compromising the results. This has been confirmed by several studies showing that data collected using MTurk (with low stakes) are of similar quality to those gathered using the standard laboratory (Horton, Rand & Zeckhauser, 2011; Berinsky, Huber & Lenz, 2012; Goodman, Cryder & Cheema, 2013; Paolacci & Chandler, 2014).

Added References:

Goodman, J. K., Cryder, C. E., & Cheema, A. (2013). Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples. Journal of Behavioral Decision Making, 26, 213-224.

Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2012). Evaluating online labor markets for experimental research: Amazon.com’s Mechanical Turk. Political Analysis, 20, 351-368.

Paolacci, G., & Chandler, J. (2014). Inside the Turk: Understanding Mechanical Turk as a participant pool. Current Directions in Psychological Science, 23, 184-188.

It is clear what the possible choices for first movers were (the six discrete values between $0 and $1). What was the set of possible choices for second movers? Was it also a closed-ended question, or an open-ended one? I have not found explicit information on that in the paper.

Response: Thanks for noting this. Second movers could provide open-ended responses. We have added text on page 9 clarifying this.

Added text:

SM transfers were elicited using an open-ended format, such that they could transfer any amount between $0 and $2.

“All transfers by FMs and SMs were divided up equally among the recipients in the group.” (p. 8) What happened upon dropouts? Was the entire sum divided by 8 or by a reduced number of recipients?

Response: For the final pay outs, we always divided the sum of all transfers among the actual number of recipients in the group, regardless of the number of dropouts. This has been clarified on page 8.

The question above is related to the dependent variable in the models in Table 2. What exactly is the amount: per recipient or total? How does it account for dropouts of second-movers or recipients? Does it include first-movers’ transfers or only those of second-movers? This clarification could also make the description of the results clearer: for example, “for each unit increase in the FM’s [first-mover’s] transfer, SMs [second-movers] increase the amount they transfer by about 5% of the FM’s transfer” – is the 5% for the entire group of four second-movers? If so, this would imply an even smaller individual effect based on the model: a one-cent increase in the first-mover transfer increases the transfer to recipients by 0.05 cents, which would be a 0.05 / 4 transfer increase per second-mover, if I understand the notation correctly.

Response: The dependent variable in Table 2 is the amount transferred per SM. This has been clarified in the main text and in the title of the table. We have also clarified that the analysis in this paper uses only data from SM responses to the strategy method, given our focus on anchoring effects. This clarification has been added on page 7, para 1, as well as in the new Section 2.3 (Analysis Procedure) on page 16, which was added at the suggestion of Reviewer #2 in line with the PLOS ONE statistical reporting standards.

Regarding dropouts, we do not consider it relevant to control for them, as players never learn whether other players have dropped out when making their strategy decisions – which is what we analyse in this study. Nonetheless, we have re-run the models using only groups with all SMs present, and the results are similar; we report this in the main text (page 18, paragraph above the table) and have added the new regressions to Online Appendix S5.

As noted above, the results only include the SM decisions, as these were the only ones elicited using the strategy method. FM decisions were elicited via direct response and are not analysed in this study as they do not affect the SM’s responses analysed in this paper. This has been clarified on page 14 (new Section 2.3) with the following text:

Added text:

Given the focus of this paper on anchoring effects, all results and analyses pertain solely to SM decisions elicited using the strategy method. Data on FM transfers are not analysed here; however, they are available upon request.

The last paragraph on p. 18 describes models briefly summarized in Table 3. I think the paragraph could be more explicit, e.g., provide a formula for the models, to make it clear what is meant in Table 3 by beta and y-intercept, and whether any other controls were considered in this modelling.

Response: This has been done: we have added the model formula to the new Section 2.3 (Analysis Procedure), which was added in response to a suggestion by Reviewer #2 regarding the PLOS ONE statistical reporting standards.

There is a bit of discussion of the compensating behavioral strategy, on p. 22 among other places. I wonder, however, about referring to the group of compensating second-movers, as this is a very small group. If I recovered the numbers of individuals correctly based on Table 4, the number of compensators is 16, out of which 4 faced a low anchor (below $0.5) and 12 a high anchor. This means that some coefficient estimates in the model in Table 5 are based on very few responses, such as 4 (or even fewer if any of those individuals did not report some of their socio-demographics). I wonder whether it is justified to make any comparisons with such an under-represented group.

Response: Thank you very much for spotting this. Indeed, the player ‘type’ sample sizes are too small for multinomial logistic regression. Consequently, we have addressed this issue by replacing the multinomial logit regression with binary logistic regression models in which we examine the determinants of adopting a self-interested strategy versus all other strategies. We have moved the multinomial logit model to the online appendix (S7) for reference, with the caveat (stated in the main text) that its results may not be reliable due to sample size issues. We have also amended the entire discussion of the results associated with Table 5 (pages 24-26).
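To illustrate the modelling change (simulated data and hypothetical variable names only; this is not the paper's model or data), collapsing the outcome to "self-interested strategy vs. all others" lets a standard binary logit be estimated even when individual strategy types have very few members:

```python
import numpy as np

# Illustrative sketch only: with very few observations per strategy 'type', a
# multinomial logit is unreliable, so the outcome is collapsed to binary --
# self-interested strategy vs. all others -- and fit with a plain logistic
# regression via Newton-Raphson.
rng = np.random.default_rng(1)

n = 240
anchor_high = rng.integers(0, 2, n)   # 1 = high anchor shown first
# Simulated tendency: low anchors raise the odds of a self-interested strategy
true_logit = 0.5 - 1.2 * anchor_high
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n), anchor_high])
beta = np.zeros(2)
for _ in range(25):                    # Newton-Raphson iterations to the MLE
    mu = 1 / (1 + np.exp(-X @ beta))   # fitted probabilities
    grad = X.T @ (y - mu)              # score vector
    hess = X.T @ (X * (mu * (1 - mu))[:, None])  # information matrix
    beta += np.linalg.solve(hess, grad)

print(beta)  # slope is negative: a high anchor lowers P(self-interested)
```

With the categories pooled, each coefficient draws on the full sample rather than on a handful of observations per strategy type, which is what makes the binary specification more reliable here.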

“lower anchors also positively (but only weakly) influence the likelihood of adopting conformist and unconditionally giving strategies” (p. 22) Should it not be “higher” instead of “lower” in the first word of this quotation?

Response: Thanks for noting this. It has been corrected.

A few small language corrections might be needed: e.g., “at aggregate level” on p. 11 (an article might be missing); “we’ve” on p. 23 looks informal.

Response: These have been corrected.

References

Fudenberg, D., Levine, D. K., and Maniadis, Z. (2012). On the robustness of anchoring effects in WTP and WTA experiments. American Economic Journal: Microeconomics, 4(2), 131-45.

Luccasen, R. A. (2012). Anchoring effects and participant numbers: Evidence from a public good game. Social Science Quarterly, 93(3), 858-865.

Thank you for these references!

Reviewer #2:

On the introduction: Overall, the idea of testing the effect of anchoring on monetary transfers is interesting. The authors take on the challenge of bringing together a similar concept that has been discussed in different fields using different terms (p. 2). In this line of thought, the authors’ statement that this study would be the first one on this path (p. 24 para 2) may need stronger support. This could be done, for example, by adding a more thorough description of the conceptual definitions that differentiate anchoring, framing, and social norms. Others may use different terms, such as “default” and “reference points” (Charité, Fisman, & Kuziemko, 2015; Dhingra, Gorn, Kener, & Dana, 2012), with quite similar intention.

Response: We are grateful for these suggestions, which are indeed relevant to our study (particularly the Dhingra et al, 2012 paper). We have amended para 3 in the introduction to acknowledge this other related literature. We have also added text on page 12-13, acknowledging the relevance of framing effects, which we agree are closely related to anchoring effects.

Added text:

We acknowledge that there are other contextual factors - such as how the decision is framed - that may influence decisions. Framing effects occur when information is presented in different ways, leading to different interpretations of the context and decision. In our study, it is possible that the first piece of information received (what we term the ‘anchor’) actually affects choices through a ‘framing effect’ – i.e. by changing the perception of what the decision context involves. This would be in line with the ‘selective accessibility’ and ‘query theory’ models, which propose heavy reliance on the first piece of information – hence, in this context, the anchoring effect could be akin to a ‘framing effect’.

We have added the following paragraph to page 6 of the Introduction.

Added text:

We note that this study also complements the literature examining ‘default’ effects on redistribution choices. Defaults are pre-determined choices that will be implemented unless an individual actively changes them [32]. They are related to anchors in that a default option can also act as an anchor. As noted earlier, Dhingra et al (2012) [17] find evidence of what they term a “default pull” on choices in a dictator game with default options. Similar findings are reported in [15,23], albeit with respect to cooperation behaviour in a public goods game. Also related is the literature on ‘reference points’, which people often use to evaluate gains and losses [33], and which have been found to influence bidding behaviour in auctions (e.g. [6]). With regards to impacts on redistribution choices, Charité, Fisman & Kuziemko (2015) [34] find that people’s choices are impacted by other people’s reference points.

In addition, it is perhaps important to also provide the reader with a richer understanding of the meaning of the anchor effect utilising this modified dictator game, by explaining how the monetary transfer decision used in this study differs from other monetary transfer forms in different contexts that have also been found to suffer from anchor effects, such as auctions (Holst, Hermann, & Musshoff, 2015) and donations (Martin & Randal, 2007). If the difference lies behind the motivation for such behaviour – comparing the use of the dictator game (context-free) and charity (context-dependent) – then this study could also contribute to the attempt to explain why the anchor effect occurs in a context-dependent situation or vice versa (p. 10, para 2 and p. 25, para 1).

Response: On page 3, para 2, we have added an explanation of what we mean by ‘redistribution choices’ (added text: “By ‘redistribution’, we refer to decisions to share wealth with others, with no expectation or possibility of benefitting materially from redistribution”).

This is somewhat different from the effect of anchors on consumer decisions in auctions (as in Holst et al, 2015), which has already been addressed in paragraph 2 of the introduction. We have added this reference to those listed in paragraph 2. As for possible anchoring effects examined in Martin and Randal (2007), we note that the anchoring effect examined in that paper is only considered as a side-issue, and from their findings, it is not possible to disentangle anchoring effects from social information effects.

Further, the authors’ attempt to explain the behavioural strategies may also refer to studies that have been reported to identify the cognitive mechanisms underlying individual redistribution decisions (e.g. Crusius, van Horen, & Mussweiler, 2012).

Response: Thank you for this reference, it has been used several times throughout the paper to bolster our argument that context matters (intro, page 3, last sentence; conclusion, page 28, para 3) and we have also added a sentence (page 5, end of para 1) acknowledging that different behaviours and strategies may result from very different psychological processes interacting with context, as discussed in the recommended paper.

Added reference:

Crusius, J., van Horen, F., & Mussweiler, T. (2012). Why process matters: A social cognition perspective on economic behavior. Journal of Economic Psychology, 33(3), 677-685.

On method. The rationale of this study should be put before explaining the experimental method, such that we could tell whether the selected procedure has been properly designed and justified. It should also be mentioned whether the experiment was originally designed by the authors or is an adaptation/modification of other studies.

Response: The rationale for the study is provided in the Introduction, where we review the literature and identify our research questions and rationale. At the end of para 2, page 4-5 we clarify that to the best of our knowledge, this is the first study to examine anchoring effects using the sequential strategy method.

The authors could also add more information on how the information on the FM’s transfer was made visible to other players by inserting a screenshot (p. 7, para 2).

Response: We have added two figures: Figure 1 is a screenshot showing how SMs were informed about the transfer choices that FMs had (page 9) and Figure 2 is a screenshot showing how SM transfers were elicited in response to sequential FM transfers (page 9).

With the addition of Figure 1, there is now some repetition in the paper regarding some of the instructions presented to SMs; to avoid this, we have deleted the repeated text from the main body of the paper (from page 9).

As there are 6 possible FM transfers and 6 possible SM responses, and all those possibilities were presented sequentially to both of them (or was it only randomized for the SM [p. 9, para 2]?), there is actually a chance of an order effect (or perhaps an anchor effect) for the FM too. It is still unclear in this report how the authors anticipated this issue, e.g. whether the choices were presented in random order etc.

Response: Thanks for noting this. In fact, FMs were presented the six options simultaneously (now clarified on page 8), so there is no effect of order on their choices.

We also mention that the SM transfers were elicited using an open-ended format – this has been clarified on page 9 (paragraph between the figures).

Information regarding the step-by-step instructions during the experiment may be better written chronologically rather than inserted in the middle (p. 8, paras 1 and 3), so as to make the flow of the experiment easier to understand for a more general reader.

Response: We have rewritten the experimental instructions to make the chronological order clearer.

Stating in the text the web application used to run this experiment may also be considered important when reporting a computer-based experimental study.

Response: The web application was developed specifically for this experiment, primarily using the programming languages PHP, HTML, and JavaScript. It was hosted on Amazon EC2 while the experiment was running.

We have added this text to page 10.

On data report. A descriptive data analysis should be added, as well as tests of the assumptions used for the analyses. More importantly, information on the goodness of fit of the models for all analyses (e.g. [partial] eta squared for ANOVA) should be added, apart from the p-values. Also, the statistical package used for the analyses should be named. The authors are advised to see the statistical reporting guidelines for authors provided on the PLOS ONE website for more detail.

Response: Section 3.1 is a descriptive analysis of the data – it presents an overview of how SMs responded to FM transfers using the strategy method, without any focus on the anchoring effect. A description of the sample characteristics is now found in a new section titled ‘Participants’ (pages 6-7), which has been added for clarity.

As suggested by the reviewer, we followed the PLOS ONE statistical reporting guidelines, and added a new section (section 2.3) titled ‘Analysis Procedure’ in the Materials/Methods section. Here we outline the various analyses conducted throughout the paper and we report tests of the assumptions behind the analyses used, such as testing for normality, required for use of ANOVA. We did this using standardised and quantile normality plots and the Shapiro-Wilk test. Given that our data (both residuals and raw) are quite non-normal, after careful consideration we decided to remove the ANOVA analyses and only report the non-parametric equivalents (Friedman test for repeated measures and Kruskal-Wallis tests comparing contributions by anchor). This is now reported in Section 2.3. To complement the p-values, we have also included effect sizes, as suggested by the reviewer – specifically, eta-squared values for Kruskal-Wallis tests and Kendall’s W coefficient for the Friedman test. For the mixed effects models, we have assessed intraclass correlation and have also added likelihood ratio tests to assess whether linear regression performs better than mixed effects in Table 3. Finally, the statistical packages used in this paper have been specified in the last paragraph in Section 2.3.
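For readers who wish to see the mechanics of the reported test, a minimal pure-Python sketch of a Kruskal-Wallis test with the eta-squared effect size is shown below. The data are illustrative only; this is not the authors' analysis code, which was run in the statistical packages named in Section 2.3 of the paper.

```python
from collections import Counter

def kruskal_wallis(groups):
    """Kruskal-Wallis H (with tie correction) and eta-squared effect size.

    eta^2 = (H - k + 1) / (n - k), where k = number of groups, n = total N.
    """
    # pool all observations, remembering which group each came from
    data = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(data)
    # assign average ranks to tied values
    rank_of = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1
        avg = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        for t in range(i, j):
            rank_of[t] = avg
        i = j
    # rank sum per group
    k = len(groups)
    rsum = [0.0] * k
    for (v, gi), r in zip(data, rank_of):
        rsum[gi] += r
    h = 12 / (n * (n + 1)) * sum(rsum[g] ** 2 / len(groups[g]) for g in range(k)) - 3 * (n + 1)
    # tie correction factor
    ties = Counter(v for v, _ in data)
    c = 1 - sum(t ** 3 - t for t in ties.values()) / (n ** 3 - n)
    h = h / c if c > 0 else h
    eta_sq = (h - k + 1) / (n - k)
    return h, eta_sq

# hypothetical contributions under three anchor conditions (illustrative only)
h, eta = kruskal_wallis([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```

A large eta-squared (close to 1) indicates that group membership explains most of the rank variation, which is the kind of effect-size information the reviewer asked to see alongside the p-values.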

Attachment

Submitted filename: Responses to reviewers.docx

Decision Letter 1

Joanna Tyrowicz

8 Jan 2020

PONE-D-19-24674R1

The Effect of Anchors and Social Information on Behaviour

PLOS ONE

Dear Dr. O'Garra,

Thank you for submitting your revision to PLOS ONE. As you will see from the referee reports, the paper has been substantially improved in their opinion, and I share this judgment. However, there are still some issues, related mostly to the presentation and clarity of your writing. Please take this last effort to perfect your paper (hence: minor revision). Please mark clearly your improvements relative to the current version of the text upon submission. If possible, please have the text proof-read by a professional writer, to facilitate the flow and make your article better received by the audience.

We would appreciate receiving your revised manuscript by Feb 22 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Joanna Tyrowicz

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: To me, the paper has been substantially improved upon this revision. I have only minor comments/questions. Other than that, I think the paper is good for publication in the journal.

The authors could make sure that details regarding their study, in particular in the description of the experiment, are provided sequentially, in an order that is understandable to a reader unfamiliar with the study. I mean that a reader should learn about the study paragraph-by-paragraph and not need to wonder what some information means before reaching further parts of the paper. Examples:

(i) p. 7, “… hence drop-outs were not observed by SMs …” – this would not be clear to me as a reader without learning about the experiment structure in the next sections of the experiment description. Hence, maybe it would be better if the subsection “Participants” were placed at the end of the section describing the methods.

(ii) p. 14, the paragraph below the equations discusses the analysis focused only on the self-interested strategy. At that stage, it is not clear why this is done, and it sounds confusing.

The paragraph on p. 11 describes potential explanations. I know that this is a side discussion for this paper; nevertheless, the paragraph sounds a little misleading to me. It suggests that there are only three possible explanations, although I think there could be many more, and these other explanations are provided in the literature too – for example, learning, fatigue or willingness to behave consistently. I do not mean to largely extend this side discussion in the paper, but just to be careful not to suggest fewer explanations than the literature actually provides.

Getting to know the screen as displayed to second-movers (Figure 1), it seems to me that the paper could include a short note on advanced disclosure. There was a reason why the authors decided to present all possible first-movers’ amounts to second-movers upfront, and this could be explained, especially as in some literature advanced disclosure has been suggested to mitigate order/anchoring effects – e.g., Day, B., Bateman, I. J., Carson, R. T., Dupont, D., Louviere, J. J., Morimoto, S., Scarpa, R., and Wang, P. (2012). Ordering effects and choice set awareness in repeat-response stated preference studies. Journal of Environmental Economics and Management, 63(1), 73-91.

The discussion on p. 26 talks about the analysis of expectations. I do not know whether it is my mistake, but I have missed information in the paper on how the data on expectations were collected (e.g., what type of questions and in which part of the experiment). As this is used for the discussion in the paper, I think it would be good to inform the reader about these few details.

Given the equation on p. 14, I think the text in Table 3 could be adjusted. Specifically, the text informs about “beta”, while the equation includes two betas.

In Table 4, the word “selfish” could probably be changed for “self-interested” for consistency.

The text refers to section numbers, although the numbers are not present in the paper.

There are some punctuation and other mistakes that need corrections. Examples:

- p. 10, “(other examples include [40].” – missing end bracket

- p. 11, “[41,42]and” – missing space

- p. 13, “a Kruskal-Wallis (KW) test, which a rank-based nonparametric test” – missing “is”

- p. 17, “transfers. in the models” – a large letter needed

- p. 24, “(with $0,50 as the reference)” – a dot instead of a comma

- inconsistent use of the word “dropouts” either with a dash or without it

Reviewer #2: This version has been highly improved; the richness of the data is presented clearly. I have only several comments.

1. On flow of writing: It might be helpful if the authors mentioned clearly at the beginning that this study (particularly for RQ 2) is exploratory, and therefore readers could expect many interesting findings from this study following their hypotheses. As for RQ 1, I suppose that the authors actually proposed an explicit hypothesis, but it was written a bit far behind (p. 11, par 1), with some additional hypotheses about the anchor effect on gender and age in the previous part (p. 10, par 1). I wonder if the “Identifying Anchor Effect” part (p. 10) can be moved up to right after the Introduction. This would ensure that the link between the literature, research questions and hypotheses proposed by the authors is kept close and therefore easily understood by the reader.

2. I understand that the authors have no intention of going further to explain the cognitive mechanism behind redistributive behaviour in this setting. But, since it is also written at the beginning that the focus of this study is redistributive behaviour rather than cooperative behaviour (p. 3, par 2), it may also be important to relate this argument to the argument used in the Results and Discussion part, that is: the influence of social information on redistribution and behavioural strategy.

3. On reporting data and discussion. The authors may consider adding an explanation of the results based on the effect sizes. The effect sizes will be important when discussing the results and suggesting further research, regardless of the p-values. I really appreciate the qualitative data on types of behavioural strategy; I agree these are very important findings. Just one little question regarding the open-ended data processing (whether you used interrater reliability or not, etc.). This perhaps could be added, just to be sure.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Apr 14;15(4):e0231203. doi: 10.1371/journal.pone.0231203.r004

Author response to Decision Letter 1


5 Feb 2020

(Note: these responses have also been uploaded as a Word document)

Reviewer #1: To me, the paper has been substantially improved upon this revision. I have only minor comments/questions. Other than that, I think the paper is good for publication in the journal.

The authors could make sure that details regarding their study, in particular in the description of the experiment, are provided sequentially, in an order that is understandable to a reader unfamiliar with the study. I mean that a reader should learn about the study paragraph-by-paragraph and not need to wonder what some information means before reaching further parts of the paper. Examples:

(i) p. 7, “… hence drop-outs were not observed by SMs …” – this would not be clear to me as a reader without learning about the experiment structure in the next sections of the experiment description. Hence, maybe it would be better if the subsection “Participants” were placed at the end of the section describing the methods.

As suggested, the ’Participants’ section has been moved to the end of the methods section. Given this change of location, some of the text in this section has been adjusted or moved within the section to ensure that the text flows correctly, hence improving clarity of reading. In addition, the introductory sentences of the ‘Experimental Design’ section have also been adjusted due to the change in location, to improve readability and flow.

(ii) p. 14, the paragraph below the equations discusses the analysis focused only on the self-interested strategy. At that stage, it is not clear why this is done, and it sounds confusing.

To make this clearer, we have adjusted the text on page 14 to read:

“To explore whether the adoption of different strategies is affected by the initial information or ‘anchor’, we conduct a multinomial logistic regression on the different player ‘types’, as well as a binary logistic regression specifically aimed at addressing whether anchors affect the adoption of a ‘self-interested’ strategy. Our motivation for focusing on the ‘self-interested’ type is based on our finding that this particular behavioural strategy appears to be most susceptible to anchors.”

We have also removed the sentence saying that the multinomial regression has been placed in the Online Appendix, and instead make this point – and the justification for doing so – in the Results section. This makes for more clarity in reading the Analysis Procedures section.
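As a rough illustration of the kind of binary model described in this response, a logistic regression of a 'self-interested' indicator on the anchor amount can be sketched as follows. The data below are fabricated toy values, and this simple gradient-descent fit is not the authors' estimation code (their models are specified in the Analysis Procedure section of the paper).

```python
import math

def fit_logistic(xs, ys, lr=0.1, iters=5000):
    """Logistic regression P(y=1) = 1/(1+exp(-(b0 + b1*x))),
    fitted by gradient descent on the mean log-loss."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y          # gradient component for the intercept
            g1 += (p - y) * x    # gradient component for the slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# hypothetical data: 1 = adopted a self-interested strategy,
# observed under a low ($0.50) vs high ($5.50) anchor
anchors = [0.5, 0.5, 0.5, 5.5, 5.5, 5.5]
selfish = [1, 1, 0, 0, 0, 1]
b0, b1 = fit_logistic(anchors, selfish)
# with these toy data, low anchors are associated with the self-interested
# strategy, so the fitted slope on the anchor is negative
p_low = 1.0 / (1.0 + math.exp(-(b0 + b1 * 0.5)))
p_high = 1.0 / (1.0 + math.exp(-(b0 + b1 * 5.5)))
```

A negative slope here mirrors, in miniature, the paper's finding that low anchors raise the likelihood of adopting an unconditional self-interested strategy.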

The paragraph on p. 11 describes potential explanations. I know that this is a side discussion for this paper; nevertheless, the paragraph sounds a little misleading to me. It suggests that there are only three possible explanations, although I think there could be many more, and these other explanations are provided in the literature too – for example, learning, fatigue or willingness to behave consistently. I do not mean to largely extend this side discussion in the paper, but just to be careful not to suggest fewer explanations than the literature actually provides.

The proposed psychological mechanisms presented on pages 6-7 are the main ones used to explain anchoring in the literature (reviewed in Cho, 2011), which is why we refer to these. However, we have acknowledged the possibility of other explanations as follows on page 7:

“However, we do not propose to identify whether these (or indeed, other explanations) explain our findings.”

Getting to know the screen as displayed to second-movers (Figure 1), it seems to me that the paper could include a short note on advanced disclosure. There was a reason why the authors decided to present all possible first-movers’ amounts to second-movers upfront, and this could be explained, especially as in some literature advanced disclosure has been suggested to mitigate order/anchoring effects – e.g., Day, B., Bateman, I. J., Carson, R. T., Dupont, D., Louviere, J. J., Morimoto, S., Scarpa, R., and Wang, P. (2012). Ordering effects and choice set awareness in repeat-response stated preference studies. Journal of Environmental Economics and Management, 63(1), 73-91.

We have added the following paragraph to the Experimental Design section (page 11) clarifying our use of “advanced disclosure” of FM transfers to SMs:

“As a side note, we mention that the strategy method is usually used non-sequentially, i.e. subjects view all possible choices by another subject/other subjects and provide their conditional choices simultaneously. Thus, in the standard approach, subjects make their choices under a scenario of “advanced disclosure”. Given our interest in identifying whether subjects would anchor their decisions to the first amount they were presented with, we used a sequential approach. However, to keep our design as close as possible to the standard approach, we opted for advanced disclosure of the FM’s choices. Only when choices were to be made was this done sequentially.”

The discussion on p. 26 talks about the analysis of expectations. I do not know whether it is my mistake but I have missed information in the paper how the data on expectations was collected (e.g., what type of questions and in which part of the experiment). As this is used for the discussion in the paper, I think it would be good to inform a reader about these few details.

This has been clarified in the Experimental Design section (page 11) as follows:

“Participants then indicated how much they expected other SMs in their group to contribute on average.”

Given the equation on p. 14, I think the text in Table 3 could be adjusted. Specifically, the text informs about “beta”, while the equation includes two betas.

Thank you for noting this. Table 3 has been adjusted to reflect the two different betas used.

In Table 4, the word “selfish” could probably be changed for “self-interested” for consistency.

This has been fixed.

The text refers to section numbers, although the numbers are not present in the paper.

These have been removed, and/or section titles are referred to instead of section numbers.

There are some punctuation and other mistakes that need corrections. Examples:

- p. 10, “(other examples include [40].” – missing end bracket. Fixed.

- p. 11, “[41,42]and” – missing space. Fixed.

- p. 13, “a Kruskal-Wallis (KW) test, which a rank-based nonparametric test” – missing “is”. Fixed.

- p. 17, “transfers. in the models” – a large letter needed. Fixed.

- p. 24, “(with $0,50 as the reference)” – a dot instead of a comma. Fixed.

- inconsistent use of the word “dropouts” either with a dash or without it. Thanks for noting this – we have chosen to go with “dropout” and have corrected other spellings.

Reviewer #2: This version has been highly improved; the richness of the data is presented clearly. I have only several comments.

1. On flow of writing: It might be helpful if the authors mentioned clearly at the beginning that this study (particularly for RQ 2) is exploratory, and therefore readers could expect many interesting findings from this study following their hypotheses. As for RQ 1, I suppose that the authors actually proposed an explicit hypothesis, but it was written a bit far behind (p. 11, par 1), with some additional hypotheses about the anchor effect on gender and age in the previous part (p. 10, par 1). I wonder if the “Identifying Anchor Effect” part (p. 10) can be moved up to right after the Introduction. This would ensure that the link between the literature, research questions and hypotheses proposed by the authors is kept close and therefore easily understood by the reader.

On page 4 (end of para 1) we have added the following text clarifying that this is exploratory research:

“This is exploratory research, and as such, we have no expectations about the size or direction of anchoring effects on the distribution of ‘types’ in the population under study. Our aim is mainly to identify whether the choice of behavioural strategy is affected by normatively irrelevant contextual factors, such as anchors.”

We also moved the section ‘Identifying Anchoring Effects’ so that it is now right after the Introduction, as suggested. Given this change of location, some of the text in this section has been adjusted or moved within the section to ensure flow and readability. Table 1 and the accompanying text (which were in the ‘Identifying Anchoring Effects’ section) have now been moved to the ‘Experimental Design’ section, with some minor adjustments to the accompanying text. This has been done to maintain readability, given the change of location of various sections in the paper.

2. I understand that the authors have no intention of going further to explain the cognitive mechanism behind redistributive behaviour in this setting. But, since it is also written at the beginning that the focus of this study is redistributive behaviour rather than cooperative behaviour (p. 3, par 2), it may also be important to relate this argument to the argument used in the Results and Discussion part, that is: the influence of social information on redistribution and behavioural strategy.

We have acknowledged that anchors may affect other pro-social behaviours, by adding a sentence in the Discussion and Conclusions (page 29) proposing that future research might address how anchors affect other pro-social behaviours, such as cooperation.

3. On reporting data and discussion. The authors may consider adding an explanation of the results based on the effect sizes. The effect sizes will be important when discussing the results and suggesting further research, regardless of the p-values. I really appreciate the qualitative data on types of behavioural strategy; I agree these are very important findings. Just one little question regarding the open-ended data processing (whether you used interrater reliability or not, etc.). This perhaps could be added, just to be sure.

Effect sizes with respect to results in Table 2 have now been reported on page 21, and effect sizes are now mentioned in the Discussion and Conclusions section (page 28), in addition to the statistical significance, as suggested.

As for the open-ended data, it is explained in the Online Appendix S8 that the data were coded by three people, with final codes agreed on through discussion. We have also added a measure of interrater reliability (Gwet’s AC, value of 0.56) to the Online Appendix.
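For the curious reader, chance-corrected agreement in the style of Gwet's AC1 can be computed roughly as below. This sketch handles only two raters for brevity (the coding reported above involved three), and the exact variant and weighting the authors used are whatever is stated in Online Appendix S8, not necessarily this formula.

```python
from collections import Counter

def gwet_ac1(r1, r2):
    """Gwet's AC1 for two raters with categorical codes (Gwet, 2008).

    AC1 = (pa - pe) / (1 - pe), where pa is observed agreement and
    pe = (1/(q-1)) * sum_k pi_k * (1 - pi_k), with pi_k the average
    proportion of ratings falling in category k across both raters.
    """
    assert len(r1) == len(r2)
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    q = len(cats)
    pa = sum(a == b for a, b in zip(r1, r2)) / n
    counts = Counter(r1) + Counter(r2)  # pooled category counts over 2n ratings
    pe = sum((counts[c] / (2 * n)) * (1 - counts[c] / (2 * n)) for c in cats) / (q - 1)
    return (pa - pe) / (1 - pe)

# toy codings of four open-ended responses by two hypothetical raters
ac1 = gwet_ac1(["a", "a", "b", "b"], ["a", "b", "b", "b"])
```

Unlike Cohen's kappa, AC1 remains stable when one category dominates the codings, which is one reason it is sometimes preferred for this kind of qualitative data.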

Attachment

Submitted filename: Response to reviewers R2.docx

Decision Letter 2

Joanna Tyrowicz

11 Feb 2020

PONE-D-19-24674R2

The Effect of Anchors and Social Information on Behaviour

PLOS ONE

Dear Dr. O'Garra,

Thank you for submitting the revised version of your study. I think both referees would be fully satisfied. I read your paper carefully, and there are two issues which I find a bit problematic. Feel free to answer me directly by emailing j.tyrowicz@uw.edu.pl if you find these questions simply wrong. However, if I am right, I would expect you to fix these minor issues, upon which your paper can be accepted at PLOS ONE.

Issue #1. When I look at Table 2, it is clear to me what the reference level is for your dummies in Column 1, but in Column 3 you interact your IA dummies with the amount. In principle this can be informative, but I am confused about the base levels in this case. As you have 0 for IA!=1 and a given amount, it seems to me that the base level in Column 3 is both 0 for each level of IA and, on top of that, IA=$0.50. I find that confusing, and given the interaction term, you can use all the levels in Column 3.

Issue #2. Table 2 reports a mixed effects regression, but nowhere in your paper do I find the assumption concerning the standard error, i.e. I think you should cluster standard errors at SM level, because presumably all the responses of a given SM are driven by the same decision rule (i.e. they are not independent). If Table 2 reports clustered standard errors, I think it should be made salient. If they are not clustered, I fear that you may need to re-estimate the model and report the new standard errors in your Table 2.

These are minor issues. If re-estimating your model necessitates other changes in the text, please do so. As further minor points, I bring the following to your attention:

* I think it is typical to use the present simple tense when reporting findings from the literature and your own (rather than the present continuous or other continuous forms).

* Perhaps your paper is a bit long, both in words and in illustrations. I leave the text at your discretion. As to the illustrations, personally I see little value in Figures 1 and 2; they could very well be reported in one table (if need be), and it may confuse some readers not to see the CI whiskers (even despite your notes).

Please note that the above comments have no influence on whether or not your paper is accepted for publication in PLOS ONE. The decision is positive and will not change if your results change (e.g. because of clustering). These are my requests made in the interest of presenting the highest quality research to our audience. Note also that if I am mistaken in the two issues numbered above, I am happy to hear your comments. Maybe they help us clarify the text so that other readers would not be confused.

We would appreciate receiving your revised manuscript by Mar 27 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A letter that responds to my points raised above. This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, that you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Joanna Tyrowicz

Academic Editor

PLOS ONE

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Apr 14;15(4):e0231203. doi: 10.1371/journal.pone.0231203.r006

Author response to Decision Letter 2


25 Feb 2020

Response to suggestions by editor

Issue #1. When I look at Table 2, it is clear to me what the reference level for your dummies in Column 1 is, but in Column 3 you interact your IA dummies with the amount. In principle this can be informative, but I am confused about the base levels in this case. As you have 0 for IA!=1 and a given amount, it seems to me that the base level in Column 3 is both 0 for each level of IA and, on top of that, IA = $0.50. I find that confusing, and given the interaction term, you can use all the levels in Column 3.

Response: Following our email conversation, I have added a sentence to the table clarifying that the reference level for the IA is $0.50 (this text is also included in the main text).

Issue #2. Table 2 reports a mixed effects regression, but nowhere in your paper do I find the assumption concerning the standard error, i.e. I think you should cluster standard errors at SM level, because presumably all the responses of a given SM are driven by the same decision rule (i.e. they are not independent). If Table 2 reports clustered standard errors, I think it should be made salient. If they are not clustered, I fear that you may need to re-estimate the model and report the new standard errors in your Table 2. These are minor issues. If re-estimating your model necessitates other changes in text, please do so.

Response: As suggested, I have re-run the models with standard errors clustered at the individual level (using vce(cluster id)). This has resulted in some changes to the regression results, which have now been updated in the manuscript, and I have rewritten some of the text to account for these changes. For consistency, I have also re-run the extra models reported in Online Appendix S5 with standard errors clustered at the individual (SM) level.
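As an illustrative aside (not the authors' Stata code): the cluster-robust "sandwich" variance that vce(cluster id) computes can be sketched for a simple one-regressor OLS as follows. The function name and toy data are hypothetical, and the small-sample degrees-of-freedom corrections that Stata applies are omitted.

```python
from collections import defaultdict

def ols_cluster_se(x, y, cluster):
    """Slope of y on x with a cluster-robust (sandwich) standard error.

    Mirrors the logic of clustered standard errors for one-regressor OLS,
    omitting finite-sample corrections.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    alpha = my - beta * mx
    resid = [yi - alpha - beta * xi for xi, yi in zip(x, y)]
    # "Meat" of the sandwich: score contributions are summed within clusters
    # before squaring, so residuals within a cluster may be correlated.
    scores = defaultdict(float)
    for xi, ui, g in zip(x, resid, cluster):
        scores[g] += (xi - mx) * ui
    meat = sum(s ** 2 for s in scores.values())
    return beta, (meat / sxx ** 2) ** 0.5

# Toy example: four observations in two clusters ("A" and "B").
beta, se = ols_cluster_se([0, 1, 2, 3], [1, 2, 2, 5], ["A", "A", "B", "B"])
```

The key difference from the ordinary robust (heteroskedasticity-only) estimator is the within-cluster summation: treating each second mover's repeated responses as one cluster allows them to share a common decision rule, as the editor's comment presumes.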

As further minor points, I bring the following to your attention:

* I think it is typical to use the present simple tense when reporting findings from the literature and your own (rather than the present continuous or other continuous forms).

Thanks for noting this; it has been corrected.

* Perhaps your paper is a bit long, both in words and in illustrations. I leave the text at your discretion. As to the illustrations, personally I see little value in Figures 1 and 2; they could very well be reported in one table (if need be), and it may confuse some readers not to see the CI whiskers (even despite your notes).

Response: (Note: we clarified by email that you were actually referring to Figs 3 and 4.) To shorten the paper, I have moved Figure 3 to the Online Appendix S2, together with the distributions of SM responses to FM transfers. I have not included standard error bars, as this is not correct with repeated-measures data (as clarified in Estes, 1997), and box plots are not suitable for continuous two-way data. However, by placing the line graph together with the histograms showing the distributions of responses to each FM transfer, it should be clear to the reader that these averages come from distributions of responses. To further emphasise the link between the distributions and the line graph, I have added the following sentence to the Online Appendix:

“In the following figure we summarise the above distributions in the form of a line graph depicting mean SM transfers in response to each FM transfer.”

Attachment

Submitted filename: Response to Editor regarding R3.docx

Decision Letter 3

Joanna Tyrowicz

19 Mar 2020

The Effect of Anchors and Social Information on Behaviour

PONE-D-19-24674R3

Dear Dr. O'Garra,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Joanna Tyrowicz

Academic Editor

PLOS ONE

Acceptance letter

Joanna Tyrowicz

25 Mar 2020

PONE-D-19-24674R3

The Effect of Anchors and Social Information on Behaviour

Dear Dr. O'Garra:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Joanna Tyrowicz

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. Testing for order effects.

    (DOCX)

    S2 Appendix. SM responses to FM Contributions.

    (DOCX)

    S3 Appendix. Comparing mean SM transfers in response to different FM transfers.

    (DOCX)

    S4 Appendix. SM responses to the initial amount (the anchor).

    (DOCX)

    S5 Appendix. Regressions on second-mover transfers, using only SMs in groups without SM dropouts (n = 279 SMs).

    The dependent variable is cents transferred per second mover to the recipients.

    (DOCX)

    S6 Appendix. SM responses to FM contributions disaggregated by IA (dichotomous).

    (DOCX)

    S7 Appendix. Multinomial logit model of determinants of SM Type.

Individual dummies for each anchor (reference category: self-interested).

    (DOCX)

    S8 Appendix. Analysing open-ended explanations for transfer decision.

    (DOCX)

    S1 Data. Experimental instructions.

    (DOCX)

    Attachment

    Submitted filename: Responses to reviewers.docx

    Attachment

    Submitted filename: Response to reviewers R2.docx

    Attachment

    Submitted filename: Response to Editor regarding R3.docx

    Data Availability Statement

All relevant data are within the paper and its Supporting Information files.

