PLOS ONE. 2020 Oct 28;15(10):e0240937. doi: 10.1371/journal.pone.0240937

Attraction to similar options: The Gestalt law of proximity is related to the attraction effect

Liz Izakson 1,2,*, Yoav Zeevi 1,3, Dino J Levy 1,2
Editor: Tyler Davis
PMCID: PMC7592845  PMID: 33112897

Abstract

Previous studies have suggested that there are common mechanisms between perceptual and value-based processes. For instance, both perceptual and value-based choices are highly influenced by the context in which the choices are made. However, the mechanisms which allow context to influence our choice process as well as the extent of the similarity between the perceptual and preferential processes are still unclear. In this study, we examine a within-subject relation between the attraction effect, which is a well-known effect of context on preferential choice, and the Gestalt law of proximity. Then, we aim to use this link to better understand the mechanisms underlying the attraction effect. We conducted one study followed by an additional pre-registered replication study, where subjects performed a Gestalt-psychophysical task and a decoy task. Comparing the behavioral sensitivity of each subject in both tasks, we found that the more susceptible a subject is to the proximity law, the more she displayed the attraction effect. These results demonstrate a within-subject relation between a perceptual phenomenon (proximity law) and a value-based bias (attraction effect) which further strengthens the notion of common rules between perceptual and value-based processing. Moreover, this suggests that the mechanism underlying the attraction effect is related to grouping by proximity with attention as a mediator.

Introduction

All of our decisions, from simple ones like the size of the popcorn we choose to buy in the cinema to more complicated ones like choosing our life partner, are influenced by other available alternatives (as well as unavailable ones) in the environment. Other available or unavailable alternatives in the current environment of the choice set are considered spatial context. A well-known example of the effect of spatial context is the attraction effect [1,2]. Suppose you are choosing between a small-sized popcorn that is relatively cheap and costs only $3 (competitor) and a large-sized one which costs $6.5 (target). In this scenario, no option has a clear advantage over the other. The small-sized option is better on one attribute (price), while the large-sized option is better on the other attribute (size). Now imagine a third option of popcorn that is medium-sized and costs $7 (decoy). Under these circumstances, the decoy is asymmetrically dominated, since it is inferior to the target option in both attributes (size and price), but inferior to the competitor in only one attribute (price). Numerous experiments have shown that the presence of such a decoy in the choice set shifts preferences toward the target option [1,3–5].
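
To make the structure of the example concrete, here is a minimal sketch (ours, not the authors') that checks asymmetric dominance for the popcorn options; the numeric size scores are an assumption for illustration.

```python
# A minimal sketch of the asymmetric-dominance check. Each option is encoded
# as a price (lower is better) and a size score (larger is better).

def dominates(a, b):
    """True if option `a` is at least as good as `b` on both attributes
    and strictly better on at least one."""
    at_least_as_good = a["price"] <= b["price"] and a["size"] >= b["size"]
    strictly_better = a["price"] < b["price"] or a["size"] > b["size"]
    return at_least_as_good and strictly_better

competitor = {"price": 3.0, "size": 1}  # small popcorn
target     = {"price": 6.5, "size": 3}  # large popcorn
decoy      = {"price": 7.0, "size": 2}  # medium popcorn

# The decoy is asymmetrically dominated: the target dominates it,
# while the competitor does not.
assert dominates(target, decoy)
assert not dominates(competitor, decoy)
```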

The attraction effect, as well as other decoy effects such as the similarity and compromise effects [2,6], violates fundamental axioms of normative theories of choice. These include the independence of irrelevant alternatives (IIA) axiom (the introduction of an irrelevant option to a choice set should not change the preference between existing options [7]) and the regularity principle [6].

These various decoy effects, together with other context-dependent phenomena such as framing effects (e.g., asymmetries in the valuation of gains and losses) [8,9], demonstrate how strongly the specific context experienced by the decision maker during choice shapes the valuation process. Although context is an integral part of our decision-making process, the mechanisms underlying these phenomena are still unclear. Understanding the mechanisms which allow context to influence our valuation and choice processes can shed light on human choice mechanisms in general, and on the generation of values in a complex environment in particular.

Several explanations have been proposed to account for the change in preference induced by different context effects. Specifically, many of the suggested computational models use the notion that people accumulate evidence for alternatives over time, and make a choice when the evidence reaches a decision criterion [10–14]. According to one of these sequential sampling models, the multi-alternative decision by sampling (MDbS) model, the accumulation of evidence proceeds by pairwise comparisons on a single attribute [14]. Importantly, the more similar the attributes of different options are, the more time the observer spends comparing them. In our example, the decoy (a medium-sized popcorn which costs $7) and the target (a large-sized popcorn which costs $6.5) are the most similar pair on both attributes (size and cost) among the three available pairs. Therefore, according to the MDbS model, the duration of comparison between them would be longer. This prediction has been supported empirically using eye movements [15]. Moreover, the higher the probability of comparing a specific pair of options, the higher the probability of choosing the better option of that pair [14]. In our example, a comparison between the decoy and the target would reveal that the target is the better option on both attributes (size and cost), so the probability of choosing it would be higher. Therefore, an important part of the mechanism underlying the attraction effect, according to the MDbS model, is the perception of the differences and similarities between two options in the attribute space. Other sequential sampling models also ascribe a crucial role to the distance between options in the mechanism which leads to the attraction effect. For example, according to the Multi-alternative Decision Field Theory (MDFT), lateral inhibition is increased when the options are closer to each other in the attribute space [16], and according to the Multi-Attribute Linear Ballistic Accumulator (MLBA) model, options that are more difficult to discriminate (i.e., more similar options) receive more attention, thus increasing the probability that the more dominant option is chosen, which leads to the attraction effect [17]. Note that the explanation for why people direct more attention to similar options is still unclear. In the current study, as our attributes are monetary amount and winning probability, we refer to the distance between options in the attribute space as value distance (VD).
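
The following toy illustration (our simplification, not the published MDbS specification) captures the single idea used above: comparison probability rises with pair similarity, so the target-decoy pair, being closest in attribute space, is compared most often. The option values echo the lottery stimuli described in the Methods; the normalization constants are arbitrary assumptions.

```python
# Toy illustration: more similar option pairs are compared more often.
import itertools
import numpy as np

options = {  # (winning probability, amount in NIS); values echo the Methods
    "target":     (0.21, 59),
    "competitor": (0.45, 27),
    "decoy":      (0.21, 50),
}

def similarity(a, b):
    # Inverse distance in a crudely normalized attribute space (assumption).
    (pa, ma), (pb, mb) = options[a], options[b]
    distance = abs(pa - pb) / 0.5 + abs(ma - mb) / 60.0
    return 1.0 / (distance + 1e-6)

pairs = list(itertools.combinations(options, 2))
weights = np.array([similarity(a, b) for a, b in pairs])
for (a, b), p in zip(pairs, weights / weights.sum()):
    print(f"P(compare {a} vs {b}) = {p:.2f}")  # target-decoy pair dominates
```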

This raises an intriguing possibility. Is the perception of value distance similar or analogous to the way we perceive and are affected by actual physical distance? There are many similarities and analogies between sensory and value processing. First, computational models in both sensory perception and value-based choice transform information from objective magnitudes to a subjective scale in order to explain subjects' performance. In perception this is known as the Weber-Fechner function [18], which states that the increase in the perceived magnitude of a certain stimulus declines as the stimulus intensity increases, and in value-based choice it is known as the Bernoulli function, or the utility function [19], which describes the notion that the marginal utility of a certain object declines as the total amount of this object increases. Second, value modulation was observed in both visual cortex [20] and auditory cortex [21] in a modality-specific way [22]. Moreover, recent studies have shown that visual selective attention is also driven by the learned value of a stimulus [23,24]. Third, in recent decision models, violations of normative axioms such as IIA are explained using the analogy of perceptual biases [25–28]. According to this view, similar to perceptual illusions, both spatial and temporal, choice biases (or cognitive illusions) are the result of inherent cognitive and neurobiological limitations in both the capacity and the efficiency of information processing. For example, a recent study by Khaw and colleagues [29] demonstrated that value is influenced by adaptation in a similar way as perception: subjects' valuations were lower after high-value adaptations and higher after low-value adaptations, demonstrating a repulsive effect of recent values. By the same token, several studies have shown that the attraction effect as well as other decoy effects (e.g., similarity, compromise) emerge in simple perceptual decision-making tasks [30,31]. Therefore, we propose that we can use the vast knowledge gained regarding sensory processing in order to better understand the mechanisms underlying choice biases such as the attraction effect.
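
In one standard textbook formulation (our notation, not taken from the paper), both mappings are logarithmic, which makes the shared principle of diminishing marginal increments explicit:

```latex
% Weber-Fechner: perceived intensity P of a stimulus of magnitude S
P(S) = k \ln\!\left(\frac{S}{S_0}\right), \qquad \frac{dP}{dS} = \frac{k}{S}

% Bernoulli: logarithmic utility u of an amount w
u(w) = \ln(w), \qquad \frac{du}{dw} = \frac{1}{w}
```

In both cases the derivative decreases as the argument grows, so equal objective increments contribute less and less to the subjective scale.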

In the current study, we examined a potential link between the attraction effect and the Gestalt law of proximity, and then used this link in order to better understand the mechanisms underlying the attraction effect. The proximity law suggests that we tend to combine elements that are close to each other and treat them as one group [32]. We chose this law specifically because it refers to the physical distance between objects, and, as mentioned above, an integral feature of the attraction effect is the position of the decoy relative to the target, which defines the distance between them [1]; this is what we refer to as the value distance.

Because there are many similarities and analogies between sensory processing and value processing, we hypothesized that the value distance between a decoy and a target option would be conceptually analogous to the physical distance between objects as formulated by the Gestalt law of proximity. According to Koffka [32], the bigger the physical distance between objects, the smaller the chance of perceiving these objects as grouped by proximity. Thus, we hypothesized that the bigger the value distance between the target and the decoy, the less the subject would be affected by the attraction effect [33].

Furthermore, we hypothesized that the greater a subject's sensitivity to grouping by proximity, the greater her susceptibility to the attraction effect would be (i.e., the higher her probability of choosing the target), because it would be easier for her to perceive the similarity between the decoy and the target, since she would more readily group them together.

To address these questions, we performed two independent experiments; the replication experiment was pre-registered on the Open Science Framework (https://osf.io/jzk6y/) based on the results of the first experiment. In both experiments, all subjects performed a Gestalt-psychophysical task and a decoy task. Thereafter, we compared, for each subject, her behavioral sensitivity in both tasks. As we present below, we found that the more sensitive a given subject is to the proximity law, the more she displayed the attraction effect. Therefore, we illustrate how the proximity law might account for the attraction effect with attention as a mediator.

General methods

Data sharing

Our pre-registration forms as well as all of our data and code are shared on the Open Science Framework (https://osf.io/jzk6y/).

Subjects

A total of 154 healthy subjects took part in this study, of whom 119 were included in the final analyses; each participated in one of two identical experiments (Experiment 1, n = 52; Replication, n = 102; see Table 1 for a demographic description of each experimental sample). We pre-registered the replication experiment according to the results of Experiment 1 (https://osf.io/jzk6y/). The replication experiment was identical to Experiment 1 and aimed to validate its results. We performed a power analysis with the data obtained from Experiment 1, which yielded a minimal n = 74 for detecting the Gestalt threshold effect on choice with 80% power and alpha = .05 (power was calculated using a Wald test, examining the significance of the Gestalt threshold variable in a mixed-effects logistic regression). However, based on Experiment 1, we assumed that some of the subjects (~20%) would be excluded from the study based on our strict exclusion criteria (available online at https://osf.io/jzk6y/). Therefore, we chose to pre-register a larger sample size of n = 100 for the replication experiment.
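
The paper reports only the test used (a Wald test on the Gestalt-threshold coefficient), not the power-analysis procedure itself. A minimal simulation-based sketch of such an analysis could look as follows; the effect sizes, the subject-level distributions, and the use of a logistic model with subject-clustered errors as a stand-in for the mixed-effects fit are all our assumptions.

```python
# Simulation-based power sketch (all effect sizes are hypothetical).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def simulated_power(n_subjects, n_trials=256, beta=-0.04, alpha=0.05, n_sims=200):
    hits = 0
    for _ in range(n_sims):
        thr = rng.normal(7.0, 1.9, n_subjects)    # per-subject Gestalt thresholds (px)
        b0 = rng.normal(0.45, 0.30, n_subjects)   # per-subject intercepts
        X, y, g = [], [], []
        for s in range(n_subjects):
            p = 1.0 / (1.0 + np.exp(-(b0[s] + beta * thr[s])))
            y.append(rng.binomial(1, p, n_trials))   # 1 = chose target
            X.append(np.column_stack([np.ones(n_trials), np.full(n_trials, thr[s])]))
            g.append(np.full(n_trials, s))
        fit = sm.Logit(np.concatenate(y), np.vstack(X)).fit(
            disp=0, cov_type="cluster", cov_kwds={"groups": np.concatenate(g)})
        hits += fit.pvalues[1] < alpha            # Wald test on the threshold term
    return hits / n_sims

# simulated_power(74) estimates the power for n = 74 under these assumptions.
```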

Table 1. Demographic information.

                Sample size (excluded)   Females (percent)   Age M (SD)
Experiment 1           52 (14)               29 (56%)        26.53 (4.12)
Replication           102 (21)               58 (57%)        25.63 (5.39)

In both experiments, all subjects performed three tasks: first, a Calibration task for the forthcoming Decoy task; second, a Gestalt-psychophysical task; and third, a Decoy task. The experiment was conducted in the laboratory on monitors with screen resolutions of 1,920 × 1,080 pixels. All subjects received a participation fee and were also paid according to their winnings in the experiment. They all signed a written informed consent form that was approved by the ethics committee at Tel Aviv University (ethics approval number 0000080–1).

Stimuli & procedure

Calibration task

In the calibration task, for each subject we estimated two indifference points that served as the basis for generating the stimuli used in the Decoy task. Subjects made repeated choices between a choice option with a 61% chance to win 22 NIS (and a 39% chance to win zero), option A, and an option with a p chance to win 42 NIS (and 1-p to win zero), option B. The numbers were randomly jittered across trials by ±1 or ±2 to prevent subjects from memorizing their choices. We systematically varied the value of p from 19% to 52%, resulting in 11 unique trials. Each trial was repeated 6 times. We calculated the indifference point (or point of subjective equivalence) for each subject based on a logistic regression model. That is, for each subject, we identified the expected-value difference between the two options that corresponds to the 50% point on the y-axis (-constant/slope) according to the logistic model. We repeated this procedure using different amounts and probabilities in order to estimate a second indifference point for each subject. Subjects made repeated choices between an option which offered a 45% chance to win 27 NIS, option A, and another option with a varying chance q to win 59 NIS, option B. As in the previous set of amounts and probabilities, the numbers were randomly jittered across trials by ±1 or ±2 to prevent subjects from memorizing their choices. We systematically varied the value of q from 12% to 34%, resulting in 11 unique trials that were repeated 6 times (the full list of trials for both sets of options A and B is available in S1 Appendix).
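
As a concrete illustration of the -constant/slope computation, here is a minimal sketch (with simulated choices; the variable names and the true-curve parameters are ours) of estimating one indifference point:

```python
# Estimate an indifference point from simulated calibration choices.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

ev_diff = np.repeat(np.linspace(-4, 4, 11), 6)  # EV(B) - EV(A): 11 levels x 6 repeats
p_choose_b = 1 / (1 + np.exp(-(0.3 + 0.9 * ev_diff)))  # hypothetical true curve
chose_b = rng.binomial(1, p_choose_b)

fit = sm.Logit(chose_b, sm.add_constant(ev_diff)).fit(disp=0)
constant, slope = fit.params
print(f"indifference point: {-constant / slope:.2f}")  # EV difference at P = 0.5
```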

In all trials of the calibration task, each trial started with the presentation of a fixation cross at the center of the screen for 500 ms. Thereafter, a table of 2 lotteries was presented at the center of the screen. Subjects were requested to choose their preferred lottery by clicking the number of this lottery on the keyboard. There was no time limit to make a choice. After subjects made their choice, the trial ended with a fixation cross for 500 ms. The two different sets of options A and B were presented in a random order in white text on a black background. The total number of trials in the calibration task was 132.

The task was incentive compatible. At the end of the task, one of the 132 choices was randomly chosen, and the option that was chosen on the randomly selected trial was played out according to the amount and probability of that lottery. The subjects were informed about the results of this lottery only at the end of the experiment (after the end of task 3, the Decoy task). If they won the lottery, the allotted amount of money was added to their show-up fee.

Based on the two subject-specific indifference points, we generated the choice options for the main Decoy task. This step is important because decoy effects are most strongly demonstrated when the target and the competitor are equally valuable [34].

Gestalt task

The aim of the task was to measure, for each subject, the threshold for differentiating between two stimuli of 12 dots arranged in a row. The task is based on the task described in [35]. Fig 1A illustrates an example trial. The subject was presented with a fixation cross displayed in the center of the screen for 300–500 ms (randomly jittered). Thereafter, a Constant stimulus of 12 white dots arranged in a row on a black background was displayed in the center of the screen for 1 second, followed by a Mask (a scrambled picture of the Constant stimulus) for 250 ms. The distance between the dots was always constant and equal to 20 pixels. Then, a Variable stimulus of 12 dots arranged in a row was displayed for 1 second, in which the distances after the 3rd, 6th, and 9th dots were equally varied across trials (Fig 1B). The second stimulus was also followed by a Mask for 250 ms. The order in which the Constant and Variable stimuli appeared was randomized across trials. Afterwards, the subject was asked whether the two stimuli (Constant vs. Variable) were identical or different, by clicking '1' or '2' on the keyboard, respectively. There was no time limit for the response phase. In total, subjects observed 29 different Variable stimuli, in which the spacing after each triplet of dots varied from 20.5 pixels to 34.5 pixels in increments of half a pixel. Each Variable stimulus was repeated 6 times (except the trial in which the Constant and Variable stimuli were identical, which was repeated 18 times), for a total of 192 trials.
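
For concreteness, here is a sketch of the stimulus geometry as we read it from the description above (the paper does not provide rendering code; this is our reconstruction):

```python
# Dot x-positions for one row of 12 dots; the gaps after the 3rd, 6th and 9th
# dots are widened by `interval_increase` pixels in a Variable stimulus.
def dot_positions(interval_increase=0.0, base_gap=20.0, n_dots=12):
    xs, x = [0.0], 0.0
    for i in range(1, n_dots):
        x += base_gap + (interval_increase if i in (3, 6, 9) else 0.0)
        xs.append(x)
    return xs

constant_stimulus = dot_positions(0.0)   # all gaps 20 px
variable_stimulus = dot_positions(4.5)   # e.g. 24.5 px after each triplet
```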

Fig 1. Gestalt task.


(A) Single trial timeline. (B) Examples of stimulus patterns selected from the set of 29 variable stimuli used in the experiment and a constant stimulus in the Gestalt task. Each individual row represents a different stimulus that was presented separately on the screen in each trial.

For each subject, we measured the probability of responding "different" as a function of the increase in physical distance between the dots, in order to calculate the threshold for differentiating between the two stimuli. As stated above, this task is based on the psychophysical task presented in [35], where a stronger tendency to detect differences in physical distance, and hence to group by proximity, translates into a lower threshold. A subject who is more susceptible to grouping by proximity will detect the differences between the Constant stimulus and the Variable stimulus at a much smaller distance between the triplets of dots, since it would be easier for her to group the row of 12 dots into 4 triplets.

Decoy task

The aim of the task was to examine, for each subject, the existence of the attraction effect, as well as its strength. We also measured for each subject the influence of the decoy's location (in value distance) on the strength of the attraction effect. The task is based on the task described in [36]. Fig 2A illustrates an example trial. Subjects performed a series of choices between gambles in three different conditions: (I) Basic condition, (II) Decoy condition, and (III) Filler condition (Fig 2B).

Fig 2. Decoy task.


(A) Single trial timeline. (B) The three task conditions (Basic, Decoy, and Filler).

In the Basic condition, subjects made choices between two lotteries (option A and option B). Each lottery had some amount of money to win associated with a winning probability (e.g., 42% to win 30 NIS (and 58% not to win) vs. 30% to win 38 NIS (and 70% not to win)). There were two different sets of options for the Basic condition (both shown in S2 Appendix). The specific amounts and probabilities were randomly jittered across trials by ±1 or ±2 to prevent subjects from memorizing their choices. Importantly, based on the calibration task, the lotteries were individually tailored such that the subject was close to being indifferent between them. Each Basic trial was repeated 8 times for a total of 16 trials.

In the Decoy condition, we added a third gamble to the choice set. The additional option, the decoy, matched either the probability or the amount of one of the gambles appearing in the Basic condition (the target). The remaining dimension of the decoy (either probability or amount) was parametrically varied in 4 steps, being 15%, 30%, 45%, or 60% smaller than that dimension in the target option, resulting in a rank-ordered parameter with 4 levels. For instance, if the target was a lottery of 21% to win 59 NIS, a decoy on the amount attribute which is 15% smaller would be 21% to win 50 NIS (a detailed calculation is provided in S2 Appendix). Additionally, we used two different decoy types: range (which gives the target an advantage in its weaker attribute) and frequency (which gives the target an advantage in its stronger attribute). This terminology was introduced in the original article on the attraction effect [1].
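
A sketch of the decoy construction as we read it from this description (the detailed calculation is in S2 Appendix; the function and its rounding rule are our assumptions):

```python
# Derive a decoy from a target lottery: match one attribute and shrink the
# other by a value-distance step (15%, 30%, 45%, or 60%).
def make_decoy(target_prob, target_amount, vd_step, shrink="amount"):
    if shrink == "amount":
        return target_prob, round(target_amount * (1 - vd_step))
    return round(target_prob * (1 - vd_step), 2), target_amount

# Target: 21% chance to win 59 NIS; decoy 15% smaller on the amount attribute.
print(make_decoy(0.21, 59, 0.15))  # -> (0.21, 50)
```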

Therefore, we had 4 different types of trials in the Decoy condition (range-probability, range-amount, frequency-amount, and frequency-probability) and 4 different value distance (VD) steps, resulting in 16 different decoy options for each set of the Basic condition. Each of the Decoy trials was repeated 8 times, resulting in 16*2*8 = 256 Decoy trials.

In the Filler condition, there were two types of trials: 1) Binary filler (first-order stochastically dominated trials): subjects chose between two options, where one was a non-degenerate lottery (as in the previous conditions), while the other always offered a larger monetary amount with a 100% winning probability. We added this condition to validate the continued engagement of the subjects with the task throughout the experiment. 2) Trinary filler: subjects chose between three randomly generated gambles unrelated to the decoy trials. We added this condition to disguise the aim of the experiment from the subjects. There were 8 different Filler trials that were repeated 4 times, resulting in 32 Filler trials.

In all trials of the Decoy task, each trial started with the presentation of a fixation cross at the center of the screen for 500 ms. Thereafter, a table of either 2 or 3 lotteries (depending on the condition) was presented at the center of the screen. Subjects were requested to choose their preferred lottery by clicking the number of that lottery on the keyboard. There was no time limit to make a choice. After the subjects made their choice, the trial ended with a fixation cross for 500 ms. The different types of trials were presented in a random order in white text on a black background. The total number of trials in the Decoy task was 16 Basic trials + 256 Decoy trials + 32 Filler trials = 304 trials.

The experiment was incentive compatible. At the end of the Decoy task, one of the 304 choices was randomly chosen, and the option that was chosen on the randomly selected trial was played out according to the amount and probability of that lottery. If subjects won the lottery, the allotted amount of money was added to their show-up fee.

In order to examine the existence and strength of the attraction effect across subjects and steps of VD, we first examined whether the probability of choosing the target was significantly higher than chance level (50%). Then, we used the choice in every trial (target or competitor) as our dependent variable in order to examine the effect of different predictors (e.g., VD, Gestalt threshold) on the attraction effect using mixed-effects logistic regression models. We are aware that there are other measures of the attraction effect; however, we chose this one because our main analyses, which used mixed-effects logistic regressions, required a dependent variable per trial (the choice in each trial: target/competitor for each subject). We included analyses of the attraction effect using three other measures from the literature: violation of WASRP (the Weak Axiom of Stochastic Revealed Preference) [39,40], violation of regularity [6,37], and the relative choice share of the target [38]. We concluded that most of these measures are very similar to the one we used and thus yield similar results (see S1 Text, Robustness analyses, for more details).
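
As an illustration of this analysis strategy, here is a sketch of a trial-level model; the column names and the input file are hypothetical. statsmodels has no frequentist mixed-effects logistic regression, so a GEE with within-subject correlation stands in here for the authors' estimator.

```python
# Trial-level logistic model of target choice with subject clustering (sketch).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# `df`: one row per Decoy-condition trial, with hypothetical columns
# chose_target (0/1), value_distance (1-4), gestalt_threshold (px), subject (id)
df = pd.read_csv("decoy_trials.csv")  # placeholder file name

model = smf.gee("chose_target ~ value_distance + gestalt_threshold",
                groups="subject", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())  # Wald z-tests on the coefficients
```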

Exclusion criteria

In addition to the 119 participants reported, across the two experiments, 35 additional participants were excluded from the final analyses (Experiment 1: 14 subjects; Replication: 21 subjects). The exclusion criteria were decided based on Experiment 1, as we pre-registered the replication experiment, and were then applied to the replication experiment as well.

Subjects were disqualified based on two exclusion criteria for the Gestalt task: 1) lack of engagement (the slope of their fitted logistic regression was not significantly larger than zero (p<0.05), meaning that they were not sensitive at all to the interval increase between the dots), and 2) choosing "different" on more than 50% of the trials that were actually identical, meaning that they were biased toward answering "different". Subjects were also excluded due to lack of engagement in the Decoy task, that is, if they chose the same option on more than 96% of the trials in at least two of the four blocks of the task, which indicates that they showed no variation in their choices across the different trial types (analogous to a low slope in the Gestalt task). We chose the 96% threshold based on a thorough exploration of our data from Experiment 1, and based all of our exclusion criteria on it. These exclusion criteria were listed in the pre-registration of the replication experiment.

Experiment 1

Subjects

38 valid participants completed the three tasks presented in the General methods section (mean age = 26.53, SD = 4.12, 29 females; demographic statistics are reported in Table 1). 14 additional subjects were excluded: four performed poorly in the Gestalt task (the slope of their fitted logistic regression was not significantly larger than zero (p<0.05), or they chose "different" on more than 50% of the trials that were actually identical), and ten performed poorly in the Decoy task (they chose the same option on more than 96% of the trials in at least two of the four blocks of the task).

Gestalt results

In order to examine the influence of the increase in physical distance between the triplets of dots (interval increase) on the propensity of the subject to differentiate between the stimuli, we fitted a mixed-effects logistic regression with interval increase as the independent variable and the subject's choices (identical/different) as the dependent variable.

We found that, on average, the propensity to discriminate between the two stimuli increased as a function of the interval increase between the dots, demonstrating that subjects were sensitive to the spaces between the triplets of dots (β = 0.34, p<0.001; left side of Table 2).

Table 2. Influence of the interval increase between the dots on the propensity to respond "different".

                            Experiment 1 (n = 38)              Replication (n = 81)
Fixed-effects Parameters    B      SE#      Z        p-val     B      SE#      Z        p-val
Constant                   -2.38   0.18   -12.91   < .001***  -2.47   0.12   -20.34   < .001***
Interval increase           0.34   0.03    12.81   < .001***   0.38   0.02    18.69   < .001***
Random-effects Parameters   var                                var
Constant                    1.12                               1.01
Interval increase           0.02                               0.03

We used a mixed-effects logistic regression with random intercept and slope components, allowing them to covary (an unstructured covariance matrix specification).

# Robust Std. Err. (Errors clustered by Subject); * p < .05 **p < .01

*** p < .001.

Next, we fitted each subject's behavioral data separately to a logistic regression with interval increase as the independent variable and the subject's choices (identical/different) as the dependent variable. We then estimated the physical distance between the triplets of dots at which the subject was at chance level (the x value at y = 0.5) based on the best-fit logistic function per subject. That is, we estimated the interval increase at which the subject could not tell the difference between the Constant and Variable stimuli, i.e., the sensitivity threshold. Fig 3A describes the data and the logistic fits of two representative subjects. The subject represented by the gray dots has a lower sensitivity threshold than the subject represented by the blue dots. That is, the "gray" subject starts to differentiate between the two stimuli at a smaller physical distance between the triplets (3.6 pixels), whereas the "blue" subject needs a larger physical distance (9.8 pixels) in order to differentiate between the two stimuli. The top histogram in Fig 3B describes the distribution of sensitivity thresholds across subjects for Experiment 1. The average sensitivity threshold was 7.19 (±0.28) pixels (ranging from 3 to 12 pixels). We then used this variation in sensitivity (termed 'Gestalt threshold') across subjects in order to look for a link between the Gestalt thresholds and the tendency to show an attraction effect in the Decoy task.
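
The per-subject threshold computation mirrors the indifference-point calculation in the calibration task; a minimal sketch (our code, not the authors'):

```python
# Per-subject Gestalt threshold: the interval increase at which the fitted
# probability of responding "different" is 0.5.
import numpy as np
import statsmodels.api as sm

def gestalt_threshold(interval_increase, responded_different):
    X = sm.add_constant(np.asarray(interval_increase, dtype=float))
    fit = sm.Logit(responded_different, X).fit(disp=0)
    constant, slope = fit.params
    return -constant / slope
```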

Fig 3. Gestalt results.


(A) Two representative subjects: the one colored in gray has a lower threshold to differentiate between the two stimuli than the one colored in blue. (B) Histograms of the Gestalt sensitivity thresholds calculated for all subjects in Experiment 1 (top) and Replication (bottom). The dashed blue line represents the mean (Experiment 1: mean = 7.19(±0.28) pixels; n = 38; Replication: mean = 7.02(±0.2) pixels, n = 81).

Decoy results

Significant attraction effect across subjects, but large heterogeneity between subjects

In order to examine the occurrence of the attraction effect across subjects, including all decoy locations, we measured the probability of choosing the target when the decoy was asymmetrically dominated by it, and compared it to chance level (50%). We found that, on average, subjects chose the target significantly more often than chance level (one-sample t-test, mean = 0.52, CI = [0.51, 0.54], t(37) = 2.76, p<0.01). Although the average effect across subjects is significant, it is rather small, probably because there is considerable heterogeneity across subjects in their probability of choosing the target (probabilities ranged from 0.38 to 0.68; Fig 4A). Therefore, we additionally examined, separately for each subject, the effect of adding a decoy on the probability of choosing the target option using a binomial test. We found that only ~20% of subjects chose the target at a rate significantly different from 50% (Experiment 1: 7 out of 38 subjects (18%), p<0.05; detailed individual results are available in S3 Appendix). While most of the subjects who showed a significant decoy effect displayed an attraction effect, 29% of them displayed the opposite effect (a repulsion effect: a higher probability of choosing the competitor when the decoy was asymmetrically dominated by the target [37,38]). These results are in line with previous studies which posited that decoy effects are usually weak [40,54] and that there are considerable differences between subjects [54].
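
The per-subject test can be sketched as follows (assuming a two-sided exact binomial test, as the text does not specify sidedness; requires scipy >= 1.7; the counts are hypothetical):

```python
# Did this subject choose the target significantly more or less than 50%?
from scipy.stats import binomtest

n_target_choices, n_trials = 150, 256  # hypothetical counts for one subject
result = binomtest(n_target_choices, n_trials, p=0.5)  # two-sided by default
print(result.pvalue)
```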

Fig 4. Histogram of the probability to choose the target.


The black line represents a choice probability at chance level (0.5). Choice probabilities below chance level (to the left of the black line) represent subjects who displayed a general repulsion effect, while choice probabilities above chance level (to the right of the black line) represent subjects who displayed a general attraction effect. (A) Experiment 1: mean = 0.52, n = 38. (B) Replication: mean = 0.53, n = 81.

Moreover, we examined the robustness of our measure of the attraction effect size (choice proportion of the target) by comparing it with three other measures used in the literature: violation of WASRP (the Weak Axiom of Stochastic Revealed Preference) [39,40], violation of regularity [6], and the relative choice share of the target [38]. We concluded that most of these measures are very similar to the one we used and thus yield similar results (detailed information and analyses are provided in S1 Text).

The influence of the value distance on the probability to choose the target

In order to examine the influence of the value distance on the choice proportion of the target, we used the VD in each trial of the Decoy condition as our predictor and subjects' choices (target or competitor) as the dependent variable. We first fitted a random-intercept logistic regression model and clustered the errors per subject.

The VD had a significant negative effect on the choice proportion of the target (β = -0.25, p<0.05; left side of Table 3). That is, the further away the decoy was from the target (regardless of the specific attribute (probability/amount) which differentiated between them), the less often the subject chose the target, and hence the weaker the attraction effect.

Table 3. Influence of the value distance on the choice proportion of the target.
                            Experiment 1 (n = 38)             Replication (n = 81)
Fixed-effects Parameters    B      SE#      Z       p-val     B      SE#      Z       p-val
Constant                    0.19   0.06    3.323  < .001***   0.17   0.04    4.244  < .001***
Value distance             -0.25   0.12   -2.09     .03*     -0.18   0.08   -2.12     .03*
Random-effects Parameters   var                               var
Constant                    0.03                              0.04

We used mixed effect logistic regression with random intercept.

# Robust Std. Err. (Errors clustered by Subject)

* p < .05 **p < .01

*** p < .001.

However, when we examined the slope coefficients of each subject separately (by fitting each subject's behavioral data to its own logistic function), we discovered that two-thirds of our subjects had negative slope coefficients, similar to the overall coefficient we found in the main regression (i.e., a negative influence of the VD on the probability of choosing the target), while the other third had positive slope coefficients (i.e., a positive influence of the VD on the probability of choosing the target) (Fig 5A).

Fig 5. Choice proportion of the target as a function of value distance.


Red color represents subjects who had a negative slope (as in the overall coefficient we found in the main regression), while blue color represents subjects who had a positive slope. The black bold line represents the mean slope across all subjects. (A) Experiment 1: 68% of subjects had negative slope coefficients, while 32% had positive slope coefficients. (B) Replication: 61% of subjects had negative slope coefficients, while 39% had positive slope coefficients.

Additionally, there was no correlation between the size of a subject's attraction effect (choice proportion of the target) and her tendency to be affected negatively or positively by the VD (R = 0.09, p = 0.6).

Since there was such large variability between subjects, we decided to use a model with a random slope in addition to the random intercept. That is, we allowed the intercept and the slope coefficients of the VD to vary across subjects, in addition to clustering the errors per subject. Using this model, we still found a marginally significant effect of the VD on the proportion of choosing the target option (β = -0.25, p = 0.07; left side of S1 Table). Note that the coefficient value is the same as in the previous model. Therefore, we concluded from this series of analyses that, on average, there is a small negative effect of the value distance between the decoy and the target on the proportion of choosing the target option. However, there is large variation across subjects in their individual slope coefficients (variance = 0.87).

Sensitivity to physical proximity influences the attraction effect

Next, we wanted to examine, across subjects, if and to what extent, there is an influence of the sensitivity to physical proximity (as measured in the Gestalt task) on the propensity to demonstrate the attraction effect. Therefore, we added the subject-specific Gestalt sensitivity threshold parameter that we estimated in the Gestalt task as another predictor to our model.

Interestingly, as can be seen in the left side of Table 4 and in the simple correlation presented in Fig 6A (for illustration purposes only), the Gestalt sensitivity threshold had a significant negative effect on the proportion to choose the target (β = -0.04, p<0.03). That is, the lower the Gestalt sensitivity threshold of a given subject (more sensitive to the proximity law), the more the subject tended to choose the target option. Importantly, note that the coefficient size and the significance of the VD regressor did not change after introducing the Gestalt sensitivity regressor, suggesting that the effect of the Gestalt sensitivity on choice is orthogonal to the effect of the VD on choice.

Table 4. Summary of the mixed effects logistic regression model for variables predicting the choice proportion of the target.
                            Experiment 1 (n = 38)             Replication (n = 81)
Fixed-effects Parameters    B      SE#      Z       p-val     B      SE#      Z       p-val
Constant                    0.46   0.13    3.53   < .001***   0.39   0.09    4.45   < .001***
Value distance             -0.25   0.14   -1.86     .06      -0.17   0.09   -1.83     .07
Gestalt Threshold          -0.04   0.02   -2.26     .02*     -0.03   0.01   -2.72   < .01**
Random-effects Parameters   var                               var
Constant                    0.00                              0.00
Value distance              0.14                              0.15

# Robust Std. Err. (Errors clustered by Subject)

* p < .05

**p < .01

*** p < .001.

Fig 6. Correlation between Gestalt threshold and choice proportion of the target.


The lower the Gestalt sensitivity threshold of a given subject (more sensitive to the proximity law), the more the subject tends to choose the target option. (A) Experiment 1: R = -0.31, p = 0.06, n = 38. (B) Replication: R = -0.25, p = 0.02, n = 81.

To exclude the possibility that the significant negative link between the Gestalt threshold and the probability to choose the target is merely due to task engagement, such that subjects who were less engaged in the Gestalt task (and thus have higher thresholds) were also less engaged in the Decoy task (and thus have lower attraction effect sizes), we performed further analyses and added them to the supplementary material (S1 Text).

In the perceptual task, in order to examine task engagement, we measured the slope of the logistic regression fit for each subject. The slope of the logistic fit reflects how accurate the subject was in general, across all intervals (the distribution of error rates across trial difficulties). We then used the Gestalt slope as a predictor in our main analysis instead of the Gestalt threshold, and found no significant effect of the Gestalt slope on the probability of choosing the target in either experiment (Experiment 1: β = 0.12, p = 0.52; Replication: β = 0.04, p = 0.67; Table 1 in S1 Text). These results indicate that there is no systematic relation between the error rates (task engagement) in the Gestalt task and the choice proportion of the target in the Decoy task.

Regarding the Decoy task, it is impossible to define a choice error, since there is no correct answer in each trial (except for the first-order stochastically dominated trials, in which all subjects except one chose the 100% winning probability option all the time). Nonetheless, as an analogue to the slopes of the logistic fits in the Gestalt task, we measured the choice variance in each trial type in the Decoy task (there were 32 different trial types that were repeated 8 times each). We calculated two measures of task engagement in the Decoy task: 1) the mean of the choice variance, which indicates how consistent the subject was within each trial type (the smaller this mean, the more consistent the subject was per trial type and thus the more engaged in the Decoy task), and 2) the variability across trial types, which captures whether the subject responded differently across the different trial types (the smaller this variability, the less the subject changed her responses according to the different trial types, and thus, we assume, the less engaged she was in the task) (detailed equations for the two measures are available in S1 Text). When we examined the correlation between each of these measures of task engagement and the choice proportion of the target, we observed no significant correlation for either measure (Fig 7 in S1 Text). This indicates that subjects who had a higher choice variance per trial type or a smaller variability across trial types, and thus were probably less engaged in the Decoy task, did not systematically choose the target more or less often.
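
A plausible rendering of the two engagement measures (the exact equations are in S1 Text; this encoding of choices is our assumption), for one subject's 32 trial types with 8 repetitions each:

```python
# Two Decoy-task engagement measures for one subject (sketch with random data).
import numpy as np

rng = np.random.default_rng(2)
choices = rng.integers(0, 2, size=(32, 8))  # 1 = chose target; 32 types x 8 repeats

mean_choice_variance = choices.var(axis=1).mean()      # consistency within trial type
variability_across_types = choices.mean(axis=1).var()  # responsiveness across types
```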

Moreover, neither measure of task engagement in the Decoy task (the mean of the choice variance per trial type and the variability across trial types) correlated significantly with the measure of task engagement in the Gestalt task (the slope of the logistic fit) (Fig 8 in S1 Text), demonstrating that there is no connection between the levels of engagement in the two tasks.

Finally, we ran a regression analysis which included VD, Gestalt threshold, and the task engagement measures (the Gestalt slope for the Gestalt task and the mean of the choice variance per trial type for the Decoy task) as predictors of the choice proportion of the target, for both experiments, and observed that none of the task engagement measures had a significant effect on the choice proportion of the target in either experiment (Table 2 in S1 Text). Furthermore, the coefficients of our main predictors (Gestalt threshold and VD) in the model which includes the task engagement measures (Table 2 in S1 Text) were very similar to the coefficients of our main predictors in the main model in the paper (Table 4) in both experiments.

These results suggest that the effect of the sensitivity to the proximity law on the choice proportion of the target is not related to task engagement (see S1 Text for more details).

Range decoys, as opposed to frequency decoys, induce a stronger attraction effect

It was previously shown that range decoys (which give the target an advantage in its weaker attribute) produce stronger attraction effects than frequency decoys (which give the target an advantage in its stronger attribute) [1,31]. In order to examine whether this was the case in our data, we added the decoy type (range or frequency) as a dummy predictor to our model. Similar to the findings of previous studies, range decoys were associated with a higher probability of choosing the target in comparison with frequency decoys (dummy variable with range coded as 0: β = -0.12, p<0.01; left side of S2 Table). Importantly, the size and significance of all other regressors remained the same.

Replication experiment

Our aim was to replicate the results of Experiment 1. Therefore, we pre-registered the results of Experiment 1 and the planned replication experiment (see OSF, https://osf.io/jzk6y/), which was identical to Experiment 1 in both design and analysis.

Subjects

81 valid participants completed the replication experiment (demographic statistics are reported in Table 1). 21 additional subjects were excluded based on our pre-registered exclusion criteria: five performed poorly in the Gestalt task and thirteen performed poorly in the Decoy task. Three performed poorly on both tasks.

Gestalt results

Similar to Experiment 1, we found that the propensity to discriminate between the two stimuli increased as a function of the interval increase between the dots, demonstrating that subjects were very sensitive to the spaces between the dots (β = 0.38, p<0.001; right side of Table 2). Furthermore, in the replication experiment, the average sensitivity threshold was 7.02 (±0.2) pixels (ranging from 2.61 to 11.8 pixels; Fig 3B, bottom histogram), which is very similar to the distribution of sensitivity thresholds in Experiment 1 (t = 0.09, p = 0.92; Fig 3B, top histogram).

Decoy results

We present here only the results of the full model of the replication experiment; however, we performed the same analysis steps as presented for Experiment 1. The partial models of the replication experiment are presented on the right side of Tables 3 and 4, as well as on the right side of S1 and S2 Tables, and are also available online at https://osf.io/jzk6y/.

Similar to Experiment 1, on average, subjects chose the target significantly more often than chance level (one-sample t-test, mean = 0.53, CI = [0.51, 0.54], t(80) = 4.14, p<0.001). Additionally, there was high variability across subjects in their probability of choosing the target (probabilities ranged from 0.41 to 0.83; Fig 4B). Similar to Experiment 1, ~20% of the subjects displayed a significant decoy effect at the individual level (17 out of 81 subjects (21%) chose the target at a rate significantly different from 50%, p<0.05; detailed individual results are available in S3 Appendix). Moreover, similarly to Experiment 1, most of the subjects who showed a significant decoy effect displayed an attraction effect, while 18% displayed a repulsion effect.

Interestingly, the coefficients of our predictors in the full model for the replication experiment were very similar to the coefficients in the full model for Experiment 1 (right side of Table 4 and of S2 Table). Specifically, we again found a marginally significant negative effect of the VD on the proportion of choosing the target option (β = -0.17, p = 0.07; right side of Table 4). Moreover, we observed a proportion of negative and positive coefficients similar to that found in Experiment 1 (Experiment 1: 68% of subjects had negative coefficients (Fig 5A); Replication: 61% of subjects had negative coefficients (Fig 5B)).

Regarding the connection between the Gestalt sensitivity threshold and the probability to choose the target in the Decoy task, we again observed a significant negative effect of the Gestalt threshold on the proportion to choose the target (β = -0.03, p<0.01; right side of Table 4). These results strengthen our conclusion from Experiment 1 that the lower the Gestalt sensitivity threshold of a given subject (more sensitive to the proximity law), the more the subject tends to choose the target option.

Furthermore, similar to Experiment 1, range decoys were associated with a higher probability to choose the target in comparison with frequency decoys (dummy variable: range was coded as 0. β = -0.06, p<0.05; right side of S2 Table).

General discussion

In the current study, we aimed to elucidate the mechanisms underlying the attraction effect by examining a potential link between this effect and a well-known perceptual phenomenon, the Gestalt law of proximity. Across two independent and identical experiments with a pre-registered replication, we found that the lower the Gestalt sensitivity threshold of a given subject as measured in a perceptual task (i.e., it is easier for this subject to differentiate between the two stimuli because her tendency to group by the Gestalt law of proximity is higher), the more she tends to choose the target option (i.e., displays a stronger attraction effect). Therefore, we suggest that the variation across subjects in their susceptibility to the Gestalt law of proximity might account for some of the variation observed in their tendency to show the attraction effect. These results strengthen the notion that there are commonalities between perceptual and value-based processing by demonstrating a within-subject link between a perceptual phenomenon (proximity law) and a value-based bias (attraction effect). Moreover, our findings can help us better understand the mechanisms underlying the attraction effect using the within-subject link with the Gestalt law of proximity.

Grouping by proximity as an optional mechanism for the attraction effect

How can grouping by proximity be one of the mechanisms mediating the attraction effect? The Gestalt principles of self-organization aim to describe how our brain constructs the perception of the world around us [32]. Since we live in a noisy world with endless sensory information, but at the same time with constraints of limited resources and capacity, we naturally need efficient coding [41,42] in order to balance robustness (stability, resolution of ambiguity, and resistance to change) against flexibility (adaptation to a dynamic environment). The Gestalt principles offer a solution to this computational problem of balancing robustness and flexibility using both the internal and external aspects of perception [43].

The perceptual system, which is known to follow specific rules of efficient processing, provides an integral input into the value-based decision process [44]. Therefore, it is not surprising that both sensory and choice biases are considered to be the consequence of several canonical computations and patterns in the brain, which we use in order to efficiently code our environment [26,45–47]. Our results demonstrate a direct link between these two domains. The within-subject relationship between the sensitivity to grouping by proximity and the susceptibility to the attraction effect suggests that the grouping principle of proximity (either physical or value-based proximity) might be part of the mechanism underlying the attraction effect. We offer a theoretical model of how this connection might occur, using attention as a mediator (Fig 7).

Fig 7. A suggested mechanism of how grouping by proximity may mediate the attraction effect.


There is a close interplay between selective attention and perceptual organization processes [48,49]. Since, as we mentioned above, we have limited resources and capacity [41,42], we are obliged to direct our attention to a particular part of the scene [50]. Several studies demonstrated that perceptual organization plays a crucial role in the deployment of attention [48,51,52]. For example, Kimchi and colleagues [51] demonstrated that stimuli which are grouped according to principles of self-organization attract subjects’ attention more than stimuli which are not perceived as grouped. Therefore, selective attention might be a mediator which allows Gestalt principles of self-organization to increase the efficiency of our perception process.

Interestingly, selective attention has also been suggested to play an integral role in the mechanism underlying the attraction effect. For instance, Roe and colleagues [16] posit in their Multialternative Decision Field Theory (MDFT) that shifting attention between the attributes of the more similar choice options increases the attractiveness of the better option among them via lateral inhibition, and leads to the attraction effect. Another, more recent model, the Multiattribute Linear Ballistic Accumulator (MLBA) model [17], also suggests that more attention is allocated to the comparison between the target and the decoy options because it is harder to discriminate between them (they are more similar), and this highlights the superiority of the target. The Multialternative Decision by Sampling (MDbS) model [14] also postulates that similar options attract more attention, but because it is easier (rather than harder) to compare between them. Moreover, a recent study demonstrated that the value of the choice options also influences the allocation of attention [53]. Since selective attention plays an integral role both in the Gestalt principles of self-organization and in the attraction effect, we propose that the within-subject correlation we found between sensitivity to proximity and the tendency to show the attraction effect could be mediated by selective attention (Fig 7).

A first step of every evaluation or choice process is a perceptual appraisal of the alternatives [44], which, as mentioned above, is subject to specific rules of efficient processing, for example grouping by proximity. Therefore, in the Decoy task, we propose that when the subject is presented with 3 different gambling options, the Gestalt self-organization principles shape the way she perceives these options. Since some of the options are closer to each other in the value space, she will tend to group these options according to the proximity law. Moreover, she will direct her attention to these closer options (in the value space), because they are perceived as more similar, and according to the self-organization rules, closer (more similar) options receive more attention. This, in turn, will lead to more comparisons between the closer options and thus to the attraction effect. According to the MDbS [14] and MDFT [16] models, the higher the probability of comparison between a specific pair, the higher the probability of choosing the better option of that pair, which is the definition of the attraction effect.

An unresolved question is what actually leads people to direct more attention to similar options. We suggest the Gestalt principle of grouping by proximity as a possible answer, since we showed that subjects who are less sensitive to grouping by proximity are also less susceptible to the attraction effect. Furthermore, when the decoy was located further away from the target, subjects tended to display a weaker attraction effect. Note, however, that this is a theoretical notion that should be examined in future studies using either imaging techniques or eye movements.

The effect of value distance on the attraction effect—heterogeneity between subjects

We replicated the attraction effect when averaging behavior across subjects. The probability of choosing the target was significantly higher when the decoy was asymmetrically dominated by the target than when it was asymmetrically dominated by the competitor. However, our results demonstrated considerable heterogeneity between subjects in their sensitivity to the attraction effect. We observed, in both experiments, that only ~20% of the subjects displayed a significant decoy effect at the individual level. It is important to note that most previous studies described only group effects [13,14], either because the study was a between-subjects design or because the study focused only on group effects. However, studies that did examine and report results at the individual level show that there are systematic differences across subjects with regard to the influence of context on their behavior [40,54,55], and posit that decoy effects are usually weak [40,54], similar to our results.

Furthermore, although we aimed to bring each subject to indifference between options A and B using the Calibration task, the safer option (option A) was chosen more often across subjects in the Basic condition in both experiments, albeit significantly so only in the Replication experiment. Additionally, for around a third of the subjects in both experiments, there was a significant difference in subjective value between the two options, even though we used a Calibration task. This could be another reason for the small attraction effect sizes in our study. Nonetheless, although the calibration task did not work perfectly, we were able to show a significant attraction effect across subjects, as well as a significant link between the choice proportion of the target and the susceptibility to grouping by proximity.

Moreover, across both experiments, we demonstrated a marginally significant negative effect of the VD on the attraction effect. That is, the further away the decoy was from the target, the less often the subject chose the target, and hence the weaker the attraction effect. Interestingly, this is in line with the manner in which physical distance between objects affects the susceptibility to grouping by the proximity law: the bigger the physical distance between objects, the smaller the chance of perceiving these objects as grouped by proximity [32]. The effect is only marginally significant because there is considerable variability in subjects' sensitivity to the VD. Two-thirds of our subjects had negative slope coefficients, similar to the overall coefficient we found in the main regression (i.e., a negative influence of the VD on the probability of choosing the target), while the other third had positive slope coefficients (i.e., a positive influence of the VD on the probability of choosing the target).

Additionally, there is evidence for both negative and positive effects of the target-decoy distance on the attraction effect. Several previous studies found that more distant decoys produced a smaller attraction effect, similar to our findings [33,40]. However, other studies observed the opposite effect. For example, Soltani et al. [4] found that close decoys had no significant effect while far decoys had a very strong effect, and Spektor et al. [38] demonstrated that an increase in the target-decoy distance of perceptual stimuli increased the choice proportion of the target. A possible explanation for this discrepancy, and for the large variability between subjects in our study, is that there are actually two opposing forces in the Decoy task. On the one hand, the more similar the decoy and the target are (smaller VD), the more attention subjects allocate to these options, which leads to more frequent comparisons between them (and hence to a larger attraction effect) [14]. On the other hand, the more the decoy is inferior to the target (larger VD), the more readily subjects perceive the superiority of the target (larger attraction effect) [see also 56]. In fact, both the MDFT model [16] and the MDbS model [14] note that when the decoy is very similar to the target, and hence its inferiority is less clear, it may reduce the attraction effect. Therefore, we suggest this balance between the similarity of the decoy to the target and the inferiority of the decoy relative to the target as a possible explanation for the contradicting findings in the literature regarding the effect of target-decoy distance on the attraction effect, and for the large variability between subjects in the effect of the VD on the attraction effect in our study. It might be that the subjects who displayed a positive effect of the VD on the probability of choosing the target are more sensitive to the inferiority of the decoy, while the subjects who displayed a negative effect are more sensitive to the similarity between the target and the decoy.

It is also important to note that in our study the smallest target-decoy distance was a difference of 15%, while in the studies with contradicting findings the smallest target-decoy distance was 2% [4,38]. These different ranges may also interact with the two opposing forces of similarity and inferiority which affect the size of the attraction effect. A difference of 2% between the decoy and the target may result in a decoy which is not inferior enough to the target, so subjects would display a weaker attraction effect, while a difference of 15% may be large enough for the target to be perceived as superior to the decoy while still being similar enough to it.

Nevertheless, our results suggest that some of the variability between subjects in their overall susceptibility to the attraction effect can be explained by their sensitivity to the Gestalt law of proximity. Subjects who displayed low sensitivity to the attraction effect, or even the opposite effect (the repulsion effect), were also less susceptible to grouping by proximity. Moreover, our findings highlight the importance of examining variability across subjects, rather than relying only on group-level effects, in order to understand behavior and cognition.

Conclusion

Our findings provide evidence for a within-subject link between the sensitivity to a perceptual heuristic (the proximity law of Gestalt theory) and the sensitivity to a value-based bias (the attraction effect). These findings elucidate the commonalities between sensory and value-based processing within an individual and strengthen the notion that the brain generalizes across domains. Specifically, we suggest that the variation across subjects in their susceptibility to the Gestalt law of proximity might account for some of the variation observed in the attraction effect. We therefore drew on the extensive research on the proximity law of Gestalt theory to address an open question about the mechanism underlying the attraction effect. Previous studies suggested that selective attention to more similar options plays an integral role in the mechanism underlying the attraction effect [14,16,17]. However, an unresolved question is what actually leads people to direct more attention towards similar options. Building on the evidence of a close interplay between selective attention and the Gestalt grouping principles, we suggest that grouping by proximity of the more similar options is what leads people to direct more attention to these options. This allows us to draw a specific connection between perceptual processing (grouping by proximity) and value-based processing (comparison between lottery options). These findings are important for better understanding the mechanisms underlying the attraction effect. Future work could examine computational models that may offer a further explanation for the mechanism underlying this connection between the proximity law and the attraction effect.

Furthermore, our results offer a new approach for examining the mechanisms of context-based choice biases using perceptual mechanisms. We can use the evidence that the brain generalizes across domains, following a set of fundamental shared rules, to transfer knowledge from one domain to the other. In addition, finding such a connection between perceptual and value processing may shed light on the overall mechanism by which the brain integrates information across different domains.

Supporting information

S1 Table. Influence of the value distance on the choice proportion of the target.

(PDF)

S2 Table. Summary of the mixed effects logistic regression model for variables predicting the choice proportion of the target.

(PDF)

S1 Appendix. List of trials for the calibration task.

(PDF)

S2 Appendix. Calculation of the decoy options.

(PDF)

S3 Appendix. Individual results of binomial tests for choice proportion of target.

(PDF)

S1 Text. Additional analyses.

(PDF)

Acknowledgments

We thank Vered Kurtz-David, Adam Hakim, Tal Sela, Sharon Yefet and Noa Palmon for their help and guidance in many discussions. We also thank Marius Usher for his valuable advice throughout the work on the study.

Data Availability

All data is available from https://osf.io/jzk6y/.

Funding Statement

D.J.L. received funding from the United States-Israeli Bi-national Science Foundation (CNCRS = 2014612). https://www.bsf.org.il/. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Huber J, Payne J, Puto C. Adding asymmetrically dominated alternatives: violations of regularity and the similarity hypothesis. Journal of Consumer Research. 1982;9(1): 90–98. [Google Scholar]
  • 2.Simonson I. Choice based on reasons: the case of attraction and compromise effects. Journal of Consumer Research. 1989;16(2): 158–74. [Google Scholar]
  • 3.Wedell DH, Pettibone JC. Using judgments to understand decoy effects in choice. Organ Behav Hum Decis Process. 1996;67(3): 326–44. [DOI] [PubMed] [Google Scholar]
  • 4.Soltani A, De Martino B, Camerer C. A range-normalization model of context-dependent choice: a new model and evidence. PLoS Comput Biol. 2012;8(7): 1–15. 10.1371/journal.pcbi.1002607 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Louie K, De Martino B. The neurobiology of context-dependent valuation and choice In: Glimcher PW, Fehr E, editors. Neuroeconomics: Decision Making and the Brain. 2nd ed. Elsevier Inc; 2013. pp. 455–78. [Google Scholar]
  • 6.Tversky A. Elimination by aspects: a theory of choice. Psychological Review. 1972;79(4): 281–99. [Google Scholar]
  • 7.Luce RD. Individual choice behavior. Oxford, England: John Wiley; 1959. 10.1037/h0043178 [DOI] [Google Scholar]
  • 8.Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica. 1979;47(2): 263–292. [Google Scholar]
  • 9.Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981;211(4481): 453–458. 10.1126/science.7455683 [DOI] [PubMed] [Google Scholar]
  • 10.Ratcliff R, Smith PL. A Comparison of sequential sampling models for two-choice reaction time. Psychological Review. 2004;111(2), 333–367. 10.1037/0033-295X.111.2.333 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Usher M, McClelland JL. Loss aversion and inhibition in dynamical models of multialternative choice. Psychological Review. 2004;111(3): 757–69. 10.1037/0033-295X.111.3.757 [DOI] [PubMed] [Google Scholar]
  • 12.Gold JI, Shadlen MN. The neural basis of decision making. Annu Rev Neurosci. 2007;30: 535–74. 10.1146/annurev.neuro.29.051605.113038 [DOI] [PubMed] [Google Scholar]
  • 13.Busemeyer JR, Gluth S, Rieskamp J, Turner BM. Cognitive and neural bases of value-based decisions. Trends Cogn Sci. 2019;23(3): 251–63. 10.1016/j.tics.2018.12.003 [DOI] [PubMed] [Google Scholar]
  • 14.Noguchi T, Stewart N. Multialternative Decision by Sampling: a model of decision making constrained by process data. Psychological Review. 2018;125(4): 512–44. 10.1037/rev0000102 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Noguchi T, Stewart N. In the attraction, compromise, and similarity effects, alternatives are repeatedly compared in pairs on single dimensions. Cognition. 2014;132(1): 44–56. 10.1016/j.cognition.2014.03.006 [DOI] [PubMed] [Google Scholar]
  • 16.Roe RM, Busemeyer JR, Townsend JT. Multialternative Decision Field Theory: a dynamic connectionist model of decision making. Psychological Review. 2001;108(2): 370–92. 10.1037/0033-295x.108.2.370 [DOI] [PubMed] [Google Scholar]
  • 17.Trueblood JS, Brown SD, Heathcote A. The Multiattribute Linear Ballistic Accumulator Model of context effects in multialternative choice. Psychological Review. 2014;121(2): 179–205. 10.1037/a0036137 [DOI] [PubMed] [Google Scholar]
  • 18.Fechner GT. Elements of psychophysics. Adler HE, translator. New York: Holt, Rinehart and Winston; 1860. [Google Scholar]
  • 19.Bernoulli D. Exposition of a new theory on the measurement of risk. Econometrica. 1954;22: 22–36. [English translation of Latin original, 1738]. [Google Scholar]
  • 20.Serences JT. Value-based modulations in human visual cortex. Neuron. 2008;60(6): 1169–81. 10.1016/j.neuron.2008.10.051 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Brosch M, Selezneva E, Scheich H. Representation of reward feedback in primate auditory cortex. Frontiers in Systems Neuroscience. 2011;5: 1–12. 10.3389/fnsys.2011.00005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Shuster A, Levy DJ. Common sense in choice: the effect of sensory modality on neural value representations. eNeuro. 2018;5: 1–14. 10.1523/ENEURO.0346-17.2018 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Anderson BA. A value-driven mechanism of attentional selection. Journal of Vision. 2013;13: 1–16. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Bourgeois A, Chelazzi L, Vuilleumier P. How motivation and reward learning modulate selective attention. Prog Brain Res. 2016;229: 325–342. 10.1016/bs.pbr.2016.06.004 [DOI] [PubMed] [Google Scholar]
  • 25.Kahneman D. Maps of bounded rationality: a perspective on intuitive judgment and choice. In: The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. 2002. pp. 449–89. [Google Scholar]
  • 26.Louie K, Khaw MW, Glimcher PW. Normalization is a general neural mechanism for context-dependent decision making. Proc Natl Acad Sci U S A. 2013;110: 6139–44. 10.1073/pnas.1217854110 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Woodford M. Prospect theory as efficient perceptual distortion. American Economic Review. 2012;102: 41–6. [Google Scholar]
  • 28.Khaw MW, Li Z, Woodford M. Risk aversion as a perceptual bias. SSRN: abstract = 2964856 [Working Paper]. 2017. [cited 2017 April 4]. Available from: https://ssrn.com/abstract=2964856. [Google Scholar]
  • 29.Khaw MW, Glimcher PW, Louie K. Normalized value coding explains dynamic adaptation in the human valuation process. Proc Natl Acad Sci U S A. 2017;114(48): 1–6. 10.1073/pnas.1715293114 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Choplin JM, Hummel JE. Comparison-induced decoy effects. Mem Cogn. 2005;33(2): 332–43. 10.3758/bf03195321 [DOI] [PubMed] [Google Scholar]
  • 31.Trueblood JS, Brown SD, Heathcote A, Busemeyer JR. Not just for consumers: context effects are fundamental to decision making. Psychological Science. 2013;24(6): 901–908. 10.1177/0956797612464241 [DOI] [PubMed] [Google Scholar]
  • 32.Koffka K. Principles of Gestalt psychology. Oxford, England: Harcourt, Brace; 1935. [Google Scholar]
  • 33.Lehmann DR, Pan Y. Context Effects, New Brand Entry, and Consideration Sets. J Mark Res. 1994;31(3): 364–74. [Google Scholar]
  • 34.Mishra S, Umesh UN, Stem DE. Antecedents of the attraction effect: an information-processing approach. J Mark Res. 1993;30(3): 331–49. [Google Scholar]
  • 35.Gori S, Spillmann L. Detection vs. grouping thresholds for elements differing in spacing, size and luminance. An alternative approach towards the psychophysics of Gestalten. Vision Res. 2010;50(12): 1194–202. 10.1016/j.visres.2010.03.022 [DOI] [PubMed] [Google Scholar]
  • 36.Mohr PNC, Heekeren HR, Rieskamp J. Attraction effect in risky choice can be explained by subjective distance between choice alternatives. Sci Rep. 2017;7: 1–10. 10.1038/s41598-017-06968-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Frederick S, Lee L, Baskin E. The limits of attraction. J Mark Res. 2014;51(4): 487–507. [Google Scholar]
  • 38.Spektor MS, Kellen D, Hotaling JM. When the good looks bad: an experimental exploration of the repulsion effect. Psychological Science. 2018;29(8): 1309–20. 10.1177/0956797618779041 [DOI] [PubMed] [Google Scholar]
  • 39.Bandyopadhyay T, Dasgupta I, Pattanaik PK. Stochastic revealed preference and the theory of demand. Journal of Economic Theory. 1999;110: 95–110. [Google Scholar]
  • 40.Castillo G. The attraction effect and its explanations. Games and Economic Behavior. 2020;119: 123–147. [Google Scholar]
  • 41.Barlow HB. Possible principles underlying the transformation of sensory messages In: Resenblith WA, editor. Sensory Communication. Cambridge, MA: MIT Press; 1961. pp 217–234. [Google Scholar]
  • 42.Attneave F. Some informational aspects of visual perception. Psychological Review. 1954;61(3): 183–93. 10.1037/h0054663 [DOI] [PubMed] [Google Scholar]
  • 43.Wagemans J, Feldman J, Gepshtein S, Kimchi R, Pomerantz JR, et al. A century of Gestalt psychology in visual perception: II. Conceptual and theoretical foundations. Psychol Bull. 2012;138(6): 1218–52. 10.1037/a0029334 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Summerfield C, Tsetsos K. Building bridges between perceptual and economic decision-making: neural and computational mechanisms. Front Neurosci. 2012;6(5): 1–20. 10.3389/fnins.2012.00070 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Louie K, Glimcher PW. Efficient coding and the neural representation of value. Ann N Y Acad Sci. 2012;1251(1): 13–32. 10.1111/j.1749-6632.2012.06496.x [DOI] [PubMed] [Google Scholar]
  • 46.Li V, Michael E, Balaguer J, Castañón SH, Summerfield C. Gain control explains the effect of distraction in human perceptual, cognitive, and economic decision making. Proc Natl Acad Sci U S A. 2018;115(38): 8825–34. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Polanía R, Woodford M, Ruff CC. Efficient coding of subjective value. Nat Neurosci. 2019;22: 134–42. 10.1038/s41593-018-0292-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Kimchi R. Perceptual organization and visual attention. Progress in Brain Research. 2009;176: 15–33. 10.1016/S0079-6123(09)17602-1 [DOI] [PubMed] [Google Scholar]
  • 49.Wagemans J, Elder JH, Kubovy M, Palmer SE, Peterson MA, et al. A century of Gestalt psychology in visual perception: I. Perceptual grouping and figure–ground organization. Psychol Bull. 2012;138(6): 1172–217. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Johnston WA, Dark VJ. Selective attention. Ann Rev Psychol. 1986;37: 43–75. [Google Scholar]
  • 51.Kimchi R, Yeshurun Y, Cohen-Savransky A. Automatic, stimulus-driven attentional capture by objecthood. Psychon Bull Rev. 2007;14(1): 166–72. 10.3758/bf03194045 [DOI] [PubMed] [Google Scholar]
  • 52.Lamers MJ, Roelofs A. Role of Gestalt grouping in selective attention: Evidence from the Stroop task. Percept Psychophys. 2007;69(8): 1305–14. 10.3758/bf03192947 [DOI] [PubMed] [Google Scholar]
  • 53.Gluth S, Kern N, Kortmann M, Vitali CL. Value-based attention but not divisive normalization influences decisions with multiple alternatives. Nat Hum Behav. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Liew SX, Howe PDL, Little DR. The appropriacy of averaging in the study of context effects. Psychon Bull Rev. 2016;23: 1639–46. [DOI] [PubMed] [Google Scholar]
  • 55.Daviet R, Webb R. Double Decoys and a Potential Parameterization: Empirical Analyses of Pairwise Normalization. SSRN: ssrn.3374514 [Working Paper]. 2019 [cited 2019 January 28]. Available from: https://ssrn.com/abstract=3374514.
  • 56.Król M, Król M. Inferiority, Not Similarity of the Decoy to Target, Is What Drives the Transfer of Attention Underlying the Attraction Effect: Evidence From an Eye-Tracking Study with Real Choices. J Neurosci Psychol Econ. 2019;12(2): 88–104. [Google Scholar]

Decision Letter 0

Tyler Davis

30 Jun 2020

PONE-D-20-14306

Attraction to similar options: the Gestalt law of proximity is related to the attraction effect

PLOS ONE

Dear Dr. Izakson,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Two expert reviewers have reviewed the submission and are in agreement that the study described is a useful contribution to the field, pending some conceptual and methodological revisions. I agree with their reviews and thus will not reiterate them here. The recommendations from both reviewers are straightforward and concrete, and thus should be addressable in a major revision. Please make sure to address each point in your resubmission.

Please submit your revised manuscript by Aug 14 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Tyler Davis, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In their paper, the authors tested whether "physical proximity" and "value proximity" are related concepts. To do so, they investigated the relationship between the attraction effect, a phenomenon according to which the addition of a specific inferior option affects the relative choice proportions between the original options, and the Gestalt law of proximity, the tendency to group objects according to their physical proximity. The authors found a correlation between the tendency to group stimuli by proximity and the degree to which they showed an attraction effect.

The paper investigates an interesting hypothesis and extends our understanding of the similarities between perceptual and value-based decision making. The paper is well written and structured, the methods are described in detail, and the analyses are conducted rigorously. In general, I think this is a good paper worthy of publication. However, I have a few minor questions and one major concern regarding the theoretical background and the interpretation of the results that, in my opinion, has to be addressed prior to publication.

Major comment:

1) The authors motivate their research by the question whether the perception of value distance is similar (or analogous) to the perception of actual physical distance. They use a psychophysical task in which individuals have to tell whether two arrays of dots are identical or different to determine the sensitivity threshold and a standard risky-choice task to determine the degree to which individuals exhibit the attraction effect. A negative correlation between the threshold (i.e., higher values = lower sensitivity) and the attraction effect (i.e., higher values = stronger influence of context) emerges which the authors interpret as a commonality, even claiming that the results "strengthen the notion that the brain generalizes across domains" (without it being a neuroscientific investigation).

In the perceptual task, individuals faced a total of 192 trials, out of which 174 were between two different arrays and 18 were between two identical arrays. Despite an achievable accuracy of more than 90% by merely pressing "different" on every trial (individuals that the authors excluded), there were substantial individual differences with respect to sensitivity in the task. The authors interpreted those with a low threshold as being more sensitive to a perceptual heuristic. I find this classification quite surprising, given that individuals with a low threshold are those that performed the task more accurately (objectively better) than those with a high threshold. To me, it seems that those with a higher threshold are simply less engaged with the task/more random in their response behavior. Strikingly, random behavior would also be reflected in a regression toward 50% choices of the target in the decoy task (i.e., weaker attraction effect).

If this is the case, the observed correlation is merely a statistical necessity reflecting randomness due to boredom/task engagement, such that more random people have both a higher threshold and a smaller attraction effect. The conclusions of the authors would not hold in such a scenario. Thankfully, the authors' design of both tasks allows one to distinguish the two interpretations: in both tasks, each unique trial is repeated (at least) 8 times. This property can be used to quantify the individuals' consistency, in other words, the proportion of choosing the same option across repetitions. A person that is engaged in the task but is simply not very sensitive will have a high consistency and a rather abrupt cut-off as the variable stimulus becomes less similar to the constant one. In contrast, a person that is less engaged will have a much more uniform distribution of error rates across the different difficulties. The analogous reasoning applies in the risky choice task.

The authors should provide a more convincing reasoning why individuals with a lower threshold are more susceptible to a heuristic and present analyses that rule out that the confound of task engagement is the main driver of the observed effect.

Additional minor comments:

1) The authors mention on multiple occasions that there are substantial individual differences and that about a quarter of their sample show the opposite of the attraction effect, the repulsion effect. The presence of individuals on the "other" side of the effect is more-or-less a necessity; if these individuals were not there, the observed effect size would be substantially greater. If the authors want to make this claim, they would need to run tests on an individual level (e.g., a binomial test). Most likely, only few of those people (if any) will show a significant deviation from a 50% chance of choosing the target option.

2) As a robustness check, the authors checked for violations of WASRP. A correlation of r=.99 suggests that the main analysis and the supplemental analysis are almost identical. However, I do not see how these analyses differ from each other. Does either of the analyses include trials on which the decoy was chosen? It would be helpful if the authors could clarify the differences between the analyses.

3) In the basic condition, option A seems to be preferred to option B (60% vs. 40%, Fig. 5 in S3), at least on average. Since this is not mentioned anywhere in the text I have to guess, but I assume that A is the safe option and B is the riskier option. Moreover, there are substantial differences in the choice proportions, such that the values range from 0% to about 80%, whereas they should be closer to 50%. The authors should at least mention and briefly discuss that their calibration procedure did not achieve the desired result.

Reviewer #2: This is my first review of the paper “Attraction to similar options: the Gestalt law of proximity is related to the attraction effect” by Izakson et al. In their study, the authors investigate if and how common processes underlying perceptual and value-based decisions might cause the attraction effect. In two studies, one of them a pre-registered replication study, they essentially found a correlation between susceptibility to the Gestalt law of proximity and the size of the attraction effect. While this is an interesting finding that should be published, the authors thus far did not aim at understanding why this correlation exists. I therefore recommend additional analyses. I furthermore feel uncomfortable with the number of excluded participants and the reasons for exclusion. Please find my detailed comments below.

1.) Please define spatial context.

2.) I would not call attraction effect, compromise effect and similarity effect “decoy effects” but “context effects”.

3.) Line 114: Transformation into subjective scale: please use a “weaker” formulation. At least for value this notion is based on models and not on evidence. Other models take objective values as inputs without transforming them into a subjective scale.

4.) “Contemporary decision making theory” is a very unusual term in decision science.

5.) More than 20% of the participants were excluded, which obviously influences the results. However, I do not see obvious “mistakes” resulting in exclusion but relevant behavior. Some people might not be able to perform well in the Gestalt task. Yes, if they chose “different” in identical trials they might have had a prior. But many other participants might have had one, too. Perhaps they chose “different” when the distance was 20.5 pixels just because they had a prior and not because they perceived a difference. I do not see the 50% threshold as a valid exclusion criterion. Similarly for the slope of the logistic regression: obviously, these people exist. Why should their behavior not map to behavior in the decoy task? How was the “more than 96%” criterion chosen? These participants, too, do not necessarily make mistakes but show their preferences. I would wish to see analyses including all participants.

6.) The criterion for the attraction effect is a >50% choice probability for the target. Given some noise, I would always expect some individuals above or below 50%, which is not necessarily a sign of the attraction or repulsion effect.

7.) Logistic regressions: Instead of focusing on significance levels of slopes, I would want to see a model comparison of models including value difference or not (e.g. based on BIC).

8.) If I understand the analyses right, the main finding is a correlation between susceptibility to the Gestalt law of proximity and the size of the attraction effect. The question that now arises is why. I would encourage the authors to aim at answering this question. At the moment, two behavioral outcome variables are correlated. An interesting approach would be to identify models that predict these behavioral outcome variables (e.g., MDFT) and see if parameters of these models are correlated. Ideally, a single model can be defined predicting behavior in both the Gestalt and the decoy task, including a single parameter driving the observed correlations between behavioral outcome variables.

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Mikhail S. Spektor

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Oct 28;15(10):e0240937. doi: 10.1371/journal.pone.0240937.r002

Author response to Decision Letter 0


6 Sep 2020

Re: PONE-D-20-14306

Attraction to similar options: the Gestalt law of proximity is related to the attraction effect

Dear Editor,

We sincerely thank you and the reviewers, for the time and effort taken to help us improve our manuscript. We hope that we adequately addressed the concerns raised by the reviewers. Please find below the comments we received in bold, and our responses in italics. Note that all the additions and corrections in the main text and the supplementary information are highlighted in yellow.

Concerns raised by Reviewer 1:

Major comments

The paper investigates an interesting hypothesis and extends our understanding of the similarities between perceptual and value-based decision making. The paper is well written and structured, the methods are described in detail, and the analyses are conducted rigorously. In general, I think this is a good paper worthy of publication. However, I have a few minor questions and one major concern regarding the theoretical background and the interpretation of the results that, in my opinion, has to be addressed prior to publication.

The authors motivate their research by the question whether the perception of value distance is similar (or analogous) to the perception of actual physical distance. They use a psychophysical task in which individuals have to tell whether two arrays of dots are identical or different to determine the sensitivity threshold and a standard risky-choice task to determine the degree to which individuals exhibit the attraction effect. A negative correlation between the threshold (i.e., higher values = lower sensitivity) and the attraction effect (i.e., higher values = stronger influence of context) emerges which the authors interpret as a commonality, even claiming that the results "strengthen the notion that the brain generalizes across domains" (without it being a neuroscientific investigation).

In the perceptual task, individuals faced a total of 192 trials, out of which 174 were between two different arrays and 18 were between two identical arrays. Despite an achievable accuracy of more than 90% by merely pressing "different" on every trials (individuals that the authors excluded), there have been substantial individual differences with respect to sensitivity in the task. The authors interpreted that those with a low threshold are more sensitive to a perceptual heuristic. I find this classification quite surprising, given that individuals with a low threshold are those that performed the task more accurately (objectively better) than those with a high threshold. To me, it seems that those with a higher threshold are simply less engaged with the task/more random in their response behavior. Strikingly, random behavior would also be reflected in a regression toward 50% choices of the target in the decoy task (i.e., weaker attraction effect).

If this is the case, the observed correlation is merely a statistical necessity reflecting randomness due to boredom/task engagement, such that more random people have both a higher threshold and a smaller attraction effect. The conclusions of the authors would not hold in such a scenario. Thankfully, the authors' design of both task allows to distinguish the two interpretations: In both tasks, each unique trials is repeated (at least) 8 times. This property can be used to quantify the individuals' consistency, in other words, the proportion of choosing the same option across repetitions. A person that is engaged in the task but is simply not very sensitive will have a high consistency and a rather abrupt cut-off as the variable stimulus becomes less similar to the constant one. In contrast, a person that is less engaged will have a much more uniform distribution of error rates across the different difficulties. The analogous reasoning applies in the risky choice task.

The authors should provide a more convincing reasoning why individuals with a lower threshold are more susceptible to a heuristic and present analyses that rule out that the confound of task engagement is the main driver of the observed effect.

Response:

We thank the reviewer for the kind words, and think that these are very important points regarding the relation between heuristics and accuracy and the potential confound of task engagement. We agree that it might seem odd that subjects with a lower threshold are more sensitive to the heuristic. However, this is a matter of how a heuristic is defined. Over the last decade, a substantial literature on efficient coding has treated biases and heuristics, both sensory and value-based, as optimal with respect to the environmental conditions (Polania et al., Nature Neuroscience, 2019; Louie et al., Ann N Y Acad Sci, 2012; Wagemans et al., Psychol Bull, 2012). That is, in certain situations, the heuristic can be considered efficient under the relevant conditions and constraints, and hence should not always be treated as an error. Moreover, our “task was based on a psychophysical task presented in (Gori et al., Vision Research, 2010) where a higher tendency to detect differences in physical distance as well as the tendency to group by proximity is translated to a lower threshold. A subject who is more susceptible to grouping by proximity will detect the differences between the constant stimulus and the variable stimulus at a much lower distance between the triplets of dots, since it would be easier for her to group the row of 12 dots into 4 groups of triplets.” We added the part in quotation marks as a clarification of this point in the description of the Gestalt task in the main text (p. 10, line 248).
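To make the threshold concrete, here is a minimal sketch of a per-subject logistic (psychometric) fit in Python; the variable names are illustrative assumptions, and the paper's exact fitting procedure may differ:

    import numpy as np
    import statsmodels.api as sm

    # Sketch: fit a logistic psychometric curve for one subject in the
    # Gestalt task. 'interval' is the pixel-interval increase per trial;
    # 'said_different' is 1 when the subject responded "different".
    def psychometric_fit(interval: np.ndarray, said_different: np.ndarray):
        fit = sm.Logit(said_different, sm.add_constant(interval)).fit(disp=0)
        b0, b1 = fit.params
        threshold = -b0 / b1   # interval at which P("different") = 0.5
        return threshold, b1   # lower threshold = stronger grouping by proximity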

Regarding the reviewer’s concern that the level of task engagement might be a confound in the study, we performed the analyses suggested by the reviewer, which showed that the negative link between the Gestalt threshold and the choice proportion of the target is not merely due to task engagement. We included a general explanation regarding these analyses in the main text under the results section (page 20, line 481):

“To exclude the possibility that the significant negative link between the Gestalt threshold and the probability to choose the target is merely due to task engagement, such that subjects who were less engaged in the Gestalt task (and thus have higher thresholds) were also less engaged in the Decoy task (and thus have lower attraction effect sizes), we performed further analyses and added them to the supplementary material (S3 text).

In the perceptual task, in order to examine task engagement, we measured the slope of the logistic regression fit for each subject. The slope of the logistic fit reflects how accurate the subject was in general, across all intervals (the distribution of error rates across trial difficulties). We then used the Gestalt slope as a predictor in our main analysis instead of the Gestalt threshold, and found no significant effect of the Gestalt slope on the probability to choose the target in both experiments (Experiment 1: β=0.12, p=0.52; Replication: β=0.04, p=0.67; Table 1 in S3 text). These results indicate that there is no systematic relation between the error rates (task engagement) in the Gestalt task and the choice proportion of the target in the Decoy task.

Regarding the Decoy task, it is impossible to define a choice error since there is no correct answer in each trial (except for the first-order stochastically dominated trials, in which all the subjects, except one, chose the 100% winning probability options all the time). Nonetheless, equivalently to the slopes of the logistic fits in the Gestalt task, we measured the choice variance in each trial type in the Decoy task (there were 32 different trial types that were repeated 8 times each). We calculated two measurements of task engagement in the Decoy task: 1) the mean of choice variance, which gives an indication of how consistent the subject was per trial type (the smaller this mean of choice variance, the more consistent the subject was per trial type, and thus the more engaged in the Decoy task), and 2) the variability across trial types, which indicates whether the subject responded differently across the different trial types (the smaller the variability of choices across trial types, the less the subject changed her response according to the different trial types, and thus, we assume, the less engaged she was in the task) [detailed equations of the two measurements are available in S3 text]. When we examined the correlation between each of these measurements of task engagement (the mean of choice variance per trial type and the variability of choices across trial types) and the choice proportion of the target, we observed no significant correlation between either of the measurements of task engagement in the Decoy task and the choice proportion of the target (Fig. 7 in S3 text). This indicates that subjects who had a higher variance in their choices per trial type or a small variability across trial types, and thus were probably less engaged in the Decoy task, did not systematically choose the target more or less often.

Moreover, there is no significant correlation between either of the measurements of task engagement in the Decoy task (the mean of choice variance per trial type and the variability across trial types) and the measurement of task engagement for the Gestalt task (the slope of the logistic fit) (Fig. 8 in S3 text), which demonstrates that there is no connection between the levels of task engagement in the two tasks.

Finally, we ran a regression analysis which includes VD, Gestalt threshold, and the task engagement measurements (Gestalt slope for the Gestalt task and the mean of choice variance per trial type for the Decoy task) as predictors of the choice proportion of the target for both experiments, and observed that none of the task engagement measurements had a significant effect on the choice proportion of the target in either experiment (Table 2 in S3 text). Furthermore, the coefficients of our main predictors (Gestalt threshold and VD) in the model which includes the task engagement measurements (Table 2 in S3 text) were very similar to the coefficients of our main predictors in our main model in the paper (Table 4) in both experiments.

These results suggest that the effect of the sensitivity to the proximity law on the choice proportion of the target is not related to task engagement (see S3 text for more details).”
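As a rough sketch of this final regression, the fixed-effect estimates with subject-clustered robust standard errors could be approximated as follows; the reported model also included a random slope for value distance, which this simplified version omits, and the column names are assumptions:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Sketch: logistic regression of target choice on value distance,
    # Gestalt threshold, and the two task engagement measurements, with
    # standard errors clustered by subject.
    def fit_engagement_model(trials: pd.DataFrame):
        model = smf.logit("chose_target ~ vd + gestalt_threshold + "
                          "mean_choice_variance + gestalt_slope", data=trials)
        return model.fit(cov_type="cluster",
                         cov_kwds={"groups": trials["subject"]}, disp=0)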

Additionally, we included detailed analyses in the supplementary material (S3 text under the ‘confound analyses’ section). We added all the following parts in quotation marks to the supplementary text. The parts without quotation marks are further explanations for the reviewer.

“We performed further analyses in order to exclude the possibility that the significant negative link between the Gestalt threshold and the probability to choose the target is merely due to task engagement, such that subjects who were less engaged in the Gestalt task (and thus have higher thresholds) were also less engaged in the Decoy task (and thus have lower attraction effect sizes).”

Regarding the perceptual task, we agree with the reviewer that since there were 192 trials, out of which 174 were between two different arrays and 18 were between two identical arrays, responding “different” all the time would result in a 90% overall accuracy even in cases where a subject is not engaged at all in the task (as the reviewer mentioned, we excluded these subjects, and we expand on this in the answer to comment 5 of Reviewer 2). However, as the reviewer suggested, we examined the distribution of choice consistency (variance) across all the different trial difficulties (different pixel-interval increases) for each non-excluded subject (see Appendix A at the end of this letter), and also averaged this across subjects (figure below). As shown in the figure below (and as can be seen for each subject in Appendix A), the subjects’ choice variance changed according to the difficulty of the trial: the subjects were more consistent in the easier trials (when the distance between the triplets of dots was either very small or very large) and more variable in the harder trials (around the area of the threshold (5-8 pixels)). This indicates that the subjects were affected by the trial difficulty and were not simply pressing “different” all the time.

Since the slope of the logistic regression fit is a well-known measurement of task engagement in psychophysical tasks, we examined whether there is a relation between it and the choice consistency. In order to perform this analysis, we averaged, for each subject, the choice consistency across all trials, and correlated it with the slope of the logistic regression. We found a very high correlation between the mean of choice variance and the slope of the logistic fit (Experiment 1: R=-0.96, p<0.001, n=38; Replication: R=-0.92, p<0.001, n=81; figure below). This demonstrates that the mean of choice variance of a subject is very similar to the slope of the logistic regression fit.
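This check reduces to a Pearson correlation over one value per subject; a short sketch, with variable names assumed:

    import numpy as np
    from scipy.stats import pearsonr

    # Sketch: correlate per-subject mean choice variance with the
    # per-subject slopes of the logistic fits (one entry per subject).
    def engagement_correlation(mean_choice_var: np.ndarray, slopes: np.ndarray):
        r, p = pearsonr(mean_choice_var, slopes)
        return r, p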

“The slope of the logistic fit reflects how accurate the subject was in general, across all intervals (the distribution of error rates across trial difficulties). That is, a flat curve means a fully random subject, while the steeper the slope, the more accurate the subject is, with a step function being perfectly accurate.” As shown in the correlation we conducted, the flatter the slope, the more variable the subject is across the different trial difficulties (higher mean choice variance). This suggests that the higher the error rates (lower slope) and the more variable the subject is across all trials, the less engaged she is in the task.

Since the slope of the logistic regression fit is a well-known measurement of task engagement in psychophysical tasks, and since it correlates significantly with the mean of the choice variance, we will use the slope as our measurement of task engagement in the Gestalt task throughout this discussion and in the supplementary text. “Importantly, in our main analysis (Tables 2-4 in the main text; figures 3 and 6 in the main text), as a first step, we excluded subjects with a slope that was not significantly (p<0.05) higher than 0, meaning that subjects with a uniform distribution of errors across the different difficulties (very low slope) were excluded from the analyses because they were not engaged in the task” (we expand on this point in the answer to comment 5 of Reviewer 2 as well).

“Moreover, when we used the Gestalt slope as a predictor in our main analysis instead of the Gestalt threshold, we found no significant effect of the Gestalt slope on the probability to choose the target in both experiments (Experiment 1: β=0.12, p=0.52; Replication: β=0.04, p=0.67; Table 1). These results indicate that there is no systematic relation between the error rates (task engagement) in the Gestalt task and the choice proportion of the target in the Decoy task. Hence, this strengthens the notion that task engagement was not the driver of the main relationship we found between the two tasks.

Table 1. Summary of the mixed effects logistic regression model for variables predicting the choice proportion of the target.

                            Experiment 1 (n = 38)          Replication (n = 81)
Fixed-effect Parameters     B      SE#    Z      p-val     B      SE#    Z      p-val
Constant                    0.15   0.07   1.93   .05       0.15   0.05   3.12   <.01**
Value distance             -0.25   0.14  -1.83   .06      -0.17   0.09  -1.80   .07
Gestalt Slope               0.12   0.17   0.64   .52       0.04   0.09  -2.72   .67

Random-effects Parameters   var                            var
Constant                    0.00                           0.00
Value distance              0.14                           0.16

# Robust Std. Err. (Errors clustered by Subject); * p<.05 ** p<.01 *** p<.001”

Additionally, we ran the same regression using the mean of choice variance in the Gestalt task instead of the Gestalt threshold in order to further validate our conclusion. As presented in Table 2, we found no significant effect of the mean of choice variance in the Gestalt task on the choice proportion of the target in both experiments (Experiment 1: β=-0.94, p=0.19; Replication: β=-0.48, p=0.26; Table 2).

Table 2. Summary of the mixed effects logistic regression model for variables predicting the choice proportion of the target.

                                  Experiment 1 (n = 38)          Replication (n = 81)
Fixed-effect Parameters           B      SE#    Z      p-val     B      SE#    Z      p-val
Constant                          0.56   0.17   3.32   <.01**    0.25   0.08   3.23   <.01**
Value distance                   -0.25   0.14  -1.81   .07      -0.17   0.09  -1.81   .07
Gestalt mean of choice variance  -0.94   0.71  -1.31   .19      -0.48   0.43  -1.12   .26

Random-effects Parameters         var                            var
Constant                          0.00                           0.00
Value distance                    0.17                           0.17

# Robust Std. Err. (Errors clustered by Subject); * p<.05 ** p<.01 *** p<.001

“Regarding the Decoy task, it is impossible to define a choice error since there is no correct answer in each trial (except for the first-order stochastically dominated trials, in which all the subjects, except one, chose the 100% winning probability options all the time).” Therefore, similarly to what the reviewer suggested, we defined subjects whose behavior (choice) did not change across the different trials (different levels of decoy) as not engaged in the task, and this was our exclusion criterion for the Decoy task (we expand on this point in the answer to comment 5 of Reviewer 2 as well).

“Nonetheless,” we performed the analysis which the reviewer suggested and, “equivalently to the slopes of the logistic fits in the Gestalt task, we measured the choice variance in each trial type in the Decoy task. There were 32 different trial types (2 different data sets, 2 attributes (probability/amount), 2 different types of decoy (range/frequency), 4 options of VD), which were repeated 8 times each. On the one hand,” as Reviewer 1 pointed out, “we expect that a subject who is engaged in the task will have a relatively small variance in her choices across repetitions of the exact same trial. On the other hand, analogous to the Gestalt task, we would also expect that a subject who is engaged in the task will have some variance across the different trial types (different levels of decoy).

Therefore, we calculated two measurements” to address the reviewer’s concern: “1) the mean of choice variance, where we calculated the variance in choice over the 8 repetitions of each trial type, and then averaged these variances across the 32 trial types for each subject (Equation 1). This average of choice variance gives an indication of how consistent the subject was per trial type (the smaller this average of choice variance, the more consistent the subject was per trial type, and thus the more engaged in the Decoy task), and 2) the variability across trial types (variance of means), where we calculated the probability to choose a specific option (A/B) per trial type and then calculated the variability of these choice probabilities across trial types (Equation 2). This measurement indicates whether the subject responded differently across the different trial types (the smaller the variability of choices across trial types, the less the subject changed her response according to the different trial types, and thus, we assume, the less engaged she was in the task).

Equation 0:

\[ P_{jk} = \frac{1}{N_i} \sum_{i=1}^{N_i} x_{ijk} \]

$P_{jk}$ is the choice probability of option B* across the 8 repetitions of a trial type, where $i$ indexes the repetition ($N_i = 8$ per trial type), $j$ the trial type ($N_j = 32$ per subject), and $k$ the subject.

Equation 1:

\[ X_k = \frac{1}{N_j} \sum_{j=1}^{N_j} P_{jk}\,(1 - P_{jk}) \]

$X_k$ is the average of choice variance. We first calculated the variance across the repetitions of each trial type, and then averaged these variances for each subject.

Equation 2:

\[ Y_k = \frac{\sum_{j=1}^{N_j} \left(P_{jk} - \bar{P}_k\right)^2}{N_j - 1} \]

$Y_k$ is the variability across trial types (variance of means). We first calculated the choice probability across the repetitions of each trial type, and then calculated the variance of these choice probabilities for each subject. $\bar{P}_k$ stands for the mean choice proportion of option B for subject $k$.

* We arbitrarily chose option B; the measure is equivalent for both options (A and B).
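For concreteness, a minimal sketch of how Equations 0-2 could be computed from trial-level data; the column names ('subject', 'trial_type', 'chose_B') are assumptions for illustration, not the original analysis code:

    import pandas as pd

    # Sketch: per-subject engagement measures in the Decoy task. Each
    # (subject, trial_type) cell holds the 8 repetitions of that trial
    # type; 'chose_B' is 1 when option B was chosen.
    def engagement_measures(df: pd.DataFrame) -> pd.DataFrame:
        p_jk = df.groupby(["subject", "trial_type"])["chose_B"].mean()  # Eq. 0
        x_k = (p_jk * (1 - p_jk)).groupby("subject").mean()             # Eq. 1
        y_k = p_jk.groupby("subject").var(ddof=1)                       # Eq. 2
        return pd.DataFrame({"mean_choice_variance": x_k,
                             "variability_across_types": y_k})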

If both of these measurements are indications of task engagement, we would expect a negative connection between them: the smaller the mean of choice variance per trial type, the higher the variability of the choices across trial types (which would indicate a more engaged subject). Interestingly, this is exactly what we observed when we correlated these two measurements, as shown in the figure below (Fig. 6 in S3 text): the X axis represents the mean of choice variance per trial type ($X_k$ in Equation 1), and the Y axis represents the variability of choices across trial types ($Y_k$ in Equation 2) [Experiment 1: R=-0.46, p<0.01, n=38; Replication: R=-0.63, p<0.001, n=81].”

To answer the reviewer’s concern regarding “a potential connection between task engagement in the Decoy task and the tendency to show an attraction effect”, we examined the correlation between each of these measurements for task engagement (the mean of choice variance per trial type and the variability of choices across trial types) and the choice proportion of the target.

As shown in the figure above (Fig. 7 in S3 text), there is no significant correlation between the mean of choice variance per trial type and the choice proportion of the target in both experiments (Experiment 1: R=0.23, p=0.17, n=38; Replication: R=0.09, p=0.44, n=81; Fig. A). Additionally, there is no significant correlation between the variability of choices across trial types and the choice proportion of the target in both experiments (Experiment 1: R=-0.13, p=0.45, n=38; Replication: R=-0.03, p=0.76, n=81; Fig. B). These results indicate that subjects who had a higher variance in their choices per trial type or a small variability across trial types, and thus were probably less engaged in the Decoy task, did not systematically choose the target more or less often.

Moreover, in order to test whether there is a connection between the levels of task engagement in the two tasks, we examined the correlation between each of the two measurements of task engagement in the Decoy task (the mean of choice variance per trial type and the variability across trial types) and the measurement of task engagement for the Gestalt task (the slope of the logistic fit). We found no significant correlation between the mean of choice variance per trial type in the Decoy task and the Gestalt slope in both experiments (Experiment 1: R=0.16, p=0.32, n=38; Replication: R=-0.12, p=0.27, n=81; Fig. A), as well as no significant correlation between the variability across trial types in the Decoy task and the Gestalt slope (Experiment 1: R=-0.18, p=0.28, n=38; Replication: R=0.09, p=0.42, n=81; Fig. B), which demonstrates that subjects who were less engaged in the Decoy task were not necessarily less engaged in the Gestalt task as well. Hence, this strengthens our interpretation of the relation that we found between the two tasks and indicates that it is not caused by a lack of task engagement.

Finally, we ran a regression analysis which includes VD, Gestalt threshold, and the task engagement measurements (Gestalt slope for the Gestalt task and the mean of choice variance per trial type for the Decoy task) as predictors of the choice proportion of the target for both experiments, in order to examine whether there is any overall influence of task engagement on our main results [we did not include the variability of choices across trial types as a predictor since it correlates significantly with the mean of choice variance per trial type].

Table 3. Summary of the mixed effects logistic regression model for variables predicting the choice proportion of the target, including task engagement measurements.

                                         Experiment 1 (n = 38)          Replication (n = 81)
Fixed-effect Parameters                  B      SE#    Z      p-val     B      SE#    Z      p-val
Constant                                 0.40   0.18   2.23   <.05*     0.44   0.12   3.78   <.001***
Value distance                          -0.25   0.14  -1.86   .06      -0.17   0.09  -1.82   .07
Gestalt Threshold                       -0.04   0.02  -2.06   <.05*    -0.03   0.01  -2.69   <.01**
Mean of choice variance per trial type   0.26   0.54   0.48   .63      -0.22   0.35  -0.64   .52
Gestalt Slope                            0.03   0.17   0.15   .88      -0.03   0.09  -0.36   .72

Random-effects Parameters                var                            var
Constant                                 0.00                           0.00
Value distance                           0.14                           0.16

# Robust Std. Err. (Errors clustered by Subject); * p<.05 ** p<.01 *** p<.001

As presented in Table 3 (Table 2 in S3 text), none of the task engagement measurements had a significant effect on the choice proportion of the target in either experiment. Furthermore, the coefficients of our main predictors (Gestalt threshold and VD) in the model which includes the task engagement measurements (Table 3) were very similar to the coefficients of our main predictors in our main model in the paper (Table 4 in the main text) in both experiments. This result suggests that the effect of the sensitivity to the proximity law on the choice proportion of the target is not related to task engagement.

In sum, these results demonstrate that the significant negative link between the Gestalt thresholds and the choice proportion of target is not merely due to the level of engagement in both tasks.”

Additional minor comments:

1) The authors mention on multiple occasions that there are substantial individual differences and that about a quarter of their sample show the opposite of the attraction effect, the repulsion effect. The presence of individuals on the "other" side of the effect is more-or-less a necessity; if these individuals were not there, the observed effect size would be substantially greater. If the authors want to make this claim, they would need to run tests on an individual level (e.g., a binomial test). Most likely, only few of those people (if any) will show a significant deviation from a 50% chance of choosing the target option.

This is an important comment, since one of the main claims of the paper is that there are individual differences and thus, we aim to show the results at the individual level. To address this comment, we added additional analyses in the main text in the results section of Experiment 1 (page 16, line 397):

“Although the average effect across subjects is significant, it is a rather small effect. This is probably because there is considerable heterogeneity across subjects in their probability to choose the target (the probabilities range between 0.38 and 0.68 (Fig 4A)). Therefore, we additionally examined, separately for each subject, the effect of adding a decoy on their probability to choose the target option using a binomial test. We found that only ~20% of subjects chose the target at a rate significantly different from 50% (Experiment 1: 7 out of 38 subjects (18%) chose the target at a rate significantly different from 50% (p<0.05); detailed individual results are available in S6 Appendix). While most of the subjects who showed a significant decoy effect displayed an attraction effect, 29% of them displayed the opposite effect (a repulsion effect – a higher probability to choose the competitor when the decoy was asymmetrically dominated by the target [37, 38]). These results are in line with previous studies which posited that decoy effects are usually weak effects [40, 54] and that there are considerable differences between subjects [54].”
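The per-subject test quoted above amounts to a two-sided binomial test against a 50% choice rate; a minimal sketch (using scipy.stats.binomtest, available in SciPy 1.7+; the data layout is hypothetical):

```python
# Per-subject binomial test: does the probability of choosing the target
# differ from 50%? `target_counts` maps subject id -> (n_target_choices,
# n_trials); this data layout is hypothetical.
from scipy.stats import binomtest

def decoy_effect_per_subject(target_counts, alpha=0.05):
    results = {}
    for subject, (n_target, n_trials) in target_counts.items():
        test = binomtest(n_target, n_trials, p=0.5)  # two-sided by default
        results[subject] = (n_target / n_trials, test.pvalue < alpha)
    return results

# Example: a subject choosing the target on 70 of 120 target/competitor trials
print(binomtest(70, 120, p=0.5).pvalue)
```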

Additionally, we added these analyses in the results section of the Replication experiment (page 24, line 564):

“Additionally, there was high variability across subjects in their probability to choose the target (the probabilities range between 0.41 and 0.83 (Fig 4B)). Similar to Experiment 1, ~20% of the subjects displayed a significant decoy effect at the individual level (17 out of 81 subjects (21%) chose the target at a rate significantly different from 50% (p<0.05); detailed individual results are available in S6 Appendix). Moreover, similarly to Experiment 1, most of the subjects who showed a significant decoy effect displayed an attraction effect, while 18% displayed a repulsion effect.”

Moreover, we added this explanation in the discussion (page 28, line 676):

“We observed, in both experiments, that only ~20% of the subjects displayed a significant decoy effect at the individual level. It is important to note that most previous studies described only group effects [1-3, 14], either because the study was a between-subjects design or because the study only focused on group effects. However, studies that did examine and report results at the individual level show that there are systematic differences across subjects in regard to the influence of context on their behavior [54, 55, 40] and posit that decoy effects are usually weak effects [40, 54], similar to our results.”

It is important to note that despite the small attraction effect size and the large variability between subjects, we were able to show a significant attraction effect across subjects in both experiments, as well as to explain the tendency to choose the target using the Gestalt threshold.

2) As a robustness check, the authors checked for violations of WASRP. A correlation of r=.99 suggests that the main analysis and the supplemental analysis are almost identical. However, I do not see how these analyses differ from each other. Does either of the analyses include trials on which the decoy was chosen? It would be helpful if the authors could clarify the differences between the analyses.

We thank Reviewer 1 for this comment. We understand that these analyses were unclear in the original text. Therefore, to clarify our purpose, we added further explanation in the main text (the exact wording and its location are given below in this response) as well as detailed analyses in S3 text under the ‘Robustness analyses’ section.

We used mixed-effect logistic regressions in our main analyses, which require a binary dependent variable per trial. This is the reason that we used the choice of either target or competitor as a measurement per trial for each subject, and the choice proportion of the target over all trials (excluding trials in which the decoy option was chosen – 1% of the trials) as a measurement per subject.

The purpose of the robustness analyses was to examine whether our measurement (choice proportion of the target) is consistent with other measurements that are usually applied in studies of decoy effects. We wanted to discard the possibility that our main result is due to the specific measurement we chose. Therefore, we measured the attraction effect with three additional measurements: violation of WASRP (the Weak Axiom of Stochastic Revealed Preference) (Bandyopadhyay et al., 1999; Castillo, 2020), violation of regularity (Tversky, 1972), and relative choice share of the target (RST) (Berkowitsch et al., 2014; Spektor et al., 2018). The comparisons of WASRP violation and RST with our measurement (choice proportion of target) indicated that these three measurements are very similar to each other. This assured us that our measurement is valid and consistent with other measurements in the literature.

It is important to note that both WASRP violation and RST allow a combination of both options A and B as a target, while the regularity violation does not. This is because there is no definition of a target option in the Basic condition. Therefore, we compared our measurement (choice proportion of target) only to WASRP violation and RST. Nonetheless, in the robustness analyses (in S3 text), we present the measurement of the attraction effect using the regularity violation with our data.

These are the additional explanations that we added in the main text (page 12, line 311):

“We are aware that there are other measurements for the attraction effect; however, we chose this one specifically since our main analyses, in which we used mixed-effect logistic regressions, required a dependent variable per trial (we used the choice in each trial: target/competitor for each subject). We included analyses of the attraction effect using three other measurements that are used in the literature: violation of WASRP (the Weak Axiom of Stochastic Revealed Preference) [39, 40], violation of regularity [6] and relative choice share of the target [38]. We concluded that most of the measurements are very similar to the one we used and thus yield similar results (see S3 text; Robustness analyses for more details).”

And (page 17, line 410):

“Moreover, we examined the robustness of our measurement for the attraction effect size (choice proportion of target) by comparing it with three other measurements that are used in the literature: violation of WASRP (the Weak Axiom of Stochastic Revealed Preference) [39, 40], violation of regularity [6] and relative choice share of the target [38]. We concluded that most of the measurements are very similar to the one we used and thus yield similar results [detailed information and analyses are provided in S3 Text].”

Additionally, as mentioned above, we described the additional analyses in detail in S3 text under the ‘Robustness analyses’ section.

3) In the basic condition, option A seems to be preferred to option B (60% vs. 40%, Fig. 5 in S3), at least on average. Since this is not mentioned anywhere in the text I have to guess, but I assume that A is the safe option and B is the riskier option. Moreover, there are substantial differences in the choice proportions, such that the values range from 0% to about 80%, whereas they should be closer to 50%. The authors should at least mention and briefly discuss that their calibration procedure did not achieve the desired result.

We appreciate the reviewer’s comment. First of all, in the method section (line 196), we clearly describe the attributes (amount and probability) of both A and B. Based on this description, it is clear that option A is indeed the safer option. However, as requested, we added further analyses and clarifications of the basic preference in the supplementary material (S3 text, under the “Higher choice proportion of the safer option” section):

“It is important to note that although we aimed to reach indifference between options A and B using the Calibration task, the safer option (option A) was chosen more often across subjects in the Basic condition in both experiments, albeit significantly so only in the Replication experiment (Experiment 1: mean choice proportion of option A: 0.61, t(38)=1.89, p=0.07; Replication: mean choice proportion of option A: 0.57, t(81)=2.85, p<0.01; Fig. 5 and Fig. 9 (figure below)). There is a very wide range of preferences for option A in both experiments (Experiment 1: from 0.125 to 1; Replication: from 0 to 1; Fig 9 (figure below)). Additionally, we conducted a binomial test to examine the significance of the preference towards a specific option per subject. In both experiments, ~30% of the subjects significantly preferred one of the options (Experiment 1: 26% of the subjects significantly preferred option A over B and none significantly preferred option B; Replication: 32% of the subjects significantly preferred one of the options, and 70% of these significantly preferred option A over B). This indicates that although we aimed for subjects to be indifferent in subjective value between options A and B by using the Calibration task, around a third of the subjects had a significant difference in subjective value between the options. A previous study demonstrated that an increase in the subjective value difference between the relevant options (A and B) leads to a decrease in the attraction effect [10]. Therefore, this could be one of the reasons for the small attraction effect sizes in our study. Nonetheless, although the calibration task did not work perfectly, we were able to show a significant attraction effect across subjects as well as a significant link between the choice proportion of the target and the susceptibility to grouping by proximity.”

Additionally, we mention this in the main text in the discussion (page 28, line 683):

“Furthermore, although we aimed to reach indifference between options A and B for each subject using the Calibration task, the safer option (option A) was chosen more often across subjects in the Basic condition in both experiments, albeit significantly so only in the Replication experiment. Additionally, for around a third of the subjects in both experiments there was a significant difference in the subjective value between the two options even though we used a Calibration task. This could be another reason for the small attraction effect sizes in our study. Nonetheless, although the calibration task did not work perfectly, we were able to show a significant attraction effect across subjects as well as a significant link between the choice proportion of the target and the susceptibility to grouping by proximity.”

Concerns raised by Reviewer 2:

Comments

This is my first review of the paper “Attraction to similar options: the Gestalt law of proximity is related to the attraction effect” by Izakson et al. In their study, the authors investigate if and how common processes underlying perceptual and value-based decisions might cause the attraction effect. In two studies, one of them a pre-registered replication study, they basically found a correlation between susceptibility to the Gestalt law of proximity and the size of the attraction effect. While this is an interesting finding that should be published, the authors thus far did not aim at understanding why this correlation exists. I therefore recommend doing additional analyses. I furthermore feel uncomfortable with the number of excluded participants and the reasons for exclusion. Please find my detailed comments below.

Please define spatial context.

We added an explanation for spatial context in the main text (page 2, line 60): “Other available or unavailable alternatives in the current environment of the choice set are considered spatial context.”

I would not call attraction effect, compromise effect and similarity effect “decoy effects” but “context effects”.

We apologize if this term is confusing. However, this is the standard term used in many studies in the literature (Tsuzuki & Guo, Proc Annu Conf Cogn Sci Soc, 2004; Pettibone & Wedell, Organ Behav Hum Decis Process, 2000; Pettibone & Wedell, J Behav Dec Making, 2007; Marini & Paglieri, Behav Process, 2019; Dumbalska, Li, Tsetsos & Summerfield, https://psyarxiv.com/p85mb, 2020). Our goal is to use a term that differentiates these three effects (attraction effect, compromise effect, and similarity effect) from other context effects like framing effects or temporal effects. We consider “decoy effects” the appropriate term since, in the field of judgment and decision making, this term is usually used to describe solely these three effects (context effects that occur as a result of the addition of a decoy option). For example:

“These effects all occur with the addition of a third alternative, called the decoy, to a two-alternative choice set and are all called decoy effects.” (page 1351 from Tsuzuki & Guo, Proc Annu Conf Cogn Sci Soc, 2004)

Therefore, to stay consistent with the common terminology in the field, we kindly request to keep and use this term.

Line 114: Transformation into subjective scale: please use a “weaker” formulation. At least for value this notion is based on models and not on evidence. Other models take objective values as inputs without transforming them into a subjective scale.

We edited this sentence according to this comment (line 115): “computational models, in both sensory perception and value processes, use transformation of information from objective magnitudes to a subjective scale in order to explain subjects’ performance.”

“Contemporary decision making theory” is a very unusual term in decision science.

We apologize for the confusing term. We meant to use this term to contrast recent decision models with standard rational choice theories. In order to clarify our meaning, we changed this term to “suggested computational models” in line 86, and to “recent decision models” in line 125.

More than 20% of the participants were excluded, which obviously influences the results. However, I do not see obvious “mistakes” resulting in exclusion but relevant behavior. Some people might not be able to perform well in the Gestalt task. Yes, if they chose “different” in identical trials they might have had a prior. But many other participants might have had one, too. Perhaps they chose “different” when the distance was 20.5 pixels just because they had a prior and not because they perceived a difference. I do not see the 50% threshold as a valid exclusion criterion. Similarly, the slope of the logistic regression. Obviously, these people exist. Why should their behavior not map to behavior in the decoy task? How was the “more than 96%” criterion chosen? These participants, too, do not necessarily make mistakes but show their preferences. I would wish to see analyses including all participants.

We thank the Reviewer for this important comment. The exclusion criteria were a major issue in the study and were one of the main reasons that we conducted a replication experiment. We first performed a thorough exploration of our data from Experiment 1 and based all of our exclusion criteria on it. After we pre-registered the exclusion criteria, we carefully applied exactly the same criteria in the Replication experiment. Thus, the exclusion criteria were data-driven according to Experiment 1 and were applied to the data from the Replication experiment without looking at the data first. Therefore, we had to calculate numerical thresholds for each exclusion criterion so we would be able to exclude subjects in the Replication experiment without looking at the data.

Moreover, following this comment, we added the following text to the main text (page 13, line 332): “Subjects were also excluded due to lack of engagement in the Decoy task; that is, if they chose the same option in more than 96% of the trials in at least two out of the four blocks of the task, which indicates that they showed no variation in their choices across the different trial types (analogous to a low slope in the Gestalt task). We chose the 96% threshold based on a thorough exploration of our data from Experiment 1, on which we based all of our exclusion criteria. These exclusion criteria were listed in the pre-registration of the replication experiment.”
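A minimal sketch of this pre-registered check, assuming choices are stored per block; the function name and data layout are illustrative, not the authors' code:

```python
# Flag a subject as unengaged in the Decoy task if, in at least two of the
# four blocks, they chose the same option in more than 96% of trials.
# The data layout (a list of four lists of choice labels) is hypothetical.
from collections import Counter

def low_decoy_engagement(choices_by_block, threshold=0.96, min_blocks=2):
    flagged_blocks = 0
    for block in choices_by_block:
        # Share of trials on which the most frequent option was chosen
        top_share = Counter(block).most_common(1)[0][1] / len(block)
        if top_share > threshold:
            flagged_blocks += 1
    return flagged_blocks >= min_blocks
```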

Regarding the perceptual task, since there were 192 trials, of which 174 presented two different arrays and 18 presented two identical arrays, responding “different” all the time would allow a subject to reach 90% accuracy without being truly engaged in the task (it is important to note that the subjects did not know about this proportion). In order to exclude such subjects, we excluded subjects according to their accuracy in the identical trials (which we referred to as “catch” trials). We decided specifically on the 50% threshold since this is the chance level. Therefore, a subject who was wrong in at least 50% of the identical trials was considered to be at chance level and either biased towards answering “different” or simply not paying attention to the task. It is important to note that only a few subjects showed such behavior in both experiments (Experiment 1: 3 out of 52 subjects; Replication: 4 out of 102 subjects).

The exclusion criterion based on the slope of the logistic regression was meant to exclude subjects who were not engaged in the task. We considered subjects who had a uniform distribution of errors across the different difficulty levels in the Gestalt task (different interval increases) as not engaged in the task. This is especially relevant in psychophysical tasks where there is a true correct answer. The lower the slope of the logistic function, the more uniform the subject's error rates are across the different difficulty levels. Moreover, if the slope is negative, the subject made more errors in the easier trials than in the harder trials, which makes it very hard to believe that he/she was truly engaged in the task.

Such lack of consistency in the task is a sign of almost complete disinterest in the task or a lack of understanding of the task's instructions. Therefore, we inferred that the results of such subjects do not represent their true sensitivity in the perceptual task. Moreover, in Experiment 1 (the experiment on which we based these exclusion criteria), most of the subjects who were excluded due to 50% or lower accuracy in the identical trials were also excluded due to the slope criterion (3 out of 4 subjects showed both behaviors). Additionally, their thresholds were extremely different from the thresholds of the other subjects, as can be seen in the following figure (outliers with thresholds of -61.42, -13.78, -20.21 and 1.21 are presented as red dots while the other subjects are presented as blue dots). Thresholds lower than 0 make no sense in a psychophysical task. This indicates that our exclusion criteria relate to each other, meaning that most subjects who showed one problematic feature usually also showed others.
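To make the threshold and slope terminology concrete, here is a sketch of fitting a logistic psychometric function to Gestalt-task responses with SciPy; the data points are made up for illustration, and the parameterization is one common choice rather than necessarily the authors' exact fit:

```python
# Fit a logistic psychometric function: P("different") as a function of the
# interval increase between the dots (pixels). The threshold is the interval
# at which P = 0.5; the slope captures steepness. Example data are made up.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

intervals = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0])       # pixel increases
p_different = np.array([0.10, 0.20, 0.45, 0.70, 0.90, 0.95])  # illustrative

(threshold, slope), _ = curve_fit(psychometric, intervals, p_different,
                                  p0=[10.0, 0.5])
print(f"threshold = {threshold:.1f} px, slope = {slope:.2f}")
# A near-zero slope means error rates barely change with difficulty;
# a negative slope means more errors on easy trials than on hard ones.
```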

Regarding the Decoy task, the 96% threshold was chosen based on Experiment 1. As was also mentioned in the first response to Reviewer 1, it was more complicated to define an error in the Decoy task since there is no correct answer in each trial (except for the first-order stochastically dominated trials, in which all subjects except one chose the 100% winning-probability options all the time). Therefore, during our exploratory analysis of Experiment 1 only, we first investigated the pattern of choices for each subject using decision trees. Using this method, we noticed that subjects who chose the same option during most of the task were driven by variables such as trial number, block number, or RT. Those subjects usually had a different strategy at the beginning of the task, but then, from a particular block onwards (a different block for each subject), it seems that they got tired (or bored) and started to choose only according to a specific strategy (safer option/larger amount, etc.) and much faster (it seems they did not even consider the other options in the choice set). This examination led us, similarly to what Reviewer 1 suggested and analogous to the slope in the Gestalt task, to define subjects whose behavior (choice) did not change across the different trials as not engaged in the task. Using the decision trees, we were able to identify these subjects (10 problematic subjects).

However, as we mentioned at the beginning of this response, we had to calculate a numerical threshold in order to exclude subjects in the Replication experiment without looking at the data. Therefore, we tried different thresholds ranging from 95% to 97% of choosing the same option throughout the task. We found that the interval of 95.4%–96.5% yielded the same problematic subjects that we had identified using the decision trees. We then chose 96% as our threshold since this is the mean of this interval. After collecting the data of the Replication experiment, we applied the 96% numerical threshold to exclude subjects in the Decoy task without looking at the data.

It is important to note that since our goal in the Replication experiment was to exclude subjects without looking at the data, we had no control over the number of excluded subjects. Therefore, for the Replication experiment, we recruited more subjects than needed based on a power analysis and stated a priori, in our pre-registration document, the number of subjects we were going to recruit.

Although there could be other criteria for exclusion, we believe that these data-driven exclusion criteria are the right approach for this kind of experiment. Nonetheless, we understand Reviewer 2's concern regarding the number of subjects who were excluded, and this is one of the main reasons we ran a replication experiment.

Regarding the reviewer’s request to run the analyses including the excluded subjects, we approached this in the following way:

First, we must posit that the exclusion criterion in the Decoy task is more problematic than the exclusion criteria in the Gestalt task. In the Gestalt task there is an objectively correct answer for each trial, which makes it easier to identify subjects who were not engaged in the task (as explained above). Therefore, we distinguish between the exclusion criteria of these two tasks. We first examine our main results including only the problematic subjects from the Decoy task (where the exclusion criterion was much trickier to identify) [Table 4], and then we also include the problematic subjects from the Gestalt task (where the exclusion criteria are more straightforward) [Table 5]. This analysis is equivalent to the analysis reported in Table 4 in the main text (page 19), where we used a mixed-effects logistic regression with value distance and Gestalt threshold as predictors of the choice proportion of the target.

Table 4. Summary of the mixed effects logistic regression model for variables predicting the choice proportion of the target, including the problematic subjects from the Decoy task (Experiment 1: 10 problematic subjects in addition to the 38; Replication: 13 problematic subjects in addition to the 81).

Experiment 1 (n = 48):

| Fixed-effect parameters | B | SE# | Z | p-val |
|---|---|---|---|---|
| Constant | 0.38 | 0.13 | 2.86 | <.01** |
| Value distance | -0.22 | 0.13 | -1.78 | .07 |
| Gestalt threshold | -0.03 | 0.01 | -1.79 | .07 |

Random-effects parameters (var): Constant = 0.03; Value distance = 0.16.

Replication (n = 94):

| Fixed-effect parameters | B | SE# | Z | p-val |
|---|---|---|---|---|
| Constant | 0.33 | 0.08 | 4.33 | <.001*** |
| Value distance | -0.16 | 0.09 | -1.85 | .06 |
| Gestalt threshold | -0.03 | 0.01 | -2.61 | <.01** |

Random-effects parameters (var): Constant = 0.00; Value distance = 0.13.

# Robust Std. Err. (Errors clustered by Subject); * p<.05, ** p<.01, *** p<.001

As shown in Table 4, when we conducted the main analysis while including the subjects who were excluded due to poor task engagement in the Decoy task, we still observed a marginally significant negative effect of the Gestalt threshold in Experiment 1 (β=-0.03, p=0.07) and a significant effect in the Replication experiment (β=-0.03, p<0.01). We also found a marginally significant effect of the value distance in both experiments (Experiment 1: β=-0.22, p=0.07; Replication: β=-0.16, p=0.06). These results indicate that our main effect (the negative effect of the Gestalt threshold on the choice proportion of the target) holds even when we include the subjects whom we defined as problematic in the Decoy task and whom we strongly believe were not engaged in the task.

Table 5. Summary of the mixed effects logistic regression model for variables predicting the choice proportion of the target, including all problematic subjects (Experiment 1: 14 problematic subjects in addition to the 38; Replication: 21 problematic subjects in addition to the 81).

Experiment 1 (n = 52):

| Fixed-effect parameters | B | SE# | Z | p-val |
|---|---|---|---|---|
| Constant | 0.13 | 0.05 | 2.64 | <.01** |
| Value distance | -0.19 | 0.12 | -1.67 | .09 |
| Gestalt threshold | 0.00 | 0.00 | 0.35 | .73 |

Random-effects parameters (var): Constant = 0.03; Value distance = 0.13.

Replication (n = 102):

| Fixed-effect parameters | B | SE# | Z | p-val |
|---|---|---|---|---|
| Constant | 0.13 | 0.03 | 4.21 | <.001*** |
| Value distance | -0.14 | 0.08 | -1.64 | .10 |
| Gestalt threshold | -0.00 | 0.00 | -0.27 | .79 |

Random-effects parameters (var): Constant = 0.00; Value distance = 0.13.

# Robust Std. Err. (Errors clustered by Subject); * p<.05, ** p<.01, *** p<.001

However, as shown in Table 5, when we also included the subjects who were poorly engaged in the Gestalt task, we found no significant effect of the Gestalt threshold or the value distance. This is not really surprising, since the exclusion criteria for the Gestalt task are straightforward, and by looking at the data of the problematic subjects it is obvious that they were not truly engaged in the task; accordingly, they have thresholds that are extremely different from those of the other subjects (such as -61.42, -13.78, and -20.21) and that make no sense in a psychophysical task. Thus, it is impossible to infer any connection to their Decoy results using their Gestalt thresholds.

In sum, in any behavioral paradigm we can find subjects who are obviously not adhering to the task but just pressing buttons to finish and go home. It makes no sense to use them to examine a scientific question, since they do not demonstrate any valid behavior that we can understand or model. This problem is not specific to our paradigm or design. One approach to minimize the number of un-engaged subjects is to use incentive-compatible designs, as we did in our experiments. Another approach is to use exclusion criteria. As we stated above, we agree with Reviewer 2 that exclusion criteria can be problematic since they create many degrees of freedom for the researchers. Therefore, after conducting a thorough investigation to decide on the most relevant exclusion criteria based on Experiment 1, we carefully followed the rules of pre-registration and conducted a replication experiment in order to validate our results.

If Reviewer 2 strongly thinks that these analyses are necessary, we can put them in the supplementary material. However, we are strongly against it: first, because of all the reasons mentioned above, and second, because it defies the whole idea of a replication study in which the exclusion criteria are decided beforehand.

Criterion for attraction effect is >50% choice probability for target. Given some noise, I would always expect some individuals above or below 50%, which is not necessarily a sign for the attraction or repulsion effect.

This is an important comment, and, as in our response to Reviewer 1's first minor comment, we added analyses and clarifications in the main text.

The added analyses and text are identical to those quoted in full in our response to Reviewer 1's first minor comment above: the individual-level binomial tests in the results section of Experiment 1 (page 16, line 397), the corresponding analyses in the results section of the Replication experiment (page 24, line 564), and the discussion of individual-level decoy effects (page 28, line 676).

In sum, we agree that the attraction effect is very weak; however, we claim that a subject's tendency to be affected by the decoy is not random noise but a systematic process, which is why we were able to show its connection to the susceptibility to grouping by proximity.

Logistic regressions: Instead of focusing on significance levels of slopes, I would want to see a model comparison of models including value difference or not (e.g. based on BIC).

To address this comment, we performed likelihood ratio tests (LRT) to compare models and added these analyses to the supplementary material (S3 text, ‘Model comparisons’ section):

“First, we compared a mixed-effects logistic regression which includes only an intercept (M0) with a mixed-effects logistic regression which also includes value distance (VD) as a predictor (M1). As shown in Table 6 (Table 3 in S3 text), the addition of VD as a predictor produced a marginally significant effect in both experiments (Experiment 1: Chisq=3.29, p=0.07; Replication: Chisq=3.22, p=0.07). Moreover, the AIC score of M1 was (slightly) better than the AIC score of M0. This indicates that the addition of VD to the model (slightly) increased the goodness-of-fit.

M0: $P(\text{Choice Target})_{ij} = \frac{1}{1 + \exp\left(-\left(\beta_{0j} + \varepsilon_{ij}\right)\right)}$

M1: $P(\text{Choice Target})_{ij} = \frac{1}{1 + \exp\left(-\left(\beta_{0j} + \beta_{1j}\,\text{Value distance} + \varepsilon_{ij}\right)\right)}$

Table 6. Summary of model comparisons to examine the significance of adding Value distance.

Experiment 1 (n = 38):

| Model | AIC | logLik | Deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|
| M0 | 13260.74 | -6626.371 | 13252.74 | | | |
| M1 | 13259.45 | -6624.723 | 13249.45 | 3.30 | 1 | .069 |

Replication (n = 81):

| Model | AIC | logLik | Deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|
| M0 | 28287.47 | -14139.74 | 28279.47 | | | |
| M1 | 28286.25 | -14138.13 | 28276.25 | 3.22 | 1 | .073 |

Moreover, we compared a mixed-effects logistic regression which includes only VD as a predictor of the probability to choose the target (M1) with a mixed-effects logistic regression which includes both VD and the Gestalt threshold as predictors (M2). As shown in Table 7 (Table 4 in S3 text), the addition of the Gestalt threshold to the regression model produced a significant effect in both experiments (Experiment 1: Chisq=5.00, p<0.05; Replication: Chisq=7.21, p<0.01), and the AIC score was lower for M2 than for M1.

M1: $P(\text{Choice Target})_{ij} = \frac{1}{1 + \exp\left(-\left(\beta_{0j} + \beta_{1j}\,\text{Value distance} + \varepsilon_{ij}\right)\right)}$

M2: $P(\text{Choice Target})_{ij} = \frac{1}{1 + \exp\left(-\left(\beta_{0j} + \beta_{1j}\,\text{Value distance} + \beta_{2}\,\text{Gestalt threshold} + \varepsilon_{ij}\right)\right)}$

Table 7. Summary of model comparisons to examine the significance of adding Gestalt threshold.

Experiment 1 (n = 38):

| Model | AIC | logLik | Deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|
| M1 | 13259.45 | -6624.723 | 13249.45 | | | |
| M2 | 13256.44 | -6622.222 | 13244.44 | 5.00 | 1 | .025 |

Replication (n = 81):

| Model | AIC | logLik | Deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|
| M1 | 28286.25 | -14138.13 | 28276.25 | | | |
| M2 | 28281.04 | -14134.52 | 28269.04 | 7.21 | 1 | .007 |

Furthermore, we compared a mixed-effects logistic regression which includes only the Gestalt threshold as a predictor of the probability to choose the target (M3) with a mixed-effects logistic regression which includes both VD and the Gestalt threshold as predictors (M2). As shown in Table 8 (Table 5 in S3 text), the addition of VD to the regression model that already includes the Gestalt threshold produced a marginally significant effect in both experiments (Experiment 1: Chisq=3.43, p=0.06; Replication: Chisq=3.32, p=0.07).

M3: $P(\text{Choice Target})_{ij} = \frac{1}{1 + \exp\left(-\left(\beta_{0j} + \beta_{1j}\,\text{Gestalt threshold} + \varepsilon_{ij}\right)\right)}$

M2: $P(\text{Choice Target})_{ij} = \frac{1}{1 + \exp\left(-\left(\beta_{0j} + \beta_{1j}\,\text{Value distance} + \beta_{2}\,\text{Gestalt threshold} + \varepsilon_{ij}\right)\right)}$

Table 8. Summary of model comparisons to examine the significance of adding Value distance as an addition to the Gestalt threshold.

Experiment 1 (n = 38):

| Model | AIC | logLik | Deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|
| M3 | 13257.87 | -6623.935 | 13247.87 | | | |
| M2 | 13256.44 | -6622.222 | 13244.44 | 3.43 | 1 | .064 |

Replication (n = 81):

| Model | AIC | logLik | Deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|
| M3 | 28282.36 | -14136.18 | 28272.36 | | | |
| M2 | 28281.04 | -14134.52 | 28269.04 | 3.32 | 1 | .068 |

These results demonstrate that model comparisons (LRT) yielded results similar to those based on the significance of slopes (Wald tests). Value distance has a marginally significant effect when added either to a null model which includes only an intercept or to a model which includes only the Gestalt threshold as a predictor. Additionally, the Gestalt threshold has a significant effect when added to a model that includes only value distance as a predictor. These results strengthen our main conclusion that there is a connection between the sensitivity to the proximity law (Gestalt threshold) and the attraction effect (choice proportion of target).”
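For reference, the likelihood-ratio test used in these comparisons is a simple computation from the two nested models' log-likelihoods; the sketch below reproduces the Experiment 1 comparison of M1 vs. M2 from Table 7:

```python
# Likelihood-ratio test between two nested fitted models:
# statistic = 2 * (logLik_full - logLik_null), compared to a chi-square
# distribution with df equal to the difference in number of parameters.
from scipy.stats import chi2

def likelihood_ratio_test(loglik_null, loglik_full, df_diff=1):
    stat = 2.0 * (loglik_full - loglik_null)
    return stat, chi2.sf(stat, df_diff)

# Experiment 1, M1 vs. M2 (log-likelihoods from Table 7):
stat, p = likelihood_ratio_test(-6624.723, -6622.222)
print(f"Chisq = {stat:.2f}, p = {p:.3f}")  # ~5.00, p ~ .025
```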

If I understand the analyses right, the main finding is a correlation between susceptibility to the Gestalt law of proximity and the size of the attraction effect. The question that now arises is why. I would encourage the authors to aim at answering this question. At the moment, two behavioral outcome variables are correlated. An interesting approach would be to identify models that predict these behavioral outcome variables (e.g., MDFT) and see if parameters of these models are correlated. Ideally, a single model can be defined predicting behavior in both the Gestalt and the decoy task, including a single parameter driving observed correlations between behavioral outcome variables.

This is a very important comment. As the reviewer described, what we found across two independent and identical experiments with a pre-registered replication is that the lower the Gestalt sensitivity threshold of a given subject as measured in a perceptual task, the more she tends to choose the target option in a decoy task. We agree with Reviewer 2 that we lack a mechanistic explanation for this phenomenon. In the discussion, we tried to answer the question of how grouping by proximity can be one of the mechanisms mediating the attraction effect by suggesting a theoretical idea. Using the evidence that there is a close interplay between selective attention and the Gestalt grouping principles, we suggest that grouping by proximity of the more similar options is what leads people to direct more attention to these similar options. This allowed us to draw a specific connection between perceptual processing (grouping by proximity) and value-based processing (comparison between lottery options).

We agree that computational modeling in general can add to the understanding of the mechanism underlying this connection. However, since we already have a logistic regression model that combines both of these parameters (the Gestalt threshold and the choice proportion of the target), we feel that using other, more complicated models without additional justification (for instance, neural data) would not significantly contribute to the understanding of the connection between these behavioral outcomes. A computational model is also only one theoretical explanation of the data, out of many possible explanations. In future work, after collecting neural data relevant to these tasks, it could be beneficial to develop a more elaborate computational model that would allow understanding of the neural mechanism underlying this connection.

However, to address this important comment, we acknowledged it in the discussion:

“However, note, that this is a theoretical notion that should be examined in future studies using either imaging techniques or eye movements.” (page 27, line 665)

“Future work could examine computational models that may suggest further explanation for the mechanism underlying this interesting connection between the proximity law and the attraction effect.” (page 31, line 756)

Appendix A – Individual subjects’ plots: choice variance as a function of the interval increase between the dots in the Gestalt task

Experiment 1

Replication

Attachment

Submitted filename: Response to reviews.pdf

Decision Letter 1

Tyler Davis

6 Oct 2020

Attraction to similar options: the Gestalt law of proximity is related to the attraction effect

PONE-D-20-14306R1

Dear Dr. Izakson,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Tyler Davis, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Mikhail S. Spektor

Acceptance letter

Tyler Davis

8 Oct 2020

PONE-D-20-14306R1

Attraction to similar options: the Gestalt law of proximity is related to the attraction effect

Dear Dr. Izakson:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Tyler Davis

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Influence of the value distance on the choice proportion of the target.

    (PDF)

    S2 Table. Summary of the mixed effects logistic regression model for variable predicting the choice proportion of the target.

    (PDF)

    S1 Appendix. List of trials for the calibration task.

    (PDF)

    S2 Appendix. Calculation of the decoy options.

    (PDF)

    S3 Appendix. Individual results of binomial tests for choice proportion of target.

    (PDF)

    S1 Text. Additional analyses.

    (PDF)


    Data Availability Statement

    All data is available from https://osf.io/jzk6y/.

