Abstract
Extensive evidence shows that humans are inconsistent in their choices. Yet, the neural mechanism underlying inconsistent choice remains unknown. Here, we aim to show that inconsistent choice is tied to the valuation process but is also linked to motor dynamics during task execution. We report the results from two behavioral studies and one neuroimaging study. Human subjects (n = 206, 117 females) completed a well validated risky-choice task to measure their inconsistency levels, followed by two novel tasks explicitly designed to examine motor dynamics free of value computations. We record mouse trajectories during task execution and extract 34 features to examine the role of motor dynamics in choice inconsistency. We show that motor dynamics predict inconsistency levels, even when the motor output involves no valuation elements and is unrelated to deliberation about the decision. These results are robust across all three studies and across various analysis strategies. We then apply neuroimaging and show that inconsistency is represented in value-related brain areas but, at the same time, is also related to activity in motor brain regions. These findings suggest that inconsistent choice behavior may arise from multiple cognitive processes and invite choice theorists to examine models of inconsistent choice that include heterogeneous sources of stochasticity.
Keywords: choice execution, fMRI, irrational choice, motor cognition, mouse tracking, value-based decision–making
Significance Statement
Individuals tend to be inconsistent in their choices. For example, one could prefer sushi over pizza and, a moment later, reverse that choice. Kurtz-David and colleagues investigate the contribution of motor dynamics to this stochastic behavior. Leveraging a novel task design and mouse tracking, they show that specific motor elements during task execution—which were unrelated to valuation—could predict the intensity of subjects’ inconsistencies. They further show that inconsistency is represented in value-related brain regions but is also related to activations of motor circuits. Hence, inconsistency is related to the valuation process but is also tied to motor computations during task execution. These results urge theorists to expand current choice models.
Introduction
“Trembling hand errors” (Selten, 1975) have become a synonym for accidental outcomes in decision-making (Binmore, 1987; Samuelson and Zhang, 1992) that lead to irrational choices. Nonetheless, as the term implies, such mistakes could be the result of altered value representations, of motor tremor in choice execution, or of both.
Previous studies demonstrated how random neural activity can lead to choice stochasticity (Glimcher, 2005; Faisal et al., 2008; Findling et al., 2019; Lebovich et al., 2019), arising either in sensory regions during the perception of choice options (Gold and Shadlen, 2007) or in decision circuits of the value network (“inference noise”; Drugowitsch et al., 2016; Kurtz-David et al., 2019; Webb, 2019). However, the valuation of choice options is also reflected in the motor system through the selection of strategies in task execution (Shadmehr et al., 2019). As such, it is prone to imprecision stemming from motor circuits (Harris and Wolpert, 1998). While stochastic motor computations have been shown to affect performance in classic goal-directed motor tasks (Wolpert and Ghahramani, 2000), much less is known about how motor imprecision—independent of value-related (inference) or sensory noise—relates to value-based decision–making in general and to choice inconsistency in particular.
Choice inconsistency violates fundamental axioms of economic theory, as it creates cyclical choice patterns, and is thus considered a form of irrational behavior (Houthakker, 1950; Afriat, 1967). Importantly, it occurs even without experimental manipulations of the context (Choi et al., 2007; Kurtz-David et al., 2019). As such, it resembles findings in psychophysics showing that noisy neural mechanisms cause perception and motor output to vary even when the stimuli remain constant (Gold and Shadlen, 2007). This suggests that choice inconsistency may originate from noisy neural processes in various brain systems. Nevertheless, to date, most economic theorists have remained agnostic about the cognitive underpinnings of this behavior (Becker et al., 1963; McFadden, 2005). More recently, empirical (Camille et al., 2011; Padoa-Schioppa, 2013; Chung et al., 2017; Kurtz-David et al., 2019) and theoretical (Webb, 2019) works have pointed to randomness during the valuation process (inference noise) as its core cognitive source but have mostly overlooked other possible sources of inconsistent choice.
In the current work, we aim to highlight two possible sources of choice inconsistency. The first source is related to inference noise in value-related brain areas. The second is related to task execution and originates from motor-related brain regions.
To this aim, we use a well studied task for eliciting subjects’ inconsistency levels (Choi et al., 2007; Halevy et al., 2018; Kurtz-David et al., 2019). Crucially though, to tease apart the two sources for choice inconsistency, we present a novel task design, which solely captures motor output free of any value computations. We leverage the vast dynamic information of task execution and examine motor movements leading up to choices by analyzing mouse trajectories, a powerful tool for studying the timing and loci of subtask computations (Freeman, 2018). Our mouse tracking data analysis strategy takes an exploratory approach and enables us to identify specific elements within the motor movement that predict choice inconsistency. Finally, we employ neuroimaging to examine the neural footprints of motor dynamics and to pinpoint the circuits related to inconsistent choices.
We report the results from eight tasks in three separate studies: behavioral (n = 89), neuroimaging (n = 42), and a replication of the behavioral study (n = 69). Our main findings show that motor output, even in the absence of value modulation, predicts inconsistency levels. In the neuroimaging study, we replicate our previous findings and show that inconsistent choice is related to noisy neural networks during the computation of value, but at the same time, is also strongly associated with activations in motor brain regions. Taken together, these findings suggest that motor circuits play a substantial role in inconsistent choice. Our results provide empirical evidence for future computational studies concerning cognitive models of stochastic choice and call for further research on the linkages between the motor and valuation systems.
Materials and Methods
Experimental design
Main risky-choice task
Subjects completed a well validated task to measure their inconsistency levels, where they made choices from linear budget sets in the context of risk (Choi et al., 2007, 2014; Halevy et al., 2018; Kurtz-David et al., 2019). On each trial, subjects were presented with a set of 50/50 lotteries between two accounts, X and Y, and were asked to choose their preferred lottery (a bundle of some amount of X and some amount of Y). All possible lotteries in each trial were represented along a budget line. For example, the (53, 18) coordinate (red dot) on the line presented in Figure 1A is a lottery with 50% chance to win 53 tokens, and 50% chance to win 18 tokens. The slope of the line indicated the substitution (price ratio) between the X and Y amounts, whereas the distance of the line from the axes’ origin indicated the total token amount to be won (“endowment”). Slopes and endowments were randomized across trials (see Extended Data Figs. 1-1, 1-2 for budget sets’ parameters). Importantly, we set the mouse cursor at the axes origin at the beginning of each trial and recorded mouse trajectories (see below, Mouse tracking preprocessing). We refer to these trajectories as “choice dynamics” throughout the rest of the paper. See Figure 1A for further details.
Figure 1.
Experimental design. A, In all tasks, at the beginning of each trial, we set the mouse cursor at the axes origin and recorded mouse trajectories during the trial. In the behavioral and replication studies, subjects used a mouse, while in the fMRI study, they used a trackball to submit their choices. Main risky-choice task (left). Subjects were presented with a budget line with 50/50 lotteries between two accounts, labeled X and Y. Each point on the budget line represented a different lottery between the X and Y accounts. Subjects had to choose their preferred lottery (a bundle of the X and Y accounts) out of all possible lotteries along the line. For example, the bundle (53, 18, red dot) corresponds to a lottery with a 50% chance to win 53 tokens (account X) and a 50% chance to win 18 tokens (account Y). Each trial presented a different budget line with a different slope (the substitution, or price ratio, between X and Y) and endowment (the distance from the axes origin), which were randomized across trials. Middle and right panels, non-value tasks, aimed to study motor output absent value-based decision-making. The lines and predefined targets were the budget lines and actual choices from the main task, respectively. Motor task (middle, used in both the behavioral and neuroimaging studies). Subjects had to move the cursor to the predefined blue target. Note that no numerical information was presented and therefore none was available for subjects to guide their choices. Numerical task (right, used solely in the behavioral study). Subjects had to reach the predefined target coordinates. The title displayed subjects’ current position (also see Materials and Methods). B, Experimental procedures. Top panel, in the behavioral and replication studies (n = 89 and n = 69, respectively), subjects completed 150 trials of the main task divided into three blocks of 50 trials, followed by one block of 51 trials in each of the non-value tasks (task order was counterbalanced across subjects).
Bottom panel, in the neuroimaging study (n = 42), subjects completed 75 trials in the main and in the motor tasks, each divided to three blocks of 25 trials. We thus report results from all eight tasks across the three studies. See also Extended Data Figures 1-1, 1-2.
Non-value motor tasks
Following the main task, subjects completed two non-value motor tasks designed to examine motor-related sources of choice inconsistency that did not involve value modulation. We designed the non-value tasks in a manner that precluded any sort of valuation process yet yielded trajectories similar to those in the main risky-choice task. This design allowed us to relate task execution in the non-value tasks to inconsistency levels in the main risky-choice task, which could directly indicate additional sources of choice inconsistency other than miscalculations of value.
Motor task. Subjects were presented with linear graphs and had to reach a black circular target on the graph. Neither the graphs nor the axes grid presented numbers, thus resembling well known target-reaching tasks (Phillips and Triggs, 2001). When subjects reached the black target, they left-clicked the mouse/trackball to submit their position (Fig. 1A, middle panel).
Numerical task. Similarly, the numerical task tested subjects’ motor pathways in the absence of value modulation, though instead of visual circular targets, here, subjects had numerical coordinates as targets, resembling spatial navigation tasks. Subjects were presented with linear graphs and had to reach a predefined coordinate appearing at the top of the screen, which represented a target spot on the graph (Fig. 1A, right panel). The current cursor position was presented continuously at the top of the screen. When subjects reached the target coordinate, they left-clicked the mouse/trackball to submit their position.
In both non-value motor tasks, to fully resemble the main task, each graph (both the slope and the endowment) was identical to a graph from the main risky-choice task. The predefined targets for both non-value motor tasks were identical to the actual location on that graph that each subject chose in the main risky-choice task. That is, the loci of the predefined targets were subject-specific. Note that in both non-value motor tasks, there was no value-based decision–making, as the X- and Y-axes did not represent monetary payoffs. Here, too, we set the mouse cursor to the axes origin at the beginning of each trial and recorded mouse trajectories. To avoid confusion with the main risky-choice task, we refer to mouse tracking from the non-value tasks as “motor dynamics.”
Experimental procedures
Behavioral and replication studies
Participants. Behavioral study: We recruited 91 volunteering undergraduate students from various departments at Tel Aviv University (46 females; mean age, 24.2 years; range, 18–28). Our sample size was determined based on previous studies using the same task (Choi et al., 2007; Fisman et al., 2007; Halevy et al., 2018). In the main task, two subjects were dropped from the sample due to technical problems during their run. We report the results for the remaining 89 subjects. Five subjects in the numerical task and six subjects in the motor task were dropped due to technical problems at their computer stations during the run. We report the results from the remaining 85 and 86 subjects, respectively.
Replication study: The desired sample size was determined based on a power analysis that used the r correlation coefficients from the cross-tasks prediction presented in Figure 5B, which yielded effect sizes (ES) of r = 0.462 and r = 0.487 (motor and numerical tasks, respectively). We computed the desired sample size using the smaller ES (r = 0.462), implemented in the standard G*Power software (correlation, bivariate normal model, one-sided). We concluded that to achieve a power of 90% with α = 5%, we had to recruit at least 37 subjects. In practice, we recruited 72 volunteering undergraduate students from various departments at Tel Aviv University. In the main task, three subjects were dropped from the sample due to technical problems during their run. We report the results for the remaining 69 subjects (49 females; mean age, 28.9 years; range, 18–60). Three additional subjects in the numerical task and seven subjects in the motor task were dropped due to technical problems at their computer stations during the run. We report the results from the remaining 66 and 62 subjects, respectively. All the sessions of the behavioral and replication studies were carried out in the Interactive Decision-Making Lab at the Coller School of Management at Tel Aviv University. All subjects gave informed written consent before participating in the study, which was approved by the local ethics committee at Tel Aviv University.
Figure 5.
Cross-tasks prediction. A, We use the coefficients from a regression of MMI scores on mouse features from the value-based task to predict inconsistency levels with mouse features from the non-value tasks. We then correlate predicted scores with actual MMI scores. B–D, Obtained correlations between predicted and actual MMI. Each dot in the scatterplot relates to a different trial. B, Behavioral study. Left, Predictions based on mouse features from the motor task (N = 4,279). Right, Predictions based on mouse features from the numerical task (N = 4,192). C, Neuroimaging study (N = 2,915). D, Replication study. Left, Predictions based on mouse features from the motor task (N = 3,085). Right, Predictions based on mouse features from the numerical task (N = 3,160). See also Extended Data Figure 4-1 for the results for “Approach B: PCA.”
Sessions. Main task. Before the main task, subjects read an instruction sheet for that task. The instructions were repeated aloud by the experimenter. Subjects then completed a short practice with the experimental software and went on to the main task. Subjects completed a total of 150 trials, divided into three blocks of 50 trials each (see budget sets’ parameters in Extended Data Figs. 1-1, 1-2). On each trial, subjects had a maximum of 12 s to make their choices, followed by a 1 s variable intertrial interval (ITI; jittered between trials).
Non-value tasks. Following the main task, we randomly chose 51 budget sets (out of 150) to be used as the graphs in the two non-value tasks. Subjects then received the instructions sheet for the two additional tasks and went on to complete the two tasks. The order of the tasks was counterbalanced between subjects across experimental sessions. Within subjects, trials were presented at a random order within each non-value task.
In the motor task, subjects had up to 6 s on each trial to reach the predefined black target, followed by a variable ITI of 1 s (jittered between trials). In the numerical task, subjects had a maximum of 12 s on each trial to reach the predefined coordinates, followed by a variable ITI of 1 s (jittered between trials; Fig. 1B, top panel).
Payoffs. At the end of the experiment, one of the trials from the main risky-choice task was randomly selected for monetary payment. Subjects tossed a fair coin to determine the winning account, X or Y. Subjects won the monetary value of the tokens allocated to the winning account on the trial drawn for payment. Each token was worth 1 NIS (at the time, $1 ≅ 3.5 NIS).
At the end of each non-value motor task, we drew one trial at random and compared the predefined target coordinate with the subject's actual submitted position. Subjects received 5 NIS for each non-value motor task, though this amount decreased as a function of their Euclidean distance from the target. We implemented this payment method to incentivize precision and motivate subjects to reach the predefined targets. By doing so, we could ensure that trajectories from the non-value motor tasks would be comparable with those in the main task (Fig. 1E). The average prize was 40.5 NIS (behavioral study) and 42.7 NIS (replication study). Subjects also received 25 NIS as a show-up fee.
Neuroimaging study
Participants. We recruited 46 right-handed volunteering students from various departments at Tel Aviv University (22 females; mean age, 25.5 years; range, 19–49). The sample size was determined based on a previous study from our lab (Kurtz-David et al., 2019). Subjects gave informed written consent before participating in the study, which was approved by the local ethics committee at Tel Aviv University and by the Helsinki Committee of the Medical Center. We dropped scans with sharp head movements (>3 mm in translation/3° in rotation). As a result, three subjects were dropped from the analysis of the main task, and two subjects were dropped from the analysis of the motor task. For three additional subjects, we had to drop one scan (either from the main or the motor task), and for one subject, we dropped two scans (one from each task). Another subject was dropped due to technical problems during their session. We therefore report the data for the remaining 42 subjects in the main task and 43 subjects in the motor task.
Instructions and pre-scan practice. Before the scan, subjects read an instruction sheet for both the risky-choice and motor tasks (note that subjects did not conduct the numerical task in the neuroimaging study) and completed a pre-scan questionnaire to verify that the tasks were clear. For details on the pre-scan questionnaire, see Kurtz-David et al. (2019). Thereafter, subjects completed a practice block of the risky-choice and motor tasks in front of a computer, using a trackball similar to the one used inside the scanner, to imitate the motor movements required during the scan. The budget sets in the practice block were different from the ones subjects encountered inside the scanner, but their parameters were predefined to ensure all subjects encountered a substantial variation of slopes and endowments.
Sessions. Main task. Subjects performed the experimental task using an fMRI-compatible trackball to choose their preferred bundle. We let subjects complete another three practice trials inside the MRI scanner to gain experience with the fMRI-compatible trackball. Then, the experiment began. Subjects completed a total of 75 trials (with unique budget sets), divided into three blocks of 25 trials each. On each trial, subjects had a maximum of 11 s to make their choices, followed by a 6 s variable ITI (jittered between trials).
Motor task. We used the exact same 75 budget sets and subjects’ choices from the main task as graphs and targets in the motor task (randomized within and across subjects). Subjects had up to 6 s on each trial to reach the black target, followed by a variable ITI of 4 s (jittered between trials; Fig. 1B, bottom panel).
After completing the two tasks, we obtained anatomical scans.
Payoffs. The payment protocol was identical to the behavioral study, with small modifications to increase monetary incentives. In the main task, each experimental token was equal to 2 NIS (rather than 1 NIS), and the endowed payoff for the motor task was 20 NIS (rather than 5 NIS). The average prize was 88.6 NIS + 100 NIS show-up fee.
Image acquisition. fMRI. Scanning was performed at the Strauss Center for Computational Neuroimaging at Tel Aviv University, using a 3T Siemens Prisma scanner with a 64-channel Siemens head coil. To measure blood oxygenation level-dependent (BOLD) changes in brain activity during the experimental task, a T2*-weighted functional multiband EPI pulse sequence was used (TR, 1 s; TE, 30 ms; flip angle, 68°; matrix, 106 × 106; field of view, 212 mm; slice thickness, 2 mm; multiband factor, 4). Sixty-four slices with no interslice gap were acquired in an ascending interleaved order and aligned −30° to the AC–PC plane to reduce signal dropout in the orbitofrontal area. Each run in the main task had 428 TRs (∼7.1 min), and each run in the motor task had 253 TRs (∼4.1 min).
Anatomy. Anatomical images were acquired using a 1 mm isotropic MPRAGE scan, comprising 176 axial slices without gaps at an orientation of −30° to the AC–PC plane to reduce signal dropout in the orbitofrontal area.
Statistical analyses
The general axiom of revealed preference (GARP)
The budget set can be formally described as follows:

B = {(x, y) ∈ ℝ²₊ : p_x · x + p_y · y = m},

where (x, y) is the chosen bundle, m is the total endowment to be spent on X and Y, and p_x and p_y are the prices of allocating the endowment to either account. Thus, p_x/p_y is the price ratio between the two accounts and is graphically represented by the slope of the budget line.
Consider a finite dataset D = {(x^i, p^i)}, i = 1, …, N, where x^i = (x^i, y^i) is the subject's chosen bundle at prices p^i = (p_x^i, p_y^i). Bundle x^i is directly revealed preferred to bundle x^j whenever x^j was affordable when x^i was chosen, that is,

p^i · x^i ≥ p^i · x^j.

We say that the subject satisfies GARP if their choices do not indicate that they prefer x^i over x^j but at the same time also prefer x^j over x^i. Formally, D satisfies GARP if, for every pair of observed bundles, x^i revealed preferred to x^j (directly or through a chain of direct relations) implies not p^j · x^j > p^j · x^i. According to Afriat’s theorem (Afriat, 1967, 1972), there exists a well behaved utility function (continuous, monotone, and concave) that rationalizes the data iff the subject satisfies GARP. Otherwise, a strict cycle of choices exists, and we say that D violates GARP. By Afriat's theorem, if the dataset does not satisfy GARP, then the subject cannot be described as a utility maximizer and is therefore said to be inconsistent.
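A compact sketch of this GARP check (illustrative, not the implementation used in the paper): build the direct revealed-preference relation, take its transitive closure, and count pairs that form strict cycles. `garp_violations`, `X` (chosen bundles), and `P` (prices) are assumed names:

```python
import numpy as np

def garp_violations(X, P):
    """Count GARP violations for chosen bundles X (n x 2) and prices
    P (n x 2). A pair (i, j) violates GARP when x^i is revealed
    preferred to x^j (transitive closure of the direct relation) while
    x^j is strictly directly revealed preferred to x^i."""
    n = len(X)
    expend = (P * X).sum(axis=1)        # p^i . x^i (actual expenditure)
    cross = P @ X.T                     # cross[i, j] = p^i . x^j
    # Direct revealed preference: x^i R x^j iff p^i.x^i >= p^i.x^j
    R = expend[:, None] >= cross
    # Transitive closure (Floyd-Warshall over the boolean relation)
    for k in range(n):
        R |= R[:, k][:, None] & R[k, :][None, :]
    # Strict direct preference of x^j over x^i: p^j.x^j > p^j.x^i
    strict = expend[:, None] > cross
    return int((R & strict.T & ~np.eye(n, dtype=bool)).sum())
```

On a consistent dataset the count is zero; a pair of choices in which each bundle is strictly affordable when the other is chosen produces a two-cycle.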
Aggregate inconsistency indices
We use three different indices to measure subjects’ inconsistency levels. The first simply counts the “number of GARP violations,” which quantifies the frequency of inconsistencies (Extended Data Fig. 2-2a). To measure the intensity of violations, we use the Afriat index (AI; Afriat, 1973) and the money metric index (MMI; Halevy et al., 2018). Both indices range between 0 and 1, with higher values indicating higher inconsistency levels.
AI. The index measures the maximal adjustment to the budget sets needed to remove all violations. It can be interpreted as the subject's waste of income due to inconsistency (Extended Data Fig. 2-2a).
MMI. Consider a continuous and nonsatiated utility function u as representing the preferences of the subject. u induces a complete preference ranking over the bundles presented to the subject. At the same time, the subject provides a partial ranking of bundles by the principle of revealed preference. If these two rankings are compatible for every choice made by the subject, then u rationalizes the dataset. If, however, these two rankings are incompatible, then according to u, some feasible bundles are ranked higher by u than the bundle chosen by the subject. For every observation, the incompatibility between the two rankings can be measured by the minimal expenditure adjustment (a shift of the budget line toward the axes origin) such that the adjusted budget set does not include any bundle that is strictly preferred over the chosen bundle x^i according to u. Formally, given the budget set prices p^i, the money metric for observation i is the minimal expenditure required for the budget set to still include a bundle at least as good as x^i:

m^i(u) = min { p^i · x : u(x) ≥ u(x^i), x ∈ ℝ²₊ }.
We aggregate the adjustments across all observations using an average sum of squares. We examine all utility functions in the family of disappointment aversion (DA) utility functions (see below; Gul, 1991) and look for the one with the smallest incompatibility with the dataset. The computation of the MMI therefore yields two outputs: (1) the aggregate MMI and (2) subject-specific utility function parameters. See Extended Data Figure 2-2b for a visualization of the index. For further details, see Halevy et al. (2018) and Kurtz-David et al. (2019).
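As a numeric illustration of the per-observation money metric (a brute-force sketch under the definition above, not the procedure of Halevy et al., 2018): for a candidate utility function `u`, search rays from the origin for the cheapest bundle at prices `p` that is at least as good as the chosen bundle. All names are hypothetical:

```python
import numpy as np

def money_metric(p, x_star, u, n_dirs=200):
    """Minimal expenditure at prices p needed to reach the utility of
    the chosen bundle x_star: min { p.x : u(x) >= u(x_star) }.
    Brute-force search over rays from the origin; along each ray the
    (monotone) utility is inverted by bisection."""
    u_star = u(x_star)
    best = np.inf
    for theta in np.linspace(0.01, np.pi / 2 - 0.01, n_dirs):
        d = np.array([np.cos(theta), np.sin(theta)])
        lo, hi = 1e-9, 1e6                  # bracket the ray scaling t
        for _ in range(60):                 # bisect u(t * d) = u_star
            mid = 0.5 * (lo + hi)
            if u(mid * d) < u_star:
                lo = mid
            else:
                hi = mid
        best = min(best, hi * (p @ d))      # expenditure along this ray
    return best
```

For a linear utility and symmetric prices, the minimal expenditure equals the expenditure on the chosen bundle itself, so the per-observation adjustment is zero.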
Trial-specific index
We implement a leave-one-out procedure on the MMI to yield a trial-level inconsistency index (Gluth and Meiran, 2019). Let I(D) be an aggregate inconsistency index of dataset D. Let D₋ᵢ be the subset of D that includes all trials but the i-th observation. Let I(D₋ᵢ) be the aggregate inconsistency index of D₋ᵢ, and let Iᵢ = I(D) − I(D₋ᵢ) be the trial-specific inconsistency index of trial i.
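The leave-one-out construction can be sketched in a few lines, with `agg_index` standing in for any aggregate index such as the MMI (names are illustrative):

```python
def trial_inconsistency(dataset, agg_index):
    """Leave-one-out trial-level index: I_i = I(D) - I(D without trial i).
    `agg_index` maps a list of observations to an aggregate score."""
    full = agg_index(dataset)
    return [full - agg_index(dataset[:i] + dataset[i + 1:])
            for i in range(len(dataset))]
```

Trials whose removal lowers the aggregate index the most receive the largest trial-specific scores.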
Parametric utility recovery
Since the valuation of the chosen bundle might be confounded with motor movement, we used the subjective valuation of chosen lotteries (subjective value, SV) as a control variable in the regressions reported in the paper (see below for the regression model). As the parametric family of utility functions, we used the DA model with a constant relative risk aversion (CRRA) functional form (Gul, 1991), as it includes many well known types of preferences in the context of risk and has become the common practice for utility recovery in this task (Choi et al., 2007, 2014; Halevy et al., 2018; Kurtz-David et al., 2019). Formally, for a 50/50 lottery over amounts x and y,

U(x, y) = w · u(max{x, y}) + (1 − w) · u(min{x, y}),  with w = 1/(2 + β) and u(z) = z^(1−ρ)/(1−ρ),

where w is the weight of the better outcome (governed by the disappointment aversion parameter β) and u is a CRRA utility index with relative risk aversion parameter ρ. When β = 0, this is the common expected utility function with parameter ρ. If, in addition, ρ = 0, it is expected value, and when ρ = 1 (i.e., u(z) = ln z), it is the Cobb–Douglas with equal exponents. See Kurtz-David et al. (2019) for details on the different behavioral types the DA function represents.
To use SV as a parametric regressor in our behavioral and neuroimaging analyses, we calculated the value of the DA model at the chosen bundle in each trial, using the subject's recovered parameters, β̂ and ρ̂.
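A sketch of the DA value of a chosen 50/50 bundle, assuming the common parameterization in which the better outcome receives weight w = 1/(2 + β) and the utility index is CRRA with parameter ρ (log utility at ρ = 1); function and argument names are illustrative:

```python
import math

def da_value(x, y, beta, rho):
    """Disappointment-aversion value of a 50/50 lottery over amounts
    x and y, with CRRA index u and weight w = 1/(2 + beta) on the
    better outcome (log utility at rho = 1)."""
    u = (lambda z: math.log(z)) if rho == 1 else (lambda z: z ** (1 - rho) / (1 - rho))
    w = 1.0 / (2.0 + beta)
    return w * u(max(x, y)) + (1 - w) * u(min(x, y))
```

With β = 0 and ρ = 0, the function reduces to the expected value of the lottery.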
Measuring choice difficulty
Since a high difficulty level of a given trial can be an additional source of choice inconsistency, we used the choice difficulty level of a given trial as a control variable in the regressions reported in the paper (see the regression model below). For each subject and trial, we calculated the choice simplicity index (CSI), as presented in our previous study (Kurtz-David et al., 2019). Briefly, in each trial, we computed the SV of 1,000 lotteries along the budget line and then computed the variance of these valuations. A smaller variance indicates greater difficulty, as the lotteries along the line are more similar in value to one another.
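A minimal sketch of the CSI computation, assuming a generic subjective-value function `value_fn` (hypothetical name):

```python
import numpy as np

def choice_simplicity(p, m, value_fn, n=1000):
    """Choice simplicity index: variance of the subjective values of n
    lotteries evenly spaced along the budget line p[0]*x + p[1]*y = m.
    Smaller variance = harder choice (values are more alike)."""
    xs = np.linspace(0.0, m / p[0], n)
    ys = (m - p[0] * xs) / p[1]
    vals = np.array([value_fn(x, y) for x, y in zip(xs, ys)])
    return float(vals.var())
```

A value function that is flat along the line yields a CSI of zero, i.e., maximal difficulty.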
Imprecision in the non-value tasks
As a first indication of the relationship between motor output and choice inconsistency, we aimed to relate imprecision in the non-value tasks to inconsistency levels in the main risky-choice task on a trial-by-trial basis. To evaluate subjects’ imprecision, we computed the Euclidean distance between the targets’ coordinates in the non-value tasks and the actual button-press loci. Using our unique task design, which made use of the same graphs across all three tasks, we could directly compare inconsistency levels in the risky-choice task with their respective Euclidean distances in the non-value tasks. Thus, to test for such a relationship, we ran an ordinary least squares (OLS) regression with random intercepts for subjects, Euclidean distances as the regressor of interest, and MMI as the dependent variable.
Mouse tracking preprocessing
To examine motor output in our task, after analyzing subjects’ submitted choices and measuring their level of choice inconsistency in each trial, we systematically analyzed the dynamics leading up to a choice (“choice dynamics”). Therefore, in all three studies, we recorded the mouse trajectories of motion pathways until the final choice was made. We recorded mouse movements using MATLAB built-in functions. Each movement of the mouse triggered writing its coordinates and a timestamp to a dedicated Excel file. Since sampling occurred only when the mouse moved, it resulted in unequal sampling intervals. We used the timestamps and coordinates to interpolate the mouse location at either 100 or 1,000 samples with equal intervals in time (using MATLAB's interp1 function). The total time of a trial differed, so the resulting 100/1,000 samples had different sampling rates across trials, but a constant sampling rate within a trial. In addition, for some features, we required trajectory samples to represent a fixed curve rather than movement through time. Therefore, we also created a version of each trajectory where samples were evenly spaced on the X- and Y-axes, neglecting the time domain. In this case, the number of samples per trajectory was set to half the total distance traveled in the trial (in coordinate units), to create a coherent and smooth representation of the trajectory curve. All features reported in this study were extracted from either the evenly spaced trajectory curve, the 100 trajectory samples with equal time intervals, or the 1,000-sample trajectory, which provided higher temporal resolution and enabled more precise calculations for certain features. Trajectories with two or fewer samples could not be analyzed and were discarded.
Such cases were caused by faulty sampling or a hasty misclick by the subject and occurred in only 1 trial in the risky-choice task, 13 trials in the numerical task, and 29 trials in the motor task (behavioral study). In the neuroimaging and replication studies, this did not occur, so no trials were excluded due to a low number of samples.
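The time-domain resampling step can be sketched in Python, with NumPy's `interp` standing in for MATLAB's `interp1` (the original analysis was run in MATLAB; names are illustrative):

```python
import numpy as np

def resample_trajectory(t, x, y, n=100):
    """Linearly interpolate an unevenly sampled mouse trajectory
    (timestamps t, coordinates x, y) onto n time points with equal
    intervals, cf. MATLAB's interp1."""
    t = np.asarray(t, dtype=float)
    tt = np.linspace(t[0], t[-1], n)
    return tt, np.interp(tt, t, x), np.interp(tt, t, y)
```

Because `tt` spans each trial's own duration, the effective sampling rate differs across trials but is constant within a trial, as described above.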
Mouse features
To take full advantage of the information concealed in the rich datasets provided by the mouse trajectories, we extracted a total of 34 mouse features from recorded trajectories. Since we did not know ex ante which features would relate to either value modulation or motor processes, we aimed to extract a large variety of features. Twenty-five features were based on or inspired by Jonathan B. Freeman's extensive work on mouse trajectories, such as maximum deviation (“MD”), complexity (e.g., “Xflips” and “Yflips”), velocity (e.g., “meanVel”), acceleration (e.g., “meanAcc”), and area under the curve, adapted to our experimental design (Freeman and Ambady, 2010; Freeman et al., 2011; Freeman and Dale, 2013; Hehman et al., 2015; Freeman, 2018). We also calculated the trajectory angle at each time point, with respect to a straight line from the axes origin toward the chosen bundle, adapted from a work by Sullivan and colleagues (Sullivan et al., 2015). Additionally, we used sample entropy (Richman and Moorman, 2000) to discern the complexity of trajectories [see Feldman and Crutchfield (1998) for a discussion of complexity measures].
For the purpose of extracting trajectory curvatures at each time point, we employed an open-source MATLAB code for curvature calculation, applied to the evenly spaced trajectory curve (Mjaavatten, 2020). Other features, such as features related to mouse fixations (“numFixations”) and layover at the axes’ origin (“Layover”), or features concerned with the time spent in different areas relative to the grid, namely, the time out of the grid bounds (“XTimeOutofBounds” and “YTimeOutofBounds”), time above the budget line (“AboveLine”), time near the predicted bundle (“TimeNearPredicted”), and time on the budget line (“TimeBudgetLine”), were of our own design.
Our feature-design approach thus used well validated features but, at the same time, also explored novel task-related features. Our goal was to use the high-dimensional space created by the features to improve prediction results. The large number of features was thus the main reason for running the replication study, which aimed to strengthen our conclusions and to reduce any confounds that may have been created simply due to our choice of specific mouse features. A comprehensive list of all the features we used appears in Extended Data Figure 3-4.
Dimensionality reduction
Due to the high number of mouse features relative to the number of trials (in all three studies) and due to inter-feature correlations, we had to apply dimensionality reduction procedures. To this end, we took two different strategies: “Approach A,” feature selection via elastic net, a linear combination of the ridge- and LASSO-regularized regression methods, and “Approach B,” a feature projection strategy via principal component analysis (PCA). The main advantage of the elastic net approach is that it allows for a straightforward interpretation of the results at the expense of losing some information concealed in the features that did not survive the regularization. In contrast, the PCA approach combines data from all mouse features but loses some of the interpretability. We first applied these two strategies to the mouse features from the main risky-choice task (z-scored) in each of the three studies. Both the elastic net and PCA were run against inconsistency levels (MMI) and included subject fixed effects with 10-fold cross-validation. We then repeated the dimensionality reduction procedure—using the same protocol—on mouse features from the non-value motor tasks (in each of the three studies). See Figure 4A.
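The two strategies can be sketched as follows: a toy coordinate-descent elastic net and an SVD-based PCA. In practice an off-the-shelf solver with 10-fold cross-validation and subject fixed effects was used, so this is only an illustration of the underlying machinery:

```python
import numpy as np

def elastic_net(X, y, lam=0.1, alpha=0.5, n_iter=500):
    """Minimal coordinate-descent elastic net (sketch).
    Objective: (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2).
    alpha=1 recovers LASSO; alpha=0 recovers ridge."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            resid = y - X @ b + X[:, j] * b[j]      # partial residual excluding j
            rho = X[:, j] @ resid / n
            # soft-thresholding keeps interpretability: weak features go to 0
            b[j] = np.sign(rho) * max(abs(rho) - lam * alpha, 0.0) \
                   / (col_ss[j] + lam * (1 - alpha))
    return b

def pca_scores(X, k=10):
    """First k principal-component scores of the (z-scored) feature matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

The soft-thresholding step is what zeroes out weak features (Approach A), while the PCA projection retains information from all features at the cost of interpretability (Approach B).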
Figure 4.
Mouse features and inconsistency levels. A,B, Analysis strategy. A, We took two dimensionality reduction approaches, in each of the studies, due to high dimensionality relative to the number of trials per subject, as well as due to inter-feature correlations (top row). In 84 (behavioral study), 56 (neuroimaging study), and 114 (replication study) of the 561 off-diagonal feature pairs, the mean correlation coefficient was ≥0.4 in absolute value. Approach A (left), An elastic net, which combines two methods for regularized regression (LASSO and Ridge) and maintains interpretability of the features. See Extended Data Tables 4-1 and 4-2 for the list of features, which survived the elastic net regularization. Approach B (right), PCA. This approach preserves all features’ data at the expense of interpretability. In all further analyses, we used the first 10 components, which accounted for 76.6% (behavioral study), 77.3% (neuroimaging study), and 78.2% (replication study) of the variance in the data. B, In the main risky-choice task, we used the selected features from Approach A and the PCs from Approach B to account for MMI scores. We then repeated the analysis in Approaches A and B with mouse features from the non-value motor tasks as out-of-sample predictors for MMI scores estimated in the value-based task. In the out-of-sample analysis, we solely used the common trials across tasks (51 trials in the behavioral and replication studies, and all 75 trials in the neuroimaging study). For the fMRI analysis, we used the selected features and looked for brain regions that correlated with both mouse features and MMI scores. C, Mouse features in the main risky-choice task correlated with inconsistency levels in both studies (Approach A). Shaded areas indicate a significant predictor (p < 0.05) in a subject fixed-effect regression. Top, Behavioral study. Middle, Neuroimaging study. Bottom, Replication study.
D, Adj-R2 from the analyses described in B (Approach A, selected features). Solid fill indicates regressions that only included task-execution mouse features. Diagonal striped fill indicates subjects’ random intercept. Horizontal striped fill indicates additional explained variance from RT, SV, and other task-related parameters. E, Leave-one-subject-out prediction. Distribution of correlation coefficients between predicted inconsistency levels based on the selected mouse features from the main task and actual MMI scores. Left, Behavioral study, median, 0.491; min, 0.117; max, 0.805; SD, 0.153. Middle, Neuroimaging study, median, 0.1177; min, −0.1789; max, 0.5830; SD, 0.1652. Right, Replication study, median, 0.514; min, −0.114; max, 0.784; SD, 0.196. The dashed black line indicates median. The scatterplot shows individual correlation coefficients (behavioral study, N = 89; neuroimaging study, N = 42; replication study, N = 69). C–E, See Extended Data Figure 4-1 for the results for “Approach B: PCA,” as well as Extended Data Figures 4-2 and 4-3.
Relating choice dynamics to inconsistency levels (risky-choice task)
As a first step toward understanding how motor output affected choice inconsistency, we wanted to test whether specific elements in task execution could predict inconsistency levels. Thus, following dimensionality reduction, we ran a subject fixed effects (random intercept) regression on inconsistency levels with the motor components as the regressors of interest as follows:
MMI_{i,t} = α_i + Σ_k β_k · MouseFeature_{k,i,t} + γ′ · Controls_{i,t} + ε_{i,t},    (9)

where i (running over 1, …, 89 in the behavioral study; 1, …, 42 in the neuroimaging study; and 1, …, 69 in the replication study) is an index indicating subjects’ identity, t is an index indicating the trial number, and k is an index indicating the mouse features that survived the elastic net regularization. α_i is a subject fixed effect (random intercept) and Controls_{i,t} is a vector of control variables including SV, budget set parameters (slope and endowment), RT, and the CSI index (see above). We ran this model, with and without Controls_{i,t}, on the data collected in each study using the mouse features from the risky-choice task.
In a similar manner, in the PCA strategy, we tested whether choice dynamics could predict inconsistency levels, using a subject fixed-effect regression on mouse features’ PCs as follows:
MMI_{i,t} = α_i + Σ_{c=1..10} β_c · PC_{c,i,t} + γ′ · Controls_{i,t} + ε_{i,t},    (10)

where PC_{c,i,t} are the first ten components from the PCA analysis. All other specifications are identical to the elastic net approach detailed in Equation 9.
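A minimal numerical sketch of the subject fixed-effect (dummy-intercept) regression used here (the full analysis also includes the control variables and cross-validation; the function name is ours):

```python
import numpy as np

def fixed_effects_ols(subj, X, y):
    """OLS of trial-level MMI on mouse features (or PCs) with one
    intercept dummy per subject; returns feature slopes and intercepts."""
    labels, idx = np.unique(subj, return_inverse=True)
    D = np.eye(len(labels))[idx]            # subject dummy matrix
    Z = np.hstack([np.asarray(X, float), D])
    beta, *_ = np.linalg.lstsq(Z, np.asarray(y, float), rcond=None)
    k = X.shape[1]
    return beta[:k], beta[k:]               # slopes, subject intercepts
```

The subject dummies absorb between-subject differences in mean inconsistency, so the feature slopes reflect within-subject, trial-by-trial covariation.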
Leave-one-subject-out procedure
To obtain an out-of-sample prediction between mouse tracking data and inconsistency levels, we repeated the dimensionality reduction procedure for N − 1 subjects, followed by estimation of regression coefficients for those subjects (the same model as in the previous paragraph). We then used the fitted model to predict the held-out subject's trial-level inconsistency levels and correlated these predictions with their actual MMI scores. We repeated this process 89, 42, or 69 times (once per subject in the behavioral, neuroimaging, and replication samples, respectively) and reported the subject-specific correlation coefficients.
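The leave-one-subject-out loop can be sketched as follows, where fit and predict stand in for the full dimensionality reduction plus regression pipeline (both are hypothetical placeholders):

```python
import numpy as np

def loso_correlations(subj, X, y, fit, predict):
    """For each subject: train on everyone else, predict the held-out
    subject's trial-level inconsistency, and correlate with actual scores."""
    subj = np.asarray(subj)
    out = {}
    for s in np.unique(subj):
        train, test = subj != s, subj == s
        model = fit(X[train], y[train])
        out[s] = np.corrcoef(predict(model, X[test]), y[test])[0, 1]
    return out
```

The returned dictionary is exactly the distribution of subject-specific correlation coefficients plotted in Figure 4E.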
Relating motor output from the non-value motor tasks to choice inconsistency
We aimed to evaluate whether motion components unrelated to value modulation also correlated with inconsistency levels. Therefore, following the dimensionality reduction, we used the independent motor components from the non-value tasks to predict MMI scores in the main risky-choice task in a subject fixed-effect (random intercept) regression. Since the motor output in the non-value tasks did not involve a valuation process, using the mouse features from these tasks could be regarded as strong out-of-sample predictors. Note that for the purpose of this analysis, in the behavioral and replication studies, we could only use the subset of 51 trials that were common across the three tasks, whereas in the neuroimaging study, all 75 trials were common across tasks, and thus all 75 trials were used in this analysis (Fig. 4B).
In the elastic net strategy, we ran the following model:
MMI_{i,t} = α_i + Σ_k β_k · MouseFeature^{task}_{k,i,t} + γ′ · Controls_{i,t} + ε_{i,t},    (11)

where the superscript task indicates whether the mouse feature was obtained in the numerical or motor tasks. All other notations are identical to Equation 9. We ran this model, with and without Controls_{i,t}, on the data collected in each study (separately) using mouse features from each of the non-value motor tasks (in separate models) as predictors of inconsistency levels.
Likewise, in the PCA approach, we used PC1–10 from the non-value tasks to predict MMI scores in the main risky-choice task in a subject fixed-effect regression, using the following model:
MMI_{i,t} = α_i + Σ_{c=1..10} β_c · PC^{task}_{c,i,t} + γ′ · Controls_{i,t} + ε_{i,t},    (12)

where PC^{task}_{c,i,t} are the first 10 components from the PCA analysis and the superscript task indicates whether the mouse feature was obtained in the numerical or motor tasks. All other elements in the model are identical to Equation 11.
Finally, in both approaches, we evaluated goodness of fit by examining the adj-R2 of each model.
fMRI data preprocessing
BrainVoyager QX (Brain Innovation) was used for image analysis, with additional analyses performed in MATLAB (MathWorks) and in Python. Functional images were sinc-interpolated in time to adjust for staggered slice acquisition, corrected for any head movement by realigning all volumes to the first volume of the scanning session using six-parameter rigid body transformations, and detrended and high-pass filtered to remove low-frequency drift in the fMRI signal. Spatial smoothing with a 6 mm FWHM Gaussian kernel was applied to the fMRI images. Images were then coregistered with each subject's high-resolution anatomical scan and normalized using the Montreal Neurological Institute (MNI) template. All spatial transformations of the functional data used trilinear interpolation.
Functional regions of interest (ROIs)
We focused our analysis on predefined hypothesis-driven ROIs. To maintain continuity with our previous work, all value-related brain regions, namely, the ventromedial prefrontal cortex (vmPFC), the dorsal anterior cingulate cortex (dACC), the ventral striatum (vStr), and the posterior cingulate cortex (PCC), were defined in the same way as in Kurtz-David et al. (2019). In practice, the vStr and the vmPFC were defined using the masks from the canonical meta-analysis for value encoding by Bartra et al. (2013). For the dACC, we drew a 12 mm sphere around the peak voxel reported by Kolling et al. (2016), who looked for positive neural correlates of the foraging value while controlling for RT and other task parameters in a random-effect general linear model (RFX GLM). For the PCC, we simply used neurosynth.org meta-analysis masks with “PCC” as the search term (Yarkoni et al., 2011).
We used V1 as a control region to rule out the possibility that SV and MMI correlated with a vast number of brain regions. We made use of the connectivity-based parcellation atlas by Fan et al. (2016) and drew 12 mm spheres around the peak voxels (one for each hemisphere) from their subregion probabilistic voxel mapping.
To define the motor brain regions, left M1 and the supplementary motor area (SMA), we used a simplified version of the motor task that appeared in our previous study (Kurtz-David et al., 2019) as an independent functional localizer. We estimated a GLM with one predictor that modeled RT using a boxcar epoch function convolved with the canonical hemodynamic response function (HRF), whose duration was equal to the RT of the trial. Six motion-correction parameters and the constant were included as nuisance regressors of no interest to account for head motion-related artifacts. We used a stringent Bonferroni correction as a statistical threshold and defined any significant voxel with a positive correlation as belonging to either the M1 or the SMA region. Finally, to avoid overlap between ROIs, we excluded from the SMA ROI any voxels that were shared with the neighboring dACC ROI. See Extended Data Figure 6-1 for the resulting M1 and SMA functional ROIs.
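The RT boxcar regressor convolved with a canonical HRF can be sketched as follows. We use common SPM-style double-gamma parameters as an assumption; the actual analysis used BrainVoyager's implementation:

```python
import numpy as np
from math import lgamma

def canonical_hrf(tr, length=32.0):
    """Double-gamma canonical HRF sampled every TR seconds (a common
    parameterization; not the exact BrainVoyager kernel)."""
    t = np.arange(0.0, length, tr)
    def gpdf(t, shape, scale=1.0):
        t = np.maximum(t, 1e-12)
        return np.exp((shape - 1) * np.log(t) - t / scale
                      - lgamma(shape) - shape * np.log(scale))
    h = gpdf(t, 6.0) - gpdf(t, 16.0) / 6.0   # peak ~5 s, undershoot ~15 s
    return h / h.sum()

def rt_boxcar_regressor(onsets, rts, n_vols, tr):
    """Boxcar epochs whose duration equals the trial's RT, convolved
    with the canonical HRF."""
    box = np.zeros(n_vols)
    for onset, rt in zip(onsets, rts):
        i0 = int(onset / tr)
        i1 = max(int(np.ceil((onset + rt) / tr)), i0 + 1)
        box[i0:i1] = 1.0
    return np.convolve(box, canonical_hrf(tr))[:n_vols]
```

Parametric regressors (e.g., trial-specific MMI) are built the same way, with the boxcar amplitude scaled by the normalized parameter value before convolution.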
Neural correlates of choice inconsistency and replication of the main finding in Kurtz-David et al. (2019)
To identify the neural correlates of choice inconsistency in the main task, we ran exactly the same RFX GLM regression that appeared in our previous study (see Kurtz-David et al., 2019 for further details). Model regression:
We tested the model on all our predefined value-related ROIs. Furthermore, we used the same model to examine neural correlates of choice inconsistency and value modulation in M1 and SMA.
Since a large portion of our sample (8 out of 42 subjects, 19%) was consistent with GARP, we wanted to increase statistical power for a whole-brain analysis. Hence, we also ran a more refined model, which only included trial-specific MMI and a boxcar epoch function to model RT, both of which were entered for the trial duration and convolved with the canonical HRF. Six motion-correction parameters and the constant were included as regressors of no interest to account for motion-related artifacts.
Neural correlates of mouse features
To evaluate the neural correlates of motor output during task execution, we ran an RFX GLM with mouse features that survived the elastic net regularization and significantly correlated with MMI in both tasks of the neuroimaging study. Namely, we used the average velocity during the trial (“meanVel”), the maximal distance from the shortest path to choice (“MD”), and the number of segments during the trial in which the subject ceased movement (“numFixations”) as our predictors of interest (Fig. 4C; Extended Data Tables 4-1, 4-2). Additional predictors included budget set parameters (slopes and endowments) and a boxcar epoch function to capture RT. All predictors were normalized, convolved with the HRF and entered for the trial duration. Six nuisance head movement regressors and a constant were included as well. We ran the model on each of the value and motor-related ROIs and looked for differences in the neural representation of the motor output between the value-based task and the motor task.
Statistical thresholds in the neuroimaging analyses
All the RFX GLMs in the paper were run at the voxel level, and the results reported in Figures 6 and 7A (and their corresponding extended data) present significant voxels that survived the following criteria: (1) ROI analyses—to increase the statistical power, most of the neuroimaging analyses focused on hypothesis-driven ROIs. In each ROI, we ran separate RFX GLMs using ROI masks with a few hundred voxels [see above, Functional regions of interest (ROIs)], thus limiting the number of comparisons in each analysis. Within each ROI, we applied a voxel-level cluster–forming threshold of p < 0.05, uncorrected, and then corrected for multiple comparisons with cluster-size correction based on 10,000 permutations. We note that the nonparametric permutation test was suggested by Eklund et al. (2016) as the safest procedure to avoid false-positive rates in whole-brain analyses. (2) Whole-brain analysis—the analysis presented in Figure 6D is the only whole-brain analysis in the paper, aimed to strengthen the findings of Figure 6A–C. Here we applied a more stringent voxel-level cluster–forming threshold of p < 0.005, uncorrected, and then corrected for multiple comparisons with cluster-size correction, using the same procedure as in (1).
Figure 6.
Choice inconsistency correlates with value and motor neural activations. A–C, An overlay of the MMI and SV regressors. ROI RFX GLM, n = 42; p < 0.05; cluster-size correction. Model regression, . Six additional motion-correction regressors were included as regressors of no interest. A, Replication of the main finding in Kurtz-David et al. (2019) in value-related ROIs: vStr, dACC, PCC, and vmPFC. B, Motor-related brain regions: M1 and SMA. C, Control ROI: V1. D, A whole-brain analysis reveals that inconsistency levels correlated with the same region found in Kurtz-David et al. (2019) (pink). Model regression: . Six additional motion-correction regressors were included as regressors of no interest. N = 42; p < 0.005; cluster-size correction. Results are shown on the Colin 152-MNI brain. See also Extended Data Figures 6-1–6-3.
Figure 7.
Neural correlates of motor dynamics (“elastic net-based features”). A, An overlay of the neural correlates of motor features, which were selected by the elastic net analysis. Top row, Main risky-choice task. Bottom row, Motor task. Left column presents value-related ROIs, whereas the right column presents motor ROIs. ROI RFX GLM; n = 42; p < 0.05; cluster-size correction. Model regression: . Results are shown on the Colin 152-MNI brain. B, Neural activations of mouse features in motor brain regions correlated with the neural representation of inconsistency levels. We obtained the beta coefficients attributed to each mouse feature from a trial-by-trial RFX GLM (representing the change in BOLD signal, model presented in A, upper row), as well as the beta coefficients attributed to inconsistency levels from the RFX GLM of the BOLD signal on MMI (model presented in Fig. 6B). We then correlated the two sets of coefficients on the subject level. Each scatterplot represents subjects’ random effects (slopes) from the two different models and refers to the change in BOLD activation associated with mouse features (x-axis) compared with the change associated with inconsistency levels (y-axis). Top, SMA ROI; bottom, M1 ROI.
Functional connectivity
We ran a psychophysiological interaction (PPI) analysis (Friston et al., 1997) to examine changes in task-related connectivity between value and motor brain regions as a function of inconsistency levels. The value-related ROIs were selected as seeds (vmPFC, vStr, dACC, and PCC), and two separate PPI analyses were run for each seed, one for each motor ROI (M1 and SMA), for a total of eight models. The PPI regressors were thus generated by multiplying the demeaned BOLD time series of M1 or SMA with the trial-specific MMI (convolved with the canonical HRF). The other regressors included in the model were the motor region's demeaned BOLD time series, MMI, budget set parameters (slopes and endowments), and a boxcar epoch function to capture RT. All predictors except the BOLD signal were convolved with the HRF and entered for the trial duration. We used a Bonferroni correction for multiple comparisons (eight models) and report significant results of the PPI regressors at a threshold of p < 0.00625.
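As described above, the interaction term reduces to an element-wise product; a minimal sketch (the full model also includes the main-effect regressors within the GLM framework):

```python
import numpy as np

def ppi_term(seed_bold, mmi_hrf):
    """Psychophysiological-interaction regressor: demeaned seed-region
    BOLD time series multiplied by the HRF-convolved trial-specific MMI."""
    seed = np.asarray(seed_bold, float)
    return (seed - seed.mean()) * np.asarray(mmi_hrf, float)
```

A significant weight on this regressor in a value-region voxel indicates that its coupling with the motor seed changes with trial-level inconsistency, over and above the main effects of both.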
Ruling out value modulation in the motor task
To show that our task manipulation worked and that the additional non-value motor task indeed did not involve value modulation and solely examined motor output, we conducted two separate analyses.
First, we conducted a pseudo-SV analysis and treated the motor task in the neuroimaging study “as if” it actually were a value-based choice task. Given this thought experiment, we treated the coordinates of each mouse button press at the end of the trial like a lottery chosen in the main risky-choice task. We then elicited subject-specific utility parameters and calculated the “trial-specific SV” and “trial-specific MMI” to be used as parametric regressors in a GLM with a design matrix identical to the GLM used to look for the neural correlates of choice inconsistency in the main risky-choice task (see above). We ran this model against the BOLD signal in all predefined value-related ROIs. Our hypothesis was that the pseudo-SV predictor would show no positive correlation with the BOLD signal in value-related brain regions, which would suggest that the motor task did not engage value modulation processes in the brain.
Our second approach was to test whether the motor task induced a different type of decision-making mechanism, one that did not involve utility maximization, nor could it yield choice inconsistencies. We hypothesized that during task execution, subjects had to constantly determine whether they had reached the target coordinate. We therefore used the equal-time-sampled trajectories (InterTime; Extended Data Fig. 3-4) and calculated the exact mouse location at each TR using linear interpolation. For each such location, we then calculated the Euclidean distance between the current location and the target. This distance at each TR was used as a parametric regressor in an RFX GLM, normalized and convolved with the HRF. Additional predictors included the slope and Y-axis intersection of the linear graph, as well as an epoch function to capture RT (all normalized and convolved with the HRF). This model was tested against all the predefined value/decision-making ROIs to look for brain regions that tracked distance from target as a reinforcement-like signal.
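The per-TR distance regressor can be sketched as follows (function name is ours; the parametric regressor would subsequently be normalized and convolved with the HRF):

```python
import numpy as np

def distance_to_target(t, x, y, target, tr, n_vols):
    """Linearly interpolate the mouse position at every TR and return the
    Euclidean distance to the target coordinate."""
    times = np.arange(n_vols) * tr
    xi = np.interp(times, t, x)        # np.interp holds endpoints constant
    yi = np.interp(times, t, y)
    return np.hypot(xi - target[0], yi - target[1])
```

The regressor decreases monotonically within a trial as the cursor approaches the target, which is the reinforcement-like profile the analysis searched for.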
Code accessibility
The code used for the computation of MMI and the other inconsistency indices is available as an open-source code at https://github.com/persitzd/RP-Toolkit. The code package used to generate all mouse features, sample raw data, statistical maps from the neuroimaging analysis, and the code used to generate all the figures in the paper are available as open source at https://github.com/djlevylab/trembling-hand-unraveled.
Results
Behavioral results
Inconsistency and choice behavior
Subjects completed a well validated risky-choice task to test their inconsistency levels (Choi et al., 2007; Kurtz-David et al., 2019; Fig. 1A, left panel; see Materials and Methods). Subjects had no difficulty understanding the task (Extended Data Fig. 2-1) and responded to changes in prices (slopes), following the law of demand (Fig. 2B). Based on subjects’ choices in the risky-choice task, we computed their subject-by-subject number of GARP violations (Afriat, 1967). We report two different subject-by-subject inconsistency measures for the intensity of violations: the nonparametric AI (Afriat, 1973) and the parametric MMI (Halevy et al., 2018; see Materials and Methods and Extended Data Fig. 2-2 for a graphical illustration of the indices). Overall, subjects exhibited a large heterogeneity in their inconsistency levels (see Fig. 2A and Extended Data Fig. 2-3 for interindices correlations).
Figure 2.
Inconsistency levels and imprecision in the non-value motor tasks. A, Distributions of inconsistency indices among subjects in the three studies, compared with subjects in our previous study (Kurtz-David et al., 2019) and with 1,000 simulated random decision-makers. In the behavioral study (red bars), only three subjects (out of 89, 3.4%) were fully consistent with GARP. Subjects made on average 1,509.1 GARP violations (min, 0; max, 8,258; SD, 2,036.0). Subjects had an average AI score of 0.128 (min, 0; max, 0.572; SD, 0.113) and an MMI score of 0.058 (min, 0.001; max, 0.200; SD, 0.042). In the neuroimaging study (pink bars), eight subjects (out of 42, 19.0%) were fully consistent with GARP. Subjects made on average 91.0 GARP violations (min, 0; max, 1,568; SD, 289.5) with an average AI score of 0.037 (min, 0; max, 0.336; SD, 0.006) and an MMI score of 0.040 (min, 0.001; max, 0.174; SD, 0.036). In the replication study, three subjects (out of 69, 4.3%) were fully consistent with GARP. Subjects made on average 2,101.6 GARP violations (min, 0; max, 8,962; SD, 2,742.3) with a mean AI, 0.1702 (min, 0; max, 0.7895; SD, 0.1774) and MMI, 0.0756 (min, 0.0001; max, 0.3267; SD, 0.069). Note that the large difference between the yellow and green boxplots (simulated decision-makers with 150 and 75 trials, respectively) exemplifies the dependency between the number of trials and potential violations (Bronars, 1987). Whiskers indicate one SD. B, The share of total endowment (expenditure) subjects allocated to the Y account, plotted as a function of the price ratio (the slope of the budget set), increased as the Y account became cheaper. Price ratios are grouped into 15 bins. In each boxplot, the center line indicates the median share of tokens; the box limits indicate top and bottom quartiles; and whiskers indicate one SD. C, Distributions of Euclidean dist. (in coordinate units) between target coordinates and the coordinates of mouse button presses in the non-value tasks.
Euclidean dist. could range between 0 and 141.42 (representing the longest diagonal within the axes’ grid). Top, Behavioral study. Motor task, mean, 0.597; min, 0; max, 75.479; SD, 3.403; numerical task, mean, 0.784; min, 0; max, 116.885; SD, 4.670. Middle, Neuroimaging study. Motor task, mean, 0.849; min, 0; max, 14.677; SD, 0.795. Bottom, Replication study. Motor task, mean, 1.258; min, 0; max, 95.65; SD, 7.07; numerical task, mean, 0.886; min, 0; max, 128.7; SD, 5.750. Distributions of Euclidean distances are highly asymmetrical and skewed toward 0, suggesting very high accuracy rates and similar button-press coordinates across tasks. Note that the Euclidean distances in the neuroimaging study (middle panel) are higher than in the behavioral study (top panel; p < 0.0001; two-sample, one-sided Wilcoxon rank-sum test), probably due to the difficulty of maneuvering the trackball inside the MRI scanner. See also Extended Data Figures 2-1–2-3.
Extended Data Figure 1-1. Budget sets’ parameters, behavioral study.
Extended Data Figure 1-2. Budget sets’ parameters, neuroimaging study.
Extended Data Figure 2-1. Distributions of trials by the share allocated to the cheap account (X or Y). Left, behavioral study; middle, neuroimaging study; right, replication study. When the slope is > 1 (in absolute value), it is more beneficial to allocate money to the Y-axis, which is then considered the cheap account; conversely, when the slope is ≤ 1 (in absolute value), it is more beneficial to allocate more money to the X-axis, which is then considered the cheap account. As X and Y are symmetrical, allocating less than 50% of the endowment to the cheap account corresponds to violating FOSD. The share of trials with FOSD violations was 11.79% in the behavioral study, 4.98% in the neuroimaging study, and 14.45% in the replication study. If we allow some sensitivity around the focal 50%-share point (the intersection between the budget set and a 45-degree line from the axes origin) and examine only the share of trials where subjects allocated less than 45% to the cheap account, then we find that only 2.84% of trials in the behavioral study, 1.68% in the neuroimaging study, and 5.77% in the replication study violated FOSD. This suggests that subjects understood the task.
Extended Data Figure 2-2. An illustration of the inconsistency indices. (a) A dataset with choices on three budget lines. Bundle 3 is revealed preferred to Bundle 1 and Bundle 2, but at the same time Bundle 1 and Bundle 2 are revealed preferred to Bundle 3. These choices create a choice cycle with three GARP violations. The dashed line indicates the Afriat Index (AI) and shows the biggest adjustment to the budget lines required to remove all the violations. (b) MMI. Left: The utility function U(·) ranks some bundles on the budget line higher than the ranking provided by the subject, who chose x^i as their most desired bundle. Right: We resolve this incompatibility by shifting the budget line towards bundle y (dotted line). On the adjusted budget line, y is the highest-ranked bundle by U(·).
Extended Data Figure 2-3. Correlations between inconsistency indices. Scatterplots show correlations between inconsistency indices: the upper panel shows the correlations between MMI and AI (r = 0.7608, left) and the number of GARP violations (r = 0.722, right) in the behavioral study; the middle panel shows the same results for the neuroimaging study (r = 0.8618 and r = 0.7989, respectively); the bottom panel shows the results for the replication study (r = 0.8723 and r = 0.8797, respectively). Dashed line indicates the least squares fit.
The number of violations is heavily dependent on the number of trials that subjects encountered—as the number of trials grows, the potential for making choices that create choice cycles (and violations) grows exponentially (Bronars, 1987). Hence, because subjects across the three samples in our study completed a different number of trials (behavioral and replication studies, 150 trials; neuroimaging study, 75 trials), we do not compare subjects’ inconsistency levels across the different samples. Instead, we compare their choices with 1,000 simulated random decision-makers (Bronars, 1987) to show that even though subjects were inconsistent in their choices, they did not choose at random. Indeed, across all three inconsistency indices under examination, inconsistency levels among participants in our study were substantially lower than those of the simulated choosers (behavioral study, GARP violations, z = 15.5244; q(FDR) < 0.0001; AI, z = 15.4744; q(FDR) < 0.0001; MMI, z = 15.6188; q(FDR) < 0.0001; n1 = 89; n2 = 1,000; neuroimaging study, GARP violations, z = 10.8915; q(FDR) < 0.0001; AI, z = 10.9532; q(FDR) < 0.0001; MMI, z = 10.9788; q(FDR) < 0.0001; n1 = 42; n2 = 1,000; replication study, GARP violations, z = 12.7769; q(FDR) < 0.0001; AI, z = 12.4674; q(FDR) < 0.0001; MMI, z = 13.000; q(FDR) < 0.0001; n1 = 69; n2 = 1,000; one-sided two–sample Wilcoxon rank-sum test; Fig. 2A, yellow and green boxplots). Importantly, in the absence of an appropriate statistical measure, Bronars (1987) suggested using the frequency of simulated choosers who violated GARP as a measure for the power of the inconsistency test. As none of the simulated choosers complied with GARP, we conclude that in all three studies, our task had a power of (at least) 99.999% for detecting inconsistencies in subjects’ choices.
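A toy version of this revealed-preference logic: a Bronars-style random chooser on two-good budget lines, and a violation counter that checks only two-cycle (WARP) violations rather than the full GARP cycle search, so it is an illustration, not the paper's algorithm:

```python
import numpy as np

def pairwise_violations(prices, bundles):
    """Count two-cycle revealed-preference violations: i is revealed
    preferred to j if bundle j was affordable when i was chosen."""
    exp = np.einsum('ij,ij->i', prices, bundles)   # spending on own bundle
    cost = prices @ bundles.T                      # cost[i, j] = p_i . x_j
    n, v = len(bundles), 0
    for i in range(n):
        for j in range(i + 1, n):
            i_over_j = cost[i, j] <= exp[i]        # j affordable under prices i
            j_over_i = cost[j, i] <= exp[j]        # i affordable under prices j
            strict = cost[i, j] < exp[i] or cost[j, i] < exp[j]
            if i_over_j and j_over_i and strict:   # mutual preference: a cycle
                v += 1
    return v

def random_chooser(prices, endowments, rng):
    """Bronars-style simulated decision-maker: picks a bundle uniformly
    at random on each budget line."""
    share = rng.uniform(0.0, 1.0, size=len(prices))
    x = share * endowments / prices[:, 0]
    y = (1 - share) * endowments / prices[:, 1]
    return np.column_stack([x, y])
```

Simulating many such random choosers and recording how often they violate GARP yields the power benchmark described above.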
The rest of the paper will focus on trial-by-trial analyses, using the trial-level MMI index (henceforth, MMI; see Materials and Methods).
Subjects demonstrate various strategies in task execution
We aimed to characterize as many aspects of the choice dynamics as possible and to relate them to choice inconsistencies. We thus extracted a total of 34 different mouse features to systematically examine subjects’ motor movements in each trial. We extracted features like the mean velocity of movement (“meanVel”), layover time at the axes origin (“Layover”), aggregated curvature of the trajectory (“Curveness”), time spent outside the axes grid (“XTimeOutofBounds” and “YTimeOutofBounds”), or the entropy of the trajectory pathway (“XSampEn4” and “YSampEn4”). Figure 3A–D visualizes four representative features and Extended Data Figure 3-4 provides a detailed list of all the features. Most of the extracted features (25 features out of 34) were based on standard features from the mouse tracking literature (Freeman and Ambady, 2010; Hehman et al., 2015; Freeman, 2018; Wirth et al., 2020; Wulff et al., 2021), while nine features were of our own design. As can be seen in Figure 3E and Extended Data Figures 3-2 and 3-3, there was high variability in response strategies within and across subjects.
Figure 3.
Mouse features. A–D, Representative features. A, “meanVel”, For each trial, we measured the average velocity (dashed line) in coordinates/second (same trial as in Extended Data Fig. 3-1a, left panel). B, “Xflips”, In each trial, we measured the number of changes in movement direction (flips) along the x-axis. C, “MD”, In each trial, we calculated the maximal distance between the actual mouse trajectory and the “choice line”—the shortest line connecting the axes origin and choice location. D, “numFixations”, In each trial, we divided the graph into a 20 × 20 grid, such that each bin had a width of five tokens. Any bin in which the subject spent >0.2 s was considered a mouse fixation, that is, a segment in which the subject ceased motion, excluding the actual choice. In the “numFixations” feature, we counted the number of such segments. Figure inset, spatial illustration of mouse fixations. E, Distributions of trials based on features’ values (same features as in A–D). We grouped features into 10 equally spaced bins. Subjects are sorted in ascending order according to their share of trials classified to Bin 1 (lowest values). Note that within subject (each row), trials are sorted by bins in ascending order of the feature value and not in chronological trial order (top, features from the behavioral study; bottom, features from the neuroimaging study). See also Materials and Methods and Extended Data Figures 3-1–3-4.
Mouse tracking. (a) Two trajectories from two different trials of the same subject (sub. 217) across the different tasks, behavioral study. Left, a trial with similar trajectories across tasks (trial 23). Right, a trial with dissimilar trajectories across tasks (trial 75). (b) Same as (a), but for a subject in the MRI study (trials 10 and 27 from subject 113). (c-d) Velocity profiles for the same trials presented in (a-b), respectively. We normalized RT to a relative scale. (e-f) Average velocity profiles for all subjects in the sample. We normalized RT to a relative scale. Shaded error bars represent standard errors. (e) Behavioral study. (f) MRI study.
Distribution of feature values after preprocessing, behavioral study (z-scoring, removing NaN and extreme values).
Distribution of feature values after preprocessing, MRI study (z-scoring, removing NaN and extreme values).
Description of mouse features.
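The representative features above can be computed directly from a trial's sampled trajectory. The following Python sketch illustrates simplified versions of “meanVel”, “Xflips”, and “MD”; the definitions here are minimal assumptions for illustration, not the exact implementation used in the study.

```python
import numpy as np

def mouse_features(xy, t):
    """Simplified versions of three mouse features for one trial.

    xy : (n, 2) array of cursor coordinates; t : (n,) timestamps in seconds.
    """
    steps = np.diff(xy, axis=0)
    # "meanVel": total path length divided by trial duration (coordinates/s)
    mean_vel = np.linalg.norm(steps, axis=1).sum() / (t[-1] - t[0])

    # "Xflips": number of direction reversals along the x-axis
    dx = np.sign(steps[:, 0])
    dx = dx[dx != 0]                      # ignore samples with no x movement
    xflips = int(np.sum(dx[:-1] != dx[1:]))

    # "MD": maximal perpendicular distance from the straight line connecting
    # the trajectory start (axes origin) to the final choice location
    start, end = xy[0], xy[-1]
    line = end - start
    cross = np.abs(line[0] * (xy[:, 1] - start[1])
                   - line[1] * (xy[:, 0] - start[0]))
    md = float(cross.max() / np.linalg.norm(line))

    return {"meanVel": mean_vel, "Xflips": xflips, "MD": md}
```

A trajectory lying exactly on the origin-to-choice line yields MD = 0 and, with monotone x movement, zero flips.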
Mouse features from the risky-choice task predict inconsistency levels
Our main goal was to demonstrate that in addition to value miscalculations (inference noise), choice inconsistency was influenced by the dynamics of motor characteristics originating in noisy neural computations in motor networks unrelated to value-based computations. As a first step toward this goal, we aimed to demonstrate that specific elements in motor output, captured by the extracted mouse features, could be related to subjects’ inconsistent choice behavior.
Since the mouse features were highly correlated (Fig. 4A), and to avoid overfitting, we applied a dimensionality reduction procedure to the mouse features. We conducted a linear regression with elastic net regularization (see Materials and Methods), which resulted in a subset of the total 34 features. The advantage of this approach is that it retains the original mouse features, which keeps the results interpretable.
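As a rough sketch of this selection step (using scikit-learn and synthetic stand-in data; the actual hyperparameters and cross-validation scheme are described in Materials and Methods), the “surviving” features are simply those that retain nonzero coefficients after regularization:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 34))      # 34 z-scored mouse features (simulated)
# stand-in inconsistency scores driven by two of the features
y = X[:, 0] - 0.5 * X[:, 1] + rng.standard_normal(500)

# l1_ratio mixes ridge (0) and lasso (1); cross-validation picks the penalty
enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y)
surviving = np.flatnonzero(enet.coef_)  # "surviving" = nonzero coefficients
```

The L1 component zeroes out redundant features while the L2 component stabilizes estimates among correlated ones, which is why this penalty suits collinear mouse features.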
Mouse features and inconsistency levels, Approach B (PCA). (a) Same analysis as in Fig. 4c, using Approach B - PCA. Mouse features in the main task correlated with inconsistency levels in all three studies. Shaded areas indicate a significant predictor (p < 0.05) in a subject fixed-effects regression. Left: behavioral study. Middle: neuroimaging study. Right: replication study. (b) Adj-R2 from regressions that included PCs from the main task, compared with regressions that included PCs from the non-value tasks. Solid fill indicates regressions that only included task-execution mouse features. Diagonal striped fill indicates subjects’ random intercepts. Horizontal striped fill indicates additional explained variance from SV and other task-related parameters. Left: behavioral study. Middle: neuroimaging study. Right: replication study. For the behavioral and replication studies, the figure includes only the 51 trials that were common across all three tasks. (c) Leave-one-subject-out prediction. Distribution of correlation coefficients between predicted inconsistency levels based on PC1s from the main task and actual MMI scores. Behavioral study: median = 0.3797, min = 0.0980, max = 0.7949, std = 0.1582; neuroimaging study: median = 0.142, min = -0.065, max = 0.541, std = 0.145; replication study: median = 0.382, min = -0.109, max = 0.7963, std = 0.201. Dashed black line indicates the median. Scatter shows individual correlation coefficients. p < 0.0001; behavioral study: t(88) = 29.971; neuroimaging study: t(41) = 7.727; replication study: t(68) = 16.1498. One-sided one-sample t-test (behavioral study: N = 89; neuroimaging study: N = 42; replication study: N = 69). (d) Same analysis as in Fig. 5b-c, using Approach B - PCA. Obtained correlations between predicted and actual MMI, behavioral study. Each dot in the scatter relates to a different trial. Predictions based on mouse features from the behavioral study (N motor task = 4,279, N numerical task = 4,192). (e) Predictions based on mouse features from the neuroimaging study (N = 2,915). (f) Predictions based on mouse features from the replication study (N motor task = 3,085, N numerical task = 3,160).
Surviving features from the elastic net analysis, behavioral and neuroimaging studies.
Surviving features from the elastic net analysis, replication study.
Specifically, a total of 30 features survived the regularization in the behavioral study, a respective group of 30 features survived the regularization in the neuroimaging study, and 29 features survived the regularization in the replication study. The list of surviving features appears in Extended Data Figures 4-2 and 4-3.
Following dimensionality reduction, we tested which of the features selected by the elastic net regularization were significantly related to inconsistency levels. We found 18 (behavioral study), 21 (replication study), and 11 (neuroimaging study) such features (p < 0.05; multiple linear model with subject fixed effects; Fig. 4C; Extended Data Fig. 4-2; see Materials and Methods). This suggests that specific elements in task execution accounted for inconsistency levels. As can be seen in Figure 4D, goodness-of-fit measures (adj-R2) show that task-execution mouse features accounted for 17.6% (behavioral study), 16.3% (replication study), and 7.4% (neuroimaging study) of the variance in inconsistency levels. Once random intercepts by subject are included in the model, the explained variance increases to 28.0, 32.2, and 23.9%, respectively. Note that this effect holds even when controlling for other constructs that may also be tied to inconsistency levels, such as reaction time (RT), the SV of the chosen lottery, choice difficulty, and the budget set parameters (Fig. 4D, horizontal striped bars).
To further strengthen this finding, we conducted a leave-one-subject-out prediction, in which the dimensionality reduction and model coefficients were trained on all trials from N − 1 subjects and tested on the trials of the held-out subject. We found a positive correlation between predicted and actual inconsistency levels for all subjects in the behavioral study, 37 of 42 subjects in the neuroimaging study, and 68 of 69 subjects in the replication study (p < 0.0001; behavioral study, t(88) = 23.2446; median correlation, r = 0.4910; neuroimaging study, t(41) = 6.210; median correlation, r = 0.1177; replication study, t(68) = 19.896; median correlation, r = 0.5142; one-sided one-sample t test; Fig. 4E). These results show that choice dynamics in the value-based risk task were related to inconsistency levels, both in and out of sample.
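The leave-one-subject-out scheme can be sketched as follows (synthetic stand-in data; the real pipeline also refits the dimensionality reduction on each training fold):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
n_sub, n_trial = 20, 50
subj = np.repeat(np.arange(n_sub), n_trial)
X = rng.standard_normal((n_sub * n_trial, 5))        # selected mouse features
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.0]) + rng.standard_normal(len(subj))

corrs = []
for train, test in LeaveOneGroupOut().split(X, y, groups=subj):
    model = LinearRegression().fit(X[train], y[train])   # fit on N - 1 subjects
    pred = model.predict(X[test])                        # predict held-out subject
    corrs.append(np.corrcoef(pred, y[test])[0, 1])

median_r = np.median(corrs)              # one correlation per held-out subject
```

Each held-out subject contributes one correlation coefficient, and the distribution of these coefficients is what the one-sample t test evaluates.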
One may argue that the regularization provided by the elastic net approach is not sufficient to fully remove the collinearity between features. Thus, to check the robustness of our results, we also employed an unsupervised algorithm that reduced the dimensionality of the data while simultaneously removing collinearities between the features. To this end, we conducted a PCA on the mouse features. Importantly, this method preserved all the mouse features (as opposed to the elastic net) but at the expense of the interpretability of the components. We then used the first 10 principal components, which together accounted for >75% of the variance in the data (in all three studies) and examined whether they correlated with inconsistency levels in a subject fixed-effect regression (Fig. 4B). The PCA strategy yielded very similar results to the elastic net approach (Extended Data Fig. 4-1).
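A minimal sketch of this alternative approach, assuming a trials-by-features input with a low-rank correlation structure similar to that of highly collinear mouse features:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# simulate highly collinear features: a low-rank latent structure plus noise
latent = rng.standard_normal((1000, 8))
X = latent @ rng.standard_normal((8, 34)) + 0.1 * rng.standard_normal((1000, 34))

pca = PCA(n_components=10).fit(X)
pcs = pca.transform(X)                   # decorrelated regressors for the model
explained = pca.explained_variance_ratio_.sum()
```

By construction the principal components are orthogonal, so entering them together in the fixed-effects regression avoids the collinearity concern, at the cost of component interpretability.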
To conclude, two analysis approaches, across three independent samples, indicated that motor components during task execution were related to subjects’ inconsistent choice behavior on a trial-by-trial level. However, mouse features from the risky-choice task did not solely capture “pure” motor computations, since subjects might have been actively engaging in value computations during the motor trajectory toward the final choice (Resulaj et al., 2009). Accordingly, one may argue that the results presented in this section captured a combination of the valuation component of inconsistency and motor components, which originated in motor circuits, hence only partially demonstrating our main goal. The next set of analyses addresses exactly this issue.
Basic motor traits, unrelated to value modulation, also predict choice inconsistency
Since we aimed to demonstrate that inconsistency was influenced by the dynamics of motor characteristics originating in neural motor networks unrelated to value-based computations, we next examined subjects’ motor output in two tasks that did not involve value computations. Therefore, following the main risky-choice task, subjects also completed novel non-value tasks (Fig. 1A,B, the motor task and the numerical task and Materials and Methods).
Analogous to the inconsistency levels measured in the value task, we measured subjects’ imprecision in the non-value tasks by calculating the Euclidean distance between the predefined target coordinates and the coordinates of the actual location of the button press (Fig. 2C). Crucially, even though the imprecisions in the non-value tasks were rather small, they captured simple motor errors in task execution. We therefore correlated the Euclidean distances with inconsistency levels in the risky-choice task. We found that in the numerical task, Euclidean distances positively correlated with inconsistency levels in the risky-choice task, suggesting that greater imprecision in that task was tied to higher inconsistency levels in choice (behavioral study, β = 0.0005; p = 0.0063; replication study, β = 0.0008; p < 0.0001; OLS regression with random intercepts for subjects). In the motor task, we obtained a similar result only in the replication study (behavioral study, β = −0.0002; p = 0.567; neuroimaging study, β = 0.0012; p = 0.2466; replication study, β = 0.0017; p < 0.0001; OLS regression with random intercepts for subjects). This shows that motor imprecisions measured at the end of motor-only trials were (somewhat) related to choice inconsistencies in the risky-choice task. Hence, to fully tackle what generates this linkage between motor imprecision and choice inconsistency, we next examined the motor dynamics recorded in these tasks.
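A minimal sketch of this imprecision measure and the random-intercept regression (statsmodels, with simulated data whose effect sizes are arbitrary stand-ins):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_sub, n_trial = 30, 40
subject = np.repeat(np.arange(n_sub), n_trial)

# Euclidean imprecision: distance between the target and the button press
err = rng.standard_normal((len(subject), 2))
imprecision = np.hypot(err[:, 0], err[:, 1])

# simulated MMI with a weak positive link to imprecision plus subject offsets
mmi = (0.05 * imprecision
       + np.repeat(rng.normal(0, 0.05, n_sub), n_trial)
       + rng.normal(0, 0.1, len(subject)))

df = pd.DataFrame({"subject": subject, "imprecision": imprecision, "mmi": mmi})
fit = smf.mixedlm("mmi ~ imprecision", df, groups=df["subject"]).fit()
beta = fit.params["imprecision"]         # slope of imprecision on MMI
```

The random intercept absorbs stable between-subject differences in inconsistency, so the slope reflects the trial-level association.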
Similarly to the main risky-choice task, we recorded mouse trajectories during execution of the non-value tasks. Here, too, we extracted the same 34 mouse features as those extracted from the risky-choice task (Fig. 3; see Materials and Methods). To differentiate the mouse features extracted from the risky-choice task (“choice dynamics”) from those extracted from the non-value tasks, we refer to the latter as “motor dynamics.” As can be seen from Extended Data Figure 3-1, mouse trajectories from the non-value motor tasks were comparable to the trajectories in the main task.
As in the main value-based risky-choice task, we repeated the dimensionality reduction procedure using an elastic net regularization on the mouse features extracted from the non-value tasks. In the motor task, a group of 25 features survived the regularization in the behavioral study, 25 features survived in the neuroimaging study, and 26 features survived in the replication study. In the numerical task, 33 features survived the regularization in the behavioral study, and 16 features survived in the replication study.
We then tested whether the surviving features from the non-value tasks correlated with inconsistency levels estimated from the value-based risk task (Fig. 4B; see Materials and Methods). In all three studies, in both the motor and numerical tasks, between six and nine features significantly correlated with inconsistency levels (motor task: behavioral study, seven features; replication study, nine features; and neuroimaging study, six features; numerical task: behavioral study, eight features; replication study, seven features). Importantly, these features explained a substantial amount of the variance in inconsistency levels. Specifically, the mouse features from the motor task explained 3.9, 6.0, and 9.9% of the variance in inconsistency levels (behavioral, neuroimaging, and replication studies, respectively), whereas mouse features from the numerical task explained 4.5 and 6.6% of the variance in inconsistency levels (behavioral and replication studies, respectively). Moreover, when adding a random intercept by subject to the model, the explained variance increased to 18.6, 23.4, and 29.4% in the motor task, and to 19.0 and 27.2% in the numerical task (respectively; Fig. 4D). Even though these adj-R2s are not as high as when using the mouse features extracted from the main risky-choice task, they are still surprisingly high, given that the regressors are derived from a different task. We note that since mouse features in the non-value tasks did not involve any value calculations, they can be regarded as strong out-of-sample predictors of the effect that motor dynamics had on inconsistency levels (Fig. 4B).
Our unique analysis approach, which leverages the mouse features, enabled us to pinpoint the exact motor elements that robustly influenced inconsistency levels. Namely, we found that only one feature, the maximal distance from the straight line between the axes origin and the choice (“MD”), significantly correlated with inconsistency levels across the three studies in seven out of the eight different tasks (three tasks in the behavioral and replication studies and two in the neuroimaging study). In all analyses, this feature had a positive coefficient, suggesting that the larger the distance, the higher the inconsistency levels. This suggests that subjects who tended to wander around the grid further from the shortest path to their choice, regardless of which task they performed, were more likely to be inconsistent. This finding resonates with classic findings from motor cognition, which indicate that spatial deviations from straight paths lead to greater motor errors (Diedrichsen et al., 2010). See Figure 3C for a visualization of the “MD” feature.
Furthermore, we also found that the average velocity during the trial (“meanVel”) negatively correlated with inconsistency levels in six out of the eight tasks, implying that fast movements led to consistent choices, perhaps indicating that the subjects were precisely aiming at a predetermined specific location. This finding might seem contradictory to the canonical speed-accuracy trade-off (Fitts, 1954), where one would expect higher error rates with faster movements. Nevertheless, decision field theory offers a structural explanation for our findings: once discriminability between choice options is low, as it is in our task with a continuous budget set, an inverse relationship is to be expected; thus, faster movements would result in fewer errors (Busemeyer, 1993). See Figure 3A for a visualization of the “meanVel” feature.
Next, to obtain a robust cross-task prediction, we treated the regression of mouse features from the main risky-choice task on inconsistency levels as a training set (the same regression model reported in the previous section). We took the estimated coefficients of the mouse features from this regression and applied them as the coefficients of the mouse features extracted from the non-value tasks, which we considered our test set. Based on this, we calculated the predicted inconsistency levels using the mouse features extracted from either the motor or numerical task (Fig. 5A). We then correlated these predicted inconsistency levels with the actual MMI scores. We found a significant positive correlation in all studies (behavioral study, r = 0.462; CI = [0.437; 0.485]; and r = 0.487; CI = [0.463; 0.509]; p < 0.0001; motor and numerical tasks, respectively; Fig. 5B; neuroimaging study, r = 0.467; CI = [0.438; 0.495]; p < 0.0001; motor task; Fig. 5C; replication study, r = 0.5895; CI = [0.566; 0.612]; and r = 0.574; CI = [0.550; 0.597]; p < 0.0001; motor and numerical tasks, respectively; Fig. 5D). This strengthens our finding that specific mouse features from the non-value tasks could predict inconsistency levels in the risky-choice task, even when using regression coefficients that were trained on a different dataset.
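Schematically, the cross-task prediction amounts to freezing the coefficients learned on the risky-choice task and applying them to the same features measured in a non-value task (synthetic data; the shared generative weights across tasks are an illustrative assumption):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
beta = np.array([0.8, -0.4, 0.2])        # assumed shared feature-to-MMI weights
X_main = rng.standard_normal((600, 3))   # features from the risky-choice task
mmi = X_main @ beta + rng.standard_normal(600)

train = LinearRegression().fit(X_main, mmi)      # training set: main task

# test set: the same features extracted from the non-value motor task
X_motor = rng.standard_normal((600, 3))
mmi_motor = X_motor @ beta + rng.standard_normal(600)
pred = train.predict(X_motor)            # frozen coefficients, new task

r = np.corrcoef(pred, mmi_motor)[0, 1]   # predicted vs actual MMI
```

Because no coefficient is refit on the test task, any positive correlation reflects motor information that transfers across tasks.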
Finally, here, too, we repeated the PCA analysis for all the results reported in Figures 4D,E and 5 and obtained similar findings (Extended Data Fig. 4-1).
Taken together, using two different tasks that captured motor dynamics but did not involve value modulation, in three different samples, with two different dimensionality reduction approaches and two out-of-sample predictions, we were able to show that choice inconsistency in value-based decision-making was related to motor dynamics and computations.
Neuroimaging results
Replication of the main finding from our previous study
As we hypothesized that neural noise in value brain regions during valuation leading up to a choice is one source for choice inconsistency, we first aimed to replicate the main finding from our previous study (Kurtz-David et al., 2019), which showed that value modulation and inconsistency levels correlated with the BOLD signal in the same value-related ROIs. We therefore ran the same RFX GLM used in that study (see Materials and Methods) and focused our investigation on prehypothesized value-related ROIs. To increase statistical power, we ran separate GLMs in each region using ROI masking. Similar to our previous study (Kurtz-David et al., 2019), we found that MMI correlated with three out of four predefined value ROIs: the vStr, dACC, and PCC (Fig. 6A; Extended Data Fig. 6-2; p < 0.05; cluster-size corrected). We further found an overlap between regions that correlated with inconsistency and with the SV of the chosen bundle in each trial in two of these regions (dACC and PCC; p < 0.05; cluster-size corrected).
M1 and SMA functional ROIs. We used a degenerate version of the motor task from Kurtz-David et al. (2019) as an independent functional localizer for M1 and SMA. Whole-brain RFX GLM, n = 22, q(Bonferroni) < 0.05. Model regression: BOLD = β0 + β1RT + ε. We defined any significant voxel that correlated with the boxcar function as belonging to either the M1 (left) or the SMA (right) region. To avoid overlap between ROIs, we excluded from the SMA ROI any voxels shared with the neighboring dACC ROI.
ROI-GLM analysis, neural correlates of inconsistency levels (less binding regression model, as in Fig. 6d). Each row indicates the number of functional voxels in a cluster of activation related to the regressor of interest, MMI (inconsistency index). The second column indicates the ROI in which the model was run. The third column indicates the minimal cluster size required to reach significance at p < 0.05, based on 10,000 permutations. The fourth column details the actual number of voxels in the cluster of activation. Accordingly, the fifth column indicates whether the results are significant after cluster-size correction, i.e., whether the number in the fourth column is greater than or equal to the number in the third column. Model specification: ROI RFX GLM, n = 42. Six additional motion-correction regressors were included as regressors of no interest.
Psychophysiological Interaction analysis results.
We ascribe the weaker correlations of inconsistency levels with the BOLD signal to the fact that eight of the subjects in our sample (19.5%) were fully consistent. Furthermore, the current study had a less powerful test of GARP (Bronars, 1987) compared with the previous study, as each subject completed only 75 trials instead of 108. For this reason, we also ran a less binding model, which only included RT and inconsistency levels as regressors (see Materials and Methods). The less binding model was run both in a whole-brain analysis and in separate ROI-GLMs. In the whole-brain analysis, we found a positive correlation between MMI and the BOLD signal in the mPFC in a very similar region to the one we found in the whole-brain analysis in our previous study (p < 0.005; cluster-size corrected; Fig. 6D). The ROI-GLM yielded significant activations in two of the value ROIs (dACC and PCC; Extended Data Fig. 6-2). These results indicate that we were able to successfully replicate our previous findings in a different sample, confirming that inconsistency emerges during value modulation in value-related brain regions.
Motor brain regions also correlate with value modulation and inconsistency levels
Next, we looked at the second hypothesized source of choice inconsistency: neural noise originating from motor planning and execution. We thus tested whether inconsistency levels and valuations correlated with activations in motor brain regions. We ran the same RFX ROI-GLM that we conducted in the value-related brain regions, this time on M1 and SMA (see Materials and Methods). We found that both these regions correlated with inconsistency levels (SMA, positive correlation; M1, negative correlation; p < 0.05; cluster-size corrected; Fig. 6B). Here, too, we ran a less binding model, which only included RT and inconsistency levels as regressors (see Materials and Methods), which indicated that motor ROIs significantly correlated with inconsistency levels (cluster-size corrected, whole-brain analysis, p < 0.005; ROI-GLM, p < 0.05; Fig. 6D; Extended Data Fig. 6-2).
In line with previous studies, we also identified a strong activation of the SV regressor in the SMA (p < 0.05; cluster-size corrected; Fig. 6B), consistent with the notion that motor regions receive information regarding valuations from value-related brain areas before execution (Hare et al., 2011) and/or that this activation reflects the representation of action values (Wunderlich et al., 2009). Importantly, within the SMA, we found an overlap between the neural correlates of SV and inconsistency levels, suggesting that activity in motor brain regions not only encoded the valuations of choice but also tracked, and perhaps contributed to, inconsistency levels.
Finally, we also examined the activity in V1 as a control region and found a very small cluster in the right hemisphere, which correlated with MMI. Nonetheless, we found no representation of the SV regressor in either the right or the left V1 ROI (Fig. 6C), suggesting that valuations and inconsistency levels were not jointly represented in this control region.
Functional connectivity between motor and value brain regions moderates inconsistency levels
If, in fact, both value and motor brain regions contribute to choice inconsistency, then inter-transmission of activations between these regions should track inconsistency levels. Therefore, we performed a PPI functional connectivity analysis. We used the value regions (vmPFC, vStr, dACC, and PCC) as seeds and ran a separate PPI regression for each of the two motor regions (M1 and SMA; for a total of eight models). We found that as MMI increased (more inconsistent), M1 had stronger connectivity with three of the value regions (vStr, dACC, and PCC; Extended Data Fig. 6-3). In the SMA, we found a similar result for the dACC seed. These results show that the activity in motor and value brain regions were more synchronized in inconsistent trials than in consistent trials, strengthening the notion that motor-related computations also contributed to inconsistency levels.
Neural correlates of mouse features
We next examined how the mouse features were represented in the brain. Our main goal was to compare the neural correlates of the mouse features during the value-based risky-choice task with those during the non-value motor task and then test whether this representation was related to inconsistency levels. We first examined the neural correlates of mouse features in both value and motor ROIs. We hypothesized that the mouse features would correlate with activity in the motor ROIs in both tasks, as these features reflected the motor dynamics of task execution regardless of the specifics of the task at hand. However, we speculated that mouse features would be evident in value-related ROIs solely during the main value-based risky-choice task, where they also reflected the valuation process, and not during the non-value motor task, where value computations were not required.
To test this, we ran an RFX GLM on the BOLD signal in the value and motor ROIs using the three mouse features that significantly correlated with MMI in both the main and motor tasks of the neuroimaging study: mean velocity during the trial (“meanVel”), the maximal distance from the shortest path to the choice (“MD”), and the number of segments in which the subject ceased motion during the trial (“numFixations”; Fig. 4C, middle panel; Extended Data Fig. 4-2). Note that in contrast to “MD” and “meanVel”, which were strong predictors of inconsistency levels across all three studies (see above), “numFixations” was a strong predictor of inconsistency levels solely in the neuroimaging study (in both tasks). Figure 7A presents an overlay of the neural correlates of the mouse features in all ROIs in the risky-choice task (Fig. 7A, top row) and the motor task (Fig. 7A, bottom row).
Not surprisingly, we found that mouse features correlated with activations in the motor ROIs in both the main task and the motor task (p < 0.05; cluster-size corrected). However, in the value ROIs, we found a large discrepancy between the two tasks. While in the risky-choice task, we identified that the mouse features strongly correlated with activations in value ROIs (the vStr correlated with three of three features, the PCC correlated with two of three features), we found only one cluster of activation in the vmPFC, from one mouse feature, during the motor task (p < 0.05; cluster-size corrected).
In other words, in line with our hypothesis, neural activations associated with motor dynamics during the motor task were much weaker in the value ROIs than during the risky-choice task. Hence, only during the risky-choice task did we find that choice dynamics similarly activated motor and value brain regions. This suggests that there was strong transmission of motor information between the two networks only when task execution involved a valuation process. In contrast, given that all three features were robust predictors of inconsistency levels in the non-value tasks, the activations associated with these features in motor brain regions during the motor task were another indication that motor-only activations may have contributed to inconsistency.
We next aimed to directly relate the neural activations of choice dynamics in the motor brain regions with inconsistency levels to examine if the relationship we found in behavior between choice dynamics and inconsistency levels also holds at the neural level. To this end, we obtained the beta coefficients from the RFX GLM that modeled the trial-by-trial change in the BOLD signal in M1 and SMA that was attributed to each mouse feature (subjects’ random slopes, risky-choice task, presented in Fig. 7A, top row), as well as the beta coefficients from the RFX GLM that modeled the trial-by-trial change in the BOLD signal attributed to inconsistency levels (MMI) in the same ROIs (subjects’ random slopes from the model presented in Fig. 6B). We then correlated, on the subject level, these two sets of beta coefficients. A significant correlation between the two sets of beta coefficients would suggest that activations associated with inconsistency levels were also associated with a specific aspect of the motor movement captured by the mouse feature under examination.
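Schematically, this across-subject test reduces to correlating two vectors of per-subject random slopes, one per GLM (simulated stand-in values; the real slopes come from the fitted models):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_sub = 42
# per-subject random slopes from the two RFX GLMs (simulated stand-ins):
beta_feature = rng.standard_normal(n_sub)                    # BOLD ~ mouse feature
beta_mmi = -0.5 * beta_feature + rng.standard_normal(n_sub)  # BOLD ~ MMI

r, p = stats.pearsonr(beta_feature, beta_mmi)    # across-subject coupling
```

A reliable correlation between the two slope vectors means that subjects whose motor ROI tracked a given mouse feature more strongly also showed a correspondingly stronger (or, with a negative sign, weaker) inconsistency signal in that ROI.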
We found significant correlations for two of the three mouse features we examined (“MD” and “numFixations”) in SMA and marginally significant correlations for the same two features in M1 (“MD”, SMA, r = −0.458; q(FDR) = 0.002; CI = [−0.669; −0.179]; M1, r = −0.293; q(FDR) = 0.059; CI = [−0.548; −0.0114]; “numFixations”, SMA, r = 0.327; q(FDR) = 0.035; CI = [0.025; 0.574]; M1, r = 0.297; q(FDR) = 0.056; CI = [0.010; 0.622]; Fig. 7B). The smaller effect size in M1 might be related to the weaker inconsistency signal in that ROI (Fig. 6B).
To conclude, the current analysis shows that motor characteristics of task execution had an explicit neural footprint in motor circuits, one that could be directly linked to the neural computation of inconsistent choice behavior. This provides further evidence for the possibility that the motor cortex contributes to inconsistent choice in value-based decision-making.
Ruling out value modulation in the motor task
Finally, to validate that our non-value motor task did not involve any sort of value modulation related to choice, we conducted two separate analyses. First, we conducted a pseudo-SV analysis, in which we treated the location of the button press in each trial as if it were an actual choice in the value-based risky-choice task. Based on this assumption, we calculated what the inconsistency level and SV would have been had it been a choice in the value-based task (Fig. 8A). We then looked for the neural correlates of this pseudo-SV regressor in the BOLD signal of the motor task, using the same RFX GLM modeling as in the analysis described in Figure 6. We found null results for all predefined value ROIs (Fig. 8B; p < 0.05; cluster-size corrected), indicating that subjects in the motor task did not conduct monetary-based value computations during motor execution, nor did they treat the location of the button press as choosing a lottery.
Figure 8.
Ruling out value modulation in the motor task. A, B, Pseudo-SV analysis. A, We treated the button-press location in the motor task as if it were a choice in the value-based risky-choice task. Based on this assumption, we then extracted utility parameters corresponding to this location and calculated what its trial-specific MMI score and trial-specific SV would have been had it been a lottery choice in the value-based risky-choice task. B, We repeated the same ROI RFX GLM from Figure 6a–c on the BOLD signal from the motor task with the pseudo-MMI and pseudo-SV regressors. C, A reinforcement signal for reaching the target in the motor task. For each TR, we calculated the current Euclidean distance from the target. We then ran an RFX GLM in value-related regions. n = 42; p < 0.05; cluster-size corrected. Six additional motion-correction regressors were included as regressors of no interest. Results are shown on the Colin 152-MNI brain.
In the second analysis, we hypothesized that the motor task carried a different type of decision-making procedure, one that did not include value computations but rather a different type of computation. We speculated that at each point in time during the trial, the subject assessed the distance between the mouse cursor and the predefined target, indicating whether they had already reached the target's coordinates. To this end, in each repetition time epoch (TR), we calculated the Euclidean distance between the current position of the mouse cursor and the predefined target and looked for its neural correlates in all value-related ROIs. We found that it was negatively correlated with BOLD activations in the vStr, meaning that the vStr became more active as subjects got closer to the target, probably reflecting a reinforcement-like signal for target reaching (Fig. 8C; p < 0.05; cluster-size corrected). This result shows that during the motor task, subjects did not calculate SV or try to maximize the utility of chosen lotteries but instead computed the spatial location of the mouse relative to the target location.
Discussion
We used behavioral paradigms, neuroimaging, and mouse-tracking techniques to show that inconsistent choices can be tied to two interacting neural processes. Inconsistent choices arose during the noisy process of value computation but, at the same time, were also associated with task execution instantiated by motor output. The findings reported in the current paper were obtained in two very different environments (at a computer vs inside the MRI scanner) and then replicated in a third, separate sample. We used a well-established risky-choice task and presented innovative non-value motor tasks that isolated the dynamics of motor planning and execution from value-related computations within the same experimental setting.
We showed that motor imprecisions in the non-value tasks were related to inconsistency levels in the main risky-choice task. To examine the moment-by-moment motor dynamics, we employed mouse-tracking tools, which have been used extensively in recent years to study implicit cognitive mechanisms in several domains (Spivey et al., 2005; Dale et al., 2007; Song and Nakayama, 2008; Stolier and Freeman, 2017), including the study of value-based choice (Sullivan et al., 2015; Stillman et al., 2017; Lim et al., 2018; Konovalov and Krajbich, 2020). Our study built upon these works to pinpoint the motor elements that directly predicted choice inconsistency. Importantly, we did not focus solely on a small set of features that had previously been related to valuation and choice [e.g., mean velocity (Sullivan et al., 2015) or the curvature of the trajectory (Dshemuchadse et al., 2013; Stillman et al., 2017)]. Rather, we extracted 34 mouse features from the recorded trajectories to characterize a wide range of temporal and spatial variation in motor dynamics.
Controlling for other task parameters, the mouse features from the risky-choice task accounted for roughly 7–18% of the variation in inconsistency levels. Crucially, mouse features originating from the non-value tasks could also account for ∼7% (on average) of the variation in inconsistency levels in the main risky-choice task. As these tasks did not involve any value-related computations (Fig. 8), they solely represented motor planning and execution. We demonstrated this finding with two different analysis approaches, using both in-sample and out-of-sample prediction techniques (Figs. 4, 5).
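The logic behind these variance estimates is a nested-regression comparison, which can be sketched as follows (illustrative Python on simulated data; the variable names, effect sizes, and feature labels are our own assumptions, not the study's estimates):

```python
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R^2 of an OLS fit of y on X (intercept column added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) ** 2).sum()
    n, k = X.shape
    return 1 - (1 - r2) * (n - 1) / (n - k)

rng = np.random.default_rng(0)
n = 500
task_params = rng.normal(size=(n, 2))   # e.g., SV and budget-line slope
mouse_feats = rng.normal(size=(n, 3))   # e.g., MD, mean velocity, RT
y = (task_params @ np.array([0.5, 0.3])
     + mouse_feats @ np.array([0.4, 0.2, 0.1])
     + rng.normal(size=n))              # simulated inconsistency levels

base = adjusted_r2(task_params, y)                                  # task parameters only
full = adjusted_r2(np.column_stack([task_params, mouse_feats]), y)  # + mouse features
extra_variance = full - base  # variance uniquely added by mouse features
```

The increment `extra_variance` is the quantity reported in the text; the study's actual models additionally included subject fixed effects.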
The contribution of mouse features to inconsistency levels was larger in the behavioral and replication studies (>16%) than in the neuroimaging study (7%), probably because subjects’ movements were unrestricted in the behavioral studies, whereas in the fMRI study subjects were requested to lie still; this freedom of movement resulted in a stronger association between motor-related computations and inconsistent choice. Along these lines, previous studies claimed that the contribution of motor output to variability in choice was very limited (Drugowitsch et al., 2016; Findling et al., 2019). We believe that this discrepancy is rooted in the far more complex motor output in our task (moving the mouse toward a continuous budget line) compared with the simple button-click in the two-alternative forced-choice paradigms used in previous studies. We posit that such designs were too simple to fully examine the complexities of the motor system and to rigorously test whether motor output contributed to choice inconsistency. Hence, we consider the complex motor dynamics in our task a strong advantage, which allowed us to thoroughly tackle the role of motor noise in choice inconsistency. Our findings thus call for further investigation into the links between the motor and valuation systems (Trommershäuser et al., 2008; Wu et al., 2009; Chen et al., 2017).
The only feature that correlated with inconsistency levels across all tasks and samples was “MD”, the maximal distance from the shortest path to the final choice. This indicates that subjects who traveled in straight, “decisive” pathways toward their choice were less likely to demonstrate high inconsistency levels. It also suggests that more decisive routes had lower noise levels, corroborating the groundbreaking work of Harris and Wolpert (1998), who argued that straight trajectories are selected to minimize the variance in endpoint positions, the result of a unimodal velocity profile that minimizes the accumulated signal-dependent noise causing deviations from targets. In our study, minimizing the variance toward the endpoint also resulted in lower inconsistency levels. Since the intensity of inconsistency levels is sometimes interpreted as a measure of wasted income (Afriat, 1973), these routes can be considered less costly, suggesting that motor planning takes into account subjects’ own motor uncertainty, which carries monetary costs (Trommershäuser et al., 2003; Zhang et al., 2010).
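As an illustration, the “MD” feature can be computed as the maximal perpendicular distance of a trajectory from the straight line connecting its start and end points (a minimal sketch; the function name is ours and the study's exact preprocessing may differ):

```python
import numpy as np

def max_deviation(traj):
    """Maximal perpendicular distance ('MD') of a 2D trajectory from the
    straight line connecting its start and end points."""
    traj = np.asarray(traj, dtype=float)
    start, end = traj[0], traj[-1]
    chord = end - start
    length = np.linalg.norm(chord)
    if length == 0:  # no net movement: fall back to distance from start
        return float(np.linalg.norm(traj - start, axis=1).max())
    # |2D cross product| / |chord| gives point-to-line distance
    dev = np.abs(chord[0] * (traj[:, 1] - start[1])
                 - chord[1] * (traj[:, 0] - start[0])) / length
    return float(dev.max())

straight = [[0, 0], [1, 1], [2, 2]]   # decisive path: MD = 0
curved = [[0, 0], [1, 2], [2, 2]]     # bows away from the direct path
```

A perfectly straight path yields MD = 0, while a trajectory that bows away from the direct path yields a positive MD that grows with the detour.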
We were able to replicate our previous neuroimaging results (Kurtz-David et al., 2019) and show that inconsistency emerged during the computation of value and was represented in value-related brain areas. However, we also showed that choice inconsistency was linked to activations in motor brain regions (Figs. 6, 7). Our neuroimaging results further tied neural activations associated with specific elements of task execution to inconsistent choice. Indeed, we found that, in the same motor regions and during the same time epoch, changes in the BOLD signal related to the “MD” and “numFixations” features correlated (in opposite directions) with changes in the signal associated with inconsistencies. This implies that the decision output in our task was linked to other cognitive functions, in particular the motor system, which lie outside the value network.
Several studies have already shown how noisy neural computations of value, rather than explore–exploit or diversification strategies, may cause irrationality or imprecision in value-based choice (Padoa-Schioppa, 2013; Polanía et al., 2015; Webb et al., 2019; Findling et al., 2019; Kurtz-David et al., 2019). Similarly, spontaneous neural activity also accounts for stochastic behavior in sensory and motor decision tasks that do not involve valuation processes (Fox and Raichle, 2007; Drugowitsch et al., 2016; Lebovich et al., 2019). Here, we associate motor activations directly with value-based choice and argue that neural dynamics from various cognitive functions may give rise to inconsistent choice. This provides additional evidence that inconsistent choice does not originate solely from noisy valuations (Madar et al., 2024). Our results highlight the importance of studying the role of motor output outside classic paradigms of motor skills (i.e., grabbing or target reaching).
The term “trembling hand” refers to mere noise. If that were the case, one would not expect to find any links between the risky-choice task and the two non-value tasks. In particular, motor imprecision in the non-value tasks, captured by Euclidean distances from targets, should not have correlated with subjects’ inconsistency levels in the main risky-choice task. Our results suggest that motor imprecision is not noise in the classic sense. Fully understanding what generates this motor imprecision required the detailed mouse-features analysis, which revealed that motor imprecision reflects stable, inherent motor characteristics of an individual, some of which relate to inconsistent choice. The design of our non-value tasks allowed us to identify these specific characteristics in a very similar setting; had we compared task execution in the risky-choice task to a motor task with a different setting, we expect that we would have found weaker correlations with lower predictive power. Furthermore, our task design did not allow us to isolate “classic” motor noise, which reflects sheer randomness within motor output; such randomness would probably not correlate across tasks. Future research is required to pinpoint such stochasticity and to test how manipulations of motor output affect choice inconsistency.
Our neuroimaging findings further indicated that the mouse features representing choice dynamics correlated with activations in value ROIs during the main risky-choice task. This suggests that mouse features reflected fundamental aspects of the valuation process itself, ones that went beyond motor planning and task execution. The much weaker correlations between the mouse features and activity in value ROIs during the motor task further indicated that the bidirectional transmission between the networks was less necessary in the absence of a value-based choice process. This implies that the transmission between decision circuits and motor regions was affected by the task and was probably neither sequential nor purely feed-forward. Rather, it may reflect a task-dependent continuous flow of information between circuits (Selen et al., 2012) and suggests that further deliberation occurs after movement onset (Burk et al., 2014).
Economists have long argued that inconsistencies in subjects’ responses can result from random components in their preference orderings (Becker et al., 1963; Loomes and Sugden, 1995). This has led to calls for improved theories of how stochastic noise influences behavior (Loomes, 2005; Harrison, 2008). More recently, these inconsistencies have been modeled as resulting from a deterministic source (Hey, 2005). The “constant-error” approach (Harless and Camerer, 1994) suggests that subjects obey some deterministic theory of choice but, with some probability, experience a tremble or a lapse at the execution of choice. In contrast, the “white noise” approach posits that subjects maximize a utility function with an additive noise term (Hey and Orme, 1994; McFadden, 2001), implying that stochasticity in choice occurs during the valuation process. This approach has been supported by neural evidence suggesting that the source of these stochastic valuations is spontaneous fluctuations of neural networks in value-related brain regions (Webb et al., 2019; Kurtz-David et al., 2019; Webb, 2019). Our findings can help inform economic theory and enable a finer description of the magnitude and form of the error component (Hey and Orme, 1994; Gul and Pesendorfer, 2006). Recently, Webb (2019) showed that “white noise” models are analogous to sequential sampling models, often used in psychology and neuroscience to describe dynamic choice processes in discrete, usually binary, choice sets (Gold and Shadlen, 2001; Ratcliff and McKoon, 2008; Milosavljevic et al., 2010). The current study presents extensive empirical evidence that bridges these schools of thought and suggests that in choices involving substantial motor elements, irrationality can be associated with randomness derived from the value system but also with inherent characteristics of the motor system.
Future analytical work will be required to develop a unified modeling approach for such choice processes.
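The two stochastic-choice accounts discussed above can be contrasted in a toy simulation (illustrative Python; the utilities, noise levels, and function names are our own assumptions, not models estimated in this study):

```python
import numpy as np

rng = np.random.default_rng(1)

def choose_white_noise(u_a, u_b, sigma):
    """'White noise' account: additive noise perturbs utilities before comparison."""
    return (u_a + rng.normal(0, sigma)) > (u_b + rng.normal(0, sigma))

def choose_constant_error(u_a, u_b, tremble):
    """'Constant error' account: deterministic choice, flipped with probability `tremble`."""
    best = u_a > u_b
    if rng.random() < tremble:
        return not best
    return best

def reversal_rate(chooser, n=10_000, **kwargs):
    """Fraction of consecutive repetitions of the same choice problem
    in which the chosen option flips (a simple inconsistency measure)."""
    picks = [chooser(1.0, 0.8, **kwargs) for _ in range(n)]
    return float(np.mean([a != b for a, b in zip(picks, picks[1:])]))
```

With `tremble = 0` the constant-error chooser never reverses, while even moderate utility noise produces frequent reversals; the full econometric treatments of both accounts are given in Hey and Orme (1994) and Webb (2019).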
References
- Afriat SN (1967) The construction of utility functions from expenditure data. Int Econ Rev 8:67–77. 10.2307/2525382
- Afriat SN (1972) Efficiency estimation of production functions. Int Econ Rev 13:568–598. 10.2307/2525845
- Afriat SN (1973) On a system of inequalities in demand analysis: an extension of the classical method. Int Econ Rev 14:460–472. 10.2307/2525934
- Bartra O, McGuire JT, Kable JW (2013) The valuation system: a coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value. Neuroimage 76:412–427. 10.1016/j.neuroimage.2013.02.063
- Becker GM, DeGroot MH, Marschak J (1963) Stochastic models of choice behavior. Behav Sci 8:41–55. 10.1002/bs.3830080106
- Binmore K (1987) Modeling rational players. Econ Philos 3:179–214. 10.1017/S0266267100002893
- Bronars SG (1987) The power of nonparametric tests of preference maximization. Econometrica 55:693–698. 10.2307/1913608
- Burk D, Ingram JN, Franklin DW, Shadlen MN, Wolpert DM (2014) Motor effort alters changes of mind in sensorimotor decision making. PLoS One 9:e92681. 10.1371/journal.pone.0092681
- Busemeyer JR (1993) Violations of the speed-accuracy tradeoff relation. In: Time pressure and stress in human judgment and decision making (Svenson O, Maule AJ, eds), pp 181–193. Boston, MA: Springer US.
- Camille N, Griffiths CA, Vo K, Fellows LK, Kable JW (2011) Ventromedial frontal lobe damage disrupts value maximization in humans. J Neurosci 31:7527–7532. 10.1523/JNEUROSCI.6527-10.2011
- Chen X, Mohr K, Galea JM (2017) Predicting explorative motor learning using decision-making and motor noise. PLoS Comput Biol 13:e1005503. 10.1371/journal.pcbi.1005503
- Choi S, Fisman R, Gale D, Kariv S (2007) Consistency and heterogeneity of individual behavior under uncertainty. Am Econ Rev 97:1921–1938. 10.1257/aer.97.5.1921
- Choi S, Kariv S, Müller W, Silverman D (2014) Who is (more) rational? Am Econ Rev 104:1518–1550. 10.1257/aer.104.6.1518
- Chung H, Tymula A, Glimcher P (2017) The reduction of ventrolateral prefrontal cortex grey matter volume correlates with loss of economic rationality in aging. J Neurosci 37:12068–12077. 10.1523/JNEUROSCI.1171-17.2017
- Dale R, Kehoe C, Spivey MJ (2007) Graded motor responses in the time course of categorizing atypical exemplars. Mem Cognit 35:15–28. 10.3758/BF03195938
- Diedrichsen J, White O, Newman D, Lally N (2010) Use-dependent and error-based learning of motor behaviors. J Neurosci 30:5159–5166. 10.1523/JNEUROSCI.5406-09.2010
- Drugowitsch J, Wyart V, Devauchelle A, Koechlin E (2016) Computational precision of mental inference as critical source of human choice suboptimality. Neuron 92:1398–1411. 10.1016/j.neuron.2016.11.005
- Dshemuchadse M, Scherbaum S, Goschke T (2013) How decisions emerge: action dynamics in intertemporal decision making. J Exp Psychol Gen 142:93–100. 10.1037/a0028499
- Eklund A, Nichols TE, Knutsson H (2016) Cluster failure: why fMRI inferences for spatial extent have inflated false-positive rates. Proc Natl Acad Sci U S A 113:7900–7905. 10.1073/pnas.1602413113
- Faisal AA, Selen LP, Wolpert DM (2008) Noise in the nervous system. Nat Rev Neurosci 9:292–303. 10.1038/nrn2258
- Fan L, et al. (2016) The human brainnetome atlas: a new brain atlas based on connectional architecture. Cereb Cortex 26:3508–3526. 10.1093/cercor/bhw157
- Feldman DP, Crutchfield JP (1998) Measures of statistical complexity: why? Phys Lett A 238:244–252. 10.1016/S0375-9601(97)00855-4
- Findling C, Skvortsova V, Dromnelle R, Palminteri S, Wyart V (2019) Computational noise in reward-guided learning drives behavioral variability in volatile environments. Nat Neurosci 22:2066–2077. 10.1038/s41593-019-0518-9
- Fisman R, Kariv S, Markovits D (2007) Individual preferences for giving. Am Econ Rev 97:1858–1876. 10.1257/aer.97.5.1858
- Fitts PM (1954) The information capacity of the human motor system in controlling the amplitude of movement. J Exp Psychol 47:381–391. 10.1037/h0055392
- Fox MD, Raichle ME (2007) Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nat Rev Neurosci 8:700–711. 10.1038/nrn2201
- Freeman JB, Dale R, Farmer TA (2011) Hand in motion reveals mind in motion. Front Psychol 2:59. 10.3389/fpsyg.2011.00059
- Freeman JB (2018) Doing psychological science by hand. Curr Dir Psychol Sci 27:315–323. 10.1177/0963721417746793
- Freeman JB, Ambady N (2010) MouseTracker: software for studying real-time mental processing using a computer mouse-tracking method. Behav Res Methods 42:226–241. 10.3758/BRM.42.1.226
- Freeman JB, Dale R (2013) Assessing bimodality to detect the presence of a dual cognitive process. Behav Res Methods 45:83–97. 10.3758/s13428-012-0225-x
- Friston KJ, Buechel C, Fink GR, Morris J, Rolls E, Dolan RJ (1997) Psychophysiological and modulatory interactions in neuroimaging. Neuroimage 6:218–229. 10.1006/nimg.1997.0291
- Glimcher PW (2005) Indeterminacy in brain and behavior. Annu Rev Psychol 56:25–56. 10.1146/annurev.psych.55.090902.141429
- Gluth S, Meiran N (2019) Leave-one-trial-out, LOTO, a general approach to link single-trial parameters of cognitive models to neural data. Elife 8:e42607. 10.7554/eLife.42607
- Gold JI, Shadlen MN (2001) Neural computations that underlie decisions about sensory stimuli. Trends Cogn Sci 5:10–16. 10.1016/S1364-6613(00)01567-9
- Gold JI, Shadlen MN (2007) The neural basis of decision making. Annu Rev Neurosci 30:535–574. 10.1146/annurev.neuro.29.051605.113038
- Gul F (1991) A theory of disappointment aversion. Econometrica 59:667–686. 10.2307/2938223
- Gul F, Pesendorfer W (2006) Random expected utility. Econometrica 74:121–146. 10.1111/j.1468-0262.2006.00651.x
- Halevy Y, Persitz D, Zrill L (2018) Parametric recoverability of preferences. J Political Econ 126:1558–1593. 10.1086/697741
- Hare TA, Schultz W, Camerer CF, O'Doherty JP, Rangel A (2011) Transformation of stimulus value signals into motor commands during simple choice. Proc Natl Acad Sci U S A 108:18120–18125. 10.1073/pnas.1109322108
- Harless DW, Camerer CF (1994) The predictive utility of generalized expected utility theories. Econometrica 62:1251–1289. 10.2307/2951749
- Harris C, Wolpert DM (1998) Signal-dependent noise determines motor planning. Nature 394:780–784. 10.1038/29528
- Harrison GW (2008) Neuroeconomics: a critical reconsideration. Econ Philos 24:303–344. 10.1017/S0266267108002009
- Hehman E, Stolier RM, Freeman JB (2015) Advanced mouse-tracking analytic techniques for enhancing psychological science. Group Process Intergroup Relat 18:384–401. 10.1177/1368430214538325
- Hey JD (2005) Why we should not be silent about noise. Exp Econ 8:325–345. 10.1007/s10683-005-5373-8
- Hey JD, Orme C (1994) Investigating generalizations of expected utility theory using experimental data. Econometrica 62:1291–1326. 10.2307/2951750
- Houthakker HS (1950) Revealed preference and the utility function. Economica 17:159–174. 10.2307/2549382
- Kolling N, Behrens TEJ, Wittmann MK, Rushworth MFS (2016) Multiple signals in anterior cingulate cortex. Curr Opin Neurobiol 37:36–43. 10.1016/j.conb.2015.12.007
- Konovalov A, Krajbich I (2020) Mouse tracking reveals structure knowledge in the absence of model-based choice. Nat Commun 11:1893. 10.1038/s41467-020-15696-w
- Kurtz-David V, Persitz D, Webb R, Levy DJ (2019) The neural computation of inconsistent choice behavior. Nat Commun 10:1583. 10.1038/s41467-019-09343-2
- Lebovich L, Darshan R, Lavi Y, Hansel D, Loewenstein Y (2019) Intrinsic stochasticity in neuronal dynamics. Nat Hum Behav 3:1190–1202. 10.1038/s41562-019-0682-7
- Lim S, Penrod MT, Ha O, Bruce JM, Bruce AS (2018) Calorie labeling promotes dietary self-control by shifting the temporal dynamics of health- and taste-attribute integration in overweight individuals. Psychol Sci 29:447–462. 10.1177/0956797617737871
- Loomes G (2005) Modelling the stochastic component of behaviour in experiments: some issues for the interpretation of data. Exp Econ 8:301–323. 10.1007/s10683-005-5372-9
- Loomes G, Sugden R (1995) Incorporating a stochastic element into decision theories. Eur Econ Rev 39:641–648. 10.1016/0014-2921(94)00071-7
- Madar A, Kurtz-David V, Hakim A, Levy DJ, Tavor I (2024) Pre-acquired functional connectivity predicts choice inconsistency. J Neurosci 44:e0453232024. 10.1523/JNEUROSCI.0453-23.2024
- McFadden DL (2005) Revealed stochastic preference: a synthesis. Econ Theory 26:245–264. 10.1007/s00199-004-0495-3
- McFadden D (2001) Economic choices. Am Econ Rev 91:351–378. 10.1257/aer.91.3.351
- Milosavljevic M, Malmaud J, Huth A, Koch C, Rangel A (2010) The drift diffusion model can account for the accuracy and reaction time of value-based choices under high and low time pressure. Judgm Decis Mak 5:437–449. 10.1017/S1930297500001285
- Mjaavatten A (2020) Curvature of a 2D or 3D curve. Available at: https://www.mathworks.com/matlabcentral/fileexchange/69452-curvature-of-a-2d-or-3d-curve, MATLAB Central File Exchange. Retrieved July 1, 2020.
- Padoa-Schioppa C (2013) Neuronal origins of choice variability in economic decisions. Neuron 80:1322–1336. 10.1016/j.neuron.2013.09.013
- Phillips JG, Triggs TJ (2001) Characteristics of cursor trajectories controlled by the computer mouse. Ergonomics 44:527–536. 10.1080/00140130121560
- Polanía R, Moisa M, Opitz A, Grueschow M, Ruff CC (2015) The precision of value-based choices depends causally on fronto-parietal phase coupling. Nat Commun 6:8090. 10.1038/ncomms9090
- Ratcliff R, McKoon G (2008) The diffusion decision model: theory and data for two-choice decision tasks. Neural Comput 20:873–922. 10.1162/neco.2008.12-06-420
- Resulaj A, Kiani R, Wolpert DM, Shadlen MN (2009) Changes of mind in decision-making. Nature 461:263–266. 10.1038/nature08275
- Richman JS, Moorman JR (2000) Physiological time-series analysis using approximate entropy and sample entropy. Am J Physiol Heart Circ Physiol 278:H2039–H2049. 10.1152/ajpheart.2000.278.6.H2039
- Samuelson L, Zhang J (1992) Evolutionary stability in asymmetric games. J Econ Theory 57:363–391. 10.1016/0022-0531(92)90041-F
- Selen LPJ, Shadlen MN, Wolpert DM (2012) Deliberation in the motor system: reflex gains track evolving evidence leading to a decision. J Neurosci 32:2276–2286. 10.1523/JNEUROSCI.5273-11.2012
- Selten R (1975) Reexamination of the perfectness concept for equilibrium points in extensive games. Int J Game Theory 4:25–55. 10.1007/BF01766400
- Shadmehr R, Reppert TR, Summerside EM, Yoon T, Ahmed AA (2019) Movement vigor as a reflection of subjective economic utility. Trends Neurosci 42:323–336. 10.1016/j.tins.2019.02.003
- Song JH, Nakayama K (2008) Target selection in visual search as revealed by movement trajectories. Vision Res 48:853–861. 10.1016/j.visres.2007.12.015
- Spivey MJ, Grosjean M, Knoblich G (2005) Continuous attraction toward phonological competitors. Proc Natl Acad Sci U S A 102:10393–10398. 10.1073/pnas.0503903102
- Stillman PE, Medvedev D, Ferguson MJ (2017) Resisting temptation: tracking how self-control conflicts are successfully resolved in real time. Psychol Sci 28:1240–1258. 10.1177/0956797617705386
- Stolier RM, Freeman JB (2017) A neural mechanism of social categorization. J Neurosci 37:5711–5721. 10.1523/JNEUROSCI.3334-16.2017
- Sullivan N, Hutcherson C, Harris A, Rangel A (2015) Dietary self-control is related to the speed with which attributes of healthfulness and tastiness are processed. Psychol Sci 26:122–134. 10.1177/0956797614559543
- Trommershäuser J, Maloney LT, Landy MS (2003) Statistical decision theory and trade-offs in the control of motor response. Spat Vis 16:255–275. 10.1163/156856803322467527
- Trommershäuser J, Maloney LT, Landy MS (2008) Decision making, movement planning and statistical decision theory. Trends Cogn Sci 12:291–297. 10.1016/j.tics.2008.04.010
- Webb R (2019) The (neural) dynamics of stochastic choice. Manage Sci 65:230–255. 10.1287/mnsc.2017.2931
- Webb R, Glimcher PW, Levy I, Lazzaro SC, Rutledge RB (2019) Neural random utility: relating cardinal neural observables to stochastic choice behaviour. J Neurosci Psychol Econ 12:45. 10.1037/npe0000101
- Wirth R, Foerster A, Kunde W, Pfister R (2020) Design choices: empirical recommendations for designing two-dimensional finger-tracking experiments. Behav Res Methods 52:2394–2416. 10.3758/s13428-020-01409-0
- Wolpert DM, Ghahramani Z (2000) Computational principles of movement neuroscience. Nat Neurosci 3:1212–1217. 10.1038/81497
- Wu SW, Delgado MR, Maloney LT (2009) Economic decision-making compared with an equivalent motor task. Proc Natl Acad Sci U S A 106:6088–6093. 10.1073/pnas.0900102106
- Wulff DU, Kieslich PJ, Henninger F, Haslbeck JM (2021) Movement tracking of cognitive processes: a tutorial using mousetrap. WP:1–29.
- Wunderlich K, Rangel A, O'Doherty JP (2009) Neural computations underlying action-based decision making in the human brain. Proc Natl Acad Sci U S A 106:17199–17204. 10.1073/pnas.0901077106
- Yarkoni T, Poldrack RA, Nichols TE, Van Essen DC, Wager TD (2011) Large-scale automated synthesis of human functional neuroimaging data. Nat Methods 8:665–670. 10.1038/nmeth.1635
- Zhang H, Maddula SV, Maloney LT (2010) Planning routes across economic terrains: maximizing utility, following heuristics. Front Psychol 1:214. 10.3389/fpsyg.2010.00214
Supplementary Materials
Budget sets’ parameters, behavioral study. Download Figure 1-1, DOCX file (24.1KB, docx) .
Budget sets’ parameters, neuroimaging study. Download Figure 1-2, DOCX file (19.1KB, docx) .
Distributions of trials by the share allocated to the cheap account (x or y). Left: behavioral study; middle: neuroimaging study; right: replication study. When the slope is > 1 (in absolute value), it is more beneficial to allocate money to the Y-axis account, which is then considered the cheap account; conversely, when the slope is ≤ 1 (in absolute value), it is more beneficial to allocate money to the X-axis account, which is then considered the cheap account. As X and Y are symmetric, allocating less than 50% of the endowment to the cheap account corresponds to violating FOSD. The share of trials with FOSD violations was 11.79% in the behavioral study, 4.98% in the neuroimaging study, and 14.45% in the replication study. If we allow some sensitivity around the focal 50%-share point (the intersection between the budget set and a 45-degree line from the axes' origin) and examine only the share of trials where subjects allocated less than 45% to the cheap account, then only 2.84% of trials in the behavioral study, 1.68% in the neuroimaging study, and 5.77% in the replication study violated FOSD. This suggests that subjects understood the task. Download Figure 2-1, DOCX file (1.5MB, docx).
An illustration of the inconsistency indices. (a) A dataset with choices on three budget lines. Bundle 3 is revealed preferred to Bundle 1 and Bundle 2, but at the same time Bundle 1 and Bundle 2 are revealed preferred to Bundle 3. These choices create a choice cycle with three GARP violations. The dashed line indicates the Afriat Index (AI) and shows the largest adjustment to the budget lines required to remove all the violations. (b) MMI. Left: the utility function U(·) ranks some bundles on the budget line higher than the ranking provided by the subject, who chose x^i as their most desired bundle. Right: we resolve this incompatibility by shifting the budget line towards bundle y (dotted line). On the adjusted budget line, y is the highest-ranked bundle by U(·). Download Figure 2-2, DOCX file (1.6MB, docx).
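For readers unfamiliar with the revealed-preference machinery behind these indices, the pairwise GARP check can be sketched as follows (illustrative Python; the MMI and Afriat Index used in the paper involve additional optimization steps beyond this count):

```python
import numpy as np

def garp_violations(prices, bundles):
    """Count pairwise GARP violations in budget-line choice data.

    prices, bundles : (n_trials, 2) arrays; bundle i was chosen when
    prices were prices[i], so expenditure on trial i is prices[i] @ bundles[i].
    """
    p = np.asarray(prices, dtype=float)
    x = np.asarray(bundles, dtype=float)
    n = len(x)
    cost = p @ x.T                 # cost[i, j] = p_i . x_j
    spent = np.diag(cost)          # expenditure on each trial
    D = cost <= spent[:, None]     # x_i directly revealed preferred to x_j
    strict = cost < spent[:, None] # ... strictly (x_j was strictly cheaper)
    # transitive closure of D (Floyd-Warshall style)
    R = D.copy()
    for k in range(n):
        R |= R[:, [k]] & R[[k], :]
    # violation: x_i revealed preferred to x_j, while x_j is strictly
    # directly revealed preferred to x_i
    viol = R & strict.T
    np.fill_diagonal(viol, False)
    return int(viol.sum())
```

With two crossing budget lines where each chosen bundle was strictly affordable when the other was chosen, the count is positive; a consistent dataset returns zero.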
Correlations between inconsistency indices. Scatterplots show correlations between inconsistency indices: upper panel shows the correlations between MMI and AI (r = 0.7608, left) and the number of GARP violations (r = 0.722, right) in the behavioral study, the middle panel shows the same results for the neuroimaging study (r = 0.8618 and r = 0.7989, respectively). The bottom panel shows the results for the replication study (r = 0.8723 and r = 0.8797, respectively). Dashed line indicates the least squares fit. Download Figure 2-3, DOCX file (6.7MB, docx) .
Mouse tracking. (a) Two trajectories from two different trials of the same subject (sub. 217) across the different tasks, behavioral study. Left: a trial with similar trajectories across tasks (trial 23). Right: a trial with dissimilar trajectories across tasks (trial 75). (b) Same as (a), but for a subject in the MRI study (trials 10 and 27 from subject 113). (c-d) Velocity profiles for the same trials presented in (a-b), respectively. We normalized RT to a relative scale. (e-f) Average velocity profiles for all subjects in the sample. We normalized RT to a relative scale. Shaded error bars represent standard errors. (e) Behavioral study. (f) MRI study. Download Figure 3-1, DOCX file (3.5MB, docx).
Distribution of feature values after preprocessing (z-scoring, removal of NaN and extreme values), behavioral study. Download Figure 3-2, DOCX file (5.8MB, docx).
Distribution of feature values after preprocessing (z-scoring, removal of NaN and extreme values), MRI study. Download Figure 3-3, DOCX file (4.9MB, docx).
Description of mouse features. Download Figure 3-4, DOCX file (19.1KB, docx) .
Mouse features and inconsistency levels, Approach B (PCA). (a) Same analysis as in Fig. 4c, using Approach B (PCA). Mouse features in the main task correlated with inconsistency levels in both studies. Shaded areas indicate a significant predictor (p < 0.05) in a subject fixed-effects regression. Left: behavioral study. Middle: neuroimaging study. Right: replication study. (b) Adj-R2 from regressions that included PCs from the main task, compared with regressions that included PCs from the non-value tasks. Solid fill indicates regressions that included only task-execution mouse features. Diagonal striped fill indicates subjects' random intercepts. Horizontal striped fill indicates additional variance explained by SV and other task-related parameters. Left: behavioral study. Middle: neuroimaging study. Right: replication study. For the behavioral and replication studies, the figure includes only the 51 trials that were common to all three tasks. (c) Leave-one-subject-out prediction. Distribution of correlation coefficients between inconsistency levels predicted from main-task PC1s and actual MMI scores. Behavioral study: median = 0.3797, min = 0.0980, max = 0.7949, std = 0.1582; neuroimaging study: median = 0.142, min = -0.065, max = 0.541, std = 0.145; replication study: median = 0.382, min = -0.109, max = 0.7963, std = 0.201. Dashed black line indicates the median. Scatter shows individual correlation coefficients. p < 0.0001; behavioral study: t(88) = 29.971; neuroimaging study: t(41) = 7.727; replication study: t(68) = 16.1498. One-sided one-sample t-tests (behavioral study: N = 89; neuroimaging study: N = 42; replication study: N = 69). (d) Same analysis as in Fig. 5b-c, using Approach B (PCA). Obtained correlations between predicted and actual MMI, behavioral study. Each dot in the scatter relates to a different trial; predictions are based on mouse features from the behavioral study (N motor task = 4,279; N numerical task = 4,192). (e) Predictions based on mouse features from the neuroimaging study (N = 2,915). (f) Predictions based on mouse features from the replication study (N motor task = 3,085; N numerical task = 3,160). Download Figure 4-1, DOCX file (6.9MB, docx).
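The leave-one-subject-out procedure in panel (c) can be sketched as follows. This is a minimal illustration of the logic, not the authors' code: the toy data, the simple linear map from PC1 to MMI, and all variable names are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy data: per-trial PC1 scores and inconsistency (MMI) for each subject.
n_subjects, n_trials = 10, 50
pc1 = rng.normal(size=(n_subjects, n_trials))
mmi = 0.5 * pc1 + rng.normal(scale=0.5, size=(n_subjects, n_trials))

corrs = []
for s in range(n_subjects):
    train = np.delete(np.arange(n_subjects), s)
    # Fit a linear map from PC1 to MMI on all remaining subjects.
    x, y = pc1[train].ravel(), mmi[train].ravel()
    beta, intercept = np.polyfit(x, y, 1)
    # Predict the held-out subject's trial-wise MMI; correlate with truth.
    pred = intercept + beta * pc1[s]
    corrs.append(np.corrcoef(pred, mmi[s])[0, 1])

# One-sided one-sample t-test: are the correlations greater than zero?
t, p = stats.ttest_1samp(corrs, 0.0, alternative="greater")
```

The distribution of `corrs` across held-out subjects corresponds to the scatter in panel (c), and the t-test mirrors the reported one-sided one-sample tests.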
Surviving features from the elastic net analysis, behavioral and neuroimaging studies. Download Figure 4-2, DOCX file (17.7KB, docx).
Surviving features from the elastic net analysis, replication study. Download Figure 4-3, DOCX file (16.4KB, docx).
M1 and SMA functional ROIs. We used a simplified version of the motor task from Kurtz-David et al. (2019) as an independent functional localizer for M1 and the SMA. Whole-brain RFX GLM, n = 22, q(Bonferroni) < 0.05. Model regression: BOLD = β0 + β1·RT + ε. We defined any significant voxel that correlated with the boxcar function as belonging to either the M1 (left) or the SMA (right) region. To avoid overlap between ROIs, we excluded from the SMA ROI any voxels it shared with the neighboring dACC ROI. Download Figure 6-1, DOCX file (1.8MB, docx).
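The localizer logic above (per-voxel GLM against a boxcar regressor, thresholded to define an ROI mask) can be sketched on toy 1-D data. This is a hedged illustration, not the authors' pipeline: the block design, noise model, and t-threshold are assumptions, and no HRF convolution is applied.

```python
import numpy as np

rng = np.random.default_rng(2)

n_tr, n_vox = 120, 50
# Boxcar regressor: alternating 10-TR task / 10-TR rest blocks (toy design).
boxcar = np.tile(np.r_[np.ones(10), np.zeros(10)], n_tr // 20)
bold = rng.normal(size=(n_tr, n_vox))
bold[:, :5] += 2.0 * boxcar[:, None]  # first five voxels respond to the task

# Per-voxel GLM: BOLD = b0 + b1 * boxcar + noise.
X = np.column_stack([np.ones(n_tr), boxcar])
betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
b1 = betas[1]

# t-like statistic per voxel; supra-threshold voxels form the ROI mask.
resid = bold - X @ betas
se = np.sqrt(resid.var(axis=0, ddof=2) / (n_tr * boxcar.var()))
tstat = b1 / se
roi_mask = tstat > 5.0
```

With this seed, only the five task-responsive voxels survive the threshold, mimicking how significant voxels are assigned to the M1 or SMA ROI.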
ROI-GLM analysis, neural correlates of inconsistency levels (less binding regression model, as in Fig. 6d). Each row indicates the number of functional voxels in a cluster of activation related to the regressor of interest, MMI (the inconsistency index). The second column indicates the ROI in which the model was run. The third column indicates the minimal cluster size required to reach significance at p < 0.05, based on 10,000 permutations. The fourth column details the actual number of voxels in the cluster of activation. The fifth column indicates whether the results are significant after cluster-size correction, i.e., whether the number in the fourth column is greater than or equal to the number in the third column. Model specification: ROI RFX GLM, n = 42. Six additional motion-correction regressors were included as regressors of no interest. Download Figure 6-2, DOCX file (15.6KB, docx).
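The permutation-based cluster-size threshold described in the third column can be illustrated with a toy scheme: build a null distribution of the maximal supra-threshold cluster size, then take its 95th percentile. This is a 1-D sketch under assumed Gaussian noise, not the authors' 3-D fMRI procedure; the voxel count and z-threshold are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_cluster_size(mask):
    """Largest run of consecutive supra-threshold voxels in a 1-D mask."""
    best = run = 0
    for v in mask:
        run = run + 1 if v else 0
        best = max(best, run)
    return best

n_vox, n_perm, z_thresh = 200, 10_000, 1.96
# Null distribution of the maximal cluster size under random data.
null = [max_cluster_size(rng.normal(size=n_vox) > z_thresh)
        for _ in range(n_perm)]
# Minimal cluster size needed for a corrected p < 0.05: one voxel more
# than the 95th percentile of the null maxima.
min_cluster = int(np.percentile(null, 95)) + 1
```

An observed cluster is then called significant when its voxel count meets or exceeds `min_cluster`, which is the comparison made between the fourth and third columns of the table.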
Psychophysiological Interaction analysis results. Download Figure 6-3, DOCX file (17.3KB, docx).