Author manuscript; available in PMC 2022 Oct 1.
Published in final edited form as: Cognition. 2021 Jul 9;215:104818. doi: 10.1016/j.cognition.2021.104818

Cognitive efficiency beats top-down control as a reliable individual difference dimension relevant to self-control

Alexander Weigard 1,*, D Angus Clark 1, Chandra Sripada 1
PMCID: PMC8378481  NIHMSID: NIHMS1723337  PMID: 34252724

Abstract

Top-down control of responses is a key construct in cognitive science that is thought to be critical for self-control. It is typically measured by contrasting performance in experimental conditions in which top-down control is theoretically engaged with performance in matched conditions in which it is assumed to be absent. Recently, however, subtraction-based metrics of top-down control have been criticized for having low test-retest reliability, weak intercorrelations, and little relation to self-report measures of self-control. Concurrently, there is growing evidence that task-general cognitive efficiency, indexed by the drift rate parameter of the diffusion model (Ratcliff, 1978), constitutes a cohesive, reliable individual difference dimension relevant to self-control. However, no previous studies have directly compared latent factors for top-down control (derived from subtraction metrics) with factors for task-general efficiency “head-to-head” in the same sample in terms of their cohesiveness, temporal stability, and relation to self-control. In this re-analysis of a large open data set (Eisenberg et al., 2019; N=522), we find that top-down control metrics fail to form cohesive latent factors, that these factors have poor temporal stability, and that they exhibit only tenuous connections to questionnaire measures of self-control. In contrast, cognitive efficiency measures—drawn from conditions of the same tasks that both are, and are not, assumed to demand top-down control—form a robust, temporally stable factor that correlates with questionnaire measures of self-control. These findings suggest that task-general efficiency is a central individual difference dimension relevant to self-control. Moreover, they go beyond recent measurement-based critiques of top-down control metrics, and instead suggest problems with key theoretical assumptions that have long guided this research paradigm.

Keywords: top-down control, self-regulation, diffusion model, conflict, subtraction

1. Introduction

A standard view in cognitive science is that humans have a set of inter-related abilities for “top-down control” of responses, including inhibiting prepotent responses, resolving conflict between incompatible stimulus/response tendencies, and preventing previous task sets from interfering with current performance. In turn, top-down control of responses has been theorized to contribute to the broader construct of “self-control”, a person’s general ability to regulate their behavior in day-to-day life, especially in the pursuit of long-term goals. Substantial research effort has been devoted to linking measures of top-down control from cognitive tasks to risk taking (Romer et al., 2009), impulsivity (Moeller et al., 2001), personality (Unsworth et al., 2009), and psychopathology, including ADHD (Barkley, 1997; Willcutt et al., 2005b), substance use (Bates et al., 2002; Stevens et al., 2014), schizophrenia (Heinrichs & Zakzanis, 1998; Mesholam-Gately et al., 2009), bipolar disorder (Swann et al., 2009), and depression (Fossati et al., 2002; Snyder, 2013).

Paradigms that are widely used to study top-down control in cognitive science, such as the Stroop and Simon tasks (Simon & Rudell, 1967; Stroop, 1935), depend on a critical assumption of “selective engagement”: a “conflict” experimental condition is assumed to selectively engage top-down control and, in an otherwise-matched “non-conflict” experimental condition, engagement of top-down control is assumed to be absent. This assumption allows subtraction of performance metrics across the two conditions to index the integrity of top-down control. For example, on the Simon task, individuals must decide between response options in conditions in which the correct response is either spatially congruent (i.e., on the same side) or incongruent (on the opposite side) with the position of the presented stimulus. Top-down control is assumed to resolve conflicting stimulus properties in the incongruent (conflict) condition exclusively, and thus individual differences in metrics formed by subtracting performance across these two conditions should, in theory, precisely index the efficacy of top-down control (J. Miller & Ulrich, 2013).

There is understandable appeal to the selective engagement assumption and the associated subtraction-based method of measuring top-down control: it appears to address a critical challenge in disentangling contributions of top-down control from myriad other processes engaged by these complex tasks (J. Miller & Ulrich, 2013). In recent years, however, multiple lines of evidence have emerged to suggest that the study of top-down control faces serious challenges, especially in the context of individual differences research. First, recent large, multi-task studies show that subtraction metrics of top-down control generally have poor test-retest reliability (Draheim et al., 2019; Enkavi et al., 2019; Hedge et al., 2018). Second, interrelations among subtraction metrics drawn from conceptually similar tasks that putatively index top-down control are relatively weak (making it challenging to form coherent latent factors) (Frischkorn et al., 2019; Karr et al., 2018), even under conditions in which problems with reliability are minimized (Rey-Mermet et al., 2019; Rouder & Haaf, 2019). Third, well-powered analyses find subtraction metrics of top-down control display weak to negligible relationships with self-report indices of self-control (Eisenberg et al., 2019; Saunders et al., 2018), calling their construct validity into question.

In parallel, a largely independent line of research has investigated how mechanistic processes posited by formal computational models contribute to individual differences in performance on speeded decision tasks, a category which subsumes most of the experimental tasks that have been used to measure top-down control. Sequential sampling models, such as the diffusion decision model (DDM) (Ratcliff, 1978; Ratcliff et al., 2016) and linear ballistic accumulator (Brown & Heathcote, 2008), have been particularly successful at explaining behavior on these tasks as resulting from a process in which individuals gradually accumulate evidence for each possible choice until a critical evidence threshold for a given choice is reached. Parameters from these models that index the efficiency with which individuals can accumulate evidence for appropriate responses are often of key interest. In the DDM, for example, the “drift rate” parameter v indexes the average rate at which the decision process moves towards a threshold for the correct choice on a given task trial (Figure 1). The model also includes several additional parameters that determine other aspects of the response selection process, which are elaborated in Figure 1 and its caption.

Figure 1.

Schematic of the diffusion decision model (DDM: Ratcliff, 1978) as applied to a Simon task in which the relevant stimulus dimension is the color of a square (on this trial, blue) presented next to a fixation cross. The model assumes that, following the presentation of the stimulus, the individual gradually gathers noisy evidence for each response option over time. The evidence accumulation process is represented as a decision variable (depicted here, for ten example trials, by the light blue traces) that drifts between an upper boundary for the correct (“blue”) choice and a lower boundary for the incorrect (“red”) choice. When this variable reaches one of the two thresholds, the corresponding (“red” or “blue”) response is initiated. In this way, the model makes specific predictions about both the accuracy of responses and the distribution of response times (represented by the light blue density plots at each boundary). Model parameters include: 1) the drift rate (v), which determines the average rate at which the individual accumulates evidence for the correct choice, 2) boundary separation (a), which determines an individual’s level of response caution (with higher values indicating greater emphasis on accurate, as opposed to speeded, responding), 3) the start point (z) of the evidence accumulation process, and 4) a “nondecision time” (Ter) parameter that accounts for processes peripheral to the decision, such as perceptual encoding and motor response implementation. Additional parameters for between-trial variability in v, z and Ter are also included in the full DDM to account for several features of behavioral data, but their inclusion appears to have little practical effect on inferences about the other model parameters (Dutilh et al., 2019).
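To make the mechanics of the model concrete, the brief R sketch below simulates individual trials from the basic DDM just described; it is purely illustrative (the parameter values are arbitrary and this is not the code used in the present study).

```r
# Illustrative single-trial DDM simulation (not the study's analysis code).
# Parameters follow the caption above: drift rate v, boundary separation a,
# relative start point z, nondecision time ter; s is the within-trial noise SD.
simulate_ddm_trial <- function(v = 1.5, a = 1.0, z = 0.5, ter = 0.3,
                               s = 1.0, dt = 0.001) {
  evidence <- z * a   # start point expressed as a proportion of the boundary
  t <- 0
  # Accumulate noisy evidence until the upper (correct) or lower (error)
  # boundary is crossed
  while (evidence > 0 && evidence < a) {
    evidence <- evidence + v * dt + s * sqrt(dt) * rnorm(1)
    t <- t + dt
  }
  list(rt = t + ter,  # observed RT = decision time + nondecision time
       correct = evidence >= a)
}

set.seed(1)
trials <- replicate(1000, simulate_ddm_trial(), simplify = FALSE)
mean(sapply(trials, `[[`, "correct"))  # predicted accuracy
mean(sapply(trials, `[[`, "rt"))       # predicted mean RT
```

Raising v in this sketch increases accuracy and shortens RTs, while raising a trades speed for accuracy, mirroring the parameter interpretations given in the caption.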

In contrast to subtraction-based metrics of top-down control, drift rate displays substantial test-retest reliability and other trait-like properties (Lerche & Voss, 2017; Schubert et al., 2016) and forms strong latent factors when measured across a variety of tasks and cognitive domains (Lerche et al., 2020; Schmiedek et al., 2007; Schmitz & Wilhelm, 2016; Schubert et al., 2016; Schulz-Zhecheva et al., 2016), suggesting that it represents a stable and cohesive cognitive individual difference dimension. Moreover, although the construct of “cognitive efficiency”, as indexed by drift rate parameters in the DDM and other sequential sampling models, is not usually thought of as being closely related to self-control, growing evidence invites a reconsideration.

First, drift rate is conceptualized as the ability to rapidly and selectively extract goal-relevant information from a stimulus for the purposes of generating an appropriate response. Self-control is itself often conceptualized in terms of effectively pursuing goals, especially long-term goals, while avoiding behaviors that are detrimental or irrelevant to goal pursuit despite appearing rewarding in the short term. Thus, there is a theoretical link between the construct of “cognitive efficiency”, as defined by drift rate, and the broader concept of self-control. Second, latent drift rate factors typically explain a large portion of the variance in higher-order cognitive abilities, such as working memory and reasoning (Lerche et al., 2020; Schmiedek et al., 2007; Schmitz & Wilhelm, 2016; Schubert & Frischkorn, 2020; Schulz-Zhecheva et al., 2016), that are widely thought to be relevant to the implementation of self-control. Third, reduced cognitive efficiency, as measured by drift rate, has been consistently documented in clinical conditions linked to self-control difficulties (Heathcote et al., 2015; Huang-Pollock et al., 2012; Shapiro & Huang-Pollock, 2019; Sripada & Weigard, 2021; Weigard & Sripada, 2021; Ziegler et al., 2016).

A key difference between the constructs of cognitive efficiency and top-down control is that the processes underlying cognitive efficiency appear to be task general: they operate in tasks that range in complexity from simple perceptual decisions to the highly complex “executive” paradigms that are widely thought to demand top-down control. Thus, in tasks explicitly designed to measure top-down control, such as the Stroop and Simon, cognitive efficiency is assumed to contribute to performance similarly in conflict conditions and non-conflict conditions. Supporting this view, recent work with the go/no-go task has demonstrated that drift rates estimated from conditions that do (“no-go”) and do not (“go”) require inhibition are strongly related (r = .73) (Weigard et al., 2019). This sets up a major difference in perspective vis-a-vis the top-down control paradigm and its associated assumption of selective engagement. If cognitive efficiency is relevant for individual differences in self-control, and if cognitive efficiency is the major driver of performance in both conflict and non-conflict conditions, subtracting performance across these two conditions will largely erase the contributions of cognitive efficiency, and the resulting subtraction metric will have little relationship with self-control. Note that this is precisely the opposite of what is predicted by the top-down control perspective, in which, due to the selective engagement assumption, subtraction is thought to isolate the top-down control process that is proposed to be relevant to self-control. Overall, then, both top-down control and cognitive efficiency are possible mechanisms operative in conflict tasks that are potentially relevant to self-control, but these accounts make strongly opposed predictions. Yet, attempts to quantify these constructs in the same subjects performing the same tasks and compare them “head-to-head” in terms of their relation to self-control are lacking.

Closely related issues were considered in an unprecedented recent investigation by Eisenberg and colleagues (2019), who examined performance on an extensive battery of cognitive tasks (37 tasks) and questionnaires (22 surveys) in a large sample of participants (N=522). Although they measured drift rates in many of these tasks, their analytic approach was highly data-driven. Specifically, they aggregated across a number of dependent measures: performance metrics across “conflict” and “non-conflict” conditions of tasks designed to index top-down control, subtraction metrics from these same tasks, as well as measures derived from other types of paradigms (e.g., delay discounting). They found that aggregated metrics from their data-driven factor analysis were generally test-retest reliable, but failed to predict real-world outcomes, such as driving under the influence citations and obesity.

The analysis approach by Eisenberg and colleagues is novel and ambitious. Nonetheless, it leaves certain questions unanswered. Most prominently, their data-driven aggregation approach joined together DDM parameters for overall performance on experimental tasks thought to manipulate top-down control, subtraction-based metrics from the same tasks, and other measures from theoretically distinct domains. Thus, the critical question remains of how the competing theoretical accounts reviewed above— top-down control accounts based on subtraction versus the task-general cognitive efficiency account indicated by the DDM literature—compare as explanations for the cognitive underpinnings of self-control.

Here we seek to clarify this outstanding issue by undertaking a re-analysis of the publicly available data from Eisenberg and colleagues’ study using a more theory-driven and comparative approach. Our goal was to directly contrast the psychometric and predictive properties of latent factors reflecting top-down control, as measured using the standard subtraction approach, with a latent factor reflecting task-general cognitive efficiency, as defined in the DDM individual differences literature. For completeness, we also study other non-DDM task-general variables such as accuracy and mean response time, as these variables may hold differential promise for indexing cognitive factors relevant to self-control as well (Draheim et al., 2019). Importantly, we define these latent factors in the same subjects undertaking the same tasks, allowing direct “head-to-head” comparisons. More specifically, we assess whether latent factors that reflect subtraction assumptions about top-down control and alternate latent factors that reflect task-general cognitive efficiency each: 1) show coherence by explaining a substantial amount of variance in the individual tasks; 2) are stable over time, as reflected by test-retest reliability; and 3) are related to a latent general dimension of self-control derived from questionnaire-based measures.

2. Methods

2.1. Data Set

As previously described in detail (Eisenberg et al., 2019), data from an initial sample of 662 adults were collected on Amazon’s Mechanical Turk using the Experiment Factory (Sochat et al., 2016). All participants clicked to indicate their agreement with an informed consent form prior to completing study procedures.

We included data from the same group of 522 participants (262 females; mean age=33.6, SD=7.9) that Eisenberg et al. (2019) included in their analyses after excluding participants who either did not complete the entire battery or failed data quality checks on ≥4 cognitive tasks. We also removed data from individual tasks in which included participants failed these quality checks (2.2% of the overall cognitive data matrix for the current study). Data and quality check code from Eisenberg et al. (2019) are available at: github.com/IanEisenberg/Self_Regulation_Ontology. Code files for the analyses conducted in the current study are available at: osf.io/dyr6b/.

2.2. Cognitive Measures

We selected seven tasks from the battery that met the following criteria: 1) the task was assumed to measure top-down control; 2) the task contained at least one experimental manipulation which allowed control to be measured using a “subtraction” assumption (conditions in which top-down control processes are assumed to be alternately present versus absent); and 3) response time (RT) and accuracy were recorded for all trials, which allows application of the DDM (Table 1).

Table 1.

Features of the seven tasks included in the analysis, including: the “conflict” and “non-conflict” experimental conditions of interest and the formula used to compute subtraction-based measures of “top-down control” (Δ) based on drift rate (v), mean response time (RT) and accuracy rate. Switch = trials in which the relevant stimulus dimension changes from the previous trial (note: for the Cue/Task-Switching Task, switching refers to the task, rather than to the cue); Stay = trials in which the relevant stimulus dimension stays the same as for the previous trial; Negative = probe letters that were presented in the trial, but were directed to be forgotten; Control = probe letters that were not presented in the trial; Incongruent = stimuli containing irrelevant information that conflicts with information in the relevant stimulus dimension; Congruent = stimuli containing irrelevant information that is consistent with information in the relevant stimulus dimension

Task | Conflict Condition | Non-Conflict Condition | vΔ | RTΔ | accuracyΔ
Local Global | Switch | Stay | Stay – Switch | Switch – Stay | Stay – Switch
Shape Matching | Distractor | No Distractor | No Distractor – Distractor | Distractor – No Distractor | No Distractor – Distractor
Directed Forgetting | Negative | Control | Control – Negative | Negative – Control | Control – Negative
Attentional Network Task | Incongruent | Congruent | Congruent – Incongruent | Incongruent – Congruent | Congruent – Incongruent
Stroop | Incongruent | Congruent | Congruent – Incongruent | Incongruent – Congruent | Congruent – Incongruent
Simon | Incongruent | Congruent | Congruent – Incongruent | Incongruent – Congruent | Congruent – Incongruent
Cue/Task-Switching Task | Switch | Stay | Stay – Switch | Switch – Stay | Stay – Switch

As several of the seven tasks included in the study had more than one experimental manipulation, breaking up experimental trials into the full factorial design would have led to small numbers of trials per cell at the individual level, which would likely have resulted in unreliable estimates of DDM parameters. Therefore, we selected one experimental manipulation relevant to top-down control for each task (detailed in Table 1) and obtained estimates of DDM parameters, RT means, and accuracy rates while collapsing across other experimental conditions. For example, as the Attentional Network Task contained manipulations of “alerting” (no cue vs. double cue conditions), “attentional orienting” (central cue vs. spatial cue conditions) and “executive control” (congruent vs. incongruent flanking stimuli), we obtained estimates separately from trials with congruent (non-conflict) and incongruent (conflict) flanking stimuli, regardless of the trial’s cue condition.

In addition, because cognitive efficiency, as measured by v, is thought to be task-general, driving performance across both simple and complex cognitive domains (Lerche et al., 2020; Weigard & Sripada, 2021), we also sought to include a task without any experimental manipulations that were assumed to demand top-down control. We therefore selected the choice response time (CRT) task from the Eisenberg et al. (2019) battery, in which participants simply decide whether an orange versus blue square is displayed on the screen. As described below, v and other metrics from the CRT were allowed to load on task-general factors in a subset of models to evaluate the generality of the factors in these models.

We used the EZ diffusion model (Wagenmakers et al., 2007) to obtain estimates of the drift rate (v), boundary separation (a) and nondecision time (Ter) for each condition of interest. The EZ method allows closed-form solutions for estimates of the main DDM parameters to be computed using three variables: the accuracy rate, and the mean and variance of correct RTs. Despite its simplicity, the EZ method tends to provide similar inferences to those drawn from more complex DDM estimation methods (Dutilh et al., 2019; Lerche et al., 2017; van Ravenzwaaij et al., 2017; van Ravenzwaaij & Oberauer, 2009). Consistent with this work, Eisenberg et al. (2019) found that EZ yielded similar results to a hierarchical Bayesian DDM.
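For concreteness, the R sketch below re-implements the EZ closed-form equations described by Wagenmakers et al. (2007); it is an illustrative re-implementation rather than the code used in this study, and the example input values are arbitrary.

```r
# EZ-diffusion closed-form solutions (after Wagenmakers et al., 2007):
# maps accuracy rate (pc), variance of correct RTs (vrt, in s^2) and mean
# correct RT (mrt, in s) onto drift rate (v), boundary separation (a) and
# nondecision time (ter). The scaling constant s = 0.1 follows the original paper.
ez_diffusion <- function(pc, vrt, mrt, s = 0.1) {
  stopifnot(pc > 0, pc < 1, pc != 0.5)   # edge cases require a correction in practice
  L   <- qlogis(pc)                      # logit of the accuracy rate
  x   <- L * (L * pc^2 - L * pc + pc - 0.5) / vrt
  v   <- sign(pc - 0.5) * s * x^(1/4)
  a   <- s^2 * L / v
  y   <- -v * a / s^2
  mdt <- (a / (2 * v)) * (1 - exp(y)) / (1 + exp(y))  # mean decision time
  c(v = v, a = a, ter = mrt - mdt)
}

# Example: 80.2% accuracy, correct-RT variance 0.112 s^2, mean correct RT 0.723 s
# should yield roughly v = 0.10, a = 0.14, ter = 0.30
ez_diffusion(pc = 0.802, vrt = 0.112, mrt = 0.723)
```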

Our main aim was to compare task-general factors, including those for v and for other performance metrics, to factors formed from subtraction metrics drawn from the same seven tasks. Thus, for each task, we created subtraction-based difference scores for comparisons between non-conflict and conflict conditions for accuracy (accuracyΔ), response time (RTΔ), and drift rates (vΔ). For measures on which higher values reflect better performance (accuracy, v), we subtracted conflict-condition scores from non-conflict-condition scores; for RT, on which higher values reflect worse performance, we did the reverse (Table 1). This created subtraction metrics oriented in the direction commonly found in the literature: larger values reflect theoretically poorer top-down control.
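A minimal sketch of this scoring step is shown below; the data frame and column names are hypothetical, and the code simply makes the orientation of the difference scores explicit.

```r
# Hypothetical per-subject condition estimates for one task (toy values)
estimates <- data.frame(
  v_nonconflict   = c(2.1, 1.8), v_conflict   = c(1.7, 1.6),
  acc_nonconflict = c(.97, .94), acc_conflict = c(.90, .88),
  rt_nonconflict  = c(.55, .60), rt_conflict  = c(.63, .71)
)

# Larger values = theoretically poorer top-down control
estimates$v_delta   <- estimates$v_nonconflict   - estimates$v_conflict     # e.g., Stay - Switch
estimates$acc_delta <- estimates$acc_nonconflict - estimates$acc_conflict
estimates$rt_delta  <- estimates$rt_conflict     - estimates$rt_nonconflict # reversed for RT
```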

2.3. Cognitive Factor Models

We evaluated task-general factors for v and for the other two main DDM parameters, boundary separation (a) and nondecision time (Ter). As prior work suggests that a and Ter differ from v in that they do not appear to have strong trait-like properties or robust relations with psychopathology (Schubert et al., 2016; Ziegler et al., 2016), we expected task-general factors for these parameters to have weaker coherence, poorer reliability and little relation with self-control questionnaires. We also evaluated task-general factors for RT and accuracy rates. These summary statistics play a role in the estimation of v and are assumed by the DDM theoretical framework to directly result from v and the other underlying processes posited in the mathematical model. It would therefore not be surprising if task-general factors for these metrics show properties that are similar to v. However, as v is assumed to index the underlying psychological mechanism of cognitive efficiency more precisely, it is possible that v may show improved psychometric properties relative to these summary statistics.

We modeled the common variance across the main seven experimental tasks (Table 1) for the five general performance metrics (v, a, Ter, RT, accuracy) and the three subtraction-based metrics (accuracyΔ, RTΔ, vΔ) using either bifactor or single-factor confirmatory factor models; the model form for a given metric was informed by preliminary exploratory factor analyses (using geomin rotation and including parallel analyses with 1,000 random draws). For all task-general factors (v, a, Ter, RT, accuracy), we used a bifactor model in which the 14 performance metric values (from non-conflict and conflict conditions across all seven tasks) loaded onto a single general factor while the 2 values from each of the 7 tasks loaded onto 7 orthogonal task-specific factors. For the subtraction metrics (accuracyΔ, RTΔ, vΔ), single-factor confirmatory factor analytic models were used in which the 7 difference score values loaded onto one general factor (there was little evidence of multidimensionality in the preliminary EFAs). Importantly, general factors are typically highly robust to analytic decisions, such as the number of orthogonal specific factors included (Clark et al., 2021), suggesting that the model structures would have little impact on the general factor scores across models. Bifactor and single-factor models were estimated in the lavaan R package (Rosseel, 2012) using Full Information Maximum Likelihood (FIML) to address missing data. All bifactor models except for the task-general v model displayed instances of an estimated negative residual variance for one or two of the indicators (i.e., Heywood cases; Wothke, 1993). These variances were fixed to 0 and the model was re-estimated. As these negative residual variances were small and non-significant, fixing them to 0 did not affect other parts of the model (while resulting in a properly converged solution). We used coefficient omega and omega hierarchical (Rodriguez et al., 2016) to assess the proportion of variance the general factors explained in the single-factor and bifactor models, respectively.
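To illustrate the bifactor specification, the lavaan sketch below shows how the task-general v model could be written, assuming a data frame with one column per condition-level drift rate; the indicator names and the data frame are hypothetical and this is not the authors' code. Omega hierarchical can then be computed from the resulting standardized loadings (Rodriguez et al., 2016).

```r
library(lavaan)

# Bifactor model: one general factor over all 14 condition-level drift rates,
# plus 7 orthogonal task-specific factors (one per task, two indicators each)
bifactor_v <- '
  gen =~ lg_stay + lg_switch + sm_nodist + sm_dist + df_con + df_neg +
         ant_cong + ant_incong + stroop_cong + stroop_incong +
         simon_cong + simon_incong + cts_stay + cts_switch
  f1  =~ lg_stay + lg_switch           # Local Global
  f2  =~ sm_nodist + sm_dist           # Shape Matching
  f3  =~ df_con + df_neg               # Directed Forgetting
  f4  =~ ant_cong + ant_incong         # Attentional Network Task
  f5  =~ stroop_cong + stroop_incong   # Stroop
  f6  =~ simon_cong + simon_incong     # Simon
  f7  =~ cts_stay + cts_switch         # Cue/Task-Switching
'

# drift_rates: hypothetical N x 14 data frame of condition-level drift rates
fit <- cfa(bifactor_v, data = drift_rates, std.lv = TRUE,
           orthogonal = TRUE,      # general and specific factors uncorrelated
           missing = "fiml")       # full information maximum likelihood
fitMeasures(fit, c("rmsea", "cfi", "tli", "srmr"))
```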

In a separate set of analyses assessing the task-generality of v and other general performance factors (i.e., the idea that these factors drive performance even on tasks in which top-down control demands are assumed to be absent), we re-estimated all task-general bifactor models while also allowing performance metrics from the simple CRT task to load onto the general factors. We expected that, at least for v and potentially also for the other task-general performance metrics (a, Ter, RT, accuracy), the CRT would display a strong loading on the task-general factor.

2.4. Assessment of Test-Retest Reliability

A subset of participants (n=150; randomly selected) completed a retest session within 8 months of the original session (60–228 day gaps between sessions). We computed intraclass correlation coefficients (ICCs) using functions from the psych R package (Revelle & Revelle, 2015) to quantify the reliability of all task-general and subtraction-based factors. ICCs were computed from a random effects model, as defined by Shrout and Fleiss (1979).
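A minimal sketch of this reliability computation with psych::ICC is shown below, using simulated scores in place of the real test and retest factor scores.

```r
library(psych)

# Toy test-retest factor scores for 150 participants (illustration only)
set.seed(1)
t1 <- rnorm(150)
t2 <- 0.8 * t1 + rnorm(150, sd = 0.6)

# ICC() reports the Shrout & Fleiss (1979) variants; a random-effects,
# single-score form of the kind described in the text is among them
ICC(cbind(time1 = t1, time2 = t2))
```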

2.5. Self-Control Questionnaires and Factor Model

A bifactor model was used to estimate general self-control factor scores across the multiple questionnaires included in the original study (Supplemental Table 7). This model included two specific factors (Supplemental Figure 1), as informed by preliminary EFAs, and exhibited reasonable overall fit (RMSEA=0.124, CFI=0.925, TLI=0.887, SRMR=0.052). Scores on the general factor were transformed such that higher values indicate better self-reported self-control.

3. Results

3.1. Task-general performance factors are cohesive while top-down control factors are not.

Fit statistics and measures that indicate the strength of loadings on the general factor (omega coefficients, mean λ) are displayed for each model in Table 2. As expected, the task-general v bifactor model (Figure 2) displayed good fit. Notably, the v factor captured 82% of variance in the data (omega hierarchical=0.82), and its factor loadings were consistently high (Figure 2; mean λ=0.55). Task-general factor models for the other performance metrics (a, Ter, RT, accuracy; Supplemental Figures 2-5) similarly displayed good fit and strong average loadings on the general factor.

Table 2.

Model fit indices for all models, measures that indicate variance explained by the general factor in each model, and intraclass correlation coefficient (ICC) measures of the test-retest reliability of the general factor scores from each model. “Gen. Ω” refers to coefficient omega for single-factor models (vΔ, RTΔ, accuracyΔ) and omega hierarchical for bifactor models (all others), which indicate the amount of variance explained by the general factor in each type of model. “w/CRT” denotes models that also allow metrics from the Choice Response Time task to load on the task-general factor. Mean λ = average of the standardized loadings on the general factor; RMSEA = root mean square error of approximation; CFI = comparative fit index; TLI = Tucker-Lewis index; SRMR = standardized root mean squared residual; Full Sample ICC = reliability of factor scores derived from a model fit to the full sample; Independent ICC = reliability of factor scores derived from a model trained on the “independent” subset of the sample that did not complete the retest session and then applied to subjects who completed both test and retest sessions.

Model Category | General Factor | RMSEA | CFI | TLI | SRMR | Gen. Ω | Mean λ | Full Sample ICC | Independent ICC
Task-General | v | 0.043 | 0.975 | 0.967 | 0.034 | 0.82 | 0.55 | 0.78 | 0.78
Task-General | a | 0.048 | 0.963 | 0.952 | 0.053 | 0.74 | 0.47 | 0.66 | 0.67
Task-General | Ter | 0.037 | 0.986 | 0.982 | 0.028 | 0.81 | 0.56 | 0.70 | 0.69
Task-General | RT | 0.043 | 0.990 | 0.987 | 0.025 | 0.84 | 0.64 | 0.76 | 0.76
Task-General | Accuracy | 0.046 | 0.960 | 0.949 | 0.041 | 0.75 | 0.46 | 0.75 | 0.74
Subtraction | vΔ | 0.036 | 0.675 | 0.512 | 0.036 | 0.23 | 0.20 | 0.41 | 0.31
Subtraction | RTΔ | 0.058 | 0.573 | 0.360 | 0.044 | 0.29 | 0.24 | 0.51 | 0.52
Subtraction | accuracyΔ | 0.015 | 0.971 | 0.956 | 0.028 | 0.39 | 0.29 | 0.56 | 0.54
Task-General w/CRT | v | 0.049 | 0.966 | 0.957 | 0.040 | 0.84 | 0.56 | 0.81 | 0.81
Task-General w/CRT | a | 0.049 | 0.958 | 0.947 | 0.057 | 0.76 | 0.47 | 0.70 | 0.70
Task-General w/CRT | Ter | 0.046 | 0.975 | 0.969 | 0.037 | 0.82 | 0.56 | 0.70 | 0.69
Task-General w/CRT | RT | 0.054 | 0.982 | 0.978 | 0.034 | 0.86 | 0.64 | 0.78 | 0.78
Task-General w/CRT | accuracy | 0.048 | 0.950 | 0.938 | 0.044 | 0.76 | 0.46 | 0.76 | 0.75

Figure 2.

Standardized factor loadings of the bifactor model of task-general v. Values greater than or equal to .40 are displayed in bold to emphasize strong loadings. F1-F7 indicate the seven task-specific factors. Dist. = shape matching trials in which a distractor was present that did not match the target or probe; No Dist. = shape matching trials in which no distractor was present; Neg. = probe letters that were presented in the trial, but were directed to be forgotten; Con. = probe letters that were not presented in the trial; CTS = Cue/Task Switching Task

In contrast to the task-general factor models, the factors for all subtraction-based top-down control metrics captured relatively little variance in their respective indicators (vΔ omega=0.23; accuracyΔ omega=0.39; RTΔ omega=0.29) with the loadings on these factors (Figure 3) being variable and generally weak (vΔ mean λ=0.20, accuracyΔ mean λ=0.29, RTΔ mean λ=0.24). As such, fit statistics for the top-down control models were variable and more complicated to interpret. The accuracyΔ model displayed good fit across all indices. Notably, for the vΔ and RTΔ models, the indices which compare a model’s fit with a null model (CFI, TLI) were particularly poor. This reflects the fact that these subtraction variables explained very little variance in one another (Supplemental Table 6) and thus the factor models did little better than a model with no paths connecting any variables.

Figure 3.

Standardized factor loadings of the single-factor models of latent factors made up of subtraction-based “top-down control” difference scores (Δ) based on drift rate (v), mean response time (RT) and accuracy rate. Values greater than or equal to .40 are displayed in bold to emphasize strong loadings. CTS = Cue/Task Switching Task

3.2. Efficiency on a simple choice response time task is strongly related to the task-general cognitive efficiency factor.

Bifactor models that allowed performance metrics from the simple CRT task to load on the task-general factor (“w/CRT” in Table 2; Supplemental Figures 6-10) displayed nearly identical fit to the original bifactor models for each performance metric. As expected, v from the CRT loaded strongly on the task-general v factor, in fact displaying the highest standardized loading out of any of the task conditions (CRT λ = 0.80; Supplemental Figure 6). This finding is highly consistent with the idea that v drives performance across all task conditions, even those from relatively simple tasks that are not typically assumed to demand top-down control.

3.3. Task-general performance factors have superior test-retest reliability to top-down control factors.

The task-general v factor had a test-retest reliability (Table 2) of ICC=0.78, which is considered “good” according to commonly used guidelines (Koo & Li, 2016). Reliability was identical whether v factor scores were derived from a model fit to the full sample or from a model trained on an independent subsample of participants who did not complete the retest session and then applied to the retested participants, and it was slightly improved by inclusion of v from the CRT (ICC=0.81). Reliability of the other task-general performance factors was often comparable to, or slightly lower than, the reliability of v. Reliability of the accuracyΔ and RTΔ factors was ICC=0.51–0.56, toward the lower end of the “moderate” range, and reliability of the vΔ factor was ICC=0.41 at best, typically considered “poor”.

3.4. Self-control is most strongly correlated with the task-general cognitive efficiency (v) factor; its relationship with top-down control factors is variable and likely confounded.

Correlation values for relations between the task-general factors (v, a, Ter, RT, accuracy), subtraction-based top-down control factors (accuracyΔ, RTΔ, vΔ), and the self-control questionnaire factor (SC) are displayed in Table 3 and represented as a heat map in Figure 4. Of the DDM parameters, task-general v displayed a significant positive correlation in the small-to-moderate range with the general self-control factor, r=0.18 (95% CI=0.09,0.26), p<.001, while task-general a displayed a small negative correlation with self-control, r=−0.10 (95% CI=−0.18,−0.01), p=.029 (see scatterplots in Figure 5). Task-general v and task-general a were themselves moderately negatively correlated, r=−0.34 (95% CI=−0.41,−0.26), p<.001, suggesting that more efficient individuals also waited for less evidence to accumulate before making their decisions. Thus, we suspected that a may only be associated with self-control because of its negative correlation with v, and indeed, when both factors were entered in a linear regression predicting the general self-control factor, task-general v remained significantly related to self-control, b=0.17, t=3.61, p<.001, but task-general a was not, b=−0.04, t=−0.86, p=.391.

Table 3.

Correlations between task-general factors (v, a, Ter, RT, accuracy), subtraction-based factors (accuracyΔ, RTΔ, vΔ), and the self-control questionnaire factor (SC).

 | v | a | Ter | RT | accuracy | vΔ | RTΔ | accuracyΔ
a | −0.34***
Ter | −0.04 | 0.18***
RT | −0.48*** | 0.70*** | 0.76***
accuracy | 0.78*** | 0.25*** | 0.05 | −0.07
vΔ | −0.03 | −0.44*** | −0.17*** | −0.31*** | −0.28***
RTΔ | −0.31*** | 0.34*** | 0.24*** | 0.42*** | −0.12** | 0.09*
accuracyΔ | −0.48*** | −0.28*** | −0.10* | −0.06 | −0.67*** | 0.66** | 0.18***
SC | 0.18*** | −0.10* | 0.05 | −0.07 | 0.11* | 0.08 | −0.02 | −0.08

* p<.05; ** p<.01; *** p<.001

Figure 4.

Heat map representing correlations between task-general factors (v, a, Ter, RT, accuracy), subtraction-based top-down control factors (accuracyΔ, RTΔ, vΔ), and the self-control questionnaire factor (SC).

Figure 5.

Scatterplots, regression lines and correlation (r) values of relations between general cognitive factor scores from the cognitive factor models and the general self-control factor derived from questionnaire measures.

Task-general RT was not significantly associated with self-control, r=−0.02 (95% CI=−0.10,0.07), p=.666, but task-general accuracy displayed a small positive relationship with self-control, r=0.11 (95% CI=0.03,0.20), p=.011, which was not surprising given the strong correlation between task-general accuracy rate and v, r=0.78 (95% CI=0.74,0.81), p<.001. A test of dependent correlations (the Williams test implemented in the psych package), however, showed that the correlation of task-general v with self-control was significantly stronger than that of task-general accuracy after accounting for the correlation between v and accuracy, t=2.36, p=.018. Moreover, when both factors were entered into a linear regression predicting self-control, task-general v remained significantly related to self-control, b=0.25, t=3.41, p=.001, but task-general accuracy was not, b=−0.08, t=−1.05, p=.294. Therefore, although task-general v, a, and accuracy rate all appear to show some relation with self-control when considered alone, v was the most robust predictor, and the relations between the other two factors and self-control appeared to be primarily attributable to their correlation with v.
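As an illustrative sketch (not the authors' analysis script), the dependent-correlation comparison above can be approximated with psych::r.test using the rounded correlations reported in the text; small differences from the reported t value would reflect that rounding.

```r
library(psych)

# Williams test for two dependent correlations that share one variable (SC)
r.test(n   = 522,    # sample size
       r12 = 0.18,   # r(task-general v, self-control)
       r13 = 0.11,   # r(task-general accuracy, self-control)
       r23 = 0.78)   # r(task-general v, task-general accuracy)

# The joint regression described above would take the form below, assuming a
# hypothetical data frame of factor scores:
# summary(lm(SC ~ v + accuracy, data = factor_scores))
```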

Turning to subtraction-based factors, RTΔ showed a non-significant relationship with the general self-control factor, r=−.02 (95% CI=−.10,.07), p=.666. Meanwhile, vΔ showed a small relationship with the self-control factor that was close to significance, r=.08 (95% CI=.00,.17), p=.061. However, this trend was opposite to theoretical expectations: a larger difference in cognitive efficiency across conditions was weakly associated with better, not worse, self-control. The relationship between accuracyΔ and the general self-control factor was also close to significance, r=−.08 (95% CI=−.17,.00), p=.054, and in the direction consistent with theoretical expectations. However, follow-up analyses suggested this correlation is likely driven by a strong ceiling effect on accuracy scores in the non-conflict condition for subjects with higher v, who in turn exhibit low accuracyΔ (their conflict-condition accuracy differs little from non-conflict-condition accuracy because non-conflict-condition accuracy cannot exceed 100%; see Figure 6). When we restricted the analysis to subjects in the bottom half of mean non-conflict v scores (i.e., subjects not constrained by the ceiling effect), the relationship between accuracyΔ and general self-control disappeared: r=−.01 (95% CI=−.13,.11), p=.888.

Figure 6.

Demonstrations of the ceiling effect in non-conflict condition accuracy rates and the consequences of this ceiling effect for the accuracyΔ factor. The scatterplot in the top row shows that mean accuracy rates, averaged across all non-conflict conditions, reach a ceiling for many participants in the upper half of the non-conflict v distribution, causing the relationship between v and accuracy rates in these conditions to become non-linear at higher levels of v. Scatterplots in the lower row illustrate the consequences of this ceiling effect for the accuracyΔ factor; as v increases and causes non-conflict accuracy rates to reach ceiling levels, accuracy difference scores necessarily decrease because this ceiling restricts the range of conflict-related differences in accuracy rate. This appears to cause an illusory negative correlation between the accuracyΔ factor and the task-general v factor, r=−.48 (95% CI=−.55,−.42), p<.001, which in turn appears to explain the weak relationship between accuracyΔ and the general self-control factor (as this latter relationship disappears when only individuals in the lower half of the non-conflict v distribution are included).

4. Discussion

Top-down control of responses is a central construct in cognitive science that is thought to be relevant to self-control in day-to-day life and is typically studied in tasks that are assumed to selectively engage control to resolve conflicts in specific task conditions. However, recent work rooted in computational modeling suggests an alternative approach to studying the cognitive underpinnings of self-control based on task-general cognitive efficiency, as indexed by the drift rate parameter (v) in the diffusion decision model (DDM) and similar models. In this reanalysis of the unprecedented Eisenberg et al. (2019) dataset, we built latent factors for top-down control variables and cognitive efficiency variables and directly compared them on their cohesiveness, temporal stability, and relationship to self-report measures of self-control (and, for completeness, we also assessed latent factors for other task-general behavioral summary statistics, such as RT and accuracy). We found that subtraction-based metrics of top-down control derived from seven tasks: 1) fail to cohere as a single dimension, 2) have moderate to poor test-retest reliability, and 3) lack meaningful relations to self-control measures. In contrast, task-general cognitive efficiency measured across the same seven tasks: 1) represents a cohesive individual difference dimension, 2) shows considerable stability across sessions, and 3) shows uniquely meaningful relations with self-reported self-control. These results go beyond recent psychometric critiques of top-down control and raise more basic questions about the selective engagement assumption that has guided this program of research. More hopefully, they point to task-general cognitive efficiency as being a computationally anchored, psychometrically reliable, and predictively valid cognitive individual difference dimension relevant to self-control.

The main novel contributions of this study result from taking a theory-driven approach to systematically compare latent factors from the top-down control tradition and the DDM tradition “head-to-head”, that is, in the same subjects engaged in the same tasks, all of whom completed the same battery of self-report measures of self-control. In doing so, our results are particularly well-positioned to test whether variance relevant to self-control is isolated in “conflict” conditions (selective engagement), as is typically assumed in the top-down control tradition, or is task-general, as implied by DDM-based research, and to give practical guidance to researchers about what specific metrics (e.g., DDM parameters versus RT or accuracy) best capture this variance.

Overall, our results indicate that task-general cognitive efficiency, as measured by the DDM’s drift rate parameter, both displays good psychometric properties and yields relationships with self-reported self-control that are the most substantial of any cognitive factor evaluated in this study (although the effect size is modest in an absolute sense, at least in healthy populations, as noted below). Importantly, task-general cognitive efficiency arises from task conditions that, according to the assumptions of the subtraction tradition, both do and do not engage top-down control. Indeed, we found that one task that ostensibly does not involve conflict or top-down control at all, the choice response time (CRT) task, reflects task-general cognitive efficiency just as well as, if not better than, conditions thought to demand top-down control. This pattern of results directly challenges the selective engagement assumption that is central to top-down control research. Instead, we find that what matters for self-control can be measured just as well in both conflict and non-conflict conditions, and also in simple speeded tasks, such as perceptual decision paradigms. This observation is broadly consistent with prior findings in which people with clinical disorders linked to self-control difficulties displayed lower drift rate across both complex conflict tasks and simple decision tasks with no (assumed) top-down control demands (Heathcote et al., 2015; Huang-Pollock et al., 2012; Shapiro & Huang-Pollock, 2019; Sripada & Weigard, 2021; Weigard et al., 2019; Weigard & Sripada, 2021; Ziegler et al., 2016).

We found that all task-general latent factors, not just task-general efficiency as measured by drift rate, were systematically more cohesive and test-retest reliable than subtraction-based top-down control factors. These findings are consistent with the generally greater reliability of overall performance metrics relative to subtraction metrics, as reported by Enkavi et al. (2019) for individual tasks. We additionally found that task-general drift rate appears to be a uniquely robust predictor of the general self-control factor, and, specifically, a stronger predictor than task-general accuracy and RT. These observations, taken together, are parsimoniously explained by appealing to the theoretical assumptions of the DDM, in which drift rate is the underlying latent process that largely determines observed accuracy rates and RTs across a variety of tasks and conditions. If this assumption is correct, it is not surprising that accuracy rates and RTs show similar psychometric properties to drift rate, since drift rate is the latent process that drives them. Furthermore, our finding that task-general drift rate is nonetheless a stronger predictor of self-control than task-general accuracy and RT supports the notion that what matters for self-control is better captured by the latent driver of cognitive performance (i.e., drift rate) rather than its observed effects on behavior (RT and accuracy). This empirical finding is also notably consistent with a recent simulation study demonstrating that the use of the DDM in place of behavioral summary statistics leads to substantial increases in statistical power to detect effects (Stafford et al., 2020).

The relationship between task-general cognitive efficiency and self-reported self-control was statistically significant, but modest in size (r=0.18). Converting to Cohen’s d, for which benchmarks are better established (J. Cohen, 1988), this reflects a small to medium effect size (d=0.38). In interpreting the size of this effect, an important consideration is that the Amazon Mechanical Turk sample used here is expected to be relatively high functioning and have relatively low rates of problems with self-control (McCredie & Morey, 2019). Importantly, there is some evidence that task-general cognitive efficiency has larger relations with clinical outcomes in samples with a substantially higher prevalence of psychiatric disorders. For example, in a recent report, we constructed a task-general cognitive efficiency factor from several tasks in a large sample (n=272), of which 142 had psychiatric disorders (schizophrenia, attention-deficit/hyperactivity disorder, bipolar disorder). There we saw substantially larger effects, with cognitive efficiency reduced in all patients compared to healthy individuals (d=0.67), and with the largest disorder-specific effect observed in schizophrenia (d=1.12). These results underscore that effect size is dependent on both the population and outcome studied, and the modest effect size seen here in a relatively high-functioning population may be smaller than effects observed in patient groups.

Eisenberg et al. (2019) drew more pessimistic conclusions about the relationship between cognitive task-based metrics and self-control. Several differences in approach could have played a role. First, a main focus of the Eisenberg et al. (2019) study was the relationship between measures of self-control (both task-based metrics and self-report scales) and self-reported real-world outcomes (including socio-economic and health outcomes). In contrast, our focus here has been on the relationship between task-based measures and a measure of general self-control derived from self-report scales. Our use of a theory-driven latent variable approach to index both constructs, an approach that was not taken in the Eisenberg et al. (2019) study (though they did examine relationships between individual tasks and individual surveys, and found weak relationships), likely allowed us to identify the relationship between task-general efficiency and self-control. We agree with Eisenberg et al. (2019) that real world outcomes are important, but measuring such outcomes in a relatively high-functioning MTurk population presents challenges. For this reason, as in the previous paragraph, we emphasize the importance of links between task-general cognitive efficiency and psychiatric disorders, such as ADHD and bipolar disorder, that reflect chronic impaired self-control leading to significant distress and dysfunction.

In agreement with Eisenberg et al. (2019) and Enkavi et al. (2019), our findings call into question the standard practice in individual differences research of using subtraction metrics to index top-down control abilities that are putatively important for self-control. Crucially, however, our findings extend this work by supporting a novel theoretical explanation for why these metrics fail to relate to self-control. Current leading explanations focus on limitations related to the basic measurement properties of subtraction metrics (i.e., their poor reliability). For example, it is argued that experimental tasks thought to measure top-down control face a “reliability paradox” (Hedge et al., 2018; J. Miller & Ulrich, 2013; Rouder & Haaf, 2019): they afford excellent experimental manipulations of top-down control, and this in turn yields strong and stable between-condition differences. However, owing to the very robustness of these manipulations, between-individual differences tend to be small and unstable, rendering subtraction metrics from such tasks poorly suited for individual differences research.

Our results go further than this existing psychometric critique in that we find evidence against the selective engagement assumption that is central to research on top-down control. Specifically, we find that the cognitive dimension found to be relevant to self-control in this study, task-general cognitive efficiency, is not selectively engaged in conflict tasks; instead, it drives performance similarly across both conflict and non-conflict conditions. If selective engagement fails, then forming subtraction metrics is not justified, not necessarily because these metrics have poor psychometric properties, but rather because subtraction will not isolate the key process of interest. That is, if a process of interest drives performance in both the conflict and non-conflict conditions, then subtraction will simply discard the systematic variance associated with that process, leaving it essentially unmeasured (Draheim et al., 2019). Consistent with this explanation, adaptive task designs that similarly eschew subtraction assumptions by treating performance on conflict and non-conflict trials equally also appear to hold promise as alternative metrics for measuring cognitive processes relevant to self-control (Draheim et al., 2019, 2020).

Given that the selective engagement assumption may not be defensible, one interesting approach that supporters of the top-down control paradigm may wish to consider is to give up this assumption, and instead reconceptualize the role of top-down control in conflict tasks. To set the stage for this alternative view, it is useful to note that the drift rate parameter in the DDM is proposed to reflect the ability to efficiently extract goal-relevant information from a stimulus for the purposes of generating an appropriate response. Such an ability is notably similar to certain existing definitions of top-down control: as involving the ability to use contextual goals to bias situation-appropriate responses (E. K. Miller & Cohen, 2001; J. D. Cohen, 2017). It is possible, then, that measurement of cognitive efficiency through drift rates is not an alternative to measurement of top-down control. Instead, the drift rate might reliably measure both constructs, which may be closely related or identical. A key observation supporting this view is that any cognitive operation that requires focused attention, including performing non-conflict conditions and even more “basic” tasks such as sensory discrimination (Tsukahara et al., 2020), likely engages some general kind of goal-directed control, for example to maintain focus on the task instructions and resist distractions. This view gains further support from recent advances in computational modeling of conflict tasks, which suggest a general kind of control process may operate pervasively across conflict and non-conflict conditions (Heathcote et al., under review). Thus, giving up the selective engagement assumption could open up new possibilities to conceptualize top-down control as a task-general construct that is closely related to, or even identical with, task-general cognitive efficiency.

In sum, this study leverages a unique dataset to perform a head-to-head comparison of two distinct approaches to understanding self-control in cognitive science: a top-down control approach based on subtraction metrics and a task-general cognitive efficiency approach based on metrics derived from computational modeling. We found that cognitive efficiency, but not subtraction-based top-down control, constitutes a psychometrically coherent and reliable individual-difference dimension that is robustly linked to individual differences in self-control.

Supplementary Material


Acknowledgements

A. Weigard and D.A. Clark were supported by T32 AA007477 (awarded to Frederic Blow). A. Weigard was also supported by K23 DA051561. C. Sripada was supported by R01 MH107741 and the Dana Foundation David Mahoney Neuroimaging Program. We would also like to thank Eisenberg and colleagues for making all data and code from their 2019 study openly available, without which the current project would not have been possible.

Footnotes

Declarations of interest: none.

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

  1. Barkley RA (1997). Behavioral inhibition, sustained attention, and executive functions: Constructing a unifying theory of ADHD. Psychological Bulletin, 121(1), 65–94. 10.1037/0033-2909.121.1.65
  2. Bates ME, Bowden SC, & Barry D (2002). Neurocognitive impairment associated with alcohol use disorders: Implications for treatment. Experimental and Clinical Psychopharmacology, 10(3), 193.
  3. Brown SD, & Heathcote A (2008). The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57(3), 153–178.
  4. Clark DA, Hicks BM, Angstadt M, Rutherford S, Taxali A, Hyde LW, Weigard A, Heitzeg MM, & Sripada C (2021). The General Factor of Psychopathology in the Adolescent Brain Cognitive Development (ABCD) Study: A comparison of alternative modeling approaches. Clinical Psychological Science.
  5. Cohen J (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
  6. Cohen JD (2017). Cognitive control. In The Wiley Handbook of Cognitive Control (pp. 1–28). Wiley-Blackwell. 10.1002/9781118920497.ch1
  7. Draheim C, Mashburn CA, Martin JD, & Engle RW (2019). Reaction time in differential and developmental research: A review and commentary on the problems and alternatives. Psychological Bulletin, 145(5), 508.
  8. Draheim C, Tsukahara JS, Martin JD, Mashburn CA, & Engle RW (2020). A toolbox approach to improving the measurement of attention control. Journal of Experimental Psychology: General.
  9. Dutilh G, Annis J, Brown SD, Cassey P, Evans NJ, Grasman RP, Hawkins GE, Heathcote A, Holmes WR, Krypotos A-M, & others. (2019). The quality of response time data inference: A blinded, collaborative assessment of the validity of cognitive models. Psychonomic Bulletin & Review, 26(4), 1051–1069.
  10. Eisenberg IW, Bissett PG, Enkavi AZ, Li J, MacKinnon DP, Marsch LA, & Poldrack RA (2019). Uncovering the structure of self-regulation through data-driven ontology discovery. Nature Communications, 10(1), 1–13.
  11. Enkavi AZ, Eisenberg IW, Bissett PG, Mazza GL, MacKinnon DP, Marsch LA, & Poldrack RA (2019). Large-scale analysis of test–retest reliabilities of self-regulation measures. Proceedings of the National Academy of Sciences, 116(12), 5472–5477.
  12. Fossati P, Ergis AM, & Allilaire JF (2002). Executive functioning in unipolar depression: A review. L'Encéphale, 28(2), 97–107.
  13. Frischkorn GT, Schubert A-L, & Hagemann D (2019). Processing speed, working memory, and executive functions: Independent or inter-related predictors of general intelligence. Intelligence, 75, 95–110.
  14. Heathcote A, Hannah K, & Matzke D (under review). Pervasive choice conflict.
  15. Heathcote A, Suraev A, Curley S, Gong Q, Love J, & Michie PT (2015). Decision processes and the slowing of simple choices in schizophrenia. Journal of Abnormal Psychology, 124(4), 961.
  16. Hedge C, Powell G, & Sumner P (2018). The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behavior Research Methods, 50(3), 1166–1186.
  17. Heinrichs RW, & Zakzanis KK (1998). Neurocognitive deficit in schizophrenia: A quantitative review of the evidence. Neuropsychology, 12(3), 426.
  18. Huang-Pollock CL, Karalunas SL, Tam H, & Moore AN (2012). Evaluating vigilance deficits in ADHD: A meta-analysis of CPT performance. Journal of Abnormal Psychology, 121(2), 360.
  19. Karr JE, Areshenkoff CN, Rast P, Hofer SM, Iverson GL, & Garcia-Barrera MA (2018). The unity and diversity of executive functions: A systematic review and re-analysis of latent variable studies. Psychological Bulletin, 144(11), 1147.
  20. Koo TK, & Li MY (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163.
  21. Lerche V, von Krause M, Voss A, Frischkorn G, Schubert A-L, & Hagemann D (2020). Diffusion modeling and intelligence: Drift rates show both domain-general and domain-specific relations with intelligence. Journal of Experimental Psychology: General.
  22. Lerche V, & Voss A (2017). Retest reliability of the parameters of the Ratcliff diffusion model. Psychological Research, 81(3), 629–652.
  23. Lerche V, Voss A, & Nagler M (2017). How many trials are required for parameter estimation in diffusion modeling? A comparison of different optimization criteria. Behavior Research Methods, 49(2), 513–537.
  24. McCredie MN, & Morey LC (2019). Who are the Turkers? A characterization of MTurk workers using the Personality Assessment Inventory. Assessment, 26(5), 759–766.
  25. Mesholam-Gately RI, Giuliano AJ, Goff KP, Faraone SV, & Seidman LJ (2009). Neurocognition in first-episode schizophrenia: A meta-analytic review. Neuropsychology, 23(3), 315.
  26. Miller EK, & Cohen JD (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24, 167–202.
  27. Miller J, & Ulrich R (2013). Mental chronometry and individual differences: Modeling reliabilities and correlations of reaction time means and effect sizes. Psychonomic Bulletin & Review, 20(5), 819–858.
  28. Moeller FG, Barratt ES, Dougherty DM, Schmitz JM, & Swann AC (2001). Psychiatric aspects of impulsivity. American Journal of Psychiatry, 158(11), 1783–1793.
  29. Ratcliff R (1978). A theory of memory retrieval. Psychological Review, 85(2), 59.
  30. Ratcliff R, Smith PL, Brown SD, & McKoon G (2016). Diffusion decision model: Current issues and history. Trends in Cognitive Sciences, 20(4), 260–281. 10.1016/j.tics.2016.01.007
  31. Revelle W, & Revelle MW (2015). Package 'psych.' The Comprehensive R Archive Network, 337, 338.
  32. Rey-Mermet A, Gade M, Souza AS, Von Bastian CC, & Oberauer K (2019). Is executive control related to working memory capacity and fluid intelligence? Journal of Experimental Psychology: General, 148(8), 1335.
  33. Rodriguez A, Reise SP, & Haviland MG (2016). Evaluating bifactor models: Calculating and interpreting statistical indices. Psychological Methods, 21(2), 137.
  34. Romer D, Betancourt L, Giannetta JM, Brodsky NL, Farah M, & Hurt H (2009). Executive cognitive functions and impulsivity as correlates of risk taking and problem behavior in preadolescents. Neuropsychologia, 47(13), 2916–2926.
  35. Rosseel Y (2012). lavaan: An R package for structural equation modeling and more. Version 0.5–12 (BETA). Journal of Statistical Software, 48(2), 1–36.
  36. Rouder JN, & Haaf JM (2019). A psychometrics of individual differences in experimental tasks. Psychonomic Bulletin & Review, 26(2), 452–467.
  37. Saunders B, Milyavskaya M, Etz A, Randles D, & Inzlicht M (2018). Reported self-control is not meaningfully associated with inhibition-related executive function: A Bayesian analysis. Collabra: Psychology, 4(1).
  38. Schmiedek F, Oberauer K, Wilhelm O, Süß H-M, & Wittmann WW (2007). Individual differences in components of reaction time distributions and their relations to working memory and intelligence. Journal of Experimental Psychology: General, 136(3), 414.
  39. Schmitz F, & Wilhelm O (2016). Modeling mental speed: Decomposing response time distributions in elementary cognitive tasks and correlations with working memory capacity and fluid intelligence. Journal of Intelligence, 4(4), 13.
  40. Schubert A-L, Frischkorn G, Hagemann D, & Voss A (2016). Trait characteristics of diffusion model parameters. Journal of Intelligence, 4(3), 7.
  41. Schubert A-L, & Frischkorn GT (2020). Neurocognitive psychometrics of intelligence: How measurement advancements unveiled the role of mental speed in intelligence differences. Current Directions in Psychological Science, 0963721419896365.
  42. Schulz-Zhecheva Y, Voelkle MC, Beauducel A, Biscaldi M, & Klein C (2016). Predicting fluid intelligence by components of reaction time distributions from simple choice reaction time tasks. Journal of Intelligence, 4(3), 8.
  43. Shapiro Z, & Huang-Pollock C (2019). A diffusion-model analysis of timing deficits among children with ADHD. Neuropsychology.
  44. Shrout PE, & Fleiss JL (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2), 420.
  45. Simon JR, & Rudell AP (1967). Auditory S-R compatibility: The effect of an irrelevant cue on information processing. Journal of Applied Psychology, 51(3), 300.
  46. Snyder HR (2013). Major depressive disorder is associated with broad impairments on neuropsychological measures of executive function: A meta-analysis and review. Psychological Bulletin, 139(1), 81.
  47. Sochat VV, Eisenberg IW, Enkavi AZ, Li J, Bissett PG, & Poldrack RA (2016). The Experiment Factory: Standardizing behavioral experiments. Frontiers in Psychology, 7, 610.
  48. Sripada C, & Weigard AS (2021). Impaired evidence accumulation as a transdiagnostic vulnerability factor in psychopathology. Frontiers in Psychiatry.
  49. Stafford T, Pirrone A, Croucher M, & Krystalli A (2020). Quantifying the benefits of using decision models with response time and accuracy data. Behavior Research Methods, 1–14.
  50. Stevens L, Verdejo-García A, Goudriaan AE, Roeyers H, Dom G, & Vanderplasschen W (2014). Impulsivity as a vulnerability factor for poor addiction treatment outcomes: A review of neurocognitive findings among individuals with substance use disorders. Journal of Substance Abuse Treatment, 47(1), 58–72.
  51. Stroop JR (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18(6), 643.
  52. Swann AC, Lijffijt M, Lane SD, Steinberg JL, & Moeller FG (2009). Increased trait-like impulsivity and course of illness in bipolar disorder. Bipolar Disorders, 11(3), 280–288.
  53. Tsukahara JS, Harrison TL, Draheim C, Martin JD, & Engle RW (2020). Attention control: The missing link between sensory discrimination and intelligence. Attention, Perception, & Psychophysics, 82, 3445–3478.
  54. Unsworth N, Miller JD, Lakey CE, Young DL, Meeks JT, Campbell WK, & Goodie AS (2009). Exploring the relations among executive functions, fluid intelligence, and personality. Journal of Individual Differences, 30(4), 194–200.
  55. van Ravenzwaaij D, Donkin C, & Vandekerckhove J (2017). The EZ diffusion model provides a powerful test of simple empirical effects. Psychonomic Bulletin & Review, 24(2), 547–556.
  56. van Ravenzwaaij D, & Oberauer K (2009). How to use the diffusion model: Parameter recovery of three methods: EZ, fast-dm, and DMAT. Journal of Mathematical Psychology, 53(6), 463–473. 10.1016/j.jmp.2009.09.004
  57. Wagenmakers E-J, Van Der Maas HL, & Grasman RP (2007). An EZ-diffusion model for response time and accuracy. Psychonomic Bulletin & Review, 14(1), 3–22.
  58. Weigard A, Soules M, Ferris B, Zucker RA, Sripada C, & Heitzeg M (2019). Cognitive modeling informs interpretation of go/no-go task-related neural activations and their links to externalizing psychopathology. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 614420.
  59. Weigard A, & Sripada C (2021). Task-general efficiency of evidence accumulation as a computationally-defined neurocognitive trait: Implications for clinical neuroscience. Biological Psychiatry Global Open Science.
  60. Willcutt EG, Doyle AE, Nigg JT, Faraone SV, & Pennington BF (2005). Validity of the executive function theory of attention-deficit/hyperactivity disorder: A meta-analytic review. Biological Psychiatry, 57(11), 1336–1346. 10.1016/j.biopsych.2005.02.006
  61. Wothke W (1993). Nonpositive definite matrices in structural modeling. In Bollen KA & Long JS (Eds.), Testing structural equation models (pp. 256–293). Newbury Park, CA: Sage.
  62. Ziegler S, Pedersen ML, Mowinckel AM, & Biele G (2016). Modelling ADHD: A review of ADHD theories through their predictions for computational models of decision-making and reinforcement learning. Neuroscience & Biobehavioral Reviews, 71, 633–656. 10.1016/j.neubiorev.2016.09.002

Associated Data


Supplementary Materials

