Author manuscript; available in PMC: 2025 Jul 18.
Published in final edited form as: J Exp Psychol Learn Mem Cogn. 2024 May 9;51(2):218–237. doi: 10.1037/xlm0001352

Evidence for Response Inhibition as a Control Process Distinct From the Common Executive Function: A Two-Study Factor Analysis

Grant S Shields 1, Andrew P Yonelinas 2
PMCID: PMC12273591  NIHMSID: NIHMS2093868  PMID: 38722590

Abstract

The dominant model of executive functions, which has held for over two decades, contends that various aspects of seemingly disparate forms of inhibitory control—for example, inhibiting a prepotent response, or inhibiting irrelevant thoughts and distractions—are in fact manifestations of a single latent executive function. Recent work, however, has cast doubt on this dominant model, as certain conditions can dissociate performance on tasks thought to index inhibitory control. Moreover, issues related to task reliability and latent estimation of inhibition processes have prompted questions about whether the structure of inhibitory control can even be reliably estimated at a latent level. We addressed these issues in two studies of healthy young adults (Study 1: N = 154; Study 2: N = 279), examining seven and then 12 different tasks taken by prior research to assess inhibitory control. Contrary to the dominant model of executive functions, we found that, at a latent level, inhibitory control was best fit by a replicable two-factor solution, with response inhibition as a distinct executive function. Further, our data suggested that prior work on executive functions may not have observed a response inhibition factor due to task selection (i.e., including either one of two specific tasks was critical to identifying a separate response inhibition factor). Therefore, contrary to the current primary theoretical model of executive functions, these results suggest that response inhibition is a control process distinct from the control process underpinning other forms of inhibition, which has important implications for designing interventions and assessing outcomes related to inhibitory control.

Keywords: response inhibition, interference control, factor analysis, inhibition, executive function


Inhibitory control is a hot topic in contemporary psychology, and for good reason: Better inhibitory control predicts greater academic achievement (Latzman et al., 2010), healthier eating behavior and lower weight (Dohle et al., 2018; Yang et al., 2019), decreased stress reactivity (Shields et al., 2016), and even a decreased risk of mortality (Amirian et al., 2010). The construct of inhibitory control is often heuristically subdivided into two broad facets: response inhibition (i.e., inhibiting a prepotent or activated response, such as withholding the automatic tendency to remove one's arm from a painful stimulus) and another form of inhibition, which is referred to by various labels (e.g., cognitive inhibition, interference control, or inhibitory control of attention) but is invariably related to inhibition at the level of thought or perception rather than inhibition of motor behavior (Diamond, 2013; Geurts et al., 2014; Hung et al., 2018; Johnstone et al., 2009; Shields et al., 2015; Shields, Sazma, & Yonelinas, 2016). Despite this common heuristic division, however, there is debate about the structure of inhibition (Chuderski et al., 2012; Friedman & Miyake, 2004; Karr et al., 2018; Pettigrew & Martin, 2014; Testa et al., 2012), as well as whether inhibition can be reliably estimated at the latent level (e.g., Draheim et al., 2019, 2021; Hedge et al., 2022; Rey-Mermet et al., 2018, 2019, 2020; Rouder et al., 2022; Unsworth et al., 2020; von Bastian et al., 2020). Here, we report the results of two studies aimed at determining whether inhibitory control tasks are supported by more than one control process.

The paradigm-defining study of Miyake and colleagues (Miyake et al., 2000) was the first to systematically evaluate how inhibitory control related to other executive functions. In this study, Miyake et al. found that inhibitory control—defined as a latent factor with loadings from Stroop response time (RT) interference effects, proportion of stop trials correct in the stop-signal task, and correct trials in the antisaccade task (see Table 1 for brief task descriptions)—was separable from, yet highly correlated with, other executive functions. Moreover, a follow-up study by Friedman and Miyake (2004) found that response inhibition—again defined by a latent factor with loadings from the three tasks in their prior study—did not differ from what they referred to as interference control at a latent level, suggesting that response inhibition may not be a distinct control process from the executive control process(es) involved in inhibition tasks that do not require motor inhibition. The model developed by Miyake et al. (2000) has since been extended. This extended model (i.e., with inhibition task performance being underpinned only by a common executive function that supports performance across all executive function task outcomes; Friedman et al., 2008; Friedman & Miyake, 2017)—which we refer to here as the "unity model" because it posits that performance on all inhibition tasks is underpinned by only one (i.e., the common) executive function—was recently shown to be the overall best fit to data from multiple studies (Karr et al., 2018). It should be noted, however, that this model's rates of acceptance across these studies were low, suggesting caution in interpretation (Karr et al., 2018).
Although the majority of these studies did not explicitly test whether there was a difference between response inhibition and the common executive function, the unity model holds that, at a latent executive function level, only the common executive function is required to explain performance on outcomes indexing either response inhibition or some form of nonmotor inhibition (e.g., Friedman & Miyake, 2004, 2017; Friedman & Robbins, 2022). Taken together, if the research supporting the unity model is correct, there are no true differences between response inhibition and the common executive function.

Table 1.

Descriptions of Some Common Inhibitory Control Tasks

Task name | Description | Typical primary outcome

Antisaccade | After a central fixation, a stimulus is presented on the left or right side of the screen, and participants are told to look in the direction away from the presented stimulus, thus inhibiting a reflexive saccade toward the stimulus | Number or proportion of target trials answered correctly
Emotional Sternberg | Information is presented to participants, followed by a short delay within which emotional or neutral material is shown. After the delay, participants are shown a test array and indicate if any of the information in the initial array is present in the test array | Time taken to respond correctly to trials with emotional material minus trials with neutral material
Flanker | A response conflict task wherein participants indicate the direction of a central arrow, regardless of the direction of arrows surrounding (i.e., flanking) the central arrow. Flanking arrows can be congruent (i.e., in the same direction) or incongruent (i.e., in the opposite direction) with the central arrow | Time taken to respond correctly to incongruent trials minus congruent trials (i.e., interference effect)
Go/no-go | A single stimulus that requires a response is presented in an overwhelming majority of trials (go trials), building up a prepotent tendency to respond. A second stimulus is presented in a minority of trials (no-go trials), and participants must withhold their response when this stimulus is presented | Number or proportion of no-go trials to which a response was given
Simon | A response conflict task wherein participants indicate the color of a circle using the left and right arrow keys, regardless of the location of the circle on the screen. The circle's location can be congruent (i.e., on the same side of the screen) or incongruent (i.e., on the opposite side) with the key for the color | Time taken to respond correctly to incongruent trials minus congruent trials (i.e., interference effect)
Stroop | A response conflict task wherein participants indicate the font color of a presented word, regardless of what the word is. The word can be congruent (e.g., the word "blue" written in blue font) or incongruent (e.g., the word "blue" written in red font) with the font color | Time taken to respond correctly to incongruent trials minus congruent trials (i.e., interference effect)
Stop signal | Participants make a choice response (e.g., indicate the direction of an arrow) on all trials unless a stop signal is presented, indicating that participants should withhold their response. The stop signal occurs sometime after the stimulus was initially presented, requiring inhibition of an activated response | Stop-signal reaction time, the time required for a participant to inhibit an activated response
Sustained attention to response task | Numbers 0–9 are presented quickly, in random order, one at a time, and with the same frequency. All numbers except 3 require a response. When the number 3 occurs, participants are required to withhold their response | Number or proportion of number 3 trials to which a response was given

Neuroscience, however, has suggested that there may be separable forms of inhibitory control. For example, different lesions impair performance on different types of inhibitory control tasks (Cipolotti et al., 2016). Additionally, a recent synthesis of the neuroimaging literature on inhibitory control has suggested that there are two separate mechanisms through which prefrontal cortical regions can exert top-down inhibitory control: either through directly inhibiting all activity in a given subcortical region, or through indirectly inhibiting goal-irrelevant information by actively strengthening goal-relevant information (Munakata et al., 2011). Similarly, meta-analyses of neuroimaging studies have found that different inhibitory control tasks are associated with distinct patterns of activity (Cieslik et al., 2015; Nee et al., 2007). To date, however, little cognitive work has found evidence supporting the idea that response inhibition is a distinct control process in healthy young adults (though see Chuderski et al., 2012; Stahl et al., 2014; Testa et al., 2012).

An additional challenge to the unity model comes from debate about the methods and tasks used to derive the inhibition factor in the unity model. Driven by strong theoretical motivation, the unity model was derived by confirmatory factor analysis. This confirmatory factor analysis reflected the most contemporary theoretical understanding of the included executive function tasks at the time. However, recent work has highlighted the difficulty in measuring inhibition at the latent level, which has led to a debate about whether inhibition is even reliably measurable at the latent level (e.g., Draheim et al., 2019, 2021; Hedge et al., 2022; Rey-Mermet et al., 2018, 2019, 2020; Rouder et al., 2022; Unsworth et al., 2020; von Bastian et al., 2020). Moreover, conceptualization of each of the tasks loading on the response inhibition factor of the unity model—Stroop, stop signal, and antisaccade—has been challenged. In particular, the primary outcome of the Stroop task has recently been argued to rely more on the common executive function than response inhibition (Roos et al., 2017), and the primary outcomes of both the stop-signal task (Ridderinkhof et al., 1999; Sharp et al., 2010) and the antisaccade task (Rouder et al., 2022; Tendolkar et al., 2005; Unsworth et al., 2010) have been suggested to conflate or be influenced by both response inhibition and either other inhibitory control processes or a common executive control process. If these arguments are correct, the lack of difference between response inhibition and interference control found by Friedman and Miyake (2004) may be expected, given that the tasks loading on their “response inhibition” factor are either not response inhibition tasks or a mixture of both response inhibition and a common executive control process. 
Similarly, the evidence for the unity model—which views successful response inhibition task performance as requiring no more than a common latent factor underpinning performance on all executive function tasks—can be explained in a different way: If the "response inhibition" factor in studies supporting the unity model is created from outcomes reflecting both response inhibition and common executive control, that factor may reflect the common executive function despite the existence of a distinct response inhibition factor.

Given the above, we propose that although the common executive function described by Friedman and Miyake—which they define as the "ability to actively maintain goals and goal-related information and use this ability to effectively bias lower-level processing" (Miyake & Friedman, 2012, p. 11; see also Friedman & Miyake, 2017, p. 194)—contributes to performance on all inhibitory control tasks, including response inhibition tasks, response inhibition may nonetheless be a distinct control process from the control process(es) that support performance on interference control or other inhibitory control tasks. The definition of the common executive function goes beyond goal maintenance, highlighting the use of these goals in biasing information processing; we thus describe this control process as "goal-directed information gating and/or amplification." Given its definition, although it would certainly not be the only cognitive factor involved, the common executive function would contribute to performance metrics across many tasks, such as faster reaction times in tasks with a variable intertrial interval (e.g., requiring active maintenance of task goals, biasing stimulus detection processes toward a lower threshold) or in the presence of stimulus interference (e.g., ignoring the meaning of a word, biasing attentional sensitivity to color). Where our model diverges from Friedman and Miyake, though, is that we posit a distinct executive response inhibition process that underlies inhibition of motor actions.

The unity model is not the only model of inhibitory control, and a number of alternative models have been put forward (Bari & Robbins, 2013; Dang, 2017; Karr et al., 2018, 2019; Nigg, 2000; Roos et al., 2017; Shields, 2017; Shields, Sazma, & Yonelinas, 2016; Shields & Yonelinas, 2018; Stahl et al., 2014; Testa et al., 2012). For example, Stahl et al. (2014) argued that there are up to six facets of inhibitory control; similarly, Bari and Robbins (2013) have argued that inhibition is underpinned by both behavioral inhibition and cognitive inhibition, with behavioral inhibition being underpinned by eight different subprocesses. Some of the debate around this topic may be due to the difficulty of measuring inhibition at the latent level (e.g., Draheim et al., 2019, 2021; Hedge et al., 2022; Rey-Mermet et al., 2018, 2019, 2020; Rouder et al., 2022; Unsworth et al., 2020; von Bastian et al., 2020). The existence of multiple models and the difficulty in reliably estimating them together highlight the need to examine the structure of inhibitory control across multiple studies to determine whether a model of inhibitory control can be estimated reliably.

The Current Research

The current research represents a two-study examination of the latent structure of inhibitory control. We initially conducted a smaller, more exploratory study using task outcomes that we felt best differentiated up to three factors (i.e., response inhibition, goal-directed information gating and/or amplification [i.e., the common executive function], and working memory maintenance) to determine whether evidence emerged for a common factor on which all task outcomes loaded, as the unity model would predict, or for a distinct response inhibition factor. After seeing the results of Study 1, we then conducted a replication and extension study with an expanded task set and a larger sample size in order to examine the replicability of our findings. Put simply, we first explored the factor structure among tasks via exploratory factor analysis (EFA; Study 1). Then, we attempted to confirm those results via confirmatory factor analysis (Study 2). Drawing on the literature described above, we hypothesized that response inhibition would emerge as a distinct inhibitory control factor.

Study 1: Introduction

The first study consisted of a sample of 154 participants with usable data after cleaning and included outcomes from seven different tasks—namely, the go/no-go, sustained attention to response task (SART), stop-signal task, Stroop task, Simon task, and two forward span tasks (i.e., digit span and Corsi block test)—as variables for exploratory factor analyses.

Study 1: Method

Participants

Participants were 154 (121 female) undergraduates (Mage = 20.04, SDage = 2.19, range: 18–37) attending a large, public university, who had completed on average 13.5 years of education. This sample size represented all participants with usable data that we were able to recruit over the winter and spring quarters within the context of data collection for other projects at the time. This study was not preregistered. The sample was racially/ethnically diverse, with 46.1% identifying as Asian/Asian American, 28.6% as non-Hispanic Caucasian/White, 18.8% as Hispanic/Latino/a/e, 1.3% as African American/Black, 0.6% as Native American, and 4.5% preferring not to answer.

Materials

All cognitive tasks were run in the Psychology Experiment Building Language (PEBL; Mueller & Piper, 2014), v2.1. Unless otherwise noted, tasks were either coded from scratch or modified from PEBL battery scripts by Grant S. Shields in order to make the tasks more similar when beneficial (e.g., the same background color, the same stimulus presentation location) or more dissimilar when beneficial (e.g., using different colors for stimuli across similar tasks so that a response tendency to a given color in one task did not transfer to another). Code for each of the tasks is available online at this project's Open Science Framework (OSF) page (This Project, 2024): https://osf.io/6xpbf/?view_only=57430884e49442b2bdd88d32b6df6e32.

Tasks included in this study were not equal in terms of features such as stimulus timings, proportions, and locations. This was intentional, as our task designs were drawn from prior work. We did not redesign many nonessential task features in part because experimental work has shown that variations in factors such as these can increase the replicability and generalizability of studies and provide more robust tests of theory (Baribault et al., 2018). The task parameters (e.g., intertrial interval, stimulus colors) were chosen for each task based upon what had been used in prior research and found to elicit effects of interest (e.g., congruency effects, prepotent responses) for that task.

Task order was randomized for each participant. All tasks began with extensive instructions informing participants about how to complete the task. Participants were instructed to respond “as quickly and accurately as possible” in all tasks where speed or accuracy were mentioned. Detailed descriptions of each task are presented within the online supplemental materials.

Corsi Block-Tapping Task

Some work has considered forward span tasks to index inhibitory control (e.g., Lorenc et al., 2021; Shields, 2017; Shields, Sazma, & Yonelinas, 2016). We therefore included the PEBL forward Corsi block-tapping task (V0.2) (Corsi, 1972) in this study with few modifications. After a detailed set of instructions, participants completed three trials that they were told were practice. In each trial, participants viewed a random nonoverlapping array of nine blue blocks presented against a black background, which were subsequently illuminated in the order to be repeated by changing color from blue to yellow for 1,000 ms and then back to blue as the next square was illuminated. After the correct sequence for a trial had been shown, a box containing the word "done" appeared, and participants were required to use the mouse to click on the boxes in the same order in which they had been illuminated. After pressing "done," participants were given accuracy feedback (i.e., "correct" or "incorrect") on each trial. Each trial was then followed by a 1,000 ms intertrial interval that consisted of a blank screen except for the word "ready" written in white font. Each of the three practice trials displayed a sequence of three blocks, which participants then repeated using the mouse. After the practice trials, the test trials began with a span length of two blocks. Two trials of each span length were displayed to each participant, regardless of accuracy on the first trial. Span lengths increased by one—until a maximum of nine—whenever a participant provided a correct response to at least one of the two trials of a given span length. When a participant failed to provide a correct response to at least one of the two trials at a given span length, the task ended.
The primary outcome from this task was the Corsi span, defined as the highest span length a participant reached; given this definition, participants who successfully completed a span length of 9 (2.7% of all participants) were assigned a score of 10. Scores on this measure could thus range from 2 to 10.

Digit Span Forward

Some work has considered forward digit span to index inhibitory control (Lorenc et al., 2021; Shields, 2017; Shields, Sazma, & Yonelinas, 2016), and we thus included it in this study. After reading the task instructions, participants heard "ready?" followed by a random set of two numbers 0–9, presented audibly at a rate of 1 s per number with 1 s between numbers. After all numbers were presented, participants were required to type these numbers in the same order in which they heard them. Once the participant typed the number of digits heard in a given trial (i.e., the current span length), the trial concluded and was followed by a 1.5-s intertrial interval. There were three possible trials at each span length, and the span length increased by one as soon as two of the three possible trials at a given span length were answered correctly. Span length increased up to a theoretical maximum of 16 in this task. When a participant failed to provide a correct response to at least two of the three trials at a given span length, the task ended. The primary outcome from this task was the digit span, defined as the highest span length a participant reached. Scores on this measure could range from 2 to 16, though in this study the highest digit span reached was 11.

Go/No-Go

The go/no-go is a prototypical task used to assess inhibitory control (Drewe, 1975; Wessel, 2018). In the go/no-go, participants build up a prepotent response tendency by making a response—such as hitting the space bar—to a stimulus that is presented in an overwhelming majority of trials; in a minority of trials, however, another stimulus is shown, and participants are required to withhold responding to that stimulus. The go/no-go task used in this study began with three short paragraphs of instructions—each presented on its own page—that, in brief, instructed participants to press the spacebar when they saw a solid purple circle (go trials) and not to respond when they saw a solid light-blue circle (no-go trials). At all points in the task, a footer was displayed at the bottom of the screen instructing participants to press the spacebar for a purple circle and do nothing for a light-blue circle. Each trial began with a random uniform delay between 400 and 600 ms, which was immediately followed by stimulus presentation. The 100-px purple or light-blue circle remained on the screen until either the participant responded or 500 ms had elapsed. During practice (described below), feedback on accuracy (i.e., "correct" or "incorrect") for each trial (e.g., pressing the spacebar on a go trial or withholding a response on a no-go trial both produced "correct" feedback) was provided for 400 ms. Stimulus presentation time and between-trial delays were intentionally very short to facilitate building a prepotent response. After completing a practice block consisting of 25 practice trials, participants completed four test blocks of 75 trials each. Each block contained 80% go trials and 20% no-go trials, and the order of these trials was randomized. The primary outcome of interest in this task was the number of errors of commission (i.e., the number of no-go trials to which a participant erroneously made a response).

Simon Task

The Simon task is a commonly used task thought to measure inhibitory control (e.g., Rossa et al., 2014; Sebastian et al., 2013; Ulrich et al., 2015). In the version of the Simon task used for this study, participants saw either a red circle or a blue circle of 50 px presented on the screen with a black background. Circles were always presented at the same top-bottom location, but they varied in left-right location. Participants were told to press the left arrow key whenever they saw a red circle, no matter where it appeared on the screen, and to press the right arrow key whenever they saw a blue circle, no matter where it appeared on the screen. Each trial began with a fixation cross, which was presented for 400 ms. After the fixation cross disappeared, a blank interval of a random duration between 200 and 400 ms elapsed, followed by presentation of the circle, which remained on screen until a response was made. The intertrial interval was 400 ms. In 120 trials (i.e., congruent trials), the circle appeared on the same side of the screen as the button press required by its color (e.g., the red circle appeared on the left side of the screen); in 40 trials (i.e., neutral trials), the circle appeared exactly in the center of the screen; and in the remaining 120 trials (i.e., incongruent trials), the circle appeared on the opposite side of the screen from the button press required by its color (e.g., the red circle appeared on the right side of the screen). The order of trials was randomized, and after a set of 50 trials had elapsed, participants were informed that they could take as long of a break as they would like before continuing with the task. Following prior research with this task (e.g., Horn et al., 2013), the primary outcome of interest was the Simon RT interference effect, calculated for each participant as the mean RT for correctly answered incongruent trials minus the mean RT for correctly answered congruent trials.
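The interference-effect computation described here (and used again for the Stroop task) can be sketched in a few lines. This is a minimal illustration rather than the authors' analysis code: the trial representation and its key names are hypothetical, and any RT trimming or outlier exclusion applied in the original analyses is not shown.

```python
from statistics import mean

def rt_interference_effect(trials):
    """Mean RT for correct incongruent trials minus mean RT for correct
    congruent trials; neutral trials and error trials are ignored.

    Each trial is a dict with hypothetical keys: "rt" (in ms),
    "condition" ("congruent", "incongruent", or "neutral"),
    and "correct" (bool).
    """
    def mean_correct_rt(condition):
        return mean(t["rt"] for t in trials
                    if t["condition"] == condition and t["correct"])

    return mean_correct_rt("incongruent") - mean_correct_rt("congruent")
```

A positive value indicates slower responding under stimulus-response conflict, which is the interference effect analyzed in these studies.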

Stroop Task

The Stroop task is another prototypical task used to index inhibitory control (Friedman & Miyake, 2004). Recent work has argued that computerized, rather than verbal, versions of the Stroop may rely more on nonmotor inhibition than on response inhibition (Roos et al., 2017) because the motor response is no longer automatic (i.e., reading the word aloud) but is instead mapped to a keypress. A three-color Stroop task was used in this study. The words red, blue, and green were presented in either red, blue, or green 60 pt font on a white background. Participants were told to indicate the color of the font using the keys "v," "b," and "n" to indicate red, blue, and green font, respectively. "Congruent" trials were those in which the word (e.g., red) and font color (e.g., red font) matched, whereas "incongruent" trials were those in which the word (e.g., red) and font color (e.g., blue font) differed. Participants were given extensive instructions in the task and completed 18 practice trials (in order, two congruent red trials, four incongruent red trials [two blue font, two green font], two congruent blue trials, four incongruent blue trials [two red font, two green font], two congruent green trials, and four incongruent green trials [two red font, two blue font]), during which time feedback was given (i.e., "correct" or "incorrect") for 500 ms after each trial. At all points in the task, a footer was displayed at the bottom of the screen instructing participants to press v for red font, b for blue font, and n for green font, with a note to respond as quickly and accurately as possible. At the start of each trial, a fixation cross was displayed for 1,000 ms. Following the fixation cross, the stimulus was displayed, and participants were given up to 2,500 ms to indicate the font color. If participants did not respond within 2,500 ms, the stimulus disappeared and the words "too slow" were displayed on the screen for 500 ms. The interstimulus interval was 500 ms.
The total task consisted of four blocks of 72 trials each—with each block containing 75% congruent trials and 25% incongruent trials of each color—for a total of 288 trials (216 congruent, 72 incongruent). Congruent trials made up 75% of total trials to help produce a response tendency of “v” for the word red, “b” for the word blue, and “n” for the word green.

Following most prior literature on the Stroop (e.g., Miyake et al., 2000), the primary outcome of interest on the Stroop task was the Stroop RT interference effect, calculated for each participant as the mean RT for correctly answered incongruent trials minus the mean RT for correctly answered congruent trials.

Stop-Signal Task

The stop-signal task is a commonly used task thought to assess inhibitory control (Aron et al., 2014; Verbruggen et al., 2013). In this task, participants are told to indicate the direction of an arrow presented on the screen (i.e., go trials) unless a stop signal is given some time after the trial starts (i.e., stop trials), in which case they are told to withhold their response. This task thus requires inhibition of activated, rather than prepotent, responses.

In the stop-signal task used in this study, each trial began with presentation of a hollow white fixation circle against a black background, which was presented for 500 ms. Then, an arrow (“<” or “>”) appeared in the center of the circle and participants were given up to 1,500 ms to respond. On stop trials, a stop-signal—a 900 Hz tone played for 500 ms—was presented after the arrow had been presented for a specified period of time. The stop-signal was initially presented 250 ms after presentation of the arrow, and it was titrated according to a “staircase” procedure, with the delay between arrow and stop-signal increasing by 50 ms after every stop trial where a response was correctly withheld, and decreasing (i.e., getting closer to 0 ms) by 50 ms after every stop trial where a response was erroneously provided. Participants were given extensive instructions on this adaptive procedure and told that they should not wait to respond because the tone automatically adjusts to their response, so waiting would provide no benefit to their performance and just make the task take much longer. Participants first completed five practice trials, on which they received accuracy feedback. After completing the practice trials, participants completed three blocks of 80 trials each; no accuracy feedback was given on these trials. Each block contained 60 go trials and 20 stop trials, and the order of these trials was randomized within blocks. Left- and right-arrow trials were counterbalanced across go and stop trials and were randomized within each block. After each block was completed, participants were told to take as long of a break as they would like.
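The staircase adjustment just described is a simple one-up/one-down rule; a minimal sketch is below. The clamp at 0 ms is an assumption, as the text notes only that delays decrease toward 0 ms.

```python
def next_stop_signal_delay(delay_ms, response_withheld, step_ms=50):
    """One staircase update for the stop-signal delay: a successful stop
    lengthens the delay by 50 ms (making the next stop harder), and a
    failed stop shortens it by 50 ms. Clamping at 0 ms is an assumption,
    not something the task description specifies."""
    if response_withheld:
        return delay_ms + step_ms
    return max(0, delay_ms - step_ms)
```

This procedure drives the delay toward the point at which participants successfully stop on about half of stop trials, which is what makes the integration method of computing SSRT well behaved.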

Following prior research (Verbruggen et al., 2013), the primary outcome measure from this task was stop-signal reaction time (i.e., SSRT)—calculated using the integration method (Verbruggen et al., 2013)—which represents the time required for a participant to inhibit an activated response. The integration method is the recommended method of calculating SSRT (Verbruggen et al., 2013), and it does so by subtracting the mean stop-signal delay from the nth RT in the “go” RT distribution, where n corresponds to p(respond|signal) = .50 multiplied by the number of trials in the go RT distribution.
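The integration-method calculation just described can be sketched as follows. This is an illustrative implementation rather than the authors' code: rounding n up to an integer is an implementation assumption, and preprocessing steps recommended in the stop-signal literature (e.g., handling go-trial omissions) are not shown.

```python
import math
from statistics import mean

def ssrt_integration(go_rts, stop_signal_delays, p_respond_given_signal):
    """Integration-method SSRT: the nth RT in the sorted go-RT
    distribution, where n = p(respond|signal) times the number of go
    RTs (rounded up here; an assumption), minus the mean stop-signal
    delay. All times are in ms."""
    ordered = sorted(go_rts)
    n = max(1, math.ceil(p_respond_given_signal * len(ordered)))
    return ordered[n - 1] - mean(stop_signal_delays)
```

For example, with go RTs of 300, 400, 500, and 600 ms, stop-signal delays averaging 250 ms, and p(respond|signal) = .50, the nth (here, second) go RT is 400 ms, yielding an SSRT of 150 ms.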

SART

The SART is a contentious task in the inhibitory control literature. The SART was initially proposed as a task assessing sustained attention—hence the name—largely because it correlated with outcomes from what were thought to be tests of sustained attention (i.e., the Lottery and Telephone Search While Counting subtests of the Test of Everyday Attention) and did not correlate with outcomes thought to assess response inhibition, such as Stroop RT interference effects (Manly et al., 1999; Robertson et al., 1997). However, the SART shares many commonalities with a standard go/no-go task—similarities and differences are elaborated on below—and, although debated, subsequent research has suggested that the primary outcome from this task may better assess response inhibition than sustained attention (Carter et al., 2013; Head & Helton, 2014; Stevenson et al., 2011; for an argument that both processes are important, see Seli, 2016).

The SART differs from a typical go/no-go task in at least three ways. First, although some studies have used go/no-go tasks with a variety of go stimuli (e.g., Sebastian et al., 2013; Stahl et al., 2014), the typical go/no-go contains a single go stimulus; in contrast, the SART does not present a single stimulus on the majority of trials but instead contains multiple stimuli requiring responses (i.e., 1–2 and 4–9), presented in different font sizes. Presumably, successful performance therefore requires greater attention to the stimuli than a typical go/no-go. Second, stimuli requiring a response to be withheld are much less frequent in the SART than in a typical go/no-go, which is thought to encourage mind-wandering because response-related errors occur much less often. Third, SART stimuli are presented for a much shorter length of time than stimuli in the typical go/no-go and are replaced by a mask, during which responses can still be made. This difference presumably punishes lapses in sustained attention more strongly than the typical go/no-go: given the short presentation time, an inattentive participant is unlikely to fully see the stimulus, and given the low base rate of trials requiring responses to be withheld, such a participant would likely hit the spacebar anyway. These differences are thought to be important because the primary outcome of interest on this task—errors of commission—is the same primary outcome as in the go/no-go task, yet the cognitive processes contributing to it have been thought to be driven by sustained attention in the SART (Robertson et al., 1997) but by response inhibition in the go/no-go.

In both the original SART and the SART used in this study, participants were told to press the spacebar whenever they saw a number other than three, and this instruction was displayed in a footer throughout the entire task. Participants were shown the numbers one through nine—25 times each, in a randomized order—in one of five randomly assigned font sizes (i.e., 48, 72, 94, 100, and 120 point). On each trial, a number was presented for 250 ms and was then replaced for 900 ms by a mask: a circle occupying the largest space a number could occupy, with an X spanning its diameter. The response window for a trial began at number onset and ended when the mask was removed from the screen (i.e., 1,150 ms after stimulus onset). The number and mask were each white and presented against a black background. After reading a detailed set of instructions, participants completed a set of 18 practice trials (two presentations of each number), during which they received accuracy feedback for 800 ms after each trial. Once the practice trials were completed, following the original SART protocol, participants completed all 225 test trials in a single block.

Procedure

Participants came to the laboratory for a 1-hr study session. Each session included between 1 and 15 participants; each participant was seated at an individual station and provided with construction-grade soundproof earmuffs worn over the headphones used for the study. Participants were seated at their computers and provided with informed consent forms. Participants were instructed to read and sign the informed consent form if they consented to participate, and then to complete the demographics form open on the computer in front of them. After completing the demographics form, participants began the cognitive tasks, each of which had extensive instructions; the experimenter remained in the room in case a participant had any questions. The order of cognitive tasks was randomized across participants to avoid any systematic effects of fatigue on any particular task. During each task, participants were given the opportunity to take breaks (i.e., between blocks) and were told to take as long a break as they desired. Participants were also told to take as long a break as they desired between cognitive tasks—before instructions for the next task were provided. After completing the final task, participants were thanked, debriefed, and dismissed.

Data Reduction and Analysis

Each raw data file for each task was examined for intentional premature responding (i.e., “clicking through” to finish the study sooner) and for failing to follow task instructions (i.e., performance at or below chance). Participants who engaged in these behaviors were excluded (see the online supplemental materials for descriptions of exclusions); the final sample consisted of 154 participants who did not engage in these behaviors. Data are available at this project’s OSF page (This Project, 2024): https://osf.io/6xpbf/?view_only=57430884e49442b2bdd88d32b6df6e32.

EFA—conducted using the factanal function in R, Version 4.3.1, which estimates factors via maximum likelihood—was used to determine the structure of inhibitory control present in this sample with these task outcomes. Factors were rotated with varimax rotation, but conclusions did not differ when no rotation or oblique rotations were used.

Study 1: Results

Descriptive Statistics and Correlations

As expected, errors of commission on the go/no-go (M = 16.28) and SART (M = 11.27) were significantly greater than zero, ps < .001. Similarly, Simon RT interference effects (M = 49.25 ms) and Stroop RT interference effects (M = 151.07 ms) were significantly different from zero, ps < .001. Descriptive statistics for each variable are presented in the online supplemental materials. Additional analyses using variables transformed to normal distributions are presented in the online supplemental materials; conclusions did not differ from the results below.

Split-half task (odd/even) reliabilities are presented in Table 2.2,3 Only three tasks evidenced acceptable reliability (though see Footnote 2), which was in part the motivation for Study 2—to determine the replicability of the obtained pattern of results.

Table 2.

Observed Reliabilities of Study 1 Outcomes

Task Odd/even split-half reliability (ρST)

Go/no-go .85
SART .81
Stroop .79
Corsi block .59
Stop signal .57
Simon .49
Digit span forward .47

Note. ρST = split-half tau-equivalent reliability; SART = sustained attention to response task.
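For reference, split-half tau-equivalent reliability (ρST) of the kind reported above can be estimated from odd- and even-trial half scores. The sketch below uses the Flanagan-Rulon coefficient, one standard tau-equivalent split-half estimator; we are assuming this formula, as the exact estimator is not specified in this excerpt:

```python
import numpy as np

def split_half_tau_equivalent(odd_half, even_half):
    """Flanagan-Rulon split-half coefficient: a tau-equivalent
    reliability estimate from two half-test scores,
    4 * Cov(odd, even) / Var(odd + even)."""
    odd = np.asarray(odd_half, dtype=float)
    even = np.asarray(even_half, dtype=float)
    half_cov = np.cov(odd, even)[0, 1]          # sample covariance of halves
    total_var = np.var(odd + even, ddof=1)      # variance of total scores
    return 4.0 * half_cov / total_var
```

When the two half scores are identical across participants the coefficient is 1; uncorrelated halves push it toward 0.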

Bivariate correlations between all variables are presented in Table 3. Bivariate correlations between tasks were modest, with the strongest associations being between SART commissions and SSRT (r = .26, p < .001), Simon interference effects and Stroop interference effects (r = .22, p = .005), and Corsi forward span and Simon interference effects (r = −.22, p < .001).

Table 3.

Correlation Matrix for Study 1 Tasks

Task (outcome) 1 2 3 4 5 6

1. Go/no-go (errors of commission)
2. Sustained attention to response task (errors of commission) .15
3. Stop-signal task (stop-signal reaction time) .17* .26***
4. Stroop task (Stroop interference effect) −.01 .00 .07
5. Simon task (Simon interference effect) .15 .14 .02 .22**
6. Digit span forward (digit span) −.04 −.01 .03 .02 −.19*
7. Corsi block tapping task (Corsi span) .02 −.13 −.01 −.14 −.22** .15

† p < .10. * p < .05. ** p < .01. *** p < .001.

In exploratory factor analyses, we found that a one-factor solution was a marginally unacceptable fit to the data, χ2(14) = 22.97, p = .061, whereas a two-factor solution was an acceptable fit to the data, χ2(8) = 7.39, p = .495, and a significant improvement in fit to the data over the one-factor solution, Δχ2(6) = 15.06, p = .016. A three-factor solution did not further improve fit to the data over the two-factor solution, Δχ2(5) = 4.43, p = .489; the scree plot and sample-adjusted Bayesian information criterion (SABIC) values are shown in Figure 1. Loadings for this two-factor solution are presented in Table 4. Primary loadings clustered within a response-inhibition-like factor (i.e., stop-signal, go/no-go, SART) and a common-executive-function-like factor (i.e., Simon, Stroop, Corsi span, and digit span)—we refer to this factor as “goal-directed information gating and/or amplification” because we did not have the variety of tasks necessary to conclude that this was the common executive function.

Figure 1. Eigenvalues for Factors From Factor Analysis With All Possible Factors Estimated and SABIC Values for Factor Solutions.

Figure 1

Note. SABIC could not be computed for more than three factors given the number of variables included. The two-factor solution was preferred by SABIC and χ2 analyses. SABIC = sample-adjusted Bayesian information criterion; BIC = Bayesian information criterion.

Table 4.

Exploratory Factor Analysis Loadings for Study 1 Tasks

Task (outcome) Factor 1: information gating Factor 2: response inhibition

Simon task (interference effect, RT) .78 .13
Stroop task (interference effect, RT) .26 .07
Corsi block (forward span) −.29 −.07
Digit span (forward span) −.24 .02
Stop signal task (SSRT) −.07 .60
SART (errors of commission) .10 .46
Go/no-go (errors of commission) .13 .30

Note. Varimax rotation loadings are shown, but identical primary loadings with approximately equal loading values were obtained without rotation or with promax and oblimin rotations. Loadings that were significant in the CFA are shown in bold. RT = response time; SSRT = stop-signal reaction time; SART = sustained attention to response task; CFA = confirmatory factor analysis.

An identical pattern of results emerged if the Corsi block and digit span tasks were omitted from the EFA: A one-factor solution was an unacceptable fit to the data, χ2(5) = 11.58, p = .041, whereas a two-factor solution was an acceptable fit to the data, χ2(1) = 1.53, p = .216, and a significant improvement in fit over the one-factor solution, Δχ2(4) = 10.05, p = .040. Primary loadings for this analysis again clustered within a response-inhibition-like factor (i.e., stop-signal, go/no-go, SART) and a goal-directed information gating, common-executive-function-like factor (i.e., Simon, Stroop). A three-factor solution with only five variables returns negative degrees of freedom for a χ2 test, so we were unable to determine whether a third factor would have further improved model fit in this restricted analysis. Therefore, the best-fitting model for these data represented inhibition via two latent factors.
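The nested-model comparisons above are standard chi-square difference (likelihood-ratio) tests. As an illustration (the function name is ours), the five-task one- vs. two-factor comparison can be reproduced as:

```python
from scipy.stats import chi2

def chi2_difference_test(chisq_restricted, df_restricted, chisq_full, df_full):
    """Chi-square difference test for nested models: the restricted
    model (fewer factors) is rejected in favor of the fuller model
    when the drop in chi-square is large relative to the drop in
    degrees of freedom."""
    delta_chisq = chisq_restricted - chisq_full
    delta_df = df_restricted - df_full
    return delta_chisq, delta_df, chi2.sf(delta_chisq, delta_df)

# One-factor chi2(5) = 11.58 vs. two-factor chi2(1) = 1.53:
delta, ddf, p = chi2_difference_test(11.58, 5, 1.53, 1)
# delta of about 10.05 on 4 df, p of about .040, matching the values above
```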

Finally, we conducted additional analyses, including sensitivity and exploratory analyses, which are presented in the online supplemental materials. These analyses only strengthened the above conclusions.

Study 1: Discussion

Study 1 found evidence for a distinct response inhibition factor, separable from a factor that appeared to index a more general or common control process, which we refer to as goal-directed information gating and/or amplification. This is in contrast to the single latent executive process supporting inhibition task performance posited by the unity model of executive functions (e.g., Miyake & Friedman, 2012). However, there were at least two reasons to be cautious about this interpretation.

First, although the unity model was the most consistently accepted in a meta-analysis (Karr et al., 2018), there was relatively poor agreement across studies, indicating that patterns of loadings observed within factor analyses, such as in Study 1, might be less reliable than is ideal. An implication of this is that the pattern of loadings that we observed may have been an effect of undetected noise, and that we needed to determine the replicability of these loadings—especially given low observed split-half task outcome reliability—using the same task outcomes, as well as whether including additional outcomes from other tasks would alter the observed structure. Therefore, we felt that replication was important before any inferences were made.

Second, our Stroop task used in Study 1 may have made more demands on working memory than a spoken Stroop task (which does not require keymappings) or a two-color Stroop task (which requires fewer keymappings to be kept in mind). It is possible that these working memory demands on the three-color Stroop may have helped the variance common to both the Stroop and Simon tasks to be explained by the same latent factor that explained variance in the Corsi and digit span tasks, and that a Stroop task that had less demand on working memory may have resulted in either the Stroop loading on the response inhibition factor or the need for a third factor to explain these data. In other words, the Stroop task may have critically influenced our factor structure in multiple plausible ways. We thus decided to modify the Stroop task in a follow-up study to examine whether the similarity between Stroop and the span tasks would differ using a two-color Stroop task.

To address the above issues, we conducted Study 2.

Study 2: Introduction

After the first study was conducted, open questions remained: whether the above model would replicate, whether including additional tasks would alter the factor structure obtained in Study 1, and whether reducing the working memory demands of the Stroop would affect results. We therefore conducted Study 2 to answer these questions.

In Study 2, we included a number of perhaps more controversial task outcomes as potential inhibition task outcomes. These tasks were included primarily because they had been included in a meta-analysis that drew on research on inhibitory control to understand the effects of acute stress on inhibitory control (Shields et al., 2016). The current work aimed to clarify whether the primary outcomes from those tasks in fact assessed inhibitory control in the ways purported by that applied work. For each of these measures, we describe the reason for its inclusion in its respective Method section below.

Study 2 examined the structure of 12 tasks considered by at least some prior work to be supported, at least in part, by some form of inhibitory control in a sample of 279 participants. In particular, this study assessed the latent factors supporting performance on the primary outcomes derived from the go/no-go, SART, stop-signal task, Stroop task, Simon task, flanker task, emotional Sternberg task, d2 test of attention, vigilance task, simple reaction time test, digit span forward, and the forward Corsi block test. A major aim of this study was to empirically test whether each of these tasks was supported in part by some process supporting performance on well-validated inhibitory control task outcomes. We hypothesized that performance across all tasks would be better explained by a two-factor than a one-factor solution, with response inhibition as a distinct factor.

Study 2: Method

Participants

Participants were 279 (209 female) healthy undergraduates (Mage = 20.23, SDage = 2.57, range: 18–47) attending a large, public university, who had completed on average 13.8 years of education. This sample size represented all participants with usable data that we were able to recruit over a 1-year period and is somewhat larger than sample sizes in many comparable studies (e.g., Friedman & Miyake, 2004; Miyake et al., 2000; Stahl et al., 2014; for larger sample sizes in similar studies, see, for example, Draheim et al., 2021; Tsukahara et al., 2020; Unsworth et al., 2021). Although this sample size was not determined via a power analysis, a power analysis using the R package semPower (Jobst et al., 2023; Moshagen & Erdfelder, 2016) with our expected model specifications (i.e., each variable loading only on its expected factor, no covariance between response inhibition and information gating, and no residual covariances; 12 manifest variables, 53 degrees of freedom, α = .05) showed that a sample of 235 participants was necessary to achieve 80% power to reject model misspecifications with root-mean-square error of approximation (RMSEA) > .05. A post hoc power analysis using our actual confirmatory model specifications (see below; 11 manifest variables, 43 degrees of freedom, α = .05) required 266 participants to achieve 80% power; achieved power in this study was thus 82.8%. This study was not preregistered. The sample was racially/ethnically diverse, with 41.3% identifying as Asian/Asian American, 26.2% as non-Hispanic Caucasian/White, 20.0% as Hispanic/Latino, 3.4% as African American/Black, 1.0% as Pacific Islander, 0.5% as Native American, 6.8% as multiple, and 0.7% preferring not to answer.
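The RMSEA-based power analysis described above follows the MacCallum, Browne, and Sugawara (1996) noncentral chi-square approach that RMSEA-based tools such as semPower implement. The sketch below is a simplified illustration (conventions such as N vs. N - 1 in the noncentrality parameter vary across implementations):

```python
from scipy.stats import chi2, ncx2

def rmsea_power(n, df, rmsea_alt, rmsea_null=0.0, alpha=0.05):
    """Approximate power to reject a model whose population misfit is
    rmsea_alt when testing against rmsea_null, using the noncentral
    chi-square distribution of the ML fit statistic."""
    ncp_null = (n - 1) * df * rmsea_null ** 2
    ncp_alt = (n - 1) * df * rmsea_alt ** 2
    if ncp_null > 0:
        crit = ncx2.ppf(1 - alpha, df, ncp_null)
    else:  # rmsea_null = 0 reduces to a central chi-square
        crit = chi2.ppf(1 - alpha, df)
    return ncx2.sf(crit, df, ncp_alt)

# N = 235, df = 53, RMSEA > .05, alpha = .05 (the a priori analysis above):
rmsea_power(235, 53, 0.05)  # approximately .80
```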

Materials

All Study 1 tasks were included in Study 2, along with the following five additional tasks and a modified Stroop task.

d2 Test of Attention

The d2 test of attention is sometimes considered to measure inhibitory control (Arán Filippetti et al., 2022; Benitez-López et al., 2019; Grinspun et al., 2020) and was therefore included in this study. In this task, participants viewed a series of 14 trials, each of which presented a horizontal array of 36 “d”s or “p”s with zero to two lines above and zero to two lines below each letter. Participants were required to mark each “d” with a total of two lines around it (i.e., d’s with two lines above and zero below, one line above and one below, or zero above and two below) but to refrain from clicking on a “d” if it had more or fewer than two lines around it, as well as any “p” regardless of the lines around it. Each letter displayed a red box around it upon being clicked. After participants had clicked on all letters they believed were correct in a given trial, they clicked a box labeled “done” at the far side of the screen from where the mouse began, and the task continued to the next trial. Participants were given a maximum of 20 s to complete each trial (shown by a timer that counted the remaining seconds on the screen), after which time the task continued to the next trial automatically. The primary outcome used in this task was the recommended one: total time taken to complete the task, corrected for accuracy (Steinborn et al., 2018). Accuracy-corrected time was calculated as the residualized time taken to complete the task when regressed on accuracy.
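The accuracy correction just described (residualizing completion time on accuracy) can be sketched as follows; this is an illustrative Python version with variable names of our choosing:

```python
import numpy as np

def accuracy_corrected_times(total_times, accuracies):
    """Regress completion time on accuracy across participants and
    return the residuals, i.e., the portion of each participant's
    time not linearly explained by their accuracy."""
    times = np.asarray(total_times, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    design = np.column_stack([np.ones_like(acc), acc])  # intercept + accuracy
    coefs, *_ = np.linalg.lstsq(design, times, rcond=None)
    return times - design @ coefs
```

Because residuals from a regression with an intercept are centered on zero, positive scores indicate slower-than-expected completion given a participant's accuracy.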

Emotional Interference Task

Emotional interference tasks have been thought to index inhibitory control via assessment of the ability to inhibit (emotional) distractions (vs. neutral stimuli) and attend to task-relevant stimuli (Giles et al., 2015; Hung et al., 2018; Shields et al., 2016). In this study, the emotional interference task used was an emotional Sternberg item recognition task. The task began with a detailed set of instructions informing participants that they would see a set of either one (low-load) or four (high-load) letters on the screen, and then, following a delay, a second set of letters on the screen (which always consisted of four letters), and their task was to indicate whether any of the letters in the second set was present in the first set of letters. Participants were told to press the “g” key to indicate that any letter in the second set was present in the first, and to press the “h” key to indicate that no letter in the second set was present in the first; a footer with these instructions was present during the entire task, and the letters “g” and “h” were never present in any stimulus array shown to participants. In the delay between the first and second letter sets, a negatively valenced or neutrally valenced picture was shown to participants; participants were told that the picture during the delay was irrelevant and to try to ignore it. All pictures were taken from the international affective picture system and have been validated as negative or neutral in a number of prior studies (e.g., McCullough et al., 2015; Sazma et al., 2019; Shields, Dunn, et al., 2019; Shields, McCullough, et al., 2019). Each trial began with a fixation cross shown for 500 ms. Following the fixation cross, the first set of letters was displayed for 1,000 ms. A blank screen was then displayed for 100 ms, after which time a negative or neutral picture was immediately displayed and remained on the screen for 800 ms. 
The picture was then replaced with a second blank screen for 100 ms, after which time the second set of letters appeared on the screen, where it remained until participants responded or 5,000 ms had elapsed. Participants were then given accuracy feedback on screen (i.e., “correct” or “incorrect”) for 400 ms. Across the entire task, 50% of trials had a target letter (i.e., a letter from the first set) present in the second set of letters and 50% of trials had no target (i.e., target-absent trials), 50% of trials were low-load and 50% were high-load, and 50% of trials contained a neutral valence distractor and 50% of trials contained a negative valence distractor, for a total of eight different trial types. Target present/absent, load, and valence were all counterbalanced for an equal number of trials of each type, and trial order was randomized within each block. Participants first completed a practice block of eight trials, and then completed four test blocks of 32 trials each, for a total of 64 negative and 64 neutral trials. Following prior research (e.g., Giles et al., 2015), the primary outcome of interest for this task was the emotional RT interference effect: the difference between mean correct RT on negative-distractor trials and mean correct RT on neutral-distractor trials.

Flanker Task

The Eriksen flanker task (Eriksen & Eriksen, 1974) has a long history as a purported assessment of inhibitory control (e.g., Baumeister et al., 2014; Shields, Rivers, et al., 2019) and was therefore included in this study. In this study, the flanker task began with a series of instruction screens informing participants to report the direction of the center arrow using the left or right arrow key on a keyboard. The instructions further stated that the center arrow would be flanked by two arrows on each side pointing in the same direction as the center arrow (congruent trials) or in the opposite direction (incongruent trials), and that the flanking arrows should be ignored. After the instructions, 24 practice trials were presented and accuracy feedback was given for 400 ms after responses. In each trial, a fixation cross was presented in the center of the screen for 500 ms and was subsequently replaced by the target and flanking stimuli until a response was provided or 1,200 ms had elapsed. The intertrial interval then lasted for a random interval between 500 and 1,000 ms. The test phase began after a brief reminder of instructions; no accuracy feedback was given during the test phase, though throughout the test phase a footer remained on screen instructing participants to use the left arrow key to indicate the center arrow was pointing left, and the right arrow key to indicate the center arrow was pointing right. The test phase consisted of 240 trials divided into three blocks (80 trials per block). Each block contained 40 congruent trials and 40 incongruent trials, for a total of 120 congruent trials and 120 incongruent trials. The center arrow pointed left in half the trials of each type (i.e., congruent and incongruent) and right in the other half of the trials of each type.
After each block of trials, participants reached a screen that displayed their progress, were told to take as long of a break as they wanted, and to press “b” to begin the next block when ready. The primary outcome of interest on the flanker task was the flanker RT interference effect: the difference between mean correct RT on incongruent trials and mean correct RT on congruent trials.

Simple Reaction Time Test

Perhaps surprisingly, and controversially, some prior work has considered simple reaction time to index inhibitory control to a degree (Boulinguez et al., 2008; Brunia, 1993; Narayanan et al., 2006; Shields, 2017; Smith et al., 2010). We thus included a simple reaction time test in our battery. The simple reaction time test used in this study began with a series of instruction screens informing participants to press the “x” key whenever the letter “X” appeared on the screen. After the instructions, 24 practice trials were presented and accuracy feedback was given for 400 ms after responses. In each trial, a blank screen was presented for an initial stimulus onset delay, chosen from a set of ten possible delays ranging from 250 to 2,500 ms in steps of 250 ms (i.e., 250, 500, 750 ms, etc.). Each stimulus onset delay was used in nine trials during the task. Participants were then provided with up to 1,200 ms to respond on each trial. We included a variable delay because a wealth of research has found that, relative to a constant intertrial interval, a variable delay helps to prevent entrainment (e.g., Daitch et al., 2013; Drazin, 1961; Karlin, 1959; Klemmer, 1956, 1957; Schiff et al., 2013; Thomaschke & Haering, 2014); this prevents interval-timing-based and rhythm-based prediction from influencing performance, leaving a greater influence of continuous vigilance on simple reaction time (e.g., Breska & Deouell, 2017; Daitch et al., 2013). Trials were split into three blocks of 30 trials each, and participants were told to take as long a break as they wanted between blocks. The primary outcome of interest used in this task was mean reaction time.

Stroop Task

The Stroop task used in Study 1 required participants to either maintain multiple keymappings within working memory or quickly saccade to the footer to determine the correct keymapping, which may have altered correlations between Stroop effects and other inhibition tasks. In this study, therefore, to decrease working memory demands compared to the Study 1 Stroop task, only two words were presented.

In this task, the words red and blue were presented in either red or blue 60 pt font on a white background. Participants were told to indicate the color of the font using the “r” and “b” keys for red and blue font, respectively. Participants were given extensive instructions in the task and completed 16 practice trials (in order, four congruent red trials, four incongruent red trials, four congruent blue trials, and four incongruent blue trials), during which time feedback was given (i.e., “correct” or “incorrect”) for 500 ms after each trial. At all points in the task, a footer was displayed at the bottom of the screen instructing participants to press r for red font and b for blue font, with a note to respond as quickly and accurately as possible. At the start of each trial, a fixation cross was displayed for 500 ms. Following the fixation cross, the stimulus was displayed, and participants were given up to 2,500 ms to indicate the font color. If participants did not respond within 2,500 ms, the stimulus disappeared and the words “too slow” were displayed on the screen for 500 ms. The interstimulus interval was 500 ms. The total task consisted of five blocks of 64 trials each—with each block containing 75% congruent trials and 25% incongruent trials of each color—for a total of 320 trials (240 congruent, 80 incongruent). Congruent trials made up 75% of total trials to help produce a response tendency of “r” for the word red and “b” for the word blue.

Following most prior literature on the Stroop (e.g., Miyake et al., 2000), the primary outcome of interest on the Stroop task was the Stroop RT interference effect: the difference between mean correct RT on incongruent trials and mean correct RT on congruent trials.
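The RT interference outcomes used here (Stroop, Simon, flanker) are all differences of mean correct RTs between two trial types, as sketched below in an illustrative Python version; the emotional interference effect is computed analogously, with negative- vs. neutral-distractor trials in place of incongruent vs. congruent trials:

```python
import numpy as np

def rt_interference(rts_ms, is_congruent, is_correct):
    """Mean correct RT on incongruent trials minus mean correct RT on
    congruent trials; positive values indicate interference."""
    rts = np.asarray(rts_ms, dtype=float)
    congruent = np.asarray(is_congruent, dtype=bool)
    correct = np.asarray(is_correct, dtype=bool)
    incongruent_mean = rts[correct & ~congruent].mean()
    congruent_mean = rts[correct & congruent].mean()
    return incongruent_mean - congruent_mean

# Two correct congruent trials (500, 520 ms), two correct incongruent
# trials (600, 640 ms), one incorrect trial excluded: effect = 110 ms.
rt_interference([500, 520, 600, 640, 900],
                [True, True, False, False, True],
                [True, True, True, True, False])  # 110.0
```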

Vigilance Test

Vigilance task performance has been considered in some work to rely in part on inhibitory control (e.g., Brache et al., 2010; Rossa et al., 2014; Shields, Sazma, & Yonelinas, 2016); we thus included a vigilance task in this study. In this task, participants were told to watch the screen and press the spacebar whenever they saw the letter X (i.e., target trials). They were further informed that there would be a very long delay between letters and that other letters, such as K (i.e., nontarget trials), might appear, in which case they should not respond. The instructions “Respond only to X” were presented at the bottom of the screen during the entire task. Each trial began with a hollow white circle presented against a black background, which was all that was on the screen for a random interval between 5,000 and 15,000 ms. After the random interval elapsed, a white letter was displayed in the center of the circle for 25 ms (or, more accurately, no more or fewer than two screen refreshes at 60 Hz, so approximately 33 ms of visible time), after which it immediately vanished—though the white circle remained. Participants were then given up to 975 ms to press the spacebar before the trial automatically concluded and moved on to the next. Participants first completed five practice trials (two target, three nontarget); on the first two of these, they were given accuracy feedback with additional information (e.g., “This was a target trial, and you responded correctly to the ‘X’ by hitting the spacebar”). A total of 40 test trials were then presented—10 target trials and 30 nontarget trials. Nontarget trials made up the majority of trials so that participants would not build up a prepotent tendency to respond whenever a stimulus was displayed, even if it was not fully attended to. The extremely long intertrial interval, high nontarget-to-target ratio, and extremely short stimulus presentation time made correct responding difficult.
The primary outcome from this task was errors of omission (i.e., missed targets).

Procedure

Participants came to the laboratory for a 2-hr study session. Except for the additional five tasks, the procedure for this study was identical to Study 1. After completing the final task, participants were thanked, debriefed, and dismissed.

Data Reduction and Analysis

Each raw data file for each task was examined for intentional premature responding (i.e., “clicking through” to finish the study sooner) and for failing to follow task instructions (i.e., performance at or below chance). Participants who engaged in these behaviors were excluded (see the online supplemental materials for descriptions of exclusions and data cleaning); the final sample consisted of 279 participants who did not engage in these behaviors. Data are available at OSF (This Project, 2024): https://osf.io/6xpbf/?view_only=57430884e49442b2bdd88d32b6df6e32.

Data were analyzed in R, and confirmatory factor analysis (CFA) was conducted using the lavaan package, Version 0.6-16.

Study 2: Results

Descriptive Statistics and Correlations

As expected, errors of omission on the vigilance task (M = 2.77, i.e., 27.7% of possible trials; errors of commission only occurred on 2.3% of possible trials in the vigilance task) and errors of commission on the go/no-go (M = 18.33) and SART (M = 11.91) were significantly greater than zero, ps < .001. Similarly, emotional RT interference effects (M = 22.82 ms), flanker RT interference effects (M = 57.18 ms), Simon RT interference effects (M = 44.22 ms), and Stroop RT interference effects (M = 80.80 ms) were significantly different from zero, ps < .001. Descriptive statistics, including those quantifying normality, are presented in the online supplemental materials. Additional analyses using variables transformed to normal distributions are presented in the online supplemental materials; conclusions did not differ from the results below. Split-half task (odd/even) reliabilities are presented in Table 5.4

Table 5.

Observed Reliabilities of Study 2 Outcomes

Task Odd/even split-half reliability (ρST)

d2 .95
Simple reaction time .91
SART .86
Go/no-go .85
Stroop .85
Flanker .77
Vigilance .66
Stop signal .63
Simon .56
Corsi block .53
Digit span forward .35
Emotional Sternberg .21

Note. Excluding tasks with ρST < .70, ρST < .60, or ρST < .40 did not alter the results. ρST = split-half tau-equivalent reliability; SART = sustained attention to response task.
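The ρST values above follow the usual two-half logic: correlate scores from odd and even trials, then step the correlation up to full test length. A minimal sketch of that computation (illustrative only; the simulated data and variable names are ours, not the authors'):

```python
import numpy as np

def split_half_reliability(trial_scores):
    """Odd/even split-half reliability with the Spearman-Brown step-up,
    which equals tau-equivalent reliability when the halves are parallel.
    trial_scores: (participants x trials) array of per-trial scores."""
    odd = trial_scores[:, 0::2].mean(axis=1)
    even = trial_scores[:, 1::2].mean(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)  # Spearman-Brown correction for two halves

# Simulated example: each trial = stable ability + independent trial noise.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
scores = ability + rng.normal(size=(200, 40))
print(round(split_half_reliability(scores), 2))  # near 1 for a low-noise task
```

Footnote 2 below explains why this kind of estimate understates reliability for span tasks with termination rules.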

Bivariate correlations between all variables are presented in Table 6. Bivariate correlations between tasks were modest, with the strongest associations being between the SART and go/no-go (r = .39, p < .001), the Simon and Stroop tasks (r = .21, p < .001), and the Corsi and digit span tasks (r = .21, p < .001). Exploratory analyses indicated that the inverse associations between Stroop RT interference and either the go/no-go or the SART were not driven by outliers, as these associations only strengthened when studentized residuals greater in absolute value than three (ps < .005) or two (ps < .002) were removed from the analyses.
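The outlier check just described (drop observations with large studentized residuals, then re-estimate the association) can be sketched as follows. This is an illustrative reimplementation on simulated data, not the authors' analysis script:

```python
import numpy as np

def externally_studentized_residuals(x, y):
    """Externally studentized residuals from the simple regression of y on x:
    each residual is scaled by a leave-one-out error-variance estimate."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)  # leverages
    sse = resid @ resid
    s2_loo = (sse - resid**2 / (1 - h)) / (n - p - 1)
    return resid / np.sqrt(s2_loo * (1 - h))

def correlation_without_outliers(x, y, cutoff=3.0):
    keep = np.abs(externally_studentized_residuals(x, y)) <= cutoff
    return np.corrcoef(x[keep], y[keep])[0, 1]

# Simulated example: a weak negative association plus a few gross outliers.
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = -0.2 * x + rng.normal(size=300)
y[:3] += 8.0  # inject three extreme observations
print(round(correlation_without_outliers(x, y), 2))
```

The injected points have studentized residuals far above the cutoff, so they are excluded before the correlation is recomputed, mirroring the exclusion rule in the text.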

Table 6.

Bivariate Correlations Between Primary Outcomes From Each Task

Task outcome 1 2 3 4 5 6 7 8 9 10 11

 1. Corsi block span
 2. d2 time regressed on accuracy −.01
 3. Digit span forward span .21*** −.00
 4. Emotional Sternberg interference .03 .01 .01
 5. Flanker interference .05 −.12* −.07 .02
 6. Go/no-go commissions −.04 .00 −.03 −.02 .08
 7. Simon interference −.10 .09 −.10 .11 .03 −.02
 8. Simple reaction time −.11 .13* −.12* .12 −.07 −.01 .09
 9. Stroop interference −.02 .15* −.04 .17** .03 −.14* .21*** .13*
10. Stop signal reaction time −.04 .10 .01 .04 .05 .20*** .09 .07 .03
11. SART commissions −.07 −.05 −.02 −.00 .10 .39*** .03 −.03 −.17** .12*
12. Vigilance omissions .02 .02 −.11 .02 .06 .13* .09 .19** −.00 .01 .08

Note. SART = sustained attention to response task.

* p < .05. ** p < .01. *** p < .001.

CFA

The variables were entered into a CFA with factors specified as expected from the EFA in Study 1 (i.e., response inhibition indicated by go/no-go commissions, SART commissions, and SSRT, and goal-directed information gating and/or amplification indicated by all other variables).

The fit of this model was acceptable by some metrics but not others, χ2(54) = 72.24, p = .049, comparative fit index (CFI) = .838, root-mean-square error of approximation (RMSEA) = .035, Bayesian information criterion (BIC) = 9,517.9, sample-adjusted BIC (SABIC) = 9,441.8, Akaike information criterion (AIC) = 9,430.7. Flanker interference effects did not significantly load on the expected factor, p = .879, and exploratory analyses (see the online supplemental materials) suggested removing this variable. We therefore removed this variable to improve model fit; retaining it did not influence the results below (i.e., the accepted two-factor model below remained an acceptable fit with the flanker included). The fit of the resulting model was likewise acceptable by some metrics but not others, χ2(44) = 58.22, p = .074, CFI = .870, RMSEA = .034, BIC = 8,715.9, SABIC = 8,646.1, AIC = 8,634.0.

Allowing the correlation between response inhibition and information gating to be estimated (r = −.08, p = .480) resulted in a numerically worse fitting model, χ2(43) = 57.85, p = .065, CFI = .865, RMSEA = .035, BIC = 8,721.1, SABIC = 8,648.2, AIC = 8,637.6. An examination of modification indices revealed that the largest improvement in model fit would be achieved by estimating the covariance between the residuals of the digit span forward and the residuals of the Corsi span; allowing this covariance (r = .19, p = .003) led to an acceptable model fit, χ2(43) = 48.80, p = .251, CFI = .947, RMSEA = .022, BIC = 8,712.1, SABIC = 8,639.2, AIC = 8,628.6 (see Figure 2).
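The information criteria used throughout (AIC, BIC, SABIC) are simple functions of a model's maximized log-likelihood, free-parameter count, and sample size; lower values indicate better penalized fit. A sketch of the standard definitions (the numbers below are illustrative, not values from the study; SABIC uses Sclove's sample-size adjustment as implemented in common SEM software):

```python
import math

def information_criteria(log_lik, k, n):
    """AIC, BIC, and sample-adjusted BIC from the maximized log-likelihood
    (log_lik), number of free parameters (k), and sample size (n)."""
    aic = 2 * k - 2 * log_lik
    bic = k * math.log(n) - 2 * log_lik
    sabic = k * math.log((n + 2) / 24) - 2 * log_lik  # Sclove's adjustment
    return aic, bic, sabic

# Illustrative comparison: freeing two extra parameters is rewarded only if
# the log-likelihood gain outweighs the complexity penalty.
aic1, bic1, _ = information_criteria(log_lik=-4290.0, k=25, n=279)
aic2, bic2, _ = information_criteria(log_lik=-4286.0, k=27, n=279)
print(aic2 < aic1, bic2 < bic1)  # criteria can disagree on the same pair of models
```

Because BIC penalizes parameters more heavily than AIC at this sample size, the two criteria can rank the same pair of models differently, which is why the text reports all three.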

Figure 2. Confirmatory Factor Analysis for Study 2.


Note. Without allowing the covariance between digit span and Corsi span tasks (not shown), the model was not a satisfactory fit to the data; however, when this path was estimated (as shown), the model was a good fit to the data. The results indicate that response inhibition appears to be distinct from a more common control process at a latent level. Both digit span forward and Corsi span are coded such that higher scores indicate better performance, whereas the rest of the tasks’ outcomes are coded such that higher scores indicate worse performance (e.g., greater interference or more errors). Com. = commission errors; SART = sustained attention to response task; RT = response time. See the online article for the color version of this figure.

Comparing the model shown in Figure 2 to the one-factor, unity model that also had the digit/Corsi covariance estimated, χ2(43) = 81.42, p < .001, CFI = .650, RMSEA = .057, BIC = 8,744.7, SABIC = 8,671.8, AIC = 8,661.2, the model shown in Figure 2 was a better fit to the data, ΔBIC = −32.6, ΔSABIC = −32.6, ΔAIC = −32.6.

Comparing the model shown in Figure 2 to a model that estimated a separate working memory factor (indicated by digit span and Corsi span, without these variables loading on information gating), the covariance-only model shown in Figure 2 was a slightly better fit to the data than this three-factor model, χ2(42) = 47.75, p = .251, CFI = .948, RMSEA = .022, BIC = 8,716.7, SABIC = 8,640.6, AIC = 8,629.5, ΔBIC = −4.6, ΔSABIC = −1.4, ΔAIC = −1.0.

In short, response inhibition and information gating were separable at a latent level (Figure 2).

Comparison With the Unity Model of Executive Functions

We also conducted a series of analyses attempting to determine why we observed a two-factor solution whereas prior research has often obtained a one-factor solution. These analyses are presented in detail within the online supplemental materials. In brief, exploratory and confirmatory factor analyses converged to suggest that without including either one of the variables that selectively loaded on response inhibition (i.e., errors of commission on the go/no-go or SART), structural equation models examining the latent structure of inhibitory control may be unable to detect response inhibition as a distinct latent control process.

Consideration of Outcome Type as an Explanation

We also examined whether a two-factor structure was an artifact of outcome type (i.e., reaction time or accuracy-based) in two sets of analyses. These analyses are presented in the online supplemental materials. The first of these estimated additional or alternative factors for outcome type. The second of these rescored outcomes to make each task outcome a function of both accuracy and RT. This alternate scoring method strengthened the correlations among variables, and these analyses further provided evidence that our goal-directed information gating and/or amplification factor appeared to be the common executive function (e.g., Figure S9 in the online supplemental materials). In short, our finding of a two-inhibition-factor model was not an artifact of differential outcome types between the two factors. Response inhibition emerged as a distinct factor in each of these analyses.

Restriction of Variables

Because many of the tasks used in this study may be controversial with respect to their reliance on inhibitory control, we ran analyses including only go/no-go commissions, SART commissions, SSRT, Stroop interference, Simon interference, and emotional Sternberg interference. Only Study 2’s data were used in these analyses, as only Study 2 contained all six of these tasks and outcomes. In exploratory factor analyses, a single factor was a poor fit to the data, χ2(9) = 29.27, p < .001, whereas a two-factor solution was a sufficiently good fit to the data, χ2(4) = 2.29, p = .683, and a better fit to the data than the one-factor solution, Δχ2(5) = 26.99, p < .001; a three-factor solution could not be compared to the two-factor solution due to insufficient degrees of freedom in the three-factor solution. Similarly, a CFA with two uncorrelated inhibition factors (one indicated only by go/no-go commissions, SART commissions, and SSRT; the other indicated only by Stroop interference, Simon interference, and emotional Sternberg interference) fit the data well, χ2(9) = 14.74, p = .098, CFI = .929, RMSEA = .048, BIC = 4,731.7, SABIC = 4,693.6, AIC = 4,688.1, with all loadings significant, ps < .007 (Figure 3). Therefore, two inhibition factors appear necessary to explain the structure of the six task outcomes that are least controversial as inhibition task outcomes.

Figure 3. Two-Factor Model With Restricted Variables.


Note. This model was a good fit to the data, and all loadings were significant. Com. = commission errors; SART = sustained attention to response task; RT = response time. See the online article for the color version of this figure.

Cross-Study Reproducibility Analysis: Multiple Group CFA by Study

Finally, we assessed the replicability of our two-factor CFA model across studies by conducting a multiple group CFA for all variables in both data sets, with study as the grouping variable, first using the loading pattern that fit Study 1 data well (i.e., two uncorrelated latent factors with no residual covariances, with the loading structure given in Table 4). This model, constraining all loadings, residual variances, residual covariances, latent variances, and latent covariances to equality between studies, was a good fit to the data, χ2(47) = 52.46, p = .271 (Figure 4) (see Table 7 for additional fit statistics for this model and all subsequent models). Moreover, compared to an unconstrained model, χ2(33) = 39.60, p = .199, the constrained model improved model fit, as shown by substantial reductions in BIC, SABIC, and AIC, Δχ2(14) = 12.87, p = .537, ΔBIC = −72.1, ΔSABIC = −27.7, ΔAIC = −15.1. Together, these analyses indicate that the paths in this factor analysis replicated between studies. The consistency of these paths is particularly impressive given the differences in Stroop task parameters between studies.
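The invariance tests above are nested-model χ² difference (likelihood-ratio) tests: the difference in χ² values is referred to a χ² distribution with degrees of freedom equal to the difference in model degrees of freedom. A sketch using the constrained versus unconstrained comparison reported above as worked numbers (our illustration, not the authors' code):

```python
from scipy.stats import chi2

def chi_square_difference_test(chisq_constrained, df_constrained,
                               chisq_free, df_free):
    """Likelihood-ratio test for nested models: a nonsignificant p value
    supports retaining the more constrained (invariance) model."""
    delta = chisq_constrained - chisq_free
    ddf = df_constrained - df_free
    return delta, ddf, chi2.sf(delta, ddf)

delta, ddf, p = chi_square_difference_test(52.46, 47, 39.60, 33)
# Close to the reported Δχ2(14) = 12.87, p = .537 (small rounding differences)
print(round(delta, 2), ddf, round(p, 3))
```

A large, significant Δχ² would instead indicate that constraining the loadings to equality between studies meaningfully worsened fit, that is, a failure of invariance.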

Figure 4. Multigroup SEM With All Loadings and Residuals Fixed to Equality Between Studies (i.e., Strong Invariance Between Studies).


Note. This model was an excellent fit to the data, and all loadings were significant. SEM = structural equation model; Com. = commission errors; SART = sustained attention to response task; RT = response time. See the online article for the color version of this figure.

Table 7.

Model Fit Statistics for Multigroup CFAs by Study

Model CFI RMSEA BIC SABIC AIC

Without Corsi and digit span covariance
Uncorrelated two-factor w/ invariance .952 .023 8,622.8 8,549.8 8,529.2
Uncorrelated two-factor w/o invariance .943 .030 8,694.9 8,577.5 8,544.3
One-factor w/ invariance .696 .058 8,647.2 8,577.3 8,557.6
One-factor w/o invariance .766 .055 8,674.5 8,595.9 8,563.6
Correlated two-factor w/ invariance .954 .028 8,627.6 8,551.5 8,529.9
Correlated two-factor w/o invariance .954 .028 8,703.7 8,579.9 8,544.9

With Corsi and digit span covariance
Uncorrelated two-factor w/ invariance 1.000 .000 8,619.3 8,543.1 8,521.6
Uncorrelated two-factor w/o invariance 1.000 .000 8,696.5 8,572.7 8,537.7
One-factor w/ invariance .813 .046 8,638.9 8,565.9 8,545.2
One-factor w/o invariance .860 .048 8,709.5 8,588.9 8,554.8
Correlated two-factor w/ invariance^a
Correlated two-factor w/o invariance^a

Note. The best-fitting model in each category (i.e., with or without the Corsi and digit covariance) is listed in bold. The best-fitting model indicated strong replication of our Study 1 factor structure across studies. CFA = confirmatory factor analysis; CFI = comparative fit index; RMSEA = root-mean-square error of approximation; BIC = Bayesian information criterion; SABIC = sample-adjusted Bayesian information criterion; AIC = Akaike information criterion; w/ = with; w/o = without.

^a Model was not identifiable.

Moreover, this strong invariance, uncorrelated two-inhibition-factor model was a better fit to the data than a one-factor model assuming strong invariance, χ2(48) = 82.91, p = .001, Δχ2(−1) = −30.45 (nonnested), ΔBIC = −24.4, ΔSABIC = −27.6, ΔAIC = −28.5; than a one-factor model without invariance, χ2(34) = 60.92, p = .005, Δχ2(13) = −8.46 (nonnested), ΔBIC = −87.4, ΔSABIC = −46.1, ΔAIC = −34.5; than a correlated two-factor model assuming strong invariance, χ2(46) = 51.24, p = .276, Δχ2(1) = 1.23, p = .268, ΔBIC = −4.8, ΔSABIC = −1.7, ΔAIC = −0.8; or than a correlated two-factor model without invariance, χ2(31) = 36.24, p = .237, Δχ2(16) = 16.22, p = .438, ΔBIC = −80.9, ΔSABIC = −30.1, ΔAIC = −15.8.

Next, we included the covariance between digit span and Corsi span (i.e., the model that fit Study 2 well) to determine whether including this covariance altered any of the above conclusions. It did not. In particular, in this analysis, a model that constrained all loadings, residual variances, residual covariances, latent variances, and latent covariances to equality between studies (i.e., like the model in Figure 4 except with an additional covariance between digit span and Corsi span constrained to equality across studies) was a good fit to the data, χ2(46) = 42.89, p = .603. Moreover, relative to an unconstrained model, χ2(31) = 29.04, p = .567, this strong invariance did not harm model fit via χ2, Δχ2(15) = 13.84, p = .603, and improved model fit via BIC, SABIC, and AIC: ΔBIC = −77.2, ΔSABIC = −29.6, ΔAIC = −16.2. Moreover, this strong invariance, uncorrelated two-inhibition-factor model was a better fit to the data than a one-factor model assuming strong invariance, χ2(47) = 68.54, p = .022, Δχ2(−1) = −25.65, ΔBIC = −19.6, ΔSABIC = −22.8, ΔAIC = −23.6, or than a one-factor model without invariance, χ2(32) = 48.13, p = .033, Δχ2(14) = −5.24, ΔBIC = −90.2, ΔSABIC = −45.8, ΔAIC = −33.2. The correlated two-factor model with or without invariance was not identifiable.

In short, regardless of whether the residual covariance between the digit span and Corsi span was included, the uncorrelated two-factor model was reliably the best-fitting model across studies, and the loadings observed in it were stable enough to support strong invariance. Even when the covariance between digit span and Corsi span was estimated separately, a single common control factor was insufficient to explain these data: Response inhibition required a separate factor.

Study 2: Discussion

In this study, we again found evidence for a two-factor structure of task outcomes that appeared to reflect response inhibition (i.e., inhibiting a prepotent or activated response) and goal-directed information gating and/or amplification (i.e., what we believe to be the common executive function). Because of this consistency of results between studies, we save the bulk of our Study 2 discussion for the General Discussion section, and we discuss idiosyncratic Study 2 results here.

One interesting finding was that, contrary to expectation, flanker RT interference effects did not load on any factor (though see the alternate scoring analyses in the online supplemental materials). Why this occurred is not clear, but it is possible that resolution of interference from multiple, concurrent, and competing stimuli relies on a different inhibitory process than interference induced by incongruent single-stimulus features (e.g., Stroop or Simon task interference). This idea is supported by computational modeling studies, which have found that the flanker task is well described by a model of continuous, shrinking attentional selectivity to the target stimulus present within an array (Evans & Servant, 2020; see also Kinder et al., 2022); notably, this shrinking spatial attentional window would not resolve interference from a single stimulus, such as is present in the Stroop or Simon task (e.g., an incongruent word or spatial location). Thus, it is possible that flanker interference effects are more dependent upon the ability to narrow attention from an array to a single stimulus, whereas other interference effects are more dependent upon a process that uses goals to amplify relevant, or gate or inhibit irrelevant, features of a single stimulus. However, we note that analyses using the alternative outcome scores, which equated RT and accuracy between task outcomes, did not produce this result: The flanker task loaded on the information gating factor as expected in this analysis (see the online supplemental materials).

At the level of observed (i.e., nonlatent) variables, an interesting finding was that Stroop RT interference effects were inversely correlated with go/no-go and SART errors of commission. This inverse association is interesting because Stroop RT interference effects are classically thought to reflect poor response inhibition (Jensen & Rohwer, 1966), and because the Stroop is thought to differ from other interference control tasks in response-related interference (Stahl et al., 2014). However, although errors on incongruent trials in the Stroop task may reflect poor response inhibition, recent research has suggested that RT interference effects on tasks such as the Stroop may primarily reflect an individual’s “decision boundary,” or response caution (i.e., a tendency to prefer accuracy over speed in speed/accuracy tradeoffs; Ulrich et al., 2015). Thus, the inverse associations between Stroop RT interference effects and go/no-go and SART errors of commission may have reflected individual differences in response caution correlating across these tasks. Alternatively, it is possible that our use of a two-color Stroop—which was intended to improve overlap with other potential response inhibition tasks—rather than a more typical three- or four-color Stroop (though see Inzlicht & Gutsell, 2007) somehow contributed to these inverse associations with go/no-go and SART commissions. This interpretation is supported by the differences in associations with the Stroop between Study 1 and Study 2.

General Discussion

A growing body of research points to the clinical and real-world relevance of inhibitory control. Despite the popularity of inhibitory control assessment, however, its latent structure is unclear. The present studies addressed this issue by conducting exploratory and confirmatory factor analyses of the primary outcomes from seven tasks (Study 1) and 12 tasks (Study 2) thought by at least some prior work to be supported at least in part by some form of inhibitory control. The results indicated that two factors—response inhibition and what we have referred to as goal-directed information gating and/or amplification—were necessary to account for these data. Further, in Study 2, we found that a single-factor model was not significantly different from a two-factor model if a task that exclusively loaded on the response inhibition factor was not included in the model, which may explain why prior work has found a single-factor model of inhibitory control preferable to a two-factor model (Friedman & Miyake, 2004). In short, these results suggest that response inhibition and information gating are, in fact, distinct inhibitory control processes or executive functions, and they further show why prior work has failed to support this distinction.

As described above, we believe the loadings for our second factor are well summarized by the label of goal-directed information gating and/or amplification. This definition is virtually identical to Miyake and Friedman’s common executive function, and we believe that this factor is, in fact, Miyake and Friedman’s common executive function (e.g., see Figure S9 in the online supplemental materials). Notably, our loadings for this factor are also consistent with recent work suggesting that the common executive function represents the relative speed of goal-directed information uptake (Löffler et al., 2023). We did not primarily use the term “common executive function” to describe our loadings within the results because we did not assess all of Miyake and Friedman’s executive functions (e.g., set-shifting, or updating per se), and it is thus possible that our second factor is not identical to Miyake and Friedman’s common executive function. However, we believe that our second factor is likely to be their common executive function and should presumably be interpreted in the same way as the common executive function.

We found that the go/no-go and SART were important tasks for identifying response inhibition as a distinct factor. Indeed, prior work that has included one of these tasks has typically found that a separate factor primarily indicated by that task is needed to account for the data (e.g., Bender et al., 2016; Chuderski et al., 2012; Robison & Brewer, 2022; Tiego et al., 2018). To our knowledge, only four studies (Kane et al., 2016; McVay & Kane, 2012; Redick et al., 2016; Unsworth & McMillan, 2014), each of which included a variant SART, have not required a separate response inhibition factor to account for the data—although many of these studies found that inhibitory control task outcomes required more than one factor, none of these four studies obtained a response-inhibition-specific factor. These studies all included a semantic SART and used as outcomes both the standard deviation of reaction time (SDRT) and d′, and allowed them to covary—except Unsworth and McMillan (2014), who used accuracy on targets and did not allow SDRT and target accuracy to covary. Importantly, d′ in this task is penalized by both omissions and commissions; this d′ outcome therefore includes variance associated with both information gating and response inhibition, thereby permitting omission-related variance to relate to other variables in the factor analysis and the commission-related variance in d′ to be treated as error variance in the analysis. In addition, the semantic SART, which contained stimuli that would not be inhibited prior to processing them at the level of meaning, may have influenced the results relative to the standard task: Response inhibition recruited by a semantic category appears to be underpinned by at least partly different processes from response inhibition recruited by simple stimulus features (Dierolf et al., 2018). Therefore, it is unclear whether a single common control process would have fit these studies’ data poorly had they included the typical SART and used only commissions as the outcome.
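The point about d′ can be made concrete: d′ is computed from the hit rate (1 minus the omission rate) and the false-alarm rate (the commission rate), so variance from omissions and commissions is fused into a single score. A sketch with the log-linear correction for extreme rates (the counts are hypothetical, not from any of the cited studies):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection d' from raw counts, with the log-linear (+0.5)
    correction so that perfect rates do not yield infinite z-scores."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Either more omissions (misses) or more commissions (false alarms) lowers
# d', conflating information-gating lapses with response-inhibition failures.
base = d_prime(95, 5, 10, 90)
more_omissions = d_prime(85, 15, 10, 90)
more_commissions = d_prime(95, 5, 20, 80)
print(more_omissions < base, more_commissions < base)  # True True
```

This is why a d′ outcome, unlike a pure commission count, can absorb response-inhibition variance into the error term of a factor model.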

A similar study conducted previously merits discussion. In particular, Stahl et al. (2014) examined the structure of what they term impulsivity in a CFA, finding that a five-factor solution best fit the 13 tasks included in their primary CFA. Although Stahl et al.’s study should be lauded for numerous reasons, it differs from our study in at least two crucial ways. First, their model comparisons in CFA took place prior to adding a response inhibition factor to their model (see the final section of their results and their sixth figure). The implication of this difference is that, had the response inhibition tasks been in their model prior to the model comparison stage, they may have obtained different results when comparing their primary model against the alternatives—the decrement in model fit from including their response inhibition factor certainly implies as much. Because we conducted exploratory factor analyses to determine our initial factor structure in Study 1, our results do not suffer from this limitation; we mention it here as a potential explanation for a distinction between response inhibition and other forms of inhibitory control found by Stahl et al. that may not have occurred had their analytic approach differed. Second, their study was not a two-study replication, and the stability of their model across samples is thus less clear. Nonetheless, Stahl et al.’s finding that a sixth latent factor indicated by the go/no-go, antisaccade, and stop-signal tasks did not correlate with any of the five other inhibition factors they constructed in their primary model is broadly consistent with our finding that response inhibition is, in fact, a separate latent construct from other forms of inhibitory control.

One result that emerged in both of our studies merits some discussion: Simple reaction time, vigilance, and both forward spans all loaded on the information gating factor, and—despite the theoretical motivation for additional factors (e.g., a processing speed factor or a working memory factor)—model fits were not significantly improved by including such factors within our primary analyses (though see the online supplemental materials). We believe it likely that, had we included more measures of the underrepresented task outcome types (e.g., sustained attention, working memory maintenance), an additional factor would have been required to explain at least part of task performance—even within our primary analyses (i.e., those in the main text, not the supplement). We note that our data do not suggest that performance on these outcomes was wholly underpinned by information gating, just that, given our sample size and task selection, explaining performance on these task outcomes did not differ enough from other information-gating-dependent outcomes for a separate factor to be needed. In other words, while performance on these outcomes shares variance with other information gating outcomes, it is presumably not explained only by information gating.

It should be noted that, like Miyake and Friedman’s (2012) definition of the common executive function, our definition of information gating implies broad importance: Any performance outcome on a cognitive task may index goal-directed information gating (or Miyake and Friedman’s common executive function) to some degree. Importantly, though, this does not mean that every task outcome indexes information gating (or the common executive function) to a greater extent than it indexes one or more other cognitive processes. For example, RT on a reaction time task may be affected by information gating failures (e.g., failing to bias processing toward the center location on the screen, thus increasing the likelihood of lapses in goal-relevant attention) more so than is accuracy on spatial rotation tasks. Every task outcome draws on multiple processes to varying degrees, and some task outcomes (e.g., Simon interference) presumably have more of their variance in performance explained by information gating than others (e.g., SSRT, perseverative errors in the Wisconsin card sorting task, or spatial rotation RT).

An issue related to that described in the previous paragraph is that because broad behavioral outcomes (e.g., reaction time, accuracy) conflate multiple cognitive processes (e.g., processing speed, inhibitory control; see Draheim et al., 2016; Farrell & Lewandowsky, 2018; Hedge et al., 2022; Jewsbury et al., 2016; Rey-Mermet et al., 2019), it may be that our factors reflect a conflation of constructs—for example, those related to accuracy versus reaction time (e.g., decision threshold)—rather than the constructs that we assume. We cannot rule this possibility out, though we note that two forms of supplemental analyses that considered outcome type either in addition to, or in place of, inhibition in explaining task performance converged to indicate that response inhibition is a distinct factor underpinning some task performance. Still, we believe that an ideal analysis of these data would include a computational model that can fit data across all tasks included here well with relevant parameters (e.g., goal maintenance, processing speed), and test whether a version of that model with a single inhibition parameter more parsimoniously accounts for the data than a version of the model with two or more inhibition parameters. We consider this a fruitful avenue for future research.

In some of our analyses, we observed outsized loadings by some variables, which might call into question whether those models represented the latent structure of inhibition per se. Notably, however, outsized loadings were not present within our supplemental analyses that accounted for outcome type, nor were they present to a large degree within our multigroup model across studies, indicating that our more stable estimates did not suffer from this problem.

Perhaps more importantly, in these and similar studies (e.g., Friedman & Miyake, 2004), latent variables of inhibitory control explained little variance (typically around 5% to 20%) in inhibitory control task outcomes (e.g., Friedman & Miyake, 2004; Gärtner & Strobel, 2021; Miyake et al., 2000). This problem of high residual variance may be explained in part by differences across tasks, such as in stimulus timing, stimulus presentation (e.g., size), target stimulus modality (e.g., visual vs. auditory), intertrial intervals, or whether the stimulus feature to be inhibited is a high-level or low-level feature (e.g., Dierolf et al., 2018). However, these nonessential differences are unlikely to explain the majority of weak loadings in inhibitory control research: Unlike the inhibitory control literature, the working memory literature often observes much stronger explanations of variance in observed variables by latent factors (e.g., Conway et al., 2002). Notably, our supplemental analyses explained approximately 20% to 65% of the variance in each of the outcomes via the factors. Moreover, even within our primary analyses, our pattern of results replicated across two studies, and our primary inference—that more than one factor is needed to explain inhibitory control task performance—was robust to numerous variations of supplemental analyses. Together, these results suggest that our inferences are reliable, and they challenge prior factor analytic work finding that a single inhibition factor is sufficient to explain all inhibition task performance.

This article has several strengths, including replication of results across studies, a large and diverse sample in Study 2, examination of 12 distinct tasks thought by at least some prior work to rely at least in part on some inhibitory control process, and theory-driven hypotheses. Despite its strengths, however, this study has limitations that should be noted. First, there are numerous other tasks thought to rely on inhibitory control that we were unable to examine due to the length of the studies. It is likely that if we had included other tasks, our results would have differed in some respects—for example, requiring a third factor or producing a different set of loadings. Second, we assessed the latent structure of inhibitory control in a sample of healthy young adults. The latent structure of executive functions more broadly is thought to differ across development (Karr et al., 2018), and it is possible that the latent structure of inhibitory control would differ across development as well. Third, within our primary—though not supplemental—analyses, the outcomes were only weakly correlated with each other and were only weak indicators of the latent variables, indicating that the factors we identified may only weakly contribute to performance on the primary task outcomes; task-specific factors may play a larger role in explaining performance on these outcomes. Finally, although we have no a priori reason to expect any cultural differences in the latent structure of inhibitory control, it is worth noting that some aspects of cognition differ across cultures (Henrich et al., 2010), and the samples in the present studies were predominantly Asian or Asian American. Although fluency in English was a prescreen prerequisite for these studies, it is possible that the latent structure of inhibitory control differs cross-culturally. Future research should attempt to replicate and extend these results in other populations and with additional tasks.

Conclusion

In conclusion, across two studies, we found that performance across multiple tasks was best described by two factors resembling forms of inhibitory control: response inhibition and what we have called goal-directed information gating and/or amplification. Although this finding differs from most prior findings on inhibitory control in healthy young adults, our data suggest that this difference may stem from the tasks included in prior studies. These results therefore suggest that models of executive function may need to be updated to include a distinct response inhibition factor. Inhibitory control is an important construct, and our results reveal that its facets are, in fact, distinct.

Supplementary Material

Supp1

Supplemental materials: https://doi.org/10.1037/xlm0001352.supp

Acknowledgments

This research was supported by a U.C. Davis Provost Dissertation Year Fellowship and a University of Arkansas Honors College Grant to Grant S. Shields and National Eye Institute Grant EY025999 to Andrew P. Yonelinas. Cognitive task code, data for analysis, and analysis scripts are available online (This Project, 2024): https://osf.io/6xpbf/?view_only=57430884e49442b2bdd88d32b6df6e32.

Footnotes

1

Although partial credit scoring can improve the reliability of span tasks, it requires all participants to answer all items, with the final score reflecting the proportion of items correct across all trials. In contrast, both the digit span and Corsi span tasks terminated after repeated failures, so participants did not respond to all items. Therefore, we used standard digit and Corsi span scoring rather than partial credit scoring.

2

Because of how digit span and Corsi span are calculated, their split-half reliability cannot be computed in the same way as for the other tasks. To calculate split-half reliability for these tasks, we therefore conducted an odd-even split of trials and coded span on each split as the set size at which an individual first responded incorrectly (effectively terminating the task at one error, rather than at two errors as in the actual task). The resulting coefficients understate the reliability of these two tasks, because increasing the number of errors required to terminate a span task increases its reliability (Blackburn & Benton, 1957). Split-half reliability for these tasks can therefore be thought of as a lower-bound estimate.
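The odd-even split scoring described in this footnote can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the trial format (a list of set-size/accuracy pairs ordered by increasing set size) and the handling of error-free splits are assumptions.

```python
# Sketch of odd-even split-half scoring for span tasks, where span on each
# split is coded as the set size at which the first incorrect response occurs.
# Data format is hypothetical: [(set_size, correct), ...] in task order.

def first_error_span(trials):
    """Set size of the first incorrect response in a sequence of trials."""
    for set_size, correct in trials:
        if not correct:
            return set_size
    # No errors in this split (an assumed convention): credit the largest
    # set size attempted.
    return trials[-1][0]

def split_half_spans(trials):
    """Score the odd- and even-numbered trials as separate half-tests."""
    odd_half = trials[0::2]
    even_half = trials[1::2]
    return first_error_span(odd_half), first_error_span(even_half)

# Made-up run: set sizes 3-8, first error at set size 7.
trials = [(3, True), (4, True), (5, True), (6, True), (7, False), (8, False)]
print(split_half_spans(trials))  # (7, 8)
```

The two half-scores would then feed into an ordinary split-half reliability computation across participants.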

3

Note that intraclass correlation coefficients from a consistency two-way random model, when corrected for halving test length using the standard Spearman-Brown formula, equal the tau-equivalent split-half reliability coefficients presented in Table 2 (ρ_ST = 4σ_AB / σ²_X, where σ_AB is the covariance between the two task halves and σ²_X is the variance of the total score). As such, Table 2 coefficients can also be thought of as ICCs.

4

Because of how digit span and Corsi span are calculated, their split-half reliability cannot be computed in the same way as for the other tasks (see Footnote 2). Split-half reliability for these tasks can therefore be thought of as a lower-bound estimate.

Grant S. Shields served as lead for data curation, formal analysis, investigation, methodology, project administration, visualization, and writing–original draft and contributed equally to funding acquisition. Andrew P. Yonelinas served as lead for funding acquisition and resources and served in a supporting role for conceptualization, investigation, methodology, and project administration. Grant S. Shields and Andrew P. Yonelinas contributed equally to writing–review and editing.

The experimental materials are available at https://osf.io/6xpbf/?view_only=57430884e49442b2bdd88d32b6df6e32.

References

  1. Amirian E, Baxter J, Grigsby J, Curran-Everett D, Hokanson JE, & Bryant LL (2010). Executive function (capacity for behavioral self-regulation) and decline predicted mortality in a longitudinal study in Southern Colorado. Journal of Clinical Epidemiology, 63(3), 307–314. 10.1016/j.jclinepi.2009.06.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Arán Filippetti V, Gutierrez M, Krumm G, & Mateos D (2022). Convergent validity, academic correlates and age- and SES-based normative data for the d2 Test of attention in children. Applied Neuropsychology: Child, 11(4), 629–639. 10.1080/21622965.2021.1923494 [DOI] [PubMed] [Google Scholar]
  3. Aron AR, Robbins TW, & Poldrack RA (2014). Inhibition and the right inferior frontal cortex: One decade on. Trends in Cognitive Sciences, 18(4), 177–185. 10.1016/j.tics.2013.12.003 [DOI] [PubMed] [Google Scholar]
  4. Bari A, & Robbins TW (2013). Inhibition and impulsivity: Behavioral and neural basis of response control. Progress in Neurobiology, 108, 44–79. 10.1016/j.pneurobio.2013.06.005 [DOI] [PubMed] [Google Scholar]
  5. Baribault B, Donkin C, Little DR, Trueblood JS, Oravecz Z, Van Ravenzwaaij D, White CN, De Boeck P, & Vandekerckhove J (2018). Metastudies for robust tests of theory. Proceedings of the National Academy of Sciences, 115(11), 2607–2612. 10.1073/pnas.1708285114 [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Baumeister S, Hohmann S, Wolf I, Plichta MM, Rechtsteiner S, Zangl M, Ruf M, Holz N, Boecker R, Meyer-Lindenberg A, Holtmann M, Laucht M, Banaschewski T, & Brandeis D (2014). Sequential inhibitory control processes assessed through simultaneous EEG-fMRI. NeuroImage, 94, 349–359. 10.1016/j.neuroimage.2014.01.023 [DOI] [PubMed] [Google Scholar]
  7. Bender AD, Filmer HL, Garner KG, Naughtin CK, & Dux PE (2016). On the relationship between response selection and response inhibition: An individual differences approach. Attention, Perception, & Psychophysics, 78(8), 2420–2432. 10.3758/s13414-016-1158-8 [DOI] [PubMed] [Google Scholar]
  8. Benitez-López Y, Redolar-Ripoll D, Ruvalcaba-Delgadillo Y, & Jáuregui-Huerta F (2019). Inhibitory control failures and blunted cortisol response to psychosocial stress in amphetamine consumers after 6 months of abstinence. Journal of Research in Medical Sciences, 24(1), Article 20. 10.4103/jrms.JRMS_1148_17 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Blackburn HL, & Benton AL (1957). Revised administration and scoring of the Digit Span Test. Journal of Consulting Psychology, 21(2), 139–143. 10.1037/h0047235 [DOI] [PubMed] [Google Scholar]
  10. Boulinguez P, Jaffard M, Granjon L, & Benraiss A (2008). Warning signals induce automatic EMG activations and proactive volitional inhibition: Evidence from analysis of error distribution in simple RT. Journal of Neurophysiology, 99(3), 1572–1578. 10.1152/jn.01198.2007 [DOI] [PubMed] [Google Scholar]
  11. Brache K, Scialfa C, & Hudson C (2010). Aging and vigilance: Who has the inhibition deficit? Experimental Aging Research, 36(2), 140–152. 10.1080/03610731003613425 [DOI] [PubMed] [Google Scholar]
  12. Breska A, & Deouell LY (2017). Neural mechanisms of rhythm-based temporal prediction: Delta phase-locking reflects temporal predictability but not rhythmic entrainment. PLoS Biology, 15(2), Article e2001665. 10.1371/journal.pbio.2001665 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Brunia CHM (1993). Waiting in readiness: Gating in attention and motor preparation. Psychophysiology, 30(4), 327–339. 10.1111/j.1469-8986.1993.tb02054.x [DOI] [PubMed] [Google Scholar]
  14. Carter L, Russell PN, & Helton WS (2013). Target predictability, sustained attention, and response inhibition. Brain and Cognition, 82(1), 35–42. 10.1016/j.bandc.2013.02.002 [DOI] [PubMed] [Google Scholar]
  15. Chuderski A, Taraday M, Nęcka E, & Smoleń T (2012). Storage capacity explains fluid intelligence but executive control does not. Intelligence, 40(3), 278–295. 10.1016/j.intell.2012.02.010 [DOI] [Google Scholar]
  16. Cieslik EC, Mueller VI, Eickhoff CR, Langner R, & Eickhoff SB (2015). Three key regions for supervisory attentional control: Evidence from neuroimaging meta-analyses. Neuroscience & Biobehavioral Reviews, 48, 22–34. 10.1016/j.neubiorev.2014.11.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Cipolotti L, Spanò B, Healy C, Tudor-Sfetea C, Chan E, White M, Biondo F, Duncan J, Shallice T, & Bozzali M (2016). Inhibition processes are dissociable and lateralized in human prefrontal cortex. Neuropsychologia, 93(Pt. A), 1–12. 10.1016/j.neuropsychologia.2016.09.018 [DOI] [PubMed] [Google Scholar]
  18. Conway ARA, Cowan N, Bunting MF, Therriault DJ, & Minkoff SR (2002). A latent variable analysis of working memory capacity, short-term memory capacity, processing speed, and general fluid intelligence. Intelligence, 30(2), 163–183. 10.1016/S0160-2896(01)00096-4 [DOI] [Google Scholar]
  19. Corsi PM (1972). Human memory and the medial temporal region of the brain. Dissertation Abstracts International, 34(2-B), Article 819B. https://psycnet.apa.org/record/1976-04900-001 [Google Scholar]
  20. Daitch AL, Sharma M, Roland JL, Astafiev SV, Bundy DT, Gaona CM, Snyder AZ, Shulman GL, Leuthardt EC, & Corbetta M (2013). Frequency-specific mechanism links human brain networks for spatial attention. Proceedings of the National Academy of Sciences, 110(48), 19585–19590. 10.1073/pnas.1307947110 [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Dang J (2017). Commentary: The effects of acute stress on core executive functions: A meta-analysis and comparison with cortisol. Frontiers in Psychology, 8, Article 1711. 10.3389/fpsyg.2017.01711 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Diamond A (2013). Executive functions. Annual Review of Psychology, 64(1), 135–168. 10.1146/annurev-psych-113011-143750 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Dierolf AM, Schoofs D, Hessas EM, Falkenstein M, Otto T, Paul M, Suchan B, & Wolf OT (2018). Good to be stressed? Improved response inhibition and error processing after acute stress in young and older men. Neuropsychologia, 119, 434–447. 10.1016/j.neuropsychologia.2018.08.020 [DOI] [PubMed] [Google Scholar]
  24. Dohle S, Diel K, & Hofmann W (2018). Executive functions and the self-regulation of eating behavior: A review. Appetite, 124, 4–9. 10.1016/j.appet.2017.05.041 [DOI] [PubMed] [Google Scholar]
  25. Draheim C, Hicks KL, & Engle RW (2016). Combining reaction time and accuracy: The relationship between working memory capacity and task switching as a case example. Perspectives on Psychological Science, 11(1), 133–155. 10.1177/1745691615596990 [DOI] [PubMed] [Google Scholar]
  26. Draheim C, Mashburn CA, Martin JD, & Engle RW (2019). Reaction time in differential and developmental research: A review and commentary on the problems and alternatives. Psychological Bulletin, 145(5), 508–535. 10.1037/bul0000192 [DOI] [PubMed] [Google Scholar]
  27. Draheim C, Tsukahara JS, Martin JD, Mashburn CA, & Engle RW (2021). A toolbox approach to improving the measurement of attention control. Journal of Experimental Psychology: General, 150(2), 242–275. 10.1037/xge0000783 [DOI] [PubMed] [Google Scholar]
  28. Drazin DH (1961). Effects of foreperiod, foreperiod variability, and probability of stimulus occurrence on simple reaction time. Journal of Experimental Psychology, 62(1), 43–50. 10.1037/h0046860 [DOI] [PubMed] [Google Scholar]
  29. Drewe EA (1975). Go–no go learning after frontal lobe lesions in humans. Cortex, 11(1), 8–16. 10.1016/S0010-9452(75)80015-3 [DOI] [PubMed] [Google Scholar]
  30. Eriksen BA, & Eriksen CW (1974). Effects of noise letters upon the identification of a target letter in a nonsearch task. Perception & Psychophysics, 16(1), 143–149. 10.3758/BF03203267 [DOI] [Google Scholar]
  31. Evans NJ, & Servant M (2020). A comparison of conflict diffusion models in the flanker task through pseudolikelihood Bayes factors. Psychological Review, 127(1), 114–135. 10.1037/rev0000165 [DOI] [PubMed] [Google Scholar]
  32. Farrell S, & Lewandowsky S (2018). Computational modeling of cognition and behavior. Cambridge University Press. 10.1017/9781316272503 [DOI] [Google Scholar]
  33. Friedman NP, & Miyake A (2004). The relations among inhibition and interference control functions: A latent variable analysis. Journal of Experimental Psychology: General, 133(1), 101–135. 10.1037/0096-3445.133.1.101 [DOI] [PubMed] [Google Scholar]
  34. Friedman NP, & Miyake A (2017). Unity and diversity of executive functions: Individual differences as a window on cognitive structure. Cortex, 86, 186–204. 10.1016/j.cortex.2016.04.023 [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Friedman NP, Miyake A, Young SE, DeFries JC, Corley RP, & Hewitt JK (2008). Individual differences in executive functions are almost entirely genetic in origin. Journal of Experimental Psychology: General, 137(2), 201–225. 10.1037/0096-3445.137.2.201 [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Friedman NP, & Robbins TW (2022). The role of prefrontal cortex in cognitive control and executive function. Neuropsychopharmacology, 47(1), 72–89. 10.1038/s41386-021-01132-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Gärtner A, & Strobel A (2021). Individual differences in inhibitory control: A latent variable analysis. Journal of Cognition, 4(1), Article 17. 10.5334/joc.150 [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Geurts HM, van den Bergh SFWM, & Ruzzano L (2014). Prepotent response inhibition and interference control in autism spectrum disorders: Two meta-analyses. Autism Research, 7(4), 407–420. 10.1002/aur.1369 [DOI] [PubMed] [Google Scholar]
  39. Giles GE, Mahoney CR, Urry HL, Brunyé TT, Taylor HA, & Kanarek RB (2015). Omega-3 fatty acids and stress-induced changes to mood and cognition in healthy individuals. Pharmacology Biochemistry and Behavior, 132, 10–19. 10.1016/j.pbb.2015.02.018 [DOI] [PubMed] [Google Scholar]
  40. Grinspun N, Nijs L, Kausel L, Onderdijk K, Sepúlveda N, & Rivera-Hutinel A (2020). Selective attention and inhibitory control of attention are correlated with music audiation. Frontiers in Psychology, 11, Article 1109. 10.3389/fpsyg.2020.01109 [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Head J, & Helton WS (2014). Practice does not make perfect in a modified sustained attention to response task. Experimental Brain Research, 232(2), 565–573. 10.1007/s00221-013-3765-0 [DOI] [PubMed] [Google Scholar]
  42. Hedge C, Powell G, Bompas A, & Sumner P (2022). Strategy and processing speed eclipse individual differences in control ability in conflict tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 48(10), 1448–1469. 10.1037/xlm0001028 [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Henrich J, Heine SJ, & Norenzayan A (2010). Most people are not WEIRD. Nature, 466(7302), Article 29. 10.1038/466029a [DOI] [PubMed] [Google Scholar]
  44. Horn SS, Bayen UJ, & Smith RE (2013). Adult age differences in interference from a prospective-memory task: A diffusion model analysis. Psychonomic Bulletin & Review, 20(6), 1266–1273. 10.3758/s13423-013-0451-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Hung Y, Gaillard SL, Yarmak P, & Arsalidou M (2018). Dissociations of cognitive inhibition, response inhibition, and emotional interference: Voxelwise ALE meta-analyses of fMRI studies. Human Brain Mapping, 39(10), 4065–4082. 10.1002/hbm.24232 [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Inzlicht M, & Gutsell JN (2007). Running on empty: Neural signals for self-control failure. Psychological Science, 18(11), 933–937. 10.1111/j.1467-9280.2007.02004.x [DOI] [PubMed] [Google Scholar]
  47. Jensen AR, & Rohwer WD (1966). The Stroop color-word test: A review. Acta Psychologica, 25, 36–93. 10.1016/0001-6918(66)90004-7 [DOI] [PubMed] [Google Scholar]
  48. Jewsbury PA, Bowden SC, & Strauss ME (2016). Integrating the switching, inhibition, and updating model of executive function with the Cattell–Horn–Carroll model. Journal of Experimental Psychology: General, 145(2), 220–245. 10.1037/xge0000119 [DOI] [PubMed] [Google Scholar]
  49. Jobst LJ, Bader M, & Moshagen M (2023). A tutorial on assessing statistical power and determining sample size for structural equation models. Psychological Methods, 28(1), 207–221. 10.1037/met0000423 [DOI] [PubMed] [Google Scholar]
  50. Johnstone SJ, Barry RJ, Markovska V, Dimoska A, & Clarke AR (2009). Response inhibition and interference control in children with AD/HD: A visual ERP investigation. International Journal of Psychophysiology, 72(2), 145–153. 10.1016/j.ijpsycho.2008.11.007 [DOI] [PubMed] [Google Scholar]
  51. Kane MJ, Meier ME, Smeekens BA, Gross GM, Chun CA, Silvia PJ, & Kwapil TR (2016). Individual differences in the executive control of attention, memory, and thought, and their associations with schizotypy. Journal of Experimental Psychology: General, 145(8), 1017–1048. 10.1037/xge0000184 [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Karlin L (1959). Reaction time as a function of foreperiod duration and variability. Journal of Experimental Psychology, 58(2), 185–191. 10.1037/h0049152 [DOI] [PubMed] [Google Scholar]
  53. Karr JE, Areshenkoff CN, Rast P, Hofer SM, Iverson GL, & Garcia-Barrera MA (2018). The unity and diversity of executive functions: A systematic review and re-analysis of latent variable studies. Psychological Bulletin, 144(11), 1147–1185. 10.1037/bul0000160 [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Karr JE, Hofer SM, Iverson GL, & Garcia-Barrera MA (2019). Examining the latent structure of the delis–Kaplan executive function system. Archives of Clinical Neuropsychology, 34(3), 381–394. 10.1093/arclin/acy043 [DOI] [PubMed] [Google Scholar]
  55. Kinder KT, Buss AT, & Tas AC (2022). Tracking flanker task dynamics: Evidence for continuous attentional selectivity. Journal of Experimental Psychology: Human Perception and Performance, 48(7), 771–781. 10.1037/xhp0001023 [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Klemmer ET (1956). Time uncertainty in simple reaction time. Journal of Experimental Psychology, 51(3), 179–184. 10.1037/h0042317 [DOI] [PubMed] [Google Scholar]
  57. Klemmer ET (1957). Simple reaction time as a function of time uncertainty. Journal of Experimental Psychology, 54(3), 195–200. 10.1037/h0046227 [DOI] [PubMed] [Google Scholar]
  58. Latzman RD, Elkovitch N, Young J, & Clark LA (2010). The contribution of executive functioning to academic achievement among male adolescents. Journal of Clinical and Experimental Neuropsychology, 32(5), 455–462. 10.1080/13803390903164363 [DOI] [PubMed] [Google Scholar]
  59. Löffler C, Frischkorn GT, Hagemann D, Sadus K, & Schubert A (2023). The common factor of executive functions measures nothing but speed of information uptake. PsyArxiv. https://psyarxiv.com/xvdyz/download?format=pdf [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Lorenc ES, Mallett R, & Lewis-Peacock JA (2021). Distraction in visual working memory: Resistance is not futile. Trends in Cognitive Sciences, 25(3), 228–239. 10.1016/j.tics.2020.12.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Manly T, Robertson IH, Galloway M, & Hawkins K (1999). The absent mind: Further investigations of sustained attention to response. Neuropsychologia, 37(6), 661–670. 10.1016/S0028-3932(98)00127-4 [DOI] [PubMed] [Google Scholar]
  62. McCullough AM, Ritchey M, Ranganath C, & Yonelinas AP (2015). Differential effects of stress-induced cortisol responses on recollection and familiarity-based recognition memory. Neurobiology of Learning and Memory, 123, 1–10. 10.1016/j.nlm.2015.04.007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. McVay JC, & Kane MJ (2012). Why does working memory capacity predict variation in reading comprehension? On the influence of mind wandering and executive attention. Journal of Experimental Psychology: General, 141(2), 302–320. 10.1037/a0025250 [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Miyake A, & Friedman NP (2012). The nature and organization of individual differences in executive functions: Four general conclusions. Current Directions in Psychological Science, 21(1), 8–14. 10.1177/0963721411429458 [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Miyake A, Friedman NP, Emerson MJ, Witzki AH, Howerter A, & Wager TD (2000). The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive Psychology, 41(1), 49–100. 10.1006/cogp.1999.0734 [DOI] [PubMed] [Google Scholar]
  66. Moshagen M, & Erdfelder E (2016). A new strategy for testing structural equation models. Structural Equation Modeling, 23(1), 54–60. 10.1080/10705511.2014.950896 [DOI] [Google Scholar]
  67. Mueller ST, & Piper BJ (2014). The psychology experiment building language (PEBL) and PEBL test battery. Journal of Neuroscience Methods, 222, 250–259. 10.1016/j.jneumeth.2013.10.024 [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Munakata Y, Herd SA, Chatham CH, Depue BE, Banich MT, & O’Reilly RC (2011). A unified framework for inhibitory control. Trends in Cognitive Sciences, 15(10), 453–459. 10.1016/j.tics.2011.07.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Narayanan NS, Horst NK, & Laubach M (2006). Reversible inactivations of rat medial prefrontal cortex impair the ability to wait for a stimulus. Neuroscience, 139(3), 865–876. 10.1016/j.neuroscience.2005.11.072 [DOI] [PubMed] [Google Scholar]
  70. Nee DE, Wager TD, & Jonides J (2007). Interference resolution: Insights from a meta-analysis of neuroimaging tasks. Cognitive, Affective, & Behavioral Neuroscience, 7(1), 1–17. 10.3758/CABN.7.1.1 [DOI] [PubMed] [Google Scholar]
  71. Nigg JT (2000). On inhibition/disinhibition in developmental psychopathology: Views from cognitive and personality psychology and a working inhibition taxonomy. Psychological Bulletin, 126(2), 220–246. 10.1037/0033-2909.126.2.220 [DOI] [PubMed] [Google Scholar]
  72. Pettigrew C, & Martin RC (2014). Cognitive declines in healthy aging: Evidence from multiple aspects of interference resolution. Psychology and Aging, 29(2), 187–204. 10.1037/a0036085 [DOI] [PubMed] [Google Scholar]
  73. Redick TS, Shipstead Z, Meier ME, Montroy JJ, Hicks KL, Unsworth N, Kane MJ, Hambrick DZ, & Engle RW (2016). Cognitive predictors of a common multitasking ability: Contributions from working memory, attention control, and fluid intelligence. Journal of Experimental Psychology: General, 145(11), 1473–1492. 10.1037/xge0000219 [DOI] [PubMed] [Google Scholar]
  74. Rey-Mermet A, Gade M, & Oberauer K (2018). Should we stop thinking about inhibition? Searching for individual and age differences in inhibition ability. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(4), 501–526. 10.1037/xlm0000450 [DOI] [PubMed] [Google Scholar]
  75. Rey-Mermet A, Gade M, Souza AS, von Bastian CC, & Oberauer K (2019). Is executive control related to working memory capacity and fluid intelligence? Journal of Experimental Psychology: General, 148(8), 1335–1372. 10.1037/xge0000593 [DOI] [PubMed] [Google Scholar]
  76. Rey-Mermet A, Singh KA, Gignac GE, Brydges CR, & Ecker UKH (2020). Interference control in working memory: Evidence for discriminant validity between removal and inhibition tasks. PLOS ONE, 15(12), Article e0243053. 10.1371/journal.pone.0243053 [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Ridderinkhof KR, Band GPH, & Logan GD (1999). A study of adaptive behavior: Effects of age and irrelevant information on the ability to inhibit one’s actions. Acta Psychologica, 101(2–3), 315–337. 10.1016/S0001-6918(99)00010-4 [DOI] [Google Scholar]
  78. Robertson IH, Manly T, Andrade J, Baddeley BT, & Yiend J (1997). “Oops!”: Performance correlates of everyday attentional failures in traumatic brain injured and normal subjects. Neuropsychologia, 35(6), 747–758. 10.1016/S0028-3932(97)00015-8 [DOI] [PubMed] [Google Scholar]
  79. Robison MK, & Brewer GA (2022). Individual differences in working memory capacity, attention control, fluid intelligence, and pupillary measures of arousal. Journal of Experimental Psychology: Learning, Memory, and Cognition, 48(9), 1296–1310. 10.1037/xlm0001125 [DOI] [PubMed] [Google Scholar]
  80. Roos LE, Knight EL, Beauchamp KG, Giuliano RJ, Fisher PA, & Berkman ET (2017). Conceptual precision is key in acute stress research: A commentary on Shields, Sazma, & Yonelinas, 2016. Neuroscience & Biobehavioral Reviews, 83, 140–144. 10.1016/J.NEUBIOREV.2017.10.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Rossa KR, Smith SS, Allan AC, & Sullivan KA (2014). The effects of sleep restriction on executive inhibitory control and affect in young adults. Journal of Adolescent Health, 55(2), 287–292. 10.1016/j.jadohealth.2013.12.034 [DOI] [PubMed] [Google Scholar]
  82. Rouder JN, Chávez De la Peña AF, Pratte MS, Richards VM, Hernan MC, Pascoe MM, & Thapar A (2022). Is the antisaccade task a unicorn task for measuring cognitive control? OSF Preprint. https://osf.io/fhg3n/ [Google Scholar]
  83. Sazma MA, McCullough AM, Shields GS, & Yonelinas AP (2019). Using acute stress to improve episodic memory: The critical role of contextual binding. Neurobiology of Learning and Memory, 158, 1–8. 10.1016/j.nlm.2019.01.001 [DOI] [PubMed] [Google Scholar]
  84. Schiff ND, Shah SA, Hudson AE, Nauvel T, Kalik SF, & Purpura KP (2013). Gating of attentional effort through the central thalamus. Journal of Neurophysiology, 109(4), 1152–1163. 10.1152/jn.00317.2011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Sebastian A, Baldermann C, Feige B, Katzev M, Scheller E, Hellwig B, Lieb K, Weiller C, Tüscher O, & Klöppel S (2013). Differential effects of age on subcomponents of response inhibition. Neurobiology of Aging, 34(9), 2183–2193. 10.1016/j.neurobiolaging.2013.03.013 [DOI] [PubMed] [Google Scholar]
  86. Seli P (2016). The attention-lapse and motor decoupling accounts of SART performance are not mutually exclusive. Consciousness and Cognition, 41, 189–198. 10.1016/j.concog.2016.02.017 [DOI] [PubMed] [Google Scholar]
  87. Sharp DJ, Bonnelle V, De Boissezon X, Beckmann CF, James SG, Patel MC, & Mehta MA (2010). Distinct frontal systems for response inhibition, attentional capture, and error processing. Proceedings of the National Academy of Sciences of the United States of America, 107(13), 6106–6111. 10.1073/pnas.1000175107 [DOI] [PMC free article] [PubMed] [Google Scholar]
  88. Shields GS (2017). Response: Commentary: The effects of acute stress on core executive functions: A meta-analysis and comparison with cortisol. Frontiers in Psychology, 8, Article 2090. 10.3389/fpsyg.2017.02090 [DOI] [PMC free article] [PubMed] [Google Scholar]
  89. Shields GS, Bonner JC, & Moons WG (2015). Does cortisol influence core executive functions? A meta-analysis of acute cortisol administration effects on working memory, inhibition, and set-shifting. Psychoneuroendocrinology, 58, 91–103. 10.1016/j.psyneuen.2015.04.017 [DOI] [PubMed] [Google Scholar]
  90. Shields GS, Dunn TM, Trainor BC, & Yonelinas AP (2019). Determining the biological associates of acute cold pressor post-encoding stress effects on human memory: The role of salivary interleukin-1β. Brain, Behavior, and Immunity, 81, 178–187. 10.1016/j.bbi.2019.06.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Shields GS, Kuchenbecker SY, Pressman SD, Sumida KD, & Slavich GM (2016). Better cognitive control of emotional information is associated with reduced pro-inflammatory cytokine reactivity to emotional stress. Stress, 19(1), 63–68. 10.3109/10253890.2015.1121983 [DOI] [PMC free article] [PubMed] [Google Scholar]
  92. Shields GS, McCullough AM, Ritchey M, Ranganath C, & Yonelinas AP (2019). Stress and the medial temporal lobe at rest: Functional connectivity is associated with both memory and cortisol. Psychoneuroendocrinology, 106, 138–146. 10.1016/j.psyneuen.2019.04.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Shields GS, Rivers AM, Ramey MM, Trainor BC, & Yonelinas AP (2019). Mild acute stress improves response speed without impairing accuracy or interference control in two selective attention tasks: Implications for theories of stress and cognition. Psychoneuroendocrinology, 108, 78–86. 10.1016/j.psyneuen.2019.06.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  94. Shields GS, Sazma MA, & Yonelinas AP (2016). The effects of acute stress on core executive functions: A meta-analysis and comparison with effects of cortisol. Neuroscience & Biobehavioral Reviews, 68, 651–668. 10.1016/j.neubiorev.2016.06.038 [DOI] [PMC free article] [PubMed] [Google Scholar]
  95. Shields GS, & Yonelinas AP (2018). Balancing precision with inclusivity in meta-analyses: A response to Roos and colleagues (2017). Neuroscience & Biobehavioral Reviews, 84, 193–197. 10.1016/j.neubiorev.2017.11.013 [DOI] [PubMed] [Google Scholar]
  96. Smith NJ, Horst NK, Liu B, Caetano MS, & Laubach M (2010). Reversible inactivation of rat premotor cortex impairs temporal preparation, but not inhibitory control, during simple reaction-time performance. Frontiers in Integrative Neuroscience, 4, Article 124. 10.3389/fnint.2010.00124 [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Stahl C, Voss A, Schmitz F, Nuszbaum M, Tüscher O, Lieb K, & Klauer KC (2014). Behavioral components of impulsivity. Journal of Experimental Psychology: General, 143(2), 850–886. 10.1037/a0033981 [DOI] [PubMed] [Google Scholar]
  98. Steinborn MB, Langner R, Flehmig HC, & Huestegge L (2018). Methodology of performance scoring in the d2 sustained-attention test: Cumulative-reliability functions and practical guidelines. Psychological Assessment, 30(3), 339–357. 10.1037/pas0000482 [DOI] [PubMed] [Google Scholar]
  99. Stevenson H, Russell PN, & Helton WS (2011). Search asymmetry, sustained attention, and response inhibition. Brain and Cognition, 77(2), 215–222. 10.1016/j.bandc.2011.08.007 [DOI] [PubMed] [Google Scholar]
  100. Tendolkar I, Ruhrmann S, Brockhaus-Dumke A, Pauli M, Mueller R, Pukrop R, & Klosterkötter J (2005). Neural correlates of visuo-spatial attention during an antisaccade task in schizophrenia: An ERP study. International Journal of Neuroscience, 115(5), 681–698. 10.1080/00207450590887475 [DOI] [PubMed] [Google Scholar]
  101. Testa R, Bennett P, & Ponsford J (2012). Factor analysis of nineteen executive function tests in a healthy adult population. Archives of Clinical Neuropsychology, 27(2), 213–224. 10.1093/arclin/acr112 [DOI] [PubMed] [Google Scholar]
  102. This Project. (2024). OSF storage page for this project. OSF. https://osf.io/6xpbf/?view_only=57430884e49442b2bdd88d32b6df6e32 [Google Scholar]
  103. Thomaschke R, & Haering C (2014). Predictivity of system delays shortens human response time. International Journal of Human-Computer Studies, 72(3), 358–365. 10.1016/j.ijhcs.2013.12.004 [DOI] [Google Scholar]
  104. Tiego J, Testa R, Bellgrove MA, Pantelis C, & Whittle S (2018). A hierarchical model of inhibitory control. Frontiers in Psychology, 9, Article 391079. 10.3389/fpsyg.2018.01339 [DOI] [PMC free article] [PubMed] [Google Scholar]
  105. Tsukahara JS, Harrison TL, Draheim C, Martin JD, & Engle RW (2020). Attention control: The missing link between sensory discrimination and intelligence. Attention, Perception, & Psychophysics, 82(7), 3445–3478. 10.3758/s13414-020-02044-9 [DOI] [PubMed] [Google Scholar]
  106. Ulrich R, Schröter H, Leuthold H, & Birngruber T (2015). Automatic and controlled stimulus processing in conflict tasks: Superimposed diffusion processes and delta functions. Cognitive Psychology, 78, 148–174. 10.1016/j.cogpsych.2015.02.005 [DOI] [PubMed] [Google Scholar]
  107. Unsworth N, & McMillan BD (2014). Similarities and differences between mind-wandering and external distraction: A latent variable analysis of lapses of attention and their relation to cognitive abilities. Acta Psychologica, 150, 14–25. 10.1016/j.actpsy.2014.04.001 [DOI] [PubMed] [Google Scholar]
  108. Unsworth N, Miller AL, & Robison MK (2020). Are individual differences in attention control related to working memory capacity? A latent variable mega-analysis. Journal of Experimental Psychology: General, 150(7), 1332–1357. 10.1037/xge0001000 [DOI] [PubMed] [Google Scholar]
  109. Unsworth N, Redick TS, Lakey CE, & Young DL (2010). Lapses in sustained attention and their relation to executive control and fluid abilities: An individual differences investigation. Intelligence, 38(1), 111–122. 10.1016/j.intell.2009.08.002 [DOI] [Google Scholar]
  110. Unsworth N, Robison MK, & Miller AL (2021). Individual differences in lapses of attention: A latent variable analysis. Journal of Experimental Psychology: General, 150(7), 1303–1331. 10.1037/xge0000998 [DOI] [PubMed] [Google Scholar]
  111. Verbruggen F, Chambers CD, & Logan GD (2013). Fictitious inhibitory differences: How skewness and slowing distort the estimation of stopping latencies. Psychological Science, 24(3), 352–362. 10.1177/0956797612457390 [DOI] [PMC free article] [PubMed] [Google Scholar]
  112. von Bastian CC, Blais C, Brewer GA, Gyurkovics M, Hedge C, Kałamała P, Meier ME, Oberauer K, Rey-Mermet A, Rouder JN, Souza AS, Bartsch LM, Conway ARA, Draheim C, Engle RW, Friedman NP, Frischkorn GT, Gustavson DE, Koch I, ..., Weimers EA (2020). Advancing the understanding of individual differences in attentional control: Theoretical, methodological, and analytical considerations. PsyArXiv. 10.31234/osf.io/x3b9k [DOI] [Google Scholar]
  113. Wessel JR (2018). Prepotent motor activity and inhibitory control demands in different variants of the go/no-go paradigm. Psychophysiology, 55(3), Article e12871. 10.1111/psyp.12871 [DOI] [PubMed] [Google Scholar]
  114. Yang Y, Shields GS, Wu Q, Liu Y, Chen H, & Guo C (2019). Cognitive training on eating behaviour and weight loss: A meta-analysis and systematic review. Obesity Reviews, 20(11), 1628–1641. 10.1111/obr.12916 [DOI] [PubMed] [Google Scholar]
