Abstract
Brain–computer interfaces (BCIs) can convert mental states into signals to drive real-world devices, but it is not known whether a given covert task is the same when performed with and without BCI-based control. Using a BCI likely involves additional cognitive processes, such as multitasking, attention, and conflict monitoring. In addition, it is challenging to measure the quality of covert task performance. We used whole-brain classifier-based real-time functional MRI to address these issues, because the method provides both classifier-based maps to examine the neural requirements of BCI and classification accuracy to quantify the quality of task performance. Subjects performed a covert counting task at fast and slow rates to control a visual interface. Compared with the same task when viewing but not controlling the interface, we observed that being in control of a BCI improved task classification of fast and slow counting states. BCI control also increased subjects’ whole-brain signal-to-noise ratio compared with the absence of control. The neural pattern for control consisted of a positive network comprising dorsal parietal and frontal regions and the anterior insula of the right hemisphere as well as an expansive negative network of regions. These findings suggest that real-time functional MRI can serve as a platform for exploring information processing and frontoparietal and insula network-based regulation of whole-brain task signal-to-noise ratio.
Keywords: support vector machine, speech motor imagery, neurofeedback, multi-voxel pattern analysis
Although overt actions allow us to interact directly with our environment, much of our brain’s activity is devoted to covert acts, such as motor planning, rehearsal, and self-directed thought. Indeed, we enjoy a rich inner experience consisting of activities such as visual imagery, inner language, somatosensory awareness, recollection of the past, and planning for the future (1). By definition, covert actions are usually neither observable by a third party nor capable of directly affecting the outside world. Brain–computer interfaces (BCIs), however, provide a technological means for converting thought into action by transducing brain measurements into control signals for devices, such as robots and computer displays. One strategy for BCI control is to provide the subject with task commands, such as “move the cursor to the right by imagining that you are moving your right hand.”
In effect, BCI control creates a synthetic link between covert action and sensory feedback. The additional cognitive requirements and consequences of BCI control, however, are unknown, and several interrelated cognitive processes could play a role. Examples include multitasking for dual-task performance (2–5), conflict and outcome monitoring (6, 7), attention (8), reward monitoring (9, 10), and learning and conditioning (11, 12). If present, each of these cognitive processes should have specific neural signatures (prefrontal cortex, anterior cingulate, frontoparietal networks, ventral striatum, etc.).
An additional consideration is the performance of the task itself. Evaluating how well a subject is executing a covert task is challenging. Moreover, it is important to evaluate whether the BCI is an aid or serves as a distracter and if task-based brain activity differs between BCI control (C) and no control (noC) conditions. A nascent technology that holds promise for addressing the effects of BCI is real-time functional MRI (rtfMRI). In particular, classification-based rtfMRI (13, 14) can quantify the ability to predict task states for both C and noC conditions and simultaneously examine neural differences that arise with BCI control. Thus, this study was designed to investigate the neural underpinnings of BCI control and evaluate its impact on task classification accuracy.
Results
A familiar example of human covert behavior is inner speech. We scanned 24 healthy subjects who performed blocks of fast and slow covert counting. Such tasks hold particular promise for therapeutic applications of rtfMRI, because mental imagery may be a back door to the motor system for rehabilitation that lessens patient frustration and fatigue (15, 16). For this task, speech rate is a crucial aspect of communication that affects intelligibility. It is such an important factor that diadochokinetic rate (where a subject is asked to produce syllables as rapidly as possible) is almost universally used to evaluate oral motor skill and differentially diagnose pathology. Furthermore, speech rate represents a target for therapy in dysarthria (17), has been studied in normal populations with neuroimaging (18–20), and has known neurological vulnerabilities in development, traumatic brain injury, and stroke.
Fig. 1A shows examples of the display presented during the fMRI experiments. The two categories of experiments (C and noC) are shown in Fig. 1B. Subjects performed two runs of both conditions, and the order was randomized. At the beginning of each of four runs, both the experimenter’s spoken instructions as well as the text written on the display informed the subjects that they would or would not be controlling the computer interface.
Fig. 1.
rtfMRI of fast vs. slow covert speech. (A) Subjects performed covert counting at fast and slow rates. A rest condition followed each counting block and occasionally was preceded by a number? cue, where subjects reported aloud their last counted number. (B) Subjects controlled needle movements in two of four fMRI runs (classifier-based rtfMRI decoded fast or slow counting to update the display). In the other two runs, the needle position simply increased at a fixed rate for the duration of the block.
Support Vector Machine Classification of Fast vs. Slow Covert Counting.
We used support vector machine (SVM) -based whole-brain classification to control the stimulus display, with noC runs serving as training data for C runs (SI Materials and Methods). Additional SVM analysis was performed offline to do cross-validation and compare the combinations of training and testing with C and noC data. The classification accuracies from this experiment establish the degree to which covert speech rate can be used to modulate feedback interfaces. The challenge for both real-time and subsequent offline classification was to distinguish between fast and slow rates of covert counting. Because experimental parameters, such as the covert nature of both tasks, the visual display, and trial durations, were constant, it was an open question as to whether a classifier could differentially detect fast from slow within a presumably spatially overlapping covert counting brain network. Because each subject participated in four runs, offline analysis used 12 possible train–test permutations. Fig. S1A shows that all train–test permutations resulted in above-chance classification accuracies. However, significant heterogeneity of the results indicates that interface control did influence classification accuracy for this experiment. One important observation is that the type of run mattered more than the time order of the run. Specifically, although early runs could not be classified more accurately than late runs, the feedback-controlled runs led to higher classification accuracy than nonfeedback-controlled runs (Fig. S1 and SI Results). Furthermore, Fig. 2A shows the influence of training and testing a classifier with the two different types of runs averaged across all subjects. In the resulting four possible combinations, we observed a significant increase in classification accuracy for subject-controlled runs. The case of training and testing on runs where the subject was not in control (noC/noC) is analogous to a multivoxel analysis of a standard offline version of the task. The noC/C data correspond to our standard approach to rtfMRI experiments. We have noted in preliminary studies that noC/C tends to be higher than the reverse (C/noC) (13, 21). A measurement that we were able to make with this experimental design was the impact of both training and testing with C data. Compared with the other combinations, this case (C/C) showed a statistically significant increase in classification accuracy (P = 0.003). Taken together, these results show that neurofeedback-based interface control significantly improved classification accuracy for this experiment.
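To make the offline procedure concrete, the following is a minimal sketch of the cross-run train-and-test loop using scikit-learn's LinearSVC in place of AFNI's 3dsvm, which was the tool actually used (Materials and Methods); the data-loading step, array shapes, and variable names are assumptions for illustration.

```python
# Sketch of the offline cross-run analysis (the study used AFNI's 3dsvm;
# scikit-learn is substituted here for illustration). `runs` is assumed to be
# a list of four (n_volumes x n_voxels) arrays and `labels` the matching
# fast/slow label vectors, with rest volumes already excluded.
from itertools import permutations
import numpy as np
from sklearn.svm import LinearSVC

def cross_run_accuracies(runs, labels, run_types):
    """Train on one run, test on another, for all 12 ordered run pairs.

    run_types marks each run as 'C' (subject in control) or 'noC', so
    accuracies can be pooled into the noC/noC, noC/C, C/noC, and C/C cells.
    """
    results = {}
    for train_idx, test_idx in permutations(range(len(runs)), 2):
        clf = LinearSVC(C=1.0)                      # linear SVM, as in the paper
        clf.fit(runs[train_idx], labels[train_idx])
        acc = clf.score(runs[test_idx], labels[test_idx]) * 100.0
        cell = f"{run_types[train_idx]}/{run_types[test_idx]}"
        results.setdefault(cell, []).append(acc)
    # Average the permutations that fall into each train/test combination
    return {cell: np.mean(accs) for cell, accs in results.items()}
```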
Fig. 2.
SVM classification of fast vs. slow covert counting. (A) Classifier training and testing combinations for C and noC showed a statistically significant increase in accuracy when a C run was used to train an SVM that was then applied to classify another C run (*P = 0.003, two-tailed). (B) Regions for SVM of fast vs. slow covert counting that were used to control the rtfMRI computer interface, thresholded at P < 0.05 [false discovery rate (FDR) corrected]. L, left; R, right.
The analysis also produced SVM maps of fast vs. slow speech (Fig. 2B and Table S1). Fig. S2 and Table S2 show SVM and general linear model analyses of speech (fast and slow vs. rest). The regions in Fig. 2B match previously published speech studies. Notably, fast > slow regions corresponded to the speech production network described in the work by Sörös et al. (22), including medial structures such as supplementary and cingulate motor areas, thalamus, red nucleus, cerebellum, and dorsal brainstem as well as lateral primary motor cortex and superior temporal gyrus. The cerebellar contribution agrees with previous reports (18, 19). The covert speech maps (Fig. S2) include all of the main effects reported by the study by Riecker et al. (19), which examined six rates of syllable production, including both superior and inferior cerebellum. Although both superior and inferior cerebellum were present in the covert speech vs. baseline, the inferior cerebellum was absent in the fast vs. slow map. The study by Riecker et al. (19) assigned superior cerebellum to a preparative loop that coordinates with supplementary motor area, anterior insula, and dorsolateral frontal cortex, whereas the inferior cerebellum was in the executive loop with motor cortex, thalamus, putamen, and caudate. One explanation for our results is that the executive load at the level of the inferior cerebellum was relatively matched between the slow and fast conditions and did not contribute to discriminating between the two rates. Beyond speech, the areas in Fig. 2B suggest a general relationship to motor coordination. For example, Rao et al. (23) asked subjects to synchronize finger tapping with an auditory metronome at 3.3 and 1.7 Hz. In a continuation phase, subjects maintained the same rate without the auditory cue. This experimental detail is similar to the task here; subjects practiced two different rates before internally pacing themselves at those targets. Our results show a network that is highly similar to the continuation condition used by Rao et al. (23) that included cerebellum, supplementary motor area, putamen, thalamus, and superior temporal gyrus. Rao et al. (23) concluded that the medial premotor network was critical to the internal representation of movement timing, which seems to generalize here to speech imagery. Based on observed superior temporal and inferior frontal activation, the study also suggested that subjects used an internal auditory representation of timing. It is likely that our subjects used a similar strategy, including phonological processing (22).
BCI Control Increases Whole-Brain Signal-to-Noise Ratio for the Covert Task.
Our results in Fig. 2A and Fig. S1 show that the presence of neurofeedback changes the classification accuracy of fast vs. slow covert counting. We therefore questioned whether neural differences would also be detectable between C and noC at the group level. By combining both fast and slow blocks and contrasting C vs. noC runs, we found that subjects’ control consisted of a positive network of frontoparietal regions and the anterior insula of the right hemisphere and an extensive negative network in areas that have been implicated in the default mode network, such as posterior cingulate and precuneus (24), as well as areas overlapping with covert speech vs. rest (Fig. S2 and Table S2), such as supplementary motor area, right Brodmann area (BA) 4, and right BA 13 (Fig. 3 and Table 1). Right anterior insula has been implicated in interoceptive awareness (25) as well as engagement of attention and disengagement of task irrelevant systems (26), whereas right frontoparietal areas are thought to control attention (27). Frontoparietal circuitry is critical in a variety of relevant contexts, such as control of action as well as object manipulation and tool use (28). In particular, the right frontoparietal circuit (Fig. 3 and Table 1) matches early imaging reports of attention to sensory input (29–32).
Fig. 3.
Regions for rtfMRI neurofeedback-based control of a computer interface using fast vs. slow covert counting (P < 0.05, FDR corrected).
Table 1.
Regions for rtfMRI-based stimulus control
| Region* | BA | x (mm) | y (mm) | z (mm) | t |
| --- | --- | --- | --- | --- | --- |
| C > noC | | | | | |
| R anterior insula | | 36 | 20 | 3 | 4.8 |
| R middle frontal | 6 | 48 | 2 | 38 | 3.9 |
| R inferior parietal | 40 | 55 | −38 | 41 | 3.9 |
| C < noC | | | | | |
| Posterior cingulate | 30 | −5 | −60 | 8 | −6.1 |
| R insula | 13 | 43 | −14 | 13 | −5.5 |
| Anterior cingulate | 32 | −2 | 45 | 7 | −4.1 |
| Supplementary motor area | 6 | 10 | −24 | 51 | −3.4 |
| L middle frontal | 8 | −21 | 27 | 39 | −5.0 |
| R precentral | 4 | 36 | −24 | 49 | −3.8 |
| L precuneus | 7 | −15 | −43 | 56 | −3.9 |

L, left; R, right.
*Foci are in Talairach coordinates and reflect the peak t value for regions with P < 0.05 (FDR corrected) and an extent threshold of >20 voxels.
The general framework of linear classification of the form $\mathbf{y} = \mathbf{X}\mathbf{w} + b$ (where $\mathbf{X}$ is the time series of fMRI images with $V$ voxels) is that the weight vector, $\mathbf{w}$, specifies the signal direction for classification in the multivariate image space. Note that, because $\mathbf{w}$ is also $V$-dimensional, it can be displayed on the brain as an SVM brain map. Strother et al. (33) showed that the correlation between two such maps is proportional to a task’s signal-to-noise ratio (SNR). To illustrate, taking the correlation $r_C$ of the maps for a subject’s two C runs leads to a 2 × 2 correlation matrix that can be decomposed through singular value decomposition:

$$\begin{bmatrix} 1 & r_C \\ r_C & 1 \end{bmatrix} = \mathbf{U}\,\mathrm{diag}(1 + r_C,\; 1 - r_C)\,\mathbf{U}^{\top}.$$

In other words, we obtain two eigenvectors. The first eigenvector is the task’s signal direction, and the second eigenvector is orthogonal to the first eigenvector. With noise distributed equally in both eigenvectors, SNR can be defined as the ratio of the signal and noise variances: $\mathrm{SNR}_C = 2 r_C / (1 - r_C)$. For the noC condition, SNR can be estimated through the same process: $\mathrm{SNR}_{noC} = 2 r_{noC} / (1 - r_{noC})$. The use of both classification accuracy and a weight vector’s spatial correlation has been shown for evaluating the quality of multivoxel pattern analysis models, but previous reports have not shown a clear relationship between SNR and classification accuracy. For example, published results show that high accuracy can occur even for low SNR (33, 34). Our data, however, led to a strong relationship for the difference between C and noC conditions. Defining the average classification accuracies of these two types of runs as $A_C$ and $A_{noC}$, Fig. 4 shows $A_C - A_{noC}$ vs. $\mathrm{SNR}_C - \mathrm{SNR}_{noC}$, which resulted in a statistically significant linear relationship. Thus, the tendency for higher classification accuracies when subjects are in control of the feedback interface (Fig. 2A) corresponds to a proportional increase in whole-brain SNR for the fast vs. slow covert counting task. Relationships between SNR and the frontoparietal–insula regions are reported in SI Results and Fig. S3.
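The following is a minimal sketch of this SNR estimate, written to follow the definition above (signal and noise variances taken from the eigenvalues of the 2 × 2 map-correlation matrix); the function and variable names are illustrative rather than the authors' code.

```python
import numpy as np

def global_snr(w1, w2):
    """Estimate whole-brain task SNR from two SVM weight maps of the same
    condition (e.g., a subject's two C runs), following the eigendecomposition
    of their 2x2 correlation matrix described in the text."""
    r = np.corrcoef(w1, w2)[0, 1]          # spatial correlation of the two maps
    # Eigenvalues of [[1, r], [r, 1]] are (1 + r) and (1 - r); with noise split
    # equally across the two eigenvectors, signal variance is 2r and noise
    # variance is (1 - r).
    return 2.0 * r / (1.0 - r)

# Relationship plotted in Fig. 4 (illustrative variable names):
# delta_acc = acc_C - acc_noC for each subject
# delta_snr = global_snr(wC1, wC2) - global_snr(wnoC1, wnoC2)
# slope, intercept = np.polyfit(delta_snr, delta_acc, 1)
```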
Fig. 4.
Approximately 52% of the variance in the change in classification accuracy, $A_C - A_{noC}$, is explained by the corresponding change in global SNR, $\mathrm{SNR}_C - \mathrm{SNR}_{noC}$, of the weight vectors $\mathbf{w}$.
To summarize our experimental observations, the C condition increased classifier accuracy (Fig. 2A and Fig. S1) and co-occurred with both a right frontoparietal and anterior insula network (Fig. 3) and an increase in task SNR (Fig. 4). To further evaluate the plausibility of SNR as a mechanism for changing classifier accuracy, we performed simulations of this effect. Specifically, we embedded a task signal as a pattern in a large noise vector (SI Materials and Methods). The goal was to confirm more generally that differences in SNR for different runs can give rise to the experimental results shown in Fig. 2A. The impact of SNR on the classification accuracies for the four SVM train/test scenarios (noC/noC, noC/C, C/noC, and C/C) was then estimated. Although only two examples are shown in Fig. 5, we have found that the relative ranking of classification accuracies (especially the improvement in C/C classification accuracy over the other three conditions) is robust across a wide range of simulation parameters.
Fig. 5.
Classification accuracies for simulations of SNR for BCI C and noC. (A) SNR was lowered for noC runs. (B) SNR was raised for C runs. SI Materials and Methods details the simulation. These examples illustrate that relative differences in SNR between different types of fMRI runs can lead to the classification accuracy results shown in Fig. 2A. Note that the only fMRI property modeled was the approximate ratio of voxels to observations (1,000:1); the number of simulation trials matched the number of subjects in this study (n = 24). Error bars show SEM across the 24 trials.
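A minimal sketch of this kind of simulation is shown below. It follows the description in the text (a task pattern embedded in a large noise vector, an approximate 1,000:1 ratio of voxels to observations, and 24 trials), but the signal amplitudes, run length, and use of scikit-learn's LinearSVC are assumptions, not the parameters given in SI Materials and Methods.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_voxels, n_obs, n_trials = 20000, 20, 24      # ~1,000:1 voxels to observations
pattern = rng.standard_normal(n_voxels)        # fixed fast-vs-slow task pattern

def simulate_run(snr_scale):
    """One run: each observation is +/- the task pattern plus Gaussian noise."""
    y = np.repeat([1, -1], n_obs // 2)
    X = np.outer(y, pattern) * snr_scale + rng.standard_normal((n_obs, n_voxels))
    return X, y

def accuracy(train, test):
    Xtr, ytr = train
    Xte, yte = test
    return LinearSVC(C=1.0).fit(Xtr, ytr).score(Xte, yte)

# Assumed SNR scales: C runs carry a stronger embedded signal than noC runs.
scales = {"noC": 0.02, "C": 0.04}
cells = {k: [] for k in ["noC/noC", "noC/C", "C/noC", "C/C"]}
for _ in range(n_trials):
    # Two independent runs of each type, so training and testing never share a run.
    runs = {k: (simulate_run(s), simulate_run(s)) for k, s in scales.items()}
    for tr in scales:
        for te in scales:
            cells[f"{tr}/{te}"].append(accuracy(runs[tr][0], runs[te][1]))
print({k: np.mean(v) for k, v in cells.items()})
```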
Discussion
This study examined whether differences exist in fMRI measurements with and without neurofeedback-based control of a computer interface. If the C and noC conditions were equivalent for the fast and slow covert counting tasks, they should have generated data that led to equivalent classification accuracy estimates and SVM models. Our results, however, show that the act of controlling an interface leads to increased classification accuracy. Furthermore, we showed that this increased accuracy in the C condition corresponded to increases in whole-brain SNR. Neurally, C vs. noC coincided with a positive frontoparietal and insula network as well as a broadly distributed negative network of regions, which are shown in Fig. 3 and Table 1.
Taken together, these results suggest that BCI control enhances attention to fMRI tasks, reduces extraneous processing to improve whole-brain task SNR, and leads to increased multivoxel classification. Although we favor this conclusion, it is important to take two factors into consideration. First, our experimental design was motivated by a need to control for the sequential order of experimental runs as a potential confound for the C vs. noC effects. One remaining limitation of this study, however, is that it is conceivable that our results are partially caused by uncontrolled effects stemming from the subjects’ belief of being in control of the interface. Future experiments that provide insight into the interaction between belief in control and the actual state of being in control would complement a number of research areas (e.g., perceptual learning) (35, 36) that are actively examining the roles of feedback, fake feedback, and reinforcement. Second, the role of the frontal, parietal, and anterior insula regions is correlative and may additionally include areas that we have not detected. Furthermore, these regions engage in a wide variety of cognitive functions, and thus, we have inferred attention here merely as a hypothesis for future experiments (37).
The quantification of SNR relies on a definition of signal. In many communication contexts, the transmitted signal is known, even if the receiver is challenged by channel noise and other extraneous signals that share overlapping bandwidth. In fMRI, instrumentation noise and known MRI artifacts from sources such as heart rate and respiration (38–40) exist, and a major challenge is in isolating a poorly characterized signal from the spectrum of other ongoing brain activity. Isolating signal from noise, in sum, is the goal of multivoxel pattern analysis. Here, the classification objective was to isolate fast covert counting from slow covert counting, which are characterized only by the timing of the stimulus presentation. Thus, the previous work to quantify SNR from multivariate classifier-based correlations (33) constitutes an important advance that has proved an invaluable tool in this study. Specifically, SNR provided an explanation for the observed increases in classification accuracy that we then tested directly through simulations. Furthermore, the fact that SNR seems to be related to frontoparietal and anterior insula activity possibly pertains to a growing number of observations from electrophysiology that attention reduces slow timescale coherence and is an important potential issue for blood oxygenation level dependent (BOLD) imaging (41, 42). Recent fMRI work (43) has shown that multivoxel techniques in visual areas can be used to explore operational mechanisms of attention, such as the biased competition framework (44), and that techniques such as mutual information can quantify the quality of population codes in visual cortex (45). The classification results that we observe suggest that attention-related findings from visual cortex generalize in a distributed manner to other parts of the brain and nonvisual tasks.
Although the SVM analysis and the resulting maps (Fig. 2B) were specific to this particular task, we suspect that the classification accuracy differences between C and noC are much more general in nature. First, we have previously observed this type of dependence in classification accuracy in other rtfMRI tasks. Specifically, in a pilot study of this task as well as a left vs. right button pressing task, we observed higher classification accuracy for the training/testing of noC/C than C/noC (21), and we had early evidence for this classification accuracy asymmetry in our original description of classification-based rtfMRI (13). Based on the experimental and simulated results reported here, we conjecture that the asymmetry between noC/C and C/noC is related to the classifier’s sensitivity to noise during training and testing. We note that the SVM is a weighted average of the training data and thus, reduces the noise in the training data through averaging. Therefore, the combination of using noC runs as training data (which reflects SVM-based noise reduction) paired with C test data (which is already at an increased SNR) seems to be more advantageous than the C/noC case in a wide range of fMRI experiments. Second, the neural involvement of frontoparietal and anterior insula regions is unlikely to be specific to the fast vs. slow covert counting task. Third, we have begun to explore the extent to which these results generalize by examining motor-based (rather than neurofeedback-based) computer interface control, in which subjects are tasked with closely matching target finger-tapping rates. Our results generalize the rtfMRI covert speech results reported here and show that SNR is increased for pattern classifiers in tasks that use direct motor feedback.* In light of this result, motor feedback (for overt actions) may be an efficient route to improving task SNR, whereas rtfMRI and other BCI modalities allow us to accomplish neurofeedback in tasks where behavioral measures are not available (like the covert task reported here).
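The weighted-average property invoked here follows from the standard dual form of the linear SVM, sketched below with generic symbols (not the paper's notation): the weight vector is a nonnegatively weighted sum of training volumes, so zero-mean noise in the training data is attenuated by the averaging.

```latex
% Linear SVM decision function in dual form: the weight vector is a weighted
% sum of training volumes x_i with labels y_i in {-1, +1}.
\mathbf{w} \;=\; \sum_{i=1}^{N} \alpha_i\, y_i\, \mathbf{x}_i,
\qquad f(\mathbf{x}) \;=\; \mathbf{w}^{\top}\mathbf{x} + b,
\qquad \alpha_i \ge 0,
```

where $\mathbf{x}_i$ are the training volumes, $y_i \in \{-1, +1\}$ are their fast/slow labels, and only support vectors have $\alpha_i > 0$.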
If C boosts classification accuracy, then this points to important rtfMRI experimental approaches for the future. For example, it may be possible to dramatically boost neurofeedback accuracy using a simultaneous training and testing rtfMRI mode. The goal would be to progress beyond training the classifier in the absence of neurofeedback and begin updating classifier training while simultaneously providing feedback. At a more fundamental level, the frontoparietal–anterior insula network might also serve as a proxy signal for evaluating the performance quality of covert tasks. Thus, subjects’ performance could be rated based on changes in frontoparietal circuitry, and the computer interfaces themselves could be designed to then optimally engage this network.
The results of this study touch on two important experimental aspects of fMRI. The first is that fMRI tasks are necessarily tailored to study specific components of behavior and to satisfy constraints of the scanning environment. Our results suggest, however, that task designs lacking feedback for action leave subjects less engaged in performing the task. The second is that the brain is a physical system over which experimental paradigms exert incomplete control; beyond the fMRI stimulus, subjects experience internal and external noise and are constantly engaged in ongoing emotional, sensory, and thought processing. Although covert behaviors are an ever-present part of life, sustained attention to covert tasks is challenging. rtfMRI can not only bring thoughts to action in the outside world but can simultaneously make that action more engaging for the actor. Controlling an interface, even through the noisy coupling provided by rtfMRI-based neurofeedback, did not detract from the basic task. Rather, these results suggest that such control increases whole-brain task SNR and enhances top-down attentional resources to coordinate the inner states and intentions of the individual as they are translated into actions in the environment.
Materials and Methods
Subjects.
We collected data from 24 right-handed, native English-speaking subjects (13 females, age range = 18–36 y, and mean age = 25 y). All subjects gave informed consent, and the study was done in accordance with the Institutional Review Board of Baylor College of Medicine.
Data Collection and rtfMRI.
Structural and functional brain data were acquired on a 3.0 T head-only scanner (Siemens Allegra). T1-weighted anatomical volumes were acquired with a 3D magnetization prepared rapid acquisition gradient echo pulse sequence with 192 axial slices (resolution = 1 × 1 × 1 mm³; repetition time (TR) = 1,200 ms; echo time (TE) = 2.93 ms; field of view (FOV) = 245 mm²; flip angle (FA) = 12°). Functional data consisted of 30 interleaved axial slices collected every 2 s with an echo time of 30 ms and a 90° flip angle using an echo planar sequence (resolution = 3.46 × 3.46 × 5 mm³; FOV = 220 mm²). Each subject’s session consisted of four functional runs that were each 8.5 min (255 volumes).
rtfMRI followed previously developed methods (13); the scanner was first run in classifier training mode to obtain a model relating each fMRI volume to the fast or slow counting rate for that time point (Materials and Methods, Automatic Covert Speech Task and rtfMRI Stimulus Display). After a classifier was trained, the scanner could be run in feedback mode, in which fMRI volumes were decoded and online predictions of fast or slow counting served as a control signal to update the stimulus display. When more than one training run preceded a subject-controlled run, the most recent classifier was used for online decoding.
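As an illustration of the feedback-mode loop described above, here is a minimal Python sketch; `receive_next_volume`, `update_needle`, and `current_cue` are hypothetical placeholders (the actual system is the classifier-based rtfMRI platform of ref. 13), and a scikit-learn-style classifier object is assumed.

```python
# Illustrative sketch of the feedback-mode loop (hypothetical helper functions;
# not the authors' implementation).
import numpy as np

def feedback_run(classifier, n_volumes, receive_next_volume, update_needle,
                 current_cue):
    """Decode each incoming volume and convert the prediction into needle motion."""
    position = 0.0
    for t in range(n_volumes):
        volume = receive_next_volume()            # (n_voxels,) vector from scanner
        if current_cue(t) in ("fast", "slow"):    # only counting blocks drive the needle
            # classifier.predict is assumed to return 'fast' or 'slow'
            predicted = classifier.predict(volume[np.newaxis, :])[0]
            # Needle moves right when the decoded state matches the cued state,
            # left otherwise (as in the stimulus-display description).
            position += 1.0 if predicted == current_cue(t) else -1.0
        else:
            position = 0.0                        # needle re-centers during rest
        update_needle(position)
```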
Automatic Covert Speech Task and rtfMRI Stimulus Display.
To be included in the study, subjects were assessed on their ability to perform covert counting at the fast and slow rates. Subjects listened to recordings of one of the authors (M.M.): the slow counting rate was ∼0.7 syllables/s, accomplished by stretching out the vowel and maintaining smooth transitions to the next syllable, and the fast rate was ∼7.8 syllables/s, produced as forced fast counting. Subjects first imitated both the counting quality and rate aloud and then did so covertly for the experimenter (T.D.P.). They were instructed to count sequentially, starting from the number one. While subjects practiced outside the scanner, the experimenter observed their behavior and verified their fast and slow rates by occasionally interrupting their counting and asking them to state their last counted number.
During all fMRI runs, subjects performed covert sequential counting during randomly ordered blocks of fast or slow rates that ranged from 26 to 34 s in duration. Rest periods lasting 10–14 s separated these covert counting periods. Usually covert speech does not have a behavioral measure that can be used to evaluate subject performance. For this task, however, we were able to use a simple strategy to verify that subjects were performing the task. Specifically, at the end of one to three blocks in each run, we included a catch condition, in which the subjects were shown number? on the display screen and instructed to answer aloud. Because both the block lengths and the occurrence of a catch condition were randomized, subjects needed to perform the task and could not simply memorize one response for the slow condition and another for the fast condition. In the control room, the experimenter (but not the subject) was warned of an upcoming catch question by a chime that sounded 4 s before the subject was cued; the chime gave the operator time to press the scanner intercom button to listen to the subject’s response.
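A minimal sketch of how a run schedule with these constraints might be generated is shown below; the number of blocks per run, the 2-s quantization of durations, and the probe duration are assumptions, not the authors' stimulus code.

```python
import random

def build_run_schedule(n_blocks=12, seed=None):
    """Randomized fast/slow blocks (26-34 s), rest periods (10-14 s), and one
    to three 'number?' catch probes per run, as described in the text.
    Durations are drawn in 2-s steps to align with the TR (an assumption)."""
    rng = random.Random(seed)
    catch_blocks = set(rng.sample(range(n_blocks), rng.randint(1, 3)))
    schedule = []
    for i in range(n_blocks):
        rate = rng.choice(["fast", "slow"])
        schedule.append((rate, rng.randrange(26, 36, 2)))   # counting block
        if i in catch_blocks:
            schedule.append(("number?", 4))                 # assumed probe duration
        schedule.append(("rest", rng.randrange(10, 16, 2))) # rest period
    return schedule
```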
The stimulus display consisted of an analog meter with a text area, which instructed the subject with the words fast, slow, rest, and number? During the rest and number? conditions, the needle was white and pointed to the center of the meter. During fast and slow periods, the subject was instructed to perform the covert counting task as practiced outside the scanner. For the subject-controlled runs, the needle’s control signal came from the SVM classification of fast vs. slow counting, and the needle moved to the right for correctly classified scans (or to the left for incorrectly classified scans). At the beginning of each of the four runs, the experimenter informed the subjects whether they were controlling the needle movement. In addition, written reminders of the task were displayed before every run: SVM training runs showed the heading practice and a statement that the needle would move automatically, whereas subject-controlled runs showed the heading “you are controlling the needle” and instructions directing the subject to try to move the needle to the right by doing the counting task.
To control for temporal effects, subjects were randomly assigned to one of three groups, in which the order of C and noC runs was different. Because the capability for classification-based interface control necessitates an existing SVM model, the first run for all three groups was always a training run (noC). During the experiment, subject control came from the most recent classifier model generated by the latest training run. Thus, a training run’s model was not used for interface control if it occurred just before another training run or if a training run was the last run of the session. Similarly, some training run models were used two times (for back-to-back subject-controlled runs). The critical point for this study is whether the subject was controlling the interface. Designating the runs (Rs) by their chronological order (R1, R2, R3, and R4) or as C or noC, the three groups were group A (R1: noC, R2: C, R3: noC, and R4: C), group B (R1: noC, R2: C, R3: C, and R4: noC), and group C (R1: noC, R2: noC, R3: C, and R4: C).
fMRI Analysis.
We used AFNI (46) for offline analyses. The 3dsvm command was used to perform SVM analyses as previously described (47). The anatomical volumes were skull-stripped and registered to the TT-N27 atlas. The functional data were transformed to Talairach space (48) by applying a single transformation matrix, which was equivalent to transforming from functional to structural and then from structural to Talairach coordinate systems. Each run was motion-corrected by aligning to the first scan of that run. We examined the spatial map and classification accuracy of each run by using each of four runs individually as training data and testing on the other three runs. A binary mask (segmenting brain pixels from pixels outside the brain) was generated for the designated training run. The 3dsvm command was then applied using the fMRI volumes, the mask, and a label file that specified the volumes corresponding to fast and slow covert speech. As designated by the label file, baseline volumes and the first two TRs for every block were excluded from the analysis. The resulting classification model was then applied to test the other runs. Percent classification accuracy was calculated as (number of correctly classified images)/(total number of images) × 100.
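For illustration, the label bookkeeping and accuracy calculation described above can be sketched in Python as follows (3dsvm performs these steps internally from the label file); the helper names and argument conventions are assumptions.

```python
import numpy as np

def make_labels(block_onsets_tr, block_len_tr, rates, n_volumes, n_censor=2):
    """Label each TR as fast (+1), slow (-1), or excluded (0): baseline volumes
    and the first two TRs of every block are dropped, as in the label files."""
    labels = np.zeros(n_volumes, dtype=int)
    for onset, length, rate in zip(block_onsets_tr, block_len_tr, rates):
        value = 1 if rate == "fast" else -1
        labels[onset + n_censor: onset + length] = value
    return labels

def percent_accuracy(predicted, truth):
    """(number of correctly classified images) / (total number of images) x 100,
    restricted to non-excluded volumes."""
    keep = truth != 0
    return 100.0 * np.mean(predicted[keep] == truth[keep])
```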
Acknowledgments
We thank P. R. Montague, C. Neblett, R. Smith, G. Chen, M. Beauchamp, T. Ellmore, B. King-Casas, and P. Chiu. T.D.P. additionally thanks M. DeBiasi and S. Smirnakis. This work was partially supported by the Robert and Janice McNair Foundation. T.D.P. was partially supported by a Ruth L. Kirschstein Postdoctoral National Service Award and a faculty fellowship from the McNair Medical Institute.
Footnotes
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
*Zemla J, Lisinski J, LaConte S. Performance feedback engages attention, boosts task SNR, and enhances pattern classification. Poster presented at the 18th Annual Meeting of the Organization for Human Brain Mapping, June 10–14, 2012, Beijing, China.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1210738110/-/DCSupplemental.
References
- 1. Delamillieure P, et al. The resting state questionnaire: An introspective questionnaire for evaluation of inner experience during the conscious resting state. Brain Res Bull. 2010;81(6):565–573. doi:10.1016/j.brainresbull.2009.11.014.
- 2. Dux PE, et al. Training improves multitasking performance by increasing the speed of information processing in human prefrontal cortex. Neuron. 2009;63(1):127–138. doi:10.1016/j.neuron.2009.06.005.
- 3. Pashler H. Dual-task interference in simple tasks: Data and theory. Psychol Bull. 1994;116(2):220–244. doi:10.1037/0033-2909.116.2.220.
- 4. Schumacher EH, et al. Virtually perfect time sharing in dual-task performance: Uncorking the central cognitive bottleneck. Psychol Sci. 2001;12(2):101–108. doi:10.1111/1467-9280.00318.
- 5. Tombu M, Jolicoeur P. Virtually no evidence for virtually perfect time-sharing. J Exp Psychol Hum Percept Perform. 2004;30(5):795–810. doi:10.1037/0096-1523.30.5.795.
- 6. Botvinick MM, Braver TS, Barch DM, Carter CS, Cohen JD. Conflict monitoring and cognitive control. Psychol Rev. 2001;108(3):624–652. doi:10.1037/0033-295x.108.3.624.
- 7. Botvinick MM, Cohen JD, Carter CS. Conflict monitoring and anterior cingulate cortex: An update. Trends Cogn Sci. 2004;8(12):539–546. doi:10.1016/j.tics.2004.10.003.
- 8. Corbetta M, Shulman GL. Control of goal-directed and stimulus-driven attention in the brain. Nat Rev Neurosci. 2002;3(3):201–215. doi:10.1038/nrn755.
- 9. Schultz W. Multiple reward signals in the brain. Nat Rev Neurosci. 2000;1(3):199–207. doi:10.1038/35044563.
- 10. Montague PR, Berns GS. Neural economics and the biological substrates of valuation. Neuron. 2002;36(2):265–284. doi:10.1016/s0896-6273(02)00974-1.
- 11. Pagnoni G, Zink CF, Montague PR, Berns GS. Activity in human ventral striatum locked to errors of reward prediction. Nat Neurosci. 2002;5(2):97–98. doi:10.1038/nn802.
- 12. Weiskopf N, et al. Self-regulation of local brain activity using real-time functional magnetic resonance imaging (fMRI). J Physiol Paris. 2004;98(4–6):357–373. doi:10.1016/j.jphysparis.2005.09.019.
- 13. LaConte SM, Peltier SJ, Hu XP. Real-time fMRI using brain-state classification. Hum Brain Mapp. 2007;28(10):1033–1044. doi:10.1002/hbm.20326.
- 14. LaConte SM. Decoding fMRI brain states in real-time. Neuroimage. 2011;56(2):440–454. doi:10.1016/j.neuroimage.2010.06.052.
- 15. Jackson PL, Lafleur MF, Malouin F, Richards C, Doyon J. Potential role of mental practice using motor imagery in neurologic rehabilitation. Arch Phys Med Rehabil. 2001;82(8):1133–1141. doi:10.1053/apmr.2001.24286.
- 16. Sharma N, Pomeroy VM, Baron JC. Motor imagery: A backdoor to the motor system after stroke? Stroke. 2006;37(7):1941–1952. doi:10.1161/01.STR.0000226902.43357.fc.
- 17. McHenry MA. The effect of pacing strategies on the variability of speech movement sequences in dysarthria. J Speech Lang Hear Res. 2003;46(3):702–710. doi:10.1044/1092-4388(2003/055).
- 18. Wildgruber D, Ackermann H, Grodd W. Differential contributions of motor cortex, basal ganglia, and cerebellum to speech motor control: Effects of syllable repetition rate evaluated by fMRI. Neuroimage. 2001;13(1):101–109. doi:10.1006/nimg.2000.0672.
- 19. Riecker A, et al. fMRI reveals two distinct cerebral networks subserving speech motor control. Neurology. 2005;64(4):700–706. doi:10.1212/01.WNL.0000152156.90779.89.
- 20. Shergill SS, et al. Modulation of activity in temporal cortex during generation of inner speech. Hum Brain Mapp. 2002;16(4):219–227. doi:10.1002/hbm.10046.
- 21. Papageorgiou TD, Curtis WA, McHenry M, LaConte SM. Neurofeedback of two motor functions using supervised learning-based real-time functional magnetic resonance imaging. Conf Proc IEEE Eng Med Biol Soc. 2009;2009:5377–5380. doi:10.1109/IEMBS.2009.5333703.
- 22. Sörös P, et al. Clustered functional MRI of overt speech production. Neuroimage. 2006;32(1):376–387. doi:10.1016/j.neuroimage.2006.02.046.
- 23. Rao SM, et al. Distributed neural systems underlying the timing of movements. J Neurosci. 1997;17(14):5528–5535. doi:10.1523/JNEUROSCI.17-14-05528.1997.
- 24. Raichle ME, et al. A default mode of brain function. Proc Natl Acad Sci USA. 2001;98(2):676–682. doi:10.1073/pnas.98.2.676.
- 25. Critchley HD, Wiens S, Rotshtein P, Ohman A, Dolan RJ. Neural systems supporting interoceptive awareness. Nat Neurosci. 2004;7(2):189–195. doi:10.1038/nn1176.
- 26. Sridharan D, Levitin DJ, Menon V. A critical role for the right fronto-insular cortex in switching between central-executive and default-mode networks. Proc Natl Acad Sci USA. 2008;105(34):12569–12574. doi:10.1073/pnas.0800005105.
- 27. Corbetta M, Patel G, Shulman GL. The reorienting system of the human brain: From environment to theory of mind. Neuron. 2008;58(3):306–324. doi:10.1016/j.neuron.2008.04.017.
- 28. Culham JC, Kanwisher NG. Neuroimaging of cognitive functions in human parietal cortex. Curr Opin Neurobiol. 2001;11(2):157–163. doi:10.1016/s0959-4388(00)00191-4.
- 29. Belin P, et al. The functional anatomy of sound intensity discrimination. J Neurosci. 1998;18(16):6388–6394. doi:10.1523/JNEUROSCI.18-16-06388.1998.
- 30. Pardo JV, Fox PT, Raichle ME. Localization of a human system for sustained attention by positron emission tomography. Nature. 1991;349(6304):61–64. doi:10.1038/349061a0.
- 31. Gitelman DR, et al. Functional imaging of human right hemispheric activation for exploratory movements. Ann Neurol. 1996;39(2):174–179. doi:10.1002/ana.410390206.
- 32. Paus T, et al. Time-related changes in neural systems underlying attention and arousal during the performance of an auditory vigilance task. J Cogn Neurosci. 1997;9(3):392–408. doi:10.1162/jocn.1997.9.3.392.
- 33. Strother SC, et al. The quantitative evaluation of functional neuroimaging experiments: The NPAIRS data analysis framework. Neuroimage. 2002;15(4):747–771. doi:10.1006/nimg.2001.1034.
- 34. LaConte S, et al. The evaluation of preprocessing choices in single-subject BOLD fMRI using NPAIRS performance metrics. Neuroimage. 2003;18(1):10–27. doi:10.1006/nimg.2002.1300.
- 35. Sasaki Y, Nanez JE, Watanabe T. Advances in visual perceptual learning and plasticity. Nat Rev Neurosci. 2010;11(1):53–60. doi:10.1038/nrn2737.
- 36. Byers A, Serences JT. Exploring the relationship between perceptual learning and top-down attentional control. Vision Res. 2012;74:30–39. doi:10.1016/j.visres.2012.07.008.
- 37. Poldrack RA. Can cognitive processes be inferred from neuroimaging data? Trends Cogn Sci. 2006;10(2):59–63. doi:10.1016/j.tics.2005.12.004.
- 38. Krüger G, Glover GH. Physiological noise in oxygenation-sensitive magnetic resonance imaging. Magn Reson Med. 2001;46(4):631–637. doi:10.1002/mrm.1240.
- 39. Hu X, Le TH, Parrish T, Erhard P. Retrospective estimation and correction of physiological fluctuation in functional MRI. Magn Reson Med. 1995;34(2):201–212. doi:10.1002/mrm.1910340211.
- 40. Glover GH, Li TQ, Ress D. Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR. Magn Reson Med. 2000;44(1):162–167. doi:10.1002/1522-2594(200007)44:1<162::aid-mrm23>3.0.co;2-e.
- 41. Kohn A, Zandvakili A, Smith MA. Correlations and brain states: From electrophysiology to functional imaging. Curr Opin Neurobiol. 2009;19(4):434–438. doi:10.1016/j.conb.2009.06.007.
- 42. Harris KD, Thiele A. Cortical state and attention. Nat Rev Neurosci. 2011;12(9):509–523. doi:10.1038/nrn3084.
- 43. Reddy L, Kanwisher NG, VanRullen R. Attention and biased competition in multi-voxel object representations. Proc Natl Acad Sci USA. 2009;106(50):21447–21452. doi:10.1073/pnas.0907330106.
- 44. Desimone R, Duncan J. Neural mechanisms of selective visual attention. Annu Rev Neurosci. 1995;18:193–222. doi:10.1146/annurev.ne.18.030195.001205.
- 45. Saproo S, Serences JT. Spatial attention improves the quality of population codes in human visual cortex. J Neurophysiol. 2010;104(2):885–895. doi:10.1152/jn.00369.2010.
- 46. Cox RW. AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res. 1996;29(3):162–173. doi:10.1006/cbmr.1996.0014.
- 47. LaConte S, Strother S, Cherkassky V, Anderson J, Hu X. Support vector machines for temporal classification of block design fMRI data. Neuroimage. 2005;26(2):317–329. doi:10.1016/j.neuroimage.2005.01.048.
- 48. Talairach J, Tournoux P. Co-Planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System: An Approach to Cerebral Imaging. New York: Thieme; 1988.