Author manuscript; available in PMC: 2015 Jul 1.
Published in final edited form as: J Neuroimaging. 2013 Mar 29;24(4):371–378. doi: 10.1111/jon.12015

Task-correlated facial and head movements in classifier-based real-time fMRI

Jeremy F Magland 1, Anna Rose Childress 2
PMCID: PMC3706575  NIHMSID: NIHMS447307  PMID: 23551805

Abstract

BACKGROUND

Real-time fMRI is especially vulnerable to task-correlated movement artifacts because the statistical methods normally available in conventional analyses to remove such signals cannot be used in the real-time setting. Multi-voxel classifier-based methods, although advantageous in many respects, are particularly sensitive to these artifacts. Here we systematically studied various movements of the head and face to determine to what extent they can "masquerade" as signal in multi-voxel classifiers.

METHODS

Ten subjects were instructed to move systematically (twelve instructed movements) throughout fMRI exams, and data from a previously published real-time study were also analyzed to determine the extent to which non-neural signals contributed to the previously reported high accuracy of the classifier output.

RESULTS

Of potential concern, whole-brain classifiers based solely on movements exhibited false positives (above-chance classification) in all cases (p < 0.05). Artifacts were also observed in the spatial activation maps for two of the twelve movement tasks. In the retrospective analysis, the relatively high reported classification accuracies were (fortunately) found to be mostly explainable by neural activity, but in some cases performance was likely dominated by movements.

CONCLUSION

Movement tasks of many types (including movements of the eyes, face, and body) can lead to false positives in classifier-based real-time fMRI paradigms.

INTRODUCTION

Subject motion is a primary limiting factor in virtually every functional magnetic resonance imaging (fMRI) experiment. In addition to reducing statistical power by masking subtle changes in neural activity, movements can also cause type I errors (false positives), particularly if they are correlated with design tasks and/or stimuli [1]. A number of techniques have been developed to deal with head motion in fMRI, including the use of physical restraints such as head padding or bite bars, prospective motion correction [2], retrospective registration and realignment [3], and inclusion of motion parameters as covariates in statistical analyses [4]. Each technique has limitations. Apart from physical restraints, all of the above methods assume rigid (or at least 3D affine) motion of the head -- an assumption that does not hold for several common facial movements such as squinting, yawning, and smiling. Even in the case of rigid motion, type I errors can persist due to interpolation errors during realignment [5]. The situation is further complicated by the high sensitivity of the BOLD measurement to susceptibility-induced field distortions, particularly near the air-tissue interfaces of the sinuses. Variations in signal caused by motion-related fluctuations in these field distortions are spatially heterogeneous and difficult to correct [6]. Including motion parameters as covariates when computing fMRI statistics is perhaps the most effective processing technique for suppressing motion-induced false positives. However, this technique is also known to decrease sensitivity in block designs when task-correlated motion occurs (see [4]). Furthermore, real-time approaches that use whole-brain classifiers [7, 8] or regional BOLD signals [9] to provide feedback on a TR-to-TR basis cannot rely on such motion covariates.
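As a concrete illustration of the covariate approach, the following sketch (not drawn from any of the cited implementations; all names and shapes are illustrative) fits a voxelwise general linear model in which the six rigid-motion parameters from realignment enter as nuisance regressors alongside the task regressor.

```python
# Minimal sketch: voxelwise GLM with rigid-motion parameters as nuisance
# regressors. Names and shapes are illustrative, not from the cited software.
import numpy as np

def task_t_statistic(bold_ts, task_regressor, motion_params):
    """bold_ts: (T,) time series for one voxel; task_regressor: (T,);
    motion_params: (T, 6) translations/rotations from realignment."""
    T = bold_ts.shape[0]
    # Column 0 is the task regressor; motion columns and an intercept are nuisance terms.
    X = np.column_stack([task_regressor, motion_params, np.ones(T)])
    beta, *_ = np.linalg.lstsq(X, bold_ts, rcond=None)
    resid = bold_ts - X @ beta
    dof = T - X.shape[1]
    sigma2 = resid @ resid / dof
    cov_beta = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0] / np.sqrt(cov_beta[0, 0])  # t-statistic for the task effect
```

As noted above, when motion is task-correlated this regression also removes genuine task-locked signal, which is why sensitivity drops in block designs and why the approach is unavailable when feedback must be produced on every TR.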

Ideally, facial movements occur randomly throughout the fMRI scan and therefore do not cause type I errors, especially when data are compiled over a group of subjects. However, since humans use facial movements to express emotion, task-correlated movement cannot be ruled out, particularly if the cognitive processes being studied involve emotional states. For example, clenching of the jaw or frowning can be expected to occur more frequently during a stressful task (see [10] for an example of false detection of increased blood flow in the temporopolar cortex due to anxiety), whereas smiling or closing of the eyes may occur systematically at other points during the scan. If these movements cause unidirectional signal changes for voxels within the brain area (through either partial volume or susceptibility effects), then type I errors may occur at those locations.

Real-time fMRI is a relatively new technique that has inspired a variety of paradigms and potential clinical applications [11–16]. However, the problems described above are especially difficult in real-time fMRI, because feedback must be derived on the basis of data collected over the course of only a few seconds – there is no opportunity to “average out” motion. For conventional group analyses, this problem is mitigated because task-correlated motion varies across subjects and therefore the induced artifacts can be assumed to “average out”. But for real-time or single-scan experiments it can be very difficult to separate these artifacts from actual neural activity.

In addition to this problem of insufficient data for averaging, there is a special problem for multi-voxel classifier-based real-time paradigms involving a classifier-training period. If there is task-correlated motion during the classifier-training portion of the scan, then the subject may actually be trained to perform that motion (even outside awareness) during the feedback phase of the scan, and may be able to successfully control the neurofeedback with mechanical rather than neural activity. This type of problem could also occur when a regional BOLD signal is used for feedback. For example, an apparent eye-motion (task-correlated) artifact was found in a recent real-time fMRI study investigating the potential for insula self-regulation [17]. The authors of that study determined that the participants may have been misled by the erroneous feedback into thinking that they were controlling the feedback with neural activity, when in fact the feedback may have been dominated by subtle eye movements. The non-neural causes of such artifacts are not limited to eye movements, but also include other non-rigid motions of the head and face, such as smiling, yawning, or clenching of the jaw.

The present investigation was triggered by our own experience with real-time fMRI classifiers [18], in which we reported an average of 83% classification accuracy on a task involving imagined spatial navigation and imagined motor activity. To investigate whether a portion of this relatively high classification accuracy might be explainable by non-neural activity (i.e., task-correlated movements), we retrospectively analyzed the same data but developed the classifier on the basis of voxels outside the brain. The rationale was that neural activity should affect only voxels inside the brain, whereas non-neural processes can affect voxels throughout the head. Second, to explore more systematically the potential pitfalls of task-correlated motion for the development of real-time fMRI classifiers, we instructed a second cohort of subjects to move systematically while undergoing an fMRI scan, using a variety of different movements of the head and face. Finally, we provide an example of how task-correlated motion can create a special challenge in the case of classifier-based emotion paradigms.

METHODS

Subjects

This study involved a total of 29 subjects, all of whom consented to be scanned under protocols approved by the Office of Human Research at the University of Pennsylvania Perelman School of Medicine. Nineteen of these subjects were scanned as part of a previously published real-time fMRI study [18]; some of them were treatment-seeking substance abuse patients. The remaining 10 subjects were healthy volunteers scanned with a special instructed motion protocol as described below. One of these 10 participants was also scanned with an emotion protocol during the same scan session.

Image Acquisition

All functional imaging was performed at 3 Tesla (Tim Trio; Siemens, Erlangen, Germany) using a standard 2D echo-planar imaging (EPI) BOLD technique with the following acquisition parameters: TE = 31 ms, TR = 2 s, flip angle = 90°, resolution = (3.6 mm)², FOV = (230 mm)², 32 slices, slice thickness = 4.5 mm with zero inter-slice spacing (image array size 64 × 64 × 32). We used bi-temporal head pads for immobilization. Subjects were instructed to keep their head as still as possible throughout each scan, except when performing the instructed head movements.

Tasks

Prior real-time experiment

For the 19 subjects in the previously-published real-time study, subjects were instructed to alternate between two “thought” tasks: (1) “Imagine hitting a tennis ball, over and over again” (repetitive motor thoughts), and (2) “Imagine moving from room to room in a familiar building” (spatial navigation thoughts). A block design was used with 30 seconds per task separated by 10 seconds of instructions. In each case, a classifier-training period (5–8 minutes) preceded a neurofeedback period (8–24 minutes) in which subjects were asked to use these thoughts to move a feedback marker up and down. The feedback marker moved in real time according to the output of the whole-brain linear classifier.

Instructed motion experiment

For the instructed motion scans, subjects were provided with auditory and visual instructions (via headphones and projector) for performing various movements throughout an 18-minute EPI scan comprising 24 blocks (46 seconds each). Within each block, subjects received 10 seconds of general instructions and were then told to alternate between two contrasting motion tasks for 6 seconds each, with 3 repetitions of each task pair (see Fig. 1). Each of the twelve task pairs was run twice, for a total of 72 seconds of task time per pair. Specific instructions for the motion tasks are provided in Table I.

Figure 1. Design diagram for the motion investigation experiments.

Table I. Instructions for the twelve pairs of movement tasks.

Block | Task A | Task B
1. Eyes-Close-Open | Close your eyes | Open your eyes
2. Eyes-Left-Right | Focus on the ball [ball appears on left] | Focus on the ball [ball appears on right]
3. Eyes-Up-Down | Focus on the ball [ball appears on top] | Focus on the ball [ball appears on bottom]
4. Eyes-Blink | Blink your eyes | Stop blinking
5. Eyes-Squint-Wide | Squint your eyes | Open your eyes wide
6. Smile-Frown | Smile | Frown
7. Mouth-Open-Close | Open your mouth | Close your mouth
8. Jaw-Clench | Clench your jaw | Relax your jaw
9. Shoulders-Raise | Raise your shoulders | Lower your shoulders
10. Arms-Stretch | Stretch your arms | Relax your arms
11. Legs-Stretch | Stretch your legs | Relax your legs
12. Inhale-Exhale | Inhale slowly | Exhale slowly

Emotion - motion example

To further explore whether false activation can occur during an fMRI experiment as a result of task-correlated facial movement, one of the subjects in the instructed motion study was scanned with a protocol designed to detect emotion-specific brain activity. This scan was acquired in addition to the instructed motion scan with identical imaging parameters but different stimuli and instructions. Face images (from the Radboud Faces Database [21]) were presented with happy and fearful expressions, and the subject was instructed to try to relate to the emotions as they were presented. No instructions were provided with respect to facial movement; that is, the subject was told to keep his head still throughout the scan but was not specifically told to keep his face still. Ten seconds of fearful faces were presented followed by ten seconds of happy faces, repeated 18 times (2 seconds per face).

Data Analysis

Preprocessing

Each dataset was realigned (3D affine transformations with 12 degrees of freedom), and then resampled into standard space using the FSL software [19]. Spatial smoothing was applied (Gaussian smoothing with σ = 1.8 mm) as well as prospective drift correction [18]. We note that the previously published real-time data were retrospectively reanalyzed with somewhat different preprocessing parameters.
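For illustration, the smoothing step alone can be sketched as follows (a minimal sketch assuming realignment and resampling have already been performed with FSL; the voxel dimensions follow the acquisition described above, and the function name is illustrative).

```python
# Minimal sketch of the spatial smoothing step (sigma = 1.8 mm), assuming the
# volume has already been realigned and resampled. Voxel sizes follow the
# acquisition above (3.6 mm in-plane, 4.5 mm slices).
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_epi_volume(vol, sigma_mm=1.8, voxel_mm=(3.6, 3.6, 4.5)):
    """vol: one 3D EPI volume as a numpy array."""
    sigma_vox = [sigma_mm / v for v in voxel_mm]  # convert mm to voxel units
    return gaussian_filter(vol, sigma=sigma_vox)
```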

Prior real-time dataset

The 19 real-time datasets were retrospectively analyzed to investigate the possibility that systematic movement artifacts may have accounted for some portion of the classification accuracies (an average of 83% classification accuracy was reported previously for these data). Two classifiers were developed: one based on a whole-brain mask, and one based on the complement of the whole-brain mask (the outside-brain mask), which includes fat and muscle near the skull as well as tissues of the face (eyes, nose, etc.). For each scan, the classifier was developed on the basis of the first four cycles (320 seconds) of the scan and then tested on the remainder of the scan. All classifiers were defined on raw intensity values after preprocessing. Classification accuracies were then compared between the whole-brain and outside-brain masks. The rationale for this comparison is that if the outside-brain accuracy is significantly greater than 50%, then systematic motion may be present and may have accounted for some or all of the classification accuracy for the whole-brain mask. Conversely, if the outside-brain accuracy is significantly lower than the whole-brain accuracy, then we can conclude that neural activity accounted for at least a portion of the whole-brain accuracy.
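A minimal sketch of this comparison, under the assumption of a PLS-based linear classifier with ±1 label coding (names and shapes are illustrative, not the authors' code):

```python
# Minimal sketch of the whole-brain vs. outside-brain comparison. `vols` is a
# (T, X, Y, Z) array of preprocessed volumes, `labels` a length-T array with
# one of two task labels per volume; all names are illustrative.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def masked_accuracy(vols, labels, mask, tr=2.0, train_seconds=320, n_components=4):
    n_train = int(train_seconds / tr)          # first four cycles used for training
    X = vols[:, mask]                          # (T, n_voxels) intensities within the mask
    y = np.where(labels == labels[0], 1.0, -1.0)
    pls = PLSRegression(n_components=n_components).fit(X[:n_train], y[:n_train])
    pred = np.sign(pls.predict(X[n_train:]).ravel())
    return float(np.mean(pred == y[n_train:]))

# acc_inside  = masked_accuracy(vols, labels, brain_mask)
# acc_outside = masked_accuracy(vols, labels, ~brain_mask)   # complement of the brain mask
```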

Instructed motion dataset

To investigate the effect of systematic movement on classifier formation, linear classifiers were developed on the basis of each block to discriminate between the two movement states. Each classifier was then tested on the other motion block of the same movement type, and the total classification accuracy (or percent of volumes correctly classified) was computed for each scan and each task. Classifiers were developed using partial least squares (PLS) regression [18, 20] on the basis of pixels within a mask. Three separate masks were used: (a) whole brain; (b) complement of the whole brain (i.e. outside the brain); and (c) a regional mask of the frontal medial cortex. Classifiers were developed using four PLS components. The same preprocessing was applied as in the conventional analysis except that the drift correction was different -- mean images (computed over each 12-second period comprising one run of task A and task B) were subtracted from each volume.
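The cycle-wise drift correction can be sketched as follows (a minimal illustration assuming the volumes of a task pair are ordered in consecutive 12-second cycles of six volumes at TR = 2 s; names are illustrative):

```python
# Minimal sketch of the cycle-wise drift correction: subtract the mean image
# over each 12-second cycle (six volumes at TR = 2 s) from every volume in
# that cycle. `block` is (T, X, Y, Z) with T a multiple of six.
import numpy as np

def subtract_cycle_means(block, vols_per_cycle=6):
    out = block.astype(float).copy()
    for start in range(0, out.shape[0], vols_per_cycle):
        cycle = out[start:start + vols_per_cycle]
        cycle -= cycle.mean(axis=0)          # remove the cycle-mean image
    return out
```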

Parametric (t-test) maps were generated for each of the 12 pairs of movements and then averaged across all subjects. Computations were performed voxel-by-voxel at all locations inside and outside the brain area. The first volume of each task was excluded from parameter computation, since it was assumed that it took the subject up to 1–2 seconds to begin following the instructions. This left 12 volumes for each task (or 24 volumes for each pair of tasks) per dataset. The t-test map was obtained using unpaired t-tests to detect unequal means between the two contrasting states.
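A minimal sketch of the voxel-by-voxel computation, assuming the volumes for the two states have already been split into two arrays with the first volume of each task removed (names are illustrative):

```python
# Minimal sketch of the voxel-by-voxel unpaired t-test between the two states.
# `vols_a`, `vols_b`: (n_a, X, Y, Z) and (n_b, X, Y, Z) arrays of volumes for
# task A and task B (first volume of each task already dropped).
import numpy as np
from scipy import stats

def ttest_map(vols_a, vols_b):
    t_map, p_map = stats.ttest_ind(vols_a, vols_b, axis=0)
    return t_map, p_map

# For display, threshold at p = 0.01 as in Figure 3:
# display_map = np.where(p_map < 0.01, t_map, np.nan)
```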

Emotion - motion example

The single emotion scan described above was analyzed in two ways. First, we computed a parametric activation map using the same processing parameters as for the instructed motion dataset, using the happy/fearful face contrast. Second, we applied the smile/frown classifier developed during the instructed motion scan (for the same subject) to the emotion scan, to determine whether the motion-derived classifier could predict the instructed emotion, which would provide evidence of a classifier whose false positives are caused by facial movement.
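A minimal sketch of this cross-scan test, assuming a fitted PLS classifier such as the one sketched above (names are illustrative):

```python
# Minimal sketch of the cross-scan test: apply the smile/frown classifier
# (a PLS model fit on the instructed motion scan) to each emotion-scan volume.
# `pls_smile_frown`, `emotion_vols`, and `brain_mask` are illustrative names.
import numpy as np

def cross_scan_predictions(pls_smile_frown, emotion_vols, brain_mask):
    scores = pls_smile_frown.predict(emotion_vols[:, brain_mask]).ravel()
    return np.sign(scores)   # +1 interpreted as smile-like, -1 as frown-like
```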

RESULTS

Prior real-time dataset

The relationship between the outside-brain and inside-brain classification accuracies for the nineteen real-time scans is shown in Fig. 2. The weak correlation (R² = 0.071) between these two quantities suggests that classification accuracy inside the brain is (fortunately) dominated by neural activity rather than by systematic movements. This is further supported by the observation that the inside-brain accuracies were on average significantly higher than the outside-brain accuracies (80% vs. 59%, p < 0.001, one-tailed paired t-test). However, since many of the outside-brain classification accuracies were significantly greater than 50%, some part of the inside-brain performance may still be due to movement. Indeed, in a few cases the outside-brain accuracy was close to (or even greater than) the inside-brain accuracy, and for those scans we cannot rule out that some or all of the performance was due to task-correlated movements.

Figure 2. Outside-brain vs. inside-brain classification accuracies for the nineteen real-time scans (R² = 0.071). Most of the points lie well above the line of identity, suggesting that only a portion of the performance can be accounted for by systematic movement.

Instructed motion dataset

Figures 3 and 4 show the two approaches (spatial maps and classification accuracies) to revealing potential motion confounds in the instructed motion paradigm. Figure 3 shows average t-test maps for the twelve movement tasks (displayed with a threshold of p = 0.01). The first four eye movement tasks resulted in apparent activations that were limited to the eye regions outside the brain. In contrast, the fifth eye movement task (squinting vs. opening the eyes wide) resulted in false activations in the ventral prefrontal cortex (medial orbitofrontal cortex, anterior cingulate, and inferior frontal regions). The mouth open/close task also resulted in false activations, in both anterior (medial orbitofrontal cortex, striatum) and posterior (cerebellum) regions. In addition, the jaw clench task resulted in apparent activations outside the brain near the temporal muscles. The remaining tasks did not result in any apparent activation clusters.

Figure 3. Average t-score maps for the twelve movement tasks, displayed with a threshold of p = 0.01. Positive (red/yellow) values correspond to higher signals during the second of the two tasks (e.g., mouth close); negative (blue) values correspond to higher signals during the first of the two tasks (e.g., mouth open).

Figure 4. Average classification accuracies for the twelve movement tasks. Linear classifiers were defined using three different masks: whole brain, outside the brain, and regional (frontal medial cortex). For each task pair, classifiers were developed on the first 18-second block and then applied to the second block, and vice versa.

Although the spatial maps indicated potential confounds for only two tasks, the classifier tests revealed widespread contributions of motion across all tasks. Most of the average classifier-test accuracies, shown in Fig. 4, were well above the 50% (chance) level, demonstrating that classifiers can form on the basis of systematic movements of many types. For the whole-brain and outside-brain masks, the average accuracies were all significantly greater than chance (p < 0.05, single-sample one-tailed t-test), with many of the accuracies above 70%. All classification accuracies for the regional mask (with the exception of smile-frown) were also significantly greater than chance (p < 0.05). The arms-stretch task produced the highest whole-brain accuracy (78%), and the first two eye movement tasks yielded classification accuracies above 70%.
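For reference, the test against chance can be sketched as follows (a single-sample, one-tailed t-test of per-subject accuracies against 0.5; the one-tailed p-value is obtained by halving SciPy's two-tailed result when the group mean exceeds chance; names are illustrative):

```python
# Minimal sketch of the test against chance: single-sample, one-tailed t-test
# of per-subject classification accuracies against 0.5. `accuracies` is a 1D
# array with one value per subject for a given task and mask (illustrative).
from scipy import stats

def above_chance_p(accuracies, chance=0.5):
    t, p_two_tailed = stats.ttest_1samp(accuracies, popmean=chance)
    # Convert to a one-tailed p-value for the "greater than chance" direction.
    return p_two_tailed / 2 if t > 0 else 1 - p_two_tailed / 2
```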

The average translational and rotational motion levels (as detected by FSL) for the twelve task blocks are presented in Table II. These values were computed as the maximum deviation between distinct time points within the block, after subtracting the mean position over each 12-second cycle (see the sketch following Table II). As expected, the eye movement tasks resulted in the lowest levels of detected rigid motion. The mouth open/close task resulted in the largest translational displacements (approximately 6 mm on average). Overall, we did not observe a correlation between the level of rigid motion detected by FSL and the outside-brain classification accuracies, presumably because only affine motion is detected (and corrected) during image alignment, whereas highly non-rigid motion contributes most to the classifiers.

Table II. Detected average rigid motion for the twelve instructed motion tasks.

Block | Translation (mm) | Rotation (deg)
Eyes-Close-Open | 0.7 | 0.2
Eyes-Left-Right | 0.4 | 0.2
Eyes-Up-Down | 0.9 | 0.3
Eyes-Blink | 0.6 | 0.2
Eyes-Squint-Wide | 1.0 | 0.3
Smile-Frown | 1.0 | 0.2
Mouth-Open-Close | 6.3 | 0.7
Jaw-Clench | 1.9 | 0.3
Shoulders-Raise | 2.4 | 1.8
Arms-Stretch | 2.3 | 1.0
Legs-Stretch | 3.1 | 1.9
Inhale-Exhale | 3.2 | 0.7
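A sketch of one plausible reading of the motion-summary computation described above (remove the mean position over each 12-second cycle, then report the largest remaining excursion within the block; names are illustrative):

```python
# Minimal sketch (one plausible reading of the summary in Table II): subtract
# the mean position over each 12-second cycle, then report the largest
# remaining excursion in the block. `params` is a (T, n_params) array of
# realignment parameters for one block (e.g., translations in mm).
import numpy as np

def block_motion_summary(params, vols_per_cycle=6):
    centered = params.astype(float).copy()
    for start in range(0, centered.shape[0], vols_per_cycle):
        cycle = centered[start:start + vols_per_cycle]
        cycle -= cycle.mean(axis=0)                      # remove the cycle-mean position
    span = centered.max(axis=0) - centered.min(axis=0)   # max deviation per parameter
    return span.max()
```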

Emotion - motion example

Fig. 5B shows the spatial map associated with the emotion task for one of the ten subjects, highlighting voxels that distinguish between the happy (positive) and fearful (negative) conditions. The similarity between this map and the same subject's t-test map for the smile-frown motion task (Fig. 5A) suggests that the supra-threshold voxels in Fig. 5B may be largely explainable by facial movement (i.e., smiling). Further evidence for this is given by the whole-brain classifier result of Fig. 5C, in which a whole-brain classifier, developed on the basis of the movement scan to distinguish between smiling and frowning, was applied to the emotion task. The movement-based classification values strongly predict the happy-emotion periods throughout the scan, suggesting that the subject was indeed smiling during these periods.

Figure 5. Example of type I errors caused by facial movement (presumably smiling). (A) Parametric (t-test) map for the smiling/frowning task in a single subject (thresholded at p = 0.01). (B) Parametric map from a separate experiment for an 'emotional' task in which the same subject was told to relate to the happy and fearful emotions of face cues, but was not instructed to keep his face still. (C) Predictions throughout the emotion scan of a smile/frown classifier derived from the motion scan, where red and green periods correspond to the presentation of fearful and happy faces, respectively.

DISCUSSION

This study highlights a potential pitfall with classifier-based real-time fMRI experiments. If head or face movement is systematic and correlated with the fMRI task, then the real-time classifier may form on the basis of non-neural activity. For example, if the intended classifier involves aversive stimuli vs. rest, the subject may reflexively or intentionally squint to reduce the visual impact of the aversive cues. If the squinting is systematic, the classifier can form based on the movement. Additionally, the brain maps will be confounded, since squinting apparently impacts signal in brain regions that process hedonics/emotional states (e.g., medial orbitofrontal cortex, as in Figure 3).

Although the spatial maps showed false activations for only a few of the motion tasks, the whole-brain classifiers revealed a larger potential problem: accuracies were significantly above chance for all motion tasks. In three cases (eyes-close-open, eyes-left-right, and arms-stretch) the accuracies were above 70%. This demonstrates that robust classifiers can form from a wide variety of movement types. A comparison of Figures 3 and 4 shows that classifiers (and multi-voxel approaches more generally) are much more sensitive to such artifacts than voxel-by-voxel measures.

It is possible that the results of the instructed motion study were somewhat affected by the brain activity used to generate the instructed motions (e.g., motor activation). However, we have several reasons to believe that this effect is small compared with the movement artifact. First, brain activity cannot explain the outside-brain classification accuracies, which were in fact higher on average than the within-brain accuracies. Second, the spatial activation patterns of Fig. 3 are best explained by motion artifacts. Third, the number of repetitions for each task is relatively small for detecting a cognitive effect (but is apparently large enough to reveal the motion artifacts). Finally, we note that the hemodynamic response delay (which affects neural activity signals but not motion artifacts) is on the order of 4–6 seconds, which is comparable to the 6-second motion task duration.

The present study highlights the importance of instructing subjects not to move their head or face during real-time fMRI scans. In addition, the outside-brain classifier can serve as a red flag for detecting when systematic movements of the head or face (or eyes, as in squinting) may be taking place. We note that for all types of movements, the outside-brain accuracy was nearly equal to (if not higher than) the inside-brain accuracy; such movements can therefore be expected to be reflected in an outside-brain classifier, whereas neural activity should contribute only to the inside-brain classifier. As one remedy for the motion confound, investigators could monitor the classifier output during the scan to ensure that the outside-brain accuracy is much lower than the inside-brain accuracy, as was the case for the majority of the real-time scans shown in Fig. 2. If the outside-brain accuracy exceeds a pre-set threshold (e.g., 60%), the scan could be restarted and/or the subject could be re-instructed in a way that attempts to avoid the confound. One could also include all non-gray-matter voxels in the outside-brain classifier to control for physiologic effects in addition to movements, although we suspect it may be challenging to obtain an accurate within-brain segmentation of the relatively low-resolution functional images.
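A minimal sketch of such a monitoring rule (the 60% threshold follows the example above; the function is illustrative and not part of any existing real-time package):

```python
# Minimal sketch of the proposed red-flag check during a real-time session.
# `acc_inside` and `acc_outside` are running classification accuracies from
# inside-brain and outside-brain classifiers; the 60% threshold is the example
# value suggested above. Names are illustrative.
def motion_red_flag(acc_inside, acc_outside, outside_threshold=0.60):
    """Return a warning message if task-correlated movement is suspected."""
    if acc_outside >= outside_threshold:
        return ("Outside-brain accuracy {:.0%} exceeds threshold; consider "
                "re-instructing the subject or restarting the scan.".format(acc_outside))
    if acc_outside >= acc_inside:
        return "Outside-brain accuracy is not lower than inside-brain accuracy."
    return ""  # no red flag
```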

The rigid motion parameters obtained during real-time motion correction could also be used to raise a red flag about task-correlated movement. However, these parameters are designed to detect only rigid (or affine) movements of the head, which can be corrected and thus should ideally not cause artifacts. Highly non-rigid movements (e.g., those induced by eye movements and facial expressions) are more problematic and may not be detectable with standard motion correction algorithms. As we have shown, these non-rigid movements can be detected using outside-brain classifiers.

Though re-instructing the participant (e.g., to keep the eyes open and relaxed) might prevent the confound in this and other general examples, for some designs the remedy may not be as straightforward. As the emotion-motion example illustrates, facial motion may contribute to the classifier while also being intimately connected to the brain states of interest. Instructing individuals not to move their face (i.e., not to smile or frown) may therefore itself affect the emotion-based contrast – studies have shown that mimicking smiles or frowns can actually create emotion [22, 23]. Investigators running classifier-based real-time studies of emotional states need to take this into account.

CONCLUSION

Most of the high classification accuracies (83% on average) in the previously published real-time study appear to be (fortunately) explainable by neural activity. However, inspection of the outside- vs. inside-brain classification accuracies suggests that in a few cases some or all of the high performance may have been due to task-correlated movements. We have shown here that movement tasks of many types (including movements of the eyes, face, and body) can lead to false positives in classifier-based real-time paradigms. By contrast, only two of the applied movements (eyes-squint-wide and mouth-open-close) produced artifacts in the spatial activation maps, suggesting that multi-voxel techniques are more vulnerable to such problems than voxel-by-voxel analyses. Finally, we have proposed a potential remedy for real-time paradigms, whereby investigators monitor the extent of task-correlated movement by viewing the output of a classifier developed from voxels outside the brain region. This should enable investigators to retain the strengths (e.g., noise reduction) of classifiers while mitigating the potential confound of task-correlated motion.

Acknowledgments

This work was supported by National Institutes of Health research grants NIBIB K25-EB007646, NIDA R33-DA026114, NIDA P50-DA12756, and P60-DA005186, and by the VA VISN 4 MIRECC.

References

1. Hajnal JV, et al. Artifacts due to stimulus correlated motion in functional imaging of the brain. Magn Reson Med. 1994;31(3):283–291. doi: 10.1002/mrm.1910310307.
2. Thesen S, et al. Prospective acquisition correction for head motion with image-based tracking for real-time fMRI. Magn Reson Med. 2000;44(3):457–465. doi: 10.1002/1522-2594(200009)44:3<457::aid-mrm17>3.0.co;2-r.
3. Jiang A, et al. Motion detection and correction in functional MR imaging. Hum Brain Mapp. 1995;3(3):224–235.
4. Johnstone T, et al. Motion correction and the use of motion covariates in multiple-subject fMRI analysis. Hum Brain Mapp. 2006;27(10):779–788. doi: 10.1002/hbm.20219.
5. Grootoonk S, et al. Characterization and correction of interpolation effects in the realignment of fMRI time series. Neuroimage. 2000;11(1):49–57. doi: 10.1006/nimg.1999.0515.
6. Andersson JL, et al. Modeling geometric deformations in EPI time series. Neuroimage. 2001;13(5):903–919. doi: 10.1006/nimg.2001.0746.
7. LaConte SM, Peltier SJ, Hu XP. Real-time fMRI using brain-state classification. Hum Brain Mapp. 2007;28(10):1033–1044. doi: 10.1002/hbm.20326.
8. LaConte SM. Decoding fMRI brain states in real-time. Neuroimage. 2011;56(2):440–454. doi: 10.1016/j.neuroimage.2010.06.052.
9. deCharms RC, et al. Control over brain activation and pain learned by using real-time functional MRI. Proc Natl Acad Sci U S A. 2005;102(51):18626–18631. doi: 10.1073/pnas.0505210102.
10. Drevets WC, et al. PET images of blood flow changes during anxiety: correction. Science. 1992;256(5064):1696. doi: 10.1126/science.256.5064.1696.
11. Posse S, et al. Real-time fMRI of temporolimbic regions detects amygdala activation during single-trial self-induced sadness. Neuroimage. 2003;18(3):760–768. doi: 10.1016/s1053-8119(03)00004-1.
12. Yoo SS, Jolesz FA. Functional MRI for neurofeedback: feasibility study on a hand motor task. Neuroreport. 2002;13(11):1377–1381. doi: 10.1097/00001756-200208070-00005.
13. deCharms RC, et al. Learned regulation of spatially localized brain activation using real-time fMRI. Neuroimage. 2004;21(1):436–443. doi: 10.1016/j.neuroimage.2003.08.041.
14. Caria A, et al. Regulation of anterior insular cortex activity using real-time fMRI. Neuroimage. 2007;35(3):1238–1246. doi: 10.1016/j.neuroimage.2007.01.018.
15. Haller S, Birbaumer N, Veit R. Real-time fMRI feedback training may improve chronic tinnitus. Eur Radiol. 2009. doi: 10.1007/s00330-009-1595-z.
16. Birbaumer N, Cohen LG. Brain-computer interfaces: communication and restoration of movement in paralysis. J Physiol. 2007;579(Pt 3):621–636. doi: 10.1113/jphysiol.2006.125633.
17. Zhang X, et al. Single subject task-related BOLD signal artifact in a real-time fMRI feedback paradigm. Hum Brain Mapp. 2011;32(4):592–600. doi: 10.1002/hbm.21046.
18. Magland JF, Tjoa CW, Childress AR. Spatio-temporal activity in real time (STAR): optimization of regional fMRI feedback. Neuroimage. 2011. doi: 10.1016/j.neuroimage.2010.12.085.
19. Smith SM, et al. Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage. 2004;23(Suppl 1):S208–S219. doi: 10.1016/j.neuroimage.2004.07.051.
20. Krishnan A, et al. Partial Least Squares (PLS) methods for neuroimaging: a tutorial and review. Neuroimage. 2011;56(2):455–475. doi: 10.1016/j.neuroimage.2010.07.034.
21. Langner O, et al. Presentation and validation of the Radboud Faces Database. Cogn Emot. 2010;24(8):1377–1388.
22. Lee TW, et al. Imitating expressions: emotion-specific neural substrates in facial mimicry. Soc Cogn Affect Neurosci. 2006;1(2):122–135. doi: 10.1093/scan/nsl012.
23. Levenson RW, Ekman P, Friesen WV. Voluntary facial action generates emotion-specific autonomic nervous system activity. Psychophysiology. 1990;27(4):363–384. doi: 10.1111/j.1469-8986.1990.tb02330.x.
