Abstract
The operations and processes that the human brain employs to achieve fast visual categorization remain a matter of debate. A first issue concerns the timing and locus of rapid visual categorization and the extent to which it can be performed in an early feed-forward pass of information through the visual system. A second issue involves the categorization of stimuli that do not reach visual awareness. There is disagreement over the degree to which these stimuli activate the same early mechanisms as stimuli that are consciously perceived. We employed continuous flash suppression (CFS), EEG recordings, and machine learning techniques to study visual categorization of seen and unseen stimuli. Our classifiers were able to predict the category of stimuli from the EEG recordings on seen trials but not on unseen trials. Rapid categorization of conscious images could be detected around 100 ms on occipital electrodes, consistent with a fast, feed-forward mechanism of target detection. For the invisible stimuli, however, CFS eliminated all traces of early processing. Our results support the idea of a fast mechanism of categorization and suggest that this early categorization process plays an important role in later, more subtle categorization and perceptual processes.
Keywords: visual awareness, rapid categorization, continuous flash suppression, EEG
Introduction
The human brain continuously performs visual categorization of stimuli in everyday life. Studies of rapid visual categorization suggest that the first 100–200 ms are crucial to this process, consistent with categorization during the first pass of visual processing (Potter and Faulconer, 1975; VanRullen and Thorpe, 2001b; Liu et al., 2009). In go/no-go tasks, for example, early event-related potentials (ERPs) at approximately 150 ms reflect the decision that a target was present in a natural scene (Thorpe et al., 1996; VanRullen and Thorpe, 2001b). This first rapid categorization appears to be similar across diverse categories such as means of transportation or living objects (Thorpe and Fabre-Thorpe, 2001; VanRullen and Thorpe, 2001a).
An open issue concerns the capacity of invisible stimuli to influence visual categorization and to activate different areas of the visual cortex. Experiments employing change blindness and inattentional blindness have clearly documented that important visual events impinging on our retina can go entirely unnoticed when attention is diverted from them (Mack and Rock, 1998; Simons and Chabris, 1999). On the other hand, it has also been demonstrated that visual category detection can be rapidly achieved even in the near absence of visual attention (Fei-Fei et al., 2002). Psychophysical studies using interocular suppression as well as neuroimaging studies have given conflicting reports on the degree to which suppressed information activates areas of the brain (Blake and Logothetis, 2002; Alais and Melcher, 2007). Visual information reaching the ventral stream appears to be deeply suppressed under interocular suppression, as shown by psychophysics (Zimba and Blake, 1983; Alais and Melcher, 2007), neurophysiological data in monkeys (Logothetis, 1998), and single-cell recordings in humans (Kreiman et al., 2005). However, it has recently been proposed that the ventral and dorsal streams differ in their processing of invisible pictures of animals and tools (Fang and He, 2005; Almeida et al., 2008, 2010). It was suggested that while dorsal stream neurons responded to invisible tools, which carry the characteristic of being "graspable," the categorization of invisible animals was largely suppressed in the ventral stream (Fang and He, 2005).
Even though the human and non-human primate brain can achieve visual categorization very rapidly, theories of visual awareness propose that a rapid feed-forward mechanism might not be sufficient for visual awareness, which might also require horizontal connections between different brain areas (Lamme, 2000) and/or late feedback projections from prefrontal areas (Sergent et al., 2005). The study of the neural correlates of invisible stimuli is valuable for disentangling the minimal set of processes that are necessary and sufficient for visual awareness to occur (Koch, 2003). Further investigation is needed to understand the relationship between the first, rapid feed-forward pass of information and the emergence of visual awareness.
We studied the timing of categorization of seen and unseen images of animals, tools, and scrambled control images employing continuous flash suppression (CFS; Tsuchiya and Koch, 2005), EEG recordings, and single trial analysis. Based on the EEG signal, our classifiers were able to predict the visual category on single trials of seen but not unseen stimuli. Fast categorization of conscious images could be detected around 100 ms on the occipital electrodes, suggesting a fast, feed-forward mechanism responsible for the rapid recognition of visual categories. For the unconscious images of animals and tools, however, no trace of a distinction between semantic categories was found in the EEG signal. The claim that processing of unseen tools but not of unseen animals can occur in the dorsal stream (Fang and He, 2005) could not be replicated with EEG recordings. Overall, these results provide further evidence that categorization occurs early in visual processing (VanRullen and Thorpe, 2001b; Hung et al., 2005) and that this early, initial, and (perhaps) approximate categorization plays a role in later semantic processing and in conscious awareness.
Materials and Methods
Participants
We recruited 12 students (2 female, 10 male; mean age 26.7 years, range 21–31) from the University of Buenos Aires. All subjects had normal or corrected-to-normal visual acuity and were tested for ocular dominance before running the experiment. All participants gave written informed consent and were naive about the aims of the experiment. All the experiments described in this paper were reviewed and approved by the ethics committee of the Centre of Medical Education and Clinical Research "Norberto Quirno" (CEMIC), qualified by the Department of Health and Human Services (HHS, USA).
Stimuli
For the current experiments we used 50 images of animals, 50 images of tools (all downloaded from the Internet), 100 phase-scrambled control images (Figure 1), and a set of 40 Mondrian images (Tsuchiya and Koch, 2005; Figure 2A). All images of animals and tools were converted into gray-scale images with a maximum brightness value of 0.8 for every pixel, while background pixels were set to a brightness value of 0.5 on a scale from 0 (black) to 1 (white). No images with strong emotional saliency (such as spiders, snakes, or guns) were used for this study. We created 100 phase-scrambled control images (one scrambled image for each animal and tool image) with the same spatial frequencies and mean luminance values as the animal and tool images. To generate these images we applied the Fourier transform to each picture of an animal, tool, or Mondrian and obtained the respective magnitude and phase matrices. We then reconstructed each image using the magnitudes of the animals and tools and the phases of the Mondrian images. Finally, we multiplied each pixel of the scrambled images by an appropriate constant to correct for any difference in mean luminance between the original images and their scrambled counterparts. A t-test comparing the mean luminance of the original images and the scrambled images showed no significant difference between the two groups (p = 0.6).
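For illustration, the scrambling step can be sketched in a few lines of Python/numpy (a minimal sketch, assuming each gray-scale image is a 2D array in [0, 1]; the final clipping is added here only to keep values displayable):

```python
import numpy as np

def phase_scramble(target, mondrian):
    """Combine the amplitude spectrum of a target image with the phase
    spectrum of a Mondrian, then rescale to match the target's mean luminance."""
    magnitude = np.abs(np.fft.fft2(target))    # target's spatial-frequency content
    phase = np.angle(np.fft.fft2(mondrian))    # Mondrian's phase structure
    scrambled = np.real(np.fft.ifft2(magnitude * np.exp(1j * phase)))
    # Multiply by a constant so mean luminance matches the original image
    scrambled *= target.mean() / scrambled.mean()
    return np.clip(scrambled, 0.0, 1.0)        # keep within the [0, 1] gray-scale range
```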
Procedures
Stimuli were presented on a PC with a CRT monitor (resolution 1024 × 768, 75 Hz refresh rate) using the Matlab Psychophysics toolbox (Brainard, 1997). Subjects were instructed to fixate on a central point while they viewed a series of pictures through a pair of red–green anaglyph glasses. Pictures appeared at the center of the screen, were viewed from a distance of 70 cm, and subtended 8 × 8 degrees of visual angle. To allow competition between stimuli, on each trial we randomly selected 10 Mondrians from the set of 40 and presented them to the red RGB channel of the image every eight frames (≈10 Hz, each Mondrian presentation lasting ~106 ms). Target animals, tools, and scrambled images were presented to the green RGB channel of the image. In this way, subjects wearing the anaglyph glasses saw the Mondrians with their dominant eye and the animals, tools, and scrambled images with their non-dominant eye.
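The dichoptic presentation amounts to channel-wise composition of each frame. A minimal Python sketch, assuming equal-sized gray-scale arrays (the actual rendering used the Psychophysics toolbox):

```python
import numpy as np

def anaglyph_frame(mondrian, target):
    """Compose one RGB frame for red-green anaglyph viewing: the Mondrian
    goes to the red channel (dominant eye), the target to the green channel
    (non-dominant eye); the blue channel stays empty."""
    frame = np.zeros(mondrian.shape + (3,))
    frame[..., 0] = mondrian   # red channel -> dominant eye
    frame[..., 1] = target     # green channel -> non-dominant eye
    return frame
```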
Throughout a trial, Mondrians changed dynamically every ~106 ms. Trials began with a period of 424–636 ms (4, 5, or 6 flashes) of Mondrian presentation, followed by the targets plus the Mondrians for 636 ms. Once the targets disappeared, four more Mondrians were flashed on the screen to avoid afterimages (Figure 2A). For the seen conditions targets were presented at high luminance and Mondrians at low luminance, whereas for the unseen conditions targets were presented at low luminance and Mondrians at high luminance. These luminance levels were chosen for each subject based on a visibility threshold detection task (see below). On each trial, subjects were asked to fixate at the center of the screen to avoid eye movements. Their task was to respond with the index or middle finger of their right hand whether a picture of an animal or a tool had appeared on the screen (2AFC). An inter-trial interval of 1, 1.5, or 2 s was used to avoid attentional expectations. The EEG experiment comprised eight conditions: four stimulus types (animals, tools, scrambled animals, and scrambled tools) by two visibility levels (seen and unseen). It consisted of 800 trials (100 trials per condition by 8 conditions) and lasted approximately 70 min.
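The resulting trial timeline can be summarized with the following Python sketch (names and structure are illustrative; all durations follow the values given above):

```python
import random

REFRESH_HZ = 75            # CRT refresh rate
FRAMES_PER_FLASH = 8       # one new Mondrian every eight frames
FLASH_MS = 1000 * FRAMES_PER_FLASH / REFRESH_HZ   # ~106.7 ms per flash

def trial_schedule():
    """Timeline of one trial, expressed in Mondrian flashes and ms."""
    pre_flashes = random.choice([4, 5, 6])    # 424-636 ms of Mondrians alone
    return {
        "pre_target_ms": pre_flashes * FLASH_MS,
        "target_ms": 6 * FLASH_MS,            # targets + Mondrians, ~636 ms
        "post_target_ms": 4 * FLASH_MS,       # trailing flashes against afterimages
        "iti_s": random.choice([1.0, 1.5, 2.0]),  # jittered inter-trial interval
    }
```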
Invisibility assessment
In a session prior to the EEG recordings, subjects performed a visibility threshold detection task. We presented the targets at six different luminance levels while the Mondrians were kept constant. The trial sequence was exactly the same as described above (see Figure 2A), with the only exception that on half the trials a blank green screen of the same luminance as the target images was presented to the non-dominant eye instead of a scrambled image. Subjects had to respond whether they had seen a target (animal or tool) or a blank screen. Using signal detection theory we calculated d' values for each luminance condition to obtain a measure of each subject's visibility threshold (see Figure 2C). Finally, we chose a luminance value that yielded a d' between −0.5 and 0.5. This luminance value was then assigned to the targets in the unseen conditions and to the Mondrians in the seen conditions of the EEG experiment. For all subjects, the final targets were presented against a green background (CIE coordinates x = 0.414, y = 0.391) with a maximum luminance of 4.8 cd/m². When presented at low luminance, the mean pixel intensity of the gray-scale targets was 0.26 (on a scale from 0 to 1), SD 0.02. For the gray-scale high luminance targets and Mondrians, the mean pixel intensity was 0.71, SD 0.06. CFS allowed us to present constant stimuli while subjects underwent two conditions: a visible condition in which pictures were consciously perceived, and an invisible condition in which participants were able to report neither the presence nor the identity of the suppressed stimuli.
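The d' computation follows standard signal detection theory. A minimal Python sketch (the log-linear correction for extreme rates is our assumption; the text does not specify how such rates were handled):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity for the target-vs-blank detection task."""
    # Log-linear correction keeps rates away from 0 and 1 (an assumption here)
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# A luminance level was retained for the unseen condition when -0.5 < d' < 0.5
```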
Preprocessing of EEG data
EEG activity was recorded on a dedicated PC at 1024 Hz from 128 electrode positions on a standard 10–20 montage, using a BrainVision electrode system. An additional electrode at the right ear lobe was used as reference. Datasets were bandpass filtered (1–120 Hz) and downsampled to 300 Hz, and independent component analysis (ICA) was run on the continuous datasets to detect components associated with eye blinks, eye movements, electrical noise, and muscular noise. The components resulting from the ICA decomposition were visually inspected, and those associated with eye blinks and eye movements were manually removed from the data. Channels containing artifact noise for long periods of time were interpolated (a maximum of four channels per dataset). The datasets were notch filtered at 50 Hz to remove electrical line noise, and the right-ear reference was digitally transformed into an average reference. The datasets were then epoched in synchrony with the beginning of the target presentation, and each epoch was baseline-corrected over a 400-ms window during fixation at the beginning of the trial. An automatic method was applied to discard trials with voltage exceeding 200 µV, transients exceeding 80 µV, or oculogram activity larger than 80 µV. The remaining trials were then separated according to the experimental conditions and averaged to create the ERPs. On average, 15% of trials were discarded after artifact removal. All preprocessing steps were performed using EEGLAB (Delorme and Makeig, 2004) and custom-made scripts in Matlab.
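This pipeline was implemented in EEGLAB and Matlab; as a rough modern analog, the same steps could be sketched with MNE-Python (file name, ICA settings, and excluded component indices are illustrative):

```python
import mne

# Illustrative file name; the actual recordings were processed in EEGLAB
raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)

raw.filter(1.0, 120.0)        # bandpass 1-120 Hz
raw.notch_filter(50.0)        # remove 50-Hz line noise
raw.resample(300)             # downsample to 300 Hz

# ICA to identify blink/eye-movement components
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]          # components flagged by visual inspection
ica.apply(raw)

raw.set_eeg_reference("average")   # re-reference from right ear lobe to average

# Epoch around target onset with a 400-ms pre-stimulus baseline
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id, tmin=-0.4, tmax=1.0,
                    baseline=(-0.4, 0.0),
                    reject=dict(eeg=200e-6),   # drop trials exceeding 200 µV
                    preload=True)
erp = epochs.average()        # per-condition ERPs would use epochs[condition]
```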
Statistical analysis of ERPs
To assess the earliest time point of visual categorization, we initially ran statistical comparisons between seen animals versus seen scrambled animals, and seen tools versus seen scrambled tools (Figure 3). In all cases we submitted each (channel, time) sample of the ERP calculated for each subject to a non-parametric rank-sum test comparing the two conditions across all subjects. This implied over 5000 comparisons for each pair of conditions. We filtered these multiple comparisons across time samples and recording sites with the following criteria. (1) We kept only samples with p < 0.01. (2) For each channel, a given time point was considered significant only if it was part of a cluster of six or more consecutive significant time points, spanning a 19.5-ms time window (Thorpe et al., 1996; Dehaene et al., 2001). (3) Each sample was considered significant only if, at the same time point, at least two neighboring channels were also significant.
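Criteria (1) and (2) can be sketched as follows in Python (a minimal sketch; the neighboring-channel criterion (3) would be applied on top of the returned mask, given a channel adjacency structure not shown here):

```python
import numpy as np
from scipy.stats import ranksums

def significant_mask(cond_a, cond_b, alpha=0.01, min_run=6):
    """cond_a, cond_b: (subjects, channels, times) per-subject ERPs.
    Returns a boolean (channels, times) mask after criteria (1) and (2)."""
    _, n_ch, n_t = cond_a.shape
    p = np.ones((n_ch, n_t))
    for ch in range(n_ch):
        for t in range(n_t):
            # Rank-sum test across subjects at this (channel, time) sample
            p[ch, t] = ranksums(cond_a[:, ch, t], cond_b[:, ch, t]).pvalue
    sig = p < alpha                       # criterion (1)
    mask = np.zeros_like(sig)
    for ch in range(n_ch):                # criterion (2): runs of >= min_run samples
        run = 0
        for t in range(n_t):
            run = run + 1 if sig[ch, t] else 0
            if run >= min_run:
                mask[ch, t - min_run + 1 : t + 1] = True
    return mask
```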
Single trial classification
We employed multivariate pattern analysis (Hanke et al., 2009) to decode visual information from the EEG recordings in single trials. All analyses were performed in Python using the PyMVPA software library. For each channel we assessed the time and frequency intervals within the EEG signals that carried the greatest amount of stimulus-related information and maximized the separability between stimulus categories. The amount of information was estimated as the accuracy of a classifier trained on single trials at predicting the stimulus category of future trials. A dimensionality reduction step was performed before each classification process via variable selection; for this study the selected variables were time intervals. Variable selection was conducted in order to discard information irrelevant for classification and improve the signal-to-noise ratio.
For each subject, the dataset consisted of 3D matrices of 300 sample points by 128 channels by 800 trials containing voltage values. For all analyses we used a sixfold cross-validation procedure to estimate classification accuracy. We refer to the "classification performance" of a subject as the average of the six classification accuracies obtained on the test set of each fold. At each iteration of the cross-validation scheme, the following variable selection procedure was applied to the training dataset. First, we computed the mean and SD for each timestep and class over all training trials. Then, for each timestep and each channel we ran a one-way ANOVA between the two classes to compare the differences in signal amplitude. This yielded a vector of 300 p-values (one for each sample point); the 300 time points correspond to the range 0–1000 ms after stimulus presentation. The 100 timesteps with the lowest p-values were selected for each channel and used as feature values to set up a final dataset in which each trial is a vector of 100 features. Finally, the resulting dataset was used to train a classifier with a support vector machine (SVM) algorithm (Schölkopf and Smola, 2001) and a linear kernel.
The corresponding test dataset for the given iteration of the cross-validation scheme was reduced to the 100 features selected on the training dataset. To avoid circularity during variable selection, the feature selection process was performed jointly with the cross-validation process at each step of the multivariate analysis (Olivetti et al., 2010). For each fold of the cross-validation process the SVM algorithm produced a classifier for each channel, and the related accuracy on the test set was used to evaluate the informativeness of the channel. These classification processes yielded a measure of the information contained in each channel for discriminating stimulus categories, a "single-channel based decoding." To obtain a global measure of classification, or "all-channel based decoding" (see Results), we applied the same procedure except that during training the 128 channels were concatenated and 12800 features were selected, i.e., the best 100 features for each of the 128 channels. The classification performance of an SVM classifier was estimated by means of cross-validation.
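The nested selection-plus-classification scheme for a single channel can be sketched with scikit-learn (the original analyses used PyMVPA; the library here is a substitute, while the 100 features, 6 folds, ANOVA scoring, and linear SVM follow the text):

```python
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def channel_decoding(X, y, k=100, folds=6):
    """X: (trials, timepoints) voltages for one channel; y: category labels.
    Selecting the k lowest-p timepoints by one-way ANOVA is nested inside
    each fold, so test trials never influence feature selection."""
    clf = make_pipeline(
        SelectKBest(f_classif, k=k),   # per-timepoint ANOVA feature selection
        SVC(kernel="linear"),          # linear-kernel SVM
    )
    return cross_val_score(clf, X, y, cv=folds).mean()
```

For the "all-channel based decoding," X would instead hold the concatenation of the per-channel selections (12800 features per trial).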
Results
Behavioral responses
We ran eight Bonferroni-corrected one-sample t-tests (one for each condition, Figure 2B) against the null hypothesis that subjects were performing at chance level. Subjects were above 90% accuracy at discriminating animals from tools in the seen conditions (corrected p < 0.05). For the remaining six conditions, none of the tests rejected the null hypothesis, i.e., responses did not differ from chance level.
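A minimal Python sketch of this test (accuracies_by_condition is a hypothetical mapping from condition name to per-subject accuracies; chance is 0.5 for the 2AFC):

```python
import numpy as np
from scipy.stats import ttest_1samp

def bonferroni_vs_chance(accuracies_by_condition, chance=0.5, n_tests=8):
    """One-sample t-test of per-subject accuracies against chance,
    Bonferroni-corrected over the eight conditions."""
    corrected = {}
    for name, acc in accuracies_by_condition.items():
        p = ttest_1samp(np.asarray(acc), chance).pvalue
        corrected[name] = min(1.0, p * n_tests)   # Bonferroni correction
    return corrected
```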
ERP analysis
First, we found three main components that distinguished targets (animals or tools) from scrambled images (Figure 3; see Materials and Methods for details on the statistical criteria for ERP comparisons). The earliest differences between the ERPs of seen animals and seen scrambled animals, and of seen tools and seen scrambled tools, were observed for a P1 component at occipital electrodes. These components had a statistically significant onset starting at 93 ms for animals and 127 ms for tools. These results are in agreement with previous studies showing early categorization around 100 ms (Thorpe et al., 1996; Fabre-Thorpe et al., 2001; VanRullen and Thorpe, 2001b; Rousselet et al., 2007). We also observed a second, N2 component with a peak starting at ∼230 ms and a third, late component with a peak starting at ∼490 ms. Pictures of seen animals or seen tools produced widespread activation throughout the cortex as compared to the seen scrambled controls.
Second, we assessed the EEG signal associated with the presentation of unseen pictures of animals or tools (Figure 4). We observed no EEG correlates of unseen stimuli as compared with their unseen scrambled controls. Even the earliest P1 components were eliminated under CFS, suggesting that under interocular suppression unseen stimuli were completely suppressed before 100 ms or that they were too weak to be detected with ERPs.
Third, we found no differences in the ERPs for the subtler semantic categorization of animals versus tools (Figure 5). For the seen targets the EEG activity related to the two categories was almost identical, with only an N2 component that (even though not statistically significant) could still be observed around 200 ms after subtraction of the two categories. For the unseen targets even the earliest components were not present.
The failure to categorize unseen stimuli could have been simply a matter of insufficient signal-to-noise ratio: the target images could have been too weak to produce a reliable EEG signal. To rule out this possibility we ran a control experiment (n = 6) with two conditions, presenting the same target and scrambled images as in the unseen condition: (1) without presenting any Mondrian and (2) with low luminance Mondrians presented to the dominant eye. We found that the target animals and tools produced a significant cortical response at the P1 and N2 components (Figure 6). This result implies that the lack of signal in the unseen conditions (Figure 4) cannot be attributed to weak visual stimulation. On the other hand, it remains possible that some form of interaction between the CFS Mondrian sequence and our already weak target stimuli could have wiped out the corresponding signals at an early cortical stage. While this possibility is, to some extent, compatible with our conclusions, it would be useful in future studies to observe the ERP signals (or lack thereof) generated by unseen stimuli that are physically matched to the consciously seen ones (e.g., in a condition where the same target stimulus sometimes becomes conscious and generates an ERP and sometimes remains unconscious with no associated ERP).
EEG single trial analysis
We employed single trial multivariate analysis for all the conditions in our data (Figure 7). We designed our classification approach with an emphasis on avoiding biases in the process of feature extraction and parameter estimation, implementing a nested cross-validation scheme (Olivetti et al., 2010). For the seen conditions the classifiers were able to discriminate animals from scrambled animals with an accuracy of 72% and tools from scrambled tools with an accuracy of 66%, both well above chance (p < 0.0001). An accuracy of 55% was obtained for the comparison of seen animals versus seen tools, which was statistically suggestive (p = 0.0518) but not significant at the confidence level of 0.01 adopted in this work. This suggestive classification performance could not be attributed to low-level image statistics, as the comparison of seen scrambled animals versus seen scrambled tools was at chance level. The channels contributing most to classification performance are shown in Figure 8. For the discrimination between seen animals, tools, and their scrambled controls, the occipital electrodes conveyed the highest discriminative information. For the seen animal versus tool classification our results were lower than in previous EEG and MEG studies (Murphy et al., 2009; Simanova et al., 2010; Chan et al., 2011). We speculate that our results might differ from these studies for two reasons. First, the relatively small number of trials used in our study (100 per category) compared to previous work (e.g., Murphy et al., 2009, 2010) could have made the classifier and the estimation of its accuracy less precise. Second, the additive noise effect of the low luminance Mondrians in the seen conditions might have reduced the signal-to-noise ratio, thus decreasing the accuracy of the classifier.
Discussion
The primate visual system can accomplish complex processing of visual stimuli in a fraction of a second. Fast visual categorization can occur in human and non-human primates as early as 100 ms after stimulus presentation (Perrett et al., 1982; Oram and Perrett, 1992; Liu et al., 2009). The nature of the connections among neurons from the retina to the inferotemporal cortex (Felleman and Van Essen, 1991), and the few spikes available for information to traverse all these areas, suggest that fast categorization is performed in a single feed-forward pass of information (Thorpe et al., 1996; VanRullen and Thorpe, 2001a,b).
Our results support this view, as they show a first early categorization of meaningful pictures starting at 90–120 ms. Consistent with studies showing that a visual scene can be rapidly detected (Thorpe et al., 1996; Fei-Fei et al., 2002), the presentation of a meaningful picture produced rapid, widespread activation throughout the cortex as compared to the meaningless scrambled pictures. In our data, identifiable targets of animals or tools generated an event-related component as early as 100 ms, suggesting that initial visual categorization might originate from the first pass of processing in visual cortex. These early processes could not be attributed, in our data, to spatial frequency or mean luminance differences between targets and scrambled controls.
The absence of a difference in ERPs between seen animals and tools, along with the low single trial classification performance, suggests that finer semantic categorization takes place at a later stage of the processing hierarchy. This idea is in accordance with the latency of the N400 ERP component, usually in the time window of 200–500 ms after stimulus onset and associated with semantic processing in previous studies (Kutas and Hillyard, 1980; Dehaene, 1995; Pulvermuller et al., 1996; Kiefer, 2001). Recent studies employing single trial analysis have shown better classification performance at discriminating between seen animal and tool categories (Murphy et al., 2009, 2010; Simanova et al., 2010; Chan et al., 2011) and between animals and vehicles (VanRullen and Thorpe, 2001b). Our results differ slightly from these previous studies, as our classifier found only suggestive evidence of a discrimination between these two categories, with classification performance at the limit of chance level. We can only speculate that these differences might be due to the additive noise effect of the low luminance Mondrians accompanying the targets, to the lower number of trials, or to possible biases in the feature selection and classification process.
Some theories state that visual awareness is linked to late stages of processing in the ventral stream (Milner and Goodale, 1995; Koch, 1996; Bar and Biederman, 1999). If this is the case, is there a preliminary categorization process for unseen stimuli? Backward masking studies have shown that for unseen stimuli the second stage of processing of information around 250 ms is eliminated while the first pass of information survives suppression (Schiller and Chorover, 1966; Dehaene et al., 2001; Bacon-Macé et al., 2005; Melloni et al., 2007).
Models of binocular suppression propose that neural competitive interactions occur at several levels of the visual processing hierarchy (Tong, 2001; Blake and Logothetis, 2002; Tong et al., 2006). The idea that interocular suppression starts very early in visual processing (Tong and Engel, 2001; Haynes et al., 2005; Wunderlich et al., 2005) and that there exists an almost complete suppression of the information conveyed by the non-dominant stimuli in ventral areas of the visual cortex is supported by psychophysics (Moradi et al., 2005; Alais and Melcher, 2007), fMRI (Tong et al., 1998; Pasley et al., 2004; Hesselmann and Malach, 2011), single-cell recordings in monkeys (Sheinberg and Logothetis, 1997), and single-cell recordings in human beings (Kreiman et al., 2002, 2005).
On the other hand, recent experiments have shown that during interocular competition complex suppressed stimuli can nonetheless generate behavioral effects, suggesting processing of invisible stimuli beyond striate cortex (Kovács et al., 1996; Andrews and Blakemore, 1999; Alais and Parker, 2006; Jiang et al., 2006; Stein et al., 2011). It has also been proposed that weak category-specific neural activity can be detected during CFS using MEG/EEG (Jiang et al., 2009; Sterzer et al., 2009) and multivariate analysis of fMRI data (Sterzer et al., 2008).
Our results suggest that CFS suppresses information even during the first pass, and that unseen animal and tool categories are suppressed early in visual cortex. Previous reports on CFS have found residual processing of information in the dorsal stream for unseen tool pictures but not for unseen animal pictures (Fang and He, 2005; Almeida et al., 2008). In these studies the results were explained in terms of the "graspable" nature of tool pictures and the difference in interocular suppression between the ventral and the dorsal stream (Almeida et al., 2010). We were not able to corroborate this hypothesis, as we did not observe any evidence of cortical activity in the parietal channels associated with the perception of manipulable objects, a result in agreement with a recent study by Hesselmann and Malach (2011).
An important reason for the discrepancies between previous studies and our results might be related to differences in the depth of suppression (Tsuchiya et al., 2006; Alais and Melcher, 2007). In the studies by Fang and He (2005) and Almeida et al. (2008) the authors presented the target stimuli on the screen for 200 ms. Target pictures were rendered invisible by presenting two flashes of Mondrians in each trial, the minimal number of Mondrian flashes for a paradigm to be considered CFS. This might have brought their paradigms closer to backward masking than to classical CFS (Tsuchiya and Koch, 2005) in terms of depth of suppression (Tsuchiya et al., 2006). In our study, however, we presented a continuous stream of flashes before and after the picture targets appeared on the screen, which rendered suppression stronger (but see Jiang et al., 2006, for a design with several Mondrian presentations). Under these conditions both unseen animal and tool targets were equally suppressed. Alternatively, it could be argued that our results arise from differences in the nature and temporal dynamics of EEG and fMRI recordings: the small changes in BOLD signal generated by unseen tools in the parietal cortex reported by Fang and He (2005) might not have any correlate in any ERP component and would therefore go undetected.
Conclusion
Our results suggest that rapid categorization is suppressed under CFS. The fact that these processes can be canceled by deep binocular suppression points to competition even within the first pass of visual recognition. Under some conditions, with only minimal competition, some weak categorization might occur, but our results question the robustness of these unconscious processes to strong suppression. Thus, even target detection is not completely automatic, but can be blocked by competitive interactions in early vision.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
We are very grateful to Mariano Sigman for allowing us to access the EEG facilities at the Laboratory of Integrative Neuroscience of the University of Buenos Aires. We are also thankful to Marco Buiatti who helped us to automatize the process of EEG artifact removal and to Guido Hesselmann for his comments on an early version of the manuscript. This work has been realized also thanks to the support from the Provincia Autonoma di Trento and the Fondazione Cassa di Risparmio di Trento e Rovereto.
References
- Alais D., Melcher D. (2007). Strength and coherence of binocular rivalry depends on shared stimulus complexity. Vision Res. 47, 269–279. doi: 10.1016/j.visres.2006.09.003
- Alais D., Parker A. (2006). Independent binocular rivalry processes for motion and form. Neuron 52, 911–920. doi: 10.1016/j.neuron.2006.10.027
- Almeida J., Mahon B. Z., Caramazza A. (2010). The role of the dorsal visual processing stream in tool identification. Psychol. Sci. 21, 772–778. doi: 10.1177/0956797610371343
- Almeida J., Mahon B. Z., Nakayama K., Caramazza A. (2008). Unconscious processing dissociates along categorical lines. Proc. Natl. Acad. Sci. U.S.A. 105, 15214–15218. doi: 10.1073/pnas.0805867105
- Andrews T. J., Blakemore C. (1999). Form and motion have independent access to consciousness. Nat. Neurosci. 2, 405–406. doi: 10.1038/8068
- Bacon-Macé N., Macé M. J. M., Fabre-Thorpe M., Thorpe S. J. (2005). The time course of visual processing: backward masking and natural scene categorisation. Vision Res. 45, 1459–1469. doi: 10.1016/j.visres.2005.01.004
- Bar M., Biederman I. (1999). Localizing the cortical region mediating visual awareness of object identity. Proc. Natl. Acad. Sci. U.S.A. 96, 1790–1793. doi: 10.1073/pnas.96.4.1790
- Blake R., Logothetis N. K. (2002). Visual competition. Nat. Rev. Neurosci. 3, 13–21. doi: 10.1038/nrn701
- Brainard D. H. (1997). The psychophysics toolbox. Spat. Vis. 10, 433–436. doi: 10.1163/156856897X00357
- Chan A. M., Halgren E., Marinkovic K., Cash S. S. (2011). Decoding word and category-specific spatiotemporal representations from MEG and EEG. Neuroimage 54, 3028–3039. doi: 10.1016/j.neuroimage.2010.07.015
- Dehaene S. (1995). Electrophysiological evidence for category-specific word processing in the normal human brain. Neuroreport 6, 2153–2157. doi: 10.1097/00001756-199511000-00014
- Dehaene S., Naccache L., Cohen L., Bihan D. L., Mangin J.-F., Poline J.-B., Riviere D. (2001). Cerebral mechanisms of word masking and unconscious repetition priming. Nat. Neurosci. 4, 752–758. doi: 10.1038/89551
- Delorme A., Makeig S. (2004). EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 134, 9–21. doi: 10.1016/j.jneumeth.2003.10.009
- Fabre-Thorpe M., Delorme A., Marlot C., Thorpe S. (2001). A limit to the speed of processing in ultra-rapid visual categorization of novel natural scenes. J. Cogn. Neurosci. 13, 171–180. doi: 10.1162/089892901564234
- Fang F., He S. (2005). Cortical responses to invisible objects in the human dorsal and ventral pathways. Nat. Neurosci. 8, 1380–1385. doi: 10.1038/nn1537
- Fei-Fei L., VanRullen R., Koch C., Perona P. (2002). Rapid natural scene categorization in the near absence of attention. Proc. Natl. Acad. Sci. U.S.A. 99, 9596–9601. doi: 10.1073/pnas.092277599
- Felleman D. J., Van Essen D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cereb. Cortex 1, 1–47. doi: 10.1093/cercor/1.1.1
- Hanke M., Halchenko Y. O., Sederberg P. B., Olivetti E., Fründ I., Rieger J. W., Herrmann C. S., Haxby J. V., Hanson S. J., Pollmann S. (2009). PyMVPA: a unifying approach to the analysis of neuroscientific data. Front. Neuroinform. 3:3. doi: 10.3389/neuro.11.003.2009
- Haynes J.-D., Deichmann R., Rees G. (2005). Eye-specific effects of binocular rivalry in the human lateral geniculate nucleus. Nature 438, 496–499. doi: 10.1038/nature04169
- Hesselmann G., Malach R. (2011). The link between fMRI-BOLD activation and perceptual awareness is "stream-invariant" in the human visual system. Cereb. Cortex. doi: 10.1093/cercor/bhr085 [Epub ahead of print]
- Hung C. P., Kreiman G., Poggio T., DiCarlo J. J. (2005). Fast readout of object identity from macaque inferior temporal cortex. Science 310, 863–866. doi: 10.1126/science.1116739
- Jiang Y., Costello P., Fang F., Huang M., He S. (2006). A gender- and sexual orientation-dependent spatial attentional effect of invisible images. Proc. Natl. Acad. Sci. U.S.A. 103, 17048–17052. doi: 10.1073/pnas.0605678103
- Jiang Y., Shannon R. W., Vizueta N., Bernat E. M., Patrick C. J., He S. (2009). Dynamics of processing invisible faces in the brain: automatic neural encoding of facial expression information. Neuroimage 44, 1171–1177. doi: 10.1016/j.neuroimage.2008.09.038
- Kiefer M. (2001). Perceptual and semantic sources of category-specific effects: event-related potentials during picture and word categorization. Mem. Cognit. 29, 100–116. doi: 10.3758/BF03195745
- Koch C. (1996). Towards the neuronal correlate of visual awareness. Curr. Opin. Neurobiol. 6, 158–164. doi: 10.1016/S0959-4388(96)80068-7
- Koch C. (2003). The Quest for Consciousness: A Neurobiological Approach. Englewood, CO: Roberts and Company Publishers.
- Kovács I., Papathomas T. V., Yang M., Fehér A. (1996). When the brain changes its mind: interocular grouping during binocular rivalry. Proc. Natl. Acad. Sci. U.S.A. 93, 15508–15511. doi: 10.1073/pnas.93.26.15508
- Kreiman G., Fried I., Koch C. (2002). Single-neuron correlates of subjective vision in the human medial temporal lobe. Proc. Natl. Acad. Sci. U.S.A. 99, 8378–8383. doi: 10.1073/pnas.072194099
- Kreiman G., Fried I., Koch C. (2005). "Responses of single neurons in the human brain during flash suppression," in Binocular Rivalry, eds Alais D., Blake R. (Cambridge, MA: MIT Press), 213–230.
- Kutas M., Hillyard S. A. (1980). Reading senseless sentences: brain potentials reflect semantic incongruity. Science 207, 203–205. doi: 10.1126/science.7350657
- Lamme V. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends Neurosci. 23, 571–579. doi: 10.1016/S0166-2236(00)01657-X
- Liu H., Agam Y., Madsen J. R., Kreiman G. (2009). Timing, timing, timing: fast decoding of object information from intracranial field potentials in human visual cortex. Neuron 62, 281–290. doi: 10.1016/j.neuron.2009.02.025
- Logothetis N. K. (1998). Single units and conscious vision. Philos. Trans. R. Soc. Lond. B Biol. Sci. 353, 1801–1818. doi: 10.1098/rstb.1998.0333
- Mack A., Rock I. (1998). Inattentional Blindness. Cambridge, MA: MIT Press.
- Melloni L., Molina C., Pena M., Torres D., Singer W., Rodriguez E. (2007). Synchronization of neural activity across cortical areas correlates with conscious perception. J. Neurosci. 27, 2858–2865. doi: 10.1523/JNEUROSCI.4623-06.2007
- Milner D., Goodale M. (1995). The Visual Brain in Action, 2nd Edn. Oxford: Oxford University Press.
- Moradi F., Koch C., Shimojo S. (2005). Face adaptation depends on seeing the face. Neuron 45, 169–175. doi: 10.1016/j.neuron.2004.12.018
- Murphy B., Baroni M., Poesio M. (2009). "EEG responds to conceptual stimuli and corpus semantics," in Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP '09 (Stroudsburg, PA: Association for Computational Linguistics), 619–627.
- Murphy B., Poesio M., Bovolo F., Bruzzone L., Dalponte M., Lakany H. (2010). EEG decoding of semantic category reveals distributed representations for single concepts. Brain Lang. 117, 12–22. doi: 10.1016/j.bandl.2010.09.013
- Olivetti E., Mognon A., Greiner S., Avesani P. (2010). "Brain decoding: biases in error estimation," in First Workshop on Brain Decoding: Pattern Recognition Challenges in Neuroimaging (Istanbul), 40–43.
- Oram M. W., Perrett D. I. (1992). Time course of neural responses discriminating different views of the face and head. J. Neurophysiol. 68, 70–84.
- Pasley B. N., Mayes L. C., Schultz R. T. (2004). Subcortical discrimination of unperceived objects during binocular rivalry. Neuron 42, 163–172. doi: 10.1016/S0896-6273(04)00155-2
- Perrett D. I., Rolls E. T., Caan W. (1982). Visual neurones responsive to faces in the monkey temporal cortex. Exp. Brain Res. 47, 329–342. doi: 10.1007/BF00239352
- Potter M. C., Faulconer B. A. (1975). Time to understand pictures and words. Nature 253, 437–438. doi: 10.1038/253437a0
- Pulvermuller F., Preissl H., Lutzenberger W., Birbaumer N. (1996). Brain rhythms of language: nouns versus verbs. Eur. J. Neurosci. 8, 937–941. doi: 10.1111/j.1460-9568.1996.tb01580.x
- Rousselet G. A., Macé M. J., Thorpe S. J., Fabre-Thorpe M. (2007). Limits of event-related potential differences in tracking object processing speed. J. Cogn. Neurosci. 19, 1241–1258. doi: 10.1162/jocn.2007.19.8.1241
- Schiller P. H., Chorover S. L. (1966). Metacontrast: its relation to evoked potentials. Science 153, 1398–1400. doi: 10.1126/science.153.3742.1398
- Schölkopf B., Smola A. J. (2001). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning), 1st Edn. Cambridge, MA: The MIT Press.
- Sergent C., Baillet S., Dehaene S. (2005). Timing of the brain events underlying access to consciousness during the attentional blink. Nat. Neurosci. 8, 1391–1400. doi: 10.1038/nn1549
- Sheinberg D. L., Logothetis N. K. (1997). The role of temporal cortical areas in perceptual organization. Proc. Natl. Acad. Sci. U.S.A. 94, 3408–3413. doi: 10.1073/pnas.94.7.3408
- Simanova I., van Gerven M., Oostenveld R., Hagoort P. (2010). Identifying object categories from event-related EEG: toward decoding of conceptual representations. PLoS ONE 5, e14465. doi: 10.1371/journal.pone.0014465
- Simons D. J., Chabris C. F. (1999). Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception 28, 1059–1074. doi: 10.1068/p2952
- Stein T., Senju A., Peelen M. V., Sterzer P. (2011). Eye contact facilitates awareness of faces during interocular suppression. Cognition 119, 307–311. doi: 10.1016/j.cognition.2011.01.008
- Sterzer P., Haynes J.-D., Rees G. (2008). Fine-scale activity patterns in high-level visual areas encode the category of invisible objects. J. Vis. 8, 1–12. doi: 10.1167/8.10.1
- Sterzer P., Jalkanen L., Rees G. (2009). Electromagnetic responses to invisible face stimuli during binocular suppression. Neuroimage 46, 803–808. doi: 10.1016/j.neuroimage.2009.02.046
- Thorpe S., Fize D., Marlot C. (1996). Speed of processing in the human visual system. Nature 381, 520–522. doi: 10.1038/381520a0
- Thorpe S. J., Fabre-Thorpe M. (2001). Seeking categories in the brain. Science 291, 260–263. doi: 10.1126/science.1058249
- Tong F. (2001). Competing theories of binocular rivalry: a possible resolution. Brain Mind 2, 55–83. doi: 10.1023/A:1017942718744
- Tong F., Engel S. A. (2001). Interocular rivalry revealed in the human cortical blind-spot representation. Nature 411, 195–199. doi: 10.1038/35075583
- Tong F., Meng M., Blake R. (2006). Neural bases of binocular rivalry. Trends Cogn. Sci. 10, 502–511. doi: 10.1016/j.tics.2006.09.003
- Tong F., Nakayama K., Vaughan J. T., Kanwisher N. (1998). Binocular rivalry and visual awareness in human extrastriate cortex. Neuron 21, 753–759. doi: 10.1016/S0896-6273(00)80592-9
- Tsuchiya N., Koch C. (2005). Continuous flash suppression reduces negative afterimages. Nat. Neurosci. 8, 1096–1101. doi: 10.1038/nn1500
- Tsuchiya N., Koch C., Gilroy L. A., Blake R. (2006). Depth of interocular suppression associated with continuous flash suppression, flash suppression, and binocular rivalry. J. Vis. 6, 1068–1078. doi: 10.1167/6.10.6
- VanRullen R., Thorpe S. J. (2001a). Is it a bird? Is it a plane? Ultra-rapid visual categorisation of natural and artifactual objects. Perception 30, 655–668. doi: 10.1068/p3029
- VanRullen R., Thorpe S. J. (2001b). The time course of visual processing: from early perception to decision-making. J. Cogn. Neurosci. 13, 454–461. doi: 10.1162/08989290152001880
- Wunderlich K., Schneider K. A., Kastner S. (2005). Neural correlates of binocular rivalry in the human lateral geniculate nucleus. Nat. Neurosci. 8, 1595–1602. doi: 10.1038/nn1554
- Zimba L. D., Blake R. (1983). Binocular rivalry and semantic processing: out of sight, out of mind. J. Exp. Psychol. Hum. Percept. Perform. 9, 807–815. doi: 10.1037/0096-1523.9.5.807