Proc Natl Acad Sci USA. 2016 Jun 20;113(27):7459–7464. doi: 10.1073/pnas.1520027113

Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli

Timothy F. Brady, Viola S. Störmer, and George A. Alvarez
PMCID: PMC4941470  PMID: 27325767

Significance

Visual working memory is the cognitive system that holds visual information in an active state, making it available for cognitive processing and protecting it against interference. Here, we demonstrate that visual working memory has a greater capacity than previously measured. In particular, we use EEG to show that, contrary to existing theories, enhanced performance with real-world objects relative to simple stimuli in short-term memory tasks is reflected in active storage in working memory and is not entirely due to the independent usage of episodic long-term memory systems. These data demonstrate that working memory and its capacity limitations are dependent upon our knowledge. Thus, working memory is not fixed-capacity; instead, its capacity is dependent on exactly what is being remembered.

Keywords: working memory capacity, contralateral delay activity, visual memory, visual short-term memory, visual long-term memory

Abstract

Visual working memory is the cognitive system that holds visual information active to make it resistant to interference from new perceptual input. Information about simple stimuli—colors and orientations—is encoded into working memory rapidly: In under 100 ms, working memory "fills up," revealing a stark capacity limit. However, for real-world objects, the same behavioral limits do not hold: With increasing encoding time, people store more real-world objects and do so with more detail. This boost in performance for real-world objects is generally assumed to reflect the use of a separate episodic long-term memory system, rather than working memory. Here we show that this behavioral increase in capacity with real-world objects is not solely due to the use of separate episodic long-term memory systems. In particular, we show that this increase is a result of active storage in working memory, as shown by directly measuring neural activity during the delay period of a working memory task using EEG. These data challenge fixed-capacity working memory models and demonstrate that working memory and its capacity limitations are dependent upon our existing knowledge.


Visual working memory is a system used to actively store and manipulate visual sensory information (1). This memory system is severely limited in capacity (2), and these capacity limits are closely related to measures of intelligence and academic achievement (3, 4), suggesting that working memory may be a core cognitive ability that underlies, and constrains, our ability to process information across domains (5).

To date, the vast majority of studies on visual working memory have focused on memory for simple stimuli such as colored squares, oriented lines, or novel shapes, all stimuli about which participants have no background knowledge or expectations. These simple, meaningless stimuli are assumed to best assess the core capacity of working memory because they have no semantic associations and are repeated from trial to trial, which minimizes participants’ ability to use other memory systems, such as episodic visual long-term memory. Episodic visual long-term memory is the process of forming memory traces and later retrieving them without continued active maintenance and can be used at any time scale (even with brief delays). Contributions from this system, which operates best with conceptually meaningful stimuli and when there is little interference from items repeating across trials (6–8), are thought to be minimal in working memory tasks that use simple, meaningless stimuli.

In experiments using simple stimuli, working memory is often estimated to have a fixed capacity (of approximately three or four items’ worth of information) no matter how long participants are given to encode those items (9). In particular, participants’ memory performance remains at the same limit regardless of whether the stimuli are presented for 200 ms or for several seconds during encoding (10). This is consistent with the theoretical proposal that the primary function of visual working memory is to span across saccades so that important items can be located after the eyes have moved (11), which requires a memory system that can be “filled up” within the span of a single eye fixation.

However, in contrast to the fixed capacity and fast encoding observed with simple stimuli, studies using similar tasks with real-world objects have found that participants remember more items when given more time, without an obvious capacity limit (12–14). In addition, familiarity or stored knowledge can also increase working memory capacity estimates (15–18). Furthermore, paradigms where interference is minimized with real-world objects also show that participants can remember a very large number of objects, also without reaching a fixed capacity limit (19). These differences in capacity estimates for simple stimuli and real-world objects could simply be due to the fact that working memory operates equally well on both stimulus types, but real-world objects and other familiar stimuli can additionally benefit from the high-capacity episodic long-term memory system. Alternatively, it is possible that, at least to some extent, the active working memory system has a different (and higher) capacity for real-world stimuli than for simple stimuli.

Most researchers have generally assumed that lack of process purity explains these differences in memory capacities for real-world objects relative to simple stimuli; in particular, using complex, meaningful stimuli and allowing for long encoding times or long retention times are thought to allow for the use of episodic long-term memory systems (20). None of the existing studies of memory for real-world objects were able to directly examine whether the increase in memory performance was due to increases in the active storage of information in working memory or to the use of other memory systems, such as episodic long-term memory.

In the present study we used electrophysiological recordings to directly test whether or not the maintenance of real-world objects relies on the same active working memory system as simple stimuli (in particular, colors). Visual working memory is associated with persistent neural activity in frontal and parietal regions (21, 22), which may reflect either actual information storage (23) or attentional refreshing mechanisms required to maintain information storage in lower-level regions (24); in either case, such activity is directly associated with active maintenance of working memory representations. In humans, one particularly strong marker of active working memory maintenance is the contralateral-delay activity (CDA), a sustained negative deflection in the event-related potential (ERP) that occurs during the retention period over the hemisphere contralateral to the to-be-remembered items and is typically largest over parietal-occipital scalp sites (25–28). The CDA provides a neural signature of active storage in working memory (29): Its magnitude increases with the number of items participants hold in working memory (27) and decreases when items are dropped from working memory (28, 30); it correlates with individuals’ memory capacity (27); and, most importantly, it disappears when items have been consolidated into episodic long-term memory (31). Thus, the CDA is a measure of how much information is actively maintained in visual working memory at each moment, rather than of encoding or consolidation into episodic long-term memory systems (27, 30, 31).

In the present study, by using this electrophysiological marker of active maintenance, we directly assess the role of visual working memory in storing real-world objects. In a first experiment, we vary how long participants have to encode either simple stimuli (colors) or real-world objects in a working memory task and find a behavioral advantage for real-world objects at long encoding times (experiment 1). We then examine whether the increase in memory performance for real-world objects at longer encoding times was due to encoding in episodic long-term memory systems or whether it would be reflected in active working memory maintenance, as indexed by the CDA (25, 27, 28). If the CDA amplitude increased with increasing behavioral performance, this would suggest that this additional accumulation of information was a result of active storage in visual working memory rather than an artifact of people using a different memory system (e.g., episodic long-term memory). Next, we directly compare active memory maintenance (i.e., CDA amplitude) for colors vs. real-world objects at long encoding times. We find that the behavioral advantage for real-world objects with additional encoding time is supported by the active working memory system, relative to performance at short encoding times (experiment 2) and relative to memory for colors (experiment 3).

Results

Experiment 1: Increased Encoding Time Improves Memory for Real-World Objects but Not Colors.

In our first experiment, we asked participants to remember either real-world objects or simple colors while doing a simultaneous verbal task (rehearsing two digits) to ensure that visual memory, not verbal memory, was used. Participants were asked to remember a set of either colors or real-world objects and then given a forced-choice comparison testing memory for a single item after a brief delay. Our main manipulation was to vary how long participants had to encode the stimuli, to assess how encoding time affects performance for simple stimuli vs. for real-world objects (Fig. 1A).

Fig. 1.

Effects of encoding time on colors vs. realistic objects. (A) Participants saw either colors or objects, presented for 200, 1,000, or 2,000 ms. On each trial, they also performed a verbal interference task (rehearsing two digits). (B) The number of colors, objects, and objects with detail remembered as a function of encoding time. Note that the zero encoding time point is theoretical, because people cannot remember anything if they did not see any stimuli. The number of colors remembered (Left) plateaus with less than 200-ms encoding time; the number of objects remembered (Middle) and the number of detailed objects remembered (Right) both continue to increase with more encoding time.

Participants performed well on the digit task, entering both digits correctly on 93% of trials (±2.6% SEM). Memory performance for each stimulus type is shown in Fig. 1B. Replicating previous work (9, 10), we found that 200 ms was sufficient time to encode the colors: Little to no additional information was accumulated over the remaining 1,800 ms [the slope from 200 ms to 2,000 ms was not significantly different from zero; t(11) = 0.61, P = 0.555]. By contrast, the number of objects’ worth of information remembered increased reliably from 200 ms to 2,000 ms [t(11) = 4.38, P = 0.001], as did the number remembered with detail [t(11) = 3.38, P = 0.006].

Comparing only the conditions that estimate how many items are remembered at all (i.e., the large change conditions; Fig. 1B, Middle), there was a significant interaction between the estimated capacity for objects and the estimated capacity for colors as a function of additional encoding time [three encoding times × two stimulus types; F(2,71) = 4.72, P = 0.019], indicating that the capacity for objects reliably increased but the capacity for colors did not. Thus, real-world objects and colors are differentially affected by encoding time, with only real-world objects continuing to benefit from more encoding time.

At our longest encoding time (2 s), participants had an estimated capacity of more than four objects’ worth of information [4.7 ± 0.22; significantly greater than 4, t(11) = 3.35, P = 0.006], and remembered 27% more about objects than colors [4.7 vs. 3.7; t(11) = 2.50; P = 0.030].

The color results replicate existing work showing no improvement with additional encoding time for simple stimuli (9, 10) or complex but meaningless stimuli (15). However, the nonfixed capacity (19) and the continued encoding with additional time in the case of real-world objects raise the question of whether this increase in behavioral performance is supported by active working memory or whether it results from the use of a separate episodic long-term memory system. Thus, in a second and third experiment, we examine the CDA, the electrophysiological measure of how much information is being actively held in mind in visual working memory, while participants remember real-world objects. In experiment 2, we focus on memory for real-world objects. In particular, we ask whether the increase in information stored with additional encoding time is a result of active storage, or whether it is due to encoding in episodic long-term memory. In experiment 3, we compare memory for real-world objects and colors. In particular, we ask whether the greater capacity for real-world objects than for colors (at long encoding times) is reflected in greater active storage for real-world objects than for colors.

Experiment 2: Encoding Time Advantages for Real-World Objects Result from Active Storage.

As encoding time increases, participants remember more details about real-world objects over a short delay (e.g., Fig. 1B, green line). In this experiment, we tested whether these details were actively stored using the CDA. In general, memory capacities tend to be lower in CDA experiments due to the extra demands of the task (ignoring half of the items on the screen, maintaining fixation for long durations, and avoiding blinks or eye movements). However, existing evidence shows that the CDA nonetheless provides an excellent proxy for more typical working memory situations (27). Thus, in the following experiments we will focus on the differences between the conditions in both behavior and CDA amplitude, rather than on the absolute performance level.

We showed participants real-world objects for either 200 ms or 1,000 ms while recording event-related brain activity in response to the memory display. We asked participants to hold in mind the objects either to the left or to the right of the fixation cross without moving their eyes, and then, after a short delay, asked them to make detailed object comparisons to assess how many objects they could remember with sufficient detail to distinguish between two exemplars of the same category (Fig. 2A). The CDA was measured as the contralateral-minus-ipsilateral difference wave over parietal-occipital electrode sites during the retention interval (300–700 ms after the memory display disappeared; gray box in Fig. 2B).

Fig. 2.

Active storage of detailed object representations. (A) Participants were cued to remember the objects on either the left or right side of the screen. The objects were presented for 200 ms in the short encoding time condition and 1,000 ms in the long encoding time condition. After a brief delay, a forced-choice comparison assessed detailed object memory. (B) Contralateral-minus-ipsilateral waveforms for the short (Top) and long (Bottom) encoding time conditions. The CDA is measured from 300 ms after offset until the cue appears (gray shaded rectangle, labeled CDA). (C) Mean CDA amplitude for short and long encoding times. Error bars represent within-subject SEMs.

In line with the results of experiment 1, there was a behavioral performance advantage, with greater capacity after a 1,000-ms encoding time than after a 200-ms encoding time [2.44 vs. 1.99; t(11) = 6.09, P < 0.001], although, as expected, overall capacity was lower with the greater task demands of the EEG experiment. Our main question of interest was whether the increase in capacity as a result of more encoding time was reflected in the CDA component. In fact, the CDA was reliably greater after the 1,000-ms encoding time than after the 200-ms encoding time [t(11) = 2.53, P = 0.028; Fig. 2]. Thus, the increased memory capacity for real-world objects with longer encoding time is reflected in active working memory maintenance systems over the parietal-occipital cortex, suggesting that the additional information accumulated with longer encoding time is actively stored.

Experiment 3: More Real-World Objects Than Colors Can Be Actively Maintained in Working Memory.

Experiment 2 shows that the accumulation of more detailed information about a set of real-world objects with longer encoding time is supported by active working memory processes, as reflected in the CDA component. Another important question is whether the greater capacity we observe for real-world objects relative to simple stimuli, like colors, is also supported by active storage. In particular, at long encoding times, participants are able to remember more real-world objects than simple colors, despite the fact that the real-world objects are more complex (Fig. 1). Is this additional information a result of storage in visual working memory systems or is it a result of the task being contaminated by the use of episodic long-term memory systems?

To address this question, we showed participants either five objects or five colors for 1,000 ms followed by a short delay and a two-alternative forced-choice (2AFC) test designed to measure the total number of objects about which information could be encoded (e.g., the foils consisted of objects or colors that were as distinct as possible from the encoded stimulus). In line with the results of experiment 1, participants performed better at remembering objects than at remembering colors at this 1,000-ms encoding time [3.18 vs. 2.86; t(17) = 2.36, P = 0.031], although, as expected, overall fewer objects and colors were remembered with the greater task demands of the EEG experiment (see Supporting Information for a discussion of this issue).

We found that the greater memory capacity for real-world objects than for colors was reflected in the CDA amplitude. As in experiment 2, the CDA component was measured during the retention interval as the contralateral-minus-ipsilateral difference over parietal-occipital scalp sites. The CDA was reliably greater for remembering five objects than for remembering five colors [t(17) = 2.26, P = 0.037; Fig. 3]. This indicates that the additional real-world objects that are remembered beyond the limit on color memory are actively stored in visual working memory systems, rather than being an artifact of the use of the episodic long-term memory system.

Fig. 3.

More active storage for objects than for colors. (A) Contralateral-minus-ipsilateral ERP waveforms, shown separately for trials on which participants were asked to remember five colors (in blue) or five objects (in red). (B) Mean CDA amplitude for the five colors and five objects conditions. Error bars represent within-subject SEMs.

An important alternative interpretation of these results is that this difference in CDA does not really reflect more objects being stored (as seen in behavioral performance) but is instead a main effect, where objects always generate a larger CDA than colors regardless of how many are stored. To address this possibility, we also included a condition where participants had to remember only three colors or three objects, which are both within the capacity limit we would expect for simple stimuli. We did not find a CDA difference between the three colors and three objects condition [t(17) = 0.53, P = 0.602] and, in fact, the CDA was numerically larger for three colors than for three objects (−0.70 vs. −0.60; SEM: ±0.26 and ±0.17; Supporting Information). This confirms that the CDA is not always larger for objects than for colors as a result of the perceptual encoding of the stimuli themselves.

Discussion

The present results show that visual working memory capacity is not fixed for real-world objects. Instead, visual working memory representations continue to accumulate information for longer when real-world objects are remembered than when simple colors are remembered. As a result, more real-world objects than simple colors can be stored in visual working memory when sufficient encoding time is given. These higher capacities for real-world objects are reflected in active storage in visual working memory, because real-world objects elicit continued activity over occipital-parietal regions throughout the delay period, as indexed by the CDA. The amplitude of the CDA has been shown to reflect the amount of information actively stored in visual working memory (25, 27, 28) and is not influenced by storage or consolidation into episodic long-term memory systems (31). Thus, the present data reveal that the benefits for real-world objects in short-term memory tasks are not simply due to contributions of separate episodic long-term memory systems but reflect enhanced active working memory storage for meaningful, real-world stimuli. Overall, this suggests that visual working memory does not always have the fixed capacity previously described on the basis of studies using simple stimuli but is instead a flexible system whose capacity varies with stimulus type.

Relationship Between Working Memory and Episodic Long-Term Memory.

The current data do not rule out the idea that real-world objects also lead to better episodic long-term memory representations than simple stimuli do (in addition to being better represented in active working memory systems). It is possible—in fact, likely—that participants are maintaining active working memory representations of the objects and are also forming episodic long-term memory representations that will be available even after a significant delay (32–34). It is well known that episodic long-term memory encoding and retrieval are more successful for meaningful stimuli and for stimuli that do not repeat from trial to trial (6–8). Thus, our data suggest a commonality between active visual working memory processes and episodic long-term memory systems: Both may benefit from conceptually distinct stimuli about which participants have existing knowledge.

Conceptual vs. Perceptual Complexity.

Our data show that visual working memory actively maintains more information about real-world objects than about simple colors. Is this a result of the additional conceptual information in these stimuli or of their additional perceptual complexity? Existing data suggest that the conceptual information is what matters, as is the case in long-term memory (6, 35, 36). In particular, with complex but meaningless objects such as 3D cubes and polygons, participants cannot remember more than one or two objects, even with long encoding times (15, 37), unless ensemble coding processes are used to combine information across objects (38). Thus, perceptual complexity without conceptual meaning actually leads to lower storage capacity. Furthermore, the behavioral capacity for complex-but-meaningless objects is about two objects, which is the same point at which the CDA plateaus for these objects even with increased encoding time (39) and is significantly less than the capacity for colors.

Relationship Between Working Memory and Existing Semantic Long-Term Memory Representations (Knowledge).

The CDA correlates with how much information is being actively maintained in visual working memory and is absent when participants remember auditory information (40) or when visual information is readily available in episodic visual long-term memory (31). Because the CDA does not reflect episodic visual long-term memory, the current data suggest that visual working memory is enhanced by access to a different kind of nonepisodic information. For example, it is possible that real-world objects activate stored perceptual or semantic knowledge about different categories of objects (i.e., representations that are not tied to a particular episode). This account is consistent with views that suggest that visual working memory is based on the activation of existing long-term memories (41). Under this account, the CDA, and more generally parietal and frontal working memory activity, might reflect not the information storage itself but instead the attentional refreshing mechanism that keeps the long-term memory traces currently stored in visual working memory active and shielded from interference. This idea is consistent with evidence suggesting that the brain regions involved in processing perceptual representations of different stimulus categories contain information about which items are in memory (24, 42, 43). Thus, one possibility for why participants can actively store more information in visual working memory for real-world objects than for simple color stimuli is that a wider variety of long-term memory representations and perceptual representations are relevant and can be refreshed for real-world objects than for simple stimuli. This may reduce interference and allow for more information to be remembered.

Importantly, the attentional refreshing mechanism involved in visual working memory is not the same as the attentional mechanisms used to perform other visual tasks, although both have a common parietal focus (44). For example, during the delay period of a visual working memory task, participants are relatively unaffected by having to perform other attentionally demanding tasks as long as those tasks do not tap into working memory per se (45).

Conclusions

By measuring active storage using the CDA, we found that working memory accumulates information over a longer time scale for real-world objects than has been shown for simple colors and that more real-world objects than simple colors can be stored in working memory. These results demonstrate that working memory capacity is systematically underestimated by studies using simple stimuli about which we have no existing knowledge. Our findings also raise important questions about the functional role of the visual working memory system. In particular, they raise doubts that the entire function of visual working memory is to maintain information across eye movements (11), because the system seems to continue to accumulate information on a much longer time scale than typical fixation durations.

Materials and Methods

Participants.

All studies were approved by the Institutional Review Board at Harvard University. All participants read and signed an informed consent form before the studies. The final samples of experiment 1 and experiment 2 each consisted of 12 participants recruited from the Harvard University and Cambridge, MA communities. An additional participant was run in experiment 1 but excluded because their performance did not differ from chance on average across all of the conditions. An additional participant was run in experiment 2 but excluded due to excessive artifacts in the EEG (>30% of trials had to be excluded due to eye movements and muscle movement artifacts).

The final sample of experiment 3 consisted of 18 participants recruited from the Harvard University and Cambridge, MA communities. Data from an additional four participants were excluded due to excessive artifacts in the EEG (>30% of trials rejected; three participants because of horizontal eye movements and one because of muscle movements), and one additional participant was excluded because behavioral performance did not differ from chance on average across all conditions.

All participants (age range 18–28 y) had normal or corrected-to-normal color vision, assessed with Ishihara’s test for color deficiencies.

Procedure.

Experiment 1.

Experiment 1 was a behavioral visual working memory task where encoding time was varied for both colors and objects. Participants saw six colors or six objects for 200, 1,000, or 2,000 ms, followed by a 1,000-ms delay. After the delay, a single item was cued and then tested in a 2AFC format. For colors, the test consisted of a forced choice between the original color and a novel color that was categorically distinct and did not appear on the original display. For objects, there were two conditions. In one case, an object was tested against another categorically distinct object (e.g., a bell you had seen vs. a cookie; Fig. 1). This was designed to measure how many objects participants remembered. In another condition, an object was tested against another exemplar from the same category (e.g., one pitcher vs. another pitcher; Fig. 1). This was designed to measure how many objects participants remembered with detail. Each condition (encoding time × stimulus type) was performed in a single block of 33 trials, with the order of the nine blocks randomized across participants.

While participants performed each trial they did a concurrent verbal interference task to prevent them from verbally encoding the objects and colors. In particular, on each trial they were shown two digits and asked to covertly rehearse those digits throughout the trial. At the end of the trial, after the memory test, they typed the digits.

Experiment 2.

Experiment 2 was designed to measure the CDA while participants remembered real-world objects. It required participants to complete a 2AFC memory test with the foils being objects from the same category and thus required detailed memory representations. The experiment manipulated encoding time (200 vs. 1,000 ms), enabling us to assess how many objects could be stored with detail and how this changed with longer encoding times.

Participants completed 440 total trials. On each trial, three objects were presented in a semicircle in each visual hemifield, and participants were cued to memorize either the objects on the left or on the right of fixation and to ignore the other objects. The to-be-ignored objects consisted of the to-be-remembered stimuli from a different trial. Thus, across trials, the exact same visual displays were presented on the to-be-remembered side and the to-be-ignored side, equating all perceptual information so that any differences in brain activity between the cued and uncued sides were due to differences in memory processes, not perceptual processing.

The cue indicating which side of the screen to remember was an arrow pointing either left or right that appeared at the center of the screen 1,000 ms before the presentation of the objects. Participants were instructed to keep their eyes in the center of the screen throughout each trial, until the test display appeared. Trials with horizontal eye movements were excluded from the analysis (Materials and Methods, Electrophysiological Recordings and Analysis). On one-half of the trials the objects were shown for 200 ms, and on the other half of the trials for 1,000 ms. These trial types were intermixed, so participants did not know how long the objects would remain visible on each trial. After the objects were presented for this encoding duration, there was a 700-ms delay interval and then a cue indicating which of the three memorized objects would be tested; the cue was visible for 500 ms. Finally, the forced-choice test was presented and remained visible until the participant made a selection. One of the two forced-choice options was always the same object that had been visible at the cued location, and the other was always a foil object from the same category. The 2AFC was presented such that the spatial midpoint of the two choices was centered on the location at which the to-be-remembered object had appeared. Trials were broken up into eight blocks of 55 trials each.

Experiment 3.

Experiment 3 was designed to measure the CDA while participants remembered either three or five real-world objects or three or five colored squares. After a brief delay, participants were required to complete a 2AFC memory test with either objects or colors that were as distinct as possible from the encoded objects/colors, thus providing an estimate of how many objects or colors can be held in visual working memory.

Participants completed 440 total trials, divided into eight blocks of 55 trials each. In each block, participants remembered either three objects, five objects, three colors, or five colors for the entire set of 55 trials. On each trial, the to-be-remembered stimuli were presented in a single visual hemifield and a matched set of stimuli was presented in the other visual hemifield; this matched set was from the same condition (e.g., three colors) and consisted of the to-be-remembered stimuli from a different trial, thus ensuring that the exact same visual displays were presented as to-be-remembered and to-be-ignored across trials. When three stimuli were presented in each hemifield, they were presented in a semicircle around fixation. When five stimuli were presented in each hemifield, the two additional stimuli were presented larger [approximately M-scaled (46)] and in more peripheral locations (Fig. 3). Participants were cued to memorize the to-be-remembered stimuli with an arrow cue that appeared at the center of the screen 1,000 ms before the presentation of the stimuli. As in experiment 2, participants were instructed to keep their eyes in the center of the screen throughout each trial. Which side of fixation the to-be-remembered stimuli were presented on varied randomly across trials within a block. Following the arrow cue, the stimuli were presented for 1,000 ms, followed by an 800-ms delay period and then a 500-ms cue indicating which of the three or five memorized stimuli would be tested. Finally, the forced-choice objects or colors were presented and remained visible until the participant made a selection.

In the object blocks, the foil object at test was chosen from a different category than the shown object, ensuring that no detailed information about the object was required to succeed at the memory task and thus providing an overall estimate of how many objects could be remembered. In the color blocks, stimuli were chosen from a circle (radius 59°) cut out of the CIE L*a*b* color space, centered at L = 54, a = 18, and b = –8. The foil color in the 2AFC was always 180° in color space from the originally shown color, providing the maximum possible separation and thus requiring the least possible detail to succeed at the memory task. The initially presented set of three or five colors was constrained by requiring all of the colors to be separated by a minimum distance of 15° in color space, limiting the impact of perceptual grouping and ensemble encoding (47).
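
To make the color-selection constraints concrete, the sketch below (not the authors' code) samples study colors on such a circle and picks the 180° foil; it treats the radius of 59 as a distance in the a*/b* plane, enforces the 15° minimum separation by rejection sampling, and assumes scikit-image is available for the Lab-to-RGB conversion.

```python
import numpy as np
from skimage.color import lab2rgb  # assumed conversion route; any Lab-to-RGB converter would do

# Circle in CIE L*a*b* space (values from the Methods; radius treated as a*/b* units)
L_CENTER, A_CENTER, B_CENTER, RADIUS = 54.0, 18.0, -8.0, 59.0
MIN_SEP_DEG = 15.0

def sample_study_angles(n_colors: int, rng: np.random.Generator) -> np.ndarray:
    """Draw n_colors angles (deg) on the color wheel, all pairwise >= MIN_SEP_DEG apart."""
    while True:
        angles = rng.uniform(0.0, 360.0, size=n_colors)
        diffs = np.abs(angles[:, None] - angles[None, :])
        diffs = np.minimum(diffs, 360.0 - diffs)      # circular distance between angles
        np.fill_diagonal(diffs, np.inf)
        if diffs.min() >= MIN_SEP_DEG:
            return angles

def angles_to_rgb(angles_deg: np.ndarray) -> np.ndarray:
    """Convert angles on the Lab circle to displayable sRGB values."""
    theta = np.deg2rad(angles_deg)
    lab = np.stack([np.full_like(theta, L_CENTER),
                    A_CENTER + RADIUS * np.cos(theta),
                    B_CENTER + RADIUS * np.sin(theta)], axis=-1)
    return lab2rgb(lab[np.newaxis, ...])[0]           # shape (n_colors, 3)

rng = np.random.default_rng(1)
study_angles = sample_study_angles(5, rng)            # five to-be-remembered colors
study_rgb = angles_to_rgb(study_angles)
foil_angle = (study_angles[0] + 180.0) % 360.0        # foil is 180 deg away in color space
```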

Behavioral Analysis.

In each experiment we calculated capacity (the number of objects remembered, K) using the standard formula for a two-alternative forced-choice test with N items to be remembered. In particular, participants can correctly answer the 2AFC comparison test (p, percent correct) either because they remember the tested object, which should occur on K/N of the trials, or because they guess correctly (chance = 50%) on trials in which they do not remember the tested object [on (N − K)/N of the trials]. Thus, the equation for percent correct is p = (K/N) × 1.0 + [(N − K)/N] × 0.5. Solving for K and simplifying gives the formula for capacity, K = N(2p − 1). This tells us how many objects or colors participants remembered with the required precision—in other words, how many objects were remembered well enough to distinguish them from a novel object or from another exemplar from the same category.
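
As a minimal illustration of this calculation (a sketch, not code from the study; the function name and example numbers are hypothetical), the capacity estimate can be computed directly from 2AFC accuracy:

```python
def capacity_from_2afc(p_correct: float, n_items: int) -> float:
    """Estimate capacity K from 2AFC accuracy.

    Assumes the observer knows the tested item on K/N trials and guesses
    (chance = 0.5) on the remaining (N - K)/N trials, so that
    p = (K/N) * 1.0 + ((N - K)/N) * 0.5, which rearranges to K = N(2p - 1).
    """
    return n_items * (2 * p_correct - 1)

# Hypothetical example: 6 items in the display and 89% correct gives
# K = 6 * (2 * 0.89 - 1) = 4.68 items' worth of information.
print(capacity_from_2afc(0.89, 6))
```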

Electrophysiological Recordings and Analysis.

EEG was recorded continuously from 32 Ag/AgCl electrodes mounted in an elastic cap and amplified by an ActiCHamp amplifier (BrainVision). Electrodes were arranged according to the 10-10 system. The horizontal electrooculogram was acquired using a bipolar pair of electrodes positioned at the external ocular canthi, and the vertical electrooculogram was measured at electrode FP1, above the left eye. All scalp electrodes were referenced to an electrode on the right mastoid online and were digitized at a rate of 500 Hz. Continuous EEG data were filtered offline with a band pass of 0.01–112 Hz. Trials with horizontal eye movements, blinks, or excessive muscle movements were excluded from the analysis. Artifact rejection was performed for individual trials on epochs starting 200 ms before the memory display onset until the onset of the forced-choice display. Artifact-free data were rereferenced to the average of the left and right mastoids.

ERPs were time-locked to the onset of the memory display in all experiments, and ERPs from artifact-free epochs were averaged and digitally low-pass-filtered (−3-dB cutoff at 25 Hz) separately for each subject. ERPs elicited by the memory display were averaged separately for each condition and were then collapsed across to-be-remembered hemifield (left or right) and lateral position of the electrodes (left or right) to obtain waveforms recorded contralaterally and ipsilaterally to the to-be-remembered side. For each participant, mean CDA amplitudes were measured with respect to a 200-ms prestimulus period at four lateralized posterior electrodes (PO3/PO4/PO7/PO8), consistent with existing data on the location of the CDA (25). For all experiments, the measurement window for the CDA started 300 ms after the offset of the memory display and lasted for 400 ms, until the cue indicating the memory test item. Thus, for experiment 2 the CDA amplitude was measured between 500 and 900 ms for the short encoding time condition and between 1,300 and 1,700 ms for the long encoding time condition with respect to the onset of the memory display. For experiment 3, the CDA amplitude was measured between 1,300 and 1,700 ms with respect to the onset of the memory display. The resulting mean amplitudes were statistically compared using paired t tests and ANOVAs. Signal processing was performed with MATLAB (The MathWorks) using the EEGLAB and ERPLAB toolboxes (48, 49). Topographical maps of the CDA were constructed by spherical spline interpolation (50).
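
For readers who want to see the arithmetic behind the CDA measurement, here is a minimal NumPy sketch (not the ERPLAB pipeline used in the study). It assumes epochs have already been artifact-rejected, baseline-corrected, time-locked to memory-display onset at a 500-Hz sampling rate, and collapsed into contralateral and ipsilateral channel sets; the array names and epoch layout are hypothetical.

```python
import numpy as np

FS = 500          # sampling rate in Hz
T_MIN = -0.2      # epochs start 200 ms before memory-display onset (baseline period)

def mean_amplitude(epochs: np.ndarray, t_start: float, t_end: float) -> np.ndarray:
    """Mean amplitude per trial and channel within [t_start, t_end] s relative to display onset.

    epochs: array of shape (n_trials, n_channels, n_times), already baseline-corrected.
    """
    i0 = int(round((t_start - T_MIN) * FS))
    i1 = int(round((t_end - T_MIN) * FS))
    return epochs[:, :, i0:i1].mean(axis=-1)

def cda_amplitude(contra: np.ndarray, ipsi: np.ndarray, t_start: float, t_end: float) -> float:
    """CDA = contralateral minus ipsilateral mean amplitude, averaged over trials and channels.

    contra/ipsi: epochs of shape (n_trials, n_channels, n_times) for the PO3/PO4/PO7/PO8
    channels contralateral vs. ipsilateral to the cued hemifield (hypothetical arrays).
    """
    diff = mean_amplitude(contra, t_start, t_end) - mean_amplitude(ipsi, t_start, t_end)
    return float(diff.mean())

# Example (long encoding condition of experiment 3): display onset at 0 s, offset at 1.0 s,
# so the CDA window of 300-700 ms after offset corresponds to 1.3-1.7 s after onset.
# cda_long = cda_amplitude(contra_epochs, ipsi_epochs, 1.3, 1.7)
```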

SI Results

Experiment 2.

Fig. 2 in the main text, which shows the data from experiment 2, focuses on the difference between the contralateral and ipsilateral response—the critical comparison for examining the CDA. Fig. S1 plots the contralateral and ipsilateral waveforms separately. Of note, in the long encoding condition we observe an ERP to the stimulus offset, as is typical for long stimulus presentations (51). However, this response is present equally at ipsilateral and contralateral sites and does not affect the CDA component.

Fig. S1.

EEG data for experiment 2, for short encoding time (Top) and long encoding time (Bottom). After the initial perceptual response, the waveforms over the hemisphere contralateral to the to-be-remembered objects are more negative than the waveforms over the ipsilateral hemisphere, and this activity is maintained throughout the delay period. The CDA is measured as the contralateral-minus-ipsilateral difference from 300 ms after offset until the cue appears (gray shaded rectangle). Notice that in the long encoding condition we observe an ERP to the stimulus offset, as is typical for long stimulus presentations, but this is present equally at ipsilateral and contralateral sites.

Experiment 3.

Fig. 3 in the main text focuses on the data from experiment 3’s critical conditions at set size 5. The data from the set size 3 condition are plotted in Fig. S2. As can be seen in Fig. S2B, we did not find a CDA difference between the three colors and three objects conditions [t(17) = 0.53, P = 0.602], and in fact the CDA was numerically larger for colors than for objects (−0.70 vs. −0.60; SEM: ±0.26 and ±0.17). This confirms that the CDA is not always larger for objects than for colors as a result of the perceptual encoding of the stimuli themselves. Separate ipsilateral and contralateral waveforms for all conditions from experiment 3 are plotted in Fig. S3. Consistent with the idea that the CDA is saturated (e.g., at capacity) for both the three-color and five-color conditions, there is no statistically reliable difference between the CDA in these conditions (P = 0.71). By contrast, the CDA for five objects is statistically reliably larger than that for all other conditions, including three colors, five colors, and three objects (all P < 0.05); see Fig. S4 for a replot of the data from all of these conditions together.

Fig. S2.

Data from the set size 3 condition of experiment 3. (A) Contralateral-minus-ipsilateral waveforms for the color (blue) and object (pink) conditions. The CDA is measured from 300 ms after offset until the cue appears (gray shaded rectangle, labeled CDA). (B) Average CDA in each condition.

Fig. S3.

Ipsilateral and contralateral ERP waveforms for experiment 3, for color memory condition (A) and object memory condition (B). The CDA was measured as the contralateral-minus-ipsilateral difference from 1,300 ms until 1,700 ms after the onset of the memory display (gray shaded rectangle).

Fig. S4.

Data from all conditions of experiment 3, showing average CDA in each condition.

Differences in Performance Between Experiments 1 and 3.

Whereas the benefit for objects relative to colors is quite large in experiment 1, this benefit is somewhat diminished in experiment 3 (the EEG experiment). This is in large part because overall performance is significantly reduced in typical CDA experiments compared with similar behavioral experiments. There are several reasons for this, mostly related to the more demanding task requirements placed on participants in a CDA setting: (i) the necessity in the EEG experiment to use a spatial cue to determine which items to attend, (ii) the necessity in the EEG experiment to suppress items presented in the opposite hemifield, (iii) the necessity for participants in the EEG experiment to exert executive control to prevent blinks and eye movements for long intervals, and (iv) perhaps most importantly, the interference between items in the EEG experiment that is not present in experiment 1, because participants were allowed to move their eyes in experiment 1 but not in experiment 3 (and because twice as many items had to be presented on the screen in experiment 3 compared with experiment 1 to have irrelevant items on the opposite side of the screen).

These effects result in reduced performance in experiment 3 relative to experiment 1. Importantly, we believe that this does not affect our claim that the CDA is larger for objects than for colors. In fact, there are reasons to believe the demands of the EEG task are actually greater for objects than for colors, and thus performance and the CDA in experiment 3 are likely to be impaired more for objects than for colors relative to a real-world situation. In particular, previous work has demonstrated that interference effects (point iv, above) are significantly worse for real objects than for simple colors (52). Because of this, experiment 3 is a very conservative way to estimate whether there is any benefit for objects over colors. In particular, evidence has shown that whereas many colors can be presented in the same hemifield while fixating without causing interference, even just two real-world objects presented in the same hemifield interfere with each other (52). Thus, the data in experiment 3 show a benefit for real-world objects over colors even in a circumstance where objects are under greater interference than colors. We would expect, based on previous data, that the benefit for objects would be considerably larger in real-world conditions where eye movements are allowed and where only relevant objects are present, because this would produce much less interference among the real objects, whereas it would have little effect on the colors. This is consistent with what we find in experiment 1 (behaviorally).

Differences in Overall Raw Signal Value (in Microvolts) Between Experiments 2 and 3.

We find an overall lower CDA in experiment 3 than experiment 2. This is true even though both experiments feature a similar condition, with a 1,000-ms presentation time and three real-world objects to be remembered. What causes this difference? In this case, our stimuli differed significantly between the experiments in a way that might be expected to cause lower signal in experiment 3. In particular, the stimuli were arranged differently on the screen and so had different sizes and eccentricities in experiment 2 and experiment 3. In experiment 2, all of the items appeared at a single fixed size and fixed offset from fixation, because only three items were ever presented per hemifield. However, in experiment 3, we had to allow for five objects to fit on the screen within a hemifield. Thus, even in the three-item conditions the items were smaller and further in the periphery (Fig. S5). This is a likely explanation of the difference in signal between the two conditions. Critically, these differences in raw signal are not an issue for the present study, because our primary conclusions rest on comparisons within experiments.

Fig. S5.

(Left) Alignment of the three-object condition in experiment 2. (Right) Alignment of the three-object condition in experiment 3.

Eye Movements.

Eye movements are a concern when examining lateralized changes of the ERP signal such as the CDA. In the main text, we present the analyses from all experiments after removing all trials with eye movements or other artifacts (see Materials and Methods for details). However, we began our artifact rejection window only 200 ms before the onset of the to-be-remembered stimuli, which is after participants were cued to which side to attend. Thus, it is possible, although unlikely, that participants might have moved their eyes immediately after the cue appeared (but before the memory display) and then held this new position steadily throughout the trial, and did this only in some conditions, not others, artifactually affecting the CDA. To rule this out, Fig. S6A shows the horizontal electrooculography (HEOG) traces from all experiments from the moment of the cue, for each of the experiments and conditions in the main text. We can also compute the CDA after performing artifact rejection over an extended interval (from cue onset to test stimulus onset), although these data are based on significantly fewer trials than the data in the main text, because participants relatively frequently blinked in the interval after the cue but before the stimuli to be remembered appeared 1,000 ms later. Nevertheless, our main results remain the same in this dataset as in the main dataset; in particular, we still find a significant difference between the CDA after short vs. long encoding in experiment 2 [t(11) = 2.13; P = 0.05] and a significant difference between the CDA for five objects vs. five colors in experiment 3 [t(17) = 2.65; P = 0.037]. HEOG traces using these artifact-rejection criteria are plotted in Fig. S6B.

Fig. S6.

HEOG traces from all experiments, where time 0 is the onset of the items to be remembered and −1,000 ms is the onset of the cue indicating which side to attend. A shows the HEOG traces for the main analysis reported in the paper, in which trials with eye movements were rejected based on the period from 200 ms before the onset of the memory display until the test display. B shows the HEOG traces for an additional analysis in which artifacts were rejected over the entire period from −1,000 ms (onset of the arrow cue) until the test display. Negative values (plotted up) reflect drift of the eyes toward the left side; positive values reflect drift toward the right side. In all conditions, participants kept their eyes fixated, and we observed no significant drifts toward the attended side throughout the trial or before the onset of the stimuli.

Scalp Topographies.

To examine whether the CDA measured for real-world objects shows a parietal-occipital scalp distribution similar to the CDA measured for simple stimuli, such as color, we plotted the topographical voltage distributions of the CDA separately for the two conditions. Scalp topographies of the CDA component (contralateral-minus-ipsilateral waveform, projected on the right side of the scalp) are plotted in Fig. S7. The CDA shows very similar scalp distributions with clear parietal-occipital foci over the hemisphere contralateral to the to-be-remembered items. The topographies for objects and colors were compared by analysis of variance following the procedure of McCarthy and Wood (53). The results revealed no differences between the topographies [for set size 3: electrodes × condition interaction, F(1,12) = 0.68, P = 0.772; for set size 5: electrodes × condition interaction, F(1,12) = 0.51, P = 0.907]. Together with the similar time course (Fig. 3), this indicates that the same ERP component is measured for real-world objects and colors.

Fig. S7.

Scalp distributions of the CDA component while remembering objects (Left) and colors (Right) in experiment 3. Topographical voltage maps show the contralateral-minus-ipsilateral amplitude differences projected on the right side of the scalp.

Footnotes

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1520027113/-/DCSupplemental.

References

1. Baddeley A. Working memory: Theories, models, and controversies. Annu Rev Psychol. 2012;63:1–29. doi: 10.1146/annurev-psych-120710-100422.
2. Cowan N. The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behav Brain Sci. 2001;24(1):87–114, discussion 114–185. doi: 10.1017/s0140525x01003922.
3. Alloway TP, Alloway RG. Investigating the predictive roles of working memory and IQ in academic attainment. J Exp Child Psychol. 2010;106(1):20–29. doi: 10.1016/j.jecp.2009.11.003.
4. Fukuda K, Vogel E, Mayr U, Awh E. Quantity, not quality: The relationship between fluid intelligence and working memory capacity. Psychon Bull Rev. 2010;17(5):673–679. doi: 10.3758/17.5.673.
5. Brady TF, Konkle T, Alvarez GA. A review of visual memory capacity: Beyond individual items and toward structured representations. J Vis. 2011;11(5):4. doi: 10.1167/11.5.4.
6. Konkle T, Brady TF, Alvarez GA, Oliva A. Conceptual distinctiveness supports detailed visual long-term memory for real-world objects. J Exp Psychol Gen. 2010;139(3):558–578. doi: 10.1037/a0019165.
7. Wickens DD, Born DG, Allen CK. Proactive inhibition and item similarity in short-term memory. J Verbal Learn Verbal Behav. 1963;2(5):440–445.
8. Wiseman S, Neisser U. Perceptual organization as a determinant of visual recognition memory. Am J Psychol. 1974;87(4):675–681.
9. Luck SJ, Vogel EK. The capacity of visual working memory for features and conjunctions. Nature. 1997;390(6657):279–281. doi: 10.1038/36846.
10. Bays PM, Gorgoraptis N, Wee N, Marshall L, Husain M. Temporal dynamics of encoding, storage, and reallocation of visual working memory. J Vis. 2011;11(10):1–15. doi: 10.1167/11.10.6.
11. Hollingworth A, Richard AM, Luck SJ. Understanding the function of visual short-term memory: Transsaccadic memory, object correspondence, and gaze correction. J Exp Psychol Gen. 2008;137(1):163–181. doi: 10.1037/0096-3445.137.1.163.
12. Brady TF, Konkle T, Oliva A, Alvarez GA. Detecting changes in real-world objects: The relationship between visual long-term memory and change blindness. Commun Integr Biol. 2009;2(1):1–3. doi: 10.4161/cib.2.1.7297.
13. Melcher D. Persistence of visual memory for scenes. Nature. 2001;412(6845):401. doi: 10.1038/35086646.
14. Melcher D. Accumulation and persistence of memory for natural scenes. J Vis. 2006;6(1):8–17. doi: 10.1167/6.1.2.
15. Alvarez GA, Cavanagh P. The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychol Sci. 2004;15(2):106–111. doi: 10.1111/j.0963-7214.2004.01502006.x.
16. Brady TF, Konkle T, Alvarez GA. Compression in visual working memory: Using statistical regularities to form more efficient memory representations. J Exp Psychol Gen. 2009;138(4):487–502. doi: 10.1037/a0016797.
17. Curby KM, Glazek K, Gauthier I. A visual short-term memory advantage for objects of expertise. J Exp Psychol Hum Percept Perform. 2009;35(1):94–107. doi: 10.1037/0096-1523.35.1.94.
18. Kaiser D, Stein T, Peelen MV. Real-world spatial regularities affect visual working memory for objects. Psychon Bull Rev. 2015;22(6):1784–1790. doi: 10.3758/s13423-015-0833-4.
19. Endress AD, Potter MC. Large capacity temporary visual memory. J Exp Psychol Gen. 2014;143(2):548–565. doi: 10.1037/a0033934.
20. Lin PH, Luck SJ. Proactive interference does not meaningfully distort visual working memory capacity estimates in the canonical change detection task. Front Psychol. 2012;3:42. doi: 10.3389/fpsyg.2012.00042.
21. Goldman-Rakic PS. Cellular basis of working memory. Neuron. 1995;14(3):477–485. doi: 10.1016/0896-6273(95)90304-6.
22. Buschman TJ, Siegel M, Roy JE, Miller EK. Neural substrates of cognitive capacity limitations. Proc Natl Acad Sci USA. 2011;108(27):11252–11255. doi: 10.1073/pnas.1104666108.
23. Bettencourt KC, Xu Y. Decoding the content of visual short-term memory under distraction in occipital and parietal areas. Nat Neurosci. 2016;19(1):150–157. doi: 10.1038/nn.4174.
24. D’Esposito M, Postle BR. The cognitive neuroscience of working memory. Annu Rev Psychol. 2015;66:115–142. doi: 10.1146/annurev-psych-010814-015031.
25. McCollough AW, Machizawa MG, Vogel EK. Electrophysiological measures of maintaining representations in visual working memory. Cortex. 2007;43(1):77–94. doi: 10.1016/s0010-9452(08)70447-7.
26. Robitaille N, Grimault S, Jolicoeur P. Bilateral parietal and contralateral responses during maintenance of unilaterally encoded objects in visual short-term memory: Evidence from magnetoencephalography. Psychophysiology. 2009;46(5):1090–1099. doi: 10.1111/j.1469-8986.2009.00837.x.
27. Vogel EK, Machizawa MG. Neural activity predicts individual differences in visual working memory capacity. Nature. 2004;428(6984):748–751. doi: 10.1038/nature02447.
28. Vogel EK, McCollough AW, Machizawa MG. Neural measures reveal individual differences in controlling access to working memory. Nature. 2005;438(7067):500–503. doi: 10.1038/nature04171.
29. Luria R, Balaban H, Awh E, Vogel E. The contralateral delay activity as a neural measure of visual working memory. Neurosci Biobehav Rev. 2016;62:100–108.
30. Williams M, Woodman GF. Directed forgetting and directed remembering in visual working memory. J Exp Psychol Learn Mem Cogn. 2012;38(5):1206–1220. doi: 10.1037/a0027389.
31. Carlisle NB, Arita JT, Pardo D, Woodman GF. Attentional templates in visual working memory. J Neurosci. 2011;31(25):9315–9322. doi: 10.1523/JNEUROSCI.1097-11.2011.
32. Hollingworth A. Constructing visual representations of natural scenes: The roles of short- and long-term visual memory. J Exp Psychol Hum Percept Perform. 2004;30(3):519–537. doi: 10.1037/0096-1523.30.3.519.
33. Hollingworth A. The relationship between online visual representation of a scene and long-term scene memory. J Exp Psychol Learn Mem Cogn. 2005;31(3):396–411. doi: 10.1037/0278-7393.31.3.396.
34. LaRocque JJ, et al. The short- and long-term fates of memory items retained outside the focus of attention. Mem Cognit. 2015;43(3):453–468. doi: 10.3758/s13421-014-0486-y.
35. McWeeny K, Young A, Hay D, Ellis A. Putting names to faces. Br J Psychol. 1987;78(2):143–146.
36. Bower GH, Karlin MB, Dueck A. Comprehension and memory for pictures. Mem Cognit. 1975;3(2):216–220. doi: 10.3758/BF03212900.
37. Olsson H, Poom L. Visual memory needs categories. Proc Natl Acad Sci USA. 2005;102(24):8776–8780. doi: 10.1073/pnas.0500810102.
38. Brady TF, Alvarez GA. No evidence for a fixed object limit in working memory: Spatial ensemble representations inflate estimates of working memory capacity for complex objects. J Exp Psychol Learn Mem Cogn. 2015;41(3):921–929. doi: 10.1037/xlm0000075.
39. Luria R, Sessa P, Gotler A, Jolicoeur P, Dell’Acqua R. Visual short-term memory capacity for simple and complex objects. J Cogn Neurosci. 2010;22(3):496–512. doi: 10.1162/jocn.2009.21214.
40. Lefebvre C, et al. Distinct electrophysiological indices of maintenance in auditory and visual short-term memory. Neuropsychologia. 2013;51(13):2939–2952. doi: 10.1016/j.neuropsychologia.2013.08.003.
41. Postle BR. The cognitive neuroscience of visual short-term memory. Curr Opin Behav Sci. 2015;1:40–46. doi: 10.1016/j.cobeha.2014.08.004.
42. Harrison SA, Tong F. Decoding reveals the contents of visual working memory in early visual areas. Nature. 2009;458(7238):632–635. doi: 10.1038/nature07832.
43. Serences JT, Ester EF, Vogel EK, Awh E. Stimulus-specific delay activity in human primary visual cortex. Psychol Sci. 2009;20(2):207–214. doi: 10.1111/j.1467-9280.2009.02276.x.
44. Esterman M, Chiu Y-C, Tamber-Rosenau BJ, Yantis S. Decoding cognitive control in human parietal cortex. Proc Natl Acad Sci USA. 2009;106(42):17974–17979. doi: 10.1073/pnas.0903593106.
45. Fougnie D, Marois R. Distinct capacity limits for attention and working memory: Evidence from attentive tracking and visual working memory paradigms. Psychol Sci. 2006;17(6):526–534. doi: 10.1111/j.1467-9280.2006.01739.x.
46. Rovamo J, Virsu V, Näsänen R. Cortical magnification factor predicts the photopic contrast sensitivity of peripheral vision. Nature. 1978;271(5640):54–56. doi: 10.1038/271054a0.
47. Brady TF, Alvarez GA. Contextual effects in visual working memory reveal hierarchically structured memory representations. J Vis. 2015;15(15):6. doi: 10.1167/15.15.6.
48. Lopez-Calderon J, Luck SJ. ERPLAB: An open-source toolbox for the analysis of event-related potentials. Front Hum Neurosci. 2014;8:213. doi: 10.3389/fnhum.2014.00213.
49. Delorme A, Makeig S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods. 2004;134(1):9–21. doi: 10.1016/j.jneumeth.2003.10.009.
50. Perrin F, Pernier J, Bertrand O, Echallier JF. Spherical splines for scalp potential and current density mapping. Electroencephalogr Clin Neurophysiol. 1989;72(2):184–187. doi: 10.1016/0013-4694(89)90180-6.
51. Crevits L, van Lith G, Viifvinkel-Bruinenga S. On and off contribution to the combined occipital on-off response to a pattern stimulus. Ophthalmologica. 1982;184(3):169–173. doi: 10.1159/000309201.
52. Cohen MA, Rhee JY, Alvarez GA. Limits on perceptual encoding can be predicted from known receptive field properties of human visual cortex. J Exp Psychol Hum Percept Perform. 2016;42(1):67–77. doi: 10.1037/xhp0000108.
53. McCarthy G, Wood CC. Scalp distributions of event-related potentials: An ambiguity associated with analysis of variance models. Electroencephalogr Clin Neurophysiol. 1985;62(3):203–208. doi: 10.1016/0168-5597(85)90015-2.
