Proceedings of the National Academy of Sciences of the United States of America
. 2016 Oct 24;113(45):12874–12879. doi: 10.1073/pnas.1602722113

Perceptual training profoundly alters binocular rivalry through both sensory and attentional enhancements

Kevin C Dieter a,b,c,d,1, Michael D Melnick c,d, Duje Tadin c,d,e,1
PMCID: PMC5111677  PMID: 27791061

Significance

Attention can exert a strong influence over perception, especially when multiple stimuli compete for processing resources. However, attention has relatively modest effects on binocular rivalry. Competitive interactions between the two eyes/stimuli unfold largely automatically, with observers unable to strongly bias perception toward either stimulus. Here, we demonstrate that this limitation can be overcome following prolonged perceptual training. Trained observers exhibited substantial changes in rivalry dynamics, with individual percepts occasionally stabilizing for tens of seconds. These large changes were mediated through both eye-specific changes in visual processing and enhanced attention to task-relevant features in the trained eye. Evidently, strong modulation of binocular rivalry can be achieved through training-induced plasticity of bottom-up sensory and top-down attentional mechanisms in low-level vision.

Keywords: visual attention, binocular rivalry, perceptual learning, visual plasticity

Abstract

The effects of attention, as well as its functional utility, are particularly prominent when selecting among multiple stimuli that compete for processing resources. However, existing studies have found that binocular rivalry—a phenomenon characterized by perceptual competition between incompatible stimuli presented to the two eyes—is only modestly influenced by selective attention. Here, we demonstrate that the relative resistance of binocular rivalry to selective modulations gradually erodes over the course of extended perceptual training that uses a demanding, feature-based attentional task. The final result was a dramatic alteration in binocular rivalry dynamics, leading to profound predominance of the trained stimulus. In some cases, trained observers saw the trained rival image nearly exclusively throughout 4-min viewing periods. This large change in binocular rivalry predominance was driven by two factors: task-independent, eye-specific changes in visual processing, as well as an enhanced ability of attention to promote predominance of the task-relevant stimulus. Notably, this strengthening of task-driven attention also exhibited eye specificity above and beyond that attributable to the observed sensory processing changes. These empirical results, along with simulations from a recently developed model of interocular suppression, reveal that stimulus predominance during binocular rivalry can be realized both through an eye-specific boost in processing of sensory information and through facilitated deployment of attention to task-relevant features in the trained eye. Our findings highlight the interplay of attention and binocular rivalry at multiple visual processing stages and reveal that sustained training can substantially alter early visual mechanisms.


From the earliest empirical reports of binocular rivalry (1), scientists have asked whether the fluctuating perceptual experience induced by presenting unmatched images to the two eyes (“binocular rivalry”) could be willfully controlled by the observer. Despite early claims of nearly complete voluntary control (2), recent studies show that attention has only a modest selective impact during continuous viewing of binocular rivalry. Observers who are instructed to “hold” one of the two rival targets dominant exhibit relatively little influence over the dynamics of binocular rivalry (3). Rivalry becomes more susceptible to selective modulation under conditions that promote the deployment of attention to features present in only one of the two rivalry targets (4–7). However, compared with strong effects of visual attention on perception in other domains (8, 9), these effects are modest and suggest additional limiting conditions on attention’s ability to influence perception during binocular rivalry (10). This is puzzling given that visual attention typically has its strongest effects in cases of visual competition (11), which notably include other, ostensibly related bistable stimuli [e.g., the Necker cube (3); apparent motion (12); structure from motion (13)]. Moreover, attention even appears necessary for rivalry fluctuations to occur (14–16), which makes the relative resistance of binocular rivalry to attentional modulation even more perplexing. Here, we address this apparent contradiction by asking whether the effects of attention on binocular rivalry dynamics could be enhanced through targeted perceptual learning.

There is existing evidence that individuals can learn to alter rivalry dynamics; with practice, one can learn to speed up or slow down rivalry alternations (17). However, a critical distinction is that such changes affect both rival stimuli similarly, and thus do not selectively modulate one relative to its competitor. Changes in alternation rate can also occur simply following repeated exposure to binocular rivalry (18, 19). Thus, we specifically asked whether perceptual training could produce selective changes in rivalry dynamics, causing enhanced visibility of a task-relevant image. Answering this question has several important implications. First, it will help better delineate the limits of selective control over rivalry, addressing a growing literature on this topic (for reviews, see refs. 10 and 20). Second, it will give us a better mechanistic understanding of interactions between binocular rivalry and attention—an objective inspired by recent evidence that attention might be an essential component of interocular suppression (21, 22). Finally, it would provide insights into underlying mechanisms of binocular rivalry. For example, if novel stimuli presented to the trained eye maintain training-related changes, such “eye specificity” would indicate an early locus of training-induced plasticity. However, if changes in rivalry dynamics travel with the stimulus regardless of the eye of presentation (i.e., “stimulus specificity”), that would indicate plasticity later in the visual hierarchy.

In designing the training paradigm, our starting point was the observation that modest selective modulations of binocular rivalry are found when task-driven attention is directed to the features of the target rival stimulus (5–7, 23, 24). For example, predominance of a rival target increased when observers discriminated subtle changes in its aspect ratio, but did not change when they tracked changes in the background shading overlaid on top of the same image (6). Here, we used the same aspect ratio tracking task (6) to ensure that observers continuously attended to the target stimulus’s features when it was perceptually dominant (Fig. 1A). Crucially, this yields an objective measurement of rivalry predominance because this task can only be performed accurately when the attended stimulus is perceptually dominant (6). As task performance improved, we increased task difficulty to ensure that the task remained attentionally demanding throughout training. A battery of pretraining and posttraining tasks was used to infer training-induced changes in mechanisms underlying binocular rivalry and its attentional control.

Fig. 1.

Task and main results. (A) Observers tracked aspect ratio changes of one of two stimuli viewed in binocular rivalry, an attentionally demanding perceptual task. As task performance improved, task difficulty was adjusted any time task accuracy within a session exceeded 90% correct. Note that this task can only be performed accurately while the bull’s-eye stimulus is dominant, and thus constitutes an objective measure of stimulus dominance (6). The depicted aspect ratio change is exaggerated for illustrative purposes. (B) The proportion of viewing time that the task-relevant stimulus (bull’s-eye) was dominant gradually increased throughout training for an observer with a representative effect size, resulting in dramatically increased bull’s-eye predominance after training. Training curves for the other four observers are in Fig. S1. Error bars are SEM across training blocks in each session. (C) The large change in predominance of the task-relevant stimulus from pretest to posttest was seen for each observer (all individual results P < 10−4). S2 is the same observer as in B. Error bars are 95% confidence intervals (CIs). See Supporting Information for bootstrap analysis used to generate CIs. (D) Group (n = 5) histogram of percept durations for the task-relevant stimulus normalized to each observer’s pretest median percept duration. Results show fewer short and more long dominance periods following training. Posttraining, dominance durations that were more than 15× the pretest median duration accounted for 6.6% of percepts (0% pretraining) and averaged 48 ± 26.6 s. Error bars are SEM.
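The exact bootstrap behind the CIs in C is detailed in the paper's Supporting Information; purely as an illustration (the resampling unit, iteration count, and function name here are assumptions, not the authors' procedure), a percentile bootstrap over block-level predominance values could be sketched as:

```python
import random

def bootstrap_ci(block_predominance, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for mean predominance across blocks.

    block_predominance: hypothetical per-block proportions of
    bull's-eye dominance; the published analysis may differ.
    """
    rng = random.Random(seed)
    n = len(block_predominance)
    # Resample blocks with replacement, recompute the mean each time
    means = sorted(
        sum(rng.choices(block_predominance, k=n)) / n
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```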

Because our attentional manipulation is linked to the task performed on the target stimulus, it is important to distinguish between effects on binocular rivalry that are (i) due to training-induced enhancements of top-down attentional modulation of rivalry and those that are (ii) due to task-driven changes in bottom-up stimulus processing. To differentiate between these two types of training-induced effects and to provide a formal framework for our empirical findings, we used a recent model of interocular suppression (22). This model estimates the response strength of competing signals presented to the two eyes, taking into account both bottom-up stimulus factors (e.g., stimulus tuning and attentional salience) and top-down, task-driven attention. The latter is operationalized as a feature-based task performed on one of the two rival stimuli (i.e., similar to the way attention is operationalized in our study).

Results

Observers participated in a training experiment in which they performed an attentionally demanding discrimination task on one of the two rival stimuli (“Aspect Ratio Task” condition). During training, the “trained eye” always viewed a bull’s-eye stimulus that modulated in aspect ratio, with observers continuously tracking these slight changes (Fig. 1A). Across training sessions, each observer showed a large, gradual increase in the proportion of time they reported seeing the task-relevant stimulus and a corresponding decrease in predominance of the task-irrelevant stimulus, ultimately resulting in a dramatic increase in bull’s-eye stimulus predominance posttraining (Fig. 1 B and C, and Fig. S1). For two observers, this predominance was nearly complete, with the bull’s-eye stimulus dominating perception more than 90% of the time. These striking changes in rivalry dynamics are highlighted by unusually long time periods during which observers continuously perceived the bull’s-eye stimulus. For example, the percentage of percepts exceeding 10 s increased from 1.5% pretraining to 28% posttraining, almost a 20-fold increase (Fig. 1D). Crucially, observers’ good performance throughout the experiment on the difficult aspect ratio task (percent correct, 87.6 ± 1.4%) indicates that observed reports of bull’s-eye stimulus dominance cannot be due to reporting biases, but reflect an objective increase in bull’s-eye predominance (6).

Fig. S1.

Training data for individual observers. Across training sessions, the proportion of each block for which the attended stimulus was reported as perceptually dominant increased for all observers. By the end of training, each observer reported perceiving the attended stimulus significantly more than the unattended stimulus. For each observer, the pretest and posttest orange dot corresponds to the blue and red bar (respectively) in Fig. 1C. Observer S2’s training curve is shown in Fig. 1B. See Materials and Methods for training procedure details.

Motivated by evidence that attention can alter eye-specific processing (25), we conducted a battery of pretests and posttests to elucidate visual mechanisms underlying the observed plasticity (Fig. 2). Specifically, we switched the eye–stimulus pairing used during training, such that observers performed the same task as during training sessions (Fig. 2A), but with the trained stimulus now presented to the untrained eye (Fig. 2B). If induced plasticity were entirely stimulus specific, this would not affect predominance of the bull’s-eye stimulus (i.e., the training effect should travel with the stimulus). Instead, we found that this switch essentially eliminated training-induced changes in stimulus predominance—that is, a significant portion of the training effect remained with the trained eye [Fig. 3A, Left; interaction, F(1,4) = 13.1, P = 0.02; all main effects = n.s.]. We also asked observers to track rivalry alternations of the same stimuli without performing the aspect ratio task (“Rivalry Tracking Only” condition; Fig. 2 C and D). Here, observers simply indicated when the bull’s-eye was perceptually dominant, with no additional task. Again, we found that predominance of the bull’s-eye was enhanced only when it was shown to the trained eye [Fig. 3A, Right; interaction, F(1,4) = 13.4, P = 0.02; all main effects = n.s.]. These results provide strong empirical evidence for eye-specific changes in visual processing.

Fig. 2.

Key conditions from the pretest and posttest battery. In one set of conditions (A and B), observers performed the attentionally demanding aspect ratio discrimination task as described in Fig. 1A (here, depicted by the red circle). In the second set of conditions (C and D), observers only reported which stimulus they perceived as perceptually dominant. These conditions were tested both with the stimuli in the same configuration as used during training (A and C, bull’s-eye stimulus in trained eye) as well as in the flipped ocular configuration (B and D, bull’s-eye stimulus in the untrained eye). Additional conditions from the pretest and posttest battery that used images not seen during training are shown in Fig. 3C.

Fig. 3.

Eye-specific effects of training. (A, Left) Empirical results from the aspect ratio task for both pretraining and posttraining. Predominance of the task-relevant bull’s-eye stimulus increased as a function of training, but only when the stimulus was shown in the trained eye. (Right) Empirical results from the Rivalry Tracking Only task. Again, the change in the bull’s-eye predominance was the largest when it was shown to the trained eye. Error bars are SEM; n = 5. (B) Model fits of data shown in A. Model results are based on the output of model 9 (Supporting Information). This model assumes training induced eye-specific changes in bottom-up stimulus salience, as well as eye-specific changes in task-driven attention. No eye specificity is present in the fit of the pretraining data. Hence, the model fits the average of the two ocular conditions. (C) Training-induced predominance changes for stimuli not used during training. Data shown are for the trained eye. Each symbol represents data from an individual observer; black symbols indicate observers with significant posttraining increases in predominance. Error bars are 95% CIs. Grating stimuli were tested for all observers. Only S3 and S4 tracked rivalry between face and house images.

To further probe eye specificity of the observed plasticity, we also asked observers to track rivalry dynamics while viewing stimuli not used during training. In most cases, observers demonstrated increased predominance of the trained eye from pretest to posttest (Fig. 3C; S1, S2, and S5 showed significant transfer to vertical/horizontal gratings; S3 and S4 showed significant transfer to face/house images but not gratings). Together with results from the swapped eye–stimulus pairing (Fig. 3A), these findings suggest a locus of plasticity early in visual processing via eye-specific mechanisms.

The results so far indicate training-induced eye-specific changes in bottom-up visual processing. However, because these changes were evident both when observers performed the aspect ratio task and when they did not (Fig. 3A), we have not yet determined whether training also led to enhanced task-driven attentional control over rivalry dynamics. Such a change would be evident as a strengthened ability to increase predominance of the trained stimulus while performing the aspect ratio task relative to the baseline set by predominance of the same stimulus in the Rivalry Tracking Only condition. To address this question, we computed attentional selectivity indices (ASIs) both pretraining and posttraining (Supporting Information). An ASI greater than 1 indicates that (i) performing the attentionally demanding task selectively biased rivalry dynamics in favor of the task-relevant stimulus and that (ii) this bias is bigger than the baseline set by the Rivalry Tracking Only condition. As expected (6), pretraining ASIs were greater than 1 (Fig. 4A). This simply indicates that performing the aspect ratio task was effective at enhancing predominance of the task-relevant stimulus. Notably, after training ASIs increased in both the trained and untrained ocular configuration (Fig. 4A; trained P < 10−4; untrained P = 0.025; see Supporting Information for analysis details). In other words, even when accounting for training-induced effects in the Rivalry Tracking Only condition (Fig. 3A, Right), we still find an additional training-induced strengthening of task-driven attention. Moreover, the training-induced changes in ASI were larger for the trained than untrained ocular configuration (P = 0.017), indicating a partial eye specificity of changes in task-related attention.
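The precise ASI formula is given in the paper's Supporting Information; as a hedged sketch only, one simple formulation consistent with the description above (an index exceeding 1 whenever the attention task boosts predominance beyond the tracking-only baseline) is a ratio of the two predominance measures:

```python
def attentional_selectivity_index(task_predominance, tracking_predominance):
    """Illustrative ASI (the published index may be defined differently):
    predominance of the task-relevant stimulus while performing the
    aspect ratio task, divided by its predominance in the Rivalry
    Tracking Only baseline. Values > 1 mean task-driven attention
    biased rivalry beyond the baseline."""
    return task_predominance / tracking_predominance
```

Under this toy definition, an observer whose bull's-eye predominance rises from 0.5 (tracking only) to 0.75 (aspect ratio task) would have an ASI of 1.5.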

Fig. 4.

Eye specificity of training-induced changes in task-driven attention and in task performance. (A) Attentional selectivity indices (ASIs) for each ocular configuration both before and after training (Supporting Information). During the posttest, observers demonstrated a significantly stronger ability to attentionally modulate rivalry dynamics in both the trained and untrained configurations (P < 10−4 and P = 0.025, respectively). This increase was larger for the trained than untrained configuration (P = 0.017), indicating a significant degree of eye specificity in this attentional enhancement. Large dots represent group mean (n = 5); small dots represent individual data. Error bars are 95% CIs from Monte Carlo simulation. (B) Eye specificity of training-induced changes in performance on the aspect ratio discrimination task (Fig. 1A). Here, data are plotted as differences in task accuracy: trained configuration accuracy – untrained configuration accuracy. Following training, task performance was better when the trained stimulus was presented to the trained eye. Bars represent group mean (n = 5); circles represent individual observer data.

Thus far, we find evidence for eye-specific changes in both bottom-up stimulus processing and task-related attention. These training-induced changes should also result in an eye-specific boost of task performance; aspect ratio discriminations should improve both with enhanced stimulus representation and more effective deployment of task-relevant attention. Task performance indeed improved with training. Observers discriminated smaller aspect ratio changes during posttest [pretest, 7.1 ± 2.0% of stimulus diameter; posttest, 3.3 ± 0.7%; t(4) = 4.36, P = 0.012]. Because we presented the same physical aspect ratio change in both the trained and untrained configuration within each session, we can directly compare task performance across the two configurations. As expected, task performance did not differ in the two ocular configurations before training (accuracy in trained minus untrained configuration, 0.4 ± 2.3%; Fig. 4B, Left). However, a difference in task performance emerged after training [5.9 ± 3.9%; Fig. 4B, Right; posttraining vs. pretraining: t(4) = 3.3, P = 0.029], revealing eye-specific changes in task performance.

Taken together, our empirical results suggest important roles of both bottom-up and top-down changes. We speculate that plasticity at multiple levels of processing is the key reason why we see such dramatic effects of training on rivalry dynamics. This, however, also makes it harder to differentiate between various training-induced effects. To provide a formal framework for these results, we used a recent model of interocular suppression (22). Here, the response strength of competing signals presented to the two eyes is affected by both top-down and bottom-up factors as well as divisive normalization that implements mutually suppressive interactions between neurons responding to rival images.
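The published model's equations are given in ref. 22 and the Supporting Information; the following is only a minimal sketch of the divisive-normalization idea (all parameter names and values are illustrative, not the model's): each eye's attention-weighted drive is divided by a pool that includes the rival eye's drive, so boosting one eye's drive or attentional gain simultaneously raises its own response and suppresses its competitor's.

```python
def monocular_responses(drive_l, drive_r, attn_l=1.0, attn_r=1.0,
                        w=1.0, sigma=0.1, n=2.0):
    """Toy divisive normalization between two monocular channels."""
    e_l = (attn_l * drive_l) ** n   # attention-weighted excitatory drive
    e_r = (attn_r * drive_r) ** n
    # Each eye's response is normalized by a pool containing the rival eye
    r_l = e_l / (sigma ** n + e_l + w * e_r)
    r_r = e_r / (sigma ** n + e_r + w * e_l)
    return r_l, r_r
```

In this toy model, raising drive_l (an eye-specific sensory enhancement) or attn_l (task-driven attention to that eye's stimulus) increases r_l while decreasing r_r, qualitatively mirroring the two training effects reported here.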

First, we tested what changes in model parameters could account for training-induced changes in the Rivalry Tracking Only task (Supporting Information). The results were best explained by either strengthening of bottom-up stimulus drive (Fig. 3B, Right) or enhancements (i.e., narrowing) of excitatory tuning to the trained stimulus features. Crucially, in both cases, only models that included eye-specific changes provided good fits to the data. Although our model analysis does not allow us to exclude either model, the model that incorporates changes in bottom-up stimulus drive is better suited to account for the observation that training also affects novel stimuli (Fig. 3C). Next, we investigated potential impacts of training on task-related attention (Supporting Information). The results indicate that training-induced strengthening of task-driven attention is necessary to account for large changes in stimulus predominance for the Aspect Ratio Task condition (Fig. 3B, Left). As with the bottom-up results, we also find strong evidence for eye specificity of training-induced changes in task-driven attention (Fig. 3C).

Discussion

We show that through (i) enhanced eye-specific sensory processing and (ii) strengthened task-related attention, one rival image can come to profoundly predominate perception during binocular rivalry. Instead of the typical rivalry experience in which one witnesses stochastic perceptual alternations that largely resist selective control, our observers were able to strongly influence their own perceptual experience during rivalry through an attentionally demanding task directed to features of a rival image. Remarkably, in some cases, the result was nearly exclusive predominance of the task-relevant stimulus following training. These results demonstrate a fundamental change in the typical perceptual experience of binocular rivalry, with alternations becoming an increasingly rare event. To our knowledge, the only comparable result is a study in which Tibetan monks with decades of practice in one-point meditation subjectively reported strong selective control over rivalry while meditating (26).

Our empirical results and model simulations provide evidence for both stimulus- and task-specific changes as a result of training. Notably, both exhibited partial specificity to the trained eye. For example, the increase in predominance of the trained image was essentially eliminated when that image was presented to the untrained eye (Fig. 3A). These results suggest enhanced competitive strength of images presented to the trained eye (27). Moreover, this pattern provides strong support for a critical role of early visual mechanisms in binocular rivalry (28–30) and points toward an early neural locus of training-related plasticity in our study. Model simulations (Supporting Information) show that this could be mediated through either enhanced strength of bottom-up stimulus drive or narrower tuning of excitatory responses to rival features. Although both models provided similarly good fits to the data, we favor the former because eye-specific increases in the stimulus drive can also explain our observation of enhanced predominance of the trained eye during viewing of novel stimuli (Fig. 3C).

We also found evidence for an additional training-induced enhancement of task-related attention (Fig. 4). The aspect ratio training task required observers to direct feature-based attention to the bull’s-eye image (6). We observed that, after training, this task was considerably more effective at increasing predominance of the task-relevant stimulus (Fig. 4A). Notably, this effect cannot be explained by the above-described changes in bottom-up processing, indicating the existence of training-induced enhancements of task-driven, top-down control of binocular rivalry dynamics (Supporting Information). However, even this enhanced attentional control exhibited eye specificity, as evidenced by both larger changes (Fig. 4A) and better task performance (Fig. 4B) in the trained than untrained ocular configuration. This additional eye asymmetry is consistent with recent evidence that attention can enhance processing of stimuli presented to a cued eye (25), and further demonstrates that similar effects can be achieved without explicit cuing if monocular information is task relevant.

The eye specificity of training-induced changes indicates that effects are not due to test–retest changes, and also rules out potential explanations involving eye movement strategies. For example, if observers simply tried not to blink or held their eyes extra steady when the bull’s-eye was dominant, or blinked more frequently during pinwheel predominance, effects would be seen in both ocular configurations (i.e., the results would not be eye specific). In addition, observers S3 and S4 completed a brief test of utrocular discrimination (31) and were no better than chance when reporting the eye of stimulus presentation for monocular gratings (52% correct pretraining; 41% posttraining). This result, obtained from the observers who showed the largest training effects (Fig. 1C), indicates that it is unlikely that a particular strategy was applied only in the trained ocular configuration. It is also worth noting that observers had no incentive to try to extend predominance of the bull’s-eye stimulus, as their only task was to accurately perform the aspect ratio discrimination (if anything, encouraging a conservative strategy of reporting bull’s-eye predominance). As such, it is extremely unlikely that observers applied different eye movement strategies for the two ocular configurations of training stimuli, or for stimuli not used during training.

In sum, we show that attentionally demanding training can substantially alter the normal perceptual experience of binocular rivalry. Instead of automatically witnessing frequent switches between rival images, trained observers mostly experienced the task-relevant rival image, with individual percepts occasionally remaining stable for tens of seconds. These drastic changes were largely eye-specific, supporting recent findings indicating that attentional training could be an effective therapy for amblyopia (32). Overall, our results highlight the critical role of early visual mechanisms in resolving binocular rivalry conflict, and show that typically modest effects of attention on binocular rivalry dynamics can be expanded through perceptual training.

Materials and Methods

Observers.

Six observers (two females, four males) participated in the study. Data for one observer were excluded because debriefing revealed a failure to follow task instructions during training. Two observers (K.C.D., S2; M.D.M., S5) are also authors. Observers had little or no prior experience viewing binocular rivalry before starting this study, and all had normal or corrected-to-normal vision. With respect to the target sample size, our goal was to approximate the sample size of other attention and rivalry studies (e.g., frequently cited ref. 6 tested four observers). Key analyses were conducted at both the individual and group levels. All procedures were approved by the Research Subjects Review Board at the University of Rochester, and all observers gave informed consent before participation. Raw data are available on request.

Apparatus.

Stimuli were created using the Psychophysics Toolbox and were shown on a Sony GDM-FW900 monitor (1,024 × 640 resolution) at 85 Hz. Observers viewed the stimuli through a mirror stereoscope placed 68 cm from the display. All observers were trained to adjust the stereoscope and did so at the start of each session (and as necessary during each session).

Training Using the Aspect Ratio Task.

Throughout training, the “trained eye” viewed a bull’s-eye stimulus, while the other eye (“untrained eye”) viewed a pinwheel stimulus (Fig. 1A). Both stimuli underwent synchronous counterphase flicker (1.4 Hz). The dynamic nature of the stimuli was intended to mask visual transience caused by changes in aspect ratio of the bull’s-eye stimulus. These aspect ratio changes occurred with 0.5 probability at 0.7 Hz. Specifically, the bull’s-eye stimulus was always stretched tall or wide by a small percentage of its diameter. The magnitude of this change was adaptively adjusted during training to ensure a difficult task throughout [first session, 5.7 ± 1.8%; final session, 3.3 ± 0.7%; t(4) = 4.41, P = 0.012]. Observers were instructed to continuously track the state of the bull’s-eye stimulus (“tall” vs. “wide”) whenever it was dominant by pressing and holding one of two keys. Observers made no report during periods of pinwheel dominance. The pinwheel stimulus was chosen to be locally orthogonal to the bull’s-eye—a key factor for ensuring crisp rivalry transitions. However, as internal pinwheel features would be largely unaffected by small aspect ratio changes of the whole stimulus, this precluded an effective counterbalancing of the aspect ratio task for both stimuli.

Tracking accuracy was evaluated at 0.7 Hz—correct responses were those that matched the stimulus state from the previous 1,412 ms. As detailed in the original report of this task (6), these same responses were used to estimate bull’s-eye dominance durations. It is worth noting that observers had no incentive to report the bull’s-eye stimulus more often as training progressed. In fact, higher aspect ratio tracking accuracy (i.e., observers’ only task) could be achieved by adopting a conservative criterion of when to track aspect ratio changes. Previous results show that this method provides a slightly conservative estimate of the task-relevant stimulus predominance (6).
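As a hedged sketch of this scoring scheme (the bin structure, value encodings, and function name are assumptions for illustration, not the authors' analysis code), tracking accuracy and bull's-eye predominance can both be recovered from the same response stream:

```python
def score_tracking(stim_states, responses):
    """stim_states: 'tall'/'wide' stimulus state per ~1,412-ms
    evaluation bin. responses: 'tall'/'wide' when a key was held,
    None when no key was held (bull's-eye reported suppressed),
    sampled on the same bins. Returns (accuracy over reported bins,
    bull's-eye predominance)."""
    reported = [(s, r) for s, r in zip(stim_states, responses)
                if r is not None]
    if not reported:
        return 0.0, 0.0
    # Accuracy: fraction of reported bins whose keypress matched the state
    accuracy = sum(s == r for s, r in reported) / len(reported)
    # Predominance: fraction of all bins with any bull's-eye report
    predominance = len(reported) / len(stim_states)
    return accuracy, predominance
```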

Stimuli were shown within a 1.67-deg-diameter fusion frame, with actual stimuli subtending 1.33 deg ± aspect ratio change for the bull’s-eye (which, at the end of the training, was 0.044 deg). The nondominant eye was selected as the trained eye, as determined by a “hole-in-card” sighting dominance test (left for S2 and S4; right for S1, S3, and S5). Given that sighting eye dominance has a minimal effect on eye differences in rivalry predominance (33), both rival stimuli were presented at 25% contrast. An exception was made for S5, who demonstrated a strong eye imbalance in rivalry dynamics during the orientation session. To correct for this, the pinwheel and bull’s-eye stimuli were presented at 15% and 60% contrast, respectively. This resulted in roughly equal predominance of rival stimuli during the pretest (Fig. 1C).

All observers trained for 12 sessions, except for S5 who trained for 24 sessions. We extended training for S5 because he, unlike other observers, started with a severe eye imbalance. Observers completed at least four but not more than five sessions per week (on separate days), at roughly the same time each day. S1, S2, and S5 completed 6 training blocks per session (∼27 min of binocular rivalry/day), whereas S3 and S4 completed 12 training blocks per session (∼54 min/d). Although we did find some indications that longer training sessions resulted in stronger posttraining effects, each individual observer exhibited highly significant posttraining changes (Results). Each block of binocular rivalry lasted 268 s, with the first 14 s excluded from analysis (to avoid a brief phase immediately following rival stimulus onset known to be very susceptible to attentional modulation) (5, 23). To analyze training data, we computed the proportion predominance of the bull’s-eye and pinwheel stimulus during each training block. The session average was determined by averaging across all blocks in a given session (Fig. 1B).

Pretest and Posttest Measures.

Pretest and posttest measures included five binocular rivalry tasks (task order counterbalanced across sessions), completed over 4–8 d. Four tests (Fig. 2) used the bull’s-eye and pinwheel stimuli (same stimuli as training). In two tests (Fig. 2 A and B), observers performed the same aspect ratio task as during training in each of two ocular configurations. The other two tests (Fig. 2 C and D; one in each ocular configuration) only required observers to track rivalry alternations of the two stimuli. In all tests, observers were required to make a forced choice judgment regarding stimulus dominance (i.e., they could not report mixtures) (6). For conditions that used the aspect ratio training task, we assessed whether training effects occurred within the pretest or posttest and found no significant changes [first vs. second half of the data; pretest: t(4) = 0.6, P = 0.58; posttest: t(4) = 0.6, P = 0.56].

For a fifth test, observers tracked rivalry of orthogonal grating stimuli (1.33-deg diameter; spatial frequency = 4 cycles/deg; Fig. 3C, Right). Stimulus-to-eye pairings were randomized across observers and maintained across the pretest and posttest. The relative contrast of the left and right eye gratings was adjusted during an introductory session to result in roughly equal predominance of each eye during the pretest sessions. Mixed percepts were recorded while observers viewed grating stimuli, and accounted for only 6.5% of the total time with no difference between pretest and posttest (Mann–Whitney tests: all five z < 1.6, all P > 0.13). Two observers (S3 and S4) also completed an additional pretest and posttest condition in which they tracked rivalry between a face and house stimulus (Fig. 3C, Right). The face and house images were obtained from the MIT-CBCL face recognition database (34) and a Google Images search, respectively. Each image was presented within a vertical elliptical frame matched to the starting aspect ratio for the bull’s-eye stimulus. Reports of mixed percepts were more common while viewing face/house stimuli. This is likely because there are local regions with less pronounced stimulus conflict for complex face/house images. Moreover, mixed percepts also increased from pretest to posttest (Mann–Whitney tests: S3: 21–45%, z = 3.2, P < 0.01; S4: 37–55%, z = 2.5, P = 0.01). To match data from the other conditions, these mixture reports were not included in the predominance analysis (Fig. 3C). Here, we note that this increased predominance of mixed percepts corresponded with a decrease in predominance of the untrained eye (3.6× and 2.8× decrease, respectively; both z > 3, both P < 0.01), suggesting that it was harder to fully suppress the trained eye after training.

Data Analysis

For all conditions, we report the proportion predominance of a stimulus out of the total time for which either stimulus was exclusively dominant (Figs. 1 B and C, and 3). To compute 95% confidence intervals (CIs), we first pooled dominance duration data across all pretest and posttest blocks. Then, we generated 10,000 bootstrap samples from each dataset (i.e., produced new samples of the same size as the measured sample by drawing randomly, with replacement, from the measured sample). Finally, we calculated stimulus predominance for each sample and computed 95% CIs from these samples (35). These results are plotted as error bars in Figs. 1C and 3C. We also examined distributions of dominance durations both before and after training. We first normalized each observer’s data to his/her pretest median percept duration for the task-relevant stimulus and computed the histogram of dominance durations, averaging across observers to generate the plot in Fig. 1D.
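The bootstrap procedure can be illustrated as follows. This is a simplified sketch under one plausible reading of the text, not the authors' code: each stimulus's pooled dominance durations are resampled with replacement, predominance is recomputed per resample, and the 95% CI is read off the percentiles.

```python
import random

def bootstrap_predominance_ci(bull_durs, pin_durs, n_boot=10000, seed=1):
    """95% bootstrap CI for bull's-eye predominance (illustrative sketch).
    bull_durs, pin_durs: pooled dominance durations (s) for each stimulus."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        # Resample each duration set with replacement, same size as measured.
        b = [rng.choice(bull_durs) for _ in bull_durs]
        p = [rng.choice(pin_durs) for _ in pin_durs]
        stats.append(sum(b) / (sum(b) + sum(p)))
    stats.sort()
    # 2.5th and 97.5th percentiles of the bootstrap distribution.
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot) - 1]
```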

To quantify effects of attention, we computed attentional modulation indices (AMIs) by calculating the ratio of median dominance durations under “Aspect Ratio Task” versus “Rivalry Tracking Only” baseline (6). This index was computed separately for each stimulus, and separately for pretest and posttest. Here, values over 1 indicate that observers lengthened the median dominance duration when completing the aspect ratio task compared with only tracking rivalry alternations, and vice versa for values under 1. Because observers could manifest selective attentional control both by increasing the duration of the task-relevant stimulus [AMItask-relevant > 1 (6)] and/or by decreasing the duration of the task-irrelevant stimulus [AMItask-irrelevant < 1 (7)], we developed a combined metric, attentional selectivity index (ASI):

ASI = (AMI_task-relevant + AMI_task-irrelevant^−1) / 2. [S1]

Here, ASI > 1 indicates net selective attentional control in favor of the task-relevant stimulus. To assess significance, we estimated 95% CIs for individual observers by generating 10,000 bootstrap samples from the Aspect Ratio Task conditions and computed AMItask-relevant and AMItask-irrelevant. Here, we used the measured median duration from the corresponding Rivalry Tracking Only baseline as a fixed denominator. We then calculated ASI for each sample, resulting in 10,000 bootstrap estimates of ASI (independently for pretest and posttest) and corresponding 95% CIs. To estimate statistical significance, we computed the proportion of bootstrap samples (1-α) for which posttest ASI was greater than pretest ASI. Because we expected a priori an increase in AMI after training, P values were defined as α (i.e., the proportion of samples in which pretest ASI was greater than or equal to posttest ASI; note that the smallest P value from such analysis is P < 10−4). We used this same method to estimate statistical significance when comparing ASI changes across ocular configurations.
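The AMI and ASI definitions translate directly into code. This sketch assumes Eq. S1 inverts the task-irrelevant AMI (consistent with the text's note that AMI_task-irrelevant < 1 reflects selective control), so that both components contribute values above 1 when attention favors the task-relevant stimulus.

```python
def ami(task_median, baseline_median):
    """Attentional modulation index: median dominance duration under the
    Aspect Ratio Task divided by the Rivalry Tracking Only baseline."""
    return task_median / baseline_median

def asi(ami_relevant, ami_irrelevant):
    """Attentional selectivity index (Eq. S1, assuming the task-irrelevant
    AMI is inverted): ASI > 1 indicates net selective control in favor of
    the task-relevant stimulus."""
    return (ami_relevant + 1.0 / ami_irrelevant) / 2.0
```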

Supplemental Modeling Results

To better understand the nature of visual processing changes caused by our training paradigm, we used a recently developed computational model of interocular suppression that simulates how neural responses are impacted by the confluence of top-down and bottom-up factors (22). This model is built around a normalization framework (36, 37) and extends the idea that attention plays a key role in interocular suppression (21). The model explicitly proposes two attentional components that modulate sensory responses when the two eyes view conflicting images: (i) a task-related attentional component that is driven by observers following task instructions (a top-down component), and (ii) a stimulus-driven attentional component that captures bottom-up salience of a perceptually dominant stimulus.

As discussed in the main text, our empirical results suggest that training affected both bottom-up stimulus drive and top-down task-related attention. This makes the Li et al. (22) model suitable for a computational inquiry into our findings. For consistency with Li et al. (22), we will use the terms “task-driven” and “stimulus-driven” attention to refer to these two components. Here, with an aim to “unpack” the nature of training-induced changes in binocular rivalry, we considered 12 implementations of the Li et al. (22) model. In these, we allowed multiple configurations of individual parameters to vary to help explain the origin of training effects and quantified the results as follows.

General Method.

Model simulations were based on the freely available MATLAB code for implementing the Li et al. (22) model (available at www.cns.nyu.edu/heegerlab).

We set up a series of conditions through which we could simulate model responses to the bull’s-eye and pinwheel stimuli. By controlling which parameters were allowed to vary in each case, we hoped to identify processing changes that could account for our empirical results. For example, we can simulate eye-specific changes by using separate parameters to fit results from the trained and the untrained ocular configurations, and can remove said eye specificity by using only one parameter to fit both conditions. As detailed below, we constrained bottom-up parameters based on the results from the Rivalry Tracking Only condition, assuming that additional top-down factors are involved in the Track Aspect Ratio condition.

The Li et al. (22) model does not make direct predictions about ongoing binocular rivalry dynamics. Rather, it produces estimates of the effective stimulus strength for each eye in terms of relative simulated response levels. To apply the model to our results, we made the following assumption: the relative predominance of each eye during extended viewing of binocular rivalry stimuli is related to the relative response levels associated with the two eyes. Namely, a large interocular difference in response level predicts greater predominance of the eye driving the larger response. To obtain simulated predominance values under this assumption, we first used the below-described models to estimate response levels for each of the two stimuli (i.e., bull’s-eye and pinwheel). Next, we transformed these response levels to simulated predominance values. Here, we took the ratio of bull’s-eye to pinwheel responses (rbp) and computed simulated predominance as follows:

Predominance_simulated = r_bp^2 / (1 + r_bp^2). [S2]

This simple transformation ensures that the simulated predominance of the stronger stimulus is between 0.5 and 1, and that effects reach asymptote for increasing differences in responses between the two stimuli. Notably, the key conclusions from the modeling analyses are robust to various possible transformations (e.g., removing the squaring for a less aggressive asymptote, or using a cubic term to reach asymptote faster) as long as stronger responses map onto higher predominance and the transformation allows enough dynamic range to capture a wide range of predominances.
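In code, the transformation in Eq. S2 is a one-liner; this illustrative helper takes the two simulated response levels directly.

```python
def simulated_predominance(r_bull, r_pin):
    """Eq. S2: map the ratio of simulated responses (bull's-eye / pinwheel)
    to a predominance value. Equal responses give 0.5; the stronger
    stimulus's predominance approaches 1 asymptotically."""
    r_bp = r_bull / r_pin
    return r_bp ** 2 / (1.0 + r_bp ** 2)
```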

To fit the data, we used the predominance data in Fig. 3A as a benchmark for finding parameter values that minimized the sum of squared error (SSE) between observed and simulated predominances (using the fmincon function in MATLAB). Similar results are obtained if models are constrained by individual observer data. To compare models, we calculated SSE by summing across the SSEs from each configuration. To account for differences in model complexity, we computed the Akaike information criterion (AIC) (38, 39). This metric quantifies the information loss in each model, taking into account both the goodness of fit (SSE for each model) and the number of free parameters. For both bottom-up and top-down models, we computed Akaike weights, which estimate the probability that a given model is the best-fitting model among models tested (38, 39).
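The model-comparison step can be sketched as follows: a generic least-squares AIC and Akaike-weight computation in the spirit of refs. 38 and 39. The exact formulas used in the authors' scripts are not given in the text, so this is an assumption-laden illustration.

```python
import math

def aic_from_sse(sse, n, k):
    """AIC for a least-squares fit (Burnham & Anderson form):
    n * ln(SSE / n) + 2k, where n = data points, k = free parameters."""
    return n * math.log(sse / n) + 2 * k

def akaike_weights(aics):
    """Akaike weights: relative probability that each model is the
    best-fitting model among those tested."""
    a_min = min(aics)
    rel = [math.exp(-(a - a_min) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]
```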

To estimate model parameters of interest (as detailed below), we first fit the Rivalry Tracking Only conditions and then explored which top-down changes needed to be made to the model to account for the results in the Aspect Ratio Task conditions. In effect, then, we explored two separate models in sequence, with the parameter values obtained for Rivalry Tracking Only serving as the bottom-up parameters for the top-down models. This was done for both pretraining and posttraining data.

Pretraining Model.

Aside from changes to stimulus size and contrast (set to match our experiment), we set all pretraining parameters to the values obtained by Li et al. (22) except for those representing the magnitudes of bottom-up stimulus drive and top-down (i.e., task-related) attention. There was no eye specificity in the pretraining model, matching the final model proposed by Li et al. (22). As both of our stimuli were relatively small and, more importantly, the same size, we expect that similar results will be obtained for a wide range of suprathreshold contrasts (21, 22). Feature space in the Li et al. (22) model is 1D, as their stimuli varied only in orientation. Although our stimuli have more complex representations in feature space, they are largely orthogonal locally (main text, Fig. 2). Therefore, we preserved the 1D feature representation in the model and explored the models below with simulated orthogonal gratings.

To match the model to the properties of task-related attention in our study, we made two adjustments. First, the feature tuning of task-related attention matched the features of the dominant rather than the suppressed stimulus. This change reflects differences in the tasks used in the two studies: our observers detected aspect ratio changes in a perceptually dominant stimulus, whereas the observers in Li et al. (22) detected orientation changes in a suppressed stimulus. Second, we fixed the magnitude of task-related attention at 0 when simulating the Rivalry Tracking Only condition. That is, task-related attention was only fit to the data from the Aspect Ratio Task condition. Although there is evidence that top-down attention plays a role in rivalry tracking, this seems to be limited to discerning and reporting stimulus alternations (e.g., ref. 30). To make sure that our results are not affected by this assumption, we also tested the models where the top-down attention parameter was fixed at nonzero values in Rivalry Tracking Only, and the overall pattern of results was similar.

As there was no eye specificity in the pretraining model, we effectively found the parameter values that best matched observers' predominance data across the two ocular configurations. On average, the simulated predominance values from the model closely approximated the across-configuration average data for each observer (Fig. 3B). Best-fit parameter values for the strength of stimulus-driven and task-driven attention were similar, although somewhat smaller than values obtained by Li et al. (22).

Posttraining Models.

We considered several distinct ways in which training may affect visual processing, including models that allowed changes to be partially or exclusively eye-specific, and models that did not. As a preview, our analyses found that models 1 and 4 provided good fits to the data.

Bottom-up models.

Model 1: Strength of stimulus-driven attention varies, impacted by both stimulus- and eye-specific components (two parameters).

This model explored whether changes in stimulus-driven attention (i.e., bottom-up stimulus salience) could account for training-induced changes in the Rivalry Tracking Only condition. Here, we allowed changes to be both eye and stimulus specific. Specifically, when the trained stimulus was presented to the trained eye (trained ocular configuration), the magnitude of stimulus-driven attention was the sum of eye- and stimulus-specific components. In the untrained ocular configuration, the eye-specific component affected the response strength of the untrained pinwheel stimulus (now presented in the trained eye), whereas the stimulus-specific component remained with the trained bull’s-eye stimulus (now presented in the untrained eye).
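Model 1's additive composition of eye- and stimulus-specific components can be written compactly. The additive form follows the description above ("the sum of eye- and stimulus-specific components"); the function and parameter names are our own illustration.

```python
def stim_attention_magnitude(base, eye_comp, stim_comp,
                             in_trained_eye, is_trained_stimulus):
    """Sketch of model 1: the magnitude of stimulus-driven attention for a
    stimulus gains an eye-specific component when it is shown to the
    trained eye and a stimulus-specific component when it is the trained
    (bull's-eye) stimulus."""
    return (base
            + (eye_comp if in_trained_eye else 0.0)
            + (stim_comp if is_trained_stimulus else 0.0))
```

In the trained ocular configuration, the bull's-eye receives both components; in the untrained configuration, the eye-specific component follows the pinwheel (now in the trained eye) while the stimulus-specific component stays with the bull's-eye.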

Model 2: Strength of stimulus-driven attention varies, impacted only by a stimulus-specific component (one parameter).

In this more constrained version of model 1, we only allowed stimulus-specific changes of stimulus-driven attention. That is, this model did not allow for eye specificity—all training was attributed to changes in the drive from bull’s-eye features. Thus, like the pretraining model, this model in practice fits the average data across both ocular configurations.

Model 3: Strength of stimulus-driven attention varies, impacted only by an eye-specific component (one parameter).

Another more constrained version of model 1, this model assumes that changes in the magnitude of stimulus-driven attention are entirely specific to the trained eye, irrespective of what stimulus is presented. Thus, the magnitude of stimulus-driven attention for the bull’s-eye is impacted in the trained configuration, whereas the same is true for the pinwheel in the untrained configuration (as it is presented to the trained eye).

Model 4: Excitatory feature kernel, with eye specificity (two parameters).

This model allows the excitatory feature kernel, which determines the width of the excitatory response in feature space when driven by the bull’s-eye stimulus, to vary after training, representing either a narrowing or broadening of the neural population’s response. We allowed this to vary separately for each ocular configuration, whereas the pinwheel feature kernel was fixed to the pretraining value (i.e., we assumed training did not impact neural population responses to the untrained image).

Model 5: Excitatory feature kernel, without eye specificity (one parameter).

A more constrained version of model 4. Here, the parameter for excitatory tuning was allowed to vary, but it did not differ across ocular configurations of the stimuli.

Model 6: Inhibitory feature kernel, with eye specificity (two parameters).

Identical to model 4, except that the parameter for inhibitory (rather than excitatory) tuning was allowed to vary. This parameter influences the tuning of surround suppression in feature space.

Model 7: Interocular normalization weight differs in each ocular configuration (two parameters).

This model allows the interocular normalization weight to vary across the two ocular configurations. That is, there is one weight for the trained configuration, and one for the untrained configuration. This parameter determines how strongly the stimulus from one eye contributes to the normalization pool by which the other eye’s responses are divided. Results from Li et al. (22) suggested that, for some observers, this parameter could account for differences in suppression strength resulting from viewing images in opposite ocular configurations. Therefore, we reasoned that allowing this parameter to vary across ocular configurations might account for observed eye specificity in our posttraining results.

Results.

The modeling results support our conclusions based on empirical data (main text). Namely, we find evidence for both bottom-up sensory changes and their partial eye specificity. Models 4 (eye-specific changes to excitatory tuning) and 1 (eye-specific changes to stimulus-driven attention) provided the best fits to the data. From Akaike weights, models 4 and 1 had a 61.7% and 38.3% chance of being the best model, respectively. Taken together, all other models had a 0.03% chance of including the best-fitting model. Critically, this includes the more constrained versions of models 4 and 1 that do not allow any eye specificity. Models that do not allow for any eye specificity essentially fit the average of an observer’s data across the trained and untrained configurations, resulting in a dramatic increase in SSE relative to models that can capture these asymmetries (>10^4). Although we cannot conclusively distinguish between models 4 and 1 based on this result, it is clear that the other models fit the empirical data poorly. This included model 7, in which interocular normalization weights were allowed to vary. Preliminary simulations revealed that changes in this parameter had the effect of magnifying slight imbalances between the stimuli caused by stimulus-driven attention. Thus, it is possible that changes in interocular normalization weight coupled with smaller changes to stimulus-driven attention could account for our results (i.e., a hybrid of models 1 and 7). However, limitations in the size of our dataset preclude us from directly testing this hybrid model.

In sum, we find strong computational evidence for conclusions that include either eye-specific changes in stimulus-driven attention (model 1) or eye-specific changes in excitatory tuning (model 4). We will consider each of these models separately as a baseline bottom-up model while exploring top-down models next.

Top-down models.

We next explored models testing potential impacts of training on task-related attention. These models were designed to test two key conclusions from our empirical data: (i) that training of task-driven attention in addition to bottom-up changes is necessary to account for our results, and (ii) that changes to task-driven attention were partially eye specific. To test the first conclusion, we considered a model in which top-down attention did not change from the pretraining model (model 8). Next, we explored two sets of models that allow training-induced changes in the strength of task-related attention. In one set (models 9 and 11), we allowed for eye-specific changes in task-driven attention, motivated by AMI results that suggest eye specificity (Fig. 4A). We also explored models (models 10 and 12) where the magnitude of task-driven attention changed with training, but in a manner that did not depend on ocular configuration. These models inherited bottom-up training parameters either from model 4 (models 8, 11, and 12) or model 1 (models 9 and 10). Thus, when computing SSE and AIC for the models, we considered the fit to both Rivalry Tracking Only and Aspect Ratio Task conditions. As a preview, our analyses found that models 9 and 11 provided good fits to the data.

Model 8: No effects of training on task-driven attention (zero parameters).

In this model, we assume that training only affected bottom-up processes. This model inherits the model 4 parameters to reflect eye-specific bottom-up training (as the best-fitting bottom-up model), but assumes that the top-down parameters are the same as in pretraining. This model tests our conclusion that training reflects changes to both sensory and attentional aspects of visual processing. (We also considered a version of model 8 that builds on model 1; the result was a slightly worse fit than when starting with model 4.)

Model 9: Strength of task-driven attention varies, separate parameter in each ocular configuration (two parameters).

This model allows for eye-specific changes in the strength of task-driven attention as a result of training. It assumes that bottom-up changes in stimulus drive were due to eye-specific changes in the magnitude of stimulus drive, that is, it builds off of the results from model 1.

Model 10: Strength of task-driven attention varies, same parameter for both ocular configurations (one parameter).

Same as model 9, except the strength of task-driven attention varies after training in a manner that does not depend on ocular configuration.

Model 11: Strength of task-driven attention varies, separate parameter in each ocular configuration (two parameters).

Same as model 9, except that this model assumes that excitatory tuning narrowed in an eye-specific manner (i.e., it builds from a baseline determined by model 4 results).

Model 12: Strength of task-driven attention varies, same parameter for both ocular configurations (one parameter).

Same as model 11, except that the parameter for the magnitude of task-driven attention is the same for both ocular configurations.

Results.

Modeling results support the two main conclusions from the empirical data. First, we find that model 8, which involves no training-induced changes in top-down attention, provides a poor fit to the data (less than a 0.07% chance of being the best model). This indicates that training impacted both bottom-up and top-down aspects of visual processing.

Based on Akaike weights, models 9 and 11 provided good fits to the data, with 72.7% and 27.1% chances of being the best model, respectively (i.e., we do not have strong evidence to rule out either model). Taken together, all other models had a 0.17% chance of including the best-fitting model. As these poor-fitting models include the versions of models 9 and 11 without eye-specific variations in task-driven attention, this modeling analysis supports our conclusion that training-induced changes in task-driven attention were eye specific.

Conclusion

The results from our exploration of the Li et al. (22) model support the major conclusions drawn from our empirical data. We find evidence for training effects on task-driven attention paired with changes in bottom-up stimulus processing caused by either enhanced bottom-up stimulus saliency or changes in excitatory stimulus tuning. For both bottom-up and top-down changes, we found evidence for eye specificity.

Acknowledgments

We thank Randolph Blake and Woon Ju Park for manuscript comments. We thank Hsin-Hung Li for assistance in implementing the model of interocular suppression. This research was supported by NIH National Eye Institute Awards EY014999, P30-EY001319, T32-EY007125, T32-EY007135, and F32-EY025514.

Footnotes

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1602722113/-/DCSupplemental.

References

1. Wheatstone C. Contributions to the physiology of vision. Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philos Trans R Soc Lond. 1838;128:371–394.
2. Helmholtz HV. Treatise on Physiological Optics, trans Southall JPC. Dover; New York: 1925.
3. Meng M, Tong F. Can attention selectively bias bistable perception? Differences between binocular rivalry and ambiguous figures. J Vis. 2004;4(7):539–551. doi: 10.1167/4.7.2.
4. Dieter KC, Melnick MD, Tadin D. When can attention influence binocular rivalry? Atten Percept Psychophys. 2015;77(6):1908–1918. doi: 10.3758/s13414-015-0905-6.
5. Mitchell JF, Stoner GR, Reynolds JH. Object-based attention determines dominance in binocular rivalry. Nature. 2004;429(6990):410–413. doi: 10.1038/nature02584.
6. Chong SC, Tadin D, Blake R. Endogenous attention prolongs dominance durations in binocular rivalry. J Vis. 2005;5(11):1004–1012. doi: 10.1167/5.11.6.
7. Hancock S, Andrews TJ. The role of voluntary and involuntary attention in selecting perceptual dominance during binocular rivalry. Perception. 2007;36(2):288–298. doi: 10.1068/p5494.
8. Simons DJ, Chabris CF. Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception. 1999;28(9):1059–1074. doi: 10.1068/p281059.
9. Rensink RA, O’Regan JK, Clark JJ. To see or not to see: The need for attention to perceive changes in scenes. Psychol Sci. 1997;8(5):368–373.
10. Dieter KC, Tadin D. Understanding attentional modulation of binocular rivalry: A framework based on biased competition. Front Hum Neurosci. 2011;5:155. doi: 10.3389/fnhum.2011.00155.
11. Desimone R, Duncan J. Neural mechanisms of selective visual attention. Annu Rev Neurosci. 1995;18(1):193–222. doi: 10.1146/annurev.ne.18.030195.001205.
12. Suzuki S, Peterson MA. Multiplicative effects of intention on the perception of bistable apparent motion. Psychol Sci. 2000;11(3):202–209. doi: 10.1111/1467-9280.00242.
13. Hol K, Koene A, van Ee R. Attention-biased multi-stable surface perception in three-dimensional structure-from-motion. J Vis. 2003;3(7):486–498. doi: 10.1167/3.7.3.
14. Brascamp JW, Blake R. Inattention abolishes binocular rivalry: Perceptual evidence. Psychol Sci. 2012;23(10):1159–1167. doi: 10.1177/0956797612440100.
15. Zhang P, Jamison K, Engel S, He B, He S. Binocular rivalry requires visual attention. Neuron. 2011;71(2):362–369. doi: 10.1016/j.neuron.2011.05.035.
16. Dieter KC, Brascamp JW, Tadin D, Blake R. Does visual attention drive the dynamics of perceptual bistability? Atten Percept Psychophys. 2016;78(7):1861–1873. doi: 10.3758/s13414-016-1143-2.
17. Lack LC. Selective Attention and the Control of Binocular Rivalry. Mouton; Paris: 1978.
18. Klink PC, Brascamp JW, Blake R, van Wezel RJ. Experience-driven plasticity in binocular vision. Curr Biol. 2010;20(16):1464–1469. doi: 10.1016/j.cub.2010.06.057.
19. Suzuki S, Grabowecky M. Long-term speeding in perceptual switches mediated by attention-dependent plasticity in cortical visual processing. Neuron. 2007;56(4):741–753. doi: 10.1016/j.neuron.2007.09.028.
20. Paffen CL, Alais D. Attentional modulation of binocular rivalry. Front Hum Neurosci. 2011;5:105. doi: 10.3389/fnhum.2011.00105.
21. Ling S, Blake R. Normalization regulates competition for visual awareness. Neuron. 2012;75(3):531–540. doi: 10.1016/j.neuron.2012.05.032.
22. Li H-H, Carrasco M, Heeger DJ. Deconstructing interocular suppression: Attention and divisive normalization. PLoS Comput Biol. 2015;11(10):e1004510. doi: 10.1371/journal.pcbi.1004510.
23. Chong SC, Blake R. Exogenous attention and endogenous attention influence initial dominance in binocular rivalry. Vision Res. 2006;46(11):1794–1803. doi: 10.1016/j.visres.2005.10.031.
24. Ooi TL, He ZJ. Binocular rivalry and visual awareness: The role of attention. Perception. 1999;28(5):551–574. doi: 10.1068/p2923.
25. Zhang P, Jiang Y, He S. Voluntary attention modulates processing of eye-specific visual information. Psychol Sci. 2012;23(3):254–260. doi: 10.1177/0956797611424289.
26. Carter OL, et al. Meditation alters perceptual rivalry in Tibetan Buddhist monks. Curr Biol. 2005;15(11):R412–R413. doi: 10.1016/j.cub.2005.05.043.
27. Blake R. A neural theory of binocular rivalry. Psychol Rev. 1989;96(1):145–167. doi: 10.1037/0033-295x.96.1.145.
28. Blake R, Logothetis N. Visual competition. Nat Rev Neurosci. 2002;3(1):13–21. doi: 10.1038/nrn701.
29. Tong F, Meng M, Blake R. Neural bases of binocular rivalry. Trends Cogn Sci. 2006;10(11):502–511. doi: 10.1016/j.tics.2006.09.003.
30. Brascamp J, Blake R, Knapen T. Negligible fronto-parietal BOLD activity accompanying unreportable switches in bistable perception. Nat Neurosci. 2015;18(11):1672–1678. doi: 10.1038/nn.4130.
31. Blake R, Cormack RH. On utrocular discrimination. Percept Psychophys. 1979;26(1):53–68. doi: 10.3758/bf03202005.
32. Ooi TL, Su YR, Natale DM, He ZJ. A push-pull treatment for strengthening the “lazy eye” in amblyopia. Curr Biol. 2013;23(8):R309–R310. doi: 10.1016/j.cub.2013.03.004.
33. Ooi TL, He ZJ. Sensory eye dominance. Optometry. 2001;72(3):168–178.
34. Weyrauch B, Huang J, Heisele B, Blanz V. Component-based face recognition with 3D morphable models. First IEEE Workshop on Face Processing in Video. IEEE; Washington, DC: 2004.
35. Zheleznyak L, Alarcon A, Dieter KC, Tadin D, Yoon G. The role of sensory ocular dominance on through-focus visual performance in monovision presbyopia corrections. J Vis. 2015;15(6):17. doi: 10.1167/15.6.17.
36. Carandini M, Heeger DJ. Normalization as a canonical neural computation. Nat Rev Neurosci. 2011;13(1):51–62. doi: 10.1038/nrn3136.
37. Reynolds JH, Heeger DJ. The normalization model of attention. Neuron. 2009;61(2):168–185. doi: 10.1016/j.neuron.2009.01.002.
38. Burnham KP, Anderson DR. Model Selection and Multimodel Inference. Springer; New York: 2002.
39. Wagenmakers EJ, Farrell S. AIC model selection using Akaike weights. Psychon Bull Rev. 2004;11(1):192–196. doi: 10.3758/bf03206482.
