The Yale Journal of Biology and Medicine. 2019 Mar 25;92(1):21–28.

The Role of Attention in Learning in the Digital Age

Jason M Lodge a,b,*, William J Harrison b
PMCID: PMC6430174  PMID: 30923470

Abstract

New and evolving technologies provide great opportunities for learning. With these opportunities, though, come questions about the impact of new ways of acquiring information on our brain and mind. Many commentators argue that access to the Internet is having a persistent detrimental impact on the brain. In particular, attention has been implicated as a cognitive function that has been negatively impacted by the use of digital technologies for learning. In this paper, we critique this claim by analyzing the current understanding of the cognitive neuroscience of attention and research in educational settings on how technologies are influencing learning. Across the two bodies of literature, a complex situation emerges, placing doubt on the claim that the use of digital technologies for learning is negatively affecting the brain. We therefore suggest a more systemic approach to understanding the relationship between technologies and attention, one involving researchers examining the relationship at different levels, from the laboratory to the real world.

Keywords: Attention, learning, educational technology, memory

The Role of Attention in Learning in the Digital Age

The means and media through which people can learn are fundamentally changing. Technology increasingly shapes the ways in which people acquire, update, and correct their understanding. The emergence of mobile networked devices means that information can now be accessed anywhere and at any time, given a connection to the Internet. This new information reality has created substantial affordances for learning, both in formal education and in informal settings. These opportunities have seemingly not come without a cost. Many scholars and commentators [e.g. 1-3] argue that the ease with which we can now access information is negatively and persistently impacting our capacity to learn, understand, and interact with others. In particular, attention is implicated as a key factor in the apparent negative influence of technology on learning in the digital age. Our aim in this paper is to critically evaluate the claim that technology is simply causing ongoing attention deficits that can be traced to neurophysiology. As we will demonstrate, the situation is vastly more complicated than it may seem on the surface.

The impact of emerging technologies on various mental functions has been a topic of some consternation for decades, if not centuries. For example, the invention of the printing press led to much speculation about how much of a negative impact the wide distribution of books and other print material would have on memory [cf. 4]. Concerns about the influence digital technologies seemingly have on people have increased in recent years. For example, Baroness Susan Greenfield has claimed for some years that digital technologies are having a negative impact on our brains [e.g. 5,6]. These concerns have been echoed by many others, including journalists [e.g. 2] and other commentators [e.g. 7], and span possible negative impacts on everything from attention, to memory, to the capacity for social interaction. It appears, therefore, that there is a growing tide of concern about the detrimental effects of digital technologies on everything from foundational cognitive functions through to more complex social dynamics.

In the research literature, the claims about the effects of digital technologies on learning, attention, memory, and social interaction are more nuanced than those in the popular media, but they point to inconsistencies and contradictions. For example, Loh and Kanai [8] critically examined the evidence in the neuroscience literature of the impact of the Internet on the brain. They found that, although there are some examples of neuroscience studies pointing to changes in the brain as a result of Internet use, the evidence is far from conclusive. Within an educational setting, Junco and Cotten [9] analyzed the effect of digital multitasking on student learning in a formal educational environment. The results of their study show robust evidence of the negative impact of multitasking in digital environments on academic performance (an issue we will return to later): the more students engaged in multiple activities in digital environments, the worse their academic performance. Furthermore, there is one area where there has been a substantial body of integrated research examining the influence of technology: reading and comprehension [for overview, see 10]. There is a growing literature comparing reading in a digital environment with reading from printed material. In a review and meta-analysis of this research, Delgado and colleagues [11] found a robust disadvantage to reading in a digital environment, otherwise known as “screen inferiority” [see also, 12]. Several mechanisms for this effect have been implicated, including the possibility that digital environments are inherently more distracting, with scrolling and other aspects of the digital experience taking away cues and critical cognitive processing capacity from the task of comprehending the material [13]. The literature on the screen inferiority effect suggests that there may be robust evidence of the negative effects of technology on learning. However, the bulk of the research on screen inferiority has been conducted under controlled laboratory conditions. The conclusions of the meta-analyses by both Delgado et al. [11] and Kong, Seo, and Zhai [12] suggest that there are many complicating factors, such as the nature of the content, the amount of study time allocated, and demographic factors, that could influence the effect outside the lab. As such, it remains uncertain when and how technologies influence reading and comprehension and what mechanisms might be responsible. Across the research literature, there are therefore some indicators of the possible negative implications of technology use for learning, but many unanswered questions remain about the implications in the real world.

In order to provide an overview of the impact of technology on learning, Tamim, Bernard, Borokhovski, Abrami, and Schmid [14] conducted a second-order meta-analysis of 40 years of research on the role of technology in learning in educational settings. The analysis revealed that the overall effect size of technology use on learning is Cohen’s d = .33, suggesting a positive influence of technology on learning in classrooms. However, this effect size is below the benchmark of d = .4 recommended by Hattie [15] on the basis of his very large second-order meta-analysis. An effect size of .33 suggests that the use of technology to enhance learning might be positive overall but is not among the most effective ways of enhancing learning. So, while there is much discussion about the potential negative impacts of technologies, there are few examples of robust research findings across different levels of analysis to support this claim, particularly at a physiological level. Aside from the aforementioned research on reading and comprehension in digital environments, much of the emphasis of the research has been on the impact of technologies on learning in the classroom. The research that has been conducted in these contexts is inconclusive, demonstrating both benefits and harms of technology use for learning, attention, and memory. Selwyn [16] describes the polarization of the views in the literature on educational technology as the “booster” and “doomster” positions. Even though the research literature points to some uncertainty about the impact of technologies, there is a general tendency for commentators to see technology as either a savior or as a direct path to a dystopian future.
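For readers less familiar with effect sizes, Cohen’s d is the standardized mean difference between two groups (here, learners taught with and without a given technology). The standard textbook formula, reproduced for reference rather than taken from the cited sources, is:

```latex
% Cohen's d: standardized mean difference between two groups
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

On this scale, d = .33 means that the average learner in the technology-supported condition outperforms the average comparison learner by about a third of a standard deviation.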

Determining how technologies might be impacting broader cognitive functioning, such as that involved in reading and comprehension, and neurophysiology “in the wild” is a complicated undertaking. There are two main reasons for this. Firstly, laboratory-based research in cognitive neuroscience cannot easily be translated into the real world to provide any confidence about possible cause and effect relationships between technologies and the brain [17]. While lab-based research gives some indication about how certain technologies might influence the ways in which people deal with and acquire concepts and ideas under controlled conditions, the capacity to infer from such constrained conditions to the complex real world is limited [18]. Social and environmental factors will heavily influence how these processes occur and how effective they are for people in their day-to-day lives. As technologies become more ubiquitous over time, the skills needed to use the technology become more important than the skills the technology supplants. For example, as calculators and computers have become readily available, the need to carry out calculations by hand has diminished. There may be clear differences in neuronal activity as people calculate with a computer or by hand, but these differences mean little in a world where calculators and computers are so commonly used. The translation process from the laboratory to the complex real world, where these factors need to be considered, is therefore difficult. Correspondingly, the ability to make any meaningful claims about whether technology is causing persistent, high-level changes in neurophysiology on the basis of data collected in the wild is similarly limited. The complexity of the situation makes any correlation between factors difficult to interpret in isolation, including the specific impacts of different technologies.

A second, related issue is that, given the interaction of factors contributing to the ways in which people work with information in the 21st Century, it is difficult to isolate particular factors, both in terms of the specific technologies people use and how they use them, and in terms of the specific cognitive functions those technologies ostensibly influence. This has proven to be the case in research on screen inferiority, for example. The net result is that broad claims are made about the impact of technologies on the brain without any specificity about the exact technology that is causing the problem and under what conditions it is an issue. There is then also a range of factors that are implicated as the point of influence. Some point to the negative impact of digital technologies on motivation, self-regulation, or shallow “engagement” [e.g. 2]. Others make more specific claims about aspects of attention [e.g. 19] and/or memory [e.g. 20]. The influence of technology is then also extended to the physiology of the brain by others still [e.g. 6]. A not insignificant part of the problem of determining whether digital technologies are having a negative impact is that there are many technologies implicated and many places where the technology is having an impact, which ultimately highlights the inherent complexity of the problem.

Beneath all the complexity described here, there is one cognitive function that consistently features as the primary place where technologies are having a negative impact: attention. In an early attempt at understanding how attention might be influenced by technology, Hembrooke and Gay [21] examined differences in memory performance between a group of students allowed access to laptops during a lecture and a group that did not have access. They found a substantial decrement in performance for the group with access to laptops and attributed this poorer performance to the splitting of student attention in this group. In one of many similar studies, Wood and colleagues [22] separated off-task (e.g. social media use) and on-task (e.g. note-taking) activities to determine how much of the negative effect of technology is simply due to doing two things at once. They found that the groups allowed access to social media, texting, and email (off-task activities) performed poorly compared to those engaged in task-related activities. They used these results to argue that digital technologies cause a substantial detriment over and above that of simply doing more than one thing at a time. This conclusion shares a family resemblance with results from neuroscientific studies. For example, Foerde, Knowlton, and Poldrack [23] found a decrease in performance, accompanied by correlated differences in activity in the medial frontal cortex, for participants completing dual tasks as opposed to a single task. In summary, then, there is some merit in the argument that attention is being negatively impacted through the use of technologies, with some evidence of associated neuronal correlates. Much of the evidence to support this claim, however, is specifically focused on the detrimental effects of multitasking, either in highly controlled experimental settings looking at isolated parts of the brain or in highly complex educational contexts [see also, 24]. The body of research therefore does not provide evidence that the impact is occurring at a biological level in the global and persistent manner described by those taking the doomster position on technologies. In essence, the problem that is observed in real-life learning situations is distraction and the negative consequences of constant task-switching. These negative consequences for working and long-term memory have been particularly evident in habitual multitaskers [25]. What remains unclear, then, is exactly how attention is implicated here, what is occurring in the brain over the longer term, and what can be done to address this distraction. To delve into this, we now turn to what is understood about the cognitive neuroscience of attention.

The Cognitive Neuroscience of Attention

Probably the most critical element of attention that is relevant to how information is processed in digital environments is its restricted capacity. Humans have only limited neural resources to process the complexity of the surrounding environment. Moreover, there are an infinite number of ways in which we could act in any given situation at any given time. The cognitive ability to allocate our attention selectively allows us to prioritize only some elements of the environment while filtering out others. A now classic example of such filtering is known as the “cocktail party effect” [26,27]: when standing in a room full of people speaking to one another, relatively little effort is required to tune into only a single speaker of interest. In such an instance, the selected speaker can be understood easily while all surrounding conversations turn into incoherent background noise. This phenomenon, selectively attending to only a single auditory source amongst many, demonstrates the cognitive capacity to voluntarily filter information according to our internal goals. In some cases, however, our attention is captured involuntarily. Consider again selectively listening to only one speaker at a cocktail party, but, seemingly from out of nowhere, you hear your name being spoken by someone you had previously been ignoring. Auditory filtering would automatically shift to tune into this new speaker, making their conversation clear while the previous speaker’s words become incoherent. Thus, although attention can greatly focus our thoughts and actions on only some aspects of our environment, the ways in which we allocate our attention depend on both our internal goals as well as external factors.

Both voluntary and involuntary forms of attentional allocation greatly impact many other cognitive functions [28]. Visual working memory, for example, is the ability to hold in mind visual information, such as simple shapes, colors, or letters, for just a few seconds. Visual working memory provides a sort of cognitive buffer that temporarily stores perceptual objects during decision making and action planning, and is highly predictive of intelligence [29]. However, this form of memory is surprisingly constrained; countless lab-based experiments have revealed that as more items are required to be remembered, the resolution with which those items can be remembered decreases dramatically [30]. Indeed, a classical view of visual working memory is that only three or four items can be remembered at one time, and any additional information is simply not stored in memory [31]. Given this highly limited capacity to hold items in memory, attentional control can play a critical role in governing whether a subset of visual information should hold priority in working memory. If a particular visual object is expected to be more important than others, voluntarily allocating attention to that object improves the precision with which it is remembered, but this improvement comes at the cost of memory precision for non-attended objects [32]. Indeed, the neural resources involved in holding items in visual working memory appear to change dynamically according to attentional goals. Attentional distraction, therefore, can result in the inability to hold information in memory, even for short periods of time.
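One common way of formalizing this trade-off, in the spirit of the resource models reviewed by Ma, Husain, and Bays [30], is to treat memory precision as a limited resource shared among stored items. The following power-law sketch is our own illustration of that idea, not a formula taken from the cited papers:

```latex
% Illustrative resource model of visual working memory:
% recall precision J (inverse variance of report error)
% declines as the number of stored items N increases.
J(N) = \frac{J_1}{N^{\alpha}}, \qquad \alpha > 0
% J_1: precision when a single item is stored
% alpha: empirically fitted exponent governing how steeply
%        precision falls as items are added
```

Allocating attention to one object can then be thought of as granting it a larger share of the resource, improving its precision at the expense of the other items.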

Although the picture is still incomplete, the brain areas involved in attention have been thoroughly investigated over recent decades. A large distributed network of brain areas is involved in attentional control, primarily extending between so-called “parietal,” “temporal,” and “frontal” brain areas [33]. Components of this network are differentially engaged when preparing and applying voluntary attention versus involuntary attention. Unsurprisingly, given the important role of attention in guiding decisions and actions, the neural areas involved in attention are heavily connected to brain areas involved in processing basic sensory signals and in the planning and generation of motor actions. When voluntarily allocating visual attention to a particular area of space, the attentional network sends signals that enhance the responsiveness of the brain areas involved in processing information at that location [34]. Similar changes in neural activity occur when attention is allocated involuntarily, though the precise areas of the brain involved in involuntary vs. voluntary attention are at least somewhat dissociated [33].

Damage to the attention network can result in remarkable attentional deficits. For example, following a stroke in the right parietal cortex, a brain area particularly involved in the voluntary allocation of attention, a patient can lose awareness of visual information toward the left side of space. As a result, the patient may only eat food on the right side of a plate and fail to read words on the left side of a page [35]. This phenomenon, referred to as “neglect,” demonstrates that the brain areas involved in attention are critical to even basic sensory experiences.

It is not just injury, however, that can influence attention to the extent that there is a substantial loss of awareness; a perfectly healthy and typically operating brain is also susceptible to gross lapses in attention. Indeed, a skillful illusionist explicitly manipulates an audience’s attention so as to conceal how a magic trick is accomplished [36]. Attention is also involved in cognitive illusions, such as “change blindness.” In a change blindness experiment, dramatic changes in a visual scene will go completely unnoticed if an observer has their attention disrupted for just a fraction of a second [37]. This effect of distraction is so robust that, for example, an observer will fail to notice that an airplane’s engine is disappearing and reappearing in consecutive images when those images are separated by a brief, attention-disrupting flash. Furthermore, attention can be greatly impacted by even more subtle distractions, depending on internal goals. During a landing procedure in a flight simulator, pilots who had information continuously presented digitally in their cockpit often failed to notice a clearly visible plane on the runway, resulting in a collision in the simulation [38]. Therefore, focusing attention on some information can come at the cost of our attention not being captured involuntarily, even when such a shift in attention may be ideal.

One of the primary ways in which humans shift their attentional focus is by making eye movements. The most common type of eye movement, called a saccade, rapidly shifts high-resolution central vision to a point of interest [39]. Central vision, as opposed to lower-resolution peripheral vision, is the only part of the visual field that has the resolution necessary for many common visual tasks, such as reading, recognizing faces, or watching television [40]. Humans thus make approximately two to four saccades per second to move objects of interest into central vision, resulting in a greater attentional focus for those objects than for objects in peripheral vision. Eye movements therefore provide an index of attentional focus, and the neural systems involved in eye movement control heavily overlap with those involved in both voluntary and involuntary attention shifts [33]. However, predicting where people will move their eyes, which is an overt shift of attention, remains one of the principal challenges in visual neuroscience [41]. Although recent advances have been impressive, the most up-to-date fixation models, which use a form of artificial intelligence to predict where a person will move their eyes, fail to make correct predictions for a substantial number of images [42]. This difficulty in predicting to which elements of an image a person will attend is particularly important in extending findings from typical cognitive experiments, which use relatively minimal and contrived visual environments, to scenarios beyond the lab. When predicting overt shifts in attention, researchers use naturalistic images, including, for example, outdoor scenes, people, and text [42]. The balance of successes and failures of models to predict where people will shift their attention in such complex scenarios attests to the difficulty of understanding the dynamics of voluntary and involuntary attention shifts in everyday settings.
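To make the modelling task concrete, the sketch below implements a toy center-surround saliency map in the spirit of classic computational models of visual attention [41]. It is a minimal illustration only (the function name `toy_saliency` and the scale choices are ours, and numpy/scipy are assumed available); it is emphatically not the fixation-prediction models evaluated by Kummerer et al. [42].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_saliency(image: np.ndarray) -> np.ndarray:
    """Toy center-surround saliency map over a grayscale image.

    Regions that differ from their local surround (e.g., a bright
    pop-up on an otherwise uniform page) receive high saliency,
    loosely mimicking involuntary attentional capture. A minimal
    sketch, not a validated fixation-prediction model.
    """
    image = image.astype(float)
    saliency = np.zeros_like(image)
    # Compare a fine "center" scale against a coarser "surround" scale,
    # repeated over several scale pairs (illustrative choices).
    for center, surround in [(1, 4), (2, 8), (4, 16)]:
        fine = gaussian_filter(image, sigma=center)
        coarse = gaussian_filter(image, sigma=surround)
        saliency += np.abs(fine - coarse)
    # Normalize to [0, 1] so maps are comparable across images.
    return saliency / saliency.max()

# Example: a bright square on a dark background dominates the map.
scene = np.zeros((128, 128))
scene[40:60, 70:90] = 1.0
predicted_fixation = np.unravel_index(np.argmax(toy_saliency(scene)), scene.shape)
print(predicted_fixation)  # falls within the bright square
```

Even this simple bottom-up scheme illustrates why prediction is hard: it captures crude contrast-driven capture but says nothing about the viewer’s goals, which is precisely the voluntary component that makes real-world fixation behavior difficult to model.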

Attention and Technology

It is evident from what we have presented thus far that attention is becoming better understood as a cognitive process from a neuroscience perspective. As we have explained, however, it is difficult to determine how these biological processes are influencing and being influenced by technologies. This is particularly the case given the complex network of brain areas responsible for attentional control. It is therefore difficult to point to exact locations where long-lasting changes in information processing are impacting the brain. We can, however, see some evidence of the basic attentional processes observed in the laboratory also in play in real-world settings. Drawing on the examples we have described here, both voluntary and involuntary attentional processes have a role when various technologies are used in the learning process. For example, if a learner is using the Internet to search for information about a given topic, they will be exposed to elements within that environment that capture attention involuntarily. Pop-up elements of webpages are an obvious example. The voluntary engagement with material can be interrupted by elements of the environment that have been specifically designed to capture involuntary attentional processes. Herein lies one of the key ways in which technology has been deliberately created to exploit these processes.

Many websites, particularly commercial websites and social media, have been deliberately designed to capture attention and to maintain attention on the site [cf. 43]. The premise is that no exposure is bad exposure: the longer someone remains on a page, the more impact that page is having and the more likely it is that companies will be able to make a sale, either directly or through advertising. The underlying theory is based on the mere exposure effect [44]; the more exposure a site or product gets, the more likely it is to be perceived favorably. The end result for someone trying to learn something by searching for information on the Internet is a constant competition between the voluntary attentional processes working towards the goal of greater understanding and the involuntary attentional processes constantly being lured away by design features created specifically for the purpose of attracting attention. Automated “notifications” are specifically designed to attract a user’s attention to a new email, private message, or social media comment, necessarily drawing their attention away from some other task [45].

As has been described by Fogg [43], technologies have been specifically designed to attract and maintain attention on a site or in an application. Fogg coined the term “persuasive technology” for technologies designed to exploit cognitive systems in this way. Between the concerns of technology insiders who have pointed out the ways in which these technologies have been developed [e.g. 46] and the doomsters who argue that technology is leading us into dystopia, it would be easy to point to the impact of technologies on attention as the central problem.

Several recent studies demonstrate that technology use can have either positive or negative effects on cognition, depending on the type of technology, the context, and the cognitive functions being examined. In a groundbreaking study, Green and Bavelier [47] examined whether the attentional demands of modern video games may improve video gamers’ attentional capacities. They compared individuals who spent several hours a week playing video games with non-gamers and found that video gamers had superior attentional abilities on several standard cognitive tasks, such as ignoring distracting information and attending to information over time. However, Boot et al. [48] repeated this experiment using a broader range of cognitive tasks than tested previously and concluded that it remains unclear to what extent attentional differences between gamers and non-gamers are due to pre-existing group differences or to video game play specifically. Moreover, and as discussed above, evidence linking any positive benefits of such technology use to situations beyond the specifically trained task of video game playing is lacking [49].

The influence of media multitasking on cognitive ability is also unclear, and again likely depends on specific use cases as well as the cognitive tasks being assessed. Ophir, Nass, and Wagner [50] developed a media multitasking questionnaire that distinguishes heavy media multitaskers from light media multitaskers, to test the hypothesis that media multitasking may train the ability to hold items in short-term memory, switch between tasks, and ignore distractions. Contrary to these predictions, however, they found that heavy media multitaskers performed worse on a battery of cognitive tasks than light media multitaskers. A more recent study found that the mere presence of a mobile phone, but not its use, can reduce cognitive capacity [51]. Such media multitasking has also been implicated in poorer learning in the classroom [52]. Although these studies may seem cause for concern about the growing abundance of devices on which to consume media, a more recent study called into question the conclusion that heavy media multitasking, as defined by Ophir et al., interferes with cognitive ability. Wiradhany and Nieuwenstein [53] repeated the same experiments as Ophir et al. with a different participant group. These authors also performed a meta-analysis of published results, a statistical analysis that combines data from multiple studies, thus improving the precision of the result over and above any single study. They found little support for the conclusion that heavy media multitasking negatively impacts cognitive performance as tested in the lab. Even when similar tests are conducted in relatively tightly controlled laboratory conditions, therefore, a clear impact of technology on cognitive performance remains elusive. Orben and Przybylski [54] note that, when testing the association between psychological well-being and technology use in a very large dataset (i.e. over 300,000 samples), multiple contradictory conclusions can be drawn depending on how a single dataset is analyzed.
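As a brief methodological aside, a common fixed-effect (inverse-variance) approach to meta-analysis, shown here as a standard textbook formulation rather than as the specific method of [53], weights each study’s effect estimate by the inverse of its variance, so that larger, more precise studies contribute more:

```latex
% Fixed-effect (inverse-variance) meta-analysis:
% \hat{\theta}_i is study i's effect estimate, v_i its variance.
\hat{\theta} = \frac{\sum_i w_i \,\hat{\theta}_i}{\sum_i w_i},
\qquad w_i = \frac{1}{v_i},
\qquad \operatorname{Var}\bigl(\hat{\theta}\bigr) = \frac{1}{\sum_i w_i}
```

Because the pooled variance shrinks as independent studies accumulate, the combined estimate is more precise than any single study, which is what makes a null result at the meta-analytic level informative.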

The reality is that attention is a complex process that interacts with perception, memory, and conscious experience. It has voluntary and involuntary components and can be influenced by factors such as interest, motivation, and self-regulation. The evidence from cognitive neuroscience about the diffuse activity associated with attentional control supports this inter-relationship between processes. The way we direct our attention will influence how we learn from technologies as much as the technologies can influence how we attend to them. There is therefore a complex two-way relationship between humans and technologies that is influenced by a range of other factors. For example, there is clear intent in the design of technologies towards certain ends, often to sell products or services. Focusing on learning and education specifically, Clark and Mayer [55] argue that the critical element for the successful use of technologies in learning is to rely on solid evidence to inform the design and development of instruction. In this instance, the intent of the design is specifically to enhance learning. In understanding the relationship between attention and technology use, factors such as the intention of the design must be taken into account; it cannot be assumed that technologies are neutral [see also, 16]. Given the extent of the complexity that these factors bring to the situation, it is premature to make blanket statements about technology causing attention deficits in the brain, as is so often the claim in the popular press.

The Underlying Difficulty of Understanding the Role of Attention in Technology-Mediated Learning

The volume of commentary on technology’s role in the apparent demise of attentional capacity might suggest that the matter is largely settled. The research discussed here on reading and comprehension in digital environments vs. in print [11,12] also suggests that there is a strong case that technology is negatively impacting learning. We have merely begun to highlight the reasons why the situation is more complex than it is often portrayed, particularly in regard to the role of attention. There does appear to be some relationship between our attentional systems and the evolving ways in which we acquire, use, and update our understanding in the 21st Century and, indeed, the screen inferiority effect has proven to be robust. Having said that, there is some way to go before the true nature of the role attention plays in the observed effects is clear. Compounding the problem of developing this understanding is that technology evolves more rapidly than research can keep pace with. There is a substantial lag between the introduction of new technologies and the publication and dissemination of research about the implications of using them [56]. For example, mobile tablet devices have been available and in use for nearly a decade, but research on the implications of these devices for learning and education has only recently been published [e.g. 57,58]. The principles of multimedia learning described by Mayer [59] provide guidance for teachers and learners in the interim, but the lag between use and evidence means it is difficult to provide specific recommendations about particular technologies until some years after they are adopted in practice.

Given the complexity inherent in determining the relationship between attentional processes in the brain and technology-mediated learning, the only real option is to consider the situation systemically. This means taking a holistic approach to translating the basic research, to understand how it might be applied in the complex social environment, and carrying out systems-based analyses in which the complexities of the situation can be methodically included. Jacobson, Kapur, and Reimann [60] argue that this kind of systems-based approach is the only viable mechanism for overcoming the problems in analyzing how factors such as the use of technologies for learning can be adequately explored in educational environments. Systems-based approaches could similarly help to provide more comprehensive answers about the impact of technologies on attention. These approaches are, however, inherently complex, requiring collaboration between researchers and practitioners (particularly in educational applications). This kind of collaboration requires testing assumptions about what attention and learning are, what level of analysis is appropriate for answering what kinds of questions, and how to effectively translate and make meaning of research that occurs in highly controlled environments for the complex real world [cf. 61]. Without this more systemic approach, sensationalist claims will continue to be made without substantive evidence of the impact of technologies on processes such as attention.

Conclusion

While some commentators are making bold claims about the negative impact of technologies on attention and how this, in turn, impacts on learning, there is much still to be understood. Research in cognitive neuroscience is helping to advance what we know about how attention works in the brain but there is some distance between this foundational research and what happens in the complex real world. While researchers and theorists continue to try to bridge this gap, technology vendors are exploiting attentional processes to engage people with websites and applications and keep their attention on the page or in the app. Increasing our understanding of fundamental attentional processes and how they influence learning in the complex social world will allow educators to develop strategies and tactics for helping students to better manage their own attention. A systemic approach to bringing researchers and practitioners together to make sense of the evidence across multiple levels of analysis is the only viable way to develop a sophisticated understanding of how technologies are influencing the brain.

Author Contributions

Both authors contributed to the conceptualization, writing, and editing of this article. JML is funded by a Special Initiative of the Australian Research Council (SRI20300015). WJH is funded through a National Health and Medical Research Council Fellowship (NHMRC APP1091257).

References

1. Alter A. Irresistible: The rise of addictive technology and the business of keeping us hooked. New York (NY): Penguin; 2017.
2. Carr N. The shallows: What the internet is doing to our brains. New York (NY): Norton; 2011.
3. Turkle S. Alone together: Why we expect more from technology and less from each other. New York (NY): Basic Books; 2011.
4. Moodie G. Universities, disruptive technologies, and continuity in higher education: The impact of information revolutions. New York (NY): Springer; 2016.
5. Greenfield S. Tomorrow’s people: How 21st Century technology is changing the way we think and feel. London: Allen Lane; 2003.
6. Greenfield S. Mind change: How digital technologies are leaving their mark on our brains. London: Random House; 2015.
7. Wolf M. Reader, come home: The reading brain in a digital world. New York (NY): Harper Collins; 2018.
8. Loh KK, Kanai R. How has the Internet reshaped human cognition? Neuroscientist. 2016;22(5):506–20.
9. Junco R, Cotten SR. No A 4 U: the relationship between multitasking and academic performance. Comput Educ. 2012;59:505–14.
10. Baron NS. Words onscreen: The fate of reading in a digital world. New York (NY): Oxford University Press; 2015.
11. Delgado P, Vargas C, Ackerman R, Salmeron L. Don’t throw away your printed books: A meta-analysis on the effects of reading media on reading comprehension. Educ Res Rev. 2018;25:23–38.
12. Kong Y, Seo YS, Zhai L. Comparison of reading performance on screen and on paper: A meta-analysis. Comput Educ. 2018;123:138–49.
13. Mangen A, Walgermo BR, Bronnick K. Reading linear texts on paper versus computer screen: effects on reading comprehension. Int J Educ Res. 2013;58:61–8.
14. Tamim RM, Bernard RM, Borokhovski E, Abrami PC, Schmid RF. What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Rev Educ Res. 2011;81(1):4–28.
15. Hattie J. Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London, UK: Routledge; 2009.
16. Selwyn N. Digital technology and the contemporary university. Abingdon, UK: Routledge; 2014.
17. Lodge JM, Kennedy G, Lockyer L. Special Issue: Brain, mind and educational technology. Australas J Educ Technol. 2016;32(6):i–iii.
18. Horvath JC, Donoghue GM. A bridge too far-revisited: reframing Bruer’s neuroeducation argument for modern science of learning practitioners. Front Psychol. 2016;7:377.
19. Gausby A. Attention spans. Consumer Insights, Microsoft Canada; 2015.
20. Storm BC, Stone SM, Benjamin AS. Using the Internet to access information inflates future use of the Internet to access other information. Memory. 2017;25(6):717–23.
21. Hembrooke H, Gay G. The laptop and the lecture: the effects of multitasking in learning environments. J Comput High Educ. 2003;15(1):46–64.
22. Wood E, Zivcakova L, Gentile P, Archer K, De Pasquale D, Nosko A. Examining the impact of off-task multi-tasking with technology on real-time classroom learning. Comput Educ. 2012;58(1):365–74.
23. Foerde K, Knowlton BJ, Poldrack RA. Modulation of competing memory systems by distraction. Proc Natl Acad Sci USA. 2006;103(31):11778–83.
24. May KE, Elder AD. Efficient, helpful, or distracting? A literature review of media multitasking in relation to academic performance. Int J Educ Technol High Educ. 2018;15(1):13.
25. Uncapher MR, Thieu MK, Wagner AD. Media multitasking and memory: differences in working memory and long-term memory. Psychon Bull Rev. 2016;23(2):483–90.
26. Bronkhorst AW. The cocktail party phenomenon: A review of research on speech intelligibility in multiple-talker conditions. Acta Acust United Acust. 2000;86(1):117–28.
27. Cherry EC. Some experiments on the recognition of speech, with one and two ears. J Acoust Soc Am. 1953;25:975–9.
28. Posner MI. Orienting of attention: then and now. Q J Exp Psychol. 2014;69:1864–75.
29. Baddeley A. Working memory: looking back and looking forward. Nat Rev Neurosci. 2003;4(10):829–39.
30. Ma WJ, Husain M, Bays PM. Changing concepts of working memory. Nat Neurosci. 2014;17(3):347–56.
31. Luck SJ, Vogel EK. Visual working memory capacity: from psychophysics and neurobiology to individual differences. Trends Cogn Sci. 2013;17(8):391–400.
32. Bays PM. Noise in neural populations accounts for errors in working memory. J Neurosci. 2014;34(10):3632–45.
33. Corbetta M, Shulman GL. Control of goal-directed and stimulus-driven attention in the brain. Nat Rev Neurosci. 2002;3(3):201–15.
34. Carrasco M. Visual attention: the past 25 years. Vision Res. 2011;51(13):1484–525.
35. Driver J, Mattingley JB. Parietal neglect and visual awareness. Nat Neurosci. 1998;1(1):17–22.
36. Macknik SL, King M, Randi J, Robbins A, Teller, Thompson J, Martinez-Conde S. Attention and awareness in stage magic: turning tricks into research. Nat Rev Neurosci. 2008;9(11):871–9.
37. Rensink RA, O’Regan JK, Clark JJ. To see or not to see: the need for attention to perceive changes in scenes. Psychol Sci. 1997;8(5):368–73.
38. Haines RF. A breakdown in simultaneous information processing. In: Obrecht G, Stark LW, editors. Presbyopia research: From molecular biology to visual adaptation. Boston (MA): Springer US; 1991. pp. 171–5.
39. Sparks DL. The brainstem control of saccadic eye movements. Nat Rev Neurosci. 2002;3(12):952–64.
40. Pelli DG, Tillman KA. The uncrowded window of object recognition. Nat Neurosci. 2008;11(10):1129–35.
41. Itti L, Koch C. Computational modelling of visual attention. Nat Rev Neurosci. 2001;2(3):194–203.
42. Kummerer M, Wallis TSA, Gatys LA, Bethge M. Understanding low- and high-level contributions to fixation prediction. In: 2017 IEEE International Conference on Computer Vision (ICCV). Venice: IEEE; 2017. pp. 4799–808.
43. Fogg B. Persuasive technology: Using computers to change what we think and do. San Francisco (CA): Morgan Kaufmann; 2002.
44. Zajonc RB. The attitudinal effects of mere exposure. J Pers Soc Psychol. 1968;9:1–27.
45. Lee K, Flinn J, Noble B. The case for operating system management of user attention. In: Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications. New York (NY): ACM; 2015. pp. 111–6.
46. Harris T. How technology hijacks people’s minds: From a magician and Google’s design ethicist. 2016. Available from: http://www.tristanharris.com/2016/05/how-technology-hijacks-peoples-minds%e2%80%8a-%e2%80%8afrom-a-magician-and-googles-design-ethicist/
47. Green CS, Bavelier D. Action video game modifies visual selective attention. Nature. 2003;423(6939):534–7.
48. Boot WR, Kramer AF, Simons DJ, Fabiani M, Gratton G. The effects of video game playing on attention, memory, and executive control. Acta Psychol (Amst). 2008;129(3):387–98.
49. Sala G, Tatlidil KS, Gobet F. Video game training does not enhance cognitive ability: A comprehensive meta-analytic investigation. Psychol Bull. 2018;144(2):111–39.
50. Ophir E, Nass C, Wagner AD. Cognitive control in media multitaskers. Proc Natl Acad Sci USA. 2009;106(37):15583–7.
51. Ward AF, Duke K, Gneezy A, Bos MW. Brain drain: the mere presence of one’s own smartphone reduces available cognitive capacity. J Assoc Consum Res. 2017;2(2):140–54.
52. Rosen LD, Carrier LM, Cheever NA. Facebook and texting made me do it: media-induced task-switching while studying. Comput Human Behav. 2013;29(3):948–58.
53. Wiradhany W, Nieuwenstein MR. Cognitive control in media multitaskers: two replication studies and a meta-analysis. Atten Percept Psychophys. 2017;79(8):2620–41.
54. Orben A, Przybylski AK. The association between adolescent well-being and digital technology use. Nat Hum Behav. 2019;3:173–82.
55. Clark RC, Mayer RE. E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning. 4th ed. New York (NY): John Wiley & Sons; 2016.
56. Lodge JM, Horvath JC. Science of learning and digital learning environments. In: Horvath JC, Lodge JM, Hattie J, editors. From the laboratory to the classroom: Translating science of learning for teachers. Abingdon, UK: Routledge; 2017.
57. Walczak S, Taylor NG. Geography learning in primary school: comparing face-to-face versus tablet-based instruction methods. Comput Educ. 2018;117:188–98.
58. Yanikoglu B, Gogus A, Inal E. Use of handwriting recognition technologies in tablet-based learning modules for first grade education. Educ Technol Res Dev. 2017;65(5):1369–88.
59. Mayer RE. Multimedia learning. 2nd ed. New York (NY): Cambridge University Press; 2009.
60. Jacobson MJ, Kapur M, Reimann P. Conceptualizing debates in learning and educational research: toward a complex systems conceptual framework of learning. Educ Psychol. 2016;51(2):210–8.
61. Palghat K, Horvath JC, Lodge JM. The hard problem of ‘educational neuroscience’. Trends Neurosci Educ. 2017;6:204–10.
