Human Factors. 2024 Jan 21;66(12):2561–2568. doi: 10.1177/00187208241228049

Adopting Stimulus Detection Tasks for Cognitive Workload Assessment: Some Considerations

Francesco N. Biondi
PMCID: PMC11475934  PMID: 38247319

Abstract

Objective

This article tackles the issue of correct data interpretation when using stimulus detection tasks for determining the operator’s workload.

Background

Stimulus detection tasks are a relatively simple and inexpensive means of measuring the operator’s state. While stimulus detection tasks may be better geared to measure conditions of high workload, adopting this approach for the assessment of low workload may be more problematic.

Method

This mini-review details the use of common stimulus detection tasks and their contributions to the Human Factors practice. It also borrows from the conceptual framework of the inverted-U model to discuss the issue of data interpretation.

Results

The evidence being discussed here highlights a clear limitation of stimulus detection task paradigms.

Conclusion

There is an inherent risk in using a unidimensional tool like stimulus detection tasks as the primary source of information for determining the operator’s psychophysiological state.

Application

Two recommendations are put forward to Human Factors researchers and practitioners facing the interpretation conundrum posed by stimulus detection tasks.

Keywords: stimulus detection task, psychomotor vigilance task, detection response task, ISO DRT, PVT, workload, vigilance, response times, automation, automated driving systems, fatigue, drowsiness

The Problem at Hand

Stimulus detection tasks offer a simple yet effective means of measuring the user’s workload during task completion. Considering the unidimensional nature of this paradigm whereby nonoptimal workload—no matter its nature—results in a task performance decline, understanding the etiology of the observed data pattern is key to a correct data interpretation. Whereas this may be easier in conditions like multitasking wherein the decline in task performance is likely the direct result of the added task load (Strayer & Cooper, 2015), there are key applications where data interpretation is murkier. For example, take those situations wherein the human operator hands over some (but not all) operations to an automated system while concurrently maintaining responsibility for the task at hand. While it is arguable that the partial transfer of control will result in a reduction in workload and, thus, boredom (Parasuraman & Manzey, 2010), it can be posited that the additional cognitive cost of maintaining vigilance over the automated system’s functioning will instead increase workload (Warm et al., 2008). No matter the real cause, both conditions will result in a decline in detection task performance.

This mini-review investigates the issue of data interpretation when using stimulus detection tasks to measure workload during task completion in the workplace. The analysis begins with a discussion of the importance of measuring the user’s state in the Human Factors practice, and the contribution made by stimulus detection tasks. The interpretation conundrum is then analyzed in the context of the inverted-U conceptual model. Finally, practical recommendations for practitioners and researchers are put forward.

The Importance of Workload Assessment

Maintaining an optimal psychophysiological state is necessary for task performance and safety. Research across the Human Factors and Ergonomics spectrum highlights how nonoptimal workload during the execution of a task leads to slower, less accurate responses and increases the risk of accidents and injuries. For example, failures in sustaining attention toward mission-relevant tasks are often associated with slower responses and a greater chance of overlooking safety-critical information (Biondi et al., 2015; Cummings et al., 2013; Mackworth, 1948; Parasuraman et al., 2009). Conditions of underload like fatigue are known contributors to workplace injuries among law enforcement officers (Fekedulegn et al., 2017) and healthcare professionals (Patterson et al., 2012). Likewise, conditions of overload resulting from, for example, multitasking and distraction are also contributing factors to workplace errors and accidents (Bonsang & Caroli, 2021; Lee, 2008).

Key to maintaining optimal workload is employing effective means of measuring it. Practitioners have adopted a variety of methodologies for monitoring the user’s state during task completion. Physiological metrics like heart rate and skin conductance are often used as metrics of fatigue (O’Keeffe et al., 2020; Segerstrom & Nes, 2007). Methodologies such as event-related potentials (ERPs) and functional near-infrared spectroscopy (fNIRS) have also been used to determine levels of attentional processing during controlled experiments (Aghajani et al., 2017; Perello-March et al., 2021; Zhu et al., 2021). These approaches have clear advantages in their ability to provide accurate real-time tracking of the operator’s state, often with a high degree of temporal resolution. However, their adoption likely requires the use of scientific-grade sensors and laborious data processing, which may limit their employment in real-world scenarios.

Stimulus Detection Tasks in Human Factors

Stimulus detection tasks represent a cheaper, simpler means of detecting the operator’s state in the workplace. Their simple design requires responding to the presentation of an intermittent stimulus, auditory, visual, or vibrotactile in nature, by pressing an actuator like a key on a keyboard or a microswitch. The amount of time taken to respond to the stimulus and the number of accurate detections provide information on the operator’s state (e.g., vigilance and workload), such that faster responses and higher accuracy are indicative of a greater ability to maintain optimal performance in the primary task at hand.

Two common examples of stimulus detection tasks in Human Factors are the psychomotor vigilance task (PVT) and the detection response task (DRT). The PVT was conceived to measure vigilance in tasks requiring sustained attention (Graeber et al., 1990; Kribbs et al., 1993). The task requires responding to the intermittent onset of a stimulus which is presented with a variable interstimulus interval of 2–10 seconds. It is administered alone, not in conjunction with any other task, to measure vigilance decrements induced by, for example, prolonged drowsiness. It typically has a set duration of 10 minutes, which may make it less suitable for tracking temporal fluctuations in vigilance during prolonged supervisory tasks. Hogan (1966) adopted it to measure interindividual differences in vigilance performance. Graeber et al. (1990) employed it to quantify drowsiness in aircraft pilots. In the years since, it has been widely adopted to quantify losses in vigilance resulting from drowsiness. Thomann et al. (2014) presented participants with a visual stimulus on a computer screen to which they were asked to respond by pressing a button. When compared to a control sample, participants suffering from daytime sleepiness displayed longer response times and lower accuracy (see Drummond et al., 2005 for consistent patterns). Morris et al. (2015) adopted this paradigm to measure drowsiness during night driving. Longer PVT response times were observed as drivers became drowsier in the early hours of the morning. Consistent results were found in the study by Kozak et al. (2005), wherein the onset of drowsiness as measured by physiological and subjective ratings correlated with longer responses in the PVT.
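The PVT schedule described above (stimuli at a variable 2–10 second interstimulus interval over a fixed, typically 10-minute session) can be sketched as a simple generator. This is an illustrative sketch only, not a standard implementation; the function name and defaults are my own.

```python
import random

def pvt_schedule(duration_s=600, isi_range=(2.0, 10.0), seed=None):
    """Generate stimulus onset times (seconds) for a PVT-style session.

    Onsets are spaced by interstimulus intervals drawn uniformly from
    isi_range, until the session duration is exhausted.
    """
    rng = random.Random(seed)
    onsets, t = [], 0.0
    while True:
        t += rng.uniform(*isi_range)
        if t > duration_s:
            break
        onsets.append(t)
    return onsets

onsets = pvt_schedule(seed=42)
isis = [b - a for a, b in zip([0.0] + onsets[:-1], onsets)]
# Every interval falls within the 2-10 s range, and the session fits in 10 min.
assert all(2.0 <= isi <= 10.0 for isi in isis)
assert onsets[-1] <= 600
```

Passing a seed makes the schedule reproducible, which is useful when the same stimulus sequence must be replayed across participants or sessions.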

The DRT was standardized in 2015 by the International Organization for Standardization to measure the increase in cognitive workload resulting from driver multitasking (ISO, 2015). It requires responding to an intermittent auditory, visual, or vibrotactile stimulus presented with an interstimulus interval drawn randomly from a uniform distribution between 3 and 5 seconds. Unlike the PVT, which is typically administered as the sole task, the DRT is conceived to be administered concurrently with both a primary (i.e., driving) and a secondary task (e.g., interacting with a visual-manual, voice-based, or haptic interface), thus effectively serving as a tertiary performance task. Its duration is not set and varies with that of the primary/secondary task at hand. Research quantifying the potential cost of performing the DRT found a nonzero effect on cognitive workload (Biondi et al., 2021), but its interference with primary task performance is thought to be minimal, especially during the completion of tasks in the real world (Castro et al., 2019; Stojmenova & Sodnik, 2018). Since its inception in 2015, the DRT has been widely adopted to measure the cognitive workload of using smartphone interfaces (Monk et al., 2023; Snider et al., 2021) and in-vehicle infotainment systems for navigation or media purposes (Lee et al., 2017; Strayer et al., 2015; Strayer, Cooper, et al., 2017). This paradigm has also been used to quantify the residual cognitive cost of multitasking following the completion of the secondary task (Bowden et al., 2019; Strayer, Cooper, et al., 2017), and to measure fluctuations in workload during driver-passenger conversations (Strayer et al., 2016).
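DRT scoring can likewise be sketched. The valid-response window used below (100–2500 ms) is the one commonly reported for the ISO protocol, but the authoritative scoring rules live in ISO 17488; treat the function name and cut-offs here as illustrative.

```python
def score_drt(rts_ms, valid_window=(100, 2500)):
    """Score a list of per-stimulus DRT response times in milliseconds.

    A response inside the valid window counts as a hit; a response
    outside it (e.g., an anticipatory press) or no response at all
    (None) counts as a miss. Returns the hit rate and the mean response
    time over hits, the two measures discussed in the text.
    """
    lo, hi = valid_window
    hits = [rt for rt in rts_ms if rt is not None and lo <= rt <= hi]
    hit_rate = len(hits) / len(rts_ms) if rts_ms else 0.0
    mean_rt = sum(hits) / len(hits) if hits else None
    return hit_rate, mean_rt

# Four stimuli: one anticipatory press (50 ms), one timeout (None), two hits.
hit_rate, mean_rt = score_drt([50, None, 450, 650])
assert hit_rate == 0.5 and mean_rt == 550.0
```

Note that both summary measures are unidimensional: a longer mean RT or lower hit rate says nothing, by itself, about which tail of the workload curve produced it, which is exactly the conundrum developed below.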

The Interpretation Conundrum

The success and popularity of stimulus detection tasks largely lie in their simple response time paradigm wherein one response (e.g., a button press) is required to the presentation of one stimulus (e.g., a sound). With it, however, comes the issue of context-dependent data interpretation. To delineate this issue, for simplicity I will focus on response times alone, acknowledging however that my examination applies to task accuracy as well. I will also borrow from the theoretical approach of the inverted-U model of performance and arousal known as the Yerkes-Dodson law, solely as a visual aid and while acknowledging its caveats (Hancock & Ganey, 2003).

The inverted-U model was originally conceived to better understand the relationship between physiological arousal and performance (Hebb, 1955), such that poor performance in a task follows either low or high arousal, and optimal performance occurs only at an intermediate arousal level. In Human Factors research, this model—and its right end-tail in particular—has been applied to describe the relationship between workload and performance in driving (Figure 1). For example, Curry et al. (2013), Reimer (2009), and Coughlin et al. (2011) used it to better understand the effect that driving under high levels of cognitive load has on decrements in driver behavior and road safety. More recently, the same conceptual framework—and its left end-tail in particular—has been borrowed in driving automation research to delineate the opposite scenario wherein low (not high) levels of workload result in poorer driving performance (Biondi et al., 2019, 2023; Rössger, 2015; Shupsky et al., 2020; Sibi et al., 2016). With the introduction of semi-automated systems that maintain control of the vehicle yet require the human driver to stay vigilant should a take-over be necessary (SAE level-2 and -3 systems; SAE, 2021), it is hypothesized that drivers will gradually lose engagement in the driving task (Strayer, Getty, et al., 2017). In turn, this will slowly lead to a decrement in cognitive workload which, returning to the inverted-U model, will lead to poorer performance during take-over scenarios.

Figure 1. Inverted-U model of workload and performance.

The data interpretation conundrum begins when using a stimulus detection task as the primary source of information to ascribe declines in performance to either the left or right-end tail of the inverted-U curve. When inspected from the unique lens of this unidimensional paradigm, both low and high workload will in fact result in longer response times. Hamstrung, the researcher will then need to lean even more heavily on the theoretical construct of choice—underload or overload—to extricate themselves from the interpretation conundrum. This of course comes with clear unintended risks.
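The conundrum can be made concrete with a toy inverted-U. Using an illustrative quadratic performance function (my own construction, not a fitted model), inverting an observed performance decrement yields two equally valid workload estimates, one on each tail:

```python
def performance(workload, optimum=0.5):
    """Toy inverted-U: peak performance of 1.0 at the optimum workload,
    falling off quadratically on both sides (illustrative only)."""
    return 1.0 - 4.0 * (workload - optimum) ** 2

def workloads_for(perf, optimum=0.5):
    """Invert the toy curve: every sub-optimal performance level maps to
    TWO workload values, one underload and one overload."""
    half_width = ((1.0 - perf) / 4.0) ** 0.5
    return optimum - half_width, optimum + half_width

low_w, high_w = workloads_for(0.75)
# The same observed decrement is consistent with both tails of the curve.
assert low_w < 0.5 < high_w
assert abs(performance(low_w) - performance(high_w)) < 1e-12
```

The inversion is mathematically underdetermined: the detection-task measurement alone cannot select between the two roots, which is precisely why the researcher must reach for a theoretical construct to break the tie.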

In a recent article, my coauthors and I used a detection task to measure vigilance decrements during the use of on-road semi-automated systems (Biondi et al., 2023). The task required participants to press a microswitch upon the presentation of a vibrotactile stimulus which occurred every 3–5 seconds. Grounding ourselves in the user-automation literature, we interpreted the longer response times observed during semi-automated driving not as indicative of greater workload (right end-tail of the inverted-U curve) but instead as the manifestation of lower workload and vigilance (left end-tail of the inverted-U curve). Zhang et al. (2021) found themselves in a similar bog when, alongside the DRT, they also used spectral EEG to measure changes in the driver’s state during monotonous automated driving. Greater alpha power—which is typically associated with a state of relaxation (Wang et al., 2013)—was found during automated driving. Together with longer responses in the DRT, the data were interpreted as the manifestation of driver drowsiness. Stapel et al. (2019) also used the DRT to measure changes in the driver’s state between manual and automated driving. Longer response times were found in automated driving mode, which they interpreted as being the result of greater workload. A consistent interpretation was given by McDonnell et al. (2023), wherein, when analyzed within the context of the ISO standard, longer DRT response times during automated driving were accounted for as the manifestation of greater driver workload. Theoretical wrangling aside, interpretation missteps have clear consequences for scientific progress in Human Factors, especially when the exact same data can and will be accounted for in diametrically opposed fashions. For example, in the context of driving automation, the two hypotheses, namely that operating automated systems leads to a reduction in workload following greater driver disengagement or, instead, to an increase in load resulting from the need to pay attention to both the system’s functioning and safety-related events, have equal merit when using a simple response task as the sole adjudicator. With this in mind, it becomes even more challenging to ascertain not only the true nature of the operator’s state but indeed the real implications—benefits or risks they may be—that automation use has on safety.

Still in the Bog

The overview of the literature provided here has hopefully persuaded the reader of the reality of the interpretation conundrum when using a unidimensional tool like detection task paradigms for tackling multifaceted problems. While this approach requires only a simple methodological setup, it may also demand taking sides when it comes to data interpretation, especially when it is taken as the primary source of information. With this come two recommendations for Human Factors researchers and practitioners. The first is to take into account the characteristics of the user and the task at hand. Given the known effect of practice and expertise on task performance (Hancock & Matthews, 2018; Parasuraman & McKinley, 2014; Strayer et al., 2004), accounting for the user’s characteristics may provide additional information to disambiguate the etiology of the observed data patterns. Likewise, considering characteristics like task complexity and the single- versus multi-tasking nature of the activity at hand may also facilitate the correct interpretation.

In keeping with the epistemic norms of the scientific method, the second recommendation is to strive to gather additional evidence that may corroborate or refute the leading hypothesis rather than accepting it as summa veritas. By strictly applying the approach known as data triangulation (Carayon et al., 2015), the Human Factors practitioner ought to gather data that unveil further facets of task completion via the adoption of, for example, self-reported (e.g., POSWAT; Adrion, 1986) or physiological measures (e.g., EEG). This approach, which is already being adopted in some studies (Frazier et al., 2022; McDonnell, Imberger, et al., 2021; Scheutz et al., 2023; Zhang et al., 2021), should become more common practice in this discipline to help settle the debate once and for all. “Are the longer response times indicative of greater workload or drowsiness?” Let the debate continue, at least for now.
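As a toy sketch of what such triangulation might look like (the thresholds, labels, and decision logic are entirely illustrative, not validated rules), a second, independent signal such as the EEG alpha power discussed earlier can disambiguate a slowed detection-task response:

```python
def triangulate(rt_elevated, alpha_power_elevated):
    """Toy triangulation rule combining a detection-task signal with an
    independent physiological one. Elevated alpha power is taken, per
    the literature discussed in the text, as a marker of relaxation or
    drowsiness; labels and logic are illustrative only.
    """
    if not rt_elevated:
        return "workload near optimum"
    if alpha_power_elevated:
        return "underload (left tail): drowsiness/disengagement"
    return "overload (right tail): excess task demand"

# The same detection-task pattern yields two different interpretations,
# resolved only by the second measure:
assert "underload" in triangulate(True, True)
assert "overload" in triangulate(True, False)
```

The point is not the specific rule but the structure: the detection task flags that something is wrong, and an orthogonal measure adjudicates which tail of the inverted-U curve is responsible.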

Key Points

  • This article tackles the conundrum of correct data interpretation when using stimulus detection tasks.

  • The exact same detection task pattern may yield diametrically opposed data interpretations depending on the theoretical construct of choice.

  • The evidence being discussed here highlights a clear limitation of detection tasks for determining the operator’s workload.

  • Two recommendations are put forward to Human Factors researchers and practitioners.

Author Biography

Francesco N. Biondi is an associate professor at the University of Windsor and Director of the Human Systems Lab in the Department of Kinesiology. He received his master’s degree in experimental psychology in 2011 and his doctorate in experimental psychology in 2015 from the University of Padova, Italy.

Footnotes

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Social Sciences and Humanities Research Council and the Natural Sciences and Engineering Research Council of Canada.

ORCID iD

Francesco N. Biondi https://orcid.org/0000-0002-5558-4707

References

  1. Adrion J. (1986). The effects of experience and training on the assessment of pilot subjective workload. Proceedings of the Human Factors Society Annual Meeting, 30(7), 619–623. 10.1177/154193128603000702 [DOI] [Google Scholar]
  2. Aghajani H., Garbey M., Omurtag A. (2017). Measuring mental workload with EEG + fNIRS. Frontiers in Human Neuroscience, 11(July), 359–420. 10.3389/fnhum.2017.00359 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Biondi F., Alvarez I., Jeong K. (2019). Human-system cooperation in automated driving. International Journal of Human-Computer Interaction, 35(11), 917–918. 10.1080/10447318.2018.1561793 [DOI] [Google Scholar]
  4. Biondi F., Turrill J., Coleman J. R., Cooper J. M., Strayer D. L. (2015). Cognitive distraction impairs drivers’ anticipatory glances: An on-road study. Proceedings of the Eighth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design Distractive, 8, 23–29. [Google Scholar]
  5. Biondi F. N., Balasingam B., Ayare P. (2021). On the cost of detection response task performance on cognitive load. Human Factors, 63(5), 804–812. 10.1177/0018720820931628 [DOI] [PubMed] [Google Scholar]
  6. Biondi F. N., McDonnell A. S., Mahmoodzadeh M., Jajo N., Balasingam B., Strayer D. L. (2023). Vigilance decrement during on-road partially automated driving across four systems. Human Factors. 10.1177/00187208231189658 [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bonsang E., Caroli E. (2021). Cognitive load and occupational injuries. Industrial Relations: A Journal of Economy and Society, 60(2), 219–242. 10.1111/irel.12277 [DOI] [Google Scholar]
  8. Bowden V. K., Loft S., Wilson M. K., Howard J., Visser T. A. W. (2019). The long road home from distraction: Investigating the time-course of distraction recovery in driving. Accident Analysis & Prevention, 124(August 2018), 23–32. 10.1016/j.aap.2018.12.012 [DOI] [PubMed] [Google Scholar]
  9. Carayon P., Kianfar S., Li Y., Xie A., Alyousef B., Wooldridge A. (2015). A systematic review of mixed methods research on human factors and ergonomics in health care. Applied Ergonomics, 51(November 2015), 291–321. 10.1016/j.apergo.2015.06.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Castro S. C., Strayer D. L., Matzke D., Heathcote A. (2019). Cognitive workload measurement and modeling under divided attention. Journal of Experimental Psychology: Human Perception and Performance, 45(6), 826–839. 10.1037/xhp0000638 [DOI] [PubMed] [Google Scholar]
  11. Coughlin J. F., Reimer B., Mehler B. (2011). Monitoring, managing, and motivating driver safety and well-being. IEEE Pervasive Computing, 10(3), 14–21. 10.1109/MPRV.2011.54 [DOI] [Google Scholar]
  12. Cummings M. L., Mastracchio C., Thornburg K. M., Mkrtchyan A. (2013). Boredom and distraction in multiple unmanned vehicle supervisory control. Interacting with Computers, 25(1), 34–47. 10.1093/iwc/iws011 [DOI] [Google Scholar]
  13. Curry D., Meyer J., Jones A. (2013). Driver distraction: Are we mistaking a symptom for the problem? SAE Technical Paper Series. Paper #2013-01-0439, (April 2013), 2. 10.4271/2013-01-0439 [DOI] [Google Scholar]
  14. Drummond S. P. A., Bischoff-Grethe A., Dinges D. F., Ayalon L., Mednick S. C., Meloy M. J. (2005). The neural basis of the psychomotor vigilance task. Sleep, 28(9), 1059–1068. 10.1093/sleep/28.9.1059 [DOI] [PubMed] [Google Scholar]
  15. Fekedulegn D., Burchfiel C. M., Ma C. C., Andrew M. E., Hartley T. A., Charles L. E., Gu J. K., Violanti J. M. (2017). Fatigue and on-duty injury among police officers: The BCOPS study. Journal of Safety Research, 60, 43–51. 10.1016/j.jsr.2016.11.006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Frazier S., Pitts B. J., McComb S. (2022). Measuring cognitive workload in automated knowledge work environments: A systematic literature review. Cognition, Technology and Work, 24(4), 557–587. 10.1007/s10111-022-00708-0 [DOI] [Google Scholar]
  17. Graeber R. C., Rosekind M., Connell L. J. (1990). Cockpit napping. ICAO Journal. [Google Scholar]
  18. Hancock P. A., Matthews G. (2018). Workload and performance: Associations, insensitivities, and dissociations. Human Factors, 61(3), 374–392. 10.1177/0018720818809590 [DOI] [PubMed] [Google Scholar]
  19. Hancock P. A., Ganey H. C. N. (2003). From the inverted-U to the extended-U: The evolution of a law of psychology. Human Performance in Extreme Environments: The Journal of the Society for Human Performance in Extreme Environments, 7(1), 15–28. 10.7771/2327-2937.1024 [DOI] [PubMed] [Google Scholar]
  20. Hebb D. O. (1955). Drives and the CNS (conceptual nervous system). Psychological Review, 62(4), 243–254. 10.1037/h0041823 [DOI] [PubMed] [Google Scholar]
  21. Hogan M. J. (1966). Influence of motivation on reactive inhibition in extraversion-introversion. Perceptual and Motor Skills, 22(1), 187–192. 10.2466/pms.1966.22.1.187 [DOI] [PubMed] [Google Scholar]
  22. ISO . (2015). Detection-response task (DRT) for assessing attentional effects of cognitive load in driving, ISO/DIS 17488. ISO. [Google Scholar]
  23. Kozak K., Curry R., Greenberg J., Artz B., Blommer M., Cathey L. (2005). Leading indicators of drowsiness in simulated driving. Proceedings of the Human Factors and Ergonomics Society - Annual Meeting, 49(22), 1917–1921. 10.1177/154193120504902207 [DOI] [Google Scholar]
  24. Kribbs N. B., Pack A. I., Kline L. R., Getsy J. E., Schuett J. S., Henry J. N., Maislin G., Dinges D. F. (1993). Effects of one night without nasal CPAP treatment on sleep and sleepiness in patients with obstructive sleep apnea. American Review of Respiratory Disease, 147(5), 1162–1168. 10.1164/ajrccm/147.5.1162 [DOI] [PubMed] [Google Scholar]
  25. Lee J., Sawyer B. D., Mehler B., Angell L., Seppelt B. D., Seaman S., Fridman L., Reimer B. (2017). Linking the detection response task and the attend algorithm through assessment of human-machine interface workload. Transportation Research Record: Journal of the Transportation Research Board, 2663(1), 82–89. 10.3141/2663-11 [DOI] [Google Scholar]
  26. Lee J. D. (2008). Fifty years of driving safety research. Human Factors, 50(3), 521–528. 10.1518/001872008X288376 [DOI] [PubMed] [Google Scholar]
  27. Mackworth N. H. (1948). The breakdown of vigilance during prolonged visual search. Quarterly Journal of Experimental Psychology, 1(1), 6–21. [Google Scholar]
  28. McDonnell A. S., Crabtree K. W., Cooper J. M., Strayer D. L. (2023). This is your brain on autopilot 2.0: The influence of practice on driver workload and engagement during on-road, partially automated driving. Human Factors. 10.1177/0018720823120105 [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. McDonnell A. S., Imberger K., Poulter C., Cooper J. M. (2021). The power and sensitivity of four core driver workload measures for benchmarking the distraction potential of new driver vehicle interfaces. Transportation Research Part F: Traffic Psychology and Behaviour, 83(February), 99–117. 10.1016/j.trf.2021.09.019 [DOI] [Google Scholar]
  30. Monk C., Sall R., Lester B. D., Stephen Higgins J. (2023). Visual and cognitive demands of manual and Voice-based driving mode implementations on smartphones. Accident Analysis & Prevention, 187(April), 107033. 10.1016/j.aap.2023.107033 [DOI] [PubMed] [Google Scholar]
  31. Morris D. M., Pilcher J. J., Switzer F. S. (2015). Lane heading difference: An innovative model for drowsy driving detection using retrospective analysis around curves. Accident Analysis & Prevention, 80(July 2015), 117–124. 10.1016/j.aap.2015.04.007 [DOI] [PubMed] [Google Scholar]
  32. O’Keeffe K., Hodder S., Lloyd A. (2020). A comparison of methods used for inducing mental fatigue in performance research: Individualised, dual-task and short duration cognitive tests are most effective. Ergonomics, 63(1), 1–12. 10.1080/00140139.2019.1687940 [DOI] [PubMed] [Google Scholar]
  33. Parasuraman R., McKinley R. A. (2014). Using noninvasive brain stimulation to accelerate learning and enhance human performance. Human Factors, 56(5), 816–824. [DOI] [PubMed] [Google Scholar]
  34. Parasuraman R., Cosenzo K. A., De Visser E. (2009). Adaptive automation for human supervision of multiple uninhabited vehicles: Effects on change detection, situation awareness, and mental workload. Military Psychology, 21(2), 270–297. 10.1080/08995600902768800 [DOI] [Google Scholar]
  35. Parasuraman R., Manzey D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. 10.1177/0018720810376055 [DOI] [PubMed] [Google Scholar]
  36. Patterson P. D., Weaver M. D., Frank R. C., Warner C. W., Martin-Gill C., Guyette F. X., Fairbanks R. J., Hubble M. W., Songer T. J., Callaway C. W., Kelsey S. F., Hostler D. (2012). Association between poor sleep, fatigue, and safety outcomes in emergency medical services providers. Prehospital Emergency Care, 16(1), 86–97. 10.3109/10903127.2011.616261 [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Perello-March J., Burns C. G., Woodman R., Birrell S., Elliott M. T. (2021). How do drivers perceive risks during automated driving scenarios? An fNIRS neuroimaging study. Human Factors. 10.1177/00187208231185705 [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Reimer B. (2009). Impact of cognitive task complexity on drivers’ visual tunneling. Transportation Research Record Journal of the Transportation Research Board, 2138(1), 13–19. 10.3141/2138-03 [DOI] [Google Scholar]
  39. Rössger P. (2015). Autonomous Driving How Much Autonomy Driving Does Stand? ATZelektronik worldwide, 10(2), 26–29. [Google Scholar]
  40. SAE . (2021). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. https://www.sae.org/standards/content/j3016_202104/ [Google Scholar]
  41. Scheutz M., Aeron S., Aygun A., de Ruiter J. P., Fantini S., Fernandez C., Haga Z., Nguyen T., Lyu B. (2023). Estimating systemic cognitive states from a mixture of physiological and brain signals. Topics in Cognitive Science, 1–42. 10.1111/tops.12669 [DOI] [PubMed] [Google Scholar]
  42. Segerstrom S. C., Nes L. S. (2007). Heart rate variability reflects self-regulatory strength, effort, and fatigue. Psychological Science, 18(3), 275–281. 10.1111/j.1467-9280.2007.01888.x [DOI] [PubMed] [Google Scholar]
  43. Shupsky T., Morales K., Baldwin C., Hancock P., Greenlee E. T., Horrey W. J., Klauer C. (2020). Secondary task engagement during automated drives: Friend and foe? Proceedings of the Human Factors and Ergonomics Society - Annual Meeting, 64(1), 1926–1930. 10.1177/1071181320641464 [DOI] [Google Scholar]
  44. Sibi S., Ayaz H., Kuhns D. P., Sirkin D. M., Ju W. (2016). Monitoring driver cognitive load using functional near infrared spectroscopy in partially autonomous cars. 2016 IEEE Intelligent Vehicles Symposium (IV) (pp. 419–425), Gothenburg, Sweden. 10.1109/IVS.2016.7535420 [DOI] [Google Scholar]
  45. Snider J., Spence R. J., Engler A. M., Moran R., Hacker S., Chukoskie L., Townsend J., Hill L. (2021). Distraction “hangover”: Characterization of the delayed return to baseline driving risk after distracting behaviors. Human Factors, 65(2), 306. 10.1177/00187208211012218 [DOI] [PubMed] [Google Scholar]
  46. Stapel J., Mullakkal-babu F. A., Happee R. (2019). Automated driving reduces perceived workload, but monitoring causes higher cognitive load than manual driving. Transportation Research Part F: Traffic Psychology and Behaviour, 60, 590–605. 10.1016/j.trf.2018.11.006 [DOI] [Google Scholar]
  47. Stojmenova K., Sodnik J. (2018, March). Detection-response task: How intrusive is it? In Proceedings of the 8th International Conference on Information Society and Technology, Kopaonik, Serbia (pp. 11–14).
  48. Strayer D. L., Biondi F., Cooper J. M. (2016). Dynamic workload fluctuations in driver/non-driver conversational dyads (pp. 362–367). Driving Assessment Conference. [Google Scholar]
  49. Strayer D. L., Cooper J. M. (2015). Driven to distraction. Human Factors, 57(8), 1343–1347. 10.1177/0018720815610668 [DOI] [PubMed] [Google Scholar]
  50. Strayer D. L., Cooper J. M., Turrill J., Coleman J. R., Hopman R. J. (2017a). The smartphone and the driver’s cognitive workload: A comparison of Apple, Google, and Microsoft’s intelligent personal assistants. Canadian Journal of Experimental Psychology, 71(2), 93–110. 10.1037/cep0000104 [DOI] [PubMed] [Google Scholar]
  51. Strayer D. L., Drews F., Burns S. (2004). The development and evaluation of a high-fidelity simulator training program for snowplow operators. Proceedings of the Third International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design (pp. 464–470). Driving Assessment Conference. [Google Scholar]
  52. Strayer D. L., Getty D., Biondi F., Cooper J. M. (2017b). The multitasking motorist and the attention economy. University of Utah. [Google Scholar]
  53. Strayer D. L., Turrill J., Cooper J. M., Coleman J. R., Medeiros-Ward N., Biondi F. (2015). Assessing cognitive distraction in the automobile. Human Factors, 57(8), 1300–1324. 10.1177/0018720815575149 [DOI] [PubMed] [Google Scholar]
  54. Thomann J., Baumann C. R., Landolt H. P., Werth E. (2014). Psychomotor Vigilance Task demonstrates impaired vigilance in disorders with excessive daytime sleepiness. Journal of Clinical Sleep Medicine: JCSM: Official Publication of the American Academy of Sleep Medicine, 10(9), 1019–1024. 10.5664/jcsm.4042 [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Wang J., Barstein J., Ethridge L. E., Mosconi M. W., Takarae Y., Sweeney J. A. (2013). Resting state EEG abnormalities in autism spectrum disorders. Journal of Neurodevelopmental Disorders, 5(1), 24–214. 10.1186/1866-1955-5-24 [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Warm J. S., Parasuraman R., Matthews G. (2008). Vigilance requires hard mental work and is stressful. Human Factors, 50(3), 433–441. 10.1518/001872008X312152 [DOI] [PubMed] [Google Scholar]
  57. Zhang Y., Ma J., Zhang C., Chang R. (2021). Electrophysiological frequency domain analysis of driver passive fatigue under automated driving conditions. Scientific Reports, 11(1), 20348–20349. 10.1038/s41598-021-99680-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Zhu Y., Weston E. B., Mehta R. K., Marras W. S. (2021). Neural and biomechanical tradeoffs associated with human-exoskeleton interactions. Applied Ergonomics, 96(March), 103494. 10.1016/j.apergo.2021.103494 [DOI] [PubMed] [Google Scholar]

Articles from Human Factors are provided here courtesy of SAGE Publications
