ABSTRACT
Simulations are increasingly popular in employee selection and training. While face valid and engaging, the attributes being assessed are often poorly understood. This study evaluated the extent to which a multitasking assessment based on concurrent memorization, math, visual monitoring, and listening tasks predicted simulated unmanned aerial vehicle (UAV) mission performance in a military trainee sample (N = 368). Performance was based on accuracy of mission planning, information recall during “Lost Link” conditions, and success in rescuing stranded allies while monitoring the aircraft’s resources. Although scores on the multitasking assessment were only weakly related to performance of pre-flight mission planning tasks completed under static conditions, multitasking was strongly related to overall simulated UAV mission performance, including execution of tasks requiring attending to multiple, dynamic sources of information and shifting attention among concurrent processes and demands. Further, multitasking demonstrated substantial incremental validity beyond the traditional measures of cognitive ability that have been used for decades within the US military. Implications, limitations, and recommendations for selection and classification and for future research are discussed.
KEYWORDS: Multitasking, simulation, aircraft simulation performance, unmanned aerial vehicle, remotely piloted aircraft
What is the public significance of this article? —Simulations are increasingly popular in employee selection and training. While face valid and engaging, the attributes being assessed are often poorly understood. This study evaluated the extent to which a multitasking assessment based on concurrent memorization, math, visual monitoring, and listening tasks predicted simulated unmanned aerial vehicle (UAV) mission performance in a military trainee sample (N = 368).
Gamified assessments and computer-based simulations have become increasingly popular in recent years as selection and training tools. In an evaluation of new technology for talent acquisition and selection, Chamorro-Premuzic, Winsborough, Sherman, and Hogan (2016) found that gamification and/or simulation of assessments was one of the top four emergent digital methodologies. A contributing factor in the popularity of these tools is that applicants and trainees view them as more engaging than text-based tasks (Bruk-Lee et al., 2016). In addition, gamification and simulations can be developed to measure attributes that are not always obvious to the applicant, ranging from critical thinking (Lovelace, Eggers, & Dyck, 2016) to situational awareness (Miles & Strybel, 2017), and even Machiavellianism and deviant workplace behavior (Dubbelt, Oostrom, Hiemstra, & Modderman, 2015). Job-related simulations do more than provide a face-valid interface: interactive multimedia job simulations have shown moderate correlations with job performance and incremental validity above and beyond other cognitive and non-cognitive selection assessments (Fluckinger, Dudley, & Seeds, 2014).
One specific occupational area that lends itself well to simulations for training and selection is the growing field of piloting unmanned aircraft. The use of unmanned aircraft, or remotely piloted aircraft (RPA) as they are referred to in the United States Air Force (USAF), has increased dramatically in recent years (Dillingham, 2012). The increased use of RPAs within the military and other government entities (e.g., law enforcement) is due to the efficiency and utility of the aircraft in collecting intelligence and conducting surveillance and reconnaissance missions without placing a pilot in harm’s way (Farrell, 2014). Given the nature of the job, computer simulations can be customized to replicate conditions similar to what the pilot would experience in an actual job scenario, providing a face-valid interface for training or evaluating potential performance on the job. Given the relatively new RPA pilot career field and the limited number of simulations, there is a paucity of related peer-reviewed published research. While some research has been conducted to evaluate the skills and aptitudes required to do the job (Barron, Carretta, & Rose, 2016), and the personality domains that make an individual well-suited for successful RPA training completion (Barron et al., 2016; Rose, Barron, Carretta, Arnold, & Howse, 2014), little research has been conducted to evaluate the attributes that predict performance on RPA-related simulations.
Multitasking
Although multitasking has been measured in many different ways and the definitions have varied, for the purpose of this study, multitasking was defined as concurrently performing multiple distinct tasks that require quickly shifting attention from one task to another (Oswald, Hambrick, & Jones, 2007). This definition is particularly appropriate for RPA pilots in that they must switch attention across multiple tasks in quick succession and monitor and process information from multiple sources in order to effectively perform the job. In addition, working memory has been demonstrated to be a critical component of multitasking (Redick, 2016). The working memory component of multitasking is also of critical importance for RPA pilots in that when a task is interrupted by another task, they must retain the information from the interrupted task in order to effectively resume the task (Colom, Martinez-Molina, Shih, & Santacreu, 2010; Hambrick et al., 2011; Redick, 2016).
Indeed, the existing literature suggests that multitasking has been shown to be predictive of important training- and job-related outcomes for complex jobs requiring frequent shifts in attention. Barron and Rose (2017) found that overall multitasking scores were related to performance in manned aircraft pilot training (r = .21 with academic performance; r = .23 with daily flying performance). In contrast, the lack of the ability to multitask has been found to be related to omission errors in preflight checklists resulting in airline crashes (Loukopoulos, Dismukes, & Barshi, 2009). In other complex jobs, the lack of multitasking ability has been related to surgical errors by medical doctors (Murji et al., 2016), medication administration errors by nurses (Hayes, Jackson, Davidson, & Power, 2015), and patient hand-off errors by emergency medical personnel (Laxmisan et al., 2007).
The current study
Given the paucity of related research, this study was designed to evaluate the extent to which a multitasking assessment would predict performance on an unmanned aerial vehicle (UAV) pilot simulation (i.e., Stealth Adapt) that was designed based on RPA pilot subject matter expert feedback on the most important tasks required for successful search and rescue (SAR) mission performance. Constituent tasks performed concurrently during the multitasking assessment included a math computation task, a visual monitoring task, a listening task, and a memorization task. Criterion measures assessing mission performance in the RPA simulation were based on accuracy of pre-flight mission planning, information retention and recall during “Lost Link” conditions, and success in rescuing stranded allies while monitoring the aircraft’s resources (i.e., fuel and battery life). Given that pre-flight mission planning is an untimed task that does not require shifts in attention, and that some research has found that a multitasking orientation may be detrimental to performance in activities requiring a single-task focus (Conte & Jacobs, 2003), we did not anticipate a substantive relationship between the Multitasking Test and performance of serially presented pre-flight mission planning tasks (Flight Path Prioritization Accuracy).
In contrast to task performance in static conditions, information recall and retention during “Lost Link” conditions (i.e., Situational Awareness, MiniMap Memory, Authentication Code Memory, Waypoint ID Memory) requires memory of visual and auditory information presented in a dynamic environment requiring frequent shifts of attention across multiple tasks and multiple sources of changing information. Given that a multitasking orientation has been well-documented in predicting performance on tasks requiring frequent shifts of attention (Barron & Rose, 2017), we formed the following hypothesis:
Hypothesis 1: Scores on the Multitasking Test, particularly the Memory Score, will be positively correlated with the Lost Link performance criteria.
Given that overall success in rescuing allies during a UAV search-and-rescue mission requires adjustments while constantly monitoring all information via auditory instructions, instant message, and visual scanning of data, codes, and surroundings, we formed the following hypothesis:
Hypothesis 2: Multitasking Test scores will be positively correlated with the Percent of Allies Rescued.
Finally, although multitasking correlates with general cognitive ability, there are some unique abilities (e.g., visual scanning, attention, interruption management) that are critical for multitasking performance and may not be entirely explained by general cognitive ability measures of the type included on the Armed Services Vocational Aptitude Battery (ASVAB). Barron and Rose (2017) showed that serially presented cognitive measures (e.g., short-term memory, math) were less strongly related to flying performance than measures of the same cognitive abilities when concurrently presented. Thus, we formed the following hypothesis:
Hypothesis 3: Multitasking scores will demonstrate incremental validity above cognitive ability in predicting overall success in the search-and-rescue mission (Percent of Allies Rescued).
Method
Participants
A total of 368 enlisted trainees completed the Stealth Adapt RPA simulation and the US Air Force Multitasking Test. Of the trainees, 304 had been selected to attend technical training for aircrew-related positions and the remaining 64 were still awaiting graduation from US Air Force Basic Military Training (BMT). The aircrew participants were predominantly male (81.9%) and self-identified as White (75.0%). The majority of BMT participants were female (54.0%) and self-identified as White (67.2%). All participants had at least a high-school education, and 44% had at least some college-level coursework.
Predictor measures
The US Air Force Multitasking Test was developed based on a multitasking assessment called SynWin (Elsmore, 1994). The test includes four tasks that must be monitored and completed concurrently. A math task requires participants to continually add three-digit numbers. A listening task requires participants to respond with a mouse click when they hear their call sign announced (a modification of the original SynWin, which used a high- versus low-pitch tone distinction, intended to be more face-valid for pilot selection). A visual monitoring task requires participants to continuously monitor a fuel-level-style gauge and reset it when the needle approaches the red zone. Finally, the memorization task presents a list of letters that then disappears; after a delay, the participant is presented with a probe letter and must indicate whether it was in the list. In an initial practice trial, the tasks are presented as serial, independent tasks (i.e., the single-task practice trial); next comes one practice trial in which all four tasks are presented at the same time (i.e., the multitasking practice trial), followed by four scored multitasking trials. Points are gained for correct responses and deducted for incorrect responses (Barron & Rose, 2017).
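As a concrete sketch of the trial-level scoring rule (points gained for correct responses, deducted for incorrect responses), the snippet below assumes symmetric illustrative point values; the test's actual point scheme is not reported here, so the `trial_score` helper and its defaults are hypothetical.

```python
# Hypothetical net-points scoring for one concurrent trial. The +10/-10
# values are illustrative assumptions, not the operational point scheme.
def trial_score(responses, points_correct=10, points_incorrect=-10):
    """Sum net points over (task, was_correct) response events in a trial."""
    return sum(points_correct if ok else points_incorrect for _, ok in responses)

# One trial's worth of responses across the four concurrent tasks.
events = [("math", True), ("listening", True), ("visual", False), ("memory", True)]
print(trial_score(events))  # 3 correct, 1 incorrect -> 20
```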
RPA pilot search and rescue (SAR) simulation
The Stealth Adapt RPA simulation (Mangos, 2016), shown in Figure 1, requires the participant to strategically prioritize the order of rescuing multiple stranded allies via a remotely piloted aircraft (RPA) based on factors such as proximity to hostile forces, survivability of the ally, and passing through weapon engagement zones that are likely to damage the aircraft. The pre-flight path prioritization process (Flight Path Prioritization Accuracy) to determine the optimal path to rescue the allies is untimed, and there is no multitasking requirement during this phase. When the mission begins, the participant is required to complete the planned flight path (or modify it if needed), rescue the allies, monitor aircraft resources (e.g., fuel, battery life), monitor information such as ally ID and authentication codes provided via a dialogue box, and maintain situational awareness and short-term memory of all information in the event that communication with the aircraft is lost (i.e., a “loss of link”). The participant is given 10 missions to complete. Each mission has a different number of allies to be rescued, ranging from 4 to 9, with 62 total possible rescues across the ten missions.
Figure 1.

Screen image of Stealth Adapt remotely-piloted aircraft simulation
From Stealth Adapt (Mangos, 2016). Copyright 2016 by Adaptive Immersion Technologies. Reprinted with permission.
Criterion measures during the flying portion of the simulation include the four “Lost Link” metrics: Situational Awareness is the ability to remember the environment/view of the terrain at the time communication with the aircraft was lost; MiniMap Memory is the ability to remember the status of the location map; Authentication Code Memory is the ability to remember the authentication code of the next stranded ally; and Waypoint ID Memory is the ability to remember the ID of the next stranded ally. An additional metric that evaluates the lack of task engagement is Average Idle Time, the amount of time the pilot spends hovering without actively engaging in a task. Finally, the most important criterion is the Percent of Allies Rescued, the percentage of the 62 possible stranded allies who were successfully rescued.
Armed Services Vocational Aptitude Battery (ASVAB)
The Armed Services Vocational Aptitude Battery (ASVAB) has been used for decades by all services of the US military (Earles & Ree, 1992). The version of the ASVAB currently used by the US Air Force includes four composite scores (i.e., Mechanical, Administrative, General, and Electronics) and ten subscales used in various combinations for selection and classification purposes.
Procedure
All participants were provided with an informed consent form assuring them of the voluntary nature of the study, explaining the confidentiality of the results, and guaranteeing that the results would only be used at the aggregate level and that their scores would not be included in their military records. The participants were first administered the US Air Force Multitasking Test, immediately followed by the Stealth Adapt RPA pilot simulation. Archival data from their initial enlistment process at Military Entrance Processing Stations (MEPS) were used to obtain their ASVAB test scores.
Results
Multitasking
A correlational analysis was conducted first to determine which Stealth Adapt outcomes were related to single-task performance versus multitasking performance on the Multitasking Test. As shown in Table 1, multitasking scores predicted performance across all Stealth Adapt outcomes better than the individual single-task scores. For the most critical Stealth Adapt outcome (i.e., Percent of Allies Rescued), the multitasking constituent-task scores were predictive, with validity coefficients ranging from r = .22 for Visual Monitoring to r = .27 for Listening; the Overall Multitasking Score was the most predictive at r = .37. Multitasking scores were least predictive for the pre-flight mission planning criterion (i.e., the untimed Flight Path Prioritization Accuracy), with a correlation of r = −.04 with the Overall Multitasking Score. Validity was moderate for the Lost Link criteria (Situational Awareness, MiniMap Memory, Authentication Code Memory, and Waypoint ID Memory), with rs with the Overall Multitasking Score ranging from .26 to .30. The validity of multitasking scores in predicting Average Idle Time (i.e., the time a pilot hovers without engaging in a task) was high, with the Overall Multitasking Score correlating r = −.46.
Table 1.
Correlations: Multitasking with Stealth Adapt performance (N = 368)
| Multitasking | Percent of Allies Rescued | Flight Path Prioritization Accuracy | Lost Link: Situational Awareness | Lost Link: MiniMap Memory | Lost Link: Authentication Code Memory | Lost Link: Waypoint ID Memory | Average Idle Time | M | SD |
|---|---|---|---|---|---|---|---|---|---|
| Single Task – Memory | 0.07 | −0.12* | 0.10* | 0.11* | 0.07 | 0.12* | −0.20*** | 0.00 | 1.01 |
| Single Task – Math | 0.13* | −0.08 | 0.16** | 0.14** | 0.10 | 0.17*** | −0.20*** | 0.01 | 0.99 |
| Single Task – Visual | 0.09 | −0.01 | 0.06 | 0.15** | 0.09 | 0.07 | −0.08 | 0.00 | 1.01 |
| Single Task – Listening | 0.17*** | 0.02 | 0.10 | 0.11* | 0.14** | 0.16** | −0.24*** | 0.02 | 0.99 |
| Single Task Total | 0.20*** | −0.08 | 0.19*** | 0.22*** | 0.17*** | 0.23*** | −0.31*** | 0.01 | 0.57 |
| Multitask Trial 1 | 0.28*** | −0.01 | 0.17** | 0.27*** | 0.23*** | 0.25*** | −0.38*** | 0.02 | 0.56 |
| Multitask Trial 2 | 0.35*** | −0.01 | 0.22** | 0.32*** | 0.27*** | 0.26*** | −0.43*** | 0.02 | 0.60 |
| Multitask Trial 3 | 0.33*** | −0.06 | 0.24*** | 0.24*** | 0.28*** | 0.28*** | −0.41*** | 0.01 | 0.66 |
| Multitask Trial 4 | 0.38*** | −0.07 | 0.28*** | 0.25*** | 0.28*** | 0.28*** | −0.42*** | 0.02 | 0.64 |
| Memory | 0.25*** | 0.02 | 0.23*** | 0.30*** | 0.25*** | 0.22*** | −0.32*** | 0.01 | 0.81 |
| Math | 0.26*** | −0.11* | 0.16** | 0.21*** | 0.13* | 0.17** | −0.35*** | 0.01 | 0.90 |
| Visual | 0.22*** | 0.01 | 0.07 | 0.11* | 0.14** | 0.17** | −0.18** | 0.03 | 0.77 |
| Listening | 0.27*** | −0.02 | 0.22*** | 0.18** | 0.28*** | 0.24*** | −0.36*** | 0.02 | 0.86 |
| Overall Multitasking | 0.37*** | −0.04 | 0.26*** | 0.30*** | 0.30*** | 0.30*** | −0.46*** | 0.02 | 0.56 |
| M | 0.78 | 6.98 | 0.71 | 0.63 | 0.61 | 0.52 | 10.01 | | |
| SD | 0.08 | 0.69 | 0.24 | 0.20 | 0.20 | 0.20 | 3.11 | | |
*p < .05.
**p < .01.
***p < .001.
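The correlational screen summarized in Table 1 amounts to computing a Pearson r between each predictor score and each Stealth Adapt outcome. A minimal sketch with toy vectors (not the study's data):

```python
# Pearson correlation between a predictor score and an outcome, as used for
# the validity coefficients in Table 1. Data below are toy illustrations.
import numpy as np

def pearson_r(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

multitask = [1.0, 2.0, 3.0, 4.0, 5.0]            # toy Overall Multitasking Scores
rescued = [0.60, 0.65, 0.72, 0.80, 0.85]         # toy Percent of Allies Rescued
idle = [12.0, 11.0, 9.5, 8.0, 7.0]               # toy Average Idle Time
print(round(pearson_r(multitask, rescued), 2))   # strongly positive
print(round(pearson_r(multitask, idle), 2))      # strongly negative
```

As with the study's Average Idle Time result (r = −.46), a negative r means higher multitasking scores go with less idle hovering.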
ASVAB
Correlations between the ASVAB scores and Stealth Adapt performance were moderate but generally lower than those for multitasking. As shown in Table 2, correlations between the ASVAB composites and the Percent of Allies Rescued ranged from r = .20 for the General composite to r = .25 for the Mechanical composite. Correlations for the subscales were generally lower, ranging from r = .13 for Assembling Objects to r = .29 for Mechanical Comprehension.
Table 2.
Correlations: ASVAB and percent of rescued allies (N = 306)
| ASVAB Composites | r |
|---|---|
| Mechanical | 0.25*** |
| Administrative | 0.22*** |
| General | 0.20*** |
| Electronics | 0.23*** |
| ASVAB Subscales | r |
| General Science | 0.15** |
| Arithmetic Reasoning | 0.17** |
| Word Knowledge | 0.17** |
| Paragraph Comprehension | 0.17** |
| Math Knowledge | 0.18** |
| Electronics Information | 0.18** |
| Automotive and Shop | 0.14** |
| Mechanical Comprehension | 0.29*** |
| Assembling Objects | 0.13* |
| Verbal Expression | 0.19** |
*p < .05.
**p < .01.
***p < .001.
Incremental validity of multitasking test scores over cognitive ability
To evaluate the incremental validity of Multitasking beyond overall cognitive ability (i.e., ASVAB scores), regression analyses using two models (i.e., ASVAB composites only versus ASVAB composites plus Multitasking) were conducted. As shown in Table 3, the inclusion of Multitasking in the model resulted in a substantial increase in the percentage of variance accounted for, from 7% to 18% (R2Δ = .1128). Given that the ASVAB General composite entered Models 1 and 2 with a negative beta weight, two additional models were analyzed without that composite. Results were similar in that the inclusion of Multitasking resulted in a substantial increase in the percentage of variance accounted for, from 7% to 17% (R2Δ = .1030). In addition, when Multitasking was added to the regression models, the test provided the most substantial contribution of all predictors in explaining the variance in the Percent of Allies Rescued criterion. Specifically, Multitasking was the most heavily weighted variable (55.7% with ASVAB General and 67.6% without ASVAB General) when evaluated by relative weight analyses (Tonidandel & LeBreton, 2015).
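The two-model procedure described above can be sketched as a hierarchical regression: compute R² from the ASVAB composites alone, then again after adding the Overall Multitasking Score, and take the difference. The simulated data and variable names below are illustrative, not the study's:

```python
# Hierarchical regression sketch of incremental validity (R-squared change).
import numpy as np

def r_squared(X, y):
    """OLS R-squared for y regressed on X plus an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return float(1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean())))

rng = np.random.default_rng(0)
n = 306                                   # matches the ASVAB analysis sample
asvab = rng.normal(size=(n, 4))           # four composite scores (simulated)
multitask = rng.normal(size=n)            # Overall Multitasking Score (simulated)
y = 0.2 * asvab[:, 0] + 0.5 * multitask + rng.normal(size=n)  # toy criterion

r2_base = r_squared(asvab, y)                                  # Model 1
r2_full = r_squared(np.column_stack([asvab, multitask]), y)    # Model 2
print(f"R2 change: {r2_full - r2_base:.3f}")
```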
Table 3.
Incremental validity of multitasking over ASVAB in predicting percent of rescued allies
| Model | Variable | β | b | SE | Rescaled Relative Weight | R2 | R2 Change |
|---|---|---|---|---|---|---|---|
| Model 1 | ASVAB Mechanical | .26 | .0012 | .0005 | 37.52 | .0710 | |
| | ASVAB Administrative | .14 | .0009 | .0007 | 23.72 | (r = .27) | |
| | ASVAB General | −.16 | −.0009 | .0007 | 15.71 | | |
| | ASVAB Electronics | .03 | .0002 | .0006 | 23.05 | | |
| Model 2 | ASVAB Mechanical | .23 | .0009 | .0005 | 13.41 | .1838 | .1128 |
| | ASVAB Administrative | .16 | .0009 | .0006 | 12.17 | (r = .43) | |
| | ASVAB General | −.30 | −.0015 | .0007 | 7.25 | | |
| | ASVAB Electronics | .10 | .0005 | .0005 | 11.45 | | |
| | Multitasking Overall Score | .33 | .0466 | .0080 | 55.72 | | |
| Model 3 | ASVAB Mechanical | .19 | .0009 | .0005 | 44.01 | .0664 | |
| | ASVAB Administrative | .07 | .0004 | .0006 | 28.32 | (r = .26) | |
| | ASVAB Electronics | .01 | .0001 | .0006 | 27.67 | | |
| Model 4 | ASVAB Mechanical | .08 | .0003 | .0004 | 14.52 | .1694 | .1030 |
| | ASVAB Administrative | .03 | .0002 | .0005 | 8.14 | (r = .41) | |
| | ASVAB Electronics | .08 | .0004 | .0006 | 9.73 | | |
| | Multitasking Overall Score | .31 | .0444 | .0080 | 67.61 | | |
N = 306. The Multitasking Overall Score is calculated by taking the mean score for all four trials across the four constituent tasks: Memory, Math, Visual Monitoring, and Listening.
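The relative weight analyses reported in Table 3 can be approximated with Johnson's relative weights procedure, which underlies the RWA-Web tool of Tonidandel and LeBreton (2015): R² is partitioned into non-negative per-predictor contributions through the square root of the predictor correlation matrix. The sketch below uses simulated data, and the `relative_weights` helper is illustrative:

```python
# Johnson's relative weights: raw weights sum to the model R-squared; rescaled
# weights are percentages of it, as in the "Rescaled Relative Weight" column.
import numpy as np

def relative_weights(X, y):
    Z = (X - X.mean(0)) / X.std(0)                 # standardized predictors
    zy = (y - y.mean()) / y.std()                  # standardized criterion
    R = np.corrcoef(Z, rowvar=False)               # predictor intercorrelations
    r_xy = Z.T @ zy / len(y)                       # predictor-criterion rs
    vals, vecs = np.linalg.eigh(R)
    lam = vecs @ np.diag(np.sqrt(vals)) @ vecs.T   # symmetric square root of R
    beta = np.linalg.solve(lam, r_xy)              # weights on orthogonal vars
    raw = (lam ** 2) @ (beta ** 2)                 # per-predictor raw weights
    return raw, raw / raw.sum() * 100.0            # raw, and rescaled as %

rng = np.random.default_rng(1)
X = rng.normal(size=(306, 5))                      # five simulated predictors
y = 0.4 * X[:, 4] + 0.2 * X[:, 0] + rng.normal(size=306)
raw, rescaled = relative_weights(X, y)
print(rescaled.round(1))                           # last predictor dominates
```

A useful check on this implementation is that the raw weights sum exactly to the model R² from an ordinary regression of y on X.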
Incremental validity of multiple multitasking test trials over a single trial
To determine the added value of administering several multitasking trials, the incremental validity of multiple trials (four) over a single trial in predicting Stealth Adapt performance was evaluated. A series of multiple regression analyses was conducted, adding one trial at each step across four regression models. As shown in Table 4, the model including all four trials, as compared to a single trial, resulted in an increase in the percentage of variance accounted for from 8% to 16% (R2Δ = .0823).
Table 4.
Incremental validity of multitasking trials in predicting percent of rescued allies
| Model | Variable(s) | β | b | SE | R2 | R2 Change |
|---|---|---|---|---|---|---|
| Model 1 | Multitasking Trial 1 Score | .28 | .0381 | .0069 | .0777 | |
| | | | | | (r = .28) | |
| Model 2 | Multitasking Trial 1 Score | .02 | .0028 | .0104 | .1252 | .0475 |
| | Multitasking Trial 2 Score | .34 | .0441 | .0099 | (r = .35) | |
| Model 3 | Multitasking Trial 1 Score | −.03 | −.0044 | .0109 | .1350 | .0098 |
| | Multitasking Trial 2 Score | .27 | .0346 | .0109 | (r = .37) | |
| | Multitasking Trial 3 Score | .15 | .0182 | .0090 | | |
| Model 4 | Multitasking Trial 1 Score | −.05 | −.0066 | .0108 | .1600 | .0250 |
| | Multitasking Trial 2 Score | .21 | .0270 | .0110 | (r = .40) | |
| | Multitasking Trial 3 Score | −.05 | −.0060 | .0115 | | |
| | Multitasking Trial 4 Score | .31 | .0376 | .0114 | | |
N = 368. Multitasking trial scores are calculated by obtaining the mean score for each trial across the four constituent tasks: Memory, Math, Visual Monitoring, and Listening.
Discussion
The results of the current study are potentially informative regarding the aspects of cognitive ability that are most predictive of performance in jobs that require constant task switching and monitoring of multiple sources of information (e.g., an RPA pilot). Specifically, the finding that multitasking ability was significantly predictive of the ability to successfully complete SAR missions and provided incremental validity over and above traditional tests used for classification and selection (e.g., the ASVAB) suggests that giving traditional tests of knowledge and cognitive ability may not be sufficient in selecting the applicants who are most likely to succeed in a highly complex job such as an RPA pilot.
In addition, the results replicated those of Barron and Rose (2017), who found that scores on the single constituent tasks were not as predictive as the multitasking scores obtained when the tasks were presented concurrently. Thus, the ability of an applicant to perform a single task does not necessarily equate to being able to perform that task while attending to other tasks or while monitoring multiple sources of information concurrently as part of the job. Furthermore, the results indicated that the pre-flight, untimed mission planning task was unrelated to multitasking performance, whereas all metrics of task performance during flight (i.e., performance requiring task switching and monitoring information from multiple sources) were significantly predicted by Multitasking Test scores.
In contrast to other studies that found cognitive ability to be highly correlated (r = .63) with multitasking (Sanderson, Bruk-Lee, Viswesvaran, Gutierrez, & Kantrowitz, 2016), this study found less construct overlap (e.g., rs = .28 to .36 with ASVAB composites), suggesting that multitasking is related to cognitive ability but explains some unique variance not accounted for by cognitive ability alone. Thus, supplementing traditional cognitive ability and knowledge tests with a test such as the Multitasking Test may prove useful for more accurate selection and classification. Put in practical terms, although this study was based on a simulation, the incremental validity and unique variance explained by multitasking in predicting the number of rescued allies could have life-and-death implications in actual SAR missions.
Finally, results of this study suggest that as multitasking improves (i.e., through practice across multiple multitasking trials), performance on the simulation also improves. These results provide some support for studies finding that multitasking can be trained to some extent and may improve performance and/or decrease errors (Bongers, Diederick van Hove, Stassen, Dankelman, & Schreuder, 2015).
Limitations and recommendations for future research
Limitations of the current study include a reliance on a desktop computer-based simulated RPA environment with narrowly defined outcome variables. In addition, the sample was limited to non-RPA military trainees who may, or may not, have been motivated to perform at the optimal level on either the Multitasking Test or during the simulation. Thus, a recommendation for future research would be to include a simulation which more fully replicates the operational complexities of a real-life RPA environment and utilize current RPA pilot trainees or other aircrew trainees further into the training pipeline. In addition, the Multitasking Test could be used to distinguish pre-flight and flying performance during live mission simulations (i.e., with interaction from live aircrew personnel) rather than those interactions simulated by a computer.
Given that the incremental validity of multitasking over general cognitive ability was evaluated using a test that does not include perceptual speed (i.e., the ASVAB), future research including a measure of perceptual speed may be warranted (e.g., Mount, Oh, & Burns, 2008). Future research may also be needed to integrate the literature on multitasking training with greater understanding of underlying individual differences in multitasking performance (e.g., Barron & Rose, 2017) to look at potential aptitude–treatment interaction effects (Cronbach & Snow, 1977). Although beyond the scope of the current study, a more comprehensive evaluation of the relationships between multitasking and situational awareness, and the trainability of both concepts may be warranted in future research. Finally, the Multitasking Test could be used to predict performance in other highly complex jobs in both military and civilian contexts (e.g., nurses, surgeons) to support validity generalizability of the findings.
Disclosure statement
No potential conflict of interest was reported by the authors.
References
- Barron, L. G., Carretta, T. R., & Rose, M. R. (2016). Aptitude and trait predictors of manned and unmanned aircraft pilot job performance. Military Psychology, 28, 65–77. doi: 10.1037/mil0000109
- Barron, L. G., & Rose, M. R. (2017). Multitasking as a predictor of pilot performance: Validity beyond serial single-task assessments. Military Psychology, 29, 316–326. doi: 10.1037/mil0000168
- Bongers, P. J., Diederick van Hove, P., Stassen, L. P. S., Dankelman, J., & Schreuder, H. W. R. (2015). A new virtual-reality training module for laparoscopic surgical skills and handling: Can multitasking be trained? A randomized controlled trial. Journal of Surgical Education, 72(2), 184–191. doi: 10.1016/j.jsurg.2014.09.004
- Bruk-Lee, V., Lanz, J., Drew, E. N., Coughlin, C., Levine, P., Tuzinski, K., & Wrenn, K. (2016). Examining applicant reactions to different media types in character-based simulations for employee selection. International Journal of Selection and Assessment, 24(1), 77–91. doi: 10.1111/ijsa.12132
- Chamorro-Premuzic, T., Winsborough, D., Sherman, R. A., & Hogan, R. (2016). New talent signals: Shiny new objects or a brave new world? Industrial and Organizational Psychology, 9(3), 621–640. doi: 10.1017/iop.2016.6
- Colom, R., Martinez-Molina, A., Shih, P. C., & Santacreu, J. (2010). Intelligence, working memory, and multitasking performance. Intelligence, 38, 543–551. doi: 10.1016/j.intell.2010.08.002
- Conte, J. M., & Jacobs, R. R. (2003). Validity evidence linking polychronicity and Big Five personality dimensions to absence, lateness, and supervisory performance ratings. Human Performance, 16, 107–129. doi: 10.1207/S15327043HUP1602_1
- Cronbach, L. J., & Snow, R. E. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York, NY: Irvington.
- Dillingham, G. L. (2012). Unmanned aircraft systems: Use in the national airspace system and the role of the Department of Homeland Security (Report No. GAO-12-889T). Washington, DC: Government Accountability Office.
- Dubbelt, L., Oostrom, A. J. K., Hiemstra, A. M. F., & Modderman, J. P. L. (2015). Validation of a digital work simulation to assess Machiavellianism and compliant behavior. Journal of Business Ethics, 130, 619–637. doi: 10.1007/s10551-014-2249-x
- Earles, J. A., & Ree, M. J. (1992). The predictive validity of the ASVAB for training grades. Educational and Psychological Measurement, 52(3), 721–725. doi: 10.1177/0013164492052003022
- Elsmore, T. F. (1994). SYNWORK1: A PC-based tool for assessment of performance in a simulated work environment. Behavior Research Methods, Instruments, & Computers, 26, 421–426. doi: 10.3758/BF03204659
- Farrell, B. (2014). Air Force: Actions needed to strengthen management of unmanned aerial system pilots (Report No. GAO-14-316). Washington, DC: Government Accountability Office.
- Fluckinger, C. D., Dudley, N. M., & Seeds, M. (2014). Incremental validity of interactive multimedia simulations in two organizations. International Journal of Selection and Assessment, 22(1), 108–112. doi: 10.1111/ijsa.12061
- Hambrick, D. Z., Rench, T. A., Poposki, E. M., Darowski, E. S., Roland, D., Bearden, R. M., … Brou, R. (2011). The relationship between the ASVAB and multitasking in Navy sailors: A process-specific approach. Military Psychology, 23, 365–380. doi: 10.1080/08995605.2011.589323
- Hayes, C., Jackson, D., Davidson, P. M., & Power, T. (2015). Medication errors in hospitals: A literature review of disruptions to nursing practice during medication administration. Journal of Clinical Nursing, 24(21–22), 3063–3076. doi: 10.1111/jocn.12944
- Laxmisan, A., Hakimzada, F., Sayan, O. R., Green, R. A., Zhang, J., & Patel, V. L. (2007). The multitasking clinician: Decision-making and cognitive demand during and after team handoffs in emergency care. International Journal of Medical Informatics, 76(11–12), 801–811. doi: 10.1016/j.ijmedinf.2006.09.019
- Loukopoulos, L. D., Dismukes, R. K., & Barshi, I. (2009). The multitasking myth: Handling complexity in real-world operations. Burlington, VT: Ashgate.
- Lovelace, K. J., Eggers, F., & Dyck, L. R. (2016). I do and I understand: Assessing the utility of web-based management simulations to develop critical thinking skills. Academy of Management Learning and Education, 15(1), 100–121. doi: 10.5465/amle.2013.0203
- Mangos, P. (2016). Stealth Adapt [Computer program]. Tampa, FL: Adaptive Immersion Technologies.
- Miles, J. D., & Strybel, T. Z. (2017). Measuring situation awareness of student air traffic controllers with online probe queries: Are we asking the right questions? International Journal of Human-Computer Interaction, 33(1), 55–65. doi: 10.1080/10447318.2016.1232231
- Mount, M. K., Oh, I.-S., & Burns, M. (2008). Incremental validity of perceptual speed and accuracy over general mental ability. Personnel Psychology, 61(1), 113–139. doi: 10.1111/j.1744-6570.2008.00107.x
- Murji, A., Luketic, L., Sobel, M. L., Kulasegaram, K., Leyland, M., & Posner, G. (2016). Evaluating the effect of distractions in the operating room on clinical decision-making and patient safety. Surgical Endoscopy, 30(10), 4499–4504. doi: 10.1007/s00464-016-4782-4
- Oswald, F. L., Hambrick, D. Z., & Jones, L. A. (2007). Keeping all the plates spinning: Understanding and predicting multitasking performance. In D. H. Jonassen (Ed.), Learning to solve complex scientific problems (pp. 77–96). New York, NY: Erlbaum.
- Redick, T. S. (2016). On the relation of working memory and multitasking: Memory span and synthetic work performance. Journal of Applied Research in Memory and Cognition, 5, 401–409. doi: 10.1016/j.jarmac.2016.05.003
- Rose, M. R., Barron, L. G., Carretta, T. R., Arnold, R. D., & Howse, W. R. (2014). Early identification of unmanned aircraft pilots using measures of personality and aptitude. The International Journal of Aviation Psychology, 24, 36–52. doi: 10.1080/10508414.2014.860849
- Sanderson, K. R., Bruk-Lee, V., Viswesvaran, C., Gutierrez, S., & Kantrowitz, T. (2016). Investigating the nomological network of multitasking ability in a field sample. Personality and Individual Differences, 91, 52–57. doi: 10.1016/j.paid.2015.11.013
- Tonidandel, S., & LeBreton, J. M. (2015). RWA web: A free, comprehensive, web-based, and user-friendly tool for relative weight analyses. Journal of Business and Psychology, 30, 207–216. doi: 10.1007/s10869-014-9351-z
