Author manuscript; available in PMC: 2020 May 19.
Published in final edited form as: Neuroscience. 2017 Nov 10;369:363–373. doi: 10.1016/j.neuroscience.2017.11.005

Is Hand Selection Modulated by Cognitive–perceptual Load?

Jiali Liang a,*, Krista Wilkinson a,c, Robert L Sainburg b
PMCID: PMC7235685  NIHMSID: NIHMS1588628  PMID: 29129794

Abstract

Previous studies have proposed that selecting which hand to use for a reaching task is modulated by a factor described as “task difficulty”. However, which features of a task contribute to greater or lesser “difficulty” in the context of hand-selection decisions has yet to be determined. Evidence indicates that biomechanical and kinematic factors such as movement smoothness and work can predict patterns of selection across the workspace, suggesting a role of predictive cost analysis in hand selection. We hypothesized that this type of prediction should recruit substantial cognitive resources and thus should be influenced by cognitive–perceptual loading. We tested this hypothesis by assessing the role of cognitive–perceptual loading on hand-selection decisions, using a visual search task that presents different levels of difficulty (cognitive–perceptual load), as established in previous studies of overall response time and efficiency of visual search. Although the data are necessarily preliminary because of the small sample size, they suggest an influence of cognitive–perceptual load on hand selection, such that the dominant hand was selected more frequently as cognitive load increased. Interestingly, cognitive–perceptual loading also increased cross-midline reaches with both hands. Because crossing midline is more costly in terms of kinematic and kinetic factors, our findings suggest that cognitive processes are normally engaged to avoid costly actions, and that the choice not to cross midline requires cognitive resources.

Keywords: cognitive–perceptual load, hand selection, reaching, task difficulty

INTRODUCTION

Approximately 90% of all humans prefer the right hand for most unimanual tasks (Annett, 1972; Bryden, 1977). Handedness, assessed as the preference to use one hand for a variety of tasks, has been attributed to both genetic predisposition (Annett, 1972; Levy and Nagylaki, 1972; Klar, 1996) and environmental reinforcement (Michel, 1992; De Agostini et al., 1997; Singh et al., 2001; Fagard and Dahmen, 2004; Schaafsma et al., 2009). Because handedness is usually defined by the choice individuals make about which hand to use for a given task (Scharoun and Bryden, 2014), the factors that influence the hand-selection process are integral to understanding the neurobehavioral mechanisms that underlie handedness.

Factors influencing hand selection

Hand preference refers to the hand that is most often selected to complete a task, and implies a decision-making process. Questionnaires or inventories of handedness are a typical method used to assess hand preference (McManus and Bryden, 1992). Researchers either ask participants to fill out a questionnaire or record participants’ responses to questionnaire items that are read aloud to them. These questionnaires are limited by their inherent subjectivity in the assessment of hand preference. While questionnaires may often reflect the result of the hand-selection process, they do not assess the underlying mechanisms involved in that process (Doyen et al., 2001; Brown et al., 2006). Attempts to determine the mechanisms that underlie hand selection have generally focused on how hand selection is influenced by target location in the workspace, or by the requirements for manipulation following procurement of the target. When targets do not require manipulation, participants tend to reach to the lateral workspace with the ipsilateral hand, whether or not it is dominant. However, for the midline region, including areas just lateral to midline, subjects tend to choose the dominant hand (Bishop et al., 1996; Mamolo et al., 2004; Przybyla et al., 2013). Thus, participants will tend to cross midline with the dominant hand to procure targets or objects in the contralateral workspace, but only when those targets are near the midline. When objects are placed farther from midline, participants tend to switch to the non-preferred hand. These patterns are robust across paradigms that use simple 2-dimensional spatial targets (Przybyla et al., 2013), as well as reach-to-grasp of 3-dimensional objects, such as a small cube or ball (Gabbard, 1998; Gabbard et al., 2001; Leconte and Fagard, 2006).

Leconte and Fagard (2006) proposed that this effect of target location on hand selection may reflect the costs of biomechanical factors in order to ensure that actions are efficient and comfortable. This view has been supported by studies that have quantified the kinematics and kinetics of movements during hand selection tasks. Compared to reaches to the contralateral workspace, ipsilateral reaches show quantitative advantages in reaction time, peak velocity, duration, final position accuracy, movement linearity and smoothness (Przybyla et al., 2012; Coelho et al., 2013). In addition, kinetic factors such as smoothness and work can also predict patterns of selection in the workspace (Przybyla et al., 2012; Coelho et al., 2013). Thus, spatial biases for hand-selection seem likely to reflect the tendency to limit kinematic and kinetic costs (Coelho et al., 2013).

Another line of work has cited “task difficulty” as a factor that can modulate hand-selection decisions for reaching (Calvert and Bishop, 1998; Mamolo et al., 2004, 2005). However, the features that contribute to “difficulty” remain elusive. Variations in the definition of “task difficulty” may be responsible for the equivocal findings among the few studies that have examined its influence on hand selection. Some studies have focused on the number of elements in a movement sequence, comparing a single action such as grasping with a sequence of actions such as grasping to relocate an object (e.g., Gabbard, 1998; Gabbard et al., 2001). These studies found greater use of the preferred hand for tasks requiring sequential actions (Mamolo et al., 2004), and concluded that the nature of the action following the reach tended to dominate the decision process. Consistent with this view, participants reached progressively more with the dominant hand when required to pick up a tool, simulate the use of the tool, and finally use the tool (Steenhuis and Bryden, 1999; Mamolo et al., 2004). However, Bryden and Roy (2006) showed no effect of “task difficulty” when participants were required to manipulate an object with graded difficulty: whether participants simply picked up and tossed an object, or oriented and placed the object in a slot, hand selection was not altered. It is likely that whether a motor sequence influences hand selection depends on the coordination requirements of the most dexterous element in the sequence.

Since the seminal work of Rosenbaum (1980), it has been clear that hand selection for simple reaching movements reflects a cognitive decision-making process. Rosenbaum (1980) showed that reaction time increased with the number of decisions that participants needed to make prior to simple reaching movements, and that a substantial increase in reaction time was associated with the requirement to select which hand to use. When participants were cued, prior to movement, about which hand to use, reaction time was reduced substantially, reflecting the substantial cognitive load associated with this choice. Previous studies have suggested that simple factors, such as the distance required to reach a target, might help determine the decision of which hand to use, such that the hand closest to the target is often selected (Gabbard and Rabb, 2000). Two previous studies have provided evidence that the arm-selection process might reflect a cost analysis for the required movements. First, Przybyla et al. (2012) showed that while the dominant arm was chosen more frequently when visual feedback was provided, non-dominant arm use increased substantially when visual feedback was not available. They also showed that across most of the workspace, the dominant arm was more accurate when vision was available, while the non-dominant arm was more accurate when vision was not available, a finding that might be related to better proprioceptive acuity in the non-dominant arm. Whatever the origin of this asymmetry, Przybyla and colleagues demonstrated that participants increased their choice to use the non-dominant arm, under no-vision conditions, when non-dominant arm use became more advantageous to task performance. These findings suggest that a cost analysis might underlie arm-selection processes that occur prior to reaching movements. In a more direct test of the cost-analysis hypothesis, Stoloff et al. (2011) examined hand selection in the region of the workspace in which subjects often reach with both left and right hands. Using a virtual reality environment, they manipulated the accuracy rates for the two hands by introducing noise to one or the other hand, without the participants’ awareness. They found that participants increased non-dominant hand selection under these subtle modifications of task success. This emphasized the role of decision making in hand selection, as well as the role of cost analysis based on past success. Overall, these findings support the idea that hand selection is an active decision process, based on analysis of movement costs, associated with current movement conditions and derived from prior experience.

Foundation of current study

Regardless of how the target location, kinematic, kinetic, and manipulation requirements of a target or object might influence hand selection, it is clear that participants go through a decision-making process in order to select which hand to use in a given situation (e.g., Rosenbaum, 1980; Gabbard and Rabb, 2000; Przybyla et al., 2012). Thus, a process of cost analysis seems to underlie the hand-selection process. We now hypothesize that this type of analysis should recruit substantial cognitive resources and thus should be influenced by cognitive–perceptual loading.

We tested this hypothesis by adapting a visual search task that presented participants with visual displays that varied in visual–perceptual characteristics. The task required participants to select a just-prompted target stimulus from an array of 16 symbols. We examined whether and how the visual–perceptual characteristics of the different displays influenced motor behavior during the selection of the target. The displays were constructed based on prior work (described below) indicating that two visual–perceptual characteristics of stimulus displays – target color and spatial organization – influenced speed of selection and efficiency of visual search in children with and without disabilities. Reliable differences in accuracy as well as time to respond (latency) across displays were taken to indicate differences in the “cognitive load” exerted by the display, that is, differences in task difficulty.

The prior studies had examined visual search in arrays containing 16 line drawings of clothing items, four of which were red, four yellow, four brown, and four blue (Wilkinson et al., 2008; Wilkinson and McIlvane, 2013; Wilkinson et al., 2014; those studies focused on children with typical development and on individuals with Down syndrome; differences between those samples and the participants studied here are considered in the Discussion). In those studies, two variants of the display were created, illustrated in Panels B and C of Fig. 1 (in the current study we added a third condition, also visualized in Fig. 1; the justification for that added “control” condition is provided in the Experimental Procedures). In one variant, Fig. 1B, the line drawings that shared a color were “clustered” into quadrants of the space; for instance, all four red items appeared in the top left quadrant. In the other variant, the line drawings that shared a color were scattered or “distributed” around the display, as illustrated in Fig. 1C. Across both variants, the content of the display – that is, the actual clothing items depicted – was identical; thus, any differences in performance must relate to the visual–perceptual characteristics of the displays and their influence on responding.

Fig. 1.

Experimental conditions. In the one-symbol control condition (A), only the target symbol was displayed. In the clustered condition (B), symbols were clustered based on internal colors (red, brown, blue, and yellow). In the distributed condition (C), adjacent symbols did not share any internal colors. The Picture Communication Symbols © 1981–2016 by Tobii Dynavox. All Rights Reserved Worldwide. Used with permission. Boardmaker® is a trademark of Tobii Dynavox.

Wilkinson and McIlvane (2013) examined the influence of the different displays on accuracy and efficiency of responding via mouse click in 24 individuals with either Down syndrome or autism. On each trial, participants were presented with a sample line-drawing symbol and asked to select it with the mouse. Upon that selection, participants were presented with either the clustered or the distributed arrangement of the 16-symbol display and asked to click on the target (the symbol that had just appeared as the sample). Participants were more accurate in finding targets when the like-colored line drawings were clustered (though this did not reach statistical significance) and were significantly faster to click on the target in the clustered condition. Wilkinson et al. (2014) replicated these findings with 14 children with typical development, adding automated eye tracking that recorded point of gaze during the visual search itself. Once again, accuracy and efficiency of responding via mouse click were greater under the clustered condition; the eye-tracking data indicated that the greater response efficiency in the clustered condition was associated with fewer looks to the distracter items (i.e., more efficient visual search itself).

Research aims

The findings from Wilkinson and colleagues (Wilkinson and McIlvane, 2013; Wilkinson et al., 2014) offered initial empirical evidence that the visual–perceptual cues provided by color grouping produced different levels of performance in the task and thus exerted different levels of task difficulty. To extend this work into the domain of motor behavior, we examined whether displays with different visual–perceptual characteristics (and, thus, different cognitive loads) affect motor behaviors including hand selection, reaction time, and hand kinematic measures.

We hypothesized that the decision of which hand to use when reaching to various locations in space requires implicit cognitive processes and should therefore be sensitive to task difficulty. Thus, we predicted that increasing task difficulty would alter hand-selection patterns, with little contralateral reaching under the clustered condition and greater contralateral reaching under the more difficult distributed condition. For reaction time, longer times indicate processing of more information, which occurs when there are more motor choices (Rosenbaum, 1980) or when a more complex motor response is being prepared (Henry and Rogers, 1960). Because the clustered presentation likely reduces visual–perceptual demands, we expected shorter reaction times to initiate movement under the clustered organization than under the distributed organization. For hand kinematic measures, hand-path curvature, or deviation from linearity, quantifies how much the movement curves as it proceeds toward a target. We anticipated that the clustered condition, in which target stimuli were grouped by their internal colors, would be associated with the straightest reaches, because the participant could initiate the movement directly toward the relevant subset of symbols, whereas the distributed condition would be associated with more curved hand paths.

Finally, we also examined whether ipsilateral reaches hold an advantage over contralateral reaches in reaction time, distance, maximum speed, and hand-path curvature, to evaluate whether our results align with previous studies (Przybyla et al., 2012; Coelho et al., 2013). Specifically, we anticipated that contralateral reaches would (a) have longer reaction times, (b) travel a longer distance from the start point (green cursor) to the target, (c) reach higher maximum speed (maximum tangential hand velocity), and (d) produce greater hand-path curvature than ipsilateral reaches.

EXPERIMENTAL PROCEDURES

Participants

Participants were 11 right-handed (by self-report) young adults who were enrolled in college or had completed a college degree. Participants had a mean age of 24 (range 21–29) years, and five were male. All procedures were approved by the Institutional Review Board of the Pennsylvania State University. The limitation of the small sample size is considered in the Discussion.

Stimulus materials

Stimuli were 16 line-drawing picture symbols taken from the Mayer-Johnson Boardmaker symbol set (PCS; Mayer-Johnson, 1992). As in Wilkinson and McIlvane (2013), the stimuli consisted of four items worn on the feet (e.g., sandals, boots), four items worn on the torso (e.g., t-shirt, dress shirt), four summertime items (e.g., bathing suit, sunglasses), and four inclement-weather items (e.g., raincoat, warm hat). Each set of items had loose semantic/categorical relations, in addition to sharing internal color. Their physical shapes were quite dissimilar; for instance, the baseball cap and the bathing suit were both red but quite different in shape.

Experimental set-up and data collection

Fig. 2 shows the experimental setup. Participants sat in a raised chair facing a horizontal workspace. The arms rested on, and were gently strapped onto, air sleds to minimize the effects of friction and fatigue as participants moved their arms across the workspace. A splint on each arm immobilized the joints distal to the elbow. A mirror positioned above the workspace reflected stimuli projected by an overhead 55-in. high-definition television (Sony Electronics). Although the participants’ hands were beneath the mirrored workspace, their locations relative to the display were indicated by small crosses (cursors) on the mirror surface, such that when a participant moved a hand the corresponding cross moved in tandem. We placed participants’ hands underneath the stimulus display (under the mirrored surface), rather than having them hover over the display, so that the hands themselves would not occlude the display; with the mirrored setup, participants could see the display at all times. Participants had no difficulty understanding the relation between their hand movements under the mirror and the movement of the cursors displayed on the mirrored surface.

Fig. 2.

Schematics of the experimental apparatus (A) and task (B). The array contained either 1 or 16 symbols (see Fig. 1 for details). FOB, Flock of Birds sensor. Cursors (green) representing the left and right hand positions were displaced 30 cm toward the center of the workspace. The Picture Communication Symbols © 1981–2016 by Tobii Dynavox. All Rights Reserved Worldwide. Used with permission. Boardmaker® is a trademark of Tobii Dynavox.

The stimuli were displayed with custom software written in REALbasic (REAL Software). A six-degree-of-freedom (6-DOF) Trackstar (Northern Digital Technology) magnetic tracking system sampled limb positions and orientations at 116 Hz. For motion tracking, we digitized the following bony landmarks on each limb: (1) the index fingertip, (2) the metacarpophalangeal joint, (3) the lateral and medial epicondyles of the humerus, and (4) the acromion process.

We processed the data with custom programs written in IgorPro (WaveMetrics). We low-pass filtered the displacement data at 8 Hz with a third-order dual-pass Butterworth filter before differentiation to obtain velocity and acceleration profiles. Because there were minor oscillations of the cursors in the start circle, we defined the start of each reach as the first minimum in tangential velocity that was under 8% of the maximum velocity for that trial. Likewise, we defined the end of each reach as the first minimum following peak velocity that was below 8% of maximum velocity. The start and end point of movement were verified manually, and corrected if necessary.
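
As an illustration, the onset/offset criteria described above can be sketched in Python with SciPy; this is not the authors' IgorPro code, the function and parameter names are ours, and a simple threshold crossing stands in for the exact "first local minimum below threshold" search (which the authors also verified manually):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def movement_bounds(position, fs=116.0, cutoff=8.0, frac=0.08):
    """Estimate movement start/end indices from a 2-D hand-position trace.

    position : (n_samples, 2) array of x/y positions.
    Hypothetical helper illustrating the criteria in the text.
    """
    # Third-order dual-pass (zero-lag) Butterworth low-pass at 8 Hz,
    # as described; filtfilt runs the filter forward and backward.
    b, a = butter(3, cutoff / (fs / 2.0))
    pos = filtfilt(b, a, position, axis=0)

    # Differentiate to obtain the tangential-velocity (speed) profile.
    vel = np.gradient(pos, 1.0 / fs, axis=0)
    speed = np.linalg.norm(vel, axis=1)

    peak = int(np.argmax(speed))
    thresh = frac * speed[peak]          # 8% of peak speed

    # Start: last sub-threshold sample before the peak
    # (simplified threshold crossing, not the exact minimum search).
    below_before = np.where(speed[:peak] < thresh)[0]
    start = int(below_before[-1]) if below_before.size else 0

    # End: first sub-threshold sample after the peak.
    below_after = np.where(speed[peak:] < thresh)[0]
    end = peak + int(below_after[0]) if below_after.size else len(speed) - 1
    return start, end
```

Running this on a smooth sigmoid-shaped reach yields bounds that bracket the speed peak, mimicking the trial segmentation described above.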

Kinematic measures included reaction time, maximum tangential hand velocity, distance, and hand path deviation from linearity. Reaction time was the time between the imperative “go” signal and the beginning of movement. We defined hand path deviation from linearity as the minor axis of the path divided by the major axis of the path (aspect ratio). The major axis was the longest distance between any two points on the hand path, whereas the minor axis was the longest distance on an axis perpendicular to the major axis, between any two points in the hand path.
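
The aspect-ratio measure just defined can be written compactly; the following is a minimal sketch (not the authors' implementation), assuming the hand path is sampled as an (n × 2) array:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def deviation_from_linearity(path):
    """Aspect ratio of a 2-D hand path: minor axis / major axis.

    A perfectly straight path returns 0; larger values mean a more
    curved path. Illustrative sketch of the measure in the text.
    """
    # Major axis: longest distance between any two points on the path.
    d = squareform(pdist(path))
    i, j = np.unravel_index(np.argmax(d), d.shape)
    major = d[i, j]

    # Minor axis: longest distance between any two points measured
    # along the axis perpendicular to the major axis.
    axis = (path[j] - path[i]) / major
    perp = np.array([-axis[1], axis[0]])
    proj = path @ perp
    minor = proj.max() - proj.min()
    return minor / major
```

For example, a straight path gives a ratio of 0, while a semicircular path gives a ratio of about 0.5 (perpendicular extent of 1 over a chord of 2).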

Experimental task

The task was a zero-delay identity matching-to-sample task. When participants had placed their hands within the two green circles on the reflected display, a single “sample” appeared at the midline of the display. This sample was one of the line drawings from the set of 16 symbols just described. The sample remained in sight for 1.5 s and then disappeared, replaced by the choice array for the experimental condition. The participants’ task was to move their hand to the line drawing in the choice array that was identical to the sample they had just seen. Participants were encouraged to use whichever hand they wanted. The trial ended either when the participant reached the target choice and dwelt on it, or after 3 s (whichever came first). Correct selections resulted in the ringing of a chime and an accumulation of “points” presented on the computer screen.

Experimental conditions and session design

In addition to the two choice arrays already illustrated in Fig. 1, the current study added a third display condition. This was a “one-symbol control” condition, illustrated in Panel A of Fig. 1, in which a single line drawing appeared as the sample and then again as the target on the stimulus display, and the participant was required simply to reach for it. We added this control to provide a condition involving little cognitive demand, yielding a baseline measure of motor control and planning when no decision (regarding which symbol to select) was necessary. The 16-symbol clustered and distributed displays illustrated in Fig. 1 then served as the “clustered” and “distributed” experimental conditions. In both, a single line drawing appeared as the sample; however, the choice array now contained all 16 line drawings.

A total of 48 trials were presented in each condition, with each of the 16 line drawings serving as the sample/target on three trials, and each of the 16 possible response positions containing the target on three trials (for instance, the upper-left corner position was the location of the target on three of the trials, and so forth). The positions of the stimuli were pseudorandomized between trials, such that participants could not anticipate where a stimulus would appear on any given trial. This was done to minimize the influence of practice effects on the performances of interest.
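
A pseudorandomized block with these constraints can be generated along the following lines. This is an illustration, not the authors' REALbasic code; the symbol identifiers are placeholders, and the sketch is simplified in that it fixes only which symbol is the target and where it lands, whereas the full task would also permute the remaining 15 distracters on each trial:

```python
import random

symbols = [f"sym{i}" for i in range(16)]   # 16 line drawings (placeholder IDs)
positions = list(range(16))                # 16 grid positions

# Each symbol serves as the target on 3 of the 48 trials, and each
# position contains the target on 3 trials; pairing them at random
# keeps the target location unpredictable from trial to trial.
rng = random.Random(0)
targets = symbols * 3
target_pos = positions * 3
rng.shuffle(targets)
rng.shuffle(target_pos)
trials = list(zip(targets, target_pos))    # 48 (symbol, position) trials
```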

Participants received four blocks of 48 trials each. Each block contained trials for one condition. For example, all of the trials for the clustered condition were presented in a single 48-trial block, and the 48 distributed trials were a separate block. Each of the two experimental conditions had an associated control condition, resulting in four blocks of 48 trials (total = 192 trials). The order of these sessions was quasi-randomly counterbalanced, such that a control condition was always the first block presented; otherwise, the order in which participants experienced the experimental conditions was counterbalanced.

Data analysis

For the purposes of data analysis, two independent factors were examined: experimental condition (control, clustered, distributed) and target column (columns 1, 2, 3, or 4). Target column was defined spatially, such that the four spaces on the farthest left of the display were identified as column #1, the four spaces just to their right as column #2, the four spaces to the right of that as column #3, and the four spaces on the farthest right of the display as column #4. Target column was included as an independent variable in order to examine whether visual–perceptual characteristics (experimental condition) influenced the production of ipsilateral vs contralateral reaches. Repeated-measures 3 × 4 ANOVAs examined the influence of condition (control/clustered/distributed), target column (columns 1–4), and their interaction on each dependent measure. Post hoc testing was conducted when needed. Because of the small sample size, assessments for violations of assumptions were conducted. All findings with p > 0.1 were reported as non-significant.
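
For readers wishing to reproduce this style of analysis, the 3 × 4 repeated-measures ANOVA can be sketched in Python with statsmodels. The data below are synthetic placeholders (the condition offsets are invented, not the study's measurements), and note that `AnovaRM` does not apply sphericity corrections such as Greenhouse–Geisser, for which a package like pingouin would be needed:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
conds = ["control", "clustered", "distributed"]
cols = [1, 2, 3, 4]

# One mean reaction time (seconds) per subject x condition x column cell,
# with an invented per-condition slowing for illustration.
rows = []
for s in range(1, 12):                       # 11 participants, as in the study
    base = rng.normal(0.45, 0.05)
    for ci, c in enumerate(conds):
        for col in cols:
            rt = base + 0.05 * ci + rng.normal(0.0, 0.02)
            rows.append({"subject": s, "condition": c, "column": col, "rt": rt})
df = pd.DataFrame(rows)

# 3 x 4 repeated-measures ANOVA: condition x target column.
res = AnovaRM(df, depvar="rt", subject="subject",
              within=["condition", "column"]).fit()
print(res.anova_table)
```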

RESULTS

Not surprisingly, as these were college students undergoing a fairly easy memory and search task, the average correct response rate was near 100%, regardless of target column and display condition. A repeated-measures 3 × 4 ANOVA indicated that there was no significant effect of condition, F(2, 20) = 3.03, p = 0.071, partial η2 = 0.233, of target column, F(3, 30) = 2.47, p = 0.081, partial η2 = 0.198, or of their interaction. To check whether these non-significant results were due to a lack of statistical power, we conducted a post hoc power analysis using G*Power (Faul et al., 2007) with power (1 − β) set at 0.80 and α = 0.05. This indicated that significant effects at p = .05 would have been found with sample sizes of N = 28 (condition) and N = 24 (target column).
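
A rough analogue of this sensitivity analysis can be sketched in Python. The η²-to-f conversion is the standard one, but statsmodels' F-test power routine treats the factor as a between-subjects one-way ANOVA, so the resulting N will not match the G*Power repeated-measures result exactly:

```python
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

def eta2_to_f(eta2):
    # Standard conversion from partial eta-squared to Cohen's f.
    return np.sqrt(eta2 / (1.0 - eta2))

f_cond = eta2_to_f(0.233)   # condition effect size from the accuracy ANOVA

# Total N for 80% power at alpha = .05, treating condition as a
# 3-group between-subjects factor (an approximation only; it ignores
# the repeated-measures correlation that G*Power can account for).
n_total = FTestAnovaPower().solve_power(effect_size=f_cond, alpha=0.05,
                                        power=0.80, k_groups=3)
print(round(f_cond, 3), int(np.ceil(n_total)))
```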

Visual display complexity modulates reaction time

Reaction time for motor tasks is thought to reflect cognitive processes associated with the preparation of movements: the more complex the task condition, the longer the reaction time (Henry and Rogers, 1960; Rosenbaum, 1980; Ghez et al., 1991). Fig. 3 shows the mean reaction time, as well as linearity, across target columns within each condition. A repeated-measures 3 × 4 ANOVA examined the role of condition (control/clustered/distributed) and target column (columns 1 to 4) on mean reaction time. In evaluating the effects of condition, a Greenhouse–Geisser correction was applied to adjust for a slight but significant violation of sphericity, χ²(2) = 6.318, p = 0.042. There was no effect of target column. However, a large main effect of condition, F(1.33, 13.29) = 24.79, p < 0.001, partial η2 = 0.713, was qualified by a significant interaction between condition and target column, F(6, 60) = 4.398, p = 0.001, partial η2 = 0.305. Given this interaction, post hoc testing (Scheffé) examined the differences between conditions for each column. For all four columns, the distributed condition produced significantly longer reaction times than the control condition (p < 0.05). For column 1, the clustered condition produced significantly longer reaction times than the control condition (p < 0.05), whereas for columns 2 to 3 no such difference was detected. The distributed condition produced significantly longer reaction times than the clustered condition for columns 1, 3, and 4 (p < 0.05), but not for column 2. Thus, display condition significantly modified reaction times. Given the distribution of reaction times across conditions, we conclude that the cognitive load was lowest for the control condition, intermediate for the clustered condition, and highest for the distributed condition.

Fig. 3.

Reaction time (A) and deviation from linearity (B) across experimental conditions and columns. Column 1 = left-most column; Column 2 = second-left-most column; Column 3 = second-right-most column; Column 4 = right-most column; Error bars in the current and the following bar graphs are standard errors.

Cognitive load condition did not influence the quality of the trajectory

We next examined whether the quality of the trajectory varied across our three cognitive-loading conditions and target columns by quantifying deviation from linearity of the hand path. Fig. 3 shows the mean deviation from linearity across conditions and columns. A repeated-measures 3 × 4 ANOVA evaluated the role of condition and target column on mean linearity. A main effect of target column was detected, F(3, 30) = 5.879, p = 0.003, partial η2 = 0.370. There was no significant effect of condition, nor an interaction between target column and condition. The effect of target location on linearity was expected, as reaches to different locations in space reflect different joint coordination patterns and thus different degrees of linearity (Hollerbach and Flash, 1982; Sainburg et al., 1999). In summary, our display conditions had a substantial effect on reaction time, and thus on cognitive load, but not on movement quality as reflected by linearity.

Cognitive load modulates hand selection

Fig. 4 shows the proportion of reaches averaged across participants for each hand and each target, across conditions. Each target is shown as a circular pie graph, with the proportion of reaches made with the left hand in gray and with the right hand in black. Panel A shows the results for the control condition, while Panels B and C show the results for the clustered and distributed display conditions, respectively. Thus, the cognitive load of the task is ordered from Panel A (low) to Panel C (high). Two distinct patterns should be noted. As the cognitive load of the task increases, more reaches are made with the right hand (black). In addition, as load increases, more right- and left-hand movements are made across the midline. For the right hand, this is reflected by more black area in columns 1 and 2, and for the left hand, by more gray area in column 3.

Fig. 4.

Proportion of reaches by each hand. Means were computed across participants (n = 11). Each target is shown as a circular pie graph (light gray area for the left hand, black area for the right hand). Results are shown for each display condition: control (A), clustered (B), and distributed (C). Numbers across the top: 1 = left-most column; 2 = second-left-most column; 3 = second-right-most column; 4 = right-most column. Numbers down the right side: 1 = top row; 2 = mid-top row; 3 = mid-bottom row; 4 = bottom row.

Fig. 5 shows the proportion of contralateral (midline-crossing) reaches by both hands across conditions. Repeated-measures ANOVA with Greenhouse–Geisser correction, applied to adjust for a significant violation of sphericity, indicated that the visual differences observed in Figs. 4 and 5 failed to reach statistical significance, at least with this sample size. The figures suggest that when cognitive load was high, participants tended to reach across midline more with both hands, though further direct study is warranted to determine whether this tendency would reach statistical significance with a larger sample.

Fig. 5.

Proportion of contralateral (midline-crossing) reaches by each hand across conditions. Control condition = only the target symbol was displayed; Clustered condition = symbols were grouped by internal color (red, brown, blue, and yellow); Distributed condition = adjacent symbols did not share any internal colors.

Contralateral reaches are more costly than ipsilateral reaches

We next asked whether ipsilateral and contralateral reaches differed in motor performance. For this analysis, we collapsed data across target columns 1 and 2 (ipsilateral to the left hand, contralateral to the right) vs. columns 3 and 4 (ipsilateral to the right hand, contralateral to the left). Fig. 6A shows the reaction time for ipsilateral and contralateral reaches. A paired-sample t-test indicated that reaction time was longer for contralateral than ipsilateral reaches, a difference that approached significance, t(10) = 2.219, p = 0.059. A post hoc power analysis using G*Power (Faul et al., 2007), with power (1 – β) set at 0.80 and α = 0.05, one-tailed, revealed that this group difference would have been detected at p = 0.05 with N = 27. Thus, the non-significant result may be attributable to our limited sample size. Fig. 6B shows the distance traveled: a paired-sample t-test indicated that it was significantly longer for contralateral than ipsilateral reaches, t(10) = 27.508, p < 0.001. Fig. 6C shows maximum velocity, which was significantly greater for contralateral than ipsilateral reaches, t(10) = 8.570, p < 0.001. Fig. 6D shows the linearity deviation: contralateral movements were significantly more curved than ipsilateral ones, t(10) = 3.586, p = 0.005. Thus, contralateral reaches elicited longer preparation times, covered greater distances at higher speeds, and were less straight than ipsilateral reaches, indicating that they carried greater kinematic and movement-preparation costs.
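The link between the reported t statistic and the prospective sample size can be sketched from first principles. The snippet below derives Cohen's dz from the reported t(10) = 2.219 and n = 11, then applies a normal-approximation sample-size formula at α = 0.05 (one-tailed) and power = 0.80. This is a ballpark illustration only: G*Power uses the exact noncentral t distribution, and the authors' own effect-size input is not reported here, which is why their figure of N = 27 can differ from this approximation.

```python
import math

# Reported values: paired t-test on reaction time, t(10) = 2.219, n = 11.
t_stat, n = 2.219, 11

# For a paired design, Cohen's d_z = t / sqrt(n).
dz = t_stat / math.sqrt(n)  # roughly 0.67

# Normal-approximation sample size for a one-tailed paired t-test.
Z_ALPHA = 1.6449  # one-tailed 5% critical value of the standard normal
Z_BETA = 0.8416   # z for 80% power

n_required = math.ceil(((Z_ALPHA + Z_BETA) / dz) ** 2)
print(round(dz, 3), n_required)
```

The approximation understates the exact noncentral-t result for small samples, so software such as G*Power typically returns a somewhat larger N.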

Fig. 6.

Kinematic measures of ipsilateral versus contralateral reaches. A = reaction time; B = distance; C = maximum velocity; D = linearity; ipsi = target symbol was ipsilateral to the hand used for selection; contra = target symbol was contralateral to the hand used for selection.

DISCUSSION

We examined whether hand-selection decisions recruit cognitive resources, and thus might be modified by the cognitive–perceptual loading of a task. We used a memory and search task with displays whose visual–perceptual characteristics imposed different levels of cognitive load, as suggested by prior research in which the speed and accuracy of search for a target varied systematically by display condition (Wilkinson and McIlvane, 2013; Wilkinson et al., 2014). The cognitive loading of each condition was confirmed in the current study by analysis of reaction times, which were substantially longer in conditions that presented greater cognitive loads. Cognitive load did not alter the quality of motor performance, as reflected by response accuracy and hand-path linearity. As Figs. 4 and 5 indicate, we also observed increased use of the dominant hand, as well as more frequent cross-midline reaches with both hands, under conditions of greater cognitive load. Although these findings warrant further study and replication, the increase in cross-midline reaches is particularly notable given that contralateral reaches carried greater costs, eliciting longer reaction times, longer distances, higher velocities, and lower linearities. Because crossing midline is more costly in terms of kinematic and kinetic factors, we conclude that cognitive processes are engaged to avoid costly actions, and that the choice not to cross midline requires cognitive resources.

Previous studies found an increase only in right-hand reaches into the left workspace (midline crossing) with increasing task difficulty (Steenhuis and Bryden, 1999; Mamolo et al., 2004). However, in those studies "task difficulty" was associated with more complex movements and did not separate cognitive from motor components. Surprisingly, our findings indicated an increase in cross-midline reaches with both hands under the greatest cognitive load. This suggests that the choice not to cross midline requires cognitive resources, which became limited under greater cognitive load due to competition. As a result, hand choice became less selective, resulting in an increase in the more costly reaches across midline with both hands.

Evidence has emerged that cognitive resources are recruited for standing, walking and postural balance, which have traditionally been considered fairly automatic processes (Teasdale et al., 1993; Woollacott and Shumway-Cook, 2002). For example, simultaneous performance of a secondary task, counting backward, decreases gait velocity while increasing stride variability, an effect potentiated by aging (Priest et al., 2008). Similar findings have been demonstrated for tasks as simple as maintaining standing posture (see Boisgontier et al., 2013 for a review). Reaching is a voluntary motor task that has long been known to recruit substantial cognitive processes (Jeannerod, 1997), although attention in that literature has focused predominantly on the number and complexity of the actions required for reaching rather than on the visual–perceptual features of the stimuli. Moreover, gait and postural control differ from hand reaching in many ways, including the automaticity of the motor processing, the underlying brain correlates, and the peripheral and central levels of functioning involved (muscle, vestibular, joint, skin, etc.).

Our current findings demonstrate that substantial cognitive resources are allocated to the seemingly simple decision of which hand to use for a reach. The conceptual and linguistic content of the task was held constant across conditions, since all displays presented the same concepts (in the 1-symbol display, only the target was presented); only the visual–perceptual features of the displays varied. Midline-crossing reaches and reaction times increased gradually with the cognitive load imposed by these visual–perceptual features. Accuracy nevertheless remained high across conditions, suggesting that participants prioritized response accuracy. Our findings thus suggest that substantial cognitive load is associated with hand choice, a finding consistent with a classic set of experiments conducted by Rosenbaum (1980). He reported reaction times during targeted reaching movements in which the choice of arm, direction, and extent of the movement was either pre-cued, and thus determined by the task, or left up to the participant. The rationale was that the cognitive decision process should be reflected in the reaction time: the more decisions required of the subject, the longer the reaction time. The study predicted that reaction time should directly mirror the time required to specify the values that were not pre-cued. The findings indicated that reaction times were longest when all information except arm choice was pre-cued, and shortest when all information except movement amplitude was pre-cued. Thus, the duration of the reaction time reflected the number of decisions that the participant still needed to make when the imperative cue was delivered. The results indicated that reaction time reflects the cognitive load required by the task and, further, that the need to select which arm to use introduced the most substantial cognitive load compared with the other movement parameters.

The important role of cognitive processes in determining the reaction time of simple motor behaviors was later confirmed by a series of seminal studies by Hening et al. (1988) that examined the time course of planning targeted isometric elbow joint movements. Participants were provided advance cues about target amplitude at different times relative to an imperative "go" signal. If the time between the target cue and the imperative signal was too brief, participants made default responses based on prior experience. With more time, however, they specified response amplitude with a gradual time course that depended on target amplitude. In a follow-up study (Favilla et al., 1989), participants were cued about two distinct response features: target amplitude and direction. As in the previous study, the targets were presented at random times (0–400 ms) before the imperative "go" stimulus. Importantly, target directions and amplitudes were either predictable (the simple condition) or unpredictable (the choice condition). In the simple condition, no choices needed to be made, and both direction and amplitude were specified accurately even at the shortest reaction times (100 ms). In the choice condition, however, the accuracy of amplitude and direction varied with the time between the precue and the imperative stimulus. At reaction times of less than 100 ms, neither amplitude nor direction was accurately specified, whereas with more time the proportion of correct responses gradually increased, with the most accurate performance occurring at intervals greater than 300 ms. This series of studies demonstrated that even in very simple targeted isometric tasks, the choice of direction and amplitude increased the cognitive load, and therefore the reaction time, required to specify accurate responses.
Together these studies by Rosenbaum and Ghez and colleagues indicate that the duration of reaction time depends on the cognitive requirements of the task, and that choosing which arm to use contributes substantially to this load and to the duration of reaction time.

In the current study, the cognitive load of the task included not only the choice of arm, direction, and distance for arm movements, but also the requirements for visual memory, visual search, and target selection from different arrays of potential targets. We found that increased reaction times in the reaching tasks corresponded to the increased cognitive loading of our stimulus conditions. This finding is congruent with previous studies that measured the speed and accuracy of responses to similar displays in individuals with Down syndrome or autism (Wilkinson and McIlvane, 2013), as well as the efficiency of visual search, measured with eye-gaze technologies, in typically developing children (Wilkinson et al., 2014). Those prior studies were deliberately designed for use with individuals with disabilities, which is in part why the task in the current study was quite simple. It is therefore of substantial interest that, although the task was quite easy for these adults (as indicated by their very high accuracy), we nonetheless observed the very same pattern of results across the display conditions. The current study was designed to begin with typically developing adults so that we could map out the influences of the visual–perceptual characteristics of the displays in isolation, providing normative data free of the influence of cognitive, motor, and linguistic impairment for future studies of clinical populations (Higginbotham, 1995). Indeed, the clustered versus distributed displays exerted significant influence over motor responses, in particular on the latency to initiate the response and the likelihood of inefficient contralateral reaches. That such small changes to the visual–perceptual characteristics of fairly simple displays could influence responding even in an adult sample attests to the power of these design features.

Cognitive load modulates hand-selection

Previous studies indexed task difficulty in terms of the number of actions and sets of skills required to complete a task (e.g., Mamolo et al., 2004). The current study instead manipulated the visual–perceptual features of the visual stimulus: the number and precision of the required actions were the same across tasks, as participants were asked simply to reach to a target. Task difficulty increased when distractors were added and the symbols were not clustered by their internal color. Under these circumstances, participants took longer to initiate reaches to the target than in the less difficult conditions (no distractors, or like-colored symbols clustered together). Critically, once the movement was initiated, task difficulty did not affect basic kinematics, as hand-path curvature did not vary across conditions.

We also found an increase in contralateral hand selection in the high-processing-load condition (i.e., the more difficult task), which supports the proposition of Coelho and colleagues (Coelho et al., 2013; Przybyla et al., 2013) that reaching involves an active and dynamic decision process built into the planning of the motor task. In the current study, as displays became more demanding, participants reached across the midline more often with both the left and the right hands. Under the control condition, where cognitive demands were minimal, participants reached with their left hand for targets on the left and their right hand for targets on the right, making few contralateral reaches. In contrast, the number of midline crossings was highest in the most demanding (distributed) display condition, a pattern that held for both hands. Przybyla et al. (2012) and Coelho et al. (2013) showed that across a large array of targets, energetic cost, trajectory deviations, reaction times, and other kinematic measures were higher for midline crossings (contralateral reaches) than for ipsilateral reaches. Our findings corroborated their conclusions for kinematic and reaction-time measures.

Previous studies have attributed hand-choice patterns to intrahemispheric attentional processing benefits, such as the proximity of the hand to the intended target (Gabbard and Helbig, 2004) and attentional biases resulting from initial processing of targets ipsilateral to the hand (Verfaellie and Heilman, 1990). In contrast, Carey et al. (1996) demonstrated that increased midline crossings were associated with kinematic costs such as peak velocity and distance, independent of the processing costs associated with cross-hemispheric communication. We found that participants made more contralateral reaches despite the kinematic costs of cross-midline reaches. Because the size and location of the targets did not change across conditions, it is reasonable to conclude that these choices were not affected by condition-related differences in mechanical costs. As accuracy remained high across conditions, the choices were unlikely to have been affected by condition-related error costs either. Instead, we suggest that cognitive resources are needed to assess the cost of reaching with each hand, a process that usually results in ipsilateral choices. When cognitive resources are increasingly consumed by processes such as visual memory, discrimination, and visual–spatial localization, the decision to make efficient ipsilateral reaches (i.e., to avoid midline crossing) appears to become compromised.

Limitations and future directions

There are several limitations to this early-stage study, which suggest a number of directions for future research. First, replication and extension of this work with larger samples is clearly needed. For example, we found increased contralateral reaches by both hands under the more difficult conditions, but the number of these reaches was still small, particularly for the left hand. Nevertheless, our effect-size estimates were large, which suggests that our findings, though drawn from a limited sample of participants, may be representative of the larger population. Second, beyond self-reports of handedness, a fuller characterization of the sample would allow a more detailed discussion of this finding; for example, the strength of hand preference and visual acuity might both contribute.

We demonstrated effects on both hand selection and hand performance in a restricted laboratory setting. Because the tasks were designed to simulate situations in which people use arrays of symbols to communicate, this first step opens up three directions. First, it is important to examine reaching behavior toward actual arrays of symbols rather than simulated ones; a possible avenue is to conduct the study in three-dimensional space, in which the hands are free to move in any direction. Second, it is critical to map out the influence of the visual–perceptual characteristics of the displays in real communication settings, which requires the presence of a communication partner, for instance during a snack or storybook-reading activity. Third, studies with participants with developmental and/or intellectual disabilities who use such arrays of symbols to communicate are needed to verify the clinical implications we speculate on here. Individuals with Down syndrome or autism are among those who might benefit from such communication supports; involving these clinical groups could thus inform our understanding of the role of cognitive processes in motor reaching tasks.

ACKNOWLEDGMENTS

This work was supported by the National Institutes of Health [R01HD059783], and a Penn State SSRI Level 1 Award. The second author is co-funded by the Penn State SSRI. The authors thank Christine Regiec, Emily Neumann, and Tara O’Neill for their assistance throughout the development of this research.

REFERENCES

1. Annett M (1972) The distribution of manual asymmetry. Br J Psychol 63(3):343–358.
2. Bishop DVM, Ross VA, Daniels MS, Bright P (1996) The measurement of hand preference: a validation study comparing three groups of right-handers. Br J Psychol 87(2):269–285.
3. Boisgontier MP, Beets IAM, Duysens J, Nieuwboer A, Krampe RT, Swinnen SP (2013) Age-related differences in attentional cost associated with postural dual tasks: increased recruitment of generic cognitive resources in older adults. Neurosci Biobehav Rev 37(8):1824–1837.
4. Brown SG, Roy EA, Rohr LE, Bryden PJ (2006) Using hand performance measures to predict handedness. Laterality 11(1):1–14.
5. Bryden PJ, Roy EA (2006) Preferential reaching across regions of hemispace in adults and children. Dev Psychobiol 48(2):121–132.
6. Bryden MP (1977) Measuring handedness with questionnaires. Neuropsychologia 15(4–5):617–624.
7. Calvert GA, Bishop DVM (1998) Quantifying hand preference using a behavioural continuum. Laterality 3(3):255–268.
8. Carey DP, Hargreaves EL, Goodale MA (1996) Reaching to ipsilateral or contralateral targets: within-hemisphere visuomotor processing cannot explain hemispatial differences in motor control. Exp Brain Res 112(3):496–504.
9. Coelho CJ, Przybyla A, Yadav V, Sainburg RL (2013) Hemispheric differences in the control of limb dynamics: a link between arm performance asymmetries and arm selection patterns. J Neurophysiol 109(3):825–838.
10. De Agostini M, Khamis AH, Ahui AM, Dellatolas G (1997) Environmental influences in hand preference: an African point of view. Brain Cognition 35(2):151–167.
11. Doyen AL, Duquenne V, Nuques S, Carlier M (2001) What can be learned from a lattice analysis of a laterality questionnaire? Behav Genet 31(2):193–207.
12. Fagard J, Dahmen R (2004) Cultural influences on the development of lateral preferences: a comparison between French and Tunisian children. Laterality 9(1):67–78.
13. Faul F, Erdfelder E, Lang A-G, Buchner A (2007) G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods 39(2):175–191.
14. Favilla M, Hening W, Ghez C (1989) Trajectory control in targeted force impulses. VI. Independent specification of response amplitude and direction. Exp Brain Res 75(2):280–294.
15. Gabbard C, Helbig CR (2004) What drives children's limb selection for reaching in hemispace? Exp Brain Res 156(3):325–332.
16. Gabbard C, Rabb C (2000) What determines choice of limb for unimanual reaching movements? J Gen Psychol 127(2):178–184.
17. Gabbard C, Helbig CR, Gentry V (2001) Lateralized effects on reaching by children. Dev Neuropsychol 19(1):41–51.
18. Gabbard C (1998) Attentional stimuli and programming hand selection: a developmental perspective. Int J Neurosci 96(3/4):205.
19. Ghez C, Hening W, Gordon J (1991) Organization of voluntary movement. Curr Opin Neurobiol 1(4):664–671.
20. Hening W, Vicario D, Ghez C (1988) Trajectory control in targeted force impulses. Exp Brain Res 71(1):103–115.
21. Henry FM, Rogers DE (1960) Increased response latency for complicated movements and a "memory drum" theory of neuromotor reaction. Res Q Exercise Sport 31(3):448–458.
22. Higginbotham DJ (1995) Use of nondisabled subjects in AAC research: confessions of a research infidel. Augment Altern Comm 11(1):2–5.
23. Hollerbach JM, Flash T (1982) Dynamic interactions between limb segments during planar arm movement. Biol Cybern 44(1):67–77.
24. Jeannerod M (1997) The cognitive neuroscience of action. Oxford: Blackwell.
25. Klar AJ (1996) A single locus, RGHT, specifies preference for hand utilization in humans. Cold Spring Harb Symp 61:59–65.
26. Leconte P, Fagard J (2006) Which factors affect hand selection in children's grasping in hemispace? Combined effects of task demand and motor dominance. Brain Cogn 60(1):88–93.
27. Levy J, Nagylaki T (1972) A model for the genetics of handedness. Genetics 72(1):117–128.
28. Mamolo CM, Roy EA, Bryden PJ, Rohr LE (2004) The effects of skill demands and object position on the distribution of preferred hand reaches. Brain Cogn 55(2):349–351.
29. Mamolo CM, Roy EA, Bryden PJ, Rohr LE (2005) The performance of left-handed participants on a preferential reaching test. Brain Cogn 57(2):143–145.
30. Mayer-Johnson R (1992) The picture communication symbols. Solana Beach, CA: Mayer-Johnson.
31. McManus IC, Bryden MP (1992) The genetics of handedness, cerebral dominance and lateralization. In: Rapin I, Segalowitz SJ, editors. Handbook of neuropsychology. Child neuropsychology, vol. 6. Amsterdam: Elsevier. p. 115–142.
32. Michel GF (1992) Maternal influences on infant hand-use during play with toys. Behav Genet 22(2):163–176.
33. Priest AW, Salamon KB, Hollman JH (2008) Age-related differences in dual task walking: a cross sectional study. J Neuroeng Rehabil 5:29–37.
34. Przybyla A, Good DC, Sainburg RL (2012) Dynamic dominance varies with handedness: reduced interlimb asymmetries in left-handers. Exp Brain Res 216(3):419–431.
35. Przybyla A, Coelho CJ, Akpinar S, Kirazci S, Sainburg RL (2013) Sensorimotor performance asymmetries predict hand selection. Neuroscience 228:349–360.
36. Rosenbaum DA (1980) Human movement initiation: specification of arm, direction, and extent. J Exp Psychol 109(4):444–474.
37. Sainburg RL, Ghez C, Kalakanis D (1999) Intersegmental dynamics are controlled by sequential anticipatory, error correction, and postural mechanisms. J Neurophysiol 81(3):1045–1056.
38. Schaafsma S, Riedstra B, Pfannkuche K, Bouma A, Groothuis TG (2009) Epigenesis of behavioural lateralization in humans and other animals. Philos Trans R Soc B 364(1519):915–927.
39. Scharoun SM, Bryden PJ (2014) Hand preference, performance abilities, and hand selection in children. Front Psychol 5:82–97.
40. Singh M, Manjary M, Dellatolas G (2001) Lateral preferences among Indian school children. Cortex 37(2):231–241.
41. Steenhuis RE, Bryden MP (1999) The relation between hand preference and hand performance: what you get depends on what you measure. Laterality 4(1):3–26.
42. Stoloff RH, Taylor JA, Xu J, Ridderikhoff A, Ivry RB (2011) Effect of reinforcement history on hand choice in an unconstrained reaching task. Front Neurosci 5:41.
43. Teasdale N, Bard C, Larue J, Fleury M (1993) On the cognitive penetrability of posture control. Exp Aging Res 19(1):1–13.
44. Verfaellie M, Heilman KM (1990) Hemispheric asymmetries in attentional control: implications for hand preference in sensorimotor tasks. Brain Cogn 14(1):70–80.
45. Wilkinson KM, McIlvane WJ (2013) Perceptual factors influence visual search for meaningful symbols in individuals with intellectual disabilities and Down syndrome or autism spectrum disorders. Am J Intellect 118(5):353–364.
46. Wilkinson KM, Carlin M, Thistle J (2008) The role of color cues in facilitating accurate and rapid location of aided symbols by children with and without Down syndrome. Am J Speech Lang Pathol 17(2):179–193.
47. Wilkinson KM, O'Neill T, McIlvane WJ (2014) Eye-tracking measures reveal how changes in the design of aided AAC displays influence the efficiency of locating symbols by school-age children without disabilities. J Speech Lang Hear Res 57(2):455–466.
48. Woollacott M, Shumway-Cook A (2002) Attention and the control of posture and gait: a review of an emerging area of research. Gait Posture 16(1):1–14.
