Author manuscript; available in PMC: 2012 Nov 1.
Published in final edited form as: Am J Speech Lang Pathol. 2011 Aug 3;20(4):288–301. doi: 10.1044/1058-0360(2011/10-0065)

Facilitating Children’s Ability to Distinguish Symbols for Emotions: The Effects of Background Color Cues and Spatial Arrangement of Symbols on Accuracy and Speed of Search

Krista M Wilkinson 1,2, Julie Snell 1
PMCID: PMC3472415  NIHMSID: NIHMS405421  PMID: 21813821

Abstract

Purpose

Communication about feelings is a core element of human interaction. Aided augmentative and alternative communication systems must therefore include symbols representing these concepts. The symbols must be readily distinguishable in order for users to communicate effectively. However, emotions are represented within most systems by schematic faces in which subtle distinctions are difficult to represent. We examined whether background color cuing and spatial arrangement might help children identify symbols for different emotions.

Methods

Thirty nondisabled children searched for symbols representing emotions within an 8-choice array. On some trials, a color cue signaled the valence of the emotion (positive vs. negative). Additionally, symbols were either organized with the negatively-valenced symbols at the top and the positive symbols on the bottom of the display, or the symbols were distributed randomly throughout. Dependent variables were accuracy and speed of responses.

Results

The speed with which children could locate a target was significantly faster for displays in which symbols were clustered by valence, but only when the symbols had white backgrounds. Addition of a background color cue did not facilitate responses.

Conclusions

Rapid search was facilitated by a spatial organization cue, but not by the addition of background color. Further examination of the situations in which color cues may be useful is warranted.

Keywords: Aided AAC, Color Cuing, Display Construction


Individuals who have communication support needs often use visual symbols as part of an aided augmentative and alternative communication (AAC) system. One significant challenge facing such individuals and their partners is the efficiency and accuracy of message preparation. As Beukelman and Mirenda (2005) noted, the rate of message preparation for aided symbol use is estimated to be approximately 15 words per minute, far below the rate of 150–250 words per minute in spoken or signed communication. A number of techniques for enhancing efficiency of symbol production have been suggested, including pre-stored formulas for common phrases (“how are you”) and message prediction on high-technology devices that offers likely options during message construction (see Beukelman & Mirenda, 2005, for a review).

Another means of enhancing use of aided AAC involves various physical aspects of the system itself. For instance, poorly positioned aids that require effort on the part of the communicator can lead to reduced rates of communication due to fatigue or frustration, barriers that have little to do with the motivation or capabilities of the user (Higginbotham, Shane, Russell, & Caves, 2007; McEwen & Lloyd, 1990). In this research we examined how two visual-perceptual aspects of the visual display might facilitate the efficiency with which preschool children could find a target symbol. Specifically, we examined the influence of a background color cue and the spatial organization of the symbols on search for symbols representing emotional states. The rationales for each decision are presented in the following sections.

Studies of visual-spatial characteristics of displays and aided AAC system design

Wilkinson and Jagaroo (2004) argued that because the channel for much of aided AAC is visual, construction of these displays might benefit from consideration of principles of visual processing. Their argument was that displays that are compatible with the way humans perceive and interpret visual information might be more facilitative of functional use than displays that are poorly matched or incompatible with human visual information processing. They suggested that the extensive literature in visual cognitive science and visual cognitive neuroscience might offer direction about the perceptual features that might be relevant for aided AAC.

The value of laboratory studies in clinical disciplines

The current study used an experimental laboratory task in order to map out basic influences of visual-perceptual task demands, within a context that controlled and minimized the social-communicative task demands. Laboratory studies have historically provided an integral foundation for building an evidence base for clinical practice, and have been used to examine various cognitive phenomena of relevance to aided AAC (e.g., Mizuko & Esser, 1991; Mizuko, Reichle, Ratcliff, & Esser, 1994; Wagner & Jackson, 2006). The logic is that if the effect is found in the laboratory context, we can then examine if it is sufficiently robust to withstand the additional demands of a social-communicative task. Without conducting the research in the controlled laboratory environment, however, it is impossible to isolate the specific influence of visual-perceptual demands on behavior separate from the social-communicative demands.

The task in the current study evaluated the influence of visual-perceptual cues on the speed and accuracy with which participants could find a target. Both dependent measures are fundamental components of successful aided communication. It is critical to be able to select one’s intended message accurately and avoid communication breakdowns, and effective communication is fostered when users can do so efficiently. Thus, while not communicative in and of themselves, behaviors of visual search are relevant to functional use of aided AAC.

Studies of color cuing and spatial organization

An important first step in examining Wilkinson and Jagaroo’s (2004) proposal is to examine whether visual-perceptual factors identified in basic science disciplines might extend to the types of clinical materials used in aided AAC. Wilkinson and her colleagues (2006, 2008) conducted a series of proof-of-concept studies in which they examined whether color cues might influence responding with aided AAC picture symbols. They chose stimulus color because it has been identified as a powerful influence on behavior in visual cognitive science, but also because it is readily manipulated in many of the commercially available symbol sets. In the first study, Wilkinson et al. (2006) examined whether the internal color of a stimulus might influence visual search in nondisabled preschool children in ways predicted from the cognitive sciences. Speed and accuracy of search were facilitated in arrays containing eight symbols that all had unique colors, as compared to when all eight symbols shared a color (all were red fruits: apple, cherry). Search was equally facilitated when only subsets of symbols shared internal color, that is, when four were red and four were yellow. This effect was found both for stimuli like shapes (triangles, starburst patterns; these are the stimuli often used in cognitive science) and also for meaningful line drawings used in AAC. These findings supported the argument that principles of visual processing may apply with meaningful clinical materials.

In their second study, Wilkinson et al. (2008) explored further the earlier finding that search can be facilitated by having subsets of symbols share color. They examined whether the spatial arrangement of these like-colored symbols might also be used to highlight relationships. Children with and without Down syndrome were presented with displays in which the symbols that shared a color were either clustered together in a small group or distributed throughout a display. For all participants, speed of search was enhanced when the spatial arrangement included clustering by symbol-internal color. Accuracy was also enhanced for the participants with Down syndrome.

Color-coding of the symbol background

While manipulating the internal color of symbols seems to facilitate search, there are many concepts for which the internal symbol color cannot be used to provide such a cue. For instance, it would not make sense to violate semantic relationships just to cluster symbols by color; clustering a red ball with a red apple might not be an appropriate arrangement. Many items come in only certain colors (horses are not typically green or purple) and one would not violate these natural color constraints in order to provide a perceptual cue. To exploit perceptual dimensions in construction of aided displays, it will in many cases be necessary to seek perceptual supports other than symbol-internal color cues.

One alternative to manipulating the internal color of a symbol is to provide the color cue in the symbol background. Symbols of one type might be placed on one color background while symbols of another type are placed on a different color. Such background color cuing has been widely adapted from earlier recommendations as a means to distinguish among word-class categories (such as actor, action, object, descriptor) for users of aided AAC who are beginning to construct syntax (e.g., Goossens’ et al., 1999). Wilkinson and Hennig (2009) suggested that background colors might also be useful for helping differentiate symbols of different semantic classes. However, initial studies conducted with nondisabled preschool children examining the use of symbol background color cues for distinguishing among animals of different taxonomic categories (Wilkinson & Coombs, 2010) or among fruit and vegetable symbols (Thistle & Wilkinson, 2009) have reported the surprising outcome that background color cuing either has no facilitative effect or can interfere with performance, particularly with younger children. Thus, the role of background color cuing seems less straightforward than that of internal color cuing. The current study was designed to provide an initial step for systematic research on this topic.

Emotion symbols as the concept of choice for evaluating color and spatial cues

We chose concepts related to emotions for this study, in part because communication about feelings is a core element of interaction and symbols for these concepts are essential basic vocabulary (see, e.g., Fallon, Light, & Page, 2001). We did so for three reasons, which are detailed in the following sections: (a) emotion concepts can be difficult to represent visually, in general, (b) research in AAC has suggested that emotion representations in some widely-used symbol sets can be difficult for typically developing children to identify, and (c) many individuals with disabilities (many of whom might use aided AAC supports) appear to have particular difficulty discriminating among visual representations of emotional states. For these reasons, emotion symbols are potentially excellent candidates for the use of perceptual cues to support discrimination. However, simple manipulations of symbol-internal color would be inappropriate or inadvisable for these symbols, so an alternative means of perceptual cuing is necessary.

Representing emotions in pictures/picture symbols

If users of aided AAC are to communicate effectively about their emotions, it is essential to have representations that are readily distinguishable. Some emotions have specific facial features that make them distinguishable (Ekman & Friesen, 1975; see Visser, Alant, & Harty, 2008 for a discussion within the context of AAC); happy usually involves the corners of the mouth being turned upwards, whereas sad and afraid involve the corners of the mouth pointing down. Two other distinguishing features are the upper portion of the face, such as eyes, eyebrows, and forehead, and the middle portion of the face including the nose and cheeks.

Despite these distinguishing features, many emotions share similar characteristics. Consider the emotions of surprised and scared. Both typically involve raised eyebrows, open eyes, and an open mouth (Visser et al., 2008). Such subtle facial distinctions are difficult to represent visually, even in photographs. Many aided AAC symbol sets represent emotions by schematic faces (line drawings) with a small set of core features arranged in different ways to represent the emotional state. Unless clinicians choose to have only one or two emotion symbols on any page, they will necessarily create displays in which some symbols are distinguished only by these small changes to core features. Visual-perceptual cues may be a promising means of enhancing discrimination among these often highly similar representations.

Emotion symbol recognition in aided AAC

In one of the only studies examining emotion symbol distinction directly within the context of AAC, Visser and colleagues (2008) examined how 26 typically developing children between the ages of 48 and 59 months perceived line drawings representing four emotions: happy, sad, afraid, and angry. They sought to determine which of four possible symbols for each concept best depicted each emotional state; thus, there were four symbols representing happy, four representing sad, four representing afraid, and four representing angry. All 16 symbols were presented at once. Participants were asked questions like: “Peter is going to play at his friend’s house. He is Happy. Show me the Happy face.” The child pointed to the emotion on the display that he or she felt best answered the question asked by the researcher. A total of 12 questions was asked of each participant.

Children were more consistent in selecting from among the four symbols for happy than for any of the other three emotions. Generally, the percentage of out-of-emotion selections for sad, afraid, and angry was 15–26%. While certain symbols were selected more often than others within each emotion state, all four of the different symbols for each emotion were chosen at some point during the study. The study highlights the challenge of discriminating emotions represented through icons, even for typically developing children.

Emotion symbol recognition in individuals with disabilities

As Moore (2001) has reviewed, perceiving or interpreting line drawings or other stimuli depicting emotions has been demonstrated repeatedly to be a particular challenge for individuals with various developmental disabilities, many of whom will be users of aided AAC and the associated symbols. Harwood, Hall, and Shinkfield (1999) reported that 12 individuals with intellectual disabilities performed significantly worse than matched peers on identifying four emotions (anger, fear, disgust, surprise) depicted in moving and static photographs. McAlpine, Kendal, and Singh (1991) examined how well 373 individuals ranging in age and degree of intellectual disability were able to match six emotions depicted within the normed photograph set from Ekman and Friesen (1975) with the emotion depicted in a story that was read to them. The stimuli were photographs representing the emotions happy, sad, angry, fearful, surprised, and disgusted. Matching performance was significantly poorer in individuals with mild intellectual disability than in typically developing peers, and was less accurate still in individuals with more severe intellectual disability; similar findings have been reported by others (Simon, Rosen, & Ponpipom, 1996). When looking at specific diagnostic categories, Barisnikov, Hippolyte, and Van der Linden (2008) reported that, despite important strengths, individuals with Down syndrome showed difficulty with neutral and surprised expressions, judging them more positively than children matched on receptive vocabulary, while Gross (2004) reported that individuals with autism attend to different features when judging emotion than matched peers with and without other forms of developmental disability.

These studies suggest that discrimination of representations for emotions may be particularly difficult for individuals with developmental disabilities, many of whom can benefit from aided AAC. Considered together with the challenges of representing emotions in general, particularly through line-drawing symbols, these findings reinforce the importance of providing maximal support for users to discriminate accurately between the various symbols.

Rationale and research aims

Given the critical role of emotion expression in functional communication, the challenges of representing emotions in static picture symbols, and the difficulties children and individuals with disabilities have in identifying line drawings representing emotions, emotion symbols were selected as the symbol type of choice for this study. Manipulations of symbol-internal color would be inappropriate for representations of emotional states that are depicted through line drawings of faces. Cues other than symbol-internal color are therefore necessary to help users distinguish the symbols readily and accurately. Background color cuing and/or spatial arrangement offer potential means of doing so.

The term used in research on affective or emotion recognition for the positive or negative content of emotions is “valence,” and the valence of the emotions depicted has been found to be one of two primary determinants of many outcomes related to affective behavior (cf. Lang, Bradley, & Cuthbert, 2008). We sought to examine whether adding a background color cue to symbols with positive versus negative valence would aid typically developing children in accurately and rapidly finding a line drawing that matched the emotional expression depicted in a sample photograph. We also sought to examine whether these same dependent variables would be influenced by a spatial cue of clustering the symbols according to their valence.

To do this, we created four display types, all of which contained the identical set of stimuli but which varied in the number of cues (no cues, spatial, color, or both). Figure 1 illustrates these displays. The display in the top panel has neither a color cue (the background is white) nor a spatial one (the symbols are not arranged by valence); this is the “no-cue” condition. Each of the two middle panels illustrates a display with one cue added; on the left side, only a spatial cue is present (the symbols are clustered by valence but have white backgrounds) and on the right side only a color cue is present (the symbols are color coded by background but are not arranged by valence). The lower panel illustrates the display when both a spatial cue (clustering by valence) and a color cue are present.

Figure 1. Illustrations of the four types of displays and their level of cue

We expected that search would be least efficient and accurate when no cues were present. We anticipated that some facilitation would be offered by adding a single cue (either spatial clustering or background color), and that the most efficient and accurate search would occur when both color and spatial layout offered cues.

Method

Participants

Thirty typically developing children from three preschools served as participants. Use of typically developing children ensured that effects observed were due to the conditions rather than to the presence of physical or intellectual disabilities (Higginbotham, 1995). The importance of extending the research to children with disabilities will be considered in the discussion. Participants were included in the study if they: (a) were between the ages of 3;8 and 6;1 (years;months), (b) received a standard score of 85 or above on a standardized test of receptive vocabulary (i.e., a score no more than 1 standard deviation below the mean on the Peabody Picture Vocabulary Test–IV, or PPVT-IV; Dunn & Dunn, 2007), (c) were capable of using a mouse to access the computer, as demonstrated through performance on computer-accessed research tasks prior to the current study, and (d) had vision and hearing abilities sufficient to perform the research task, as judged by accurate performance on the other research tasks collected prior to this session. Thirty-three participants were originally recruited, but one participant scored more than one standard deviation below the mean on the PPVT-IV and two others chose not to complete the PPVT-IV. Although there was no concern about the language development of these two participants either by teacher report or experimenter observation, their data were omitted from analysis. The average standard score on the PPVT-IV for the 30 participants was 118 (range = 89–152). The mean chronological age was 4 years, 10 months (range = 3 years 8 months to 6 years 1 month). Sixteen participants were female and fourteen were male.

General Procedures

The study consisted of a single experimental session that contained 16 trials presented to participants through a pre-programmed computer application designed for this and related research (Wallace, 2010, from Dube, 1991). The experimental session took place in a quiet room in the child’s school, and generally took approximately 5 minutes to complete. During the session, the adult sat just behind the child and offered general encouragement to continue, if necessary. Participants received a sticker at the end to thank them for their time.

Materials

Each trial in the experimental session consisted of a single photograph sample that indicated the target emotion, followed by a display of eight line-drawing symbol choices interspersed with eight blank squares, thus simulating a 4×4 array with 8 filled spaces. The line drawings were obtained from the Picture Communication Symbols set (PCS; Mayer-Johnson, 1992). All stimuli were 180×180 pixels, corresponding to a square of 2 inches by 2 inches. A photograph served as the sample, rather than a spoken word, in order to reduce any influence of variability in oral comprehension.
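
To make the display structure concrete, the following Python sketch builds a 4×4 grid of this kind, with eight symbol cells and eight blanks, arranged either clustered by valence (negative symbols restricted to the top rows and positive symbols to the bottom rows, as in Figure 1) or distributed at random across all sixteen cells. It is an illustration only, not the software used in the study, and the assignment of happy, loving, surprised, and silly as positive and angry, sad, scared, and bored as negative is an assumption made for the example rather than a grouping stated in the article.

```python
import random

# Emotions used in the study. The positive/negative grouping of the four
# non-cardinal emotions (surprised, silly, scared, bored) is an assumption
# made for this illustration; the article does not spell it out.
NEGATIVE = ["angry", "sad", "scared", "bored"]
POSITIVE = ["happy", "loving", "surprised", "silly"]
BLANK = None  # an empty cell in the 4x4 grid


def build_display(clustered: bool, rng: random.Random) -> list:
    """Return a 4x4 grid (list of four rows) holding 8 symbols and 8 blanks.

    clustered=True  -> negative symbols somewhere in the top two rows,
                       positive symbols somewhere in the bottom two rows.
    clustered=False -> all 8 symbols scattered over the full 16 cells.
    """
    if clustered:
        top = NEGATIVE + [BLANK] * 4      # the 8 cells of the top two rows
        bottom = POSITIVE + [BLANK] * 4   # the 8 cells of the bottom two rows
        rng.shuffle(top)
        rng.shuffle(bottom)
        cells = top + bottom
    else:
        cells = NEGATIVE + POSITIVE + [BLANK] * 8
        rng.shuffle(cells)
    return [cells[i:i + 4] for i in range(0, 16, 4)]


if __name__ == "__main__":
    rng = random.Random(0)
    for row in build_display(clustered=True, rng=rng):
        print([cell if cell is not None else "--" for cell in row])
```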

Eight emotions were presented on every trial display: happy, angry, sad, surprised, afraid/fearful, loving, bored, and silly. Each emotion served as the correct target on two trials in the session. The first five emotions were selected because of consensus in the literature on their centrality in human emotion perception and their widespread use in previous studies within and outside of AAC (e.g., Harwood et al., 1999; McAlpine et al., 1991; Visser et al., 2008). The other three were selected because, clinically, expressions of love, boredom, and silliness are commonly included in aided AAC displays. As described next, four emotions (happy, loving, angry, sad) were considered cardinal or central emotions, and four (surprised, afraid, bored, silly) were non-cardinal emotions.

The sample that indicated to the participants which target symbol they were seeking on any given trial was a digital photograph presented on a separate screen before the choice array appeared. A different photograph served as the sample on every trial; thus, there were 16 photographs used, two for each emotion (this was done to reduce potential learning/order effects; the justification is provided below). When possible, we used photographs from the International Affective Picture System (IAPS; Lang, Bradley, & Cuthbert, 2008), which is a database of photographs for which norms have been obtained on several dimensions across a large sample (including both adults and children) and within a number of studies. One dimension on which ratings were normed is the emotional valence of the photograph. Ratings ranged from 1 (negatively valenced) to 9 (positively valenced). From this database we obtained one photograph for each of the four cardinal emotions, defined as emotions considered in the literature to be central (see introduction) and confirmed as strong in valence in IAPS ratings; two of these emotions were positive (happy, IAPS #2045, valence = 7.87, and loving, IAPS #2550, valence = 7.77) and two were negative (angry, IAPS #2100, valence = 3.85, and sad, IAPS #2301, valence = 2.78). From this database we also obtained photographs for two emotions whose valences were intermediate: fearful/scared (IAPS #2458, valence = 4.69) and bored (IAPS #2101, valence = 4.49). No valence ratings were available for surprised and silly because there was no representative photograph in the IAPS; however, neither is treated in the literature as a cardinal emotion, and thus we considered these non-cardinal emotions.

A second exemplar of each emotion that had only a single IAPS photograph was obtained through an internet search by a research assistant, who also obtained the two necessary photographs for surprised and silly. All photographs obtained from the internet were subjectively judged by members of the first author’s research laboratory (a group of 8 undergraduate and graduate students that met on a weekly basis) to be similar to the IAPS photographs. The use of a mix of IAPS and internet sources was deemed appropriate for the purpose of this research, which was to evaluate the influence of different arrays of PCS symbols on efficiency of search. Furthermore, in actual use there will be variations among the exemplars of emotions seen by users of AAC; thus, the variations among our samples reflect an ecological reality of the ultimate behaviors we sought to simulate.

Because the IAPS requests that users not publish the photographs, and the remainder of the stimuli were taken from the internet, these images are not reproduced here. To provide examples for readers, a volunteer who was not in any of the actual photographs replicated three of the photographs that were used as samples (bored, silly, and surprised); these are illustrated in Figure 2. The symbols on the choice array were obtained from options available for the targeted emotions in the Picture Communication Symbols dictionary (PCS; Mayer-Johnson, 1992); the three line-drawing symbols for the emotions depicted in Figure 2 are also illustrated there.

Figure 2. Model depicting examples of the content of photographs used as samples and PCS used in the choice array during experimental task

Session Structure

The 16 trials in the experimental session were pre-programmed into the computer software program that controlled stimulus presentation and recorded accuracy and latency of response, as well as the response that the child made when there was an error. Each trial began with the single sample photograph that represented the emotion being targeted on that trial. When the participant clicked on the sample with the mouse, it disappeared from view and was replaced by the choice array containing the 8 PCS symbols.

Feedback

When the child chose the appropriate target symbol on a trial, the program produced a sound to signal that the response was correct. When an incorrect symbol was chosen, no sound was produced. Although having no feedback whatsoever would have been ideal, this task was quite difficult for children and we were concerned that they might be unwilling to complete the session without some form of ongoing reinforcement. Further, the likelihood that feedback on earlier trials might influence later outcomes (due to learning) seemed minimal for two reasons: (a) each emotion served as the target on only two trials in the session – that is, “happy” was the target on two of the 16 trials, and thus any reinforcement on the earlier trial would directly affect only one subsequent trial, and (b) different photographs served as the samples on the two trials for any given emotion – that is, the photograph depicting a happy face on the first trial was not the same photograph presented on the second trial, and thus reinforcement of responding to the photograph from the first trial would have to generalize to the second.

Order of Presentation

The 16 trials of the experimental session comprised four trials of each of the four display types illustrated in Figure 1. In other words, each participant responded on four trials with no perceptual cue in the display (top of Figure 1), four with only a spatial cue (left-hand side of the middle panel), four with only a color cue (right-hand side of the middle panel), and four with both spatial and color cues (bottom panel). The order of the different display types was held constant across all participants, such that all participants received the session in the same order. However, the trial order was carefully balanced to minimize the possibility of order effects influencing the comparison across trial types. Table 1 presents the structure and order of each of the 16 trials. As Table 1 illustrates, the cue conditions were intermixed so that an equal number of trials with white versus colored backgrounds appeared in the first and second halves of the session (four of the first 8 trials had white backgrounds and four had color, and the same for the second 8 trials), and an equal number of trials with clustered versus distributed arrangements appeared in the first and second halves of the session (four of the first 8 trials had symbols clustered by valence and four had symbols distributed, and the same for the second 8 trials). Thus, the distribution of trial types was balanced across the first and second halves of the session, and equally so across the trial types (a simple check of this balancing is sketched after Table 1). This balancing reduced both the likelihood of learning and the possibility of an order effect that would selectively benefit one of the four trial types over another.

Table 1.

Order of trials, color condition, and spatial arrangement

Trial Emotion Color Condition Spatial Arrangement
1 Happy color distributed
2 Angry white distributed
3 Loving color clustered
4 Bored white distributed
5 Surprised white clustered
6 Angry color distributed
7 Scared white clustered
8 Sad color clustered
9 Loving white clustered
10 Bored color distributed
11 Silly white distributed
12 Scared color clustered
13 Sad white clustered
14 Happy white distributed
15 Surprised color distributed
16 Silly color clustered
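
As a concrete check of the balancing described above, the short sketch below tallies the Table 1 schedule and verifies that each display type occurs four times, that color and spatial conditions are split evenly across the two halves of the session, and that each emotion serves as the target exactly twice. Only the trial list itself comes from Table 1; the Python representation is ours.

```python
from collections import Counter

# (emotion, color condition, spatial arrangement), transcribed from Table 1.
TRIALS = [
    ("happy", "color", "distributed"),     ("angry", "white", "distributed"),
    ("loving", "color", "clustered"),      ("bored", "white", "distributed"),
    ("surprised", "white", "clustered"),   ("angry", "color", "distributed"),
    ("scared", "white", "clustered"),      ("sad", "color", "clustered"),
    ("loving", "white", "clustered"),      ("bored", "color", "distributed"),
    ("silly", "white", "distributed"),     ("scared", "color", "clustered"),
    ("sad", "white", "clustered"),         ("happy", "white", "distributed"),
    ("surprised", "color", "distributed"), ("silly", "color", "clustered"),
]

# Each of the four display types appears on exactly four trials overall.
display_types = Counter((color, spatial) for _, color, spatial in TRIALS)
assert all(count == 4 for count in display_types.values()), display_types

# Within each half of the session, color vs. white backgrounds and clustered
# vs. distributed arrangements each occur on four of the eight trials.
for half in (TRIALS[:8], TRIALS[8:]):
    assert Counter(color for _, color, _ in half) == {"color": 4, "white": 4}
    assert Counter(spatial for _, _, spatial in half) == {"clustered": 4, "distributed": 4}

# Each emotion serves as the target exactly twice across the session.
assert all(count == 2 for count in Counter(e for e, _, _ in TRIALS).values())
print("Table 1 schedule is balanced as described in the text.")
```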

Original Study Group and Systematic Replications

The study consisted of an initial study group (n = 10) and two further study groups (n = 11 and 9, respectively). We will refer to these as the original study group and the two systematic replication groups, respectively. The three groups were matched in age and language status (see below) and all procedures for these groups were identical with the sole exception of the nature of the color cuing that was provided. We therefore considered the two additional study groups to offer systematic replication of the procedures from the original group, and will refer to them as such throughout the paper.

The reason that the two additional study groups cannot be compared directly to the original study group is that the replications were not planned a priori. Rather, after the data were obtained from the 10 original participants it became clear that further examination would be necessary. Specifically, we saw a lack of facilitation when there was a color cue in the background of the symbols, irrespective of the spatial layout. Upon observing this, we considered two possible explanations: either the addition of the color cue truly did not produce the expected facilitation, or saturating the entire symbol background inadvertently reduced the color cuing effect because the physical contrast between the black lines of the symbol and its background was reduced. It seemed necessary to examine these alternatives before drawing any conclusions about the effect (or lack of effect) of background color cuing. All procedures were held constant across the systematic replications. The only dimension that varied was the nature of the color cuing provided in the displays.

Participants for Original Group and Systematic Replications

The division of the total participant group (n = 30) into three cohorts reduced the number of participants within each condition. This limited the statistical power of the study to detect small effects, a limitation we consider in the discussion. Nonetheless, prior work has more than once demonstrated considerable effect sizes for similar structural manipulations (η = .55 to .82) with participant groups similar in size to those in our replications (Wilkinson et al., 2006, 2008; see also Wilkinson & Coombs, 2010). Further, because we seek to examine structural influences on behaviors of relevance to clinical outcomes, we are primarily interested in detecting larger effects, as statistically small effects would likely be of limited interest for translation to the clinic. Finally, as we report in this paper, the outcomes were quite consistent across the three groups. Thus, the replication of the same pattern of outcomes across three independent groups provides converging evidence for the reliability of the findings.

The 10 participants who underwent the original condition (Group 1) had a mean chronological age of 4 years 6 months, and mean and median standard scores on the PPVT-IV of 112 and 114, respectively. Five participants were female and five were male. Eleven participants contributed data for the first systematic replication (Group 2). Their mean chronological age was 4 years 10 months, and mean and median standard scores on the PPVT-IV were 121 and 118, respectively. Six of these participants were female and five were male. Nine participants underwent the second systematic replication (Group 3). Their mean chronological age was 4 years 11 months, and mean and median standard scores on the PPVT-IV were 121 and 118, respectively. Five participants were female and four were male. One-way ANOVAs were conducted to determine whether chronological age differed across the groups; the ANOVA confirmed that the groups did not differ on this measure, F(2,27) = .982, p = .387. Similarly, ANOVA confirmed that the groups did not differ on PPVT-IV score, F(2,27) = 1.21, p = .313.
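
A minimal sketch of this group-comparison check, using scipy's one-way ANOVA, is shown below. The per-participant age and PPVT-IV values are placeholders invented for illustration, since the individual scores are not reported; only the structure of the analysis follows the description above.

```python
from scipy import stats

# Placeholder per-participant values (ages in months and PPVT-IV standard
# scores); the individual scores are not published, so these lists only
# illustrate the structure of the check. Group sizes match the study (10, 11, 9).
ages = {
    "group1": [52, 55, 49, 60, 58, 51, 56, 53, 54, 50],
    "group2": [57, 60, 54, 62, 59, 55, 61, 58, 56, 63, 52],
    "group3": [60, 58, 63, 55, 61, 59, 57, 62, 56],
}
ppvt = {
    "group1": [110, 114, 108, 120, 115, 105, 118, 112, 109, 111],
    "group2": [118, 125, 116, 130, 121, 117, 128, 119, 115, 122, 120],
    "group3": [119, 124, 117, 129, 120, 118, 126, 121, 116],
}

for label, groups in (("chronological age (months)", ages), ("PPVT-IV", ppvt)):
    result = stats.f_oneway(groups["group1"], groups["group2"], groups["group3"])
    # With n = 10, 11, and 9, the degrees of freedom are (2, 27), as in the article.
    print(f"{label}: F(2, 27) = {result.statistic:.3f}, p = {result.pvalue:.3f}")
```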

Stimulus Conditions for Systematic Replication

For the original group of participants (Group 1), the 8 trials that had a color cue involved a fully saturated background, as illustrated in Figure 1. As noted, the remaining 8 trials in the session had no color cuing. To our surprise, these first participants showed no facilitation on the trials with the color cuing. It seemed necessary to probe further to determine whether this was due to a reduction in the physical contrast between the black lines demarcating the symbol and its background once that background was colored rather than white.

To evaluate this alternative explanation of the findings in Group 1, the first systematic replication altered the cue from saturation of the entire symbol background to a colored border around the outside of the symbol, illustrated in the left panel of Figure 3. This preserved the contrast between the black lines of the symbol and the white of its immediate background while maintaining the color cue as a border around the edge. The 8 trials with white backgrounds remained identical to those seen by Group 1, and all other procedures were also identical.

Figure 3. Illustration of the border and page background cue conditions

The second systematic replication removed the color from the symbol altogether and presented it as a page backdrop on the clustered color trials. As illustrated in the right-hand side of Figure 3, the entire top half of the page was colored red and contained the negatively valenced symbols, and the entire bottom half was colored blue and contained the positively valenced symbols. Because this division was only possible for the clustered spatial arrangement (when symbols are distributed, a half-page backdrop cannot signal valence), these participants saw a border cue on the distributed color cue trials. The 8 trials with white backgrounds were identical to those seen by the other groups.

Dependent measures and analysis

For all participants, the dependent measures were accuracy and latency to respond. Accuracy was calculated as the percentage of correct selections. Latency was calculated as the median latency for each session, on correct trials only. The median was used to reduce the influence of single trials with outlying response times (see Wilkinson et al., 2006).

Analysis began with a description of performance on each of the eight emotion symbols. The main research questions were then addressed through inferential analysis, which concerned the effect of adding different perceptual cues relative to performance when no color or spatial cues were available. Thus, as illustrated in Figure 1, there was a single display condition in which the display offered no cue, two display conditions in which one cue was available (either a spatial cue with no color, or a color cue with no spatial arrangement), and one final display condition in which two cues were available (both the color and the spatial cue). For this reason, ANOVA was not considered the optimal approach, as the relative weight of cuing was not evenly distributed across these four conditions (there were 0, 1, 1, or 2 cues available). Instead, the following specific contrasts were conducted via t-test: performance on (a) no cue versus spatial cue, (b) no cue versus color cue, and (c) no cue versus both color and spatial cues. Because three contrasts were conducted for each set of analyses, the p-value criterion was adjusted from .05 to .017 (.05/3) to correct for error introduced by multiple comparisons.
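
The sketch below illustrates how the dependent measures and planned contrasts described above can be computed from trial-level data. The tabular layout (one row per trial with participant, condition, correctness, and latency columns) and the synthetic data are assumptions for illustration; the analysis logic follows the text, computing percent correct and the median latency on correct trials per participant and condition, then comparing each cued condition against the no-cue condition at the adjusted criterion of .05/3 ≈ .017. The article does not state the exact t-test variant; paired tests are assumed here given the within-subject design.

```python
import numpy as np
import pandas as pd
from scipy import stats

ALPHA = 0.05 / 3  # per-comparison criterion for the three planned contrasts


def summarize(trials: pd.DataFrame) -> pd.DataFrame:
    """Per participant and condition: percent correct and median latency on correct trials."""
    accuracy = trials.groupby(["participant", "condition"])["correct"].mean() * 100
    latency = (trials[trials["correct"] == 1]
               .groupby(["participant", "condition"])["latency_s"].median())
    return pd.DataFrame({"accuracy_pct": accuracy,
                         "median_latency_s": latency}).reset_index()


def planned_contrasts(summary: pd.DataFrame, measure: str) -> None:
    """Paired t-tests comparing each cued condition with the no-cue condition."""
    wide = summary.pivot(index="participant", columns="condition", values=measure)
    for cue in ("spatial", "color", "both"):
        paired = wide[["no_cue", cue]].dropna()
        result = stats.ttest_rel(paired["no_cue"], paired[cue])
        flag = "significant" if result.pvalue < ALPHA else "n.s."
        print(f"{measure}, no cue vs. {cue}: "
              f"t = {result.statistic:.2f}, p = {result.pvalue:.3f} ({flag})")


if __name__ == "__main__":
    # Synthetic trial-level data in the assumed layout: 4 trials per condition.
    rng = np.random.default_rng(0)
    rows = [{"participant": pid, "condition": cond,
             "correct": int(rng.random() < 0.75),
             "latency_s": float(rng.gamma(4.0, 2.5))}
            for pid in range(10)
            for cond in ("no_cue", "spatial", "color", "both")
            for _ in range(4)]
    summary = summarize(pd.DataFrame(rows))
    planned_contrasts(summary, "median_latency_s")
    planned_contrasts(summary, "accuracy_pct")
```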

Results

Descriptive analysis of responding by concept/symbol

Overall, participants selected correctly on a mean of 72% of the trials, and took 9.8 seconds to produce the response. Table 2 presents the accuracy and latency for each symbol, in each of the three groups. Because the latency is only calculated from correct trials, the number of trials that contribute to the latency depends on the accuracy. Thus, the number in parentheses in the latency columns represents the number of trials contributing to the median response time.

Table 2.

Mean Accuracy and Median Latency for Each Symbol, by Group

                                 GROUP 1               GROUP 2               GROUP 3
Category           Symbol        Accuracy  Latency     Accuracy  Latency     Accuracy  Latency
Cardinal positive  loving        100.00%   8.97 (20)   90.00%    8.09 (16)   72.22%    2.76 (13)
                   happy         90.00%    11.27 (18)  77.27%    11.73 (16)  72.22%    9.26 (13)
Cardinal negative  angry         85.00%    12.17 (17)  77.27%    10.60 (15)  77.78%    5.33 (14)
                   sad           75.00%    14.07 (15)  75.00%    11.79 (13)  66.67%    3.79 (12)
Intermediate       bored         90.00%    14.09 (18)  75.00%    6.90 (14)   44.44%    3.22 (8)
                   surprised     70.00%    11.08 (14)  90.00%    10.12 (16)  83.33%    4.37 (15)
                   silly         85.00%    18.30 (17)  70.00%    17.71 (12)  55.56%    4.30 (10)
                   scared        45.00%    15.75 (9)   30.00%    18.39 (6)   38.89%    16.14 (7)

Overall mean                     80.00%    13.21       73.07%    11.92       63.89%    6.15
Cardinal positive                95.00%    10.12       83.64%    9.91        72.22%    6.01
Cardinal negative                80.00%    13.12       76.14%    11.20       72.22%    4.56
Bored/surprised                  80.00%    12.59       82.50%    8.51        63.89%    3.80
Silly/scared                     65.00%    17.02       50.00%    18.05       47.22%    10.22

Note. Latency is in seconds; the value in parentheses is the number of correct trials contributing to the median.

Group 1 showed the highest accuracy on average (80%) but also showed the longest response latencies (13.21 seconds); in contrast, participants in Group 3, who underwent the second systematic replication (page background color cue), had the least accurate responses (64%) but responded the fastest (6.15 seconds). Accuracy was the highest or among the highest for the two cardinal positive emotions and, with the exception of happy for Group 3, speed of response was faster for these cardinal positive emotions than for the others. Strikingly, silly and scared were the least accurate and showed the longest latencies on correct trials, with the exception of silly for Group 3. Overall patterns were quite similar irrespective of whether the color cue was saturated, a border, or on the page background.

Research Questions: Visual/Descriptive Presentation of Results

The accuracies and latencies for all three groups, as well as the patterns hypothesized at the outset of the study, are presented in Figures 4 and 5 (the inferential analyses were conducted for each group individually and are reported below). The left-most set of data represents a schematic pattern that would have been consistent with our original hypotheses, with the lowest accuracies and longest latencies expected for the no-cue display condition, some gains when a single cue was present (either clustering by spatial arrangement or a background color cue), and the most gains when both spatial and color cues were present. The next three sets of data represent the actual patterns of accuracies (Figure 4) and latencies (Figure 5).

Figure 4. Mean accuracy for each type of trial (and the pattern hypothesized at the outset of research)

Figure 5. Mean latency for each type of trial (and the pattern hypothesized at the outset of research)

We had anticipated an increase in accuracy with the addition of spatial and/or color cues. Figure 4 illustrates that the pattern was actually almost the opposite, with little to no impact of adding the spatial cue and lower accuracy upon addition of the color cue. We had also anticipated gains in speed to find the target upon addition of the cues. Figure 5 illustrates that, consistent with our hypothesis, latencies were shorter for participants in all three groups when a spatial cue was added, provided the symbols had white backgrounds. Surprisingly, the addition of the color cue provided no observable gain in latency relative to the no-cue condition in any of the groups, and when color was present the benefits of clustered spatial arrangements were absent. Descriptively, therefore, it appears that the addition of the color cue did not have the anticipated beneficial effect on accuracy or on latency to find a target.

Inferential Analysis: Group 1 (saturated color cue)

Mean accuracy when there was no cue was 88%. A decline in accuracy to the mid-70% range occurred when the color cue was present (either alone or in combination with the spatial clustering cue). The t-tests revealed that none of the cue additions (spatial alone, color alone, or spatial plus color) altered the accuracy of performance at a level of statistical significance with the p value adjusted for multiple comparisons, although the difference between the no-cue condition and the condition with both spatial and color cues approached it, t(1,10) = 1.96, p = .04.

Average median latency when there was no perceptual cue was 11.9 seconds. The average latency when the spatial cue of clustering symbols by valence was present was 7.9 seconds, an average gain of 4 seconds from the no-cue condition and one that was statistically significant, t(1,9) = 2.9, p = .009, with a moderate to large effect size (η = .69). When a background color cue was added to the no-cue display, the latency slowed to 14.4 seconds, a difference that approached but did not meet the criterion for statistical significance using the p-value adjusted for multiple comparisons, t(1,9) = 1.92, p = .04, but with a moderate to large effect that might have been detected with a larger sample (η = .64). Interestingly, the latencies when there was both a background color cue and the spatial cue were also slower than when no cue was present, at 14.2 seconds, although this difference did not reach statistical significance with either the original or the adjusted p value, t(1,9) = 1.68, p = .06.

Inferential Analysis: Group 2 (border color cue)

Mean accuracy when there was no perceptual cue was 73%. Visual inspection indicates a slight decline in accuracy when either a spatial or a color cue was added, with the greatest decline when both spatial and color cues were present; however, t-tests confirmed that these differences did not reach statistical significance in this sample.

Average median latency when there was no perceptual cue was 11.2 seconds. The average latency with an added spatial cue was 7.8 seconds, an average gain of 3.4 seconds from the no-cue condition, but one that failed to reach the criterion for statistical significance using the adjusted p-value, t(1,10) = 2.11, p = .03, despite a moderate to large effect size (η = .55). Little additional gain relative to the no-cue condition was offered by adding either the border color cue (average latency = 11.6 seconds) or both the color and spatial cues together (average latency = 10.5 seconds), and t-tests confirmed that neither of these conditions differed from the no-cue condition.

Group 3 (border/page color cue)

This group experienced border cuing for the color cue trials and page cues for the trials with both color and spatial cuing (see Figure 3). Mean accuracy when there was no perceptual cue was 72%. While there were some declines in accuracy when cues were added, t-tests confirmed that none of these other conditions differed statistically from the no-cue condition.

Mean latency when there was no perceptual cue was 6.3 seconds. The mean latency with an added spatial cue was 2.8 seconds, an average gain of 3.5 seconds from the no-cue condition, and one that was statistically significant, t(1,8) = 3.25, p = .006, with a moderate effect size (η = .48). There was little change from the no-cue condition either by adding a color cue (mean latency = 6.7 seconds) or by adding both a color and a spatial cue (mean latency = 6.2 seconds), and t-tests confirmed that these latencies were not statistically different from that in the no-cue condition.

Discussion

This research demonstrates that visual-perceptual cues can influence the speed with which children without disabilities find a target symbol within an array. Descriptively, performance was superior for the two cardinal positive emotions; this finding is consistent with prior reports of greatest reliability for identifying symbols for happy (e.g., Visser et al., 2008). When considering the two types of cues, we found that when the symbol background was white, clustering the symbols by valence resulted in little change in accuracy but gains of between 3.4 and 4.0 seconds in the speed of responding, a difference that was statistically significant in two of the groups and that approached but did not meet the criterion for significance in the other. To our surprise, addition of the color cue did not enhance either accuracy or speed of responding, either when the color cue appeared alone or when it was paired with the spatial clustering cue.

Role of spatial arrangement and color cuing

The facilitative effect for clustered symbol arrangement is consistent with findings of benefits of clustering symbols, either by internal color (Wilkinson et al., 2008) or taxonomic status (Carelli & Wilkinson, in preparation). This finding extends the breadth of arguments about structural/organizational dimensions of the display beyond basic perceptual cues – that of internal color – to the realm of more interpretive or cognitive dimensions, in this case, the valence of the emotion being represented. The children in this study were not instructed about the arrangement of the symbols, that is, they were not told “look, the positive symbols are clustered together now” on the trials with clustered symbols. The fact that these fairly young children still responded more rapidly on trials with symbols clustered by valence therefore suggests sensitivity to an implicit cue that was expressed only through the spatial organization of the display. This observation underscores the possibility that some display organizations might facilitate use better than others (see, e.g., McFadd & Wilkinson, 2010; Wilkinson et al., 2008). Research on what other dimensions influence both basic search as well as more functional communication outcomes is clearly needed.

As noted in the Methods section, the inclusion of three different types of color cuing was not part of the original research question. Rather, the lack of facilitation by the background color cue in the original group needed additional exploration before this finding could be interpreted. We felt that perhaps the saturation of the symbol background had reduced the level of contrast between the lines defining the symbol (which were black) and the symbol backdrop. It seemed critical to use the border color cue to retain the black-white contrast along with the color cuing of symbol valence. The finding of a continued failure of the color cue to facilitate responding prompted the addition of the page-backdrop for the combined color and spatial cue condition in the third group, to remove the color cue from the symbol altogether.

It would appear that the lack of facilitation by color cues in Group 1 may not have been due to the reduction of the contrast between the symbol and its background, given the similarities in outcomes in the two replication groups. In none of the groups did addition of the color cue facilitate either accuracy or speed of response relative to an identically organized display with no background color cuing. Even when the spatial clustering cue was also added (thus, both the color cue and the spatial cue were present), there was no facilitation of response speed. Thus, while the addition of color did not impede performance beyond what was observed when there were no structural cues in the display at all (the no-cue condition), it did not facilitate performance either. This is interesting given the clear facilitation offered by the spatial cue when the symbols had a white background. The finding that symbol background color did not offer an added benefit was contrary to our initial hypotheses, but suggests that fostering efficient search for emotion symbols is better accomplished through spatial rather than background color cuing.

Limitations and findings needing further examination

This research represents only a first step toward delineating the structural/perceptual features of AAC displays that might contribute to efficient use of a system. The research highlighted the potential role of color and spatial cues in search for a target in a small sample of children without disabilities. Further exploration of a number of questions is urgently needed. These include experimental/methodological questions that might be pursued further in laboratory studies as well as questions of application and extension to functional settings.

Sample characteristics and size

This study was initiated with children without disabilities, for reasons articulated earlier (see Higginbotham, 1995). Moreover, the sample sizes were fairly small when the groups were considered separately, restricting our ability to detect smaller effects. Nonetheless, we detected moderate-to-large effects of spatial arrangement on latency consistently across the three groups, with only one of those effects failing to reach the criterion for statistical significance.

The expected effect of color cuing was not found. Is it possible that this failure to detect an effect was due to the limited participant numbers? It is conceivable, although rendered less likely by our ability to detect the large effect of spatial arrangement in displays with white backgrounds. Moreover, visual analysis suggests that whatever effect might be detected in larger samples would be in the opposite direction from what was originally hypothesized. That is, the trends visible in Figures 4 and 5 were reduced accuracy and slowed search for display conditions that had the color cue. Clearly, replication with a larger sample would be necessary to determine whether adding the color simply has no effect or actually compromises search; however, it seems quite unlikely that such a replication would find that the color cue aids search.

Application of findings to disability

Clearly, the research needs to be extended to participants with disabilities who might benefit from AAC interventions. It is quite possible that these effects would be mitigated in individuals who are chronologically older and who may have more experience with detecting and identifying symbols for emotions. Alternatively, it is also possible that the effects would be more pronounced in individuals who have intellectual disabilities, given that emotion recognition may be a point of specific vulnerability in this population (e.g., Harwood et al., 1999; McAlpine et al., 1991; see review in the introduction). This is one of the most critical next steps for study.

The role of color cues in novel displays versus familiar ones

The experimental design of this study focused solely on the “up front” gains in search performance with displays that were novel to the children. This approach allowed us to examine our main question concerning whether basic perceptual or bottom-up influences might be exploited to facilitate more complex behaviors (such as, in this case, the top-down search for a specified target). The considerations raised by the findings would potentially apply for users of aided AAC whenever a display is wholly novel (such as new pages added to a system) or even fairly unfamiliar (such as existing pages that are modified to grow with the user, that are accessed only occasionally, or that are accessed by users who have difficulty learning or remembering the spatial layouts of arrays).

Not addressed by this research is the extent to which these structural considerations apply once the contents and organization of specific pages become very well learned. Indeed, one might predict that as users become familiar with the displays, the influence of the visual-perceptual cues might wane. Alternatively, the influence of the structural cues might not simply wane, but might change as familiarity grows. Specifically, the color cuing may begin to facilitate search once the user is familiar with the display. It is of great interest to determine if and how the influence of the structural cues operates over time, within and across different individuals.

Potential role of color cues other than detection of emotion symbols

The lack of facilitation by color cues relative to similar displays with only white backgrounds suggests that the use of such background color cues in AAC displays needs to be examined systematically. It is worth noting, however, that we tested only children’s ability to distinguish symbols representing different emotional states. We therefore did not provide a test of recommendations concerning background color for other purposes, such as to cue word-class category (Goossens’ et al., 1999). The use of color coding to aid sequential selection of symbols may operate quite differently than locating single symbols. Furthermore, it is quite possible that background color might be quite useful for functions such as navigation among pages of displays, rather than for search within a single page. Detailed evaluation of whether background color cuing facilitates search for word-class categories seems warranted at this point.

Application to functional communication

Another pressing question is the extent to which findings from the task in this study operate or extend to functional communication settings. The influence of structural characteristics of a display may be overridden by the many social demands of using a system for communication with a partner. Again, however, an alternative is that the effects would be magnified because the user must now negotiate the visual display while also meeting the added demands of the social interaction. Examination of the generality of our findings to communicative contexts is urgently needed.

Recognizing that the search task used here is clearly removed from daily communication settings, we would argue the findings are still relevant and warrant further translational research. Both accuracy and speed of responding are critical components of aided AAC communication. The ability to find a symbol accurately has obvious implications for effective communication; incorrect selections result in communication breakdowns. Moreover, enhancing the speed with which a user can find a symbol is a natural goal for facilitating message preparation rate in actual communicative contexts. We found that responding on trials with white-background symbols clustered by valence was faster by over 3 seconds in all three groups, relative to either the no-cue condition or to trials with any type of color cuing. From a practical standpoint, a child who is trying to share feelings of happiness, anger, and so forth, and who has to spend additional time finding a symbol, may become further angered or, at the very least, less happy due to the sheer time spent searching.

Perceptual differences between displays not controlled in this research

Because we did not plan the experimental groups a priori, and the contrast of the conditions was not a specific research question at the outset, the three groups constitute replications rather than direct comparisons. Some findings were fairly consistent across groups: the facilitation of response time by the spatial cue, for instance, and the lack of facilitation by the color cue, either alone or in combination with a spatial cue. However, some findings were not consistent across the groups, and we cannot interpret their implications until a direct examination is conducted. For instance, clear differences in both accuracy and latency emerged across the three groups. Descriptively, search times were faster but accuracies were lower when the page background cue was present, whereas responding was more accurate but slower in the saturated color cue condition. While the groups were comparable in chronological age and receptive vocabulary standard scores, were tested on one of two computers, and had undergone a number of mouse-related response tasks prior to engaging in the current one, it is possible that some factor that differentiated the groups was not measured directly by the experimenters. Another intriguing possibility is that there is an actual difference introduced by the different displays. The trials with white backgrounds, as well as the color-cued trials in the saturated and border displays, all had gray areas between the symbols, simulating the grid used in many devices. The page-backdrop trials seen by Group 3 had no such gray areas, but rather the more uniform red and blue backdrops. Could this have caused the children in that condition to respond less accurately, but faster? Systematic analysis with true experimental control over the type of cuing would allow us to determine more clearly whether across-group differences reflect differences in the perceptual features of the display itself.

Conclusion and future directions

This research suggests that the design of an aided communication display may have a significant impact on how readily it is perceived, as measured by the speed with which a target can be found within an array. Rather than providing a final answer, this research highlights that the structural design characteristics of the visual display itself are potentially important features that may contribute to greater or lesser fluency with an aided AAC system. Although we would never suggest that structural alterations are the sole solution to the rate barriers in aided AAC communication, it does seem reasonable to examine how such small and readily achievable changes could facilitate fluent and effective message preparation.

Acknowledgments

This research was based in part on an undergraduate honors project completed by the second author under the supervision of the first author. Parts of this research were presented as a poster at the 2009 convention of the American Speech-Language-Hearing Association in New Orleans. The larger research program is supported by NICHD P01 HD25995. Thanks to Lauren Seidman, Lauren Foderaro, Jessie Miller, and Bridgett Coombs for assistance with data collection and analysis, and to Tara O’Neill for quality control and analysis of individual items. We also thank the children, families, and staff of the Child Development Laboratory, the Bennett Family Child Center, and the Daybridge Center for their generosity in offering their time and enthusiastic cooperation with this research. The use of the Mayer Johnson PCS symbols is with permission.

References

1. Barisnikov K, Hippolyte L, Van der Linden M. Face processing and facial emotion recognition in adults with Down syndrome. American Journal on Mental Retardation. 2008;113:292–306. doi:10.1352/0895-8017(2008)113[292:FPAFER]2.0.CO;2.
2. Beukelman DR, Mirenda P. Augmentative and Alternative Communication: Supporting Children and Adults with Complex Communication Needs. 3rd ed. Baltimore, MD: Paul H. Brookes; 2005.
3. Dube WV. Computer software for stimulus control research with Macintosh computers. Experimental Analysis of Human Behavior Bulletin. 1991;9:28–30.
4. Dunn LM, Dunn LM. Peabody Picture Vocabulary Test – III. Circle Pines, MN: American Guidance Service; 1997.
5. Ekman P, Friesen W. Unmasking the Face: A Guide to Recognizing Emotion from Facial Cues. Oxford: Prentice-Hall; 1975.
6. Fallon K, Light J, Page T. Enhancing vocabulary selection for preschoolers who require augmentative and alternative communication. American Journal of Speech-Language Pathology. 2001;10:81–94.
7. Goossens’ C, Crain SS, Elder PS. Engineering the Preschool Environment for Interactive Symbolic Communication: 18 Months to 5 Years Developmentally. 4th ed. Birmingham, AL: Southeast Augmentative Communication; 1999.
8. Gross TF. The perception of four basic emotions in human and nonhuman faces by children with autism and other developmental disabilities. Journal of Abnormal Child Psychology. 2004;32:469–480. doi:10.1023/b:jacp.0000037777.17698.01.
9. Harwood NK, Hall LJ, Shinkfield AJ. Recognition of facial emotional expressions from moving and static displays by individuals with mental retardation. American Journal on Mental Retardation. 1999;104:270–278. doi:10.1352/0895-8017(1999)104<0270:ROFEEF>2.0.CO;2.
10. Higginbotham DJ. Use of nondisabled subjects in AAC research: Confessions of a research infidel. Augmentative and Alternative Communication. 1995;11:2–5.
11. Higginbotham J, Shane H, Russell S, Caves K. Access to AAC: Present, past, and future. Augmentative and Alternative Communication. 2007;23:243–257. doi:10.1080/07434610701571058.
12. Lang PJ, Bradley MM, Cuthbert BN. International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual. Technical Report A-8. Gainesville: University of Florida; 2008.
13. Mayer-Johnson R. The Picture Communication Symbols. Solana Beach, CA: Mayer-Johnson Co.; 1992.
14. McAlpine C, Kendall KA, Singh NN. Recognition of facial expressions of emotion by persons with mental retardation. American Journal on Mental Retardation. 1991;96:29–36.
15. McEwen IR, Lloyd LL. Positioning students with cerebral palsy to use augmentative and alternative communication. Language, Speech, and Hearing Services in Schools. 1990;21:15–21.
16. McFadd E, Wilkinson KM. Qualitative analysis of decision making by clinicians during design of aided visual displays. Augmentative and Alternative Communication. 2010;26:136–147. doi:10.3109/07434618.2010.481089.
17. Mizuko M, Esser J. The effect of direct selection and circular scanning on visual sequential recall. Journal of Speech and Hearing Research. 1991;34:43–48. doi:10.1044/jshr.3401.43.
18. Mizuko M, Reichle J, Ratcliff A, Esser J. Effects of selection techniques and array sizes on short-term visual memory. Augmentative and Alternative Communication. 1994;10:237–244.
19. Moore D. Reassessing emotion recognition performance in people with mental retardation: A review. American Journal on Mental Retardation. 2001;106:481–502. doi:10.1352/0895-8017(2001)106<0481:RERPIP>2.0.CO;2.
20. Simon EW, Rosen M, Ponpipom A. Age and IQ as predictors of emotion identification in adults with mental retardation. Research in Developmental Disabilities. 1996;17:383–389. doi:10.1016/0891-4222(96)00024-8.
21. Visser N, Alant E, Harty M. Which graphic symbols do 4-year-old children choose to represent each of the four basic emotions? Augmentative and Alternative Communication. 2008;24:302–312. doi:10.1080/07434610802467339.
22. Wagner BT, Jackson HM. Developmental memory capacity resources of typical children retrieving picture communication symbols using direct selection and visual linear scanning with fixed communication displays. Journal of Speech, Language, and Hearing Research. 2006;49:113–126. doi:10.1044/1092-4388(2006/009).
23. Wallace B. MTS2 [Computer software]. 2010.
24. Wilkinson KM, Carlin M, Thistle J. The role of color cues in facilitating accurate and rapid location of aided symbols by children with and without Down syndrome. American Journal of Speech-Language Pathology. 2008;17:179–193. doi:10.1044/1058-0360(2008/018).
25. Wilkinson KM, Coombs B. Preliminary exploration of the effect of background color on the speed and accuracy of search for an aided symbol target by typically developing preschoolers. Early Childhood Services, Special Issue on Augmentative and Alternative Communication. (in press).
26. Wilkinson KM, Jagaroo V. Contributions of principles of visual cognitive science to AAC system display design. Augmentative and Alternative Communication. 2004;20:123–136.
27. Wilkinson KM, McIlvane WJ, Albert A, Coombs B. Optimizing display design for visual supports: Role of color and taxonomic groupings. Poster presented at the annual Gatlinburg Conference on Research and Theory in Mental Retardation/Developmental Disabilities; March 2010; Annapolis, MD.
28. Wilkinson KM, Hennig S. Consideration of cognitive, attentional, and motivational demands in the construction of aided AAC systems. In: Soto G, Zangari C, editors. Practically Speaking: Language, Literacy, and Academic Development for Students with Special Needs. Baltimore, MD: Paul H. Brookes; 2009. pp. 313–334.