Author manuscript; available in PMC: 2026 Apr 22.
Published in final edited form as: Perspect ASHA Spec Interest Groups. 2025 Nov 26;11(1):202–212. doi: 10.1044/2025_persp-25-00100

Visual Communication Supports for the Assessment of Swallowing and Somatosensation in Adults With Down Syndrome

Sophie Wolf a, Aarthi Madhavan a, Nicole Etter a, Krista Wilkinson a
PMCID: PMC13099009  NIHMSID: NIHMS2159992  PMID: 42022580

Abstract

Purpose:

This article describes practical strategies for enhancing swallow assessments for individuals with intellectual or cognitive disabilities through the inclusion of visual communication supports (known as augmentative and alternative communication [AAC]). These strategies were developed to support understanding and self-expression during the swallow assessments for individuals with Down syndrome, as part of a large research study. Over the last 2 years, these supports have been integrated into swallow assessments with 14 adults with Down syndrome.

Method:

Clinicians and researchers with expertise in swallowing and AAC collaboratively designed and refined visual communication aids to enhance understanding and task completion of swallowing-related assessments, including the Mann Assessment of Swallowing Ability, Iowa Oral Performance Instrument, spontaneous swallowing frequency, food avoidance inquiries, and lingual somatosensation testing. A summary table of assessments and suggested strategies is provided. All study procedures were approved by the Institutional Review Board (IRB #00022372) at The Pennsylvania State University.

Results:

Our observations suggest the integration of AAC enhances the accessibility of swallow assessments for individuals with intellectual disabilities, supporting their participation in these assessments. Many participants referenced the visual aids, such as videos, as key to their understanding of what to expect and what to do during assessments. These observations are consistent with research showing that multisensory modalities and adaptations can improve functional outcomes for people with Down syndrome.

Conclusions:

Incorporation of visual supports may be useful for clinicians and researchers seeking to improve accessibility in swallow assessments for diverse populations. Clearly, dedicated research is necessary to examine these observations systematically.


Swallow assessments require patients to comprehend and respond to verbal instructions, with minimal clinician modeling available during the tasks. For instance, one task on the Mann Assessment of Swallowing Ability (MASA), which evaluates lingual strength, coordination, and range of motion, asks the patient to push their tongue against the inside of their cheek. A second example occurs when patients are asked to place the Iowa Oral Performance Instrument (IOPI) bulb against their palate and press their tongue against it. Because both tasks take place inside the mouth, little visual feedback is available to the patient, so clinicians typically rely on verbal instructions alone.

Comprehension of these verbal instructions may be difficult for any individual who has an intellectual disability. This challenge is of particular salience when the movement occurs inside the mouth (as with the IOPI), because the movement is difficult to observe directly. Yet, accurately assessing swallow function in such individuals is of utmost importance, as individuals with disabilities such as Down syndrome show a high prevalence of swallowing difficulty (e.g., 13.6% in those with Down syndrome vs. 2.5% in nondisabled peers; Chicoine et al., 2021), potentially leading to aspiration, pneumonia, hospitalization, or even mortality (Landes et al., 2020; Pandit & Fitzgerald, 2012). Furthermore, Smith et al. (2014) reported that all 23 adults with Down syndrome in their observational study demonstrated swallowing-related difficulties, even though none had self-reported problems, highlighting the need for more sensitive and inclusive assessment strategies. It is imperative to adapt swallowing assessments to be accessible to individuals with comprehension or cognitive impairments (including but not limited to those with Down syndrome) in order to accurately evaluate risk for these negative outcomes.

Visual communication supports, including augmentative and alternative communication (AAC), provide tools and strategies to support communication when an individual’s natural speech and language are inadequate to support either understanding or self-expression (Beukelman & Light, 2020). This article describes traditional AAC as well as other visual supports, and how they were developed to improve comprehension and task performance during swallowing assessments with individuals with intellectual disability associated with Down syndrome. As will be reviewed, an established evidence base supports the role of AAC for various purposes in medical service delivery, including reducing the challenge of understanding verbal instructions for complex behavioral tasks and promoting the ability to respond independently. Although the supports described here were developed as part of a research grant, swallow assessments are part of clinical service delivery as well. The goal of this clinical focus article is to illustrate how visual supports, including AAC, can be integrated into swallowing assessments across research and medical settings, particularly those involving individuals with intellectual disabilities or other cognitive or communication impairments.

AAC

As described in Beukelman and Light (2020), AAC supports individuals whose speech and language comprehension or production may be limited due to intellectual disability, cognitive impairment, or a speech-language impairment. Unaided forms of AAC require no external equipment and are produced solely by the body (e.g., vocalizations, gestures, and facial expressions). However, many AAC interventions incorporate some form of external aid, ranging from low-technology aids such as books containing written words or picture symbols to high-technology systems that produce speech output when the user selects a desired message. When AAC includes these types of aids, it is referred to as “aided” AAC. An example of an individual who used aided AAC due to speech limitations associated with amyotrophic lateral sclerosis was the renowned physicist Stephen Hawking, who communicated using a speech-generating device.

AAC systems are designed to match an individual’s vision, motor, cognitive, and language abilities, with display options tailored to meet the unique needs of the individual (Beukelman & Light, 2020). Research shows that AAC promotes expressive vocabulary (Harris & Reichle, 2004; Kasari et al., 2014; Romski et al., 2010), grammatical development (Binger et al., 2011), and overall functional communication. It fosters self-advocacy, social interaction, and literacy while supporting individuals in achieving greater independence and participation across various settings, including health care (Beukelman & Light, 2020; Snodgrass et al., 2013).

In addition to supporting expressive communication, AAC also helps with language comprehension, vocabulary development, participation in social interactions, and involvement in community or vocational tasks (Laubscher & Wilkinson, 2021). For instance, AAC tools such as video prompting, where a person with a disability watches a brief video showing the steps of an activity, can serve as helpful reminders or prompts during the completion of that activity. This approach encourages greater independence by enabling individuals to complete complex tasks more autonomously in the community or workplace (Babb et al., 2018; Bereznak et al., 2012; O’Neill et al., 2017).

Shared decision making is a process that includes providing individuals with meaningful choices and accessible ways to express preferences; its use alongside AAC may increase adherence to health care routines, reduce adverse medical events, improve patient safety and satisfaction, and improve the quality of child–provider interactions (Bartlett et al., 2008; Hemsley & Balandin, 2014). Well-designed AAC strategies can support both comprehension and expression throughout diverse health care–related experiences. A brief training for medical professionals has been shown to increase opportunities for choice making among children with complex communication needs in inpatient medical settings (Gormley et al., 2023). AAC strategies have also proven useful in intensive care settings for individuals who are temporarily voiceless due to medical intervention (Carruthers et al., 2017). Costello (2000) further highlights the importance of AAC for ICU patients, emphasizing the use of voice output devices and picture displays to maintain autonomy and reduce anxiety. Both high-tech and low-tech supports have been used preoperatively with pediatric tracheostomy patients as well, demonstrating AAC’s adaptability in medical settings (Santiago et al., 2020).

AAC supports have been used in a variety of health care delivery systems with individuals with intellectual or cognitive impairment. de Knegt et al. (2015) report on the benefits of visual supports such as simple line drawings to aid in the communication of quality, type, and intensity of pain in individuals with Down syndrome. Additionally, Santoro (2020) argued that assessment and intervention for common health-related experiences in individuals with Down syndrome, such as sleep apnea, atlantoaxial instability, celiac disease, feeding difficulties, and adult-onset cardiac valvular disease, are likely to benefit from the use of visual aids. These aids may include social stories, video visual scene displays (vVSDs), visual schedules, and choice making.

In this clinical focus article, we describe the potential role of AAC during swallowing assessment, particularly how AAC might support understanding of task instructions and the ability to self-advocate while engaging in the assessment. We will describe the supports we have developed over the last 2 years to assist in these swallow assessments, which have been implemented thus far with 14 individuals with Down syndrome. We offer our anecdotal observations of their impact in the research. Although our focus has been on engagement within research-based assessments and with one specific population, the principles of evidence-based AAC are applicable within clinical swallow assessments more broadly, as well as with individuals with other disabilities, whether developmental or acquired. Therefore, our hope is to spark future systematic research to directly evaluate the role of AAC in swallow assessment and to offer clinicians ideas for integrating AAC into their own clinical assessments.

The 14 individuals with Down syndrome ranged in age from 18 to 36 years old. The sample included six females, with one participant identifying as Black, and the remaining participants identifying as White. All participants were native English speakers and reported receiving speech therapy in the past, with services addressing areas from articulation to language development. One participant had a co-diagnosis of autism. While some participants currently use dedicated AAC devices to support their communication, all were motivated to primarily rely on speech as their main form of communication; this was, in part, due to an inclusion criterion in the larger study (which included a speech component) and therefore illustrates the supplemental or “augmentative” nature of visual supports for communication. Language abilities varied but were generally limited, as reflected by participants’ performance on the Peabody Picture Vocabulary Test–Fourth Edition (PPVT-4), with a mean standard score of 52.0 (SD = 11.2; range: 40–73). Supports were individualized to align closely with each participant’s communication needs and preferences during assessments. None of the participants had formally requested accommodations for a communication disability.

Supporting Swallowing Assessments Through AAC

The motivation to integrate AAC to support swallow assessments arose within a larger research grant examining speech, language, and swallow function in individuals with Down syndrome. Down syndrome results from the presence of three copies of chromosome 21 and is associated with the presence of intellectual disability as well as communication impairment (Abbeduto & McFadd, 2021). Individuals with Down syndrome often experience structural (Sforza et al., 2012) and functional (Fawcett & Peralego, 2009; Kent & Vorperian, 2013; Kumin, 1994) differences in the oral-pharyngeal region, which can affect their ability to perform complex sensorimotor behaviors such as speech and swallowing. These difficulties are compounded by sensory processing differences, including challenges with sensory acuity (de Knegt et al., 2015) and auditory processing (Abbeduto et al., 2007). These documented structural and sensory characteristics are likely related to swallow function, efficiency, and safety.

These challenges mean that supporting comprehension and self-expression using visual supports is of critical importance (see Wilkinson & Finestack, 2021, for detailed considerations of AAC supports for individuals with Down syndrome across various contexts and the lifespan). We discuss each strategy we used and offer the evidence base for it within the AAC literature. In addition, Table 1 presents the information organized around each of the swallowing tasks, to serve as a useful guide for potential application.

Table 1.

Summary of the six swallowing assessments and potential augmentative and alternative communication (AAC) strategies to support comprehension and accuracy at completing the task.

Task: Iowa Oral Performance Instrument
Description: Assesses tongue strength by having participants press their tongue against a small air-filled bulb on the roof of their mouth, measuring the pressure applied. This is done three times for a few seconds to determine peak and average pressure.
AAC or support strategy:
- Plush mouth model: Used to demonstrate tongue placement in the mouth.
- Image of tongue location with an arrow: Shows the correct direction for tongue movement.

Task: Mann Assessment of Swallowing Ability
Description: Evaluates swallowing skills through 24 different tasks assessing swallowing-related abilities such as speech and language, range and strength of movement, etc. The total score helps to understand swallowing function.
AAC or support strategy:
- Plush mouth model: Used to demonstrate each task.
- Visual and verbal placement cues: Help participants understand how to position their body with appropriate models.

Task: Grating Orientation Test
Description: Evaluates tactile sensitivity by placing textured stimuli on the tongue. Participants close their eyes, stick out their tongue, and identify whether the grooves are horizontal or vertical.
AAC or support strategy:
- Image of tongue location with an arrow: Increases comprehension of directionality and provides the ability to respond nonverbally.

Task: Spontaneous swallow frequency
Description: Measures the frequency of natural swallows over 10 min using a microphone attached to the side of the neck. The swallows are counted and analyzed for frequency per minute.
AAC or support strategy:
- Presession video: Demonstrates how to position the microphone.
- Researcher modeling: Shows how to apply and remove the microphone on the participant’s neck to ensure comfort and understanding.

Task: Von Frey hair test
Description: Assesses point pressure detection by gently pressing nylon monofilaments against the tongue and fingertip. Participants indicate whether they can feel one or two filaments when compared to a reference.
AAC or support strategy:
- Box to cover hand: Used to ensure the participant cannot see the filaments being applied to their hand, reducing anxiety and ensuring the test’s accuracy.

Task: Texture Avoidance Questionnaire
Description: Assesses food preferences and aversions based on texture. Participants use a 5-point scale, selecting food images that correspond to their preferences (from “like” to “dislike”).
AAC or support strategy:
- Talking Mats: A low-tech AAC tool where food images are placed along a 5-point scale, allowing participants to visually express their preferences.

A biophysiological framework (Madhavan et al., 2023) proposed the integration of sensory and motor processes with cognitive and linguistic functions. This framework emphasizes the need for tailored interventions that consider the unique phenotypical characteristics of individuals with Down syndrome, and it highlights the necessity of integrating multiple sensory channels to enhance understanding and sensorimotor performance in this population (Madhavan et al., 2023). Although the original intention of this framework was to spotlight the need for individualized interventions, we identified the need for individualized and supported assessment strategies as well. This framework, along with the AAC literature, informed the development and implementation of the supports described here.

Video Modeling to Illustrate Task Procedures

Video modeling is a well-researched method for teaching a variety of skills, including for individuals with intellectual disabilities (see Park et al., 2019). Within AAC, a technique called “video visual scene displays” (vVSDs) uses video modeling combined with pauses in the video to teach various practical skills and offer communication opportunities for individuals with intellectual disabilities. vVSDs have been used to support independent participation in vocational and volunteer activities in adolescents on the autism spectrum or with Down syndrome (Babb et al., 2018, 2020) and to promote peer social interactions between preschoolers and communication partners (Chapin et al., 2022) or adolescents with autism and their typically developing peers (Babb et al., 2021).

Due to the power of video modeling both within and outside the domain of AAC, we created videos of all of our procedures for participants to watch before the study visit. These videos served an important function of maximizing the participant’s understanding of what we would be asking them to do. For instance, the MASA is a clinical tool designed to evaluate swallowing function and is widely used by speech-language pathologists to guide interventions and improve patient outcomes (Mann, 2002). This standardized assessment helps identify the severity of dysphagia and the risk of aspiration. The tasks in the MASA include asking participants to move their tongue over their lips in a circular motion, puffing out their cheeks, producing a cough, and so forth. Our video demonstrated the various steps in the MASA.

We also made videos of how we measured spontaneous swallow frequency (SSF), which evaluates the brainstem-generated swallow reflex, a critical mechanism for airway protection. In this process, recordings are analyzed to determine the frequency of spontaneous swallows, expressed as “swallows per minute.” Our video illustrated a research assistant being fitted with a small microphone taped to the side of their anterolateral neck to record audio signals from spontaneous saliva swallows while sitting quietly for 10 min. The video allowed participants to understand what would happen, in particular that the microphone was adhered with a gentle piece of tape that would not hurt or be uncomfortable.
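For readers implementing an SSF analysis, the rate computation itself is simple arithmetic once swallow events have been annotated. The minimal Python sketch below illustrates this; the function name, the annotation format (a list of event times in seconds), and the example timestamps are our own illustrative assumptions, not part of the study protocol.

```python
# Illustrative sketch: converting annotated swallow events from a quiet-sitting
# recording into a "swallows per minute" rate. A 10-min (600 s) recording is
# assumed, matching the procedure described in the text.

def swallows_per_minute(swallow_times_s, recording_duration_s=600):
    """Return spontaneous swallow frequency given swallow event times (s)."""
    if recording_duration_s <= 0:
        raise ValueError("recording duration must be positive")
    # Rate = number of annotated swallows divided by duration in minutes.
    return len(swallow_times_s) / (recording_duration_s / 60.0)

# Hypothetical example: 9 swallows annotated across a 10-min recording.
rate = swallows_per_minute([31, 95, 170, 248, 300, 371, 442, 509, 580])
print(rate)  # 0.9 swallows per minute
```

In practice the event times would come from acoustic annotation of the neck-microphone recording; the calculation itself is independent of how the swallows are detected.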

Although we did not collect direct data on the impact of the videos on participant understanding, there are two sources of support for their use (other than the literature already reviewed concerning their effectiveness). First, many participants commented when we would introduce a task “oh yeah, I saw that on the video,” suggesting that the video was supporting their understanding of what they would be asked to do. In addition, an unpublished master’s student research project (Baransky & Wilkinson, 2025) asked participants and parent/care aides about the usefulness of the video. One parent noted: “My son is a visual learner and auditory learner, so I think rather than reading something and trying to process that, being able to see somebody do it while it is being explained, I think is beneficial.” Another parent noted that being able to watch the video beforehand also allowed her to supplement the information herself: “It gave me a sense of what would actually happen, and I was able to sit down and talk to him about what was going to happen.”

Low-Technology Supports

We also developed low-tech supports to serve a variety of functions. These were intended to support participants’ understanding of their rights in the research and their comprehension of complex sets of instructions. However, we also saw that participants began using the low-tech supports to provide responses during the tasks, meaning that they were not dependent on producing verbal answers but rather could point to the display.

Thumbs Up/Down

The video models described above included a short video clip of one of our grant advisors, who himself has Down syndrome, at the end of each segment. In this clip, the advisor is visible on the screen and is asked “Do you think you can do that?” The advisor produces a “thumbs up” and says “Yes I can do that,” after which a still image of him with the thumbs up moves to the lower right corner of the screen. The advisor then returns to the screen and is asked again “Do you think you can do that?” but this time the advisor produces a “thumbs down” and says “No, I can’t do that.” The still image with the thumbs down then moves to the lower left corner of the screen. By the end of the clip, the video shows the advisor with the thumbs-down image in one corner and the thumbs-up image in the other.

The original purpose of this clip was to ensure that participants understood that they could choose to do a task, but that they could also choose not to do it. To remind participants that they had this option during the actual procedures, the final image from the video clip was printed onto paper and laminated, and was present on the table throughout the assessment. Periodically, researchers would remind participants that they had a choice.

This still image has also served another function for our participants. Specifically, we test for tactile detection threshold estimates using what is called Von Frey Hair (VFH) monofilaments. A flexible monofilament, or “hair,” is applied to the fingertips and to the tongue using a forced-choice paradigm (Etter et al., 2017, 2023). Participants began to use the “thumbs up/down” image to respond to the test itself, pointing to indicate “yes, I feel that” or “no, I don’t feel that.” This adaptation supported communication and enabled researchers to gather accurate feedback from participants who may otherwise have found it challenging to respond verbally. This adaptation appeared to improve communication efficiency and minimize misunderstandings. When used, participants expressed unambiguous choices, and researchers made few requests for clarification, indicating clear interpretation of responses.

Ongoing Support for Task Comprehension

Static AAC supports were also used for task-specific comprehension. The grating orientation task (GOT) is a method for evaluating tongue somatosensory acuity and is thought to be related to texture appreciation. The GOT uses custom-made ~1-in. square “buttons” with varying textures, or gratings. The gratings, or grooves, on the stimuli range from distances of 0.2 mm to 1.25 mm apart. A total of six differing gratings are used. Participants are asked to stick out their tongue, and the button is pressed against the medial tongue tip in either a vertical or horizontal direction. The participant is asked to identify the directionality of the gratings. The size (distance) of the grating and direction are randomized (Van Boven & Johnson, 1994). The participant relies on spatial cues of the tongue to determine the orientation of the gratings and their responses are recorded (Essick et al., 1988).
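The GOT trial structure described above (groove spacings from 0.2 mm to 1.25 mm, two orientations, randomized order) can be sketched in code. In this hypothetical Python sketch, only the 0.2-mm and 1.25-mm endpoints, the total of six gratings, and the two orientations come from the text; the intermediate spacing values and the number of repetitions per condition are illustrative assumptions.

```python
# Illustrative sketch of a randomized GOT trial list. Intermediate spacings
# and repetitions per condition are assumptions for demonstration only.
import random

SPACINGS_MM = [0.2, 0.35, 0.5, 0.75, 1.0, 1.25]  # endpoints from the text; middle values assumed
ORIENTATIONS = ["horizontal", "vertical"]

def make_trial_list(reps_per_condition=2, seed=None):
    """Build a shuffled list of (spacing_mm, orientation) trials."""
    rng = random.Random(seed)  # seedable for a reproducible session order
    trials = [(s, o) for s in SPACINGS_MM for o in ORIENTATIONS] * reps_per_condition
    rng.shuffle(trials)  # randomize both grating size and direction, per the protocol
    return trials

trials = make_trial_list(seed=1)
print(len(trials))  # 6 spacings x 2 orientations x 2 reps = 24 trials
```

On each trial, the examiner would present the listed grating at the listed orientation and record the participant’s forced-choice response (e.g., a point to the horizontal- or vertical-arrow image described below).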

Images of the tongue with arrows demonstrating the direction and placement of the tool were provided to support comprehension that participants were being asked to judge the direction of the grating. One image on the page showed arrows running horizontally (side to side) across the tongue, and the other showed arrows running vertically (front to back) along the tongue, improving participants’ understanding of the task. In addition, as with the thumbs up/down already described, participants started using the images as part of their response, pointing to the visuals (horizontal or vertical arrows) to indicate the orientation on each trial. This approach enabled participants to focus on the tactile task while providing a nonverbal form of response. The visuals supported both comprehension and expression, which appeared to enhance the clarity of instructions and the accuracy of task completion.

Talking Mats

One of the goals of the larger project was to identify if participants had certain foods they avoid due to the texture of that food. This clinical focus article’s authors developed a Texture Avoidance Questionnaire (TAQ) specifically targeting food avoidance based on texture. We chose to develop the TAQ in-house because other available scales (e.g., the Adolescent/Adult Sensory Profile [Brown et al., 2001] and the Adult Eating Behavior Questionnaire [Hunot et al., 2016]) focus predominantly on smell, taste, emotional eating, and so forth rather than texture and oral sensation. The TAQ was informed by these other tools, and includes statements related to food texture avoidance (e.g., “I avoid foods that are dry”).

A critical aspect of communicating about feeding difficulties is ensuring that adults with Down syndrome are offered the opportunity to report relevant information for themselves (Santoro, 2020). The TAQ initially used a Likert scale, with questions presented to participants orally and with no visual supports, to assess participant preferences and food aversions. However, challenges with comprehension and consistency led to an adaptation, shifting to a binary scale and static visual supports for clearer responses. In that approach, participants are shown pictures of food representing different textures, and they indicate their preferences by responding in their preferred communication mode, including options to give a thumbs up or thumbs down, or by tapping the picture of the community advisor doing the same gesture. This idea builds upon Santoro’s (2020) suggestion to generate a picture-based list of foods to refer to when an individual with Down syndrome is meeting with a physician in a health care setting.

While these adaptations have improved engagement with this questionnaire, they decreased our ability to identify fine-grained, individual texture preferences. An AAC technique called Talking Mats may be able to further enhance communication and provide richer response data when completing the TAQ. An example of Talking Mats is provided in Figure 1. Talking Mats is a resource that supports understanding and facilitates expression (Murphy & Cameron, 2008). It uses a 5-point scale to help participants discuss food preferences and texture avoidance, with participants placing symbols or images along the scale to express their views. Talking Mats has been used effectively to support individuals with intellectual disabilities in expressing themselves, including in discussions specifically related to food and drink preferences (Murphy & McKillop, 2017). It has been successfully implemented with populations such as adults with dementia (Murphy & Oliver, 2013) and students with intellectual disabilities (Samuelsson et al., 2024). We anticipate that the use of Talking Mats will further support participants’ comprehension of the range of responses available on the scale (e.g., don’t like, unsure) and facilitate an increase in independent expression during the assessment.

Figure 1.


An example of the Talking Mats approach using a 5-point visual scale.

One continuing challenge is communicating to our participants that we seek to learn about their preferences regarding food texture, not food taste. To better communicate this somewhat more abstract idea of texture, we have expanded this adaptation of Talking Mats beyond pictures and plan to incorporate tangible objects that represent different textures (see Figure 2). These tangibles include nonfood items such as Playdoh and sandpaper, which themselves have very distinct textures. Our goal is to support our participants in becoming familiar with the concept of texture, versus responding to how much they like the food exemplar. First, participants will complete an interactive Talking Mat using the tangible nonfood items to explore and express their texture preferences while feeling the items. This first Talking Mat will be completed alongside a researcher, who will take turns feeling the items and modeling their own preferences for certain textures. Next, the participant will complete a Talking Mat using only pictured food items, independently responding with regard to texture. This additional support will allow participants to express their preferences in a more detailed, consistent manner and improve the quality of information gathered in the TAQ. Exploring tactile materials in the TAQ could provide a multisensory approach that enhances comprehension and task completion. We anticipate that this multisensory approach will reduce participants’ confusion between taste and texture and lead to clearer, more consistent responses. By increasing task comprehension, we also expect to observe higher rates of task completion and greater independence in responses.

Figure 2.


An example of the Talking Mats training approach using tangible objects to represent different textures.

Other Support Strategies

We also developed two other strategies, which might not be considered traditional AAC, but that solved challenges that the traditional AAC was not yet addressing. These were the use of a plush mouth puppet to offer real-time modeling of swallow tasks and the use of decorated boxes to promote performance in the VFH task, described above.

Plush Mouth Puppet

Some of the swallow assessments involve tasks occurring inside the mouth, meaning that there is limited visual cueing available. For instance, the IOPI assessment involves participants pressing their tongue against a small air-filled bulb placed on the roof of their mouth, with the device measuring the pressure applied. Participants repeat this action three times for a few seconds, and the results reflect both the peak and average tongue strength. Although we presented participants with the video models as well as demonstration by the clinician, participants often still struggled to follow the instructions. In particular, participants had trouble understanding the directive to move their tongue upwards toward the roof of their mouth while pressing against the air-filled bulb. Instead, participants tended to push the bulb forward toward their top front teeth, bite the bulb, or were unsure how to move their tongue at all.

We first attempted a static visual to support comprehension of the task, using a clipart illustration depicting a mouth with a tongue pushing upwards toward the roof of the mouth. A blue arrow was included alongside the image to visually emphasize the upward movement required. Although this helped a little to clarify the motion, it was not as effective as hoped, likely because the participant still had to imagine the movement based on the image. We therefore added a three-dimensional plush mouth model. This was a soft, puppetlike model of the mouth in which the tongue is manipulated by the researcher’s hand so that the researcher could demonstrate the upward motion required for the IOPI. This allowed participants to actually watch the action of the tongue as it pressed the bulb to the roof of the mouth rather than having to imagine what was happening inside the mouth, offering participants a concrete example of the task. This support is helpful not only for cognitive understanding but also provides visual motor support for people with oral apraxia.

The plush mouth model was also a key tool in supporting the MASA, described earlier. Researchers used it to demonstrate the proper placement and movement of oral structures, such as the tongue and lips, during the assessment tasks. Once again, by providing a three-dimensional, tactile reference, the plush mouth allowed participants to visualize the movements of their own mouths. For example, the model was used to illustrate how the tongue should elevate during range-of-motion testing, offering a concrete example that reduced confusion and enhanced accuracy.

Decorated Boxes

In the VFH assessment, described earlier, we observed that the first few participants were resistant to having a blindfold put on or closing their eyes, which is a requirement for this task. This assessment is completed on both the tongue and the fingertip. Examiners were able to hide the trials when applying the filaments to the tongue; however, it was necessary to shield the trials when testing the fingertip; otherwise, the participant could “peek.” To solve this problem, two small decorated boxes were developed to block the participant’s view of their hand during the test. These boxes were customized with themes such as animals, space, or rainbows. In addition to occluding the participant’s view of the filament on their finger, this setup allowed participants to choose their preferred box. Choice making is a well-established strategy to promote engagement and, importantly, compliance (Gormley et al., 2023), and these benefits clearly accrued for our task. Allowing participants to select their preferred decorated box minimized distractions and compliance challenges, and because the boxes blocked the view of the hand, they eliminated the need for participants to close their eyes or be blindfolded during fingertip testing.

Discussion

This project offers examples of the potential of AAC and visual supports to improve participation and task completion in swallowing and oral somatosensory assessments. Three major takeaways emerged across all tasks: (a) the ease and simplicity of AAC strategies, (b) the impact on patient comprehension and engagement, and (c) the feasibility of application in health care settings.

Ease and Simplicity of AAC and Other Strategies

The implementation of simple AAC strategies demonstrates that effective communication supports can be cost effective, rather than relying solely on expensive software and equipment. High-tech equipment, such as speech-generating devices, is essential for some individuals and is often used as a primary mode of communication. Low-tech AAC strategies may also play a valuable role in supporting understanding and expression, either alongside high-tech systems or when a high-tech option is not available due to environmental constraints. Emphasizing a multimodal approach ensures greater flexibility and access across a range of research settings and tasks. Low-tech AAC played a critical role in supporting participants across a range of tasks. For example, static visual choices, including thumbs up/thumbs down images, reinforced participant autonomy by providing a clear and consistent means of expressing food preferences and the opportunity to decline a task. Although a verbal scale without visuals was used initially, the shift to Talking Mats facilitated food preference discussions. Additionally, although the plush mouth model and decorated boxes were unconventional, they illustrate the critical role that individualized, tangible aids can play.

Comprehension and Engagement

Video modeling was a particularly useful strategy to prepare participants for assessments and increase comprehension and engagement. Many participants directly referenced the videos during the study, which indicated that prerecorded demonstrations supported comprehension. Assessments such as the MASA and swallow frequency measurement, which involve complex motor tasks, benefited from the visual and auditory explanation and model before the session. Videos also allowed caregivers to provide additional explanations and opportunities for discussion prior to the session. When individuals fully understand the task and have accessible means to express responses, the data are more likely to reflect their true abilities and needs. By reducing these communication barriers, AAC and other supports lead to more accurate assessment results and ultimately guide more effective and efficient intervention and treatment planning.

Application in Health Care Settings

A key consideration for implementing AAC strategies in clinical practice is time. The strategies used in this project have a strong evidence base, are easily created, and require minimal resources while increasing the chances of compliance and accurate data. These procedures also reflect inclusive approaches required in health care. For example, prior to participation, individuals were given the opportunity to express their preferred communication methods and to request specific accommodations. During the consent process, researchers also checked for comprehension and offered space for participants to ask questions or request clarification. Low-tech supports such as Talking Mats and static choices required minimal time to develop but significantly enhanced communication and decision-making opportunities. Although video making requires an initial time investment, these resources are reusable across clients and benefit numerous individuals. This project demonstrates adaptable and accessible strategies that meaningfully enhance patient communication and comprehension while remaining feasible for clinicians to implement in practice. Ultimately, these strategies support patients’ communication while also providing tools to improve efficiency and ease of administration for clinicians.

Limitations and Future Directions

These observations require systematic research to determine their impact. Empirical studies comparing the efficacy of different visual supports (e.g., static images, video modeling) and examining their long-term impact are clearly needed. Future studies should include systematic data collection, such as measures of task accuracy, task completion rates, and comprehension checks. Additionally, participation measures, including engagement levels, duration of time spent on activities, the number and types of prompts from researchers, and the percentage of tasks attempted or completed, should be recorded. Research should also explore the generalizability of these supports to other populations. For example, many individuals with Down syndrome present with co-occurring conditions such as autism spectrum disorder, Down syndrome regression disorder, Alzheimer’s disease, hearing or vision loss, and other sensory or neurological differences. These co-diagnoses may affect both communication and swallowing, often requiring unique considerations when providing intervention. Although the current work did not systematically examine how co-occurring conditions influence AAC needs during swallowing assessments, this remains a critical direction for future research. Additionally, comparing high-tech versus low-tech AAC tools could reveal which are most effective in different contexts. Cross-cultural studies could offer insights into adapting visual supports for diverse populations, while research on caregivers’ roles may further enhance the effectiveness of these assessments. Investigating the use of visual supports in medical settings, such as for decision making and clinical assessment, could broaden their application across health care contexts. Finally, future research should explore how visual supports may enhance the process of informed consent, increasing both participation in and understanding of assessment and research protocols for individuals with intellectual disabilities.

Conclusions

The use of AAC strategies during swallowing and oral somatosensory assessments suggests the potential to improve patient understanding and engagement and may therefore improve the accuracy and effectiveness of these assessments and of intervention plans. The use of video modeling, low-tech supports, and other strategies highlights the importance of a multimodal and flexible approach. These observations provide guidance for the use of AAC in clinical swallowing applications.

Acknowledgments

This research was supported by the National Institutes of Health (NIH 1R01DC020622-01A1, Principal Investigator: Krista M. Wilkinson) through the INCLUDE (INvestigation of Co-occurring conditions across the Lifespan to Understand Down syndromE) Project.

Footnotes

Disclosure: The authors have declared that no competing financial or nonfinancial interests existed at the time of publication.

Ethics Statement

All study procedures were approved by the Institutional Review Board (IRB #00022372) at The Pennsylvania State University.

Data Availability Statement

Data sharing not applicable to this article as no data sets were generated or analyzed during this study.

References

  1. Abbeduto L, & McFadd ED (2021). Overview of multimodal AAC intervention across the life span for individuals with Down syndrome. In Wilkinson KM & Finestack LH (Eds.), Multimodal AAC for individuals with Down syndrome (pp. 11–37). Brookes.
  2. Abbeduto L, Warren SF, & Conners FA (2007). Language development in Down syndrome: From the prelinguistic period to the acquisition of literacy. Mental Retardation and Developmental Disabilities Research Reviews, 13(3), 247–261. 10.1002/mrdd.20158
  3. Babb S, Gormley J, McNaughton D, & Light J (2018). Enhancing independent participation within vocational activities for an adolescent with ASD using AAC video visual scene displays. Journal of Special Education Technology, 34(2), 120–132. 10.1177/0162643418795842
  4. Babb S, McNaughton D, Light J, & Caron J (2021). “Two friends spending time together”: The impact of video visual scene displays on peer social interaction for adolescents with autism spectrum disorder. Language, Speech, and Hearing Services in Schools, 52(4), 1095–1108. 10.1044/2021_LSHSS-21-00016
  5. Babb S, McNaughton D, Light J, Caron J, Wydner K, & Jung S (2020). Using AAC video visual scene displays to increase participation and communication within a volunteer activity for adolescents with complex communication needs. Augmentative and Alternative Communication, 36(1), 31–42. 10.1080/07434618.2020.1737966
  6. Baransky A, & Wilkinson K (2025). The use of video modeling to maximize the informed consent process [Unpublished manuscript]. Pennsylvania State University.
  7. Bartlett G, Blais R, Tamblyn R, Clermont RJ, & MacGibbon B (2008). Impact of patient communication problems on the risk of preventable adverse events in acute care settings. Canadian Medical Association Journal, 178(12), 1555–1562. 10.1503/cmaj.070690
  8. Bereznak S, Ayres K, Mechling L, & Alexander J (2012). Video self-prompting and mobile technology to increase daily living and vocational independence for students with autism spectrum disorders. Journal of Developmental and Physical Disabilities, 24(3), 269–285. 10.1007/s10882-012-9270-8
  9. Beukelman DR, & Light JC (2020). Augmentative and alternative communication: Supporting children and adults with complex communication needs (5th ed.). Brookes.
  10. Binger C, Maguire-Marshall M, & Kent-Walsh J (2011). Using aided AAC models, recasts, and contrastive targets to teach grammatical morphemes to children who use AAC. Journal of Speech, Language, and Hearing Research, 54(1), 160–176. 10.1044/1092-4388(2010/09-0163)
  11. Brown C, Tollefson N, Dunn W, Cromwell R, & Filion D (2001). The Adult Sensory Profile: Measuring patterns of sensory processing. American Journal of Occupational Therapy, 55(1), 75–82. 10.5014/ajot.55.1.75
  12. Carruthers H, Astin F, & Munro W (2017). Which alternative communication methods are effective for voiceless patients in intensive care units? A systematic review. Intensive and Critical Care Nursing, 42, 88–96. 10.1016/j.iccn.2017.03.003
  13. Chapin SE, McNaughton D, Light J, McCoy A, Caron J, & Lee DL (2022). The effects of AAC video visual scene display technology on the communicative turns of preschoolers with autism spectrum disorder. Assistive Technology, 34(5), 577–587. 10.1080/10400435.2021.1893235
  14. Chicoine B, Rivelli A, Fitzpatrick V, Chicoine L, Jia G, & Rzhetsky A (2021). Prevalence of common disease conditions in a large cohort of individuals with Down syndrome in the United States. Journal of Patient-Centered Research and Reviews, 8(2), 86–97. 10.17294/2330-0698.1824
  15. Costello J (2000). AAC intervention in the intensive care unit: The children’s hospital Boston model. Augmentative and Alternative Communication, 16(3), 137–153. 10.1080/07434610012331279004
  16. de Knegt N, Defrin R, Schuengel C, Lobbezoo F, Evenhuis H, & Scherder E (2015). Quantitative sensory testing of temperature, pain, and touch in adults with Down syndrome. Research in Developmental Disabilities, 47, 306–317. 10.1016/j.ridd.2015.08.016
  17. Essick GK, Afferica T, Aldershof B, Nestor J, Kelly D, & Whitsel B (1988). Human perioral directional sensitivity. Experimental Neurology, 100(3), 506–523. 10.1016/0014-4886(88)90035-0
  18. Etter NM, Miller OM, & Ballard KJ (2017). Clinically available assessment measures for lingual and labial somatosensation in healthy adults: Normative data and test reliability. American Journal of Speech-Language Pathology, 26(3), 982–990. 10.1044/2017_AJSLP-16-0151
  19. Etter NM, Schmauk N, & Neely KA (2023). Clinically measuring orofacial somatosensation in a cohort of healthy aging adults. American Journal of Speech-Language Pathology, 32(1), 306–315. 10.1044/2022_AJSLP-22-00078
  20. Fawcett AJ, & Peralego C (2009). Dyslexia, speech, and language: A practitioner’s handbook (2nd ed.). Wiley-Blackwell.
  21. Gormley J, McNaughton D, & Light J (2023). Supporting children’s communication of choices during inpatient rehabilitation: Effects of a mobile training for health care providers. American Journal of Speech-Language Pathology, 32(2), 545–564. 10.1044/2022_AJSLP-22-00200
  22. Harris MD, & Reichle J (2004). The impact of aided language stimulation on symbol comprehension and production in children with moderate cognitive disabilities. American Journal of Speech-Language Pathology, 13(2), 155–167. 10.1044/1058-0360(2004/016)
  23. Hemsley B, & Balandin S (2014). A metasynthesis of patient-provider communication in hospital for patients with severe communication disabilities: Informing new translational research. Augmentative and Alternative Communication, 30(4), 329–343. 10.3109/07434618.2014.955614
  24. Hunot C, Fildes A, Croker H, Llewellyn CH, Wardle J, & Beeken RJ (2016). Appetitive traits and relationships with BMI in adults: Development of the Adult Eating Behaviour Questionnaire. Appetite, 105, 356–363. 10.1016/j.appet.2016.05.024
  25. Kasari C, Kaiser A, Goods K, Nietfeld J, Mathy P, Landa R, Murphy S, & Almirall D (2014). Communication interventions for minimally verbal children with autism: A sequential multiple assignment randomized trial. Journal of the American Academy of Child and Adolescent Psychiatry, 53(6), 635–646. 10.1016/j.jaac.2014.01.019
  26. Kent RD, & Vorperian HK (2013). Speech impairment in Down syndrome: A review. Journal of Speech, Language, and Hearing Research, 56(1), 178–210. 10.1044/1092-4388(2012/12-0148)
  27. Kumin L (1994). Intelligibility of speech in children with Down syndrome in natural settings: Parents’ perspective. Perceptual and Motor Skills, 78(1), 307–313. 10.2466/pms.1994.78.1.307
  28. Landes SD, Stevens JD, & Turk MA (2020). Cause of death in adults with Down syndrome in the United States. Disability and Health Journal, 13(4), Article 100947. 10.1016/j.dhjo.2020.100947
  29. Laubscher E, & Wilkinson KM (2021). Overview of multimodal AAC intervention across the life span for individuals with Down syndrome. In Wilkinson KM & Finestack LH (Eds.), Multimodal AAC for individuals with Down syndrome (pp. 61–86). Brookes.
  30. Madhavan A, Lam L, Etter NM, & Wilkinson KM (2023). A biophysiological framework exploring factors affecting speech and swallowing in clinical populations: Focus on individuals with Down syndrome. Frontiers in Psychology, 14, Article 1085779. 10.3389/fpsyg.2023.1085779
  31. Mann G (2002). MASA, The Mann Assessment of Swallowing Ability (Vol. 1). Cengage Learning.
  32. Murphy J, & Cameron L (2008). The effectiveness of Talking Mats® with people with intellectual disability. British Journal of Learning Disabilities, 36(4), 232–241. 10.1111/j.1468-3156.2008.00490.x
  33. Murphy J, & McKillop J (2017). I don’t enjoy food like I used to: The views of people with dementia about mealtimes. Communication Matters Journal, 31(1), 27–29.
  34. Murphy J, & Oliver T (2013). The use of Talking Mats to support people with dementia and their carers to make decisions together. Health & Social Care in the Community, 21(2), 396–414. 10.1111/hsc.12005
  35. O’Neill T, Light J, & McNaughton D (2017). Videos with integrated AAC visual scene displays to enhance participation in community and vocational activities: Pilot case study with an adolescent with autism spectrum disorder. Perspectives of the ASHA Special Interest Groups, 2(12), 55–69. 10.1044/persp2.sig12.55
  36. Pandit C, & Fitzgerald DA (2012). Respiratory problems in children with Down syndrome. Journal of Paediatrics and Child Health, 48(3), E147–E152. 10.1111/j.1440-1754.2011.02077.x
  37. Park J, Bouck E, & Duenas A (2019). The effect of video modeling and video prompting interventions on individuals with intellectual disability: A systematic literature review. Journal of Special Education Technology, 34(1), 3–16. 10.1177/0162643418780464
  38. Romski MA, Sevcik RA, Adamson LB, Cheslock M, Smith A, Barker RM, & Bakeman R (2010). Randomized comparison of augmented and nonaugmented language interventions for toddlers with developmental delays and their parents. Journal of Speech, Language, and Hearing Research, 53(2), 350–364. 10.1044/1092-4388(2009/08-0156)
  39. Samuelsson J, Holmer E, Johnels JÅ, Palmqvist L, Heimann M, Reichenberg M, & Thunberg G (2024). My point of view: Students with intellectual and communicative disabilities express their views on speech and reading using Talking Mats. British Journal of Learning Disabilities, 52(1), 23–35. 10.1111/bld.12543
  40. Santiago R, Howard M, Dombrowski ND, Watters K, Volk MS, Nuss R, Costello JM, & Rahbar R (2020). Preoperative augmentative and alternative communication enhancement in pediatric tracheostomy. The Laryngoscope, 130(7), 1817–1822. 10.1002/lary.28288
  41. Santoro SL (2020). Supporting communication and self-advocacy related to special health and medical needs and services. In Wilkinson KM & Finestack L (Eds.), Multimodal AAC for individuals with Down syndrome (p. 240). Brookes.
  42. Sforza C, Dellavia C, Allievi C, Tommasi DG, & Ferrario VF (2012). Anthropometric indices of facial features in Down’s syndrome subjects. In Preedy V (Ed.), Handbook of anthropometry. Springer. 10.1007/978-1-4419-1788-1_98
  43. Smith CH, Teo Y, & Simpson S (2014). An observational study of adults with Down syndrome eating independently. Dysphagia, 29(1), 52–60. 10.1007/s00455-013-9479-4
  44. Snodgrass MR, Stoner JB, & Angell ME (2013). Teaching conceptually referenced core vocabulary for initial augmentative and alternative communication. Augmentative and Alternative Communication, 29(4), 322–333. 10.3109/07434618.2013.848932
  45. Van Boven RW, & Johnson KO (1994). The limit of tactile spatial resolution in humans: Grating orientation discrimination at the lip, tongue, and finger. Neurology, 44(12), 2361–2366. 10.1212/WNL.44.12.2361
  46. Wilkinson KM, & Finestack L (Eds.). (2021). Multi-modal augmentative and alternative communication for individuals with Down syndrome across the lifespan. Brookes.
