Author manuscript; available in PMC: 2020 Mar 1.
Published in final edited form as: Augment Altern Commun. 2019 Jan 21;35(1):13–25. doi: 10.1080/07434618.2018.1556730

New and Emerging Access Technologies for Adults with Complex Communication Needs and Severe Motor Impairments: State of the Science

Susan Koch Fager 1, Melanie Fried-Oken 2, Tom Jakobs 3, David R Beukelman 4
PMCID: PMC6436971  NIHMSID: NIHMS1010328  PMID: 30663899

Abstract

Individuals with complex communication needs often use alternative access technologies to control their augmentative and alternative communication (AAC) devices, their computers, and mobile technologies. While a range of access devices is available, many challenges continue to exist, particularly for those with severe motor-control limitations. For some, access options may not be readily available or access itself may be inaccurate and frustrating. For others, access may be available but only under optimal conditions and support. There is an urgent need to develop new options for individuals with severe motor impairments and to leverage existing technology to improve efficiency, increase accuracy, and decrease fatigue of access. This paper describes person-centered research and development activities related to new and emerging access technologies, with a particular focus on adults with acquired neurological conditions.

Keywords: Augmentative and alternative communication, Access, Complex communication needs, Locked-in, Severe motor impairment


During the past several decades, many people with acquired neurological conditions who experience complex communication needs and severe motor impairments have become effective communicators because of evolving augmentative and alternative communication (AAC) strategies and technologies (Ball, Fager, & Fried-Oken, 2012; Beukelman, Fager, Ball, & Dietz, 2007; Fager, Bardach, Russell, & Higginbotham, 2012; Fager, Beukelman, Fried-Oken, Jakobs, & Baker, 2012; Fried-Oken, Beukelman, & Hux, 2012; Higginbotham, Shane, Russell, & Caves, 2007; Shane, Blackstone, Vanderheiden, Williams, & DeRuyter, 2012). Through the years, the AAC community has developed and implemented a range of motor access options for AAC technology that allow users to speak messages, write text, and participate in communication/social media. These options include adapted keyboards; modifications to touch screens on computers or mobile technologies (e.g., delayed activation, activation on touch exit); single- or multiple-switch scanning; head tracking; and, most recently, eye tracking (Beukelman & Mirenda, 2013). These options, often referred to as access technologies, effectively address the motor access challenges of many people with complex communication needs. Unfortunately, some individuals remain poorly served. They cannot accurately access AAC technology; have inconsistent control depending on changes in their physical condition, environment, or support; and often can access AAC technology only in optimal situations. For example:

Mandy is a university student with athetoid cerebral palsy who uses eye-gaze to access her computer, write papers, send e-mails and text messages, participate in communication/social media, and converse face-to-face with her caregivers and fellow students. She accomplishes this in her dormitory room, where her eye-gaze device is optimally situated. When she is out of her dormitory room, her eye-gaze access is inconsistent and relatively ineffective as she moves about the campus from class to class, the library, and social situations. Mandy would benefit from adaptive access methods that better meet her motor challenges as well as her location, communication partner, and message needs.

Individuals with other types of motor impairments face similar challenges, resulting in varying degrees of access limitation depending on their ability to compensate for access requirements, either through their own motor capabilities or through the technical support of the people who provide care for them. These limitations range from inefficient access that causes fatigue to a complete lack of access (and communication) in some situations throughout the day.

The Challenges of Access

There are several ways to describe limitations of current AAC access technologies and opportunities to improve or enhance access options. For individuals with severe motor impairments (e.g., locked-in syndrome), an access option may not be readily available. This can occur for individuals with brainstem stroke, high-level spinal cord injury, or in the late stages of degenerative conditions such as amyotrophic lateral sclerosis (ALS) (Murguialday et al., 2011; Kruger, Teasell, Salter, Foley, & Hellings, 2007). For others, an access solution may be available but unreliable, causing frustration and limited use of technology to support communication. For still others, the use of a single access method over time can become difficult due to over-use (Blackstone, 1995; Fager, Bardach et al., 2012) or age-related changes (e.g., individuals with spastic cerebral palsy experiencing increasing neck pain while using a head-activated switch for scanning access to their AAC device as they age).

There is an urgent need to improve available access technologies. Typical access challenges include the need for optimal positioning of equipment and for consistent, skilled support by caregivers throughout the day (Ball, Nordness, Fager, Kersch, Pattee, & Beukelman, 2010; Beukelman et al., 2007; Fager, Bardach et al., 2012). For those who are active in a wide range of communication environments (e.g., home, school, outdoors, work site, doctor’s office, restaurant), variability in lighting and optimal positioning requirements often limits the use of AAC technology (Ball et al., 2010; Fager, Bardach et al., 2012; Fager, Beukelman et al., 2012). For individuals who find themselves in medical settings, where caregivers change frequently, the use of sophisticated access methods can be challenging if a consistent support system of caregivers trained in the set-up, positioning, calibration, and trouble-shooting of the access technology is not available. For example, precise positioning of switches may be required for successful use. This positioning may be compromised when individuals inadvertently shift position (e.g., due to a cough, sneeze, or muscle spasm) or as they are re-positioned throughout the day as part of their care (e.g., moved from a wheelchair to a recliner or bed).

Person-centered Design in Developing New Access Technologies

Developing technology to fill gaps in available access options requires the close involvement of individuals with complex communication needs and severe motor impairments, caregivers, clinical specialists (e.g., speech-language pathologists, occupational therapists), engineers, researchers, and manufacturers of AAC technology. Due to the complexity and uniqueness of the challenges that individuals with severe motor impairments exhibit, research and development activities often emerge out of individual case examples, which often illustrate a broader need. Technology is then developed based on the needs and skills of the target group and individualized incrementally, with ongoing feedback from all stakeholders to further refine and develop the access method.

This person-centered design process is consistent with theoretical frameworks employed in medical device and assistive technology development (Shah, Robinson, & Al Shawi, 2009). The process includes (a) early involvement of the individual who will be using the technology and research that investigates the human processes that impact access (e.g., motor performance, visual-cognitive processing), (b) development of prototypes with these human processes in mind (e.g., solutions that reduce motor demands and consider cognitive demands), (c) iterative development and refinement of technology prototypes, and (d) outcome data directly related to individual performance, perceptions, and preferences (Kübler et al., 2014; Shah et al., 2009; Fager et al., 2017; Peters, Mooney, Oken, & Fried-Oken, 2016; Wilkinson & De Angeli, 2014). Technology development typically occurs in stages, beginning with proof of concept or idea generation and prototype design, followed by testing and evaluation or proof of product, and finally implementation or proof of adoption (NIDILRR Development Framework, 2017; Shah et al., 2009). Throughout all stages of the development framework, input from all key stakeholders is a critical component of the successful development of complex devices such as access technologies (NIDILRR Development Framework, 2017; Shah et al., 2009).

This paper describes the research and development of new and emerging access technologies that begin with a clinical challenge and lead to a research-driven engineering discovery. The access technology solutions described illustrate different stages of development. Following person-centered design frameworks, case illustrations and/or group outcome results are provided to describe the ongoing development process. New and emerging technologies are introduced, with case illustrations used to describe the need for each technology and associated research to date. Future research and development directions are discussed. The individuals described in the case illustrations were involved in the research of the technology solutions but all identifiable information has been changed to preserve anonymity.

Although many of the technologies described may be of benefit to children, this paper will not address challenges related to learning, cognition, or language development that may require different solutions and strategies. The focus is on the following new and emerging access technologies for adults with acquired neurological conditions: movement-sensing technologies, brain-computer interface, multi-input strategies, supplemented speech recognition, and supplementation of access with partner input.

Movement-Sensing Technologies

A range of movement-sensing devices exists to track fitness and activity (e.g., wearable fitness monitors that incorporate accelerometers and gyroscopes that track activities such as walking and running). The development of this technology has been driven primarily by a market of individuals without disabilities; there have been no reports to date of individuals with severe motor impairments using the technology as an access method to AAC. However, leveraging this technology to detect unique movement patterns of individuals with severe motor impairments may provide a new genre of access devices for AAC. For example, once movement signals are identified, they often can be categorized (Haghi, Thurow, & Stoll, 2017; Hasan & Hongnian, 2017). For individuals with severe motor impairment, movement-sensing technology provides opportunities to identify and categorize movements as intentional or unintentional (e.g., null-model detection). The following case example illustrates an early prototype of a movement-sensing system that utilizes null-model detection, developed for an individual with severe motor impairment who relies on AAC.

Andrea had a severe brainstem stroke that left her with limited motor capabilities, restricted to small thumb and pinky finger movements on one hand. When her caregivers were by her side, they could watch her hand and identify these movements. When optimally positioned, she could access small micro-light switches using a custom switch mount and thereby control an alphabet array via step scanning (thumb movement to advance and pinky movement to select). However, if Andrea moved slightly out of position, she no longer had access to her switch system; if she coughed or laughed, she inadvertently activated the switch system, causing errors in her message construction. If her caregivers were in another room or distracted with other activities, some time would pass before they noticed that she could no longer access her switch system. All of this was extremely frustrating for Andrea.

Technology Solution

Colleagues at InvoTek, Inc. developed a novel, wearable sensor system that incorporated an accelerometer, a gyroscope, and a magnetometer (Fried-Oken, Fager, & Jakobs, 2016). This sensor array was “trainable” in that, given several repetitions of intentional and unintentional movements, it could “learn” when Andrea was intentionally activating her device and when her movements were unintentional (e.g., during a cough). Housing for the sensors was 3-D printed for a customized fit to her hand (see Figure 1). The device constructed for Andrea leveraged sensor technology in a way that addressed her specific access challenges. In this early prototype, the system required recorded movement samples from Andrea and personalized support from the engineering team to calculate the equations needed to recognize her movements. With advances in machine learning, this process has the potential to be automated, much like the initial calibration and setup incorporated by other access devices today.
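
To make the “trainable” behavior concrete, the sketch below illustrates one plausible way such a system could learn to separate intentional switch gestures from unintentional movements (the null model). The windowed features, the logistic-regression classifier, and the confidence threshold are assumptions introduced for exposition; they are not the algorithms used in the InvoTek prototype.

```python
# Illustrative sketch only: a minimal "trainable" movement detector in the spirit of the
# wearable sensor described above. Feature choices, classifier, and threshold are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(samples):
    """Summarize one window of IMU data (N x 9: accelerometer, gyroscope, magnetometer axes)."""
    samples = np.asarray(samples, dtype=float)
    return np.concatenate([
        samples.mean(axis=0),                       # average orientation/posture
        samples.std(axis=0),                        # movement "energy" per axis
        samples.max(axis=0) - samples.min(axis=0),  # range of motion per axis
    ])

def train_detector(intentional_windows, unintentional_windows):
    """Learn to separate intentional switch gestures from coughs/spasms ("null" movements)."""
    X = np.array([window_features(w) for w in intentional_windows + unintentional_windows])
    y = np.array([1] * len(intentional_windows) + [0] * len(unintentional_windows))
    return LogisticRegression(max_iter=1000).fit(X, y)

def is_switch_activation(model, window, threshold=0.9):
    """Emit a switch event only when the model is confident the movement was intentional."""
    prob = model.predict_proba([window_features(window)])[0, 1]
    return prob >= threshold
```

In such a scheme, a caregiver or clinician would record a small number of labeled examples of each movement type, and the system would suppress switch events for movements it judges unintentional.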

Figure 1: Sensor switch incorporating accelerometer, gyroscope, and magnetometer for position-independent access.

The wearable design and 3-D sensor measurement properties of the device eliminated the need for Andrea to be precisely positioned in order to access her switch. Precise positioning is often a substantial challenge for individuals with severe motor impairment and their caregivers: the individuals themselves often lack the motor ability to move back into position if they have shifted while using access technologies, and caregivers often need extensive training to position these devices optimally because a slight deviation can make access impossible. The 3-axis sensors allowed the device to function regardless of position. This feature increased the accuracy of Andrea’s access and alleviated the burden of optimal positioning that other access technologies had placed upon her caregivers. The sensor prototype developed for Andrea illustrates the potential to increase the accuracy and independence of access for individuals with severe motor impairment and to decrease the burden of optimal set-up for switch use by caregivers (Fried-Oken et al., 2016).

What Do We Know So Far?

There is limited information available about the use of wearable sensor technology to support access as described in this case illustration. The InvoTek prototype met a specific need for an individual with severe motor impairment and complex communication needs. Given the ubiquity of sensor technology in mainstream devices, exploration of the benefits of this technology as an access tool for individuals with severe motor impairments is emerging. For example, other studies have documented preliminary results of the use of capacitive-sensing technology for assistive technology and environmental control options (Fager et al., 2017; Singh, Nelson, Robucci, Patel, & Banerjee, 2015). These sensors have been embedded into fabrics that can be worn or placed within the environment. The sensors detect gestures or movements that an individual makes while near the sensor array (without having to touch the sensors). These gestures have been translated into environmental control commands (e.g., turning lights on or off) or used as a switch access option. Descriptions of the technology, stakeholder feedback in the early design process, and preliminary laboratory testing of the capacitive sensors have been reported (Fager et al., 2017; Singh et al., 2015). All stakeholders (individuals with motor impairments, family members, and caregivers) rated the technology as having high potential benefit as an access method for individuals with motor impairments, and preliminary trials by individuals with motor impairments resulted in a high level of gesture recognition accuracy despite diversity of motor capabilities. Use of this technology as an AAC access method by individuals with severe motor impairments has yet to be explored.

Future Directions

The process of creating the movement-recognition equations for the sensor system developed for Andrea must be automated. Machine learning algorithms hold promise for automating setup in a way that is reasonable and easily managed by caregivers and not excessively fatiguing for individuals with severe motor impairments. Additional research is needed on how well sensor technology identifies and filters unintentional movements. For example, it is unknown whether the technology can differentiate intentional from unintentional movements in individuals who exhibit extraneous movements (e.g., due to cerebral palsy). If this can be accomplished, sensor technology could vastly improve switch scanning for successful access to AAC devices and computers. There is a need for more extensive clinical evaluation to collect performance data using sensor technology under real-life conditions with a range of individuals with severe motor impairments, as well as person-centered data.

Brain-Computer Interface (BCI)

BCI is a promising, but somewhat elusive, access method for individuals with minimal movement capabilities who cannot rely on traditional methods for control of communication and computers. The role of BCI within AAC is becoming clearer, and the challenges for implementation of the technology better understood (Brumberg, Pitt, Mantie-Kozlowski, & Burnison, 2018). Adults with locked-in syndrome play a critical role as both participants and consultants within the research and development process because they have previously experienced communication competence and independent control over their environments, and can offer insights that drive person-centered designs, as in the following case example:

Kraig is a 37-year-old man who worked as a software engineer until he was diagnosed with ALS 4 years ago. He is now quadriplegic, relies on a ventilator, and cannot speak. Kraig has used many AAC access technologies, most recently single-switch access and an eye-tracking AAC device; however, he can no longer control the device due to inconsistent, unreliable ocular motility and loss of motor function for switch access. He uses small horizontal eye movements to answer yes/no and other binary-choice questions, but these subtle signals can be difficult for partners to recognize and interpret. Kraig lives in an adult care home. He is an avid sports fan and spends many hours sitting in a wheelchair in front of any televised competition. Kraig’s parents recently saw a television program about brain-computer interfaces and asked his physiatrist if BCI technology might help him regain reliable access to communication.

Technology Solution

There are a number of BCI systems that have been developed for communication (Akcakaya et al., 2014). Most are matrix spellers that rely on the P300 brain wave for intent selection (Wolpaw, Birbaumer, McFarland, Pfurtscheller, & Vaughan, 2002). In the matrix speller, letters are presented in an alphabetic grid with random flashes to letter boxes. Individuals select letters by focusing on the flashing letter of choice and their P300 waves are averaged for the selection (Sellers & Donchin, 2006). The P300 wave is elicited after a novel or surprise stimulus appears. The intended letter is the surprise or novel stimulus in a series of highlighted letters on a grid. The random flashing sequence on the grid is referred to as an oddball paradigm (Farwell & Donchin, 1988).
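
For readers unfamiliar with how a selection is inferred from averaged P300 responses, the sketch below shows the basic decoding step of a matrix speller in the classic row/column flashing arrangement: the EEG epochs following each row and column flash are averaged, and the cell at the intersection of the strongest row and column responses is taken as the intended letter. The sampling rate, scoring window, and array layout are illustrative assumptions, not the parameters of any particular speller.

```python
# Minimal sketch of P300 matrix-speller decoding; shapes and the scoring window are assumptions.
import numpy as np

def p300_score(epochs):
    """Average the epochs for one row or column and take the mean amplitude in an
    assumed P300 window (~300-450 ms post-stimulus at 100 Hz sampling) as an evidence score."""
    avg = np.mean(epochs, axis=0)      # epochs: (n_flashes, n_timepoints)
    return float(avg[30:45].mean())    # samples 30-45 ≈ 300-450 ms at 100 Hz

def decode_letter(grid, row_epochs, col_epochs):
    """Pick the grid cell at the intersection of the best-scoring row and column."""
    best_row = max(range(len(grid)), key=lambda r: p300_score(row_epochs[r]))
    best_col = max(range(len(grid[0])), key=lambda c: p300_score(col_epochs[c]))
    return grid[best_row][best_col]
```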

The RSVP Keyboard is a prototype BCI system that uses the P300 brain wave with a different selection paradigm: a Rapid Serial Visual Presentation (RSVP) of individual letters. One large letter at a time is presented on the screen for 400 milliseconds (or less), thus reducing the visual-perceptual demands of matrix-based BCI displays (Oken et al., 2014). When the intended letter appears, it serves as the surprise or novel stimulus, and the resulting P300 waves are averaged together so that a selection can be inferred. The same brainwave, the P300, is elicited for both the RSVP Keyboard and the matrix spellers, but the end-user selects letters by looking at different letter displays. The RSVP Keyboard is unique in that it fuses the P300 wave with a statistical language model (an algorithm that computes the probability of a sentence or sequence of words) for spelling by individuals with locked-in syndrome, thereby improving the probability of accurate letter selections (Moghadamfalahi et al., 2015).
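
The fusion of EEG evidence with a language model can be summarized as a simple Bayesian update: the posterior probability of each candidate letter is proportional to the EEG evidence for that letter multiplied by the language-model prior. The sketch below illustrates this idea only; the probability values, normalization, and confidence threshold are assumptions for exposition and do not reflect the RSVP Keyboard’s actual signal processing.

```python
# Illustrative fusion of EEG evidence with a language-model prior; all values are placeholders.
import string

def fuse_evidence(eeg_likelihood, lm_prior):
    """Posterior over letters is proportional to EEG likelihood times language-model prior."""
    scores = {c: eeg_likelihood.get(c, 1e-6) * lm_prior.get(c, 1e-6)
              for c in string.ascii_lowercase}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

def select_letter(posterior, confidence=0.9):
    """Commit to a letter only once the fused posterior is confident; otherwise keep presenting."""
    best = max(posterior, key=posterior.get)
    return best if posterior[best] >= confidence else None

# Example: after "th" the language model strongly expects "e", so the prior shifts the
# fused posterior toward "e" even when the EEG evidence alone is ambiguous.
lm_prior = {"e": 0.6, "a": 0.2, "o": 0.1}
eeg_likelihood = {"e": 0.3, "a": 0.25, "o": 0.2}
posterior = fuse_evidence(eeg_likelihood, lm_prior)
```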

The Shuffle Speller prototype is another spelling interface for BCI systems; it acquires and processes a different brain wave, the steady state visual evoked potential, to infer selection intent. Letters move (or shuffle) over a screen based on their probabilities. Color intensities and letter locations are animated until the system is confident of the speller’s intended selection (Higger et al., 2017). Yet another example of a BCI speller is the Berlin BCI, which presents letters in a nested circular fashion and analyzes another brainwave, the sensory-motor rhythm, for selection intent (Blankertz et al., 2006). Finally, some spelling BCIs use the more traditional AAC row-column scanning display and measure the sensory-motor rhythm as end-users are asked to imagine moving their limbs for selection (Brumberg, Burnison, & Pitt, 2016; Scherer et al., 2015).

What Do We Know So Far?

There are reports of BCI systems that have been used successfully by individuals with severe physical disabilities resulting from ALS (Sellers, Ryan, & Hauser, 2014; Sellers, Vaughn, & Wolpaw, 2010; Nijboer et al., 2008). A range of letter displays, brain signals, and stimulus modalities has been implemented across BCI systems. McCane et al. (2014) introduced a P300-BCI matrix speller to 25 individuals with ALS in their homes and demonstrated that most of the participants who had severe motor impairments could use a visual P300-BCI for communication. A recent publication (Wolpaw et al., 2018) followed 39 individuals with ALS who met study criteria for BCI home implementation. From this initial cohort, 14 participants became home users of the noninvasive P300 Wadsworth BCI system, and seven chose to continue using the system after the 18-month study ended.

Combaz and colleagues (2013) compared two spelling BCIs that analyzed different brainwaves (the P300 and the steady state visual evoked response) in seven patients with incomplete locked-in syndrome. They found increased accuracy and satisfaction and reduced mental workload when typing was accomplished with the steady state visual evoked response compared to the P300-based system. Kaufmann, Holz, and Kübler (2013) described a case study of a patient with complete locked-in syndrome who tried spelling with letters presented in different modalities. First, she tried spelling with letters presented visually on letter grids of different sizes; next she tried an auditory BCI, with letters presented one at a time through headphones to both ears at the same volume for all stimuli; and finally she tried spelling in a tactile mode, which involved placing four tactors on her left arm, with six letters of the alphabet grouped together for each tactor. She attempted to spell in a partner-scanning approach by responding to the tactor that produced a sensory stimulus and represented the letter she chose. Unfortunately, with event-related potentials for intent selection, the BCI system was not successful for communication in any modality. Oken et al. (2014) described performance with the previously mentioned RSVP Keyboard, where only one letter at a time is presented on the screen. Nine participants with locked-in syndrome and nine without disabilities (matched for age, gender, and years of education) used the keyboard. On average, as expected, the individuals without disabilities had substantially higher scores on a copy-spelling task, but all participants with locked-in syndrome could complete the copy-spelling task for predictable words.

Despite these positive preliminary results, enthusiasm for BCI implementation is dampened somewhat by the challenges faced when translating the technology from the laboratory to practical application (Chavarriaga, Fried-Oken, Kleih, Lotte, & Scherer, 2017). Many of the hardware, software, and training obstacles that were discussed 5 years ago (Fager, Beukelman et al., 2012) remain today. Noninvasive EEG-based BCIs have not been readily implemented because it is still difficult to acquire a robust and reliable brain signal. Long calibration and training sessions are needed to teach the BCI to interpret brain signals and intent. The cognitive demands for attention, memory, and executive function are significant for extended use, though a comprehensive evaluation of the cognitive demands of BCI has yet to be completed. Currently there is interest in “human-in-the-loop” systems, in which the machine learns an individual’s characteristics and adjusts signal acquisition and processing algorithms to optimize the system for each person (Lotte, Larrue, & Muhl, 2013). This is a closed-loop system between the computer and the human, in which the computer relies on input from the human to improve its intelligence and decision making. Within BCI, human feedback in response to different tasks and training paradigms will improve the signal processing and the robustness with which the computer recognizes the correct intent for letter and word selection.

Future Directions

BCIs for communication are becoming a reality, and there is an international, multidisciplinary research community dedicated to helping individuals with minimal movement capabilities use BCI to produce spoken output and control a computer cursor for written communication and social interaction. The ultimate goal of this technology is the transmission of language, either written or spoken, directly from brain signals, avoiding the need to operate an AAC device. The vision is that an individual would think of a word and the BCI would directly decode brain activity into spoken communication (Chakrabarti, Sandberg, Brumberg, & Krusienski, 2015). We are far from achieving that goal, though there has been research on decoding phonemes directly from intracortical microelectrodes (Herff et al., 2015; Kellis et al., 2010; Mugler et al., 2014) as well as from non-invasive approaches (Brumberg, Pitt, & Burnison, 2018). It is unclear at this time whether non-invasive BCI (which requires sophisticated signal processing to detect the correct brain waves through the scalp) or invasive BCI (which places electrodes directly on the brain or dura) will optimize BCI access options. Initial trials of intracortical BCI for neural point-and-click communication by individuals with locked-in syndrome have been reported by a number of BCI labs (e.g., Bacher et al., 2015; Vansteensel et al., 2016).

The integration of BCI with eye-tracking technologies is another promising direction for research that leverages current assistive technologies. It is possible that each person with locked-in syndrome will present with a constellation of sensory, motor, and cognitive skills that will define the type of BCI system (i.e., auditory or visual BCI), the type of letter display, and the choice of brainwave that maximizes function and success. A scientific discussion about success and priorities can only occur once there is consensus on the desired outcomes and how to measure them. For this population, there are many stakeholders who present with a range of needs and priorities, all of which must be met to ensure successful implementation so that practical usage of this new technology can become a reality (Wolpaw et al., 2018; Andresen, Fried-Oken, Peters, & Patrick, 2015).

Multi-Input Strategies: Eye Tracking Plus Switch Scanning

Eye tracking and switch scanning are currently available as access methods to AAC devices and computers with specialized software. Using a direct pointing method such as eye tracking can be an efficient way to get to a general location on an AAC device interface or computer. Using another access method such as switch scanning can then be an efficient way to make a discrete selection once the eye tracker has narrowed the field to be scanned to a specific location. There are potential benefits to combining these two access methods. First, eye tracking alone can be challenging when selecting small targets. Lighting (e.g., bright lights, outdoor lighting), positioning (e.g., re-positioning requiring frequent re-calibration), and physical conditions of the eye (e.g., dry eyes) can pose challenges to continuous use of eye tracking, and these problems often result in decreased accuracy of access (Ball et al., 2010; Fager, Bardach et al., 2012; Fager, Beukelman et al., 2012). Due to the severity of oculomotor control deficits, some individuals cannot rely on eye tracking alone for consistent access. Second, switch scanning can be slow, and the repetitive movements needed to activate a switch may be over-taxing. Technology that combines the best aspects of both methods could be beneficial for those who are challenged by using either access method individually, as illustrated in the following case example:

Lisa sustained a brainstem stroke and was undergoing rehabilitation at a long-term acute care hospital. Lisa had severe/profound dysarthria, limited movement in her left hand, and significant oculomotor control impairments and diplopia (double-vision). She applied spot patches to either of her glasses lenses periodically throughout the day to eliminate the impact of her double vision when reading and focusing on computer-based tasks. The severity of her oculomotor deficits and the periodic spot patching which obscured one of her eyes made eye-tracking access inconsistent, inaccurate, and frustrating. Switch-scanning with her left hand was possible, but she fatigued quickly due to her early stage of recovery and emerging motor capabilities, limiting her ability to use this access method to support her communication needs throughout the day.

Technology Solution

Lisa trialed a multi-input prototype that integrated eye tracking and switch scanning (Fager, Jakobs, & Sorenson, 2018). The prototype worked in the following manner: First, as she focused on her target (e.g., a word in the word prediction list or a letter on an onscreen keyboard) with her eyes, small groupings of words or letters were highlighted. Once the target word or letter was highlighted within the group, she activated her switch. Each word or letter within that grouping was then scanned. When the target letter or word was reached, Lisa selected it with her switch. The number of letters or words that were highlighted could be adjusted based on an individual’s eye-tracking capabilities. This approach was beneficial for Lisa because she could use gross eye tracking to narrow down the set of letters or words to be scanned, which, in turn, decreased her physical fatigue. See https://rerc-aac.psu.edu/development/d1-developing-multimodal-technologies-to-improve-access/ for a brief video illustration of the multi-input prototype.
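
The sketch below illustrates the two-stage selection logic described above: gross eye gaze narrows the field to a small group, and switch activations then scan and select within that group. The group size, grouping scheme, and scan representation are simplifying assumptions for exposition, not the prototype’s actual implementation.

```python
# Illustrative two-stage selection: coarse gaze picks a group, switch scanning picks within it.
from typing import List

def group_targets(targets: List[str], group_size: int = 4) -> List[List[str]]:
    """Split the on-screen keys/words into small groups that gaze only needs to land near."""
    return [targets[i:i + group_size] for i in range(0, len(targets), group_size)]

def gaze_highlight(groups: List[List[str]], gaze_index: int) -> List[str]:
    """Stage 1: gross eye tracking highlights the group containing the gaze location."""
    return groups[min(gaze_index // len(groups[0]), len(groups) - 1)]

def switch_scan_select(group: List[str], scan_step: int) -> str:
    """Stage 2: scanning steps through the group; the switch press at scan_step selects that item."""
    return group[scan_step % len(group)]

# Example: gaze lands near "m" on an ABC keyboard, then the scan is stopped at the second item.
letters = [chr(c) for c in range(ord("a"), ord("z") + 1)]
group = gaze_highlight(group_targets(letters), gaze_index=12)   # the "m n o p" group
selected = switch_scan_select(group, scan_step=1)               # -> "n"
```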

What Do We Know So Far?

Preliminary investigation of the multi-input (eye tracking plus switch scanning) approach has been initiated (Fager et al., 2018). A single-subject case study comparing performance with eye tracking only and with the multi-input approach was completed. Using an alternating treatment design, the participant in the case illustration completed a total of 10 sessions, five with each access option (single access using eye tracking, and multi-input using eye tracking plus switch scanning) randomly assigned. During the sessions, the participant spelled randomly selected sentences using an onscreen keyboard display (ABC layout) accessed either via eye tracking only or via the multi-input prototype. Data were collected on accurate first attempts to target letters per sentence and on the total number of errors per sentence. Results indicated that the participant averaged 63% first-attempt accuracy across sentences using eye tracking only and 91% using the multi-input prototype. First-attempt accuracy (%) across sessions and access methods is displayed in Figure 2. Additionally, an average of eight errors per sentence occurred using eye tracking only, and an average of two errors per sentence occurred using the multi-input prototype. The number of errors for each session and each access method is displayed in Figure 3. Rate data were not collected for this participant because she struggled to make accurate selections and corrections using the eye-tracking-only access method; she was therefore allowed to abandon attempts to correct errors and move on to the next target letter to reduce her fatigue and overall frustration during the eye-tracking-only trials.

Figure 2: First attempt accuracy per sentence across alternating conditions (eye tracking only vs. eye-tracking plus switch scanning [multi-input]).

Figure 3: Errors per sentence across alternating conditions (eye tracking only vs. eye-tracking plus switch scanning [multi-input]).

Other researchers are investigating the use of eye gaze coupled with scanning. Biswas and Langdon (2011, 2013) compared eye tracking alone to a combined eye tracking and switch-scanning access method for cursor control on a computer interface for navigation. The eye tracker moved the cursor to a general location, and then the system scanned various mouse direction options to further move and refine the location of the cursor. Results of a study with individuals without disabilities indicated that the combined eye tracking and scanning access method was less strenuous than eye tracking alone and faster than scanning access only.

Research into combinations of other access methods is also emerging. Sahadat, Alreja, and Ghovanloo (2018) reported preliminary results of a simultaneous, multimodal computer access method that uses tongue movement, head tracking, and speech recognition. Head tracking was used to move a computer cursor to a general location on a computer interface; tongue movement (captured with a head-mounted device that tracked a magnetic tracer attached to the tongue) was used to execute mouse functions such as click-and-hold, right click, and left click; and speech recognition was used for text generation. Although this technology was described as being developed for individuals with tetraplegia, only participants without disabilities trialed the technology in the study, using the prototype to perform a computer search and send an email. Results indicated improvements in task completion rates with practice but overall slower task completion using the multimodal prototype than using a standard keyboard and mouse for these individuals without physical impairments. Although these prototype examples demonstrate exploration into multi-input strategies for access, the inclusion of individuals with motor impairments in future research and development efforts will be essential to fully understand the benefits and challenges of these technology solutions.

Future Directions

Lisa’s case illustration demonstrates the use of a multi-input strategy for an individual who had challenges using each access method individually. Other studies examining a similar use of eye tracking and switch scanning (Biswas & Langdon, 2011, 2013) demonstrate promising results. However, further investigation with individuals who have severe motor impairments is warranted to fully understand when combinations of access methods may be most appropriate and beneficial. For example, an otherwise successful access method may become less accurate at times (e.g., during a hospitalization when positioning challenges are present, or when specific lighting environments interfere with successful calibration and use of an access method such as eye tracking). A multi-input approach may be useful to allow individuals to rely on their familiar access method (eye tracking) with additional support (switch scanning) to reduce new barriers in challenging conditions. Further examination by individuals with disabilities is critical at this point in development to understand the potential benefit of this approach for a wide range of individuals with severe motor impairments. Additional combinations of access methods (e.g., touch access plus switch scanning; eye tracking plus head tracking) need to be explored. The cognitive and learning demands of using multiple access options need to be investigated across different ages and with individuals who have cognitive challenges (e.g., severe traumatic brain injury impacting new learning, attention, and memory).

Supplemented Speech Recognition

Many individuals with severe motor impairment also have co-occurring speech impairments, such as dysarthria, that impact their ability to successfully use standard speech-recognition technologies (Young & Mihailidis, 2010; Hamidi, Baljko, Livingston, & Spalteholz, 2010; Christensen, Cunningham, Fox, Green, & Hain, 2012). Some researchers have attempted to design algorithms to recognize specific characteristics of dysarthric speech (Caballero-Morales & Trujillo-Romero, 2014; Mengistu & Rudzicz, 2011). However, speech impairments can result from a wide range of conditions (e.g., cerebral palsy, traumatic brain injury, brainstem stroke, multiple sclerosis, Guillain-Barré syndrome), dysarthria types (e.g., flaccid, spastic, mixed, hyperkinetic), and severity levels, all of which are challenges for computer algorithms to manage effectively. High speech recognition rates (80% and above) have primarily been documented for those with mild dysarthria (Hux, Rankin-Erickson, Manasse, & Lauritzen, 2009) or for recognition of small vocabulary sets (Hawley et al., 2007; Judge, Robertson, Hawley, & Enderby, 2009). Although speech recognition use by individuals without speech impairments has grown as a result of consumer-based applications such as Alexa1™ or Siri2™, the technology has been of limited benefit to those with severe speech impairments. Consider the following case illustration:

Mark has cerebral palsy and severe spastic dysarthria. He uses his natural speech to communicate with close friends and family and uses a text-to-speech AAC device to communicate with unfamiliar listeners. Due to Mark’s motor impairments, his AAC device requires a key guard and modifications to the touch screen to decrease accidental activation. Mark complains that the access method to his text-to-speech AAC device is slow and laborious. He often marvels at speech-recognition technology and states that he would love to be able to use his natural speech to decrease the motor demands of using his AAC device. However, Mark has tried many commercially available speech recognition systems with limited success due to the severity of his dysarthria.

Technology Solution

Leveraging other sources of information to supplement the speech signal may benefit individuals like Mark, who have severe speech impairments. One possibility is the Supplemented Speech Recognition (SSR)3 system (Hosom, Jakobs, Baker, & Fager, 2010). SSR combines large-vocabulary speech recognition algorithms designed for dysarthric speech, first-letter supplementation, language models, and word prediction. Individuals customize the system to the specific characteristics of their speech prior to use through a training protocol. With SSR, the user first types the first letter of the target word on a keyboard or selects it from an onscreen keyboard. The SSR system then provides an auditory prompt (e.g., a “beep” sound) that signals the individual to speak the target word. After the target word is spoken, the SSR (a) types the word the speech-recognition algorithm recognizes into the line of text, and (b) provides the next five probable word choices in the word prediction boxes. The word prediction offers word choices based on the language model and information from the speech recognition algorithm. If the speech recognition algorithm cannot recognize the speech input, the system leaves the line of text blank and only provides word options in the word prediction boxes. If the speech-recognition algorithm accurately recognizes the target word, the user then types the first letter of the next target word. If the word entered into the line of text is incorrect, or if the line of text is blank, the user either selects the target word from the word prediction boxes or spells the target word letter-by-letter. A full description of SSR and details of the speech recognition algorithm development are available in Hosom et al. (2010) and Fager, Beukelman, Jakobs, and Hosom (2010). A video illustration of SSR can be viewed at https://www.invotek.org/collections/products/products/speech-recognition-software-with-desktop-mic.
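
To make the interplay of first-letter supplementation, speech recognition, and word prediction concrete, the following sketch shows one plausible way a typed first letter could constrain and re-rank candidate words. The scoring, the weighting of the language model, and the confidence cutoff for filling the text line are assumptions for illustration and are not drawn from the SSR system’s published algorithms.

```python
# Illustrative first-letter-supplemented word ranking; weights and thresholds are assumptions.
def filter_by_first_letter(hypotheses, first_letter):
    """Keep only candidate words consistent with the typed first letter."""
    return {w: p for w, p in hypotheses.items() if w.startswith(first_letter)}

def rank_predictions(asr_hypotheses, lm_predictions, first_letter, n_slots=5):
    """Combine speech-recognition scores with language-model scores for the prediction boxes."""
    candidates = {}
    for word, p in filter_by_first_letter(asr_hypotheses, first_letter).items():
        candidates[word] = candidates.get(word, 0.0) + p
    for word, p in filter_by_first_letter(lm_predictions, first_letter).items():
        candidates[word] = candidates.get(word, 0.0) + 0.5 * p   # LM weighting is an assumption
    ranked = sorted(candidates, key=candidates.get, reverse=True)
    # Fill the text line only when one candidate is clearly supported; otherwise leave it blank.
    best = ranked[0] if candidates and candidates[ranked[0]] > 0.5 else None
    return best, ranked[:n_slots]

# Example: the user types "t" and then speaks "tomorrow"; "tomorrow" fills the text line and
# other t- words fill the prediction boxes.
asr = {"tomorrow": 0.7, "tornado": 0.2}
lm = {"the": 0.4, "tomorrow": 0.3, "time": 0.2}
best_word, slots = rank_predictions(asr, lm, "t")
```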

What Do We Know So Far?

Fager et al. (2010) examined the effect of SSR on keystroke savings because the system was originally designed to be an assistive writing tool that reduces physical access demands for individuals with motor impairments and severe dysarthria who use keyboards or touchscreens. A keystroke savings of 68% for typical sentences was documented compared to standard text entry. Although these preliminary results demonstrated that SSR might reduce access demands for individuals with severe motor impairments and dysarthria through substantial keystroke savings, further investigation is needed to understand the performance of individuals with a range of dysarthria types and severities, the cognitive processing load required to use SSR, and the impact of using this technology in a range of communication settings.
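
Keystroke savings is commonly defined as the percent reduction in the number of selections relative to typing every character; the small sketch below illustrates the calculation. The example numbers are arbitrary (they are not data from Fager et al., 2010), and the exact metric used in that study may differ in detail.

```python
# A minimal sketch of a common keystroke-savings calculation; the study's metric may differ.
def keystroke_savings(chars_full_typing: int, selections_used: int) -> float:
    """Percent reduction in selections relative to typing every character."""
    return 100.0 * (chars_full_typing - selections_used) / chars_full_typing

# Example: a 50-character sentence produced with 16 selections (first letters plus
# word-prediction picks) yields a savings of 68%.
savings = keystroke_savings(50, 16)   # -> 68.0
```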

Future Directions

Some individuals with complex communication needs prefer to use their natural speech. Coupling other input modalities (e.g., gesture cues, word and topic supplementation) with improved speech recognition may lead to more efficient and preferred access methods. Including individuals with severe speech impairments in the design of new speech-controlled technologies is important because they have been left behind in the advancements of current, commercially available technologies even though they could benefit extensively from improvements in speech recognition.

Supplementing Access with Partner Input

Literate adults who rely on single-switch scanning for text entry often experience slow message production rates. Some may use word prediction in an attempt to increase the efficiency of text entry (Lesher, Moulton, & Higginbotham, 1998). However, words that are infrequent or not within the language model’s word database, including proper names, acronyms, or abbreviations, often do not show up in the system, requiring full typing (Roark, Fried-Oken, & Gibbons, 2015). In an attempt to decrease effort and/or increase the rate of communication, familiar conversational partners often provide spoken word choices to the person using the AAC device during message production, but doing so violates the production modality: while the user produces written words, the familiar partner suggests spoken words. The dyad must decide whether to take the time for the message to be written in its entirety or to stop mid-typing. This approach creates more efficient but awkward text entries, with violations of both written language and spoken conversation rules. These conversational challenges are common for people who rely on AAC and their communication partners (Blackstone, Williams, & Wilkins, 2007).

Although word supplementation by partners might help with typing speed, people who rely on single-switch scanning are faced with additional challenges because partners often disengage from conversation during long, inactive waiting times. Lack of engagement is a serious obstacle to successful conversation using AAC (Hoag, Bedrosian, McCoy, & Johnson, 2004). Because of the slow interactions and timing violations, the conversation partner may lose focus and interest in the exchange, resulting in shorter or less satisfying conversations than those experienced by peers using spoken language (Higginbotham & Wilkins, 1999). Conversation management strategies are required so that communication partners can more effectively engage and participate in conversations with people who rely on switch-scanning communication devices. With word prediction and wireless connections, new forms of vocabulary supplementation are now possible to support communication for people who prepare messages with speech-generating devices, as in the following case example:

Pam is a 45-year-old woman with severe dysarthria and quadriplegia secondary to spastic cerebral palsy. Pam relies on an AAC device with single-switch scanning for conversations when her speech is not understood. She has much to talk about, as she is employed in the universal design laboratory at the local university. Unfortunately, students often disengage from conversation because of the extended time it takes her to form messages. Even with the system’s word prediction, Pam is limited in her message length. She often complains that she cannot teach new techniques, ask questions, or give her opinion in a timely fashion in a group discussion.

Technology Solution

The Smart Predict3 app is a novel dual-AAC application designed to support spelling-based message generation (see https://rerc-aac.psu.edu/development/d3-developing-a-smart-predictor-app-for-aac-conversation/ for a brief video illustration). The user’s app resembles a spelling-based AAC device. It includes a keyboard that is organized to meet the typist’s preferences (e.g., QWERTY, alphabetical, or frequency-of-occurrence letter display) and a horizontal word prediction list of five word choices that sits between the keyboard and a message bar. Instead of relying only on word predictions generated by the language model (e.g., words predicted based on rules of grammar, words used frequently), word suggestions from the communication partner are integrated into the five choices. There is a bidirectional wireless connection between the AAC app and a second partner app. The partner app displays a miniature copy of the AAC app’s interface, enabling the partner to see messages as they are produced on the AAC app. Viewing the AAC app provides contextual information so that the communication partner can follow message formulation in real time. At any time during message production, he or she can type in a new word or phrase, which is immediately sent to the AAC app and subsequently appears in the five-word prediction line. If a whole phrase is sent, the first word of the phrase appears initially. If the person using the AAC app selects this word, the remaining words within the phrase are presented in the word prediction area as a single, multiword prediction. The text chosen from the prediction line appends to the message bar. If the communication partner’s words are not chosen, the individual using the app simply continues to type, and the word prediction area returns to displaying five words based on the original language model and future partner supplementations.

Although an unchosen phrase predicted by the communication partner disappears from the user interface display, it is not discarded by the AAC app. Instead, the software compares the linguistic similarity of each word in the phrase predicted by the partner to the words typed on the AAC app. This enables “near miss” predictions to still have value to the individual using AAC. For example, if the communication partner predicts “Her child laughed and laughed” and the individual using the AAC device rejects the word “child” and instead writes “daughter,” the Smart Predict system can still offer “laughed and laughed” as a phrase after “daughter.”
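
The sketch below illustrates, under simplifying assumptions, the two behaviors just described: merging a partner’s suggestion into the prediction line and keeping the tail of a partner phrase available after a “near miss” substitution. The data structures and matching rule are hypothetical and are not the Smart Predict app’s actual implementation.

```python
# Illustrative partner-supplemented prediction; the merging and matching rules are assumptions.
def merge_predictions(lm_words, partner_phrase, n_slots=5):
    """Put the partner's next suggested word at the head of the five-slot prediction line."""
    partner_word = partner_phrase[0] if partner_phrase else None
    slots = ([partner_word] if partner_word else []) + [w for w in lm_words if w != partner_word]
    return slots[:n_slots]

def remaining_phrase(partner_phrase, typed_words):
    """Offer the tail of a partner phrase whose early words were "near misses" (e.g., child -> daughter)."""
    for i, word in enumerate(partner_phrase):
        if i < len(typed_words) and typed_words[i] != word:
            continue                        # tolerate a substituted word rather than discarding the phrase
        if i >= len(typed_words):
            return partner_phrase[i:]       # the rest of the phrase is still worth predicting
    return []

# Example: the partner sends "child laughed and laughed"; the writer types "daughter" instead,
# and "laughed and laughed" remains available as a multiword prediction.
slots = merge_predictions(["the", "her", "his", "she", "a"], ["child", "laughed", "and", "laughed"])
tail = remaining_phrase(["child", "laughed", "and", "laughed"], ["daughter"])
```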

Smart Predict builds on the concept of co-construction, the joint formulation of messages that is a natural part of many spoken interactions (Engelke & Higginbotham, 2013). Communication is viewed as a cooperative effort between the two partners, one with a communication impairment, the second without an impairment. Support provided by the unimpaired partner can be considered vocabulary supplementation, a strategy that improves message transmission (Hanson, Beukelman, & Yorkston, 2013). In the context of Smart Predict, the meaning or intent of a message is established by the person with complex communication needs and joint formulation occurs with vocabulary supplementation. Co-construction, with the dual-apps, occurs in the same written modality that is produced with the AAC device. The user maintains complete control over message production while knowledgeable partners augment the AAC app’s prediction capabilities by suggesting contextually relevant words and phrases, in real time, using a second app.

What Do We Know So Far?

Smart Predict has been tested with five literate adults with complex communication needs who rely on AAC, using an alternating-treatment, single-subject research design (Fried-Oken, Jakobs, & Jakobs, 2018). Results demonstrated the potential value of adding a human vocabulary source to software-based language models to increase the efficiency of message generation for some participants who used a keyboard-based AAC app. For one participant who relied on single-switch access for picture description, selections per word were reduced by 12% during 10 min of message generation when the communication partner offered word choices; another participant who relied on direct selection for picture description reduced selections per word by 21% during a 10-min interchange. The other three participants were challenged by the tablet screen’s access method and did not demonstrate significant efficiency savings. Additional evaluations are in process for individuals who rely on scanning for message generation. It is expected that Smart Predict will offer vocabulary support as well as increased partner engagement in conversations.

Technology-assisted word supplementation takes advantage of the communication partner’s knowledge of language, context, and personally relevant vocabulary. A familiar partner will almost always provide better predictions than a relatively uninformed (albeit state-of-the-art) computational language model (Roark, Fowler, Sproat, Gibbons, & Fried-Oken, 2011). One benefit of the Smart Predict prototype is the level of engagement that is afforded during interactions, as the partner is not simply sitting and waiting for message completion and transmission. A system that leverages communication partner predictions may more fully engage those partners in the process rather than expecting them to wait their turn with nothing to do. Importantly, an application such as Smart Predict provides predictive, not direct compositional, input from the communication partner. The responsibility of selecting letters and words during text entry remains with the person with complex communication needs, as the sole author of the text (Roark et al., 2011).

Future Directions

Future directions should include work to measure the effect of Smart Predict on communication partner engagement during conversations with people who rely on AAC, especially those who type slowly with scanning access. So far, the effects of Smart Predict have been evaluated with only five participants, and results are equivocal because of the motor demands of the typing task. It will be important to investigate who benefits from this app (and who does not), in what ways, and under what conditions. This new technology, which provides both communication partner assistance and technology assistance, has the potential to support language and literacy learning as well as message generation for those who are already literate. Within AAC technology, context-aware supplementation by partners or by computers, where the person who relies on AAC controls all message generation, remains an achievable goal with new artificial intelligence (AI) options, especially as they appear within general technologies at such a rapid rate.

Where Do We Go From Here?

Historically, access to AAC devices has focused on ensuring the independence of the individual using AAC. While independence, understood as giving the individual control over the exact message he or she produces, is non-negotiable, it has come at the price of very slow communication that likely is fatiguing for the individual using AAC, makes it difficult to maintain engagement with communication partners, and limits opportunities to engage and participate. The goal of the new and emerging access technologies described in this paper is to increase the ability to access technology, reduce fatigue, and improve satisfaction with communication, particularly in relation to individuals with acquired neurological conditions whose needs remain underserved.

Although the technologies presented in this paper are promising, there is more work to do. Access technologies are needed that (a) do not require precise setup, (b) support emerging motor abilities, (c) support message correction, (d) support control of multiple smart devices, and (e) reduce cognitive processing demands associated with alternative access solutions. Despite the advances in standard and assistive technologies, individuals with the most severe motor impairments are experiencing a growing digital divide as their needs continue to be unmet.

Most access methods still require precise positioning of individuals using AAC and their equipment for reliable access. New movement-sensing technologies such as the highly integrated inertial measurement units (IMUs) presented in this paper hold promise for relaxing setup requirements and reducing access errors and fatigue resulting from unintentional movements, such as spasms or coughing. There is also a desperate need for access technologies that support emerging access skills. Some technologies, such as head-movement trackers (e.g., HeadMouse® Extreme4 or GlassOuse5) and laser-pointing systems (e.g., Safe Laser System3), enable an individual to transition from partner-interpreted pointing (i.e., a partner watches mouse or laser movements and judges the intent of those movements), to partner selection of targets (e.g., the partner uses a secondary switch to select items when the individual using assistive technology is on their target), to independent access via dwell. Also needed are new technologies that leverage artificial intelligence strategies or partner knowledge and abilities to reduce the effort required of individuals who use AAC and assist with message correction. In addition, many individuals who use technology, regardless of physical ability, need or want to use multiple smart devices; more technologies that support access to multiple devices, such as Tecla-e6, BlueHub3, and BlueSwitch3, are needed to support the access needs of all. Finally, continued research is needed to understand the human processes involved in using new access technologies, such as a BCI speller, to ensure that development efforts not only reduce motor demands but also consider cognitive processing demands associated with alternative access solutions. The cognitive demands of using a BCI speller have not yet been empirically examined, and only single case reports are available.

Most importantly, a person-centered design is essential to future access technology development, and models of person-centered design that identify real-life access challenges and closely integrate the experiences and feedback of all key stakeholders (e.g., individuals with complex communication needs, family, teachers, caregivers, AAC device manufacturers), must become standard requirements (Shah et al., 2009). Research and development efforts must incorporate evidence related to the human processes required for access; iteratively develop and evaluate prototype technologies; and work closely with key stakeholders for final development, evaluation, and implementation. This process is essential to ensure that the technology meets the needs of all individuals with complex communication needs in order to maximize opportunities to communicate and thereby increase participation and independence and reduce abandonment or disuse of technology. In addition, research efforts must be directly translated into clinical practice (McNaughton et al., 2019). Only then can future access technology development efforts begin to close the digital divide for all individuals with complex communication needs.

Acknowledgments

The contents of this paper were developed under a grant to the Rehabilitation Engineering Research Center on Augmentative and Alternative Communication (The RERC on AAC) from the U.S. Department of Health and Human Services, National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR grant # 90RE5017); and grants from the National Institutes of Health, National Institute on Deafness and Other Communication Disorders (NIH/NIDCD grant #s R01DC009834 & R43DC014294). The contents do not necessarily represent the policy of the funding agency; endorsement by the federal government should not be assumed.

Footnotes

2. Siri™ is a product of Apple, Cupertino, California; https://www.apple.com/ios/siri/

3. Supplemented Speech Recognition (SSR), SmartPredict app, CoConstruction Partner app, Safe Laser System, and BlueHub & BlueSwitch are products of Invotek, Inc., Alma, Arkansas; https://invotek.org

4. HeadMouse® Extreme is a product of Origin Instruments, Grand Prairie, Texas; http://www.orin.com/access/headmouse/

5. GlassOuse is a product of GlassOuse Assistive Device, Shenzhen, Guangdong, China; http://glassouse.com/shop/

6. Tecla-e™ is a product of Komodo OpenLab, Digital Media Zone, Toronto, ON; https://gettecla.com/pages/tecla-e

Contributor Information

Susan Koch Fager, Institute for Rehabilitation Science and Engineering, Madonna Rehabilitation Hospitals.

Melanie Fried-Oken, Institute on Development & Disability, Oregon Health & Science University.

Tom Jakobs, Invotek, Inc.

David R. Beukelman, Institute for Rehabilitation Science and Engineering, Madonna Rehabilitation Hospitals.

References

1. Akcakaya M, Peters B, Moghadamfalahi M, Mooney A, Orhan U, Oken B, … Fried-Oken M (2014). Noninvasive brain-computer interfaces for augmentative and alternative communication. IEEE Reviews in Biomedical Engineering, 7, 31–49. doi: 10.1109/RBME.2013.2295097
2. Andresen EM, Fried-Oken M, Peters B, & Patrick DL (2015). Initial constructs for patient-centered outcome measures to evaluate brain–computer interfaces. Disability and Rehabilitation: Assistive Technology, 11, 548–557. doi: 10.3109/17483107.2015.1027298
3. Bacher D, Jarosiewicz B, Masse NY, Stavisky SD, Simeral JD, Newell K, … Hochberg LR (2015). Neural point-and-click communication by a person with incomplete locked-in syndrome. Neurorehabilitation and Neural Repair, 29, 462–471. doi: 10.1177/1545968314554624
4. Ball L, Fager S, & Fried-Oken M (2012). Augmentative and alternative communication for people with progressive neuromuscular disease. Physical Medicine & Rehabilitation Clinics, 23, 689–699. doi: 10.1016/j.pmr.2012.06.003
5. Ball L, Nordness A, Fager S, Kersch K, Pattee G, & Beukelman D (2010). Eye gaze access of AAC technology for persons with amyotrophic lateral sclerosis. Journal of Medical Speech Language Pathology, 18, 11–23. Retrieved from https://www.atia.org/wp.../Research_Article-Eye_Gaze_Access_ALS_4AF258B.doc
6. Beukelman DR, Fager S, Ball L, & Dietz A (2007). AAC for adults with acquired neurologic conditions: A review. Augmentative and Alternative Communication, 23, 230–242. doi: 10.1080/07434610701553668
7. Beukelman DR, & Mirenda P (2013). Augmentative and alternative communication: Supporting children and adults with complex communication needs (4th ed.). Baltimore, MD: Paul H. Brookes.
8. Biswas P, & Langdon P (2011). A new input system for disabled users involving eye gaze tracker and scanning interface. Journal of Assistive Technologies, 5, 58–66. doi: 10.1108/17549451111149269
9. Biswas P, & Langdon P (2013). A new interaction technique involving eye gaze tracker and scanning system. Proceedings of the 2013 Conference South Africa, 67–70.
10. Blackstone S (1995). Repetitive strain injury and AAC. Augmentative Communication News, 8, 1–4. Retrieved from http://www.augcominc.com/newsletters/?fuseaction=newsletters&C=ACN
11. Blackstone SW, Williams MB, & Wilkins DP (2007). Key principles underlying research and practice in AAC. Augmentative and Alternative Communication, 23, 191–203. doi: 10.1080/07434610701553684
12. Blankertz B, Dornhege G, Lemm S, Krauledat M, Curio G, & Müller KR (2006). The Berlin brain-computer interface: Machine learning based detection of user specific brain states. IEEE Transactions on Biomedical Engineering, 12, 581–607. doi: 10.1109/TBME.2008.923152
13. Brumberg JS, Burnison JD, & Pitt KM (2016, July). Using motor imagery to control brain-computer interfaces for communication. In International Conference on Augmented Cognition (pp. 14–25). Springer. doi: 10.1007/978-3-319-39955-3_2. Retrieved from https://pdfs.semanticscholar.org/bf49/d7cccdc9b435d5f0656356bc2b6d15373d19.pdf
14. Brumberg JS, Pitt KM, & Burnison JD (2018). A non-invasive brain–computer interface for real-time speech synthesis: The importance of multimodal feedback. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 26, 874–881. doi: 10.1109/TNSRE.2018.2808425
15. Brumberg JS, Pitt KM, Mantie-Kozlowski A, & Burnison JD (2018). Brain–computer interfaces for augmentative and alternative communication: A tutorial. American Journal of Speech-Language Pathology, 27, 1–12. doi: 10.1044/2017_AJSLP-16-0244
16. Caballero-Morales SO, & Trujillo-Romero F (2014). Evolutionary approach for integration of multiple pronunciation patterns for enhancement of dysarthric speech recognition. Expert Systems with Applications, 41, 841–852. doi: 10.1016/j.eswa.2013.08.014
17. Chakrabarti S, Sandberg HM, Brumberg JS, & Krusienski DJ (2015). Progress in speech decoding from the electrocorticogram. Biomedical Engineering Letters, 5, 10–21. doi: 10.1007/s1353
18. Chavarriaga R, Fried-Oken M, Kleih S, Lotte F, & Scherer R (2017). Heading for new shores! Overcoming pitfalls in BCI design. Brain-Computer Interfaces, 4, 1–14. doi: 10.1080/2326263X.2016.1263916
19. Christensen H, Cunningham S, Fox C, Green P, & Hain T (2012). A comparative study of adaptive, automatic recognition of disordered speech. In INTERSPEECH 2012, Annual Conference of the International Speech Communication Association, 1776–1779. Retrieved from https://pdfs.semanticscholar.org/68b0/b9f6141172939bb778ccfb8afdf63c977f86.pdf
20. Combaz A, Chatelle C, Robben A, Vanhoof G, Goeleven A, Thijs V, … Laureys S (2013). A comparison of two spelling brain-computer interfaces based on visual P3 and SSVEP in locked-in syndrome. PLoS ONE. doi: 10.1371/journal.pone.0073691
21. Engelke CR, & Higginbotham DJ (2013). Looking to speak: On the temporality of misalignment in interaction involving an augmented communicator using eye-gaze technology. Journal of Interactional Research in Communication Disorders, 4, 95–122. doi: 10.1558/jircd.v4i1.95
22. Fager S, Bardach L, Russell S, & Higginbotham J (2012). Access to augmentative and alternative communication: New technologies and clinical decision-making. Journal of Pediatric Rehabilitation Medicine, 5, 53–61. doi: 10.3233/PRM-2012-0196
23. Fager S, Beukelman D, Fried-Oken M, Jakobs T, & Baker J (2012). Access interface strategies. Assistive Technology, 24, 25–33. doi: 10.1080/10400435.2011.648712
24. Fager S, Beukelman D, Jakobs T, & Hosom JP (2010). Evaluation of a speech recognition prototype for speakers with moderate and severe dysarthria: A preliminary report. Augmentative and Alternative Communication, 26, 267–277. doi: 10.3109/07434618.2010.532508
25. Fager SK, Jakobs T, & Sorenson T (2018, January). Multimodal input to enhance access to technology. Paper presented at the annual Assistive Technology Industry Association (ATIA) conference, Orlando, FL.
26. Fager SK, Sorenson T, Butte S, Nelson A, Banerjee N, & Robucci R (2017). Integrating end-user feedback in the concept stage of development of a novel sensor access system for environmental control. Disability and Rehabilitation: Assistive Technology, 19, 1–7. doi: 10.1080/17483107.2017.1328615
27. Farwell LA, & Donchin E (1988). Talking off the top of your head: Toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology, 70, 510–523. doi: 10.1016/0013-4694(88)90149-6
28. Fried-Oken M, Beukelman DR, & Hux K (2012). Current and future AAC research considerations for adults with acquired cognitive and communication impairments. Assistive Technology, 24, 56–66. doi: 10.1080/10400435.2011.648713
29. Fried-Oken M, Fager S, & Jakobs T (2016, July). New directions in access to augmentative and alternative communication technologies for persons with minimal movements. Paper presented at the Rehabilitation Engineering Society of North America’s annual convention, Arlington, VA.
30. Fried-Oken M, Jakobs E, & Jakobs T (2018, January). SmartPredict: AAC app that integrates partner knowledge into word prediction. Paper presented at the annual Assistive Technology Industry Association (ATIA) conference, Orlando, FL.
31. Haghi M, Thurow K, & Stoll R (2017). Wearable devices in medical Internet of things: Scientific research and commercially available devices. Healthcare Informatics Research, 23, 4–15. doi: 10.4258/hir.2017.23.1.4
32. Hamidi F, Baljko M, Livingston N, & Spalteholz L (2010). CanSpeak: A customizable speech interface for people with dysarthric speech. In Miesenberger K, Klaus J, Zagler W, & Karshmer A (Eds.), Computers helping people with special needs. ICCHP 2010. Lecture notes in computer science (Vol. 6179). Berlin, Heidelberg: Springer. doi: 10.1007/978-3-642-14097-6_97
33. Hanson EK, Beukelman DR, & Yorkston KM (2013). Communication support through multimodal supplementation: A scoping review. Augmentative and Alternative Communication, 28, 310–321. doi: 10.3109/07434618.2013.848934
34. Hasan MS, & Hongnian Y (2017). Innovative developments in HCI and future trends. International Journal of Automation and Computing, 14, 10–20. doi: 10.1007/s11633-016-1039-6
35. Hawley M, Enderby P, Green P, Cunningham S, Brownsell S, Carmichael J, … Palmer R (2007). A speech-controlled environmental control system for people with severe dysarthria. Medical Engineering & Physics, 29, 586–593. doi: 10.1016/j.medengphy.2006.06.009
36. Herff C, Heger D, De Pesters A, Telaar D, Brunner P, Schalk G, & Schultz T (2015). Brain-to-text: Decoding spoken phrases from phone representations in the brain. Frontiers in Neuroscience, 9, 217. doi: 10.3389/fnins.2015.00217
37. Higger M, Quivira F, Akcakaya M, Moghadamfalahi M, Nezamfar H, Cetin M, & Erdogmus D (2017). Recursive Bayesian coding for BCIs. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25, 704–714. doi: 10.1109/TNSRE.2016.2590959
38. Higginbotham DJ, Shane H, Russell S, & Caves K (2007). Access to AAC: Present, past, and future. Augmentative and Alternative Communication, 23, 243–257. doi: 10.1080/07434610701571058
39. Higginbotham DJ, & Wilkins DP (1999). Slipping through the time stream: Social issues of time and timing in augmented interactions. In Kovarsky D, Duchan J, & Maxwell M (Eds.), Constructing (in)competence: Disabling evaluations in clinical and social interactions (pp. 49–82). Mahwah, NJ: Lawrence Erlbaum Associates.
40. Hoag LA, Bedrosian JL, McCoy KF, & Johnson D (2004). Trade-offs between informativeness and speed of message delivery in augmentative and alternative communication. Journal of Speech, Language, and Hearing Research, 47, 1270–1285. doi: 10.1044/1092-4388(2004/096)
41. Hosom JP, Jakobs T, Baker A, & Fager S (2010). Automatic speech recognition for assistive writing in speech supplemented word prediction. INTERSPEECH 2010, 2674–2677. Retrieved from https://www.isca-speech.org/archive/archive_papers/interspeech_2010/i10_2674.pdf
42. Hux K, Rankin-Erickson J, Manasse N, & Lauritzen E (2009). Accuracy of three speech recognition systems: Case study of dysarthric speech. Augmentative and Alternative Communication, 16, 186–196. doi: 10.1080/07434610012331279044
43. Judge S, Robertson Z, Hawley M, & Enderby P (2009). Speech-driven environmental control systems: A qualitative analysis of users’ perceptions. Disability & Rehabilitation: Assistive Technology, 4, 151–157. doi: 10.1080/17483100802715100
44. Kaufmann T, Holz EM, & Kübler A (2013). Comparison of tactile, auditory, and visual modality for brain-computer interface use: A case study with a patient in the locked-in state. Frontiers in Neuroscience, 7, 129. Retrieved from https://www.frontiersin.org/articles/10.3389/fnins.2013.00129/full
45. Kellis S, Miller K, Thomson K, Brown R, House P, & Greger B (2010). Decoding spoken words using local field potentials recorded from the cortical surface. Journal of Neural Engineering, 7, 056007. doi: 10.1088/1741-2560/7/5/056007
46. Kruger E, Teasell R, Salter K, Foley N, & Hellings C (2007). The rehabilitation of patients recovering from brainstem strokes: Case studies and clinical considerations. Topics in Stroke Rehabilitation, 14, 56–64. doi: 10.1310/tsr1405-56
47. Kübler A, Holz EM, Riccio A, Zickler C, Kaufmann T, Kleih SC, … Mattia D (2014). The user-centered design as novel perspective for evaluating the usability of BCI-controlled applications. PLoS ONE, 9, e112392. doi: 10.1371/journal.pone.0112392
48. Lesher GW, Moulton BJ, & Higginbotham DJ (1998). Techniques for augmenting scanning communication. Augmentative and Alternative Communication, 14, 81–101. doi: 10.1080/07434619812331278236
49. Lotte F, Larrue F, & Mühl C (2013). Flaws in current human training protocols for spontaneous brain-computer interfaces: Lessons learned from instructional design. Frontiers in Human Neuroscience, 7, 568. doi: 10.3389/fnhum.2013.00568
50. McCane LM, Sellers EW, McFarland DJ, Mak JN, Carmack CS, Zeitlin D, … Vaughan TM (2014). Brain-computer interface (BCI) evaluation in people with amyotrophic lateral sclerosis. Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration, 15, 207–215. doi: 10.3109/21678421.2013.865750
51. McNaughton D, Light J, Beukelman DR, Klein C, Nieder D, & Nazareth G (2019). Building capacity in AAC: A person-centered approach to supporting participation by people with complex communication needs [Special Issue]. Augmentative and Alternative Communication.
52. Mengistu KT, & Rudzicz F (2011). Adapting acoustic and lexical models to dysarthric speech. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4924–4927. doi: 10.1109/ICASSP.2011.5947460. Retrieved from http://www.cs.toronto.edu/~frank/Download/Papers/mengistu_icassp11.pdf
53. Moghadamfalahi M, Orhan U, Akcakaya M, Nezamfar H, Fried-Oken M, & Erdogmus D (2015). Language-model assisted brain computer interface for typing: A comparison of matrix and rapid serial visual presentation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 23, 910–920. doi: 10.1109/TNSRE.2015.2411574
54. Mugler EM, Patton JL, Flint RD, Wright ZA, Schuele SU, Rosenow J, … Slutzky MW (2014). Direct classification of all American English phonemes using signals from functional speech motor cortex. Journal of Neural Engineering, 11, 035015. doi: 10.1088/1741-2560/11/3/035015
55. Murguialday AR, Hill J, Bensch M, Martens S, Halder S, Nijboer F, … Gharabaghi A (2011). Transition from the locked in to the completely locked-in state: A physiological analysis. Clinical Neurophysiology, 122, 925–933. doi: 10.1016/j.clinph.2010.08.019
56. NIDILRR Development Framework (2017). Retrieved from https://www.acl.gov/index.php/aging-and-disability-in-america/nidilrr-frameworks
57. Nijboer F, Sellers EW, Mellinger J, Jordan MA, Matuz T, & Furdea A (2008). A P300-based brain-computer interface for people with amyotrophic lateral sclerosis. Clinical Neurophysiology, 119, 1909–1916. doi: 10.1016/j.clinph.2008.03.034
58. Oken BS, Orhan U, Roark B, Erdogmus D, Fowler A, Mooney A, … Fried-Oken M (2014). Brain-computer interface with language model-electroencephalography fusion for locked-in syndrome. Neurorehabilitation and Neural Repair, 28, 387–394. doi: 10.1177/1545968313516867
59. Peters B, Mooney A, Oken B, & Fried-Oken M (2016). Soliciting BCI user experience feedback from people with severe speech and physical impairments. Brain-Computer Interfaces, 3, 47–58. doi: 10.1080/2326263X.2015.1138056
60. Roark B, Fowler A, Sproat R, Gibbons C, & Fried-Oken M (2011). Towards technology-assisted co-construction with communication partners. In Proceedings of the Second Workshop on Speech and Language Processing for Assistive Technologies (pp. 22–31). Association for Computational Linguistics. Retrieved from https://pdfs.semanticscholar.org/942a/943ac4b46ec6d5a5c3e988e3e9a87cbf1ae9.pdf
61. Roark B, Fried-Oken M, & Gibbons C (2015). Huffman and linear scanning methods with statistical language models. Augmentative and Alternative Communication, 31, 37–50. doi: 10.3109/07434618.2014.997890
62. Sahadat MN, Alreja A, & Ghovanloo M (2017). Simultaneous multimodal PC access for people with disabilities by integrating head tracking, speech recognition, and tongue motion. IEEE Transactions on Biomedical Circuits and Systems, 12, 192–201. doi: 10.1109/TBCAS.2017.2771235
63. Scherer R, Billinger M, Wagner J, Schwarz A, Hettich DT, Bolinger E, … Müller-Putz G (2015). Thought-based row-column scanning communication board for individuals with cerebral palsy. Annals of Physical and Rehabilitation Medicine, 58, 14–22. doi: 10.1016/j.rehab.2014.11.005
64. Sellers EW, & Donchin E (2006). A P300-based brain-computer interface: Initial tests by ALS patients. Clinical Neurophysiology, 11, 538–548. doi: 10.1016/j.clinph.2005.06.027
65. Sellers EW, Ryan DB, & Hauser CK (2014). Noninvasive brain-computer interface enables communication after brainstem stroke. Science Translational Medicine, 6(257), 257re7. doi: 10.1126/scitranslmed.3007801
66. Sellers EW, Vaughan TM, & Wolpaw JR (2010). A brain-computer interface for long-term independent home use. Amyotrophic Lateral Sclerosis, 11, 449–455. doi: 10.3109/17482961003777470
67. Singh G, Nelson A, Robucci R, Patel C, & Banerjee N (2015). Inviz: Low-powered personalized gesture recognition using wearable textile capacitive sensor arrays. Proceedings of IEEE Pervasive Computing and Communications (PerCom), 198–206.
68. Shah SGS, Robinson I, & Al Shawi S (2009). Developing medical device technologies from users’ perspectives: A theoretical framework for involving users in the development process. International Journal of Technology Assessment in Health Care, 25, 514–521. doi: 10.1017/S0266462309990328
69. Shane H, Blackstone S, Vanderheiden G, Williams M, & DeRuyter F (2012). Using AAC technology to access the world. Assistive Technology, 24, 3–13. doi: 10.1080/10400435.2011.648716
70. Vansteensel MJ, Pels EG, Bleichner MG, Branco MP, Denison T, Freudenburg ZV, … Van Rijen PC (2016). Fully implanted brain–computer interface in a locked-in patient with ALS. New England Journal of Medicine, 375, 2060–2066. doi: 10.1056/NEJMoa1608085
71. Wilkinson CR, & De Angeli A (2014). Applying user centred and participatory design approaches to commercial product development. Design Studies, 35, 614–631. doi: 10.1016/j.destud.2014.06.001
72. Wolpaw JR, Bedlack RS, Reda DJ, Ringer RJ, Banks PG, Vaughan TM, … McFarland DJ (2018). Independent home use of brain-computer interface by people with amyotrophic lateral sclerosis. Neurology, 91, e258–e267. doi: 10.1212/WNL.0000000000005812
73. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, & Vaughan TM (2002). Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113, 767–791. doi: 10.1016/S1388-2457(02)00057-3
74. Young V, & Mihailidis A (2010). Difficulties in automatic speech recognition of dysarthric speakers and implications for speech-based applications used by the elderly: A literature review. Assistive Technology, 22, 99–112. doi: 10.1080/10400435.2010.483646
