Author manuscript; available in PMC: 2013 May 24.
Published in final edited form as: Assist Technol. 2011 Spring;24(1):25–33. doi: 10.1080/10400435.2011.648712

Access Interface Strategies

Susan Fager 1, David R Beukelman 1,2, Melanie Fried-Oken 3, Tom Jakobs 4, John Baker 1
PMCID: PMC3663592  NIHMSID: NIHMS470126  PMID: 22590797

Abstract

Individuals who rely on augmentative and alternative communication (AAC) devices to support their communication often have physical movement challenges that require alternative methods of access. Technology that supports access, particularly for those with the most severe movement deficits, has expanded substantially over the years. The purposes of this article are to review the state of the science of access technologies that interface with augmentative and alternative communication devices and to propose a future research and development agenda that will enhance access options for people with limited movement capability due to developmental and acquired conditions.

Keywords: AAC, access, assistive technology, minimal movement

INTRODUCTION

Augmentative and alternative communication (AAC) options are increasing dramatically for people unable to meet their communication needs through natural speech, handwriting, or typing. It is encouraging that proposed technical advances promise to provide even greater access to face-to-face and electronic communication options that will support social, recreational, educational, commercial, volunteer, and employment engagement (Shane, Blackstone, Vanderheiden, Williams, & DeRuyter, 2012). However, these levels of communication support require that those who rely on them can accurately and efficiently interact with these technologies.

The purposes of this article are to review the state of the science with regard to physical access of AAC technologies and to propose a future research and development agenda that will enhance access options for people with limited movement capability due to developmental and acquired conditions. This article focuses on direct AAC access strategies involving the tracking of head and eye movement, recognition of residual speech, recognition of gestures, and monitoring of the electrical activity of the brain.

MINIMAL MOVEMENT: HEAD, EYE

Many people with complex communication needs also experience such limited movement of their arms and hands that they must rely on head and/or eye movement to access their communication options. Through history, listeners (communication partners) have interpreted head or eye movements as they co-constructed messages with the person who experienced complex communication needs (CCN). During the last two decades, steady progress has been made in providing alternative technical access using head and eye movement. For more detailed descriptions of access strategies discussed in this article, readers are referred to Beukelman and Mirenda (2005) and Cook and Polgar (2007).

LISTENER PERCEPTION OF HEAD OR EYE MOVEMENT

Augmentative and alternative communication access involving communication partner perception of residual eye or head movement is relatively common for people with minimal movement and AAC needs (Garrett, Happ, Costello, & Fried-Oken, 2007; Culp, Beukelman, & Fager, 2007; Hurtig & Downey, 2009). Examples of no-technology options include upward head or eye movement to signal “yes” and downward movement to signal “no,” or lateral movement to signal “more” or “less,” “slower” or “faster,” or “go” or “stop.”

Communication partner perception of eye gaze is also a common AAC strategy. The person with AAC needs looks toward an item (object, image, or printed message) in the environment or on a communication board, thereby directly selecting it, as the communication partner co-constructs meaning by confirming the meaning or message associated with the gaze.

Eye-linking is a low-tech communication option in which multiple images are placed or mounted on a transparent sheet. The communication partner visually focuses on the eyes of the person with AAC needs, who looks at the image of interest. The communication partner then moves the sheet until their eyes meet because both are looking at the shared item. At this point, the communication partner confirms the meaning or message to be conveyed.

EYE TRACKING STRATEGIES

In the past decade, there have been significant advances in eye tracking using infrared technology. Eye tracking systems work by shining safe, invisible infrared light onto the surface of the eye. This light produces a reflection on the user's pupil that the computer's cameras can track. The software correlates the reflected infrared light with cursor position using information gathered through a calibration routine. Symbols, pictures, messages, and/or letters are presented on the computer screen or speech generating device (SGD). Individual selections of this content are made by pausing (dwelling) on a location, blinking, or activating a switch when the cursor is located on the preferred location (item) on the screen. A wide range of SGDs now incorporate eye tracking as a potential access method through an accessory that can be added to the system.
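As an illustration of the dwell-based selection logic described above, the following is a minimal sketch in Python; the dwell threshold, target layout, and data format are illustrative assumptions rather than the behavior of any particular product.

```python
# Minimal sketch of dwell-based selection (assumed data format and timing values).
DWELL_TIME = 1.0  # seconds of steady gaze required to register a selection

def dwell_select(gaze_samples, targets, dwell_time=DWELL_TIME):
    """Return the first target the gaze rests on for `dwell_time` seconds.

    gaze_samples: iterable of (timestamp, x, y) tuples from the eye tracker.
    targets: dict mapping target name -> (x0, y0, x1, y1) screen rectangle.
    """
    current = None      # target the gaze is currently inside
    entered_at = None   # time the gaze entered that target

    for t, x, y in gaze_samples:
        hit = None
        for name, (x0, y0, x1, y1) in targets.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                hit = name
                break
        if hit != current:                       # gaze moved to a new region (or off-target)
            current, entered_at = hit, t
        elif hit is not None and t - entered_at >= dwell_time:
            return hit                           # dwell threshold reached: treat as a selection
    return None
```

In practice, blink- or switch-based acceptance simply replaces the dwell test with an external confirmation event.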

Research into the use of eye gaze technology to access SGDs to support communication is beginning to emerge. Individuals with amyotrophic lateral sclerosis (ALS) have received the most attention in the research literature regarding the use of eye gaze technology. ALS is a degenerative condition that can leave the individual in a completely locked-in state with only eye movement preserved at the end stages of the disease process. Some have documented the relative level of ease of use of eye tracking technology experienced by individuals with ALS (Calvo et al., 2008; Harris & Goren, 2009; Gibbons & Beneteau, 2010). Ball and colleagues (2010) followed 15 individuals with ALS who used eye tracking technology to support communication. They found a wide range of communicative functions served using these systems (group communication, phone, e-mail, face-to-face interaction, and internet). Others have also reported a variety of communicative functions served by this technology for persons with ALS (Fried-Oken et al., 2006; Doyle & Phillips, 2001; McNaughton, Light, & Groszyk, 2001; and McNaughton & Bryen, 2002).

HEAD TRACKING STRATEGIES

Head tracking technologies use video or infrared cameras. Video camera-based systems track specific body features (e.g., the tip of the nose) and translate that movement into cursor control on a computer screen (Betke, Gips, & Fleming, 2002; Kim & Ryu, 2006). Infrared systems track a reflective dot or configuration of dots placed on the forehead, glasses, brim of a hat, or the individual's hand or finger. Through the years, several variations of infrared head tracking applications have been developed. Some are relative strategies, in which cursor movement and head movement are calibrated by moving the head from side-to-side and up-and-down, thereby moving the cursor to the edges of the screen. Absolute strategies calibrate head movement and cursor locations to specific locations on the screen. Communication content is represented on a computer or SGD screen similar to what has been described for eye tracking. Selections, as with eye tracking, are often made by dwelling, or can be made with a switch activated by an additional movement (e.g., arm or hand movement). Case study illustrations are available that describe the use of this technology for individuals with disabilities (Fager, Bardach, Russell, & Higginbotham, 2011; Man & Wong, 2007; McKinley, Tewksbury, Sitter, Reed, & Floyd, 2004).
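To make the distinction between relative and absolute strategies concrete, the following is a minimal sketch, assuming a fixed screen resolution, an arbitrary gain value, and a calibration record of head-angle extremes; none of these values come from the systems cited above.

```python
# Illustrative contrast between relative and absolute head-tracking cursor control.
SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution

def relative_update(cursor, head_delta, gain=8.0):
    """Relative strategy: head movement nudges the cursor, like a mouse.

    cursor: (x, y) current cursor position in pixels.
    head_delta: (dx, dy) change in the tracked feature since the last frame.
    """
    x = min(max(cursor[0] + gain * head_delta[0], 0), SCREEN_W - 1)
    y = min(max(cursor[1] + gain * head_delta[1], 0), SCREEN_H - 1)
    return (x, y)

def absolute_update(head_angle, calib):
    """Absolute strategy: each calibrated head orientation maps to a fixed screen point.

    head_angle: (yaw, pitch) in degrees.
    calib: dict with yaw/pitch extremes recorded during calibration.
    """
    fx = (head_angle[0] - calib["yaw_min"]) / (calib["yaw_max"] - calib["yaw_min"])
    fy = (head_angle[1] - calib["pitch_min"]) / (calib["pitch_max"] - calib["pitch_min"])
    fx, fy = min(max(fx, 0.0), 1.0), min(max(fy, 0.0), 1.0)
    return (fx * (SCREEN_W - 1), fy * (SCREEN_H - 1))
```

The relative mapping tolerates drift but demands repeated large excursions to re-center the cursor, which relates to the physical-demand concern raised below.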

EYE AND HEAD TRACKING FUTURE RESEARCH

There are several important directions for future research into eye and head tracking technology. First, the clinical training and ongoing support that this sophisticated technology requires to meet communication needs over time are not yet understood. Additionally, how adverse environmental conditions (natural light, changing positions over time due to medical needs, changing environments) affect technology stability and performance needs to be documented. This knowledge not only drives technology development (making technology simpler and more robust across a wide range of settings) but also informs models of clinical intervention to ensure successful implementation.

Relative and absolute head tracking require different levels of head movement capability. Relative systems often demand extremes of head movement during use to keep the cursor aligned on the computer or SGD screen. The physical demands of long-term use of these systems warrant further investigation, as individuals who use these technologies may spend significant amounts of time using them to support their daily communication and computer access needs.

Finally, the use of eye and head tracking technology with other access methods needs to be explored. For example, the combined use of eye movement with other physical movements (e.g., head movement, gesture, speech recognition) may provide a more efficient or intuitive access method. There is limited understanding of which combinations of access methods are best matched to specific tasks.

HEAD POINTING STRATEGIES

Two different laser strategies have been utilized to access AAC technology or materials. A conventional laser pointer can be mounted on the head or, in some cases, held in or attached to the hand or foot. The laser beam is then directed toward an item, object, or image in the environment or on a communication board. The communication partner perceives the laser beam on the item (object, print, icon, or image) and confirms the message or meaning being communicated. A concern about this type of laser use is the eye safety of the communication partner or others in the environment. There is particular concern when young children or people with cognitive disabilities are unaware of the eye danger and may look directly into the laser device to investigate the light source.

The Safe-Laser is a second laser strategy. This laser system operates at a relatively low (eye-safe) level of intensity until it is pointed toward a laser-sensing surface, at which point it shifts to a higher power. The Safe-Laser can be used to identify images positioned on the laser-sensing screen so that the communication partner can confirm the message. The Safe-Laser can also be used to electronically activate a location on the laser-sensing screen, thereby accessing a stored spoken message, printing a message, or providing environmental control.

The authors have been involved in research on Safe-Laser technology. In an investigation of six individuals with locked-in syndrome (LIS) due to brainstem stroke, Safe-Laser technology was used as a head movement training system (Fager, Beukelman, Karantounis, & Jakobs, 2006). Individuals with minimal head movement capabilities could use the Safe-Laser technology if the laser-sensing surface was moved a sufficient distance from the laser pointer. Three of the participants transitioned from the Safe-Laser to head tracking technologies. One individual continued to use the Safe-Laser to support communication. Two of the participants discontinued use of the system due to ongoing medical setbacks.

Future research on head pointing technologies needs to further address the integration of this technology as a movement training device as well as a system to support communication. How this technology is used over time and the potential impact on range of movement as well as the ability to support independent communication needs to be explored.

Individuals who are early in recovery and who have minimal movement capabilities often have to rely on partner-dependent and scanning strategies to support communication. Because Safe-Laser technology can be used with minimal movement as a direct selection access strategy, the use of this technology in acute medical care settings needs to be explored. Additionally, the use of this technology for environmental pointing (e.g., controlling the fan or call-light) and to support communication using items in the environment needs to be researched.

RECOGNITION OF RESIDUAL SPEECH

Automatic speech recognition (ASR) is rapidly being integrated into computers, gaming, and mobile device applications. ASR is standard on many computers (e.g., Microsoft Speech Recognition), and commercially available ASR has expanded to support a range of professions with specialized vocabularies (e.g., Dragon NaturallySpeaking). Mobile devices (iPad, Android, etc.) come equipped with voice command menus, and specialized apps are available to assist in web searches and other applications. Simply speaking to one's computer or mobile device is appealing, particularly for individuals with physical impairments affecting control of their arms and hands. However, this becomes challenging when the speaker has dysarthria, or impaired speech capabilities.

Limited reports exist of functional (sufficient to support writing and daily communication) use of ASR by individuals with moderate and severe speech impairments. Most reports of success involve individuals with mild impairments (Hux, Rankin-Erickson, Manasse, & Lauritzen, 2000). This can be attributed to several factors. First, commercially available ASR has been developed for a market of typical (non-impaired) speakers. As such, the technology has been built on models of non-impaired speech. The strategies that current ASR technologies implement require the speaker to talk continuously, making them difficult or impossible for some individuals with severe speech impairments to use. Additionally, individuals with dysarthric speech can demonstrate considerable inter- and intra-speaker variability, making it difficult for the technology to reliably recognize speech and improve performance with use over time (Blaney & Wilson, 2000; Magnuson & Blomberg, 2000; Raghavendra, Rosengren, & Hunnicutt, 2001; Young & Mihailidis, 2010). Due to these challenges, some have focused on developing ASR technology specifically to recognize dysarthric speech. Many of these systems have focused on controlling specific software programs, providing environmental control, and recognizing other limited vocabulary sets (Hawley et al., 2007; Judge, Robertson, Hawley, & Enderby, 2009; Caves, Boemler, & Cope, 2007; Omar, Morales, & Cox, 2009; Hamidi, Baljko, Livingston, & Spalteholz, 2010).

Others are beginning to look at recognition of residual speech as part of an overall access strategy for communication and assistive writing. InvoTek, Inc. (Alma, AR, USA), a partner in the Rehabilitation Engineering Research Center for Communication Enhancement (AAC-RERC), has developed a Supplemented Speech Recognition (SSR) program with collaborators. This unique system incorporates ASR that has been developed on models of dysarthric speech with a method for individual speaker optimization (training), first-letter cues (alphabet supplementation), and word prediction (Fager, Beukelman, Jakobs, & Hosom, 2010). With multiple sources of information (audio signal, first-letter cues, language model) as well as recognition that is based on dysarthric speech, the SSR gave speakers with a wide range of intelligibility (16.4%–89.1%) a high level of keystroke savings (64.8%–70.1%). For individuals with severe physical impairments, keystroke savings can be of particular benefit because they decrease the amount of precise and accurate movement control required to use a keyboard. Others are beginning to incorporate multiple sources of information that may benefit dysarthric speakers. For example, SpeakQ by goQ utilizes speech recognition as well as word prediction. Deng and colleagues (2009) have investigated the use of speech recognition with sEMG signals for individuals with dysarthria.
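The general idea of constraining a prediction list with a first-letter (alphabet supplementation) cue can be sketched as follows; the vocabulary, relative frequencies, and function names are invented for illustration and are not drawn from the SSR system itself.

```python
# Illustrative unigram "language model": word -> relative frequency (made-up values).
VOCAB = {"water": 0.04, "want": 0.06, "we": 0.05, "went": 0.03,
         "help": 0.05, "hello": 0.02, "here": 0.03}

def predict(first_letter, prior=VOCAB, n_best=3):
    """Rank candidate words consistent with the speaker-indicated first letter.

    Combining the letter cue with a language model narrows the search space,
    so the recognizer (or the user) chooses among a few likely words rather
    than the whole vocabulary.
    """
    candidates = {w: p for w, p in prior.items() if w.startswith(first_letter)}
    total = sum(candidates.values()) or 1.0
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    return [(w, p / total) for w, p in ranked[:n_best]]

print(predict("w"))  # e.g., [('want', 0.33...), ('we', 0.27...), ('water', 0.22...)]
```

In SSR the acoustic evidence would also be weighed against these candidates; the sketch shows only the letter-cue and language-model portion.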

There are several issues arising from previous research on ASR with dysarthric speakers that warrant further investigation. First, variability of speech performance in dysarthria is a significant challenge for current commercially available ASR. Not only do individuals with dysarthria vary substantially in their speech performance across dysarthria types (flaccid, spastic, ataxic, hypokinetic, hyperkinetic), but also across severity ranges (mild, moderate, severe), as dysarthria can result from a wide range of etiologies. Personalization of speech recognition models, as well as individualized speaker templates for recognition, appears to be a particularly important feature of dysarthric speech recognition. However, the amount and kind of personalization required for various types and severity levels of dysarthria are not yet understood.

Commercially available ASR requires the speaker to produce speech in a relatively consistent pattern during use. Not only do the acoustic features of individuals with dysarthria vary between speakers, but significant variability of speech (the way a speaker produces words or specific sounds) has also been noted within some speakers. Additionally, this variability can change throughout the day due to fatigue, medications, and the nature of the disorders that cause the dysarthria. Research is needed to understand exactly how this variability impacts recognition performance and what strategies can improve performance. Additionally, with new ASR being developed that allows individuals with severe dysarthria to use their voices extensively throughout the day to support written and face-to-face communication, the impact of extensive voice use for these individuals is unknown.

Finally, the use of speech as an input method for AAC technology is becoming a possibility as strategies for dysarthric speech recognition continue to advance. However, the integration of natural speech via speech recognition technology during AAC interactions is yet to be understood. Specifically, the level of acceptance of using one's own speech, no matter how unintelligible, and having an AAC device recognize and speak the utterance intelligibly using synthesized speech, needs to be explored.

BRAIN COMPUTER INTERFACE

Twenty years ago, patients with Guillain-Barre Syndrome were asked to evaluate the AAC options they had used when they were locked in. At that time, one of the patients said, “Hey, what I really needed was a computer that could read my brain waves and speak out what I was thinking. Then none of this other stuff would be necessary” (Fried-Oken et al., 1991). Indeed, brain computer interface (BCI) technology as an AAC access method may now have the potential to meet these challenges (Kubler, Kotchoubey, Kaiser, Wolpaw, & Birbaumer, 2001).

Within the past two decades, interest in the development of BCIs for function has been sparked by significant advances in computing capabilities, miniaturization, signal acquisition, and processing abilities. We also have a solid body of research from animal models that contributes to rehabilitation technology efforts. We have learned a great deal about brain-behavior connections and what the brain is capable of in the face of severe neuromotor impairment. Most importantly, from a clinical perspective, we have changed our perception of severe disability and the potential participation of individuals with severe disability. People who are locked in and require access methods that do not rely on neuromuscular control are speaking up (Bauby, 1997; Bieker, Noethe, & Fried-Oken, 2011). It is time to examine the future of BCI as a plausible access method for AAC users. Our challenge is to find an independent, user-friendly means of expression that does not rely on neuromuscular control and produces written and spoken language that is fast, accurate, and non-fatiguing, with unlimited vocabulary.

What is a BCI? Brain-computer interface refers to technology whereby a computer detects a "selection" made by a person without relying on neuromuscular activity. Rather, the technology uses the person's changes in brain electrical activity as the intended execution. BCIs can substitute for the loss of typical neuromuscular outputs by enabling people to interact with their environments through brain signals rather than through muscles (Wolpaw, Birbaumer, McFarland, Pfurtscheller, & Vaughan, 2002). There are a number of different brain electrical signals that can serve as the intended selection method for the BCI. They are categorized as invasive and non-invasive techniques. Invasive BCIs use recordings of neuronal action potentials (spikes) or local field potentials (LFPs), for which an electrode array is placed directly onto the cortex (Wolpaw & Birbaumer, 2006). The BrainGate™ Neural Interface System is one example of an invasive BCI in which intracortical microelectrode sensors read control signals directly from the motor cortex (Donoghue, Nurmikko, Black, & Hochberg, 2007). Non-invasive techniques use recording sites at the scalp (electroencephalographic activity, or EEG) or rely on magnetic brain signals, as in magnetoencephalography (MEG) and magnetic resonance imaging (MRI) (Birbaumer & Cohen, 2007). There is also a "partially invasive" technique called electrocorticography (ECoG), in which sensors are placed within the skull but outside the gray matter of the brain. Non-invasive methods face the challenge of filtering noise from a brain signal recorded far from its source, but signal processing techniques are improving rapidly, so the advantages of lower cost, portability, lack of infection concerns, no surgery, and faster application are becoming more appealing.

Presently, most human BCI systems used for communication and control rely on non-invasive EEG-based methodologies. The EEG is a popular recording method since it can measure spontaneous electrical activity of the cerebral cortex as well as cortical responses to external or internal events. Both the Visual Evoked Potential (VEP) and the Event-Related Potential (ERP) known as the P300 have been used as BCI signals (Kubler et al., 2001). The P300 is an electrical response that is time-locked to a physical stimulus or behavior and can be characterized as a positive peak in the EEG that occurs 300–600 ms after stimulus onset. The P300 is a well-documented EEG response to a salient event and has been used as a metric of cognitive function, attention, and cognitive workload (Oken & Phillips, 2009). Consider a person who is looking at a series of numbers. All of a sudden, a letter appears. That letter is considered a salient and novel event and will cause an involuntary P300 response, which can be measured with surface brain electrodes. This response, measured across multiple stimulus presentations, is considered the "virtual key press."
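A minimal sketch of the averaging step behind P300 detection follows, assuming EEG epochs have already been extracted and time-locked to stimulus onset; the sampling rate, analysis window, and array layout are assumptions for illustration.

```python
import numpy as np

FS = 256  # assumed EEG sampling rate in Hz

def p300_score(epochs, fs=FS, window=(0.3, 0.6)):
    """Average stimulus-locked EEG epochs and measure the positive deflection
    in the 300-600 ms window as a rough P300 indicator.

    epochs: array of shape (n_trials, n_samples) for one channel,
            baseline-corrected, with sample 0 at stimulus onset.
    """
    erp = epochs.mean(axis=0)                  # averaging suppresses background EEG
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    return float(erp[i0:i1].max())             # peak amplitude in the P300 window

# In a speller, the stimulus (row, column, or letter) whose averaged response
# scores highest would be taken as the attended, i.e., intended, selection.
```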

Who can benefit from BCI? BCI as an alternative access method is often considered for individuals with LIS. Plum and Posner (1983) described the LIS as

"a state in which selective supranuclear motor de-efferentation produces paralysis of all four limbs and the lower cranial nerves without interfering with consciousness. The voluntary motor paralysis prevents the subjects from communicating by word or body movement. Usually, but not always, the anatomy of the responsible lesion in the brainstem is such that locked-in patients are left with the capacity to use vertical eye movements and blinking to communicate their awareness of internal and external stimuli."

Bauer, Gerstenbrand, & Rumpl (1979) subdivided the syndrome on the basis of the extent of motor impairment: (a) classical LIS is characterized by total immobility except for vertical eye movements or blinking; (b) incomplete LIS permits remnants of voluntary motion; and (c) total LIS consists of complete immobility including all eye movements combined with preserved consciousness. We propose to use the term in a functional framework. We refer to individuals who are “locked-in” as those who cannot rely on motor or speech skills to conduct activities that allow them to participate in the social, economic, and cultural aspects of their environments. We refer to potential users as individuals with severe speech and physical impairments (SSPI). This population of patients presents with severe dysarthria (Duffy, 2005) and severe motor impairments secondary to tetraplegia or quadriplegia. The most common etiologies of SSPI, or functional LIS, may include (but are not limited to): acquired neurological disease such as ALS and motor neuron disease; Parkinson's disease, Parkinsonian-plus syndromes, and other movement disorders; multiple sclerosis and neuroimmunologic disease; Guillain-Barre Syndrome; basilar artery strokes and other CVAs; muscular dystrophy; spinal cord injuries; traumatic brain injuries; as well as neurodevelopmental disorders such as cerebral palsy.

What BCIs do we have today? A number of noninvasive BCIs are available today, both for people with neurologic impairment and for non-medical users. Blankertz and his colleagues (2010) discuss uses of the BCI for entertainment (such as the popular Emotiv™ for gamers) and for mental state monitoring (i.e., attention feedback and improvements for human performance). For communication and control access, the P300 Speller is a popular BCI used by people with LIS secondary to brainstem stroke, ALS, and spinal cord injury (Sellers, Krusienski, McFarland, Vaughan, & Wolpaw, 2006). The system relies on a P300 response elicited when rows or columns of a 6 × 6 matrix of characters are randomly intensified (Farwell & Donchin, 1988). A number of trials must be averaged to classify the P300 as a reliable character selection, and the speed of the highlighting determines the number of characters processed per minute. The Berlin BCI, or Hex-o-Spell, relies on a circular array of letters and has been found to be more accurate for spelling than the standard letter matrix (Treder & Blankertz, 2010). The rapid serial visual presentation (RSVP Keyboard™) BCI, being developed at Oregon Health & Science University (Orhan et al., 2011), presents individual letters singly on the screen for single-event classification of the P300. When a letter selection is validated, a language model works to strengthen future selections (Roark, de Villiers, Gibbons, & Fried-Oken, 2010; Roark, Gibbons, & Fried-Oken, 2010).
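The way a language model can "strengthen future selections" can be illustrated with a Bayesian-style fusion of a letter prior with per-letter EEG classifier scores; the probabilities below are invented for illustration and do not describe the RSVP Keyboard™ implementation.

```python
def fuse(lm_prior, erp_likelihood):
    """Combine a language-model prior over next letters with ERP classifier
    likelihoods to obtain a posterior over the intended letter.

    lm_prior: dict letter -> P(letter | text typed so far).
    erp_likelihood: dict letter -> P(EEG evidence | letter was attended).
    """
    posterior = {c: lm_prior.get(c, 1e-6) * erp_likelihood.get(c, 1e-6)
                 for c in set(lm_prior) | set(erp_likelihood)}
    z = sum(posterior.values())
    return {c: p / z for c, p in posterior.items()}

# Made-up values: after typing "th", the language model strongly expects "e",
# so modest EEG evidence for "e" is enough to confirm the selection.
lm = {"e": 0.7, "a": 0.1, "o": 0.1, "i": 0.1}
eeg = {"e": 0.4, "a": 0.3, "o": 0.2, "i": 0.1}
posterior = fuse(lm, eeg)
print(max(posterior, key=posterior.get))  # -> 'e'
```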

What should we expect from BCI in the future? As with any assistive technology, the BCI of the future must be safe, convenient, and reliable for long-term use. BCI is not presently a practical and available technology for people with LIS. Current efforts have focused primarily on the technical and electrophysiological aspects and have not yet addressed human interface and usage concerns. Many questions remain for clinical research, and many technical issues still need attention. We are challenged to explore the functions that BCI could provide for daily participation. We do not know what levels of cognitive communication skills are required, or what kind of training is needed for competent use. We do not yet know who will be successful potential users. The current BCI systems call for substantial concentration and motivation, with limited feedback or reward during automatic selection. The level of fatigue that is permissible is not yet understood. There are significant obstacles related to setup and calibration, problem solving, and reliability checks. A BCI system currently must be administered by a technical expert. Ideally, it will not require continual technical support and can be used permanently in daily life with ease (Daly & Wolpaw, 2008). We predict that the BCI of the future will manage speed and accuracy tradeoffs for functional communication, but we have not yet examined these issues. Likewise, we need to compare this access method to available eye tracking and single-switch devices for individuals with SSPI and determine whether there is a benefit to its use.

The development of functional BCIs to serve as alternative access methods for AAC technology is a multi-disciplinary, translational endeavor that will continue for many years before we reach our goal of finding an independent means of expression that does not rely on neuromuscular control. The task will require collaboration among experts in signal processing and computer miniaturization, computer language specialists, clinical teams of neuroscientists and communication specialists, rehabilitation experts, and users (Wolpaw et al., 2002). We, as AAC specialists and rehabilitation engineers, are new to this clinical research agenda and have much to offer as this promising access method is realized for people with significant motor impairments.

GESTURE RECOGNITION

Historically, word prediction, semantic compaction, and abbreviation expansion strategies for people with CCN have been used to reduce the amount of typing required to enter text; however, as with letter-by-letter typing, all of these techniques require the user to precisely access each key or interface button involved in the access strategy. For head or eye tracking access strategies, the cursor or laser beam must be directed at an interface button and that button activated through dwell or an acceptance switch.

Within the last few years, gesture input strategies such as Swype and ShapeWriter have offered text input without requiring precise targeting of each letter; however, complete words must be gestured (with a few exceptions for double letters, apostrophes, etc.). These applications involve touch access on the screen of a mobile device such as a smartphone, iPad, or tablet computer.

Recently, InvoTek, an AAC-RERC partner, has developed a prototype gesture input method that requires only the first few letters of a word to be gestured, either through touch access or head tracking. If the gesture points to a highly likely word, that word is automatically inserted into the sentence and alternatives are presented in a word prediction list. If multiple words are likely, the word prediction list is populated but the most likely word is not automatically inserted into the sentence. This predictive gesture prototype provides three important advantages over other AAC access strategies for people with disabilities. First, the number of precise activations required to enter text was 64% less than with normal typing and 47% less than with word prediction.1,2 Second, gesturing reduced the vigilance required to monitor the word prediction list and enabled writers to maintain focus on composing their messages. Third, gesture users accessed the word prediction list 78% fewer times, on average, than when using word prediction.
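The auto-insert decision described above can be sketched as a simple confidence test over a word list; the vocabulary, frequencies, margin threshold, and function name are hypothetical and are not taken from the InvoTek prototype.

```python
# Made-up word frequencies standing in for a real language model.
WORD_FREQ = {"communication": 120, "computer": 300, "complete": 160,
             "because": 500, "before": 400}

def gesture_predict(prefix, freq=WORD_FREQ, margin=2.0, list_size=5):
    """Decide whether a gestured prefix identifies a word confidently enough
    to auto-insert it; otherwise only populate the prediction list.

    Returns (auto_inserted_word_or_None, prediction_list).
    """
    matches = sorted(((f, w) for w, f in freq.items() if w.startswith(prefix)),
                     reverse=True)
    pred_list = [w for _, w in matches[:list_size]]
    if not matches:
        return None, []
    if len(matches) == 1 or matches[0][0] >= margin * matches[1][0]:
        return matches[0][1], pred_list     # one word clearly dominates: insert it
    return None, pred_list                  # ambiguous: show the list only

print(gesture_predict("com"))  # (None, ['computer', 'complete', 'communication'])
print(gesture_predict("bef"))  # ('before', ['before'])
```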

Future research is needed to study the accuracy and efficiency of use of these new gesture technologies by people with a range of movement limitation patterns. Additionally, there has been essentially no research investigating the instruction and learning needed to develop proficiency with such technologies. Finally, there is an ongoing need to improve these access strategies to meet the needs of people with different types of movement patterns.

Given the above preliminary results, we believe that predictive gesturing promises to substantially reduce the workload (cognitive and physical) for people with severe disabilities when writing. The reduction in precise activations and word prediction vigilance could reduce the cognitive load of message preparation for people who presently have few choices.

CONCLUSION

The range of access technology options for individuals with severe physical impairments is rapidly expanding. Along with the new innovations, there is considerable need for research to understand the impact of these technologies on the communication of individuals with severe physical impairments.

While access technologies have developed over the past decade, most of these methods do not yet interface with mobile technologies (e.g., iPad, iPod, mobile tablets). Switch scanning applications are now beginning to emerge; however, touch-screen access to these devices (with no modifications for sensitivity) remains the primary access method for mobile technologies. For individuals with severe physical impairments, these devices remain largely inaccessible.

Additionally, there is limited information available about the use of language modeling techniques to provide necessary vocabulary to individuals using different access options as communication contexts change. Advances in how communication content is managed may have significant impact on the use of different access methods by facilitating more efficient and effective interaction.

ACKNOWLEDGMENT

The preparation of this manuscript was supported in part by the Rehabilitation Engineering Research Center on Communication Enhancement (AAC-RERC), funded under grant #H133E080011 from the National Institute on Disability and Rehabilitation Research (NIDRR) in the U.S. Department of Education's Office of Special Education and Rehabilitative Services (OSERS), and by NIH grant #5R01DC009834.

Footnotes

1. A "precise activation" is defined as targeting and selecting (via dwell or touch) a particular letter.

2. The sentences and input methods (typing, word prediction, and gesturing) were randomized.

REFERENCES

1. Ball L, Nordness A, Fager S, Kersch K, Pattee G, Beukelman D. Eye gaze access of AAC technology for persons with amyotrophic lateral sclerosis. Journal of Medical Speech Language Pathology. 2010;18:11–23.
2. Bauby J-D. The Diving Bell and the Butterfly. Random House; New York: 1997.
3. Bauer G, Gerstenbrand F, Rumpl E. Varieties of locked-in syndrome. Journal of Neurology. 1979;221(2):77–91. doi: 10.1007/BF00313105.
4. Betke M, Gips J, Fleming P. The camera mouse: Visual tracking of body features to provide computer access for people with severe disabilities. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2002;10(1):1–10. doi: 10.1109/TNSRE.2002.1021581.
5. Beukelman D, Mirenda P. Augmentative and alternative communication: Support for children and adults with complex communication needs. 3rd ed. Paul H. Brookes Publishing Co.; Baltimore, MD: 2005.
6. Bieker G, Noethe G, Fried-Oken M. Brain-computer interface: Locked-in and reaching new heights. Speak Up! USAAC Newsletter. 2011.
7. Blaney B, Wilson J. Acoustic variability in dysarthria and computer speech recognition. Clinical Linguistics & Phonetics. 2000;14(4):307–327.
8. Blankertz B, Tangermann M, Vidaurre C, Fazli S, Sannelli C, Haufe S, Müller K-R. The Berlin brain–computer interface: Non-medical uses of BCI technology. Frontiers in Neuroscience. 2010;4:198. doi: 10.3389/fnins.2010.00198.
9. Calvo A, Chio A, Castellina E, Corno F, Farinetti L, Ghiglione P, Vignola A. Eye tracking impact on quality-of-life of ALS patients. Lecture Notes in Computer Science: Computers Helping People with Special Needs. 2008;5101:70–77.
10. Caves K, Boemler S, Cope B. Development of an automatic recognizer for dysarthric speech. Proceedings of the RESNA Annual Conference; Phoenix, AZ. 2007.
11. Cook A, Polgar J. Cook and Hussey's assistive technologies: Principles and practice. 3rd ed. Mosby; St. Louis, MO: 2007.
12. Culp D, Beukelman DR, Fager SK. Brainstem impairment. In: Beukelman DR, Garrett KL, Yorkston KM, editors. Augmentative communication strategies for adults with acute or chronic medical conditions. Paul H. Brookes Publishing Co.; Baltimore, MD: 2007. pp. 59–90.
13. Daly JJ, Wolpaw JR. Brain-computer interfaces in neurological rehabilitation. Lancet Neurology. 2008;7:1032–1043. doi: 10.1016/S1474-4422(08)70223-0.
14. Deng Y, Patel R, Heaton JT, Colby G, Gilmore D, Cabrera J, Meltzner GS. Disordered speech recognition using acoustic and sEMG signals. INTERSPEECH-2009. 2009:644–647.
15. Donoghue JP, Nurmikko A, Black M, Hochberg LR. Assistive technology and robotic control using motor cortex ensemble-based neural interface systems in humans with tetraplegia. Journal of Physiology. 2007;579(3):603–611. doi: 10.1113/jphysiol.2006.127209.
16. Doyle M, Phillips B. Trends in augmentative and alternative communication use by individuals with amyotrophic lateral sclerosis. Augmentative and Alternative Communication. 2001;17:167–178.
17. Duffy JR. Motor speech disorders: Substrates, differential diagnosis and management. 2nd ed. Elsevier Mosby; St. Louis, MO: 2005.
18. Fager S, Bardach L, Russell S, Higginbotham J. Access to augmentative and alternative communication: New technologies and clinical decision-making. Journal of Pediatric Rehabilitation Medicine. 2011; in press. doi: 10.3233/PRM-2012-0196.
19. Fager SK, Beukelman DR, Jakobs T, Hosom JP. Evaluation of a speech recognition prototype for speakers with moderate and severe dysarthria: A preliminary report. Augmentative and Alternative Communication. 2010;26:267–277. doi: 10.3109/07434618.2010.532508.
20. Fager S, Beukelman D, Karantounis R, Jakobs T. Use of safe-laser access technology to train head movement in persons with severe motor impairment: A series of case reports. Augmentative and Alternative Communication. 2006;22:222–229. doi: 10.1080/07434610600650318.
21. Farwell L, Donchin E. Talking off the top of your head: Toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology. 1988;70:510–523. doi: 10.1016/0013-4694(88)90149-6.
22. Fried-Oken M, Fox L, Rau MT, Tullman J, Baker G, Hindal M, Lou J-S. Purposes of AAC device use for persons with ALS as reported by caregivers. Augmentative and Alternative Communication. 2006;22:209–221. doi: 10.1080/07434610600650276.
23. Garrett KL, Happ MB, Costello JM, Fried-Oken MB. AAC in the intensive care unit. In: Beukelman DR, Garrett KL, Yorkston KM, editors. Augmentative communication strategies for adults with acute or chronic medical conditions. Paul H. Brookes Publishing Co.; Baltimore, MD: 2007. pp. 17–57.
24. Gibbons C, Beneteau E. Functional performance using eye control and single switch scanning by people with ALS. Perspectives on Augmentative and Alternative Communication. 2010;19(3):64–69.
25. Hamidi F, Baljko M, Livingston N, Spalteholz L. CanSpeak: A customizable speech interface for people with dysarthric speech. Lecture Notes in Computer Science. 2010;6179:605–612.
26. Harris D, Goren M. The ERICA eye gaze system versus manual letter board to aid communication in ALS/MND. British Journal of Neuroscience Nursing. 2009;5(5):227–230.
27. Hawley M, Enderby P, Green P, Cunningham S, Brownsell S, Carmichael J, Palmer R. A speech-controlled environmental control system for people with severe dysarthria. Medical Engineering & Physics. 2007;29(5):586–593. doi: 10.1016/j.medengphy.2006.06.009.
28. Hurtig R, Downey D. Augmentative and alternative communication in acute and critical care settings. Plural Publishing; San Diego, CA: 2009.
29. Hux K, Rankin-Erickson JL, Manasse NJ, Lauritzen E. Accuracy of three speech recognition systems: Case study of dysarthric speech. Augmentative and Alternative Communication. 2000;16:186–196.
30. Judge S, Robertson Z, Hawley M, Enderby P. Speech-driven environmental control systems—a qualitative analysis of users' perceptions. Disability & Rehabilitation: Assistive Technology. 2009;4:151–157. doi: 10.1080/17483100802715100.
31. Kim H-J, Ryu D. Computer control by tracking head movements for the disabled. Lecture Notes in Computer Science. 2006;4061:709–715.
32. Kubler A, Kotchoubey B, Kaiser J, Wolpaw JR, Birbaumer N. Brain-computer communication: Unlocking the locked in. Psychological Bulletin. 2001;127(3):358–375. doi: 10.1037/0033-2909.127.3.358.
33. Magnuson T, Blomberg M. Acoustic analysis of dysarthric speech and some implications for automatic speech recognition. TMH-QPSR. 2000;41:19–30.
34. Man D, Wong M-SL. Evaluation of computer-access solutions for students with quadriplegic athetoid cerebral palsy. The American Journal of Occupational Therapy. 2007;61(3):355–364. doi: 10.5014/ajot.61.3.355.
35. McKinley W, Tewksbury MA, Sitter P, Reed J, Floyd S. Assistive technology and computer adaptations for individuals with spinal cord injury. NeuroRehabilitation. 2004;19(2):141–146.
36. McNaughton D, Bryen D. Enhancing participation in employment through AAC technologies. Assistive Technology. 2002;14:58–70. doi: 10.1080/10400435.2002.10132055.
37. McNaughton D, Light J, Groszyk L. "Don't give up": Employment experiences of individuals with amyotrophic lateral sclerosis who use augmentative and alternative communication. Augmentative and Alternative Communication. 2001;17:179–195.
38. Orhan U, Erdogmus D, Roark B, Purwar S, Hild KE, Oken B, Fried-Oken M. Fusion with language models improves spelling accuracy for ERP-based brain computer interface spellers. Proceedings of the IEEE Engineering in Medicine and Biology Society International Conference; Boston, MA. 2011.
39. Oken BS, Phillips TS. Evoked potentials: Clinical. In: Squire LR, editor. Encyclopedia of Neuroscience. Vol. 4. Academic Press; Oxford: 2009. pp. 19–28.
40. Omar S, Morales C, Cox SJ. Modeling errors in automatic speech recognition for dysarthric speakers. EURASIP Journal on Advances in Signal Processing. 2009. Retrieved August 26, 2011, from http://www.hindawi.com/journals/asp/2009/308340.html.
41. Plum F, Posner JB. The diagnosis of stupor and coma. 3rd ed. Davis; Philadelphia: 1983.
42. Raghavendra P, Rosengren E, Hunnicutt S. An investigation of different degrees of dysarthric speech as input to speaker-adaptive and speaker-dependent recognition systems. Augmentative and Alternative Communication. 2001;17:265–275.
43. Roark B, de Villiers J, Gibbons C, Fried-Oken M. Scanning methods and language modeling for binary switch typing. Proceedings of the North American Association for Computational Linguistics; Los Angeles, CA. 2010.
44. Roark B, Gibbons C, Fried-Oken M. Binary coding with language models for EEG-based access methods. International Society for Augmentative and Alternative Communication Biennial Conference; Barcelona, Spain. 2010.
45. Sellers EW, Krusienski DJ, McFarland DJ, Vaughan TM, Wolpaw JR. A P300 event-related potential brain–computer interface (BCI): The effects of matrix size and inter stimulus interval on performance. Biological Psychology. 2006;73:242–252. doi: 10.1016/j.biopsycho.2006.04.007.
46. Shane H, Blackstone S, Vanderheiden G, Williams M, DeRuyter F. Using AAC technology to access the world. Assistive Technology. 2012;24:3–13. doi: 10.1080/10400435.2011.648716.
47. Treder MS, Blankertz B. (C)overt attention and visual speller design in an ERP-based brain-computer interface. Behavioral and Brain Functions. 2010;6:28. doi: 10.1186/1744-9081-6-28.
48. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM. Brain-computer interfaces for communication and control. Clinical Neurophysiology. 2002;113:767–791. doi: 10.1016/s1388-2457(02)00057-3.
49. Wolpaw JR, Birbaumer N. Brain-computer interfaces for communication and control. In: Selzer M, Clarke S, Cohen L, Duncan PW, Gage F, editors. Textbook of neural repair and rehabilitation: Volume 1. Cambridge University Press; Cambridge: 2006. pp. 602–614.
50. Young V, Mihailidis A. Difficulties in automatic speech recognition of dysarthric speakers and implications for speech-based applications used by the elderly: A literature review. Assistive Technology. 2010;22:99–112. doi: 10.1080/10400435.2010.483646.
