Abstract
Background
Some non-invasive brain computer interface (BCI) systems are currently available for locked-in syndrome (LIS) but none have incorporated a statistical language model during text generation.
Objective
To begin to address the communication needs of individuals with LIS using a non-invasive BCI that involves Rapid Serial Visual Presentation (RSVP) of symbols and a unique classifier with EEG and language model fusion.
Methods
The RSVP Keyboard™ was developed with several unique features. Individual letters are presented at 2.5 per second. Computer classification of letters as targets or non-targets based on EEG is performed using machine learning that incorporates a language model for letter prediction via Bayesian fusion, enabling targets to be presented only 1–4 times. Nine participants with LIS and nine healthy controls were enrolled. After screening, subjects first calibrated the system and then completed a series of balanced word generation mastery tasks designed with five incremental levels of difficulty; difficulty increased by selecting phrases for which the utility of the language model naturally decreased.
Results
Six participants with LIS and nine controls completed the experiment. All LIS participants successfully mastered spelling at level one and one subject achieved level five. Six of nine control participants achieved level five.
Conclusions
Individuals who have incomplete LIS may benefit from an EEG-based BCI system that relies on EEG classification and a statistical language model. Steps to further improve the system are discussed.
Introduction
Locked-In Syndrome (LIS) consists of tetraplegia and anarthria with preserved consciousness, with three levels of severity. Classical LIS describes individuals whose voluntary movement is limited to blinking and vertical eye movements. Incomplete LIS refers to individuals who demonstrate voluntary movement other than blinking or eye movement, and total LIS to those without any voluntary muscle function whatsoever.1, 2 LIS etiologies include brainstem stroke, traumatic brain injury, and neurodegenerative conditions such as advanced amyotrophic lateral sclerosis.3 Incomplete LIS can be defined functionally as a condition in which individuals cannot consistently rely on oral motor speech or upper extremity function to meet environmental control or communication needs. In addition to the above etiologies, these disabilities may also result from cerebral palsy, muscular dystrophy, multiple sclerosis, Parkinson's disease, Parkinson's plus syndromes, and brain tumors. This significantly increases the number of individuals who fit within a definition of LIS and can benefit from a brain computer interface (BCI), and it offers a broad perspective on their functional status for rehabilitation and medical management.
Ischemic strokes are the most common cause of classical LIS, which has a prevalence of 1-2 per million.4 Incomplete LIS, which includes additional diagnoses, has an uncertain but significantly greater prevalence. The usual age of onset of LIS varies between 17 and 52 years.5-7 The youngest patients have a better prognosis for survival, with more than 85% of individuals still living ten years after onset.5, 6 With advances in medical technology, life expectancy will likely increase.
Expressive communication (both speech and writing) is a significant challenge for individuals with LIS. People with classical LIS rely on blinking or eye movements to communicate via yes/no responses or partner-assisted communication methods, or to control a speech-generating device.3, 8, 9 Individuals who present with incomplete LIS may have additional options for gestural communication or alternative access to a speech-generating device.10, 11 However, even these methods may not be reliable due to fatigue or variability in motor function2 and those with degenerative conditions may transition to total LIS and lose the ability to communicate even through blinking or eye movements.12 Current efforts in assistive technology have resulted in new access methods for people with severe neuromuscular impairments.13, 14 BCI is a promising option for people with LIS.
BCI uses brain signals to provide a non-motor communication channel for people with severely limited motor control. Considerable research effort is being invested in EEG-based BCIs, using both non-invasive scalp recordings and invasive electrocorticography, in both human and animal models.15 Among non-invasive EEG-based BCI options, the most commonly used spelling interface is the BCI2000 with P300 speller.16, 17 The P300 response has been shown to be a reliable signal for controlling a BCI for a number of functions, including text generation.17 The P300 speller presents a grid of characters arranged in a 6 × 6 matrix. Rows and columns flash in random order, so the target cell, represented by an intersection, flashes with a probability of 1/6. The rare brightening of the target stimulus elicits a P300 response18 that is identified by the computer program and interpreted as a ‘keystroke’.19 A second spelling application is the Berlin BCI: Hex-o-spell.20, 21 In the usual configuration, a user focuses on six hexagonal fields surrounding a circle. In each of the fields, five letters or other symbols are arranged. To select a symbol, the user imagines directing a small arrow to the desired target. Successful imagination results in that field being chosen. All other hexagons are then cleared, and the five symbols of the selected hexagon are moved to a new set of six individual hexagons. The user then repeats the same procedure to select one symbol.20
The current non-invasive BCIs allow people with LIS to access letters for communication and computer control. The current systems do not integrate a language model with signal detection for letter selection, although some systems have used predictive spelling after the classifier has decided on the correct letter.22 Statistical language models assign probabilities to text. High-utility probabilistic language models can be estimated from a large sample of text in any language by counting how often letters occur in particular contexts. They are important components of many computer-based natural language applications, including speech recognition, machine translation, and optical character recognition. They are also used to make text entry more efficient in word processors and in text messaging on cell phones. These same statistical language models are now frequently used to speed up text entry in non-BCI communication devices for individuals with severe speech and language disorders.23
This paper describes the development and implementation of the RSVP Keyboard™, the first BCI device for people with LIS that tightly fuses a language model with an EEG classifier for effective and efficient spelling and expressive communication.
Methods
Participants
Participants with LIS were recruited through the ALS Center of Oregon, the ALS Association Oregon and SW Washington Chapter, and the outpatient Neurology and Augmentative and Alternative Communication clinics at Oregon Health & Science University (OHSU). Control participants included a convenience sample of people without disabilities. Participants with LIS met the following inclusion criteria: (1) diagnosed by a neurologist; (2) age between 18 and 75 years; (3) capable of participating in one- to three-hour experimental interactions; (4) literate in English; (5) adequate vision and hearing; (6) speech that is understood less than 25% of the time (as assessed by the referring speech-language pathologist) or (7) severely reduced hand function for writing and/or typing; and (8) willing to be videotaped for research purposes. Participants in the control group met criteria 2 through 5. All participants completed the RSVP screening protocol that assessed requisite skills for use of the RSVP Keyboard™.
The screening protocol addresses history and participant/caregiver perception of current sensory abilities; performance on subtests from existing standardized assessment instruments for auditory comprehension, reading, and spelling; and performance on novel tasks developed to screen sustained visual attention, working memory, and the ability to perceive stimuli in all four visual quadrants.24, 25 The screening protocol required only minimal movement responses and was completed by all participants with LIS.
This study was approved by the Institutional Review Board at OHSU and all participants provided informed consent. Participants with LIS authorized a relative or caregiver to sign the consent forms on their behalf, via yes/no signals or other alternative means of communication.
Procedure
The experimental sessions were performed at the residences of people with LIS and at OHSU for the non-disabled group. EEG was recorded using a 16-channel g.USBamp (g.tec, Graz, Austria) with active electrodes in a cap at approximate 10-20 system locations. The reference electrode was placed at TP10 and the ground at FpZ. The raw EEG was grossly inspected for signal quality. A 500 msec window of EEG following each character presentation was used for the signal analysis of the classifier, reducing the detection problem to a binary classification between target and non-target stimuli.26
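As a concrete illustration of this epoching step, the sketch below is our own minimal Python example (the array layout and function name are assumptions, not the authors' code); it slices a 500 msec multichannel window after each stimulus onset:

```python
import numpy as np

def extract_epochs(eeg, onset_samples, fs=256, window_s=0.5):
    """Slice a 500 msec window of multichannel EEG after each character onset.

    eeg: array of shape (n_channels, n_samples).
    onset_samples: sample indices of stimulus onsets.
    Returns an array of shape (n_trials, n_channels, fs * window_s)."""
    n = int(fs * window_s)
    return np.stack([eeg[:, o:o + n] for o in onset_samples])
```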
Stimuli were presented in a rapid serial visual presentation (RSVP) paradigm.26 The stimuli consisted of the 26 letters plus “<” for deleting the prior character and “_” for space, and subtended a visual angle of 3.8 degrees. Individual characters were presented singly at 2.5 per sec and were on screen for 400 msec. The stimuli were presented on an 18” laptop computer monitor positioned 75 cm away from the participants.
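For orientation, the physical stimulus size implied by the reported viewing geometry follows from the standard visual angle relation (a back-of-the-envelope check on our part, not a value reported by the authors):

```latex
\theta = 2\arctan\!\left(\frac{s}{2d}\right)
\;\Rightarrow\;
s = 2d\tan\!\left(\frac{\theta}{2}\right)
  = 2\,(75\,\text{cm})\tan(1.9^{\circ}) \approx 5\,\text{cm}.
```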
Participants first had a calibration session in which the classifier was trained after they viewed 75 sequences of the 28 characters. Prior to each sequence, the participants were briefly shown a target letter or character they were instructed to detect.
Mastery task
After calibration of the RSVP Keyboard™, participants performed the mastery task. The mastery task was designed to provide practice opportunities to improve user performance. Varying levels of contribution from the language model (described in detail below) were used to minimize errors and reduce frustration, to encourage further practice. During the task, participants were presented with a pre-selected set of balanced phrases27 one at a time, and were asked to copy a target word from each phrase using the RSVP Keyboard™. They were instructed to correct any errors by selecting the “<”. The 28 characters were presented in 2 blocks of 14, with a fixed pause of 1 sec between blocks and sequences. The presentation order was randomized with the constraint that all 28 characters appeared once in each sequence of 28. One to four sequences were presented for each character classifier decision. A classifier decision, with input from the language model, was made after a sequence if the classifier achieved a certain probability threshold. After classification, a decision screen presented the classifier's character choice, followed by a one-second pause before going on to the next character.
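A minimal sketch of the sequence construction described above (our own illustration; the function name and use of Python's random module are assumptions): each sequence is a random permutation of all 28 symbols split into two blocks of 14.

```python
import random

SYMBOLS = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ") + ["<", "_"]  # 26 letters, delete, space

def build_sequence(rng=random, block_size=14):
    """One presentation sequence: a random order of all 28 symbols,
    split into two blocks of 14 with a pause between them."""
    order = SYMBOLS[:]
    rng.shuffle(order)
    return order[:block_size], order[block_size:]

block1, block2 = build_sequence()
assert sorted(block1 + block2) == sorted(SYMBOLS)  # every symbol appears exactly once
```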
The program moved on to the next phrase when one of the following criteria was met: the target word was copied correctly; the participant spent 10 minutes attempting to type the same word; or the number of presented sequences exceeded eight times the number of letters in the target word. Each of the five mastery task levels included three sets of three phrases. A level was considered successfully completed if participants accurately generated a target word for two of the three phrases in a set. If the participant did not successfully complete the first set on a level, she could attempt up to two more sets at that level. The mastery task continued until the subject completed the fifth level, failed to pass a lower level, or opted to end the session. Error rates were calculated using the Total Error Rate formula.28
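The cited Total Error Rate metric28 is usually stated in terms of correct characters (C), incorrect-but-fixed characters (IF), and incorrect-and-not-fixed characters (INF); the formulation below is our restatement of that common definition, not an equation reproduced from this paper:

```latex
\text{Total Error Rate} \;=\; \frac{\mathit{INF} + \mathit{IF}}{C + \mathit{INF} + \mathit{IF}} \times 100\%
```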
The EEG was sampled at 256 Hz and bandpass filtered from 2.5 to 44 Hz. Artifact-contaminated sequences were rejected, and the trial repeated, if the average amplitude of any channel was higher than 40 μV. The 500 msec samples from each channel following each character presentation were further processed. A linear dimension reduction was applied using Principal Component Analysis (PCA) to remove zero-variance directions, essentially equivalent to employing a bank of eigenfilters on the EEG and downsampling their outputs to obtain features. Directions with a variance lower than 10^−5 of the maximum variance were removed. The energy in the removed directions was not exactly zero; however, it was assumed to be negligible compared to the higher-energy components. The number of dimensions per channel was reduced from 64 to approximately 48, where 64 time samples correspond to a half-second window after downsampling to 128 Hz. The total number of features, i.e. dimensions, for each trial was approximately 800. Subsequently, for each stimulus, the aggregate feature vector obtained from all the channels was further projected into a one-dimensional space using a Regularized Discriminant Analysis (RDA) classifier.29
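A minimal sketch of this feature-extraction pipeline, assuming scikit-learn's QuadraticDiscriminantAnalysis with its reg_param as a stand-in for Friedman's RDA29 (the authors' exact implementation may differ):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def pca_features(epochs, var_ratio=1e-5):
    """Per-channel PCA used as a bank of eigenfilters, dropping near-zero-variance directions.

    epochs: array (n_trials, n_channels, n_samples), i.e. the 500 ms windows
    after downsampling to 128 Hz (64 samples per channel)."""
    feats = []
    for ch in range(epochs.shape[1]):
        X = epochs[:, ch, :]                          # trials x time samples for one channel
        Xc = X - X.mean(axis=0)
        _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        keep = (s ** 2) > var_ratio * (s ** 2).max()  # discard directions with negligible variance
        feats.append(Xc @ Vt[keep].T)                 # project onto retained eigenfilters
    return np.concatenate(feats, axis=1)              # roughly 800 features per trial

def fit_rda(features, labels, reg=0.5):
    """Regularized discriminant classifier; labels are 1 = target, 0 = non-target."""
    return QuadraticDiscriminantAnalysis(reg_param=reg).fit(features, labels)
```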
The signal statistics required for PCA and RDA were learned during the calibration session described above, and the RDA model was fit to the calibration data. The accuracy of the classifier during calibration was estimated from the area under the curve (AUC) of the true positive rate versus the false positive rate for the calibration target versus non-target classification, under 10-fold cross-validation.
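A sketch of how such a cross-validated AUC could be computed with standard tooling (continuing the hypothetical feature matrix and labels from the sketch above):

```python
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

def calibration_auc(features, labels, reg=0.5):
    """AUC of target vs. non-target classification under 10-fold cross-validation."""
    clf = QuadraticDiscriminantAnalysis(reg_param=reg)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_predict(clf, features, labels, cv=cv, method="predict_proba")[:, 1]
    return roc_auc_score(labels, scores)
```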
Character-based language models were trained on a large sample of New York Times text from the English Gigaword corpus, following previously described methods.30 Briefly, the probability of each letter is conditioned on the previous five letters in the sentence. Using these models, experimental stimuli were generated with specific levels of predictability, to provide increasing levels of typing difficulty in the 5-stage mastery task. Target words at low mastery levels are highly predictable given the previous symbols; higher levels include less predictable words. Specifically, in level 1, each letter in the target word is at least 5 times more likely than the next highest-probability letter; level 2 target letters are at least 2 times more likely; level 3 target letters are always the most likely; level 4 target letters are never the most likely but always at least half as probable as the most likely letter; and level 5 target letters are between 0.3 and 0.5 times as probable as the most likely letter. While the first letter of a word is usually less accurately predicted by language models than later letters, these particular words were chosen so that the language model worked comparably well for the first letter as for later letters of a word. Using the model, many possible stimuli fitting the mastery criteria were found in the New York Times corpus, then hand-filtered for linguistic variety and incorporated into natural-sounding phrases.
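The letter-prediction idea can be illustrated with a toy character 6-gram model (the published models were trained on the New York Times portion of the English Gigaword corpus with the methods of ref. 30; the add-alpha smoothing below is a simplification we introduce only for the example):

```python
from collections import Counter, defaultdict

ALPHABET = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ_")  # 27 symbols; "_" stands in for space

def train_char_ngram(corpus_lines, order=6):
    """Count character n-grams: each letter conditioned on the previous 5 characters."""
    context_counts = defaultdict(Counter)
    for line in corpus_lines:
        text = "_" * (order - 1) + line.upper().replace(" ", "_")
        text = "".join(c for c in text if c in ALPHABET)
        for i in range(order - 1, len(text)):
            context_counts[text[i - order + 1:i]][text[i]] += 1
    return context_counts

def letter_probs(context_counts, history, order=6, alpha=0.1):
    """P(next letter | previous 5 characters), with simple add-alpha smoothing;
    the cited models use more sophisticated estimation (ref. 30)."""
    context = ("_" * (order - 1) + history.upper().replace(" ", "_"))[-(order - 1):]
    counts = context_counts.get(context, Counter())
    total = sum(counts.values()) + alpha * len(ALPHABET)
    return {c: (counts[c] + alpha) / total for c in ALPHABET}
```

In the level-1 example from Table 2 (I_DO_“NOT”_AGREE), each target letter is at least five times more likely than its closest competitor under such a model, which is the property exploited at the easier mastery levels.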
To improve character classification, evidence from the language model and from the EEG is tightly combined. The score corresponding to each trial stimulus is obtained after the EEG feature extraction step as the output of the RDA. Based on these scores, relative likelihoods for the target and non-target classes are estimated using kernel density estimation. A probabilistic Bayesian fusion is then performed under the assumptions that the EEG evidence in each epoch is conditionally independent of prior epochs and of the language model evidence; that is, a naïve Bayesian fusion is applied between the language model and the EEG given the class label (target or non-target) of each trial. The probability of each symbol being the intended one changes according to the EEG evidence collected over new repetitions of the sequences. As more EEG evidence is collected, the effect of the language model becomes less prominent. Even a target letter with a probability of 0.0001 according to the language model may be selected after collecting EEG evidence corresponding to multiple sequences. After the probabilities are updated, the symbol with the maximum a posteriori probability is selected by the system either once this probability exceeds a preset confidence threshold or when a preset maximum allowed number of sequences is reached.26
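A minimal sketch of this fusion and stopping rule (our own illustration with hypothetical names; the deployed system derives the per-symbol likelihood ratios from kernel density estimates of the RDA scores26):

```python
import numpy as np

def fuse_and_decide(lm_prior, sequence_likelihood_ratios, threshold=0.9, max_sequences=4):
    """Naive-Bayes fusion of the language-model prior with per-sequence EEG evidence.

    lm_prior: dict symbol -> P(symbol | typed history).
    sequence_likelihood_ratios: one dict per presented sequence mapping
        symbol -> p(EEG score | target) / p(EEG score | non-target).
    Stops early once the maximum posterior exceeds the confidence threshold."""
    symbols = list(lm_prior)
    post = np.array([lm_prior[s] for s in symbols], dtype=float)
    post /= post.sum()
    used = 0
    for ratios in sequence_likelihood_ratios[:max_sequences]:
        post *= [ratios[s] for s in symbols]   # conditional-independence assumption
        post /= post.sum()
        used += 1
        if post.max() >= threshold:            # confident enough: decide now
            break
    return symbols[int(post.argmax())], float(post.max()), used
```

With a strong prior (low mastery levels) the threshold is often reached after a single sequence, while a weak prior (higher levels) pushes the decision toward the maximum number of sequences, consistent with the 1–4 presentations reported above.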
Results
Participants
Demographic information for nine participants with LIS and nine healthy controls is provided in Table 1. There were no significant differences between the two groups in terms of age, gender, or years of education. One participant did not pass the cognitive screening, either because of a lack of consistent motor response or because of being in a minimally conscious state. Two participants with LIS passed the screening but did not take the mastery task: one because of significant electrode problems and one because of hospitalization and a move to a new foster home (this person achieved AUCs of 0.79 and 0.88 on two calibration trials).
Table 1. Demographic information on 9 participants with LIS and 9 healthy controls.
| | LIS | Controls | p value |
|---|---|---|---|
| Age, mean (range) | 45.8 (27-65) | 45.2 (17-66) | 0.965a |
| Gender (M/F) | 7/2 | 4/5 | 0.147b |
| Ethnicity (% Caucasian) | 77.8 | 100 | 0.134b |
| Years of education (range) | 14.6 (12-23) | 18.2 (11-22) | 0.067a |
| First language English (n) | 8 | 9 | 0.303b |
| Level of familiarity with computer (some/expert) | 4/5 | 2/7 | 0.317b |
| Type of LIS (n) | | | |
| Incomplete | 6 | | |
| Classical | 2 | | |
| Total | 1 | | |
| Years since LIS onset | 14.8 (1-55)c | | |

a Mann-Whitney test. b Chi-square test. c Years since LIS onset was calculated for all participants with LIS to the earliest time they had LIS, either classical or incomplete.
The cause of LIS was ALS (4), brainstem stroke (2), cerebral palsy (1), brainstem AVM (1), and Duchenne muscular dystrophy (1).
Mastery Task Performance
Table 2 presents results from the mastery task on the remaining six participants with LIS and nine healthy controls. All participants completed at least the first level of the RSVP Keyboard™ mastery task. The number of successful participants decreased at higher mastery task levels. Higher AUC scores were required for success at higher mastery task levels. Six of nine control participants completed all five mastery levels, compared to only one of six participants with LIS.
Table 2. Mastery level completion, AUC and Total Error Rate for Participants with LIS and Control Groups.
| Mastery task level | Average word length (letters) | Mean word frequency: total corpus / same 5-grams | LIS (n = 6): participants who completed level | LIS: AUC | LIS: total error rate (%) | LIS: correct characters per min | Controls (n = 9): participants who completed level | Controls: AUC | Controls: total error rate (%) | Controls: correct characters per min |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 3.0 | .02/.15 | 6 | .73 (.56-.93) | 9.1 (0-66.7) | 1.8 (0.9-3.7) | 9 | .81 (.69-.92) | 5.9 (0-71.4) | 2.5 (1.0-3.5) |
| 2 | 4.0 | .003/.24 | 4 | .76 (.56-.93) | 9.3 (0-71.4) | 1.5 (0.0-2.5) | 8 | .81 (.69-.92) | 10.2 (0-100) | 2.3 (0.6-3.1) |
| 3 | 3.5 | .001/.003 | 2 | .83 (.71-.93) | 4.1 (0-25) | 0.4 (0.0-0.9) | 7 | .83 (.73-.92) | 6.7 (0-33.3) | 1.5 (0.2-2.6) |
| 4 | 4.0 | .0001/.0002 | 1 | .92 (.91-.93)* | 0 (0) | 3.3 (3.3-3.3) | 6 | .86 (.79-.92) | 21.0 (0-71.4) | 1.2 (0.3-2.1) |
| 5 | 4.0 | .00001/.00005 | 1 | .92 (.91-.93)* | 15.4 (0-57.1) | 2.3 (1.6-3.0) | 6 | .86 (.79-.92) | 11.3 (0-33.3) | 1.6 (0.9-2.7) |

Mean word frequency is given as the number of occurrences of the target words divided by the total corpus count / the number of occurrences of the target words with the same 5 preceding characters divided by the occurrence count of those 5 preceding characters.
AUC and total error rate include only participants who successfully completed a given level.
Values presented as: mean (range). Examples of the 5 mastery levels with the target word in quotations are: Level 1: I_DO_“NOT”_AGREE; Level 2: IN_NEW_“YORK”_CITY; Level 3: EAT_THREE_TIMES_A_“DAY”; Level 4: MY_PARENTS_“FIND”_ME_FUNNY; Level 5: THE_MAN_WITH_“WAVY”_EYEBROWS. The probability of letters in the target word ranges from 5 times more likely than the next most likely letter (level 1) to 0.3 times as likely as the most likely letter (level 5).
* One participant with LIS completed levels 4 and 5 during two different sessions, which is why there is a range here even though there was only one successful participant. All levels contain data from multiple sessions with the same participant.
On average, control participants had significantly higher maximum AUC scores than participants with LIS (Mann-Whitney p = 0.045), and tended to reach higher levels in the mastery task (p = 0.069). Although the number of sessions required to either complete level five or fail to pass a level varied from subject to subject (due to time constraints, individual performance, and/or fatigue), there was no significant difference between the two groups (p = 0.414). These results are displayed in Table 3.
Table 3. Highest mastery level achieved and number of sessions.
| | Participants with LIS | Controls | p |
|---|---|---|---|
| Maximum AUC score | .71 (.62-.93) | .83 (.70-.92) | 0.045 |
| Highest level completed | 2.3 (1-5) | 4.0 (1-5) | 0.069 |
| Number of sessions | 1.7 (1-3) | 1.3 (1-2) | 0.414 |

Mean (range); p value (2-tailed Mann-Whitney); AUC = area under the curve; LIS = locked-in syndrome.
Several participants with LIS consistently achieved low AUC scores during calibration attempts. Two of these participants, both with significant spasticity, demonstrated frequent, uncontrolled movements of the facial and respiratory muscles that interfered with accurate EEG signal acquisition.
Discussion
For the first time, we have demonstrated the utility of fusing an EEG classifier with a statistical language model for spelling with a brain computer interface in participants with LIS. Equipment organization, transport, and protocols were streamlined to allow research assistants to set up the RSVP Keyboard™ with ease in 45 minutes. The RSVP Keyboard™ quickly presents one large letter at a time on the screen for 400 ms (or shorter), thus reducing the visual perceptual demands compared to other, more complicated BCI displays. Through calibration and mastery tasks, EEG signals were recorded for up to 5 hours in participants' homes. The mastery task is another unique feature that allows participants to utilize the BCI to spell words with minimal errors, owing to the strong contribution of the language model at the lower levels. This 5-level mastery exercise is suited for functional training of BCI use as well as for experimental manipulation.
Non-disabled participants performed better using the BCI system than those with LIS, as has been observed previously.31 Some participants with LIS demonstrated low AUCs. It is possible that those with LIS have more difficulty with sustained attention, but other potential confounds have not been fully addressed with these small samples. Possible causes include medications32, 33 and additional electronic equipment (two participants were on ventilators). Five of the six participants with LIS were taking at least one EEG-altering medication. However, one subject with LIS achieved the highest AUC of all participants despite being on a ventilator and taking two different antidepressants.
While the RSVP Keyboard™ is usable in a small subset of people with LIS in its current form, there are limitations that will be addressed in future versions. Current techniques for real-time artifact rejection or minimization are not ideal. While independent component analysis is frequently used to subtract eye blink artifact in off-line EEG analysis,34 doing this in real time is more difficult. Eye blinks need to be detected, and not just subtracted, so that ERPs to stimuli presented during a blink are not used for classification. Active electrodes were used because they require less stringent contact quality than the typical 5 kΩ impedance needed for passive EEG electrode recordings. Electrode artifacts during the mastery task or spelling are also a potential problem because the calibration utilized all electrodes. Electrode locations do not need to be exactly measured, since the classifier can utilize any electrode location and the classifier is partly unique to a subject and a recording session. It is nonetheless reasonable to keep placement as consistent as possible across days within a subject, in order to significantly shorten the calibration time for each session. There were occasional problems with 60 Hz sources in people's homes.
For this study, all 28 possible characters were presented in each sequence for the classifier. The unique integration of a statistical language model has allowed a newer developmental version to present only a subset of the 28 characters: the more likely ones. The current stimulus rate of 2.5 Hz allowed novice users to learn the RSVP paradigm, but those familiar with the task have been able to go faster than 5 Hz. We are currently exploring the feasibility of the 5 Hz presentation rate in subjects with LIS. The language model currently uses a standard corpus based on written English, but this will possibly be improved with other corpora and with individualization of the language model based on participants' prior text-based communication using email or other communication devices.
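As a sketch of how such a subset could be chosen (an illustration of the idea only, not the implemented algorithm), the language-model probabilities for the current context can simply be ranked:

```python
def subset_to_present(letter_probabilities, k=14):
    """Return the k most probable symbols under the language model
    for the next presentation sequence."""
    return sorted(letter_probabilities, key=letter_probabilities.get, reverse=True)[:k]
```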
Characters per minute was fairly low, as with all BCI systems. It is of interest that the correct characters per minute is not markedly different from rates reported for other systems with very different letter presentation strategies. It would be useful in the future to design experiments to directly compare performance using different presentation strategies with and without fused language models. For this experiment, no direct comparison with another system was performed.
There were participant issues related to having significant neurological dysfunction. Two participants had uncontrolled movements that made useful EEG recording unreliable. This might be overcome with much-improved, subject-specific artifact detection and minimization procedures. It was occasionally difficult to physically position the participants with LIS to comfortably maintain gaze at the laptop computer monitor. Participants, both those with LIS and healthy controls, did fatigue from the task. It should be feasible to integrate physiological markers of decreased alertness or vigilance35 into the system and then take breaks or stimulate participants in other ways to increase alertness.
The RSVP Keyboard™ BCI has fused an EEG classifier with a statistical language prediction model in real time, for the first time in people with incomplete LIS, allowing spelling with only 1-4 target letter presentations. The plan is to continue to refine the methodology to speed up the spelling rate and to allow its use by people who are completely locked in.
Acknowledgments
Supported by grants NIH R01 DC009834, NSF IIS-0914808, NSF CNS-1136027, and NSF IIS-1149570. The authors acknowledge technical contributions from Shalini Purwar and Dr. Kenneth Hild, III. The contents of this paper are solely the responsibility of the authors and do not necessarily represent the official views of the NIH or NSF.
References
1. Bauer G, Gerstenbrand F, Rumpl E. Varieties of the locked-in syndrome. Journal of Neurology. 1979;221:77–91. doi: 10.1007/BF00313105.
2. Smith E, Delargy M. Locked-in syndrome. British Medical Journal. 2005;330:406–409. doi: 10.1136/bmj.330.7488.406.
3. Laureys S, Pellas F, Van Eeckhout P, et al. The locked-in syndrome: what is it like to be conscious but paralyzed and voiceless? Progress in Brain Research. 2005;150:495–511. doi: 10.1016/S0079-6123(05)50034-7.
4. Schjolberg A, Sunnerhagen KS. Unlocking the locked in; a need for team approach in rehabilitation of survivors with locked-in syndrome. Acta Neurologica Scandinavica. 2012;125:192–198. doi: 10.1111/j.1600-0404.2011.01552.x.
5. Doble JE, Haig AJ, Anderson C, Katz R. Impairment, activity, participation, life satisfaction, and survival in persons with locked-in syndrome for over a decade: follow-up on a previously reported cohort. The Journal of Head Trauma Rehabilitation. 2003;18:435–444. doi: 10.1097/00001199-200309000-00005.
6. Cassanova E, Lazzari RE, Lotta S, Mazzuchi A. Locked-in syndrome: improvement in the prognosis after an early intensive multidisciplinary rehabilitation. Archives of Physical Medicine and Rehabilitation. 2003;84:862–867. doi: 10.1016/s0003-9993(03)00008-x.
7. Beaudoin N, De Serres L. Locked-in syndrome. In: Stone JH, Blouin M, editors. International Encyclopedia of Rehabilitation. 2012. Available online: http://cirrie.buffalo.edu/encyclopedia/en/article/303/.
8. Trojano L, Moretta P, Estraneo A, Santoro L. Neuropsychologic assessment and cognitive rehabilitation in a patient with locked-in syndrome and left neglect. Archives of Physical Medicine & Rehabilitation. 2010;91:498–502. doi: 10.1016/j.apmr.2009.10.033.
9. Ball LJ, Beukelman DR, Fager SK, et al. Eye-gaze access to AAC technology for people with amyotrophic lateral sclerosis. Journal of Medical Speech-Language Pathology. 2010;18:11–23.
10. Soderholm S, Meinander M, Alaranta H. Augmentative and alternative communication methods in locked-in syndrome. Journal of Rehabilitation Medicine. 2001;33:235–239. doi: 10.1080/165019701750419644.
11. Fager S, Beukelman D, Karantounis R, Jakobs T. Use of safe-laser access technology to increase head movement in persons with severe motor impairment: a series of case reports. AAC: Augmentative & Alternative Communication. 2006;22:222–229. doi: 10.1080/07434610600650318.
12. Murguialday AR, Hill J, Bensch M, et al. Transition from the locked in to the completely locked-in state: a physiological analysis. Clinical Neurophysiology. 2011;122:925–933. doi: 10.1016/j.clinph.2010.08.019.
13. Ball LJ, Fager S, Fried-Oken M. Augmentative and alternative communication for people with progressive neuromuscular disease. Physical Medicine & Rehabilitation Clinics of North America. 2012;23:689–699. doi: 10.1016/j.pmr.2012.06.003.
14. Fager S, Beukelman DR, Fried-Oken M, Jakobs T, Baker J. Access interface strategies. Assistive Technology. 2011;24:25–33. doi: 10.1080/10400435.2011.648712.
15. Wolpaw JR, Wolpaw EW, editors. Brain-Computer Interfaces: Principles and Practice. USA: Oxford University Press; 2012.
16. Schalk G, McFarland DJ, Hinterberger T, Birbaumer N, Wolpaw JR. BCI2000: a general-purpose brain-computer interface (BCI) system. IEEE Transactions on Biomedical Engineering. 2004;51:1034–1043. doi: 10.1109/TBME.2004.827072.
17. Farwell LA, Donchin E. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology. 1988;70:510–523. doi: 10.1016/0013-4694(88)90149-6.
18. Fabiani M, Gratton G, Karis D, Donchin E. Definition, identification, and reliability of measurement of the P300 component of the event-related brain potential. Advances in Psychophysiology. 1987;2:1–78.
19. Krusienski DJ, Sellers EW, McFarland DJ, Vaughan TM, Wolpaw JR. Toward enhanced P300 speller performance. Journal of Neuroscience Methods. 2008;167:15–21. doi: 10.1016/j.jneumeth.2007.07.017.
20. Blankertz B, Dornhege G, Krauledat M, et al. The Berlin Brain-Computer Interface presents the novel mental typewriter Hex-o-Spell. Proceedings of the 3rd International Brain-Computer Interface Workshop and Training Course; 2006; Graz: Verlag der Technischen Universität Graz; pp. 108–109.
21. Treder MS, Schmidt NM, Blankertz B. Towards gaze-independent visual brain-computer interfaces. Frontiers in Computational Neuroscience. 2010.
22. Ryan DB, Frye GE, Townsend G, et al. Predictive spelling with a P300-based brain-computer interface: increasing the rate of communication. International Journal of Human-Computer Interaction. 2011;27:69–84. doi: 10.1080/10447318.2011.535754.
23. Higginbotham J, Moulton B, Lesher G, Roark B. The application of natural language processing to augmentative and alternative communication. Assistive Technology. 2012;24:14–24. doi: 10.1080/10400435.2011.648714.
24. Fried-Oken M, Oken B, Erdogmus D, et al. A brain computer interface using the RSVP keyboard for users who are locked-in. International Society on Augmentative and Alternative Communication; Pittsburgh, PA; 2012. Available online: http://aac-rerc.psu.edu/documents/Fried_Oken_et_al_RSVP_2012.pdf.
25. Fried-Oken M, Mooney A, Oken B, Peters B. Clinical screening protocol for RSVP keyboard BCI use. Fifth International Conference on Brain-Computer Interface; Pacific Grove, CA; 2013.
26. Orhan U, Hild KE, Erdogmus D, Roark B, Oken B, Fried-Oken M. RSVP Keyboard: an EEG based typing interface. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP); 2012; pp. 645–648.
27. MacKenzie IS, Soukoreff RW. Phrase sets for evaluating text entry techniques. ACM Conference on Human Factors in Computing Systems (CHI); 2003; pp. 754–755.
28. Soukoreff RW. Quantifying text entry performance. Toronto, Ontario: York University; 2010.
29. Friedman JH. Regularized discriminant analysis. Journal of the American Statistical Association. 1989;84:165–175.
30. Roark B, Beckley R, Gibbons C, Fried-Oken M. Huffman scanning: using language models within fixed-grid keyboard emulation. Computer Speech and Language. 2012. doi: 10.1016/j.csl.2012.10.006. Available online 23 October 2012.
31. Brunner P, Joshi S, Briskin S, Wolpaw JR, Bischof H, Schalk G. Does the ‘P300’ speller depend on eye gaze? Journal of Neural Engineering. 2010;7:056013. doi: 10.1088/1741-2560/7/5/056013.
32. Meador KJ. Cognitive side effects of medications. Neurologic Clinics. 1998;16:141–155. doi: 10.1016/s0733-8619(05)70371-6.
33. Polich J, Criado JR. Neuropsychology and neuropharmacology of P3a and P3b. International Journal of Psychophysiology. 2006;60:172–185. doi: 10.1016/j.ijpsycho.2005.12.012.
34. Jung T-P, Makeig S, Humphries C, et al. Removing electroencephalographic artifacts by blind source separation. Psychophysiology. 2000:163–178.
- 35.Oken BS, Salinsky MC, Elsas SM. Vigilance, alertness, or sustained attention: Physiological basis and measurement. Clinical Neurophysiology. 2006;117:1885–1901. doi: 10.1016/j.clinph.2006.01.017. [DOI] [PMC free article] [PubMed] [Google Scholar]