Author manuscript; available in PMC: 2011 Jul 22.
Published in final edited form as: ACM Trans Access Comput. 2009 Mar;1(3):15. doi: 10.1145/1497302.1497305

The Effect of Voice Output on the AAC-Supported Conversations of Persons with Alzheimer’s Disease

Melanie Fried-Oken 1, Charity Rowland 1, Glory Baker 1, Mayling Dixon 1, Carolyn Mills 1, Darlene Schultz 1, Barry Oken 1
PMCID: PMC3141213  NIHMSID: NIHMS268209  PMID: 21785666

Abstract

The purpose of this study was to determine whether the presence or absence of digitized 1–2 word voice output on a direct selection, customized augmentative and alternative communication (AAC) device would affect the impoverished conversations of persons with dementia. Thirty adults with moderate Alzheimer’s disease participated in two personally relevant conversations with an AAC device. For 12 of the participants the AAC device included voice output. The AAC device was the Flexiboard™ containing 16 messages needed to discuss a favorite autobiographical topic chosen by the participant and his/her family caregivers. Ten-minute conversations were videotaped in participants’ residences and analyzed for four conversational measures related to the participants’ communicative behavior. Results show that AAC devices with digitized voice output depress conversational performance and distract participants with moderate Alzheimer’s disease as compared to similar devices without voice output. There were significantly more 1-word utterances and fewer total utterances when AAC devices included voice output, and the rate of topic elaborations/initiations was significantly lower when voice output was present. Discussion about the novelty of voice output for this population of elders and the need to train elders to use this technology is provided.

General Terms: Human Factors, Performance

Additional Key Words and Phrases: Dementia, Alzheimer’s disease, Augmentative and Alternative Communication (AAC), language, digitized speech synthesis

1. INTRODUCTION

Dementia is a neurodegenerative disorder, diagnosed when an individual shows impairment or related changes in memory and one other cognitive domain (i.e., language, abstract thinking, judgment, executive function) that are sufficiently severe to affect social and occupational functioning and that reflect a decline from a previously higher level of functioning [American Psychiatric Association 1994]. Specific dementia subtypes include Alzheimer’s disease (AD), vascular dementia, frontotemporal dementia, and dementia with Lewy bodies [Sjogren et al. 2003]. AD is the most common dementia syndrome, with prevalence estimated at 5 million Americans age 65 years and over [Hebert et al. 2001]. The Alzheimer’s Association [2008] reports that 13 percent of Americans over the age of 65 (or one in eight people) present with AD.

During the course of the disease, individuals lose cognitive-communication skills in predictable ways. Disturbances in language are due in part to deterioration of memory function [Bayles and Tomoeda 1995]. Skill decline is often divided into three stages: mild (or early stage), moderate (or mid stage) and severe (or end stage). In the moderate stage of AD (modAD), individuals may show poor comprehension of written material and poor writing skills. There is noticeably reduced verbal output and the person has difficulty expressing a series of related ideas and staying “on track” in conversations. There are a number of spared language skills, however, that can be utilized to facilitate communication in the moderate stages. The person with modAD continues to exhibit relatively intact use of syntax, reads and comprehends single words, expresses his or her needs when supported by a communication partner, and follows two-stage commands. Individuals with modAD can demonstrate good recognition memory for information that they cannot recall freely [Lubinski 1995].

Given these strengths, it is plausible that augmentative and alternative communication (AAC) would be an intervention option to improve conversation. AAC, in the form of external aids that incorporate stimuli highly relevant to a person’s daily life, may rely on procedural memory, which includes familiar and spared skills (such as reading aloud or turning pages), for language support [Bourgeois 2006]. Memory aids include wallets, notebooks, calendars, signs, color codes, timers, communication boards, labels, and other tangible visible symbols that provide cues for interaction. In an initial study examining the effect of memory wallets on conversation, Bourgeois [1992] found decreases in the frequency of ambiguous, erroneous, perseverative, and unintelligible utterances, and significant increases in the number of factual statements produced by people with early- to middle-stage dementia during interactions with family members. Bourgeois et al. [2001] further demonstrated that memory aids used as an AAC strategy in nursing homes significantly improved the quality and quantity of discourse between nursing aides and residents with dementia. Indeed, Bourgeois and Hickey [2007] present clinical tools for AAC intervention based on their previous research, such as designing cue cards and memory books for word finding and conversation. One AAC variable that has received minimal attention for persons with AD is the output modality. Specifically, there is little discussion of the potential of voice output as a feature of memory aids.

1.1 AAC and Voice Output

Through the 1970s, AAC consisted of “unaided” methods, primarily manual signs and gestures. With unaided methods there is no distinction between input and output mode: for instance, the motion required to produce a manual sign constitutes the sign, or message, itself. As the field matured, “aided” options such as nonelectronic communication boards, books, or picture cards became popular. Input/output from nonelectronic aids was often a static picture or written word accessed by an indicating response such as pointing. The communication partner would provide a verbal confirmation by reading the word or interpreting the picture. The development of electronic communication devices over the past 30 years ushered in the use of voice output on AAC devices, as well as other smart language options (e.g., word prediction or abbreviation expansion). With the advent of voice output came the separation of input and output modes. Aided AAC devices now are described according to three features: input mode (how messages are represented to the user); output mode (how messages are presented to the conversational partner); and access method (how the user selects a message) [Vanderheiden 1985]. An alphabet-based speech generating device, for example, may use letters as the input mode, synthesized speech for the output mode, and direct selection with a forefinger or non-anatomical pointer (such as a head pointer) as the access method.
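As a concrete illustration of this three-feature description, the following sketch models the alphabet-based example above as a small data structure. It is illustrative only; the field names are our own shorthand, not standardized terminology from Vanderheiden [1985].

```python
from dataclasses import dataclass

@dataclass
class AidedAACDevice:
    """Three-feature description of an aided AAC device (illustrative field names)."""
    input_mode: str     # how messages are represented to the user (e.g., letters, pictures)
    output_mode: str    # how messages are presented to the partner (e.g., synthesized speech)
    access_method: str  # how the user selects a message (e.g., direct selection)

# The alphabet-based speech generating device described above:
sgd = AidedAACDevice(
    input_mode="letters",
    output_mode="synthesized speech",
    access_method="direct selection (forefinger or head pointer)",
)
```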

Speech as an output mode, whether digitized or synthesized, has received considerable attention for communication partners, learners, and learner-partner dyads [Schlosser 2003]. In most cases, voice output has been used as an alternative to speech for individuals who are severely speech impaired. Since persons with AD usually have perfectly adequate speech, this purpose for voice output is not relevant. More pertinent to persons with AD is the examination of voice output effects for individuals with language impairment associated with severe cognitive limitations. Romski and Sevcik [1996] found that young adults with severe cognitive impairments learned to make requests when using the System for Augmenting Language, which includes synthesized speech as one output mode. McGregor et al. [1992] taught a young adult with physical and intellectual disabilities to express requests for assistance, materials, a break, and comments using a speech generating device. Healy [1994] found that speech output was beneficial in helping a young man with intellectual disabilities to increase initiations and responses in natural settings, while these communication functions were only maintained with non-voice output modes. Additionally, Schlosser [2003] showed the potential of voice output to reduce the challenging behaviors of adults with significant developmental disabilities. Clinical rationale exists for the use of digitized voice output in AAC devices for persons with dementia. The therapeutic technique of partner-assisted communication, whereby a conversant supplies word cues to persons with AD to reduce their struggle to find words, can be compared to the computer-assisted communication of the AAC device. Lexical therapy has been shown to help patients with AD recall and learn new words [Ousset et al. 2002]. Cueing also has been found to reduce verbal perseverations in adults with language impairments [Corbett et al. 2008], a behavior observed frequently in the conversations of persons with dementia. The digitized word cues available during personalized conversations through an AAC device may function similarly to lexical cues available through human-human interactions.

Garrett and Yorkston [1997] suggested that simple, digitized voice output devices with large message squares could serve as “mechanical scrapbooks” to support participation and interaction in individuals with dementia. A hypermedia reminiscence device called CIRCA has been shown to enhance interaction and conversation among individuals with dementia and their carers [Gowans et al. 2007; Alm et al. 2006; Fried-Oken et al. 2006]. Finally, the presence of voice output in AAC devices might provide spoken cues to reduce the word finding problems experienced by persons with AD. Older adults commonly experience the “tip of the tongue” phenomenon: looking at a photograph of an old friend and being unable to recall her name. Consider the user who has chosen a symbol on an AAC device, hears the digitized label for that symbol, and repeats the spoken word. The verbal repetition becomes a form of learning, practice, and lexical stabilization. The digitized label also may serve as a confirmation of correct symbol selection, reinforcing the accuracy of the semantic choice after the fact. The digitized label may even provide the user with dementia a “cognitive access method” in which the spoken cue stimulates semantic nodes for enriched lexical networks [Fried-Oken et al. 2000].

2. PURPOSE

The goal of the research reported here was to determine whether a direct selection, customized AAC device with digitized 1–2 word voice output would improve conversation in persons with moderate AD as compared to a similar AAC device without voice output. It was hypothesized that voice output would function like the facilitative word cues provided in standard lexical therapy and would enhance language use, as has been reported for individuals with developmental disabilities. This research involved the first cohort of participants enrolled in a larger study that was designed to examine the effect of a variety of AAC features on conversation in dementia. Each participant was randomly assigned to an AAC device with one of two voice output modes (present versus absent) and one of three symbol types. Conversations were videotaped and analyzed for four conversational measures related to the participants’ communicative behavior. As the study progressed, anecdotal evidence suggested that voice output might have an unexpected effect on conversational success; it was at this point that we elected to analyze the data for the effect of voice output.

3. METHODS

3.1 Participants

The participants included in this research were the first 30 who met inclusion/exclusion criteria for the larger study and who had completed at least two conversations using an AAC device. All exhibited moderate to severe Alzheimer’s disease based on NINCDS-ADRDA criteria [McKhann et al. 1984]. They were recruited from the Layton Aging & Alzheimer’s Disease Center at Oregon Health & Science University, one of the 30 Alzheimer’s disease centers funded by the U.S. National Institute on Aging. Inclusion criteria were: diagnosis of probable or possible AD by a board-certified neurologist according to DSM-IV criteria [American Psychiatric Association 1994]; Clinical Dementia Rating (CDR) = 1 or 2 [Hughes et al. 1982]; Mini-Mental State Examination (MMSE) score of 5–18 within 6 months of enrollment in the study [Folstein et al. 1975]; visual acuity better than 20/50 O.U. (as assessed in the Layton Center); hearing loss < 40 dB (as assessed in the Layton Center); and English as the primary language. Exclusion criteria were: history of other neurologic or psychiatric illness (e.g., CVA, reported alcohol abuse, traumatic brain injury, or a recent significant psychological or speech/language disorder).

Twenty-three females and seven males with a mean age of 74 years (range of 50–94 years) participated. All participants identified their race as White except for one, who identified as African American. The mean MMSE score was 12 (range of 5–18, out of a possible 30), placing them, on average, between the moderate and severe stages of dementia. The Functional Linguistic Communication Inventory (FLCI) [Bayles and Tomoeda 1994] was administered to each participant to document degree of language impairment. The mean FLCI score was 61 (range of 27–85, out of a possible 88). Informed consent was provided by all participants and caregivers, using protocols approved by the OHSU Institutional Review Board.

3.2 Procedures

The procedures described below were implemented for the larger study and applied to the 30 participants reported here.

3.2.1 Consenting, testing and assignment to condition

All participant sessions were held in participants’ home environments, whether those were family homes or residential care facilities. During the first session, the consenting process was completed along with administration of the FLCI and (if the latest test scores were more than 6 months old) the MMSE and/or the CDR. Participants were then randomly assigned to one of the six combinations of two voice output conditions and three symbol-type conditions. Since the randomization strategy was applied to the entire sample for the larger study, cell sizes were not equal for this group consisting of the first 30 participants. Twelve of the 30 participants were assigned to the voice output-present condition and the remaining 18 were assigned to the voice output-absent condition.
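A minimal sketch of such an assignment procedure is shown below, assuming simple unconstrained randomization over the full enrolled sample. The symbol-type labels are placeholders, not the larger study’s actual condition names.

```python
import random

# The six cells of the 2 (voice output) x 3 (symbol type) design.
# Symbol-type labels are placeholders for the larger study's conditions.
VOICE_CONDITIONS = ["present", "absent"]
SYMBOL_CONDITIONS = ["symbol_type_1", "symbol_type_2", "symbol_type_3"]
CELLS = [(v, s) for v in VOICE_CONDITIONS for s in SYMBOL_CONDITIONS]

def assign_condition(rng: random.Random) -> tuple[str, str]:
    """Assign one participant to one of the six voice-output x symbol-type cells."""
    return rng.choice(CELLS)

rng = random.Random(42)
assignments = [assign_condition(rng) for _ in range(30)]
# With simple randomization over the whole enrolled sample, the first 30 participants
# need not split evenly across cells (in this study: 12 voice-present vs. 18 voice-absent).
```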

3.2.2 Vocabulary selection

During the initial visit, participants, familiar caregivers, and other direct care staff were queried about autobiographical topics that the participant had enjoyed discussing in the recent past but had difficulty discussing currently. To assist with topic selection, a list of approximately 100 typical events (e.g., traveling, grandchildren, a famous person) developed by Svoboda [2002] was presented. Participants were guided to select topics that they were comfortable discussing in detail. Once a topic was selected, 1–2 word phrases needed to converse about it were gathered; ultimately, the 16 phrases judged most necessary for discussing the topic were selected and approved by each participant.

3.2.3 The AAC device

The AAC device was a Flexiboard™, chosen because it is physically appealing to elderly participants (it is made of natural wood and titanium), can be programmed to provide voice output, includes software for developing vocabulary overlays, and is lightweight, portable, and user friendly. Once the symbol type, voice output condition, topic, and vocabulary were determined, a customized overlay incorporating symbols for the 16 selected vocabulary items was created for the AAC device. For participants assigned to the voice output-present condition, spoken phrases were recorded using Microsoft® Office Sound Recorder (PCM, 22.050 kHz, 8-bit, mono; sound playback/recording through Intel® integrated audio). Since the majority of participants were female, only female digitized speech was used. Portable amplifiers were placed next to the Flexiboard™ for output.
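The sketch below illustrates only the digitized audio format used for the recorded prompts (PCM, 22.050 kHz, 8-bit, mono). The study’s phrases were recorded with Microsoft Office Sound Recorder rather than generated in code, and the output file name here is hypothetical.

```python
import math
import wave

# Illustrative only: writes a short tone in the same format as the recorded prompts
# (PCM, 22.050 kHz sample rate, 8-bit, mono). The study recorded spoken 1-2 word
# phrases with Microsoft Sound Recorder; this merely demonstrates the file format.
SAMPLE_RATE = 22050   # Hz
DURATION_S = 1.0
FREQ_HZ = 440.0

frames = bytearray()
for n in range(int(SAMPLE_RATE * DURATION_S)):
    sample = math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE)
    frames.append(int(127.5 + 127.5 * sample))   # 8-bit PCM samples are unsigned (0-255)

with wave.open("prompt_format_demo.wav", "wb") as wav:  # hypothetical file name
    wav.setnchannels(1)          # mono
    wav.setsampwidth(1)          # 1 byte per sample = 8-bit
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```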

3.2.4 Conversations

Each participant engaged in two conversations with the randomly assigned AAC device, conducted during separate visits. Trained research assistants (RAs) were the conversational partners. Conversations followed a predictable structure: a greeting, an introduction to the topic and the AAC device, questions and comments to prompt conversation about the topic, and a closing. RAs allowed at least five seconds for the participant to respond to each question or conversational prompt; if no response was forthcoming after that time, RAs used a downshifting strategy that has proven effective with AAC users [Light et al. 1988] to support the conversation. Each conversation was videotaped by a second RA, and videotaping was terminated after 10 minutes.

3.3 Coding of Videotaped Conversations

The data set included 60 conversations: 24 conducted with an AAC device that produced voice output (2 for each of 12 participants) and 36 conducted with an AAC device that did not include voice output (2 for each of 18 participants). The first five-minute segment of each 10-minute conversation was discarded to allow for familiarization between participants and conversational partners (which had to be reestablished at each visit, since participants did not remember the RAs); the remaining five minutes of the conversation were coded. Five minutes was determined to be sufficient based on the research of Bourgeois [1992] with a similar population. The spoken language of the participants was coded. The Observer 5.0 software [Noldus Information Technology 2003] was used to view and code the conversations.

The participant’s utterance, defined in relation to the conversational turn, is the unit of analysis. An utterance is defined as a proposition completed, abandoned, or interrupted within the bounds of a conversational turn. Each utterance is coded according to one of four topic management strategies: topic maintenance (merely continuing the previously established topic and adding only solicited information to it); topic revival (reviving a topic discussed earlier in the conversation); topic elaboration (providing new and unsolicited information about the previously established topic); or topic initiation (entering a new topic into the conversation). In an effort to operationalize “enhanced language use,” topic elaboration and initiation were combined into one dependent variable, since they imply a more active role in the conversation than do topic maintenance or revival. Two other variables are coded. First, 1-word utterances spoken by the participant are flagged, since they imply a paucity of speech and minimal response to conversational prompts. Second, physical references to the AAC device are coded to quantify use of the Flexiboard™, such as touching a symbol on it or pointing to it (but excluding passively resting the hand on it). Based on the hypotheses and experimental questions, four dependent measures were tallied from the coded data: number of utterances produced by the participant, percent of participant utterances involving either topic elaborations or initiations, percent of 1-word utterances produced by the participant, and number of references to the AAC device made by the participant.
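To make the tally explicit, the following sketch computes the four dependent measures from a list of coded utterances. The data structure is our own simplification (for instance, device references were coded as events in the session rather than as attributes of individual utterances) and is not the Observer 5.0 export format.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    """One coded participant utterance (illustrative structure, not the Observer format)."""
    text: str
    topic_code: str         # "maintenance", "revival", "elaboration", or "initiation"
    references_device: int  # physical references to the AAC device credited to this utterance

def dependent_measures(utterances: list[Utterance]) -> dict[str, float]:
    """Tally the four dependent measures described above for one coded conversation."""
    n_utterances = len(utterances)
    n_one_word = sum(1 for u in utterances if len(u.text.split()) == 1)
    n_elab_init = sum(1 for u in utterances
                      if u.topic_code in ("elaboration", "initiation"))
    n_references = sum(u.references_device for u in utterances)
    return {
        "total_utterances": n_utterances,
        "pct_one_word": 100.0 * n_one_word / n_utterances if n_utterances else 0.0,
        "pct_elaboration_initiation": 100.0 * n_elab_init / n_utterances if n_utterances else 0.0,
        "device_references": n_references,
    }
```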

3.3.1 Reliability

Conversations were coded by three RAs. One conversation per participant was systematically selected for reliability analyses, totaling 50% of the data. RA1 served as the standard for the other two RAs; thus, reliability was evaluated for RA1/RA2 and for RA1/RA3 pairs. Inter-observer agreement, calculated as # agreements / (# agreements + disagreements), averaged 84% across coding categories.
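As a worked illustration of the agreement calculation (the counts below are hypothetical, chosen only to match the reported 84% average):

```python
def interobserver_agreement(n_agreements: int, n_disagreements: int) -> float:
    """Point-by-point agreement: agreements / (agreements + disagreements)."""
    return n_agreements / (n_agreements + n_disagreements)

# Hypothetical counts: 84 agreements and 16 disagreements -> 0.84, i.e., 84% agreement.
print(f"{interobserver_agreement(84, 16):.0%}")
```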

4. RESULTS

A MANOVA was calculated with voice output (present versus absent) as the independent variable. FLCI and MMSE scores were entered as covariates. Means and standard deviations for the four dependent variables for conversations that involved AAC devices with and without voice output are presented in Table 1. Results showed a significant effect of voice output across the four dependent variables, yielding Wilks’ Lambda = .772, F(4, 53) = 3.921, p < .007. Univariate tests showed that there were significantly more 1-word utterances and fewer total utterances when AAC devices included voice output (for 1-word utterances, F(1, 56) = 8.679, p < .005; for total utterances, F(1, 56) = 7.604, p < .008). In addition, the percent of utterances involving either topic elaborations or initiations was significantly lower when voice output was present (F(1, 56) = 8.807, p < .004). The effect of voice output was not significant for references to the AAC device, but there was a clear trend toward fewer references to devices with voice output (mean = 1) than to those without voice output (mean = 3).

Table 1.

Means (Standard Deviations) for Dependent Variables Across Conversations Supported by AAC Devices with and without Voice Output

Voice output | # Utterances | % 1-word utterances | % Initiations/elaborations | # References to AAC
Absent | 54 (16) | 30% (19%) | 29% (18%) | 3 (6)
Present | 46 (10) | 35% (16%) | 22% (14%) | 1 (2)

As expected, the measures of language and cognitive ability that were treated as covariates demonstrated significant effects. The effect of MMSE score was highly significant (Wilks’ Lambda = .784, F(4, 53) = 3.648, p < .011). The effect of Expressive FLCI trended toward significance (Wilks’ Lambda = .846, F(4, 53) = 2.406, p < .061). A t-test revealed that there was no significant difference between the two groups in terms of age. The ratio of females to males was similar between the two groups (3:1 for Voice Output present and 3.5:1 for Voice Output absent).
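For readers who wish to run a comparable analysis, a minimal sketch using the MANOVA implementation in statsmodels is given below. The column names and input file are hypothetical, and the software used for the original analysis is not specified here.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data frame: one row per coded conversation, containing the four
# dependent measures plus the voice-output condition and the MMSE/FLCI covariates.
df = pd.read_csv("conversation_measures.csv")  # placeholder file name

manova = MANOVA.from_formula(
    "n_utterances + pct_one_word + pct_elab_init + n_device_refs"
    " ~ C(voice_output) + mmse + flci",
    data=df,
)
# mv_test() reports Wilks' Lambda (among other multivariate statistics) for each term;
# the C(voice_output) term corresponds to the voice-output effect with covariates entered.
print(manova.mv_test())
```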

5. DISCUSSION

Results clearly demonstrate that AAC devices with digitized voice output depress conversational performance and distract participants with moderate AD as compared to similar devices without voice output. The direction of this statistically significant effect was not expected. While we had posited that the presence of the spoken word would support and even enhance conversation, voice output appeared to have a deleterious effect on language performance. This was supported by the significantly higher rate of 1-word utterances, the lower total number of utterances produced and the lower rate of topic elaborations/initiations when voice output was present.

Explanations for the “negative result” are varied. Perhaps the very presence of voice output produced perceptual and attentional problems that interfered with the use of an external device for conversation. McPherson et al. [2001] posited an interference effect in similar work with adults with severe AD. For a number of participants, the novelty of the voice output caused them to stop conversing and to perseveratively press a symbol repeatedly. Indeed, the original hypothesis that the presence of voice output would enhance language use was not supported. The AAC devices with voice output were associated with a paucity of language and with repetitive, challenging behaviors. Some participants simply ignored the output, perhaps because they could not hear it or because the output was emitted by speakers that were separate from the AAC device. The significant reduction in elaboration and initiation suggests that the digital labels may interfere with language use. Perhaps a participant could have embellished a point, but the spoken cue interfered with conversation, causing the user to forget his or her purpose or drop the line of thought.

The similarities suggested between human-human word cueing and machine-generated word cueing were not substantiated. Perhaps the word cues provided by speaking partners create a familiar, comfortable and frequent conversational environment that facilitates word finding, while the machine-generated cues might be foreign and uninterpretable by this population. In fact, this sample of elders with AD may not be familiar with present day “talking technology.” They probably don’t own talking photo frames or speaking computers; they may not use voice output information kiosks or talking key chains that tell them where they parked their cars; they may not buy talking stuffed animals for their grandchildren.

Clinicians and device designers should integrate into their practices the finding that voice output does not facilitate, and may even impede, conversation for elders with dementia [Rowland et al. 2007]. This is the first time that a deleterious effect of voice output has been reported for this population of adults with acquired cognitive impairment. Originally, we had expected the digitized spoken word to help elders with word finding problems to access words, since they are not dysarthric and do not need help with motor speech behaviors. One might argue that the auditory symbol serves to reinforce the accuracy of the semantic choice after the fact, but this case is not supported by the data. The data tell a different story: the spoken label interferes with connected speech, limiting topic elaboration/initiation and reducing output to minimal responses.

6. LIMITATIONS AND FUTURE DIRECTIONS

Certainly a number of limitations to this research are acknowledged. A training protocol to introduce the elders to voice output on AAC devices should have accompanied the introduction of the device, since familiarity with “talking boards” might affect behavior and reduce disinhibition. It is possible that AAC intervention at an earlier disease stage may establish lexical retrieval patterns that support AAC use at a later disease stage. Perhaps voice output would be less distracting for adults who were exposed to it earlier. Finally, the device design itself can be questioned. The Flexiboard™ required the addition of speakers placed next to the device. Using a device with an internal speaker and amplifier might have made the association between the voice output and the AAC device more obvious to participants.

ACKNOWLEDGEMENTS

We thank the participants and their informal caregivers for allowing us to record their personal conversations in their homes.

This work was supported by grant #R21 HD47754 from the National Institutes of Health, grant #H133G040176 from the National Institute on Disability and Rehabilitation Research, and grant #P30 AG008017 from the Layton Aging and Alzheimer's Disease Center.

Footnotes

Permission to make digital/hard copy of part of this work for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication, and its date appear, and notice is given that copying is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior specific permission and/or a fee.

NOTE

Flexiboard™ and Flexiloader™ are available through ZYGO Industries, Portland, Oregon, USA.

REFERENCES

1. Alm N, Dye R, Astell A, Ellis M, Gowans G, Campbell J. A cognitive prosthesis to support communication by people with dementia. In: International Workshop on Cognitive Prosthesis and Assisted Communication; Sydney, Australia; 2006. pp. 30–36.
2. Alzheimer’s Association. Alzheimer’s Disease Facts and Figures. 2008. Retrieved May 29, 2008, from http://alz.org/index.asp.
3. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 4th ed. Washington, DC: Author; 1994.
4. Bayles KA, Tomoeda CK. Functional Linguistic Communication Inventory. Tucson, AZ: Canyonlands Publishing; 1994.
5. Bayles KA, Tomoeda CK. Improving the Ability of Alzheimer’s Patients to Communicate. Phoenix, AZ: Canyonlands Publishing; 1995.
6. Bourgeois MS. Evaluating memory wallets in conversations with persons with dementia. Journal of Speech and Hearing Research. 1992;35:1344–1357. doi: 10.1044/jshr.3506.1344.
7. Bourgeois M. External aids. In: Attix DK, Welsh-Bohmer K, editors. Geriatric Neuropsychological Assessment & Intervention. New York: Guilford Press; 2006. pp. 333–346.
8. Bourgeois MS, Dijkstra K, Burgio L, Allen-Burge R. Memory aids as an augmentative and alternative communication strategy for nursing home residents with dementia. Augmentative and Alternative Communication. 2001;17(3):196–209.
9. Bourgeois MS, Hickey E. Dementia. In: Beukelman DR, Garrett K, Yorkston K, editors. Augmentative Communication Strategies for Adults with Acute or Chronic Medical Conditions. Baltimore, MD: Paul H. Brookes Publishing; 2007.
10. Carlsen K, Hux K, Beukelman DR. Comprehension of synthetic speech by individuals with aphasia. Journal of Medical Speech-Language Pathology. 1994;2:105–111.
11. Corbett F, Jeffries E, Lambon-Ralph MA. The use of cueing to alleviate recurrent verbal perseverations: Evidence from transcortical sensory aphasia. Aphasiology. 2008;22(4):363–382.
12. Dahle A, Goldman R. Perception of synthetic speech by normal and developmentally disabled children. Paper presented at the ASHA Convention; Seattle, WA; 1990.
13. Folstein MF, Folstein SE, McHugh PR. Mini-mental state: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research. 1975;12:189–198. doi: 10.1016/0022-3956(75)90026-6.
14. Fried-Oken M, Rau M, Oken B. AAC and dementia. In: Beukelman DR, Yorkston KM, Reichle J, editors. Augmentative and Alternative Communication for Adults with Acquired Neurologic Disorders. Baltimore, MD: Paul H. Brookes; 2000.
15. Fried-Oken M, Rowland C, Fox L. Augmentative communication for persons with dementia: Can we make it work? Presentation to the Oregon and Washington Speech-Language-Hearing Association annual meeting; Vancouver, WA; 2006.
16. Fried-Oken M, Rowland C, Small J, Baker G, Alm N, Dye R, et al. Can we augment conversation for persons with dementia? 12th Biennial Conference of the International Society for Augmentative and Alternative Communication; Dusseldorf, Germany; 2006.
17. Garrett KL, Yorkston KM. Assistive communication technology for elders with cognitive and language disabilities. In: Lubinski R, Higginbotham DJ, editors. Communication Technologies for the Elderly: Vision, Hearing and Speech. San Diego, CA: Singular Publishing Group, Inc.; 1997.
18. Gorenflo CW, Gorenflo DW. The effects of information and AAC technique on attitudes toward nonspeaking individuals. Journal of Speech and Hearing Research. 1991;34:19–34. doi: 10.1044/jshr.3401.19.
19. Gorenflo CW, Gorenflo DW, Santer S. Effects of synthetic voice output on attitudes toward augmented communication. Journal of Speech and Hearing Research. 1994;37:64–68. doi: 10.1044/jshr.3701.64.
20. Gowans G, Dye R, Alm N, Vaughan P, Astell A, Ellis M. Designing the interface between dementia patients, caregivers and computer-based intervention. The Design Journal. 2007;10(1):12–23.
21. Healy S. The use of a synthetic speech output communication aid with a youth having severe disabilities. In: Linfoot K, editor. Communication Strategies for Persons with Developmental Disabilities: Issues from Theory and Practice. Baltimore, MD: Brookes; 1994.
22. Hebert LE, Scherr PA, Bienias DA, Evans DA. Annual incidence of Alzheimer's disease in the United States projected to the years 2000 through 2050. Alzheimer's Disease and Associated Disorders. 2001;15:169–173. doi: 10.1097/00002093-200110000-00002.
23. Hughes CP, Berg L, Danzinger W, Coben L, Martin RL. A new clinical scale for staging of dementia. British Journal of Psychiatry. 1982;140:566–572. doi: 10.1192/bjp.140.6.566.
24. Koul R, Harding R. Identification and production of graphic symbols by individuals with aphasia: Efficacy of a software application. Augmentative and Alternative Communication. 1998;14:11–23.
25. Light J, Beesley M, Collier B. Transition through multiple AAC systems: A three-year case study of a head injured adolescent. Augmentative and Alternative Communication. 1988;4:2–14.
26. Lubinski R. Dementia and Communication. San Diego, CA: Singular Publishing Group; 1995.
27. Lilienfeld M, Alant E. Attitudes of children toward an unfamiliar peer using an AAC device with and without voice output. Augmentative and Alternative Communication. 2002;18:91–101.
28. McGregor G, Young J, Gerak J, Thomas B, Vogelsberg RT. Increasing functional use of an assistive communication device by a student with severe disabilities. Augmentative and Alternative Communication. 1992;9:72–73.
29. McKhann G, Drachman D, Folstein M, Katzman R, Price D, Stadlan EM. Clinical diagnosis of Alzheimer's disease: Report of the NINCDS-ADRDA Work Group under the auspices of Department of Health and Human Services Task Force on Alzheimer's disease. Neurology. 1984;34:939–944. doi: 10.1212/wnl.34.7.939.
30. McPherson A, Furniss FG, Sdogali C, Cesaroni F, Tartaglini B, Lindesay J. Effects of individualized memory aids on the conversation of persons with severe dementia: A pilot study. Aging & Mental Health. 2001;5(3):289–294. doi: 10.1080/13607860120064970.
31. Noldus Information Technology. The Observer (Version 5.0) [Computer software]. Wageningen, The Netherlands; 2003.
32. Ousset PJ, Viallard G, Puel M, Celsis P, Demonet JF, Cardebat D. Lexical therapy and episodic word learning in dementia of the Alzheimer type. Brain and Language. 2002;80(1):14–20. doi: 10.1006/brln.2001.2496.
33. Romski MA, Sevcik RA. Breaking the Speech Barrier. Baltimore, MD: Paul H. Brookes; 1996.
34. Rowland C, Fried-Oken M, Baker G, Mills C, Schultz D, Small J, et al. Supporting conversation in persons with moderate Alzheimer’s disease. Paper presented at the Festival of International Conferences on Caregiving, Disability, Aging, and Technology; Toronto, Canada; 2007.
35. Schlosser R. Roles of speech output in augmentative and alternative communication: Narrative review. Augmentative and Alternative Communication. 2003;19:5–27. doi: 10.1080/0743461032000056450.
36. Sjogren M, Wallen A, Blennow K. Clinical subgroups of Alzheimer’s disease. In: Emery VOB, Oxman TE, editors. Dementia: Presentations, Differential Diagnosis, and Nosology. 2nd ed. Baltimore, MD: Johns Hopkins University Press; 2003. pp. 139–145.
37. Svoboda E. Autobiographical Interview: Age-related Differences in Episodic Retrieval. Toronto: University of Toronto, Department of Psychology; 2002.
38. Vanderheiden G. Non-Vocal Communication Resource Book. Madison, WI: Waisman Center; 1985.
