Proceedings of the National Academy of Sciences of the United States of America. 1995 Oct 24;92(22):9999–10006. doi: 10.1073/pnas.92.22.9999

Processing of speech signals for physical and sensory disabilities.

H. Levitt
PMCID: PMC40725  PMID: 7479816

Abstract

Assistive technology involving voice communication is used primarily by people who are deaf, hard of hearing, or who have speech and/or language disabilities. It is also used, to a lesser extent, by people with visual or motor disabilities. A very wide range of devices has been developed for people with hearing loss. These devices can be categorized not only by the modality of stimulation [i.e., auditory, visual, tactile, or direct electrical stimulation of the auditory nerve (auditory-neural)] but also in terms of the degree of speech processing that is used. At least four such categories can be distinguished: assistive devices (a) that are not designed specifically for speech, (b) that take the average characteristics of speech into account, (c) that process articulatory or phonetic characteristics of speech, and (d) that embody some degree of automatic speech recognition. Assistive devices for people with speech and/or language disabilities typically involve some form of speech synthesis or, for severe forms of language disability, symbol generation. Speech synthesis is also used in text-to-speech systems for sightless persons. Other applications of assistive technology involving voice communication include voice control of wheelchairs and other devices for people with mobility disabilities.
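The two-axis categorization described in the abstract (stimulation modality crossed with degree of speech processing) can be sketched as a minimal data model. The device names below are illustrative placeholders chosen for this sketch, not classifications taken from the article:

```python
from dataclasses import dataclass
from enum import Enum


class Modality(Enum):
    """Modality of stimulation, as enumerated in the abstract."""
    AUDITORY = "auditory"
    VISUAL = "visual"
    TACTILE = "tactile"
    AUDITORY_NEURAL = "auditory-neural"  # direct electrical stimulation of the auditory nerve


class ProcessingLevel(Enum):
    """Degree of speech processing, categories (a)-(d) from the abstract."""
    NOT_SPEECH_SPECIFIC = 1    # (a) not designed specifically for speech
    AVERAGE_SPEECH = 2         # (b) takes average characteristics of speech into account
    ARTICULATORY_PHONETIC = 3  # (c) processes articulatory or phonetic characteristics
    AUTOMATIC_RECOGNITION = 4  # (d) embodies some degree of automatic speech recognition


@dataclass
class AssistiveDevice:
    """One cell in the modality-by-processing grid."""
    name: str
    modality: Modality
    processing: ProcessingLevel


# Hypothetical example entries (placement is illustrative only).
devices = [
    AssistiveDevice("linear amplifier", Modality.AUDITORY,
                    ProcessingLevel.NOT_SPEECH_SPECIFIC),
    AssistiveDevice("tactile vocoder", Modality.TACTILE,
                    ProcessingLevel.AVERAGE_SPEECH),
    AssistiveDevice("captioning via speech recognizer", Modality.VISUAL,
                    ProcessingLevel.AUTOMATIC_RECOGNITION),
]
```

Modeling the scheme as two independent enumerations reflects the abstract's point that modality and processing depth vary separately: any modality can, in principle, be paired with any of the four processing levels.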


