Restorative Neurology and Neuroscience. 2019 Apr 16;37(2):155–166. doi: 10.3233/RNN-190898

Immediate improvement of speech-in-noise perception through multisensory stimulation via an auditory to tactile sensory substitution

Katarzyna Cieśla a,b, Tomasz Wolak a, Artur Lorens a, Benedetta Heimler b, Henryk Skarżyński a, Amir Amedi b,c,*
PMCID: PMC6598101  PMID: 31006700

Abstract

Background:

Hearing loss is becoming a major social and health problem. It is already highly prevalent among the elderly, and the risk of developing hearing loss is also growing among younger people. If left untreated, hearing loss can contribute to the development of neurodegenerative diseases, including dementia. Despite recent advancements in hearing aid (HA) and cochlear implant (CI) technologies, hearing-impaired users still encounter significant practical and social challenges, with or without aids. In particular, they struggle with understanding speech in challenging acoustic environments, especially in the presence of a competing speaker.

Objectives:

In the current proof-of-concept study we tested whether multisensory stimulation, pairing audition with a minimal-size touch device, would improve intelligibility of speech in noise.

Methods:

To this aim we developed an audio-to-tactile sensory substitution device (SSD) transforming low-frequency speech signals into tactile vibrations delivered to two fingertips. Based on the inverse effectiveness law, i.e., that multisensory enhancement is strongest when the signal-to-noise ratio between the senses is lowest, we embedded non-native language stimuli in speech-like noise and paired them with a low-frequency input conveyed through touch.

Results:

We found an immediate and robust improvement in speech recognition (i.e., in the signal-to-noise ratio required for understanding) in the multisensory condition, without any training, at the group level as well as in every participant. The group-level improvement of 6 dB is substantial, considering that an increase of 10 dB represents a doubling of perceived loudness.

Conclusions:

These results are especially relevant when compared to previous SSD studies, in which behavioral effects emerged only after demanding cognitive training. We discuss the implications of our results for the development of SSDs and of dedicated rehabilitation programs for the hearing impaired, whether or not they use HAs or CIs. We also discuss the potential application of such a set-up for sense augmentation, for example when learning a new language.

Keywords: Speech understanding in noise, sensory substitution device, vibrotactile stimulation, cochlear implants, multisensory training, hearing impairment, multisensory rehabilitation

1. Introduction

Over 5% of the world population has disabling hearing loss (i.e., a loss greater than 40 dB in the better hearing ear; World Health Organization, 2018). Furthermore, its prevalence is expected to grow, both among the elderly, in whom it is already the most common sensory deficit, and among younger people, owing to heavy noise exposure (World Health Organization, 2015). People affected by hearing loss, including age-related hearing loss (presbyacusis), experience particular difficulty understanding rapid speech, speech presented against background noise, or two or more speakers talking simultaneously (Schneider et al., 2010; Agrawal et al., 2008; Imam et al., 2017; Davis et al., 2016). In the elderly, a hearing deficit not only impairs the exchange of information, causing isolation and dependency on others, but has also been found to correlate with age-related cognitive decline (Uchida et al., 2019; Davis et al., 2016; Fortunato et al., 2016). Recent studies have even shown that hearing impairments increase the risk of developing dementia and other neurodegenerative diseases (Ford et al., 2018; Gurgel et al., 2014; Livingston et al., 2017; Loughrey et al., 2018; among others).

Hearing function and hearing-related quality of life have been shown to improve with the use of hearing aids (HAs) and/or cochlear implants (CIs). Hearing aids only improve the audibility of sound and thus can benefit patients with mild to severe hearing loss. Severe damage to the inner ear, namely sensorineural hearing loss (SNHL), is, however, not compensated by a hearing aid. Patients with severe to profound SNHL can receive a cochlear implant, an invasively inserted neural prosthesis that bypasses damaged hair cells in the inner ear and directly stimulates auditory nerve fibers (Gaylor et al., 2013). Nevertheless, a number of patients are unwilling to use hearing devices [the numbers range from 4.7% (Hougaard & Ruf, 2011) to 24% (Hartley et al., 2010)], due to problems with handling and maintaining them, as well as the related social stigma. The invasiveness of the cochlear implantation procedure is another reason why some patients are reluctant to adopt this solution (e.g., Mäki-Torkko et al., 2015). Among actual users of HAs and CIs, the most fundamental reported problem is not being able to understand speech, especially when presented in background noise (McCormack and Fortnum, 2013; Hickson et al., 2014). Patients have difficulty perceiving speech in acoustically hard conditions despite good performance in quiet. Noisy environments are especially challenging because of patients' inability to segregate different speech streams and thus discriminate among talkers (Gaylor et al., 2013; Carol et al., 2011; Moradi et al., 2017). In all these populations, the underlying reason for struggling in noisy environments is that the auditory input reaching the brain is deprived of temporal fine structure information: in hearing loss, due to the damage to the inner ear, and in CI users, due to the limitations of the algorithms applied for speech coding (Moore, 2008; Moon and Hong, 2014).

The abovementioned difficulties with understanding speech in challenging acoustic situations are, however, also experienced by healthy adults. Indeed, many of us have been unable to understand the person speaking on the other end of a phone line, especially when using a language that is not our native one, or have struggled to understand someone talking to us while another person was speaking at the same time. All these situations are very demanding for the central nervous system, as they require "glimpsing" into the impoverished acoustic signal to recover the information, a process that engages a considerable amount of cognitive effort, also in the hearing impaired (Rosen et al., 2013; Erb et al., 2013; Huyck and Johnsrude, 2012; Banks et al., 2015; Wendt et al., 2018; Rosemann et al., 2017; Krueger et al., 2017).

In the current study we hypothesized that we would be able to improve speech understanding in noise by applying multisensory input combining auditory and tactile stimuli. To enhance the multisensory benefit of our set-up, we designed the experiment according to two fundamental rules governing multisensory integration: the temporal rule (stimuli are temporally congruent) and the inverse effectiveness rule. The latter states that the size of the multisensory enhancement is inversely proportional to the signal-to-noise ratio (SNR) of the unisensory stimuli; clear stimuli are more reliable, so information from another sensory modality is redundant, resulting in reduced benefit from multisensory stimulation (see e.g., Meredith and Stein, 1986; Holmes, 2007; Otto et al., 2013).

To deliver tactile stimulation, we designed a minimalistic auditory-to-tactile Sensory Substitution Device (SSD), which transforms an auditory signal into tactile vibration. SSDs are ideal tools to test the benefits of multisensory stimulation on perception, as they can convey specific features typically provided by one sensory modality through a different one (Heimler & Amedi, in press; Heimler, Striem-Amit & Amedi, 2015). Many studies have shown that blind people are able to perform a great variety of visual tasks when visual input is conveyed via a visual-to-tactile SSD, namely by mapping images from a video camera onto a vibrotactile device worn on the subject's back, forehead or tongue (Bach-y-Rita, 2004; Chebat et al., 2011; Kupers and Ptito, 2011). However, to observe benefits of SSD stimulation in performance, extensive training programs, or at least prolonged use, were necessary, owing to the complexity of the SSD algorithms and the cognitive load required of new SSD users (Striem-Amit et al., 2012; Bubic et al., 2010; Maidenbaum et al., 2014; Kupers and Ptito, 2011; Yamanaka et al., 2009). Here we hypothesized that, due to the numerous similarities between the auditory and the tactile system (see Discussion), our set-up would require minimal training for improvement in performance to emerge (and in fact we started testing with no training whatsoever).

Several attempts have already been made to improve speech understanding in patients with hearing loss, mainly children, using tactile devices. These aids provided single- or multichannel electric pulses or vibrations and were worn on various parts of the body (Galvin et al., 2000; Plant and Risberg, 1983; Weisenberger and Percy, 1995; Bernstein et al., 1998; Galvin et al., 1991). These early studies demonstrated that the whole complex speech signal cannot be effectively translated into tactile vibration. With that in mind, and also following more recent findings in the field (Carol et al., 2011; Young et al., 2016; Huang et al., 2017), we decided to convey via touch only the low-frequency (<500 Hz) information that is shared by touch and audition. Specifically, we transformed the speech fundamental frequency (f0), namely the lowest frequency of a periodic waveform, typically in the range of 80–250 Hz, into the corresponding vibrotactile frequency. Availability of the fundamental frequency in the speech signal has been found crucial for understanding speech in noise, as well as for segregating talkers (Carol et al., 2011; Young et al., 2016). In addition, adding it as a tactile input has already been shown to enhance recognition of speech in cochlear implant users (Huang et al., 2017).

In sum, we performed a proof-of-concept study, first in normal-hearing individuals, by 1) applying a very degraded auditory input, deprived of temporal fine structure information, and 2) using non-native speech. This procedure was selected to increase the difficulty of the task, and thereby the chance of taking advantage of the inverse effectiveness law for multisensory integration. Only one study thus far has applied a similar multisensory audio-tactile set-up, and only in the native language of a group of normal-hearing participants (Fletcher et al., 2018). Participants showed a minor improvement of 10% in sentence-in-noise recognition, but only at fixed SNRs and, moreover, only after dedicated training. Demonstrating that our multisensory set-up, pairing an impoverished auditory signal with a vibrotactile input delivered via a minimalistic auditory-to-tactile SSD, enhances perception of speech in noise even without specific training would have crucial implications for a number of domains, including research on sensory perception and integration, the design of SSDs, and rehabilitation programs tailored for auditory recovery. In addition, showing an improvement in normal-hearing non-native speakers would mean we could offer our set-up as a sensory augmentation technology to normal-hearing individuals struggling in challenging acoustic situations.

2. Methods

2.1. Subjects

Twelve normal-hearing individuals participated in the proof-of-concept study (3 male, 9 female; age 29 ± 7 years). None of the subjects underwent a hearing test to objectively confirm their normal hearing, which can be considered a study limitation; however, all of them reported never having been diagnosed with or having experienced any hearing problems. Participants were not native English speakers but were fluent in English. They reported having learned English since the mean age of 7.8 years (SD = 1.6 years), for 12.2 years on average (SD = 5.1 years). Most of them now use English regularly to communicate at work and/or with relatives living abroad, as well as with international friends. All participants were right-handed. Each provided informed consent; participants were not paid for participation. The experiment was approved by the Ethical Committee of the Hebrew University (353-18.1.08).

2.2. Preparation of stimuli

For the experiment we used recordings of the Hearing in Noise Test (HINT) sentences (Nilsson et al., 1994). The set is composed of 25 equivalent lists of ten sentences that have been normed for naturalness, difficulty, and intelligibility, plus 3 lists of 12 sentences for practice. All sentences are of similar length and convey simple semantic content, such as "The boy fell from the window" or "It's getting cold in here". The duration of each recorded sentence was approximately 2.6 seconds. They were spoken by a male speaker.

In the first step of stimulus preparation, the energy of all sentences was normalized with a standard RMS procedure and energy peaks were leveled to –6 dB. A specialist sound engineer verified that all sentences sounded similar in terms of conveyed energy. All sounds were stored as 16-bit, 44.1 kHz digital waveforms. Next, the sentences were vocoded using an in-house algorithm developed at the Institute of Physiology and Pathology of Hearing (IPPH). The algorithm involved the following steps: bandpass filtering of the input signal into 8 channels spanning 100–7500 Hz (6th-order bandpass filters), signal rectification, low-pass filtering for envelope extraction, modulation with narrowband noise, adding electrode interactions, and summation of all channels. The amount of channel interaction, which usually limits the benefit from cochlear implant systems, was simulated through spread-of-excitation (SoE) profiles measured in a representative group of cochlear implant users (Walkowiak et al., 2010). A number of works have already shown that normal-hearing listeners are able to learn such manipulated auditory input (though not with our tactile pairing approach) (Lee et al., 2017; Erb et al., 2013; Casserly and Pisoni, 2015; Rosemann et al., 2017).
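
To make the structure of this processing chain concrete, the sketch below shows a generic 8-channel noise vocoder in Python (NumPy/SciPy). It is a simplified illustration only: the logarithmic channel spacing, filter orders and envelope cut-off are assumptions, and the electrode-interaction (spread-of-excitation) step of the IPPH algorithm is omitted.

```python
# Minimal noise-vocoder sketch (illustrative; not the in-house IPPH algorithm).
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7500.0, env_cutoff=50.0):
    """Replace the fine structure of each frequency band with band-limited noise."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    env_sos = butter(3, env_cutoff / (fs / 2), btype='low', output='sos')
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype='band', output='sos')
        band = sosfilt(band_sos, signal)                             # band-pass filtering
        envelope = sosfiltfilt(env_sos, np.abs(band))                # rectification + low-pass envelope
        carrier = sosfilt(band_sos, np.random.randn(len(signal)))    # narrow-band noise carrier
        out += envelope * carrier                                    # modulation per channel
    return out / np.max(np.abs(out))                                 # summation and normalization
```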

For vibratory stimulation, the refined fundamental frequency structure of each sentence was first extracted using the STRAIGHT algorithm, as originally described in Kawahara et al. (1999). The output signal contained information during voiced speech; the rest was represented as silence. The amplitude information for the f0 contour was extracted by band-pass filtering the original signal with a 3rd-order digital elliptic filter whose cut-off frequencies equaled the lowest and highest frequencies in the f0 contour. Finally, the amplitude of the resulting signal was normalized to its maximum digital value (0 dB attenuation) to provide maximum intensity of vibration across the whole f0 frequency range. It was verified that all participants felt the vibration and found it pleasant.
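
The study used the STRAIGHT algorithm for refined f0 extraction; the sketch below is only a simplified stand-in (a crude autocorrelation-based f0 tracker followed by sine synthesis), meant to illustrate how an f0 contour, its voicing decisions and a band-limited amplitude contour can be turned into a vibration drive signal. Frame length, voicing threshold and the elliptic-filter ripple values are assumptions.

```python
# Sketch of deriving a vibrotactile drive signal from f0 (illustrative; not STRAIGHT).
import numpy as np
from scipy.signal import ellip, sosfiltfilt

def simple_f0_track(signal, fs, frame=0.04, fmin=80.0, fmax=250.0):
    """Crude per-frame autocorrelation f0 estimate; 0 Hz marks unvoiced frames."""
    hop = int(frame * fs)
    f0 = []
    for start in range(0, len(signal) - hop, hop):
        x = signal[start:start + hop] - np.mean(signal[start:start + hop])
        ac = np.correlate(x, x, mode='full')[hop - 1:]
        lo, hi = int(fs / fmax), int(fs / fmin)
        lag = lo + np.argmax(ac[lo:hi])
        voiced = ac[lag] > 0.3 * ac[0]                 # simple voicing decision
        f0.append(fs / lag if voiced else 0.0)
    return np.array(f0), hop

def vibration_signal(signal, fs):
    f0, hop = simple_f0_track(signal, fs)
    voiced = f0[f0 > 0]
    if len(voiced) == 0:
        return np.zeros_like(signal)
    # Amplitude contour: band-pass the original signal between the lowest and
    # highest f0 values (3rd-order elliptic filter, assumed 1 dB / 40 dB ripple).
    sos = ellip(3, 1, 40, [voiced.min() / (fs / 2), voiced.max() / (fs / 2)],
                btype='band', output='sos')
    amp = np.abs(sosfiltfilt(sos, signal))
    # Synthesize a sine at the frame-wise f0, silent during unvoiced frames.
    f0_per_sample = np.repeat(f0, hop)[:len(signal)]
    phase = 2 * np.pi * np.cumsum(f0_per_sample) / fs
    vib = np.sin(phase) * (f0_per_sample > 0) * amp[:len(f0_per_sample)]
    return vib / np.max(np.abs(vib))                   # full-scale (0 dB attenuation)
```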

2.3. Experimental set-up

A dedicated 3T MR-compatible Vibrating Auditory Stimulator (VAS) was developed by the Warsaw-based Neurodevice company (http://www.neurodevice.pl/en). The main parts of the system were a vibrating interface with two piezoelectric plates to simultaneously stimulate two fingers (Fig. 1A) and a controller. The controller was powered from a 230 V mains socket, and the vibration signal was delivered from a PC via an audio input. The vibration frequency range of the device is 50–500 Hz.

Fig.1.


A) Vibrating interface of the Vibrating Auditory Stimulator; B) MATLAB GUI for stimulus presentation and control; C) Speech Reception Threshold values obtained for auditory and auditory-tactile speech-in-noise stimulation, at the group level and in individual subjects [subject 6 showed an improvement from 0.3 to –3.0 dB SRT].

The auditory stimulation was delivered via noise-cancelling headphones (BOSE QC35 IIA). Noise cancelling was needed to attenuate the noise produced by the tactile device when vibrating. Nevertheless, a follow-up study is envisaged with free-field sound presentation, a set-up that would be more ecologically valid for the hearing-impaired population using hearing aids and/or cochlear implants. The headphones and the VAS system were connected to a PC via a sound card (Creative Labs SB1095). Sound intensity and harmonic distortions were regularly monitored with a GRASS calibration system (Audiometer Calibration Analyzer HW1001).

A MATLAB (version R2016a, The MathWorks Inc., Natick, MA, USA) application with a user-friendly GUI was developed by Dr. Tomasz Wolak to provide precisely timed auditory and/or vibratory stimulation (Fig. 1B). The application offers a number of configurable parameters, including the type of stimulation to be presented (a previously defined target vs. noise), the type of background noise (e.g., speech or white noise), the channel of stimulation (headphones vs. VAS), the type of adaptive procedure applied to estimate the SNR (for 25%–75% understanding), and the step size (1–4 dB), among others.

2.4. Procedure

First, five sentences from the HINT set were used for practice. They were presented one after another, first unmodified, then vocoded, and finally vocoded with simultaneous background noise, specifically the International Female Fluctuating Masker (IFFM; EHIMA, 2016). This type of noise, a mix of six female speakers, was chosen to reflect a real-life challenging auditory situation, but also to limit informational masking, which has been found to require a high amount of cognitive effort (Rosen et al., 2013). Participants were asked to repeat the sentence they had just heard, or at least the parts they were able to. After practice, participants took part in a test whose aim was to establish the Speech Reception Threshold (SRT), i.e., the signal-to-noise ratio (SNR) at which 50% of speech is understood. This procedure is typically used in the clinical ENT setting when assessing speech understanding of HA/CI users (Levitt, 1978). Participants listened to HINT sentences (target) presented against the IFFM noise. The initial SNR was set at 0 dB, which corresponded to 65 dB for both target and noise. For each presentation, an algorithm randomly selected a 3 s excerpt of noise from a 1 min recording; the noise started several seconds before the target sentence. An adaptive procedure was applied to estimate the SRT for each participant, increasing the SNR by 2 dB when the person was unable to repeat the whole sentence and decreasing it by 2 dB when he/she responded correctly. A response was scored as correct only if the person repeated each word of the sentence exactly, except for cases such as using the verb in a wrong tense or using a definite article instead of an indefinite one (or vice versa). The adaptive procedure and the response-evaluation criteria were adapted from the original paper on the HINT test (Nilsson et al., 1994).
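
A minimal sketch of such a one-up/one-down adaptive track is shown below (Python). The `present_and_score` callback, the number of discarded initial trials and the averaging rule are illustrative assumptions rather than the exact implementation used in the study.

```python
# Sketch of a 1-up/1-down adaptive SNR track converging on the SRT (50% point).
def estimate_srt(sentences, present_and_score, start_snr=0.0, step=2.0, discard=4):
    """`present_and_score(sentence, snr)` presents one trial and returns True
    if the whole sentence was repeated correctly (hypothetical callback)."""
    snr, track = start_snr, []
    for sentence in sentences:
        correct = present_and_score(sentence, snr)
        track.append(snr)
        snr += -step if correct else step   # harder after a hit, easier after a miss
    # Estimate the SRT as the mean SNR over the trials after the initial descent.
    return sum(track[discard:]) / len(track[discard:])
```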

Each participant took part in two study conditions: one in which sentences were presented only via headphones (A) and one in which the auditory signal was accompanied by tactile vibration representing the f0 extracted from the sentences (AT). The tactile stimulation was delivered to the index and middle finger of the dominant hand. Sentences from two 10-sentence lists (List 1 and List 2) were presented in the A condition and another 20 sentences (List 3 and List 4) in the AT condition, or vice versa (counterbalanced across participants). The same sentence lists were used for all participants and sentences were always presented in the same order. The test for SNR estimation took approximately 10 minutes.

Apart from the speech-in-noise test, all participants listened to a different set of 20 HINT sentences (List 5 and List 6) in quiet, half of the participants before and half after the test for SNR estimation, and were asked to repeat as much as they could. This part took approximately 5 minutes.

3. Results

The results of the experiment are presented in Fig. 1C-D. The mean SNR for 50% understanding (SRT) of speech in noise was 18.6 dB (SD = 7.9 dB) in the A (auditory-only) condition and improved to 12.6 dB (SD = 8.5 dB) in the AT (auditory-tactile) condition. The mean benefit of adding vibration, in terms of the SRT value, was 6 dB (SD = 4 dB). The outcomes in the two experimental conditions were compared using the non-parametric Wilcoxon signed-rank test (IBM SPSS Statistics 20): when vocoded sentences were accompanied by vibration, the SNR for 50% understanding was significantly lower (z = –3.06; p < 0.005). Participants were able to correctly repeat on average 21.25% (SD = 9%) of the sentences in the A condition and 25% (SD = 9.3%) in the AT condition (percent of understood sentences out of 20; Wilcoxon, p = 0.047). The outcomes of speech understanding in noise with and without accompanying vibration were significantly correlated (Spearman's rank-order correlation; rho = 0.6; p < 0.05). For vocoded sentences presented in quiet (Audio in Q; percent of understood sentences out of 20), participants obtained a mean of 55% (SD = 19%). No correlation was found between this latter outcome and the outcomes obtained when listening to sentences presented in noise (Spearman's rank-order correlation; A and Audio in Q: rho = 0.18, p = 0.57; AT and Audio in Q: rho = –0.08, p = 0.82).
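
For reference, the group-level comparison described above (paired Wilcoxon signed-rank test and Spearman correlation) can be expressed as in the sketch below, here using SciPy rather than SPSS; `srt_a` and `srt_at` stand for the per-participant SRT values in the two conditions.

```python
# Sketch of the statistical comparison of the A and AT conditions (SciPy version;
# the study used IBM SPSS Statistics 20).
import numpy as np
from scipy.stats import wilcoxon, spearmanr

def compare_conditions(srt_a, srt_at):
    """Paired comparison of per-participant SRTs (dB) in the A and AT conditions."""
    srt_a, srt_at = np.asarray(srt_a, float), np.asarray(srt_at, float)
    stat, p_wilcoxon = wilcoxon(srt_a, srt_at)     # non-parametric paired test
    rho, p_spearman = spearmanr(srt_a, srt_at)     # correlation between conditions
    benefit = float(np.mean(srt_a - srt_at))       # mean SRT benefit of adding vibration
    return {"wilcoxon_p": p_wilcoxon, "spearman_rho": rho, "mean_benefit_dB": benefit}
```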

4. Discussion

The present study shows that when an auditory signal is degraded, understanding of speech in noise can be significantly improved by adding complementary information via tactile vibration, ultimately providing further support for the inverse effectiveness law (Otto et al., 2013; Meredith & Stein, 1986; Holmes, 2007). One crucial aspect of the current result is the automaticity of the improvement, which has very interesting consequences both for rehabilitation and for basic research on multisensory processing and its benefits for perception. Our results were obtained using a minimal, custom-made auditory-to-tactile Sensory Substitution Device (SSD) that conveys the extracted fundamental frequency (f0) of the speech signal through vibration. All our subjects showed automatic and immediate improvement when perceiving speech-in-noise with tactile stimulation, compared to perceiving speech-in-noise alone, resulting in a significant group mean benefit of 6 dB. This effect is quite remarkable if one considers that: (1) every increase of 3 dB represents a doubling of sound intensity and every increase of 10 dB represents a doubling of perceived loudness (Stevens, 1957), (2) no training aimed at matching audition and touch was applied, and (3) the direction of the effect was consistent across all individuals. To our knowledge, this is the first study to show such a systematic improvement across participants during multisensory audio-tactile perception of speech-in-noise.
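
As a quick check of the dB relationships invoked in point (1): sound intensity scales as 10^(ΔL/10), while, per Stevens' power law, perceived loudness roughly doubles for every 10 dB, i.e. approximately 2^(ΔL/10). The short snippet below works this out for the observed 6 dB benefit and neighbouring values.

```python
# Worked example of the dB relationships cited above (approximate, for a 1 kHz tone).
for delta_db in (3, 6, 10):
    intensity_ratio = 10 ** (delta_db / 10)   # physical sound intensity ratio
    loudness_ratio = 2 ** (delta_db / 10)     # approximate perceived-loudness ratio
    print(f"+{delta_db} dB -> intensity x{intensity_ratio:.2f}, loudness x{loudness_ratio:.2f}")
```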

Two other recent studies tested a similar question, though both obtained somewhat less convincing results (Huang et al., 2017; Fletcher et al., 2018). Specifically, Huang and colleagues (2017) tested cochlear implant patients and showed an improvement in speech understanding when adding low-frequency vibration corresponding to the f0 extracted from the presented sentences, without any specific training. The improvement was, however, considerably more modest (mean 2.2 dB) than in our experiment. Fletcher and colleagues (2018) complemented vocoded speech with tactile vibration in normal-hearing subjects using their native language. After dedicated training, participants showed an improvement of only 10% in sentence-in-noise recognition. In addition, Fletcher and colleagues used fixed SNRs, as opposed to the adaptive procedure for SNR estimation applied in our study. The latter is preferable, as it matches task difficulty across participants, preventing floor and ceiling effects (Levitt, 1978). We hypothesize that our use of non-native, and thus less intelligible, auditory input enhanced the effects of the inverse effectiveness law. Nevertheless, future studies should investigate which experimental settings used in the other two works made their outcomes less impressive.

The findings of the current study are especially interesting when embedded within the more general literature on behavioral benefits of using SSDs. Most SSD solutions have thus far been aimed at the blind population, with visual input conveyed via audition or touch (Bach-y-Rita et al., 1969; Meijer, 1992; Bach-y-Rita et al., 2004; Abboud et al., 2014). These works showed that blind participants can perform a variety of "visual" tasks with SSDs, but only after extensive training (e.g., Striem-Amit et al., 2012; Bach-y-Rita, 2004; Chebat et al., 2011; Chebat et al., 2015). Training was also needed to observe benefits in the sighted population learning to perceive visual information via SSDs (e.g., Amedi et al., 2007; Hamilton-Fletcher et al., 2016).

Why did the current SSD set-up show such an automatic effect? One possible reason might be that the aforementioned SSD solutions use quite complex algorithms to convey a variety of visual features simultaneously, such as shape, location and even colors of objects (Striem-Amit et al., 2012; Maidenbaum et al., 2014; Levy-Tzedek et al., 2014; Abboud et al., 2014). Indeed, when the amount of information was reduced, visual SSDs also required shorter training programs (e.g., for navigation with the EyeCane; Maidenbaum et al., 2014). The current auditory-to-tactile SSD transfers "only" the fluctuating frequency information and therefore may require less cognitive effort from the user, ultimately speeding up the learning process.

In addition, in many everyday situations the auditory and the tactile system work together (e.g., when a mobile phone is ringing and vibrating at the same time, or when listening to music), thus potentially increasing the chance of immediate multisensory integration. Furthermore, audition and tactile vibration share several physical properties: in both types of stimulation, information is conveyed through mechanical pressure generating oscillatory patterns, ultimately constructing frequency percepts (e.g., Soto-Faraco and Deco, 2009; Ro et al., 2013; Good et al., 2014; Auer et al., 2007). Moreover, within a certain frequency range, the very same oscillatory pattern can be perceived simultaneously by the peripheral receptors of both sensory modalities (i.e., the basilar membrane of the cochlea and the skin, respectively; e.g., von Békésy, 1959; Gescheider, 1970; Soto-Faraco & Deco, 2009; Heimler et al., 2014). Finally, there seems to be a privileged neuronal coupling between auditory and sensory-motor brain regions (Suárez et al., 1997; Burton et al., 2004; Kayser et al., 2005; Beauchamp et al., 2008; Bellido et al., 2018; Araneda et al., 2017; Iguchi et al., 2007; Caetano and Jousmaki, 2006; Auer et al., 2007; Fu et al., 2003; Hoefer et al., 2013). All these similarities probably contribute to the fact that both hearing and deaf participants consistently report perceiving simultaneous vibrotactile and auditory stimulation as an interleaved signal (Wilson et al., 2012; Bernstein et al., 1998; Russo et al., 2012). Future studies may further elucidate the ease and intuitiveness of SSD learning in multisensory contexts, thus unraveling rules of multisensory integration depending on the sensory modalities used.

Our results carry important implications for further research, as well as for possible clinical and practical solutions. In rehabilitation, the benefits of unisensory task-specific training for recovery of sensory or cognitive functions have been demonstrated in a number of domains, such as in the ageing brain (Cheng et al., 2017; Smith et al., 2009; Anderson et al., 2013; Bherer, 2015) or following brain lesions acquired in adulthood (Kerr et al., 2011; Xerri et al., 1998). In addition, successful multisensory interventions have been reported in patients after stroke (Tinga et al., 2016), as well as in hemianopia (i.e., loss of vision in one half of the visual field) and spatial neglect (Keller and Lefin-Rank, 2010; Bolognini et al., 2017). Importantly, improvements in speech skills have been reported in CI patients following extensive audio-visual training combining the restored auditory input with speech-reading or sign language (Leybaert and LaSasso, 2010; Fu and Galvin, 2007; Stevenson et al., 2017).

The present work extends current approaches to rehabilitation even further by showing benefits of multisensory stimulation on performance even without any training. This is an important advantage, because training programs are generally time-consuming for both patients and caregivers and discourage potential users from adopting SSDs in everyday life (Elli et al., 2014). Our minimal set-up, which requires only two fingertips, is also especially attractive in comparison with the more cumbersome tactile SSDs described in the literature (Kupers and Ptito, 2011; Novich and Eagleman, 2015).

We suggest that the designed auditory-to-tactile SSD could serve as a valid assistive technology for several populations with various degrees of hearing deficit, either with or without a hearing aid or a cochlear implant. As an example, CIs provide great benefits for understanding speech in quiet but do not transmit low-frequency cues effectively, making speech comprehension in noise extremely hard. The latter is related to inherent technological limitations of the device, i.e., the restricted number of independent frequency-specific channels, spread of excitation, and the fixed rate of pulses delivered to the auditory nerve (Wilson, 2012; Cullington and Zeng, 2010). Therefore, conveying the missing low-frequency information through vibration seems a promising approach to improving speech understanding in this population (e.g., Huang et al., 2017). This is even more convincing if one considers patients with partial deafness who use an electro-acoustic hearing prosthesis, combining electrical hearing via a partially inserted CI electrode array with a low-frequency acoustic signal delivered naturally or via a HA in the same ear. This population consistently shows superior speech performance in noise compared with profoundly deaf CI users, probably owing to their access to low-frequency acoustic cues (Skarzynski et al., 2003; Gstoettner et al., 2004; Gifford et al., 2013; Skarzynski et al., 2009; von Ilberg, 2011; Zhang et al., 2010).

Although the current results show an immediate benefit for speech-in-noise perception with audio-vibratory stimulation, we suggest that some specific training might still be required when aiming to achieve an improvement in unisensory auditory performance. In this case, we predict that multisensory stimulation might facilitate understanding of the auditory signal, ultimately boosting its further recovery (see Isaiah et al., 2014 for a successful example in deaf ferrets). Such an approach may be most effective if training is started before cochlear implantation, by delivering stimulation via touch that nevertheless maintains features typical of the auditory modality (i.e., periodic information with fluctuating frequency and intensity). We suggest that such training could prepare the auditory cortex for future reafferentation of its natural sensory input (Heimler et al., 2018; see analogous suggestions for the blind population in Heimler and Amedi, in press). Interestingly, even though conclusive research data supporting this approach are still lacking, some hearing aid manufacturers already propose solutions delivering vibration to profoundly deaf individuals prior to cochlear implantation (see, e.g., http://www.horentekpro.com).

Importantly, we also predict that our proposed SSD might serve as an assistive aid for elderly people whose cognitive and sensory abilities (most often including hearing) have deteriorated (Amieva et al., 2015; Murray et al., 2018). Solutions for the elderly seem crucial in today's aging society, with hearing loss becoming an epidemic. Since this population might have trouble complying with extensive training programs, we believe that our intuitive set-up might prove helpful.

Finally, we see potential for our device to support second language acquisition, as we have shown in the current experiment that users benefit from vibration when trying to decipher sentences in a non-native language. The device we have developed provides low-frequency tactile stimulation conveying many of the cues that have been shown to be hardest to detect in a foreign speech signal, such as duration, rhythm and voicing (Kuhl et al., 1992; Rivera-Gaxiola et al., 2005). Interestingly, multisensory approaches have already been successfully applied in foreign language teaching, for example within the Multisensory Structured Language Education framework, which combines visual, auditory and tactile modalities (Shams and Seitz, 2008; Lidestam et al., 2014). Other possible applications of our set-up in normal-hearing subjects (but also in the hearing impaired) include voice rehabilitation, improving the appreciation of music, and assistance when talking on the phone.

Future studies should investigate the feasibility of our set-up for all the aforementioned applications and potentially implement slight modifications to adapt it to the needs of the specific population of interest. Our aim in describing possible applications of our multisensory audio-tactile set-up was to highlight its flexibility and, therefore, the wide spectrum of rehabilitative conditions it can be used for. On a final note, we are planning to elucidate the neural correlates of the demonstrated improvement in speech understanding during multisensory stimulation. Indeed, our SSD has been designed to be MR-compatible, making it immediately suitable for testing in the scanner with functional magnetic resonance imaging (fMRI).

Acknowledgments

This work was supported by the Polish National Science Center (grant MOBILNOŚĆ PLUS, V edition) awarded to K.C., the European Research Council Starting Grant (310809) and the ERC Consolidator Grant (773121) awarded to A.A., and the James S. McDonnell Foundation scholar award (no. 220020284) to A.A.

References

  1. Abboud S., Hanassy S., Levy-Tzedek S., Maidenbaum S., Amedi A. (2014). EyeMusic: Introducing a “visual” colorful experience for the blind using auditory sensory substitution. Restorative Neurology and Neuroscience, 32(2), 247–257. [DOI] [PubMed] [Google Scholar]
  2. Agrawal Y., Platz E.A., Niparko J.K. (2008). Prevalence of hearing loss and differences by demographic characteristics among US adults: data from the National Health and Nutrition Examination Survey, 1999-2004. Archives of Internal Medicine, 168(14), 1522–1530. [DOI] [PubMed] [Google Scholar]
  3. Amedi A., Stern W., Camprodon J.A., Bermpohl F., Merabet L., Rotman S., Hemond C.C., Meijer P., Pascual-Leone A. (2007). Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nature Neuroscience, 10, 687–689. [DOI] [PubMed] [Google Scholar]
  4. Amieva H., Ouvrard C., Giulioli C., Meillon C., Rullier L., Dartigues J.F. (2015). Self-Reported Hearing Loss, Hearing Aids, and Cognitive Decline in Elderly Adults: A 25-Year Study. Journal of the American Geriatrics Society, 63(10), 2099–2104. [DOI] [PubMed] [Google Scholar]
  5. Anderson S., White-Schwoch T., Parbery-Clark A., Kraus N. (2013). Reversal of age-related neural timing delays with training. Proceedings of the National Academy of Sciences U S A, 4357–4362. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Araneda R., Renier L., Ebner-Karestinos D., Dricot L., De Volder A.G. (2017). Hearing, feeling or seeing a beat recruits a supramodal network in the auditory dorsal stream. European Journal of Neuroscience, 45(11), 1439–1450. [DOI] [PubMed] [Google Scholar]
  7. Auer E.T., Bernstein L.E., Sungkarat W., Sing M. (2007). Vibrotactile Activation of the Auditory Cortices in Deaf versus Hearing Adults. Neuroreport, 18(7), 645–648. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Bach-y-Rita P., Collins C.C., Saunders F.A., White B., Scadden L. (1969). Vision Substitution by Tactile Image Projection. Nature, 221, 963–964. [DOI] [PubMed] [Google Scholar]
  9. Bach-y-Rita P. (2004). Tactile sensory substitution studies. Annals of the New York Academy of Sciences, 1013, 83–91. [DOI] [PubMed] [Google Scholar]
  10. Banks B., Gowen E., Munro K., Adank P. (2015). Cognitive predictors of perceptual adaptation to accented speech. Neuropsychologia, 87, 134–143. [DOI] [PubMed] [Google Scholar]
  11. Beauchamp M.S., Yasar N.E., Frye R.E., Ro T. (2008). Touch, Sound and Vision in Human Superior Temporal Sulcus. Hearing Research, 369, 67–78. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Bellido A.P., Barnes K.A., Crommett L.E., Yau M. (2018). Auditory Frequency Representations in Human Somatosensory Cortex. Cerebral Cortex, 28(11), 3908–3921. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Bernstein L.E., Tucker P.E., Auer T. (1998). Potential perceptual bases for successful use of a vibrotactile speech perception aid. Scandinavian Journal of Psychology, 39, 181–186. [DOI] [PubMed] [Google Scholar]
  14. Bherer L. (2015). Cognitive plasticity in older adults: effects of cognitive training and physical exercise. Annals of the New York Academy of Sciences, 1337, 1–6. [DOI] [PubMed] [Google Scholar]
  15. Bolognini N., Convento S., Casati C., Mancini F., Brighina F., Vallar G. (2017). Multisensory integration in hemianopia and unilateral spatial neglect: Evidence from the sound induced flash illusion. Neuroreport, 18(10), 1077–1081. [DOI] [PubMed] [Google Scholar]
  16. Bubic A., Striem-Amit E., Amedi A. (2010). Large-Scale Brain Plasticity Following Blindness and the Use of Sensory Substitution Devices. Chapter in Book: Multisensory Object Perception in the Primate Brain, 351–380. [Google Scholar]
  17. Burton H., Sinclair R.J., McLaren D.G. (2004). Cortical Activity to Vibrotactile Stimulation: An fMRI Study in Blind and Sighted Individuals. Human Brain Mapping, 23(4), 210–228. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Caetano G., and Jousmaki V. (2006). Evidence of vibrotactile input to human auditory cortex. Neuroimage, 15–28. [DOI] [PubMed] [Google Scholar]
  19. Carol J., Tiaden S., Zeng F-G. (2011). Fundamental frequency is critical to speech perception in noise in combined acoustic and electric hearing. The Journal of the Acoustical Society of America, 130, 2054. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Casserly E.D., Pisoni D.B. (2015). Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings. Journal of Speech, Language and Hearing Research, 58(3), 1001–1016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Chebat D.R., Schneider F.C., Kupers R., Ptito M. (2011). Navigation with a sensory substitution device in congenitally blind individuals. Neuroreport, 22(7), 342–347. [DOI] [PubMed] [Google Scholar]
  22. Chebat D.R., Maidenbaum S., Amedi A. (2015). Navigation using sensory substitution in real and virtual mazes. PloS one, 10(6), e0126307. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Cheng Y., Jia G., Zhang Y. (2017). Positive impacts of early auditory training on cortical processing at an older age. Proceedings of the National Academy of Sciences U S A, 114(24), 6364–6369. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Cullington H.E., Zeng FG. (2010). Bimodal hearing benefit for speech recognition with competing voice in cochlear implant subject with normal hearing in contralateral ear. Ear and Hearing, 31(1), 70–73. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Davis A., McMahon C.M., Pichora-Fuller K.M., Russ S., Lin F., Olusanya B.O., Chadha S., Tremblay K.L. (2016). Aging and Hearing Health: The Life-course Approach. Gerontologist, 56(2), S256–67. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Elli G.V., Benetti S., Collignon O. (2014). Is There a Future for Sensory Substitution Outside Academic Laboratories? Multisensory Research, 27, 271–291. [DOI] [PubMed] [Google Scholar]
  27. European Hearing Instrument Manufacturers Association (EHIMA) (2016). Description and Terms of Use of the IFFM and IFnoise signals. (Available at: http://www.ehima.com/wp-content/uploads/2016/06/IFFM_and_IFnoise.zip)
  28. Erb J., Henry M.J., Eisner F., Obleser J. (2013). The Brain Dynamics of Rapid Perceptual Adaptation to Adverse Listening Conditions. Journal of Neuroscience, 33(26), 10688–10697. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Fletcher M.D., Mills S.R., Goehring T. (2018). Vibro-Tactile Enhancement of Speech Intelligibility in Multi-talker Noise for Simulated Cochlear Implant Listening. Trends in Hearing, 22, 2331216518797838. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Ford A.H., Hankey G.J., Yeap B.B., Golledge J., Flicker L., Almeida O.P. (2018). Hearing loss and the risk of dementia in later life. Maturitas, 112, 1–11. [DOI] [PubMed] [Google Scholar]
  31. Fortunato S., Forli F., Guglielmi V., De Corso E., Paludetti G., Berrettini S., Fetoni A.R. (2016). A review of new insights on the association between hearing loss and cognitive decline in ageing. Acta Otorhinolaryngologica Italica, 36(3), 155–66. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Fu K.M., Johnston T.A., Shah A.S., Arnold L., Smiley J., Hackett T.A., Garraghty P.E., Schroeder C.E. (2003). Auditory Cortical Neurons Respond to Somatosensory Stimulation. Journal of Neuroscience, 23(20), 7510–5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Fu Q.J., Galvin J.J. 3rd (2007). Computer-Assisted Speech Training for Cochlear Implant Patients: Feasibility, Outcomes, and Future Directions. Seminars in Hearing, 28(2). [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Galvin K.L., Cowan R.S.C., Sarant J.Z., Alcantara J., Blamey P.J., Clark G.M. (1991). Use of a Multichannel Electrotactile Speech Processor by Profoundly Hearing-Impaired Children in a Total Communication Environment. Journal of the American Academy of Audiology, 12, 214–225. [PubMed] [Google Scholar]
  35. Galvin K.L., Blamey P.J., Cowan R.S., Oerlemans M., Clark G.M. (2000). Generalization of tactile perceptual skills to new context following tactile-alone word recognition training with the Tickle Talker. Journal of the Acoustical Society of America, 108(6), 2969–2979. [DOI] [PubMed] [Google Scholar]
  36. Gaylor J.M., Raman G., Chung M. (2013). Cochlear implantation in adults: a systematic review and meta-analysis. JAMA Otolaryngology- Head and Neck Surgery, 139(3), 265–272. [DOI] [PubMed] [Google Scholar]
  37. Gifford R.H., Dorman M.F., Skarzynski H., Lorens A., Polak M., Driscoll C.L., Roland P., Buchman C.A. (2013). Cochlear implantation with hearing preservation yields significant benefit for speech recognition in complex listening environments. Ear and Hearing, 34(4), 413–425. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Good A., Reed M.J., Russo F.A. (2014). Compensatory Plasticity in the Deaf Brain: Effects on Perception of Music. Brain Sciences, 560–574. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Gstoettner W., Kiefer J., Baumgartner W. (2004). Hearing preservation in cochlear implantation for electric acoustic stimulation. Acta Otolaryngologica, 124, 348–352. [DOI] [PubMed] [Google Scholar]
  40. Gurgel R. K., Ward P. D., Schwartz S., Norton M. C., Foster N. L., Tschanz J. T. (2014). Relationship of hearing loss and dementia: a prospective, population-based study. Otology & Neurotology, 35(5), 775. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Hamilton-Fletcher G., Wright T.D., Ward J. (2016). Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device. Multisensory Research, 29(4-5), 337–363. [DOI] [PubMed] [Google Scholar]
  42. Hartley D., Rochtchina E., Newall P., Golding M., Mitchell P. (2010). Use of hearing aids and assistive listening devices in an older Australian population. Journal of the American Academy of Audiology, 21, 642–653. [DOI] [PubMed] [Google Scholar]
  43. Heimler B., Pavani F., Amedi A. (2018). Implications of cross-modal and intra-modal plasticity for the education and rehabilitation of deaf children and adults. In: Knoors H. and Marschark M. [Eds.], Evidence-Based Practices in Deaf Education. Oxford University Press. [Google Scholar]
  44. Heimler B. and Amedi A. (in press). Task-selectivity in the sensory deprived brain and sensory substitution approaches for clinical practice: evidence from blindness. In: Sathian K. and Ramachandran V.S. [Eds.], Multisensory Perception: From Laboratory to Clinic. [Google Scholar]
  45. Heimler B., Striem-Amit E., Amedi A. (2015). Origins of task-specific sensory-independent brain organization in the visual and auditory systems: neuroscience evidence, open questions and clinical implications. Current Opinion in Neurobiology, 35, 169–177. [DOI] [PubMed] [Google Scholar]
  46. Heimler B., Weisz N., Colignon O. (2014). Revisiting the adaptive and maladaptive effects of crossmodal plasticity. Neuroscience, 283, 44–63. [DOI] [PubMed] [Google Scholar]
  47. Hickson L., Meyer C., Lovelock K., Lampert M., Khan A. (2014). Factors associated with success with hearing aids in older adults. International Journal of Audiology, 53(1), S18–27. [DOI] [PubMed] [Google Scholar]
  48. Hoefer M., Tyll S., Kanowski M., Brosch M., Schoenfeld M.A., Heinze H.J., Noesselt T. (2013). Tactile stimulation and hemispheric asymmetries modulate auditory perception and neural responses in primary auditory cortex. Neuroimage, 79, 371–382. [DOI] [PubMed] [Google Scholar]
  49. Holmes N.P. (2007). The law of inverse effectiveness in neurons and behaviour: multisensory integration versus normal variability. Neuropsychologia, 45(14), 3340–3345. [DOI] [PubMed] [Google Scholar]
  50. Hougaard S., Ruf S. (2011). EuroTrak 1: A consumer survey about hearing aids in Germany, France, and the UK. Hearing Review, 18, 12–28. [Google Scholar]
  51. Huang J., Sheffield B., Lin P., Zeng F.G. (2017). Electro-Tactile Stimulation Enhances Cochlear Implant Speech Recognition in Noise. Scientific Reports, 7(1), 2196. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Huyck J.J., Johnsrude I.S. (2012). Rapid perceptual learning of noise-vocoded speech requires attention. Journal of the Acoustical Society of America – Express Letters, 131, EL236–42. [DOI] [PubMed] [Google Scholar]
  53. Iguchi Y., Hoshi Y., Nemoto M., Taira M., Hashimoto I. (2007). Co-activation of the secondary somatosensory and auditory cortices facilitates frequency discrimination of vibrotactile stimuli. Neuroscience, 148(2), 461–472. [DOI] [PubMed] [Google Scholar]
  54. Imam L., Hannan S.A. (2017). Noise-induced hearing loss: a modern epidemic? British Journal of Hospital Medicine (London), 78(5), 286–290. [DOI] [PubMed] [Google Scholar]
  55. Isaiah A., Vongpaisal T., King A.J., Hartley D.E. (2014). Multisensory training improves auditory spatial processing following bilateral cochlear implantation. Journal of Neuroscience, 34(33), 11119–11130. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Kawahara H., Masuda-Katsuse I., and de Cheveigne A. (1999). Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: possible role of a repetitive structure in sounds. Speech Communication, 27, 187–207. [Google Scholar]
  57. Kayser C. (2005). Integration of Touch and Sound in Auditory Cortex. Neuron, 48, 373–384. [DOI] [PubMed] [Google Scholar]
  58. Keller I., Lefin-Rank G. (2010). Improvement of visual search after audiovisual exploration training in hemianopic patients. Neurorehabilitation and Neural Repair, 24(7), 666–673. [DOI] [PubMed] [Google Scholar]
  59. Kerr A.L., Cheng S.Y., Jones T.A. (2011). Experience-dependent neural plasticity in the adult damaged brain. Journal of Communication Disorders, 44(5), 538–548. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Krueger M., Schulte M., Zokoll M.A., Wagener K.C., Meis M., Brand T., Holube I. (2017). Relation Between Listening Effort and Speech Intelligibility in Noise. American Journal of Audiology, 26(3S), 378–392. [DOI] [PubMed] [Google Scholar]
  61. Kuhl P.K., Williams K.A., Lacerda F., Stevens K.N., Lindblom B. (1992). Linguistic experience alters phonetic perception in infants by 6 months of age. Science, 255, 606–608. [DOI] [PubMed] [Google Scholar]
  62. Kupers R. and Ptito M. (2011). Cross-Modal Brain Plasticity in Congenital Blindness: Lessons from the Tongue Display Unit. i-Perception, 2, 748. [Google Scholar]
  63. Lee S., Bidelman G.M. (2017). Objective Identification of Simulated Cochlear Implant Settings in Normal-Hearing Listeners Via Auditory Cortical Evoked Potentials. Ear and Hearing, 38(4), e215–e226. [DOI] [PubMed] [Google Scholar]
  64. Levitt H. (1978). Adaptive testing in audiology. Scandinavian Audiology. Supplementum, 6(6), 241–291. [PubMed] [Google Scholar]
  65. Levy-Tzedek S., Riemer D., Amedi A. (2014). Color improves ‘visual’ acuity via sound. Frontiers in Neuroscience, 8, 358. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Leybaert J. and LaSasso C.J. (2010). Cued speech for enhancing speech perception and first language development of children with cochlear implants. Trends in Amplification, 14(2), 96–112. [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Lidestam B., Moradi S., Petterson R., Ricklefs T. (2014). Audiovisual training is better than auditory-only training for auditory only speech-in-noise identification. Journal of the Acoustical Society of America, 136, EL142–EL147. [DOI] [PubMed] [Google Scholar]
  68. Livingston G., Sommerlad A., Orgeta V., Costafreda S. G., Huntley J., Ames D. (2017). Dementia prevention, intervention, and care. The Lancet, 390(10113), 2673–2734. [DOI] [PubMed] [Google Scholar]
  69. Loughrey D.G., Kelly M.E., Kelley G.A., Brennan S., Lawlor B.A. (2018). Association of Age-Related Hearing Loss With Cognitive Function, Cognitive Impairment, and Dementia: A Systematic Review and Meta-analysis. JAMA Otolaryngology - Head and Neck Surgery, 144(2), 115–126. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Maidenbaum S., Chebat D.R., Levy-Tzedek S., Namer-Furstenberg R., Amedi A. (2014). The effect of expanded sensory range via the EyeCane sensory substitution device on the characteristics of visionless virtual navigation. Multisensory Research, 27(5-6). [DOI] [PubMed] [Google Scholar]
  71. Mäki-Torkko E.M., Vestergren S., Harder H., Lyxell B. (2015). From isolation and dependence to autonomy - expectations before and experiences after cochlear implantation in adult cochlear implant users and their significant others. Disability and Rehabilitation, 37(6), 541–547. [DOI] [PubMed] [Google Scholar]
  72. McCormack A. and Fortnum H. (2013). Why do people fitted with hearing aids not wear them? International Journal of Audiology, 52(5), 360–368. [DOI] [PMC free article] [PubMed] [Google Scholar]
  73. Meijer P. (1992). An experimental system for auditory image representations in IEEE. Transactions on Biomedical Engineering, 39(2), 112–121. [DOI] [PubMed] [Google Scholar]
  74. Meredith M.A. and Stein B.E. (1986). Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. Journal of Neurophysiology, 56(3), 640–662. [DOI] [PubMed] [Google Scholar]
  75. Moon J. and Hong, S.H. (2014). What Is Temporal Fine Structure and Why Is It Important? Korean Journal of Audiology, 18(1), 1–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Moore B. (2008). The Role of Temporal Fine Structure Processing in Pitch Perception, Masking, and Speech Perception for Normal-Hearing and Hearing-Impaired People. Journal of the Association for Research in Otolaryngology, 9, 399–406. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Moradi S., Wahlin A., Haellgren M., Roenneberg J., Lidestam B. (2017). The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users. Scientific Reports, 7, 5808. [DOI] [PMC free article] [PubMed] [Google Scholar]
  78. Murray M.M., Eardley A.F., Edginton T., Oyekan R., Smyth E., Matusz P.J. (2018). Sensory dominance and multisensory integration as screening tools in aging. Scientific Reports, 8(1), 8901. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Nilsson M., Soli S.D., Sullivan J.A. (1994). Development of the Hearing In Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America, 95(2), 1085–1099. [DOI] [PubMed] [Google Scholar]
  80. Novich S.D. and Eagleman D.M. (2015). Using space and time to encode vibrotactile information: toward an estimate of the skin’s achievable throughput. Experimental Brain Research, 233(10), 2777–2788. [DOI] [PubMed] [Google Scholar]
  81. Otto T.U., Dassy B., Mamassian P. (2013). Principles of Multisensory Behavior. Journal of Neuroscience, 33(17), 7463–7474. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Plant G. and Risberg A. (1983). The transmission of fundamental frequency variations via a single channel vibrotactile aid. Speech Transmission Laboratory – Quaterly Progress and Status Reports, 24(2-3). [Google Scholar]
  83. Rivera-Gaxiola M., Silva-Pereyra J., Kuhl P.K. (2005). Brain potentials to native and non-native speech contrasts in 7- and 11-month-old American infants. Developmental Science, 8, 162–172. [DOI] [PubMed] [Google Scholar]
  84. Ro T., Ellmore T.M., Beauchamp M.S. (2013). A Neural Link Between Feeling and Hearing. Cerebral Cortex, 23(7), 1724–1730. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Rosemann S., Gießing C., Özyurt J., Carroll R., Puschmann S., Thiel C.M. (2017). The Contribution of Cognitive Factors to Individual Differences in Understanding Noise-Vocoded Speech in Young and Older Adults. Frontiers in Human Neuroscience, 11, 294. [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. Rosen S., Souza P., Ekelund C., Majeed A.A. (2013). Listening to speech in a background of other talkers: Effects of talker number and noise vocoding. Journal of the Acoustical Society of America, 133(4), 2431–2443. [DOI] [PMC free article] [PubMed] [Google Scholar]
  87. Russo F. A., Ammirante P., Fels D. I. (2012). Vibrotactile Discrimination of Musical Timbre. Journal of Experimental Psychology: Human Perception and Performance, 822–826. [DOI] [PubMed] [Google Scholar]
  88. Schneider B., Pichora-Fuller M. K., Daneman M. (2010). Effects of senescent changes in audition and cognition on spoken language comprehension In Gordon-Salant S., Frisina R.D., Fay R., & Popper A. [Eds.], The aging auditory system (pp. 167–210). New York: Springer-Verlag. [Google Scholar]
  89. Shams L. and Seitz A. (2008). Benefits of multisensory learning. Trends in Cognitive Sciences, 12(11), 411–417. [DOI] [PubMed] [Google Scholar]
  90. Skarżyński H., Lorens A., Piotrowska A. (2003). A new method of partial deafness treatment. Medical Science Monitor, 9(4), CS20–24. [PubMed] [Google Scholar]
  91. Skarżyński H., Lorens A., Piotrowska A., Podskarbi-Fayette R. (2009). Results of Partial Deafness Cochlear Implantation Using Various Electrode Designs. Audiology and Neurotology, 14(1), 39–45. [DOI] [PubMed] [Google Scholar]
  92. Smith G.E., Housen P., Yaffe K. (2009). A cognitive training program based on principles of brain plasticity: Results from the improvement in memory with plasticity-based adaptive cognitive training (IMPACT) study. Journal of the American Geriatrics Society, 57(4), 594–603. [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Soto-Faraco S. and Deco G. (2009). Multisensory contributions to the perception of vibrotactile events. Behavioural Brain Research, 196(2), 145–154. [DOI] [PubMed] [Google Scholar]
  94. Stevens S.S. (1957). On the psychophysical law. Psychological Review, 153–181. [DOI] [PubMed] [Google Scholar]
  95. Stevenson R.A., Sheffield S.W., Butera I.M., Gifford R.H., Wallace M.T. (2017). Multisensory integration in cochlear implant recipients. Ear and Hearing, 38(5), 521–538. [DOI] [PMC free article] [PubMed] [Google Scholar]
  96. Striem-Amit E., Guendelman M., Amedi A. (2012). ‘Visual’ acuity of the congenitally blind using visual-to-auditory sensory substitution. PloS one, 7(3), e33136. [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Suárez H., Cibils D., Caffa C., Silveira A., Basalo S., Svirsky M. (1997). Vibrotactile aid and brain cortical activity. Acta Otolaryngologica, 117(2), 208–210. [DOI] [PubMed] [Google Scholar]
  98. Tinga A.M., Visser-Meily J.M.A., van der Smagt M.J., Van der Stigchel S., van Ee R., Nijboer T.C.W. (2016). Multisensory stimulation to improve low- and higher-level sensory deficits after stroke: a systematic review. Neuropsychology Review, 26(1), 73–91. [DOI] [PMC free article] [PubMed] [Google Scholar]
  99. Uchida Y., Sugiura S., Nishita Y., Saji N., Sone M., Ueda H. (2019). Age-related hearing loss and cognitive decline - the potential mechanisms linking the two. Auris Nasus Larynx, 46(1), 1–9. [DOI] [PubMed] [Google Scholar]
  100. von Békésy G. (1959). Similarities between hearing and skin sensations. Psychological Review, 1–22. [DOI] [PubMed] [Google Scholar]
  101. von Ilberg C.A., Baumann U., Kiefer J., Tillein J., Adunka O.F. (2011). Electric-acoustic stimulation of the auditory system: a review of the first decade. Audiology and Neurotology, 16(2), 1–30. [DOI] [PubMed] [Google Scholar]
  102. Walkowiak A., Kostek B., Lorens A., Obrycka A., Wasowski A., Skarzynski H. (2010). Spread of Excitation (SoE) - a non-invasive assessment of cochlear implant electrode placement. Cochlear Implants International, 11(1), 479–481. [DOI] [PubMed] [Google Scholar]
  103. Weisenberger J.M., Percy M.E. (1995). The Transmission of Phoneme-Level Information by Multichannel Tactile Speech Perception Aids. Ear and Hearing, 16(4), 392–406. [DOI] [PubMed] [Google Scholar]
  104. Wendt D., Koelewijn T., Książek P., Kramer S.E., Lunner T. (2018). Toward a more comprehensive understanding of the impact of masker type and signal-to-noise ratio on the pupillary response while performing a speech-in-noise test. Hearing Research, 369, 67–78. [DOI] [PubMed] [Google Scholar]
  105. Wilson B.S. (2012). Cochlear implant technology In: Niparko J.K., Kirk I.K., Mellon N.K., Robbins A.M., Tucci D.L., & Wilson B.S. [Eds.], Cochlear Implants: Principles & Practices. Lippincott Williams & Wilkins, Philadelphia, 109–127. [Google Scholar]
  106. World Health Organization (2015). Hearing loss due to recreational exposure to loud sounds. A review. [Google Scholar]
  107. World Health Organization. (2018). Addressing the rising prevalence of hearing loss. Geneva. [Google Scholar]
  108. Xerri C., Merzenich M.M., Peterson B.E., Jenkins W.M. (1998). Plasticity of primary somatosensory cortex paralleling sensorimotor skill recovery from stroke in adult monkeys. Journal of Neurophysiology, 79(4), 2119–2148. [DOI] [PubMed] [Google Scholar]
  109. Yamanaka T., Hosoi H., Skinner K., Bach-y-Rita P. (2009). Clinical application of sensory substitution for balance control. Practica Oto-Rhino-Laryngologica, 102, 527–538. [Google Scholar]
  110. Young G.W., Murphy D., Weeter J. (2016). Haptics in Music: The Effects of Vibrotactile Stimulus in Low Frequency Auditory Difference Detection Tasks. IEEE Transactions on Haptics, 99, 1. [DOI] [PubMed] [Google Scholar]
  111. Zhang T., Dorman M., Spahr A. (2010). Information from the voice fundamental frequency (F0) region accounts for the majority of the benefit when acoustic stimulation is added to electric stimulation. Ear and Hearing, 31(1), 63–69. [DOI] [PMC free article] [PubMed] [Google Scholar]
