Scientific Reports. 2025 Jun 12;15:20054. doi: 10.1038/s41598-025-02946-4

Exposure to vibrotactile music improves audiometric performances in individuals with cochlear implants

Luca Turchet 1, Raffaele Rosaia 2, Alessandro Diodati 2, Marco Carner 2
PMCID: PMC12162821  PMID: 40506478

Abstract

Vibrotactile stimulation has been shown to enhance the music listening experience of cochlear implant (CI) users. However, while existing studies have focused on music perception, significant gaps remain in our understanding of how music induces emotions in CI users and of the role of vibrotactile stimulation in this process. Furthermore, the after-effects of audio-vibrotactile music listening on audiometric test performance have not yet been investigated in CI users. This paper presents a study in which two groups of twelve CI users were each exposed to music alone and to music with concurrent vibrotactile stimulation delivered via a vest enhanced with actuators. Standardized tonal and speech tests were conducted before and after both types of exposure (audio and audio-vibrotactile). In particular, speech audiometry was conducted in the quiet condition (with no masking sounds) for the first group and with competing sounds for the second group. Results from both groups consistently showed that exposure to tactile music significantly enhanced CI users' ability to decode tonal and speech signals compared to exposure to sounds alone. The majority of participants preferred listening to music with concurrent vibrations over an audio-only experience, as it led to higher levels of immersion and engagement. Consistent with findings from previous studies on individuals with normal hearing, an increase in arousal in CI users was observed in the audio-vibrotactile condition compared to the absence of vibrations, regardless of the type of emotion being conveyed. Nevertheless, participants emphasized the need for vibrotactile devices to incorporate personalization mechanisms, allowing them to dynamically adjust vibration intensity for different body parts. These findings may open the door to novel therapeutic approaches for CI users.

Keywords: Cochlear implants, Tactile music enhancement, Speech audiometry, Human-computer interaction, Haptic wearables, Musical haptics

Subject terms: Engineering, Health care

Introduction

The past two decades have witnessed increasing attention from researchers towards the creation of haptic devices for the vibrotactile enhancement of music listening experiences1–3, along with perceptual studies investigating their effects on listeners4,5. These devices have included both wearable6–8 and non-wearable systems9,10, for both listening to recordings11 and attending live concerts12.

Hove et al. utilized a backpack with a built-in linear actuator, showing that low-frequency vibrotactile stimulation synchronized with music improved enjoyment and groove ratings13. Additionally, they observed that using a subwoofer led to increased spontaneous movement for some music pieces compared to when subwoofer-induced vibrations were absent. In a related study, Cameron et al. found that dance behavior could be influenced by extremely low-frequency sounds that are not consciously perceived, likely due to vestibular and/or tactile processing14. Recently, Siedenburg et al. used a chair equipped with vibrotactile actuators and found that vibrations significantly enhanced the musical experience compared to listening through headphones alone15. Specifically, participants reported feeling more connected to the music, with increased arousal and groove sensations. Their findings suggested that groove perception and sensorimotor synchronization are integral aspects of musical engagement, contributing to heightened arousal and immersion. The authors also hypothesized that the enhanced engagement resulting from vibrations could partly explain the substantial improvements in overall quality ratings observed by Merchel and Altinsoy when whole-body vibrations were integrated with loudspeaker playback of classical music16. Collectively, these findings highlight the strong potential of vibrotactile cues to intensify musical experiences.

Research in this space has focused not only on individuals with normal hearing, to create novel forms of musical experiences leveraging an additional sensory dimension, but also on individuals with hearing impairments, to compensate for their music hearing deficits (e.g., pitch direction discrimination, melody recognition, timbre recognition17). There is an increasing body of literature providing evidence that deaf and hard of hearing individuals can benefit from a vibrotactile representation of music to enhance their ability to understand and appreciate it18. This includes cochlear implant (CI) users, for whom vibrotactile stimulation has been shown to enhance listening by giving access to sound features that are poorly transmitted through the electrical signal of the CI19. For instance, researchers have shown that the use of vibrotactile devices by individuals with CIs improves their ability to recognize melodies20 and discriminate pitch21. These results parallel the improvements observed in speech-in-noise performance22,23 and sound localization tasks24.

However, while existing studies have focused on music perception, there are major gaps in our understanding of how music can induce emotions in CI users and what role vibrotactile enhancement plays in altering such induction. It is unknown whether the same increase in arousal observed for individuals with normal hearing13,15 also holds for CI users. Moreover, existing studies have focused on the ability of musical training to improve speech audiometric performance in hearing-impaired people, including CI users25–29, while other studies have investigated the role of vibrotactile stimuli in augmenting or replacing the auditory input in deafened subjects or CI recipients in a real-time fashion, i.e., at the moment in which the auditory stimulus is provided2,30,31. Nevertheless, to the best of our knowledge, the effects of exposure to vibrotactile music on audiometric performance have not yet been investigated in CI users.

The present study aims to bridge these gaps by answering the following research questions:

  • Does exposure to audio-vibrotactile musical stimuli modulate CI users' performance on audiometric tests?

  • How are emotions in music induced in CI users, with and without concurrent vibrotactile stimulation?

  • How is the listening experience of CI users affected by the presence of vibrotactile music stimulation?

To address these questions, we devised two experiments in which two different groups of participants were exposed to emotionally connoted music presented with and without vibrations. In both experiments, participants underwent a tonal audiometric test in quiet. This was followed by a speech audiometric test, which was conducted in quiet (i.e., without any background sounds) in the first experiment and in noise (i.e., with competing sounds) in the second. We involved a different pool of CI users for each experiment to avoid possible learning effects due to repeated exposure to the audio or audio-vibrotactile stimuli.

Materials and methods

Participants

Each of the two experiments involved twelve participants. In the first experiment (which involved the speech evaluation in quiet), 10 were male and 2 female, aged between 19 and 81 (mean = 56.41, SD = 19.34). In the second experiment (which involved the speech evaluation with competing sounds), 7 were male and 5 female, aged between 19 and 80 (mean = 39.16, SD = 21.87). The age difference between the two groups was not statistically significant, as assessed by a Wilcoxon test. No participant in the first experiment took part in the second. Participants' demographics are reported in Tables 1 and 2 for the two experiments, respectively. No individual with only one cochlear implant wore a hearing aid on the non-implanted side. All participants reported normal touch sensitivity. They were recruited from the pool of CI users of the Department of Otorhinolaryngology-Head and Neck Surgery of the University Hospital of Verona, who regularly attend sessions for monitoring their stable hearing with the CI32. All participants had a history of monitoring lasting from 5 to 7 years (with no statistical difference between the two groups, as assessed by a Wilcoxon test).

Table 1.

Demographics of the CI subjects in the first experiment (speech evaluation in quiet).

Subject Sex Age Implant side Sensorineural hearing loss grade
CI1 M 71 Left unilateral Bilateral profound
CI2 M 61 Right unilateral Bilateral severe
CI3 M 64 Left unilateral Bilateral severe
CI4 M 51 Left unilateral Bilateral profound
CI5 M 57 Bilateral Bilateral profound
CI6 M 63 Right unilateral Bilateral profound
CI7 M 68 Left unilateral Bilateral severe
CI8 F 72 Bilateral Bilateral profound
CI9 F 19 Left unilateral Bilateral profound
CI10 M 81 Left unilateral Bilateral profound
CI11 M 21 Right unilateral Bilateral profound
CI12 M 50 Left unilateral Bilateral profound

Table 2.

Demographics of the CI subjects in the second experiment (speech evaluation with competing sounds).

Subject Sex Age Implant side Sensorineural hearing loss grade
CI1 M 61 Right unilateral Bilateral severe
CI2 F 28 Left unilateral Bilateral profound
CI3 M 24 Left unilateral Bilateral profound
CI4 M 69 Right unilateral Bilateral severe
CI5 F 19 Bilateral Bilateral profound
CI6 M 80 Left unilateral Bilateral profound
CI7 M 21 Right unilateral Bilateral profound
CI8 M 50 Left unilateral Bilateral profound
CI9 F 16 Right unilateral Bilateral profound
CI10 M 18 Right unilateral Bilateral moderate
CI11 F 39 Left unilateral Bilateral profound
CI12 F 45 Right unilateral Bilateral severe

Apparatus and setting

The experiments took place at the Department of Otorhinolaryngology-Head and Neck Surgery of the University Hospital of Verona, which provided a dedicated laboratory for the audiometric tests and a room for the audio-vibrotactile sessions. The apparatus for the audio-vibrotactile sessions consisted of a laptop (MacBook Air 2013 by Apple), a pair of headphones (DT770PRO by Beyerdynamic), a soundcard (UMC202HD by Behringer), and a haptic vest (Skinetic by Actronika). The latter was equipped with 20 actuators distributed across the entire torso, placed on both the front and back of the wearer. The laptop ran software coded in Pure Data that allowed participants to autonomously launch the trials via an intuitive graphical user interface. Figure 1 shows a participant during the experiment, where the setup is visible.

Fig. 1.

A participant during the experiment, showing the components of the audio-vibrotactile system and the interface running on the laptop.

The haptic vest was selected for its ability to simultaneously stimulate various parts of the upper body (left, right, top and bottom, front and back), under the plausible hypothesis that such full-torso stimulation would lead to a more immersive experience than that achievable with smaller and less complex vibrotactile devices, such as vibrotactile bracelets or belts12,19,21,33.

Audiometric tests

The hearing ability of the participants before and after exposure to either audio-only or audio-vibrotactile music was assessed via two widely used standardized audiometric tests34: tonal audiometry and speech audiometry. These were the same tests employed during the CI mapping and hearing rehabilitation sessions regularly attended by participants. Together, the two tests lasted an average of 18 min.

Tonal audiometry

This test assesses, in quiet, the audiometric threshold for warble tones at 250, 500, 1000, 2000, 4000 and 8000 Hz. It was common to both experiments.

Speech audiometry

The first experiment involved a speech audiometric test in quiet (i.e., speech against silence), while the second involved a speech audiometric test in competing noise (i.e., speech against babble noise). In both experiments, phonetically balanced lists of 20 bisyllabic Italian words were used to test speech intelligibility, and the percentage of correct word identification was computed for each subject. In the first experiment, words were presented in sound-field through a speaker from the front (0 degrees azimuth) at 20, 30, 40, 50, 60 and 70 dBHL. In the second experiment, speech stimuli were presented at 60 dBHL, while babble noise (5 male and 5 female talkers) was presented at 50, 60 and 70 dBHL, resulting in signal-to-noise ratios (SNRs) of +10, 0 and −10 dB, respectively. Speech and noise were presented in sound-field from the same speaker and direction as in the first experiment.
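The SNRs follow directly from the difference between the fixed speech level and each noise level. A trivial arithmetic check (Python, purely illustrative):

```python
# Speech fixed at 60 dBHL; babble noise at 50, 60 and 70 dBHL.
# SNR (dB) = speech level - noise level.
speech_level = 60
noise_levels = [50, 60, 70]

snrs = [speech_level - n for n in noise_levels]
print(snrs)  # [10, 0, -10]
```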

Stimuli

The stimuli consisted of 12 music pieces presented to the subjects at either the auditory or the audio-vibrotactile level, depending on the tested condition, namely audio-only (A) or audio-haptic (AH). In both conditions all music pieces were presented as mono (i.e., each stereo recording was mixed down to a mono signal). The music pieces (see Table 3) were selected to cover a wide variety of features of the music signal (e.g., tempo, harmony, instrumentation), as well as genre and conveyed emotion. Moreover, they were selected after having been tried on the haptic vest: a pilot test with three subjects not involved in the experiment (two with bilateral profound and one with bilateral severe hearing loss) was conducted for this purpose. This ensured that the stimuli could be clearly felt via the sense of touch.

Table 3.

The 12 musical stimuli involved in the experiments (trimmed to 1 min and 30 s).

Title Composer/band Genre Emotion
Ob-La-Di Ob-La-Da The Beatles Pop-ska Happy 1
Divertimento in D Major, K. 136 Wolfgang Amadeus Mozart Classical music Happy 2
And you know that Yellowjackets Jazz Happy 3
The vengeful one Disturbed Heavy metal Aggressive 1
Jekyll and Hyde Five Finger Death Punch Heavy metal Aggressive 2
Night on Bald Mountain Modest Petrovič Musorgskij Classical music Aggressive 3
Schindler’s list theme John Williams Classical music Sad 1
November rain Guns N’ Roses Rock Sad 2
Blue in green Miles Davis Jazz Sad 3
Weightless Marconi Union Ambient Relaxed 1
No time to die Billie Eilish Pop Relaxed 2
Dance of the blessed spirits Christoph Willibald Gluck Classical music Relaxed 3

Specifically, we focused on four emotions: happiness, aggressiveness, sadness, and relaxation. Such emotions were chosen because they have been investigated in several studies on emotional expression in music35,36, and because they cover the four quadrants of the two-dimensional Arousal-Valence space (positive/high arousal, negative/high arousal, negative/low arousal, and positive/low arousal)37.

The vibrotactile stimuli were derived from the selected audio files by filtering the audio signal to adapt it to the human vibrotactile perception range as well as the range supported by the actuators of the haptic vest, i.e., [30, 1000] Hz. Specifically, the filter was composed of a third-order Butterworth low-pass filter with cut-off frequency at 1000 Hz and a third-order Butterworth high-pass filter with cut-off frequency at 30 Hz. All actuators of the haptic vest were simultaneously driven by the same vibrotactile signal.
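The band-limiting step described above can be sketched as follows (a minimal illustration in Python with NumPy/SciPy; the paper does not specify the authors' implementation, and the function name and test signal are ours):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def audio_to_vibrotactile(audio, fs, lo=30.0, hi=1000.0, order=3):
    """Band-limit an audio signal to the [30, 1000] Hz vibrotactile range
    using a 3rd-order high-pass and a 3rd-order low-pass Butterworth
    filter in cascade, as described in the text."""
    sos_hp = butter(order, lo, btype="highpass", fs=fs, output="sos")
    sos_lp = butter(order, hi, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos_lp, sosfilt(sos_hp, audio))

# Example: a 1-second mono test signal at 44.1 kHz mixing a 10 Hz tone
# (below the vibrotactile range), a 200 Hz tone (in range), and a
# 5 kHz tone (above range).
fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 5000 * t)
y = audio_to_vibrotactile(x, fs)  # only the 200 Hz component survives largely intact
```

The same filtered signal would then drive all 20 actuators of the vest simultaneously.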

The amplitude of the auditory and vibrotactile stimuli was manually adjusted to be mutually coherent, based on the results in4. These amplitudes were determined via the pilot test conducted with the three subjects: for each piece, they were asked to agree on the amplitude values for both audio and vibrotactile signals yielding a comfortable (i.e., neither too low nor too high) and musically meaningful audio-vibrotactile experience. Each music piece was trimmed to last 1 min and 30 s.

Procedure

The procedure was the same for both experiments, with the exception of the speech audiometric test involved (without or with competing noise). Each experiment was divided into two parts, each testing one of the two conditions (A or AH). Each part was conducted on a different day, at an interval of six weeks on average. This interval was chosen to exclude or reduce any potential learning effect arising from the earlier exposure to one of the two conditions. In each experiment, half of the participants started with the part involving condition A and then performed the part involving condition AH; the other half did the reverse. Each part involved three phases:

  1. Pre-exposure audiometric tests: Participants first completed the tonal test and then the speech audiometric test (in quiet or in noise, depending on the experiment).

  2. Exposure: The subjects were asked to wear a pair of headphones (Beyerdynamic DT-770 Pro 80 Ohm) and, in the AH condition, also the haptic vest. At the outset, they were briefed about the experiment and asked to sign an informed consent form. They then familiarized themselves with the system via a session in which they were provided with an audio or audio-vibrotactile stimulus. For this purpose, we utilized a music piece not involved in the main experiment ("Don't stop me now" by Queen).

Subsequently, they were provided with a graphical interface allowing them to trigger the trials comprised in the experiment. The 12 stimuli were presented in randomized order across participants. After each trial, the subjects were asked to fill in a paper-based questionnaire composed of the following items:

  1. Emotion induction via valence and arousal assessment over a 9-point Self-Assessment Manikin38. Subjects were specifically instructed not to assess the recognition of the emotion contained in the experienced audio or audio-vibrotactile music, but the emotion they were feeling while listening to/feeling it;

  2. Ad-hoc questions about perception to be assessed over an 11-point Likert scale [0 = totally disagree, 10 = totally agree]:
    • [Immersion] “I felt immersed in the experience”;
    • [Music appreciation] “I appreciated the music I listened to”;
    • [Vibrations appreciation] “I appreciated the vibrations I felt” (only for parts involving the condition AH).
  3. Post-exposure audiometric tests: Immediately after the exposure session, participants in both groups underwent the same audiometric tests as in the pre-exposure phase, but with a different word list from the one used before the exposure.

Finally, participants underwent a short semi-structured interview with the experimenter to discuss their experience of interacting with the vibrotactile system and to explain any perceived difference between the two tested conditions.

Participants performed phase 2 alone, interacting autonomously with the developed apparatus, while during phases 1 and 3 they were assisted by the experimenters, who provided the standard stimuli of the audiometric tests and collected the responses. The study was approved by the Ethics Committee of the University Hospital of Verona and was carried out in accordance with the relevant ethical standards of the Declaration of Helsinki and in compliance with the EU GDPR.

Results

Objective assessment

Figures 2 and 3 illustrate participants' performance on the audiometric tests during the first and second experiment, respectively. Specifically, the figures show the difference between the test items collected during the post-experiment phase and those in the pre-experiment phase. For the tonal audiometry test, a negative post-pre difference indicates an improvement (lower thresholds), whereas for the speech audiometry test a positive difference indicates an improvement (more words correctly identified).
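The opposite sign conventions of the two tests can be made explicit with a small sketch (Python, with purely hypothetical numbers, not actual study data):

```python
def tonal_change(pre_db, post_db):
    """Tonal audiometry: thresholds in dBHL, lower is better, so a
    negative post - pre difference indicates an improvement."""
    return post_db - pre_db

def speech_change(pre_pct, post_pct):
    """Speech audiometry: % correct word identification, higher is
    better, so a positive post - pre difference indicates improvement."""
    return post_pct - pre_pct

# Hypothetical illustration:
print(tonal_change(pre_db=40, post_db=35))     # -5: threshold dropped, improvement
print(speech_change(pre_pct=60, post_pct=75))  # 15: more words correct, improvement
```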

Fig. 2.
Mean and standard error of the differences between post- and pre-experiment for the first experiment (involving speech audiometry in quiet). Legend: * = p < 0.05, ** = p < 0.01, *** = p < 0.001.

Fig. 3.
Mean and standard error of the differences between post- and pre-experiment for the second experiment (involving speech audiometry with competing sounds). Legend: *** = p < 0.001.

Statistical analysis was performed via a set of linear mixed effects models, one for each quantity. The models had the item (for the tonal audiometry test: 250, 500, 1000, 2000, 4000, and 8000 Hz; for the speech audiometry test: 20, 30, 40, 50, 60, and 70 dBHL, of which only the last three were involved in the experiment with competition) and the condition (A, AH) as fixed factors, and the subject as a random factor. For all models, the assumption of normality of the residuals was verified.
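The structure of these models can be sketched as follows (assuming Python's statsmodels; the paper does not state the software used, and the data here are synthetic stand-ins with an assumed AH effect, not the study's measurements):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic long-format data: one post - pre difference per
# subject x item x condition, mirroring the tonal-audiometry analysis.
subjects = [f"S{i}" for i in range(1, 13)]
items = [250, 500, 1000, 2000, 4000, 8000]  # tonal-test frequencies (Hz)
conditions = ["A", "AH"]

rows = []
for s in subjects:
    subj_offset = rng.normal(0, 1)                # random subject effect
    for it in items:
        for c in conditions:
            effect = -3.0 if c == "AH" else 0.0   # assumed AH improvement (dB)
            rows.append({"subject": s, "item": it, "condition": c,
                         "diff": effect + subj_offset + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# Linear mixed model: item and condition as fixed factors,
# subject as a random intercept, as described in the text.
model = smf.mixedlm("diff ~ C(item) + C(condition)", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```

With this synthetic data, the coefficient for condition AH comes out negative, i.e., lower (better) post-pre tonal thresholds in the AH condition.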

Concerning the first experiment, the analysis showed that participants’ performances during the tonal audiometry test were significantly better in condition AH than condition A for items 250 Hz (F(1,22) = 10.24, p = 0.00413), 500 Hz (F(1,22) = 8.87, p = 0.006933), 1000 Hz (F(1,22) = 5.01, p = 0.04584), 2000 Hz (F(1,22) = 8.2, p = 0.008996), 4000 Hz (F(1,22) = 53.11, p = 1.567e-05), and 8000 Hz (F(1,22) = 14.19, p = 0.003116). Participants’ performances during the speech audiometry test (in quiet) were significantly better in condition AH than condition A for items 30 dBHL (F(1,22) = 9.32, p = 0.005822), 40 dBHL (F(1,22) = 11.88, p = 0.005459), 50 dBHL (F(1,22) = 6.7, p = 0.01672), 60 dBHL (F(1,22) = 11.88, p = 0.005459), and 70 dBHL (F(1,22) = 15.8, p = 0.0006409).

Regarding the second experiment, the analysis showed that participants’ performances during the tonal audiometry test were significantly better in condition AH than condition A for items 500 Hz (F(1,22) = 25, p = 0.0004025), 1000 Hz (F(1,22) = 25, p = 0.0004025), 2000 Hz (F(1,22) = 19.56, p = 0.0002144), 4000 Hz (F(1,22) = 27.07, p = 3.223e-05), and 8000 Hz (F(1,22) = 22.39, p = 0.0006168). Participants’ performances during the speech audiometry test (with competing sounds) were significantly better in condition AH than condition A for items 50 dBHL (F(1,22) = 64.25, p = 6.411e-06) and 60 dBHL (F(1,22) = 86.45, p = 1.522e-06).

To assess whether the level of hearing impairment had an influence on the magnitude of the observed improvement caused by the vibrotactile stimulation concurrent with the music, we searched for correlations between the results of the tonal and speech audiometry in the pre-test (i.e., the baseline for each frequency band and intensity) and the difference between post- and pre-test, for both the A and AH conditions and for both experiments. For this purpose, Pearson's correlation test was utilized.
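This correlation step can be sketched as follows (Python with SciPy, on synthetic data with an assumed positive baseline-improvement relation; the actual per-subject scores are not reproduced here). Note that with n = 12 subjects, the test has the 10 degrees of freedom reported below as r(10):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Hypothetical baseline scores and post - pre improvements for 12 subjects.
baseline = rng.uniform(20, 90, size=12)
improvement = 0.5 * baseline + rng.normal(0, 5, size=12)  # assumed relation

# Pearson's correlation between baseline and improvement magnitude.
r, p = pearsonr(baseline, improvement)
print(f"r(10) = {r:.2f}, p = {p:.4g}")
```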

Regarding the first experiment, in the tonal audiometry test we did not find any significant correlation for either condition A or AH. In the speech audiometry test, for condition A we did not find any significant correlation, while for condition AH we found statistically significant, strong correlations for 20 dBHL (r(10) = 0.83, p = 0.0007115), 30 dBHL (r(10) = 0.65, p = 0.0215), 60 dBHL (r(10) = 0.94, p = 3.979e-06), and 70 dBHL (r(10) = 0.9, p = 5.132e-05).

Regarding the second experiment, in the tonal audiometry test we did not find any significant correlation for either condition A or AH. In the speech audiometry test, we found significant, strong correlations for both A and AH at 70 dBHL. However, these correlations are not relevant since nearly all results are 0 (as illustrated in Fig. 3), meaning that in both conditions participants did not perform well at 70 dBHL.

Subjective assessment

Questionnaire

The evaluations of the questionnaire items were very consistent across the two groups. For this reason, hereinafter we present the results of both groups together. Figure 4 illustrates the affective evaluations of the 12 stimuli along the Arousal-Valence plane for both conditions. Figure 5 shows a comparison between conditions for all stimuli together. Statistical analysis was performed via a set of generalized linear mixed effects models, one for each item of the questionnaire (with the exception of the question about vibrations, which pertained only to condition AH). The models had the item (Arousal, Valence, Immersion, Music Appreciation) and the condition (A, AH) as fixed factors, and the subject as a random factor. For all models, the assumption of normality of the residuals was verified. The analysis showed that participants' ratings for Arousal were significantly higher in condition AH than in condition A (F(1,551) = 23.84, p = 1.372e-06). They were also significantly higher in condition AH than in condition A for the Immersion item (F(1,551) = 14.19, p = 0.0001). No significant main effect was found for Valence or Music Appreciation.

Fig. 4.
Arousal and Valence assessment for each of the 12 musical stimuli (see Table 3). According to the circumplex model of affect, happiness is placed in the high-valence high-arousal quadrant, sadness in the low-valence low-arousal quadrant, relaxation in the high-valence low-arousal quadrant, and aggressiveness in the low-valence high-arousal quadrant. For each emotional stimulus the corresponding quadrant is indicated in light gray.

Fig. 5.
Mean and standard error for the five questionnaire items. Legend: *** = p < 0.001.

Interview

During the short final interview, participants provided valuable comments about their audio and audio-vibrotactile experiences, which were analyzed via a reflexive thematic analysis39. The analysis was conducted by the authors, who generated codes that were further organized into two main themes reflecting shared patterns:

Preference for condition AH: 19 participants reported preferring listening to music with accompanying vibrations over listening without them. The preference for the AH condition was attributed to the capability of vibrations to create a more intense musical experience. The following sub-themes were identified.

  • Immersion: 8 participants explicitly stated that vibrations made them feel more involved in the music listening experience, leading to the sensation of being enveloped in the music (e.g., "Surely I preferred the music with the vibrations because I felt intensely involved"; "When I closed my eyes I felt completely immersed"; "Without the vibrations I felt less engaged and less immersed in the music").

  • Novelty and surprise: 4 participants commented with enthusiasm on the novelty of the experience, which surprised them to a great extent, as well as on their willingness to repeat it (e.g., "Fantastic! I definitely want to have the vest at home"; "In some cases I wanted to dance!").

  • Usefulness: 3 participants commented on the ability of vibrations to facilitate their understanding of some attributes of the music (e.g., "The device helped me understand the rhythmic component of the song"; "The vibrations helped me better distinguish the various musical instruments, especially when there were many of them").

  • Transportation to a real environment: 3 participants commented on the ability of vibrations to induce the sensation of being in another environment (e.g., "It feels like being in a discotheque"; "During the pieces with the orchestra I felt as if I were in the theater attending a live concert").

  • Preference for more intense vibrations: 4 participants reported appreciating more the musical experiences involving strong vibrations than those with soft vibrations (e.g., "I liked the songs with strong vibrations more than those with lower intensities"; "I would like to have stronger vibrations in the slow pieces, like in the rhythmic ones").

Preference for condition A: 5 participants preferred the musical experience devoid of vibrations. Their choice was motivated by the following sub-themes.

  • Uncomfortable sensations: 4 participants reported perceiving the vibrations as too strong, which bothered them (e.g., "I found the experience without the vibrations more pleasant as the vibrations were too intense for me"; "After a while the intensity of the vibrations started to bother me, although I felt more immersed in the experience").

  • Need for regulating the vibration intensity: 3 participants suggested including the possibility of adjusting the intensity of the vibrations (e.g., "For each song I would like to have a knob to regulate the vibrations, especially when I start to feel that they are too much"; "I would like to reduce the strength of the vibrations, especially on the chest").

Discussion

In both experiments, participants performed the audiometric tests better after exposure to condition AH than to condition A. This is shown by the many statistically significant comparisons in both the tonal and the speech audiometry tests of both experiments, as illustrated in Figs. 2 and 3 (some comparisons likely failed to reach significance because of the small sample size). This clearly indicates that vibrations congruent and simultaneous with the heard music are effective in improving the ability of CI users to decode pure tones and speech signals. In particular, our results suggest that approximately 18 min of intermittent audio-vibrotactile stimulation with diverse types of music is sufficient to achieve an improvement in audiometric performance. This paves the way for new forms of therapy for CI individuals, with the additional benefit of leading to a deeper feeling of being immersed in the music experience, as indicated by the questionnaires. However, longitudinal studies are needed to understand whether such improvements, observed immediately after exposure to the vibrotactile musical stimuli, can persist.

The observed effect is plausibly due to processes of multisensory integration40, where the combination of the vibrotactile stimulation with the electrical signal generated by the CI greatly enhances participants' ability to process sound stimuli. Our findings support the hypothesis that audio-vibrotactile stimulation triggers pathways, through the secondary somatosensory cortex41, that are not activated via auditory stimulation alone.

Our study confirms the known bridge between music and speech, two complex auditory signals with similar acoustic parameters and building blocks that share common neural resources27,42. The speech audiometry results of our vibrotactile music training experience reinforce and extend the "OPERA" hypothesis43,44 that music training induces plasticity in speech and language networks, showing, to the best of our knowledge for the first time, that it also applies to CI users.

In particular, for the first experiment strong correlations were found between the speech audiometric performance in the pre-test and the magnitude of the improvement following AH exposure, for four out of six intensity levels (20, 30, 60 and 70 dBHL). This result sheds light on the processes of multisensory integration occurring in the AH condition that may have led to the observed improvements. Indeed, the effect may be partially explained by the well-known principle of inverse effectiveness, which states that for a stimulus with multisensory components, if the neural response to each unisensory component alone is weak, then the opportunity for multisensory enhancement is large45,46. In our case, if a CI user's performance on the speech audiometry is weak, then the opportunity for improving it following audio-vibrotactile music exposure is large. However, for the second experiment significant correlations in the AH condition were not found for 50 and 60 dBHL. This result does not necessarily imply that processes of multisensory integration, including the principle of inverse effectiveness, were not at work during the experiment with competition. Rather, the nature of the experiment itself (which involves concurrent noise) may have blurred the relation between the perceptual baseline and the magnitude of the improvement due to the audio-vibrotactile exposure.

Notably, in both experiments no statistically significant correlations were found for the tonal audiometry tests between the degree of hearing loss and the improvement following AH exposure. This may be explained by the fact that the tonal test is merely a tool for determining the type, degree, and configuration of hearing loss: it is inadequate for evaluating the activation of the central somatosensory music-language pathways, which can instead be assessed by speech audiometry.

Furthermore, it is worth noting that both tonal and speech audiometry performances in the A condition were affected by the auditory fatigue that follows music listening, a well-known phenomenon reported in the literature for both CI users and people with normal hearing42,47,48. In this respect, vibrotactile stimulation appears to counteract this phenomenon.

According to the questionnaires, as well as the participants’ comments, vibrations were generally well received (with an average appreciation of about 7 out of 9). Results showed that ratings for arousal and for the feeling of being immersed in the music experience were significantly higher in condition AH than in condition A. These results are in accordance with findings reported in the literature for individuals with normal hearing13,15. Nevertheless, vibrations were not effective in altering music perception in terms of pleasantness and appreciation, dimensions that were rated on par across the A and AH conditions. An explanation may be found in the comments of some participants who, even when preferring the AH condition, suggested including the possibility to adjust the vibration intensity, especially across different parts of the haptic vest. These comments parallel those reported in studies involving individuals with normal hearing12 and highlight the importance of equipping vibrotactile devices with customization mechanisms to ensure a comfortable and pleasant experience, as recently advocated by Choi et al.49. In part, our mixed results about the preference for the A or AH condition may also be explained by the findings of Aker et al.50, who found that not all CI users prefer music with congruent vibrotactile stimulation.

In addition, our experiment provides insights into how emotions in music are induced in CI users, with and without concurrent tactile stimulation. As illustrated in Fig. 5, participants reported that music induced in them a higher sense of arousal during happy and aggressive pieces compared to sad and relaxed ones. The same differences generally persisted in the AH condition. As for valence, participants’ ratings indicate that all music pieces had a positive effect on them, since for each piece the average valence ratings fall in the upper part of the Self-Assessment Manikin scale. The pieces in the AH condition generally followed a similar trend.

Notably, our experiment involved vibrotactile stimuli that were coherent and synchronous with their auditory counterparts. Future work could assess whether the effect is actually due to the congruence in amplitude and time between the stimuli in the two modalities, which contributes to multisensory integration; this could be tested by comparing against vibrotactile stimuli unrelated to the auditory ones in terms of frequency, amplitude and temporal distribution. Moreover, future work could assess whether the effect stems from multisensory integration rather than from stimulation of the haptic channel alone. Furthermore, future studies could explore adjustable vibrotactile settings and determine the duration of exposure required for notable improvements in audiometric test results. Finally, future work could assess whether the effect of vibrotactile exposure persists over time.

Conclusion

This study investigated for the first time the effects of exposure to music presented with concurrent vibrotactile stimuli on CI users’ audiometric performances. The vibrotactile stimuli, provided through a vest enhanced with actuators, were congruent in amplitude and time with the musical signals from which they were derived via simple bandpass filtering. Our findings provide evidence that tactile music greatly enhanced the ability of CI users to decode pure tone and speech signals compared to exposure to auditory signals alone. The effect was observed immediately after exposure to diverse audio-vibrotactile music pieces, and was consistent in both quiet and competition test conditions. These findings may open the possibility of novel forms of therapy for CI users (or hearing aid wearers).
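The paper states only that the vibrotactile signals were obtained from the music via simple bandpass filtering, without specifying the filter design. As a minimal sketch of how such a signal could be derived, one could apply a second-order (biquad) bandpass centred in the low-frequency range where the skin is most vibration-sensitive; the centre frequency, Q and sample rate below are illustrative assumptions, not the authors’ actual settings:

```python
import math

def biquad_bandpass(fs, f0, q):
    """Second-order (biquad) band-pass coefficients, 0 dB peak gain at f0."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0]
    return b, a

def apply_filter(b, a, x):
    """Direct-form I IIR filtering of the sample sequence x."""
    x1 = x2 = y1 = y2 = 0.0
    y = []
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y

def rms(sig):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in sig) / len(sig))

# Hypothetical parameters: 8 kHz audio, tactile band centred at 150 Hz.
fs = 8000
b, a = biquad_bandpass(fs, f0=150.0, q=1.0)

# An in-band 150 Hz tone passes nearly unattenuated; a 2 kHz tone is suppressed.
low = [math.sin(2.0 * math.pi * 150.0 * n / fs) for n in range(fs)]
high = [math.sin(2.0 * math.pi * 2000.0 * n / fs) for n in range(fs)]
rms_low = rms(apply_filter(b, a, low))
rms_high = rms(apply_filter(b, a, high))
print(rms_low, rms_high)
```

In a complete system, the filtered signal would then drive the vest’s actuators, ideally with per-actuator gain controls of the kind participants asked for.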

Most participants reported that they preferred music with concurrent vibrations over an audio-only experience, as it can lead to more immersive and engaging music consumption. In line with the findings of previous studies involving individuals with normal hearing, an increase in arousal was observed in the audio-vibrotactile condition compared to the absence of vibrations. Nevertheless, participants highlighted the need to equip the vibrotactile device with personalization mechanisms enabling them to dynamically adjust the intensity of the vibrations for different body parts. Effective vibrotactile devices and signal processing techniques for conveying complex sound information, such as music, via the sense of touch could offer a non-invasive and affordable means of enhancing listening abilities in hearing-impaired individuals. This calls for further research at the confluence of engineering and medical disciplines.

In future work we plan to conduct longitudinal studies to assess whether repeated exposure to audio-vibrotactile music on a daily or weekly basis results in a long-term improvement in CI recipients’ ability to perform audiometric tests.

Acknowledgements

We acknowledge the support of the MUSMET project funded by the EIC Pathfinder Open scheme of the European Commission (grant agreement n. 101184379). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Innovation Council. Neither the European Union nor the European Innovation Council can be held responsible for them. The authors are grateful to Nicolò Marconi for his help during the data collection.

Author contributions

L. Turchet and M. Carner conceived and designed the experiment. R. Rosaia and A. Diodati conducted the data collection. L. Turchet developed the software and performed the data analysis. M. Carner, R. Rosaia and A. Diodati provided clinical expertise. L. Turchet and M. Carner contributed to writing/revising the manuscript.

Data availability

The data related to this study are available from the corresponding author upon reasonable request.

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Remache-Vinueza, B., Trujillo-León, A., Zapata, M., Sarmiento-Ortiz, F. & Vidal-Verdú, F. Audio-tactile rendering: a review on technology and methods to convey musical information through the sense of touch. Sensors 21(19), 6575 (2021).
2. Flores Ramones, A. & del-Rio-Guerra, M. S. Recent developments in haptic devices designed for hearing-impaired people: a literature review. Sensors 23(6), 2968 (2023).
3. Eagleman, D. M. & Perrotta, M. V. The future of sensory substitution, addition, and expansion via haptic devices. Front. Hum. Neurosci. 16, 1055546 (2023).
4. Aker, S. C., Innes-Brown, H., Faulkner, K. F., Vatti, M. & Marozeau, J. Effect of audio-tactile congruence on vibrotactile music enhancement. J. Acoust. Soc. Am. 152(6), 3396–3409 (2022).
5. Young, G. W., O’Dwyer, N., Vargas, M. F., Donnell, R. M. & Smolic, A. Feel the music! – Audience experiences of audio-tactile feedback in a novel virtual reality volumetric music video. Arts 12(4), 156 (2023).
6. Sakuragi, R., Ikeno, S., Okazaki, R. & Kajimoto, H. CollarBeat: whole body vibrotactile presentation via the collarbone to enrich music listening experience. In Proceedings of the International Conference on Artificial Reality and Telexistence – Eurographics Symposium on Virtual Environments, 141–146 (2015).
7. Yamazaki, Y. et al. Hapbeat: single DOF wide range wearable haptic display. In ACM SIGGRAPH 2017 Emerging Technologies, 1–2 (2017).
8. Yamazaki, Y., Mitake, H. & Hasegawa, S. Implementation of tension-based compact necklace-type haptic device achieving widespread transmission of low-frequency vibrations. IEEE Trans. Haptics 15(3), 535–546 (2022).
9. Nanayakkara, S., Taylor, E., Wyse, L. & Ong, S. H. An enhanced musical experience for the deaf: design and evaluation of a music display and a haptic chair. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 337–346 (2009).
10. Hayes, L. Skin Music (2012): an audio-haptic composition for ears and body. In Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition, 359–360 (2015).
11. Gunther, E. & O’Modhrain, S. Cutaneous Grooves: composing for the sense of touch. J. New Music Res. 32(4), 369–381 (2003).
12. Turchet, L., West, T. & Wanderley, M. M. Touching the audience: musical haptic wearables for augmented and participatory live music performances. Pers. Ubiquit. Comput. 25, 749–769 (2021).
13. Hove, M. J., Martinez, S. A. & Stupacher, J. Feel the bass: music presented to tactile and auditory modalities increases aesthetic appreciation and body movement. J. Exp. Psychol. Gen. 149(6), 1137 (2020).
14. Cameron, D. J. et al. Undetectable very-low frequency sound increases dancing at a live concert. Curr. Biol. 32(21), R1222–R1223 (2022).
15. Siedenburg, K., Bürgel, M., Özgür, E., Scheicht, C. & Töpken, S. Vibrotactile enhancement of musical engagement. Sci. Rep. 14(1), 7764 (2024).
16. Merchel, S. & Altinsoy, M. E. The influence of vibrations on musical experience. J. Audio Eng. Soc. 62(4), 220–234 (2014).
17. Drennan, W. R., Oleson, J. J., Gfeller, K., Crosson, J., Driscoll, V. D., Won, J. H., … Rubinstein, J. T. Clinical evaluation of music perception, appraisal and experience in cochlear implant users. Int. J. Audiol. 54(2), 114–123 (2015).
18. Fletcher, M. D. Using haptic stimulation to enhance auditory perception in hearing-impaired listeners. Expert Rev. Med. Dev. 18(1), 63–74 (2021).
19. Aker, S. C., Faulkner, K. F., Innes-Brown, H. & Marozeau, J. Perceived auditory dynamic range is enhanced with wrist-based tactile stimulation. J. Acoust. Soc. Am. 156, 2759–2766 (2024).
20. Huang, J., Lu, T., Sheffield, B. & Zeng, F. G. Electro-tactile stimulation enhances cochlear-implant melody recognition: effects of rhythm and musical training. Ear Hear. 41(1), 106–113 (2020).
21. Fletcher, M. D., Thini, N. & Perry, S. W. Enhanced pitch discrimination for cochlear implant users with a new haptic neuroprosthetic. Sci. Rep. 10(1), 10354 (2020).
22. Huang, J., Sheffield, B., Lin, P. & Zeng, F. G. Electro-tactile stimulation enhances cochlear implant speech recognition in noise. Sci. Rep. 7(1), 2196 (2017).
23. Fletcher, M. D., Hadeedi, A., Goehring, T. & Mills, S. R. Electro-haptic enhancement of speech-in-noise performance in cochlear implant users. Sci. Rep. 9(1), 11428 (2019).
24. Fletcher, M. D. & Zgheib, J. Haptic sound-localisation for use in cochlear implant and hearing-aid users. Sci. Rep. 10(1), 14171 (2020).
25. Shahin, A. J. Neurophysiological influence of musical training on speech perception. Front. Psychol. 2(126), 1–10 (2011).
26. Fuller, C. D., Galvin, J. J. III, Maat, B., Başkent, D. & Free, R. H. Comparison of two music training approaches on music and speech perception in cochlear implant users. Trends Hear. 22 (2018). 10.1177/2331216518765379
27. Moossavi, A. & Gohari, N. The impact of music on auditory and speech processing. Auditory Vestib. Res. 28(3), 134–145 (2019).
28. Ab Shukor, N. F., Lee, J., Seo, Y. J. & Han, W. Efficacy of music training in hearing aid and cochlear implant users: a systematic review and meta-analysis. Clin. Exp. Otorhinolaryngol. 14(1), 15–28 (2021).
29. Abdulbaki, H., Mo, J., Limb, C. J. & Jiam, N. T. The impact of musical rehabilitation on complex sound perception in cochlear implant users: a systematic review. Otol. Neurotol. 44(10), 965–977 (2023).
30. Fletcher, M. D. Can haptic stimulation enhance music perception in hearing-impaired listeners? Front. Neurosci. 15, 723877 (2021).
31. Oh, Y., Schwalm, M. & Kalpin, N. Multisensory benefits for speech recognition in noisy environments. Front. Neurosci. 16, 1031424 (2022).
32. Carner, M. et al. Personal experience with the remote check telehealth in cochlear implant users: from COVID-19 emergency to routine service. Eur. Arch. Otorhinolaryngol. 280(12), 5293–5298 (2023).
33. Luo, X. & Hayes, L. Vibrotactile stimulation based on the fundamental frequency can improve melodic contour identification of normal-hearing listeners with a 4-channel cochlear implant simulation. Front. Neurosci. 13, 1145 (2019).
34. Suh, M. J., Lee, J., Cho, W. H., Jin, I. K., Kong, T. H., Oh, S. H., … Seo, Y. J. Improving accuracy and reliability of hearing tests: an exploration of international standards. J. Audiol. Otol. 27(4), 169 (2023).
35. Gabrielsson, A. & Juslin, P. N. Emotional Expression in Music (Oxford University Press, 2003).
36. Turchet, L., O’Sullivan, B., Ortner, R. & Guger, C. Emotion recognition of playing musicians from EEG, ECG, and acoustic signals. IEEE Trans. Hum. Mach. Syst. (2024).
37. Russell, J. A. A circumplex model of affect. J. Pers. Soc. Psychol. 39(6), 1161 (1980).
38. Bradley, M. M. & Lang, P. J. Measuring emotion: the self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 25(1), 49–59 (1994).
39. Braun, V. & Clarke, V. Reflecting on reflexive thematic analysis. Qual. Res. Sport Exerc. Health 11(4), 589–597 (2019).
40. Nava, E. et al. Audio-tactile integration in congenitally and late deaf cochlear implant users. PLoS One 9(6), e99606 (2014).
41. Schürmann, M., Caetano, G., Hlushchuk, Y., Jousmäki, V. & Hari, R. Touch activates human auditory cortex. Neuroimage 30(4), 1325–1331 (2006).
42. Kyrtsoudi, M., Sidiras, C., Papadelis, G. & Iliadou, V. M. Auditory processing in musicians, a cross-sectional study, as a basis for auditory training optimization. Healthcare 11(14), 2027 (2023).
43. Patel, A. D. Can nonlinguistic musical training change the way the brain processes speech? The expanded OPERA hypothesis. Hear. Res. 308, 98–108 (2014).
44. Neves, L., Correia, A. I., Castro, S. L., Martins, D. & Lima, C. F. Does music training enhance auditory and linguistic processing? A systematic review and meta-analysis of behavioral and brain evidence. Neurosci. Biobehav. Rev. 140, 104777 (2022).
45. Holmes, N. P. & Spence, C. Multisensory integration: space, time and superadditivity. Curr. Biol. 15(18), R762–R764 (2005).
46. Meredith, M. A. & Stein, B. E. Interactions among converging sensory inputs in the superior colliculus. Science 221(4608), 389–391 (1983).
47. Venet, T. et al. Parameters influencing auditory fatigue among professionals working in the amplified music sector: noise exposure and individual factors. Int. J. Audiol. 63(9), 686–694 (2024).
48. Cartocci, G., Inguscio, B. M. S., Giorgi, A., Vozzi, A., Leone, C. A., Grassia, R., … Babiloni, F. Music in noise recognition: an EEG study of listening effort in cochlear implant users and normal hearing controls. PLoS One 18(8), e0288461 (2023).
49. Choi, Y., Jeon, J., Lee, C., Noh, Y. G. & Hong, J. H. A way for deaf and hard of hearing people to enjoy music by exploring and customizing cross-modal music concepts. In Proceedings of the CHI Conference on Human Factors in Computing Systems, 1–17 (2024).
50. Aker, S. C., Faulkner, K. F., Innes-Brown, H., Vatti, M. & Marozeau, J. Some, but not all, cochlear implant users prefer music stimuli with congruent haptic stimulation. J. Acoust. Soc. Am. 155(5), 3101–3117 (2024).
