Published in final edited form as: J Am Acad Audiol. 2012 Sep;23(8):623–634. doi: 10.3766/jaaa.23.8.7

Using Patient Perceptions of Relative Benefit and Enjoyment to Assess Auditory Training

Nancy Tye-Murray 1, Mitchell S Sommers 2, Elizabeth Mauzé 1, Catherine Schroy 1, Joe Barcroft 3, Brent Spehar 1

Abstract

Background

Patients seeking treatment for hearing-related communication difficulties are often disappointed with the eventual outcomes, even after they receive a hearing aid or a cochlear implant. One approach that audiologists have used to improve communication outcomes is to provide auditory training (AT), but compliance rates for completing AT programs are notoriously low.

Purpose

The primary purpose of the investigation was to conduct a patient-based evaluation of the benefits of an AT program, I Hear What You Mean, in order to determine how the AT experience might be improved. A secondary purpose was to examine whether patient perceptions of the AT experience varied depending on whether they were trained with a single talker’s voice or heard training materials from multiple talkers.

Research Design

Participants completed a 6-week auditory training program and were asked to respond to a post-training questionnaire. Half of the participants heard the training materials spoken by six different talkers and half heard the materials produced by only one of the six talkers.

Study Sample

Participants included 78 adult hearing-aid users and 15 cochlear-implant users, for a total of 93 participants who completed the study. Ages ranged from 18 to 89 years (M = 66 years, SD = 16.67 years); 43 were female and 50 were male. The mean better ear pure-tone average for the participants was 56 dB HL (SD = 25 dB).

Intervention

Participants completed the single- or multiple-talker version of the 6-week computerized AT program, I Hear What You Mean, and then completed a post-training questionnaire in which they rated the benefits of overall training and of the individual training activities and described what they liked best and least about the program.

Data Collection and Analysis

After completing the 6-week computerized AT program, participants completed a post-training questionnaire. Seven-point Likert-scale responses to whether understanding spoken language had improved were converted to individualized z scores and analyzed for changes due to AT. Written responses were coded and categorized to consider both positive and negative subjective opinions of the AT program. Regression analyses were conducted to examine the relationship between perceived benefit and perceived enjoyment and to identify factors that predict overall program enjoyment.

Results

Participants reported improvements in their abilities to recognize spoken language and in their self-confidence as a result of participating in AT. Few differences were observed between reports from those trained with one versus six different talkers. Correlations between perceived benefit and enjoyment were not significant and only participant age added unique variance to predicting program enjoyment.

Conclusion

Participants perceived AT to be beneficial. Perceived benefit did not correlate with perceived enjoyment. Compliance with computerized AT programs might be enhanced if patients have regular contact with a hearing professional and train with meaning-based materials. An unheralded benefit of AT may be an increased sense of control over the hearing loss. In future efforts, we might aim to make training more engaging and entertaining, and less tedious.

Keywords: Auditory training, hearing aids, cochlear implants, self-assessment, hearing loss

INTRODUCTION

One approach that audiologists have used to improve communication outcomes in individuals with hearing loss is to provide auditory training (AT). AT is instruction designed to maximize an individual’s use of residual hearing by means of listening practice, and often follows a structured hierarchy of listening activities that become progressively more difficult with each training session. A significant challenge for the routine implementation of AT, however, is that compliance and completion rates are often quite low. For instance, Sweetow and Sabes (2010) found a completion rate of less than 30% for their computerized program in a group of over 3000 participants, meaning that most participants simply stopped taking the lessons in the curriculum. The low compliance rates for AT are surprising given that patients who engage in such training are presumably highly motivated to improve spoken communication. In addition, even when travel and time demands are minimized by use of home-based programs, completion rates remain a significant difficulty in implementing AT programs (Sweetow and Palmer, 2005; Sweetow and Sabes, 2010).

The traditional approach to assessing AT programs, comparing pre-training and post-training performance, provides critical information regarding program efficacy, with the underlying assumption being that more effective programs will have greater compliance rates and higher ratings of enjoyment. However, research on behavior change and maintenance in a number of health-related domains, including cigarette smoking, weight control, alcohol abuse and exercise behaviors (see Strecher et al., 1986 for a review) suggests that patient perceptions are a principal determinant of program compliance. Bandura (1977, 1982) has proposed that in determining whether to continue engaging in new behaviors, such as extensive AT, individuals weigh their perceptions about the outcomes that will result from engaging in the behavior (perceived benefit) against the perceived demands of continued engagement with the activity (perceived enjoyment).

Within the framework for behavior maintenance proposed by Bandura (1977, 1982), subjective evaluations of both perceived benefit and perceived enjoyment can provide essential information regarding factors that affect overall compliance rates. This focus on subjective evaluations is likely especially important for maintenance of AT because most programs require extensive, multi-session training involving thousands, if not tens of thousands of trials, with little or no objective assessment of program efficacy until after training has been completed. Thus, subjective measures of perceived efficacy and program enjoyment are likely critical factors in participants’ decisions about whether to continue with AT. However, there is scant information about patients’ subjective impressions about the benefits of AT and virtually no information about which aspects of training they like or which aspects they dislike. The absence of quantitative analyses of subjective reports regarding AT is surprising given that such investigations have proven invaluable in other research and clinical applications, as when evaluating hearing-aid benefit (e.g., Glasgow Hearing Aid Benefit Profile, Gatehouse, 1999), hearing-aid performance (e.g., Profile of Hearing Aid Performance, Cox and Gilmore, 1990), quantifying perceived handicap (e.g., Hearing Handicap Inventory for the Elderly, Weinstein et al., 1986), and evaluating a broad array of listening-related difficulties (e.g., Communication Profile for the Hearing Impaired, Demorest and Erdman, 1987). Subjective evaluations can also identify benefits of training that are not amenable to evaluation by simple comparisons of pre- and post-training speech-recognition scores, but that nevertheless have important consequences for communicative behavior. For example, subjective measures of how AT affects confidence in different listening situations (e.g., talking to a family member; talking to a stranger) can provide unique information about changes in an individual's overall quality of life and sense of self-efficacy (e.g., Smith and West, 2006).

For purposes of establishing why some individuals elect to complete AT while others do not, one would ideally like to compare subjective evaluations from participants who do and do not complete a given AT program. In practice, however, such direct comparisons are rarely possible because those who elect to discontinue the program are often unavailable or unwilling to provide additional responses regarding their impressions of the AT program. Moreover, even if such direct comparisons were possible, interpretation would be complicated by the need to compare subjective reports from individuals who had completed all components of the program with those from individuals who had only partial information about the program.

No self-assessment questionnaire specifically designed to quantify changes following an AT program exists (Gil and Iorio, 2010). Even so, previous investigations have attempted to obtain subjective data about training efficacy by administering self-assessment questionnaires. These questionnaires most typically require patients to respond to items that offer a closed set of choices, such as yes, sometimes, and no (as in the Hearing Handicap Inventory for the Elderly, Weinstein et al., 1986). Gil and Iorio (2010) noted a trend for adults with mild to moderate hearing loss to self-report fewer listening difficulties in daily situations after they received formal AT, as compared to a control group of participants who received no training. Their participants completed the Abbreviated Profile of Hearing Aid Benefit (Cox and Alexander, 1995), which is typically administered to verify hearing-aid benefit, and quantifies listening difficulties experienced in quiet and noisy daily situations. Smaldino and Smaldino (1988) administered a self-assessment instrument to adults in order to assess the benefits of an aural rehabilitation program that included AT. The instrument, the Hearing Performance Inventory (Giolas et al., 1979), is designed to assess the communication skills of adults in a variety of listening situations. Their participants demonstrated a significant change in perceived hearing handicap scores as a result of participation (see also Bode and Oyer, 1970; Kricos and Holmes, 1996; Kricos et al., 1992; Newman and Weinstein, 1988, for other examples in which questionnaires designed for other purposes have been used to assess the efficacy of auditory training).

In the current report, we focus on analyzing subjective evaluations from participants who have completed our six-week AT program, I Hear What You Mean, with the goal of identifying components of the program that are associated with both positive and negative participant impressions. In keeping with the general framework of behavioral compliance proposed by Bandura (1977, 1982), the focus of the current research was on establishing subjective impressions of perceived benefit from the AT program and on identifying components of the training protocol that are most and least palatable to participants. The primary hypothesis of the research was that perceived benefit of AT and perceived value of the training activities will make significant and independent contributions to overall ratings of program enjoyment.

A secondary goal of the research was to establish whether perceived benefit of AT would vary depending upon the number of talkers that participants were exposed to during training. To address this issue, we developed two versions of the AT program, one in which all of the training material was spoken by a single talker and another in which six different talkers produced the training stimuli. We predicted that those trained with a single talker would report greater confidence and better speech recognition when interacting with familiar talkers (family members or a close friend) than would those trained with multiple talkers. Conversely, we expected those trained with multiple talkers to self-report greater confidence and speech recognition when interacting with less familiar talkers (casual acquaintances or strangers). These hypotheses were based on the well-established principle of transfer-appropriate processing (Morris et al., 1977), which proposes that learning is optimized when training and testing conditions overlap. As applied to AT, we predicted that single-talker training would teach participants to focus on idiosyncratic properties of a single voice (i.e., idiolects) and that these individuals would report the most gains in situations where they are asked to attend to a highly familiar single talker. In contrast, we expected listeners in the multiple-talker training condition to learn what was common across productions of a given stimulus by multiple talkers and therefore to report the most benefit in multiple-talker situations.

The investigation is novel in the following ways. We asked questions that pertained directly to the AT experience rather than administering questionnaires developed for other purposes. We included rating scales as well as open-ended questions that do not limit responses to a pre-determined set of choices. Our open-ended questions focused on identifying both positive and negative aspects of training. Finally, we performed quantitative and qualitative analyses of the responses and then considered the implications of the findings for how clinicians might best provide computer-based AT to adults who have hearing loss.

METHODS

Participants

Participants were recruited through flyers posted in local audiology clinics, through a volunteer database maintained by Washington University School of Medicine, and through a database maintained by our laboratory. They included 78 adult hearing-aid (HA) users and 15 cochlear-implant (CI) users, for a total of 93 participants who completed the study. Their ages ranged from 18 to 89 years (M=66 years, SD=16.67 years). Forty-three females and 50 males participated. The mean better ear pure tone average (bPTA) was 48.7 dB HL (SD = 15.2) for the HA participants and 65.8 dB HL (SD = 11.2) for the CI participants. When determining bPTA, a no response was recorded as 120 dB HL. All participants had been wearing amplification for over 6 months. The mean duration of hearing loss was 19.8 years (SD=14.9). For the hearing-aid users, approximately 80% wore two hearing aids, with the remaining 20% wearing one hearing aid. Slightly less than half (7 of 15) of the CI participants wore a hearing aid on the ear opposite their cochlear implant. None of the participants had prior experience with AT. Participants received $10/hour to compensate them for time and travel expenses. This study was approved by the Washington University School of Medicine Human Research Protection Office. Written consent was obtained from each participant.
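For readers who want to see the bookkeeping concretely, the sketch below shows one plausible way to compute a better ear pure tone average with "no response" coded as 120 dB HL, as described above. The audiometric frequencies (0.5, 1, and 2 kHz) and the threshold values are illustrative assumptions; the article does not specify which frequencies entered the average.

```python
# Hypothetical sketch of a better-ear pure-tone average (bPTA) calculation,
# assuming a conventional 0.5/1/2 kHz average. "No response" thresholds
# (represented here by None) are coded as 120 dB HL, as in the article.

NO_RESPONSE = 120  # dB HL assigned when no response was obtained

def pure_tone_average(thresholds_db_hl):
    """Average audiometric thresholds (dB HL), treating None as no response."""
    coded = [NO_RESPONSE if t is None else t for t in thresholds_db_hl]
    return sum(coded) / len(coded)

def better_ear_pta(left_thresholds, right_thresholds):
    """Return the PTA of whichever ear has the lower (better) average."""
    return min(pure_tone_average(left_thresholds),
               pure_tone_average(right_thresholds))

# Example: thresholds at 500, 1000, and 2000 Hz for each ear (illustrative values)
print(better_ear_pta([45, 50, 60], [55, None, 70]))  # left-ear average, 51.67 dB HL
```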

AT Program

The I Hear What You Mean program consisted of 12 lessons that were completed by each participant within a six-week period. Although AT programs differ extensively with respect to the nature of stimuli and training exercises, each lesson in the I Hear What You Mean program consisted of five activities that are representative of those used in AT (see below for additional details about the individual training exercises). Participants made two visits to our facility each week. Each training lesson focused on a particular theme (e.g., restaurant, travel) and took approximately one hour to complete. In addition, each exercise provided extensive practice on a set of phonemes, with easily discriminated phonemes featured in early exercises and less easily discriminated phonemes featured in later exercises. All stimuli in the program were presented through a loudspeaker in a sound-treated booth. Stimuli were presented with a background noise of four-talker babble at approximately 62 dB SPL. The level of the speech varied adaptively under computer control using a tracking protocol that maintained performance for each of the five activities at approximately an 80% correct response rate. Participants were trained individually and sat in a comfortable chair in front of a touch-screen computer monitor.
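The article states only that speech level varied adaptively under computer control to hold performance near 80% correct; the exact tracking rule is not given. The sketch below is a minimal, hypothetical illustration of one common approach, a weighted up-down staircase on the speech level against the fixed-level babble; the starting level, step size, and update rule are assumptions, not the program's actual parameters.

```python
# Illustrative sketch of a weighted up-down staircase that converges near a
# target percent correct. This is NOT the program's documented tracking rule;
# all parameters below are assumptions for demonstration only.

class AdaptiveSpeechLevel:
    def __init__(self, start_db=70.0, step_db=2.0, target=0.80):
        self.level_db = start_db   # current speech presentation level (dB)
        self.step_db = step_db
        self.target = target       # desired proportion correct

    def update(self, correct):
        """Nudge the speech level after each trial.

        Weighting the down-step by (1 - target) and the up-step by target
        makes the track converge near the target proportion correct
        (a Kaernbach-style weighted staircase).
        """
        if correct:
            # Correct response: make the task slightly harder by lowering the
            # speech level relative to the fixed-level babble.
            self.level_db -= self.step_db * (1 - self.target)
        else:
            # Incorrect response: make the task easier.
            self.level_db += self.step_db * self.target
        return self.level_db

track = AdaptiveSpeechLevel()
for response in [True, True, False, True, True]:   # hypothetical trial outcomes
    track.update(response)
print(round(track.level_db, 2))
```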

As noted, we developed two versions of the program: a multi-talker version and a single-talker version, and participants were alternately assigned to one of the two versions. Participants who completed the multi-talker training program listened to six different talkers, three men and three women, throughout the entire program. Participants who completed the single-talker training program were randomly assigned to one of the six talkers used in the multi-talker training (with all six talkers represented across participants in the single-talker conditions). They listened to their assigned talker throughout the entire program.

All training and testing activities were completed with participants seated in a sound-attenuating booth. Stimuli were delivered through two loudspeakers placed at approximately 45° angles to the participant. When background babble was present, it was combined with the speech signal prior to presentation through the loudspeakers (i.e., speech and babble were presented simultaneously through both loudspeakers). The activities were designed to encompass a wide range of both analytic and synthetic AT exercises (Tye-Murray, 2009), ranging from basic phoneme discrimination to comprehension of extended passages. Within each of the 12 lessons, Activity 1 focused on sound identification in a manner that introduced the theme of the lesson. Participants heard a word and then had to indicate whether a lesson’s target sound occurred in the initial, medial, or final position. Activity 2 was a meaning-oriented picture-based 4-choice discrimination task. Participants heard two words that were either the same (e.g., mat-mat) or that differed by a single phoneme (e.g., mat-bat), and then had to select which of four pictures shown on the touch screen illustrated the temporal order of the two words. In this example, the pictures would display two mats, two bats, a mat next to a bat, and a bat next to a mat. Activity 3 involved completing sentences. Participants heard the first part of the sentence in quiet and then had to select the final word that completed the sentence’s meaning, from a choice of four options varying by one phoneme and heard in the four-talker babble. Activity 4 was a meaning-oriented sentence-identification task and required participants to listen to a sentence and then select from three written sentences the one that was most likely to occur next, given the context of the preceding spoken sentence. Activity 5 focused on comprehension of extended passages. Participants heard a passage lasting approximately 45 seconds, and then answered two multiple-choice comprehension questions. Next they heard and simultaneously read the same passage and answered two additional multiple choice comprehension questions. This second presentation and the accompanying text allowed them to verify what they had heard. All activities employed a test-like format, where participants made a decision in response to a test prompt. In addition, all activities included feedback in the form of presenting the correct answer to participants following their responses. Further description and examples for the four meaning-based activities appear in Table 1.

Table 1.

Names, training type, description, and examples for the five training activities used in the I Hear What You Mean program. Within an exercise, all activities centered on a common theme, such as a restaurant.

Activity 1: Introduction of Sound Contrast (Analytic)
Description: The participant hears a word presented in 4-talker babble and indicates the position of the target sound.
Example: Target sound: B. Stimuli: cab, bat, about.

Activity 2: Four-Choice Discrimination (Analytic)
Description: The participant hears two words presented with a background of 4-talker babble and chooses a response from a choice of four double-picture sets.
Example: Stimuli: "Meat Meat". Picture sets: Meat Meat, Beet Beet, Meat Beet, Beet Meat.

Activity 3: Sentence Completion (Analytic)
Description: The participant listens to a sentence spoken in quiet that is missing its final word. The participant then presses a series of four buttons and, with each press, hears a word spoken in the presence of background babble. The task is to select the appropriate word to complete the sentence.
Example: Stimulus: "A hamburger is served on a ______." Word choices: ton, bun, none, done.

Activity 4: Contextualized Sentences (Synthetic)
Description: The participant hears a target sentence in background babble. Three sentences then appear on the touchscreen, one of which is logically related to the spoken sentence. Depending on what the participant hears, all three sentences could make sense (e.g., lap, map, cap). The task is to select the appropriate sentence.
Example: Stimulus: "Bob spilled coffee on his map." Sentence choices: "He was looking for directions."; "It burned his leg."; "He took it off and set it on the table."

Activity 5: Listening Comprehension (Synthetic)
Description: The participant listens to a paragraph spoken in background babble and then answers two multiple-choice questions. The paragraph is then repeated and simultaneously presented orthographically, followed by two additional multiple-choice questions.
Example: Stimulus: Paragraph about tipping in European restaurants. Comprehension question: Where should you leave the tip? Choices: a) Give it directly to the waiter; b) Leave it on the table; c) Put it under your plate.

An audiologist explained each activity during the first training session to be sure the participant understood it. The audiologist also was available to answer questions throughout the session. The sessions were self-guided, and participants were allowed to complete them at their own pace. During training, participants were instructed to set their hearing aid(s) or cochlear implant to the setting they typically used in everyday communication. All but one participant enrolled in the experiment (that participant withdrew because of a family medical emergency) completed the pre-test session, training, and the post-test session immediately following training.

Test Sessions

The first two and last two visits of the study period for each participant consisted of a pre-test and a post-test, respectively. After two pre-test sessions, participants began the computerized-training program. A test session lasted approximately 1.5 hours and included auditory-only tests of consonant, word, and sentence recognition and tests of spoken language comprehension. Approximately one-third of the participants also completed a lipreading test. At the end of training, all participants completed the post-test battery and a questionnaire. As the focus of the current study was on subjective evaluations and predictors of program enjoyment, we restrict the current analyses to the findings from the questionnaires.

Questionnaires

As part of the post-training assessments, participants completed an exit questionnaire. On the questionnaire, participants responded using a 7-point Likert scale and then were asked to briefly explain their answers in an open-ended format using written text. Questions from the post-training questionnaire are listed in Table 2. Participants also were asked to indicate whether they felt that training had improved their ability to a) understand words, b) understand single sentences, c) understand multiple sentences, and d) understand the meaning of an extended series of sentences. Analyses were conducted on the frequency with which participants indicated improvement in each of the four areas as well as on a summed value of the total number of areas (out of 4 possible) in which they indicated improvement. Finally, participants were queried about the value of each of the training activities (also using a 7-point Likert scale).

Table 2.

Post-training Exit Questionnaire (Questions Q1 through Q6 are referred to by number in the text).

Rating Questions (rated from 1 = very little to 7 = very much)

Please indicate how much you believe that you improved in your ability to understand spoken language as a result of having participated in training. Briefly explain your answer. (Q1)

To what extent has participating in this auditory program improved your self-confidence in engaging in conversation with casual acquaintances or strangers? (Q3)

To what extent has participating in this auditory program improved your self-confidence in engaging in conversation with family members or close friends? (Q4)

Please indicate how much you enjoyed participating in this auditory training program. (Q6)

Please indicate how much each of the following types of activities helped you in terms of improving your listening abilities (Activities 1 through 5 listed).
Additional Questions

In what aspects (if any) of comprehending spoken language do you feel that you have improved? Please check all that apply.

  • _____ Understanding individual words
  • _____ Understanding individual sentences
  • _____ Understanding multiple sentences
  • _____ Getting the gist of a series of sentences
  • _____ None at all

What did you like most about the program?
What did you like least about the program?

RESULTS

Perceived benefit of AT training

Our initial analysis assessed perceived changes in language understanding and communicative confidence as a result of participating in the AT program, as well as how much participants enjoyed the training. We also summed the total number of areas (individual words, individual sentences, multiple sentences, extended discourse) in which participants felt they had improved as a consequence of training. In this and all remaining analyses, we initially analyzed data for hearing-aid users and cochlear-implant recipients separately. No differences emerged between the two groups, and thus data were combined across the two sensory aids. Figure 1 displays means (and standard errors) for the four questions assessed using the Likert scale (see Footnote 1). In general, participants indicated moderate improvements in their ability to understand spoken language (mean of 4.1 on the 7-point scale for single- and multiple-talker training combined) and generally enjoyed their participation in the program (mean of 5.9 across the two training groups). The only significant difference (see Footnote 2) observed across the four questions shown in Figure 1 was for Question 3 (“to what extent did auditory training improve confidence for engaging in conversations with casual acquaintances or strangers”). As noted, our working hypothesis for this question was that multiple-talker training would improve confidence to a greater extent than would single-talker training, owing to greater input variability for the former. Contrary to this proposal, however, individuals who received single-talker training indicated significantly greater gains in confidence when talking to strangers or casual acquaintances, Mann-Whitney U (91) = 2.1, p < .05. Participants who received single-talker training self-reported that they had improved on an average of 1.7 (SE = .16) of the four types of language tasks we assessed (words, individual sentences, series of sentences, understanding discourse). The corresponding value for multiple-talker training was 1.3 (SE = .16), and the difference between single- and multiple-talker training was not significant (see Footnote 3).
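As a concrete illustration of the group comparison reported above, the sketch below runs a Mann-Whitney U test on single- versus multiple-talker Likert ratings using SciPy. The rating values are fabricated placeholders, not study data, and the call is simply the standard SciPy interface rather than the authors' analysis code.

```python
# Minimal sketch of a nonparametric comparison of Likert ratings between the
# single- and multiple-talker groups. The ratings below are fabricated
# placeholders for illustration only.
from scipy.stats import mannwhitneyu

single_talker_q3 = [6, 5, 7, 4, 6, 5, 6]   # hypothetical confidence ratings (Q3)
multi_talker_q3 = [4, 5, 3, 5, 4, 4, 5]

stat, p_value = mannwhitneyu(single_talker_q3, multi_talker_q3, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```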

Figure 1.


Means and standard errors for Likert-scale responses from participants trained using single talkers (black bars) and multiple talkers (open bars). Abbreviated versions of the questions that were asked are shown on the right side of the Figure. Error bars represent standard errors.

To further examine specific aspects of spoken language that participants felt had improved as a result of AT, we obtained frequency counts of the total number of individuals (maximum of 93) indicating improvement in each of the four aspects of language assessed by the questionnaire; these findings are shown in Figure 2. Eighty-eight percent of the participants believed that they had improved in at least one aspect of spoken language comprehension, with only 12% declining to check one of the available choices. The majority of participants (66%) indicated that they believed they comprehended individual words better as a result of training. Thirty-four percent believed that their ability to understand sentences had improved; 22% indicated that their understanding of multiple sentences had improved; and 34% indicated that training had improved their ability to understand the general meaning of a series of sentences.

Figure 2.


Responses to the question, In what aspects (if any) of comprehending spoken language do you feel that you have improved? Please check all that apply.

Perceived value of training activities

Figure 3 displays participants’ ratings of the perceived value of the five different training activities. Overall, participants found the activities moderately to quite valuable, with a mean of approximately 5.0 on the 7-point Likert scale. Each of the five activities targeted a specific aspect of spoken language comprehension, and it was hoped that participants would find each of them valuable. As is evident from Figure 3, no significant differences were observed in the value ratings across activities, suggesting that participants perceived similar value in all of the activities. Finally, the ratings for each activity were nearly identical across the single- and multiple-talker conditions.

Figure 3.


Responses to Likert-scaled questions asking about the value of the five training activities. Error bars indicate standard error.

Correlations across ratings

One goal of the current study was to investigate the relationship between perceived benefit of the AT program and overall enjoyment. Table 3 provides Spearman’s rank order correlations (see Footnote 4) between our measures of perceived benefit (Q1, Q3, Q4, TOTALHELP) and participants' ratings of their overall enjoyment of the program (Q6). As indicated in the table, all measures of perceived benefit were moderately to highly correlated (all ps < .01). Thus, individuals who perceived the most improvement in understanding spoken language (Q1) also indicated that AT resulted in greater improvements in confidence when conversing with both strangers and close acquaintances (Q3, Q4) and also perceived gains in more areas of language perception (TOTALHELP). Of particular note, however, is that none of the measures of perceived benefit correlated with overall program enjoyment. These findings suggest that the perceived benefit of partaking in AT did not contribute to an individual's enjoyment of the program.

Table 3.

Spearman’s Rank Order Correlations Between Questions Assessing Perceived Benefit of AT (Q1, Q3, Q4, TOTALHELP) and Self-Reported Measures of Program Enjoyment (Q6).

            Q3        Q4        TOTALHELP   Q6
Q1          .670**    .603**    .455**      0.181
Q3                    .742**    .336**      0.183
Q4                              .595**      0.204
TOTALHELP                                   0.103

** p < .01. See Table 2 for the specific text of questions Q1, Q3, Q4, and Q6. TOTALHELP is the number of areas (out of 4) in which participants indicated improvement as a result of AT.
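For readers who want to reproduce this style of analysis on their own data, the sketch below computes Spearman rank-order correlations among a few questionnaire variables with pandas. The variable names follow the tables (Q1, Q3, Q6), but the ratings themselves are fabricated placeholders.

```python
# Minimal sketch of the Spearman rank-order correlations reported in Tables 3
# and 4, computed here with pandas on fabricated placeholder ratings.
import pandas as pd

ratings = pd.DataFrame({
    "Q1": [4, 5, 3, 6, 4, 5],   # perceived improvement in understanding
    "Q3": [5, 5, 2, 6, 4, 4],   # confidence with strangers/acquaintances
    "Q6": [6, 5, 7, 6, 5, 7],   # overall program enjoyment
})

# Spearman avoids assuming normally distributed Likert responses
print(ratings.corr(method="spearman"))
```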

We next investigated whether program enjoyment was related to the perceived value of one or more of the training activities and these results are displayed in Table 4. As indicated in the table, moderate to strong correlations were observed between the perceived values of all five training activities (all p's < .01), but none of these were significantly correlated with overall program enjoyment.

Table 4.

Spearman Rank Order Correlations Between Perceived Value of the Training Activities (ACT1-ACT5) and Program Enjoyment (Q6).

        ACT2      ACT3      ACT4      ACT5      Q6
ACT1    .608**    .418**    .597**    .519**    0.159
ACT2              .487**    .573**    .458**    0.117
ACT3                        .529**    .441**    0.204
ACT4                                  .527**    0.209
ACT5                                            0.102

** p < .01. ACT1-ACT5 are the five training activities (see Table 1).

In our last set of analyses examining possible correlates of program enjoyment, we assessed whether demographic (age, sex) or audiological (pure-tone average; PTA) factors were related to ratings of program enjoyment. Only age was significantly correlated with ratings of program enjoyment, with older adults indicating greater enjoyment than younger adults (r = .35, p < .01). To further examine the independent contribution of age to ratings of program enjoyment, we conducted a stepwise multiple regression in which ratings of perceived effectiveness (Q1, Q3, Q4, TOTALHELP) and perceived value of the training exercises were entered in the first step of the regression and age was entered in the second step. Results of the analysis indicate that, after controlling for perceived effectiveness and value of the training exercises, age accounted for approximately 20% of the variance in program enjoyment (β = .371; F(1, 91) = 11.6, p < .001).
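The following sketch illustrates the two-step regression logic described above, with perceived-benefit ratings entered first and age entered second, using statsmodels on simulated data. All values are fabricated for illustration; this is not the authors' analysis script.

```python
# Hedged sketch of a two-step (hierarchical) regression: perceived-benefit
# ratings are entered first, then age, and the change in R^2 attributable to
# age is examined. All data are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 93
q1 = rng.integers(1, 8, n)            # perceived improvement rating (1-7)
totalhelp = rng.integers(0, 5, n)     # number of areas of perceived improvement
age = rng.integers(18, 90, n)
enjoyment = 3 + 0.03 * age + rng.normal(0, 1, n)   # simulated Q6 ratings

step1 = sm.OLS(enjoyment, sm.add_constant(np.column_stack([q1, totalhelp]))).fit()
step2 = sm.OLS(enjoyment, sm.add_constant(np.column_stack([q1, totalhelp, age]))).fit()

print(f"R^2 change due to age: {step2.rsquared - step1.rsquared:.3f}")
```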

Open-ended responses

Eighty-five participants wrote responses to the question “What did you like best about the program?” As described by Graneheim and Lundman (2004), we performed a qualitative content analysis. Responses to the question were considered meaning units, which are statements that convey a single opinion and that stand by themselves (Baxter, 1991; Glaser, 1998). Participants’ meaning units were reviewed to identify descriptive categories, and then the remarks were sorted into the categories accordingly. A category includes meaning units that share a commonality (Krippendorff, 1980), and the identification of categories is considered to be the key feature of a content analysis. The first author reviewed and assigned the remarks to the identified categories, as did two of the other authors (Mauzé and Schroy) jointly. To assess inter-rater consistency, we determined the percentage of remarks that were assigned to the same category by the first author and by the other two authors jointly; the two sets of categorizations agreed for 89% of the remarks. The two sets of assignments were then reviewed, and any discrepancy between them was discussed among the three authors until agreement was reached. The majority of responses contained a single meaning unit. In the infrequent instances where two opinions were expressed, as with the response “The professionalism of the staff and the quality of the computer programs,” the statement was divided into two meaning units and recorded under two categories. Ninety-three comments were categorized. Three categories encompassed most of the responses (Table 5): Sense of helping one’s self/Development of listening skills/Challenging one’s self (25% of the comments); Contact with the clinicians and clinic (19%); and Program design or a particular training activity (34%). Six percent of the comments pertained to improved concentration for listening and 8% pertained to increased awareness of one’s listening capabilities and limitations.
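A minimal sketch of the inter-rater agreement calculation described above (the percentage of meaning units assigned to the same category by the two sets of coders) is given below; the category labels are hypothetical placeholders.

```python
# Minimal sketch of a percent-agreement check between two sets of coders.
# Category labels below are illustrative placeholders, not the study's codes.
def percent_agreement(coder_a, coder_b):
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

first_author = ["program design", "clinician contact", "self-help", "other"]
joint_coders = ["program design", "clinician contact", "awareness", "other"]
print(f"{percent_agreement(first_author, joint_coders):.0f}% agreement")  # 75%
```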

Table 5.

Categories, example comments, and percentage of corresponding comments elicited in response to the question What did you like best about the program?

What did you like best about the program?
Category: Sense of helping one’s self; Development of listening skills; Challenging one’s self (25% of responses)
Example comments: “Exposing me to sound groupings I hadn’t thought of before.” “Trying something that might help me.” “I liked that I could practice with real noise in the background.”

Category: Contact with the clinicians (19%)
Example comments: “The expertise and friendly manner of the people conducting the program.” “The friendly atmosphere.” “Pleasant atmosphere – nice people.”

Category: Program design or particular training activity (34%)
Example comments: “Being in control with the computer – to pick what I wanted…” “The…listening to paragraphs.” “Listening to the different pitches in the voices.”

Category: Learning to concentrate/pay attention (6%)
Example comments: “It makes you think, listen, and concentrate.” “The learning to concentrate more.” “Opportunity to practice attentiveness.”

Category: Awareness of listening capabilities and limitations (8%)
Example comments: “Learning about what I’m not hearing.” “Was surprised that I could do it.” “It helped me understand my problem.”

Category: Other (8%)
Example comments: “Just participating.” “Very thorough!” “It was a guessing program.”

Eighty-five participants wrote responses to the question “What did you like least about the program?” Of these 85, 12 participants indicated that there was “nothing” they did not like and that “none of it was bad”. After removing these 12 non-negative responses, categories were identified and a total of 79 meaning units were categorized. To assess inter-rater consistency, we again determined the percentage of meaning units that were assigned to the same category by the first author and by the other two authors jointly. The majority of meaning units fell into one of three categories (Table 6): Pre- and post-tests (27% of the comments); A particular exercise activity/Difficulty of listening in background noise (25%); and Tedious/Boring/Monotonous/Fatiguing (20%). Ten percent of the comments reflected unfavorably on the clinical setting.

Table 6.

Categories, example comments, and percentage of corresponding comments elicited in response to the question What did you like least about the program?

Category: Pre- and post-tests (27% of responses)
Example comments: “The length of some of the tests.” “The lipreading test.” “The tests could be tedious.”

Category: A particular exercise activity; difficulty of listening in background noise (25%)
Example comments: “Some of the pictures were difficult to figure out.” “Noise background while hearing words.” “Listening to the lecture type exercise – I couldn’t keep up with the speed.”

Category: Tedious/boring/monotonous/fatiguing (20%)
Example comments: “I did get a bit tedious after a while.” “Monotonous.” “It was the same exercises over and over that I had practiced before.”

Category: Training occurred at the clinic (10%)
Example comments: “Being committed to a certain time and day of the week.” “Traffic.” “Parking distance.”

Category: Frustration/no improvement noted (5%)
Example comments: “It was frustrating to sense no major improvement.” “It’s frustrating when all the words sound like the same words…I know the answer is ‘golf’ but all the choices sound like ‘golf’.”

Category: Other (13%)
Example comments: “The length.” “Waiting for the questions to load.” “Quite frankly, some of the questions and phrases/examples.”

DISCUSSION

From a broad clinical perspective, perhaps the most important contribution of the current investigation is to highlight the importance of obtaining subjective measures of benefit and enjoyment from participants engaging in AT. On average, participants indicated that they both benefited from and enjoyed participating in the AT program. Within the framework of health behavior maintenance proposed by Bandura (1977, 1982), positive outcomes on both of these measures, benefit and enjoyment, should predict relatively high compliance rates and, indeed, completion rates for the I Hear What You Mean program exceeded 90%.

Perceived benefit, program enjoyment and design of AT programs

A novel and somewhat unexpected finding from the current investigation was the absence of significant correlations between how much participants perceived they benefited from the program and how much they enjoyed it. The lack of a significant relationship between these two measures suggests that they may make independent contributions to an individual’s willingness to initiate and complete AT. Consistent with this proposal, nearly 60% of responses to the open-ended question “What did you like best about the program?” fell into either Program design or a particular training activity or Sense of helping one’s self/Development of listening skills/Challenging one’s self, which we interpret as reflecting enjoyment and benefit, respectively.

The independence of enjoyment and benefit ratings may also provide some insight into why age was the only significant predictor of program enjoyment after controlling for perceived benefit and value of the exercises. Specifically, research in the area of problem-solving training (Artistico et al., 2003) has demonstrated that training produces greater improvements in self-efficacy for older than for younger adults, but only for problems that are judged as ecologically relevant. In contrast, on problems that are judged as abstract or unrelated to “real-world” issues (e.g., the Tower of Hanoi problem) training has greater effects on the self-efficacy of young adults than older adults. Considered with evidence that self-efficacy is related to activity enjoyment (Sherwood et al., 2008), one reason that age may have been associated with overall program enjoyment is that auditory training is viewed as ecologically important by older adults, resulting in both increased self-efficacy and program enjoyment for this group.

The finding that subjective evaluations of enjoyment and benefit may make independent contributions to compliance rates in AT has important implications for the design of future training programs in that it provides two potential targets for program improvement. First, the results suggest that AT programs should be designed to provide participants with the strong impression that they are benefiting from the extensive time demands required by most training regimens. One aspect of the I Hear What You Mean program that likely contributed to participants' feelings of competence and benefit is that all training activities were designed to maintain performance at approximately 80% correct. Thus, although signal-to-babble ratios became poorer (more difficult) as participants progressed through the program, reflecting training-based improvements in speech perception, overall performance levels remained quite high. From the participants' perspective, performance stayed relatively high despite clear increases in the magnitude of the background noise, and this likely contributed to their moderate to high ratings of perceived benefit.

The second potential target for improving AT programs, overall program enjoyment, is more difficult to incorporate and only recently has been the focus of design considerations (cf. Sweetow and Sabes, 2010). The I Hear What You Mean program was certainly not immune from concerns regarding overall program enjoyment, as approximately 20% of the responses to What did you like least about the program? fell into the category of Tedious/Boring/Monotonous/Fatiguing. Although this result suggests the need for additional modifications to the program, participants nevertheless provided relatively positive responses to the question regarding overall program enjoyment (slightly higher than 5 out of 7 on the Likert scale). One aspect of the program that may have contributed to the positive evaluations is the focus on meaning-based training activities. For example, even Activity 2 with its focus on basic phoneme discrimination had a meaning-based component (e.g., participants not only had to discriminate MAT from PAT, they also had to select the picture that illustrated the correct semantic relationship based on the order of presentation). Although it remains unclear to what extent the meaning-based component contributed to overall program enjoyment, future research should focus on design considerations, such as use of a game format, as possible ways of increasing program enjoyment and, eventually, compliance.

One other consideration for developing AT programs that emerged from the open-ended responses concerns the recent trend (Burk and Humes, 2008; Sweetow and Sabes, 2010) toward home-based computerized training. The advantages of home- rather than clinic-based training include: 1) increased availability of AT for individuals with limited mobility; 2) increased flexibility with respect to when and how long training can take place; and 3) a significant reduction in travel time and expense. Reflecting these advantages, approximately 10% of the comments obtained from participants cited driving, parking, and scheduling as undesirable components of our training protocol. On the other hand, approximately 19% of participants indicated that contact with a clinician or the clinic was the component that they liked best about the program. One way of possibly resolving these conflicting program demands would be to develop home-based AT programs that take advantage of recent technology to incorporate extensive clinician contact. For example, daily contact with participants via Skype or other technologies would provide the convenience of a home-based program but still offer substantial amounts of clinician contact.

Potential Limitations

The results of the current study provide strong evidence for the benefit of obtaining self-report measures from individuals engaged in AT. Nevertheless, it is important to note some of the potential methodological limitations of the investigation, as any conclusions should be considered in light of such limitations. First, the present study was not comparative, and therefore it remains unclear to what extent the findings are program specific. This difficulty is certainly not unique to the present investigation and, as suggested by a recent systematic review of AT (Sweetow and Palmer, 2005), the use of different training materials, training regimens, training activities, and outcome measures makes it nearly impossible to make direct comparisons across AT programs. Our approach to this issue was to use training activities that are representative of those used in traditional AT, while still incorporating specific components that reflect our particular theoretical perspectives (e.g., a focus on meaning-based activities). For example, AT activities have often been classified as providing either analytic or synthetic training (e.g., Carhart, 1960; Tye-Murray, 2009) or, similarly, as providing practice in listening to phonemic distinctions or connected speech (e.g., Burk et al., 2006; Lansing and Davis, 1988; Moog et al., 1995; Stout and Windle, 1992). Training activities of both types were included in the I Hear What You Mean program. Activities 1, 2, and 3, for instance, were largely analytic in that they focused on basic phoneme discrimination. Activities 4 and 5, in contrast, were more synthetic in nature, as they presented participants with individual sentences (Activity 4) or a series of connected sentences (Activity 5). Based on these considerations, we believe that the current pattern of subjective responses is broadly representative of AT programs, but it also likely reflects program-specific components.

A second concern regarding generalizability is that the majority of participants in the current study were recruited from a database of individuals who had volunteered for participation in research studies. As such, they were likely more motivated to improve spoken communication and more committed to completing research studies than the “typical” patient who receives a hearing aid. These participant characteristics may partially account for the exceptionally high compliance rate obtained in the present study, but it is unclear how (or if) they affected subjective evaluations of the program. To our knowledge there has been little or no systematic research investigating the interaction between participant motivation and subjective evaluations in AT, but such research would seem critically important for improving the AT experience given the extensive time demands of most training programs.

Two other potential limitations that could influence interpretation of the subjective reports obtained from individuals in the current study are that participants received monetary compensation for their participation and they were aware that the investigators would be reading their responses. We consider providing monetary compensation a minor concern because the amounts were relatively small ($10/hour) and likely just covered the cost of transportation. The issue of participants knowing that investigators would be reading their responses is potentially more significant because study participants can often respond in ways they deem desirable to the experimenter if they know the purpose of the study. In the current design, there is no way to establish how such bias might have influenced responses, but it may be that the subjective reports are slightly more positively skewed than would have been the case had the questionnaires been completed anonymously.

Conclusions

In summary, we note at least two types of information that are uniquely available from subjective evaluations and that advocate strongly for their inclusion in assessments of AT. First, information regarding changes in self-efficacy (e.g., “the training program made me more confident in social situations”) provides critical supplements to performance-based measures of improvement because they often translate directly into changes in communicative behaviors, such as increased social interaction. Second, subjective evaluations of AT programs are likely to provide a strong index of program compliance and subsequent changes in behaviors that can improve spoken communication. As noted in the introduction, Bandura (1977, 1982) suggested that behavior change and maintenance are determined, in part, by an individual's expectations about the likelihood that behaviors will bring about a desired outcome and their judgments about their ability to engage in such behaviors. In the case of AT, subjective evaluations provide unique access to both types of expectations and therefore offer an invaluable opportunity to improve AT.

Acknowledgement

This work was supported by a grant from the National Institutes of Health (R01 DC008964-01A1).

Abbreviations

AT

Auditory Training

bPTA

better ear pure tone average

Footnotes

1

In this and all subsequent analyses of the responses made using the Likert scale, we also converted raw scores to individualized z-scores to adjust for possible differences in the interpretation of the 7-point Likert scale across participants. In no case was the pattern of results or statistical significance different for the raw and standardized scores. We report raw scores in the text, table and figures to facilitate comparisons with previous results.
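As an illustration of this conversion, the sketch below standardizes one participant's Likert responses against that participant's own mean and standard deviation; the ratings shown are illustrative, and the handling of a zero-variance respondent is an assumption.

```python
# Minimal sketch of an individualized z-score conversion: each participant's
# ratings are standardized using that participant's own mean and SD.
import numpy as np

def individualized_z(responses):
    """Standardize one participant's ratings against their own mean and SD."""
    r = np.asarray(responses, dtype=float)
    sd = r.std(ddof=1)
    # Assumption: a participant who gives identical ratings gets all zeros.
    return (r - r.mean()) / sd if sd > 0 else np.zeros_like(r)

print(individualized_z([5, 6, 4, 7]))   # one participant's ratings across questions
```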

2

For all analyses comparing responses to individual questions as a function of single- versus multiple-talker training, we first compared the distributions (using z-score units). In no case was there a significant difference in the distributions for those trained with single versus multiple talkers. Nevertheless, we performed analyses using both parametric (t-tests) and non-parametric (Mann-Whitney U) tests, and the pattern of results was identical for both types of tests. We report only the non-parametric statistics because this test does not require assumptions regarding normality of the distributions.

3

Post-hoc computations of achieved power for all analyses ranged from approximately 0.1 to 0.4 (based on an alpha level of .05). Although these power estimates would be classified as low to moderate (Cohen, 1988), they need to be considered in relation to the very small effect sizes (.05–.3). Considered together, the results of the power analysis suggest that the current study was adequately powered to detect moderate to strong effect sizes and that even small effect sizes could be detected in a number of instances. Therefore, we believe it unlikely that any of the null effects obtained in the current study were a result of inadequate power to detect differences between the two training conditions.

4

This is the non-parametric equivalent of the Pearson product-moment correlation and was used to avoid analyses that assume normal distributions.

REFERENCES

  1. Artistico D, Cervone D, Pezzuti L. Perceived self-efficacy and everyday problem solving among young and older adults. Psychol Aging. 2003;18:68–79. doi: 10.1037/0882-7974.18.1.68.
  2. Bandura A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol Rev. 1977;84:191–215. doi: 10.1037//0033-295x.84.2.191.
  3. Bandura A. The assessment and predictive generality of self-percepts of efficacy. J Behav Ther Exp Psychiatry. 1982;13:195–199. doi: 10.1016/0005-7916(82)90004-0.
  4. Baxter LA. Content analysis. In: Montgomery BM, Duck S, editors. Studying interpersonal interaction. New York, NY: The Guilford Press; 1991. pp. 239–254.
  5. Bode DL, Oyer HJ. Auditory training and speech discrimination. J Speech Hear Res. 1970;13:839–855. doi: 10.1044/jshr.1304.839.
  6. Burk MH, Humes LE. Effects of long-term training on aided speech-recognition performance in noise in older adults. J Speech Lang Hear Res. 2008;51:759–771. doi: 10.1044/1092-4388(2008/054).
  7. Burk MH, Humes LE, Amos NE, Strauser LE. Effect of training on word-recognition performance in noise for young normal-hearing and older hearing-impaired listeners. Ear Hear. 2006;27:263–278. doi: 10.1097/01.aud.0000215980.21158.a2.
  8. Carhart R. Auditory Training. New York, NY: Holt, Rinehart and Winston; 1960.
  9. Cohen J. Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
  10. Cox RM, Alexander GC. The Abbreviated Profile of Hearing Aid Benefit. Ear Hear. 1995;16:176–186. doi: 10.1097/00003446-199504000-00005.
  11. Cox RM, Gilmore C. Development of the Profile of Hearing Aid Performance (PHAP). J Speech Hear Res. 1990;33:343–357. doi: 10.1044/jshr.3302.343.
  12. Demorest ME, Erdman SA. Development of the Communication Profile for the Hearing Impaired. J Speech Hear Disord. 1987;52:129–143. doi: 10.1044/jshd.5202.129.
  13. Gatehouse S. A self-report outcome measure for the evaluation of hearing aid fittings and services. Health Bull (Edinb). 1999;57:424–436.
  14. Gil D, Iorio MC. Formal auditory training in adult hearing aid users. Clinics (Sao Paulo). 2010;65:165–174. doi: 10.1590/S1807-59322010000200008.
  15. Giolas TG, Owens E, Lamb SH, Schubert ED. Hearing Performance Inventory. J Speech Hear Disord. 1979;44:169–195. doi: 10.1044/jshd.4402.169.
  16. Glaser BG. Doing grounded theory: Issues and discussions. Mill Valley, CA: Sociology Press; 1998.
  17. Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. 2004;24:105–112. doi: 10.1016/j.nedt.2003.10.001.
  18. Kricos P, Holmes AE. Efficacy of audiologic rehabilitation for older adults. J Am Acad Audiol. 1996;7:219–229.
  19. Kricos P, Holmes AE, Doyle DA. Efficacy of a communication training program for hearing-impaired elderly adults. J Acad Rehabil Audiol. 1992;25:69–80.
  20. Krippendorff K. Content analysis: An introduction to its methodology. London, England: Sage Publications; 1980.
  21. Lansing CR, Davis JM. Early versus delayed speech perception training for adult cochlear implant users: Initial results. J Acad Rehabil Audiol. 1988;21:29–41.
  22. Moog JS, Biedenstein JJ, Davidson LS. Speech Perception Instructional Curriculum and Evaluation. St. Louis, MO: Central Institute for the Deaf; 1995.
  23. Morris CD, Bransford JD, Franks JJ. Levels of processing versus transfer appropriate processing. J Verb Learn Verb Beh. 1977;16:519–533.
  24. Newman CW, Weinstein BE. The Hearing Handicap Inventory for the Elderly as a measure of hearing aid benefit. Ear Hear. 1988;9:81–85. doi: 10.1097/00003446-198804000-00006.
  25. Sherwood NE, Martinson BC, Crain AL, Hayes MG, Pronk NP, O'Connor PJ. A new approach to physical activity maintenance: rationale, design, and baseline data from the Keep Active Minnesota Trial. BMC Geriatr. 2008;8:17. doi: 10.1186/1471-2318-8-17.
  26. Smaldino S, Smaldino J. The influence of aural rehabilitation and cognitive style discourse on the perception of hearing handicap. J Acad Rehabil Audiol. 1988;21:57–64.
  27. Smith SL, West RL. Hearing aid self-efficacy of new and experienced hearing aid users. Semin Hear. 2006;27:325–329.
  28. Stout G, Windle J. Developmental Approach to Successful Listening-Revised (DASL). Englewood, CO: Resource Point; 1992.
  29. Strecher VJ, DeVellis BM, Becker MH, Rosenstock IM. The role of self-efficacy in achieving health behavior change. Health Educ Q. 1986;13:73–92. doi: 10.1177/109019818601300108.
  30. Sweetow R, Palmer CV. Efficacy of individual auditory training in adults: A systematic review of the evidence. J Am Acad Audiol. 2005;16:494–504. doi: 10.3766/jaaa.16.7.9.
  31. Sweetow RW, Sabes JH. Auditory training and challenges associated with participation and compliance. J Am Acad Audiol. 2010;21:586–593. doi: 10.3766/jaaa.21.9.4.
  32. Tye-Murray N. Foundations of Aural Rehabilitation: Children, Adults, and Their Family Members. 3rd ed. Clifton Park, NY: Cengage Learning; 2009.
  33. Weinstein BE, Spitzer JB, Ventry IM. Test-retest reliability of the Hearing Handicap Inventory for the Elderly. Ear Hear. 1986;7:295–299. doi: 10.1097/00003446-198610000-00002.
