Abstract
Objectives:
Personalized music playlists are increasingly being utilized in aged care settings. This study aims to investigate how musical features influence the affective response to music of people with probable dementia.
Methods:
A factorial experiment (2 × 2 × 3) was conducted to investigate the influence of tempo (fast, slow), mode (major, minor), and lyrics (none, negative, positive). Ninety-nine people with probable dementia were randomly assigned to 3 conditions, listening to 3 personalized playlists. Galvanic skin response and activation of facial action units were measured.
Results:
Music with fast tempos increased arousal and reduced enjoyment. Music in minor keys increased activation of the depressor anguli oris, suggesting increased sadness. Lyrics had no significant effect on response.
Discussion:
The findings demonstrate that both tempo and mode influenced the response of the listener. As well as accounting for personal preferences, music for people with dementia should be carefully targeted toward the affective outcome desired.
Keywords: music, playlists, psychosocial interventions, dementia, older people, care homes
Introduction
The currently incurable nature of dementia makes improved quality of life a key treatment goal for people with dementia. However, pharmacological approaches to dealing with psychological distress in people with dementia are problematic due to the high rates of adverse reactions. 1
Music-based interventions such as dance, 2 singing, 3 and music therapy 4 are common nonpharmacological approaches to treating the behavioral and psychological symptoms of dementia (BPSD). Although there is convincing evidence that music therapy is effective in reducing BPSD, 5,6 in this article we focus on the use of prerecorded music. Personalized playlists (PP) involve the creation of song lists based on individual music preferences and are often facilitated by health-care workers without training in music therapy. 7 Such interventions are increasingly being utilized in health-care contexts in part due to the anecdotal evidence highlighting the power of music such as in the documentary Alive Inside. 8 However, recent evidence suggests that not all aged care facilities are using music in the most effective ways. 9
PP hold some advantage over other music interventions for people with dementia. Interventions where music is not individualized typically demonstrate mixed results. 10 Chang and colleagues, 11 for example, reported increases in behavioral symptoms when nature music was played to residents in an aged care facility. Similarly, Nair and colleagues 12 reported increased behavioral disturbances after playing Baroque music to people with dementia. They concluded that since most participants did not like Baroque music, individualized music selections would have been more effective.
Interventions using PP generally demonstrate more encouraging results. 13 Several studies that have used a music selection protocol developed by Gerdner, 14 for example, have shown that PP can reduce agitated behavior. 15-17 However, the effect of PP on other mood dimensions is less clear and the outcomes are not universally positive. 10 Garland and colleagues, 17 for example, reported widely divergent responses, with dramatic reductions in agitation in some people being “offset by neutral or negative outcomes for others” (p. 250). Other studies similarly report a worsening of mood or a neutral response in a small percentage of patients, 18 while other program evaluations report no significant improvements in mood or agitation at all. 19
One of the reasons for these mixed responses may be that playlists selected purely on the basis of preferences do not take into account the effect of features of the music itself. For example, the tempo (speed) of the music can influence arousal levels, with fast tempos typically increasing arousal and slow tempos reducing arousal. 20,21 Similarly, the mode of the music—whether it is in a major or minor key—can also have an effect on mood, with major keys generally resulting in greater mood improvements than minor keys. 20 In addition, studies have found that the lyrics of noninstrumental music also have an important influence on the affective response of the listener. 22,23
The extent to which these musical features influence the response of people with dementia is not well understood. Although the lyrics of a song may influence the mood of listeners without cognitive impairment, we do not yet understand whether the effect is similar in people with reduced verbal comprehension. With reduced capacity for linguistic decoding, the mere presence of a human voice may have a positive effect on mood regardless of the content of the lyrics.
Similarly, the association between minor modes and sadness is learned through frequent exposure to music in which minor modes are paired with other negative cues such as in films. There is also some evidence that features of minor keys mimic those found in speech conveying sadness. 24 Children acquire this understanding gradually, 25 but the degree to which such knowledge is retained in people with dementia is unknown.
The aim of the current study is thus to investigate the effects of musical features on the mood and arousal levels of people with dementia.
Hypotheses
Hypothesis 1: Arousal (galvanic skin response [GSR]) increases compared to baseline when listening to music in fast tempos.
Hypothesis 2: Greater increases in arousal (GSR) compared to baseline when listening to music in fast tempos are associated with low self-reported enjoyment of music listening.
Hypothesis 3: Background arousal (GSR number of peaks) decreases compared to baseline when listening to music in slow tempos.
Hypothesis 4: Activation of lip corner depressor (indicating sad facial expression) is greater compared to baseline when listening to music in minor keys than when listening to music in major keys.
Hypothesis 5: Activation of lip corner depressor is greater compared to baseline when listening to music with no or negative lyrics than when listening to music with positive lyrics.
Methods
Study Design
The study was a randomized factorial experiment with a 2 × 2 × 3 design, which allowed us to investigate the independent effects of 3 factors (independent variables): tempo (fast, slow), mode (major, minor), and lyrics (none, negative, positive) on mood (measured by lip corner depressor activation) and arousal (measured by GSR) in people with dementia.
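As a concrete illustration of the design (the study itself used SPSS; this sketch and its variable names are ours, not from the original protocol), the 12 cells of the 2 × 2 × 3 factorial can be enumerated as the Cartesian product of the three factor levels:

```python
from itertools import product

# Factor levels of the 2 x 2 x 3 design described above.
TEMPO = ("fast", "slow")
MODE = ("major", "minor")
LYRICS = ("none", "negative", "positive")

# The Cartesian product yields every experimental cell exactly once.
conditions = [
    {"tempo": t, "mode": m, "lyrics": l}
    for t, m, l in product(TEMPO, MODE, LYRICS)
]

print(len(conditions))  # 12
```

Each participant was then assigned to a subset of these cells, as described under Procedures.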
Participants
Ethics approval was first obtained from the Human Ethics Committee of Western Sydney University (H11427). We recruited 117 participants from 6 nursing homes in NSW, Australia. Of these, 113 met the eligibility criteria: a score of <25 on the Standardized Mini-Mental State Examination (SMMSE), 26 which indicates the presence of mild-to-severe cognitive impairment, 27 and no hearing disorder that could prevent listening to music. Of these 113, 1 participant withdrew and 13 were unable to complete the study due to illness. A total of 99 people took part including 83 females and 34 males, aged 63 to 99 years (mean [m] = 84, standard deviation [SD] = 8.1).
Dementia diagnosis could only be obtained for 51 participants. Of those, 41 had a nonspecific diagnosis of dementia, 3 had been diagnosed with Alzheimer’s dementia, 3 with vascular dementia, 2 with alcohol-induced dementia, 1 with Korsakoff disease, and 1 with mild cognitive impairment. Scores on the SMMSE ranged from 0 to 25 with the average score being 7.7 (SD = 8.5), indicating severe levels of cognitive impairment. Only 20 participants played a musical instrument or considered themselves a singer (m = 42 years, SD = 27.7).
Procedures
Facilities were randomly selected from a list of nursing homes in NSW, Australia, and approached by phone and e-mail. Participants from 3 facilities were recruited from this list, and from another 3 facilities by word of mouth. After referral to the study by staff and obtaining consent from individuals and/or legal guardians, participants or a close family member completed a prescreening questionnaire with the assistance of the researchers. From this prescreener, musical preferences were assessed and exclusion criteria were checked (Figure 1).
Figure 1.
CONSORT flowchart of procedure.
One to 2 weeks after completing the prescreener, eligible participants took part in an experiment taking approximately 30 minutes at their facility. Participants were randomly assigned to 3 of 12 experimental conditions to distribute serial order effects, and listened to 3 playlists of 8 to 9 minutes each with 2 to 3 minutes between conditions and a 2-minute baseline period (7 participants listened to 1-2 playlists only). The experiment was mostly conducted in the participant’s own room or a private meeting room. Family members were invited to observe, but only 15 participants had a family member observing during the experiment, and interactions between those participants and the observing family member were minimal.
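The random assignment of each participant to 3 of the 12 conditions could be sketched as follows. This is a hypothetical reconstruction: the paper specifies only that assignment was randomized to distribute serial order effects, and this sketch does not enforce the additional constraint (described under Musical stimuli) that "sad" conditions were never placed last.

```python
import random

def assign_conditions(seed, n_conditions=12, n_assigned=3):
    """Pick 3 distinct conditions out of 12 for one participant, in a
    random order, so that serial-order effects are distributed across
    the sample. Simplified illustration only."""
    rng = random.Random(seed)  # per-participant seed for reproducibility
    return rng.sample(range(n_conditions), n_assigned)
```

Calling `assign_conditions` once per participant yields 3 distinct condition indices per person, with order varying across the sample.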
Measures and Materials
Prescreen measures
The prescreener consisted of questions from Gerdner’s Assessment of Personal Music Preference (the patient or family versions) 14 and the Standardized Mini-Mental State Examination (SMMSE) as an assessment of cognitive functioning. 26
Musical stimuli
A database of music of many different genres from the 1930s to 1970s was created with songs categorized according to the 12 conditions (Table 1). This database served as the basis from which PP were created according to individual preferences. For example, for participants who preferred country music, a list of well-known country songs was categorized by decade as well as according to tempo, mode, and lyrics. This enabled PPs to be created based on both the experimental condition and participant preferences.
Table 1.
Experimental Conditions.
| | Lyrics: none | Lyrics: positive | Lyrics: negative |
|---|---|---|---|
| Major key, fast tempo | | | |
| Major key, slow tempo | | | |
| Minor key, fast tempo | | | |
| Minor key, slow tempo | | | |
Two coders independently allocated songs to the categories based on tempo, mode, and lyrics. Lyrics were coded as positive or negative based on subject matter and the overall message or attitude in the song. For example, lyrics about heartbreak or death were coded as negative, while songs conveying a hopeful attitude about love or that were about positive subjects such as happiness or fun were rated as positive. Songs with ambiguous content or containing both positive and negative content were eliminated. Tempos were determined using auditory comparison to an online metronome (http://a.bestmetronome.com). Songs with a tempo of <80 beats/min were allocated to the slow conditions, while songs with >120 beats/min were allocated to the fast conditions. 28 Two coders, both trained musicians, coded songs as major or minor using auditory cues and scrutinizing printed musical scores. Interrater reliability scores were 0.66 for tempo, 0.80 for lyrics, and 0.74 for mode (Cohen’s κ), indicating substantial agreement. Songs where interrater agreement was not reached were excluded.
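The coding rules above can be made concrete in a short sketch. The BPM cutoffs are those stated in the text; the function names and the unweighted form of Cohen's κ are our assumptions for illustration, not part of the study's materials.

```python
def tempo_category(bpm):
    """Apply the study's cutoffs: <80 BPM = slow, >120 BPM = fast.
    Intermediate tempos fall into neither condition (returns None)."""
    if bpm < 80:
        return "slow"
    if bpm > 120:
        return "fast"
    return None

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa for two raters coding the same items."""
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = sum((a.count(l) / n) * (b.count(l) / n)        # agreement expected
             for l in labels)                           # by chance
    return (po - pe) / (1 - pe)
```

By the conventional Landis–Koch benchmarks, κ values between 0.61 and 0.80 (as obtained here for tempo, lyrics, and mode) indicate substantial agreement.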
Three PP were created for each participant based on their reported preferences but fitting the parameters of their allocated conditions. For example, if the individual reported liking Frank Sinatra and was randomly allocated to the major, fast tempo, with positive lyrics condition, their playlist would include a song by Frank Sinatra with the musical features required by their allocated condition. “Sad” music conditions (ie, music in minor keys and/or with negative lyrics) were not used for the final condition to avoid ending the experiment in a negative mood.
Music was played on an Apple iPad 2 through a set of Sennheiser HDR 160 wireless headphones. Four of the participants preferred to listen without headphones and heard the music directly from the iPad speakers.
Experimental measures
Facial expressions were filmed continuously by a Logitech c270 webcam positioned approximately 1 meter in front of the face of participants in good lighting conditions, and connected via USB port to an Acer Aspire S7-392 laptop.
Skin conductance was also measured continuously using a custom-built wireless transmitter, which measured both GSR and acceleration on 3 axes (to enable measurement of covarying movement distortions), and transmitted it wirelessly to a receiver that converted the signals to analog. The sampling rate of the transmitter was 73 Hz with a range between 0.2 and 50 μS. The GSR transmitter was attached to the wrist of the nondominant hand using Velcro straps. Two Kendall Meditrace 100 pregelled disposable electrodes connected to the transmitter were attached to the second phalanges of the index and ring fingers. The GSR receiver was connected to an ADInstruments PowerLab 16/35 amplifier. Music, physiological data, and video were synchronized using LabChart 8. Between playlists, participants were asked to rate how familiar the music they had just heard was on a scale of 1 to 5. Participants also indicated enjoyment on an adaptation of the Wong-Baker FACES scale. 29 Observational data of behavior and verbal responses were also collected. 30
Analysis
Data smoothing was conducted on the GSR data in LabChart 8 using a low-pass filter of 3 Hz and a high-pass filter of 0.05 Hz. Epochs containing artefacts were excluded from data analysis. The average number of peaks and average peak amplitude (with a threshold of 0.5 μS) were obtained from a 30-second period within the middle of each baseline period, and from a 7-minute window within the middle of each listening block. The number of peaks can indicate ongoing responses to a stimulus and overall levels of background arousal, while peak amplitude is more closely related to the intensity of arousal responses in that a small number of high-amplitude peaks can occur rapidly in an initial response to a stimulus and then decline over time, often known as “task-related activation.” 31,32 The same smoothing procedures were applied to the data from the accelerometer x-axis (side-to-side motion).
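The two GSR summary measures were extracted in LabChart 8; as a rough, pure-Python sketch of what they amount to (assuming an already-filtered conductance trace, and using a naive local-maximum definition of a peak that LabChart does not necessarily share):

```python
def gsr_peak_stats(signal, threshold=0.5):
    """Return (number_of_peaks, average_peak_amplitude) for a filtered
    GSR trace. A peak is a local maximum whose value exceeds
    `threshold` (here 0.5, in microsiemens, per the study's settings)."""
    peaks = [
        signal[i]
        for i in range(1, len(signal) - 1)
        if signal[i - 1] < signal[i] >= signal[i + 1]
        and signal[i] >= threshold
    ]
    if not peaks:
        return 0, 0.0
    return len(peaks), sum(peaks) / len(peaks)
```

Number of peaks tracks ongoing background (tonic) arousal, while average peak amplitude tracks the intensity of event-related (phasic) responses, which is why the two hypotheses about fast and slow tempos use different indicators.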
Facial expressions were analyzed using Noldus FaceReader 6 with the Action Unit (AU) Module. FaceReader uses a model-based method to analyze the varying location of 500 key points in the face and generates output in relation to the activation levels of 20 AUs of the Facial Action Coding System (FACS). 33 Studies have demonstrated that automated coding can be more reliable than human coding. 34,35 Individual data files were first calibrated using still shots of neutral facial expressions for each participant in order to account for personal variation in facial expression. Average scores over 30-second baseline periods and 7-minute test block conditions were then obtained for AUs. Subsequent data analyses were conducted using SPSS 22.0.
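FaceReader's calibration procedure is proprietary, but the windowed-averaging step that feeds the baseline-versus-test-block analyses can be sketched in a few lines. The function names and the index-pair representation of windows are our assumptions for illustration:

```python
def window_mean(trace, start, end):
    """Mean activation of one action unit over the [start, end) sample
    window of a FaceReader-style output trace."""
    window = trace[start:end]
    return sum(window) / len(window)

def baseline_change(trace, baseline, block):
    """Change score: test-block mean minus baseline mean, the quantity
    compared across conditions in the AU 15 analyses. `baseline` and
    `block` are (start, end) index pairs."""
    return window_mean(trace, *block) - window_mean(trace, *baseline)
```

A positive change score for AU 15 (lip corner depressor) would correspond to the increases in sad facial expression reported under Results.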
Results
Preliminary Analyses
Overall the music selected was relatively familiar to participants (m = 3.5, range 1-5) and enjoyable (m = 4.0, range 1-5). Behavioral observation ratings showed responses ranging from no sign of recognition but some interest while listening, to a weak sense of familiarity expressed via verbal responses, humming, or facial expressions.
Effect of Tempo
To investigate hypotheses 1 and 2, a repeated-measures analysis of variance (ANOVA) was conducted exclusively on fast tempo conditions. It was hypothesized that fast tempos would result in increased intensity of event-related responses in GSR and that GSR average peak amplitude scores therefore provided the best indicator of arousal for this analysis. Time (baseline, test block: GSR average peak amplitude) was the within-participant factor. Scores on self-reported enjoyment of the music were split at the median to form enjoyment groups (high and low) and entered as the between-participant variable. Change over time in accelerometer (movement), and familiarity were entered as covariates. A significant main effect of time was found on average peak amplitude, F (1, 89) = 9.1, P = .003, Wilks’ λ = .9, ηp2 = .1. There was also a significant interaction between time and enjoyment groups, F (1, 89) = 5.1, P = .03, Wilks’ λ = .9, ηp2 = .1, with the low enjoyment group (baseline m = 0.04, SD = 0.04, test block m = 0.1, SD = 0.3) experiencing a greater increase in GSR average peak amplitude than the high enjoyment group (baseline m = 0.1, SD = 0.1, test block m = 0.1, SD = 0.1). There was no significant interaction with movement, F (1, 89) = 1.0, P = .329, Wilks’ λ = 1.0, ηp2 = .01, or familiarity, F (1, 89) = 0.1, P = .764, Wilks’ λ = 1.0, ηp2 = .001. These results show significant increases in arousal responses when listening to music with fast tempos, with greater increases associated with low self-reported enjoyment regardless of familiarity, in support of the first 2 hypotheses.
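The median split into high and low enjoyment groups (performed in SPSS in the study) amounts to the following; the tie-breaking rule for scores falling exactly on the median is our arbitrary choice, since the paper does not specify one:

```python
from statistics import median

def median_split(scores):
    """Split participants into 'low' and 'high' enjoyment groups at the
    median of self-reported enjoyment. `scores` maps participant id to
    enjoyment rating; scores at or below the median go to 'low' (an
    arbitrary tie-breaking choice not stated in the paper)."""
    m = median(scores.values())
    return {pid: ("low" if s <= m else "high") for pid, s in scores.items()}
```

The resulting group label was entered as the between-participant variable in the repeated-measures ANOVAs.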
We then analyzed slow tempo conditions and conducted a repeated-measures ANOVA with time (baseline and test block on GSR number of peaks) as the within-participant variable and enjoyment group (high and low) as the between-participant variable. The GSR number of peaks was selected for this analysis because of the potential for slow music to decrease background arousal levels over time. 36 Change over time in accelerometer data (movement) was added as a covariate. A significant main effect of time on GSR number of peaks was found, F (1, 93) = 4.0, P = .04, Wilks’ λ = 1.0, ηp2 = .1, with average number of peaks increasing from m = 17.3 at baseline to 19.4 during the test block. No significant interaction was found with enjoyment, F (1, 93) = 0.01, P = .93, ηp2 < .01, or with movement, F (1, 93) = 0.2, P = .70, Wilks’ λ = 1.0, ηp2 < .01, or familiarity, F (1, 93) = 0.3, P = .538, Wilks’ λ = 1.0, ηp2 = .004. These results did not support hypothesis 3 since number of peaks increased, suggesting some rise in background arousal levels when listening to music with slow tempos.
Effect of Mode
To examine the effect of mode on mood, we looked at the effect of mode on the FaceReader output for AU 15. AU 15 is the depressor anguli oris, or the lip corner depressor, which causes the lowering of the corners of the mouth typical of sad facial expressions. 37 A repeated-measures ANOVA was performed with time (baseline and test block: AU 15) as the within-participant variable, mode (major and minor) as the between-participant variable, and familiarity as a covariate. A significant main effect of time was found, F (1, 209) = 9.6, P = .002, Wilks’ λ = 1.0, ηp2 = .04, with activation of the lip corner depressor increasing from baseline (m = 0.52, SD = 0.72) to test block (m = 0.64, SD = 0.74). A significant interaction was also found between time and mode, F (1, 209) = 4.0, P = .027, Wilks’ λ = 1.0, ηp2 = .02 (Figure 2), with music in minor keys resulting in greater increases in activation of AU 15 than music in major keys (major: baseline m = 0.6, SD = 0.8, test block m = 0.6, SD = 0.7; minor: baseline m = 0.5, SD = 0.7, test block m = 0.7, SD = 0.8). No significant interaction was found with familiarity, F (1, 209) = 0.4, P = .515, Wilks’ λ = 1.0, ηp2 = .002. These results supported hypothesis 4.
Figure 2.
Major and minor mode activation of action unit 15 from baseline to test block.
Effect of Lyrics
We tested the effects of positive versus negative lyrics on mood by conducting a repeated-measures ANOVA with time (baseline, test block: AU 15) as the within-participant variable and lyrics (positive, negative) as the between-participant variable. There was no significant main effect of time, F (1, 170) = 2.4, P = .127, Wilks’ λ = 1.0, ηp2 = .01. We then compared the use of lyrics whether positive or negative with no lyrics by conducting a repeated-measures ANOVA with time (baseline, test block: AU 15) as the within-participant variable and lyrics (yes, no) as the between-participant variable. There was a significant main effect of time, F (1, 252) = 9.6, P = .002, Wilks’ λ = 1.0, ηp2 = .04, but no significant interaction with lyrics, F (1, 252) = 2.0, P = .16, Wilks’ λ = 1.0, ηp2 = .01. Thus, hypothesis 5 was not supported, since lyrics had no interactive effect with time in either analysis.
Discussion
We hypothesized that fast tempos would cause increases in event-related arousal that would be associated with low self-reported enjoyment of the music, while slow tempos would result in a lowering of background arousal. In relation to fast tempos, our hypotheses were supported, with results showing that fast tempos caused increases in GSR average peak amplitude with greater increases being experienced by those reporting low levels of enjoyment. Contrary to expectations, however, music in slow tempos caused an increase in arousal.
The differing types of GSR indicators used for these analyses, however, suggest some differences in the arousal responses caused by fast and slow tempos. Average peak amplitude reflects phasic responses, indicating relatively intense event-related activation. Number of peaks, on the other hand, can indicate more ongoing (although less intense) spikes in arousal and an overall increase in background or tonic arousal. It seems that music with slow tempos causes a milder and more sustained increase in arousal that may be pleasant or unpleasant, while music with fast tempos can cause more intense peaks in activation that can be unpleasant.
Tempos in our fast tempo conditions were well above average resting heart rates, and it may be that these tempos caused overstimulation, something to which people with dementia are particularly susceptible. 38 Slower music, on the other hand, may be capable of engaging interest without causing overstimulation, resulting in a more sustained increase in background arousal without the same intensity as fast tempos. The results imply that when selecting music for people with dementia, slow-to-moderate tempos could be effective in increasing alertness and engagement in people who are withdrawn or apathetic. Faster tempos, on the other hand, may cause individuals to become overstimulated, which could be of concern in individuals prone to agitation or anxiety.
It was also hypothesized that the mode of the music would have an influence on the mood of participants. This hypothesis was supported by our analyses, with the data showing that people who listened to music in minor keys displayed facial expressions indicating greater sadness than those who listened to major keys. This is an interesting finding given that an understanding of mode as found in Western music is an acquired (although usually implicit) knowledge of cultural musical conventions. Comprehension of meaning that is communicated extralinguistically in music appears to be preserved even at relatively severe levels of cognitive impairment in people with dementia.
We had further hypothesized that positive lyrics would have more desirable effects on mood than negative lyrics and that the mere presence of the human voice in the music would have more positive effects than music with no lyrics. Neither of these hypotheses were supported. These results contrast with findings in populations of people without dementia, for whom lyrics are often the primary determinant of affective response. 22,39 While comprehension of nonlinguistic meaning in music appears to remain somewhat preserved, the linguistic content has relatively little impact compared to populations without verbal impairment.
Interestingly, these results were not influenced by the familiarity of the music. Previous research has suggested that liking for music increases as it becomes more familiar. 40 Many music interventions, therefore, focus on music from an era from which the individual would be familiar, or on “favorite” music. 9 However, the current study suggests that regardless of whether or not the music is familiar, particular musical features have specific effects on mood and arousal. This, therefore, indicates that the selection of familiar or favorite music without regard for musical features may not be sufficient to achieve a particular affective state in a listener with dementia.
The current study is limited by the broad range of participants in the sample. Participants included people with mild, moderate, and severe levels of cognitive impairment, as well as people with varying types of dementia or no specific dementia diagnosis. Since the degree of cognitive impairment as well as the type of dementia could have an influence on responses to music, future studies should look more closely at specific dementia subtypes with a narrower range of severity. For example, people with Alzheimer’s type dementia may be more susceptible to negative stimuli than people with frontotemporal dementia, 41 a fact that could cause differing responses to music.
Despite these limitations, these findings have important implications for the use of PP interventions in people with dementia. They suggest a reason for the inconsistent effects of interventions using solely “favorite” music. Although personally significant music may be more effective than the bulk administration of researcher-selected music, our results indicate that the features of the music also need to be taken into consideration. Studies that have combined PP with music targeted toward particular affective symptoms have demonstrated positive effects of music on mood and behavior 6,42 and the results of the current study suggest further guidelines that can be implemented in future interventions.
Footnotes
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by an NHMRC-ARC Dementia Research Development Fellowship to the first author.
ORCID iD: Sandra Garrido
https://orcid.org/0000-0001-8980-0044
References
- 1. Douglas S, James I, Ballard C. Non-pharmacological interventions in dementia. Adv Psychol Treat. 2004;10(3):171–179. [Google Scholar]
- 2. Guzman-Garcia A, Hughes JC, James IA, Rochester L. Dancing as a psychosocial intervention in care homes: a systematic review of the literature. Int J Geriatr Psych. 2013;28(9):914–924. [DOI] [PubMed] [Google Scholar]
- 3. Särkämö T, Laitinen S, Numminen A, Kurki M, Johnson JK, Rantanen P. Pattern of emotional benefits induced by regular singing and music listening in dementia. J Am Geriatr Soc. 2016;64(2):439–440. [DOI] [PubMed] [Google Scholar]
- 4. McDermott O, Crellin N, Ridder HM, Orrell M. Music therapy in dementia: a narrative synthesis systematic review. Int J Geriatr Psych. 2012;28(8):781–794. [DOI] [PubMed] [Google Scholar]
- 5. Raglio A, Bellandi D, Baiardi P, et al. Effect of active music therapy and individualized listening to music on dementia: a multicenter randomized controlled trial. J Am Geriatr Soc. 2015;63(8):1534–1539. [DOI] [PubMed] [Google Scholar]
- 6. Sakamoto M, Ando H, Tsutou A. Comparing the effects of different individualized music interventions for elderly individuals with severe dementia. Int Psychogeriatr. 2013;25(5):775–784. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7. Sung HC, Chang AM, Abbey J. The effects of preferred music on agitation of older people with dementia in Taiwan. Int J Geriatr Psychiatry. 2006;21(10):999–1000. [DOI] [PubMed] [Google Scholar]
- 8. Rossato-Bennett M. Alive Inside: A Story of Music and Memory. USA: Projector Media; 2014. [Google Scholar]
- 9. Garrido S, Dunne L, Perz J, Chang E, Stevens C. The use of music in aged care facilities: a mixed methods study [published online ahead of print February 1, 2018]. J Health Psychol. 2018. [DOI] [PubMed] [Google Scholar]
- 10. Garrido S, Dunne L, Chang E, Perz J, Stevens C, Haertsch M. The use of music playlists for people with dementia: a critical synthesis. J Alzheimers Dis. 2017;60(3):1129–1142. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11. Chang FY, Huang HC, Lin KC, Lin LC. The effect of a music programme during lunchtime on the problem behaviour of the older residents with dementia at an institution in Taiwan. J Clin Nurs. 2010;19(7-8):939–948. [DOI] [PubMed] [Google Scholar]
- 12. Nair BK, Heim C, Krishnan C, D’Este C, Marley J, Attia J. The effect of Baroque music on behavioural disturbances in patients with dementia. Australas J Ageing. 2011;30(1):11–15. [DOI] [PubMed] [Google Scholar]
- 13. Gerdner LA. Effects of individualized vs. classical “relaxation” music on the frequency of agitation in elderly persons with Alzheimer’s disease and related disorders. Int Psychogeriatr. 2000;12(1):49–65. [DOI] [PubMed] [Google Scholar]
- 14. Gerdner LA. Evidence-Based Protocol: Individualized Music Intervention. Iowa City, IA: University of Iowa; 2000. [Google Scholar]
- 15. Hicks-Moore SL, Robinson BA. Favorite music and hand massage: two interventions to decrease agitation in residents with dementia. Dementia. 2008;7(1):95–108. [Google Scholar]
- 16. Park H, Specht J. Effect of individualized music on agitation in individuals with dementia who live at home. J Geront Nurs. 2009;35(8):47–55. [DOI] [PubMed] [Google Scholar]
- 17. Garland K, Beer E, Eppingstall B, O’Connor DW. A comparison of two treatments of agitated behaviour in nursing home residents with dementia: simulated family presence and preferred music. Am J Ger Psychiat. 2007;15(6):514–521. [DOI] [PubMed] [Google Scholar]
- 18. Martin PK, Schroeder RW, Smith JM, Jones B. The roth project – music and memory: surveying the observed benefit of personalized music in individuals with diagnosed or suspected dementia. Alzheimer’s Dementia. 2016;12(7 suppl):P988. [Google Scholar]
- 19. Kwak J, Brondino MJ, O’Connell Valuch K, Maeda H. Evaluation of the Music and Memory Program Among Nursing Home Residents With Dementia: Final Report to the Wisconsin Department of Health Services. Milwaukee, WI: University of Wisconsin-Milwaukee; 2016. [Google Scholar]
- 20. Husain G, Thompson WF, Schellenberg EG. Effects of musical tempo and mode on arousal, mood and spatial abilities. Music Perception. 2002;20(2):151–171. [Google Scholar]
- 21. Dillman Carpentier FR, Potter RF. Effects of music on physiological arousal: explorations into tempo and genre. Media Psychol. 2007;10(3):339–363. [Google Scholar]
- 22. Garrido S. Why Are We Attracted to Sad Music?. Cham, Switzerland: Palgrave Macmillan; 2017. [Google Scholar]
- 23. Brattico E, Alluri V, Bogert B, et al. A functional MRI study of happy and sad emotions in music with and without lyrics. Front Psychol. 2011;2:308. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Curtis ME, Bharucha JJ. The minor third communicates sadness in speech, mirroring its use in music. Emotion. 2010;10(3):335–348. [DOI] [PubMed] [Google Scholar]
- 25. Gregory AH, Worrall L, Sarge A. The development of emotional responses to music in young children. Motiv Emotion. 1996;20(4):341–348. [Google Scholar]
- 26. Molloy DW, Alemayehu E, Roberts RO. Reliability of a Standardized Mini-Mental State Examination compared with the traditional Mini-Mental State Examination. Am J Psychiat. 1991;148(1):102–105. [DOI] [PubMed] [Google Scholar]
- 27. Perneczky R, Wagenpfeil S, Komossa K, Grimmer T, Diehl J, Jurz A. Mapping scores onto stages: Mini-Mental State Examination and Clinical Dementia Rating. Am J Geriat Psychiat. 2006;14(2):139–144. [DOI] [PubMed] [Google Scholar]
- 28. Brodsky W. The effects of music tempo on simulated driving performance and vehicular control. Transport Res. 2001;Part F(4):219–241. [Google Scholar]
- 29. Wong-Baker FACES Foundation. Wong-Baker FACES Pain Rating Scale. 2016. [Google Scholar]
- 30. Samson S, Dellacherie D, Platel H. Emotional power of music in patients with memory disorders: clinical implications of cognitive neuroscience. Ann N Y Acad Sci. 2009;1169(1):245–255. [DOI] [PubMed] [Google Scholar]
- 31. Braithwaite JJ, Watson DG, Jones R, Rowe M. A Guide for Analysing Electrodermal Activity (EDA) & Skin Conductance Responses (SCRs) for Psychological Experiments. Birmingham, England: Behavioural Brain Sciences Centre, University of Birmingham; 2015. [Google Scholar]
- 32. VaezMousavi SM, Barry RJ, Rushby JA, Clarke AR. Evidence for differentiation of arousal and activation in normal adults. Acta Neurobiol Exp. 2007;67(2):179–186. [DOI] [PubMed] [Google Scholar]
- 33. Ekman P, Friesen WV. A new pan-cultural facial expression of emotion. Motiv Emotion. 1986;10(2):159–168. [Google Scholar]
- 34. Lewinski P. Automated facial coding software outperforms people in recognizing neutral faces as neutral from standardized datasets. Front Psychol. 2015;6:1386. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35. Lewinski P, den Uyl TM, Butler C. Automated facial coding: validation of basic emotions and FACS AUs in FaceReader. J Neurosci Psychol E. 2014;7(4):227–236. [Google Scholar]
- 36. Khalfa S, Peretz I, Blondin J-P, Manon R. Event-related skin conductance responses to musical emotions in humans. Neurosci Lett. 2002;328(2):145–149. [DOI] [PubMed] [Google Scholar]
- 37. Duchenne G-B. The Mechanism of Human Facial Expression. Cambridge, England: Cambridge University Press; 1990. [Google Scholar]
- 38. Bakker R. Sensory loss, dementia, and environments. Generations. 2003;1:46–51. [Google Scholar]
- 39. Brattico E, Jacobsen T, De Baene W, Glerean E, Tervaniemi M. Cognitive vs. affective listening modes and judgments of music – an ERP study. Biol Psychol. 2010;85(3):393–409. [DOI] [PubMed] [Google Scholar]
- 40. Madison G, Schiölde G. Repeated listening increases the liking for music regardless of its complexity: implications for the appreciation and aesthetics of music. Front Neurosci. 2017;11:147. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41. Kumfor F, Irish M, Hodges JR, Piguet O. The orbitofrontal cortex is involved in emotional enhancement of memory: evidence from the dementias. Brain. 2013;136(10):2292–3003. [DOI] [PubMed] [Google Scholar]
- 42. Guetin S, Portet F, Picot MC, et al. Effect of music therapy on anxiety and depression in patients with Alzheimer’s type dementia: randomised, controlled study. Dement Geriatr Cogn Disord. 2009;28(1):36–46. [DOI] [PubMed] [Google Scholar]