Listeners' Perception of Intended Emotions in Music
International Journal of Contents. 2013. Dec, 9(4): 78-85
Copyright © 2013, The Korea Contents Association
  • Received : June 11, 2013
  • Accepted : November 26, 2013
  • Published : December 28, 2013
Hyun Ju Chong
Eunju Jeong
Soo Ji Kim

Abstract
Music functions as a catalyst for various emotional experiences. Among the numerous genres of music, film music has been reported to induce strong emotional responses. However, the effectiveness of film music in evoking different types of emotions, and which musical elements contribute to listeners' perception of the intended emotion, have rarely been investigated. The purpose of this study was to examine the congruence between the intended emotion and the perceived emotion of listeners in film music listening and to identify musical characteristics of film music that correspond with specific types of emotion. Additionally, the study aimed to investigate possible relationships between participants' identification responses and personal musical experience. A total of 147 college students listened to twelve 15-second music excerpts and identified the emotion they perceived during music listening. The results showed a high degree of congruence between the intended emotion in film music and the participants' perceived emotion. The presence of tonality and the modality of the excerpts were found to play an important role in listeners' perception of the intended emotion. The findings suggest that identification of perceived emotion in film music excerpts was congruent regardless of individual differences. Specific music components that led to high congruence are further discussed.
1. INTRODUCTION
As a form of nonverbal communication, music-related activities, such as listening to music and playing musical instruments, involve the perception and expression of emotion. In music listening, various musical elements play a role in stimulating emotion-related responses that are physiological and behavioral as well as psychological [1]. Further, such musical elements interact with listener variables, such as prior exposure to music, current mood state, and personality [2].
1.1 Emotion in Music
The literature examining the perception of emotion through music has utilized various genres of music and reported the effectiveness of music in triggering diverse dimensions of emotional responses. In terms of listeners' perception of emotion in music, it has been shown that listeners can perceive at minimum three [3] and at most nine types of emotion [4]. Among these, happiness, sadness, anger, and fear have been identified as the most commonly perceived emotion types [5]-[9]. One study [10] indicated that listeners' perceived emotional responses to music clustered into three groups in the pleasantness-arousal dimensional circumplex space: (1) a "positive valence and high arousal" group, such as happiness; (2) a "positive valence and low arousal" group, such as sadness and peace; and (3) a "negative valence and high arousal" group, such as anger and fear.
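For illustration only (not part of the study), the three reported clusters can be restated as a small valence-arousal lookup table; the emotion labels and cluster assignments follow [10], while the code itself is a hypothetical sketch:

```python
# Hypothetical sketch: the three clusters reported in [10], locating each
# perceived emotion in the pleasantness-arousal circumplex space.
CIRCUMPLEX_CLUSTERS = {
    "happiness": ("positive", "high"),  # positive valence, high arousal
    "sadness":   ("positive", "low"),   # positive valence, low arousal
    "peace":     ("positive", "low"),
    "anger":     ("negative", "high"),  # negative valence, high arousal
    "fear":      ("negative", "high"),
}

valence, arousal = CIRCUMPLEX_CLUSTERS["fear"]
print(f"fear: {valence} valence, {arousal} arousal")
```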
Anger and fear are characteristically distinct in terms of their effect on individuals' behavioral responses; anger facilitates avoidance-related behaviors, while fear evokes approach-related behaviors, as observed in facial expressions [11]. However, both types of emotion are perceived as a stress response on self-report measures [12]. Also, both are modulated by a common neural mechanism (i.e., the amygdala), specifically in the auditory modality [13]. Since listeners commonly perceive musically induced anger and fear as similar in their emotional quality, these two emotions are often categorized together [3], [10], [14].
1.2 Music Characteristics
In some music, composers purposefully manipulate the various musical properties in order to evoke specific emotional or affective responses in listeners. According to Juslin [15] , expressive intentions of composers can be successfully delivered to listeners through structural elements of music and various compositional techniques. Appropriate use of musical elements contributes to inducing various emotions in listeners by arousing different levels of activation and emotional valence [7] , [16] - [20] .
A small number of studies have attempted to uncover linear relationships between musical elements and emotionality. For example, sadness is triggered by slow tempo, low sound level, and legato articulation [5], [15], [21]. In comparing tempo and mode, happy and sad emotions are more likely to be induced by changes of tempo than by changes of mode [22].
In terms of musical characteristics evoking negative emotion, Bruscia [23] examined the relationship between musical elements and induced emotionality, identifying that rhythmic components moderate physical energy or arousal level, whereas tonal components moderate emotional valence, the quality of the emotion. More specifically, anger is expressed using fast tempo, very loud sound level, abrupt onset, and nonlegato articulation [21], [24]. Also, atonal music was found to be strongly associated with perceived negative emotions, such as anger, fear, and madness [25], [26].
1.3 Listener Characteristics
Listener-related variables can influence the perception of emotion in music. The ability of music to induce both the physiological and the psychological changes underpinning emotional responses has been reported. A wide range of research investigating diverse responses to different types of music has been conducted and has yielded inconsistent findings due to confounding variables, such as individual differences, musical experience, and preference [27]-[30]. Many studies suggest that emotional recognition of music is closely related to a listener's individual characteristics, because music perception depends upon a listener's physiological state, music preference, personality style, and previous music experience [27], [28], [30]. Among the diverse individual variables, cultural background [5], [7], age [16], gender [31], personality [32], and temperament [33] have been reported as the major variables that affect individuals' emotional behaviors.
1.4 Emotion in Film Music
The literature examining the perception of emotion through music has employed musical stimuli drawn from commercial music [34], Western classical music [10], [35], popular music [36], ethnic music [37], [38], and music intentionally composed by researchers [39]-[41]. Among the many music genres, film music is believed to vividly induce the emotions it was composed to evoke within the context of particular movie scenes. Film music is considered relatively neutral (as compared to classical music, for example) in terms of listener preference and familiarity, since the music is intended for a wide audience [42]. However, previous research has rarely investigated what emotions individuals experience while listening to film music and how they experience them. Still unknown is whether listeners' perceived emotion is congruent with the emotion intended by the filmmaker or composer and, if so, which structural elements in the music contribute to this congruence.
2. PURPOSE OF THE STUDY
The current study examined the relationship between the intended emotion and listeners' perceived emotion following film music listening. In addition, this study highlights the structural elements of music as well as the listeners' individual differences that influence perceived emotion. The study first examined whether selected musical excerpts from films would induce the intended emotions in listeners. It then examined the characteristics of these musical elements in terms of their potential emotional relevance. Finally, the study examined whether listeners' basic demographics and music listening habits (i.e., gender, academic major, exposure time to music) affected their identification of emotion resulting from music listening.
3. METHOD
3.1 Participants
A total of 147 college students participated from universities located in central and remote areas of the Republic of Korea. The average age of the participants was 21.21 years (SD = 2.52). The descriptive results of the sample's demographic characteristics are presented in Table 1. The distribution of music listening hours (M = 2.37, SD = 2.35) and current or past involvement in music activities are presented in Table 2.
Participants first completed a demographic questionnaire. The 9-item questionnaire was researcher-developed and requested information concerning age, gender, academic major, and musical experience (i.e., hours of music listening, years and types of musical activity involvement). The purpose of the questionnaire was to gather demographic information to describe the participants’ characteristics and to investigate possible relationships between these variables and identification of musically induced emotion.
Table 1. Demographic characteristics (N = 147)

Table 2. Personal exposure to music (N = 147)
3.2 Music Excerpts
For this study, the musical stimuli were selected from various films released between 1963 and 1994 to avoid pre-exposure as much as possible. The purpose of using film music was to reflect practical aspects of music listening in a real-world context. Also, film music is considered to induce strong emotion that is congruent with a movie scene [40]. The music selection in the current study was based primarily on the structural elements of music, including tempo and tonality, which have been suggested to reflect the discrete type and dimension of emotion rather than the content and context of the movie [23].
In order to match the emotional salience of the non-musical context with that of the musical context, a circumplex model in which the two axes (i.e., valence, arousal) are extended to tonality and tempo was employed [43]. This model was combined with the general principle, suggested by Peretz and his colleagues, that the perception of happiness and sadness in music is categorized by mode (i.e., major, minor) and tempo (i.e., fast, slow) [22], [44]. According to Darrow's [3] study, negative feelings, such as fear and anger, can be delivered through the use of atonality, frequent tone clusters, and/or ambiguous meter. Frequent use of minor chords combined with nonharmonic chords also has been reported to induce "scary" emotion [25], [40]. Further, Vieillard et al. [40] validated and suggested specified ranges of the aforementioned structural elements in music for inducing certain types of emotion.
Table 3. Film music excerpts and emotional salience
Collectively, for positive valence with high arousal, music excerpts were presented in a relatively faster tempo (i.e., Allegretto) composed in a major mode [22] , [40] , [44] . Music excerpts for positive valence with low arousal were presented in a relatively slower tempo (i.e., Largo, Adagio, and Andante) composed in a minor mode [22] , [40] , [44] . Music excerpts for negative valence with high arousal were presented in a relatively faster tempo (i.e., Allegretto) with minor and atonal chords [3] , [25] , [40] .
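As a rough illustration, these selection criteria can be sketched as a rule-based mapping from structural elements to the intended emotion category. The tempo cutoffs below follow the ranges later reported in the Results (88 to 106 bpm for "happiness", 44 to 76 bpm for "sadness"); the function and its exact thresholds are illustrative, not the authors' actual selection procedure:

```python
# Hedged sketch of the stimulus selection rules; not the authors' procedure.
def intended_emotion(tempo_bpm: int, mode: str, tonal: bool) -> str:
    """Map an excerpt's structural elements to an intended emotion category."""
    if not tonal:
        # Atonal or tonally vague excerpts: negative valence, high arousal
        return "anger/fear"
    if mode == "major" and tempo_bpm >= 88:
        # Faster tempo in major mode: positive valence, high arousal
        return "happiness"
    if mode == "minor" and tempo_bpm <= 76:
        # Slower tempo in minor mode: positive valence, low arousal
        return "sadness"
    return "ambiguous"  # outside the study's selection criteria

# Example: an Allegretto excerpt (about 100 bpm) written in a major key
print(intended_emotion(100, "major", tonal=True))  # -> happiness
```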
Based on the aforementioned criteria for music selection, twelve film score excerpts were selected to examine emotion identification. The included excerpts were from the following films: Excalibur, Far and Away, Forrest Gump, Jurassic Park, Out of Africa, Farinelli, Platoon, The Sound of Music, and The Trial. A description of the film music excerpts is presented in Table 3.
To select appropriate adjectives describing emotional salience, an expert group consisting of musicians (N = 7) and non-musicians (N = 5) reviewed a list of adjectives and rated their appropriateness. Three types of emotion were selected based on a frequency analysis of the group's responses: happiness, sadness, and anger/fear.
3.3 Procedure and Measures
The research was announced in non-music-related classes offered at the two universities. Those who volunteered to participate were gathered in groups at their universities. Once participants agreed to participate in the study, the researcher arranged the dates, times, and sites for the experiment based upon the participants' availability. At the time of administration, participants filled out the demographic questionnaire and then listened to twelve film music excerpts presented in a random order. Participants were asked to identify the type of emotion that they perceived from the film music by circling the most compatible emotion (i.e., happiness, sadness, anger/fear) on the answer sheet. Each musical excerpt lasted fifteen seconds, and a five-second inter-excerpt interval was given to identify the perceived emotion. According to Bigand, Filipic, and Lalitte's [45] study, listening to music for as little as 15 seconds is sufficient for judging the music's emotion. Listening to the 12 music excerpts and completing the accompanying answer sheet took approximately 10 minutes.
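A minimal sketch of this presentation protocol, assuming a console mock-up in which playback is simulated with a timed pause (run_session and the excerpt names are hypothetical; no real audio API is used):

```python
# Twelve 15-second excerpts in random order, each followed by a 5-second
# response interval, as described in the procedure above.
import random
import time

EXCERPT_SECONDS = 15
RESPONSE_SECONDS = 5

def run_session(excerpts):
    order = random.sample(excerpts, k=len(excerpts))  # random order per session
    for name in order:
        print(f"Now playing: {name}")
        time.sleep(EXCERPT_SECONDS)   # stand-in for actual audio playback
        print("Circle the perceived emotion: happiness / sadness / anger-fear")
        time.sleep(RESPONSE_SECONDS)  # inter-excerpt response time

run_session([f"excerpt_{i:02d}" for i in range(1, 13)])
```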
3.4 Data Analysis
After the answer sheets were completed, the researcher collected them, and the responses were coded for statistical analysis. First, a frequency analysis was performed to examine whether the intended emotion in the film music was congruent with participants' identification responses. Second, chi-square analyses were used to discern whether any individual variables were significant in the successful identification of the music's emotion. The statistical tool used was the Statistical Package for the Social Sciences (SPSS), version 17.0.
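The two analyses can be approximated in Python as follows (the authors used SPSS 17.0; this sketch uses scipy, and all counts are made up for demonstration rather than taken from the study's data):

```python
# Illustrative re-creation of the frequency analysis and chi-square test.
from collections import Counter
from scipy.stats import chi2_contingency

# Frequency analysis: tally identification responses for one excerpt
responses = ["sadness"] * 120 + ["happiness"] * 20 + ["anger/fear"] * 7
print(Counter(responses))

# Chi-square test of independence: listening-hours group vs. identified emotion
#         happiness  sadness  anger/fear
table = [[10, 60,  3],   # < 2 hours of daily listening (hypothetical counts)
         [10, 50, 13]]   # >= 2 hours of daily listening (hypothetical counts)
chi2, p, df, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, df = {df}")
print("any expected count < 5:", (expected < 5).any())  # assumption check
```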
4. RESULTS
The purpose of this study was to examine the congruence between the intended emotional outcome and the actual self-reported emotion following film music listening. The study also examined the characteristics of the musical elements that influenced successful congruence and identification. Lastly, the study examined the possible influence of listeners' demographic variables and music experience on the identification of emotion. A total of 147 participants listened to music excerpts from twelve films and identified their perceived emotion as a result of music listening. The sampled music excerpts covered three categories of emotion: positive valence with high arousal (i.e., happiness), positive valence with low arousal (i.e., sadness), and negative valence with high arousal (i.e., anger/fear). The congruence between the intended and the identified emotion was analyzed based on the proportion of participants' identification responses. If the primary identification response (i.e., listeners' perceived emotion) to a music excerpt was consistent with the intended emotion type, the music excerpt was determined to have high congruence.
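A small sketch of this congruence rule, with hypothetical response data (the function name and counts are illustrative):

```python
# An excerpt counts as highly congruent when the primary (most frequent)
# identified emotion matches the intended one.
from collections import Counter

def is_highly_congruent(intended, identifications):
    primary, _ = Counter(identifications).most_common(1)[0]
    return primary == intended

print(is_highly_congruent("sadness", ["sadness"] * 120 + ["happiness"] * 27))  # True
```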
The frequency analysis of the obtained data demonstrated that the intended emotion types were congruent with the emotions identified by the participants (see Table 4). For the four excerpts intended to evoke happiness (i.e., music excerpts 1 through 4), the highest percentage of participants reported that they felt happiness while listening (57% to 98%). Musical excerpt 2, "Prelude," showed relatively low congruence, probably due to musical elements (i.e., an ascending melodic line with gradually increasing intensity and additional ornamentation) that could be perceived as representing "angry" emotion.
Table 4. Congruence between Emotional Salience and Identified Emotion (N = 147)
For the four excerpts intended to evoke sadness (i.e., music excerpts 5 through 8), the highest percentage of participants reported feeling sadness while listening (76% to 97%), showing relatively high congruence compared to the responses for the "happiness" excerpts. For the anger/fear music excerpts (i.e., music excerpts 9 through 12), the congruence was even more consistent than for the happiness and sadness excerpts; the majority of the participants identified their perceived emotion as anger/fear (93% to 99%). As Table 4 shows, the participants successfully matched their perceived emotion with that intended by each music excerpt.
Further analysis examined common characteristics of the musical elements in connection with the identified emotions. Table 5 shows the musical elements present in each of the twelve film music excerpts. The common musical element distinguishing the "happiness" and "sadness" excerpts was tonality: music excerpts identified as "happiness" or "sadness" were composed with a specific tonal center, such as G, D, or B, whereas music excerpts identified as "anger/fear" lacked such tonality. In addition, "happiness" music excerpts were written in major mode, while "sadness" music excerpts were written in minor mode. The tempo of the "happiness" excerpts ranged from 88 to 106 beats per minute, while the tempo of those intended to express "sadness" ranged from 44 to 76 beats per minute.
Table 5. Musical Characteristics of Film Music Excerpts

Table 6. Emotion Identification by Daily Music Listening (N = 147)
This research also examined the relationship between individual variables (i.e., gender, academic major, music listening hours, and music activities) and the congruence between self-reported and intended music-induced affect. Table 6 shows the distribution of participants' responses to music excerpt 6 and how they differed by hours of daily music listening. A chi-square test revealed that participants who listened to music less than two hours per day were more likely to report feeling sadness after listening to the music excerpt intended to induce sadness (i.e., music excerpt 6) than those who listened to music for more than two hours per day (χ²(2, N = 146), p = .038). The influence of the other variables (i.e., gender, academic major, and music activities) on emotion identification was not significant (p > .05).
5. DISCUSSION
The purpose of this study was to investigate the congruence between intended emotion and perceived emotion following film music listening. For the successfully identified responses, the musical elements were individually examined in relation to the induced emotion. This study found a high congruence between listeners' reported emotion and the intended emotion in film music. Further analysis revealed that congruent identification responses were attributable to structural musical elements (i.e., tonality, modality). The study further showed that, regardless of personal traits or backgrounds, participants' responses matched the intended emotions in the film excerpts. The findings lend support to music being a universal language of emotional expression.
The study results revealed that the participants successfully identified the emotion in the music as intended by the excerpts. This finding is compatible with a representative study of music and emotion [24], in which participants successfully identified performers' intended emotional expression. The level of congruence was highest for the "anger/fear" music excerpts. This result is similar to that of Terwogt and Van Grinsven [14], who found that participants more easily identified negative emotions through music listening.
The analysis of the characteristics of the musical elements revealed commonalities. The most salient musical element among the music excerpts was tonality. That is, the "happiness" and "sadness" music excerpts were written based on tonality, whereas the "anger/fear" excerpts were written with tonal vagueness or atonality. According to the psychobiological perspective on musically induced arousal [46], affective response to music depends on the amount of information presented and the degree of physical or cognitive arousal that the information activates. A moderate amount of information conveyed by art stimuli may lead to optimal arousal, a state evaluated as a pleasant experience (i.e., positive emotion). Atonal music is unfamiliar and delivers an excessive amount of information, especially for non-musically trained individuals, and is thus strongly associated with perceived negative emotions, such as anger, fear, and madness [25], [26]. In the current study, listening to atonal music therefore likely led to highly aroused states that in turn were perceived as an unpleasant or aversive experience.
Analysis of the musical elements for "happiness" and "sadness" showed that each group of music excerpts possessed its own unique characteristics, such as modality. That is, major mode was a shared element among the "happiness" excerpts, while minor mode was common among the "sadness" excerpts. This finding is compatible with the general consensus that "happiness" and "sadness" in music differ primarily by modality: major and minor modes trigger happy and sad emotions, respectively [44], [47]. However, the effect of tempo was unclear in the current study, possibly due to the influence of rhythmic subdivision. This result is inconsistent with Gagnon and Peretz's [22] study, which reported that tempo was more influential than mode in differentiating musically induced "sadness" from "happiness."
Some additional elements were found to maximize the level of emotional intensity that was primarily induced by modality. For example, musical excerpt 6 for sadness (i.e., Adagio for Strings, composed by Samuel Barber) is distinguished by the use of additional musical elements, such as a very high pitch range and sustained dissonance followed by consonance. Tension within the musical context, created by a perfect fourth interval presented in a very high pitch range and its delayed resolution, likely evoked very intense emotional states. Collectively, such salient musical elements contributed to increasing the intensity of sadness experienced by participants. Increased emotional intensity while listening to "Adagio for Strings" is also supported by the neurophysiological evidence examined by Blood and Zatorre [48]. This provides a possible explanation for the single significant association between this musical excerpt and hours of music listening shown by the chi-square test.
In terms of the influence of individual characteristics on emotion identification, the immediate identification response to music was consistent regardless of individual differences. The single significant finding involved hours of daily music listening, and this result may be due to participants' largely homogeneous responses (i.e., high congruence between intended and perceived emotion). The non-normal distribution resulting from this high congruence produced cells with expected counts of less than 5, which failed to meet the minimum criterion for the chi-square test and were therefore excluded from further interpretation. This consistency can also be explained by the selection procedure employed for sampling the music excerpts (i.e., most of the selected film excerpts were released before the participants were born). Although the selected music excerpts were well known at the time their movies were first released, the university students who participated in this study were unlikely to have been exposed to these films. The selection criteria followed those of Eerola and Vuoskoski [42], in which film excerpts were chosen from years before the participants were born in order to avoid the influence of episodic memories. These criteria minimized any referential connections the participants may have had to the music, allowing them to attend strictly to the emotions induced by music listening. However, the present study assumed some influence of schematic memories, as participants might have had some previous experience with the film music excerpts.
This study had limitations. First, since the pool of film music excerpts used for the study was limited, the effectiveness of film music in inducing and identifying emotion should be interpreted with caution; the current findings can be generalized only to film music with structural components similar to those examined here. Second, a replication study with a larger pool of music excerpts, either composed or selected using criteria identical to those of the current study, is necessary. In addition, replication with a larger group of individuals across a variety of age ranges may reconfirm the congruence between the intended and perceived emotion as well as the role of the musical elements that lead to such congruence. With a larger sample, the influence of individual variables on emotion identification should be re-examined.
Despite these limitations, the current study is meaningful in that it integrated the dimensional approach and the discrete approach in the context of listening to film music excerpts. The study was also an initial attempt to identify the types of emotions matched with the two axes of the circumplex model and showed that they correspond with each other in a musical context. Lastly, given that emotion identification through film music listening was supported by this study, future studies are needed to expand and corroborate these findings with respondents from Western music cultures.
In conclusion, the present study confirmed that the intended emotion in music is successfully perceived and identified by young adults. Consistent with the universal power of music as a tool that delivers emotional messages, individual variables and previous music experience were found not to influence listeners' identification of perceived emotion in music. Identifying emotion in music, and the specific musical elements that induce particular emotional states, contributes to our understanding of music as a medium that conveys real emotion and is capable of systematically facilitating desired emotional states. Such understanding has implications for music-related professionals ranging from composers to music therapists.
Acknowledgements
This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF 2012S1A5A2A03034273).
BIO
Hyun Ju Chong
She is a professor and chair of the Department of Music Therapy, Graduate School, Ewha Womans University. Her recent research interests have focused on developing music therapy assessment and training protocols for cognitive functioning skills. She received her Ph.D. in music therapy at the University of Kansas, U.S.A.
Eunju Jeong
She is a postdoctoral researcher at the Ewha Music Rehabilitation Center, Ewha Womans University. Her research interests include the development and validation of music-based assessment tools for cognitive processes. She received her Ph.D. in music education with a music therapy emphasis at the University of Miami, U.S.A.
Soo Ji Kim
She is an assistant professor and a program director of Music Therapy Education at the Graduate School of Education, Ewha Womans University. Her recent research interests include the use of music in various medical settings, specifically targeting the elderly and people with neurological disorders. She received her Ph.D. in music therapy at the University of Kansas, U.S.A.
References
Radocy R. E. , Boyle J. D. 1997 Psychological foundations of musical behavior Charles C. Thomas Publishers
Hodges D. , Sebald D. C. 2010 Music in the human experience: An introduction to music psychology Routledge
Darrow A. A. 2006 “The role of music in deaf culture: deaf students' perception of emotion in music,” Journal of music therapy 43 (1) 2 - 15    DOI : 10.1093/jmt/43.1.2
Zentner M. , Grandjean D. , Scherer K. R. 2008 “Emotions evoked by the sound of music: Characterization, classification, and measurement,” Emotion 8 (4) 494 - 521    DOI : 10.1037/1528-3542.8.4.494
Balkwill L. L. , Thompson W. F. 1999 “A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues,” Music perception 17 (1) 43 - 64    DOI : 10.2307/40285811
Eerola T. 2010 “Analysing Emotions in Schubert's Erlkönig: a Computational Approach,” Music Analysis 29 (1-3) 214 - 233    DOI : 10.1111/j.1468-2249.2011.00324.x
Fritz T. , Jentschke S. , Gosselin N. , Sammler D. , Peretz I. , Turner R. 2009 “Universal recognition of three basic emotions in music,” Current biology 19 (7) 573 - 576    DOI : 10.1016/j.cub.2009.02.058
Schellenberg E. G. , Krysciak A. M. , Campbell R. J. 2000 “Perceiving emotion in melody: Interactive effects of pitch and rhythm,” Music Perception 8 (2) 155 - 171    DOI : 10.2307/40285907
Thayer J. F. , Faith M. L. 2001 “A dynamic systems model of musically induced emotions,” Annals of the New York Academy of Sciences 930 452 - 456    DOI : 10.1111/j.1749-6632.2001.tb05768.x
Kreutz G. , Ott U. , Teichmann D. , Osawa P. , Vaitl D. 2008 “Using music to induce emotions: Influences of musical preference and absorption,” Psychology of Music 36 (1) 101 - 126    DOI : 10.1177/0305735607082623
Marsh A. A. , Ambady N. , Kleck R. E. 2005 “The effects of fear and anger facial expressions on approach-and avoidance-related behaviors,” Emotion 5 (1) 119 - 124    DOI : 10.1037/1528-3542.5.1.119
Seaward B. 2008 Managing stress: Principles and strategies for health and well-being Jones & Bartlett Publishers
Scott S. K. , Young A. W. , Calder A. J. , Hellawell D. J. , Aggleton J. P. , Johnson M. 1997 “Impaired auditory recognition of fear and anger following bilateral amygdala lesions,” Nature 385 (6613) 254 - 257    DOI : 10.1038/385254a0
Terwogt M. M. , Van Grinsven F. 1991 “Musical expression of mood states,” Psychology of Music 19 (2) 99 - 109    DOI : 10.1177/0305735691192001
Juslin P. N. 2000 “Cue utilization in communication of emotion in music performance: Relating performance to perception,” Journal of Experimental Psychology Human Perception and Performance 26 (6) 1797 - 1813    DOI : 10.1037/0096-1523.26.6.1797
Cunningham J. G. , Sterling R. S. 1988 “Developmental change in the understanding of affective meaning in music,” Motivation and Emotion 12 (4) 399 - 413    DOI : 10.1007/BF00992362
Keltner D. , Buswell B. N. 1997 “Embarrassment: Its distinct form and appeasement functions,” Psychological Bulletin 122 (3) 250 - 270    DOI : 10.1037/0033-2909.122.3.250
Nawrot E. S. 2003 “The perception of emotional expression in music: Evidence from infants, children and adults,” Psychology of Music 31 (1) 75 - 92    DOI : 10.1177/0305735603031001325
Mayer J. D. , Allen I. P. , Beauregard K. 1995 “Mood inductions for four specific moods: A procedure employing guided imagery,” Journal of Mental Imagery 19 (1-2) 133 - 150
Resnicow J. E. , Salovey P. , Repp B. H. 2004 “Is recognition of emotion in music performance an aspect of emotional intelligence?,” Music Perception 22 (1) 145 - 158    DOI : 10.1525/mp.2004.22.1.145
Dahl S. , Friberg A. 2004 “Expressiveness of musician’s body movements in performances on marimba,” In Gesture-based communication in human-computer interaction, ed Springer 479 - 486
Gagnon L. , Peretz I. 2003 “Mode and tempo relative contributions to “happy-sad” judgements in equitone melodies,” Cognition & Emotion 17 (1) 25 - 40    DOI : 10.1080/02699930302279
Bruscia K. E. 1987 Improvisational models of music therapy CC Thomas Springfield, IL
Gabrielsson A. , Juslin P. N. 1996 “Emotional expression in music performance: Between the performer's intention and the listener's experience,” Psychology of Music 24 (1) 68 - 91    DOI : 10.1177/0305735696241007
Daynes H. 2011 “Listeners’ perceptual and emotional responses to tonal and atonal music,” Psychology of Music 39 (4) 468 - 502    DOI : 10.1177/0305735610378182
Parncutt R. , Marin M. M. 2006 “Emotions and associations evoked by unfamiliar music,” Proc. International Association of Empirical Aesthetics 725 - 729
McNamara L. , Ballard M. E. 1999 “Resting arousal, sensation seeking, and music preference,” Genetic, Social, and General Psychology Monographs 125 (3) 229 - 250
Schwartz K. D. , Fouts G. T. 2003 “Music preferences, personality style, and developmental issues of adolescents,” Journal of Youth and Adolescence 32 (3) 205 - 213    DOI : 10.1023/A:1022547520656
Sloboda J. A. , Juslin P. N. 2001 “Psychological perspectives on music and emotion,” In Music and emotion: Theory and research Oxford University Press 71 - 104
Walworth D. 2003 “The effect of preferred music genre selection versus preferred song selection on experimentally induced anxiety levels,” Journal of Music Therapy 40 (1) 2 - 14    DOI : 10.1093/jmt/40.1.2
Baron-Cohen S. , Knickmeyer R. C. , Belmonte M. K. 2005 “Sex differences in the brain: implications for explaining autism,” Science 310 (5749) 819 - 823    DOI : 10.1126/science.1115455
Lewis M. 2001 “Issues in the study of personality development,” Psychological Inquiry 12 (2) 67 - 83    DOI : 10.1207/S15327965PLI1202_02
Rothbart M. K. 2007 “Temperament, development, and personality,” Current Directions in Psychological Science 16 (4) 207 - 212    DOI : 10.1111/j.1467-8721.2007.00505.x
De Vries B. 1991 “Assessment of the affective response to music with Clynes's sentograph,” Psychology of Music 19 (1) 46 - 64    DOI : 10.1177/0305735691191004
Schmidt L. A. , Trainor L. J. 2001 “Frontal brain electrical activity (EEG) distinguishes valence and intensity of musical emotions,” Cognition & Emotion 15 (4) 487 - 500    DOI : 10.1080/02699930126048
Altenmüller E. , Schürmann K. , Lim V. K. , Parlitz D. 2002 “Hits to the left, flops to the right: different emotions during listening to music are reflected in cortical lateralisation patterns,” Neuropsychologia 40 (13) 2242 - 2256    DOI : 10.1016/S0028-3932(02)00107-0
Gregory A. H. , Varney N. 1996 “Cross-cultural comparisons in the affective response to music,” Psychology of Music 24 (1) 47 - 52    DOI : 10.1177/0305735696241005
Gupta U. , Gupta B. 2005 “Psychophysiological responsivity to Indian instrumental music,” Psychology of Music 33 (4) 363 - 372    DOI : 10.1177/0305735605056144
Sloboda J. A. , Lehmann A. C. 2001 “Tracking performance correlates of changes in perceived intensity of emotion during different interpretations of a Chopin piano prelude,” Music Perception 19 (1) 87 - 120    DOI : 10.1525/mp.2001.19.1.87
Vieillard S. , Peretz I. , Gosselin N. , Khalfa S. , Gagnon L. , Bouchard B. 2008 “Happy, sad, scary and peaceful musical excerpts for research on emotions,” Cognition & Emotion 22 (4) 720 - 752    DOI : 10.1080/02699930701503567
Webster G. D. , Weir C. G. 2005 “Emotional responses to music: Interactive effects of mode, texture, and tempo,” Motivation and Emotion 29 (1) 19 - 39    DOI : 10.1007/s11031-005-4414-0
Eerola T. , Vuoskoski J. K. 2011 “A comparison of the discrete and dimensional models of emotion in music,” Psychology of Music 39 (1) 18 - 49    DOI : 10.1177/0305735610362821
Wallis I. , Ingalls T. , Campana E. , Goodman J. 2011 “A rule-based generative music system controlled by desired valence and arousal,” Proc. International Sound and Music Computing Conference
Peretz I. , Gagnon L. , Bouchard B. 1998 “Music and emotion: perceptual determinants, immediacy, and isolation after brain damage,” Cognition 68 (2) 111 - 141    DOI : 10.1016/S0010-0277(98)00043-2
Bigand E. , Filipic S. , Lalitte P. 2005 “The time course of emotional responses to music,” Annals of the New York Academy of Sciences 1060 429 - 437    DOI : 10.1196/annals.1360.036
Berlyne D. E. 1971 Aesthetics and psychobiology Appleton-Century-Crofts
Hevner K. 1935 “Expression in music: A discussion of experimental studies and theories,” Psychological Review 42 (2) 186 - 204    DOI : 10.1037/h0054832
Blood A. J. , Zatorre R. J. 2001 “Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion,” Proceedings of the National Academy of Sciences 98 (20) 11818 - 11823    DOI : 10.1073/pnas.191355898