Abstract
Affective algorithmic composition is a growing field that combines perceptually motivated affective computing strategies with novel music generation. This article presents work toward the development of one such application. The long-term goal is a responsive and adaptive system for inducing affect that is both controlled and validated by biophysical measures. The literature on perceptual responses to music identifies a variety of musical features and possible affective correlates, but perceptual evaluations of these features for the purpose of inclusion in a music generation system are not readily available. One discrete feature, rhythmic density (a function of note duration in each musical bar, regardless of tempo), was selected because existing literature shows it to be well correlated with affective responses. A prototype system was designed to produce controlled degrees of variation in rhythmic density via a transformative algorithm, and a two-stage perceptual evaluation of a stimulus set created by this prototype was undertaken. First, listener responses from a pairwise scaling experiment were analyzed via multidimensional scaling (MDS). The statistical best-fit solution was rotated so that the stimuli with the largest range of variation lay along the horizontal plane of a two-dimensional space. In this orientation, stimuli with deliberate variation in rhythmic density appeared farther from the source material used to generate them than from stimuli generated by random permutation. Second, the same stimulus set was evaluated, in the order suggested by the rotated two-dimensional solution, in a verbal elicitation experiment. A Verbal Protocol Analysis (VPA) found that listener perception of the stimulus set varied along at least two commonly understood emotional descriptors, which might be considered affective correlates of rhythmic density.
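As a concrete illustration of the feature under study, rhythmic density can be computed from the note durations in each bar. The following Python sketch is hypothetical (the article does not publish its implementation); it assumes durations expressed as fractions of a whole note, which makes the measure independent of tempo, as the abstract describes:

```python
# Hypothetical sketch of rhythmic density, NOT the authors' implementation.
# A bar is a list of note durations expressed as fractions of a whole note
# (0.25 = quarter note), so the measure does not depend on tempo.

def rhythmic_density(bar_durations):
    """Number of note events in one bar: shorter durations mean
    more events per bar, hence higher density."""
    return len(bar_durations)

def mean_density(bars):
    """Average rhythmic density across a list of bars."""
    return sum(rhythmic_density(b) for b in bars) / len(bars)

# A bar of four quarter notes is denser than a bar of two half notes,
# regardless of the tempo at which either is played.
sparse = [[0.5, 0.5]]                # two half notes
dense = [[0.25, 0.25, 0.25, 0.25]]   # four quarter notes
assert mean_density(dense) > mean_density(sparse)
```

A transformative algorithm of the kind described would then adjust this quantity in controlled steps, for example by splitting or merging note durations within each bar.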
These results corroborate previous studies in which musical parameters were monitored for changes in emotional expression, suggest that a similarly parameterized control of perceived emotional content can be achieved in an affective algorithmic composition system, and offer a methodology for evaluating further candidate musical features for inclusion in such a system. Some suggestions regarding the test procedure and analysis techniques are also documented here.
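The MDS stage described above can be sketched as follows. This is an illustrative example only: it assumes scikit-learn's nonmetric MDS (in the spirit of Kruskal's goodness-of-fit formulation) applied to a precomputed listener dissimilarity matrix, and the dissimilarity values here are invented, not taken from the study:

```python
# Illustrative sketch of the analysis stage: nonmetric MDS on a
# pairwise dissimilarity matrix derived from listener judgments.
# The matrix values below are invented for demonstration.
import numpy as np
from sklearn.manifold import MDS

# Symmetric dissimilarities among 4 stimuli (0 = perceptually identical).
D = np.array([
    [0.0, 0.2, 0.7, 0.9],
    [0.2, 0.0, 0.6, 0.8],
    [0.7, 0.6, 0.0, 0.3],
    [0.9, 0.8, 0.3, 0.0],
])

mds = MDS(n_components=2, metric=False,
          dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)  # one 2-D point per stimulus
print(coords.shape)            # (4, 2)
```

Because an MDS configuration is unique only up to rotation and reflection, the fitted coordinates can then be rotated so that the stimuli spanning the largest range of variation lie along the horizontal axis, which is the orientation the study uses to order stimuli for the verbal elicitation experiment.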
Index Terms
- Investigating Perceived Emotional Correlates of Rhythmic Density in Algorithmic Music Composition